Sample records for central computer system

  1. 21 CFR 1305.24 - Central processing of orders.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... or more registered locations and maintains a central processing computer system in which orders are... order with all linked records on the central computer system. (b) A company that has central processing... the company owns and operates. ...

  2. 21 CFR 1305.24 - Central processing of orders.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... or more registered locations and maintains a central processing computer system in which orders are... order with all linked records on the central computer system. (b) A company that has central processing... the company owns and operates. ...

  3. 21 CFR 1305.24 - Central processing of orders.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... or more registered locations and maintains a central processing computer system in which orders are... order with all linked records on the central computer system. (b) A company that has central processing... the company owns and operates. ...

  4. 21 CFR 1305.24 - Central processing of orders.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... or more registered locations and maintains a central processing computer system in which orders are... order with all linked records on the central computer system. (b) A company that has central processing... the company owns and operates. ...

  5. 21 CFR 1305.24 - Central processing of orders.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... or more registered locations and maintains a central processing computer system in which orders are... order with all linked records on the central computer system. (b) A company that has central processing... the company owns and operates. ...

  6. Computer graphics and the graphic artist

    NASA Technical Reports Server (NTRS)

    Taylor, N. L.; Fedors, E. G.; Pinelli, T. E.

    1985-01-01

    A centralized computer graphics system is being developed at the NASA Langley Research Center. This system was required to satisfy multiuser needs, ranging from presentation quality graphics prepared by a graphic artist to 16-mm movie simulations generated by engineers and scientists. While the major thrust of the central graphics system was directed toward engineering and scientific applications, hardware and software capabilities to support the graphic artists were integrated into the design. This paper briefly discusses the importance of computer graphics in research; the central graphics system in terms of systems, software, and hardware requirements; the application of computer graphics to graphic arts, discussed in terms of the requirements for a graphic arts workstation; and the problems encountered in applying computer graphics to the graphic arts. The paper concludes by presenting the status of the central graphics system.

  7. Hand-held computer operating system program for collection of resident experience data.

    PubMed

    Malan, T K; Haffner, W H; Armstrong, A Y; Satin, A J

    2000-11-01

    To describe a system for recording resident experience involving hand-held computers with the Palm Operating System (3 Com, Inc., Santa Clara, CA). Hand-held personal computers (PCs) are popular, easy to use, inexpensive, portable, and can share data with other operating systems. Residents in our program carry individual hand-held database computers to record Residency Review Committee (RRC) reportable patient encounters. Each resident's data is transferred to a single central relational database compatible with Microsoft Access (Microsoft Corporation, Redmond, WA). Patient data entry and subsequent transfer to a central database is accomplished with commercially available software that requires minimal computer expertise to implement and maintain. The central database can then be used for statistical analysis or to create required RRC resident experience reports. As a result, the data collection and transfer process takes less time for residents and program director alike than paper-based or central computer-based systems. The system of collecting resident encounter data using hand-held computers with the Palm Operating System is easy to use, relatively inexpensive, accurate, and secure. The user-friendly system provides prompt, complete, and accurate data, enhancing the education of residents while facilitating the job of the program director.

  8. Scale Space for Camera Invariant Features.

    PubMed

    Puig, Luis; Guerrero, José J; Daniilidis, Kostas

    2014-09-01

    In this paper we propose a new approach to compute the scale space of any central projection system, such as catadioptric, fisheye or conventional cameras. Since these systems can be explained using a unified model, the single parameter that defines each type of system is used to automatically compute the corresponding Riemannian metric. This metric, combined with the partial differential equations framework on manifolds, allows us to compute the Laplace-Beltrami (LB) operator, enabling the computation of the scale space of any central projection system. Scale space is essential for the intrinsic scale selection and neighborhood description in features like SIFT. We perform experiments with synthetic and real images to validate the generalization of our approach to any central projection system. We compare our approach with the best existing methods, showing competitive results in all types of cameras: catadioptric, fisheye, and perspective.
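    As context for entry 8: for a conventional perspective camera, the scale space described there reduces to the familiar Euclidean Gaussian scale space. Below is a minimal NumPy sketch of that baseline special case only, not the paper's Riemannian/Laplace-Beltrami formulation; the function and parameter names are illustrative:

```python
import numpy as np

def gaussian_kernel(sigma):
    # Truncated, normalized 1-D Gaussian kernel (radius ~ 3 sigma).
    r = int(3 * sigma) + 1
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def scale_space(img, sigmas):
    """Euclidean Gaussian scale space: blur the image with Gaussians
    of increasing sigma, using separable 1-D convolutions."""
    img = np.asarray(img, dtype=float)
    levels = []
    for s in sigmas:
        k = gaussian_kernel(s)
        # Convolve columns, then rows (the 2-D Gaussian is separable).
        blurred = np.apply_along_axis(
            lambda m: np.convolve(m, k, mode='same'), 0, img)
        blurred = np.apply_along_axis(
            lambda m: np.convolve(m, k, mode='same'), 1, blurred)
        levels.append(blurred)
    return levels
```

    Feature detectors such as SIFT then search this stack of levels for extrema across both space and scale.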

  9. Organising a University Computer System: Analytical Notes.

    ERIC Educational Resources Information Center

    Jacquot, J. P.; Finance, J. P.

    1990-01-01

    Thirteen trends in university computer system development are identified, system user requirements are analyzed, critical system qualities are outlined, and three options for organizing a computer system are presented. The three systems include a centralized network, local network, and federation of local networks. (MSE)

  10. Data processing for water monitoring system

    NASA Technical Reports Server (NTRS)

    Monford, L.; Linton, A. T.

    1978-01-01

    Water monitoring data acquisition system is structured about central computer that controls sampling and sensor operation, and analyzes and displays data in real time. Unit is essentially separated into two systems: computer system, and hard wire backup system which may function separately or with computer.

  11. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    NASA Astrophysics Data System (ADS)

    Anisenkov, Alexey; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-10-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store the different parameters and configuration data needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services. As an intermediate middleware system between clients and external information sources (such as the central BDII, GOCDB and MyOSG), AGIS defines the relations between experiment-specific resources and physical distributed computing capabilities. Having been in production since LHC Run 1, AGIS became the central information system for Distributed Computing in ATLAS and is continuously evolving to fulfil new user requests, enable enhanced operations and follow the extension of the ATLAS Computing model. The ATLAS Computing model and the data structures used by Distributed Computing applications and services continue to evolve to fit new requirements from the ADC community. In this note, we describe the evolution and recent developments of AGIS functionalities related to the integration of new technologies that have recently become widely used in ATLAS Computing: flexible utilization of opportunistic Cloud and HPC resources, ObjectStore service integration for the Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, the unified storage-protocol declaration required for PanDA Pilot site movers, and others. Improvements to the information model and general updates are also shown; in particular, we explain how collaborations outside ATLAS could benefit from the system as a computing-resources information catalogue. AGIS is evolving towards a common information system, not coupled to a specific experiment.

  12. The DFVLR main department for central data processing, 1976 - 1983

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Data processing, equipment and systems operation, operative and user systems, user services, computer networks and communications, text processing, computer graphics, and high power computers are discussed.

  13. Two-way cable television project

    NASA Astrophysics Data System (ADS)

    Wilkens, H.; Guenther, P.; Kiel, F.; Kraus, F.; Mahnkopf, P.; Schnee, R.

    1982-02-01

    The market demand for a multiuser computer system with interactive services was studied. Mean system work load at peak use hours was estimated and the complexity of dialog with a central computer was determined. Man machine communication by broadband cable television transmission, using digital techniques, was assumed. The end to end system is described. It is user friendly, able to handle 10,000 subscribers, and provides color television display. The central computer system architecture with remote audiovisual terminals is depicted and software is explained. Signal transmission requirements are dealt with. International availability of the test system, including sample programs, is indicated.

  14. TOWARD A COMPUTER BASED INSTRUCTIONAL SYSTEM.

    ERIC Educational Resources Information Center

    GARIGLIO, LAWRENCE M.; RODGERS, WILLIAM A.

    THE INFORMATION FOR THIS REPORT WAS OBTAINED FROM VARIOUS COMPUTER ASSISTED INSTRUCTION INSTALLATIONS. COMPUTER BASED INSTRUCTION REFERS TO A SYSTEM AIMED AT INDIVIDUALIZED INSTRUCTION, WITH THE COMPUTER AS CENTRAL CONTROL. SUCH A SYSTEM HAS 3 MAJOR SUBSYSTEMS--INSTRUCTIONAL, RESEARCH, AND MANAGERIAL. THIS REPORT EMPHASIZES THE INSTRUCTIONAL…

  15. The PLATO IV Architecture.

    ERIC Educational Resources Information Center

    Stifle, Jack

    The PLATO IV computer-based instructional system consists of a large scale centrally located CDC 6400 computer and a large number of remote student terminals. This is a brief and general description of the proposed input/output hardware necessary to interface the student terminals with the computer's central processing unit (CPU) using available…

  16. Central Computational Facility CCF communications subsystem options

    NASA Technical Reports Server (NTRS)

    Hennigan, K. B.

    1979-01-01

    A MITRE study which investigated the communication options available to support both the remaining Central Computational Facility (CCF) computer systems and the proposed U1108 replacements is presented. The facilities utilized to link the remote user terminals with the CCF were analyzed and guidelines to provide more efficient communications were established.

  17. Design of a modular digital computer system, CDRL no. D001, final design plan

    NASA Technical Reports Server (NTRS)

    Easton, R. A.

    1975-01-01

    The engineering breadboard implementation for the CDRL no. D001 modular digital computer system developed during design of the logic system was documented. This effort followed the architecture study completed and documented previously, and was intended to verify the concepts of a fault tolerant, automatically reconfigurable, modular version of the computer system conceived during the architecture study. The system has a microprogrammed 32 bit word length, general register architecture and an instruction set consisting of a subset of the IBM System 360 instruction set plus additional fault tolerance firmware. The following areas were covered: breadboard packaging, central control element, central processing element, memory, input/output processor, and maintenance/status panel and electronics.

  18. A Faster Parallel Algorithm and Efficient Multithreaded Implementations for Evaluating Betweenness Centrality on Massive Datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madduri, Kamesh; Ediger, David; Jiang, Karl

    2009-05-29

    We present a new lock-free parallel algorithm for computing betweenness centrality of massive small-world networks. With minor changes to the data structures, our algorithm also achieves better spatial cache locality compared to previous approaches. Betweenness centrality is a key algorithm kernel in the HPCS SSCA#2 Graph Analysis benchmark, which has been extensively used to evaluate the performance of emerging high-performance computing architectures for graph-theoretic computations. We design optimized implementations of betweenness centrality and the SSCA#2 benchmark for two hardware multithreaded systems: a Cray XMT system with the ThreadStorm processor, and a single-socket Sun multicore server with the UltraSparc T2 processor. For a small-world network of 134 million vertices and 1.073 billion edges, the 16-processor XMT system and the 8-core Sun Fire T5120 server achieve TEPS scores (an algorithmic performance count for the SSCA#2 benchmark) of 160 million and 90 million respectively, which corresponds to more than a 2X performance improvement over the previous parallel implementations. To better characterize the performance of these multithreaded systems, we correlate the SSCA#2 performance results with data from the memory-intensive STREAM and RandomAccess benchmarks. Finally, we demonstrate the applicability of our implementation to analyze massive real-world datasets by computing approximate betweenness centrality for a large-scale IMDb movie-actor network.
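    For readers unfamiliar with the kernel in entry 18: betweenness centrality for unweighted graphs is typically computed with Brandes' algorithm, one BFS per source followed by a reverse-order dependency accumulation. A minimal sequential Python sketch follows (the paper's contribution is a lock-free parallel variant, which this does not attempt; function and variable names are illustrative):

```python
from collections import deque, defaultdict

def betweenness_centrality(adj):
    """Brandes' algorithm for unweighted betweenness centrality.
    adj: dict mapping each vertex to an iterable of its neighbors."""
    bc = dict.fromkeys(adj, 0.0)
    for s in adj:
        # Phase 1: BFS from s, counting shortest paths (sigma)
        # and recording each vertex's predecessors on those paths.
        sigma = dict.fromkeys(adj, 0)
        dist = dict.fromkeys(adj, -1)
        preds = defaultdict(list)
        sigma[s], dist[s] = 1, 0
        order, q = [], deque([s])
        while q:
            v = q.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Phase 2: accumulate dependencies in reverse BFS order.
        delta = dict.fromkeys(adj, 0.0)
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += (sigma[v] / sigma[w]) * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc
```

    On a symmetric (undirected) adjacency dict this counts each ordered source-target pair, so scores come out doubled relative to the undirected convention; the parallel challenge addressed by the paper is making the sigma and delta updates safe under concurrency.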

  19. Functional Analysis and Preliminary Specifications for a Single Integrated Central Computer System for Secondary Schools and Junior Colleges. Interim Report.

    ERIC Educational Resources Information Center

    1968

    The present report proposes a central computing facility and presents the preliminary specifications for such a system. It is based, in part, on the results of earlier studies by two previous contractors on behalf of the U.S. Office of Education. The recommendations are based upon the present contractor's considered evaluation of the earlier…

  20. Understanding Emergency Care Delivery Through Computer Simulation Modeling.

    PubMed

    Laker, Lauren F; Torabi, Elham; France, Daniel J; Froehle, Craig M; Goldlust, Eric J; Hoot, Nathan R; Kasaie, Parastu; Lyons, Michael S; Barg-Walkow, Laura H; Ward, Michael J; Wears, Robert L

    2018-02-01

    In 2017, Academic Emergency Medicine convened a consensus conference entitled, "Catalyzing System Change through Health Care Simulation: Systems, Competency, and Outcomes." This article, a product of the breakout session on "understanding complex interactions through systems modeling," explores the role that computer simulation modeling can and should play in research and development of emergency care delivery systems. This article discusses areas central to the use of computer simulation modeling in emergency care research. The four central approaches to computer simulation modeling are described (Monte Carlo simulation, system dynamics modeling, discrete-event simulation, and agent-based simulation), along with problems amenable to their use and relevant examples to emergency care. Also discussed is an introduction to available software modeling platforms and how to explore their use for research, along with a research agenda for computer simulation modeling. Through this article, our goal is to enhance adoption of computer simulation, a set of methods that hold great promise in addressing emergency care organization and design challenges. © 2017 by the Society for Academic Emergency Medicine.
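    Of the four approaches named in entry 20, discrete-event simulation is the one most often applied to ED patient flow. A toy single-server queue in Python illustrates the idea (a stand-in for one ED bed, not any model from the article; the rates and names are illustrative):

```python
import random

def single_server_waits(arrival_rate, service_rate, n_patients, seed=0):
    """Minimal discrete-event simulation of a FIFO single-server queue:
    exponential interarrival and service times (an M/M/1 queue).
    Returns each simulated patient's waiting time before service."""
    rng = random.Random(seed)
    t = 0.0              # current simulation clock (arrival times)
    server_free_at = 0.0 # time at which the server next becomes idle
    waits = []
    for _ in range(n_patients):
        t += rng.expovariate(arrival_rate)   # next patient arrives
        start = max(t, server_free_at)       # wait if the server is busy
        waits.append(start - t)
        server_free_at = start + rng.expovariate(service_rate)
    return waits
```

    Even this toy model reproduces the core ED phenomenon the article targets: as the arrival rate approaches the service rate, waiting times grow nonlinearly. Production studies would use a full simulation framework with multiple resources, priorities, and empirically fitted distributions.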

  1. A spacecraft computer repairable via command.

    NASA Technical Reports Server (NTRS)

    Fimmel, R. O.; Baker, T. E.

    1971-01-01

    The MULTIPAC is a central data system developed for deep-space probes with the distinctive feature that it may be repaired during flight via command and telemetry links by reprogramming around the failed unit. The computer organization uses pools of identical modules which the program organizes into one or more computers called processors. The interaction of these modules is dynamically controlled by the program rather than hardware. In the event of a failure, new programs are entered which reorganize the central data system with a somewhat reduced total processing capability aboard the spacecraft. Emphasis is placed on the evolution of the system architecture and the final overall system design rather than the specific logic design.

  2. Distributed Computing with Centralized Support Works at Brigham Young.

    ERIC Educational Resources Information Center

    McDonald, Kelly; Stone, Brad

    1992-01-01

    Brigham Young University (Utah) has addressed the need for maintenance and support of distributed computing systems on campus by implementing a program patterned after a national business franchise, providing the support and training of a centralized administration but allowing each unit to operate much as an independent small business.…

  3. 28 CFR 25.8 - System safeguards.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... justice agency computer site must have adequate physical security to protect against any unauthorized... Index is stored electronically for use in an FBI computer environment. The NICS central computer will... authorized personnel who have identified themselves and their need for access to a system security officer...

  4. 28 CFR 25.8 - System safeguards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... justice agency computer site must have adequate physical security to protect against any unauthorized... Index is stored electronically for use in an FBI computer environment. The NICS central computer will... authorized personnel who have identified themselves and their need for access to a system security officer...

  5. 28 CFR 25.8 - System safeguards.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... justice agency computer site must have adequate physical security to protect against any unauthorized... Index is stored electronically for use in an FBI computer environment. The NICS central computer will... authorized personnel who have identified themselves and their need for access to a system security officer...

  6. 28 CFR 25.8 - System safeguards.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... justice agency computer site must have adequate physical security to protect against any unauthorized... Index is stored electronically for use in an FBI computer environment. The NICS central computer will... authorized personnel who have identified themselves and their need for access to a system security officer...

  7. 28 CFR 25.8 - System safeguards.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... justice agency computer site must have adequate physical security to protect against any unauthorized... Index is stored electronically for use in an FBI computer environment. The NICS central computer will... authorized personnel who have identified themselves and their need for access to a system security officer...

  8. French Plans for Fifth Generation Computer Systems.

    DTIC Science & Technology

    1984-12-07

    ... centrally managed project in France that covers all facets of the French industry in electronics, computers, software, and services and to make the... of Japan's Fifth Generation Project, the Centre National de Recherche Scientifique (CNRS) Cooperative Research... French scientific and industrial... systems, man-computer interaction, novel computer structures, knowledge-based computer systems... The National Projects... The French Ministry of Research and...

  9. The role of the host in a cooperating mainframe and workstation environment, volumes 1 and 2

    NASA Technical Reports Server (NTRS)

    Kusmanoff, Antone; Martin, Nancy L.

    1989-01-01

    In recent years, advancements made in computer systems have prompted a move from centralized computing based on timesharing a large mainframe computer to distributed computing based on a connected set of engineering workstations. A major factor in this advancement is the increased performance and lower cost of engineering workstations. The shift to distributed computing from centralized computing has led to challenges associated with the residency of application programs within the system. In a combined system of multiple engineering workstations attached to a mainframe host, the question arises as to how a system designer assigns applications between the larger mainframe host and the smaller, yet powerful, workstations. The concepts related to real time data processing are analyzed and systems are displayed which use a host mainframe and a number of engineering workstations interconnected by a local area network. In most cases, distributed systems can be classified as having a single function or multiple functions and as executing programs in real time or nonreal time. In a system of multiple computers, the degree of autonomy of the computers is important; a system with one master control computer generally differs in reliability, performance, and complexity from a system in which all computers share the control. This research is concerned with generating general criteria for software residency decisions (host or workstation) for a diverse yet coupled group of users (the clustered workstations) which may need the use of a shared resource (the mainframe) to perform their functions.

  10. Design of a modular digital computer system

    NASA Technical Reports Server (NTRS)

    1980-01-01

    A Central Control Element (CCE) module which controls the Automatically Reconfigurable Modular System (ARMS) and allows both redundant processing and multi-computing in the same computer with real time mode switching, is discussed. The same hardware is used for either reliability enhancement, speed enhancement, or for a combination of both.

  11. Great Expectations: Distributed Financial Computing at Cornell.

    ERIC Educational Resources Information Center

    Schulden, Louise; Sidle, Clint

    1988-01-01

    The Cornell University Distributed Accounting (CUDA) system is an attempt to provide departments a software tool for better managing their finances, creating microcomputer standards, creating a vehicle for better administrative microcomputer support, and ensuring local systems are consistent with central computer systems. (Author/MLW)

  12. Managing drought risk with a computer model of the Raritan River Basin water-supply system in central New Jersey

    USGS Publications Warehouse

    Dunne, Paul; Tasker, Gary

    1996-01-01

    The reservoirs and pumping stations that comprise the Raritan River Basin water-supply system and its interconnections to the Delaware-Raritan Canal water-supply system, operated by the New Jersey Water Supply Authority (NJWSA), provide potable water to central New Jersey communities. The water reserve of this combined system can easily be depleted by an extended period of below-normal precipitation. Efficient operation of the combined system is vital to meeting the water-supply needs of central New Jersey. In an effort to improve the efficiency of the system operation, the U.S. Geological Survey (USGS), in cooperation with the NJWSA, has developed a computer model that provides a technical basis for evaluating the effects of alternative patterns of operation of the Raritan River Basin water-supply system. This fact sheet describes the model, its technical basis, and its operation.

  13. [Personal computer-based computer monitoring system of the anesthesiologist (2-year experience in development and use)].

    PubMed

    Buniatian, A A; Sablin, I N; Flerov, E V; Mierbekov, E M; Broĭtman, O G; Shevchenko, V V; Shitikov, I I

    1995-01-01

    Creation of computer monitoring systems (CMS) for operating rooms is one of the most important spheres of personal computer employment in anesthesiology. The authors developed a PC RS/AT-based CMS and effectively used it for more than 2 years. This system permits comprehensive monitoring in cardiosurgical operations by real-time processing of the values of arterial and central venous pressure, pressure in the pulmonary artery, bioelectrical activity of the brain, and two temperature values. Use of this CMS helped appreciably improve patients' safety during surgery. The possibility of assessing brain function by computer monitoring of the EEG simultaneously with central hemodynamics and body temperature permits the anesthesiologist to objectively assess the depth of anesthesia and to diagnose cerebral hypoxia. The automated anesthesiological chart issued by the CMS after surgery reliably reflects the patient's status and the measures taken by the anesthesiologist.

  14. Automatic Mexican sign language and digits recognition using normalized central moments

    NASA Astrophysics Data System (ADS)

    Solís, Francisco; Martínez, David; Espinosa, Oscar; Toxqui, Carina

    2016-09-01

    This work presents a framework for automatic Mexican sign language and digits recognition based on a computer vision system using normalized central moments and artificial neural networks. Images are captured by a digital IP camera, with four LED reflectors and a green background, in order to reduce computational costs and avoid the use of special gloves. 42 normalized central moments are computed per frame and used in a Multi-Layer Perceptron to recognize each database. Four versions per sign and digit were used in the training phase. Recognition rates of 93% and 95% were achieved for Mexican sign language and digits, respectively.
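    The normalized central moments used in entry 14 are the translation- and scale-invariant quantities eta_pq = mu_pq / mu_00^(1+(p+q)/2), where mu_pq are central moments of the image. A minimal NumPy sketch (the paper's 42-moment feature set and camera pipeline are not reproduced; names are illustrative):

```python
import numpy as np

def normalized_central_moments(img, max_order=3):
    """Normalized central moments eta_pq (2 <= p+q <= max_order) of a
    2-D grayscale image; invariant to translation and image scale."""
    img = np.asarray(img, dtype=float)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()                      # total "mass" of the image
    xbar = (x * img).sum() / m00         # centroid x
    ybar = (y * img).sum() / m00         # centroid y
    eta = {}
    for p in range(max_order + 1):
        for q in range(max_order + 1):
            if 2 <= p + q <= max_order:
                # Central moment mu_pq, then scale normalization.
                mu = ((x - xbar) ** p * (y - ybar) ** q * img).sum()
                eta[(p, q)] = mu / m00 ** (1 + (p + q) / 2.0)
    return eta
```

    Because the moments are taken about the centroid and divided by a power of mu_00, the same shape yields (nearly) the same feature vector wherever it sits in the frame, which is what makes them usable as neural-network inputs here.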

  15. Multiple-User, Multitasking, Virtual-Memory Computer System

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.; Roth, Don J.; Stang, David B.

    1993-01-01

    Computer system designed and programmed to serve multiple users in research laboratory. Provides for computer control and monitoring of laboratory instruments, acquisition and analysis of data from those instruments, and interaction with users via remote terminals. System provides fast access to shared central processing units and associated large (from megabytes to gigabytes) memories. Underlying concept of system also applicable to monitoring and control of industrial processes.

  16. A Faster Parallel Algorithm and Efficient Multithreaded Implementations for Evaluating Betweenness Centrality on Massive Datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madduri, Kamesh; Ediger, David; Jiang, Karl

    2009-02-15

    We present a new lock-free parallel algorithm for computing betweenness centrality of massive small-world networks. With minor changes to the data structures, our algorithm also achieves better spatial cache locality compared to previous approaches. Betweenness centrality is a key algorithm kernel in HPCS SSCA#2, a benchmark extensively used to evaluate the performance of emerging high-performance computing architectures for graph-theoretic computations. We design optimized implementations of betweenness centrality and the SSCA#2 benchmark for two hardware multithreaded systems: a Cray XMT system with the Threadstorm processor, and a single-socket Sun multicore server with the UltraSPARC T2 processor. For a small-world network of 134 million vertices and 1.073 billion edges, the 16-processor XMT system and the 8-core Sun Fire T5120 server achieve TEPS scores (an algorithmic performance count for the SSCA#2 benchmark) of 160 million and 90 million respectively, which corresponds to more than a 2X performance improvement over the previous parallel implementations. To better characterize the performance of these multithreaded systems, we correlate the SSCA#2 performance results with data from the memory-intensive STREAM and RandomAccess benchmarks. Finally, we demonstrate the applicability of our implementation to analyze massive real-world datasets by computing approximate betweenness centrality for a large-scale IMDb movie-actor network.

  17. An Integrated Model of the Cardiovascular and Central Nervous Systems for Analysis of Microgravity Induced Fluid Redistribution

    NASA Technical Reports Server (NTRS)

    Price, R.; Gady, S.; Heinemann, K.; Nelson, E. S.; Mulugeta, L.; Ethier, C. R.; Samuels, B. C.; Feola, A.; Vera, J.; Myers, J. G.

    2015-01-01

    A recognized side effect of prolonged microgravity exposure is visual impairment and intracranial pressure (VIIP) syndrome. The medical understanding of this phenomenon is at present preliminary, although it is hypothesized that the headward shift of bodily fluids in microgravity may be a contributor. Computational models can be used to provide insight into the origins of VIIP. In order to further investigate this phenomenon, NASA's Digital Astronaut Project (DAP) is developing an integrated computational model of the human body which is divided into the eye, the cerebrovascular system, and the cardiovascular system. This presentation will focus on the development and testing of an integrated model of the cardiovascular system (CVS) and central nervous system (CNS) that simulates the behavior of pressures, volumes, and flows within these two physiological systems.

  18. Integrating Micro-computers with a Centralized DBMS: ORACLE, SEED AND INGRES

    NASA Technical Reports Server (NTRS)

    Hoerger, J.

    1984-01-01

    Users of ADABAS, a relational-like data base management system with its data base programming language (NATURAL), are acquiring microcomputers with hopes of solving their individual word processing, office automation, decision support, and simple data processing problems. As processor speeds, memory sizes, and disk storage capacities increase, individual departments begin to maintain "their own" data base on "their own" micro-computer. This situation can adversely affect several of the primary goals set for implementing a centralized DBMS. In order to avoid this potential problem, these micro-computers must be integrated with the centralized DBMS. An easy to use and flexible means for transferring logical data base files between the central data base machine and micro-computers must be provided. Some of the problems encountered in an effort to accomplish this integration, and possible solutions, are discussed.

  19. 36 CFR 200.1 - Central organization.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ..., engineering, lands, aviation, and computer systems. The National Forest System includes: 155 Proclaimed or... other environmental concerns, forest insects and disease, forest fire and atmospheric science. Plans and...-wide management of systems and computer applications. [41 FR 24350, June 16, 1976, as amended at 42 FR...

  20. 36 CFR 200.1 - Central organization.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ..., engineering, lands, aviation, and computer systems. The National Forest System includes: 155 Proclaimed or... other environmental concerns, forest insects and disease, forest fire and atmospheric science. Plans and...-wide management of systems and computer applications. [41 FR 24350, June 16, 1976, as amended at 42 FR...

  1. 36 CFR 200.1 - Central organization.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ..., engineering, lands, aviation, and computer systems. The National Forest System includes: 155 Proclaimed or... other environmental concerns, forest insects and disease, forest fire and atmospheric science. Plans and...-wide management of systems and computer applications. [41 FR 24350, June 16, 1976, as amended at 42 FR...

  2. 36 CFR 200.1 - Central organization.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ..., engineering, lands, aviation, and computer systems. The National Forest System includes: 155 Proclaimed or... other environmental concerns, forest insects and disease, forest fire and atmospheric science. Plans and...-wide management of systems and computer applications. [41 FR 24350, June 16, 1976, as amended at 42 FR...

  3. BIO-Plex Information System Concept

    NASA Technical Reports Server (NTRS)

    Jones, Harry; Boulanger, Richard; Arnold, James O. (Technical Monitor)

    1999-01-01

    This paper describes a suggested design for an integrated information system for the proposed BIO-Plex (Bioregenerative Planetary Life Support Systems Test Complex) at Johnson Space Center (JSC), including distributed control systems, central control, networks, database servers, personal computers and workstations, applications software, and external communications. The system will have an open commercial computing and networking architecture. The network will provide automatic real-time transfer of information to database server computers that perform data collection and validation. This information system will support integrated, data-sharing applications for everything from system alarms to management summaries. Most existing complex process control systems have information gaps between the different real-time subsystems, between these subsystems and the central controller, between the central controller and system-level planning and analysis application software, and between the system-level applications and management overview reporting. An integrated information system is vitally necessary as the basis for the integration of planning, scheduling, modeling, monitoring, and control, which will allow improved monitoring and control based on timely, accurate, and complete data. Data describing the system configuration and the real-time processes can be collected, checked and reconciled, analyzed, and stored in database servers that can be accessed by all applications. The required technology is available. The only opportunity to design a distributed, nonredundant, integrated system is before it is built; retrofit is extremely difficult and costly.

  4. Central Data Processing System (CDPS) user's manual: Solar heating and cooling program

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The software and data base management system required to assess the performance of solar heating and cooling systems installed at multiple sites are presented. The instrumentation data associated with these systems are collected, processed, and presented in a form that supports continuity of performance evaluation across all applications. The CDPS consists of three major elements: the communication interface computer, the central data processing computer, and the performance evaluation data base. Users of the performance data base are identified, and procedures for operation and guidelines for software maintenance are outlined. The manual also defines the output capabilities of the CDPS in support of external users of the system.

  5. Making automated computer program documentation a feature of total system design

    NASA Technical Reports Server (NTRS)

    Wolf, A. W.

    1970-01-01

    It is pointed out that in large-scale computer software systems, program documents are too often fraught with errors, out of date, poorly written, and sometimes nonexistent in whole or in part. The means are described by which many of these typical system documentation problems were overcome in a large and dynamic software project. A systems approach was employed which encompassed such items as: (1) configuration management; (2) standards and conventions; (3) collection of program information into central data banks; (4) interaction among executive, compiler, central data banks, and configuration management; and (5) automatic documentation. A complete description of the overall system is given.

  6. Evaluating Computer Technology Integration in a Centralized School System

    ERIC Educational Resources Information Center

    Eteokleous, N.

    2008-01-01

    The study evaluated the current situation in Cyprus elementary classrooms regarding computer technology integration in an attempt to identify ways of expanding teachers' and students' experiences with computer technology. It examined how Cypriot elementary teachers use computers, and the factors that influence computer integration in their…

  7. Client-Server: What Is It and Are We There Yet?

    ERIC Educational Resources Information Center

    Gershenfeld, Nancy

    1995-01-01

    Discusses client-server architecture in dumb terminals, personal computers, local area networks, and graphical user interfaces. Focuses on functions offered by client personal computers: individualized environments; flexibility in running operating systems; advanced operating system features; multiuser environments; and centralized data…

  8. Cloud Based Educational Systems and Its Challenges and Opportunities and Issues

    ERIC Educational Resources Information Center

    Paul, Prantosh Kr.; Lata Dangwal, Kiran

    2014-01-01

    Cloud Computing (CC) is a set of hardware, software, networks, storage, services, and interfaces that combine to deliver aspects of computing as a service. Cloud Computing (CC) uses central remote servers to maintain data and applications. Practically, Cloud Computing (CC) is an extension of Grid computing with independency and…

  9. Computer System Resource Requirements of Novice Programming Students.

    ERIC Educational Resources Information Center

    Nutt, Gary J.

    The characteristics of jobs that constitute the mix for lower division FORTRAN classes in a university were investigated. Samples of these programs were also benchmarked on a larger central site computer and two minicomputer systems. It was concluded that a carefully chosen minicomputer system could offer service at least the equivalent of the…

  10. A Computer Program Functional Design of the Simulation Subsystem of an Automated Central Flow Control System

    DOT National Transportation Integrated Search

    1976-08-01

    This report contains a functional design for the simulation of a future automation concept in support of the ATC Systems Command Center. The simulation subsystem performs airport airborne arrival delay predictions and computes flow control tables for...

  11. Concept of operations for the use of connected vehicle data in road weather applications.

    DOT National Transportation Integrated Search

    2006-01-30

    The Computer Aided Dispatch (CAD) computer system went into live operation January 2002. System design involved creating a distributed network, which involved setting up a central main server at the Idaho State Police (ISP) headquarters located in Me...

  12. Thermodynamics of quasideterministic digital computers

    NASA Astrophysics Data System (ADS)

    Chu, Dominique

    2018-02-01

    A central result of stochastic thermodynamics is that irreversible state transitions of Markovian systems entail a cost in terms of an infinite entropy production. A corollary of this is that strictly deterministic computation is not possible. Using a thermodynamically consistent model, we show that quasideterministic computation can be achieved at finite, and indeed modest, cost with accuracies that are indistinguishable from deterministic behavior for all practical purposes. Concretely, we consider the entropy production of stochastic (Markovian) systems that behave like AND and NOT gates. Combinations of these gates can implement any logical function. We require that these gates return the correct result with a probability very close to 1 and, additionally, that they do so within finite time. The central component of the model is a machine that can read and write binary tapes. We find that the error probability of the computation of these gates falls as a power of the system size, whereas the cost increases only linearly with the system size.
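    The abstract's claim that combinations of AND and NOT gates can implement any logical function (functional completeness) can be illustrated with a short sketch of ours, not taken from the paper, which builds OR and XOR from the two primitives via De Morgan's laws:

    ```python
    def AND(a: bool, b: bool) -> bool:
        return a and b

    def NOT(a: bool) -> bool:
        return not a

    def OR(a: bool, b: bool) -> bool:
        # De Morgan: a OR b == NOT(NOT(a) AND NOT(b))
        return NOT(AND(NOT(a), NOT(b)))

    def XOR(a: bool, b: bool) -> bool:
        # a XOR b == (a OR b) AND NOT(a AND b)
        return AND(OR(a, b), NOT(AND(a, b)))

    # exhaustive check against Python's built-in operators
    for a in (False, True):
        for b in (False, True):
            assert OR(a, b) == (a or b)
            assert XOR(a, b) == (a != b)
    ```

    Because any circuit decomposes into these two gate types, bounding the entropy cost per gate bounds the cost of an arbitrary computation.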

  13. The revolution in data gathering systems

    NASA Technical Reports Server (NTRS)

    Cambra, J. M.; Trover, W. F.

    1975-01-01

    Data acquisition systems used in NASA's wind tunnels from the 1950s through the present time are summarized as a baseline for assessing the impact of minicomputers and microcomputers on data acquisition and data processing. Emphasis is placed on the cyclic evolution in computer technology that led from the central computer system to, finally, the distributed computer system. Other developments discussed include medium-scale integration, large-scale integration, the combining of the functions of data acquisition and control, and micro- and minicomputers.

  14. High order filtering methods for approximating hyperbolic systems of conservation laws

    NASA Technical Reports Server (NTRS)

    Lafon, F.; Osher, S.

    1991-01-01

    The essentially nonoscillatory (ENO) schemes, while potentially useful in the computation of discontinuous solutions of hyperbolic conservation-law systems, are computationally costly relative to simple central-difference methods. A filtering technique is presented which employs central differencing of arbitrarily high-order accuracy except where a local test detects the presence of spurious oscillations and calls upon the full ENO apparatus to remove them. A factor-of-three speedup is thus obtained over the full-ENO method for a wide range of problems, with high-order accuracy in regions of smooth flow.
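    The filtering strategy, high-order central differencing except where a local test detects oscillations, can be sketched as follows (a toy stand-in of ours for the full ENO apparatus; the smoothness threshold `tol` is an invented parameter):

    ```python
    import numpy as np

    def filtered_derivative(u, dx, tol=5.0):
        """Fourth-order central differences everywhere, with a fallback to a
        robust first-order one-sided difference wherever the local second
        difference signals non-smooth data (the 'filter' test)."""
        n = len(u)
        du = np.zeros(n)
        for i in range(2, n - 2):
            if abs(u[i-1] - 2.0*u[i] + u[i+1]) / dx**2 > tol:
                du[i] = (u[i] - u[i-1]) / dx  # low-order fallback near jumps
            else:
                du[i] = (-u[i+2] + 8.0*u[i+1] - 8.0*u[i-1] + u[i-2]) / (12.0*dx)
        return du

    x = np.linspace(0.0, 2.0*np.pi, 101)
    dx = x[1] - x[0]
    d = filtered_derivative(np.sin(x), dx)
    # on smooth data the test never fires, so interior values match cos(x)
    # to fourth-order accuracy
    ```

    The speedup in the paper comes from the same structure: the cheap central stencil dominates the work, and the costly ENO reconstruction (here crudely replaced by a one-sided difference) runs only where flagged.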

  15. Eigenvector centrality for geometric and topological characterization of porous media

    NASA Astrophysics Data System (ADS)

    Jimenez-Martinez, Joaquin; Negre, Christian F. A.

    2017-07-01

    Solving flow and transport through complex geometries such as porous media is computationally difficult. Such calculations usually involve the solution of a system of discretized differential equations, which could lead to extreme computational cost depending on the size of the domain and the accuracy of the model. Geometric simplifications like pore networks, where the pores are represented by nodes and the pore throats by edges connecting pores, have been proposed. These models, despite their ability to preserve the connectivity of the medium, have difficulties capturing preferential paths (high velocity) and stagnation zones (low velocity), as they do not consider the specific relations between nodes. Nonetheless, network theory approaches, where a complex network is a graph, can help to simplify and better understand fluid dynamics and transport in porous media. Here we present an alternative method to address these issues based on eigenvector centrality, which has been corrected to overcome the centralization problem and modified to introduce a bias in the centrality distribution along a particular direction to address the flow and transport anisotropy in porous media. We compare the model predictions with millifluidic transport experiments, which shows that, albeit simple, this technique is computationally efficient and has potential for predicting preferential paths and stagnation zones for flow and transport in porous media. We propose to use the eigenvector centrality probability distribution to compute the entropy as an indicator of the "mixing capacity" of the system.
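    The centrality measure itself is simple to compute: it is the principal eigenvector of the network's adjacency matrix, obtainable by power iteration. A minimal sketch of ours (the paper's corrected, direction-biased variant is more involved), including the entropy of the centrality distribution used as a mixing indicator:

    ```python
    import numpy as np

    def eigenvector_centrality(A, n_iter=1000, tol=1e-12):
        """Principal eigenvector of adjacency matrix A via power iteration:
        each node's score is proportional to the sum of its neighbors' scores."""
        x = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
        for _ in range(n_iter):
            x_new = A @ x
            x_new /= np.linalg.norm(x_new)
            if np.linalg.norm(x_new - x) < tol:
                break
            x = x_new
        return x_new

    # toy pore network: node 0 is a well-connected pore, node 3 a dead end
    A = np.array([[0, 1, 1, 1],
                  [1, 0, 1, 0],
                  [1, 1, 0, 0],
                  [1, 0, 0, 0]], dtype=float)
    c = eigenvector_centrality(A)

    # entropy of the normalized centrality distribution ("mixing capacity")
    p = c / c.sum()
    H = -np.sum(p * np.log(p))
    ```

    The best-connected pore receives the highest score, which is the property that lets high-centrality chains stand in for preferential flow paths.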

  16. Computer Instructional Aids for Undergraduate Control Education. 1978 Edition.

    ERIC Educational Resources Information Center

    Volz, Richard A.; And Others

    This work represents the development of computer tools for undergraduate students. Emphasis is on automatic control theory using hybrid and digital computation. The routine calculations of control system analysis are presented as students would use them on the University of Michigan's central digital computer and the time-shared graphic terminals…

  17. Extension of a streamwise upwind algorithm to a moving grid system

    NASA Technical Reports Server (NTRS)

    Obayashi, Shigeru; Goorjian, Peter M.; Guruswamy, Guru P.

    1990-01-01

    A new streamwise upwind algorithm was derived to compute unsteady flow fields with the use of a moving-grid system. The temporally nonconservative LU-ADI (lower-upper-factored, alternating-direction-implicit) method was applied for time-marching computations. A comparison of the temporally nonconservative method with a time-conservative implicit upwind method indicates that the solutions are insensitive to the conservative properties of the implicit solvers when practical time steps are used. Using this new method, computations were made for an oscillating wing at a transonic Mach number. The computed results confirm that the present upwind scheme captures the shock motion better than the central-difference scheme based on the Beam-Warming algorithm. The new upwind option of the code allows larger time steps and thus is more efficient, even though it requires slightly more computational time per time step than the central-difference option.
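    The upwind-versus-central contrast drawn above can be seen already in the simplest member of the upwind family. A sketch of ours (first-order scalar upwinding, not the paper's streamwise algorithm): for u_t + a·u_x = 0 with a > 0, the spatial difference is biased toward the side the flow comes from, which keeps the scheme stable and monotone where an undamped central difference would not be:

    ```python
    import numpy as np

    def upwind_step(u, c):
        """One first-order upwind update for u_t + a*u_x = 0 with a > 0 on a
        periodic grid; c = a*dt/dx is the CFL number (stable for 0 < c <= 1)."""
        return u - c * (u - np.roll(u, 1))  # np.roll(u, 1) gives u[i-1]

    x = np.linspace(0.0, 1.0, 100, endpoint=False)
    u = np.exp(-200.0 * (x - 0.3)**2)  # Gaussian pulse, peak at index 30
    for _ in range(50):
        u = upwind_step(u, 0.5)
    # after 50 steps at c = 0.5 the pulse has moved about 25 cells to the
    # right, smeared by numerical diffusion but bounded (no new extrema)
    ```

    Shock-capturing upwind schemes like the paper's refine this idea: the bias follows the local flow direction, trading a little extra work per step for robustness near discontinuities.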

  18. A computer software system for the generation of global ocean tides including self-gravitation and crustal loading effects

    NASA Technical Reports Server (NTRS)

    Estes, R. H.

    1977-01-01

    A computer software system is described which computes global numerical solutions of the integro-differential Laplace tidal equations, including dissipation terms and ocean loading and self-gravitation effects, for arbitrary diurnal and semidiurnal tidal constituents. The integration algorithm features a successive approximation scheme for the integro-differential system, with forward differences for stepping in the time variable and central differences in the spatial variables.
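    The differencing pattern described, forward in time and central in space, can be sketched on a much simpler model problem (our example, using the 1-D heat equation rather than the tidal system):

    ```python
    import numpy as np

    def ftcs_step(u, r):
        """One forward-time, central-space (FTCS) update for the 1-D heat
        equation u_t = u_xx with fixed boundary values; r = dt/dx**2 is the
        diffusion number (this scheme is stable for r <= 0.5)."""
        u_new = u.copy()
        u_new[1:-1] = u[1:-1] + r * (u[2:] - 2.0*u[1:-1] + u[:-2])
        return u_new

    x = np.linspace(0.0, 1.0, 51)
    u = np.sin(np.pi * x)  # initial profile, zero at both boundaries
    for _ in range(200):
        u = ftcs_step(u, 0.4)
    # the profile decays smoothly toward zero, staying bounded and nonnegative
    ```

    The tidal solver applies the same time/space stencils, wrapped in a successive-approximation outer loop to handle the integral (loading and self-gravitation) terms.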

  19. Interactive access to forest inventory data for the South Central United States

    Treesearch

    William H. McWilliams

    1990-01-01

    On-line access to USDA, Forest Service successive forest inventory data for the South Central United States is provided by two computer systems. The Easy Access to Forest Inventory and Analysis Tables program (EZTAB) produces a set of tables for specific geographic areas. The Interactive Graphics and Retrieval System (INGRES) is a database management system that...

  20. Uniformity testing: assessment of a centralized web-based uniformity analysis system.

    PubMed

    Klempa, Meaghan C

    2011-06-01

    Uniformity testing is performed daily to ensure adequate camera performance before clinical use. The aim of this study is to assess the reliability of Beth Israel Deaconess Medical Center's locally built, centralized, Web-based uniformity analysis system by examining the differences between manufacturer and Web-based National Electrical Manufacturers Association integral uniformity calculations measured in the useful field of view (FOV) and the central FOV. Manufacturer and Web-based integral uniformity calculations measured in the useful FOV and the central FOV were recorded over a 30-d period for 4 cameras from 3 different manufacturers. These data were then statistically analyzed. The differences between the uniformity calculations were computed, in addition to the means and the SDs of these differences for each head of each camera. There was a correlation between the manufacturer and Web-based integral uniformity calculations in the useful FOV and the central FOV over the 30-d period. The average differences between the manufacturer and Web-based useful FOV calculations ranged from -0.30 to 0.099, with SD ranging from 0.092 to 0.32. For the central FOV calculations, the average differences ranged from -0.163 to 0.055, with SD ranging from 0.074 to 0.24. Most of the uniformity calculations computed by this centralized Web-based uniformity analysis system are comparable to the manufacturers' calculations, suggesting that this system is reasonably reliable and effective. This finding is important because centralized Web-based uniformity analysis systems are advantageous in that they test camera performance in the same manner regardless of the manufacturer.
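    At its core, the NEMA integral uniformity figure being compared is a max/min contrast over the pixels of the field of view. A minimal sketch of ours (the full NEMA procedure also prescribes pixel sizing, count requirements, and nine-point smoothing, omitted here):

    ```python
    import numpy as np

    def integral_uniformity(fov):
        """Integral uniformity in percent over a field-of-view region:
        100 * (max - min) / (max + min)."""
        fov = np.asarray(fov, dtype=float)
        return 100.0 * (fov.max() - fov.min()) / (fov.max() + fov.min())

    # toy flood-field image: counts near 1000 with two outlier pixels
    img = np.full((64, 64), 1000.0)
    img[10, 10] = 1050.0
    img[50, 50] = 950.0
    iu = integral_uniformity(img)  # 100 * (1050 - 950) / (1050 + 950) = 5.0
    ```

    Running the same formula over the useful FOV and the central FOV yields the two calculations whose manufacturer and Web-based values the study compares.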

  1. 23. VIEW OF THE FIRST FLOOR PLAN. THE FIRST FLOOR ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    23. VIEW OF THE FIRST FLOOR PLAN. THE FIRST FLOOR HOUSED ADMINISTRATIVE OFFICES, THE CENTRAL COMPUTING, UTILITY SYSTEMS, ANALYTICAL LABORATORIES, AND MAINTENANCE SHOPS. THE ORIGINAL DRAWING HAS BEEN ARCHIVED ON MICROFILM. THE DRAWING WAS REPRODUCED AT THE BEST QUALITY POSSIBLE. LETTERS AND NUMBERS IN THE CIRCLES INDICATE FOOTER AND/OR COLUMN LOCATIONS. - Rocky Flats Plant, General Manufacturing, Support, Records-Central Computing, Southern portion of Plant, Golden, Jefferson County, CO

  2. The Lilongwe Central Hospital Patient Management Information System: A Success in Computer-Based Order Entry Where One Might Least Expect It

    PubMed Central

    Douglas, GP; Deula, RA; Connor, SE

    2003-01-01

    Computer-based order entry is a powerful tool for enhancing patient care. A pilot project in the pediatric department of the Lilongwe Central Hospital (LCH) in Malawi, Africa has demonstrated that computer-based order entry (COE): 1) can be successfully deployed and adopted in resource-poor settings, 2) can be built, deployed and sustained at relatively low cost and with local resources, and 3) has a greater potential to improve patient care in developing than in developed countries. PMID:14728338

  3. Brief Survey of TSC Computing Facilities

    DOT National Transportation Integrated Search

    1972-05-01

    The Transportation Systems Center (TSC) has four, essentially separate, in-house computing facilities. We shall call them Honeywell Facility, the Hybrid Facility, the Multimode Simulation Facility, and the Central Facility. In addition to these four,...

  4. The analysis of delays in simulator digital computing systems. Volume 1: Formulation of an analysis approach using a central example simulator model

    NASA Technical Reports Server (NTRS)

    Heffley, R. K.; Jewell, W. F.; Whitbeck, R. F.; Schulman, T. M.

    1980-01-01

    The effects of spurious delays in real time digital computing systems are examined. Various sources of spurious delays are defined and analyzed using an extant simulator system as an example. A specific analysis procedure is set forth and four cases are viewed in terms of their time and frequency domain characteristics. Numerical solutions are obtained for three single rate one- and two-computer examples, and the analysis problem is formulated for a two-rate, two-computer example.

  5. Extraction and visualization of the central chest lymph-node stations

    NASA Astrophysics Data System (ADS)

    Lu, Kongkuo; Merritt, Scott A.; Higgins, William E.

    2008-03-01

    Lung cancer remains the leading cause of cancer death in the United States and is expected to account for nearly 30% of all cancer deaths in 2007. Central to the lung-cancer diagnosis and staging process is the assessment of the central chest lymph nodes. This assessment typically requires two major stages: (1) location of the lymph nodes in a three-dimensional (3D) high-resolution volumetric multi-detector computed-tomography (MDCT) image of the chest; (2) subsequent nodal sampling using transbronchial needle aspiration (TBNA). We describe a computer-based system for automatically locating the central chest lymph-node stations in a 3D MDCT image. Automated analysis methods are first run that extract the airway tree, airway-tree centerlines, aorta, pulmonary artery, lungs, key skeletal structures, and major-airway labels. This information provides geometrical and anatomical cues for localizing the major nodal stations. Our system demarcates these stations, conforming to criteria outlined for the Mountain and Wang standard classification systems. Visualization tools within the system then enable the user to interact with these stations to locate visible lymph nodes. Results derived from a set of human 3D MDCT chest images illustrate the usage and efficacy of the system.

  6. A Computational Model of Reasoning from the Clinical Literature

    PubMed Central

    Rennels, Glenn D.

    1986-01-01

    This paper explores the premise that a formalized representation of empirical studies can play a central role in computer-based decision support. The specific motivations underlying this research include the following propositions: 1. Reasoning from experimental evidence contained in the clinical literature is central to the decisions physicians make in patient care. 2. A computational model, based upon a declarative representation for published reports of clinical studies, can drive a computer program that selectively tailors knowledge of the clinical literature as it is applied to a particular case. 3. The development of such a computational model is an important first step toward filling a void in computer-based decision support systems. Furthermore, the model may help us better understand the general principles of reasoning from experimental evidence both in medicine and other domains. Roundsman is a developmental computer system which draws upon structured representations of the clinical literature in order to critique plans for the management of primary breast cancer. Roundsman is able to produce patient-specific analyses of breast cancer management options based on the 24 clinical studies currently encoded in its knowledge base. The Roundsman system is a first step in exploring how the computer can help to bring a critical analysis of the relevant literature to the physician, structured around a particular patient and treatment decision.

  7. Centralized Monitoring of the Microsoft Windows-based computers of the LHC Experiment Control Systems

    NASA Astrophysics Data System (ADS)

    Varela Rodriguez, F.

    2011-12-01

    The control system of each of the four major Experiments at the CERN Large Hadron Collider (LHC) is distributed over up to 160 computers running either Linux or Microsoft Windows. A quick response to abnormal situations of the computer infrastructure is crucial to maximize the physics usage. For this reason, a tool was developed to supervise, identify errors, and troubleshoot such a large system. Although monitoring of the performance of the Linux computers and their processes has been available since the first versions of the tool, only recently has the software package been extended to provide similar functionality for the nodes running Microsoft Windows, as this platform is the most commonly used in the LHC detector control systems. In this paper, the architecture and functionality of the Windows Management Instrumentation (WMI) client developed to provide centralized monitoring of the nodes running different flavours of the Microsoft platform, as well as the interface to the SCADA software of the control systems, are presented. The tool is currently being commissioned by the Experiments and has already proven very efficient for optimizing the running systems and detecting misbehaving processes or nodes.

  8. NRL Fact Book 1992-1993

    DTIC Science & Technology

    1993-06-01

    administering contractual support for lab-wide or multiple buys of ADP systems, software, and services. Computer systems located in the Central Computing Facility...

  9. Space Tug Avionics Definition Study. Volume 5: Cost and Programmatics

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The baseline avionics system features a central digital computer that integrates the functions of all the space tug subsystems by means of a redundant digital data bus. The central computer consists of dual central processor units, dual input/output processors, and a fault tolerant memory, utilizing internal redundancy and error checking. Three electronically steerable phased arrays provide downlink transmission from any tug attitude directly to ground or via TDRS. Six laser gyros and six accelerometers in a dodecahedron configuration make up the inertial measurement unit. Both a scanning laser radar and a TV system, employing strobe lamps, are required as acquisition and docking sensors. Primary dc power at a nominal 28 volts is supplied from dual lightweight, thermally integrated fuel cells which operate from propellant grade reactants out of the main tanks.

  10. Digital Data Transmission Via CATV.

    ERIC Educational Resources Information Center

    Stifle, Jack; And Others

    A low cost communications network has been designed for use in the PLATO IV computer-assisted instruction system. Over 1,000 remote computer graphic terminals each requiring a 1200 bps channel are to be connected to one centrally located computer. Digital data are distributed to these terminals using standard commercial cable television (CATV)…

  11. When does a physical system compute?

    PubMed

    Horsman, Clare; Stepney, Susan; Wagner, Rob C; Kendon, Viv

    2014-09-08

    Computing is a high-level process of a physical system. Recent interest in non-standard computing systems, including quantum and biological computers, has brought this physical basis of computing to the forefront. There has been, however, no consensus on how to tell if a given physical system is acting as a computer or not, leading to confusion over novel computational devices, and even claims that every physical event is a computation. In this paper, we introduce a formal framework that can be used to determine whether a physical system is performing a computation. We demonstrate how the abstract computational level interacts with the physical device level, in comparison with the use of mathematical models in experimental science. This powerful formulation allows a precise description of experiments, technology, computation and simulation, giving our central conclusion: physical computing is the use of a physical system to predict the outcome of an abstract evolution. We give conditions for computing, illustrated using a range of non-standard computing scenarios. The framework also covers broader computing contexts, where there is no obvious human computer user. We introduce the notion of a 'computational entity', and its critical role in defining when computing is taking place in physical systems.

  12. When does a physical system compute?

    PubMed Central

    Horsman, Clare; Stepney, Susan; Wagner, Rob C.; Kendon, Viv

    2014-01-01

    Computing is a high-level process of a physical system. Recent interest in non-standard computing systems, including quantum and biological computers, has brought this physical basis of computing to the forefront. There has been, however, no consensus on how to tell if a given physical system is acting as a computer or not, leading to confusion over novel computational devices, and even claims that every physical event is a computation. In this paper, we introduce a formal framework that can be used to determine whether a physical system is performing a computation. We demonstrate how the abstract computational level interacts with the physical device level, in comparison with the use of mathematical models in experimental science. This powerful formulation allows a precise description of experiments, technology, computation and simulation, giving our central conclusion: physical computing is the use of a physical system to predict the outcome of an abstract evolution. We give conditions for computing, illustrated using a range of non-standard computing scenarios. The framework also covers broader computing contexts, where there is no obvious human computer user. We introduce the notion of a ‘computational entity’, and its critical role in defining when computing is taking place in physical systems. PMID:25197245

  13. CRYSNET manual. Informal report. [Hardware and software of crystallographic computing network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None,

    1976-07-01

    This manual describes the hardware and software which together make up the crystallographic computing network (CRYSNET). The manual is intended as a users' guide and also provides general information for persons without any experience with the system. CRYSNET is a network of intelligent remote graphics terminals that are used to communicate with the CDC Cyber 70/76 computing system at the Brookhaven National Laboratory (BNL) Central Scientific Computing Facility. Terminals are in active use by four research groups in the field of crystallography. A protein data bank has been established at BNL to store in machine-readable form atomic coordinates and other crystallographic data for macromolecules. The bank currently includes data for more than 20 proteins. This structural information can be accessed at BNL directly by the CRYSNET graphics terminals. More than two years of experience has been accumulated with CRYSNET. During this period, it has been demonstrated that the terminals, which provide access to a large, fast third-generation computer, plus stand-alone interactive graphics capability, are useful for computations in crystallography, and in a variety of other applications as well. The terminal hardware, the actual operations of the terminals, and the operations of the BNL Central Facility are described in some detail, and documentation of the terminal and central-site software is given. (RWR)

  14. Analysis of Selected Enhancements to the En Route Central Computing Complex

    DOT National Transportation Integrated Search

    1981-09-01

    This report analyzes selected hardware enhancements that could improve the performance of the 9020 computer systems, which are used to provide en route air traffic control services. These enhancements could be implemented quickly, would be relatively...

  15. 75 FR 60415 - Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-30

    ... computer systems and networks. This information collection is required to obtain the necessary data... card reflecting those benefits and privileges, and to maintain a centralized database of the eligible... card reflecting those benefits and privileges, and to maintain a centralized database of the eligible...

  16. Beam orbit simulation in the central region of the RIKEN AVF cyclotron

    NASA Astrophysics Data System (ADS)

    Toprek, Dragan; Goto, Akira; Yano, Yasushige

    1999-04-01

    This paper describes the modification design of the central region for the h=2 mode of acceleration in the RIKEN AVF cyclotron. We made a small modification to the electrode shape in the central region to optimize the beam transmission. The central region is equipped with an axial injection system; a spiral-type inflector is used for axial injection. The electric field distribution in the inflector and in the four acceleration gaps has been numerically calculated from an electric potential map produced by the program RELAX3D. The magnetic field is measured. The geometry of the central region has been tested with orbit computations carried out by means of the computer code CYCLONE. The optical properties of the spiral inflector and the central region are studied using the programs CASINO and CYCLONE, respectively. We have also made an effort to minimize the inflector fringe-field effects using the RELAX3D program.

  17. Digital system for structural dynamics simulation

    NASA Technical Reports Server (NTRS)

    Krauter, A. I.; Lagace, L. J.; Wojnar, M. K.; Glor, C.

    1982-01-01

    State-of-the-art digital hardware and software were incorporated in a system designed for the simulation of complex structural dynamic interactions, such as those which occur in rotating structures (engine systems). The design uses an array of processors in which the computation for each physical subelement or functional subsystem is assigned to a single specific processor in the simulator. These node processors are microprogrammed bit-slice microcomputers which function autonomously and can communicate with each other and with a central control minicomputer over parallel digital lines. Inter-processor nearest-neighbor communication busses pass the constants which represent physical constraints and boundary conditions. Each node processor is connected to its six nearest-neighbor node processors to simulate the actual physical interfaces of real substructures. Computer-generated finite element mesh and force models can be developed with the aid of the central control minicomputer. The control computer also oversees the animation of a graphics display system and disk-based mass storage, along with the individual processing elements.

  18. SCANIT: centralized digitizing of forest resource maps or photographs

    Treesearch

    Elliot L. Amidon; E. Joyce Dye

    1981-01-01

    Spatial data on wildland resource maps and aerial photographs can be analyzed by computer after digitizing. SCANIT is a computerized system for encoding such data in digital form. The system, consisting of a collection of computer programs and subroutines, provides a powerful and versatile tool for a variety of resource analyses. SCANIT also may be converted easily to...

  19. On Robustness of Deadlock Detection Algorithms for Distributed Computing Systems.

    DTIC Science & Technology

    1982-02-01

    terms: make it much more difficult to detect, avoid, or prevent than in the earlier multiprogramming centralized computing systems. Deadlock preven...failure of site C would not have been critical after the B had been sent. The effect of a type c site (site _ in our example) failing would have no

  20. The Virtual Solar System Project: Developing Conceptual Understanding of Astronomical Concepts through Building Three-Dimensional Computational Models.

    ERIC Educational Resources Information Center

    Keating, Thomas; Barnett, Michael; Barab, Sasha A.; Hay, Kenneth E.

    2002-01-01

    Describes the Virtual Solar System (VSS) course which is one of the first attempts to integrate three-dimensional (3-D) computer modeling as a central component of introductory undergraduate education. Assesses changes in student understanding of astronomy concepts as a result of participating in an experimental introductory astronomy course in…

  1. Organization of the secure distributed computing based on multi-agent system

    NASA Astrophysics Data System (ADS)

    Khovanskov, Sergey; Rumyantsev, Konstantin; Khovanskova, Vera

    2018-04-01

    Nowadays, developing methods for distributed computing receives much attention. One such method is the use of multi-agent systems. Distributed computing based on conventional networked computers can experience security threats arising from the computational processes themselves. The authors have developed a unified agent algorithm for a control system governing the operation of computing network nodes, with networked PCs used as computing nodes. The proposed multi-agent control system makes it possible, in a short time, to harness the processing power of the computers of any existing network to solve large tasks by creating a distributed computation. Agents deployed on a computer network can configure a distributed computing system, distribute the computational load among the computers they operate, and optimize the distributed computing system according to the computing power of the computers on the network. The number of computers connected to the network can be increased by connecting new computers to the system, which increases the overall processing power. Adding a central agent to the multi-agent system increases the security of the distributed computation. This organization of the distributed computing system reduces problem-solving time and increases the fault tolerance (vitality) of computing processes in a changing computing environment (dynamic change of the number of computers on the network). The developed multi-agent system detects cases of falsification of the results of the distributed computation, which might otherwise lead to wrong decisions. In addition, the system checks and corrects wrong results.
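    As an illustration of the load-distribution idea described above, the sketch below shows one simple way an agent might assign tasks to networked PCs in proportion to their computing power. The scheme, node names, and numbers are assumptions for illustration, not the algorithm from the paper:

    ```python
    def distribute(tasks, node_power):
        """Greedy sketch: assign each task to the node with the lowest
        expected finish time, so load is balanced in proportion to the
        nodes' computing power. (Illustrative; not the paper's algorithm.)"""
        load = {node: 0.0 for node in node_power}   # accumulated busy time per node
        assignment = {}
        for task, cost in tasks.items():
            # finish time if this node takes the task: current load + cost / power
            node = min(node_power, key=lambda n: load[n] + cost / node_power[n])
            load[node] += cost / node_power[node]
            assignment[task] = node
        return assignment

    # Three hypothetical networked PCs with different relative power
    plan = distribute({"t1": 4.0, "t2": 4.0, "t3": 2.0},
                      {"pc1": 1.0, "pc2": 2.0, "pc3": 0.5})
    print(plan)  # {'t1': 'pc2', 't2': 'pc1', 't3': 'pc2'}
    ```

    The greedy minimum-finish-time rule keeps faster nodes busier, which mirrors the abstract's point about optimizing the distributed system according to the computing power of the machines on the network.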

  2. Fiber Optic Communication System For Medical Images

    NASA Astrophysics Data System (ADS)

    Arenson, Ronald L.; Morton, Dan E.; London, Jack W.

    1982-01-01

    This paper discusses a fiber optic communication system linking ultrasound devices, computerized tomography scanners, a nuclear medicine computer system, and a digital fluorographic system to a central radiology research computer. These centrally archived images are available for near-instantaneous recall at various display consoles. When a suitable laser optical disk is available for mass storage, more extensive image archiving will be added to the network, including digitized images of standard radiographs for comparison purposes and for remote display in such areas as the intensive care units, the operating room, and selected outpatient departments. This fiber optic system allows for the transfer of high-resolution images in less than a second over distances exceeding 2,000 feet. The advantages of using fiber optic cables instead of typical parallel or serial communication techniques will be described. The switching methodology and communication protocols will also be discussed.

  3. NIMH Prototype Management Information System for Community Mental Health Centers

    PubMed Central

    Wurster, Cecil R.; Goodman, John D.

    1980-01-01

    Various approaches to centralized support of computer applications in health care are described. The NIMH project to develop a prototype Management Information System (MIS) for community mental health centers is presented and discussed as a centralized development of an automated data processing system for multiple user organizations. The NIMH program is summarized, the prototype MIS is characterized, and steps taken to provide for the differing needs of the mental health centers are highlighted.

  4. Automated Power Systems Management (APSM)

    NASA Technical Reports Server (NTRS)

    Bridgeforth, A. O.

    1981-01-01

    A breadboard power system incorporating autonomous functions of monitoring, fault detection and recovery, command and control was developed, tested and evaluated to demonstrate technology feasibility. Autonomous functions including switching of redundant power processing elements, individual load fault removal, and battery charge/discharge control were implemented by means of a distributed microcomputer system within the power subsystem. Three local microcomputers provide the monitoring, control and command function interfaces between the central power subsystem microcomputer and the power sources, power processing and power distribution elements. The central microcomputer is the interface between the local microcomputers and the spacecraft central computer or ground test equipment.

  5. Nanotube Heterojunctions and Endo-Fullerenes for Nanoelectronics

    NASA Technical Reports Server (NTRS)

    Srivastava, Deepak; Menon, M.; Andriotis, Antonis; Cho, K.; Park, Jun; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    Topics discussed include: (1) Light-Weight Multi-Functional Materials: Nanomechanics; Nanotubes and Composites; Thermal/Chemical/Electrical Characterization; (2) Biomimetic/Revolutionary Concepts: Evolutionary Computing and Sensing; Self-Heating Materials; (3) Central Computing System: Molecular Electronics; Materials for Quantum Bits; and (4) Molecular Machines.

  6. Airborne Advanced Reconfigurable Computer System (ARCS)

    NASA Technical Reports Server (NTRS)

    Bjurman, B. E.; Jenkins, G. M.; Masreliez, C. J.; Mcclellan, K. L.; Templeman, J. E.

    1976-01-01

    A digital computer subsystem fault-tolerant concept was defined, and the potential benefits and costs of such a subsystem were assessed when used as the central element of a new transport's flight control system. The derived advanced reconfigurable computer system (ARCS) is a triple-redundant computer subsystem that automatically reconfigures, under multiple fault conditions, from triplex to duplex to simplex operation, with redundancy recovery if the fault condition is transient. The study included criteria development covering factors at the aircraft's operation level that would influence the design of a fault-tolerant system for commercial airline use. A new reliability analysis tool was developed for evaluating redundant, fault-tolerant system availability and survivability; and a stringent digital system software design methodology was used to achieve design/implementation visibility.

  7. A comparative approach for the investigation of biological information processing: An examination of the structure and function of computer hard drives and DNA

    PubMed Central

    2010-01-01

    Background The robust storage, updating and utilization of information are necessary for the maintenance and perpetuation of dynamic systems. These systems can exist as constructs of metal-oxide semiconductors and silicon, as in a digital computer, or in the "wetware" of organic compounds, proteins and nucleic acids that make up biological organisms. We propose that there are essential functional properties of centralized information-processing systems; for digital computers these properties reside in the computer's hard drive, and for eukaryotic cells they are manifest in the DNA and associated structures. Methods Presented herein is a descriptive framework that compares DNA and its associated proteins and sub-nuclear structure with the structure and function of the computer hard drive. We identify four essential properties of information for a centralized storage and processing system: (1) orthogonal uniqueness, (2) low level formatting, (3) high level formatting and (4) translation of stored to usable form. The corresponding aspects of the DNA complex and a computer hard drive are categorized using this classification. This is intended to demonstrate a functional equivalence between the components of the two systems, and thus the systems themselves. Results Both the DNA complex and the computer hard drive contain components that fulfill the essential properties of a centralized information storage and processing system. The functional equivalence of these components provides insight into both the design process of engineered systems and the evolved solutions addressing similar system requirements. However, there are points where the comparison breaks down, particularly when there are externally imposed information-organizing structures on the computer hard drive. A specific example of this is the imposition of the File Allocation Table (FAT) during high level formatting of the computer hard drive and the subsequent loading of an operating system (OS). 
Biological systems do not have an external source for a map of their stored information or for an operational instruction set; rather, they must contain an organizational template conserved within their intra-nuclear architecture that "manipulates" the laws of chemistry and physics into a highly robust instruction set. We propose that the epigenetic structure of the intra-nuclear environment and the non-coding RNA may play the roles of a Biological File Allocation Table (BFAT) and biological operating system (Bio-OS) in eukaryotic cells. Conclusions The comparison of functional and structural characteristics of the DNA complex and the computer hard drive leads to a new descriptive paradigm that identifies the DNA as a dynamic storage system of biological information. This system is embodied in an autonomous operating system that inductively follows organizational structures, data hierarchy and executable operations that are well understood in the computer science industry. Characterizing the "DNA hard drive" in this fashion can lead to insights arising from discrepancies in the descriptive framework, particularly with respect to positing the role of epigenetic processes in an information-processing context. Further expansions arising from this comparison include the view of cells as parallel computing machines and a new approach towards characterizing cellular control systems. PMID:20092652

  8. A comparative approach for the investigation of biological information processing: an examination of the structure and function of computer hard drives and DNA.

    PubMed

    D'Onofrio, David J; An, Gary

    2010-01-21

    The robust storage, updating and utilization of information are necessary for the maintenance and perpetuation of dynamic systems. These systems can exist as constructs of metal-oxide semiconductors and silicon, as in a digital computer, or in the "wetware" of organic compounds, proteins and nucleic acids that make up biological organisms. We propose that there are essential functional properties of centralized information-processing systems; for digital computers these properties reside in the computer's hard drive, and for eukaryotic cells they are manifest in the DNA and associated structures. Presented herein is a descriptive framework that compares DNA and its associated proteins and sub-nuclear structure with the structure and function of the computer hard drive. We identify four essential properties of information for a centralized storage and processing system: (1) orthogonal uniqueness, (2) low level formatting, (3) high level formatting and (4) translation of stored to usable form. The corresponding aspects of the DNA complex and a computer hard drive are categorized using this classification. This is intended to demonstrate a functional equivalence between the components of the two systems, and thus the systems themselves. Both the DNA complex and the computer hard drive contain components that fulfill the essential properties of a centralized information storage and processing system. The functional equivalence of these components provides insight into both the design process of engineered systems and the evolved solutions addressing similar system requirements. However, there are points where the comparison breaks down, particularly when there are externally imposed information-organizing structures on the computer hard drive. A specific example of this is the imposition of the File Allocation Table (FAT) during high level formatting of the computer hard drive and the subsequent loading of an operating system (OS). 
Biological systems do not have an external source for a map of their stored information or for an operational instruction set; rather, they must contain an organizational template conserved within their intra-nuclear architecture that "manipulates" the laws of chemistry and physics into a highly robust instruction set. We propose that the epigenetic structure of the intra-nuclear environment and the non-coding RNA may play the roles of a Biological File Allocation Table (BFAT) and biological operating system (Bio-OS) in eukaryotic cells. The comparison of functional and structural characteristics of the DNA complex and the computer hard drive leads to a new descriptive paradigm that identifies the DNA as a dynamic storage system of biological information. This system is embodied in an autonomous operating system that inductively follows organizational structures, data hierarchy and executable operations that are well understood in the computer science industry. Characterizing the "DNA hard drive" in this fashion can lead to insights arising from discrepancies in the descriptive framework, particularly with respect to positing the role of epigenetic processes in an information-processing context. Further expansions arising from this comparison include the view of cells as parallel computing machines and a new approach towards characterizing cellular control systems.

  9. Automating the Analytical Laboratories Section, Lewis Research Center, National Aeronautics and Space Administration: A feasibility study

    NASA Technical Reports Server (NTRS)

    Boyle, W. G.; Barton, G. W.

    1979-01-01

    The feasibility of computerized automation of the Analytical Laboratories Section at NASA's Lewis Research Center was considered. Since that laboratory's duties are not routine, the automation goals were set with that in mind. Four instruments were selected as the most likely automation candidates: an atomic absorption spectrophotometer, an emission spectrometer, an X-ray fluorescence spectrometer, and an X-ray diffraction unit. Two options for computer automation were described: a time-shared central computer and a system with microcomputers for each instrument connected to a central computer. A third option, presented for future planning, expands the microcomputer version. Costs and benefits for each option were considered. It was concluded that the microcomputer version best fits the goals and duties of the laboratory and that such an automated system is needed to meet the laboratory's future requirements.

  10. Efficacy of a computerized system applied to central operating theatre for medical records collection.

    PubMed

    Yamamoto, K; Ogura, H; Furutani, H; Kitazoe, Y; Takeda, Y; Hirakawa, M

    1986-01-01

    A computer system is introduced that has been in use since October 1981 at Kochi Medical School as one of the integral sub-systems of the total hospital information system called IMIS. The system was designed from the beginning with the main purpose of achieving better management of operations, and detailed medical records are included for before, during, and after operations. It is shown that almost all operations except emergencies were managed using the computer system rather than the paper system. After presenting some of the results of the accumulated records, we discuss the reasons for this high frequency of use of the computer system.

  11. Processing Diabetes Mellitus Composite Events in MAGPIE.

    PubMed

    Brugués, Albert; Bromuri, Stefano; Barry, Michael; Del Toro, Óscar Jiménez; Mazurkiewicz, Maciej R; Kardas, Przemyslaw; Pegueroles, Josep; Schumacher, Michael

    2016-02-01

    The focus of this research is the definition of programmable expert Personal Health Systems (PHS) that monitor patients affected by chronic diseases, using agent-oriented programming and mobile computing to represent the interactions happening among the components of the system. The paper also discusses issues of knowledge representation within the medical domain when dealing with temporal patterns concerning the physiological values of the patient. In the presented agent-based PHS, doctors can personalize, for each patient, monitoring rules that can be defined in a graphical way. Furthermore, to achieve better scalability, the computations for monitoring the patients are distributed among their devices rather than being performed on a centralized server. The system is evaluated using data from 21 diabetic patients to detect temporal patterns according to a set of defined monitoring rules. The system's scalability is evaluated by comparing it with a centralized approach. The evaluation concerning the detection of temporal patterns highlights the system's ability to monitor chronic patients affected by diabetes. Regarding scalability, the results show that an approach exploiting mobile computing is more scalable than a centralized approach and is therefore more likely to satisfy the needs of next-generation PHSs. PHSs are becoming an adopted technology to deal with the surge of patients affected by chronic illnesses. This paper discusses architectural choices that make an agent-based PHS more scalable by using a distributed mobile-computing approach. It also discusses how to model the medical knowledge in the PHS in such a way that it is modifiable at run time. The evaluation highlights the necessity of distributing the reasoning to the mobile part of the system and shows that modifiable rules are able to deal with changes in the lifestyle of patients affected by chronic illnesses.

  12. Implementing a Computer Program that Captures Students' Work on Customizable, Periodic-System Data Assignments

    ERIC Educational Resources Information Center

    Wiediger, Susan D.

    2009-01-01

    The periodic table and the periodic system are central to chemistry and thus to many introductory chemistry courses. A number of existing activities use various data sets to model the development process for the periodic table. This paper describes an image arrangement computer program developed to mimic a paper-based card sorting periodic table…

  13. Teacher Training Takes to the Road. Mobile Van, Computers Add Convenience and Quality to Continuing Education.

    ERIC Educational Resources Information Center

    Lehmann, Phyllis E.

    1971-01-01

    This article describes the development and use of a new delivery system for education services based on the concepts of mobility and individualized instruction. The system consists of a mobile van equipped with a central IBM computer and 15 student terminals. Traveling through rural Pennsylvania, it offers local teachers a course in special…

  14. Computer model of Raritan River Basin water-supply system in central New Jersey

    USGS Publications Warehouse

    Dunne, Paul; Tasker, Gary D.

    1996-01-01

    This report describes a computer model of the Raritan River Basin water-supply system in central New Jersey. The computer model provides a technical basis for evaluating the effects of alternative patterns of operation of the Raritan River Basin water-supply system during extended periods of below-average precipitation. The computer model is a continuity-accounting model consisting of a series of interconnected nodes. At each node, the inflow volume, outflow volume, and change in storage are determined and recorded for each month. The model runs with a given set of operating rules and water-use requirements including releases, pumpages, and diversions. The model can be used to assess the hypothetical performance of the Raritan River Basin water-supply system in past years under alternative sets of operating rules. It also can be used to forecast the likelihood of specified outcomes, such as the depletion of reservoir contents below a specified threshold or of streamflows below statutory minimum passing flows, for a period of up to 12 months. The model was constructed on the basis of current reservoir capacities and the natural, unregulated monthly runoff values recorded at U.S. Geological Survey streamflow-gaging stations in the basin.
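    The monthly continuity accounting the abstract describes (inflow volume, outflow volume, and change in storage at each node) can be sketched as a simple mass-balance update. The capacity and volumes below are hypothetical, not values from the Raritan model:

    ```python
    def update_node(storage, inflow, outflow, capacity):
        """One month of continuity accounting at a single node:
        new storage = old storage + inflow - outflow, bounded below by
        zero and above by reservoir capacity; water above capacity
        spills to the downstream node."""
        storage += inflow - outflow
        spill = max(0.0, storage - capacity)          # excess spills downstream
        storage = min(max(storage, 0.0), capacity)    # keep storage in bounds
        return storage, spill

    # Hypothetical reservoir node: 10,000 acre-ft capacity, one month
    storage, spill = update_node(storage=9500.0, inflow=1200.0,
                                 outflow=400.0, capacity=10000.0)
    print(storage, spill)  # 10000.0 300.0
    ```

    Chaining such updates across interconnected nodes, with operating rules deciding the releases and diversions each month, reproduces the continuity-accounting structure the report describes.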

  15. MEDLARS and the Library Community

    PubMed Central

    Adams, Scott

    1964-01-01

    The intention of the National Library of Medicine is to share with other libraries the products and the capabilities developed by the MEDLARS system. MEDLARS will provide bibliographic services of use to other libraries from the central system. The decentralization of the central system to permit libraries with access to computers to establish local machine retrieval systems is also indicated. The implications of such decentralization for the American medical library network and its effect on library evolution are suggested, as are the implications for international development of mechanized storage and retrieval systems. PMID:14119289

  16. Winner-take-all in a phase oscillator system with adaptation.

    PubMed

    Burylko, Oleksandr; Kazanovich, Yakov; Borisyuk, Roman

    2018-01-11

    We consider a system of generalized phase oscillators with a central element and radial connections. In contrast to conventional phase oscillators of the Kuramoto type, the dynamic variables in our system include not only the phase of each oscillator but also the natural frequency of the central oscillator, and the connection strengths from the peripheral oscillators to the central oscillator. With appropriate parameter values the system demonstrates winner-take-all behavior in terms of the competition between peripheral oscillators for the synchronization with the central oscillator. Conditions for the winner-take-all regime are derived for stationary and non-stationary types of system dynamics. Bifurcation analysis of the transition from stationary to non-stationary winner-take-all dynamics is presented. A new bifurcation type called a Saddle Node on Invariant Torus (SNIT) bifurcation was observed and is described in detail. Computer simulations of the system allow an optimal choice of parameters for winner-take-all implementation.
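    A toy numerical sketch of the winner-take-all competition can be written with Kuramoto-style radial coupling and slow adaptation of the peripheral-to-central connection strengths. These equations are illustrative assumptions, not the paper's generalized model (in particular, the central oscillator's natural frequency is held fixed here rather than being a dynamic variable):

    ```python
    import math

    def simulate(omegas, omega0=1.0, a=1.0, eps=0.5, dt=0.001, steps=100000):
        """Toy winner-take-all dynamics: peripheral phases theta[i] couple
        radially to a central phase theta0; each connection strength w[i]
        slowly relaxes toward the phase coherence cos(theta[i] - theta0),
        so the oscillator that phase-locks with the centre ends up with
        the dominant weight while the drifting oscillators' weights decay."""
        n = len(omegas)
        theta = [0.1 * i for i in range(n)]   # peripheral phases
        theta0 = 0.0                          # central phase
        w = [0.5] * n                         # peripheral-to-central weights
        for _ in range(steps):
            dtheta0 = omega0 + sum(w[i] * math.sin(theta[i] - theta0)
                                   for i in range(n))
            for i in range(n):
                theta[i] += dt * (omegas[i] + a * math.sin(theta0 - theta[i]))
                # slow adaptation: reward phase alignment with the centre
                w[i] += dt * eps * (math.cos(theta[i] - theta0) - w[i])
            theta0 += dt * dtheta0
        return w

    # Only the first peripheral frequency (1.05) lies within locking range
    # of the central natural frequency (1.0); it wins the competition.
    weights = simulate([1.05, 2.5, 3.5])
    ```

    In this sketch the locked oscillator's weight grows toward the cosine of its fixed phase lag (close to 1), while the unlocked oscillators' phase differences drift and their weights average out near zero, giving winner-take-all behavior of the kind the abstract describes.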

  17. A computer system for analysis and transmission of spirometry waveforms using volume sampling.

    PubMed

    Ostler, D V; Gardner, R M; Crapo, R O

    1984-06-01

    A microprocessor-controlled data gathering system for telemetry and analysis of spirometry waveforms was implemented using a completely digital design. Spirometry waveforms were obtained from an optical shaft encoder attached to a rolling seal spirometer. Time intervals between 10-ml volume changes (volume sampling) were stored. The digital design eliminated problems of analog signal sampling. The system measured flows up to 12 liters/sec with 5% accuracy and volumes up to 10 liters with 1% accuracy. Transmission of 10 waveforms took about 3 min. Error detection assured that no data were lost or distorted during transmission. A pulmonary physician at the central hospital reviewed the volume-time and flow-volume waveforms and interpretations generated by the central computer before forwarding the results and consulting with the rural physician. This system is suitable for use in a major hospital, rural hospital, or small clinic because of the system's simplicity and small size.
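    The volume-sampling scheme in the abstract stores the time interval between successive 10-ml volume changes; volume-time and flow-volume data can then be rebuilt by accumulating the fixed volume steps and dividing by the stored intervals. A minimal sketch with made-up interval data:

    ```python
    def reconstruct(intervals_s, dv_l=0.01):
        """Rebuild volume-time and flow samples from volume sampling:
        each stored value is the time between successive 10-ml (0.01 L)
        volume increments, so the flow over that interval is dv / dt."""
        t = v = 0.0
        times, volumes, flows = [], [], []
        for dt in intervals_s:
            t += dt
            v += dv_l
            times.append(t)
            volumes.append(v)
            flows.append(dv_l / dt)   # litres per second over this interval
        return times, volumes, flows

    # Hypothetical stored intervals for a decelerating exhalation
    times, volumes, flows = reconstruct([0.002, 0.004, 0.010])
    # flows is approximately [5.0, 2.5, 1.0] L/s
    ```

    Because the sample spacing is fixed in volume rather than in time, fast flows simply produce short intervals, which is why the digital design avoids the usual analog time-sampling problems mentioned in the abstract.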

  18. A Graphics Editor for Structured Analysis with a Data Dictionary.

    DTIC Science & Technology

    1987-12-01

    Human/computer interface considerations: screen layout, menu system, voice feedback...central computer system. This project is a direct follow-on to the 1986 thesis by James W. Urscheler. He created an initial version of a tool (nicknamed...graphics information. Background on SADT: SADT is the name of SofTech's methodology for doing requirements analysis and system design. It was first published

  19. COMPUTER PROGRAM FOR CALCULATING THE COST OF DRINKING WATER TREATMENT SYSTEMS

    EPA Science Inventory

    This FORTRAN computer program calculates the construction and operation/maintenance costs for 45 centralized unit treatment processes for water supply. The calculated costs are based on various design parameters and raw water quality. These cost data are applicable to small size ...

  20. SNS programming environment user's guide

    NASA Technical Reports Server (NTRS)

    Tennille, Geoffrey M.; Howser, Lona M.; Humes, D. Creig; Cronin, Catherine K.; Bowen, John T.; Drozdowski, Joseph M.; Utley, Judith A.; Flynn, Theresa M.; Austin, Brenda A.

    1992-01-01

    The computing environment is briefly described for the Supercomputing Network Subsystem (SNS) of the Central Scientific Computing Complex of NASA Langley. The major SNS computers are a CRAY-2, a CRAY Y-MP, a CONVEX C-210, and a CONVEX C-220. The software is described that is common to all of these computers, including: the UNIX operating system, computer graphics, networking utilities, mass storage, and mathematical libraries. Also described is file management, validation, SNS configuration, documentation, and customer services.

  1. The snow system: A decentralized medical data processing system.

    PubMed

    Bellika, Johan Gustav; Henriksen, Torje Starbo; Yigzaw, Kassaye Yitbarek

    2015-01-01

    Systems for large-scale reuse of electronic health record data are claimed to have the potential to transform the current health care delivery system. In principle, three alternative solutions for reuse exist: centralized solutions, data warehouses, and decentralized solutions. This chapter focuses on the decentralized alternative. Decentralized systems may be categorized into approaches that move data to enable computations and approaches that move computations to where the data is located. We describe a system that moves computations to where the data is located. Only this kind of decentralized solution has the capability to become an ideal system for reuse, as it enables computation and reuse of electronic health record data without moving the information or exposing it to outsiders. This chapter describes the Snow system, a decentralized medical data processing system, its components, and how it has been used. It also describes the requirements this kind of system needs to support to become sustainable and successful in recruiting voluntary participation from health institutions.
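    The "move computations to the data" idea can be sketched as each institution computing aggregate statistics locally and sharing only those aggregates, so raw records never leave the site. The field name and values below are hypothetical, not drawn from the Snow system:

    ```python
    def local_stats(records):
        """Runs at each institution: only aggregate statistics are
        computed locally; the raw records never leave the site."""
        values = [r["value"] for r in records]
        return {"n": len(values), "sum": sum(values)}

    def combine(site_results):
        """Combines the sites' aggregates into a global mean without
        any party ever seeing an individual record."""
        n = sum(s["n"] for s in site_results)
        total = sum(s["sum"] for s in site_results)
        return total / n

    # Hypothetical per-site EHR extracts (the field name "value" is made up)
    site_a = [{"value": 13.0}, {"value": 14.0}]
    site_b = [{"value": 15.0}]
    mean = combine([local_stats(site_a), local_stats(site_b)])
    print(mean)  # 14.0
    ```

    This is the essential contrast with the centralized and data-warehouse alternatives: the computation travels to the data, and only the aggregate result travels back.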

  2. 77 FR 60401 - Privacy Act of 1974; Systems of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-03

    ... computer password protection.'' * * * * * System manager(s) and address: Delete entry and replace with...; Systems of Records AGENCY: National Security Agency/Central Security Service, DoD. ACTION: Notice to amend a system of records. SUMMARY: The National Security Agency (NSA) is proposing to amend a system of...

  3. Laboratory and software applications for clinical trials: the global laboratory environment.

    PubMed

    Briscoe, Chad

    2011-11-01

    The Applied Pharmaceutical Software Meeting is held annually. It is sponsored by The Boston Society, a not-for-profit organization that coordinates a series of meetings within the global pharmaceutical industry. The meeting generally focuses on laboratory applications, but in recent years has expanded to include some software applications for clinical trials. The 2011 meeting emphasized the global laboratory environment. Global clinical trials generate massive amounts of data in many locations that must be centralized and processed for efficient analysis. Thus, the meeting had a strong focus on establishing networks and systems for dealing with the computer infrastructure to support such environments. In addition to the globally installed laboratory information management system, electronic laboratory notebook and other traditional laboratory applications, cloud computing is quickly becoming the answer to provide efficient, inexpensive options for managing the large volumes of data and computing power, and thus it served as a central theme for the meeting.

  4. Development of a forestry government agency enterprise GIS system: a disconnected editing approach

    NASA Astrophysics Data System (ADS)

    Zhu, Jin; Barber, Brad L.

    2008-10-01

    The Texas Forest Service (TFS) has developed a geographic information system (GIS) for use by agency personnel in central Texas for managing oak wilt suppression and other landowner assistance programs. This enterprise GIS was designed to support multiple concurrent users accessing shared information resources. A disconnected editing approach was adopted to avoid the overhead of maintaining an active connection between TFS central Texas field offices and headquarters, since most field offices operate with commercially provided Internet service. The system entails maintaining a personal geodatabase on each local field office computer. Spatial data from the field is periodically uploaded into a central master geodatabase stored in a Microsoft SQL Server at the TFS headquarters in College Station through the ESRI Spatial Database Engine (SDE). This GIS allows users to work off-line when editing data and requires connecting to the central geodatabase only when needed.

  5. Proceedings of the Ship Control Systems Symposium (9th) Held in Bethesda, Maryland on 10-14 September 1990. Theme: Automation in Surface Ship Control Systems, Today’s Applications and Future Trends. Volume 1

    DTIC Science & Technology

    1990-09-14

    transmission of detected variations through sound lines of communication to centrally located standard Navy computers. These computers would be programmed to...have been programmed in C language. The program runs under the operating system OS-9 on a VME-bus computer with a 68000 microprocessor. A number of full...present practice of "add-on" supervisory controls during ship design and construction, and "fix-it" R&D programs implemented after the ship is operational

  6. 17. VIEW OF HYDRIDING SYSTEM IN BUILDING 881. THE HYDRIDING ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    17. VIEW OF HYDRIDING SYSTEM IN BUILDING 881. THE HYDRIDING SYSTEM WAS PART OF THE FAST ENRICHED URANIUM RECOVERY PROCESS. (11/11/59) - Rocky Flats Plant, General Manufacturing, Support, Records-Central Computing, Southern portion of Plant, Golden, Jefferson County, CO

  7. Centralized Accounting and Electronic Filing Provides Efficient Receivables Collection.

    ERIC Educational Resources Information Center

    School Business Affairs, 1983

    1983-01-01

    An electronic filing system makes financial control manageable at Bowling Green State University, Ohio. The system enables quick access to computer-stored consolidated account data and microfilm images of charges, statements, and other billing documents. (MLF)

  8. Forest vegetation simulation tools and forest health assessment

    Treesearch

    Richard M. Teck; Melody Steele

    1995-01-01

    A Stand Hazard Rating System for central Idaho forests has been incorporated into the Central Idaho Prognosis variant of the Forest Vegetation Simulator to evaluate how insect, disease, and fire hazards within the Deadwood River Drainage change over time. A custom interface, BOISE.COMPUTE.PR, has been developed so hazard ratings can be electronically downloaded...

  9. Electronic Mail Is One High-Tech Management Tool that Really Delivers.

    ERIC Educational Resources Information Center

    Parker, Donald C.

    1987-01-01

    Describes an electronic mail system used by the Horseheads (New York) Central School District's eight schools and central office that saves time and enhances productivity. This software calls up information from the district's computer network and sends it to other users' special files--electronic "mailboxes" set aside for messages and…

  10. Saccadic eye movements analysis as a measure of drug effect on central nervous system function.

    PubMed

    Tedeschi, G; Quattrone, A; Bonavita, V

    1986-04-01

    Peak velocity (PSV) and duration (SD) of horizontal saccadic eye movements are demonstrably under the control of specific brain stem structures. Experimental and clinical evidence suggest the existence of an immediate premotor system for saccade generation located in the paramedian pontine reticular formation (PPRF). Effects on saccadic eye movements have been studied in normal volunteers with barbiturates, benzodiazepines, amphetamine and ethanol. On two occasions computer analysis of PSV, SD, saccade reaction time (SRT) and saccade accuracy (SA) was carried out in comparison with more traditional methods of assessment of human psychomotor performance like choice reaction time (CRT) and critical flicker fusion threshold (CFFT). The computer system proved to be a highly sensitive and objective method for measuring drug effect on central nervous system (CNS) function. It allows almost continuous sampling of data and appears to be particularly suitable for studying rapidly changing drug effects on the CNS.
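
    The two core measures, PSV and SD, reduce to simple operations on a sampled eye-position trace. A minimal sketch, assuming a fixed sample interval; the 30 deg/s velocity threshold for marking the saccade is a conventional placeholder, not a value taken from the study.

```python
def saccade_metrics(position_deg, dt, v_threshold=30.0):
    """Compute peak saccadic velocity (PSV, deg/s) and saccade duration
    (SD, s) from a horizontal eye-position trace sampled every dt seconds.
    Velocity is a first-order finite difference; SD is the time the
    absolute velocity stays above the threshold."""
    velocity = [(position_deg[i + 1] - position_deg[i]) / dt
                for i in range(len(position_deg) - 1)]
    psv = max(abs(v) for v in velocity)
    sd = sum(1 for v in velocity if abs(v) > v_threshold) * dt
    return psv, sd
```

    In practice an automated system like the one described would also extract saccade reaction time and accuracy, which require the stimulus-onset channel and target positions in addition to the position trace.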

  11. Geometric and topological characterization of porous media: insights from eigenvector centrality

    NASA Astrophysics Data System (ADS)

    Jimenez-Martinez, J.; Negre, C.

    2017-12-01

    Solving flow and transport through complex geometries such as porous media involves extreme computational cost. Simplifications such as pore networks, where the pores are represented by nodes and the pore throats by edges connecting pores, have been proposed. These models have the ability to preserve the connectivity of the medium. However, they have difficulty capturing preferential paths (high velocity) and stagnation zones (low velocity), as they do not consider the specific relations between nodes. Network theory approaches, where the complex network is conceptualized as a graph, can help to simplify and better understand fluid dynamics and transport in porous media. To address this issue, we propose a method based on eigenvector centrality. It has been corrected to overcome the centralization problem and modified to introduce a bias in the centrality distribution along a particular direction, which allows considering the flow and transport anisotropy in porous media. The model predictions are compared with millifluidic transport experiments, showing that this technique is computationally efficient and has potential for predicting preferential paths and stagnation zones for flow and transport in porous media. Entropy computed from the eigenvector centrality probability distribution is proposed as an indicator of the "mixing capacity" of the system.
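
    Plain eigenvector centrality on a toy pore network can be computed by power iteration, and the entropy of the resulting distribution evaluated as in the abstract's "mixing capacity" indicator. This sketch omits the paper's directional bias and centralization correction; the 4-pore adjacency matrix is an invented example.

```python
import numpy as np

def eigenvector_centrality(A, iters=200, tol=1e-12):
    """Power iteration for the leading (Perron) eigenvector of the
    adjacency matrix A; entries rank each pore by how well-connected
    its neighbourhood is, normalised to sum to 1."""
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x_new = A @ x
        x_new /= np.linalg.norm(x_new)
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    return x / x.sum()

# A 4-pore toy network: pore 1 is a hub connected to all others.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
c = eigenvector_centrality(A)
H = -np.sum(c * np.log(c))  # entropy of the centrality distribution
```

    The hub pore receives the largest centrality, i.e. it would sit on the network's best-connected (candidate preferential) path; a more uniform distribution gives higher entropy, the proposed mixing indicator.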

  12. Inertial subsystem functional and design requirements for the orbiter (Phase B extension baseline)

    NASA Technical Reports Server (NTRS)

    Flanders, J. H.; Green, J. P., Jr.

    1972-01-01

    The design requirements use the Phase B extension baseline system definition. This means that a GNC computer is specified for all command control functions instead of a central computer communicating with the ISS through a databus. Forced air cooling is used instead of cold plate cooling.

  13. Iodine Coulometry of Various Reducing Agents Including Thiols with Online Photocell Detection Coupled to a Multifunctional Chemical Analysis Station to Eliminate Student End Point Detection by Eye

    ERIC Educational Resources Information Center

    Padilla Mercado, Jeralyne B.; Coombs, Eri M.; De Jesus, Jenny P.; Bretz, Stacey Lowery; Danielson, Neil D.

    2018-01-01

    Multifunctional chemical analysis (MCA) systems provide a viable alternative for large scale instruction while supporting a hands-on approach to more advanced instrumentation. These systems are robust and typically use student stations connected to a remote central computer for data collection, minimizing the need for computers at every student…

  14. Intravascular lymphoma involving the central and peripheral nervous systems in a dog.

    PubMed

    Bush, William W; Throop, Juliene L; McManus, Patricia M; Kapatkin, Amy S; Vite, Charles H; Van Winkle, Tom J

    2003-01-01

    A 5-year-old, castrated male mixed-breed dog was presented for paraparesis, ataxia, hyperesthesia, and thrombocytopenia of 5 months' duration and recurrent seizures during the preceding 2 weeks. Multifocal neurological, ophthalmological, pulmonary, and cardiac diseases were identified. Magnetic resonance imaging and cerebrospinal fluid analysis supported a tentative diagnosis of neoplastic or inflammatory disease. A computed tomography-guided biopsy provided both cytopathological and histopathological evidence of intravascular lymphoma. The disease progressed despite chemotherapy with prednisone, L-asparaginase, and vincristine. Postmortem histopathological examinations suggested intravascular lymphoma in the central and peripheral nervous systems as well as in multiple other organ systems. This is the first description of an antemortem diagnosis and treatment of intravascular lymphoma involving the central nervous system of a dog.

  15. A user's manual for DELSOL3: A computer code for calculating the optical performance and optimal system design for solar thermal central receiver plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kistler, B.L.

    DELSOL3 is a revised and updated version of the DELSOL2 computer program (SAND81-8237) for calculating collector field performance and layout and optimal system design for solar thermal central receiver plants. The code consists of a detailed model of the optical performance, a simpler model of the non-optical performance, an algorithm for field layout, and a searching algorithm to find the best system design based on energy cost. The latter two features are coupled to a cost model of central receiver components and an economic model for calculating energy costs. The code can handle flat, focused and/or canted heliostats, and external cylindrical, multi-aperture cavity, and flat plate receivers. The program optimizes the tower height, receiver size, field layout, heliostat spacings, and tower position at user specified power levels subject to flux limits on the receiver and land constraints for field layout. DELSOL3 maintains the advantages of speed and accuracy which are characteristics of DELSOL2.

  16. Shuttle Program Information Management System (SPIMS) data base

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The Shuttle Program Information Management System (SPIMS) is a computerized data base operations system. The central computer is the CDC 170-730 located at Johnson Space Center (JSC), Houston, Texas. There are several applications which have been developed and supported by SPIMS. A brief description is given.

  17. A Patient Record-Filing System for Family Practice

    PubMed Central

    Levitt, Cheryl

    1988-01-01

    The efficient storage and easy retrieval of quality records are a central concern of good family practice. Many physicians starting out in practice have difficulty choosing a practical and lasting system for storing their records. Some who have established practices are installing computers in their offices and finding that their filing systems are worn, outdated, and incompatible with computerized systems. This article describes a new filing system installed simultaneously with a new computer system in a family-practice teaching centre. The approach adopted solved all identifiable problems and is applicable in family practices of all sizes.

  18. Closely Spaced Independent Parallel Runway Simulation.

    DTIC Science & Technology

    1984-10-01

    facility consists of the Central Computer Facility, the Controller Laboratory, and the Simulator Pilot Complex. CENTRAL COMPUTER FACILITY. The Central... Computer Facility consists of a group of mainframes, minicomputers, and associated peripherals which host the operational and data acquisition...in the Controller Laboratory and convert their verbal directives into a keyboard entry which is transmitted to the Central Computer Complex, where

  19. Micromagnetics on high-performance workstation and mobile computational platforms

    NASA Astrophysics Data System (ADS)

    Fu, S.; Chang, R.; Couture, S.; Menarini, M.; Escobar, M. A.; Kuteifan, M.; Lubarda, M.; Gabay, D.; Lomakin, V.

    2015-05-01

    The feasibility of using high-performance desktop and embedded mobile computational platforms for micromagnetic simulations is presented, including multi-core Intel central processing units, Nvidia desktop graphics processing units, and the Nvidia Jetson TK1 platform. The FastMag finite element method-based micromagnetic simulator is used as a testbed, showing high efficiency on all the platforms. Optimization aspects of improving the performance of the mobile systems are discussed. The high performance, low cost, low power consumption, and rapid performance increase of the embedded mobile systems make them a promising candidate for micromagnetic simulations. Such architectures can be used as standalone systems or can be built into low-power computing clusters.

  20. Scaling Up and Zooming In: Big Data and Personalization in Language Learning

    ERIC Educational Resources Information Center

    Godwin-Jones, Robert

    2017-01-01

    From its earliest days, practitioners of computer-assisted language learning (CALL) have collected data from computer-mediated learning environments. Indeed, that has been a central aspect of the field from the beginning. Usage logs provided valuable insights into how systems were used and how effective they were for language learning. That…

  1. A Low Cost Microcomputer Laboratory for Investigating Computer Architecture.

    ERIC Educational Resources Information Center

    Mitchell, Eugene E., Ed.

    1980-01-01

    Described is a microcomputer laboratory at the United States Military Academy at West Point, New York, which provides easy access to non-volatile memory and a single input/output file system for 16 microcomputer laboratory positions. A microcomputer network that has a centralized data base is implemented using the concepts of computer network…

  2. LSU Slashes Energy Use

    ERIC Educational Resources Information Center

    Collier, Herbert I.

    1978-01-01

    Energy conservation programs at Louisiana State University reduced energy use 23 percent. The programs involved computer controlled power management systems, adjustment of building temperatures and lighting levels to prescribed standards, consolidation of night classes, centralization of chilled water systems, and manual monitoring of heating and…

  3. ANL statement of site strategy for computing workstations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fenske, K.R.; Boxberger, L.M.; Amiot, L.W.

    1991-11-01

    This Statement of Site Strategy describes the procedure at Argonne National Laboratory for defining, acquiring, using, and evaluating scientific and office workstations and related equipment and software in accord with DOE Order 1360.1A (5-30-85), and Laboratory policy. It is Laboratory policy to promote the installation and use of computing workstations to improve productivity and communications for both programmatic and support personnel, to ensure that computing workstations acquisitions meet the expressed need in a cost-effective manner, and to ensure that acquisitions of computing workstations are in accord with Laboratory and DOE policies. The overall computing site strategy at ANL is to develop a hierarchy of integrated computing system resources to address the current and future computing needs of the laboratory. The major system components of this hierarchical strategy are: Supercomputers, Parallel computers, Centralized general purpose computers, Distributed multipurpose minicomputers, and Computing workstations and office automation support systems. Computing workstations include personal computers, scientific and engineering workstations, computer terminals, microcomputers, word processing and office automation electronic workstations, and associated software and peripheral devices costing less than $25,000 per item.

  4. Increasing complexity with quantum physics.

    PubMed

    Anders, Janet; Wiesner, Karoline

    2011-09-01

    We argue that complex systems science and the rules of quantum physics are intricately related. We discuss a range of quantum phenomena, such as cryptography, computation and quantum phases, and the rules responsible for their complexity. We identify correlations as a central concept connecting quantum information and complex systems science. We present two examples for the power of correlations: using quantum resources to simulate the correlations of a stochastic process and to implement a classically impossible computational task.

  5. Portable Map-Reduce Utility for MIT SuperCloud Environment

    DTIC Science & Technology

    2015-09-17

    Reuther, A. Rosa, C. Yee, “Driving Big Data With Big Compute,” IEEE HPEC, Sep 10-12, 2012, Waltham, MA. [6] Apache Hadoop 1.2.1 Documentation: HDFS... big data architecture, which is designed to address these challenges, is made of the computing resources, scheduler, central storage file system...databases, analytics software and web interfaces [1]. These components are common to many big data and supercomputing systems. The platform is

  6. Person-Locator System Based On Wristband Radio Transponders

    NASA Technical Reports Server (NTRS)

    Mintz, Frederick W.; Blaes, Brent R.; Chandler, Charles W.

    1995-01-01

    Computerized system based on wristband radio frequency (RF), passive transponders is being developed for use in real-time tracking of individuals in custodial institutions like prisons and mental hospitals. Includes monitoring system that contains central computer connected to low-power, high-frequency central transceiver. Transceiver connected to miniature transceiver nodes mounted unobtrusively at known locations throughout the institution. Wristband transponders embedded in common hospital wristbands. Wristbands tamperproof: each contains embedded wire loop which, when broken or torn off and discarded, causes wristband to disappear from system, thus causing alarm. Individuals could be located in a timely fashion at relatively low cost.
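
    The central computer's alarm logic reduces to a timeout scan over last-seen reports gathered from the transceiver nodes. A minimal sketch; the report format and the timeout value are assumptions, not details of the JPL design.

```python
def check_presence(last_seen, now, timeout):
    """Return the wristband IDs not reported by any transceiver node
    within `timeout` seconds of `now`.  A band that disappears from the
    system -- whether torn off (breaking its wire loop) or carried out
    of range -- shows up here and raises an alarm."""
    return {wid for wid, t in last_seen.items() if now - t > timeout}
```

    Because a torn band stops responding entirely, the same missing-report test covers both tampering and a genuinely absent person, which is the behavior the abstract describes.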

  7. On Roles of Models in Information Systems

    NASA Astrophysics Data System (ADS)

    Sølvberg, Arne

    The increasing penetration of computers into all aspects of human activity makes it desirable that the interplay among software, data and the domains where computers are applied is made more transparent. An approach to this end is to explicitly relate the modeling concepts of the domains, e.g., natural science, technology and business, to the modeling concepts of software and data. This may make it simpler to build comprehensible integrated models of the interactions between computers and non-computers, e.g., interaction among computers, people, physical processes, biological processes, and administrative processes. This chapter contains an analysis of various facets of the modeling environment for information systems engineering. The lack of satisfactory conceptual modeling tools seems to be central to the unsatisfactory state-of-the-art in establishing information systems. The chapter contains a proposal for defining a concept of information that is relevant to information systems engineering.

  8. The PLATO IV Communications System.

    ERIC Educational Resources Information Center

    Sherwood, Bruce Arne; Stifle, Jack

    The PLATO IV computer-based educational system contains its own communications hardware and software for operating plasma-panel graphics terminals. Key echoing is performed by the central processing unit: every key pressed at a terminal passes through the entire system before anything appears on the terminal's screen. Each terminal is guaranteed…

  9. System Description and Status Report: California Education Information System.

    ERIC Educational Resources Information Center

    California State Dept. of Education, Sacramento.

    The California Education Information System (CEIS) consists of two subsystems of computer programs designed to process business and pupil data for local school districts. Creating and maintaining records concerning the students in the schools, the pupil subsystem provides for a central repository of school district identification information and a…

  10. Advanced manned space flight simulation and training: An investigation of simulation host computer system concepts

    NASA Technical Reports Server (NTRS)

    Montag, Bruce C.; Bishop, Alfred M.; Redfield, Joe B.

    1989-01-01

    The findings of a preliminary investigation by Southwest Research Institute (SwRI) into simulation host computer concepts are presented. The investigation is designed to aid NASA in evaluating simulation technologies for use in spaceflight training. The focus of the investigation is on the next generation of space simulation systems that will be utilized in training personnel for Space Station Freedom operations. SwRI concludes that NASA should pursue a distributed simulation host computer system architecture for the Space Station Training Facility (SSTF) rather than a centralized mainframe-based arrangement. A distributed system offers many advantages and is seen by SwRI as the only architecture that will allow NASA to achieve established functional goals and operational objectives over the life of the Space Station Freedom program. Several distributed, parallel computing systems are available today that offer real-time capabilities for time critical, man-in-the-loop simulation. These systems are flexible in terms of connectivity and configurability, and are easily scaled to meet increasing demands for more computing power.

  11. Weighted link graphs: a distributed IDS for secondary intrusion detection and defense

    NASA Astrophysics Data System (ADS)

    Zhou, Mian; Lang, Sheau-Dong

    2005-03-01

    While a firewall installed at the perimeter of a local network provides the first line of defense against hackers, many intrusion incidents are the results of successful penetration of the firewalls. One computer's compromise often puts the entire network at risk. In this paper, we propose an IDS that provides finer control over the internal network. The system focuses on the variations of connection-based behavior of each single computer, and uses a weighted link graph to visualize the overall traffic abnormalities. Our system functions as a distributed personal IDS that also provides centralized traffic analysis through graphical visualization. We use a novel weight assignment scheme for the local detection within each end agent. The local abnormalities are quantified by the node weight and link weight and further sent to the central analyzer to build the weighted link graph. Thus, we distribute the burden of traffic processing and visualization to each agent and make the overall intrusion detection more efficient. As LANs are more vulnerable to inside attacks, our system is designed as a reinforcement to prevent corruption from the inside.
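
    The agent-to-analyzer pipeline can be sketched as follows. The weight values and thresholds here are stand-ins, not the paper's actual assignment scheme; the point is the division of labor: each agent reports a node weight and per-peer link weights, and the central analyzer merges them into one weighted link graph.

```python
def build_weighted_graph(agent_reports):
    """Central analyzer: merge per-agent reports into one weighted link
    graph.  `agent_reports` maps host -> (node_weight, {peer: link_weight}).
    The node weight summarises a host's own connection-behavior anomaly;
    link weights score the traffic it observed toward each peer."""
    nodes, links = {}, {}
    for host, (w, peers) in agent_reports.items():
        nodes[host] = w
        for peer, lw in peers.items():
            edge = tuple(sorted((host, peer)))  # undirected edge key
            links[edge] = links.get(edge, 0.0) + lw
    return nodes, links

def suspicious(nodes, links, node_thr, link_thr):
    """Flag hosts whose own weight, or whose total incident link weight,
    exceeds a threshold -- a numeric stand-in for the paper's visual
    inspection of the graph."""
    incident = {h: 0.0 for h in nodes}
    for (a, b), w in links.items():
        incident[a] += w
        incident[b] += w
    return {h for h, w in nodes.items()
            if w > node_thr or incident[h] > link_thr}
```

    Note that a host with a modest node weight can still be flagged through its links, which is how a graph view catches an internal host quietly hammering one peer.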

  12. Vectorization of transport and diffusion computations on the CDC Cyber 205

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abu-Shumays, I.K.

    1986-01-01

    The development and testing of alternative numerical methods and computational algorithms specifically designed for the vectorization of transport and diffusion computations on a Control Data Corporation (CDC) Cyber 205 vector computer are described. Two solution methods for the discrete ordinates approximation to the transport equation are summarized and compared. Factors of 4 to 7 reduction in run times for certain large transport problems were achieved on a Cyber 205 as compared with run times on a CDC-7600. The solution of tridiagonal systems of linear equations, central to several efficient numerical methods for multidimensional diffusion computations and essential for fluid flow and other physics and engineering problems, is also dealt with. Among the methods tested, a combined odd-even cyclic reduction and modified Cholesky factorization algorithm for solving linear symmetric positive definite tridiagonal systems is found to be the most effective for these systems on a Cyber 205. For large tridiagonal systems, computation with this algorithm is an order of magnitude faster on a Cyber 205 than computation with the best algorithm for tridiagonal systems on a CDC-7600.
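
    The odd-even cyclic reduction half of that algorithm can be sketched as a serial reference version (the modified Cholesky combination and the Cyber 205 vectorization are omitted). The reason it vectorizes well is visible in the structure: every update within one level is independent of the others.

```python
import numpy as np

def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system (sub-diagonal a, diagonal b,
    super-diagonal c, right-hand side d) by odd-even cyclic reduction.
    Requires n = 2**k - 1 unknowns; a[0] and c[-1] are unused."""
    a, b, c, d = (np.asarray(v, dtype=float).copy() for v in (a, b, c, d))
    n = len(b)
    k = int(np.log2(n + 1))
    assert n == 2**k - 1, "cyclic reduction needs n = 2**k - 1"
    a[0] = 0.0
    c[-1] = 0.0
    # Forward reduction: each level eliminates the odd-indexed unknowns,
    # leaving a tridiagonal system half the size on the even ones.
    for lvl in range(k - 1):
        off, stride = 2**lvl, 2**(lvl + 1)
        for j in range(stride - 1, n, stride):  # independent updates
            al = -a[j] / b[j - off]
            be = -c[j] / b[j + off] if j + off < n else 0.0
            b[j] += al * c[j - off] + (be * a[j + off] if j + off < n else 0.0)
            d[j] += al * d[j - off] + (be * d[j + off] if j + off < n else 0.0)
            a[j] = al * a[j - off]
            c[j] = be * c[j + off] if j + off < n else 0.0
    # Back substitution: middle unknown first, then fill in each level.
    x = np.zeros(n)
    x[n // 2] = d[n // 2] / b[n // 2]
    for lvl in range(k - 2, -1, -1):
        off, stride = 2**lvl, 2**(lvl + 1)
        for j in range(off - 1, n, stride):
            left = x[j - off] if j - off >= 0 else 0.0
            right = x[j + off] if j + off < n else 0.0
            x[j] = (d[j] - a[j] * left - c[j] * right) / b[j]
    return x
```

    On a vector machine each inner `for j` loop becomes a single strided vector operation, which is the property the Cyber 205 comparison in the abstract exploits.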

  13. SSCR Automated Manager (SAM) release 1. 1 reference manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1988-10-01

    This manual provides instructions for using the SSCR Automated Manager (SAM) to manage System Software Change Records (SSCRs) online. SSCRs are forms required to document all system software changes for the Martin Marietta Energy Systems, Inc., Central computer systems. SAM, a program developed at Energy Systems, is accessed through IDMS/R (Integrated Database Management System) on an IBM system.

  14. Connecting the virtual world of computers to the real world of medicinal chemistry.

    PubMed

    Glen, Robert C

    2011-03-01

    Drug discovery involves the simultaneous optimization of chemical and biological properties, usually in a single small molecule, which modulates one of nature's most complex systems: the balance between human health and disease. The increased use of computer-aided methods is having a significant impact on all aspects of the drug-discovery and development process and with improved methods and ever faster computers, computer-aided molecular design will be ever more central to the discovery process.

  15. SharP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venkata, Manjunath Gorentla; Aderholdt, William F

    Pre-exascale systems are expected to have a significant amount of hierarchical and heterogeneous on-node memory, and this trend in system architecture is expected to continue into the exascale era. Along with hierarchical-heterogeneous memory, such a system typically has a high-performing network and a compute accelerator. This system architecture is not only effective for running traditional High Performance Computing (HPC) applications (Big-Compute), but also for running data-intensive HPC applications and Big-Data applications. As a consequence, there is a growing desire to have a single system serve the needs of both Big-Compute and Big-Data applications. Though the system architecture supports the convergence of Big-Compute and Big-Data, the programming models and software layer have yet to evolve to support either hierarchical-heterogeneous memory systems or the convergence. We present a programming abstraction to address this problem. The programming abstraction is implemented as a software library and runs on pre-exascale and exascale systems supporting current and emerging system architectures. Using distributed data structures as a central concept, it provides (1) a simple, usable, and portable abstraction for hierarchical-heterogeneous memory and (2) a unified programming abstraction for Big-Compute and Big-Data applications.

  16. The DOE/NASA wind turbine data acquisition system. Part 3: Unattended power performance monitor

    NASA Technical Reports Server (NTRS)

    Halleyy, A.; Heidkamp, D.; Neustadter, H.; Olson, R.

    1983-01-01

    Software documentation, operational procedures, and diagnostic instructions for a development version of an unattended wind turbine performance monitoring system are provided. The system is designed for off-line intelligent data acquisition in conjunction with the central host computer.

  17. High order filtering methods for approximating hyperbolic systems of conservation laws

    NASA Technical Reports Server (NTRS)

    Lafon, F.; Osher, S.

    1990-01-01

    In the computation of discontinuous solutions of hyperbolic systems of conservation laws, the recently developed essentially non-oscillatory (ENO) schemes appear to be very useful. However, they are computationally costly compared to simple central difference methods. A filtering method which is developed uses simple central differencing of arbitrarily high order accuracy, except when a novel local test indicates the development of spurious oscillations. At these points, the full ENO apparatus is used, maintaining the high order of accuracy, but removing spurious oscillations. Numerical results indicate the success of the method. High order of accuracy was obtained in regions of smooth flow without spurious oscillations for a wide range of problems and a significant speed up of generally a factor of almost three over the full ENO method.
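
    The switching idea can be illustrated on a first-derivative approximation: use a high-order central stencil everywhere, and fall back to a low-order one where a local test trips. The detector below (a sign change in adjacent second differences) is a crude stand-in for the paper's test, and the first-order fallback stands in for the full ENO apparatus; both are assumptions for illustration.

```python
def filtered_derivative(u, dx):
    """Fourth-order central differencing of u with a crude local filter:
    where adjacent second differences change sign (possible spurious
    oscillation), switch to a first-order one-sided difference.
    Interior points only; boundary entries are left at zero."""
    n = len(u)
    dudx = [0.0] * n
    for i in range(2, n - 2):
        d2l = u[i] - 2 * u[i - 1] + u[i - 2]
        d2r = u[i + 2] - 2 * u[i + 1] + u[i]
        if d2l * d2r < 0:  # curvature flips sign: be cautious
            dudx[i] = (u[i] - u[i - 1]) / dx  # low-order fallback
        else:  # locally smooth: 4th-order central stencil
            dudx[i] = (-u[i + 2] + 8 * u[i + 1]
                       - 8 * u[i - 1] + u[i - 2]) / (12 * dx)
    return dudx
```

    In smooth regions the cheap central stencil does all the work, which is the source of the speedup over applying ENO at every point; the expensive machinery is invoked only at the flagged points.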

  18. Magnetic resonance imaging of the central nervous system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1988-02-26

    This report reviews the current applications of magnetic resonance imaging of the central nervous system. Since its introduction into the clinical environment in the early 1980's, this technology has had a major impact on the practice of neurology. It has proved to be superior to computed tomography for imaging many diseases of the brain and spine. In some instances it has clearly replaced computed tomography. It is likely that it will replace myelography for the assessment of cervicomedullary junction and spinal regions. The magnetic field strengths currently used appear to be entirely safe for clinical application in neurology except in patients with cardiac pacemakers or vascular metallic clips. Some shortcomings of magnetic resonance imaging include its expense, the time required for scanning, and poor visualization of cortical bone.

  19. A programming environment for distributed complex computing. An overview of the Framework for Interdisciplinary Design Optimization (FIDO) project. NASA Langley TOPS exhibit H120b

    NASA Technical Reports Server (NTRS)

    Townsend, James C.; Weston, Robert P.; Eidson, Thomas M.

    1993-01-01

    The Framework for Interdisciplinary Design Optimization (FIDO) is a general programming environment for automating the distribution of complex computing tasks over a networked system of heterogeneous computers. For example, instead of manually passing a complex design problem between its diverse specialty disciplines, the FIDO system provides for automatic interactions between the discipline tasks and facilitates their communications. The FIDO system networks all the computers involved into a distributed heterogeneous computing system, so they have access to centralized data and can work on their parts of the total computation simultaneously in parallel whenever possible. Thus, each computational task can be done by the most appropriate computer. Results can be viewed as they are produced and variables changed manually for steering the process. The software is modular in order to ease migration to new problems: different codes can be substituted for each of the current code modules with little or no effect on the others. The potential for commercial use of FIDO rests in the capability it provides for automatically coordinating diverse computations on a networked system of workstations and computers. For example, FIDO could provide the coordination required for the design of vehicles or electronics or for modeling complex systems.

  20. Data Acquisition Systems

    NASA Technical Reports Server (NTRS)

    1994-01-01

    In the mid-1980s, Kinetic Systems and Langley Research Center determined that high speed CAMAC (Computer Automated Measurement and Control) data acquisition systems could significantly improve Langley's ARTS (Advanced Real Time Simulation) system. The ARTS system supports flight simulation R&D, and the CAMAC equipment allowed 32 high performance simulators to be controlled by centrally located host computers. This technology broadened Kinetic Systems' capabilities and led to several commercial applications. One of them is General Atomics' fusion research program, where Kinetic Systems equipment allows tokamak data to be acquired four to 15 times more rapidly. Ford Motor Company uses the same technology to control and monitor transmission testing facilities.

  1. Space station data management system - A common GSE test interface for systems testing and verification

    NASA Technical Reports Server (NTRS)

    Martinez, Pedro A.; Dunn, Kevin W.

    1987-01-01

    This paper examines the fundamental problems and goals associated with test, verification, and flight-certification of man-rated distributed data systems. First, a summary of the characteristics of modern computer systems that affect the testing process is provided. Then, verification requirements are expressed in terms of an overall test philosophy for distributed computer systems. This test philosophy stems from previous experience that was gained with centralized systems (Apollo and the Space Shuttle), and deals directly with the new problems that verification of distributed systems may present. Finally, a description of potential hardware and software tools to help solve these problems is provided.

  2. Soft-error tolerance and energy consumption evaluation of embedded computer with magnetic random access memory in practical systems using computer simulations

    NASA Astrophysics Data System (ADS)

    Nebashi, Ryusuke; Sakimura, Noboru; Sugibayashi, Tadahiko

    2017-08-01

    We evaluated the soft-error tolerance and energy consumption of an embedded computer with magnetic random access memory (MRAM) using two computer simulators. One is a central processing unit (CPU) simulator of a typical embedded computer system. We simulated the radiation-induced single-event-upset (SEU) probability in a spin-transfer-torque MRAM cell and also the failure rate of a typical embedded computer due to its main memory SEU error. The other is a delay tolerant network (DTN) system simulator. It simulates the power dissipation of wireless sensor network nodes of the system using a revised CPU simulator and a network simulator. We demonstrated that the SEU effect on the embedded computer with 1 Gbit MRAM-based working memory is less than 1 failure in time (FIT). We also demonstrated that the energy consumption of the DTN sensor node with MRAM-based working memory can be reduced to 1/11. These results indicate that MRAM-based working memory enhances the disaster tolerance of embedded computers.
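
    The failure-rate arithmetic behind such a claim can be sketched directly: FIT (failures in time) counts expected failures per 10^9 device-hours. The per-bit upset rate below is purely illustrative, not a number from the paper.

```python
def fit_rate(upsets_per_bit_hour: float, bits: int) -> float:
    """FIT = expected failures per 10**9 device-hours for the whole array."""
    return upsets_per_bit_hour * bits * 1e9

# Illustrative assumption (not the paper's measured rate): a 1 Gbit working
# memory stays under 1 FIT if each bit upsets far more rarely than once
# per 1e18 hours.
assert fit_rate(1e-21, 2**30) < 1.0
```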

  3. The emergence of understanding in a computer model of concepts and analogy-making

    NASA Astrophysics Data System (ADS)

    Mitchell, Melanie; Hofstadter, Douglas R.

    1990-06-01

    This paper describes Copycat, a computer model of the mental mechanisms underlying the fluidity and adaptability of the human conceptual system in the context of analogy-making. Copycat creates analogies between idealized situations in a microworld that has been designed to capture and isolate many of the central issues of analogy-making. In Copycat, an understanding of the essence of a situation and the recognition of deep similarity between two superficially different situations emerge from the interaction of a large number of perceptual agents with an associative, overlapping, and context-sensitive network of concepts. Central features of the model are: a high degree of parallelism; competition and cooperation among a large number of small, locally acting agents that together create a global understanding of the situation at hand; and a computational temperature that measures the amount of perceptual organization as processing proceeds and that in turn controls the degree of randomness with which decisions are made in the system.
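
    The temperature mechanism can be illustrated with a small sketch (our own simplification, not Copycat's actual code): decisions are drawn from a score-weighted distribution whose sharpness the temperature controls, so a high temperature yields near-random choices and a low temperature near-deterministic ones.

```python
import math
import random

def choose(options, scores, temperature, rng=random):
    """Pick an option with probability proportional to exp(score / T):
    high T flattens the distribution (random choices), low T sharpens
    it (greedy choices). Scores are shifted by their max for stability."""
    top = max(scores)
    weights = [math.exp((s - top) / max(temperature, 1e-9)) for s in scores]
    r = rng.random() * sum(weights)
    for option, w in zip(options, weights):
        r -= w
        if r <= 0:
            return option
    return options[-1]
```

    At a temperature of 0.01 a call like `choose(["a", "b"], [0.0, 1.0], 0.01)` is effectively greedy; in Copycat, falling temperature plays exactly this role of locking in good perceptual structures as organization emerges.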

  4. Embedding global and collective in a torus network with message class map based tree path selection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Dong; Coteus, Paul W.; Eisley, Noel A.

    Embodiments of the invention provide a method, system and computer program product for embedding a global barrier and global interrupt network in a parallel computer system organized as a torus network. The computer system includes a multitude of nodes. In one embodiment, the method comprises taking inputs from a set of receivers of the nodes, dividing the inputs from the receivers into a plurality of classes, combining the inputs of each of the classes to obtain a result, and sending said result to a set of senders of the nodes. Embodiments of the invention provide a method, system and computer program product for embedding a collective network in a parallel computer system organized as a torus network. In one embodiment, the method comprises adding to a torus network a central collective logic to route messages among at least a group of nodes in a tree structure.

  5. Innovations: clinical computing: an audio computer-assisted self-interviewing system for research and screening in public mental health settings.

    PubMed

    Bertollo, David N; Alexander, Mary Jane; Shinn, Marybeth; Aybar, Jalila B

    2007-06-01

    This column describes Talker, nonproprietary software used to adapt screening instruments to audio computer-assisted self-interviewing (ACASI) systems for low-literacy and other populations. Talker supports ease of programming, multiple languages, on-site scoring, and the ability to update a central research database. Key features include highly readable text display, audio presentation of questions, audio prompting of answers, and optional touch-screen input. The scripting language for adapting instruments is briefly described, along with two studies in which respondents provided positive feedback on its use.

  6. Exploratory modeling of forest disturbance scenarios in central Oregon using computational experiments in GIS

    Treesearch

    Deana D. Pennington

    2007-01-01

    Exploratory modeling is an approach used when process and/or parameter uncertainties are such that modeling attempts at realistic prediction are not appropriate. Exploratory modeling makes use of computational experimentation to test how varying model scenarios drive model outcome. The goal of exploratory modeling is to better understand the system of interest through...

  7. ONR Europe Reports. Computer Science/Computer Engineering in Central Europe: A Report on Czechoslovakia, Hungary, and Poland

    DTIC Science & Technology

    1992-08-01

    Rychlik J.: Simulation of distributed control systems. Research report of Institute of Technology in 22 Pilsen no. 209-07-85, Jun. 1985 Kocur P... Kocur P.: Sensitivity analysis of reliability parameters. Proceedings of conf. FTSD, Brno, Jun. 1986, pp. 97-101 Smrha P., Kocur P., Racek S.: A

  8. Embedding global barrier and collective in torus network with each node combining input from receivers according to class map for output to senders

    DOEpatents

    Chen, Dong; Coteus, Paul W; Eisley, Noel A; Gara, Alan; Heidelberger, Philip; Senger, Robert M; Salapura, Valentina; Steinmacher-Burow, Burkhard; Sugawara, Yutaka; Takken, Todd E

    2013-08-27

    Embodiments of the invention provide a method, system and computer program product for embedding a global barrier and global interrupt network in a parallel computer system organized as a torus network. The computer system includes a multitude of nodes. In one embodiment, the method comprises taking inputs from a set of receivers of the nodes, dividing the inputs from the receivers into a plurality of classes, combining the inputs of each of the classes to obtain a result, and sending said result to a set of senders of the nodes. Embodiments of the invention provide a method, system and computer program product for embedding a collective network in a parallel computer system organized as a torus network. In one embodiment, the method comprises adding to a torus network a central collective logic to route messages among at least a group of nodes in a tree structure.
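
    A minimal sketch of the per-node combining step described here (our own simplification, not the patented logic): each receiver's input bit is grouped by a class map and reduced, with logical OR standing in for the barrier/interrupt combine, before the per-class results go out to the senders.

```python
def combine_by_class(inputs, class_map):
    """inputs: {receiver_id: bit}; class_map: {receiver_id: class_id}.
    Returns {class_id: OR of the bits of all receivers in that class}."""
    result = {}
    for receiver, bit in inputs.items():
        cls = class_map[receiver]
        result[cls] = result.get(cls, 0) | bit
    return result

# Hypothetical receiver IDs: x+ and x- belong to class 0, y+ to class 1.
out = combine_by_class({"x+": 1, "x-": 0, "y+": 0},
                       {"x+": 0, "x-": 0, "y+": 1})
assert out == {0: 1, 1: 0}
```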

  9. Program For Engineering Electrical Connections

    NASA Technical Reports Server (NTRS)

    Billitti, Joseph W.

    1990-01-01

    DFACS is an interactive, multiuser computer-aided-engineering software tool for system-level electrical integration and cabling engineering. Its purpose is to give the engineering community a centralized data base for entering and retrieving data on the functional definition of a system, the details of end-circuit pinouts in systems and subsystems, and wiring harnesses. The objective is to provide an instantaneous, single point of information interchange, thus avoiding the error-prone, time-consuming, and costly shuttling of data along multiple paths. DFACS is designed to operate on a DEC VAX minicomputer or microcomputer using Version 5.0/03 of INGRES.

  10. Acquisition of electroencephalographic data in a large regional hospital - Bringing the brain waves to the computer.

    NASA Technical Reports Server (NTRS)

    Low, M. D.; Baker, M.; Ferguson, R.; Frost, J. D., Jr.

    1972-01-01

    This paper describes a complete electroencephalographic acquisition and transmission system, designed to meet the needs of a large hospital with multiple critical care patient monitoring units. The system provides rapid and prolonged access to a centralized recording and computing area from remote locations within the hospital complex, and from locations in other hospitals and other cities. The system includes quick-on electrode caps, amplifier units and cable transmission for access from within the hospital, and EEG digitization and telephone transmission for access from other hospitals or cities.

  11. 7 CFR 274.3 - Retailer management.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... retailer, and it must include acceptable privacy and security features. Such systems shall only be... terminals that are capable of relaying electronic transactions to a central database computer for... specifications prior to implementation of the EBT system to enable third party processors to access the database...

  12. Advanced Software Techniques for Data Management Systems. Volume 2: Space Shuttle Flight Executive System: Functional Design

    NASA Technical Reports Server (NTRS)

    Pepe, J. T.

    1972-01-01

    A functional design of a software executive system for the space shuttle avionics computer is presented. Three primary functions of the executive are emphasized in the design: task management, I/O management, and configuration management. The executive system organization is based on the applications software and configuration requirements established during the Phase B definition of the Space Shuttle program. Although the primary features of the executive system architecture were derived from Phase B requirements, it was specified for implementation with the IBM 4 Pi EP aerospace computer and is expected to be incorporated into a breadboard data management computer system at NASA Manned Spacecraft Center's Information Systems Division. The executive system was structured for internal operation on the IBM 4 Pi EP system, with its external configuration and applications software assumed to be characteristic of the centralized quad-redundant avionics systems defined in Phase B.

  13. A perioperative echocardiographic reporting and recording system.

    PubMed

    Pybus, David A

    2004-11-01

    Advances in video capture, compression, and streaming technology, coupled with improvements in central processing unit design and the inclusion of a database engine in the Windows operating system, have simplified the task of implementing a digital echocardiographic recording system. I describe an application that uses these technologies and runs on a notebook computer.

  14. The building loads analysis system thermodynamics (BLAST) program, Version 2. 0: input booklet. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sowell, E.

    1979-06-01

    The Building Loads Analysis and System Thermodynamics (BLAST) program is a comprehensive set of subprograms for predicting energy consumption in buildings. There are three major subprograms: (1) the space load predicting subprogram, which computes hourly space loads in a building or zone based on user input and hourly weather data; (2) the air distribution system simulation subprogram, which uses the computed space load and user inputs describing the building air-handling system to calculate hot water or steam, chilled water, and electric energy demands; and (3) the central plant simulation program, which simulates boilers, chillers, onsite power generating equipment, and solar energy systems and computes monthly and annual fuel and electrical power consumption and plant life-cycle cost.
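
    The three subprograms form a pipeline whose data flow can be sketched as follows. All loads, efficiencies, and formulas here are toy assumptions for illustration, not BLAST's actual models:

```python
def space_loads(hourly_temps_c, setpoint_c=20.0):
    """Stage 1 (toy): hourly heating load proportional to the setpoint deficit."""
    return [max(0.0, setpoint_c - t) for t in hourly_temps_c]

def system_demands(loads, distribution_eff=0.8):
    """Stage 2 (toy): the air-handling system inflates loads by distribution losses."""
    return [q / distribution_eff for q in loads]

def plant_consumption(demands, boiler_eff=0.75):
    """Stage 3 (toy): the central plant converts demand into fuel consumption."""
    return sum(demands) / boiler_eff

# Three hours of weather drive the whole chain, mirroring BLAST's structure.
fuel = plant_consumption(system_demands(space_loads([10.0, 20.0, 30.0])))
```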

  15. Interlibrary Lending with Computerized Union Catalogues.

    ERIC Educational Resources Information Center

    Lehmann, Klaus-Dieter

    Interlibrary loans in the Federal Republic of Germany are facilitated by applying techniques of data processing and computer output microfilm (COM) to the union catalogs of the national library system. The German library system consists of two national libraries, four central specialized libraries of technology, medicine, agriculture, and…

  16. Computer graphics for management: An abstract of capabilities and applications of the EIS system

    NASA Technical Reports Server (NTRS)

    Solem, B. J.

    1975-01-01

    The Executive Information Services (EIS) system, developed as a computer-based, time-sharing tool for making and implementing management decisions, and including computer graphics capabilities, is described. The following resources are available through the EIS languages: a centralized corporate/government data base, customized and working data bases, report writing, general computational capability, specialized routines, modeling/programming capability, and graphics. Nearly all EIS graphs can be created by a single, on-line instruction. A large number of options are available, such as selection of graphic form, line control, shading, placement on the page, multiple images on a page, control of scaling and labeling, plotting of cumulative data sets, optional grid lines, and stacked charts. Examples of areas in which the EIS system may be used include research, estimating services, planning, budgeting, performance measurement, and national computer hook-up negotiations.

  17. Assessing Postural Asymmetry with a Podoscope in Infants with Central Coordination Disturbance

    ERIC Educational Resources Information Center

    Pyzio-Kowalik, Magdalena; Wojtowicz, Dorota; Skrzek, Anna

    2013-01-01

    The aim of this study was to digitally evaluate the incidence and severity of postural asymmetry in infants with Central Coordination Disturbance (CCD) by using a computer-aided podoscope (PodoBaby) from CQ Elektronik System. A sample of 120 infants aged from 3 months (plus or minus 1 week) to 6 months (plus or minus 1 week) took part in the…

  18. Production planning, production systems for flexible automation

    NASA Astrophysics Data System (ADS)

    Spur, G.; Mertins, K.

    1982-09-01

    Trends in flexible manufacturing system (FMS) applications are reviewed. Machining systems contain machines that complement and can replace one another. Computer-controlled storage systems are widespread, with central storage capacity ranging from 20 pallet spaces to 200 magazine spaces. The handling function is fulfilled by pallet chargers in over 75% of FMSs. The degree of automation in data systems varies considerably. No trends are noted for transport systems.

  19. ICC '86; Proceedings of the International Conference on Communications, Toronto, Canada, June 22-25, 1986, Conference Record. Volumes 1, 2, & 3

    NASA Astrophysics Data System (ADS)

    Papers are presented on ISDN, mobile radio systems and techniques for digital connectivity, centralized and distributed algorithms in computer networks, communications networks, quality assurance and impact on cost, adaptive filters in communications, the spread spectrum, signal processing, video communication techniques, and digital satellite services. Topics discussed include performance evaluation issues for integrated protocols, packet network operations, the computer network theory and multiple-access, microwave single sideband systems, switching architectures, fiber optic systems, wireless local communications, modulation, coding, and synchronization, remote switching, software quality, transmission, and expert systems in network operations. Consideration is given to wide area networks, image and speech processing, office communications application protocols, multimedia systems, customer-controlled network operations, digital radio systems, channel modeling and signal processing in digital communications, earth station/on-board modems, computer communications system performance evaluation, source encoding, compression, and quantization, and adaptive communications systems.

  20. A cross-disciplinary introduction to quantum annealing-based algorithms

    NASA Astrophysics Data System (ADS)

    Venegas-Andraca, Salvador E.; Cruz-Santos, William; McGeoch, Catherine; Lanzagorta, Marco

    2018-04-01

    A central goal in quantum computing is the development of quantum hardware and quantum algorithms to analyse challenging scientific and engineering problems. Research in quantum computation involves contributions from both physics and computer science; hence this article presents a concise introduction to basic concepts from both fields that are used in annealing-based quantum computation, an alternative to the more familiar quantum gate model. We introduce some concepts from computer science required to define difficult computational problems and to realise the potential relevance of quantum algorithms to find novel solutions to those problems. We introduce the structure of quantum annealing-based algorithms as well as two examples of such algorithms for solving instances of the max-SAT and Minimum Multicut problems. An overview of the quantum annealing systems manufactured by D-Wave Systems is also presented.
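
    The cost functions such annealers minimize can be made concrete with a small sketch (ours, not the article's): an Ising energy over ±1 spins, with a brute-force search standing in for the annealer on a toy two-spin instance.

```python
from itertools import product

def ising_energy(spins, h, J):
    """E(s) = sum_i h[i]*s_i + sum_{(i,j)} J[(i,j)]*s_i*s_j, with s_i in {-1,+1}.
    This is the form of cost function a quantum annealer minimizes in hardware."""
    e = sum(h[i] * s for i, s in enumerate(spins))
    e += sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
    return e

def ground_state(n, h, J):
    """Exhaustive minimum-energy spin configuration (feasible for tiny n only)."""
    return min(product((-1, 1), repeat=n),
               key=lambda s: ising_energy(s, h, J))

# Antiferromagnetic pair: the positive coupling forces the two spins apart.
assert ground_state(2, [0.0, 0.0], {(0, 1): 1.0}) == (-1, 1)
```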

  1. Decentralized Resource Management in Distributed Computer Systems.

    DTIC Science & Technology

    1982-02-01

    directly exchanging user state information. Eventcounts and sequencers correspond to semaphores in the sense that synchronization primitives are used to...and techniques are required to achieve synchronization in distributed computers without reliance on any centralized entity such as a semaphore ...known solutions to the access synchronization problem was Dijkstra’s semaphore [12]. The importance of the semaphore is that it correctly addresses the
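
    The eventcount primitive the excerpt alludes to can be sketched as follows (our illustration of Reed and Kanodia's idea, not the report's code): advancing monotonically increments a counter, and awaiting a value blocks without consuming anything, which sidesteps the centralized mutable state a semaphore implies.

```python
import threading

class Eventcount:
    """Toy eventcount: advance() increments the count; await_(v) blocks
    until the count reaches v. Unlike a semaphore's P operation, waiting
    does not decrement the count, so many readers can observe it freely."""
    def __init__(self):
        self._count = 0
        self._cond = threading.Condition()

    def advance(self):
        with self._cond:
            self._count += 1
            self._cond.notify_all()

    def read(self):
        with self._cond:
            return self._count

    def await_(self, value):
        with self._cond:
            while self._count < value:
                self._cond.wait()
```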

  2. BeeSign: A Computationally-Mediated Intervention to Examine K-1 Students' Representational Activities in the Context of Teaching Complex Systems Concepts

    ERIC Educational Resources Information Center

    Danish, Joshua Adam

    2009-01-01

    Representations such as drawings, graphs, and computer simulations, are central to learning and doing science. Furthermore, ongoing success in science learning requires students to build on the representations and associated practices that they are presumed to have learned throughout their schooling career. Without these practices, students have…

  3. Computer User's Guide to the Protection of Information Resources. NIST Special Publication 500-171.

    ERIC Educational Resources Information Center

    Helsing, Cheryl; And Others

    Computers have changed the way information resources are handled. Large amounts of information are stored in one central place and can be accessed from remote locations. Users have a personal responsibility for the security of the system and the data stored in it. This document outlines the user's responsibilities and provides security and control…

  4. Computer Assisted Thermography And Its Application In Ovulation Detection

    NASA Astrophysics Data System (ADS)

    Rao, K. H.; Shah, A. V.

    1984-08-01

    Hardware and software of a computer-assisted image analyzing system used for infrared images in medical applications are discussed. The application of computer-assisted thermography (CAT) as a complementary diagnostic tool in centralized diagnostic management is proposed. The authors adopted computer-assisted thermography to study physiological changes in the breasts related to the hormones characterizing the menstrual cycle of a woman. Based on clinical experiments followed by thermal image analysis, they suggest that the 'differential skin temperature' (DST) be measured to detect the fertility interval in the menstrual cycle of a woman.

  5. LaRC local area networks to support distributed computing

    NASA Technical Reports Server (NTRS)

    Riddle, E. P.

    1984-01-01

    The Langley Research Center's (LaRC) Local Area Network (LAN) effort is discussed. LaRC initiated the development of a LAN to support a growing distributed computing environment at the Center. The purpose of the network is to provide an improved capability (over interactive and RJE terminal access) for sharing multivendor computer resources. Specifically, the network will provide a data highway for the transfer of files between mainframe computers, minicomputers, work stations, and personal computers. An important influence on the overall network design was the vital need of LaRC researchers to efficiently utilize the large CDC mainframe computers in the central scientific computing facility. Although there has been a steady migration from a centralized to a distributed computing environment at LaRC in recent years, the work load on the central resources has increased. Major emphasis in the network design was therefore on communication with the central resources within the distributed environment. The network to be implemented will allow researchers to use the central resources, distributed minicomputers, work stations, and personal computers to obtain the proper level of computing power to perform their jobs efficiently.

  6. The operation of large computer-controlled manufacturing systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Upton, D.M.

    1988-01-01

    This work examines methods for the operation of large computer-controlled manufacturing systems with more than 50 or so disparate CNC machines in congregation. The central theme is the development of a distributed control system that requires minimal central supervision and allows manufacturing system re-configuration without extensive control software re-writes. Provision is made for machines to learn from their experience and provide estimates of the time necessary to effect various tasks. Routing is opportunistic, with varying degrees of myopia depending on the prevailing situation. Necessary curtailments of opportunism are built into the system in order to provide a society of machines that operate in unison rather than in chaos. Negotiation and contention resolution are carried out using a UHF radio communications network, along with processing capability on both pallets and tools. Graceful and robust error recovery is facilitated by ensuring adequate pessimistic consideration of failure modes at each stage in the scheme. Theoretical models are developed and an examination is made of fundamental characteristics of auction-based scheduling methods.

  7. Scalable and massively parallel Monte Carlo photon transport simulations for heterogeneous computing platforms

    NASA Astrophysics Data System (ADS)

    Yu, Leiming; Nina-Paravecino, Fanny; Kaeli, David; Fang, Qianqian

    2018-01-01

    We present a highly scalable Monte Carlo (MC) three-dimensional photon transport simulation platform designed for heterogeneous computing systems. Through the development of a massively parallel MC algorithm using the Open Computing Language framework, this research extends our existing graphics processing unit (GPU)-accelerated MC technique to a highly scalable vendor-independent heterogeneous computing environment, achieving significantly improved performance and software portability. A number of parallel computing techniques are investigated to achieve portable performance over a wide range of computing hardware. Furthermore, multiple thread-level and device-level load-balancing strategies are developed to obtain efficient simulations using multiple central processing units and GPUs.
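
    One common device-level load-balancing strategy (a generic sketch; the paper's exact scheme may differ) splits the photon budget across devices in proportion to measured throughput, so CPUs and GPUs of very different speeds finish at roughly the same time:

```python
def partition_photons(total, throughputs):
    """Split `total` photons across devices in proportion to each device's
    measured throughput, keeping the exact total by assigning the integer
    remainder to the last device."""
    s = sum(throughputs)
    shares = [int(total * t / s) for t in throughputs]
    shares[-1] += total - sum(shares)
    return shares

# Hypothetical measurement: a GPU three times faster than a CPU gets 3/4
# of the photons.
assert partition_photons(100, [3.0, 1.0]) == [75, 25]
```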

  8. Contextuality as a Resource for Models of Quantum Computation with Qubits

    NASA Astrophysics Data System (ADS)

    Bermejo-Vega, Juan; Delfosse, Nicolas; Browne, Dan E.; Okay, Cihan; Raussendorf, Robert

    2017-09-01

    A central question in quantum computation is to identify the resources that are responsible for quantum speed-up. Quantum contextuality has been recently shown to be a resource for quantum computation with magic states for odd-prime dimensional qudits and two-dimensional systems with real wave functions. The phenomenon of state-independent contextuality poses a priori an obstruction to characterizing the case of regular qubits, the fundamental building block of quantum computation. Here, we establish contextuality of magic states as a necessary resource for a large class of quantum computation schemes on qubits. We illustrate our result with a concrete scheme related to measurement-based quantum computation.

  9. The Cronus Distributed DBMS (Database Management System) Project

    DTIC Science & Technology

    1989-10-01

    projects, e.g., HiPAC [Dayal 88] and Postgres [Stonebraker 86]. Although we expect to use these techniques, they have been developed for centralized...Computing Systems, June 1989. (To appear). [Stonebraker 86] Stonebraker, M. and Rowe, L. A., "The Design of POSTGRES ," Proceedings ACM SIGMOD Annual

  10. New ARCH: Future Generation Internet Architecture

    DTIC Science & Technology

    2004-08-01

    a vocabulary to talk about a system. This provides a framework (a "reference model ...layered model. Modularity and abstraction are central tenets of Computer Science thinking. Modularity breaks a system into parts, normally to permit...this complexity is hidden. Abstraction suggests a structure for the system. A popular and simple structure is a layered model: lower layer

  11. Bathymetric surveys of Morse and Geist Reservoirs in central Indiana made with acoustic Doppler current profiler and global positioning system technology, 1996

    USGS Publications Warehouse

    Wilson, J.T.; Morlock, S.E.; Baker, N.T.

    1997-01-01

    Acoustic Doppler current profiler, global positioning system, and geographic information system technology were used to map the bathymetry of Morse and Geist Reservoirs, two artificial lakes used for public water supply in central Indiana. The project was a pilot study to evaluate the use of the technologies for bathymetric surveys. Bathymetric surveys were last conducted in 1978 on Morse Reservoir and in 1980 on Geist Reservoir; those surveys were done with conventional methods using networks of fathometer transects. The 1996 bathymetric surveys produced updated estimates of reservoir volumes that will serve as base-line data for future estimates of storage capacity and sedimentation rates. An acoustic Doppler current profiler and global positioning system receiver were used to collect water-depth and position data from April 1996 through October 1996. All water-depth and position data were imported to a geographic information system to create a data base. The geographic information system then was used to generate water-depth contour maps and to compute the volumes for each reservoir. The computed volume of Morse Reservoir was 22,820 acre-feet (7.44 billion gallons), with a surface area of 1,484 acres. The computed volume of Geist Reservoir was 19,280 acre-feet (6.29 billion gallons), with a surface area of 1,848 acres. The computed 1996 reservoir volumes are less than the design volumes and indicate that sedimentation has occurred in both reservoirs. Cross sections were constructed from the computer-generated surfaces for 1996 and compared to the fathometer profiles from the 1978 and 1980 surveys; analysis of these cross sections also indicates that some sedimentation has occurred in both reservoirs. The acoustic Doppler current profiler, global positioning system, and geographic information system technologies described in this report produced bathymetric maps and volume estimates more efficiently and with comparable or greater resolution than conventional bathymetry methods.
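
    The reported volume figures are easy to sanity-check as a units exercise (this is only arithmetic on the abstract's numbers, not the GIS computation itself), using 1 acre-foot = 325,851 US gallons:

```python
GAL_PER_ACRE_FT = 325_851  # US gallons per acre-foot

def acre_ft_to_billion_gal(volume_acre_ft):
    """Convert a reservoir volume in acre-feet to billions of US gallons."""
    return volume_acre_ft * GAL_PER_ACRE_FT / 1e9

morse = acre_ft_to_billion_gal(22_820)   # ~7.44 billion gallons, as reported
geist = acre_ft_to_billion_gal(19_280)   # ~6.28 billion gallons, vs. 6.29 reported
mean_depth_morse_ft = 22_820 / 1_484     # volume / area ~ 15.4 ft average depth
```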

  12. CDC 7600 LTSS programming stratagens: preparing your first production code for the Livermore Timesharing System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fong, K. W.

    1977-08-15

    This report deals with some techniques in applied programming using the Livermore Timesharing System (LTSS) on the CDC 7600 computers at the National Magnetic Fusion Energy Computer Center (NMFECC) and the Lawrence Livermore Laboratory Computer Center (LLLCC or Octopus network). This report is based on a document originally written specifically about the system as implemented at NMFECC but has been revised to accommodate differences between the LLLCC and NMFECC implementations. Topics include: maintaining programs, debugging, recovering from system crashes, and using the central processing unit, memory, and input/output devices efficiently and economically. Routines that aid in these procedures are mentioned. The companion report, UCID-17556, An LTSS Compendium, discusses the hardware and operating system and should be read before reading this report.

  13. A computer software system for the generation of global ocean tides including self-gravitation and crustal loading effects

    NASA Technical Reports Server (NTRS)

    Estes, R. H.

    1977-01-01

    A computer software system is described which computes global numerical solutions of the integro-differential Laplace tidal equations, including dissipation terms and ocean loading and self-gravitation effects, for arbitrary diurnal and semidiurnal tidal constituents. The integration algorithm features a successive approximation scheme for the integro-differential system, with forward differences for time stepping and central differences in the spatial variables. Solutions for the M2, S2, N2, K2, K1, O1, and P1 tidal constituents neglecting the effects of ocean loading and self-gravitation, and a converged M2 solution including ocean loading and self-gravitation effects, are presented in the form of cotidal and corange maps.
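
    The stated discretization (forward differences in time, central differences in space) can be illustrated on a toy one-dimensional advection term with periodic boundaries; this is a stand-in for the scheme's structure, not the Laplace tidal equations themselves:

```python
def step(u, c, dt, dx):
    """One explicit time step of u_t = -c * u_x: forward difference in time,
    central difference in space, periodic boundaries via index wraparound."""
    n = len(u)
    return [u[i] - c * dt / (2.0 * dx) * (u[(i + 1) % n] - u[i - 1])
            for i in range(n)]

# A constant field has zero central differences, so it is unchanged.
assert step([1.0, 1.0, 1.0, 1.0], 1.0, 0.1, 1.0) == [1.0, 1.0, 1.0, 1.0]
```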

  14. Application of queueing models to multiprogrammed computer systems operating in a time-critical environment

    NASA Technical Reports Server (NTRS)

    Eckhardt, D. E., Jr.

    1979-01-01

    A model of a central processor (CPU) which services background applications in the presence of time-critical activity is presented. The CPU is viewed as an M/M/1 queueing system subject to periodic interrupts by a deterministic, time-critical process. The Laplace transform of the distribution of service times for the background applications is developed. The use of state-of-the-art queueing models for studying the background processing capability of time-critical computer systems is discussed, and the results of a model validation study which support this application of queueing models are presented.
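
    The baseline M/M/1 result behind such a model is simple to state: with arrival rate λ and service rate μ (λ < μ), the mean time in system is W = 1/(μ − λ). Treating the periodic time-critical work as stealing a fixed fraction of the CPU is a common first-order approximation; it is an assumption of this sketch, not necessarily the paper's derivation.

```python
def mm1_mean_response(lam: float, mu: float) -> float:
    """Mean time in system W = 1/(mu - lam) for a stable M/M/1 queue."""
    if lam >= mu:
        raise ValueError("unstable queue: require lam < mu")
    return 1.0 / (mu - lam)

def with_interrupts(lam: float, mu: float, duty: float) -> float:
    """First-order approximation (an assumption here): time-critical work
    with CPU duty cycle `duty` scales the effective background service
    rate by (1 - duty)."""
    return mm1_mean_response(lam, mu * (1.0 - duty))

assert mm1_mean_response(2.0, 3.0) == 1.0
assert with_interrupts(2.0, 4.0, 0.25) == 1.0  # effective mu = 3
```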

  15. Heterogeneous real-time computing in radio astronomy

    NASA Astrophysics Data System (ADS)

    Ford, John M.; Demorest, Paul; Ransom, Scott

    2010-07-01

    Modern computer architectures suited for general purpose computing are often not the best choice for either I/O-bound or compute-bound problems. Sometimes the best choice is not to choose a single architecture, but to take advantage of the best characteristics of different computer architectures to solve your problems. This paper examines the tradeoffs between using computer systems based on the ubiquitous X86 Central Processing Units (CPU's), Field Programmable Gate Array (FPGA) based signal processors, and Graphical Processing Units (GPU's). We will show how a heterogeneous system can be produced that blends the best of each of these technologies into a real-time signal processing system. FPGA's tightly coupled to analog-to-digital converters connect the instrument to the telescope and supply the first level of computing to the system. These FPGA's are coupled to other FPGA's to continue to provide highly efficient processing power. Data is then packaged up and shipped over fast networks to a cluster of general purpose computers equipped with GPU's, which are used for floating-point intensive computation. Finally, the data is handled by the CPU and written to disk, or further processed. Each of the elements in the system has been chosen for its specific characteristics and the role it can play in creating a system that does the most for the least, in terms of power, space, and money.

  16. ALMA test interferometer control system: past experiences and future developments

    NASA Astrophysics Data System (ADS)

    Marson, Ralph G.; Pokorny, Martin; Kern, Jeff; Stauffer, Fritz; Perrigouard, Alain; Gustafsson, Birger; Ramey, Ken

    2004-09-01

    The Atacama Large Millimeter Array (ALMA) will, when it is completed in 2012, be the world's largest millimeter & sub-millimeter radio telescope. It will consist of 64 antennas, each one 12 meters in diameter, connected as an interferometer. The ALMA Test Interferometer Control System (TICS) was developed as a prototype for the ALMA control system. Its initial task was to provide sufficient functionality for the evaluation of the prototype antennas. The main antenna evaluation tasks include surface measurements via holography and pointing accuracy, measured at both optical and millimeter wavelengths. In this paper we will present the design of TICS, which is a distributed computing environment. In the test facility there are four computers: three real-time computers running VxWorks (one on each antenna and a central one) and a master computer running Linux. These computers communicate via Ethernet, and each of the real-time computers is connected to the hardware devices via an extension of the CAN bus. We will also discuss our experience with this system and outline changes we are making in light of our experiences.

  17. Incremental Centrality Algorithms for Dynamic Network Analysis

    DTIC Science & Technology

    2013-08-01

    …encouragement he gave me to complete my degree. Last but not least, I would like to thank CASOS members for insightful discussions and feedback they gave me at… Systems (CASOS) under the Institute for Software Research within the School of Computer Science (SCS) at Carnegie Mellon University (CMU). … discusses several ways of generalizing betweenness centrality, including scaling of values with respect to length and inclusion of end-points in the…

  18. User interface concerns

    NASA Technical Reports Server (NTRS)

    Redhed, D. D.

    1978-01-01

    Three possible goals for the Numerical Aerodynamic Simulation Facility (NASF) are: (1) a computational fluid dynamics (as opposed to aerodynamics) algorithm development tool; (2) a specialized research laboratory facility for nearly intractable aerodynamics problems that industry encounters; and (3) a facility for industry to use in its normal aerodynamics design work that requires high computing rates. The central system issue for industry use of such a computer is the quality of the user interface as implemented in some kind of a front end to the vector processor.

  19. CFD in Support of Wind Tunnel Testing for Aircraft/Weapons Integration

    DTIC Science & Technology

    2004-06-01

    …Warming flux vector splitting scheme. Viscous fluxes (computed using spatial central differencing)… factors to eliminate them from the current computation… The grid system consisted of 18 x 10{sup 6} points… These newly i-blanked grid… 14. van Leer, B., "Towards the Ultimate Conservative Difference Scheme V. A…" 18. Suhs, N.E., and R.W. Tramel, "PEGSUS 4.0 Users Manual."…

  20. Dormitory renovation project reduces energy use by 69%

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kokayko, M.J.

    1997-06-01

    Baldwin Hall is a three-story, 46,000 ft{sup 2} (4,273 m{sup 2}) dormitory on the campus of Allegheny College in Meadville, Pa. The building was originally built in the 1950s; an additional wing was added in the 1970s so that it has about 37,000 ft{sup 2} (3,437 m{sup 2}). The building contains approximately 100 double-occupancy student rooms; three common bathroom groups per floor; central study, lounge, and computer areas; and a laundry. Design for the renovation started in the winter of 1993; construction took place in the summer of 1994. The major goals of the renovation were: (1) to replace the entire building heating system (central boiler plant, distribution piping, and room heating terminals); (2) to add a ventilation system within the building; (3) to upgrade the building electrical system; (4) to provide computer data cabling and cable TV wiring to each room; and (5) to improve room and hallway lighting and finishes.

  1. A Distributed Prognostic Health Management Architecture

    NASA Technical Reports Server (NTRS)

    Bhaskar, Saha; Saha, Sankalita; Goebel, Kai

    2009-01-01

    This paper introduces a generic distributed prognostic health management (PHM) architecture with specific application to the electrical power systems domain. Current state-of-the-art PHM systems are mostly centralized in nature, where all the processing is reliant on a single processor. This can lead to loss of functionality in case of a crash of the central processor or monitor. Furthermore, with increases in the volume of sensor data as well as the complexity of algorithms, traditional centralized systems become unsuitable for successful deployment, and efficient distributed architectures are required. A distributed architecture, though, is not effective unless there is an algorithmic framework to take advantage of its unique abilities. The health management paradigm envisaged here incorporates a heterogeneous set of system components monitored by a varied suite of sensors and a particle filtering (PF) framework that has the power and the flexibility to adapt to the different diagnostic and prognostic needs. Both the diagnostic and prognostic tasks are formulated as a particle filtering problem in order to explicitly represent and manage uncertainties; however, typically the complexity of the prognostic routine is higher than the computational power of one computational element (CE). Individual CEs run diagnostic routines until the system variable being monitored crosses a nominal threshold, at which point the CE coordinates with other networked CEs to run the prognostic routine in a distributed fashion. Implementation results from a network of distributed embedded devices monitoring a prototypical aircraft electrical power system are presented, where the CEs are Sun Microsystems Small Programmable Object Technology (SPOT) devices.
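
The particle filtering framework named above can be illustrated with a minimal bootstrap filter. This is a hedged, generic sketch, not the paper's PF formulation: the random-walk state model, noise levels, and constant sensor readings below are all invented for illustration.

```python
import math
import random

def particle_filter(observations, n=500, q=0.05, r=0.2, seed=1):
    """Minimal bootstrap particle filter: random-walk state model,
    Gaussian measurement likelihood, multinomial resampling."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n)]
    estimates = []
    for y in observations:
        # predict: propagate each particle through the process model
        particles = [x + rng.gauss(0.0, q) for x in particles]
        # update: weight each particle by the Gaussian likelihood of y
        weights = [math.exp(-0.5 * ((y - x) / r) ** 2) for x in particles]
        # resample: draw a new particle set proportionally to the weights
        particles = rng.choices(particles, weights=weights, k=n)
        estimates.append(sum(particles) / n)
    return estimates

# twenty readings of a hypothetical monitored variable sitting at 1.0
est = particle_filter([1.0] * 20)[-1]
```

The same loop serves both roles the abstract describes: run cheaply as a diagnostic monitor, and (with a richer degradation model) distributed across CEs for prognosis.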

  2. Electron beam diagnostic system using computed tomography and an annular sensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elmer, John W.; Teruya, Alan T.

    2015-08-11

    A system for analyzing an electron beam including a circular electron beam diagnostic sensor adapted to receive the electron beam, the circular electron beam diagnostic sensor having a central axis; an annular sensor structure operatively connected to the circular electron beam diagnostic sensor, wherein the sensor structure receives the electron beam; a system for sweeping the electron beam radially outward from the central axis of the circular electron beam diagnostic sensor to the annular sensor structure wherein the electron beam is intercepted by the annular sensor structure; and a device for measuring the electron beam that is intercepted by the annular sensor structure.

  3. Electron beam diagnostic system using computed tomography and an annular sensor

    DOEpatents

    Elmer, John W.; Teruya, Alan T.

    2014-07-29

    A system for analyzing an electron beam including a circular electron beam diagnostic sensor adapted to receive the electron beam, the circular electron beam diagnostic sensor having a central axis; an annular sensor structure operatively connected to the circular electron beam diagnostic sensor, wherein the sensor structure receives the electron beam; a system for sweeping the electron beam radially outward from the central axis of the circular electron beam diagnostic sensor to the annular sensor structure wherein the electron beam is intercepted by the annular sensor structure; and a device for measuring the electron beam that is intercepted by the annular sensor structure.

  4. Different approaches for centralized and decentralized water system management in multiple decision makers' problems

    NASA Astrophysics Data System (ADS)

    Anghileri, D.; Giuliani, M.; Castelletti, A.

    2012-04-01

    There is general agreement that among the most challenging issues in water system management are the presence of many, often conflicting interests and of several independent decision makers. The traditional approach to multi-objective water system management is centralized management, in which an ideal central regulator coordinates the operation of the whole system, exploiting all the available information and balancing all the operating objectives. Although this approach yields Pareto-optimal solutions representing the maximum achievable benefit, it rests on assumptions that strongly limit its application in real-world contexts: 1) top-down management, 2) existence of a central regulation institution, 3) complete information exchange within the system, 4) perfect economic efficiency. A bottom-up, decentralized approach therefore seems more suitable for real-world applications, since the different reservoir operators may maintain their independence. In this work we tested the consequences of moving from a centralized toward a decentralized management approach. In particular, we compared three cases: the centralized management approach; the independent management approach, where each reservoir operator takes the daily release decision maximizing (or minimizing) their operating objective independently of the others; and an intermediate approach, leading to the Nash equilibrium of the associated game, where the reservoir operators try to model the behaviour of the other operators. The three approaches are demonstrated on a test case study composed of two reservoirs regulated to minimize flooding in different locations. The operating policies are computed by solving a single multi-objective optimal control problem in the centralized approach; multiple single-objective optimization problems, one for each operator, in the independent case; and, in the last approach, using game-theoretic techniques to describe the interaction between the two operators. Computational results show that the Pareto-optimal control policies obtained with the centralized approach dominate the control policies of both decentralized cases, and that the so-called price of anarchy increases as management moves toward full independence. However, the Nash equilibrium solution seems the most promising alternative, because it represents a good compromise: it largely preserves management efficiency without constraining the behaviour of the reservoir operators.
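
The Nash-equilibrium computation for two operators can be illustrated with a toy best-response iteration. The quadratic costs, targets, and coupling coefficient below are invented for illustration and are not the paper's reservoir model; the sketch only shows the fixed-point scheme such decentralized approaches typically rely on.

```python
# Toy game: operator i picks release u_i to minimize an illustrative cost
#   J_i(u_i, u_j) = (u_i - a_i)^2 + c * u_i * u_j
# Setting dJ_i/du_i = 0 gives the best response u_i = a_i - c * u_j / 2.
def best_response(a_i, c, u_j):
    return a_i - 0.5 * c * u_j

def nash_equilibrium(a1, a2, c, iters=200):
    """Gauss-Seidel best-response iteration; contracts to the Nash
    equilibrium when |c| < 2 (contraction factor (c/2)^2)."""
    u1 = u2 = 0.0
    for _ in range(iters):
        u1 = best_response(a1, c, u2)
        u2 = best_response(a2, c, u1)
    return u1, u2

u1, u2 = nash_equilibrium(a1=1.0, a2=2.0, c=0.5)
# at the fixed point, each decision is a best response to the other's
```

A centralized solution would instead minimize J1 + J2 jointly; comparing the two costs at their respective optima is exactly what the price of anarchy measures.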

  5. A web-based remote radiation treatment planning system using the remote desktop function of a computer operating system: a preliminary report.

    PubMed

    Suzuki, Keishiro; Hirasawa, Yukinori; Yaegashi, Yuji; Miyamoto, Hideki; Shirato, Hiroki

    2009-01-01

    We developed a web-based, remote radiation treatment planning system which allowed staff at an affiliated hospital to obtain support from a fully staffed central institution. Network security was based on a firewall and a virtual private network (VPN). Client computers were installed at a cancer centre, at a university hospital and at a staff home. We remotely operated the treatment planning computer using the Remote Desktop function built into the Windows operating system. Except for the initial setup of the VPN router, no special knowledge was needed to operate the remote radiation treatment planning system. There was a time lag that seemed to depend on the volume of data traffic on the Internet, but it did not affect smooth operation. The initial cost and running cost of the system were reasonable.

  6. Summary of Research Academic Departments, 1987-1988

    DTIC Science & Technology

    1988-12-01

    …quantify the system's ability to enhance learning of the course… engineering students and their faculty with roughly equivalent computers; one group… Sponsor: Naval Academy Instructional Development Advisory Committee. To understand mathematics, a student must… also to explain the central concepts… Mathematics Department. The project will attempt to move toward these goals by preparing extra resources for in-class and extra instruction… Students…

  7. Resiliency in Future Cyber Combat

    DTIC Science & Technology

    2016-04-04

    including the Internet, telecommunications networks, computer systems, and embedded processors and controllers."6 One important point emerging from the…definition is that while the Internet is part of cyberspace, it is not all of cyberspace. Any computer processor capable of communicating with a…central processor on a modern car are all part of cyberspace, although only some of them are routinely connected to the Internet. Most modern

  8. Accuracy and time requirements of a bar-code inventory system for medical supplies.

    PubMed

    Hanson, L B; Weinswig, M H; De Muth, J E

    1988-02-01

    The effects of implementing a bar-code system for issuing medical supplies to nursing units at a university teaching hospital were evaluated. Data on the time required to issue medical supplies to three nursing units at a 480-bed, tertiary-care teaching hospital were collected (1) before the bar-code system was implemented (i.e., when the manual system was in use), (2) one month after implementation, and (3) four months after implementation. At the same times, the accuracy of the central supply perpetual inventory was monitored using 15 selected items. One-way analysis of variance tests were done to determine any significant differences between the bar-code and manual systems. Using the bar-code system took longer than using the manual system because of a significant difference in the time required for order entry into the computer. Multiple-use requirements of the central supply computer system made entering bar-code data a much slower process. There was, however, a significant improvement in the accuracy of the perpetual inventory. Using the bar-code system for issuing medical supplies to the nursing units takes longer than using the manual system. However, the accuracy of the perpetual inventory was significantly improved with the implementation of the bar-code system.
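
The one-way analysis of variance used to compare the manual and bar-code timings can be sketched with a small, self-contained F-statistic computation. The data below are made up for illustration; the study's actual time measurements are not reproduced here.

```python
# Hedged sketch of a one-way ANOVA F statistic, as used to compare issue
# times across measurement periods. Groups and values are illustrative.
def one_way_anova_f(groups):
    """F = (between-group mean square) / (within-group mean square)."""
    all_values = [x for g in groups for x in g]
    n = len(all_values)
    k = len(groups)
    grand_mean = sum(all_values) / n
    # between-group sum of squares, k - 1 degrees of freedom
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # within-group sum of squares, n - k degrees of freedom
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# e.g. issue times (minutes) before, one month after, four months after
f_stat = one_way_anova_f([[1, 2, 3], [2, 3, 4], [3, 4, 5]])
```

The F value is then compared against the F distribution with (k − 1, n − k) degrees of freedom to judge significance, as the study did at the 0.05 level.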

  9. A computer-assisted data collection system for use in a multicenter study of American Indians and Alaska Natives: SCAPES.

    PubMed

    Edwards, Roger L; Edwards, Sandra L; Bryner, James; Cunningham, Kelly; Rogers, Amy; Slattery, Martha L

    2008-04-01

    We describe a computer-assisted data collection system developed for a multicenter cohort study of American Indian and Alaska Native people. The study computer-assisted participant evaluation system, or SCAPES, is built around a central database server that controls a small private network with touch-screen workstations. SCAPES encompasses the self-administered questionnaires, the keyboard-based stations for interviewer-administered questionnaires, a system for inputting medical measurements, and administrative tasks such as data export, backup and management. Elements of SCAPES hardware/network design, data storage, programming language, software choices, questionnaire programming including the programming of questionnaires administered using audio computer-assisted self-interviewing (ACASI), and the participant identification/data security system are presented. Unique features of SCAPES are that data are promptly made available to participants in the form of health feedback; data can be quickly summarized for tribes for health monitoring and planning at the community level; and data are available to study investigators for analyses and scientific evaluation.

  10. Description of data base management systems activities

    NASA Technical Reports Server (NTRS)

    1983-01-01

    One of the major responsibilities of the JPL Computing and Information Services Office is to develop and maintain a JPL plan for providing computing services to the JPL management and administrative community that will lead to improved productivity. The CISO plan to accomplish this objective has been titled 'Management and Administrative Support Systems' (MASS). The MASS plan is based on the continued use of JPL's IBM 3032 computer system for administrative computing and for the MASS functions. The current candidate administrative data base management systems required to support the MASS include ADABASE, Cullinane IDMS and TOTAL. Previous administrative data base systems have been applied to specific local functions rather than used in a centralized manner with elements common to the many user groups. Limited-capacity data base systems have been installed in microprocessor-based office automation systems in a few Project and Management Offices using Ashton-Tate dBASE II. These experiences, plus some other localized in-house DBMS uses, have provided an excellent background for developing user and system requirements for a single DBMS to support the MASS program.

  11. The Washington Library Network

    ERIC Educational Resources Information Center

    Franklin, Ralph W.; MacDonald, Clarice I.

    1976-01-01

    The objectives of the Washington Library Network (WLN) are 1) statewide sharing of resources among all types of libraries, 2) economically meeting the information demands of all citizens of the state, and 3) centralized computer-communication systems for bibliographic services. (Author)

  12. Primary central nervous system lymphoma in an human immunodeficiency virus-infected patient mimicking bilateral eye sign in brain seen in fluorine-18 fluorodeoxyglucose-positron emission tomography/computed tomography.

    PubMed

    Kamaleshwaran, Koramadai Karuppusany; Thirugnanam, Rajasekar; Shibu, Deepu; Kalarikal, Radhakrishnan Edathurthy; Shinto, Ajit Sugunan

    2014-04-01

    Fluorodeoxyglucose-positron emission tomography/computed tomography (FDG PET/CT) has proven useful in the diagnosis, staging, detection of metastasis, and posttreatment monitoring of several malignancies in human immunodeficiency virus (HIV)-infected patients. It also has the ability to make the important distinction between malignancy and infection in the evaluation of central nervous system (CNS) lesions, leading to the initiation of the appropriate treatment and precluding the need for invasive biopsy. We report an interesting case of an HIV-positive 35-year-old woman who presented with headache, disorientation, and decreased level of consciousness. She underwent whole-body PET/CT, which showed multiple lesions in the cerebrum mimicking a bilateral eye sign in the brain. A diagnosis of primary CNS lymphoma was made and the patient was started on chemotherapy.

  13. The application of simulation modeling to the cost and performance ranking of solar thermal power plants

    NASA Technical Reports Server (NTRS)

    Rosenberg, L. S.; Revere, W. R.; Selcuk, M. K.

    1981-01-01

    A computer simulation code was employed to evaluate several generic types of solar power systems (up to 10 MWe). Details of the simulation methodology and the solar plant concepts are given along with cost and performance results. The Solar Energy Simulation computer code (SESII) was used, which optimizes the size of the collector field and energy storage subsystem for given engine-generator and energy-transport characteristics. Nine plant types were examined which employed combinations of different technology options, such as: distributed or central receivers with one- or two-axis tracking or no tracking; point- or line-focusing concentrators; central or distributed power conversion; Rankine, Brayton, or Stirling thermodynamic cycles; and thermal or electrical storage. Optimal cost curves were plotted as a function of levelized busbar energy cost and annualized plant capacity. Point-focusing distributed receiver systems were found to be most efficient (17-26 percent).

  14. Development of the Centralized Storm Information System (CSIS) for use in severe weather prediction

    NASA Technical Reports Server (NTRS)

    Mosher, F. R.

    1984-01-01

    The centralized storm information system is now capable of ingesting and remapping radar scope presentations on a satellite projection. This can be color enhanced and superposed on other data types. Presentations from more than one radar can be composited on a single image. As with most other data sources, a simple macro establishes the loops and scheduling of the radar ingestions as well as the autodialing. There are approximately 60 NWS network 10 cm radars that can be interrogated. NSSFC forecasters have found this data source to be extremely helpful in severe weather situations. The capability to access lightning frequency data stored in a National Weather Service computer was added. Plans call for an interface with the National Meteorological Center to receive and display prognostic fields from operational computer forecast models. Programs are to be developed to plot and display locations of reported severe local storm events.

  15. Evaluation of accuracy of shade selection using two spectrophotometer systems: Vita Easyshade and Degudent Shadepilot.

    PubMed

    Kalantari, Mohammad Hassan; Ghoraishian, Seyed Ahmad; Mohaghegh, Mina

    2017-01-01

    The aim of this in vitro study was to evaluate the accuracy of shade matching using two spectrophotometric devices. Thirteen patients who required a full-coverage restoration for one maxillary central incisor, with the adjacent central incisor intact, were selected. Three identical frameworks were constructed for each tooth using computer-aided design and computer-aided manufacturing technology. Shade matching was performed using the Vita Easyshade spectrophotometer, the Shadepilot spectrophotometer, and the Vitapan classical shade guide for the first, second, and third crowns, respectively. After application, firing, and glazing of the porcelain, the color was evaluated and scored by five inspectors. Both spectrophotometric systems showed significantly better results than the visual method (P < 0.05), while there were no significant differences between the Vita Easyshade and Shadepilot spectrophotometers (P > 0.05). Spectrophotometers are a good substitute for visual color selection methods.

  16. Evaluation of accuracy of shade selection using two spectrophotometer systems: Vita Easyshade and Degudent Shadepilot

    PubMed Central

    Kalantari, Mohammad Hassan; Ghoraishian, Seyed Ahmad; Mohaghegh, Mina

    2017-01-01

    Objective: The aim of this in vitro study was to evaluate the accuracy of shade matching using two spectrophotometric devices. Materials and Methods: Thirteen patients who required a full-coverage restoration for one maxillary central incisor, with the adjacent central incisor intact, were selected. Three identical frameworks were constructed for each tooth using computer-aided design and computer-aided manufacturing technology. Shade matching was performed using the Vita Easyshade spectrophotometer, the Shadepilot spectrophotometer, and the Vitapan classical shade guide for the first, second, and third crowns, respectively. After application, firing, and glazing of the porcelain, the color was evaluated and scored by five inspectors. Results: Both spectrophotometric systems showed significantly better results than the visual method (P < 0.05), while there were no significant differences between the Vita Easyshade and Shadepilot spectrophotometers (P > 0.05). Conclusion: Spectrophotometers are a good substitute for visual color selection methods. PMID:28729792

  17. Modular multiple sensors information management for computer-integrated surgery.

    PubMed

    Vaccarella, Alberto; Enquobahrie, Andinet; Ferrigno, Giancarlo; Momi, Elena De

    2012-09-01

    In the past 20 years, technological advancements have modified the concept of modern operating rooms (ORs) with the introduction of computer-integrated surgery (CIS) systems, which promise to enhance the outcomes, safety and standardization of surgical procedures. With CIS, different types of sensor (mainly position-sensing devices, force sensors and intra-operative imaging devices) are widely used. Recently, the need for a combined use of different sensors raised issues related to synchronization and spatial consistency of data from different sources of information. In this study, we propose a centralized, multi-sensor management software architecture for a distributed CIS system, which addresses sensor information consistency in both space and time. The software was developed as a data server module in a client-server architecture, using two open-source software libraries: Image-Guided Surgery Toolkit (IGSTK) and OpenCV. The ROBOCAST project (FP7 ICT 215190), which aims at integrating robotic and navigation devices and technologies in order to improve the outcome of the surgical intervention, was used as the benchmark. An experimental protocol was designed in order to prove the feasibility of a centralized module for data acquisition and to test the application latency when dealing with optical and electromagnetic tracking systems and ultrasound (US) imaging devices. Our results show that a centralized approach is suitable for minimizing synchronization errors; latency in the client-server communication was estimated to be 2 ms (median value) for tracking systems and 40 ms (median value) for US images. The proposed centralized approach proved to be adequate for neurosurgery requirements. Latency introduced by the proposed architecture does not affect tracking system performance in terms of frame rate and limits US images frame rate at 25 fps, which is acceptable for providing visual feedback to the surgeon in the OR. Copyright © 2012 John Wiley & Sons, Ltd.
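
The temporal-consistency problem described above reduces, in its simplest form, to pairing each image frame with the nearest tracker sample by timestamp. The sketch below is a generic illustration of that alignment step, not the ROBOCAST/IGSTK implementation; timestamps are invented.

```python
from bisect import bisect_left

def nearest_sample(timestamps, t):
    """Index of the sample whose timestamp is closest to t
    (timestamps must be sorted ascending)."""
    i = bisect_left(timestamps, t)
    if i == 0:
        return 0
    if i == len(timestamps):
        return len(timestamps) - 1
    # pick whichever neighbour is closer; ties go to the earlier sample
    return i if timestamps[i] - t < t - timestamps[i - 1] else i - 1

def align(tracker_ts, image_ts):
    """Pair each image timestamp with the nearest tracker sample index."""
    return [(t, nearest_sample(tracker_ts, t)) for t in image_ts]

# tracker at 100 Hz (10 ms spacing), ultrasound frames at 25 fps (40 ms)
pairs = align([0, 10, 20, 30, 40], [12, 38])
```

In a real data server the tracker buffer would be a bounded ring and lookup would happen at image-arrival time, but the nearest-timestamp rule is the same.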

  18. Computer systems for automatic earthquake detection

    USGS Publications Warehouse

    Stewart, S.W.

    1974-01-01

    U.S. Geological Survey seismologists in Menlo Park, California, are utilizing the speed, reliability, and efficiency of minicomputers to monitor seismograph stations and to automatically detect earthquakes. An earthquake detection computer system, believed to be the only one of its kind in operation, automatically reports about 90 percent of all local earthquakes recorded by a network of over 100 central California seismograph stations. The system also monitors the stations for signs of malfunction or abnormal operation. Before the automatic system was put in operation, all of the earthquakes recorded had to be detected by manually searching the records, a time-consuming process. With the automatic detection system, the stations are efficiently monitored continuously.
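
The abstract does not say which trigger algorithm the USGS system used, but the classic automatic-detection technique for this problem is the short-term-average/long-term-average (STA/LTA) ratio, sketched here on an invented amplitude trace as a hedged illustration.

```python
def sta_lta(signal, n_sta=3, n_lta=10):
    """STA/LTA detector sketch: ratio of a short-term mean absolute
    amplitude to a long-term one; a detection is declared wherever the
    ratio exceeds a chosen threshold. Assumes a nonzero baseline so the
    long-term average never divides by zero."""
    ratios = []
    for i in range(n_lta, len(signal) + 1):
        sta = sum(abs(x) for x in signal[i - n_sta:i]) / n_sta
        lta = sum(abs(x) for x in signal[i - n_lta:i]) / n_lta
        ratios.append(sta / lta)
    return ratios

# quiet background noise followed by an arriving phase (values illustrative)
trace = [1.0] * 20 + [10.0] * 5
ratios = sta_lta(trace)
triggered = max(ratios) > 2.0  # threshold is a tuning parameter
```

Real systems add band-pass filtering, per-station thresholds, and network-level coincidence logic (several stations must trigger) before an event is declared.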

  19. A Distributed Processing Approach to Payroll Time Reporting for a Large School District.

    ERIC Educational Resources Information Center

    Freeman, Raoul J.

    1983-01-01

    Describes a system for payroll reporting from geographically disparate locations in which data is entered, edited, and verified locally on minicomputers and then uploaded to a central computer for the standard payroll process. Communications and hardware, time-reporting software, data input techniques, system implementation, and its advantages are…

  20. The Montana experience

    NASA Technical Reports Server (NTRS)

    Dundas, T. R.

    1981-01-01

    The development and capabilities of the Montana geodata system are discussed. The system is entirely dependent on the state's central data processing facility which serves all agencies and is therefore restricted to batch mode processing. The computer graphics equipment is briefly described along with its application to state lands and township mapping and the production of water quality interval maps.

  1. Advance development of a technique for characterizing the thermomechanical properties of thermally stable polymers

    NASA Technical Reports Server (NTRS)

    Gillham, J. K.; Stadnicki, S. J.; Hazony, Y.

    1974-01-01

    The torsional braid experiment has been interfaced with a centralized hierarchical computing system for data acquisition and data processing. Such a system, when matched by the appropriate upgrading of the monitoring techniques, provides high resolution thermomechanical spectra of rigidity and damping, and their derivatives with respect to temperature.

  2. 16. VIEW OF THE ENRICHED URANIUM RECOVERY SYSTEM. ENRICHED URANIUM ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    16. VIEW OF THE ENRICHED URANIUM RECOVERY SYSTEM. ENRICHED URANIUM RECOVERY PROCESSED RELATIVELY PURE MATERIALS AND SOLUTIONS AND SOLID RESIDUES WITH RELATIVELY LOW URANIUM CONTENT. URANIUM RECOVERY INVOLVED BOTH SLOW AND FAST PROCESSES. (4/4/66) - Rocky Flats Plant, General Manufacturing, Support, Records-Central Computing, Southern portion of Plant, Golden, Jefferson County, CO

  3. Mechanistic experimental pain assessment in computer users with and without chronic musculoskeletal pain.

    PubMed

    Ge, Hong-You; Vangsgaard, Steffen; Omland, Øyvind; Madeleine, Pascal; Arendt-Nielsen, Lars

    2014-12-06

    Musculoskeletal pain from the upper extremity and shoulder region is commonly reported by computer users. However, the functional status of central pain mechanisms, i.e., central sensitization and conditioned pain modulation (CPM), has not been investigated in this population. The aim was to evaluate sensitization and CPM in computer users with and without chronic musculoskeletal pain. Pressure pain threshold (PPT) mapping in the neck-shoulder region (15 points) and the elbow (12 points) was assessed together with PPT measurement at mid-point in the tibialis anterior (TA) muscle among 47 computer users with chronic upper extremity and/or neck-shoulder pain (pain group) and 17 pain-free computer users (control group). Induced pain intensities and profiles over time were recorded using a 0-10 cm electronic visual analogue scale (VAS) in response to different levels of pressure stimuli on the forearm with a new technique of dynamic pressure algometry. The efficiency of CPM was assessed using cuff-induced pain as the conditioning pain stimulus and PPT at TA as the test stimulus. The demographics, job seniority and number of working hours/week using a computer were similar between groups. The PPTs measured at all 15 points in the neck-shoulder region were not significantly different between groups. There were no significant differences between groups in either PPTs or pain intensity induced by dynamic pressure algometry. No significant difference in PPT was observed in TA between groups. During CPM, a significant increase in PPT at TA was observed in both groups (P < 0.05) without significant differences between groups. For the chronic pain group, higher clinical pain intensity, lower PPT values from the neck-shoulder and higher pain intensity evoked by the roller were all correlated with less efficient descending pain modulation (P < 0.05).
This suggests that the excitability of the central pain system is normal in a large group of computer users with low pain intensity chronic upper extremity and/or neck-shoulder pain and that increased excitability of the pain system cannot explain the reported pain. However, computer users with higher pain intensity and lower PPTs were found to have decreased efficiency in descending pain modulation.

  4. An Omnidirectional Vision Sensor Based on a Spherical Mirror Catadioptric System.

    PubMed

    Barone, Sandro; Carulli, Marina; Neri, Paolo; Paoli, Alessandro; Razionale, Armando Viviano

    2018-01-31

    The combination of mirrors and lenses, which defines a catadioptric sensor, is widely used in the computer vision field. The definition of a catadioptric sensor is based on three main features: hardware setup, projection modelling and calibration process. In this paper, a complete description of these aspects is given for an omnidirectional sensor based on a spherical mirror. The projection model of a catadioptric system can be described by the forward projection task (FP, from a 3D scene point to 2D pixel coordinates) and the backward projection task (BP, from 2D coordinates to the 3D direction of the incident light). The forward projection of non-central catadioptric vision systems, typically obtained by using curved mirrors, is usually modelled by using a central approximation and/or by adopting iterative approaches. In this paper, an analytical closed-form solution to compute both forward and backward projection for a non-central catadioptric system with a spherical mirror is presented. In particular, the forward projection is reduced to a 4th-order polynomial by determining the reflection point on the mirror surface through the intersection between a sphere and an ellipse. A matrix format of the implemented models, suitable for fast point cloud handling, is also described. A robust calibration procedure is also proposed and applied to calibrate a catadioptric sensor by determining the mirror radius and center with respect to the camera.
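
The backward-projection (BP) step for a spherical mirror has a simple closed form: intersect the camera ray with the mirror sphere and reflect it about the surface normal. The snippet below sketches only that geometry (camera at the origin; the mirror center and radius are made-up calibration values), not the paper's 4th-order forward-projection polynomial.

```python
import math

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _normalize(v):
    n = math.sqrt(_dot(v, v))
    return [x / n for x in v]

def backward_project(ray_dir, mirror_center, radius):
    """Trace a camera ray (camera at the origin) to the mirror sphere and
    return (reflection point, reflected direction), or None on a miss."""
    d = _normalize(ray_dir)
    c = mirror_center
    # |t d - c|^2 = r^2  ->  t^2 - 2 t (d . c) + |c|^2 - r^2 = 0
    b = _dot(d, c)
    disc = b * b - (_dot(c, c) - radius * radius)
    if disc < 0:
        return None
    t = b - math.sqrt(disc)               # nearer of the two intersections
    p = [t * di for di in d]              # reflection point on the mirror
    n = _normalize([pi - ci for pi, ci in zip(p, c)])        # surface normal
    r = [di - 2 * _dot(d, n) * ni for di, ni in zip(d, n)]   # reflect d
    return p, r

# a ray straight down the optical axis hits the sphere head-on, bounces back
p, r = backward_project([0.0, 0.0, 1.0], [0.0, 0.0, 2.0], 1.0)
```

Forward projection is the hard direction precisely because this mapping must be inverted: given a scene point, the reflection point satisfying the mirror equation must be found, which is what the paper's sphere-ellipse intersection reduces to a quartic.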

  5. An Omnidirectional Vision Sensor Based on a Spherical Mirror Catadioptric System

    PubMed Central

    Barone, Sandro; Carulli, Marina; Razionale, Armando Viviano

    2018-01-01

    The combination of mirrors and lenses, which defines a catadioptric sensor, is widely used in the computer vision field. The definition of a catadioptric sensor is based on three main features: hardware setup, projection modelling and calibration process. In this paper, a complete description of these aspects is given for an omnidirectional sensor based on a spherical mirror. The projection model of a catadioptric system can be described by the forward projection task (FP, from 3D scene point to 2D pixel coordinates) and the backward projection task (BP, from 2D pixel coordinates to the 3D direction of the incident light). The forward projection of non-central catadioptric vision systems, typically obtained by using curved mirrors, is usually modelled by using a central approximation and/or by adopting iterative approaches. In this paper, an analytical closed-form solution to compute both forward and backward projection for a non-central catadioptric system with a spherical mirror is presented. In particular, the forward projection is reduced to a 4th-order polynomial by determining the reflection point on the mirror surface through the intersection between a sphere and an ellipse. A matrix format of the implemented models, suitable for fast point-cloud handling, is also described. A robust calibration procedure is proposed and applied to calibrate the sensor by determining the mirror radius and center with respect to the camera. PMID:29385051

  6. An Overview of the Computational Physics and Methods Group at Los Alamos National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Randal Scott

    CCS Division was formed to strengthen the visibility and impact of computer science and computational physics research on strategic directions for the Laboratory. Both computer science and computational science are now central to scientific discovery and innovation. They have become indispensable tools for all other scientific missions at the Laboratory. CCS Division forms a bridge between external partners and Laboratory programs, bringing new ideas and technologies to bear on today’s important problems and attracting high-quality technical staff members to the Laboratory. The Computational Physics and Methods Group CCS-2 conducts methods research and develops scientific software aimed at the latest and emerging HPC systems.

  7. Computing and information services at the Jet Propulsion Laboratory - A management approach to a diversity of needs

    NASA Technical Reports Server (NTRS)

    Felberg, F. H.

    1984-01-01

    The Jet Propulsion Laboratory, a research and development organization with about 5,000 employees, presents a complicated set of requirements for an institutional system of computing and informational services. The approach taken by JPL in meeting this challenge is one of controlled flexibility. A central communications network is provided, together with selected computing facilities for common use. At the same time, staff members are given considerable discretion in choosing the mini- and microcomputers that they believe will best serve their needs. Consultation services, computer education, and other support functions are also provided.

  8. ERA 1103 UNIVAC 2 Calculating Machine

    NASA Image and Video Library

    1955-09-21

    The new 10-by 10-Foot Supersonic Wind Tunnel at the Lewis Flight Propulsion Laboratory included high-tech data acquisition and analysis systems. The reliable gathering of pressure, speed, temperature, and other data from test runs in the facilities was critical to the research process. Throughout the 1940s and early 1950s female employees, known as computers, recorded all test data and performed initial calculations by hand. The introduction of punch card computers in the late 1940s gradually reduced the number of hands-on calculations. In the mid-1950s new computational machines were installed in the office building of the 10-by 10-Foot tunnel. The new systems included this UNIVAC 1103 vacuum tube computer, the lab’s first centralized computer system. The programming was done on paper tape and fed into the machine. The 10-by 10 computer center also included the Lewis-designed Computer Automated Digital Encoder (CADDE) and Digital Automated Multiple Pressure Recorder (DAMPR) systems, which converted test data to binary-coded decimal numbers and recorded test pressures automatically, respectively. The systems primarily served the 10-by 10, but were also applied to the other large facilities. Engineering Research Associates (ERA) developed the initial UNIVAC computer for the Navy in the late 1940s. In 1952 the company designed a commercial version, the UNIVAC 1103. The 1103, the first computer designed by Seymour Cray, was among the first commercially successful scientific computers.

  9. Water budgets for major streams in the Central Valley, California, 1961-77

    USGS Publications Warehouse

    Mullen, J.R.; Nady, Paul

    1985-01-01

    A compilation of annual streamflow data for 20 major stream systems in the Central Valley of California, for water years 1961-77, is presented. The water-budget tables list gaged and ungaged inflow from tributaries and canals, diversions, and gaged outflow. Theoretical outflow and gain or loss in a reach are computed. A schematic diagram and explanation of the data are provided for each water-budget table. (USGS)
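    The per-reach bookkeeping described above amounts to simple budget arithmetic; a sketch (the field names, sign convention, and numbers are assumptions for illustration, and the report's tables define the actual terms):

```python
def reach_budget(gaged_inflow, ungaged_inflow, diversions, gaged_outflow):
    """Water budget for one stream reach (all values in acre-feet per year).

    Theoretical outflow is what the reach would carry if it neither gained
    nor lost water; the residual is the computed gain (+) or loss (-).
    """
    theoretical_outflow = gaged_inflow + ungaged_inflow - diversions
    gain_or_loss = gaged_outflow - theoretical_outflow
    return theoretical_outflow, gain_or_loss

# Hypothetical reach: 500,000 af gaged in, 40,000 af from tributaries,
# 120,000 af diverted, 430,000 af measured at the downstream gage.
theo, delta = reach_budget(500_000, 40_000, 120_000, 430_000)
print(theo, delta)  # -> 420000 10000 (a 10,000 af gain in the reach)
```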

  10. DOD Hotline Allegations on Army Use of A Computer Contract

    DTIC Science & Technology

    1993-10-29

    Army, the Navy, and the Defense Logistics Agency central order processing offices and reviewed delivery orders issued on the EDS contract. The contracting officers used the EDS contract line item number and the description when completing a delivery order. The central order processing offices used an automated data base system to match contract line item numbers from the delivery orders to the EDS contract. EDS verified that ...

  11. An algorithm for solving the perturbed gas dynamic equations

    NASA Technical Reports Server (NTRS)

    Davis, Sanford

    1993-01-01

    The present application of a compact, higher-order central-difference approximation to the linearized Euler equations illustrates the multimodal character of these equations by means of computations for acoustic, vortical, and entropy waves. Such dissipationless central-difference methods are shown to propagate waves exhibiting excellent phase and amplitude resolution on the basis of relatively large time-steps; they can be applied to wave problems governed by systems of first-order partial differential equations.
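    As a concrete illustration of the accuracy of central differencing, the standard explicit fourth-order stencil (not the compact scheme of the paper) can be written as:

```python
import math

def d1_central4(f, x, h=1e-3):
    """Fourth-order central-difference approximation of f'(x):

        f'(x) ~= (-f(x+2h) + 8 f(x+h) - 8 f(x-h) + f(x-2h)) / (12 h)

    with truncation error O(h^4). Like all central differences, the
    stencil is antisymmetric in h and therefore non-dissipative.
    """
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12*h)

# The derivative of sin at 0 is cos(0) = 1; the stencil recovers it
# to roughly machine-precision-limited accuracy.
print(d1_central4(math.sin, 0.0))
```

Compact schemes of the kind used in the paper achieve comparable or better resolution with a narrower stencil by solving a small linear system for the derivatives.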

  12. System and Method for Monitoring Distributed Asset Data

    NASA Technical Reports Server (NTRS)

    Gorinevsky, Dimitry (Inventor)

    2015-01-01

    A computer-based monitoring system and monitoring method implemented in computer software for detecting, estimating, and reporting the condition states, their changes, and anomalies for many assets. The assets are of the same type, are operated over a period of time, and are outfitted with data collection systems. The proposed monitoring method accounts for variability of working conditions for each asset by using a regression model that characterizes asset performance. The assets are of the same type but not identical. The proposed monitoring method accounts for asset-to-asset variability; it also accounts for drifts and trends in the asset condition and data. The proposed monitoring system can perform distributed processing of massive amounts of historical data without discarding any useful information where moving all the asset data into one central computing system might be infeasible. The overall processing includes distributed preprocessing of data records from each asset to produce compressed data.
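    A minimal sketch of the per-asset regression idea (the variable names, the straight-line model, and the threshold rule are illustrative assumptions; the patented method is more general):

```python
import numpy as np

def fit_baseline(load, output):
    """Least-squares line output ~= a*load + b characterizing one asset's
    normal performance from its historical records."""
    A = np.vstack([load, np.ones_like(load)]).T
    coeffs, *_ = np.linalg.lstsq(A, output, rcond=None)
    return coeffs  # (a, b)

def flag_anomalies(load, output, coeffs, tol):
    """Flag records whose deviation from the baseline exceeds tol."""
    a, b = coeffs
    return np.abs(output - (a * load + b)) > tol

# Historical records for one asset: performance follows output = 2*load + 1.
hist_load = np.arange(10.0)
hist_out = 2.0 * hist_load + 1.0
coeffs = fit_baseline(hist_load, hist_out)

# New records; the middle one deviates from its expected value of 9.0.
new_load = np.array([3.0, 4.0, 5.0])
new_out = np.array([7.0, 9.4, 11.0])
print(flag_anomalies(new_load, new_out, coeffs, tol=0.2))  # -> [False  True False]
```

Because each asset's baseline can be fitted from its own records, this step parallelizes naturally across assets, which is the point of the distributed preprocessing.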

  13. Virtualized Networks and Virtualized Optical Line Terminal (vOLT)

    NASA Astrophysics Data System (ADS)

    Ma, Jonathan; Israel, Stephen

    2017-03-01

    The success of the Internet and the proliferation of Internet of Things (IoT) devices are forcing telecommunications carriers to re-architect the central office as a datacenter (CORD) so as to bring datacenter economics and cloud agility to the central office (CO). The Open Network Operating System (ONOS) is the first open-source software-defined network (SDN) operating system which is capable of managing and controlling network, computing, and storage resources to support CORD infrastructure and network virtualization. The virtualized Optical Line Termination (vOLT) is one of the key components in such virtualized networks.

  14. Primary central nervous system lymphoma with lymphomatosis cerebri in an immunocompetent child: MRI and 18F-FDG PET-CT findings.

    PubMed

    Jain, Tarun K; Sharma, Punit; Suman, Sudhir K C; Faizi, Nauroze A; Bal, Chandrasekhar; Kumar, Rakesh

    2013-01-01

    Primary central nervous system lymphoma (PCNSL) is extremely rare in immunocompetent children. We present the magnetic resonance imaging (MRI) and (18)F-fluorodeoxyglucose ((18)F-FDG) positron emission tomography-computed tomography (PET-CT) findings of such a case in a 14-year-old immunocompetent boy. In this patient, PCNSL was associated with lymphomatosis cerebri. Familiarity with the findings of this rare condition will improve the diagnostic confidence of the nuclear radiologist and avoid misdiagnosis. Copyright © 2013 Elsevier España, S.L. and SEMNIM. All rights reserved.

  15. Laboratory-based ROTEM(®) analysis: implementing pneumatic tube transport and real-time graphic transmission.

    PubMed

    Colucci, G; Giabbani, E; Barizzi, G; Urwyler, N; Alberio, L

    2011-08-01

    ROTEM(®) is considered a helpful point-of-care device to monitor blood coagulation. Centrally performed analysis is desirable, but rapid transport of blood samples and real-time transmission of graphic results are important prerequisites. The effect of sample transport through a pneumatic tube system on ROTEM(®) results is unknown. The aims of the present work were (i) to determine the influence of blood sample transport through a pneumatic tube system on ROTEM(®) parameters compared to manual transportation, and (ii) to verify whether graphic results can be transmitted on line via virtual network computing using local area network to the physician in charge of the patient. Single centre study with 30 normal volunteers. Two whole blood samples were transferred to the central haematology laboratory by either normal transport or pneumatic delivery. EXTEM, INTEM, FIBTEM and APTEM were analysed in parallel with two ROTEM(®) devices and compared. Connection between central laboratory, emergency and operating rooms was established using local area network. All collected ROTEM(®) parameters were within normal limits. No statistically significant differences between normal transport and pneumatic delivery were observed. Real-time transmission of the original ROTEM(®) curves using local area network is feasible and easy to establish. At our institution, transport of blood samples by pneumatic delivery does not influence ROTEM(®) parameters. Blood samples can be analysed centrally, and results transmitted live via virtual network computing to emergency or operating rooms. Before analysing blood samples centrally, the type of sample transport should be tested to exclude in vitro blood activation by the local pneumatic transport system. © 2011 Blackwell Publishing Ltd.

  16. Uncovering many-body correlations in nanoscale nuclear spin baths by central spin decoherence

    PubMed Central

    Ma, Wen-Long; Wolfowicz, Gary; Zhao, Nan; Li, Shu-Shen; Morton, John J.L.; Liu, Ren-Bao

    2014-01-01

    Central spin decoherence caused by nuclear spin baths is often a critical issue in various quantum computing schemes, and it has also been used for sensing single-nuclear spins. Recent theoretical studies suggest that central spin decoherence can act as a probe of many-body physics in spin baths; however, identification and detection of many-body correlations of nuclear spins in nanoscale systems are highly challenging. Here, taking a phosphorus donor electron spin in a 29Si nuclear spin bath as our model system, we discover both theoretically and experimentally that many-body correlations in nanoscale nuclear spin baths produce identifiable signatures in decoherence of the central spin under multiple-pulse dynamical decoupling control. We demonstrate that under control by an odd or even number of pulses, the central spin decoherence is principally caused by second- or fourth-order nuclear spin correlations, respectively. This study marks an important step toward studying many-body physics using spin qubits. PMID:25205440

  17. Topographic steep central islands following excimer laser photorefractive keratectomy

    NASA Astrophysics Data System (ADS)

    Krueger, Ronald R.; McDonnell, Peter J.

    1994-06-01

    The purpose of this study is to demonstrate that topographic irregularities in the form of central islands of higher refractive power can be seen following excimer laser refractive surgery. We reviewed the computerized corneal topographic maps of 35 patients undergoing excimer laser PRK for compound myopic astigmatism or anisometropia from 8/91 to 8/93 at the USC/Doheny Eye Institute. The topographic maps were generated by the Computed Anatomy Corneal Modeling System, and central islands were defined as topographic areas of steepening of at least 3 diopters and 3 mm in diameter. A grading system was developed based on the presence of central islands during the postoperative period. Visually significant topographic steep central islands may be seen in over 50% of patients at 1 month following excimer laser PRK, and persist at 3 months in up to 24% of patients without nitrogen gas blowing. Loss of best corrected visual acuity or ghosting is associated with island formation, and may prolong visual rehabilitation after excimer laser PRK.

  18. Web-based remote monitoring of infant incubators in the ICU.

    PubMed

    Shin, D I; Huh, S J; Lee, T S; Kim, I Y

    2003-09-01

    A web-based real-time operating, management, and monitoring system for checking temperature and humidity within infant incubators using the Intranet has been developed and installed in the infant Intensive Care Unit (ICU). We have created a pilot system which has a temperature and humidity sensor and a measuring module in each incubator, which is connected to a web-server board via an RS485 port. The system transmits signals using standard web-based TCP/IP so that users can access the system from any Internet-connected personal computer in the hospital. Using this method, the system gathers temperature and humidity data transmitted from the measuring modules via the RS485 port on the web-server board and creates a web document containing these data. The system manager can maintain centralized supervisory monitoring of the situations in all incubators while sitting within the infant ICU at a work space equipped with a personal computer. The system can be set to monitor unusual circumstances and to emit an alarm signal expressed as a sound or a light on a measuring module connected to the related incubator. If the system is configured with a large number of incubators connected to a centralized supervisory monitoring station, it will improve convenience and assure meaningful improvement in response to incidents that require intervention.
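    The alarm check for unusual circumstances might look like the following sketch (the threshold values are illustrative defaults, not the unit's clinical settings):

```python
def check_incubator(temp_c, humidity_pct,
                    temp_range=(36.0, 37.5), hum_range=(40.0, 60.0)):
    """Return alarm messages for one incubator reading; an empty list
    means the reading is within the configured ranges (hypothetical
    defaults chosen for illustration)."""
    alarms = []
    if not temp_range[0] <= temp_c <= temp_range[1]:
        alarms.append("temperature out of range")
    if not hum_range[0] <= humidity_pct <= hum_range[1]:
        alarms.append("humidity out of range")
    return alarms

print(check_incubator(36.8, 50.0))  # -> []
print(check_incubator(38.2, 65.0))  # -> ['temperature out of range', 'humidity out of range']
```

In the described system, a non-empty result would trigger the sound or light alarm on the measuring module connected to the affected incubator.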

  19. A scalable quantum computer with ions in an array of microtraps

    PubMed

    Cirac; Zoller

    2000-04-06

    Quantum computers require the storage of quantum information in a set of two-level systems (called qubits), the processing of this information using quantum gates and a means of final readout. So far, only a few systems have been identified as potentially viable quantum computer models--accurate quantum control of the coherent evolution is required in order to realize gate operations, while at the same time decoherence must be avoided. Examples include quantum optical systems (such as those utilizing trapped ions or neutral atoms, cavity quantum electrodynamics and nuclear magnetic resonance) and solid state systems (using nuclear spins, quantum dots and Josephson junctions). The most advanced candidates are the quantum optical and nuclear magnetic resonance systems, and we expect that they will allow quantum computing with about ten qubits within the next few years. This is still far from the numbers required for useful applications: for example, the factorization of a 200-digit number requires about 3,500 qubits, rising to 100,000 if error correction is implemented. Scalability of proposed quantum computer architectures to many qubits is thus of central importance. Here we propose a model for an ion trap quantum computer that combines scalability (a feature usually associated with solid state proposals) with the advantages of quantum optical systems (in particular, quantum control and long decoherence times).

  20. Risk of central nervous system defects in offspring of women with and without mental illness.

    PubMed

    Ayoub, Aimina; Fraser, William D; Low, Nancy; Arbour, Laura; Healy-Profitós, Jessica; Auger, Nathalie

    2018-02-22

    We sought to determine the relationship between maternal mental illness and the risk of having an infant with a central nervous system defect. We analyzed a cohort of 654,882 women aged less than 20 years between 1989 and 2013 who later delivered a live born infant in any hospital in Quebec, Canada. The primary exposure was mental illness during pregnancy or hospitalization for mental illness before pregnancy. The outcomes were neural and non-neural tube defects of the central nervous system in any offspring. We computed risk ratios (RR) and 95% confidence intervals (CI) for the association between mental disorders and risk of central nervous system defects in log-binomial regression models adjusted for age at delivery, total parity, comorbidity, socioeconomic deprivation, place of residence, and time period. Maternal mental illness was associated with an increased risk of nervous system defects in offspring (RR 1.76, 95% CI 1.64-1.89). Hospitalization for any mental disorder was more strongly associated with non-neural tube (RR 1.84, 95% CI 1.71-1.99) than neural tube defects (RR 1.31, 95% CI 1.08-1.59). Women at greater risk of nervous system defects in offspring tended to be diagnosed with multiple mental disorders, have more than one hospitalization for mental disease, or be 17 or older at first hospitalization. A history of mental illness is associated with central nervous system defects in offspring. Women hospitalized for mental illness may merit counseling at first symptoms to prevent central nervous system defects at pregnancy.
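    The study fits adjusted log-binomial models; for intuition, the unadjusted risk ratio and its Wald-type 95% CI from a 2x2 table can be computed as below (the counts are hypothetical, not the study's data):

```python
import math

def risk_ratio_ci(cases_exp, n_exp, cases_unexp, n_unexp, z=1.96):
    """Unadjusted risk ratio with a 95% CI on the log scale:

        RR = (a/n1) / (b/n0)
        SE(log RR) = sqrt(1/a - 1/n1 + 1/b - 1/n0)
    """
    rr = (cases_exp / n_exp) / (cases_unexp / n_unexp)
    se = math.sqrt(1/cases_exp - 1/n_exp + 1/cases_unexp - 1/n_unexp)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts: 30/1000 defects among exposed, 15/1000 among unexposed.
rr, lo, hi = risk_ratio_ci(30, 1000, 15, 1000)
print(round(rr, 2), round(lo, 2), round(hi, 2))  # -> 2.0 1.08 3.69
```

The adjusted RRs reported in the abstract come from regression models that additionally control for parity, comorbidity, deprivation, and the other listed covariates.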

  1. Fluid Centrality: A Social Network Analysis of Social-Technical Relations in Computer-Mediated Communication

    ERIC Educational Resources Information Center

    Enriquez, Judith Guevarra

    2010-01-01

    In this article, centrality is explored as a measure of computer-mediated communication (CMC) in networked learning. Centrality measures are quite common in social network analysis (SNA) and in analyses of social cohesion, strength of ties, and influence in CMC and computer-supported collaborative learning research. It argues that measuring…
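    As a concrete example of a centrality measure, normalized degree centrality for a small (hypothetical) CMC interaction network can be computed directly:

```python
def degree_centrality(edges):
    """Normalized degree centrality: each node's degree divided by (n - 1),
    the maximum possible number of ties in an undirected network of n nodes."""
    nodes = {n for edge in edges for n in edge}
    degree = {n: 0 for n in nodes}
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    n = len(nodes)
    return {node: d / (n - 1) for node, d in degree.items()}

# Hypothetical who-replied-to-whom ties in a discussion forum.
edges = [("ana", "ben"), ("ana", "cho"), ("ana", "dee"), ("ben", "cho")]
c = degree_centrality(edges)
print(c["ana"])  # -> 1.0 (tied to all three other participants)
```

Other common SNA centralities (betweenness, closeness, eigenvector) follow the same pattern of reducing a node's structural position to a single score.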

  2. Nature as a network of morphological infocomputational processes for cognitive agents

    NASA Astrophysics Data System (ADS)

    Dodig-Crnkovic, Gordana

    2017-01-01

    This paper presents a view of nature as a network of infocomputational agents organized in a dynamical hierarchy of levels. It provides a framework for unification of currently disparate understandings of natural, formal, technical, behavioral and social phenomena based on information as a structure, differences in one system that cause the differences in another system, and computation as its dynamics, i.e. physical process of morphological change in the informational structure. We address some of the frequent misunderstandings regarding the natural/morphological computational models and their relationships to physical systems, especially cognitive systems such as living beings. Natural morphological infocomputation as a conceptual framework necessitates generalization of models of computation beyond the traditional Turing machine model presenting symbol manipulation, and requires agent-based concurrent resource-sensitive models of computation in order to be able to cover the whole range of phenomena from physics to cognition. The central role of agency, particularly material vs. cognitive agency is highlighted.

  3. Graphics processing units in bioinformatics, computational biology and systems biology.

    PubMed

    Nobile, Marco S; Cazzaniga, Paolo; Tangherloni, Andrea; Besozzi, Daniela

    2017-09-01

    Several studies in Bioinformatics, Computational Biology and Systems Biology rely on the definition of physico-chemical or mathematical models of biological systems at different scales and levels of complexity, ranging from the interaction of atoms in single molecules up to genome-wide interaction networks. Traditional computational methods and software tools developed in these research fields share a common trait: they can be computationally demanding on Central Processing Units (CPUs), therefore limiting their applicability in many circumstances. To overcome this issue, general-purpose Graphics Processing Units (GPUs) are gaining increasing attention from the scientific community, as they can considerably reduce the running time required by standard CPU-based software and allow more intensive investigations of biological systems. In this review, we present a collection of GPU tools recently developed to perform computational analyses in life science disciplines, emphasizing the advantages and the drawbacks in the use of these parallel architectures. The complete list of GPU-powered tools here reviewed is available at http://bit.ly/gputools. © The Author 2016. Published by Oxford University Press.

  4. Memory interface simulator: A computer design aid

    NASA Technical Reports Server (NTRS)

    Taylor, D. S.; Williams, T.; Weatherbee, J. E.

    1972-01-01

    Results are presented of a study conducted with a digital simulation model being used in the design of the Automatically Reconfigurable Modular Multiprocessor System (ARMMS), a candidate computer system for future manned and unmanned space missions. The model simulates the activity involved as instructions are fetched from random access memory for execution in one of the system's central processing units. A series of model runs measured instruction execution time under various assumptions pertaining to the CPUs and the interface between the CPUs and RAM. Design tradeoffs are presented in the following areas: bus widths, CPU microprogram read-only memory cycle time, multiple instruction fetch, and instruction mix.
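    The bus-width tradeoff can be illustrated with a toy timing model (this is not the ARMMS simulator, just a sketch of why wider buses reduce fetch time):

```python
def fetch_time_ns(word_bits, bus_bits, mem_cycle_ns, words=1):
    """Nanoseconds to fetch `words` instruction words over a memory bus:
    each memory cycle transfers bus_bits, so one word needs
    ceil(word_bits / bus_bits) cycles."""
    cycles_per_word = -(-word_bits // bus_bits)  # ceiling division
    return words * cycles_per_word * mem_cycle_ns

# Doubling the bus width halves the fetch time for 32-bit words
# (cycle time of 500 ns is an arbitrary illustrative figure).
print(fetch_time_ns(32, 16, 500))  # -> 1000
print(fetch_time_ns(32, 32, 500))  # -> 500
```

A simulator of the kind described would layer instruction mixes, multiple-fetch policies, and contention on top of a timing kernel like this.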

  5. An MPI-IO interface to HPSS

    NASA Technical Reports Server (NTRS)

    Jones, Terry; Mark, Richard; Martin, Jeanne; May, John; Pierce, Elsie; Stanberry, Linda

    1996-01-01

    This paper describes an implementation of the proposed MPI-IO (Message Passing Interface - Input/Output) standard for parallel I/O. Our system uses third-party transfer to move data over an external network between the processors where it is used and the I/O devices where it resides. Data travels directly from source to destination, without the need for shuffling it among processors or funneling it through a central node. Our distributed server model lets multiple compute nodes share the burden of coordinating data transfers. The system is built on the High Performance Storage System (HPSS), and a prototype version runs on a Meiko CS-2 parallel computer.

  6. FPGA-Based High-Performance Embedded Systems for Adaptive Edge Computing in Cyber-Physical Systems: The ARTICo³ Framework.

    PubMed

    Rodríguez, Alfonso; Valverde, Juan; Portilla, Jorge; Otero, Andrés; Riesgo, Teresa; de la Torre, Eduardo

    2018-06-08

    Cyber-Physical Systems are experiencing a paradigm shift in which processing has been relocated to the distributed sensing layer and is no longer performed in a centralized manner. This approach, usually referred to as Edge Computing, demands the use of hardware platforms that are able to manage the steadily increasing requirements in computing performance, while keeping energy efficiency and the adaptability imposed by the interaction with the physical world. In this context, SRAM-based FPGAs and their inherent run-time reconfigurability, when coupled with smart power management strategies, are a suitable solution. However, they usually fail in user accessibility and ease of development. In this paper, an integrated framework to develop FPGA-based high-performance embedded systems for Edge Computing in Cyber-Physical Systems is presented. This framework provides a hardware-based processing architecture, an automated toolchain, and a runtime to transparently generate and manage reconfigurable systems from high-level system descriptions without additional user intervention. Moreover, it provides users with support for dynamically adapting the available computing resources to switch the working point of the architecture in a solution space defined by computing performance, energy consumption and fault tolerance. Results show that it is indeed possible to explore this solution space at run time and prove that the proposed framework is a competitive alternative to software-based edge computing platforms, being able to provide not only faster solutions, but also higher energy efficiency for computing-intensive algorithms with significant levels of data-level parallelism.

  7. Design of the central region for axial injection in the VINCY cyclotron

    NASA Astrophysics Data System (ADS)

    Milinković, Ljiljana; Toprek, Dragan

    1996-02-01

    This paper describes the design of the central region for h = 1, h = 2 and h = 4 modes of acceleration in the VINCY cyclotron. A result worth reporting is that the central region is unique and compatible with the three above-mentioned harmonic modes of operation. Only one spiral-type inflector will be used. The central region is designed to operate with two external ion sources: (a) an ECR ion source with a maximum extraction voltage of 25 kV for heavy ions, and (b) a multicusp ion source with a maximum extraction voltage of 30 kV for H- and D- ions. Heavy ions will be accelerated by the second and fourth harmonics, D- ions by the second harmonic and H- ions by the first harmonic of the RF field. The central region is equipped with an axial injection system. The electric field distribution in the inflector and in the four acceleration gaps has been numerically calculated from an electric potential map produced by the program RELAX3D. The geometry of the central region has been tested with computations of orbits carried out by means of the computer code CYCLONE. The optical properties of the spiral inflector and the central region were studied by using the programs CASINO and CYCLONE, respectively. We have also made an effort to minimize the inflector fringe field using the RELAX3D program.

  8. Animals as Mobile Biological Sensors for Forest Fire Detection.

    PubMed

    Sahin, Yasar Guneri

    2007-12-04

    This paper proposes a mobile biological sensor system that can assist in early detection of forest fires, one of the most dreaded natural disasters on the earth. The main idea presented in this paper is to utilize animals with sensors as Mobile Biological Sensors (MBS). The devices used in this system are animals, which are native animals living in forests; sensors (thermo and radiation sensors with GPS features) that measure the temperature and transmit the location of the MBS; access points for wireless communication; and a central computer system which classifies animal actions. The system offers two different methods. Firstly, access points continuously receive data about animals' location using GPS at certain time intervals and the gathered data is then classified and checked to see if there is a sudden movement (panic) of the animal groups; this method is called animal behavior classification (ABC). The second method can be defined as thermal detection (TD): the access points get the temperature values from the MBS devices and send the data to a central computer to check for instant changes in the temperatures. This system may be used for many purposes other than fire detection, namely animal tracking, poaching prevention and detecting instantaneous animal death.

  9. Animals as Mobile Biological Sensors for Forest Fire Detection

    PubMed Central

    2007-01-01

    This paper proposes a mobile biological sensor system that can assist in early detection of forest fires, one of the most dreaded natural disasters on the earth. The main idea presented in this paper is to utilize animals with sensors as Mobile Biological Sensors (MBS). The devices used in this system are animals, which are native animals living in forests; sensors (thermo and radiation sensors with GPS features) that measure the temperature and transmit the location of the MBS; access points for wireless communication; and a central computer system which classifies animal actions. The system offers two different methods. Firstly, access points continuously receive data about animals' location using GPS at certain time intervals and the gathered data is then classified and checked to see if there is a sudden movement (panic) of the animal groups; this method is called animal behavior classification (ABC). The second method can be defined as thermal detection (TD): the access points get the temperature values from the MBS devices and send the data to a central computer to check for instant changes in the temperatures. This system may be used for many purposes other than fire detection, namely animal tracking, poaching prevention and detecting instantaneous animal death. PMID:28903281
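    The ABC panic check could be sketched as follows (the sampling interval, speed threshold, group-size rule, and planar coordinates are all assumptions for illustration, not values from the paper):

```python
import math

def speeds(track, dt_s):
    """Speeds (m/s) between consecutive planar (x, y) fixes taken dt_s apart."""
    return [math.dist(a, b) / dt_s for a, b in zip(track, track[1:])]

def panic(tracks, dt_s=60.0, threshold_ms=8.0, min_animals=2):
    """Hypothetical ABC rule: report panic when at least `min_animals`
    move abnormally fast within the same sampling window."""
    fast = [any(v > threshold_ms for v in speeds(t, dt_s)) for t in tracks]
    return sum(fast) >= min_animals

calm = [[(0, 0), (30, 0)], [(100, 0), (100, 40)]]        # ~0.5-0.7 m/s
fleeing = [[(0, 0), (600, 0)], [(100, 0), (100, 700)]]   # ~10-12 m/s
print(panic(calm), panic(fleeing))  # -> False True
```

A deployed classifier would work on geographic coordinates and learned per-species movement models rather than a fixed speed cutoff.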

  10. Using Microcomputers to Manage Grants.

    ERIC Educational Resources Information Center

    Joseph, Jonathan L.; And Others

    1982-01-01

    Features of microcomputer systems and software that can be useful in administration of research grants are outlined, including immediacy of reporting, flexibility, accurate balance availability, useful coding, accurate payroll control, and forecasting capabilities. These are contrasted with the less flexible centralized computer operation. (MSE)

  11. 18. VIEW OF THE GENERAL CHEMISTRY LAB. THE LABORATORY PROVIDED ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    18. VIEW OF THE GENERAL CHEMISTRY LAB. THE LABORATORY PROVIDED GENERAL ANALYTICAL AND STANDARDS CALIBRATION, AS WELL AS DEVELOPMENT OPERATIONS INCLUDING WASTE TECHNOLOGY DEVELOPMENT AND DEVELOPMENT AND TESTING OF MECHANICAL SYSTEMS FOR WEAPONS SYSTEMS. (4/4/66) - Rocky Flats Plant, General Manufacturing, Support, Records-Central Computing, Southern portion of Plant, Golden, Jefferson County, CO

  12. Shared-resource computing for small research labs.

    PubMed

    Ackerman, M J

    1982-04-01

    A real-time laboratory computer network is described. This network is composed of four real-time laboratory minicomputers located in each of four division laboratories and a larger minicomputer in a centrally located computer room. Off-the-shelf hardware and software were used with no customization. The network is configured for resource sharing using DECnet communications software and the RSX-11M multi-user real-time operating system. The cost effectiveness of the shared-resource network and multiple real-time processing using priority scheduling is discussed. Examples of utilization within a medical research department are given.

  13. Reconciliation of the cloud computing model with US federal electronic health record regulations

    PubMed Central

    2011-01-01

    Cloud computing refers to subscription-based, fee-for-service utilization of computer hardware and software over the Internet. The model is gaining acceptance for business information technology (IT) applications because it allows capacity and functionality to increase on the fly without major investment in infrastructure, personnel or licensing fees. Large IT investments can be converted to a series of smaller operating expenses. Cloud architectures could potentially be superior to traditional electronic health record (EHR) designs in terms of economy, efficiency and utility. A central issue for EHR developers in the US is that these systems are constrained by federal regulatory legislation and oversight. These laws focus on security and privacy, which are well-recognized challenges for cloud computing systems in general. EHRs built with the cloud computing model can achieve acceptable privacy and security through business associate contracts with cloud providers that specify compliance requirements, performance metrics and liability sharing. PMID:21727204

  14. [Research on the Application of Fuzzy Logic to Systems Analysis and Control

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Research conducted with the support of NASA Grant NCC2-275 has been focused in the main on the development of fuzzy logic and soft computing methodologies and their applications to systems analysis and control, with emphasis on problem areas which are of relevance to NASA's missions. One of the principal results of our research has been the development of a new methodology called Computing with Words (CW). Basically, in CW words drawn from a natural language are employed in place of numbers for computing and reasoning. There are two major imperatives for computing with words. First, computing with words is a necessity when the available information is too imprecise to justify the use of numbers, and second, when there is a tolerance for imprecision which can be exploited to achieve tractability, robustness, low solution cost, and better rapport with reality. Exploitation of the tolerance for imprecision is an issue of central importance in CW.

  15. Reconciliation of the cloud computing model with US federal electronic health record regulations.

    PubMed

    Schweitzer, Eugene J

    2012-01-01

    Cloud computing refers to subscription-based, fee-for-service utilization of computer hardware and software over the Internet. The model is gaining acceptance for business information technology (IT) applications because it allows capacity and functionality to increase on the fly without major investment in infrastructure, personnel or licensing fees. Large IT investments can be converted to a series of smaller operating expenses. Cloud architectures could potentially be superior to traditional electronic health record (EHR) designs in terms of economy, efficiency and utility. A central issue for EHR developers in the US is that these systems are constrained by federal regulatory legislation and oversight. These laws focus on security and privacy, which are well-recognized challenges for cloud computing systems in general. EHRs built with the cloud computing model can achieve acceptable privacy and security through business associate contracts with cloud providers that specify compliance requirements, performance metrics and liability sharing.

  16. Introduction to the LaRC central scientific computing complex

    NASA Technical Reports Server (NTRS)

    Shoosmith, John N.

    1993-01-01

    The computers and associated equipment that make up the Central Scientific Computing Complex of the Langley Research Center are briefly described. The electronic networks that provide access to the various components of the complex and a number of areas that can be used by Langley and contractor staff for special applications (scientific visualization, image processing, software engineering, and grid generation) are also described. Flight simulation facilities that use the central computers are described. Management of the complex, procedures for its use, and available services and resources are discussed. This document is intended for new users of the complex, for current users who wish to keep apprised of changes, and for visitors who need to understand the role of central scientific computers at Langley.

  17. Critical care procedure logging using handheld computers

    PubMed Central

    Carlos Martinez-Motta, J; Walker, Robin; Stewart, Thomas E; Granton, John; Abrahamson, Simon; Lapinsky, Stephen E

    2004-01-01

    Introduction We conducted this study to evaluate the feasibility of implementing an internet-linked handheld computer procedure logging system in a critical care training program. Methods Subspecialty trainees in the Interdepartmental Division of Critical Care at the University of Toronto received and were trained in the use of Palm handheld computers loaded with a customized program for logging critical care procedures. The procedures were entered into the handheld device using checkboxes and drop-down lists, and data were uploaded to a central database via the internet. To evaluate the feasibility of this system, we tracked the utilization of this data collection system. Benefits and disadvantages were assessed through surveys. Results All 11 trainees successfully uploaded data to the central database, but only six (55%) continued to upload data on a regular basis. The most common reason cited for not using the system pertained to initial technical problems with data uploading. From 1 July 2002 to 30 June 2003, a total of 914 procedures were logged. Significant variability was noted in the number of procedures logged by individual trainees (range 13–242). The database generated by regular users provided potentially useful information to the training program director regarding the scope and location of procedural training among the different rotations and hospitals. Conclusion A handheld computer procedure logging system can be effectively used in a critical care training program. However, user acceptance was not uniform, and continued training and support are required to increase user acceptance. Such a procedure database may provide valuable information that may be used to optimize trainees' educational experience and to document clinical training experience for licensing and accreditation. PMID:15469577

  18. Instrumentation and test methods of an automated radiated susceptibility system

    NASA Astrophysics Data System (ADS)

    Howard, M. W.; Deere, J.

    1983-09-01

    The instrumentation and test methods of an automated electromagnetic compatibility (EMC) system for performing radiated susceptibility tests from 14 kHz to 1000 MHz are described. Particular emphasis is given to the effectiveness of the system in the evaluation of electronic circuits for susceptibility to RF radiation. The system consists of a centralized data acquisition/control unit which interfaces with the equipment under test (EUT), the RF isolated field probes, and RF amplifier ALC output; four broadband linear RF amplifiers; and a frequency synthesizer with drive level increments in steps of 0.1 dB. Centralized control of the susceptibility test system is provided by a desktop computer. It is found that the system can reduce the execution time of RF susceptibility tests by as much as 70 percent. A block diagram of the system is provided.

  19. Adaptable radiation monitoring system and method

    DOEpatents

    Archer, Daniel E [Livermore, CA; Beauchamp, Brock R [San Ramon, CA; Mauger, G Joseph [Livermore, CA; Nelson, Karl E [Livermore, CA; Mercer, Michael B [Manteca, CA; Pletcher, David C [Sacramento, CA; Riot, Vincent J [Berkeley, CA; Schek, James L [Tracy, CA; Knapp, David A [Livermore, CA

    2006-06-20

    A portable radioactive-material detection system capable of detecting radioactive sources moving at high speeds. The system has at least one radiation detector capable of detecting gamma-radiation and coupled to an MCA capable of collecting spectral data in very small time bins of less than about 150 msec. A computer processor is connected to the MCA for determining from the spectral data if a triggering event has occurred. Spectral data is stored on a data storage device, and a power source supplies power to the detection system. Various configurations of the detection system may be adaptably arranged for various radiation detection scenarios. In a preferred embodiment, the computer processor operates as a server which receives spectral data from other networked detection systems, and communicates the collected data to a central data reporting system.

  20. Evaluation of the large scale computing needs of the energy research program and how to meet them. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, B.

    The Energy Research program may be on the verge of abdicating an important role it has traditionally played in the development and use of state-of-the-art computer systems. The lack of easy access to Class VI systems coupled to the easy availability of local, user-friendly systems is conspiring to drive many investigators away from forefront research in computational science and in the use of state-of-the-art computers for more discipline-oriented problem solving. The survey conducted under the auspices of this contract clearly demonstrates a significant suppressed demand for actual Class VI hours totaling the full capacity of one such system. The current usage is about a factor of 15 below this level. There is also a need for about 50% more capacity in the current mini/midi availability. Meeting the needs of the ER community for this level of computing power and capacity is most probably best achieved through the establishment of a central Class VI capability at some site linked through a nationwide network to the various ER laboratories and universities and interfaced with the local user-friendly systems at those remote sites.

  1. Vertebrobasilar system computed tomographic angiography in central vertigo

    PubMed Central

    Paşaoğlu, Lale

    2017-01-01

    Abstract The incidence of vertigo in the population is 20% to 30% and one-fourth of the cases are related to central causes. The aim of this study was to evaluate computed tomography angiography (CTA) findings of the vertebrobasilar system in central vertigo without stroke. CTA and magnetic resonance images of patients with vertigo were retrospectively evaluated. One hundred twenty-nine patients suspected of having central vertigo according to history, physical examination, and otological and neurological tests without signs of infarction on diffusion-weighted magnetic resonance imaging were included in the study. The control group included 120 patients with similar vascular disease risk factors but without vertigo. Vertebral and basilar artery diameters, hypoplasias, exit-site variations of vertebral artery, vertebrobasilar tortuosity, and stenosis of ≥50% detected on CTA were recorded for all patients. Independent-samples t test was used in variables with normal distribution, and Mann–Whitney U test in non-normal distribution. The difference of categorical variable distribution according to groups was analyzed with χ2 and/or Fisher exact test. Vertebral artery hypoplasia and ≥50% stenosis were seen more often in the vertigo group (P = 0.000, <0.001). Overall 78 (60.5%) vertigo patients had ≥50% stenosis, 54 (69.2%) had stenosis at V1 segment, 9 (11.5%) at V2 segment, 2 (2.5%) at V3 segment, and 13 (16.6%) at V4 segment. Both vertigo and control groups had similar basilar artery hypoplasia and ≥50% stenosis rates (P = 0.800, >0.05). CTA may be helpful to clarify the association between abnormal CTA findings of vertebral arteries and central vertigo. This article reveals the opportunity to diagnose posterior circulation abnormalities causing central vertigo with a feasible method such as CTA. PMID:28328808

  2. Vertebrobasilar system computed tomographic angiography in central vertigo.

    PubMed

    Paşaoğlu, Lale

    2017-03-01

    The incidence of vertigo in the population is 20% to 30% and one-fourth of the cases are related to central causes. The aim of this study was to evaluate computed tomography angiography (CTA) findings of the vertebrobasilar system in central vertigo without stroke. CTA and magnetic resonance images of patients with vertigo were retrospectively evaluated. One hundred twenty-nine patients suspected of having central vertigo according to history, physical examination, and otological and neurological tests, without signs of infarction on diffusion-weighted magnetic resonance imaging, were included in the study. The control group included 120 patients with similar vascular disease risk factors but without vertigo. Vertebral and basilar artery diameters, hypoplasias, exit-site variations of vertebral artery, vertebrobasilar tortuosity, and stenosis of ≥50% detected on CTA were recorded for all patients. Independent-samples t test was used in variables with normal distribution, and Mann–Whitney U test in non-normal distribution. The difference of categorical variable distribution according to groups was analyzed with χ2 and/or Fisher exact test. Vertebral artery hypoplasia and ≥50% stenosis were seen more often in the vertigo group (P = 0.000, <0.001). Overall 78 (60.5%) vertigo patients had ≥50% stenosis, 54 (69.2%) had stenosis at V1 segment, 9 (11.5%) at V2 segment, 2 (2.5%) at V3 segment, and 13 (16.6%) at V4 segment. Both vertigo and control groups had similar basilar artery hypoplasia and ≥50% stenosis rates (P = 0.800, >0.05). CTA may be helpful to clarify the association between abnormal CTA findings of vertebral arteries and central vertigo. This article reveals the opportunity to diagnose posterior circulation abnormalities causing central vertigo with a feasible method such as CTA.

  3. 18 CFR 367.9310 - Account 931, Rents.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... NATURAL GAS ACT UNIFORM SYSTEM OF ACCOUNTS FOR CENTRALIZED SERVICE COMPANIES SUBJECT TO THE PROVISIONS OF..., including taxes, paid for the property of others used, occupied or operated in connection with service... structure, office furniture, fixtures, computers, data processing equipment, microwave and telecommunication...

  4. 18 CFR 367.9310 - Account 931, Rents.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... NATURAL GAS ACT UNIFORM SYSTEM OF ACCOUNTS FOR CENTRALIZED SERVICE COMPANIES SUBJECT TO THE PROVISIONS OF..., including taxes, paid for the property of others used, occupied or operated in connection with service... structure, office furniture, fixtures, computers, data processing equipment, microwave and telecommunication...

  5. 18 CFR 367.9310 - Account 931, Rents.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... NATURAL GAS ACT UNIFORM SYSTEM OF ACCOUNTS FOR CENTRALIZED SERVICE COMPANIES SUBJECT TO THE PROVISIONS OF..., including taxes, paid for the property of others used, occupied or operated in connection with service... structure, office furniture, fixtures, computers, data processing equipment, microwave and telecommunication...

  6. 18 CFR 367.9310 - Account 931, Rents.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... NATURAL GAS ACT UNIFORM SYSTEM OF ACCOUNTS FOR CENTRALIZED SERVICE COMPANIES SUBJECT TO THE PROVISIONS OF..., including taxes, paid for the property of others used, occupied or operated in connection with service... structure, office furniture, fixtures, computers, data processing equipment, microwave and telecommunication...

  7. 18 CFR 367.9310 - Account 931, Rents.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... NATURAL GAS ACT UNIFORM SYSTEM OF ACCOUNTS FOR CENTRALIZED SERVICE COMPANIES SUBJECT TO THE PROVISIONS OF..., including taxes, paid for the property of others used, occupied or operated in connection with service... structure, office furniture, fixtures, computers, data processing equipment, microwave and telecommunication...

  8. Interoceptive inference: From computational neuroscience to clinic.

    PubMed

    Owens, Andrew P; Allen, Micah; Ondobaka, Sasha; Friston, Karl J

    2018-04-22

    The central and autonomic nervous systems can be defined by their anatomical, functional and neurochemical characteristics, but neither functions in isolation. For example, fundamental components of autonomically mediated homeostatic processes are afferent interoceptive signals reporting the internal state of the body and efferent signals acting on interoceptive feedback assimilated by the brain. Recent predictive coding (interoceptive inference) models formulate interoception in terms of embodied predictive processes that support emotion and selfhood. We propose that interoception may serve as a way to investigate holistic nervous system function and dysfunction in disorders of brain, body and behaviour. We appeal to predictive coding and (active) interoceptive inference to describe the homeostatic functions of the central and autonomic nervous systems. We do so by (i) reviewing the active inference formulation of interoceptive and autonomic function, (ii) surveying clinical applications of this formulation and (iii) describing how it offers an integrative approach to human physiology, particularly interactions between the central and peripheral nervous systems in health and disease. Crown Copyright © 2018. Published by Elsevier Ltd. All rights reserved.

  9. Utilization of parallel processing in solving the inviscid form of the average-passage equation system for multistage turbomachinery

    NASA Technical Reports Server (NTRS)

    Mulac, Richard A.; Celestina, Mark L.; Adamczyk, John J.; Misegades, Kent P.; Dawson, Jef M.

    1987-01-01

    A procedure is outlined which utilizes parallel processing to solve the inviscid form of the average-passage equation system for multistage turbomachinery along with a description of its implementation in a FORTRAN computer code, MSTAGE. A scheme to reduce the central memory requirements of the program is also detailed. Both the multitasking and I/O routines referred to are specific to the Cray X-MP line of computers and its associated SSD (Solid-State Disk). Results are presented for a simulation of a two-stage rocket engine fuel pump turbine.

  10. Using Swarming Agents for Scalable Security in Large Network Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crouse, Michael; White, Jacob L.; Fulp, Errin W.

    2011-09-23

    The difficulty of securing computer infrastructures increases as they grow in size and complexity. Network-based security solutions such as IDS and firewalls cannot scale because of exponentially increasing computational costs inherent in detecting the rapidly growing number of threat signatures. Host-based solutions like virus scanners and IDS suffer similar issues, and these are compounded when enterprises try to monitor these in a centralized manner. Swarm-based autonomous agent systems like digital ants and artificial immune systems can provide a scalable security solution for large network environments. The digital ants approach offers a biologically inspired design where each ant in the virtual colony can detect atoms of evidence that may help identify a possible threat. By assembling the atomic evidences from different ant types the colony may detect the threat. This decentralized approach can require, on average, fewer computational resources than traditional centralized solutions; however there are limits to its scalability. This paper describes how dividing a large infrastructure into smaller managed enclaves allows the digital ant framework to effectively operate in larger environments. Experimental results will show that using smaller enclaves allows for more consistent distribution of agents and results in faster response times.

  11. Characterization of physiological networks in sleep apnea patients using artificial neural networks for Granger causality computation

    NASA Astrophysics Data System (ADS)

    Cárdenas, Jhon; Orjuela-Cañón, Alvaro D.; Cerquera, Alexander; Ravelo, Antonio

    2017-11-01

    Different studies have used Transfer Entropy (TE) and Granger Causality (GC) computation to quantify the interconnection between physiological systems. These methods have disadvantages in parametrization and in the availability of analytic formulas to evaluate the significance of the results. Another inconvenience is related to the assumptions about the distribution of the models generated from the data. In this document, the authors present a way to measure the causality that connects the Central Nervous System (CNS) and the Cardiac System (CS) in people diagnosed with obstructive sleep apnea syndrome (OSA), before and during treatment with continuous positive airway pressure (CPAP). For this purpose, artificial neural networks were used to obtain models for GC computation, based on time series of normalized powers calculated from electrocardiography (EKG) and electroencephalography (EEG) signals recorded in polysomnography (PSG) studies.
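The GC index this record computes with neural-network models can be illustrated with ordinary linear autoregressions: GC(x→y) = ln(var_restricted / var_full), where the restricted model predicts y from its own past and the full model also uses the past of x. The sketch below uses least-squares linear models and simulated series as a stand-in for the paper's neural networks; lag order and data are illustrative assumptions.

```python
import numpy as np

def granger_index(x, y, lag=2):
    """ln(var_restricted / var_full): Granger causality index for x -> y."""
    n = len(y)
    Y = y[lag:]
    # Lagged regressors: columns are y[t-1..t-lag] and x[t-1..t-lag].
    own   = np.column_stack([y[lag - k : n - k] for k in range(1, lag + 1)])
    other = np.column_stack([x[lag - k : n - k] for k in range(1, lag + 1)])
    ones = np.ones((n - lag, 1))

    def resid_var(A):
        beta, *_ = np.linalg.lstsq(A, Y, rcond=None)
        r = Y - A @ beta
        return np.mean(r ** 2)

    restricted = resid_var(np.hstack([ones, own]))          # y's past only
    full       = resid_var(np.hstack([ones, own, other]))   # plus x's past
    return float(np.log(restricted / full))

# Simulated pair where x drives y with one step of delay.
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

gc_xy = granger_index(x, y)   # large: x's past helps predict y
gc_yx = granger_index(y, x)   # near zero: y's past does not help predict x
```

Replacing the two least-squares fits with trained neural networks, as the paper does, changes only how the residual variances are obtained.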

  12. Survey of methods for secure connection to the internet

    NASA Astrophysics Data System (ADS)

    Matsui, Shouichi

    1994-04-01

    This paper describes a study of security methods for protecting internal network computers against outside miscreants and unwelcome visitors, and a control method for when these computers are connected to the Internet. On the present Internet, a method that enciphers all data cannot be used, so it is necessary to utilize PEM (Privacy Enhanced Mail), which is capable of enciphering and converting secret information. To prevent fraudulent access through password eavesdropping, one-time passwords are effective. The most cost-effective method is a firewall system, which lies between the outside and inside networks. By limiting the computers that directly communicate with the Internet, control is centralized and inside-network security is protected. If the security of the firewall system is strictly controlled under a correct configuration, security within the network can be maintained even on open networks such as the Internet.
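One concrete way to realize the one-time-password protection this survey recommends is a Lamport-style hash chain: the server stores only the top of a chain of hashes, and each login reveals the preimage of the stored value, which then becomes the new stored value. The sketch below is an illustrative toy, not the scheme from the paper; class and function names are assumptions.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def make_chain(secret: bytes, n: int):
    """Return [secret, h(secret), h(h(secret)), ...] of length n+1."""
    chain = [secret]
    for _ in range(n):
        chain.append(h(chain[-1]))
    return chain

class Server:
    def __init__(self, top: bytes):
        self.current = top  # h^n(secret); the secret itself is never stored

    def login(self, otp: bytes) -> bool:
        if h(otp) == self.current:
            self.current = otp  # an eavesdropped OTP is now useless
            return True
        return False

chain = make_chain(b"s3cret", 3)   # the client keeps the whole chain
srv = Server(chain[-1])            # server enrolls with h^3(secret)
ok1 = srv.login(chain[2])          # first login succeeds
replay = srv.login(chain[2])       # replaying the same OTP is rejected
ok2 = srv.login(chain[1])          # next value in the chain succeeds
```

An eavesdropper who captures one OTP cannot derive the next, since that would require inverting the hash, which is the property that defeats password sniffing on an open network.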

  13. Acceleration of the matrix multiplication of Radiance three phase daylighting simulations with parallel computing on heterogeneous hardware of personal computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zuo, Wangda; McNeil, Andrew; Wetter, Michael

    2013-05-23

    Building designers are increasingly relying on complex fenestration systems to reduce energy consumed for lighting and HVAC in low energy buildings. Radiance, a lighting simulation program, has been used to conduct daylighting simulations for complex fenestration systems. Depending on the configurations, the simulation can take hours or even days using a personal computer. This paper describes how to accelerate the matrix multiplication portion of a Radiance three-phase daylight simulation by conducting parallel computing on heterogeneous hardware of a personal computer. The algorithm was optimized and the computational part was implemented in parallel using OpenCL. The speed of the new approach was evaluated using various daylighting simulation cases on a multicore central processing unit and a graphics processing unit. Based on the measurements and analysis of the time usage for the Radiance daylighting simulation, further speedups can be achieved by using fast I/O devices and storing the data in a binary format.
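The matrix multiplication at the heart of a three-phase daylight simulation is the chain illuminance = V T D s, with a view matrix V, a fenestration transmission (BSDF) matrix T, a daylight matrix D and a sky vector s. A minimal NumPy sketch (the matrix dimensions are illustrative; the paper parallelizes this same product with OpenCL) also shows why evaluation order matters:

```python
import numpy as np

rng = np.random.default_rng(1)
V = rng.random((100, 145))   # sensor points x Klems patches (view matrix)
T = rng.random((145, 145))   # BSDF transmission matrix
D = rng.random((145, 2305))  # patches x sky divisions (daylight matrix)
s = rng.random(2305)         # sky vector for one timestep

# Right-to-left evaluation keeps every intermediate result a vector
# (three matrix-vector products) instead of forming the large matrix
# product V @ T @ D first -- the same arithmetic, far less work.
illum = V @ (T @ (D @ s))
```

For an annual simulation, s becomes a matrix with one column per timestep, which is when a parallelized dense multiply such as the paper's OpenCL kernel pays off.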

  14. Distributed Optimization of Multi-Agent Systems: Framework, Local Optimizer, and Applications

    NASA Astrophysics Data System (ADS)

    Zu, Yue

    Convex optimization problems can be solved in a centralized or distributed manner. Compared with centralized methods based on a single-agent system, distributed algorithms rely on multi-agent systems with information exchanged among connected neighbors, which leads to a great improvement in system fault tolerance: a task within a multi-agent system can be completed in the presence of partial agent failures. By problem decomposition, a large-scale problem can be divided into a set of small-scale sub-problems that can be solved in sequence or in parallel, so the computational complexity is greatly reduced by a distributed algorithm in a multi-agent system. Moreover, distributed algorithms allow data to be collected and stored in a distributed fashion, which overcomes the drawbacks of using multicast due to bandwidth limitations. Distributed algorithms have been applied to solving a variety of real-world problems. Our research focuses on the framework and local optimizer design in practical engineering applications. In the first project, we propose a multi-sensor and multi-agent scheme for spatial motion estimation of a rigid body; estimation performance is improved in terms of accuracy and convergence speed. In the second, we develop a cyber-physical system and implement distributed computation devices to optimize the in-building evacuation path when a hazard occurs; the proposed Bellman-Ford Dual-Subgradient path planning method relieves congestion in the corridor and exit areas. In the third project, highway traffic flow is managed by adjusting speed limits to minimize fuel consumption and travel time. An optimal control strategy is designed through both centralized and distributed algorithms based on a convex problem formulation. Moreover, a hybrid control scheme is presented for highway network travel time minimization; compared with the uncontrolled case or a conventional highway traffic control strategy, the proposed hybrid strategy greatly reduces total travel time on the test highway network.
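The Bellman-Ford relaxation underlying the evacuation planner named in this record is naturally distributable: a node only needs its neighbors' current distance estimates to update its own. A minimal sketch of the relaxation (the toy building graph and edge costs are illustrative, and the dual-subgradient congestion term is omitted):

```python
def bellman_ford(edges, n, source):
    """Shortest-path distances from `source` over `edges` = [(u, v, weight)]."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0.0
    for _ in range(n - 1):              # n-1 relaxation rounds always suffice
        for u, v, w in edges:
            if dist[u] + w < dist[v]:   # relax: a shorter route via u was found
                dist[v] = dist[u] + w
    return dist

# Toy building graph: node 3 is the exit; weights model corridor traversal cost.
edges = [(0, 1, 2.0), (1, 3, 2.0), (0, 2, 1.0), (2, 3, 4.0), (1, 2, 0.5)]
dist = bellman_ford(edges, 4, source=0)
```

In the distributed setting each node runs only the relaxations for its own incoming edges; the dual-subgradient step would additionally adjust edge weights to penalize congested corridors.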

  15. Federal Research Opportunities: DOE, DOD, and HHS Need Better Guidance for Participant Activities

    DTIC Science & Technology

    2016-01-01

    process controls of advanced power systems, gas sensors and high temperatures, improving extraction of earth elements, quantum computing, biofilms... chronic diseases (e.g., heart, obesity, cancer), environmental health, toxic substances, health statistics, and public health preparedness. Food and... Health Localization of proteins using molecular markers, gene regulatory effects in cancer, medical informatics, and central nervous system

  16. Taper-based system for estimating stem volumes of upland oaks

    Treesearch

    Donald E. Hilt

    1980-01-01

    A taper-based system for estimating stem volumes is developed for Central States upland oaks. Inside-bark diameters up the stem are predicted as a function of dbhib, total height, and powers of relative height. A Fortran IV computer program, OAKVOL, is used to predict cubic and board-foot volumes to any desired merchantable top dib. Volumes of...
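The general shape of such a taper system can be sketched as below: inside-bark diameter is predicted from dbhib and powers of relative height, then volume is obtained by numerically integrating cross-sectional areas to the merchantable top. The functional form and coefficients here are hypothetical illustrations, not OAKVOL's fitted model.

```python
import math

B1, B2 = 1.2, -0.4   # hypothetical taper coefficients, NOT Hilt's values

def dib(dbhib, total_ht, h):
    """Predicted inside-bark diameter at height h (same units as dbhib)."""
    z = h / total_ht                                    # relative height
    return dbhib * (B1 * (1.0 - z) + B2 * (1.0 - z) ** 2)

def stem_volume(dbhib, total_ht, top_dib, step=0.1):
    """Cubic volume to a merchantable top dib via average-end-area slices."""
    vol, h = 0.0, 0.0
    while h + step <= total_ht and dib(dbhib, total_ht, h + step) >= top_dib:
        d1 = dib(dbhib, total_ht, h)
        d2 = dib(dbhib, total_ht, h + step)
        a1 = math.pi * (d1 / 2.0) ** 2
        a2 = math.pi * (d2 / 2.0) ** 2
        vol += 0.5 * (a1 + a2) * step   # Smalian-style average end area
        h += step
    return vol

v = stem_volume(dbhib=1.0, total_ht=60.0, top_dib=0.3)
```

Because the upper integration limit is wherever the taper curve crosses the chosen top dib, the same fitted curve yields volume to any merchantable top, which is the flexibility the abstract describes.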

  17. Automation Is the Answer, but What Is the Question? Progress and Prospects for Central and Eastern European Libraries.

    ERIC Educational Resources Information Center

    Borgman, Christine L.

    1996-01-01

    Reports on a survey of 70 research libraries in Croatia, Czech Republic, Hungary, Poland, Slovakia, and Slovenia. Results show that libraries are rapidly acquiring automated processing systems, CD-ROM databases, and connections to computer networks. Discusses specific data on system implementation and network services by country and by type of…

  18. The Microcomputer in the Administrative Office.

    ERIC Educational Resources Information Center

    Huntington, Fred

    1983-01-01

    Discusses microcomputer uses for administrative computing in education at site level and central office and recommends that administrators start with a word processing program for time management, an electronic spreadsheet for financial accounting, a database management system for inventories, and self-written programs to alleviate paper…

  19. Information Security and the Internet.

    ERIC Educational Resources Information Center

    Doddrell, Gregory R.

    1996-01-01

    As business relies less on "fortress" style central computers and more on distributed systems, the risk of disruption increases because of inadequate physical security, support services, and site monitoring. This article discusses information security and why protection is required on the Internet, presents a best practice firewall, and…

  20. Scheduling quality of precise form sets which consist of tasks of circular type in GRID systems

    NASA Astrophysics Data System (ADS)

    Saak, A. E.; Kureichik, V. V.; Kravchenko, Y. A.

    2018-05-01

    Users’ demand for computing power and the rise of technology favour the arrival of Grid systems. The quality of a Grid system's performance depends on the scheduling of computer and time resources. Grid systems with a centralized scheduling structure and users' tasks are modeled by a resource quadrant and resource rectangles, respectively. A non-Euclidean heuristic measure, which takes into consideration both the area and the form of an occupied resource region, is used to estimate the scheduling quality of heuristic algorithms. The authors use sets induced by the elements of square squaring as an example for studying the adaptability of a level polynomial algorithm with an excess and one with minimal deviation.

  1. Computer simulation of plasma and N-body problems

    NASA Technical Reports Server (NTRS)

    Harries, W. L.; Miller, J. B.

    1975-01-01

    The following FORTRAN language computer codes are presented: (1) efficient two- and three-dimensional central force potential solvers; (2) a three-dimensional simulator of an isolated galaxy which incorporates the potential solver; (3) a two-dimensional particle-in-cell simulator of the Jeans instability in an infinite self-gravitating compressible gas; and (4) a two-dimensional particle-in-cell simulator of a rotating self-gravitating compressible gaseous system of which rectangular coordinate and superior polar coordinate versions were written.

  2. A Web-based home welfare and care services support system using a pen type image sensor.

    PubMed

    Ogawa, Hidekuni; Yonezawa, Yoshiharu; Maki, Hiromichi; Sato, Haruhiko; Hahn, Allen W; Caldwell, W Morton

    2003-01-01

    A long-term care insurance law for elderly persons was put into force two years ago in Japan. Home Helpers, who are employed by hospitals, care companies or the welfare office, provide home welfare and care services for the elderly, such as cooking, bathing, washing, cleaning and shopping. We developed a web-based home welfare and care services support system using wireless Internet mobile phones and Internet client computers, which employs a pen-type image sensor. The pen-type image sensor is used by the elderly as the entry device for their care requests. The client computer sends the requests to the server computer in the Home Helper central office, and the server computer then automatically transfers them to the Home Helper's mobile phone. This newly developed home welfare and care services support system is easily operated by elderly persons and enables Home Helpers to save a significant amount of time and extra travel.

  3. Statistical behavior of ten million experimental detection limits

    NASA Astrophysics Data System (ADS)

    Voigtman, Edward; Abraham, Kevin T.

    2011-02-01

    Using a lab-constructed laser-excited fluorimeter, together with bootstrapping methodology, the authors have generated many millions of experimental linear calibration curves for the detection of rhodamine 6G tetrafluoroborate in ethanol solutions. The detection limits computed from them are in excellent agreement with both previously published theory and with comprehensive Monte Carlo computer simulations. Currie decision levels and Currie detection limits, each in the theoretical, chemical content domain, were found to be simply scaled reciprocals of the non-centrality parameter of the non-central t distribution that characterizes univariate linear calibration curves that have homoscedastic, additive Gaussian white noise. Accurate and precise estimates of the theoretical, content domain Currie detection limit for the experimental system, with 5% (each) probabilities of false positives and false negatives, are presented.
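The content-domain Currie estimates this record discusses can be sketched from a homoscedastic linear calibration: fit the line, take the residual standard deviation, and scale by a one-sided t critical value and the slope. The data below are simulated, not the paper's, and x_D uses the common 2t approximation rather than the exact non-central-t non-centrality scaling the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.repeat(np.linspace(0.0, 10.0, 6), 5)        # 6 standards x 5 replicates
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, x.size)   # true slope 2, noise sd 0.5

slope, intercept = np.polyfit(x, y, 1)             # ordinary least squares
resid = y - (slope * x + intercept)
nu = x.size - 2                                    # degrees of freedom (28)
s = np.sqrt(np.sum(resid ** 2) / nu)               # residual standard deviation

# One-sided t critical value for nu = 28, alpha = 0.05 (from tables; in
# general scipy.stats.t.ppf(0.95, nu) would compute it).
t_crit = 1.701
x_C = t_crit * s / slope          # content-domain Currie decision level
x_D = 2.0 * t_crit * s / slope    # approximate detection limit (alpha = beta = 5%)
```

Dividing the response-domain levels by the slope is what moves them into the theoretical chemical-content domain, mirroring the "scaled reciprocal" relationship the abstract reports.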

  4. Attendance fingerprint identification system using arduino and single board computer

    NASA Astrophysics Data System (ADS)

    Muchtar, M. A.; Seniman; Arisandi, D.; Hasanah, S.

    2018-03-01

    The fingerprint is one of the most distinctive parts of the human body, distinguishing one person from another, and it is easily captured. This uniqueness is exploited by a technology that can automatically identify or recognize a person: the fingerprint sensor. However, an existing fingerprint sensor can only perform fingerprint identification on a single machine. For this reason, we need a method to recognize each user across different fingerprint sensors. The purpose of this research is to build a fingerprint sensor system in which fingerprint data management is centralized, so that identification can be performed at any fingerprint sensor. The results of this research show that by using an Arduino and a Raspberry Pi, data processing can be centralized so that fingerprint identification can be done at each fingerprint sensor, with a 98.5% success rate of centralized server recording.

  5. Enhancements to the Network Repair Level Analysis (NRLA) Model Using Marginal Analysis Techniques and Centralized Intermediate Repair Facility (CIRF) Maintenance Concepts.

    DTIC Science & Technology

    1983-12-01

    while at the same time improving its operational efficiency. Through their integration and use, System Program Managers have a comprehensive analytical... systems. The NRLA program is hosted on the CREATE Operating System and contains approximately 5500 lines of computer code. It consists of a main...associated with C alternative maintenance plans. As the technological complexity of weapons systems has increased, new and innovative logistical support

  6. Data management applications

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Kennedy Space Center's primary institutional computer is a 4-megabyte IBM 4341 with 3.175 billion characters of IBM 3350 disc storage. This system utilizes the Software AG product known as ADABAS, with the on-line user-oriented features of NATURAL and COMPLETE, as a Data Base Management System (DBMS). It is operational under OS/VS1 and is currently supporting batch/on-line applications such as Personnel, Training, Physical Space Management, Procurement, Office Equipment Maintenance, and Equipment Visibility. A third and by far the largest DBMS application is known as the Shuttle Inventory Management System (SIMS), which is operational on a dedicated Honeywell 6660 computer system utilizing Honeywell Integrated Data Storage I (IDSI) as the DBMS. The SIMS application is designed to provide central supply system acquisition, inventory control, receipt, storage, and issue of spares, supplies, and materials.

  7. A Uniform Ontology for Software Interfaces

    NASA Technical Reports Server (NTRS)

    Feyock, Stefan

    2002-01-01

    It is universally the case that computer users who are not also computer specialists prefer to deal with computers in terms of a familiar ontology, namely that of their application domains. For example, the well-known Windows ontology assumes that the user is an office worker, and therefore should be presented with a "desktop environment" featuring entities such as (virtual) file folders, documents, appointment calendars, and the like, rather than a world of machine registers and machine-language instructions, or even the DOS command level. The central theme of this research has been the proposition that the user interacting with a software system should have at his disposal both the ontology underlying the system and a model of the system. This information is necessary for understanding the system in use, as well as for the automatic generation of assistance for the user, both in solving the problem for which the application is designed and in providing guidance on the capabilities and use of the system.

  8. [Hardware for graphics systems].

    PubMed

    Goetz, C

    1991-02-01

    In all personal computer applications, be it for private or professional use, the decision of which "brand" of computer to buy is of central importance. In the USA Apple computers are mainly used in universities, while in Europe computers of the so-called "industry standard" by IBM (or clones thereof) have been increasingly used for many years. Independently of any brand name considerations, the computer components purchased must meet the current (and projected) needs of the user. Graphic capabilities and standards, processor speed, the use of co-processors, as well as input and output devices such as "mouse", printers and scanners are discussed. This overview is meant to serve as a decision aid. Potential users are given a short but detailed summary of current technical features.

  9. Network Information System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    1996-05-01

    The Network Information System (NWIS) was initially implemented in May 1996 as a system in which computing devices could be recorded so that unique names could be generated for each device. Since then the system has grown into an enterprise-wide information system that is integrated with other systems to provide a seamless flow of data through the enterprise. The system tracks data for two main entities: people and computing devices. NWIS performs the following functions for these two entities.
    People:
    - Provides source information to the enterprise person data repository for select contractors and visitors
    - Generates and tracks unique usernames and Unix user IDs for every individual granted cyber access
    - Tracks accounts for centrally managed computing resources, and monitors and controls the reauthorization of the accounts in accordance with the DOE-mandated interval
    Computing devices:
    - Generates unique names for all computing devices registered in the system
    - Tracks the following information for each computing device: manufacturer, make, model, Sandia property number, vendor serial number, operating system and operating system version, owner, device location, amount of memory, amount of disk space, and level of support provided for the machine
    - Tracks the hardware address for network cards
    - Tracks the IP address registered to computing devices, along with the canonical and alias names for each address
    - Updates the Dynamic Domain Name Service (DDNS) for canonical and alias names
    - Creates the configuration files for DHCP to control the DHCP ranges and allow access only to properly registered computers
    - Tracks and monitors classified security plans for stand-alone computers
    - Tracks the configuration requirements used to set up the machine
    - Tracks the roles people have on machines (system administrator, administrative access, user, etc.)
    - Allows system administrators to track changes made on the machine (both hardware and software)
    - Generates an adjustment history of changes on selected fields
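    Two of the NWIS functions described above, unique device-name generation and DHCP configuration creation, can be sketched in a few lines. This is a hypothetical illustration: the name format, registry schema, and config layout are assumptions, not NWIS internals.

```python
import itertools

_counter = itertools.count(1)
registry = {}

def register(mac, owner):
    """Record a device and return its generated unique name."""
    name = f"dev{next(_counter):04d}"
    registry[name] = {"mac": mac, "owner": owner}
    return name

def dhcp_config():
    """Emit one ISC-dhcpd host stanza per registered device, so only
    properly registered machines receive leases."""
    stanzas = []
    for name, rec in sorted(registry.items()):
        stanzas.append(f"host {name} {{\n  hardware ethernet {rec['mac']};\n}}")
    return "\n".join(stanzas)

n1 = register("00:11:22:33:44:55", "jdoe")
n2 = register("66:77:88:99:aa:bb", "asmith")
print(n1, n2)        # dev0001 dev0002
print(dhcp_config())
```

The key design point mirrored here is that name generation and DHCP access control draw from the same central registry, so a device unknown to the registry simply never appears in the generated configuration.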

  10. Quantum error correction in crossbar architectures

    NASA Astrophysics Data System (ADS)

    Helsen, Jonas; Steudtner, Mark; Veldhorst, Menno; Wehner, Stephanie

    2018-07-01

    A central challenge for the scaling of quantum computing systems is the need to control all qubits in the system without a large overhead. A solution for this problem in classical computing comes in the form of so-called crossbar architectures. Recently we made a proposal for a large-scale quantum processor (Li et al arXiv:1711.03807 (2017)) to be implemented in silicon quantum dots. This system features a crossbar control architecture which limits parallel single-qubit control, but allows the scheme to overcome control scaling issues that form a major hurdle to large-scale quantum computing systems. In this work, we develop a language that makes it possible to easily map quantum circuits to crossbar systems, taking into account their architecture and control limitations. Using this language we show how to map well known quantum error correction codes such as the planar surface and color codes in this limited control setting with only a small overhead in time. We analyze the logical error behavior of this surface code mapping for estimated experimental parameters of the crossbar system and conclude that logical error suppression to a level useful for real quantum computation is feasible.

  11. 31 CFR 285.7 - Salary offset.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Secretary, has waived certain requirements of the Computer Matching and Privacy Protection Act of 1988, 5 U... process known as centralized salary offset computer matching, identify Federal employees who owe delinquent nontax debt to the United States. Centralized salary offset computer matching is the computerized...

  12. Art for the Brain's Sake.

    ERIC Educational Resources Information Center

    Sylwester, Robert

    1998-01-01

    From fine-tuning muscular systems to integrating emotion and logic, the arts have important biological value. Motion and emotion are central to the arts and life itself. It is counterproductive to promote high performance standards while displacing skill development with computer technologies and reducing arts programs that move students from…

  13. Centralized Fabric Management Using Puppet, Git, and GLPI

    NASA Astrophysics Data System (ADS)

    Smith, Jason A.; De Stefano, John S., Jr.; Fetzko, John; Hollowell, Christopher; Ito, Hironori; Karasawa, Mizuki; Pryor, James; Rao, Tejas; Strecker-Kellogg, William

    2012-12-01

    Managing the infrastructure of a large and complex data center can be extremely difficult without taking advantage of recent technological advances in administrative automation. Puppet is a seasoned open-source tool that is designed for enterprise class centralized configuration management. At the RHIC and ATLAS Computing Facility (RACF) at Brookhaven National Laboratory, we use Puppet along with Git, GLPI, and some custom scripts as part of our centralized configuration management system. In this paper, we discuss how we use these tools for centralized configuration management of our servers and services, change management requiring authorized approval of production changes, a complete version controlled history of all changes made, separation of production, testing and development systems using puppet environments, semi-automated server inventory using GLPI, and configuration change monitoring and reporting using the Puppet dashboard. We will also discuss scalability and performance results from using these tools on a 2,000+ node cluster and 400+ infrastructure servers with an administrative staff of approximately 25 full-time employees (FTEs).

  14. Comparison of automated satellite systems with conventional systems for hydrologic data collection in west-central Florida

    USGS Publications Warehouse

    Woodham, W.M.

    1982-01-01

    This report provides results of reliability and cost-effectiveness studies of the GOES satellite data-collection system used to operate a small hydrologic data network in west-central Florida. The GOES system, in its present state of development, was found to be about as reliable as conventional methods of data collection. Benefits of using the GOES system include some cost and manpower reduction, improved data accuracy, near real-time data availability, and direct computer storage and analysis of data. The GOES system could allow annual manpower reductions of 19 to 23 percent, with reduction in cost for some and increase in cost for other single-parameter sites, such as streamflow, rainfall, and ground-water monitoring stations. Manpower reductions of 46 percent or more appear possible for multiple-parameter sites. Implementation of expected improvements in instrumentation and data handling procedures should further reduce costs. (USGS)

  15. Improving Situational Awareness for First Responders via Mobile Computing

    NASA Technical Reports Server (NTRS)

    Betts, Bradley J.; Mah, Robert W.; Papasin, Richard; Del Mundo, Rommel; McIntosh, Dawn M.; Jorgensen, Charles

    2005-01-01

    This project looks to improve first responder situational awareness using tools and techniques of mobile computing. The prototype system combines wireless communication, real-time location determination, digital imaging, and three-dimensional graphics. Responder locations are tracked in an outdoor environment via GPS and uploaded to a central server via GPRS or an 802.11 network. Responders can also wirelessly share digital images and text reports, both with other responders and with the incident commander. A pre-built three-dimensional graphics model of a particular emergency scene is used to visualize responder and report locations. Responders have a choice of information end points, ranging from programmable cellular phones to tablet computers. The system also employs location-aware computing to make responders aware of particular hazards as they approach them. The prototype was developed in conjunction with the NASA Ames Disaster Assistance and Rescue Team and has undergone field testing during responder exercises at NASA Ames.

  16. Improving Situational Awareness for First Responders via Mobile Computing

    NASA Technical Reports Server (NTRS)

    Betts, Bradley J.; Mah, Robert W.; Papasin, Richard; Del Mundo, Rommel; McIntosh, Dawn M.; Jorgensen, Charles

    2006-01-01

    This project looks to improve first responder situational awareness and incident command using mobile computing techniques. The prototype system combines wireless communication, real-time location determination, digital imaging, and three-dimensional graphics. Responder locations are tracked in an outdoor environment via GPS and uploaded to a central server via GPRS or an 802.11 network. Responders can also wirelessly share digital images and text reports, both with other responders and with the incident commander. A pre-built three-dimensional graphics model of the emergency scene is used to visualize responder and report locations. Responders have a choice of information end points, ranging from programmable cellular phones to tablet computers. The system also employs location-aware computing to make responders aware of particular hazards as they approach them. The prototype was developed in conjunction with the NASA Ames Disaster Assistance and Rescue Team and has undergone field testing during responder exercises at NASA Ames.

  17. Using a Cray Y-MP as an array processor for a RISC Workstation

    NASA Technical Reports Server (NTRS)

    Lamaster, Hugh; Rogallo, Sarah J.

    1992-01-01

    As microprocessors increase in power, the economics of centralized computing have changed dramatically. At the beginning of the 1980s, mainframes and supercomputers were often considered to be cost-effective machines for scalar computing. Today, microprocessor-based RISC (reduced-instruction-set computer) systems have displaced many uses of mainframes and supercomputers. Supercomputers are still cost-competitive when processing jobs that require both large memory size and high memory bandwidth. One such application is array processing. Certain numerical operations are appropriate for a Remote Procedure Call (RPC)-based environment. Matrix multiplication is an example of an operation that can have a sufficient number of arithmetic operations to amortize the cost of an RPC call. An experiment is described which demonstrates that matrix multiplication can be executed remotely on a large system to speed its execution over that experienced on a workstation.
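    The RPC offload pattern the abstract describes can be sketched with Python's standard xmlrpc modules (a hedged stand-in: the host, port, and naive multiply below are illustrative substitutes for the Cray-side routine, not the paper's implementation):

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def matmul(a, b):
    """Naive multiply, standing in for the remote large-system routine."""
    n, k, m = len(a), len(b), len(b[0])
    return [[sum(a[i][p] * b[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]

# "Remote" server in a background thread; the address is an arbitrary choice.
server = SimpleXMLRPCServer(("127.0.0.1", 8917), logRequests=False)
server.register_function(matmul)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Workstation side: one RPC call carries the whole O(n^3) computation,
# which is what amortizes the fixed per-call overhead.
proxy = ServerProxy("http://127.0.0.1:8917")
C = proxy.matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]])
print(C)   # [[19, 22], [43, 50]]
```

The design point matches the abstract's argument: the call's fixed cost (marshalling, network round trip) is paid once, while the arithmetic work shipped per call grows cubically with matrix size.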

  18. Job-mix modeling and system analysis of an aerospace multiprocessor.

    NASA Technical Reports Server (NTRS)

    Mallach, E. G.

    1972-01-01

    An aerospace guidance computer organization, consisting of multiple processors and memory units attached to a central time-multiplexed data bus, is described. A job mix for this type of computer is obtained by analysis of Apollo mission programs. Multiprocessor performance is then analyzed using: 1) queuing theory, under certain 'limiting case' assumptions; 2) Markov process methods; and 3) system simulation. Results of the analyses indicate: 1) Markov process analysis is a useful and efficient predictor of simulation results; 2) efficient job execution is not seriously impaired even when the system is so overloaded that new jobs are inordinately delayed in starting; 3) job scheduling is significant in determining system performance; and 4) a system having many slow processors may or may not perform better than a system of equal power having few fast processors, but will not perform significantly worse.
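    Finding 4 above, that many slow processors of equal total power may or may not beat few fast ones, can be reproduced with the standard Erlang-C formula for M/M/c queues (a hedged sketch: the arrival and service rates are arbitrary illustrations, not the paper's Apollo job mix):

```python
import math

def mmc_wait(lam, mu, c):
    """Mean queueing delay W_q for an M/M/c system (Erlang-C formula).
    lam: arrival rate, mu: per-server service rate, c: number of servers."""
    rho = lam / (c * mu)
    assert rho < 1, "unstable system"
    a = lam / mu
    p0 = 1.0 / (sum(a**k / math.factorial(k) for k in range(c))
                + a**c / (math.factorial(c) * (1 - rho)))
    erlang_c = a**c / (math.factorial(c) * (1 - rho)) * p0
    return erlang_c / (c * mu - lam)

# Equal total power: four slow processors (mu = 1) vs. one fast (mu = 4).
slow_q = mmc_wait(lam=2.0, mu=1.0, c=4)
fast_q = mmc_wait(lam=2.0, mu=4.0, c=1)
# At this load the slow quad queues less, but its long service time
# (1.0 vs 0.25) dominates total response time W_q + 1/mu.
print(slow_q + 1.0, fast_q + 0.25)
```

At other loads the comparison can flip in the queueing term's favor, which is exactly the "may or may not perform better" conclusion of the record.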

  19. 31 CFR 285.7 - Salary offset.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... requirements of the Computer Matching and Privacy Protection Act of 1988, 5 U.S.C. 552a, as amended, for... known as centralized salary offset computer matching, identify Federal employees who owe delinquent nontax debt to the United States. Centralized salary offset computer matching is the computerized...

  20. 31 CFR 285.7 - Salary offset.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... requirements of the Computer Matching and Privacy Protection Act of 1988, 5 U.S.C. 552a, as amended, for... known as centralized salary offset computer matching, identify Federal employees who owe delinquent nontax debt to the United States. Centralized salary offset computer matching is the computerized...

  1. 31 CFR 285.7 - Salary offset.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... requirements of the Computer Matching and Privacy Protection Act of 1988, 5 U.S.C. 552a, as amended, for... known as centralized salary offset computer matching, identify Federal employees who owe delinquent nontax debt to the United States. Centralized salary offset computer matching is the computerized...

  2. 31 CFR 285.7 - Salary offset.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... requirements of the Computer Matching and Privacy Protection Act of 1988, 5 U.S.C. 552a, as amended, for... known as centralized salary offset computer matching, identify Federal employees who owe delinquent nontax debt to the United States. Centralized salary offset computer matching is the computerized...

  3. Congressional Report on Defense Business Operations

    DTIC Science & Technology

    2010-03-15

    by more than 1,700 users and used to store approximately 250 submissions a month. Each month, more than 2,000 documents are accessed and downloaded. ...that is stored, managed and maintained centrally. Data includes Geographic Information Systems (GIS) and Computer Aided Design and Drafting (CADD...Office FTP File Transfer Protocol FY Fiscal Year GAO Government Accountability Office GFEBS General Fund Enterprise Business System GIS Geographic

  4. Radar Detection Models in Computer Supported Naval War Games

    DTIC Science & Technology

    1979-06-08

    revealed a requirement for the effective centralized management of computer supported war game development and employment in the U.S. Navy. A...considerations and supports the requirement for centralized management of computerized war game development. Therefore it is recommended that a central...managerial and fiscal authority be established for computerized tactical war game development. This central authority should ensure that new games

  5. En Garde: Fencing at Kansas City's Central Computers Unlimited/Classical Greek Magnet High School, 1991-1995

    ERIC Educational Resources Information Center

    Poos, Bradley W.

    2015-01-01

    Central High School in Kansas City, Missouri is one of the oldest schools west of the Mississippi and the first public high school built in Kansas City. Kansas City's magnet plan resulted in Central High School being rebuilt as the Central Computers Unlimited/Classical Greek Magnet High School, a school that was designed to offer students an…

  6. Examining the architecture of cellular computing through a comparative study with a computer

    PubMed Central

    Wang, Degeng; Gribskov, Michael

    2005-01-01

    The computer and the cell both use information embedded in simple coding, the binary software code and the quadruple genomic code, respectively, to support system operations. A comparative examination of their system architecture as well as their information storage and utilization schemes is performed. On top of the code, both systems display a modular, multi-layered architecture, which, in the case of a computer, arises from human engineering efforts through a combination of hardware implementation and software abstraction. Using the computer as a reference system, a simplistic mapping of the architectural components between the two is easily detected. This comparison also reveals that a cell abolishes the software–hardware barrier through genomic encoding for the constituents of the biochemical network, a cell's ‘hardware’ equivalent to the computer central processing unit (CPU). The information loading (gene expression) process acts as a major determinant of the encoded constituent's abundance, which, in turn, often determines the ‘bandwidth’ of a biochemical pathway. Cellular processes are implemented in biochemical pathways in parallel manners. In a computer, on the other hand, the software provides only instructions and data for the CPU. A process represents just sequentially ordered actions by the CPU and only virtual parallelism can be implemented through CPU time-sharing. Whereas process management in a computer may simply mean job scheduling, coordinating pathway bandwidth through the gene expression machinery represents a major process management scheme in a cell. In summary, a cell can be viewed as a super-parallel computer, which computes through controlled hardware composition. While we have, at best, a very fragmented understanding of cellular operation, we have a thorough understanding of the computer throughout the engineering process. The potential utilization of this knowledge to the benefit of systems biology is discussed. PMID:16849179

  7. Examining the architecture of cellular computing through a comparative study with a computer.

    PubMed

    Wang, Degeng; Gribskov, Michael

    2005-06-22

    The computer and the cell both use information embedded in simple coding, the binary software code and the quadruple genomic code, respectively, to support system operations. A comparative examination of their system architecture as well as their information storage and utilization schemes is performed. On top of the code, both systems display a modular, multi-layered architecture, which, in the case of a computer, arises from human engineering efforts through a combination of hardware implementation and software abstraction. Using the computer as a reference system, a simplistic mapping of the architectural components between the two is easily detected. This comparison also reveals that a cell abolishes the software-hardware barrier through genomic encoding for the constituents of the biochemical network, a cell's "hardware" equivalent to the computer central processing unit (CPU). The information loading (gene expression) process acts as a major determinant of the encoded constituent's abundance, which, in turn, often determines the "bandwidth" of a biochemical pathway. Cellular processes are implemented in biochemical pathways in parallel manners. In a computer, on the other hand, the software provides only instructions and data for the CPU. A process represents just sequentially ordered actions by the CPU and only virtual parallelism can be implemented through CPU time-sharing. Whereas process management in a computer may simply mean job scheduling, coordinating pathway bandwidth through the gene expression machinery represents a major process management scheme in a cell. In summary, a cell can be viewed as a super-parallel computer, which computes through controlled hardware composition. While we have, at best, a very fragmented understanding of cellular operation, we have a thorough understanding of the computer throughout the engineering process. The potential utilization of this knowledge to the benefit of systems biology is discussed.

  8. High Available COTS Based Computer for Space

    NASA Astrophysics Data System (ADS)

    Hartmann, J.; Magistrati, Giorgio

    2015-09-01

    The availability and reliability factors of a system are central requirements of the target application. From a simple fuel-injection system used in cars up to the flight control system of an autonomously navigating spacecraft, each application defines its specific availability factor under the target application's boundary conditions. Increasing quality requirements on data processing systems used in space flight applications call for new architectures that fulfill the availability and reliability demands as well as the increase in required data processing power. Alongside these increased quality requirements, simplification and the use of COTS components to decrease costs, while keeping interface compatibility with currently used system standards, are clear customer needs. Data processing system design is mostly dominated by strict fulfillment of customer requirements, and reuse of available computer systems has not always been possible because of the obsolescence of EEE parts, insufficient I/O capabilities, or the fact that available data processing systems did not provide the required scalability and performance.

  9. Computer-based System for the Virtual-Endoscopic Guidance of Bronchoscopy.

    PubMed

    Helferty, J P; Sherbondy, A J; Kiraly, A P; Higgins, W E

    2007-11-01

    The standard procedure for diagnosing lung cancer involves two stages: three-dimensional (3D) computed-tomography (CT) image assessment, followed by interventional bronchoscopy. In general, the physician has no link between the 3D CT image assessment results and the follow-on bronchoscopy. Thus, the physician essentially performs bronchoscopic biopsy of suspect cancer sites blindly. We have devised a computer-based system that greatly augments the physician's vision during bronchoscopy. The system uses techniques from computer graphics and computer vision to enable detailed 3D CT procedure planning and follow-on image-guided bronchoscopy. The procedure plan is directly linked to the bronchoscope procedure, through a live registration and fusion of the 3D CT data and bronchoscopic video. During a procedure, the system provides many visual tools, fused CT-video data, and quantitative distance measures; this gives the physician considerable visual feedback on how to maneuver the bronchoscope and where to insert the biopsy needle. Central to the system is a CT-video registration technique, based on normalized mutual information. Several sets of results verify the efficacy of the registration technique. In addition, we present a series of test results for the complete system for phantoms, animals, and human lung-cancer patients. The results indicate that not only is the variation in skill level between different physicians greatly reduced by the system over the standard procedure, but that biopsy effectiveness increases.
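    The normalized-mutual-information criterion that drives the CT-video registration can be sketched directly from its definition (a hedged toy example; the real system evaluates this over rendered endoluminal views and video frames, not four-element intensity lists):

```python
import math
from collections import Counter

def entropy(counts, total):
    """Shannon entropy (bits) from a histogram of occurrence counts."""
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def nmi(img_a, img_b):
    """Normalized mutual information, NMI = (H(A) + H(B)) / H(A, B),
    of two equal-length intensity sequences."""
    assert len(img_a) == len(img_b)
    n = len(img_a)
    h_a = entropy(Counter(img_a), n)
    h_b = entropy(Counter(img_b), n)
    h_ab = entropy(Counter(zip(img_a, img_b)), n)   # joint histogram
    return (h_a + h_b) / h_ab

aligned = nmi([0, 0, 1, 1], [0, 0, 1, 1])    # identical "images": NMI = 2.0
unrelated = nmi([0, 0, 1, 1], [0, 1, 0, 1])  # independent: NMI = 1.0
print(aligned, unrelated)
```

NMI is maximized when the two images are deterministically related and falls to 1 when they are independent, which is why a registration search can maximize it to align the virtual CT view with the bronchoscopic video.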

  10. The utilization of parallel processing in solving the inviscid form of the average-passage equation system for multistage turbomachinery

    NASA Technical Reports Server (NTRS)

    Mulac, Richard A.; Celestina, Mark L.; Adamczyk, John J.; Misegades, Kent P.; Dawson, Jef M.

    1987-01-01

    A procedure is outlined which utilizes parallel processing to solve the inviscid form of the average-passage equation system for multistage turbomachinery along with a description of its implementation in a FORTRAN computer code, MSTAGE. A scheme to reduce the central memory requirements of the program is also detailed. Both the multitasking and I/O routines referred to in this paper are specific to the Cray X-MP line of computers and its associated SSD (Solid-state Storage Device). Results are presented for a simulation of a two-stage rocket engine fuel pump turbine.

  11. Hyper-Spectral Synthesis of Active OB Stars Using GLaDoS

    NASA Astrophysics Data System (ADS)

    Hill, N. R.; Townsend, R. H. D.

    2016-11-01

    In recent years there has been considerable interest in using graphics processing units (GPUs) to perform scientific computations that have traditionally been handled by central processing units (CPUs). However, there is one area where the scientific potential of GPUs has been overlooked: computer graphics, the task they were originally designed for. Here we introduce GLaDoS, a hyper-spectral code which leverages the graphics capabilities of GPUs to synthesize spatially and spectrally resolved images of complex stellar systems. We demonstrate how GLaDoS can be applied to calculate observables for various classes of stars, including systems with inhomogeneous surface temperatures and contact binaries.

  12. Equipment for linking the AutoAnalyzer on-line to a computer

    PubMed Central

    Simpson, D.; Sims, G. E.; Harrison, M. I.; Whitby, L. G.

    1971-01-01

    An Elliott 903 computer with 8K central core store and magnetic tape backing store has been operated for approximately 20 months in a clinical chemistry laboratory. Details of the equipment designed for linking AutoAnalyzers on-line to the computer are described, and data presented concerning the time required by the computer for different processes. The reliability of the various components in daily operation is discussed. Limitations in the system's capabilities have been defined, and ways of overcoming these are delineated. At present, routine operations include the preparation of worksheets for a limited range of tests (five channels), monitoring of up to 11 AutoAnalyzer channels at a time on a seven-day week basis (with process control and automatic calculation of results), and the provision of quality control data. Cumulative reports can be printed out on those analyses for which computer-prepared worksheets are provided but the system will require extension before these can be issued sufficiently rapidly for routine use. PMID:5551384

  13. Learning to Share

    ERIC Educational Resources Information Center

    Raths, David

    2010-01-01

    In the tug-of-war between researchers and IT for supercomputing resources, a centralized approach can help both sides get more bang for their buck. As 2010 began, the University of Washington was preparing to launch its first shared high-performance computing cluster, a 1,500-node system called Hyak, dedicated to research activities. Like other…

  14. Help at Hand

    ERIC Educational Resources Information Center

    Demski, Jennifer

    2009-01-01

    This article describes how centralized presentation control systems enable IT support staff to monitor equipment and assist end users more efficiently. At Temple University, 70 percent of the classrooms are equipped with an AMX touch panel, linked via a Netlink controller to an in-classroom computer, projector, DVD/VCR player, and speakers. The…

  15. 12 CFR 1402.22 - Fees to be charged.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Banks and Banking FARM CREDIT SYSTEM INSURANCE CORPORATION RELEASING INFORMATION Fees for Provision of...) (i.e., basic pay plus 16 percent of that rate) of the employee(s) making the search. (c) Computer... the cost of operating the central processing unit for that portion of operating time that is directly...

  16. 12 CFR 1402.22 - Fees to be charged.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Banks and Banking FARM CREDIT SYSTEM INSURANCE CORPORATION RELEASING INFORMATION Fees for Provision of...) (i.e., basic pay plus 16 percent of that rate) of the employee(s) making the search. (c) Computer... the cost of operating the central processing unit for that portion of operating time that is directly...

  17. 12 CFR 1402.22 - Fees to be charged.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Banks and Banking FARM CREDIT SYSTEM INSURANCE CORPORATION RELEASING INFORMATION Fees for Provision of...) (i.e., basic pay plus 16 percent of that rate) of the employee(s) making the search. (c) Computer... the cost of operating the central processing unit for that portion of operating time that is directly...

  18. 12 CFR 1402.22 - Fees to be charged.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Banks and Banking FARM CREDIT SYSTEM INSURANCE CORPORATION RELEASING INFORMATION Fees for Provision of...) (i.e., basic pay plus 16 percent of that rate) of the employee(s) making the search. (c) Computer... the cost of operating the central processing unit for that portion of operating time that is directly...

  19. 12 CFR 1402.22 - Fees to be charged.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Banks and Banking FARM CREDIT SYSTEM INSURANCE CORPORATION RELEASING INFORMATION Fees for Provision of...) (i.e., basic pay plus 16 percent of that rate) of the employee(s) making the search. (c) Computer... the cost of operating the central processing unit for that portion of operating time that is directly...

  20. Women@Work: Listening to Gendered Relations of Power in Teachers' Talk about New Technologies.

    ERIC Educational Resources Information Center

    Jenson, Jennifer; Rose, Chloe Brushwood

    2003-01-01

    Examines teachers' working identities, highlighting gender inequities among teachers, within school systems, and in society, especially in relation to computers. Highlights tensions central to teaching in relation to new technologies, emphasizing gender inequities that structure understandings of teaching. Documents how, for the teachers studied,…

  1. Preliminary design of a solar central receiver for a site-specific repowering application (Saguaro Power Plant). Volume IV. Appendixes. Final report, October 1982-September 1983

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weber, E.R.

    1983-09-01

The appendixes for the Saguaro Power Plant include the following: receiver configuration selection report; operating modes and transitions; failure modes analysis; control system analysis; computer codes and simulation models; procurement package scope descriptions; responsibility matrix; solar system flow diagram component purpose list; thermal storage component and system test plans; solar steam generator tube-to-tubesheet weld analysis; pipeline listing; management control schedule; and system list and definitions.

  2. Earth Observatory Satellite system definition study. Report 5: System design and specifications. Volume 6: Specification for EOS Central Data Processing Facility (CDPF)

    NASA Technical Reports Server (NTRS)

    1974-01-01

The specifications and functions of the Central Data Processing Facility (CDPF) which supports the Earth Observatory Satellite (EOS) are discussed. The CDPF will receive the EOS sensor data and spacecraft data through the Spaceflight Tracking and Data Network (STDN) and the Operations Control Center (OCC). The CDPF will process the data and produce high density digital tapes, computer compatible tapes, film and paper print images, and other data products. The specific aspects of data inputs and data processing are identified. A block diagram of the CDPF to show the data flow and interfaces of the subsystems is provided.

  3. Sensor Control And Film Annotation For Long Range, Standoff Reconnaissance

    NASA Astrophysics Data System (ADS)

    Schmidt, Thomas G.; Peters, Owen L.; Post, Lawrence H.

    1984-12-01

This paper describes a Reconnaissance Data Annotation System that incorporates off-the-shelf technology and system designs providing a high degree of adaptability and interoperability to satisfy future reconnaissance data requirements. The history of data annotation for reconnaissance is reviewed in order to provide the base from which future developments can be assessed and technical risks minimized. The system described will accommodate new developments in recording head assemblies and the incorporation of advanced cameras of both the film and electro-optical type. Use of microprocessor control and a digital bus interface forms the central design philosophy. For long range, high altitude, standoff missions, the Data Annotation System computes the projected latitude and longitude of the central target position from aircraft position and attitude. This complements the use of longer ranges and high altitudes for reconnaissance missions.
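The projection of a central target position from aircraft state can be sketched under a flat-earth approximation (all parameter names and values here are illustrative, not from the system described):

```python
import math

def project_target(lat_deg, lon_deg, alt_m, heading_deg, depression_deg):
    """Estimate the ground coordinates imaged by a standoff sensor.

    Flat-earth approximation: ground range is altitude over the tangent of
    the sensor depression angle; the offset is laid out along the heading.
    """
    ground_range = alt_m / math.tan(math.radians(depression_deg))
    north = ground_range * math.cos(math.radians(heading_deg))
    east = ground_range * math.sin(math.radians(heading_deg))
    # Convert metre offsets to degrees (small-angle step on a spherical earth).
    earth_r = 6371000.0
    dlat = math.degrees(north / earth_r)
    dlon = math.degrees(east / (earth_r * math.cos(math.radians(lat_deg))))
    return lat_deg + dlat, lon_deg + dlon

# Aircraft at 10 km altitude, sensor depressed 30 degrees, looking due north.
print(project_target(35.0, -110.0, 10000.0, 0.0, 30.0))
```

A real annotation system would of course use a geodetic earth model and full attitude (roll, pitch, yaw); this only shows the geometry of the computation.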

  4. Cerebral pyogranuloma associated with systemic coronavirus infection in a ferret.

    PubMed

    Gnirs, K; Quinton, J F; Dally, C; Nicolier, A; Ruel, Y

    2016-01-01

    A 2-year-old male ferret was presented with central nervous system signs. Computed tomography (CT) of the brain revealed a well-defined contrast-enhancing lesion on the rostral forebrain that appeared extraparenchymal. Surgical excision of the mass was performed and the ferret was euthanised during the procedure. Histopathology of the excised mass showed multiple meningeal nodular lesions with infiltrates of epithelioid macrophages, occasionally centred on degenerated neutrophils and surrounded by a broad rim of plasma cells, features consistent with pyogranulomatous meningitis. The histopathological features in this ferret were similar to those in cats with feline infectious peritonitis. Definitive diagnosis was assessed by immunohistochemistry, confirming a ferret systemic coronavirus (FSCV) associated disease. This is the first case of coronavirus granuloma described on CT-scan in the central nervous system of a ferret. © 2015 British Small Animal Veterinary Association.

  5. An Event-Based Approach to Distributed Diagnosis of Continuous Systems

    NASA Technical Reports Server (NTRS)

Daigle, Matthew; Roychoudhury, Indranil; Biswas, Gautam; Koutsoukos, Xenofon

    2010-01-01

    Distributed fault diagnosis solutions are becoming necessary due to the complexity of modern engineering systems, and the advent of smart sensors and computing elements. This paper presents a novel event-based approach for distributed diagnosis of abrupt parametric faults in continuous systems, based on a qualitative abstraction of measurement deviations from the nominal behavior. We systematically derive dynamic fault signatures expressed as event-based fault models. We develop a distributed diagnoser design algorithm that uses these models for designing local event-based diagnosers based on global diagnosability analysis. The local diagnosers each generate globally correct diagnosis results locally, without a centralized coordinator, and by communicating a minimal number of measurements between themselves. The proposed approach is applied to a multi-tank system, and results demonstrate a marked improvement in scalability compared to a centralized approach.
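The signature-matching idea can be illustrated with a toy diagnoser; the residual names and fault signatures below are invented for illustration, not the paper's multi-tank model:

```python
# Toy event-based diagnoser: each fault has a qualitative signature giving
# the expected deviation direction ('+', '-', or '0') of each residual.
# Observed (residual, direction) events prune the candidate fault set.
SIGNATURES = {
    "tank1_leak":   {"r1": "-", "r2": "0"},
    "pipe12_block": {"r1": "+", "r2": "-"},
    "tank2_leak":   {"r1": "0", "r2": "-"},
}

def diagnose(events):
    """Return the faults consistent with all observed deviation events."""
    candidates = set(SIGNATURES)
    for residual, direction in events:
        candidates = {f for f in candidates
                      if SIGNATURES[f].get(residual, "0") == direction}
    return candidates

print(diagnose([("r1", "+")]))                 # {'pipe12_block'}
print(diagnose([("r2", "-"), ("r1", "0")]))    # {'tank2_leak'}
```

In the distributed setting of the paper, each local diagnoser holds only the signatures and measurements it needs to stay globally correct; the pruning logic itself is this simple.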

  6. Improving Distributed Diagnosis Through Structural Model Decomposition

    NASA Technical Reports Server (NTRS)

    Bregon, Anibal; Daigle, Matthew John; Roychoudhury, Indranil; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino

    2011-01-01

    Complex engineering systems require efficient fault diagnosis methodologies, but centralized approaches do not scale well, and this motivates the development of distributed solutions. This work presents an event-based approach for distributed diagnosis of abrupt parametric faults in continuous systems, by using the structural model decomposition capabilities provided by Possible Conflicts. We develop a distributed diagnosis algorithm that uses residuals computed by extending Possible Conflicts to build local event-based diagnosers based on global diagnosability analysis. The proposed approach is applied to a multitank system, and results demonstrate an improvement in the design of local diagnosers. Since local diagnosers use only a subset of the residuals, and use subsystem models to compute residuals (instead of the global system model), the local diagnosers are more efficient than previously developed distributed approaches.

  7. Annual ADP planning document

    NASA Technical Reports Server (NTRS)

    Mogilevsky, M.

    1973-01-01

The Category A computer systems at KSC (A1 and A2) which perform scientific and business/administrative operations are described. This data division is responsible for scientific requirements supporting Saturn, Atlas/Centaur, Titan/Centaur, Titan III, and Delta vehicles, and includes real-time functions, the Apollo-Soyuz Test Project (ASTP), and the Space Shuttle. The work is performed chiefly on the GE-635 (A1) system located in the Central Instrumentation Facility (CIF). The A1 system can perform computations and process data in three modes: (1) real-time critical mode; (2) real-time batch mode; and (3) batch mode. The Division's IBM-360/50 (A2) system, also at the CIF, performs business/administrative data processing such as personnel, procurement, reliability, financial management and payroll, real-time inventory management, GSE accounting, preventive maintenance, and integrated launch vehicle modification status.

  8. The Fermilab Accelerator control system

    NASA Astrophysics Data System (ADS)

    Bogert, Dixon

    1986-06-01

    With the advent of the Tevatron, considerable upgrades have been made to the controls of all the Fermilab Accelerators. The current system is based on making as large an amount of data as possible available to many operators or end-users. Specifically there are about 100 000 separate readings, settings, and status and control registers in the various machines, all of which can be accessed by seventeen consoles, some in the Main Control Room and others distributed throughout the complex. A "Host" computer network of approximately eighteen PDP-11/34's, seven PDP-11/44's, and three VAX-11/785's supports a distributed data acquisition system including Lockheed MAC-16's left from the original Main Ring and Booster instrumentation and upwards of 1000 Z80, Z8002, and M68000 microprocessors in dozens of configurations. Interaction of the various parts of the system is via a central data base stored on the disk of one of the VAXes. The primary computer-hardware communication is via CAMAC for the new Tevatron and Antiproton Source; certain subsystems, among them vacuum, refrigeration, and quench protection, reside in the distributed microprocessors and communicate via GAS, an in-house protocol. An important hardware feature is an accurate clock system making a large number of encoded "events" in the accelerator supercycle available for both hardware modules and computers. System software features include the ability to save the current state of the machine or any subsystem and later restore it or compare it with the state at another time, a general logging facility to keep track of specific variables over long periods of time, detection of "exception conditions" and the posting of alarms, and a central filesharing capability in which files on VAX disks are available for access by any of the "Host" processors.

  9. Bank Terminals

    NASA Technical Reports Server (NTRS)

    1978-01-01

In the photo, employees of the UAB Bank, Knoxville, Tennessee, are using Teller Transaction Terminals manufactured by SCI Systems, Inc., Huntsville, Alabama, an electronics firm which has worked on a number of space projects under contract with NASA. The terminals are part of an advanced, computerized financial transaction system that offers high efficiency in bank operations. The key to the system's efficiency is a "multiplexing" technique developed for NASA's Space Shuttle. Multiplexing is simultaneous transmission of large amounts of data over a single transmission link at very high rates of speed. In the banking application, a small multiplex "data bus" interconnects all the terminals and a central computer which stores information on clients' accounts. The data bus replaces the maze of wiring that would be needed to connect each terminal separately and it affords greater speed in recording transactions. The SCI system offers banks real-time data management through constant updating of the central computer. For example, a check is immediately cancelled at the teller's terminal and the computer is simultaneously advised of the transaction; under other methods, the check would be cancelled and the transaction recorded at the close of business. Teller checkout at the end of the day, conventionally a time-consuming matter of processing paper, can be accomplished in minutes by calling up a summary of the day's transactions. SCI manufactures other types of terminals for use in the system, such as an administrative terminal that provides an immediate printout of a client's account, and another for printing and recording savings account deposits and withdrawals. SCI systems have been installed in several banks in Tennessee, Arizona, and Oregon and additional installations are scheduled this year.

  10. Assessment of a prototype computer colour matching system to reproduce natural tooth colour on ceramic restorations.

    PubMed

    Kristiansen, Joshua; Sakai, Maiko; Da Silva, John D; Gil, Mindy; Ishikawa-Nagai, Shigemi

    2011-12-01

The aim of this study was to assess the accuracy of a prototype computer colour matching (CCM) system for dental ceramics targeting the colour of natural maxillary central incisors employing a dental spectrophotometer and the Kubelka-Munk theory. Seventeen human volunteers with natural intact maxillary central incisors were selected to participate in this study. One central incisor from each subject was measured in the body region by a spectrophotometer and the reflectance values were used by the CCM system in order to generate a prescription for a ceramic mixture to reproduce the target tooth's colour. Ceramic discs were fabricated based on these prescriptions and layered on a zirconia ceramic core material of a specified colour. The colour match of each two-layered specimen to the target natural tooth was assessed by CIELAB colour coordinates (ΔE(*), ΔL(*), Δa(*) and Δb(*)). The average colour difference ΔE(*) value was 2.58 ± 0.84 for the ceramic specimen-natural tooth (CS-NT) pairs. ΔL(*) values ranged from 0.17 to 2.71, Δa(*) values ranged from -1.70 to 0.61, and Δb(*) values ranged from -1.48 to 3.81. There was a moderate inverse correlation (R=-0.44, p-value=0.0721) between L(*) values for natural target teeth and ΔE(*) values; no such correlation was found for a(*) and b(*) values. The newly developed prototype CCM system has the potential to be used as an efficient tool in the reproduction of natural tooth colour. Copyright © 2011. Published by Elsevier Ltd.
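The colour-difference metric used in such assessments is the standard CIE76 formula, ΔE* = √(ΔL*² + Δa*² + Δb*²); a minimal sketch with hypothetical specimen and target readings:

```python
import math

def delta_e(lab1, lab2):
    """CIE76 colour difference between two CIELAB (L*, a*, b*) triples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Hypothetical target tooth vs. fabricated ceramic specimen readings.
tooth = (72.3, 1.5, 18.2)
ceramic = (73.1, 1.0, 19.6)
print(round(delta_e(tooth, ceramic), 2))  # 1.69
```

Later ΔE formulas (CIE94, CIEDE2000) weight lightness, chroma, and hue differently, but studies reporting plain ΔE(*) values typically mean this Euclidean distance.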

  11. Distributed control system for parallel-connected DC boost converters

    DOEpatents

    Goldsmith, Steven

    2017-08-15

    The disclosed invention is a distributed control system for operating a DC bus fed by disparate DC power sources that service a known or unknown load. The voltage sources vary in v-i characteristics and have time-varying, maximum supply capacities. Each source is connected to the bus via a boost converter, which may have different dynamic characteristics and power transfer capacities, but are controlled through PWM. The invention tracks the time-varying power sources and apportions their power contribution while maintaining the DC bus voltage within the specifications. A central digital controller solves the steady-state system for the optimal duty cycle settings that achieve a desired power supply apportionment scheme for a known or predictable DC load. A distributed networked control system is derived from the central system that utilizes communications among controllers to compute a shared estimate of the unknown time-varying load through shared bus current measurements and bus voltage measurements.
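The steady-state relation underlying the duty-cycle computation is the ideal boost-converter equation V_bus = V_in / (1 − D). A minimal sketch of deriving duty cycles and apportioning a known load among sources (all voltages, capacities, and function names here are illustrative, not from the patent):

```python
def duty_cycle(v_in, v_bus):
    """Ideal boost converter: V_bus = V_in / (1 - D), so D = 1 - V_in / V_bus."""
    return 1.0 - v_in / v_bus

def apportion(load_w, capacities_w):
    """Split a known load among sources in proportion to their capacities."""
    total = sum(capacities_w)
    return [load_w * c / total for c in capacities_w]

v_bus = 380.0                                 # regulated DC bus voltage
sources = [(48.0, 1200.0), (36.0, 800.0)]     # (input voltage, max watts)
shares = apportion(1000.0, [cap for _, cap in sources])
for (v_in, _), share in zip(sources, shares):
    print(f"D = {duty_cycle(v_in, v_bus):.3f}, supplies {share:.0f} W")
```

The patented system solves the full steady-state network for these settings and, in the distributed variant, estimates the unknown load from shared bus measurements; the sketch only shows the ideal per-converter relation.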

  12. [The Development of Information Centralization and Management Integration System for Monitors Based on Wireless Sensor Network].

    PubMed

    Xu, Xiu; Zhang, Honglei; Li, Yiming; Li, Bin

    2015-07-01

An information centralization and management integration system was developed for monitors of different brands and models, using wireless sensor network technologies such as wireless location and wireless communication on top of the existing wireless network. Adaptive in implementation and low in cost, the system offers real-time, efficient, and fine-grained collection of monitor status and data; it locates the monitors and provides services through a web server, video server, and locating server on the local network. Using an intranet computer, clinical and device management staff can access the status and parameters of the monitors. Applications of this system provide convenience and save human resources for clinical departments, and promote the efficiency, accuracy, and granularity of device management. The successful achievement of this system provides a solution for the integrated, fine-grained management of mobile devices, including ventilators and infusion pumps.

  13. X-wing fly-by-wire vehicle management system

    NASA Technical Reports Server (NTRS)

    Fischer, Jr., William C. (Inventor)

    1990-01-01

A complete, computer based, vehicle management system (VMS) for X-Wing aircraft using digital fly-by-wire technology controlling many subsystems and providing functions beyond the classical aircraft flight control system. The vehicle management system receives input signals from a multiplicity of sensors and provides commands to a large number of actuators controlling many subsystems. The VMS includes: segregating flight critical and mission critical factors and providing a greater level of back-up or redundancy for the former; centralizing the computation of functions utilized by several subsystems (e.g. air data, rotor speed, etc.); and integrating the control of the flight control functions, the compressor control, the rotor conversion control, vibration alleviation by higher harmonic control, engine power anticipation and self-test, all in the same flight control computer (FCC) hardware units. The VMS uses equivalent redundancy techniques to attain quadruple equivalency levels; includes alternate modes of operation and recovery means to back-up any functions which fail; and uses back-up control software for software redundancy.

  14. PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deelman, Ewa; Carothers, Christopher; Mandal, Anirban

Here we report that computational science is well established as the third pillar of scientific discovery and is on par with experimentation and theory. However, as we move closer toward the ability to execute exascale calculations and process the ensuing extreme-scale amounts of data produced by both experiments and computations alike, the complexity of managing the compute and data analysis tasks has grown beyond the capabilities of domain scientists. Therefore, workflow management systems are absolutely necessary to ensure current and future scientific discoveries. A key research question for these workflow management systems concerns the performance optimization of complex calculation and data analysis tasks. The central contribution of this article is a description of the PANORAMA approach for modeling and diagnosing the run-time performance of complex scientific workflows. This approach integrates extreme-scale systems testbed experimentation, structured analytical modeling, and parallel systems simulation into a comprehensive workflow framework called Pegasus for understanding and improving the overall performance of complex scientific workflows.

  15. Trivariate characteristics of intensity fluctuations for heavily saturated optical systems.

    PubMed

    Das, Biman; Drake, Eli; Jack, John

    2004-02-01

Trivariate cumulants of intensity fluctuations have been computed starting from a trivariate intensity probability distribution function, which rests on the assumption that the variation of intensity has a maximum entropy distribution with the constraint that the total intensity is constant. The assumption holds for optical systems such as a thin, long, mirrorless gas laser amplifier where under heavy gain saturation the total output approaches a constant intensity, although the intensity of any mode fluctuates rapidly about the average intensity. The relations between trivariate cumulants and central moments that were needed for the computation of trivariate cumulants were derived. The results of the computation show that the cumulants have characteristic values that depend on the number of interacting modes in the system. The cumulant values approach zero when the number of modes is infinite, as expected. The results will be useful for comparison with the experimental trivariate statistics of heavily saturated optical systems such as the output from a thin, long, bidirectional gas laser amplifier.

  16. Computer models of complex multiloop branched pipeline systems

    NASA Astrophysics Data System (ADS)

    Kudinov, I. V.; Kolesnikov, S. V.; Eremin, A. V.; Branfileva, A. N.

    2013-11-01

This paper describes the principal theoretical concepts of a method for constructing computer models of complex multiloop branched pipeline networks; the method is based on the theory of graphs and Kirchhoff's two laws as applied to electrical circuits. The models make it possible to calculate velocities, flow rates, and pressures of a fluid medium in any section of pipeline networks when the latter are considered as single hydraulic systems. On the basis of multivariant calculations, the reasons for existing problems can be identified, the least costly methods of eliminating them can be proposed, and recommendations for planning the modernization of pipeline systems and the construction of new sections can be made. The results obtained can be applied to complex pipeline systems intended for various purposes (water pipelines, petroleum pipelines, etc.). The operability of the method has been verified on the example of a unified computer model of the heat network for centralized heat supply of the city of Samara.
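The electrical analogy such methods rest on can be illustrated for the linear (laminar-flow) case, where each pipe obeys an Ohm's-law-like relation Q = G·Δp and Kirchhoff's current law becomes a mass balance at each node. A toy sketch with an invented five-pipe loop network (conductances and pressures are arbitrary):

```python
# Linear pipe network: flow on each pipe is Q = G * (p_i - p_j), and
# Kirchhoff's current law (mass balance) holds at every interior node.
pipes = {("A", "B"): 2.0, ("A", "C"): 1.0, ("B", "C"): 1.0,
         ("B", "D"): 1.0, ("C", "D"): 2.0}     # conductances (illustrative)
fixed = {"A": 100.0, "D": 0.0}                 # boundary pressures, kPa
unknown = ["B", "C"]

# Assemble the nodal equations  sum_j G_ij * (p_i - p_j) = 0  for unknowns.
n = len(unknown)
A = [[0.0] * n for _ in range(n)]
b = [0.0] * n
for (i, j), g in pipes.items():
    for node, other in ((i, j), (j, i)):
        if node in unknown:
            r = unknown.index(node)
            A[r][r] += g
            if other in unknown:
                A[r][unknown.index(other)] -= g
            else:
                b[r] += g * fixed[other]

# Solve the resulting 2x2 system by Gaussian elimination.
m = A[1][0] / A[0][0]
A[1][1] -= m * A[0][1]
b[1] -= m * b[0]
pC = b[1] / A[1][1]
pB = (b[0] - A[0][1] * pC) / A[0][0]
print(f"p_B = {pB:.1f} kPa, p_C = {pC:.1f} kPa")
```

Real pipeline flow is turbulent and the pipe law nonlinear, so production models iterate (e.g. by successive linearization), but each iteration solves exactly this kind of Kirchhoff system.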

  17. PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows

    DOE PAGES

    Deelman, Ewa; Carothers, Christopher; Mandal, Anirban; ...

    2015-07-14

Here we report that computational science is well established as the third pillar of scientific discovery and is on par with experimentation and theory. However, as we move closer toward the ability to execute exascale calculations and process the ensuing extreme-scale amounts of data produced by both experiments and computations alike, the complexity of managing the compute and data analysis tasks has grown beyond the capabilities of domain scientists. Therefore, workflow management systems are absolutely necessary to ensure current and future scientific discoveries. A key research question for these workflow management systems concerns the performance optimization of complex calculation and data analysis tasks. The central contribution of this article is a description of the PANORAMA approach for modeling and diagnosing the run-time performance of complex scientific workflows. This approach integrates extreme-scale systems testbed experimentation, structured analytical modeling, and parallel systems simulation into a comprehensive workflow framework called Pegasus for understanding and improving the overall performance of complex scientific workflows.

  18. GPU-accelerated computational tool for studying the effectiveness of asteroid disruption techniques

    NASA Astrophysics Data System (ADS)

    Zimmerman, Ben J.; Wie, Bong

    2016-10-01

This paper presents the development of a new Graphics Processing Unit (GPU) accelerated computational tool for asteroid disruption techniques. Numerical simulations are completed using the high-order spectral difference (SD) method. Because of the compact nature of the SD method, it is well suited for implementation on the GPU architecture, so solutions are generated orders of magnitude faster than with the Central Processing Unit (CPU) counterpart. A multiphase model integrated with the SD method is introduced, and several asteroid disruption simulations are conducted, including kinetic-energy impactors, multi-kinetic-energy impactor systems, and nuclear options. Results illustrate the benefits of using multi-kinetic-energy impactor systems when compared to a single impactor system. In addition, the effectiveness of nuclear options is observed.

  19. A centralized audio presentation manager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papp, A.L. III; Blattner, M.M.

    1994-05-16

The centralized audio presentation manager addresses the problems which occur when multiple programs running simultaneously attempt to use the audio output of a computer system. Time dependence of sound means that certain auditory messages must be scheduled simultaneously, which can lead to perceptual problems due to psychoacoustic phenomena. Furthermore, the combination of speech and nonspeech audio is examined; each presents its own problems of perceptibility in an acoustic environment composed of multiple auditory streams. The centralized audio presentation manager receives abstract parameterized message requests from the currently running programs, and attempts to create and present a sonic representation in the most perceptible manner through the use of a theoretically and empirically designed rule set.

  20. Distributed intelligent control and status networking

    NASA Technical Reports Server (NTRS)

    Fortin, Andre; Patel, Manoj

    1993-01-01

    Over the past two years, the Network Control Systems Branch (Code 532) has been investigating control and status networking technologies. These emerging technologies use distributed processing over a network to accomplish a particular custom task. These networks consist of small intelligent 'nodes' that perform simple tasks. Containing simple, inexpensive hardware and software, these nodes can be easily developed and maintained. Once networked, the nodes can perform a complex operation without a central host. This type of system provides an alternative to more complex control and status systems which require a central computer. This paper will provide some background and discuss some applications of this technology. It will also demonstrate the suitability of one particular technology for the Space Network (SN) and discuss the prototyping activities of Code 532 utilizing this technology.

  1. Centralized Duplicate Removal Video Storage System with Privacy Preservation in IoT.

    PubMed

    Yan, Hongyang; Li, Xuan; Wang, Yu; Jia, Chunfu

    2018-06-04

In recent years, the Internet of Things (IoT) has found wide application and attracted much attention. Since most of the end-terminals in IoT have limited capabilities for storage and computing, it has become a trend to outsource data from local devices to cloud computing. To further reduce communication bandwidth and storage space, data deduplication has been widely adopted to eliminate redundant data. However, since data collected in IoT are sensitive and closely related to users' personal information, protecting the privacy of that information becomes a challenge. As the channels, like the wireless channels between the terminals and the cloud servers in IoT, are public and the cloud servers are not fully trusted, data have to be encrypted before being uploaded to the cloud. Encryption, however, makes deduplication by the cloud server difficult, because the ciphertext will differ even when the underlying plaintext is identical. In this paper, we build a centralized privacy-preserving duplicate removal storage system that supports both file-level and block-level deduplication. In order to avoid leaking statistical information about the data, Intel Software Guard Extensions (SGX) technology is utilized to protect the deduplication process on the cloud server. The results of the experimental analysis demonstrate that the new scheme can significantly improve deduplication efficiency and enhance security. It is envisioned that this duplicate removal system with privacy preservation will be of great use in the centralized storage environment of IoT.
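A common building block for deduplicating encrypted data is convergent (message-locked) encryption, in which the key is derived from the content itself so that identical plaintexts yield identical ciphertexts. The sketch below illustrates only that idea, not the paper's SGX-based protocol, and its toy XOR cipher stands in for a real authenticated cipher:

```python
import hashlib

store = {}  # fingerprint -> ciphertext (the cloud's deduplicated store)

def convergent_upload(plaintext: bytes):
    """Derive the key from the content, so identical files dedupe to one copy."""
    key = hashlib.sha256(plaintext).digest()
    # Toy 'encryption': XOR with a keystream derived from the key. This is
    # for illustration only; a real system would use an authenticated cipher.
    stream = hashlib.sha256(key + b"stream").digest()
    cipher = bytes(p ^ stream[i % len(stream)] for i, p in enumerate(plaintext))
    tag = hashlib.sha256(cipher).hexdigest()   # deduplication fingerprint
    duplicate = tag in store
    store[tag] = cipher
    return tag, duplicate

t1, dup1 = convergent_upload(b"sensor reading 42")
t2, dup2 = convergent_upload(b"sensor reading 42")  # identical -> deduplicated
t3, dup3 = convergent_upload(b"sensor reading 43")
print(dup1, dup2, dup3, len(store))  # False True False 2
```

The known weakness of plain convergent encryption, leaking whether two users hold the same file, is exactly the statistical leakage the paper's SGX-protected deduplication process is designed to contain.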

  2. Power system distributed oscillation detection based on Synchrophasor data

    NASA Astrophysics Data System (ADS)

    Ning, Jiawei

    Along with increasing demand for electricity, the integration of renewable energy, and the deregulation of the power market, the power industry is facing unprecedented challenges. Within the last couple of decades, several serious blackouts have taken place in the United States. As an effective approach to preventing them, power system small signal stability monitoring has been drawing growing interest and attention from researchers. With the wide-spread deployment of Synchrophasors around the world over the last decade, real-time online monitoring of power systems has become much more feasible. Compared with planning-study analysis, real-time online monitoring benefits control room operators immediately and directly. Among online monitoring methods, Oscillation Modal Analysis (OMA), a modal identification method based on routine measurement data in which the input is unmeasured ambient excitation, is a great tool for evaluating and monitoring power system small signal stability. Indeed, high-sampling-rate Synchrophasor data from around the power system fit perfectly as inputs to OMA. Existing OMA methods for power systems are all based on centralized algorithms running at control centers only; however, with the rapidly growing number of online Synchrophasors, the computation burden at control centers will continue to expand exponentially. The increasing computation time at the control center compromises the real-time character of online monitoring, and the communication load between substations and the control center will also become prohibitive. Meanwhile, it is difficult or even impossible for centralized algorithms to detect some poorly damped local modes. To avoid these shortcomings of centralized OMA methods and embrace the changes under way in power systems, two new distributed oscillation detection methods with two new decentralized structures are presented in this dissertation. Since the new schemes bring substations into the oscillation detection picture, the proposed methods can achieve faster and more reliable results. This claim is validated by test results on the IEEE two-area simulation test system and by case studies using historical synchrophasor data from a real power system.
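The heart of ambient-data modal analysis can be illustrated in miniature: fit an autoregressive model to noise-driven measurements and read the oscillatory mode off its poles. This is a toy stand-in for production OMA algorithms, with all parameters invented:

```python
import cmath
import math
import random

# Simulate an ambient-excited mode as an AR(2) process whose complex poles
# r*exp(+-i*theta) encode the oscillation frequency (theta) and damping (r).
r_true, theta_true = 0.98, 0.3
a1, a2 = 2 * r_true * math.cos(theta_true), -r_true ** 2
rng = random.Random(1)
x = [0.0, 0.0]
for _ in range(20000):
    x.append(a1 * x[-1] + a2 * x[-2] + rng.gauss(0.0, 1.0))

# Least-squares fit of x[t] = a1*x[t-1] + a2*x[t-2] via the normal equations.
s11 = sum(v * v for v in x[1:-1])
s22 = sum(v * v for v in x[:-2])
s12 = sum(u * v for u, v in zip(x[1:-1], x[:-2]))
b1 = sum(u * v for u, v in zip(x[2:], x[1:-1]))
b2 = sum(u * v for u, v in zip(x[2:], x[:-2]))
det = s11 * s22 - s12 * s12
a1_hat = (b1 * s22 - b2 * s12) / det
a2_hat = (s11 * b2 - s12 * b1) / det

# Poles of z^2 - a1*z - a2 = 0 give the identified mode.
pole = (a1_hat + cmath.sqrt(a1_hat ** 2 + 4 * a2_hat)) / 2
print(f"identified |pole| = {abs(pole):.3f}, angle = {cmath.phase(pole):.3f} rad")
```

Real OMA methods use higher-order multivariate models and subspace or stochastic identification techniques, and the distributed schemes in the dissertation split such computations between substations and the control center; the pole-extraction principle is the same.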

  3. Grid site availability evaluation and monitoring at CMS

    DOE PAGES

    Lyons, Gaston; Maciulaitis, Rokas; Bagliesi, Giuseppe; ...

    2017-10-01

The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) uses distributed grid computing to store, process, and analyse the vast quantity of scientific data recorded every year. The computing resources are grouped into sites and organized in a tiered structure. Each site provides computing and storage to the CMS computing grid. Over a hundred sites worldwide contribute resources ranging from a hundred to well over ten thousand computing cores and storage from tens of TBytes to tens of PBytes. In such a large computing setup, scheduled and unscheduled outages occur continually and are not allowed to significantly impact data handling, processing, and analysis. Unscheduled capacity and performance reductions need to be detected promptly and corrected. CMS developed a sophisticated site evaluation and monitoring system for Run 1 of the LHC based on tools of the Worldwide LHC Computing Grid. For Run 2 of the LHC the site evaluation and monitoring system is being overhauled to enable faster detection/reaction to failures and a more dynamic handling of computing resources. Furthermore, enhancements to better distinguish site from central service issues and to make evaluations more transparent and informative to site support staff are planned.

  4. Grid site availability evaluation and monitoring at CMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyons, Gaston; Maciulaitis, Rokas; Bagliesi, Giuseppe

    The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) uses distributed grid computing to store, process, and analyse the vast quantity of scientific data recorded every year. The computing resources are grouped into sites and organized in a tiered structure. Each site provides computing and storage to the CMS computing grid. Over a hundred sites worldwide contribute resources ranging from hundreds to well over ten thousand computing cores and storage from tens of TBytes to tens of PBytes. In such a large computing setup, scheduled and unscheduled outages occur continually and are not allowed to significantly impact data handling, processing, and analysis. Unscheduled capacity and performance reductions need to be detected promptly and corrected. CMS developed a sophisticated site evaluation and monitoring system for Run 1 of the LHC based on tools of the Worldwide LHC Computing Grid. For Run 2 of the LHC the site evaluation and monitoring system is being overhauled to enable faster detection of and reaction to failures and a more dynamic handling of computing resources. Furthermore, enhancements to better distinguish site issues from central service issues and to make evaluations more transparent and informative to site support staff are planned.

  5. Grid site availability evaluation and monitoring at CMS

    NASA Astrophysics Data System (ADS)

    Lyons, Gaston; Maciulaitis, Rokas; Bagliesi, Giuseppe; Lammel, Stephan; Sciabà, Andrea

    2017-10-01

    The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) uses distributed grid computing to store, process, and analyse the vast quantity of scientific data recorded every year. The computing resources are grouped into sites and organized in a tiered structure. Each site provides computing and storage to the CMS computing grid. Over a hundred sites worldwide contribute resources ranging from hundreds to well over ten thousand computing cores and storage from tens of TBytes to tens of PBytes. In such a large computing setup, scheduled and unscheduled outages occur continually and are not allowed to significantly impact data handling, processing, and analysis. Unscheduled capacity and performance reductions need to be detected promptly and corrected. CMS developed a sophisticated site evaluation and monitoring system for Run 1 of the LHC based on tools of the Worldwide LHC Computing Grid. For Run 2 of the LHC the site evaluation and monitoring system is being overhauled to enable faster detection of and reaction to failures and a more dynamic handling of computing resources. Enhancements to better distinguish site issues from central service issues and to make evaluations more transparent and informative to site support staff are planned.

  6. Hybrid Quantum-Classical Approach to Quantum Optimal Control.

    PubMed

    Li, Jun; Yang, Xiaodong; Peng, Xinhua; Sun, Chang-Pu

    2017-04-14

    A central challenge in quantum computing is to identify more computational problems for which utilization of quantum resources can offer significant speedup. Here, we propose a hybrid quantum-classical scheme to tackle the quantum optimal control problem. We show that the most computationally demanding part of gradient-based algorithms, namely, computing the fitness function and its gradient for a control input, can be accomplished by the process of evolution and measurement on a quantum simulator. By posing queries to and receiving answers from the quantum simulator, classical computing devices update the control parameters until an optimal control solution is found. To demonstrate the quantum-classical scheme in experiment, we use a seven-qubit nuclear magnetic resonance system, on which we have succeeded in optimizing state preparation without involving classical computation of the large Hilbert space evolution.
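The hybrid loop described above can be sketched classically: a stand-in function plays the role of the quantum simulator's measured fitness, and the classical side queries it to estimate the gradient and update the control parameters. The quadratic fitness, learning rate, and finite-difference gradient below are illustrative assumptions, not the paper's actual scheme:

```python
import numpy as np

def simulator_fitness(theta):
    """Stand-in for the fitness measured on a quantum simulator: a smooth
    function of two control parameters, maximal at theta = (1.0, -0.5)."""
    return -((theta[0] - 1.0) ** 2 + (theta[1] + 0.5) ** 2)

def hybrid_optimize(fitness, theta0, lr=0.1, steps=200, eps=1e-4):
    """Classical outer loop: query the (simulated) device for fitness
    values, estimate the gradient by finite differences, and ascend."""
    theta = np.array(theta0, dtype=float)
    for _ in range(steps):
        grad = np.zeros_like(theta)
        for i in range(theta.size):
            step = np.zeros_like(theta)
            step[i] = eps
            grad[i] = (fitness(theta + step) - fitness(theta - step)) / (2 * eps)
        theta += lr * grad  # gradient ascent on the measured fitness
    return theta

theta_opt = hybrid_optimize(simulator_fitness, [0.0, 0.0])  # -> near (1.0, -0.5)
```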

  7. Air Force Global Weather Central System Architecture Study. Final System/Subsystem Summary Report. Volume 7. Implementation and Development Plans

    DTIC Science & Technology

    1976-03-01

    special access; PS2 will be for the variable perimeter; and PS3, PS4, and PS5 will make up the normal access area. This added computer power will be...implementation of PS1 and PS4 will continue as new communications consoles are actively established for possible side-by-side operation of the

  8. Performance Analysis of the Unitree Central File

    NASA Technical Reports Server (NTRS)

    Pentakalos, Odysseas I.; Flater, David

    1994-01-01

    This report consists of two parts. The first part briefly comments on the documentation status of two major systems at NASA's Center for Computational Sciences, specifically the Cray C98 and the Convex C3830. The second part describes the work done on improving the performance of file transfers between the Unitree Mass Storage System running on the Convex file server and the users' workstations distributed over a large geographic area.

  9. Quantification of peripheral and central blood pressure variability using a time-frequency method.

    PubMed

    Kouchaki, Z; Butlin, M; Qasem, A; Avolio, A P

    2016-08-01

    Systolic blood pressure variability (BPV) is associated with cardiovascular events. As the beat-to-beat variation of blood pressure is due to the interaction of several cardiovascular control systems operating with different response times, assessment of BPV by spectral analysis of the continuous measurement of arterial pressure in the finger is used to differentiate the contribution of these systems to regulating blood pressure. However, as baroreceptors are centrally located, this study considered applying a continuous aortic pressure signal estimated noninvasively from finger pressure for assessment of systolic BPV by a time-frequency method using the Short Time Fourier Transform (STFT). The average ratio of the low-frequency and high-frequency power bands (LF_PB/HF_PB) was computed by time-frequency decomposition of peripheral systolic pressure (pSBP) and derived central aortic systolic blood pressure (cSBP) in 30 healthy subjects (25-62 years) as a marker of the balance between the cardiovascular control systems contributing to low- and high-frequency blood pressure variability. The results showed that the BPV assessed from finger pressure (pBPV) overestimated the BPV values compared to that assessed from central aortic pressure (cBPV) for identical cardiac cycles (P<0.001), with the overestimation being greater at higher power.
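The LF/HF power-band ratio computation can be sketched with an off-the-shelf STFT. The band limits (0.04-0.15 Hz and 0.15-0.4 Hz), the window length, and the synthetic pressure series are assumptions for illustration, not the study's exact settings:

```python
import numpy as np
from scipy.signal import stft

def lf_hf_ratio(sbp, fs, lf_band=(0.04, 0.15), hf_band=(0.15, 0.4)):
    """Average LF/HF power ratio of a beat-resampled systolic pressure
    series, via the Short Time Fourier Transform."""
    f, _, Z = stft(sbp - np.mean(sbp), fs=fs, nperseg=256)
    power = np.abs(Z) ** 2
    lf = power[(f >= lf_band[0]) & (f < lf_band[1])].sum(axis=0)
    hf = power[(f >= hf_band[0]) & (f < hf_band[1])].sum(axis=0)
    return float(np.mean(lf / hf))

# Synthetic series: strong 0.1 Hz (LF) plus weak 0.25 Hz (HF) oscillation
fs = 4.0  # Hz, an evenly resampled beat-to-beat series
t = np.arange(0.0, 300.0, 1.0 / fs)
sbp = 120 + 3.0 * np.sin(2 * np.pi * 0.1 * t) + 1.0 * np.sin(2 * np.pi * 0.25 * t)
ratio = lf_hf_ratio(sbp, fs)  # well above 1: LF dominates
```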

  10. A user view of office automation or the integrated workstation

    NASA Technical Reports Server (NTRS)

    Schmerling, E. R.

    1984-01-01

    Central data bases are useful only if they are kept up to date and easily accessible in an interactive (query) mode rather than in monthly reports that may be out of date and must be searched by hand. The concepts of automatic data capture, data base management, and query languages require good communications and readily available workstations to be useful. The minimal necessary workstation is a personal computer, which can be an important office tool if connected into other office machines and properly integrated into an office system. It has a great deal of flexibility and can often be tailored to suit the tastes, work habits, and requirements of the user. Unlike dumb terminals, there is less tendency to saturate a central computer, since its free-standing capabilities are available after downloading a selection of data. The PC also permits the sharing of many other facilities, like larger computing power, sophisticated graphics programs, laser printers, and communications. It can provide rapid access to common data bases able to provide more up-to-date information than printed reports. Portable computers can access the same familiar office facilities from anywhere in the world where a telephone connection can be made.

  11. BES-III distributed computing status

    NASA Astrophysics Data System (ADS)

    Belov, S. D.; Deng, Z. Y.; Korenkov, V. V.; Li, W. D.; Lin, T.; Ma, Z. T.; Nicholson, C.; Pelevanyuk, I. S.; Suo, B.; Trofimov, V. V.; Tsaregorodtsev, A. U.; Uzhinskiy, A. V.; Yan, T.; Yan, X. F.; Zhang, X. M.; Zhemchugov, A. S.

    2016-09-01

    The BES-III experiment at the Institute of High Energy Physics (Beijing, China) is aimed at precision measurements in e+e- annihilation in the energy range from 2.0 to 4.6 GeV. The world's largest samples of J/psi and psi' events and unique samples of XYZ data have already been collected. The expected increase of the data volume in the coming years required a significant evolution of the computing model, namely a shift from centralized data processing to a distributed one. This report summarizes the current design of the BES-III distributed computing system, some key decisions, and experience gained during two years of operation.

  12. The development and testing of a fieldworthy system of improved fluid pumping device and liquid sensor for oil wells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buckman, W.G.

    1991-12-31

    A major expenditure to maintain oil and gas leases is the support of pumpers, those individuals who maintain the pumping systems on wells to achieve optimum production. Many leases are marginal and in remote areas, which requires considerable driving time for the pumper. The Air Pulse Oil Pump System is designed to be an economical system for shallow stripper wells. To improve on the economics of this system, we have designed a Remote Oil Field Monitor and Controller to enable us to acquire data from the lease at our central office at any time and to control the pumping activities from the central office using a personal computer. The advent and economics of low-power microcontrollers have made it feasible to use this type of system for numerous remote control applications. We can also adapt this economical system to monitor and control the production of gas wells and/or pump jacks.

  13. Application of a reversible chemical reaction system to solar thermal power plants

    NASA Technical Reports Server (NTRS)

    Hanseth, E. J.; Won, Y. S.; Seibowitz, L. P.

    1980-01-01

    Three distributed dish solar thermal power systems using various applications of SO2/SO3 chemical energy storage and transport technology were comparatively assessed. Each system features a different role for the chemical system: (1) energy storage only, (2) energy transport, or (3) energy transport and storage. These three systems were also compared with the dish-Stirling system, using electrical transport and battery storage, and the central receiver Rankine system, with thermal storage, to determine the relative merit of plants employing a thermochemical system. As an assessment criterion, the busbar energy costs were compared. Separate but comparable solar energy cost computer codes were used for the distributed receiver and central receiver systems. Calculations were performed for capacity factors ranging from 0.4 to 0.8. The results indicate that SO2/SO3 technology has the potential to be more cost effective in transporting the collected energy than in storing it for the storage capacity range studied (2-15 hours).

  14. Computing at DESY — current setup, trends and strategic directions

    NASA Astrophysics Data System (ADS)

    Ernst, Michael

    1998-05-01

    Since the HERA experiments H1 and ZEUS started data taking in '92, the computing environment at DESY has changed dramatically. After running mainframe-centred computing for more than 20 years, DESY switched to a heterogeneous, fully distributed computing environment within only about two years, in almost every corner where computing has its applications. The computing strategy was highly influenced by the needs of the user community. The collaborations are usually limited by current technology, and their ever-increasing demands are the driving force for central computing to always move close to the technology edge. While DESY's central computing has multi-decade experience in running Central Data Recording/Central Data Processing for HEP experiments, the most challenging task today is to provide clear and homogeneous concepts in the desktop area. Given that lowest-level commodity hardware draws more and more attention, combined with the financial constraints we are facing already today, we quickly need concepts for integrated support of a versatile device which has the potential to move into basically any computing area in HEP. Though commercial solutions, especially those addressing the PC management and support issues, are expected to come to market in the next 2-3 years, we need to provide suitable solutions now. Buying PCs at DESY currently at a rate of about 30/month will otherwise absorb all available manpower in central computing and still leave hundreds of people unhappy. Though certainly not the only area, the desktop issue is one of the most important where we need HEP-wide collaboration to a large extent, and right now. Taking into account that there is traditionally no room for R&D at DESY, collaboration, meaning sharing experience and development resources within the HEP community, is a predominant factor for us.

  15. Solar space- and water-heating system at Stanford University. Central Food Services Building

    NASA Astrophysics Data System (ADS)

    1980-05-01

    The closed-loop drain-back system is described as offering dependability of gravity drain-back freeze protection, low maintenance, minimal costs, and simplicity. The system features an 840 square-foot collector and storage capacity of 1550 gallons. The acceptance testing and the predicted system performance data are briefly described. Solar performance calculations were performed using a computer design program (FCHART). Bidding, costs, and economics of the system are reviewed. Problems are discussed and solutions and recommendations given. An operation and maintenance manual is given.

  16. TFTR CAMAC systems and components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rauch, W.A.; Bergin, W.; Sichta, P.

    1987-08-01

    Princeton's tokamak fusion test reactor (TFTR) utilizes Computer Automated Measurement and Control (CAMAC) to provide instrumentation for real and quasi real time control, monitoring, and data acquisition systems. This paper describes and discusses the complement of CAMAC hardware systems and components that comprise the interface for tokamak control and measurement instrumentation, and communication with the central instrumentation control and data acquisition (CICADA) system. It also discusses CAMAC reliability and calibration, types of modules used, a summary of data acquisition and control points, and various diagnostic maintenance tools used to support and troubleshoot typical CAMAC systems on TFTR.

  17. The revolution in data gathering systems. [mini and microcomputers in NASA wind tunnels

    NASA Technical Reports Server (NTRS)

    Cambra, J. M.; Trover, W. F.

    1975-01-01

    This paper gives a review of the data-acquisition systems used in NASA's wind tunnels from the 1950's to the present as a basis for assessing the impact of minicomputers and microcomputers on data acquisition and processing. The operation and disadvantages of wind-tunnel data systems are summarized for the period before 1950, the early 1950's, the early and late 1960's, and the early 1970's. Some significant advances discussed include the use or development of solid-state components, minicomputer systems, large central computers, on-line data processing, autoranging DC amplifiers, MOS-FET multiplexers, MSI and LSI logic, computer-controlled programmable amplifiers, solid-state remote multiplexing, integrated circuits, and microprocessors. The distributed system currently in use with the 40-ft by 80-ft wind tunnel at Ames Research Center is described in detail. The expected employment of distributed systems and microprocessors in the next decade is noted.

  18. Deployment and early experience with remote-presence patient care in a community hospital.

    PubMed

    Petelin, J B; Nelson, M E; Goodman, J

    2007-01-01

    The introduction of the RP6 (InTouch Health, Santa Barbara, CA, USA) remote-presence "robot" appears to offer a useful telemedicine device. The authors describe the deployment and early experience with the RP6 in a community hospital and provided a live demonstration of the system on April 16, 2005 during the Emerging Technologies Session of the 2005 SAGES Meeting in Fort Lauderdale, Florida. The RP6 is a 5-ft 4-in. tall, 215-pound robot that can be remotely controlled from an appropriately configured computer located anywhere on the Internet (i.e., on this planet). The system is composed of a control station (a computer at the central station), a mechanical robot, a wireless network (at the remote facility: the hospital), and a high-speed Internet connection at both the remote (hospital) and central locations. The robot itself houses a rechargeable power supply. Its hardware and software allows communication over the Internet with the central station, interpretation of commands from the central station, and conversion of the commands into mechanical and nonmechanical actions at the remote location, which are communicated back to the central station over the Internet. The RP6 system allows the central party (e.g., physician) to control the movements of the robot itself, see and hear at the remote location (hospital), and be seen and heard at the remote location (hospital) while not physically there. Deployment of the RP6 system at the hospital was accomplished in less than a day. The wireless network at the institution was already in place. The control station setup time ranged from 1 to 4 h and was dependent primarily on the quality of the Internet connection (bandwidth) at the remote locations. Patients who visited with the RP6 on their discharge day could be discharged more than 4 h earlier than with conventional visits, thereby freeing up hospital beds on a busy med-surg floor. 
Patient visits during "off hours" (nights and weekends) were three times more efficient than conventional visits during these times (20 min per visit vs 40-min round trip travel + 20-min visit). Patients and nursing personnel both expressed tremendous satisfaction with the remote-presence interaction. The authors' early experience suggests a significant benefit to patients, hospitals, and physicians with the use of RP6. The implications for future development are enormous.

  19. Self managing experiment resources

    NASA Astrophysics Data System (ADS)

    Stagni, F.; Ubeda, M.; Tsaregorodtsev, A.; Romanovskiy, V.; Roiser, S.; Charpentier, P.; Graciani, R.

    2014-06-01

    Within this paper we present an autonomic computing resources management system, used by LHCb for assessing the status of their Grid resources. Virtual Organizations' Grids include heterogeneous resources. For example, LHC experiments very often use resources not provided by WLCG, and Cloud computing resources will soon provide a non-negligible fraction of their computing power. The lack of standards and procedures across experiments and sites led to the appearance of multiple information systems, monitoring tools, ticket portals, etc., which nowadays coexist and represent a very precious source of information for the computing systems of running HEP experiments as well as for sites. These two facts lead to many particular solutions for a general problem: managing the experiment resources. In this paper we present how LHCb, via the DIRAC interware, addressed such issues. With a renewed Central Information Schema hosting all resources metadata and a Status System (Resource Status System) delivering real-time information, the system controls the resources topology, independently of the resource types. The Resource Status System applies data mining techniques to all available information sources and assesses the status changes, which are then propagated to the topology description. Obviously, giving full control to such an automated system is not risk-free. Therefore, in order to minimise the probability of misbehaviour, a battery of tests has been developed to certify the correctness of its assessments. We will demonstrate the performance and efficiency of such a system in terms of cost reduction and reliability.

  20. Electronic Library and Other Technology "Connects" Anchorage Students.

    ERIC Educational Resources Information Center

    Davis, E. E. (Gene); Scott, Marilynn S.

    1986-01-01

    The Anchorage, Alaska, School District is dealing with the problem of teaching students about the "information age" through a unique program in their central library system. It was one of the first school districts in the nation to computerize its library and to provide access to computer databases to the students through telephones as…

  1. Improving estimates of ecosystem metabolism by reducing effects of tidal advection on dissolved oxygen time series

    EPA Science Inventory

    In aquatic systems, time series of dissolved oxygen (DO) have been used to compute estimates of ecosystem metabolism. Central to this open-water method is the assumption that the DO time series is a Lagrangian specification of the flow field. However, most DO time series are coll...

  2. Glyburide - Novel Prophylaxis and Effective Treatment for Traumatic Brain Injury

    DTIC Science & Technology

    2010-08-01

    tested for incremental learning and for rapid learning. Incremental learning was significantly abnormal on days 14–18, as were the memory probe and...Computational biology - modeling of primary blast effects on the central nervous system. Neuroimage. 47 Suppl 2, T10-T20. MOSS, W.C., KING, M.J., and

  3. Central mechanisms for force and motion--towards computational synthesis of human movement.

    PubMed

    Hemami, Hooshang; Dariush, Behzad

    2012-12-01

    Anatomical, physiological and experimental research on the human body can be supplemented by computational synthesis of the human body for all movement: routine daily activities, sports, dancing, and artistic and exploratory involvements. The synthesis requires thorough knowledge about all subsystems of the human body and their interactions, and allows for integration of known knowledge in working modules. It also affords confirmation and/or verification of scientific hypotheses about workings of the central nervous system (CNS). A simple step in this direction is explored here for controlling the forces of constraint. It requires co-activation of agonist-antagonist musculature. The desired trajectories of motion and the force of contact have to be provided by the CNS. The spinal control involves projection onto a muscular subset that induces the force of contact. The projection of force in the sensory motor cortex is implemented via a well-defined neural population unit, and is executed in the spinal cord by a standard integral controller requiring input from tendon organs. The sensory motor cortex structure is extended to the case for directing motion via two neural population units with vision input and spindle efferents. Digital computer simulations show the feasibility of the system. The formulation is modular and can be extended to multi-link limbs, robot and humanoid systems with many pairs of actuators or muscles. It can be expanded to include reticular activating structures and learning. Copyright © 2012 Elsevier Ltd. All rights reserved.
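The "standard integral controller" for contact force mentioned above can be sketched in its simplest discrete form: the measured force (the tendon-organ signal) is compared with the desired force, and the integrated error drives the muscle command. The gains and the static plant model are illustrative assumptions, not values from the paper:

```python
def integral_force_controller(f_desired, ki=0.5, plant_gain=2.0, dt=0.01, steps=2000):
    """Discrete integral controller: integrate the force error (desired
    minus measured) and scale it into a muscle command; the 'plant' maps
    the command statically into a contact force."""
    integral = 0.0
    f_measured = 0.0
    for _ in range(steps):
        error = f_desired - f_measured      # tendon-organ feedback signal
        integral += error * dt
        command = ki * integral             # muscle activation command
        f_measured = plant_gain * command   # static muscle/limb model
    return f_measured

force = integral_force_controller(10.0)  # converges to the desired 10 units
```

The integrator guarantees zero steady-state error for this static plant; in the paper's setting the same loop sits inside a far richer musculoskeletal model.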

  4. Centralized automated quality assurance for large scale health care systems. A pilot method for some aspects of dental radiography.

    PubMed

    Benn, D K; Minden, N J; Pettigrew, J C; Shim, M

    1994-08-01

    President Clinton's Health Security Act proposes the formation of large scale health plans with improved quality assurance. Dental radiography consumes 4% ($1.2 billion in 1990) of total dental expenditure yet regular systematic office quality assurance is not performed. A pilot automated method is described for assessing density of exposed film and fogging of unexposed processed film. A workstation and camera were used to input intraoral radiographs. Test images were produced from a phantom jaw with increasing exposure times. Two radiologists subjectively classified the images as too light, acceptable, or too dark. A computer program automatically classified global grey level histograms from the test images as too light, acceptable, or too dark. The program correctly classified 95% of 88 clinical films. Optical density of unexposed film in the range 0.15 to 0.52 measured by computer was reliable to better than 0.01. Further work is needed to see if comprehensive centralized automated radiographic quality assurance systems with feedback to dentists are feasible, are able to improve quality, and are significantly cheaper than conventional clerical methods.
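The automatic density classification can be sketched as thresholding a global grey-level statistic of the film image. The thresholds below are illustrative placeholders, not the calibrated values from the pilot study:

```python
import numpy as np

def classify_density(image, light_threshold=0.75, dark_threshold=0.35):
    """Classify a digitized radiograph from its global grey-level
    statistics; grey levels in [0, 1], with 0 = black and 1 = white."""
    mean_grey = float(np.mean(image))
    if mean_grey > light_threshold:
        return "too light"
    if mean_grey < dark_threshold:
        return "too dark"
    return "acceptable"

underexposed = np.full((64, 64), 0.9)  # barely darkened film
print(classify_density(underexposed))  # -> too light
```

A production system would use the full histogram rather than the mean, but the decision structure (too light / acceptable / too dark) is the same.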

  5. Interaction entropy for protein-protein binding

    NASA Astrophysics Data System (ADS)

    Sun, Zhaoxi; Yan, Yu N.; Yang, Maoyou; Zhang, John Z. H.

    2017-03-01

    Protein-protein interactions are at the heart of signal transduction and are central to the function of protein machines in biology. The highly specific protein-protein binding is quantitatively characterized by the binding free energy whose accurate calculation from the first principle is a grand challenge in computational biology. In this paper, we show how the interaction entropy approach, which was recently proposed for protein-ligand binding free energy calculation, can be applied to computing the entropic contribution to the protein-protein binding free energy. Explicit theoretical derivation of the interaction entropy approach for protein-protein interaction system is given in detail from the basic definition. Extensive computational studies for a dozen realistic protein-protein interaction systems are carried out using the present approach and comparisons of the results for these protein-protein systems with those from the standard normal mode method are presented. Analysis of the present method for application in protein-protein binding as well as the limitation of the method in numerical computation is discussed. Our study and analysis of the results provided useful information for extracting correct entropic contribution in protein-protein binding from molecular dynamics simulations.
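The entropic term in the interaction entropy approach is computed directly from the fluctuations of the interaction energy along a trajectory: -TΔS = kT ln⟨exp(βΔE_int)⟩, with ΔE_int = E_int - ⟨E_int⟩ and β = 1/kT. A minimal numerical sketch — the toy Gaussian trajectory and the units are assumptions for illustration:

```python
import numpy as np

KB = 0.0019872041  # Boltzmann constant in kcal/(mol*K)

def interaction_entropy_term(e_int, temperature=300.0):
    """Entropic contribution -T*dS = kT * ln< exp(beta * dE_int) >,
    where dE_int is the fluctuation of the interaction energy
    (kcal/mol) about its trajectory mean."""
    e = np.asarray(e_int, dtype=float)
    beta = 1.0 / (KB * temperature)
    de = e - e.mean()
    return KB * temperature * np.log(np.mean(np.exp(beta * de)))

# Toy trajectory: Gaussian fluctuations (std 0.5 kcal/mol) about the mean;
# analytically the penalty is then sigma^2 / (2*kT) ~ 0.21 kcal/mol
rng = np.random.default_rng(0)
e_int = -50.0 + 0.5 * rng.standard_normal(100_000)
penalty = interaction_entropy_term(e_int)
```

Because the exponential average is dominated by rare high-energy frames, convergence with respect to trajectory length is the practical concern the paper's numerical-limitation discussion refers to.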

  6. Interaction entropy for protein-protein binding.

    PubMed

    Sun, Zhaoxi; Yan, Yu N; Yang, Maoyou; Zhang, John Z H

    2017-03-28

    Protein-protein interactions are at the heart of signal transduction and are central to the function of protein machines in biology. The highly specific protein-protein binding is quantitatively characterized by the binding free energy whose accurate calculation from the first principle is a grand challenge in computational biology. In this paper, we show how the interaction entropy approach, which was recently proposed for protein-ligand binding free energy calculation, can be applied to computing the entropic contribution to the protein-protein binding free energy. Explicit theoretical derivation of the interaction entropy approach for protein-protein interaction system is given in detail from the basic definition. Extensive computational studies for a dozen realistic protein-protein interaction systems are carried out using the present approach and comparisons of the results for these protein-protein systems with those from the standard normal mode method are presented. Analysis of the present method for application in protein-protein binding as well as the limitation of the method in numerical computation is discussed. Our study and analysis of the results provided useful information for extracting correct entropic contribution in protein-protein binding from molecular dynamics simulations.

  7. Biomorphic Multi-Agent Architecture for Persistent Computing

    NASA Technical Reports Server (NTRS)

    Lodding, Kenneth N.; Brewster, Paul

    2009-01-01

    A multi-agent software/hardware architecture, inspired by the multicellular nature of living organisms, has been proposed as the basis of design of a robust, reliable, persistent computing system. Just as a multicellular organism can adapt to changing environmental conditions and can survive despite the failure of individual cells, a multi-agent computing system, as envisioned, could adapt to changing hardware, software, and environmental conditions. In particular, the computing system could continue to function (perhaps at a reduced but still reasonable level of performance) if one or more components of the system were to fail. One of the defining characteristics of a multicellular organism is unity of purpose. In biology, the purpose is survival of the organism. The purpose of the proposed multi-agent architecture is to provide a persistent computing environment in harsh conditions in which repair is difficult or impossible. A multi-agent, organism-like computing system would be a single entity built from agents or cells. Each agent or cell would be a discrete hardware processing unit that would include a data processor with local memory, an internal clock, and a suite of communication equipment capable of both local line-of-sight communications and global broadcast communications. Some cells, denoted specialist cells, could contain such additional hardware as sensors and emitters. Each cell would be independent in the sense that there would be no global clock, no global (shared) memory, no pre-assigned cell identifiers, no pre-defined network topology, and no centralized brain or control structure. Like each cell in a living organism, each agent or cell of the computing system would contain a full description of the system encoded as genes, but in this case, the genes would be components of a software genome.

  8. A DICOM based radiotherapy plan database for research collaboration and reporting

    NASA Astrophysics Data System (ADS)

    Westberg, J.; Krogh, S.; Brink, C.; Vogelius, I. R.

    2014-03-01

    Purpose: To create a central radiotherapy (RT) plan database for dose analysis and reporting, capable of calculating and presenting statistics on user defined patient groups. The goal is to facilitate multi-center research studies with easy and secure access to RT plans and statistics on protocol compliance. Methods: RT institutions are able to send data to the central database using DICOM communications on a secure computer network. The central system is composed of a number of DICOM servers, an SQL database and in-house developed software services to process the incoming data. A web site within the secure network allows the user to manage their submitted data. Results: The RT plan database has been developed in Microsoft .NET and users are able to send DICOM data between RT centers in Denmark. Dose-volume histogram (DVH) calculations performed by the system are comparable to those of conventional RT software. A permission system was implemented to ensure access control and easy, yet secure, data sharing across centers. The reports contain DVH statistics for structures in user defined patient groups. The system currently contains over 2200 patients in 14 collaborations. Conclusions: A central RT plan repository for use in multi-center trials and quality assurance was created. The system provides an attractive alternative to dummy runs by enabling continuous monitoring of protocol conformity and plan metrics in a trial.
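The DVH statistics such a system reports reduce, per structure, to a cumulative histogram: for each dose level, the fraction of the structure's volume receiving at least that dose. A minimal sketch assuming equal-volume voxels (the bin width and toy dose values are illustrative):

```python
import numpy as np

def cumulative_dvh(voxel_doses, bin_width=1.0):
    """Cumulative dose-volume histogram: percent of structure volume
    receiving at least each dose level, assuming equal-volume voxels."""
    doses = np.asarray(voxel_doses, dtype=float)
    levels = np.arange(0.0, doses.max() + bin_width, bin_width)
    volume_pct = np.array([100.0 * np.mean(doses >= d) for d in levels])
    return levels, volume_pct

doses = np.array([10.0, 20.0, 20.0, 30.0])  # toy voxel doses in Gy
levels, vol = cumulative_dvh(doses, bin_width=10.0)
# vol -> [100.0, 100.0, 75.0, 25.0] at levels [0, 10, 20, 30] Gy
```

Real DICOM RT dose grids require resampling dose onto the structure's voxels first; the histogram step itself is as above.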

  9. Scalable computing for evolutionary genomics.

    PubMed

    Prins, Pjotr; Belhachemi, Dominique; Möller, Steffen; Smant, Geert

    2012-01-01

    Genomic data analysis in evolutionary biology is becoming so computationally intensive that analysis of multiple hypotheses and scenarios takes too long on a single desktop computer. In this chapter, we discuss techniques for scaling computations through parallelization of calculations, after giving a quick overview of advanced programming techniques. Unfortunately, parallel programming is difficult and requires special software design. The alternative, especially attractive for legacy software, is to introduce poor man's parallelization by running whole programs in parallel as separate processes, using job schedulers. Such pipelines are often deployed on bioinformatics computer clusters. Recent advances in PC virtualization have made it possible to run a full computer operating system, with all of its installed software, on top of another operating system, inside a "box," or virtual machine (VM). Such a VM can flexibly be deployed on multiple computers, in a local network, e.g., on existing desktop PCs, and even in the Cloud, to create a "virtual" computer cluster. Many bioinformatics applications in evolutionary biology can be run in parallel, running processes in one or more VMs. Here, we show how a ready-made bioinformatics VM image, named BioNode, effectively creates a computing cluster, and pipeline, in a few steps. This allows researchers to scale up computations from their desktop, using available hardware, anytime it is required. BioNode is based on Debian Linux and can run on networked PCs and in the Cloud. Over 200 bioinformatics and statistical software packages, of interest to evolutionary biology, are included, such as PAML, Muscle, MAFFT, MrBayes, and BLAST. Most of these software packages are maintained through the Debian Med project. In addition, BioNode contains convenient configuration scripts for parallelizing bioinformatics software. Where Debian Med encourages packaging free and open source bioinformatics software through one central project, BioNode encourages creating free and open source VM images, for multiple targets, through one central project. BioNode can be deployed on Windows, OSX, Linux, and in the Cloud. In addition to the downloadable BioNode images, we provide tutorials online, which empower bioinformaticians to install and run BioNode in different environments, as well as information for future initiatives on creating and building such images.
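    The "poor man's parallelization" the chapter describes, running whole programs in parallel as separate processes, can be sketched with plain shell job control. The sample files and the placeholder work (an echo) are invented for illustration; on a cluster the same pattern is handed to a job scheduler instead:

```shell
# Poor man's parallelization: launch whole programs as independent processes.
for f in sample1.fa sample2.fa sample3.fa; do
    echo "processing $f" > "$f.log" &   # each iteration runs as its own process
done
wait                                    # block until all background jobs finish
```

Each background job is a complete, unmodified program run, which is why this approach suits legacy software that was never written with parallelism in mind.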

  10. Color-coded topography and shaded relief map of the lunar near side and far side hemispheres

    USGS Publications Warehouse

    ,

    2003-01-01

    This publication is a set of three sheets of topographic maps that presents color-coded topographic data digitally merged with shaded relief data. Adopted figure: The figure for the Moon, used for the computation of the map projection, is a sphere with a radius of 1737.4 km. Because the Moon has no surface water, and hence no sea level, the datum (the 0 km contour) for elevations is defined as the radius of 1737.4 km. Coordinates are based on the mean Earth/polar axis (M.E.) coordinate system; the z axis is the axis of the Moon's rotation, and the x axis is the mean Earth direction. The center of mass is the origin of the coordinate system. The equator lies in the x-y plane and the prime meridian lies in the x-z plane, with east longitude values being positive. Projection: The projection is the Lambert Azimuthal Equal-Area Projection. The scale factor at the central latitude and central longitude point is 1:10,000,000. For the near side hemisphere the central latitude and central longitude point is at 0° and 0°. For the far side hemisphere the central latitude and central longitude point is at 0° and 180°.
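    The stated projection parameters (a sphere of radius 1737.4 km, Lambert azimuthal equal-area, centers at 0°/0° for the near side and 0°/180° for the far side) plug directly into the standard forward formulas for this projection; a sketch:

```python
import math

R_MOON = 1737.4  # km; the sphere radius also used as the lunar datum

def lambert_azimuthal_equal_area(lat_deg, lon_deg, lat0_deg=0.0, lon0_deg=0.0):
    """Standard forward Lambert azimuthal equal-area formulas on a sphere.
    Returns map coordinates (x, y) in km about the chosen projection center.
    Undefined at the point antipodal to the center."""
    phi, lam = math.radians(lat_deg), math.radians(lon_deg)
    phi0, lam0 = math.radians(lat0_deg), math.radians(lon0_deg)
    k = math.sqrt(2.0 / (1.0 + math.sin(phi0) * math.sin(phi)
                         + math.cos(phi0) * math.cos(phi) * math.cos(lam - lam0)))
    x = R_MOON * k * math.cos(phi) * math.sin(lam - lam0)
    y = R_MOON * k * (math.cos(phi0) * math.sin(phi)
                      - math.sin(phi0) * math.cos(phi) * math.cos(lam - lam0))
    return x, y
```

The far side sheet corresponds to calling the same function with `lon0_deg=180.0`.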

  11. Doppler compensation by shifting transmitted object frequency within limits

    NASA Technical Reports Server (NTRS)

    Laughlin, C. R., Jr.; Hollenbaugh, R. C.; Allen, W. K. (Inventor)

    1973-01-01

    A system and method are disclosed for position locating, deriving centralized air traffic control data, and communicating via voice and digital signals between a multiplicity of remote aircraft, including supersonic transports, and a central station. Such communication takes place through a synchronous satellite relay station. Side tone ranging patterns, as well as the digital and voice signals, are modulated on a carrier transmitted from the central station and received on all of the supersonic transports. Each aircraft communicates with the ground stations via a different frequency multiplexed spectrum. Supersonic transport position is derived by a computer at the central station and supplied to a local air traffic controller. Position is determined in response to variable phase information imposed on the side tones aboard the aircraft. Common to all of the side tone techniques is Doppler compensation for the supersonic transport velocity.
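    The titular idea of shifting the transmitted frequency within limits to cancel Doppler can be illustrated with a first-order sketch. This is a generic illustration, not the patented method; the function and parameter names are invented:

```python
C = 299_792_458.0  # speed of light, m/s

def doppler_compensated_tx(f_nominal_hz, v_radial_ms, f_min_hz, f_max_hz):
    """Pre-shift the transmitted frequency so that, after the Doppler shift
    from the radial velocity (positive = closing), the received frequency
    lands near nominal; the shifted frequency is clamped to allowed limits."""
    f_shifted = f_nominal_hz * (1.0 - v_radial_ms / C)  # first-order compensation
    return min(max(f_shifted, f_min_hz), f_max_hz)      # stay within the band
```

The clamp is the "within limits" part: when the required shift would leave the allowed band, the transmitter sits at the band edge and accepts residual Doppler.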

  12. Computational Modeling of Space Physiology

    NASA Technical Reports Server (NTRS)

    Lewandowski, Beth E.; Griffin, Devon W.

    2016-01-01

    The Digital Astronaut Project (DAP), within NASA's Human Research Program, develops and implements computational modeling for use in the mitigation of human health and performance risks associated with long duration spaceflight. Over the past decade, DAP developed models to provide insights into spaceflight-related changes to the central nervous system, cardiovascular system and the musculoskeletal system. Examples of the models and their applications include biomechanical models applied to advanced exercise device development, bone fracture risk quantification for mission planning, accident investigation, bone health standards development, and occupant protection. The International Space Station (ISS), in its role as a testing ground for long duration spaceflight, has been an important platform for obtaining human spaceflight data. DAP has used preflight, in-flight and post-flight data from short and long duration astronauts for computational model development and validation. Examples include preflight and post-flight bone mineral density data, muscle cross-sectional area, and muscle strength measurements. Results from computational modeling supplement space physiology research by informing experimental design. Using these computational models, DAP personnel can easily identify both important factors associated with a phenomenon and areas where data are lacking. This presentation will provide examples of DAP computational models, the data used in model development and validation, and applications of the model.

  13. Forest fire autonomous decision system based on fuzzy logic

    NASA Astrophysics Data System (ADS)

    Lei, Z.; Lu, Jianhua

    2010-11-01

    The proposed system integrates GPS/pseudolite/IMU and a thermal camera in order to autonomously process imagery through identification, extraction, and tracking of forest fires or hot spots. The airborne detection platform, the graph-based algorithms, and the signal processing framework are analyzed in detail; in particular, the rules of the decision function are expressed in terms of fuzzy logic, which is an appropriate method for expressing imprecise knowledge. The membership functions and weights of the rules are fixed through a supervised learning process. The perception system in this paper is based on a network of sensorial stations and central stations. The sensorial stations collect data including infrared and visual images and meteorological information. The central stations exchange data to perform distributed analysis. The experimental results show that the working procedure of the detection system is reasonable and that it can accurately output detection alarms and infrared-oscillation computations.
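    A minimal weighted fuzzy rule base in the spirit described (triangular membership functions combined through weighted rules) might look like the following. All membership parameters and rule weights are invented placeholders, not the paper's supervised-learning results:

```python
def tri(x, a, b, c):
    """Triangular membership function: rises from a to a peak of 1 at b,
    falls back to 0 at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fire_alarm_degree(temperature, smoke):
    """Toy weighted fuzzy rule base producing an alarm degree in [0, 1].
    Rule weights would normally be tuned by supervised learning."""
    hot = tri(temperature, 40, 80, 120)
    smoky = tri(smoke, 0.2, 0.6, 1.0)
    rules = [
        (0.7, min(hot, smoky)),   # IF hot AND smoky THEN alarm
        (0.3, hot),               # IF hot THEN alarm (weaker rule)
    ]
    return sum(w * act for w, act in rules) / sum(w for w, _ in rules)
```

Thresholding the returned degree yields the crisp detection alarm the system outputs.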

  14. HEP - A semaphore-synchronized multiprocessor with central control. [Heterogeneous Element Processor

    NASA Technical Reports Server (NTRS)

    Gilliland, M. C.; Smith, B. J.; Calvert, W.

    1976-01-01

    The paper describes the design concept of the Heterogeneous Element Processor (HEP), a system tailored to the special needs of scientific simulation. In order to achieve high-speed computation required by simulation, HEP features a hierarchy of processes executing in parallel on a number of processors, with synchronization being largely accomplished by hardware. A full-empty-reserve scheme of synchronization is realized by zero-one-valued hardware semaphores. A typical system has, besides the control computer and the scheduler, an algebraic module, a memory module, a first-in first-out (FIFO) module, an integrator module, and an I/O module. The architecture of the scheduler and the algebraic module is examined in detail.
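    HEP realized its full-empty synchronization in hardware with one-bit semaphores; a software emulation of a single full/empty memory cell (a sketch using a condition variable) conveys the semantics:

```python
import threading

class FullEmptyCell:
    """Software emulation of a full/empty-synchronized memory cell.
    A write blocks until the cell is empty; a read blocks until it is full
    and empties it -- the zero-one semaphore behavior described for HEP."""
    def __init__(self):
        self._cv = threading.Condition()
        self._full = False
        self._value = None

    def write(self, value):
        with self._cv:
            while self._full:          # wait for the cell to be emptied
                self._cv.wait()
            self._value, self._full = value, True
            self._cv.notify_all()

    def read(self):
        with self._cv:
            while not self._full:      # wait for the cell to be filled
                self._cv.wait()
            self._full = False
            self._cv.notify_all()
            return self._value
```

Two threads sharing such a cell form a producer-consumer pair with no explicit locks in user code, which is the convenience the hardware scheme provided.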

  15. Pharmacological and structure-activity relationship evaluation of 4-aryl-1-diphenylacetyl(thio)semicarbazides.

    PubMed

    Wujec, Monika; Kędzierska, Ewa; Kuśmierz, Edyta; Plech, Tomasz; Wróbel, Andrzej; Paneth, Agata; Orzelska, Jolanta; Fidecka, Sylwia; Paneth, Piotr

    2014-04-16

    This article describes the synthesis of six 4-aryl-(thio)semicarbazides (series a and b) linked with a diphenylacetyl moiety, along with their pharmacological evaluation on the central nervous system in mice and computational studies, including conformational analysis and electrostatic properties. All thiosemicarbazides (series b) were found to exhibit strong antinociceptive activity in the behavioural model. Among them, compound 1-diphenylacetyl-4-(4-methylphenyl)thiosemicarbazide 1b was found to be the most potent analgesic agent, whose activity is connected with the opioid system. For compounds from series a, a significant anti-serotonergic effect was observed, especially for compound 1-diphenylacetyl-4-(4-methoxyphenyl)semicarbazide 2b. The computational studies strongly support the obtained results.

  16. Intelligent Network Management and Functional Cerebellum Synthesis

    NASA Technical Reports Server (NTRS)

    Loebner, Egon E.

    1989-01-01

    Transdisciplinary modeling of the cerebellum across histology, physiology, and network engineering provides preliminary results at three organization levels: input/output links to central nervous system networks; links between the six neuron populations in the cerebellum; and computation among the neurons of the populations. Older models probably underestimated the importance and role of climbing fiber input which seems to supply write as well as read signals, not just to Purkinje but also to basket and stellate neurons. The well-known mossy fiber-granule cell-Golgi cell system should also respond to inputs originating from climbing fibers. Corticonuclear microcomplexing might be aided by stellate and basket computation and associative processing. Technological and scientific implications of the proposed cerebellum model are discussed.

  17. Land classification of south-central Iowa from computer enhanced images

    NASA Technical Reports Server (NTRS)

    Lucas, J. R.; Taranik, J. V.; Billingsley, F. C. (Principal Investigator)

    1977-01-01

    The author has identified the following significant results. Enhanced LANDSAT imagery was most useful for land classification purposes, because these images could be photographically printed at large scales such as 1:63,360. The ability to see individual picture elements was no hindrance as long as general image patterns could be discerned. Low-cost photographic processing systems for color printing have proved to be effective in the utilization of computer enhanced LANDSAT products for land classification purposes. The initial investment for this type of system was very low, ranging from $100 to $200 beyond a black and white photo lab. The technical expertise can be acquired from reading a color printing and processing manual.

  18. Fault tolerant computer control for a Maglev transportation system

    NASA Technical Reports Server (NTRS)

    Lala, Jaynarayan H.; Nagle, Gail A.; Anagnostopoulos, George

    1994-01-01

    Magnetically levitated (Maglev) vehicles operating on dedicated guideways at speeds of 500 km/hr are an emerging transportation alternative to short-haul air and high-speed rail. They have the potential to offer a service significantly more dependable than air and with less operating cost than both air and high-speed rail. Maglev transportation derives these benefits by using magnetic forces to suspend a vehicle 8 to 200 mm above the guideway. Magnetic forces are also used for propulsion and guidance. The combination of high speed, short headways, stringent ride quality requirements, and a distributed offboard propulsion system necessitates high levels of automation for the Maglev control and operation. Very high levels of safety and availability will be required for the Maglev control system. This paper describes the mission scenario, functional requirements, and dependability and performance requirements of the Maglev command, control, and communications system. A distributed hierarchical architecture consisting of vehicle on-board computers, wayside zone computers, a central computer facility, and communication links between these entities was synthesized to meet the functional and dependability requirements of the Maglev system. Two variations of the basic architecture are described: the Smart Vehicle Architecture (SVA) and the Zone Control Architecture (ZCA). Preliminary dependability modeling results are also presented.

  19. The Archive Solution for Distributed Workflow Management Agents of the CMS Experiment at LHC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuznetsov, Valentin; Fischer, Nils Leif; Guo, Yuyi

    The CMS experiment at the CERN LHC developed the Workflow Management Archive system to persistently store unstructured framework job report documents produced by distributed workflow management agents. In this paper we present its architecture, implementation, deployment, and integration with the CMS and CERN computing infrastructures, such as central HDFS and the Hadoop Spark cluster. The system leverages modern technologies such as a document-oriented database and the Hadoop ecosystem to provide the necessary flexibility to reliably process, store, and aggregate O(1M) documents on a daily basis. We describe the data transformation, the short- and long-term storage layers, and the query language, along with the aggregation pipeline developed to visualize various performance metrics to assist CMS data operators in assessing the performance of the CMS computing system.

  20. The Archive Solution for Distributed Workflow Management Agents of the CMS Experiment at LHC

    DOE PAGES

    Kuznetsov, Valentin; Fischer, Nils Leif; Guo, Yuyi

    2018-03-19

    The CMS experiment at the CERN LHC developed the Workflow Management Archive system to persistently store unstructured framework job report documents produced by distributed workflow management agents. In this paper we present its architecture, implementation, deployment, and integration with the CMS and CERN computing infrastructures, such as central HDFS and the Hadoop Spark cluster. The system leverages modern technologies such as a document-oriented database and the Hadoop ecosystem to provide the necessary flexibility to reliably process, store, and aggregate O(1M) documents on a daily basis. We describe the data transformation, the short- and long-term storage layers, and the query language, along with the aggregation pipeline developed to visualize various performance metrics to assist CMS data operators in assessing the performance of the CMS computing system.
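    The real pipeline aggregates documents with HDFS and Spark; the core idea, grouping unstructured job-report documents by day and status for operator dashboards, can be conveyed with a toy pure-Python aggregation. The document fields used here are assumed for illustration, not taken from the CMS schema:

```python
from collections import Counter

def aggregate_job_reports(docs):
    """Toy daily aggregation over job-report documents: count documents per
    (day, status) pair so operators can chart success and failure rates.
    `docs` are dicts with assumed 'timestamp' and 'status' fields."""
    counts = Counter()
    for doc in docs:
        day = doc.get("timestamp", "")[:10]   # 'YYYY-MM-DD' prefix of ISO time
        counts[(day, doc.get("status", "unknown"))] += 1
    return counts
```

At O(1M) documents per day the same group-and-count shape is what a Spark job distributes across the cluster.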

  1. Operating room integration and telehealth.

    PubMed

    Bucholz, Richard D; Laycock, Keith A; McDurmont, Leslie

    2011-01-01

    The increasing use of advanced automated and computer-controlled systems and devices in surgical procedures has resulted in problems arising from the crowding of the operating room with equipment and the incompatible control and communication standards associated with each system. This lack of compatibility between systems and centralized control means that the surgeon is frequently required to interact with multiple computer interfaces in order to obtain updates and exert control over the various devices at his disposal. To reduce this complexity and provide the surgeon with more complete and precise control of the operating room systems, a unified interface and communication network has been developed. In addition to improving efficiency, this network also allows the surgeon to grant remote access to consultants and observers at other institutions, enabling experts to participate in the procedure without having to travel to the site.

  2. Assessment of a high-resolution central scheme for the solution of the relativistic hydrodynamics equations

    NASA Astrophysics Data System (ADS)

    Lucas-Serrano, A.; Font, J. A.; Ibáñez, J. M.; Martí, J. M.

    2004-12-01

    We assess the suitability of a recent high-resolution central scheme developed by Kurganov and Tadmor for the solution of the relativistic hydrodynamic equations. The novelty of this approach lies in the absence of Riemann solvers in the solution procedure. The computations we present are performed in one and two spatial dimensions in Minkowski spacetime. Standard numerical experiments such as shock tubes and the relativistic flat-faced step test are performed. As an astrophysical application the article includes two-dimensional simulations of the propagation of relativistic jets using both Cartesian and cylindrical coordinates. The simulations reported clearly show the capability of the numerical scheme to yield satisfactory results, with an accuracy comparable to that obtained by the so-called high-resolution shock-capturing schemes based upon Riemann solvers (Godunov-type schemes), even well inside the ultrarelativistic regime. Such a central scheme can be straightforwardly applied to hyperbolic systems of conservation laws for which the characteristic structure is not explicitly known, or in cases where a numerical computation of the exact solution of the Riemann problem is prohibitively expensive. Finally, we present comparisons with results obtained using various Godunov-type schemes as well as with those obtained using other high-resolution central schemes which have recently been reported in the literature.
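    The defining feature of such central schemes, needing no Riemann solver but only the flux function, can be conveyed with the simplest member of the family: a first-order Lax-Friedrichs step for Burgers' equation. This is a far cruder scheme than the second-order semi-discrete one assessed in the paper, but it illustrates the solver-free structure:

```python
def lax_friedrichs_step(u, dt, dx):
    """One Lax-Friedrichs step for Burgers' equation u_t + (u^2/2)_x = 0
    with periodic boundaries. Only the flux f(u) = u^2/2 is needed --
    no characteristic decomposition, no Riemann solver."""
    f = [0.5 * v * v for v in u]
    n = len(u)
    return [0.5 * (u[(i + 1) % n] + u[(i - 1) % n])
            - dt / (2.0 * dx) * (f[(i + 1) % n] - f[(i - 1) % n])
            for i in range(n)]
```

Swapping in a different flux function adapts the same step to any scalar conservation law, which is why central schemes suit systems whose characteristic structure is unknown.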

  3. A 3-D Approach for Teaching and Learning about Surface Water Systems through Computational Thinking, Data Visualization and Physical Models

    NASA Astrophysics Data System (ADS)

    Caplan, B.; Morrison, A.; Moore, J. C.; Berkowitz, A. R.

    2017-12-01

    Understanding water is central to understanding environmental challenges. Scientists use `big data' and computational models to develop knowledge about the structure and function of complex systems, and to make predictions about changes in climate, weather, hydrology, and ecology. Large environmental systems-related data sets and simulation models are difficult for high school teachers and students to access and make sense of. Comp Hydro, a collaboration across four states and multiple school districts, integrates computational thinking and data-related science practices into water systems instruction to enhance development of scientific model-based reasoning, through curriculum, assessment and teacher professional development. Comp Hydro addresses the need for 1) teaching materials for using data and physical models of hydrological phenomena, 2) building teachers' and students' comfort or familiarity with data analysis and modeling, and 3) infusing the computational knowledge and practices necessary to model and visualize hydrologic processes into instruction. Comp Hydro teams in Baltimore, MD and Fort Collins, CO are integrating teaching about surface water systems into high school courses focusing on flooding (MD) and surface water reservoirs (CO). This interactive session will highlight the successes and challenges of our physical and simulation models in helping teachers and students develop proficiency with computational thinking about surface water. We also will share insights from comparing teacher-led vs. project-led development of curriculum and our simulations.

  4. Radiated Power and Impurity Concentrations in the EXTRAP-T2R Reversed-Field Pinch

    NASA Astrophysics Data System (ADS)

    Corre, Y.; Rachlew, E.; Cecconello, M.; Gravestijn, R. M.; Hedqvist, A.; Pégourié, B.; Schunke, B.; Stancalie, V.

    2005-01-01

    A numerical and experimental study of the impurity concentration and radiation in the EXTRAP-T2R device is reported. The experimental setup consists of an 8-chord bolometer system providing the plasma radiated power and a vacuum-ultraviolet spectrometer providing information on the plasma impurity content. The plasma emissivity profile as measured by the bolometric system is peaked in the plasma centre. A one-dimensional Onion Skin Collisional-Radiative model (OSCR) has been developed to compute the density and radiation distributions of the main impurities. The observed centrally peaked emissivity profile can be reproduced by OSCR simulations only if finite particle confinement time and charge-exchange processes between plasma impurities and neutral hydrogen are taken into account. The neutral hydrogen density profile is computed with a recycling code. Simulations show that recycling on a metal first wall such as that in EXTRAP-T2R (stainless steel vacuum vessel and molybdenum limiters) is compatible with a rather high neutral hydrogen density in the plasma centre. Assuming impurity concentrations of 10% for oxygen and 3% for carbon relative to the electron density, the OSCR calculation including line and continuum emission reproduces about 60% of the total radiated power with a similarly centrally peaked emissivity profile. The centrally peaked emissivity profile is due to low ionisation stages and strongly radiating species in the plasma core, mainly O4+ (Be-like) and C3+ (Li-like).

  5. A new concept of a unified parameter management, experiment control, and data analysis in fMRI: application to real-time fMRI at 3T and 7T.

    PubMed

    Hollmann, M; Mönch, T; Mulla-Osman, S; Tempelmann, C; Stadler, J; Bernarding, J

    2008-10-30

    In functional MRI (fMRI), complex experiments and applications require increasingly complex parameter handling, as the experimental setup usually consists of separate soft- and hardware systems. Advanced real-time applications such as neurofeedback-based training or brain computer interfaces (BCIs) may even require adaptive changes of the paradigms and experimental setup during the measurement. This would be facilitated by an automated management of the overall workflow and a control of the communication between all experimental components. We realized a concept based on an XML software framework called Experiment Description Language (EDL). All parameters relevant for real-time data acquisition, real-time fMRI (rtfMRI) statistical data analysis, stimulus presentation, and activation processing are stored in one central EDL file, and processed during the experiment. A usability study comparing the central EDL parameter management with traditional approaches showed improved handling of the overall experiment. Based on this concept, a feasibility study realizing a dynamic rtfMRI-based brain computer interface showed that the developed system in combination with EDL was able to reliably detect and evaluate activation patterns in real-time. The implementation of a centrally controlled communication between the subsystems involved in the rtfMRI experiments reduced potential inconsistencies, and will open new applications for adaptive BCIs.
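    The abstract describes acquisition, analysis, and stimulation parameters stored in one central XML (EDL) file. A hypothetical fragment in that spirit might look as follows; every element and attribute name here is invented for illustration, since the actual EDL schema is not given in the abstract:

```xml
<!-- Hypothetical EDL-style fragment; element names are illustrative only -->
<Experiment name="rtfMRI-neurofeedback">
  <Acquisition TR="2000" slices="32" matrix="64x64"/>
  <Analysis model="GLM" realtime="true">
    <Regressor name="task" onsets="0,30,60" duration="15"/>
  </Analysis>
  <Stimulation feedback="activation-level"/>
</Experiment>
```

Centralizing all parameters in one such file is what lets the acquisition, analysis, and stimulation subsystems stay consistent without manual re-entry.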

  6. SARANA: language, compiler and run-time system support for spatially aware and resource-aware mobile computing.

    PubMed

    Hari, Pradip; Ko, Kevin; Koukoumidis, Emmanouil; Kremer, Ulrich; Martonosi, Margaret; Ottoni, Desiree; Peh, Li-Shiuan; Zhang, Pei

    2008-10-28

    Increasingly, spatial awareness plays a central role in many distributed and mobile computing applications. Spatially aware applications rely on information about the geographical position of compute devices and their supported services in order to support novel functionality. While many spatial application drivers already exist in mobile and distributed computing, very little systems research has explored how best to program these applications, to express their spatial and temporal constraints, and to allow efficient implementations on highly dynamic real-world platforms. This paper proposes the SARANA system architecture, which includes language and run-time system support for spatially aware and resource-aware applications. SARANA allows users to express spatial regions of interest, as well as trade-offs between quality of result (QoR), latency and cost. The goal is to produce applications that use resources efficiently and that can be run on diverse resource-constrained platforms ranging from laptops to personal digital assistants and to smart phones. SARANA's run-time system manages QoR and cost trade-offs dynamically by tracking resource availability and locations, brokering usage/pricing agreements and migrating programs to nodes accordingly. A resource cost model permeates the SARANA system layers, permitting users to express their resource needs and QoR expectations in units that make sense to them. Although we are still early in the system development, initial versions have been demonstrated on a nine-node system prototype.

  7. Concentrated solar-flux measurements at the IEA-SSPS solar-central-receiver power plant, Tabernas - Almeria (Spain)

    NASA Astrophysics Data System (ADS)

    Vontobel, G.; Schelders, C.; Real, M.

    A flux analyzing system (F.A.S.) was installed at the central receiver system of the SSPS project to determine the relative flux distribution of the heliostat field and to measure the entire optical solar flux reflected from the heliostat field into the receiver cavity. The functional principles of the F.A.S. are described. The raw data and the evaluation of the measurements of the entire heliostat field are given, and an approach to determine the actual fluxes which hit the receiver tube bundle is presented. A method is described to qualify the performance of each heliostat using a computer code. The data of the measurements of the direct radiation are presented.

  8. On computing stress in polymer systems involving multi-body potentials from molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Fu, Yao; Song, Jeong-Hoon

    2014-08-01

    The Hardy stress definition has been restricted to pair potentials and embedded-atom method potentials due to the basic assumptions in the derivation of a symmetric microscopic stress tensor. The force decomposition required in the Hardy stress expression becomes obscure for multi-body potentials. In this work, we demonstrate the invariance of the Hardy stress expression for a polymer system modeled with multi-body interatomic potentials, including up to four-atom interactions, by applying central force decomposition of the atomic force. The balance of momentum has been demonstrated to be valid theoretically and tested under various numerical simulation conditions. The validity of momentum conservation justifies the extension of the Hardy stress expression to multi-body potential systems. The computed Hardy stress has been observed to converge to the virial stress of the system with increasing spatial averaging volume. This work provides a feasible and reliable linkage between the atomistic and continuum scales for multi-body potential systems.
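    For a plain pair potential, the virial stress to which the Hardy stress is shown to converge has a simple pairwise form; a sketch is below. The Hardy expression itself additionally involves a localization function and, for multi-body terms, the central force decomposition discussed in the paper:

```python
def virial_stress_pairwise(positions, pair_force, volume):
    """Potential part of the virial stress for a pair potential:
    sigma_ab = (1/V) * sum over pairs (i<j) of r_ij[a] * f_ij[b].
    A central force (f_ij parallel to r_ij) makes the tensor symmetric."""
    n = len(positions)
    sigma = [[0.0] * 3 for _ in range(3)]
    for i in range(n):
        for j in range(i + 1, n):
            rij = [positions[j][k] - positions[i][k] for k in range(3)]
            fij = pair_force(rij)   # force on atom i due to atom j
            for a in range(3):
                for b in range(3):
                    sigma[a][b] += rij[a] * fij[b] / volume
    return sigma
```

The central force decomposition is precisely what lets multi-body forces be written as such pairwise `r_ij`/`f_ij` contributions, preserving the symmetric tensor.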

  9. A report from the Space Science and Engineering Center, the University of Wisconsin-Madison, Madison, Wisconsin

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Operational forecasters have long been plagued by problems associated with the acquisition, display, and dissemination of data used in preparing forecasts. The Centralized Storm Information System (CSIS) experiment provided operational forecasters with an interactive computer system that could perform these preliminary tasks more quickly and accurately than any human could. CSIS objectives pertaining to improved severe storms forecasting and warning procedures are addressed.

  10. Wireless sensor platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joshi, Pooran C.; Killough, Stephen M.; Kuruganti, Phani Teja

    A wireless sensor platform and methods of manufacture are provided. The platform involves providing a plurality of wireless sensors, where each of the sensors is fabricated on flexible substrates using printing techniques and low temperature curing. Each of the sensors can include planar sensor elements and planar antennas defined using the printing and curing. Further, each of the sensors can include a communications system configured to encode the data from the sensors into a spread spectrum code sequence that is transmitted to one or more central computers for use in monitoring an area associated with the sensors.
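    The spread-spectrum encoding step can be illustrated generically: each data bit is spread over a pseudo-noise (PN) chip sequence and recovered by correlating against the same sequence. The PN code here is an arbitrary placeholder; the patent does not specify the platform's actual code sequence:

```python
def spread(bits, pn_code):
    """Direct-sequence spreading: XOR each data bit with every chip of the
    PN code, expanding one bit into len(pn_code) chips."""
    return [b ^ chip for b in bits for chip in pn_code]

def despread(chips, pn_code):
    """Recover bits by majority correlation against the same PN code:
    a block matching the code decodes as 0, its complement as 1."""
    n = len(pn_code)
    bits = []
    for i in range(0, len(chips), n):
        block = chips[i:i + n]
        matches = sum(1 for c, p in zip(block, pn_code) if c == p)
        bits.append(0 if matches > n // 2 else 1)
    return bits
```

Majority correlation is what gives spread-spectrum links their tolerance to flipped chips, a useful property for low-power printed sensors reporting to a central computer.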

  11. Nose and Nasal Planum Neoplasia, Reconstruction.

    PubMed

    Worley, Deanna R

    2016-07-01

    Most intranasal lesions are best treated with radiation therapy. Computed tomographic imaging with intravenous contrast is critical for treatment planning. Computed tomographic images of the nose will best assess the integrity of the cribriform plate for central nervous system invasion by a nasal tumor. Because of an owner's emotional response to an altered appearance of their dog's face, discussions need to include the entire family before proceeding with nasal planectomy or radical planectomy. With careful case selection, nasal planectomy and radical planectomy surgeries can be locally curative.

  12. REVIEW: Widespread access to predictive models in the motor system: a short review

    NASA Astrophysics Data System (ADS)

    Davidson, Paul R.; Wolpert, Daniel M.

    2005-09-01

    Recent behavioural and computational studies suggest that access to internal predictive models of arm and object dynamics is widespread in the sensorimotor system. Several systems, including those responsible for oculomotor and skeletomotor control, perceptual processing, postural control and mental imagery, are able to access predictions of the motion of the arm. A capacity to make and use predictions of object dynamics is similarly widespread. Here, we review recent studies looking at the predictive capacity of the central nervous system which reveal pervasive access to forward models of the environment.

  13. An observatory control system for the University of Hawai'i 2.2m Telescope

    NASA Astrophysics Data System (ADS)

    McKay, Luke; Erickson, Christopher; Mukensnable, Donn; Stearman, Anthony; Straight, Brad

    2016-07-01

    The University of Hawai'i 2.2m telescope at Maunakea has operated since 1970, and has had several controls upgrades to date. The newest system will operate as a distributed hierarchy of GNU/Linux central server, networked single-board computers, microcontrollers, and a modular motion control processor for the main axes. Rather than just a telescope control system, this new effort is towards a cohesive, modular, and robust whole observatory control system, with design goals of fully robotic unattended operation, high reliability, and ease of maintenance and upgrade.

  14. Using technology to develop and distribute patient education storyboards across a health system.

    PubMed

    Kisak, Anne Z; Conrad, Kathryn J

    2004-01-01

    To describe the successful implementation of a centrally designed and managed patient education storyboard project using Microsoft PowerPoint in a large multihospital system and physician-based practice settings. Journal articles, project evaluation, and clinical and educational experience. The use of posters, bulletin boards, and storyboards as educational strategies has been reported widely. Two multidisciplinary committees applied new technology to develop storyboards for patient, family, and general public education. Technology can be used to coordinate centralized development of patient education posters, improving the accuracy and content of patient education across a healthcare system while streamlining the development and review process and avoiding duplication of work effort. Storyboards are excellent sources of current, consistent, unit-based patient education; they reduce duplication of effort, enhance nursing computer competencies, market nursing expertise, and promote nurse educators.

  15. Thermal energy supply optimization for Edgewood Area, US Army Aberdeen Proving Ground: Energy supply alternatives. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCammon, T.L.; Dilks, C.L.; Savoie, M.J.

    1995-09-01

    Relatively poor performance at the aging central heating plants (CHPs) and planned changes in steam demand at Aberdeen Proving Ground (APG) Edgewood Area, Aberdeen, MD, warranted an investigation of alternatives for providing thermal energy to the installation. This study: (1) evaluated the condition of the APG CHPs and heat distribution system, (2) identified thermal energy supply problems and cost-effective technologies to maintain APG's capability to produce and distribute the needed thermal energy, and (3) recommended renovation and modernization projects for the system. Heating loads were analyzed using computer simulations, and life-cycle costs were developed for each alternative. Recommended alternatives included upgrading the existing system, installing new boilers, consolidating the central heating plants, and introducing the use of absorption chilling.

  16. Platform Architecture for Decentralized Positioning Systems.

    PubMed

    Kasmi, Zakaria; Norrdine, Abdelmoumen; Blankenbach, Jörg

    2017-04-26

    A platform architecture for positioning systems is essential for the realization of a flexible localization system which interacts with other systems and supports various positioning technologies and algorithms. Decentralized processing of a position enables pushing application-level knowledge into a mobile station and avoids communication with a central unit such as a server or a base station. In addition, calculating the position on low-cost and resource-constrained devices presents a challenge due to limited computing and storage capacity, as well as power supply. Therefore, we propose a platform architecture that enables the design of a system with reusability of components, extensibility (e.g., with other positioning technologies) and interoperability. Furthermore, the position is computed on a low-cost device such as a microcontroller, which simultaneously performs additional tasks such as data collection or preprocessing based on an operating system. The platform architecture is designed, implemented and evaluated on the basis of two positioning systems: a field-strength system and a time-of-arrival-based positioning system.

  17. Measuring the Resilience of Advanced Life Support Systems

    NASA Technical Reports Server (NTRS)

    Bell, Ann Maria; Dearden, Richard; Levri, Julie A.

    2002-01-01

    Despite the central importance of crew safety in designing and operating a life support system, the metric commonly used to evaluate alternative Advanced Life Support (ALS) technologies does not currently provide explicit techniques for measuring safety. The resilience of a system, or the system's ability to meet performance requirements and recover from component-level faults, is fundamentally a dynamic property. This paper motivates the use of computer models as a tool to understand and improve system resilience throughout the design process. Extensive simulation of a hybrid computational model of a water revitalization subsystem (WRS) with probabilistic, component-level faults provides data about off-nominal behavior of the system. The data can then be used to test alternative measures of resilience as predictors of the system's ability to recover from component-level faults. A novel approach to measuring system resilience using a Markov chain model of performance data is also developed. Results emphasize that resilience depends on the complex interaction of faults, controls, and system dynamics, rather than on simple fault probabilities.
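
    The Markov-chain measure described in this abstract can be illustrated in miniature: discretize logged performance into states, estimate a transition matrix from the observed state sequence, and read off how readily the system returns to nominal after a fault. This is a hedged sketch of the general idea only, not the paper's actual metric; the state names and the performance log below are invented for illustration.

```python
# Sketch: estimate a Markov transition matrix from a state sequence and
# derive a simple recovery probability as a resilience proxy.
# (States and log are illustrative assumptions, not the paper's data.)
from collections import Counter

STATES = ["nominal", "degraded", "failed"]

def transition_matrix(log):
    """Row-normalized counts of observed state-to-state transitions."""
    counts = Counter(zip(log, log[1:]))
    mat = {}
    for s in STATES:
        row_total = sum(counts[(s, t)] for t in STATES)
        mat[s] = {t: (counts[(s, t)] / row_total if row_total else 0.0)
                  for t in STATES}
    return mat

log = ["nominal", "nominal", "degraded", "nominal", "degraded",
       "degraded", "nominal", "nominal"]
P = transition_matrix(log)

# One simple resilience proxy: probability a degraded system recovers
# on the next step rather than staying degraded or failing.
recovery = P["degraded"]["nominal"]
```

    With this invented log, two of the three transitions out of the degraded state return to nominal, so the recovery proxy is 2/3; a real analysis would estimate such quantities from extensive fault-injection simulation data, as the abstract describes.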

  18. Platform Architecture for Decentralized Positioning Systems

    PubMed Central

    Kasmi, Zakaria; Norrdine, Abdelmoumen; Blankenbach, Jörg

    2017-01-01

    A platform architecture for positioning systems is essential for the realization of a flexible localization system which interacts with other systems and supports various positioning technologies and algorithms. Decentralized processing of a position enables pushing application-level knowledge into a mobile station and avoids communication with a central unit such as a server or a base station. In addition, calculating the position on low-cost and resource-constrained devices presents a challenge due to limited computing and storage capacity, as well as power supply. Therefore, we propose a platform architecture that enables the design of a system with reusability of components, extensibility (e.g., with other positioning technologies) and interoperability. Furthermore, the position is computed on a low-cost device such as a microcontroller, which simultaneously performs additional tasks such as data collection or preprocessing based on an operating system. The platform architecture is designed, implemented and evaluated on the basis of two positioning systems: a field-strength system and a time-of-arrival-based positioning system. PMID:28445414

  19. TFTR diagnostic control and data acquisition system

    NASA Astrophysics Data System (ADS)

    Sauthoff, N. R.; Daniels, R. E.

    1985-05-01

    General computerized control and data-handling support for TFTR diagnostics is presented within the context of the Central Instrumentation, Control and Data Acquisition (CICADA) System. Procedures, hardware, the interactive man-machine interface, event-driven task scheduling, system-wide arming and data acquisition, and a hierarchical data base of raw data and results are described. Similarities in data structures involved in control, monitoring, and data acquisition afford a simplification of the system functions, based on "groups" of devices. Emphases and optimizations appropriate for fusion diagnostic system designs are provided. An off-line data reduction computer system is under development.

  20. TFTR diagnostic control and data acquisition system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sauthoff, N.R.; Daniels, R.E.; PPL Computer Division

    1985-05-01

    General computerized control and data-handling support for TFTR diagnostics is presented within the context of the Central Instrumentation, Control and Data Acquisition (CICADA) System. Procedures, hardware, the interactive man-machine interface, event-driven task scheduling, system-wide arming and data acquisition, and a hierarchical data base of raw data and results are described. Similarities in data structures involved in control, monitoring, and data acquisition afford a simplification of the system functions, based on "groups" of devices. Emphases and optimizations appropriate for fusion diagnostic system designs are provided. An off-line data reduction computer system is under development.

  1. High-performance computing on GPUs for resistivity logging of oil and gas wells

    NASA Astrophysics Data System (ADS)

    Glinskikh, V.; Dudaev, A.; Nechaev, O.; Surodina, I.

    2017-10-01

    We developed and implemented in software an algorithm for high-performance simulation of electrical logs from oil and gas wells using high-performance heterogeneous computing. The numerical solution of the 2D forward problem is based on the finite-element method and the Cholesky decomposition for solving a system of linear algebraic equations (SLAE). Software implementations of the algorithm were made using NVIDIA CUDA technology and computing libraries, allowing us to perform the decomposition of the SLAE and find its solution on the central processing unit (CPU) and the graphics processing unit (GPU). The calculation time is analyzed as a function of the matrix size and the number of its non-zero elements. We estimated the computing speed on the CPU and GPU, including high-performance heterogeneous CPU-GPU computing. Using the developed algorithm, we simulated resistivity data in realistic models.
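
    The Cholesky step at the heart of such a solver can be sketched in miniature. The following is an illustrative, pure-Python dense Cholesky factorization with triangular solves for a small symmetric positive-definite system; the paper's implementation is sparse and GPU-accelerated via CUDA libraries, so the matrix and sizes here are assumptions for illustration only.

```python
# Minimal sketch of solving an SPD system A x = b via Cholesky
# decomposition (A = L L^T) followed by forward/backward substitution.
# A production FEM code would use sparse storage and a GPU library
# (e.g. cuSOLVER); this dense version only shows the algorithm.
import math

def cholesky(A):
    """Return lower-triangular L with A = L L^T (dense SPD input)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def solve_cholesky(A, b):
    """Solve A x = b: factor once, then two triangular solves."""
    L = cholesky(A)
    n = len(b)
    y = [0.0] * n                      # forward solve  L y = b
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n                      # backward solve L^T x = y
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(L[k][i] * x[k]
                           for k in range(i + 1, n))) / L[i][i]
    return x

A = [[4.0, 2.0, 0.0],
     [2.0, 5.0, 2.0],
     [0.0, 2.0, 5.0]]
b = [2.0, 9.0, 12.0]
x = solve_cholesky(A, b)
```

    The factorization is done once per matrix, after which each new right-hand side costs only two cheap triangular solves; that split is what makes the decomposition attractive to offload to a GPU.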

  2. Seeing the forest for the trees: Networked workstations as a parallel processing computer

    NASA Technical Reports Server (NTRS)

    Breen, J. O.; Meleedy, D. M.

    1992-01-01

    Unlike traditional 'serial' processing computers, in which one central processing unit performs one instruction at a time, parallel processing computers contain several processing units, thereby performing several instructions at once. Many of today's fastest supercomputers achieve their speed by employing thousands of processing elements working in parallel. Few institutions can afford these state-of-the-art parallel processors, but many already have the makings of a modest parallel processing system. Workstations on existing high-speed networks can be harnessed as nodes in a parallel processing environment, bringing the benefits of parallel processing to many. While such a system cannot rival the industry's latest machines, many common tasks can be accelerated greatly by spreading the processing burden and exploiting idle network resources. We study several aspects of this approach, from algorithms to select nodes to speed gains in specific tasks. With ever-increasing volumes of astronomical data, it becomes all the more necessary to utilize our computing resources fully.
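
    The abstract mentions algorithms to select nodes. One simple way such a scheduler could spread work across heterogeneous workstations is greedy list scheduling by projected finish time; this is a hedged sketch of that general heuristic, not the authors' algorithm, and the node names, speeds, and task costs are invented for illustration.

```python
# Sketch: assign tasks to networked workstations greedily, always
# placing the next (largest) task on the node projected to finish its
# queue soonest, scaled by that node's relative speed.
import heapq

def schedule(tasks, nodes):
    """tasks: list of (name, cost); nodes: dict name -> relative speed.
    Returns dict node -> list of task names, balancing finish times."""
    # heap entries: (projected finish time, node name)
    heap = [(0.0, name) for name in sorted(nodes)]
    heapq.heapify(heap)
    assignment = {name: [] for name in nodes}
    for task, cost in sorted(tasks, key=lambda t: -t[1]):  # big tasks first
        finish, node = heapq.heappop(heap)
        assignment[node].append(task)
        heapq.heappush(heap, (finish + cost / nodes[node], node))
    return assignment

nodes = {"ws1": 1.0, "ws2": 2.0}            # ws2 assumed twice as fast
tasks = [("fft", 4.0), ("fit", 2.0), ("sum", 1.0)]
plan = schedule(tasks, nodes)
```

    A real networked-workstation scheduler would also have to measure node load dynamically and tolerate nodes disappearing mid-run, but the same finish-time heuristic is a common starting point.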

  3. Conceptual Modeling in the Time of the Revolution: Part II

    NASA Astrophysics Data System (ADS)

    Mylopoulos, John

    Conceptual Modeling was a marginal research topic at the very fringes of Computer Science in the 60s and 70s, when the discipline was dominated by topics focusing on programs, systems and hardware architectures. Over the years, however, the field has moved to centre stage and has come to claim a central role in both Computer Science research and practice in diverse areas, such as Software Engineering, Databases, Information Systems, the Semantic Web, Business Process Management, Service-Oriented Computing, Multi-Agent Systems, Knowledge Management, and more. The transformation was greatly aided by the adoption of standards in modeling languages (e.g., UML) and model-based methodologies (e.g., Model-Driven Architectures) by the Object Management Group (OMG) and other standards organizations. We briefly review the history of the field over the past 40 years, focusing on the evolution of key ideas. We then note some open challenges and report on ongoing research, covering topics such as the representation of variability in conceptual models, capturing model intentions, and models of laws.

  4. Landslide and Flood Warning System Prototypes based on Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Hloupis, George; Stavrakas, Ilias; Triantis, Dimos

    2010-05-01

    Wireless sensor networks (WSNs) are one of the emerging areas that have received great attention during the last few years. This is mainly because WSNs have provided scientists with the capability of developing real-time monitoring systems equipped with sensors based on Micro-Electro-Mechanical Systems (MEMS). WSNs have great potential for many applications in environmental monitoring, since the sensor nodes they are built from can host several MEMS sensors (such as temperature, humidity, inertial, pressure and strain-gauge sensors) and transducers (for position, velocity, acceleration and vibration). The resulting devices are small and inexpensive but have limited memory and computing resources. Each sensor node contains a sensing module along with an RF transceiver. Communication is broadcast-based, since the network topology can change rapidly due to node failures [1]. Sensor nodes can transmit their measurements to central servers through gateway nodes without any processing, or they can make preliminary calculations locally in order to produce results that are then sent to central servers [2]. Based on the above characteristics, two prototypes using WSNs are presented in this paper: a landslide detection system and a flood warning system. Both systems send their data to a central processing server where the core of the processing routines resides. Transmission uses the ZigBee and IEEE 802.11b protocols, but VSAT communication can also be used. The landslide detection system uses a structured network topology. Each measuring node comprises a columnar module that is half buried in the area under investigation. Each sensing module contains a geophone, an inclinometer and a set of strain gauges. Data are transmitted to the central processing server, where possible landslide evolution is monitored. The flood detection system uses an unstructured network topology, since the failure rate of sensor nodes is expected to be higher. Each sensing module contains a custom water-level sensor (based on plastic optical fiber). Data are transmitted directly to the server, where the early-warning algorithms monitor the water-level variations in real time. Both sensor nodes use power-harvesting techniques in order to extend their battery life as much as possible. [1] Yick, J.; Mukherjee, B.; Ghosal, D. Wireless sensor network survey. Comput. Netw. 2008, 52, 2292-2330. [2] Garcia, M.; Bri, D.; Boronat, F.; Lloret, J. A new neighbor selection strategy for group-based wireless sensor networks. In The Fourth International Conference on Networking and Services (ICNS 2008), Gosier, Guadeloupe, March 16-21, 2008.

  5. Percolation Centrality: Quantifying Graph-Theoretic Impact of Nodes during Percolation in Networks

    PubMed Central

    Piraveenan, Mahendra; Prokopenko, Mikhail; Hossain, Liaquat

    2013-01-01

    A number of centrality measures are available to determine the relative importance of a node in a complex network, and betweenness is prominent among them. However, the existing centrality measures are not adequate in network percolation scenarios (such as during infection transmission in a social network of individuals, spreading of computer viruses on computer networks, or transmission of disease over a network of towns) because they do not account for the changing percolation states of individual nodes. We propose a new measure, percolation centrality, that quantifies relative impact of nodes based on their topological connectivity, as well as their percolation states. The measure can be extended to include random walk based definitions, and its computational complexity is shown to be of the same order as that of betweenness centrality. We demonstrate the usage of percolation centrality by applying it to a canonical network as well as simulated and real world scale-free and random networks. PMID:23349699
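
    The measure can be made concrete on a toy graph. The sketch below computes percolation centrality for a small unweighted, undirected graph, following the definition as we read it from the paper: PC(v) = 1/(N-2) * sum over ordered pairs s != v != r of (sigma_sr(v)/sigma_sr) * x_s/(sum_i x_i - x_v), where sigma_sr counts shortest s-r paths, sigma_sr(v) counts those passing through v, and x_i is node i's percolation state. The graph and state values are invented for illustration; a brute-force BFS is used rather than the paper's efficient algorithm.

```python
# Illustrative percolation centrality on a tiny unweighted graph.
from collections import deque
from itertools import permutations

def shortest_path_counts(graph, s):
    """BFS from s: (distance, number of shortest paths) per node."""
    dist, count = {s: 0}, {s: 1}
    q = deque([s])
    while q:
        u = q.popleft()
        for w in graph[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                count[w] = 0
                q.append(w)
            if dist[w] == dist[u] + 1:
                count[w] += count[u]
    return dist, count

def percolation_centrality(graph, x):
    n = len(graph)
    total_x = sum(x.values())
    bfs = {s: shortest_path_counts(graph, s) for s in graph}
    pc = {v: 0.0 for v in graph}
    for s, r in permutations(graph, 2):
        dist_s, cnt_s = bfs[s]
        dist_r, cnt_r = bfs[r]
        if r not in dist_s:
            continue
        for v in graph:
            if v in (s, r):
                continue
            # v lies on a shortest s-r path iff the distances add up
            if (v in dist_s and v in dist_r
                    and dist_s[v] + dist_r[v] == dist_s[r]):
                frac = cnt_s[v] * cnt_r[v] / cnt_s[r]
                pc[v] += frac * x[s] / (total_x - x[v])
    for v in pc:
        pc[v] /= (n - 2)
    return pc

# Path graph a-b-c: only b sits between the endpoints.
graph = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
x = {"a": 1.0, "b": 0.5, "c": 0.0}   # assumed percolation states
pc = percolation_centrality(graph, x)
```

    On this path graph only node b mediates any shortest path, and its score is weighted by the percolated source a; with x uniform the measure reduces (up to normalization) to betweenness centrality, matching the abstract's complexity claim.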

  6. Carbonate aquifer of the Central Roswell Basin: recharge estimation by numerical modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rehfeldt, K.R.; Gross, G.W.

    The flow of ground water in the Roswell, New Mexico, Artesian Basin, has been studied since the early 1900s and varied ideas have been proposed to explain different aspects of the ground water flow system. The purpose of the present study was to delineate the spatial distribution and source, or sources, of recharge to the carbonate aquifer of the central Roswell Basin. A computer model was used to simulate ground water flow in the carbonate aquifer, beneath and west of Roswell and in the Glorieta Sandstone and Yeso Formation west of the carbonate aquifer.

  7. Basic ICT adoption and use by general practitioners: an analysis of primary care systems in 31 European countries.

    PubMed

    De Rosis, Sabina; Seghieri, Chiara

    2015-08-22

    There is general consensus that appropriate development and use of information and communication technologies (ICT) are crucial in the delivery of effective primary care (PC). Several countries are defining policies to support and promote a structural change of the health care system through the introduction of ICT. This study analyses the state of development of basic ICT in the PC systems of 31 European countries, with the aim of describing the extent of, and main purposes for, computer use by General Practitioners (GPs) across Europe. Additionally, trends over time have been analysed. Descriptive statistical analysis was performed on data from the QUALICOPC (Quality and Costs of Primary Care in Europe) survey to describe the geographic differences in the general use of computers, and in specific computerized clinical functions for different health-related purposes such as prescribing, medication checking, generating health records and searching for medical information on the Internet. While all the countries have achieved near-universal adoption of a computer in their primary care practices, with only a few countries near or under the 90 % mark, the computerisation of primary care clinical functions presents wide variability of adoption within and among countries and, in several cases (such as in southern and central-eastern Europe), considerable room for improvement. At the European level, more could be done to support southern and central-eastern Europe in closing the gap in the adoption and use of ICT in PC. In particular, more attention seems to be needed on the current usage of computers in PC, by focusing policies and actions on improving the appropriate usages that can impact the quality and costs of PC and can facilitate an interconnected health care system. However, policies and investments seem necessary but not sufficient to achieve these goals. Organizational, behavioural and also networking aspects should be taken into consideration.

  8. Decoherence in adiabatic quantum computation

    NASA Astrophysics Data System (ADS)

    Albash, Tameem; Lidar, Daniel A.

    2015-06-01

    Recent experiments with increasingly larger numbers of qubits have sparked renewed interest in adiabatic quantum computation, and in particular quantum annealing. A central question that is repeatedly asked is whether quantum features of the evolution can survive over the long time scales used for quantum annealing relative to standard measures of the decoherence time. We reconsider the role of decoherence in adiabatic quantum computation and quantum annealing using the adiabatic quantum master-equation formalism. We restrict ourselves to the weak-coupling and singular-coupling limits, which correspond to decoherence in the energy eigenbasis and in the computational basis, respectively. We demonstrate that decoherence in the instantaneous energy eigenbasis does not necessarily detrimentally affect adiabatic quantum computation, and in particular that a short single-qubit T2 time need not imply adverse consequences for the success of the quantum adiabatic algorithm. We further demonstrate that boundary cancellation methods, designed to improve the fidelity of adiabatic quantum computing in the closed-system setting, remain beneficial in the open-system setting. To address the high computational cost of master-equation simulations, we also demonstrate that a quantum Monte Carlo algorithm that explicitly accounts for a thermal bosonic bath can be used to interpolate between classical and quantum annealing. Our study highlights and clarifies the significantly different role played by decoherence in the adiabatic and circuit models of quantum computing.

  9. Emotor control: computations underlying bodily resource allocation, emotions, and confidence.

    PubMed

    Kepecs, Adam; Mensh, Brett D

    2015-12-01

    Emotional processes are central to behavior, yet their deeply subjective nature has been a challenge for neuroscientific study as well as for psychiatric diagnosis. Here we explore the relationships between subjective feelings and their underlying brain circuits from a computational perspective. We apply recent insights from systems neuroscience-approaching subjective behavior as the result of mental computations instantiated in the brain-to the study of emotions. We develop the hypothesis that emotions are the product of neural computations whose motor role is to reallocate bodily resources mostly gated by smooth muscles. This "emotor" control system is analogous to the more familiar motor control computations that coordinate skeletal muscle movements. To illustrate this framework, we review recent research on "confidence." Although familiar as a feeling, confidence is also an objective statistical quantity: an estimate of the probability that a hypothesis is correct. This model-based approach helped reveal the neural basis of decision confidence in mammals and provides a bridge to the subjective feeling of confidence in humans. These results have important implications for psychiatry, since disorders of confidence computations appear to contribute to a number of psychopathologies. More broadly, this computational approach to emotions resonates with the emerging view that psychiatric nosology may be best parameterized in terms of disorders of the cognitive computations underlying complex behavior.

  10. Optimizing physician access to surgical intensive care unit laboratory information through mobile computing.

    PubMed

    Strain, J J; Felciano, R M; Seiver, A; Acuff, R; Fagan, L

    1996-01-01

    Approximately 30 minutes of computer access time are required by surgical residents at Stanford University Medical Center (SUMC) to examine the lab values of all patients on a surgical intensive care unit (ICU) service, a task that must be performed several times a day. To reduce the time accessing this information and simultaneously increase the readability and currency of the data, we have created a mobile, pen-based user interface and software system that delivers lab results to surgeons in the ICU. The ScroungeMaster system, loaded on a portable tablet computer, retrieves lab results for a subset of patients from the central laboratory computer and stores them in a local database cache. The cache can be updated on command; this update takes approximately 2.7 minutes for all ICU patients being followed by the surgeon, and can be performed as a background task while the user continues to access selected lab results. The user interface presents lab results according to physiologic system. Which labs are displayed first is governed by a layout selection algorithm based on previous accesses to the patient's lab information, physician preferences, and the nature of the patient's medical condition. Initial evaluation of the system has shown that physicians prefer the ScroungeMaster interface to that of existing systems at SUMC and are satisfied with the system's performance. We discuss the evolution of ScroungeMaster and make observations on changes to physician work flow with the presence of mobile, pen-based computing in the ICU.

  11. Distributed Prognostics based on Structural Model Decomposition

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, I.

    2014-01-01

    Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based models are constructed that describe the operation of a system and how it fails. Such approaches consist of an estimation phase, in which the health state of the system is first identified, and a prediction phase, in which the health state is projected forward in time to determine the end of life. Centralized solutions to these problems are often computationally expensive, do not scale well as the size of the system grows, and introduce a single point of failure. In this paper, we propose a novel distributed model-based prognostics scheme that formally describes how to decompose both the estimation and prediction problems into independent local subproblems whose solutions may be easily composed into a global solution. The decomposition of the prognostics problem is achieved through structural decomposition of the underlying models. The decomposition algorithm creates from the global system model a set of local submodels suitable for prognostics. Independent local estimation and prediction problems are formed based on these local submodels, resulting in a scalable distributed prognostics approach that allows the local subproblems to be solved in parallel, thus offering increases in computational efficiency. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the distributed approach, compare the performance with a centralized approach, and establish its scalability.

  12. Bus-Programmable Slave Card

    NASA Technical Reports Server (NTRS)

    Hall, William A.

    1990-01-01

    Slave microprocessors in multimicroprocessor computing system contains modified circuit cards programmed via bus connecting master processor with slave microprocessors. Enables interactive, microprocessor-based, single-loop control. Confers ability to load and run program from master/slave bus, without need for microprocessor development station. Tristate buffers latch all data and information on status. Slave central processing unit never connected directly to bus.

  13. Study of the Kinetics of an S[subscript N]1 Reaction by Conductivity Measurement

    ERIC Educational Resources Information Center

    Marzluff, Elaine M.; Crawford, Mary A.; Reynolds, Helen

    2011-01-01

    Substitution reactions, a central part of organic chemistry, provide a model system in physical chemistry to study reaction rates and mechanisms. Here, the use of inexpensive and readily available commercial conductivity probes coupled with computer data acquisition for the study of the temperature and solvent dependence of the solvolysis of…

  14. Keeping Student Performance Central: The New York Assessment Collection. Studies on Exhibitions.

    ERIC Educational Resources Information Center

    Allen, David; McDonald, Joseph

    This report describes a computer tool used by the state of New York to assess student performance in elementary and secondary grades. Based on the premise that every assessment is a system of interacting elements, the tool examines students on six dimensions: vision, prompt, coaching context, performance, standards, and reflection. Vision, which…

  15. A Usability Study of Users' Perceptions toward a Multimedia Computer-Assisted Learning Tool for Neuroanatomy

    ERIC Educational Resources Information Center

    Gould, Douglas J.; Terrell, Mark A.; Fleming, Jo

    2008-01-01

    This usability study evaluated users' perceptions of a multimedia prototype for a new e-learning tool: Anatomy of the Central Nervous System: A Multimedia Course. Usability testing is a collection of formative evaluation methods that inform the developmental design of e-learning tools to maximize user acceptance, satisfaction, and adoption.…

  16. Creating Engaging Online Learning Material with the JSAV JavaScript Algorithm Visualization Library

    ERIC Educational Resources Information Center

    Karavirta, Ville; Shaffer, Clifford A.

    2016-01-01

    Data Structures and Algorithms are a central part of Computer Science. Due to their abstract and dynamic nature, they are a difficult topic to learn for many students. To alleviate these learning difficulties, instructors have turned to algorithm visualizations (AV) and AV systems. Research has shown that especially engaging AVs can have an impact…

  17. VIEWDATA--Interactive Television, with Particular Emphasis on the British Post Office's PRESTEL.

    ERIC Educational Resources Information Center

    Rimmer, Tony

    An overview of "Viewdata," an interactive medium that connects the home or business television set with a central computer database through telephone lines, is presented in this paper. It notes how Viewdata differs from broadcast Teletext systems and reviews the technical aspects of the two media to clarify terminology used in the…

  18. Earth-atmosphere system and surface reflectivities in arid regions from LANDSAT multispectral scanner measurements

    NASA Technical Reports Server (NTRS)

    Otterman, J.; Fraser, R. S.

    1976-01-01

    Programs for computing atmospheric transmission and the scattering of solar radiation were used to compute the ratios of the Earth-atmosphere system (space) directional reflectivities in the vertical direction to the surface reflectivity, for the four bands of the LANDSAT multispectral scanner (MSS). These ratios are presented as graphs for two water vapor levels, as a function of the surface reflectivity, for various sun elevation angles. Space directional reflectivities in the vertical direction are reported for selected arid regions in Asia, Africa and Central America from the spectral radiance levels measured by the LANDSAT MSS. From these space reflectivities, surface vertical reflectivities were computed by applying the pertinent graphs. These surface reflectivities were used to estimate the surface albedo for the entire solar spectrum. The estimated albedos are in the range 0.34-0.52, higher than the values reported by most previous researchers from space measurements, but consistent with laboratory measurements.

  19. Probabilistic Structural Analysis Theory Development

    NASA Technical Reports Server (NTRS)

    Burnside, O. H.

    1985-01-01

    The objective of the Probabilistic Structural Analysis Methods (PSAM) project is to develop analysis techniques and computer programs for predicting the probabilistic response of critical structural components for current and future space propulsion systems. This technology will play a central role in establishing system performance and durability. The first year's technical activity is concentrating on probabilistic finite element formulation strategy and code development. Work is also in progress to survey critical materials and Space Shuttle main engine components. The probabilistic finite element computer program NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) is being developed. The final probabilistic code will have, in the general case, the capability of performing nonlinear dynamic analysis of stochastic structures. It is the goal of the approximate methods effort to increase problem-solving efficiency relative to finite element methods by using energy methods to generate trial solutions which satisfy the structural boundary conditions. These approximate methods will be less computer-intensive relative to the finite element approach.

  20. Mathematical and Computational Challenges in Population Biology and Ecosystems Science

    NASA Technical Reports Server (NTRS)

    Levin, Simon A.; Grenfell, Bryan; Hastings, Alan; Perelson, Alan S.

    1997-01-01

    Mathematical and computational approaches provide powerful tools in the study of problems in population biology and ecosystems science. The subject has a rich history intertwined with the development of statistics and dynamical systems theory, but recent analytical advances, coupled with the enhanced potential of high-speed computation, have opened up new vistas and presented new challenges. Key challenges involve ways to deal with the collective dynamics of heterogeneous ensembles of individuals, and to scale from small spatial regions to large ones. The central issues-understanding how detail at one scale makes its signature felt at other scales, and how to relate phenomena across scales-cut across scientific disciplines and go to the heart of algorithmic development of approaches to high-speed computation. Examples are given from ecology, genetics, epidemiology, and immunology.

  1. How to maintain blood supply during computer network breakdown: a manual backup system.

    PubMed

    Zeiler, T; Slonka, J; Bürgi, H R; Kretschmer, V

    2000-12-01

    Electronic data management systems using computer network systems and client/server architecture are increasingly used in laboratories and transfusion services. Severe problems arise if there is no network access to the database server and critical functions are not available. We describe a manual backup system (MBS) developed to maintain the delivery of blood products to patients in a hospital transfusion service in case of a computer network breakdown. All data are kept on a central SQL database connected to peripheral workstations in a local area network (LAN). Request entry from wards is performed via machine-readable request forms containing self-adhesive specimen labels with barcodes for test tubes. Data entry occurs on-line by bidirectional automated systems or off-line manually. One of the workstations in the laboratory contains a second SQL database, which is frequently and incrementally updated. This workstation is run as a stand-alone, read-only database if the central SQL database is not available. In case of a network breakdown, the time-graded MBS is launched. Patient data, the requesting ward and the ordered tests/requests are photocopied through a template from the request forms onto special MBS worksheets, which serve as the laboratory journal for manual processing and result reporting (a copy is left in the laboratory). As soon as the network is running again, the data from the off-line period are entered into the primary SQL server. The MBS was successfully used on several occasions. The documentation of a 90-min breakdown period is presented in detail. Additional work resulted from the copying and the belated manual data entry after restoration of the system. There was no delay in the issue of blood products or in result reporting. The backup system described has proven to be simple, quick and safe for maintaining urgent blood supply and the distribution of laboratory results in case of an unexpected network breakdown.

  2. Monitoring techniques and alarm procedures for CMS services and sites in WLCG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molina-Perez, J.; Bonacorsi, D.; Gutsche, O.

    2012-01-01

The CMS offline computing system is composed of roughly 80 sites (including most experienced T3s) and a number of central services to distribute, process and analyze data worldwide. A high level of stability and reliability is required from the underlying infrastructure and services, partially covered by local or automated monitoring and alarming systems such as Lemon and SLS: the former collects metrics from sensors installed on computing nodes and triggers alarms when values are out of range, while the latter measures the quality of service and warns managers when service is affected. CMS has established computing shift procedures with personnel operating worldwide from remote Computing Centers, under the supervision of the Computing Run Coordinator at CERN. This dedicated 24/7 computing shift personnel contributes to detecting and reacting in a timely manner to any unexpected error, and hence ensures that CMS workflows are carried out efficiently and in a sustained manner. Synergy among all the involved actors is exploited to ensure the 24/7 monitoring, alarming and troubleshooting of the CMS computing sites and services. We review the deployment of the monitoring and alarming procedures, and report on the experience gained throughout the first two years of LHC operation. We describe the efficiency of the communication tools employed, the coherent monitoring framework, the proactive alarming systems and the proficient troubleshooting procedures that helped the CMS Computing facilities and infrastructure to operate at high reliability levels.

  3. Planning report for the Edwards-Trinity Regional Aquifer-System analysis in central Texas, southeast Oklahoma, and southwest Arkansas

    USGS Publications Warehouse

    Bush, Peter W.

    1986-01-01

    The Edwards-Trinity regional aquifer-system analysis project, begun in October 1985 and scheduled to be completed by October 1991, is one of a series of similar projects being conducted nationwide. The project is intended to define the hydrogeologic framework, and to describe the geochemistry and groundwater flow of the aquifer system in order to provide a better understanding of the system's long-term water-yielding potential. A multidisciplinary approach will be used in which computer-based digital simulation of flow in the system will be the principal method of hydrogeologic investigation.

  4. Radio System for Locating Emergency Workers

    NASA Technical Reports Server (NTRS)

    Larson, William; Medelius, Pedro; Starr, Stan; Bedette, Guy; Taylor, John; Moerk, Steve

    2003-01-01

    A system based on low-power radio transponders and associated analog and digital electronic circuitry has been developed for locating firefighters and other emergency workers deployed in a building or other structure. The system has obvious potential for saving lives and reducing the risk of injuries. The system includes (1) a central station equipped with a computer and a transceiver; (2) active radio-frequency (RF) identification tags, each placed in a different room or region of the structure; and (3) transponder units worn by the emergency workers. The RF identification tags can be installed in a new building as built-in components of standard fire-detection devices or ground-fault electrical outlets or can be attached to such devices in a previously constructed building, without need for rewiring the building. Each RF identification tag contains information that uniquely identifies it. When each tag is installed, information on its location and identity are reported to, and stored at, the central station. In an emergency, if a building has not been prewired with RF identification tags, leading emergency workers could drop sequentially numbered portable tags in the rooms of the building, reporting the tag numbers and locations by radio to the central station as they proceed.

  5. Scientific Computing Strategic Plan for the Idaho National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whiting, Eric Todd

Scientific computing is a critical foundation of modern science. Without innovations in the field of computational science, the essential missions of the Department of Energy (DOE) would go unrealized. Taking a leadership role in such innovations is Idaho National Laboratory’s (INL’s) challenge and charge, and is central to INL’s ongoing success. Computing is an essential part of INL’s future. DOE science and technology missions rely firmly on computing capabilities in various forms. Modeling and simulation, fueled by innovations in computational science and validated through experiment, are a critical foundation of science and engineering. Big data analytics from an increasing number of widely varied sources is opening new windows of insight and discovery. Computing is a critical tool in education, science, engineering, and experiments. Advanced computing capabilities in the form of people, tools, computers, and facilities will position INL competitively to deliver results and solutions on important national science and engineering challenges. A computing strategy must include much more than simply computers. The foundational enabling component of computing at many DOE national laboratories is the combination of a showcase data center facility coupled with a very capable supercomputer. In addition, network connectivity, disk storage systems, and visualization hardware are critical and generally tightly coupled to the computer system and co-located in the same facility. The existence of these resources in a single data center facility opens the doors to many opportunities that would not otherwise be possible.

  6. Method and apparatus for fault tolerance

    NASA Technical Reports Server (NTRS)

    Masson, Gerald M. (Inventor); Sullivan, Gregory F. (Inventor)

    1993-01-01

    A method and apparatus for achieving fault tolerance in a computer system having at least a first central processing unit and a second central processing unit. The method comprises the steps of first executing a first algorithm in the first central processing unit on input which produces a first output as well as a certification trail. Next, executing a second algorithm in the second central processing unit on the input and on at least a portion of the certification trail which produces a second output. The second algorithm has a faster execution time than the first algorithm for a given input. Then, comparing the first and second outputs such that an error result is produced if the first and second outputs are not the same. The step of executing a first algorithm and the step of executing a second algorithm preferably takes place over essentially the same time period.
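The certification-trail scheme described above can be made concrete with sorting, a task often used to illustrate this technique: the first processor sorts and emits the permutation as a trail, and the second processor uses the trail to reproduce and check the result in linear time. The sorting task and all names here are illustrative assumptions, not the patent's specific embodiment.

```python
# Sketch of certification-trail fault tolerance using sorting as the task.
# CPU 1 runs the full algorithm and emits a trail; CPU 2 uses the trail to
# recompute the output faster; disagreement signals a fault.

def first_algorithm(data):
    """Sort the input and emit a certification trail (the permutation)."""
    trail = sorted(range(len(data)), key=lambda i: data[i])
    output = [data[i] for i in trail]
    return output, trail

def second_algorithm(data, trail):
    """Rebuild the output from the trail in linear time, checking validity."""
    if sorted(trail) != list(range(len(data))):
        raise ValueError("trail is not a permutation")
    output = [data[i] for i in trail]
    # Linear-time check that the trail really certifies a sorted order.
    if any(output[i] > output[i + 1] for i in range(len(output) - 1)):
        raise ValueError("trail does not certify a sorted order")
    return output

def run_with_fault_tolerance(data):
    out1, trail = first_algorithm(data)   # executes on the first CPU
    out2 = second_algorithm(data, trail)  # executes faster on the second CPU
    if out1 != out2:
        raise RuntimeError("outputs disagree: fault detected")
    return out1
```

The key property is that verifying with the trail is cheaper than the original computation, so the redundancy costs less than full duplication.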

  7. TDRSS data handling and management system study. Ground station systems for data handling and relay satellite control

    NASA Technical Reports Server (NTRS)

    1973-01-01

Results of a two-phase study of the Data Handling and Management System (DHMS) are presented. An original baseline DHMS is described. Its estimated costs are presented in detail. The DHMS automates the Tracking and Data Relay Satellite System (TDRSS) ground station's functions and handles both the forward and return link user and relay satellite data passing through the station. Direction of the DHMS is effected via a TDRSS Operations Control Central (OCC) that is remotely located. A composite ground station system, a modified DHMS (MDHMS), was conceptually developed. The MDHMS performs both the DHMS and OCC functions. Configurations and costs are presented for systems using minicomputers and midicomputers. It is concluded that a MDHMS should be configured with a combination of the two computer types. The midicomputers provide the system's organizational direction and computational power, and the minicomputers (or interface processors) perform repetitive data handling functions that relieve the midicomputers of these burdensome tasks.

  8. The curse of planning: dissecting multiple reinforcement-learning systems by taxing the central executive.

    PubMed

    Otto, A Ross; Gershman, Samuel J; Markman, Arthur B; Daw, Nathaniel D

    2013-05-01

    A number of accounts of human and animal behavior posit the operation of parallel and competing valuation systems in the control of choice behavior. In these accounts, a flexible but computationally expensive model-based reinforcement-learning system has been contrasted with a less flexible but more efficient model-free reinforcement-learning system. The factors governing which system controls behavior-and under what circumstances-are still unclear. Following the hypothesis that model-based reinforcement learning requires cognitive resources, we demonstrated that having human decision makers perform a demanding secondary task engenders increased reliance on a model-free reinforcement-learning strategy. Further, we showed that, across trials, people negotiate the trade-off between the two systems dynamically as a function of concurrent executive-function demands, and people's choice latencies reflect the computational expenses of the strategy they employ. These results demonstrate that competition between multiple learning systems can be controlled on a trial-by-trial basis by modulating the availability of cognitive resources.
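The arbitration between the two valuation systems can be sketched in a simple tabular setting: a cached model-free value competes with a one-step model-based lookahead, mixed by a weight standing in for available executive resources. All names, the dynamics, and the mixing rule are illustrative assumptions, not the authors' actual two-step task.

```python
# Minimal sketch of two-system arbitration: Q caches model-free values;
# T (transition probabilities) and R (state rewards) support a one-step
# model-based lookahead; w reflects available cognitive resources.

def model_free_value(Q, state, action):
    """Cached value: cheap to retrieve, slow to adapt."""
    return Q[(state, action)]

def model_based_value(T, R, state, action):
    """One-step planning: expected reward under the learned model."""
    return sum(p * R[s2] for s2, p in T[(state, action)].items())

def hybrid_value(Q, T, R, state, action, w):
    """w near 1 -> model-based control; w near 0 -> model-free control."""
    return (w * model_based_value(T, R, state, action)
            + (1 - w) * model_free_value(Q, state, action))
```

A demanding secondary task would correspond to lowering w, shifting control toward the cheaper model-free estimate.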

  9. Notes from a Centralized Office: A Renewed Interest in ERP Has School Administrators Reconsidering the Vast Business Management Systems They Abandoned a Few Short Years Ago

    ERIC Educational Resources Information Center

    Houston, Melissa; Goggins, Patrick

    2008-01-01

    It used to be much easier to get paid by the San Diego Unified School District (SDUSD). A lot easier, that is, if you didn't work there. Saddled with an antiquated computer system and manual, repetitive data entry of time cards, officials at California's second-largest school district discovered the payroll department was mistakenly issuing $1…

  10. Vehicle Integrated Prognostic Reasoner (VIPR) Final Report

    NASA Technical Reports Server (NTRS)

    Bharadwaj, Raj; Mylaraswamy, Dinkar; Cornhill, Dennis; Biswas, Gautam; Koutsoukos, Xenofon; Mack, Daniel

    2013-01-01

    A systems view is necessary to detect, diagnose, predict, and mitigate adverse events during the flight of an aircraft. While most aircraft subsystems look for simple threshold exceedances and report them to a central maintenance computer, the vehicle integrated prognostic reasoner (VIPR) proactively generates evidence and takes an active role in aircraft-level health assessment. Establishing the technical feasibility and a design trade-space for this next-generation vehicle-level reasoning system (VLRS) is the focus of our work.

  11. The Computerized Laboratory Notebook concept for genetic toxicology experimentation and testing.

    PubMed

    Strauss, G H; Stanford, W L; Berkowitz, S J

    1989-03-01

    We describe a microcomputer system utilizing the Computerized Laboratory Notebook (CLN) concept developed in our laboratory for the purpose of automating the Battery of Leukocyte Tests (BLT). The BLT was designed to evaluate blood specimens for toxic, immunotoxic, and genotoxic effects after in vivo exposure to putative mutagens. A system was developed with the advantages of low cost, limited spatial requirements, ease of use for personnel inexperienced with computers, and applicability to specific testing yet flexibility for experimentation. This system eliminates cumbersome record keeping and repetitive analysis inherent in genetic toxicology bioassays. Statistical analysis of the vast quantity of data produced by the BLT would not be feasible without a central database. Our central database is maintained by an integrated package which we have adapted to develop the CLN. The clonal assay of lymphocyte mutagenesis (CALM) section of the CLN is demonstrated. PC-Slaves expand the microcomputer to multiple workstations so that our computerized notebook can be used next to a hood while other work is done in an office and instrument room simultaneously. Communication with peripheral instruments is an indispensable part of many laboratory operations, and we present a representative program, written to acquire and analyze CALM data, for communicating with both a liquid scintillation counter and an ELISA plate reader. In conclusion we discuss how our computer system could easily be adapted to the needs of other laboratories.

  12. Holonic Rationale and Bio-inspiration on Design of Complex Emergent and Evolvable Systems

    NASA Astrophysics Data System (ADS)

    Leitao, Paulo

Traditional centralized and rigid control structures are becoming too inflexible to meet the requirements of reconfigurability, responsiveness and robustness imposed by customer demands in the current global economy. The Holonic Manufacturing Systems (HMS) paradigm, which has been pointed out as a suitable solution to these requirements, translates concepts inherited from social organizations and biology to the manufacturing world. It offers an alternative way of designing adaptive systems in which traditional centralized control is replaced by decentralization over distributed and autonomous entities organized in hierarchical structures formed by intermediate stable forms. In spite of its enormous potential, methods for the self-adaptation and self-organization of complex systems are still missing. This paper discusses how insights from biology, in connection with new fields of computer science, can be useful to enhance holonic design, aiming to achieve more self-adaptive and evolvable systems. Special attention is devoted to the discussion of emergent behavior and self-organization concepts, and the way they can be combined with the holonic rationale.

  13. Information management system breadboard data acquisition and control system.

    NASA Technical Reports Server (NTRS)

    Mallary, W. E.

    1972-01-01

    Description of a breadboard configuration of an advanced information management system based on requirements for high data rates and local and centralized computation for subsystems and experiments to be housed on a space station. The system is to contain a 10-megabit-per-second digital data bus, remote terminals with preprocessor capabilities, and a central multiprocessor. A concept definition is presented for the data acquisition and control system breadboard, and a detailed account is given of the operation of the bus control unit, the bus itself, and the remote acquisition and control unit. The data bus control unit is capable of operating under control of both its own test panel and the test processor. In either mode it is capable of both single- and multiple-message operation in that it can accept a block of data requests or update commands for transmission to the remote acquisition and control unit, which in turn is capable of three levels of data-handling complexity.

  14. Decentralized System Identification Using Stochastic Subspace Identification for Wireless Sensor Networks

    PubMed Central

    Cho, Soojin; Park, Jong-Woong; Sim, Sung-Han

    2015-01-01

    Wireless sensor networks (WSNs) facilitate a new paradigm to structural identification and monitoring for civil infrastructure. Conventional structural monitoring systems based on wired sensors and centralized data acquisition systems are costly for installation as well as maintenance. WSNs have emerged as a technology that can overcome such difficulties, making deployment of a dense array of sensors on large civil structures both feasible and economical. However, as opposed to wired sensor networks in which centralized data acquisition and processing is common practice, WSNs require decentralized computing algorithms to reduce data transmission due to the limitation associated with wireless communication. In this paper, the stochastic subspace identification (SSI) technique is selected for system identification, and SSI-based decentralized system identification (SDSI) is proposed to be implemented in a WSN composed of Imote2 wireless sensors that measure acceleration. The SDSI is tightly scheduled in the hierarchical WSN, and its performance is experimentally verified in a laboratory test using a 5-story shear building model. PMID:25856325

  15. An automated system for the study of ionospheric spatial structures

    NASA Astrophysics Data System (ADS)

    Belinskaya, I. V.; Boitman, O. N.; Vugmeister, B. O.; Vyborova, V. M.; Zakharov, V. N.; Laptev, V. A.; Mamchenko, M. S.; Potemkin, A. A.; Radionov, V. V.

The system is designed for the study of the vertical distribution of electron density and the parameters of medium-scale ionospheric irregularities over the sounding site as well as the reconstruction of the spatial distribution of electron density within the range of up to 300 km from the sounding location. The system comprises an active central station as well as passive companion stations. The central station is equipped with the digital ionosonde ``Basis'', the measuring-and-computing complex IVK-2, and the receiver-recorder PRK-3M. The companion stations are equipped with receivers-recorders PRK-3. The automated complex software system includes 14 subsystems. Data transfer between them is effected using magnetic disk data sets. The system is operated in both ionogram mode and Doppler shift and angle-of-arrival mode. Using data obtained in these two modes, the reconstruction of the spatial distribution of electron density in the region is carried out. Reconstruction is checked for accuracy using data from companion stations.

  16. Improving the security of international ISO container traffic by centralizing the archival of inspection results

    NASA Astrophysics Data System (ADS)

    Chalmers, Alex

    2004-09-01

To increase the security and throughput of ISO container traffic through international terminals, more technology must be applied to the problem. A transnational central archive of inspection records is discussed that can be accessed by national agencies as ISO containers approach their borders. The intent is to improve the throughput and security of the cargo inspection process. A review of currently available digital media archiving technologies and their possible application to the tracking of international ISO container shipments is presented. Specific image formats employed by current x-ray inspection systems are discussed. Sample x-ray data from systems in use today that could be entered into such a system are shown. Data from other inspection technologies are shown to be easily integrated, along with the creation of database records suitable for interfacing with other computer systems. Overall system performance requirements are discussed in terms of security, response time and capacity. Suggestions for pilot projects based on existing border inspection processes are also made.

  17. Identifying a system of predominant negative symptoms: Network analysis of three randomized clinical trials.

    PubMed

    Levine, Stephen Z; Leucht, Stefan

    2016-12-01

Reasons for the recent mixed success of research into negative symptoms may be informed by conceptualizing negative symptoms as a system that is identifiable from network analysis. We aimed to identify: (I) negative symptom systems; (II) central negative symptoms within each system; and (III) differences between the systems, based on network analysis of negative symptoms for baseline, endpoint and change. Patients with chronic schizophrenia and predominant negative symptoms participated in three clinical trials that compared placebo and amisulpride over 60 days (n=487). Network analyses were computed from the Scale for the Assessment of Negative Symptoms (SANS) scores for baseline and endpoint severity, and estimated change based on mixed models. Symptoms central to each network were identified. The networks were contrasted for connectivity with permutation tests. Network analysis showed that the baseline and endpoint symptom severity systems formed symptom groups of Affect, Poor responsiveness, Lack of interest, and Apathy-inattentiveness. The baseline and endpoint networks did not significantly differ in terms of connectivity, but both significantly (P<0.05) differed from the change network. In the change network, the apathy-inattentiveness symptom group split into three other groups. The most central symptoms were Decreased Spontaneous Movements at baseline and endpoint, and Poverty of Speech for estimated change. Results provide preliminary evidence for: (I) a replicable negative symptom severity system; and (II) symptoms with high centrality (e.g., Decreased Spontaneous Movements) that may be future treatment targets, following replication to ensure the current results generalize to other samples. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Application of the Linux cluster for exhaustive window haplotype analysis using the FBAT and Unphased programs.

    PubMed

    Mishima, Hiroyuki; Lidral, Andrew C; Ni, Jun

    2008-05-28

Genetic association studies have been used to map disease-causing genes. A newly introduced statistical method, called exhaustive haplotype association study, analyzes genetic information consisting of different numbers and combinations of DNA sequence variations along a chromosome. Such studies involve a large number of statistical calculations and consequently demand high computing power. It is possible to develop parallel algorithms and codes to perform the calculations on a high performance computing (HPC) system. However, most existing commonly-used statistical packages for genetic studies are non-parallel versions. Alternatively, one may use the cutting-edge technology of grid computing and its packages to run non-parallel genetic statistical packages on a centralized HPC system or distributed computing systems. In this paper, we report the utilization of a queuing scheduler built on the Grid Engine and run on a Rocks Linux cluster for our genetic statistical studies. Analysis of both consecutive and combinational window haplotypes was conducted with the FBAT (Laird et al., 2000) and Unphased (Dudbridge, 2003) programs. The dataset consisted of 26 loci from 277 extended families (1484 persons). Using the Rocks Linux cluster with 22 compute nodes, FBAT jobs ran about 14.4-15.9 times faster, while Unphased jobs ran 1.1-18.6 times faster, compared to the accumulated computation duration. Execution of exhaustive haplotype analysis using non-parallel software packages on a Linux-based system is an effective and efficient approach in terms of cost and performance.

  19. Application of the Linux cluster for exhaustive window haplotype analysis using the FBAT and Unphased programs

    PubMed Central

    Mishima, Hiroyuki; Lidral, Andrew C; Ni, Jun

    2008-01-01

Background Genetic association studies have been used to map disease-causing genes. A newly introduced statistical method, called exhaustive haplotype association study, analyzes genetic information consisting of different numbers and combinations of DNA sequence variations along a chromosome. Such studies involve a large number of statistical calculations and consequently demand high computing power. It is possible to develop parallel algorithms and codes to perform the calculations on a high performance computing (HPC) system. However, most existing commonly-used statistical packages for genetic studies are non-parallel versions. Alternatively, one may use the cutting-edge technology of grid computing and its packages to run non-parallel genetic statistical packages on a centralized HPC system or distributed computing systems. In this paper, we report the utilization of a queuing scheduler built on the Grid Engine and run on a Rocks Linux cluster for our genetic statistical studies. Results Analysis of both consecutive and combinational window haplotypes was conducted with the FBAT (Laird et al., 2000) and Unphased (Dudbridge, 2003) programs. The dataset consisted of 26 loci from 277 extended families (1484 persons). Using the Rocks Linux cluster with 22 compute nodes, FBAT jobs ran about 14.4–15.9 times faster, while Unphased jobs ran 1.1–18.6 times faster, compared to the accumulated computation duration. Conclusion Execution of exhaustive haplotype analysis using non-parallel software packages on a Linux-based system is an effective and efficient approach in terms of cost and performance. PMID:18541045

  20. New Challenges in Computational Thermal Hydraulics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yadigaroglu, George; Lakehal, Djamel

New needs and opportunities drive the development of novel computational methods for the design and safety analysis of light water reactors (LWRs). Some new methods are likely to be three dimensional. Coupling is expected between system codes, computational fluid dynamics (CFD) modules, and cascades of computations at scales ranging from the macro- or system scale to the micro- or turbulence scales, with the various levels continuously exchanging information back and forth. The ISP-42/PANDA and the international SETH project provide opportunities for testing applications of single-phase CFD methods to LWR safety problems. Although industrial single-phase CFD applications are commonplace, computational multifluid dynamics is still under development. However, first applications are appearing; the state of the art and its potential uses are discussed. The case study of condensation of steam/air mixtures injected from a downward-facing vent into a pool of water is a perfect illustration of a simulation cascade: At the top of the hierarchy of scales, system behavior can be modeled with a system code; at the central level, the volume-of-fluid method can be applied to predict large-scale bubbling behavior; at the bottom of the cascade, direct-contact condensation can be treated with direct numerical simulation, in which turbulent flow (in both the gas and the liquid), interfacial dynamics, and heat/mass transfer are directly simulated without resorting to models.

  1. Serial network simplifies the design of multiple microcomputer systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Folkes, D.

    1981-01-01

Recently there has been a lot of interest in developing network communication schemes for carrying digital data between locally distributed computing stations. Many of these schemes have focused on distributed networking techniques for data processing applications. These applications suggest the use of a serial, multipoint bus, where a number of remote intelligent units act as slaves to a central or host computer. Each slave would be serially addressable from the host and would perform required operations upon being addressed by the host. Based on an MK3873 single-chip microcomputer, the SCU 20 is designed to be such a remote slave device. The capabilities of the SCU 20 and its use in systems applications are examined.

  2. Color fields of the static pentaquark system computed in SU(3) lattice QCD

    NASA Astrophysics Data System (ADS)

    Cardoso, Nuno; Bicudo, Pedro

    2013-02-01

    We compute the color fields of SU(3) lattice QCD created by static pentaquark systems, in a 243×48 lattice at β=6.2 corresponding to a lattice spacing a=0.07261(85)fm. We find that the pentaquark color fields are well described by a multi-Y-type shaped flux tube. The flux tube junction points are compatible with Fermat-Steiner points minimizing the total flux tube length. We also compare the pentaquark flux tube profile with the diquark-diantiquark central flux tube profile in the tetraquark and the quark-antiquark fundamental flux tube profile in the meson, and they match, thus showing that the pentaquark flux tubes are composed of fundamental flux tubes.
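Fermat-Steiner points like the flux-tube junctions above minimize the total distance to a set of fixed points. They can be located numerically with Weiszfeld's iteration; this generic 2D sketch is an illustration of that geometric construction, not the lattice QCD analysis itself.

```python
import math

# Weiszfeld's iteration for the geometric median (Fermat point of a
# triangle when given three points): repeatedly re-weight each input
# point by the inverse of its distance to the current estimate.

def weiszfeld(points, iterations=200):
    """Approximate the point minimizing the summed distances to `points`."""
    # Start from the centroid.
    x = [sum(p[i] for p in points) / len(points) for i in (0, 1)]
    for _ in range(iterations):
        num = [0.0, 0.0]
        den = 0.0
        for p in points:
            d = math.dist(x, p)
            if d < 1e-12:          # landed exactly on an input point
                return list(p)
            num[0] += p[0] / d
            num[1] += p[1] / d
            den += 1.0 / d
        x = [num[0] / den, num[1] / den]
    return x
```

For an equilateral triangle the minimizer coincides with the centroid, which gives a quick sanity check of the iteration.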

  3. The development and testing of a fieldworthy system of improved fluid pumping device and liquid sensor for oil wells. Fourth quarter technical progress report, 1991

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buckman, W.G.

    1991-12-31

A major expenditure to maintain oil and gas leases is the support of pumpers, those individuals who maintain the pumping systems on wells to achieve optimum production. Many leases are marginal and are in remote areas, and this requires considerable driving time for the pumper. The Air Pulse Oil Pump System is designed to be an economical system for shallow stripper wells. To improve on the economics of this system, we have designed a Remote Oil Field Monitor and Controller to enable us to acquire data from the lease to our central office at any time and to control the pumping activities from the central office by using a personal computer. The advent and economics of low-power microcontrollers have made it feasible to use this type of system for numerous remote control systems. We can also adapt this economical system to monitor and control the production of gas wells and/or pump jacks.

  4. A system to build distributed multivariate models and manage disparate data sharing policies: implementation in the scalable national network for effectiveness research.

    PubMed

    Meeker, Daniella; Jiang, Xiaoqian; Matheny, Michael E; Farcas, Claudiu; D'Arcy, Michel; Pearlman, Laura; Nookala, Lavanya; Day, Michele E; Kim, Katherine K; Kim, Hyeoneui; Boxwala, Aziz; El-Kareh, Robert; Kuo, Grace M; Resnic, Frederic S; Kesselman, Carl; Ohno-Machado, Lucila

    2015-11-01

Centralized and federated models for sharing data in research networks currently exist. To build multivariate data analysis for centralized networks, transfer of patient-level data to a central computation resource is necessary. The authors implemented distributed multivariate models for federated networks in which patient-level data is kept at each site and data exchange policies are managed in a study-centric manner. The objective was to implement infrastructure that supports the functionality of some existing research networks (e.g., cohort discovery, workflow management, and estimation of multivariate analytic models on centralized data) while adding additional important new features, such as algorithms for distributed iterative multivariate models, a graphical interface for multivariate model specification, synchronous and asynchronous response to network queries, investigator-initiated studies, and study-based control of staff, protocols, and data sharing policies. Based on the requirements gathered from statisticians, administrators, and investigators from multiple institutions, the authors developed infrastructure and tools to support multisite comparative effectiveness studies using web services for multivariate statistical estimation in the SCANNER federated network. The authors implemented massively parallel (map-reduce) computation methods and a new policy management system to enable each study initiated by network participants to define the ways in which data may be processed, managed, queried, and shared. The authors illustrated the use of these systems among institutions with highly different policies and operating under different state laws. Federated research networks need not limit distributed query functionality to count queries, cohort discovery, or independently estimated analytic models. Multivariate analyses can be efficiently and securely conducted without patient-level data transport, allowing institutions with strict local data storage requirements to participate in sophisticated analyses based on federated research networks. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
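The distributed-estimation pattern can be sketched with logistic regression: each site computes a gradient over its own patient-level records, and only those aggregate summaries travel to the coordinator. This is a generic illustration of the idea under assumed names, not SCANNER's actual web-service protocol.

```python
import math

# Federated estimation sketch: patient rows never leave their site; only
# per-site gradient summaries are shared with the coordinator.

def local_gradient(X, y, beta):
    """Gradient of the logistic log-loss over one site's local records."""
    g = [0.0] * len(beta)
    for xi, yi in zip(X, y):
        z = sum(b * x for b, x in zip(beta, xi))
        p = 1.0 / (1.0 + math.exp(-z))   # predicted probability
        for j, xj in enumerate(xi):
            g[j] += (p - yi) * xj
    return g

def coordinator_step(site_gradients, beta, lr=0.1):
    """Sum the per-site summaries and take one gradient-descent step."""
    total = [sum(g[j] for g in site_gradients) for j in range(len(beta))]
    return [b - lr * t for b, t in zip(beta, total)]
```

Iterating these two steps recovers the same coefficients a pooled analysis would produce, without any patient-level data transport.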

  5. Geochemistry of and radioactivity in ground water of the Highland Rim and Central Basin aquifer systems, Hickman and Maury counties, Tennessee

    USGS Publications Warehouse

    Hileman, G.E.; Lee, R.W.

    1993-01-01

    A reconnaissance of the geochemistry of and radioactivity in ground water from the Highland Rim and Central Basin aquifer systems in Hickman and Maury Counties, Tennessee, was conducted in 1989. Water in both aquifer systems typically is of the calcium or calcium magnesium bicarbonate type, but concentrations of calcium, magnesium, sodium, potassium, chloride, and sulfate are greater in water of the Central Basin system; differences in the concentrations are statistically significant. Dissolution of calcite, magnesium-calcite, dolomite, and gypsum is the primary geochemical process controlling ground-water chemistry in both aquifer systems. Saturation-state calculations using the computer code WATEQF indicated that ground water from the Central Basin system is more saturated with respect to calcite, dolomite, and gypsum than water from the Highland Rim system. Geochemical environments within each aquifer system are somewhat different with respect to dissolution of magnesium-bearing minerals. Water samples from the Highland Rim system had a fairly constant calcium to magnesium molar ratio, implying congruent dissolution of magnesium-bearing minerals, whereas water samples from the Central Basin system had highly variable ratios, implying either incongruent dissolution or heterogeneity in soluble constituents of the aquifer matrix. Concentrations of radionuclides in water were low and not greatly different between aquifer systems. Median gross alpha activities were 0.54 picocuries per liter in water from each system; median gross beta activities were 1.1 and 2.3 picocuries per liter in water from the Highland Rim and Central Basin systems, respectively. Radon-222 concentrations were 559 and 422 picocuries per liter, respectively. Concentrations of gross alpha and radium in all samples were substantially less than Tennessee's maximum permissible levels for community water-supply systems. 
The data indicated no relations between concentrations of dissolved radionuclides (uranium, radium-226, radium-228, radon-222, gross alpha, and gross beta) and any key indicators of water chemistry, except in water from the Highland Rim system, in which radon-222 was moderately related to pH and weakly related to dissolved magnesium. The only relation among radiochemical constituents indicated by the data was between radium-226 and gross alpha activity; this relation was indicated for water from both aquifer systems.
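    The saturation-state calculations mentioned above rest on a relation that is easy to sketch: the saturation index SI = log10(IAP) - log10(Ksp), where IAP is the ion-activity product of the dissolved species. The activities and equilibrium constant below are illustrative assumptions; WATEQF itself performs full aqueous speciation and activity-coefficient corrections.

```python
import math

def saturation_index(iap, log_ksp):
    """SI > 0: supersaturated (mineral tends to precipitate);
    SI near 0: equilibrium; SI < 0: undersaturated (mineral dissolves)."""
    return math.log10(iap) - log_ksp

# Calcite example with assumed activities (mol/L) and log Ksp of -8.48 at 25 C.
iap = 1e-4 * 1e-5                    # a(Ca2+) * a(CO3^2-)
si = saturation_index(iap, -8.48)    # -0.52, i.e., undersaturated in calcite
```

    A more negative SI for Highland Rim samples relative to Central Basin samples would correspond to the pattern reported above.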

  6. Semi-automatic central-chest lymph-node definition from 3D MDCT images

    NASA Astrophysics Data System (ADS)

    Lu, Kongkuo; Higgins, William E.

    2010-03-01

    Central-chest lymph nodes play a vital role in lung-cancer staging. The three-dimensional (3D) definition of lymph nodes from multidetector computed-tomography (MDCT) images, however, remains an open problem. This is because of the limitations in the MDCT imaging of soft-tissue structures and the complicated phenomena that influence the appearance of a lymph node in an MDCT image. In the past, we have made significant efforts toward developing (1) live-wire-based segmentation methods for defining 2D and 3D chest structures and (2) a computer-based system for automatic definition and interactive visualization of the Mountain central-chest lymph-node stations. Based on these works, we propose new single-click and single-section live-wire methods for segmenting central-chest lymph nodes. The single-click live wire only requires the user to select an object pixel on one 2D MDCT section and is designed for typical lymph nodes. The single-section live wire requires the user to process one selected 2D section using standard 2D live wire, but it is more robust. We applied these methods to the segmentation of 20 lymph nodes from two human MDCT chest scans (10 per scan) drawn from our ground-truth database. The single-click live wire segmented 75% of the selected nodes successfully and reproducibly, while the success rate for the single-section live wire was 85%. We are able to segment the remaining nodes using our previously derived (but more interaction-intensive) 2D live-wire method incorporated in our lymph-node analysis system. Both proposed methods are reliable and applicable to a wide range of pulmonary lymph nodes.
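    At the heart of all live-wire variants is a minimum-cost path search through the image, typically Dijkstra's algorithm over a grid of local costs derived from image gradients and other features. The stripped-down sketch below shows only that graph-search core on a hypothetical cost grid; the full methods add gradient-based cost functions, on-the-fly seed updates, and 3D extension.

```python
import heapq

def live_wire_path(cost, start, goal):
    """Dijkstra's shortest path over a 2D grid of local costs, the search
    core of a live-wire boundary tool. start/goal are (row, col) tuples."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Walk back from goal to start to recover the boundary path.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

    Because low costs are assigned along strong object boundaries, the optimal path "snaps" to the lymph-node contour between user-selected seed points.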

  7. Distributed and Centralized Conflict Management Under Traffic Flow Management Constraints

    NASA Technical Reports Server (NTRS)

    Feron, Eric; Bilimoria, Karl (Technical Monitor)

    2003-01-01

    Current air transportation in the United States relies on a system born half a century ago. While demand for air travel has kept increasing over the years, technologies at the heart of the National Airspace System (NAS) have not been able to follow an adequate evolution. For instance, computers used to centralize flight data in airspace sectors run software developed in 1972. Safety, as well as certification and portability issues, arise as major obstacles to the improvement of the system. The NAS is a structure that has never been designed, but has rather evolved over time. This has many drawbacks, mainly due to a lack of integration and engineering, leading to many inefficiencies and losses of performance. To improve operations, understanding of this complex system needs to be built up to a certain level. This work presents research done on Air Traffic Management (ATM) at the level of the en-route sector.

  8. The Organization and Evaluation of a Computer-Assisted, Centralized Immunization Registry.

    ERIC Educational Resources Information Center

    Loeser, Helen; And Others

    1983-01-01

    Evaluation of a computer-assisted, centralized immunization registry after one year shows that 93 percent of eligible health practitioners initially agreed to provide data and that 73 percent continue to do so. Immunization rates in audited groups have improved significantly. (GC)

  9. Distributed computing for macromolecular crystallography

    PubMed Central

    Krissinel, Evgeny; Uski, Ville; Lebedev, Andrey; Ballard, Charles

    2018-01-01

    Modern crystallographic computing is characterized by the growing role of automated structure-solution pipelines, which represent complex expert systems utilizing a number of program components, decision makers and databases. They also require considerable computational resources and regular database maintenance, which is increasingly more difficult to provide at the level of individual desktop-based CCP4 setups. On the other hand, there is a significant growth in data processed in the field, which brings up the issue of centralized facilities for keeping both the data collected and structure-solution projects. The paradigm of distributed computing and data management offers a convenient approach to tackling these problems, which has become more attractive in recent years owing to the popularity of mobile devices such as tablets and ultra-portable laptops. In this article, an overview is given of developments by CCP4 aimed at bringing distributed crystallographic computations to a wide crystallographic community. PMID:29533240

  10. Distributed computing for macromolecular crystallography.

    PubMed

    Krissinel, Evgeny; Uski, Ville; Lebedev, Andrey; Winn, Martyn; Ballard, Charles

    2018-02-01

    Modern crystallographic computing is characterized by the growing role of automated structure-solution pipelines, which represent complex expert systems utilizing a number of program components, decision makers and databases. They also require considerable computational resources and regular database maintenance, which is increasingly more difficult to provide at the level of individual desktop-based CCP4 setups. On the other hand, there is a significant growth in data processed in the field, which brings up the issue of centralized facilities for keeping both the data collected and structure-solution projects. The paradigm of distributed computing and data management offers a convenient approach to tackling these problems, which has become more attractive in recent years owing to the popularity of mobile devices such as tablets and ultra-portable laptops. In this article, an overview is given of developments by CCP4 aimed at bringing distributed crystallographic computations to a wide crystallographic community.

  11. Evolution in a centralized transfusion service.

    PubMed

    AuBuchon, James P; Linauts, Sandra; Vaughan, Mimi; Wagner, Jeffrey; Delaney, Meghan; Nester, Theresa

    2011-12-01

    The metropolitan Seattle area has utilized a centralized transfusion service model throughout the modern era of blood banking. This approach has used four laboratories to serve over 20 hospitals and clinics, providing greater capabilities for all at a lower consumption of resources than if each depended on its own laboratory and staff for these functions. In addition, this centralized model has facilitated wider use of the medical capabilities of the blood center's physicians, and a county-wide network of transfusion safety officers is now being developed to increase the impact of the blood center's transfusion expertise at the patient's bedside. Medical expectations and traffic have led the blood center to evolve the centralized model to include on-site laboratories at facilities with complex transfusion requirements (e.g., a children's hospital) and to implement in all the others a system of remote allocation. This new capability places a refrigerator stocked with uncrossmatched units in the hospital but retains control over the dispensing of these through the blood center's computer system; the correct unit can be electronically cross-matched and released on demand, obviating the need for transportation to the hospital and thus speeding transfusion. This centralized transfusion model has withstood the test of time and continues to evolve to meet new situations and ensure optimal patient care. © 2011 American Association of Blood Banks.

  12. Integrating all medical records to an enterprise viewer.

    PubMed

    Li, Haomin; Duan, Huilong; Lu, Xudong; Zhao, Chenhui; An, Jiye

    2005-01-01

    The idea behind hospital information systems is to make all of a patient's medical reports, lab results, and images electronically available to clinicians, instantaneously, wherever they are. But the higgledy-piggledy evolution of most hospital computer systems makes it hard to integrate all these clinical records. Although several integration standards have been proposed to meet this challenge, none of them fits Chinese hospitals well. In this paper, we introduce our work of implementing a three-tiered-architecture enterprise viewer at Huzhou Central Hospital to integrate all existing medical information systems using limited resources.

  13. A Computerized Hospital Patient Information Management System

    PubMed Central

    Wig, Eldon D.

    1982-01-01

    The information processing needs of a hospital are many, with varying degrees of complexity. The prime concern in providing an integrated hospital information management system lies in the ability to process the data relating to the single entity for which every hospital functions - the patient. This paper examines the PRIMIS computer system developed to accommodate hospital needs with respect to a central patient registry, inpatients (i.e., Admission/Transfer/Discharge), and out-patients. Finally, the potential for expansion to permit the incorporation of more hospital functions within PRIMIS is examined.

  14. A human operator simulator model of the NASA Terminal Configured Vehicle (TCV)

    NASA Technical Reports Server (NTRS)

    Glenn, F. A., III; Doane, S. M.

    1981-01-01

    A generic operator model called HOS was used to simulate the behavior and performance of a pilot flying a transport airplane during instrument approach and landing operations in order to demonstrate the applicability of the model to problems associated with interfacing a crew with a flight system. The model which was installed and operated on NASA Langley's central computing system is described. Preliminary results of its application to an investigation of an innovative display system under development in Langley's terminal configured vehicle program are considered.

  15. Experimental realization of universal geometric quantum gates with solid-state spins.

    PubMed

    Zu, C; Wang, W-B; He, L; Zhang, W-G; Dai, C-Y; Wang, F; Duan, L-M

    2014-10-02

    Experimental realization of a universal set of quantum logic gates is the central requirement for the implementation of a quantum computer. In an 'all-geometric' approach to quantum computation, the quantum gates are implemented using Berry phases and their non-Abelian extensions, holonomies, from geometric transformation of quantum states in the Hilbert space. Apart from its fundamental interest and rich mathematical structure, the geometric approach has some built-in noise-resilience features. On the experimental side, geometric phases and holonomies have been observed in thermal ensembles of liquid molecules using nuclear magnetic resonance; however, such systems are known to be non-scalable for the purposes of quantum computing. There are proposals to implement geometric quantum computation in scalable experimental platforms such as trapped ions, superconducting quantum bits and quantum dots, and a recent experiment has realized geometric single-bit gates in a superconducting system. Here we report the experimental realization of a universal set of geometric quantum gates using the solid-state spins of diamond nitrogen-vacancy centres. These diamond defects provide a scalable experimental platform with the potential for room-temperature quantum computing, which has attracted strong interest in recent years. Our experiment shows that all-geometric and potentially robust quantum computation can be realized with solid-state spin quantum bits, making use of recent advances in the coherent control of this system.

  16. Numerically stable, scalable formulas for parallel and online computation of higher-order multivariate central moments with arbitrary weights

    DOE PAGES

    Pebay, Philippe; Terriberry, Timothy B.; Kolla, Hemanth; ...

    2016-03-29

    Formulas for incremental or parallel computation of second-order central moments have long been known, and recent extensions of these formulas to univariate and multivariate moments of arbitrary order have been developed. Such formulas are of key importance in scenarios where incremental results are required and in parallel and distributed systems where communication costs are high. We survey these recent results and improve them with arbitrary-order, numerically stable one-pass formulas, which we further extend with weighted and compound variants. We also develop a generalized correction factor for standard two-pass algorithms that enables the maintenance of accuracy over nearly the full representable range of the input, avoiding the need for extended-precision arithmetic. We then empirically examine algorithm correctness for pairwise update formulas up to order four, as well as condition number and relative error bounds for eight different central moment formulas, each up to degree six, to address the trade-offs between numerical accuracy and speed of the various algorithms. Finally, we demonstrate the use of the most elaborate of these formulas, with compound moments, in a practical large-scale scientific application.
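    The order-2 case of the pairwise update formulas surveyed above can be sketched compactly: two partial results, each holding a weight sum, a weighted mean, and a second central moment, merge into one without a second pass over the data. This is a generic illustration of the technique (the well-known stable merge), not the authors' code.

```python
def merge_moments(wa, ma, M2a, wb, mb, M2b):
    """Numerically stable pairwise update: combine two partial results
    (weight sum w, weighted mean m, second central moment M2).
    Works equally for streaming (wb = 1) and parallel tree reductions."""
    w = wa + wb
    delta = mb - ma
    m = ma + delta * wb / w
    M2 = M2a + M2b + delta * delta * wa * wb / w
    return w, m, M2
```

    Accumulating the observations 1, 2, 3, 4 one at a time (each as a partial result with weight 1 and M2 = 0) yields mean 2.5 and M2 = 5.0, i.e., variance M2/w = 1.25; the identical merge applies between partial results computed on different processors, which is what makes the formulas map-reduce friendly.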

  17. Reclassification and Documentation in a Medium-sized Medical Center Library: The MTST System in the Simultaneous Production of Catalog Cards and a Computer Stored Record

    PubMed Central

    Love, Erika; Butzin, Diane; Robinson, Robert E.; Lee, Soo

    1971-01-01

    A project to recatalog and reclassify the book collection of the Bowman Gray School of Medicine Library utilizing the Magnetic Tape/Selectric Typewriter system for simultaneous catalog card production and computer stored data acquisition marks the beginning of eventual computerization of all library operations. A keyboard optical display system will be added by late 1970. Major input operations requiring the creation of “hard copy” will continue via the MTST system. Updating, editing and retrieval operations as well as input without hard copy production will be done through the “on-line” keyboard optical display system. Once the library's first data bank, the book catalog, has been established, the computer may be consulted directly for library holdings from any optical display terminal throughout the medical center. Three basic information retrieval operations may be carried out through “on-line” optical display terminals. Output options include the reproduction of part or all of a given document, or the generation of statistical data, which are derived from two Acquisition Code lines. The creation of a central bibliographic record of Bowman Gray Faculty publications patterned after the cataloging program is presently under way. The cataloging and computer storage of serial holdings records will begin after completion of the reclassification project. All acquisitions added to the collection since October 1967 are computer-stored and fully retrievable. Reclassification of older titles will be completed in early 1971. PMID:5542915

  18. The Careful Puppet Master: Reducing risk and fortifying acceptance testing with Jenkins CI

    NASA Astrophysics Data System (ADS)

    Smith, Jason A.; Richman, Gabriel; DeStefano, John; Pryor, James; Rao, Tejas; Strecker-Kellogg, William; Wong, Tony

    2015-12-01

    Centralized configuration management, including the use of automation tools such as Puppet, can greatly increase provisioning speed and efficiency when configuring new systems or making changes to existing systems, reduce duplication of work, and improve automated processes. However, centralized management also brings with it a level of inherent risk: a single change in just one file can quickly be pushed out to thousands of computers and, if that change is not properly and thoroughly tested and contains an error, could result in catastrophic damage to many services, potentially bringing an entire computer facility offline. Change management procedures can—and should—be formalized in order to prevent such accidents. However, like the configuration management process itself, if such procedures are not automated, they can be difficult to enforce strictly. Therefore, to reduce the risk of merging potentially harmful changes into our production Puppet environment, we have created an automated testing system, which includes the Jenkins CI tool, to manage our Puppet testing process. This system includes the proposed changes and runs Puppet on a pool of dozens of RedHat Enterprise Virtualization (RHEV) virtual machines (VMs) that replicate most of our important production services for the purpose of testing. This paper describes our automated test system and how it hooks into our production approval process for automatic acceptance testing. All pending changes that have been pushed to production must pass this validation process before they can be approved and merged into production.

  19. The Curse of Planning: Dissecting multiple reinforcement learning systems by taxing the central executive

    PubMed Central

    Otto, A. Ross; Gershman, Samuel J.; Markman, Arthur B.; Daw, Nathaniel D.

    2013-01-01

    A number of accounts of human and animal behavior posit the operation of parallel and competing valuation systems in the control of choice behavior. Along these lines, a flexible but computationally expensive model-based reinforcement learning system has been contrasted with a less flexible but more efficient model-free reinforcement learning system. The factors governing which system controls behavior—and under what circumstances—are still unclear. Based on the hypothesis that model-based reinforcement learning requires cognitive resources, we demonstrate that having human decision-makers perform a demanding secondary task engenders increased reliance on a model-free reinforcement learning strategy. Further, we show that across trials, people negotiate this tradeoff dynamically as a function of concurrent executive function demands and their choice latencies reflect the computational expenses of the strategy employed. These results demonstrate that competition between multiple learning systems can be controlled on a trial-by-trial basis by modulating the availability of cognitive resources. PMID:23558545
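    The model-free strategy contrasted above can be illustrated by the classic one-step Q-learning update, which is computationally cheap precisely because it plans over no transition model; it simply nudges a cached action value toward a bootstrapped target. This is a generic textbook sketch, not the task or code used in the study.

```python
def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """Model-free temporal-difference update: move Q[s][a] toward the
    one-step target r + gamma * max_a' Q[s_next][a'], with no planning."""
    target = r + gamma * max(Q[s_next].values())
    Q[s][a] += alpha * (target - Q[s][a])
```

    A model-based system would instead simulate trajectories through a learned transition model before choosing, which is the computationally expensive, executive-demanding alternative whose use the secondary task suppresses.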

  20. LEMON - LHC Era Monitoring for Large-Scale Infrastructures

    NASA Astrophysics Data System (ADS)

    Marian, Babik; Ivan, Fedorko; Nicholas, Hook; Hector, Lansdale Thomas; Daniel, Lenkes; Miroslav, Siket; Denis, Waldron

    2011-12-01

    At the present time computer centres are facing a massive rise in virtualization and cloud computing, as these solutions bring advantages to service providers and consolidate computer centre resources. As a result, however, monitoring complexity is increasing. Computer centre management requires monitoring not only servers, network equipment and associated software, but also additional environment and facilities data (e.g. temperature, power consumption, cooling efficiency, etc.) to maintain a good overview of infrastructure performance. The LHC Era Monitoring (Lemon) system addresses these requirements for a very large-scale infrastructure. The Lemon agent, which collects data on every client and forwards the samples to the central measurement repository, provides a flexible interface that allows rapid development of new sensors. The system can also report on behalf of remote devices such as switches and power supplies. Online and historical data can be visualized via a web-based interface or retrieved via command-line tools. The Lemon Alarm System component can be used to notify the operator about error situations. In this article, an overview of Lemon monitoring is provided together with a description of the CERN LEMON production instance. No direct comparison is made with other monitoring tools.

  1. Computer input and output files associated with ground-water-flow simulations of the Albuquerque Basin, central New Mexico, 1901-94, with projections to 2020; (supplement one to U.S. Geological Survey Water-resources investigations report 94-4251)

    USGS Publications Warehouse

    Kernodle, J.M.

    1996-01-01

    This report presents the computer input files required to run the three-dimensional ground-water-flow model of the Albuquerque Basin, central New Mexico, documented in Kernodle and others (Kernodle, J.M., McAda, D.P., and Thorn, C.R., 1995, Simulation of ground-water flow in the Albuquerque Basin, central New Mexico, 1901-1994, with projections to 2020: U.S. Geological Survey Water-Resources Investigations Report 94-4251, 114 p.). Output files resulting from the computer simulations are included for reference.

  2. Essentials and Perspectives of Computational Modelling Assistance for CNS-oriented Nanoparticle-based Drug Delivery Systems.

    PubMed

    Kisała, Joanna; Heclik, Kinga I; Pogocki, Krzysztof; Pogocki, Dariusz

    2018-05-16

    The blood-brain barrier (BBB) is a complex system controlling two-way traffic of substances between the circulatory (cardiovascular) system and the central nervous system (CNS). It is almost perfectly crafted to regulate brain homeostasis and to permit selective transport of molecules that are essential for brain function. For potential drug candidates, both the CNS-oriented neuropharmaceuticals and those with primary targets in the periphery, the extent to which a substance in the circulation gains access to the CNS seems crucial. With the advent of nanopharmacology, the problem of BBB permeability for drug nano-carriers gains new significance. Compared to some other fields of medicinal chemistry, the computational science of nanodelivery is still too immature to offer black-box solutions, especially for the BBB case. However, even its enormous complexity can be spelled out in terms of physical principles and, as such, subjected to computation. Basic understanding of the various physico-chemical parameters describing brain uptake is required to take advantage of them for BBB nanodelivery. This mini-review provides a sketchy introduction to the essential concepts that allow application of computational simulation to BBB-nanodelivery design. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  3. Experiments in Computing: A Survey

    PubMed Central

    Moisseinen, Nella

    2014-01-01

    Experiments play a central role in science. The role of experiments in computing is, however, unclear. Questions about the relevance of experiments in computing attracted little attention until the 1980s. As the discipline then saw a push towards experimental computer science, a variety of technically, theoretically, and empirically oriented views on experiments emerged. As a consequence of those debates, today's computing fields use experiments and experiment terminology in a variety of ways. This paper analyzes experimentation debates in computing. It presents five ways in which debaters have conceptualized experiments in computing: feasibility experiment, trial experiment, field experiment, comparison experiment, and controlled experiment. This paper has three aims: to clarify experiment terminology in computing; to contribute to disciplinary self-understanding of computing; and, due to computing's centrality in other fields, to promote understanding of experiments in modern science in general. PMID:24688404

  4. Experiments in computing: a survey.

    PubMed

    Tedre, Matti; Moisseinen, Nella

    2014-01-01

    Experiments play a central role in science. The role of experiments in computing is, however, unclear. Questions about the relevance of experiments in computing attracted little attention until the 1980s. As the discipline then saw a push towards experimental computer science, a variety of technically, theoretically, and empirically oriented views on experiments emerged. As a consequence of those debates, today's computing fields use experiments and experiment terminology in a variety of ways. This paper analyzes experimentation debates in computing. It presents five ways in which debaters have conceptualized experiments in computing: feasibility experiment, trial experiment, field experiment, comparison experiment, and controlled experiment. This paper has three aims: to clarify experiment terminology in computing; to contribute to disciplinary self-understanding of computing; and, due to computing's centrality in other fields, to promote understanding of experiments in modern science in general.

  5. Executive control systems in the engineering design environment

    NASA Technical Reports Server (NTRS)

    Hurst, P. W.; Pratt, T. W.

    1985-01-01

    Executive Control Systems (ECSs) are software structures for the unification of various engineering design application programs into comprehensive systems with a central user interface (uniform access) method and a data management facility. Attention is presently given to the most significant determinations of a research program conducted for 24 ECSs, used in government and industry engineering design environments to integrate CAD/CAE applications programs. Characterizations are given for the systems' major architectural components and the alternative design approaches considered in their development. Attention is given to ECS development prospects in the areas of interdisciplinary usage, standardization, knowledge utilization, and computer science technology transfer.

  6. Efficient evaluation of wireless real-time control networks.

    PubMed

    Horvath, Peter; Yampolskiy, Mark; Koutsoukos, Xenofon

    2015-02-11

    In this paper, we present a system simulation framework for the design and performance evaluation of complex wireless cyber-physical systems. We describe the simulator architecture and the specific developments that are required to simulate cyber-physical systems relying on multi-channel, multihop mesh networks. We introduce realistic and efficient physical layer models and a system simulation methodology, which provides statistically significant performance evaluation results with low computational complexity. The capabilities of the proposed framework are illustrated using the example of WirelessHART, a centralized, real-time, multi-hop mesh network designed for industrial control and monitoring applications.

  7. Construction of In-house Databases in a Corporation

    NASA Astrophysics Data System (ADS)

    Sano, Hikomaro

    This report outlines the “Repoir” (Report information retrieval) system of Toyota Central R & D Laboratories, Inc. as an example of an in-house information retrieval system. The online system was designed to process in-house technical reports with the aid of a mainframe computer and has been in operation since 1979. Its features are multiple use of the information for technical and managerial purposes and simplicity in indexing and data input. The total number of descriptors, specially selected for the system, was minimized for ease of indexing. The report also describes the input items, processing flow and typical outputs in kanji letters.

  8. A computer vision system for the recognition of trees in aerial photographs

    NASA Technical Reports Server (NTRS)

    Pinz, Axel J.

    1991-01-01

    Increasing problems of forest damage in Central Europe set the demand for an appropriate forest damage assessment tool. The Vision Expert System (VES) is presented which is capable of finding trees in color infrared aerial photographs. Concept and architecture of VES are discussed briefly. The system is applied to a multisource test data set. The processing of this multisource data set leads to a multiple interpretation result for one scene. An integration of these results will provide a better scene description by the vision system. This is achieved by an implementation of Steven's correlation algorithm.

  9. Computer networks for remote laboratories in physics and engineering

    NASA Technical Reports Server (NTRS)

    Starks, Scott; Elizandro, David; Leiner, Barry M.; Wiskerchen, Michael

    1988-01-01

    This paper addresses a relatively new approach to scientific research, telescience, which is the conduct of scientific operations in locations remote from the site of central experimental activity. A testbed based on the concepts of telescience is being developed to ultimately enable scientific researchers on earth to conduct experiments onboard the Space Station. This system along with background materials are discussed.

  10. The Role of Prototype Learning in Hierarchical Models of Vision

    ERIC Educational Resources Information Center

    Thomure, Michael David

    2014-01-01

    I conduct a study of learning in HMAX-like models, which are hierarchical models of visual processing in biological vision systems. Such models compute a new representation for an image based on the similarity of image sub-parts to a number of specific patterns, called prototypes. Despite being a central piece of the overall model, the issue of…

  11. A Directory of Sources of Information and Data Bases on Education and Training.

    DTIC Science & Technology

    1980-09-01

    [Garbled excerpt from the directory's entry listing; recoverable entries: ACADO07 National Opinion Research Center (NORC); ACADOO8 U of California Union Catalog Supp. (1963-1967); ... Records (RSR); ARMYO30 Union Central Registry System (UCRSYS); ARMY032 Training Control Card Report.] ...research. Your query directs a computer search of the Comprehensive Dissertation Database. The search produces a list of all titles matching your...

  12. U.S. EPA computational toxicology programs: Central role of chemical-annotation efforts and molecular databases

    EPA Science Inventory

    EPA’s National Center for Computational Toxicology is engaged in high-profile research efforts to improve the ability to more efficiently and effectively prioritize and screen thousands of environmental chemicals for potential toxicity. A central component of these efforts invol...

  13. Rydberg Atoms in Strong Fields: a Testing Ground for Quantum Chaos.

    NASA Astrophysics Data System (ADS)

    Courtney, Michael

    1995-01-01

    Rydberg atoms in strong static electric and magnetic fields provide experimentally accessible systems for studying the connections between classical chaos and quantum mechanics in the semiclassical limit. This experimental accessibility has motivated the development of reliable quantum mechanical solutions. This thesis uses both experimental and computed quantum spectra to test the central approaches to quantum chaos. These central approaches consist mainly of developing methods to compute the spectra of quantum systems in non-perturbative regimes, correlating statistical descriptions of eigenvalues with the classical behavior of the same Hamiltonian, and the development of semiclassical methods such as periodic-orbit theory. Particular emphasis is given to identifying the spectral signature of recurrences, quantum wave packets which follow classical orbits. The new findings include: the breakdown of the connection between energy-level statistics and classical chaos in odd-parity diamagnetic lithium, the discovery of the signature of very long period orbits in atomic spectra, quantitative evidence for the scattering of recurrences by the alkali-metal core, quantitative description of the behavior of recurrences near bifurcations, and a semiclassical interpretation of the evolution of continuum Stark spectra. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)

  14. Post-quantum cryptography.

    PubMed

    Bernstein, Daniel J; Lange, Tanja

    2017-09-13

    Cryptography is essential for the security of online communication, cars and implanted medical devices. However, many commonly used cryptosystems will be completely broken once large quantum computers exist. Post-quantum cryptography is cryptography under the assumption that the attacker has a large quantum computer; post-quantum cryptosystems strive to remain secure even in this scenario. This relatively young research area has seen some successes in identifying mathematical operations for which quantum algorithms offer little advantage in speed, and then building cryptographic systems around those. The central challenge in post-quantum cryptography is to meet demands for cryptographic usability and flexibility without sacrificing confidence.

  15. Management and development of local area network upgrade prototype

    NASA Technical Reports Server (NTRS)

    Fouser, T. J.

    1981-01-01

    Given the situation of having management and development users accessing a central computing facility and given the fact that these same users have the need for local computation and storage, the utilization of a commercially available networking system such as CP/NET from Digital Research provides the building blocks for communicating intelligent microsystems to file and print services. The major problems to be overcome in the implementation of such a network are the dearth of intelligent communication front-ends for the microcomputers and the lack of a rich set of management and software development tools.

  16. Post-quantum cryptography

    NASA Astrophysics Data System (ADS)

    Bernstein, Daniel J.; Lange, Tanja

    2017-09-01

    Cryptography is essential for the security of online communication, cars and implanted medical devices. However, many commonly used cryptosystems will be completely broken once large quantum computers exist. Post-quantum cryptography is cryptography under the assumption that the attacker has a large quantum computer; post-quantum cryptosystems strive to remain secure even in this scenario. This relatively young research area has seen some successes in identifying mathematical operations for which quantum algorithms offer little advantage in speed, and then building cryptographic systems around those. The central challenge in post-quantum cryptography is to meet demands for cryptographic usability and flexibility without sacrificing confidence.

  17. Integrating Xgrid into the HENP distributed computing model

    NASA Astrophysics Data System (ADS)

    Hajdu, L.; Kocoloski, A.; Lauret, J.; Miller, M.

    2008-07-01

    Modern Macintosh computers feature Xgrid, a distributed computing architecture built directly into Apple's OS X operating system. While the approach is radically different from those generally expected by the Unix-based Grid infrastructures (Open Science Grid, TeraGrid, EGEE), opportunistic computing on Xgrid is nonetheless a tempting and novel way to assemble a computing cluster with a minimum of additional configuration. In fact, it requires only the default operating system and authentication to a central controller from each node. OS X also implements arbitrarily extensible metadata, allowing an instantly updated file catalog to be stored as part of the filesystem itself. The low barrier to entry allows an Xgrid cluster to grow quickly and organically. This paper and presentation will detail the steps that can be taken to make such a cluster a viable resource for HENP research computing. We will further show how to provide users a unified job submission framework by integrating Xgrid through the STAR Unified Meta-Scheduler (SUMS), making task and job submission effortless for those users already using the tool for traditional Grid or local cluster job submission. We will discuss additional steps that can be taken to make an Xgrid cluster a full partner in grid computing initiatives, focusing on Open Science Grid integration. MIT's Xgrid system currently supports the work of multiple research groups in the Laboratory for Nuclear Science, and has become an important tool for generating simulations and conducting data analyses at the Massachusetts Institute of Technology.

  18. Design issues for grid-connected photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Ropp, Michael Eugene

    1998-08-01

    Photovoltaics (PV) is the direct conversion of sunlight to electrical energy. In areas without centralized utility grids, the benefits of PV easily overshadow the present shortcomings of the technology. However, in locations with centralized utility systems, significant technical challenges remain before utility-interactive PV (UIPV) systems can be integrated into the mix of electricity sources. One challenge is that the computer design tools needed for optimal design of PV systems with curved PV arrays are not available, and even those that are available do not facilitate monitoring of the system once it is built. Another arises from the issue of islanding. Islanding occurs when a UIPV system continues to energize a section of a utility system after that section has been isolated from the utility voltage source. Islanding, which is potentially dangerous to both personnel and equipment, is difficult to prevent completely. The work contained within this thesis targets both of these technical challenges. In Task 1, a method for modeling a PV system with a curved PV array using only existing computer software is developed. This methodology also facilitates comparison of measured and modeled data for use in system monitoring. The procedure is applied to the Georgia Tech Aquatic Center (GTAC) PV system. In the work contained under Task 2, islanding prevention is considered. The existing state-of-the-art is thoroughly reviewed. In Subtask 2.1, an analysis is performed which suggests that standard protective relays are in fact insufficient to guarantee protection against islanding. In Subtask 2.2, several existing islanding prevention methods are compared in a novel way. The superiority of this new comparison over those used previously is demonstrated. A new islanding prevention method is the subject of Subtask 2.3. It is shown that it does not compare favorably with other existing techniques. However, in Subtask 2.4, a novel method for dramatically improving this new islanding prevention method is described. It is shown, both by computer modeling and experiment, that this new method is one of the most effective available today. Finally, under Subtask 2.5, the effects of certain types of loads on the effectiveness of islanding prevention methods are discussed.

  19. Evolution of the Hubble Space Telescope Safing Systems

    NASA Technical Reports Server (NTRS)

    Pepe, Joyce; Myslinski, Michael

    2006-01-01

    The Hubble Space Telescope (HST) was launched on April 24, 1990, with an expected lifespan of 15 years. Central to the spacecraft design was the concept of a series of on-orbit shuttle servicing missions permitting astronauts to replace failed equipment, update the scientific instruments and keep the HST at the forefront of astronomical discoveries. One key to the success of the Hubble mission has been the robust Safing systems designed to monitor the performance of the observatory and to react to keep the spacecraft safe in the event of equipment anomaly. The spacecraft Safing System consists of a range of software tests in the primary flight computer that evaluate the performance of mission critical hardware, safe modes that are activated when the primary control mode is deemed inadequate for protecting the vehicle, and special actions that the computer can take to autonomously reconfigure critical hardware. The HST Safing System was structured to autonomously detect electrical power system, data management system, and pointing control system malfunctions and to configure the vehicle to ensure safe operation without ground intervention for up to 72 hours. There is also a dedicated safe mode computer that constantly monitors a keep-alive signal from the primary computer. If this signal stops, the safe mode computer shuts down the primary computer and takes over control of the vehicle, putting it into a safe, low-power configuration. The HST Safing system has continued to evolve as equipment has aged, as new hardware has been installed on the vehicle, and as the operation modes have matured during the mission. Along with the continual refinement of the limits used in the safing tests, several new tests have been added to the monitoring system, and new safe modes have been added to the flight software. 
This paper will focus on the evolution of the HST Safing System and Safing tests, and the importance of this evolution to prolonging the science operations of the telescope.

  20. An explicit mixed numerical method for mesoscale model

    NASA Technical Reports Server (NTRS)

    Hsu, H.-M.

    1981-01-01

    A mixed numerical method has been developed for mesoscale models. The technique consists of a forward difference scheme for time tendency terms, an upstream scheme for advective terms, and a central scheme for the other terms in a physical system. It is shown that the mixed method is conditionally stable and highly accurate for approximating the system of either shallow-water equations in one dimension or primitive equations in three dimensions. Since the technique is explicit and two-time-level, it conserves computer and programming resources.
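
    As an illustration of the scheme this abstract describes, the following sketch (our own example, not the paper's code) applies forward differencing in time, an upstream (upwind) scheme for advection, and a central scheme for the remaining diffusive term to a 1-D advection-diffusion equation. The grid size, coefficients, and 0.4 safety factor are illustrative choices, not values from the paper.

```python
import numpy as np

# Mixed explicit scheme sketch for du/dt + c du/dx = K d2u/dx2,
# with c > 0 and periodic boundaries.

def step(u, c, K, dx, dt):
    """Advance the solution one time step."""
    um = np.roll(u, 1)                     # u[i-1]
    up = np.roll(u, -1)                    # u[i+1]
    adv = -c * (u - um) / dx               # upstream scheme (valid for c > 0)
    dif = K * (up - 2.0 * u + um) / dx**2  # central scheme
    return u + dt * (adv + dif)            # forward time difference

# Conditional stability requires roughly c*dt/dx <= 1 and 2*K*dt/dx**2 <= 1.
nx, dx = 100, 1.0
c, K = 1.0, 0.5
dt = 0.4 * min(dx / c, dx**2 / (2.0 * K))
u = np.exp(-0.5 * ((np.arange(nx) * dx - 20.0) / 3.0) ** 2)  # Gaussian pulse
for _ in range(200):
    u = step(u, c, K, dx, dt)
```

    Within the stated stability limits the update is a convex combination of neighboring values, so the pulse advects and diffuses without spurious growth.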

  1. Lightning Detection and Ranging system LDAR system description and performance objectives

    NASA Technical Reports Server (NTRS)

    Poehler, H. A.; Lennon, C. L.

    1979-01-01

    The instruments used at the six remote stations to measure both the time-of-arrival of the envelope of the pulsed 60 MHz to 80 MHz portion of the RF signal emitted by lightning, and the electric field waveforms are described as well as the two methods of transmitting the signal to the central station. Other topics discussed include data processing, recording, and reduction techniques and the software used for the 2100S, 2114, and 2116 computers.

  2. Magnetic resonance imaging diagnosis of disseminated necrotizing leukoencephalopathy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Atlas, S.W.; Grossman, R.I.; Packer, R.J.

    1987-01-01

    Disseminated necrotizing leukoencephalopathy is a rare syndrome of progressive neurologic deterioration seen most often in patients who have received central nervous system irradiation combined with intrathecal or systemic chemotherapy in the treatment or prophylaxis of various malignancies. Magnetic resonance imaging was more sensitive than computed tomography in detecting white matter abnormalities in the case of disseminated necrotizing leukoencephalopathy reported here. Magnetic resonance imaging may be useful in diagnosing incipient white matter changes in disseminated necrotizing leukoencephalopathy, thus permitting early, appropriate therapeutic modifications.

  3. Discrete Adjoint Sensitivity Analysis of Hybrid Dynamical Systems With Switching [Discrete Adjoint Sensitivity Analysis of Hybrid Dynamical Systems

    DOE PAGES

    Zhang, Hong; Abhyankar, Shrirang; Constantinescu, Emil; ...

    2017-01-24

    Sensitivity analysis is an important tool for describing power system dynamic behavior in response to parameter variations. It is a central component in preventive and corrective control applications. The existing approaches for sensitivity calculations, namely, finite-difference and forward sensitivity analysis, require a computational effort that increases linearly with the number of sensitivity parameters. In this paper, we investigate, implement, and test a discrete adjoint sensitivity approach whose computational effort is effectively independent of the number of sensitivity parameters. The proposed approach is highly efficient for calculating sensitivities of larger systems and is consistent, within machine precision, with the function whose sensitivity we are seeking. This is an essential feature for use in optimization applications. Moreover, our approach includes a consistent treatment of systems with switching, such as dc exciters, by deriving and implementing the adjoint jump conditions that arise from state-dependent and time-dependent switchings. The accuracy and the computational efficiency of the proposed approach are demonstrated in comparison with the forward sensitivity analysis approach. In conclusion, this paper focuses primarily on the power system dynamics, but the approach is general and can be applied to hybrid dynamical systems in a broader range of fields.
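
    The cost argument in this abstract can be illustrated with a toy discrete adjoint (our own sketch, not the authors' implementation): for a linear recurrence u_{k+1} = A u_k + b with objective g = c^T u_N, forward sensitivity needs one tangent sweep per parameter, while a single backward adjoint sweep yields the entire gradient dg/db at once.

```python
import numpy as np

# Toy discrete-adjoint sketch. Model: u_{k+1} = A u_k + b,
# objective g(b) = c^T u_N; we want the gradient dg/db.

n, N = 5, 20
rng = np.random.default_rng(0)
A = 0.9 * rng.standard_normal((n, n)) / np.sqrt(n)
b = rng.standard_normal(n)
c = rng.standard_normal(n)
u0 = rng.standard_normal(n)

def objective(b_vec):
    u = u0.copy()
    for _ in range(N):
        u = A @ u + b_vec
    return c @ u

# Forward sensitivity would propagate one tangent system per parameter
# (n sweeps); the discrete adjoint needs a single backward sweep whose
# cost does not depend on the number of parameters.
lam = c.copy()
grad = np.zeros(n)
for _ in range(N):
    grad += lam       # accumulate dg/db contribution at each step
    lam = A.T @ lam   # adjoint recurrence: lam_k = A^T lam_{k+1}
```

    The backward sweep costs one matrix-vector product per step regardless of how many parameters b contains, which is the property the paper exploits.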

  4. Discrete Adjoint Sensitivity Analysis of Hybrid Dynamical Systems With Switching [Discrete Adjoint Sensitivity Analysis of Hybrid Dynamical Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Hong; Abhyankar, Shrirang; Constantinescu, Emil

    Sensitivity analysis is an important tool for describing power system dynamic behavior in response to parameter variations. It is a central component in preventive and corrective control applications. The existing approaches for sensitivity calculations, namely, finite-difference and forward sensitivity analysis, require a computational effort that increases linearly with the number of sensitivity parameters. In this paper, we investigate, implement, and test a discrete adjoint sensitivity approach whose computational effort is effectively independent of the number of sensitivity parameters. The proposed approach is highly efficient for calculating sensitivities of larger systems and is consistent, within machine precision, with the function whose sensitivity we are seeking. This is an essential feature for use in optimization applications. Moreover, our approach includes a consistent treatment of systems with switching, such as dc exciters, by deriving and implementing the adjoint jump conditions that arise from state-dependent and time-dependent switchings. The accuracy and the computational efficiency of the proposed approach are demonstrated in comparison with the forward sensitivity analysis approach. In conclusion, this paper focuses primarily on the power system dynamics, but the approach is general and can be applied to hybrid dynamical systems in a broader range of fields.

  5. Continuous Security and Configuration Monitoring of HPC Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia-Lomeli, H. D.; Bertsch, A. D.; Fox, D. M.

    Continuous security and configuration monitoring of information systems has been a time-consuming and laborious task for system administrators at the High Performance Computing (HPC) center. Prior to this project, system administrators had to manually check the settings of thousands of nodes, which required a significant number of hours, rendering the old process ineffective and inefficient. This paper explains the application of Splunk Enterprise, a software agent, and a reporting tool in the development of a user application interface to track and report on critical system updates and security compliance status of HPC clusters. In conjunction with other configuration management systems, the reporting tool is to provide continuous situational awareness to system administrators of the compliance state of information systems. Our approach consisted of the development, testing, and deployment of an agent to collect any arbitrary information across a massively distributed computing center, and organize that information into a human-readable format. Using Splunk Enterprise, this raw data was then gathered into a central repository and indexed for search, analysis, and correlation. Following acquisition and accumulation, the reporting tool generated and presented actionable information by filtering the data according to command line parameters passed at run time. Preliminary data showed results for over six thousand nodes. Further research and expansion of this tool could lead to the development of a series of agents to gather and report critical system parameters. However, in order to make use of the flexibility and resourcefulness of the reporting tool, the agent must conform to specifications set forth in this paper. This project has simplified the way system administrators gather, analyze, and report on the configuration and security state of HPC clusters, maintaining ongoing situational awareness. Rather than querying each cluster independently, compliance checking can be managed from one central location.

  6. Some issues related to simulation of the tracking and communications computer network

    NASA Technical Reports Server (NTRS)

    Lacovara, Robert C.

    1989-01-01

    The Communications Performance and Integration branch of the Tracking and Communications Division has an ongoing involvement in the simulation of its flight hardware for Space Station Freedom. Specifically, the communication process between central processor(s) and orbital replaceable units (ORU's) is simulated with varying degrees of fidelity. The results of investigations into three aspects of this simulation effort are given. The most general area involves the use of computer assisted software engineering (CASE) tools for this particular simulation. The second area of interest is simulation methods for systems of mixed hardware and software. The final area investigated is the application of simulation methods to one of the proposed computer network protocols for space station, specifically IEEE 802.4.

  7. Some issues related to simulation of the tracking and communications computer network

    NASA Astrophysics Data System (ADS)

    Lacovara, Robert C.

    1989-12-01

    The Communications Performance and Integration branch of the Tracking and Communications Division has an ongoing involvement in the simulation of its flight hardware for Space Station Freedom. Specifically, the communication process between central processor(s) and orbital replaceable units (ORU's) is simulated with varying degrees of fidelity. The results of investigations into three aspects of this simulation effort are given. The most general area involves the use of computer assisted software engineering (CASE) tools for this particular simulation. The second area of interest is simulation methods for systems of mixed hardware and software. The final area investigated is the application of simulation methods to one of the proposed computer network protocols for space station, specifically IEEE 802.4.

  8. Urban land use monitoring from computer-implemented processing of airborne multispectral data

    NASA Technical Reports Server (NTRS)

    Todd, W. J.; Mausel, P. W.; Baumgardner, M. F.

    1976-01-01

    Machine processing techniques were applied to multispectral data obtained from airborne scanners at an elevation of 600 meters over central Indianapolis in August, 1972. Computer analysis of these spectral data indicate that roads (two types), roof tops (three types), dense grass (two types), sparse grass (two types), trees, bare soil, and water (two types) can be accurately identified. Using computers, it is possible to determine land uses from analysis of type, size, shape, and spatial associations of earth surface images identified from multispectral data. Land use data developed through machine processing techniques can be programmed to monitor land use changes, simulate land use conditions, and provide impact statistics that are required to analyze stresses placed on spatial systems.

  9. Design of the central region in the Gustaf Werner cyclotron at the Uppsala university

    NASA Astrophysics Data System (ADS)

    Toprek, Dragan; Reistad, Dag; Lundstrom, Bengt; Wessman, Dan

    2002-07-01

    This paper describes the design of the central region in the Gustaf Werner cyclotron for h=1, 2 and 3 modes of acceleration. The electric field distribution in the inflector and in the four acceleration gaps has been numerically calculated from an electric potential map produced by the program RELAX3D. The geometry of the central region has been tested with the computations of orbits carried out by means of the computer code CYCLONE. The optical properties of the spiral inflector and the central region were studied by using the programs CASINO and CYCLONE, respectively.

  10. Extension of filament propagation in water with Bessel-Gaussian beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaya, G.; Sayrac, M.; Boran, Y.

    We experimentally studied intense femtosecond pulse filamentation and propagation in water for Bessel-Gaussian beams with different numbers of radial modal lobes. The transverse modes of the incident Bessel-Gaussian beam were created from a Gaussian beam of a Ti:sapphire laser system by using computer generated hologram techniques. We found that filament propagation length increased with increasing number of lobes under the conditions of the same peak intensity, pulse duration, and the size of the central peak of the incident beam, suggesting that the radial modal lobes may serve as an energy reservoir for the filaments formed by the central intensity peak.

  11. Overview of the LINCS architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fletcher, J.G.; Watson, R.W.

    1982-01-13

    Computing at the Lawrence Livermore National Laboratory (LLNL) has evolved over the past 15 years into a computer network based resource sharing environment. The increasing use of low cost and high performance micro, mini and midi computers and commercially available local networking systems will accelerate this trend. Further, even the large scale computer systems, on which much of the LLNL scientific computing depends, are evolving into multiprocessor systems. It is our belief that the most cost effective use of this environment will depend on the development of application systems structured into cooperating concurrent program modules (processes) distributed appropriately over different nodes of the environment. A node is defined as one or more processors with a local (shared) high speed memory. Given the latter view, the environment can be characterized as consisting of: multiple nodes communicating over noisy channels with arbitrary delays and throughput, heterogeneous base resources and information encodings, no single administration controlling all resources, distributed system state, and no uniform time base. The system design problem is how to turn the heterogeneous base hardware/firmware/software resources of this environment into a coherent set of resources that facilitate development of cost effective, reliable, and human engineered applications. We believe the answer lies in developing a layered, communication oriented distributed system architecture; layered and modular to support ease of understanding, reconfiguration, extensibility, and hiding of implementation or nonessential local details; communication oriented because that is a central feature of the environment. The Livermore Interactive Network Communication System (LINCS) is a hierarchical architecture designed to meet the above needs. While having characteristics in common with other architectures, it differs in several respects.

  12. Emotor control: computations underlying bodily resource allocation, emotions, and confidence

    PubMed Central

    Kepecs, Adam; Mensh, Brett D.

    2015-01-01

    Emotional processes are central to behavior, yet their deeply subjective nature has been a challenge for neuroscientific study as well as for psychiatric diagnosis. Here we explore the relationships between subjective feelings and their underlying brain circuits from a computational perspective. We apply recent insights from systems neuroscience—approaching subjective behavior as the result of mental computations instantiated in the brain—to the study of emotions. We develop the hypothesis that emotions are the product of neural computations whose motor role is to reallocate bodily resources mostly gated by smooth muscles. This “emotor” control system is analogous to the more familiar motor control computations that coordinate skeletal muscle movements. To illustrate this framework, we review recent research on “confidence.” Although familiar as a feeling, confidence is also an objective statistical quantity: an estimate of the probability that a hypothesis is correct. This model-based approach helped reveal the neural basis of decision confidence in mammals and provides a bridge to the subjective feeling of confidence in humans. These results have important implications for psychiatry, since disorders of confidence computations appear to contribute to a number of psychopathologies. More broadly, this computational approach to emotions resonates with the emerging view that psychiatric nosology may be best parameterized in terms of disorders of the cognitive computations underlying complex behavior. PMID:26869840
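
    The abstract's point that confidence is "an estimate of the probability that a hypothesis is correct" can be made concrete with a toy signal-detection model (our own illustration; the Gaussian noise model and signal strength are assumptions, not the authors' model): an observer sees noisy evidence about a binary stimulus, chooses the sign of the evidence, and reports the posterior probability that this choice is correct.

```python
import numpy as np

# Toy model of statistical decision confidence.
rng = np.random.default_rng(1)
mu = 1.0                                  # assumed signal strength
s = rng.choice([-1, 1], size=10_000)      # true stimulus category
x = mu * s + rng.standard_normal(s.size)  # noisy internal evidence

choice = np.where(x >= 0, 1, -1)
# For this Gaussian model, P(choice correct | x) = 1 / (1 + exp(-2*mu*|x|)):
# the normative (Bayesian) confidence in the decision.
confidence = 1.0 / (1.0 + np.exp(-2.0 * mu * np.abs(x)))
correct = (choice == s)
```

    In such a model, confidence is calibrated: trials reported with higher confidence are in fact correct more often, which is what makes it an objective quantity open to neural and behavioral measurement.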

  13. Large-scale neural circuit mapping data analysis accelerated with the graphical processing unit (GPU).

    PubMed

    Shi, Yulin; Veidenbaum, Alexander V; Nicolau, Alex; Xu, Xiangmin

    2015-01-15

    Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post hoc processing and analysis. Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab software with GPU enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that the GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU with up to a 22× speedup, depending on computational tasks. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. To our best knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Together, GPU enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Large scale neural circuit mapping data analysis accelerated with the graphical processing unit (GPU)

    PubMed Central

    Shi, Yulin; Veidenbaum, Alexander V.; Nicolau, Alex; Xu, Xiangmin

    2014-01-01

    Background Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post-hoc processing and analysis. New Method Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab software with GPU enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. Results We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that the GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU with up to a 22x speedup, depending on computational tasks. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. Comparison with Existing Method(s) To our best knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Conclusions Together, GPU enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. PMID:25277633

  15. Rapid Determination of Appropriate Source Models for Tsunami Early Warning using a Depth Dependent Rigidity Curve: Method and Numerical Tests

    NASA Astrophysics Data System (ADS)

    Tanioka, Y.; Miranda, G. J. A.; Gusman, A. R.

    2017-12-01

    Recently, tsunami early warning techniques have been improved using tsunami waveforms observed at ocean-bottom pressure gauges such as the NOAA DART system or the DONET and S-net systems in Japan. However, for early warning of near-field tsunamis, it is essential to determine appropriate source models using seismological analysis before large tsunamis hit the coast, especially for tsunami earthquakes, which generate significantly large tsunamis. In this paper, we develop a technique to determine appropriate source models from which appropriate tsunami inundation along the coast can be numerically computed. The technique is tested for four large earthquakes, the 1992 Nicaragua tsunami earthquake (Mw7.7), the 2001 El Salvador earthquake (Mw7.7), the 2004 El Astillero earthquake (Mw7.0), and the 2012 El Salvador-Nicaragua earthquake (Mw7.3), which occurred off Central America. In this study, fault parameters were estimated from the W-phase inversion; the fault length and width were then determined from scaling relationships. At first, the slip amount was calculated from the seismic moment with a constant rigidity of 3.5 x 10^10 N/m^2. A tsunami numerical simulation was carried out and compared with the observed tsunami. For the 1992 Nicaragua tsunami earthquake, the computed tsunami was much smaller than the observed one. For the 2004 El Astillero earthquake, the computed tsunami was overestimated. To solve this problem, we constructed a depth-dependent rigidity curve, similar to that suggested by Bilek and Lay (1999). The curve, with a central depth estimated by the W-phase inversion, was used to calculate the slip amount of the fault model. Using those new slip amounts, the tsunami numerical simulation was carried out again. The observed tsunami heights, run-up heights, and inundation areas for the 1992 Nicaragua tsunami earthquake were then well explained by the computed ones. The tsunamis from the other three earthquakes were also reasonably well explained by the computed ones. Therefore, our technique using a depth-dependent rigidity curve works to estimate an appropriate fault model which reproduces tsunami heights near the coast in Central America. The technique may also work in other subduction zones once a depth-dependent rigidity curve is found for that particular subduction zone.
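    The slip calculation described above follows from the definition of seismic moment, M0 = mu * L * W * slip. A minimal Python sketch, using the standard Mw-to-M0 relation but an illustrative rigidity curve and assumed fault dimensions (not the paper's actual values), shows why a shallow tsunami earthquake is assigned more slip for the same moment:

```python
# Sketch: fault slip from seismic moment, first with constant rigidity,
# then with a hypothetical depth-dependent rigidity curve (shape loosely
# after Bilek & Lay, 1999; numbers illustrative, not the paper's curve).

def slip_from_moment(m0_nm, rigidity_pa, length_m, width_m):
    """Average slip (m) on a rectangular fault: slip = M0 / (mu * L * W)."""
    return m0_nm / (rigidity_pa * length_m * width_m)

def depth_dependent_rigidity(depth_km):
    """Illustrative curve: low rigidity near the trench, rising linearly
    to the standard 3.5e10 Pa by 30 km depth."""
    mu_deep, mu_shallow = 3.5e10, 1.0e10
    if depth_km >= 30.0:
        return mu_deep
    return mu_shallow + (mu_deep - mu_shallow) * depth_km / 30.0

# Mw 7.7 event: M0 from the standard relation M0 = 10**(1.5*Mw + 9.1) N*m
m0 = 10 ** (1.5 * 7.7 + 9.1)
L, W = 100e3, 40e3  # fault dimensions from scaling relations (assumed)

slip_const = slip_from_moment(m0, 3.5e10, L, W)
slip_shallow = slip_from_moment(m0, depth_dependent_rigidity(10.0), L, W)
# A shallow centroid depth yields a larger slip for the same moment,
# amplifying the computed tsunami.
```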

  16. Multiphasic Health Testing in the Clinic Setting

    PubMed Central

    LaDou, Joseph

    1971-01-01

    The economy of automated multiphasic health testing (AMHT) activities patterned after the high-volume Kaiser program can be realized in low-volume settings. AMHT units have been operated at daily volumes of 20 patients in three separate clinical environments. These programs have displayed economics entirely compatible with cost figures published by the established high-volume centers. This experience, plus the expanding capability of small, general-purpose digital computers (minicomputers), indicates that a group of six or more physicians generating 20 laboratory appraisals per day can economically justify a completely automated multiphasic health testing facility. Such a system would reside in the clinic or hospital where it is used and can be configured to perform analyses such as electrocardiography, generate laboratory reports, and communicate with large computer systems in university medical centers. Experience indicates that the most effective means of implementing these benefits of automation is to make them directly available to the medical community with the physician playing the central role. Economic justification of a dedicated computer through low-volume health testing then allows, as a side benefit, automation of administrative as well as other diagnostic activities—for example, patient billing, computer-aided diagnosis, and computer-aided therapeutics. PMID:4935771

  17. On computing stress in polymer systems involving multi-body potentials from molecular dynamics simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, Yao, E-mail: fu5@mailbox.sc.edu, E-mail: jhsong@cec.sc.edu; Song, Jeong-Hoon, E-mail: fu5@mailbox.sc.edu, E-mail: jhsong@cec.sc.edu

    2014-08-07

    The Hardy stress definition has been restricted to pair potentials and embedded-atom method potentials due to the basic assumptions in the derivation of a symmetric microscopic stress tensor. The force decomposition required in the Hardy stress expression becomes obscure for multi-body potentials. In this work, we demonstrate the invariance of the Hardy stress expression for a polymer system modeled with multi-body interatomic potentials, including up to four-atom interactions, by applying central force decomposition of the atomic force. The balance of momentum has been demonstrated to be valid theoretically and tested under various numerical simulation conditions. The validity of momentum conservation justifies the extension of the Hardy stress expression to multi-body potential systems. The computed Hardy stress has been observed to converge to the virial stress of the system with increasing spatial averaging volume. This work provides a feasible and reliable linkage between the atomistic and continuum scales for multi-body potential systems.
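    The virial stress that the Hardy estimate converges to can be sketched for the simplest (pair-potential) case; the Lennard-Jones parameters, geometry, and averaging volume below are illustrative, and the kinetic term is omitted:

```python
import numpy as np

# Sketch: virial stress tensor for a pair (Lennard-Jones) potential.
# Symmetry of the result is the property that central force decomposition
# preserves when the idea is extended to multi-body potentials.

def lj_force(r_vec, eps=1.0, sigma=1.0):
    """Force on atom i due to atom j, with r_vec = r_i - r_j."""
    r = np.linalg.norm(r_vec)
    mag = 24 * eps * (2 * (sigma / r) ** 12 - (sigma / r) ** 6) / r
    return mag * r_vec / r

def virial_stress(positions, volume):
    """sigma = -(1/V) * 0.5 * sum_{i != j} r_ij (x) f_ij (kinetic term omitted)."""
    s = np.zeros((3, 3))
    n = len(positions)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r_ij = positions[i] - positions[j]
            s += 0.5 * np.outer(r_ij, lj_force(r_ij))
    return -s / volume

pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])  # two atoms, attractive range
stress = virial_stress(pos, volume=10.0)
# stress is symmetric, since every pair force acts along the bond vector.
```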

  18. Hybrid parallel computing architecture for multiview phase shifting

    NASA Astrophysics Data System (ADS)

    Zhong, Kai; Li, Zhongwei; Zhou, Xiaohui; Shi, Yusheng; Wang, Congjun

    2014-11-01

    The multiview phase-shifting method shows its powerful capability in achieving high-resolution three-dimensional (3-D) shape measurement. Unfortunately, this ability comes at very high computation cost, and 3-D computations have had to be processed offline. To realize real-time 3-D shape measurement, a hybrid parallel computing architecture is proposed for multiview phase shifting. In this architecture, the central processing unit cooperates with the graphics processing unit (GPU) to achieve hybrid parallel computing. The high-computation-cost procedures, including lens distortion rectification, phase computation, correspondence, and 3-D reconstruction, are implemented on the GPU, and a three-layer kernel function model is designed to simultaneously realize coarse-grained and fine-grained parallel computing. Experimental results verify that the developed system can perform 50 fps (frames per second) real-time 3-D measurement with 260 K 3-D points per frame. A speedup of up to 180 times is obtained for the proposed technique using an NVIDIA GT560Ti graphics card rather than a sequential C implementation on a 3.4 GHz Intel Core i7 3770.
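    The per-pixel phase computation that such systems offload to the GPU can be illustrated with the standard four-step phase-shifting formula; NumPy's vectorized arrays stand in here for the fine-grained GPU parallelism, and the fringe images are synthetic:

```python
import numpy as np

# Sketch: four-step phase shifting. Given fringe images I_k = A + B*cos(phi + k*pi/2),
# the wrapped phase is recovered per pixel as phi = atan2(I3 - I1, I0 - I2).

def wrapped_phase(i0, i1, i2, i3):
    """Wrapped phase from four fringe images shifted by pi/2 each."""
    return np.arctan2(i3 - i1, i0 - i2)

# Synthetic fringes over a 4x4 image with a known phase ramp in (-pi, pi).
true_phase = np.linspace(-1.0, 1.0, 16).reshape(4, 4)
images = [1.0 + 0.5 * np.cos(true_phase + k * np.pi / 2) for k in range(4)]
phi = wrapped_phase(*images)
# phi recovers true_phase at every pixel, all pixels computed in parallel.
```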

  19. Towards computational materials design from first principles using alchemical changes and derivatives.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    von Lilienfeld-Toal, Otto Anatole

    2010-11-01

    The design of new materials with specific physical, chemical, or biological properties is a central goal of much research in the materials and medicinal sciences. Except for the simplest and most restricted cases, brute-force computational screening of all possible compounds for interesting properties is beyond any current capacity due to the combinatorial nature of chemical compound space (the set of stoichiometries and configurations). Consequently, when it comes to computationally optimizing more complex systems, reliable optimization algorithms must not only trade off sufficient accuracy and computational speed of the models involved, they must also aim for rapid convergence in terms of the number of compounds 'visited'. I will give an overview of recent progress on alchemical first-principles paths and gradients in compound space that appear to be promising ingredients for more efficient property optimizations. Specifically, based on molecular grand canonical density functional theory, an approach will be presented for the construction of high-dimensional yet analytical property gradients in chemical compound space. Thereafter, applications to molecular HOMO eigenvalues, catalyst design, and other problems and systems shall be discussed.

  20. Adopting a corporate perspective on databases. Improving support for research and decision making.

    PubMed

    Meistrell, M; Schlehuber, C

    1996-03-01

    The Veterans Health Administration (VHA) is at the forefront of designing and managing health care information systems that accommodate the needs of clinicians, researchers, and administrators at all levels. Rather than using one single-site, centralized corporate database, VHA has constructed several large databases with different configurations to meet the needs of users with different perspectives. The largest VHA database is the Decentralized Hospital Computer Program (DHCP), a multisite, distributed data system that uses decoupled hospital databases. The centralization of DHCP policy has promoted data coherence, whereas the decentralization of DHCP management has permitted system development to be done with maximum relevance to the users' local practices. A more recently developed VHA data system, the Event Driven Reporting system (EDR), uses multiple, highly coupled databases to provide workload data at facility, regional, and national levels. The EDR automatically posts a subset of DHCP data to local and national VHA management. The development of the EDR illustrates how adoption of a corporate perspective can offer significant database improvements at reasonable cost and with modest impact on the legacy system.

  1. A modular architecture for transparent computation in recurrent neural networks.

    PubMed

    Carmantini, Giovanni S; Beim Graben, Peter; Desroches, Mathieu; Rodrigues, Serafim

    2017-01-01

    Computation is classically studied in terms of automata, formal languages and algorithms; yet, the relation between neural dynamics and symbolic representations and operations is still unclear in traditional eliminative connectionism. Therefore, we suggest a unique perspective on this central issue, to which we would like to refer as transparent connectionism, by proposing accounts of how symbolic computation can be implemented in neural substrates. In this study we first introduce a new model of dynamics on a symbolic space, the versatile shift, showing that it supports the real-time simulation of a range of automata. We then show that the Gödelization of versatile shifts defines nonlinear dynamical automata, dynamical systems evolving on a vectorial space. Finally, we present a mapping between nonlinear dynamical automata and recurrent artificial neural networks. The mapping defines an architecture characterized by its granular modularity, where data, symbolic operations and their control are not only distinguishable in activation space, but also spatially localizable in the network itself, while maintaining a distributed encoding of symbolic representations. The resulting networks simulate automata in real-time and are programmed directly, in the absence of network training. To discuss the unique characteristics of the architecture and their consequences, we present two examples: (i) the design of a Central Pattern Generator from a finite-state locomotive controller, and (ii) the creation of a network simulating a system of interactive automata that supports the parsing of garden-path sentences as investigated in psycholinguistics experiments. Copyright © 2016 Elsevier Ltd. All rights reserved.
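    The Gödelization step, which turns a symbolic sequence into a point in a vectorial space, can be sketched directly; the alphabet and sequence below are illustrative:

```python
# Sketch: Goedel encoding of a symbol sequence into a number in [0, 1),
# the step that lets a symbolic shift be simulated by a dynamical system
# on a vector space. Alphabet and sequence are illustrative.

def godelize(sequence, alphabet):
    """Map s_1 s_2 ... to sum_k index(s_k) * b**-k, with b = |alphabet|."""
    b = len(alphabet)
    index = {s: i for i, s in enumerate(alphabet)}
    return sum(index[s] * b ** -(k + 1) for k, s in enumerate(sequence))

x = godelize("abba", "ab")  # binary expansion 0.0110 = 0.375
# Shifting the symbol sequence left corresponds to x -> (b * x) mod 1,
# so symbolic operations become arithmetic on the encoded state.
```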

  2. Ocean-Atmosphere Coupled Model Simulations of Precipitation in the Central Andes

    NASA Technical Reports Server (NTRS)

    Nicholls, Stephen D.; Mohr, Karen I.

    2015-01-01

    The meridional extent and complex orography of the South American continent contribute to a wide diversity of climate regimes, ranging from hyper-arid deserts to tropical rainforests to sub-polar highland regions. South American meteorology and climate are further complicated by ENSO, a powerful coupled ocean-atmosphere phenomenon. Modelling studies in this region have typically resorted to either atmospheric mesoscale models or atmosphere-ocean coupled global climate models. The former offer high spatial resolution and explicit simulation of precipitation but typically lack an interactive ocean, whereas the latter offer ocean-atmosphere coupling but lack adequate spatial and temporal resolution to resolve the complex orography and explicitly simulate precipitation. Explicit simulation of precipitation is vital in the Central Andes, where rainfall rates are light (0.5-5 mm hr-1), there is strong seasonality, and most precipitation is associated with weak mesoscale-organized convection. Recent increases in both computational power and model development have led to the advent of coupled ocean-atmosphere mesoscale models for both weather and climate study applications. These modelling systems, while computationally expensive, include two-way ocean-atmosphere coupling, high resolution, and explicit simulation of precipitation. In this study, we use the Coupled Ocean-Atmosphere-Wave-Sediment Transport (COAWST) model, a fully coupled mesoscale atmosphere-ocean modeling system. Previous work has shown COAWST to reasonably simulate the entire 2003-2004 wet season (Dec-Feb), as validated against both satellite and model analysis data, when ECMWF interim analysis data were used for boundary conditions on a 27-9-km grid configuration (outer grid extent: 60.4S to 17.7N and 118.6W to 17.4W).

  3. Predictive biophysical modeling and understanding of the dynamics of mRNA translation and its evolution

    PubMed Central

    Zur, Hadas; Tuller, Tamir

    2016-01-01

    mRNA translation is the fundamental process of decoding the information encoded in mRNA molecules by the ribosome for the synthesis of proteins. The centrality of this process in various biomedical disciplines such as cell biology, evolution and biotechnology, encouraged the development of dozens of mathematical and computational models of translation in recent years. These models aimed at capturing various biophysical aspects of the process. The objective of this review is to survey these models, focusing on those based and/or validated on real large-scale genomic data. We consider aspects such as the complexity of the models, the biophysical aspects they regard and the predictions they may provide. Furthermore, we survey the central systems biology discoveries reported on their basis. This review demonstrates the fundamental advantages of employing computational biophysical translation models in general, and discusses the relative advantages of the different approaches and the challenges in the field. PMID:27591251

  4. Epstein-Barr virus-associated primary central nervous system lymphoma in a child with the acquired immunodeficiency syndrome. A case report and review of the literature.

    PubMed

    Rodriguez, M M; Delgado, P I; Petito, C K

    1997-12-01

    A 34-month-old black boy who had contracted acquired immunodeficiency syndrome from his mother presented with fever, vomiting, and cough. He was cachectic, hypertonic, and developmentally delayed. A brain computed tomography scan revealed masses in the left frontal horn, subependymal, and periventricular regions; secondary edema; and hydrocephalus. The differential diagnosis was cerebral lymphoma versus toxoplasmosis. The patient had disseminated Mycobacterium avium-intracellulare infection, lymphoid interstitial pneumonitis, as well as Pseudomonas and Klebsiella pneumonia. He died of respiratory insufficiency 53 days after admission. The autopsy confirmed a primary cerebral B-cell lymphoma, large cell type, which was positive for Epstein-Barr virus, latent phase, by in situ hybridization. Primary central nervous system lymphomas are rare in children, in contrast to adults. To our knowledge, only five well-documented cases of primary cerebral lymphomas in infants and children with acquired immunodeficiency syndrome have been reported previously. The current study shows that these childhood lymphomas are associated with and presumably caused by Epstein-Barr virus and thus have a pathogenesis similar to that of primary central nervous system lymphomas in adults.

  5. Geoid modeling in Mexico and the collaboration with Central America and the Caribbean.

    NASA Astrophysics Data System (ADS)

    Avalos, D.; Gomez, R.

    2012-12-01

    The model of geoidal heights for Mexico, named GGM10, is presented as a geodetic tool to support vertical positioning in the context of regional height system unification. It is a purely gravimetric solution computed by the Stokes-Helmert technique at a resolution of 2.5 arc minutes. This product from the Instituto Nacional de Estadistica y Geografia (INEGI) is released together with a series of 10 gravimetric models which add to the improvements in the description of the gravity field. In recent years, INEGI joined the initiative of the U.S. National Geodetic Survey and Canada's Geodetic Survey Division to promote regional height system unification. In an effort to further improve the compatibility among national geoid models in the region, INEGI has begun to champion a network of specialists that includes national representatives from Central America and the Caribbean. Through the opening of opportunities for training and more direct access to international agreements and discussions, the tropical region is gaining participation. A significantly increased number of countries is now pushing for a future North and Central American geoid-based vertical datum in support of height system unification. (Figure: geoidal heights in Mexico, mapped from the model GGM10.)

  6. Feasibility of Decentralized Linear-Quadratic-Gaussian Control of Autonomous Distributed Spacecraft

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell

    1999-01-01

    A distributed satellite formation, modeled as an arbitrary number of fully connected nodes in a network, could be controlled using a decentralized controller framework that distributes operations in parallel over the network. For such problems, a solution that minimizes data transmission requirements, in the context of linear-quadratic-Gaussian (LQG) control theory, was given by Speyer. This approach is advantageous because it is non-hierarchical, detected failures gracefully degrade system performance, fewer local computations are required than for a centralized controller, and it is optimal with respect to the standard LQG cost function. Disadvantages of the approach are the need for a fully connected communications network, the total operations performed over all the nodes are greater than for a centralized controller, and the approach is formulated for linear time-invariant systems. To investigate the feasibility of the decentralized approach to satellite formation flying, a simple centralized LQG design for a spacecraft orbit control problem is adapted to the decentralized framework. The simple design uses a fixed reference trajectory (an equatorial, Keplerian, circular orbit), and by appropriate choice of coordinates and measurements is formulated as a linear time-invariant system.
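    The centralized LQG design adapted here rests on a steady-state Riccati solution for a linear time-invariant model. A minimal sketch of the regulator half of that computation, using a double integrator as a stand-in for the linearized orbit dynamics and illustrative cost weights:

```python
import numpy as np

# Sketch: steady-state LQR gain via fixed-point iteration of the discrete
# Riccati equation. The double-integrator (position/velocity) model and
# the Q, R weights are illustrative, not the paper's orbit model.

dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
B = np.array([[0.5 * dt**2], [dt]])     # acceleration input
Q = np.eye(2)                           # state cost
R = np.array([[1.0]])                   # control cost

P = np.eye(2)
for _ in range(500):
    # P <- A'PA - A'PB (R + B'PB)^-1 B'PA + Q, written via the gain K
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = A.T @ P @ (A - B @ K) + Q
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# u = -K x stabilizes the plant: closed-loop eigenvalues of (A - B K)
# lie inside the unit circle.
eigs = np.linalg.eigvals(A - B @ K)
```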

  7. NASTRAN data generation of helicopter fuselages using interactive graphics. [preprocessor system for finite element analysis using IBM computer

    NASA Technical Reports Server (NTRS)

    Sainsbury-Carter, J. B.; Conaway, J. H.

    1973-01-01

    The development and implementation of a preprocessor system for the finite element analysis of helicopter fuselages is described. The system utilizes interactive graphics for the generation, display, and editing of NASTRAN data for fuselage models. It is operated from an IBM 2250 cathode ray tube (CRT) console driven by an IBM 370/145 computer. Real-time interaction plus automatic data generation reduces the nominal 6- to 10-week time for manual generation and checking of data to a few days. The interactive graphics system consists of a series of satellite programs operated from a central NASTRAN Systems Monitor. Fuselage structural models, including the outer shell and internal structure, may be rapidly generated. All numbering systems are automatically assigned. Hard-copy plots of the model labeled with GRID or element IDs are also available. General-purpose programs for displaying and editing NASTRAN data are included in the system. Utilization of the NASTRAN interactive graphics system has made possible the multiple finite element analysis of complex helicopter fuselage structures within design schedules.

  8. Distributed solar radiation fast dynamic measurement for PV cells

    NASA Astrophysics Data System (ADS)

    Wan, Xuefen; Yang, Yi; Cui, Jian; Du, Xingjing; Zheng, Tao; Sardar, Muhammad Sohail

    2017-10-01

    To study the operating characteristics of PV cells, attention must be given to the dynamic behavior of solar radiation. The dynamic behaviors of annual, monthly, daily, and hourly averages of solar radiation have been studied in detail, but faster dynamic behaviors of solar radiation need more research. Random fluctuations of solar radiation on minute-long or second-long scales, which produce alternating radiation and frequently cool down or warm up PV cells, decrease conversion efficiency. Fast dynamic processes of solar radiation are mainly driven by the stochastic movement of clouds; even under clear-sky conditions, solar irradiance shows a certain degree of fast variation. To evaluate the operating characteristics of PV cells under fast dynamic irradiation, a solar radiation measuring array (SRMA) based on large-active-area photodiodes, LoRa spread-spectrum communication, and a nanoWatt MCU is proposed. This crossed-photodiode structure tracks the fast stochastic movement of clouds. To compensate for the response time of the pyranometer and reduce system cost, terminal nodes with low-cost, fast-response, large-active-area photodiodes are placed beside the positions of the tested PV cells. A central node, consisting of a pyranometer, a large-active-area photodiode, a wind detector, and a host computer, is placed at the center of the network topology to scale the temporal envelope of solar irradiation and obtain calibration information between the pyranometer and the large-active-area photodiodes. In our SRMA system, the terminal nodes are designed around Microchip's nanoWatt XLP PIC16F1947. The FDS-100 is adopted as the large-active-area photodiode in the terminal nodes and the host computer. The output current and voltage of each PV cell are monitored by I/V measurement. AS62-T27/SX1278 LoRa communication modules are used for communication between the terminal nodes and the host computer. Because the LoRa LPWAN (Low Power Wide Area Network) specification provides seamless interoperability among smart things without the need for complex local installations, configuring our SRMA system is very easy. LoRa also gives the SRMA a means to overcome the short communication distances and the weather-induced signal propagation decline seen in ZigBee and WiFi. The host computer in the SRMA system uses the low-power single-board PC EMB-3870 produced by NORCO. A wind direction sensor (SM5386B) and a wind-force sensor (SM5387B) are attached to the host computer through an RS-485 bus for wind reference data collection, and a Davis 6450 solar radiation sensor, a precision instrument that detects radiation at wavelengths of 300 to 1100 nanometers, allows the host computer to follow real-time solar radiation. A LoRa polling scheme is adopted for the communication between the host computer and the terminal nodes. An experimental SRMA was established and tested in Ganyu, Jiangsu Province, from May to August 2016. In the test, the distances between the nodes and the host computer were between 100 m and 1900 m. In operation, the SRMA system showed high reliability: the terminal nodes followed the instructions from the host computer and collected solar radiation data from the distributed PV cells effectively, and the host computer managed the SRMA and acquired the reference parameters well. Communications between the host computer and the terminal nodes were almost unaffected by the weather. In conclusion, the testing results show that the SRMA can be a capable method for fast dynamic measurement of solar radiation and related PV cell operating characteristics.

  9. Research into display sharing techniques for distributed computing environments

    NASA Technical Reports Server (NTRS)

    Hugg, Steven B.; Fitzgerald, Paul F., Jr.; Rosson, Nina Y.; Johns, Stephen R.

    1990-01-01

    The X-based Display Sharing solution for distributed computing environments is described. The Display Sharing prototype includes the base functionality for telecast and display copy requirements. Since the prototype implementation is modular and the system design provides flexibility for Mission Control Center Upgrade (MCCU) operational considerations, the prototype implementation can be the baseline for a production Display Sharing implementation. To facilitate the process, the following discussions are presented: theory of operation; system architecture; using the prototype; software description; research tools; prototype evaluation; and outstanding issues. The prototype is based on the concept of a dedicated central host performing the majority of the Display Sharing processing, allowing minimal impact on each individual workstation. Each workstation participating in Display Sharing hosts programs that facilitate the user's access to Display Sharing through the host machine.

  10. The C23A system, an example of quantitative control of plant growth associated with a data base

    NASA Technical Reports Server (NTRS)

    Andre, M.; Daguenet, A.; Massimino, D.; Gerbaud, A.

    1986-01-01

    The architecture of the C23A (Chambres de Culture Automatique en Atmosphère Artificielle) system for the controlled study of plant physiology is described. Modular plant growth chambers and associated instruments (IR CO2 analyser, mass spectrometer, and chemical analyser); a network of frontal processors controlling this apparatus; a central computer for the periodic control and multiplexed work of the processors; and a network of terminal computers able to query the data base for data processing and modeling are discussed. Examples of present results are given: a growth-curve analysis of the CO2 and O2 gas exchanges of shoots and roots, and the daily evolution of algal photosynthesis and of the pools of dissolved CO2 in sea water.

  11. Atmospheric numerical modeling resource enhancement and model convective parameterization/scale interaction studies

    NASA Technical Reports Server (NTRS)

    Cushman, Paula P.

    1993-01-01

    Research will be undertaken in this contract in the area of Modeling Resource and Facilities Enhancement to include computer, technical and educational support to NASA investigators to facilitate model implementation, execution and analysis of output; to provide facilities linking USRA and the NASA/EADS Computer System as well as resident work stations in ESAD; and to provide a centralized location for documentation, archival and dissemination of modeling information pertaining to NASA's program. Additional research will be undertaken in the area of Numerical Model Scale Interaction/Convective Parameterization Studies to include implementation of the comparison of cloud and rain systems and convective-scale processes between the model simulations and what was observed; and to incorporate the findings of these and related research findings in at least two refereed journal articles.

  12. I/O-Efficient Scientific Computation Using TPIE

    NASA Technical Reports Server (NTRS)

    Vengroff, Darren Erik; Vitter, Jeffrey Scott

    1996-01-01

    In recent years, input/output (I/O)-efficient algorithms for a wide variety of problems have appeared in the literature. However, systems specifically designed to assist programmers in implementing such algorithms have remained scarce. TPIE is a system designed to support I/O-efficient paradigms for problems from a variety of domains, including computational geometry, graph algorithms, and scientific computation. The TPIE interface frees programmers from having to deal not only with explicit read and write calls, but also the complex memory management that must be performed for I/O-efficient computation. In this paper we discuss applications of TPIE to problems in scientific computation. We discuss algorithmic issues underlying the design and implementation of the relevant components of TPIE and present performance results of programs written to solve a series of benchmark problems using our current TPIE prototype. Some of the benchmarks we present are based on the NAS parallel benchmarks while others are of our own creation. We demonstrate that the central processing unit (CPU) overhead required to manage I/O is small and that even with just a single disk, the I/O overhead of I/O-efficient computation ranges from negligible to the same order of magnitude as CPU time. We conjecture that if we use a number of disks in parallel this overhead can be all but eliminated.
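    The I/O-efficient paradigm TPIE supports can be sketched with the canonical example of external merge sort: sort runs that fit the memory budget, then perform a k-way merge. TPIE itself is a C++ system; the Python below is only a stand-in for the pattern, with a deliberately tiny "memory" budget:

```python
import heapq
import os
import tempfile

# Sketch: external merge sort. Real I/O-efficient systems stream blocks
# from disk; here each run is written to a temporary file and read back
# whole, which keeps the sketch short.

def external_sort(values, memory_budget=4):
    run_files = []
    for start in range(0, len(values), memory_budget):
        run = sorted(values[start:start + memory_budget])  # fits in "RAM"
        f = tempfile.NamedTemporaryFile(mode="w+", delete=False)
        f.write("\n".join(map(str, run)))
        f.close()
        run_files.append(f.name)
    runs = []
    for name in run_files:
        with open(name) as f:
            runs.append([int(line) for line in f])
        os.remove(name)
    return list(heapq.merge(*runs))  # k-way merge of the sorted runs

data = [9, 1, 7, 3, 8, 2, 6, 4, 5, 0]
assert external_sort(data) == sorted(data)
```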

  13. Massively Parallel Signal Processing using the Graphics Processing Unit for Real-Time Brain-Computer Interface Feature Extraction.

    PubMed

    Wilson, J Adam; Williams, Justin C

    2009-01-01

    The clock speeds of modern computer processors have nearly plateaued in the past 5 years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card [graphics processing unit (GPU)] was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a central processing unit-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels of 250 ms in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.
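    The first step of the chain, a spatial filter expressed as one matrix-matrix multiply, is exactly the kind of operation that maps well to a GPU. A NumPy sketch using a common-average-reference filter (an assumed choice; the study does not specify the filter matrix) stands in for the CUDA kernels:

```python
import numpy as np

# Sketch: spatial filtering of multichannel neural data as a single
# matrix-matrix multiply. NumPy arrays stand in for GPU parallelism;
# channel count and sample count are illustrative.

rng = np.random.default_rng(0)
n_ch, n_samp = 8, 250
signals = rng.standard_normal((n_ch, n_samp))  # channels x samples

# Common average reference (CAR): subtract the mean of all channels
# from each channel, expressed as a filter matrix.
car = np.eye(n_ch) - np.full((n_ch, n_ch), 1.0 / n_ch)
filtered = car @ signals  # the matrix-matrix multiply offloaded to the GPU

# After CAR, the mean across channels is zero at every sample.
col_means = filtered.mean(axis=0)
```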

  14. An intermediate level of abstraction for computational systems chemistry.

    PubMed

    Andersen, Jakob L; Flamm, Christoph; Merkle, Daniel; Stadler, Peter F

    2017-12-28

    Computational techniques are required for narrowing down the vast space of possibilities to plausible prebiotic scenarios, because precise information on the molecular composition, the dominant reaction chemistry and the conditions for that era are scarce. The exploration of large chemical reaction networks is a central aspect in this endeavour. While quantum chemical methods can accurately predict the structures and reactivities of small molecules, they are not efficient enough to cope with large-scale reaction systems. The formalization of chemical reactions as graph grammars provides a generative system, well grounded in category theory, at the right level of abstraction for the analysis of large and complex reaction networks. An extension of the basic formalism into the realm of integer hyperflows allows for the identification of complex reaction patterns, such as autocatalysis, in large reaction networks using optimization techniques. This article is part of the themed issue 'Reconceptualizing the origins of life'. © 2017 The Author(s).

  15. Characterization and Developmental History of Problem Solving Methods in Medicine

    PubMed Central

    Harbort, Robert A.

    1980-01-01

    The central thesis of this paper is the importance of the framework in which information is structured. It is technically important in the design of systems; it is also important in guaranteeing that systems are usable by clinicians. Progress in medical computing depends on our ability to develop a more quantitative understanding of the role of context in our choice of problem solving techniques. This in turn will help us to design more flexible and responsive computer systems. The paper contains an overview of some models of knowledge and problem solving methods, a characterization of modern diagnostic techniques, and a discussion of skill development in medical practice. Diagnostic techniques are examined in terms of how they are taught, what problem solving methods they use, and how they fit together into an overall theory of interpretation of the medical status of a patient.

  16. Dichotomy in the definition of prescriptive information suggests both prescribed data and prescribed algorithms: biosemiotics applications in genomic systems.

    PubMed

    D'Onofrio, David J; Abel, David L; Johnson, Donald E

    2012-03-14

    The fields of molecular biology and computer science have cooperated over recent years to create a synergy between the cybernetic and biosemiotic relationship found in cellular genomics to that of information and language found in computational systems. Biological information frequently manifests its "meaning" through instruction or actual production of formal bio-function. Such information is called prescriptive information (PI). PI programs organize and execute a prescribed set of choices. Closer examination of this term in cellular systems has led to a dichotomy in its definition suggesting both prescribed data and prescribed algorithms are constituents of PI. This paper looks at this dichotomy as expressed in both the genetic code and in the central dogma of protein synthesis. An example of a genetic algorithm is modeled after the ribosome, and an examination of the protein synthesis process is used to differentiate PI data from PI algorithms.
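    The data/algorithm dichotomy can be illustrated with the genetic code itself: the codon table is PI-as-data, while the loop that reads the mRNA three bases at a time (as the ribosome does) is PI-as-algorithm. A toy sketch using a small fragment of the standard code:

```python
# Sketch: prescribed data vs prescribed algorithm. The table is a tiny
# fragment of the standard genetic code; the translate() loop models the
# ribosome reading codons 5' to 3' until a stop codon.

CODON_TABLE = {  # prescribed data
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP",
}

def translate(mrna):
    """Prescribed algorithm: read the mRNA codon by codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE[mrna[i:i + 3]]
        if residue == "STOP":
            break
        peptide.append(residue)
    return peptide

assert translate("AUGUUUGGCUAA") == ["Met", "Phe", "Gly"]
```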

  17. Self-* properties through gossiping.

    PubMed

    Babaoglu, Ozalp; Jelasity, Márk

    2008-10-28

    As computer systems have become more complex, numerous competing approaches have been proposed for these systems to self-configure, self-manage, self-repair, etc. such that human intervention in their operation can be minimized. In ubiquitous systems, this has always been a central issue as well. In this paper, we overview techniques to implement self-* properties in large-scale, decentralized networks through bio-inspired techniques in general, and gossip-based algorithms in particular. We believe that gossip-based algorithms could be an important inspiration for solving problems in ubiquitous computing as well. As an example, we outline a novel approach to arrange large numbers of mobile agents (e.g. vehicles, rescue teams carrying mobile devices) into different formations in a totally decentralized manner. The approach is inspired by the biological mechanism of cell sorting via differential adhesion, as well as by our earlier work in self-organizing peer-to-peer overlay networks.
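    As a concrete flavor of the gossip-based algorithms the abstract surveys, the sketch below (our illustration, not the authors' code) implements push-pull gossip averaging: every node repeatedly averages its local value with a randomly chosen peer's, and all estimates converge to the global mean with no central coordinator — a canonical building block for decentralized self-* systems.

    ```python
    import random

    def gossip_average(values, rounds=50, seed=0):
        """Push-pull gossip averaging over a fully connected network.

        Each round, every node picks a random peer and both adopt the
        average of their two values; the global sum is preserved, so
        all estimates converge toward the true mean."""
        rng = random.Random(seed)
        values = list(values)
        n = len(values)
        for _ in range(rounds):
            for i in range(n):
                j = rng.randrange(n)          # pick a random peer
                mean = (values[i] + values[j]) / 2
                values[i] = values[j] = mean  # push-pull exchange
        return values

    estimates = gossip_average([0, 10, 20, 30])
    # all estimates converge toward the global mean, 15.0
    ```

    Because each exchange replaces two values with their average, the sum is invariant and the variance across nodes contracts every round — the self-stabilizing behavior that makes gossip attractive for large, decentralized, and ubiquitous settings.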

  18. Economics of Computing: The Case of Centralized Network File Servers.

    ERIC Educational Resources Information Center

    Solomon, Martin B.

    1994-01-01

    Discusses computer networking and the cost-effectiveness of decentralization, including local area networks. Describes a planned experiment at the University of South Carolina with a centralized approach to the operation and management of file servers, intended to realize cost savings and avoid staffing problems. (Contains four…

  19. 51. VIEW OF LORAL ADS 100A COMPUTERS LOCATED CENTRALLY ON ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    51. VIEW OF LORAL ADS 100A COMPUTERS LOCATED CENTRALLY ON NORTH WALL OF TELEMETRY ROOM (ROOM 106). SLC-3W CONTROL ROOM IS VISIBLE IN BACKGROUND THROUGH WINDOW IN NORTH WALL. - Vandenberg Air Force Base, Space Launch Complex 3, Launch Operations Building, Napa & Alden Roads, Lompoc, Santa Barbara County, CA

  20. A Computer Program for Training Eccentric Reading in Persons with Central Scotoma

    ERIC Educational Resources Information Center

    Kasten, Erich; Haschke, Peggy; Meinhold, Ulrike; Oertel-Verweyen, Petra

    2010-01-01

    This article explores the effectiveness of a computer program--Xcentric viewing--for training eccentric reading in persons with central scotoma. The authors conducted a small study to investigate whether this program increases the reading capacities of individuals with age-related macular degeneration (AMD). Instead of a control group, they…
