Science.gov

Sample records for networked linux systems

  1. Network of networks in Linux operating system

    NASA Astrophysics Data System (ADS)

    Wang, Haoqin; Chen, Zhen; Xiao, Guanping; Zheng, Zheng

    2016-04-01

    An operating system is one of the most complex man-made systems. In this paper, we analyze the Linux Operating System (LOS) as a complex network by modeling functions as nodes and function calls as edges. We find that for the LOS network, and for the modularized components within it, the out-degree follows an exponential distribution while the in-degree follows a power-law distribution. To better understand the underlying design principles of the LOS, we explore the coupling correlations of its components from the aspects of topology and function. The results show that the device-driver component has a strong manifestation in topology but a weak one in function, whereas the process-management component shows the opposite. Moreover, to investigate the impact of system failures on these networks, we compare networks traced from the LOS in normal and failure states. This leads to the conclusion that a failure changes the function calls that would be executed in the normal state and, at the same time, introduces new function calls.
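    The paper's modeling step, functions as nodes and calls as directed edges, can be illustrated with a toy call graph. The function names below loosely echo kernel read/write paths, but the graph itself is invented for illustration:

```python
import collections

# Toy call graph: each function (node) lists the functions it calls (out-edges),
# mirroring the paper's model of functions as nodes and calls as directed edges.
CALL_GRAPH = {
    "sys_read":   ["vfs_read"],
    "sys_write":  ["vfs_write"],
    "vfs_read":   ["rw_verify_area", "do_sync_read"],
    "vfs_write":  ["rw_verify_area", "do_sync_write"],
    "rw_verify_area": [],
    "do_sync_read":  ["wait_on_sync_kiocb"],
    "do_sync_write": ["wait_on_sync_kiocb"],
    "wait_on_sync_kiocb": [],
}

def degree_distributions(graph):
    """Return (out_degree, in_degree) maps for a directed call graph."""
    out_deg = {f: len(callees) for f, callees in graph.items()}
    in_deg = collections.Counter()
    for f in graph:
        in_deg[f] = 0          # ensure uncalled functions appear with in-degree 0
    for callees in graph.values():
        for callee in callees:
            in_deg[callee] += 1
    return out_deg, dict(in_deg)

out_deg, in_deg = degree_distributions(CALL_GRAPH)
# Heavily reused helpers (high in-degree) are the "hubs" whose histogram,
# on the real LOS graph, follows the power law the paper reports.
hub = max(in_deg, key=in_deg.get)
```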

  2. Interactivity vs. fairness in networked linux systems

    SciTech Connect

    Wu, Wenji; Crawford, Matt; /Fermilab

    2007-01-01

    In general, the Linux 2.6 scheduler can ensure fairness and provide excellent interactive performance at the same time. However, our experiments and mathematical analysis have shown that the current Linux interactivity mechanism tends to incorrectly categorize non-interactive network applications as interactive, which can lead to serious fairness or starvation issues. In the extreme, a single process can unjustifiably obtain up to 95% of the CPU. The root cause lies in two facts: (1) network packets arrive at the receiver independently and discretely, and a 'relatively fast' non-interactive network process may frequently sleep to wait for packet arrivals. Though each sleep lasts a very short time, these wait-for-packet sleeps occur so frequently that the process attains interactive status. (2) The current Linux interactivity mechanism makes it possible for a non-interactive network process to receive a high CPU share while being incorrectly categorized as 'interactive.' In this paper, we propose and test a possible solution to this interactivity vs. fairness problem. Experimental results have demonstrated the effectiveness of the proposed solution.
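    A minimal sketch of the failure mode described above, assuming a simplified 2.6-style sleep-credit scheme. The constants and the linear mapping are illustrative; the real O(1) scheduler's bookkeeping differs in detail:

```python
MAX_SLEEP_AVG = 1000  # ms of credited sleep before the credit saturates (illustrative)
MAX_BONUS = 10        # maximum dynamic-priority bonus (illustrative)

def interactivity_bonus(sleep_avg_ms):
    """Map accumulated sleep credit to a dynamic-priority bonus (simplified model)."""
    sleep_avg_ms = min(sleep_avg_ms, MAX_SLEEP_AVG)
    return sleep_avg_ms * MAX_BONUS // MAX_SLEEP_AVG

def credited_sleep(n_sleeps, gap_ms):
    """Many short wait-for-packet sleeps accumulate the same credit as one long sleep."""
    return n_sleeps * gap_ms

# A bulk receiver that sleeps 2 ms before each of 500 packet arrivals accrues the
# full credit and is classified as "interactive": the paper's failure mode.
bonus = interactivity_bonus(credited_sleep(500, 2))
```

    The point of the sketch is that the credit function sees only total sleep time, not whether the process is genuinely waiting on a human, so a CPU-hungry network receiver earns the same bonus as a text editor.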

  3. The Linux operating system: An introduction

    NASA Technical Reports Server (NTRS)

    Bokhari, Shahid H.

    1995-01-01

    Linux is a Unix-like operating system for Intel 386/486/Pentium-based IBM PCs and compatibles. The kernel of this operating system was written from scratch by Linus Torvalds and, although copyrighted by the author, may be freely distributed. A worldwide group has collaborated in developing Linux over the Internet. Linux can run the powerful set of compilers and programming tools of the Free Software Foundation, as well as XFree86, a port of the X Window System from MIT. Most capabilities associated with high-performance workstations, such as networking, shared file systems, electronic mail, TeX, LaTeX, etc., are freely available for Linux. It can thus transform cheap IBM-PC compatible machines into Unix workstations with considerable capabilities. The author explains how Linux may be obtained, installed, and networked, and describes some interesting applications for Linux that are freely available. The enormous consumer market for IBM-PC compatible machines continually drives down the prices of CPU chips, memory, hard disks, CD-ROMs, etc. Linux can convert such machines into powerful workstations that can be used for teaching, research, and software development. For professionals who use Unix-based workstations at work, Linux permits virtually identical working environments on their personal home machines. For cost-conscious educational institutions, Linux can create world-class computing environments from cheap, easily maintained PC clones. Finally, for university students, it provides an essentially cost-free path away from DOS into the world of Unix and X Windows.

  4. The performance analysis of linux networking - packet receiving

    SciTech Connect

    Wu, Wenji; Crawford, Matt; Bowden, Mark; /Fermilab

    2006-11-01

    The computing models for High-Energy Physics experiments are becoming ever more globally distributed and grid-based, both for technical reasons (e.g., to place computational and data resources near each other and near the demand) and for strategic reasons (e.g., to leverage equipment investments). To support such computing models, the network and the end systems (computing and storage) face unprecedented challenges. One of the biggest is to transfer scientific data sets, now in the multi-petabyte (10^15 bytes) range and expected to grow to exabytes within a decade, reliably and efficiently among facilities and computation centers scattered around the world. Both the network and the end systems should provide the capabilities to support high-bandwidth, sustained, end-to-end data transmission. Recent technology trends show that although raw network transmission speeds are increasing rapidly, the rate of advancement of microprocessor technology has slowed. As a result, network protocol-processing overheads have risen sharply relative to the time spent in packet transmission, degrading throughput for networked applications. More and more, it is the network end system, rather than the network itself, that is responsible for the degraded performance of network applications. In this paper, the Linux system's packet receiving process is studied from NIC to application. We develop a mathematical model to characterize the Linux packet receiving process, and we analyze the key factors that affect the network performance of Linux systems.
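    The staged receive path the paper models (NIC ring buffer, softirq processing, socket queue, application read) can be sketched as a discrete-time simulation. The stage rates and buffer sizes below are arbitrary illustrative numbers, not measurements from the paper:

```python
def simulate_receive(n_ticks, arrivals_per_tick, ring_size, softirq_per_tick, app_per_tick):
    """Discrete-time sketch of the receive path: NIC ring -> softirq -> socket -> app.
    Packets arriving to a full ring are dropped, the loss mode such models capture."""
    ring = sock = delivered = dropped = 0
    for _ in range(n_ticks):
        # packets arrive at the NIC; a full ring buffer forces drops
        for _ in range(arrivals_per_tick):
            if ring < ring_size:
                ring += 1
            else:
                dropped += 1
        # softirq context moves packets from the ring to the socket receive queue
        moved = min(ring, softirq_per_tick)
        ring -= moved
        sock += moved
        # the application drains the socket queue
        read = min(sock, app_per_tick)
        sock -= read
        delivered += read
    return delivered, dropped
```

    With an adequate softirq budget nothing is lost; making protocol processing the slowest stage reproduces the ring-buffer overflow that degrades end-system throughput.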

  5. Implementing Journaling in a Linux Shared Disk File System

    NASA Technical Reports Server (NTRS)

    Preslan, Kenneth W.; Barry, Andrew; Brassow, Jonathan; Cattelan, Russell; Manthei, Adam; Nygaard, Erling; VanOort, Seth; Teigland, David; Tilstra, Mike; O'Keefe, Matthew; Erickson, Grant; Agarwal, Manish

    2000-01-01

    In computer systems today, speed and responsiveness are often determined by network and storage subsystem performance. Faster, more scalable networking interfaces like Fibre Channel and Gigabit Ethernet provide the scaffolding from which higher-performance computer systems may be constructed, but new thinking is required about how machines interact with network-enabled storage devices. In this paper we describe how we implemented journaling in the Global File System (GFS), a shared-disk, cluster file system for Linux. Our previous three papers on GFS at the Mass Storage Symposium discussed our first three GFS implementations, their performance, and the lessons learned. This fourth paper describes, appropriately enough, the evolution of GFS version 3 to version 4, which supports journaling and recovery from client failures. In addition, GFS scalability tests extending to 8 machines accessing 8 four-disk enclosures were conducted; these tests showed good scaling. We describe the GFS cluster infrastructure, which is necessary for proper recovery from machine and disk failures in a collection of machines sharing disks using GFS. Finally, we discuss the suitability of Linux for handling the big-data requirements of supercomputing centers.
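    The journaling idea at the heart of recovery from client failures, replay only those transactions whose commit record reached disk, can be sketched in a few lines. The transaction layout here is hypothetical, not GFS's on-disk format:

```python
def replay_journal(disk, journal):
    """Replay committed transactions from a write-ahead journal after a crash.
    Each entry is (txn_id, committed, {block: data}); uncommitted entries are
    discarded, which is what makes recovery after a client failure safe."""
    for txn_id, committed, blocks in journal:
        if committed:
            disk.update(blocks)   # idempotent: replaying twice yields the same state
    return disk

# Hypothetical crash scenario: transaction 1 committed, transaction 2 did not.
disk = {"inode_7": "old", "bitmap": "old"}
journal = [
    (1, True,  {"inode_7": "new"}),         # commit record made it to disk
    (2, False, {"bitmap": "half-written"}), # crash hit before the commit record
]
state = replay_journal(disk, journal)
```

    Because replay is idempotent, a second crash during recovery is handled by simply replaying again.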

  6. Building CHAOS: An Operating System for Livermore Linux Clusters

    SciTech Connect

    Garlick, J E; Dunlap, C M

    2003-02-21

    The Livermore Computing (LC) Linux Integration and Development Project (the Linux Project) produces and supports the Clustered High Availability Operating System (CHAOS), a cluster operating environment based on Red Hat Linux. Each CHAOS release begins with a set of requirements and ends with a formally tested, packaged, and documented release suitable for use on LC's production Linux clusters. One characteristic of CHAOS is that component software packages come from different sources under varying degrees of project control. Some are developed by the Linux Project, some are developed by other LC projects, some are external open source projects, and some are commercial software packages. A challenge to the Linux Project is to adhere to release schedules and testing disciplines in a diverse, highly decentralized development environment. Communication channels are maintained for externally developed packages in order to obtain support, influence development decisions, and coordinate/understand release schedules. The Linux Project embraces open source by releasing locally developed packages under open source license, by collaborating with open source projects where mutually beneficial, and by preferring open source over proprietary software. Project members generally use open source development tools. The Linux Project requires system administrators and developers to work together to resolve problems that arise in production. This tight coupling of production and development is a key strategy for making a product that directly addresses LC's production requirements. It is another challenge to balance support and development activities in such a way that one does not overwhelm the other.

  7. CompactPCI/Linux Platform in FTU Slow Control System

    NASA Astrophysics Data System (ADS)

    Iannone, F.; Wang, L.; Centioli, C.; Panella, M.; Mazza, G.; Vitale, V.

    2004-12-01

    In large fusion experiments, such as tokamak devices, there is a common trend for slow control systems. Because of the complexity of the plants, the so-called 'Standard Model' (SM) of slow control has been adopted on several tokamak machines. This model is based on a three-level hierarchical control scheme: 1) High-Level Control (HLC) with a supervisory function; 2) Medium-Level Control (MLC) to interface and concentrate field I/O equipment; 3) Low-Level Control (LLC) with hard real-time I/O functions, often managed by PLCs. The FTU control system, designed along SM concepts, has undergone several stages of development over its fifteen years of operation. The latest evolution was inevitable, owing to the obsolescence of the MLC CPUs, based on VME Motorola 68030 boards running the OS9 operating system. A large amount of C code was developed for that platform to route the data flow from the LLC, which consists of 24 Westinghouse Numalogic PC-700 PLCs with about 8000 field points, to the HLC, based on a commercial object-oriented real-time database on an Alpha/Compaq Tru64 platform. We therefore had to look for cost-effective solutions, and finally a CompactPCI Intel x86 platform running the Linux operating system was chosen. A software port has been carried out, taking into account the differences between OS9 and Linux in terms of inter-process/network communications and the multi-port serial I/O driver. This paper describes the hardware/software architecture of the new MLC system, emphasizing the reliability and low cost of the open source solutions. Moreover, the huge number of software packages available in the open source environment will make maintenance less painful and will open the way to further improvements of the system itself.
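    The MLC's concentrator role in the three-level Standard Model can be sketched as follows. The deadband filtering and the frame layout are assumptions for illustration, not details of the FTU system:

```python
LAST_SENT = {}  # per-point value last forwarded up to the HLC

def concentrate(frames, deadband=0.5):
    """MLC role: merge raw per-PLC frames into one snapshot and forward to the
    HLC only the field points that changed by more than the deadband
    (hypothetical filtering policy, not from the FTU design)."""
    merged = {}
    for frame in frames:                     # each PLC contributes {point: value}
        merged.update(frame)
    updates = {}
    for point, value in merged.items():
        if abs(value - LAST_SENT.get(point, float("inf"))) > deadband:
            updates[point] = value
            LAST_SENT[point] = value
    return updates

# First scan forwards everything; a quiet second scan forwards nothing.
boot = concentrate([{"T1": 20.0}, {"P1": 1.0}])
quiet = concentrate([{"T1": 20.1}])
```

    Concentration of this kind is what keeps roughly 8000 field points from flooding the supervisory database with unchanged values.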

  8. QMP-MVIA: a message passing system for Linux clusters with gigabit Ethernet mesh connections

    SciTech Connect

    Jie Chen; W. Watson III; Robert Edwards; Weizhen Mao

    2004-09-01

    Recent progress in performance, coupled with a decline in price, makes copper-based gigabit Ethernet (GigE) interconnects an attractive alternative to expensive high-speed network interconnects when constructing Linux clusters. However, traditional message passing systems based on TCP over GigE cannot fully utilize the raw performance of today's GigE interconnects, owing to the overhead of kernel involvement and multiple memory copies while sending and receiving messages. The overhead is even more evident in mesh-connected Linux clusters using multiple GigE interconnects in a single host. We present a general message passing system called QMP-MVIA (QCD Message Passing over M-VIA) for Linux clusters with mesh connections using GigE interconnects. In particular, we evaluate and compare the performance characteristics of TCP and M-VIA (an implementation of the VIA specification) for a mesh communication architecture, to demonstrate the feasibility of using M-VIA as the point-to-point communication software on which QMP-MVIA is based. Furthermore, we describe the design and implementation of QMP-MVIA for mesh-connected Linux clusters, with emphasis on both point-to-point and collective communications, and demonstrate that the QMP-MVIA message passing system using GigE interconnects achieves bandwidth and latency that are not only better than those of TCP-based systems but also compare favorably with systems using some specialized high-speed interconnects in a switched architecture, at much lower cost.
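    In a mesh-connected cluster, each host needs point-to-point links only to its grid neighbors. A small sketch of the rank-to-neighbor arithmetic such a system might use, assuming a wraparound (toroidal) 2-D mesh, which is common for lattice-QCD workloads but is my assumption rather than something stated above:

```python
def mesh_neighbors(rank, dims):
    """Neighbors of a node in a wraparound (toroidal) 2-D mesh: each host reaches
    exactly four peers over dedicated point-to-point links."""
    rows, cols = dims
    r, c = divmod(rank, cols)
    return {
        "north": ((r - 1) % rows) * cols + c,
        "south": ((r + 1) % rows) * cols + c,
        "west":  r * cols + (c - 1) % cols,
        "east":  r * cols + (c + 1) % cols,
    }
```

    Collective operations over such a mesh (e.g. a global sum) are then built from repeated nearest-neighbor exchanges, which is why point-to-point latency dominates the design.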

  9. Potential performance bottleneck in Linux TCP

    SciTech Connect

    Wu, Wenji; Crawford, Matt; /Fermilab

    2006-12-01

    TCP is the most widely used transport protocol on the Internet today. Over the years, especially recently, due to requirements of high bandwidth transmission, various approaches have been proposed to improve TCP performance. The Linux 2.6 kernel is now preemptible. It can be interrupted mid-task, making the system more responsive and interactive. However, we have noticed that Linux kernel preemption can interact badly with the performance of the networking subsystem. In this paper we investigate the performance bottleneck in Linux TCP. We systematically describe the trip of a TCP packet from its ingress into a Linux network end system to its final delivery to the application; we study the performance bottleneck in Linux TCP through mathematical modeling and practical experiments; finally we propose and test one possible solution to resolve this performance bottleneck in Linux TCP.
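    The interaction between kernel preemption and protocol processing can be caricatured as a queue whose server occasionally pauses. The tick model below illustrates the shape of the bottleneck only; it is not the paper's actual mathematical model:

```python
def receive_with_stalls(n_ticks, stalled, backlog_limit):
    """One TCP segment arrives per tick; the kernel normally processes one per
    tick, but during 'stalled' ticks processing is preempted, so the backlog
    grows and segments are dropped once it overflows."""
    backlog = dropped = processed = 0
    for t in range(n_ticks):
        if backlog < backlog_limit:
            backlog += 1
        else:
            dropped += 1              # backlog full: the segment is discarded
        if t not in stalled:          # protocol processing runs only when not preempted
            done = min(backlog, 1)
            backlog -= done
            processed += done
    return processed, dropped
```

    Drops of this kind are doubly expensive for TCP, since the sender interprets them as congestion and backs off, which is why a scheduling artifact can throttle an entire transfer.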

  10. AIRE-Linux

    NASA Astrophysics Data System (ADS)

    Zhou, Jianfeng; Xu, Benda; Peng, Chuan; Yang, Yang; Huo, Zhuoxi

    2015-08-01

    AIRE-Linux is a dedicated Linux system for astronomers. Modern astronomy faces two big challenges: massive volumes of raw observational data covering the whole electromagnetic spectrum, and data-processing skills so specialized that they exceed the abilities of an individual or even a small team. AIRE-Linux, a specially designed Linux distribution delivered to users as Virtual Machine (VM) images in the Open Virtualization Format (OVF), is intended to help astronomers confront these challenges. Most astronomical software packages, such as IRAF, MIDAS, CASA, Heasoft, etc., will be integrated into AIRE-Linux. It is easy for astronomers to configure and customize the system and use just what they need. When incorporated into cloud computing platforms, AIRE-Linux will be able to handle data-intensive and compute-intensive tasks for astronomers. Currently, a beta version of AIRE-Linux is ready for download and testing.

  11. Linux-PC based 1024-Channel Transient Digitizer System for the DRIFT Experiment Acquisition System

    NASA Astrophysics Data System (ADS)

    Ayad, R.; Hanson-Hart, Z.; Hyatt, M.; Katz-Hyman, M.; Maher, P.; Martoff, C. J.; Posner, A.; Freytag, D.; Freytag, M.; Haller, G.; Nelson, D.

    2003-04-01

    The DRIFT Experiment [1] is an underground search for WIMP Dark Matter using a novel detector invented for this purpose: the Negative Ion TPC (NITPC). The data acquisition system for DRIFT had to allow acquisition of long duration time digitized data from the 1024 analog channels at an affordable price. This was accomplished with a system based on a Linux PC, the Comedi [2] open-source device driver software, the inexpensive PCI-DIO-32HS National Instruments high speed digital I/O board, and custom 32-channel preamp+digitizer boards built at SLAC. System architecture, testing, and performance will be discussed, as well as further upgrade plans. [1] Low Pressure Negative Ion TPC for Dark Matter Search. D. P. Snowden-Ifft, C. J. Martoff, J. M. Burwell, Phys Rev. D. Rapid Comm. 61, 101301 (2000) [2] Comedi: linux Control and MEasurement Device Interface : http://stm.lbl.gov/comedi/

  12. The Case for A Hierarchal System Model for Linux Clusters

    SciTech Connect

    Seager, M; Gorda, B

    2009-06-05

    The computer industry today is no longer driven, as it was in the 1940s, 50s, and 60s, by high-performance computing requirements. Rather, HPC systems, especially Leadership-class systems, sit on top of a pyramid investment model. Figure 1 shows a representative pyramid investment model for systems hardware. At the base of the pyramid is the huge investment (on the order of tens of billions of US dollars per year) in semiconductor fabrication and process technologies. These costs, which approximately double with every generation, are funded by investments from multiple markets: enterprise, desktops, games, embedded and specialized devices. Above these base technology investments are investments in critical technology elements such as microprocessors, chipsets, and memory ASIC components. Investments in these components are spread across the same markets as the base semiconductor process investments. These second-tier investments are approximately half the size of the lower level of the pyramid. The next technology investment layer up, tier 3, is more focused on scalable computing systems such as those needed for HPC and other markets. These tier-3 technology elements include networking (SAN, WAN and LAN), interconnects, and large scalable SMP designs. Above these, in tier 4, are the relatively small investments necessary to build very large, scalable, high-end or Leadership-class systems. Primary among these are the specialized network designs of vertically integrated systems, etc.

  13. Linux and the chemist.

    SciTech Connect

    Moore, J. M.; McCann, M. P.; Materials Science Division; Sam Houston State Univ.

    2003-02-01

    Linux is a freely available computer operating system. Instead of buying multiple copies of the same operating system for use on each computer, Linux may be freely copied onto every computer. Linux distributions come with hundreds of applications, such as compilers, browsers, various servers, graphics software, text editors, and spreadsheets, to mention just a few. Many commercial software companies have ported their applications to Linux. Numerous programs for chemists, such as statistical treatment, molecular modeling, NMR spectral processing, DNA sequence evaluation, crystal structure solving, and molecular dynamics, are available online, many at no cost.

  14. Development of Automatic Live Linux Rebuilding System with Flexibility in Science and Engineering Education and Applying to Information Processing Education

    NASA Astrophysics Data System (ADS)

    Sonoda, Jun; Yamaki, Kota

    We have developed an automatic Live Linux rebuilding system for science and engineering education, such as information processing education, numerical analysis, and so on. Our system can easily and automatically rebuild a customized Live Linux from an ISO image of Ubuntu, one of the Linux distributions. It also makes it easy to install/uninstall packages and to enable/disable init daemons. When rebuilding a Live Linux CD with our system, the number of required operations is eight, and the rebuild takes about 33 minutes for the CD version and about 50 minutes for the DVD version. Moreover, we have used the rebuilt Live Linux CD in an information processing class at our college. A questionnaire survey of the 43 students who used the Live Linux CD showed that about 80 percent of them found it useful. From these results, we conclude that our system can easily and automatically rebuild a useful Live Linux in a short time.

  15. CORSET: Service-Oriented Resource Management System in Linux

    NASA Astrophysics Data System (ADS)

    Kang, Dong-Jae; Kim, Chei-Yol; Jung, Sung-In

    Generally, system resources are not sufficient for the many services and applications running on a system. In the real world, services matter more than individual processes, and different services have different priorities or importance, so each service should be treated differently with respect to system resources. However, an administrator cannot guarantee that a specific service gets adequate resources under unpredictable workloads, because many processes compete for them. We therefore propose a service-oriented resource management subsystem to resolve these problems. It guarantees the performance or QoS of a specific service under changing workloads by satisfying the service's minimum resource requirements.
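    A minimum-guarantee allocator of the kind this goal implies can be sketched as follows. The two-phase policy (grant each service its minimum, then split the surplus by weight) is an assumption for illustration, not CORSET's documented algorithm:

```python
def allocate_cpu(total_share, services):
    """Grant each service its minimum share first, then split the remainder by
    weight. 'services' maps name -> (minimum, weight); infeasible minimums raise."""
    minimums = {name: m for name, (m, _) in services.items()}
    if sum(minimums.values()) > total_share:
        raise ValueError("minimum requirements exceed the available share")
    spare = total_share - sum(minimums.values())
    total_weight = sum(w for _, w in services.values())
    return {name: m + spare * w / total_weight
            for name, (m, w) in services.items()}

# Hypothetical example: the db service is guaranteed 30% no matter the workload.
shares = allocate_cpu(100, {"db": (30, 2), "web": (20, 1), "batch": (0, 1)})
```

    Because minimums are carved out before the weighted split, a burst from "batch" can never push "db" below its guarantee, which is exactly the discrimination between services the abstract calls for.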

  16. [Design of an embedded stroke rehabilitation apparatus system based on Linux computer engineering].

    PubMed

    Zhuang, Pengfei; Tian, XueLong; Zhu, Lin

    2014-04-01

    A realization project for an electrical stimulator aimed at the motor dysfunction of stroke is proposed in this paper. Based on neurophysiological biofeedback, the system, using an ARM9 S3C2440 as the core processor, integrates the collection and display of surface electromyography (sEMG) signals and neuromuscular electrical stimulation (NMES) into one device. By running an embedded Linux system, the project is able to use Qt/Embedded as a graphical interface design tool to accomplish the design of the stroke rehabilitation apparatus. Experiments showed that the system worked well. PMID:25039129

  17. A new design and implementation of an infrared device driver in embedded Linux systems

    NASA Astrophysics Data System (ADS)

    Jia, Li-li; Cui, Hua; Wang, Ru-li

    2009-07-01

    Wireless infrared communication systems are widely used for remote control in portable terminals, particularly in systems requiring low cost, light weight, and moderate data rates. They have already proven their effectiveness for short-range temporary communications and in high-data-rate, longer-range point-to-point systems. This paper addresses the design and implementation of an infrared device driver in a personal portable intelligent digital infrared communication system. After analyzing the various constraints, we use an embedded system based on a Samsung S3C2440A 32-bit processor and the Linux operating system to design the driver. The driver abandons the traditional serial-interface control mode, uses generic GPIO to implement the infrared receiver, and adopts a user-defined communication protocol, much simpler and more convenient than traditional infrared protocols, to design the character device driver for the infrared receiver. The protocol uses an interrupt counter to determine the received value and the start code. The paper also introduces interrupt handling and an I/O package for reusing Linux device drivers in embedded systems; via this package, the whole Linux device driver source tree can be reused without modification. The driver can set up and initialize the infrared device, transfer data between the device and the software, configure the device, monitor and trace its status, reset it, and shut it down as requested. Finally, an infrared test procedure was prepared and evaluations were carried out in a mobile infrared intelligent cicerone system; the test results show that the design is simple and practical, with advantages such as easy porting, strong reliability, and convenience.
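    The protocol's trick of decoding by counting and timing GPIO interrupts can be sketched as pulse-width classification. The thresholds, bit order, and eight-bit framing below are invented for illustration, since the paper's user-defined protocol is not specified:

```python
def decode_pulses(gaps_us, short_us=600, long_us=1700, tol=0.25):
    """Classify the gaps between successive GPIO edge interrupts into bits:
    a short gap is a 0, a long gap is a 1 (illustrative timings, not the
    paper's actual protocol)."""
    bits = []
    for gap in gaps_us:
        if abs(gap - short_us) <= short_us * tol:
            bits.append(0)
        elif abs(gap - long_us) <= long_us * tol:
            bits.append(1)
        else:
            raise ValueError(f"unrecognized pulse width: {gap} us")
    return bits

def bits_to_byte(bits):
    """Pack eight decoded bits, MSB first, into the received command byte."""
    value = 0
    for b in bits:
        value = (value << 1) | b
    return value

# A tolerant decoder absorbs the jitter real IR receivers exhibit (580 vs 600 us).
command = bits_to_byte(decode_pulses([600, 1700, 580, 1650, 600, 600, 1700, 600]))
```

    In a real driver the gap measurements would come from timestamps taken in the GPIO interrupt handler; here they are supplied as a list so the decoding logic can be tested in isolation.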

  18. Linux support at Fermilab

    SciTech Connect

    D.R. Yocum, C. Sieh, D. Skow, S. Kovich, D. Holmgren and R. Kennedy

    1998-12-01

    In January of 1998, Fermilab issued an official statement of support for the Linux operating system. This was the result of a groundswell of interest in the possibilities of a cheap, easily used platform for computation and analysis, culminating in the successful demonstration of a small computation farm reported at CHEP97. This paper describes the current status of Linux support and deployment at Fermilab. The collaborative development process of Linux creates some problems for traditional support models. A primary example is that there is no definitive OS distribution, such as a CD distribution from a traditional Unix vendor; for this reason, Fermilab has had to make a more definite statement about what is meant by Linux. Linux support at Fermilab is restricted to the Intel processor platform. A central distribution system has been created to mitigate problems with multiple distribution and configuration options. This system is based on the Red Hat distribution with the Fermi Unix Environment (FUE) layered above it. Deployment of Linux at the lab has been growing rapidly, and by CHEP hundreds of machines are expected to be running Linux. These include computational farms, trigger processing farms, and desktop workstations. The former groups are described in other talks and consist of clusters of many tens of very similar machines devoted to a few tasks. The latter group is more diverse and challenging. The user community has been very supportive and active in defining needs for Linux features and solving various compatibility issues. We will discuss the support arrangements currently in place.

  19. Use of Low-Cost Acquisition Systems with an Embedded Linux Device for Volcanic Monitoring

    PubMed Central

    Moure, David; Torres, Pedro; Casas, Benito; Toma, Daniel; Blanco, María José; Del Río, Joaquín; Manuel, Antoni

    2015-01-01

    This paper describes the development of a low-cost multiparameter acquisition system for volcanic monitoring that is applicable to gravimetry and geodesy, as well as to the visual monitoring of volcanic activity. The acquisition system was developed around a System on a Chip (SoC), the Broadcom BCM2835, running a Linux operating system (based on Debian™) that allows for the construction of a complete monitoring system offering multiple possibilities for storage, data processing, configuration, and the real-time monitoring of volcanic activity. This multiparametric acquisition system was developed with a software environment, as well as with different hardware modules designed for each parameter to be monitored. The device presented here has been used and validated under different scenarios for monitoring ocean tides, ground deformation, and gravity, as well as for image-based monitoring of the island of Tenerife and of ground deformation on the island of El Hierro. PMID:26295394

  1. Unsteady, Cooled Turbine Simulation Using a PC-Linux Analysis System

    NASA Technical Reports Server (NTRS)

    List, Michael G.; Turner, Mark G.; Chen, Jen-Ping; Remotigue, Michael G.; Veres, Joseph P.

    2004-01-01

    The first stage of the high-pressure turbine (HPT) of the GE90 engine was simulated with a three-dimensional unsteady Navier-Stokes solver, MSU Turbo, which uses source terms to simulate the cooling flows. In addition to the solver, its pre-processor, GUMBO, and a post-processing and visualization tool, Turbomachinery Visual3 (TV3), were run in a Linux environment to carry out the simulation and analysis. The solver was run both with and without cooling. The introduction of cooling flow on the blade surfaces, case, and hub, and its effects on rotor-vane interaction as well as on the blades themselves, were the principal motivations for this study. The studies of the cooling flow show the large amount of unsteadiness in the turbine and the corresponding hot-streak migration phenomenon. This research on the GE90 turbomachinery has also led to a procedure for running unsteady, cooled turbine analyses on commodity PCs running the Linux operating system.

  2. A machine vision system for micro-EDM based on linux

    NASA Astrophysics Data System (ADS)

    Guo, Rui; Zhao, Wansheng; Li, Gang; Li, Zhiyong; Zhang, Yong

    2006-11-01

    Due to the high precision and good surface quality it can deliver, Electrical Discharge Machining (EDM) is potentially an important process for the fabrication of micro-tools and micro-components. However, a number of issues remain unsolved before micro-EDM becomes a reliable process with repeatable results. To deal with the difficulties of the on-line fabrication of micro electrodes and tool-wear compensation, a machine vision system for micro-EDM has been developed with a Charge Coupled Device (CCD) camera, offering an optical resolution of 1.61 μm and an overall magnification of 113~729. Based on the Linux operating system, an image capture program was developed with the V4L2 API, and an image processing program was built using OpenCV. The contours of micro electrodes can be extracted by means of the Canny edge detector. Through system calibration, the diameter of a micro electrode can be measured on-line. Experiments have been carried out to verify the system's performance, and the sources of measurement error are also analyzed.
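    Once an edge map is available, the on-line diameter measurement reduces to the edge-to-edge distance multiplied by the calibrated scale. A sketch using a synthetic edge row follows; only the 1.61 μm/pixel figure comes from the text, while the measurement routine itself is an assumption:

```python
def electrode_diameter(edge_row, um_per_pixel=1.61):
    """Measure an electrode diameter from one row of a binary edge image:
    the distance between the outermost edge pixels, scaled by the camera's
    calibrated optical resolution (1.61 um/pixel, per the system above)."""
    edges = [i for i, v in enumerate(edge_row) if v]
    if len(edges) < 2:
        raise ValueError("need at least two edge pixels to measure a diameter")
    return (edges[-1] - edges[0]) * um_per_pixel

# A Canny-style detector would produce rows like this; here it is synthetic:
# edge pixels at columns 10 and 111, i.e. a 101-pixel-wide electrode.
row = [0] * 10 + [1] + [0] * 100 + [1] + [0] * 10
diameter_um = electrode_diameter(row)
```

    A production version would average this measurement over many rows of the contour to suppress the per-row noise the paper's error analysis addresses.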

  3. Chandra Science Operational Data System Migration to Linux: Herding Cats through a Funnel

    NASA Astrophysics Data System (ADS)

    Evans, J.; Evans, I.; Fabbiano, G.; Nichols, J.; Paton, L.; Rots, A.

    2014-05-01

    Migration to a new operational system requires technical and non-technical planning to address all of the functional associations affiliated with an established operations environment. The transition to (or addition of) a new platform often includes project planning that has organizational and operational elements. The migration likely tasks individuals both directly and indirectly involved in the project, so identification and coordination of key personnel is essential. The new system must be accurate and robust, and the transition plan typically must ensure that interruptions to services are minimized. Despite detailed integration and testing efforts, back-up plans that include procedures to follow if there are issues during or after installation need to be in place as part of the transition task. In this paper, we present some of the important steps involved in the migration of an operational data system. The management steps include setting objectives and defining scope, identifying stakeholders and establishing communication, assessing the environment and estimating workload, building a schedule, and coordinating with all involved to see it through. We discuss, specifically, the recent migration of the Chandra data system and data center operations from Solaris 32 to Linux 64. The code base is approximately 2 million source lines of code, and supports proposal planning, science mission planning, data processing, and the Chandra data archive. The overall project took approximately 18 months to plan and implement with the resources we had available. Data center operations continued uninterrupted with the exception of a small downtime during the changeover. We highlight our planning and implementation, the experience we gained during the project, and the lessons that we have learned.

  4. Recent advances in PC-Linux systems for electronic structure computations by optimized compilers and numerical libraries.

    PubMed

    Yu, Jen-Shiang K; Yu, Chin-Hui

    2002-01-01

One of the most frequently used packages for electronic structure research, GAUSSIAN 98, is compiled on Linux systems with various hardware configurations, including AMD Athlon (with the "Thunderbird" core), AthlonMP, and AthlonXP (with the "Palomino" core) systems as well as the Intel Pentium 4 (with the "Willamette" core) machines. The default PGI FORTRAN compiler (pgf77) and the Intel FORTRAN compiler (ifc) are respectively employed with different architectural optimization options to compile GAUSSIAN 98 and test the performance improvement. In addition to the BLAS library included in revision A.11 of this package, the Automatically Tuned Linear Algebra Software (ATLAS) library is linked against the binary executables to improve the performance. Various Hartree-Fock, density-functional theory, and MP2 calculations are performed for benchmarking purposes. It is found that the combination of ifc with the ATLAS library gives the best performance for GAUSSIAN 98 on all of these PC-Linux computers, including AMD and Intel CPUs. Even on AMD systems, the Intel FORTRAN compiler invariably produces binaries with better performance than pgf77. The enhancement provided by the ATLAS library is more significant for post-Hartree-Fock calculations. The performance on a single CPU is potentially as good as that on an Alpha 21264A workstation or an SGI supercomputer. The SPECfp2000 floating-point benchmark scores show trends similar to the results of the GAUSSIAN 98 package. PMID:12086529

  5. SLURM: Simple Linux Utility for Resource Management

    SciTech Connect

    Jette, M; Grondona, M

    2002-12-19

    Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, scheduling and stream copy modules. This paper presents an overview of the SLURM architecture and functionality.
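The component split described above (machine status, partition management, job management, scheduling) can be illustrated with a toy FIFO scheduler. This is a conceptual sketch only, not SLURM code; the class and method names (`ToyCluster`, `submit`, `finish`) are invented for illustration.

```python
from collections import deque

# Toy sketch of a cluster resource manager's core bookkeeping:
# machine status (idle node set), job management (a pending queue
# and a running table), and a simple FIFO scheduling pass.
class ToyCluster:
    def __init__(self, nodes):
        self.idle = set(nodes)      # machine status: which nodes are free
        self.queue = deque()        # job management: pending (job_id, n_nodes)
        self.running = {}           # job_id -> set of allocated nodes

    def submit(self, job_id, n_nodes):
        self.queue.append((job_id, n_nodes))
        self.schedule()

    def schedule(self):
        # FIFO: launch the head job whenever enough nodes are idle
        while self.queue and len(self.idle) >= self.queue[0][1]:
            job_id, n = self.queue.popleft()
            self.running[job_id] = {self.idle.pop() for _ in range(n)}

    def finish(self, job_id):
        # return the job's nodes to the idle pool and re-run the scheduler
        self.idle |= self.running.pop(job_id)
        self.schedule()
```

A real resource manager adds fault tolerance, partitions, and a communication layer on top of this kind of state machine; the sketch only shows how job and node state interact.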

  6. SLURM: Simple Linux Utility for Resource Management

    SciTech Connect

    Jette, M; Grondona, M

    2003-04-22

    Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, scheduling, and stream copy modules. This paper presents an overview of the SLURM architecture and functionality.

  7. v9fb: a remote framebuffer infrastructure for Linux

    SciTech Connect

    Kulkarni, Abhishek; Ionkov, Latchesar

    2008-01-01

    v9fb is a software infrastructure that allows extending framebuffer devices in Linux over the network by providing an abstraction to them in the form of a filesystem hierarchy. Framebuffer based graphic devices export a synthetic filesystem which offers a simple and easy-to-use interface for performing common framebuffer operations. Remote framebuffer devices can be accessed over the network using the 9P protocol support in Linux. We describe the infrastructure in detail and review some of the benefits it offers, similar to those of Plan 9 distributed systems. We discuss the applications of this infrastructure to remotely display and run interactive applications on a terminal while offloading the computation to remote servers, and more importantly the flexibility it offers in driving tiled-display walls by aggregating graphic devices in the network.

  8. Managing a Real-Time Embedded Linux Platform with Buildroot

    SciTech Connect

    Diamond, J.; Martin, K.

    2015-01-01

    Developers of real-time embedded software often need to build the operating system, kernel, tools and supporting applications from source to work with the differences in their hardware configuration. The first attempts to introduce Linux-based real-time embedded systems into the Fermilab accelerator controls system used this approach but it was found to be time-consuming, difficult to maintain and difficult to adapt to different hardware configurations. Buildroot is an open source build system with a menu-driven configuration tool (similar to the Linux kernel build system) that automates this process. A customized Buildroot [1] system has been developed for use in the Fermilab accelerator controls system that includes several hardware configuration profiles (including Intel, ARM and PowerPC) and packages for Fermilab support software. A bootable image file is produced containing the Linux kernel, shell and supporting software suite, ranging from 3 to 20 megabytes in size, ideal for network booting. The result is a platform that is easier to maintain and deploy in diverse hardware configurations.

  9. SLURM: Simple Linux Utility for Resource Management

    SciTech Connect

    Jette, M; Dunlap, C; Garlick, J; Grondona, M

    2002-07-08

    Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, scheduling and stream copy modules. The design also includes a scalable, general-purpose communication infrastructure. This paper presents an overview of the SLURM architecture and functionality.

  10. Climate Modeling with a Linux Cluster

    NASA Astrophysics Data System (ADS)

    Renold, M.; Beyerle, U.; Raible, C. C.; Knutti, R.; Stocker, T. F.; Craig, T.

    2004-08-01

    Until recently, computationally intensive calculations in many scientific disciplines have been limited to institutions which have access to supercomputing centers. Today, the computing power of PC processors permits the assembly of inexpensive PC clusters that nearly approach the power of supercomputers. Moreover, the combination of inexpensive network cards and Open Source software provides an easy linking of standard computer equipment to enlarge such clusters. Universities and other institutions have taken this opportunity and built their own mini-supercomputers on site. Computing power is a particular issue for the climate modeling and impacts community. The purpose of this article is to make available a Linux cluster version of the Community Climate System Model developed by the National Center for Atmospheric Research (NCAR; http://www.cgd.ucar.edu/csm).

  11. An Open Source Rapid Computer Aided Control System Design Toolchain Using Scilab, Scicos and RTAI Linux

    NASA Astrophysics Data System (ADS)

    Bouchpan-Lerust-Juéry, L.

    2007-08-01

    Current and next generation on-board computer systems tend to implement real-time embedded control applications (e.g. Attitude and Orbit Control Subsystem (AOCS), Packet Utilization Standard (PUS), spacecraft autonomy...) which must meet high standards of reliability and predictability as well as safety. Meeting these requirements demands a considerable amount of effort and cost from the space software industry. The first part of this paper presents a free, Open Source, integrated solution for developing RTAI applications spanning analysis, design, simulation, and direct implementation using Open Source code generation; the second part summarises the suggested approach, its results, and the conclusions for further work.

  12. Abstract of talk for Silicon Valley Linux Users Group

    NASA Technical Reports Server (NTRS)

    Clanton, Sam

    2003-01-01

    The use of Linux for research at NASA Ames is discussed. Topics include: work with the Atmospheric Physics branch on software for a spectrometer to be used in the CRYSTAL-FACE mission this summer; and work in the Neuroengineering Lab with Code IC, including an introduction to the extension of the human senses project, advantages of using Linux for real-time biological data processing, algorithms utilized on a Linux system, goals of the project, slides of people wearing Neuroscan caps, and the progress that has been made and how Linux has helped.

  13. KNBD: A Remote Kernel Block Server for Linux

    NASA Technical Reports Server (NTRS)

    Becker, Jeff

    1999-01-01

    I am developing a prototype of a Linux remote disk block server whose purpose is to serve as a lower level component of a parallel file system. Parallel file systems are an important component of high performance supercomputers and clusters. Although supercomputer vendors such as SGI and IBM have their own custom solutions, there has been a void and hence a demand for such a system on Beowulf-type PC clusters. Recently, the Parallel Virtual File System (PVFS) project at Clemson University has begun to address this need (1). Although their system provides much of the functionality of (and indeed was inspired by) the equivalent file systems in the commercial supercomputer market, their system runs entirely in user-space. Migrating their I/O services to the kernel could provide a performance boost by obviating the need for expensive system calls. Thanks to Pavel Machek, the Linux kernel has provided the network block device (2) with kernels 2.1.101 and later. You can configure this block device to redirect reads and writes to a remote machine's disk. This can be used as a building block for constructing a striped file system across several nodes.
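The essential job of a remote block server as described above (redirecting block reads and writes to another machine's disk) can be sketched conceptually. This is not the actual network block device wire protocol; `BlockServer` and its request shapes are invented for illustration, with an in-memory buffer standing in for the disk.

```python
# Conceptual sketch of a remote block server: the kernel-side block device
# would forward (operation, byte offset, payload) requests over the network,
# and the server applies them to its backing store.
class BlockServer:
    def __init__(self, size):
        self.store = bytearray(size)    # backing store standing in for a disk

    def handle(self, op, offset, data_or_len):
        if op == "read":
            # return `data_or_len` bytes starting at `offset`
            return bytes(self.store[offset:offset + data_or_len])
        elif op == "write":
            # write the payload at `offset`, report bytes written
            self.store[offset:offset + len(data_or_len)] = data_or_len
            return len(data_or_len)
        raise ValueError("unknown op")
```

Striping a file system across nodes then amounts to routing each request to the server that owns the corresponding block range.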

  14. BTime: A Clock Synchronization Tool For Linux Clusters

    SciTech Connect

    Loncaric, Josip

    2004-10-22

    BTime software synchronizes the system clocks on Linux computers that can communicate on a network. Primarily intended for Linux computers that form a cluster, BTime ensures that all computers in the cluster have approximately the same time (usually within microseconds). In operation, a BTime server broadcasts target times every second. All BTime clients filter timing data and apply local time corrections synchronously at multiples of 64 seconds. Bayesian estimation of target time errors feeds a Kalman filter which estimates local errors in time, clock drift, and wander rates. Server clock adjustments are detected and compensated, thus reducing filter convergence time. Low probability events (e.g. significant time changes) are handled through heuristics also designed to reduce filter convergence time. Normal BTime corrects clock differences, while another version of BTime that only tracks clock differences can be used for measurements. In the authors' test lasting four days, BTime delivered estimated clock synchronization within 10 microseconds with 99.75% confidence. Standard deviation of the estimated clock offset is typically 2-3 microseconds, even over busy multi-hop networks. These results are about 100 times better than published results for Network Time Protocol (NTP).
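The Kalman-filter stage described above, which estimates local clock offset and drift from the per-second target-time measurements, can be sketched generically. This is not BTime's actual algorithm; it is a textbook two-state Kalman filter, and the process/measurement noise parameters `q` and `r` are illustrative assumptions.

```python
# Generic two-state Kalman filter tracking clock offset and drift rate
# from noisy per-interval offset measurements.
def kalman_clock(measurements, dt=1.0, q=1e-9, r=1e-6):
    x = [0.0, 0.0]                      # state: [offset (s), drift (s/s)]
    P = [[1.0, 0.0], [0.0, 1.0]]        # state covariance
    for z in measurements:
        # predict: offset advances by drift * dt (constant-drift model)
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # update with the measured offset z (measurement matrix H = [1, 0])
        s = P[0][0] + r                 # innovation variance
        k = [P[0][0] / s, P[1][0] / s]  # Kalman gain
        y = z - x[0]                    # innovation
        x = [x[0] + k[0] * y, x[1] + k[1] * y]
        P = [[(1 - k[0]) * P[0][0], (1 - k[0]) * P[0][1]],
             [P[1][0] - k[1] * P[0][0], P[1][1] - k[1] * P[0][1]]]
    return x
```

Fed a drifting clock's offset samples, the filter converges to the underlying offset and drift rate, which is what lets a client apply smooth corrections instead of step changes.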

  15. BTime: A Clock Synchronization Tool For Linux Clusters

    Energy Science and Technology Software Center (ESTSC)

    2004-10-22

    BTime software synchronizes the system clocks on Linux computers that can communicate on a network. Primarily intended for Linux computers that form a cluster, BTime ensures that all computers in the cluster have approximately the same time (usually within microseconds). In operation, a BTime server broadcasts target times every second. All BTime clients filter timing data and apply local time corrections synchronously at multiples of 64 seconds. Bayesian estimation of target time errors feeds a Kalman filter which estimates local errors in time, clock drift, and wander rates. Server clock adjustments are detected and compensated, thus reducing filter convergence time. Low probability events (e.g. significant time changes) are handled through heuristics also designed to reduce filter convergence time. Normal BTime corrects clock differences, while another version of BTime that only tracks clock differences can be used for measurements. In the authors' test lasting four days, BTime delivered estimated clock synchronization within 10 microseconds with 99.75% confidence. Standard deviation of the estimated clock offset is typically 2-3 microseconds, even over busy multi-hop networks. These results are about 100 times better than published results for Network Time Protocol (NTP).

  16. BSD Portals for LINUX 2.0

    NASA Technical Reports Server (NTRS)

    McNab, A. David; Woo, Alex (Technical Monitor)

    1999-01-01

    Portals, an experimental feature of 4.4BSD, extend the file system name space by exporting certain open() requests to a user-space daemon. A portal daemon is mounted into the file name space as if it were a standard file system. When the kernel resolves a pathname and encounters a portal mount point, the remainder of the path is passed to the portal daemon. Depending on the portal "pathname" and the daemon's configuration, some type of open(2) is performed. The resulting file descriptor is passed back to the kernel which eventually returns it to the user, to whom it appears that a "normal" open has occurred. A proxy portalfs file system is responsible for kernel interaction with the daemon. The overall effect is that the portal daemon performs an open(2) on behalf of the kernel, possibly hiding substantial complexity from the calling process. One particularly useful application is implementing a connection service that allows simple scripts to open network sockets. This paper describes the implementation of portals for LINUX 2.0.
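The connection-service use case above (a daemon resolving the remainder of a portal path, such as `tcp/<host>/<port>`, into an open socket) can be sketched in user space. This is not the BSD portalfs implementation; `portal_open` is a hypothetical name, and a real portal daemon would pass the resulting descriptor back to the kernel rather than return a socket object.

```python
import socket

# Conceptual sketch of a portal daemon's job: interpret the path remainder
# ("tcp/<host>/<port>") and perform the open on the caller's behalf.
def portal_open(portal_path):
    proto, host, port = portal_path.split("/")
    if proto != "tcp":
        raise ValueError("only tcp portals in this sketch")
    # the "open" here is a TCP connect; the caller just sees a descriptor
    return socket.create_connection((host, int(port)))
```

With such a service mounted, even a shell script can "open" a network connection by opening an ordinary-looking pathname.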

  17. Berkeley lab checkpoint/restart (BLCR) for Linux clusters

    NASA Astrophysics Data System (ADS)

    Hargrove, Paul H.; Duell, Jason C.

    2006-09-01

    This article describes the motivation, design and implementation of Berkeley Lab Checkpoint/Restart (BLCR), a system-level checkpoint/restart implementation for Linux clusters that targets the space of typical High Performance Computing applications, including MPI. Application-level solutions, including both checkpointing and fault-tolerant algorithms, are recognized as more time and space efficient than system-level checkpoints, which cannot make use of any application-specific knowledge. However, system-level checkpointing allows for preemption, making it suitable for responding to "fault precursors" (for instance, elevated error rates from ECC memory or network CRCs, or elevated temperature from sensors). Preemption can also increase the efficiency of batch scheduling; for instance reducing idle cycles (by allowing for shutdown without any queue draining period or reallocation of resources to eliminate idle nodes when better fitting jobs are queued), and reducing the average queued time (by limiting large jobs to running during off-peak hours, without the need to limit the length of such jobs). Each of these potential uses makes BLCR a valuable tool for efficient resource management in Linux clusters.
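The application-level checkpointing that the article contrasts with BLCR's system-level approach can be sketched as follows: the application serializes only the state it knows it needs, so the checkpoint is compact but application-specific. The function names and the toy computation are invented for illustration.

```python
import pickle

def checkpoint(state):
    # stands in for a write to stable storage
    return pickle.dumps(state)

def restore(blob):
    return pickle.loads(blob)

def compute(state, steps):
    # toy application loop whose entire state is this small dict
    for _ in range(steps):
        state["i"] += 1
        state["acc"] += state["i"]
    return state
```

A system-level checkpointer like BLCR instead captures the whole process image, which costs more space but needs no cooperation from (or knowledge of) the application.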

  18. Kernel-based Linux emulation for Plan 9.

    SciTech Connect

    Minnich, Ronald G.

    2010-09-01

    CNKemu is a kernel-based system for the 9k variant of the Plan 9 kernel. It is designed to provide transparent binary support for programs compiled for IBM's Compute Node Kernel (CNK) on the Blue Gene series of supercomputers. This support allows users to build applications with the standard Blue Gene toolchain, including C++ and Fortran compilers. While the CNK is not Linux, IBM designed the CNK so that the user interface has much in common with the Linux 2.0 system call interface. The Plan 9 CNK emulator hence provides the foundation of kernel-based Linux system call support on Plan 9. In this paper we discuss CNKemu's implementation and some of its more interesting features, such as the ability to easily intermix Plan 9 and Linux system calls.

  19. SLURM: Simple Linux Utility for Resource Management

    SciTech Connect

    Jette, M; Dunlap, C; Garlick, J; Grondona, M

    2002-04-24

    Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, and scheduling modules. The design also includes a scalable, general-purpose communication infrastructure. Development will take place in four phases: Phase I results in a solid infrastructure; Phase II produces a functional but limited interactive job initiation capability without use of the interconnect/switch; Phase III provides switch support and documentation; Phase IV provides job status, fault-tolerance, and job queuing and control through Livermore's Distributed Production Control System (DPCS), a meta-batch and resource management system.

  20. Network operating system

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Long-term and short-term objectives for the development of a network operating system for the Space Station are stated. The short-term objective is to develop a prototype network operating system for a 100 megabit/second fiber optic data bus. The long-term objective is to establish guidelines for writing a detailed specification for a Space Station network operating system. Major milestones are noted. Information is given in outline form.

  1. Linux Kernel Co-Scheduling For Bulk Synchronous Parallel Applications

    SciTech Connect

    Jones, Terry R

    2011-01-01

    This paper describes a kernel scheduling algorithm that is based on co-scheduling principles and that is intended for parallel applications running on 1000 cores or more where inter-node scalability is key. Experimental results for a Linux implementation on a Cray XT5 machine are presented [1]. The results indicate that Linux is a suitable operating system for this new scheduling scheme, and that this design provides a dramatic improvement in scaling performance for synchronizing collective operations at scale.

  2. Linux Kernel Co-Scheduling and Bulk Synchronous Parallelism

    SciTech Connect

    Jones, Terry R

    2012-01-01

    This paper describes a kernel scheduling algorithm that is based on coscheduling principles and that is intended for parallel applications running on 1000 cores or more. Experimental results for a Linux implementation on a Cray XT5 machine are presented. The results indicate that Linux is a suitable operating system for this new scheduling scheme, and that this design provides a dramatic improvement in scaling performance for synchronizing collective operations at scale.

  3. Real Time Linux - The RTOS for Astronomy?

    NASA Astrophysics Data System (ADS)

    Daly, P. N.

    The BoF was attended by about 30 participants, and a free CD of real time Linux (based upon RedHat 5.2) was available. There was a detailed presentation on the nature of real time Linux and the variants for hard real time: New Mexico Tech's RTL and DIAPM's RTAI. Comparison tables between standard Linux and real time Linux responses to time interval generation and interrupt response latency were presented (see elsewhere in these proceedings). The present recommendations are to use RTL for UP machines running the 2.0.x kernels and RTAI for SMP machines running the 2.2.x kernel. Support, both academic and commercial, is available. Some known limitations were presented and the solutions reported, e.g., debugging and hardware support. The features of RTAI (scheduler, fifos, shared memory, semaphores, message queues and RPCs) were described. Typical performance statistics were presented: Pentium-based oneshot tasks running > 30 kHz, 486-based oneshot tasks running at ~10 kHz, and periodic timer tasks running in excess of 90 kHz with average zero jitter peaking to ~13 μs (UP) and ~30 μs (SMP). Some detail on kernel module programming, including coding examples, was presented, showing a typical data acquisition system generating simulated (random) data and writing to a shared memory buffer and a fifo buffer to communicate between real time Linux and user space. All coding examples were complete and tested under RTAI v0.6 and the 2.2.12 kernel. Finally, arguments were raised in support of real time Linux: it is open source, free under the GPL, enables rapid prototyping, has good support, and allows a fully functioning workstation to co-exist with hard real time performance. The counterweights, i.e. the negatives, of a limited set of platforms (x86 and PowerPC only at present), lack of board support, promiscuous root access, and the danger of ignorance of real time programming issues were also discussed.
See ftp://orion.tuc.noao.edu/pub/pnd/rtlbof.tgz for the StarOffice overheads
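The data-acquisition pattern summarized above (a real-time task writing simulated random data into a bounded buffer that user space drains) is, in the talk, implemented as RTAI kernel modules in C; purely as a conceptual analog, the producer/FIFO structure can be sketched in Python. The names `Fifo` and `acquire` are invented for illustration.

```python
from collections import deque
import random

# Bounded FIFO standing in for an RTAI fifo/shared-memory buffer:
# when full, the oldest samples are overwritten.
class Fifo:
    def __init__(self, capacity):
        self.buf = deque(maxlen=capacity)

    def put(self, sample):
        self.buf.append(sample)

    def get_all(self):
        # user-space reader drains whatever has accumulated
        out = list(self.buf)
        self.buf.clear()
        return out

def acquire(fifo, n, seed=0):
    # stands in for the periodic real-time task producing simulated data
    rng = random.Random(seed)
    for _ in range(n):
        fifo.put(rng.random())
```

In the real-time version the producer runs at a fixed period in kernel context and the consumer is an ordinary user process, but the buffer discipline is the same.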

  4. Building the World's Fastest Linux Cluster

    SciTech Connect

    Goldstone, R; Seager, M

    2003-10-24

    Imagine having 2,304 Xeon processors running day and night solving complex problems. With a theoretical peak of 11.2 teraflops, that is just what the MCR cluster at Lawrence Livermore National Laboratory (LLNL) is doing. Over the past several years, Lawrence Livermore National Laboratory has deployed a series of increasingly large and powerful Intel-based Linux clusters. The most significant of these is a cluster known as the MCR (Multiprogrammatic Capability Resource). With 1,152 Intel Xeon (2.4 GHz) dual-processor nodes from Linux NetworX and a high performance interconnect from Quadrics, Ltd., the MCR currently ranks third on the 21st Top 500 Supercomputer Sites list and is the fastest Linux cluster in the world. This feat was accomplished with a total system cost (hardware including maintenance, interconnect, and global file system storage) of under $14 million. Although production clusters like the MCR are still custom built supercomputers that require as much artistry as skill, the experiences of LLNL have helped clear an important path for other clusters to follow.

  5. A General Purpose High Performance Linux Installation Infrastructure

    SciTech Connect

    Wachsmann, Alf

    2002-06-17

    With more and larger Linux clusters being deployed, the question arises of how to install them. This paper addresses that question by proposing a solution using only standard software components. This installation infrastructure scales well for a large number of nodes. It is also usable for installing desktop machines or diskless Linux clients; thus, while not designed specifically for cluster installations, it is nevertheless highly performant. The infrastructure proposed uses PXE as the network boot component on the nodes. It uses DHCP and TFTP servers to deliver IP addresses and a bootloader to all nodes. It then uses kickstart to install Red Hat Linux over NFS. We have implemented this installation infrastructure at SLAC with our given server hardware and installed a 256 node cluster in 30 minutes. This paper presents the measurements from this installation and discusses the bottlenecks in our installation.

  6. Network systems security analysis

    NASA Astrophysics Data System (ADS)

    Yilmaz, İsmail

    2015-05-01

    Network systems security analysis has utmost importance in today's world. Many companies, like banks which give priority to data management, test their own data security systems with "penetration tests" from time to time. In this context, companies must also test their own network/server systems and take precautions, as data security draws attention. Based on this idea, this study researches cyber-attacks thoroughly and examines penetration test techniques. With this information, the cyber-attacks are classified, and the security of network systems is then tested systematically. After the testing period, all data is reported and filed for future reference. Consequently, it is found that human beings are the weakest link in the chain and that simple mistakes may unintentionally cause huge problems. Thus, it is clear that some precautions, such as keeping security software up to date, must be taken to avoid such threats.

  7. The network queueing system

    NASA Technical Reports Server (NTRS)

    Kingsbury, Brent K.

    1986-01-01

    Described is the implementation of a networked, UNIX-based queueing system developed on contract for NASA. The system discussed supports both batch and device requests, and provides the facilities of remote queueing, request routing, remote status, queue access controls, batch request resource quota limits, and remote output return.

  8. Software structure for broadband wireless sensor network system

    NASA Astrophysics Data System (ADS)

    Kwon, Hyeokjun; Oh, Sechang; Yoon, Hargsoon; Varadan, Vijay K.

    2010-04-01

    Zigbee sensor network systems have been investigated for monitoring and analyzing data measured from many sensors, because a Zigbee sensor network has several advantages: low power consumption, compact size, and multi-node connection. However, such a network has the disadvantage that data measured by the sensors cannot be monitored from a remote area, such as a room located in another city. This paper describes a software structure that compensates for this limitation by combining the Zigbee sensor network with wireless LAN technology for remote monitoring of measured sensor data, retaining the benefits of both. The software structure has three main parts. The first part acquires the data from the sensors. The second part gathers the sensor data over wireless Zigbee and sends the data to the monitoring system using wireless LAN; it consists of Linux software packages based on a Samsung 2440 CPU, which has an ARM9 core. The Linux packages include the bootloader, device drivers, kernel, and applications; the applications are a TCP/IP server program, a program interfacing with the Zigbee RF module, and a wireless LAN program. The last part of the software structure receives the sensor data through a TCP/IP client program from the Wireless Gate Unit and displays the measured data graphically using a MATLAB program. The sensor data is measured at a 100 Hz sampling rate with 10-bit resolution, and the wireless data transmission rate per channel is 1.6 kbps.
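The per-channel figures quoted above are mutually consistent if each 10-bit sample is transported in a 16-bit word, which is an assumption on my part (the abstract does not state the framing): 100 samples/s × 16 bits = 1600 bits/s, i.e. the stated 1.6 kbps.

```python
# Back-of-envelope check of the quoted channel data rate, assuming each
# sample is padded to a whole transport word (16 bits assumed here).
def channel_rate_bps(sample_hz, bits_per_sample, word_bits=16):
    assert bits_per_sample <= word_bits
    return sample_hz * word_bits
```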

  9. Achieving Order through CHAOS: the LLNL HPC Linux Cluster Experience

    SciTech Connect

    Braby, R L; Garlick, J E; Goldstone, R J

    2003-05-02

    Since fall 2001, Livermore Computing at Lawrence Livermore National Laboratory has deployed 11 Intel IA-32-based Linux clusters ranging in size up to 1154 nodes. All provide a common programming model and implement a similar cluster architecture. Hardware components are carefully selected for performance, usability, manageability, and reliability and are then integrated and supported using a strategy that evolved from practical experience. Livermore Computing Linux clusters run a common software environment that is developed and maintained in-house while drawing components and additional support from the open source community and industrial partnerships. The environment is based on Red Hat Linux and adds kernel modifications, cluster system management, monitoring and failure detection, resource management, authentication and access control, development environment, and parallel file system. The overall strategy has been successful and demonstrates that world-class high-performance computing resources can be built and maintained using commodity off-the-shelf hardware and open source software.

  10. Network Systems Technician.

    ERIC Educational Resources Information Center

    Ohio State Univ., Columbus. Center on Education and Training for Employment.

    This publication contains 17 subjects appropriate for use in a competency list for the occupation of network systems technician, 1 of 12 occupations within the business/computer technologies cluster. Each unit consists of a number of competencies; a list of competency builders is provided for each competency. Titles of the 17 units are as follows:…

  11. Scalability and Performance of a Large Linux Cluster

    SciTech Connect

    BRIGHTWELL,RONALD B.; PLIMPTON,STEVEN J.

    2000-01-20

    In this paper the authors present performance results from several parallel benchmarks and applications on a 400-node Linux cluster at Sandia National Laboratories. They compare the results on the Linux cluster to performance obtained on a traditional distributed-memory massively parallel processing machine, the Intel TeraFLOPS. They discuss the characteristics of these machines that influence the performance results and identify the key components of the system software that they feel are important to allow for scalability of commodity-based PC clusters to hundreds and possibly thousands of processors.

  12. Network Information System

    Energy Science and Technology Software Center (ESTSC)

    1996-05-01

    The Network Information System (NWIS) was initially implemented in May 1996 as a system in which computing devices could be recorded so that unique names could be generated for each device. Since then the system has grown to be an enterprise wide information system which is integrated with other systems to provide the seamless flow of data through the enterprise. The system tracks data for two main entities: people and computing devices. The following are the types of functions performed by NWIS for these two entities: People Provides source information to the enterprise person data repository for select contractors and visitors Generates and tracks unique usernames and Unix user IDs for every individual granted cyber access Tracks accounts for centrally managed computing resources, and monitors and controls the reauthorization of the accounts in accordance with the DOE mandated interval Computing Devices Generates unique names for all computing devices registered in the system Tracks the following information for each computing device: manufacturer, make, model, Sandia property number, vendor serial number, operating system and operating system version, owner, device location, amount of memory, amount of disk space, and level of support provided for the machine Tracks the hardware address for network cards Tracks the IP address registered to computing devices along with the canonical and alias names for each address Updates the Dynamic Domain Name Service (DDNS) for canonical and alias names Creates the configuration files for DHCP to control the DHCP ranges and allow access to only properly registered computers Tracks and monitors classified security plans for stand-alone computers Tracks the configuration requirements used to set up the machine Tracks the roles people have on machines (system administrator, administrative access, user, etc...) Allows systems administrators to track changes made on the machine (both hardware and software) Generates an

  13. Networked differential GPS system

    NASA Technical Reports Server (NTRS)

    Mueller, K. Tysen (Inventor); Loomis, Peter V. W. (Inventor); Kalafus, Rudolph M. (Inventor); Sheynblat, Leonid (Inventor)

    1994-01-01

    An embodiment of the present invention relates to a worldwide network of differential GPS reference stations (NDGPS) that continually track the entire GPS satellite constellation and provide interpolations of reference station corrections tailored for particular user locations between the reference stations. Each reference station takes real-time ionospheric measurements with codeless cross-correlating dual-frequency carrier GPS receivers and computes real-time orbit ephemerides independently. An absolute pseudorange correction (PRC) is defined for each satellite as a function of a particular user's location. A map of the function is constructed, with iso-PRC contours. The network measures the PRCs at a few points, the so-called reference stations, and constructs an iso-PRC map for each satellite. Corrections are interpolated for each user's site on a subscription basis. The data bandwidths are kept to a minimum by transmitting information that cannot be obtained directly by the user and by updating information by classes and according to how quickly each class of data goes stale given the realities of the GPS system. Sub-decimeter-level kinematic accuracy over a given area is accomplished by establishing a mini-fiducial network.
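The core idea above, tailoring a pseudorange correction to a user location lying between reference stations, can be illustrated with a simple interpolation scheme. The patent builds iso-PRC contour maps per satellite; the inverse-distance weighting below is only an illustrative stand-in, and `interpolate_prc` and its inputs are invented for the sketch.

```python
import math

# Illustrative interpolation of a satellite's pseudorange correction (PRC)
# at a user location from the PRCs measured at nearby reference stations,
# using inverse-distance weighting.
def interpolate_prc(user, stations):
    """user: (x, y); stations: list of ((x, y), prc_meters)."""
    num = den = 0.0
    for (sx, sy), prc in stations:
        d = math.hypot(user[0] - sx, user[1] - sy)
        if d == 0.0:
            return prc                  # user sits on a reference station
        w = 1.0 / d
        num += w * prc
        den += w
    return num / den
```

A production system would also weight by correction age and satellite geometry, but the sketch shows why a user midway between two stations receives roughly the average of their corrections.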

  14. Distributed System Intruder Tools, Trinoo and Tribe Flood Network

    SciTech Connect

    Criscuolo, P.J.; Rathbun, T

    1999-12-21

    Trinoo and Tribe Flood Network (TFN) are new forms of Denial of Service (DoS) attacks. These attacks are designed to bring down a computer or network by overloading it with a large amount of network traffic using TCP, UDP, or ICMP. In the past, these attacks came from a single location and were easy to detect. Trinoo and TFN are distributed system intruder tools. These tools launch DoS attacks from multiple computer systems at a target system simultaneously. This makes the assault hard to detect and almost impossible to track to the original attacker. Because these attacks can be launched from hundreds of computers under the command of a single attacker, they are far more dangerous than any DoS attack launched from a single location. These distributed tools have only been seen on Solaris and Linux machines, but there is no reason why they could not be modified for other UNIX machines. The target system can also be of any type because the attack is based on the TCP/IP architecture, not a flaw in any particular operating system (OS). CIAC considers the risks presented by these DoS tools to be high.

  15. Network Systems Administration Needs Assessment.

    ERIC Educational Resources Information Center

    Lexington Community Coll., KY. Office of Institutional Research.

    In spring 1996, Lexington Community College (LCC) in Kentucky conducted a survey to gather information on employment trends and educational needs in the field of network systems administration (NSA). NSA duties involve the installation and administration of network operating systems, applications software, and networking infrastructure;…

  16. Transitioning to Intel-based Linux Servers in the Payload Operations Integration Center

    NASA Technical Reports Server (NTRS)

    Guillebeau, P. L.

    2004-01-01

    The MSFC Payload Operations Integration Center (POIC) is the focal point for International Space Station (ISS) payload operations. The POIC contains the facilities, hardware, software and communication interfaces necessary to support payload operations. ISS ground system support for processing and display of real-time spacecraft telemetry and command data has been operational for several years. The hardware components were reaching end of life and vendor costs were increasing while ISS budgets were becoming severely constrained. Therefore it has been necessary to migrate the Unix portions of our ground systems to commodity-priced Intel-based Linux servers. The overall migration to Intel-based Linux servers in the control center involves changes to the hardware architecture, including networks, data storage, and highly available resources. This paper will concentrate on the Linux migration implementation for the software portion of our ground system. The migration began with 3.5 million lines of code running on Unix platforms with separate servers for telemetry, command, payload information management systems, web, system control, remote server interface and databases. The Intel-based system is scheduled to be available for initial operational use by August 2004. This paper will address the Linux migration study approach including the proof of concept, the criticality of customer buy-in, the need for a smooth transition, and the importance of beginning with POSIX compliant code. It will focus on the development approach explaining the software lifecycle. Other aspects of development will be covered including phased implementation, interim milestones, and metrics measurements and reporting mechanisms. This paper will also address the testing approach covering all levels of testing including development, development integration, IV&V, user beta testing and acceptance testing. Test results including performance numbers compared with Unix servers will be included.

  17. Views of wireless network systems.

    SciTech Connect

    Young, William Frederick; Duggan, David Patrick

    2003-10-01

    Wireless networking is becoming a common element of industrial, corporate, and home networks. Commercial wireless network systems have become reliable, while the cost of these solutions has become more affordable than equivalent wired network solutions. The security risks of wireless systems are higher than those of wired systems and have not been studied in depth. This report starts to bring together information on wireless architectures and their connection to wired networks. We detail the information contained in the many different views of a wireless network system. The method of using multiple views of a system to assist in the determination of vulnerabilities comes from the Information Design Assurance Red Team (IDART{trademark}) Methodology of system analysis developed at Sandia National Laboratories.

  18. Language Networks as Complex Systems

    ERIC Educational Resources Information Center

    Lee, Max Kueiming; Ou, Sheue-Jen

    2008-01-01

    Starting in the late eighties, with a growing discontent with analytical methods in science and the growing power of computers, researchers began to study complex systems such as living organisms, evolution of genes, biological systems, brain neural networks, epidemics, ecology, economy, social networks, etc. In the early nineties, the research…

  19. Networking Systems and Equipment.

    ERIC Educational Resources Information Center

    Kranz, Maciej

    2002-01-01

    Describes how high-bandwidth networks are delivering new educational and administrative opportunities for K-12 school districts. Addresses implementing the new network, upgrading to a switched environment, adding intelligent switches, IP telephony, and wireless technology. Describes deployment and benefits of broadband in the Denver public schools…

  20. Optimizing Performance on Linux Clusters Using Advanced Communication Protocols: Achieving Over 10 Teraflops on a 8.6 Teraflops Linpack-Rated Linux Cluster

    SciTech Connect

    Krishnan, Manoj Kumar; Nieplocha, Jarek

    2005-04-26

    Advancements in high-performance networks (Quadrics, Infiniband, or Myrinet) continue to improve the efficiency of modern clusters. However, the average application efficiency is only a small fraction of the system's peak efficiency. This paper describes techniques for optimizing application performance on Linux clusters using Remote Memory Access communication protocols. The effectiveness of these optimizations is presented in the context of an application kernel, dense matrix multiplication. The result was achieving over 10 teraflops on an HP Linux cluster on which LINPACK performance is measured as 8.6 teraflops.

  1. The APS control system network

    SciTech Connect

    Sidorowicz, K.V.; McDowell, W.P.

    1995-12-31

    The APS accelerator control system is a distributed system consisting of operator interfaces, a network, and computer-controlled interfaces to hardware. This implementation of a control system has come to be called the "Standard Model." The operator interface is a UNIX-based workstation with an X-windows graphical user interface. The workstation may be located at any point on the facility network and maintain full functionality. The function of the network is to provide a generalized communication path between the host computers, operator workstations, input/output crates, and other hardware that comprise the control system. The crate or input/output controller (IOC) provides direct control and input/output interfaces for each accelerator subsystem. The network is an integral part of all modern control systems and network performance will determine many characteristics of a control system. This paper will describe the overall APS network and examine the APS control system network in detail. Metrics are provided on the performance of the system under various conditions.

  2. Developing and Benchmarking Native Linux Applications on Android

    NASA Astrophysics Data System (ADS)

    Batyuk, Leonid; Schmidt, Aubrey-Derrick; Schmidt, Hans-Gunther; Camtepe, Ahmet; Albayrak, Sahin

    Smartphones are becoming increasingly popular as more and more smartphone platforms emerge. Special attention has been gained by the open source platform Android, which was presented by the Open Handset Alliance (OHA), whose members include Google, Motorola, and HTC. Android uses a Linux kernel and a stripped-down userland with a custom Java VM set on top. The resulting system joins the advantages of both environments, while third parties are intended to develop only Java applications at the moment.

  3. Proposal of Network-Based Multilingual Space Dictionary Database System

    NASA Astrophysics Data System (ADS)

    Yoshimitsu, T.; Hashimoto, T.; Ninomiya, K.

    2002-01-01

    The International Academy of Astronautics (IAA) is now constructing a multilingual dictionary database system of space-friendly terms. The database consists of a lexicon and dictionaries of multiple languages. The lexicon is a table which relates corresponding terminology in different languages. Each language has a dictionary which contains terms and their definitions. The database is designed for use on the Internet: updating and searching the terms and definitions are conducted via the network, and the database is maintained through international cooperation. New words arise day by day, so the ability to easily input new words and their definitions is required for the longstanding success of the system. The main key of the database is an English term which is approved at meetings held once or twice with the working group members. Each language has at least one working group member who is responsible for assigning the corresponding term and the definition of the term in his/her native language. Inputting and updating terms and their definitions can be conducted via the Internet from the office of each member, which may be located in his/her native country. The system is built on a freely distributed database server program running on the Linux operating system, which will be installed at the head office of IAA. Once it is installed, it will be open to all IAA members, who can search the terms via the Internet. Currently the authors are constructing the prototype system which is described in this paper.

  4. Climate tools in mainstream Linux distributions

    NASA Astrophysics Data System (ADS)

    McKinstry, Alastair

    2015-04-01

    Debian/meteorology is a project to integrate climate tools and analysis software into the mainstream Debian/Ubuntu Linux distributions. This work describes lessons learnt and recommends practices for scientific software to be adopted and maintained in OS distributions. In addition to standard analysis tools (cdo, grads, ferret, metview, ncl, etc.), software used by the Earth System Grid Federation was chosen for integration, to enable ESGF portals to be built on this base; however, exposing scientific codes via web APIs exposes security weaknesses that are normally ignorable. How tools are hardened, and what changes are required to handle security upgrades, are described. Second, enabling libraries and components (e.g. Python modules) to be integrated requires planning by their authors: it is not sufficient to assume users can upgrade their code when you make incompatible changes. Here, practices are recommended to enable upgrades and co-installability of C, C++, Fortran and Python codes. Finally, software packages such as NetCDF and HDF5 can be built in multiple configurations. Tools may then expect incompatible versions of these libraries (e.g. serial and parallel) to be simultaneously available; how this was solved in Debian using "pkg-config" and shared library interfaces is described, and best practices for software writers to enable this are summarised.

  5. Multilevel Complex Networks and Systems

    NASA Astrophysics Data System (ADS)

    Caldarelli, Guido

    2014-03-01

    Network theory has been a powerful tool to model isolated complex systems. However, the classical approach does not take into account the interactions often present among different systems. Hence, the scientific community is nowadays concentrating its efforts on the foundations of new mathematical tools for understanding what happens when multiple networks interact. The case of economic and financial networks represents a paramount example of multilevel networks. In the case of trade among countries, the different levels can be described by the different granularity of the trading relations; indeed, we now have data from the scale of consumers up to the country level. In the case of financial institutions, we have a variety of levels at the same scale: for example, one bank can appear in the interbank network, the ownership network, and the CDS network. In both cases the systemically important vertices need to be determined by different procedures of centrality definition and community detection. In this talk I will present some specific cases of study related to these topics and present the regularities found. Acknowledged support from EU FET Project ``Multiplex'' 317532.

  6. Computation and Communication Evaluation of an Authentication Mechanism for Time-Triggered Networked Control Systems.

    PubMed

    Martins, Goncalo; Moondra, Arul; Dubey, Abhishek; Bhattacharjee, Anirban; Koutsoukos, Xenofon D

    2016-01-01

    In modern networked control applications, confidentiality and integrity are important features to address in order to protect against attacks. Moreover, network control systems are a fundamental part of the communication components of current cyber-physical systems (e.g., automotive communications). Many networked control systems employ Time-Triggered (TT) architectures that provide mechanisms enabling the exchange of precise and synchronous messages. TT systems have computation and communication constraints, and, to enable secure communications in the network, it is important to evaluate the computational and communication overhead of implementing secure communication mechanisms. This paper presents a comprehensive analysis and evaluation of the effects of adding a Hash-based Message Authentication (HMAC) to TT networked control systems. The contributions of the paper include (1) the analysis and experimental validation of the communication overhead, as well as a scalability analysis that utilizes the experimental result for both wired and wireless platforms and (2) an experimental evaluation of the computational overhead of HMAC based on a kernel-level Linux implementation. An automotive application is used as an example, and the results show that it is feasible to implement a secure communication mechanism without interfering with the existing automotive controller execution times. The methods and results of the paper can be used for evaluating the performance impact of security mechanisms and, thus, for the design of secure wired and wireless TT networked control systems. PMID:27463718
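    The kind of cost being measured can be sketched in userspace with Python's standard `hmac` module (the paper's implementation is kernel-level; the key size, payload size, and loop count below are assumptions for illustration):

```python
import hmac, hashlib, os, time

def authenticate(key, payload):
    # Append an HMAC-SHA256 tag to a time-triggered message payload
    tag = hmac.new(key, payload, hashlib.sha256).digest()
    return payload + tag

def verify(key, message):
    payload, tag = message[:-32], message[-32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    # compare_digest avoids timing side channels in the comparison
    return hmac.compare_digest(expected, tag)

key = os.urandom(32)
payload = os.urandom(64)            # a typical small TT frame (assumed size)
msg = authenticate(key, payload)

start = time.perf_counter()
for _ in range(10000):
    verify(key, msg)
elapsed = time.perf_counter() - start
print(f"per-message verify overhead: {elapsed / 10000 * 1e6:.1f} us")
```

Measuring the per-message tag computation against the TT slot length is the essence of the overhead evaluation the abstract describes.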

  7. A package of Linux scripts for the parallelization of Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Badal, Andreu; Sempau, Josep

    2006-09-01

    also provided. Typical running time: The execution time of each script largely depends on the number of computers that are used, the actions that are to be performed and, to a lesser extent, on the network connection bandwidth. Unusual features of the program: Any computer on the Internet with a Secure Shell client/server program installed can be used as a node of a virtual computer cluster for parallel calculations with the sequential source code. The simplicity of the parallelization scheme makes the use of this package a straightforward task, which does not require installing any additional libraries.

    Program summary 2
    Title of program: seedsMLCG
    Catalogue identifier: ADYE_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYE_v1_0
    Program obtainable from: CPC Program Library, Queen's University of Belfast, Northern Ireland
    Computer for which the program is designed and others in which it is operable: Any computer with a FORTRAN compiler
    Operating systems under which the program has been tested: Linux (RedHat 8.0, SuSe 8.1, Debian Woody 3.1), MS Windows (2000, XP)
    Compilers: GNU FORTRAN g77 (Linux and Windows); g95 (Linux); Intel Fortran Compiler 7.1 (Linux); Compaq Visual Fortran 6.1 (Windows)
    Programming language used: FORTRAN 77
    No. of bits in a word: 32
    Memory required to execute with typical data: 500 kilobytes
    No. of lines in distributed program, including test data, etc.: 492
    No. of bytes in distributed program, including test data, etc.: 5582
    Distribution format: tar.gz
    Nature of the physical problem: Statistically independent results from different runs of a Monte Carlo code can be obtained using uncorrelated sequences of random numbers on each execution. Multiplicative linear congruential generators (MLCG), or other generators that are based on them such as RANECU, can be adapted to produce these sequences. Method of solution: For a given MLCG, the presented program calculates initialization values that produce disjoint, consecutive sequences of pseudo
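    The seed-skipping idea behind seedsMLCG — giving each parallel run an initial seed that lies a fixed number of steps ahead in the same MLCG stream, so the runs consume disjoint, consecutive blocks of the sequence — can be sketched as follows. The constants are those of the first RANECU generator; the function names are illustrative, not the FORTRAN program's interface.

```python
def skip_seed(seed, a, m, k):
    """Seed after k steps of the MLCG s -> a*s mod m,
    computed in O(log k) via three-argument pow()."""
    return (pow(a, k, m) * seed) % m

def disjoint_seeds(seed, a, m, n_runs, run_len):
    """Initial seeds giving n_runs disjoint, consecutive blocks
    of run_len pseudorandom numbers each."""
    return [skip_seed(seed, a, m, i * run_len) for i in range(n_runs)]

# Constants of the first of the two MLCGs combined by RANECU
A, M = 40014, 2147483563
seeds = disjoint_seeds(12345, A, M, n_runs=4, run_len=10**6)

# Sanity check: stepping the first seed run_len times lands exactly
# on the second run's initial seed, so the blocks are consecutive.
s = seeds[0]
for _ in range(10**6):
    s = (A * s) % M
assert s == seeds[1]
```

Because a pure MLCG has no additive term, skipping ahead reduces to a single modular exponentiation, which is what makes pre-computing per-run seeds cheap even for very long blocks.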

  8. Lightweight Corefile Library for Linux

    Energy Science and Technology Software Center (ESTSC)

    2007-09-22

    Liblwcf attempts to generate stack traces from failing processes as opposed to dumping full corefiles. This can be beneficial when running large parallel applications, where dumping of a full memory image could flood network filesystem servers.

  9. Wireless nanosensor network system

    NASA Astrophysics Data System (ADS)

    Oh, Sechang; Kwon, Hyukjun; Kegley, Lauren; Yoon, Hargsoon; Varadan, Vijay K.

    2009-03-01

    Many types of wireless modules are being developed to enhance wireless performance with low power consumption, compact size, high data rates, and wide range coverage. However, trade-offs must be taken into consideration in order to satisfy all aspects of wireless performance. For example, in order to increase the data rate and range coverage, power consumption must be sacrificed. To overcome these drawbacks, this paper presents a wireless client module which offers low power consumption along with a wireless receiver module that has the strength to provide high data rates and wide range coverage. Adopting the Zigbee protocol in the wireless client module, the power consumption performance is enhanced so that it plays the part of the mobile device. On the other hand, the wireless receiver module, adopting both the Zigbee and Wi-Fi protocols, provides high data rates, wide range coverage, and easy connection to the existing Internet network, so that it plays the part of the portable device. The system is demonstrated by monitoring data for gait analysis. The results show that the sensing data being measured can be monitored in any remote place with access to the Internet.

  10. Promoting Social Network Awareness: A Social Network Monitoring System

    ERIC Educational Resources Information Center

    Cadima, Rita; Ferreira, Carlos; Monguet, Josep; Ojeda, Jordi; Fernandez, Joaquin

    2010-01-01

    To increase communication and collaboration opportunities, members of a community must be aware of the social networks that exist within that community. This paper describes a social network monitoring system--the KIWI system--that enables users to register their interactions and visualize their social networks. The system was implemented in a…

  11. Network analyses in systems pharmacology

    PubMed Central

    Berger, Seth I.; Iyengar, Ravi

    2009-01-01

    Systems pharmacology is an emerging area of pharmacology which utilizes network analysis of drug action as one of its approaches. By considering drug actions and side effects in the context of the regulatory networks within which the drug targets and disease gene products function, network analysis promises to greatly increase our knowledge of the mechanisms underlying the multiple actions of drugs. Systems pharmacology can provide new approaches for drug discovery for complex diseases. The integrated approach used in systems pharmacology can allow for drug action to be considered in the context of the whole genome. Network-based studies are becoming an increasingly important tool in understanding the relationships between drug action and disease susceptibility genes. This review discusses how analysis of biological networks has contributed to the genesis of systems pharmacology and how these studies have improved global understanding of drug targets, suggested new targets and approaches for therapeutics, and provided a deeper understanding of the effects of drugs. Taken together, these types of analyses can lead to new therapeutic options while improving the safety and efficacy of existing medications. Contact: ravi.iyengar@mssm.edu PMID:19648136

  12. Linux Incident Response Volatile Data Analysis Framework

    ERIC Educational Resources Information Center

    McFadden, Matthew

    2013-01-01

    Cyber incident response is an emphasized subject area in cybersecurity in information technology with increased need for the protection of data. Due to ongoing threats, cybersecurity imposes many challenges and requires new investigative response techniques. In this study a Linux Incident Response Framework is designed for collecting volatile data…

  13. The Design of NetSecLab: A Small Competition-Based Network Security Lab

    ERIC Educational Resources Information Center

    Lee, C. P.; Uluagac, A. S.; Fairbanks, K. D.; Copeland, J. A.

    2011-01-01

    This paper describes a competition-style of exercise to teach system and network security and to reinforce themes taught in class. The exercise, called NetSecLab, is conducted on a closed network with student-formed teams, each with their own Linux system to defend and from which to launch attacks. Students are expected to learn how to: 1) install…

  14. Network operating system focus technology

    NASA Technical Reports Server (NTRS)

    1985-01-01

    An activity structured to provide specific design requirements and specifications for the Space Station Data Management System (DMS) Network Operating System (NOS) is outlined. Examples are given of the types of supporting studies and implementation tasks presently underway to realize a DMS test bed capability to develop hands-on understanding of NOS requirements as driven by actual subsystem test beds participating in the overall Johnson Space Center test bed program. Classical operating system elements and principal NOS functions are listed.

  15. Berkeley Lab Checkpoint/Restart for Linux

    Energy Science and Technology Software Center (ESTSC)

    2003-11-15

    This package implements system-level checkpointing of scientific applications running on Linux clusters in a manner suitable for implementing preemption, migration and fault recovery by a batch scheduler. The design includes documented interfaces for a cooperating application or library to implement extensions to the checkpoint system, such as consistent checkpointing of distributed MPI applications. Using this package with an appropriate MPI implementation, the vast majority of scientific applications which use MPI for communication are checkpointable without any modifications to the application source code. Extending the VMAdump code used in the bproc system, the BLCR kernel modules provide three additional features necessary for useful system-level checkpointing of scientific applications (installation of bproc is not required to use BLCR). First, this package provides the bookkeeping and coordination required for checkpointing and restoring multi-threaded and multi-process applications running on a single node. Second, this package provides a system call interface allowing checkpoints to be requested by any authorized process, such as a batch scheduler. Third, this package provides a system call interface allowing applications and/or application libraries to extend the checkpoint capabilities in user space, for instance to provide coordination of checkpoints of distributed MPI applications. The "libcr" library in this package implements a wrapper around the system call interface exported by the kernel modules, and maintains bookkeeping to allow registration of callbacks by runtime libraries. This library also provides the necessary thread-safety and signal-safety mechanisms. Thus, this library provides the means for applications and run-time libraries, such as MPI, to register callback functions to be run when a checkpoint is taken or when restarting from one. This library may also be used as an LD_PRELOAD to enable checkpointing of applications with development

  16. Networked control of microgrid system of systems

    NASA Astrophysics Data System (ADS)

    Mahmoud, Magdi S.; Rahman, Mohamed Saif Ur; AL-Sunni, Fouad M.

    2016-08-01

    The microgrid has made its mark in distributed generation and has attracted widespread research. However, the microgrid is a complex system which needs to be viewed from an intelligent system of systems perspective. In this paper, a networked control system of systems is designed for an islanded microgrid system consisting of three distributed generation units as three subsystems supplying a load. The controller stabilises the microgrid system in the presence of communication imperfections such as packet dropouts and delays. Simulation results are included to elucidate the effectiveness of the proposed control strategy.
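    The effect of packet dropouts on a networked control loop can be illustrated with a toy scalar plant. This is a generic sketch, not the paper's microgrid model or controller: the plant dynamics, feedback gain, and hold-last-value dropout model are all assumptions made for illustration.

```python
import random

def simulate(drop_prob, steps=200, seed=1):
    """Scalar discrete-time plant x+ = a*x + b*u under state feedback
    u = -k*x, where each control packet is lost with probability
    drop_prob and the actuator then holds the last received value.
    Returns |x| after `steps` steps (small = stabilised)."""
    rng = random.Random(seed)
    a, b, k = 1.1, 1.0, 0.9    # open-loop unstable plant, stabilising gain
    x, u = 1.0, 0.0
    for _ in range(steps):
        if rng.random() >= drop_prob:   # control packet arrives
            u = -k * x
        x = a * x + b * u               # plant update with (possibly stale) u
    return abs(x)

print(simulate(0.0), simulate(0.3))
```

With no dropouts the closed loop contracts by the factor `a - b*k = 0.2` every step and the state decays to zero; as the dropout probability grows, stale control values degrade performance, which is the phenomenon the paper's controller is designed to tolerate.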

  17. Architecting Communication Network of Networks for Space System of Systems

    NASA Technical Reports Server (NTRS)

    Bhasin, Kul B.; Hayden, Jeffrey L.

    2008-01-01

    The National Aeronautics and Space Administration (NASA) and the Department of Defense (DoD) are planning Space System of Systems (SoS) to address the new challenges of space exploration, defense, communications, navigation, Earth observation, and science. In addition, these complex systems must provide interoperability, enhanced reliability, common interfaces, dynamic operations, and autonomy in system management. Both NASA and the DoD have chosen to meet the new demands with high data rate communication systems and space Internet technologies that bring Internet Protocols (IP), routers, servers, software, and interfaces to space networks to enable as much autonomous operation of those networks as possible. These technologies reduce the cost of operations and, with higher bandwidths, support the expected voice, video, and data needed to coordinate activities at each stage of an exploration mission. In this paper, we discuss, in a generic fashion, how the architectural approaches and processes are being developed and used for defining a hypothetical communication and navigation networks infrastructure to support lunar exploration. Examples are given of the products generated by the architecture development process.

  18. Comparing genomes to computer operating systems in terms of the topology and evolution of their regulatory control networks.

    PubMed

    Yan, Koon-Kiu; Fang, Gang; Bhardwaj, Nitin; Alexander, Roger P; Gerstein, Mark

    2010-05-18

    The genome has often been called the operating system (OS) for a living organism. A computer OS is described by a regulatory control network termed the call graph, which is analogous to the transcriptional regulatory network in a cell. To apply our firsthand knowledge of the architecture of software systems to understand cellular design principles, we present a comparison between the transcriptional regulatory network of a well-studied bacterium (Escherichia coli) and the call graph of a canonical OS (Linux) in terms of topology and evolution. We show that both networks have a fundamentally hierarchical layout, but there is a key difference: The transcriptional regulatory network possesses a few global regulators at the top and many targets at the bottom; conversely, the call graph has many regulators controlling a small set of generic functions. This top-heavy organization leads to highly overlapping functional modules in the call graph, in contrast to the relatively independent modules in the regulatory network. We further develop a way to measure evolutionary rates comparably between the two networks and explain this difference in terms of network evolution. The process of biological evolution via random mutation and subsequent selection tightly constrains the evolution of regulatory network hubs. The call graph, however, exhibits rapid evolution of its highly connected generic components, made possible by designers' continual fine-tuning. These findings stem from the design principles of the two systems: robustness for biological systems and cost effectiveness (reuse) for software systems. PMID:20439753
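    The "top-heavy" contrast the authors describe — a few master regulators with many targets in the transcriptional network, versus many callers reusing a few generic functions in the call graph — can be illustrated by counting regulators and targets in toy directed graphs. The edge lists and node names below are made up for illustration only.

```python
from collections import Counter

def degree_profile(edges):
    """Count regulators (nodes with out-degree > 0) and targets
    (nodes with in-degree > 0) in a directed graph given as
    (regulator, target) / (caller, callee) edge pairs."""
    out_deg, in_deg = Counter(), Counter()
    for src, dst in edges:
        out_deg[src] += 1
        in_deg[dst] += 1
    nodes = set(out_deg) | set(in_deg)
    regulators = sum(1 for n in nodes if out_deg[n] > 0)
    targets = sum(1 for n in nodes if in_deg[n] > 0)
    return regulators, targets

# Toy regulatory network: one master regulator, many targets
reg = [("crp", g) for g in ("g1", "g2", "g3", "g4", "g5")]
# Toy call graph: many callers reusing one generic utility ("top-heavy")
calls = [(f, "memcpy") for f in ("f1", "f2", "f3", "f4", "f5")]
print(degree_profile(reg), degree_profile(calls))
```

The same counting applied to the real E. coli network and the Linux call graph is, in essence, how the paper's topological comparison is grounded.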

  19. Network centrality of metro systems.

    PubMed

    Derrible, Sybil

    2012-01-01

    Whilst being hailed as the remedy to the world's ills, cities will need to adapt in the 21st century. In particular, the role of public transport is likely to increase significantly, and new methods and techniques to better plan transit systems are in dire need. This paper examines one fundamental aspect of transit: network centrality. By applying the notion of betweenness centrality to 28 worldwide metro systems, the main goal of this paper is to study the emergence of global trends in the evolution of centrality with network size and examine several individual systems in more detail. Betweenness was notably found to consistently become more evenly distributed with size (i.e. no "winner takes all"), unlike other complex network properties. Two distinct regimes were also observed that are representative of their structure. Moreover, the share of betweenness was found to decrease in a power law with size (with exponent 1 for the average node), but the share of the most central nodes decreases much more slowly than that of the least central nodes (0.87 vs. 2.48). Finally, the betweenness of individual stations in several systems was examined, which can be useful to locate stations where passengers can be redistributed to relieve pressure from overcrowded stations. Overall, this study offers significant insights that can help planners in their task to design the systems of tomorrow, and similar undertakings can easily be imagined for other urban infrastructure systems (e.g., electricity grid, water/wastewater system, etc.) to develop more sustainable cities. PMID:22792373
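    Betweenness centrality of the kind applied here is typically computed with Brandes' algorithm. A minimal stdlib-only sketch on a toy five-station line (the graph is illustrative, not one of the 28 metro systems studied):

```python
from collections import deque, defaultdict

def betweenness(graph):
    """Brandes' algorithm for unnormalised node betweenness on an
    unweighted undirected graph given as {node: [neighbours]}."""
    bc = dict.fromkeys(graph, 0.0)
    for s in graph:
        stack, preds = [], defaultdict(list)
        sigma = dict.fromkeys(graph, 0)   # number of shortest paths from s
        dist = dict.fromkeys(graph, -1)
        sigma[s], dist[s] = 1, 0
        q = deque([s])
        while q:                          # BFS from s
            v = q.popleft()
            stack.append(v)
            for w in graph[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = dict.fromkeys(graph, 0.0)
        while stack:                      # back-propagate dependencies
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    # undirected graph: each pair of endpoints is counted twice
    return {v: c / 2 for v, c in bc.items()}

# Toy line A-B-C-D-E: the middle station carries all through traffic
line = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"],
        "D": ["C", "E"], "E": ["D"]}
print(betweenness(line))
```

On the line, station C lies on the shortest path of four station pairs and B and D on three each, mirroring the paper's observation that central stations concentrate through traffic.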

  20. Network command processing system overview

    NASA Technical Reports Server (NTRS)

    Nam, Yon-Woo; Murphy, Lisa D.

    1993-01-01

    The Network Command Processing System (NCPS) developed for the National Aeronautics and Space Administration (NASA) Ground Network (GN) stations is a spacecraft command system utilizing a MULTIBUS I/68030 microprocessor. This system was developed and implemented at ground stations worldwide to provide a Project Operations Control Center (POCC) with command capability for support of spacecraft operations such as the LANDSAT, Shuttle, Tracking and Data Relay Satellite, and Nimbus-7. The NCPS consolidates multiple modulation schemes for supporting various manned/unmanned orbital platforms. The NCPS interacts with the POCC and a local operator to process configuration requests, generate modulated uplink sequences, and inform users of the ground command link status. This paper presents the system functional description, hardware description, and the software design.

  1. The LILARTI neural network system

    SciTech Connect

    Allen, J.D. Jr.; Schell, F.M.; Dodd, C.V.

    1992-10-01

    The material of this Technical Memorandum is intended to provide the reader with conceptual and technical background information on the LILARTI neural network system in detail sufficient to confer an understanding of the LILARTI method as it is presently applied and to facilitate application of the method to problems beyond the scope of this document. Of particular importance in this regard are the descriptive sections and the Appendices, which include operating instructions, partial listings of program output and data files, and network construction information.

  2. Systems engineering technology for networks

    NASA Technical Reports Server (NTRS)

    1994-01-01

    The report summarizes research pursued within the Systems Engineering Design Laboratory at Virginia Polytechnic Institute and State University between May 16, 1993 and January 31, 1994. The project was proposed in cooperation with the Computational Science and Engineering Research Center at Howard University. Its purpose was to investigate emerging systems engineering tools and their applicability in analyzing the NASA Network Control Center (NCC) on the basis of metrics and measures.

  3. The automated ground network system

    NASA Technical Reports Server (NTRS)

    Smith, Miles T.; Militch, Peter N.

    1993-01-01

    The primary goal of the Automated Ground Network System (AGNS) project is to reduce Ground Network (GN) station life-cycle costs. To accomplish this goal, the AGNS project will employ an object-oriented approach to develop a new infrastructure that will permit continuous application of new technologies and methodologies to the Ground Network's class of problems. The AGNS project is a Total Quality (TQ) project. Through use of an open collaborative development environment, developers and users will have equal input into the end-to-end design and development process. This will permit direct user input and feedback and will enable rapid prototyping for requirements clarification. This paper describes the AGNS objectives, operations concept, and proposed design.

  4. [Making a low cost IPSec router on Linux and the assessment for practical use].

    PubMed

    Amiki, M; Horio, M

    2001-09-01

We installed Linux and FreeS/WAN on a PC/AT compatible machine to make an IPSec router. We measured ping/ftp times both within the university and between the university and the external network (the Internet). Between the university and the external network, there were no differences. We therefore concluded that CPU load is not significant on low-speed networks, either because packets exchanged via the Internet are small, or because the VPN's compression offsets the cost of encryption and decryption. Within the university, on the other hand, the IPSec router's performance dropped about 20-30% compared with normal IP communication, but this is not a serious problem for practical use. VPN appliances are becoming cheaper, but they do not yet provide the functionality needed for a full VPN environment. Therefore, for a full VPN environment at low cost, we believe a VPN router built on Linux is the right choice. PMID:11570054

  5. The AMSC network control system

    NASA Technical Reports Server (NTRS)

    Garner, William B.

    1990-01-01

    The American Mobile Satellite Corporation (AMSC) is going to construct, launch, and operate a satellite system in order to provide mobile satellite services to the United States. AMSC is going to build, own, and operate a Network Control System (NCS) for managing the communications usage of the satellites, and to control circuit switched access between mobile earth terminals and feeder-link earth stations. An overview of the major NCS functional and performance requirements, the control system physical architecture, and the logical architecture is provided.

  6. Autonomous telemetry system by using mobile networks for a long-term seismic observation

    NASA Astrophysics Data System (ADS)

    Hirahara, S.; Uchida, N.; Nakajima, J.

    2012-04-01

When a large earthquake occurs, it is important to know the detailed distribution of aftershocks immediately after the main shock in order to estimate the fault plane. A large amount of seismic data is also required to determine the three-dimensional seismic velocity structure around the focal area. We have developed an autonomous telemetry system using mobile networks that is specialized for aftershock observations. Because the newly developed system enables quick installation and real-time data transmission over mobile networks, we can construct a dense online seismic network even in mountain areas where conventional wired networks are not available. The system is equipped with solar panels that charge a lead-acid battery, enabling long-term seismic observation without maintenance. Furthermore, it enables continuous observation at low cost with flat-rate or prepaid Internet access. To expand mobile coverage and back up Internet access, the system can be configured with multiple mobile carriers. An embedded Linux micro server runs programs that automatically control the Internet connection and data transmission. Status monitoring and remote maintenance are available via the Internet. In case of a communication failure, internal storage can back up data for two years. The power consumption of the communication device ranges from 2.5 to 4.0 W. With a 50 Ah lead-acid battery, the system can continue to record data for four days if battery charging by the solar panels is temporarily unavailable.
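The four-day figure can be sanity-checked with a simple power budget. A sketch, in which the 12 V nominal battery voltage and the usable capacity fraction are assumptions not stated in the abstract:

```python
def runtime_hours(capacity_ah, voltage_v, load_w, usable_fraction):
    """Hours of operation from a battery, ignoring conversion losses."""
    energy_wh = capacity_ah * voltage_v * usable_fraction
    return energy_wh / load_w

# 50 Ah lead-acid pack at a nominal 12 V (assumed), ~80% usable capacity (assumed)
worst = runtime_hours(50, 12, 4.0, 0.8)   # highest quoted draw of the comm device
best = runtime_hours(50, 12, 2.5, 0.8)    # lowest quoted draw

print(f"{worst / 24:.1f} to {best / 24:.1f} days")
```

At the highest quoted draw this gives roughly five days for the communication device alone, which is consistent with the quoted four days once the digitizer and micro server loads are added on top.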

  7. THE IMPLEMENTATION OF THE STAR DATA ACQUISITION SYSTEM USING A MYRINET NETWORK.

    SciTech Connect

    LANDGRAF,J.M.; ADLER,C.; LEVINE,M.J.; LJUBICIC,A.,JR.; ET AL

    2000-10-15

We will present results from the first year of operation of the STAR DAQ system using a Myrinet network. STAR is one of four experiments commissioned at the Relativistic Heavy Ion Collider (RHIC) at BNL during 1999 and 2000. The DAQ system is fully integrated with a Level 3 trigger. The combined system currently consists of 33 Myrinet nodes which run in a mixed environment of MVME processors running VxWorks, DEC Alpha workstations running Linux, and Sun Solaris machines. The network will eventually contain up to 150 nodes for the expected final size of the L3 processor farm. Myrinet is a switched, high-speed, low-latency network produced by Myricom and available for PCI and PMC on a wide variety of platforms. The STAR DAQ system uses the Myrinet network for messaging, L3 processing, and event building. After the events are built, they are sent via Gigabit Ethernet to the RHIC computing facility and stored to tape using HPSS. The combined DAQ/L3 system processes 160 MB events at 100 Hz, compresses each event to approximately 20 MB, and performs tracking on the events to implement a physics-based filter that reduces the data storage rate to 20 MB/sec.

  8. Berkeley Lab Checkpoint/Restart (BLCR) for Linux Clusters

    SciTech Connect

    Hargrove, Paul H.; Duell, Jason C.

    2006-07-26

This article describes the motivation, design and implementation of Berkeley Lab Checkpoint/Restart (BLCR), a system-level checkpoint/restart implementation for Linux clusters that targets the space of typical High Performance Computing applications, including MPI. Application-level solutions, including both checkpointing and fault-tolerant algorithms, are recognized as more time and space efficient than system-level checkpoints, which cannot make use of any application-specific knowledge. However, system-level checkpointing allows for preemption, making it suitable for responding to "fault precursors" (for instance, elevated error rates from ECC memory or network CRCs, or elevated temperature from sensors). Preemption can also increase the efficiency of batch scheduling; for instance reducing idle cycles (by allowing for shutdown without any queue draining period or reallocation of resources to eliminate idle nodes when better fitting jobs are queued), and reducing the average queued time (by limiting large jobs to running during off-peak hours, without the need to limit the length of such jobs). Each of these potential uses makes BLCR a valuable tool for efficient resource management in Linux clusters.
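The application-level alternative that the article contrasts with BLCR can be illustrated in a few lines: the application itself periodically serializes just the state it needs to resume. A generic sketch, not the BLCR API; the checkpoint file name and state layout are invented for illustration:

```python
import os
import pickle
import tempfile

CKPT = os.path.join(tempfile.gettempdir(), "sum_demo.ckpt")  # hypothetical path

def checkpoint(state):
    # Write atomically so a crash mid-write never corrupts the checkpoint.
    tmp = CKPT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CKPT)

def restore():
    # Resume from the last checkpoint if one exists, else start fresh.
    if os.path.exists(CKPT):
        with open(CKPT, "rb") as f:
            return pickle.load(f)
    return {"i": 0, "total": 0}

state = restore()
while state["i"] < 1000:
    state["total"] += state["i"]
    state["i"] += 1
    if state["i"] % 100 == 0:   # periodic checkpoint
        checkpoint(state)

print(state["total"])  # sum of 0..999
os.remove(CKPT)
```

Unlike this sketch, a system-level tool such as BLCR captures the whole process image, so the application needs no knowledge of what to save, which is what makes preemptive checkpointing by the batch system possible.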

  9. Network Centrality of Metro Systems

    PubMed Central

    Derrible, Sybil

    2012-01-01

Whilst being hailed as the remedy to the world’s ills, cities will need to adapt in the 21st century. In particular, the role of public transport is likely to increase significantly, and new methods and techniques to better plan transit systems are direly needed. This paper examines one fundamental aspect of transit: network centrality. By applying the notion of betweenness centrality to 28 worldwide metro systems, the main goal of this paper is to study the emergence of global trends in the evolution of centrality with network size and to examine several individual systems in more detail. Betweenness was notably found to become consistently more evenly distributed with size (i.e. no “winner takes all”), unlike other complex network properties. Two distinct regimes were also observed that are representative of their structure. Moreover, the share of betweenness was found to decrease in a power law with size (with exponent 1 for the average node), but the share of the most central nodes decreases much more slowly than that of the least central nodes (0.87 vs. 2.48). Finally, the betweenness of individual stations in several systems was examined, which can be useful to locate stations where passengers can be redistributed to relieve pressure from overcrowded stations. Overall, this study offers significant insights that can help planners in their task to design the systems of tomorrow, and similar undertakings can easily be imagined for other urban infrastructure systems (e.g., electricity grid, water/wastewater system, etc.) to develop more sustainable cities. PMID:22792373
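Betweenness centrality, the measure applied above, counts how many shortest paths pass through each node. A stdlib-only sketch using Brandes' algorithm on a toy two-line "metro" (the network itself is invented for illustration):

```python
from collections import deque

def betweenness(adj):
    """Brandes' algorithm, unweighted; returns unnormalized scores
    (each unordered pair is counted twice on an undirected graph)."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack, preds = [], {v: [] for v in adj}
        sigma = dict.fromkeys(adj, 0); sigma[s] = 1
        dist = dict.fromkeys(adj, -1); dist[s] = 0
        q = deque([s])
        while q:                          # BFS, counting shortest paths
            v = q.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = dict.fromkeys(adj, 0.0)
        while stack:                      # back-propagate dependencies
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

# Toy system: two lines crossing at transfer hub 'h' (invented for illustration)
lines = [["a", "b", "h", "c", "d"], ["e", "f", "h", "g", "i"]]
adj = {}
for line in lines:
    for u, v in zip(line, line[1:]):
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

bc = betweenness(adj)
print(max(bc, key=bc.get))  # the transfer hub dominates: 'h'
```

Terminal stations such as 'a' get zero betweenness, while the transfer hub concentrates it, which is exactly the kind of imbalance the paper tracks as systems grow.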

  10. Tri-Laboratory Linux Capacity Cluster 2007 SOW

    SciTech Connect

    Seager, M

    2007-03-22

The budget demands are extreme, and new, more cost-effective ways of fielding these systems must be developed. This Tri-Laboratory Linux Capacity Cluster (TLCC) procurement represents ASC's first investment vehicle in these capacity systems. It also represents a new strategy for quickly building, fielding and integrating many Linux clusters of various sizes into classified and unclassified production service through a concept of Scalable Units (SU). The programmatic objective is to dramatically reduce the overall Total Cost of Ownership (TCO) of these 'capacity' systems relative to the best practices in Linux cluster deployments today. This objective only makes sense if these systems quickly become very robust and useful production clusters under the crushing load that will be inflicted on them by the ASC and SSP scientific simulation capacity workload.

  11. PhyLIS: A Simple GNU/Linux Distribution for Phylogenetics and Phyloinformatics

    PubMed Central

    Thomson, Robert C.

    2009-01-01

    PhyLIS is a free GNU/Linux distribution that is designed to provide a simple, standardized platform for phylogenetic and phyloinformatic analysis. The operating system incorporates most commonly used phylogenetic software, which has been pre-compiled and pre-configured, allowing for straightforward application of phylogenetic methods and development of phyloinformatic pipelines in a stable Linux environment. The software is distributed as a live CD and can be installed directly or run from the CD without making changes to the computer. PhyLIS is available for free at http://www.eve.ucdavis.edu/rcthomson/phylis/. PMID:19812729

  13. Networked analytical sample management system

    SciTech Connect

    Kerrigan, W.J.; Spencer, W.A.

    1986-01-01

Since 1982, the Savannah River Laboratory (SRL) has operated a computer-controlled analytical sample management system. The system, programmed in COBOL, runs on the site IBM 3081 mainframe computer. The system provides for the following subtasks: sample logging, analytical method assignment, worklist generation, cost accounting, and results reporting. Within these subtasks the system functions in a time-sharing mode. Communications between subtasks are done overnight in batch mode. The system currently supports management of up to 3000 samples a month. Each sample requires, on average, three independent methods. Approximately 100 different analytical techniques are available for customized input of data. The laboratory has implemented extensive computer networking using Ethernet. Electronic mail, RS/1, and online literature searches are in place. Based on our experience with the existing sample management system, we have begun a project to develop a second-generation system. The new system will utilize the panel designs developed for the present LIMS, incorporate more real-time features, and take advantage of the many commercial LIMS systems.

  14. Introduction to Network Analysis in Systems Biology

    PubMed Central

    Ma’ayan, Avi

    2011-01-01

    This Teaching Resource provides lecture notes, slides, and a problem set for a set of three lectures from a course entitled “Systems Biology: Biomedical Modeling.” The materials are from three separate lectures introducing applications of graph theory and network analysis in systems biology. The first lecture describes different types of intracellular networks, methods for constructing biological networks, and different types of graphs used to represent regulatory intracellular networks. The second lecture surveys milestones and key concepts in network analysis by introducing topological measures, random networks, growing network models, and topological observations from molecular biological systems abstracted to networks. The third lecture discusses methods for analyzing lists of genes and experimental data in the context of prior knowledge networks to make predictions. PMID:21917719
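One of the growing network models covered in the second lecture, preferential attachment, is easy to sketch with the standard library. A simplified Barabási-Albert variant; the node counts and seed are illustrative:

```python
import random

def preferential_attachment(n, m, seed=0):
    """Grow a graph to n nodes; each new node attaches to m existing
    nodes chosen proportionally to their current degree."""
    rng = random.Random(seed)
    targets = list(range(m))   # start from m seed nodes
    repeated = []              # node list with multiplicity ~ degree
    edges = set()
    for new in range(m, n):
        for t in set(targets):             # dedupe repeated picks
            edges.add((min(new, t), max(new, t)))
        repeated.extend(targets)
        repeated.extend([new] * m)
        # sample the next node's m attachment targets, degree-proportionally
        targets = [rng.choice(repeated) for _ in range(m)]
    return edges

edges = preferential_attachment(200, 2)
deg = {}
for u, v in edges:
    deg[u] = deg.get(u, 0) + 1
    deg[v] = deg.get(v, 0) + 1

# "Rich get richer": early nodes end up with far higher average degree
early = sum(deg[i] for i in range(10)) / 10
late = sum(deg[i] for i in range(190, 200)) / 10
print(early, late)
```

The resulting heavy-tailed degree distribution is the hallmark of the scale-free networks these lectures use to abstract molecular biological systems.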

  15. Parallel Analysis and Visualization on Cray Compute Node Linux

    SciTech Connect

    Pugmire, Dave; Ahern, Sean

    2008-01-01

Capability computer systems are deployed to give researchers the computational power required to investigate and solve key challenges facing the scientific community. As the power of these computer systems increases, the computational problem domain typically increases in size, complexity and scope. These increases strain the ability of commodity analysis and visualization clusters to effectively perform post-processing tasks and provide critical insight and understanding to the computed results. An alternative to purchasing increasingly larger, separate analysis and visualization commodity clusters is to use the computational system itself to perform post-processing tasks. In this paper, the recent successful port of VisIt, a parallel, open source analysis and visualization tool, to Compute Node Linux running on the Cray is detailed. Additionally, the unprecedented ability of this resource for analysis and visualization is discussed and a report on obtained results is presented.

  16. Linux OS Jitter Measurements at Large Node Counts using a BlueGene/L

    SciTech Connect

    Jones, Terry R; Tauferner, Mr. Andrew; Inglett, Mr. Todd

    2010-01-01

    We present experimental results for a coordinated scheduling implementation of the Linux operating system. Results were collected on an IBM Blue Gene/L machine at scales up to 16K nodes. Our results indicate coordinated scheduling was able to provide a dramatic improvement in scaling performance for two applications characterized as bulk synchronous parallel programs.
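OS jitter of the kind measured here is commonly quantified with a fixed-work-quantum microbenchmark: time many identical units of work and examine the spread. A user-space sketch that only illustrates the idea; real studies like this one rely on dedicated benchmarks and large node counts, and the iteration counts below are arbitrary:

```python
import time

def fixed_work():
    # A small, constant unit of CPU work
    x = 0
    for i in range(20000):
        x += i * i
    return x

samples = []
for _ in range(500):
    t0 = time.perf_counter_ns()
    fixed_work()
    samples.append(time.perf_counter_ns() - t0)

samples.sort()
median = samples[len(samples) // 2]
worst = samples[-1]
# Interruptions (daemons, timer ticks, kernel threads) show up as a long tail
print(f"median {median} ns, worst-case inflation x{worst / median:.1f}")
```

In a bulk synchronous parallel program every node waits for the slowest, so it is this tail, not the median, that coordinated scheduling is designed to suppress.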

  17. Networked Dynamic Systems: Identification, Controllability, and Randomness

    NASA Astrophysics Data System (ADS)

    Nabi-Abdolyousefi, Marzieh

    The presented dissertation aims to develop a graph-centric framework for the analysis and synthesis of networked dynamic systems (NDS) consisting of multiple dynamic units that interact via an interconnection topology. We examined three categories of network problems, namely, identification, controllability, and randomness. In network identification, as a subclass of inverse problems, we made an explicit relation between the input-output behavior of an NDS and the underlying interacting network. In network controllability, we provided structural and algebraic insights into features of the network that enable external signal(s) to control the state of the nodes in the network for certain classes of interconnections, namely, path, circulant, and Cartesian networks. We also examined the relation between network controllability and the symmetry structure of the graph. Motivated by the analysis results for the controllability and observability of deterministic networks, a natural question is whether randomness in the network layer or in the layer of inputs and outputs generically leads to favorable system theoretic properties. In this direction, we examined system theoretic properties of random networks including controllability, observability, and performance of optimal feedback controllers and estimators. We explored some of the ramifications of such an analysis framework in opinion dynamics over social networks and sensor networks in estimating the real-time position of a Seaglider from experimental data.

  18. Improving Memory Error Handling Using Linux

    SciTech Connect

    Carlton, Michael Andrew; Blanchard, Sean P.; Debardeleben, Nathan A.

    2014-07-25

    As supercomputers continue to get faster and more powerful in the future, they will also have more nodes. If nothing is done, then the amount of memory in supercomputer clusters will soon grow large enough that memory failures will be unmanageable to deal with by manually replacing memory DIMMs. "Improving Memory Error Handling Using Linux" is a process oriented method to solve this problem by using the Linux kernel to disable (offline) faulty memory pages containing bad addresses, preventing them from being used again by a process. The process of offlining memory pages simplifies error handling and results in reducing both hardware and manpower costs required to run Los Alamos National Laboratory (LANL) clusters. This process will be necessary for the future of supercomputing to allow the development of exascale computers. It will not be feasible without memory error handling to manually replace the number of DIMMs that will fail daily on a machine consisting of 32-128 petabytes of memory. Testing reveals the process of offlining memory pages works and is relatively simple to use. As more and more testing is conducted, the entire process will be automated within the high-performance computing (HPC) monitoring software, Zenoss, at LANL.
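On Linux, page offlining of this kind is exposed through sysfs: each memory block has a state file, and writing "offline" to it asks the kernel to migrate the block's pages away and stop using them. A dry-run sketch that only builds the relevant path; the example physical address is invented, the 128 MiB block size is a common x86-64 value normally read from /sys/devices/system/memory/block_size_bytes, and actually writing the file requires root:

```python
def memory_block_path(phys_addr, block_size_bytes):
    """sysfs path of the memory block containing a physical address."""
    block = phys_addr // block_size_bytes
    return f"/sys/devices/system/memory/memory{block}/state"

def offline_command(phys_addr, block_size_bytes):
    # Writing "offline" to the state file retires the whole block.
    return f"echo offline > {memory_block_path(phys_addr, block_size_bytes)}"

# 128 MiB blocks (assumed) and a made-up faulty physical address
print(offline_command(0x1_2345_6789, 128 * 1024 * 1024))
```

Note the granularity: the sysfs interface retires an entire memory block, so monitoring software such as the Zenoss integration described above must map a faulty DIMM address to its containing block before acting.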

  19. Impact on TRMM Products of Conversion to Linux

    NASA Technical Reports Server (NTRS)

    Stocker, Erich Franz; Kwiatkowski, John

    2008-01-01

In June 2008, TRMM data processing will be assumed by the Precipitation Processing System (PPS). This change will also mean a change in the hardware production environment, from a 32-bit SGI IRIX processing environment to a 64-bit Linux (Beowulf) processing environment. This change of platform and operating system addressing (32-bit to 64-bit) has some influence on data values in the TRMM data products. This paper will describe the transition architecture and scheduling. It will also provide an analysis of the nature of the product differences. It will demonstrate that the differences are not scientifically significant and are generally not visible; however, the products are not always identical to those which the SGI would produce.

  20. High Performance Diskless Linux Workstations in AX-Division

    SciTech Connect

    Councell, E; Busby, L

    2003-09-30

    AX Division has recently installed a number of diskless Linux workstations to meet the needs of its scientific staff for classified processing. Results so far are quite positive, although problems do remain. Some unusual requirements were met using a novel, but simple, design: Each diskless client has a dedicated partition on a server disk that contains a complete Linux distribution.

  1. Kennedy Space Center network documentation system

    NASA Technical Reports Server (NTRS)

    Lohne, William E.; Schuerger, Charles L.

    1995-01-01

    The Kennedy Space Center Network Documentation System (KSC NDS) is being designed and implemented by NASA and the KSC contractor organizations to provide a means of network tracking, configuration, and control. Currently, a variety of host and client platforms are in use as a result of each organization having established its own network documentation system. The solution is to incorporate as many existing 'systems' as possible in the effort to consolidate and standardize KSC-wide documentation.

  2. Statistically Validated Networks in Bipartite Complex Systems

    PubMed Central

    Tumminello, Michele; Miccichè, Salvatore; Lillo, Fabrizio; Piilo, Jyrki; Mantegna, Rosario N.

    2011-01-01

Many complex systems present an intrinsic bipartite structure where elements of one set link to elements of the second set. In these complex systems, such as the system of actors and movies, elements of one set are qualitatively different from elements of the other set. The properties of these complex systems are typically investigated by constructing and analyzing a projected network on one of the two sets (for example the actor network or the movie network). Complex systems are often very heterogeneous in the number of relationships that the elements of one set establish with the elements of the other set, and this heterogeneity makes it very difficult to discriminate links of the projected network that merely reflect the system's heterogeneity from links relevant to unveiling the properties of the system. Here we introduce an unsupervised method to statistically validate each link of a projected network against a null hypothesis that takes into account system heterogeneity. We apply the method to a biological, an economic and a social complex system. The method we propose is able to detect network structures which are very informative about the organization and specialization of the investigated systems, and identifies those relationships between elements of the projected network that cannot be explained simply by system heterogeneity. We also show that our method applies to bipartite systems in which different relationships might have different qualitative nature, generating statistically validated networks in which such difference is preserved. PMID:21483858
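A natural null model for this kind of validation is hypergeometric: if element A links to n_A of the N elements in the other set and B links to n_B, the probability of observing at least X common neighbors by chance alone gives the projected link a p-value. A stdlib sketch of that test; the counts are invented for illustration, and the paper's exact null and multiple-testing correction may differ:

```python
from math import comb

def pvalue_shared_neighbors(N, na, nb, x):
    """P(at least x common neighbors) under a hypergeometric null:
    A's na neighbors drawn uniformly from N, B's nb neighbors fixed."""
    total = comb(N, na)
    tail = sum(comb(nb, k) * comb(N - nb, na - k)
               for k in range(x, min(na, nb) + 1))
    return tail / total

# Two movies (invented counts): expected overlap is only 30*40/1000 = 1.2 actors,
# so sharing 8 actors is far more than heterogeneity alone explains.
p = pvalue_shared_neighbors(N=1000, na=30, nb=40, x=8)
print(p < 0.01)
```

Links whose p-value survives a multiple-hypothesis correction form the statistically validated network; the rest are attributed to the sets' heterogeneity.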

  3. Performance Evaluation of Multi-Channel Wireless Mesh Networks with Embedded Systems

    PubMed Central

    Lam, Jun Huy; Lee, Sang-Gon; Tan, Whye Kit

    2012-01-01

    Many commercial wireless mesh network (WMN) products are available in the marketplace with their own proprietary standards, but interoperability among the different vendors is not possible. Open source communities have their own WMN implementation in accordance with the IEEE 802.11s draft standard, Linux open80211s project and FreeBSD WMN implementation. While some studies have focused on the test bed of WMNs based on the open80211s project, none are based on the FreeBSD. In this paper, we built an embedded system using the FreeBSD WMN implementation that utilizes two channels and evaluated its performance. This implementation allows the legacy system to connect to the WMN independent of the type of platform and distributes the load between the two non-overlapping channels. One channel is used for the backhaul connection and the other one is used to connect to the stations to wireless mesh network. By using the power efficient 802.11 technology, this device can also be used as a gateway for the wireless sensor network (WSN). PMID:22368482

  5. Method and system for mesh network embedded devices

    NASA Technical Reports Server (NTRS)

    Wang, Ray (Inventor)

    2009-01-01

    A method and system for managing mesh network devices. A mesh network device with integrated features creates an N-way mesh network with a full mesh network topology or a partial mesh network topology.

  6. Impulsive synchronization of networked nonlinear dynamical systems

    NASA Astrophysics Data System (ADS)

    Jiang, Haibo; Bi, Qinsheng

    2010-06-01

In this Letter, we investigate the problem of impulsive synchronization of networked multi-agent systems, where each agent can be modeled as an identical nonlinear dynamical system. Firstly, an impulsive control protocol is designed for a network with fixed topology based on the local information of agents. Then sufficient conditions are given to guarantee the synchronization of the networked nonlinear dynamical system by using algebraic graph theory and impulsive control theory. Furthermore, how to select the discrete instants and impulsive constants is discussed. The case in which the network topology is switching is also considered. Numerical simulations show the effectiveness of our theoretical results.
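Sufficient conditions of this kind typically trade the impulse strength against the interval between impulses: if the synchronization error grows like e^(aT) between impulses and is scaled by |1+b| at each impulse, contraction requires e^(aT)|1+b| < 1. A scalar sketch of that mechanism; the dynamics and gains below are invented for illustration and are not the Letter's model:

```python
import math

a = 0.5      # error growth rate between impulses (assumed)
b = -0.8     # impulsive gain: e -> (1 + b) * e at each impulse (assumed)
T = 0.4      # interval between impulse instants (assumed)

# Sufficient condition: the error must contract over each period
assert math.exp(a * T) * abs(1 + b) < 1

e = 1.0                        # initial synchronization error
history = [e]
for _ in range(20):
    e *= math.exp(a * T)       # free growth between impulses
    e *= (1 + b)               # impulsive correction at the discrete instant
    history.append(abs(e))

print(history[-1] < 1e-5)      # error decays geometrically toward synchrony
```

Shrinking T or strengthening the impulse (pushing 1+b toward 0) relaxes the condition, which is exactly the trade-off behind choosing the discrete instants and impulsive constants.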

  7. NASDA knowledge-based network planning system

    NASA Technical Reports Server (NTRS)

    Yamaya, K.; Fujiwara, M.; Kosugi, S.; Yambe, M.; Ohmori, M.

    1993-01-01

One of the SODS (Space Operation and Data System) subsystems, NP (network planning), was the first expert system used by NASDA (National Space Development Agency of Japan) for tracking and control of satellites. The major responsibilities of the NP system are, first, the allocation of network and satellite control resources and, second, the generation of the network operation plan data (NOP) used in automated control of the stations and control center facilities. Until now, the first task, network resource scheduling, was done by network operators. The NP system automatically generates schedules using its knowledge base, which contains information on satellite orbits, station availability, which computer is dedicated to which satellite, and how many stations must be available for a particular satellite pass or a certain time period. The NP system is introduced.

  8. Systemic risk on different interbank network topologies

    NASA Astrophysics Data System (ADS)

    Lenzu, Simone; Tedeschi, Gabriele

    2012-09-01

In this paper we develop an interbank market with heterogeneous financial institutions that enter into lending agreements on different network structures. Credit relationships (links) evolve endogenously via a fitness mechanism based on agents' performance. By changing an agent's trust in its neighbors' performance, interbank linkages self-organize into very different network architectures, ranging from random to scale-free topologies. We study which network architecture can make the financial system more resilient to random attacks and how systemic risk spreads over the network. To perturb the system, we generate a random attack via a liquidity shock. The hit bank is not automatically eliminated, but its failure is endogenously driven by its incapacity to raise liquidity in the interbank network. Our analysis shows that a random financial network can be more resilient than a scale-free one when agents are heterogeneous.

  9. Broadband network on-line data acquisition system with web based interface for control and basic analysis

    NASA Astrophysics Data System (ADS)

    Polkowski, Marcin; Grad, Marek

    2016-04-01

The passive seismic experiment "13BB Star" has been operating since mid-2013 in northern Poland and consists of 13 broadband seismic stations. One element of this experiment is a dedicated on-line data acquisition system comprising both client (station) side and server side modules, with a web based interface that allows monitoring of network status and provides tools for preliminary data analysis. The station side is controlled by an ARM Linux board that is programmed to maintain a 3G/EDGE internet connection, receive data from the digitizer, and send the data to the central server along with auxiliary parameters such as temperatures, voltages and electric current measurements. The station-side software is a set of easy-to-install PHP scripts. Data is transmitted securely over the SSH protocol to the central server, a dedicated Linux based machine that receives and processes all data from all stations, including the auxiliary parameters. The server side software is written in PHP and Python. Additionally, it allows remote station configuration and provides a web based interface for user friendly interaction. All collected data can be displayed for each day and station. The interface also allows manual creation of event-oriented plots with various filtering options, and provides extensive status and statistics information. Our solution is very flexible and easy to modify. In this presentation we would like to share our solution and experience. National Science Centre Poland provided financial support for this work via NCN grant DEC-2011/02/A/ST10/00284.

  10. Neural network based system for equipment surveillance

    DOEpatents

    Vilim, R.B.; Gross, K.C.; Wegerich, S.W.

    1998-04-28

A method and system are disclosed for performing surveillance of transient signals of an industrial device to ascertain its operating state. The method and system involve reading training data into a memory and determining neural network weighting values until target outputs close to the neural network output are achieved. If the target outputs are inadequate, wavelet parameters are determined to yield neural network outputs close to the desired set of target outputs; signals characteristic of an industrial process are then provided, and the neural network output is compared to the industrial process signals to evaluate the operating state of the industrial process. 33 figs.

  11. Neural network based system for equipment surveillance

    DOEpatents

    Vilim, Richard B.; Gross, Kenneth C.; Wegerich, Stephan W.

    1998-01-01

A method and system for performing surveillance of transient signals of an industrial device to ascertain its operating state. The method and system involve reading training data into a memory and determining neural network weighting values until target outputs close to the neural network output are achieved. If the target outputs are inadequate, wavelet parameters are determined to yield neural network outputs close to the desired set of target outputs; signals characteristic of an industrial process are then provided, and the neural network output is compared to the industrial process signals to evaluate the operating state of the industrial process.

  12. Freight Network Equilibrium Model revisited: the Freight Network Modeling System

    SciTech Connect

    Tobin, R.L.

    1984-01-01

    The Freight Network Equilibrium Model (FNEM) was developed to study potential coal transportation impacts that could result from widespread conversion of boilers to use coal for fuel, as mandated under the Powerplant and Industrial Fuel Use Act of 1978. Continued improvement of FNEM and creation of auxiliary software and data during applications of the model in various transportation analyses led to the development of the Freight Network Modeling System, a general and flexible modeling system designed to have wide applicability to a variety of freight transportation analyses. It consists of compatible network data bases, data management software, models of freight transportation, report generators, and graphics output. The network data include US rail, water, highway, and pipeline systems. Data management software automates the task of setting up a study network of appropriate detail in appropriate regions of the country. The major analytical tools in the system are FNEM and Shortest Path Analysis and Display (SPAD); FNEM is predictive and simulates decisions of both shippers and carriers, taking into account the competition for transportation facilities; SPAD is a simpler model that optimizes routings of single shipments. Output for both FNEM and SPAD includes detailed routings, cost and delay estimates for all shipments, and data on total traffic levels. SPAD can be used interactively with routes displayed graphically. 13 references, 10 figures, 2 tables.
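
Routing a single shipment the way SPAD does is essentially a shortest-path computation over the freight network. A minimal sketch using Dijkstra's algorithm follows; the node names and per-ton costs are invented for illustration, not data from the Freight Network Modeling System.

```python
import heapq

def cheapest_route(graph, origin, destination):
    """Dijkstra's algorithm: return (total cost, node list) for the
    cheapest route, or (inf, []) if the destination is unreachable."""
    dist = {origin: 0.0}
    prev = {}
    heap = [(0.0, origin)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == destination:
            path = [node]
            while node in prev:       # walk predecessors back to origin
                node = prev[node]
                path.append(node)
            return d, path[::-1]
        for nbr, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    return float("inf"), []

# Toy freight network: lists of (neighbor, cost per ton) -- invented numbers.
network = {
    "mine":       [("railhead", 4.0), ("barge_dock", 6.0)],
    "railhead":   [("plant", 10.0)],
    "barge_dock": [("plant", 7.0)],
}
```

On this toy network the water route (mine, barge_dock, plant) wins at a cost of 13.0 per ton, beating the rail route at 14.0.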

  13. Development of Control Softwares for the SRAO 6-Meter Telescope Based on PCs Running Linux

    NASA Astrophysics Data System (ADS)

    Byun, Do-Young; Yun, Young-Zoo

    2002-12-01

    We have developed a control system for the Seoul Radio Astronomy Observatory (SRAO) 6-meter telescope operating in the 85-115 GHz frequency range. Four PCs running the Linux operating system control source tracking, take data, execute observations and handle the user interface. The control system utilizes a modular and multiprocessing structure to facilitate easy upgrading and troubleshooting. Communication between the processes relies on the interprocess communication (IPC) resources of Linux, such as shared memory, message queues, and TCP/IP sockets. Communication between PCs is made via an Ethernet link. We also use digital I/O lines for some status signals which require a short delay. The control system supports scheduling observations, updates observation logs automatically and also supports graphical user interfaces. All of these make operation easy. By using a commercially available motion control card with an embedded microcomputer for antenna control, we achieved a tracking accuracy of better than 1 arcsec.
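
The socket-based IPC this abstract mentions can be sketched in a few lines of Python. Here a loopback TCP connection between two threads stands in for two of the control processes, and the newline-terminated status message format is invented for illustration, not the SRAO protocol.

```python
import socket
import threading

def status_server(sock):
    """Accept one connection and acknowledge one newline-terminated
    status message (the message format is invented)."""
    conn, _ = sock.accept()
    with conn:
        data = conn.makefile("r").readline().strip()
        conn.sendall(f"ACK {data}\n".encode())

# Loopback socket standing in for the inter-process Ethernet link.
server = socket.socket()
server.bind(("127.0.0.1", 0))        # let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=status_server, args=(server,))
t.start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"AZ=123.4 EL=45.6\n")
reply = client.makefile("r").readline().strip()
client.close()
t.join()
server.close()
```

The same pattern works unchanged across two PCs on an Ethernet link by replacing the loopback address with the peer's address.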

  14. Network Physiology: How Organ Systems Dynamically Interact

    PubMed Central

    Bartsch, Ronny P.; Liu, Kang K. L.; Bashan, Amir; Ivanov, Plamen Ch.

    2015-01-01

    We systematically study how diverse physiologic systems in the human organism dynamically interact and collectively behave to produce distinct physiologic states and functions. This is a fundamental question in the new interdisciplinary field of Network Physiology, and has not been previously explored. Introducing the novel concept of Time Delay Stability (TDS), we develop a computational approach to identify and quantify networks of physiologic interactions from long-term continuous, multi-channel physiological recordings. We also develop a physiologically-motivated visualization framework to map networks of dynamical organ interactions to graphical objects encoded with information about the coupling strength of network links quantified using the TDS measure. Applying a system-wide integrative approach, we identify distinct patterns in the network structure of organ interactions, as well as the frequency bands through which these interactions are mediated. We establish the first maps representing physiologic organ network interactions and discover basic rules underlying the complex hierarchical reorganization in physiologic networks with transitions across physiologic states. Our findings demonstrate a direct association between network topology and physiologic function, and provide new insights into understanding how health and distinct physiologic states emerge from networked interactions among nonlinear multi-component complex systems. The investigations presented here are initial steps in building a first atlas of dynamic interactions among organ systems. PMID:26555073

  15. Network Physiology: How Organ Systems Dynamically Interact.

    PubMed

    Bartsch, Ronny P; Liu, Kang K L; Bashan, Amir; Ivanov, Plamen Ch

    2015-01-01

    We systematically study how diverse physiologic systems in the human organism dynamically interact and collectively behave to produce distinct physiologic states and functions. This is a fundamental question in the new interdisciplinary field of Network Physiology, and has not been previously explored. Introducing the novel concept of Time Delay Stability (TDS), we develop a computational approach to identify and quantify networks of physiologic interactions from long-term continuous, multi-channel physiological recordings. We also develop a physiologically-motivated visualization framework to map networks of dynamical organ interactions to graphical objects encoded with information about the coupling strength of network links quantified using the TDS measure. Applying a system-wide integrative approach, we identify distinct patterns in the network structure of organ interactions, as well as the frequency bands through which these interactions are mediated. We establish the first maps representing physiologic organ network interactions and discover basic rules underlying the complex hierarchical reorganization in physiologic networks with transitions across physiologic states. Our findings demonstrate a direct association between network topology and physiologic function, and provide new insights into understanding how health and distinct physiologic states emerge from networked interactions among nonlinear multi-component complex systems. The investigations presented here are initial steps in building a first atlas of dynamic interactions among organ systems. PMID:26555073

  16. Managing secure computer systems and networks.

    PubMed

    Von Solms, B

    1996-10-01

    No computer system or computer network can today be operated without the necessary security measures to secure and protect the electronic assets stored, processed and transmitted using such systems and networks. Very often the effort of managing such security and protection measures is totally underestimated. This paper provides an overview of the security management needed to secure and protect a typical IT system and network. Special reference is made to this management effort in healthcare systems, and the role of the information security officer is also highlighted. PMID:8960921

  17. Nonlinear Network Dynamics on Earthquake Fault Systems

    SciTech Connect

    Rundle, Paul B.; Rundle, John B.; Tiampo, Kristy F.; Sa Martins, Jorge S.; McGinnis, Seth; Klein, W.

    2001-10-01

    Earthquake faults occur in interacting networks having emergent space-time modes of behavior not displayed by isolated faults. Using simulations of the major faults in southern California, we find that the physics depends on the elastic interactions among the faults defined by network topology, as well as on the nonlinear physics of stress dissipation arising from friction on the faults. Our results have broad applications to other leaky threshold systems such as integrate-and-fire neural networks.

  18. Hierarchical interconnection networks for multicomputer systems

    SciTech Connect

    Dandamudi, S.P. ); Eager, D.L. )

    1990-06-01

    Multicomputer systems are distributed-memory MIMD systems. Communication in these systems occurs through explicit message passing. Therefore, the underlying processor interconnection network plays an important and direct role in determining their performance. Several types of interconnection networks have been proposed. Unfortunately, no network is universally best. Ideally, therefore, systems should use more than one such network. Furthermore, systems that have large numbers of processors should be able to exploit locality in communication in order to obtain improved performance. This paper proposes the use of hierarchical interconnection networks to meet both these requirements. A performance analysis of a class of hierarchical interconnection networks is presented. This analysis includes both static analysis (queuing delays are neglected) and queuing analysis. In both cases, the hierarchical networks are shown to have better cost-benefit ratios. The queuing analysis is also validated (within our model) by several simulation experiments. The impact of two performance enhancement schemes---replication of links and improved routing algorithms---on hierarchical interconnection network performance is also presented.
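
The locality argument above can be made concrete with a toy two-level hierarchy. The sketch below is an assumption-laden stand-in for the class of networks the paper analyzes: each cluster is fully connected, and node 0 of each cluster acts as a gateway to a fully connected top level. Hop counts then show why intra-cluster traffic is cheap.

```python
from collections import deque
from itertools import combinations

def build_hierarchical(n_clusters, cluster_size):
    """Two-level network: each cluster is a fully connected group of
    cluster_size nodes, and node 0 of every cluster also joins a
    fully connected top-level network (an illustrative topology)."""
    adj = {}
    def link(a, b):
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    for c in range(n_clusters):
        for i, j in combinations(range(cluster_size), 2):
            link((c, i), (c, j))
    for c1, c2 in combinations(range(n_clusters), 2):
        link((c1, 0), (c2, 0))
    return adj

def hops(adj, src, dst):
    """Minimum hop count between two nodes by breadth-first search."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return dist[node]
        for nbr in adj[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return None

adj = build_hierarchical(3, 4)
intra = hops(adj, (0, 1), (0, 3))   # within a cluster: one direct link
inter = hops(adj, (0, 1), (1, 2))   # across clusters via the gateways
```

Here `intra` is 1 hop while `inter` is 3 (up to the local gateway, across the top level, down again), which is the locality benefit workloads with mostly-local communication exploit.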

  19. A classifier neural network for rotordynamic systems

    NASA Astrophysics Data System (ADS)

    Ganesan, R.; Jionghua, Jin; Sankar, T. S.

    1995-07-01

    A feedforward backpropagation neural network is formed to identify the stability characteristic of a high speed rotordynamic system. The principal focus resides in accounting for the instability due to the bearing clearance effects. The abnormal operating condition of 'normal-loose' Coulomb rub, that arises in units supported by hydrodynamic bearings or rolling element bearings, is analysed in detail. The multiple-parameter stability problem is formulated and converted to a set of three-parameter algebraic inequality equations. These three parameters map the wider range of physical parameters of commonly-used rotordynamic systems into a narrow closed region, that is used in the supervised learning of the neural network. A binary-type state of the system is expressed through these inequalities that are deduced from the analytical simulation of the rotor system. Both hidden-layer and functional-link networks are formed and the superiority of the functional-link network is established. Considering the real time interpretation and control of the rotordynamic system, the network reliability and the learning time are used as the evaluation criteria to assess the superiority of the functional-link network. This functional-link network is further trained using the parameter values of selected rotor systems, and the classifier network is formed. The success rate of stability status identification is obtained to assess the potential of this classifier network. It is shown that the classifier network can also be used, for control purposes, as an 'advisory' system that suggests the optimum way of parameter adjustment.
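
The core idea, learning a binary stability label over a three-parameter region defined by algebraic inequalities, can be sketched with the simplest trainable linear classifier. The stability rule below is hypothetical (the paper's actual inequalities are not given here), and a plain perceptron stands in for the functional-link network.

```python
# Hypothetical three-parameter stability rule (NOT the actual
# inequalities derived in the paper):
def stable(p1, p2, p3):
    return 2 * p1 + p2 - 3 * p3 < 1.0

# Training grid over a closed parameter region, labeled +1 (stable)
# or -1 (unstable) by the rule above.
grid = [i * 0.5 for i in range(3)]
data = [((p1, p2, p3), 1 if stable(p1, p2, p3) else -1)
        for p1 in grid for p2 in grid for p3 in grid]

# Perceptron training: update weights only on misclassified points.
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.1
for _ in range(1000):
    errors = 0
    for x, y in data:
        out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
        if out != y:
            errors += 1
            w = [wi + lr * y * xi for wi, xi in zip(w, x)]
            b += lr * y
    if errors == 0:   # every training point classified correctly
        break
```

Because the labels come from a linear inequality, the data is linearly separable and the perceptron converges; the paper's functional-link network plays the analogous role for its (generally harder) stability regions.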

  20. ANPS - AUTOMATIC NETWORK PROGRAMMING SYSTEM

    NASA Technical Reports Server (NTRS)

    Schroer, B. J.

    1994-01-01

    Development of some of the space program's large simulation projects -- like the project which involves simulating the countdown sequence prior to spacecraft liftoff -- requires the support of automated tools and techniques. The number of preconditions which must be met for a successful spacecraft launch and the complexity of their interrelationship account for the difficulty of creating an accurate model of the countdown sequence. Researchers developed ANPS for the NASA Marshall Space Flight Center to assist programmers attempting to model the pre-launch countdown sequence. Incorporating the elements of automatic programming as its foundation, ANPS aids the user in defining the problem and then automatically writes the appropriate simulation program in GPSS/PC code. The program's interactive user dialogue interface creates an internal problem specification file from user responses which includes the time line for the countdown sequence, the attributes for the individual activities which are part of a launch, and the dependent relationships between the activities. The program's automatic simulation code generator receives the file as input and selects appropriate macros from the library of software modules to generate the simulation code in the target language GPSS/PC. The user can recall the problem specification file for modification to effect any desired changes in the source code. ANPS is designed to write simulations for problems concerning the pre-launch activities of space vehicles and the operation of ground support equipment and has potential for use in developing network reliability models for hardware systems and subsystems. ANPS was developed in 1988 for use on IBM PC or compatible machines. The program requires at least 640 KB memory and one 360 KB disk drive, PC DOS Version 2.0 or above, and GPSS/PC System Version 2.0 from Minuteman Software. The program is written in Turbo Prolog Version 2.0. GPSS/PC is a trademark of Minuteman Software.

  1. High-speed, intra-system networks

    SciTech Connect

    Quinn, Heather M; Graham, Paul S; Manuzzato, Andrea; Fairbanks, Tom; Dallmann, Nicholas; Desgeorges, Rose

    2010-06-28

    Recently, engineers have been studying on-payload networks for fast communication paths. Using intra-system networks as a means to connect devices together allows for a flexible payload design that does not rely on dedicated communication paths between devices. In this manner, the data flow architecture of the system can be dynamically reconfigured to allow data routes to be optimized for the application or configured to route around devices that are temporarily or permanently unavailable. To use intra-system networks, devices will need network controllers and switches. These devices are likely to be affected by single-event effects, which could affect data communication. In this paper we will present radiation data and performance analysis for using a Broadcom network controller in a neutron environment.

  2. An online system for metabolic network analysis

    PubMed Central

    Cicek, Abdullah Ercument; Qi, Xinjian; Cakmak, Ali; Johnson, Stephen R.; Han, Xu; Alshalwi, Sami; Ozsoyoglu, Zehra Meral; Ozsoyoglu, Gultekin

    2014-01-01

    Metabolic networks have become one of the centers of attention in life sciences research with the advancements in the metabolomics field. A vast array of studies analyzes metabolites and their interrelations to seek explanations for various biological questions, and numerous genome-scale metabolic networks have been assembled to serve for this purpose. The increasing focus on this topic comes with the need for software systems that store, query, browse, analyze and visualize metabolic networks. PathCase Metabolomics Analysis Workbench (PathCaseMAW) is built, released and runs on a manually created generic mammalian metabolic network. The PathCaseMAW system provides a database-enabled framework and Web-based computational tools for browsing, querying, analyzing and visualizing stored metabolic networks. PathCaseMAW editor, with its user-friendly interface, can be used to create a new metabolic network and/or update an existing metabolic network. The network can also be created from an existing genome-scale reconstructed network using the PathCaseMAW SBML parser. The metabolic network can be accessed through a Web interface or an iPad application. For metabolomics analysis, steady-state metabolic network dynamics analysis (SMDA) algorithm is implemented and integrated with the system. SMDA tool is accessible through both the Web-based interface and the iPad application for metabolomics analysis based on a metabolic profile. PathCaseMAW is a comprehensive system with various data input and data access subsystems. It is easy to work with by design, and is a promising tool for metabolomics research and for educational purposes. Database URL: http://nashua.case.edu/PathwaysMAW/Web PMID:25267793

  3. Networking and AI systems: Requirements and benefits

    NASA Technical Reports Server (NTRS)

    1988-01-01

    The price/performance benefits of network systems are well documented. The ability to share expensive resources sold timesharing for mainframes, departmental clusters of minicomputers, and now local area networks of workstations and servers. In the process, other fundamental system requirements emerged. These have now been generalized with open system requirements for hardware, software, applications and tools. The ability to interconnect a variety of vendor products has led to a specification of interfaces that allow new techniques to extend existing systems for new and exciting applications. As an example of a message-passing system, local area networks provide a testbed for many of the issues addressed by future concurrent architectures: synchronization, load balancing, fault tolerance and scalability. Gold Hill has been working with a number of vendors on distributed architectures that range from a network of workstations to a hypercube of microprocessors with distributed memory. Results from early applications are promising both for performance and scalability.

  4. The Networking of Interactive Bibliographic Retrieval Systems.

    ERIC Educational Resources Information Center

    Marcus, Richard S.; Reintjes, J. Francis

    Research in networking of heterogeneous interactive bibliographic retrieval systems is being conducted which centers on the concept of a virtual retrieval system. Such a virtual system would be created through a translating computer interface that would provide access to the different retrieval systems and data bases in a uniform and convenient…

  5. Gradient systems on coupled cell networks

    NASA Astrophysics Data System (ADS)

    Manoel, Miriam; Roberts, Mark

    2015-10-01

    For networks of coupled dynamical systems we characterize admissible functions, that is, functions whose gradient is an admissible vector field. The schematic representation of a gradient network dynamical system is of an undirected cell graph, and we use tools from graph theory to deduce the general form of such functions, relating it to the topological structure of the graph defining the network. The coupling of pairs of dynamical systems cells is represented by edges of the graph, and from spectral graph theory we detect the existence and nature of equilibria of the gradient system from the critical points of the coupling function. In particular, we study fully synchronous and 2-state patterns of equilibria on regular graphs. These are two special types of equilibrium configurations for gradient networks. We also investigate equilibrium configurations of S^1-invariant admissible functions on a ring of cells.

  6. Dynamic Artificial Neural Networks with Affective Systems

    PubMed Central

    Schuman, Catherine D.; Birdwell, J. Douglas

    2013-01-01

    Artificial neural networks (ANNs) are processors that are trained to perform particular tasks. We couple a computational ANN with a simulated affective system in order to explore the interaction between the two. In particular, we design a simple affective system that adjusts the threshold values in the neurons of our ANN. The aim of this paper is to demonstrate that this simple affective system can control the firing rate of the ensemble of neurons in the ANN, as well as to explore the coupling between the affective system and the processes of long term potentiation (LTP) and long term depression (LTD), and the effect of the parameters of the affective system on its performance. We apply our networks with affective systems to a simple pole balancing example and briefly discuss the effect of affective systems on network performance. PMID:24303015
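
The central mechanism, an affective system that adjusts neuron thresholds to control the ensemble firing rate, can be sketched with a toy model. Everything here (ensemble size, target rate, gain, random drive) is an invented illustration, not the paper's network or parameters.

```python
import random

random.seed(7)

# Toy ensemble: each neuron fires when its random input exceeds its
# threshold. The "affective system" nudges every threshold to steer
# the ensemble firing rate toward a target rate.
thresholds = [0.5] * 100
target_rate, gain = 0.2, 0.05

rates = []
for step in range(200):
    inputs = [random.random() for _ in thresholds]
    fired = [x > t for x, t in zip(inputs, thresholds)]
    rate = sum(fired) / len(fired)
    rates.append(rate)
    # Raise thresholds when firing too much, lower them when too little.
    thresholds = [t + gain * (rate - target_rate) for t in thresholds]

late_rate = sum(rates[-50:]) / 50   # average rate after settling
```

With uniform random drive the equilibrium threshold is 1 - target_rate, so the feedback pulls the initially high firing rate (about 0.5) down toward 0.2, which is the kind of firing-rate control the affective system provides.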

  7. Mapping dynamical systems onto complex networks

    NASA Astrophysics Data System (ADS)

    Borges, E. P.; Cajueiro, D. O.; Andrade, R. F. S.

    2007-08-01

    The objective of this study is to design a procedure to characterize chaotic dynamical systems, in which they are mapped onto a complex network. The nodes represent the regions of space visited by the system, while the edges represent the transitions between these regions. Parameters developed to quantify the properties of complex networks, including those related to higher order neighbourhoods, are used in the analysis. The methodology is tested on the logistic map, focusing on the onset of chaos and chaotic regimes. The corresponding networks were found to have distinct features that are associated with the particular type of dynamics that generated them.
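
The procedure described above, regions of state space as nodes and observed transitions as edges, is easy to sketch for the logistic map. The bin count, parameter value and seed below are arbitrary choices for illustration.

```python
def logistic_network(r=3.9, x0=0.3, n_bins=20, n_iter=5000):
    """Map a logistic-map trajectory onto a directed network:
    equal-width bins of [0, 1] are the nodes, and each observed
    transition between bins adds a directed edge."""
    edges = set()
    x = x0
    prev_bin = min(int(x * n_bins), n_bins - 1)
    for _ in range(n_iter):
        x = r * x * (1.0 - x)                    # logistic map step
        cur_bin = min(int(x * n_bins), n_bins - 1)
        edges.add((prev_bin, cur_bin))
        prev_bin = cur_bin
    return edges

# In the chaotic regime the trajectory visits many regions, giving a
# network with many nodes and edges; a periodic orbit would instead
# produce a small cycle.
edges = logistic_network()
nodes = {src for src, _ in edges} | {dst for _, dst in edges}
```

Standard complex-network measures (degree distributions, clustering, neighbourhood structure) can then be computed on `edges`, which is how the networks generated by different dynamical regimes are compared.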

  8. Network support for system initiated checkpoints

    DOEpatents

    Chen, Dong; Heidelberger, Philip

    2013-01-29

    A system, method and computer program product for supporting system initiated checkpoints in parallel computing systems. The system and method generates selective control signals to perform checkpointing of system related data in presence of messaging activity associated with a user application running at the node. The checkpointing is initiated by the system such that checkpoint data of a plurality of network nodes may be obtained even in the presence of user applications running on highly parallel computers that include ongoing user messaging activity.

  9. Network representations of immune system complexity.

    PubMed

    Subramanian, Naeha; Torabi-Parizi, Parizad; Gottschalk, Rachel A; Germain, Ronald N; Dutta, Bhaskar

    2015-01-01

    The mammalian immune system is a dynamic multiscale system composed of a hierarchically organized set of molecular, cellular, and organismal networks that act in concert to promote effective host defense. These networks range from those involving gene regulatory and protein-protein interactions underlying intracellular signaling pathways and single-cell responses to increasingly complex networks of in vivo cellular interaction, positioning, and migration that determine the overall immune response of an organism. Immunity is thus not the product of simple signaling events but rather nonlinear behaviors arising from dynamic, feedback-regulated interactions among many components. One of the major goals of systems immunology is to quantitatively measure these complex multiscale spatial and temporal interactions, permitting development of computational models that can be used to predict responses to perturbation. Recent technological advances permit collection of comprehensive datasets at multiple molecular and cellular levels, while advances in network biology support representation of the relationships of components at each level as physical or functional interaction networks. The latter facilitate effective visualization of patterns and recognition of emergent properties arising from the many interactions of genes, molecules, and cells of the immune system. We illustrate the power of integrating 'omics' and network modeling approaches for unbiased reconstruction of signaling and transcriptional networks with a focus on applications involving the innate immune system. We further discuss future possibilities for reconstruction of increasingly complex cellular- and organism-level networks and development of sophisticated computational tools for prediction of emergent immune behavior arising from the concerted action of these networks. PMID:25625853

  10. Network representations of immune system complexity

    PubMed Central

    Subramanian, Naeha; Torabi-Parizi, Parizad; Gottschalk, Rachel A.; Germain, Ronald N.; Dutta, Bhaskar

    2015-01-01

    The mammalian immune system is a dynamic multi-scale system composed of a hierarchically organized set of molecular, cellular and organismal networks that act in concert to promote effective host defense. These networks range from those involving gene regulatory and protein-protein interactions underlying intracellular signaling pathways and single cell responses to increasingly complex networks of in vivo cellular interaction, positioning and migration that determine the overall immune response of an organism. Immunity is thus not the product of simple signaling events but rather non-linear behaviors arising from dynamic, feedback-regulated interactions among many components. One of the major goals of systems immunology is to quantitatively measure these complex multi-scale spatial and temporal interactions, permitting development of computational models that can be used to predict responses to perturbation. Recent technological advances permit collection of comprehensive datasets at multiple molecular and cellular levels while advances in network biology support representation of the relationships of components at each level as physical or functional interaction networks. The latter facilitate effective visualization of patterns and recognition of emergent properties arising from the many interactions of genes, molecules, and cells of the immune system. We illustrate the power of integrating ‘omics’ and network modeling approaches for unbiased reconstruction of signaling and transcriptional networks with a focus on applications involving the innate immune system. We further discuss future possibilities for reconstruction of increasingly complex cellular and organism-level networks and development of sophisticated computational tools for prediction of emergent immune behavior arising from the concerted action of these networks. PMID:25625853

  11. Synchronization in networks of spatially extended systems

    SciTech Connect

    Filatova, Anastasiya E.; Hramov, Alexander E.; Koronovskii, Alexey A.; Boccaletti, Stefano

    2008-06-15

    Synchronization processes in networks of spatially extended dynamical systems are analytically and numerically studied. We focus on the relevant case of networks whose elements (or nodes) are spatially extended dynamical systems, with the nodes being connected with each other by scalar signals. The stability of the synchronous spatio-temporal state for a generic network is analytically assessed by means of an extension of the master stability function approach. We find an excellent agreement between the theoretical predictions and the data obtained by means of numerical calculations. The efficiency and reliability of this method is illustrated numerically with networks of beam-plasma chaotic systems (Pierce diodes). We discuss also how the revealed regularities are expected to take place in other relevant physical and biological circumstances.

  12. An integrated multimedia medical information network system.

    PubMed

    Yamamoto, K; Makino, J; Sasagawa, N; Nagira, M

    1998-01-01

    An integrated multimedia medical information network system at Shimane Medical University has been developed to organize medical information generated from each section and provide information services useful for education, research and clinical practice. This report describes the outline of our system. It is designed to serve as a distributed database for electronic medical records and images. We are developing the MML engine that is to be linked to the world wide web (WWW) network system. To the users, this system will present an integrated multimedia representation of the patient records, providing access to both the image and text-based data required for effective clinical decision making and medical education. PMID:10384445

  13. The APS control system network upgrade.

    SciTech Connect

    Sidorowicz, K. v.; Leibfritz, D.; McDowell, W. P.

    1999-10-22

    When it was installed, the Advanced Photon Source (APS) control system network was state-of-the-art. Different aspects of the system have been reported at previous meetings [1,2]. As loads on the controls network have increased due to newer and faster workstations and front-end computers, we have found performance of the system declining and have implemented an upgraded network. There have been dramatic advances in networking hardware in the last several years. The upgraded APS controls network replaces the original FDDI backbone and shared Ethernet hubs with redundant gigabit uplinks and fully switched 10/100 Ethernet switches with backplane fabrics in excess of 20 Gbits/s (Gbps). The central collapsed backbone FDDI concentrator has been replaced with a Gigabit Ethernet switch with greater than 30 Gbps backplane fabric. Full redundancy of the system has been maintained. This paper will discuss this upgrade and include performance data and performance comparisons with the original network.

  14. Optical multicast system for data center networks.

    PubMed

    Samadi, Payman; Gupta, Varun; Xu, Junjie; Wang, Howard; Zussman, Gil; Bergman, Keren

    2015-08-24

    We present the design and experimental evaluation of an Optical Multicast System for Data Center Networks, a hardware-software system architecture that uniquely integrates passive optical splitters in a hybrid network architecture for faster and simpler delivery of multicast traffic flows. An application-driven control plane manages the integrated optical and electronic switched traffic routing in the data plane layer. The control plane includes a resource allocation algorithm to optimally assign optical splitters to the flows. The hardware architecture is built on a hybrid network with both Electronic Packet Switching (EPS) and Optical Circuit Switching (OCS) networks to aggregate Top-of-Rack switches. The OCS is also the connectivity substrate of splitters to the optical network. The optical multicast system implementation requires only commodity optical components. We built a prototype and developed a simulation environment to evaluate the performance of the system for bulk multicasting. Experimental and numerical results show simultaneous delivery of multicast flows to all receivers with steady throughput. Compared to IP multicast that is the electronic counterpart, optical multicast performs with less protocol complexity and reduced energy consumption. Compared to peer-to-peer multicast methods, it achieves at minimum an order of magnitude higher throughput for flows under 250 MB with significantly less connection overheads. Furthermore, for delivering 20 TB of data containing only 15% multicast flows, it reduces the total delivery energy consumption by 50% and improves latency by 55% compared to a data center with a sole non-blocking EPS network. PMID:26368190

  15. Network Queuing System, Version 2.0

    NASA Technical Reports Server (NTRS)

    Walter, Howard; Bridges, Mike; Carver, Terrie; Kingsbury, Brent

    1993-01-01

    Network Queuing System (NQS) computer program is versatile batch- and device-queuing facility for single UNIX computer or group of computers in network. User invokes NQS collection of user-space programs to move batch and device jobs freely among different computers in network. Provides facilities for remote queuing, request routing, remote status, queue-status controls, batch-request resource quota limits, and remote output return. Revision of NQS provides for creation, deletion, addition, and setting of complexes aiding in limiting number of requests handled at one time. Also has improved device-oriented queues along with some revision of displays. Written in C language.
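
The "complexes" mentioned above limit how many requests run at once. That admission-control idea can be sketched with a counting semaphore; the limit of 2 and the fake jobs below are illustrative, not NQS internals.

```python
import threading
import time

# A complex that admits at most 2 concurrent requests, modeled as a
# counting semaphore (the limit is illustrative).
limit = threading.Semaphore(2)
running = 0
peak = 0
lock = threading.Lock()

def batch_job(job_id, results):
    """A fake batch request: wait for a slot, 'run', record completion."""
    global running, peak
    with limit:                      # block until the complex has a free slot
        with lock:
            running += 1
            peak = max(peak, running)
        time.sleep(0.05)             # stand-in for real work
        with lock:
            running -= 1
    results.append(job_id)

results = []
threads = [threading.Thread(target=batch_job, args=(i, results))
           for i in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

All six requests eventually complete, but the observed concurrency never exceeds the complex's limit, which is exactly the throttling behavior the NQS revision adds.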

  16. Clinical information systems for integrated healthcare networks.

    PubMed Central

    Teich, J. M.

    1998-01-01

    In the 1990s, a large number of hospitals and medical practices have merged to form integrated healthcare networks (IHN's). The nature of an IHN creates new demands for information management, and also imposes new constraints on information systems for the network. Important tradeoffs must be made between homogeneity and flexibility, central and distributed governance, and access and confidentiality. This paper describes key components of clinical information systems for IHN's, and examines important design decisions that affect the value of such systems. PMID:9929178

  17. NOVANET: communications network for a control system

    SciTech Connect

    Hill, J.R.; Severyn, J.R.; VanArsdall, P.J.

    1983-05-23

    NOVANET is a control system oriented fiber optic local area network that was designed to meet the unique and often conflicting requirements of the Nova laser control system which will begin operation in 1984. The computers and data acquisition devices that form the distributed control system for a large laser fusion research facility need reliable, high speed communications. Both control/status messages and experimental data must be handled. A subset of NOVANET is currently operating on the two beam Novette laser system.

  18. Dissecting a Network-Based Education System

    ERIC Educational Resources Information Center

    Davis, Tiffany; Yoo, Seong-Moo; Pan, Wendi

    2005-01-01

    The Alabama Learning Exchange (ALEX; www.alex.state.al.us) is a network-based education system designed and implemented to help improve education in Alabama. It accomplishes this goal by providing a single location for the state's K-12 educators to find information that will help improve their classroom effectiveness. The ALEX system includes…

  19. Visual Tutoring System for Programming Multiprocessor Networks.

    ERIC Educational Resources Information Center

    Trichina, Elena

    1996-01-01

    Describes a visual tutoring system for programming distributed-memory multiprocessor networks. Highlights include difficulties of parallel programming, and three instructional modes in the system, including a hypertext-like lecture, a question-answer mode, and an expert aid mode. (Author/LRW)

  20. A Gateway Approach to Library System Networking.

    ERIC Educational Resources Information Center

    Anderson, David A.; Duggan, Michael T.

    1987-01-01

    Describes a technique for accessing a library system with limited interconnectivity by connecting the system to a gateway machine that is a host in the local area network and evaluates the performance of an existing prototype that has been implemented at the Los Alamos National Laboratory. (Author/CLB)

  1. Network analysis of eight industrial symbiosis systems

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Zheng, Hongmei; Shi, Han; Yu, Xiangyi; Liu, Gengyuan; Su, Meirong; Li, Yating; Chai, Yingying

    2016-06-01

    Industrial symbiosis is the quintessential characteristic of an eco-industrial park. To divide parks into different types, previous studies mostly focused on qualitative judgments, and failed to use metrics to conduct quantitative research on the internal structural or functional characteristics of a park. To analyze a park's structural attributes, a range of metrics from network analysis have been applied, but few researchers have compared two or more symbioses using multiple metrics. In this study, we used two metrics (density and network degree centralization) to compare the degrees of completeness and dependence of eight diverse but representative industrial symbiosis networks. Through the combination of the two metrics, we divided the networks into three types: weak completeness, and two forms of strong completeness, namely "anchor tenant" mutualism and "equality-oriented" mutualism. The results showed that the networks with a weak degree of completeness were sparse and had few connections among nodes; for "anchor tenant" mutualism, the degree of completeness was relatively high, but the affiliated members were too dependent on core members; and the members in "equality-oriented" mutualism had equal roles, with diverse and flexible symbiotic paths. These results revealed some of the systems' internal structure and how different structures influenced the exchanges of materials, energy, and knowledge among members of a system, thereby providing insights into threats that may destabilize the network. Based on this analysis, we provide examples of the advantages and effectiveness of recent improvement projects in a typical Chinese eco-industrial park (Shandong Lubei).
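
The two metrics the study combines, density and (Freeman-style) degree centralization, can be computed directly from an adjacency structure. A minimal sketch; the star and ring examples and the function names are mine, not the paper's park data:

```python
def density(adj):
    """Fraction of possible undirected edges that are present: 2m / (n(n-1))."""
    n = len(adj)
    m = sum(len(nbrs) for nbrs in adj.values()) // 2
    return 2 * m / (n * (n - 1))

def degree_centralization(adj):
    """Freeman degree centralization: 1.0 for a star, 0.0 for a regular ring."""
    degs = [len(nbrs) for nbrs in adj.values()]
    n = len(degs)
    dmax = max(degs)
    return sum(dmax - d for d in degs) / ((n - 1) * (n - 2))

# A hypothetical "anchor tenant" park: one core firm exchanging with four tenants.
star = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
# A hypothetical "equality-oriented" park: five firms exchanging in a ring.
ring = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}

print(density(star), degree_centralization(star))  # 0.4 1.0
print(density(ring), degree_centralization(ring))  # 0.5 0.0
```

High centralization with moderate density flags the anchor-tenant dependence the abstract warns about; low centralization with comparable density corresponds to the equality-oriented type.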

  2. A dynamical systems view of network centrality.

    PubMed

    Grindrod, Peter; Higham, Desmond J

    2014-05-01

    To gain insights about dynamic networks, the dominant paradigm is to study discrete snapshots, or timeslices, as the interactions evolve. Here, we develop and test a new mathematical framework where network evolution is handled over continuous time, giving an elegant dynamical systems representation for the important concept of node centrality. The resulting system allows us to track the relative influence of each individual. This new setting is natural in many digital applications, offering both conceptual and computational advantages. The novel differential equations approach is convenient for modelling and analysis of network evolution and gives rise to an interesting application of the matrix logarithm function. From a computational perspective, it avoids the awkward up-front compromises between accuracy, efficiency and redundancy required in the prevalent discrete-time setting. Instead, we can rely on state-of-the-art ODE software, where discretization takes place adaptively in response to the prevailing system dynamics. The new centrality system generalizes the widely used Katz measure, and allows us to identify and track, at any resolution, the most influential nodes in terms of broadcasting and receiving information through time-dependent links. In addition to the classical static network notion of attenuation across edges, the new ODE also allows for attenuation over time, as information becomes stale. This allows 'running measures' to be computed, so that networks can be monitored in real time over arbitrarily long intervals. With regard to computational efficiency, we explain why it is cheaper to track good receivers of information than good broadcasters. An important consequence is that the overall broadcast activity in the network can also be monitored efficiently. We use two synthetic examples to validate the relevance of the new measures. We then illustrate the ideas on a large-scale voice call network, where key features are discovered that are not
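
The static Katz measure that the paper generalizes can be sketched in a few lines. This is the classical discrete-walk version, not the paper's continuous-time ODE, and the attenuation parameter and toy graph are illustrative choices:

```python
import numpy as np

def katz(A, alpha=0.1):
    """Katz centrality: x = (I - alpha*A)^(-1) * 1, i.e. a count of walks of
    every length ending at each node, attenuated by alpha per edge traversed.
    Requires alpha < 1 / (spectral radius of A) for the series to converge."""
    n = A.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * A, np.ones(n))

# Star on 4 nodes, node 0 at the centre.
A = np.array([[0, 1, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0]], dtype=float)
scores = katz(A)
print(scores.argmax())  # 0: the hub receives the highest score
```

The continuous-time system in the paper replaces this one-shot solve with an ODE whose adjacency matrix varies in time, adding the temporal attenuation the abstract describes.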

  3. Nonlinear Network Dynamics on Earthquake Fault Systems

    NASA Astrophysics Data System (ADS)

    Rundle, P. B.; Rundle, J. B.; Tiampo, K. F.

    2001-12-01

    Understanding the physics of earthquakes is essential if large events are ever to be forecast. Real faults occur in topologically complex networks that exhibit cooperative, emergent space-time behavior that includes precursory quiescence or activation, and clustering of events. The purpose of this work is to investigate the sensitivity of emergent behavior of fault networks to changes in the physics on the scale of single faults or smaller. In order to investigate the effect of changes at small scales on the behavior of the network, we need to construct models of earthquake fault systems that contain the essential physics. A network topology is therefore defined in an elastic medium, the stress Green's functions (i.e. the stress transfer coefficients) are computed, frictional properties are defined and the system is driven via the slip deficit as defined below. The long-range elastic interactions produce mean-field dynamics in the simulations. We focus in this work on the major strike-slip faults in Southern California that produce the most frequent and largest magnitude events. To determine the topology and properties of the network, we used the tabulation of fault properties published in the literature. We have found that the statistical distribution of large earthquakes on a model of a topologically complex, strongly correlated real fault network is highly sensitive to the precise nature of the stress dissipation properties of the friction laws associated with individual faults. These emergent, self-organizing space-time modes of behavior are properties of the network as a whole, rather than of the individual fault segments of which the network is comprised (ref: PBR et al., Physical Review Letters, in press, 2001).

  4. The LCOGT Network for Solar System Science

    NASA Astrophysics Data System (ADS)

    Lister, Tim

    2012-10-01

    The Las Cumbres Observatory Global Telescope (LCOGT) network is a planned homogeneous network of over 35 telescopes at 6 locations in the northern and southern hemispheres. This network is versatile and designed to respond rapidly to target-of-opportunity events and also to do long-term monitoring of slowly changing astronomical phenomena. The global coverage of the network and the telescope apertures available make LCOGT ideal for follow-up and characterization of Solar System objects (e.g. asteroids, Kuiper Belt Objects, comets, Near-Earth Objects (NEOs)) and ultimately for the discovery of new objects. Currently LCOGT is operating the two 2m Faulkes Telescopes at Haleakala, Maui and Siding Spring Observatory, Australia, and in March 2012 completed the installation of the first member of the new 1m telescope network at McDonald Observatory, Texas. Further deployments of six to eight 1m telescopes to CTIO in Chile, SAAO in South Africa and Siding Spring Observatory are expected in late 2012-early 2013. I am using the growing LCOGT network to confirm newly detected NEO candidates produced by PanSTARRS (PS1) and other sky surveys and to obtain follow-up astrometry and photometry for radar-targeted objects. I have developed an automated system to retrieve new PS1 NEOs, compute orbits, plan observations and automatically schedule them for follow-up on the robotic telescopes of the LCOGT Network. In the future, LCOGT has proposed to develop a Minor Planet Investigation Project (MPIP) that will address the existing lack of resources for minor planet follow-up, take advantage of ever-increasing new datasets, and develop a platform for broad public participation in relevant scientific exploration. We plan to produce a cloud-based Solar System investigation environment, a citizen science project (AgentNEO), and a cyberlearning environment, all under the umbrella of MPIP.

  5. Man-portable networked sensor system

    NASA Astrophysics Data System (ADS)

    Bryan, W. D.; Nguyen, Hoa G.; Gage, Douglas W.

    1998-08-01

    The Man-Portable Networked Sensor System (MPNSS), with its baseline sensor suite of a pan/tilt unit with video and FLIR cameras and laser rangefinder, functions in a distributed network of remote sensing packages and control stations designed to provide a rapidly deployable, extended-range surveillance capability for a wide variety of security operations and other tactical missions. While first developed as a man-portable prototype, these sensor packages can also be deployed on UGVs and UAVs, and a copy of this package has been demonstrated flying on the Sikorsky Cypher VTOL UAV in counterdrug and MOUT scenarios. The system makes maximum use of COTS components for sensing, processing, and communications, and of both established and emerging standard communications networking protocols and system integration techniques. This paper will discuss the technical issues involved in: (1) system integration using COTS components and emerging bus standards, (2) flexible networking for a scalable system, and (3) the human interface designed to maximize information presentation to the warfighter in battle situations.

  6. Kannada character recognition system using neural network

    NASA Astrophysics Data System (ADS)

    Kumar, Suresh D. S.; Kamalapuram, Srinivasa K.; Kumar, Ajay B. R.

    2013-03-01

    Handwriting recognition has been one of the active and challenging research areas in the field of pattern recognition. It has numerous applications, including reading aids for the blind, bank cheque processing, and conversion of any handwritten document into structured text form. There is not yet a sufficient body of work on Indian-language character recognition, especially for the Kannada script, one of the 15 major scripts in India. In this paper an attempt is made to recognize handwritten Kannada characters using feed-forward neural networks. A handwritten Kannada character is resized to 20x30 pixels. The resized character is used for training the neural network. Once the training process is completed, the same character is given as input to the neural network with different numbers of neurons in the hidden layer, and the recognition accuracy rates for different Kannada characters have been calculated and compared. The results show that the proposed system yields good recognition accuracy rates comparable to those of other handwritten character recognition systems.
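
A minimal sketch of the forward pass such a network performs, assuming the 20x30 input size stated in the abstract; the hidden-layer width, output class count, and random weights are illustrative stand-ins, not the paper's trained configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# 20x30 character image flattened to 600 inputs (per the abstract).
# Hidden size and class count below are illustrative assumptions.
N_IN, N_HID, N_OUT = 20 * 30, 64, 49

W1 = rng.normal(0, 0.1, (N_HID, N_IN))   # input -> hidden weights
W2 = rng.normal(0, 0.1, (N_OUT, N_HID))  # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(pixels):
    """One feed-forward pass: 600 pixel values -> hidden layer -> class scores."""
    h = sigmoid(W1 @ pixels)
    return sigmoid(W2 @ h)

image = rng.random(N_IN)   # stand-in for a normalized character image
scores = forward(image)
print(scores.shape)        # (49,)
```

Training would adjust W1 and W2 by backpropagation over labeled character images; the abstract's experiment repeats this for several hidden-layer sizes and compares accuracy.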

  7. The Deep Space Network Advanced Systems Program

    NASA Technical Reports Server (NTRS)

    Davarian, Faramaz

    2010-01-01

    The deep space network (DSN)--with its three complexes in Goldstone, California, Madrid, Spain, and Canberra, Australia--provides the resources to track and communicate with planetary and deep space missions. Each complex consists of an array of capabilities for tracking probes almost anywhere in the solar system. A number of innovative hardware, software and procedural tools are used for day-to-day operations at DSN complexes as well as at the network control at the Jet Propulsion Laboratory (JPL). Systems and technologies employed by the network include large-aperture antennas (34-m and 70-m), cryogenically cooled receivers, high-power transmitters, stable frequency and timing distribution assemblies, modulation and coding schemes, spacecraft transponders, radiometric tracking techniques, etc. The DSN operates at multiple frequencies, including the 2-GHz band, the 7/8-GHz band, and the 32/34-GHz band.

  8. Social networks as embedded complex adaptive systems.

    PubMed

    Benham-Hutchins, Marge; Clancy, Thomas R

    2010-09-01

    As systems evolve over time, their natural tendency is to become increasingly more complex. Studies in the field of complex systems have generated new perspectives on management in social organizations such as hospitals. Much of this research appears as a natural extension of the cross-disciplinary field of systems theory. This is the 15th in a series of articles applying complex systems science to the traditional management concepts of planning, organizing, directing, coordinating, and controlling. In this article, the authors discuss healthcare social networks as a hierarchy of embedded complex adaptive systems. The authors further examine the use of social network analysis tools as a means to understand complex communication patterns and reduce medical errors. PMID:20798616

  9. Distributing Executive Information Systems through Networks.

    ERIC Educational Resources Information Center

    Penrod, James I.; And Others

    1993-01-01

    Many colleges and universities will soon adopt distributed systems for executive information and decision support. Distribution of shared information through computer networks will improve decision-making processes dramatically on campuses. Critical success factors include administrative support, favorable organizational climate, ease of use,…

  10. Networked Training: An Electronic Education System.

    ERIC Educational Resources Information Center

    Ryan, William J.

    1993-01-01

    Presents perspectives on networked training based on the development of an electronic education system at the Westinghouse Savannah River Company that integrated motion video, text, and data information with multiple audio sources. The technology options of compact disc, digital video architecture, and digital video interactive are discussed. (LRW)

  11. The Earthscope USArray Array Network Facility (ANF): Evolution of Data Acquisition, Processing, and Storage Systems

    NASA Astrophysics Data System (ADS)

    Davis, G. A.; Battistuz, B.; Foley, S.; Vernon, F. L.; Eakins, J. A.

    2009-12-01

    Since April 2004 the Earthscope USArray Transportable Array (TA) network has grown to over 400 broadband seismic stations that stream multi-channel data in near real-time to the Array Network Facility in San Diego. In total, over 1.7 terabytes per year of 24-bit, 40 samples-per-second seismic and state of health data is recorded from the stations. The ANF provides analysts access to real-time and archived data, as well as state-of-health data, metadata, and interactive tools for station engineers and the public via a website. Additional processing and recovery of missing data from on-site recorders (balers) at the stations is performed before the final data is transmitted to the IRIS Data Management Center (DMC). Assembly of the final data set requires additional storage and processing capabilities to combine the real-time data with baler data. The infrastructure supporting these diverse computational and storage needs currently consists of twelve virtualized Sun Solaris Zones executing on nine physical server systems. The servers are protected against failure by redundant power, storage, and networking connections. Storage needs are provided by a hybrid iSCSI and Fiber Channel Storage Area Network (SAN) with access to over 40 terabytes of RAID 5 and 6 storage. Processing tasks are assigned to systems based on parallelization and floating-point calculation needs. On-site buffering at the data-loggers provide protection in case of short-term network or hardware problems, while backup acquisition systems at the San Diego Supercomputer Center and the DMC protect against catastrophic failure of the primary site. Configuration management and monitoring of these systems is accomplished with open-source (Cfengine, Nagios, Solaris Community Software) and commercial tools (Intermapper). In the evolution from a single server to multiple virtualized server instances, Sun Cluster software was evaluated and found to be unstable in our environment. Shared filesystem

  12. Design Criteria For Networked Image Analysis System

    NASA Astrophysics Data System (ADS)

    Reader, Cliff; Nitteberg, Alan

    1982-01-01

    Image systems design is currently undergoing a metamorphosis from the conventional computing systems of the past into a new generation of special purpose designs. This change is motivated by several factors, notably among which is the increased opportunity for high performance with low cost offered by advances in semiconductor technology. Another key issue is a maturing in understanding of problems and the applicability of digital processing techniques. These factors allow the design of cost-effective systems that are functionally dedicated to specific applications and used in a utilitarian fashion. Following an overview of the above stated issues, the paper presents a top-down approach to the design of networked image analysis systems. The requirements for such a system are presented, with orientation toward the hospital environment. The three main areas are image data base management, viewing of image data and image data processing. This is followed by a survey of the current state of the art, covering image display systems, data base techniques, communications networks and software systems control. The paper concludes with a description of the functional subsystems and architectural framework for networked image analysis in a production environment.

  13. Network and adaptive system of systems modeling and analysis.

    SciTech Connect

    Lawton, Craig R.; Campbell, James E. Dr.; Anderson, Dennis James; Eddy, John P.

    2007-05-01

    This report documents the results of an LDRD program entitled ''Network and Adaptive System of Systems Modeling and Analysis'' that was conducted during FY 2005 and FY 2006. The purpose of this study was to determine and implement ways to incorporate network communications modeling into existing System of Systems (SoS) modeling capabilities. Current SoS modeling, particularly for the Future Combat Systems (FCS) program, is conducted under the assumption that communication between the various systems is always possible and occurs instantaneously. A more realistic representation of these communications allows for better, more accurate simulation results. The current approach to meeting this objective has been to use existing capabilities to model network hardware reliability and adding capabilities to use that information to model the impact on the sustainment supply chain and operational availability.

  14. Teaching Hands-On Linux Host Computer Security

    ERIC Educational Resources Information Center

    Shumba, Rose

    2006-01-01

    In the summer of 2003, a project to augment and improve the teaching of information assurance courses was started at IUP. Thus far, ten hands-on exercises have been developed. The exercises described in this article, and presented in the appendix, are based on actions required to secure a Linux host. Publicly available resources were used to…

  15. Linux Adventures on a Laptop. Computers in Small Libraries

    ERIC Educational Resources Information Center

    Roberts, Gary

    2005-01-01

    This article discusses the pros and cons of open source software, such as Linux. It asserts that despite the technical difficulties of installing and maintaining this type of software, ultimately it is helpful in terms of knowledge acquisition and as a beneficial investment librarians can make in themselves, their libraries, and their patrons.…

  16. Drowning in PC Management: Could a Linux Solution Save Us?

    ERIC Educational Resources Information Center

    Peters, Kathleen A.

    2004-01-01

    Short on funding and IT staff, a Western Canada library struggled to provide adequate public computing resources. Staff turned to a Linux-based solution that supports up to 10 users from a single computer, and blends Web browsing and productivity applications with session management, Internet filtering, and user authentication. In this article,…

  17. Analysis of complex systems using neural networks

    SciTech Connect

    Uhrig, R.E. (Dept. of Nuclear Engineering; Oak Ridge National Lab., TN)

    1992-01-01

    The application of neural networks, alone or in conjunction with other advanced technologies (expert systems, fuzzy logic, and/or genetic algorithms), to some of the problems of complex engineering systems has the potential to enhance the safety, reliability, and operability of these systems. Typically, the measured variables from the systems are analog variables that must be sampled and normalized to expected peak values before they are introduced into neural networks. Often data must be processed to put it into a form more acceptable to the neural network (e.g., a fast Fourier transformation of the time-series data to produce a spectral plot of the data). Specific applications described include: (1) Diagnostics: State of the Plant (2) Hybrid System for Transient Identification, (3) Sensor Validation, (4) Plant-Wide Monitoring, (5) Monitoring of Performance and Efficiency, and (6) Analysis of Vibrations. Although specific examples described deal with nuclear power plants or their subsystems, the techniques described can be applied to a wide variety of complex engineering systems.
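
The preprocessing steps the abstract describes, normalization to an expected peak value followed by a Fourier transform to a spectral representation, can be sketched as follows; the test signal, sample count, and peak value are illustrative:

```python
import numpy as np

def preprocess(signal, peak):
    """Normalize a sampled analog signal to its expected peak value, then
    take the magnitude spectrum, as done before feeding vibration data to
    a neural network."""
    x = np.asarray(signal, dtype=float) / peak
    return np.abs(np.fft.rfft(x))

# 1 second of a 10 Hz vibration component sampled at 256 Hz, peak 3.0.
t = np.linspace(0, 1, 256, endpoint=False)
sig = 3.0 * np.sin(2 * np.pi * 10 * t)
spec = preprocess(sig, peak=3.0)
print(spec.argmax())  # 10: the dominant bin matches the 10 Hz component
```

The resulting spectrum, rather than the raw time series, becomes the input vector for applications such as the vibration analysis listed above.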

  19. Networks for Autonomous Formation Flying Satellite Systems

    NASA Technical Reports Server (NTRS)

    Knoblock, Eric J.; Konangi, Vijay K.; Wallett, Thomas M.; Bhasin, Kul B.

    2001-01-01

    The performance of three communications networks to support autonomous multi-spacecraft formation flying systems is presented. All systems are comprised of a ten-satellite formation arranged in a star topology, with one of the satellites designated as the central or "mother ship." All data is routed through the mother ship to the terrestrial network. The first system uses a TCP/IP over ATM protocol architecture within the formation; the second system uses the IEEE 802.11 protocol architecture within the formation; and the last system uses both of the previous architectures with a constellation of geosynchronous satellites serving as an intermediate point-of-contact between the formation and the terrestrial network. The simulations consist of file transfers using either the File Transfer Protocol (FTP) or the Simple Automatic File Exchange (SAFE) Protocol. The results compare the IP queuing delay and IP processing delay at the mother ship, as well as the application-level round-trip time, for all systems. In all cases, using IEEE 802.11 within the formation yields less delay. Also, the throughput exhibited by SAFE is better than that of FTP.

  20. Zebra: A striped network file system

    NASA Technical Reports Server (NTRS)

    Hartman, John H.; Ousterhout, John K.

    1992-01-01

    The design of Zebra, a striped network file system, is presented. Zebra applies ideas from log-structured file system (LFS) and RAID research to network file systems, resulting in a network file system that has scalable performance, uses its servers efficiently even when its applications are using small files, and provides high availability. Zebra stripes file data across multiple servers, so that the file transfer rate is not limited by the performance of a single server. High availability is achieved by maintaining parity information for the file system. If a server fails, its contents can be reconstructed using the contents of the remaining servers and the parity information. Zebra differs from existing striped file systems in the way it stripes file data: Zebra does not stripe on a per-file basis; instead it stripes the stream of bytes written by each client. Clients write to the servers in units called stripe fragments, which are analogous to segments in an LFS. Stripe fragments contain file blocks that were written recently, without regard to which file they belong. This method of striping has numerous advantages over per-file striping, including increased server efficiency, efficient parity computation, and elimination of parity update.
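
The parity scheme rests on byte-wise XOR across the stripe fragments of a stripe: any one lost fragment is the XOR of the survivors and the parity. A minimal sketch of the idea (fragment contents are illustrative, and real Zebra operates on much larger log segments):

```python
def parity(fragments):
    """Byte-wise XOR parity across equal-length stripe fragments."""
    out = bytearray(len(fragments[0]))
    for frag in fragments:
        for i, b in enumerate(frag):
            out[i] ^= b
    return bytes(out)

# Three stripe fragments written by a client, plus their parity fragment.
frags = [b"aaaa", b"bbbb", b"cccc"]
p = parity(frags)

# If the server holding frags[1] fails, its fragment is recoverable as
# the XOR of the surviving fragments and the parity.
recovered = parity([frags[0], frags[2], p])
print(recovered == frags[1])  # True
```

Because clients write whole stripes of freshly logged data, the parity fragment is computed once per stripe rather than read-modify-written per file block, which is the "efficient parity computation" advantage the abstract cites.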

  1. GAS MAIN SENSOR AND COMMUNICATIONS NETWORK SYSTEM

    SciTech Connect

    Hagen Schempf, Ph.D.

    2003-02-27

    Automatika, Inc. was contracted by the Department of Energy (DOE), with co-funding from the New York Gas Group (NYGAS), to develop an in-pipe natural gas prototype measurement and wireless communications system for assessing and monitoring distribution networks. A prototype system was built for low-pressure cast-iron mains and tested in spider- and serial-network configurations in a live network in Long Island with the support of Keyspan Energy, Inc. The prototype unit combined sensors capable of monitoring pressure, flow, humidity, temperature and vibration, which were sampled and combined into data packages in an in-pipe master-slave architecture to collect data from a distributed spider arrangement, and in a master-repeater-slave configuration in serial or ladder-network arrangements. It was found that the system was capable of performing all data sampling and collection as expected, yielding interesting results as to flow dynamics and vibration detection. Wireless in-pipe communications were shown to be feasible, and valuable data was collected in order to determine how to improve on range and data quality in the future.

  2. MOLAR: Modular Linux and Adaptive Runtime Support for HEC OS/R Research

    SciTech Connect

    Frank Mueller

    2009-02-05

    MOLAR is a multi-institution research effort that concentrates on adaptive, reliable, and efficient operating and runtime system solutions for ultra-scale high-end scientific computing on the next generation of supercomputers. This research addresses the challenges outlined by the FAST-OS (forum to address scalable technology for runtime and operating systems) and HECRTF (high-end computing revitalization task force) activities by providing a modular Linux and adaptable runtime support for high-end computing operating and runtime systems. The MOLAR research has the following goals to address these issues. (1) Create a modular and configurable Linux system that allows customized changes based on the requirements of the applications, runtime systems, and cluster management software. (2) Build runtime systems that leverage the OS modularity and configurability to improve efficiency, reliability, scalability, ease-of-use, and provide support to legacy and promising programming models. (3) Advance computer reliability, availability and serviceability (RAS) management systems to work cooperatively with the OS/R to identify and preemptively resolve system issues. (4) Explore the use of advanced monitoring and adaptation to improve application performance and predictability of system interruptions. The overall goal of the research conducted at NCSU is to develop scalable algorithms for high availability without single points of failure and without single points of control.

  3. Gas Main Sensor and Communications Network System

    SciTech Connect

    Hagen Schempf

    2006-05-31

    Automatika, Inc. was contracted by the Department of Energy (DOE), with co-funding from the Northeast Gas Association (NGA), to develop an in-pipe natural gas prototype measurement and wireless communications system for assessing and monitoring distribution networks. This project was completed in April 2006, and culminated in the installation of more than 2 dozen GasNet nodes in both low- and high-pressure cast-iron and steel mains owned by multiple utilities in the northeastern US. Utilities are currently logging data (off-line) and monitoring data in real time from single and multiple networked sensors over cellular networks and collecting data using wireless Bluetooth PDA systems. The system was designed to be modular, using in-pipe sensor-wands capable of measuring flow, pressure, temperature, water content and vibration. Internal antennae allowed for the use of the pipe internals as a waveguide for setting up a sensor network to collect data from multiple nodes simultaneously. Sensor nodes were designed to be installed with low- and no-blow techniques and tools. Using a multi-drop bus technique with a custom protocol, all electronics were designed to be buriable and allow for on-board data collection (SD card), wireless relaying and cellular network forwarding. Installation options afforded by the design included direct-burial and external pole-mounted variants. Power was provided by one or more batteries, direct AC power (Class I Div. 2) and solar array. The utilities are currently in a data-collection phase and intend to use the collected (and processed) data to make capital improvement decisions, compare it to Stoner model predictions and evaluate the use of such a system for future expansion, technology improvement and commercialization starting later in 2006.

  4. Fiber Optic Network Design Expert System

    NASA Astrophysics Data System (ADS)

    Artz, Timothy J.; Wnek, Roy M.

    1987-05-01

    The Fiber Optic Network Design Expert System (FONDES) is an engineering tool for the specification, design, and evaluation of fiber optic transmission systems. FONDES encompasses a design rule base and a data base of specifications of system components. This package applies to fiber optic design work in two ways, as a design-to-specification tool and a system performance prediction model. The FONDES rule base embodies the logic of design engineering. It can be used to produce a system design given a requirement specification or it can be used to predict system performance given a system design. The periodically updated FONDES data base contains performance specifications, price, and availability data for current fiber optic system components. FONDES is implemented in an artificial intelligence language, TURBO-PROLOG, and runs on an IBM-PC.

  5. Secured network sensor-based defense system

    NASA Astrophysics Data System (ADS)

    Wei, Sixiao; Shen, Dan; Ge, Linqiang; Yu, Wei; Blasch, Erik P.; Pham, Khanh D.; Chen, Genshe

    2015-05-01

    Network sensor-based defense (NSD) systems have been widely used to defend against cyber threats. Nonetheless, if the adversary finds ways to identify the location of monitor sensors, the effectiveness of NSD systems can be reduced. In this paper, we propose both temporal and spatial perturbation based defense mechanisms to secure NSD systems and make the monitor sensor invisible to the adversary. The temporal-perturbation based defense manipulates the timing information of published data so that the probability of successfully recognizing monitor sensors can be reduced. The spatial-perturbation based defense dynamically redeploys monitor sensors in the network so that the adversary cannot obtain the complete information to recognize all of the monitor sensors. We carried out experiments using real-world traffic traces to evaluate the effectiveness of our proposed defense mechanisms. Our data shows that our proposed defense mechanisms can reduce the attack accuracy of recognizing detection sensors.
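
    The temporal-perturbation idea above can be sketched in a few lines: delay each published record by an independent random amount so the adversary cannot correlate publication times with monitor activity. This is a minimal illustration, not the authors' mechanism; the function name, uniform jitter distribution and bound are assumptions.

```python
import random

def perturb_timestamps(event_times, max_jitter=5.0, seed=None):
    """Delay each event's publication time by an independent random
    amount in [0, max_jitter] seconds, so publication timing no longer
    reveals when the monitor sensor actually observed the traffic.
    (Illustrative sketch; not the paper's exact scheme.)"""
    rng = random.Random(seed)
    return [t + rng.uniform(0.0, max_jitter) for t in event_times]

# each published time is delayed, never advanced, and by at most max_jitter
observed = [0.0, 1.0, 2.5]
published = perturb_timestamps(observed, max_jitter=2.0, seed=7)
assert all(0.0 <= p - t <= 2.0 for p, t in zip(published, observed))
```

    Because the delays are independent of the monitored traffic, timing-based classifiers lose the feature they rely on, at the cost of added reporting latency.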

  6. GAS MAIN SENSOR AND COMMUNICATIONS NETWORK SYSTEM

    SciTech Connect

    Hagen Schempf

    2004-09-30

    Automatika, Inc. was contracted by the Department of Energy (DOE) and with co-funding from the New York Gas Group (NYGAS), to develop an in-pipe natural gas prototype measurement and wireless communications system for assessing and monitoring distribution networks. In Phase II of this three-phase program, an improved prototype system was built for low-pressure cast-iron and high-pressure steel (including a no-blow installation system) mains and tested in a serial-network configuration in a live network in Long Island with the support of Keyspan Energy, Inc. The experiment was carried out in several open-hole excavations over a multi-day period. The prototype units (3 total) combined sensors capable of monitoring pressure, flow, humidity, temperature and vibration, which were sampled and combined in data-packages in an in-pipe master-repeater-slave configuration in serial or ladder-network arrangements. It was verified that the system was capable of performing all data-sampling, data-storage and collection as expected, yielding interesting results as to flow-dynamics and vibration-detection. Wireless in-pipe communications were shown to be feasible and the system was demonstrated to run off in-ground battery- and above-ground solar power. The remote datalogger access and storage-card features were demonstrated and used to log and post-process system data. Real-time data-display on an updated Phase-I GUI was used for in-field demonstration and troubleshooting.

  7. Hardware and Software Design of FPGA-based PCIe Gen3 interface for APEnet+ network interconnect system

    NASA Astrophysics Data System (ADS)

    Ammendola, R.; Biagioni, A.; Frezza, O.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Paolucci, P. S.; Pastorelli, E.; Rossetti, D.; Simula, F.; Tosoratto, L.; Vicini, P.

    2015-12-01

    In the attempt to develop an interconnection architecture optimized for hybrid HPC systems dedicated to scientific computing, we designed APEnet+, a point-to-point, low-latency and high-performance network controller supporting 6 fully bidirectional off-board links over a 3D torus topology. The first release of APEnet+ (named V4) was a board based on a 40 nm Altera FPGA, integrating 6 channels at 34 Gbps of raw bandwidth per direction and a PCIe Gen2 x8 host interface. It has been the first-of-its-kind device to implement an RDMA protocol to directly read/write data from/to Fermi and Kepler NVIDIA GPUs using NVIDIA peer-to-peer and GPUDirect RDMA protocols, obtaining real zero-copy GPU-to-GPU transfers over the network. The latest generation of APEnet+ systems (now named V5) implements a PCIe Gen3 x8 host interface on a 28 nm Altera Stratix V FPGA, with multi-standard fast transceivers (up to 14.4 Gbps) and an increased amount of configurable internal resources and hardware IP cores to support main interconnection standard protocols. Herein we present the APEnet+ V5 architecture, the status of its hardware and its system software design. Both its Linux Device Driver and the low-level libraries have been redeveloped to support the PCIe Gen3 protocol, introducing optimizations and solutions based on hardware/software co-design.

  8. IEEE 342 Node Low Voltage Networked Test System

    SciTech Connect

    Schneider, Kevin P.; Phanivong, Phillippe K.; Lacroix, Jean-Sebastian

    2014-07-31

    The IEEE Distribution Test Feeders provide benchmarks for new algorithms for the distribution analysis community. The low voltage network test feeder represents a moderate-size urban system that is unbalanced and highly networked, and is the first distribution test feeder developed by the IEEE that contains unbalanced networked components. The 342 node Low Voltage Networked Test System includes many elements that may be found in a networked system: multiple 13.2kV primary feeders, network protectors, a 120/208V grid network, and multiple 277/480V spot networks. This paper presents a brief review of the history of low voltage networks and how they evolved into the modern systems, and then presents a description of the 342 Node IEEE Low Voltage Network Test System along with power flow results.

  9. Fault-tolerant interconnection networks for multiprocessor systems

    SciTech Connect

    Nassar, H.M.

    1989-01-01

    Interconnection networks represent the backbone of multiprocessor systems. A failure in the network, therefore, could seriously degrade the system performance. For this reason, fault tolerance has been regarded as a major consideration in interconnection network design. This thesis presents two novel techniques to provide fault tolerance capabilities to three major networks: the Baseline network, the Benes network and the Clos network. First, the Simple Fault Tolerance Technique (SFT) is presented. The SFT technique is in fact the result of merging two widely known interconnection mechanisms: a normal interconnection network and a shared bus. This technique is most suitable for networks with small switches, such as the Baseline network and the Benes network. For the Clos network, whose switches may be too large for the SFT, another technique is developed to produce the Fault-Tolerant Clos (FTC) network. In the FTC, one switch is added to each stage. The two techniques are described and thoroughly analyzed.

  10. Conceptualizing and Advancing Research Networking Systems.

    PubMed

    Schleyer, Titus; Butler, Brian S; Song, Mei; Spallek, Heiko

    2012-03-01

    Science in general, and biomedical research in particular, is becoming more collaborative. As a result, collaboration with the right individuals, teams, and institutions is increasingly crucial for scientific progress. We propose Research Networking Systems (RNS) as a new type of system designed to help scientists identify and choose collaborators, and suggest a corresponding research agenda. The research agenda covers four areas: foundations, presentation, architecture, and evaluation. Foundations includes project-, institution- and discipline-specific motivational factors; the role of social networks; and impression formation based on information beyond expertise and interests. Presentation addresses representing expertise in a comprehensive and up-to-date manner; the role of controlled vocabularies and folksonomies; the tension between seekers' need for comprehensive information and potential collaborators' desire to control how they are seen by others; and the need to support serendipitous discovery of collaborative opportunities. Architecture considers aggregation and synthesis of information from multiple sources, social system interoperability, and integration with the user's primary work context. Lastly, evaluation focuses on assessment of collaboration decisions, measurement of user-specific costs and benefits, and how the large-scale impact of RNS could be evaluated with longitudinal and naturalistic methods. We hope that this article stimulates the human-computer interaction, computer-supported cooperative work, and related communities to pursue a broad and comprehensive agenda for developing research networking systems. PMID:24376309

  11. Conceptualizing and Advancing Research Networking Systems

    PubMed Central

    SCHLEYER, TITUS; BUTLER, BRIAN S.; SONG, MEI; SPALLEK, HEIKO

    2013-01-01

    Science in general, and biomedical research in particular, is becoming more collaborative. As a result, collaboration with the right individuals, teams, and institutions is increasingly crucial for scientific progress. We propose Research Networking Systems (RNS) as a new type of system designed to help scientists identify and choose collaborators, and suggest a corresponding research agenda. The research agenda covers four areas: foundations, presentation, architecture, and evaluation. Foundations includes project-, institution- and discipline-specific motivational factors; the role of social networks; and impression formation based on information beyond expertise and interests. Presentation addresses representing expertise in a comprehensive and up-to-date manner; the role of controlled vocabularies and folksonomies; the tension between seekers’ need for comprehensive information and potential collaborators’ desire to control how they are seen by others; and the need to support serendipitous discovery of collaborative opportunities. Architecture considers aggregation and synthesis of information from multiple sources, social system interoperability, and integration with the user’s primary work context. Lastly, evaluation focuses on assessment of collaboration decisions, measurement of user-specific costs and benefits, and how the large-scale impact of RNS could be evaluated with longitudinal and naturalistic methods. We hope that this article stimulates the human-computer interaction, computer-supported cooperative work, and related communities to pursue a broad and comprehensive agenda for developing research networking systems. PMID:24376309

  12. Digital Video Over Space Systems and Networks

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney

    2010-01-01

    This slide presentation reviews the use of digital video with space systems and networks. The earliest use of video was film, which precluded live viewing; this gave way to live television from space, which in turn has given way to digital video transmitted using Internet protocols. The change has brought many improvements along with new challenges, some of which are reviewed here. Digital video transmitted over space systems can provide incredible imagery, but the process must be viewed as an entire system rather than piecemeal.

  13. Network Event Recording Device: An automated system for network anomaly detection and notification. Draft

    SciTech Connect

    Simmons, D.G.; Wilkins, R.

    1994-09-01

    The goal of the Network Event Recording Device (NERD) is to provide a flexible autonomous system for network logging and notification when significant network anomalies occur. The NERD is also charged with increasing the efficiency and effectiveness of currently implemented network security procedures. While it has always been possible for network and security managers to review log files for evidence of network irregularities, the NERD provides real-time display of network activity, as well as constant monitoring and notification services for managers. Similarly, real-time display and notification of possible security breaches will provide improved effectiveness in combating resource infiltration from both inside and outside the immediate network environment.
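
    The kind of constant log monitoring and notification the NERD performs can be illustrated with a toy anomaly flagger. The log format, threshold rule and function name below are invented for illustration, not taken from the NERD itself.

```python
from collections import Counter

def flag_anomalies(log_lines, threshold=3):
    """Count events per source and flag any source whose event count
    exceeds the threshold -- a minimal stand-in for the NERD's
    real-time monitoring and notification. Assumed log line format:
    'source event-type'."""
    counts = Counter(line.split()[0] for line in log_lines if line.strip())
    return sorted(src for src, n in counts.items() if n > threshold)

# five failed logins from one host trip the threshold; one success elsewhere does not
logs = ["10.0.0.5 login-fail"] * 5 + ["10.0.0.7 login-ok"]
assert flag_anomalies(logs) == ["10.0.0.5"]
```

    A real system would run this continuously over a log stream and push notifications to managers rather than return a list, but the thresholding step is the same.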

  14. System/360 Computer Assisted Network Scheduling (CANS) System

    NASA Technical Reports Server (NTRS)

    Brewer, A. C.

    1972-01-01

    Computer assisted scheduling techniques that produce conflict-free and efficient schedules have been developed and implemented to meet needs of the Manned Space Flight Network. CANS system provides effective management of resources in complex scheduling environment. System is automated resource scheduling, controlling, planning, information storage and retrieval tool.

  15. System and method for networking electrochemical devices

    DOEpatents

    Williams, Mark C.; Wimer, John G.; Archer, David H.

    1995-01-01

    An improved electrochemically active system and method including a plurality of electrochemical devices, such as fuel cells and fluid separation devices, in which the anode and cathode process-fluid flow chambers are connected in fluid-flow arrangements so that the operating parameters of each of said plurality of electrochemical devices which are dependent upon process-fluid parameters may be individually controlled to provide improved operating efficiency. The improvements in operation include improved power efficiency and improved fuel utilization in fuel cell power generating systems and reduced power consumption in fluid separation devices and the like through interstage process fluid parameter control for series networked electrochemical devices. The improved networking method includes recycling of various process flows to enhance the overall control scheme.

  16. The realization of network video monitoring system

    NASA Astrophysics Data System (ADS)

    Hou, Zhuo-wei; Qiu, Yue-hong

    2013-08-01

    The paper presents a network video monitoring system based on a field programmable gate array to implement real-time acquisition and transmission of video signals. The system comprises an image acquisition module, a central control module and an Ethernet transmission module. A Cyclone FPGA serves as the control center, with Quartus II and the Nios II IDE used as development tools to build the hardware platform. An embedded hardware system is built on SOPC technology, in which the Nios II soft core and other controllers are combined by configuration. μClinux is used as the embedded operating system to make image acquisition and transmission over the Internet more reliable. A fast Ethernet controller is connected to the SOPC system to handle the MAC and PHY layers, and the TCP/IP protocol is used for data transmission. On top of TCP and UDP, an embedded web server implements HTTP. The thesis thus presents a design scheme for a video monitoring system with a programmable logic device at its core and the network as the transmission medium; the hardware design is the main focus, and the principles and functions of the system are explained in depth.

  17. System Design for Nano-Network Communications

    NASA Astrophysics Data System (ADS)

    ShahMohammadian, Hoda

    The potential applications of nanotechnology in a wide range of areas necessitate nano-networking research. Nano-networking is a new type of networking that has emerged from applying nanotechnology to communication theory. This dissertation therefore presents a framework for physical layer communications in a nano-network and addresses some of the pressing unsolved challenges in designing a molecular communication system. Its contribution is to propose well-justified models for signal propagation, noise sources, optimum receiver design and synchronization in molecular communication channels. The design of any communication system is primarily based on the signal propagation channel and noise models. Using Brownian motion and advection molecular statistics, separate signal propagation and noise models are presented for diffusion-based and flow-based molecular communication channels. It is shown that the corrupting noise of molecular channels is uncorrelated and non-stationary with a signal-dependent magnitude. The next key component of any communication system is the reception and detection process. This dissertation provides a detailed analysis of the effect of the ligand-receptor binding mechanism on the received signal, and develops the first optimal receiver design for molecular communications. The bit error rate performance of the proposed receiver is evaluated and the impact of medium motion on the receiver performance is investigated. Another important feature of any communication system is synchronization. In this dissertation, the first blind synchronization algorithm is presented for molecular communication channels. The proposed algorithm uses a non-decision-directed maximum likelihood criterion for estimating the channel delay. The Cramer-Rao lower bound is also derived and the performance of the proposed synchronization algorithm is evaluated by investigating its mean square error.

  18. Credit Default Swaps networks and systemic risk

    NASA Astrophysics Data System (ADS)

    Puliga, Michelangelo; Caldarelli, Guido; Battiston, Stefano

    2014-11-01

    Credit Default Swaps (CDS) spreads should reflect default risk of the underlying corporate debt. Actually, it has been recognized that CDS spread time series did not anticipate but only followed the increasing risk of default before the financial crisis. In principle, the network of correlations among CDS spread time series could at least display some form of structural change to be used as an early warning of systemic risk. Here we study a set of 176 CDS time series of financial institutions from 2002 to 2011. Networks are constructed in various ways, some of which display structural change at the onset of the credit crisis of 2008, but never before. By taking these networks as a proxy of interdependencies among financial institutions, we run stress-test based on Group DebtRank. Systemic risk before 2008 increases only when incorporating a macroeconomic indicator reflecting the potential losses of financial assets associated with house prices in the US. This approach indicates a promising way to detect systemic instabilities.
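
    One of the simpler constructions the abstract alludes to, a correlation network over spread time series, might be sketched as follows. The threshold value and the use of first differences as returns are assumptions for illustration, not the paper's exact method.

```python
import numpy as np

def correlation_network(series, threshold=0.7):
    """Link two institutions whenever the absolute Pearson correlation
    of their spread changes exceeds the threshold, returning a 0/1
    adjacency matrix with no self-loops. `series` has one row of
    spread observations per institution."""
    returns = np.diff(series, axis=1)       # one row of changes per institution
    corr = np.corrcoef(returns)
    adj = (np.abs(corr) > threshold).astype(int)
    np.fill_diagonal(adj, 0)
    return adj

# two nearly identical random walks become linked; an independent one does not
rng = np.random.default_rng(0)
base = rng.normal(size=200).cumsum()
series = np.vstack([base, base + 0.01 * rng.normal(size=200),
                    rng.normal(size=200).cumsum()])
adj = correlation_network(series)
assert adj[0, 1] == 1 and adj[0, 2] == 0
```

    Tracking how the density or structure of such adjacency matrices changes over rolling windows is one way to look for the structural change the authors test for.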

  19. Credit Default Swaps networks and systemic risk.

    PubMed

    Puliga, Michelangelo; Caldarelli, Guido; Battiston, Stefano

    2014-01-01

    Credit Default Swaps (CDS) spreads should reflect default risk of the underlying corporate debt. Actually, it has been recognized that CDS spread time series did not anticipate but only followed the increasing risk of default before the financial crisis. In principle, the network of correlations among CDS spread time series could at least display some form of structural change to be used as an early warning of systemic risk. Here we study a set of 176 CDS time series of financial institutions from 2002 to 2011. Networks are constructed in various ways, some of which display structural change at the onset of the credit crisis of 2008, but never before. By taking these networks as a proxy of interdependencies among financial institutions, we run stress-test based on Group DebtRank. Systemic risk before 2008 increases only when incorporating a macroeconomic indicator reflecting the potential losses of financial assets associated with house prices in the US. This approach indicates a promising way to detect systemic instabilities. PMID:25366654

  20. Credit Default Swaps networks and systemic risk

    PubMed Central

    Puliga, Michelangelo; Caldarelli, Guido; Battiston, Stefano

    2014-01-01

    Credit Default Swaps (CDS) spreads should reflect default risk of the underlying corporate debt. Actually, it has been recognized that CDS spread time series did not anticipate but only followed the increasing risk of default before the financial crisis. In principle, the network of correlations among CDS spread time series could at least display some form of structural change to be used as an early warning of systemic risk. Here we study a set of 176 CDS time series of financial institutions from 2002 to 2011. Networks are constructed in various ways, some of which display structural change at the onset of the credit crisis of 2008, but never before. By taking these networks as a proxy of interdependencies among financial institutions, we run stress-test based on Group DebtRank. Systemic risk before 2008 increases only when incorporating a macroeconomic indicator reflecting the potential losses of financial assets associated with house prices in the US. This approach indicates a promising way to detect systemic instabilities. PMID:25366654

  1. Pathways, Networks and Systems Medicine Conferences

    SciTech Connect

    Nadeau, Joseph H.

    2013-11-25

    The 6th Pathways, Networks and Systems Medicine Conference was held at the Minoa Palace Conference Center, Chania, Crete, Greece (16-21 June 2008). The Organizing Committee was composed of Joe Nadeau (CWRU, Cleveland), Rudi Balling (German Research Centre, Braunschweig), David Galas (Institute for Systems Biology, Seattle), Lee Hood (Institute for Systems Biology, Seattle), Diane Isonaka (Seattle), Fotis Kafatos (Imperial College, London), John Lambris (Univ. Pennsylvania, Philadelphia), Harris Lewin (Univ. of Illinois, Urbana-Champaign), Edison Liu (Genome Institute of Singapore, Singapore), and Shankar Subramaniam (Univ. California, San Diego). A total of 101 individuals from 21 countries participated in the conference: USA (48), Canada (5), France (5), Austria (4), Germany (3), Italy (3), UK (3), Greece (2), New Zealand (2), Singapore (2), Argentina (1), Australia (1), Cuba (1), Denmark (1), Japan (1), Mexico (1), Netherlands (1), Spain (1), Sweden (1), Switzerland (1). Of the speakers, 29 were established faculty members and 13 were graduate students or postdoctoral fellows. With respect to gender representation, 13 speakers were female and 28 were male; among all participants, 43 were female and 58 were male. The program included the following topics: Cancer Pathways and Networks (Day 1), Metabolic Disease Networks (Day 2), Organs, Pathways and Stem Cells (Day 3), and Inflammation, Immunity, Microbes and the Environment (Day 4). Proceedings of the Conference were not published.

  2. National Seismic Network System of Turkey

    NASA Astrophysics Data System (ADS)

    Zunbul, S.; Kadirioğlu, F. T.; Holoğlu, N.; Kartal, R. F.; Kiliç, T.; Yatman, A.; Iravul, Y.; Tüzel, B.

    2009-04-01

    In order to mitigate disaster losses, it is necessary to establish an effective disaster management and risk system. The first step of such management is preparedness studies before the earthquake (disaster), and determining disaster and risk information requires a seismological observation network. To monitor earthquakes on a country-wide scale, and to record, evaluate, archive and report them to the public authorities, the project named "Development of the National Seismic Network Project (USAG)" was started. Six three-component short-period, 63 broad-band and 13 one-component short-period stations, a 65-station local broad-band network, and 247 accelerometers are operated in the frame of this project. All of the stations continuously transmit their signals to the ERD (Earthquake Research Department) seismic data center in Ankara. The network can generally locate earthquakes down to local magnitude ML = 2.8, and down to ML = 1.5 in regions where the stations are concentrated. Earthquake activity in Turkey and the surrounding region is observed 24 hours a day, 7 days a week, at the ERD data center in Ankara. After the manual location of an earthquake, if the magnitude is over 4.0, the system automatically sends an SMS message to the authorized people, and the press, the public, national and local crisis centers and scientific institutions are immediately informed by fax and e-mail. Data exchange is carried out with EMSC-CSEM. During the installation of the broad-band stations, the seismotectonics of the region was taken into consideration. Earthquake recording stations are concentrated at the most important fault zones in Turkey: the North Anatolian Fault System, the East Anatolian Fault System, the Bitlis Overlap Belt and the Aegean Graben (or opening) System. After the 1999 İzmit and Düzce earthquakes, the number of seismic stations in Turkey has increased each passing year. In this study

  3. Deep Space Network information system architecture study

    NASA Technical Reports Server (NTRS)

    Beswick, C. A.; Markley, R. W. (Editor); Atkinson, D. J.; Cooper, L. P.; Tausworthe, R. C.; Masline, R. C.; Jenkins, J. S.; Crowe, R. A.; Thomas, J. L.; Stoloff, M. J.

    1992-01-01

    The purpose of this article is to describe an architecture for the Deep Space Network (DSN) information system in the years 2000-2010 and to provide guidelines for its evolution during the 1990s. The study scope is defined to be from the front-end areas at the antennas to the end users (spacecraft teams, principal investigators, archival storage systems, and non-NASA partners). The architectural vision provides guidance for major DSN implementation efforts during the next decade. A strong motivation for the study is an expected dramatic improvement in information-systems technologies, such as the following: computer processing, automation technology (including knowledge-based systems), networking and data transport, software and hardware engineering, and human-interface technology. The proposed Ground Information System has the following major features: unified architecture from the front-end area to the end user; open-systems standards to achieve interoperability; DSN production of level 0 data; delivery of level 0 data from the Deep Space Communications Complex, if desired; dedicated telemetry processors for each receiver; security against unauthorized access and errors; and highly automated monitor and control.

  4. Final Report for "Queuing Network Models of Performance of High End Computing Systems"

    SciTech Connect

    Buckwalter, J

    2005-09-28

    The primary objective of this project is to perform general research into queuing network models of performance of high end computing systems. A related objective is to investigate and predict how an increase in the number of nodes of a supercomputer will decrease the running time of a user's software package, which is often referred to as the strong scaling problem. We investigate the large, MPI-based Linux cluster MCR at LLNL, running the well-known NAS Parallel Benchmark (NPB) applications. Data is collected directly from NPB and also from the low-overhead LLNL profiling tool mpiP. For a run, we break the wall clock execution time of the benchmark into four components: switch delay, MPI contention time, MPI service time, and non-MPI computation time. Switch delay is estimated from message statistics. MPI service time and non-MPI computation time are calculated directly from measurement data. MPI contention is estimated by means of a queuing network model (QNM), based in part on MPI service time. This model of execution time validates reasonably well against the measured execution time, usually within 10%. Since the number of nodes used to run the application is a major input to the model, we can use the model to predict application execution times for various numbers of nodes. We also investigate how the four components of execution time scale individually as the number of nodes increases. Switch delay and MPI service time scale regularly. MPI contention is estimated by the QNM submodel and also has a fairly regular pattern. However, non-MPI compute time has a somewhat irregular pattern, possibly due to caching effects in the memory hierarchy. In contrast to some other performance modeling methods, this method is relatively fast to set up, fast to calculate, simple for data collection, and yet accurate enough to be quite useful.
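
    The four-component decomposition of wall clock time described above can be illustrated with a toy predictor. The parameter values and the single-server rho/(1-rho) contention formula below are stand-ins for the report's measured inputs and queuing network submodel, chosen only to show the shape of the calculation.

```python
def predict_runtime(nodes, serial_work=100.0, msgs_per_node=50,
                    per_msg_service=0.02, switch_latency=0.005):
    """Toy version of the report's decomposition: wall time = switch
    delay + MPI contention + MPI service + non-MPI compute. Contention
    uses a single-server queueing estimate as a stand-in for the
    report's QNM submodel; all parameters are illustrative, not
    measured values."""
    compute = serial_work / nodes                 # non-MPI compute, strong scaling
    service = msgs_per_node * per_msg_service     # MPI service time
    switch = msgs_per_node * switch_latency       # switch delay from message count
    rho = min(0.95, 0.01 * nodes)                 # utilization grows with node count
    contention = service * rho / (1.0 - rho)      # queueing-style contention estimate
    return compute + service + switch + contention

# scaling from 4 to 64 nodes still pays off, but contention eventually dominates
assert predict_runtime(64) < predict_runtime(4)
assert predict_runtime(200) > predict_runtime(64)
```

    The crossover between shrinking compute time and growing contention is exactly the strong-scaling behavior the report's model is built to predict.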

  5. Wireless sensor network for mobile surveillance systems

    NASA Astrophysics Data System (ADS)

    van Dijk, Gert J. A.; Maris, Marinus G.

    2004-11-01

    Guarding safety and security within industrial, commercial and military areas is an important issue nowadays. A specific challenge lies in the design of portable surveillance systems that can be rapidly deployed, installed and easily operated. Conventional surveillance systems typically employ stand alone sensors that transmit their data to a central control station for data-processing. One of the disadvantages of these kinds of systems is that they generate a lot of data that may induce processing or storage problems. Moreover, data from the sensors must be constantly observed and assessed by human operators. In this paper, a surveillance concept based on distributed intelligence in wireless sensor networks is proposed. In this concept, surveillance is automatically performed by means of many small sensing devices including cameras. The requirements for such surveillance systems are investigated. Experiments with a demonstration system were conducted to verify some of the claims made throughout this paper.

  6. Immune System Network and Cancer Vaccine

    NASA Astrophysics Data System (ADS)

    Bianca, Carlo; Pennisi, Marzio; Motta, Santo; Ragusa, Maria Alessandra

    2011-09-01

    This paper deals with the mathematical modelling of the immune system response to cancer disease, and specifically with the treatment of mammary carcinoma in the presence of an immunoprevention vaccine. The innate action of the immune system network, the external stimulus represented by repeated vaccine administrations and the competition with cancer are described by a model based on ordinary differential equations. The mathematical model is able to reproduce preclinical experiments on transgenic mice. The results are of great interest in both the applied and theoretical sciences.
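
    A minimal sketch of such an ODE competition model with repeated vaccine administrations, integrated with the forward Euler method. The parameter values, boost rule and function name are illustrative assumptions, not those of the paper.

```python
def simulate(days=100, dt=0.01, vaccine_days=(10, 20, 30)):
    """Toy tumor-immune competition model: cancer cells C grow
    logistically and are killed by immune effectors E; E decay
    naturally and are boosted on each vaccine administration day.
    All rates and the boost size are illustrative."""
    C, E = 1.0, 0.1
    boost_steps = {int(round(d / dt)) for d in vaccine_days}
    for i in range(int(days / dt)):
        if i in boost_steps:
            E += 1.0                              # vaccine administration
        dC = 0.3 * C * (1.0 - C / 100.0) - 0.5 * E * C
        dE = -0.1 * E
        C = max(C + dC * dt, 0.0)                 # forward Euler step
        E = max(E + dE * dt, 0.0)
    return C, E

# repeated vaccinations keep the tumor burden well below the untreated case
untreated, _ = simulate(days=40, vaccine_days=())
treated, _ = simulate(days=40)
assert treated < untreated
```

    Sweeping the administration schedule in such a model is how one explores, in silico, the protocol questions that the preclinical mouse experiments address.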

  7. Some queuing network models of computer systems

    NASA Technical Reports Server (NTRS)

    Herndon, E. S.

    1980-01-01

    Queuing network models of a computer system operating with a single workload type are presented. Program algorithms are adapted for use on the Texas Instruments SR-52 programmable calculator. By slightly altering the algorithm to process the G and H matrices row by row instead of column by column, six devices and an unlimited job/terminal population could be handled on the SR-52. Techniques are also introduced for handling a simple load dependent server and for studying interactive systems with fixed multiprogramming limits.
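
    The classic algorithm for single-workload closed queuing network models of this kind is exact Mean Value Analysis (MVA). The sketch below assumes load-independent service centers and is of course not the SR-52 program itself, but it computes the same class of results (system throughput and per-device queue lengths).

```python
def mva(demands, n_jobs, think_time=0.0):
    """Exact Mean Value Analysis for a closed, single-class queuing
    network with load-independent queueing centers. demands[k] is one
    job's total service demand at device k; returns system throughput
    and per-device mean queue lengths at population n_jobs (>= 1)."""
    queue = [0.0] * len(demands)                  # queue lengths at population 0
    throughput = 0.0
    for n in range(1, n_jobs + 1):
        # residence time: demand inflated by the queue seen on arrival
        resid = [d * (1.0 + q) for d, q in zip(demands, queue)]
        throughput = n / (think_time + sum(resid))
        queue = [throughput * r for r in resid]   # Little's law per device
    return throughput, queue

# two devices, ten interactive jobs with one second of think time
X, q = mva([0.05, 0.02], n_jobs=10, think_time=1.0)
assert 0.0 < X < 1.0 / 0.05                       # bottleneck bound on throughput
```

    Because the recursion only carries one queue-length vector, a load-dependent server or a fixed multiprogramming limit, as mentioned in the abstract, can be added by modifying the residence-time step.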

  8. Network video transmission system based on SOPC

    NASA Astrophysics Data System (ADS)

    Zhang, Zhengbing; Deng, Huiping; Xia, Zhenhua

    2008-03-01

    Video systems have been widely used in many fields such as conferences, public security, military affairs and medical treatment. With the rapid development of FPGAs, SOPC has received great attention in the area of image and video processing in recent years. A network video transmission system based on SOPC is proposed in this paper for the purpose of video acquisition, video encoding and network transmission. The hardware platform used to design the system is Altera's DE2 SOPC board, which includes an EP2C35F672C6 FPGA, an Ethernet controller and a video I/O interface. An IP core, the Nios II embedded processor, is used as the CPU of the system. In addition, a hardware module for format conversion of video data, and another module to realize Motion-JPEG, have been designed in Verilog HDL. These two modules are attached to the Nios II processor as peripherals through the Avalon bus. Simulation results show that these two modules work as expected. uClinux, including the TCP/IP protocol stack and the driver for the Ethernet controller, is chosen as the embedded operating system, and an application program scheme is proposed.

  9. LXtoo: an integrated live Linux distribution for the bioinformatics community

    PubMed Central

    2012-01-01

    Background Recent advances in high-throughput technologies have dramatically increased biological data generation. However, many research groups lack computing facilities and specialists. This is an obstacle that remains to be addressed. Here, we present a Linux distribution, LXtoo, to provide a flexible computing platform for bioinformatics analysis. Findings Unlike most existing live Linux distributions for bioinformatics, which limit their usage to sequence analysis and protein structure prediction, LXtoo incorporates a comprehensive collection of bioinformatics software, including data mining tools for microarray and proteomics, protein-protein interaction analysis, and computationally complex tasks like molecular dynamics. Moreover, most of the programs have been configured and optimized for high performance computing. Conclusions LXtoo aims to provide a well-supported computing environment tailored for bioinformatics research, reducing duplication of efforts in building computing infrastructure. LXtoo is distributed as a Live DVD and freely available at http://bioinformatics.jnu.edu.cn/LXtoo. PMID:22813356

  10. Network Penetration Testing and Research

    NASA Technical Reports Server (NTRS)

    Murphy, Brandon F.

    2013-01-01

    This paper focuses on research and testing done on penetrating a network for security purposes. This research provides the IT security office with new methods of attack across and against a company's network, and introduces new platforms and software that can better assist with protecting against such attacks. Testing and research were done on two different Linux-based operating systems for attacking and compromising a Windows-based host computer. Backtrack 5 and BlackBuntu (Linux-based penetration testing operating systems) are two different "attacker" computers that attempt to plant viruses and/or exploits on a host Windows 7 operating system, as well as to retrieve information from the host. Each Linux OS (Backtrack 5 and BlackBuntu) carries penetration testing software that provides the tools needed to create exploits capable of compromising a Windows system as well as other operating systems. This paper focuses on two main methods of deploying exploits onto a host computer in order to retrieve information from a compromised system. The first method tested is the "social engineering" exploit, which requires interaction from an unsuspecting user; with that interaction, a deployed exploit may allow a malicious user to gain access to the unsuspecting user's computer as well as the network that computer is connected to. Due to more advanced security settings and antivirus protection and detection, this method is easily identified and defended against. The second method of exploit deployment, the main focus of this paper, required extensive research on the best way to compromise a security-enabled, protected network. Once a network has been compromised, any and all devices connected to that network have the potential to be compromised as well. 
With a compromised

  11. LIBRA: An inexpensive geodetic network densification system

    NASA Technical Reports Server (NTRS)

    Fliegel, H. F.; Gantsweg, M.; Callahan, P. S.

    1975-01-01

    A description is given of the Libra (Locations Interposed by Ranging Aircraft) system, by which geodesy and earth strain measurements can be performed rapidly and inexpensively at several hundred auxiliary points with respect to a few fundamental control points established by any other technique, such as radio interferometry or satellite ranging. This low-cost means of extending the accuracy of space age geodesy to local surveys provides speed and spatial resolution useful, for example, for earthquake hazard estimation. Libra may be combined with an existing system, Aries (Astronomical Radio Interferometric Earth Surveying), to provide a balanced system adequate to meet geophysical needs and applicable to conventional surveying. The basic hardware design is outlined and specifications are defined, and the need for network densification is described. The following activities required to implement the proposed Libra system are also described: hardware development, data reduction, tropospheric calibrations, schedule of development, and estimated costs.

  12. Complex network synchronization of chaotic systems with delay coupling

    SciTech Connect

    Theesar, S. Jeeva Sathya; Ratnavelu, K.

    2014-03-05

    The study of complex networks enables us to understand the collective behavior of interconnected elements and has a vast range of real-time applications, from biology to laser dynamics. In this paper, synchronization of a complex network of chaotic systems has been studied. Every identical node in the complex network is assumed to be in Lur’e system form. In particular, delayed coupling has been assumed, along with identical sector-bounded nonlinear systems interconnected over the network topology.

  13. Evaluation of a Cyber Security System for Hospital Network.

    PubMed

    Faysel, Mohammad A

    2015-01-01

    Most cyber security systems use simulated data to evaluate their detection capabilities. The proposed cyber security system utilizes real hospital network connections. It uses a probabilistic data mining algorithm to detect anomalous events and takes appropriate responses in real time. In an evaluation using real-world hospital network data consisting of incoming network connections collected over a 24-hour period, the proposed system detected 15 unusual connections that went undetected by a commercial intrusion prevention system for the same network connections. Evaluation of the proposed system shows a potential to secure protected patient health information on a hospital network. PMID:26262217
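    The abstract does not specify the probabilistic algorithm, so the sketch below shows one generic approach of that kind: rarity scoring of connection attributes with Laplace smoothing, where connections whose attribute values were rare in a baseline window receive high anomaly scores. The addresses, ports and counts are invented for illustration.

```python
# Hedged sketch of frequency-based anomaly scoring for connection records
# (one common probabilistic data-mining approach, not the paper's own).

import math
from collections import Counter

def train(connections):
    """Count attribute values over a baseline window of connection records."""
    counts = [Counter() for _ in connections[0]]
    for conn in connections:
        for field, value in enumerate(conn):
            counts[field][value] += 1
    return counts, len(connections)

def score(counts, total, conn):
    """Sum of -log probabilities; rare or unseen values score high."""
    s = 0.0
    for field, value in enumerate(conn):
        p = (counts[field][value] + 1) / (total + 1)  # Laplace smoothing
        s += -math.log(p)
    return s

baseline = [("10.0.0.5", 443, "tcp")] * 95 + [("10.0.0.9", 80, "tcp")] * 5
counts, total = train(baseline)
normal = score(counts, total, ("10.0.0.5", 443, "tcp"))
odd = score(counts, total, ("203.0.113.7", 23, "udp"))  # never-seen attributes
```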

  14. Requirements for Linux Checkpoint/Restart

    SciTech Connect

    Duell, Jason; Hargrove, Paul H.; Roman, Eric S.

    2002-02-26

    This document has 4 main objectives: (1) Describe data to be saved and restored during checkpoint/restart; (2) Describe how checkpoint/restart is used within the context of the Scalable Systems environment, and MPI applications; (3) Identify issues for a checkpoint/restart implementation; and (4) Sketch the architecture of a checkpoint/restart implementation.

  15. Toda Systems, Cluster Characters, and Spectral Networks

    NASA Astrophysics Data System (ADS)

    Williams, Harold

    2016-07-01

    We show that the Hamiltonians of the open relativistic Toda system are elements of the generic basis of a cluster algebra, and in particular are cluster characters of nonrigid representations of a quiver with potential. Using cluster coordinates defined via spectral networks, we identify the phase space of this system with the wild character variety related to the periodic nonrelativistic Toda system by the wild nonabelian Hodge correspondence. We show that this identification takes the relativistic Toda Hamiltonians to traces of holonomies around a simple closed curve. In particular, this provides nontrivial examples of cluster coordinates on SL_n-character varieties for n > 2 where canonical functions associated to simple closed curves can be computed in terms of quivers with potential, extending known results in the SL_2 case.

  16. Advanced systems engineering and network planning support

    NASA Technical Reports Server (NTRS)

    Walters, David H.; Barrett, Larry K.; Boyd, Ronald; Bazaj, Suresh; Mitchell, Lionel; Brosi, Fred

    1990-01-01

    The objective of this task was to take a fresh look at the NASA Space Network Control (SNC) element for the Advanced Tracking and Data Relay Satellite System (ATDRSS) such that it can be made more efficient and responsive to the user by introducing new concepts and technologies appropriate for the 1997 timeframe. In particular, it was desired to investigate the technologies and concepts employed in similar systems that may be applicable to the SNC. The recommendations resulting from this study include resource partitioning, on-line access to subsets of the SN schedule, fluid scheduling, increased use of demand access on the MA service, automating Inter-System Control functions using monitor by exception, increase automation for distributed data management and distributed work management, viewing SN operational control in terms of the OSI Management framework, and the introduction of automated interface management.

  17. Network protocols for mobile robot systems

    NASA Astrophysics Data System (ADS)

    Gage, Douglas W.

    1998-01-01

    Communications and communications protocols will play an important role in mobile robot systems able to address real world applications. A poorly integrated 'stack' of communications protocols, or protocols poorly matched to the functional and performance characteristics of the underlying physical communications links, can greatly reduce the effectiveness of an otherwise well implemented robotic or networked sensor system. The proliferation of Internet-like networks in military as well as civilian domains has motivated research to address some of the performance limitations TCP suffers when operating over RF and other media with long bandwidth-delay products, dynamic connectivity, and error-prone links. Beyond these performance issues, however, TCP is poorly matched to the requirements of mobile robots and other quasi-autonomous systems: it is oriented to providing a continuous data stream rather than discrete messages, and the canonical 'socket' interface conceals short losses of communications connectivity but simply gives up and forces the application layer software to deal with longer losses. For the multipurpose security and surveillance mission platform project, a software applique is being developed that will run on top of the user datagram protocol to provide a reliable message-based transport service. In addition, a session layer protocol is planned to support the effective transfer of control of multiple platforms among multiple stations.

  18. An algorithm of a real time image tracking system using a camera with pan/tilt motors on an embedded system

    NASA Astrophysics Data System (ADS)

    Kim, Hie-Sik; Nam, Chul; Ha, Kwan-Yong; Ayurzana, Odgeral; Kwon, Jong-Won

    2005-12-01

    Embedded systems have been applied in many fields, including households and industrial sites. User interface technology with a simple on-screen display has been implemented more and more widely. User demands are increasing, and the range of applicable fields is growing due to the high penetration rate of the Internet; therefore, the demand for embedded systems tends to rise. An embedded system for image tracking was implemented. This system uses a fixed IP address for reliable server operation on TCP/IP networks. Using a USB camera on the embedded Linux system, real-time broadcasting of video images on the Internet was developed. The digital camera is connected to the USB host port of the embedded board. All input images from the video camera are continuously stored as compressed JPEG files in a directory on the Linux web server, and each frame from the web camera is compared to measure the displacement vector, using a block matching algorithm and an edge detection algorithm for fast speed. The displacement vector is then used for pan/tilt motor control through an RS232 serial cable. The embedded board utilizes the S3C2410 MPU, which uses the ARM 920T core from Samsung. The operating system is a port of the embedded Linux kernel with a mounted root file system. The stored images are sent to the client PC through the web browser, using the network functions of Linux and a program developed with the TCP/IP protocol.
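    The block matching step can be illustrated with a minimal sum-of-absolute-differences (SAD) search. This is a generic sketch of the technique named in the abstract, not the system's actual code; the frame contents, block size and search range are illustrative.

```python
# Generic block matching: find the displacement vector that minimizes the
# sum of absolute differences (SAD) between a reference block in the previous
# frame and candidate blocks in the next frame.

def sad(frame, x, y, block, bs):
    return sum(abs(frame[y + j][x + i] - block[j][i])
               for j in range(bs) for i in range(bs))

def match_block(prev_frame, next_frame, bx, by, bs=4, search=3):
    block = [row[bx:bx + bs] for row in prev_frame[by:by + bs]]
    h, w = len(next_frame), len(next_frame[0])
    best, best_vec = float("inf"), (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = bx + dx, by + dy
            if 0 <= x <= w - bs and 0 <= y <= h - bs:
                cost = sad(next_frame, x, y, block, bs)
                if cost < best:
                    best, best_vec = cost, (dx, dy)
    return best_vec          # displacement vector for the pan/tilt control

# A bright 4x4 square moves 2 pixels to the right between frames:
W = H = 12
prev_f = [[255 if 4 <= x < 8 and 4 <= y < 8 else 0 for x in range(W)] for y in range(H)]
next_f = [[255 if 6 <= x < 10 and 4 <= y < 8 else 0 for x in range(W)] for y in range(H)]
vec = match_block(prev_f, next_f, bx=4, by=4)
```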

  19. Networks.

    ERIC Educational Resources Information Center

    Maughan, George R.; Petitto, Karen R.; McLaughlin, Don

    2001-01-01

    Describes the connectivity features and options of modern campus communication and information system networks, including signal transmission (wire-based and wireless), signal switching, convergence of networks, and network assessment variables, to enable campus leaders to make sound future-oriented decisions. (EV)

  20. Simple Linux Utility for Resource Management

    Energy Science and Technology Software Center (ESTSC)

    2008-03-10

    SLURM is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small computer clusters. As a cluster resource manager, SLURM has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates conflicting requests for resources by managing a queue of pending work.

  1. Simple Linux Utility for Resource Management

    Energy Science and Technology Software Center (ESTSC)

    2009-09-09

    SLURM is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small computer clusters. As a cluster resource manager, SLURM has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates conflicting requests for resources by managing a queue of pending work.

  2. Simple Linux Utility for Resource Management

    SciTech Connect

    Jette, M.

    2009-09-09

    SLURM is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small computer clusters. As a cluster resource manager, SLURM has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates conflicting requests for resources by managing a queue of pending work.

  3. Simple Linux Utility for Resource Management

    SciTech Connect

    Ali, Amjad Majid; Albert, Don; Andersson, Par; Artiaga, Ernest; Auble, Daniel; Balle, Susanne; Blanchard, Anton; Cao, Hongjia; Christians, Daniel; Civario, Gilles; Clouston, Chuck; Dunlap, Chris; Ekstrom, Joseph; Garlick, James; Grondona, Mark; Hatazaki, Takao; Holmes, Christopher; Huff, Nathan; Jackson, David; Jette, Morris; Johnson, Greg; King, Jason; Kritkausky, Nancy; Lee, Puenlap; Li, Bernard; McDougall, Steven; Mecozzi, Donna; Morrone, Christopher; Munt, Pere; O'Sullivan, Bryan; Oliva, Gennaro; Palermo, Daniel; Phung, Daniel; Pittman, Ashley; Riebs, Andrew; Sacerdoti, Federico; Squyers, Jeff; Tamraparni, Prashanth; Tew, Kevin; Windley, Jay; Wunderlin, Anne-Marie

    2008-03-10

    SLURM is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small computer clusters. As a cluster resource manager, SLURM has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates conflicting requests for resources by managing a queue of pending work.

  4. Efficient Parallel Engineering Computing on Linux Workstations

    NASA Technical Reports Server (NTRS)

    Lou, John Z.

    2010-01-01

    A C software module has been developed that creates lightweight processes (LWPs) dynamically to achieve parallel computing performance in a variety of engineering simulation and analysis applications to support NASA and DoD project tasks. The required interface between the module and the application it supports is simple, minimal and almost completely transparent to the user applications, and it can achieve nearly ideal computing speed-up on multi-CPU engineering workstations of all operating system platforms. The module can be integrated into an existing application (C, C++, Fortran and others) either as part of a compiled module or as a dynamically linked library (DLL).

  5. Simulation of large systems with neural networks

    SciTech Connect

    Paez, T.L.

    1994-09-01

    Artificial neural networks (ANNs) have been shown capable of simulating the behavior of complex, nonlinear, systems, including structural systems. Under certain circumstances, it is desirable to simulate structures that are analyzed with the finite element method. For example, when we perform a probabilistic analysis with the Monte Carlo method, we usually perform numerous (hundreds or thousands of) repetitions of a response simulation with different input and system parameters to estimate the chance of specific response behaviors. In such applications, efficiency in computation of response is critical, and response simulation with ANNs can be valuable. However, finite element analyses of complex systems involve the use of models with tens or hundreds of thousands of degrees of freedom, and ANNs are practically limited to simulations that involve far fewer variables. This paper develops a technique for reducing the amount of information required to characterize the response of a general structure. We show how the reduced information can be used to train a recurrent ANN. Then the trained ANN can be used to simulate the reduced behavior of the original system, and the reduction transformation can be inverted to provide a simulation of the original system. A numerical example is presented.

  6. Deep Space Network information system architecture study

    NASA Technical Reports Server (NTRS)

    Beswick, C. A.; Markley, R. W. (Editor); Atkinson, D. J.; Cooper, L. P.; Tausworthe, R. C.; Masline, R. C.; Jenkins, J. S.; Crowe, R. A.; Thomas, J. L.; Stoloff, M. J.

    1992-01-01

    The purpose of this article is to describe an architecture for the DSN information system in the years 2000-2010 and to provide guidelines for its evolution during the 1990's. The study scope is defined to be from the front-end areas at the antennas to the end users (spacecraft teams, principal investigators, archival storage systems, and non-NASA partners). The architectural vision provides guidance for major DSN implementation efforts during the next decade. A strong motivation for the study is an expected dramatic improvement in information-systems technologies--i.e., computer processing, automation technology (including knowledge-based systems), networking and data transport, software and hardware engineering, and human-interface technology. The proposed Ground Information System has the following major features: unified architecture from the front-end area to the end user; open-systems standards to achieve interoperability; DSN production of level 0 data; delivery of level 0 data from the Deep Space Communications Complex, if desired; dedicated telemetry processors for each receiver; security against unauthorized access and errors; and highly automated monitor and control.

  7. Transformation of legacy network management system to service oriented architecture

    NASA Astrophysics Data System (ADS)

    Sathyan, Jithesh; Shenoy, Krishnananda

    2007-09-01

    Service providers today are facing the challenge of operating and maintaining multiple networks based on multiple technologies. Network Management System (NMS) solutions are being used to manage these networks. However, the NMS is tightly coupled with the Element or Core network components; hence there are multiple NMS solutions for heterogeneous networks. Current network management solutions are targeted at a variety of independent networks. The widespread popularity of the IP Multimedia Subsystem (IMS) is a clear indication that all of these independent networks will be integrated into a single IP-based infrastructure, referred to as Next Generation Networks (NGN), in the near future. The services, network architectures and traffic patterns in NGN will dramatically differ from those of current networks. The heterogeneity and complexity of NGN, including concepts like Fixed Mobile Convergence, will bring a number of challenges to network management. The high degree of complexity accompanying the network element technology necessitates network management systems (NMS) which can utilize this technology to provide more service interfaces while hiding the inherent complexity. As operators begin to add new networks and expand existing networks to support new technologies and products, the necessity of scalable, flexible and functionally rich NMS systems arises. Another important factor influencing NMS architecture is mergers and acquisitions among the key vendors. Difficulty of integration is a key impediment in the traditional hierarchical NMS architecture. These requirements trigger the need for an architectural framework that will address NGNM (Next Generation Network Management) issues seamlessly. This paper presents a unique perspective on bringing service oriented architecture (SOA) to legacy network management systems (NMS). It advocates a staged approach to transforming a legacy NMS to SOA. 
The architecture at each stage is detailed along with the technical advantages and

  8. Technology Network Ties: Network Services and Technology Programs for New York State's Educational System.

    ERIC Educational Resources Information Center

    New York State Education Dept., Albany. Office of Elementary and Secondary Education Planning, Testing, and Technological Services.

    The New York State Technology Network Ties (TNT) system is a statewide telecommunications network which consists of computers, telephone lines, and telecommunications hardware and software. This network links school districts, Boards of Cooperative Educational Services (BOCES), libraries, other educational institutions, and the State Education…

  9. Neural network system for traffic flow management

    NASA Astrophysics Data System (ADS)

    Gilmore, John F.; Elibiary, Khalid J.; Petersson, L. E. Rickard

    1992-09-01

    Atlanta will be the home of several special events during the next five years ranging from the 1996 Olympics to the 1994 Super Bowl. When combined with the existing special events (Braves, Falcons, and Hawks games, concerts, festivals, etc.), the need to effectively manage traffic flow from surface streets to interstate highways is apparent. This paper describes a system for traffic event response and management for intelligent navigation utilizing signals (TERMINUS) developed at Georgia Tech for adaptively managing special event traffic flows in the Atlanta, Georgia area. TERMINUS (the original name given Atlanta, Georgia based upon its role as a rail line terminating center) is an intelligent surface street signal control system designed to manage traffic flow in Metro Atlanta. The system consists of three components. The first is a traffic simulation of the downtown Atlanta area around Fulton County Stadium that models the flow of traffic when a stadium event lets out. Parameters for the surrounding area include modeling for events during various times of day (such as rush hour). The second component is a computer graphics interface with the simulation that shows the traffic flows achieved based upon intelligent control system execution. The final component is the intelligent control system that manages surface street light signals based upon feedback from control sensors that dynamically adapt the intelligent controller's decision making process. The intelligent controller is a neural network model that allows TERMINUS to control the configuration of surface street signals to optimize the flow of traffic away from special events.

  10. Network versus portfolio structure in financial systems

    NASA Astrophysics Data System (ADS)

    Kobayashi, Teruyoshi

    2013-10-01

    The question of how to stabilize financial systems has attracted considerable attention since the global financial crisis of 2007-2009. Recently, Beale et al. [Proc. Natl. Acad. Sci. USA 108, 12647 (2011)] demonstrated that higher portfolio diversity among banks would reduce systemic risk by decreasing the risk of simultaneous defaults at the expense of a higher likelihood of individual defaults. In practice, however, a bank default has an externality in that it undermines other banks’ balance sheets. This paper explores how each of these different sources of risk, simultaneity risk and externality, contributes to systemic risk. The results show that the allocation of external assets that minimizes systemic risk varies with the topology of the financial network as long as asset returns have negative correlations. In the model, a well-known centrality measure, PageRank, reflects an appropriately defined “infectiveness” of a bank. An important result is that the most infective bank need not always be the safest bank. Under certain circumstances, the most infective node should act as a firewall to prevent large-scale collective defaults. The introduction of a counteractive portfolio structure will significantly reduce systemic risk.
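    The use of PageRank as an "infectiveness" measure can be illustrated on a toy exposure network. The graph, damping factor and edge convention below are illustrative assumptions, not the paper's model: an edge from bank X to bank Y means X is exposed to Y, so Y's default propagates to X, and a bank that many others are exposed to scores highest.

```python
# Plain power-iteration PageRank on a toy interbank exposure graph.

def pagerank(links, d=0.85, iters=100):
    nodes = sorted(links)
    pr = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        pr = {n: (1 - d) / len(nodes)
                 + d * sum(pr[m] / len(links[m]) for m in nodes if n in links[m])
              for n in nodes}
    return pr

links = {
    "hub": ["b1"],                  # the hub is itself exposed to one bank
    "b1": ["hub"], "b2": ["hub"], "b3": ["hub"], "b4": ["hub"],
}
pr = pagerank(links)                # "hub" comes out as the most infective node
```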

  11. Stoichiometric network theory for nonequilibrium biochemical systems.

    PubMed

    Qian, Hong; Beard, Daniel A; Liang, Shou-dan

    2003-02-01

    We introduce the basic concepts and develop a theory for nonequilibrium steady-state biochemical systems applicable to analyzing large-scale complex isothermal reaction networks. In terms of the stoichiometric matrix, we demonstrate both Kirchhoff's flux law, Σ_l J_l = 0 over a biochemical species, and the potential law, Σ_l μ_l = 0 over a reaction loop. They reflect mass and energy conservation, respectively. For each reaction, its steady-state flux J can be decomposed into forward and backward one-way fluxes, J = J+ - J-, with chemical potential difference Δμ = RT ln(J-/J+). The product -JΔμ gives the isothermal heat dissipation rate, which is necessarily non-negative according to the second law of thermodynamics. The stoichiometric network theory (SNT) embodies all of the relevant fundamental physics. Knowing J and Δμ of a biochemical reaction, a conductance can be computed which directly reflects the level of gene expression for the particular enzyme. For sufficiently small flux, a linear relationship between J and Δμ can be established, as in the linear flux-force relation of irreversible thermodynamics, analogous to Ohm's law in electrical circuits. PMID:12542691
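    The quoted relations are easy to check numerically. The one-way fluxes below are illustrative values and RT is taken roughly at room temperature; whichever direction the net flux runs, the dissipation -J·Δμ comes out non-negative, as the second law requires.

```python
# Numeric check of J = J+ - J-, dmu = RT*ln(J-/J+), and dissipation -J*dmu.

import math

RT = 2.5  # kJ/mol, approximate value at room temperature

def reaction_quantities(J_plus, J_minus):
    J = J_plus - J_minus                    # net steady-state flux
    dmu = RT * math.log(J_minus / J_plus)   # chemical potential difference
    dissipation = -J * dmu                  # isothermal heat dissipation rate
    return J, dmu, dissipation

J1, dmu1, q1 = reaction_quantities(10.0, 2.0)  # forward-running reaction
J2, dmu2, q2 = reaction_quantities(2.0, 10.0)  # backward-running reaction
```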

  12. Networks of Innovation - Change and Meaning in the Age of the Internet

    NASA Astrophysics Data System (ADS)

    Tuomi, Ilkka

    2003-01-01

    Integrating concepts from multiple theoretical disciplines and detailed analyses of the evolution of Internet-related innovations (including computer networking, the World Wide Web and the Linux open source operating system), this book develops foundations for a new theoretical and practical understanding of innovation. It covers topics ranging from fashion to the history of art, and includes the most detailed analysis of the open source development model published so far.

  13. CMA Member Survey: Network Management Systems Showing Little Improvement.

    ERIC Educational Resources Information Center

    Lusa, John M.

    1998-01-01

    Discusses results of a survey of 112 network and telecom managers--members of the Communications Managers Association (CMA)--to identify problems relating to the operation of large enterprise networks. Results are presented in a table under categories of: respondent profile; network management systems; carrier management; enterprise management;…

  14. System Leadership, Networks and the Question of Power

    ERIC Educational Resources Information Center

    Hatcher, Richard

    2008-01-01

    The author's argument revolves around the relationships between government agendas and the agency of teachers, and between them the intermediary role of management as "system leaders" of network forms. Network is a pluralistic concept: networks can serve very different educational-political interests. They offer the potential of new participatory…

  15. Neural networks for aircraft system identification

    NASA Technical Reports Server (NTRS)

    Linse, Dennis J.

    1991-01-01

    Artificial neural networks offer some interesting possibilities for use in control. Our current research is on the use of neural networks on an aircraft model. The model can then be used in a nonlinear control scheme. The effectiveness of network training is demonstrated.

  16. [Systemic inflammatory rheumatic diseases competence network].

    PubMed

    Rufenach, C; Burmester, G-R; Zeidler, H; Radbruch, A

    2004-04-01

    The competence network for rheumatology, funded by the Bundesministerium für Bildung und Forschung (BMBF) since 1999, has succeeded in creating a unique research structure in Germany: medical doctors and scientists from six university rheumatology centres (Berlin, Düsseldorf, Erlangen, Freiburg, Hannover and Lübeck/Bad Bramstedt) work closely together with scientists doing basic research at the Deutsches Rheuma-Forschungszentrum (DRFZ), with rheumatological hospitals, rehabilitation clinics, and rheumatologists. Jointly they are searching for the causes of systemic inflammatory rheumatic diseases and trying to improve therapies, nationwide and with an interdisciplinary approach. The primary objective of this collaboration is to transfer new scientific insights more rapidly in order to improve methods for diagnosis and patient treatment. PMID:14999386

  17. Famine Early Warning System Network (FEWS NET)

    USGS Publications Warehouse

    Verdin, James P.

    2006-01-01

    The FEWS NET mission is to identify potentially food-insecure conditions early through the provision of timely and analytical hazard and vulnerability information. U.S. Government decision-makers act on this information to authorize mitigation and response activities. The U.S. Geological Survey (USGS) FEWS NET provides tools and data for monitoring and forecasting the incidence of drought and flooding to identify shocks to the food supply system that could lead to famine. Historically focused on Africa, the network's scope has expanded to global coverage. FEWS NET implementing partners include the USGS, National Aeronautics and Space Administration (NASA), National Oceanic and Atmospheric Administration (NOAA), United States Agency for International Development (USAID), United States Department of Agriculture (USDA), and Chemonics International.

  18. The network-enabled optimization system server

    SciTech Connect

    Mesnier, M.P.

    1995-08-01

Mathematical optimization is a technology under constant change and advancement, drawing upon the most efficient and accurate numerical methods to date. Further, these methods can be tailored for a specific application or generalized to accommodate a wider range of problems. This perpetual change creates an ever-growing field, one that is often difficult to stay abreast of. Hence the impetus behind the Network-Enabled Optimization System (NEOS) server, which aims to provide users, both novice and expert, with a guided tour through the expanding world of optimization. The NEOS server is responsible for bridging the gap between users and the optimization software they seek. More specifically, the NEOS server will accept optimization problems over the Internet and return a solution to the user either interactively or by e-mail. This paper discusses the current implementation of the server.

  19. System and method for generating a relationship network

    DOEpatents

    Franks, Kasian; Myers, Cornelia A.; Podowski, Raf M.

    2011-07-26

    A computer-implemented system and process for generating a relationship network is disclosed. The system provides a set of data items to be related and generates variable length data vectors to represent the relationships between the terms within each data item. The system can be used to generate a relationship network for documents, images, or any other type of file. This relationship network can then be queried to discover the relationships between terms within the set of data items.
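The patent's variable-length vectors can be pictured as per-term co-occurrence maps, each listing only the terms a given term actually appears near. The sketch below is an illustration of that idea, not the patented method; the window size and the count-based scoring are arbitrary choices made here for demonstration.

```python
from collections import defaultdict

def build_relationship_network(documents, window=2):
    """Map each term to a variable-length vector of (related term, count)
    pairs, one entry per term it co-occurs with inside `window`."""
    vectors = defaultdict(lambda: defaultdict(int))
    for doc in documents:
        terms = doc.lower().split()
        for i, term in enumerate(terms):
            for j in range(max(0, i - window), min(len(terms), i + window + 1)):
                if i != j:
                    vectors[term][terms[j]] += 1
    return {t: dict(v) for t, v in vectors.items()}

def query(network, term, top=3):
    """Return the terms most strongly related to `term`."""
    related = network.get(term, {})
    return sorted(related, key=related.get, reverse=True)[:top]

docs = [
    "gene expression regulates protein synthesis",
    "protein synthesis requires gene transcription",
]
net = build_relationship_network(docs)
print(query(net, "protein"))
```

Querying the network for "protein" ranks "synthesis" first, since it co-occurs in both data items.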

  20. System and method for generating a relationship network

    DOEpatents

    Franks, Kasian; Myers, Cornelia A; Podowski, Raf M

    2015-05-05

    A computer-implemented system and process for generating a relationship network is disclosed. The system provides a set of data items to be related and generates variable length data vectors to represent the relationships between the terms within each data item. The system can be used to generate a relationship network for documents, images, or any other type of file. This relationship network can then be queried to discover the relationships between terms within the set of data items.

  1. Networking.

    ERIC Educational Resources Information Center

    Duvall, Betty

    Networking is an information giving and receiving system, a support system, and a means whereby women can get ahead in careers--either in new jobs or in current positions. Networking information can create many opportunities: women can talk about how other women handle situations and tasks, and previously established contacts can be used in…

  2. Environmental Sensor Networks: A revolution in Earth System Science?

    NASA Astrophysics Data System (ADS)

    Martinez, K.; Hart, J. K.

    2007-12-01

Environmental Sensor Networks (ESNs) facilitate the study of fundamental processes and the development of hazard response systems. They have evolved from passive logging systems that require manual downloading, into 'intelligent' sensor networks that comprise a network of automatic sensor nodes and communications systems which actively communicate their data to a Sensor Network Server (SNS) where these data can be integrated with other environmental datasets. At present, ESNs can be classified into three types: Large Scale Single Function Networks (which use large single purpose nodes to cover a wide geographical area), Localised Multifunction Sensor Networks (which typically monitor a small area in more detail, often with wireless ad-hoc systems), and Biosensor Networks (which use emerging biotechnologies to monitor environmental processes as well as developing proxies for immediate use). In the future, sensor networks will integrate these three elements (Heterogeneous Sensor Networks). We describe the development of a glacial ESN (Glacsweb) to monitor subglacial processes in order to understand glacier response to climate change. We discuss the advantages of the new system, and research highlights, as well as the problems of real world ESNs. We argue that Environmental Sensor Networks will become a standard research tool for future Earth System and Environmental Science. Not only do they provide a 'virtual' connection with the environment, they allow new field and conceptual approaches to the study of environmental processes to be developed. We suggest that although technological advances have facilitated these changes, it is vital that Earth Systems and Environmental Scientists utilise them.

  3. Environmental Sensor Networks: A revolution in the earth system science?

    NASA Astrophysics Data System (ADS)

    Hart, Jane K.; Martinez, Kirk

    2006-10-01

Environmental Sensor Networks (ESNs) facilitate the study of fundamental processes and the development of hazard response systems. They have evolved from passive logging systems that require manual downloading, into 'intelligent' sensor networks that comprise a network of automatic sensor nodes and communications systems which actively communicate their data to a Sensor Network Server (SNS) where these data can be integrated with other environmental datasets. The sensor nodes can be fixed or mobile and range in scale appropriate to the environment being sensed. ESNs range in scale and function and we have reviewed over 50 representative examples. Large Scale Single Function Networks tend to use large single purpose nodes to cover a wide geographical area. Localised Multifunction Sensor Networks typically monitor a small area in more detail, often with wireless ad-hoc systems. Biosensor Networks use emerging biotechnologies to monitor environmental processes as well as developing proxies for immediate use. In the future, sensor networks will integrate these three elements (Heterogeneous Sensor Networks). The communications system and data storage and integration (cyberinfrastructure) aspects of ESNs are discussed, along with current challenges which need to be addressed. We argue that Environmental Sensor Networks will become a standard research tool for future Earth System and Environmental Science. Not only do they provide a 'virtual' connection with the environment, they allow new field and conceptual approaches to the study of environmental processes to be developed. We suggest that although technological advances have facilitated these changes, it is vital that Earth Systems and Environmental Scientists utilise them.

  4. A Comparison of Geographic Information Systems, Complex Networks, and Other Models for Analyzing Transportation Network Topologies

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia (Technical Monitor); Kuby, Michael; Tierney, Sean; Roberts, Tyler; Upchurch, Christopher

    2005-01-01

    This report reviews six classes of models that are used for studying transportation network topologies. The report is motivated by two main questions. First, what can the "new science" of complex networks (scale-free, small-world networks) contribute to our understanding of transport network structure, compared to more traditional methods? Second, how can geographic information systems (GIS) contribute to studying transport networks? The report defines terms that can be used to classify different kinds of models by their function, composition, mechanism, spatial and temporal dimensions, certainty, linearity, and resolution. Six broad classes of models for analyzing transport network topologies are then explored: GIS; static graph theory; complex networks; mathematical programming; simulation; and agent-based modeling. Each class of models is defined and classified according to the attributes introduced earlier. The paper identifies some typical types of research questions about network structure that have been addressed by each class of model in the literature.
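Static graph theory, the second class of models above, characterizes transport topologies with simple indices. As a rough illustration (not taken from the report), the sketch below computes two classic transport-geography measures, the beta index (edges per node) and a gamma-style edge-density ratio, for a toy hub-and-spoke network:

```python
def topology_indices(nodes, edges):
    """Classic graph-theoretic measures used in transport network analysis.

    beta: edges per node; gamma: ratio of actual edges to the maximum
    possible in an undirected graph, e / (n*(n-1)/2).
    """
    n, e = len(nodes), len(edges)
    degree = {v: 0 for v in nodes}
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return {
        "beta": e / n,
        "gamma": e / (n * (n - 1) / 2),
        "max_degree_node": max(degree, key=degree.get),
    }

# A toy hub-and-spoke airline network: hub H connected to four spokes,
# plus one direct spoke-to-spoke link.
nodes = ["H", "A", "B", "C", "D"]
edges = [("H", "A"), ("H", "B"), ("H", "C"), ("H", "D"), ("A", "B")]
print(topology_indices(nodes, edges))
```

The hub is identified immediately from the degree distribution; complex-network analyses of scale-free transport systems start from exactly this kind of degree accounting.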

  5. Research of home networking system based on XML/BACnet

    NASA Astrophysics Data System (ADS)

    Wang, Zhongming

    2008-11-01

To standardize home networking information and simplify its management, this paper forms a universal information model of the various devices in a home network by adopting XML technology and the BACnet protocol (XML/BACnet). A software architecture for home networking based on this model is then designed, providing automatic management and maintenance, safety, real-time operation, and remote control. A home networking system based on this architecture was then implemented. Testing and evaluation show that the system is easy to use and to implement, performs well in real time, and handles heterogeneous devices safely and stably.

  6. DebtRank-transparency: Controlling systemic risk in financial networks

    PubMed Central

    Thurner, Stefan; Poledna, Sebastian

    2013-01-01

    Nodes in a financial network, such as banks, cannot assess the true risks associated with lending to other nodes in the network, unless they have full information on the riskiness of all other nodes. These risks can be estimated by using network metrics (as DebtRank) of the interbank liability network. With a simple agent based model we show that systemic risk in financial networks can be drastically reduced by increasing transparency, i.e. making the DebtRank of individual banks visible to others, and by imposing a rule, that reduces interbank borrowing from systemically risky nodes. This scheme does not reduce the efficiency of the financial network, but fosters a more homogeneous risk-distribution within the system in a self-organized critical way. The reduction of systemic risk is due to a massive reduction of cascading failures in the transparent system. A regulation-policy implementation of the proposed scheme is discussed. PMID:23712454
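The DebtRank metric referred to above propagates distress through the interbank liability network, with each node propagating at most once. The sketch below is a simplified single-propagation variant in the spirit of Battiston et al., not the authors' agent-based model; the matrix `W` and values `v` are made-up toy data.

```python
def debt_rank(W, v, stressed, psi=1.0):
    """Simplified DebtRank: fraction of total economic value potentially
    affected when `stressed` nodes receive an initial shock `psi`.
    W[i][j] is the impact of i's distress on j; v[j] is the relative
    economic value of node j. Each node propagates distress once."""
    n = len(v)
    h = [psi if i in stressed else 0.0 for i in range(n)]
    active = set(stressed)   # distressed nodes yet to propagate
    done = set()             # nodes that have already propagated
    while active:
        nxt = set()
        new_h = h[:]
        for i in active:
            for j in range(n):
                if j not in done and j not in active and W[i][j] > 0:
                    new_h[j] = min(1.0, new_h[j] + W[i][j] * h[i])
                    nxt.add(j)
        done |= active
        h = new_h
        active = {j for j in nxt if h[j] > 0}
    shock = sum(h[j] * v[j] for j in range(n))
    initial = sum(psi * v[i] for i in stressed)
    return shock - initial

# Three banks: bank 0's distress impacts bank 1, bank 1's impacts bank 2.
W = [[0, 0.5, 0], [0, 0, 0.4], [0, 0, 0]]
v = [1 / 3, 1 / 3, 1 / 3]
print(debt_rank(W, v, stressed={0}))
```

Making such a value visible to all nodes is the transparency mechanism the abstract argues reduces cascading failures.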

  7. The Network Information Management System (NIMS) in the Deep Space Network

    NASA Technical Reports Server (NTRS)

    Wales, K. J.

    1983-01-01

In an effort to better manage the enormous amounts of administrative, engineering, and management data distributed worldwide, a study was conducted that identified the need for a network support system. The Network Information Management System (NIMS) will provide the Deep Space Network with an easily accessible source of valid information to support management activities and a more cost-effective method of acquiring, maintaining, and retrieving data.

  8. Index : A Rule Based Expert System For Computer Network Maintenance

    NASA Astrophysics Data System (ADS)

    Chaganty, Srinivas; Pitchai, Anandhi; Morgan, Thomas W.

    1988-03-01

Communications is an expert-intensive discipline. The application of expert systems to the maintenance of large and complex networks, mainly as an aid in troubleshooting, can simplify the task of network management. The important steps involved in troubleshooting are fault detection, fault reporting, fault interpretation, and fault isolation. At present, network maintenance facilities are capable of detecting and reporting faults to network personnel. Fault interpretation, the next step in the process, involves determining the reasons for the failure. Fault interpretation can be characterized in two ways. First, it involves such a diversity of facts that it is difficult to predict. Second, it embodies a wealth of knowledge held by network management personnel. The application of expert systems to these interpretive tasks is an important step towards automation of network maintenance. In this paper, INDEX (Intelligent Network Diagnosis Expediter), a rule-based production system for computer network alarm interpretation, is described. It acts as an intelligent filter for people analyzing network alarms. INDEX analyzes the alarms in the network and identifies the proper maintenance action to be taken. The important feature of this production system is that it is data driven. Working memory is the principal data repository of production systems and its contents represent the current state of the problem. Control is based upon which productions match the constantly changing working memory elements. The prototype is implemented in OPS83. Major issues in rule-based system development, such as rule base organization, implementation, and efficiency, are discussed.
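The data-driven control described above can be sketched as a minimal forward-chaining production system: rules fire whenever their conditions match working memory, and firing adds new facts that may enable further rules. The rules here are hypothetical alarm-interpretation examples in the spirit of INDEX, not rules from the actual system.

```python
def forward_chain(working_memory, rules):
    """Data-driven forward chaining: repeatedly fire any rule whose
    conditions are all present in working memory, adding its conclusion,
    until no rule produces a new fact."""
    wm = set(working_memory)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= wm and conclusion not in wm:
                wm.add(conclusion)
                changed = True
    return wm

# Hypothetical alarm-interpretation rules: (conditions, conclusion).
rules = [
    ({"link_down", "no_carrier"}, "suspect_cable_fault"),
    ({"suspect_cable_fault", "remote_reachable_via_backup"},
     "dispatch_field_tech"),
]
alarms = {"link_down", "no_carrier", "remote_reachable_via_backup"}
print(sorted(forward_chain(alarms, rules)))
```

Note how the second rule can only fire after the first has extended working memory, which is exactly the "control follows the data" behaviour the abstract describes.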

  9. Neural Network Based Intelligent Sootblowing System

    SciTech Connect

    Mark Rhode

    2005-04-01

… particulate matter is also a by-product of coal combustion. Modern-day utility boilers are usually fitted with electrostatic precipitators to aid in the collection of particulate matter. Although extremely efficient, these devices are sensitive to rapid changes in inlet mass concentration as well as total mass loading. Traditionally, utility boilers are equipped with devices known as sootblowers, which use steam, water, or air to dislodge deposits and clean the surfaces within the boiler, and which are operated based upon established rules or operator judgment. Poor sootblowing regimes can influence particulate mass loading to the electrostatic precipitators. The project applied a neural network intelligent sootblowing system in conjunction with state-of-the-art controls and instruments to optimize the operation of a utility boiler and systematically control boiler slagging/fouling. This optimization targeted a 30% reduction in NOx, a 2% improvement in efficiency, and a 5% reduction in opacity. The neural network system proved to be a non-invasive system which can readily be adapted to virtually any utility boiler. Specific conclusions from this neural network application are listed below. These conclusions should be used in conjunction with the specific details provided in the technical discussions of this report to develop a thorough understanding of the process.

  10. System Identification of X-33 Neural Network

    NASA Technical Reports Server (NTRS)

    Aggarwal, Shiv

    2003-01-01

The present attempt, as a start, focuses only on the entry phase. Since the main engine remains cut off in this phase, there is no thrust acting on the system. This considerably simplifies the equations of motion. We introduce another simplification by assuming the system to be linear after some non-linearities are removed analytically. Under these assumptions, the problem could be solved with classical statistics by employing a least-squares approach. Instead we chose to use the neural network method. This method has many advantages: it is modern, more efficient, and can be adapted to work even when the assumptions are relaxed. In fact, neural networks attempt to model the human brain and are capable of pattern recognition.

  11. A Linux cluster for between-pulse magnetic equilibrium reconstructions and other processor bound analyses

    SciTech Connect

    Peng, Q.; Groebner, R. J.; Lao, L. L.; Schachter, J.; Schissel, D. P.; Wade, M. R.

    2001-08-01

    A 12-processor Linux PC cluster has been installed to perform between-pulse magnetic equilibrium reconstructions during tokamak operations using the EFIT code written in FORTRAN. The MPICH package implementing message passing interface is employed by EFIT for data distribution and communication. The new system calculates equilibria eight times faster than the previous system yielding a complete equilibrium time history on a 25 ms time scale 4 min after the pulse ends. A graphical interface is provided for users to control the time resolution and the type of EFITs. The next analysis to benefit from the cluster is CERQUICK written in IDL for ion temperature profile analysis. The plan is to expand the cluster so that a full profile analysis (Te, Ti, ne, Vr, Zeff) can be made available between pulses, which lays the ground work for Kinetic EFIT and/or ONETWO power balance analyses.
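The speedup above comes from distributing independent equilibrium time slices across the cluster's processors. A minimal sketch of that partitioning idea (the round-robin scheme and numbers here are illustrative, not the actual MPICH data-distribution code used by EFIT):

```python
def partition_timeslices(t_start_ms, t_end_ms, dt_ms, nproc):
    """Round-robin assignment of equilibrium time slices to processors,
    mimicking how a master rank might distribute independent
    reconstructions over MPI worker ranks."""
    times = list(range(t_start_ms, t_end_ms + 1, dt_ms))
    return {rank: times[rank::nproc] for rank in range(nproc)}

# A 5-second pulse analysed every 25 ms on a 12-processor cluster.
work = partition_timeslices(0, 5000, 25, 12)
print(len(work[0]), len(work[11]))  # slices assigned to ranks 0 and 11
```

Because each EFIT reconstruction is independent, this embarrassingly parallel split is what allows a complete 25 ms time history to be finished minutes after the pulse ends.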

  12. A performance data network for solar process heat systems

    SciTech Connect

    Barker, G.; Hale, M.J.

    1996-03-01

A solar process heat (SPH) data network has been developed to access remote-site performance data from operational solar heat systems. Each SPH system in the data network is outfitted with monitoring equipment and a datalogger. The datalogger is accessed via modem from the data network computer at the National Renewable Energy Laboratory (NREL). The dataloggers collect both ten-minute and hourly data and download it to the data network every 24 hours for archiving, processing, and plotting. The system data collected includes energy delivered (fluid temperatures and flow rates) and site meteorological conditions, such as solar insolation and ambient temperature. The SPH performance data network was created for collecting performance data from SPH systems that are serving in industrial applications or from systems using technologies that show promise for industrial applications. The network will be used to identify areas of SPH technology needing further development, to correlate computer models with actual performance, and to improve the credibility of SPH technology. The SPH data network also provides a centralized bank of user-friendly performance data that will give prospective SPH users an indication of how actual systems perform. There are currently three systems being monitored and archived under the SPH data network: two are parabolic trough systems and the third is a flat-plate system. The two trough systems both heat water for prisons; the hot water is used for personal hygiene, kitchen operations, and laundry. The flat-plate system heats water for meat processing at a slaughterhouse. We plan to connect another parabolic trough system to the network during the first months of 1996. We continue to look for good examples of systems using other types of collector technologies and systems serving new applications (such as absorption chilling) to include in the SPH performance data network.

  13. A neural network hybrid expert system

    SciTech Connect

    Goulding, J.R. . Dept. of Mechanical Engineering)

    1991-01-01

When knowledge-based expert rules, equations, and proprietary languages extend Computer Aided Design and Computer Aided Manufacturing (CAD/CAM) software, previously designed mechanisms can be scaled to satisfy new design requirements in the shortest time. However, embedded design alternatives needed by design engineers during the product conception and rework stages are lacking, and an operator is required who has a thorough understanding of the intended design and the how-to expertise needed to create and optimize the mechanisms. By applying neural network technology to build an expert system, a robust design supervisor system emerged which automated the embedded intellectual operations (e.g. questioning, identifying, selecting, and coordinating the design process) to (1) select the best mechanisms necessary to design a power transmission gearbox from proven solutions; (2) aid the inexperienced operator in developing complex design solutions; and (3) provide design alternatives which add back-to-the-drawing-board capabilities to knowledge-based mechanical CAD/CAM software programs. 15 refs., 2 figs.

  14. A Novel Characterization of Amalgamated Networks in Natural Systems

    PubMed Central

    Barranca, Victor J.; Zhou, Douglas; Cai, David

    2015-01-01

    Densely-connected networks are prominent among natural systems, exhibiting structural characteristics often optimized for biological function. To reveal such features in highly-connected networks, we introduce a new network characterization determined by a decomposition of network-connectivity into low-rank and sparse components. Based on these components, we discover a new class of networks we define as amalgamated networks, which exhibit large functional groups and dense connectivity. Analyzing recent experimental findings on cerebral cortex, food-web, and gene regulatory networks, we establish the unique importance of amalgamated networks in fostering biologically advantageous properties, including rapid communication among nodes, structural stability under attacks, and separation of network activity into distinct functional modules. We further observe that our network characterization is scalable with network size and connectivity, thereby identifying robust features significant to diverse physical systems, which are typically undetectable by conventional characterizations of connectivity. We expect that studying the amalgamation properties of biological networks may offer new insights into understanding their structure-function relationships. PMID:26035066
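The decomposition of connectivity into low-rank and sparse components can be illustrated with a crude rank-1 version: power iteration extracts one dominant "functional group", and whatever it fails to explain is left in a sparse residual. This is only a sketch of the idea under that rank-1 assumption, not the paper's characterization.

```python
def rank1_plus_sparse(A, iters=100):
    """Split a connectivity matrix into a rank-1 ('low-rank') part,
    capturing one large densely-connected group, plus a residual.
    The top singular pair is found by power iteration on A^T A."""
    n = len(A)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]  # A v
        v = [sum(A[i][j] * w[i] for i in range(n)) for j in range(n)]  # A^T w
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    low_rank = [[w[i] * v[j] for j in range(n)] for i in range(n)]
    sparse = [[A[i][j] - low_rank[i][j] for j in range(n)] for i in range(n)]
    return low_rank, sparse

# Two densely coupled nodes (a functional group) plus one isolated link.
A = [[1.0, 1.0, 0.0],
     [1.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
L, S = rank1_plus_sparse(A)
```

Here the rank-1 part recovers the 2-node group exactly, and the single remaining connection lands in the sparse residual, which is the qualitative signature of an amalgamated network.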

  15. Network anomaly detection system with optimized DS evidence theory.

    PubMed

    Liu, Yuan; Wang, Xiaofeng; Liu, Kaiyu

    2014-01-01

Network anomaly detection has attracted increasing attention with the rapid development of computer networks. Some researchers have utilized fusion methods and DS evidence theory for network anomaly detection, but with low performance, and without accounting for the complicated and varied nature of networks. To achieve a high detection rate, we present a novel network anomaly detection system with optimized Dempster-Shafer evidence theory (ODS) and a regression basic probability assignment (RBPA) function. In this model, we add a weight for each sensor to optimize DS evidence theory according to its previous prediction accuracy, and RBPA employs each sensor's regression ability to address complex networks. In four kinds of experiments, we find that our novel network anomaly detection model has a better detection rate, and that the RBPA and ODS optimization methods improve system performance significantly. PMID:25254258
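At the core of any DS-evidence-based fusion scheme is Dempster's rule of combination, which merges basic probability assignments from independent sensors and renormalises away conflicting mass. The sketch below shows the standard rule on made-up detector outputs; it is the textbook rule, not the paper's ODS/RBPA variant.

```python
def dempster_combine(m1, m2):
    """Combine two basic probability assignments (BPAs) over the same
    frame of discernment with Dempster's rule. Hypotheses are frozensets;
    mass assigned to empty (conflicting) intersections is renormalised
    away."""
    combined = {}
    conflict = 0.0
    for h1, v1 in m1.items():
        for h2, v2 in m2.items():
            inter = h1 & h2
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2
    k = 1.0 - conflict
    return {h: v / k for h, v in combined.items()}

N, A = frozenset({"normal"}), frozenset({"anomaly"})
theta = N | A  # the full frame: "don't know"
sensor1 = {A: 0.6, theta: 0.4}           # one detector's evidence
sensor2 = {A: 0.7, N: 0.1, theta: 0.2}   # a second detector's evidence
fused = dempster_combine(sensor1, sensor2)
print(fused[A])
```

Two detectors that each weakly suspect an anomaly combine into a much stronger belief, which is the behaviour a weighted ODS scheme then tunes per-sensor.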

  16. Advanced information processing system: Input/output network management software

    NASA Technical Reports Server (NTRS)

    Nagle, Gail; Alger, Linda; Kemp, Alexander

    1988-01-01

The purpose of this document is to provide the software requirements and specifications for the Input/Output Network Management Services of the Advanced Information Processing System (AIPS). This introduction and overview section briefly outlines the overall architecture and software requirements of the AIPS system before discussing the details of the design requirements and specifications of the AIPS I/O Network Management software. A brief overview of the AIPS architecture is followed by a more detailed description of the network architecture.

  17. The architecture of a network level intrusion detection system

    SciTech Connect

    Heady, R.; Luger, G.; Maccabe, A.; Servilla, M.

    1990-08-15

    This paper presents the preliminary architecture of a network level intrusion detection system. The proposed system will monitor base level information in network packets (source, destination, packet size, and time), learning the normal patterns and announcing anomalies as they occur. The goal of this research is to determine the applicability of current intrusion detection technology to the detection of network level intrusions. In particular, the authors are investigating the possibility of using this technology to detect and react to worm programs.
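"Learning the normal patterns and announcing anomalies" over base-level packet features can be sketched as a simple statistical baseline: learn the distribution of a feature such as packet size, then flag observations many standard deviations away. This is a toy illustration of the anomaly-detection principle, not the proposed system's architecture.

```python
import statistics

class PacketBaseline:
    """Learn normal packet-size behaviour, then flag observations that
    fall far outside the learned distribution."""

    def __init__(self, threshold=3.0):
        self.threshold = threshold  # deviation limit, in std deviations
        self.sizes = []

    def learn(self, size):
        self.sizes.append(size)

    def is_anomalous(self, size):
        mu = statistics.mean(self.sizes)
        sigma = statistics.pstdev(self.sizes) or 1.0
        return abs(size - mu) / sigma > self.threshold

ids = PacketBaseline()
for s in [60, 64, 60, 62, 66, 64, 60, 62]:  # typical small control packets
    ids.learn(s)
print(ids.is_anomalous(1500), ids.is_anomalous(62))
```

A sudden burst of maximum-size packets on a link that normally carries small control traffic (the kind of signature a propagating worm might leave) stands out immediately, while ordinary traffic does not.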

  18. Observing Arctic Ecology using Networked Infomechanical Systems

    NASA Astrophysics Data System (ADS)

    Healey, N. C.; Oberbauer, S. F.; Hollister, R. D.; Tweedie, C. E.; Welker, J. M.; Gould, W. A.

    2012-12-01

Understanding ecological dynamics is important for investigation into the potential impacts of climate change in the Arctic. Established in the early 1990s, the International Tundra Experiment (ITEX) began observational inquiry of plant phenology, plant growth, community composition, and ecosystem properties as part of a greater effort to study changes across the Arctic. Unfortunately, these observations are labor intensive and time consuming, greatly limiting their frequency and spatial coverage. We have expanded the capability of ITEX to analyze ecological phenomena with improved spatial and temporal resolution through the use of Networked Infomechanical Systems (NIMS) as part of the Arctic Observing Network (AON) program. The systems exhibit customizable infrastructure that supports a high level of versatility in sensor arrays in combination with information technology that allows for adaptable configurations to numerous environmental observation applications. We observe stereo and static time-lapse photography, air and surface temperature, incoming and outgoing long and short wave radiation, net radiation, and hyperspectral reflectance that provides critical information to understanding how vegetation in the Arctic is responding to ambient climate conditions. These measurements are conducted concurrent with ongoing manual measurements using ITEX protocols. Our NIMS travels at a rate of three centimeters per second while suspended on steel cables that are ~1 m from the surface spanning transects ~50 m in length. The transects are located to span soil moisture gradients across a variety of land cover types including dry heath, moist acidic tussock tundra, shrub tundra, wet meadows, dry meadows, and water tracks. We have deployed NIMS at four locations on the North Slope of Alaska, USA associated with 1 km2 ARCSS vegetation study grids including Barrow, Atqasuk, Toolik Lake, and Imnavait Creek. A fifth system has been deployed in Thule, Greenland beginning in

  19. The deep space network, volume 18. [Deep Space Instrumentation Facility, Ground Communication Facility, and Network Control System

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The objectives, functions, and organization of the Deep Space Network are summarized. The Deep Space Instrumentation Facility, the Ground Communications Facility, and the Network Control System are described.

  20. An efficient management system for wireless sensor networks.

    PubMed

    Ma, Yi-Wei; Chen, Jiann-Liang; Huang, Yueh-Min; Lee, Mei-Yu

    2010-01-01

Wireless sensor networks have garnered considerable attention recently. Networks typically have many sensor nodes, and are used in commercial, medical, scientific, and military applications for sensing and monitoring the physical world. Many researchers have attempted to improve wireless sensor network management efficiency. A Simple Network Management Protocol (SNMP)-based sensor network management system was developed as a convenient and effective way for managers to monitor and control sensor network operations. This paper proposes a novel WSNManagement system that can display the connection states and relationships among sensor nodes and can be used for monitoring, collecting, and analyzing information obtained by wireless sensor networks. The proposed network management system uses the collected information for system configuration. Its performance-analysis function facilitates convenient management of sensors. Experimental results show that the proposed method improves the overall alive rate of sensor nodes, reduces the packet loss rate by roughly 5%, and reduces delay time by roughly 0.2 seconds. Performance analysis demonstrates that the proposed system is effective for wireless sensor network management. PMID:22163534
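The alive rate and packet loss rate reported above are straightforward to compute from per-node status reports, as the sketch below shows. The report format here is invented for illustration and is not the WSNManagement system's actual data model.

```python
def wsn_health(reports):
    """Summarise management statistics from per-node status reports.

    reports: {node_id: {"alive": bool, "sent": int, "received": int}}
    Returns the node alive rate and the network-wide packet loss rate.
    """
    alive = sum(1 for r in reports.values() if r["alive"])
    sent = sum(r["sent"] for r in reports.values())
    received = sum(r["received"] for r in reports.values())
    return {
        "alive_rate": alive / len(reports),
        "packet_loss_rate": 1 - received / sent if sent else 0.0,
    }

reports = {
    "n1": {"alive": True,  "sent": 200, "received": 190},
    "n2": {"alive": True,  "sent": 100, "received": 95},
    "n3": {"alive": False, "sent": 50,  "received": 20},
}
print(wsn_health(reports))
```

A management front end would poll these counters (e.g. via SNMP) and recompute the summary each collection cycle.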

  1. Vein matching using artificial neural network in vein authentication systems

    NASA Astrophysics Data System (ADS)

    Noori Hoshyar, Azadeh; Sulaiman, Riza

    2011-10-01

Personal identification technology for security systems is developing rapidly. Traditional authentication modes such as keys, passwords, and cards are not safe enough because they can be stolen or easily forgotten. Biometrics, as a developed technology, has been applied to a wide range of systems. According to various researchers, the vein is a good candidate among biometric traits such as fingerprint, hand geometry, voice, and DNA for authentication systems. Vein authentication systems can be designed with different methodologies, all of which include a matching stage that is critical for the final verification of the system. A neural network is an effective methodology for matching and recognizing individuals in authentication systems. This paper therefore explains and implements a neural network methodology for a finger vein authentication system. The network is trained in Matlab to match the vein features of the authentication system. The network simulation shows a matching accuracy of 95%, which is good performance for authentication system matching.
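A matching stage of this kind can be reduced to its simplest form: a single trained neuron that maps the difference between two feature vectors to a same-person probability. The sketch below is a toy stand-in for the paper's Matlab network; the feature vectors, pairs, and training scheme are all invented for illustration.

```python
import math

def train_matcher(pairs, labels, epochs=500, lr=0.5):
    """Train a single-neuron (logistic) matcher: input is the absolute
    difference between two vein feature vectors, output the probability
    they belong to the same person."""
    dim = len(pairs[0][0])
    w = [0.0] * dim
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(pairs, labels):
            d = [abs(a - c) for a, c in zip(x1, x2)]
            z = sum(wi * di for wi, di in zip(w, d)) + bias
            p = 1 / (1 + math.exp(-z))
            g = p - y  # cross-entropy gradient
            w = [wi - lr * g * di for wi, di in zip(w, d)]
            bias -= lr * g
    return w, bias

def match(w, bias, x1, x2):
    d = [abs(a - c) for a, c in zip(x1, x2)]
    return 1 / (1 + math.exp(-(sum(wi * di for wi, di in zip(w, d)) + bias)))

# Toy vein feature vectors: same-person pairs differ only slightly.
pairs = [([0.9, 0.1], [0.88, 0.12]), ([0.2, 0.8], [0.21, 0.79]),
         ([0.9, 0.1], [0.2, 0.8]), ([0.1, 0.9], [0.85, 0.15])]
labels = [1, 1, 0, 0]
w, bias = train_matcher(pairs, labels)
```

After training, small feature differences score as matches and large ones as non-matches, which is the decision the matching stage feeds to final verification.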

  2. Decentralised H∞ filtering of networked control systems: a jump system approach

    NASA Astrophysics Data System (ADS)

    Al-Radhawi, Muhammad Ali; Bettayeb, Maamar

    2014-10-01

We consider the problem of decentralised estimation of discrete-time interconnected systems with local estimators communicating with their subsystems over lossy communication channels. Assuming that the packet losses follow the Gilbert-Elliot model, the networked estimation problem can be formulated in a Markovian jump linear system framework. Modelling subsystem interactions as uncertainties satisfying sum quadratic constraints, we design mode-dependent decentralised H∞ estimators that robustly stabilise the estimator system and guarantee a given disturbance attenuation level. The estimation gains are derived with necessary and sufficient rank-constrained linear matrix inequality conditions. Results are also provided for local mode-dependent estimators. Estimator synthesis is done using a cone-complementarity linearisation algorithm for the rank constraints. The results are illustrated via an example.

  3. Encouraging Autonomy through the Use of a Social Networking System

    ERIC Educational Resources Information Center

    Leis, Adrian

    2014-01-01

    The use of social networking systems has enabled communication to occur around the globe almost instantly, with news about various events being spread around the world as they happen. There has also been much interest in the benefits and disadvantages the use of such social networking systems may bring for education. This paper reports on the use…

  4. Self-organization of complex networks as a dynamical system

    NASA Astrophysics Data System (ADS)

    Aoki, Takaaki; Yawata, Koichiro; Aoyagi, Toshio

    2015-01-01

    To understand the dynamics of real-world networks, we investigate a mathematical model of the interplay between the dynamics of random walkers on a weighted network and the link weights driven by a resource carried by the walkers. Our numerical studies reveal that, under suitable conditions, the co-evolving dynamics lead to the emergence of stationary power-law distributions of the resource and link weights, while the resource quantity at each node ceaselessly changes with time. We analyze the network organization as a deterministic dynamical system and find that the system exhibits multistability, with numerous fixed points, limit cycles, and chaotic states. The chaotic behavior of the system leads to the continual changes in the microscopic network dynamics in the absence of any external random noises. We conclude that the intrinsic interplay between the states of the nodes and network reformation constitutes a major factor in the vicissitudes of real-world networks.
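The interplay the abstract describes, walkers preferring heavy links while the resource they carry reinforces those same links, can be caricatured in a few lines. The update rule below is a guess at the flavour of such co-evolving dynamics, not the paper's actual model equations.

```python
import random

def co_evolve(weights, resource, steps=1000, delta=0.01, seed=1):
    """Toy co-evolution of random walkers and link weights on a weighted
    network: a walker at node i moves to j with probability proportional
    to weights[i][j], carrying a fraction `delta` of i's resource, and
    the traversed link is reinforced by the amount carried."""
    rng = random.Random(seed)
    n = len(weights)
    for _ in range(steps):
        i = rng.randrange(n)
        total = sum(weights[i])
        r = rng.random() * total
        j = 0
        while r > weights[i][j]:
            r -= weights[i][j]
            j += 1
        carried = delta * resource[i]
        resource[i] -= carried
        resource[j] += carried
        weights[i][j] += carried
    return weights, resource

# A 3-node fully connected network with uniform initial weights/resource.
w, res = co_evolve([[0, 1, 1], [1, 0, 1], [1, 1, 0]], [1.0, 1.0, 1.0])
```

The total resource is conserved while the link weights keep growing along favoured paths, the positive-feedback mechanism behind the heterogeneous stationary distributions the authors report.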

  5. CFDP for Interplanetary Overlay Network

    NASA Technical Reports Server (NTRS)

    Burleigh, Scott C.

    2011-01-01

The CCSDS (Consultative Committee for Space Data Systems) File Delivery Protocol for Interplanetary Overlay Network (CFDP-ION) is an implementation of CFDP that uses ION's DTN (Delay-Tolerant Networking) implementation as its UT (unit-data transfer) layer. Because the DTN protocols effect automatic, reliable transmission via multiple relays, CFDP-ION need only satisfy the requirements for Class 1 ("unacknowledged") CFDP. This keeps the implementation small, but without loss of capability. This innovation minimizes processing resources by using zero-copy objects for file data transmission. It runs without modification in VxWorks, Linux, Solaris, and OS/X. As such, this innovation can be used without modification in both flight and ground systems. Integration with DTN enables the CFDP implementation itself to be very simple, and therefore very small. Use of ION infrastructure minimizes consumption of storage and processing resources while maximizing safety.

  6. Wide area network monitoring system for HEP experiments at Fermilab

    SciTech Connect

    Grigoriev, Maxim; Cottrell, Les; Logg, Connie; /SLAC

    2004-12-01

    Large, distributed High Energy Physics (HEP) collaborations, such as D0, CDF, and US-CMS, depend on stable and robust network paths between major world research centers. The evolving emphasis on data and compute Grids increases the reliance on network performance. Fermilab's experimental groups and network support personnel identified a critical need for WAN monitoring to ensure the quality and efficient utilization of such network paths, which led to the development of the network monitoring system presented in this paper. The system evolved from the IEPM-BW project, started at SLAC three years ago, and has developed at Fermilab into a fully functional infrastructure with bidirectional active network probes and path characterizations. It is based on the Iperf achievable-throughput tool, with Ping and Synack to test ICMP/TCP connectivity. It uses Pipechar and Traceroute to test, compare, and report hop-by-hop network path characterizations, and it measures real file transfer performance with BBFTP and GridFTP. The monitoring system has an extensive web interface, and all the data are available through standalone SOAP web services or via a MonALISA client. We also present a case study of network path asymmetry and abnormal performance between FNAL and SDSC, discovered and resolved using the monitoring system.
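
A probe harness of this kind spends much of its effort parsing the output of the underlying tools. As a small illustrative example (not code from the IEPM-BW system), the helper below extracts the min/avg/max/mdev summary line that iputils ping prints:

```python
import re

def parse_ping_rtt(output: str):
    """Parse the summary line of Linux (iputils) ping output into
    min/avg/max/mdev floats in milliseconds; return None if the run
    produced no summary (e.g., total packet loss)."""
    m = re.search(r"rtt min/avg/max/mdev = "
                  r"([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+) ms", output)
    if not m:
        return None
    keys = ("min", "avg", "max", "mdev")
    return dict(zip(keys, (float(g) for g in m.groups())))
```

A monitoring loop would run `ping -c N host`, feed the captured stdout through a parser like this, and archive the resulting statistics for the web interface and SOAP services to serve.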

  8. Scalable Hierarchical Network Management System for Displaying Network Information in Three Dimensions

    NASA Technical Reports Server (NTRS)

    George, Jude (Inventor); Schlecht, Leslie (Inventor); McCabe, James D. (Inventor); LeKashman, John Jr. (Inventor)

    1998-01-01

    A network management system has SNMP agents distributed at one or more sites, an input output module at each site, and a server module located at a selected site for communicating with input output modules, each of which is configured for both SNMP and HNMP communications. The server module is configured exclusively for HNMP communications, and it communicates with each input output module according to the HNMP. Non-iconified, informationally complete views are provided of network elements to aid in network management.

  9. Network theory and its applications in economic systems

    NASA Astrophysics Data System (ADS)

    Huang, Xuqing

    This dissertation covers the two major parts of my Ph.D. research: i) developing a theoretical framework for complex networks; and ii) applying complex-network models to quantitatively analyze economic systems. Part I focuses on theories of interdependent networks and includes two chapters: 1) We develop a mathematical framework to study the percolation of interdependent networks under targeted attack and find that when the highly connected nodes are protected and have a lower probability of failure, coupled scale-free (SF) networks are significantly more vulnerable, with a percolation threshold pc significantly larger than zero, in contrast to single SF networks where pc = 0. 2) We analytically demonstrate that clustering, which quantifies the propensity for two neighbors of the same vertex to also be neighbors of each other, significantly increases the vulnerability of the system. In Part II, we apply complex-network models to economic systems, again in two chapters: 1) We study the US corporate governance network, in which nodes represent directors and links between two directors represent their service on common company boards, and propose a quantitative measure of information and influence transformation in the network, enabling us to identify the most influential directors. 2) We propose a bipartite network model to simulate the risk propagation process among commercial banks during a financial crisis. With empirical bank balance-sheet data from 2007 as input to the model, we find that the model efficiently identifies a significant portion of the banks that actually failed, as reported by the Federal Deposit Insurance Corporation during the financial crisis between 2008 and 2011. The results suggest that complex-network models could be useful for systemic-risk stress testing of financial systems. The model also identifies that commercial rather than residential real estate assets are major culprits for the
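
The flavor of targeted-attack percolation can be illustrated on a single network: remove the highest-degree nodes and measure the surviving giant component. The sketch below is a stdlib-only toy (`targeted_attack` and `giant_component_size` are hypothetical helpers) and omits the interdependent-coupling cascade that drives the dissertation's main result:

```python
import collections

def giant_component_size(alive, adj):
    """Size of the largest connected component among `alive` nodes, by BFS."""
    seen, best = set(), 0
    for s in alive:
        if s in seen:
            continue
        queue, comp = collections.deque([s]), 0
        seen.add(s)
        while queue:
            u = queue.popleft()
            comp += 1
            for v in adj[u]:
                if v not in seen and v in alive:
                    seen.add(v)
                    queue.append(v)
        best = max(best, comp)
    return best

def targeted_attack(edges, frac):
    """Remove the top `frac` fraction of highest-degree nodes (a targeted
    attack) and return the surviving giant-component size."""
    adj = collections.defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    ranked = sorted(adj, key=lambda n: len(adj[n]), reverse=True)
    removed = set(ranked[:int(frac * len(ranked))])
    return giant_component_size(set(adj) - removed, adj)
```

On a hub-and-spoke (star) network, removing just the single highest-degree node shatters the giant component, which is the intuition behind SF networks' fragility to targeted attack; the dissertation's point is that coupling two such networks makes things worse even when hubs are protected.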

  10. Adaptive Neural Network Based Control of Noncanonical Nonlinear Systems.

    PubMed

    Zhang, Yanjun; Tao, Gang; Chen, Mou

    2016-09-01

    This paper presents a new study on the adaptive neural-network-based control of a class of noncanonical nonlinear systems with large parametric uncertainties. Unlike commonly studied canonical-form nonlinear systems, whose neural network approximation models have explicit relative degree structures that can directly be used to derive parameterized controllers for adaptation, noncanonical-form nonlinear systems usually do not have explicit relative degrees, and thus their approximation models are also in noncanonical forms. It is well known that the adaptive control of noncanonical-form nonlinear systems involves the parameterization of system dynamics; as demonstrated in this paper, the same is true for noncanonical neural network approximation models. Effective control of such systems is an open research problem, especially in the presence of uncertain parameters. This paper shows that it is necessary to reparameterize such neural network system models for adaptive control design, and that such reparameterization can be realized using a relative degree formulation, a concept yet to be studied for general neural network system models. The paper then derives parameterized controllers that guarantee closed-loop stability and asymptotic output tracking for noncanonical-form neural network system models. An illustrative example is presented with simulation results to demonstrate the control design procedure and to verify the effectiveness of the new design method. PMID:26285223
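
The role of parameterization in adaptive design can be seen in a scalar toy problem: a plant with one unknown gain, a certainty-equivalence controller, and a gradient update on the estimate. This is far simpler than the paper's noncanonical neural-network setting; the plant, regressor, and gains below are invented for illustration:

```python
import math

def adaptive_track(theta_star=2.0, gamma=0.5, steps=200):
    """Toy scalar adaptive tracking: plant y+ = theta* * phi(y) + u with
    unknown gain theta*; control u = r - theta_hat * phi(y) and gradient
    update theta_hat += gamma * e * phi(y).  A minimal illustration of a
    parameterized adaptive law, not the paper's design."""
    phi = math.tanh                      # known regressor (stand-in for an NN basis)
    y, theta_hat = 0.0, 0.0
    r = 0.5                              # constant reference to track
    errors = []
    for _ in range(steps):
        u = r - theta_hat * phi(y)       # certainty-equivalence control
        y_next = theta_star * phi(y) + u
        e = y_next - r                   # tracking error = (theta* - theta_hat) * phi(y)
        theta_hat += gamma * e * phi(y)  # gradient adaptation
        y = y_next
        errors.append(abs(e))
    return theta_hat, errors
```

Here the tracking error is linear in the parameter error, so the gradient law drives both to zero; the difficulty the paper addresses is that noncanonical NN models do not come with such a linear parameterization until they are reparameterized via a relative degree formulation.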