Science.gov

Sample records for networked linux systems

  1. Network of networks in Linux operating system

    NASA Astrophysics Data System (ADS)

    Wang, Haoqin; Chen, Zhen; Xiao, Guanping; Zheng, Zheng

    2016-04-01

    The operating system is one of the most complex man-made systems. In this paper, we analyze the Linux Operating System (LOS) as a complex network by modeling functions as nodes and function calls as edges. It is found that for the LOS network, and for the modularized components within it, the out-degree follows an exponential distribution and the in-degree follows a power-law distribution. To better understand the underlying design principles of LOS, we explore the coupling correlations of components in LOS from the aspects of topology and function. The results show that the component for device drivers has a strong manifestation in topology but a weak manifestation in function, whereas the component for process management shows the opposite behavior. Moreover, to investigate the impact of system failures on the networks, we compare the networks traced from LOS in normal and failure status. This leads to the conclusion that a failure changes the function calls that would be executed in normal status and introduces new function calls at the same time.
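
    A minimal sketch of how such a call graph can be tabulated: given one caller/callee pair per line on standard input (an assumed input format, not taken from the paper), the program below counts each function's in-degree and out-degree, the raw material for the degree distributions discussed above.

```c
/* Minimal sketch: tabulate in/out-degrees of a function-call graph.
 * Input (stdin): one "caller callee" pair per line. The input format
 * and size limits are illustrative assumptions, not from the paper. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_NODES 100000
#define NAME_LEN  128

static char names[MAX_NODES][NAME_LEN];
static int  in_deg[MAX_NODES], out_deg[MAX_NODES];
static int  n_nodes;

/* Return the index of a function name, adding it if unseen. */
static int node_id(const char *name)
{
    for (int i = 0; i < n_nodes; i++)
        if (strcmp(names[i], name) == 0)
            return i;
    if (n_nodes >= MAX_NODES) { fprintf(stderr, "too many nodes\n"); exit(1); }
    strncpy(names[n_nodes], name, NAME_LEN - 1);
    return n_nodes++;
}

int main(void)
{
    char caller[NAME_LEN], callee[NAME_LEN];

    /* Each call site becomes a directed edge caller -> callee. */
    while (scanf("%127s %127s", caller, callee) == 2) {
        out_deg[node_id(caller)]++;
        in_deg[node_id(callee)]++;
    }

    /* Print per-node degrees; histogramming these columns gives the
     * in-degree and out-degree distributions discussed above. */
    for (int i = 0; i < n_nodes; i++)
        printf("%s %d %d\n", names[i], in_deg[i], out_deg[i]);
    return 0;
}
```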

  2. Interactivity vs. fairness in networked linux systems

    SciTech Connect

    Wu, Wenji; Crawford, Matt; /Fermilab

    2007-01-01

    In general, the Linux 2.6 scheduler can ensure fairness and provide excellent interactive performance at the same time. However, our experiments and mathematical analysis have shown that the current Linux interactivity mechanism tends to incorrectly categorize non-interactive network applications as interactive, which can lead to serious fairness or starvation issues. In the extreme, a single process can unjustifiably obtain up to 95% of the CPU! The root cause lies in two facts: (1) network packets arrive at the receiver independently and discretely, and a 'relatively fast' non-interactive network process may frequently sleep while waiting for packet arrival; although each sleep lasts only a very short time, the wait-for-packet sleeps occur so frequently that they earn the process interactive status. (2) The current Linux interactivity mechanism makes it possible for a non-interactive network process to receive a high CPU share while being incorrectly categorized as 'interactive.' In this paper, we propose and test a possible solution to address this interactivity vs. fairness problem. Experimental results demonstrate the effectiveness of the proposed solution.
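
    The mechanism can be illustrated with a toy model. The constants and the run/sleep pattern below are illustrative assumptions, not the actual 2.6 kernel code; they only mimic how frequent short wait-for-packet sleeps keep a task's sleep average, and hence its interactivity bonus, high.

```c
/* Illustrative model (not the 2.6 kernel's actual code) of how frequent
 * short wait-for-packet sleeps can push a task's sleep average high
 * enough to be classified as interactive. All constants are assumed. */
#include <stdio.h>

#define MAX_SLEEP_AVG_MS   1000   /* cap on the tracked sleep average    */
#define MAX_BONUS          10     /* bonus range mapped from sleep_avg   */
#define INTERACTIVE_BONUS  7      /* threshold for "interactive" status  */

int main(void)
{
    double sleep_avg_ms = 0.0;

    /* A network receiver that alternates 2 ms of CPU work with a 3 ms
     * sleep waiting for the next packet. */
    for (int iter = 0; iter < 2000; iter++) {
        double run_ms = 2.0, sleep_ms = 3.0;

        sleep_avg_ms += sleep_ms;              /* credited while sleeping */
        if (sleep_avg_ms > MAX_SLEEP_AVG_MS)
            sleep_avg_ms = MAX_SLEEP_AVG_MS;
        sleep_avg_ms -= run_ms;                /* decays while running    */
        if (sleep_avg_ms < 0.0)
            sleep_avg_ms = 0.0;
    }

    int bonus = (int)(sleep_avg_ms * MAX_BONUS / MAX_SLEEP_AVG_MS);
    printf("sleep_avg=%.0f ms, bonus=%d -> %s\n", sleep_avg_ms, bonus,
           bonus >= INTERACTIVE_BONUS ? "classified interactive"
                                      : "classified non-interactive");
    return 0;
}
```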

  3. The Linux operating system: An introduction

    NASA Technical Reports Server (NTRS)

    Bokhari, Shahid H.

    1995-01-01

    Linux is a Unix-like operating system for Intel 386/486/Pentium based IBM-PCs and compatibles. The kernel of this operating system was written from scratch by Linus Torvalds and, although copyrighted by the author, may be freely distributed. A worldwide group has collaborated in developing Linux on the Internet. Linux can run the powerful set of compilers and programming tools of the Free Software Foundation, and XFree86, a port of the X Window System from MIT. Most capabilities associated with high performance workstations, such as networking, shared file systems, electronic mail, TeX, LaTeX, etc., are freely available for Linux. It can thus transform cheap IBM-PC compatible machines into Unix workstations with considerable capabilities. The author explains how Linux may be obtained, installed and networked. He also describes some interesting applications for Linux that are freely available. The enormous consumer market for IBM-PC compatible machines continually drives down prices of CPU chips, memory, hard disks, CDROMs, etc. Linux can convert such machines into powerful workstations that can be used for teaching, research and software development. For professionals who use Unix based workstations at work, Linux permits virtually identical working environments on their personal home machines. For cost-conscious educational institutions, Linux can create world-class computing environments from cheap, easily maintained PC clones. Finally, for university students, it provides an essentially cost-free path away from DOS into the world of Unix and X Windows.

  4. The performance analysis of linux networking - packet receiving

    SciTech Connect

    Wu, Wenji; Crawford, Matt; Bowden, Mark; /Fermilab

    2006-11-01

    The computing models for High-Energy Physics experiments are becoming ever more globally distributed and grid-based, both for technical reasons (e.g., to place computational and data resources near each other and near the demand) and for strategic reasons (e.g., to leverage equipment investments). To support such computing models, the network and end systems (computing and storage) face unprecedented challenges. One of the biggest challenges is to transfer scientific data sets--now in the multi-petabyte (10^15 bytes) range and expected to grow to exabytes within a decade--reliably and efficiently among facilities and computation centers scattered around the world. Both the network and the end systems should be able to provide the capabilities to support high-bandwidth, sustained, end-to-end data transmission. Recent trends in technology show that although the raw transmission speeds used in networks are increasing rapidly, the rate of advancement of microprocessor technology has slowed down. Therefore, network protocol-processing overheads have risen sharply in comparison with the time spent in packet transmission, resulting in degraded throughput for networked applications. More and more, it is the network end system, rather than the network itself, that is responsible for degraded performance of network applications. In this paper, the Linux system's packet receive process is studied from NIC to application. We develop a mathematical model to characterize the Linux packet receiving process, and key factors that affect the network performance of Linux systems are analyzed.
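
    The imbalance between packet arrival and protocol-processing rates can be illustrated with a toy discrete-time simulation of the receive path; this is not the paper's mathematical model, and the ring size and rates below are arbitrary assumptions.

```c
/* Toy sketch (not the paper's model) of the receive path: packets enter
 * a fixed-size ring buffer at the NIC and are drained by protocol
 * processing. When draining falls behind arrivals, the ring fills and
 * packets are dropped. All rates and sizes are illustrative. */
#include <stdio.h>

int main(void)
{
    const int ring_size       = 256;  /* NIC RX descriptor ring slots      */
    const int arrive_per_tick = 12;   /* packets arriving per tick         */
    const int drain_per_tick  = 10;   /* packets the kernel/app can absorb */
    int queued = 0;
    long delivered = 0, dropped = 0;

    for (int tick = 0; tick < 10000; tick++) {
        for (int p = 0; p < arrive_per_tick; p++) {
            if (queued < ring_size) queued++;   /* slot available */
            else dropped++;                     /* ring overflow  */
        }
        int drained = queued < drain_per_tick ? queued : drain_per_tick;
        queued -= drained;
        delivered += drained;
    }
    printf("delivered=%ld dropped=%ld (%.1f%% loss)\n", delivered, dropped,
           100.0 * dropped / (delivered + dropped));
    return 0;
}
```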

  5. Implementing Journaling in a Linux Shared Disk File System

    NASA Technical Reports Server (NTRS)

    Preslan, Kenneth W.; Barry, Andrew; Brassow, Jonathan; Cattelan, Russell; Manthei, Adam; Nygaard, Erling; VanOort, Seth; Teigland, David; Tilstra, Mike; O'Keefe, Matthew; Erickson, Grant; Agarwal, Manish

    2000-01-01

    In computer systems today, speed and responsiveness are often determined by network and storage subsystem performance. Faster, more scalable networking interfaces like Fibre Channel and Gigabit Ethernet provide the scaffolding from which higher performance computer systems implementations may be constructed, but new thinking is required about how machines interact with network-enabled storage devices. In this paper we describe how we implemented journaling in the Global File System (GFS), a shared-disk, cluster file system for Linux. Our previous three papers on GFS at the Mass Storage Symposium discussed our first three GFS implementations, their performance, and the lessons learned. Our fourth paper describes, appropriately enough, the evolution of GFS version 3 to version 4, which supports journaling and recovery from client failures. In addition, GFS scalability tests extending to 8 machines accessing 8 4-disk enclosures were conducted: these tests showed good scaling. We describe the GFS cluster infrastructure, which is necessary for proper recovery from machine and disk failures in a collection of machines sharing disks using GFS. Finally, we discuss the suitability of Linux for handling the big data requirements of supercomputing centers.

  6. Building CHAOS: An Operating System for Livermore Linux Clusters

    SciTech Connect

    Garlick, J E; Dunlap, C M

    2003-02-21

    The Livermore Computing (LC) Linux Integration and Development Project (the Linux Project) produces and supports the Clustered High Availability Operating System (CHAOS), a cluster operating environment based on Red Hat Linux. Each CHAOS release begins with a set of requirements and ends with a formally tested, packaged, and documented release suitable for use on LC's production Linux clusters. One characteristic of CHAOS is that component software packages come from different sources under varying degrees of project control. Some are developed by the Linux Project, some are developed by other LC projects, some are external open source projects, and some are commercial software packages. A challenge to the Linux Project is to adhere to release schedules and testing disciplines in a diverse, highly decentralized development environment. Communication channels are maintained for externally developed packages in order to obtain support, influence development decisions, and coordinate/understand release schedules. The Linux Project embraces open source by releasing locally developed packages under open source license, by collaborating with open source projects where mutually beneficial, and by preferring open source over proprietary software. Project members generally use open source development tools. The Linux Project requires system administrators and developers to work together to resolve problems that arise in production. This tight coupling of production and development is a key strategy for making a product that directly addresses LC's production requirements. It is another challenge to balance support and development activities in such a way that one does not overwhelm the other.

  7. Construction of a Linux based chemical and biological information system.

    PubMed

    Molnár, László; Vágó, István; Fehér, András

    2003-01-01

    A chemical and biological information system with a Web-based easy-to-use interface and corresponding databases has been developed. The constructed system incorporates all chemical, numerical and textual data related to the chemical compounds, including numerical biological screen results. Users can search the database by traditional textual/numerical and/or substructure or similarity queries through the web interface. To build our chemical database management system, we utilized existing IT components such as ORACLE or Tripos SYBYL for database management and the Zope application server for the web interface. We chose Linux as the main platform; however, almost every component can be used under various operating systems.

  8. QMP-MVIA: a message passing system for Linux clusters with gigabit Ethernet mesh connections

    SciTech Connect

    Jie Chen; W. Watson III; Robert Edwards; Weizhen Mao

    2004-09-01

    Recent progress in performance coupled with a decline in price for copper-based gigabit Ethernet (GigE) interconnects makes them an attractive alternative to expensive high speed network interconnects (NIC) when constructing Linux clusters. However, traditional message passing systems based on TCP for GigE interconnects cannot fully utilize the raw performance of today's GigE interconnects because of the overhead of kernel involvement and multiple memory copies when sending and receiving messages. The overhead is more evident in the case of mesh-connected Linux clusters using multiple GigE interconnects in a single host. We present a general message passing system called QMP-MVIA (QCD Message Passing over M-VIA) for Linux clusters with mesh connections using GigE interconnects. In particular, we evaluate and compare the performance characteristics of TCP and M-VIA (an implementation of the VIA specification) software for a mesh communication architecture to demonstrate the feasibility of using M-VIA as the point-to-point communication software on which QMP-MVIA is based. Furthermore, we illustrate the design and implementation of QMP-MVIA for mesh-connected Linux clusters with emphasis on both point-to-point and collective communications, and demonstrate that the QMP-MVIA message passing system using GigE interconnects achieves bandwidth and latency that are not only better than systems based on TCP but also compare favorably to systems using some of the specialized high speed interconnects in a switched architecture, at much lower cost.

  9. Potential performance bottleneck in Linux TCP

    SciTech Connect

    Wu, Wenji; Crawford, Matt; /Fermilab

    2006-12-01

    TCP is the most widely used transport protocol on the Internet today. Over the years, especially recently, due to requirements of high bandwidth transmission, various approaches have been proposed to improve TCP performance. The Linux 2.6 kernel is now preemptible. It can be interrupted mid-task, making the system more responsive and interactive. However, we have noticed that Linux kernel preemption can interact badly with the performance of the networking subsystem. In this paper we investigate the performance bottleneck in Linux TCP. We systematically describe the trip of a TCP packet from its ingress into a Linux network end system to its final delivery to the application; we study the performance bottleneck in Linux TCP through mathematical modeling and practical experiments; finally we propose and test one possible solution to resolve this performance bottleneck in Linux TCP.

  10. I/O performance evaluation of a Linux-based network-attached storage device

    NASA Astrophysics Data System (ADS)

    Sun, Zhaoyan; Dong, Yonggui; Wu, Jinglian; Jia, Huibo; Feng, Guanping

    2002-09-01

    In a Local Area Network (LAN), clients are permitted to access the files on high-density optical disks via a network server. However, the quality of the read service offered by a conventional server is unsatisfactory because the server performs multiple functions and serves too many callers. This paper develops a Linux-based Network-Attached Storage (NAS) server. The operating system (OS), composed of an optimized kernel and a miniaturized file system, is stored in a flash memory. After initialization, the NAS device is connected to the LAN. The administrator and users can configure and access the server through web pages, respectively. In order to enhance the quality of access, the management of the buffer cache in the file system is optimized. Several benchmark programs were run to evaluate the I/O performance of the NAS device. Since data recorded on optical disks are usually accessed for reading, our attention is focused on the reading throughput of the device. The experimental results indicate that the I/O performance of our NAS device is excellent.

  11. AIRE-Linux

    NASA Astrophysics Data System (ADS)

    Zhou, Jianfeng; Xu, Benda; Peng, Chuan; Yang, Yang; Huo, Zhuoxi

    2015-08-01

    AIRE-Linux is a dedicated Linux system for astronomers. Modern astronomy faces two big challenges: massive volumes of observed raw data covering the whole electromagnetic spectrum, and data processing that demands professional skills beyond the abilities of an individual or even a small team. AIRE-Linux, a specially designed Linux distribution that will be distributed to users as Virtual Machine (VM) images in Open Virtualization Format (OVF), is intended to help astronomers confront these challenges. Most astronomical software packages, such as IRAF, MIDAS, CASA, Heasoft, etc., will be integrated into AIRE-Linux. It is easy for astronomers to configure and customize the system and use just what they need. When incorporated into cloud computing platforms, AIRE-Linux will be able to handle data-intensive and computing-intensive tasks for astronomers. Currently, a Beta version of AIRE-Linux is ready for download and testing.

  12. The Case for A Hierarchal System Model for Linux Clusters

    SciTech Connect

    Seager, M; Gorda, B

    2009-06-05

    The computer industry today is no longer driven, as it was in the 40s, 50s and 60s, by high-performance computing requirements. Rather, HPC systems, especially Leadership-class systems, sit on top of a pyramid investment model. Figure 1 shows a representative pyramid investment model for systems hardware. At the base of the pyramid is the huge investment (on the order of tens of billions of US dollars per year) in semiconductor fabrication and process technologies. These costs, which approximately double with every generation, are funded from investments in multiple markets: enterprise, desktops, games, embedded and specialized devices. Over and above these base technology investments are investments for critical technology elements such as microprocessor, chipset and memory ASIC components. Investments for these components are spread across the same markets as the base semiconductor process investments. These second-tier investments are approximately half the size of the lower level of the pyramid. The next technology investment layer up, tier 3, is more focused on scalable computing systems such as those needed for HPC and other markets. These tier 3 technology elements include networking (SAN, WAN and LAN), interconnects and large scalable SMP designs. Above these, in tier 4, are the relatively small investments necessary to build very large, scalable, high-end or Leadership-class systems. Primary among these are the specialized network designs of vertically integrated systems, etc.

  13. Linux and the chemist.

    SciTech Connect

    Moore, J. M.; McCann, M. P.; Materials Science Division; Sam Houston State Univ.

    2003-02-01

    Linux is a freely available computer operating system. Instead of buying multiple copies of the same operating system for use on each computer, Linux may be freely copied onto every computer. Linux distributions come with hundreds of applications, such as compilers, browsers, various servers, graphics software, text editors, and spreadsheets, just to mention a few. Many commercial software companies have ported their applications over to Linux. Numerous programs for chemists, such as statistical treatment, molecular modeling, NMR spectral processing, DNA sequence evaluation, crystal structure solving, and molecular dynamics, are available online, many at no cost.

  14. Development of Automatic Live Linux Rebuilding System with Flexibility in Science and Engineering Education and Applying to Information Processing Education

    NASA Astrophysics Data System (ADS)

    Sonoda, Jun; Yamaki, Kota

    We develop an automatic Live Linux rebuilding system for science and engineering education, such as information processing education, numerical analysis and so on. Our system can easily and automatically rebuild a customized Live Linux from an ISO image of Ubuntu, one of the Linux distributions. It also makes it easy to install or uninstall packages and to enable or disable init daemons. When we rebuild a Live Linux CD using our system, the number of operations is eight, and the rebuilding time is about 33 minutes for the CD version and about 50 minutes for the DVD version. Moreover, we have applied the rebuilt Live Linux CD in an information processing class at our college. According to a questionnaire survey of the 43 students who used the Live Linux CD, about 80 percent of the students found our Live Linux useful. From these results, we conclude that our system can easily and automatically rebuild a useful Live Linux in a short time.

  15. Development of a portable Linux-based ECG measurement and monitoring system.

    PubMed

    Tan, Tan-Hsu; Chang, Ching-Su; Huang, Yung-Fa; Chen, Yung-Fu; Lee, Cheng

    2011-08-01

    This work presents a portable Linux-based electrocardiogram (ECG) signal measurement and monitoring system. The proposed system consists of an ECG front end and an embedded Linux platform (ELP). The ECG front end digitizes 12-lead ECG signals acquired from electrodes and then delivers them to the ELP via a universal serial bus (USB) interface for storage, signal processing, and graphic display. The proposed system can be installed anywhere (e.g., offices, homes, healthcare centers and ambulances) to allow people to self-monitor their health conditions at any time. The proposed system also enables remote diagnosis via the Internet. Additionally, the system has a 7-in. interactive TFT-LCD touch screen that enables users to execute various functions, such as scaling single-lead or multiple-lead ECG waveforms. The effectiveness of the proposed system was verified by using a commercial 12-lead ECG signal simulator and in vivo experiments. In addition to its portability, the proposed system is license-free, as Linux, an open-source operating system, is used during software development. The cost-effectiveness of the system significantly enhances its practical application for personal healthcare.

  16. Linux thin-client conversion in a large cardiology practice: initial experience.

    PubMed

    Echt, Martin P; Rosen, Jordan

    2004-01-01

    Capital Cardiology Associates (CCA) is a single-specialty cardiology practice with offices in New York and Massachusetts. In 2003, CCA converted its IT system from a Microsoft-based network to a Linux network employing Linux thin-client technology with overall positive outcomes.

  17. CORSET: Service-Oriented Resource Management System in Linux

    NASA Astrophysics Data System (ADS)

    Kang, Dong-Jae; Kim, Chei-Yol; Jung, Sung-In

    Generally, system resources are not sufficient for the many services and applications running on a system. In the real world, services are more important than single processes, and different services have different priorities or importance, so each service should be treated differently with respect to system resources. However, an administrator cannot guarantee that a specific service receives adequate resources under unsettled workloads, because many processes compete for them. We therefore propose a service-oriented resource management subsystem to resolve these problems. It guarantees the performance or QoS of a specific service under changing workloads by satisfying the minimum resource requirements of the service.

  18. [Design of an embedded stroke rehabilitation apparatus system based on Linux computer engineering].

    PubMed

    Zhuang, Pengfei; Tian, XueLong; Zhu, Lin

    2014-04-01

    A realization project of an electrical stimulator aimed at the motor dysfunction caused by stroke is proposed in this paper. Based on neurophysiological biofeedback, this system, using an ARM9 S3C2440 as the core processor, integrates the collection and display of surface electromyography (sEMG) signals, as well as neuromuscular electrical stimulation (NMES), into one system. By embedding a Linux system, the project is able to use Qt/Embedded as a graphical interface design tool to accomplish the design of the stroke rehabilitation apparatus. Experiments showed that this system worked well.

  19. [Design of an embedded stroke rehabilitation apparatus system based on Linux computer engineering].

    PubMed

    Zhuang, Pengfei; Tian, XueLong; Zhu, Lin

    2014-04-01

    A realization project of an electrical stimulator aimed at the motor dysfunction caused by stroke is proposed in this paper. Based on neurophysiological biofeedback, this system, using an ARM9 S3C2440 as the core processor, integrates the collection and display of surface electromyography (sEMG) signals, as well as neuromuscular electrical stimulation (NMES), into one system. By embedding a Linux system, the project is able to use Qt/Embedded as a graphical interface design tool to accomplish the design of the stroke rehabilitation apparatus. Experiments showed that this system worked well.

  20. A new design and implementation of an infrared device driver in embedded Linux systems

    NASA Astrophysics Data System (ADS)

    Jia, Li-li; Cui, Hua; Wang, Ru-li

    2009-07-01

    Wireless infrared communication systems are widely used for remote controls in portable terminals, particularly in systems requiring low cost, light weight and moderate data rates. They have already proven their effectiveness for short-range temporary communications and in high-data-rate, longer-range point-to-point systems. This paper addresses the design and implementation of an infrared device driver in a personal portable intelligent digital infrared communications system. After analyzing the various constraints, we use an embedded system based on the Samsung S3C2440A 32-bit processor and the Linux operating system to design the driver. Instead of the traditional serial-interface control mode, the driver uses generic GPIO lines to implement the infrared receiver, and a user-defined communication protocol, simpler and more convenient than the traditional infrared protocols, is adopted for the character device driver of the infrared receiver. The protocol uses an interrupt counter to determine the received value and the start code. The paper also introduces the interrupt handling and an I/O package for reusing Linux device drivers in embedded systems; via this package, the whole Linux device driver source tree can be reused without any modifications. The driver can set up and initialize the infrared device, transfer data between the device and the software, configure the device, monitor and trace the status of the device, reset the device, and shut down the device as requested. Finally, an infrared test procedure was prepared and evaluations were carried out in a mobile infrared intelligent cicerone system; the results show that the design is simple and practical, with advantages such as easy portability, strong reliability and convenience.
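
    As a rough illustration of the kind of driver the abstract describes, the sketch below registers a character device and counts interrupts from an assumed IR receiver line; the IRQ number, device name and the custom protocol decoding are placeholders, not the authors' implementation.

```c
/* Minimal sketch (not the authors' driver): a character device that
 * counts interrupts from an IR receiver line and lets user space read
 * the count. The IRQ number and device name are placeholders; the
 * protocol decoding described in the paper is omitted. Build as an
 * out-of-tree kernel module. */
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/interrupt.h>
#include <linux/atomic.h>
#include <linux/uaccess.h>

static int irq = -1;                  /* IRQ of the IR receiver GPIO */
module_param(irq, int, 0444);

static int major;
static atomic_t pulse_count = ATOMIC_INIT(0);

/* Interrupt handler: one pulse edge received on the IR line. */
static irqreturn_t ir_isr(int irq_no, void *dev_id)
{
    atomic_inc(&pulse_count);
    return IRQ_HANDLED;
}

/* read() returns the current pulse count as a binary int. */
static ssize_t ir_read(struct file *filp, char __user *buf,
                       size_t len, loff_t *off)
{
    int count = atomic_read(&pulse_count);

    if (len < sizeof(count))
        return -EINVAL;
    if (copy_to_user(buf, &count, sizeof(count)))
        return -EFAULT;
    return sizeof(count);
}

static const struct file_operations ir_fops = {
    .owner = THIS_MODULE,
    .read  = ir_read,
};

static int __init ir_init(void)
{
    int ret;

    major = register_chrdev(0, "ir_recv", &ir_fops);
    if (major < 0)
        return major;
    ret = request_irq(irq, ir_isr, IRQF_TRIGGER_FALLING, "ir_recv", NULL);
    if (ret) {
        unregister_chrdev(major, "ir_recv");
        return ret;
    }
    pr_info("ir_recv: major %d, irq %d\n", major, irq);
    return 0;
}

static void __exit ir_exit(void)
{
    free_irq(irq, NULL);
    unregister_chrdev(major, "ir_recv");
}

module_init(ir_init);
module_exit(ir_exit);
MODULE_LICENSE("GPL");
```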

  1. Linux support at Fermilab

    SciTech Connect

    D.R. Yocum, C. Sieh, D. Skow, S. Kovich, D. Holmgren and R. Kennedy

    1998-12-01

    In January of 1998 Fermilab issued an official statement of support of the Linux operating system. This was the result of a groundswell of interest in the possibilities of a cheap, easily used platform for computation and analysis, culminating in the successful demonstration of a small computation farm as reported at CHEP97. This paper will describe the current status of Linux support and deployment at Fermilab. The collaborative development process for Linux creates some problems with traditional support models. A primary example of this is that there is no definitive OS distribution, such as a CD distribution from a traditional Unix vendor. Fermilab has had to make a more definite statement about what is meant by Linux for this reason. Linux support at Fermilab is restricted to the Intel processor platform. A central distribution system has been created to mitigate problems with multiple distribution and configuration options. This system is based on the Red Hat distribution with the Fermi Unix Environment (FUE) layered above it. Deployment of Linux at the lab has been growing rapidly, and by CHEP hundreds of machines are expected to be running Linux. These include computational farms, trigger processing farms, and desktop workstations. The former groups are described in other talks and consist of clusters of many tens of very similar machines devoted to a few tasks. The latter group is more diverse and challenging. The user community has been very supportive and active in defining needs for Linux features and solving various compatibility issues. We will discuss the support arrangements currently in place.

  2. Real-time head movement system and embedded Linux implementation for the control of power wheelchairs.

    PubMed

    Nguyen, H T; King, L M; Knight, G

    2004-01-01

    Mobility has become very important for our quality of life. A loss of mobility due to an injury is usually accompanied by a loss of self-confidence. For many individuals, independent mobility is an important aspect of self-esteem. Head movement is a natural form of pointing and can be used to directly replace the joystick whilst still allowing for similar control. Through the use of embedded Linux and artificial intelligence, a hands-free head-movement wheelchair controller has been designed and implemented successfully. This system provides severely disabled users with an effective power wheelchair control method offering improved posture, ease of use and attractiveness.

  3. Use of Low-Cost Acquisition Systems with an Embedded Linux Device for Volcanic Monitoring.

    PubMed

    Moure, David; Torres, Pedro; Casas, Benito; Toma, Daniel; Blanco, María José; Del Río, Joaquín; Manuel, Antoni

    2015-01-01

    This paper describes the development of a low-cost multiparameter acquisition system for volcanic monitoring that is applicable to gravimetry and geodesy, as well as to the visual monitoring of volcanic activity. The acquisition system was developed using a System on a Chip (SoC), the Broadcom BCM2835, running a Linux operating system (based on Debian) that allows for the construction of a complete monitoring system offering multiple possibilities for storage, data processing, configuration, and the real-time monitoring of volcanic activity. This multiparametric acquisition system was developed with a software environment, as well as with different hardware modules designed for each parameter to be monitored. The device presented here has been used and validated under different scenarios for monitoring ocean tides, ground deformation, and gravity, as well as for image-based monitoring of the island of Tenerife and ground-deformation monitoring on the island of El Hierro.

  4. Use of Low-Cost Acquisition Systems with an Embedded Linux Device for Volcanic Monitoring.

    PubMed

    Moure, David; Torres, Pedro; Casas, Benito; Toma, Daniel; Blanco, María José; Del Río, Joaquín; Manuel, Antoni

    2015-08-19

    This paper describes the development of a low-cost multiparameter acquisition system for volcanic monitoring that is applicable to gravimetry and geodesy, as well as to the visual monitoring of volcanic activity. The acquisition system was developed using a System on a Chip (SoC), the Broadcom BCM2835, running a Linux operating system (based on Debian) that allows for the construction of a complete monitoring system offering multiple possibilities for storage, data processing, configuration, and the real-time monitoring of volcanic activity. This multiparametric acquisition system was developed with a software environment, as well as with different hardware modules designed for each parameter to be monitored. The device presented here has been used and validated under different scenarios for monitoring ocean tides, ground deformation, and gravity, as well as for image-based monitoring of the island of Tenerife and ground-deformation monitoring on the island of El Hierro.

  5. Use of Low-Cost Acquisition Systems with an Embedded Linux Device for Volcanic Monitoring

    PubMed Central

    Moure, David; Torres, Pedro; Casas, Benito; Toma, Daniel; Blanco, María José; Del Río, Joaquín; Manuel, Antoni

    2015-01-01

    This paper describes the development of a low-cost multiparameter acquisition system for volcanic monitoring that is applicable to gravimetry and geodesy, as well as to the visual monitoring of volcanic activity. The acquisition system was developed using a System on a Chip (SoC), the Broadcom BCM2835, running a Linux operating system (based on Debian) that allows for the construction of a complete monitoring system offering multiple possibilities for storage, data processing, configuration, and the real-time monitoring of volcanic activity. This multiparametric acquisition system was developed with a software environment, as well as with different hardware modules designed for each parameter to be monitored. The device presented here has been used and validated under different scenarios for monitoring ocean tides, ground deformation, and gravity, as well as for image-based monitoring of the island of Tenerife and ground-deformation monitoring on the island of El Hierro.

  6. Unsteady, Cooled Turbine Simulation Using a PC-Linux Analysis System

    NASA Technical Reports Server (NTRS)

    List, Michael G.; Turner, Mark G.; Chen, Jen-Ping; Remotigue, Michael G.; Veres, Joseph P.

    2004-01-01

    The first stage of the high-pressure turbine (HPT) of the GE90 engine was simulated with a three-dimensional unsteady Navier-Stokes solver, MSU Turbo, which uses source terms to simulate the cooling flows. In addition to the solver, its pre-processor, GUMBO, and a post-processing and visualization tool, Turbomachinery Visual3 (TV3), were run in a Linux environment to carry out the simulation and analysis. The solver was run both with and without cooling. The introduction of cooling flow on the blade surfaces, case, and hub, and its effects on rotor-vane interaction as well as on the blades themselves, were the principal motivations for this study. The studies of the cooling flow show the large amount of unsteadiness in the turbine and the corresponding hot-streak migration phenomenon. This research on the GE90 turbomachinery has also led to a procedure for running unsteady, cooled turbine analyses on commodity PCs running the Linux operating system.

  7. A machine vision system for micro-EDM based on linux

    NASA Astrophysics Data System (ADS)

    Guo, Rui; Zhao, Wansheng; Li, Gang; Li, Zhiyong; Zhang, Yong

    2006-11-01

    Due to the high precision and good surface quality that it can deliver, Electrical Discharge Machining (EDM) is potentially an important process for the fabrication of micro-tools and micro-components. However, a number of issues remain unsolved before micro-EDM becomes a reliable process with repeatable results. To deal with the difficulties of on-line fabrication of micro electrodes and tool-wear compensation, a micro-EDM machine vision system has been developed with a Charge Coupled Device (CCD) camera, with an optical resolution of 1.61 μm and an overall magnification of 113-729. Based on the Linux operating system, an image capturing program was developed with the V4L2 API, and an image processing program was developed using OpenCV. The contour of micro electrodes can be extracted by means of the Canny edge detector. Through system calibration, the diameter of micro electrodes can be measured on-line. Experiments have been carried out to verify the system's performance, and the sources of measurement error are also analyzed.
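
    A minimal sketch of the contour-extraction step using the OpenCV C API is shown below; the file name and Canny thresholds are illustrative assumptions, and frame capture via V4L2 and the calibration step are omitted.

```c
/* Minimal sketch of Canny-based contour extraction with the OpenCV C
 * API: load a grayscale frame, run the Canny detector, count edge
 * pixels. File name and thresholds are illustrative assumptions. */
#include <stdio.h>
#include <opencv/cv.h>
#include <opencv/highgui.h>

int main(void)
{
    IplImage *gray = cvLoadImage("electrode.png", CV_LOAD_IMAGE_GRAYSCALE);
    if (!gray) { fprintf(stderr, "cannot load image\n"); return 1; }

    IplImage *edges = cvCreateImage(cvGetSize(gray), IPL_DEPTH_8U, 1);
    cvCanny(gray, edges, 50.0, 150.0, 3);  /* low/high hysteresis thresholds */

    /* Count edge pixels; with a calibrated pixel size this would be the
     * basis for measuring the electrode contour. */
    long edge_pixels = 0;
    for (int y = 0; y < edges->height; y++) {
        unsigned char *row =
            (unsigned char *)(edges->imageData + y * edges->widthStep);
        for (int x = 0; x < edges->width; x++)
            if (row[x])
                edge_pixels++;
    }
    printf("edge pixels: %ld\n", edge_pixels);

    cvReleaseImage(&edges);
    cvReleaseImage(&gray);
    return 0;
}
```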

  8. Chandra Science Operational Data System Migration to Linux: Herding Cats through a Funnel

    NASA Astrophysics Data System (ADS)

    Evans, J.; Evans, I.; Fabbiano, G.; Nichols, J.; Paton, L.; Rots, A.

    2014-05-01

    Migration to a new operational system requires technical and non-technical planning to address all of the functional associations affiliated with an established operations environment. The transition to (or addition of) a new platform often includes project planning that has organizational and operational elements. The migration likely tasks individuals both directly and indirectly involved in the project, so identification and coordination of key personnel is essential. The new system must be accurate and robust, and the transition plan typically must ensure that interruptions to services are minimized. Despite detailed integration and testing efforts, back-up plans that include procedures to follow if there are issues during or after installation need to be in place as part of the transition task. In this paper, we present some of the important steps involved in the migration of an operational data system. The management steps include setting objectives and defining scope, identifying stakeholders and establishing communication, assessing the environment and estimating workload, building a schedule, and coordinating with all involved to see it through. We discuss, specifically, the recent migration of the Chandra data system and data center operations from Solaris 32 to Linux 64. The code base is approximately 2 million source lines of code, and supports proposal planning, science mission planning, data processing, and the Chandra data archive. The overall project took approximately 18 months to plan and implement with the resources we had available. Data center operations continued uninterrupted with the exception of a small downtime during the changeover. We highlight our planning and implementation, the experience we gained during the project, and the lessons that we have learned.

  9. Recent advances in PC-Linux systems for electronic structure computations by optimized compilers and numerical libraries.

    PubMed

    Yu, Jen-Shiang K; Yu, Chin-Hui

    2002-01-01

    One of the most frequently used packages for electronic structure research, GAUSSIAN 98, is compiled on Linux systems with various hardware configurations, including AMD Athlon (with the "Thunderbird" core), AthlonMP, and AthlonXP (with the "Palomino" core) systems as well as the Intel Pentium 4 (with the "Willamette" core) machines. The default PGI FORTRAN compiler (pgf77) and the Intel FORTRAN compiler (ifc) are respectively employed with different architectural optimization options to compile GAUSSIAN 98 and test the performance improvement. In addition to the BLAS library included in revision A.11 of this package, the Automatically Tuned Linear Algebra Software (ATLAS) library is linked against the binary executables to improve the performance. Various Hartree-Fock, density-functional theories, and the MP2 calculations are done for benchmarking purposes. It is found that the combination of ifc with ATLAS library gives the best performance for GAUSSIAN 98 on all of these PC-Linux computers, including AMD and Intel CPUs. Even on AMD systems, the Intel FORTRAN compiler invariably produces binaries with better performance than pgf77. The enhancement provided by the ATLAS library is more significant for post-Hartree-Fock calculations. The performance on one single CPU is potentially as good as that on an Alpha 21264A workstation or an SGI supercomputer. The floating-point marks by SpecFP2000 have similar trends to the results of GAUSSIAN 98 package.

  10. Recent advances in PC-Linux systems for electronic structure computations by optimized compilers and numerical libraries.

    PubMed

    Yu, Jen-Shiang K; Yu, Chin-Hui

    2002-01-01

    One of the most frequently used packages for electronic structure research, GAUSSIAN 98, is compiled on Linux systems with various hardware configurations, including AMD Athlon (with the "Thunderbird" core), AthlonMP, and AthlonXP (with the "Palomino" core) systems as well as the Intel Pentium 4 (with the "Willamette" core) machines. The default PGI FORTRAN compiler (pgf77) and the Intel FORTRAN compiler (ifc) are respectively employed with different architectural optimization options to compile GAUSSIAN 98 and test the performance improvement. In addition to the BLAS library included in revision A.11 of this package, the Automatically Tuned Linear Algebra Software (ATLAS) library is linked against the binary executables to improve the performance. Various Hartree-Fock, density-functional theories, and the MP2 calculations are done for benchmarking purposes. It is found that the combination of ifc with ATLAS library gives the best performance for GAUSSIAN 98 on all of these PC-Linux computers, including AMD and Intel CPUs. Even on AMD systems, the Intel FORTRAN compiler invariably produces binaries with better performance than pgf77. The enhancement provided by the ATLAS library is more significant for post-Hartree-Fock calculations. The performance on one single CPU is potentially as good as that on an Alpha 21264A workstation or an SGI supercomputer. The floating-point marks by SpecFP2000 have similar trends to the results of GAUSSIAN 98 package.

  11. SLURM: Simple Linux Utility for Resource Management

    SciTech Connect

    Jette, M; Grondona, M

    2003-04-22

    Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, scheduling, and stream copy modules. This paper presents an overview of the SLURM architecture and functionality.

  12. SLURM: Simple Linux Utility for Resource Management

    SciTech Connect

    Jette, M; Grondona, M

    2002-12-19

    Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, scheduling and stream copy modules. This paper presents an overview of the SLURM architecture and functionality.

  13. v9fb: a remote framebuffer infrastructure for Linux

    SciTech Connect

    Kulkarni, Abhishek; Ionkov, Latchesar

    2008-01-01

    v9fb is a software infrastructure that allows framebuffer devices in Linux to be extended over the network by providing an abstraction to them in the form of a filesystem hierarchy. Framebuffer-based graphic devices export a synthetic filesystem which offers a simple and easy-to-use interface for performing common framebuffer operations. Remote framebuffer devices can be accessed over the network using the 9P protocol support in Linux. We describe the infrastructure in detail and review some of the benefits it offers, similar to Plan 9 distributed systems. We discuss the applications of this infrastructure to remotely display and run interactive applications on a terminal while offloading the computation to remote servers, and, more importantly, the flexibility it offers in driving tiled-display walls by aggregating graphic devices in the network.

  14. Managing a Real-Time Embedded Linux Platform with Buildroot

    SciTech Connect

    Diamond, J.; Martin, K.

    2015-01-01

    Developers of real-time embedded software often need to build the operating system, kernel, tools and supporting applications from source to work with the differences in their hardware configuration. The first attempts to introduce Linux-based real-time embedded systems into the Fermilab accelerator controls system used this approach, but it was found to be time-consuming, difficult to maintain and difficult to adapt to different hardware configurations. Buildroot is an open source build system with a menu-driven configuration tool (similar to the Linux kernel build system) that automates this process. A customized Buildroot [1] system has been developed for use in the Fermilab accelerator controls system that includes several hardware configuration profiles (including Intel, ARM and PowerPC) and packages for Fermilab support software. A bootable image file is produced containing the Linux kernel, shell and supporting software suite that ranges from 3 to 20 megabytes in size – ideal for network booting. The result is a platform that is easier to maintain and deploy in diverse hardware configurations.

  15. SLURM: Simple Linux Utility for Resource Management

    SciTech Connect

    Jette, M; Dunlap, C; Garlick, J; Grondona, M

    2002-07-08

    Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, scheduling and stream copy modules. The design also includes a scalable, general-purpose communication infrastructure. This paper presents an overview of the SLURM architecture and functionality.

  16. Abstract of talk for Silicon Valley Linux Users Group

    NASA Technical Reports Server (NTRS)

    Clanton, Sam

    2003-01-01

    The use of Linux for research at NASA Ames is discussed. Topics include: work with the Atmospheric Physics branch on software for a spectrometer to be used in the CRYSTAL-FACE mission this summer; and work in the Neuroengineering Lab with code IC, including an introduction to the extension of the human senses project, advantages of using Linux for real-time biological data processing, algorithms utilized on a Linux system, goals of the project, slides of people with Neuroscan caps on, and progress that has been made and how Linux has helped.

  17. KNBD: A Remote Kernel Block Server for Linux

    NASA Technical Reports Server (NTRS)

    Becker, Jeff

    1999-01-01

    I am developing a prototype of a Linux remote disk block server whose purpose is to serve as a lower-level component of a parallel file system. Parallel file systems are an important component of high performance supercomputers and clusters. Although supercomputer vendors such as SGI and IBM have their own custom solutions, there has been a void, and hence a demand, for such a system on Beowulf-type PC clusters. Recently, the Parallel Virtual File System (PVFS) project at Clemson University has begun to address this need (1). Although their system provides much of the functionality of (and indeed was inspired by) the equivalent file systems in the commercial supercomputer market, their system runs entirely in user space. Migrating their I/O services to the kernel could provide a performance boost by obviating the need for expensive system calls. Thanks to Pavel Machek, the Linux kernel has provided the network block device (2) with kernels 2.1.101 and later. You can configure this block device to redirect reads and writes to a remote machine's disk. This can be used as a building block for constructing a striped file system across several nodes.

  18. BTime A Clock Synchronization Tool For Linux Clusters

    2004-10-22

    BTime software synchronizes the system clocks on Linux computers that can communicate on a network. Primarily intended for Linux computers that form a cluster, BTime ensures that all computers in the cluster have approximately the same time (usually within microseconds). In operation, a BTime server broadcasts target times every second. All BTime clients filter timing data and apply local time corrections synchronously at multiples of 64 seconds. Bayesian estimation of target time errors feeds a Kalman filter which estimates local errors in time, clock drift, and wander rates. Server clock adjustments are detected and compensated, thus reducing filter convergence time. Low-probability events (e.g. significant time changes) are handled through heuristics also designed to reduce filter convergence time. Normal BTime corrects clock differences, while another version of BTime that only tracks clock differences can be used for measurements. In the authors' test lasting four days, BTime delivered estimated clock synchronization within 10 microseconds with 99.75% confidence. Standard deviation of the estimated clock offset is typically 2-3 microseconds, even over busy multi-hop networks. These results are about 100 times better than published results for the Network Time Protocol (NTP).

  19. BTime A Clock Synchronization Tool For Linux Clusters

    SciTech Connect

    Loncaric, Josip

    2004-10-22

    BTime software synchronizes the system clocks on Linux computers that can communicate on a network. Primarily intended for Linux computers that form a cluster, BTime ensures that all computers in the cluster have approximately the same time (usually within microseconds). In operation, a BTime server broadcasts target times every second. All BTime clients filter timing data and apply local time corrections synchronously at multiples of 64 seconds. Bayesian estimation of target time errors feeds a Kalman filter which estimates local errors in time, clock drift, and wander rates. Server clock adjustments are detected and compensated, thus reducing filter convergence time. Low-probability events (e.g. significant time changes) are handled through heuristics also designed to reduce filter convergence time. Normal BTime corrects clock differences, while another version of BTime that only tracks clock differences can be used for measurements. In the authors' test lasting four days, BTime delivered estimated clock synchronization within 10 microseconds with 99.75% confidence. Standard deviation of the estimated clock offset is typically 2-3 microseconds, even over busy multi-hop networks. These results are about 100 times better than published results for the Network Time Protocol (NTP).
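
    For illustration, a two-state Kalman filter tracking clock offset and drift of the general kind described above might look like the sketch below; the noise parameters, update rate and simulated measurements are assumptions, not BTime's actual design.

```c
/* Illustrative two-state Kalman filter for clock offset and drift (not
 * BTime's actual design). Measurements are simulated noisy offsets of
 * the local clock from a once-per-second broadcast target time. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const double dt = 1.0;                        /* one update per second */
    const double q_off = 1e-9, q_drift = 1e-12;   /* process noise         */
    const double r = 1e-12;                       /* measurement variance  */

    double x[2] = { 0.0, 0.0 };                   /* [offset (s), drift]   */
    double P[2][2] = { { 1e-3, 0 }, { 0, 1e-6 } };/* state covariance      */

    for (int k = 0; k < 120; k++) {
        /* Simulated measurement: true offset drifting at 5 us/s + noise. */
        double z = 5e-6 * dt * (k + 1) +
                   1e-6 * ((double)rand() / RAND_MAX - 0.5);

        /* Predict: offset advances by drift*dt, drift stays constant. */
        x[0] += x[1] * dt;
        P[0][0] += dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q_off;
        P[0][1] += dt * P[1][1];
        P[1][0] += dt * P[1][1];
        P[1][1] += q_drift;

        /* Update with scalar measurement z = offset + noise. */
        double y = z - x[0];                       /* innovation          */
        double s = P[0][0] + r;                    /* innovation variance */
        double k0 = P[0][0] / s, k1 = P[1][0] / s; /* Kalman gain         */
        x[0] += k0 * y;
        x[1] += k1 * y;
        double p00 = P[0][0], p01 = P[0][1];
        P[0][0] -= k0 * p00; P[0][1] -= k0 * p01;
        P[1][0] -= k1 * p00; P[1][1] -= k1 * p01;
    }
    printf("estimated offset %.3g s, drift %.3g s/s\n", x[0], x[1]);
    return 0;
}
```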

  20. BSD Portals for LINUX 2.0

    NASA Technical Reports Server (NTRS)

    McNab, A. David; woo, Alex (Technical Monitor)

    1999-01-01

    Portals, an experimental feature of 4.4BSD, extend the file system name space by exporting certain open () requests to a user-space daemon. A portal daemon is mounted into the file name space as if it were a standard file system. When the kernel resolves a pathname and encounters a portal mount point, the remainder of the path is passed to the portal daemon. Depending on the portal "pathname" and the daemon's configuration, some type of open (2) is performed. The resulting file descriptor is passed back to the kernel which eventually returns it to the user, to whom it appears that a "normal" open has occurred. A proxy portalfs file system is responsible for kernel interaction with the daemon. The overall effect is that the portal daemon performs an open (2) on behalf of the kernel, possibly hiding substantial complexity from the calling process. One particularly useful application is implementing a connection service that allows simple scripts to open network sockets. This paper describes the implementation of portals for LINUX 2.0.
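
    A minimal sketch of the connection-service use mentioned above, assuming the conventional 4.4BSD portal mount point and path layout (/p/tcp/host/port), which are not taken from the paper:

```c
/* Sketch of opening a TCP connection by path name through a mounted
 * portal daemon. The mount point (/p) and tcp/host/port path layout
 * are the conventional 4.4BSD ones and are assumed here. */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
    /* The portal daemon intercepts this open(), dials localhost port 25,
     * and hands back the connected socket as the returned descriptor. */
    int fd = open("/p/tcp/localhost/25", O_RDWR);
    if (fd < 0) { perror("portal open"); return 1; }

    write(fd, "QUIT\r\n", 6);   /* talk to the SMTP server over the fd */
    close(fd);
    return 0;
}
```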

  1. Berkeley lab checkpoint/restart (BLCR) for Linux clusters

    NASA Astrophysics Data System (ADS)

    Hargrove, Paul H.; Duell, Jason C.

    2006-09-01

    This article describes the motivation, design and implementation of Berkeley Lab Checkpoint/Restart (BLCR), a system-level checkpoint/restart implementation for Linux clusters that targets the space of typical High Performance Computing applications, including MPI. Application-level solutions, including both checkpointing and fault-tolerant algorithms, are recognized as more time and space efficient than system-level checkpoints, which cannot make use of any application-specific knowledge. However, system-level checkpointing allows for preemption, making it suitable for responding to ''fault precursors'' (for instance, elevated error rates from ECC memory or network CRCs, or elevated temperature from sensors). Preemption can also increase the efficiency of batch scheduling; for instance reducing idle cycles (by allowing for shutdown without any queue draining period or reallocation of resources to eliminate idle nodes when better fitting jobs are queued), and reducing the average queued time (by limiting large jobs to running during off-peak hours, without the need to limit the length of such jobs). Each of these potential uses makes BLCR a valuable tool for efficient resource management in Linux clusters.

  2. Kernel-based Linux emulation for Plan 9.

    SciTech Connect

    Minnich, Ronald G.

    2010-09-01

    CNKemu is a kernel-based system for the 9k variant of the Plan 9 kernel. It is designed to provide transparent binary support for programs compiled for IBM's Compute Node Kernel (CNK) on the Blue Gene series of supercomputers. This support allows users to build applications with the standard Blue Gene toolchain, including C++ and Fortran compilers. While the CNK is not Linux, IBM designed the CNK so that the user interface has much in common with the Linux 2.0 system call interface. The Plan 9 CNK emulator hence provides the foundation of kernel-based Linux system call support on Plan 9. In this paper we discuss cnkemu's implementation and some of its more interesting features, such as the ability to easily intermix Plan 9 and Linux system calls.

  3. Real-time data collection in Linux: a case study.

    PubMed

    Finney, S A

    2001-05-01

    Multiuser UNIX-like operating systems such as Linux are often considered unsuitable for real-time data collection because of the potential for indeterminate timing latencies resulting from preemptive scheduling. In this paper, Linux is shown to be fully adequate for precisely controlled programming with millisecond resolution or better. The Linux system calls that subserve such timing control are described and tested and then utilized in a MIDI-based program for tapping and music performance experiments. The timing of this program, including data input and output, is shown to be accurate at the millisecond level. This demonstrates that Linux, with proper programming, is suitable for real-time experiment software. In addition, the detailed description and test of both the operating system facilities and the application program itself may serve as a model for publicly documenting programming methods and software performance on other operating systems.
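
    A minimal sketch of millisecond-level timing measurement on Linux using the POSIX clock interfaces is shown below; it is not the paper's benchmark code, and the tick period and iteration count are arbitrary.

```c
/* Minimal sketch of millisecond-resolution timing on Linux with POSIX
 * clocks (not the paper's benchmark code). The loop schedules an
 * absolute 1 ms tick and records how late each wakeup is. Compile with
 * -lrt on older glibc. */
#define _POSIX_C_SOURCE 200112L
#include <stdio.h>
#include <time.h>

#define TICK_NS 1000000L   /* 1 ms period */
#define TICKS   1000

int main(void)
{
    struct timespec next, now;
    long worst_ns = 0;

    clock_gettime(CLOCK_MONOTONIC, &next);
    for (int i = 0; i < TICKS; i++) {
        /* Advance the absolute deadline by one tick. */
        next.tv_nsec += TICK_NS;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec++;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);

        clock_gettime(CLOCK_MONOTONIC, &now);
        long late_ns = (now.tv_sec - next.tv_sec) * 1000000000L +
                       (now.tv_nsec - next.tv_nsec);
        if (late_ns > worst_ns)
            worst_ns = late_ns;
    }
    printf("worst wakeup latency over %d ticks: %ld us\n",
           TICKS, worst_ns / 1000);
    return 0;
}
```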

  4. SLURM: Simple Linux Utility for Resource Management

    SciTech Connect

    Jette, M; Dunlap, C; Garlick, J; Grondona, M

    2002-04-24

    Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, and scheduling modules. The design also includes a scalable, general-purpose communication infrastructure. Development will take place in four phases: Phase I results in a solid infrastructure; Phase II produces a functional but limited interactive job initiation capability without use of the interconnect/switch; Phase III provides switch support and documentation; Phase IV provides job status, fault-tolerance, and job queuing and control through Livermore's Distributed Production Control System (DPCS), a meta-batch and resource management system.

  5. Linux Kernel Co-Scheduling For Bulk Synchronous Parallel Applications

    SciTech Connect

    Jones, Terry R

    2011-01-01

    This paper describes a kernel scheduling algorithm that is based on co-scheduling principles and that is intended for parallel applications running on 1000 cores or more where inter-node scalability is key. Experimental results for a Linux implementation on a Cray XT5 machine are presented. The results indicate that Linux is a suitable operating system for this new scheduling scheme, and that this design provides a dramatic improvement in scaling performance for synchronizing collective operations at scale.

  6. Linux Kernel Co-Scheduling and Bulk Synchronous Parallelism

    SciTech Connect

    Jones, Terry R

    2012-01-01

    This paper describes a kernel scheduling algorithm that is based on coscheduling principles and that is intended for parallel applications running on 1000 cores or more. Experimental results for a Linux implementation on a Cray XT5 machine are presented. The results indicate that Linux is a suitable operating system for this new scheduling scheme, and that this design provides a dramatic improvement in scaling performance for synchronizing collective operations at scale.

  7. Building the World's Fastest Linux Cluster

    SciTech Connect

    Goldstone, R; Seager, M

    2003-10-24

    Imagine having 2,304 Xeon processors running day and night solving complex problems. With a theoretical peak of 11.2 teraflops, that is just what the MCR cluster at Lawrence Livermore National Laboratory (LLNL) is doing. Over the past several years, Lawrence Livermore National Laboratory has deployed a series of increasingly large and powerful Intel-based Linux clusters. The most significant of these is a cluster known as the MCR (Multiprogrammatic Capability Resource). With 1,152 Intel Xeon (2.4 GHz) dual-processor nodes from Linux NetworX and a high performance interconnect from Quadrics, Ltd., the MCR currently ranks third on the 21st Top 500 Supercomputer Sites list and is the fastest Linux cluster in the world. This feat was accomplished with a total system cost (hardware including maintenance, interconnect, and global file system storage) of under $14 million. Although production clusters like the MCR are still custom-built supercomputers that require as much artistry as skill, the experiences of LLNL have helped clear an important path for other clusters to follow.

  8. A General Purpose High Performance Linux Installation Infrastructure

    SciTech Connect

    Wachsmann, Alf

    2002-06-17

    With more and larger Linux clusters being built, the question arises of how to install them. This paper addresses this question by proposing a solution using only standard software components. The installation infrastructure scales well to a large number of nodes. It is also usable for installing desktop machines or diskless Linux clients; thus, it is not designed for cluster installations in particular but is nevertheless highly performant. The proposed infrastructure uses PXE as the network boot component on the nodes. It uses DHCP and TFTP servers to deliver IP addresses and a bootloader to all nodes. It then uses kickstart to install Red Hat Linux over NFS. We have implemented this installation infrastructure at SLAC with our given server hardware and installed a 256-node cluster in 30 minutes. This paper presents the measurements from this installation and discusses its bottlenecks.

  9. Network systems security analysis

    NASA Astrophysics Data System (ADS)

    Yilmaz, Ä.°smail

    2015-05-01

    Network systems security analysis has utmost importance in today's world. Many companies, such as banks that give priority to data management, test their own data security systems with penetration tests from time to time. In this context, companies must also test their own network/server systems and take precautions, as data security draws increasing attention. Based on this idea, this study thoroughly researches cyber-attacks and examines penetration testing techniques. With this information, the cyber-attacks are classified and the security of network systems is then tested systematically. After the testing period, all data are reported and filed for future reference. Consequently, it is found that human beings are the weakest link in the chain and that simple mistakes may unintentionally cause huge problems. Thus, it is clear that precautions, such as keeping security software up to date, must be taken to avoid such threats.

  10. The network queueing system

    NASA Technical Reports Server (NTRS)

    Kingsbury, Brent K.

    1986-01-01

    Described is the implementation of a networked, UNIX based queueing system developed on contract for NASA. The system discussed supports both batch and device requests, and provides the facilities of remote queueing, request routing, remote status, queue access controls, batch request resource quota limits, and remote output return.

  11. Software structure for broadband wireless sensor network system

    NASA Astrophysics Data System (ADS)

    Kwon, Hyeokjun; Oh, Sechang; Yoon, Hargsoon; Varadan, Vijay K.

    2010-04-01

    Zigbee sensor network systems have been investigated for monitoring and analyzing data measured from many sensors because Zigbee offers low power consumption, compact size, and multi-node connection. However, a Zigbee network alone cannot deliver sensor data to a remote location, such as a room in another city. This paper describes a software structure that combines a Zigbee sensor network with wireless LAN technology for remote monitoring of measured sensor data, retaining the benefits of both. The software consists of three main parts. The first part acquires the data from the sensors. The second part gathers the sensor data through wireless Zigbee and sends the data to the monitoring system using wireless LAN; it consists of Linux software packages running on a Samsung 2440 CPU with an ARM9 core, including a bootloader, device drivers, the kernel, and applications (a TCP/IP server program, a program interfacing with the Zigbee RF module, and a wireless LAN program). The last part receives the sensor data from the Wireless Gate Unit through a TCP/IP client program and displays the measured data graphically using MATLAB; the sensor data are sampled at a 100 Hz rate with 10-bit resolution, and the wireless data transmission rate per channel is 1.6 kbps.
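
    A minimal sketch of the final TCP/IP hand-off described above — written here in Python rather than the paper's Linux/MATLAB code — is shown below; the port number, host, and packet layout (two bytes per 10-bit sample) are assumptions for illustration only.

        import socket
        import struct

        HOST, PORT = "0.0.0.0", 5000      # assumed listening address for the monitor
        SAMPLES_PER_PACKET = 100          # e.g. one second of 100 Hz data

        def run_monitor_server():
            """Accept one gateway connection and unpack 10-bit samples sent as uint16."""
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
                srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
                srv.bind((HOST, PORT))
                srv.listen(1)
                conn, addr = srv.accept()
                with conn:
                    print("gateway connected from", addr)
                    while True:
                        payload = conn.recv(2 * SAMPLES_PER_PACKET)
                        if not payload:
                            break
                        n = len(payload) // 2
                        samples = struct.unpack(f"<{n}H", payload[:2 * n])
                        # Keep only the 10 significant bits of each sample.
                        samples = [s & 0x3FF for s in samples]
                        print(f"received {n} samples, first value {samples[0]}")

        if __name__ == "__main__":
            run_monitor_server()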

  12. Network Systems Technician.

    ERIC Educational Resources Information Center

    Ohio State Univ., Columbus. Center on Education and Training for Employment.

    This publication contains 17 subjects appropriate for use in a competency list for the occupation of network systems technician, 1 of 12 occupations within the business/computer technologies cluster. Each unit consists of a number of competencies; a list of competency builders is provided for each competency. Titles of the 17 units are as follows:…

  13. Achieving Order through CHAOS: the LLNL HPC Linux Cluster Experience

    SciTech Connect

    Braby, R L; Garlick, J E; Goldstone, R J

    2003-05-02

    Since fall 2001, Livermore Computing at Lawrence Livermore National Laboratory has deployed 11 Intel IA-32-based Linux clusters ranging in size up to 1154 nodes. All provide a common programming model and implement a similar cluster architecture. Hardware components are carefully selected for performance, usability, manageability, and reliability and are then integrated and supported using a strategy that evolved from practical experience. Livermore Computing Linux clusters run a common software environment that is developed and maintained in-house while drawing components and additional support from the open source community and industrial partnerships. The environment is based on Red Hat Linux and adds kernel modifications, cluster system management, monitoring and failure detection, resource management, authentication and access control, development environment, and parallel file system. The overall strategy has been successful and demonstrates that world-class high-performance computing resources can be built and maintained using commodity off-the-shelf hardware and open source software.

  14. Scalability and Performance of a Large Linux Cluster

    SciTech Connect

    BRIGHTWELL,RONALD B.; PLIMPTON,STEVEN J.

    2000-01-20

    In this paper the authors present performance results from several parallel benchmarks and applications on a 400-node Linux cluster at Sandia National Laboratories. They compare the results on the Linux cluster to performance obtained on a traditional distributed-memory massively parallel processing machine, the Intel TeraFLOPS. They discuss the characteristics of these machines that influence the performance results and identify the key components of the system software that they feel are important to allow for scalability of commodity-based PC clusters to hundreds and possibly thousands of processors.

  15. Network Information System

    1996-05-01

    The Network Information System (NWIS) was initially implemented in May 1996 as a system in which computing devices could be recorded so that unique names could be generated for each device. Since then the system has grown to be an enterprise-wide information system which is integrated with other systems to provide the seamless flow of data through the enterprise. The system tracks data for two main entities: people and computing devices. For people, NWIS provides source information to the enterprise person data repository for select contractors and visitors; generates and tracks unique usernames and Unix user IDs for every individual granted cyber access; and tracks accounts for centrally managed computing resources, monitoring and controlling the reauthorization of the accounts in accordance with the DOE-mandated interval. For computing devices, NWIS generates unique names for all computing devices registered in the system; tracks, for each device, the manufacturer, make, model, Sandia property number, vendor serial number, operating system and operating system version, owner, device location, amount of memory, amount of disk space, and level of support provided for the machine; tracks the hardware address for network cards; tracks the IP address registered to computing devices along with the canonical and alias names for each address; updates the Dynamic Domain Name Service (DDNS) for canonical and alias names; creates the configuration files for DHCP to control the DHCP ranges and allow access to only properly registered computers; tracks and monitors classified security plans for stand-alone computers; tracks the configuration requirements used to set up the machine; tracks the roles people have on machines (system administrator, administrative access, user, etc.); allows system administrators to track changes made on the machine (both hardware and software); and generates an

  16. Distributed System Intruder Tools, Trinoo and Tribe Flood Network

    SciTech Connect

    Criscuolo, P.J.; Rathbun, T

    1999-12-21

    Trinoo and Tribe Flood Network (TFN) are new forms of denial of service (DoS) attacks. These attacks are designed to bring down a computer or network by overloading it with a large amount of network traffic using TCP, UDP, or ICMP. In the past, these attacks came from a single location and were easy to detect. Trinoo and TFN are distributed system intruder tools. These tools launch DoS attacks from multiple computer systems at a target system simultaneously. This makes the assault hard to detect and almost impossible to trace to the original attacker. Because these attacks can be launched from hundreds of computers under the command of a single attacker, they are far more dangerous than any DoS attack launched from a single location. These distributed tools have so far only been seen on Solaris and Linux machines, but there is no reason why they could not be modified for other UNIX machines. The target system can also be of any type because the attack is based on the TCP/IP architecture, not a flaw in any particular operating system (OS). CIAC considers the risks presented by these DoS tools to be high.

  17. Networked differential GPS system

    NASA Technical Reports Server (NTRS)

    Mueller, K. Tysen (Inventor); Loomis, Peter V. W. (Inventor); Kalafus, Rudolph M. (Inventor); Sheynblat, Leonid (Inventor)

    1994-01-01

    An embodiment of the present invention relates to a worldwide network of differential GPS reference stations (NDGPS) that continually track the entire GPS satellite constellation and provide interpolations of reference station corrections tailored for particular user locations between the reference stations. Each reference station takes real-time ionospheric measurements with codeless cross-correlating dual-frequency carrier GPS receivers and computes real-time orbit ephemerides independently. An absolute pseudorange correction (PRC) is defined for each satellite as a function of a particular user's location. A map of the function is constructed, with iso-PRC contours. The network measures the PRCs at a few points, the so-called reference stations, and constructs an iso-PRC map for each satellite. Corrections are interpolated for each user's site on a subscription basis. The data bandwidths are kept to a minimum by transmitting information that cannot be obtained directly by the user and by updating information by classes and according to how quickly each class of data goes stale given the realities of the GPS system. Sub-decimeter-level kinematic accuracy over a given area is accomplished by establishing a mini-fiducial network.
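
    The record does not give the interpolation formula, but the idea of tailoring a pseudorange correction (PRC) to a user location from surrounding reference stations can be sketched with a simple inverse-distance weighting; the station coordinates and correction values below are invented, and the weighting scheme is an assumption rather than the patent's iso-PRC construction.

        import math

        # Hypothetical reference stations: (x_km, y_km, PRC_metres) for one satellite.
        STATIONS = [(0.0, 0.0, 2.4), (300.0, 0.0, 2.9), (0.0, 300.0, 1.8)]

        def interpolate_prc(user_x, user_y, stations, power=2.0):
            """Inverse-distance-weighted PRC at the user's position (illustrative only)."""
            num = den = 0.0
            for sx, sy, prc in stations:
                d = math.hypot(user_x - sx, user_y - sy)
                if d < 1e-6:                 # user sits on a reference station
                    return prc
                w = 1.0 / d ** power
                num += w * prc
                den += w
            return num / den

        print(f"PRC at (120, 80) km: {interpolate_prc(120.0, 80.0, STATIONS):.2f} m")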

  18. Network Systems Administration Needs Assessment.

    ERIC Educational Resources Information Center

    Lexington Community Coll., KY. Office of Institutional Research.

    In spring 1996, Lexington Community College (LCC) in Kentucky, conducted a survey to gather information on employment trends and educational needs in the field of network systems administration (NSA). NSA duties involve the installation and administration of network operating systems, applications software, and networking infrastructure;…

  19. Network of Networks and the Climate System

    NASA Astrophysics Data System (ADS)

    Kurths, Jürgen; Boers, Niklas; Bookhagen, Bodo; Donges, Jonathan; Donner, Reik; Malik, Nishant; Marwan, Norbert; Stolbova, Veronika

    2013-04-01

    Network of networks is a new direction in complex systems science. One can find such networks in various fields, such as infrastructure (power grids etc.), human brain or Earth system. Basic properties and new characteristics, such as cross-degree, or cross-betweenness will be discussed. This allows us to quantify the structural role of single vertices or whole sub-networks with respect to the interaction of a pair of subnetworks on local, mesoscopic, and global topological scales. Next, we consider an inverse problem: Is there a backbone-like structure underlying the climate system? For this we propose a method to reconstruct and analyze a complex network from data generated by a spatio-temporal dynamical system. This technique is then applied to 3-dimensional data of the climate system. We interpret different heights in the atmosphere as different networks and the whole as a network of networks. This approach enables us to uncover relations to global circulation patterns in oceans and atmosphere. The global scale view on climate networks offers promising new perspectives for detecting dynamical structures based on nonlinear physical processes in the climate system. This concept is applied to Indian Monsoon data in order to characterize the regional occurrence of strong rain events and its impact on predictability. References: Arenas, A., A. Diaz-Guilera, J. Kurths, Y. Moreno, and C. Zhou, Phys. Reports 2008, 469, 93. Donges, J., Y. Zou, N. Marwan, and J. Kurths, Europhys. Lett. 2009, 87, 48007. Donner, R., Y. Zou, J. Donges, N. Marwan, and J. Kurths, Phys. Rev. E 2010, 81, 015101(R ). Mokhov, I. I., D. A. Smirnov, P. I. Nakonechny, S. S. Kozlenko, E. P. Seleznev, and J. Kurths, Geophys. Res. Lett. 2011, 38, L00F04. Malik, N., B. Bookhagen, N. Marwan, and J. Kurths, Climate Dynamics, 2012, 39, 971. Donges, J., H. Schultz, N. Marwan, Y. Zou, J. Kurths, Eur. J. Phys. B 2011, 84, 635-651. Donges, J., R. Donner, M. Trauth, N. Marwan, H.J. Schellnhuber, and J. Kurths

  20. Transitioning to Intel-based Linux Servers in the Payload Operations Integration Center

    NASA Technical Reports Server (NTRS)

    Guillebeau, P. L.

    2004-01-01

    The MSFC Payload Operations Integration Center (POIC) is the focal point for International Space Station (ISS) payload operations. The POIC contains the facilities, hardware, software and communication interface necessary to support payload operations. ISS ground system support for processing and display of real-time spacecraft telemetry and command data has been operational for several years. The hardware components were reaching end of life and vendor costs were increasing while ISS budgets were becoming severely constrained. Therefore it has been necessary to migrate the Unix portions of our ground systems to commodity-priced Intel-based Linux servers. The overall migration to Intel-based Linux servers in the control center involves changes to the hardware architecture including networks, data storage, and highly available resources. This paper will concentrate on the Linux migration implementation for the software portion of our ground system. The migration began with 3.5 million lines of code running on Unix platforms with separate servers for telemetry, command, payload information management systems, web, system control, remote server interface and databases. The Intel-based system is scheduled to be available for initial operational use by August 2004. This paper will address the Linux migration study approach, including the proof of concept, the criticality of customer buy-in, the need for a smooth transition while maintaining operations, and the importance of beginning with POSIX-compliant code. It will focus on the development approach, explaining the software lifecycle. Other aspects of development will be covered, including phased implementation, interim milestones, and metrics measurement and reporting mechanisms. This paper will also address the testing approach, covering all levels of testing including development, development integration, IV&V, user beta testing and acceptance testing. Test results, including performance numbers compared with Unix servers, will be included.

  1. Views of wireless network systems.

    SciTech Connect

    Young, William Frederick; Duggan, David Patrick

    2003-10-01

    Wireless networking is becoming a common element of industrial, corporate, and home networks. Commercial wireless network systems have become reliable, while the cost of these solutions has become more affordable than equivalent wired network solutions. The security risks of wireless systems are higher than those of wired systems and have not been studied in depth. This report starts to bring together information on wireless architectures and their connection to wired networks. We detail the information contained in the many different views of a wireless network system. The method of using multiple views of a system to assist in the determination of vulnerabilities comes from the Information Design Assurance Red Team (IDART(TM)) Methodology of system analysis developed at Sandia National Laboratories.

  2. [Network structures in biological systems].

    PubMed

    Oleskin, A V

    2013-01-01

    Network structures (networks) that have been extensively studied in the humanities are characterized by cohesion, a lack of a central control unit, and predominantly fractal properties. They are contrasted with structures that contain a single centre (hierarchies) as well as with those whose elements predominantly compete with one another (market-type structures). As far as biological systems are concerned, their network structures can be subdivided into a number of types involving different organizational mechanisms. Network organization is characteristic of various structural levels of biological systems ranging from single cells to integrated societies. These networks can be classified into two main subgroups: (i) flat (leaderless) network structures typical of systems that are composed of uniform elements and represent modular organisms or at least possess manifest integral properties and (ii) three-dimensional, partly hierarchical structures characterized by significant individual and/or intergroup (intercaste) differences between their elements. All network structures include an element that performs structural, protective, and communication-promoting functions. By analogy to cell structures, this element is denoted as the matrix of a network structure. The matrix includes a material and an immaterial component. The material component comprises various structures that belong to the whole structure and not to any of its elements per se. The immaterial (ideal) component of the matrix includes social norms and rules regulating network elements' behavior. These behavioral rules can be described in terms of algorithms. Algorithmization enables modeling the behavior of various network structures, particularly of neuron networks and their artificial analogs.

  3. Language Networks as Complex Systems

    ERIC Educational Resources Information Center

    Lee, Max Kueiming; Ou, Sheue-Jen

    2008-01-01

    Starting in the late eighties, with a growing discontent with analytical methods in science and the growing power of computers, researchers began to study complex systems such as living organisms, evolution of genes, biological systems, brain neural networks, epidemics, ecology, economy, social networks, etc. In the early nineties, the research…

  4. Optimizing Performance on Linux Clusters Using Advanced Communication Protocols: Achieving Over 10 Teraflops on a 8.6 Teraflops Linpack-Rated Linux Cluster

    SciTech Connect

    Krishnan, Manoj Kumar; Nieplocha, Jarek

    2005-04-26

    Advancements in high-performance networks (Quadrics, Infiniband or Myrinet) continue to improve the efficiency of modern clusters. However, the average application efficiency remains only a small fraction of the system's peak efficiency. This paper describes techniques for optimizing application performance on Linux clusters using Remote Memory Access communication protocols. The effectiveness of these optimizations is presented in the context of an application kernel, dense matrix multiplication. The result was over 10 teraflops achieved on an HP Linux cluster whose LINPACK performance is measured at 8.6 teraflops.
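
    As a rough, hedged sketch of the one-sided (Remote Memory Access) communication style the paper optimizes — not the authors' implementation — the mpi4py fragment below exposes each rank's matrix block in an RMA window and fetches a neighbour's block with Get between fences; it assumes mpi4py and NumPy are installed and the script is launched under mpirun.

        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        n = 64
        # Each rank owns one block of the distributed matrix and exposes it via RMA.
        block = np.full((n, n), float(rank), dtype="d")
        win = MPI.Win.Create(block, comm=comm)

        # One-sided Get of the neighbour's block: no matching send on the target side.
        remote = np.empty((n, n), dtype="d")
        target = (rank + 1) % size

        win.Fence()
        win.Get([remote, MPI.DOUBLE], target)
        win.Fence()

        partial = block @ remote          # local work on the fetched block
        print(f"rank {rank}: fetched block from rank {target}, sum={partial.sum():.0f}")
        win.Free()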

  5. The APS control system network

    SciTech Connect

    Sidorowicz, K.V.; McDowell, W.P.

    1995-12-31

    The APS accelerator control system is a distributed system consisting of operator interfaces, a network, and computer-controlled interfaces to hardware. This implementation of a control system has come to be called the "Standard Model." The operator interface is a UNIX-based workstation with an X-windows graphical user interface. The workstation may be located at any point on the facility network and maintain full functionality. The function of the network is to provide a generalized communication path between the host computers, operator workstations, input/output crates, and other hardware that comprise the control system. The crate or input/output controller (IOC) provides direct control and input/output interfaces for each accelerator subsystem. The network is an integral part of all modern control systems, and network performance will determine many characteristics of a control system. This paper will describe the overall APS network and examine the APS control system network in detail. Metrics are provided on the performance of the system under various conditions.

  6. Developing and Benchmarking Native Linux Applications on Android

    NASA Astrophysics Data System (ADS)

    Batyuk, Leonid; Schmidt, Aubrey-Derrick; Schmidt, Hans-Gunther; Camtepe, Ahmet; Albayrak, Sahin

    Smartphones are becoming increasingly popular as more and more smartphone platforms emerge. Special attention has been gained by the open source platform Android, which was presented by the Open Handset Alliance (OHA), hosting members like Google, Motorola, and HTC. Android uses a Linux kernel and a stripped-down userland with a custom Java VM set on top. The resulting system joins the advantages of both environments, while third parties are intended to develop only Java applications at the moment.

  7. Proposal of Network-Based Multilingual Space Dictionary Database System

    NASA Astrophysics Data System (ADS)

    Yoshimitsu, T.; Hashimoto, T.; Ninomiya, K.

    2002-01-01

    The International Academy of Astronautics (IAA) is now constructing a multilingual dictionary database system of space-friendly terms. The database consists of a lexicon and dictionaries of multiple languages. The lexicon is a table which relates corresponding terminology in different languages. Each language has a dictionary which contains terms and their definitions. The database assumes use on the internet: updating and searching the terms and definitions are conducted via the network, and the database is maintained through international cooperation. New words arise day by day, so the ability to easily input new words and their definitions into the database is required for the long-term success of the system. The main key of the database is an English term which is approved at meetings held once or twice with the working group members. Each language has at least one working group member who is responsible for assigning the corresponding term and definition in his/her native language. Inputting and updating terms and their definitions can be conducted via the internet from the office of each member, which may be located in his/her native country. The system is built on a freely distributed database server program running on the Linux operating system, which will be installed at the head office of IAA. Once it is installed, it will be open to all IAA members, who can search the terms via the internet. Currently the authors are constructing the prototype system which is described in this paper.
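
    A tiny sketch of one way such a lexicon-plus-dictionaries design could be modelled relationally is given below; the schema, table names and sample rows are assumptions for illustration (using SQLite via Python), not the IAA prototype.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        cur = conn.cursor()
        # Lexicon row = one concept keyed by its approved English term;
        # each language's dictionary stores its own term and definition for the concept.
        cur.executescript("""
        CREATE TABLE lexicon (
            concept_id INTEGER PRIMARY KEY,
            english_term TEXT UNIQUE NOT NULL
        );
        CREATE TABLE dictionary (
            concept_id INTEGER REFERENCES lexicon(concept_id),
            lang TEXT NOT NULL,
            term TEXT NOT NULL,
            definition TEXT,
            PRIMARY KEY (concept_id, lang)
        );
        """)
        cur.execute("INSERT INTO lexicon (english_term) VALUES ('attitude control')")
        cid = cur.lastrowid
        cur.executemany(
            "INSERT INTO dictionary VALUES (?, ?, ?, ?)",
            [(cid, "en", "attitude control", "Control of a spacecraft's orientation."),
             (cid, "fr", "controle d'attitude", "Maintien de l'orientation d'un engin spatial.")])
        # Look up the corresponding term in another language via the lexicon key.
        row = cur.execute(
            "SELECT term FROM dictionary WHERE concept_id=? AND lang='fr'", (cid,)
        ).fetchone()
        print(row[0])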

  8. Computation and Communication Evaluation of an Authentication Mechanism for Time-Triggered Networked Control Systems.

    PubMed

    Martins, Goncalo; Moondra, Arul; Dubey, Abhishek; Bhattacharjee, Anirban; Koutsoukos, Xenofon D

    2016-07-25

    In modern networked control applications, confidentiality and integrity are important features to address in order to prevent against attacks. Moreover, network control systems are a fundamental part of the communication components of current cyber-physical systems (e.g., automotive communications). Many networked control systems employ Time-Triggered (TT) architectures that provide mechanisms enabling the exchange of precise and synchronous messages. TT systems have computation and communication constraints, and with the aim to enable secure communications in the network, it is important to evaluate the computational and communication overhead of implementing secure communication mechanisms. This paper presents a comprehensive analysis and evaluation of the effects of adding a Hash-based Message Authentication (HMAC) to TT networked control systems. The contributions of the paper include (1) the analysis and experimental validation of the communication overhead, as well as a scalability analysis that utilizes the experimental result for both wired and wireless platforms and (2) an experimental evaluation of the computational overhead of HMAC based on a kernel-level Linux implementation. An automotive application is used as an example, and the results show that it is feasible to implement a secure communication mechanism without interfering with the existing automotive controller execution times. The methods and results of the paper can be used for evaluating the performance impact of security mechanisms and, thus, for the design of secure wired and wireless TT networked control systems.
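
    The per-message cost being evaluated is essentially that of computing and checking a keyed hash on each time-triggered frame. A minimal Python illustration of an HMAC tag follows; the key, the truncation to 8 bytes, and the message layout are assumptions for illustration, not the paper's parameters.

        import hmac
        import hashlib

        KEY = b"shared-secret-key"           # assumed pre-shared key
        TAG_LEN = 8                          # assumed truncated tag length in bytes

        def protect(frame: bytes) -> bytes:
            """Append a truncated HMAC-SHA256 tag to a time-triggered frame."""
            tag = hmac.new(KEY, frame, hashlib.sha256).digest()[:TAG_LEN]
            return frame + tag

        def verify(message: bytes) -> bytes:
            """Check the tag in constant time; raise if the frame was tampered with."""
            frame, tag = message[:-TAG_LEN], message[-TAG_LEN:]
            expected = hmac.new(KEY, frame, hashlib.sha256).digest()[:TAG_LEN]
            if not hmac.compare_digest(tag, expected):
                raise ValueError("authentication failed")
            return frame

        msg = protect(b"\x01\x02 speed=120")
        print(verify(msg))                   # returns the original frame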

  9. Computation and Communication Evaluation of an Authentication Mechanism for Time-Triggered Networked Control Systems.

    PubMed

    Martins, Goncalo; Moondra, Arul; Dubey, Abhishek; Bhattacharjee, Anirban; Koutsoukos, Xenofon D

    2016-01-01

    In modern networked control applications, confidentiality and integrity are important features to address in order to prevent against attacks. Moreover, network control systems are a fundamental part of the communication components of current cyber-physical systems (e.g., automotive communications). Many networked control systems employ Time-Triggered (TT) architectures that provide mechanisms enabling the exchange of precise and synchronous messages. TT systems have computation and communication constraints, and with the aim to enable secure communications in the network, it is important to evaluate the computational and communication overhead of implementing secure communication mechanisms. This paper presents a comprehensive analysis and evaluation of the effects of adding a Hash-based Message Authentication (HMAC) to TT networked control systems. The contributions of the paper include (1) the analysis and experimental validation of the communication overhead, as well as a scalability analysis that utilizes the experimental result for both wired and wireless platforms and (2) an experimental evaluation of the computational overhead of HMAC based on a kernel-level Linux implementation. An automotive application is used as an example, and the results show that it is feasible to implement a secure communication mechanism without interfering with the existing automotive controller execution times. The methods and results of the paper can be used for evaluating the performance impact of security mechanisms and, thus, for the design of secure wired and wireless TT networked control systems. PMID:27463718

  10. Computation and Communication Evaluation of an Authentication Mechanism for Time-Triggered Networked Control Systems

    PubMed Central

    Martins, Goncalo; Moondra, Arul; Dubey, Abhishek; Bhattacharjee, Anirban; Koutsoukos, Xenofon D.

    2016-01-01

    In modern networked control applications, confidentiality and integrity are important features to address in order to prevent against attacks. Moreover, network control systems are a fundamental part of the communication components of current cyber-physical systems (e.g., automotive communications). Many networked control systems employ Time-Triggered (TT) architectures that provide mechanisms enabling the exchange of precise and synchronous messages. TT systems have computation and communication constraints, and with the aim to enable secure communications in the network, it is important to evaluate the computational and communication overhead of implementing secure communication mechanisms. This paper presents a comprehensive analysis and evaluation of the effects of adding a Hash-based Message Authentication (HMAC) to TT networked control systems. The contributions of the paper include (1) the analysis and experimental validation of the communication overhead, as well as a scalability analysis that utilizes the experimental result for both wired and wireless platforms and (2) an experimental evaluation of the computational overhead of HMAC based on a kernel-level Linux implementation. An automotive application is used as an example, and the results show that it is feasible to implement a secure communication mechanism without interfering with the existing automotive controller execution times. The methods and results of the paper can be used for evaluating the performance impact of security mechanisms and, thus, for the design of secure wired and wireless TT networked control systems. PMID:27463718

  11. Multilevel Complex Networks and Systems

    NASA Astrophysics Data System (ADS)

    Caldarelli, Guido

    2014-03-01

    Network theory has been a powerful tool to model isolated complex systems. However, the classical approach does not take into account the interactions often present among different systems. Hence, the scientific community is nowadays concentrating its efforts on the foundations of new mathematical tools for understanding what happens when multiple networks interact. The case of economic and financial networks represents a paramount example of multilevel networks. In the case of trade among countries, the different levels can be described by the different granularity of the trading relations. Indeed, we now have data from the scale of consumers to that of the country level. In the case of financial institutions, we have a variety of levels at the same scale. For example, one bank can appear in the interbank network, the ownership network, and the CDS network, and the same institution can take part in all of them. In both cases the systemically important vertices need to be determined by different procedures of centrality definition and community detection. In this talk I will present some specific cases of study related to these topics and present the regularities found. Acknowledged support from EU FET Project ``Multiplex'' 317532.

  12. Climate tools in mainstream Linux distributions

    NASA Astrophysics Data System (ADS)

    McKinstry, Alastair

    2015-04-01

    Debian/meterology is a project to integrate climate tools and analysis software into the mainstream Debian/Ubuntu Linux distributions. This work describes lessons learnt and recommends practices for scientific software to be adopted and maintained in OS distributions. In addition to standard analysis tools (cdo, grads, ferret, metview, ncl, etc.), software used by the Earth System Grid Federation was chosen for integration, to enable ESGF portals to be built on this base; however, exposing scientific codes via web APIs exposes security weaknesses that are normally ignorable. How tools are hardened, and what changes are required to handle security upgrades, are described. Secondly, enabling libraries and components (e.g. Python modules) to be integrated requires planning by their writers: it is not sufficient to assume users can upgrade their code when you make incompatible changes. Here, practices are recommended to enable upgrades and co-installability of C, C++, Fortran and Python codes. Finally, software packages such as NetCDF and HDF5 can be built in multiple configurations. Tools may then expect incompatible versions of these libraries (e.g. serial and parallel) to be simultaneously available; how this was solved in Debian using "pkg-config" and shared library interfaces is described, and best practices for software writers to enable this are summarised.

  13. A package of Linux scripts for the parallelization of Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Badal, Andreu; Sempau, Josep

    2006-09-01

    also provided. Typical running time: The execution time of each script largely depends on the number of computers that are used, the actions that are to be performed and, to a lesser extent, on the network connexion bandwidth. Unusual features of the program: Any computer on the Internet with a Secure Shell client/server program installed can be used as a node of a virtual computer cluster for parallel calculations with the sequential source code. The simplicity of the parallelization scheme makes the use of this package a straightforward task, which does not require installing any additional libraries. Program summary 2. Title of program: seedsMLCG. Catalogue identifier: ADYE_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYE_v1_0 Program obtainable from: CPC Program Library, Queen's University of Belfast, Northern Ireland. Computer for which the program is designed and others in which it is operable: Any computer with a FORTRAN compiler. Operating systems under which the program has been tested: Linux (RedHat 8.0, SuSe 8.1, Debian Woody 3.1), MS Windows (2000, XP). Compilers: GNU FORTRAN g77 (Linux and Windows); g95 (Linux); Intel Fortran Compiler 7.1 (Linux); Compaq Visual Fortran 6.1 (Windows). Programming language used: FORTRAN 77. No. of bits in a word: 32. Memory required to execute with typical data: 500 kilobytes. No. of lines in distributed program, including test data, etc.: 492. No. of bytes in distributed program, including test data, etc.: 5582. Distribution format: tar.gz. Nature of the physical problem: Statistically independent results from different runs of a Monte Carlo code can be obtained using uncorrelated sequences of random numbers on each execution. Multiplicative linear congruential generators (MLCG), or other generators that are based on them such as RANECU, can be adapted to produce these sequences. Method of solution: For a given MLCG, the presented program calculates initialization values that produce disjoint, consecutive sequences of pseudorandom numbers.
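
    The jump-ahead idea behind seedsMLCG — skipping a multiplicative linear congruential generator forward by a fixed stride so that each parallel run gets a disjoint, consecutive block of the sequence — can be sketched in a few lines of Python. The constants shown are one of the RANECU MLCG components (a = 40014, m = 2147483563) used purely for illustration; the actual FORTRAN 77 program is not reproduced here.

        A, M = 40014, 2147483563      # one of the two RANECU MLCG components

        def jump_ahead(seed, k, a=A, m=M):
            """Seed after k steps of x -> a*x mod m, computed as (a^k mod m)*x mod m."""
            return (pow(a, k, m) * seed) % m

        def disjoint_seeds(seed0, n_runs, samples_per_run):
            """Initial seeds giving each run its own consecutive block of the stream."""
            return [jump_ahead(seed0, i * samples_per_run) for i in range(n_runs)]

        seeds = disjoint_seeds(seed0=12345, n_runs=4, samples_per_run=10**9)
        print(seeds)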

  14. Promoting Social Network Awareness: A Social Network Monitoring System

    ERIC Educational Resources Information Center

    Cadima, Rita; Ferreira, Carlos; Monguet, Josep; Ojeda, Jordi; Fernandez, Joaquin

    2010-01-01

    To increase communication and collaboration opportunities, members of a community must be aware of the social networks that exist within that community. This paper describes a social network monitoring system--the KIWI system--that enables users to register their interactions and visualize their social networks. The system was implemented in a…

  15. The Design of NetSecLab: A Small Competition-Based Network Security Lab

    ERIC Educational Resources Information Center

    Lee, C. P.; Uluagac, A. S.; Fairbanks, K. D.; Copeland, J. A.

    2011-01-01

    This paper describes a competition-style of exercise to teach system and network security and to reinforce themes taught in class. The exercise, called NetSecLab, is conducted on a closed network with student-formed teams, each with their own Linux system to defend and from which to launch attacks. Students are expected to learn how to: 1) install…

  16. Lightweight Corefile Library for Linux

    SciTech Connect

    2007-09-22

    Liblwcf attempts to generate stack traces from failing processes, as opposed to dumping full corefiles. This can be beneficial when running large parallel applications, where dumping a full memory image could flood network filesystem servers.

  17. Network analyses in systems pharmacology

    PubMed Central

    Berger, Seth I.; Iyengar, Ravi

    2009-01-01

    Systems pharmacology is an emerging area of pharmacology which utilizes network analysis of drug action as one of its approaches. By considering drug actions and side effects in the context of the regulatory networks within which the drug targets and disease gene products function, network analysis promises to greatly increase our knowledge of the mechanisms underlying the multiple actions of drugs. Systems pharmacology can provide new approaches for drug discovery for complex diseases. The integrated approach used in systems pharmacology can allow for drug action to be considered in the context of the whole genome. Network-based studies are becoming an increasingly important tool in understanding the relationships between drug action and disease susceptibility genes. This review discusses how analysis of biological networks has contributed to the genesis of systems pharmacology and how these studies have improved global understanding of drug targets, suggested new targets and approaches for therapeutics, and provided a deeper understanding of the effects of drugs. Taken together, these types of analyses can lead to new therapeutic options while improving the safety and efficacy of existing medications. Contact: ravi.iyengar@mssm.edu PMID:19648136

  18. Network operating system focus technology

    NASA Technical Reports Server (NTRS)

    1985-01-01

    An activity structured to provide specific design requirements and specifications for the Space Station Data Management System (DMS) Network Operating System (NOS) is outlined. Examples are given of the types of supporting studies and implementation tasks presently underway to realize a DMS test bed capability to develop hands-on understanding of NOS requirements as driven by actual subsystem test beds participating in the overall Johnson Space Center test bed program. Classical operating system elements and principal NOS functions are listed.

  19. Linux Incident Response Volatile Data Analysis Framework

    ERIC Educational Resources Information Center

    McFadden, Matthew

    2013-01-01

    Cyber incident response is an emphasized subject area in cybersecurity in information technology with increased need for the protection of data. Due to ongoing threats, cybersecurity imposes many challenges and requires new investigative response techniques. In this study a Linux Incident Response Framework is designed for collecting volatile data…

  20. Image Capture and Display Based on Embedded Linux

    NASA Astrophysics Data System (ADS)

    Weigong, Zhang; Suran, Di; Yongxiang, Zhang; Liming, Li

    To meet the requirement of building a highly reliable communication system, SpaceWire was selected for the integrated electronic system, and its performance needed to be tested. As part of the testing work, the goal of this paper is to transmit image data from a CMOS camera through SpaceWire and display real-time images on a graphical user interface built with Qt on an embedded Linux and ARM development platform. A point-to-point transmission mode was chosen; the results showed that the two communication ends received consistent images in succession. This suggests that SpaceWire can transmit the data reliably.

  1. Millisecond accuracy video display using OpenGL under Linux.

    PubMed

    Stewart, Neil

    2006-02-01

    To measure people's reaction times to the nearest millisecond, it is necessary to know exactly when a stimulus is displayed. This article describes how to display stimuli with millisecond accuracy on a normal CRT monitor, using a PC running Linux. A simple C program is presented to illustrate how this may be done within X Windows using the OpenGL rendering system. A test of this system is reported that demonstrates that stimuli may be consistently displayed with millisecond accuracy. An algorithm is presented that allows the exact time of stimulus presentation to be deduced, even if there are relatively large errors in measuring the display time.
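
    One way to see the spirit of the timing check described — deducing whether each stimulus really appeared on the intended video frame from the measured buffer-swap times and the monitor's refresh period — is the small, hypothetical Python sketch below; the timestamps and refresh rate are invented, and the article's actual C/OpenGL code is not reproduced.

        REFRESH_HZ = 100.0
        FRAME_MS = 1000.0 / REFRESH_HZ            # nominal frame period, here 10 ms

        def frames_elapsed(swap_times_ms):
            """Classify each interval between buffer swaps as 1, 2, ... refreshes."""
            report = []
            for t0, t1 in zip(swap_times_ms, swap_times_ms[1:]):
                n = round((t1 - t0) / FRAME_MS)    # how many refreshes actually passed
                missed = n - 1
                report.append((t1, n, missed))
            return report

        # Invented swap timestamps: the third interval overran and missed one refresh.
        swaps = [0.1, 10.2, 20.1, 40.3, 50.2]
        for t, n, missed in frames_elapsed(swaps):
            status = "ok" if missed == 0 else f"missed {missed} frame(s)"
            print(f"swap at {t:6.1f} ms covered {n} refresh period(s): {status}")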

  2. Networked control of microgrid system of systems

    NASA Astrophysics Data System (ADS)

    Mahmoud, Magdi S.; Rahman, Mohamed Saif Ur; AL-Sunni, Fouad M.

    2016-08-01

    The microgrid has made its mark in distributed generation and has attracted widespread research. However, microgrid is a complex system which needs to be viewed from an intelligent system of systems perspective. In this paper, a network control system of systems is designed for the islanded microgrid system consisting of three distributed generation units as three subsystems supplying a load. The controller stabilises the microgrid system in the presence of communication infractions such as packet dropouts and delays. Simulation results are included to elucidate the effectiveness of the proposed control strategy.

  3. Architecting Communication Network of Networks for Space System of Systems

    NASA Technical Reports Server (NTRS)

    Bhasin, Kul B.; Hayden, Jeffrey L.

    2008-01-01

    The National Aeronautics and Space Administration (NASA) and the Department of Defense (DoD) are planning Space System of Systems (SoS) to address the new challenges of space exploration, defense, communications, navigation, Earth observation, and science. In addition, these complex systems must provide interoperability, enhanced reliability, common interfaces, dynamic operations, and autonomy in system management. Both NASA and the DoD have chosen to meet the new demands with high data rate communication systems and space Internet technologies that bring Internet Protocols (IP), routers, servers, software, and interfaces to space networks to enable as much autonomous operation of those networks as possible. These technologies reduce the cost of operations and, with higher bandwidths, support the expected voice, video, and data needed to coordinate activities at each stage of an exploration mission. In this paper, we discuss, in a generic fashion, how the architectural approaches and processes are being developed and used for defining a hypothetical communication and navigation networks infrastructure to support lunar exploration. Examples are given of the products generated by the architecture development process.

  4. Network command processing system overview

    NASA Technical Reports Server (NTRS)

    Nam, Yon-Woo; Murphy, Lisa D.

    1993-01-01

    The Network Command Processing System (NCPS) developed for the National Aeronautics and Space Administration (NASA) Ground Network (GN) stations is a spacecraft command system utilizing a MULTIBUS I/68030 microprocessor. This system was developed and implemented at ground stations worldwide to provide a Project Operations Control Center (POCC) with command capability for support of spacecraft operations such as the LANDSAT, Shuttle, Tracking and Data Relay Satellite, and Nimbus-7. The NCPS consolidates multiple modulation schemes for supporting various manned/unmanned orbital platforms. The NCPS interacts with the POCC and a local operator to process configuration requests, generate modulated uplink sequences, and inform users of the ground command link status. This paper presents the system functional description, hardware description, and the software design.

  5. Systems engineering technology for networks

    NASA Technical Reports Server (NTRS)

    1994-01-01

    The report summarizes research pursued within the Systems Engineering Design Laboratory at Virginia Polytechnic Institute and State University between May 16, 1993 and January 31, 1994. The project was proposed in cooperation with the Computational Science and Engineering Research Center at Howard University. Its purpose was to investigate emerging systems engineering tools and their applicability in analyzing the NASA Network Control Center (NCC) on the basis of metrics and measures.

  6. The LILARTI neural network system

    SciTech Connect

    Allen, J.D. Jr.; Schell, F.M.; Dodd, C.V.

    1992-10-01

    The material of this Technical Memorandum is intended to provide the reader with conceptual and technical background information on the LILARTI neural network system in detail sufficient to confer an understanding of the LILARTI method as it is presently applied and to facilitate application of the method to problems beyond the scope of this document. Of particular importance in this regard are the descriptive sections and the Appendices, which include operating instructions, partial listings of program output and data files, and network construction information.

  7. Berkeley Lab Checkpoint/Restart for Linux

    2003-11-15

    This package implements system-level checkpointing of scientific applications running on Linux clusters in a manner suitable for implementing preemption, migration and fault recovery by a batch scheduler. The design includes documented interfaces for a cooperating application or library to implement extensions to the checkpoint system, such as consistent checkpointing of distributed MPI applications. Using this package with an appropriate MPI implementation, the vast majority of scientific applications which use MPI for communication are checkpointable without any modifications to the application source code. Extending the VMAdump code used in the bproc system, the BLCR kernel modules provide three additional features necessary for useful system-level checkpointing of scientific applications (installation of bproc is not required to use BLCR). First, this package provides the bookkeeping and coordination required for checkpointing and restoring multi-threaded and multi-process applications running on a single node. Secondly, this package provides a system call interface allowing checkpoints to be requested by any authorized process, such as a batch scheduler. Thirdly, this package provides a system call interface allowing applications and/or application libraries to extend the checkpoint capabilities in user space, for instance to provide coordination of checkpoints of distributed MPI applications. The "libcr" library in this package implements a wrapper around the system call interface exported by the kernel modules, and maintains bookkeeping to allow registration of callbacks by runtime libraries. This library also provides the necessary thread-safety and signal-safety mechanisms. Thus, this library provides the means for applications and run-time libraries, such as MPI, to register callback functions to be run when a checkpoint is taken or when restarting from one. This library may also be used as an LD_PRELOAD to enable checkpointing of applications with development

  8. Network file-storage system

    SciTech Connect

    Collins, W.W.; Devaney, M.J.; Willbanks, E.W.

    1982-01-01

    The Common File System (CFS) is a file management and mass storage system for the Los Alamos National Laboratory's computer network. The CFS is organized as a hierarchical storage system: active files are stored on fast-access storage devices, larger, less active files are stored on slower, less expensive devices, and archival files are stored offline. Files are automatically moved between the various classes of storage by a file migration program that analyzes file activity, file size and storage device capabilities. This has resulted in a cost-effective system that provides both fast access and large data storage capability (over five trillion bits currently stored).

  9. The automated ground network system

    NASA Technical Reports Server (NTRS)

    Smith, Miles T.; Militch, Peter N.

    1993-01-01

    The primary goal of the Automated Ground Network System (AGNS) project is to reduce Ground Network (GN) station life-cycle costs. To accomplish this goal, the AGNS project will employ an object-oriented approach to develop a new infrastructure that will permit continuous application of new technologies and methodologies to the Ground Network's class of problems. The AGNS project is a Total Quality (TQ) project. Through use of an open collaborative development environment, developers and users will have equal input into the end-to-end design and development process. This will permit direct user input and feedback and will enable rapid prototyping for requirements clarification. This paper describes the AGNS objectives, operations concept, and proposed design.

  10. Structural Modeling of Network Systems in Citation Analysis.

    ERIC Educational Resources Information Center

    Yaru, Dang

    1997-01-01

    Describes construction of citation network systems and some subsystems (time sequence network, cocitation network, couple network). Establishes structural modeling of these systems by means of system engineering. Explains and analyzes citation network systems. Includes graphs and charts. (JAK)

  11. THE IMPLEMENTATION OF THE STAR DATA ACQUISITION SYSTEM USING A MYRINET NETWORK.

    SciTech Connect

    LANDGRAF,J.M.; ADLER,C.; LEVINE,M.J.; LJUBICIC,A.,JR.; ET AL

    2000-10-15

    We will present results from the first year of operation of the STAR DAQ system using a Myrinet network. STAR is one of four experiments to have been commissioned at the Relativistic Heavy Ion Collider (RHIC) at BNL during 1999 and 2000. The DAQ system is fully integrated with a Level 3 Trigger. The combined system currently consists of 33 Myrinet nodes which run in a mixed environment of MVME processors running VxWorks, DEC Alpha workstations running Linux, and SUN Solaris machines. The network will eventually contain up to 150 nodes for the expected final size of the L3 processor farm. Myrinet is a switched, high speed, low latency network produced by Myricom and available for PCI and PMC on a wide variety of platforms. The STAR DAQ system uses the Myrinet network for messaging, L3 processing, and event building. After the events are built, they are sent via Gigabit Ethernet to the RHIC computing facility and stored to tape using HPSS. The combined DAQ/L3 system processes 160 MB events at 100 Hz, compresses each event to approximately 20 MB, and performs tracking on the events to implement a physics-based filter to reduce the data storage rate to 20 MB/sec.

  12. The AMSC network control system

    NASA Technical Reports Server (NTRS)

    Garner, William B.

    1990-01-01

    The American Mobile Satellite Corporation (AMSC) is going to construct, launch, and operate a satellite system in order to provide mobile satellite services to the United States. AMSC is going to build, own, and operate a Network Control System (NCS) for managing the communications usage of the satellites, and to control circuit switched access between mobile earth terminals and feeder-link earth stations. An overview of the major NCS functional and performance requirements, the control system physical architecture, and the logical architecture is provided.

  13. The AMSC network control system

    NASA Astrophysics Data System (ADS)

    Garner, William B.

    The American Mobile Satellite Corporation (AMSC) is going to construct, launch, and operate a satellite system in order to provide mobile satellite services to the United States. AMSC is going to build, own, and operate a Network Control System (NCS) for managing the communications usage of the satellites, and to control circuit switched access between mobile earth terminals and feeder-link earth stations. An overview of the major NCS functional and performance requirements, the control system physical architecture, and the logical architecture is provided.

  14. [Making a low cost IPSec router on Linux and the assessment for practical use].

    PubMed

    Amiki, M; Horio, M

    2001-09-01

    We installed Linux and FreeS/WAN on a PC/AT compatible machine to make an IPSec router. We measured ping and ftp times both within the university and between the university and the external network (the Internet). Between the university and the external network, there were no differences. We therefore concluded that CPU load is not significant on low-speed networks, because packets exchanged via the Internet are small, or because the compression used by the VPN is more effective than the cost of encoding and decoding. On the other hand, within the university, the IPSec router's performance dropped by about 20-30% compared with normal IP communication, but this is not a serious problem for practical use. Recently, VPN appliances have become cheaper, but they do not provide sufficient functionality to create a full VPN environment. Therefore, if one wants a full VPN environment at a low cost, we believe one should select a VPN router on Linux.

  15. Network Centrality of Metro Systems

    PubMed Central

    Derrible, Sybil

    2012-01-01

    Whilst being hailed as the remedy to the world’s ills, cities will need to adapt in the 21st century. In particular, the role of public transport is likely to increase significantly, and new methods and techniques to better plan transit systems are sorely needed. This paper examines one fundamental aspect of transit: network centrality. By applying the notion of betweenness centrality to 28 worldwide metro systems, the main goal of this paper is to study the emergence of global trends in the evolution of centrality with network size and examine several individual systems in more detail. Betweenness was notably found to consistently become more evenly distributed with size (i.e. no “winner takes all”) unlike other complex network properties. Two distinct regimes were also observed that are representative of their structure. Moreover, the share of betweenness was found to decrease in a power law with size (with exponent 1 for the average node), but the share of the most central nodes decreases much more slowly than that of the least central nodes (0.87 vs. 2.48). Finally, the betweenness of individual stations in several systems was examined, which can be useful to locate stations where passengers can be redistributed to relieve pressure from overcrowded stations. Overall, this study offers significant insights that can help planners in their task to design the systems of tomorrow, and similar undertakings can easily be imagined for other urban infrastructure systems (e.g., electricity grid, water/wastewater system, etc.) to develop more sustainable cities. PMID:22792373
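
    For readers unfamiliar with the measure, the betweenness centrality of a station counts the share of shortest paths between all origin-destination pairs that pass through it. A toy illustration on an invented five-station trunk-plus-branch network, using the networkx library rather than the 28 real metro datasets, is sketched below.

        import networkx as nx

        # Invented toy metro: a trunk line A-B-C-D with a branch E joining at B.
        G = nx.Graph([("A", "B"), ("B", "C"), ("C", "D"), ("B", "E")])

        bc = nx.betweenness_centrality(G, normalized=True)
        total = sum(bc.values())
        for station, value in sorted(bc.items(), key=lambda kv: -kv[1]):
            share = value / total if total else 0.0
            print(f"{station}: betweenness={value:.3f}  share={share:.2%}")
        # The transfer-like node B carries most shortest paths, mirroring the idea
        # that a few central stations concentrate flows in small networks.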

  16. Berkeley Lab Checkpoint/Restart (BLCR) for Linux Clusters

    SciTech Connect

    Hargrove, Paul H.; Duell, Jason C.

    2006-07-26

    This article describes the motivation, design and implementation of Berkeley Lab Checkpoint/Restart (BLCR), a system-level checkpoint/restart implementation for Linux clusters that targets the space of typical High Performance Computing applications, including MPI. Application-level solutions, including both checkpointing and fault-tolerant algorithms, are recognized as more time and space efficient than system-level checkpoints, which cannot make use of any application-specific knowledge. However, system-level checkpointing allows for preemption, making it suitable for responding to "fault precursors" (for instance, elevated error rates from ECC memory or network CRCs, or elevated temperature from sensors). Preemption can also increase the efficiency of batch scheduling; for instance reducing idle cycles (by allowing for shutdown without any queue draining period or reallocation of resources to eliminate idle nodes when better fitting jobs are queued), and reducing the average queued time (by limiting large jobs to running during off-peak hours, without the need to limit the length of such jobs). Each of these potential uses makes BLCR a valuable tool for efficient resource management in Linux clusters.

  17. PhyLIS: A Simple GNU/Linux Distribution for Phylogenetics and Phyloinformatics

    PubMed Central

    Thomson, Robert C.

    2009-01-01

    PhyLIS is a free GNU/Linux distribution that is designed to provide a simple, standardized platform for phylogenetic and phyloinformatic analysis. The operating system incorporates most commonly used phylogenetic software, which has been pre-compiled and pre-configured, allowing for straightforward application of phylogenetic methods and development of phyloinformatic pipelines in a stable Linux environment. The software is distributed as a live CD and can be installed directly or run from the CD without making changes to the computer. PhyLIS is available for free at http://www.eve.ucdavis.edu/rcthomson/phylis/. PMID:19812729

  18. PhyLIS: a simple GNU/Linux distribution for phylogenetics and phyloinformatics.

    PubMed

    Thomson, Robert C

    2009-01-01

    PhyLIS is a free GNU/Linux distribution that is designed to provide a simple, standardized platform for phylogenetic and phyloinformatic analysis. The operating system incorporates most commonly used phylogenetic software, which has been pre-compiled and pre-configured, allowing for straightforward application of phylogenetic methods and development of phyloinformatic pipelines in a stable Linux environment. The software is distributed as a live CD and can be installed directly or run from the CD without making changes to the computer. PhyLIS is available for free at http://www.eve.ucdavis.edu/rcthomson/phylis/. PMID:19812729

  19. Real-time Periodic Processing of RT-middleware Utilizing Linux Standard Functionalities

    NASA Astrophysics Data System (ADS)

    Shimizu, Masaharu; Toda, Kengo; Hayashibara, Yasuo; Yamato, Hideaki; Furuta, Takayuki

    A new methodology for real-time periodic processing in RT-middleware based on standard Linux functionalities is presented in this paper. The discussion centers on realizing real-time processing while preserving the reusability of software modules ensured by the RT-middleware framework as well as the portability provided by the Linux development mainstream. In order to show the validity of the proposed approach, two robot systems, including an omnidirectional electric wheelchair steered by a haptic joystick, are presented, and the evaluation results are discussed from the viewpoint of practicality.

  20. PhyLIS: a simple GNU/Linux distribution for phylogenetics and phyloinformatics.

    PubMed

    Thomson, Robert C

    2009-07-30

    PhyLIS is a free GNU/Linux distribution that is designed to provide a simple, standardized platform for phylogenetic and phyloinformatic analysis. The operating system incorporates most commonly used phylogenetic software, which has been pre-compiled and pre-configured, allowing for straightforward application of phylogenetic methods and development of phyloinformatic pipelines in a stable Linux environment. The software is distributed as a live CD and can be installed directly or run from the CD without making changes to the computer. PhyLIS is available for free at http://www.eve.ucdavis.edu/rcthomson/phylis/.

  1. Tri-Laboratory Linux Capacity Cluster 2007 SOW

    SciTech Connect

    Seager, M

    2007-03-22

    well, the budget demands are extreme and new, more cost effective ways of fielding these systems must be developed. This Tri-Laboratory Linux Capacity Cluster (TLCC) procurement represents ASC's first investment vehicle in these capacity systems. It also represents a new strategy for quickly building, fielding, and integrating many Linux clusters of various sizes into classified and unclassified production service through a concept of Scalable Units (SU). The programmatic objective is to dramatically reduce the overall Total Cost of Ownership (TCO) of these 'capacity' systems relative to the best practices in Linux cluster deployments today. This objective only makes sense in the context of these systems quickly becoming very robust and useful production clusters under the crushing load that will be inflicted on them by the ASC and SSP scientific simulation capacity workload.

  2. Networked analytical sample management system

    SciTech Connect

    Kerrigan, W.J.; Spencer, W.A.

    1986-01-01

    Since 1982, the Savannah River Laboratory (SRL) has operated a computer-controlled analytical sample management system. The system, programmed in COBOL, runs on the site IBM 3081 mainframe computer. The system provides for the following subtasks: sample logging, analytical method assignment, worklist generation, cost accounting, and results reporting. Within these subtasks the system functions in a time-sharing mode. Communications between subtasks are done overnight in a batch mode. The system currently supports management of up to 3000 samples a month. Each sample requires, on average, three independent methods. Approximately 100 different analytical techniques are available for customized input of data. The laboratory has implemented extensive computer networking using Ethernet. Electronic mail, RS/1, and online literature searches are in place. Based on our experience with the existing sample management system, we have begun a project to develop a second generation system. The new system will utilize the panel designs developed for the present LIMS, incorporate more real-time features, and take advantage of the many commercial LIMS systems.

  3. Spinal Cord Injury Model System Information Network

    MedlinePlus

    The University of Alabama at Birmingham Spinal Cord Injury Model System (UAB-SCIMS) maintains this Information Network as a resource to promote knowledge in the ...

  4. Linux OS Jitter Measurements at Large Node Counts using a BlueGene/L

    SciTech Connect

    Jones, Terry R; Tauferner, Mr. Andrew; Inglett, Mr. Todd

    2010-01-01

    We present experimental results for a coordinated scheduling implementation of the Linux operating system. Results were collected on an IBM Blue Gene/L machine at scales up to 16K nodes. Our results indicate coordinated scheduling was able to provide a dramatic improvement in scaling performance for two applications characterized as bulk synchronous parallel programs.

  5. Parallel Analysis and Visualization on Cray Compute Node Linux

    SciTech Connect

    Pugmire, Dave; Ahern, Sean

    2008-01-01

    Capability computer systems are deployed to give researchers the computational power required to investigate and solve key challenges facing the scientific community. As the power of these computer systems increases, the computational problem domain typically increases in size, complexity and scope. These increases strain the ability of commodity analysis and visualization clusters to effectively perform post-processing tasks and provide critical insight and understanding to the computed results. An alternative to purchasing increasingly larger, separate analysis and visualization commodity clusters is to use the computational system itself to perform post-processing tasks. In this paper, the recent successful port of VisIt, a parallel, open source analysis and visualization tool, to Compute Node Linux running on the Cray is detailed. Additionally, the unprecedented ability of this resource for analysis and visualization is discussed and a report on obtained results is presented.

  6. Kennedy Space Center network documentation system

    NASA Technical Reports Server (NTRS)

    Lohne, William E.; Schuerger, Charles L.

    1995-01-01

    The Kennedy Space Center Network Documentation System (KSC NDS) is being designed and implemented by NASA and the KSC contractor organizations to provide a means of network tracking, configuration, and control. Currently, a variety of host and client platforms are in use as a result of each organization having established its own network documentation system. The solution is to incorporate as many existing 'systems' as possible in the effort to consolidate and standardize KSC-wide documentation.

  7. Impact on TRMM Products of Conversion to Linux

    NASA Technical Reports Server (NTRS)

    Stocker, Erich Franz; Kwiatkowski, John

    2008-01-01

    In June 2008, TRMM data processing will be assumed by the Precipitation Processing System (PPS). This change will also mean a change in the hardware production environment from an SGI 32-bit IRIX processing environment to a Linux (Beowulf) 64-bit processing environment. This change of platform and operating system addressing (32 to 64 bit) has some influence on data values in the TRMM data products. This paper will describe the transition architecture and scheduling. It will also provide an analysis of the nature of the product differences. It will demonstrate that the differences are not scientifically significant and are generally not visible, although the products are not always identical to those which the SGI would produce.

  8. Improving Memory Error Handling Using Linux

    SciTech Connect

    Carlton, Michael Andrew; Blanchard, Sean P.; Debardeleben, Nathan A.

    2014-07-25

    As supercomputers continue to get faster and more powerful in the future, they will also have more nodes. If nothing is done, then the amount of memory in supercomputer clusters will soon grow large enough that memory failures will be unmanageable to deal with by manually replacing memory DIMMs. "Improving Memory Error Handling Using Linux" is a process oriented method to solve this problem by using the Linux kernel to disable (offline) faulty memory pages containing bad addresses, preventing them from being used again by a process. The process of offlining memory pages simplifies error handling and results in reducing both hardware and manpower costs required to run Los Alamos National Laboratory (LANL) clusters. This process will be necessary for the future of supercomputing to allow the development of exascale computers. It will not be feasible without memory error handling to manually replace the number of DIMMs that will fail daily on a machine consisting of 32-128 petabytes of memory. Testing reveals the process of offlining memory pages works and is relatively simple to use. As more and more testing is conducted, the entire process will be automated within the high-performance computing (HPC) monitoring software, Zenoss, at LANL.
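
    A minimal sketch of the page-offlining step described above, assuming a kernel with the memory-failure sysfs interface (/sys/devices/system/memory/soft_offline_page) and root privileges; the physical address used is illustrative and would normally come from EDAC or mcelog reports.

    ```python
    # Hedged sketch: asking the Linux kernel to soft-offline a suspect memory page.
    # Assumes a kernel built with CONFIG_MEMORY_FAILURE, root privileges, and that
    # the sysfs entry below exists on the target system; the physical address is
    # purely illustrative (it would normally come from EDAC/mcelog reports).
    SOFT_OFFLINE = "/sys/devices/system/memory/soft_offline_page"

    def soft_offline(phys_addr: int) -> None:
        """Write the physical address of a faulty page so the kernel stops using it."""
        with open(SOFT_OFFLINE, "w") as f:
            f.write(f"0x{phys_addr:x}\n")

    if __name__ == "__main__":
        soft_offline(0x12345000)  # example address only
    ```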

  9. High Performance Diskless Linux Workstations in AX-Division

    SciTech Connect

    Councell, E; Busby, L

    2003-09-30

    AX Division has recently installed a number of diskless Linux workstations to meet the needs of its scientific staff for classified processing. Results so far are quite positive, although problems do remain. Some unusual requirements were met using a novel, but simple, design: Each diskless client has a dedicated partition on a server disk that contains a complete Linux distribution.

  10. Method and system for mesh network embedded devices

    NASA Technical Reports Server (NTRS)

    Wang, Ray (Inventor)

    2009-01-01

    A method and system for managing mesh network devices. A mesh network device with integrated features creates an N-way mesh network with a full mesh network topology or a partial mesh network topology.

  11. Generating fracture networks using iterated function systems

    NASA Astrophysics Data System (ADS)

    Mohrlok, U.; Liedl, R.

    In order to model flow and transport in fractured rocks it is important to know the geometry of the fracture network. A stochastic approach is commonly used to generate a synthetic fracture network from the statistics measured at a natural fracture network. The approach presented herein is able to incorporate the structures found in a natural fracture network into the synthetic fracture network. These synthetic fracture networks are the images generated by Iterated Function Systems (IFS) as introduced by Barnsley (1988). The conditions these IFS have to fulfil to determine images resembling fracture networks and the effects of their parameters on the images are discussed. It is possible to define the parameters of the IFS in order to generate some properties of a fracture network. The image of an IFS consists of many single points and has to be suitably processed for further use.

  12. Generating fracture networks using iterated function systems

    NASA Astrophysics Data System (ADS)

    Mohrlok, U.; Liedl, R.

    1996-03-01

    In order to model flow and transport in fractured rocks it is important to know the geometry of the fracture network. A stochastic approach is commonly used to generate a synthetic fracture network from the statistics measured at a natural fracture network. The approach presented herein is able to incorporate the structures found in a natural fracture network into the synthetic fracture network. These synthetic fracture networks are the images generated by Iterated Function Systems (IFS) as introduced by Barnsley (1988). The conditions these IFS have to fulfil to determine images resembling fracture networks and the effects of their parameters on the images are discussed. It is possible to define the parameters of the IFS in order to generate some properties of a fracture network. The image of an IFS consists of many single points and has to be suitably processed for further use.
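
    The sketch below illustrates the general IFS idea with a "chaos game" iteration over a set of affine maps; the maps are illustrative placeholders, not the ones calibrated to fracture-network statistics in the paper.

    ```python
    # Minimal sketch of an Iterated Function System rendered with the "chaos game".
    # The affine maps below are illustrative; the paper derives its own maps from
    # measured fracture-network statistics.
    import random

    # Each map is (a, b, c, d, e, f) acting as (x, y) -> (a*x + b*y + e, c*x + d*y + f).
    MAPS = [
        (0.5, 0.0, 0.0, 0.5, 0.0, 0.0),
        (0.5, 0.0, 0.0, 0.5, 0.5, 0.0),
        (0.5, 0.0, 0.0, 0.5, 0.0, 0.5),
    ]

    def ifs_points(n=10000, seed=0):
        random.seed(seed)
        x, y = 0.0, 0.0
        pts = []
        for i in range(n):
            a, b, c, d, e, f = random.choice(MAPS)
            x, y = a * x + b * y + e, c * x + d * y + f
            if i > 20:          # discard the transient before the attractor is reached
                pts.append((x, y))
        return pts

    points = ifs_points()
    print(len(points), "points on the attractor (here a Sierpinski-like set)")
    ```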

  13. NASDA knowledge-based network planning system

    NASA Technical Reports Server (NTRS)

    Yamaya, K.; Fujiwara, M.; Kosugi, S.; Yambe, M.; Ohmori, M.

    1993-01-01

    One of the SODS (space operation and data system) sub-systems, NP (network planning), was the first expert system used by NASDA (National Space Development Agency of Japan) for tracking and control of satellites. The major responsibilities of the NP system are: first, the allocation of network and satellite control resources and, second, the generation of the network operation plan data (NOP) used in automated control of the stations and control center facilities. Until now, the first task, network resource scheduling, was done by network operators. The NP system automatically generates schedules using its knowledge base, which contains information on satellite orbits, station availability, which computer is dedicated to which satellite, and how many stations must be available for a particular satellite pass or a certain time period. The NP system is introduced.

  14. Broadband network on-line data acquisition system with web based interface for control and basic analysis

    NASA Astrophysics Data System (ADS)

    Polkowski, Marcin; Grad, Marek

    2016-04-01

    The passive seismic experiment “13BB Star” has been operated since mid-2013 in northern Poland and consists of 13 broadband seismic stations. One of the elements of this experiment is a dedicated on-line data acquisition system comprising both client-side (station) and server-side modules, with a web-based interface that allows monitoring of network status and provides tools for preliminary data analysis. The station side is controlled by an ARM Linux board that is programmed to maintain a 3G/EDGE internet connection, receive data from the digitizer, and send data to the central server along with additional auxiliary parameters such as temperatures, voltages and electric current measurements. The station-side software is a set of easy-to-install PHP scripts. Data are transmitted securely over the SSH protocol to the central server, a dedicated Linux-based machine whose duty is receiving and processing all data from all stations, including the auxiliary parameters. The server-side software is written in PHP and Python. Additionally, it allows remote station configuration and provides a web-based interface for user-friendly interaction. All collected data can be displayed for each day and station. The system also allows manual creation of event-oriented plots with various filtering options and provides numerous status and statistics reports. Our solution is very flexible and easy to modify. In this presentation we would like to share our solution and experience. National Science Centre Poland provided financial support for this work via NCN grant DEC-2011/02/A/ST10/00284.

  15. Systemic risk on different interbank network topologies

    NASA Astrophysics Data System (ADS)

    Lenzu, Simone; Tedeschi, Gabriele

    2012-09-01

    In this paper we develop an interbank market with heterogeneous financial institutions that enter into lending agreements on different network structures. Credit relationships (links) evolve endogenously via a fitness mechanism based on agents' performance. By changing the agent's trust on its neighbor's performance, interbank linkages self-organize themselves into very different network architectures, ranging from random to scale-free topologies. We study which network architecture can make the financial system more resilient to random attacks and how systemic risk spreads over the network. To perturb the system, we generate a random attack via a liquidity shock. The hit bank is not automatically eliminated, but its failure is endogenously driven by its incapacity to raise liquidity in the interbank network. Our analysis shows that a random financial network can be more resilient than a scale free one in case of agents' heterogeneity.
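
    As a simplified sketch (not the authors' fitness-based model), the snippet below seeds a single shock on a random and on a scale-free interbank graph and compares the resulting cascade sizes; the failure probability and network sizes are illustrative.

    ```python
    # Simplified sketch (not the authors' fitness-based model): compare how a single
    # liquidity shock cascades on a random vs. a scale-free interbank network.
    import networkx as nx
    import random

    def cascade_size(G, p_fail=0.5, seed=1):
        """Seed one failure, then let each exposed neighbour fail with probability p_fail."""
        random.seed(seed)
        failed = {random.choice(list(G.nodes()))}
        frontier = set(failed)
        while frontier:
            nxt = set()
            for bank in frontier:
                for nb in G.neighbors(bank):
                    if nb not in failed and random.random() < p_fail:
                        nxt.add(nb)
            failed |= nxt
            frontier = nxt
        return len(failed)

    n, m = 200, 3
    random_net = nx.gnm_random_graph(n, n * m, seed=42)
    scale_free = nx.barabasi_albert_graph(n, m, seed=42)
    print("random    :", cascade_size(random_net))
    print("scale-free:", cascade_size(scale_free))
    ```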

  16. Network representations of immune system complexity.

    PubMed

    Subramanian, Naeha; Torabi-Parizi, Parizad; Gottschalk, Rachel A; Germain, Ronald N; Dutta, Bhaskar

    2015-01-01

    The mammalian immune system is a dynamic multiscale system composed of a hierarchically organized set of molecular, cellular, and organismal networks that act in concert to promote effective host defense. These networks range from those involving gene regulatory and protein-protein interactions underlying intracellular signaling pathways and single-cell responses to increasingly complex networks of in vivo cellular interaction, positioning, and migration that determine the overall immune response of an organism. Immunity is thus not the product of simple signaling events but rather nonlinear behaviors arising from dynamic, feedback-regulated interactions among many components. One of the major goals of systems immunology is to quantitatively measure these complex multiscale spatial and temporal interactions, permitting development of computational models that can be used to predict responses to perturbation. Recent technological advances permit collection of comprehensive datasets at multiple molecular and cellular levels, while advances in network biology support representation of the relationships of components at each level as physical or functional interaction networks. The latter facilitate effective visualization of patterns and recognition of emergent properties arising from the many interactions of genes, molecules, and cells of the immune system. We illustrate the power of integrating 'omics' and network modeling approaches for unbiased reconstruction of signaling and transcriptional networks with a focus on applications involving the innate immune system. We further discuss future possibilities for reconstruction of increasingly complex cellular- and organism-level networks and development of sophisticated computational tools for prediction of emergent immune behavior arising from the concerted action of these networks.

  17. Neural network based system for equipment surveillance

    DOEpatents

    Vilim, Richard B.; Gross, Kenneth C.; Wegerich, Stephan W.

    1998-01-01

    A method and system for performing surveillance of transient signals of an industrial device to ascertain the operating state. The method and system involves the steps of reading into a memory training data, determining neural network weighting values until achieving target outputs close to the neural network output. If the target outputs are inadequate, wavelet parameters are determined to yield neural network outputs close to the desired set of target outputs and then providing signals characteristic of an industrial process and comparing the neural network output to the industrial process signals to evaluate the operating state of the industrial process.

  18. Neural network based system for equipment surveillance

    DOEpatents

    Vilim, R.B.; Gross, K.C.; Wegerich, S.W.

    1998-04-28

    A method and system are disclosed for performing surveillance of transient signals of an industrial device to ascertain the operating state. The method and system involves the steps of reading into a memory training data, determining neural network weighting values until achieving target outputs close to the neural network output. If the target outputs are inadequate, wavelet parameters are determined to yield neural network outputs close to the desired set of target outputs and then providing signals characteristic of an industrial process and comparing the neural network output to the industrial process signals to evaluate the operating state of the industrial process. 33 figs.

  19. Remote Energy Monitoring System via Cellular Network

    NASA Astrophysics Data System (ADS)

    Yunoki, Shoji; Tamaki, Satoshi; Takada, May; Iwaki, Takashi

    Recently, improving power savings and cost efficiency by monitoring the operational status of various facilities over a network has gained attention. Wireless networks, especially cellular networks, have advantages in mobility, coverage, and scalability. On the other hand, they have the disadvantage of low reliability due to rapid changes in the available bandwidth. We propose a transmission control scheme based on data priority and instantaneous available bandwidth to realize a highly reliable remote monitoring system via a cellular network. We have developed the proposed monitoring system and evaluated the effectiveness of our scheme, showing that it reduces the maximum transmission delay of sensor status data to 1/10 of that of best-effort transmission.
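
    A toy sketch of the priority-and-bandwidth idea: urgent sensor messages are sent first and bulkier, lower-priority data are deferred when the estimated bandwidth is scarce. The message sizes and bandwidth budget are illustrative and not taken from the paper.

    ```python
    # Toy sketch of priority-aware transmission over a variable-bandwidth link.
    # Message sizes and the bandwidth budget are illustrative, not from the paper.
    import heapq

    def transmit(messages, bandwidth_bytes):
        """messages: list of (priority, size_bytes, payload); lower priority value = more urgent.
        Sends urgent data first and defers the rest when bandwidth is scarce."""
        heap = list(messages)
        heapq.heapify(heap)
        sent, deferred, budget = [], [], bandwidth_bytes
        while heap:
            prio, size, payload = heapq.heappop(heap)
            if size <= budget:
                budget -= size
                sent.append(payload)
            else:
                deferred.append(payload)   # retried in the next reporting interval
        return sent, deferred

    msgs = [(0, 200, "alarm"), (1, 5000, "status"), (2, 20000, "waveform")]
    print(transmit(msgs, bandwidth_bytes=6000))
    ```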

  20. Network Physiology: How Organ Systems Dynamically Interact.

    PubMed

    Bartsch, Ronny P; Liu, Kang K L; Bashan, Amir; Ivanov, Plamen Ch

    2015-01-01

    We systematically study how diverse physiologic systems in the human organism dynamically interact and collectively behave to produce distinct physiologic states and functions. This is a fundamental question in the new interdisciplinary field of Network Physiology, and has not been previously explored. Introducing the novel concept of Time Delay Stability (TDS), we develop a computational approach to identify and quantify networks of physiologic interactions from long-term continuous, multi-channel physiological recordings. We also develop a physiologically-motivated visualization framework to map networks of dynamical organ interactions to graphical objects encoded with information about the coupling strength of network links quantified using the TDS measure. Applying a system-wide integrative approach, we identify distinct patterns in the network structure of organ interactions, as well as the frequency bands through which these interactions are mediated. We establish first maps representing physiologic organ network interactions and discover basic rules underlying the complex hierarchical reorganization in physiologic networks with transitions across physiologic states. Our findings demonstrate a direct association between network topology and physiologic function, and provide new insights into understanding how health and distinct physiologic states emerge from networked interactions among nonlinear multi-component complex systems. The investigations presented here are initial steps in building a first atlas of dynamic interactions among organ systems. PMID:26555073

  1. Network Physiology: How Organ Systems Dynamically Interact

    PubMed Central

    Bartsch, Ronny P.; Liu, Kang K. L.; Bashan, Amir; Ivanov, Plamen Ch.

    2015-01-01

    We systematically study how diverse physiologic systems in the human organism dynamically interact and collectively behave to produce distinct physiologic states and functions. This is a fundamental question in the new interdisciplinary field of Network Physiology, and has not been previously explored. Introducing the novel concept of Time Delay Stability (TDS), we develop a computational approach to identify and quantify networks of physiologic interactions from long-term continuous, multi-channel physiological recordings. We also develop a physiologically-motivated visualization framework to map networks of dynamical organ interactions to graphical objects encoded with information about the coupling strength of network links quantified using the TDS measure. Applying a system-wide integrative approach, we identify distinct patterns in the network structure of organ interactions, as well as the frequency bands through which these interactions are mediated. We establish first maps representing physiologic organ network interactions and discover basic rules underlying the complex hierarchical reorganization in physiologic networks with transitions across physiologic states. Our findings demonstrate a direct association between network topology and physiologic function, and provide new insights into understanding how health and distinct physiologic states emerge from networked interactions among nonlinear multi-component complex systems. The investigations presented here are initial steps in building a first atlas of dynamic interactions among organ systems. PMID:26555073
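
    As a simplified, hedged reading of the TDS idea, the sketch below estimates the lag of maximum cross-correlation between two signals in sliding windows and reports the fraction of consecutive windows in which that lag stays (nearly) constant; window lengths and the stability tolerance are illustrative, not the values used by the authors.

    ```python
    # Simplified sketch of the Time Delay Stability (TDS) idea: in sliding windows,
    # find the lag that maximises the cross-correlation between two signals and
    # report the fraction of consecutive windows whose lag stays (nearly) constant.
    import numpy as np

    def best_lag(x, y, max_lag):
        """Lag (in samples) at which x best matches y, restricted to |lag| <= max_lag."""
        x = x - x.mean()
        y = y - y.mean()
        corr = np.correlate(x, y, mode="full")    # index len(y)-1 corresponds to lag 0
        centre = len(y) - 1
        window = corr[centre - max_lag: centre + max_lag + 1]
        return int(np.argmax(window)) - max_lag

    def tds(x, y, win=500, step=250, max_lag=50, tol=2):
        """Fraction of consecutive windows whose optimal lag changes by at most tol samples."""
        lags = [best_lag(x[s:s + win], y[s:s + win], max_lag)
                for s in range(0, len(x) - win + 1, step)]
        stable = sum(abs(a - b) <= tol for a, b in zip(lags, lags[1:]))
        return stable / max(len(lags) - 1, 1)

    rng = np.random.default_rng(0)
    sig = rng.standard_normal(5000)
    coupled = np.roll(sig, 7) + 0.5 * rng.standard_normal(5000)   # coupled signal, delayed by 7 samples
    print("TDS estimate:", tds(sig, coupled))
    ```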

  2. Managing secure computer systems and networks.

    PubMed

    Von Solms, B

    1996-10-01

    No computer system or computer network can today be operated without the necessary security measures to secure and protect the electronic assets stored, processed and transmitted using such systems and networks. Very often the effort involved in managing such security and protection measures is totally underestimated. This paper provides an overview of the security management needed to secure and protect a typical IT system and network. Special reference is made to this management effort in healthcare systems, and the role of the information security officer is also highlighted.

  3. Nonlinear Network Dynamics on Earthquake Fault Systems

    SciTech Connect

    Rundle, Paul B.; Rundle, John B.; Tiampo, Kristy F.; Sa Martins, Jorge S.; McGinnis, Seth; Klein, W.

    2001-10-01

    Earthquake faults occur in interacting networks having emergent space-time modes of behavior not displayed by isolated faults. Using simulations of the major faults in southern California, we find that the physics depends on the elastic interactions among the faults defined by network topology, as well as on the nonlinear physics of stress dissipation arising from friction on the faults. Our results have broad applications to other leaky threshold systems such as integrate-and-fire neural networks.

  4. Empirical tests of Zipf's law mechanism in open source Linux distribution.

    PubMed

    Maillart, T; Sornette, D; Spaeth, S; von Krogh, G

    2008-11-21

    Zipf's power law is a ubiquitous empirical regularity found in many systems, thought to result from proportional growth. Here, we establish empirically the usually assumed ingredients of stochastic growth models that have been previously conjectured to be at the origin of Zipf's law. We use exceptionally detailed data on the evolution of open source software projects in Linux distributions, which offer a remarkable example of a growing complex self-organizing adaptive system, exhibiting Zipf's law over four full decades.
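
    A hedged sketch of the rank-size check behind Zipf's law, run on synthetic heavy-tailed data since the measured package data from the paper are not included here.

    ```python
    # Sketch of a Zipf rank-size check on synthetic heavy-tailed "package" data; the
    # real analysis in the paper uses measured data from Linux distributions.
    import numpy as np

    rng = np.random.default_rng(0)
    sizes = rng.pareto(1.0, 5000) + 1.0           # heavy-tailed synthetic sample
    sizes = np.sort(sizes)[::-1]                  # rank-ordered, largest first
    ranks = np.arange(1, len(sizes) + 1)

    # Zipf's law predicts size ~ rank^(-mu); fit mu by least squares in log-log space.
    slope, intercept = np.polyfit(np.log(ranks), np.log(sizes), 1)
    print(f"estimated Zipf exponent: {-slope:.2f}")
    ```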

  5. ANPS - AUTOMATIC NETWORK PROGRAMMING SYSTEM

    NASA Technical Reports Server (NTRS)

    Schroer, B. J.

    1994-01-01

    Development of some of the space program's large simulation projects -- like the project which involves simulating the countdown sequence prior to spacecraft liftoff -- requires the support of automated tools and techniques. The number of preconditions which must be met for a successful spacecraft launch and the complexity of their interrelationship account for the difficulty of creating an accurate model of the countdown sequence. Researchers developed ANPS for the NASA Marshall Space Flight Center to assist programmers attempting to model the pre-launch countdown sequence. Incorporating the elements of automatic programming as its foundation, ANPS aids the user in defining the problem and then automatically writes the appropriate simulation program in GPSS/PC code. The program's interactive user dialogue interface creates an internal problem specification file from user responses which includes the time line for the countdown sequence, the attributes for the individual activities which are part of a launch, and the dependent relationships between the activities. The program's automatic simulation code generator receives the file as input and selects appropriate macros from the library of software modules to generate the simulation code in the target language GPSS/PC. The user can recall the problem specification file for modification to effect any desired changes in the source code. ANPS is designed to write simulations for problems concerning the pre-launch activities of space vehicles and the operation of ground support equipment and has potential for use in developing network reliability models for hardware systems and subsystems. ANPS was developed in 1988 for use on IBM PC or compatible machines. The program requires at least 640 KB memory and one 360 KB disk drive, PC DOS Version 2.0 or above, and GPSS/PC System Version 2.0 from Minuteman Software. The program is written in Turbo Prolog Version 2.0. GPSS/PC is a trademark of Minuteman Software. Turbo Prolog

  6. High-speed, intra-system networks

    SciTech Connect

    Quinn, Heather M; Graham, Paul S; Manuzzato, Andrea; Fairbanks, Tom; Dallmann, Nicholas; Desgeorges, Rose

    2010-06-28

    Recently, engineers have been studying on-payload networks for fast communication paths. Using intra-system networks as a means to connect devices together allows for a flexible payload design that does not rely on dedicated communication paths between devices. In this manner, the data flow architecture of the system can be dynamically reconfigured to allow data routes to be optimized for the application or configured to route around devices that are temporarily or permanently unavailable. To use intra-system networks, devices will need network controllers and switches. These devices are likely to be affected by single-event effects, which could affect data communication. In this paper we will present radiation data and performance analysis for using a Broadcom network controller in a neutron environment.

  7. An online system for metabolic network analysis

    PubMed Central

    Cicek, Abdullah Ercument; Qi, Xinjian; Cakmak, Ali; Johnson, Stephen R.; Han, Xu; Alshalwi, Sami; Ozsoyoglu, Zehra Meral; Ozsoyoglu, Gultekin

    2014-01-01

    Metabolic networks have become one of the centers of attention in life sciences research with the advancements in the metabolomics field. A vast array of studies analyzes metabolites and their interrelations to seek explanations for various biological questions, and numerous genome-scale metabolic networks have been assembled to serve for this purpose. The increasing focus on this topic comes with the need for software systems that store, query, browse, analyze and visualize metabolic networks. PathCase Metabolomics Analysis Workbench (PathCaseMAW) is built, released and runs on a manually created generic mammalian metabolic network. The PathCaseMAW system provides a database-enabled framework and Web-based computational tools for browsing, querying, analyzing and visualizing stored metabolic networks. PathCaseMAW editor, with its user-friendly interface, can be used to create a new metabolic network and/or update an existing metabolic network. The network can also be created from an existing genome-scale reconstructed network using the PathCaseMAW SBML parser. The metabolic network can be accessed through a Web interface or an iPad application. For metabolomics analysis, steady-state metabolic network dynamics analysis (SMDA) algorithm is implemented and integrated with the system. SMDA tool is accessible through both the Web-based interface and the iPad application for metabolomics analysis based on a metabolic profile. PathCaseMAW is a comprehensive system with various data input and data access subsystems. It is easy to work with by design, and is a promising tool for metabolomics research and for educational purposes. Database URL: http://nashua.case.edu/PathwaysMAW/Web PMID:25267793

  8. Fast predictive control of networked energy systems

    NASA Astrophysics Data System (ADS)

    Chuang, Frank Fu-Han

    In this thesis we study the optimal control of networked energy systems. Networked energy systems consist of a collection of energy storage nodes and a network of links and inputs which allow energy to be exchanged, injected, or removed from the nodes. The nodes may exchange energy between each other autonomously or via controlled flows between the nodes. Examples of networked systems include building heating, ventilation, and air conditioning (HVAC) systems and networked battery systems. In the building system example, the nodes of the system are rooms which store thermal energy in the air and other elements which have thermal capacity. The rooms transfer energy autonomously through thermal conduction, convection, and radiation. Thermal energy can be injected into or removed from the rooms via conditioned air or slabs. In the case of a networked battery system, the batteries store electrical energy in their chemical cells. The batteries may be electrically linked so that a controller can move electrical charge from one battery to another. Networked energy systems are typically large-scale (contain many states and inputs), affected by uncertain forecasts and disturbances, and require fast computation on cheap embedded platforms. In this thesis, the optimal control technique we study is model predictive control for networked energy systems. Model predictive or receding horizon control is a time-domain optimization-based control technique which uses predictive models of a system to forecast its behavior and minimize a performance cost subject to system constraints. In this thesis we address two primary issues concerning model predictive control for networked energy systems: robustness to uncertainty in forecasts and reducing the complexity of the large-scale optimization problem for use in embedded platforms. The first half of the thesis deals primarily with the efficient computation of robust controllers for dealing with random and adversarial uncertainties in the

  9. Dynamic artificial neural networks with affective systems.

    PubMed

    Schuman, Catherine D; Birdwell, J Douglas

    2013-01-01

    Artificial neural networks (ANNs) are processors that are trained to perform particular tasks. We couple a computational ANN with a simulated affective system in order to explore the interaction between the two. In particular, we design a simple affective system that adjusts the threshold values in the neurons of our ANN. The aim of this paper is to demonstrate that this simple affective system can control the firing rate of the ensemble of neurons in the ANN, as well as to explore the coupling between the affective system and the processes of long term potentiation (LTP) and long term depression (LTD), and the effect of the parameters of the affective system on its performance. We apply our networks with affective systems to a simple pole balancing example and briefly discuss the effect of affective systems on network performance.
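
    A simplified sketch (not the authors' exact architecture) of an affective signal that nudges neuron firing thresholds so the ensemble firing rate tracks a target; the network size, target rate and gain are illustrative.

    ```python
    # Simplified sketch (not the authors' exact model): an "affective" controller that
    # nudges neuron firing thresholds up when the ensemble fires too much and down
    # when it fires too little, steering the network toward a target firing rate.
    import numpy as np

    rng = np.random.default_rng(1)
    n_neurons, target_rate, gain = 100, 0.2, 0.05
    thresholds = np.full(n_neurons, 0.5)

    for step in range(200):
        inputs = rng.random(n_neurons)             # stand-in for synaptic drive
        fired = inputs > thresholds
        rate = fired.mean()
        thresholds += gain * (rate - target_rate)  # affective feedback on all thresholds
        if step % 50 == 0:
            print(f"step {step:3d}  firing rate {rate:.2f}  mean threshold {thresholds.mean():.2f}")
    ```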

  10. Network support for system initiated checkpoints

    DOEpatents

    Chen, Dong; Heidelberger, Philip

    2013-01-29

    A system, method and computer program product for supporting system initiated checkpoints in parallel computing systems. The system and method generates selective control signals to perform checkpointing of system related data in presence of messaging activity associated with a user application running at the node. The checkpointing is initiated by the system such that checkpoint data of a plurality of network nodes may be obtained even in the presence of user applications running on highly parallel computers that include ongoing user messaging activity.

  11. Network representations of immune system complexity

    PubMed Central

    Subramanian, Naeha; Torabi-Parizi, Parizad; Gottschalk, Rachel A.; Germain, Ronald N.; Dutta, Bhaskar

    2015-01-01

    The mammalian immune system is a dynamic multi-scale system composed of a hierarchically organized set of molecular, cellular and organismal networks that act in concert to promote effective host defense. These networks range from those involving gene regulatory and protein-protein interactions underlying intracellular signaling pathways and single cell responses to increasingly complex networks of in vivo cellular interaction, positioning and migration that determine the overall immune response of an organism. Immunity is thus not the product of simple signaling events but rather non-linear behaviors arising from dynamic, feedback-regulated interactions among many components. One of the major goals of systems immunology is to quantitatively measure these complex multi-scale spatial and temporal interactions, permitting development of computational models that can be used to predict responses to perturbation. Recent technological advances permit collection of comprehensive datasets at multiple molecular and cellular levels while advances in network biology support representation of the relationships of components at each level as physical or functional interaction networks. The latter facilitate effective visualization of patterns and recognition of emergent properties arising from the many interactions of genes, molecules, and cells of the immune system. We illustrate the power of integrating ‘omics’ and network modeling approaches for unbiased reconstruction of signaling and transcriptional networks with a focus on applications involving the innate immune system. We further discuss future possibilities for reconstruction of increasingly complex cellular and organism-level networks and development of sophisticated computational tools for prediction of emergent immune behavior arising from the concerted action of these networks. PMID:25625853

  12. Network representations of immune system complexity.

    PubMed

    Subramanian, Naeha; Torabi-Parizi, Parizad; Gottschalk, Rachel A; Germain, Ronald N; Dutta, Bhaskar

    2015-01-01

    The mammalian immune system is a dynamic multiscale system composed of a hierarchically organized set of molecular, cellular, and organismal networks that act in concert to promote effective host defense. These networks range from those involving gene regulatory and protein-protein interactions underlying intracellular signaling pathways and single-cell responses to increasingly complex networks of in vivo cellular interaction, positioning, and migration that determine the overall immune response of an organism. Immunity is thus not the product of simple signaling events but rather nonlinear behaviors arising from dynamic, feedback-regulated interactions among many components. One of the major goals of systems immunology is to quantitatively measure these complex multiscale spatial and temporal interactions, permitting development of computational models that can be used to predict responses to perturbation. Recent technological advances permit collection of comprehensive datasets at multiple molecular and cellular levels, while advances in network biology support representation of the relationships of components at each level as physical or functional interaction networks. The latter facilitate effective visualization of patterns and recognition of emergent properties arising from the many interactions of genes, molecules, and cells of the immune system. We illustrate the power of integrating 'omics' and network modeling approaches for unbiased reconstruction of signaling and transcriptional networks with a focus on applications involving the innate immune system. We further discuss future possibilities for reconstruction of increasingly complex cellular- and organism-level networks and development of sophisticated computational tools for prediction of emergent immune behavior arising from the concerted action of these networks. PMID:25625853

  13. FLY: a code for LSS cosmological simulations for a PC Linux Cluster

    NASA Astrophysics Data System (ADS)

    Comparato, M.; Becciani, U.; Antonuccio-Delogu, V.; Costa, A.

    2006-07-01

    We developed FLY with the main goal of maximizing the number of particles that can be simulated in an MPP system without data replication. FLY builds a tree that is shared among all the processes that execute a simulation, each process having the same number of bodies which evolve during each time-step. Here we present the new version of the code, which runs on a PC Linux cluster using the MPI-2 one-sided communication paradigm, and the performance results obtained.
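
    FLY itself is not a Python code, but the one-sided MPI-2 communication paradigm it relies on can be illustrated with mpi4py (assumed available); the sketch below puts a value directly into another rank's exposed buffer without a matching receive.

    ```python
    # Minimal illustration of MPI-2 one-sided ("put") communication using mpi4py
    # (assumed installed); this only sketches the communication paradigm, not FLY.
    # Run with two ranks, e.g.: mpiexec -n 2 python rma_demo.py (file name illustrative)
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Each rank exposes a one-element integer buffer through an RMA window.
    buf = np.zeros(1, dtype="i")
    win = MPI.Win.Create(buf, comm=comm)

    win.Fence()                      # open an access epoch
    if rank == 0:
        value = np.array([42], dtype="i")
        win.Put(value, 1)            # write into rank 1's buffer without a receive call
    win.Fence()                      # close the epoch; the put is now complete

    if rank == 1:
        print("rank 1 received", buf[0])
    win.Free()
    ```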

  14. Synchronization in networks of spatially extended systems

    SciTech Connect

    Filatova, Anastasiya E.; Hramov, Alexander E.; Koronovskii, Alexey A.; Boccaletti, Stefano

    2008-06-15

    Synchronization processes in networks of spatially extended dynamical systems are analytically and numerically studied. We focus on the relevant case of networks whose elements (or nodes) are spatially extended dynamical systems, with the nodes being connected with each other by scalar signals. The stability of the synchronous spatio-temporal state for a generic network is analytically assessed by means of an extension of the master stability function approach. We find an excellent agreement between the theoretical predictions and the data obtained by means of numerical calculations. The efficiency and reliability of this method is illustrated numerically with networks of beam-plasma chaotic systems (Pierce diodes). We discuss also how the revealed regularities are expected to take place in other relevant physical and biological circumstances.

  15. The APS control system network upgrade.

    SciTech Connect

    Sidorowicz, K. v.; Leibfritz, D.; McDowell, W. P.

    1999-10-22

    When it was installed, the Advanced Photon Source (APS) control system network was state-of-the-art. Different aspects of the system have been reported at previous meetings [1,2]. As loads on the controls network have increased due to newer and faster workstations and front-end computers, we have found performance of the system declining and have implemented an upgraded network. There have been dramatic advances in networking hardware in the last several years. The upgraded APS controls network replaces the original FDDI backbone and shared Ethernet hubs with redundant gigabit uplinks and fully switched 10/100 Ethernet switches with backplane fabrics in excess of 20 Gbits/s (Gbps). The central collapsed backbone FDDI concentrator has been replaced with a Gigabit Ethernet switch with greater than 30 Gbps backplane fabric. Full redundancy of the system has been maintained. This paper will discuss this upgrade and include performance data and performance comparisons with the original network.

  16. Clinical information systems for integrated healthcare networks.

    PubMed Central

    Teich, J. M.

    1998-01-01

    In the 1990's, a large number of hospitals and medical practices have merged to form integrated healthcare networks (IHN's). The nature of an IHN creates new demands for information management, and also imposes new constraints on information systems for the network. Important tradeoffs must be made between homogeneity and flexibility, central and distributed governance, and access and confidentiality. This paper describes key components of clinical information systems for IHN's, and examines important design decisions that affect the value of such systems. PMID:9929178

  17. Circulation system complex networks and teleconnections

    NASA Astrophysics Data System (ADS)

    Gong, Zhi-Qiang; Wang, Xiao-Juan; Zhi, Rong; Feng, Ai-Xia

    2011-07-01

    In terms of the characteristic topology parameters of climate complex networks, the spatial connection structural complexity of the circulation system and the influence of four teleconnection patterns are quantitatively described. Results of node degrees for the Northern Hemisphere (NH) mid-high latitude (30° N-90° N) circulation system (NHS) networks with and without the Arctic Oscillations (AO), the North Atlantic Oscillations (NAO) and the Pacific-North American pattern (PNA) demonstrate that the teleconnections greatly shorten the mean shortest path length of the networks, thus being advantageous to the rapid transfer of local fluctuation information over the network and to the stability of the NHS. The impact of the AO on the NHS connection structure is the most important, and the impact of the NAO is the next most important. The PNA is a relatively independent teleconnection, and its role in the NHS is mainly manifested in the connection between the NHS and the tropical circulation system (TRS). As to the Southern Hemisphere mid-high latitude (30° S-90° S) circulation system (SHS), the impact of the Antarctic Arctic Oscillations (AAO) on the structural stability of the system is the most important. In addition, there might be a stable correlation dipole (AACD) in the SHS, which also has an important influence on the structure of the SHS networks.

  18. Visual Tutoring System for Programming Multiprocessor Networks.

    ERIC Educational Resources Information Center

    Trichina, Elena

    1996-01-01

    Describes a visual tutoring system for programming distributed-memory multiprocessor networks. Highlights include difficulties of parallel programming, and three instructional modes in the system, including a hypertext-like lecture, a question-answer mode, and an expert aid mode. (Author/LRW)

  19. Evaluating neural networks and artificial intelligence systems

    NASA Astrophysics Data System (ADS)

    Alberts, David S.

    1994-02-01

    Systems have no intrinsic value in and of themselves, but rather derive value from the contributions they make to the missions, decisions, and tasks they are intended to support. The estimation of the cost-effectiveness of systems is a prerequisite for rational planning, budgeting, and investment decisions. Neural network and expert system applications, although similar in their incorporation of a significant amount of decision-making capability, differ from each other in ways that affect the manner in which they can be evaluated. Both these types of systems are, by definition, evolutionary systems, which also impacts their evaluation. This paper discusses key aspects of neural network and expert system applications and their impact on the evaluation process. A practical approach or methodology for evaluating a certain class of expert systems that are particularly difficult to measure using traditional evaluation approaches is presented.

  20. Network analysis of eight industrial symbiosis systems

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Zheng, Hongmei; Shi, Han; Yu, Xiangyi; Liu, Gengyuan; Su, Meirong; Li, Yating; Chai, Yingying

    2016-06-01

    Industrial symbiosis is the quintessential characteristic of an eco-industrial park. To divide parks into different types, previous studies mostly focused on qualitative judgments, and failed to use metrics to conduct quantitative research on the internal structural or functional characteristics of a park. To analyze a park's structural attributes, a range of metrics from network analysis have been applied, but few researchers have compared two or more symbioses using multiple metrics. In this study, we used two metrics (density and network degree centralization) to compare the degrees of completeness and dependence of eight diverse but representative industrial symbiosis networks. Through the combination of the two metrics, we divided the networks into three types: weak completeness, and two forms of strong completeness, namely "anchor tenant" mutualism and "equality-oriented" mutualism. The results showed that the networks with a weak degree of completeness were sparse and had few connections among nodes; for "anchor tenant" mutualism, the degree of completeness was relatively high, but the affiliated members were too dependent on core members; and the members in "equality-oriented" mutualism had equal roles, with diverse and flexible symbiotic paths. These results revealed some of the systems' internal structure and how different structures influenced the exchanges of materials, energy, and knowledge among members of a system, thereby providing insights into threats that may destabilize the network. Based on this analysis, we provide examples of the advantages and effectiveness of recent improvement projects in a typical Chinese eco-industrial park (Shandong Lubei).
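
    The two metrics can be illustrated on a toy symbiosis network with networkx; the firms and exchange links below are illustrative, and degree centralization is computed with Freeman's formula.

    ```python
    # Sketch of the two metrics used in the study, computed on a toy symbiosis
    # network (the firms and exchange links are illustrative).
    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([
        ("power plant", "gypsum board"), ("power plant", "cement"),
        ("power plant", "greenhouse"), ("refinery", "power plant"),
        ("refinery", "fertilizer"), ("fertilizer", "greenhouse"),
    ])

    density = nx.density(G)

    # Freeman degree centralization: how strongly the network is dominated by its
    # most connected member (1 = perfect star, 0 = all members equally connected).
    deg = dict(G.degree())
    n = G.number_of_nodes()
    max_deg = max(deg.values())
    centralization = sum(max_deg - d for d in deg.values()) / ((n - 1) * (n - 2))

    print(f"density = {density:.2f}, degree centralization = {centralization:.2f}")
    ```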

  1. A dynamical systems view of network centrality

    PubMed Central

    Grindrod, Peter; Higham, Desmond J.

    2014-01-01

    To gain insights about dynamic networks, the dominant paradigm is to study discrete snapshots, or timeslices, as the interactions evolve. Here, we develop and test a new mathematical framework where network evolution is handled over continuous time, giving an elegant dynamical systems representation for the important concept of node centrality. The resulting system allows us to track the relative influence of each individual. This new setting is natural in many digital applications, offering both conceptual and computational advantages. The novel differential equations approach is convenient for modelling and analysis of network evolution and gives rise to an interesting application of the matrix logarithm function. From a computational perspective, it avoids the awkward up-front compromises between accuracy, efficiency and redundancy required in the prevalent discrete-time setting. Instead, we can rely on state-of-the-art ODE software, where discretization takes place adaptively in response to the prevailing system dynamics. The new centrality system generalizes the widely used Katz measure, and allows us to identify and track, at any resolution, the most influential nodes in terms of broadcasting and receiving information through time-dependent links. In addition to the classical static network notion of attenuation across edges, the new ODE also allows for attenuation over time, as information becomes stale. This allows ‘running measures’ to be computed, so that networks can be monitored in real time over arbitrarily long intervals. With regard to computational efficiency, we explain why it is cheaper to track good receivers of information than good broadcasters. An important consequence is that the overall broadcast activity in the network can also be monitored efficiently. We use two synthetic examples to validate the relevance of the new measures. We then illustrate the ideas on a large-scale voice call network, where key features are discovered that are
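
    For reference, the static Katz measure that the dynamic formulation generalizes can be sketched with a resolvent computation; the adjacency matrix and attenuation parameter below are illustrative, and the ODE-based dynamic version is not reproduced here.

    ```python
    # Sketch of the static Katz-style centrality that the paper's dynamic ODE
    # formulation generalizes: broadcast and receive scores from the resolvent
    # (I - alpha*A)^(-1). The graph and alpha are illustrative.
    import numpy as np

    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)

    alpha = 0.2                                   # must stay below 1 / spectral radius of A
    resolvent = np.linalg.inv(np.eye(len(A)) - alpha * A)
    broadcast = resolvent.sum(axis=1)             # row sums: how well each node sends
    receive = resolvent.sum(axis=0)               # column sums: how well each node receives
    print("broadcast:", np.round(broadcast, 3))
    print("receive:  ", np.round(receive, 3))
    ```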

  2. Network control processor for a TDMA system

    NASA Astrophysics Data System (ADS)

    Suryadevara, Omkarmurthy; Debettencourt, Thomas J.; Shulman, R. B.

    Two unique aspects of designing a network control processor (NCP) to monitor and control a demand-assigned, time-division multiple-access (TDMA) network are described. The first involves the implementation of redundancy by synchronizing the databases of two geographically remote NCPs. The two sets of databases are kept in synchronization by collecting data on both systems, transferring databases, sending incremental updates, and the parallel updating of databases. A periodic audit compares the checksums of the databases to ensure synchronization. The second aspect involves the use of a tracking algorithm to dynamically reallocate TDMA frame space. This algorithm detects and tracks current and long-term load changes in the network. When some portions of the network are overloaded while others have excess capacity, the algorithm automatically calculates and implements a new burst time plan.

  3. Nonlinear Network Dynamics on Earthquake Fault Systems

    NASA Astrophysics Data System (ADS)

    Rundle, P. B.; Rundle, J. B.; Tiampo, K. F.

    2001-12-01

    Understanding the physics of earthquakes is essential if large events are ever to be forecast. Real faults occur in topologically complex networks that exhibit cooperative, emergent space-time behavior that includes precursory quiescence or activation, and clustering of events. The purpose of this work is to investigate the sensitivity of emergent behavior of fault networks to changes in the physics on the scale of single faults or smaller. In order to investigate the effect of changes at small scales on the behavior of the network, we need to construct models of earthquake fault systems that contain the essential physics. A network topology is therefore defined in an elastic medium, the stress Green's functions (i.e. the stress transfer coefficients) are computed, frictional properties are defined and the system is driven via the slip deficit as defined below. The long-range elastic interactions produce mean-field dynamics in the simulations. We focus in this work on the major strike-slip faults in Southern California that produce the most frequent and largest magnitude events. To determine the topology and properties of the network, we used the tabulation of fault properties published in the literature. We have found that the statistical distribution of large earthquakes on a model of a topologically complex, strongly correlated real fault network is highly sensitive to the precise nature of the stress dissipation properties of the friction laws associated with individual faults. These emergent, self-organizing space-time modes of behavior are properties of the network as a whole, rather than of the individual fault segments of which the network is comprised (ref: PBR et al., Physical Review Letters, in press, 2001).

  4. Applying neural networks in autonomous systems

    NASA Astrophysics Data System (ADS)

    Thornbrugh, Allison L.; Layne, J. D.; Wilson, James M., III

    1992-03-01

    Autonomous and teleautonomous operations have been defined in a variety of ways by different groups involved with remote robotic operations. For example, Conway describes architectures for producing intelligent actions in teleautonomous systems. Applying neural nets in such systems is similar to applying them in general. However, for autonomy, learning or learned behavior may become a significant system driver. Thus, artificial neural networks are being evaluated as components in fully autonomous and teleautonomous systems. Feed-forward networks may be trained to perform adaptive signal processing, pattern recognition, data fusion, and function approximation -- as in control subsystems. Certain components of particular autonomous systems become more amenable to implementation using a neural net due to a match between the net's attributes and desired attributes of the system component. Criteria have been developed for distinguishing such applications and then implementing them. The success of hardware implementation is a crucial part of this application evaluation process. Three basic applications of neural nets -- autoassociation, classification, and function approximation -- are used to exemplify this process and to highlight procedures that are followed during the requirements, design, and implementation phases. This paper assumes some familiarity with basic neural network terminology and concentrates upon the use of different neural network types while citing references that cover the underlying mathematics and related research.

  5. Social network supported process recommender system.

    PubMed

    Ye, Yanming; Yin, Jianwei; Xu, Yueshen

    2014-01-01

    Process recommendation technologies have gained more and more attention in the field of intelligent business process modeling as a way to assist process modeling. However, most existing technologies rely only on process structure analysis and do not take the social features of processes into account, even though process modeling is complex and comprehensive in most situations. This paper studies the feasibility of applying social network research technologies to process recommendation and builds a social network system of processes based on feature similarities. Then, three process matching degree measurements are presented and the system implementation is discussed. Finally, experimental evaluations and future work are presented.

  6. The LCOGT Network for Solar System Science

    NASA Astrophysics Data System (ADS)

    Lister, Tim

    2012-10-01

    The Las Cumbres Observatory Global Telescope (LCOGT) network is a planned homogeneous network of over 35 telescopes at 6 locations in the northern and southern hemispheres. This network is versatile and designed to respond rapidly to target-of-opportunity events and also to do long-term monitoring of slowly changing astronomical phenomena. The global coverage of the network and the apertures of telescope available make LCOGT ideal for follow-up and characterization of Solar System objects (e.g. asteroids, Kuiper Belt Objects, comets, Near-Earth Objects (NEOs)) and ultimately for the discovery of new objects. Currently LCOGT is operating the two 2m Faulkes Telescopes at Haleakala, Maui and Siding Spring Observatory, Australia, and in March 2012 completed the installation of the first member of the new 1m telescope network at McDonald Observatory, Texas. Further deployments of six to eight 1m telescopes to CTIO in Chile, SAAO in South Africa and Siding Spring Observatory are expected in late 2012-early 2013. I am using the growing LCOGT network to confirm newly detected NEO candidates produced by PanSTARRS (PS1) and other sky surveys and to obtain follow-up astrometry and photometry for radar-targeted objects. I have developed an automated system to retrieve new PS1 NEOs, compute orbits, plan observations and automatically schedule them for follow-up on the robotic telescopes of the LCOGT Network. In the future, LCOGT has proposed to develop a Minor Planet Investigation Project (MPIP) that will address the existing lack of resources for minor planet follow-up, take advantage of ever-increasing new datasets, and develop a platform for broad public participation in relevant scientific exploration. We plan to produce a cloud-based Solar System investigation environment, a citizen science project (AgentNEO), and a cyberlearning environment, all under the umbrella of MPIP.

  7. Multitask neural network for vision machine systems

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.; Knopf, George K.

    1991-02-01

    A multi-task dynamic neural network that can be programmed for storing, processing and encoding spatio-temporal visual information is presented in this paper. This dynamic neural network, called the PN-network, is comprised of numerous densely interconnected neural subpopulations which reside in one of two coupled sublayers, P or N. The subpopulations in the P-sublayer transmit an excitatory or positive influence onto all interconnected units, whereas the subpopulations in the N-sublayer transmit an inhibitory or negative influence. The dynamical activity generated by each subpopulation is given by a nonlinear first-order system. By varying the coupling strength between these different subpopulations, it is possible to generate three distinct modes of dynamical behavior useful for performing vision-related tasks. It is postulated that the PN-network can function as a basic programmable processor for novel vision machine systems.
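
    A rough sense of the "nonlinear first-order system" governing each subpopulation can be given with a standard excitatory/inhibitory rate model. The equations, parameters and sigmoid below are generic assumptions for illustration, not the authors' exact formulation.

      # A minimal sketch, assuming Wilson-Cowan-style first-order dynamics
      # tau * dx/dt = -x + S(coupling + input) for one excitatory (P) and one
      # inhibitory (N) subpopulation; not the PN-network's actual equations.
      import numpy as np

      def S(u):
          return 1.0 / (1.0 + np.exp(-u))      # sigmoidal population response

      tau_p, tau_n = 10.0, 20.0                # time constants (illustrative)
      w_pp, w_pn = 12.0, 10.0                  # P->P excitation, N->P inhibition
      w_np, w_nn = 9.0, 3.0                    # P->N excitation, N->N inhibition
      I_p, I_n = 1.5, 0.5                      # external inputs
      dt, steps = 0.1, 5000

      p, n = 0.1, 0.1
      for _ in range(steps):                   # forward Euler integration
          dp = (-p + S(w_pp * p - w_pn * n + I_p)) / tau_p
          dn = (-n + S(w_np * p - w_nn * n + I_n)) / tau_n
          p, n = p + dt * dp, n + dt * dn

      print("steady-state activity: P=%.3f  N=%.3f" % (p, n))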

  8. Social networks as embedded complex adaptive systems.

    PubMed

    Benham-Hutchins, Marge; Clancy, Thomas R

    2010-09-01

    As systems evolve over time, their natural tendency is to become increasingly more complex. Studies in the field of complex systems have generated new perspectives on management in social organizations such as hospitals. Much of this research appears as a natural extension of the cross-disciplinary field of systems theory. This is the 15th in a series of articles applying complex systems science to the traditional management concepts of planning, organizing, directing, coordinating, and controlling. In this article, the authors discuss healthcare social networks as a hierarchy of embedded complex adaptive systems. The authors further examine the use of social network analysis tools as a means to understand complex communication patterns and reduce medical errors.

  9. Building network management system for video conference system in intranet

    NASA Astrophysics Data System (ADS)

    Li, Hui; Bai, Lin; Ji, Yuefeng

    2004-04-01

    To provide visual communication over an enterprise Intranet, the H.323 video conference system has been proposed as a suitable architecture to replace the circuit-switched telephony model. However, managing a video conference system is complicated by the need for real-time monitoring and reporting. This paper presents research on the network management of the H.323 video conference system, introduces the relevant standards, such as the ITU-T H.341 and H.350 recommendations, and then gives some advice on network management design for video conference systems, taking their real-time character into account.

  10. The Earthscope USArray Array Network Facility (ANF): Evolution of Data Acquisition, Processing, and Storage Systems

    NASA Astrophysics Data System (ADS)

    Davis, G. A.; Battistuz, B.; Foley, S.; Vernon, F. L.; Eakins, J. A.

    2009-12-01

    Since April 2004 the Earthscope USArray Transportable Array (TA) network has grown to over 400 broadband seismic stations that stream multi-channel data in near real-time to the Array Network Facility in San Diego. In total, over 1.7 terabytes per year of 24-bit, 40 samples-per-second seismic and state of health data is recorded from the stations. The ANF provides analysts access to real-time and archived data, as well as state-of-health data, metadata, and interactive tools for station engineers and the public via a website. Additional processing and recovery of missing data from on-site recorders (balers) at the stations is performed before the final data is transmitted to the IRIS Data Management Center (DMC). Assembly of the final data set requires additional storage and processing capabilities to combine the real-time data with baler data. The infrastructure supporting these diverse computational and storage needs currently consists of twelve virtualized Sun Solaris Zones executing on nine physical server systems. The servers are protected against failure by redundant power, storage, and networking connections. Storage needs are provided by a hybrid iSCSI and Fiber Channel Storage Area Network (SAN) with access to over 40 terabytes of RAID 5 and 6 storage. Processing tasks are assigned to systems based on parallelization and floating-point calculation needs. On-site buffering at the data-loggers provides protection in case of short-term network or hardware problems, while backup acquisition systems at the San Diego Supercomputer Center and the DMC protect against catastrophic failure of the primary site. Configuration management and monitoring of these systems is accomplished with open-source (Cfengine, Nagios, Solaris Community Software) and commercial tools (Intermapper). In the evolution from a single server to multiple virtualized server instances, Sun Cluster software was evaluated and found to be unstable in our environment. Shared filesystem

  11. Networked Training: An Electronic Education System.

    ERIC Educational Resources Information Center

    Ryan, William J.

    1993-01-01

    Presents perspectives on networked training based on the development of an electronic education system at the Westinghouse Savannah River Company that integrated motion video, text, and data information with multiple audio sources. The technology options of compact disc, digital video architecture, and digital video interactive are discussed. (LRW)

  12. Distributing Executive Information Systems through Networks.

    ERIC Educational Resources Information Center

    Penrod, James I.; And Others

    1993-01-01

    Many colleges and universities will soon adopt distributed systems for executive information and decision support. Distribution of shared information through computer networks will improve decision-making processes dramatically on campuses. Critical success factors include administrative support, favorable organizational climate, ease of use,…

  13. Threats to Networked RFID Systems

    NASA Astrophysics Data System (ADS)

    Mitrokotsa, Aikaterini; Beye, Michael; Peris-Lopez, Pedro

    RFID technology is an area currently undergoing active development. One issue that has received a lot of attention is the security risk that arises from the inherent vulnerabilities of RFID technology. Most of this attention, however, has focused on related privacy issues. The goal of this chapter is to present a more global overview of RFID threats. This can not only help experts perform risk analyses of RFID systems but also increase awareness and understanding of RFID security issues for non-experts. We use clearly defined and widely accepted concepts from both the RFID area and classical risk analysis to structure this overview.

  14. Design Criteria For Networked Image Analysis System

    NASA Astrophysics Data System (ADS)

    Reader, Cliff; Nitteberg, Alan

    1982-01-01

    Image systems design is currently undergoing a metamorphosis from the conventional computing systems of the past into a new generation of special purpose designs. This change is motivated by several factors, notable among which is the increased opportunity for high performance at low cost offered by advances in semiconductor technology. Another key issue is a maturing understanding of problems and of the applicability of digital processing techniques. These factors allow the design of cost-effective systems that are functionally dedicated to specific applications and used in a utilitarian fashion. Following an overview of the above stated issues, the paper presents a top-down approach to the design of networked image analysis systems. The requirements for such a system are presented, with orientation toward the hospital environment. The three main areas are image data base management, viewing of image data and image data processing. This is followed by a survey of the current state of the art, covering image display systems, data base techniques, communications networks and software systems control. The paper concludes with a description of the functional subsystems and architectural framework for networked image analysis in a production environment.

  15. Network and adaptive system of systems modeling and analysis.

    SciTech Connect

    Lawton, Craig R.; Campbell, James E. Dr.; Anderson, Dennis James; Eddy, John P.

    2007-05-01

    This report documents the results of an LDRD program entitled ''Network and Adaptive System of Systems Modeling and Analysis'' that was conducted during FY 2005 and FY 2006. The purpose of this study was to determine and implement ways to incorporate network communications modeling into existing System of Systems (SoS) modeling capabilities. Current SoS modeling, particularly for the Future Combat Systems (FCS) program, is conducted under the assumption that communication between the various systems is always possible and occurs instantaneously. A more realistic representation of these communications allows for better, more accurate simulation results. The current approach to meeting this objective has been to use existing capabilities to model network hardware reliability and to add capabilities that use that information to model the impact on the sustainment supply chain and operational availability.

  16. Zebra: A striped network file system

    NASA Technical Reports Server (NTRS)

    Hartman, John H.; Ousterhout, John K.

    1992-01-01

    The design of Zebra, a striped network file system, is presented. Zebra applies ideas from log-structured file system (LFS) and RAID research to network file systems, resulting in a network file system that has scalable performance, uses its servers efficiently even when its applications are using small files, and provides high availability. Zebra stripes file data across multiple servers, so that the file transfer rate is not limited by the performance of a single server. High availability is achieved by maintaining parity information for the file system. If a server fails, its contents can be reconstructed using the contents of the remaining servers and the parity information. Zebra differs from existing striped file systems in the way it stripes file data: Zebra does not stripe on a per-file basis; instead it stripes the stream of bytes written by each client. Clients write to the servers in units called stripe fragments, which are analogous to segments in an LFS. Stripe fragments contain file blocks that were written recently, without regard to which file they belong to. This method of striping has numerous advantages over per-file striping, including increased server efficiency, efficient parity computation, and elimination of parity update.
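
    The per-client striping and parity idea can be illustrated with a short sketch: a client's write log is cut into fixed-size fragments, distributed round-robin across data servers, and protected by an XOR parity fragment. The fragment size, server count and function names below are made up for illustration; this is not Zebra's code.

      # A minimal sketch, assuming 3 data servers and tiny 4-byte fragments;
      # it shows log-style striping with XOR parity, not Zebra's implementation.
      import functools, operator

      FRAGMENT_SIZE = 4
      NUM_DATA_SERVERS = 3

      def make_stripes(log: bytes):
          """Cut a client's byte stream into stripes of data fragments + parity."""
          stripes = []
          stripe_bytes = FRAGMENT_SIZE * NUM_DATA_SERVERS
          for off in range(0, len(log), stripe_bytes):
              chunk = log[off:off + stripe_bytes].ljust(stripe_bytes, b"\0")
              frags = [chunk[i * FRAGMENT_SIZE:(i + 1) * FRAGMENT_SIZE]
                       for i in range(NUM_DATA_SERVERS)]
              parity = bytes(functools.reduce(operator.xor, col) for col in zip(*frags))
              stripes.append(frags + [parity])
          return stripes

      def recover(fragments, lost_index):
          """Rebuild one missing fragment by XOR-ing all surviving fragments."""
          rebuilt = bytes(FRAGMENT_SIZE)
          for i, frag in enumerate(fragments):
              if i != lost_index:
                  rebuilt = bytes(a ^ b for a, b in zip(rebuilt, frag))
          return rebuilt

      stripes = make_stripes(b"bytes written by one client, regardless of file")
      first = stripes[0]
      assert recover(first, 1) == first[1]     # a lost data fragment is rebuilt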

  17. Networks for Autonomous Formation Flying Satellite Systems

    NASA Technical Reports Server (NTRS)

    Knoblock, Eric J.; Konangi, Vijay K.; Wallett, Thomas M.; Bhasin, Kul B.

    2001-01-01

    The performance of three communications networks to support autonomous multi-spacecraft formation flying systems is presented. All systems are comprised of a ten-satellite formation arranged in a star topology, with one of the satellites designated as the central or "mother ship." All data is routed through the mother ship to the terrestrial network. The first system uses a TCP/IP over ATM protocol architecture within the formation; the second system uses the IEEE 802.11 protocol architecture within the formation; and the last system uses both of the previous architectures with a constellation of geosynchronous satellites serving as an intermediate point-of-contact between the formation and the terrestrial network. The simulations consist of file transfers using either the File Transfer Protocol (FTP) or the Simple Automatic File Exchange (SAFE) Protocol. The results compare the IP queuing delay and IP processing delay at the mother ship, as well as the application-level round-trip time, for the systems. In all cases, using IEEE 802.11 within the formation yields less delay. Also, the throughput exhibited by SAFE is better than FTP.

  18. Analysis of complex systems using neural networks

    SciTech Connect

    Uhrig, R.E. (Dept. of Nuclear Engineering; Oak Ridge National Lab., TN)

    1992-01-01

    The application of neural networks, alone or in conjunction with other advanced technologies (expert systems, fuzzy logic, and/or genetic algorithms), to some of the problems of complex engineering systems has the potential to enhance the safety, reliability, and operability of these systems. Typically, the measured variables from the systems are analog variables that must be sampled and normalized to expected peak values before they are introduced into neural networks. Often data must be processed to put it into a form more acceptable to the neural network (e.g., a fast Fourier transformation of the time-series data to produce a spectral plot of the data). Specific applications described include: (1) Diagnostics: State of the Plant, (2) Hybrid System for Transient Identification, (3) Sensor Validation, (4) Plant-Wide Monitoring, (5) Monitoring of Performance and Efficiency, and (6) Analysis of Vibrations. Although specific examples described deal with nuclear power plants or their subsystems, the techniques described can be applied to a wide variety of complex engineering systems.

  19. Analysis of complex systems using neural networks

    SciTech Connect

    Uhrig, R.E.

    1992-12-31

    The application of neural networks, alone or in conjunction with other advanced technologies (expert systems, fuzzy logic, and/or genetic algorithms), to some of the problems of complex engineering systems has the potential to enhance the safety, reliability, and operability of these systems. Typically, the measured variables from the systems are analog variables that must be sampled and normalized to expected peak values before they are introduced into neural networks. Often data must be processed to put it into a form more acceptable to the neural network (e.g., a fast Fourier transformation of the time-series data to produce a spectral plot of the data). Specific applications described include: (1) Diagnostics: State of the Plant, (2) Hybrid System for Transient Identification, (3) Sensor Validation, (4) Plant-Wide Monitoring, (5) Monitoring of Performance and Efficiency, and (6) Analysis of Vibrations. Although specific examples described deal with nuclear power plants or their subsystems, the techniques described can be applied to a wide variety of complex engineering systems.

  20. GAS MAIN SENSOR AND COMMUNICATIONS NETWORK SYSTEM

    SciTech Connect

    Hagen Schempf, Ph.D.

    2003-02-27

    Automatika, Inc. was contracted by the Department of Energy (DOE) and with co-funding from the New York Gas Group (NYGAS), to develop an in-pipe natural gas prototype measurement and wireless communications system for assessing and monitoring distribution networks. A prototype system was built for low-pressure cast-iron mains and tested in a spider- and serial-network configuration in a live network in Long Island with the support of Keyspan Energy, Inc. The prototype unit combined sensors capable of monitoring pressure, flow, humidity, temperature and vibration, which were sampled and combined in data-packages in an in-pipe master-slave architecture to collect data from a distributed spider-arrangement, and in a master-repeater-slave configuration in serial or ladder-network arrangements. It was found that the system was capable of performing all data-sampling and collection as expected, yielding interesting results as to flow-dynamics and vibration-detection. Wireless in-pipe communications were shown to be feasible and valuable data was collected in order to determine how to improve on range and data-quality in the future.

  1. Architecture for networked electronic patient record systems.

    PubMed

    Takeda, H; Matsumura, Y; Kuwata, S; Nakano, H; Sakamoto, N; Yamamoto, R

    2000-11-01

    There have been two major approaches to the development of networked electronic patient record (EPR) architecture. One uses object-oriented methodologies for constructing the model, which include the GEHR project, Synapses, HL7 RIM and so on. The second approach uses document-oriented methodologies, as applied in examples of HL7 PRA. It is practically beneficial to take advantage of both approaches and to add solution technologies for network security such as PKI. In recognition of the similarity with electronic commerce, a certificate authority as a trusted third party will be organised for establishing a networked EPR system. This paper describes a Japanese functional model that has been developed, and proposes a document-object-oriented architecture, which is compared with other existing models. PMID:11154967

  2. Gas Main Sensor and Communications Network System

    SciTech Connect

    Hagen Schempf

    2006-05-31

    Automatika, Inc. was contracted by the Department of Energy (DOE), with co-funding from the Northeast Gas Association (NGA), to develop an in-pipe natural gas prototype measurement and wireless communications system for assessing and monitoring distribution networks. This project was completed in April 2006 and culminated in the installation of more than two dozen GasNet nodes in both low- and high-pressure cast-iron and steel mains owned by multiple utilities in the northeastern US. Utilities are currently logging data (off-line) and monitoring data in real time from single and multiple networked sensors over cellular networks, and collecting data using wireless Bluetooth PDA systems. The system was designed to be modular, using in-pipe sensor-wands capable of measuring flow, pressure, temperature, water content and vibration. Internal antennae allowed the pipe internals to be used as a waveguide for setting up a sensor network to collect data from multiple nodes simultaneously. Sensor nodes were designed to be installed with low- and no-blow techniques and tools. Using a multi-drop bus technique with a custom protocol, all electronics were designed to be buriable and to allow for on-board data collection (SD-card), wireless relaying and cellular network forwarding. Installation options afforded by the design included direct-burial and external pole-mounted variants. Power was provided by one or more batteries, direct AC power (Class I Div. 2) or a solar array. The utilities are currently in a data-collection phase and intend to use the collected (and processed) data to make capital improvement decisions, compare it to Stoner model predictions and evaluate the use of such a system for future expansion, technology improvement and commercialization starting later in 2006.

  3. Hardware and Software Design of FPGA-based PCIe Gen3 interface for APEnet+ network interconnect system

    NASA Astrophysics Data System (ADS)

    Ammendola, R.; Biagioni, A.; Frezza, O.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Paolucci, P. S.; Pastorelli, E.; Rossetti, D.; Simula, F.; Tosoratto, L.; Vicini, P.

    2015-12-01

    In the attempt to develop an interconnection architecture optimized for hybrid HPC systems dedicated to scientific computing, we designed APEnet+, a point-to-point, low-latency and high-performance network controller supporting 6 fully bidirectional off-board links over a 3D torus topology. The first release of APEnet+ (named V4) was a board based on a 40 nm Altera FPGA, integrating 6 channels at 34 Gbps of raw bandwidth per direction and a PCIe Gen2 x8 host interface. It has been the first-of-its-kind device to implement an RDMA protocol to directly read/write data from/to Fermi and Kepler NVIDIA GPUs using NVIDIA peer-to-peer and GPUDirect RDMA protocols, obtaining real zero-copy GPU-to-GPU transfers over the network. The latest generation of APEnet+ systems (now named V5) implements a PCIe Gen3 x8 host interface on a 28 nm Altera Stratix V FPGA, with multi-standard fast transceivers (up to 14.4 Gbps) and an increased amount of configurable internal resources and hardware IP cores to support main interconnection standard protocols. Herein we present the APEnet+ V5 architecture, the status of its hardware and its system software design. Both its Linux Device Driver and the low-level libraries have been redeveloped to support the PCIe Gen3 protocol, introducing optimizations and solutions based on hardware/software co-design.

  4. Secured network sensor-based defense system

    NASA Astrophysics Data System (ADS)

    Wei, Sixiao; Shen, Dan; Ge, Linqiang; Yu, Wei; Blasch, Erik P.; Pham, Khanh D.; Chen, Genshe

    2015-05-01

    Network sensor-based defense (NSD) systems have been widely used to defend against cyber threats. Nonetheless, if the adversary finds ways to identify the location of monitor sensors, the effectiveness of NSD systems can be reduced. In this paper, we propose both temporal and spatial perturbation based defense mechanisms to secure NSD systems and make the monitor sensor invisible to the adversary. The temporal-perturbation based defense manipulates the timing information of published data so that the probability of successfully recognizing monitor sensors can be reduced. The spatial-perturbation based defense dynamically redeploys monitor sensors in the network so that the adversary cannot obtain the complete information to recognize all of the monitor sensors. We carried out experiments using real-world traffic traces to evaluate the effectiveness of our proposed defense mechanisms. Our data shows that our proposed defense mechanisms can reduce the attack accuracy of recognizing detection sensors.
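
    The temporal-perturbation idea can be sketched as jittering the publication time of each monitoring record so that publication order no longer mirrors capture order. The interface and jitter bound below are assumptions for illustration, not the authors' design.

      # A minimal sketch, assuming a uniform random delay bound; it illustrates
      # temporal perturbation of published records, not the proposed system itself.
      import heapq
      import random

      def perturb_publication(records, max_jitter_s=30.0, seed=42):
          """records: iterable of (capture_time, payload) pairs.
          Yields (publish_time, capture_time, payload) ordered by jittered time."""
          rng = random.Random(seed)
          queue = []
          for capture_time, payload in records:
              publish_time = capture_time + rng.uniform(0.0, max_jitter_s)
              heapq.heappush(queue, (publish_time, capture_time, payload))
          while queue:
              yield heapq.heappop(queue)

      # Three events captured one second apart may now be published out of order.
      for item in perturb_publication([(0.0, "a"), (1.0, "b"), (2.0, "c")]):
          print(item)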

  5. GAS MAIN SENSOR AND COMMUNICATIONS NETWORK SYSTEM

    SciTech Connect

    Hagen Schempf

    2004-09-30

    Automatika, Inc. was contracted by the Department of Energy (DOE) and with co-funding from the New York Gas Group (NYGAS), to develop an in-pipe natural gas prototype measurement and wireless communications system for assessing and monitoring distribution networks. In Phase II of this three-phase program, an improved prototype system was built for low-pressure cast-iron and high-pressure steel (including a no-blow installation system) mains and tested in a serial-network configuration in a live network in Long Island with the support of Keyspan Energy, Inc. The experiment was carried out in several open-hole excavations over a multi-day period. The prototype units (3 total) combined sensors capable of monitoring pressure, flow, humidity, temperature and vibration, which were sampled and combined in data-packages in an in-pipe master-repeater-slave configuration in serial or ladder-network arrangements. It was verified that the system was capable of performing all data-sampling, data-storage and collection as expected, yielding interesting results as to flow-dynamics and vibration-detection. Wireless in-pipe communications were shown to be feasible and the system was demonstrated to run off in-ground battery- and above-ground solar power. The remote datalogger access and storage-card features were demonstrated and used to log and post-process system data. Real-time data-display on an updated Phase-I GUI was used for in-field demonstration and troubleshooting.

  6. Teaching Hands-On Linux Host Computer Security

    ERIC Educational Resources Information Center

    Shumba, Rose

    2006-01-01

    In the summer of 2003, a project to augment and improve the teaching of information assurance courses was started at IUP. Thus far, ten hands-on exercises have been developed. The exercises described in this article, and presented in the appendix, are based on actions required to secure a Linux host. Publicly available resources were used to…

  7. Linux Adventures on a Laptop. Computers in Small Libraries

    ERIC Educational Resources Information Center

    Roberts, Gary

    2005-01-01

    This article discusses the pros and cons of open source software, such as Linux. It asserts that despite the technical difficulties of installing and maintaining this type of software, ultimately it is helpful in terms of knowledge acquisition and as a beneficial investment librarians can make in themselves, their libraries, and their patrons.…

  8. Drowning in PC Management: Could a Linux Solution Save Us?

    ERIC Educational Resources Information Center

    Peters, Kathleen A.

    2004-01-01

    Short on funding and IT staff, a Western Canada library struggled to provide adequate public computing resources. Staff turned to a Linux-based solution that supports up to 10 users from a single computer, and blends Web browsing and productivity applications with session management, Internet filtering, and user authentication. In this article,…

  9. MOLAR: Modular Linux and Adaptive Runtime Support for HEC OS/R Research

    SciTech Connect

    Frank Mueller

    2009-02-05

    MOLAR is a multi-institution research effort that concentrates on adaptive, reliable, and efficient operating and runtime system solutions for ultra-scale high-end scientific computing on the next generation of supercomputers. This research addresses the challenges outlined by the FAST-OS (forum to address scalable technology for runtime and operating systems) and HECRTF (high-end computing revitalization task force) activities by providing a modular Linux and adaptable runtime support for high-end computing operating and runtime systems. The MOLAR research has the following goals to address these issues. (1) Create a modular and configurable Linux system that allows customized changes based on the requirements of the applications, runtime systems, and cluster management software. (2) Build runtime systems that leverage the OS modularity and configurability to improve efficiency, reliability, scalability, ease-of-use, and provide support to legacy and promising programming models. (3) Advance computer reliability, availability and serviceability (RAS) management systems to work cooperatively with the OS/R to identify and preemptively resolve system issues. (4) Explore the use of advanced monitoring and adaptation to improve application performance and predictability of system interruptions. The overall goal of the research conducted at NCSU is to develop scalable algorithms for high availability without single points of failure and without single points of control.

  10. IEEE 342 Node Low Voltage Networked Test System

    SciTech Connect

    Schneider, Kevin P.; Phanivong, Phillippe K.; Lacroix, Jean-Sebastian

    2014-07-31

    The IEEE Distribution Test Feeders provide a benchmark for new algorithms for the distribution analysis community. The low voltage network test feeder represents a moderate size urban system that is unbalanced and highly networked. This is the first distribution test feeder developed by the IEEE that contains unbalanced networked components. The 342 node Low Voltage Networked Test System includes many elements that may be found in a networked system: multiple 13.2kV primary feeders, network protectors, a 120/208V grid network, and multiple 277/480V spot networks. This paper presents a brief review of the history of low voltage networks and how they evolved into modern systems, and then presents a description of the 342 Node IEEE Low Voltage Network Test System and power flow results.

  11. Fault-tolerant interconnection networks for multiprocessor systems

    SciTech Connect

    Nassar, H.M.

    1989-01-01

    Interconnection networks represent the backbone of multiprocessor systems. A failure in the network, therefore, could seriously degrade the system performance. For this reason, fault tolerance has been regarded as a major consideration in interconnection network design. This thesis presents two novel techniques to provide fault tolerance capabilities to three major networks: the Baseline network, the Benes network, and the Clos network. First, the Simple Fault Tolerance Technique (SFT) is presented. The SFT technique is in fact the result of merging two widely known interconnection mechanisms: a normal interconnection network and a shared bus. This technique is most suitable for networks with small switches, such as the Baseline network and the Benes network. For the Clos network, whose switches may be too large for the SFT, another technique is developed to produce the Fault-Tolerant Clos (FTC) network. In the FTC, one switch is added to each stage. The two techniques are described and thoroughly analyzed.

  12. Conceptualizing and Advancing Research Networking Systems.

    PubMed

    Schleyer, Titus; Butler, Brian S; Song, Mei; Spallek, Heiko

    2012-03-01

    Science in general, and biomedical research in particular, is becoming more collaborative. As a result, collaboration with the right individuals, teams, and institutions is increasingly crucial for scientific progress. We propose Research Networking Systems (RNS) as a new type of system designed to help scientists identify and choose collaborators, and suggest a corresponding research agenda. The research agenda covers four areas: foundations, presentation, architecture, and evaluation. Foundations includes project-, institution- and discipline-specific motivational factors; the role of social networks; and impression formation based on information beyond expertise and interests. Presentation addresses representing expertise in a comprehensive and up-to-date manner; the role of controlled vocabularies and folksonomies; the tension between seekers' need for comprehensive information and potential collaborators' desire to control how they are seen by others; and the need to support serendipitous discovery of collaborative opportunities. Architecture considers aggregation and synthesis of information from multiple sources, social system interoperability, and integration with the user's primary work context. Lastly, evaluation focuses on assessment of collaboration decisions, measurement of user-specific costs and benefits, and how the large-scale impact of RNS could be evaluated with longitudinal and naturalistic methods. We hope that this article stimulates the human-computer interaction, computer-supported cooperative work, and related communities to pursue a broad and comprehensive agenda for developing research networking systems.

  13. Conceptualizing and Advancing Research Networking Systems

    PubMed Central

    SCHLEYER, TITUS; BUTLER, BRIAN S.; SONG, MEI; SPALLEK, HEIKO

    2013-01-01

    Science in general, and biomedical research in particular, is becoming more collaborative. As a result, collaboration with the right individuals, teams, and institutions is increasingly crucial for scientific progress. We propose Research Networking Systems (RNS) as a new type of system designed to help scientists identify and choose collaborators, and suggest a corresponding research agenda. The research agenda covers four areas: foundations, presentation, architecture, and evaluation. Foundations includes project-, institution- and discipline-specific motivational factors; the role of social networks; and impression formation based on information beyond expertise and interests. Presentation addresses representing expertise in a comprehensive and up-to-date manner; the role of controlled vocabularies and folksonomies; the tension between seekers’ need for comprehensive information and potential collaborators’ desire to control how they are seen by others; and the need to support serendipitous discovery of collaborative opportunities. Architecture considers aggregation and synthesis of information from multiple sources, social system interoperability, and integration with the user’s primary work context. Lastly, evaluation focuses on assessment of collaboration decisions, measurement of user-specific costs and benefits, and how the large-scale impact of RNS could be evaluated with longitudinal and naturalistic methods. We hope that this article stimulates the human-computer interaction, computer-supported cooperative work, and related communities to pursue a broad and comprehensive agenda for developing research networking systems. PMID:24376309

  14. Digital Video Over Space Systems and Networks

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney

    2010-01-01

    This slide presentation reviews the use of digital video with space systems and networks. The earliest use of video was film, which precluded live viewing and later gave way to live television from space. That, in turn, has given way to digital video transmitted using the Internet Protocol, which has brought many improvements along with new challenges. Some of these challenges are reviewed. The change to digital video transmitted over space systems can provide incredible imagery; however, the process must be viewed as an entire system, rather than piece-meal.

  15. System/360 Computer Assisted Network Scheduling (CANS) System

    NASA Technical Reports Server (NTRS)

    Brewer, A. C.

    1972-01-01

    Computer-assisted scheduling techniques that produce conflict-free and efficient schedules have been developed and implemented to meet the needs of the Manned Space Flight Network. The CANS system provides effective management of resources in a complex scheduling environment. The system is an automated tool for resource scheduling, controlling, planning, and information storage and retrieval.

  16. Network Event Recording Device: An automated system for network anomaly detection and notification. Draft

    SciTech Connect

    Simmons, D.G.; Wilkins, R.

    1994-09-01

    The goal of the Network Event Recording Device (NERD) is to provide a flexible autonomous system for network logging and notification when significant network anomalies occur. The NERD is also charged with increasing the efficiency and effectiveness of currently implemented network security procedures. While it has always been possible for network and security managers to review log files for evidence of network irregularities, the NERD provides real-time display of network activity, as well as constant monitoring and notification services for managers. Similarly, real-time display and notification of possible security breaches will provide improved effectiveness in combating resource infiltration from both inside and outside the immediate network environment.

  17. System and method for networking electrochemical devices

    DOEpatents

    Williams, Mark C.; Wimer, John G.; Archer, David H.

    1995-01-01

    An improved electrochemically active system and method including a plurality of electrochemical devices, such as fuel cells and fluid separation devices, in which the anode and cathode process-fluid flow chambers are connected in fluid-flow arrangements so that the operating parameters of each of said plurality of electrochemical devices which are dependent upon process-fluid parameters may be individually controlled to provide improved operating efficiency. The improvements in operation include improved power efficiency and improved fuel utilization in fuel cell power generating systems and reduced power consumption in fluid separation devices and the like through interstage process fluid parameter control for series networked electrochemical devices. The improved networking method includes recycling of various process flows to enhance the overall control scheme.

  18. The network management expert system prototype for Sun Workstations

    NASA Technical Reports Server (NTRS)

    Leigh, Albert

    1990-01-01

    Networking has become one of the fastest growing areas in the computer industry. The emergence of distributed workstations makes networking more popular because they need connectivity between themselves as well as with other computer systems to share information and system resources. Making networks more efficient and expandable by selecting network services and devices that fit one's needs is vital to achieving reliability and fast throughput. Networks are dynamically changing and growing at a rate that outpaces the available human resources. Therefore, there is a need to multiply the expertise rapidly rather than employing more network managers. In addition, setting up and maintaining networks by following the manuals can be tedious and cumbersome even for an experienced network manager. This prototype expert system was developed as an experiment on Sun Workstations to assist system and network managers in selecting and configuring network services.

  19. The realization of network video monitoring system

    NASA Astrophysics Data System (ADS)

    Hou, Zhuo-wei; Qiu, Yue-hong

    2013-08-01

    The paper presents a network video monitoring system based on a field programmable gate array (FPGA) that implements real-time acquisition and transmission of video signals. The system includes an image acquisition module, a central control module and an Ethernet transmission module. A Cyclone FPGA serves as the control center of the system, with Quartus II and the Nios II IDE used as development tools to build the hardware development platform. An embedded hardware system is built using SOPC technology, in which the Nios II soft core and other controllers are combined by configuration. Meanwhile, μClinux is used as the embedded operating system to make acquisition and transmission of image data over the Internet more reliable. To provide the MAC and PHY functions, a fast Ethernet controller is connected to the SOPC. The TCP/IP protocol is used for data transmission, and a web server is embedded to support the HTTP, TCP and UDP protocols. With a programmable logic device as the core and the network as the transmission medium, the design scheme of the video monitoring system is presented. The work concentrates mainly on the hardware design, and the principles and functions of the system are explained in detail, covering the key technologies and specific methods involved.

  20. Multi-agent tasks scheduling system in software defined networks

    NASA Astrophysics Data System (ADS)

    Skobelev, P. O.; Granichin, O. N.; Budaev, D. S.; Laryukhin, V. B.; Mayorov, I. V.

    2014-05-01

    In this paper a multi-agent task scheduling system in software defined networks is considered. The system is designed for distributed simulation and task execution on computational resources, taking the dynamic characteristics and topology of the network into account.

  1. Pathways, Networks and Systems Medicine Conferences

    SciTech Connect

    Nadeau, Joseph H.

    2013-11-25

    The 6th Pathways, Networks and Systems Medicine Conference was held at the Minoa Palace Conference Center, Chania, Crete, Greece (16-21 June 2008). The Organizing Committee was composed of Joe Nadeau (CWRU, Cleveland), Rudi Balling (German Research Centre, Braunschweig), David Galas (Institute for Systems Biology, Seattle), Lee Hood (Institute for Systems Biology, Seattle), Diane Isonaka (Seattle), Fotis Kafatos (Imperial College, London), John Lambris (Univ. Pennsylvania, Philadelphia), Harris Lewin (Univ. of Illinois, Urbana-Champaign), Edison Liu (Genome Institute of Singapore, Singapore), and Shankar Subramaniam (Univ. California, San Diego). A total of 101 individuals from 21 countries participated in the conference: USA (48), Canada (5), France (5), Austria (4), Germany (3), Italy (3), UK (3), Greece (2), New Zealand (2), Singapore (2), Argentina (1), Australia (1), Cuba (1), Denmark (1), Japan (1), Mexico (1), Netherlands (1), Spain (1), Sweden (1), Switzerland (1). With respect to speakers, 29 were established faculty members and 13 were graduate students or postdoctoral fellows. With respect to gender representation, among speakers 13 were female and 28 were male, and among all participants 43 were female and 58 were male. The program included the following topics: Cancer Pathways and Networks (Day 1), Metabolic Disease Networks (Day 2), Organs, Pathways and Stem Cells (Day 3), and Inflammation, Immunity, Microbes and the Environment (Day 4). Proceedings of the Conference were not published.

  2. Credit Default Swaps networks and systemic risk.

    PubMed

    Puliga, Michelangelo; Caldarelli, Guido; Battiston, Stefano

    2014-01-01

    Credit Default Swaps (CDS) spreads should reflect default risk of the underlying corporate debt. Actually, it has been recognized that CDS spread time series did not anticipate but only followed the increasing risk of default before the financial crisis. In principle, the network of correlations among CDS spread time series could at least display some form of structural change to be used as an early warning of systemic risk. Here we study a set of 176 CDS time series of financial institutions from 2002 to 2011. Networks are constructed in various ways, some of which display structural change at the onset of the credit crisis of 2008, but never before. By taking these networks as a proxy of interdependencies among financial institutions, we run stress-test based on Group DebtRank. Systemic risk before 2008 increases only when incorporating a macroeconomic indicator reflecting the potential losses of financial assets associated with house prices in the US. This approach indicates a promising way to detect systemic instabilities. PMID:25366654
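
    One common way to turn such spread time series into a network, shown here only as an illustration and not necessarily the exact construction used in the paper, is to threshold pairwise correlations:

      # A minimal sketch, assuming a Pearson-correlation threshold; the paper
      # builds its networks "in various ways", so this is only one illustration.
      import numpy as np

      def correlation_network(spreads, threshold=0.7):
          """spreads: array of shape (n_institutions, n_days) of CDS spread changes.
          Returns a boolean adjacency matrix linking strongly correlated series."""
          corr = np.corrcoef(spreads)
          adj = corr >= threshold
          np.fill_diagonal(adj, False)          # drop self-loops
          return adj

      # Synthetic example: 5 institutions driven by one shared market factor.
      rng = np.random.default_rng(1)
      common = rng.normal(size=250)
      spreads = np.vstack([0.8 * common + 0.2 * rng.normal(size=250) for _ in range(5)])
      print("edges:", int(correlation_network(spreads).sum()) // 2)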

  3. Credit Default Swaps networks and systemic risk.

    PubMed

    Puliga, Michelangelo; Caldarelli, Guido; Battiston, Stefano

    2014-01-01

    Credit Default Swaps (CDS) spreads should reflect default risk of the underlying corporate debt. Actually, it has been recognized that CDS spread time series did not anticipate but only followed the increasing risk of default before the financial crisis. In principle, the network of correlations among CDS spread time series could at least display some form of structural change to be used as an early warning of systemic risk. Here we study a set of 176 CDS time series of financial institutions from 2002 to 2011. Networks are constructed in various ways, some of which display structural change at the onset of the credit crisis of 2008, but never before. By taking these networks as a proxy of interdependencies among financial institutions, we run stress-test based on Group DebtRank. Systemic risk before 2008 increases only when incorporating a macroeconomic indicator reflecting the potential losses of financial assets associated with house prices in the US. This approach indicates a promising way to detect systemic instabilities.

  4. Credit Default Swaps networks and systemic risk

    NASA Astrophysics Data System (ADS)

    Puliga, Michelangelo; Caldarelli, Guido; Battiston, Stefano

    2014-11-01

    Credit Default Swaps (CDS) spreads should reflect default risk of the underlying corporate debt. Actually, it has been recognized that CDS spread time series did not anticipate but only followed the increasing risk of default before the financial crisis. In principle, the network of correlations among CDS spread time series could at least display some form of structural change to be used as an early warning of systemic risk. Here we study a set of 176 CDS time series of financial institutions from 2002 to 2011. Networks are constructed in various ways, some of which display structural change at the onset of the credit crisis of 2008, but never before. By taking these networks as a proxy of interdependencies among financial institutions, we run stress-test based on Group DebtRank. Systemic risk before 2008 increases only when incorporating a macroeconomic indicator reflecting the potential losses of financial assets associated with house prices in the US. This approach indicates a promising way to detect systemic instabilities.

  5. Credit Default Swaps networks and systemic risk

    PubMed Central

    Puliga, Michelangelo; Caldarelli, Guido; Battiston, Stefano

    2014-01-01

    Credit Default Swaps (CDS) spreads should reflect default risk of the underlying corporate debt. Actually, it has been recognized that CDS spread time series did not anticipate but only followed the increasing risk of default before the financial crisis. In principle, the network of correlations among CDS spread time series could at least display some form of structural change to be used as an early warning of systemic risk. Here we study a set of 176 CDS time series of financial institutions from 2002 to 2011. Networks are constructed in various ways, some of which display structural change at the onset of the credit crisis of 2008, but never before. By taking these networks as a proxy of interdependencies among financial institutions, we run stress-test based on Group DebtRank. Systemic risk before 2008 increases only when incorporating a macroeconomic indicator reflecting the potential losses of financial assets associated with house prices in the US. This approach indicates a promising way to detect systemic instabilities. PMID:25366654

  6. Final Report for "Queuing Network Models of Performance of High End Computing Systems"

    SciTech Connect

    Buckwalter, J

    2005-09-28

    The primary objective of this project is to perform general research into queuing network models of performance of high end computing systems. A related objective is to investigate and predict how an increase in the number of nodes of a supercomputer will decrease the running time of a user's software package, which is often referred to as the strong scaling problem. We investigate the large, MPI-based Linux cluster MCR at LLNL, running the well-known NAS Parallel Benchmark (NPB) applications. Data is collected directly from NPB and also from the low-overhead LLNL profiling tool mpiP. For a run, we break the wall clock execution time of the benchmark into four components: switch delay, MPI contention time, MPI service time, and non-MPI computation time. Switch delay is estimated from message statistics. MPI service time and non-MPI computation time are calculated directly from measurement data. MPI contention is estimated by means of a queuing network model (QNM), based in part on MPI service time. This model of execution time validates reasonably well against the measured execution time, usually within 10%. Since the number of nodes used to run the application is a major input to the model, we can use the model to predict application execution times for various numbers of nodes. We also investigate how the four components of execution time scale individually as the number of nodes increases. Switch delay and MPI service time scale regularly. MPI contention is estimated by the QNM submodel and also has a fairly regular pattern. However, non-MPI compute time has a somewhat irregular pattern, possibly due to caching effects in the memory hierarchy. In contrast to some other performance modeling methods, this method is relatively fast to set up, fast to calculate, simple for data collection, and yet accurate enough to be quite useful.
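
    The four-component decomposition lends itself to a simple back-of-the-envelope predictor. The scaling rules below are assumptions for illustration (the report instead estimates MPI contention with its queuing network model), so the numbers are placeholders, not results from the report.

      # A minimal sketch, assuming ideal strong scaling for service and compute
      # time, flat switch delay, and a placeholder for contention; not the QNM.
      def predict_runtime(measured, nodes_base, nodes_target):
          """measured: seconds for 'switch', 'contention', 'service', 'compute'
          at nodes_base; returns a rough prediction at nodes_target."""
          scale = nodes_base / nodes_target
          return {
              "switch": measured["switch"],               # per-message latency, ~flat
              "contention": measured["contention"],       # placeholder; the report uses its QNM here
              "service": measured["service"] * scale,     # per-node MPI work shrinks
              "compute": measured["compute"] * scale,     # ideal strong scaling
          }

      base = {"switch": 2.0, "contention": 5.0, "service": 20.0, "compute": 100.0}
      pred = predict_runtime(base, nodes_base=64, nodes_target=128)
      print(pred, "predicted total: %.1f s" % sum(pred.values()))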

  7. Deep Space Network information system architecture study

    NASA Technical Reports Server (NTRS)

    Beswick, C. A.; Markley, R. W. (Editor); Atkinson, D. J.; Cooper, L. P.; Tausworthe, R. C.; Masline, R. C.; Jenkins, J. S.; Crowe, R. A.; Thomas, J. L.; Stoloff, M. J.

    1992-01-01

    The purpose of this article is to describe an architecture for the Deep Space Network (DSN) information system in the years 2000-2010 and to provide guidelines for its evolution during the 1990s. The study scope is defined to be from the front-end areas at the antennas to the end users (spacecraft teams, principal investigators, archival storage systems, and non-NASA partners). The architectural vision provides guidance for major DSN implementation efforts during the next decade. A strong motivation for the study is an expected dramatic improvement in information-systems technologies, such as the following: computer processing, automation technology (including knowledge-based systems), networking and data transport, software and hardware engineering, and human-interface technology. The proposed Ground Information System has the following major features: unified architecture from the front-end area to the end user; open-systems standards to achieve interoperability; DSN production of level 0 data; delivery of level 0 data from the Deep Space Communications Complex, if desired; dedicated telemetry processors for each receiver; security against unauthorized access and errors; and highly automated monitor and control.

  8. Some queuing network models of computer systems

    NASA Technical Reports Server (NTRS)

    Herndon, E. S.

    1980-01-01

    Queuing network models of a computer system operating with a single workload type are presented. Program algorithms are adapted for use on the Texas Instruments SR-52 programmable calculator. By slightly altering the algorithm to process the G and H matrices row by row instead of column by column, six devices and an unlimited job/terminal population could be handled on the SR-52. Techniques are also introduced for handling a simple load dependent server and for studying interactive systems with fixed multiprogramming limits.
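
    As an illustration of the kind of closed queuing network computation such models involve, the following sketch implements exact Mean Value Analysis for a single-class network. It is a standard textbook algorithm shown only for orientation; the paper works with G and H matrices (a convolution-style formulation) rather than this code.

      # A minimal sketch of exact Mean Value Analysis for a closed, single-class
      # queuing network; illustrative only, not the paper's SR-52 algorithm.
      def mva(service_demands, num_jobs):
          """service_demands: per-device service demand D_k (seconds per job).
          Returns (throughput, per-device mean queue lengths)."""
          K = len(service_demands)
          queue = [0.0] * K
          throughput = 0.0
          for n in range(1, num_jobs + 1):
              # Mean residence time at each device with n jobs in the system.
              resid = [service_demands[k] * (1.0 + queue[k]) for k in range(K)]
              throughput = n / sum(resid)                  # system throughput X(n)
              queue = [throughput * r for r in resid]      # Little's law per device
          return throughput, queue

      X, q = mva([0.05, 0.03, 0.02], num_jobs=10)          # e.g. CPU and two disks
      print("throughput %.2f jobs/s, mean queue lengths %s"
            % (X, [round(v, 2) for v in q]))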

  9. Functional Network Dynamics of the Language System

    PubMed Central

    Chai, Lucy R.; Mattar, Marcelo G.; Blank, Idan Asher; Fedorenko, Evelina; Bassett, Danielle S.

    2016-01-01

    During linguistic processing, a set of brain regions on the lateral surfaces of the left frontal, temporal, and parietal cortices exhibit robust responses. These areas display highly correlated activity while a subject rests or performs a naturalistic language comprehension task, suggesting that they form an integrated functional system. Evidence suggests that this system is spatially and functionally distinct from other systems that support high-level cognition in humans. Yet, how different regions within this system might be recruited dynamically during task performance is not well understood. Here we use network methods, applied to fMRI data collected from 22 human subjects performing a language comprehension task, to reveal the dynamic nature of the language system. We observe the presence of a stable core of brain regions, predominantly located in the left hemisphere, that consistently coactivate with one another. We also observe the presence of a more flexible periphery of brain regions, predominantly located in the right hemisphere, that coactivate with different regions at different times. However, the language functional ROIs in the angular gyrus and the anterior temporal lobe were notable exceptions to this trend. By highlighting the temporal dimension of language processing, these results suggest a trade-off between a region's specialization and its capacity for flexible network reconfiguration. PMID:27550868

  10. Network video transmission system based on SOPC

    NASA Astrophysics Data System (ADS)

    Zhang, Zhengbing; Deng, Huiping; Xia, Zhenhua

    2008-03-01

    Video systems have been widely used in many fields such as conferencing, public security, military affairs and medical treatment. With the rapid development of FPGAs, SOPC has received great attention in the area of image and video processing in recent years. A network video transmission system based on SOPC is proposed in this paper for the purpose of video acquisition, video encoding and network transmission. The hardware platform used to design the system is Altera's DE2 SOPC board, which includes an EP2C35F672C6 FPGA chip, an Ethernet controller and a video I/O interface. An IP core, the Nios II embedded processor, is used as the CPU of the system. In addition, a hardware module for format conversion of video data, and another module implementing Motion-JPEG, have been designed with Verilog HDL. These two modules are attached to the Nios II processor as peripherals through the Avalon bus. Simulation results show that the two modules work as expected. uClinux, including the TCP/IP protocol stack as well as the Ethernet controller driver, is chosen as the embedded operating system, and an application program scheme is proposed.

  11. Network Penetration Testing and Research

    NASA Technical Reports Server (NTRS)

    Murphy, Brandon F.

    2013-01-01

    This paper focuses on research and testing done on penetrating a network for security purposes. This research will provide the IT security office with new methods of attacks across and against a company's network, as well as introduce new platforms and software that can be used to better protect against such attacks. Throughout this paper, testing and research has been done on two different Linux-based operating systems for attacking and compromising a Windows-based host computer. BackTrack 5 and BlackBuntu (Linux-based penetration testing operating systems) are two different "attacker" computers that will attempt to plant viruses and/or exploits on a host Windows 7 operating system, as well as try to retrieve information from the host. On each Linux OS (BackTrack 5 and BlackBuntu) there is penetration testing software which provides the necessary tools to create exploits that can compromise a Windows system as well as other operating systems. This paper focuses on two main methods of deploying exploits onto a host computer in order to retrieve information from a compromised system. One method of deployment that was tested is known as a "social engineering" exploit. This type of method requires interaction from an unsuspecting user. With this user interaction, a deployed exploit may allow a malicious user to gain access to the unsuspecting user's computer as well as the network that the computer is connected to. Due to more advanced security settings and antivirus protection and detection, this method is easily identified and defended against. The second method of exploit deployment is the method mainly focused upon within this paper. This method required extensive research on the best way to compromise a security-enabled protected network. Once a network has been compromised, any and all devices connected to that network have the potential to be compromised as well. With a compromised

  12. LIBRA: An inexpensive geodetic network densification system

    NASA Technical Reports Server (NTRS)

    Fliegel, H. F.; Gantsweg, M.; Callahan, P. S.

    1975-01-01

    A description is given of the Libra (Locations Interposed by Ranging Aircraft) system, by which geodesy and earth strain measurements can be performed rapidly and inexpensively at several hundred auxiliary points with respect to a few fundamental control points established by any other technique, such as radio interferometry or satellite ranging. This low-cost means of extending the accuracy of space-age geodesy to local surveys provides speed and spatial resolution useful, for example, for earthquake hazard estimation. Libra may be combined with an existing system, Aries (Astronomical Radio Interferometric Earth Surveying), to provide a balanced system adequate to meet geophysical needs and applicable to conventional surveying. The basic hardware design is outlined and specifications are defined, and the need for network densification is described. The following activities required to implement the proposed Libra system are also described: hardware development, data reduction, tropospheric calibrations, schedule of development and estimated costs.

  13. Synthetic gene networks in plant systems.

    PubMed

    Junker, Astrid; Junker, Björn H

    2012-01-01

    Synthetic biology methods are routinely applied in the plant field as in other eukaryotic model systems. Several synthetic components have been developed in plants and an increasing number of studies report on the assembly into functional synthetic genetic circuits. This chapter gives an overview of the existing plant genetic networks and describes in detail the application of two systems for inducible gene expression. The ethanol-inducible system relies on the ethanol-responsive interaction of the AlcA transcriptional activator and the AlcR receptor resulting in the transcription of the gene of interest (GOI). In comparison, the translational fusion of GOI and the glucocorticoid receptor (GR) domain leads to the dexamethasone-dependent nuclear translocation of the GOI::GR protein. This chapter contains detailed protocols for the application of both systems in the model plants potato and Arabidopsis, respectively.

  14. Complex network synchronization of chaotic systems with delay coupling

    SciTech Connect

    Theesar, S. Jeeva Sathya; Ratnavelu, K.

    2014-03-05

    The study of complex networks enables us to understand the collective behavior of the interconnected elements and provides vast real-time applications from biology to laser dynamics. In this paper, synchronization of a complex network of chaotic systems has been studied. Every identical node in the complex network is assumed to be in Lur’e system form. In particular, delayed coupling has been assumed along with identical sector-bounded nonlinear systems which are interconnected over a network topology.
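
    A hedged sketch of the kind of model this setup describes (the symbols below are generic conventions, not notation taken from the paper): each node i follows Lur'e-type dynamics with a sector-bounded nonlinearity, and the nodes exchange delayed state information over the network,

    \[
    \dot{x}_i(t) = A x_i(t) + B\,\varphi\bigl(C x_i(t)\bigr) + c \sum_{j=1}^{N} G_{ij}\,\Gamma\, x_j(t-\tau), \qquad i = 1,\dots,N,
    \]

    where \(\varphi\) lies in a sector \([0,k]\), \(G=(G_{ij})\) encodes the coupling topology, \(\Gamma\) is the inner coupling matrix, \(c\) the coupling strength, and \(\tau\) the coupling delay; synchronization means \(\|x_i(t)-x_j(t)\|\to 0\) for all pairs \(i,j\).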

  15. An algorithm of a real time image tracking system using a camera with pan/tilt motors on an embedded system

    NASA Astrophysics Data System (ADS)

    Kim, Hie-Sik; Nam, Chul; Ha, Kwan-Yong; Ayurzana, Odgeral; Kwon, Jong-Won

    2005-12-01

    Embedded systems have been applied to many fields, including households and industrial sites. User interface technology with a simple on-screen display has been implemented more and more, user demands are increasing, and the range of applicable fields is growing due to the high penetration rate of the Internet. Therefore, the demand for embedded systems tends to rise. An embedded system for image tracking was implemented. This system uses a fixed IP address for reliable server operation on TCP/IP networks. Using a USB camera on the embedded Linux system, real-time broadcasting of video images on the Internet was developed. The digital camera is connected to the USB host port of the embedded board. All input images from the video camera are continuously stored as compressed JPEG files in a directory on the Linux web server. Each frame from the web camera is compared with the previous one to measure a displacement vector, using a block matching algorithm and an edge detection algorithm for fast processing. The displacement vector is then used for pan/tilt motor control through an RS232 serial cable. The embedded board utilizes the S3C2410 MPU, based on the ARM920T core from Samsung. The operating system is an embedded Linux kernel with a mounted root file system. The stored images are sent to the client PC through a web browser, using the network functions of Linux and a program based on the TCP/IP protocol.
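
    As a rough sketch of the displacement-vector step described above (a plain full-search block matcher; the function and parameter names are illustrative and the edge-detection speed-up is omitted):

    ```python
    import numpy as np

    def block_matching(prev_frame, curr_frame, block=16, search=8):
        """Estimate a global displacement vector between two grayscale frames.

        Simplified full-search block matching: for each block of the previous
        frame, find the best match (minimum sum of absolute differences) in the
        current frame within +/- `search` pixels, then average the motion vectors.
        """
        h, w = prev_frame.shape
        vectors = []
        for y in range(0, h - block, block):
            for x in range(0, w - block, block):
                ref = prev_frame[y:y + block, x:x + block].astype(np.int32)
                best, best_dxdy = None, (0, 0)
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        yy, xx = y + dy, x + dx
                        if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                            continue
                        cand = curr_frame[yy:yy + block, xx:xx + block].astype(np.int32)
                        sad = np.abs(ref - cand).sum()
                        if best is None or sad < best:
                            best, best_dxdy = sad, (dx, dy)
                vectors.append(best_dxdy)
        return np.mean(vectors, axis=0)  # (dx, dy) averaged over all blocks
    ```

    The averaged (dx, dy) would then be scaled into pan/tilt commands and written to the RS232 serial port.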

  16. Evaluation of a Cyber Security System for Hospital Network.

    PubMed

    Faysel, Mohammad A

    2015-01-01

    Most of the cyber security systems use simulated data in evaluating their detection capabilities. The proposed cyber security system utilizes real hospital network connections. It uses a probabilistic data mining algorithm to detect anomalous events and takes appropriate response in real-time. On an evaluation using real-world hospital network data consisting of incoming network connections collected for a 24-hour period, the proposed system detected 15 unusual connections which were undetected by a commercial intrusion prevention system for the same network connections. Evaluation of the proposed system shows a potential to secure protected patient health information on a hospital network. PMID:26262217

  17. LXtoo: an integrated live Linux distribution for the bioinformatics community

    PubMed Central

    2012-01-01

    Background Recent advances in high-throughput technologies dramatically increase biological data generation. However, many research groups lack computing facilities and specialists. This is an obstacle that remains to be addressed. Here, we present a Linux distribution, LXtoo, to provide a flexible computing platform for bioinformatics analysis. Findings Unlike most of the existing live Linux distributions for bioinformatics limiting their usage to sequence analysis and protein structure prediction, LXtoo incorporates a comprehensive collection of bioinformatics software, including data mining tools for microarray and proteomics, protein-protein interaction analysis, and computationally complex tasks like molecular dynamics. Moreover, most of the programs have been configured and optimized for high performance computing. Conclusions LXtoo aims to provide well-supported computing environment tailored for bioinformatics research, reducing duplication of efforts in building computing infrastructure. LXtoo is distributed as a Live DVD and freely available at http://bioinformatics.jnu.edu.cn/LXtoo. PMID:22813356

  18. Toda Systems, Cluster Characters, and Spectral Networks

    NASA Astrophysics Data System (ADS)

    Williams, Harold

    2016-11-01

    We show that the Hamiltonians of the open relativistic Toda system are elements of the generic basis of a cluster algebra, and in particular are cluster characters of nonrigid representations of a quiver with potential. Using cluster coordinates defined via spectral networks, we identify the phase space of this system with the wild character variety related to the periodic nonrelativistic Toda system by the wild nonabelian Hodge correspondence. We show that this identification takes the relativistic Toda Hamiltonians to traces of holonomies around a simple closed curve. In particular, this provides nontrivial examples of cluster coordinates on SL_n-character varieties for n > 2 where canonical functions associated to simple closed curves can be computed in terms of quivers with potential, extending known results in the SL_2 case.

  19. Advanced systems engineering and network planning support

    NASA Technical Reports Server (NTRS)

    Walters, David H.; Barrett, Larry K.; Boyd, Ronald; Bazaj, Suresh; Mitchell, Lionel; Brosi, Fred

    1990-01-01

    The objective of this task was to take a fresh look at the NASA Space Network Control (SNC) element for the Advanced Tracking and Data Relay Satellite System (ATDRSS) such that it can be made more efficient and responsive to the user by introducing new concepts and technologies appropriate for the 1997 timeframe. In particular, it was desired to investigate the technologies and concepts employed in similar systems that may be applicable to the SNC. The recommendations resulting from this study include resource partitioning, on-line access to subsets of the SN schedule, fluid scheduling, increased use of demand access on the MA service, automating Inter-System Control functions using monitor by exception, increase automation for distributed data management and distributed work management, viewing SN operational control in terms of the OSI Management framework, and the introduction of automated interface management.

  1. A heterogeneous sensor network simulation system with integrated terrain data for real-time target detection in 3D space

    NASA Astrophysics Data System (ADS)

    Lin, Hong; Tanner, Steve; Rushing, John; Graves, Sara; Criswell, Evans

    2008-03-01

    Large scale sensor networks composed of many low-cost small sensors networked together with a small number of high fidelity position sensors can provide a robust, fast and accurate air defense and warning system. The team has been developing simulations of such large networks, and is now adding terrain data in an effort to provide more realistic analysis of the approach. In this work, a heterogeneous sensor network simulation system with integrated terrain data for real-time target detection in a three-dimensional environment is presented. The sensor network can be composed of large numbers of low fidelity binary and bearing-only sensors, and small numbers of high fidelity position sensors, such as radars. The binary and bearing-only sensors are randomly distributed over a large geographic region, while the position sensors are distributed evenly. The elevations of the sensors are determined through the use of the DTED Level 0 dataset. The targets are located through fusing measurement information from all types of sensors modeled by the simulation. The network simulation utilizes the same search-based optimization algorithm as in our previous two-dimensional sensor network simulation with some significant modifications. The fusion algorithm is parallelized using a spatial decomposition approach: the entire surveillance area is divided into small regions and each region is assigned to one compute node. Each node processes sensor measurements and terrain data only for the assigned sub-region. A master process combines the information from all the compute nodes to get the overall network state. The simulation results have indicated that the distributed fusion algorithm is efficient enough so that an optimal solution can be reached before the arrival of the next sensor data with a reasonable time interval, and real-time target detection can be achieved. The simulation was performed on a Linux cluster with communication between nodes facilitated by the Message Passing Interface
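
    A minimal sketch of the spatial-decomposition idea described above, assuming mpi4py; the region bounds, synthetic reports, and fuse_region placeholder are illustrative, not the simulation's actual interfaces:

    ```python
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Hypothetical surveillance area (metres), split into one x-strip per compute node.
    XMIN, XMAX, YMAX, ZMAX = 0.0, 100_000.0, 100_000.0, 10_000.0
    x_lo = XMIN + rank * (XMAX - XMIN) / size
    x_hi = XMIN + (rank + 1) * (XMAX - XMIN) / size

    # Synthetic stand-in for one batch of sensor reports: (x, y, z) target estimates.
    rng = np.random.default_rng(0)
    reports = rng.uniform([XMIN, 0.0, 0.0], [XMAX, YMAX, ZMAX], size=(1000, 3))

    def fuse_region(measurements):
        """Placeholder fusion step: average the reports that fall in this sub-region."""
        return measurements.mean(axis=0) if len(measurements) else None

    # Each node handles only the reports inside its own strip ...
    local = reports[(reports[:, 0] >= x_lo) & (reports[:, 0] < x_hi)]
    local_estimate = fuse_region(local)

    # ... and the master process combines the per-region results into the network state.
    estimates = comm.gather(local_estimate, root=0)
    if rank == 0:
        network_state = [e for e in estimates if e is not None]
        print(f"{len(network_state)} sub-regions reported an estimate")
    ```

    Run with, for example, `mpiexec -n 4 python fuse_sketch.py`; in practice each rank would receive a live stream of measurements rather than a synthetic batch.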

  2. NMESys: An expert system for network fault detection

    NASA Technical Reports Server (NTRS)

    Nelson, Peter C.; Warpinski, Janet

    1991-01-01

    The problem of network management is becoming an increasingly difficult and challenging task. It is very common today to find heterogeneous networks consisting of many different types of computers, operating systems, and protocols. The complexity of implementing a network with this many components is difficult enough, while the maintenance of such a network is an even larger problem. A prototype network management expert system, NMESys, was implemented in the C Language Integrated Production System (CLIPS). NMESys concentrates on solving some of the critical problems encountered in managing a large network. The major goal of NMESys is to provide a network operator with an expert system tool to quickly and accurately detect hard failures and potential failures, and to minimize or eliminate user down time in a large network.

  3. Phase-space networks of geometrically frustrated systems.

    PubMed

    Han, Yilong

    2009-11-01

    We illustrate a network approach to the phase-space study by using two geometrical frustration models: antiferromagnet on triangular lattice and square ice. Their highly degenerate ground states are mapped as discrete networks such that the quantitative network analysis can be applied to phase-space studies. The resulting phase spaces share some common features and establish a class of complex networks with unique Gaussian spectral densities. Although phase-space networks are heterogeneously connected, the systems are still ergodic due to the random Poisson processes. This network approach can be generalized to phase spaces of some other complex systems.

  4. Phase-space networks of geometrically frustrated systems

    NASA Astrophysics Data System (ADS)

    Han, Yilong

    2009-11-01

    We illustrate a network approach to the phase-space study by using two geometrical frustration models: antiferromagnet on triangular lattice and square ice. Their highly degenerate ground states are mapped as discrete networks such that the quantitative network analysis can be applied to phase-space studies. The resulting phase spaces share some common features and establish a class of complex networks with unique Gaussian spectral densities. Although phase-space networks are heterogeneously connected, the systems are still ergodic due to the random Poisson processes. This network approach can be generalized to phase spaces of some other complex systems.

  5. Simulation of large systems with neural networks

    SciTech Connect

    Paez, T.L.

    1994-09-01

    Artificial neural networks (ANNs) have been shown capable of simulating the behavior of complex, nonlinear, systems, including structural systems. Under certain circumstances, it is desirable to simulate structures that are analyzed with the finite element method. For example, when we perform a probabilistic analysis with the Monte Carlo method, we usually perform numerous (hundreds or thousands of) repetitions of a response simulation with different input and system parameters to estimate the chance of specific response behaviors. In such applications, efficiency in computation of response is critical, and response simulation with ANNs can be valuable. However, finite element analyses of complex systems involve the use of models with tens or hundreds of thousands of degrees of freedom, and ANNs are practically limited to simulations that involve far fewer variables. This paper develops a technique for reducing the amount of information required to characterize the response of a general structure. We show how the reduced information can be used to train a recurrent ANN. Then the trained ANN can be used to simulate the reduced behavior of the original system, and the reduction transformation can be inverted to provide a simulation of the original system. A numerical example is presented.
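
    A toy, numpy-only sketch of the reduce-train-invert idea (synthetic data; a linear one-step predictor stands in for the recurrent ANN, and all names are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic "finite element" response history: n_dof degrees of freedom, n_t steps.
    n_dof, n_t, r = 500, 400, 5
    t = np.linspace(0.0, 10.0, n_t)
    modes = rng.standard_normal((n_dof, r))
    coords = np.stack([np.sin((k + 1) * t) for k in range(r)], axis=0)
    X = modes @ coords + 0.01 * rng.standard_normal((n_dof, n_t))   # (n_dof, n_t)

    # 1) Reduction: keep the leading r singular vectors of the response.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Phi = U[:, :r]                      # reduction transformation
    q = Phi.T @ X                       # reduced coordinates, shape (r, n_t)

    # 2) Train a one-step predictor q[t+1] ~ A q[t] (stand-in for the recurrent ANN).
    At, *_ = np.linalg.lstsq(q[:, :-1].T, q[:, 1:].T, rcond=None)   # least-squares A.T

    # 3) Simulate in reduced space, then invert the reduction to full response.
    q_sim = np.empty_like(q)
    q_sim[:, 0] = q[:, 0]
    for k in range(n_t - 1):
        q_sim[:, k + 1] = At.T @ q_sim[:, k]
    X_sim = Phi @ q_sim                  # approximate full-order response
    ```

    The point of the sketch is only the workflow: project the full response onto a few modes, learn the recurrence in that small space, and map the simulated reduced behavior back to the original coordinates.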

  6. DVD-RAM-based network storage system

    NASA Astrophysics Data System (ADS)

    Ura, Tetsuya; Tanabe, Takaya; Yamamoto, Manabu

    2000-04-01

    A network storage system with a high transfer rate and high capacity has been developed. This system, DVD-RAIL (Digital Versatile Disk-Redundant Array of Inexpensive Libraries), consists of six small DVD-RAM libraries and a RAILcontroller, which uses the RAID4 algorithm. Each library has two DVD-RAM drives, a robotic changer and a slot for storing up to 150 DVD-RAM disks. The system can handle up to 900 disks, corresponding to about 2 TB of storage. Data transfer is done in parallel from and to each library, so the transfer rate is over 6 MB/sec. The redundant architecture of RAIL provides high reliability, enabling the system to continue working even if an error occurs in one of the libraries. The RAILcontroller controls all the allocation and parallel transmission processes, so the system behaves as a large single library. Evaluation of the system showed that it can distribute high-definition moving pictures at over 20 Mbps and that a transfer rate of over 50 Mbps may be feasible.
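
    As a toy illustration of a RAID4-style layout (not the RAILcontroller's actual code): data is striped across five libraries and a sixth holds XOR parity, so the contents of any single failed library can be rebuilt:

    ```python
    from functools import reduce

    DATA_LIBRARIES = 5          # five libraries hold data chunks, one holds parity

    def stripe_with_parity(block: bytes):
        """Split a block across the data libraries and compute an XOR parity chunk."""
        size = -(-len(block) // DATA_LIBRARIES)          # ceiling division
        chunks = [block[i * size:(i + 1) * size].ljust(size, b"\0")
                  for i in range(DATA_LIBRARIES)]
        parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), chunks)
        return chunks, parity

    def rebuild(chunks, parity, lost_index):
        """Recover the chunk of a failed library from the survivors plus parity."""
        survivors = [c for i, c in enumerate(chunks) if i != lost_index] + [parity]
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), survivors)

    chunks, parity = stripe_with_parity(b"high-definition video payload ...")
    assert rebuild(chunks, parity, lost_index=2) == chunks[2]
    ```

    Parallel reads and writes of the five data chunks are what give the aggregate transfer rate, while the parity chunk provides the single-failure redundancy described above.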

  7. Deep Space Network information system architecture study

    NASA Technical Reports Server (NTRS)

    Beswick, C. A.; Markley, R. W. (Editor); Atkinson, D. J.; Cooper, L. P.; Tausworthe, R. C.; Masline, R. C.; Jenkins, J. S.; Crowe, R. A.; Thomas, J. L.; Stoloff, M. J.

    1992-01-01

    The purpose of this article is to describe an architecture for the DSN information system in the years 2000-2010 and to provide guidelines for its evolution during the 1990's. The study scope is defined to be from the front-end areas at the antennas to the end users (spacecraft teams, principal investigators, archival storage systems, and non-NASA partners). The architectural vision provides guidance for major DSN implementation efforts during the next decade. A strong motivation for the study is an expected dramatic improvement in information-systems technologies--i.e., computer processing, automation technology (including knowledge-based systems), networking and data transport, software and hardware engineering, and human-interface technology. The proposed Ground Information System has the following major features: unified architecture from the front-end area to the end user; open-systems standards to achieve interoperability; DSN production of level 0 data; delivery of level 0 data from the Deep Space Communications Complex, if desired; dedicated telemetry processors for each receiver; security against unauthorized access and errors; and highly automated monitor and control.

  8. Transformation of legacy network management system to service oriented architecture

    NASA Astrophysics Data System (ADS)

    Sathyan, Jithesh; Shenoy, Krishnananda

    2007-09-01

    Service providers today are facing the challenge of operating and maintaining multiple networks, based on multiple technologies. Network Management System (NMS) solutions are being used to manage these networks. However, the NMS is tightly coupled with Element or the Core network components. Hence, there are multiple NMS solutions for heterogeneous networks. Current network management solutions are targeted at a variety of independent networks. The widespread popularity of IP Multimedia Subsystem (IMS) is a clear indication that all of these independent networks will be integrated into a single IP-based infrastructure referred to as Next Generation Networks (NGN) in the near future. The services, network architectures and traffic patterns in NGN will dramatically differ from the current networks. The heterogeneity and complexity in NGN including concepts like Fixed Mobile Convergence will bring a number of challenges to network management. The high degree of complexity accompanying the network element technology necessitates network management systems (NMS) which can utilize this technology to provide more service interfaces while hiding the inherent complexity. As operators begin to add new networks and expand existing networks to support new technologies and products, the necessity of scalable, flexible and functionally rich NMS systems arises. Another important factor influencing NMS architecture is mergers and acquisitions among the key vendors. Ease of integration is a key impediment in the traditional hierarchical NMS architecture. These requirements trigger the need for an architectural framework that will address the NGNM (Next Generation Network Management) issues seamlessly. This paper presents a unique perspective of bringing service oriented architecture (SOA) to legacy network management systems (NMS). It advocates a staged approach in transforming a legacy NMS to SOA. The architecture at each stage is detailed along with the technical advantages and

  9. Requirements for Linux Checkpoint/Restart

    SciTech Connect

    Duell, Jason; Hargrove, Paul H.; Roman, Eric S.

    2002-02-26

    This document has 4 main objectives: (1) Describe data to be saved and restored during checkpoint/restart; (2) Describe how checkpoint/restart is used within the context of the Scalable Systems environment, and MPI applications; (3) Identify issues for a checkpoint/restart implementation; and (4) Sketch the architecture of a checkpoint/restart implementation.

  10. Technology Network Ties: Network Services and Technology Programs for New York State's Educational System.

    ERIC Educational Resources Information Center

    New York State Education Dept., Albany. Office of Elementary and Secondary Education Planning, Testing, and Technological Services.

    The New York State Technology Network Ties (TNT) system is a statewide telecommunications network which consists of computers, telephone lines, and telecommunications hardware and software. This network links school districts, Boards of Cooperative Educational Services (BOCES), libraries, other educational institutions, and the State Education…

  11. Neural network system for traffic flow management

    NASA Astrophysics Data System (ADS)

    Gilmore, John F.; Elibiary, Khalid J.; Petersson, L. E. Rickard

    1992-09-01

    Atlanta will be the home of several special events during the next five years ranging from the 1996 Olympics to the 1994 Super Bowl. When combined with the existing special events (Braves, Falcons, and Hawks games, concerts, festivals, etc.), the need to effectively manage traffic flow from surface streets to interstate highways is apparent. This paper describes a system for traffic event response and management for intelligent navigation utilizing signals (TERMINUS) developed at Georgia Tech for adaptively managing special event traffic flows in the Atlanta, Georgia area. TERMINUS (the original name given Atlanta, Georgia based upon its role as a rail line terminating center) is an intelligent surface street signal control system designed to manage traffic flow in Metro Atlanta. The system consists of three components. The first is a traffic simulation of the downtown Atlanta area around Fulton County Stadium that models the flow of traffic when a stadium event lets out. Parameters for the surrounding area include modeling for events during various times of day (such as rush hour). The second component is a computer graphics interface with the simulation that shows the traffic flows achieved based upon intelligent control system execution. The final component is the intelligent control system that manages surface street light signals based upon feedback from control sensors that dynamically adapt the intelligent controller's decision making process. The intelligent controller is a neural network model that allows TERMINUS to control the configuration of surface street signals to optimize the flow of traffic away from special events.

  12. Network versus portfolio structure in financial systems

    NASA Astrophysics Data System (ADS)

    Kobayashi, Teruyoshi

    2013-10-01

    The question of how to stabilize financial systems has attracted considerable attention since the global financial crisis of 2007-2009. Recently, Beale et al. [Proc. Natl. Acad. Sci. USA 108, 12647 (2011)] demonstrated that higher portfolio diversity among banks would reduce systemic risk by decreasing the risk of simultaneous defaults at the expense of a higher likelihood of individual defaults. In practice, however, a bank default has an externality in that it undermines other banks’ balance sheets. This paper explores how each of these different sources of risk, simultaneity risk and externality, contributes to systemic risk. The results show that the allocation of external assets that minimizes systemic risk varies with the topology of the financial network as long as asset returns have negative correlations. In the model, a well-known centrality measure, PageRank, reflects an appropriately defined “infectiveness” of a bank. An important result is that the most infective bank need not always be the safest bank. Under certain circumstances, the most infective node should act as a firewall to prevent large-scale collective defaults. The introduction of a counteractive portfolio structure will significantly reduce systemic risk.

  13. Stoichiometric network theory for nonequilibrium biochemical systems.

    PubMed

    Qian, Hong; Beard, Daniel A; Liang, Shou-dan

    2003-02-01

    We introduce the basic concepts and develop a theory for nonequilibrium steady-state biochemical systems applicable to analyzing large-scale complex isothermal reaction networks. In terms of the stoichiometric matrix, we demonstrate both Kirchhoff's flux law Σ_ℓ J_ℓ = 0 over a biochemical species, and the potential law Σ_ℓ μ_ℓ = 0 over a reaction loop. They reflect mass and energy conservation, respectively. For each reaction, its steady-state flux J can be decomposed into forward and backward one-way fluxes J = J⁺ − J⁻, with chemical potential difference Δμ = RT ln(J⁻/J⁺). The product −JΔμ gives the isothermal heat dissipation rate, which is necessarily non-negative according to the second law of thermodynamics. The stoichiometric network theory (SNT) embodies all of the relevant fundamental physics. Knowing J and Δμ of a biochemical reaction, a conductance can be computed which directly reflects the level of gene expression for the particular enzyme. For sufficiently small flux a linear relationship between J and Δμ can be established as the linear flux-force relation in irreversible thermodynamics, analogous to Ohm's law in electrical circuits.
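
    Restating the relations above in display form (same content, no new results):

    \[
    J = J^{+} - J^{-}, \qquad \Delta\mu = RT \ln\frac{J^{-}}{J^{+}}, \qquad \dot{q} = -J\,\Delta\mu = RT\,(J^{+} - J^{-})\ln\frac{J^{+}}{J^{-}} \;\ge\; 0 ,
    \]

    and, expanding to first order for a near-equilibrium reaction with \(|\Delta\mu| \ll RT\), the flux reduces to the linear flux-force form \(J \approx -(J^{\mathrm{eq}}/RT)\,\Delta\mu\), where \(J^{\mathrm{eq}}\) denotes the common value of the one-way fluxes at equilibrium.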

  14. Simple Linux Utility for Resource Management

    SciTech Connect

    Jette, M.

    2009-09-09

    SLURM is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small computer clusters. As a cluster resource manager, SLURM has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates conflicting requests for resources by managing a queue of pending work.

  15. Simple Linux Utility for Resource Management

    2009-09-09

    SLURM is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small computer clusters. As a cluster resource manager, SLURM has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates conflicting requests for resources by managing a queue of pending work.

  16. Simple Linux Utility for Resource Management

    2008-03-10

    SLURM is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small computer clusters. As a cluster resource manager, SLURM has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates conflicting requests for resources by managing a queue of pending work.

  17. Simple Linux Utility for Resource Management

    SciTech Connect

    Ali, Amjad Majid; Albert, Don; Andersson, Par; Artiaga, Ernest; Auble, Daniel; Balle, Susanne; Blanchard, Anton; Cao, Hongjia; Christians, Daniel; Civario, Gilles; Clouston, Chuck; Dunlap, Chris; Ekstrom, Joseph; Garlick, James; Grondona, Mark; Hatazaki, Takao; Holmes, Christopher; Huff, Nathan; Jackson, David; Jette, Morris; Johnson, Greg; King, Jason; Kritkausky, Nancy; Lee, Puenlap; Li, Bernard; McDougall, Steven; Mecozzi, Donna; Morrone, Christopher; Munt, Pere; O'Sullivan, Bryan; Oliva, Gennaro; Palermo, Daniel; Phung, Daniel; Pittman, Ashley; Riebs, Andrew; Sacerdoti, Federico; Squyers, Jeff; Tamraparni, Prashanth; Tew, Kevin; Windley, Jay; Wunderlin, Anne-Marie

    2008-03-10

    SLURM is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small computer clusters. As a cluster resource manager, SLURM has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates conflicting requests for resources by managing a queue of pending work.

  18. Efficient Parallel Engineering Computing on Linux Workstations

    NASA Technical Reports Server (NTRS)

    Lou, John Z.

    2010-01-01

    A C software module has been developed that creates lightweight processes (LWPs) dynamically to achieve parallel computing performance in a variety of engineering simulation and analysis applications to support NASA and DoD project tasks. The required interface between the module and the application it supports is simple, minimal and almost completely transparent to the user applications, and it can achieve nearly ideal computing speed-up on multi-CPU engineering workstations of all operating system platforms. The module can be integrated into an existing application (C, C++, Fortran and others) either as part of a compiled module or as a dynamically linked library (DLL).

  19. An expert system application for network intrusion detection

    SciTech Connect

    Jackson, K.A.; Dubois, D.H.; Stallings, C.A.

    1991-01-01

    The paper describes the design of a prototype intrusion detection system for the Los Alamos National Laboratory's Integrated Computing Network (ICN). The Network Anomaly Detection and Intrusion Reporter (NADIR) differs in one respect from most intrusion detection systems. It tries to address the intrusion detection problem on a network, as opposed to a single operating system. NADIR design intent was to copy and improve the audit record review activities normally done by security auditors. We wished to replace the manual review of audit logs with a near realtime expert system. NADIR compares network activity, as summarized in user profiles, against expert rules that define network security policy, improper or suspicious network activities, and normal network and user activity. When it detects deviant (anomalous) behavior, NADIR alerts operators in near realtime, and provides tools to aid in the investigation of the anomalous event. 15 refs., 2 figs.

  20. System Leadership, Networks and the Question of Power

    ERIC Educational Resources Information Center

    Hatcher, Richard

    2008-01-01

    The author's argument revolves around the relationships between government agendas and the agency of teachers, and between them the intermediary role of management as "system leaders" of network forms. Network is a pluralistic concept: networks can serve very different educational-political interests. They offer the potential of new participatory…

  1. CMA Member Survey: Network Management Systems Showing Little Improvement.

    ERIC Educational Resources Information Center

    Lusa, John M.

    1998-01-01

    Discusses results of a survey of 112 network and telecom managers--members of the Communications Managers Association (CMA)--to identify problems relating to the operation of large enterprise networks. Results are presented in a table under categories of: respondent profile; network management systems; carrier management; enterprise management;…

  2. The network-enabled optimization system server

    SciTech Connect

    Mesnier, M.P.

    1995-08-01

    Mathematical optimization is a technology under constant change and advancement, drawing upon the most efficient and accurate numerical methods to date. Further, these methods can be tailored for a specific application or generalized to accommodate a wider range of problems. This perpetual change creates an ever growing field, one that is often difficult to stay abreast of. Hence, the impetus behind the Network-Enabled Optimization System (NEOS) server, which aims to provide users, both novice and expert, with a guided tour through the expanding world of optimization. The NEOS server is responsible for bridging the gap between users and the optimization software they seek. More specifically, the NEOS server will accept optimization problems over the Internet and return a solution to the user either interactively or by e-mail. This paper discusses the current implementation of the server.

  3. Famine Early Warning System Network (FEWS NET)

    USGS Publications Warehouse

    Verdin, James P.

    2006-01-01

    The FEWS NET mission is to identify potentially food-insecure conditions early through the provision of timely and analytical hazard and vulnerability information. U.S. Government decision-makers act on this information to authorize mitigation and response activities. The U.S. Geological Survey (USGS) FEWS NET provides tools and data for monitoring and forecasting the incidence of drought and flooding to identify shocks to the food supply system that could lead to famine. Historically focused on Africa, the network has expanded to global coverage. FEWS NET implementing partners include the USGS, National Aeronautics and Space Administration (NASA), National Oceanic and Atmospheric Administration (NOAA), United States Agency for International Development (USAID), United States Department of Agriculture (USDA), and Chemonics International.

  4. [Systemic inflammatory rheumatic diseases competence network].

    PubMed

    Rufenach, C; Burmester, G-R; Zeidler, H; Radbruch, A

    2004-04-01

    The foundation of the competence network for rheumatology, which has been funded by the "Bundesministerium für Bildung und Forschung" (BMBF) since 1999, succeeded in creating a unique research structure in Germany: medical doctors and scientists from six university rheumatology centres (Berlin, Düsseldorf, Erlangen, Freiburg, Hannover and Lübeck/Bad Bramstedt) work closely together with scientists doing basic research at the Deutsches Rheuma-Forschungszentrum (DRFZ), with rheumatological hospitals, rehabilitation clinics, and rheumatologists. Jointly they are searching for causes of systemic inflammatory rheumatic diseases and trying to improve therapies, nationwide and with an interdisciplinary approach. The primary objective of this collaboration is to transfer new scientific insights more rapidly in order to improve methods for diagnosis and patient treatment.

  5. Network quotients: Structural skeletons of complex systems

    NASA Astrophysics Data System (ADS)

    Xiao, Yanghua; MacArthur, Ben D.; Wang, Hui; Xiong, Momiao; Wang, Wei

    2008-10-01

    A defining feature of many large empirical networks is their intrinsic complexity. However, many networks also contain a large degree of structural repetition. An immediate question then arises: can we characterize essential network complexity while excluding structural redundancy? In this article we utilize inherent network symmetry to collapse all redundant information from a network, resulting in a coarse graining which we show to carry the essential structural information of the “parent” network. In the context of algebraic combinatorics, this coarse-graining is known as the “quotient.” We systematically explore the theoretical properties of network quotients and summarize key statistics of a variety of “real-world” quotients with respect to those of their parent networks. In particular, we find that quotients can be substantially smaller than their parent networks yet typically preserve various key functional properties such as complexity (heterogeneity and hub vertices) and communication (diameter and mean geodesic distance), suggesting that quotients constitute the essential structural skeletons of their parent networks. We summarize with a discussion of potential uses of quotients in analysis of biological regulatory networks and ways in which using quotients can reduce the computational complexity of network algorithms.
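
    A simplified, hedged sketch of the coarse-graining idea: here vertices with identical neighbourhoods are merged (a crude stand-in for the full automorphism-orbit quotient studied in the article), with illustrative names throughout:

    ```python
    from collections import defaultdict

    def quotient_by_equivalence(adjacency):
        """Collapse vertices with identical neighbour sets into one class vertex.

        `adjacency` maps each vertex to a set of neighbours. Vertices whose
        neighbour sets coincide are merged, and edges are kept between classes.
        """
        signature = {v: frozenset(nbrs) for v, nbrs in adjacency.items()}
        classes = defaultdict(list)
        for v, sig in signature.items():
            classes[sig].append(v)
        rep = {v: min(members) for members in classes.values() for v in members}

        quotient = defaultdict(set)
        for v, nbrs in adjacency.items():
            for u in nbrs:
                if rep[u] != rep[v]:
                    quotient[rep[v]].add(rep[u])
        return dict(quotient), classes

    # Toy star graph: leaves 1..4 are structurally equivalent and collapse together.
    g = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
    q, cls = quotient_by_equivalence(g)   # q == {0: {1}, 1: {0}}, a two-vertex skeleton
    ```

    The five-vertex star collapses to a two-vertex skeleton while its hub-and-leaf structure survives, which is the flavour of "smaller quotient, same structural essentials" the article quantifies for real networks.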

  6. System and method for generating a relationship network

    DOEpatents

    Franks, Kasian; Myers, Cornelia A.; Podowski, Raf M.

    2011-07-26

    A computer-implemented system and process for generating a relationship network is disclosed. The system provides a set of data items to be related and generates variable length data vectors to represent the relationships between the terms within each data item. The system can be used to generate a relationship network for documents, images, or any other type of file. This relationship network can then be queried to discover the relationships between terms within the set of data items.
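
    A hedged toy version of the general idea, not the patented method itself: build a variable-length vector for each term from its co-occurrences within each data item, then query it for related terms:

    ```python
    from collections import Counter, defaultdict

    documents = [
        "linux kernel network driver",
        "network intrusion detection system",
        "linux cluster resource manager",
    ]

    # One variable-length vector per term: counts of the terms it co-occurs with.
    vectors = defaultdict(Counter)
    for doc in documents:
        terms = doc.split()
        for t in terms:
            for other in terms:
                if other != t:
                    vectors[t][other] += 1

    def related(term, k=3):
        """Query the relationship network: terms most strongly linked to `term`."""
        return vectors[term].most_common(k)

    print(related("network"))   # e.g. [('linux', 1), ('kernel', 1), ('driver', 1)]
    ```

    The same construction applies to any tokenizable data item (documents, image metadata, or other files), which is what makes the resulting relationship network queryable across item types.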

  7. System and method for generating a relationship network

    DOEpatents

    Franks, Kasian; Myers, Cornelia A; Podowski, Raf M

    2015-05-05

    A computer-implemented system and process for generating a relationship network is disclosed. The system provides a set of data items to be related and generates variable length data vectors to represent the relationships between the terms within each data item. The system can be used to generate a relationship network for documents, images, or any other type of file. This relationship network can then be queried to discover the relationships between terms within the set of data items.

  8. Environmental Sensor Networks: A revolution in the earth system science?

    NASA Astrophysics Data System (ADS)

    Hart, Jane K.; Martinez, Kirk

    2006-10-01

    Environmental Sensor Networks (ESNs) facilitate the study of fundamental processes and the development of hazard response systems. They have evolved from passive logging systems that require manual downloading, into 'intelligent' sensor networks that comprise a network of automatic sensor nodes and communications systems which actively communicate their data to a Sensor Network Server (SNS) where these data can be integrated with other environmental datasets. The sensor nodes can be fixed or mobile and range in scale appropriate to the environment being sensed. ESNs range in scale and function and we have reviewed over 50 representative examples. Large Scale Single Function Networks tend to use large single purpose nodes to cover a wide geographical area. Localised Multifunction Sensor Networks typically monitor a small area in more detail, often with wireless ad-hoc systems. Biosensor Networks use emerging biotechnologies to monitor environmental processes as well as developing proxies for immediate use. In the future, sensor networks will integrate these three elements ( Heterogeneous Sensor Networks). The communications system and data storage and integration (cyberinfrastructure) aspects of ESNs are discussed, along with current challenges which need to be addressed. We argue that Environmental Sensor Networks will become a standard research tool for future Earth System and Environmental Science. Not only do they provide a 'virtual' connection with the environment, they allow new field and conceptual approaches to the study of environmental processes to be developed. We suggest that although technological advances have facilitated these changes, it is vital that Earth Systems and Environmental Scientists utilise them.

  9. A Comparison of Geographic Information Systems, Complex Networks, and Other Models for Analyzing Transportation Network Topologies

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia (Technical Monitor); Kuby, Michael; Tierney, Sean; Roberts, Tyler; Upchurch, Christopher

    2005-01-01

    This report reviews six classes of models that are used for studying transportation network topologies. The report is motivated by two main questions. First, what can the "new science" of complex networks (scale-free, small-world networks) contribute to our understanding of transport network structure, compared to more traditional methods? Second, how can geographic information systems (GIS) contribute to studying transport networks? The report defines terms that can be used to classify different kinds of models by their function, composition, mechanism, spatial and temporal dimensions, certainty, linearity, and resolution. Six broad classes of models for analyzing transport network topologies are then explored: GIS; static graph theory; complex networks; mathematical programming; simulation; and agent-based modeling. Each class of models is defined and classified according to the attributes introduced earlier. The paper identifies some typical types of research questions about network structure that have been addressed by each class of model in the literature.

  10. High Speed Quantum Key Distribution Over Optical Fiber Network System.

    PubMed

    Ma, Lijun; Mink, Alan; Tang, Xiao

    2009-01-01

    The National Institute of Standards and Technology (NIST) has developed a number of complete fiber-based high-speed quantum key distribution (QKD) systems that include an 850 nm QKD system for a local area network (LAN), a 1310 nm QKD system for a metropolitan area network (MAN), and a 3-node quantum network controlled by a network manager. This paper discusses the key techniques used to implement these systems, which include polarization recovery, noise reduction, frequency up-conversion detection based on a periodically poled lithium niobate (PPLN) waveguide, custom high-speed data handling boards and quantum network management. Using our quantum network, a QKD secured video surveillance application has been demonstrated. Our intention is to show the feasibility and sophistication of QKD systems based on current technology. PMID:27504218

  11. DebtRank-transparency: Controlling systemic risk in financial networks

    PubMed Central

    Thurner, Stefan; Poledna, Sebastian

    2013-01-01

    Nodes in a financial network, such as banks, cannot assess the true risks associated with lending to other nodes in the network, unless they have full information on the riskiness of all other nodes. These risks can be estimated by using network metrics (such as DebtRank) of the interbank liability network. With a simple agent-based model we show that systemic risk in financial networks can be drastically reduced by increasing transparency, i.e., making the DebtRank of individual banks visible to others, and by imposing a rule that reduces interbank borrowing from systemically risky nodes. This scheme does not reduce the efficiency of the financial network, but fosters a more homogeneous risk-distribution within the system in a self-organized critical way. The reduction of systemic risk is due to a massive reduction of cascading failures in the transparent system. A regulation-policy implementation of the proposed scheme is discussed. PMID:23712454
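
    A simplified distress-propagation sketch in the spirit of DebtRank (the update rule, impact matrix, and weights below are illustrative assumptions, not the authors' implementation):

    ```python
    import numpy as np

    def debtrank_like(W, h0, v):
        """Simplified DebtRank-style distress propagation.

        W[i, j]: impact of bank i's distress on bank j (e.g. exposure of j to i
        divided by j's equity, capped at 1). h0: initial distress in [0, 1].
        v: economic weights. Each node propagates its distress exactly once,
        in the round after it first becomes distressed.
        """
        W = np.asarray(W, float)
        h = np.clip(np.asarray(h0, float), 0.0, 1.0)
        v = np.asarray(v, float) / np.sum(v)
        distressed = h > 0                 # nodes that will propagate next round
        inactive = np.zeros_like(distressed)

        while distressed.any():
            impact = W.T @ (h * distressed)
            h = np.minimum(1.0, h + impact)
            inactive = inactive | distressed
            distressed = (h > 0) & ~inactive

        # Systemic impact beyond the initial shock, weighted by economic size.
        return float(v @ h - v @ np.clip(np.asarray(h0, float), 0.0, 1.0))

    # Toy 3-bank example: bank 0 is hit and partially drags down banks 1 and 2.
    W = np.array([[0.0, 0.6, 0.3],
                  [0.2, 0.0, 0.1],
                  [0.0, 0.4, 0.0]])
    print(debtrank_like(W, h0=[1.0, 0.0, 0.0], v=[1, 1, 1]))
    ```

    A metric of this kind, made visible to all banks, is the "transparency" lever the paper combines with a borrowing rule to damp cascading failures.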

  12. DebtRank-transparency: controlling systemic risk in financial networks.

    PubMed

    Thurner, Stefan; Poledna, Sebastian

    2013-01-01

    Nodes in a financial network, such as banks, cannot assess the true risks associated with lending to other nodes in the network, unless they have full information on the riskiness of all other nodes. These risks can be estimated by using network metrics (such as DebtRank) of the interbank liability network. With a simple agent-based model we show that systemic risk in financial networks can be drastically reduced by increasing transparency, i.e., making the DebtRank of individual banks visible to others, and by imposing a rule that reduces interbank borrowing from systemically risky nodes. This scheme does not reduce the efficiency of the financial network, but fosters a more homogeneous risk-distribution within the system in a self-organized critical way. The reduction of systemic risk is due to a massive reduction of cascading failures in the transparent system. A regulation-policy implementation of the proposed scheme is discussed.

  13. DebtRank-transparency: Controlling systemic risk in financial networks

    NASA Astrophysics Data System (ADS)

    Thurner, Stefan; Poledna, Sebastian

    2013-05-01

    Nodes in a financial network, such as banks, cannot assess the true risks associated with lending to other nodes in the network, unless they have full information on the riskiness of all other nodes. These risks can be estimated by using network metrics (such as DebtRank) of the interbank liability network. With a simple agent-based model we show that systemic risk in financial networks can be drastically reduced by increasing transparency, i.e., making the DebtRank of individual banks visible to others, and by imposing a rule that reduces interbank borrowing from systemically risky nodes. This scheme does not reduce the efficiency of the financial network, but fosters a more homogeneous risk-distribution within the system in a self-organized critical way. The reduction of systemic risk is due to a massive reduction of cascading failures in the transparent system. A regulation-policy implementation of the proposed scheme is discussed.

  14. The Network Information Management System (NIMS) in the Deep Space Network

    NASA Technical Reports Server (NTRS)

    Wales, K. J.

    1983-01-01

    In an effort to better manage enormous amounts of administrative, engineering, and management data that is distributed worldwide, a study was conducted which identified the need for a network support system. The Network Information Management System (NIMS) will provide the Deep Space Network with the tools to provide an easily accessible source of valid information to support management activities and provide a more cost-effective method of acquiring, maintaining, and retrieving data.

  15. FTAP: a Linux-based program for tapping and music experiments.

    PubMed

    Finney, S A

    2001-02-01

    This paper describes FTAP, a flexible data collection system for tapping and music experiments. FTAP runs on standard PC hardware with the Linux operating system and can process input keystrokes and auditory output with reliable millisecond resolution. It uses standard MIDI devices for input and output and is particularly flexible in the area of auditory feedback manipulation. FTAP can run a wide variety of experiments, including synchronization/continuation tasks (Wing & Kristofferson, 1973), synchronization tasks combined with delayed auditory feedback (Aschersleben & Prinz, 1997), continuation tasks with isolated feedback perturbations (Wing, 1977), and complex alterations of feedback in music performance (Finney, 1997). Such experiments have often been implemented with custom hardware and software systems, but with FTAP they can be specified by a simple ASCII text parameter file. FTAP is available at no cost in source-code form.

  16. Neural Network Based Intelligent Sootblowing System

    SciTech Connect

    Mark Rhode

    2005-04-01

    Particulate matter is also a by-product of coal combustion. Modern-day utility boilers are usually fitted with electrostatic precipitators to aid in the collection of particulate matter. Although extremely efficient, these devices are sensitive to rapid changes in inlet mass concentration as well as total mass loading. Traditionally, utility boilers are equipped with devices known as sootblowers, which use steam, water, or air to dislodge and clean the surfaces within the boiler and are operated based upon established rules or operator judgment. Poor sootblowing regimes can influence particulate mass loading to the electrostatic precipitators. The project applied a neural network intelligent sootblowing system in conjunction with state-of-the-art controls and instruments to optimize the operation of a utility boiler and systematically control boiler slagging/fouling. This optimization process targeted a 30% reduction in NOx, a 2% improvement in efficiency, and a 5% reduction in opacity. The neural network system proved to be a non-invasive system which can readily be adapted to virtually any utility boiler. Specific conclusions from this neural network application are listed below. These conclusions should be used in conjunction with the specific details provided in the technical discussions of this report to develop a thorough understanding of the process.

  17. Storage Area Networks and The High Performance Storage System

    SciTech Connect

    Hulen, H; Graf, O; Fitzgerald, K; Watson, R W

    2002-03-04

    The High Performance Storage System (HPSS) is a mature Hierarchical Storage Management (HSM) system that was developed around a network-centered architecture, with client access to storage provided through third-party controls. Because of this design, HPSS is able to leverage today's Storage Area Network (SAN) infrastructures to provide cost effective, large-scale storage systems and high performance global file access for clients. Key attributes of SAN file systems are found in HPSS today, and more complete SAN file system capabilities are being added. This paper traces the HPSS storage network architecture from the original implementation using HIPPI and IPI-3 technology, through today's local area network (LAN) capabilities, and to SAN file system capabilities now in development. At each stage, HPSS capabilities are compared with capabilities generally accepted today as characteristic of storage area networks and SAN file systems.

  18. The DOBIS and Washington Library Network Systems: A Comparison for the British Columbia Library Network. Revised.

    ERIC Educational Resources Information Center

    Shoffner, Ralph M.; Madden, Mary A.

    This study compares the three versions of DOBIS (Dortmunder Bibliothekssystem) that are currently running in Canada and the Washington Library Network (WLN) systems in order to determine which one is the most appropriate to replicate in support of the British Columbia Library Network (BCLN). Comparisons of systems costs and operating features, the…

  19. System Identification of X-33 Neural Network

    NASA Technical Reports Server (NTRS)

    Aggarwal, Shiv

    2003-01-01

    present attempt, as a start, focuses only on the entry phase. Since the main engine remains cut off in this phase, there is no thrust acting on the system. This considerably simplifies the equations of motion. We introduce another simplification by assuming the system to be linear after some non-linearities are removed analytically from our consideration. Under these assumptions, the problem could be solved by Classical Statistics by employing the least sum of squares approach. Instead we chose to use the Neural Network method. This method has many advantages. It is modern, more efficient, can be adapted to work even when the assumptions are diluted. In fact, Neural Networks try to model the human brain and are capable of pattern recognition.

  20. A performance data network for solar process heat systems

    SciTech Connect

    Barker, G.; Hale, M.J.

    1996-03-01

    A solar process heat (SPH) data network has been developed to access remote-site performance data from operational solar heat systems. Each SPH system in the data network is outfitted with monitoring equipment and a datalogger. The datalogger is accessed via modem from the data network computer at the National Renewable Energy Laboratory (NREL). The dataloggers collect both ten-minute and hourly data and download it to the data network every 24-hours for archiving, processing, and plotting. The system data collected includes energy delivered (fluid temperatures and flow rates) and site meteorological conditions, such as solar insolation and ambient temperature. The SPH performance data network was created for collecting performance data from SPH systems that are serving in industrial applications or from systems using technologies that show promise for industrial applications. The network will be used to identify areas of SPH technology needing further development, to correlate computer models with actual performance, and to improve the credibility of SPH technology. The SPH data network also provides a centralized bank of user-friendly performance data that will give prospective SPH users an indication of how actual systems perform. There are currently three systems being monitored and archived under the SPH data network: two are parabolic trough systems and the third is a flat-plate system. The two trough systems both heat water for prisons; the hot water is used for personal hygiene, kitchen operations, and laundry. The flat plate system heats water for meat processing at a slaughter house. We plan to connect another parabolic trough system to the network during the first months of 1996. We continue to look for good examples of systems using other types of collector technologies and systems serving new applications (such as absorption chilling) to include in the SPH performance data network.
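
    A minimal sketch of the kind of processing applied to a day's logger download (the column names, file layout, and working-fluid constant are assumptions, not the actual archive format):

    ```python
    import csv

    CP_WATER = 4186.0          # J/(kg*K), specific heat of water (assumed working fluid)

    def delivered_energy_joules(logfile):
        """Integrate Q = m_dot * c_p * (T_out - T_in) over ten-minute logger records."""
        total = 0.0
        with open(logfile, newline="") as f:
            for row in csv.DictReader(f):
                m_dot = float(row["flow_kg_per_s"])      # assumed column names
                dT = float(row["T_out_C"]) - float(row["T_in_C"])
                total += m_dot * CP_WATER * dT * 600.0   # 600 s per ten-minute record
        return total

    # Example: convert one day's archive into delivered thermal energy in kWh.
    # print(delivered_energy_joules("site1_1996-03-01.csv") / 3.6e6)
    ```

    Energy delivered, computed this way from fluid temperatures and flow rates, is then compared against insolation and ambient temperature to judge system performance and to validate computer models.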

  1. Lightweight modeling environment for network-centric systems

    NASA Astrophysics Data System (ADS)

    Ealy, William

    2001-08-01

    Future network centric systems will rely heavily on telecommunication network technology to provide the connectivity needed to support distributed C4ISR requirements. To develop and validate emerging network centric concepts, designers will need communication and network M&S tools to assess the ability of large-scale networks to achieve the required communication performance. Current network and communication simulation tools are highly accurate and provide detailed data for communication and network designers. However, they are far too complex and inefficient to model large scale networks. To model these networks, lighter weight abstract modeling and simulation (M&S) tools and techniques are required. To meet these requirements, Lockheed Martin Advanced Technology Laboratories (ATL) is applying abstract network modeling techniques, developed for large scale signal processing applications, to model complex, distributed network architectures. Rather than modeling the detailed radio, network protocol and individual data transactions, our approach uses abstract stochastic models to simulate the low-level radio and protocol functions to significantly reduce the complexity and execution times. This paper describes the abstract modeling tools and techniques we are developing, discusses how ATL applied them to Office of the Deputy Under Secretary of Defense for Science and Technology's (ODUSD S&T) Smart Sensor Web (SSW) network and how we are planning to extend them.

  2. The architecture of a network level intrusion detection system

    SciTech Connect

    Heady, R.; Luger, G.; Maccabe, A.; Servilla, M.

    1990-08-15

    This paper presents the preliminary architecture of a network level intrusion detection system. The proposed system will monitor base level information in network packets (source, destination, packet size, and time), learning the normal patterns and announcing anomalies as they occur. The goal of this research is to determine the applicability of current intrusion detection technology to the detection of network level intrusions. In particular, the authors are investigating the possibility of using this technology to detect and react to worm programs.
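
    A toy sketch of the "learn normal patterns, flag deviations" idea applied to packet metadata (fields and thresholds are illustrative; this is not the proposed system's algorithm):

    ```python
    import statistics
    from collections import namedtuple

    Packet = namedtuple("Packet", "src dst size time")

    class PacketSizeBaseline:
        """Learn a per-(src, dst) packet-size baseline and flag large deviations."""

        def __init__(self, z_threshold=4.0):
            self.history = {}
            self.z_threshold = z_threshold

        def observe(self, pkt):
            sizes = self.history.setdefault((pkt.src, pkt.dst), [])
            anomalous = False
            if len(sizes) >= 30:                      # require a minimal training window
                mean = statistics.fmean(sizes)
                stdev = statistics.pstdev(sizes) or 1.0
                anomalous = abs(pkt.size - mean) / stdev > self.z_threshold
            sizes.append(pkt.size)
            return anomalous

    detector = PacketSizeBaseline()
    # for pkt in capture():            # capture() would wrap a packet sniffer
    #     if detector.observe(pkt):
    #         alert(pkt)
    ```

    A real deployment would track more dimensions (timing, destination fan-out, worm-like propagation patterns) and tune thresholds per network, but the learn-then-compare loop is the core of the anomaly-based approach described above.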

  3. Advanced information processing system: Input/output network management software

    NASA Technical Reports Server (NTRS)

    Nagle, Gail; Alger, Linda; Kemp, Alexander

    1988-01-01

    The purpose of this document is to provide the software requirements and specifications for the Input/Output Network Management Services for the Advanced Information Processing System. This introduction and overview section is provided to briefly outline the overall architecture and software requirements of the AIPS system before discussing the details of the design requirements and specifications of the AIPS I/O Network Management software. A brief overview of the AIPS architecture is followed by a more detailed description of the network architecture.

  4. Observing Arctic Ecology using Networked Infomechanical Systems

    NASA Astrophysics Data System (ADS)

    Healey, N. C.; Oberbauer, S. F.; Hollister, R. D.; Tweedie, C. E.; Welker, J. M.; Gould, W. A.

    2012-12-01

    Understanding ecological dynamics is important for investigation into the potential impacts of climate change in the Arctic. Established in the early 1990's, the International Tundra Experiment (ITEX) began observational inquiry of plant phenology, plant growth, community composition, and ecosystem properties as part of a greater effort to study changes across the Arctic. Unfortunately, these observations are labor intensive and time consuming, greatly limiting their frequency and spatial coverage. We have expanded the capability of ITEX to analyze ecological phenomenon with improved spatial and temporal resolution through the use of Networked Infomechanical Systems (NIMS) as part of the Arctic Observing Network (AON) program. The systems exhibit customizable infrastructure that supports a high level of versatility in sensor arrays in combination with information technology that allows for adaptable configurations to numerous environmental observation applications. We observe stereo and static time-lapse photography, air and surface temperature, incoming and outgoing long and short wave radiation, net radiation, and hyperspectral reflectance that provides critical information to understanding how vegetation in the Arctic is responding to ambient climate conditions. These measurements are conducted concurrent with ongoing manual measurements using ITEX protocols. Our NIMS travels at a rate of three centimeters per second while suspended on steel cables that are ~1 m from the surface spanning transects ~50 m in length. The transects are located to span soil moisture gradients across a variety of land cover types including dry heath, moist acidic tussock tundra, shrub tundra, wet meadows, dry meadows, and water tracks. We have deployed NIMS at four locations on the North Slope of Alaska, USA associated with 1 km2 ARCSS vegetation study grids including Barrow, Atqasuk, Toolik Lake, and Imnavait Creek. A fifth system has been deployed in Thule, Greenland beginning in

  5. The deep space network, volume 18. [Deep Space Instrumentation Facility, Ground Communication Facility, and Network Control System

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The objectives, functions, and organization of the Deep Space Network are summarized. The Deep Space Instrumentation Facility, the Ground Communications Facility, and the Network Control System are described.

  6. Vein matching using artificial neural network in vein authentication systems

    NASA Astrophysics Data System (ADS)

    Noori Hoshyar, Azadeh; Sulaiman, Riza

    2011-10-01

    Personal identification technology for security systems is developing rapidly. Traditional authentication modes like keys, passwords, and cards are not safe enough because they can be stolen or easily forgotten. Biometrics, as a developed technology, has been applied to a wide range of systems. According to different researchers, vein biometrics is a good candidate among other biometric traits such as fingerprint, hand geometry, voice, and DNA for authentication systems. Vein authentication systems can be designed by different methodologies. All the methodologies consist of a matching stage, which is crucial for the final verification of the system. A neural network is an effective methodology for matching and recognizing individuals in authentication systems. Therefore, this paper explains and implements the neural network methodology for a finger vein authentication system. The neural network is trained in Matlab to match the vein features of the authentication system. The network simulation shows a matching accuracy of 95%, which is good performance for authentication system matching.
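
    The matching stage described above can be sketched with a small feed-forward network trained on labeled match/non-match feature vectors. The sketch below uses scikit-learn and random vectors standing in for extracted vein features; it is illustrative only and is not the MATLAB network reported in the paper.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        # Illustrative only: random "vein feature" vectors stand in for real extracted features.
        rng = np.random.default_rng(0)
        enrolled = rng.normal(0.0, 1.0, size=(50, 32))   # features of the enrolled finger
        impostor = rng.normal(0.5, 1.2, size=(50, 32))   # features of other fingers
        X = np.vstack([enrolled, impostor])
        y = np.array([1] * 50 + [0] * 50)                # 1 = match, 0 = non-match

        # Small feed-forward network for the matching stage.
        net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        net.fit(X, y)

        probe = rng.normal(0.0, 1.0, size=(1, 32))       # a new probe sample
        print("match" if net.predict(probe)[0] == 1 else "reject")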

  7. Network representation of reaction-diffusion systems far from equilibrium.

    PubMed

    Wyatt, J L

    1978-09-01

    This paper develops the network theory of chemical reaction systems from first principles. The network approach is then used to derive a canonical set of differential equations for reaction-diffusion systems, and an analysis of the Brusselator is presented as an example. PMID:755597

  8. Encouraging Autonomy through the Use of a Social Networking System

    ERIC Educational Resources Information Center

    Leis, Adrian

    2014-01-01

    The use of social networking systems has enabled communication to occur around the globe almost instantly, with news about various events being spread around the world as they happen. There has also been much interest in the benefits and disadvantages the use of such social networking systems may bring for education. This paper reports on the use…

  9. CFDP for Interplanetary Overlay Network

    NASA Technical Reports Server (NTRS)

    Burleigh, Scott C.

    2011-01-01

    The CCSDS (Consultative Committee for Space Data Systems) File Delivery Protocol for Interplanetary Overlay Network (CFDP-ION) is an implementation of CFDP that uses ION's DTN (delay-tolerant networking) implementation as its UT (unitdata transfer) layer. Because the DTN protocols effect automatic, reliable transmission via multiple relays, CFDP-ION need only satisfy the requirements for Class 1 ("unacknowledged") CFDP. This keeps the implementation small, but without loss of capability. This innovation minimizes processing resources by using zero-copy objects for file data transmission. It runs without modification in VxWorks, Linux, Solaris, and OS X. As such, this innovation can be used without modification in both flight and ground systems. Integration with DTN enables the CFDP implementation itself to be very simple, and therefore very small. Use of ION infrastructure minimizes consumption of storage and processing resources while maximizing safety.

  10. Identification of power system load dynamics using artificial neural networks

    SciTech Connect

    Bostanci, M.; Koplowitz, J.; Taylor, C.W. |

    1997-11-01

    Power system loads are important for planning and operation of an electric power system. Load characteristics can significantly influence the results of synchronous stability and voltage stability studies. This paper presents a methodology for identification of power system load dynamics using neural networks. Input-output data of a power system dynamic load is used to design a neural network model which comprises delayed inputs and feedback connections. The developed neural network model can predict the future power system dynamic load behavior for arbitrary inputs. In particular, a third-order induction motor load neural network model is developed to verify the methodology. Neural network simulation results are illustrated and compared with the induction motor load response.
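
    A minimal sketch of the kind of model described above: a small network fed with delayed inputs and a delayed (fed-back) output, trained on input-output data from a load. The surrogate first-order load, the lag structure, and all parameters below are assumptions for illustration; they are not the third-order induction motor model used in the paper.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(1)

        # Surrogate "dynamic load": active power responds to voltage through a first-order lag.
        def simulate_load(v, p0=1.0, tau=0.8):
            p = np.empty_like(v)
            p_prev = p0
            for k, vk in enumerate(v):
                p_prev = tau * p_prev + (1 - tau) * vk**2   # toy recovery dynamics
                p[k] = p_prev
            return p

        v = 1.0 + 0.05 * rng.standard_normal(500)           # measured voltage
        p = simulate_load(v)

        # NARX-style regressors: delayed inputs v(k-1), v(k-2) and delayed output p(k-1).
        X = np.column_stack([v[1:-1], v[:-2], p[1:-1]])
        y = p[2:]

        model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0).fit(X, y)

        # One-step-ahead prediction for the last sample, compared with the true value.
        print(model.predict(X[-1:]), y[-1])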

  11. Wide area network monitoring system for HEP experiments at Fermilab

    SciTech Connect

    Grigoriev, Maxim; Cottrell, Les; Logg, Connie; /SLAC

    2004-12-01

    Large, distributed High Energy Physics (HEP) collaborations, such as D0, CDF and US-CMS, depend on stable and robust network paths between major world research centers. The evolving emphasis on data and compute Grids increases the reliance on network performance. Fermilab's experimental groups and network support personnel identified a critical need for WAN monitoring to ensure the quality and efficient utilization of such network paths. This has led to the development of the Network Monitoring system we will present in this paper. The system evolved from the IEPM-BW project, started at SLAC three years ago. At Fermilab this system has developed into a fully functional infrastructure with bi-directional active network probes and path characterizations. It is based on the Iperf achievable throughput tool, Ping and Synack to test ICMP/TCP connectivity. It uses Pipechar and Traceroute to test, compare and report hop-by-hop network path characterization. It also measures real file transfer performance by BBFTP and GridFTP. The Monitoring system has an extensive web-interface and all the data is available through standalone SOAP web services or by a MonaLISA client. Also in this paper we will present a case study of network path asymmetry and abnormal performance between FNAL and SDSC, which was discovered and resolved by utilizing the Network Monitoring system.
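
    A simplified sketch of the active-probing layer such a system builds on: standard connectivity tools wrapped so their results can be archived and published. The host list is hypothetical, and the scheduling, Iperf/BBFTP/GridFTP throughput measurements, and SOAP/MonALISA publishing of the actual system are not reproduced here.

        import subprocess, time, json

        def run_probe(cmd, timeout=60):
            """Run one external probe and capture its output, exit status, and timing."""
            start = time.time()
            try:
                out = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
                status, text = out.returncode, out.stdout
            except subprocess.TimeoutExpired:
                status, text = -1, ""
            return {"cmd": " ".join(cmd), "status": status,
                    "seconds": round(time.time() - start, 2), "output": text}

        targets = ["www.example.org"]                  # hypothetical monitored hosts
        results = []
        for host in targets:
            results.append(run_probe(["ping", "-c", "4", host]))   # ICMP reachability
            results.append(run_probe(["traceroute", host]))        # hop-by-hop path

        print(json.dumps(results, indent=2))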

  12. Wide Area Network Monitoring System for HEP Experiments at Fermilab

    SciTech Connect

    Grigoriev, M.

    2004-11-23

    Large, distributed High Energy Physics (HEP) collaborations, such as D0, CDF and US-CMS, depend on stable and robust network paths between major world research centres. The evolving emphasis on data and compute Grids increases the reliance on network performance. Fermilab's experimental groups and network support personnel identified a critical need for WAN monitoring to ensure the quality and efficient utilization of such network paths. This has led to the development of the Network Monitoring system we will present in this paper. The system evolved from the IEPM-BW project, started at SLAC three years ago. At Fermilab this system has developed into a fully functional infrastructure with bi-directional active network probes and path characterizations. It is based on the Iperf achievable throughput tool, Ping and Synack to test ICMP/TCP connectivity. It uses Pipechar and Traceroute to test, compare and report hop-by-hop network path characterization. It also measures real file transfer performance by BBFTP and GridFTP. The Monitoring system has an extensive web-interface and all the data is available through standalone SOAP web services or by a MonaLISA client. Also in this paper we will present a case study of network path asymmetry and abnormal performance between FNAL and SDSC, which was discovered and resolved by utilizing the Network Monitoring system.

  13. Scalable Hierarchical Network Management System for Displaying Network Information in Three Dimensions

    NASA Technical Reports Server (NTRS)

    George, Jude (Inventor); Schlecht, Leslie (Inventor); McCabe, James D. (Inventor); LeKashman, John Jr. (Inventor)

    1998-01-01

    A network management system has SNMP agents distributed at one or more sites, an input output module at each site, and a server module located at a selected site for communicating with input output modules, each of which is configured for both SNMP and HNMP communications. The server module is configured exclusively for HNMP communications, and it communicates with each input output module according to the HNMP. Non-iconified, informationally complete views are provided of network elements to aid in network management.

  14. Adaptive Neural Network Based Control of Noncanonical Nonlinear Systems.

    PubMed

    Zhang, Yanjun; Tao, Gang; Chen, Mou

    2016-09-01

    This paper presents a new study on the adaptive neural network-based control of a class of noncanonical nonlinear systems with large parametric uncertainties. Unlike commonly studied canonical form nonlinear systems whose neural network approximation system models have explicit relative degree structures, which can directly be used to derive parameterized controllers for adaptation, noncanonical form nonlinear systems usually do not have explicit relative degrees, and thus their approximation system models are also in noncanonical forms. It is well-known that the adaptive control of noncanonical form nonlinear systems involves the parameterization of system dynamics. As demonstrated in this paper, it is also the case for noncanonical neural network approximation system models. Effective control of such systems is an open research problem, especially in the presence of uncertain parameters. This paper shows that it is necessary to reparameterize such neural network system models for adaptive control design, and that such reparameterization can be realized using a relative degree formulation, a concept yet to be studied for general neural network system models. This paper then derives the parameterized controllers that guarantee closed-loop stability and asymptotic output tracking for noncanonical form neural network system models. An illustrative example is presented with the simulation results to demonstrate the control design procedure, and to verify the effectiveness of such a new design method.

  15. Network theory and its applications in economic systems

    NASA Astrophysics Data System (ADS)

    Huang, Xuqing

    This dissertation covers the two major parts of my Ph.D. research: i) developing a theoretical framework for complex networks; and ii) applying complex network models to quantitatively analyze economic systems. In part I, we focus on developing theories of interdependent networks, which includes two chapters: 1) We develop a mathematical framework to study the percolation of interdependent networks under targeted attack and find that when the highly connected nodes are protected and have lower probability to fail, in contrast to single scale-free (SF) networks where the percolation threshold pc = 0, coupled SF networks are significantly more vulnerable with pc significantly larger than zero. 2) We analytically demonstrate that clustering, which quantifies the propensity for two neighbors of the same vertex to also be neighbors of each other, significantly increases the vulnerability of the system. In part II, we apply complex network models to study economic systems, which also includes two chapters: 1) We study the US corporate governance network, in which nodes represent directors and links between two directors represent their service on common company boards, and propose a quantitative measure of information and influence transformation in the network. Thus we are able to identify the most influential directors in the network. 2) We propose a bipartite network model to simulate the risk propagation process among commercial banks during a financial crisis. With empirical bank balance sheet data from 2007 as input to the model, we find that our model efficiently identifies a significant portion of the actual failed banks reported by the Federal Deposit Insurance Corporation during the financial crisis between 2008 and 2011. The results suggest that complex network models could be useful for systemic risk stress testing for financial systems. The model also identifies that commercial rather than residential real estate assets are major culprits for the
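
    The targeted-attack result summarized above can be illustrated numerically. The sketch below couples two scale-free networks one-to-one, removes the highest-degree nodes of one network, and iterates the mutual giant-component/dependency pruning until the cascade stops; it is a simulation sketch with assumed sizes and parameters, not the dissertation's analytical framework.

        import networkx as nx

        def cascade(A, B, attacked):
            """One-to-one interdependent networks: node i in A depends on node i in B.
            Keep only nodes that stay in the giant component of their own network AND
            whose partner in the other network is still alive."""
            A, B = A.copy(), B.copy()
            A.remove_nodes_from(attacked)
            B.remove_nodes_from(attacked)
            while True:
                before = A.number_of_nodes() + B.number_of_nodes()
                for G in (A, B):
                    if G.number_of_nodes():
                        giant = max(nx.connected_components(G), key=len)
                        G.remove_nodes_from([n for n in list(G) if n not in giant])
                alive = set(A) & set(B)
                A.remove_nodes_from([n for n in list(A) if n not in alive])
                B.remove_nodes_from([n for n in list(B) if n not in alive])
                if A.number_of_nodes() + B.number_of_nodes() == before:
                    return len(alive)

        N = 2000
        A = nx.barabasi_albert_graph(N, 2, seed=1)   # scale-free network A
        B = nx.barabasi_albert_graph(N, 2, seed=2)   # scale-free network B
        # Targeted attack: remove the 10% highest-degree nodes of A (and their partners in B).
        top = sorted(A.degree, key=lambda kv: kv[1], reverse=True)[: N // 10]
        attacked = [n for n, _deg in top]
        print("surviving fraction:", cascade(A, B, attacked) / N)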

  16. Synthesis of recurrent neural networks for dynamical system simulation.

    PubMed

    Trischler, Adam P; D'Eleuterio, Gabriele M T

    2016-08-01

    We review several of the most widely used techniques for training recurrent neural networks to approximate dynamical systems, then describe a novel algorithm for this task. The algorithm is based on an earlier theoretical result that guarantees the quality of the network approximation. We show that a feedforward neural network can be trained on the vector-field representation of a given dynamical system using backpropagation, then recast it as a recurrent network that replicates the original system's dynamics. After detailing this algorithm and its relation to earlier approaches, we present numerical examples that demonstrate its capabilities. One of the distinguishing features of our approach is that both the original dynamical systems and the recurrent networks that simulate them operate in continuous time.
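
    A minimal sketch of the train-on-the-vector-field idea described above, using the Van der Pol oscillator as a stand-in system: a feedforward network is fit to samples of x -> dx/dt, and the learned field is then integrated in continuous time. The paper's recasting into an explicitly recurrent architecture and its approximation guarantees are not reproduced; scikit-learn and all parameters here are assumptions.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def vector_field(x):
            """Van der Pol oscillator (stand-in for the dynamical system to be simulated)."""
            return np.array([x[1], (1 - x[0] ** 2) * x[1] - x[0]])

        # 1) Train a feedforward network on samples of the vector field x -> dx/dt.
        rng = np.random.default_rng(0)
        X = rng.uniform(-3, 3, size=(4000, 2))
        Y = np.array([vector_field(x) for x in X])
        net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0).fit(X, Y)

        # 2) Treat the network as a continuous-time system dx/dt = net(x) and integrate (Euler).
        x = np.array([2.0, 0.0])
        dt = 0.01
        for _ in range(1000):
            x = x + dt * net.predict(x.reshape(1, -1))[0]
        print("state after 10 time units:", x)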

  17. A network-based dynamical ranking system for competitive sports.

    PubMed

    Motegi, Shun; Masuda, Naoki

    2012-01-01

    From the viewpoint of networks, a ranking system for players or teams in sports is equivalent to a centrality measure for sports networks, whereby a directed link represents the result of a single game. Previously proposed network-based ranking systems are derived from static networks, i.e., aggregation of the results of games over time. However, the score of a player (or team) fluctuates over time. Defeating a renowned player at peak performance is intuitively more rewarding than defeating the same player in other periods. To account for this factor, we propose a dynamic variant of such a network-based ranking system and apply it to professional men's tennis data. We derive a set of linear online update equations for the score of each player. The proposed ranking system predicts the outcome of future games with higher accuracy than its static counterparts.
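
    The flavor of a dynamic, online rating update can be sketched as follows: scores decay with elapsed time, and a win is credited in proportion to the opponent's current (decayed) score. The decay constant and credit rule are illustrative assumptions, not the linear update equations derived in the paper.

        import math
        from collections import defaultdict

        scores = defaultdict(float)       # current score of each player
        last_update = defaultdict(float)  # time of each player's last game

        def record_game(winner, loser, t, gain=1.0, decay=0.01):
            """Online update: decay both scores toward zero with elapsed time, then
            credit the winner in proportion to the loser's (decayed) score."""
            for p in (winner, loser):
                scores[p] *= math.exp(-decay * (t - last_update[p]))
                last_update[p] = t
            scores[winner] += gain + 0.5 * scores[loser]   # beating a strong player is worth more

        games = [("A", "B", 1.0), ("B", "C", 2.0), ("A", "C", 3.0), ("C", "A", 10.0)]
        for w, l, t in games:
            record_game(w, l, t)
        print(sorted(scores.items(), key=lambda kv: -kv[1]))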

  18. A network-based dynamical ranking system for competitive sports

    NASA Astrophysics Data System (ADS)

    Motegi, Shun; Masuda, Naoki

    2012-12-01

    From the viewpoint of networks, a ranking system for players or teams in sports is equivalent to a centrality measure for sports networks, whereby a directed link represents the result of a single game. Previously proposed network-based ranking systems are derived from static networks, i.e., aggregation of the results of games over time. However, the score of a player (or team) fluctuates over time. Defeating a renowned player at peak performance is intuitively more rewarding than defeating the same player in other periods. To account for this factor, we propose a dynamic variant of such a network-based ranking system and apply it to professional men's tennis data. We derive a set of linear online update equations for the score of each player. The proposed ranking system predicts the outcome of future games with higher accuracy than its static counterparts.

  19. Simulated, Emulated, and Physical Investigative Analysis (SEPIA) of networked systems.

    SciTech Connect

    Burton, David P.; Van Leeuwen, Brian P.; McDonald, Michael James; Onunkwo, Uzoma A.; Tarman, Thomas David; Urias, Vincent E.

    2009-09-01

    This report describes recent progress made in developing and utilizing hybrid Simulated, Emulated, and Physical Investigative Analysis (SEPIA) environments. Many organizations require advanced tools to analyze their information system's security, reliability, and resilience against cyber attack. Today's security analyses utilize real systems such as computers, network routers, and other network equipment; computer emulations (e.g., virtual machines); and simulation models separately to analyze the interplay between threats and safeguards. In contrast, this work developed new methods to combine these three approaches to provide integrated hybrid SEPIA environments. Our SEPIA environments enable an analyst to rapidly configure hybrid environments to pass network traffic and perform, from the outside, like real networks. This provides higher-fidelity representations of key network nodes while still leveraging the scalability and cost advantages of simulation tools. The result is to rapidly produce large yet relatively low-cost multi-fidelity SEPIA networks of computers and routers that let analysts quickly investigate threats and test protection approaches.

  20. Network Anomaly Detection System with Optimized DS Evidence Theory

    PubMed Central

    Liu, Yuan; Wang, Xiaofeng; Liu, Kaiyu

    2014-01-01

    Network anomaly detection has attracted increasing attention with the rapid development of computer networks. Some researchers have applied fusion methods and DS evidence theory to network anomaly detection, but with low performance, and without accounting for the complicated and varied nature of network features. To achieve a high detection rate, we present a novel network anomaly detection system with optimized Dempster-Shafer evidence theory (ODS) and a regression basic probability assignment (RBPA) function. In this model, we add a weight for each sensor to optimize DS evidence theory according to its previous prediction accuracy. RBPA employs each sensor's regression ability to address complex networks. Through four kinds of experiments, we find that our novel network anomaly detection model has a better detection rate, and that both the RBPA and ODS optimization methods can improve system performance significantly. PMID:25254258
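
    The sensor-weighting step described above can be illustrated with classical evidence discounting followed by Dempster's rule of combination on the two-hypothesis frame {normal, attack}. The sensor weights and masses below are made up, and the paper's regression-based BPA (RBPA) is not reproduced.

        from functools import reduce

        def discount(m, w):
            """Weight (discount) a sensor's basic probability assignment by its reliability w."""
            return {"normal": w * m["normal"],
                    "attack": w * m["attack"],
                    "either": 1.0 - w * (m["normal"] + m["attack"])}   # "either" is the frame Θ

        def combine(m1, m2):
            """Dempster's rule of combination on the frame {normal, attack}."""
            k = m1["normal"] * m2["attack"] + m1["attack"] * m2["normal"]   # conflict mass
            norm = 1.0 - k
            return {
                "normal": (m1["normal"] * m2["normal"] + m1["normal"] * m2["either"]
                           + m1["either"] * m2["normal"]) / norm,
                "attack": (m1["attack"] * m2["attack"] + m1["attack"] * m2["either"]
                           + m1["either"] * m2["attack"]) / norm,
                "either": m1["either"] * m2["either"] / norm,
            }

        # Three sensors with different past accuracies (weights) and current BPAs.
        sensors = [({"normal": 0.7, "attack": 0.2, "either": 0.1}, 0.9),
                   ({"normal": 0.3, "attack": 0.6, "either": 0.1}, 0.5),
                   ({"normal": 0.6, "attack": 0.3, "either": 0.1}, 0.8)]
        fused = reduce(combine, (discount(m, w) for m, w in sensors))
        print(fused)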

  1. STIMULUS: End-System Network Interface Controller for 100 Gb/s Wide Area Networks

    SciTech Connect

    Zarkesh-Ha, Payman

    2014-09-12

    The main goal of this research grant is to develop a system-level solution leveraging novel technologies that enable network communications at 100 Gb/s or beyond. University of New Mexico in collaboration with Acadia Optronics LLC has been working on this project to develop the 100 Gb/s Network Interface Controller (NIC) under this Department of Energy (DOE) grant.

  2. Operation of International Monitoring System Network

    NASA Astrophysics Data System (ADS)

    Nikolova, Svetlana; Araujo, Fernando; Aktas, Kadircan; Malakhova, Marina; Otsuka, Riyo; Han, Dongmei; Assef, Thierry; Nava, Elisabetta; Mickevicius, Sigitas; Agrebi, Abdelouaheb

    2015-04-01

    The IMS is a globally distributed network of monitoring facilities using sensors from four technologies: seismic, hydroacoustic, infrasound, and radionuclide. It is designed to detect the seismic and acoustic waves produced by nuclear test explosions and the subsequently released radioactive isotopes. Monitoring stations transmit their data to the IDC in Vienna, Austria, over a global private network known as the GCI. Since 2013, the data availability (DA) requirements for IMS stations account for the quality of the data, meaning that data are excluded from the DA calculation if there is no input from the sensor (SHI technology) or the signal consists of constant values (SHI technology). The requirements for the DA of the radionuclide (particulate and noble gas) stations are even stricter: received data have to be analyzed, reviewed, and categorized by IDC analysts. In order to satisfy the strict data and network availability requirements of the IMS Network, the operation of the facilities and the GCI is managed by IDC Operations. Operations has the following main functions: to ensure proper operation and functioning of the stations; to ensure proper operation and functioning of the GCI; to ensure efficient management of the stations in the IDC; and to provide network oversight and incident management. At the core of IMS Network operations is a series of tools for monitoring the stations' state of health and data quality, troubleshooting incidents, communicating with internal and external stakeholders, and reporting. The new requirements for data availability have increased the importance of raw data quality monitoring. This task is addressed by the development of additional tools for quickly identifying problems in data acquisition, regular activities to check the compliance of station parameters with acquired data through scheduled calibration of the seismic network, and review of samples by certified radionuclide laboratories. The DA for the networks of

  3. A lightweight sensor network management system design

    USGS Publications Warehouse

    Yuan, F.; Song, W.-Z.; Peterson, N.; Peng, Y.; Wang, L.; Shirazi, B.; LaHusen, R.

    2008-01-01

    In this paper, we propose a lightweight and transparent management framework for TinyOS sensor networks, called L-SNMS, which minimizes the overhead of management functions, including memory usage overhead, network traffic overhead, and integration overhead. We accomplish this by making L-SNMS virtually transparent to other applications, hence requiring minimal integration. The proposed L-SNMS framework has been successfully tested on various sensor node platforms, including TelosB, MICAz, and IMote2. © 2008 IEEE.

  4. Neural network simulations of the nervous system.

    PubMed

    van Leeuwen, J L

    1990-01-01

    Present knowledge of brain mechanisms is mainly based on anatomical and physiological studies. Such studies are however insufficient to understand the information processing of the brain. The present new focus on neural network studies is the most likely candidate to fill this gap. The present paper reviews some of the history and current status of neural network studies. It signals some of the essential problems for which answers have to be found before substantial progress in the field can be made. PMID:2245130

  5. NetState : a network version tracking system.

    SciTech Connect

    Van Randwyk, Jamie A.; Durgin, Nancy Ann; Mai, Yuqing

    2005-02-01

    Network administrators and security analysts often do not know what network services are being run in every corner of their networks. If they do have a vague grasp of the services running on their networks, they often do not know what specific versions of those services are running. Actively scanning for services and versions does not always yield complete results, and patch and service management, therefore, suffer. We present NetState, a system for monitoring, storing, and reporting application and operating system version information for a network. NetState gives security and network administrators the ability to know what is running on their networks while allowing for user-managed machines and complex host configurations. Our architecture uses distributed modules to collect network information and a centralized server that stores and issues reports on that collected version information. We discuss some of the challenges to building and operating NetState as well as the legal issues surrounding the promiscuous capture of network data. We conclude that this tool can solve some key problems in network management and has a wide range of possibilities for future uses.

  6. Intrusion Detection System Using Deep Neural Network for In-Vehicle Network Security

    PubMed Central

    Kang, Min-Joo

    2016-01-01

    A novel intrusion detection system (IDS) using a deep neural network (DNN) is proposed to enhance the security of the in-vehicle network. The parameters building the DNN structure are trained with probability-based feature vectors that are extracted from the in-vehicle network packets. For a given packet, the DNN provides the probability of each class discriminating normal and attack packets, and thus the sensor can identify any malicious attack to the vehicle. As compared to the traditional artificial neural network applied to the IDS, the proposed technique adopts recent advances in deep learning studies such as initializing the parameters through the unsupervised pre-training of deep belief networks (DBN), therefore improving the detection accuracy. It is demonstrated with experimental results that the proposed technique can provide a real-time response to the attack with a significantly improved detection ratio in controller area network (CAN) bus. PMID:27271802
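
    A minimal sketch of the supervised classification stage on probability-based feature vectors. Here an occurrence-frequency statistic over CAN data bytes stands in for the paper's features, and a small scikit-learn network stands in for the DBN-pretrained DNN; both are illustrative assumptions.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        def probability_features(frames):
            """Probability-based feature vector for a window of CAN frames: the empirical
            frequency of each byte position taking a 'high' value (illustrative statistic)."""
            data = np.array(frames, dtype=float)          # shape: (frames, 8 data bytes)
            return (data > 127).mean(axis=0)              # one probability per byte position

        rng = np.random.default_rng(0)
        normal_windows = [probability_features(rng.integers(0, 120, size=(64, 8))) for _ in range(200)]
        attack_windows = [probability_features(rng.integers(0, 256, size=(64, 8))) for _ in range(200)]
        X = np.vstack([normal_windows, attack_windows])
        y = np.array([0] * 200 + [1] * 200)               # 0 = normal, 1 = attack

        dnn = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0).fit(X, y)
        probe = probability_features(rng.integers(0, 256, size=(64, 8)))
        print("attack probability:", dnn.predict_proba(probe.reshape(1, -1))[0, 1])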

  7. Intrusion Detection System Using Deep Neural Network for In-Vehicle Network Security.

    PubMed

    Kang, Min-Joo; Kang, Je-Won

    2016-01-01

    A novel intrusion detection system (IDS) using a deep neural network (DNN) is proposed to enhance the security of the in-vehicle network. The parameters building the DNN structure are trained with probability-based feature vectors that are extracted from the in-vehicle network packets. For a given packet, the DNN provides the probability of each class discriminating normal and attack packets, and thus the sensor can identify any malicious attack to the vehicle. As compared to the traditional artificial neural network applied to the IDS, the proposed technique adopts recent advances in deep learning studies such as initializing the parameters through the unsupervised pre-training of deep belief networks (DBN), therefore improving the detection accuracy. It is demonstrated with experimental results that the proposed technique can provide a real-time response to the attack with a significantly improved detection ratio in controller area network (CAN) bus. PMID:27271802

  8. Intrusion Detection System Using Deep Neural Network for In-Vehicle Network Security.

    PubMed

    Kang, Min-Joo; Kang, Je-Won

    2016-01-01

    A novel intrusion detection system (IDS) using a deep neural network (DNN) is proposed to enhance the security of the in-vehicle network. The parameters building the DNN structure are trained with probability-based feature vectors that are extracted from the in-vehicle network packets. For a given packet, the DNN provides the probability of each class discriminating normal and attack packets, and thus the sensor can identify any malicious attack to the vehicle. As compared to the traditional artificial neural network applied to the IDS, the proposed technique adopts recent advances in deep learning studies such as initializing the parameters through the unsupervised pre-training of deep belief networks (DBN), therefore improving the detection accuracy. It is demonstrated with experimental results that the proposed technique can provide a real-time response to the attack with a significantly improved detection ratio in controller area network (CAN) bus.

  9. Communications and control for electric power systems: Power system stability applications of artificial neural networks

    NASA Technical Reports Server (NTRS)

    Toomarian, N.; Kirkham, Harold

    1994-01-01

    This report investigates the application of artificial neural networks to the problem of power system stability. The field of artificial intelligence, expert systems, and neural networks is reviewed. Power system operation is discussed with emphasis on stability considerations. Real-time system control has only recently been considered as applicable to stability, using conventional control methods. The report considers the use of artificial neural networks to improve the stability of the power system. The networks are considered as adjuncts and as replacements for existing controllers. The optimal kind of network to use as an adjunct to a generator exciter is discussed.

  10. Communications and control for electric power systems: Power system stability applications of artificial neural networks

    SciTech Connect

    Toomarian, N.; Kirkham, H.

    1993-12-01

    This report investigates the application of artificial neural networks to the problem of power system stability. The field of artificial intelligence, expert systems and neural networks is reviewed. Power system operation is discussed with emphasis on stability considerations. Real-time system control has only recently been considered as applicable to stability, using conventional control methods. The report considers the use of artificial neural networks to improve the stability of the power system. The networks are considered as adjuncts and as replacements for existing controllers. The optimal kind of network to use as an adjunct to a generator exciter is discussed.

  11. Verification and Validation of Neural Networks for Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Mackall, Dale; Nelson, Stacy; Schumann, Johann

    2002-01-01

    The Dryden Flight Research Center V&V working group and NASA Ames Research Center Automated Software Engineering (ASE) group collaborated to prepare this report. The purpose is to describe V&V processes and methods for certification of neural networks for aerospace applications, particularly adaptive flight control systems like Intelligent Flight Control Systems (IFCS) that use neural networks. This report is divided into the following two sections: Overview of Adaptive Systems and V&V Processes/Methods.

  12. Verification and Validation of Neural Networks for Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Mackall, Dale; Nelson, Stacy; Schumman, Johann; Clancy, Daniel (Technical Monitor)

    2002-01-01

    The Dryden Flight Research Center V&V working group and NASA Ames Research Center Automated Software Engineering (ASE) group collaborated to prepare this report. The purpose is to describe V&V processes and methods for certification of neural networks for aerospace applications, particularly adaptive flight control systems like Intelligent Flight Control Systems (IFCS) that use neural networks. This report is divided into the following two sections: 1) Overview of Adaptive Systems; and 2) V&V Processes/Methods.

  13. Dynamic structural network evolution in compressed granular systems

    NASA Astrophysics Data System (ADS)

    Papadopoulos, Lia; Puckett, James; Daniels, Karen; Bassett, Danielle

    The heterogeneous dynamic behavior of granular packings under shear or compression is not well-understood. In this study, we use novel techniques from network science to investigate the structural evolution that occurs in compressed granular systems. Specifically, we treat particles as network nodes, and pressure-dependent forces between particles as layer-specific network edges. Then, we use a generalization of community detection methods to multilayer networks, and develop quantitative measures that characterize changes in the architecture of the force network as a function of pressure. We observe that branchlike domains reminiscent of force chains evolve differentially as pressure is applied: topological characteristics of these domains at rest predict their coalescence or dispersion under pressure. Our methods allow us to study the dynamics of mesoscale structure in granular systems, and provide a direct way to compare data from systems under different external conditions or with different physical makeup.

  14. Evolving artificial neural networks to control chaotic systems

    NASA Astrophysics Data System (ADS)

    Weeks, Eric R.; Burgess, John M.

    1997-08-01

    We develop a genetic algorithm that produces neural network feedback controllers for chaotic systems. The algorithm was tested on the logistic and Hénon maps, for which it stabilizes an unstable fixed point using small perturbations, even in the presence of significant noise. The network training method [D. E. Moriarty and R. Miikkulainen, Mach. Learn. 22, 11 (1996)] requires no previous knowledge about the system to be controlled, including the dimensionality of the system and the location of unstable fixed points. This is the first dimension-independent algorithm that produces neural network controllers using time-series data. A software implementation of this algorithm is available via the World Wide Web.
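
    The control task itself can be illustrated without neuroevolution: near the unstable fixed point x* = 1 - 1/r of the logistic map, a small proportional perturbation cancels the local instability. The hand-derived feedback below is a stand-in for the evolved neural-network controllers described in the paper; all constants are illustrative.

        import random

        random.seed(1)
        r = 3.8                       # logistic map parameter in the chaotic regime
        x_star = 1.0 - 1.0 / r        # unstable fixed point of x -> r x (1 - x)
        max_kick = 0.02               # only small perturbations of the map are allowed

        def step(x, control=False):
            noise = random.gauss(0.0, 0.001)
            u = 0.0
            if control:
                # Proportional feedback near the fixed point; the map's slope there is r(1 - 2x*) = 2 - r.
                u = max(-max_kick, min(max_kick, -(2.0 - r) * (x - x_star)))
            return r * x * (1.0 - x) + u + noise

        x = 0.3
        for n in range(600):
            x = step(x, control=(n > 100))   # switch the controller on after 100 iterations
        print("final state:", round(x, 4), "target:", round(x_star, 4))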

  15. Data and Network Science for Noisy Heterogeneous Systems

    ERIC Educational Resources Information Center

    Rider, Andrew Kent

    2013-01-01

    Data in many growing fields has an underlying network structure that can be taken advantage of. In this dissertation we apply data and network science to problems in the domains of systems biology and healthcare. Data challenges in these fields include noisy, heterogeneous data, and a lack of ground truth. The primary thesis of this work is that…

  16. Modeling the School System Adoption Process for Library Networking.

    ERIC Educational Resources Information Center

    Kester, Diane Katherine Davies

    The successful inclusion of school library media centers in fully articulated networks involves considerable planning and organization for technological change. In this study a preliminary model of the stages of school system participation in library networks was developed with the major activities for each stage identified. The model follows…

  17. The middleware architecture supports heterogeneous network systems for module-based personal robot system

    NASA Astrophysics Data System (ADS)

    Choo, Seongho; Li, Vitaly; Choi, Dong Hee; Jung, Gi Deck; Park, Hong Seong; Ryuh, Youngsun

    2005-12-01

    In the personal robot system currently under development, the internal architecture consists of modules, each with a separate function, connected through a heterogeneous network system. This module-based architecture supports specialization and division of labor in both design and implementation; as an effect of this architecture, it can reduce development time and cost for modules. Furthermore, because every module is connected to the others through network systems, integration is straightforward and synergy effects can be obtained by having modules cooperate on advanced functions. In this architecture, one of the most important technologies is the network middleware, which takes charge of communication among the modules connected through heterogeneous network systems. The network middleware acts like the human nervous system inside the personal robot system; it relays, transmits, and translates information appropriately between modules, much as the nervous system does between organs. The network middleware supports various hardware platforms and heterogeneous network systems (Ethernet, Wireless LAN, USB, IEEE 1394, CAN, CDMA-SMS, RS-232C). This paper discusses mechanisms in our network middleware for intercommunication and routing among modules, methods for real-time data communication, and fault-tolerant network service. We have designed and implemented a layered network middleware scheme, distributed routing management, and network monitoring/notification technology on heterogeneous networks for these goals. The main theme is how routing information is constructed in our network middleware. Additionally, with this routing information table, we appended some features. We are now designing and implementing a new version of the network middleware (which we call 'OO M/W') that can support object-oriented operation, and are updating the program sources themselves for an object-oriented architecture. It is lighter, faster, and can support more operating systems and heterogeneous network systems, but other general

  18. On neural networks in identification and control of dynamic systems

    NASA Technical Reports Server (NTRS)

    Phan, Minh; Juang, Jer-Nan; Hyland, David C.

    1993-01-01

    This paper presents a discussion of the applicability of neural networks in the identification and control of dynamic systems. Emphasis is placed on the understanding of how the neural networks handle linear systems and how the new approach is related to conventional system identification and control methods. Extensions of the approach to nonlinear systems are then made. The paper explains the fundamental concepts of neural networks in their simplest terms. Among the topics discussed are feedforward and recurrent networks in relation to the standard state-space and observer models, linear and nonlinear auto-regressive models, linear predictors, one-step ahead control, and model reference adaptive control for linear and nonlinear systems. Numerical examples are presented to illustrate the application of these important concepts.

  19. Systems Approaches to Identifying Gene Regulatory Networks in Plants

    PubMed Central

    Long, Terri A.; Brady, Siobhan M.; Benfey, Philip N.

    2009-01-01

    Complex gene regulatory networks are composed of genes, noncoding RNAs, proteins, metabolites, and signaling components. The availability of genome-wide mutagenesis libraries; large-scale transcriptome, proteome, and metabolome data sets; and new high-throughput methods that uncover protein interactions underscores the need for mathematical modeling techniques that better enable scientists to synthesize these large amounts of information and to understand the properties of these biological systems. Systems biology approaches can allow researchers to move beyond a reductionist approach and to both integrate and comprehend the interactions of multiple components within these systems. Descriptive and mathematical models for gene regulatory networks can reveal emergent properties of these plant systems. This review highlights methods that researchers are using to obtain large-scale data sets, and examples of gene regulatory networks modeled with these data. Emergent properties revealed by the use of these network models and perspectives on the future of systems biology are discussed. PMID:18616425

  20. Applying Model Based Systems Engineering to NASA's Space Communications Networks

    NASA Technical Reports Server (NTRS)

    Bhasin, Kul; Barnes, Patrick; Reinert, Jessica; Golden, Bert

    2013-01-01

    System engineering practices for complex systems and networks now require that requirements, architecture, and concept of operations product development teams simultaneously harmonize their activities to provide timely, useful, and cost-effective products. When dealing with complex systems of systems, traditional systems engineering methodology quickly falls short of achieving project objectives. This approach is encumbered by the use of a number of disparate hardware and software tools, spreadsheets, and documents to grasp the concept of the network design and operation. In the case of NASA's space communication networks, since the networks are geographically distributed, and so are their subject matter experts, the team is challenged to create a common language and tools to produce its products. Using Model Based Systems Engineering methods and tools allows for a unified representation of the system in a model that enables a highly related level of detail. To date, the Program System Engineering (PSE) team has been able to model each network from their top-level operational activities and system functions down to the atomic level through relational modeling decomposition. These models allow for a better understanding of the relationships between NASA's stakeholders, internal organizations, and impacts to all related entities due to integration and sustainment of existing systems. Understanding the existing systems is essential to accurate and detailed study of the integration options being considered. In this paper, we identify the challenges the PSE team faced in its quest to unify complex legacy space communications networks and their operational processes. We describe the initial approaches undertaken and the evolution toward model based system engineering applied to produce Space Communication and Navigation (SCaN) PSE products. We will demonstrate the practice of Model Based System Engineering applied to integrating space communication networks and the summary of its

  1. Designing A Robust Command, Communications and Data Acquisition System For Autonomous Sensor Platforms Using The Data Transport Network

    NASA Astrophysics Data System (ADS)

    Valentic, T. A.

    2012-12-01

    The Data Transport Network is designed for the delivery of data from scientific instruments located at remote field sites with limited or unreliable communications. Originally deployed at the Sondrestrom Research Facility in Greenland over a decade ago, the system supports the real-time collection and processing of data from large instruments such as incoherent scatter radars and lidars. In recent years, the Data Transport Network has been adapted to small, low-power embedded systems controlling remote instrumentation platforms deployed throughout the Arctic. These projects include multiple buoys from the O-Buoy, IceLander and IceGoat programs, renewable energy monitoring at the Imnavait Creek and Ivotuk field sites in Alaska and remote weather observation stations in Alaska and Greenland. This presentation will discuss the common communications controller developed for these projects. Although varied in their application, each of these systems share a number of common features. Multiple instruments are attached, each of which needs to be power controlled, data sampled and files transmitted offsite. In addition, the power usage of the overall system must be minimized to handle the limited energy available from sources such as solar, wind and fuel cells. The communications links are satellite based. The buoys and weather stations utilize Iridium, necessitating the need to handle the common drop outs and high-latency, low-bandwidth nature of the link. The communications controller is an off-the-shelf, low-power, single board computer running a customized version of the Linux operating system. The Data Transport Network provides a Python-based software framework for writing individual data collection programs and supplies a number of common services for configuration, scheduling, logging, data transmission and resource management. Adding a new instrument involves writing only the necessary code for interfacing to the hardware. Individual programs communicate with the
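
    The abstract does not spell out the framework's Python API, so the sketch below only illustrates the general shape of a per-instrument collection program it describes: power the instrument, take a sample, write a file into a spool directory for later transmission, and keep the duty cycle short to save energy. All names, paths, and values are hypothetical.

        import json, pathlib, time

        OUTBOX = pathlib.Path("outbox")     # hypothetical spool directory polled by a transfer service
        OUTBOX.mkdir(exist_ok=True)

        class InstrumentProgram:
            """Generic shape of a per-instrument collection program: only power control
            and sampling are instrument-specific; scheduling and transmission live elsewhere."""
            def __init__(self, name, sample_period_s=600):
                self.name, self.sample_period_s = name, sample_period_s
            def power_on(self):
                pass                        # instrument-specific (relay, GPIO, ...)
            def power_off(self):
                pass
            def sample(self):
                raise NotImplementedError   # read the hardware, return a dict of values
            def run_once(self):
                self.power_on()
                try:
                    record = {"instrument": self.name, "time": time.time(), "data": self.sample()}
                finally:
                    self.power_off()        # keep the duty cycle short to save energy
                path = OUTBOX / f"{self.name}-{int(record['time'])}.json"
                path.write_text(json.dumps(record))
                return path

        class Thermometer(InstrumentProgram):
            def sample(self):
                return {"air_temp_C": -12.3}   # placeholder reading

        print(Thermometer("met").run_once())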

  2. LDCM Ground System. Network Lesson Learned

    NASA Technical Reports Server (NTRS)

    Gal-Edd, Jonathan

    2010-01-01

    This slide presentation reviews the Landsat Data Continuity Mission (LDCM) and the lessons learned in implementing the network that was assembled to allow for the acquisition, archiving and distribution of the data from the Landsat mission. The objective of the LDCM is to continue the acquisition, archiving, and distribution of moderate-resolution multispectral imagery affording global, synoptic, and repetitive coverage of the earth's land surface at a scale where natural and human-induced changes can be detected, differentiated, characterized, and monitored over time. It includes a review of the ground network, including a block diagram of the ground network elements (GNE) and a review of the RF design and testing. Also included is a listing of the lessons learned.

  3. Decision support systems and methods for complex networks

    DOEpatents

    Huang, Zhenyu; Wong, Pak Chung; Ma, Jian; Mackey, Patrick S; Chen, Yousu; Schneider, Kevin P

    2012-02-28

    Methods and systems for automated decision support in analyzing operation data from a complex network. Embodiments of the present invention utilize these algorithms and techniques not only to characterize the past and present condition of a complex network, but also to predict future conditions to help operators anticipate deteriorating and/or problem situations. In particular, embodiments of the present invention characterize network conditions from operation data using a state estimator. Contingency scenarios can then be generated based on those network conditions. For at least a portion of all of the contingency scenarios, risk indices are determined that describe the potential impact of each of those scenarios. Contingency scenarios with risk indices are presented visually as graphical representations in the context of a visual representation of the complex network. Analysis of the historical risk indices based on the graphical representations can then provide trends that allow for prediction of future network conditions.

  4. Physiologic monitoring. A guide to networking your monitoring systems.

    PubMed

    2011-10-01

    There are many factors to consider when choosing a physiologic monitoring system. Not only should these systems perform well clinically, but they should also be able to exchange data with other information systems. We discuss some of the ins and outs of physiologic monitoring system networking and highlight eight product lines from seven suppliers.

  5. Application of neural network to hybrid systems with binary inputs.

    PubMed

    Holderbaum, William

    2007-07-01

    Boolean input systems are in common use in the electric industry. Power supplies include such systems, and the power converter is a representative example. For instance, in power electronics, the control variables are the switching ON and OFF of components such as thyristors or transistors. The purpose of this paper is to use neural networks (NN) to control continuous systems with Boolean inputs. This method is based on classification of system variations associated with input configurations. The classical supervised backpropagation algorithm is used to train the networks. The training of the artificial neural network and the control of Boolean input systems are presented. The design procedure of control systems is implemented on a nonlinear system. We apply those results to control an electrical system composed of an induction machine and its power converter.

  6. Asynchronous ad hoc network discovery for low-power systems

    NASA Astrophysics Data System (ADS)

    Joslin, Todd W.

    2008-04-01

    Unattended ground sensor systems (UGS) have become an important part of a covert monitoring arsenal in operations throughout the world. With the increased use of unattended ground sensor systems, there is a need to develop communication architectures that allow the systems to have simple emplacement procedures, have a long mission life, and be difficult to detect. Current ad-hoc networking schemes use either a network beacon, extensive preambles, or guaranteed time synchronization to achieve reliable communications. When used in wireless sensor systems many of these schemes waste power through unnecessary transmissions. These schemes compromise the covert nature of UGS through excess transmissions for a non-beaconed network or the periodic beaconing in a beaconed network. These factors are detrimental to sensor systems, which chiefly rely on being covert and low-power. This paper discusses a nonarbitrated, non-GPS synchronized, beaconless approach to discovering, joining, and reliably transmitting and receiving in a low-power ad-hoc wireless sensor network. This solution is capable of performing network discovery upon demand to get an initial alignment with other nodes in the network. Once aligned, end points maintain alignment and can predict when other nodes will be available to listen.

  7. Communication Software Performance for Linux Clusters with Mesh Connections

    SciTech Connect

    Jie Chen; William Watson

    2003-09-01

    Recent progress in copper based commodity Gigabit Ethernet interconnects enables constructing clusters to achieve extremely high I/O bandwidth at low cost with mesh connections. However, the TCP/IP protocol stack cannot match the improved performance of Gigabit Ethernet networks, especially in the case of multiple interconnects on a single host. In this paper, we evaluate and compare the performance characteristics of TCP/IP and M-VIA software, which is an implementation of VIA. In particular, we focus on the performance of the software systems for a mesh communication architecture and demonstrate the feasibility of using multiple Gigabit Ethernet cards on one host to achieve aggregated bandwidth and latency that are not only better than what TCP provides but also compare favorably to some of the special purpose high-speed networks. In addition, the implementation of a new M-VIA driver for one type of Gigabit Ethernet card will be discussed.

  8. A Mobile Sensor Network System for Monitoring of Unfriendly Environments

    PubMed Central

    Song, Guangming; Zhou, Yaoxin; Ding, Fei; Song, Aiguo

    2008-01-01

    Observing microclimate changes is one of the most popular applications of wireless sensor networks. However, some target environments are often too dangerous or inaccessible to humans or large robots and there are many challenges for deploying and maintaining wireless sensor networks in those unfriendly environments. This paper presents a mobile sensor network system for solving this problem. The system architecture, the mobile node design, the basic behaviors and advanced network capabilities have been investigated respectively. A wheel-based robotic node architecture is proposed here that can add controlled mobility to wireless sensor networks. A testbed including some prototype nodes has also been created for validating the basic functions of the proposed mobile sensor network system. Motion performance tests have been done to get the positioning errors and power consumption model of the mobile nodes. Results of the autonomous deployment experiment show that the mobile nodes can be distributed evenly into the previously unknown environments. It provides powerful support for network deployment and maintenance and can ensure that the sensor network will work properly in unfriendly environments.

  9. Reduction techniques for network validation in systems biology.

    PubMed

    Ackermann, J; Einloft, J; Nöthen, J; Koch, I

    2012-12-21

    The rapidly increasing amount of experimental biological data enables the development of large and complex, often genome-scale, models of molecular systems. The simulation and analysis of these computer models of metabolism, signal transduction, and gene regulation are standard applications in systems biology, but the size and complexity of the networks limit the feasibility of many methods. Reduction of networks provides a hierarchical view of complex networks and gives insight into their coarse-grained structural properties. Although network reduction has been extensively studied in computer science, adaptation and exploration of these concepts are still lacking for the analysis of biochemical reaction systems. Using the Petri net formalism, we describe two local network structures, common transition pairs and minimal transition invariants. We apply these two structural elements for network reduction. The reduction preserves the CTI-property (covered by transition invariants), which is an important feature for completeness of biological models. We demonstrate this concept for a selection of metabolic networks, including a benchmark network of Saccharomyces cerevisiae whose straightforward treatment is not yet feasible even on modern supercomputers. PMID:22982289

  10. Network benchmarking: a happy marriage between systems and synthetic biology.

    PubMed

    Minty, Jeremy J; Varedi K, S Marjan; Nina Lin, Xiaoxia

    2009-03-27

    In their new Cell paper, Cantone et al. (2009) present exciting results on constructing and utilizing a small synthetic gene regulatory network in yeast that draws from two rapidly developing fields of systems and synthetic biology.

  11. Activator-inhibitor systems on heterogeneous ecological networks

    NASA Astrophysics Data System (ADS)

    Nicolaides, C.; Cueto-Felgueroso, L.; Juanes, R.

    2012-12-01

    The consideration of activator-inhibitor systems as complex networks has broadened our knowledge of non-equilibrium reaction-diffusion processes in heterogeneous systems. For example, the Turing mechanism represents a classical model for the formation of self-organized spatial structures in non-equilibrium activator-inhibitor systems. The study of Turing patterns in networks with heterogeneous connectivity has revealed that, contrary to other models and systems, the segregation process takes place mainly in vertices of low degree. In this paper, we study the formation of vegetation patterns in semiarid ecosystems from the perspective of a heterogeneous interacting ecological network. The structure of ecological networks yields fundamental insight into the ecosystem self-organization. Using simple rules for the short-range activation and global inhibition, we reconstruct the observed power-law distribution of vegetation patch size that has been observed in semiarid ecosystems like the Kalahari transect.
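
    The short-range activation / fast long-range inhibition mechanism on a network can be sketched with a reaction-diffusion system coupled through the graph Laplacian. The Brusselator kinetics, the parameters, and the scale-free graph below are illustrative assumptions chosen to sit in the Turing-unstable regime; they are not the vegetation model of the paper.

        import numpy as np
        import networkx as nx

        # Heterogeneous (scale-free) network; the graph Laplacian plays the role of the diffusion operator.
        G = nx.barabasi_albert_graph(200, 3, seed=0)
        L = nx.laplacian_matrix(G).toarray().astype(float)

        a, b = 2.0, 4.0               # Brusselator kinetics, chosen inside the Turing-unstable regime
        Du, Dv = 0.05, 1.0            # the inhibitor diffuses much faster than the activator
        rng = np.random.default_rng(0)
        u = a + 0.01 * rng.standard_normal(G.number_of_nodes())   # small perturbation of the steady state
        v = (b / a) * np.ones_like(u)

        dt = 0.002
        for _ in range(50000):        # explicit Euler integration of du/dt = f(u,v) - Du*L*u, etc.
            f = a - (b + 1.0) * u + u**2 * v
            g = b * u - u**2 * v
            u = u + dt * (f - Du * (L @ u))
            v = v + dt * (g - Dv * (L @ v))

        # Nodes segregate into activator-rich and activator-poor groups: a Turing pattern on a network.
        print("activator range across nodes:", round(float(u.min()), 3), "to", round(float(u.max()), 3))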

  12. Interconnecting PV on New York City's Secondary Network Distribution System

    SciTech Connect

    Anderson, K; Coddington, M; Burman, K; Hayter, S; Kroposki, B; Watson, and A

    2009-11-01

    The U.S. Department of Energy (DOE) has teamed with cities across the country through the Solar America Cities (SAC) partnership program to help reduce barriers and accelerate implementation of solar energy. The New York City SAC team is a partnership between the City University of New York (CUNY), the New York City Mayor's Office of Long-term Planning and Sustainability, and the New York City Economic Development Corporation (NYCEDC). The New York City SAC team is working with DOE's National Renewable Energy Laboratory (NREL) and Con Edison, the local utility, to develop a roadmap for photovoltaic (PV) installations in the five boroughs. The city set a goal to increase its installed PV capacity from 1.1 MW in 2005 to 8.1 MW by 2015 (the maximum allowed in 2005). A key barrier to reaching this goal, however, is the complexity of the interconnection process with the local utility. Unique challenges are associated with connecting distributed PV systems to secondary network distribution systems (simplified to networks in this report). Although most areas of the country use simpler radial distribution systems to distribute electricity, larger metropolitan areas like New York City typically use networks to increase reliability in large load centers. Unlike the radial distribution system, where each customer receives power through a single line, a network uses a grid of interconnected lines to deliver power to each customer through several parallel circuits and sources. This redundancy improves reliability, but it also requires more complicated coordination and protection schemes that can be disrupted by energy exported from distributed PV systems. Currently, Con Edison studies each potential PV system in New York City to evaluate the system's impact on the network, but this is time consuming for utility engineers and may delay the customer's project or add cost for larger installations. City leaders would like to streamline this process to facilitate faster, simpler, and

  13. A local area computer network expert system framework

    NASA Technical Reports Server (NTRS)

    Dominy, Robert

    1987-01-01

    Over the past years, an expert system called LANES, designed to detect and isolate faults in the Goddard-wide Hybrid Local Area Computer Network (LACN), was developed. As a result, the need for developing a more generic LACN fault isolation expert system has become apparent. An object-oriented approach was explored to create a set of generic classes, objects, rules, and methods that would be necessary to meet this need. The object classes provide a convenient mechanism for separating high-level information from low-level network-specific information. This approach yields a framework which can be applied to different network configurations and be easily expanded to meet new needs.

  14. System of Mobile Agents to Model Social Networks

    NASA Astrophysics Data System (ADS)

    González, Marta C.; Lind, Pedro G.; Herrmann, Hans J.

    2006-03-01

    We propose a model of mobile agents to construct social networks, based on a system of moving particles by keeping track of the collisions during their permanence in the system. We reproduce not only the degree distribution, clustering coefficient, and shortest path length of a large database of empirical friendship networks recently collected, but also some features related with their community structure. The model is completely characterized by the collision rate, and above a critical collision rate we find the emergence of a giant cluster in the universality class of two-dimensional percolation. Moreover, we propose possible schemes to reproduce other networks of particular social contacts, namely, sexual contacts.
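
    A stripped-down sketch of the contact-network construction described above: agents move in a periodic box, and an edge is recorded whenever two of them come within a contact radius. The ballistic motion and the parameters are simplifications made here for illustration; the paper's collision rules, critical collision rate, and community-structure analysis are not reproduced.

        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(0)
        N, steps, box, radius = 150, 4000, 1.0, 0.01
        pos = rng.uniform(0, box, size=(N, 2))          # initial agent positions
        vel = rng.normal(0, 0.005, size=(N, 2))         # constant velocities (ballistic motion)

        G = nx.Graph()
        G.add_nodes_from(range(N))
        for _ in range(steps):
            pos = (pos + vel) % box                     # move agents with periodic boundaries
            # record a "collision" (social contact) whenever two agents come within `radius`
            diff = pos[:, None, :] - pos[None, :, :]
            close = np.linalg.norm(diff, axis=-1) < radius
            i_idx, j_idx = np.where(np.triu(close, k=1))
            G.add_edges_from(zip(i_idx.tolist(), j_idx.tolist()))

        print("mean degree:", 2 * G.number_of_edges() / N,
              "clustering:", round(nx.average_clustering(G), 3))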

  15. Systems and methods for modeling and analyzing networks

    DOEpatents

    Hill, Colin C; Church, Bruce W; McDonagh, Paul D; Khalil, Iya G; Neyarapally, Thomas A; Pitluk, Zachary W

    2013-10-29

    The systems and methods described herein utilize a probabilistic modeling framework for reverse engineering an ensemble of causal models from data and then forward simulating the ensemble of models to analyze and predict the behavior of the network. In certain embodiments, the systems and methods described herein include data-driven techniques for developing causal models for biological networks. Causal network models include computational representations of the causal relationships between independent variables such as a compound of interest and dependent variables such as measured DNA alterations, changes in mRNA, protein, and metabolites to phenotypic readouts of efficacy and toxicity.

  16. Social Insects: A Model System for Network Dynamics

    NASA Astrophysics Data System (ADS)

    Charbonneau, Daniel; Blonder, Benjamin; Dornhaus, Anna

    Social insect colonies (ants, bees, wasps, and termites) show sophisticated collective problem-solving in the face of variable constraints. Individuals exchange information and materials such as food. The resulting network structure and dynamics can inform us about the mechanisms by which the insects achieve particular collective behaviors, and these can be transposed to man-made and social networks. We discuss how network analysis can answer important questions about social insects, such as how effective task allocation or information flow is realized. We put forward the idea that network analysis methods are under-utilized in social insect research, and that they can provide novel ways to view the complexity of collective behavior, particularly if network dynamics are taken into account. To illustrate this, we present an example of a network of tasks performed by ant workers, linked by instances of workers switching from one task to another. We show how temporal network analysis can propose and test new hypotheses on mechanisms of task allocation, and how adding temporal elements to static networks can drastically change results. We discuss the benefits of using social insects as models for complex systems in general. There are multiple opportunities for emergent technologies and analysis methods to facilitate research on social insect networks. The potential for interdisciplinary work could significantly advance diverse fields such as behavioral ecology, computer sciences, and engineering.

  17. System for testing properties of a network

    DOEpatents

    Rawle, Michael; Bartholomew, David B.; Soares, Marshall A.

    2009-06-16

    A method for identifying properties of a downhole electromagnetic network in a downhole tool string, including the step of providing an electromagnetic path intermediate a first location and a second location on the electromagnetic network. The method further includes the step of providing a receiver at the second location. The receiver includes a known reference. The analog signal includes a set amplitude, a set range of frequencies, and a set rate of change between the frequencies. The method further includes the steps of sending the analog signal, and passively modifying the signal. The analog signal is sent from the first location through the electromagnetic path, and the signal is modified by the properties of the electromagnetic path. The method further includes the step of receiving a modified signal at the second location and comparing the known reference to the modified signal.

  18. Implementation of an Adaptive Learning System Using a Bayesian Network

    ERIC Educational Resources Information Center

    Yasuda, Keiji; Kawashima, Hiroyuki; Hata, Yoko; Kimura, Hiroaki

    2015-01-01

    An adaptive learning system is proposed that incorporates a Bayesian network to efficiently gauge learners' understanding at the course-unit level. Also, learners receive content that is adapted to their measured level of understanding. The system works on an iPad via the Edmodo platform. A field experiment using the system in an elementary school…

  19. Audit Trail Management System in Community Health Care Information Network.

    PubMed

    Nakamura, Naoki; Nakayama, Masaharu; Nakaya, Jun; Tominaga, Teiji; Suganuma, Takuo; Shiratori, Norio

    2015-01-01

    After the Great East Japan Earthquake we constructed a community health care information network system. Focusing on the authentication server and portal server capable of SAML&ID-WSF, we proposed an audit trail management system to look over audit events in a comprehensive manner. Through implementation and experimentation, we verified the effectiveness of our proposed audit trail management system.

  20. A Gamma Memory Neural Network for System Identification

    NASA Technical Reports Server (NTRS)

    Motter, Mark A.; Principe, Jose C.

    1992-01-01

    A gamma neural network topology is investigated for a system identification application. A discrete gamma memory structure is used in the input layer, providing delayed values of both the control inputs and the network output to the input layer. The discrete gamma memory structure implements a tapped dispersive delay line, with the amount of dispersion regulated by a single, adaptable parameter. The network is trained using static back propagation, but captures significant features of the system dynamics. The system dynamics identified with the network are the Mach number dynamics of the 16 Foot Transonic Tunnel at NASA Langley Research Center, Hampton, Virginia. The training data spans an operating range of Mach numbers from 0.4 to 1.3.
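
    For readers unfamiliar with the structure summarized above, the discrete gamma memory is usually written as a cascade of identical first-order recursive taps; the sketch below follows the standard textbook form (the paper's exact notation may differ), with the single adaptable parameter \(\mu\) regulating the dispersion:

        \[
        x_0(n) = u(n), \qquad
        x_k(n) = (1-\mu)\,x_k(n-1) + \mu\,x_{k-1}(n-1), \quad k = 1,\dots,K,
        \]

    where \(u(n)\) is the input signal and the taps \(x_1(n),\dots,x_K(n)\) feed the input layer. With \(\mu = 1\) the structure reduces to an ordinary tapped delay line; for \(0 < \mu < 1\) the delays become dispersive.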

  1. Neural networks for self-learning control systems

    NASA Technical Reports Server (NTRS)

    Nguyen, Derrick H.; Widrow, Bernard

    1990-01-01

    It is shown how a neural network can learn of its own accord to control a nonlinear dynamic system. An emulator, a multilayered neural network, learns to identify the system's dynamic characteristics. The controller, another multilayered neural network, next learns to control the emulator. The self-trained controller is then used to control the actual dynamic system. The learning process continues as the emulator and controller improve and track the physical process. An example is given to illustrate these ideas. The 'truck backer-upper,' a neural network controller that steers a trailer truck while the truck is backing up to a loading dock, is demonstrated. The controller is able to guide the truck to the dock from almost any initial position. The technique explored should be applicable to a wide variety of nonlinear control problems.

  2. Active system area networks for data intensive computations. Final report

    SciTech Connect

    2002-04-01

    The goal of the Active System Area Networks (ASAN) project is to develop hardware and software technologies for the implementation of active system area networks (ASANs). The use of the term ''active'' refers to the ability of the network interfaces to perform application-specific as well as system level computations in addition to their traditional role of data transfer. This project adopts the view that the network infrastructure should be an active computational entity capable of supporting certain classes of computations that would otherwise be performed on the host CPUs. The result is a unique network-wide programming model where computations are dynamically placed within the host CPUs or the NIs depending upon the quality of service demands and network/CPU resource availability. The project seeks to demonstrate that such an approach is a better match for data intensive network-based applications and that the advent of low-cost powerful embedded processors and configurable hardware makes such an approach economically viable and desirable.

  3. A systemic network for Chlamydia pneumoniae entry into human cells.

    PubMed

    Wang, Anyou; Johnston, S Claiborne; Chou, Joyce; Dean, Deborah

    2010-06-01

    Bacterial entry is a multistep process triggering a complex network, yet the molecular complexity of this network remains largely unsolved. By employing a systems biology approach, we reveal a systemic bacterial-entry network initiated by Chlamydia pneumoniae, a widespread opportunistic pathogen. The network consists of nine functional modules (i.e., groups of proteins) associated with various cellular functions, including receptor systems, cell adhesion, transcription, and endocytosis. The peak levels of gene expression for these modules change rapidly during C. pneumoniae entry, with cell adhesion occurring at 5 min postinfection, receptor and actin activity at 25 min, and endocytosis at 2 h. A total of six membrane proteins (chemokine C-X-C motif receptor 7 [CXCR7], integrin beta 2 [ITGB2], platelet-derived growth factor beta polypeptide [PDGFB], vascular endothelial growth factor [VEGF], vascular cell adhesion molecule 1 [VCAM1], and GTP binding protein overexpressed in skeletal muscle [GEM]) play a key role during C. pneumoniae entry, but none alone is essential to prevent entry. The combination knockdown of three genes (coding for CXCR7, ITGB2, and PDGFB) significantly inhibits C. pneumoniae entry, but the entire network is resistant to the six-gene depletion, indicating a resilient network. Our results reveal a complex network for C. pneumoniae entry involving at least six key proteins.

  4. Analog neural network-based helicopter gearbox health monitoring system.

    PubMed

    Monsen, P T; Dzwonczyk, M; Manolakos, E S

    1995-12-01

    The development of a reliable helicopter gearbox health monitoring system (HMS) has been the subject of considerable research over the past 15 years. The deployment of such a system could lead to a significant saving in lives and vehicles as well as dramatically reduce the cost of helicopter maintenance. Recent research results indicate that a neural network-based system could provide a viable solution to the problem. This paper presents two neural network-based realizations of an HMS system. A hybrid (digital/analog) neural system is proposed as an extremely accurate off-line monitoring tool used to reduce helicopter gearbox maintenance costs. In addition, an all analog neural network is proposed as a real-time helicopter gearbox fault monitor that can exploit the ability of an analog neural network to directly compute the discrete Fourier transform (DFT) as a sum of weighted samples. Hardware performance results are obtained using the Integrated Neural Computing Architecture (INCA/1) analog neural network platform that was designed and developed at The Charles Stark Draper Laboratory. The results indicate that it is possible to achieve a 100% fault detection rate with 0% false alarm rate by performing a DFT directly on the first layer of INCA/1 followed by a small-size two-layer feed-forward neural network and a simple post-processing majority voting stage.

  5. Network-based reading system for lung cancer screening CT

    NASA Astrophysics Data System (ADS)

    Fujino, Yuichi; Fujimura, Kaori; Nomura, Shin-ichiro; Kawashima, Harumi; Tsuchikawa, Megumu; Matsumoto, Toru; Nagao, Kei-ichi; Uruma, Takahiro; Yamamoto, Shinji; Takizawa, Hotaka; Kuroda, Chikazumi; Nakayama, Tomio

    2006-03-01

    This research aims to support chest computed tomography (CT) medical checkups to decrease the death rate by lung cancer. We have developed a remote cooperative reading system for lung cancer screening over the Internet, a secure transmission function, and a cooperative reading environment. It is called the Network-based Reading System. A telemedicine system involves many issues, such as network costs and data security, if we use it over the Internet, which is an open network. In Japan, broadband access is widespread and its cost is the lowest in the world. We developed our system with attention to the human-machine interface and security. It consists of data entry terminals, a database server, a computer aided diagnosis (CAD) system, and some reading terminals. It uses a secure Digital Imaging and Communication in Medicine (DICOM) encrypting method and Public Key Infrastructure (PKI) based secure DICOM image data distribution. We carried out an experimental trial over the Japan Gigabit Network (JGN), which is the testbed for the Japanese next-generation network, and conducted verification experiments of secure screening image distribution, some kinds of data addition, and remote cooperative reading. We found that network bandwidth of about 1.5 Mbps enabled distribution of screening images and cooperative reading and that the encryption and image distribution methods we proposed were applicable to the encryption and distribution of general DICOM images via the Internet.

  6. Unknown input observer design and analysis for networked control systems

    NASA Astrophysics Data System (ADS)

    Taha, Ahmad F.; Elmahdi, Ahmed; Panchal, Jitesh H.; Sun, Dengfeng

    2015-05-01

    The insertion of communication networks in the feedback loops of control systems is a defining feature of modern control systems. These systems are often subject to unknown inputs in a form of disturbances, perturbations, or attacks. The objective of this paper is to design and analyse an observer for networked dynamical systems with unknown inputs. The network effect can be viewed as either a perturbation or time-delay to the exchanged signals. In this paper, we (1) review an unknown input observer (UIO) design for a non-networked system, (2) derive the networked unknown input observer (NetUIO) dynamics, (3) design a NetUIO such that the effect of higher delay order terms are nullified and (4) establish stability-guaranteeing bounds on the networked-induced time-delay and perturbation. The formulation and results derived in this paper can be generalised to scenarios and applications where the signals are perturbed due to a different source of perturbation or delay.

  7. Optical interconnection networks for high-performance computing systems.

    PubMed

    Biberman, Aleksandr; Bergman, Keren

    2012-04-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers.

  8. Parameter estimation in space systems using recurrent neural networks

    NASA Technical Reports Server (NTRS)

    Parlos, Alexander G.; Atiya, Amir F.; Sunkel, John W.

    1991-01-01

    The identification of time-varying parameters encountered in space systems is addressed, using artificial neural systems. A hybrid feedforward/feedback neural network, namely a recurrent multilayer perceptron, is used as the model structure in the nonlinear system identification. The feedforward portion of the network architecture provides its well-known interpolation property, while through recurrency and cross-talk, the local information feedback enables representation of temporal variations in the system nonlinearities. The standard back-propagation learning algorithm is modified and used for both the off-line and on-line supervised training of the proposed hybrid network. The performance of recurrent multilayer perceptron networks in identifying parameters of nonlinear dynamic systems is investigated by estimating the mass properties of a representative large spacecraft. The changes in the spacecraft inertia are predicted using a trained neural network, during two configurations corresponding to the early and late stages of the spacecraft on-orbit assembly sequence. The proposed on-line mass properties estimation capability offers encouraging results, though further research is warranted for training and testing the predictive capabilities of these networks beyond nominal spacecraft operations.

  9. Inferring connectivity in networked dynamical systems: Challenges using Granger causality

    NASA Astrophysics Data System (ADS)

    Lusch, Bethany; Maia, Pedro D.; Kutz, J. Nathan

    2016-09-01

    Determining the interactions and causal relationships between nodes in an unknown networked dynamical system from measurement data alone is a challenging, contemporary task across the physical, biological, and engineering sciences. Statistical methods, such as the increasingly popular Granger causality, are being broadly applied for data-driven discovery of connectivity in fields from economics to neuroscience. A common version of the algorithm is called pairwise-conditional Granger causality, which we systematically test on data generated from a nonlinear model with known causal network structure. Specifically, we simulate networked systems of Kuramoto oscillators and use the Multivariate Granger Causality Toolbox to discover the underlying coupling structure of the system. We compare the inferred results to the original connectivity for a wide range of parameters such as initial conditions, connection strengths, community structures, and natural frequencies. Our results show a significant systematic disparity between the original and inferred network, unless the true structure is extremely sparse or dense. Specifically, the inferred networks have significant discrepancies in the number of edges and the eigenvalues of the connectivity matrix, demonstrating that they typically generate dynamics which are inconsistent with the ground truth. We provide a detailed account of the dynamics for the Erdős-Rényi network model due to its importance in random graph theory and network science. We conclude that Granger causal methods for inferring network structure are highly suspect and should always be checked against a ground truth model. The results also advocate the need to perform such comparisons with any network inference method since the inferred connectivity results appear to have very little to do with the ground truth system.
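
    For reference, networked Kuramoto systems of the kind simulated here are conventionally written as coupled phase oscillators; a generic form (the exact coupling normalization used in the paper is an assumption) is

        \[
        \dot{\theta}_i = \omega_i + K \sum_{j=1}^{N} A_{ij}\,\sin(\theta_j - \theta_i), \qquad i = 1,\dots,N,
        \]

    where \(A_{ij}\) is the ground-truth adjacency matrix, \(\omega_i\) are the natural frequencies, and \(K\) is the connection strength. The inference task is then to recover \(A\) from the time series \(\theta_i(t)\) alone.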

  10. Awareness System Implemented in the European Network

    NASA Astrophysics Data System (ADS)

    Janíček, František; Jedinák, Martin; Šulc, Igor

    2014-09-01

    The transmission system in Slovakia is part of the synchronously interconnected system of continental Europe. Besides the indisputable technical and economic benefits of this cooperation, there are hazardous factors, such as the spreading of fault conditions, that can affect our system. Even today, a system break-up escalating into a vast blackout is a real danger. European transmission system operators continually work on preventive measures and develop systems with a goal to handle critical situations. The ambition of the European Awareness System is to signal the onset of such situations and also to assist with system restoration.

  11. Bluetooth Roaming for Sensor Network System in Clinical Environment.

    PubMed

    Kuroda, Tomohiro; Noma, Haruo; Takase, Kazuhiko; Sasaki, Shigeto; Takemura, Tadamasa

    2015-01-01

    A sensor network is key infrastructure for advancing a hospital information system (HIS). The authors proposed a method to provide roaming functionality for Bluetooth to realize a Bluetooth-based sensor network, which is suitable to connect clinical devices. The proposed method makes the average response time of a Bluetooth connection less than one second by making the master device repeat the inquiry process endlessly and modifies parameters of the inquiry process. The authors applied the developed sensor network to daily clinical activities in a university hospital, and confirmed the stability and effectiveness of the sensor network. As Bluetooth becomes a quite common wireless interface for medical devices, the proposed protocol that realizes a Bluetooth-based sensor network enables HIS to equip various clinical devices and, consequently, lets information and communication technologies advance clinical services. PMID:26262038

  13. Toward a systems-level view of dynamic phosphorylation networks

    PubMed Central

    Newman, Robert H.; Zhang, Jin; Zhu, Heng

    2014-01-01

    To better understand how cells sense and respond to their environment, it is important to understand the organization and regulation of the phosphorylation networks that underlie most cellular signal transduction pathways. These networks, which are composed of protein kinases, protein phosphatases and their respective cellular targets, are highly dynamic. Importantly, to achieve signaling specificity, phosphorylation networks must be regulated at several levels, including at the level of protein expression, substrate recognition, and spatiotemporal modulation of enzymatic activity. Here, we briefly summarize some of the traditional methods used to study the phosphorylation status of cellular proteins before focusing our attention on several recent technological advances, such as protein microarrays, quantitative mass spectrometry, and genetically-targetable fluorescent biosensors, that are offering new insights into the organization and regulation of cellular phosphorylation networks. Together, these approaches promise to lead to a systems-level view of dynamic phosphorylation networks. PMID:25177341

  14. [Study for lung sound acquisition module based on ARM and Linux].

    PubMed

    Lu, Qiang; Li, Wenfeng; Zhang, Xixue; Li, Junmin; Liu, Longqing

    2011-07-01

    An acquisition module with ARM and Linux at its core was developed. This paper presents the hardware configuration and the software design. It is shown that the module can extract human lung sounds reliably and effectively.

  15. NADIR (Network Anomaly Detection and Intrusion Reporter): A prototype network intrusion detection system

    SciTech Connect

    Jackson, K.A.; DuBois, D.H.; Stallings, C.A.

    1990-01-01

    The Network Anomaly Detection and Intrusion Reporter (NADIR) is an expert system which is intended to provide real-time security auditing for intrusion and misuse detection at Los Alamos National Laboratory's Integrated Computing Network (ICN). It is based on three basic assumptions: (1) that statistical analysis of computer system and user activities may be used to characterize normal system and user behavior, and that, given the resulting statistical profiles, behavior which deviates beyond certain bounds can be detected; (2) that expert system techniques can be applied to security auditing and intrusion detection; and (3) that successful intrusion detection may take place while monitoring a limited set of network activities such as user authentication and access control, file movement and storage, and job scheduling. NADIR has been developed to employ these basic concepts while monitoring the audited activities of more than 8000 ICN users.
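
    The first assumption, flagging behavior that deviates beyond certain bounds from a statistical profile, can be illustrated with a minimal sketch (this is not NADIR's actual implementation; the profile fields and the threshold k are hypothetical):

        #include <cmath>

        // Hypothetical per-user profile of one audited activity
        // (e.g. weekly file transfers or authentication attempts).
        struct ActivityProfile {
            double mean;    // historical mean of the metric
            double stddev;  // historical standard deviation
        };

        // Flag the observation if it deviates more than `k` standard
        // deviations from the user's historical profile (a z-score test).
        bool isAnomalous(const ActivityProfile& profile, double observed, double k = 3.0) {
            if (profile.stddev == 0.0) {
                return observed != profile.mean;
            }
            return std::fabs(observed - profile.mean) / profile.stddev > k;
        }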

  16. Three neural network based sensor systems for environmental monitoring

    SciTech Connect

    Keller, P.E.; Kouzes, R.T.; Kangas, L.J.

    1994-05-01

    Compact, portable systems capable of quickly identifying contaminants in the field are of great importance when monitoring the environment. One of the missions of the Pacific Northwest Laboratory is to examine and develop new technologies for environmental restoration and waste management at the Hanford Site. In this paper, three prototype sensing systems are discussed. These prototypes are composed of sensing elements, a data acquisition system, a computer, and a neural network implemented in software, and are capable of automatically identifying contaminants. The first system employs an array of tin-oxide gas sensors and is used to identify chemical vapors. The second system employs an array of optical sensors and is used to identify the composition of chemical dyes in liquids. The third system contains a portable gamma-ray spectrometer and is used to identify radioactive isotopes. In these systems, the neural network is used to identify the composition of the sensed contaminant. With a neural network, the intense computation takes place during the training process. Once the network is trained, operation consists of propagating the data through the network. Since the computation involved during operation consists of vector-matrix multiplication and application of look-up tables, unknown samples can be rapidly identified in the field.
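
    As the abstract notes, operation after training reduces to vector-matrix multiplication followed by a nonlinearity (often realized as a look-up table). A minimal single-layer sketch of that forward pass is shown below; the layer sizes and the sigmoid activation are illustrative and are not taken from the paper:

        #include <cmath>
        #include <vector>

        // Forward-propagate one sensor reading through a trained layer:
        // y = sigma(W x + b).  W is row-major, one row per output neuron.
        std::vector<double> forward(const std::vector<std::vector<double>>& W,
                                    const std::vector<double>& b,
                                    const std::vector<double>& x) {
            std::vector<double> y(W.size());
            for (std::size_t i = 0; i < W.size(); ++i) {
                double sum = b[i];
                for (std::size_t j = 0; j < x.size(); ++j) {
                    sum += W[i][j] * x[j];           // vector-matrix multiplication
                }
                y[i] = 1.0 / (1.0 + std::exp(-sum)); // sigmoid (could be a look-up table)
            }
            return y;
        }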

  17. A financial network perspective of financial institutions' systemic risk contributions

    NASA Astrophysics Data System (ADS)

    Huang, Wei-Qiang; Zhuang, Xin-Tian; Yao, Shuang; Uryasev, Stan

    2016-08-01

    This study considers the effects of financial institutions' local topology structure in the financial network on their systemic risk contribution, using data from the Chinese stock market. We first measure the systemic risk contribution with the Conditional Value-at-Risk (CoVaR), which is estimated by applying a dynamic conditional correlation multivariate GARCH model (DCC-MVGARCH). Financial networks are constructed from dynamic conditional correlations (DCC) with the graph filtering method of minimum spanning trees (MSTs). Then we investigate the dynamics of the systemic risk contributions of financial institutions. We also study the dynamics of each financial institution's local topology structure in the financial network. Finally, we analyze the quantitative relationships between the local topology structure and systemic risk contribution with panel data regression analysis. We find that financial institutions with greater node strength, larger node betweenness centrality, larger node closeness centrality and larger node clustering coefficient tend to be associated with larger systemic risk contributions.
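
    For context, the CoVaR measure used above is conventionally defined (following Adrian and Brunnermeier) as the value-at-risk of the system conditional on institution \(i\) sitting at its own value-at-risk level; in the usual notation,

        \[
        \Pr\!\left( X^{\mathrm{system}} \le \mathrm{CoVaR}_q^{\,\mathrm{system}\mid i} \;\middle|\; X^{i} = \mathrm{VaR}_q^{i} \right) = q,
        \]

    and an institution's contribution is typically reported as the difference between the system CoVaR conditional on its distress state and conditional on its median state (\(\Delta\mathrm{CoVaR}\)). Whether the paper uses exactly this conditioning is an assumption based on standard practice.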

  18. Representation of neural networks as Lotka-Volterra systems

    SciTech Connect

    Moreau, Yves; Vandewalle, Joos; Louies, Stephane; Brenig, Leon

    1999-03-22

    We study changes of coordinates that allow the representation of the ordinary differential equations describing continuous-time recurrent neural networks into differential equations describing predator-prey models--also called Lotka-Volterra systems. We transform the equations for the neural network first into quasi-monomial form, where we express the vector field of the dynamical system as a linear combination of products of powers of the variables. In practice, this transformation is possible only if the activation function is the hyperbolic tangent or the logistic sigmoid. From this quasi-monomial form, we can directly transform the system further into Lotka-Volterra equations. The resulting Lotka-Volterra system is of higher dimension than the original system, but the behavior of its first variables is equivalent to the behavior of the original neural network.
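
    To fix notation, the target of the change of coordinates is the classical Lotka-Volterra form; a generic statement of the two model classes involved (the intermediate quasi-monomial step is omitted, and the exact recurrent-network equations studied in the paper may differ slightly from this standard form) is

        \[
        \dot{y}_i = -y_i + \sum_{j} w_{ij}\,\sigma(y_j) + I_i
        \quad\longrightarrow\quad
        \dot{z}_k = z_k\Big(\lambda_k + \sum_{l} M_{kl}\, z_l\Big),
        \]

    where the left-hand system is a continuous-time recurrent neural network with activation \(\sigma\) (hyperbolic tangent or logistic sigmoid, as noted above) and the right-hand system is a Lotka-Volterra model in a larger set of variables \(z_k\), the first of which reproduce the behavior of the original \(y_i\).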

  19. Adaptive mechanism-based congestion control for networked systems

    NASA Astrophysics Data System (ADS)

    Liu, Zhi; Zhang, Yun; Chen, C. L. Philip

    2013-03-01

    In order to assure the communication quality in network systems with heavy traffic and limited bandwidth, a new ATRED (adaptive thresholds random early detection) congestion control algorithm is proposed for the congestion avoidance and resource management of network systems. Different from the traditional AQM (active queue management) algorithms, the control parameters of ATRED are not configured statically, but dynamically adjusted by the adaptive mechanism. By integrating with the adaptive strategy, ATRED alleviates the tuning difficulty of RED (random early detection), shows better control of the queue management, and achieves more robust performance than RED under varying network conditions. Furthermore, a dynamic transmission control protocol-AQM control system using the ATRED controller is introduced for the systematic analysis. It is proved that the stability of the network system can be guaranteed when the adaptive mechanism is finely designed. Simulation studies show that the proposed ATRED algorithm achieves good performance in varying network environments, outperforming the RED and Gentle-RED algorithms and providing more reliable service under varying network conditions.
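
    As background for the adaptive-threshold idea, classic RED maintains an exponentially weighted moving average of the queue length and drops packets with a probability that rises linearly between two thresholds. A compact sketch of that baseline follows; the parameter values are illustrative defaults, and ATRED's adaptive adjustment of the thresholds is not shown:

        #include <algorithm>

        // Classic RED drop decision (the baseline that ATRED adapts).
        struct Red {
            double wq    = 0.002;  // EWMA weight for the average queue length
            double minTh = 5.0;    // lower threshold (packets)
            double maxTh = 15.0;   // upper threshold (packets)
            double maxP  = 0.1;    // maximum drop probability at maxTh
            double avg   = 0.0;    // current average queue length

            // Drop probability for an arriving packet, given the
            // instantaneous queue length q.
            double dropProbability(double q) {
                avg = (1.0 - wq) * avg + wq * q;               // EWMA update
                if (avg < minTh)  return 0.0;                  // no early drop
                if (avg >= maxTh) return 1.0;                  // force drop
                return maxP * (avg - minTh) / (maxTh - minTh); // linear ramp
            }
        };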

  20. Multiple neural network approaches to clinical expert systems

    NASA Astrophysics Data System (ADS)

    Stubbs, Derek F.

    1990-08-01

    We briefly review the concept of computer-aided medical diagnosis and more extensively review the existing literature on neural network applications in the field. Neural networks can function as simple expert systems for diagnosis or prognosis. Using a public database we develop a neural network for the diagnosis of a major presenting symptom while discussing the development process and possible approaches. Biomedicine is an incredibly diverse and multidisciplinary field, and it is not surprising that neural networks, with their many applications, are finding more and more uses in the highly non-linear field of biomedicine. We concentrate on neural networks as medical expert systems for clinical diagnosis or prognosis. Expert systems started out as a set of computerized "if-then" rules. Everything was reduced to Boolean logic and the promised land of computer experts was said to be in sight. It never came. Why? First, the computer code explodes as the number of "ifs" increases, because all the "ifs" have to interact. Second, experts are not very good at reducing expertise to language; it turns out that experts recognize patterns and have non-verbal, intuitive decision processes. Third, learning by example rather than learning by rule is the way natural brains work, and making computers work by rule-learning is hideously labor intensive. Neural networks can learn from example. They learn the results

  1. NNETS - NEURAL NETWORK ENVIRONMENT ON A TRANSPUTER SYSTEM

    NASA Technical Reports Server (NTRS)

    Villarreal, J.

    1994-01-01

    The primary purpose of NNETS (Neural Network Environment on a Transputer System) is to provide users a high degree of flexibility in creating and manipulating a wide variety of neural network topologies at processing speeds not found in conventional computing environments. To accomplish this purpose, NNETS supports back propagation and back propagation related algorithms. The back propagation algorithm used is an implementation of Rumelhart's Generalized Delta Rule. NNETS was developed on the INMOS Transputer. NNETS predefines a Back Propagation Network, a Jordan Network, and a Reinforcement Network to assist users in learning and defining their own networks. The program also allows users to configure other neural network paradigms from the NNETS basic architecture. The Jordan network is basically a feed forward network that has the outputs connected to a pseudo input layer. The state of the network is dependent on the inputs from the environment plus the state of the network. The Reinforcement network learns via a scalar feedback signal called reinforcement. The network propagates forward randomly. The environment looks at the outputs of the network to produce a reinforcement signal that is fed back to the network. NNETS was written for the INMOS C compiler D711B version 1.3 or later (MS-DOS version). A small portion of the software was written in the OCCAM language to perform the communications routing between processors. NNETS is configured to operate on a 4 X 10 array of Transputers in sequence with a Transputer based graphics processor controlled by a master IBM PC 286 (or better) Transputer. A RGB monitor is required which must be capable of 512 X 512 resolution. It must be able to receive red, green, and blue signals via BNC connectors. NNETS is meant for experienced Transputer users only. The program is distributed on 5.25 inch 1.2Mb MS-DOS format diskettes. NNETS was developed in 1991. Transputer and OCCAM are registered trademarks of Inmos Corporation. MS

  2. Stochastic S-system modeling of gene regulatory network.

    PubMed

    Chowdhury, Ahsan Raja; Chetty, Madhu; Evans, Rob

    2015-10-01

    Microarray gene expression data can provide insights into biological processes at a system-wide level and is commonly used for reverse engineering gene regulatory networks (GRN). Due to the amalgamation of noise from different sources, microarray expression profiles are inherently noisy, which significantly impacts the GRN reconstruction process. Microarray replicates (both biological and technical), generated to increase the reliability of data obtained under noisy conditions, have limited influence in enhancing the accuracy of reconstruction. Therefore, instead of the conventional GRN modeling approaches which are deterministic, stochastic techniques are becoming increasingly necessary for inferring GRN from noisy microarray data. In this paper, we propose a new stochastic GRN model by investigating incorporation of various standard noise measurements in the deterministic S-system model. Experimental evaluations performed for varying sizes of synthetic network, representing different stochastic processes, demonstrate the effect of noise on the accuracy of genetic network modeling and the significance of stochastic modeling for GRN reconstruction. The proposed stochastic model is subsequently applied to infer the regulations among genes in two real life networks: (1) the well-studied IRMA network, a real-life in-vivo synthetic network constructed within the Saccharomyces cerevisiae yeast, and (2) the SOS DNA repair network in Escherichia coli.
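
    For reference, the deterministic S-system that the proposed stochastic model extends describes each gene's expression rate as a difference of two power-law terms; in the standard notation,

        \[
        \frac{dX_i}{dt} \;=\; \alpha_i \prod_{j=1}^{N} X_j^{\,g_{ij}} \;-\; \beta_i \prod_{j=1}^{N} X_j^{\,h_{ij}}, \qquad i = 1,\dots,N,
        \]

    where \(\alpha_i, \beta_i \ge 0\) are rate constants and \(g_{ij}, h_{ij}\) are kinetic orders encoding the influence of gene \(j\) on the production and degradation of gene \(i\). The stochastic variants studied in the paper add noise terms to this deterministic skeleton; the specific noise measurements incorporated are described in the paper itself.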

  3. Turing patterns in network-organized activator-inhibitor systems

    NASA Astrophysics Data System (ADS)

    Nakao, Hiroya; Mikhailov, Alexander S.

    2010-07-01

    Turing instability in activator-inhibitor systems provides a paradigm of non-equilibrium self-organization; it has been extensively investigated for biological and chemical processes. Turing instability should also be possible in networks, and general mathematical methods for its treatment have been formulated previously. However, only examples of regular lattices and small networks were explicitly considered. Here we study Turing patterns in large random networks, which reveal striking differences from the classical behaviour. The initial linear instability leads to spontaneous differentiation of the network nodes into activator-rich and activator-poor groups. The emerging Turing patterns become furthermore strongly reshaped at the subsequent nonlinear stage. Multiple coexisting stationary states and hysteresis effects are observed. This peculiar behaviour can be understood in the framework of a mean-field theory. Our results offer a new perspective on self-organization phenomena in systems organized as complex networks. Potential applications include ecological metapopulations, synthetic ecosystems, cellular networks of early biological morphogenesis, and networks of coupled chemical nanoreactors.
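
    The network analogue of the activator-inhibitor system replaces the usual diffusion operator with the graph Laplacian of the network; a generic form consistent with the framework discussed above (the symbols and mobility ratio here are only illustrative) is

        \[
        \frac{du_i}{dt} = f(u_i, v_i) + \epsilon \sum_{j} L_{ij}\, u_j, \qquad
        \frac{dv_i}{dt} = g(u_i, v_i) + \sigma \epsilon \sum_{j} L_{ij}\, v_j,
        \]

    where \(u_i, v_i\) are activator and inhibitor concentrations on node \(i\), \(L_{ij} = A_{ij} - k_i \delta_{ij}\) is the network Laplacian, and the Turing instability requires a sufficiently large ratio \(\sigma\) of inhibitor to activator mobility.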

  4. Effects of Route Guidance Systems on Small-World Networks

    NASA Astrophysics Data System (ADS)

    Wu, Jian-Jun; Sun, Hui-Jun; Gao, Zi-You; Li, Shu-Bin

    Route guidance systems (RGS) are efficient in alleviating traffic congestion and reducing transit time in transportation networks. This paper studies the effects of RGS on the performance of variably weighted small-world networks. The properties of the average shortest path length, the maximum degree, and the largest betweenness, as important indices for characterizing the network structure in complex networks, are simulated. Results show that there is an optimal guided rate of RGS that minimizes the total system cost and the average shortest path length, and proper RGS can reduce the load of the node with maximum degree or largest betweenness. In addition, we found that the load distribution of nodes guided by RGS decays as a power law, which is important for understanding and controlling traffic congestion.

  5. Design of speaker recognition system based on artificial neural network

    NASA Astrophysics Data System (ADS)

    Chen, Yanhong; Wang, Li; Lin, Han; Li, Jinlong

    2012-10-01

    Speaker recognition is the task of recognizing a speaker's identity from the voice, which contains physiological and behavioral characteristics unique to each individual. In this paper, an artificial neural network model, which has a very good capacity for non-linear partitioning of the feature space, is used for pattern matching. The speaker's sample characteristic domain is built from the mixed voice characteristic signals based on the K-means LBG algorithm. Then the dimension of the input feature vector is reduced and the redundant information is removed. On this basis, a BP neural network is used to partition the feature space nonlinearly, and the BP neural network acts as a classifier for the speaker. Finally, a speaker recognition system based on the neural network is realized, and the experiment results validate the recognition performance and robustness of the system.

  6. Mathematically Designing a Local Interaction Algorithm for Decentralized Network Systems

    NASA Astrophysics Data System (ADS)

    Kubo, Takeshi; Hasegawa, Teruyuki; Hasegawa, Toru

    In the near future, decentralized network systems consisting of a huge number of sensor nodes are expected to play an important role. In such a network, each node should control itself by means of a local interaction algorithm. Although such local interaction algorithms improve system reliability, how to design a local interaction algorithm has become an issue. In this paper, we describe a local interaction algorithm in a partial differential equation (or PDE) and propose a new design method whereby a PDE is derived from the solution we desire. The solution is considered as a pattern of nodes' control values over the network each of which is used to control the node's behavior. As a result, nodes collectively provide network functions such as clustering, collision and congestion avoidance. In this paper, we focus on a periodic pattern comprising sinusoidal waves and derive the PDE whose solution exhibits such a pattern by exploiting the Fourier method.

  7. A Hybrid Authentication and Authorization Process for Control System Networks

    SciTech Connect

    Manz, David O.; Edgar, Thomas W.; Fink, Glenn A.

    2010-08-25

    Convergence of control system and IT networks requires that security, privacy, and trust be addressed. Trust management continues to plague traditional IT managers and is even more complex when extended into control system networks, which have potentially millions of entities and a mission that requires 100% availability. Yet these very networks necessitate a trusted secure environment where controllers and managers can be assured that the systems are secure and functioning properly. We propose a hybrid authentication management protocol that addresses the unique issues inherent within control system networks, while leveraging the considerable research and momentum in existing IT authentication schemes. Our hybrid authentication protocol for control systems provides end device to end device authentication within a remote station and between remote stations and control centers. Additionally, the hybrid protocol is failsafe and will not interrupt communication or control of vital systems in a network partition or device failure. Finally, the hybrid protocol is resilient to transitory link loss and can operate in an island mode until connectivity is reestablished.

  8. NSTX-U Advances in Real-Time C++11 on Linux

    SciTech Connect

    Erickson, Keith G.

    2015-08-14

    Programming languages like C and Ada combined with proprietary embedded operating systems have dominated the real-time application space for decades. The new C++11 standard includes native, language-level support for concurrency, a required feature for any nontrivial event-oriented real-time software. Threads, Locks, and Atomics now exist to provide the necessary tools to build the structures that make up the foundation of a complex real-time system. The National Spherical Torus Experiment Upgrade (NSTX-U) at the Princeton Plasma Physics Laboratory (PPPL) is breaking new ground with the language as applied to the needs of fusion devices. A new Digital Coil Protection System (DCPS) will serve as the main protection mechanism for the magnetic coils, and it is written entirely in C++11 running on Concurrent Computer Corporation's real-time operating system, RedHawk Linux. It runs over 600 algorithms in a 5 kHz control loop that determine whether or not to shut down operations before physical damage occurs. To accomplish this, NSTX-U engineers developed software tools that do not currently exist elsewhere, including real-time atomic synchronization, real-time containers, and a real-time logging framework. Together with a recent (and carefully configured) version of the GCC compiler, these tools enable data acquisition, processing, and output using a conventional operating system to meet a hard real-time deadline (that is, missing even one periodic deadline is a failure) of 200 microseconds.

  9. NSTX-U Advances in Real-Time C++11 on Linux

    DOE PAGES

    Erickson, Keith G.

    2015-08-14

    Programming languages like C and Ada combined with proprietary embedded operating systems have dominated the real-time application space for decades. The new C++11 standard includes native, language-level support for concurrency, a required feature for any nontrivial event-oriented real-time software. Threads, Locks, and Atomics now exist to provide the necessary tools to build the structures that make up the foundation of a complex real-time system. The National Spherical Torus Experiment Upgrade (NSTX-U) at the Princeton Plasma Physics Laboratory (PPPL) is breaking new ground with the language as applied to the needs of fusion devices. A new Digital Coil Protection System (DCPS) will serve as the main protection mechanism for the magnetic coils, and it is written entirely in C++11 running on Concurrent Computer Corporation's real-time operating system, RedHawk Linux. It runs over 600 algorithms in a 5 kHz control loop that determine whether or not to shut down operations before physical damage occurs. To accomplish this, NSTX-U engineers developed software tools that do not currently exist elsewhere, including real-time atomic synchronization, real-time containers, and a real-time logging framework. Together with a recent (and carefully configured) version of the GCC compiler, these tools enable data acquisition, processing, and output using a conventional operating system to meet a hard real-time deadline (that is, missing even one periodic deadline is a failure) of 200 microseconds.
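
    To make the language features concrete, the sketch below shows the kind of C++11 construct the record describes: a periodic loop that publishes a shutdown decision through a lock-free std::atomic flag and checks its deadline with std::chrono. It is purely illustrative and is not taken from the DCPS source; the protection-algorithm stub is hypothetical, and only the 200-microsecond period is taken from the record above.

        #include <atomic>
        #include <chrono>
        #include <thread>

        std::atomic<bool> trip{false};  // lock-free flag read by the actuation side

        // Placeholder for the per-cycle protection algorithms (assumption:
        // returns true if coil limits would be exceeded).
        bool runProtectionAlgorithms() { return false; }

        void controlLoop(std::atomic<bool>& stop) {
            using clock = std::chrono::steady_clock;
            const auto period = std::chrono::microseconds(200);  // 5 kHz loop
            auto next = clock::now() + period;

            while (!stop.load(std::memory_order_acquire)) {
                if (runProtectionAlgorithms()) {
                    trip.store(true, std::memory_order_release);  // request shutdown
                }
                if (clock::now() > next) {
                    // Missing a periodic deadline is treated as a failure.
                    trip.store(true, std::memory_order_release);
                }
                std::this_thread::sleep_until(next);
                next += period;
            }
        }

        int main() {
            std::atomic<bool> stop{false};
            std::thread rt(controlLoop, std::ref(stop));
            std::this_thread::sleep_for(std::chrono::milliseconds(10));
            stop.store(true, std::memory_order_release);
            rt.join();
            return trip.load() ? 1 : 0;
        }

    Note that on a stock Linux kernel std::this_thread::sleep_until gives no hard real-time guarantee; the record's point is that RedHawk Linux plus these language-level primitives make such deadlines achievable.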

  10. Network Analysis of the State Space of Discrete Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Shreim, Amer; Grassberger, Peter; Nadler, Walter; Samuelsson, Björn; Socolar, Joshua E. S.; Paczuski, Maya

    2007-05-01

    We study networks representing the dynamics of elementary 1D cellular automata (CA) on finite lattices. We analyze scaling behaviors of both local and global network properties as a function of system size. The scaling of the largest node in-degree is obtained analytically for a variety of CA including rules 22, 54, and 110. We further define the path diversity as a global network measure. The coappearance of nontrivial scaling in both the hub size and the path diversity separates simple dynamics from the more complex behaviors typically found in Wolfram’s class IV and some class III CA.

  11. Multi-terabyte EIDE disk arrays running Linux RAID5

    SciTech Connect

    Sanders, D.A.; Cremaldi, L.M.; Eschenburg, V.; Godang, R.; Joy, M.D.; Summers, D.J.; Petravick, D.L.; /Fermilab

    2004-11-01

    High-energy physics experiments are currently recording large amounts of data and in a few years will be recording prodigious quantities of data. New methods must be developed to handle this data and make analysis at universities possible. Grid Computing is one method; however, the data must be cached at the various Grid nodes. We examine some storage techniques that exploit recent developments in commodity hardware. Disk arrays using RAID level 5 (RAID-5) include both parity and striping. The striping improves access speed. The parity protects data in the event of a single disk failure, but not in the case of multiple disk failures. We report on tests of dual-processor Linux Software RAID-5 arrays and Hardware RAID-5 arrays using a 12-disk 3ware controller, in conjunction with 250 and 300 GB disks, for use in offline high-energy physics data analysis. The price of IDE disks is now less than $1/GB. These RAID-5 disk arrays can be scaled to sizes affordable to small institutions and used when fast random access at low cost is important.
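
    The parity protection described above works by XOR: the parity block is the bitwise XOR of the corresponding data blocks on the other disks, so any single missing block can be rebuilt by XOR-ing the survivors. A minimal illustration follows (block layout and function names are arbitrary; this is not the Linux md driver's code):

        #include <cstdint>
        #include <vector>

        // Compute the RAID-5 parity block as the XOR of all data blocks
        // in one stripe (assumes at least one block, all the same size).
        std::vector<std::uint8_t> computeParity(const std::vector<std::vector<std::uint8_t>>& blocks) {
            std::vector<std::uint8_t> parity(blocks.front().size(), 0);
            for (const auto& block : blocks)
                for (std::size_t i = 0; i < parity.size(); ++i)
                    parity[i] ^= block[i];
            return parity;
        }

        // Rebuild a single lost block: XOR the parity with the surviving data blocks.
        std::vector<std::uint8_t> rebuildLostBlock(const std::vector<std::vector<std::uint8_t>>& surviving,
                                                   const std::vector<std::uint8_t>& parity) {
            std::vector<std::uint8_t> lost = parity;
            for (const auto& block : surviving)
                for (std::size_t i = 0; i < lost.size(); ++i)
                    lost[i] ^= block[i];
            return lost;
        }

    This also makes the limitation mentioned above clear: with a single parity block per stripe there is no way to reconstruct two simultaneously failed disks.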

  12. Enhanced networks operations using the X Window System

    NASA Technical Reports Server (NTRS)

    Linares, Irving

    1993-01-01

    We propose an X Window Graphical User Interface (GUI) which is tailored to the operations of NASA GSFC's Network Control Center (NCC), the NASA Ground Terminal (NGT), the White Sands Ground Terminal (WSGT), and the Second Tracking and Data Relay Satellite System (TDRSS) Ground Terminal (STGT). The proposed GUI can also be easily extended to other Ground Network (GN) Tracking Stations due to its standardized nature.

  13. State of the art survey of network operating systems development

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The results of the State-of-the-Art Survey of Network Operating Systems (NOS) performed for Goddard Space Flight Center are presented. NOS functional characteristics are presented in terms of user communication, data migration, job migration, network control, and common functional categories. Products (current or future) as well as research and prototyping efforts are summarized. The NOS products which are relevant to the space station and its activities are evaluated.

  14. Easy Access: Auditing the System Network

    ERIC Educational Resources Information Center

    Wiech, Dean

    2013-01-01

    In today's electronic learning environment, access to appropriate systems and data is of the utmost importance to students, faculty, and staff. Without proper access to the school's internal systems, teachers could be prevented from logging on to an online learning system and students might be unable to submit course work to an online…

  15. Secure Data Network System (SDNS) network, transport, and message security protocols

    NASA Astrophysics Data System (ADS)

    Dinkel, C.

    1990-03-01

    The Secure Data Network System (SDNS) project implements computer-to-computer communications security for distributed applications. The internationally accepted Open Systems Interconnection (OSI) computer networking architecture provides the framework for SDNS. SDNS uses the layering principles of OSI to implement secure data transfers between computer nodes of local area and wide area networks. Four security protocol documents developed by the National Security Agency (NSA) as output from the SDNS project are included. SDN.301 provides the framework for security at layer 3 of the OSI Model. Cryptographic techniques to provide data protection for transport connections or for connectionless-mode transmission are described in SDN.401. Specifications for message security service and protocol are contained in SDN.701. Directory System Specifications for Message Security Protocol are covered in SDN.702.

  16. Documentation for the token ring network simulation system

    NASA Technical Reports Server (NTRS)

    Peden, Jeffery H.; Weaver, Alfred C.

    1990-01-01

    A manual is presented which describes the language features of the Token Ring Network Simulation System. The simulation system is a powerful simulation tool for token ring networks which allows the specification of various Medium Access Control (MAC) layer protocols as well as the specification of various features of upper layer ISO protocols. In addition to these features, it also allows the user to specify message and station classes virtually to any degree of detail desired. The choice of a language instead of an interactive system to specify network parameters was dictated by both flexibility and time considerations. The language was developed specifically for the simulation system, and is very simple. It is also user friendly in that language elements which do not apply to the case at hand are ignored rather than treated as errors.

  17. Research on networked manufacturing system for reciprocating pump industry

    NASA Astrophysics Data System (ADS)

    Wu, Yangdong; Qi, Guoning; Xie, Qingsheng; Lu, Yujun

    2005-12-01

    Networked manufacturing is a trend in the reciprocating pump industry. According to the enterprises' requirements, an architecture of a networked manufacturing system for the reciprocating pump industry was proposed, composed of an infrastructure layer, a system management layer, an application service layer and a user layer. Its main functions included product data management, ASP service, business management, and customer relationship management; its physical framework was a multi-tier Internet-based model. The concept of ASP service integration was put forward and its process model was also established. As a result, a networked manufacturing system aimed at the characteristics of the reciprocating pump industry was built. By implementing this system, the reciprocating pump industry can obtain a new way to fully utilize its own resources and enhance its capability to respond to the global market quickly.

  18. Applying New Network Security Technologies to SCADA Systems.

    SciTech Connect

    Hurd, Steven A; Stamp, Jason Edwin; Duggan, David P; Chavez, Adrian R.

    2006-11-01

    Supervisory Control and Data Acquisition (SCADA) systems for automation are very important for critical infrastructure and manufacturing operations. They have been implemented to work in a number of physical environments using a variety of hardware, software, networking protocols, and communications technologies, often before security issues became of paramount concern. To offer solutions to security shortcomings in the short/medium term, the goal of this project was to identify technologies used to secure "traditional" IT networks and systems, and then assess their efficacy with respect to SCADA systems. These proposed solutions must be relatively simple to implement, reliable, and acceptable to SCADA owners and operators.

  19. Applying Trusted Network Technology To Process Control Systems

    NASA Astrophysics Data System (ADS)

    Okhravi, Hamed; Nicol, David

    Interconnections between process control networks and enterprise networks expose instrumentation and control systems and the critical infrastructure components they operate to a variety of cyber attacks. Several architectural standards and security best practices have been proposed for industrial control systems. However, they are based on older architectures and do not leverage the latest hardware and software technologies. This paper describes new technologies that can be applied to the design of next generation security architectures for industrial control systems. The technologies are discussed along with their security benefits and design trade-offs.

  20. A Learning System for Discriminating Variants of Malicious Network Traffic

    SciTech Connect

    Beaver, Justin M; Symons, Christopher T; Gillen, Rob

    2013-01-01

    Modern computer network defense systems rely primarily on signature-based intrusion detection tools, which generate alerts when patterns that are pre-determined to be malicious are encountered in network data streams. Signatures are created reactively, and only after in-depth manual analysis of a network intrusion. There is little ability for signature-based detectors to identify intrusions that are new or even variants of an existing attack, and little ability to adapt the detectors to the patterns unique to a network environment. Due to these limitations, the need exists for network intrusion detection techniques that can more comprehensively address both known and unknown network-based attacks and can be optimized for the target environment. This work describes a system that leverages machine learning to provide a network intrusion detection capability that analyzes behaviors in channels of communication between individual computers. Using examples of malicious and non-malicious traffic in the target environment, the system can be trained to discriminate between traffic types. The machine learning provides insight that would be difficult for a human to explicitly code as a signature because it evaluates many interdependent metrics simultaneously. With this approach, zero day detection is possible by focusing on similarity to known traffic types rather than mining for specific bit patterns or conditions. This also reduces the burden on organizations to account for all possible attack variant combinations through signatures. The approach is presented along with results from a third-party evaluation of its performance.

  1. FIPA agent based network distributed control system

    SciTech Connect

    D. Abbott; V. Gyurjyan; G. Heyes; E. Jastrzembski; C. Timmer; E. Wolin

    2003-03-01

    A control system with the capability to combine heterogeneous control systems or processes into a uniform homogeneous environment is discussed. This dynamically extensible system is an example of a software system at the agent level of abstraction. This level of abstraction considers agents as atomic entities that communicate to implement the functionality of the control system. Agents' engineering aspects are addressed by adopting the domain-independent software standard formulated by FIPA. Jade core Java classes are used as a FIPA specification implementation. A special, lightweight, XML/RDFS-based, control-oriented ontology markup language is developed to standardize the description of an arbitrary control system data processor. Control processes, described in this language, are integrated into the global system at runtime, without actual programming. Fault tolerance and recovery issues are also addressed.

  2. Neural Network Based Intrusion Detection System for Critical Infrastructures

    SciTech Connect

    Todd Vollmer; Ondrej Linda; Milos Manic

    2009-07-01

    Resiliency and security in control systems such as SCADA and nuclear plant systems are a relevant concern in today's world of hackers and malware. Computer systems used within critical infrastructures to control physical functions are not immune to the threat of cyber attacks and may be potentially vulnerable. Tailoring an intrusion detection system to the specifics of critical infrastructures can significantly improve the security of such systems. The IDS-NNM, an Intrusion Detection System using Neural Network based Modeling, is presented in this paper. The main contributions of this work are: 1) the use and analysis of real network data (data recorded from an existing critical infrastructure); 2) the development of a specific window-based feature extraction technique; 3) the construction of training datasets using randomly generated intrusion vectors; 4) the use of a combination of two neural network learning algorithms, Error-Back Propagation and Levenberg-Marquardt, for normal behavior modeling. The presented algorithm was evaluated on previously unseen network data. The IDS-NNM algorithm proved to be capable of capturing all intrusion attempts presented in the network communication while not generating any false alerts.

  3. Emergent latent symbol systems in recurrent neural networks

    NASA Astrophysics Data System (ADS)

    Monner, Derek; Reggia, James A.

    2012-12-01

    Fodor and Pylyshyn [(1988). Connectionism and cognitive architecture: A critical analysis. Cognition, 28(1-2), 3-71] famously argued that neural networks cannot behave systematically short of implementing a combinatorial symbol system. A recent response from Frank et al. [(2009). Connectionist semantic systematicity. Cognition, 110(3), 358-379] claimed to have trained a neural network to behave systematically without implementing a symbol system and without any in-built predisposition towards combinatorial representations. We believe systems like theirs may in fact implement a symbol system on a deeper and more interesting level: one where the symbols are latent - not visible at the level of network structure. In order to illustrate this possibility, we demonstrate our own recurrent neural network that learns to understand sentence-level language in terms of a scene. We demonstrate our model's learned understanding by testing it on novel sentences and scenes. By paring down our model into an architecturally minimal version, we demonstrate how it supports combinatorial computation over distributed representations by using the associative memory operations of Vector Symbolic Architectures. Knowledge of the model's memory scheme gives us tools to explain its errors and construct superior future models. We show how the model designs and manipulates a latent symbol system in which the combinatorial symbols are patterns of activation distributed across the layers of a neural network, instantiating a hybrid of classical symbolic and connectionist representations that combines advantages of both.
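
    The Vector Symbolic Architecture operations mentioned above can be illustrated with a minimal holographic-reduced-representation sketch: binding by circular convolution and unbinding by circular correlation. The random vectors below are stand-ins for the model's learned distributed representations, not its actual ones.

        # Minimal VSA (holographic reduced representation) sketch: binding via
        # circular convolution, unbinding via circular correlation.
        import numpy as np

        def bind(a, b):
            return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

        def unbind(trace, cue):
            # Approximate inverse of binding
            return np.real(np.fft.ifft(np.fft.fft(trace) * np.conj(np.fft.fft(cue))))

        n = 1024
        rng = np.random.default_rng(1)
        role_agent = rng.normal(0, 1 / np.sqrt(n), n)
        filler_dog = rng.normal(0, 1 / np.sqrt(n), n)

        trace = bind(role_agent, filler_dog)       # encode "agent = dog"
        recovered = unbind(trace, role_agent)      # noisy copy of filler_dog

        # Cosine similarity shows the filler is recoverable from the bound trace
        cos = recovered @ filler_dog / (np.linalg.norm(recovered) * np.linalg.norm(filler_dog))
        print(round(float(cos), 3))                # close to 1.0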

  4. Revealing and analyzing networks of environmental systems

    NASA Astrophysics Data System (ADS)

    Eveillard, D.; Bittner, L.; Chaffron, S.; Guidi, L.; Raes, J.; Karsenti, E.; Bowler, C.; Gorsky, G.

    2015-12-01

    Understanding the interactions between microbial communities and their environment well enough to be able to predict diversity on the basis of physicochemical parameters is a fundamental pursuit of microbial ecology that still eludes us. However, modeling microbial communities is a complicated task, because (i) communities are complex, (ii) most are described qualitatively, and (iii) quantitative understanding of the way communities interact with their surroundings remains incomplete. Within this seminar, we will illustrate two complementary approaches that aim to overcome these points in different manners. First, we will present a network analysis that focuses on the biological carbon pump in the global ocean. The biological carbon pump is the process by which photosynthesis transforms CO2 into organic carbon that sinks to the deep ocean as particles, where it is sequestered. While the intensity of the pump correlates with plankton community composition, the underlying ecosystem structure and interactions driving this process remain largely uncharacterized. Here we use environmental and metagenomic data gathered during the Tara Oceans expedition to improve understanding of these drivers. We show that specific plankton communities correlate with carbon export and highlight unexpected and overlooked taxa such as Radiolaria, alveolate parasites and bacterial pathogens, as well as Synechococcus and their phages, as key players in the biological pump. Additionally, we show that the abundances of just a few bacterial and viral genes predict most of the variability in the global ocean's carbon export. Together these findings help elucidate ecosystem drivers of the biological carbon pump and present a case study for scaling from genes to ecosystems. Second, we will show preliminary results on a probabilistic modeling approach that predicts microbial community structure across observed physicochemical data, from a putative network and partial quantitative knowledge. This modeling shows that, despite

  5. Installing an Integrated Information System in a Centralized Network.

    ERIC Educational Resources Information Center

    Mendelson, Andrew D.

    1992-01-01

    Many schools are looking at ways to centralize the distribution and retrieval of video, voice, and data transmissions in an integrated information system (IIS). A centralized system offers greater control of hardware and software. Describes media network planning to retrofit an Illinois high school with a fiber optic-based IIS. (MLF)

  6. New Generation System. "An Interstate Information Network Serving America's Children."

    ERIC Educational Resources Information Center

    Texas A and I Univ., Kingsville.

    The New Generation System (NGS) is a computer network developed to transfer academic records of migrant students. NGS was developed as a result of the phasing out of the Migrant Student Record Transfer System. NGS is backed by a 29-state consortium that uses the Internet to transfer records because of its speed, availability, and…

  7. Automated Bilingual Circulation System Using PC Local Area Networks.

    ERIC Educational Resources Information Center

    Iskanderani, A. I.; Anwar, M. A.

    1992-01-01

    Describes a local automated bilingual circulation system using personal computers in a local area network that was developed at King Abdulaziz University (Saudi Arabia) for Arabic and English materials. Topics addressed include the system structure, hardware, major features, storage requirements, and costs. (nine references) (LRW)

  8. Principles of E-network modelling of heterogeneous systems

    NASA Astrophysics Data System (ADS)

    Tarakanov, D.; Tsapko, I.; Tsapko, S.; Buldygin, R.

    2016-04-01

    The present article is concerned with the analytical and simulation modelling of heterogeneous technical systems using the E-network mathematical apparatus (an expansion of Petri nets). The distinguishing feature of the given system is the presence of a module which identifies the parameters of the controlled object as well as the external environment.

  9. An open system network for the biological sciences.

    PubMed Central

    Springer, G. K.; Loch, J. L.; Patrick, T. B.

    1991-01-01

    A description of an open system, distributed computing environment for the Biological Sciences is presented. This system utilizes a transparent interface in a computer network using NCS to implement an application system for molecular biologists to perform various processing activities from their local workstation. This system accepts requests for the services of a remote database server, located across the network, to perform all of the database searches needed to support the activities of the user. This database access is totally transparent to the user of the system and it appears, to the user, that all activities are being carried out on the local workstation. This system is a prototype for a much more extensive system being built to support the research efforts in the Biological Sciences at UMC. PMID:1807659

  10. Application of network control systems for adaptive optics

    NASA Astrophysics Data System (ADS)

    Eager, Robert J.

    2008-04-01

    The communication architecture for most pointing, tracking, and high-order adaptive optics control systems has been based on a centralized point-to-point and bus-based approach. With the increased use of larger arrays and multiple sensors, actuators and processing nodes, these evolving systems require decentralized control, modularity, flexibility, redundancy, integrated diagnostics, dynamic resource allocation, and ease of maintenance to support a wide range of experiments. Network control systems provide all of these critical functionalities. This paper begins with a quick overview of adaptive optics as a control system and communication architecture. It then provides an introduction to network control systems, identifying the key design areas that impact system performance. The paper then discusses the performance test results of a fielded network control system used to implement an adaptive optics system comprised of: a 10 kHz, 32x32 spatial self-referencing interferometer wavefront sensor, a 705-channel "tweeter" deformable mirror, a 177-channel "woofer" deformable mirror, ten processing nodes, and six data acquisition nodes. The reconstructor algorithm utilized a modulo-2pi wavefront phase measurement and a least-squares phase unwrapper with branch point correction. The servo control algorithm is a hybrid of exponential and infinite impulse response controllers, with tweeter-to-woofer saturation offloading. This system achieved a first-pixel-out to last-mirror-voltage latency of 86 microseconds, with the network accounting for 4 microseconds of the measured latency. Finally, the extensibility of this architecture is illustrated by detailing the integration of a tracking sub-system into the existing network.
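
    A toy sketch of the tweeter-to-woofer saturation offloading idea follows; the gains, stroke limit, and disturbance are invented for illustration and are not those of the fielded system.

        # Toy woofer/tweeter saturation-offload sketch (illustrative values only).
        import numpy as np

        TWEETER_LIMIT = 1.0    # stroke limit of the fast, low-stroke mirror
        ALPHA = 0.3            # fast-loop integrator gain
        BETA = 0.05            # slow offload rate to the large-stroke mirror

        tweeter_cmd, woofer_cmd = 0.0, 0.0
        rng = np.random.default_rng(2)

        for step in range(1000):
            error = 2.0 * np.sin(step / 50.0) + rng.normal(0.0, 0.1)  # fake wavefront error
            tweeter_cmd += ALPHA * (error - woofer_cmd - tweeter_cmd)  # fast loop
            # Saturation offload: clip the tweeter and hand the excess to the woofer
            clipped = np.clip(tweeter_cmd, -TWEETER_LIMIT, TWEETER_LIMIT)
            woofer_cmd += BETA * (tweeter_cmd - clipped)
            tweeter_cmd = clipped

        print(round(tweeter_cmd, 3), round(woofer_cmd, 3))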

  11. An artificial neural network controller for intelligent transportation systems applications

    SciTech Connect

    Vitela, J.E.; Hanebutte, U.R.; Reifman, J.

    1996-04-01

    An Autonomous Intelligent Cruise Control (AICC) has been designed using a feedforward artificial neural network, as an example for utilizing artificial neural networks for nonlinear control problems arising in intelligent transportation systems applications. The AICC is based on a simple nonlinear model of the vehicle dynamics. A Neural Network Controller (NNC) code developed at Argonne National Laboratory to control discrete dynamical systems was used for this purpose. In order to test the NNC, an AICC-simulator containing graphical displays was developed for a system of two vehicles driving in a single lane. Two simulation cases are shown, one involving a lead vehicle with constant velocity and the other a lead vehicle with varying acceleration. More realistic vehicle dynamic models will be considered in future work.

  12. Dynamic Business Networks: A Headache for Sustainable Systems Interoperability

    NASA Astrophysics Data System (ADS)

    Agostinho, Carlos; Jardim-Goncalves, Ricardo

    Collaborative networked environments emerged with the spread of the internet, helping to overcome past communication barriers and identifying interoperability as an essential property. When achieved seamlessly, efficiency is increased in the entire product life cycle. Nowadays, most organizations try to attain interoperability by establishing peer-to-peer mappings with the different partners, or in optimized networks, by using international standard models as the core for information exchange. In current industrial practice, mappings are only defined once, and the morphisms that represent them are hardcoded in the enterprise systems. This solution has been effective for static environments, where enterprise and product models are valid for decades. However, with an increasingly complex and dynamic global market, models change frequently to answer new customer requirements. This paper draws concepts from complex systems science and proposes a framework for sustainable systems interoperability in dynamic networks, enabling different organizations to evolve at their own rate.

  13. Protecting against cyber threats in networked information systems

    NASA Astrophysics Data System (ADS)

    Ertoz, Levent; Lazarevic, Aleksandar; Eilertson, Eric; Tan, Pang-Ning; Dokas, Paul; Kumar, Vipin; Srivastava, Jaideep

    2003-07-01

    This paper provides an overview of our efforts in detecting cyber attacks in networked information systems. Traditional signature based techniques for detecting cyber attacks can only detect previously known intrusions and are useless against novel attacks and emerging threats. Our current research at the University of Minnesota is focused on developing data mining techniques to automatically detect attacks against computer networks and systems. This research is being conducted as a part of MINDS (Minnesota Intrusion Detection System) project at the University of Minnesota. Experimental results on live network traffic at the University of Minnesota show that the new techniques show great promise in detecting novel intrusions. In particular, during the past few months our techniques have been successful in automatically identifying several novel intrusions that could not be detected using state-of-the-art tools such as SNORT.

  14. Photovoltaic system lifetime prediction using Petri networks method

    NASA Astrophysics Data System (ADS)

    Laronde, Rémi; Charki, Abderafi; Bigaud, David; Excoffier, Philippe

    2010-08-01

    The lifetime and availability of photovoltaic modules and systems are difficult to determine and not well known. This information is important for ensuring the performance of such an installation and for preparing its recycling. The aim of this article is to present a methodology for evaluating the availability and lifetime of a photovoltaic system using the Petri network method. Each component - module, wires and inverter - is detailed in Petri networks and several laws are used in order to estimate the reliability. Several guides (FIDES, MIL-HDBK-217 ...) allow determining the reliability of electronic components using collections of data. For photovoltaic modules, accelerated life tests are carried out to evaluate the lifetime, which is described by a Weibull distribution. The results obtained show that Petri networks are very useful for simulating lifetime thanks to their intrinsic modularity.
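
    A minimal Monte Carlo stand-in for the lifetime evaluation (not the Petri-net model itself) can be sketched as a series system whose lifetime is the earliest component failure, with a Weibull law for the modules as in the abstract and placeholder laws for the other components.

        # Minimal Monte Carlo lifetime sketch: a series system fails when its first
        # component fails. Parameter values are illustrative placeholders only.
        import numpy as np

        rng = np.random.default_rng(3)
        N = 100_000
        shape, scale = 2.5, 30.0                    # Weibull law for PV modules (years)
        module = scale * rng.weibull(shape, N)
        inverter = rng.exponential(12.0, N)         # placeholder exponential laws
        wiring = rng.exponential(40.0, N)

        system_life = np.minimum.reduce([module, inverter, wiring])
        print("median system lifetime (years):", round(float(np.median(system_life)), 1))
        print("P(survive 10 years):", round(float((system_life > 10).mean()), 3))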

  15. Computer network access to scientific information systems for minority universities

    NASA Astrophysics Data System (ADS)

    Thomas, Valerie L.; Wakim, Nagi T.

    1993-08-01

    The evolution of computer networking technology has led to the establishment of a massive networking infrastructure which interconnects various types of computing resources at many government, academic, and corporate institutions. A large segment of this infrastructure has been developed to facilitate information exchange and resource sharing within the scientific community. The National Aeronautics and Space Administration (NASA) supports both the development and the application of computer networks which provide its community with access to many valuable multi-disciplinary scientific information systems and on-line databases. Recognizing the need to extend the benefits of this advanced networking technology to the under-represented community, the National Space Science Data Center (NSSDC) in the Space Data and Computing Division at the Goddard Space Flight Center has developed the Minority University-Space Interdisciplinary Network (MU-SPIN) Program: a major networking and education initiative for Historically Black Colleges and Universities (HBCUs) and Minority Universities (MUs). In this paper, we will briefly explain the various components of the MU-SPIN Program while highlighting how, by providing access to scientific information systems and on-line data, it promotes a higher level of collaboration among faculty and students and NASA scientists.

  16. Consistent Steering System using SCTP for Bluetooth Scatternet Sensor Network

    NASA Astrophysics Data System (ADS)

    Dhaya, R.; Sadasivam, V.; Kanthavel, R.

    2012-12-01

    Wireless communication is the best way to convey information from source to destination with flexibility and mobility, and Bluetooth is the wireless technology suitable for short distances. On the other hand, a wireless sensor network (WSN) consists of spatially distributed autonomous sensors that cooperatively monitor physical or environmental conditions, such as temperature, sound, vibration, pressure, motion or pollutants. Using the Bluetooth piconet wireless technique in sensor nodes creates limitations in network depth and placement. The introduction of the Scatternet solves these network restrictions, but with a lack of reliability in data transmission. When the depth of the network increases, routing becomes more difficult. No authors so far have focused on the reliability factors of Scatternet sensor network routing. This paper illustrates the proposed system architecture and routing mechanism to increase the reliability. Another objective is to use a reliable transport protocol that uses the multi-homing concept and supports multiple streams to prevent head-of-line blocking. The results show that the Scatternet sensor network has lower packet loss than the existing system, even in a congested environment, making it suitable for surveillance applications.

  17. The Bipartite Network Study of the Library Book Lending System

    NASA Astrophysics Data System (ADS)

    Li, Nan-Nan; Zhang, Ning

    Through collecting the library lending information of the University of Shanghai for Science and Technology during one year, we build a database linking books and readers, and then construct a bipartite network to describe the relationships. We establish the corresponding un-weighted and weighted bipartite networks from the borrowing relationship and the reading days, respectively, and from these obtain the statistical properties via the theory and methods of complex networks. We find that all the properties follow exponential distributions and that there is a positive correlation between the relevant properties in the un-weighted and weighted networks. The un-weighted properties can describe the cooperation situation and configuration, while the properties with node weight may describe the competition results. Besides, we discuss the practical significance of the double relationship and the statistical properties. Furthermore, we propose a library personal recommendation system to support user-oriented library design.
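
    A small sketch of the construction, with toy loan records rather than the USST data: build the reader-book bipartite graph, weight edges by reading days, and compare un-weighted and weighted node degrees.

        # Sketch of the reader-book bipartite construction (toy data only).
        import networkx as nx

        loans = [("reader1", "bookA", 14), ("reader1", "bookB", 3),
                 ("reader2", "bookA", 7),  ("reader3", "bookC", 21),
                 ("reader3", "bookA", 2),  ("reader2", "bookB", 5)]

        B = nx.Graph()
        B.add_nodes_from({r for r, _, _ in loans}, bipartite="readers")
        B.add_nodes_from({b for _, b, _ in loans}, bipartite="books")
        for reader, book, days in loans:
            B.add_edge(reader, book, weight=days)   # weight = reading days

        # Un-weighted degree vs. weighted degree (strength) of each book
        books = [n for n, d in B.nodes(data=True) if d["bipartite"] == "books"]
        for b in books:
            print(b, B.degree(b), B.degree(b, weight="weight"))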

  18. Momentum Integral Network Method for Thermal-Hydraulic Systems Analysis.

    2000-11-20

    EPIPE is used for design or design evaluation of complex large piping systems. The piping systems can be viewed as a network of straight pipe elements (or tangents) and curved elements (pipe bends) interconnected at joints (or nodes) with intermediate supports and anchors. The system may be subject to static loads such as thermal, dead weight, internal pressure, or dynamic loads such as earthquake motions and flow-induced vibrations, or any combination of these. MINET (Momentum Integral NETwork) was developed for the transient analysis of intricate fluid flow and heat transfer networks, such as those found in the balance of plant in power generating facilities. It can be utilized as a stand-alone program or interfaced to another computer program for concurrent analysis. Through such coupling, a computer code limited by either the lack of required component models or large computational needs can be extended to more fully represent the thermal hydraulic system thereby reducing the need for estimating essential transient boundary conditions. The MINET representation of a system is one or more networks of volumes, segments, and boundaries linked together via heat exchangers only, i.e., heat can transfer between networks, but fluids cannot. Volumes are used to represent tanks or other volume components, as well as locations in the system where significant flow divisions or combinations occur. Segments are composed of one or more pipes, pumps, heat exchangers, turbines, and/or valves each represented by one or more nodes. Boundaries are simply points where the network interfaces with the user or another computer code. Several fluids can be simulated, including water, sodium, NaK, and air.

  19. Momentum Integral Network Method for Thermal-Hydraulic Systems Analysis.

    SciTech Connect

    2000-11-20

    EPIPE is used for design or design evaluation of complex large piping systems. The piping systems can be viewed as a network of straight pipe elements (or tangents) and curved elements (pipe bends) interconnected at joints (or nodes) with intermediate supports and anchors. The system may be subject to static loads such as thermal, dead weight, internal pressure, or dynamic loads such as earthquake motions and flow-induced vibrations, or any combination of these. MINET (Momentum Integral NETwork) was developed for the transient analysis of intricate fluid flow and heat transfer networks, such as those found in the balance of plant in power generating facilities. It can be utilized as a stand-alone program or interfaced to another computer program for concurrent analysis. Through such coupling, a computer code limited by either the lack of required component models or large computational needs can be extended to more fully represent the thermal hydraulic system thereby reducing the need for estimating essential transient boundary conditions. The MINET representation of a system is one or more networks of volumes, segments, and boundaries linked together via heat exchangers only, i.e., heat can transfer between networks, but fluids cannot. Volumes are used to represent tanks or other volume components, as well as locations in the system where significant flow divisions or combinations occur. Segments are composed of one or more pipes, pumps, heat exchangers, turbines, and/or valves each represented by one or more nodes. Boundaries are simply points where the network interfaces with the user or another computer code. Several fluids can be simulated, including water, sodium, NaK, and air.

  20. Host Event Based Network Monitoring

    SciTech Connect

    Jonathan Chugg

    2013-01-01

    The purpose of INL’s research on this project is to demonstrate the feasibility of a host event based network monitoring tool and its effects on host performance. Current host-based network monitoring tools work by polling, which can miss activity if it occurs between polls. Instead of polling, a tool could be developed that makes use of event APIs in the operating system to receive asynchronous notifications of network activity. Analysis and logging of these events allow the tool to construct the complete real-time and historical network configuration of the host while the tool is running. This research focused on three major operating systems commonly used by SCADA systems: Linux, Windows XP, and Windows 7. Windows 7 offers two paths that have minimal impact on the system and should be seriously considered. First is the new Windows Event Logging API, and, second, Windows 7 offers the ALE API within WFP. Any future work should focus on these methods.
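
    On Linux, the event-API idea can be sketched with rtnetlink: a socket subscribed to routing-layer multicast groups receives asynchronous notifications of link, address, and route changes instead of polling. This is a generic illustration, not INL's tool; message parsing is omitted.

        # Linux-only sketch of event-based (rather than polled) network monitoring
        # via rtnetlink multicast groups.
        import socket

        RTMGRP_LINK = 0x1            # group values from <linux/rtnetlink.h>
        RTMGRP_IPV4_IFADDR = 0x10
        RTMGRP_IPV4_ROUTE = 0x40

        s = socket.socket(socket.AF_NETLINK, socket.SOCK_RAW, socket.NETLINK_ROUTE)
        s.bind((0, RTMGRP_LINK | RTMGRP_IPV4_IFADDR | RTMGRP_IPV4_ROUTE))

        print("waiting for network events (Ctrl-C to stop)...")
        while True:
            data = s.recv(65535)     # one or more netlink messages (nlmsghdr + payload)
            print("received", len(data), "bytes of netlink event data")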

  1. The Washington Library Network's Computerized Bibliographic System

    ERIC Educational Resources Information Center

    Reed, Mary Jane Pobst

    1975-01-01

    Describes the development of the state of Washington's computer-assisted bibliographic system, along with its present batch-mode cataloging support subsystem and efforts toward on-line integrated acquisitions and cataloging support. (LS)

  2. Systemic risk and heterogeneous leverage in banking networks

    NASA Astrophysics Data System (ADS)

    Kuzubaş, Tolga Umut; Saltoğlu, Burak; Sever, Can

    2016-11-01

    This study probes systemic risk implications of leverage heterogeneity in banking networks. We show that the presence of heterogeneous leverages drastically changes the systemic effects of defaults and the nature of the contagion in interbank markets. Using financial leverage data from the US banking system, through simulations, we analyze the systemic significance of different types of borrowers, the evolution of the network, the consequences of interbank market size and the impact of market segmentation. Our study is related to the recent Basel III regulations on systemic risk and the treatment of the Global Systemically Important Banks (GSIBs). We also assess the extent to which the recent capital surcharges on GSIBs may curb financial fragility. We show the effectiveness of surcharge policy for the most-levered banks vis-a-vis uniform capital injection.

  3. Chemical reaction network approaches to Biochemical Systems Theory.

    PubMed

    Arceo, Carlene Perpetua P; Jose, Editha C; Marin-Sanguino, Alberto; Mendoza, Eduardo R

    2015-11-01

    This paper provides a framework to represent a Biochemical Systems Theory (BST) model (in either GMA or S-system form) as a chemical reaction network with power law kinetics. Using this representation, some basic properties and the application of recent results of Chemical Reaction Network Theory regarding steady states of such systems are shown. In particular, Injectivity Theory, including network concordance [36] and the Jacobian Determinant Criterion [43], a "Lifting Theorem" for steady states [26] and the comprehensive results of Müller and Regensburger [31] on complex balanced equilibria are discussed. A partial extension of a recent Emulation Theorem of Cardelli for mass action systems [3] is derived for a subclass of power law kinetic systems. However, it is also shown that the GMA and S-system models of human purine metabolism [10] do not display the reactant-determined kinetics assumed by Müller and Regensburger and hence only a subset of BST models can be handled with their approach. Moreover, since the reaction networks underlying many BST models are not weakly reversible, results for non-complex balanced equilibria are also needed.
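
    The power-law (S-system) form referred to above, dX_i/dt = alpha_i * prod_j X_j^(g_ij) - beta_i * prod_j X_j^(h_ij), can be illustrated with a toy two-variable system; the parameter values below are arbitrary.

        # Toy S-system (power-law kinetics); parameters chosen only to show the form.
        import numpy as np
        from scipy.integrate import solve_ivp

        alpha = np.array([2.0, 1.0])
        beta = np.array([1.0, 1.5])
        G = np.array([[0.0, -0.5],   # production exponents g_ij
                      [0.8,  0.0]])
        H = np.array([[0.5,  0.0],   # degradation exponents h_ij
                      [0.0,  0.6]])

        def s_system(t, x):
            production = alpha * np.prod(x ** G, axis=1)
            degradation = beta * np.prod(x ** H, axis=1)
            return production - degradation

        sol = solve_ivp(s_system, (0.0, 50.0), [0.5, 0.5])
        print("approximate steady state:", sol.y[:, -1])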

  4. Adaptive network models of collective decision making in swarming systems.

    PubMed

    Chen, Li; Huepe, Cristián; Gross, Thilo

    2016-08-01

    We consider a class of adaptive network models where links can only be created or deleted between nodes in different states. These models provide an approximate description of a set of systems where nodes represent agents moving in physical or abstract space, the state of each node represents the agent's heading direction, and links indicate mutual awareness. We show analytically that the adaptive network description captures a phase transition to collective motion in some swarming systems, such as the Vicsek model, and that the properties of this transition are determined by the number of states (discrete heading directions) that can be accessed by each agent.

  5. Adaptive network models of collective decision making in swarming systems

    NASA Astrophysics Data System (ADS)

    Chen, Li; Huepe, Cristián; Gross, Thilo

    2016-08-01

    We consider a class of adaptive network models where links can only be created or deleted between nodes in different states. These models provide an approximate description of a set of systems where nodes represent agents moving in physical or abstract space, the state of each node represents the agent's heading direction, and links indicate mutual awareness. We show analytically that the adaptive network description captures a phase transition to collective motion in some swarming systems, such as the Vicsek model, and that the properties of this transition are determined by the number of states (discrete heading directions) that can be accessed by each agent.

  6. Integrated network analysis and effective tools in plant systems biology

    PubMed Central

    Fukushima, Atsushi; Kanaya, Shigehiko; Nishida, Kozo

    2014-01-01

    One of the ultimate goals in plant systems biology is to elucidate the genotype-phenotype relationship in plant cellular systems. Integrated network analysis that combines omics data with mathematical models has received particular attention. Here we focus on the latest cutting-edge computational advances that facilitate their combination. We highlight (1) network visualization tools, (2) pathway analyses, (3) genome-scale metabolic reconstruction, and (4) the integration of high-throughput experimental data and mathematical models. Multi-omics data that contain the genome, transcriptome, proteome, and metabolome and mathematical models are expected to integrate and expand our knowledge of complex plant metabolisms. PMID:25408696

  7. Adaptive network models of collective decision making in swarming systems.

    PubMed

    Chen, Li; Huepe, Cristián; Gross, Thilo

    2016-08-01

    We consider a class of adaptive network models where links can only be created or deleted between nodes in different states. These models provide an approximate description of a set of systems where nodes represent agents moving in physical or abstract space, the state of each node represents the agent's heading direction, and links indicate mutual awareness. We show analytically that the adaptive network description captures a phase transition to collective motion in some swarming systems, such as the Vicsek model, and that the properties of this transition are determined by the number of states (discrete heading directions) that can be accessed by each agent. PMID:27627342

  8. The Deep Space Network information system in the year 2000

    NASA Technical Reports Server (NTRS)

    Markley, R. W.; Beswick, C. A.

    1992-01-01

    The Deep Space Network (DSN), the largest, most sensitive scientific communications and radio navigation network in the world, is considered. Focus is made on the telemetry processing, monitor and control, and ground data transport architectures of the DSN ground information system envisioned for the year 2000. The telemetry architecture will be unified from the front-end area to the end user. It will provide highly automated monitor and control of the DSN, automated configuration of support activities, and a vastly improved human interface. Automated decision support systems will be in place for DSN resource management, performance analysis, fault diagnosis, and contingency management.

  9. Enterprise network intrusion detection and prevention system (ENIDPS)

    NASA Astrophysics Data System (ADS)

    Akujuobi, C. M.; Ampah, N. K.

    2007-04-01

    Securing enterprise networks comes under two broad topics: Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS). The right combination of selected algorithms/techniques under both topics produces better security for a given network. This approach leads to using layers of physical, administrative, electronic, and encrypted systems to protect valuable resources. So far, there is no algorithm which guarantees absolute protection for a given network from intruders. Intrusion Prevention Systems such as IPSec, firewalls, Sender ID, and DomainKeys Identified Mail (DKIM) do not guarantee absolute security, just as existing Intrusion Detection Systems do not. Our approach focuses on developing an IDS which will detect all intruders that bypass the IPS and at the same time will be used in updating the IPS, since an IPS may fail to prevent some intruders from entering a given network. The new IDS will employ both signature-based detection and anomaly detection as its analysis strategy. It should therefore be able to detect known and unknown intruders or attacks and further isolate those sources of attack within the network. Both real-time and off-line IDS predictions will be applied under the analysis and response stages. The basic IDS architecture will involve both centralized and distributed/heterogeneous architecture to ensure effective detection. Pro-active responses and corrective responses will be employed. The new security system, which will be made up of both IDS and IPS, should be less expensive to implement compared to existing ones. Finally, limitations of existing security systems have to be eliminated with the introduction of the new security system.

  10. Applications of neural networks in chemical engineering: Hybrid systems

    SciTech Connect

    Ferrada, J.J.; Osborne-Lee, I.W. ); Grizzaffi, P.A. )

    1990-01-01

    Expert systems are known to be useful in capturing expertise and applying knowledge to chemical engineering problems such as diagnosis, process control, process simulation, and process advisory. However, expert system applications are traditionally limited to knowledge domains that are heuristic and involve only simple mathematics. Neural networks, on the other hand, represent an emerging technology capable of rapid recognition of patterned behavior without regard to mathematical complexity. Although useful in problem identification, neural networks are not very efficient in providing in-depth solutions and typically do not promote full understanding of the problem or the reasoning behind its solutions. Hence, applications of neural networks have certain limitations. This paper explores the potential for expanding the scope of chemical engineering areas where neural networks might be utilized by incorporating expert systems and neural networks into the same application, a process called hybridization. In addition, hybrid applications are compared with those using more traditional approaches, the results of the different applications are analyzed, and the feasibility of converting the preliminary prototypes described herein into useful final products is evaluated. 12 refs., 8 figs.

  11. A complex network description on traditional Chinese medicine system

    NASA Astrophysics Data System (ADS)

    Sun, Anzheng; Zhang, Peipei; He, Yue; Su, Beibei; He, Da-Ren

    2004-03-01

    Chinese traditional philosophy believes that a healthy body can adjust itself to reach a dynamic equilibrium with the environment. At an ill state the equilibrium is lost. Any single medicine can only attack one problem and cannot recover the whole equilibrium. A prescription formulation (PF) usually contains an "emperor" or principal medicine, several "minister" or assistant medicines, some accessorial medicines, and one or two inducting or harmonizing medicines. Therefore each traditional Chinese medicine (TCM) appears in a different number of PFs. The whole TCM system may be viewed as a network set composed of many complete graphs (PFs). The TCMs which have the highest node degrees in the network serve as the "bridges" between the complete graphs for forming the network, while the TCMs which have the lowest node degrees serve as the "emperors" in each complete graph. According to this idea we have performed a manual statistical investigation on approximately 1000 PFs and computed 8 different statistical properties of the network. The results show that the TCM system is scale-free and has a nice clustering structure. We suggest a dynamical model to describe the development of the TCM system.

  12. A graph-based system for network-vulnerability analysis

    SciTech Connect

    Swiler, L.P.; Phillips, C.

    1998-06-01

    This paper presents a graph-based approach to network vulnerability analysis. The method is flexible, allowing analysis of attacks from both outside and inside the network. It can analyze risks to a specific network asset, or examine the universe of possible consequences following a successful attack. The graph-based tool can identify the set of attack paths that have a high probability of success (or a low effort cost) for the attacker. The system could be used to test the effectiveness of making configuration changes, implementing an intrusion detection system, etc. The analysis system requires as input a database of common attacks, broken into atomic steps, specific network configuration and topology information, and an attacker profile. The attack information is matched with the network configuration information and an attacker profile to create a superset attack graph. Nodes identify a stage of attack, for example the class of machines the attacker has accessed and the user privilege level he or she has compromised. The arcs in the attack graph represent attacks or stages of attacks. By assigning probabilities of success on the arcs or costs representing level-of-effort for the attacker, various graph algorithms such as shortest-path algorithms can identify the attack paths with the highest probability of success.
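
    The path analysis described above can be sketched directly: assign success probabilities to attack-graph arcs, convert them to -log(p) weights, and run a shortest-path algorithm to obtain the highest-probability attack path. The stages and probabilities below are invented for illustration.

        # Sketch: highest-probability attack path = shortest path over -log(p) weights.
        # Graph, stages, and probabilities are invented, not from the paper.
        import math
        import networkx as nx

        G = nx.DiGraph()
        edges = [("outside", "dmz-web-user", 0.7),   # exploit web server
                 ("dmz-web-user", "dmz-root", 0.4),  # local privilege escalation
                 ("outside", "vpn-user", 0.2),       # phished VPN credentials
                 ("dmz-root", "db-admin", 0.5),
                 ("vpn-user", "db-admin", 0.6)]
        for u, v, p in edges:
            G.add_edge(u, v, prob=p, weight=-math.log(p))

        path = nx.shortest_path(G, "outside", "db-admin", weight="weight")
        prob = math.exp(-nx.shortest_path_length(G, "outside", "db-admin", weight="weight"))
        print(" -> ".join(path), f"(success probability {prob:.2f})")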

  13. Reliability Modeling of Microelectromechanical Systems Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Perera, J. Sebastian

    2000-01-01

    Microelectromechanical systems (MEMS) are a broad and rapidly expanding field that is currently receiving a great deal of attention because of the potential to significantly improve the ability to sense, analyze, and control a variety of processes, such as heating and ventilation systems, automobiles, medicine, aeronautical flight, military surveillance, weather forecasting, and space exploration. MEMS are very small and are a blend of electrical and mechanical components, with electrical and mechanical systems on one chip. This research establishes reliability estimation and prediction for MEMS devices at the conceptual design phase using neural networks. At the conceptual design phase, before devices are built and tested, traditional methods of quantifying reliability are inadequate because the device is not in existence and cannot be tested to establish the reliability distributions. A novel approach using neural networks is created to predict the overall reliability of a MEMS device based on its components and each component's attributes. The methodology begins with collecting attribute data (fabrication process, physical specifications, operating environment, property characteristics, packaging, etc.) and reliability data for many types of microengines. The data are partitioned into training data (the majority) and validation data (the remainder). A neural network is applied to the training data (both attribute and reliability); the attributes become the system inputs and reliability data (cycles to failure), the system output. After the neural network is trained with sufficient data, the validation data are used to verify that the neural network provides accurate reliability estimates. Then, the reliability of a new proposed MEMS device can be estimated by using the appropriate trained neural networks developed in this work.

  14. Network adaptable information systems for safeguard applications

    SciTech Connect

    Rodriguez, C.; Burczyk, L.; Chare, P.; Wagner, H.

    1996-09-01

    While containment and surveillance systems designed for nuclear safeguards have greatly improved through advances in computer, sensor, and microprocessor technologies, the authors recognize the need to continue the advancement of these systems to provide more standardized solutions for safeguards applications of the future. The benefits to be gained from the use of standardized technologies are becoming evident as safeguard activities are increasing world-wide while funding of these activities is becoming more limited. The EURATOM Safeguards Directorate and Los Alamos National Laboratory are developing and testing advanced monitoring technologies coupled with the most efficient solutions for the safeguards applications of the future.

  15. Smartphone qualification & linux-based tools for CubeSat computing payloads

    NASA Astrophysics Data System (ADS)

    Bridges, C. P.; Yeomans, B.; Iacopino, C.; Frame, T. E.; Schofield, A.; Kenyon, S.; Sweeting, M. N.

    Modern computers are now far in advance of satellite systems and leveraging of these technologies for space applications could lead to cheaper and more capable spacecraft. Together with NASA AMES's PhoneSat, the STRaND-1 nanosatellite team has been developing and designing new ways to include smart-phone technologies to the popular CubeSat platform whilst mitigating numerous risks. Surrey Space Centre (SSC) and Surrey Satellite Technology Ltd. (SSTL) have led in qualifying state-of-the-art COTS technologies and capabilities - contributing to numerous low-cost satellite missions. The focus of this paper is to answer if 1) modern smart-phone software is compatible for fast and low-cost development as required by CubeSats, and 2) if the components utilised are robust to the space environment. The STRaND-1 smart-phone payload software explored in this paper is united using various open-source Linux tools and generic interfaces found in terrestrial systems. A major result from our developments is that many existing software and hardware processes are more than sufficient to provide autonomous and operational payload object-to-object and file-based management solutions. The paper will provide methodologies on the software chains and tools used for the STRaND-1 smartphone computing platform, the hardware built with space qualification results (thermal, thermal vacuum, and TID radiation), and how they can be implemented in future missions.

  16. Systems analysis of biological networks in skeletal muscle function

    PubMed Central

    Smith, Lucas R.; Meyer, Gretchen; Lieber, Richard L.

    2014-01-01

    Skeletal muscle function depends on the efficient coordination among subcellular systems. These systems are composed of proteins encoded by a subset of genes, all of which are tightly regulated. In the cases where regulation is altered because of disease or injury, dysfunction occurs. To enable objective analysis of muscle gene expression profiles, we have defined nine biological networks whose coordination is critical to muscle function. We begin by describing the expression of proteins necessary for optimal neuromuscular junction function that results in the muscle cell action potential. That action potential is transmitted to proteins involved in excitation–contraction coupling enabling Ca2+ release. Ca2+ then activates contractile proteins supporting actin and myosin cross-bridge cycling. Force generated by cross-bridges is transmitted via cytoskeletal proteins through the sarcolemma and out to critical proteins that support the muscle extracellular matrix. Muscle contraction is fueled through many proteins that regulate energy metabolism. Inflammation is a common response to injury that can result in alteration of many pathways within muscle. Muscle also has multiple pathways that regulate size through atrophy or hypertrophy. Finally, the isoforms associated with fast muscle fibers and their corresponding isoforms in slow muscle fibers are delineated. These nine networks represent important biological systems that affect skeletal muscle function. Combining high-throughput systems analysis with advanced networking software will allow researchers to use these networks to objectively study skeletal muscle systems. PMID:23188744

  17. Systemic Risk Analysis on Reconstructed Economic and Financial Networks

    NASA Astrophysics Data System (ADS)

    Cimini, Giulio; Squartini, Tiziano; Garlaschelli, Diego; Gabrielli, Andrea

    2015-10-01

    We address a fundamental problem that is systematically encountered when modeling real-world complex systems of societal relevance: the limitedness of the information available. In the case of economic and financial networks, privacy issues severely limit the information that can be accessed and, as a consequence, the possibility of correctly estimating the resilience of these systems to events such as financial shocks, crises and cascade failures. Here we present an innovative method to reconstruct the structure of such partially-accessible systems, based on the knowledge of intrinsic node-specific properties and of the number of connections of only a limited subset of nodes. This information is used to calibrate an inference procedure based on fundamental concepts derived from statistical physics, which allows to generate ensembles of directed weighted networks intended to represent the real system—so that the real network properties can be estimated as their average values within the ensemble. We test the method both on synthetic and empirical networks, focusing on the properties that are commonly used to measure systemic risk. Indeed, the method shows a remarkable robustness with respect to the limitedness of the information available, thus representing a valuable tool for gaining insights on privacy-protected economic and financial systems.

  18. Systemic Risk Analysis on Reconstructed Economic and Financial Networks

    PubMed Central

    Cimini, Giulio; Squartini, Tiziano; Garlaschelli, Diego; Gabrielli, Andrea

    2015-01-01

    We address a fundamental problem that is systematically encountered when modeling real-world complex systems of societal relevance: the limitedness of the information available. In the case of economic and financial networks, privacy issues severely limit the information that can be accessed and, as a consequence, the possibility of correctly estimating the resilience of these systems to events such as financial shocks, crises and cascade failures. Here we present an innovative method to reconstruct the structure of such partially-accessible systems, based on the knowledge of intrinsic node-specific properties and of the number of connections of only a limited subset of nodes. This information is used to calibrate an inference procedure based on fundamental concepts derived from statistical physics, which allows to generate ensembles of directed weighted networks intended to represent the real system—so that the real network properties can be estimated as their average values within the ensemble. We test the method both on synthetic and empirical networks, focusing on the properties that are commonly used to measure systemic risk. Indeed, the method shows a remarkable robustness with respect to the limitedness of the information available, thus representing a valuable tool for gaining insights on privacy-protected economic and financial systems. PMID:26507849

  19. Pathways, Networks, and Systems: Theory and Experiments

    SciTech Connect

    Joseph H. Nadeau; John D. Lambris

    2004-10-30

    The international conference provided a unique opportunity for theoreticians and experimenters to exchange ideas, strategies, problems, challenges, language and opportunities in both formal and informal settings. This dialog is an important step towards developing a deep and effective integration of theory and experiments in studies of systems biology in humans and model organisms.

  20. Real Time Network Monitoring and Reporting System

    ERIC Educational Resources Information Center

    Massengale, Ricky L., Sr.

    2009-01-01

    With the ability of modern system developers to develop intelligent programs that allows machines to learn, modify and evolve themselves, current trends of reactionary methods to detect and eradicate malicious software code from infected machines is proving to be too costly. Addressing malicious software after an attack is the current methodology…

  1. A Study of the Effects of Fieldbus Network Induced Delays on Control Systems

    ERIC Educational Resources Information Center

    Mainoo, Joseph

    2012-01-01

    Fieldbus networks are all-digital, two-way, multi-drop communication systems that are used to connect field devices such as sensors and actuators, and controllers. These fieldbus network systems are also called networked control systems (NCS). Although, there are different varieties of fieldbus networks such as Foundation Field Bus, DeviceNet, and…

  2. Modeling belief systems with scale-free networks.

    PubMed

    Antal, Miklós; Balogh, László

    2009-12-01

    The evolution of belief systems has always been a focus of cognitive research. In this paper, we delineate a new model describing belief systems as a network of statements considered true. Testing the model with a small number of parameters enabled us to reproduce a variety of well-known mechanisms ranging from opinion changes to development of psychological problems. The self-organizing opinion structure showed a scale-free degree distribution. The novelty of our work lies in applying a convenient set of definitions allowing us to depict opinion network dynamics in a highly favorable way, which resulted in a scale-free belief network. As an additional benefit, we listed several conjectural consequences in a number of areas related to thinking and reasoning. PMID:19394794

  3. [Research on Zhejiang blood information network and management system].

    PubMed

    Yan, Li-Xing; Xu, Yan; Meng, Zhong-Hua; Kong, Chang-Hong; Wang, Jian-Min; Jin, Zhen-Liang; Wu, Shi-Ding; Chen, Chang-Shui; Luo, Ling-Fei

    2007-02-01

    This research aimed to develop the first province-level centralized blood information database and real-time communication network in China. Multiple technologies were used, including local area network database separate operation, a real-time data concentration and distribution mechanism, allopatric backup, and an optical fiber virtual private network (VPN). As a result, the centralized blood information database and management system were successfully constructed, covering all of Zhejiang province, and real-time exchange of blood data was realised. In conclusion, its implementation promotes volunteer blood donation and ensures blood safety in Zhejiang, and especially strengthens the quick response to public health emergencies. This project lays the foundation for centralized testing and allotment among blood banks in Zhejiang, and can serve as a reference for contemporary blood bank information systems in China.

  4. System Integration and Network Planning in the Academic Health Center

    PubMed Central

    Testa, Marcia A.; Spackman, Thomas J.

    1985-01-01

    The transfer of information within the academic health center is complicated by the complex nature of the institution's multi-dimensional role. The diverse functions of patient care, administration, education and research result in a complex web of information exchange which requires an integrated approach to system management. System integration involves a thorough assessment of “end user” needs in terms of hardware and software as well as specification of the communications network architecture. The network will consist of a series of end user nodes which capture, process, archive and display information. This paper will consider some requirements of these nodes, also called intelligent workstations, relating to their management and integration into a total health care network.

  5. Performance limitations for networked control systems with plant uncertainty

    NASA Astrophysics Data System (ADS)

    Chi, Ming; Guan, Zhi-Hong; Cheng, Xin-Ming; Yuan, Fu-Shun

    2016-04-01

    There has recently been significant interest in performance studies for networked control systems with communication constraints. However, the existing work mainly assumes that the plant has an exact model. The goal of this paper is to investigate the optimal tracking performance for a networked control system in the presence of plant uncertainty. The plant under consideration is assumed to be non-minimum phase and unstable, while the two-parameter controller is employed and the integral square criterion is adopted to measure the tracking error. We formulate the uncertainty by utilising stochastic embedding. The explicit expression of the tracking performance has been obtained. The results show that the network communication noise and the model uncertainty, as well as the unstable poles and non-minimum phase zeros, can worsen the tracking performance.

  6. Integrated evolutionary computation neural network quality controller for automated systems

    SciTech Connect

    Patro, S.; Kolarik, W.J.

    1999-06-01

    With increasing competition in the global market, more and more stringent quality standards and specifications are being demanded at lower costs. Manufacturing applications of computing power are becoming more common. The application of neural networks to the identification and control of dynamic processes has been discussed. The limitations of using neural networks for control purposes have been pointed out, and a different technique, evolutionary computation, has been discussed. The results of identifying and controlling an unstable, dynamic process using evolutionary computation methods have been presented. A framework for an integrated system, using both neural networks and evolutionary computation, has been proposed to identify the process and then control the product quality in a dynamic, multivariable system, in real time.

  7. Program Support Communications Network (PSCN) facsimile system directory

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This directory provides a system description, a station listing, and operating procedures for the Program Support Communications Network (PSCN) NASA Facsimile System. The NASA Facsimile System is a convenient and efficient means of spanning the distance, time, and cost of transmitting documents from one person to another. In the spectrum of communication techniques, facsimile bridges the gap between mail and data transmission. Facsimile can transmit in a matter of minutes or seconds what would take a day or more by mail delivery. The NASA Facsimile System is composed of several makes and models of facsimile machines. The system also supports the 3M FaxXchange network controllers located at Marshall Space Flight Center (MSFC).

  8. Sensor network based vehicle classification and license plate identification system

    SciTech Connect

    Frigo, Janette Rose; Brennan, Sean M; Rosten, Edward J; Raby, Eric Y; Kulathumani, Vinod K

    2009-01-01

    Typically, for energy efficiency and scalability purposes, sensor networks have been used in the context of environmental and traffic monitoring applications in which operations at the sensor level are not computationally intensive. But increasingly, sensor network applications require data- and compute-intensive sensors such as video cameras and microphones. In this paper, we describe the design and implementation of two such systems: a vehicle classifier based on acoustic signals and a license plate identification system using a camera. The systems are implemented in an energy-efficient manner to the extent possible using commercially available hardware, the Mica motes and the Stargate platform. Our experience in designing these systems leads us to consider an alternate, more flexible, modular, low-power mote architecture that uses a combination of FPGAs, specialized embedded processing units and sensor data acquisition systems.

  9. Dissipative rendering and neural network control system design

    NASA Technical Reports Server (NTRS)

    Gonzalez, Oscar R.

    1995-01-01

    Model-based control system designs are limited by the accuracy of the models of the plant, plant uncertainty, and exogenous signals. Although better models can be obtained with system identification, the models and control designs still have limitations. One approach to reduce the dependency on particular models is to design a set of compensators that will guarantee robust stability to a set of plants. Optimization over the compensator parameters can then be used to get the desired performance. Conservativeness of this approach can be reduced by integrating fundamental properties of the plant models. This is the approach of dissipative control design. Dissipative control designs are based on several variations of the Passivity Theorem, which have been proven for nonlinear/linear and continuous-time/discrete-time systems. These theorems depend not on a specific model of a plant, but on its general dissipative properties. Dissipative control design has found wide applicability in flexible space structures and robotic systems that can be configured to be dissipative. Currently, there is ongoing research to improve the performance of dissipative control designs. For aircraft systems that are not dissipative active control may be used to make them dissipative and then a dissipative control design technique can be used. It is also possible that rendering a system dissipative and dissipative control design may be combined into one step. Furthermore, the transformation of a non-dissipative system to dissipative can be done robustly. One sequential design procedure for finite dimensional linear time-invariant systems has been developed. For nonlinear plants that cannot be controlled adequately with a single linear controller, model-based techniques have additional problems. Nonlinear system identification is still a research topic. Lacking analytical models for model-based design, artificial neural network algorithms have recently received considerable attention. Using

  10. The Accounting Network: How Financial Institutions React to Systemic Crisis

    PubMed Central

    Puliga, Michelangelo; Flori, Andrea; Pappalardo, Giuseppe; Chessa, Alessandro; Pammolli, Fabio

    2016-01-01

    The role of network theory in the study of financial crises has received wide attention in recent years. It has been shown how the network topology and the dynamics running on top of it can trigger the outbreak of large systemic crises. Following this methodological perspective we introduce here the Accounting Network, i.e. the network we can extract through vector similarity techniques from companies’ financial statements. We build the Accounting Network on a large database of worldwide banks in the period 2001–2013, covering the onset of the global financial crisis of mid-2007. After a careful data cleaning, we apply a quality check in the construction of the network, introducing a parameter (the Quality Ratio) capable of trading off the size of the sample (coverage) and the representativeness of the financial statements (accuracy). We compute several basic network statistics and check, with the Louvain community detection algorithm, for emerging communities of banks. Remarkably, sensible regional aggregations show up, with the Japanese and the US clusters dominating the community structure, although the presence of a geographically mixed community points to a gradual convergence of banks into similar supranational practices. Finally, a Principal Component Analysis procedure reveals the main economic components that influence communities’ heterogeneity. Even using the most basic vector similarity hypotheses on the composition of the financial statements, the signature of the financial crisis clearly arises across the years around 2008. We finally discuss how the Accounting Networks can be improved to reflect the best practices in financial statement analysis. PMID:27736865
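
    A minimal sketch of the vector-similarity construction: treat each bank's statement as a normalized vector of line items, link banks whose cosine similarity exceeds a threshold, and hand the result to a community-detection routine. The balance-sheet numbers and threshold below are illustrative only.

        # Sketch of an "accounting network": link banks with similar statement vectors.
        import numpy as np
        import networkx as nx

        banks = ["bankA", "bankB", "bankC", "bankD"]
        # Columns: loans, deposits, trading assets, equity (normalized by total assets)
        X = np.array([[0.60, 0.70, 0.05, 0.08],
                      [0.58, 0.72, 0.06, 0.07],
                      [0.20, 0.30, 0.55, 0.05],
                      [0.22, 0.28, 0.52, 0.06]])

        unit = X / np.linalg.norm(X, axis=1, keepdims=True)
        S = unit @ unit.T                          # cosine similarity matrix

        G = nx.Graph()
        G.add_nodes_from(banks)
        for i in range(len(banks)):
            for j in range(i + 1, len(banks)):
                if S[i, j] > 0.95:                 # illustrative threshold
                    G.add_edge(banks[i], banks[j], weight=S[i, j])

        print(list(G.edges(data="weight")))
        # Community detection (e.g. Louvain, available in networkx >= 3.0) would follow:
        # communities = nx.community.louvain_communities(G)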

  11. CI-KNOW: Cyberinfrastructure Knowledge Networks on the Web. A Social Network Enabled Recommender System for Locating Resources in Cyberinfrastructures

    NASA Astrophysics Data System (ADS)

    Green, H. D.; Contractor, N. S.; Yao, Y.

    2006-12-01

    A knowledge network is a multi-dimensional network created from the interactions and interconnections among the scientists, documents, data, analytic tools, and interactive collaboration spaces (like forums and wikis) associated with a collaborative environment. CI-KNOW is a suite of software tools that leverages automated data collection, social network theories, analysis techniques and algorithms to infer an individual's interests and expertise based on their interactions and activities within a knowledge network. The CI-KNOW recommender system mines the knowledge network associated with a scientific community's use of cyberinfrastructure tools and uses relational metadata to record connections among entities in the knowledge network. Recent developments in social network theories and methods provide the backbone for a modular system that creates recommendations from relational metadata. A network navigation portlet allows users to locate colleagues, documents, data or analytic tools in the knowledge network and to explore their networks through a visual, step-wise process. An internal auditing portlet offers administrators diagnostics to assess the growth and health of the entire knowledge network. The first instantiation of the prototype CI-KNOW system is part of the Environmental Cyberinfrastructure Demonstration project at the National Center for Supercomputing Applications, which supports the activities of hydrologic and environmental science communities (CLEANER and CUAHSI) under the umbrella of the WATERS network environmental observatory planning activities (http://cleaner.ncsa.uiuc.edu). This poster summarizes the key aspects of the CI-KNOW system, highlighting the key inputs, calculation mechanisms, and output modalities.
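
    A toy sketch of the recommendation idea, not CI-KNOW's actual algorithm: score resources by how strongly their users overlap with the target researcher's own interaction set. Every researcher, resource, and interaction below is hypothetical.

    ```python
    # Toy sketch of network-based recommendation (not CI-KNOW itself): rank unseen
    # resources by the interaction overlap of the people who already use them.
    # All names and interactions below are hypothetical.
    from collections import Counter

    interactions = {                       # researcher -> resources they have used
        "alice": {"dataset_A", "tool_X", "paper_1"},
        "bob": {"dataset_A", "tool_Y", "paper_1"},
        "carol": {"tool_X", "paper_2"},
    }

    def recommend(user, k=3):
        """Rank unseen resources by the interaction overlap of the people using them."""
        mine = interactions[user]
        scores = Counter()
        for other, theirs in interactions.items():
            if other == user:
                continue
            overlap = len(mine & theirs)              # strength of the implicit tie
            for resource in theirs - mine:
                scores[resource] += overlap
        return [resource for resource, _ in scores.most_common(k)]

    print(recommend("alice"))                         # ['tool_Y', 'paper_2']
    ```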

  12. Dynamical system modeling via signal reduction and neural network simulation

    SciTech Connect

    Paez, T.L.; Hunter, N.F.

    1997-11-01

    Many dynamical systems tested in the field and the laboratory display significant nonlinear behavior. Accurate characterization of such systems requires modeling in a nonlinear framework. One construct forming a basis for nonlinear modeling is that of the artificial neural network (ANN). However, when system behavior is complex, the amount of data required to perform training can become unreasonable. The authors reduce the complexity of information present in system response measurements using decomposition via canonical variate analysis. They describe a method for decomposing system responses, then modeling the components with ANNs. A numerical example is presented, along with conclusions and recommendations.
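
    A rough sketch of the two-stage idea under simplifying assumptions: singular value decomposition stands in for canonical variate analysis to reduce a synthetic two-channel response, and a small scikit-learn MLP models each retained component from its lagged values.

    ```python
    # Rough sketch under simplifying assumptions: SVD stands in for canonical
    # variate analysis, and one small scikit-learn MLP models each retained
    # component from its lagged values. Data and sizes are synthetic.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 20.0, 2000)
    response = np.column_stack(
        [np.sin(t), np.sin(2.0 * t) + 0.05 * rng.standard_normal(t.size)]
    )

    # Reduce the multichannel response to its dominant components
    U, s, _ = np.linalg.svd(response - response.mean(axis=0), full_matrices=False)
    components = U[:, :2] * s[:2]

    lags, fits = 4, []
    for k in range(components.shape[1]):
        x = np.column_stack([components[i:-(lags - i), k] for i in range(lags)])
        y = components[lags:, k]                      # predict the next sample
        net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        net.fit(x, y)                                 # one small ANN per component
        fits.append(round(net.score(x, y), 3))
    print(fits)
    ```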

  13. Bayesian networks as a tool for epidemiological systems analysis

    NASA Astrophysics Data System (ADS)

    Lewis, F. I.

    2012-11-01

    Bayesian network analysis is a form of probabilistic modeling which derives from empirical data a directed acyclic graph (DAG) describing the dependency structure between random variables. Bayesian networks are increasingly finding application in areas such as computational and systems biology, and more recently in epidemiological analyses. The key distinction between standard empirical modeling approaches, such as generalised linear modeling, and Bayesian network analyses is that the latter attempts not only to identify statistically associated variables, but to additionally, and empirically, separate these into those directly and indirectly dependent on one or more outcome variables. Such discrimination is vastly more ambitious but has the potential to reveal far more about key features of complex disease systems. Applying Bayesian network modeling to biological and medical data has considerable computational demands, combined with the need to ensure robust model selection given the vast model space of possible DAGs. These challenges require the use of approximation techniques, such as the Laplace approximation, Markov chain Monte Carlo simulation and parametric bootstrapping, along with computational parallelization. A case study in structure discovery - identification of an optimal DAG for given data - is presented which uses additive Bayesian networks to explore veterinary disease data of industrial and medical relevance.
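
    One building block of such structure discovery, sketched under a linear-Gaussian assumption with synthetic data: a BIC-style score for a single candidate DAG, computed from per-node regressions. A full analysis would compare many DAGs and add the approximation machinery mentioned above.

    ```python
    # One building block of score-based structure discovery, sketched under a
    # linear-Gaussian assumption with synthetic data: a BIC-style score for a
    # single candidate DAG. A real analysis would search over many DAGs.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 500
    a = rng.standard_normal(n)
    b = 0.8 * a + 0.5 * rng.standard_normal(n)
    c = -0.6 * b + 0.5 * rng.standard_normal(n)
    data = {"a": a, "b": b, "c": c}

    def bic(dag):
        """Per-node Gaussian log-likelihoods minus a complexity penalty."""
        score = 0.0
        for node, parents in dag.items():
            y = data[node]
            X = np.column_stack([data[p] for p in parents] + [np.ones(n)])
            resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
            loglik = -0.5 * n * (np.log(2.0 * np.pi * resid.var()) + 1.0)
            score += loglik - 0.5 * X.shape[1] * np.log(n)
        return score

    print(bic({"a": [], "b": ["a"], "c": ["b"]}))     # the generating structure
    print(bic({"a": [], "b": [], "c": ["a"]}))        # a worse candidate scores lower
    ```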

  14. Network Clustering Revealed the Systemic Alterations of Mitochondrial Protein Expression

    PubMed Central

    Koo, Hyun-Jung; Park, Wook-Ha; Yang, Jae-Seong; Yu, Myeong-Hee; Kim, Sanguk; Pak, Youngmi Kim

    2011-01-01

    The mitochondrial protein repertoire varies depending on the cellular state. Protein component modifications caused by mitochondrial DNA (mtDNA) depletion are related to a wide range of human diseases; however, little is known about how nuclear-encoded mitochondrial proteins (mt proteome) changes under such dysfunctional states. In this study, we investigated the systemic alterations of mtDNA-depleted (ρ0) mitochondria by using network analysis of gene expression data. By modularizing the quantified proteomics data into protein functional networks, systemic properties of mitochondrial dysfunction were analyzed. We discovered that up-regulated and down-regulated proteins were organized into two predominant subnetworks that exhibited distinct biological processes. The down-regulated network modules are involved in typical mitochondrial functions, while up-regulated proteins are responsible for mtDNA repair and regulation of mt protein expression and transport. Furthermore, comparisons of proteome and transcriptome data revealed that ρ0 cells attempted to compensate for mtDNA depletion by modulating the coordinated expression/transport of mt proteins. Our results demonstrate that mt protein composition changed to remodel the functional organization of mitochondrial protein networks in response to dysfunctional cellular states. Human mt protein functional networks provide a framework for understanding how cells respond to mitochondrial dysfunctions. PMID:21738461

  15. Network management and signalling standards for CCSDS advanced orbiting system communication systems

    NASA Astrophysics Data System (ADS)

    Pietras, John

    The Consultative Committee for Space Data Systems (CCSDS) is an international organization chartered to develop and adopt communications protocols and data processing standards suitable for use in space-related communication and data processing systems. This paper briefly describes the CCSDS network management environment and reviews the current status of CCSDS recommendations for network management functional capability, use of internal standard for network management, and composition of signaling systems in support of the advanced orbiting systems services typified by the international Space Station Freedom Program. A timetable for future work in this area is presented.

  16. Neural Network Based System for Equipment Startup Surveillance

    1996-12-18

    NEBSESS is a system for equipment surveillance and fault detection which relies on a neural-network based means for diagnosing disturbances during startup and for automatically actuating the Sequential Probability Ratio Test (SPRT) as a signal validation means during steady-state operation.
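
    An illustrative SPRT for a Gaussian mean shift, not the NEBSESS implementation: the running log-likelihood ratio is compared against Wald thresholds derived from chosen false-alarm and missed-alarm rates. All parameter values are arbitrary.

    ```python
    # Illustrative SPRT for a Gaussian mean shift, not the NEBSESS implementation.
    # All parameter values are arbitrary example choices.
    import math
    import random

    mu0, mu1, sigma = 0.0, 0.5, 1.0           # nominal mean, faulted mean, noise std
    alpha, beta = 0.01, 0.01                  # false-alarm and missed-alarm rates
    upper = math.log((1.0 - beta) / alpha)    # decide "fault" above this
    lower = math.log(beta / (1.0 - alpha))    # decide "normal" below this

    def sprt(samples):
        llr, k = 0.0, 0
        for k, x in enumerate(samples, 1):
            # log-likelihood ratio increment for one observation
            llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)
            if llr >= upper:
                return "fault", k
            if llr <= lower:
                return "normal", k
        return "undecided", k

    random.seed(0)
    drifting = (random.gauss(mu1, sigma) for _ in range(1000))
    print(sprt(drifting))                     # typically declares "fault" within a few samples
    ```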

  17. Phosphorimager and PD densitometer imaging system network. Technical report

    SciTech Connect

    1995-05-01

    This document discusses the research projects undertaken as a result of the availability of the PhosphorImager and PD Densitometer Imaging System Network, at the University of Georgia's Complex Carbohydrate Research Center. The benefit gained from the equipment is described for each project.

  18. Modeling the School System Adoption Process for Library Networking.

    ERIC Educational Resources Information Center

    Kester, Diane D.

    This study developed a preliminary model of the stages of school system participation in library networks and identified the major activities for each stage. Constructed from a study of the literature on innovation adoption and diffusion, observation, and informal interviews, the model is composed of four primary aspects: technological support,…

  19. Role of Communication Networks in Behavioral Systems Analysis

    ERIC Educational Resources Information Center

    Houmanfar, Ramona; Rodrigues, Nischal Joseph; Smith, Gregory S.

    2009-01-01

    This article provides an overview of communication networks and the role of verbal behavior in behavioral systems analysis. Our discussion highlights styles of leadership in the design and implementation of effective organizational contingencies that affect ways by which coordinated work practices are managed. We draw upon literature pertaining to…

  20. Fourier Transform for Fermionic Systems and the Spectral Tensor Network

    NASA Astrophysics Data System (ADS)

    Ferris, Andrew J.

    2014-07-01

    Leveraging the decomposability of the fast Fourier transform, I propose a new class of tensor network that is efficiently contractible and able to represent many-body systems with local entanglement that is greater than the area law. Translationally invariant systems of free fermions in arbitrary dimensions as well as 1D systems solved by the Jordan-Wigner transformation are shown to be exactly represented in this class. Further, it is proposed that these tensor networks be used as generic structures to variationally describe more complicated systems, such as interacting fermions. This class shares some similarities with the Evenbly-Vidal branching multiscale entanglement renormalization ansatz, but with some important differences and greatly reduced computational demands.
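
    The construction rests on the recursive decomposability of the fast Fourier transform; the following minimal radix-2 recursion shows only that decomposition, not the tensor network itself.

    ```python
    # Minimal radix-2 FFT, shown only to illustrate the recursive decomposition
    # that the spectral tensor network is built on; it is not the tensor network.
    import cmath

    def fft(x):
        """Recursive decimation-in-time FFT; len(x) must be a power of two."""
        n = len(x)
        if n == 1:
            return list(x)
        even = fft(x[0::2])                   # recurse on even-indexed samples
        odd = fft(x[1::2])                    # recurse on odd-indexed samples
        out = [0j] * n
        for k in range(n // 2):
            twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
            out[k] = even[k] + twiddle
            out[k + n // 2] = even[k] - twiddle
        return out

    print([round(abs(v), 3) for v in fft([1, 1, 1, 1, 0, 0, 0, 0])])
    ```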

  1. Robust nonlinear variable selective control for networked systems

    NASA Astrophysics Data System (ADS)

    Rahmani, Behrooz

    2016-10-01

    This paper is concerned with the networked control of a class of uncertain nonlinear systems. In this way, Takagi-Sugeno (T-S) fuzzy modelling is used to extend the previously proposed variable selective control (VSC) methodology to nonlinear systems. This extension is based upon the decomposition of the nonlinear system to a set of fuzzy-blended locally linearised subsystems and further application of the VSC methodology to each subsystem. To increase the applicability of the T-S approach for uncertain nonlinear networked control systems, this study considers the asynchronous premise variables in the plant and the controller, and then introduces a robust stability analysis and control synthesis. The resulting optimal switching-fuzzy controller provides a minimum guaranteed cost on an H2 performance index. Simulation studies on three nonlinear benchmark problems demonstrate the effectiveness of the proposed method.

  2. Direct broadcast satellite receiver system with optical distribution network

    NASA Astrophysics Data System (ADS)

    Kemery, S. M.; Daryoush, A. S.; Herczfeld, P. R.

    1986-01-01

    With recent developments in fiber optic communications and optical distribution networks, short haul optical communications becomes an economical alternative to conventional cable TV systems. This paper presents a system design for a direct broadcast satellite receiver system with a fiber optic distribution network based on the reception of Ku-band signals from ANIK C2, a Canadian direct broadcast satellite. Such a system is proposed for the first time and can address small communities in remote areas. Theoretical power budget calculations predict that 37 subscribers can access 128 television channels using a 3 ft reflector dish antenna. To implement such a design, a number of components that are not commercially available are custom designed.

  3. Fourier transform for fermionic systems and the spectral tensor network.

    PubMed

    Ferris, Andrew J

    2014-07-01

    Leveraging the decomposability of the fast Fourier transform, I propose a new class of tensor network that is efficiently contractible and able to represent many-body systems with local entanglement that is greater than the area law. Translationally invariant systems of free fermions in arbitrary dimensions as well as 1D systems solved by the Jordan-Wigner transformation are shown to be exactly represented in this class. Further, it is proposed that these tensor networks be used as generic structures to variationally describe more complicated systems, such as interacting fermions. This class shares some similarities with the Evenbly-Vidal branching multiscale entanglement renormalization ansatz, but with some important differences and greatly reduced computational demands.

  4. Network-Oriented Radiation Monitoring System (NORMS)

    SciTech Connect

    Rahmat Aryaeinejad; David F. Spencer

    2007-10-01

    We have developed a multi-functional pocket radiation monitoring system capable of detecting and storing gamma ray and neutron data and then sending the data through a wireless connection to a remote central facility upon request. The device has programmable alarm trigger levels that can be modified for specific applications. The device could be used as a stand-alone device or in conjunction with an array to cover a small or large area. The data is stored with a date/time stamp. The device may be remotely configured. Data can be transferred and viewed on a PDA via direct connection or wirelessly. Functional/bench tests have been completed successfully. The device detects low-level neutron and gamma sources within a shielded container in a radiation field of 10 uR/hr above the ambient background level.

  5. On the Interplay between the Evolvability and Network Robustness in an Evolutionary Biological Network: A Systems Biology Approach

    PubMed Central

    Chen, Bor-Sen; Lin, Ying-Po

    2011-01-01

    In the evolutionary process, the random transmission and mutation of genes provide biological diversity for natural selection. In order to preserve functional phenotypes between generations, gene networks need to evolve robustly under the influence of random perturbations. Therefore, the robustness of the phenotype, in the evolutionary process, exerts a selection force on gene networks to keep network functions. However, gene networks need to adjust, by variations in genetic content, to generate phenotypes for new challenges in the network’s evolution, i.e., the evolvability. Hence, there should be some interplay between the evolvability and network robustness in evolutionary gene networks. In this study, the interplay between the evolvability and network robustness of a gene network and a biochemical network is discussed from a nonlinear stochastic system point of view. It was found that if the genetic robustness plus environmental robustness is less than the network robustness, the phenotype of the biological network is robust in evolution. The tradeoff between the genetic robustness and environmental robustness in evolution is discussed from the stochastic stability robustness and sensitivity of the nonlinear stochastic biological network, which may be relevant to the statistical tradeoff between bias and variance, the so-called bias/variance dilemma. Further, the tradeoff could be considered as an antagonistic pleiotropic action of a gene network and discussed from the systems biology perspective. PMID:22084563

  6. Physical Modeling of Scaled Water Distribution System Networks.

    SciTech Connect

    O'Hern, Timothy J.; Hammond, Glenn Edward; Orear, Leslie ,; van Bloemen Waanders, Bart G.; Paul Molina; Ross Johnson

    2005-10-01

    Threats to water distribution systems include release of contaminants and Denial of Service (DoS) attacks. A better understanding, and validated computational models, of the flow in water distribution systems would enable determination of sensor placement in real water distribution networks, allow source identification, and guide mitigation/minimization efforts. Validation data are needed to evaluate numerical models of network operations. Some data can be acquired in real-world tests, but these are limited by 1) unknown demand, 2) lack of repeatability, 3) too many sources of uncertainty (demand, friction factors, etc.), and 4) expense. In addition, real-world tests have limited numbers of network access points. A scale-model water distribution system was fabricated, and validation data were acquired over a range of flow (demand) conditions. Standard operating variables included system layout, demand at various nodes in the system, and pressure drop across various pipe sections. In addition, the location of contaminant (salt or dye) introduction was varied. Measurements of pressure, flowrate, and concentration at a large number of points, and overall visualization of dye transport through the flow network were completed. Scale-up issues that were incorporated in the experiment design include Reynolds number, pressure drop across nodes, and pipe friction and roughness. The scale was chosen to be 20:1, so the 10 inch main was modeled with a 0.5 inch pipe in the physical model. Controlled validation tracer tests were run to provide validation of flow and transport models, especially of the degree of mixing at pipe junctions. Results of the pipe mixing experiments showed large deviations from predicted behavior and these have a large impact on standard network operations models.

  7. Performance analysis of Integrated Communication and Control System networks

    NASA Technical Reports Server (NTRS)

    Halevi, Y.; Ray, A.

    1990-01-01

    This paper presents statistical analysis of delays in Integrated Communication and Control System (ICCS) networks that are based on asynchronous time-division multiplexing. The models are obtained in closed form for analyzing control systems with randomly varying delays. The results of this research are applicable to ICCS design for complex dynamical processes like advanced aircraft and spacecraft, autonomous manufacturing plants, and chemical and processing plants.

  8. The Performance of Parallel Disk Write Methods for Linux Multiprocessor Nodes

    SciTech Connect

    Benson, G D; Long, K; Pacheco, P

    2003-05-07

    Despite increasing attention paid to parallel I/O and the introduction of MPI-IO, there is limited, practical data to help guide a programmer in the choice of a good parallel write strategy in the absence of a parallel file system. In this study we experimentally evaluate several methods for implementing parallel computations that interleave a significant number of contiguous or strided writes to a local disk on Linux-based multiprocessor nodes. Using synthetic benchmark programs written with MPI and Pthreads, we have acquired detailed performance data for different application characteristics of programs running on dual processor nodes. In general, our results show that programs that perform a significant amount of I/O relative to pure computation benefit greatly from the use of threads, while programs that perform relatively little I/O obtain excellent results using only MPI. For a pure MPI approach, we have found that it is best to use two writing processes with mmap(). For Pthreads it is usually best to use write() for contiguous data and writev() for strided data. Codes that use mmap() tend to benefit from periodic syncs of the data to the disk, while codes that use write() or writev() tend to have better performance with few syncs. A straightforward use of ROMIO usually does not perform as well as these direct approaches for writing to the local disk.
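
    A simplified, single-process illustration of the three write paths compared in the study, using Python's thin wrappers over the Linux calls: write() for contiguous data, writev() for strided buffers, and an mmap()-backed write with an explicit flush. File names and sizes are arbitrary.

    ```python
    # Simplified single-process illustration of the three write paths compared in
    # the study: write() for contiguous data, writev() for strided buffers, and an
    # mmap()-backed write with a flush. writev() is POSIX/Linux only.
    import mmap
    import os

    block = b"x" * 4096
    strided = [b"a" * 512, b"b" * 512, b"c" * 512]   # non-contiguous buffers

    # write(): one contiguous buffer per call
    fd = os.open("contig.dat", os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)
    os.write(fd, block)
    os.fsync(fd)                                     # explicit sync to disk
    os.close(fd)

    # writev(): gather several strided buffers in a single system call
    fd = os.open("strided.dat", os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)
    os.writev(fd, strided)
    os.close(fd)

    # mmap(): write through a memory mapping, then flush (msync)
    fd = os.open("mapped.dat", os.O_CREAT | os.O_RDWR | os.O_TRUNC, 0o644)
    os.ftruncate(fd, len(block))
    with mmap.mmap(fd, len(block)) as mapped:
        mapped[:] = block
        mapped.flush()                               # mmap codes benefit from periodic syncs
    os.close(fd)
    ```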

  9. BioSMACK: a linux live CD for genome-wide association analyses.

    PubMed

    Hong, Chang Bum; Kim, Young Jin; Moon, Sanghoon; Shin, Young-Ah; Go, Min Jin; Kim, Dong-Joon; Lee, Jong-Young; Cho, Yoon Shin

    2012-01-01

    Recent advances in high-throughput genotyping technologies have enabled us to conduct a genome-wide association study (GWAS) on a large cohort. However, analyzing millions of single nucleotide polymorphisms (SNPs) is still a difficult task for researchers conducting a GWAS. Several difficulties, such as compatibility and dependency issues, are often encountered by researchers during the installation of analytical software. This is a huge obstacle to any research institute without computing facilities and specialists. Therefore, a proper research environment is urgently needed for researchers working on GWAS. We developed BioSMACK to provide a research environment for GWAS that requires no configuration and is easy to use. BioSMACK is based on the Ubuntu Live CD that offers a complete Linux-based operating system environment without installation. Moreover, we provide users with a GWAS manual consisting of a series of guidelines for GWAS and useful examples. BioSMACK is freely available at http://ksnp.cdc.go.kr/biosmack.

  10. Electrical network reliability and system blackout development simulations

    NASA Astrophysics Data System (ADS)

    Nepomnyashchiy, V. A.

    2015-12-01

    The main provisions of the author's model of electrical network reliability and system blackout development are stated. The model allows one to analytically determine the main technical and economic indicators of the reliability of electrical network operation, taking into account the dislocation of generating capacity and electric loads, operating conditions, and the dynamic and static stability of operation, while simultaneously calculating short-circuit currents. The model also considers open-phase modes during single-phase short circuits and allows one to choose the most efficient operating conditions. The calculations conclude with an estimate of the annual average undersupply of energy and the economic losses of customers due to power supply interruptions.

  11. Intelligent Wireless Sensor Networks for System Health Monitoring

    NASA Technical Reports Server (NTRS)

    Alena, Rick

    2011-01-01

    Wireless sensor networks (WSN) based on the IEEE 802.15.4 Personal Area Network (PAN) standard are finding increasing use in the home automation and emerging smart energy markets. The network and application layers, based on the ZigBee 2007 Standard, provide a convenient framework for component-based software that supports customer solutions from multiple vendors. WSNs provide the inherent fault tolerance required for aerospace applications. The Discovery and Systems Health Group at NASA Ames Research Center has been developing WSN technology for use aboard aircraft and spacecraft for System Health Monitoring of structures and life support systems using funding from the NASA Engineering and Safety Center and Exploration Technology Development and Demonstration Program. This technology provides key advantages for low-power, low-cost ancillary sensing systems particularly across pressure interfaces and in areas where it is difficult to run wires. Intelligence for sensor networks could be defined as the capability of forming dynamic sensor networks, allowing high-level application software to identify and address any sensor that joined the network without the use of any centralized database defining the sensors characteristics. The IEEE 1451 Standard defines methods for the management of intelligent sensor systems and the IEEE 1451.4 section defines Transducer Electronic Datasheets (TEDS), which contain key information regarding the sensor characteristics such as name, description, serial number, calibration information and user information such as location within a vehicle. By locating the TEDS information on the wireless sensor itself and enabling access to this information base from the application software, the application can identify the sensor unambiguously and interpret and present the sensor data stream without reference to any other information. The application software is able to read the status of each sensor module, responding in real-time to changes of

  12. Neural network-based finite horizon stochastic optimal control design for nonlinear networked control systems.

    PubMed

    Xu, Hao; Jagannathan, Sarangapani

    2015-03-01

    The stochastic optimal control of nonlinear networked control systems (NNCSs) using neuro-dynamic programming (NDP) over a finite time horizon is a challenging problem due to terminal constraints, system uncertainties, and unknown network imperfections, such as network-induced delays and packet losses. Since the traditional iteration or time-based infinite horizon NDP schemes are unsuitable for NNCS with terminal constraints, a novel time-based NDP scheme is developed to solve finite horizon optimal control of NNCS by mitigating the above-mentioned challenges. First, an online neural network (NN) identifier is introduced to approximate the control coefficient matrix that is subsequently utilized in conjunction with the critic and actor NNs to determine a time-based stochastic optimal control input over finite horizon in a forward-in-time and online manner. Eventually, Lyapunov theory is used to show that all closed-loop signals and NN weights are uniformly ultimately bounded with ultimate bounds being a function of initial conditions and final time. Moreover, the approximated control input converges close to optimal value within finite time. The simulation results are included to show the effectiveness of the proposed scheme. PMID:25720004

  13. Enabling information management systems in tactical network environments

    NASA Astrophysics Data System (ADS)

    Carvalho, Marco; Uszok, Andrzej; Suri, Niranjan; Bradshaw, Jeffrey M.; Ceccio, Philip J.; Hanna, James P.; Sinclair, Asher

    2009-05-01

    Net-Centric Information Management (IM) and sharing in tactical environments promises to revolutionize forward command and control capabilities by providing ubiquitous shared situational awareness to the warfighter. This vision can be realized by leveraging the tactical and Mobile Ad hoc Networks (MANET) which provide the underlying communications infrastructure, but, significant technical challenges remain. Enabling information management in these highly dynamic environments will require multiple support services and protocols which are affected by, and highly dependent on, the underlying capabilities and dynamics of the tactical network infrastructure. In this paper we investigate, discuss, and evaluate the effects of realistic tactical and mobile communications network environments on mission-critical information management systems. We motivate our discussion by introducing the Advanced Information Management System (AIMS) which is targeted for deployment in tactical sensor systems. We present some operational requirements for AIMS and highlight how critical IM support services such as discovery, transport, federation, and Quality of Service (QoS) management are necessary to meet these requirements. Our goal is to provide a qualitative analysis of the impact of underlying assumptions of availability and performance of some of the critical services supporting tactical information management. We will also propose and describe a number of technologies and capabilities that have been developed to address these challenges, providing alternative approaches for transport, service discovery, and federation services for tactical networks.

  14. Green pathways: Metabolic network analysis of plant systems.

    PubMed

    Dersch, Lisa Maria; Beckers, Veronique; Wittmann, Christoph

    2016-03-01

    Metabolic engineering of plants with enhanced crop yield and value-added compositional traits is particularly challenging as they probably exhibit the highest metabolic network complexity of all living organisms. Therefore, approaches of plant metabolic network analysis, which can provide systems-level understanding of plant physiology, appear valuable as guidance for plant metabolic engineers. Strongly supported by the sequencing of plant genomes, a number of different experimental and computational methods have emerged in recent years to study plant systems at various levels: from heterotrophic cell cultures to autotrophic entire plants. The present review presents a state-of-the-art toolbox for plant metabolic network analysis. Among the described approaches are different in silico modeling techniques, including flux balance analysis, elementary flux mode analysis and kinetic flux profiling, as well as different variants of experiments with plant systems which use radioactive and stable isotopes to determine in vivo plant metabolic fluxes. The fundamental principles of these techniques, the required data input and the obtained flux information are enriched with technical advice specific to plants. In addition, pioneering and high-impacting findings of plant metabolic network analysis highlight the potential of the field.
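
    One of the techniques named above, flux balance analysis, reduces to a linear program: maximize an objective flux subject to the steady-state balance S v = 0 and flux bounds. A toy three-reaction sketch with SciPy follows; the network is invented for illustration.

    ```python
    # Toy flux balance analysis: maximise a "biomass" flux subject to steady-state
    # mass balance S v = 0 and flux bounds. The three-reaction network is invented
    # for illustration; real plant models contain thousands of reactions.
    import numpy as np
    from scipy.optimize import linprog

    # Metabolites (rows): A, B.  Reactions (columns): v1 uptake->A, v2 A->B, v3 B->biomass
    S = np.array([
        [1, -1, 0],
        [0, 1, -1],
    ])
    bounds = [(0, 10), (0, None), (0, None)]          # uptake capped at 10 units

    # linprog minimises, so minimise -v3 to maximise the biomass flux v3
    result = linprog(c=[0, 0, -1], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
    print(result.x)                                   # optimal fluxes, here [10. 10. 10.]
    ```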

  15. Networked Microgrids for Self-healing Power Systems

    SciTech Connect

    Wang, Zhaoyu; Chen, Bokan; Wang, Jianhui; Chen, Chen

    2015-06-17

    This paper proposes a transformative architecture for the normal operation and self-healing of networked microgrids (MGs). MGs can support and interchange electricity with each other in the proposed infrastructure. The networked MGs are connected by a physical common bus and a designed two-layer cyber communication network. The lower layer is within each MG where the energy management system (EMS) schedules the MG operation; the upper layer links a number of EMSs for global optimization and communication. In the normal operation mode, the objective is to schedule dispatchable distributed generators (DGs), energy storage systems (ESs) and controllable loads to minimize the operation costs and maximize the supply adequacy of each MG. When a generation deficiency or fault happens in a MG, the model switches to the self-healing mode and the local generation capacities of other MGs can be used to support the on-emergency portion of the system. A consensus algorithm is used to distribute portions of the desired power support to each individual MG in a decentralized way. The allocated portion corresponds to each MG’s local power exchange target which is used by its EMS to perform the optimal schedule. The resultant aggregated power output of networked MGs will be used to provide the requested power support. Test cases demonstrate the effectiveness of the proposed methodology.
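
    A hedged sketch of the consensus step under simple assumptions: each MG repeatedly averages with its neighbours, so all MGs agree on the network-wide average spare capacity and can size their contributions proportionally. The topology, capacities, and requested support are made-up numbers.

    ```python
    # Hedged sketch of the consensus step: each MG exchanges values only with its
    # neighbours, yet all converge to the network-wide average spare capacity and
    # can size their contributions proportionally. All numbers are invented.
    spare = {"MG1": 4.0, "MG2": 1.0, "MG3": 3.0, "MG4": 2.0}     # spare capacity (MW)
    neighbours = {"MG1": ["MG2"], "MG2": ["MG1", "MG3"],
                  "MG3": ["MG2", "MG4"], "MG4": ["MG3"]}
    requested = 5.0                                              # support needed (MW)

    x = dict(spare)
    for _ in range(200):                                         # synchronous consensus updates
        x = {i: x[i] + 0.3 * sum(x[j] - x[i] for j in neighbours[i]) for i in x}

    total_estimate = len(x) * next(iter(x.values()))             # all MGs now agree on the average
    shares = {i: requested * spare[i] / total_estimate for i in spare}
    print(shares)            # each MG contributes in proportion to its spare capacity
    ```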

  16. Green pathways: Metabolic network analysis of plant systems.

    PubMed

    Dersch, Lisa Maria; Beckers, Veronique; Wittmann, Christoph

    2016-03-01

    Metabolic engineering of plants with enhanced crop yield and value-added compositional traits is particularly challenging as they probably exhibit the highest metabolic network complexity of all living organisms. Therefore, approaches of plant metabolic network analysis, which can provide systems-level understanding of plant physiology, appear valuable as guidance for plant metabolic engineers. Strongly supported by the sequencing of plant genomes, a number of different experimental and computational methods have emerged in recent years to study plant systems at various levels: from heterotrophic cell cultures to autotrophic entire plants. The present review presents a state-of-the-art toolbox for plant metabolic network analysis. Among the described approaches are different in silico modeling techniques, including flux balance analysis, elementary flux mode analysis and kinetic flux profiling, as well as different variants of experiments with plant systems which use radioactive and stable isotopes to determine in vivo plant metabolic fluxes. The fundamental principles of these techniques, the required data input and the obtained flux information are enriched with technical advice specific to plants. In addition, pioneering and high-impacting findings of plant metabolic network analysis highlight the potential of the field. PMID:26704307

  17. Implementation of medical monitor system based on networks

    NASA Astrophysics Data System (ADS)

    Yu, Hui; Cao, Yuzhen; Zhang, Lixin; Ding, Mingshi

    2006-11-01

    In this paper, the development trend of medical monitor systems is analyzed; portability and network functionality are becoming more and more popular among all kinds of medical monitor devices. The architecture of a networked medical monitor system solution is provided, and the design and implementation details of the medical monitor terminal, the monitor center software, the distributed medical database and two kinds of medical information terminals are discussed in particular. A Rabbit3000 system is used in the medical monitor terminal to implement security administration of data transfer over the network, the human-machine interface, power management and the DSP interface, while a TMS5402 DSP chip is used for signal analysis and data compression. The distributed medical database is designed for the hospital center according to the DICOM information model and the HL7 standard. A pocket medical information terminal based on an ARM9 embedded platform is also developed to interact with the center database over the network. Two kernels based on WinCE are customized and the corresponding terminal software is developed for nurses' routine care and doctors' auxiliary diagnosis. An invention patent for the monitor terminal has now been approved, and manufacturing and clinical test plans are scheduled. Patent applications have also been filed for the two medical information terminals.

  18. Opiate dependence induces network state shifts in the limbic system.

    PubMed

    Dejean, C; Boraud, T; Le Moine, C

    2013-11-01

    Among current theories of addiction, hedonic homeostasis dysregulation predicts that the brain reward systems, particularly the mesolimbic dopamine system, switch from a physiological state to a new "set point." In opiate addiction, evidence shows that the dopamine system's principal targets, the prefrontal cortex (PFC), nucleus accumbens (NAC) and basolateral amygdala complex (BLA), also adapt to repeated drug stimulation. Here we investigated the impact of chronic morphine on the dynamics of the network of these three interconnected structures. For that purpose we performed simultaneous electrophysiological recordings in freely-moving rats subcutaneously implanted with continuous-release morphine pellets. Chronic morphine produced a shift in the network state underpinned by changes in Delta and Gamma oscillations in the LFP of PFC, NAC and BLA, in correlation with behavioral changes. However, despite continuous stimulation by the drug, an apparent normalization of the network activity and state occurred after 2 days, indicating large scale adaptations. Blockade of μ opioid receptors was nonetheless sufficient to disrupt this acquired new stability in morphine-dependent animals. In line with the homeostatic dysregulation theory of addiction, our study provides original direct evidence that the PFC-NAC-BLA network of the dependent brain is characterized by a de novo balance for which the drug of abuse becomes the main contributor.

  19. A program for the Bayesian Neural Network in the ROOT framework

    NASA Astrophysics Data System (ADS)

    Zhong, Jiahang; Huang, Run-Sheng; Lee, Shih-Chang

    2011-12-01

    We present a Bayesian Neural Network algorithm implemented in the TMVA package (Hoecker et al., 2007 [1]), within the ROOT framework (Brun and Rademakers, 1997 [2]). Compared to the conventional use of a Neural Network as a discriminator, this new implementation has advantages as a non-parametric regression tool, particularly for fitting probabilities. It provides functionalities including cost function selection, complexity control and uncertainty estimation. An example of such an application in High Energy Physics is shown. The algorithm is available with ROOT releases later than 5.29. Program summary: Program title: TMVA-BNN Catalogue identifier: AEJX_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJX_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: BSD license No. of lines in distributed program, including test data, etc.: 5094 No. of bytes in distributed program, including test data, etc.: 1,320,987 Distribution format: tar.gz Programming language: C++ Computer: Any computer system or cluster with C++ compiler and UNIX-like operating system Operating system: Most UNIX/Linux systems. The application programs were thoroughly tested under Fedora and Scientific Linux CERN. Classification: 11.9 External routines: ROOT package version 5.29 or higher ( http://root.cern.ch) Nature of problem: Non-parametric fitting of multivariate distributions Solution method: An implementation of Neural Network following the Bayesian statistical interpretation. Uses Laplace approximation for the Bayesian marginalizations. Provides the functionalities of automatic complexity control and uncertainty estimation. Running time: Time consumption for the training depends substantially on the size of input sample, the NN topology, the number of training iterations, etc. For the example in this manuscript, about 7 min was used on a PC/Linux with 2.0 GHz processors.

  20. Automated neural network-based instrument validation system

    NASA Astrophysics Data System (ADS)

    Xu, Xiao

    2000-10-01

    In a complex control process, instrument calibration is periodically performed to maintain the instruments within the calibration range, which assures proper control and minimizes down time. Instruments are usually calibrated under out-of-service conditions using manual calibration methods, which may cause incorrect calibration or equipment damage. Continuous in-service calibration monitoring of sensors and instruments will reduce unnecessary instrument calibrations, give operators more confidence in instrument measurements, increase plant efficiency or product quality, and minimize the possibility of equipment damage during unnecessary manual calibrations. In this dissertation, an artificial neural network (ANN)-based instrument calibration verification system is designed to achieve the on-line monitoring and verification goal for scheduling maintenance. Since an ANN is a data-driven model, it can learn the relationships among signals without prior knowledge of the physical model or process, which is usually difficult to establish for the complex non-linear systems. Furthermore, the ANNs provide a noise-reduced estimate of the signal measurement. More importantly, since a neural network learns the relationships among signals, it can give an unfaulted estimate of a faulty signal based on information provided by other unfaulted signals; that is, provide a correct estimate of a faulty signal. This ANN-based instrument verification system is capable of detecting small degradations or drifts occurring in instrumentation, and preclude false control actions or system damage caused by instrument degradation. In this dissertation, an automated scheme of neural network construction is developed. Previously, the neural network structure design required extensive knowledge of neural networks. An automated design methodology was developed so that a network structure can be created without expert interaction. This validation system was designed to monitor process sensors plant

  1. Quantized stabilization of wireless networked control systems with packet losses.

    PubMed

    Qu, Feng-Lin; Hu, Bin; Guan, Zhi-Hong; Wu, Yong-Hong; He, Ding-Xin; Zheng, Ding-Fu

    2016-09-01

    This paper considers stabilization of discrete-time linear systems, where wireless networks exist for transmitting the sensor and controller information. Based on Markov jump systems, we show that the coarsest quantizer that stabilizes the WNCS is logarithmic in the sense of mean square quadratic stability and the stabilization of this system can be transformed into the robust stabilization of an equivalent uncertain system. Moreover, a method of optimal quantizer/controller design in terms of linear matrix inequality is presented. Finally, a numerical example is provided to illustrate the effectiveness of the developed theoretical results.
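
    An illustration of the logarithmic quantizer family referred to above: levels are ±u0·ρ^i and the relative quantization error is bounded by δ = (1 − ρ)/(1 + ρ). The values of ρ and u0 below are arbitrary, and this is not the paper's design procedure.

    ```python
    # Illustration of a logarithmic quantizer (not the paper's design procedure):
    # levels are +/- u0 * rho**i, and the relative error never exceeds
    # delta = (1 - rho) / (1 + rho). The values of rho and u0 are arbitrary.
    import math

    def log_quantize(v, rho=0.6, u0=1.0):
        """Map v onto the logarithmic level set {+/- u0 * rho**i}."""
        if v == 0.0:
            return 0.0
        delta = (1.0 - rho) / (1.0 + rho)
        i = math.floor(math.log(abs(v) * (1.0 - delta) / u0, rho))
        return math.copysign(u0 * rho ** i, v)

    delta = (1.0 - 0.6) / (1.0 + 0.6)
    for v in [0.03, 0.2, 1.7, -5.0]:
        q = log_quantize(v)
        print(f"{v:6.2f} -> {q:7.4f}  within bound: {abs(q - v) <= delta * abs(v)}")
    ```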

  2. Modeling a Nonlinear Liquid Level System by Cellular Neural Networks

    NASA Astrophysics Data System (ADS)

    Hernandez-Romero, Norberto; Seck-Tuoh-Mora, Juan Carlos; Gonzalez-Hernandez, Manuel; Medina-Marin, Joselito; Flores-Romero, Juan Jose

    This paper presents the analogue simulation of a nonlinear liquid level system composed of two tanks; the system is controlled using the methodology of exact linearization via state feedback by cellular neural networks (CNNs). The relevance of this manuscript is to show how a block diagram representing the analogue modeling and control of a nonlinear dynamical system can be implemented and regulated by CNNs, whose cells may contain numerical values or arithmetic and control operations. In this way, the dynamical system is modeled by a set of locally interacting elements without the need for a central supervisor.

  3. Using a CLIPS expert system to automatically manage TCP/IP networks and their components

    NASA Technical Reports Server (NTRS)

    Faul, Ben M.

    1991-01-01

    An expert system that can directly manage network components on a Transmission Control Protocol/Internet Protocol (TCP/IP) network is described. Previous expert systems for managing networks have focused on managing network faults after they occur. However, this proactive expert system can monitor and control network components in near real time. The ability to directly manage network elements from the C Language Integrated Production System (CLIPS) is accomplished by the integration of the Simple Network Management Protocol (SNMP) and an Abstract Syntax Notation (ASN) parser into the CLIPS artificial intelligence language.

  4. Development of the network architecture of the Canadian MSAT system

    NASA Technical Reports Server (NTRS)

    Davies, N. George; Shoamanesh, Alireza; Leung, Victor C. M.

    1988-01-01

    A description is given of the present concept for the Canadian Mobile Satellite (MSAT) System and the development of the network architecture which will accommodate the planned family of three categories of service: a mobile radio service (MRS), a mobile telephone service (MTS), and a mobile data service (MDS). The MSAT satellite will have cross-strapped L-band and Ku-band transponders to provide communications services between L-band mobile terminals and fixed base stations supporting dispatcher-type MRS, gateway stations supporting MTS interconnections to the public telephone network, data hub stations supporting the MDS, and the network control center. The currently perceived centralized architecture with demand assignment multiple access for the circuit switched MRS, MTS and permanently assigned channels for the packet switched MDS is discussed.

  5. Indoor infrared wireless communication system based on Ethernet network

    NASA Astrophysics Data System (ADS)

    Yang, Xin; Gong, Mali; Zhang, Kai; Zhang, Haitao; Yan, Ping; Jin, Wei; Jiang, Feng; Meng, Yu; Zou, Shanshan

    2002-12-01

    In this paper, we present an infrared wireless indoor communication system that is based on an Ethernet network. The bit rate of Ethernet is 10Mbps, but after Manchester coding, the actual bit rate in the physical layer is 20Mbps. In our design, the transmitter uses laser diodes (LDs). The transmitter consists of a differential input circuit and an LD driver circuit. The receiver consists of a coated truncated spherical concentrator whose field of view (FOV) is 40 degrees, a large-area Si PIN photo-detector followed by a transimpedance amplifier, a second-stage amplifier, a low-pass filter (LPF), a high-pass filter (HPF), a limiting amplifier and a differential output circuit. The network is constructed in a base-terminals configuration and two transit wavelengths are used for the base and terminals, respectively, to avoid collision. Experimental testing was conducted in a room of size 5m × 5m × 3m and the network worked well.

  6. Characterizing global evolutions of complex systems via intermediate network representations.

    PubMed

    Iwayama, Koji; Hirata, Yoshito; Takahashi, Kohske; Watanabe, Katsumi; Aihara, Kazuyuki; Suzuki, Hideyuki

    2012-01-01

    Recent developments in measurement techniques have enabled us to observe the time series of many components simultaneously. Thus, it is important to understand not only the dynamics of individual time series but also their interactions. Although there are many methods for analysing the interaction between two or more time series, there are very few methods that describe global changes of the interactions over time. Here, we propose an approach to visualise time evolution for the global changes of the interactions in complex systems. This approach consists of two steps. In the first step, we construct a meta-time series of networks. In the second step, we analyse and visualise this meta-time series by using distance and recurrence plots. Our two-step approach involving intermediate network representations elucidates the half-a-day periodicity of foreign exchange markets and a singular functional network in the brain related to perceptual alternations. PMID:22639731
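
    A sketch of the two-step procedure on synthetic data: a meta-time series of adjacency matrices is compared pairwise, and thresholding the resulting distance matrix gives a recurrence plot of the networks' global evolution. The network generator and threshold are illustrative.

    ```python
    # Sketch of the two-step procedure on synthetic data: a meta-time series of
    # adjacency matrices is compared pairwise, and thresholding the distance
    # matrix yields a recurrence plot of the networks' global evolution.
    import numpy as np

    rng = np.random.default_rng(3)
    T, n = 60, 10
    base_a = (rng.random((n, n)) < 0.2).astype(float)      # two recurring connection patterns
    base_b = (rng.random((n, n)) < 0.2).astype(float)

    nets = []
    for t in range(T):                                     # step 1: a network per time window
        base = base_a if (t // 10) % 2 == 0 else base_b
        flips = rng.random((n, n)) < 0.05                  # small random perturbation
        nets.append(np.where(flips, 1.0 - base, base))

    # step 2: pairwise distances between networks, then a recurrence matrix
    flat = np.array([a.ravel() for a in nets])
    dist = np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=2)
    recurrence = (dist < np.percentile(dist, 20)).astype(int)
    print(recurrence[:5, :5])      # blocks of 1s mark recurring global network states
    ```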

  7. Neural networks for local structure detection in polymorphic systems.

    PubMed

    Geiger, Philipp; Dellago, Christoph

    2013-10-28

    The accurate identification and classification of local ordered and disordered structures is an important task in atomistic computer simulations. Here, we demonstrate that properly trained artificial neural networks can be used for this purpose. Based on a neural network approach recently developed for the calculation of energies and forces, the proposed method recognizes local atomic arrangements from a set of symmetry functions that characterize the environment around a given atom. The algorithm is simple and flexible and it does not rely on the definition of a reference frame. Using the Lennard-Jones system as well as liquid water and ice as illustrative examples, we show that the neural networks developed here detect amorphous and crystalline structures with high accuracy even in the case of complex atomic arrangements, for which conventional structure detection approaches are unreliable.
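
    One radial symmetry function of the kind used as network input, sketched with arbitrary parameters and made-up neighbour positions: a Gaussian of neighbour distances damped by a smooth cutoff.

    ```python
    # One radial symmetry function of the kind used as network input: a Gaussian
    # of neighbour distances damped by a smooth cutoff. The parameters eta, r_s,
    # r_c and the neighbour positions are arbitrary example values.
    import numpy as np

    def cutoff(r, r_c):
        """Smoothly switch off contributions beyond the cutoff radius r_c."""
        return np.where(r < r_c, 0.5 * (np.cos(np.pi * r / r_c) + 1.0), 0.0)

    def radial_symmetry(center, neighbours, eta=0.5, r_s=1.0, r_c=3.0):
        r = np.linalg.norm(neighbours - center, axis=1)
        return float(np.sum(np.exp(-eta * (r - r_s) ** 2) * cutoff(r, r_c)))

    center = np.zeros(3)
    neighbours = np.array([[1.0, 0.0, 0.0], [0.0, 1.1, 0.0], [2.5, 0.0, 0.5]])
    print(radial_symmetry(center, neighbours))    # one entry of the atom's descriptor vector
    ```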

  8. Discovery of Chemical Toxicity via Biological Networks and Systems Biology

    SciTech Connect

    Perkins, Edward; Habib, Tanwir; Guan, Xin; Escalon, Barbara; Falciani, Francesco; Chipman, J.K.; Antczak, Philipp; Edwards, Stephen; Taylor, Ronald C.; Vulpe, Chris; Loguinov, Alexandre; Van Aggelen, Graham; Villeneuve, Daniel L.; Garcia-Reyero, Natalia

    2010-09-30

    Both soldiers and animals are exposed to many chemicals as the result of military activities. Tools are needed to understand the hazards and risks that chemicals and new materials pose to soldiers and the environment. We have investigated the potential of global gene regulatory networks in understanding the impact of chemicals on reproduction. We characterized the effects of chemicals on ovaries of the model animal system, the fathead minnow (Pimephales promelas), connecting chemical impacts on gene expression to circulating blood levels of the hormones testosterone and estradiol, in addition to the egg yolk protein vitellogenin. We describe the application of reverse engineering complex interaction networks from high dimensional gene expression data to characterize chemicals that disrupt the hypothalamus-pituitary-gonadal endocrine axis that governs reproduction in fathead minnows. The construction of global gene regulatory networks provides deep insights into how drugs and chemicals affect key organs and biological pathways.

  9. Characterizing global evolutions of complex systems via intermediate network representations.

    PubMed

    Iwayama, Koji; Hirata, Yoshito; Takahashi, Kohske; Watanabe, Katsumi; Aihara, Kazuyuki; Suzuki, Hideyuki

    2012-01-01

    Recent developments in measurement techniques have enabled us to observe the time series of many components simultaneously. Thus, it is important to understand not only the dynamics of individual time series but also their interactions. Although there are many methods for analysing the interaction between two or more time series, there are very few methods that describe global changes of the interactions over time. Here, we propose an approach to visualise time evolution for the global changes of the interactions in complex systems. This approach consists of two steps. In the first step, we construct a meta-time series of networks. In the second step, we analyse and visualise this meta-time series by using distance and recurrence plots. Our two-step approach involving intermediate network representations elucidates the half-a-day periodicity of foreign exchange markets and a singular functional network in the brain related to perceptual alternations.

  10. Artificial synapse network on inorganic proton conductor for neuromorphic systems

    NASA Astrophysics Data System (ADS)

    Zhu, Li Qiang; Wan, Chang Jin; Guo, Li Qiang; Shi, Yi; Wan, Qing

    2014-01-01

    The basic units in our brain are neurons, and each neuron has more than 1,000 synapse connections. The synapse is the basic structure for information transfer in an ever-changing manner, and short-term plasticity allows synapses to perform critical computational functions in neural circuits. Therefore, the major challenge for the hardware implementation of neuromorphic computation is to develop artificial synapse networks. Here, in-plane lateral-coupled oxide-based artificial synapse networks coupled by proton neurotransmitters are self-assembled on glass substrates at room temperature. A strong lateral modulation is observed due to the proton-related electrical-double-layer effect. Short-term plasticity behaviours, including paired-pulse facilitation, dynamic filtering and spatiotemporally correlated signal processing, are mimicked. The laterally coupled oxide-based protonic/electronic hybrid artificial synapse network proposed here is interesting for building future neuromorphic systems.

  11. Interference Mitigation for Cyber-Physical Wireless Body Area Network System Using Social Networks.

    PubMed

    Zhang, Zhaoyang; Wang, Honggang; Wang, Chonggang; Fang, Hua

    2013-06-01

    Wireless body area networks (WBANs) are cyber-physical systems (CPS) that have emerged as a key technology to provide real-time health monitoring and ubiquitous healthcare services. WBANs may operate in dense environments, such as a hospital, and experience high mutual communication interference in many application scenarios. The excessive interference significantly degrades network performance, depleting the energy of WBAN nodes more quickly, and may even jeopardize people's lives because of unreliable healthcare data collection caused by the interference. Therefore, it is critical to mitigate the interference among WBANs to increase the reliability of the WBAN system while minimizing its power consumption. Many existing approaches can deal with communication interference mitigation in general wireless networks but are not suitable for WBANs because they ignore the social nature of WBANs. Unlike previous research, we for the first time propose a power-game-based approach to mitigate communication interference among WBANs based on people's social interaction information. Our major contributions are: (1) modeling the inter-WBAN interference and determining the distance distribution of the interference through both theoretical analysis and Monte Carlo simulations; (2) developing social interaction detection and prediction algorithms for people carrying WBANs; and (3) developing a power control game based on the social interaction information to maximize the system's utility while minimizing the energy consumption of the WBAN system. Extensive simulation results show the effectiveness of the power control game for inter-WBAN interference mitigation using social interaction information. Our research opens a new research vista of WBANs using social networks. PMID:25436180
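
    A hedged sketch of the power-game idea, not the paper's exact formulation: each WBAN best-responds to the interference it currently sees, trading a rate-like utility against a transmit-power cost, with a weight that could encode social-interaction information. All gains, costs, and weights are invented.

    ```python
    # Hedged sketch of the power-game idea, not the paper's exact formulation:
    # each WBAN best-responds to the interference it currently sees, maximising
    # w * log(1 + SINR) - cost * power; the weight w could encode social-
    # interaction information. All gains, costs and weights are invented numbers.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 4
    gain = rng.uniform(0.01, 0.1, (n, n))           # gain[i, j]: WBAN j -> receiver i
    np.fill_diagonal(gain, 1.0)
    noise, p_max, cost = 0.05, 1.0, 2.0
    weight = np.array([1.0, 1.0, 0.6, 0.6])         # e.g. reduced weight for co-located users

    p = np.full(n, 0.5)
    for _ in range(50):                             # best-response iterations
        for i in range(n):
            interference = noise + sum(gain[i, j] * p[j] for j in range(n) if j != i)
            best = weight[i] / cost - interference / gain[i, i]
            p[i] = min(max(best, 0.0), p_max)       # project onto feasible transmit powers
    print(np.round(p, 3))                           # equilibrium transmit powers
    ```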

  12. Security framework for networked storage system based on artificial immune system

    NASA Astrophysics Data System (ADS)

    Huang, Jianzhong; Xie, Changsheng; Zhang, Chengfeng; Zhan, Ling

    2007-11-01

    This paper proposes a theoretical framework for networked storage systems addressing storage security. The immune system is an adaptive learning system which can recognize, classify and eliminate 'non-self' such as foreign pathogens. We therefore introduce artificial immune techniques to storage security research and propose a full theoretical framework for a storage security system. Under this framework, it is possible to carry out quantitative evaluation of the storage security system using the modeling language of an artificial immune system (AIS), and the evaluation can inform security considerations for the deployment of networked storage systems. Meanwhile, it is possible to obtain active defense techniques suitable for networked storage systems by exploring the principles of AIS and to achieve a highly secure storage system with immune characteristics.

  13. Robust nonlinear system identification using neural-network models.

    PubMed

    Lu, S; Basar, T

    1998-01-01

    We study the problem of identification for nonlinear systems in the presence of unknown driving noise, using both feedforward multilayer neural network and radial basis function network models. Our objective is to resolve the difficulty associated with the persistency of excitation condition inherent to the standard schemes in the neural identification literature. This difficulty is circumvented here by a novel formulation and by using a new class of identification algorithms recently obtained by Didinsky et al. We show how these algorithms can be exploited to successfully identify the nonlinearity in the system using neural-network models. By embedding the original problem in one with noise-perturbed state measurements, we present a class of identifiers (under L1 and L2 cost criteria) which secure a good approximant for the system nonlinearity provided that some global optimization technique is used. In this respect, many available learning algorithms in the current neural-network literature, e.g., the backpropagation scheme and the genetic algorithms-based scheme, with slight modifications, can ensure the identification of the system nonlinearity. Subsequently, we address the same problem under a third, worst case L(infinity) criterion for an RBF modeling. We present a neural-network version of an H(infinity)-based identification algorithm from Didinsky et al and show how, along with an appropriate choice of control input to enhance excitation, under both full-state-derivative information (FSDI) and noise-perturbed full-state-information (NPFSI), it leads to satisfaction of a relevant persistency of excitation condition, and thereby to robust identification of the nonlinearity. Results from several simulation studies have been included to demonstrate the effectiveness of these algorithms.

  14. Performance and scalability evaluation of "Big Memory" on Blue Gene Linux.

    SciTech Connect

    Yoshii, K.; Iskra, K.; Naik, H.; Beckman, P.; Broekema, P. C.

    2011-05-01

    We address memory performance issues observed in Blue Gene Linux and discuss the design and implementation of 'Big Memory' - an alternative, transparent memory space introduced to eliminate the memory performance issues. We evaluate the performance of Big Memory using custom memory benchmarks, NAS Parallel Benchmarks, and the Parallel Ocean Program, at a scale of up to 4,096 nodes. We find that Big Memory successfully resolves the performance issues normally encountered in Blue Gene Linux. For the ocean simulation program, we even find that Linux with Big Memory provides better scalability than does the lightweight compute node kernel designed solely for high-performance applications. Originally intended exclusively for compute node tasks, our new memory subsystem dramatically improves the performance of certain I/O node applications as well. We demonstrate this performance using the central processor of the LOw Frequency ARray radio telescope as an example.

  15. Scaling the Earth System Grid to 100Gbps Networks

    SciTech Connect

    Balman, Mehmet; Sim, Alex

    2012-03-02

    The SC11 demonstration, titled Scaling the Earth System Grid to 100Gbps Networks, showed the ability to use underlying infrastructure for the movement of climate data over 100Gbps network. Climate change research is one of the critical data intensive sciences, and the amount of data is continuously growing. Climate simulation data is geographically distributed over the world, and it needs to be accessed from many sources for fast and efficient analysis and inter-comparison of simulations. We used a 100Gbps link connecting National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory (LBNL), Argonne National Laboratory (ANL) and Oak Ridge National Laboratory (ORNL). In the demo, the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4) phase 3 of the Coupled Model Intercomparison Project (CMIP-3) dataset was staged into the memory of computing nodes at ANL and ORNL from NERSC over the 100Gbps network for analysis and visualization. In general, climate simulation data consists of relatively small and large files with irregular file size distribution in each dataset. In this demo, we addressed challenges on data management in terms of high bandwidth networks, usability of existing protocols and middleware tools, and how applications can adapt and benefit from next generation networks.

  16. A graph-based network-vulnerability analysis system

    SciTech Connect

    Swiler, L.P.; Phillips, C.; Gaylor, T.

    1998-01-01

    This report presents a graph-based approach to network vulnerability analysis. The method is flexible, allowing analysis of attacks from both outside and inside the network. It can analyze risks to a specific network asset, or examine the universe of possible consequences following a successful attack. The analysis system requires as input a database of common attacks, broken into atomic steps, specific network configuration and topology information, and an attacker profile. The attack information is matched with the network configuration information and an attacker profile to create a superset attack graph. Nodes identify a stage of attack, for example the class of machines the attacker has accessed and the user privilege level he or she has compromised. The arcs in the attack graph represent attacks or stages of attacks. By assigning probabilities of success on the arcs or costs representing level-of-effort for the attacker, various graph algorithms such as shortest-path algorithms can identify the attack paths with the highest probability of success.
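
    The arc-weighting idea in the last sentence can be made concrete with a small sketch: if each arc carries a success probability, taking the negative logarithm of each probability turns the most-probable attack path into a shortest-path computation. The graph, stage names and probabilities below are purely hypothetical.

        # Most-probable attack path via Dijkstra on -log(probability) arc weights.
        import heapq
        import math

        # Hypothetical superset attack graph: arcs = (stage, next_stage, success_prob).
        arcs = [
            ("internet", "dmz_web", 0.6),
            ("internet", "vpn", 0.2),
            ("dmz_web", "app_server", 0.5),
            ("vpn", "app_server", 0.9),
            ("app_server", "db_admin", 0.4),
            ("dmz_web", "db_admin", 0.1),
        ]

        graph = {}
        for u, v, p in arcs:
            graph.setdefault(u, []).append((v, -math.log(p)))

        def most_probable_path(source, target):
            dist = {source: 0.0}
            prev = {}
            heap = [(0.0, source)]
            while heap:
                d, u = heapq.heappop(heap)
                if u == target:
                    break
                if d > dist.get(u, math.inf):
                    continue
                for v, w in graph.get(u, []):
                    nd = d + w
                    if nd < dist.get(v, math.inf):
                        dist[v], prev[v] = nd, u
                        heapq.heappush(heap, (nd, v))
            path, node = [target], target
            while node != source:
                node = prev[node]
                path.append(node)
            return path[::-1], math.exp(-dist[target])

        path, prob = most_probable_path("internet", "db_admin")
        print(path, "success probability ~", round(prob, 3))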

  17. A graph-based network-vulnerability analysis system

    SciTech Connect

    Swiler, L.P.; Phillips, C.; Gaylor, T.

    1998-05-03

    This paper presents a graph based approach to network vulnerability analysis. The method is flexible, allowing analysis of attacks from both outside and inside the network. It can analyze risks to a specific network asset, or examine the universe of possible consequences following a successful attack. The analysis system requires as input a database of common attacks, broken into atomic steps, specific network configuration and topology information, and an attacker profile. The attack information is matched with the network configuration information and an attacker profile to create a superset attack graph. Nodes identify a stage of attack, for example the class of machines the attacker has accessed and the user privilege level he or she has compromised. The arcs in the attack graph represent attacks or stages of attacks. By assigning probabilities of success on the arcs or costs representing level of effort for the attacker, various graph algorithms such as shortest path algorithms can identify the attack paths with the highest probability of success.

  18. A Java-based teleradiology system to provide services over a heterogeneous network.

    PubMed

    Elkateeb, A; Richardson, P; Kawaja, A; Rahme, P

    2001-01-01

    This paper describes a teleradiology system developed to provide services over a heterogeneous network environment. The system can transfer radiology images and provide real-time consultations and diagnostics over the telephone, Integrated Services Digital Network (ISDN), as well as a generic hospital network. The network incorporates an Ethernet Local Area Network (LAN), ATM switches, and an IP router. The Java language is used in developing this system. This teleradiology system is flexible, effective, and provides high performance for end users. The system has been tested over all of the above networks, and the results have shown that it is robust and efficient. PMID:11564359

  19. Application of local area networks to accelerator control systems at the Stanford Linear Accelerator

    SciTech Connect

    Fox, J.D.; Linstadt, E.; Melen, R.

    1983-03-01

    The history and current status of SLAC's SDLC networks for distributed accelerator control systems are discussed. These local area networks have been used for instrumentation and control of the linear accelerator. Network topologies, protocols, physical links, and logical interconnections are discussed for specific applications in distributed data acquisition and control system, computer networks and accelerator operations.

  20. Network Randomization and Dynamic Defense for Critical Infrastructure Systems

    SciTech Connect

    Chavez, Adrian R.; Martin, Mitchell Tyler; Hamlet, Jason; Stout, William M.S.; Lee, Erik

    2015-04-01

    Critical Infrastructure control systems continue to foster predictable communication paths, static configurations, and unpatched systems that allow easy access to our nation's most critical assets. This makes them attractive targets for cyber intrusion. We seek to address these attack vectors by automatically randomizing network settings, randomizing applications on the end devices themselves, and dynamically defending these systems against active attacks. Applying these protective measures will convert control systems into moving targets that proactively defend themselves against attack. Sandia National Laboratories has led this effort by gathering operational and technical requirements from Tennessee Valley Authority (TVA) and performing research and development to create a proof-of-concept solution. Our proof-of-concept has been tested in a laboratory environment with over 300 nodes. The vision of this project is to enhance control system security by converting existing control systems into moving targets and building these security measures into future systems while meeting the unique constraints that control systems face.

  1. Entropic networks in colloidal, polymeric and amphiphilic systems

    NASA Astrophysics Data System (ADS)

    Zilman, A.; Tlusty, T.; Safran, S. A.

    2003-01-01

    Self-assembly in soft-matter systems often results in the formation of locally cylindrical or chain-like structures. We review the theory of these systems whose large-scale structure and properties depend on whether the chains are finite, with end-caps or join to form junctions that result in networks. Physical examples discussed here include physical gels, wormlike micelles, dipolar fluids and microemulsions. In all these cases, the competition between end-caps and junctions results in an entropic phase separation into junction-rich and junction-poor phases, as recently observed by electron microscopy and seen in computer simulations. A simple model that accounts for these phenomena is reviewed. Extensions of these ideas can be applied to treat network formation and phase separation in a system of telechelic (hydrophobically tipped, hydrophilic) polymers and oil-in-water microemulsions, as observed in recent experiments.

  2. The structure of network control system for LAMOST

    NASA Astrophysics Data System (ADS)

    Xu, Ling-Zhe; Xu, Xin-Qi

    2006-09-01

    The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) is a large scientific and engineering project in China. On its completion it will become the astronomical survey telescope with the largest field of view and the highest observing efficiency among telescopes of 4-m aperture or larger in the world. The telescope will be able to acquire spectra from 4000 stars simultaneously. All these performance features have brought about a tremendous challenge for the control system design. This paper presents the network structure for the control system under mass-data and multitask conditions, focusing on the strategy for conducting subsystem control, environmental monitoring, time service, and wireless remote monitoring and control. A number of IT technologies have been utilized in the network system, such as a real-time database, GPS time ticking, and GSM communication.

  3. The wireless networking system of Earthquake precursor mobile field observation

    NASA Astrophysics Data System (ADS)

    Wang, C.; Teng, Y.; Wang, X.; Fan, X.; Wang, X.

    2012-12-01

    The mobile field observation network can reliably record and transmit large amounts of data in real time and strengthen physical signal observations in specific regions and specific periods, improving monitoring capacity and anomaly tracking capability. Because current earthquake precursor observation measuring points are numerous and scattered, the networking technology is based on the McWILL broadband wireless access system: through the connection between field equipment, the wireless access system, the broadband wireless access system and the precursor mobile observation management center system, the communication system reliably transmits large amounts of data in real time from the measuring points to the monitoring center, thereby implementing remote instrument monitoring and data transmission. At present, the earthquake precursor mobile field observation network technology has been applied to fluxgate magnetometer array geomagnetic observations in Tianzhu, Xichang and Xinjiang; it allows real-time monitoring of the working status of observational instruments deployed over large areas during the last two or three years of large-scale field operation. It can therefore obtain geomagnetic field data for locally refined regions and provide high-quality observational data for impending-earthquake tracking and forecasting. Although wireless networking technology is well suited to mobile field observation because of its simple and flexible networking, it also suffers packet loss when transmitting large amounts of observational data, owing to the relatively weak wireless signal and narrow bandwidth. For high-sampling-rate instruments, this project uses data compression and effectively solves the problem of data transmission packet loss; control commands, status data and observational data are transmitted with different priorities and means, which keep the packet loss rate within

  4. Interference Mitigation for Cyber-Physical Wireless Body Area Network System Using Social Networks

    PubMed Central

    Zhang, Zhaoyang; Wang, Honggang; Wang, Chonggang; Fang, Hua

    2014-01-01

    Wireless body area networks (WBANs) are cyber-physical systems (CPS) that have emerged as a key technology to provide real-time health monitoring and ubiquitous healthcare services. WBANs may operate in dense environments such as hospitals, which leads to high mutual communication interference in many application scenarios. The excessive interference significantly degrades network performance, depletes the energy of WBAN nodes more quickly, and can even jeopardize people's lives through unreliable (interference-corrupted) healthcare data collection. Therefore, it is critical to mitigate the interference among WBANs to increase the reliability of the WBAN system while minimizing the system power consumption. Many existing approaches can deal with communication interference mitigation in general wireless networks but are not suitable for WBANs because they ignore the social nature of WBANs. Unlike previous research, we for the first time propose a power-game-based approach to mitigate communication interference among WBANs using information about people's social interactions. Our major contributions are: (1) modeling the inter-WBAN interference and determining the distance distribution of the interference through both theoretical analysis and Monte Carlo simulations; (2) developing social interaction detection and prediction algorithms for people carrying WBANs; and (3) developing a power control game based on the social interaction information that maximizes the system's utility while minimizing the energy consumption of the WBAN system. Extensive simulation results show the effectiveness of the power control game for inter-WBAN interference mitigation using social interaction information. Our research opens a new research vista of WBANs using social networks. PMID:25436180
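
    As a simplified stand-in for the power control game described (which additionally uses social interaction information), the sketch below runs a classical distributed SINR-target power control iteration; the gain matrix, noise level and SINR target are invented.

        # Distributed SINR-target power control (Foschini-Miljanic style iteration),
        # used here only as a simplified stand-in for the game-theoretic controller
        # in the abstract; gains, noise and the SINR target below are made up.
        import numpy as np

        G = np.array([[1.0, 0.08, 0.05],     # G[i, j]: channel gain from WBAN j's
                      [0.07, 1.0, 0.09],     # transmitter to WBAN i's receiver
                      [0.04, 0.06, 1.0]])
        noise = 1e-3
        gamma_target = 5.0                   # required SINR per WBAN

        p = np.full(3, 0.01)                 # initial transmit powers
        for _ in range(50):
            interference = G @ p - np.diag(G) * p + noise
            p = gamma_target * interference / np.diag(G)   # best-response update

        sinr = np.diag(G) * p / (G @ p - np.diag(G) * p + noise)
        print("powers:", p.round(4), "SINR:", sinr.round(2))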

  5. Nonlinear signal processing using neural networks: Prediction and system modelling

    SciTech Connect

    Lapedes, A.; Farber, R.

    1987-06-01

    The backpropagation learning algorithm for neural networks is developed into a formalism for nonlinear signal processing. We illustrate the method by selecting two common topics in signal processing, prediction and system modelling, and show that nonlinear applications can be handled extremely well by using neural networks. The formalism is a natural, nonlinear extension of the linear Least Mean Squares algorithm commonly used in adaptive signal processing. Simulations are presented that document the additional performance achieved by using nonlinear neural networks. First, we demonstrate that the formalism may be used to predict points in a highly chaotic time series with orders of magnitude increase in accuracy over conventional methods including the Linear Predictive Method and the Gabor-Volterra-Weiner Polynomial Method. Deterministic chaos is thought to be involved in many physical situations including the onset of turbulence in fluids, chemical reactions and plasma physics. Secondly, we demonstrate the use of the formalism in nonlinear system modelling by providing a graphic example in which it is clear that the neural network has accurately modelled the nonlinear transfer function. It is interesting to note that the formalism provides explicit, analytic, global, approximations to the nonlinear maps underlying the various time series. Furthermore, the neural net seems to be extremely parsimonious in its requirements for data points from the time series. We show that the neural net is able to perform well because it globally approximates the relevant maps by performing a kind of generalized mode decomposition of the maps. 24 refs., 13 figs.
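
    In the same spirit as the prediction experiments, the sketch below trains a tiny one-hidden-layer network by backpropagation to predict the next value of a chaotic logistic-map series; the network size, learning rate and data are illustrative and unrelated to the original report.

        # One-hidden-layer network trained by backpropagation to predict x[t+1]
        # from x[t] for the chaotic logistic map (a toy stand-in for the time-series
        # experiments in the abstract; sizes and learning rate are arbitrary).
        import numpy as np

        rng = np.random.default_rng(1)

        # Chaotic logistic-map series x[t+1] = 4 x[t] (1 - x[t]).
        x = np.empty(600)
        x[0] = 0.31
        for t in range(599):
            x[t + 1] = 4.0 * x[t] * (1.0 - x[t])
        X, Y = x[:-1, None], x[1:, None]                 # inputs and targets

        H = 16                                           # hidden units
        W1, b1 = 0.5 * rng.standard_normal((1, H)), np.zeros(H)
        W2, b2 = 0.5 * rng.standard_normal((H, 1)), np.zeros(1)
        lr = 0.05

        for _ in range(2000):
            h = np.tanh(X @ W1 + b1)                     # forward pass
            pred = h @ W2 + b2
            err = pred - Y
            # backward pass (mean-squared-error gradients)
            gW2 = h.T @ err / len(X); gb2 = err.mean(0)
            dh = (err @ W2.T) * (1.0 - h ** 2)
            gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
            W1 -= lr * gW1; b1 -= lr * gb1
            W2 -= lr * gW2; b2 -= lr * gb2

        print("final MSE:", float((err ** 2).mean()))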

  6. A Flexible Behavioral Learning System with Modular Neural Networks

    NASA Astrophysics Data System (ADS)

    Takeuchi, Johane; Shouno, Osamu; Tsujino, Hiroshi

    Future robots/agents will perform situated behaviors for each user. Flexible behavioral learning is required for coping with diverse and unexpected users' situations. Unexpected situations are usually not tractable for machine learning systems that are designed for pre-defined problems. In order to realize such a flexible learning system, we were trying to create a learning model that can function in several different kinds of state transitions without specific adjustments for each transition as a first step. We constructed a modular neural network model based on reinforcement learning. We expected that combining a modular architecture with neural networks could accelerate the learning speed of neural networks. The inputs of our neural network model always include not only observed states but also memory information for any transition. In pure Markov decision processes, memory information is not necessary, rather it can lead to lower performance. On the other hand, partially observable conditions require memory information to select proper actions. We demonstrated that the new learning model could actually learn those multiple kinds of state transitions with the same architectures and parameters, and without pre-designed models of environments. This paper describes the performances of constructed models using probabilistically fluctuated Markov decision processes including partially observable conditions. In the test transitions, the observed state probabilistically fluctuated. The new learning model could function in those complex transitions. In addition, the learning speeds of our model are comparable to a reinforcement learning algorithm implemented with a pre-defined and optimized table-representation of states.

  7. End-System Network Interface Controller for 100 Gb/s Wide Area Networks: Final Report

    SciTech Connect

    Wen, Jesse

    2013-08-30

    In recent years, network bandwidth requirements have scaled multiple-fold, pushing the need for the development of data exchange mechanisms at 100 Gb/s and beyond. High performance computing, climate modeling, large-scale storage, and collaborative scientific research are examples of applications that can greatly benefit by leveraging high bandwidth capabilities on the order of 100 Gb/s. Such requirements and advances in IEEE Ethernet standards, Optical Transport Unit 4 (OTU4), and host-system interconnects demand a network infrastructure supporting throughput rates on the order of 100 Gb/s with a single wavelength. To address this demand, Acadia Optronics, in collaboration with the University of New Mexico, proposed and developed an end-system Network Interface Controller (NIC) for 100Gbps WANs. Acadia's 100G NIC employs an FPGA-based system with a high-performance processor interconnect (PCIe 3.0) and a high-capacity optical transmission link (CXP) to provide data transmission at the rate of 100 Gbps.

  8. Design of an adaptive neural network based power system stabilizer.

    PubMed

    Liu, Wenxin; Venayagamoorthy, Ganesh K; Wunsch, Donald C

    2003-01-01

    Power system stabilizers (PSS) are used to generate supplementary control signals for the excitation system in order to damp the low frequency power system oscillations. To overcome the drawbacks of conventional PSS (CPSS), numerous techniques have been proposed in the literature. Based on the analysis of existing techniques, this paper presents an indirect adaptive neural network based power system stabilizer (IDNC) design. The proposed IDNC consists of a neuro-controller, which is used to generate a supplementary control signal to the excitation system, and a neuro-identifier, which is used to model the dynamics of the power system and to adapt the neuro-controller parameters. The proposed method has the features of a simple structure, adaptivity and fast response. The proposed IDNC is evaluated on a single machine infinite bus power system under different operating conditions and disturbances to demonstrate its effectiveness and robustness. PMID:12850048

  9. A secure network access system for mobile IPv6

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Yuan, Man; He, Rui; Jiang, Luliang; Ma, Jian; Qian, Hualin

    2004-03-01

    With the fast development of the Internet and of wireless and mobile communication technology, the Mobile Internet age is approaching. For those providing Mobile Internet services, especially from the viewpoint of an ISP (Internet Service Provider), the current mobile IP protocol is insufficient. Since the Mobile IPv6 protocol will be popular in the near future, how to provide a secure mobile IPv6 service is important, and a secure mobile IPv6 network access system is needed for mobile IPv6 deployment. Current methods and systems are still inadequate, including EAP, PANA, 802.1X, RADIUS, Diameter, etc. In this paper, we describe the main security goals for a secure mobile IPv6 access system and propose a secure network access system to achieve them. This access system consists of access routers, attendants and authentication servers. The access procedure is divided into three phases: an initial phase, an authentication and registration phase, and a termination phase. This system has many advantages, including layer-two independence, flexibility and extensibility, no need to modify current IPv6 address autoconfiguration protocols, and binding update optimization. Finally, the security of the protocol in this system is analyzed and proved with the extended BAN logic method, and a brief introduction to the system implementation is given.

  10. Identifying the hub proteins from complicated membrane protein network systems.

    PubMed

    Shen, Yi-Zhen; Ding, Yong-Sheng; Gu, Quan; Chou, Kuo-Chen

    2010-05-01

    The so-called "hub proteins" are those proteins in a protein-protein interaction network system that have remarkably higher interaction relations (or degrees) than the others. Therefore, the information of hub proteins can provide very useful insights for selecting or prioritizing targets during drug development. In this paper, by combining the multi-agent-based method with the graphical spectrum analysis and immune-genetic algorithm, a novel simulator for identifying the hub proteins from membrane protein interaction networks is proposed. As a demonstration of using the simulator, two hub membrane proteins, YPL227C and YIL147C, were identified from a complicated network system consisting of 1500 membrane proteins. Meanwhile, along with the two identified hub proteins, their molecular functions, biological processes, and cellular components were also revealed. It is anticipated that the hub-protein-simulator may become a very useful tool for system biology and drug development, particularly in deciphering unknown protein functions, determining protein complexes, and in identifying the key targets from a complicated disease system. PMID:20507268
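
    A much simpler degree-based notion of hubs than the simulator described can be sketched as follows; the toy edge list (which merely reuses the two protein names mentioned) and the degree threshold are invented for illustration.

        # Ranking nodes of an interaction network by degree to find "hub" candidates.
        # The edge list is a made-up toy network, not the 1500-protein dataset.
        from collections import Counter

        edges = [("YPL227C", "YIL147C"), ("YPL227C", "P1"), ("YPL227C", "P2"),
                 ("YIL147C", "P2"), ("YIL147C", "P3"), ("P1", "P3"), ("P2", "P3")]

        degree = Counter()
        for a, b in edges:
            degree[a] += 1
            degree[b] += 1

        hubs = [p for p, d in degree.most_common() if d >= 3]
        print("degrees:", dict(degree))
        print("hub candidates (degree >= 3):", hubs)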

  11. Self-Organizing Neural Network Models for State Anticipatory Systems

    NASA Astrophysics Data System (ADS)

    Pöllä, Matti; Honkela, Timo

    2006-06-01

    A vital mechanism of high-level natural cognitive systems is the anticipatory capability of making decisions based on predicted events in the future. While in some cases the performance of computational cognitive systems can be improved by modeling anticipatory behavior, it has been shown that for many cognitive tasks anticipation is mandatory. In this paper, we review the use of self-organizing artificial neural networks in constructing the state-space model of an anticipatory system. The biologically inspired self-organizing map (SOM) and its topologically dynamic variants such as the growing neural gas (GNG) are discussed using illustrative examples of their performance.
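
    For readers new to the SOM mentioned here, a minimal training loop looks roughly like the sketch below; the grid size, learning-rate and neighborhood schedules, and the input data are arbitrary choices, not those of the paper.

        # Minimal self-organizing map (SOM) training loop on 2-D data (illustrative).
        import numpy as np

        rng = np.random.default_rng(2)
        data = rng.random((500, 2))                     # toy input distribution

        grid = 8                                        # 8 x 8 map
        weights = rng.random((grid, grid, 2))
        coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij"), -1)

        n_steps = 3000
        for t in range(n_steps):
            x = data[rng.integers(len(data))]
            lr = 0.5 * (1.0 - t / n_steps)              # decaying learning rate
            sigma = 3.0 * (1.0 - t / n_steps) + 0.5     # decaying neighborhood width
            # best-matching unit
            d2 = ((weights - x) ** 2).sum(-1)
            bmu = np.unravel_index(np.argmin(d2), d2.shape)
            # neighborhood-weighted update toward the sample
            g = np.exp(-((coords - np.array(bmu)) ** 2).sum(-1) / (2 * sigma ** 2))
            weights += lr * g[..., None] * (x - weights)

        print("trained SOM codebook shape:", weights.shape)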

  12. Power Consumption Analysis of Operating Systems for Wireless Sensor Networks

    PubMed Central

    Lajara, Rafael; Pelegrí-Sebastiá, José; Perez Solano, Juan J.

    2010-01-01

    In this paper four wireless sensor network operating systems are compared in terms of power consumption. The analysis takes into account the most common operating systems—TinyOS v1.0, TinyOS v2.0, Mantis and Contiki—running on Tmote Sky and MICAz devices. With the objective of ensuring a fair evaluation, a benchmark composed of four applications has been developed, covering the most typical tasks that a Wireless Sensor Network performs. The results show the instant and average current consumption of the devices during the execution of these applications. The experimental measurements provide a good insight into the power mode in which the device components are running at every moment, and they can be used to compare the performance of different operating systems executing the same tasks. PMID:22219688

  13. RSMM: a network language for modeling pollutants in river systems

    SciTech Connect

    Rao, N.B.; Standridge, C.R.; Schnoor, J.L.

    1983-06-01

    Predicting the steady state distribution of pollutants in rivers is important for water quality managers. A new simulation language, the River System Modeling Methodology (RSMM), helps users construct simulation models for analyzing river pollution. In RSMM, a network of nodes and branches represents a river system. Nodes represent elements such as junctions, dams, withdrawals, and pollutant sources; branches represent homogeneous river segments, or reaches. The RSMM processor is a GASP V program. Models can employ either the embedded Streeter-Phelps equations or user supplied equations. The user describes the network diagram with GASP-like input cards. RSMM outputs may be printed or stored in an SDL database. An interface between SDL and DISSPLA provides high quality graphical output.
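
    The Streeter-Phelps relation that RSMM can embed has a simple closed form for the dissolved-oxygen deficit; the sketch below evaluates it with illustrative rate constants and initial conditions (not values from the report).

        # Classic Streeter-Phelps dissolved-oxygen deficit along a reach.
        # Rate constants and initial conditions are illustrative values.
        import math

        def streeter_phelps(t, L0, D0, kd, ka):
            """Oxygen deficit D(t) [mg/L] t days downstream of a BOD load.
            L0: initial BOD (mg/L), D0: initial deficit (mg/L),
            kd: deoxygenation rate (1/day), ka: reaeration rate (1/day)."""
            return (kd * L0 / (ka - kd)) * (math.exp(-kd * t) - math.exp(-ka * t)) \
                   + D0 * math.exp(-ka * t)

        for t in range(0, 11, 2):
            print(f"day {t}: deficit = {streeter_phelps(t, L0=10.0, D0=1.0, kd=0.3, ka=0.5):.2f} mg/L")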

  14. Cooperative water network system to reduce carbon footprint.

    PubMed

    Lim, Seong-Rin; Park, Jong Moon

    2008-08-15

    Much effort has been made in reducing the carbon footprint to mitigate climate change. However, water network synthesis has been focused on reducing the consumption and cost of freshwater within each industrial plant. The objective of this study is to illustrate the necessity of the cooperation of industrial plants to reduce the total carbon footprint of their water supply systems. A mathematical optimization model to minimize global warming potentials is developed to synthesize (1) a cooperative water network system (WNS) integrated over two plants and (2) an individual WNS consisting of two WNSs separated for each plant. The cooperative WNS is compared to the individual WNS. The cooperation reduces their carbon footprint and is economically feasible and profitable. A strategy for implementing the cooperation is suggested for the fair distribution of costs and benefits. As a consequence, industrial plants should cooperate with their neighbor plants to further reduce the carbon footprint.

  15. Optical wireless networked-systems: applications to aircrafts

    NASA Astrophysics Data System (ADS)

    Kavehrad, Mohsen; Fadlullah, Jarir

    2011-01-01

    This paper focuses on leveraging the progress in semiconductor technologies to facilitate production of efficient light-based in-flight entertainment (IFE), distributed sensing, navigation and control systems. We demonstrate the ease of configuring "engineered pipes" using cheap lenses, etc., to achieve simple linear transmission capacity growth. Investigation of energy-efficient, miniaturized transceivers will create a wireless medium, for both inter- and intra-aircraft use, providing enhanced security and improved quality of service for communications links in greater harmony with onboard systems. The applications will seamlessly interconnect multiple intelligent devices in a network that is deployable for aircraft navigation systems, onboard sensors and entertainment data delivery systems, and high-definition audio-visual broadcasting systems. Recent experimental results on a high-capacity infrared (808 nm) system are presented. The light source can be applied in a hybrid package along with a visible-lighting LED for both lighting and communications. We also present a pragmatic combination of light communications through "spotlighting" and existing onboard power lines. It is demonstrated in detail that a high-capacity IFE visible light system communicating over existing power lines (VLC/PLC) may lead to savings in many areas through reduction of size, weight and energy consumption. This paper addresses the challenges of integrating optimized optical devices in the variety of environments described above, and presents mitigation and tailoring approaches for a multi-purpose optical network.

  16. The Felin soldier system: a tailored solution for networked operations

    NASA Astrophysics Data System (ADS)

    Le Sueur, Philippe

    2007-04-01

    Sagem Defense Securite has been awarded an 800M euro contract for the French infantry modernisation programme. This programme covers the development, qualification and production of about 32,000 soldier systems to equip all of the French infantry, with fielding starting in 2008. The FELIN soldier system provides the infantryman with an integrated system that dramatically increases the soldier's capability in all dismounted close-combat domains. Man remains at the centre of the system, which can interface with equipment or systems already fielded as well as future equipment to match any customer's needs. Urban operations are carefully addressed thanks to a versatile and modular solution and a dedicated C4I system. Sagem Defense Securite is a European leader in defence electronics and takes part in this major French Army transformation programme, which will play a key role in the info-centric network initiatives promoted in France and other countries. This paper summarises the system solutions selected by the French Army, with a focus on the networked capabilities and the optronic devices.

  17. Network modeling of membrane-based artificial cellular systems

    NASA Astrophysics Data System (ADS)

    Freeman, Eric C.; Philen, Michael K.; Leo, Donald J.

    2013-04-01

    Computational models are derived for predicting the behavior of artificial cellular networks for engineering applications. The systems simulated involve the use of a biomolecular unit cell, a multiphase material that incorporates a lipid bilayer between two hydrophilic compartments. These unit cells may be considered building blocks that enable the fabrication of complex electrochemical networks. These networks can incorporate a variety of stimuli-responsive biomolecules to enable a diverse range of multifunctional behavior. Through the collective properties of these biomolecules, the system demonstrates abilities that recreate natural cellular phenomena such as mechanotransduction, optoelectronic response, and response to chemical gradients. A crucial step to increase the utility of these biomolecular networks is to develop mathematical models of their stimuli-responsive behavior. While models have been constructed deriving from the classical Hodgkin-Huxley model focusing on describing the system as a combination of traditional electrical components (capacitors and resistors), these electrical elements do not sufficiently describe the phenomena seen in experiment as they are not linked to the molecular scale processes. From this realization an advanced model is proposed that links the traditional unit cell parameters such as conductance and capacitance to the molecular structure of the system. Rather than approaching the membrane as an isolated parallel plate capacitor, the model seeks to link the electrical properties to the underlying chemical characteristics. This model is then applied towards experimental cases in order that a more complete picture of the underlying phenomena responsible for the desired sensing mechanisms may be constructed. In this way the stimuli-responsive characteristics may be understood and optimized.
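
    The classical lumped capacitor-resistor membrane description that the abstract takes as its starting point can be written as C dV/dt = I - g (V - E); the sketch below integrates it with forward Euler, with parameter values chosen only for illustration.

        # Classical lumped RC membrane model: C dV/dt = I_applied - g * (V - E_rev),
        # integrated with forward Euler. Parameters are illustrative, not from the paper.
        import numpy as np

        C = 1e-6        # membrane capacitance (F)
        g = 1e-7        # membrane conductance (S)
        E_rev = 0.0     # reversal potential (V)
        I_app = 5e-9    # applied current (A)

        dt, T = 1e-3, 50.0
        steps = int(T / dt)
        V = np.zeros(steps)
        for k in range(steps - 1):
            dVdt = (I_app - g * (V[k] - E_rev)) / C
            V[k + 1] = V[k] + dt * dVdt

        print(f"voltage after {T:.0f} s: {V[-1]*1e3:.1f} mV (analytic steady state: {I_app/g*1e3:.1f} mV)")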

  18. Governance of integrated delivery systems/networks: a stakeholder approach.

    PubMed

    Savage, G T; Taylor, R L; Rotarius, T M; Buesseler, J A

    1997-01-01

    The health care environment is complex and turbulent, and traditional governance forms face many challenges. As integrated delivery systems/networks are formed, governance structures must be responsive to both internal and external stakeholders. Both internal efficiencies and socially responsible actions are required of these relatively new organizational forms. To meet these needs, a two-tier governance structure is presented that consists of overarching and facilitating boards. PMID:9058084

  19. NASA's Deep Space Network and ESA's Tracking Network Collaboration to Enable Solar System Exploration

    NASA Astrophysics Data System (ADS)

    Asmar, Sami; Accomazzo, Andrea; Firre, Daniel; Ferri, Paolo; Liebrecht, Phil; Mann, Greg; Morse, Gary; Costrell, Jim; Kurtik, Susan; Hell, Wolfgang; Warhaut, Manfred

    2016-07-01

    Planetary missions travel vast distances in the solar system to explore and answer important scientific questions. To return the data containing their discoveries, communications challenges have to be overcome, notably the relatively low transmitter power, typically 20 Watts at X-band, and the inverse-square decrease of received power with distance, among other factors. These missions were enabled only when leading space agencies developed very large communications antennas to communicate with them as well as to provide radio-metric navigation tools. NASA's Deep Space Network (DSN) and ESA's ESTRACK network are distributed geographically in order to provide global coverage and utilize stations ranging in size from 34 m to 70 m in diameter. With the increasing number of missions and significant loading on the networks' capacity, unique requirements during critical events, and long-baseline interferometry navigation techniques, it became obvious that collaboration between the networks was necessary and in the interest of both agencies and the advancement of planetary and space sciences. NASA and ESA established methods for collaboration that include a generic cross-support agreement as well as mission-specific memoranda of understanding. This collaboration also led to the development of international inter-operability standards. As a result of its success, the DSN-ESTRACK cross-support approach is serving as a model for other agencies with similar stations and an interest in collaboration. Over recent years, many critical events were supported and some scientific breakthroughs in planetary science were enabled. This paper will review selected examples of the science resulting from this work and the overall benefits for deep space exploration, including lessons learned, from inter-agency collaboration with communications networks.
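
    The inverse-square loss mentioned above is usually handled with a free-space link budget; the sketch below computes received power for a 20 W X-band transmitter at a Mars-like distance, with antenna gains that are rough illustrative figures rather than DSN specifications.

        # Back-of-the-envelope deep-space link budget (illustrative numbers only).
        import math

        def received_power_dbm(p_tx_w, g_tx_dbi, g_rx_dbi, freq_hz, distance_m):
            lam = 3e8 / freq_hz
            fspl_db = 20 * math.log10(4 * math.pi * distance_m / lam)   # free-space path loss
            p_tx_dbm = 10 * math.log10(p_tx_w * 1e3)
            return p_tx_dbm + g_tx_dbi + g_rx_dbi - fspl_db

        # 20 W X-band transmitter at a Mars-like distance, large receive antenna (assumed gains).
        p_rx = received_power_dbm(p_tx_w=20.0, g_tx_dbi=40.0, g_rx_dbi=74.0,
                                  freq_hz=8.4e9, distance_m=2.0e11)
        print(f"received power ~ {p_rx:.1f} dBm")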

  20. The Use and Significance of a Research Networking System

    PubMed Central

    Yuan, Leslie; Daigre, John; Meeks, Eric; Nelson, Katie; Piontkowski, Cynthia; Reuter, Katja; Sak, Rachael; Turner, Brian; Weber, Griffin M; Chatterjee, Anirvan

    2014-01-01

    Background Universities have begun deploying public Internet systems that allow for easy search of their experts, expertise, and intellectual networks. Deployed first in biomedical schools but now being implemented more broadly, the initial motivator of these research networking systems was to enable easier identification of collaborators and enable the development of teams for research. Objective The intent of the study was to provide the first description of the usage of an institutional research “social networking” system or research networking system (RNS). Methods Number of visits, visitor location and type, referral source, depth of visit, search terms, and click paths were derived from 2.5 years of Web analytics data. Feedback from a pop-up survey presented to users over 15 months was summarized. Results RNSs automatically generate and display profiles and networks of researchers. Within 2.5 years, the RNS at the University of California, San Francisco (UCSF) achieved one-seventh of the monthly visit rate of the main longstanding university website, with an increasing trend. Visitors came from diverse locations beyond the institution. Close to 75% (74.78%, 208,304/278,570) came via a public search engine and 84.0% (210 out of a sample of 250) of these queried an individual’s name that took them directly to the relevant profile page. In addition, 20.90% (214 of 1024) visits went beyond the page related to a person of interest to explore related researchers and topics through the novel and networked information provided by the tool. At the end of the period analyzed, more than 2000 visits per month traversed 5 or more links into related people and topics. One-third of visits came from returning visitors who were significantly more likely to continue to explore networked people and topics (P<.001). Responses to an online survey suggest a broad range of benefits of using the RNS in supporting the research and clinical mission. Conclusions Returning

  1. A multi-agent system architecture for sensor networks.

    PubMed

    Fuentes-Fernández, Rubén; Guijarro, María; Pajares, Gonzalo

    2009-01-01

    The design of the control systems for sensor networks presents important challenges. Besides the traditional problems about how to process the sensor data to obtain the target information, engineers need to consider additional aspects such as the heterogeneity and high number of sensors, and the flexibility of these networks regarding topologies and the sensors in them. Although there are partial approaches for resolving these issues, their integration relies on ad hoc solutions requiring important development efforts. In order to provide an effective approach for this integration, this paper proposes an architecture based on the multi-agent system paradigm with a clear separation of concerns. The architecture considers sensors as devices used by an upper layer of manager agents. These agents are able to communicate and negotiate services to achieve the required functionality. Activities are organized according to roles related with the different aspects to integrate, mainly sensor management, data processing, communication and adaptation to changes in the available devices and their capabilities. This organization largely isolates and decouples the data management from the changing network, while encouraging reuse of solutions. The use of the architecture is facilitated by a specific modelling language developed through metamodelling. A case study concerning a generic distributed system for fire fighting illustrates the approach and the comparison with related work. PMID:22303172

  2. A Multi-Agent System Architecture for Sensor Networks

    PubMed Central

    Fuentes-Fernández, Rubén; Guijarro, María; Pajares, Gonzalo

    2009-01-01

    The design of the control systems for sensor networks presents important challenges. Besides the traditional problems about how to process the sensor data to obtain the target information, engineers need to consider additional aspects such as the heterogeneity and high number of sensors, and the flexibility of these networks regarding topologies and the sensors in them. Although there are partial approaches for resolving these issues, their integration relies on ad hoc solutions requiring important development efforts. In order to provide an effective approach for this integration, this paper proposes an architecture based on the multi-agent system paradigm with a clear separation of concerns. The architecture considers sensors as devices used by an upper layer of manager agents. These agents are able to communicate and negotiate services to achieve the required functionality. Activities are organized according to roles related with the different aspects to integrate, mainly sensor management, data processing, communication and adaptation to changes in the available devices and their capabilities. This organization largely isolates and decouples the data management from the changing network, while encouraging reuse of solutions. The use of the architecture is facilitated by a specific modelling language developed through metamodelling. A case study concerning a generic distributed system for fire fighting illustrates the approach and the comparison with related work. PMID:22303172

  3. Ocean Networks Canada: Live Sensing of a Dynamic Ocean System

    NASA Astrophysics Data System (ADS)

    Heesemann, Martin; Juniper, Kim; Hoeberechts, Maia; Matabos, Marjolaine; Mihaly, Steven; Scherwath, Martin; Dewey, Richard

    2013-04-01

    Ocean Networks Canada operates two advanced cabled networks on the west coast of British Columbia. VENUS, the coastal network consisting of two cabled arrays with four Nodes reaching an isolated fjord (Saanich Inlet) and a busy shipping corridor near Vancouver (the Strait of Georgia) went into operation in February 2006. NEPTUNE Canada is the first operational deep-sea regional cabled ocean observatory worldwide. Since the first data began streaming to the public in 2009, instruments on the five active nodes along the 800 km cable loop have gathered a time-series documenting three years in the northeastern Pacific. Observations cover the northern Juan de Fuca tectonic plate from ridge to trench and the continental shelf and slope off Vancouver Island. The cabled systems provide power and high bandwidth communications to a wide range of oceanographic instrument systems which measure the physical, chemical, geological, and biological conditions of the dynamic earth-ocean system. Over the years significant challenges have been overcome and currently we have more than 100 instruments with hundreds of sensors reporting data in real-time. Salient successes are the first open-ocean seafloor to sea-surface vertical profiling system, three years of operation of Wally—a seafloor crawler that explores a hydrate mound, and a proven resilient cable design that can recover from trawler hits and major equipment meltdown with minimal loss of data. A network wide array of bottom mounted pressure recorders and seismometers recorded the passage of three major tsunamis, numerous earthquakes and frequent whale calls. At the Endeavour segment of the Juan de Fuca ridge high temperature and diffuse vent fluids were monitored and sampled using novel equipment, including high resolution active acoustics instrumentation to study plume dynamics at a massive sulfide hydrothermal vent. Also, four deep sea cabled moorings (300 m high) were placed in the precipitous bathymetry of the 2200 m

  4. Perturbation Biology: Inferring Signaling Networks in Cellular Systems

    PubMed Central

    Miller, Martin L.; Gauthier, Nicholas P.; Jing, Xiaohong; Kaushik, Poorvi; He, Qin; Mills, Gordon; Solit, David B.; Pratilas, Christine A.; Weigt, Martin; Braunstein, Alfredo; Pagnani, Andrea; Zecchina, Riccardo; Sander, Chris

    2013-01-01

    We present a powerful experimental-computational technology for inferring network models that predict the response of cells to perturbations, and that may be useful in the design of combinatorial therapy against cancer. The experiments are systematic series of perturbations of cancer cell lines by targeted drugs, singly or in combination. The response to perturbation is quantified in terms of relative changes in the measured levels of proteins, phospho-proteins and cellular phenotypes such as viability. Computational network models are derived de novo, i.e., without prior knowledge of signaling pathways, and are based on simple non-linear differential equations. The prohibitively large solution space of all possible network models is explored efficiently using a probabilistic algorithm, Belief Propagation (BP), which is three orders of magnitude faster than standard Monte Carlo methods. Explicit executable models are derived for a set of perturbation experiments in SKMEL-133 melanoma cell lines, which are resistant to the therapeutically important inhibitor of RAF kinase. The resulting network models reproduce and extend known pathway biology. They empower potential discoveries of new molecular interactions and predict efficacious novel drug perturbations, such as the inhibition of PLK1, which is verified experimentally. This technology is suitable for application to larger systems in diverse areas of molecular biology. PMID:24367245

  5. Networked Community Change: Understanding Community Systems Change through the Lens of Social Network Analysis.

    PubMed

    Lawlor, Jennifer A; Neal, Zachary P

    2016-06-01

    Addressing complex problems in communities has become a key area of focus in recent years (Kania & Kramer, 2013, Stanford Social Innovation Review). Building on existing approaches to understanding and addressing problems, such as action research, several new approaches have emerged that shift the way communities solve problems (e.g., Burns, 2007, Systemic Action Research; Foth, 2006, Action Research, 4, 205; Kania & Kramer, 2011, Stanford Social Innovation Review, 1, 36). Seeking to bring clarity to the emerging literature on community change strategies, this article identifies the common features of the most widespread community change strategies and explores the conditions under which such strategies have the potential to be effective. We identify and describe five common features among the approaches to change. Then, using an agent-based model, we simulate network-building behavior among stakeholders participating in community change efforts using these approaches. We find that the emergent stakeholder networks are efficient when the processes are implemented under ideal conditions. PMID:27221668
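
    A toy version of the agent-based network-building idea might look like the sketch below, where stakeholders form ties when they attend the same convenings; the agent count, event count and tie probability are arbitrary and not taken from the article.

        # Toy agent-based simulation of stakeholders forming ties at shared events.
        import random

        random.seed(0)
        n_agents, n_events, p_tie = 30, 40, 0.3
        ties = set()

        for _ in range(n_events):
            attendees = random.sample(range(n_agents), k=6)   # who shows up
            for i in attendees:
                for j in attendees:
                    if i < j and random.random() < p_tie:
                        ties.add((i, j))                       # form / keep a tie

        density = len(ties) / (n_agents * (n_agents - 1) / 2)
        print(f"{len(ties)} ties among {n_agents} stakeholders, density = {density:.2f}")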

  6. Networked Community Change: Understanding Community Systems Change through the Lens of Social Network Analysis.

    PubMed

    Lawlor, Jennifer A; Neal, Zachary P

    2016-06-01

    Addressing complex problems in communities has become a key area of focus in recent years (Kania & Kramer, 2013, Stanford Social Innovation Review). Building on existing approaches to understanding and addressing problems, such as action research, several new approaches have emerged that shift the way communities solve problems (e.g., Burns, 2007, Systemic Action Research; Foth, 2006, Action Research, 4, 205; Kania & Kramer, 2011, Stanford Social Innovation Review, 1, 36). Seeking to bring clarity to the emerging literature on community change strategies, this article identifies the common features of the most widespread community change strategies and explores the conditions under which such strategies have the potential to be effective. We identify and describe five common features among the approaches to change. Then, using an agent-based model, we simulate network-building behavior among stakeholders participating in community change efforts using these approaches. We find that the emergent stakeholder networks are efficient when the processes are implemented under ideal conditions.

  7. The Network Library System: The History and Description of an Evolving Library-Developed System.

    ERIC Educational Resources Information Center

    Senzig, Donna M.; Bright, Franklyn F.

    1987-01-01

    Describes the Network Library System, a collaborative development project undertaken by the University of Wisconsin-Madison and University of Chicago libraries, which currently handles an online catalog and a circulation system. The conceptualization of the system, its development and performance, and subsequent changes due to available…

  8. Distributed Interplanetary Delay/Disruption Tolerant Network (DTN) Monitor and Control System

    NASA Technical Reports Server (NTRS)

    Wang, Shin-Ywan

    2012-01-01

    The Distributed Interplanetary Delay Tolerant Network (DTN) Monitor and Control System, a DTN network management implementation at JPL, is intended to provide methods and tools that can monitor DTN operation status and detect and resolve DTN operation failures in a largely automated way when either a space network or a heterogeneous network is infused with DTN capability. In this paper, "DTN Monitor and Control system in Deep Space Network (DSN)" exemplifies how the DTN Monitor and Control system can be adapted to a space network once it is DTN enabled.

  9. Networked gamma radiation detection system for tactical deployment

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Sanjoy; Maurer, Richard; Wolff, Ronald; Smith, Ethan; Guss, Paul; Mitchell, Stephen

    2015-08-01

    A networked gamma radiation detection system with directional sensitivity and energy spectral data acquisition capability is being developed by the National Security Technologies, LLC, Remote Sensing Laboratory to support the close and intense tactical engagement of law enforcement who carry out counterterrorism missions. In the proposed design, three clusters of 2″ × 4″ × 16″ sodium iodide crystals (4 each) with digiBASE-E (for list mode data collection) would be placed on the passenger side of a minivan. To enhance localization and facilitate rapid identification of isotopes, advanced smart real-time localization and radioisotope identification algorithms like WAVRAD (wavelet-assisted variance reduction for anomaly detection) and NSCRAD (nuisance-rejection spectral comparison ratio anomaly detection) will be incorporated. We will test a collection of algorithms and analysis that centers on the problem of radiation detection with a distributed sensor network. We will study the basic characteristics of a radiation sensor network and focus on the trade-offs between false positive alarm rates, true positive alarm rates, and time to detect multiple radiation sources in a large area. Empirical and simulation analyses of critical system parameters, such as number of sensors, sensor placement, and sensor response functions, will be examined. This networked system will provide an integrated radiation detection architecture and framework with (i) a large nationally recognized search database equivalent that would help generate a common operational picture in a major radiological crisis; (ii) a robust reach back connectivity for search data to be evaluated by home teams; and, finally, (iii) a possibility of integrating search data from multi-agency responders.

  10. Augmenting Trust Establishment in Dynamic Systems with Social Networks

    SciTech Connect

    Lagesse, Brent J; Kumar, Mohan; Venkatesh, Svetha; Lazarescu, Mihai

    2010-01-01

    Social networking has recently flourished in popularity through the use of social websites. Pervasive computing resources have allowed people to stay well-connected to each other through access to social networking resources. We take the position that utilizing information produced by relationships within social networks can assist in the establishment of trust for other pervasive computing applications. Furthermore, we describe how such a system can augment a sensor infrastructure used for event observation with information from mobile sensors (i.e., mobile phones with cameras) controlled by potentially untrusted third parties. Pervasive computing systems are invisible systems, oriented around the user. As a result, many future pervasive systems are likely to include a social aspect. The social communities that develop in these systems can augment existing trust mechanisms with information about pre-trusted entities, or entities to consider initially when beginning to establish trust. An example of such a system is the Collaborative Virtual Observation (CoVO) system, which fuses sensor information from disparate sources in soft real time to recreate a scene that provides observation of an event that has recently transpired. To accomplish this, CoVO must efficiently access services while protecting the data from corruption by unknown remote nodes. CoVO combines dynamic service composition with virtual observation to utilize existing infrastructure together with third-party services available in the environment. Since these services are not under the control of the system, they may be unreliable or malicious. When an event of interest occurs, the given infrastructure (bus cameras, etc.) may not sufficiently cover the necessary information (be it in space, time, or sensor type). To enhance observation of the event, the infrastructure is augmented with information from sensors in the environment that the infrastructure does not control. These sensors may be unreliable

  11. An expert system for configuring a network for a Milstar terminal

    NASA Technical Reports Server (NTRS)

    Mahoney, Melissa J.; Wilson, Elizabeth J.

    1994-01-01

    This paper describes a rule-based expert system which assists the user in configuring a network for Air Force terminals using the Milstar satellite system. The network configuration expert system approach uses CLIPS. The complexity of network configuration is discussed, and the methods used to model it are described.

  12. Quantum Processes and Dynamic Networks in Physical and Biological Systems.

    NASA Astrophysics Data System (ADS)

    Dudziak, Martin Joseph

    Quantum theory since its earliest formulations in the Copenhagen Interpretation has been difficult to integrate with general relativity and with classical Newtonian physics. There has been traditionally a regard for quantum phenomena as being a limiting case for a natural order that is fundamentally classical except for microscopic extrema where quantum mechanics must be applied, more as a mathematical reconciliation rather than as a description and explanation. Macroscopic sciences including the study of biological neural networks, cellular energy transports and the broad field of non-linear and chaotic systems point to a quantum dimension extending across all scales of measurement and encompassing all of Nature as a fundamentally quantum universe. Theory and observation lead to a number of hypotheses all of which point to dynamic, evolving networks of fundamental or elementary processes as the underlying logico-physical structure (manifestation) in Nature and a strongly quantized dimension to macroscalar processes such as are found in biological, ecological and social systems. The fundamental thesis advanced and presented herein is that quantum phenomena may be the direct consequence of a universe built not from objects and substance but from interacting, interdependent processes collectively operating as sets and networks, giving rise to systems that on microcosmic or macroscopic scales function wholistically and organically, exhibiting non-locality and other non -classical phenomena. The argument is made that such effects as non-locality are not aberrations or departures from the norm but ordinary consequences of the process-network dynamics of Nature. Quantum processes are taken to be the fundamental action-events within Nature; rather than being the exception quantum theory is the rule. The argument is also presented that the study of quantum physics could benefit from the study of selective higher-scale complex systems, such as neural processes in the brain

  13. Reachability bounds for chemical reaction networks and strand displacement systems.

    PubMed

    Condon, Anne; Kirkpatrick, Bonnie; Maňuch, Ján

    2014-01-01

    Chemical reaction networks (CRNs) and DNA strand displacement systems (DSDs) are widely-studied and useful models of molecular programming. However, in order for some DSDs in the literature to behave in an expected manner, the initial number of copies of some reagents is required to be fixed. In this paper we show that, when multiple copies of all initial molecules are present, general types of CRNs and DSDs fail to work correctly if the length of the shortest sequence of reactions needed to produce any given molecule exceeds a threshold that grows polynomially with attributes of the system.
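
    Reachability questions of this kind can be explored by brute force on very small instances; the sketch below enumerates the reachable states of a two-reaction CRN by breadth-first search and reports the shortest reaction sequence producing a target species. The CRN and copy numbers are invented for illustration.

        # Breadth-first enumeration of reachable states in a tiny CRN.
        from collections import deque

        # Reactions as (consumed, produced) over species counts (A, B, C).
        reactions = [
            ({"A": 1, "B": 1}, {"C": 1}),        # A + B -> C
            ({"C": 1}, {"A": 1, "B": 1}),        # C -> A + B
        ]
        species = ["A", "B", "C"]

        def apply(state, consumed, produced):
            s = dict(state)
            for sp, n in consumed.items():
                if s.get(sp, 0) < n:
                    return None                  # not enough copies; reaction disabled
                s[sp] -= n
            for sp, n in produced.items():
                s[sp] = s.get(sp, 0) + n
            return tuple(s[sp] for sp in species)

        start = (3, 3, 0)                        # 3 copies each of A and B
        seen = {start: 0}
        queue = deque([start])
        while queue:
            state = queue.popleft()
            for consumed, produced in reactions:
                nxt = apply(dict(zip(species, state)), consumed, produced)
                if nxt is not None and nxt not in seen:
                    seen[nxt] = seen[state] + 1
                    queue.append(nxt)

        print("reachable states:", sorted(seen))
        print("shortest sequence to produce 3 C:", seen.get((0, 0, 3)))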

  14. The Study on the Communication Network of Wide Area Measurement System in Electricity Grid

    NASA Astrophysics Data System (ADS)

    Xiaorong, Cheng; Ying, Wang; Yangdan, Ni

    Wide area measurement system (WAMS) is a fundamental part of security defense in the Smart Grid, and the communication system of WAMS is an important part of the electric power communication network. Where a large regional network is concerned, the real-time data transferred in the communication network of WAMS directly affects the safe operation of the power grid. Therefore, WAMS imposes higher requirements for real-time performance, reliability and security on its communication network. In this paper, the architecture of the WAMS communication network was studied according to the seven-layer model of the Open Systems Interconnection (OSI), and the network architecture was researched at all levels. We explored the media of the WAMS communication network, the network communication protocol and the network technology. Finally, the delay of the network was analyzed.

  15. Precision gravity network for monitoring the Lassen geothermal system, Northern California

    USGS Publications Warehouse

    Jachens, Robert C.; Saltus, R.W.

    1983-01-01

    A precision gravity network consisting of approximately 50 stations was established to monitor the Lassen geothermal system. The network was surveyed during the summer of 1982 and tied to a similar network established in 1981. Measurements yielded relative gravity values at the network stations with average uncertainties of 0.007 mGal (1 computed standard error).

  16. Privacy Management and Networked PPD Systems - Challenges Solutions.

    PubMed

    Ruotsalainen, Pekka; Pharow, Peter; Petersen, Francoise

    2015-01-01

    Modern personal portable health devices (PPDs) are increasingly becoming part of a larger, inhomogeneous information system. Information collected by sensors is stored and processed in global clouds. Services are often free of charge, but at the same time service providers' business model is based on the disclosure of users' intimate health information. Health data processed in PPD networks is not regulated by health-care-specific legislation. In PPD networks, there is no guarantee that stakeholders share the same ethical principles as the user. Often service providers have their own security and privacy policies, and they rarely offer the user the possibility to define their own privacy policies or to adapt existing ones. All this raises huge ethical and privacy concerns. In this paper, the authors analyze privacy challenges in PPD networks from the user's viewpoint using a system modeling method and propose that the principle "Personal Health Data under Personal Control" must be generally accepted at the global level. Among possible implementations of this principle, the authors propose encryption, computer-understandable privacy policies, and privacy labels or trust-based privacy management methods. The latter can be realized using an infrastructural trust calculation and monitoring service. A first step is to require the protection of personal health information and to make the proposed principle internationally mandatory. This requires both regulatory and standardization activities, and the availability of open and certified software applications which all service providers can implement. One of those applications should be the independent Trust verifier. PMID:25980881

  18. Lightning location system supervising Swedish power transmission network

    NASA Technical Reports Server (NTRS)

    Melin, Stefan A.

    1991-01-01

    For electric utilities, the ability to prevent or minimize lightning damage to personnel and power systems is of great importance. Therefore, the Swedish State Power Board has been using data since 1983 from a nationwide lightning location system (LLS) for accurately locating lightning ground strikes. Lightning data is distributed and presented on color graphic displays at regional power network control centers as well as at the national power system control center for optimal use of the data. The main objectives for the use of LLS data are: supervising the power system for optimal and safe use of the transmission and generating capacity during periods of thunderstorms; providing a warning service to maintenance and service crews at power lines and substations so that operations which are hazardous during lightning can be suspended; rapid positioning of emergency crews to locate network damage in areas of detected lightning; and post-analysis of power outages and transmission faults in relation to lightning, using archived lightning data to determine appropriate design and insulation levels of equipment. Staff have found LLS data useful and economically justified, since the availability of the power system has increased, as has the level of personnel safety.

  19. Neural Network Target Identification System for False Alarm Reduction

    NASA Technical Reports Server (NTRS)

    Ye, David; Edens, Weston; Lu, Thomas T.; Chao, Tien-Hsin

    2009-01-01

    A multi-stage automated target recognition (ATR) system has been designed to perform computer vision tasks with adequate proficiency in mimicking human vision. The system is able to detect, identify, and track targets of interest. Potential regions of interest (ROIs) are first identified by the detection stage using an Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filter combined with a wavelet transform. False positives are then eliminated by the verification stage using feature extraction methods in conjunction with neural networks. Feature extraction transforms the ROIs using filtering and binning algorithms to create feature vectors. A feed-forward back-propagation neural network (NN) is then trained to classify each feature vector and remove false positives. This paper discusses the testing of the system's performance and the parameter optimization process that adapts the system to various targets and datasets. The test results show that the system was successful in substantially reducing the false positive rate when tested on a sonar image dataset.
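
    As a rough illustration of the verification stage described above, the sketch below trains a small feed-forward network to label ROI feature vectors as target or false positive; the data are random stand-ins and scikit-learn's MLPClassifier replaces the paper's custom back-propagation network.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)

        # Stand-in feature vectors (e.g., from filtering/binning each ROI): 200 samples, 32 features.
        X = rng.normal(size=(200, 32))
        y = (X[:, :4].sum(axis=1) > 0).astype(int)   # 1 = target, 0 = false positive (synthetic rule)

        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        clf.fit(X[:150], y[:150])

        # ROIs classified as 0 would be rejected as false alarms.
        print("holdout accuracy:", clf.score(X[150:], y[150:]))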

  20. Neural network target identification system for false alarm reduction

    NASA Astrophysics Data System (ADS)

    Ye, David; Edens, Weston; Lu, Thomas T.; Chao, Tien-Hsin

    2009-04-01

    A multi-stage automated target recognition (ATR) system has been designed to perform computer vision tasks with adequate proficiency in mimicking human vision. The system is able to detect, identify, and track targets of interest. Potential regions of interest (ROIs) are first identified by the detection stage using an Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filter combined with a wavelet transform. False positives are then eliminated by the verification stage using feature extraction methods in conjunction with neural networks. Feature extraction transforms the ROIs using filtering and binning algorithms to create feature vectors. A feed-forward back-propagation neural network (NN) is then trained to classify each feature vector and remove false positives. This paper discusses the testing of the system's performance and the parameter optimization process that adapts the system to various targets and datasets. The test results show that the system was successful in substantially reducing the false positive rate when tested on a sonar image dataset.

  1. Advanced information processing system: Authentication protocols for network communication

    NASA Technical Reports Server (NTRS)

    Harper, Richard E.; Adams, Stuart J.; Babikyan, Carol A.; Butler, Bryan P.; Clark, Anne L.; Lala, Jaynarayan H.

    1994-01-01

    In safety critical I/O and intercomputer communication networks, reliable message transmission is an important concern. Difficulties of communication and fault identification in networks arise primarily because the sender of a transmission cannot be identified with certainty, an intermediate node can corrupt a message without certainty of detection, and a babbling node cannot be identified and silenced without lengthy diagnosis and reconfiguration. Authentication protocols use digital signature techniques to verify the authenticity of messages with high probability. Such protocols appear to provide an efficient solution to many of these problems. The objective of this program is to develop, demonstrate, and evaluate intercomputer communication architectures which employ authentication. As a context for the evaluation, the authentication protocol-based communication concept was demonstrated under this program by hosting a real-time flight critical guidance, navigation and control algorithm on a distributed, heterogeneous, mixed redundancy system of workstations and embedded fault-tolerant computers.
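
    The following sketch illustrates the digital-signature mechanism such authentication protocols rely on: each node signs its messages, and receivers verify the signature before trusting the claimed sender. It uses the Python cryptography package's Ed25519 primitives and is an illustration of the idea only, not the AIPS implementation.

        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
        from cryptography.exceptions import InvalidSignature

        # Each node holds a private key; its public key is known to the other nodes.
        private_key = Ed25519PrivateKey.generate()
        public_key = private_key.public_key()

        message = b"node-3: actuator command frame 0x41"    # hypothetical message
        signature = private_key.sign(message)

        try:
            public_key.verify(signature, message)   # raises if message or signature was altered
            print("message authenticated")
        except InvalidSignature:
            print("reject: possible corruption or impersonation")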

  2. Optimization of neural networks using variable structure systems.

    PubMed

    Mohseni, Seyed Alireza; Tan, Ai Hui

    2012-12-01

    This paper proposes a new mixed training algorithm consisting of error backpropagation (EBP) and variable structure systems (VSSs) to optimize parameter updating of neural networks. For the optimization of the number of neurons in the hidden layer, a new term based on the output of the hidden layer is added to the cost function as a penalty term to make optimal use of hidden units related to weights corresponding to each unit in the hidden layer. VSS is used to control the dynamic model of the training process, whereas EBP attempts to minimize the cost function. In addition to the analysis of the imposed dynamics of the EBP technique, the global stability of the mixed training methodology and constraints on the design parameters are considered. The advantages of the proposed technique are guaranteed convergence, improved robustness, and lower sensitivity to initial weights of the neural network.
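
    A minimal sketch of the "error plus hidden-output penalty" form of cost function described above follows; the variable names and the quadratic penalty are illustrative, and the exact penalty term and VSS dynamics used in the paper may differ.

        import numpy as np

        def penalized_cost(y_true, y_pred, hidden_output, lam=0.01):
            """Squared error plus a penalty on hidden-layer activity.

            Penalizing the hidden outputs discourages redundant hidden units,
            which is the spirit of the extra term added to the cost function;
            the concrete form used in the paper may differ."""
            error_term = 0.5 * np.sum((y_true - y_pred) ** 2)
            penalty_term = lam * np.sum(hidden_output ** 2)
            return error_term + penalty_term

        # Tiny usage example with made-up values
        y_true = np.array([1.0, 0.0])
        y_pred = np.array([0.8, 0.1])
        hidden = np.array([0.9, 0.0, 0.7, 0.0])
        print(penalized_cost(y_true, y_pred, hidden))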

  3. Spin-system dynamics and fault detection in threshold networks

    SciTech Connect

    Kirkland, Steve; Severini, Simone

    2011-01-15

    We consider an agent on a fixed but arbitrary node of a known threshold network, with the task of detecting an unknown missing link. We obtain analytic formulas for the probability of success when the agent's tool is the free evolution of a single excitation on an XX spin system paired with the network. We completely characterize the parameters, and this allows us to obtain an advantageous solution. From the results emerges an optimal (deterministic) algorithm for quantum search, from which we gain a quadratic speedup with respect to the optimal classical analogue, in line with well-known results in quantum computation. When attempting to detect a faulty node, the chosen setting appears to be very fragile, and the probability of success is too small to be of any direct use.
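
    For a single excitation, free evolution under the XX model is generated by the network's coupling (adjacency) matrix, so transfer probabilities can be computed directly; the sketch below does this for a toy four-node graph, which is illustrative and not a threshold network from the paper.

        import numpy as np
        from scipy.linalg import expm

        # Adjacency matrix of a toy 4-node graph (illustrative, not a threshold network).
        A = np.array([[0, 1, 1, 0],
                      [1, 0, 1, 1],
                      [1, 1, 0, 1],
                      [0, 1, 1, 0]], dtype=float)

        t = 1.5                      # evolution time
        U = expm(-1j * A * t)        # single-excitation propagator of the XX model

        start, probe = 0, 3
        p = abs(U[probe, start]) ** 2
        print(f"P(excitation at node {probe} at t={t}) = {p:.3f}")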

  4. Artificial Neural Network for Location Estimation in Wireless Communication Systems

    PubMed Central

    Chen, Chien-Sheng

    2012-01-01

    In a wireless communication system, wireless location is the technique used to estimate the location of a mobile station (MS). To enhance the accuracy of MS location prediction, we propose a novel algorithm that utilizes time of arrival (TOA) measurements and the angle of arrival (AOA) information to locate MS when three base stations (BSs) are available. Artificial neural networks (ANN) are widely used techniques in various areas to overcome the problem of exclusive and nonlinear relationships. When the MS is heard by only three BSs, the proposed algorithm utilizes the intersections of three TOA circles (and the AOA line), based on various neural networks, to estimate the MS location in non-line-of-sight (NLOS) environments. Simulations were conducted to evaluate the performance of the algorithm for different NLOS error distributions. The numerical analysis and simulation results show that the proposed algorithms can obtain more precise location estimation under different NLOS environments. PMID:22736978
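
    The geometric quantities fed to the networks, the intersections of TOA circles, can be computed in closed form; a sketch for two circles with hypothetical base-station positions and measured ranges follows.

        import math

        def circle_intersections(c1, r1, c2, r2):
            """Intersection points of two TOA circles (centers = BS positions, radii = ranges)."""
            (x1, y1), (x2, y2) = c1, c2
            d = math.hypot(x2 - x1, y2 - y1)
            if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
                return []                                  # no intersection (or identical centers)
            a = (r1**2 - r2**2 + d**2) / (2 * d)           # distance from c1 to the chord midpoint
            h = math.sqrt(max(r1**2 - a**2, 0.0))
            xm = x1 + a * (x2 - x1) / d
            ym = y1 + a * (y2 - y1) / d
            return [(xm + h * (y2 - y1) / d, ym - h * (x2 - x1) / d),
                    (xm - h * (y2 - y1) / d, ym + h * (x2 - x1) / d)]

        # Hypothetical base stations and measured ranges (TOA times the speed of light)
        print(circle_intersections((0, 0), 5.0, (6, 0), 5.0))   # -> [(3.0, -4.0), (3.0, 4.0)]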

  5. A fraud management system architecture for next-generation networks.

    PubMed

    Bihina Bella, M A; Eloff, J H P; Olivier, M S

    2009-03-10

    This paper proposes an original architecture for a fraud management system (FMS) for convergent next-generation networks (NGNs), which are based on the Internet Protocol (IP). The architecture has the potential to satisfy the requirements of flexibility and application independence for effective fraud detection in NGNs, requirements that cannot be met by traditional FMSs. The proposed architecture has a thorough four-stage detection process that analyses billing records in IP Detail Record (IPDR) format - an emerging IP-based billing standard - for signs of fraud. Its key feature is its use of neural networks in the form of self-organising maps (SOMs) to help uncover unknown NGN fraud scenarios. A prototype was implemented to test the effectiveness of using a SOM for fraud detection and is also described in the paper.
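
    A minimal self-organising map training loop in plain NumPy is sketched below, with synthetic records standing in for IPDR billing data; records that map far from any well-fitted neuron can be flagged as candidate fraud. The grid size, learning schedule and data are illustrative, not those of the prototype.

        import numpy as np

        rng = np.random.default_rng(1)
        data = rng.normal(size=(500, 4))          # stand-in for normalized IPDR feature vectors

        grid_w, grid_h, dim = 8, 8, data.shape[1]
        weights = rng.normal(size=(grid_w, grid_h, dim))
        coords = np.dstack(np.meshgrid(np.arange(grid_w), np.arange(grid_h), indexing="ij"))

        def train_som(weights, data, epochs=10, lr0=0.5, sigma0=3.0):
            n_steps, step = epochs * len(data), 0
            for _ in range(epochs):
                for x in data:
                    frac = step / n_steps
                    lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
                    # best-matching unit for this record
                    bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(axis=2)),
                                           (grid_w, grid_h))
                    # neighbourhood-weighted update pulls nearby neurons toward x
                    dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=2)
                    h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
                    weights += lr * h * (x - weights)
                    step += 1
            return weights

        weights = train_som(weights, data)
        # Quantization error of a record: large values suggest an unusual (possibly fraudulent) pattern.
        x = data[0]
        print("quantization error:", np.min(np.linalg.norm(weights - x, axis=2)))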

  6. CNN: a speaker recognition system using a cascaded neural network.

    PubMed

    Zaki, M; Ghalwash, A; Elkouny, A A

    1996-05-01

    The main emphasis of this paper is to present an approach for combining supervised and unsupervised neural network models for speaker recognition. To enhance the overall operation and performance of recognition, the proposed strategy integrates the two techniques to form one global model called the cascaded model. We first present a simple conventional technique based on the distance measured between a test vector and a reference vector for different speakers in the population. This particular distance metric has the property of weighting down the components in those directions along which the intraspeaker variance is large. The reason for presenting this method is to clarify the discrepancy in performance between the conventional and neural network approaches. We then introduce the idea of using an unsupervised learning technique, represented by the winner-take-all model, as a means of recognition. Based on several tests that have been conducted, and in order to enhance the performance of this model when dealing with noisy patterns, we have preceded it with a supervised learning model--the pattern association model--which acts as a filtration stage. This work includes the design and implementation of both the conventional and the neural network approaches to recognize the speakers' templates, which are introduced to the system via a voice master card and preprocessed before extracting the features used in recognition. The conclusion indicates that the system performance in the case of the neural network is better than that of the conventional approach, achieving a smooth degradation with noisy patterns and higher performance with noise-free patterns.
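
    The conventional distance described above, which down-weights components along directions of large intraspeaker variance, can be sketched as follows; the vectors and variances are invented and the paper's exact metric may differ.

        import numpy as np

        def variance_weighted_distance(test_vec, ref_vec, intraspeaker_var):
            """Distance that down-weights components with large intra-speaker variance."""
            w = 1.0 / (intraspeaker_var + 1e-8)           # large variance -> small weight
            return float(np.sqrt(np.sum(w * (test_vec - ref_vec) ** 2)))

        # Hypothetical 4-dimensional feature vectors
        test = np.array([1.0, 2.0, 0.5, 3.0])
        ref = np.array([0.8, 2.4, 0.4, 2.0])
        var = np.array([0.1, 5.0, 0.2, 10.0])   # components 2 and 4 vary a lot within one speaker

        print(variance_weighted_distance(test, ref, var))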

  7. A system's view of metro and regional optical networks

    NASA Astrophysics Data System (ADS)

    Lam, Cedric F.; Way, Winston I.

    2009-01-01

    Developments in fiber optic communications have been rejuvenated after the glut of overcapacity at the turn of the century. The boom of video-centric network applications has finally resulted in another wave of vast build-outs of broadband access networks such as FTTH, DOCSIS 3.0 and Wi-Fi systems, which in turn have driven up the bandwidth demands in metro and regional WDM networks. These new developments have rekindled research interest in technologies not only to meet the surging demand, but also to upgrade legacy network infrastructures in an evolutionary manner without disrupting existing services or incurring significant capital penalties. Standards bodies such as IEEE, ITU and OIF have formed task forces to ratify 100 Gb/s interface standards. Thanks to the seemingly unlimited bandwidth of single-mode fibers, advances in optical networks have traditionally been fueled by more capable physical components such as more powerful lasers, cleaner and wider-bandwidth optical amplifiers, faster modulators and photo-detectors, etc. Meanwhile, the mainstream modulation technique for fiber optic communication systems has remained the most rudimentary form of on-off keying (OOK) with direct power detection for a very long period of time, because spectral efficiency had never been a concern. This scenario, however, is no longer valid, as demand for bandwidth is pushing the limit of current WDM technologies. In terms of spectral use, all the 100-GHz ITU grid channels in the C-band have been populated with 10 Gb/s wavelengths in most WDM transport networks, and we are exhausting the power and bandwidth offered by existing fiber plant EDFAs. Beyond 10 Gb/s, increasing the transmission rate to 40 Gb/s by a brute-force OOK approach incurs significant penalties due to chromatic and polarization mode dispersion. With conventional modulation schemes, transmission impairments at 40 Gb/s and above already become such difficult challenges that the efforts to manage these

  8. A Partially Distributed Intrusion Detection System for Wireless Sensor Networks

    PubMed Central

    Cho, Eung Jun; Hong, Choong Seon; Lee, Sungwon; Jeon, Seokhee

    2013-01-01

    The increasing use of wireless sensor networks, which normally comprise several very small sensor nodes, makes their security an increasingly important issue. They can be practically and efficiently secured using intrusion detection systems. Conventional security mechanisms are not usually applicable due to the sensor nodes having limitations of computational power, memory capacity, and battery power. Therefore, specific security systems should be designed to function under constraints of energy or memory. A partially distributed intrusion detection system with low memory and power demands is proposed here. It employs a Bloom filter, which allows reduced signature code size. Multiple Bloom filters can be combined to reduce the signature code for each Bloom filter array. The mechanism could then cope with potential denial of service attacks, unlike many previous detection systems with Bloom filters. The mechanism was evaluated and validated through analysis and simulation.
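
    A compact Bloom filter of the kind used to shrink signature storage can be sketched in a few lines; the hash construction and sizes below are illustrative rather than those used in the proposed system.

        import hashlib

        class BloomFilter:
            def __init__(self, size_bits=1024, num_hashes=3):
                self.size = size_bits
                self.num_hashes = num_hashes
                self.bits = bytearray(size_bits // 8)

            def _positions(self, item: bytes):
                # Derive several bit positions per item from salted SHA-256 digests.
                for i in range(self.num_hashes):
                    digest = hashlib.sha256(i.to_bytes(2, "big") + item).digest()
                    yield int.from_bytes(digest[:8], "big") % self.size

            def add(self, item: bytes):
                for pos in self._positions(item):
                    self.bits[pos // 8] |= 1 << (pos % 8)

            def might_contain(self, item: bytes) -> bool:
                return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

        # Store attack signatures compactly, then test incoming payload fragments against them.
        bf = BloomFilter()
        bf.add(b"malicious-pattern-01")
        print(bf.might_contain(b"malicious-pattern-01"))   # True
        print(bf.might_contain(b"benign-traffic"))         # almost certainly False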

  9. Remote network control plasma diagnostic system for Tokamak T-10

    NASA Astrophysics Data System (ADS)

    Troynov, V. I.; Zimin, A. M.; Krupin, V. A.; Notkin, G. E.; Nurgaliev, M. R.

    2016-09-01

    The parameters of molecular plasma in a closed magnetic trap are studied in this paper. Using the system of molecular diagnostics designed by the authors on the «Tokamak T-10» facility, the radiation of hydrogen isotopes at the plasma edge is investigated. The scheme for registering optical radiation within the visible spectrum is described. For visualization, identification and processing of the registered molecular spectra, new software has been developed using the MATLAB environment. The software also includes an electronic atlas of electronic-vibrational-rotational transitions for molecules of protium and deuterium. To register radiation from the limiter cross-section, a network control system was designed using Internet/Intranet technologies. A diagram of the remote control system and the associated methods are given. Examples of web interfaces for composing equipment control scenarios and viewing results are provided. After a test run on the Intranet, the remote diagnostic system will be accessible through the Internet.

  10. Network of fully integrated multispecialty hospital imaging systems

    NASA Astrophysics Data System (ADS)

    Dayhoff, Ruth E.; Kuzmak, Peter M.

    1994-05-01

    The Department of Veterans Affairs (VA) DHCP Imaging System records clinically significant diagnostic images selected by medical specialists in a variety of departments, including radiology, cardiology, gastroenterology, pathology, dermatology, hematology, surgery, podiatry, dental clinic, and emergency room. These images are displayed on workstations located throughout a medical center. All images are managed by the VA's hospital information system, allowing integrated displays of text and image data across medical specialties. Clinicians can view screens of 'thumbnail' images for all studies or procedures performed on a selected patient. Two VA medical centers currently have DHCP Imaging Systems installed, and others are planned. All VA medical centers and other VA facilities are connected by a wide area packet-switched network. The VA's electronic mail software has been modified to allow inclusion of binary data such as images in addition to the traditional text data. Testing of this multimedia electronic mail system is underway for medical teleconsultation.

  11. Neural Network Control of a Magnetically Suspended Rotor System

    NASA Technical Reports Server (NTRS)

    Choi, Benjamin B.

    1998-01-01

    Magnetic bearings offer significant advantages because they do not come into contact with other parts during operation, which can reduce maintenance. Higher speeds, no friction, no lubrication, weight reduction, precise position control, and active damping make them far superior to conventional contact bearings. However, there are technical barriers that limit the application of this technology in industry. One of them is the need for a nonlinear controller that can overcome the system nonlinearity and uncertainty inherent in magnetic bearings. At the NASA Lewis Research Center, a neural network was selected as a nonlinear controller because it generates a neural model without any detailed information regarding the internal working of the magnetic bearing system. It can be used even for systems that are too complex for an accurate system model to be derived. A feed-forward architecture with a back-propagation learning algorithm was selected because of its proven performance, accuracy, and relatively easy implementation.

  12. Multisensor Network System for Wildfire Detection Using Infrared Image Processing

    PubMed Central

    Bosch, I.; Serrano, A.; Vergara, L.

    2013-01-01

    This paper presents the next step in the evolution of multi-sensor wireless network systems in the early automatic detection of forest fires. This network allows remote monitoring of each of the locations as well as communication between each of the sensors and with the control stations. The result is an increased coverage area, with quicker and safer responses. To determine the presence of a forest wildfire, the system employs decision fusion in thermal imaging, which can exploit various expected characteristics of a real fire, including short-term persistence and long-term increases over time. Results from testing in the laboratory and in a real environment are presented to authenticate and verify the accuracy of the operation of the proposed system. The system performance is gauged by the number of alarms and the time to the first alarm (corresponding to a real fire), for different probability of false alarm (PFA). The necessity of including decision fusion is thereby demonstrated. PMID:23843734
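
    A minimal sketch of the temporal side of such decision fusion - requiring both short-term persistence above a threshold and a long-term upward trend in a pixel's thermal signal before raising an alarm - is given below; the thresholds, window lengths and synthetic data are made up and are not the paper's parameters.

        import numpy as np

        def fire_alarm(intensity, hot_thresh=40.0, persist_frames=5, trend_window=30, trend_min=2.0):
            """Fuse two per-pixel decisions on a thermal time series `intensity` (1-D array)."""
            recent = intensity[-persist_frames:]
            persistent_hot = np.all(recent > hot_thresh)              # short-term persistence
            window = intensity[-trend_window:]
            slope = np.polyfit(np.arange(len(window)), window, 1)[0]  # long-term increase
            increasing = slope * trend_window > trend_min
            return bool(persistent_hot and increasing)

        # Synthetic pixel history: slowly rising temperature with noise
        rng = np.random.default_rng(2)
        series = 30 + np.linspace(0, 15, 60) + rng.normal(0, 0.5, 60)
        print(fire_alarm(series))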

  13. Nonlinear dynamical systems and control for large-scale, hybrid, and network systems

    NASA Astrophysics Data System (ADS)

    Hui, Qing

    In this dissertation, we present several main research thrusts involving thermodynamic stabilization via energy dissipating hybrid controllers and nonlinear control of network systems. Specifically, a novel class of fixed-order, energy-based hybrid controllers is presented as a means for achieving enhanced energy dissipation in Euler-Lagrange, lossless, and dissipative dynamical systems. These dynamic controllers combine a logical switching architecture with continuous dynamics to guarantee that the system plant energy is strictly decreasing across switching. In addition, we construct hybrid dynamic controllers that guarantee the closed-loop system is consistent with basic thermodynamic principles. In particular, the existence of an entropy function for the closed-loop system is established that satisfies a hybrid Clausius-type inequality. Special cases of energy-based hybrid controllers involving state-dependent switching are described, and the framework is applied to aerospace system models. The overall framework demonstrates that energy-based hybrid resetting controllers provide an extremely efficient mechanism for dissipating energy in nonlinear dynamical systems. Next, we present finite-time coordination controllers for multiagent network systems. Recent technological advances in communications and computation have spurred a broad interest in autonomous, adaptable vehicle formations. Distributed decision-making for coordination of networks of dynamic agents addresses a broad area of applications including cooperative control of unmanned air vehicles, microsatellite clusters, mobile robotics, and congestion control in communication networks. In this dissertation we focus on finite-time consensus protocols for networks of dynamic agents with undirected information flow. The proposed controller architectures are predicated on the recently developed notion of system thermodynamics resulting in thermodynamically consistent continuous controller architectures
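
    A common form of finite-time consensus protocol for agents with undirected information flow drives each agent by a fractional power of its neighbours' state differences; the sketch below simulates such a protocol on a toy four-agent network, and the exact controllers developed in the dissertation may differ.

        import numpy as np

        # Undirected neighbor sets for 4 agents (toy topology)
        neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
        x = np.array([4.0, -1.0, 0.5, 2.0])     # initial agent states
        alpha, dt = 0.5, 0.01                    # fractional exponent (0 < alpha < 1), time step

        for _ in range(2000):
            dx = np.zeros_like(x)
            for i, nbrs in neighbors.items():
                for j in nbrs:
                    diff = x[j] - x[i]
                    dx[i] += np.sign(diff) * abs(diff) ** alpha   # finite-time consensus term
            x += dt * dx

        print(x)   # the states converge to a common value in finite time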

  14. Implementation and performance evaluation of mobile ad hoc network for Emergency Telemedicine System in disaster areas.

    PubMed

    Kim, J C; Kim, D Y; Jung, S M; Lee, M H; Kim, K S; Lee, C K; Nah, J Y; Lee, S H; Kim, J H; Choi, W J; Yoo, S K

    2009-01-01

    We have previously developed an Emergency Telemedicine System (ETS), a robust system that uses heterogeneous networks. In disaster areas, however, ETS cannot be used if the primary network channel is disabled due to damage to the network infrastructure. We therefore designed network management software for a disaster communication network that combines a Mobile Ad hoc Network (MANET) and Wireless LAN (WLAN). This software maintains routes to a Backbone Gateway Node in dynamic network topologies. In this paper, we introduce the proposed disaster communication network with its management software and evaluate its performance using ETS between the Medical Center and simulated disaster areas. We also present the results of a network performance analysis, which identifies the possibility of actual telemedicine service in disaster areas via MANET and mobile networks (e.g. HSDPA, WiBro).

  15. Coherent Frequency Reference System for the NASA Deep Space Network

    NASA Technical Reports Server (NTRS)

    Tucker, Blake C.; Lauf, John E.; Hamell, Robert L.; Gonzaler, Jorge, Jr.; Diener, William A.; Tjoelker, Robert L.

    2010-01-01

    The NASA Deep Space Network (DSN) requires state-of-the-art frequency references that are derived and distributed from very stable atomic frequency standards. A new Frequency Reference System (FRS) and Frequency Reference Distribution System (FRD) have been developed, which together replace the previous Coherent Reference Generator System (CRG). The FRS and FRD each provide new capabilities that significantly improve operability and reliability. The FRS allows for selection and switching between frequency standards, a flywheel capability (to avoid interruptions when switching frequency standards), and a frequency synthesis system (to generate standardized 5-, 10-, and 100-MHz reference signals). The FRS is powered by redundant, specially filtered, and sustainable power systems and includes a monitor and control capability for station operations to interact and control the frequency-standard selection process. The FRD receives the standardized 5-, 10-, and 100-MHz reference signals and distributes signals to distribution amplifiers in a fan out fashion to dozens of DSN users that require the highly stable reference signals. The FRD is also powered by redundant, specially filtered, and sustainable power systems. The new DSN Frequency Distribution System, which consists of the FRS and FRD systems described here, is central to all operational activities of the NASA DSN. The frequency generation and distribution system provides ultra-stable, coherent, and very low phase-noise references at 5, 10, and 100 MHz to between 60 and 100 separate users at each Deep Space Communications Complex.

  16. Early distinction system of mine fire in underground by using a neural-network system

    SciTech Connect

    Ohga, Kotaro; Higuchi, Kiyoshi

    1996-12-31

    In our laboratory, a new detection system using smell detectors was developed to detect the spontaneous combustion of coal and the combustion of other materials used underground. The results of experiments clearly show that the combustion of materials can be detected earlier by this detection system than by conventional detectors for gas and smoke, and that there were significant differences between the output data of each smell detector for coal, rubber, oil and wood. In order to discern the source of combustion gases, we have been developing a distinction system that uses a neural-network system. It has shown successful results in laboratory tests. This paper describes our detection system using smell detectors and our distinction system based on a neural network, and presents results of experiments using both systems.

  17. The Construction of Higher Education Entrepreneur Services Network System a Research Based on Ecological Systems Theory

    NASA Astrophysics Data System (ADS)

    Xue, Jingxin

    The article aims to completely, systematically and objectively analyze the current situation of entrepreneurship education in China using Ecological Systems Theory. From this perspective, the author discusses the structure, function and basic features of a higher education entrepreneur services network system, and puts forward the view that an entrepreneurship organization in a higher education institution should not be limited to a single platform. Different functional supporting platforms should be combined closely through a composite functional organization to form an integrated network system, in which each unit promotes the development of the others.

  18. Towards a new Mercator Observatory Control System

    NASA Astrophysics Data System (ADS)

    Pessemier, W.; Raskin, G.; Prins, S.; Saey, P.; Merges, F.; Padilla, J. P.; Van Winckel, H.; Waelkens, C.

    2010-07-01

    A new control system is currently being developed for the 1.2-meter Mercator Telescope at the Roque de Los Muchachos Observatory (La Palma, Spain). Formerly based on transputers, the new Mercator Observatory Control System (MOCS) consists of a small network of Linux computers complemented by a central industrial controller and an industrial real-time data communication network. Python is chosen as the high-level language to develop flexible yet powerful supervisory control and data acquisition (SCADA) software for the Linux computers. Specialized applications such as detector control, auto-guiding and middleware management are also integrated in the same Python software package. The industrial controller, on the other hand, is connected to the majority of the field devices and is targeted to run various control loops, some of which are real-time critical. Independently of the Linux distributed control system (DCS), this controller makes sure that high priority tasks such as the telescope motion, mirror support and hydrostatic bearing control are carried out in a reliable and safe way. A comparison is made between different controller technologies including a LabVIEW embedded system, a PROFINET Programmable Logic Controller (PLC) and motion controller, and an EtherCAT embedded PC (soft-PLC). As the latter is chosen as the primary platform for the lower level control, a substantial part of the software is being ported to the IEC 61131-3 standard programming languages. Additionally, obsolete hardware is gradually being replaced by standard industrial alternatives with fast EtherCAT communication. The use of Python as a scripting language allows a smooth migration to the final MOCS: finished parts of the new control system can readily be commissioned to replace the corresponding transputer units of the old control system with minimal downtime. In this contribution, we give an overview of the systems design, implementation details and the current status of the project.
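
    As a generic illustration of the supervisory side of such a design - a Python process on a Linux computer exchanging status messages with a controller over the network - the sketch below runs a stand-in "controller" thread locally so it is self-contained; it is not MOCS code and the message format is invented.

        import json
        import socket
        import threading
        import time

        # Stand-in "controller": in the real system this role is played by the
        # industrial (soft-)PLC publishing device status; here a local thread
        # fills in so the sketch runs on its own.
        def dummy_controller(server_sock):
            conn, _ = server_sock.accept()
            with conn:
                for pos in (10.0, 10.5, 11.0):
                    conn.sendall((json.dumps({"azimuth": pos}) + "\n").encode())
                    time.sleep(0.1)

        server = socket.socket()
        server.bind(("127.0.0.1", 0))
        server.listen(1)
        threading.Thread(target=dummy_controller, args=(server,), daemon=True).start()

        # Supervisory (SCADA-style) side: poll status messages and act on them.
        with socket.create_connection(server.getsockname()) as link:
            buf = b""
            while True:
                chunk = link.recv(1024)
                if not chunk:
                    break
                buf += chunk
                while b"\n" in buf:
                    line, buf = buf.split(b"\n", 1)
                    status = json.loads(line)
                    print("telescope azimuth:", status["azimuth"])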

  19. Network Flow Simulation of Fluid Transients in Rocket Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Bandyopadhyay, Alak; Hamill, Brian; Ramachandran, Narayanan; Majumdar, Alok

    2011-01-01

    Fluid transients, also known as water hammer, can have a significant impact on the design and operation of both spacecraft and launch vehicle propulsion systems. These transients often occur at system activation and shutdown. The pressure rise due to sudden opening and closing of valves in propulsion feed lines can cause serious damage during activation and shutdown of propulsion systems. During activation (valve opening) and shutdown (valve closing), pressure surges must be predicted accurately to ensure the structural integrity of the propulsion system fluid network. In the current work, a network flow simulation program (the Generalized Fluid System Simulation Program) based on the finite volume method has been used to predict the pressure surges in the feed line due to both valve closing and valve opening, using two separate geometrical configurations. The valve-opening pressure surge results are compared with experimental data available in the literature, and the numerical results agree within reasonable accuracy (< 5%) for a wide range of inlet-to-initial pressure ratios. A Fast Fourier Transform is performed on the pressure oscillations to predict the various modal frequencies of the pressure wave. For the shutdown problem, i.e. the valve-closing problem, the simulation results are compared with the results of the Method of Characteristics. Most rocket engines experience a longitudinal acceleration, known as "pogo", during the later stage of engine burn. In the shutdown example problem, an accumulator has been used in the feed system to demonstrate the "pogo" mitigation effects in the propellant feed system. The simulation results using GFSSP compared very well with the results of the Method of Characteristics.
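
    For orientation on the magnitude of such surges, the classical Joukowsky estimate of the pressure rise from a sudden valve closure is easy to compute; the fluid properties below are illustrative, and this simple formula is not the GFSSP finite-volume model used in the paper.

        # Joukowsky estimate of the water-hammer surge for sudden valve closure:
        #   delta_p = rho * a * delta_v
        # (rho: fluid density, a: acoustic wave speed in the line, delta_v: velocity change)
        rho = 1000.0       # kg/m^3, water (illustrative)
        a = 1200.0         # m/s, typical wave speed in a liquid-filled line (illustrative)
        delta_v = 3.0      # m/s, flow velocity stopped by the valve (illustrative)

        delta_p = rho * a * delta_v
        print(f"surge pressure rise ~ {delta_p / 1e5:.1f} bar")   # ~36 bar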

  20. A distributed name resolution system in information centric networks

    NASA Astrophysics Data System (ADS)

    Elbreiki, Walid; Arlimatti, Shivaleela; Hassan, Suhaidi; Habbal, Adib; Elshaikh, Mohamed

    2016-08-01

    Information Centric Networking (ICN) is a new paradigm that envisages shifting the Internet away from its existing point-to-point architecture to a data-centric one, where communication is based on named information rather than on the hosts that store it. Name resolution is the center of attraction in ICN, where Named Data Objects (NDOs) are used for identifying information and guiding routing or forwarding inside ICN. Recently, several research efforts have used a distributed NRS to overcome the problems of interest flooding, congestion and overloading. Yet these approaches distribute the NRS randomly, and how to distribute the NRS remains an important and challenging problem. In this work, we address the problem of NRS distribution by proposing a new mechanism called the Distributed Name Resolution System (DNRS), which considers the time at which NDOs are published in the NRS. This mechanism partitions the network to distribute the workload among NRSs and increases storage capacity. In addition, partitioning the network increases the flexibility and scalability of the NRS. We evaluate the effectiveness of our proposed mechanism, which achieves lower end-to-end delay and higher average throughput compared to random distribution of the NRS, without disturbing the underlying routing or forwarding strategies.
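
    A minimal sketch of the partitioning idea - assigning NDOs to NRS nodes by buckets of their publish time so that registration load is spread across resolvers - is shown below; the bucket size, node count and naming are invented, and the paper's exact mechanism may differ.

        from datetime import datetime, timedelta

        NUM_NRS_NODES = 4
        BUCKET = timedelta(minutes=10)          # publish-time window mapped to one resolver

        def nrs_for(publish_time: datetime, epoch: datetime) -> int:
            """Map an NDO to an NRS node based on when it was published."""
            bucket_index = int((publish_time - epoch) / BUCKET)
            return bucket_index % NUM_NRS_NODES

        epoch = datetime(2016, 1, 1)
        ndos = [("ndo://video/seg1", datetime(2016, 1, 1, 0, 3)),
                ("ndo://video/seg2", datetime(2016, 1, 1, 0, 14)),
                ("ndo://sensor/42",  datetime(2016, 1, 1, 1, 27))]

        for name, t in ndos:
            print(name, "-> NRS", nrs_for(t, epoch))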