Sample records for national computer network

  1. Documentary of MFENET, a national computer network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shuttleworth, B.O.

    1977-06-01

    The national Magnetic Fusion Energy Computer Network (MFENET) is a newly operational star network of geographically separated heterogeneous hosts and a communications subnetwork of PDP-11 processors. Host processors interfaced to the subnetwork currently include a CDC 7600 at the Central Computer Center (CCC) and several DECsystem-10's at User Service Centers (USC's). The network was funded by a U.S. government agency (ERDA) to provide in an economical manner the needed computational resources to magnetic confinement fusion researchers. Phase I operation of MFENET distributed the processing power of the CDC 7600 among the USC's through the provision of file transport between any two hosts and remote job entry to the 7600. Extending the capabilities of Phase I, MFENET Phase II provided interactive terminal access to the CDC 7600 from the USC's. A file management system is maintained at the CCC for all network users. The history and development of MFENET are discussed, with emphasis on the protocols used to link the host computers and the USC software. Comparisons are made of MFENET versus ARPANET (Advanced Research Projects Agency Computer Network) and DECNET (Digital Distributed Network Architecture). DECNET and MFENET host-to-host, host-to-CCP, and link protocols are discussed in detail. The USC--CCP interface is described briefly. 43 figures, 2 tables.

  2. SpecialNet. A National Computer-Based Communications Network.

    ERIC Educational Resources Information Center

    Morin, Alfred J.

    1986-01-01

    "SpecialNet," a computer-based communications network for educators at all administrative levels, has been established and is managed by National Systems Management, Inc. Users can send and receive electronic mail, share information on electronic bulletin boards, participate in electronic conferences, and send reports and other documents to each…

  3. National research and education network

    NASA Technical Reports Server (NTRS)

    Villasenor, Tony

    1991-01-01

    Some goals of this network are as follows: Extend U.S. technological leadership in high performance computing and computer communications; Provide wide dissemination and application of the technologies both to speed the pace of innovation and to serve the national economy, national security, education, and the global environment; and Spur gains in U.S. productivity and industrial competitiveness by making high performance computing and networking technologies an integral part of the design and production process. Strategies for achieving these goals are as follows: Support solutions to important scientific and technical challenges through a vigorous R and D effort; Reduce the uncertainties to industry for R and D and use of this technology through increased cooperation between government, industry, and universities and by the continued use of government and government-funded facilities as a prototype user for early commercial HPCC products; and Support underlying research, network, and computational infrastructures on which U.S. high performance computing technology is based.

  4. Using satellite communications for a mobile computer network

    NASA Technical Reports Server (NTRS)

    Wyman, Douglas J.

    1993-01-01

    The topics discussed include the following: patrol car automation, mobile computer network, network requirements, network design overview, MCN mobile network software, MCN hub operation, mobile satellite software, hub satellite software, the benefits of patrol car automation, the benefits of satellite mobile computing, and national law enforcement satellite.

  5. LINCS: Livermore's network architecture. [Octopus computing network]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fletcher, J.G.

    1982-01-01

    Octopus, a local computing network that has been evolving at the Lawrence Livermore National Laboratory for over fifteen years, is currently undergoing a major revision. The primary purpose of the revision is to consolidate and redefine the variety of conventions and formats, which have grown up over the years, into a single standard family of protocols, the Livermore Interactive Network Communication Standard (LINCS). This standard treats the entire network as a single distributed operating system such that access to a computing resource is obtained in a single way, whether that resource is local (on the same computer as the accessing process) or remote (on another computer). LINCS encompasses not only communication but also such issues as the relationship of customer to server processes and the structure, naming, and protection of resources. The discussion includes: an overview of the Livermore user community and computing hardware, the functions and structure of each of the seven layers of LINCS protocol, the reasons why we have designed our own protocols and why we are dissatisfied with the directions that current protocol standards are taking.

  6. HNET - A National Computerized Health Network

    PubMed Central

    Casey, Mark; Hamilton, Richard

    1988-01-01

    The HNET system demonstrated conceptually and technically a national text (and limited bit-mapped graphics) computer network for use between innovative members of the health care industry. The HNET configuration of a leased high speed national packet switching network connecting any number of mainframe, mini, and micro computers was unique in its relatively low capital costs and freedom from obsolescence. With multiple simultaneous conferences, databases, bulletin boards, calendars, and advanced electronic mail and surveys, it is marketable to innovative hospitals, clinics, physicians, health care associations and societies, nurses, multisite research projects, libraries, etc. Electronic publishing and education capabilities along with integrated voice and video transmission are identified as future enhancements.

  7. National High-Performance Computing and Networking Act. Report To Accompany S. 343, Senate, 102d Congress, 1st Session.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. Senate Committee on Energy and Natural Resources.

    The purpose of the bill (S. 343), as reported by the Senate Committee on Energy and Natural Resources, is to establish a federal commitment to the advancement of high-performance computing, improve interagency planning and coordination of federal high-performance computing and networking activities, authorize a national high-speed computer…

  8. Computer network access to scientific information systems for minority universities

    NASA Astrophysics Data System (ADS)

    Thomas, Valerie L.; Wakim, Nagi T.

    1993-08-01

    The evolution of computer networking technology has led to the establishment of a massive networking infrastructure which interconnects various types of computing resources at many government, academic, and corporate institutions. A large segment of this infrastructure has been developed to facilitate information exchange and resource sharing within the scientific community. The National Aeronautics and Space Administration (NASA) supports both the development and the application of computer networks which provide its community with access to many valuable multi-disciplinary scientific information systems and on-line databases. Recognizing the need to extend the benefits of this advanced networking technology to the under-represented community, the National Space Science Data Center (NSSDC) in the Space Data and Computing Division at the Goddard Space Flight Center has developed the Minority University-Space Interdisciplinary Network (MU-SPIN) Program: a major networking and education initiative for Historically Black Colleges and Universities (HBCUs) and Minority Universities (MUs). In this paper, we will briefly explain the various components of the MU-SPIN Program while highlighting how, by providing access to scientific information systems and on-line data, it promotes a higher level of collaboration among faculty, students, and NASA scientists.

  9. The Role of Computer Networks in Aerospace Engineering.

    ERIC Educational Resources Information Center

    Bishop, Ann Peterson

    1994-01-01

    Presents selected results from an empirical investigation into the use of computer networks in aerospace engineering based on data from a national mail survey. The need for user-based studies of electronic networking is discussed, and a copy of the questionnaire used in the survey is appended. (Contains 46 references.) (LRW)

  10. Educational Technology Network: a computer conferencing system dedicated to applications of computers in radiology practice, research, and education.

    PubMed

    D'Alessandro, M P; Ackerman, M J; Sparks, S M

    1993-11-01

    Educational Technology Network (ET Net) is a free, easy to use, on-line computer conferencing system organized and funded by the National Library of Medicine that is accessible via the SprintNet (SprintNet, Reston, VA) and Internet (Merit, Ann Arbor, MI) computer networks. It is dedicated to helping bring together, in a single continuously running electronic forum, developers and users of computer applications in the health sciences, including radiology. ET Net uses the Caucus computer conferencing software (Camber-Roth, Troy, NY) running on a microcomputer. This microcomputer is located in the National Library of Medicine's Lister Hill National Center for Biomedical Communications and is directly connected to the SprintNet and the Internet networks. The advanced computer conferencing software of ET Net allows individuals who are separated in space and time to unite electronically to participate, at any time, in interactive discussions on applications of computers in radiology. A computer conferencing system such as ET Net allows radiologists to maintain contact with colleagues on a regular basis when they are not physically together. Topics of discussion on ET Net encompass all applications of computers in radiological practice, research, and education. ET Net has been in successful operation for 3 years and has a promising future aiding radiologists in the exchange of information pertaining to applications of computers in radiology.

  11. Distributed Problem Solving: Adaptive Networks with a Computer Intermediary Resource. Intelligent Executive Computer Communication

    DTIC Science & Technology

    1991-06-01

    Proceedings of The National Conference on Artificial Intelligence, pages 181-184, The American Association for Artificial Intelligence, Pittsburgh. Interim Report: Distributed Problem Solving: Adaptive Networks With a Computer Intermediary Resource: Intelligent Executive Computer Communication, by John Lyman and Carla J. Conaway, University of California at Los Angeles.

  12. Advanced networks and computing in healthcare

    PubMed Central

    Ackerman, Michael

    2011-01-01

    As computing and network capabilities continue to rise, it becomes increasingly important to understand the varied applications for using them to provide healthcare. The objective of this review is to identify key characteristics and attributes of healthcare applications involving the use of advanced computing and communication technologies, drawing upon 45 research and development projects in telemedicine and other aspects of healthcare funded by the National Library of Medicine over the past 12 years. Only projects publishing in the professional literature were included in the review. Four projects did not publish beyond their final reports. In addition, the authors drew on their first-hand experience as project officers, reviewers and monitors of the work. Major themes in the corpus of work were identified, characterizing key attributes of advanced computing and network applications in healthcare. Advanced computing and network applications are relevant to a range of healthcare settings and specialties, but they are most appropriate for solving a narrower range of problems in each. Healthcare projects undertaken primarily to explore potential have also demonstrated effectiveness and depend on the quality of network service as much as bandwidth. Many applications are enabling, making it possible to provide service or conduct research that previously was not possible or to achieve outcomes in addition to those for which projects were undertaken. Most notable are advances in imaging and visualization, collaboration and sense of presence, and mobility in communication and information-resource use. PMID:21486877

  13. Methods for computing water-quality loads at sites in the U.S. Geological Survey National Water Quality Network

    USGS Publications Warehouse

    Lee, Casey J.; Murphy, Jennifer C.; Crawford, Charles G.; Deacon, Jeffrey R.

    2017-10-24

    The U.S. Geological Survey publishes information on concentrations and loads of water-quality constituents at 111 sites across the United States as part of the U.S. Geological Survey National Water Quality Network (NWQN). This report details historical and updated methods for computing water-quality loads at NWQN sites. The primary updates to historical load estimation methods include (1) an adaptation to methods for computing loads to the Gulf of Mexico; (2) the inclusion of loads computed using the Weighted Regressions on Time, Discharge, and Season (WRTDS) method; and (3) the inclusion of loads computed using continuous water-quality data. Loads computed using WRTDS and continuous water-quality data are provided along with those computed using historical methods. Various aspects of method updates are evaluated in this report to help users of water-quality loading data determine which estimation methods best suit their particular application.
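    Whatever the estimation method, the underlying quantity is an instantaneous load: constituent concentration times streamflow, scaled by a unit conversion. A minimal sketch of that core computation (the function name and daily aggregation are illustrative assumptions, not the NWQN implementation, which additionally models concentration as a function of time, discharge, and season):

    ```python
    def daily_load_kg(concentration_mg_per_l: float, discharge_m3_per_s: float) -> float:
        """Daily constituent load in kg.

        1 mg/L equals 1 g/m^3, so concentration * discharge gives grams per second;
        multiply by 86400 s/day and divide by 1000 g/kg to get kg/day.
        """
        grams_per_second = concentration_mg_per_l * discharge_m3_per_s
        return grams_per_second * 86400 / 1000.0

    # An annual load is then the sum of daily loads over the year:
    # annual_kg = sum(daily_load_kg(c, q) for c, q in zip(daily_conc, daily_flow))
    ```

    Regression methods such as WRTDS exist because concentration is rarely measured every day; they estimate the concentration term before this conversion is applied.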

  14. Using high-performance networks to enable computational aerosciences applications

    NASA Technical Reports Server (NTRS)

    Johnson, Marjory J.

    1992-01-01

    One component of the U.S. Federal High Performance Computing and Communications Program (HPCCP) is the establishment of a gigabit network to provide a communications infrastructure for researchers across the nation. This gigabit network will provide new services and capabilities, in addition to increased bandwidth, to enable future applications. An understanding of these applications is necessary to guide the development of the gigabit network and other high-performance networks of the future. In this paper we focus on computational aerosciences applications run remotely using the Numerical Aerodynamic Simulation (NAS) facility located at NASA Ames Research Center. We characterize these applications in terms of network-related parameters and relate user experiences that reveal limitations imposed by the current wide-area networking infrastructure. Then we investigate how the development of a nationwide gigabit network would enable users of the NAS facility to work in new, more productive ways.

  15. Hyperswitch Communication Network Computer

    NASA Technical Reports Server (NTRS)

    Peterson, John C.; Chow, Edward T.; Priel, Moshe; Upchurch, Edwin T.

    1993-01-01

    Hyperswitch Communications Network (HCN) computer is prototype multiple-processor computer being developed. Incorporates improved version of hyperswitch communication network described in "Hyperswitch Network For Hypercube Computer" (NPO-16905). Designed to support high-level software and expansion of itself. HCN computer is message-passing, multiple-instruction/multiple-data computer offering significant advantages over older single-processor and bus-based multiple-processor computers, with respect to price/performance ratio, reliability, availability, and manufacturing. Design of HCN operating-system software provides flexible computing environment accommodating both parallel and distributed processing. Also achieves balance among the following competing factors: performance in processing and communications, ease of use, and tolerance of (and recovery from) faults.

  16. Classroom Computer Network.

    ERIC Educational Resources Information Center

    Lent, John

    1984-01-01

    This article describes a computer network system that connects several microcomputers to a single disk drive and one copy of software. Many schools are switching to networks as a cheaper and more efficient means of computer instruction. Teachers may be faced with copyright problems when reproducing programs. (DF)

  17. Computer network defense system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Urias, Vincent; Stout, William M. S.; Loverro, Caleb

    A method and apparatus for protecting virtual machines. A computer system creates a copy of a group of the virtual machines in an operating network in a deception network to form a group of cloned virtual machines in the deception network when the group of the virtual machines is accessed by an adversary. The computer system creates an emulation of components from the operating network in the deception network. The components are accessible by the group of the cloned virtual machines as if the group of the cloned virtual machines was in the operating network. The computer system moves network connections for the group of the virtual machines in the operating network used by the adversary from the group of the virtual machines in the operating network to the group of the cloned virtual machines, thereby protecting the group of the virtual machines from actions performed by the adversary.
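    The clone-emulate-migrate sequence described in this record can be sketched roughly as follows. This is an illustrative sketch only; the class and function names are assumptions for exposition, not identifiers from the patented system:

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Network:
        name: str
        vms: dict = field(default_factory=dict)          # VM name -> VM state
        connections: dict = field(default_factory=dict)  # VM name -> list of peers

    def deflect(operating: Network, deception: Network, accessed_vms: list) -> None:
        """On adversary access: clone VMs, emulate their peers, move connections."""
        for name in accessed_vms:
            # 1. Clone each accessed VM into the deception network.
            deception.vms[name] = dict(operating.vms[name])
            # 2. Emulate the components the clone expects to reach.
            for peer in operating.connections.get(name, []):
                deception.vms.setdefault(peer, {"emulated": True})
            # 3. Move the adversary's network connections onto the clone.
            deception.connections[name] = operating.connections.pop(name, [])
    ```

    After `deflect` runs, the adversary's traffic terminates at the clones while the operating network's original virtual machines no longer carry those connections.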

  18. Report on Computing and Networking in the Space Science Laboratory by the SSL Computer Committee

    NASA Technical Reports Server (NTRS)

    Gallagher, D. L. (Editor)

    1993-01-01

    The Space Science Laboratory (SSL) at Marshall Space Flight Center is a multiprogram facility. Scientific research is conducted in four discipline areas: earth science and applications, solar-terrestrial physics, astrophysics, and microgravity science and applications. Representatives from each of these discipline areas participate in a Laboratory computer requirements committee, which developed this document. The purpose is to establish and discuss Laboratory objectives for computing and networking in support of science. The purpose is also to lay the foundation for a collective, multiprogram approach to providing these services. Special recognition is given to the importance of the national and international efforts of our research communities toward the development of interoperable, network-based computer applications.

  19. Terminal-oriented computer-communication networks.

    NASA Technical Reports Server (NTRS)

    Schwartz, M.; Boorstyn, R. R.; Pickholtz, R. L.

    1972-01-01

    Four examples of currently operating computer-communication networks are described in this tutorial paper. They include the TYMNET network, the GE Information Services network, the NASDAQ over-the-counter stock-quotation system, and the Computer Sciences Infonet. These networks all use programmable concentrators for combining a multiplicity of terminals. Included in the discussion for each network is a description of the overall network structure, the handling and transmission of messages, communication requirements, routing and reliability considerations where applicable, operating data and design specifications where available, and unique design features in the area of computer communications.

  20. Network Coding for Function Computation

    ERIC Educational Resources Information Center

    Appuswamy, Rathinakumar

    2011-01-01

    In this dissertation, the following "network computing problem" is considered. Source nodes in a directed acyclic network generate independent messages and a single receiver node computes a target function f of the messages. The objective is to maximize the average number of times f can be computed per network usage, i.e., the "computing…

  1. The Merit Computer Network

    ERIC Educational Resources Information Center

    Aupperle, Eric M.; Davis, Donna L.

    1978-01-01

    The successful Merit Computer Network is examined in terms of both technology and operational management. The network is fully operational and has a significant and rapidly increasing usage, with three major institutions currently sharing computer resources. (Author/CMV)

  2. Computer Networks and Networking: A Primer.

    ERIC Educational Resources Information Center

    Collins, Mauri P.

    1993-01-01

    Provides a basic introduction to computer networks and networking terminology. Topics addressed include modems; the Internet; TCP/IP (Transmission Control Protocol/Internet Protocol); transmission lines; Internet Protocol numbers; network traffic; Fidonet; file transfer protocol (FTP); TELNET; electronic mail; discussion groups; LISTSERV; USENET;…

  3. NASA/DOD Aerospace Knowledge Diffusion Research Project. Paper 47: The value of computer networks in aerospace

    NASA Technical Reports Server (NTRS)

    Bishop, Ann Peterson; Pinelli, Thomas E.

    1995-01-01

    This paper presents data on the value of computer networks that were obtained from a national survey of 2000 aerospace engineers that was conducted in 1993. Survey respondents reported the extent to which they used computer networks in their work and communication and offered their assessments of the value of various network types and applications. They also provided information about the positive impacts of networks on their work, which presents another perspective on value. Finally, aerospace engineers' recommendations on network implementation present suggestions for increasing the value of computer networks within aerospace organizations.

  4. High End Computer Network Testbedding at NASA Goddard Space Flight Center

    NASA Technical Reports Server (NTRS)

    Gary, James Patrick

    1998-01-01

    The Earth & Space Data Computing (ESDC) Division at the Goddard Space Flight Center is involved in developing and demonstrating various high end computer networking capabilities. The ESDC has several high end supercomputers. These are used to (1) run computer simulations of the climate system; (2) support the Earth and Space Sciences (ESS) project; and (3) support Grand Challenge (GC) science, which is aimed at understanding the turbulent convection and dynamos in stars. GC research occurs at many sites throughout the country, and this research is enabled, in part, by multiple high performance network interconnections. The application drivers for high end computer networking use distributed supercomputing to support virtual reality applications, such as TerraVision (a three-dimensional browser of remotely accessed data) and Cave Automatic Virtual Environments (CAVE). Workstations can access and display data from multiple CAVEs with video servers, which allows for group/project collaborations using a combination of video, data, voice, and shared whiteboarding. The ESDC is also developing and demonstrating a high degree of interoperability between satellite and terrestrial-based networks. To this end, the ESDC is conducting research and evaluations of new computer networking protocols and related technologies which improve the interoperability of satellite and terrestrial networks. The ESDC is also involved in the Security Proof of Concept Keystone (SPOCK) program sponsored by the National Security Agency (NSA). The SPOCK activity provides a forum for government users and security technology providers to share information on security requirements, emerging technologies, and new product developments.
Also, the ESDC is involved in the Trans-Pacific Digital Library Experiment, which aims to demonstrate and evaluate the use of high performance satellite communications and advanced data communications protocols to enable interactive digital library data

  5. National Geographic Society Kids Network: Report on 1994 teacher participants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    In 1994, National Geographic Society Kids Network, a computer/telecommunications-based science curriculum, was presented to elementary and middle school teachers through summer programs sponsored by NGS and US DOE. The network program assists teachers in understanding the process of doing science; understanding the role of computers and telecommunications in the study of science, math, and engineering; and utilizing computers and telecommunications appropriately in the classroom. The program enables teachers to integrate science, math, and technology with other subjects with the ultimate goal of encouraging students of all abilities to pursue careers in science/math/engineering. This report assesses the impact of the network program on participating teachers.

  6. Software For Monitoring A Computer Network

    NASA Technical Reports Server (NTRS)

    Lee, Young H.

    1992-01-01

    SNMAT is rule-based expert-system computer program designed to assist personnel in monitoring status of computer network and identifying defective computers, workstations, and other components of network. Also assists in training network operators. Network for SNMAT located at Space Flight Operations Center (SFOC) at NASA's Jet Propulsion Laboratory. Intended to serve as data-reduction system providing windows, menus, and graphs, enabling users to focus on relevant information. SNMAT expected to be adaptable to other computer networks; for example in management of repair, maintenance, and security, or in administration of planning systems, billing systems, or archives.

  7. Scientific Computing Strategic Plan for the Idaho National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whiting, Eric Todd

    Scientific computing is a critical foundation of modern science. Without innovations in the field of computational science, the essential missions of the Department of Energy (DOE) would go unrealized. Taking a leadership role in such innovations is Idaho National Laboratory's (INL's) challenge and charge, and is central to INL's ongoing success. Computing is an essential part of INL's future. DOE science and technology missions rely firmly on computing capabilities in various forms. Modeling and simulation, fueled by innovations in computational science and validated through experiment, are a critical foundation of science and engineering. Big data analytics from an increasing number of widely varied sources is opening new windows of insight and discovery. Computing is a critical tool in education, science, engineering, and experiments. Advanced computing capabilities in the form of people, tools, computers, and facilities will position INL competitively to deliver results and solutions on important national science and engineering challenges. A computing strategy must include much more than simply computers. The foundational enabling component of computing at many DOE national laboratories is the combination of a showcase-like data center facility coupled with a very capable supercomputer. In addition, network connectivity, disk storage systems, and visualization hardware are critical and generally tightly coupled to the computer system and co-located in the same facility. The existence of these resources in a single data center facility opens the doors to many opportunities that would not otherwise be possible.

  8. pSCANNER: patient-centered Scalable National Network for Effectiveness Research

    PubMed Central

    Ohno-Machado, Lucila; Agha, Zia; Bell, Douglas S; Dahm, Lisa; Day, Michele E; Doctor, Jason N; Gabriel, Davera; Kahlon, Maninder K; Kim, Katherine K; Hogarth, Michael; Matheny, Michael E; Meeker, Daniella; Nebeker, Jonathan R

    2014-01-01

    This article describes the patient-centered Scalable National Network for Effectiveness Research (pSCANNER), which is part of the recently formed PCORnet, a national network composed of learning healthcare systems and patient-powered research networks funded by the Patient Centered Outcomes Research Institute (PCORI). It is designed to be a stakeholder-governed federated network that uses a distributed architecture to integrate data from three existing networks covering over 21 million patients in all 50 states: (1) VA Informatics and Computing Infrastructure (VINCI), with data from Veteran Health Administration's 151 inpatient and 909 ambulatory care and community-based outpatient clinics; (2) the University of California Research exchange (UC-ReX) network, with data from UC Davis, Irvine, Los Angeles, San Francisco, and San Diego; and (3) SCANNER, a consortium of UCSD, Tennessee VA, and three federally qualified health systems in the Los Angeles area supplemented with claims and health information exchange data, led by the University of Southern California. Initial use cases will focus on three conditions: (1) congestive heart failure; (2) Kawasaki disease; (3) obesity. Stakeholders, such as patients, clinicians, and health service researchers, will be engaged to prioritize research questions to be answered through the network. We will use a privacy-preserving distributed computation model with synchronous and asynchronous modes. The distributed system will be based on a common data model that allows the construction and evaluation of distributed multivariate models for a variety of statistical analyses. PMID:24780722

  9. Hacking Social Networks: Examining the Viability of Using Computer Network Attack Against Social Networks

    DTIC Science & Technology

    2007-03-01

    Naval Postgraduate School, Monterey, California. Master's thesis: Hacking Social Networks: Examining the Viability of Using Computer Network Attack Against Social Networks, by Russell G. Schuhart II, March 2007. Thesis advisor: David Tucker. Approved for public release; distribution is unlimited.

  10. Sharing Writing through Computer Networking.

    ERIC Educational Resources Information Center

    Fey, Marion H.

    1997-01-01

    Suggests computer networking can support the essential purposes of the collaborative-writing movement, offering opportunities for sharing writing. Notes that literacy teachers are exploring the connectivity of computer networking through numerous designs that use either real-time or asynchronous communication. Discusses new roles for students and…

  11. Network Management of the SPLICE Computer Network.

    DTIC Science & Technology

    1982-12-01

    Approved for public release; distribution unlimited. Network Management of the SPLICE Computer Network, by Zriig E. Opal, Captain, United States Marine... The structure of the network must lend itself to change and reconfiguration; one author [Ref. 2: p.21] recommended that a global bus topology be adopted for... statistics, trace statistics, snapshot statistics, artificial traffic generators, emulation, a network measurement center which includes control, collection

  12. Computer Network Security- The Challenges of Securing a Computer Network

    NASA Technical Reports Server (NTRS)

    Scotti, Vincent, Jr.

    2011-01-01

    This article is intended to give the reader an overall perspective on what it takes to design, implement, enforce and secure a computer network in the federal and corporate world to ensure the confidentiality, integrity and availability of information. While we will be giving you an overview of network design and security, this article will concentrate on the technology and human factors of securing a network and the challenges faced by those doing so. It will cover the large number of policies and the limits of technology and physical efforts to enforce such policies.

  13. Computer Networks as a New Data Base.

    ERIC Educational Resources Information Center

    Beals, Diane E.

    1992-01-01

    Discusses the use of communication on computer networks as a data source for psychological, social, and linguistic research. Differences between computer-mediated communication and face-to-face communication are described, the Beginning Teacher Computer Network is discussed, and examples of network conversations are appended. (28 references) (LRW)

  14. The Reality of National Computer Networking for Higher Education. Proceedings of the 1978 EDUCOM Fall Conference. EDUCOM Series in Computing and Telecommunications in Higher Education 3.

    ERIC Educational Resources Information Center

    Emery, James C., Ed.

    A comprehensive review of the current status, prospects, and problems of computer networking in higher education is presented from the perspectives of both computer users and network suppliers. Several areas of computer use are considered including applications for instruction, research, and administration in colleges and universities. In the…

  15. "TIS": An Intelligent Gateway Computer for Information and Modeling Networks. Overview.

    ERIC Educational Resources Information Center

    Hampel, Viktor E.; And Others

    TIS (Technology Information System) is being used at the Lawrence Livermore National Laboratory (LLNL) to develop software for Intelligent Gateway Computers (IGC) suitable for the prototyping of advanced, integrated information networks. Dedicated to information management, TIS leads the user to available information resources, on TIS or…

  16. National networks of Healthy Cities in Europe.

    PubMed

    Janss Lafond, Leah; Heritage, Zoë

    2009-11-01

    National networks of Healthy Cities emerged in the late 1980s as a spontaneous reaction to a great demand by cities to participate in the Healthy Cities movement. Today, they engage at least 1300 cities in the European region and form the backbone of the Healthy Cities movement. This article provides an analysis of the results of the regular surveys of national networks that have been carried out principally since 1997. The main functions and achievements of national networks are presented alongside some of their most pressing challenges. Although networks have differing priorities and organizational characteristics, they do share common goals and strategic directions based on the Healthy Cities model (see other articles in this special edition of HPI). Therefore, it has been possible to identify a set of organizational and strategic factors that contribute to the success of networks. These factors form the basis of a set of accreditation criteria for national networks and provide guidance for the establishment of new national networks. Although national networks have made substantial achievements, they continue to face a number of dilemmas that are discussed in the article. Problems a national network must deal with include how to obtain sustainable funding, how to raise the standard of work in cities without creating exclusive participation criteria and how to balance the need to provide direct support to cities with its role as a national player. These dilemmas are similar to other public sector networks. During the last 15 years, the pooling of practical expertise in urban health has made Healthy Cities networks an important resource for national as well as local governments. Not only do they provide valuable support to their members but they often advise ministries and other national institutions on effective models to promote sustainable urban health development.

  17. HeNCE: A Heterogeneous Network Computing Environment

    DOE PAGES

    Beguelin, Adam; Dongarra, Jack J.; Geist, George Al; ...

    1994-01-01

    Network computing seeks to utilize the aggregate resources of many networked computers to solve a single problem. In so doing it is often possible to obtain supercomputer performance from an inexpensive local area network. The drawback is that network computing is complicated and error prone when done by hand, especially if the computers have different operating systems and data formats and are thus heterogeneous. The heterogeneous network computing environment (HeNCE) is an integrated graphical environment for creating and running parallel programs over a heterogeneous collection of computers. It is built on a lower level package called parallel virtual machine (PVM). The HeNCE philosophy of parallel programming is to have the programmer graphically specify the parallelism of a computation and to automate, as much as possible, the tasks of writing, compiling, executing, debugging, and tracing the network computation. Key to HeNCE is a graphical language based on directed graphs that describe the parallelism and data dependencies of an application. Nodes in the graphs represent conventional Fortran or C subroutines and the arcs represent data and control flow. This article describes the present state of HeNCE, its capabilities, limitations, and areas of future research.

  18. Spontaneous ad hoc mobile cloud computing network.

    PubMed

    Lacuesta, Raquel; Lloret, Jaime; Sendra, Sandra; Peñalver, Lourdes

    2014-01-01

    Cloud computing lets users and companies share computing resources instead of relying on local servers or personal devices to handle applications. Smart devices are becoming primary information-processing devices, and their computing capabilities are reaching levels that would let them form a mobile cloud computing network. Often, however, they cannot create such a cloud or collaborate actively in it, because building a spontaneous network and configuring its parameters is difficult. For this reason, this paper presents the design and deployment of a spontaneous ad hoc mobile cloud computing network. To support it, we have developed a trusted algorithm that manages the activity of nodes as they join and leave the network. The paper describes the network procedures and classes that have been designed. Our simulation results using Castalia show that the proposal achieves good efficiency and network performance even with a high number of nodes.
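    The trusted join/leave management described in the abstract can be sketched, very loosely, as a membership manager with a trust threshold. The class name, trust scores, and threshold below are illustrative assumptions, not the paper's actual algorithm.

```python
class SpontaneousCloud:
    """Toy membership manager for a spontaneous ad hoc cloud.

    Hypothetical sketch: the paper's trusted algorithm is not reproduced
    here, so node ids, trust scores, and the threshold are illustrative.
    """

    def __init__(self, trust_threshold=0.5):
        self.trust_threshold = trust_threshold
        self.members = {}          # node_id -> trust score

    def join(self, node_id, trust_score):
        # Admit a node only if its trust score clears the threshold.
        if trust_score >= self.trust_threshold:
            self.members[node_id] = trust_score
            return True
        return False

    def leave(self, node_id):
        # Remove a departing node; ignore unknown ids.
        self.members.pop(node_id, None)

    def available_workers(self):
        # Nodes currently able to take cloud tasks.
        return sorted(self.members)


cloud = SpontaneousCloud()
cloud.join("phone-a", 0.9)
cloud.join("phone-b", 0.2)   # rejected: below trust threshold
cloud.join("tablet-c", 0.7)
cloud.leave("phone-a")
print(cloud.available_workers())  # ['tablet-c']
```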

  19. Closeness Possible through Computer Networking.

    ERIC Educational Resources Information Center

    Dodd, Julie E.

    1989-01-01

    Points out the benefits of computer networking for scholastic journalism. Discusses three systems currently offering networking possibilities for publications: the Student Press Information Network; the Youth Communication Service; and the Dow Jones Newspaper Fund's electronic mail system. (MS)

  20. Constructing Precisely Computing Networks with Biophysical Spiking Neurons.

    PubMed

    Schwemmer, Michael A; Fairhall, Adrienne L; Denéve, Sophie; Shea-Brown, Eric T

    2015-07-15

    While spike timing has been shown to carry detailed stimulus information at the sensory periphery, its possible role in network computation is less clear. Most models of computation by neural networks are based on population firing rates. In equivalent spiking implementations, firing is assumed to be random such that averaging across populations of neurons recovers the rate-based approach. Recently, however, Denéve and colleagues have suggested that the spiking behavior of neurons may be fundamental to how neuronal networks compute, with precise spike timing determined by each neuron's contribution to producing the desired output (Boerlin and Denéve, 2011; Boerlin et al., 2013). By postulating that each neuron fires to reduce the error in the network's output, it was demonstrated that linear computations can be performed by networks of integrate-and-fire neurons that communicate through instantaneous synapses. This left open, however, the possibility that realistic networks, with conductance-based neurons with subthreshold nonlinearity and the slower timescales of biophysical synapses, may not fit into this framework. Here, we show how the spike-based approach can be extended to biophysically plausible networks. We then show that our network reproduces a number of key features of cortical networks including irregular and Poisson-like spike times and a tight balance between excitation and inhibition. Lastly, we discuss how the behavior of our model scales with network size or with the number of neurons "recorded" from a larger computing network. These results significantly increase the biological plausibility of the spike-based approach to network computation. We derive a network of neurons with standard spike-generating currents and synapses with realistic timescales that computes based upon the principle that the precise timing of each spike is important for the computation. We then show that our network reproduces a number of key features of cortical networks
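    The paper's central rule, that a neuron fires exactly when its spike reduces the network's output error, can be illustrated with a deliberately tiny sketch: one leaky readout tracking a constant signal. All parameters (leak, spike weight, target signal) are illustrative assumptions, not values from the model.

```python
# One neuron with output weight w drives a leaky readout xhat that should
# track a signal x. The neuron fires iff adding w shrinks the squared
# readout error: (x - xhat - w)**2 < (x - xhat)**2, i.e. x - xhat > w / 2.

def simulate(x=1.0, w=0.1, leak=0.05, steps=200, dt=1.0):
    xhat, spikes = 0.0, 0
    for _ in range(steps):
        xhat *= (1.0 - leak * dt)     # leaky decay of the readout
        if x - xhat > w / 2:          # spike only if it reduces the error
            xhat += w
            spikes += 1
    return xhat, spikes

xhat, n = simulate()
# The readout settles into a band of width w around the target signal.
print(abs(1.0 - xhat) < 0.11, n > 0)  # True True
```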

  1. Computer network security for the radiology enterprise.

    PubMed

    Eng, J

    2001-08-01

    As computer networks become an integral part of the radiology practice, it is appropriate to raise concerns regarding their security. The purpose of this article is to present an overview of computer network security risks and preventive strategies as they pertain to the radiology enterprise. A number of technologies are available that provide strong deterrence against attacks on networks and networked computer systems in the radiology enterprise. While effective, these technologies must be supplemented with vigilant user and system management.

  2. 23 CFR 658.21 - Identification of National Network.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 23 Highways 1 2011-04-01 2011-04-01 false Identification of National Network. 658.21 Section 658... Identification of National Network. (a) To identify the National Network, a State may sign the routes or provide maps or lists of highways describing the National Network. (b) Exceptional local conditions on the...

  3. 23 CFR 658.21 - Identification of National Network.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 23 Highways 1 2010-04-01 2010-04-01 false Identification of National Network. 658.21 Section 658... Identification of National Network. (a) To identify the National Network, a State may sign the routes or provide maps or lists of highways describing the National Network. (b) Exceptional local conditions on the...

  4. Active Computer Network Defense: An Assessment

    DTIC Science & Technology

    2001-04-01

    sufficient base of knowledge in information technology can be assumed to be working on some form of computer network warfare, even if only defensive in...the Defense Information Infrastructure (DII) to attack. Transmission Control Protocol/ Internet Protocol (TCP/IP) networks are inherently resistant to...aims to create this part of information superiority, and computer network defense is one of its fundamental components. Most of these efforts center

  5. Spontaneous Ad Hoc Mobile Cloud Computing Network

    PubMed Central

    Lacuesta, Raquel; Sendra, Sandra; Peñalver, Lourdes

    2014-01-01

    Cloud computing lets users and companies share computing resources instead of relying on local servers or personal devices to handle applications. Smart devices are becoming primary information-processing devices, and their computing capabilities are reaching levels that would let them form a mobile cloud computing network. Often, however, they cannot create such a cloud or collaborate actively in it, because building a spontaneous network and configuring its parameters is difficult. For this reason, this paper presents the design and deployment of a spontaneous ad hoc mobile cloud computing network. To support it, we have developed a trusted algorithm that manages the activity of nodes as they join and leave the network. The paper describes the network procedures and classes that have been designed. Our simulation results using Castalia show that the proposal achieves good efficiency and network performance even with a high number of nodes. PMID:25202715

  6. Network Patch Cables Demystified: A Super Activity for Computer Networking Technology

    ERIC Educational Resources Information Center

    Brown, Douglas L.

    2004-01-01

    This article de-mystifies network patch cable secrets so that people can connect their computers and transfer those pesky files--without screaming at the cables. It describes a network cabling activity that can offer students a great hands-on opportunity for working with the tools, techniques, and media used in computer networking. Since the…

  7. Non-harmful insertion of data mimicking computer network attacks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neil, Joshua Charles; Kent, Alexander; Hash, Jr, Curtis Lee

    Non-harmful data mimicking computer network attacks may be inserted in a computer network. Anomalous real network connections may be generated between a plurality of computing systems in the network. Data mimicking an attack may also be generated. The generated data may be transmitted between the plurality of computing systems using the real network connections and measured to determine whether an attack is detected.
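    The patented scheme summarized above can be caricatured in a few lines: tag synthetic records that merely look like an attack, mix them into otherwise normal traffic, and check that a detector notices them. Host names, the fan-out attack pattern, and the detection threshold are invented for illustration.

```python
import random

random.seed(7)

def normal_traffic(n_hosts=20, n_events=200):
    # Benign background: random host-to-host connections, tagged mimic=False.
    hosts = [f"h{i}" for i in range(n_hosts)]
    return [(random.choice(hosts), random.choice(hosts), False)
            for _ in range(n_events)]

def inject_mimic(events, source="h0", fanout=15):
    # Mimic records are tagged so they can be removed later (non-harmful).
    mimic = [(source, f"h{i}", True) for i in range(1, fanout + 1)]
    return events + mimic

def detect_fanout(events, threshold=10):
    # Flag any source contacting more than `threshold` distinct targets.
    targets = {}
    for src, dst, _ in events:
        targets.setdefault(src, set()).add(dst)
    return {s for s, t in targets.items() if len(t) > threshold}

events = inject_mimic(normal_traffic())
print("h0" in detect_fanout(events))  # True: the injected fan-out is detected
```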

  8. Pacific Educational Computer Network Study. Final Report.

    ERIC Educational Resources Information Center

    Hawaii Univ., Honolulu. ALOHA System.

    The Pacific Educational Computer Network Feasibility Study examined technical and non-technical aspects of the formation of an international Pacific Area computer network for higher education. The technical study covered the assessment of the feasibility of a packet-switched satellite and radio ground distribution network for data transmission…

  9. Efficiently modeling neural networks on massively parallel computers

    NASA Technical Reports Server (NTRS)

    Farber, Robert M.

    1993-01-01

    Neural networks are a very useful tool for analyzing and modeling complex real world systems. Applying neural network simulations to real world problems generally involves large amounts of data and massive amounts of computation. To efficiently handle the computational requirements of large problems, we have implemented at Los Alamos a highly efficient neural network compiler for serial computers, vector computers, vector parallel computers, and fine grain SIMD computers such as the CM-2 connection machine. This paper describes the mapping used by the compiler to implement feed-forward backpropagation neural networks for a SIMD (Single Instruction Multiple Data) architecture parallel computer. Thinking Machines Corporation has benchmarked our code at 1.3 billion interconnects per second (approximately 3 gigaflops) on a 64,000 processor CM-2 connection machine (Singer 1990). This mapping is applicable to other SIMD computers and can be implemented on MIMD computers such as the CM-5 connection machine. Our mapping has virtually no communications overhead, with the exception of the communications required for a global summation across the processors (which has a sub-linear runtime growth on the order of O(log(number of processors))). We can efficiently model very large neural networks with many neurons and interconnects, and our mapping extends to arbitrarily large networks (within memory limitations) by merging the memory space of separate processors with fast adjacent-processor interprocessor communications. This paper considers the simulation of only feed-forward neural networks, although the method is extendable to recurrent networks.
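    The O(log P) global summation mentioned in the abstract is the classic pairwise tree reduction; a serial sketch, with list slots standing in for processors, is:

```python
# At step s, slot i absorbs the partial sum held by slot i + 2**s,
# so P "processors" combine their partials in ceil(log2(P)) steps.

def tree_sum(partials):
    vals = list(partials)
    steps = 0
    stride = 1
    while stride < len(vals):
        for i in range(0, len(vals) - stride, 2 * stride):
            vals[i] += vals[i + stride]   # neighbor sends its partial sum
        stride *= 2
        steps += 1
    return vals[0], steps

total, steps = tree_sum(range(64))   # 64 "processors"
print(total, steps)   # 2016 6
```

The serial double loop is only a stand-in; on a real SIMD machine each stride's additions happen in parallel, which is where the logarithmic step count comes from.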

  10. The Emergence Of The National Research And Education Network (NREN) And Its Implications For American Telecommunications

    NASA Astrophysics Data System (ADS)

    Maloff, Joel H.

    1990-01-01

    "The nation which most completely assimilates high performance computing into its economy will very likely emerge as the dominant intellectual, economic, and technological force in the next century", Senator Albert Gore, Jr., May 18, 1989, while introducing Senate Bill 1067, "The National High Performance Computer Technology Act of 1989". A national network designed to link supercomputers, particle accelerators, researchers, educators, government, and industry is beginning to emerge. The degree to which the United States can mobilize the resources inherent within our academic, industrial and government sectors towards the establishment of such a network infrastructure will have direct bearing on the economic and political stature of this country in the next century. This program will have significant impact on all forms of information transfer, and peripheral benefits to all walks of life similar to those experienced from the moon landing program of the 1960's. The key to our success is the involvement of scientists, librarians, network designers, and bureaucrats in the planning stages. Collectively, the resources resident within the United States are awesome; individually, their impact is somewhat more limited. The engineers, technicians, business people, and educators participating in this conference have a vital role to play in the success of the National Research and Education Network (NREN).

  11. Network Computer Technology. Phase I: Viability and Promise within NASA's Desktop Computing Environment

    NASA Technical Reports Server (NTRS)

    Paluzzi, Peter; Miller, Rosalind; Kurihara, West; Eskey, Megan

    1998-01-01

    Over the past several months, major industry vendors have made a business case for the network computer as a win-win solution toward lowering total cost of ownership. This report provides results from Phase I of the Ames Research Center network computer evaluation project. It identifies factors to be considered for determining cost of ownership; further, it examines where, when, and how network computer technology might fit in NASA's desktop computing architecture.

  12. The National Research and Education Network (NREN): Research and Policy Perspectives.

    ERIC Educational Resources Information Center

    McClure, Charles R.; And Others

    This book provides an overview and status report on the progress made in developing the National Research and Education Network (NREN) as of early 1991. It reports on a number of investigations that provide a research and policy perspective on the NREN and computer-mediated communication (CMC), and brings together key source documents that have…

  13. Collective Computation of Neural Network

    DTIC Science & Technology

    1990-03-15

    Sciences, Beijing. ABSTRACT: Computational neuroscience is a new branch of neuroscience originating from current research on the theory of computer... scientists working in artificial intelligence engineering and neuroscience. The paper introduces the collective computational properties of model neural... vision research. On this basis, the authors analyzed the significance of the Hopfield model. Key phrases: Computational Neuroscience, Neural Network, Model

  14. Computer network environment planning and analysis

    NASA Technical Reports Server (NTRS)

    Dalphin, John F.

    1989-01-01

    The GSFC Computer Network Environment provides a broadband RF cable between campus buildings and ethernet spines in buildings for the interlinking of Local Area Networks (LANs). This system provides terminal and computer linkage among host and user systems, thereby providing E-mail services, file exchange capability, and certain distributed computing opportunities. The Environment is designed to be transparent and supports multiple protocols. Networking at Goddard has a short history and has been under coordinated control of a Network Steering Committee for slightly more than two years; network growth has been rapid with more than 1500 nodes currently addressed and greater expansion expected. A new RF cable system with a different topology is being installed during summer 1989; consideration of a fiber optics system for the future will begin soon. Summer study was directed toward Network Steering Committee operation and planning plus consideration of Center Network Environment analysis and modeling. Biweekly Steering Committee meetings were attended to learn the background of the network and the concerns of those managing it. Suggestions for historical data gathering have been made to support future planning and modeling. Data Systems Dynamic Simulator, a simulation package developed at NASA and maintained at GSFC, was studied as a possible modeling tool for the network environment. A modeling concept based on a hierarchical model was hypothesized for further development. Such a model would allow input of newly updated parameters and would provide an estimation of the behavior of the network.

  15. User's manual for a material transport code on the Octopus Computer Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naymik, T.G.; Mendez, G.D.

    1978-09-15

    A code to simulate material transport through porous media was developed at Oak Ridge National Laboratory. This code has been modified and adapted for use at Lawrence Livermore Laboratory. This manual, in conjunction with report ORNL-4928, explains the input, output, and execution of the code on the Octopus Computer Network.

  16. LaRC local area networks to support distributed computing

    NASA Technical Reports Server (NTRS)

    Riddle, E. P.

    1984-01-01

    The Langley Research Center's (LaRC) Local Area Network (LAN) effort is discussed. LaRC initiated the development of a LAN to support a growing distributed computing environment at the Center. The purpose of the network is to provide an improved capability (over interactive and RJE terminal access) for sharing multivendor computer resources. Specifically, the network will provide a data highway for the transfer of files between mainframe computers, minicomputers, work stations, and personal computers. An important influence on the overall network design was the vital need of LaRC researchers to efficiently utilize the large CDC mainframe computers in the central scientific computing facility. Although there was a steady migration from a centralized to a distributed computing environment at LaRC in recent years, the work load on the central resources increased. Major emphasis in the network design was on communication with the central resources within the distributed environment. The network to be implemented will allow researchers to utilize the central resources, distributed minicomputers, work stations, and personal computers to obtain the proper level of computing power to efficiently perform their jobs.

  17. Queuing theory models for computer networks

    NASA Technical Reports Server (NTRS)

    Galant, David C.

    1989-01-01

    A set of simple queuing theory models that can predict the average response of a network of computers to a given traffic load has been implemented using a spreadsheet. Because the models require little fine detail about network traffic rates, traffic patterns, and the hardware used to implement the networks, the impact of variations in traffic patterns and intensities, channel capacities, and message protocols can be assessed quickly. A sample use of the models applied to a realistic problem is included in appendix A. Appendix B provides a glossary of terms used in this paper. The Ames Research Center computer communication network is an evolving network of local area networks (LANs) connected via gateways and high-speed backbone communication channels. Intelligent planning of expansion and improvement requires understanding the behavior of the individual LANs as well as the collection of networks as a whole.
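    The report's spreadsheet models are not reproduced in the abstract; as a hedged stand-in, the textbook M/M/1 formula below shows the kind of average-response estimate such queuing models produce for a single channel. The example rates are invented.

```python
# Mean time in system (queueing wait + service) for an M/M/1 queue:
# T = 1 / (mu - lambda), valid only while lambda < mu.

def mm1_response_time(arrival_rate, service_rate):
    """Average response time for a single channel, in the rates' time unit."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrival rate >= service rate")
    return 1.0 / (service_rate - arrival_rate)

# A channel serving 100 messages/s, offered 80 messages/s:
print(mm1_response_time(80, 100))   # 0.05 s average response
```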

  18. Computer (PC/Network) Coordinator.

    ERIC Educational Resources Information Center

    Ohio State Univ., Columbus. Center on Education and Training for Employment.

    This publication contains 22 subjects appropriate for use in a competency list for the occupation of computer (PC/network) coordinator, 1 of 12 occupations within the business/computer technologies cluster. Each unit consists of a number of competencies; a list of competency builders is provided for each competency. Titles of the 22 units are as…

  19. Network survivability performance (computer diskette)

    NASA Astrophysics Data System (ADS)

    1993-11-01

    File characteristics: Data file; 1 file. Physical description: 1 computer diskette; 3 1/2 in.; high density; 2.0MB. System requirements: Mac; Word. This technical report has been developed to address the survivability of telecommunications networks including services. It responds to the need for a common understanding of, and assessment techniques for network survivability, availability, integrity, and reliability. It provides a basis for designing and operating telecommunication networks to user expectations for network survivability.

  20. Neural-Network Computer Transforms Coordinates

    NASA Technical Reports Server (NTRS)

    Josin, Gary M.

    1990-01-01

    Numerical simulation demonstrated ability of conceptual neural-network computer to generalize what it has "learned" from few examples. Ability to generalize achieved with even simple neural network (relatively few neurons) and after exposure of network to only few "training" examples. Ability to obtain fairly accurate mappings after only few training examples used to provide solutions to otherwise intractable mapping problems.

  1. The research of computer network security and protection strategy

    NASA Astrophysics Data System (ADS)

    He, Jian

    2017-05-01

    With the widespread popularity of computer network applications, network security has also received a high degree of attention. The factors affecting network safety are complex, and ensuring network security is systematic work that poses a high challenge. Addressing the safety and reliability problems of computer network systems, and drawing on practical work experience, this paper offers suggestions and measures covering network security threats, security technology, and system design principles, so that the broad base of computer network users can enhance their safety awareness and master basic network security techniques.

  2. Hyperswitch Network For Hypercube Computer

    NASA Technical Reports Server (NTRS)

    Chow, Edward; Madan, Herbert; Peterson, John

    1989-01-01

    Data-driven dynamic switching enables high speed data transfer. Proposed hyperswitch network based on mixed static and dynamic topologies. Routing header modified in response to congestion or faults encountered as path established. Static topology meets requirement if nodes have switching elements that perform necessary routing header revisions dynamically. Hypercube topology now being implemented with switching element in each computer node aimed at designing very-richly-interconnected multicomputer system. Interconnection network connects great number of small computer nodes, using fixed hypercube topology, characterized by point-to-point links between nodes.
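    For a fixed hypercube topology like the one described, a minimal route follows from the bitwise XOR of source and destination node ids: each 1-bit names a dimension the message must cross. The sketch below shows this standard e-cube scheme as an illustration only; it is not the article's hyperswitch protocol, which additionally revises routing headers dynamically around congestion and faults.

```python
# Nodes in a d-dimensional hypercube are numbered 0 .. 2**d - 1, with a
# point-to-point link between any two ids differing in exactly one bit.

def hypercube_route(src, dst, dims):
    path = [src]
    node = src
    for d in range(dims):                 # resolve dimensions in fixed order
        if (node ^ dst) & (1 << d):       # this dimension still differs
            node ^= 1 << d                # hop across it
            path.append(node)
    return path

print(hypercube_route(0b000, 0b101, 3))   # [0, 1, 5]
```

The path length equals the Hamming distance between the two ids, which is why every route in a d-cube takes at most d hops.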

  3. The ASCI Network for SC 2000: Gigabyte Per Second Networking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    PRATT, THOMAS J.; NAEGLE, JOHN H.; MARTINEZ JR., LUIS G.

    2001-11-01

    This document highlights the DisCom distance computing and communication team's activities at the SC2000 supercomputing conference in Dallas, Texas, sponsored by the IEEE and ACM. Sandia's participation in the conference has now spanned a decade; for the last five years Sandia National Laboratories, Los Alamos National Laboratory, and Lawrence Livermore National Laboratory have come together at the conference under the DOE's ASCI (Accelerated Strategic Computing Initiative) program rubric to demonstrate ASCI's emerging capabilities in computational science and their combined expertise in high-performance computer science and communication networking. At SC 2000, DISCOM demonstrated a pre-standard implementation of 10 Gigabit Ethernet, the first gigabyte-per-second IP data transfer application, and VPN technology that enabled a remote demonstration of Distributed Resource Management tools. Additionally, a national OC48 POS network was constructed to support applications running between the show floor and home facilities; this network created the opportunity to test PSE's Parallel File Transfer Protocol (PFTP) across a network with speeds and distances similar to the then-proposed DISCOM WAN. SCinet at SC2000 showcased wireless networking, and the networking team explored this emerging technology from the booth. We also supported the production networking needs of the convention exhibit floor. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support DISCOM's overall strategies in high-performance computing networking.

  4. Using E-Mail across Computer Networks.

    ERIC Educational Resources Information Center

    Hazari, Sunil

    1990-01-01

    Discusses the use of telecommunications technology to exchange electronic mail, files, and messages across different computer networks. Networks highlighted include ARPA Internet; BITNET; USENET; FidoNet; MCI Mail; and CompuServe. Examples of the successful use of networks in higher education are given. (Six references) (LRW)

  5. Computing chemical organizations in biological networks.

    PubMed

    Centler, Florian; Kaleta, Christoph; di Fenizio, Pietro Speroni; Dittrich, Peter

    2008-07-15

    Novel techniques are required to analyze computational models of intracellular processes as they increase steadily in size and complexity. The theory of chemical organizations has recently been introduced as such a technique that links the topology of biochemical reaction network models to their dynamical repertoire. The network is decomposed into algebraically closed and self-maintaining subnetworks called organizations. They form a hierarchy representing all feasible system states, including all steady states. We present three algorithms to compute the hierarchy of organizations for network models provided in SBML format. Two of them compute the complete organization hierarchy, while the third one uses heuristics to obtain a subset of all organizations for large models. While the constructive approach computes the hierarchy starting from the smallest organization in a bottom-up fashion, the flux-based approach employs self-maintaining flux distributions to determine organizations. A runtime comparison on 16 different network models of natural systems showed that neither of the two exhaustive algorithms is superior in all cases. Studying a 'genome-scale' network model with 762 species and 1193 reactions, we demonstrate how the organization hierarchy helps to uncover the model structure and allows one to evaluate the model's quality, for example by detecting components and subsystems of the model whose maintenance is not explained by the model. All data and a Java implementation that plugs into the Systems Biology Workbench is available from http://www.minet.uni-jena.de/csb/prj/ot/tools.
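    The "algebraically closed" condition described in the abstract can be shown on a toy reaction network: a species set is closed if every reaction whose reactants all lie in the set produces only species already in the set. The network below is invented for illustration; it is not from the paper, and the full organization test would also require self-maintenance.

```python
# Each reaction is (reactant set, product set).
reactions = [
    ({"a", "b"}, {"c"}),       # a + b -> c
    ({"c"}, {"a", "b"}),       # c -> a + b
    ({"b"}, {"d"}),            # b -> d
]

def is_closed(species):
    # A reaction "applies" when all its reactants are present; closure
    # demands that every applicable reaction's products stay inside the set.
    return all(products <= species
               for reactants, products in reactions
               if reactants <= species)

print(is_closed({"a", "d"}))         # True: no reaction applies at all
print(is_closed({"a", "b", "c"}))    # False: b -> d escapes the set
```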

  6. National law enforcement telecommunications network

    NASA Technical Reports Server (NTRS)

    Reilly, N. B.; Garrison, G. W.; Sohn, R. L.; Gallop, D. L.; Goldstein, B. L.

    1975-01-01

    Alternative approaches to a National Law Enforcement Telecommunications Network (NALECOM), designed to service all state-to-state and state-to-national criminal justice communications traffic needs in the United States, are analyzed. Network topology options were analyzed, and equipment and personnel requirements for each option were defined in accordance with NALECOM functional specifications and design guidelines. Evaluation criteria were developed and applied to each of the options, leading to specific conclusions. Detailed treatments of methods for determining traffic requirements, communication line costs, switcher configurations and costs, microwave costs, satellite system configurations and costs, facilities, operations and engineering costs, network delay analysis, and network availability analysis are presented. It is concluded that a single regional switcher configuration is the optimum choice based on cost and technical factors. A two-region configuration is competitive. Multiple-region configurations are less competitive due to increasing costs without attendant benefits.

  7. Computer-Based Information Networks: Selected Examples.

    ERIC Educational Resources Information Center

    Hardesty, Larry

    The history, purpose, and operation of six computer-based information networks are described in general and nontechnical terms. In the introduction the many definitions of an information network are explored. Ohio College Library Center's network (OCLC) is the first example. OCLC began in 1963, and since early 1973 has been extending its services…

  8. Optical interconnection networks for high-performance computing systems

    NASA Astrophysics Data System (ADS)

    Biberman, Aleksandr; Bergman, Keren

    2012-04-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in the computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining this growth in parallelism introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate the feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers.

  9. Analysis of Computer Network Information Based on "Big Data"

    NASA Astrophysics Data System (ADS)

    Li, Tianli

    2017-11-01

    With the development of the current era, computer networks and big data have gradually become part of people's lives. People use computers to bring convenience to their own lives, but at the same time many network information security problems demand attention. This paper analyzes the information security of computer networks from the perspective of "big data" and puts forward some solutions.

  10. Email networks and the spread of computer viruses

    NASA Astrophysics Data System (ADS)

    Newman, M. E.; Forrest, Stephanie; Balthrop, Justin

    2002-09-01

    Many computer viruses spread via electronic mail, making use of computer users' email address books as a source for email addresses of new victims. These address books form a directed social network of connections between individuals over which the virus spreads. Here we investigate empirically the structure of this network using data drawn from a large computer installation, and discuss the implications of this structure for the understanding and prevention of computer virus epidemics.
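The spreading process the abstract describes can be sketched as simple reachability over a directed graph of address books: a virus that mails itself to every stored address ultimately infects exactly the nodes reachable from the initial victim. The tiny address-book network below is hypothetical.

```python
# Sketch: virus spread over a directed email network as graph reachability.
# Edges point from a machine to the addresses in its address book.
from collections import deque

address_books = {          # node -> addresses it stores (hypothetical)
    "alice": ["bob", "carol"],
    "bob":   ["alice"],
    "carol": ["dave"],
    "dave":  [],
    "eve":   ["alice"],    # nobody stores eve's address
}

def infected_from(seed):
    """Breadth-first search: the set of machines the virus can reach."""
    seen, queue = {seed}, deque([seed])
    while queue:
        victim = queue.popleft()
        for target in address_books.get(victim, []):
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return seen

print(sorted(infected_from("alice")))   # eve is never reached
```

Because the network is directed, a node like `eve` above can infect others yet never be infected, one of the asymmetries that matters for epidemic prevention.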

  11. Networking DEC and IBM computers

    NASA Technical Reports Server (NTRS)

    Mish, W. H.

    1983-01-01

    Local Area Networking of DEC and IBM computers within the structure of the ISO-OSI Seven Layer Reference Model at a raw signaling speed of 1 Mbps or greater is discussed. After an introduction to the ISO-OSI Reference Model and the IEEE-802 Draft Standard for Local Area Networks (LANs), there follows a detailed discussion and comparison of the products available from a variety of manufacturers to perform this networking task. A summary of these products is presented in a table.

  12. Collective network for computer structures

    DOEpatents

    Blumrich, Matthias A; Coteus, Paul W; Chen, Dong; Gara, Alan; Giampapa, Mark E; Heidelberger, Philip; Hoenicke, Dirk; Takken, Todd E; Steinmacher-Burow, Burkhard D; Vranas, Pavlos M

    2014-01-07

    A system and method for enabling high-speed, low-latency global collective communications among interconnected processing nodes. The global collective network optimally enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the network via links to facilitate performance of low-latency global processing operations at nodes of the virtual network. The global collective network may be configured to provide global barrier and interrupt functionality in an asynchronous or synchronized manner. When implemented in a massively-parallel supercomputing structure, the global collective network is physically and logically partitionable according to the needs of a processing algorithm.

  13. Artificial Neural Network Metamodels of Stochastic Computer Simulations

    DTIC Science & Technology

    1994-08-10

    Artificial Neural Network Metamodels of Stochastic Computer Simulations, by Robert Allen Kilmer (B.S. in Education Mathematics, Indiana...), report AD-A285 951. Only fragments of the report documentation page and abstract survive in this record.

  14. Mobile Computing and Ubiquitous Networking: Concepts, Technologies and Challenges.

    ERIC Educational Resources Information Center

    Pierre, Samuel

    2001-01-01

    Analyzes concepts, technologies and challenges related to mobile computing and networking. Defines basic concepts of cellular systems. Describes the evolution of wireless technologies that constitute the foundations of mobile computing and ubiquitous networking. Presents characterization and issues of mobile computing. Analyzes economical and…

  15. Automating the Presentation of Computer Networks

    DTIC Science & Technology

    2006-12-01

    ...software to overlay operational state information. Other network management tools like Computer Associates Unicenter [6,7] generate internal network... and required manual placement assistance. A number of software libraries [20] offer a wealth of automatic layout algorithms and presentation...

  16. Research Activity in Computational Physics utilizing High Performance Computing: Co-authorship Network Analysis

    NASA Astrophysics Data System (ADS)

    Ahn, Sul-Ah; Jung, Youngim

    2016-10-01

    The research activities of computational physicists utilizing high-performance computing are analyzed by bibliometric approaches. This study aims at providing computational physicists utilizing high-performance computing, as well as policy planners, with useful bibliometric results for an assessment of research activities. In order to achieve this purpose, we carried out a co-authorship network analysis of journal articles to assess the research activities of researchers in high-performance computational physics as a case study. For this study, we used journal articles from Elsevier's Scopus database covering the time period 2004-2013. We extracted the author rank in the physics field utilizing high-performance computing by the number of papers published during the ten years from 2004. Finally, we drew the co-authorship network for the 45 top authors and their coauthors, and described some features of the co-authorship network in relation to the author rank. Suggestions for further studies are discussed.

  17. CRYSNET manual. Informal report. [Hardware and software of crystallographic computing network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None,

    1976-07-01

    This manual describes the hardware and software which together make up the crystallographic computing network (CRYSNET). The manual is intended as a users' guide and also provides general information for persons without any experience with the system. CRYSNET is a network of intelligent remote graphics terminals that are used to communicate with the CDC Cyber 70/76 computing system at the Brookhaven National Laboratory (BNL) Central Scientific Computing Facility. Terminals are in active use by four research groups in the field of crystallography. A protein data bank has been established at BNL to store in machine-readable form atomic coordinates and other crystallographic data for macromolecules. The bank currently includes data for more than 20 proteins. This structural information can be accessed at BNL directly by the CRYSNET graphics terminals. More than two years of experience has been accumulated with CRYSNET. During this period, it has been demonstrated that the terminals, which provide access to a large, fast third-generation computer, plus stand-alone interactive graphics capability, are useful for computations in crystallography, and in a variety of other applications as well. The terminal hardware, the actual operations of the terminals, and the operations of the BNL Central Facility are described in some detail, and documentation of the terminal and central-site software is given. (RWR)

  18. Distributed Computer Networks in Support of Complex Group Practices

    PubMed Central

    Wess, Bernard P.

    1978-01-01

    The economics of medical computer networks are presented in context with the patient care and administrative goals of medical networks. Design alternatives and network topologies are discussed with an emphasis on medical network design requirements in distributed data base design, telecommunications, satellite systems, and software engineering. The success of the medical computer networking technology is predicated on the ability of medical and data processing professionals to design comprehensive, efficient, and virtually impenetrable security systems to protect data bases, network access and services, and patient confidentiality.

  19. Telecommunication Networks. Tech Use Guide: Using Computer Technology.

    ERIC Educational Resources Information Center

    Council for Exceptional Children, Reston, VA. Center for Special Education Technology.

    One of nine brief guides for special educators on using computer technology, this guide focuses on utilizing the telecommunications capabilities of computers. Network capabilities including electronic mail, bulletin boards, and access to distant databases are briefly explained. Networks useful to the educator, general commercial systems, and local…

  20. Collective network for computer structures

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Coteus, Paul W [Yorktown Heights, NY; Chen, Dong [Croton On Hudson, NY; Gara, Alan [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Takken, Todd E [Brewster, NY; Steinmacher-Burow, Burkhard D [Wernau, DE; Vranas, Pavlos M [Bedford Hills, NY

    2011-08-16

    A system and method for enabling high-speed, low-latency global collective communications among interconnected processing nodes. The global collective network optimally enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the network via links to facilitate performance of low-latency global processing operations at nodes of the virtual network and class structures. The global collective network may be configured to provide global barrier and interrupt functionality in an asynchronous or synchronized manner. When implemented in a massively-parallel supercomputing structure, the global collective network is physically and logically partitionable according to the needs of a processing algorithm.

  1. Computing motion using resistive networks

    NASA Technical Reports Server (NTRS)

    Koch, Christof; Luo, Jin; Mead, Carver; Hutchinson, James

    1988-01-01

    Recent developments in the theory of early vision are described which lead from the formulation of the motion problem as an ill-posed one to its solution by minimizing certain 'cost' functions. These cost or energy functions can be mapped onto simple analog and digital resistive networks. It is shown how the optical flow can be computed by injecting currents into resistive networks and recording the resulting stationary voltage distribution at each node. These networks can be implemented in CMOS VLSI circuits and represent plausible candidates for biological vision systems.
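The current-injection readout described above can be sketched numerically: inject a current at one node of a resistor chain and relax the node voltages to their stationary values. The chain, the unit conductances, and the injection point below are hypothetical choices for illustration, not the vision networks of the paper.

```python
# Sketch: stationary voltages of a resistive network under current
# injection, found by Jacobi relaxation of Kirchhoff's current law.
# A 1-D chain of unit resistors with grounded ends is used as a toy.

N = 5                       # interior nodes of the chain
g = 1.0                     # unit conductance between neighbours
inject = [0.0] * N
inject[2] = 1.0             # inject 1 A at the middle node

v = [0.0] * N               # boundary nodes at both ends are held at 0 V
for _ in range(10_000):     # relax toward the stationary state
    # KCL at node i: g(v[i-1]-v[i]) + g(v[i+1]-v[i]) + I_i = 0
    v = [(inject[i] / g
          + (v[i - 1] if i > 0 else 0.0)
          + (v[i + 1] if i < N - 1 else 0.0)) / 2.0
         for i in range(N)]

print([round(x, 3) for x in v])   # → [0.5, 1.0, 1.5, 1.0, 0.5]
```

The stationary voltage profile is the minimizer of the network's dissipated power, which is how such circuits implement the cost-function minimization mentioned in the abstract.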

  2. K-12 Computer Networking.

    ERIC Educational Resources Information Center

    ERIC Review, 1993

    1993-01-01

    The "ERIC Review" is published three times a year and announces research results, publications, and new programs relevant to each issue's theme topic. This issue explores computer networking in elementary and secondary schools via two principal articles: "Plugging into the 'Net'" (Michael B. Eisenberg and Donald P. Ely); and…

  3. A network-based distributed, media-rich computing and information environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phillips, R.L.

    1995-12-31

    Sunrise is a Los Alamos National Laboratory (LANL) project started in October 1993. It is intended to be a prototype National Information Infrastructure development project. A main focus of Sunrise is to tie together enabling technologies (networking, object-oriented distributed computing, graphical interfaces, security, multi-media technologies, and data-mining technologies) with several specific applications. A diverse set of application areas was chosen to ensure that the solutions developed in the project are as generic as possible. Some of the application areas are materials modeling, medical records and image analysis, transportation simulations, and K-12 education. This paper provides a description of Sunrise and a view of the architecture and objectives of this evolving project. The primary objectives of Sunrise are three-fold: (1) To develop common information-enabling tools for advanced scientific research and its applications to industry; (2) To enhance the capabilities of important research programs at the Laboratory; (3) To define a new way of collaboration between computer science and industrially-relevant research.

  4. Network Computing for Distributed Underwater Acoustic Sensors

    DTIC Science & Technology

    2014-03-31

    Network Computing for Distributed Underwater Acoustic Sensors, M. Barbeau and E. Kranakis. Only fragments of the reference list survive in this record, including an underwater sensor network with mobility (in preparation), EvoLogics underwater acoustic modems (2013), and a multimedia cross-layer protocol for underwater acoustic sensor networks (Pompili and Akyildiz, 2010).

  5. Discussion on the Technology and Method of Computer Network Security Management

    NASA Astrophysics Data System (ADS)

    Zhou, Jianlei

    2017-09-01

    With the rapid development of information technology, the application of computer network technology has penetrated all aspects of society, changed people's ways of life and work to a certain extent, and brought great convenience to people. But computer network technology is not a panacea: it can promote social development, yet it can also cause damage to the community and the country. Because of the openness and ease of sharing of computer networks, among other characteristics, network security is strongly affected; in particular, technical loopholes can lead to damage to network information. On this basis, this paper gives a brief analysis of computer network security management problems and security measures.

  6. NCI National Clinical Trials Network Structure

    Cancer.gov

    Learn about how the National Clinical Trials Network (NCTN) is structured. The NCTN is a program of the National Cancer Institute that gives funds and other support to cancer research organizations to conduct cancer clinical trials.

  7. Line-plane broadcasting in a data communications network of a parallel computer

    DOEpatents

    Archer, Charles J.; Berg, Jeremy E.; Blocksome, Michael A.; Smith, Brian E.

    2010-06-08

    Methods, apparatus, and products are disclosed for line-plane broadcasting in a data communications network of a parallel computer, the parallel computer comprising a plurality of compute nodes connected together through the network, the network optimized for point to point data communications and characterized by at least a first dimension, a second dimension, and a third dimension, that include: initiating, by a broadcasting compute node, a broadcast operation, including sending a message to all of the compute nodes along an axis of the first dimension for the network; sending, by each compute node along the axis of the first dimension, the message to all of the compute nodes along an axis of the second dimension for the network; and sending, by each compute node along the axis of the second dimension, the message to all of the compute nodes along an axis of the third dimension for the network.
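The three broadcast phases in the claim above can be sketched as set computations over a small 3-D grid of compute nodes: the root covers its line along the first dimension, that line fans out into a plane along the second, and the plane fans out into the full volume along the third. The grid extents and root coordinates below are arbitrary assumptions.

```python
# Sketch: simulating the line-plane broadcast over a 3-D grid and
# checking that every node receives the message.
from itertools import product

DIMS = (4, 3, 2)        # hypothetical mesh extents (x, y, z)
root = (1, 0, 1)        # hypothetical broadcasting node

# Phase 1: the root sends along its axis of the first dimension.
line = {(x, root[1], root[2]) for x in range(DIMS[0])}

# Phase 2: each node on that line sends along the second dimension,
# so together the line covers a plane.
plane = {(x, y, z) for (x, _, z) in line for y in range(DIMS[1])}

# Phase 3: each node in the plane sends along the third dimension.
volume = {(x, y, zz) for (x, y, _) in plane for zz in range(DIMS[2])}

assert volume == set(product(*map(range, DIMS)))   # all nodes covered
print(len(volume))
```

The point of the line-plane order is that every transfer uses only nearest-neighbour links of one dimension at a time, which matches a network optimized for point-to-point communications.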

  8. Line-plane broadcasting in a data communications network of a parallel computer

    DOEpatents

    Archer, Charles J.; Berg, Jeremy E.; Blocksome, Michael A.; Smith, Brian E.

    2010-11-23

    Methods, apparatus, and products are disclosed for line-plane broadcasting in a data communications network of a parallel computer, the parallel computer comprising a plurality of compute nodes connected together through the network, the network optimized for point to point data communications and characterized by at least a first dimension, a second dimension, and a third dimension, that include: initiating, by a broadcasting compute node, a broadcast operation, including sending a message to all of the compute nodes along an axis of the first dimension for the network; sending, by each compute node along the axis of the first dimension, the message to all of the compute nodes along an axis of the second dimension for the network; and sending, by each compute node along the axis of the second dimension, the message to all of the compute nodes along an axis of the third dimension for the network.

  9. Computer Mediated Social Network Approach to Software Support and Maintenance

    DTIC Science & Technology

    2010-06-01

    Computer Mediated Social Network Approach to Software Support and Maintenance, LTC J. Carlos Vega (student paper). This research highlights the preliminary findings on the potential of computer mediated social networks. This research focused on social networks as...

  10. Geo-spatial Service and Application based on National E-government Network Platform and Cloud

    NASA Astrophysics Data System (ADS)

    Meng, X.; Deng, Y.; Li, H.; Yao, L.; Shi, J.

    2014-04-01

    With the acceleration of China's informatization process, the party and government have taken substantive strides in advancing the development and application of digital technology, which promotes the evolution of e-government and its informatization. Meanwhile, as a service mode based on innovative resources, cloud computing can connect huge resource pools to provide a variety of IT services, and has become a relatively mature technical pattern backed by further studies and massive practical applications. Based on cloud computing technology and the national e-government network platform, the "National Natural Resources and Geospatial Database (NRGD)" project integrated and transformed natural resources and geospatial information dispersed across various sectors and regions, established a logically unified and physically dispersed fundamental database, and developed a national integrated information database system supporting main e-government applications. Cross-sector e-government applications and services are realized to provide long-term, stable and standardized natural resources and geospatial fundamental information products and services for national e-government and public users.

  11. Primer on computers and information technology. Part two: an introduction to computer networking.

    PubMed

    Channin, D S; Chang, P J

    1997-01-01

    Computer networks are a way of connecting computers together such that they can exchange information. For this exchange to be successful, system behavior must be planned and specified very clearly at a number of different levels. Although there are many choices to be made at each level, often there are simple decisions that can be made to rapidly reduce the number of options. Planning is most important at the highest (application) and lowest (wiring) levels, whereas the middle levels must be specified to ensure compatibility. Because of the widespread use of the Internet, solutions based on Internet technologies are often cost-effective and should be considered when designing a network. As in all technical fields, consultation with experts (ie, computer networking specialists) may be worthwhile.

  12. Performance Evaluation in Network-Based Parallel Computing

    NASA Technical Reports Server (NTRS)

    Dezhgosha, Kamyar

    1996-01-01

    Network-based parallel computing is emerging as a cost-effective alternative for solving many problems which require the use of supercomputers or massively parallel computers. The primary objective of this project has been to conduct experimental research on performance evaluation for clustered parallel computing. First, a testbed was established by augmenting our existing network of Sun SPARCstations with PVM (Parallel Virtual Machine), a software system for linking clusters of machines. Second, a set of three basic applications was selected: a parallel search, a parallel sort, and a parallel matrix multiplication. These application programs were implemented in the C programming language under PVM. Third, we conducted performance evaluation under various configurations and problem sizes. Alternative parallel computing models and workload allocations for application programs were explored. The performance metric was limited to elapsed time or response time, which in the context of parallel computing can be expressed in terms of speedup. The results reveal that the overhead of communication latency between processes is in many cases the restricting factor in performance. That is, coarse-grain parallelism, which requires less frequent communication between processes, will result in higher performance in network-based computing. Finally, we are in the final stages of installing an Asynchronous Transfer Mode (ATM) switch and four ATM interfaces (each 155 Mbps), which will allow us to extend our study to newer applications, performance metrics, and configurations.

  13. Code 672 observational science branch computer networks

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Shirk, H. G.

    1988-01-01

    In general, networking increases productivity due to the speed of transmission, easy access to remote computers, ability to share files, and increased availability of peripherals. Two different networks within the Observational Science Branch are described in detail.

  14. Network and computing infrastructure for scientific applications in Georgia

    NASA Astrophysics Data System (ADS)

    Kvatadze, R.; Modebadze, Z.

    2016-09-01

    The status of the network and computing infrastructure and the services available to the research and education community of Georgia are presented. The Research and Educational Networking Association (GRENA) provides the following network services: Internet connectivity, network services, cyber security, technical support, etc. Computing resources used by the research teams are located at GRENA and at major state universities. The GE-01-GRENA site is included in the European Grid infrastructure. The paper also contains information about the programs of the Learning Center and the research and development projects in which GRENA participates.

  15. Recurrent Neural Network for Computing the Drazin Inverse.

    PubMed

    Stanimirović, Predrag S; Zivković, Ivan S; Wei, Yimin

    2015-11-01

    This paper presents a recurrent neural network (RNN) for computing the Drazin inverse of a real matrix in real time. The RNN is composed of n independent parts (subnetworks), where n is the order of the input matrix. These subnetworks can operate concurrently, so parallel and distributed processing can be achieved. In this way, computational advantages over the existing sequential algorithms can be attained in real-time applications. The RNN defined in this paper is convenient for implementation in an electronic circuit. The number of neurons in the neural network is the same as the number of elements in the output matrix, which represents the Drazin inverse. The difference between the proposed RNN and the existing ones for Drazin inverse computation lies in their network architecture and dynamics. The conditions that ensure the stability of the defined RNN, as well as its convergence toward the Drazin inverse, are considered. In addition, illustrative examples and examples of application to practical engineering problems are discussed to show the efficacy of the proposed neural network.

  16. Extreme Scale Computing to Secure the Nation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, D L; McGraw, J R; Johnson, J R

    2009-11-10

    Since the dawn of modern electronic computing in the mid 1940's, U.S. national security programs have been dominant users of every new generation of high-performance computer. Indeed, the first general-purpose electronic computer, ENIAC (the Electronic Numerical Integrator and Computer), was used to calculate the expected explosive yield of early thermonuclear weapons designs. Even the U.S. numerical weather prediction program, another early application for high-performance computing, was initially funded jointly by sponsors that included the U.S. Air Force and Navy, agencies interested in accurate weather predictions to support U.S. military operations. For the decades of the cold war, national security requirements continued to drive the development of high performance computing (HPC), including advancement of the computing hardware and development of sophisticated simulation codes to support weapons and military aircraft design, numerical weather prediction, as well as data-intensive applications such as cryptography and cybersecurity. U.S. national security concerns continue to drive the development of high-performance computers and software in the U.S., and in fact, events following the end of the cold war have driven an increase in the growth rate of computer performance at the high end of the market. This mainly derives from our nation's observance of a moratorium on underground nuclear testing beginning in 1992, followed by our voluntary adherence to the Comprehensive Test Ban Treaty (CTBT) beginning in 1995. The CTBT prohibits further underground nuclear tests, which in the past had been a key component of the nation's science-based program for assuring the reliability, performance and safety of U.S. nuclear weapons. In response to this change, the U.S. Department of Energy (DOE) initiated the Science-Based Stockpile Stewardship (SBSS) program in response to the Fiscal Year 1994 National Defense Authorization Act, which requires, 'in the absence of

  17. Get the Whole Story before You Plug into a Computer Network.

    ERIC Educational Resources Information Center

    Vernot, David

    1989-01-01

    Explains the myths and marvels of computer networks; cites how several schools are utilizing networking; and summarizes where the major computer companies stand today when it comes to networking. (MLF)

  18. The National Biomedical Communications Network as a Developing Structure *

    PubMed Central

    Davis, Ruth M.

    1971-01-01

    The National Biomedical Communications Network has evolved both from a set of conceptual recommendations over the last twelve years and an accumulation of needs manifesting themselves in the requests of members of the medical community. With a short history of three years, this network and its developing structure have exhibited most of the stresses of technology interfacing with customer groups, and of a structure attempting to build itself upon many existing fragmentary unconnected segments of a potentially viable resource-sharing capability. In addition to addressing these topics, the paper treats a design appropriate to any network devoted to information transfer in a special interest user community. It discusses fundamentals of network design, highlighting the network structure most appropriate to a national information network. Examples are given of cost analyses of information services and certain conjectures are offered concerning the roles of national networks. PMID:5542912

  19. United States National seismograph network

    USGS Publications Warehouse

    Masse, R.P.; Filson, J.R.; Murphy, A.

    1989-01-01

    The USGS National Earthquake Information Center (NEIC) has planned and is developing a broadband digital seismograph network for the United States. The network will consist of approximately 150 seismograph stations distributed across the contiguous 48 states and across Alaska, Hawaii, Puerto Rico and the Virgin Islands. Data transmission will be via two-way satellite telemetry from the network sites to a central recording facility at the NEIC in Golden, Colorado. The design goal for the network is the on-scale recording by at least five well-distributed stations of any seismic event of magnitude 2.5 or greater in all areas of the United States except possibly part of Alaska. All event data from the network will be distributed to the scientific community on compact disc with read-only memory (CD-ROM). © 1989.

  20. Computer networking at SLR stations

    NASA Technical Reports Server (NTRS)

    Novotny, Antonin

    1993-01-01

    There are several existing communication methods to deliver data from a satellite laser ranging (SLR) station to the SLR data center and back: telephone modem, telex, and computer networks. The SLR scientific community has been exploiting mainly INTERNET, BITNET/EARN, and SPAN. A total of 56 countries are connected to INTERNET, and the number of nodes is growing exponentially. The computer networks mentioned above and others are connected through the e-mail protocol. The scientific progress of SLR requires an increase in communication speed and in the amount of transmitted data. The TOPEX/POSEIDON test campaign required delivery of Quick Look data (1.7 kB/pass) from an SLR site to the SLR data center within 8 hours and full rate data (up to 500 kB/pass) within 24 hours. We developed networking for the remote SLR station in Helwan, Egypt. The reliable scheme for data delivery consists of compression of the MERIT2 format (up to 89 percent), encoding to ASCII files, and e-mail sending from the SLR station, followed by e-mail receiving, decoding, and decompression at the center. We propose to use the ZIP method for compression/decompression and the UUCODE method for ASCII encoding/decoding. This method will be useful for stations connected via telephone modems or commercial networks. Electronic delivery could solve the problem of full rate (FR) data reaching the SLR data center too late.
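The compress-then-encode pipeline proposed above can be sketched with Python's standard library, with zlib standing in for the ZIP method and binascii providing UUCODE-style ASCII encoding. The dummy MERIT2-style payload and the function names are hypothetical.

```python
# Sketch: compress ranging data, encode it as plain ASCII for e-mail
# transport, then reverse both steps at the data center. uuencoded lines
# carry at most 45 binary bytes each, hence the chunking.
import binascii
import zlib

def encode_for_email(data: bytes) -> str:
    packed = zlib.compress(data, level=9)
    lines = [binascii.b2a_uu(packed[i:i + 45]).decode("ascii")
             for i in range(0, len(packed), 45)]
    return "".join(lines)

def decode_from_email(text: str) -> bytes:
    packed = b"".join(binascii.a2b_uu(line) for line in text.splitlines())
    return zlib.decompress(packed)

merit2_record = b"7403501 93 06 12 00 01 23 456789 " * 100   # dummy payload
wire = encode_for_email(merit2_record)
assert wire.isascii() and decode_from_email(wire) == merit2_record
print(f"{len(merit2_record)} bytes -> {len(wire)} ASCII chars")
```

The ASCII step matters because 1990s e-mail gateways could not be trusted to pass 8-bit binary data unmodified; highly repetitive formats like MERIT2 also compress well, which is where the quoted 89 percent reduction comes from.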

  1. Computer networking at SLR stations

    NASA Astrophysics Data System (ADS)

    Novotny, Antonin

    1993-06-01

    There are several existing communication methods for delivering data from a satellite laser ranging (SLR) station to the SLR data center and back: telephone modem, telex, and computer networks. The SLR scientific community has mainly been using INTERNET, BITNET/EARN, and SPAN. A total of 56 countries are connected to INTERNET, and the number of nodes is growing exponentially. The computer networks mentioned above and others are interconnected through the e-mail protocol. The scientific progress of SLR requires increases in communication speed and in the amount of transmitted data. The TOPEX/POSEIDON test campaign required delivery of quick-look data (1.7 kB/pass) from an SLR site to the SLR data center within 8 hours and of full-rate data (up to 500 kB/pass) within 24 hours. We developed networking for the remote SLR station in Helwan, Egypt. The reliable scheme for data delivery consists of compression of the MERIT2 format (up to 89 percent), encoding to ASCII files, and e-mail sending from the SLR station, followed by e-mail receiving, decoding, and decompression at the center. We propose to use the ZIP method for compression/decompression and the UUCODE method for ASCII encoding/decoding. This method will be useful for stations connected via telephone modems or commercial networks. Electronic delivery could solve the problem of full-rate (FR) data arriving too late at the SLR data center.

  2. PRiFi Networking for Tracking-Resistant Mobile Computing

    DTIC Science & Technology

    2017-11-01

    PRiFi Networking for Tracking-Resistant Mobile Computing. Yale University, final technical report, November 2017 (approved for public release); reporting period February 2016 to May 2017, contract number FA8750-16-2-0034. Among its figures: "What We Have: A Cloud of Secret Mass Surveillance Processes."

  3. Computer Network Resources for Physical Geography Instruction.

    ERIC Educational Resources Information Center

    Bishop, Michael P.; And Others

    1993-01-01

    Asserts that the use of computer networks provides an important and effective resource for geography instruction. Describes the use of the Internet network in physical geography instruction. Provides an example of the use of Internet resources in a climatology/meteorology course. (CFR)

  4. Modernization of the Slovenian National Seismic Network

    NASA Astrophysics Data System (ADS)

    Vidrih, R.; Godec, M.; Gosar, A.; Sincic, P.; Tasic, I.; Zivcic, M.

    2003-04-01

    The Seismology Office of the Environmental Agency of the Republic of Slovenia is responsible for fast and reliable information about earthquakes originating in the area of Slovenia and nearby. In the year 2000 the project Modernization of the Slovenian National Seismic Network started. The purpose of the modernized seismic network is to enable fast and accurate automatic location of earthquakes, to determine earthquake parameters, and to collect data on local, regional and global earthquakes. The modernized network will be finished in the year 2004 and will consist of 25 seismic station subsystems based on Q730 remote broadband data loggers, transmitting data in real time to the Data Center in Ljubljana, where the Seismology Office is located. The remote broadband station subsystems include 16 surface broadband seismometers CMG-40T, 5 broadband seismometers CMG-40T with EpiSensor strong-motion accelerographs, and 4 borehole broadband seismometers CMG-40T, all with accurate timing provided by GPS receivers. The seismic network will cover the entire Slovenian territory, an area of 20,256 km2. The network is planned so that more seismic stations are located around larger urban centres and in regions of greater vulnerability (NW Slovenia, the Krsko-Brezice region). By the end of the year 2002, three old seismic stations had been modernized and ten new seismic stations built. All seismic stations transmit data to UNIX-based computers running Antelope system software. The data are transmitted in real time using TCP/IP protocols over the Government Wide Area Network. Real-time data are also exchanged with seismic networks in the neighbouring countries, where data are collected from seismic stations close to the Slovenian border. A typical seismic station consists of a seismic shaft with the sensor and the data acquisition system, and a service shaft with communication equipment (modem, router) and a power supply with a battery box, which provides energy in case of a power outage.

  5. Designs on a National Research Network.

    ERIC Educational Resources Information Center

    Walsh, John

    1988-01-01

    Discusses the addition of the National Aeronautics and Space Administration database to the National Science Foundation's NSFnet data communication network. Outlines the history of databases in the United States and enumerates proposed upgrades from a new Office of Science and Technology policy report. (TW)

  6. Computer memory: the LLL experience. [Octopus computer network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fletcher, J.G.

    1976-02-01

    Those aspects of Octopus computer network design are reviewed that relate to memory and storage. Emphasis is placed on the difficulties and problems that arise because of the limitations of present storage devices, and indications are made of the directions in which technological advance could be of most value. (auth)

  7. Providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer

    DOEpatents

    Archer, Charles J; Faraj, Ahmad A; Inglett, Todd A; Ratterman, Joseph D

    2013-04-16

    Methods, apparatus, and products are disclosed for providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer, each compute node connected to each adjacent compute node in the global combining network through a link, that include: receiving a network packet in a compute node, the network packet specifying a destination compute node; selecting, in dependence upon the destination compute node, at least one of the links for the compute node along which to forward the network packet toward the destination compute node; and forwarding the network packet along the selected link to the adjacent compute node connected to the compute node through the selected link.
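
    As a rough illustration of this kind of link selection (not the patented method itself), the sketch below routes packets on a binary tree: a node forwards down into whichever child subtree contains the destination, and otherwise forwards up to its parent. The node numbering and helper names are invented for the example.

```python
def parent(node):
    """Parent of a node in a binary tree rooted at 0 (None for the root)."""
    return None if node == 0 else (node - 1) // 2

def children(node, n):
    """Children of a node in an n-node binary tree."""
    return [c for c in (2 * node + 1, 2 * node + 2) if c < n]

def in_subtree(root, target):
    """True if target lies in the subtree rooted at root."""
    while target is not None and target != root:
        target = parent(target)
    return target == root

def next_hop(node, dest, n):
    """Select the link along which to forward a packet toward dest."""
    for c in children(node, n):
        if in_subtree(c, dest):
            return c              # forward down into the child's subtree
    return parent(node)           # otherwise forward up toward the root

def route(src, dest, n):
    """Full path taken by a packet from src to dest."""
    path = [src]
    while path[-1] != dest:
        path.append(next_hop(path[-1], dest, n))
    return path

# On a 7-node tree, node 3 reaches node 5 by going up through the root
assert route(3, 5, 7) == [3, 1, 0, 2, 5]
```

    Each node decides locally from the destination alone, which is the property the claim describes: selection of one link per hop, in dependence upon the destination compute node.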

  8. Neural Network Computing and Natural Language Processing.

    ERIC Educational Resources Information Center

    Borchardt, Frank

    1988-01-01

    Considers the application of neural network concepts to traditional natural language processing and demonstrates that neural network computing architecture can: (1) learn from actual spoken language; (2) observe rules of pronunciation; and (3) reproduce sounds from the patterns derived by its own processes. (Author/CB)

  9. Teaching Advanced Concepts in Computer Networks: VNUML-UM Virtualization Tool

    ERIC Educational Resources Information Center

    Ruiz-Martinez, A.; Pereniguez-Garcia, F.; Marin-Lopez, R.; Ruiz-Martinez, P. M.; Skarmeta-Gomez, A. F.

    2013-01-01

    In the teaching of computer networks the main problem that arises is the high price and limited number of network devices the students can work with in the laboratories. Nowadays, with virtualization we can overcome this limitation. In this paper, we present a methodology that allows students to learn advanced computer network concepts through…

  10. Providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archer, Charles J.; Faraj, Daniel A.; Inglett, Todd A.

    Methods, apparatus, and products are disclosed for providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer, each compute node connected to each adjacent compute node in the global combining network through a link, that include: receiving a network packet in a compute node, the network packet specifying a destination compute node; selecting, in dependence upon the destination compute node, at least one of the links for the compute node along which to forward the network packet toward the destination compute node; and forwarding the network packet along the selected link to the adjacent compute node connected to the compute node through the selected link.

  11. Computer-Based Semantic Network in Molecular Biology: A Demonstration.

    ERIC Educational Resources Information Center

    Callman, Joshua L.; And Others

    This paper analyzes the hardware and software features that would be desirable in a computer-based semantic network system for representing biology knowledge. It then describes in detail a prototype network of molecular biology knowledge that has been developed using Filevision software and a Macintosh computer. The prototype contains about 100…

  12. Networking Micro-Processors for Effective Computer Utilization in Nursing

    PubMed Central

    Mangaroo, Jewellean; Smith, Bob; Glasser, Jay; Littell, Arthur; Saba, Virginia

    1982-01-01

    Networking as a social entity has important implications for maximizing computer resources for improved utilization in nursing. This paper describes the process of networking complementary resources at three institutions: Prairie View A&M University, Texas A&M University, and the University of Texas School of Public Health, which has effected greater utilization of computers at the college. The results achieved in this project should have implications for nurses, users, and consumers in the development of computer resources.

  13. Computer Code for Transportation Network Design and Analysis

    DOT National Transportation Integrated Search

    1977-01-01

    This document describes the results of research into the application of the mathematical programming technique of decomposition to practical transportation network problems. A computer code called Catnap (for Control Analysis Transportation Network A...

  14. National Strategic Computing Initiative Strategic Plan

    DTIC Science & Technology

    2016-07-01

    The plan references the NITRD Big Data Senior Steering Group (BD SSG) and the National Nanotechnology Initiative (nano.gov). While not limited to neuromorphic technologies, the National Nanotechnology Initiative's first Grand Challenge seeks to achieve brain…

  15. Computer Networking with the Victorian Correspondence School.

    ERIC Educational Resources Information Center

    Conboy, Ian

    During 1985 the Education Department installed two-way radios in 44 remote secondary schools in Victoria, Australia, to improve turnaround time for correspondence assignments. Subsequently, teacher supervisors at Melbourne's Correspondence School sought ways to further augment audio interactivity with computer networking. Computer equipment was…

  16. The development of computer networks: First results from a microeconomic model

    NASA Astrophysics Data System (ADS)

    Maier, Gunther; Kaufmann, Alexander

    Computer networks like the Internet are gaining importance in social and economic life. The accelerating pace of the adoption of network technologies for business purposes is a rather recent phenomenon. Many applications are still in the early, sometimes even experimental, phase. Nevertheless, it seems certain that networks will change the socioeconomic structures we know today. This is the background for our special interest in the development of networks, in the role of spatial factors influencing the formation of networks, in the consequences of networks for spatial structures, and in the role of externalities. This paper discusses a simple economic model - based on a microeconomic calculus - that incorporates the main factors generating the growth of computer networks. The paper provides analytic results about the generation of computer networks and discusses (1) under what conditions economic factors will initiate the process of network formation, (2) the relationship between individual and social evaluation, and (3) the efficiency of a network generated by economic mechanisms.

  17. The National Network of Libraries of Medicine

    MedlinePlus

    New England Region: University of Massachusetts. "Bringing the World of Medical Information to Your Neighborhood," by Angela … D., Head, NN/LM National Network Office. The world's largest medical library is the National Library of …

  18. Distributed computation of graphics primitives on a transputer network

    NASA Technical Reports Server (NTRS)

    Ellis, Graham K.

    1988-01-01

    A method is developed for distributing the computation of graphics primitives on a parallel processing network. Off-the-shelf transputer boards are used to perform the graphics transformations and scan-conversion tasks that would normally be assigned to a single transputer based display processor. Each node in the network performs a single graphics primitive computation. Frequently requested tasks can be duplicated on several nodes. The results indicate that the current distribution of commands on the graphics network shows a performance degradation when compared to the graphics display board alone. A change to more computation per node for every communication (perform more complex tasks on each node) may cause the desired increase in throughput.

  19. Genetic networks and soft computing.

    PubMed

    Mitra, Sushmita; Das, Ranajit; Hayashi, Yoichi

    2011-01-01

    The analysis of gene regulatory networks provides enormous information on various fundamental cellular processes involving growth, development, hormone secretion, and cellular communication. Their extraction from available gene expression profiles is a challenging problem. Such reverse engineering of genetic networks offers insight into cellular activity toward prediction of adverse effects of new drugs or possible identification of new drug targets. Tasks such as classification, clustering, and feature selection enable efficient mining of knowledge about gene interactions in the form of networks. It is known that biological data is prone to different kinds of noise and ambiguity. Soft computing tools, such as fuzzy sets, evolutionary strategies, and neurocomputing, have been found to be helpful in providing low-cost, acceptable solutions in the presence of various types of uncertainties. In this paper, we survey the role of these soft methodologies and their hybridizations, for the purpose of generating genetic networks.

  20. Biological modelling of a computational spiking neural network with neuronal avalanches

    NASA Astrophysics Data System (ADS)

    Li, Xiumin; Chen, Qing; Xue, Fangzheng

    2017-05-01

    In recent years, an increasing number of studies have demonstrated that networks in the brain can self-organize into a critical state where dynamics exhibit a mixture of ordered and disordered patterns. This critical branching phenomenon is termed neuronal avalanches. It has been hypothesized that the homeostatic level balanced between stability and plasticity of this critical state may be the optimal state for performing diverse neural computational tasks. However, the critical region for high performance is narrow and sensitive for spiking neural networks (SNNs). In this paper, we investigated the role of the critical state in neural computations based on liquid-state machines, a biologically plausible computational neural network model for real-time computing. The computational performance of an SNN when operating at the critical state and, in particular, with spike-timing-dependent plasticity for updating synaptic weights is investigated. The network is found to show the best computational performance when it is subjected to critical dynamic states. Moreover, the active-neuron-dominant structure refined from synaptic learning can remarkably enhance the robustness of the critical state and further improve computational accuracy. These results may have important implications in the modelling of spiking neural networks with optimal computational performance. This article is part of the themed issue 'Mathematical methods in medicine: neuroscience, cardiology and pathology'.
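
    The critical branching phenomenon itself can be illustrated independently of the liquid-state-machine model studied in the paper. The sketch below is a toy branching process (an assumption of this illustration, not the authors' network): each active unit triggers up to two descendants with mean sigma, and at the critical value sigma = 1 avalanche sizes become heavy-tailed.

```python
import random

def avalanche_size(sigma, rng, cap=10_000):
    """Size of one avalanche: each active unit independently triggers
    up to two descendants, each with probability sigma / 2 (mean sigma)."""
    active, size = 1, 1
    while active and size < cap:
        nxt = 0
        for _ in range(active):
            nxt += (rng.random() < sigma / 2) + (rng.random() < sigma / 2)
        active = nxt
        size += nxt
    return size

rng = random.Random(0)
sub = [avalanche_size(0.5, rng) for _ in range(2000)]   # subcritical
crit = [avalanche_size(1.0, rng) for _ in range(2000)]  # critical
# Subcritical avalanches die out quickly (mean size 1 / (1 - sigma) = 2);
# at criticality the size distribution is heavy-tailed, so the mean is
# much larger even with the simulation cap in place.
assert sum(crit) / len(crit) > sum(sub) / len(sub)
```

    The narrow critical region the abstract mentions corresponds to how sharply the avalanche statistics change as sigma crosses 1.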

  1. ESTABLISHING A NATIONAL ENVIRONMENTAL PUBLIC HEALTH TRACKING NETWORK

    EPA Science Inventory

    This paper describes the CDC's efforts to develop a National Environmental Public Health Tracking Network (Tracking Network), with particular focus on air-related issues and collaboration with EPA. A Tracking Network is needed in the United States to improve the health of communit...

  2. Network Community Detection based on the Physarum-inspired Computational Framework.

    PubMed

    Gao, Chao; Liang, Mingxin; Li, Xianghua; Zhang, Zili; Wang, Zhen; Zhou, Zhili

    2016-12-13

    Community detection is a crucial problem in the structure analytics of complex networks, which can help us understand and predict the characteristics and functions of complex networks. Many methods, ranging from optimization-based algorithms to heuristic-based algorithms, have been proposed for solving such a problem. Due to the inherent complexity of identifying network structure, how to design an effective algorithm with higher accuracy and lower computational cost still remains an open problem. Inspired by the computational capability and positive feedback mechanism in the foraging process of Physarum, which is a large amoeba-like cell consisting of a dendritic network of tube-like pseudopodia, a general Physarum-based computational framework for community detection is proposed in this paper. Based on the proposed framework, the inter-community edges can be identified from the intra-community edges in a network and the positive feedback of the solving process in an algorithm can be further enhanced, which are used to improve the efficiency of original optimization-based and heuristic-based community detection algorithms, respectively. Some typical algorithms (e.g., genetic algorithm, ant colony optimization algorithm, and Markov clustering algorithm) and real-world datasets have been used to evaluate the efficiency of our proposed computational framework. Experiments show that the algorithms optimized by the Physarum-inspired computational framework perform better than the original ones, in terms of accuracy and computational cost. Moreover, a computational complexity analysis verifies the scalability of our framework.
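
    The Physarum-inspired framework itself is beyond a short sketch, but the underlying task it accelerates (separating intra-community from inter-community edges) can be illustrated with plain label propagation. The deterministic variant below is a hypothetical simplification for illustration, not the algorithm from the paper.

```python
def label_propagation(adj, max_iter=100):
    """Community detection by label propagation: every node repeatedly
    adopts the most frequent label among its neighbours (deterministic
    sweep; ties break toward the larger label)."""
    labels = {v: v for v in adj}          # start with one label per node
    for _ in range(max_iter):
        changed = False
        for v in sorted(adj):
            counts = {}
            for u in adj[v]:
                counts[labels[u]] = counts.get(labels[u], 0) + 1
            best = max(counts, key=lambda lab: (counts[lab], lab))
            if best != labels[v]:
                labels[v] = best
                changed = True
        if not changed:                   # labels have stabilized
            break
    return labels

# Two triangles joined by a single inter-community edge (2-3)
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
labels = label_propagation(adj)
assert labels[0] == labels[1] == labels[2]
assert labels[3] == labels[4] == labels[5]
assert labels[0] != labels[3]
```

    Edges whose endpoints end up with different labels (here, only 2-3) are the inter-community edges; the paper's framework strengthens exactly this kind of separation via positive feedback.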

  3. DISCRETE EVENT SIMULATION OF OPTICAL SWITCH MATRIX PERFORMANCE IN COMPUTER NETWORKS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Imam, Neena; Poole, Stephen W

    2013-01-01

    In this paper, we present application of a Discrete Event Simulator (DES) for performance modeling of optical switching devices in computer networks. Network simulators are valuable tools in situations where one cannot investigate the system directly. This situation may arise if the system under study does not exist yet or the cost of studying the system directly is prohibitive. Most available network simulators are based on the paradigm of discrete-event-based simulation. As computer networks become increasingly larger and more complex, sophisticated DES tool chains have become available for both commercial and academic research. Some well-known simulators are NS2, NS3, OPNET, and OMNEST. For this research, we have applied OMNEST for the purpose of simulating multi-wavelength performance of optical switch matrices in computer interconnection networks. Our results suggest that the application of DES to computer interconnection networks provides valuable insight into device performance and aids in topology and system optimization.

  4. The UNESCO Global Network of National Geoparks

    NASA Astrophysics Data System (ADS)

    Mc Keever, P.; Zouros, N.; Patzak, M.; Missotten, R.

    2009-12-01

    The UNESCO Global Network of National Geoparks was founded in 2004, following the model successfully established by the European Geoparks Network in 2000. It now comprises 63 members in 19 nations across the world. A Global Geopark is an area with geological heritage of international value where that heritage is being used for the sustainable economic benefit of the local inhabitants, primarily through education and tourism. Supported by IUGS and IUCN, the aim of the Global Geoparks Network is to facilitate exchange and sharing between members to assist in the protection and conservation of the geological heritage of our planet, but to do so in a way where local communities can take ownership of these special places and gain some sustainable economic benefit from them. While allowing for the sustainable economic development of geoparks, the network explicitly forbids the destruction or sale of the geological value of a geopark. This paper outlines the ethos of the Global Geoparks Network and describes the typical activities of geoparks and how the network functions. Using two examples, it also illustrates how members of the Global Geoparks Network provide good examples as tools not only for holistic nature conservation but also for economic development.

  5. Computing Tutte polynomials of contact networks in classrooms

    NASA Astrophysics Data System (ADS)

    Hincapié, Doracelly; Ospina, Juan

    2013-05-01

    Objective: The topological complexity of contact networks in classrooms and the potential transmission of an infectious disease were analyzed by sex and age. Methods: The Tutte polynomials, some topological properties and the number of spanning trees were used to algebraically compute the topological complexity. Computations were made with the Maple package GraphTheory. Published data of mutually reported social contacts within a classroom taken from primary school, consisting of children in the age ranges of 4-5, 7-8 and 10-11, were used. Results: The algebraic complexity of the Tutte polynomial and the probability of disease transmission increase with age. The contact networks are not bipartite graphs; gender segregation was observed, especially among younger children. Conclusion: Tutte polynomials are tools to understand the topology of contact networks and to derive numerical indexes of such topologies. It is possible to establish relationships between the Tutte polynomial of a given contact network and the potential transmission of an infectious disease within such a network.
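
    The deletion-contraction recurrence behind such computations can be evaluated at a point in a few lines of plain Python (a generic textbook sketch, not the Maple GraphTheory code used in the study): a loop contributes a factor y, a bridge a factor x, and any other edge splits into a deletion term plus a contraction term. T(1, 1) then counts spanning trees.

```python
def tutte(edges, x, y):
    """Evaluate the Tutte polynomial T(G; x, y) of a multigraph,
    given as an edge list (loops and parallel edges allowed)."""
    if not edges:
        return 1
    (u, v), rest = edges[0], edges[1:]
    if u == v:                          # loop: factor y
        return y * tutte(rest, x, y)
    if is_bridge(edges, u, v):          # bridge: factor x, then contract
        return x * tutte(contract(rest, u, v), x, y)
    # ordinary edge: delete it, plus contract it
    return tutte(rest, x, y) + tutte(contract(rest, u, v), x, y)

def contract(edges, u, v):
    """Merge endpoint v into u (the contracted edge is already removed)."""
    return [(u if a == v else a, u if b == v else b) for a, b in edges]

def is_bridge(edges, u, v):
    """Is edges[0] = (u, v) the only connection between u and v?"""
    rest = edges[1:]
    seen, stack = {u}, [u]
    while stack:                        # BFS from u over the other edges
        a = stack.pop()
        for p, q in rest:
            for nxt, other in ((p, q), (q, p)):
                if other == a and nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
    return v not in seen

# Triangle K3: T(x, y) = x^2 + x + y, so T(1, 1) = 3 spanning trees
tri = [(0, 1), (1, 2), (0, 2)]
assert tutte(tri, 1, 1) == 3
assert tutte(tri, 2, 2) == 8
```

    The recursion is exponential in the number of edges, which is why the algebraic complexity of the polynomial grows quickly with the denser contact networks of older children.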

  6. Large-Scale Data Collection Metadata Management at the National Computation Infrastructure

    NASA Astrophysics Data System (ADS)

    Wang, J.; Evans, B. J. K.; Bastrakova, I.; Ryder, G.; Martin, J.; Duursma, D.; Gohar, K.; Mackey, T.; Paget, M.; Siddeswara, G.

    2014-12-01

    Data Collection management has become an essential activity at the National Computation Infrastructure (NCI) in Australia. NCI's partners (CSIRO, Bureau of Meteorology, Australian National University, and Geoscience Australia), supported by the Australian Government and Research Data Storage Infrastructure (RDSI), have established a national data resource that is co-located with high-performance computing. This paper addresses the metadata management of these data assets over their lifetime. NCI manages 36 data collections (10+ PB) categorised as earth system sciences, climate and weather model data assets and products, earth and marine observations and products, geosciences, terrestrial ecosystem, water management and hydrology, astronomy, social science and biosciences. The data is largely sourced from NCI partners, the custodians of many of the national scientific records, and major research community organisations. The data is made available in a HPC and data-intensive environment - a ~56000 core supercomputer, virtual labs on a 3000 core cloud system, and data services. By assembling these large national assets, new opportunities have arisen to harmonise the data collections, making a powerful cross-disciplinary resource. To support the overall management, a Data Management Plan (DMP) has been developed to record the workflows, procedures, key contacts, and responsibilities. The DMP has fields that can be exported to the ISO 19115 schema and to the collection-level catalogue of GeoNetwork. The subset or file-level metadata catalogues are linked with the collection level through parent-child relationship definitions using UUIDs. A number of tools have been developed that support interactive metadata management, bulk loading of data, and support for computational workflows or data pipelines. NCI creates persistent identifiers for each of the assets. The data collection is tracked over its lifetime, and the recognition of the data providers, data owners, data
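
    The parent-child linkage described above can be sketched as follows. The record fields and dataset names are hypothetical illustrations, not NCI's actual catalogue schema.

```python
import uuid
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Record:
    """One catalogue entry; file-level records point to their
    collection-level parent via its UUID."""
    title: str
    parent: Optional[str] = None                  # UUID of parent collection
    uid: str = field(default_factory=lambda: str(uuid.uuid4()))

# Collection-level record (the level exported to an ISO 19115-style catalogue)
collection = Record("Climate model output collection")          # hypothetical
# File-level records link back to the collection through its UUID
files = [Record(f"ocean_temp_{y}.nc", parent=collection.uid)    # hypothetical
         for y in (2012, 2013)]

assert all(f.parent == collection.uid for f in files)
assert len({collection.uid, *(f.uid for f in files)}) == 3      # UUIDs unique
```

    Because every record carries its own persistent identifier, file-level entries can be harvested, moved, or re-catalogued without losing their link to the collection-level metadata.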

  7. The "Golden Projects": China's National Networking Initiative.

    ERIC Educational Resources Information Center

    Lovelock, Peter; Clark, Theodore C.; Petrazzini, Ben A.

    1996-01-01

    For China, information technology and communications networks are a new solution to an old problem, reconstituting hierarchical state power. This article examines China's National Networking Initiative, "Golden Projects," within the context of economic and political reform to demonstrate an alternative to traditional economic based…

  8. Spatiotemporal Dynamics and Reliable Computations in Recurrent Spiking Neural Networks

    NASA Astrophysics Data System (ADS)

    Pyle, Ryan; Rosenbaum, Robert

    2017-01-01

    Randomly connected networks of excitatory and inhibitory spiking neurons provide a parsimonious model of neural variability, but are notoriously unreliable for performing computations. We show that this difficulty is overcome by incorporating the well-documented dependence of connection probability on distance. Spatially extended spiking networks exhibit symmetry-breaking bifurcations and generate spatiotemporal patterns that can be trained to perform dynamical computations under a reservoir computing framework.

  9. Neural networks applications to control and computations

    NASA Technical Reports Server (NTRS)

    Luxemburg, Leon A.

    1994-01-01

    Several interrelated problems in the area of neural network computations are described. First, an interpolation problem is considered; then a control problem is reduced to a problem of interpolation by a neural network via a Lyapunov function approach; and finally a new learning method, faster than gradient descent, is introduced.

  10. Connectomic constraints on computation in feedforward networks of spiking neurons.

    PubMed

    Ramaswamy, Venkatakrishnan; Banerjee, Arunava

    2014-10-01

    Several efforts are currently underway to decipher the connectome or parts thereof in a variety of organisms. Ascertaining the detailed physiological properties of all the neurons in these connectomes, however, is out of the scope of such projects. It is therefore unclear to what extent knowledge of the connectome alone will advance a mechanistic understanding of computation occurring in these neural circuits, especially when the high-level function of the said circuit is unknown. We consider, here, the question of how the wiring diagram of neurons imposes constraints on what neural circuits can compute, when we cannot assume detailed information on the physiological response properties of the neurons. We call such constraints, which arise by virtue of the connectome, connectomic constraints on computation. For feedforward networks equipped with neurons that obey a deterministic spiking neuron model which satisfies a small number of properties, we ask if, just by knowing the architecture of a network, we can rule out computations that it could be doing, no matter what response properties each of its neurons may have. We show results of this form for certain classes of network architectures. On the other hand, we also prove that with the limited set of properties assumed for our model neurons, there are fundamental limits to the constraints imposed by network structure. Thus, our theory suggests that while connectomic constraints might restrict the computational ability of certain classes of network architectures, we may require more elaborate information on the properties of neurons in the network before we can discern such results for other classes of networks.

  11. A National Perspective on Women Owning Woodlands (WOW) Networks

    ERIC Educational Resources Information Center

    Huff, Emily S.

    2017-01-01

    This article provides a national overview of women owning woodlands (WOW) networks and the barriers and successes they encounter. Qualitative interview data with key network leaders were used for increasing understanding of how these networks operate. Network leaders were all connected professionally, and all successful WOW networks involved…

  12. Biological modelling of a computational spiking neural network with neuronal avalanches.

    PubMed

    Li, Xiumin; Chen, Qing; Xue, Fangzheng

    2017-06-28

    In recent years, an increasing number of studies have demonstrated that networks in the brain can self-organize into a critical state where dynamics exhibit a mixture of ordered and disordered patterns. This critical branching phenomenon is termed neuronal avalanches. It has been hypothesized that the homeostatic level balanced between stability and plasticity of this critical state may be the optimal state for performing diverse neural computational tasks. However, the critical region for high performance is narrow and sensitive for spiking neural networks (SNNs). In this paper, we investigated the role of the critical state in neural computations based on liquid-state machines, a biologically plausible computational neural network model for real-time computing. The computational performance of an SNN when operating at the critical state and, in particular, with spike-timing-dependent plasticity for updating synaptic weights is investigated. The network is found to show the best computational performance when it is subjected to critical dynamic states. Moreover, the active-neuron-dominant structure refined from synaptic learning can remarkably enhance the robustness of the critical state and further improve computational accuracy. These results may have important implications in the modelling of spiking neural networks with optimal computational performance. This article is part of the themed issue 'Mathematical methods in medicine: neuroscience, cardiology and pathology'. © 2017 The Author(s).

  13. Optimization of an interactive distributive computer network

    NASA Technical Reports Server (NTRS)

    Frederick, V.

    1985-01-01

    The activities under a cooperative agreement for the development of a computer network are briefly summarized. Research activities covered are: computer operating systems optimization and integration; software development and implementation of the IRIS (Infrared Imaging of Shuttle) Experiment; and software design, development, and implementation of the APS (Aerosol Particle System) Experiment.

  14. Chinese Herbal Medicine Meets Biological Networks of Complex Diseases: A Computational Perspective

    PubMed Central

    Gu, Shuo

    2017-01-01

    With the rapid development of cheminformatics, computational biology, and systems biology, great progress has been made recently in the computational research of Chinese herbal medicine, with in-depth understanding towards pharmacognosy. This paper summarizes these studies in the aspects of computational methods, traditional Chinese medicine (TCM) compound databases, and TCM network pharmacology. Furthermore, we chose the arachidonic acid metabolic network as a case study to demonstrate the regulatory function of herbal medicine in the treatment of inflammation at the network level. Finally, a computational workflow for the network-based TCM study, derived from our previous successful applications, is proposed. PMID:28690664

  15. Chinese Herbal Medicine Meets Biological Networks of Complex Diseases: A Computational Perspective.

    PubMed

    Gu, Shuo; Pei, Jianfeng

    2017-01-01

    With the rapid development of cheminformatics, computational biology, and systems biology, great progress has been made recently in the computational study of Chinese herbal medicine, with an in-depth understanding of pharmacognosy. This paper summarizes these studies in terms of computational methods, traditional Chinese medicine (TCM) compound databases, and TCM network pharmacology. Furthermore, we chose the arachidonic acid metabolic network as a case study to demonstrate the regulatory function of herbal medicine in the treatment of inflammation at the network level. Finally, a computational workflow for network-based TCM studies, derived from our previous successful applications, is proposed.

  16. Convolutional networks for fast, energy-efficient neuromorphic computing.

    PubMed

    Esser, Steven K; Merolla, Paul A; Arthur, John V; Cassidy, Andrew S; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J; McKinstry, Jeffrey L; Melano, Timothy; Barch, Davis R; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D; Modha, Dharmendra S

    2016-10-11

    Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware's underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer.

  17. The University of Michigan's Computer-Aided Engineering Network.

    ERIC Educational Resources Information Center

    Atkins, D. E.; Olsen, Leslie A.

    1986-01-01

    Presents an overview of the Computer-Aided Engineering Network (CAEN) of the University of Michigan. Describes its arrangement of workstations, communication networks, and servers. Outlines the factors considered in hardware and software decision making. Reviews the program's impact on students. (ML)

  18. ARACHNE: A neural-neuroglial network builder with remotely controlled parallel computing

    PubMed Central

    Rusakov, Dmitri A.; Savtchenko, Leonid P.

    2017-01-01

    Creating and running realistic models of neural networks has hitherto been a task for computing professionals rather than experimental neuroscientists. This is mainly because such networks usually engage substantial computational resources, the handling of which requires specific programing skills. Here we put forward a newly developed simulation environment ARACHNE: it enables an investigator to build and explore cellular networks of arbitrary biophysical and architectural complexity using the logic of NEURON and a simple interface on a local computer or a mobile device. The interface can control, through the internet, an optimized computational kernel installed on a remote computer cluster. ARACHNE can combine neuronal (wired) and astroglial (extracellular volume-transmission driven) network types and adopt realistic cell models from the NEURON library. The program and documentation (current version) are available at GitHub repository https://github.com/LeonidSavtchenko/Arachne under the MIT License (MIT). PMID:28362877

  19. Program Spotlight: National Outreach Network's Community Health Educators

    Cancer.gov

    National Outreach Network of Community Health Educators located at Community Network Program Centers, Partnerships to Advance Cancer Health Equity, and NCI-designated cancer centers help patients and their families receive survivorship support.

  20. National Seismic Network of Georgia

    NASA Astrophysics Data System (ADS)

    Tumanova, N.; Kakhoberashvili, S.; Omarashvili, V.; Tserodze, M.; Akubardia, D.

    2016-12-01

    Georgia, as part of the Southern Caucasus, is a tectonically active and structurally complex region. It is one of the most active segments of the Alpine-Himalayan collision belt. The deformation and the associated seismicity are due to the continent-continent collision between the Arabian and Eurasian plates. Seismic monitoring of the country and the quality of seismic data are the major tools for rapid-response policy, population safety, basic scientific research, and, ultimately, the sustainable development of the country. The National Seismic Network of Georgia has been developing since the end of the 19th century; the digital era of the network started in 2003. Currently, continuous data streams from 25 stations are acquired and analyzed in real time. The data are combined to calculate rapid locations and magnitudes for earthquakes. Information on larger events (Ml>=3.5) is simultaneously transferred to the website of the monitoring center and to the relevant governmental agencies. To improve rapid earthquake location and magnitude estimation, the seismic network was enhanced by installing 7 new stations. Each new station is equipped with coupled broadband and strong-motion seismometers as well as a permanent GPS system. To select the sites for the 7 new base stations, we used standard network optimization techniques, taking into account the geometry of the existing seismic network and the topographic conditions of each site. For each site we studied the local geology (Vs30 was mandatory for each site), the local noise level, and seismic vault construction parameters. Because of the country's elevation, some stations were installed in the high mountains and are not accessible in winter due to heavy snow conditions. To secure online data transmission we used satellite data transmission as well as cell data network coverage from different local companies. As a result, we already have improved earthquake locations and event magnitudes.

  1. Calculating a checksum with inactive networking components in a computing system

    DOEpatents

    Aho, Michael E; Chen, Dong; Eisley, Noel A; Gooding, Thomas M; Heidelberger, Philip; Tauferner, Andrew T

    2014-12-16

    Calculating a checksum utilizing inactive networking components in a computing system, including: identifying, by a checksum distribution manager, an inactive networking component, wherein the inactive networking component includes a checksum calculation engine for computing a checksum; sending, to the inactive networking component by the checksum distribution manager, metadata describing a block of data to be transmitted by an active networking component; calculating, by the inactive networking component, a checksum for the block of data; transmitting, to the checksum distribution manager from the inactive networking component, the checksum for the block of data; and sending, by the active networking component, a data communications message that includes the block of data and the checksum for the block of data.
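The claim above delegates the checksum computation itself to an idle component's calculation engine. The patent does not specify an algorithm; as an illustrative assumption, the 16-bit ones-complement sum used by the TCP/IP Internet checksum can be sketched:

```python
def ones_complement_checksum(data: bytes) -> int:
    """16-bit ones-complement checksum (TCP/IP style) over a byte block."""
    if len(data) % 2:          # pad odd-length blocks with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
    return ~total & 0xFFFF

checksum = ones_complement_checksum(b"hello world")
```

In the scheme described, the distribution manager would pass metadata identifying the block (e.g., a buffer offset and length), and the inactive component would return this value to be placed in the outgoing data communications message.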

  2. Calculating a checksum with inactive networking components in a computing system

    DOEpatents

    Aho, Michael E; Chen, Dong; Eisley, Noel A; Gooding, Thomas M; Heidelberger, Philip; Tauferner, Andrew T

    2015-01-27

    Calculating a checksum utilizing inactive networking components in a computing system, including: identifying, by a checksum distribution manager, an inactive networking component, wherein the inactive networking component includes a checksum calculation engine for computing a checksum; sending, to the inactive networking component by the checksum distribution manager, metadata describing a block of data to be transmitted by an active networking component; calculating, by the inactive networking component, a checksum for the block of data; transmitting, to the checksum distribution manager from the inactive networking component, the checksum for the block of data; and sending, by the active networking component, a data communications message that includes the block of data and the checksum for the block of data.

  3. National Stream Quality Accounting Network and National Monitoring Network Basin Boundary Geospatial Dataset, 2008–13

    USGS Publications Warehouse

    Baker, Nancy T.

    2011-01-01

    This report and the accompanying geospatial data were created to assist in analysis and interpretation of water-quality data provided by the U.S. Geological Survey's National Stream Quality Accounting Network (NASQAN) and by the U.S. Coastal Waters and Tributaries National Monitoring Network (NMN), which is a cooperative monitoring program of Federal, regional, and State agencies. The report describes the methods used to develop the geospatial data, which was primarily derived from the National Watershed Boundary Dataset. The geospatial data contains polygon shapefiles of basin boundaries for 33 NASQAN and 5 NMN streamflow and water-quality monitoring stations. In addition, 30 polygon shapefiles of the closed and noncontributing basins contained within the NASQAN or NMN boundaries are included. Also included is a point shapefile of the NASQAN and NMN monitoring stations and associated basin and station attributes. Geospatial data for basin delineations, associated closed and noncontributing basins, and monitoring station locations are available at http://water.usgs.gov/GIS/metadata/usgswrd/XML/ds641_nasqan_wbd12.xml.

  4. 34 CFR 412.4 - What is the National Network of Directors Council?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 3 2010-07-01 2010-07-01 false What is the National Network of Directors Council? 412...) OFFICE OF VOCATIONAL AND ADULT EDUCATION, DEPARTMENT OF EDUCATION NATIONAL NETWORK FOR CURRICULUM COORDINATION IN VOCATIONAL AND TECHNICAL EDUCATION General § 412.4 What is the National Network of Directors...

  5. 34 CFR 412.4 - What is the National Network of Directors Council?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 34 Education 3 2011-07-01 2011-07-01 false What is the National Network of Directors Council? 412...) OFFICE OF VOCATIONAL AND ADULT EDUCATION, DEPARTMENT OF EDUCATION NATIONAL NETWORK FOR CURRICULUM COORDINATION IN VOCATIONAL AND TECHNICAL EDUCATION General § 412.4 What is the National Network of Directors...

  6. Manual for Museum Computer Network GRIPHOS Application.

    ERIC Educational Resources Information Center

    Vance, David

    This is the second in a series of manuals prepared by the Museum Computer Network explaining the use of General Retrieval and Information Processor for Humanities Oriented Studies (GRIPHOS). The user with little or no background in electronic data processing is introduced to the use of the various computer programs of the GRIPHOS system and the…

  7. National Fusion Collaboratory: Grid Computing for Simulations and Experiments

    NASA Astrophysics Data System (ADS)

    Greenwald, Martin

    2004-05-01

    The National Fusion Collaboratory Project is creating a computational grid designed to advance scientific understanding and innovation in magnetic fusion research by facilitating collaborations, enabling more effective integration of experiments, theory and modeling and allowing more efficient use of experimental facilities. The philosophy of FusionGrid is that data, codes, analysis routines, visualization tools, and communication tools should be thought of as network available services, easily used by the fusion scientist. In such an environment, access to services is stressed rather than portability. By building on a foundation of established computer science toolkits, deployment time can be minimized. These services all share the same basic infrastructure that allows for secure authentication and resource authorization which allows stakeholders to control their own resources such as computers, data and experiments. Code developers can control intellectual property, and fair use of shared resources can be demonstrated and controlled. A key goal is to shield scientific users from the implementation details such that transparency and ease-of-use are maximized. The first FusionGrid service deployed was the TRANSP code, a widely used tool for transport analysis. Tools for run preparation, submission, monitoring and management have been developed and shared among a wide user base. This approach saves user sites from the laborious effort of maintaining such a large and complex code while at the same time reducing the burden on the development team by avoiding the need to support a large number of heterogeneous installations. Shared visualization and A/V tools are being developed and deployed to enhance long-distance collaborations. These include desktop versions of the Access Grid, a highly capable multi-point remote conferencing tool and capabilities for sharing displays and analysis tools over local and wide-area networks.

  8. Computer network defense through radial wave functions

    NASA Astrophysics Data System (ADS)

    Malloy, Ian J.

    The purpose of this research is to synthesize basic and fundamental findings in quantum computing as applied to the attack and defense of conventional computer networks. The concept focuses on the use of radio waves as a shield for, and an attack against, traditional computers. A logic bomb is analogous to a landmine in a computer network, and implementing non-trivial mitigation for it would aid computer network defense. As has been seen in kinetic warfare, the use of landmines has been devastating to geopolitical regions, in that they are severely difficult for a civilian to avoid triggering given the unknown position of a landmine. Thus, understanding a logic bomb is relevant and has corollaries to quantum mechanics as well. The research synthesizes quantum logic phase shifts, in certain respects, using the Dynamic Data Exchange protocol in software written for this work, as well as a C-NOT gate applied to a virtual quantum circuit environment by implementing a Quantum Fourier Transform. The research applies the principles of coherence and entanglement from quantum physics, the concept of expert systems in artificial intelligence, principles of prime-number-based cryptography with trapdoor functions, and modeling of radio wave propagation against an event with unknown parameters. The result is a program relying on the artificial intelligence concept of an expert system, in conjunction with trigger events, for a trapdoor function relying on infinite recursion, as well as system mechanics for elliptic curve cryptography along orbital angular momenta. Here trapdoor denotes both the form of cipher and the implied relationship to logic bombs.

  9. A Novel College Network Resource Management Method using Cloud Computing

    NASA Astrophysics Data System (ADS)

    Lin, Chen

    At present, the information construction of colleges mainly comprises the construction of college networks and management information systems, and many problems arise during this process. Cloud computing is a development of distributed processing, parallel processing, and grid computing: data are stored in the cloud, software and services are placed in the cloud and built on top of various standards and protocols, and resources can be accessed through all kinds of devices. This article introduces cloud computing and its functions, then analyzes the existing problems of college network resource management; cloud computing technology and methods are then applied in the construction of a college information-sharing platform.

  10. Augmenting computer networks

    NASA Technical Reports Server (NTRS)

    Bokhari, S. H.; Raza, A. D.

    1984-01-01

    Three methods of augmenting computer networks by adding at most one link per processor are discussed: (1) A tree of N nodes may be augmented such that the resulting graph has diameter no greater than 4 log2((N+2)/3) - 2. This O(N^3) algorithm can be applied to any spanning tree of a connected graph to reduce the diameter of that graph to O(log N); (2) Given a binary tree T and a chain C of N nodes each, C may be augmented to produce C' so that T is a subgraph of C'. This algorithm is O(N) and may be used to produce augmented chains or rings that have diameter no greater than 2 log2((N+2)/3) and are planar; (3) Any rectangular two-dimensional 4 (8) nearest-neighbor array of size N = 2^k may be augmented so that it can emulate a single-step shuffle-exchange network of size N/2 in 3(t) time steps.
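Result (1) concerns reducing graph diameter to O(log N) by adding links. The quantity itself can be computed by breadth-first search from every node. The sketch below is illustrative only and is not the paper's augmentation algorithm; it merely shows the diameter dropping when a single link is added to a chain:

```python
from collections import deque

def diameter(adj: dict[int, list[int]]) -> int:
    """Diameter of a connected undirected graph via BFS from each node."""
    def ecc(src: int) -> int:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return max(dist.values())
    return max(ecc(v) for v in adj)

# A chain 0-1-2-3 has diameter 3; adding the single link 0-3 reduces it to 2.
chain = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
assert diameter(chain) == 3
chain[0].append(3)
chain[3].append(0)
assert diameter(chain) == 2
```

This all-pairs BFS runs in O(V(V+E)); the paper's contribution is the far harder question of which links to add to guarantee a logarithmic bound.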

  11. Biomedical informatics research network: building a national collaboratory to hasten the derivation of new understanding and treatment of disease.

    PubMed

    Grethe, Jeffrey S; Baru, Chaitan; Gupta, Amarnath; James, Mark; Ludaescher, Bertram; Martone, Maryann E; Papadopoulos, Philip M; Peltier, Steven T; Rajasekar, Arcot; Santini, Simone; Zaslavsky, Ilya N; Ellisman, Mark H

    2005-01-01

    Through support from the National Institutes of Health's National Center for Research Resources, the Biomedical Informatics Research Network (BIRN) is pioneering the use of advanced cyberinfrastructure for medical research. By synchronizing developments in advanced wide area networking, distributed computing, distributed database federation, and other emerging capabilities of e-science, the BIRN has created a collaborative environment that is paving the way for biomedical research and clinical information management. The BIRN Coordinating Center (BIRN-CC) is orchestrating the development and deployment of key infrastructure components for immediate and long-range support of biomedical and clinical research being pursued by domain scientists in three neuroimaging test beds.

  12. Six networks on a universal neuromorphic computing substrate.

    PubMed

    Pfeil, Thomas; Grübl, Andreas; Jeltsch, Sebastian; Müller, Eric; Müller, Paul; Petrovici, Mihai A; Schmuker, Michael; Brüderle, Daniel; Schemmel, Johannes; Meier, Karlheinz

    2013-01-01

    In this study, we present a highly configurable neuromorphic computing substrate and use it for emulating several types of neural networks. At the heart of this system lies a mixed-signal chip, with analog implementations of neurons and synapses and digital transmission of action potentials. Major advantages of this emulation device, which has been explicitly designed as a universal neural network emulator, are its inherent parallelism and high acceleration factor compared to conventional computers. Its configurability allows the realization of almost arbitrary network topologies and the use of widely varied neuronal and synaptic parameters. Fixed-pattern noise inherent to analog circuitry is reduced by calibration routines. An integrated development environment allows neuroscientists to operate the device without any prior knowledge of neuromorphic circuit design. As a showcase for the capabilities of the system, we describe the successful emulation of six different neural networks which cover a broad spectrum of both structure and functionality.

  13. Six Networks on a Universal Neuromorphic Computing Substrate

    PubMed Central

    Pfeil, Thomas; Grübl, Andreas; Jeltsch, Sebastian; Müller, Eric; Müller, Paul; Petrovici, Mihai A.; Schmuker, Michael; Brüderle, Daniel; Schemmel, Johannes; Meier, Karlheinz

    2013-01-01

    In this study, we present a highly configurable neuromorphic computing substrate and use it for emulating several types of neural networks. At the heart of this system lies a mixed-signal chip, with analog implementations of neurons and synapses and digital transmission of action potentials. Major advantages of this emulation device, which has been explicitly designed as a universal neural network emulator, are its inherent parallelism and high acceleration factor compared to conventional computers. Its configurability allows the realization of almost arbitrary network topologies and the use of widely varied neuronal and synaptic parameters. Fixed-pattern noise inherent to analog circuitry is reduced by calibration routines. An integrated development environment allows neuroscientists to operate the device without any prior knowledge of neuromorphic circuit design. As a showcase for the capabilities of the system, we describe the successful emulation of six different neural networks which cover a broad spectrum of both structure and functionality. PMID:23423583

  14. Biophysical constraints on the computational capacity of biochemical signaling networks

    NASA Astrophysics Data System (ADS)

    Wang, Ching-Hao; Mehta, Pankaj

    Biophysics fundamentally constrains the computations that cells can carry out. Here, we derive fundamental bounds on the computational capacity of biochemical signaling networks that utilize post-translational modifications (e.g. phosphorylation). To do so, we combine ideas from the statistical physics of disordered systems and the observation by Tony Pawson and others that the biochemistry underlying protein-protein interaction networks is combinatorial and modular. Our results indicate that the computational capacity of signaling networks is severely limited by the energetics of binding and the need to achieve specificity. We relate our results to one of the theoretical pillars of statistical learning theory, Cover's theorem, which places bounds on the computational capacity of perceptrons. PM and CHW were supported by a Simons Investigator in the Mathematical Modeling of Living Systems Grant, and NIH Grant No. 1R35GM119461 (both to PM).
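The reference to Cover's theorem above can be made concrete: for P points in general position in N dimensions, the number of dichotomies realizable by a homogeneous linear threshold unit is C(P, N) = 2 * sum over k from 0 to N-1 of (P-1 choose k). A short sketch (the helper name is ours, not the paper's):

```python
from math import comb

def cover_count(P: int, N: int) -> int:
    """Cover's counting function: number of dichotomies of P points in
    general position in R^N realizable by a homogeneous perceptron."""
    return 2 * sum(comb(P - 1, k) for k in range(N))

# Below capacity (P <= N) every one of the 2**P dichotomies is separable.
assert cover_count(4, 4) == 2 ** 4
# At P = 2N, exactly half of all dichotomies are separable (the capacity point).
assert cover_count(8, 4) == 2 ** 8 // 2
```

The abstract's claim is that binding energetics and specificity requirements impose analogous, but much tighter, bounds on biochemical networks.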

  15. Convolutional networks for fast, energy-efficient neuromorphic computing

    PubMed Central

    Esser, Steven K.; Merolla, Paul A.; Arthur, John V.; Cassidy, Andrew S.; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J.; McKinstry, Jeffrey L.; Melano, Timothy; Barch, Davis R.; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D.; Modha, Dharmendra S.

    2016-01-01

    Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware’s underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer. PMID:27651489

  16. Directly executable formal models of middleware for MANET and Cloud Networking and Computing

    NASA Astrophysics Data System (ADS)

    Pashchenko, D. V.; Sadeq Jaafar, Mustafa; Zinkin, S. A.; Trokoz, D. A.; Pashchenko, T. U.; Sinev, M. P.

    2016-04-01

    The article considers some “directly executable” formal models that are suitable for the specification of computing and networking in cloud environments and in other networks similar to wireless mobile ad hoc networks (MANETs). These models can be easily programmed and implemented on computer networks.

  17. The super-Turing computational power of plastic recurrent neural networks.

    PubMed

    Cabessa, Jérémie; Siegelmann, Hava T

    2014-12-01

    We study the computational capabilities of a biologically inspired neural model where the synaptic weights, the connectivity pattern, and the number of neurons can evolve over time rather than stay static. Our study focuses on the mere concept of plasticity of the model so that the nature of the updates is assumed to be not constrained. In this context, we show that the so-called plastic recurrent neural networks (RNNs) are capable of the precise super-Turing computational power--as the static analog neural networks--irrespective of whether their synaptic weights are modeled by rational or real numbers, and moreover, irrespective of whether their patterns of plasticity are restricted to bi-valued updates or expressed by any other more general form of updating. Consequently, the incorporation of only bi-valued plastic capabilities in a basic model of RNNs suffices to break the Turing barrier and achieve the super-Turing level of computation. The consideration of more general mechanisms of architectural plasticity or of real synaptic weights does not further increase the capabilities of the networks. These results support the claim that the general mechanism of plasticity is crucially involved in the computational and dynamical capabilities of biological neural networks. They further show that the super-Turing level of computation reflects in a suitable way the capabilities of brain-like models of computation.

  18. Sign: large-scale gene network estimation environment for high performance computing.

    PubMed

    Tamada, Yoshinori; Shimamura, Teppei; Yamaguchi, Rui; Imoto, Seiya; Nagasaki, Masao; Miyano, Satoru

    2011-01-01

    Our research group is currently developing software for estimating large-scale gene networks from gene expression data. The software, called SiGN, is specifically designed for the Japanese flagship supercomputer "K computer" which is planned to achieve 10 petaflops in 2012, and other high performance computing environments including Human Genome Center (HGC) supercomputer system. SiGN is a collection of gene network estimation software with three different sub-programs: SiGN-BN, SiGN-SSM and SiGN-L1. In these three programs, five different models are available: static and dynamic nonparametric Bayesian networks, state space models, graphical Gaussian models, and vector autoregressive models. All these models require a huge amount of computational resources for estimating large-scale gene networks and therefore are designed to be able to exploit the speed of 10 petaflops. The software will be available freely for "K computer" and HGC supercomputer system users. The estimated networks can be viewed and analyzed by Cell Illustrator Online and SBiP (Systems Biology integrative Pipeline). The software project web site is available at http://sign.hgc.jp/ .

  19. Prevalence and test characteristics of national health safety network ventilator-associated events.

    PubMed

    Lilly, Craig M; Landry, Karen E; Sood, Rahul N; Dunnington, Cheryl H; Ellison, Richard T; Bagley, Peter H; Baker, Stephen P; Cody, Shawn; Irwin, Richard S

    2014-09-01

    The primary aim of the study was to measure the test characteristics of the National Health Safety Network ventilator-associated event/ventilator-associated condition constructs for detecting ventilator-associated pneumonia. Its secondary aims were to report the clinical features of patients with National Health Safety Network ventilator-associated event/ventilator-associated condition, measure costs of surveillance, and its susceptibility to manipulation. Prospective cohort study. Two inpatient campuses of an academic medical center. Eight thousand four hundred eight mechanically ventilated adults discharged from an ICU. None. The National Health Safety Network ventilator-associated event/ventilator-associated condition constructs detected less than a third of ventilator-associated pneumonia cases with a sensitivity of 0.325 and a positive predictive value of 0.07. Most National Health Safety Network ventilator-associated event/ventilator-associated condition cases (93%) did not have ventilator-associated pneumonia or other hospital-acquired complications; 71% met the definition for acute respiratory distress syndrome. Similarly, most patients with National Health Safety Network probable ventilator-associated pneumonia did not have ventilator-associated pneumonia because radiographic criteria were not met. National Health Safety Network ventilator-associated event/ventilator-associated condition rates were reduced 93% by an unsophisticated manipulation of ventilator management protocols. The National Health Safety Network ventilator-associated event/ventilator-associated condition constructs failed to detect many patients who had ventilator-associated pneumonia, detected many cases that did not have a hospital complication, and were susceptible to manipulation. National Health Safety Network ventilator-associated event/ventilator-associated condition surveillance did not perform as well as ventilator-associated pneumonia surveillance and had several undesirable

  20. Computing all hybridization networks for multiple binary phylogenetic input trees.

    PubMed

    Albrecht, Benjamin

    2015-07-30

    The computation of phylogenetic trees on the same set of species that are based on different orthologous genes can lead to incongruent trees. One possible explanation for this behavior is interspecific hybridization events recombining genes of different species. An important approach to analyze such events is the computation of hybridization networks. This work presents the first algorithm computing the hybridization number as well as a set of representative hybridization networks for multiple binary phylogenetic input trees on the same set of taxa. To improve its practical runtime, we show how this algorithm can be parallelized. Moreover, we demonstrate the efficiency of the software Hybroscale, containing an implementation of our algorithm, by comparing it to PIRNv2.0, which is so far the best available software computing the exact hybridization number for multiple binary phylogenetic trees on the same set of taxa. The algorithm is part of the software Hybroscale, which was developed specifically for the investigation of hybridization networks including their computation and visualization. Hybroscale is freely available(1) and runs on all three major operating systems. Our simulation study indicates that our approach is on average 100 times faster than PIRNv2.0. Moreover, we show how Hybroscale improves the interpretation of the reported hybridization networks by adding certain features to its graphical representation.

  1. A National Strategy for Civic Networking: A Vision of Change.

    ERIC Educational Resources Information Center

    Civille, Richard

    1993-01-01

    Presents a vision and a national strategy for civic networking based on the development of the National Information Infrastructure. Topics addressed include a public interest communications policy; benefits of civic networking, including improving services and reducing government costs, reducing poverty and health care costs, and improving…

  2. A Distributed Computing Network for Real-Time Systems.

    DTIC Science & Technology

    1980-11-03

    [Scanned DTIC front matter; OCR largely illegible. Recoverable details: Naval Underwater Systems Center, Newport, RI; report TD 5932; title: "A Distributed Computing Network for Real-Time Systems"; author (per OCR): Gordon E. Morson.]

  3. Configuring compute nodes of a parallel computer in an operational group into a plurality of independent non-overlapping collective networks

    DOEpatents

    Archer, Charles J.; Inglett, Todd A.; Ratterman, Joseph D.; Smith, Brian E.

    2010-03-02

    Methods, apparatus, and products are disclosed for configuring compute nodes of a parallel computer in an operational group into a plurality of independent non-overlapping collective networks, the compute nodes in the operational group connected together for data communications through a global combining network, that include: partitioning the compute nodes in the operational group into a plurality of non-overlapping subgroups; designating one compute node from each of the non-overlapping subgroups as a master node; and assigning, to the compute nodes in each of the non-overlapping subgroups, class routing instructions that organize the compute nodes in that non-overlapping subgroup as a collective network such that the master node is a physical root.
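As a rough illustration of the partitioning scheme this abstract describes, the following sketch splits an operational group into non-overlapping subgroups, designates one node per subgroup as master, and records per-node routing information. The function name, round-robin split, and routing-table layout are invented here for illustration; they are not the patent's implementation.

```python
def partition_collective_networks(nodes, num_subgroups):
    """Partition compute nodes into non-overlapping subgroups and
    designate the first node of each subgroup as its master (root).
    Illustrative sketch only, not the patented method."""
    # Round-robin split guarantees the subgroups are disjoint and cover all nodes.
    subgroups = [nodes[i::num_subgroups] for i in range(num_subgroups)]
    routing = {}
    for gid, group in enumerate(subgroups):
        master = group[0]  # designated master node acts as the physical root
        for node in group:
            routing[node] = {'subgroup': gid, 'root': master}
    return subgroups, routing

groups, routing = partition_collective_networks(list(range(8)), 2)
print(groups)       # [[0, 2, 4, 6], [1, 3, 5, 7]]
print(routing[3])   # {'subgroup': 1, 'root': 1}
```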

  4. Computers, Networks, and Desegregation at San Jose High Academy.

    ERIC Educational Resources Information Center

    Solomon, Gwen

    1987-01-01

    Describes magnet high school which was created in California to meet desegregation requirements and emphasizes computer technology. Highlights include local computer networks that connect science and music labs, the library/media center, business computer lab, writing lab, language arts skills lab, and social studies classrooms; software; teacher…

  5. 77 FR 33229 - Notice of Proposed Information Collection: Comment Request; National Resource Network

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-05

    ... Information Collection: Comment Request; National Resource Network AGENCY: Office of the Assistant Secretary... information: Title of Proposal: National Resource Network. OMB Control Number, if applicable: None... and reporting information related to the proposed National Resource Network. The U.S. Department of...

  6. SNAP: A computer program for generating symbolic network functions

    NASA Technical Reports Server (NTRS)

    Lin, P. M.; Alderson, G. E.

    1970-01-01

    The computer program SNAP (symbolic network analysis program) generates symbolic network functions for networks containing R, L, and C type elements and all four types of controlled sources. The program is efficient with respect to program storage and execution time. A discussion of the basic algorithms is presented, together with user's and programmer's guides.

  7. Synchronized Pair Configuration in Virtualization-Based Lab for Learning Computer Networks

    ERIC Educational Resources Information Center

    Kongcharoen, Chaknarin; Hwang, Wu-Yuin; Ghinea, Gheorghita

    2017-01-01

    More studies are concentrating on using virtualization-based labs to facilitate computer or network learning concepts. Some benefits are lower hardware costs and greater flexibility in reconfiguring computer and network environments. However, few studies have investigated effective mechanisms for using virtualization fully for collaboration.…

  8. Electrooptical adaptive switching network for the hypercube computer

    NASA Technical Reports Server (NTRS)

    Chow, E.; Peterson, J.

    1988-01-01

    An all-optical network design for the hyperswitch network using regular free-space interconnects between electronic processor nodes is presented. The adaptive routing model used is described, and an adaptive routing control example is presented. The design demonstrates that existing electrooptical techniques are sufficient for implementing efficient parallel architectures without the need for more complex means of implementing arbitrary interconnection schemes. The electrooptical hyperswitch network significantly improves the communication performance of the hypercube computer.
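The adaptive-routing idea behind a hyperswitch, at each hop forward toward any address bit that still differs from the destination, can be sketched for a plain binary hypercube. The function below (name and recursive enumeration are illustrative, not the paper's design) enumerates all minimal routes an adaptive router could choose among.

```python
def hypercube_routes(src, dst, dim):
    """All minimal routes between two nodes of a dim-cube, where node
    addresses are integers and neighbors differ in exactly one bit.
    At each hop, any still-differing bit may be flipped; adaptive
    routing picks among these options hop by hop."""
    if src == dst:
        return [[dst]]
    routes = []
    diff = src ^ dst  # bits in which the current node differs from dst
    for b in range(dim):
        if diff >> b & 1:
            nxt = src ^ (1 << b)  # flip one differing bit
            for tail in hypercube_routes(nxt, dst, dim):
                routes.append([src] + tail)
    return routes

paths = hypercube_routes(0b000, 0b011, 3)
print(paths)  # [[0, 1, 3], [0, 2, 3]]
```

The number of minimal routes equals the factorial of the Hamming distance, which is the redundancy an adaptive scheme exploits when a preferred link is busy.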

  9. 78 FR 8686 - Establishment of the National Freight Network

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-06

    ... Network AGENCY: Federal Highway Administration (FHWA), DOT. ACTION: Notice. SUMMARY: This notice defines the planned process for the designation of the national freight network as required by Section 1115 of... the initial designation of the primary freight network, the designation of additional miles critical...

  10. Privacy Issues of a National Research and Education Network.

    ERIC Educational Resources Information Center

    Katz, James E.; Graveman, Richard F.

    1991-01-01

    Discussion of the right to privacy of communications focuses on privacy expectations within a National Research and Education Network (NREN). Highlights include privacy needs in scientific and education communications; academic and research networks; network security and privacy concerns; protection strategies; and consequences of privacy…

  11. User's guide to the Octopus computer network (the SHOC manual)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, C.; Thompson, D.; Whitten, G.

    1977-07-18

    This guide explains how to enter, run, and debug programs on the Octopus network. It briefly describes the network's operation, and directs the reader to other documents for further information. It stresses those service programs that will be most useful in the long run; "quick" methods that have little flexibility are not discussed. The Octopus timesharing network gives the user access to four CDC 7600 computers, two CDC STAR computers, and a broad array of peripheral equipment, from any of 800 or so remote terminals. 16 figures, 7 tables.

  12. User's guide to the Octopus computer network (the SHOC manual)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, C.; Thompson, D.; Whitten, G.

    1976-10-07

    This guide explains how to enter, run, and debug programs on the Octopus network. It briefly describes the network's operation, and directs the reader to other documents for further information. It stresses those service programs that will be most useful in the long run; "quick" methods that have little flexibility are not discussed. The Octopus timesharing network gives the user access to four CDC 7600 computers, two CDC STAR computers, and a broad array of peripheral equipment, from any of 800 or so remote terminals. 8 figures, 4 tables.

  13. User's guide to the Octopus computer network (the SHOC manual)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, C.; Thompson, D.; Whitten, G.

    1975-06-02

    This guide explains how to enter, run, and debug programs on the Octopus network. It briefly describes the network's operation, and directs the reader to other documents for further information. It stresses those service programs that will be most useful in the long run; "quick" methods that have little flexibility are not discussed. The Octopus timesharing network gives the user access to four CDC 7600 computers and a broad array of peripheral equipment, from any of 800 remote terminals. Octopus will soon include the Laboratory's STAR-100 computers. 9 figures, 5 tables. (auth)

  14. NASA's Participation in the National Computational Grid

    NASA Technical Reports Server (NTRS)

    Feiereisen, William J.; Zornetzer, Steve F. (Technical Monitor)

    1998-01-01

    Over the last several years it has become evident that the character of NASA's supercomputing needs has changed. One of the major missions of the agency is to support the design and manufacture of aero- and space-vehicles with technologies that will significantly reduce their cost. It is becoming clear that improvements in the process of aerospace design and manufacturing will require a high-performance information infrastructure that allows geographically dispersed teams to draw upon resources broader than traditional supercomputing. A computational grid draws together our information resources into one system. We can foresee the time when a grid will allow engineers and scientists to use the tools of supercomputers, databases, and online experimental devices in a virtual environment to collaborate with distant colleagues. The concept of a computational grid has been spoken of for many years, but several recent events are conspiring to allow us to actually build one. In late 1997 the National Science Foundation initiated the Partnerships for Advanced Computational Infrastructure (PACI), which is built around the idea of distributed high-performance computing. The Alliance, led by the National Computational Science Alliance (NCSA), and the National Partnership for Advanced Computational Infrastructure (NPACI), led by the San Diego Supercomputing Center, have been instrumental in drawing together the "Grid Community" to identify the technology bottlenecks and propose a research agenda to address them. During the same period NASA has begun to reformulate parts of two major high-performance computing research programs to concentrate on distributed high-performance computing and has banded together with the PACI centers to address the research agenda in common.

  15. 76 FR 38124 - Applications for New Awards; Americans With Disabilities Act (ADA) National Network Regional...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-29

    ...) National Network Regional Centers and ADA National Network Collaborative Research Projects AGENCY: Office... National Network Regional Centers (formerly the Disability Business Technical Assistance Centers (DBTACs), and ADA National Network Collaborative Research Projects. Notice inviting applications for new awards...

  16. A program to compute the soft Robinson-Foulds distance between phylogenetic networks.

    PubMed

    Lu, Bingxin; Zhang, Louxin; Leong, Hon Wai

    2017-03-14

    Over the past two decades, phylogenetic networks have been studied to model reticulate evolutionary events. The relationships among phylogenetic networks, phylogenetic trees, and clusters serve as the basis for reconstruction and comparison of phylogenetic networks. To understand these relationships, two problems arise: the tree containment problem, which asks whether a phylogenetic tree is displayed in a phylogenetic network, and the cluster containment problem, which asks whether a cluster is represented at a node in a phylogenetic network. Both problems are NP-complete. A fast exponential-time algorithm for the cluster containment problem on arbitrary networks is developed and implemented in C. The resulting program is further extended into a computer program for fast computation of the soft Robinson-Foulds distance between phylogenetic networks. Two computer programs are developed for facilitating reconstruction and validation of phylogenetic network models in evolutionary and comparative genomics. Our simulation tests indicated that they are fast enough for use in practice. Additionally, our simulation data demonstrate that the distribution of the soft Robinson-Foulds distance between phylogenetic networks is unlikely to be normal.
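For phylogenetic trees, where the displayed clusters (clades) can be enumerated directly, the Robinson-Foulds distance reduces to the size of the symmetric difference of the two cluster sets. The sketch below shows that base case only; it is not the paper's soft-RF algorithm for networks, and the tuple encoding is an assumption made here for illustration.

```python
def clusters(tree):
    """Non-trivial clusters (clades) of a tree given as nested tuples
    with leaf labels at the tips, e.g. ((('a','b'),'c'),'d')."""
    out = set()
    def walk(node):
        if not isinstance(node, tuple):
            return frozenset([node])          # a leaf
        leaves = frozenset().union(*(walk(ch) for ch in node))
        out.add(leaves)                       # record the clade below this node
        return leaves
    root = walk(tree)
    out.discard(root)  # the full leaf set is common to all trees on these taxa
    return out

def rf_distance(t1, t2):
    """Robinson-Foulds distance: clusters present in one tree but not the other."""
    return len(clusters(t1) ^ clusters(t2))

print(rf_distance(((('a', 'b'), 'c'), 'd'),
                  (('a', ('b', 'c')), 'd')))  # 2: {a,b} vs {b,c}
```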

  17. Tensor network method for reversible classical computation

    NASA Astrophysics Data System (ADS)

    Yang, Zhi-Cheng; Kourtis, Stefanos; Chamon, Claudio; Mucciolo, Eduardo R.; Ruckenstein, Andrei E.

    2018-03-01

    We develop a tensor network technique that can solve universal reversible classical computational problems, formulated as vertex models on a square lattice [Nat. Commun. 8, 15303 (2017), 10.1038/ncomms15303]. By encoding the truth table of each vertex constraint in a tensor, the total number of solutions compatible with partial inputs and outputs at the boundary can be represented as the full contraction of a tensor network. We introduce an iterative compression-decimation (ICD) scheme that performs this contraction efficiently. The ICD algorithm first propagates local constraints to longer ranges via repeated contraction-decomposition sweeps over all lattice bonds, thus achieving compression on a given length scale. It then decimates the lattice via coarse-graining tensor contractions. Repeated iterations of these two steps gradually collapse the tensor network and ultimately yield the exact tensor trace for large systems, without the need for manual control of tensor dimensions. Our protocol allows us to obtain the exact number of solutions for computations where a naive enumeration would take astronomically long times.
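The core encoding, where each gate's truth table becomes a 0/1 tensor and counting solutions becomes a tensor contraction, can be shown on a toy parity circuit. This naive full contraction over all indices is exactly what the paper's ICD compression-decimation scheme is designed to avoid for large lattices; the example is a sketch, not their algorithm.

```python
from itertools import product

# Truth table of XOR encoded as a 2x2x2 tensor: XOR[a][b][c] = 1 iff c == a ^ b.
XOR = [[[1 if (a ^ b) == c else 0 for c in (0, 1)]
        for b in (0, 1)] for a in (0, 1)]

# Count assignments of (x, y, z) satisfying x XOR y XOR z == 1 by
# contracting two XOR tensors over the shared internal wire w and
# pinning the circuit output to 1.
count = sum(XOR[x][y][w] * XOR[w][z][1]
            for x, y, z, w in product((0, 1), repeat=4))
print(count)  # 4 of the 8 input assignments have odd parity
```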

  18. PROFEAT Update: A Protein Features Web Server with Added Facility to Compute Network Descriptors for Studying Omics-Derived Networks.

    PubMed

    Zhang, P; Tao, L; Zeng, X; Qin, C; Chen, S Y; Zhu, F; Yang, S Y; Li, Z R; Chen, W P; Chen, Y Z

    2017-02-03

    The studies of biological, disease, and pharmacological networks are facilitated by systems-level investigations using computational tools. In particular, network descriptors developed in other disciplines have found increasing applications in the study of protein, gene regulatory, metabolic, disease, and drug-targeted networks. Public web servers provide facilities for computing network descriptors, but many descriptors are not covered, including those used or useful for biological studies. We upgraded the PROFEAT web server http://bidd2.nus.edu.sg/cgi-bin/profeat2016/main.cgi for computing up to 329 network descriptors and protein-protein interaction descriptors. PROFEAT network descriptors comprehensively describe the topological and connectivity characteristics of unweighted (uniform binding constants and molecular levels), edge-weighted (varying binding constants), node-weighted (varying molecular levels), edge-node-weighted (varying binding constants and molecular levels), and directed (oriented processes) networks. The usefulness of the network descriptors is illustrated by literature-reported studies of the biological networks derived from genome, interactome, transcriptome, metabolome, and diseasome profiles. Copyright © 2016 Elsevier Ltd. All rights reserved.
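A toy version of the unweighted topological descriptors such a server reports can be computed from an edge list alone. The function, the chosen descriptors (average degree and average local clustering), and the example graph are illustrative assumptions, not PROFEAT's actual definitions or output.

```python
from collections import defaultdict

def network_descriptors(edges):
    """A few simple topological descriptors of an unweighted,
    undirected network, in the spirit of connectivity descriptors."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    n = len(adj)
    degrees = {v: len(nb) for v, nb in adj.items()}
    # Local clustering: fraction of a node's neighbour pairs that are connected.
    def clustering(v):
        nb, k = adj[v], len(adj[v])
        if k < 2:
            return 0.0
        links = sum(1 for a in nb for b in nb if a < b and b in adj[a])
        return 2 * links / (k * (k - 1))
    return {'nodes': n,
            'avg_degree': sum(degrees.values()) / n,
            'avg_clustering': sum(clustering(v) for v in adj) / n}

d = network_descriptors([('A', 'B'), ('B', 'C'), ('A', 'C'), ('C', 'D')])
print(d)  # {'nodes': 4, 'avg_degree': 2.0, 'avg_clustering': 0.583...}
```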

  19. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers.

    PubMed

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.
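A toy two-tier store conveys the idea behind exploiting sparsity: only source neurons that actually have a target on this compute node consume memory (tier one), and each entry holds just the local targets (tier two). The class and method names are invented for illustration; NEST's real data structures are far more elaborate.

```python
from collections import defaultdict

class SparseConnectionTable:
    """Per-compute-node connection store: memory scales with the
    connections that land here, not with the global neuron count.
    Illustrative sketch only."""

    def __init__(self):
        self._by_source = defaultdict(list)  # tier 1: sparse over sources

    def connect(self, source, local_target, weight):
        # tier 2: append the local target entry for this source neuron
        self._by_source[source].append((local_target, weight))

    def targets(self, source):
        # sources with no local targets cost nothing and return empty
        return self._by_source.get(source, [])

    def memory_entries(self):
        return sum(len(t) for t in self._by_source.values())

tbl = SparseConnectionTable()
tbl.connect(42, 7, 0.5)
tbl.connect(42, 9, 0.25)
tbl.connect(99, 1, 1.0)
print(tbl.targets(42))        # [(7, 0.5), (9, 0.25)]
print(tbl.memory_entries())   # 3
```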

  20. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers

    PubMed Central

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems. PMID:29503613

  1. "Getting Practical" and the National Network of Science Learning Centres

    ERIC Educational Resources Information Center

    Chapman, Georgina; Langley, Mark; Skilling, Gus; Walker, John

    2011-01-01

    The national network of Science Learning Centres is a co-ordinating partner in the Getting Practical--Improving Practical Work in Science programme. The principle of training provision for the "Getting Practical" programme is a cascade model. Regional trainers employed by the national network of Science Learning Centres trained the cohort of local…

  2. Spiking network simulation code for petascale computers.

    PubMed

    Kunkel, Susanne; Schmidt, Maximilian; Eppler, Jochen M; Plesser, Hans E; Masumoto, Gen; Igarashi, Jun; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus; Helias, Moritz

    2014-01-01

    Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are neurons and synapses and over the past decade researchers have learned to manage the heterogeneity of neurons and synapses with efficient data structures. Already early parallel simulation codes stored synapses in a distributed fashion such that a synapse solely consumes memory on the compute node harboring the target neuron. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise for neuronal network simulation software: Each neuron contacts on the order of 10,000 other neurons and thus has targets only on a fraction of all compute nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any compute node. From the viewpoint of an individual compute node, the heterogeneity in the synaptic target lists thus collapses along two dimensions: the dimension of the types of synapses and the dimension of the number of synapses of a given type. Here we present a data structure taking advantage of this double collapse using metaprogramming techniques. After introducing the relevant scaling scenario for brain-scale simulations, we quantitatively discuss the performance on two supercomputers. We show that the novel architecture scales to the largest petascale supercomputers available today.

  3. Spiking network simulation code for petascale computers

    PubMed Central

    Kunkel, Susanne; Schmidt, Maximilian; Eppler, Jochen M.; Plesser, Hans E.; Masumoto, Gen; Igarashi, Jun; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus; Helias, Moritz

    2014-01-01

    Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are neurons and synapses and over the past decade researchers have learned to manage the heterogeneity of neurons and synapses with efficient data structures. Already early parallel simulation codes stored synapses in a distributed fashion such that a synapse solely consumes memory on the compute node harboring the target neuron. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise for neuronal network simulation software: Each neuron contacts on the order of 10,000 other neurons and thus has targets only on a fraction of all compute nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any compute node. From the viewpoint of an individual compute node, the heterogeneity in the synaptic target lists thus collapses along two dimensions: the dimension of the types of synapses and the dimension of the number of synapses of a given type. Here we present a data structure taking advantage of this double collapse using metaprogramming techniques. After introducing the relevant scaling scenario for brain-scale simulations, we quantitatively discuss the performance on two supercomputers. We show that the novel architecture scales to the largest petascale supercomputers available today. PMID:25346682

  4. Characteristics of Effective Networking Environments.

    ERIC Educational Resources Information Center

    Kaye, Judith C.

    This document chronicles a project called Model Nets, which studies the characteristics of computer networks that have a positive impact on K-12 learning. Los Alamos National Laboratory undertook the study so that their recommendations could help federal agencies wisely fund networking projects in an era when the national imperative has driven…

  5. Bulgarian National Digital Seismological Network

    NASA Astrophysics Data System (ADS)

    Dimitrova, L.; Solakov, D.; Nikolova, S.; Stoyanov, S.; Simeonova, S.; Zimakov, L. G.; Khaikin, L.

    2011-12-01

    The Bulgarian National Digital Seismological Network (BNDSN) consists of a National Data Center (NDC), 13 stations equipped with RefTek High Resolution Broadband Seismic Recorders (model DAS 130-01/3), and one station equipped with a Quanterra 680 plus broadband sensors and accelerometers. Real-time data transfer from seismic stations to the NDC is realized via a Virtual Private Network of the Bulgarian Telecommunication Company. Communication interruptions do not cause any data loss at the NDC: the data are backed up in the field station recorder's 4 MB RAM and retransmitted to the NDC as soon as the communication link is re-established. The recorders are also equipped with two compact flash disks able to store more than a month of data, which can be downloaded remotely via FTP. Data acquisition and processing hardware redundancy at the NDC is achieved by two clustered SUN servers and two Blade workstations. To secure the acquisition, processing, and data storage processes, a three-layer local network is designed at the NDC. Real-time data acquisition is performed using REFTEK's full-duplex error-correction protocol RTPD. Data from the Quanterra recorder and foreign stations are fed into RTPD in real time via the SeisComP/SeedLink protocol. Using SeisComP/SeedLink software, the NDC transfers real-time data to INGV-Roma, NEIC-USA, and the ORFEUS Data Center. Regional real-time data exchange with Romania, Macedonia, Serbia, and Greece is also established at the NDC. Data processing is performed by the Seismic Network Data Processor (SNDP) software package running on both servers. SNDP includes the following subsystems: a real-time subsystem (RTS_SNDP) for signal detection, evaluation of signal parameters, phase identification and association, and source estimation; a seismic analysis subsystem (SAS_SNDP) for interactive data processing; and an early warning subsystem (EWS_SNDP) based on the first-arriving P-phases. The signal detection process is performed by

  6. A Feasibility Study of Synthesizing Substructures Modeled with Computational Neural Networks

    NASA Technical Reports Server (NTRS)

    Wang, John T.; Housner, Jerrold M.; Szewczyk, Z. Peter

    1998-01-01

    This paper investigates the feasibility of synthesizing substructures modeled with computational neural networks. Substructures are modeled individually with computational neural networks and the response of the assembled structure is predicted by synthesizing the neural networks. A superposition approach is applied to synthesize models for statically determinate substructures while an interface displacement collocation approach is used to synthesize statically indeterminate substructure models. Beam and plate substructures along with components of a complicated Next Generation Space Telescope (NGST) model are used in this feasibility study. In this paper, the limitations and difficulties of synthesizing substructures modeled with neural networks are also discussed.

  7. Network architecture test-beds as platforms for ubiquitous computing.

    PubMed

    Roscoe, Timothy

    2008-10-28

    Distributed systems research, and in particular ubiquitous computing, has traditionally assumed the Internet as a basic underlying communications substrate. Recently, however, the networking research community has come to question the fundamental design or 'architecture' of the Internet. This has been led by two observations: first, that the Internet as it stands is now almost impossible to evolve to support new functionality; and second, that modern applications of all kinds now use the Internet rather differently, and frequently implement their own 'overlay' networks above it to work around its perceived deficiencies. In this paper, I discuss recent academic projects to allow disruptive change to the Internet architecture, and also outline a radically different view of networking for ubiquitous computing that such proposals might facilitate.

  8. European national healthy city networks: the impact of an elite epistemic community.

    PubMed

    Heritage, Zoë; Green, Geoff

    2013-10-01

    National healthy cities networks (NNs) were created 20 years ago to support the development of healthy cities within the WHO European Region. Using the concept of epistemic communities, the evolution and impact of NNs are considered, as is their future development. National healthy cities networks are providing information, training, and support to member cities. In many cases, they are also involved in supporting national public health policy development and disseminating healthy city principles to other local authorities. National networks are a fragile but extremely valuable resource for sharing public health knowledge.

  9. A Study of Complex Deep Learning Networks on High Performance, Neuromorphic, and Quantum Computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Potok, Thomas E; Schuman, Catherine D; Young, Steven R

    Current deep learning models use highly optimized convolutional neural networks (CNN) trained on large graphical processing unit (GPU)-based computers with a fairly simple layered network topology, i.e., highly connected layers without intra-layer connections. Complex topologies have been proposed, but are intractable to train on current systems. Building the topology of a deep learning network requires hand tuning, and implementing the network in hardware is expensive in both cost and power. In this paper, we evaluate deep learning models using three different computing architectures to address these problems: quantum computing to train complex topologies, high performance computing (HPC) to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. Due to input size limitations of current quantum computers, we use the MNIST dataset for our evaluation. The results show the possibility of using the three architectures in tandem to explore complex deep learning networks that are untrainable using a von Neumann architecture. We show that a quantum computer can find high quality values of intra-layer connections and weights, while yielding a tractable time result as the complexity of the network increases; a high performance computer can find optimal layer-based topologies; and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low-power memristive hardware. This represents a new capability that is not feasible with current von Neumann architectures, and it potentially enables the ability to solve very complicated problems unsolvable with current computing technologies.

  10. Home Care Nursing via Computer Networks: Justification and Design Specifications

    PubMed Central

    Brennan, Patricia Flatley

    1988-01-01

    High-tech home care includes the use of information technologies, such as computer networks, to provide direct care to patients in the home. This paper presents the justification and design of a project using a free, public access computer network to deliver home care nursing. The intervention attempts to reduce isolation and improve problem solving among home care patients and their informal caregivers. Three modules comprise the intervention: a decision module, a communications module, and an information data base. This paper describes the experimental evaluation of the project, and discusses issues in the delivery of nursing care via computers.

  11. Cord blood banking in France: reorganising the national network.

    PubMed

    Katz, Gregory; Mills, Antonia

    2010-06-01

    Paradoxically, France is one of the leading exporters of cord blood units worldwide, but ranks only 17th in terms of cord blood units per inhabitant, and imports 64% of cord blood grafts to meet national transplantation demands. With three operational banks in 2008, the French allogeneic cord blood network is now entering an important phase of development with the creation of seven new banks collecting from local clusters of maternities. Although the French network of public banks is demonstrating a strong commitment to reorganise and scale up its activities, the revision of France's bioethics law in 2010 has sparked a debate concerning the legalisation of commercial autologous banking. The paper discusses key elements for a comprehensive national plan that would strengthen the allogeneic banking network through which France could meet its national medical needs and guarantee equal access to healthcare. Copyright 2010. Published by Elsevier Ltd.

  12. Microcosm to Cosmos: The Growth of a Divisional Computer Network

    PubMed Central

    Johannes, R.S.; Kahane, Stephen N.

    1987-01-01

    In 1982, we reported the deployment of a network of microcomputers in the Division of Gastroenterology[1]. This network was based upon Corvus Systems' Omninet®; Corvus was one of the very first firms to offer networking products for PCs. This PC development occurred coincident with the planning phase of the Johns Hopkins Hospital's multisegment Ethernet project, and a rich communications infrastructure is now in place at the Johns Hopkins Medical Institutions[2,3]. Shortly after hospital development began under the direction of the Operational and Clinical Systems (OCS) Division, the Johns Hopkins School of Medicine began an Integrated Academic Information Management Systems (IAIMS) planning effort. We now present a model that uses aspects of all three planning efforts (PC networks, hospital information systems, and IAIMS) to build a divisional computing facility. This facility is viewed as a terminal leaf on the institutional network diagram. Nevertheless, it is noteworthy that this leaf, the divisional resource in the Division of Gastroenterology (GASNET), has a rich substructure and functionality of its own, perhaps revealing the recursive nature of network architecture. The current status, design, and function of the GASNET computational facility are discussed. Among the major positive aspects of this design are the sharing and centralization of MS-DOS software and the high-speed DOS/Unix link that makes available most of our institution's computing resources.

  13. The space physics analysis network

    NASA Astrophysics Data System (ADS)

    Green, James L.

    1988-04-01

    The Space Physics Analysis Network, or SPAN, is emerging as a viable method for solving an immediate communication problem for space and Earth scientists and has been operational for nearly 7 years. SPAN, together with its extension into Europe, utilizes computer-to-computer communications allowing mail, binary and text file transfer, and remote logon capability to over 1000 space science computer systems. The network has been used to successfully transfer real-time data to remote researchers for rapid data analysis, but its primary function is for non-real-time applications. One of the major advantages of using SPAN is its spacecraft mission independence. Space science researchers using SPAN are located in universities, industries, and government institutions all across the United States and Europe. These researchers work in such fields as magnetospheric physics, astrophysics, ionospheric physics, atmospheric physics, climatology, meteorology, oceanography, planetary physics, and solar physics. SPAN users have access to space and Earth science databases, mission planning and information systems, and computational facilities for the purposes of facilitating correlative space data exchange, data analysis, and space research. For example, the National Space Science Data Center (NSSDC), which manages the network, provides facilities on SPAN such as the Network Information Center (SPAN NIC). SPAN has interconnections with several national and international networks such as HEPNET and TEXNET, forming a transparent DECnet network. The combined total number of computers now reachable over these networks is about 2000. In addition, SPAN supports full function capabilities over the international public packet-switched networks (e.g., TELENET) and has mail gateways to ARPANET, BITNET, and JANET.

  14. Machine learning based Intelligent cognitive network using fog computing

    NASA Astrophysics Data System (ADS)

    Lu, Jingyang; Li, Lun; Chen, Genshe; Shen, Dan; Pham, Khanh; Blasch, Erik

    2017-05-01

    In this paper, a Cognitive Radio Network (CRN) based on artificial intelligence is proposed to distribute the limited radio spectrum resources more efficiently. The CRN framework can analyze time-sensitive signal data close to the signal source using fog computing with different types of machine learning techniques. Depending on the computational capabilities of the fog nodes, different features and machine learning techniques are chosen to optimize spectrum allocation. In addition, the fog nodes periodically send a signal summary, which is much smaller than the original signal, to the cloud so that the overall system's spectrum allocation strategies can be dynamically updated. By applying fog computing, the system is more adaptive to the local environment and robust to spectrum changes. Because most of the signal data is processed at the fog level, system security is further strengthened by reducing the load on the communications network.

  15. The impact of capacity growth in national telecommunications networks.

    PubMed

    Lord, Andrew; Soppera, Andrea; Jacquet, Arnaud

    2016-03-06

    This paper discusses both UK-based and global Internet data bandwidth growth, beginning with historical data for the BT network. We examine the time variations in consumer behaviour and how these are statistically aggregated into larger traffic loads on national core fibre communications networks. The random nature of consumer Internet behaviour, where very few consumers require maximum bandwidth simultaneously, provides the opportunity for a significant statistical gain. The paper looks at predictions for how this growth might continue over the next 10-20 years, giving estimates for the amount of bandwidth that networks should support in the future. The paper then explains how national networks are designed to accommodate these traffic levels, and the various network roles, including access, metro and core, are described. The physical-layer network is put into the context of how the packet and service layers are designed, and the applications and location of content are also included in an overall network overview. The specific role of content servers in alleviating core network traffic loads is highlighted. The status of the relevant transmission technologies in the access, metro and core is given, showing that these technologies, with adequate research, should be sufficient to provide bandwidth for consumers in the next 10-20 years. © 2016 The Author(s).
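
    The statistical gain described above, that a shared core link needs far less capacity than the sum of every consumer's peak rate, can be illustrated with a small Monte Carlo sketch. The user count, peak rate, and activity probability below are illustrative assumptions, not figures from the paper:

```python
import random

def aggregate_peak_mbps(n_users, peak_mbps, active_prob, trials=2000, seed=7):
    """Estimate the worst-case aggregate demand on a shared link when
    each consumer is active only a small fraction of the time."""
    rng = random.Random(seed)
    worst = 0.0
    for _ in range(trials):
        active = sum(1 for _ in range(n_users) if rng.random() < active_prob)
        worst = max(worst, active * peak_mbps)
    return worst

naive = 1000 * 50.0                        # provision every user at peak rate
observed = aggregate_peak_mbps(1000, 50.0, active_prob=0.05)
gain = naive / observed                    # statistical multiplexing gain > 1
```

    Even the worst observed aggregate stays far below the naive per-user provisioning, which is why core networks can be dimensioned well under the theoretical maximum.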

  16. Student Motivation in Computer Networking Courses

    ERIC Educational Resources Information Center

    Hsin, Wen-Jung

    2007-01-01

    This paper introduces several hands-on projects that have been used to motivate students in learning various computer networking concepts. These projects are shown to be very useful and applicable to the learners' daily tasks and activities such as emailing, Web browsing, and online shopping and banking, and lead to an unexpected byproduct,…

  17. National information network and database system of hazardous waste management in China

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma Hongchang

    1996-12-31

    Industries in China generate large volumes of hazardous waste, which makes it essential for the nation to pay more attention to hazardous waste management. National laws and regulations, waste surveys, and manifest tracking and permission systems have been initiated. Some centralized hazardous waste disposal facilities are under construction. China's National Environmental Protection Agency (NEPA) has also obtained valuable information on hazardous waste management from developed countries. To effectively share this information with local environmental protection bureaus, NEPA developed a national information network and database system for hazardous waste management. This information network will have such functions as information collection, inquiry, and connection. The long-term objective is to establish and develop a national and local hazardous waste management information network. This network will significantly help decision makers and researchers because it will be easy to obtain information (e.g., experiences of developed countries in hazardous waste management) to enhance hazardous waste management in China. The information network consists of five parts: technology consulting, import-export management, regulation inquiry, waste survey, and literature inquiry.

  18. Quantum computation over the butterfly network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soeda, Akihito; Kinjo, Yoshiyuki; Turner, Peter S.

    2011-07-15

    In order to investigate distributed quantum computation under restricted network resources, we introduce a quantum computation task over the butterfly network where both quantum and classical communications are limited. We consider deterministically performing a two-qubit global unitary operation on two unknown inputs given at different nodes, with outputs at two distinct nodes. By using a particular resource setting introduced by M. Hayashi [Phys. Rev. A 76, 040301(R) (2007)], which is capable of performing a swap operation by adding two maximally entangled qubits (ebits) between the two input nodes, we show that unitary operations can be performed without adding any entanglement resource, if and only if the unitary operations are locally unitary equivalent to controlled unitary operations. Our protocol is optimal in the sense that the unitary operations cannot be implemented if we relax the specifications of any of the channels. We also construct protocols for performing controlled traceless unitary operations with a 1-ebit resource and for performing global Clifford operations with a 2-ebit resource.

  19. Providing nearest neighbor point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer

    DOEpatents

    Archer, Charles J.; Faraj, Ahmad A.; Inglett, Todd A.; Ratterman, Joseph D.

    2012-10-23

    Methods, apparatus, and products are disclosed for providing nearest neighbor point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer, each compute node connected to each adjacent compute node in the global combining network through a link, that include: identifying each link in the global combining network for each compute node of the operational group; designating one of a plurality of point-to-point class routing identifiers for each link such that no compute node in the operational group is connected to two adjacent compute nodes in the operational group with links designated for the same class routing identifiers; and configuring each compute node of the operational group for point-to-point communications with each adjacent compute node in the global combining network through the link between that compute node and that adjacent compute node using that link's designated class routing identifier.
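
    The link-designation step above amounts to an edge-labelling constraint: no compute node may see two of its links carrying the same class routing identifier. A minimal greedy sketch of such an assignment (the node names and network shape are hypothetical, not from the patent):

```python
def assign_class_ids(edges):
    """Give each link a class routing identifier such that no node has
    two incident links with the same identifier (greedy edge colouring).
    `edges` is a list of (node_a, node_b) pairs forming the network."""
    used = {}                         # node -> identifiers already on its links
    assignment = {}
    for a, b in edges:
        taken = used.setdefault(a, set()) | used.setdefault(b, set())
        cid = 0
        while cid in taken:           # smallest identifier free at both ends
            cid += 1
        assignment[(a, b)] = cid
        used[a].add(cid)
        used[b].add(cid)
    return assignment

# A small binary tree: root 0 with children 1 and 2; node 1 with children 3 and 4.
tree_links = [(0, 1), (0, 2), (1, 3), (1, 4)]
ids = assign_class_ids(tree_links)
```

    On a tree of maximum degree d, this greedy pass needs at most d identifiers, which is why a small fixed set of class routing identifiers suffices for the whole operational group.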

  20. Computer Network Security: Best Practices for Alberta School Jurisdictions.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton.

    This paper provides a snapshot of the computer network security industry and addresses specific issues related to network security in public education. The following topics are covered: (1) security policy, including reasons for establishing a policy, risk assessment, areas to consider, audit tools; (2) workstations, including physical security,…

  1. Computer Networking Strategies for Building Collaboration among Science Educators.

    ERIC Educational Resources Information Center

    Aust, Ronald

    The development and dissemination of science materials can be associated with technical delivery systems such as the Unified Network for Informatics in Teacher Education (UNITE). The UNITE project was designed to investigate ways for using computer networking to improve communications and collaboration among university schools of education and…

  2. Overview of the new National Near-Road Air Quality Monitoring Network

    EPA Science Inventory

    In 2010, EPA promulgated new National Ambient Air Quality Standards (NAAQS) for nitrogen dioxide (NO2). As part of this new NAAQS, EPA required the establishment of a national near-road air quality monitoring network. This network will consist of one NO2 near-road monitoring st...

  3. Topological properties of robust biological and computational networks

    PubMed Central

    Navlakha, Saket; He, Xin; Faloutsos, Christos; Bar-Joseph, Ziv

    2014-01-01

    Network robustness is an important principle in biology and engineering. Previous studies of global networks have identified both redundancy and sparseness as topological properties used by robust networks. By focusing on molecular subnetworks, or modules, we show that module topology is tightly linked to the level of environmental variability (noise) the module expects to encounter. Modules internal to the cell that are less exposed to environmental noise are more connected and less robust than external modules. A similar design principle is used by several other biological networks. We propose a simple change to the evolutionary gene duplication model which gives rise to the rich range of module topologies observed within real networks. We apply these observations to evaluate and design communication networks that are specifically optimized for noisy or malicious environments. Combined, joint analysis of biological and computational networks leads to novel algorithms and insights benefiting both fields. PMID:24789562

  4. "It Takes a Network": Building National Capacity for Climate Change Interpretation

    NASA Astrophysics Data System (ADS)

    Spitzer, W.

    2014-12-01

    Since 2007, the New England Aquarium has led a national effort to increase the capacity of informal science venues to effectively communicate about climate change. We are now leading the NSF-funded National Network for Ocean and Climate Change Interpretation (NNOCCI), partnering with the Association of Zoos and Aquariums, FrameWorks Institute, Woods Hole Oceanographic Institution, Monterey Bay Aquarium, and National Aquarium, with evaluation conducted by the New Knowledge Organization, Pennsylvania State University, and Ohio State University. More than 1,500 informal science venues (science centers, museums, aquariums, zoos, nature centers, national parks) are visited annually by 61% of the U.S. population. These visitors expect reliable information about environmental issues and solutions. NNOCCI enables teams of informal science interpreters across the country to serve as "communication strategists" - beyond merely conveying information they can influence public perceptions, given their high level of commitment, knowledge, public trust, social networks, and visitor contact. Beyond providing in-depth training, we have found that our "alumni network" is assuming an increasingly important role in achieving our goals: 1. Ongoing learning - Training must be ongoing given continuous advances in climate and social science research. 2. Implementation support - Social support is critical as interpreters move from learning to practice, given complex and potentially contentious subject matter. 3. Leadership development - We rely on a national cadre of interpretive leaders to conduct workshops, facilitate study circle trainings, and support alumni. 4. Coalition building - A peer network helps to build and maintain connections with colleagues, and supports further dissemination through the informal science community. We are experimenting with a variety of online and face to face strategies to support the growing alumni network. Our goals are to achieve a systemic national

  5. Dynamics of global supply chain and electric power networks: Models, pricing analysis, and computations

    NASA Astrophysics Data System (ADS)

    Matsypura, Dmytro

    In this dissertation, I develop a new theoretical framework for the modeling, pricing analysis, and computation of solutions to electric power supply chains with power generators, suppliers, transmission service providers, and the inclusion of consumer demands. In particular, I advocate the application of finite-dimensional variational inequality theory, projected dynamical systems theory, game theory, network theory, and other tools that have been recently proposed for the modeling and analysis of supply chain networks (cf. Nagurney (2006)) to electric power markets. This dissertation contributes to the extant literature on the modeling, analysis, and solution of supply chain networks, including global supply chains, in general, and electric power supply chains, in particular, in the following ways. It develops a theoretical framework for modeling, pricing analysis, and computation of electric power flows/transactions in electric power systems using the rationale for supply chain analysis. The models developed include both static and dynamic ones. The dissertation also adds a new dimension to the methodology of the theory of projected dynamical systems by proving that, irrespective of the speeds of adjustment, the equilibrium of the system remains the same. Finally, I include alternative fuel suppliers, along with their behavior into the supply chain modeling and analysis framework. This dissertation has strong practical implications. In an era in which technology and globalization, coupled with increasing risk and uncertainty, complicate electricity demand and supply within and between nations, the successful management of electric power systems and pricing become increasingly pressing topics with relevance not only for economic prosperity but also national security. This dissertation addresses such related topics by providing models, pricing tools, and algorithms for decentralized electric power supply chains. 
This dissertation is based heavily on the following

  6. On computer vision in wireless sensor networks.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, Nina M.; Ko, Teresa H.

    Wireless sensor networks allow detailed sensing of otherwise unknown and inaccessible environments. While it would be beneficial to include cameras in a wireless sensor network because images are so rich in information, the power cost of transmitting an image across the wireless network can dramatically shorten the lifespan of the sensor nodes. This paper describes a new paradigm for the incorporation of imaging into wireless networks. Rather than focusing on transmitting images across the network, we show how an image can be processed locally for key features using simple detectors. Contrasted with traditional event detection systems that trigger an image capture, this enables a new class of sensors which uses a low-power imaging sensor to detect a variety of visual cues. Sharing these features among relevant nodes cues specific actions to better provide information about the environment. We report on various existing techniques developed for traditional computer vision research which can aid in this work.

  7. Implementation of the NCI’s National Clinical Trials Network

    Cancer.gov

    NCI is launching a new clinical trials research network intended to improve treatment for the more than 1.6 million Americans diagnosed with cancer each year. The new system, NCI’s National Clinical Trials Network (NCTN), will facilitate the rapid initia

  8. New design for interfacing computers to the Octopus network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sloan, L.J.

    1977-03-14

    The Lawrence Livermore Laboratory has several large-scale computers which are connected to the Octopus network. Several difficulties arise in providing adequate resources along with reliable performance. To alleviate some of these problems a new method of bringing large computers into the Octopus environment is proposed.

  9. Review On Applications Of Neural Network To Computer Vision

    NASA Astrophysics Data System (ADS)

    Li, Wei; Nasrabadi, Nasser M.

    1989-03-01

    Neural network models have many potential applications to computer vision due to their parallel structures, learnability, implicit representation of domain knowledge, fault tolerance, and ability to handle statistical data. This paper demonstrates the basic principles, typical models, and their applications in this field. A variety of neural models, such as associative memory, multilayer back-propagation perceptron, self-stabilized adaptive resonance network, hierarchical structured neocognitron, high-order correlator, network with gating control, and other models, can be applied to visual signal recognition, reinforcement, recall, stereo vision, motion, object tracking, and other vision processes. Most of the algorithms have been simulated on computers. Some have been implemented with special hardware. Some systems use image features, such as edges and profiles, as the data form for input. Other systems use raw data as input signals to the networks. We will present some novel ideas contained in these approaches and provide a comparison of these methods. Some unsolved problems are mentioned, such as extracting the intrinsic properties of the input information, integrating those low-level functions into a high-level cognitive system, achieving invariances, and other problems. Perspectives of applications of some human vision models and neural network models are analyzed.

  10. Distributed computing methodology for training neural networks in an image-guided diagnostic application.

    PubMed

    Plagianakos, V P; Magoulas, G D; Vrahatis, M N

    2006-03-01

    Distributed computing is a process through which a set of computers connected by a network is used collectively to solve a single problem. In this paper, we propose a distributed computing methodology for training neural networks for the detection of lesions in colonoscopy. Our approach is based on partitioning the training set across multiple processors using a parallel virtual machine. In this way, interconnected computers of varied architectures can be used for the distributed evaluation of the error function and gradient values, and, thus, training neural networks utilizing various learning methods. The proposed methodology has large granularity and low synchronization, and has been implemented and tested. Our results indicate that the parallel virtual machine implementation of the training algorithms developed leads to considerable speedup, especially when large network architectures and training sets are used.
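
    The core idea above, partitioning the training set so each processor evaluates a partial error gradient that is then combined into a full update, can be sketched for a toy one-parameter model. Sequential chunks stand in for the parallel virtual machine's workers; all names and the model are our illustrative assumptions, not the paper's:

```python
def partial_grad(w, batch):
    """Gradient of squared error for a 1-D linear model on one partition."""
    g = 0.0
    for x, y in batch:
        g += 2 * (w * x - y) * x
    return g

def distributed_step(w, data, n_workers, lr=0.01):
    """Partition the training set across workers, evaluate partial
    gradients independently, then combine them into one update."""
    chunks = [data[i::n_workers] for i in range(n_workers)]
    grad = sum(partial_grad(w, chunk) for chunk in chunks)
    return w - lr * grad / len(data)

data = [(x, 3.0 * x) for x in range(1, 11)]   # true slope is 3
w = 0.0
for _ in range(200):
    w = distributed_step(w, data, n_workers=4)
```

    Because the error gradient is a sum over training examples, the combined partial gradients equal the full gradient exactly; the speedup in the paper comes from evaluating the partitions concurrently on interconnected machines.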

  11. Network Monitoring and Fault Detection on the University of Illinois at Urbana-Champaign Campus Computer Network.

    ERIC Educational Resources Information Center

    Sng, Dennis Cheng-Hong

    The University of Illinois at Urbana-Champaign (UIUC) has a large campus computer network serving a community of about 20,000 users. With such a large network, it is inevitable that there are a wide variety of technologies co-existing in a multi-vendor environment. Effective network monitoring tools can help monitor traffic and link usage, as well…

  12. Designing a parallel evolutionary algorithm for inferring gene networks on the cloud computing environment.

    PubMed

    Lee, Wei-Po; Hsiao, Yu-Ting; Hwang, Wei-Che

    2014-01-16

    To avoid the tedious task of reconstructing gene networks by experimentally testing the possible interactions between genes, it has become common to adopt an automated reverse-engineering procedure instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks by the evolutionary algorithm, it is necessary to address two important issues: premature convergence and high computational cost. To tackle the former problem and to enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel-model evolutionary algorithms. To overcome the latter and to speed up the computation, it is advocated to adopt cloud computing as a promising solution: most popular is the MapReduce programming model, a fault-tolerant framework for implementing parallel algorithms to infer large gene networks. This work presents a practical framework to infer large gene networks by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results have been analyzed. They show that our parallel approach can be successfully used to infer networks with desired behaviors and the computation time can be largely reduced. Parallel population-based algorithms can effectively determine network parameters and they perform better than the widely used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. 
By coupling the parallel model population-based optimization method and the parallel computational framework, high

  13. Designing a parallel evolutionary algorithm for inferring gene networks on the cloud computing environment

    PubMed Central

    2014-01-01

    Background To avoid the tedious task of reconstructing gene networks by experimentally testing the possible interactions between genes, it has become common to adopt an automated reverse-engineering procedure instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks by the evolutionary algorithm, it is necessary to address two important issues: premature convergence and high computational cost. To tackle the former problem and to enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel-model evolutionary algorithms. To overcome the latter and to speed up the computation, it is advocated to adopt cloud computing as a promising solution: most popular is the MapReduce programming model, a fault-tolerant framework for implementing parallel algorithms to infer large gene networks. Results This work presents a practical framework to infer large gene networks by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results have been analyzed. They show that our parallel approach can be successfully used to infer networks with desired behaviors and the computation time can be largely reduced. Conclusions Parallel population-based algorithms can effectively determine network parameters and they perform better than the widely used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. 
By coupling the parallel model population-based optimization method and the parallel

  14. Test experience on an ultrareliable computer communication network

    NASA Technical Reports Server (NTRS)

    Abbott, L. W.

    1984-01-01

    The dispersed sensor processing mesh (DSPM) is an experimental, ultrareliable, fault-tolerant computer communications network that exhibits an organic-like ability to regenerate itself after suffering damage. The regeneration is accomplished by two routines - grow and repair. This paper discusses the DSPM concept for achieving fault tolerance and provides a brief description of the mechanization of both the experiment and the six-node experimental network. The main topic of this paper is the system performance of the growth algorithm contained in the grow routine. The characteristics imbued to DSPM by the growth algorithm are also discussed. Data from an experimental DSPM network and software simulation of larger DSPM-type networks are used to examine the inherent limitation on growth time by the growth algorithm and the relationship of growth time to network size and topology.

  15. Locating hardware faults in a data communications network of a parallel computer

    DOEpatents

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-01-12

    Locating hardware faults in a data communications network of a parallel computer. Such a parallel computer includes a plurality of compute nodes and a data communications network that couples the compute nodes for data communications and organizes the compute nodes as a tree. Locating hardware faults includes identifying a next compute node as a parent node and a root of a parent test tree, identifying for each child compute node of the parent node a child test tree having the child compute node as root, running a same test suite on the parent test tree and each child test tree, and identifying the parent compute node as having a defective link connected from the parent compute node to a child compute node if the test suite fails on the parent test tree and succeeds on all the child test trees.
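
    The decision rule described, a parent owns the defective link exactly when the test suite fails on its own test tree but passes on every child test tree, can be simulated on a toy tree. The tree shape and the planted fault below are assumptions for illustration, not from the patent:

```python
def find_defective_parent(tree, link_ok, root):
    """Return the node whose link to a child is defective: the node whose
    test tree fails while every one of its child test trees succeeds.
    `tree` maps node -> list of children; `link_ok` marks healthy links."""
    def tree_passes(node):
        # A test tree passes iff every link and subtree under it is healthy.
        return all(link_ok[(node, c)] and tree_passes(c)
                   for c in tree.get(node, []))

    stack = [root]
    while stack:
        node = stack.pop()
        children = tree.get(node, [])
        if not tree_passes(node) and all(tree_passes(c) for c in children):
            return node            # fault is on a link out of this node
        stack.extend(children)
    return None                    # no fault found

tree = {0: [1, 2], 1: [3, 4]}
link_ok = {(0, 1): True, (0, 2): True, (1, 3): False, (1, 4): True}
faulty = find_defective_parent(tree, link_ok, 0)   # isolates node 1
```

    Note how a fault deep in the tree makes every ancestor's test tree fail, but only the immediate parent of the broken link satisfies the "parent fails, all children pass" criterion.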

  16. Automated selection of computed tomography display parameters using neural networks

    NASA Astrophysics Data System (ADS)

    Zhang, Di; Neu, Scott; Valentino, Daniel J.

    2001-07-01

    A collection of artificial neural networks (ANN's) was trained to identify simple anatomical structures in a set of x-ray computed tomography (CT) images. These neural networks learned to associate a point in an image with the anatomical structure containing the point by using the image pixels located on the horizontal and vertical lines that ran through the point. The neural networks were integrated into a computer software tool whose function is to select an index into a list of CT window/level values from the location of the user's mouse cursor. Based upon the anatomical structure selected by the user, the software tool automatically adjusts the image display to optimally view the structure.

  17. USA National Phenology Network observational data documentation

    USGS Publications Warehouse

    Rosemartin, Alyssa H.; Denny, Ellen G.; Gerst, Katharine L.; Marsh, R. Lee; Posthumus, Erin E.; Crimmins, Theresa M.; Weltzin, Jake F.

    2018-04-25

    The goals of the USA National Phenology Network (USA-NPN, www.usanpn.org) are to advance science, inform decisions, and communicate and connect with the public regarding phenology and species’ responses to environmental variation and climate change. The USA-NPN seeks to advance the science of phenology and facilitate ecosystem stewardship by providing phenological information freely and openly. To accomplish these goals, the USA-NPN National Coordinating Office (NCO) delivers observational data on plant and animal phenology in several formats, including minimally processed status and intensity datasets and derived phenometrics for individual plants, sites, and regions. This document describes the suite of observational data products delivered by the USA National Phenology Network, covering the period 2009–present for the United States and accessible via the Phenology Observation Portal (http://dx.doi.org/10.5066/F78S4N1V) and via an Application Programming Interface. The data described here have been used in diverse research and management applications, including over 30 publications in fields such as remote sensing, plant evolution, and resource management.

  18. Hybrid computing using a neural network with dynamic external memory.

    PubMed

    Graves, Alex; Wayne, Greg; Reynolds, Malcolm; Harley, Tim; Danihelka, Ivo; Grabska-Barwińska, Agnieszka; Colmenarejo, Sergio Gómez; Grefenstette, Edward; Ramalho, Tiago; Agapiou, John; Badia, Adrià Puigdomènech; Hermann, Karl Moritz; Zwols, Yori; Ostrovski, Georg; Cain, Adam; King, Helen; Summerfield, Christopher; Blunsom, Phil; Kavukcuoglu, Koray; Hassabis, Demis

    2016-10-27

    Artificial neural networks are remarkably adept at sensory processing, sequence learning and reinforcement learning, but are limited in their ability to represent variables and data structures and to store data over long timescales, owing to the lack of an external memory. Here we introduce a machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer. Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data. When trained with supervised learning, we demonstrate that a DNC can successfully answer synthetic questions designed to emulate reasoning and inference problems in natural language. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols. Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read-write memory.

  19. Bibliographic Services for a National Network.

    ERIC Educational Resources Information Center

    Avram, Henriette D.; Pulsifer, Josephine S.

    The thesis of this paper is that efficient functioning of a network is dependent upon the organization of bibliographic services so that the basic record for each bibliographic item is created once. This record must be minimally capable of serving the needs of libraries, information centers, abstracting and indexing services, and national and…

  20. Spatial spreading of infectious disease via local and national mobility networks in South Korea

    NASA Astrophysics Data System (ADS)

    Kwon, Okyu; Son, Woo-Sik

    2017-12-01

    We study the spread of infectious disease based on local- and national-scale mobility networks. We construct a local mobility network using data on urban bus services to estimate local-scale movement of people. We also construct a national mobility network from origin-destination data of vehicular traffic between highway tollgates to evaluate national-scale movement of people. A metapopulation model is used to simulate the spread of epidemics: the number of infected people within each administrative division is simulated using a susceptible-infectious-recovered (SIR) model, and inter-division spread is determined through the local and national mobility networks. We consider two scenarios for epidemic spread. In the first, the infectious disease spreads only through local-scale movement of people, that is, the local mobility network. In the second, it spreads via both the local and national mobility networks. For the former, the simulation results show infected people sequentially spreading to neighboring divisions; for the latter, we observe a faster spreading pattern reaching distant divisions. Thus, we confirm that the national mobility network enhances synchronization among the incidence profiles of all administrative divisions.
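
    The two-layer scheme described above (local SIR dynamics plus inter-division mixing) can be sketched as a discrete-time metapopulation model. This is a minimal illustration, not the authors' implementation: the function name, the row-stochastic `mobility` matrix, and the rates `beta` and `gamma` are assumptions chosen for the sketch.

```python
import numpy as np

def metapop_sir(S, I, R, mobility, beta=0.3, gamma=0.1, steps=200):
    """Discrete-time SIR within each division; after each local step,
    people mix between divisions via a row-stochastic mobility matrix."""
    S, I, R = (np.asarray(x, float).copy() for x in (S, I, R))
    for _ in range(steps):
        N = S + I + R
        new_inf = beta * S * I / N          # local transmission
        new_rec = gamma * I                 # local recovery
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        # inter-division movement: each compartment is redistributed
        S, I, R = mobility.T @ S, mobility.T @ I, mobility.T @ R
    return S, I, R

# Two divisions; 10% of each division's people travel to the other per step
mob = np.array([[0.9, 0.1],
                [0.1, 0.9]])
S, I, R = metapop_sir([999., 1000.], [1., 0.], [0., 0.], mob)
print(R)  # the epidemic reaches division 2 although the seed was in division 1
```

    Replacing `mob` with a matrix derived from bus or tollgate traffic counts is what links this sketch to the empirical networks in the paper.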

  1. Change Detection Algorithms for Information Assurance of Computer Networks

    DTIC Science & Technology

    2002-01-01

    ...number of computer attacks increases steadily per year. At the time of this writing the Internet Security Systems' baseline assessment is that a new... across a network by exploiting security flaws in widely-used services offered by vulnerable computers. In order to locate the vulnerable computers, the...

  2. State of the Art of Network Security Perspectives in Cloud Computing

    NASA Astrophysics Data System (ADS)

    Oh, Tae Hwan; Lim, Shinyoung; Choi, Young B.; Park, Kwang-Roh; Lee, Heejo; Choi, Hyunsang

    Cloud computing is now regarded as a social phenomenon that satisfies customers' needs. Arguably, cloud computing realizes both those needs and a basic principle of economics: gaining maximum benefit from minimum investment. We live in a connected society with a flood of information, and without computers connected to the Internet, our daily activities and work would be impossible. Cloud computing can provide customers with custom-tailored application software and user environments based on their needs, by adopting on-demand outsourcing of computing resources through the Internet. It also provides users with high-end computing power and expensive application software packages, so that users can access their data and applications on remote systems wherever they are located. Because cloud computing systems are connected to the Internet, their network security issues must be addressed before real-world service. In this paper, a survey of issues in network security for cloud computing is presented from the perspective of real-world service environments.

  3. USA National Phenology Network gridded products documentation

    USGS Publications Warehouse

    Crimmins, Theresa M.; Marsh, R. Lee; Switzer, Jeff R.; Crimmins, Michael A.; Gerst, Katharine L.; Rosemartin, Alyssa H.; Weltzin, Jake F.

    2017-02-23

    The goals of the USA National Phenology Network (USA-NPN, www.usanpn.org) are to advance science, inform decisions, and communicate and connect with the public regarding phenology and species’ responses to environmental variation and climate change. The USA-NPN seeks to facilitate informed ecosystem stewardship and management by providing phenological information freely and openly. One way the USA-NPN is endeavoring to accomplish these goals is by providing data and data products in a wide range of formats, including gridded real-time, short-term forecasted, and historical maps of phenological events, patterns and trends. This document describes the suite of gridded phenologically relevant data products produced and provided by the USA National Phenology Network, which can be accessed at www.usanpn.org/data/phenology_maps and also through web services at geoserver.usanpn.org/geoserver/wms?request=GetCapabilities.

  4. The National Special Education Alliance: One Year Later.

    ERIC Educational Resources Information Center

    Green, Peter

    1988-01-01

    The National Special Education Alliance (a national network of local computer resource centers associated with Apple Computer, Inc.) consists, one year after formation, of 24 non-profit support centers staffed largely by volunteers. The NSEA now reaches more than 1000 disabled computer users each month and more growth in the future is expected.…

  5. Computer Networks as Instructional and Collaborative Distance Learning Environments.

    ERIC Educational Resources Information Center

    Schrum, Lynne; Lamb, Theodore A.

    1997-01-01

    Reports on the early stages of a project at the U.S. Air Force Academy, in which the instructional applications of a networked classroom laboratory, an intranet, and the Internet are explored as well as the effectiveness and efficiency of groupware and computer networks as instructional environments. Presents the results of the first pilot tests.…

  6. Bifurcation-based adiabatic quantum computation with a nonlinear oscillator network.

    PubMed

    Goto, Hayato

    2016-02-22

    The dynamics of nonlinear systems qualitatively change depending on their parameters, which is called bifurcation. A quantum-mechanical nonlinear oscillator can yield a quantum superposition of two oscillation states, known as a Schrödinger cat state, via quantum adiabatic evolution through its bifurcation point. Here we propose a quantum computer comprising such quantum nonlinear oscillators, instead of quantum bits, to solve hard combinatorial optimization problems. The nonlinear oscillator network finds optimal solutions via quantum adiabatic evolution, where nonlinear terms are increased slowly, in contrast to conventional adiabatic quantum computation or quantum annealing, where quantum fluctuation terms are decreased slowly. As a result of numerical simulations, it is concluded that quantum superposition and quantum fluctuation work effectively to find optimal solutions. It is also notable that the present computer is analogous to neural computers, which are also networks of nonlinear components. Thus, the present scheme will open new possibilities for quantum computation, nonlinear science, and artificial intelligence.

  7. Bifurcation-based adiabatic quantum computation with a nonlinear oscillator network

    PubMed Central

    Goto, Hayato

    2016-01-01

    The dynamics of nonlinear systems qualitatively change depending on their parameters, which is called bifurcation. A quantum-mechanical nonlinear oscillator can yield a quantum superposition of two oscillation states, known as a Schrödinger cat state, via quantum adiabatic evolution through its bifurcation point. Here we propose a quantum computer comprising such quantum nonlinear oscillators, instead of quantum bits, to solve hard combinatorial optimization problems. The nonlinear oscillator network finds optimal solutions via quantum adiabatic evolution, where nonlinear terms are increased slowly, in contrast to conventional adiabatic quantum computation or quantum annealing, where quantum fluctuation terms are decreased slowly. As a result of numerical simulations, it is concluded that quantum superposition and quantum fluctuation work effectively to find optimal solutions. It is also notable that the present computer is analogous to neural computers, which are also networks of nonlinear components. Thus, the present scheme will open new possibilities for quantum computation, nonlinear science, and artificial intelligence. PMID:26899997

  8. Bifurcation-based adiabatic quantum computation with a nonlinear oscillator network

    NASA Astrophysics Data System (ADS)

    Goto, Hayato

    2016-02-01

    The dynamics of nonlinear systems qualitatively change depending on their parameters, which is called bifurcation. A quantum-mechanical nonlinear oscillator can yield a quantum superposition of two oscillation states, known as a Schrödinger cat state, via quantum adiabatic evolution through its bifurcation point. Here we propose a quantum computer comprising such quantum nonlinear oscillators, instead of quantum bits, to solve hard combinatorial optimization problems. The nonlinear oscillator network finds optimal solutions via quantum adiabatic evolution, where nonlinear terms are increased slowly, in contrast to conventional adiabatic quantum computation or quantum annealing, where quantum fluctuation terms are decreased slowly. As a result of numerical simulations, it is concluded that quantum superposition and quantum fluctuation work effectively to find optimal solutions. It is also notable that the present computer is analogous to neural computers, which are also networks of nonlinear components. Thus, the present scheme will open new possibilities for quantum computation, nonlinear science, and artificial intelligence.

  9. Atomic switch networks-nanoarchitectonic design of a complex system for natural computing.

    PubMed

    Demis, E C; Aguilera, R; Sillin, H O; Scharnhorst, K; Sandouk, E J; Aono, M; Stieg, A Z; Gimzewski, J K

    2015-05-22

    Self-organized complex systems are ubiquitous in nature, and the structural complexity of these natural systems can be used as a model to design new classes of functional nanotechnology based on highly interconnected networks of interacting units. Conventional fabrication methods for electronic computing devices are subject to known scaling limits, confining the diversity of possible architectures. This work explores methods of fabricating a self-organized complex device known as an atomic switch network and discusses its potential utility in computing. Through a merger of top-down and bottom-up techniques guided by mathematical and nanoarchitectonic design principles, we have produced functional devices comprising nanoscale elements whose intrinsic nonlinear dynamics and memorization capabilities produce robust patterns of distributed activity and a capacity for nonlinear transformation of input signals when configured in the appropriate network architecture. Their operational characteristics represent a unique potential for hardware implementation of natural computation, specifically in the area of reservoir computing-a burgeoning field that investigates the computational aptitude of complex biologically inspired systems.

  10. Test experience on an ultrareliable computer communication network

    NASA Technical Reports Server (NTRS)

    Abbott, L. W.

    1984-01-01

    The dispersed sensor processing mesh (DSPM) is an experimental, ultra-reliable, fault-tolerant computer communications network that exhibits an organic-like ability to regenerate itself after suffering damage. The regeneration is accomplished by two routines - grow and repair. This paper discusses the DSPM concept for achieving fault tolerance and provides a brief description of the mechanization of both the experiment and the six-node experimental network. The main topic of this paper is the system performance of the growth algorithm contained in the grow routine. The characteristics that the growth algorithm imparts to DSPM are also discussed. Data from an experimental DSPM network and software simulation of larger DSPM-type networks are used to examine the inherent limitation on growth time by the growth algorithm and the relationship of growth time to network size and topology.

  11. Use of medical information by computer networks raises major concerns about privacy.

    PubMed Central

    O'Reilly, M

    1995-01-01

    The development of computer databases and long-distance computer networks is leading to improvements in Canada's health care system. However, these developments come at a cost and require a balancing act between access and confidentiality. Columnist Michael O'Reilly, who in this article explores the security of computer networks, notes that respect for patients' privacy must be given as high a priority as the ability to see their records in the first place. PMID:7600474

  12. 78 FR 24154 - Notice of Availability of a National Animal Health Laboratory Network Reorganization Concept Paper

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-24

    ...] Notice of Availability of a National Animal Health Laboratory Network Reorganization Concept Paper AGENCY... Network (NAHLN) for public review and comment. The NAHLN is a nationally coordinated network and... Coordinator, National Animal Health Laboratory Network, Veterinary Services, APHIS, 2140 Centre Avenue...

  13. Cloud Computing Services for Seismic Networks

    NASA Astrophysics Data System (ADS)

    Olson, Michael

    This thesis describes a compositional framework for developing situation awareness applications: applications that provide ongoing information about a user's changing environment. The thesis describes how the framework is used to develop a situation awareness application for earthquakes. The applications are implemented as Cloud computing services connected to sensors and actuators. The architecture and design of the Cloud services are described and measurements of performance metrics are provided. The thesis includes results of experiments on earthquake monitoring conducted over a year. The applications developed by the framework are (1) the CSN---the Community Seismic Network---which uses relatively low-cost sensors deployed by members of the community, and (2) SAF---the Situation Awareness Framework---which integrates data from multiple sources, including the CSN, CISN---the California Integrated Seismic Network, a network consisting of high-quality seismometers deployed carefully by professionals in the CISN organization and spread across Southern California---and prototypes of multi-sensor platforms that include carbon monoxide, methane, dust and radiation sensors.

  14. A Study of Quality of Service Communication for High-Speed Packet-Switching Computer Sub-Networks

    NASA Technical Reports Server (NTRS)

    Cui, Zhenqian

    1999-01-01

    With the development of high-speed networking technology, computer networks, including local-area networks (LANs), wide-area networks (WANs) and the Internet, are extending their traditional roles of carrying computer data. They are being used for Internet telephony, multimedia applications such as conferencing and video on demand, distributed simulations, and other real-time applications. LANs are even used for distributed real-time process control and computing as a cost-effective approach. Differing from traditional data transfer, these new classes of high-speed network applications (video, audio, real-time process control, and others) are delay sensitive. The usefulness of data depends not only on the correctness of received data, but also the time that data are received. In other words, these new classes of applications require networks to provide guaranteed services or quality of service (QoS). Quality of service can be defined by a set of parameters and reflects a user's expectation about the underlying network's behavior. Traditionally, distinct services are provided by different kinds of networks. Voice services are provided by telephone networks, video services are provided by cable networks, and data transfer services are provided by computer networks. A single network providing different services is called an integrated-services network.

  15. Analysis of stationary availability factor of two-level backbone computer networks with arbitrary topology

    NASA Astrophysics Data System (ADS)

    Rahman, P. A.

    2018-05-01

    This paper deals with two-level backbone computer networks with arbitrary topology. The author proposes a method for calculating the stationary availability factor of such networks, based on Markov reliability models for a set of independent repairable elements with given failure and repair rates, together with methods of discrete mathematics. An algorithm for analyzing network connectivity that accounts for different kinds of network equipment failures is also described. Finally, the paper presents an example calculation of the stationary availability factor for a backbone computer network with a given topology.
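
    For a single repairable element with constant failure rate λ and repair rate μ, the two-state Markov model gives stationary availability A = μ/(λ + μ); availabilities of independent elements then combine by the usual series/parallel rules. A minimal sketch follows; the rates and the series/parallel topology are invented for illustration and are not taken from the paper.

```python
def element_availability(failure_rate, repair_rate):
    """Stationary availability of one repairable element (two-state
    Markov model with constant failure rate lambda and repair rate mu)."""
    return repair_rate / (failure_rate + repair_rate)

def series(avails):
    """All elements on the path are needed: availabilities multiply."""
    out = 1.0
    for a in avails:
        out *= a
    return out

def parallel(avails):
    """Any one redundant path suffices: unavailabilities multiply."""
    out = 1.0
    for a in avails:
        out *= (1.0 - a)
    return 1.0 - out

# Hypothetical backbone link: a switch (lambda=1e-4/h, mu=1e-1/h) in series
# with two redundant trunk lines (lambda=1e-3/h, mu=5e-2/h each)
sw = element_availability(1e-4, 1e-1)
trunk = element_availability(1e-3, 5e-2)
link = series([sw, parallel([trunk, trunk])])
print(round(link, 6))
```

    For an arbitrary topology, as in the paper, a connectivity analysis over all equipment-failure combinations replaces these closed-form series/parallel reductions.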

  16. Computers, Electronic Networking and Education: Some American Experiences.

    ERIC Educational Resources Information Center

    McConnell, David

    1991-01-01

    Describes new developments in distributed educational computing at Massachusetts Institute of Technology (MIT, "Athena"), Carnegie Mellon University ("Andrew"), Brown University "Intermedia"), Electronic University Network (California), Western Behavioral Sciences Institute (California), and University of California,…

  17. Deep Space Network (DSN), Network Operations Control Center (NOCC) computer-human interfaces

    NASA Technical Reports Server (NTRS)

    Ellman, Alvin; Carlton, Magdi

    1993-01-01

    The Network Operations Control Center (NOCC) of the DSN is responsible for scheduling the resources of DSN, and monitoring all multi-mission spacecraft tracking activities in real-time. Operations performs this job with computer systems at JPL connected to over 100 computers at Goldstone, Australia and Spain. The old computer system became obsolete, and the first version of the new system was installed in 1991. Significant improvements for the computer-human interfaces became the dominant theme for the replacement project. Major issues required innovative problem solving. Among these issues were: How to present several thousand data elements on displays without overloading the operator? What is the best graphical representation of DSN end-to-end data flow? How to operate the system without memorizing mnemonics of hundreds of operator directives? Which computing environment will meet the competing performance requirements? This paper presents the technical challenges, engineering solutions, and results of the NOCC computer-human interface design.

  18. A theoretical and experimental study of neuromorphic atomic switch networks for reservoir computing.

    PubMed

    Sillin, Henry O; Aguilera, Renato; Shieh, Hsien-Hang; Avizienis, Audrius V; Aono, Masakazu; Stieg, Adam Z; Gimzewski, James K

    2013-09-27

    Atomic switch networks (ASNs) have been shown to generate network level dynamics that resemble those observed in biological neural networks. To facilitate understanding and control of these behaviors, we developed a numerical model based on the synapse-like properties of individual atomic switches and the random nature of the network wiring. We validated the model against various experimental results highlighting the possibility to functionalize the network plasticity and the differences between an atomic switch in isolation and its behaviors in a network. The effects of changing connectivity density on the nonlinear dynamics were examined as characterized by higher harmonic generation in response to AC inputs. To demonstrate their utility for computation, we subjected the simulated network to training within the framework of reservoir computing and showed initial evidence of the ASN acting as a reservoir which may be optimized for specific tasks by adjusting the input gain. The work presented represents steps in a unified approach to experimentation and theory of complex systems to make ASNs a uniquely scalable platform for neuromorphic computing.
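
    The reservoir computing framework in which the simulated ASN is trained can be illustrated with a conventional echo state network: a fixed random recurrent network whose states are fit by a trained linear readout. This is a generic sketch, not the ASN model from the paper; the reservoir size, spectral radius, ridge penalty, and the 3-step-delay recall task are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, washout = 100, 1200, 200

# Fixed random reservoir, rescaled to spectral radius 0.9 (echo state property)
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=N)

u = rng.uniform(-1, 1, size=T)          # random input stream
x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])    # nonlinear state update
    states[t] = x

# Linear readout trained by ridge regression to recall the input 3 steps ago
target = np.roll(u, 3)                  # target[t] = u[t-3]
X, y = states[washout:], target[washout:]
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
nrmse = np.sqrt(np.mean((X @ w_out - y) ** 2)) / np.std(y)
print(nrmse)  # substantially below 1 indicates short-term memory of u
```

    In the ASN setting, the random recurrent matrix is replaced by the physical device dynamics, and only the readout (and input gain) is adjusted, which is exactly why such hardware is attractive for this paradigm.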

  19. A theoretical and experimental study of neuromorphic atomic switch networks for reservoir computing

    NASA Astrophysics Data System (ADS)

    Sillin, Henry O.; Aguilera, Renato; Shieh, Hsien-Hang; Avizienis, Audrius V.; Aono, Masakazu; Stieg, Adam Z.; Gimzewski, James K.

    2013-09-01

    Atomic switch networks (ASNs) have been shown to generate network level dynamics that resemble those observed in biological neural networks. To facilitate understanding and control of these behaviors, we developed a numerical model based on the synapse-like properties of individual atomic switches and the random nature of the network wiring. We validated the model against various experimental results highlighting the possibility to functionalize the network plasticity and the differences between an atomic switch in isolation and its behaviors in a network. The effects of changing connectivity density on the nonlinear dynamics were examined as characterized by higher harmonic generation in response to AC inputs. To demonstrate their utility for computation, we subjected the simulated network to training within the framework of reservoir computing and showed initial evidence of the ASN acting as a reservoir which may be optimized for specific tasks by adjusting the input gain. The work presented represents steps in a unified approach to experimentation and theory of complex systems to make ASNs a uniquely scalable platform for neuromorphic computing.

  20. Advanced Scientific Computing Research Network Requirements: ASCR Network Requirements Review Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bacon, Charles; Bell, Greg; Canon, Shane

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In October 2012, ESnet and the Office of Advanced Scientific Computing Research (ASCR) of the DOE SC organized a review to characterize the networking requirements of the programs funded by the ASCR program office. The requirements identified at the review are summarized in the Findings section, and are described in more detail in the body of the report.

  1. A Low Cost Micro-Computer Based Local Area Network for Medical Office and Medical Center Automation

    PubMed Central

    Epstein, Mel H.; Epstein, Lynn H.; Emerson, Ron G.

    1984-01-01

    A low-cost microcomputer-based local area network for medical office automation is described, which makes use of an array of multiple and different personal computers interconnected by a local area network. Each computer on the network functions as a fully potent workstation for data entry and report generation. The network allows each workstation complete access to the entire database. Additionally, designated computers may serve as access ports for remote terminals. Through “Gateways” the network may serve as a front end for a large mainframe, or may interface with another network. The system provides, for the medical office environment, the expandability and flexibility of a multi-terminal mainframe system at a far lower cost without sacrifice of performance.

  2. Estimating National-scale Emissions using Dense Monitoring Networks

    NASA Astrophysics Data System (ADS)

    Ganesan, A.; Manning, A.; Grant, A.; Young, D.; Oram, D.; Sturges, W. T.; Moncrieff, J. B.; O'Doherty, S.

    2014-12-01

    The UK's DECC (Deriving Emissions linked to Climate Change) network consists of four greenhouse gas measurement stations that are situated to constrain emissions from the UK and Northwest Europe. These four stations are located in Mace Head (West Coast of Ireland), and on telecommunication towers at Ridge Hill (Western England), Tacolneston (Eastern England) and Angus (Eastern Scotland). With the exception of Angus, which currently only measures carbon dioxide (CO2) and methane (CH4), the remaining sites are additionally equipped to monitor nitrous oxide (N2O). We present an analysis of the network's CH4 and N2O observations from 2011-2013 and compare derived top-down regional emissions with bottom-up inventories, including a recently produced high-resolution inventory (UK National Atmospheric Emissions Inventory). As countries are moving toward national-level emissions estimation, we also address some of the considerations that need to be made when designing these national networks. One of the novel aspects of this work is that we use a hierarchical Bayesian inversion framework. This methodology, which has newly been applied to greenhouse gas emissions estimation, is designed to estimate temporally and spatially varying model-measurement uncertainties and correlation scales, in addition to fluxes. Through this analysis, we demonstrate the importance of characterizing these covariance parameters in order to properly use data from high-density monitoring networks. This UK case study highlights the ways in which this new inverse framework can be used to address some of the limitations of traditional Bayesian inverse methods.
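
    As background for the hierarchical approach described above, the classical (non-hierarchical) Gaussian Bayesian inversion it extends can be written in closed form. Everything in this sketch is illustrative: the footprint matrix `H`, the prior and error covariances, and the two-region, three-station setup are invented, and the hierarchical method additionally infers the covariance parameters rather than fixing them as done here.

```python
import numpy as np

def gaussian_inversion(H, y, x_prior, P, R):
    """Posterior mean of fluxes x for observations y = H x + noise,
    with Gaussian prior N(x_prior, P) and observation error N(0, R).
    Hierarchical schemes extend this by also estimating the
    (hyper)parameters inside P and R instead of fixing them."""
    Pi, Ri = np.linalg.inv(P), np.linalg.inv(R)
    A = H.T @ Ri @ H + Pi
    b = H.T @ Ri @ y + Pi @ x_prior
    return np.linalg.solve(A, b)

# Two flux regions observed by three stations (H is a made-up footprint matrix)
H = np.array([[1.0, 0.2],
              [0.3, 1.0],
              [0.5, 0.5]])
x_true = np.array([2.0, 1.0])
y = H @ x_true                       # noise-free observations for the sketch
x_hat = gaussian_inversion(H, y, x_prior=np.zeros(2),
                           P=np.eye(2) * 10.0, R=np.eye(3) * 0.01)
print(x_hat)  # close to the true fluxes [2, 1]
```

    With real network data, misjudging `R` (the model-measurement uncertainty) biases the recovered fluxes, which is the motivation for estimating it hierarchically.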

  3. Distinguishing humans from computers in the game of go: A complex network approach

    NASA Astrophysics Data System (ADS)

    Coquidé, C.; Georgeot, B.; Giraud, O.

    2017-08-01

    We compare complex networks built from the game of go and obtained from databases of human-played games with those obtained from computer-played games. Our investigations show that statistical features of the human-based networks and the computer-based networks differ, and that these differences can be statistically significant on a relatively small number of games using specific estimators. We show that the deterministic or stochastic nature of the computer algorithm playing the game can also be distinguished from these quantities. This can be seen as a tool to implement a Turing-like test for go simulators.

  4. Computational Fact Checking from Knowledge Networks

    PubMed Central

    Ciampaglia, Giovanni Luca; Shiralkar, Prashant; Rocha, Luis M.; Bollen, Johan; Menczer, Filippo; Flammini, Alessandro

    2015-01-01

    Traditional fact checking by expert journalists cannot keep up with the enormous volume of information that is now generated online. Computational fact checking may significantly enhance our ability to evaluate the veracity of dubious information. Here we show that the complexities of human fact checking can be approximated quite well by finding the shortest path between concept nodes under properly defined semantic proximity metrics on knowledge graphs. Framed as a network problem, this approach is feasible with efficient computational techniques. We evaluate this approach by examining tens of thousands of claims related to history, entertainment, geography, and biographical information using a public knowledge graph extracted from Wikipedia. Statements independently known to be true consistently receive higher support via our method than do false ones. These findings represent a significant step toward scalable computational fact-checking methods that may one day mitigate the spread of harmful misinformation. PMID:26083336
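
    The shortest-path idea can be illustrated with Dijkstra's algorithm on a toy graph, using a node cost that grows with degree so that paths through generic hub nodes count as semantically longer. The graph, the `log(degree)` cost, and the entity names are all invented for the sketch and only loosely mirror the semantic proximity metric of the paper.

```python
import heapq
from math import log

def cheapest_path_cost(graph, src, dst):
    """Dijkstra over an undirected knowledge graph where entering a node
    costs log(degree): paths through generic, high-degree hubs are
    penalized, echoing the 'semantic proximity' idea."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v in graph[u]:
            nd = d + log(len(graph[v]))   # cost of entering node v
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

# Toy graph of hypothetical entities; 'person' is a generic hub node
graph = {
    "Rome": {"Italy", "person"},
    "Italy": {"Rome", "Europe"},
    "Europe": {"Italy"},
    "person": {"Rome", "Alice", "Bob"},
    "Alice": {"person"},
    "Bob": {"person"},
}
direct = cheapest_path_cost(graph, "Rome", "Europe")
via_hub = cheapest_path_cost(graph, "Rome", "Alice")
print(direct < via_hub)  # short, specific paths score as semantically closer
```

    A claim linking two concepts can then be scored by this path cost: low cost (a short, specific path) is treated as support, high cost as a lack of it.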

  5. Human Inspired Self-developmental Model of Neural Network (HIM): Introducing Content/Form Computing

    NASA Astrophysics Data System (ADS)

    Krajíček, Jiří

    This paper presents cross-disciplinary research connecting medical/psychological evidence on human abilities with the need in informatics to update current models in computer science to support alternative methods of computation and communication. In [10] we already proposed a hypothesis introducing the concept of a human information model (HIM) as a cooperative system. Here we continue the design of HIM in detail. In our design, we first introduce the Content/Form computing system, a new principle extending present methods in evolutionary computing (genetic algorithms, genetic programming). We then apply this system to the HIM (a type of artificial neural network) as a basic paradigm of network self-development. The main inspiration for our natural/human design comes from the well-known concept of artificial neural networks, medical/psychological evidence, and Sheldrake's theory of "Nature as Alive" [22].

  6. Federated queries of clinical data repositories: Scaling to a national network.

    PubMed

    Weber, Griffin M

    2015-06-01

    Federated networks of clinical research data repositories are rapidly growing in size from a handful of sites to true national networks with more than 100 hospitals. This study creates a conceptual framework for predicting how various properties of these systems will scale as they continue to expand. Starting with actual data from Harvard's four-site Shared Health Research Information Network (SHRINE), the framework is used to imagine a future 4000 site network, representing the majority of hospitals in the United States. From this it becomes clear that several common assumptions of small networks fail to scale to a national level, such as all sites being online at all times or containing data from the same date range. On the other hand, a large network enables researchers to select subsets of sites that are most appropriate for particular research questions. Developers of federated clinical data networks should be aware of how the properties of these networks change at different scales and design their software accordingly. Copyright © 2015 Elsevier Inc. All rights reserved.
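
    One scaling effect noted above, that the small-network assumption of all sites being online at all times fails at national scale, follows from simple probability: with independent per-site uptime p, all n sites respond to a federated query with probability p^n. A small illustration (the uptime figure is hypothetical):

```python
def p_all_online(n_sites, p_up):
    """Chance that every site in a federated query responds,
    assuming independent per-site availability p_up."""
    return p_up ** n_sites

# A 99%-available site poses little risk in a 4-site network, but at
# 4000 sites some site is essentially always down.
print(p_all_online(4, 0.99))     # about 0.96
print(p_all_online(4000, 0.99))  # vanishingly small
```

    This is why large federated networks must tolerate partial responses and instead let researchers select the subset of sites best suited to a question.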

  7. An Innovative Approach to Bridge a Skill Gap and Grow a Workforce Pipeline: The Computer System, Cluster, and Networking Summer Institute

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Connor, Carolyn Marie; Jacobson, Andree Lars; Bonnie, Amanda Marie

    Sustainable and effective computing infrastructure depends critically on the skills and expertise of domain scientists and of committed and well-trained advanced computing professionals. But, in its ongoing High Performance Computing (HPC) work, Los Alamos National Laboratory noted a persistent shortage of well-prepared applicants, particularly for entry-level cluster administration, file systems administration, and high speed networking positions. Further, based upon recruiting efforts and interactions with universities graduating students in related majors of interest (e.g., computer science (CS)), there has been a long standing skillset gap, as focused training in HPC topics is typically lacking or absent in undergraduate and even in many graduate programs. Given that the effective operation and use of HPC systems requires specialized and often advanced training, that there is a recognized HPC skillset gap, and that there is intense global competition for computing and computational science talent, there is a long-standing and critical need for innovative approaches to help bridge the gap and create a well-prepared, next generation HPC workforce. Our paper places this need in the context of the HPC work and workforce requirements at Los Alamos National Laboratory (LANL) and presents one such innovative program conceived to address the need, bridge the gap, and grow an HPC workforce pipeline at LANL. The Computer System, Cluster, and Networking Summer Institute (CSCNSI) completed its 10th year in 2016. The story of the CSCNSI and its evolution is detailed below with a description of the design of its Boot Camp, and a summary of its success and some key factors that have enabled that success.

  8. An Innovative Approach to Bridge a Skill Gap and Grow a Workforce Pipeline: The Computer System, Cluster, and Networking Summer Institute

    DOE PAGES

    Connor, Carolyn Marie; Jacobson, Andree Lars; Bonnie, Amanda Marie; ...

    2016-11-01

    Sustainable and effective computing infrastructure depends critically on the skills and expertise of domain scientists and of committed and well-trained advanced computing professionals. But, in its ongoing High Performance Computing (HPC) work, Los Alamos National Laboratory noted a persistent shortage of well-prepared applicants, particularly for entry-level cluster administration, file systems administration, and high-speed networking positions. Further, based upon recruiting efforts and interactions with universities graduating students in related majors of interest (e.g., computer science (CS)), there has been a long-standing skillset gap, as focused training in HPC topics is typically lacking or absent in undergraduate and even in many graduate programs. Given that the effective operation and use of HPC systems requires specialized and often advanced training, that there is a recognized HPC skillset gap, and that there is intense global competition for computing and computational science talent, there is a long-standing and critical need for innovative approaches to help bridge the gap and create a well-prepared, next-generation HPC workforce. Our paper places this need in the context of the HPC work and workforce requirements at Los Alamos National Laboratory (LANL) and presents one such innovative program conceived to address the need, bridge the gap, and grow an HPC workforce pipeline at LANL. The Computer System, Cluster, and Networking Summer Institute (CSCNSI) completed its 10th year in 2016. The story of the CSCNSI and its evolution is detailed below with a description of the design of its Boot Camp, and a summary of its success and some key factors that have enabled that success.

  9. Computer Network Attack: An Operational Tool?

    DTIC Science & Technology

    2003-01-17

    Spectrum of Conflict, Cyber Warfare, Preemptive Strike, Effects-Based Targeting. Abstract: Computer Network Attack (CNA) is defined as...great deal of attention as the world’s capabilities in cyber-warfare grow. Although addressing the wide-ranging legal aspects of CNA is beyond the...the notion of cyber-warfare has not yet developed to the point that international norms have been established. These norms will be developed in

  10. Hello! Kids Network around the World.

    ERIC Educational Resources Information Center

    Lynes, Kristine

    1996-01-01

    Describes Kids Network, an educational network available from the National Geographic Society that allows students in grades four through six to become part of research teams that include students from around the world. Computer hardware requirements and a list of Kids Network research questions are listed in a sidebar. (JMV)

  11. Economics of Computing: The Case of Centralized Network File Servers.

    ERIC Educational Resources Information Center

    Solomon, Martin B.

    1994-01-01

    Discusses computer networking and the cost effectiveness of decentralization, including local area networks. A planned experiment with a centralized approach to the operation and management of file servers at the University of South Carolina is described; it aims to realize cost savings and avoid staffing problems. (Contains four…

  12. Efficient computation of aerodynamic influence coefficients for aeroelastic analysis on a transputer network

    NASA Technical Reports Server (NTRS)

    Janetzke, David C.; Murthy, Durbha V.

    1991-01-01

    Aeroelastic analysis is multi-disciplinary and computationally expensive. Hence, it can greatly benefit from parallel processing. As part of an effort to develop an aeroelastic capability on a distributed memory transputer network, a parallel algorithm for the computation of aerodynamic influence coefficients is implemented on a network of 32 transputers. The aerodynamic influence coefficients are calculated using a 3-D unsteady aerodynamic model and a parallel discretization. Efficiencies up to 85 percent were demonstrated using 32 processors. The effects of subtask ordering, problem size, and network topology are presented. A comparison to results on a shared memory computer indicates that higher speedup is achieved on the distributed memory system.
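    The 85-percent figure quoted above is a parallel efficiency: speedup divided by processor count. A minimal sketch of the metric follows; the timing numbers are hypothetical, chosen only to reproduce an 85% efficiency on 32 processors, and are not taken from the paper.

```python
def parallel_efficiency(t_serial, t_parallel, n_procs):
    """Efficiency = speedup / processor count, where speedup = T1 / TN."""
    speedup = t_serial / t_parallel
    return speedup / n_procs

# Hypothetical timings: a 272 s serial run finishing in 10 s on 32 processors
# corresponds to a 27.2x speedup, i.e. 85% efficiency.
eff = parallel_efficiency(t_serial=272.0, t_parallel=10.0, n_procs=32)
print(round(eff, 2))  # 0.85
```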

  13. Critical Thinking about Literature through Computer Networking.

    ERIC Educational Resources Information Center

    Long, Thomas L.; Pedersen, Christine

    A computer-oriented, classroom-based research project was conducted at Thomas Nelson Community College in Hampton, Virginia, to explore the ways in which students in a composition and literature class might use a local area network (LAN) as a catalyst to critical thinking, to construct a decentralized classroom, and to use various forms of…

  14. Predictive Behavior of a Computational Foot/Ankle Model through Artificial Neural Networks.

    PubMed

    Chande, Ruchi D; Hargraves, Rosalyn Hobson; Ortiz-Robinson, Norma; Wayne, Jennifer S

    2017-01-01

    Computational models are useful tools to study the biomechanics of human joints. Their predictive performance is heavily dependent on bony anatomy and soft tissue properties. Imaging data provides anatomical requirements while approximate tissue properties are implemented from literature data, when available. We sought to improve the predictive capability of a computational foot/ankle model by optimizing its ligament stiffness inputs using feedforward and radial basis function neural networks. While the former demonstrated better performance than the latter in terms of mean squared error, both networks provided reasonable stiffness predictions for implementation into the computational model.

  15. Computational Models and Emergent Properties of Respiratory Neural Networks

    PubMed Central

    Lindsey, Bruce G.; Rybak, Ilya A.; Smith, Jeffrey C.

    2012-01-01

    Computational models of the neural control system for breathing in mammals provide a theoretical and computational framework bringing together experimental data obtained from different animal preparations under various experimental conditions. Many of these models were developed in parallel and iteratively with experimental studies and provided predictions guiding new experiments. This data-driven modeling approach has advanced our understanding of respiratory network architecture and neural mechanisms underlying generation of the respiratory rhythm and pattern, including their functional reorganization under different physiological conditions. Models reviewed here vary in neurobiological details and computational complexity and span multiple spatiotemporal scales of respiratory control mechanisms. Recent models describe interacting populations of respiratory neurons spatially distributed within the Bötzinger and pre-Bötzinger complexes and rostral ventrolateral medulla that contain core circuits of the respiratory central pattern generator (CPG). Network interactions within these circuits along with intrinsic rhythmogenic properties of neurons form a hierarchy of multiple rhythm generation mechanisms. The functional expression of these mechanisms is controlled by input drives from other brainstem components, including the retrotrapezoid nucleus and pons, which regulate the dynamic behavior of the core circuitry. The emerging view is that the brainstem respiratory network has rhythmogenic capabilities at multiple levels of circuit organization. This allows flexible, state-dependent expression of different neural pattern-generation mechanisms under various physiological conditions, enabling a wide repertoire of respiratory behaviors. Some models consider control of the respiratory CPG by pulmonary feedback and network reconfiguration during defensive behaviors such as cough. Future directions in modeling of the respiratory CPG are considered. PMID:23687564

  16. Main control computer security model of closed network systems protection against cyber attacks

    NASA Astrophysics Data System (ADS)

    Seymen, Bilal

    2014-06-01

    The model brings data input/output under control in closed network systems, maintains the system securely, and controls the flow of information through the Main Control Computer, which also guards network traffic against cyber-attacks. The network, which can be controlled single-handedly thanks to a design that enables network users to enter data into the system or extract data from it securely, is intended to minimize security gaps. Moreover, data input/output records can be kept by means of the user account assigned to each user, and retroactive tracking is also possible, if requested. Because the measures that would need to be taken for each computer on the network to ensure cyber security entail high cost, this model is intended to provide a cost-effective working environment, requiring only that the Main Control Computer have up-to-date hardware.

  17. Exploiting parallel computing with limited program changes using a network of microcomputers

    NASA Technical Reports Server (NTRS)

    Rogers, J. L., Jr.; Sobieszczanski-Sobieski, J.

    1985-01-01

    Network computing and multiprocessor computers are two discernible trends in parallel processing. The computational behavior of an iterative distributed process in which some subtasks are completed later than others because of an imbalance in computational requirements is of significant interest. The effects of asynchronous processing were studied. A small existing program was converted to perform finite element analysis by distributing substructure analysis over a network of four Apple IIe microcomputers connected to a shared disk, simulating a parallel computer. The substructure analysis uses an iterative, fully stressed, structural resizing procedure. A framework of beams divided into three substructures is used as the finite element model. The effects of asynchronous processing on the convergence of the design variables are determined by not resizing particular substructures on various iterations.

  18. Computationally Efficient Nonlinear Bell Inequalities for Quantum Networks

    NASA Astrophysics Data System (ADS)

    Luo, Ming-Xing

    2018-04-01

    The correlations in quantum networks have attracted strong interest with new types of violations of locality. The standard Bell inequalities cannot characterize the multipartite correlations that are generated by multiple sources. The main problem is that no computationally efficient method is available for constructing useful Bell inequalities for general quantum networks. In this work, we show a significant improvement by presenting new, explicit Bell-type inequalities for general networks including cyclic networks. These nonlinear inequalities are related to the matching problem of an equivalent unweighted bipartite graph that allows constructing a polynomial-time algorithm. For the quantum resources consisting of bipartite entangled pure states and generalized Greenberger-Horne-Zeilinger (GHZ) states, we prove the generic nonmultilocality of quantum networks with multiple independent observers using new Bell inequalities. The violations are maximal with respect to the presented Tsirelson's bound for Einstein-Podolsky-Rosen states and GHZ states. Moreover, these violations hold for Werner states or some general noisy states. Our results suggest that the presented Bell inequalities can be used to characterize experimental quantum networks.
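    The abstract ties the inequalities to maximum matching on an unweighted bipartite graph, which is solvable in polynomial time. A generic augmenting-path matching sketch is shown below; the example graph is invented for illustration and does not reproduce the paper's actual graph construction.

```python
def max_bipartite_matching(adj):
    """Maximum matching on an unweighted bipartite graph via augmenting paths.
    adj[u] lists right-side vertices adjacent to left vertex u.
    Runs in O(V * E) time, the kind of polynomial bound the construction relies on."""
    match_right = {}  # right vertex -> matched left vertex

    def try_augment(u, visited):
        # Attempt to match left vertex u, re-routing earlier matches if needed.
        for v in adj[u]:
            if v in visited:
                continue
            visited.add(v)
            if v not in match_right or try_augment(match_right[v], visited):
                match_right[v] = u
                return True
        return False

    size = sum(try_augment(u, set()) for u in adj)
    return size, match_right

# Toy instance: 3 left vertices, 3 right vertices; a perfect matching exists.
size, matching = max_bipartite_matching({0: [0, 1], 1: [0], 2: [1, 2]})
print(size)  # 3
```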

  19. National Special Education Alliance.

    ERIC Educational Resources Information Center

    Pressman, Harvey

    1987-01-01

    The article describes the National Special Education Alliance, a network of parent-led organizations seeking to speed the delivery of computer technology to the disabled. Discussed are program origins, starting a local center, charter members of the alliance, benefits of Alliance membership, and the Alliance's relationship with Apple computer. (DB)

  20. High Performance Computing and Network Program. Hearing before the Subcommittee on Science of the Committee on Science, Space, and Technology, House of Representatives, One Hundred Third Congress, First Session.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. House Committee on Science, Space and Technology.

    The purpose of the hearing transcribed in this document was to obtain the views of representatives of network user and provider communities regarding the path the National Science Foundation (NSF) is taking for recompetition of the NSFNET computer network. In particular the committee was interested in the consistency of the evolution of NSFNET…

  1. MDA-image: an environment of networked desktop computers for teleradiology/pathology.

    PubMed

    Moffitt, M E; Richli, W R; Carrasco, C H; Wallace, S; Zimmerman, S O; Ayala, A G; Benjamin, R S; Chee, S; Wood, P; Daniels, P

    1991-04-01

    MDA-Image, a project of The University of Texas M. D. Anderson Cancer Center, is an environment of networked desktop computers for teleradiology/pathology. Radiographic film is digitized with a film scanner and histopathologic slides are digitized using a red, green, and blue (RGB) video camera connected to a microscope. Digitized images are stored on a data server connected to the institution's computer communication network (Ethernet) and can be displayed from authorized desktop computers connected to Ethernet. Images are digitized for cases presented at the Bone Tumor Management Conference, a multidisciplinary conference in which treatment options are discussed among clinicians, surgeons, radiologists, pathologists, radiotherapists, and medical oncologists. These radiographic and histologic images are shown on a large screen computer monitor during the conference. They are available for later review for follow-up or re-presentation.

  2. Automation of multi-agent control for complex dynamic systems in heterogeneous computational network

    NASA Astrophysics Data System (ADS)

    Oparin, Gennady; Feoktistov, Alexander; Bogdanova, Vera; Sidorov, Ivan

    2017-01-01

    The rapid progress of high-performance computing entails new challenges related to solving large scientific problems for various subject domains in a heterogeneous distributed computing environment (e.g., a network, Grid system, or Cloud infrastructure). Specialists in the field of parallel and distributed computing pay special attention to the scalability of applications for problem solving. Effective management of a scalable application in a heterogeneous distributed computing environment is still a non-trivial issue, and control systems that operate in networks especially relate to it. We propose a new approach to multi-agent management of scalable applications in a heterogeneous computational network. The fundamentals of our approach are the integrated use of conceptual programming, simulation modeling, network monitoring, multi-agent management, and service-oriented programming. We developed a special framework for automation of problem solving. Advantages of the proposed approach are demonstrated on the parametric synthesis example of the static linear regulator for complex dynamic systems. Benefits of the scalable application for solving this problem include automation of multi-agent control of the systems in a parallel mode with various degrees of detailed elaboration.

  3. Parallel computation with molecular-motor-propelled agents in nanofabricated networks.

    PubMed

    Nicolau, Dan V; Lard, Mercy; Korten, Till; van Delft, Falco C M J M; Persson, Malin; Bengtsson, Elina; Månsson, Alf; Diez, Stefan; Linke, Heiner; Nicolau, Dan V

    2016-03-08

    The combinatorial nature of many important mathematical problems, including nondeterministic-polynomial-time (NP)-complete problems, places a severe limitation on the problem size that can be solved with conventional, sequentially operating electronic computers. There have been significant efforts in conceiving parallel-computation approaches in the past, for example: DNA computation, quantum computation, and microfluidics-based computation. However, these approaches have not proven, so far, to be scalable and practical from a fabrication and operational perspective. Here, we report the foundations of an alternative parallel-computation system in which a given combinatorial problem is encoded into a graphical, modular network that is embedded in a nanofabricated planar device. Exploring the network in a parallel fashion using a large number of independent, molecular-motor-propelled agents then solves the mathematical problem. This approach uses orders of magnitude less energy than conventional computers, thus addressing issues related to power consumption and heat dissipation. We provide a proof-of-concept demonstration of such a device by solving, in a parallel fashion, the small instance {2, 5, 9} of the subset sum problem, which is a benchmark NP-complete problem. Finally, we discuss the technical advances necessary to make our system scalable with presently available technology.
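    For context, the subset sum instance {2, 5, 9} mentioned above can be checked by brute-force enumeration on a conventional computer; the nanofabricated device explores the same combinatorial space in parallel, with each agent trajectory corresponding to one subset. The sketch below is standard-library Python written for illustration, not the authors' method.

```python
from itertools import combinations

def subset_sums(values):
    """All sums achievable by choosing a subset of `values`.
    For n values there are 2**n subsets, which is the exponential
    blow-up that motivates parallel exploration."""
    sums = set()
    for r in range(len(values) + 1):
        for combo in combinations(values, r):
            sums.add(sum(combo))
    return sorted(sums)

# The benchmark instance from the paper: {2, 5, 9}.
print(subset_sums([2, 5, 9]))  # [0, 2, 5, 7, 9, 11, 14, 16]
```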

  4. Planning and management of cloud computing networks

    NASA Astrophysics Data System (ADS)

    Larumbe, Federico

    The evolution of the Internet has a great impact on a large part of the population. People use it to communicate, query information, receive news, work, and for entertainment. Its extraordinary usefulness as a communication medium made the number of applications and technological resources explode. However, that network expansion comes at the cost of an important power consumption. If the power consumption of telecommunication networks and data centers were considered as the power consumption of a country, it would rank 5th in the world. Furthermore, the number of servers in the world is expected to grow by a factor of 10 between 2013 and 2020. This context motivates us to study techniques and methods to allocate cloud computing resources in an optimal way with respect to cost, quality of service (QoS), power consumption, and environmental impact. The results we obtained from our test cases show that besides minimizing capital expenditures (CAPEX) and operational expenditures (OPEX), the response time can be reduced up to 6 times, power consumption by 30%, and CO2 emissions by a factor of 60. Cloud computing provides dynamic access to IT resources as a service. In this paradigm, programs are executed in servers connected to the Internet that users access from their computers and mobile devices. The first advantage of this architecture is to reduce the time of application deployment and improve interoperability, because a new user only needs a web browser and does not need to install software on local computers with specific operating systems. Second, applications and information are available from everywhere and with any device with an Internet access. Also, servers and IT resources can be dynamically allocated depending on the number of users and workload, a feature called elasticity. This thesis studies the resource management of cloud computing networks and is divided into three main stages. We start by analyzing the planning of cloud computing networks to get a

  5. Efficient shortest-path-tree computation in network routing based on pulse-coupled neural networks.

    PubMed

    Qu, Hong; Yi, Zhang; Yang, Simon X

    2013-06-01

    Shortest path tree (SPT) computation is a critical issue for routers using link-state routing protocols, such as the most commonly used Open Shortest Path First (OSPF) and Intermediate System to Intermediate System (IS-IS). Each router needs to recompute a new SPT rooted from itself whenever a change happens in the link state. Most commercial routers do this computation by deleting the current SPT and building a new one from scratch using static algorithms such as the Dijkstra algorithm. Such recomputation of an entire SPT is inefficient, which may consume a considerable amount of CPU time and result in a time delay in the network. Some dynamic updating methods using the information in the updated SPT have been proposed in recent years. However, there are still many limitations in those dynamic algorithms. In this paper, a new modified model of pulse-coupled neural networks (M-PCNNs) is proposed for the SPT computation. It is rigorously proved that the proposed model is capable of solving some optimization problems, such as the SPT. A static algorithm is proposed based on the M-PCNNs to compute the SPT efficiently for large-scale problems. In addition, a dynamic algorithm that makes use of the structure of the previously computed SPT is proposed, which significantly improves the efficiency of the algorithm. Simulation results demonstrate the effective and efficient performance of the proposed approach.
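    The baseline static computation the abstract describes (discard the old tree, rebuild with Dijkstra) can be sketched as follows; the toy graph is invented for illustration and the M-PCNN method itself is not reproduced here.

```python
import heapq

def shortest_path_tree(graph, root):
    """Static SPT via Dijkstra: returns (distance map, predecessor map).
    `graph` maps node -> {neighbor: link weight}; the predecessor map
    encodes the tree a link-state router would install."""
    dist = {root: 0}
    parent = {root: None}
    heap = [(0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                parent[v] = u
                heapq.heappush(heap, (nd, v))
    return dist, parent

# Toy 4-node topology; link weights are hypothetical.
g = {"a": {"b": 1, "c": 4}, "b": {"c": 2, "d": 6}, "c": {"d": 3}, "d": {}}
dist, parent = shortest_path_tree(g, "a")
print(dist)  # {'a': 0, 'b': 1, 'c': 3, 'd': 6}
```

A dynamic update scheme, by contrast, would reuse `parent` and touch only the subtree affected by a link-state change instead of rerunning the loop above from scratch.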

  6. Structural Reproduction of Social Networks in Computer-Mediated Communication Forums

    ERIC Educational Resources Information Center

    Stefanone, M. A.; Gay, G.

    2008-01-01

    This study explores the relationship between the structure of an existing social network and the structure of an emergent discussion-board network in an undergraduate university class. Thirty-one students were issued with laptop computers that remained in their possession for the duration of the semester. While using these machines, participants'…

  7. Building A National Network for Ocean and Climate Change Interpretation (Invited)

    NASA Astrophysics Data System (ADS)

    Spitzer, W.; Anderson, J.

    2013-12-01

    In the US, more than 1,500 informal science venues (science centers, museums, aquariums, zoos, nature centers, national parks) are visited annually by 61% of the population. Research shows that these visitors are receptive to learning about climate change, and expect these institutions to provide reliable information about environmental issues and solutions. Given that we spend less than 5% of our lifetime in a classroom, informal science venues play a critical role in shaping public understanding. Since 2007, the New England Aquarium (NEAq) has led a national effort to increase the capacity of informal science education institutions (ISEIs) to effectively communicate about the impacts of climate change on the oceans. NEAq is now leading the NSF-funded National Network for Ocean and Climate Change Interpretation (NNOCCI), partnering with the Association of Zoos and Aquariums, FrameWorks Institute, Woods Hole Oceanographic Institution, Monterey Bay Aquarium, and National Aquarium, with evaluation conducted by the New Knowledge Organization, Pennsylvania State University, and Ohio State University. NNOCCI's design is based on best practices in informal science learning, cognitive/social psychology, community and network building: Interpreters as Communication Strategists - Interpreters can serve not merely as educators disseminating information, but can also be leaders in influencing public perceptions, given their high level of commitment, knowledge, public trust, social networks, and visitor contact. Communities of Practice - Learning is a social activity that is created through engagement in a supportive community context. Social support is particularly important in addressing a complex, contentious and distressing subject. Diffusion of Innovation - Peer networks are of primary importance in spreading innovations. Leaders serve as 'early adopters' and influence others to achieve a critical mass of implementation. Over the next five years, NNOCCI will achieve a

  8. The Effectiveness of Using Virtual Laboratories to Teach Computer Networking Skills in Zambia

    ERIC Educational Resources Information Center

    Lampi, Evans

    2013-01-01

    The effectiveness of using virtual labs to train students in computer networking skills, when real equipment is limited or unavailable, is uncertain. The purpose of this study was to determine the effectiveness of using virtual labs to train students in the acquisition of computer network configuration and troubleshooting skills. The study was…

  9. Predictive Control of Networked Multiagent Systems via Cloud Computing.

    PubMed

    Liu, Guo-Ping

    2017-01-18

    This paper studies the design and analysis of networked multiagent predictive control systems via cloud computing. A cloud predictive control scheme for networked multiagent systems (NMASs) is proposed to achieve consensus and stability simultaneously and to compensate for network delays actively. The design of the cloud predictive controller for NMASs is detailed. The analysis of the cloud predictive control scheme gives the necessary and sufficient conditions of stability and consensus of closed-loop networked multiagent control systems. The proposed scheme is verified to characterize the dynamical behavior and control performance of NMASs through simulations. The outcome provides a foundation for the development of cooperative and coordinative control of NMASs and its applications.

  10. NIF ICCS network design and loading analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tietbohl, G; Bryant, R

    The National Ignition Facility (NIF) is housed within a large facility about the size of two football fields. The Integrated Computer Control System (ICCS) is distributed throughout this facility and requires the integration of about 40,000 control points and over 500 video sources. This integration is provided by approximately 700 control computers distributed throughout the NIF facility and a network that provides the communication infrastructure. A main control room houses a set of seven computer consoles providing operator access and control of the various distributed front-end processors (FEPs). There are also remote workstations distributed within the facility that provide operator console functions while personnel are testing and troubleshooting throughout the facility. The operator workstations communicate with the FEPs, which implement the localized control and monitoring functions. There are different types of FEPs for the various subsystems being controlled. This report describes the design of the NIF ICCS network and how it meets the traffic loads that are expected and the requirements of the Sub-System Design Requirements (SSDRs). This document supersedes the earlier reports entitled Analysis of the National Ignition Facility Network, dated November 6, 1996, and The National Ignition Facility Digital Video and Control Network, dated July 9, 1996. For an overview of the ICCS, refer to the document NIF Integrated Computer Controls System Description (NIF-3738).

  11. The Global Special Operations Forces Network from a Partner-Nation Perspective

    DTIC Science & Technology

    2014-12-01

    in networks vs. management of networks. Figure 17: A national SOF network with SOCOM as the manager of networks...context and are asked in the natural course of things; there is no predetermination of question topics or wording...descriptive section is the...struggles and challenges that occur naturally over time. As depicted in Figure 2, the network will constantly have to examine how it is evolving and, if

  12. Assessment of spare reliability for multi-state computer networks within tolerable packet unreliability

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Kuei; Huang, Cheng-Fu

    2015-04-01

    From a quality of service viewpoint, the transmission packet unreliability and transmission time are both critical performance indicators in a computer system when assessing the Internet quality for supervisors and customers. A computer system is usually modelled as a network topology where each branch denotes a transmission medium and each vertex represents a station of servers. Almost every branch has multiple capacities/states due to failure, partial failure, maintenance, etc. This type of network is known as a multi-state computer network (MSCN). This paper proposes an efficient algorithm that computes the system reliability, i.e., the probability that a specified amount of data can be sent through k (k ≥ 2) disjoint minimal paths within both the tolerable packet unreliability and time threshold. Furthermore, two routing schemes are established in advance to indicate the main and spare minimal paths to increase the system reliability (referred to as spare reliability). Thus, the spare reliability can be readily computed according to the routing scheme.
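    As a simplified illustration of why a spare routing scheme raises reliability, the sketch below treats each minimal path as an independent two-state (working/failed) component. This is a much cruder model than the paper's multi-state network with capacities and time thresholds, and the probabilities are hypothetical.

```python
def at_least_one_path(success_probs):
    """P(at least one of k independent paths delivers the data):
    the complement of every path failing."""
    p_all_fail = 1.0
    for p in success_probs:
        p_all_fail *= (1.0 - p)
    return 1.0 - p_all_fail

# Hypothetical per-path success probabilities: adding a spare path to a
# single main path raises delivery probability from 0.90 to 0.985.
main_only = at_least_one_path([0.90])
with_spare = at_least_one_path([0.90, 0.85])
print(round(main_only, 3), round(with_spare, 4))  # 0.9 0.985
```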

  13. Fusing literature and full network data improves disease similarity computation.

    PubMed

    Li, Ping; Nie, Yaling; Yu, Jingkai

    2016-08-30

    Identifying relatedness among diseases could help deepen understanding of the underlying pathogenic mechanisms of diseases and facilitate drug repositioning projects. A number of methods for computing disease similarity have been developed; however, none of them were designed to utilize information from the entire protein interaction network, using instead only those interactions involving disease-causing genes. Most previously published methods required gene-disease association data; unfortunately, many diseases still have very few or no associated genes, which impeded broad adoption of those methods. In this study, we propose a new method (MedNetSim) for computing disease similarity by integrating medical literature and the protein interaction network. MedNetSim consists of a network-based method (NetSim), which employs the entire protein interaction network, and a MEDLINE-based method (MedSim), which computes disease similarity by mining the biomedical literature. Among function-based methods, NetSim achieved the best performance. Its average AUC (area under the receiver operating characteristic curve) reached 95.2 %. MedSim, whose performance was even comparable to some function-based methods, acquired the highest average AUC among all semantic-based methods. Integration of MedSim and NetSim (MedNetSim) further improved the average AUC to 96.4 %. We further studied the effectiveness of different data sources. It was found that the quality of protein interaction data was more important than its volume. On the contrary, a higher volume of gene-disease association data was more beneficial, even with lower reliability. Utilizing a higher volume of disease-related gene data further improved the average AUC of MedNetSim and NetSim to 97.5 % and 96.7 %, respectively. Integrating biomedical literature and the protein interaction network can be an effective way to compute disease similarity. Lacking sufficient disease-related gene data, literature-based methods such as MedSim can
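    The AUC figures quoted above have a simple rank interpretation: the probability that a randomly chosen positive (related) pair scores higher than a randomly chosen negative pair, with ties counting one half. A sketch of that computation follows; the scores are made up for illustration.

```python
def auc(pos_scores, neg_scores):
    """AUC as P(positive outranks negative), ties counted 1/2.
    Equivalent to the area under the ROC curve."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical similarity scores for 3 related and 2 unrelated disease pairs:
# 5 of the 6 positive/negative comparisons rank correctly, so AUC = 5/6.
print(round(auc([0.9, 0.8, 0.4], [0.7, 0.3]), 4))  # 0.8333
```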

  14. Correlation between Academic and Skills-Based Tests in Computer Networks

    ERIC Educational Resources Information Center

    Buchanan, William

    2006-01-01

    Computing-related programmes and modules have many problems, especially related to large class sizes, large-scale plagiarism, module franchising, and an increased requirement from students for increased amounts of hands-on, practical work. This paper presents a practical computer networks module which uses a mixture of online examinations and a…

  15. The NASA Science Internet: An integrated approach to networking

    NASA Technical Reports Server (NTRS)

    Rounds, Fred

    1991-01-01

    An integrated approach to building a networking infrastructure is an absolute necessity for meeting the multidisciplinary science networking requirements of the Office of Space Science and Applications (OSSA) science community. These networking requirements include communication connectivity between computational resources, databases, and library systems, as well as to other scientists and researchers around the world. A consolidated networking approach allows strategic use of the existing science networking within the Federal government, and it provides networking capability that takes into consideration national and international trends towards multivendor and multiprotocol service. It also offers a practical vehicle for optimizing costs and maximizing performance. Finally, and perhaps most important to the development of high-speed computing, an integrated network constitutes a focus for phasing to the National Research and Education Network (NREN). The NASA Science Internet (NSI) program, established in mid 1988, is structured to provide just such an integrated network. A description of the NSI is presented.

  16. Practical recommendations for strengthening national and regional laboratory networks in Africa in the Global Health Security era.

    PubMed

    Best, Michele; Sakande, Jean

    2016-01-01

    The role of national health laboratories in support of public health response has expanded beyond laboratory testing to include a number of other core functions such as emergency response, training and outreach, communications, laboratory-based surveillance and data management. These functions can only be accomplished by an efficient and resilient national laboratory network that includes public health, reference, clinical and other laboratories. It is a primary responsibility of the national health laboratory in the Ministry of Health to develop and maintain the national laboratory network in the country. In this article, we present practical recommendations based on 17 years of network development experience for the development of effective national laboratory networks. These recommendations and examples of current laboratory networks are provided to facilitate laboratory network development in other states. The development of resilient, integrated laboratory networks will enhance each state's public health system and is critical to the development of a robust national laboratory response network to meet global health security threats.

  17. Practical recommendations for strengthening national and regional laboratory networks in Africa in the Global Health Security era

    PubMed Central

    2016-01-01

    The role of national health laboratories in support of public health response has expanded beyond laboratory testing to include a number of other core functions such as emergency response, training and outreach, communications, laboratory-based surveillance and data management. These functions can only be accomplished by an efficient and resilient national laboratory network that includes public health, reference, clinical and other laboratories. It is a primary responsibility of the national health laboratory in the Ministry of Health to develop and maintain the national laboratory network in the country. In this article, we present practical recommendations based on 17 years of network development experience for the development of effective national laboratory networks. These recommendations and examples of current laboratory networks are provided to facilitate laboratory network development in other states. The development of resilient, integrated laboratory networks will enhance each state’s public health system and is critical to the development of a robust national laboratory response network to meet global health security threats. PMID:28879137

  18. Critical phenomena in communication/computation networks with various topologies and suboptimal to optimal resource allocation

    NASA Astrophysics Data System (ADS)

    Cogoni, Marco; Busonera, Giovanni; Anedda, Paolo; Zanetti, Gianluigi

    2015-01-01

    We generalize previous studies on critical phenomena in communication networks [1,2] by adding computational capabilities to the nodes. In our model, a set of tasks with random origin, destination and computational structure is distributed on a computational network, modeled as a graph. By varying the temperature of a Metropolis Monte Carlo simulation, we explore the global latency for an optimal to suboptimal resource assignment at a given time instant. By computing the two-point correlation function for the local overload, we study the behavior of the correlation distance (both for links and nodes) while approaching the congested phase: a transition from peaked to spread g(r) is seen above a critical (Monte Carlo) temperature Tc. The average latency trend of the system is predicted by averaging over several network traffic realizations while maintaining spatially detailed information for each node: a sharp decrease of performance is found above Tc independently of the workload. The globally optimized computational resource allocation and network routing define a baseline for a future comparison of the transition behavior with respect to existing routing strategies [3,4] for different network topologies.
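
    The Metropolis step described above can be sketched as follows; the toy cost model (quadratic node load as a congestion proxy), network size, and temperature are our own assumptions, not the authors' model:

```python
# Illustrative sketch (not the authors' code): Metropolis Monte Carlo search
# over task-to-node assignments, accepting worse-latency moves with
# probability exp(-dE/T). Low T approaches the optimal allocation;
# high T yields suboptimal ones.
import math
import random

random.seed(0)
n_nodes, n_tasks = 4, 12

def latency(assign):
    """Toy global-latency proxy: sum of squared node loads (congestion)."""
    loads = [0] * n_nodes
    for node in assign:
        loads[node] += 1
    return sum(l * l for l in loads)

def metropolis(assign, T, steps=5000):
    E = latency(assign)
    for _ in range(steps):
        i = random.randrange(n_tasks)
        old = assign[i]
        assign[i] = random.randrange(n_nodes)   # propose moving one task
        dE = latency(assign) - E
        if dE <= 0 or random.random() < math.exp(-dE / T):
            E += dE             # accept the move
        else:
            assign[i] = old     # reject: restore previous assignment
    return E

result = metropolis([0] * n_tasks, T=0.1)  # low T: near-optimal balance
print(result)
```

    With 12 tasks on 4 nodes, the balanced optimum costs 4 × 3² = 36; raising T lets congested (higher-cost) configurations survive.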

  19. Deterministic Function Computation with Chemical Reaction Networks*

    PubMed Central

    Chen, Ho-Lin; Doty, David; Soloveichik, David

    2013-01-01

    Chemical reaction networks (CRNs) formally model chemistry in a well-mixed solution. CRNs are widely used to describe information processing occurring in natural cellular regulatory networks, and with upcoming advances in synthetic biology, CRNs are a promising language for the design of artificial molecular control circuitry. Nonetheless, despite the widespread use of CRNs in the natural sciences, the range of computational behaviors exhibited by CRNs is not well understood. CRNs have been shown to be efficiently Turing-universal (i.e., able to simulate arbitrary algorithms) when allowing for a small probability of error. CRNs that are guaranteed to converge on a correct answer, on the other hand, have been shown to decide only the semilinear predicates (a multi-dimensional generalization of “eventually periodic” sets). We introduce the notion of function, rather than predicate, computation by representing the output of a function f : ℕk → ℕl by a count of some molecular species, i.e., if the CRN starts with x1, …, xk molecules of some “input” species X1, …, Xk, the CRN is guaranteed to converge to having f(x1, …, xk) molecules of the “output” species Y1, …, Yl. We show that a function f : ℕk → ℕl is deterministically computed by a CRN if and only if its graph {(x, y) ∈ ℕk × ℕl ∣ f(x) = y} is a semilinear set. Finally, we show that each semilinear function f (a function whose graph is a semilinear set) can be computed by a CRN on input x in expected time O(polylog ∥x∥1). PMID:25383068
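
    A minimal toy instance of deterministic CRN function computation (our own encoding, not taken from the paper): the semilinear function f(x1, x2) = x1 + x2 is computed by the two reactions X1 → Y and X2 → Y, in whatever order they fire:

```python
# Toy CRN simulator: fire applicable reactions in random order until none
# applies. The output count of Y converges to x1 + x2 regardless of order.
import random

random.seed(1)

def run_crn(counts, reactions):
    """counts: species -> molecule count; reactions: (consumed, produced)."""
    while True:
        applicable = [r for r in reactions
                      if all(counts[s] >= n for s, n in r[0].items())]
        if not applicable:
            return counts
        consumed, produced = random.choice(applicable)
        for s, n in consumed.items():
            counts[s] -= n
        for s, n in produced.items():
            counts[s] += n

# X1 -> Y and X2 -> Y
reactions = [({"X1": 1}, {"Y": 1}), ({"X2": 1}, {"Y": 1})]
final = run_crn({"X1": 3, "X2": 4, "Y": 0}, reactions)
print(final["Y"])  # 7, i.e. f(3, 4) = 3 + 4
```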

  20. The ADVANCE network: accelerating data value across a national community health center network

    PubMed Central

    DeVoe, Jennifer E; Gold, Rachel; Cottrell, Erika; Bauer, Vance; Brickman, Andrew; Puro, Jon; Nelson, Christine; Mayer, Kenneth H; Sears, Abigail; Burdick, Tim; Merrell, Jonathan; Matthews, Paul; Fields, Scott

    2014-01-01

    The ADVANCE (Accelerating Data Value Across a National Community Health Center Network) clinical data research network (CDRN) is led by the OCHIN Community Health Information Network in partnership with Health Choice Network and Fenway Health. The ADVANCE CDRN will ‘horizontally’ integrate outpatient electronic health record data for over one million federally qualified health center patients, and ‘vertically’ integrate hospital, health plan, and community data for these patients, often under-represented in research studies. Patient investigators, community investigators, and academic investigators with diverse expertise will work together to meet project goals related to data integration, patient engagement and recruitment, and the development of streamlined regulatory policies. By enhancing the data and research infrastructure of participating organizations, the ADVANCE CDRN will serve as a ‘community laboratory’ for including disadvantaged and vulnerable patients in patient-centered outcomes research that is aligned with the priorities of patients, clinics, and communities in our network. PMID:24821740

  1. Development of a UNIX network compatible reactivity computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanchez, R.F.; Edwards, R.M.

    1996-12-31

    A state-of-the-art UNIX network compatible controller and UNIX host workstation with MATLAB/SIMULINK software were used to develop, implement, and validate a digital reactivity calculation. An objective of the development was to determine why the reactivity output of a Macintosh-based reactivity computer drifted intolerably.

  2. Mississippi Curriculum Framework for Computer Information Systems Technology. Computer Information Systems Technology (Program CIP: 52.1201--Management Information Systems & Business Data). Computer Programming (Program CIP: 52.1201). Network Support (Program CIP: 52.1290--Computer Network Support Technology). Postsecondary Programs.

    ERIC Educational Resources Information Center

    Mississippi Research and Curriculum Unit for Vocational and Technical Education, State College.

    This document, which is intended for use by community and junior colleges throughout Mississippi, contains curriculum frameworks for two programs in the state's postsecondary-level computer information systems technology cluster: computer programming and network support. Presented in the introduction are program descriptions and suggested course…

  3. APINetworks: A general API for the treatment of complex networks in arbitrary computational environments

    NASA Astrophysics Data System (ADS)

    Niño, Alfonso; Muñoz-Caro, Camelia; Reyes, Sebastián

    2015-11-01

    The last decade witnessed great development in the structural and dynamic study of complex systems described as networks of elements. Such systems can be described as a set of, possibly, heterogeneous entities or agents (the network nodes) interacting in, possibly, different ways (defining the network edges). In this context, it is of practical interest to model and handle not only static and homogeneous networks but also dynamic, heterogeneous ones. Depending on the size and type of the problem, these networks may require different computational approaches involving sequential, parallel or distributed systems with or without the use of disk-based data structures. In this work, we develop an Application Programming Interface (APINetworks) for the modeling and treatment of general networks in arbitrary computational environments. To minimize dependency between components, we decouple the network structure from its function, using different packages to group sets of related tasks. The structural package, the one in charge of building and handling the network structure, is the core element of the system. Here, we focus on this structural component of the API. We apply an object-oriented approach that makes use of inheritance and polymorphism. In this way, we can model static and dynamic networks with heterogeneous elements in the nodes and heterogeneous interactions in the edges. In addition, this approach permits a unified treatment of different computational environments. Tests performed on a C++11 version of the structural package show that, on current standard computers, the system can handle, in main memory, directed and undirected linear networks formed by tens of millions of nodes and edges. Our results compare favorably to those of existing tools.
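
    The object-oriented idea (inheritance and polymorphism over heterogeneous node types) can be sketched as below; the class and method names are hypothetical, not the actual APINetworks API:

```python
# Sketch: one container holds heterogeneous node types; a polymorphic
# describe() method handles each type uniformly.
class Node:
    def __init__(self, ident):
        self.ident = ident
    def describe(self):
        return f"node {self.ident}"

class AgentNode(Node):              # heterogeneous node type via inheritance
    def __init__(self, ident, behavior):
        super().__init__(ident)
        self.behavior = behavior
    def describe(self):             # polymorphic override
        return f"agent {self.ident} ({self.behavior})"

class Edge:
    def __init__(self, src, dst, weight=1.0):
        self.src, self.dst, self.weight = src, dst, weight

class Network:
    def __init__(self):
        self.nodes, self.edges = {}, []
    def add_node(self, node):
        self.nodes[node.ident] = node
    def add_edge(self, src, dst, **kw):
        self.edges.append(Edge(src, dst, **kw))

net = Network()
net.add_node(Node("a"))
net.add_node(AgentNode("b", behavior="random-walk"))
net.add_edge("a", "b", weight=2.5)
print([n.describe() for n in net.nodes.values()])
```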

  4. Evolving plans for the USA National Phenology Network

    USGS Publications Warehouse

    Betancourt, Julio L.; Schwartz, Mark D.; Breshears, David D.; Brewer, Carol A.; Frazer, Gary; Gross, John E.; Mazer, Susan J.; Reed, Bradley C.; Wilson, Bruce E.

    2007-01-01

    Phenology is the study of periodic plant and animal life cycle events, how these are influenced by seasonal and interannual variations in climate, and how they modulate the abundance, diversity, and interactions of organisms. The USA National Phenology Network (USA-NPN) is currently being organized to engage federal agencies, environmental networks and field stations, educational institutions, and citizen scientists. The first USA-NPN planning workshop was held August 2005, in Tucson, Ariz. (Betancourt et al. [2005]; http://www.uwm.edu/Dept/Geography/npn/; by 1 June 2007, also see http://www.usanpn.org). With sponsorship from the U.S. National Science Foundation, the U.S. Geological Survey (USGS), the U.S. Fish and Wildlife Service, and NASA, the second USA-NPN planning workshop was held at the University of Wisconsin-Milwaukee on 10–12 October 2006 to (1) develop lists of target species and observation protocols; (2) identify existing networks that could comprise the backbone of nationwide observations by 2008; (3) develop opportunities for education, citizen science, and outreach beginning in spring 2007; (4) design strategies for implementing the remote sensing component of USA-NPN; and (5) draft a data management and cyberinfrastructure plan.

  5. The USA National Phenology Network: A national science and monitoring program for understanding climate change

    NASA Astrophysics Data System (ADS)

    Weltzin, J.

    2009-04-01

    Patterns of phenology for plants and animals control ecosystem processes, determine land surface properties, control biosphere-atmosphere interactions, and affect food production, health, conservation, and recreation. Although phenological data and models have applications related to scientific research, education and outreach, agriculture, tourism and recreation, human health, and natural resource conservation and management, until recently there was no coordinated effort to understand phenology at the national scale in the United States. The USA National Phenology Network (USA-NPN; www.usanpn.org), established in 2007, is an emerging and exciting partnership between federal agencies, the academic community, and the general public to establish a national science and monitoring initiative focused on phenology. The first year of operation of USA-NPN produced many new phenology products and venues for phenology research and citizen involvement. Products include a new web-site (www.usanpn.org) that went live in June 2008; the web-site includes a tool for on-line data entry, and serves as a clearinghouse for products and information to facilitate research and communication related to phenology. The new core Plant Phenology Program includes profiles for 200 vetted local, regional, and national plant species with descriptions and (BBCH-consistent) monitoring protocols, as well as templates for addition of new species. A partnership program describes how other monitoring networks can engage with USA-NPN to collect, manage or disseminate phenological information for science, health, education, management or predictive service applications. Project BudBurst, a USA-NPN field campaign for citizen scientists, went live in February 2008, and now includes over 3000 registered observers monitoring 4000 plants across the nation. 
For 2009 and beyond, we will initiate a new Wildlife Phenology Program, create an on-line clearing-house for phenology education and outreach, strengthen

  6. Parallel Computation of Unsteady Flows on a Network of Workstations

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Parallel computation of unsteady flows requires significant computational resources. The utilization of a network of workstations seems an efficient solution to the problem, allowing large problems to be treated at a reasonable cost. This approach requires the solution of several problems: 1) the partitioning and distribution of the problem over a network of workstations, 2) efficient communication tools, 3) managing the system efficiently for a given problem. Of course, there is also the question of the efficiency of any given numerical algorithm on such a computing system. The NPARC code was chosen as a sample application. For the explicit version of the NPARC code both two- and three-dimensional problems were studied, and both steady and unsteady problems were investigated. The issues studied as part of the research program were: 1) how to distribute the data between the workstations, 2) how to compute and how to communicate at each node efficiently, 3) how to balance the load distribution. In the following, a summary of these activities is presented. Details of the work have been presented and published as referenced.
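
    Issue 1), data distribution, can be illustrated with a simple nearly-balanced block partition; this is a generic sketch, not NPARC's actual domain decomposition:

```python
# Sketch: split N grid cells over P workstations so loads differ by at
# most one cell (the first `extra` workers each take one extra cell).
def block_partition(n_cells, n_workers):
    """Return (start, end) index ranges, one per worker, nearly balanced."""
    base, extra = divmod(n_cells, n_workers)
    ranges, start = [], 0
    for w in range(n_workers):
        size = base + (1 if w < extra else 0)
        ranges.append((start, start + size))
        start += size
    return ranges

print(block_partition(10, 3))  # [(0, 4), (4, 7), (7, 10)]
```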

  7. Data from selected U.S. Geological Survey national stream water-quality monitoring networks (WQN) on CD-ROM

    USGS Publications Warehouse

    Alexander, R.B.; Ludtke, A.S.; Fitzgerald, K.K.; Schertz, T.L.

    1996-01-01

    records of important changes in network sample collection and laboratory analytical methods, water reference sample data for estimating laboratory measurement bias and variability for 34 dissolved constituents for the period 1985-95, discussions of statistical methods for using water reference sample data to evaluate the accuracy of network stream water-quality data, and a bibliography of scientific investigations using national network data and other publications relevant to the networks. The data structure of the CD-ROMs is designed to allow users to efficiently enter the water-quality data into user-supplied software packages including statistical analysis, modeling, or geographic information systems. On one disc, all data are stored in ASCII form accessible from any computer system with a CD-ROM drive. The data also can be accessed using DOS-based retrieval software supplied on a second disc. This software supports logical queries of the water-quality data based on constituent concentrations, sample-collection date, river name, station name, county, state, hydrologic unit number, and 1990 population and 1987 land-cover characteristics for station watersheds. User-selected data may be output in a variety of formats including dBASE, flat ASCII, delimited ASCII, or fixed-field for subsequent use in other software packages.

  8. Optimization of analytical laboratory work using computer networking and databasing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Upp, D.L.; Metcalf, R.A.

    1996-06-01

    The Health Physics Analysis Laboratory (HPAL) performs around 600,000 analyses for radioactive nuclides each year at Los Alamos National Laboratory (LANL). Analysis matrices vary from nasal swipes, air filters, work area swipes, liquids, to the bottoms of shoes and cat litter. HPAL uses 8 liquid scintillation counters, 8 gas proportional counters, and 9 high purity germanium detectors in 5 laboratories to perform these analyses. HPAL has developed a computer network between the labs and software to produce analysis results. The software and hardware package includes barcode sample tracking, log-in, chain of custody, analysis calculations, analysis result printing, and utility programs. All data are written to a database, mirrored on a central server, and eventually written to CD-ROM to provide for online historical results. This system has greatly reduced the work required to provide for analysis results as well as improving the quality of the work performed.

  9. Data from selected U.S. Geological Survey National Stream Water-Quality Networks (WQN)

    USGS Publications Warehouse

    Alexander, Richard B.; Slack, J.R.; Ludtke, A.S.; Fitzgerald, K.K.; Schertz, T.L.; Briel, L.I.; Buttleman, K.P.

    1996-01-01

    This CD-ROM set contains data from two USGS national stream water-quality networks, the Hydrologic Benchmark Network (HBN) and the National Stream Quality Accounting Network (NASQAN), operated during the past 30 years. These networks were established to provide national and regional descriptions of stream water-quality conditions and trends, based on uniform monitoring of selected watersheds throughout the United States, and to improve our understanding of the effects of the natural environment and human activities on water quality. The HBN, consisting of 63 relatively small, minimally disturbed watersheds, provides data for investigating naturally induced changes in streamflow and water quality and the effects of airborne substances on water quality. NASQAN, consisting of 618 larger, more culturally influenced watersheds, provides information for tracking water-quality conditions in major U.S. rivers and streams.

  10. Neural network computer simulation of medical aerosols.

    PubMed

    Richardson, C J; Barlow, D J

    1996-06-01

    Preliminary investigations have been conducted to assess the potential for using artificial neural networks to simulate aerosol behaviour, with a view to employing this type of methodology in the evaluation and design of pulmonary drug-delivery systems. Details are presented of the general purpose software developed for these tasks; it implements a feed-forward back-propagation algorithm with weight decay and connection pruning, the user having complete run-time control of the network architecture and mode of training. A series of exploratory investigations is then reported in which different network structures and training strategies are assessed in terms of their ability to simulate known patterns of fluid flow in simple model systems. The first of these involves simulations of cellular automata-generated data for fluid flow through a partially obstructed two-dimensional pipe. The artificial neural networks are shown to be highly successful in simulating the behaviour of this simple linear system, but with important provisos relating to the information content of the training data and the criteria used to judge when the network is properly trained. A second set of investigations is then reported in which similar networks are used to simulate patterns of fluid flow through aerosol generation devices, using training data furnished through rigorous computational fluid dynamics modelling. These more complex three-dimensional systems are modelled with equal success. It is concluded that carefully tailored, well trained networks could provide valuable tools not just for predicting but also for analysing the spatial dynamics of pharmaceutical aerosols.
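
    The core update of such a network, gradient descent with an L2 weight-decay term, can be sketched on a single logistic unit; the hyperparameters and toy AND-gate data are assumptions for illustration, not the paper's aerosol setup:

```python
# Sketch: stochastic gradient descent on one sigmoid unit with weight decay.
# For cross-entropy loss, dLoss/dz = (y - target), so the weight gradient is
# (y - target) * x plus the decay term decay * w.
import math

def train(data, lr=0.5, decay=1e-3, epochs=200):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in data:
            z = w[0] * x[0] + w[1] * x[1] + b
            y = 1.0 / (1.0 + math.exp(-z))          # sigmoid activation
            err = y - target
            for i in range(2):
                w[i] -= lr * (err * x[i] + decay * w[i])  # weight-decay term
            b -= lr * err
    return w, b

# Toy AND-gate training data
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(data)
predict = lambda x: 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
print(predict((1, 1)) > 0.5, predict((0, 1)) < 0.5)
```

    Weight decay shrinks weights toward zero each step, a simple regularizer against overfitting; connection pruning (as in the paper) would then drop weights that decay to near zero.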

  11. Computational study of noise in a large signal transduction network.

    PubMed

    Intosalmi, Jukka; Manninen, Tiina; Ruohonen, Keijo; Linne, Marja-Leena

    2011-06-21

    Biochemical systems are inherently noisy due to the discrete reaction events that occur in a random manner. Although noise is often perceived as a disturbing factor, the system might actually benefit from it. In order to understand the role of noise better, its quality must be studied in a quantitative manner. Computational analysis and modeling play an essential role in this demanding endeavor. We implemented a large nonlinear signal transduction network combining protein kinase C, mitogen-activated protein kinase, phospholipase A2, and β isoform of phospholipase C networks. We simulated the network in 300 different cellular volumes using the exact Gillespie stochastic simulation algorithm and analyzed the results in both the time and frequency domain. In order to perform simulations in a reasonable time, we used modern parallel computing techniques. The analysis revealed that time and frequency domain characteristics depend on the system volume. The simulation results also indicated that there are several kinds of noise processes in the network, all of them representing different kinds of low-frequency fluctuations. In the simulations, the power of noise decreased on all frequencies when the system volume was increased. We concluded that basic frequency domain techniques can be applied to the analysis of simulation results produced by the Gillespie stochastic simulation algorithm. This approach is suited not only to the study of fluctuations but also to the study of pure noise processes. Noise seems to have an important role in biochemical systems and its properties can be numerically studied by simulating the reacting system in different cellular volumes. Parallel computing techniques make it possible to run massive simulations in hundreds of volumes and, as a result, accurate statistics can be obtained from computational studies. © 2011 Intosalmi et al; licensee BioMed Central Ltd.
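
    The Gillespie direct method named above can be sketched on a one-reaction toy system (A → B at rate k); the actual study simulates a far larger network across 300 volumes:

```python
# Minimal Gillespie stochastic simulation (direct method) for A -> B.
# With one reaction channel, the propensity is k * (count of A) and the
# waiting time to the next event is exponentially distributed.
import random

random.seed(42)

def gillespie(a0, k, t_end):
    t, a, b = 0.0, a0, 0
    while t < t_end and a > 0:
        propensity = k * a
        t += random.expovariate(propensity)   # time to next reaction event
        if t < t_end:
            a, b = a - 1, b + 1               # fire A -> B
    return a, b

a, b = gillespie(a0=100, k=1.0, t_end=10.0)
print(a + b)  # molecule count is conserved: 100
```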

  12. Visualization Techniques for Computer Network Defense

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beaver, Justin M; Steed, Chad A; Patton, Robert M

    2011-01-01

    Effective visual analysis of computer network defense (CND) information is challenging due to the volume and complexity of both the raw and analyzed network data. A typical CND is comprised of multiple niche intrusion detection tools, each of which performs network data analysis and produces a unique alerting output. The state-of-the-practice in the situational awareness of CND data is the prevalent use of custom-developed scripts by Information Technology (IT) professionals to retrieve, organize, and understand potential threat events. We propose a new visual analytics framework, called the Oak Ridge Cyber Analytics (ORCA) system, for CND data that allows an operator to interact with all detection tool outputs simultaneously. Aggregated alert events are presented in multiple coordinated views with timeline, cluster, and swarm model analysis displays. These displays are complemented with both supervised and semi-supervised machine learning classifiers. The intent of the visual analytics framework is to improve CND situational awareness, to enable an analyst to quickly navigate and analyze thousands of detected events, and to combine sophisticated data analysis techniques with interactive visualization such that patterns of anomalous activities may be more easily identified and investigated.

  13. Visualization techniques for computer network defense

    NASA Astrophysics Data System (ADS)

    Beaver, Justin M.; Steed, Chad A.; Patton, Robert M.; Cui, Xiaohui; Schultz, Matthew

    2011-06-01

    Effective visual analysis of computer network defense (CND) information is challenging due to the volume and complexity of both the raw and analyzed network data. A typical CND is comprised of multiple niche intrusion detection tools, each of which performs network data analysis and produces a unique alerting output. The state-of-the-practice in the situational awareness of CND data is the prevalent use of custom-developed scripts by Information Technology (IT) professionals to retrieve, organize, and understand potential threat events. We propose a new visual analytics framework, called the Oak Ridge Cyber Analytics (ORCA) system, for CND data that allows an operator to interact with all detection tool outputs simultaneously. Aggregated alert events are presented in multiple coordinated views with timeline, cluster, and swarm model analysis displays. These displays are complemented with both supervised and semi-supervised machine learning classifiers. The intent of the visual analytics framework is to improve CND situational awareness, to enable an analyst to quickly navigate and analyze thousands of detected events, and to combine sophisticated data analysis techniques with interactive visualization such that patterns of anomalous activities may be more easily identified and investigated.

  14. Service-oriented Software Defined Optical Networks for Cloud Computing

    NASA Astrophysics Data System (ADS)

    Liu, Yuze; Li, Hui; Ji, Yuefeng

    2017-10-01

    With the development of big data and cloud computing technology, the traditional software-defined network is facing new challenges (e.g., ubiquitous accessibility, higher bandwidth, more flexible management and greater security). This paper proposes a new service-oriented software-defined optical network architecture comprising a resource layer, a service abstraction layer, a control layer and an application layer. We then describe the corresponding service-provisioning method, in which a distinct service ID identifies the service a device can offer. Finally, we experimentally demonstrate that the proposed method can transmit different services based on their service IDs in the service-oriented software-defined optical network.

  15. Including Internet insurance as part of a hospital computer network security plan.

    PubMed

    Riccardi, Ken

    2002-01-01

    Cyber attacks on a hospital's computer network are a new crime to be reckoned with. Should your hospital consider Internet insurance? The author explains this new phenomenon and presents a risk assessment for determining network vulnerabilities.

  16. Computing Smallest Intervention Strategies for Multiple Metabolic Networks in a Boolean Model

    PubMed Central

    Lu, Wei; Song, Jiangning; Akutsu, Tatsuya

    2015-01-01

    This article considers the problem whereby, given two metabolic networks N1 and N2, a set of source compounds, and a set of target compounds, we must find the minimum set of reactions whose removal (knockout) ensures that the target compounds are not producible in N1 but are producible in N2. Similar studies exist for the problem of finding the minimum knockout with the smallest side effect for a single network. However, if technologies of external perturbations are advanced in the near future, it may be important to develop methods of computing the minimum knockout for multiple networks (MKMN). Flux balance analysis (FBA) is efficient if a well-polished model is available. However, that is not always the case. Therefore, in this article, we study MKMN in Boolean models and an elementary mode (EM)-based model. Integer linear programming (ILP)-based methods are developed for these models, since MKMN is NP-complete for both the Boolean model and the EM-based model. Computer experiments are conducted with metabolic networks of Clostridium perfringens SM101 and Bifidobacterium longum DJO10A, respectively known as bad bacteria and good bacteria for the human intestine. The results show that larger networks are more likely to have MKMN solutions. However, solving for these larger networks takes a very long time, and often the computation cannot be completed. This is reasonable, because small networks do not have many alternative pathways, making it difficult to satisfy the MKMN condition, whereas in large networks the number of candidate solutions explodes. Our developed software minFvskO is available online. PMID:25684199

  17. Computing smallest intervention strategies for multiple metabolic networks in a boolean model.

    PubMed

    Lu, Wei; Tamura, Takeyuki; Song, Jiangning; Akutsu, Tatsuya

    2015-02-01

    This article considers the problem whereby, given two metabolic networks N1 and N2, a set of source compounds, and a set of target compounds, we must find the minimum set of reactions whose removal (knockout) ensures that the target compounds are not producible in N1 but are producible in N2. Similar studies exist for the problem of finding the minimum knockout with the smallest side effect for a single network. However, if technologies of external perturbations are advanced in the near future, it may be important to develop methods of computing the minimum knockout for multiple networks (MKMN). Flux balance analysis (FBA) is efficient if a well-polished model is available. However, that is not always the case. Therefore, in this article, we study MKMN in Boolean models and an elementary mode (EM)-based model. Integer linear programming (ILP)-based methods are developed for these models, since MKMN is NP-complete for both the Boolean model and the EM-based model. Computer experiments are conducted with metabolic networks of Clostridium perfringens SM101 and Bifidobacterium longum DJO10A, respectively known as bad bacteria and good bacteria for the human intestine. The results show that larger networks are more likely to have MKMN solutions. However, solving for these larger networks takes a very long time, and often the computation cannot be completed. This is reasonable, because small networks do not have many alternative pathways, making it difficult to satisfy the MKMN condition, whereas in large networks the number of candidate solutions explodes. Our developed software minFvskO is available online.
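
    The MKMN condition can be illustrated by brute force on a tiny Boolean model (toy networks of our own devising; the article solves realistic instances with ILP):

```python
# Sketch: find a minimum reaction knockout making target t non-producible
# in N1 while keeping it producible in N2, by trying knockout sets of
# increasing size. Boolean producibility: a compound is producible once all
# substrates of some producing reaction are available.
from itertools import combinations

def producible(sources, reactions, target):
    have = set(sources)
    changed = True
    while changed:
        changed = False
        for subs, prod in reactions:
            if prod not in have and set(subs) <= have:
                have.add(prod)
                changed = True
    return target in have

# Reactions are (substrates, product) pairs; N1 is a sub-network of N2 here.
r0, r1, r2 = (("s",), "a"), (("a",), "t"), (("s",), "t")
N1 = [r0, r1]          # t producible only via intermediate a
N2 = [r0, r1, r2]      # t also producible directly from source s
sources, target = ["s"], "t"

hit = None
for k in range(len(N1) + 1):            # knockout sets of increasing size
    for ko in combinations(N1, k):
        keep1 = [r for r in N1 if r not in ko]
        keep2 = [r for r in N2 if r not in ko]
        if (not producible(sources, keep1, target)
                and producible(sources, keep2, target)):
            hit = set(ko)
            break
    if hit is not None:
        break
print(k, sorted(hit))  # smallest knockout size and the removed reactions
```

    Here removing the single reaction s → a suffices: N1 loses its only route to t, while N2 still produces t directly from s. The exponential subset search is exactly why the authors turn to ILP.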

  18. Open College Networks and National Vocational Qualifications. A Development Paper.

    ERIC Educational Resources Information Center

    National Council for Vocational Qualifications, London (England).

    Both the National Council for Vocational Qualifications (NCVQ) and Open College Networks or Federations (OCNs) have the objective of creating nationally coherent frameworks of qualification and training in Britain. However, they are very different organizations and have distinct, though potentially complementary, roles. Issues where the two…

  19. Computational modeling of spiking neural network with learning rules from STDP and intrinsic plasticity

    NASA Astrophysics Data System (ADS)

    Li, Xiumin; Wang, Wei; Xue, Fangzheng; Song, Yongduan

    2018-02-01

    Recently there has been continuously increasing interest in building up computational models of spiking neural networks (SNN), such as the Liquid State Machine (LSM). The biologically inspired self-organized neural networks with neural plasticity can enhance the capability of computational performance, with the characteristic features of dynamical memory and recurrent connection cycles which distinguish them from the more widely used feedforward neural networks. Although a variety of computational models for brain-like learning and information processing have been proposed, the modeling of self-organized neural networks with multi-neural plasticity is still an important open challenge. The main difficulties lie in the interplay among different forms of neural plasticity rules and in understanding how the structures and dynamics of neural networks shape computational performance. In this paper, we propose a novel approach to develop models of LSM with a biologically inspired self-organizing network based on two neural plasticity learning rules. The connectivity among excitatory neurons is adapted by spike-timing-dependent plasticity (STDP) learning; meanwhile, the degrees of neuronal excitability are regulated to maintain a moderate average activity level by another learning rule: intrinsic plasticity (IP). Our study shows that LSM with STDP+IP performs better than LSM with a random SNN or an SNN obtained by STDP alone. The noticeable improvement with the proposed method is due to the competition among neurons being better reflected in the developed SNN model, and to relevant dynamic information being more effectively encoded and processed by its learning and self-organizing mechanism. This result gives insight into the optimization of computational models of spiking neural networks with neural plasticity.

  20. Building National Capacity for Climate Change Interpretation: The Role of Leaders, Partnerships, and Networks

    NASA Astrophysics Data System (ADS)

    Spitzer, W.

    2015-12-01

    Since 2007, the New England Aquarium has led a national effort to increase the capacity of informal science venues to effectively communicate about climate change. We are now leading the NSF-funded National Network for Ocean and Climate Change Interpretation (NNOCCI), partnering with the Association of Zoos and Aquariums, FrameWorks Institute, Woods Hole Oceanographic Institution, Monterey Bay Aquarium, and National Aquarium, with evaluation conducted by the New Knowledge Organization, Pennsylvania State University, and Ohio State University. NNOCCI enables teams of informal science interpreters across the country to serve as "communication strategists" - beyond merely conveying information they can influence public perceptions, given their high level of commitment, knowledge, public trust, social networks, and visitor contact. We provide in-depth training as well as an alumni network for ongoing learning, implementation support, leadership development, and coalition building. Our goals are to achieve a systemic national impact, embed our work within multiple ongoing regional and national climate change education networks, and leave an enduring legacy. Our project represents a cross-disciplinary partnership among climate scientists, social and cognitive scientists, and informal education practitioners. We have built a growing national network of more than 250 alumni, including approximately 15-20 peer leaders who co-lead both in-depth training programs and introductory workshops. We have found that this alumni network has assumed increasing importance in sustaining those functions. As we look toward the future, we are exploring potential partnerships with other existing networks, both to sustain our impact and to expand our reach. This presentation will address what we have learned in terms of network impacts, best practices, factors for success, and future directions.

  1. Fluid Centrality: A Social Network Analysis of Social-Technical Relations in Computer-Mediated Communication

    ERIC Educational Resources Information Center

    Enriquez, Judith Guevarra

    2010-01-01

    In this article, centrality is explored as a measure of computer-mediated communication (CMC) in networked learning. Centrality measure is quite common in performing social network analysis (SNA) and in analysing social cohesion, strength of ties and influence in CMC, and computer-supported collaborative learning research. It argues that measuring…

  2. A Distributed Computing Network for Real-Time Systems

    DTIC Science & Technology

    1980-11-03

    NUSC Technical Document 5932, 3 November 1980. A Distributed Computing Network for Real-Time Systems. Gordon E. Morrison, Combat Control...megabit, 10 megabit, and 20 megabit networks. These values are well within the state-of-the-art and are typical for real-time systems similar to

  3. HPCC and the National Information Infrastructure: an overview.

    PubMed Central

    Lindberg, D A

    1995-01-01

    The National Information Infrastructure (NII) or "information superhighway" is a high-priority federal initiative to combine communications networks, computers, databases, and consumer electronics to deliver information services to all U.S. citizens. The NII will be used to improve government and social services while cutting administrative costs. Operated by the private sector, the NII will rely on advanced technologies developed under the direction of the federal High Performance Computing and Communications (HPCC) Program. These include computing systems capable of performing trillions of operations (teraops) per second and networks capable of transmitting billions of bits (gigabits) per second. Among other activities, the HPCC Program supports the national supercomputer research centers, the federal portion of the Internet, and the development of interface software, such as Mosaic, that facilitates access to network information services. Health care has been identified as a critical demonstration area for HPCC technology and an important application area for the NII. As an HPCC participant, the National Library of Medicine (NLM) assists hospitals and medical centers to connect to the Internet through projects directed by the Regional Medical Libraries and through an Internet Connections Program cosponsored by the National Science Foundation. In addition to using the Internet to provide enhanced access to its own information services, NLM sponsors health-related applications of HPCC technology. Examples include the "Visible Human" project and recently awarded contracts for test-bed networks to share patient data and medical images, telemedicine projects to provide consultation and medical care to patients in rural areas, and advanced computer simulations of human anatomy for training in "virtual surgery." PMID:7703935

  4. Extraction of drainage networks from large terrain datasets using high throughput computing

    NASA Astrophysics Data System (ADS)

    Gong, Jianya; Xie, Jibo

    2009-02-01

    Advanced digital photogrammetry and remote sensing technology produces large terrain datasets (LTD). Processing and using these LTD has become a major challenge for GIS users. Extracting drainage networks, which are fundamental to hydrological applications, from LTD is one of the typical applications of digital terrain analysis (DTA) in geographical information applications. Existing serial drainage algorithms cannot deal with large data volumes in a timely fashion, and few GIS platforms can process LTD beyond gigabyte scale. High throughput computing (HTC), a distributed parallel computing mode, is proposed to improve the efficiency of drainage network extraction from LTD. Drainage network extraction using HTC involves two key issues: (1) how to decompose the large DEM datasets into independent computing units and (2) how to merge the separate outputs into a final result. A new decomposition method is presented in which the large datasets are partitioned into independent computing units using natural watershed boundaries instead of regular 1-dimensional (strip-wise) and 2-dimensional (block-wise) decomposition. Because the distribution of drainage networks is strongly related to watershed boundaries, the new decomposition method is more effective and natural. The method to extract natural watershed boundaries was improved by using multi-scale DEMs instead of single-scale DEMs. A HTC environment is employed to test the proposed methods with real datasets.

  5. Why do Reservoir Computing Networks Predict Chaotic Systems so Well?

    NASA Astrophysics Data System (ADS)

    Lu, Zhixin; Pathak, Jaideep; Girvan, Michelle; Hunt, Brian; Ott, Edward

    Recently a new type of artificial neural network, which is called a reservoir computing network (RCN), has been employed to predict the evolution of chaotic dynamical systems from measured data and without a priori knowledge of the governing equations of the system. The quality of these predictions has been found to be spectacularly good. Here, we present a dynamical-system-based theory for how RCN works. Basically a RCN is thought of as consisting of three parts, a randomly chosen input layer, a randomly chosen recurrent network (the reservoir), and an output layer. The advantage of the RCN framework is that training is done only on the linear output layer, making it computationally feasible for the reservoir dimensionality to be large. In this presentation, we address the underlying dynamical mechanisms of RCN function by employing the concepts of generalized synchronization and conditional Lyapunov exponents. Using this framework, we propose conditions on reservoir dynamics necessary for good prediction performance. By looking at the RCN from this dynamical systems point of view, we gain a deeper understanding of its surprising computational power, as well as insights on how to design a RCN. Supported by Army Research Office Grant Number W911NF1210101.
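
    The "echo state" behavior underlying the generalized-synchronization argument - the reservoir state becoming a function of the input history, independent of initial conditions - can be demonstrated with a toy pure-Python reservoir. This is a sketch under simplifying assumptions: dividing the weights by n is a crude stand-in for properly scaling the spectral radius below 1, and the function names are invented:

```python
import math
import random

def reservoir_step(state, u, w_res, w_in):
    # One reservoir update: x_i <- tanh(sum_j W_res[i][j]*x_j + W_in[i]*u)
    n = len(state)
    return [math.tanh(sum(w_res[i][j] * state[j] for j in range(n))
                      + w_in[i] * u)
            for i in range(n)]

def echo_state_demo(n=20, scale=0.9, steps=200, seed=1):
    # Drive two copies of the same reservoir, started from different
    # random states, with the same input signal.  With weak enough
    # recurrent weights the trajectories converge: the reservoir state
    # depends only on the input history (generalized synchronization).
    rng = random.Random(seed)
    w_res = [[scale * rng.uniform(-1, 1) / n for _ in range(n)]
             for _ in range(n)]
    w_in = [rng.uniform(-1, 1) for _ in range(n)]
    xa = [rng.uniform(-1, 1) for _ in range(n)]
    xb = [rng.uniform(-1, 1) for _ in range(n)]
    for t in range(steps):
        u = math.sin(0.1 * t)
        xa = reservoir_step(xa, u, w_res, w_in)
        xb = reservoir_step(xb, u, w_res, w_in)
    return max(abs(a - b) for a, b in zip(xa, xb))
```

    After 200 steps the two trajectories are numerically indistinguishable; training then touches only a linear readout of the synchronized state.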

  6. From biological neural networks to thinking machines: Transitioning biological organizational principles to computer technology

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.

    1991-01-01

    The three-dimensional organization of the vestibular macula is under study by computer assisted reconstruction and simulation methods as a model for more complex neural systems. One goal of this research is to transition knowledge of biological neural network architecture and functioning to computer technology, to contribute to the development of thinking computers. Maculas are organized as weighted neural networks for parallel distributed processing of information. The network is characterized by non-linearity of its terminal/receptive fields. Wiring appears to develop through constrained randomness. A further property is the presence of two main circuits, highly channeled and distributed modifying, that are connected through feedforward-feedback collaterals and a biasing subcircuit. Computer simulations demonstrate that differences in the geometry of the feedback (afferent) collaterals affect the timing and the magnitude of voltage changes delivered to the spike initiation zone. Feedforward (efferent) collaterals act as voltage followers and likely inhibit neurons of the distributed modifying circuit. These results illustrate the importance of feedforward-feedback loops, of timing, and of inhibition in refining neural network output. They also suggest that it is the distributed modifying network that is most involved in adaptation, memory, and learning. Tests of macular adaptation, through hyper- and microgravitational studies, support this hypothesis since synapses in the distributed modifying circuit, but not the channeled circuit, are altered. Transitioning knowledge of biological systems to computer technology, however, remains problematical.

  7. Experimental realization of an entanglement access network and secure multi-party computation

    NASA Astrophysics Data System (ADS)

    Chang, X.-Y.; Deng, D.-L.; Yuan, X.-X.; Hou, P.-Y.; Huang, Y.-Y.; Duan, L.-M.

    2016-07-01

    To construct a quantum network with many end users, it is critical to have a cost-efficient way to distribute entanglement over different network ends. We demonstrate an entanglement access network, where the expensive resource, the entangled photon source at the telecom wavelength and the core communication channel, is shared by many end users. Using this cost-efficient entanglement access network, we report experimental demonstration of a secure multiparty computation protocol, the privacy-preserving secure sum problem, based on the network quantum cryptography.
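
    Setting aside the quantum channel (which is this work's actual contribution), the privacy-preserving secure sum task itself can be sketched classically with additive secret sharing. The function name and parameters below are invented for illustration:

```python
import random

def secure_sum(private_values, modulus=10**9, seed=None):
    # Additive secret sharing: each party splits its private value into
    # n random shares that sum to the value (mod M) and distributes
    # them, so every participant only ever sees sums of shares.
    # Combining all the pooled shares reveals the total and nothing
    # about any individual contribution.
    rng = random.Random(seed)
    n = len(private_values)
    pooled = [0] * n
    for v in private_values:
        shares = [rng.randrange(modulus) for _ in range(n - 1)]
        shares.append((v - sum(shares)) % modulus)  # shares sum to v mod M
        for i, s in enumerate(shares):
            pooled[i] = (pooled[i] + s) % modulus
    return sum(pooled) % modulus
```

    The classical version assumes honest-but-curious parties and secure pairwise channels; the quantum network replaces that channel assumption with entanglement-based cryptography.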

  8. Experimental realization of an entanglement access network and secure multi-party computation

    NASA Astrophysics Data System (ADS)

    Chang, Xiuying; Deng, Donglin; Yuan, Xinxing; Hou, Panyu; Huang, Yuanyuan; Duan, Luming; Department of Physics, University of Michigan Collaboration; Center for Quantum Information, Tsinghua University Team

    2017-04-01

    To construct a quantum network with many end users, it is critical to have a cost-efficient way to distribute entanglement over different network ends. We demonstrate an entanglement access network, where the expensive resource, the entangled photon source at the telecom wavelength and the core communication channel, is shared by many end users. Using this cost-efficient entanglement access network, we report experimental demonstration of a secure multiparty computation protocol, the privacy-preserving secure sum problem, based on the network quantum cryptography.

  9. Computational modeling of neural plasticity for self-organization of neural networks.

    PubMed

    Chrol-Cannon, Joseph; Jin, Yaochu

    2014-11-01

    Self-organization in biological nervous systems during the lifetime is known to largely occur through a process of plasticity that is dependent upon the spike-timing activity in connected neurons. In the field of computational neuroscience, much effort has been dedicated to building up computational models of neural plasticity to replicate experimental data. Most recently, increasing attention has been paid to understanding the role of neural plasticity in functional and structural neural self-organization, as well as its influence on the learning performance of neural networks for accomplishing machine learning tasks such as classification and regression. Although many ideas and hypotheses have been suggested, the relationship between the structure, dynamics and learning performance of neural networks remains elusive. The purpose of this article is to review the most important computational models for neural plasticity and discuss various ideas about neural plasticity's role. Finally, we suggest a few promising research directions, in particular those that combine findings in computational neuroscience and systems biology and their synergistic roles in understanding learning, memory and cognition, thereby bridging the gap between computational neuroscience, systems biology and computational intelligence. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  10. External quality-assurance project report for the National Atmospheric Deposition Program/National Trends Network and Mercury Deposition Network, 2009-2010

    USGS Publications Warehouse

    Wetherbee, Gregory A.; Martin, RoseAnn; Rhodes, Mark F.; Chesney, Tanya A.

    2014-01-01

    The U.S. Geological Survey operated six distinct programs to provide external quality-assurance monitoring for the National Atmospheric Deposition Program/National Trends Network (NTN) and Mercury Deposition Network (MDN) during 2009–2010. The field-audit program assessed the effects of onsite exposure, sample handling, and shipping on the chemistry of NTN samples; a system-blank program assessed the same effects for MDN. Two interlaboratory-comparison programs assessed the bias and variability of the chemical analysis data from the Central Analytical Laboratory (CAL) and Mercury (Hg) Analytical Laboratory (HAL). The blind-audit program was also implemented for the MDN to evaluate analytical bias in total Hg concentration data produced by the HAL. The co-located-sampler program was used to identify and quantify potential shifts in NADP data resulting from replacement of original network instrumentation with new electronic recording rain gages (E-gages) and precipitation collectors that use optical sensors. The results indicate that NADP data continue to be of sufficient quality for the analysis of spatial distributions and time trends of chemical constituents in wet deposition across the United States. Results also suggest that retrofit of the NADP networks with the new precipitation collectors could cause –8 to +14 percent shifts in NADP annual precipitation-weighted mean concentrations and total deposition values for ammonium, nitrate, sulfate, and hydrogen ion, and larger shifts (+13 to +74 percent) for calcium, magnesium, sodium, potassium, and chloride. The prototype N-CON Systems bucket collector is more efficient at catching winter precipitation than the Aerochem Metrics Model 301 collector, especially for light snowfall.

  11. Experimental and computational analysis of a large protein network that controls fat storage reveals the design principles of a signaling network.

    PubMed

    Al-Anzi, Bader; Arpp, Patrick; Gerges, Sherif; Ormerod, Christopher; Olsman, Noah; Zinn, Kai

    2015-05-01

    An approach combining genetic, proteomic, computational, and physiological analysis was used to define a protein network that regulates fat storage in budding yeast (Saccharomyces cerevisiae). A computational analysis of this network shows that it is not scale-free, and is best approximated by the Watts-Strogatz model, which generates "small-world" networks with high clustering and short path lengths. The network is also modular, containing energy level sensing proteins that connect to four output processes: autophagy, fatty acid synthesis, mRNA processing, and MAP kinase signaling. The importance of each protein to network function is dependent on its Katz centrality score, which is related both to the protein's position within a module and to the module's relationship to the network as a whole. The network is also divisible into subnetworks that span modular boundaries and regulate different aspects of fat metabolism. We used a combination of genetics and pharmacology to simultaneously block output from multiple network nodes. The phenotypic results of this blockage define patterns of communication among distant network nodes, and these patterns are consistent with the Watts-Strogatz model.
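
    Katz centrality, the score used here to rank protein importance, can be computed by simple fixed-point iteration. This generic sketch is not tied to the study's network data; the star graph is an invented example:

```python
def katz_centrality(adj, alpha=0.1, beta=1.0, iters=1000):
    # Iterate x <- beta + alpha * A^T x, which converges whenever
    # alpha is below 1 / (spectral radius of A).  Each node's score
    # counts walks reaching it, damped by alpha per step, so position
    # within densely connected modules raises the score.
    n = len(adj)
    x = [0.0] * n
    for _ in range(iters):
        x = [beta + alpha * sum(adj[j][i] * x[j] for j in range(n))
             for i in range(n)]
    return x

# A 4-node star: the hub (node 0) scores higher than the leaves.
star = [[0, 1, 1, 1],
        [1, 0, 0, 0],
        [1, 0, 0, 0],
        [1, 0, 0, 0]]
```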

  12. Analytical Computation of the Epidemic Threshold on Temporal Networks

    NASA Astrophysics Data System (ADS)

    Valdano, Eugenio; Ferreri, Luca; Poletto, Chiara; Colizza, Vittoria

    2015-04-01

    The time variation of contacts in a networked system may fundamentally alter the properties of spreading processes and affect the condition for large-scale propagation, as encoded in the epidemic threshold. Despite the great interest in the problem for the physics, applied mathematics, computer science, and epidemiology communities, a full theoretical understanding is still missing and currently limited to the cases where the time-scale separation holds between spreading and network dynamics or to specific temporal network models. We consider a Markov chain description of the susceptible-infectious-susceptible process on an arbitrary temporal network. By adopting a multilayer perspective, we develop a general analytical derivation of the epidemic threshold in terms of the spectral radius of a matrix that encodes both network structure and disease dynamics. The accuracy of the approach is confirmed on a set of temporal models and empirical networks and against numerical results. In addition, we explore how the threshold changes when varying the overall time of observation of the temporal network, so as to provide insights on the optimal time window for data collection of empirical temporal networked systems. Our framework is of both fundamental and practical interest, as it offers novel understanding of the interplay between temporal networks and spreading dynamics.
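
    The criterion described here - an epidemic threshold given by the spectral radius of a matrix combining network structure and disease dynamics - can be illustrated for a linearized SIS process over adjacency snapshots. This is a simplified reading of the multilayer construction, with toy parameters and invented function names:

```python
def _matvec(m, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in m]

def spectral_radius(m, iters=200):
    # Power iteration, adequate for small nonnegative matrices.
    v = [1.0] * len(m)
    lam = 0.0
    for _ in range(iters):
        w = _matvec(m, v)
        lam = max(abs(x) for x in w)
        if lam == 0:
            return 0.0
        v = [x / lam for x in w]
    return lam

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def sis_growth_factor(snapshots, beta, mu):
    # Linearized SIS on a temporal network: infection probabilities
    # evolve roughly as p <- M_t p with M_t = (1 - mu) I + beta A_t,
    # where beta is the transmission and mu the recovery probability.
    # The epidemic threshold sits where the spectral radius of the
    # product of the M_t over the observation window equals 1.
    n = len(snapshots[0])
    prod = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for a in snapshots:
        m = [[(1.0 - mu) * (1.0 if i == j else 0.0) + beta * a[i][j]
              for j in range(n)] for i in range(n)]
        prod = matmul(m, prod)
    return spectral_radius(prod)
```

    A growth factor above 1 means the infection can invade; below 1 it dies out.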

  13. A resource management architecture based on complex network theory in cloud computing federation

    NASA Astrophysics Data System (ADS)

    Zhang, Zehua; Zhang, Xuejie

    2011-10-01

    Cloud Computing Federation is a main trend in Cloud Computing, and resource management has a significant effect on the design, realization, and efficiency of a federation. Because Cloud Computing Federation has the typical characteristics of a complex system, we propose a resource management architecture based on complex network theory for Cloud Computing Federation (abbreviated as RMABC) in this paper, with a detailed design of the resource discovery and resource announcement mechanisms. Compared with existing resource management mechanisms in distributed computing systems, a Task Manager in RMABC can use historical information and current state data obtained from other Task Managers to evolve the complex network composed of Task Managers, and thus has advantages in resource discovery speed, fault tolerance, and adaptive ability. The results of the model experiment confirmed the advantage of RMABC in resource discovery performance.

  14. 34 CFR 412.1 - What is the National Network for Curriculum Coordination in Vocational and Technical Education?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 34 Education 3 2011-07-01 2011-07-01 false What is the National Network for Curriculum... EDUCATION NATIONAL NETWORK FOR CURRICULUM COORDINATION IN VOCATIONAL AND TECHNICAL EDUCATION General § 412.1 What is the National Network for Curriculum Coordination in Vocational and Technical Education? The...

  15. 34 CFR 412.1 - What is the National Network for Curriculum Coordination in Vocational and Technical Education?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 3 2010-07-01 2010-07-01 false What is the National Network for Curriculum... EDUCATION NATIONAL NETWORK FOR CURRICULUM COORDINATION IN VOCATIONAL AND TECHNICAL EDUCATION General § 412.1 What is the National Network for Curriculum Coordination in Vocational and Technical Education? The...

  16. Experimental realization of an entanglement access network and secure multi-party computation

    PubMed Central

    Chang, X.-Y.; Deng, D.-L.; Yuan, X.-X.; Hou, P.-Y.; Huang, Y.-Y.; Duan, L.-M.

    2016-01-01

    To construct a quantum network with many end users, it is critical to have a cost-efficient way to distribute entanglement over different network ends. We demonstrate an entanglement access network, where the expensive resource, the entangled photon source at the telecom wavelength and the core communication channel, is shared by many end users. Using this cost-efficient entanglement access network, we report experimental demonstration of a secure multiparty computation protocol, the privacy-preserving secure sum problem, based on the network quantum cryptography. PMID:27404561

  17. 23 CFR 658.21 - Identification of National Network.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... the efficiency of the total traffic flow, such as time of day prohibitions, or lane use controls. (2....21 Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION ENGINEERING AND TRAFFIC... National Network shall be signed. All signs shall conform to the Manual on Uniform Traffic Control Devices...

  18. 23 CFR 658.21 - Identification of National Network.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... the efficiency of the total traffic flow, such as time of day prohibitions, or lane use controls. (2....21 Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION ENGINEERING AND TRAFFIC... National Network shall be signed. All signs shall conform to the Manual on Uniform Traffic Control Devices...

  19. 23 CFR 658.21 - Identification of National Network.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... the efficiency of the total traffic flow, such as time of day prohibitions, or lane use controls. (2....21 Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION ENGINEERING AND TRAFFIC... National Network shall be signed. All signs shall conform to the Manual on Uniform Traffic Control Devices...

  20. Path planning on cellular nonlinear network using active wave computing technique

    NASA Astrophysics Data System (ADS)

    Yeniçeri, Ramazan; Yalçın, Müstak E.

    2009-05-01

    This paper introduces a simple algorithm to solve the robot path-finding problem using active wave computing techniques. A two-dimensional Cellular Neural/Nonlinear Network (CNN), consisting of relaxation oscillators, has been used to generate active waves and to process the visual information. The network, which has been implemented on a Field Programmable Gate Array (FPGA) chip, can be programmed, controlled, and observed by a host computer. The arena of the robot is modelled as the medium of the active waves on the network. Active waves are employed to cover the whole medium with their own dynamics, starting from an initial point. The proposed algorithm works by observing the motion of the wave-front of the active waves. The host program first loads the arena model onto the active-wave generator network and commands it to start generation. It then periodically pulls the network image from the generator hardware to analyze the evolution of the active waves. When the algorithm completes, a vectorial data image is generated; the path from any pixel on this image to the active-wave-generating pixel is traced by the vectors on this image. The robot arena may be a complicated labyrinth or may have a simple geometry, but the arena surface must always be flat. Our Autowave Generator CNN implementation, which runs on the Xilinx University Program Virtex-II Pro Development System, is operated by a MATLAB program running on the host computer. Because the active-wave generator hardware has 16,384 neurons, an arena of 128 × 128 pixels can be modeled and solved by the algorithm. The system also has a monitor, and the network image is depicted on the monitor simultaneously.
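
    The software-only analogue of this scheme is the classical wavefront (brushfire) planner: a breadth-first wave expanding from the goal assigns each free cell an arrival time, and the path follows arrival times downhill, much as the vectors on the CNN's vectorial image do. A sketch of that analogue, not the FPGA/CNN implementation:

```python
from collections import deque

def wavefront(grid, goal):
    # Breadth-first wave from the goal: dist[r][c] is the wave-front
    # arrival time at each free cell (grid value 0); obstacles (1) and
    # unreachable cells stay None.
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    dist[goal[0]][goal[1]] = 0
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and dist[nr][nc] is None):
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist

def follow(dist, start):
    # Descend the arrival-time field: from each cell step to the
    # reachable neighbour with the smallest arrival time, ending at
    # the wave source (the goal).
    rows, cols = len(dist), len(dist[0])
    path = [start]
    r, c = start
    while dist[r][c] != 0:
        candidates = [(r + dr, c + dc)
                      for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                      if 0 <= r + dr < rows and 0 <= c + dc < cols
                      and dist[r + dr][c + dc] is not None]
        r, c = min(candidates, key=lambda p: dist[p[0]][p[1]])
        path.append((r, c))
    return path
```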

  1. Direct2Experts: a pilot national network to demonstrate interoperability among research-networking platforms.

    PubMed

    Weber, Griffin M; Barnett, William; Conlon, Mike; Eichmann, David; Kibbe, Warren; Falk-Krzesinski, Holly; Halaas, Michael; Johnson, Layne; Meeks, Eric; Mitchell, Donald; Schleyer, Titus; Stallings, Sarah; Warden, Michael; Kahlon, Maninder

    2011-12-01

    Research-networking tools use data-mining and social networking to enable expertise discovery, matchmaking and collaboration, which are important facets of team science and translational research. Several commercial and academic platforms have been built, and many institutions have deployed these products to help their investigators find local collaborators. Recent studies, though, have shown the growing importance of multiuniversity teams in science. Unfortunately, the lack of a standard data-exchange model and resistance of universities to share information about their faculty have presented barriers to forming an institutionally supported national network. This case report describes an initiative, which, in only 6 months, achieved interoperability among seven major research-networking products at 28 universities by taking an approach that focused on addressing institutional concerns and encouraging their participation. With this necessary groundwork in place, the second phase of this effort can begin, which will expand the network's functionality and focus on the end users.

  2. Direct2Experts: a pilot national network to demonstrate interoperability among research-networking platforms

    PubMed Central

    Barnett, William; Conlon, Mike; Eichmann, David; Kibbe, Warren; Falk-Krzesinski, Holly; Halaas, Michael; Johnson, Layne; Meeks, Eric; Mitchell, Donald; Schleyer, Titus; Stallings, Sarah; Warden, Michael; Kahlon, Maninder

    2011-01-01

    Research-networking tools use data-mining and social networking to enable expertise discovery, matchmaking and collaboration, which are important facets of team science and translational research. Several commercial and academic platforms have been built, and many institutions have deployed these products to help their investigators find local collaborators. Recent studies, though, have shown the growing importance of multiuniversity teams in science. Unfortunately, the lack of a standard data-exchange model and resistance of universities to share information about their faculty have presented barriers to forming an institutionally supported national network. This case report describes an initiative, which, in only 6 months, achieved interoperability among seven major research-networking products at 28 universities by taking an approach that focused on addressing institutional concerns and encouraging their participation. With this necessary groundwork in place, the second phase of this effort can begin, which will expand the network's functionality and focus on the end users. PMID:22037890

  3. The National Education Association's Educational Computer Service. An Assessment.

    ERIC Educational Resources Information Center

    Software Publishers Association, Washington, DC.

    The Educational Computer Service (ECS) of the National Education Association (NEA) evaluates and distributes educational software. An investigation of ECS was conducted by the Computer Education Committee of the Software Publishers Association (SPA) at the request of SPA members. The SPA found that the service, as it is presently structured, is…

  4. New European Training Network to Improve Young Scientists' Capabilities in Computational Wave Propagation

    NASA Astrophysics Data System (ADS)

    Igel, Heiner

    2004-07-01

    The European Commission recently funded a Marie-Curie Research Training Network (MCRTN) in the field of computational seismology within the 6th Framework Program. SPICE (Seismic wave Propagation and Imaging in Complex media: a European network) is coordinated by the computational seismology group of the Ludwig-Maximilians-Universität in Munich linking 14 European research institutions in total. The 4-year project will provide funding for 14 Ph.D. students (3-year projects) and 14 postdoctoral positions (2-year projects) within the various fields of computational seismology. These positions have been advertised and are currently being filled.

  5. Network-based drug discovery by integrating systems biology and computational technologies

    PubMed Central

    Leung, Elaine L.; Cao, Zhi-Wei; Jiang, Zhi-Hong; Zhou, Hua

    2013-01-01

    Network-based intervention has become a trend in treating systemic diseases, but it relies on regimen optimization and valid multi-target actions of the drugs. The complex multi-component nature of medicinal herbs may serve as a valuable resource for network-based multi-target drug discovery due to its potential treatment effects by synergy. Recently, multiple systems biology platforms have proven powerful for uncovering molecular mechanisms and connections between drugs and their targeted dynamic networks. However, optimization methods for drug combinations remain insufficient, owing to the lack of tighter integration across multiple '-omics' databases. Newly developed algorithm- or network-based computational models can tightly integrate '-omics' databases and optimize combinational regimens of drug development, which encourages developing medicinal herbs into a new wave of network-based multi-target drugs. However, challenges to further integration of medicinal herb databases with multiple systems biology platforms for multi-target drug optimization remain: the uncertain reliability of individual data sets, and the width, depth, and degree of standardization of herbal medicine. Standardization of the methodology and terminology of multiple systems biology and herbal databases would facilitate this integration, as would enhancing publicly accessible databases and increasing the number of studies applying systems biology platforms to herbal medicine. Further integration across various '-omics' platforms and computational tools would accelerate the development of network-based drug discovery and network medicine. PMID:22877768

  6. Proceedings of a Conference on Telecommunication Technologies, Networkings and Libraries

    NASA Astrophysics Data System (ADS)

    Knight, N. K.

    1981-12-01

    Current and developing technologies for digital transmission of image data that are likely to affect the operations of libraries and information centers, or to support information networking, are reviewed. The technologies reviewed include slow-scan television, teleconferencing, and videodisc technology; standards development for computer network interconnection through hardware and software, particularly packet-switched networks; computer network protocols for library and information service applications; the structure of a national bibliographic telecommunications network; and the major policy issues involved in the regulation or deregulation of the common communications carrier industry.

  7. Computational cost for detecting inspiralling binaries using a network of laser interferometric detectors

    NASA Astrophysics Data System (ADS)

    Pai, Archana; Bose, Sukanta; Dhurandhar, Sanjeev

    2002-04-01

    We extend a coherent network data-analysis strategy developed earlier for detecting Newtonian waveforms to the case of post-Newtonian (PN) waveforms. Since the PN waveform depends on the individual masses of the inspiralling binary, the parameter-space dimension increases by one from that of the Newtonian case. We obtain the number of templates and estimate the computational costs for PN waveforms: for a lower mass limit of 1 Msolar, for LIGO-I noise, and with 3% maximum mismatch, the online computational speed requirement for a single detector is a few Gflops; for a two-detector network it is hundreds of Gflops, and for a three-detector network it is tens of Tflops. Apart from idealistic networks, we obtain results for realistic networks comprising LIGO and VIRGO. Finally, we compare the costs incurred in a coincidence detection strategy with those incurred in the coherent strategy detailed above.

  8. Autumn Algorithm-Computation of Hybridization Networks for Realistic Phylogenetic Trees.

    PubMed

    Huson, Daniel H; Linz, Simone

    2018-01-01

    A minimum hybridization network is a rooted phylogenetic network that displays two given rooted phylogenetic trees using a minimum number of reticulations. Previous mathematical work on their calculation has usually assumed the input trees to be bifurcating, to be correctly rooted, and to contain the same taxa. These assumptions do not hold in biological studies: "realistic" trees have multifurcations, are difficult to root, and rarely contain the same taxa. We present a new algorithm for computing minimum hybridization networks for a given pair of "realistic" rooted phylogenetic trees. We also describe how the algorithm might be used to improve the rooting of the input trees. We introduce the concept of "autumn trees", a convenient framework for formulating algorithms based on the mathematics of "maximum acyclic agreement forests". While the main computational problem is hard, the run time depends mainly on how different the given input trees are. In biological studies, where the trees are reasonably similar, our parallel implementation performs well in practice. The algorithm is available in our open-source program Dendroscope 3, providing a platform for biologists to explore rooted phylogenetic networks. We demonstrate the utility of the algorithm using several previously studied data sets.

  9. Computer Science and Technology Publications. NBS Publications List 84.

    ERIC Educational Resources Information Center

    National Bureau of Standards (DOC), Washington, DC. Inst. for Computer Sciences and Technology.

    This bibliography lists publications of the Institute for Computer Sciences and Technology of the National Bureau of Standards. Publications are listed by subject in the areas of computer security, computer networking, and automation technology. Sections list publications of: (1) current Federal Information Processing Standards; (2) computer…

  10. Analysis of Flow Behavior Within An Integrated Computer-Communication Network,

    DTIC Science & Technology

    1979-05-01

    Howard. Plan today for tomorrow's data/voice nets. Data Communications 7, 9 (Sep. 1978), 51-62. 24. Frank, Howard, and Gitman, Israel. Integrated DoD...computer networks. NTC-74, San Diego, CA., (Dec. 2-4, 1974), 1032-1037. 31. Gitman, I., Frank, H., Occhiogrosso, B., and Hsieh, W. Issues in integrated...switched networks agree on standard interface. Data Communications, (May/June 1978), 25-39. 36. Hsieh, W., Gitman, I., and Occhiogrosso, B. Design of

  11. A New Generation of Networks and Computing Models for High Energy Physics in the LHC Era

    NASA Astrophysics Data System (ADS)

    Newman, H.

    2011-12-01

    Wide area networks of increasing end-to-end capacity and capability are vital for every phase of high energy physicists' work. Our bandwidth usage, and the typical capacity of the major national backbones and intercontinental links used by our field, have progressed by a factor of several hundred times over the past decade. With the opening of the LHC era in 2009-10 and the prospects for discoveries in the upcoming LHC run, the outlook is for a continuation or an acceleration of these trends using next generation networks over the next few years. Responding to the need to rapidly distribute and access datasets of tens to hundreds of terabytes drawn from multi-petabyte data stores, high energy physicists working with network engineers and computer scientists are learning to use long range networks effectively on an increasing scale, and aggregate flows reaching the 100 Gbps range have been observed. The progress of the LHC, and the unprecedented ability of the experiments to produce results rapidly using worldwide distributed data processing and analysis, has sparked major, emerging changes in the LHC Computing Models, which are moving from the classic hierarchical model designed a decade ago to more agile peer-to-peer-like models that make more effective use of the resources at Tier2 and Tier3 sites located throughout the world. A new requirements working group has gauged the needs of Tier2 centers, and charged the LHCOPN group that runs the network interconnecting the LHC Tier1s with designing a new architecture interconnecting the Tier2s. As seen from the perspective of ICFA's Standing Committee on Inter-regional Connectivity (SCIC), the Digital Divide that separates physicists in several regions of the developing world from those in the developed world remains acute, although many countries have made major advances through the rapid installation of modern network infrastructures.
A case in point is Africa, where a new round of undersea cables promises to transform

  12. Capability of the People’s Republic of China to Conduct Cyber Warfare and Computer Network Exploitation

    DTIC Science & Technology

    2009-10-09

    Capability of the People’s Republic of China to Conduct Cyber Warfare and Computer Network Exploitation Prepared for The US-China Economic and...the People’s Republic of China to Conduct Cyber Warfare and Computer Network Exploitation 5a. CONTRACT NUMBER 5b. GRANT NUMBER 5c. PROGRAM ELEMENT...Capability of the People’s Republic of China to Conduct Cyber Warfare and Computer Network Exploitation 2 US-China Economic and Security Review

  13. Mexican national pyranometer network calibration

    NASA Astrophysics Data System (ADS)

    Valdes, M.; Villarreal, L.; Estevez, H.; Riveros, D.

    2013-12-01

    In order to take advantage of solar radiation as an alternative energy source, it is necessary to evaluate its spatial and temporal availability. The Mexican National Meteorological Service (SMN) operates a network of 136 meteorological stations, each equipped with a pyranometer for measuring global solar radiation. Some of these stations had not been calibrated in several years. In order to count on a reliable evaluation of the solar resource, the Mexican Department of Energy (SENER) funded this project to calibrate the SMN pyranometer network and validate the data. Calibrating the 136 pyranometers by the intercomparison method recommended by the World Meteorological Organization (WMO) requires lengthy observations and specific environmental conditions, such as clear skies and a stable atmosphere, circumstances that determine the site and season of the calibration. The Solar Radiation Section of the Instituto de Geofísica of the Universidad Nacional Autónoma de México is a Regional Center of the WMO and is certified to carry out the calibration procedures and issue certificates. We are responsible for the recalibration of the SMN pyranometer network. A continuous-emission solar simulator with an exposed area 30 cm in diameter was acquired to reduce the calibration time and remove the dependence on atmospheric conditions. We present the results of the calibration of 10 thermopile pyranometers and one photovoltaic cell by the intercomparison method, with more than 10,000 observations each, and the results obtained with the solar simulator.
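
    The intercomparison method described above reduces, in essence, to averaging the ratio of the test instrument's signal to the reference irradiance over many valid observations. A minimal sketch of that ratio estimate (my own construction; the function name, threshold, and numbers are illustrative, not taken from the abstract):

    ```python
    # Hypothetical sketch of pyranometer calibration by intercomparison:
    # average the ratio of the test sensor's raw signal to a reference
    # irradiance series, rejecting low-irradiance readings.

    def calibrate(test_signal_uV, ref_irradiance_Wm2, min_irradiance=200.0):
        """Estimate sensitivity (uV per W/m^2) of a test pyranometer."""
        ratios = [v / g for v, g in zip(test_signal_uV, ref_irradiance_Wm2)
                  if g >= min_irradiance]
        if not ratios:
            raise ValueError("no valid observations above threshold")
        return sum(ratios) / len(ratios)

    # Synthetic data: a sensor of true sensitivity 8.5 uV/(W/m^2);
    # the last point falls below the threshold and is rejected.
    irr = [250.0, 500.0, 750.0, 1000.0, 150.0]
    sig = [8.5 * g for g in irr]
    print(round(calibrate(sig, irr), 2))  # 8.5
    ```

    In a real intercomparison, the averaging would also be restricted to periods of stable, clear-sky conditions, which is precisely why the abstract stresses site and season.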

  14. Information Dissemination of Public Health Emergency on Social Networks and Intelligent Computation

    PubMed Central

    Hu, Hongzhi; Mao, Huajuan; Hu, Xiaohua; Hu, Feng; Sun, Xuemin; Jing, Zaiping; Duan, Yunsuo

    2015-01-01

    Owing to their extensive social influence, public health emergencies attract great attention in today's society. Booming social networks are becoming a main information dissemination platform for such events and have raised high concern in emergency management; a good prediction of information dissemination in social networks is therefore necessary for estimating an event's social impact and devising a proper strategy. However, information dissemination is largely affected by complex interactive activities and group behaviors in social networks, and existing methods and models are limited in achieving satisfactory predictions because of open, changeable social connections and uncertain information-processing behaviors. ACP (artificial societies, computational experiments, and parallel execution) provides an effective way to simulate the real situation. In order to better predict information dissemination in social networks, this paper proposes an intelligent computation method under the framework of TDF (Theory-Data-Feedback) based on an ACP simulation system, which was successfully applied to the analysis of the A (H1N1) flu emergency. PMID:26609303

  15. Information Dissemination of Public Health Emergency on Social Networks and Intelligent Computation.

    PubMed

    Hu, Hongzhi; Mao, Huajuan; Hu, Xiaohua; Hu, Feng; Sun, Xuemin; Jing, Zaiping; Duan, Yunsuo

    2015-01-01

    Owing to their extensive social influence, public health emergencies attract great attention in today's society. Booming social networks are becoming a main information dissemination platform for such events and have raised high concern in emergency management; a good prediction of information dissemination in social networks is therefore necessary for estimating an event's social impact and devising a proper strategy. However, information dissemination is largely affected by complex interactive activities and group behaviors in social networks, and existing methods and models are limited in achieving satisfactory predictions because of open, changeable social connections and uncertain information-processing behaviors. ACP (artificial societies, computational experiments, and parallel execution) provides an effective way to simulate the real situation. In order to better predict information dissemination in social networks, this paper proposes an intelligent computation method under the framework of TDF (Theory-Data-Feedback) based on an ACP simulation system, which was successfully applied to the analysis of the A (H1N1) flu emergency.

  16. Computation of Steady-State Probability Distributions in Stochastic Models of Cellular Networks

    PubMed Central

    Hallen, Mark; Li, Bochong; Tanouchi, Yu; Tan, Cheemeng; West, Mike; You, Lingchong

    2011-01-01

    Cellular processes are “noisy”. In each cell, concentrations of molecules are subject to random fluctuations due to the small numbers of these molecules and to environmental perturbations. While noise varies with time, it is often measured at steady state, for example by flow cytometry. When interrogating aspects of a cellular network by such steady-state measurements of network components, a key need is to develop efficient methods to simulate and compute these distributions. We describe innovations in stochastic modeling coupled with approaches to this computational challenge: first, an approach to modeling intrinsic noise via solution of the chemical master equation, and second, a convolution technique to account for contributions of extrinsic noise. We show how these techniques can be combined in a streamlined procedure for evaluation of different sources of variability in a biochemical network. Evaluation and illustrations are given in analysis of two well-characterized synthetic gene circuits, as well as a signaling network underlying the mammalian cell cycle entry. PMID:22022252
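
    The first modeling step described above, computing a steady-state distribution from the chemical master equation, can be illustrated on the simplest possible network. The sketch below (my own construction; the circuits in the paper are far more complex) builds the generator matrix of a birth-death process and solves pi Q = 0 with a normalization constraint:

    ```python
    import numpy as np

    # Steady state of the chemical master equation for a birth-death process
    #   0 --k--> X,  X --gamma*n--> 0
    # on a truncated state space {0, ..., N}. Analytically the stationary
    # distribution is Poisson(k/gamma); here we recover it numerically.

    def cme_steady_state(k=5.0, gamma=1.0, N=40):
        Q = np.zeros((N + 1, N + 1))
        for n in range(N + 1):
            if n < N:
                Q[n, n + 1] = k            # birth: n -> n+1
            if n > 0:
                Q[n, n - 1] = gamma * n    # death: n -> n-1
            Q[n, n] = -Q[n].sum()          # diagonal conserves probability
        # Solve pi Q = 0 with sum(pi) = 1 by appending the normalization row.
        A = np.vstack([Q.T, np.ones(N + 1)])
        b = np.zeros(N + 2)
        b[-1] = 1.0
        pi, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pi

    pi = cme_steady_state()
    mean = (np.arange(41) * pi).sum()
    print(round(mean, 3))  # close to k/gamma = 5
    ```

    Extrinsic noise, as the abstract notes, would then be layered on top of such an intrinsic-noise solution, e.g. by convolving this distribution with a distribution of rate parameters.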

  17. Requirements for a network storage service

    NASA Technical Reports Server (NTRS)

    Kelly, Suzanne M.; Haynes, Rena A.

    1992-01-01

    Sandia National Laboratories provides a high-performance classified computer network as a core capability in support of its mission of nuclear weapons design and engineering, physical sciences research, and energy research and development. The network, locally known as the Internal Secure Network (ISN), was designed in 1989 and comprises multiple distributed local area networks (LANs) residing in Albuquerque, New Mexico and Livermore, California. The TCP/IP protocol suite is used for inter-node communications. Scientific workstations and mid-range computers running UNIX-based operating systems compose most LANs. One LAN, operated by the Sandia Corporate Computing Directorate, is a general-purpose resource providing a supercomputer and a file server to the entire ISN. The current file server on the supercomputer LAN is an implementation of the Common File System (CFS) developed by Los Alamos National Laboratory. Subsequent to the design of the ISN, Sandia reviewed its mass storage requirements and chose to enter into a competitive procurement to replace the existing file server with one more adaptable to a UNIX/TCP/IP environment. The requirements study for the network was the starting point for the requirements study for the new file server. The file server is called the Network Storage Service (NSS), and its requirements are described in this paper. The next section gives an application, or functional, description of the NSS. The final section adds performance, capacity, and access constraints to the requirements.

  18. Preliminary Design Study for a National Digital Seismograph Network

    USGS Publications Warehouse

    Peterson, Jon; Hutt, Charles R.

    1981-01-01

    Introduction: Recently, the National Research Council published a report by the Panel on National, Regional, and Local Seismograph Networks of the Committee on Seismology, in which the principal recommendation was the establishment of a national digital seismograph network (NDSN). The Panel Report (Bolt, 1980) addresses both the need and the scientific requirements for the new national network. The purpose of this study has been to translate the scientific requirements into an instrumentation concept for the NDSN. There are literally hundreds, perhaps thousands, of seismographs in operation within the United States. Each serves an important purpose, but most have limited objectives in time, in region, or in the types of data that are being recorded. The concept of a national network, funded and operated by the Federal Government, is based on broader objectives that include continuity in time, uniform coverage, standardization of data formats and instruments, and widespread use of the data for a variety of research purposes. A national digital seismograph network will be an important data resource for many years to come; hence, its design is likely to be of interest to most seismologists. Seismologists have traditionally been involved in the development and field operation of seismic systems and thus have been familiar with both the potential value and the limitations of the data. However, in recent years of increasing technological sophistication, the development of data systems has fallen more to system engineers, and this trend is likely to continue. One danger in this is that the engineers may misinterpret scientific objectives or subordinate them to purely technological considerations. Another risk is that data users may misuse or misinterpret the data because they are not aware of the limitations of the data system. Perhaps the most important purpose of a design study such as this is to stimulate a dialogue between system engineers and potential data users.

  19. Computing single step operators of logic programming in radial basis function neural networks

    NASA Astrophysics Data System (ADS)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong

    2014-07-01

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single-step operator of any logic program is defined as a function (Tp: I → I). Logic programming is well suited to building artificial intelligence systems. In this study, we establish a new technique to compute the single-step operators of logic programs in radial basis function neural networks. To do so, we propose a new technique to generate training data sets for single-step operators; the training data sets are used to build the neural networks. We use recurrent radial basis function neural networks to reach the steady state (the fixed point of the operator). To improve the performance of the networks, we use the particle swarm optimization algorithm to train them.
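
    The single-step (immediate consequence) operator Tp described above has a compact direct implementation for definite propositional programs, which is what the neural network is trained to reproduce. A minimal sketch (my own construction, illustrative only; negation-free programs, where iteration is monotone and converges to the least fixed point):

    ```python
    # A program is a list of (head, body) clauses; an interpretation is a
    # set of atoms currently assigned true.

    def tp(program, interpretation):
        """One application of T_P: derive every head whose body is satisfied."""
        return {head for head, body in program if body <= interpretation}

    def least_fixed_point(program):
        """Iterate T_P from the empty interpretation until it stabilizes."""
        i = set()
        while True:
            nxt = tp(program, i)
            if nxt == i:
                return i
            i = nxt

    # Program:  a.    b :- a.    c :- a, b.
    program = [("a", set()), ("b", {"a"}), ("c", {"a", "b"})]
    print(sorted(least_fixed_point(program)))  # ['a', 'b', 'c']
    ```

    The recurrent network in the study plays the role of the `while` loop here: it is run until its output interpretation stops changing.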

  20. Scalable Quantum Networks for Distributed Computing and Sensing

    DTIC Science & Technology

    2016-04-01

    probabilistic measurement, so we developed quantum memories and guided-wave implementations of same, demonstrating controlled delay of a heralded single...Second, fundamental scalability requires a method to synchronize protocols based on quantum measurements, which are inherently probabilistic. To meet...AFRL-AFOSR-UK-TR-2016-0007 Scalable Quantum Networks for Distributed Computing and Sensing Ian Walmsley THE UNIVERSITY OF OXFORD Final Report 04/01

  1. A national neurological excellence centers network.

    PubMed

    Pazzi, S; Cristiani, P; Cavallini, A

    1998-02-01

    The most relevant problems related to the management of neurological disorders are (i) the frequent hospitalization in nonspecialist departments, with the need for neurological consultation, and (ii) the frequent requests of GPs for highly specialized investigations that are very expensive and of little value in arriving at a correct diagnosis. In 1996, the Consorzio di Bioingegneria e Informatica Medica in Italy realized the CISNet project (in collaboration with the Consorzio Istituti Scientifici Neuroscienze e Tecnologie Biomediche and funded by the Centro Studi of the National Public Health Council) for the implementation of a national neurological excellence centers network (CISNet). In the CISNet project, neurologists will be able to give on-line interactive consultation and off-line consulting services identifying correct diagnostic/therapeutic procedures, evaluating the need for both examination in specialist centers and admission to specialized centers, and identifying the most appropriate ones.

  2. Models of Dynamic Relations Among Service Activities, System State and Service Quality on Computer and Network Systems

    DTIC Science & Technology

    2010-01-01

    Service quality on computer and network systems has become increasingly important as many conventional service transactions are moved online. Service quality of computer and network services can be measured by the performance of the service process in throughput, delay, and so on. On a computer and network system, competing service requests of users and associated service activities change the state of limited system resources which in turn affects the achieved service ...relations of service activities, system state and service

  3. Investigating Patterns of Interaction in Networked Learning and Computer-Supported Collaborative Learning: A Role for Social Network Analysis

    ERIC Educational Resources Information Center

    de Laat, Maarten; Lally, Vic; Lipponen, Lasse; Simons, Robert-Jan

    2007-01-01

    The focus of this study is to explore the advances that Social Network Analysis (SNA) can bring, in combination with other methods, when studying Networked Learning/Computer-Supported Collaborative Learning (NL/CSCL). We present a general overview of how SNA is applied in NL/CSCL research; we then go on to illustrate how this research method can…

  4. A statistical summary of data from the U.S. Geological Survey's national water quality networks

    USGS Publications Warehouse

    Smith, R.A.; Alexander, R.B.

    1983-01-01

    The U.S. Geological Survey operates two nationwide networks to monitor water quality, the National Hydrologic Bench-Mark Network and the National Stream Quality Accounting Network (NASQAN). The Bench-Mark network is composed of 51 stations in small drainage basins which are as close as possible to their natural state, with no human influence and little likelihood of future development. Stations in the NASQAN program are located to monitor flow from accounting units (subregional drainage basins) which collectively encompass the entire land surface of the nation. Data collected at both networks include streamflow, concentrations of major inorganic constituents, nutrients, and trace metals. The goals of the two water quality sampling programs include the determination of mean constituent concentrations and transport rates as well as the analysis of long-term trends in those variables. This report presents a station-by-station statistical summary of data from the two networks for the period 1974 through 1981. (Author's abstract)

  5. A likelihood ratio anomaly detector for identifying within-perimeter computer network attacks

    DOE PAGES

    Grana, Justin; Wolpert, David; Neil, Joshua; ...

    2016-03-11

    The rapid detection of attackers within firewalls of enterprise computer networks is of paramount importance. Anomaly detectors address this problem by quantifying deviations from baseline statistical models of normal network behavior and signaling an intrusion when the observed data deviates significantly from the baseline model. But, many anomaly detectors do not take into account plausible attacker behavior. As a result, anomaly detectors are prone to a large number of false positives due to unusual but benign activity. Our paper first introduces a stochastic model of attacker behavior which is motivated by real world attacker traversal. Then, we develop a likelihood ratio detector that compares the probability of observed network behavior under normal conditions against the case when an attacker has possibly compromised a subset of hosts within the network. Since the likelihood ratio detector requires integrating over the time each host becomes compromised, we illustrate how to use Monte Carlo methods to compute the requisite integral. We then present Receiver Operating Characteristic (ROC) curves for various network parameterizations that show for any rate of true positives, the rate of false positives for the likelihood ratio detector is no higher than that of a simple anomaly detector and is often lower. Finally, we demonstrate the superiority of the proposed likelihood ratio detector when the network topologies and parameterizations are extracted from real-world networks.
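
    The core idea, comparing the likelihood of observed behavior under a baseline model against an attacker model with an unknown compromise time integrated out by Monte Carlo, can be illustrated on a toy single-stream version. Everything below is my own simplified construction (a single Poisson event stream rather than the paper's multi-host traversal model; all rates and counts are synthetic):

    ```python
    import math
    import random

    # H0: event counts per time step are Poisson(lam0).
    # H1: an attacker becomes active at an unknown step tau, raising the
    # rate to lam1 from tau onward. tau is integrated out by Monte Carlo.

    def poisson_logpmf(k, lam):
        return k * math.log(lam) - lam - math.lgamma(k + 1)

    def log_likelihood_ratio(counts, lam0, lam1, n_samples=2000, rng=random):
        T = len(counts)
        base = sum(poisson_logpmf(k, lam0) for k in counts)
        total = 0.0
        for _ in range(n_samples):
            tau = rng.randrange(T)  # tau ~ Uniform{0, ..., T-1}
            alt = sum(poisson_logpmf(k, lam1 if t >= tau else lam0)
                      for t, k in enumerate(counts))
            total += math.exp(alt - base)
        return math.log(total / n_samples)

    rng = random.Random(0)
    benign = [3, 4, 2, 5, 3, 4, 3, 2]
    attacked = [3, 4, 2, 12, 11, 13, 10, 12]
    print(log_likelihood_ratio(benign, 3.5, 11.0, rng=rng) <
          log_likelihood_ratio(attacked, 3.5, 11.0, rng=rng))  # True
    ```

    Thresholding this statistic, rather than a raw deviation score, is what lets the detector ignore unusual-but-benign bursts that fit neither attacker model, which is the source of the ROC improvement the abstract reports.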

  6. A likelihood ratio anomaly detector for identifying within-perimeter computer network attacks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grana, Justin; Wolpert, David; Neil, Joshua

    The rapid detection of attackers within firewalls of enterprise computer networks is of paramount importance. Anomaly detectors address this problem by quantifying deviations from baseline statistical models of normal network behavior and signaling an intrusion when the observed data deviates significantly from the baseline model. But, many anomaly detectors do not take into account plausible attacker behavior. As a result, anomaly detectors are prone to a large number of false positives due to unusual but benign activity. Our paper first introduces a stochastic model of attacker behavior which is motivated by real world attacker traversal. Then, we develop a likelihood ratio detector that compares the probability of observed network behavior under normal conditions against the case when an attacker has possibly compromised a subset of hosts within the network. Since the likelihood ratio detector requires integrating over the time each host becomes compromised, we illustrate how to use Monte Carlo methods to compute the requisite integral. We then present Receiver Operating Characteristic (ROC) curves for various network parameterizations that show for any rate of true positives, the rate of false positives for the likelihood ratio detector is no higher than that of a simple anomaly detector and is often lower. Finally, we demonstrate the superiority of the proposed likelihood ratio detector when the network topologies and parameterizations are extracted from real-world networks.

  7. Reflections on a Strategic Vision for Computer Network Operations

    DTIC Science & Technology

    2010-05-25

    either a traditional or an irregular war. It cannot include the disarmament or destruction of enemy forces or the occupation of its geographic territory...Washington, DC: Chairman of the Joint Chiefs of Staff, 15 August 2007), GL-7. 34 Mr. John Mense, Basic Computer Network Operations Planners Course

  8. High Performance Computing and Networking for Science--Background Paper.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. Office of Technology Assessment.

    The Office of Technology Assessment is conducting an assessment of the effects of new information technologies--including high performance computing, data networking, and mass data archiving--on research and development. This paper offers a view of the issues and their implications for current discussions about Federal supercomputer initiatives…

  9. Results of the First National Assessment of Computer Competence (The Printout).

    ERIC Educational Resources Information Center

    Balajthy, Ernest

    1988-01-01

    Discusses the findings of the National Assessment of Educational Progress 1985-86 survey of American students' computer competence, focusing on findings of interest to reading teachers who use computers. (MM)

  10. Primary Strategy Learning Networks: A Local Study of a National Initiative

    ERIC Educational Resources Information Center

    Moore, Tessa A.; Rutherford, Desmond

    2012-01-01

    The use of networks as a means of communicating knowledge and ideas and in promoting innovation among schools has emerged globally over the past decade. Currently, inter-school collaboration is not only at the fore nationally in England, but also has become integral to the school improvement agenda. However, networking theory is a disparate field…

  11. Computing the Local Field Potential (LFP) from Integrate-and-Fire Network Models.

    PubMed

    Mazzoni, Alberto; Lindén, Henrik; Cuntz, Hermann; Lansner, Anders; Panzeri, Stefano; Einevoll, Gaute T

    2015-12-01

    Leaky integrate-and-fire (LIF) network models are commonly used to study how the spiking dynamics of neural networks changes with stimuli, tasks or dynamic network states. However, neurophysiological studies in vivo often rather measure the mass activity of neuronal microcircuits with the local field potential (LFP). Given that LFPs are generated by spatially separated currents across the neuronal membrane, they cannot be computed directly from quantities defined in models of point-like LIF neurons. Here, we explore the best approximation for predicting the LFP based on standard output from point-neuron LIF networks. To search for this best "LFP proxy", we compared LFP predictions from candidate proxies based on LIF network output (e.g., firing rates, membrane potentials, synaptic currents) with "ground-truth" LFP obtained when the LIF network synaptic input currents were injected into an analogous three-dimensional (3D) network model of multi-compartmental neurons with realistic morphology, spatial distributions of somata and synapses. We found that a specific fixed linear combination of the LIF synaptic currents provided an accurate LFP proxy, accounting for most of the variance of the LFP time course observed in the 3D network for all recording locations. This proxy performed well over a broad set of conditions, including substantial variations of the neuronal morphologies. Our results provide a simple formula for estimating the time course of the LFP from LIF network simulations in cases where a single pyramidal population dominates the LFP generation, and thereby facilitate quantitative comparison between computational models and experimental LFP recordings in vivo.
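
    The "fixed linear combination" proxy described above amounts to weighting the population AMPA and GABA synaptic currents and offsetting one relative to the other in time. A sketch with placeholder coefficients and delay (the values below are illustrative assumptions, not the fitted values reported in the paper):

    ```python
    import numpy as np

    # Illustrative "weighted sum" LFP proxy: combine excitatory (AMPA) and
    # inhibitory (GABA) population synaptic currents with a relative delay.

    def lfp_proxy(ampa, gaba, alpha=1.0, beta=-1.65, delay_steps=6):
        """Estimate the LFP time course from 1-D current traces sampled at
        1 ms; the AMPA trace is delayed by `delay_steps` samples."""
        ampa_delayed = np.roll(ampa, delay_steps)
        ampa_delayed[:delay_steps] = ampa[0]   # pad the start of the trace
        return alpha * ampa_delayed + beta * gaba

    # Synthetic population currents: 500 ms of slowly oscillating drive.
    t = np.arange(0, 500)
    ampa = 1.0 + 0.5 * np.sin(2 * np.pi * t / 100.0)
    gaba = -0.8 - 0.4 * np.sin(2 * np.pi * (t - 4) / 100.0)
    lfp = lfp_proxy(ampa, gaba)
    print(lfp.shape)  # (500,)
    ```

    In a real application, `ampa` and `gaba` would be the summed synaptic currents recorded from the LIF simulation, and the coefficients and delay would be taken from the fit against the ground-truth 3D model.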

  12. Computing the Local Field Potential (LFP) from Integrate-and-Fire Network Models

    PubMed Central

    Mazzoni, Alberto; Lindén, Henrik; Cuntz, Hermann; Lansner, Anders; Panzeri, Stefano; Einevoll, Gaute T.

    2015-01-01

    Leaky integrate-and-fire (LIF) network models are commonly used to study how the spiking dynamics of neural networks changes with stimuli, tasks or dynamic network states. However, neurophysiological studies in vivo often rather measure the mass activity of neuronal microcircuits with the local field potential (LFP). Given that LFPs are generated by spatially separated currents across the neuronal membrane, they cannot be computed directly from quantities defined in models of point-like LIF neurons. Here, we explore the best approximation for predicting the LFP based on standard output from point-neuron LIF networks. To search for this best “LFP proxy”, we compared LFP predictions from candidate proxies based on LIF network output (e.g., firing rates, membrane potentials, synaptic currents) with “ground-truth” LFP obtained when the LIF network synaptic input currents were injected into an analogous three-dimensional (3D) network model of multi-compartmental neurons with realistic morphology, spatial distributions of somata and synapses. We found that a specific fixed linear combination of the LIF synaptic currents provided an accurate LFP proxy, accounting for most of the variance of the LFP time course observed in the 3D network for all recording locations. This proxy performed well over a broad set of conditions, including substantial variations of the neuronal morphologies. Our results provide a simple formula for estimating the time course of the LFP from LIF network simulations in cases where a single pyramidal population dominates the LFP generation, and thereby facilitate quantitative comparison between computational models and experimental LFP recordings in vivo. PMID:26657024

  13. The computational core and fixed point organization in Boolean networks

    NASA Astrophysics Data System (ADS)

    Correale, L.; Leone, M.; Pagnani, A.; Weigt, M.; Zecchina, R.

    2006-03-01

    In this paper, we analyse large random Boolean networks in terms of a constraint satisfaction problem. We first develop an algorithmic scheme which allows us to prune simple logical cascades and underdetermined variables, returning thereby the computational core of the network. Second, we apply the cavity method to analyse the number and organization of fixed points. We find in particular a phase transition between an easy and a complex regulatory phase, the latter being characterized by the existence of an exponential number of macroscopically separated fixed point clusters. The different techniques developed are reinterpreted as algorithms for the analysis of single Boolean networks, and they are applied in the analysis of and in silico experiments on the gene regulatory networks of baker's yeast (Saccharomyces cerevisiae) and the segment-polarity genes of the fruitfly Drosophila melanogaster.
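    A minimal sketch of the pruning idea: iteratively remove variables that regulate nothing, so that only nodes embedded in feedback loops (the computational core) survive. This is a rough leaf-removal illustration, not the paper's full scheme, which also prunes underdetermined variables; the data layout is assumed.

```python
def computational_core(inputs):
    """Approximate the computational core of a Boolean network by leaf removal.

    inputs: dict mapping each node to the set of nodes that regulate it.
    Repeatedly deletes nodes that regulate no remaining node; what survives
    is the feedback-bearing part of the network.
    """
    core = dict(inputs)
    changed = True
    while changed:
        changed = False
        # Nodes that appear as a regulator of some remaining node.
        regulators = set().union(*core.values()) if core else set()
        for node in [n for n in core if n not in regulators]:
            del core[node]  # a leaf: nothing downstream depends on it
            changed = True
    return set(core)
```

    A pure cascade prunes away entirely, while a feedback loop survives, matching the intuition that fixed-point complexity lives in the recurrent part of the network.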

  14. NATIONAL CROP LOSS ASSESSMENT NETWORK (NCLAN) 1982 ANNUAL REPORT

    EPA Science Inventory

    The National Crop Loss Assessment Network (NCLAN) is a group of organizations cooperating in research to assess the short- and long-term economic impact of air pollution on crop production. The primary objectives are (1) to define relationships between yield of major agricultural...

  15. A modular architecture for transparent computation in recurrent neural networks.

    PubMed

    Carmantini, Giovanni S; Beim Graben, Peter; Desroches, Mathieu; Rodrigues, Serafim

    2017-01-01

    Computation is classically studied in terms of automata, formal languages and algorithms; yet, the relation between neural dynamics and symbolic representations and operations is still unclear in traditional eliminative connectionism. Therefore, we suggest a unique perspective on this central issue, which we refer to as transparent connectionism, by proposing accounts of how symbolic computation can be implemented in neural substrates. In this study we first introduce a new model of dynamics on a symbolic space, the versatile shift, showing that it supports the real-time simulation of a range of automata. We then show that the Gödelization of versatile shifts defines nonlinear dynamical automata, dynamical systems evolving on a vectorial space. Finally, we present a mapping between nonlinear dynamical automata and recurrent artificial neural networks. The mapping defines an architecture characterized by its granular modularity, where data, symbolic operations and their control are not only distinguishable in activation space, but also spatially localizable in the network itself, while maintaining a distributed encoding of symbolic representations. The resulting networks simulate automata in real time and are programmed directly, without network training. To discuss the unique characteristics of the architecture and their consequences, we present two examples: (i) the design of a Central Pattern Generator from a finite-state locomotive controller, and (ii) the creation of a network simulating a system of interactive automata that supports the parsing of garden-path sentences as investigated in psycholinguistics experiments. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Student and Instructor Perceptions of the Usefulness of Computer-Based Microworlds in Supporting the Teaching and Assessment of Computer Networking Skills: An Exploratory Study

    ERIC Educational Resources Information Center

    Dabbagh, Nada; Beattie, Mark

    2010-01-01

    Skill shortages in the area of computer network troubleshooting are becoming increasingly acute. According to research sponsored by Cisco's Learning Institute, the demand for professionals with computer networking skills in the United States and Canada will outpace the supply of workers with those skills by an average of eight percent per year…

  17. Low-altitude photographic transects of the Arctic network of national park units and Selawik National Wildlife Refuge, Alaska, July 2013

    Treesearch

    Bruce G. Marcot; M. Torre Jorgenson; Anthony R. DeGange

    2014-01-01

    During July 16–18, 2013, low-level photography flights were conducted (with a Cessna 185 with floats and a Cessna 206 with tundra tires) over the five administrative units of the National Park Service Arctic Network (Bering Land Bridge National Preserve, Cape Krusenstern National Monument, Gates of the Arctic National Park and Preserve, Kobuk Valley National Park, and...

  18. A network-based multi-target computational estimation scheme for anticoagulant activities of compounds.

    PubMed

    Li, Qian; Li, Xudong; Li, Canghai; Chen, Lirong; Song, Jun; Tang, Yalin; Xu, Xiaojie

    2011-03-22

    Traditional virtual screening pays more attention to the predicted binding affinity between a drug molecule and a target related to a certain disease than to phenotypic data of the drug molecule against the disease system, and is therefore often less effective for discovering drugs to treat many types of complex diseases. Virtual screening against a complex disease by general network estimation has become feasible with the development of network biology and systems biology. More effective methods for computationally estimating the whole efficacy of a compound in a complex disease system are needed, given the distinct weight of each target in a biological process and the view that partial inhibition of several targets can be more efficient than the complete inhibition of a single target. We developed a novel approach by integrating the affinity predictions from multi-target docking studies with biological network efficiency analysis to estimate the anticoagulant activities of compounds. From the results of network efficiency calculation for the human clotting cascade, factor Xa and thrombin were identified as the two most fragile enzymes, while the catalytic reaction mediated by complex IXa:VIIIa and the formation of the complex VIIIa:IXa were recognized as the two most fragile biological processes in the human clotting cascade system. Furthermore, the method combining network efficiency with molecular docking scores was applied to estimate the anticoagulant activities of a series of argatroban intermediates and eight natural products, respectively. The good correlation (r = 0.671) between the experimental data and the decrease of network efficiency suggests that the approach could be a promising computational systems biology tool to aid identification of anticoagulant activities of compounds in drug discovery. This article proposes a network-based multi-target computational estimation method for anticoagulant activities of compounds by
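    Network efficiency in this context is commonly computed as the mean inverse shortest-path length over node pairs (global efficiency); node fragility can then be scored by the efficiency drop on removing that node. A minimal sketch for an unweighted graph, not the paper's weighted clotting-cascade model:

```python
from collections import deque

def global_efficiency(adj):
    """Global efficiency of an unweighted graph: mean of 1/d(s, t) over all
    ordered node pairs, with unreachable pairs contributing zero.
    adj: dict mapping each node to the set of its neighbors.
    """
    nodes = list(adj)
    total, pairs = 0.0, 0
    for s in nodes:
        # Breadth-first search gives shortest path lengths from s.
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for t in nodes:
            if t == s:
                continue
            pairs += 1
            if t in dist:
                total += 1.0 / dist[t]
    return total / pairs if pairs else 0.0
```

    Ranking nodes by the efficiency decrease when each is deleted is one simple way to identify "fragile" elements such as the enzymes highlighted above.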

  19. A Network-Based Multi-Target Computational Estimation Scheme for Anticoagulant Activities of Compounds

    PubMed Central

    Li, Canghai; Chen, Lirong; Song, Jun; Tang, Yalin; Xu, Xiaojie

    2011-01-01

    Background Traditional virtual screening pays more attention to the predicted binding affinity between a drug molecule and a target related to a certain disease than to phenotypic data of the drug molecule against the disease system, and is therefore often less effective for discovering drugs to treat many types of complex diseases. Virtual screening against a complex disease by general network estimation has become feasible with the development of network biology and systems biology. More effective methods for computationally estimating the whole efficacy of a compound in a complex disease system are needed, given the distinct weight of each target in a biological process and the view that partial inhibition of several targets can be more efficient than the complete inhibition of a single target. Methodology We developed a novel approach by integrating the affinity predictions from multi-target docking studies with biological network efficiency analysis to estimate the anticoagulant activities of compounds. From the results of network efficiency calculation for the human clotting cascade, factor Xa and thrombin were identified as the two most fragile enzymes, while the catalytic reaction mediated by complex IXa:VIIIa and the formation of the complex VIIIa:IXa were recognized as the two most fragile biological processes in the human clotting cascade system. Furthermore, the method combining network efficiency with molecular docking scores was applied to estimate the anticoagulant activities of a series of argatroban intermediates and eight natural products, respectively. The good correlation (r = 0.671) between the experimental data and the decrease of network efficiency suggests that the approach could be a promising computational systems biology tool to aid identification of anticoagulant activities of compounds in drug discovery. Conclusions This article proposes a network-based multi-target computational estimation method for

  20. What Presidents Need To Know about the Impact of Networking.

    ERIC Educational Resources Information Center

    Leadership Abstracts, 1993

    1993-01-01

    Many colleges and universities are undergoing cultural changes as a result of extensive voice, data, and video networking. Local area networks link large portions of most campuses, and national networks have evolved from specialized services for researchers in computer-related disciplines to general utilities on many campuses. Campuswide systems…

  1. Ionosphere Threat Model Investigations by Using Turkish National Permanent GPS Network

    NASA Astrophysics Data System (ADS)

    Köroğlu, Meltem; Arikan, Feza; Koroglu, Ozan

    2016-07-01

    Global Positioning System (GPS) signal reliability may decrease significantly due to the variable electron density structure of the ionosphere. In the literature, an ionospheric disturbance is modeled as a linear semi-definite wave which has a width, a gradient and a constant velocity. To provide precise positioning, Ground Based Augmentation Systems (GBAS) are used. A GBAS collects measurements from the GPS network receivers and computes an integrity level by comparing the network receivers' measurements with the threat models of the ionosphere. Threat models are computed according to ionosphere gradient characteristics. The gradient is defined as the difference of slant delays between the receivers. Slant delays are estimated from the STEC (Slant Total Electron Content) values of the ionosphere, given by the line integral of the electron density between the receiver and the GPS satellite. STEC can be estimated from Global Navigation Satellite System (GNSS) signals by using the IONOLAB-STEC and IONOLAB-BIAS algorithms. Since most ionospheric disturbances are observed locally, threat models for GBAS systems must be extracted locally. In this study, an automated ionosphere gradient estimation algorithm was developed using Turkish National Permanent GPS Network (TNPGN-Active) data for the year 2011. The GPS receivers are grouped within a 150 km radius. For each region, each day and each satellite, all STEC values are estimated by using the IONOLAB-STEC and IONOLAB-BIAS software (www.ionolab.org). In the gradient estimation, the station-pair method is used. Statistical properties of the valid gradients are extracted as tables for each region, day and satellite. By observing the histograms of the maximum gradients and the standard deviations of the gradients with respect to the elevation angle for each day, anomalies and disturbances of the ionosphere can be detected. It is observed that maximum gradient estimates are less than 40 mm/km and maximum standard
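    The station-pair gradient described above amounts to differencing the slant delays of two receivers and normalizing by the baseline length. A sketch assuming the nominal GPS L1 conversion of about 0.162 m of delay per TEC unit; the function name and example values are illustrative.

```python
def ionosphere_gradient(stec_a, stec_b, baseline_km, m_per_tecu=0.162):
    """Station-pair ionospheric gradient in mm/km.

    stec_a, stec_b : slant TEC seen by two receivers toward the same
    satellite (TECU). baseline_km : distance between the receivers.
    0.162 m/TECU is the nominal GPS L1 delay per TEC unit (40.3 * TEC / f^2).
    """
    delay_a_mm = stec_a * m_per_tecu * 1000.0
    delay_b_mm = stec_b * m_per_tecu * 1000.0
    return abs(delay_a_mm - delay_b_mm) / baseline_km
```

    For example, a 10 TECU difference over a 100 km baseline yields 16.2 mm/km, well below the 40 mm/km bound reported above.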

  2. SCinet Architecture: Featured at the International Conference for High Performance Computing, Networking, Storage and Analysis 2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyonnais, Marc; Smith, Matt; Mace, Kate P.

    SCinet is the purpose-built network that operates during the International Conference for High Performance Computing, Networking, Storage and Analysis (Super Computing or SC). Created each year for the conference, SCinet brings to life a high-capacity network that supports applications and experiments that are a hallmark of the SC conference. The network links the convention center to research and commercial networks around the world. This resource serves as a platform for exhibitors to demonstrate the advanced computing resources of their home institutions and elsewhere by supporting a wide variety of applications. Volunteers from academia, government and industry work together to design and deliver the SCinet infrastructure. Industry vendors and carriers donate millions of dollars in equipment and services needed to build and support the local and wide area networks. Planning begins more than a year in advance of each SC conference and culminates in a high intensity installation in the days leading up to the conference. The SCinet architecture for SC16 illustrates a dramatic increase in participation from the vendor community, particularly those that focus on network equipment. Software-Defined Networking (SDN) and Data Center Networking (DCN) are present in nearly all aspects of the design.

  3. Trusted Network Interpretation of the Trusted Computer System Evaluation Criteria. Version 1.

    DTIC Science & Technology

    1987-07-01

    ...for Secure Computer Systems, MTR-3153, The MITRE Corporation, Bedford, MA, June 1975. See, for example, M. D. Abrams and H. J. Podell, Tutorial: Computer and Network Security, IEEE Computer Society Press, 1987. Addendum to the...

  4. Human errors and violations in computer and information security: the viewpoint of network administrators and security specialists.

    PubMed

    Kraemer, Sara; Carayon, Pascale

    2007-03-01

    This paper describes human errors and violations of end users and network administrators in computer and information security. This information is summarized in a conceptual framework for examining the human and organizational factors contributing to computer and information security. This framework includes human error taxonomies to describe the work conditions that contribute adversely to computer and information security, i.e. to security vulnerabilities and breaches. The issue of human error and violation in computer and information security was explored through a series of 16 interviews with network administrators and security specialists. The interviews were audio taped, transcribed, and analyzed by coding specific themes in a node structure. The result is an expanded framework that classifies types of human error and identifies specific human and organizational factors that contribute to computer and information security. Network administrators tended to view errors created by end users as more intentional than unintentional, while viewing errors created by network administrators as more unintentional than intentional. Organizational factors, such as communication, security culture, policy, and organizational structure, were the most frequently cited factors associated with computer and information security.

  5. Fuzzy logic, neural networks, and soft computing

    NASA Technical Reports Server (NTRS)

    Zadeh, Lofti A.

    1994-01-01

    The past few years have witnessed a rapid growth of interest in a cluster of modes of modeling and computation which may be described collectively as soft computing. The distinguishing characteristic of soft computing is that its primary aims are to achieve tractability, robustness, low cost, and high MIQ (machine intelligence quotient) through an exploitation of the tolerance for imprecision and uncertainty. Thus, in soft computing what is usually sought is an approximate solution to a precisely formulated problem or, more typically, an approximate solution to an imprecisely formulated problem. A simple case in point is the problem of parking a car. Generally, humans can park a car rather easily because the final position of the car is not specified exactly. If it were specified to within, say, a few millimeters and a fraction of a degree, it would take hours or days of maneuvering and precise measurements of distance and angular position to solve the problem. What this simple example points to is the fact that, in general, high precision carries a high cost. The challenge, then, is to exploit the tolerance for imprecision by devising methods of computation which lead to an acceptable solution at low cost. By its nature, soft computing is much closer to human reasoning than the traditional modes of computation. At this juncture, the major components of soft computing are fuzzy logic (FL), neural network theory (NN), and probabilistic reasoning techniques (PR), including genetic algorithms, chaos theory, and part of learning theory. Increasingly, these techniques are used in combination to achieve significant improvement in performance and adaptability. Among the important application areas for soft computing are control systems, expert systems, data compression techniques, image processing, and decision support systems. It may be argued that it is soft computing, rather than the traditional hard computing, that should be viewed as the foundation for artificial

  6. Reconfigurable modular computer networks for spacecraft on-board processing

    NASA Technical Reports Server (NTRS)

    Rennels, D. A.

    1978-01-01

    The core electronics subsystems on unmanned spacecraft, which have been sent over the last 20 years to investigate the moon, Mars, Venus, and Mercury, have progressed through an evolution from simple fixed controllers and analog computers in the 1960's to general-purpose digital computers in current designs. This evolution is now moving in the direction of distributed computer networks. Current Voyager spacecraft already use three on-board computers. One is used to store commands and provide overall spacecraft management. Another is used for instrument control and telemetry collection, and the third computer is used for attitude control and scientific instrument pointing. An examination of the control logic in the instruments shows that, for many, it is cost-effective to replace the sequencing logic with a microcomputer. The Unified Data System architecture considered consists of a set of standard microcomputers connected by several redundant buses. A typical self-checking computer module will contain 23 RAMs, two microprocessors, one memory interface, three bus interfaces, and one core building block.

  7. Landbird Monitoring Protocol for National Parks in the North Coast and Cascades Network

    USGS Publications Warehouse

    Siegel, Rodney B.; Wilkerson, Robert L.; Jenkins, Kurt J.; Kuntz, Robert C.; Boetsch, John R.; Schaberl, James P.; Happe, Patricia J.

    2007-01-01

    This protocol narrative outlines the rationale, sampling design and methods for monitoring landbirds in the North Coast and Cascades Network (NCCN) during the breeding season. The NCCN, one of 32 networks of parks in the National Park System, comprises seven national park units in the Pacific Northwest, including three large, mountainous, natural area parks (Mount Rainier [MORA] and Olympic [OLYM] National Parks, North Cascades National Park Service Complex [NOCA]), and four small historic cultural parks (Ebey's Landing National Historical Reserve [EBLA], Lewis and Clark National Historical Park [LEWI], Fort Vancouver National Historical Park [FOVA], and San Juan Island National Historical Park [SAJH]). The protocol reflects decisions made by the NCCN avian monitoring group, which includes NPS representatives from each of the large parks in the Network as well as personnel from the U.S. Geological Survey Forest and Rangeland Ecosystem Science Center (USGS-FRESC) Olympic Field Station, and The Institute for Bird Populations, at meetings held between 2000 (Siegel and Kuntz, 2000) and 2005. The protocol narrative describes the monitoring program in relatively broad terms, and its structure and content adhere to the outline and recommendations developed by Oakley and others (2003) and adopted by NPS. Finer details of the methodology are addressed in a set of standard operating procedures (SOPs) that accompany the protocol narrative. We also provide appendixes containing additional supporting materials that do not clearly belong in either the protocol narrative or the standard operating procedures.

  8. Policy Issues in Computer Networks: Multi-Access Information Systems.

    ERIC Educational Resources Information Center

    Lyons, Patrice A.

    As computer databases become more publicly accessible through public networks, there is a growing need to provide effective protection for proprietary information. Without adequate assurances that their works will be protected, authors and other copyright owners may be reluctant to allow the full text of their works to be accessed through computer…

  9. IP Addressing: Problem-Based Learning Approach on Computer Networks

    ERIC Educational Resources Information Center

    Jevremovic, Aleksandar; Shimic, Goran; Veinovic, Mladen; Ristic, Nenad

    2017-01-01

    The case study presented in this paper describes the pedagogical aspects and experience gathered while using an e-learning tool named IPA-PBL. Its main purpose is to provide additional motivation for adopting theoretical principles and procedures in a computer networks course. In the proposed model, the sequencing of activities of the learning…
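    The subnetting arithmetic such a course exercises can be checked with Python's standard ipaddress module; the addresses below are arbitrary examples, not taken from the study.

```python
import ipaddress

# Split a /24 into four equal /26 subnets and inspect one of them.
net = ipaddress.ip_network("192.168.10.0/24")
subnets = list(net.subnets(new_prefix=26))

first = subnets[0]
print(len(subnets))          # 4 subnets
print(first.netmask)         # 255.255.255.192
print(first.num_addresses)   # 64 addresses per /26
# Host membership: 192.168.10.70 falls in the second /26 (.64-.127).
print(ipaddress.ip_address("192.168.10.70") in subnets[1])  # True
```

    Tools like this make it easy to generate and verify the kind of addressing exercises a problem-based course is built around.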

  10. Information Networks and Education: An Analytic Bibliography.

    ERIC Educational Resources Information Center

    Pritchard, Roger

    This literature review presents a broad and overall perspective on the various kinds of information networks that will be useful to educators in developing nations. There are five sections to the essay. The first section cites and briefly describes the literature dealing with library, information, and computer networks. Sections two and three…

  11. Computation and Communication Evaluation of an Authentication Mechanism for Time-Triggered Networked Control Systems.

    PubMed

    Martins, Goncalo; Moondra, Arul; Dubey, Abhishek; Bhattacharjee, Anirban; Koutsoukos, Xenofon D

    2016-07-25

    In modern networked control applications, confidentiality and integrity are important features to address in order to protect against attacks. Moreover, networked control systems are a fundamental part of the communication components of current cyber-physical systems (e.g., automotive communications). Many networked control systems employ Time-Triggered (TT) architectures that provide mechanisms enabling the exchange of precise and synchronous messages. TT systems have computation and communication constraints, and to enable secure communications in the network, it is important to evaluate the computational and communication overhead of implementing secure communication mechanisms. This paper presents a comprehensive analysis and evaluation of the effects of adding a Hash-based Message Authentication Code (HMAC) to TT networked control systems. The contributions of the paper include (1) the analysis and experimental validation of the communication overhead, as well as a scalability analysis that utilizes the experimental results for both wired and wireless platforms, and (2) an experimental evaluation of the computational overhead of HMAC based on a kernel-level Linux implementation. An automotive application is used as an example, and the results show that it is feasible to implement a secure communication mechanism without interfering with the existing automotive controller execution times. The methods and results of the paper can be used for evaluating the performance impact of security mechanisms and, thus, for the design of secure wired and wireless TT networked control systems.
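    A minimal sketch of the mechanism evaluated here: computing and verifying an HMAC tag over a message payload with Python's standard hmac module. The key, payload, and truncated tag length are illustrative choices, not the paper's configuration.

```python
import hmac
import hashlib

def authenticate(key: bytes, payload: bytes, tag_len: int = 16) -> bytes:
    """Append a (truncated) HMAC-SHA256 tag to a message payload.
    The tag adds tag_len bytes of communication overhead per frame."""
    tag = hmac.new(key, payload, hashlib.sha256).digest()[:tag_len]
    return payload + tag

def verify(key: bytes, frame: bytes, tag_len: int = 16) -> bool:
    """Recompute the tag and compare in constant time."""
    payload, tag = frame[:-tag_len], frame[-tag_len:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()[:tag_len]
    return hmac.compare_digest(tag, expected)
```

    The fixed per-frame tag size is what makes the communication overhead easy to budget in a time-triggered schedule; the computational cost is one hash invocation per send and per receive.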

  12. Container-code recognition system based on computer vision and deep neural networks

    NASA Astrophysics Data System (ADS)

    Liu, Yi; Li, Tianjian; Jiang, Li; Liang, Xiaoyao

    2018-04-01

    Automatic container-code recognition has become a crucial requirement for the ship transportation industry in recent years. In this paper, an automatic container-code recognition system based on computer vision and deep neural networks is proposed. The system consists of two modules: a detection module and a recognition module. The detection module applies both classical computer vision algorithms and neural networks, and generates a better detection result by combining the two to avoid the drawbacks of either method alone. The combined detection results are also collected for online training of the neural networks. The recognition module exploits both character segmentation and end-to-end recognition, and outputs the recognition result which passes verification. When the recognition module generates a false recognition, the result is corrected and collected for online training of the end-to-end recognition sub-module. By combining several algorithms, the system is able to deal with more situations, and the online training mechanism can improve the performance of the neural networks at runtime. The proposed system achieves 93% overall recognition accuracy.

  13. Instrumentation for Scientific Computing in Neural Networks, Information Science, Artificial Intelligence, and Applied Mathematics.

    DTIC Science & Technology

    1987-10-01

    ...(include Security Classification) Instrumentation for Scientific Computing in Neural Networks, Information Science, Artificial Intelligence, and Applied Mathematics... instrumentation grant to purchase equipment for support of research in neural networks, information science, artificial intelligence, and applied mathematics... in Neural Networks, Information Science, Artificial Intelligence, and Applied Mathematics. Contract AFOSR 86-0282. Principal Investigator: Stephen

  14. The Erector Set Computer: Building a Virtual Workstation over a Large Multi-Vendor Network.

    ERIC Educational Resources Information Center

    Farago, John M.

    1989-01-01

    Describes a computer network developed at the City University of New York Law School that uses device sharing and local area networking to create a simulated law office. Topics discussed include working within a multi-vendor environment, and the communication, information, and database access services available through the network. (CLB)

  15. Computational exploration of neuron and neural network models in neurobiology.

    PubMed

    Prinz, Astrid A

    2007-01-01

    The electrical activity of individual neurons and neuronal networks is shaped by the complex interplay of a large number of non-linear processes, including the voltage-dependent gating of ion channels and the activation of synaptic receptors. These complex dynamics make it difficult to understand how individual neuron or network parameters, such as the number of ion channels of a given type in a neuron's membrane or the strength of a particular synapse, influence neural system function. Systematic exploration of cellular or network model parameter spaces by computational brute force can overcome this difficulty and generate comprehensive data sets that contain information about neuron or network behavior for many different combinations of parameters. Searching such data sets for parameter combinations that produce functional neuron or network output provides insights into how narrowly different neural system parameters have to be tuned to produce a desired behavior. This chapter describes the construction and analysis of databases of neuron or neuronal network models and describes some of the advantages and downsides of such exploration methods.

  16. Computer-assisted cervical cancer screening using neural networks.

    PubMed

    Mango, L J

    1994-03-15

    A practical and effective system for the computer-assisted screening of conventionally prepared cervical smears is presented and described. Recent developments in neural network technology have made computerized analysis of the complex cellular scenes found on Pap smears possible. The PAPNET Cytological Screening System uses neural networks to automatically analyze conventional smears by locating and recognizing potentially abnormal cells. It then displays images of these objects for review and final diagnosis by qualified cytologists. The results of the studies presented indicate that the PAPNET system could be a useful tool for both the screening and rescreening of cervical smears. In addition, the system has been shown to be sensitive to some types of abnormalities which have gone undetected during manual screening.

  17. A Three-Dimensional Computational Model of Collagen Network Mechanics

    PubMed Central

    Lee, Byoungkoo; Zhou, Xin; Riching, Kristin; Eliceiri, Kevin W.; Keely, Patricia J.; Guelcher, Scott A.; Weaver, Alissa M.; Jiang, Yi

    2014-01-01

    Extracellular matrix (ECM) strongly influences cellular behaviors, including cell proliferation, adhesion, and particularly migration. In cancer, the rigidity of the stromal collagen environment is thought to control tumor aggressiveness, and collagen alignment has been linked to tumor cell invasion. While the mechanical properties of collagen at both the single fiber scale and the bulk gel scale are quite well studied, how the fiber network responds to local stress or deformation, both structurally and mechanically, is poorly understood. This intermediate scale knowledge is important to understanding cell-ECM interactions and is the focus of this study. We have developed a three-dimensional elastic collagen fiber network model (bead-and-spring model) and studied fiber network behaviors for various biophysical conditions: collagen density, crosslinker strength, crosslinker density, and fiber orientation (random vs. prealigned). We found the best-fit crosslinker parameter values using shear simulation tests in a small strain region. Using this calibrated collagen model, we simulated both shear and tensile tests in a large linear strain region for different network geometry conditions. The results suggest that network geometry is a key determinant of the mechanical properties of the fiber network. We further demonstrated how the fiber network structure and mechanics evolves with a local deformation, mimicking the effect of pulling by a pseudopod during cell migration. Our computational fiber network model is a step toward a full biomechanical model of cellular behaviors in various ECM conditions. PMID:25386649

  18. Some queuing network models of computer systems

    NASA Technical Reports Server (NTRS)

    Herndon, E. S.

    1980-01-01

    Queuing network models of a computer system operating with a single workload type are presented. Program algorithms are adapted for use on the Texas Instruments SR-52 programmable calculator. By slightly altering the algorithm to process the G and H matrices row by row instead of column by column, six devices and an unlimited job/terminal population could be handled on the SR-52. Techniques are also introduced for handling a simple load dependent server and for studying interactive systems with fixed multiprogramming limits.
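    Closed queuing network models of this kind are often solved with exact Mean Value Analysis (MVA); the sketch below assumes load-independent servers and a single workload class, and is not the SR-52 G/H-matrix algorithm described in the abstract.

```python
def mva(service_demands, n_jobs):
    """Exact Mean Value Analysis for a closed product-form queuing network
    with load-independent (fixed-rate) servers and one workload class.

    service_demands[k] : total service demand of a job at device k (seconds).
    Returns (throughput, residence_times, queue_lengths) at population
    n_jobs. Assumes n_jobs >= 1.
    """
    q = [0.0] * len(service_demands)  # mean queue length at each device
    for n in range(1, n_jobs + 1):
        # Residence time: service demand inflated by the queue seen on arrival.
        r = [d * (1.0 + qk) for d, qk in zip(service_demands, q)]
        x = n / sum(r)                 # system throughput (jobs/second)
        q = [x * rk for rk in r]       # Little's law applied per device
    return x, r, q
```

    The recursion over population size is what made such models tractable even on a programmable calculator: each step needs only the queue lengths from the previous population.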

  19. 78 FR 10249 - Establishment of the National Freight Network

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-13

    ... DEPARTMENT OF TRANSPORTATION Federal Highway Administration Establishment of the National Freight Network Correction In notice document 2013-02580 appearing on pages 8686-8689, in the issue of Wednesday, February 6, 2013, make the following correction: In the Table appearing on page 8687, in the third column...

  20. Computer-Based National Information Systems. Technology and Public Policy Issues.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. Office of Technology Assessment.

    A general introduction to computer based national information systems, and the context and basis for future studies are provided in this report. Chapter One, the introduction, summarizes computers and information systems and their relation to society, the structure of information policy issues, and public policy issues. Chapter Two describes the…

  1. The national public's values and interests related to the Arctic National Wildlife Refuge: A computer content analysis

    Treesearch

    David N. Bengston; David P. Fan; Roger Kaye

    2010-01-01

    This study examined the national public's values and interests related to the Arctic National Wildlife Refuge. Computer content analysis was used to analyze more than 23,000 media stories about the refuge from 1995 through 2007. Ten main categories of Arctic National Wildlife Refuge values and interests emerged from the analysis, reflecting a diversity of values,...

  2. Toward implementation of a national ground water monitoring network

    USGS Publications Warehouse

    Schreiber, Robert P.; Cunningham, William L.; Copeland, Rick; Frederick, Kevin D.

    2008-01-01

    The Federal Advisory Committee on Water Information's (ACWI) Subcommittee on Ground Water (SOGW) has been working steadily to develop and encourage implementation of a nationwide, long-term ground-water quantity and quality monitoring framework. Significant progress includes the planned submission this fall of a draft framework document to the full committee. The document will include recommendations for implementation of the network and continued acknowledgment at the federal and state level of ACWI's potential role in national monitoring toward an improved assessment of the nation's water reserves. The SOGW mission includes addressing several issues regarding network design, as well as developing plans for concept testing, evaluation of costs and benefits, and encouraging the movement from pilot-test results to full-scale implementation within a reasonable time period. With the recent attention to water resource sustainability driven by severe droughts, concerns over global warming effects, and persistent water supply problems, the SOGW mission is now even more critical.

  3. Building Capacity: The National Network for Ocean and Climate Change Interpretation

    NASA Astrophysics Data System (ADS)

    Spitzer, W.

    2014-12-01

    In the US, more than 1,500 informal science venues (science centers, museums, aquariums, zoos, nature centers, national parks) are visited annually by 61% of the population. Research shows that these visitors are receptive to learning about climate change, and expect these institutions to provide reliable information about environmental issues and solutions. These informal science venues play a critical role in shaping public understanding. Since 2007, the New England Aquarium has led a national effort to increase the capacity of informal science venues to effectively communicate about climate change. We are now leading the NSF-funded National Network for Ocean and Climate Change Interpretation (NNOCCI), partnering with the Association of Zoos and Aquariums, FrameWorks Institute, Woods Hole Oceanographic Institution, Monterey Bay Aquarium, and National Aquarium, with evaluation conducted by the New Knowledge Organization, Pennsylvania State University, and Ohio State University. After two years of project implementation, key findings include: 1. Importance of adaptive management - We continue to make ongoing changes in training format, content, and roles of facilitators and participants. 2. Impacts on interpreters - We have multiple lines of evidence for changes in knowledge, skills, attitudes, and behaviors. 3. Social radiation - Trained interpreters have a significant influence on their friends, family and colleagues. 4. Visitor impacts - Exposure to "strategically framed" interpretation does change visitors' perceptions about climate change. 5. Community of practice - We are seeing evidence of growing participation, leadership, and sustainability. 6. Diffusion of innovation - Peer networks are facilitating dissemination throughout the informal science education community. Over the next five years, NNOCCI will achieve a systemic national impact across the ISE community, embed its work within multiple ongoing regional and national climate change education

  4. Informatic parcellation of the network involved in the computation of subjective value

    PubMed Central

    Rangel, Antonio

    2014-01-01

    Understanding how the brain computes value is a basic question in neuroscience. Although individual studies have driven this progress, meta-analyses provide an opportunity to test hypotheses that require large collections of data. We carry out a meta-analysis of a large set of functional magnetic resonance imaging studies of value computation to address several key questions. First, what is the full set of brain areas that reliably correlate with stimulus values when they need to be computed? Second, is this set of areas organized into dissociable functional networks? Third, is a distinct network of regions involved in the computation of stimulus values at decision and outcome? Finally, are different brain areas involved in the computation of stimulus values for different reward modalities? Our results demonstrate the centrality of ventromedial prefrontal cortex (VMPFC), ventral striatum and posterior cingulate cortex (PCC) in the computation of value across tasks, reward modalities and stages of the decision-making process. We also find evidence of distinct subnetworks of co-activation within VMPFC, one involving central VMPFC and dorsal PCC and another involving more anterior VMPFC, left angular gyrus and ventral PCC. Finally, we identify a posterior-to-anterior gradient of value representations corresponding to concrete-to-abstract rewards. PMID:23887811

  5. Local and Long Distance Computer Networking for Science Classrooms. Technical Report No. 43.

    ERIC Educational Resources Information Center

    Newman, Denis

    This report describes Earth Lab, a project which is demonstrating new ways of using computers for upper-elementary and middle-school science instruction, and finding ways to integrate local-area and telecommunications networks. The discussion covers software, classroom activities, formative research on communications networks, and integration of…

  6. External quality assurance project report for the National Atmospheric Deposition Program’s National Trends Network and Mercury Deposition Network, 2015–16

    USGS Publications Warehouse

    Wetherbee, Gregory A.; Martin, RoseAnn

    2018-06-29

    The U.S. Geological Survey Precipitation Chemistry Quality Assurance project operated five distinct programs to provide external quality assurance monitoring for the National Atmospheric Deposition Program’s (NADP) National Trends Network and Mercury Deposition Network during 2015–16. The National Trends Network programs include (1) a field audit program to evaluate sample contamination and stability, (2) an interlaboratory comparison program to evaluate analytical laboratory performance, and (3) a colocated sampler program to evaluate bias and variability attributed to automated precipitation samplers. The Mercury Deposition Network programs include the (4) system blank program and (5) an interlaboratory comparison program. The results indicate that NADP data continue to be of sufficient quality for the analysis of spatial distributions and time trends for chemical constituents in wet deposition.The field audit program results indicate increased sample contamination for calcium, magnesium, and potassium relative to 2010 levels, and slight fluctuation in sodium contamination. Nitrate contamination levels dropped slightly during 2014–16, and chloride contamination leveled off between 2007 and 2016. Sulfate contamination is similar to the 2000 level. Hydrogen ion contamination has steadily decreased since 2012. Losses of ammonium and nitrate resulting from potential sample instability were negligible.The NADP Central Analytical Laboratory produced interlaboratory comparison results with low bias and variability compared to other domestic and international laboratories that support atmospheric deposition monitoring. Significant absolute bias above the magnitudes of the detection limits was observed for nitrate and sulfate concentrations, but no analyte determinations exceeded the detection limits for blanks.Colocated sampler program results from dissimilar colocated collectors indicate that the retrofit of the National Trends Network with N-CON Systems Company

  7. Information system evolution at the French National Network of Seismic Survey (BCSF-RENASS)

    NASA Astrophysics Data System (ADS)

    Engels, F.; Grunberg, M.

    2013-12-01

    The aging information system of the French National Network of Seismic Survey (BCSF-RENASS), located in Strasbourg (EOST), needed to be updated to reflect current practices in computer science, which meant evolving the system at several levels: development methods, data-mining solutions, and system administration. The new system had to provide more agility for incoming projects. The main difficulty was maintaining the old system and the new one in parallel, with a small team, for the time needed to validate the new solutions. The solutions adopted here come from standards used by the seismological community and are inspired by the state of the art in the devops community. The new system is easier to maintain and benefits from large user communities for support. This poster introduces the new system and the chosen solutions, such as Puppet, Fabric, MongoDB and FDSN Web services.

  8. Neural Dynamics as Sampling: A Model for Stochastic Computation in Recurrent Networks of Spiking Neurons

    PubMed Central

    Buesing, Lars; Bill, Johannes; Nessler, Bernhard; Maass, Wolfgang

    2011-01-01

    The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons. PMID:22096452
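
    As context for the abstract's claim that common MCMC methods such as Gibbs sampling are inconsistent with spiking dynamics, the following sketch shows what the standard Gibbs-sampling baseline over a Boltzmann distribution looks like; the weights, biases, and chain length are illustrative values, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_sample(W, b, steps=5000):
    """Gibbs sampling from a Boltzmann distribution
    p(z) ~ exp(z.T @ W @ z / 2 + b.T @ z) over binary z in {0,1}^K
    (symmetric W, zero diagonal). This is the reversible-chain
    baseline that the paper replaces with a non-reversible chain."""
    K = len(b)
    z = rng.integers(0, 2, K)
    samples = np.zeros((steps, K))
    for t in range(steps):
        for k in range(K):                        # sweep every unit
            u = W[k] @ z - W[k, k] * z[k] + b[k]  # net input to unit k
            z[k] = rng.random() < 1.0 / (1.0 + np.exp(-u))
        samples[t] = z
    return samples

# Two units with negative coupling tend to anti-correlate:
W = np.array([[0.0, -2.0], [-2.0, 0.0]])
b = np.array([1.0, 1.0])
s = gibbs_sample(W, b)
```

    The sequential unit-by-unit update is exactly what has no direct analogue in asynchronous spiking activity, which motivates the paper's alternative construction.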

  9. The UK Human Genome Mapping Project online computing service.

    PubMed

    Rysavy, F R; Bishop, M J; Gibbs, G P; Williams, G W

    1992-04-01

    This paper presents an overview of computing and networking facilities developed by the Medical Research Council to provide online computing support to the Human Genome Mapping Project (HGMP) in the UK. The facility is connected to a number of other computing facilities in various centres of genetics and molecular biology research excellence, either directly via high-speed links or through national and international wide-area networks. The paper describes the design and implementation of the current system, a 'client/server' network of Sun, IBM, DEC and Apple servers, gateways and workstations. A short outline of online computing services currently delivered by this system to the UK human genetics research community is also provided. More information about the services and their availability could be obtained by a direct approach to the UK HGMP-RC.

  10. Medical applications for high-performance computers in SKIF-GRID network.

    PubMed

    Zhuchkov, Alexey; Tverdokhlebov, Nikolay

    2009-01-01

    The paper presents a set of software services for massive mammography image processing by using high-performance parallel computers of SKIF-family which are linked into a service-oriented grid-network. An experience of a prototype system implementation in two medical institutions is also described.

  11. Database Software Selection for the Egyptian National STI Network.

    ERIC Educational Resources Information Center

    Slamecka, Vladimir

    The evaluation and selection of information/data management system software for the Egyptian National Scientific and Technical (STI) Network are described. An overview of the state-of-the-art of database technology elaborates on the differences between information retrieval and database management systems (DBMS). The desirable characteristics of…

  12. Mobility-Aware Caching and Computation Offloading in 5G Ultra-Dense Cellular Networks.

    PubMed

    Chen, Min; Hao, Yixue; Qiu, Meikang; Song, Jeungeun; Wu, Di; Humar, Iztok

    2016-06-25

    Recent trends show that Internet traffic is increasingly dominated by content and is growing exponentially. To cope with this phenomenon, network caching is introduced to utilize the storage capacity of diverse network devices. In this paper, we first summarize four basic caching placement strategies, i.e., local caching, Device-to-Device (D2D) caching, Small cell Base Station (SBS) caching and Macrocell Base Station (MBS) caching. However, studies show that much of the research so far has ignored the impact of user mobility. Therefore, taking the effect of user mobility into consideration, we propose a joint mobility-aware caching and SBS density placement scheme (MS caching). In addition, differences and relationships between caching and computation offloading are discussed. We present a design for hybrid computation offloading and support it with experimental results, which demonstrate improved performance in terms of energy cost. Finally, we discuss the design of an incentive mechanism that considers network dynamics, differentiated quality of experience (QoE) and the heterogeneity of mobile terminals in terms of caching and computing capabilities.

  13. High-throughput Bayesian Network Learning using Heterogeneous Multicore Computers

    PubMed Central

    Linderman, Michael D.; Athalye, Vivek; Meng, Teresa H.; Asadi, Narges Bani; Bruggner, Robert; Nolan, Garry P.

    2017-01-01

    Aberrant intracellular signaling plays an important role in many diseases. The causal structure of signal transduction networks can be modeled as Bayesian Networks (BNs) and computationally learned from experimental data. However, learning the structure of BNs is an NP-hard problem that, even with fast heuristics, is too time consuming for large, clinically important networks (20–50 nodes). In this paper, we present a novel graphics processing unit (GPU)-accelerated implementation of a Markov chain Monte Carlo-based algorithm for learning BNs that is up to 7.5-fold faster than current general-purpose processor (GPP)-based implementations. The GPU-based implementation is just one of several implementations within the larger application, each optimized for a different input or machine configuration. We describe the methodology we use to build an extensible application, assembled from these variants, that can target a broad range of heterogeneous systems, e.g., GPUs, multicore GPPs. Specifically, we show how we use the Merge programming model to efficiently integrate, test and intelligently select among the different potential implementations. PMID:28819655
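
    The paper's GPU kernels are not shown in the abstract. To illustrate the overall shape of MCMC structure learning, here is a toy Metropolis-Hastings walk over DAGs with a stand-in score function; a real implementation would score candidate structures against experimental data (e.g. with a BDe or BIC score), and every name below is an assumption for the sketch.

```python
import math, random

def is_dag(edges, n):
    """Kahn's algorithm: the graph is acyclic iff every node drains out."""
    indeg = [0] * n
    for _, v in edges:
        indeg[v] += 1
    queue = [v for v in range(n) if indeg[v] == 0]
    seen = 0
    while queue:
        u = queue.pop()
        seen += 1
        for a, b in edges:
            if a == u:
                indeg[b] -= 1
                if indeg[b] == 0:
                    queue.append(b)
    return seen == n

def mcmc_structure_search(n, score, steps=2000, seed=0):
    """Metropolis-Hastings over DAG structures: propose toggling one
    directed edge, reject cycles, accept with min(1, exp(s' - s))."""
    rng = random.Random(seed)
    edges, s = set(), score(frozenset())
    best, best_s = frozenset(), s
    for _ in range(steps):
        u, v = rng.sample(range(n), 2)      # a random ordered pair
        cand = set(edges) ^ {(u, v)}        # toggle that edge
        if not is_dag(cand, n):
            continue
        s2 = score(frozenset(cand))
        if s2 >= s or rng.random() < math.exp(s2 - s):
            edges, s = cand, s2
            if s > best_s:
                best, best_s = frozenset(edges), s
    return best, best_s

def toy_score(E):
    """Stand-in log-score: rewards the 'true' edge 0 -> 1, penalizes extras."""
    return (5.0 - len(E)) if (0, 1) in E else -float(len(E))

best, best_s = mcmc_structure_search(3, toy_score, steps=20000)
```

    The inner scoring call dominates the runtime on real data, which is precisely the part the paper offloads to the GPU.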

  14. Complex network problems in physics, computer science and biology

    NASA Astrophysics Data System (ADS)

    Cojocaru, Radu Ionut

    There is a close relation between physics and mathematics, and the exchange of ideas between these two sciences is well established. Until a few years ago, however, there was no such close relation between physics and computer science, and only recently have biologists started to use methods and tools from statistical physics to study the behavior of complex systems. In this thesis we concentrate on applying and analyzing several methods borrowed from computer science in biology, and we also use methods from statistical physics to solve hard problems from computer science. In recent years physicists have been interested in studying the behavior of complex networks. Physics is an experimental science in which theoretical predictions are compared to experiments. In this definition, the term prediction plays a very important role: although the system is complex, it is still possible to get predictions for its behavior, but these predictions are of a probabilistic nature. Spin glasses, lattice gases and the Potts model are a few examples of complex systems in physics. Spin glasses and many frustrated antiferromagnets map exactly to computer science problems in the NP-hard class defined in Chapter 1. In Chapter 1 we discuss a common result from artificial intelligence (AI) which shows that some problems are NP-complete, with the implication that these problems are difficult to solve. We introduce a few well-known hard problems from computer science (Satisfiability, Coloring, Vertex Cover together with Maximum Independent Set, and Number Partitioning) and then discuss their mapping to problems from physics. In Chapter 2 we provide a short review of combinatorial optimization algorithms and their applications to ground-state problems in disordered systems. We discuss the cavity method initially developed for studying the Sherrington-Kirkpatrick model of spin glasses. We extend this model to the study of a specific case of spin glass on the Bethe

  15. Establishing the ACORN National Practitioner Database: Strategies to Recruit Practitioners to a National Practice-Based Research Network.

    PubMed

    Adams, Jon; Steel, Amie; Moore, Craig; Amorin-Woods, Lyndon; Sibbritt, David

    2016-10-01

    The purpose of this paper is to report on the recruitment and promotion strategies employed by the Australian Chiropractic Research Network (ACORN) project aimed at helping recruit a substantial national sample of participants and to describe the features of our practice-based research network (PBRN) design that may provide key insights to others looking to establish a similar network or draw on the ACORN project to conduct sub-studies. The ACORN project followed a multifaceted recruitment and promotion strategy drawing on distinct branding, a practitioner-focused promotion campaign, and a strategically designed questionnaire and distribution/recruitment approach to attract sufficient participation from the ranks of registered chiropractors across Australia. From the 4684 chiropractors registered at the time of recruitment, the project achieved a database response rate of 36% (n = 1680), resulting in a large, nationally representative sample across age, gender, and location. This sample constitutes the largest proportional coverage of participants from any voluntary national PBRN across any single health care profession. It does appear that a number of key promotional and recruitment features of the ACORN project may have helped establish the high response rate for the PBRN, which constitutes an important sustainable resource for future national and international efforts to grow the chiropractic evidence base and research capacity. Further rigorous enquiry is needed to help evaluate the direct contribution of specific promotional and recruitment strategies in attaining high response rates from practitioner populations who may be invited to participate in future PBRNs. Copyright © 2016. Published by Elsevier Inc.

  16. Computation and Communication Evaluation of an Authentication Mechanism for Time-Triggered Networked Control Systems

    PubMed Central

    Martins, Goncalo; Moondra, Arul; Dubey, Abhishek; Bhattacharjee, Anirban; Koutsoukos, Xenofon D.

    2016-01-01

    In modern networked control applications, confidentiality and integrity are important features to address in order to protect against attacks. Moreover, networked control systems are a fundamental part of the communication components of current cyber-physical systems (e.g., automotive communications). Many networked control systems employ Time-Triggered (TT) architectures that provide mechanisms enabling the exchange of precise and synchronous messages. TT systems have computation and communication constraints, and to enable secure communications in the network, it is important to evaluate the computational and communication overhead of implementing secure communication mechanisms. This paper presents a comprehensive analysis and evaluation of the effects of adding a Hash-based Message Authentication Code (HMAC) to TT networked control systems. The contributions of the paper include (1) the analysis and experimental validation of the communication overhead, as well as a scalability analysis that utilizes the experimental result for both wired and wireless platforms, and (2) an experimental evaluation of the computational overhead of HMAC based on a kernel-level Linux implementation. An automotive application is used as an example, and the results show that it is feasible to implement a secure communication mechanism without interfering with the existing automotive controller execution times. The methods and results of the paper can be used for evaluating the performance impact of security mechanisms and, thus, for the design of secure wired and wireless TT networked control systems. PMID:27463718
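
    As a concrete illustration of the two overheads the paper measures, the sketch below tags and verifies a frame with HMAC-SHA256 using Python's standard hmac module; the key, payload size, and timing loop are illustrative stand-ins, not the paper's kernel-level automotive setup.

```python
import hmac, hashlib, time

def authenticate(message: bytes, key: bytes) -> bytes:
    """Append an HMAC-SHA256 tag to a time-triggered frame payload."""
    return message + hmac.new(key, message, hashlib.sha256).digest()

def verify(frame: bytes, key: bytes) -> bool:
    """Constant-time tag check on receipt."""
    message, tag = frame[:-32], frame[-32:]
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

key = b"16-byte-demo-key"
payload = b"\x00" * 64                  # a small illustrative TT payload
frame = authenticate(payload, key)

# Communication overhead: a fixed 32 bytes per frame for the tag.
overhead_bytes = len(frame) - len(payload)

# Computation overhead: rough per-frame verification cost on this host.
t0 = time.perf_counter()
for _ in range(10_000):
    verify(frame, key)
cost_us = (time.perf_counter() - t0) / 10_000 * 1e6
```

    The fixed 32-byte tag is what drives the communication-overhead scaling, while the per-frame hash cost is what must fit inside the TT schedule's slot budget.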

  17. Improved Seismic Acquisition System and Data Processing for the Italian National Seismic Network

    NASA Astrophysics Data System (ADS)

    Badiali, L.; Marcocci, C.; Mele, F.; Piscini, A.

    2001-12-01

    A new system for acquiring and processing digital signals has been developed over the last few years at the Istituto Nazionale di Geofisica e Vulcanologia (INGV). The system makes extensive use of internet communication protocol standards such as TCP and UDP, which serve as the transport highway inside the Italian network, and possibly in the near future outside it, to share or redirect data among processes. The Italian National Seismic Network has been operating for about 18 years, equipped with vertical short-period seismometers transmitting through analog lines to the computer center in Rome. We are now concentrating our efforts on speeding up the migration toward a fully digital network of about 150 stations, equipped with either broad-band or 5-second sensors and connected to the data center partly through wired digital communication and partly through satellite digital communication. The overall process is layered through the intranet and/or internet. Every layer gathers data in a simple format and provides data in a processed format, ready to be distributed to the next layer. The lowest level acquires seismic data (raw waveforms) coming from the remote stations; it handshakes, checks, and sends data over the LAN or WAN according to a distribution list of machines whose programs are waiting for it. At the next level are the picking procedures, or "pickers", operating on a per-instrument basis and looking for phases. A picker spreads phases, again over the LAN or WAN and according to a distribution list, to one or more waiting locating machines tuned to generate a seismic event. The event-locating procedure itself, the highest level in this stack, can exchange information with other similar procedures. Such a layered and distributed structure with nearby targets allows other seismic networks to join the processing and data collection of the same ongoing event, creating a virtual network larger than the original one. 
    At present we plan to cooperate with other
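
    The distribution-list forwarding described above can be sketched in a few lines. The message format, field names, and loopback demo below are assumptions for illustration, not INGV's actual wire protocol.

```python
import json, socket

def spread_phase(pick, targets):
    """Send one picked phase to every waiting locator on the
    distribution list, one UDP datagram per target."""
    payload = json.dumps(pick).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        for host, port in targets:
            sock.sendto(payload, (host, port))

# Loopback demo: one 'locating machine' waiting for phases.
locator = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
locator.bind(("127.0.0.1", 0))          # OS-assigned port
locator.settimeout(5.0)
spread_phase({"sta": "AQU", "phase": "P", "t": 1234.5},
             [locator.getsockname()])
data, _ = locator.recvfrom(4096)
received = json.loads(data)
locator.close()
```

    Because each layer only pushes small, self-describing messages to a list of peers, adding a cooperating network amounts to appending its hosts to the distribution list.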

  18. Wide-Area Network Resources for Teacher Education.

    ERIC Educational Resources Information Center

    Aust, Ronald

    A central feature of the High Performance Computing Act of 1991 is the establishment of a National Research and Education Network (NREN). The level of access that teachers and teacher educators will need to benefit from the NREN and the types of network resources that are most useful for educators are explored, along with design issues that are…

  19. Smart photonic networks and computer security for image data

    NASA Astrophysics Data System (ADS)

    Campello, Jorge; Gill, John T.; Morf, Martin; Flynn, Michael J.

    1998-02-01

    Work reported here is part of a larger project on 'Smart Photonic Networks and Computer Security for Image Data', studying the interactions of coding and security, switching architecture simulations, and basic technologies. Coding and security: coding methods appropriate for data security in data fusion networks were investigated. These networks have several characteristics that distinguish them from other currently employed networks, such as Ethernet LANs or the Internet. The most significant characteristics are very high maximum data rates; predominance of image data; narrowcasting - transmission of data from one source to a designated set of receivers; data fusion - combining related data from several sources; and simple sensor nodes with limited buffering. These characteristics affect both the lower-level network design and the higher-level coding methods. Data security encompasses privacy, integrity, reliability, and availability. Privacy, integrity, and reliability can be provided through encryption and coding for error detection and correction. Availability is primarily a network issue; network nodes must be protected against failure or routed around in case of failure. One of the more promising techniques is the use of 'secret sharing'. We consider this method as a special case of our new space-time code diversity based algorithms for secure communication. These algorithms enable us to exploit parallelism and scalable multiplexing schemes to build photonic network architectures. A number of very high-speed switching and routing architectures and their relationships with very high-performance processor architectures were studied. Indications are that routers for very high-speed photonic networks can be designed using the very robust and distributed TCP/IP protocol, if suitable processor architecture support is available.
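
    As a minimal illustration of the 'secret sharing' idea mentioned above, the sketch below implements 2-of-2 XOR sharing, in which either share alone is uniformly random and reveals nothing about the secret; this is a teaching example, not the project's space-time coding scheme, and real deployments would typically use a threshold scheme such as Shamir's.

```python
import secrets

def split_xor(secret: bytes):
    """2-of-2 XOR secret sharing: share1 is a uniformly random pad,
    share2 = secret XOR share1. XOR-ing both recovers the secret."""
    share1 = secrets.token_bytes(len(secret))
    share2 = bytes(a ^ b for a, b in zip(secret, share1))
    return share1, share2

def combine_xor(share1: bytes, share2: bytes) -> bytes:
    """Recombine the two shares byte-by-byte."""
    return bytes(a ^ b for a, b in zip(share1, share2))

s1, s2 = split_xor(b"image block 0042")
recovered = combine_xor(s1, s2)
```

    Routing the two shares over disjoint network paths gives confidentiality against a single compromised link, which is why the technique suits high-rate narrowcast image networks.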

  20. Project: Toward a National Educational Testing Network. Final Report.

    ERIC Educational Resources Information Center

    Bock, Darrell R.

    Three fiscal year 1987 deliverables due for the "Toward a National Educational Testing Network: Feasibility Study of Duplex Design" are presented. The study is concerned with implementation of statewide and interstate testing of student attainment. The report includes: (1) a duplex design (DD) review paper discussing the means by which…

  1. Campus Computing 1993. The USC National Survey of Desktop Computing in Higher Education.

    ERIC Educational Resources Information Center

    Green, Kenneth C.; Eastman, Skip

    A national survey of desktop computing in higher education was conducted in spring and summer 1993 at over 2500 institutions. Data were responses from public and private research universities, public and private four-year colleges and community colleges. Respondents (N=1011) were individuals specifically responsible for the operation and future…

  2. Effective Instruction. National Dropout Prevention Center/Network Newsletter. Volume 21, Number 2

    ERIC Educational Resources Information Center

    Duckenfield, Marty, Ed.

    2009-01-01

    The "National Dropout Prevention Newsletter" is published quarterly by the National Dropout Prevention Center/Network. This issue contains the following articles: (1) Strategies for Success (Charles W. Hatch); (2) 2009 NDPN Crystal Star Winners; (3) Strategies for More Effective Instruction (Micki Gibson); (4) Some Thoughts on Teaching…

  3. Service-Learning. National Dropout Prevention Center/Network Newsletter. Volume 22, Number 4

    ERIC Educational Resources Information Center

    Duckenfield, Marty, Ed.

    2011-01-01

    The "National Dropout Prevention Newsletter" is published quarterly by the National Dropout Prevention Center/Network. This issue contains the following articles: (1) Dropouts and Democracy (Robert Shumer); (2) 2011 NDPN Crystal Star Winners; (3) Service-Learning as Dropout Intervention and More (Michael VanKeulen); and (4) Teacher…

  4. Low-cost autonomous perceptron neural network inspired by quantum computation

    NASA Astrophysics Data System (ADS)

    Zidan, Mohammed; Abdel-Aty, Abdel-Haleem; El-Sadek, Alaa; Zanaty, E. A.; Abdel-Aty, Mahmoud

    2017-11-01

    Achieving low-cost learning with reliable accuracy is one of the important goals on the way to intelligent machines: saving time and energy, and performing the learning process on machines with limited computational resources. In this paper, we propose an efficient algorithm for a perceptron neural network inspired by quantum computing, composed of a single neuron, that classifies linearly separable applications after a single training iteration, O(1). The algorithm is applied to a real-world data set, and the results outperform other state-of-the-art algorithms.

  5. Portable Computer Technology (PCT) Research and Development Program Phase 2

    NASA Technical Reports Server (NTRS)

    Castillo, Michael; McGuire, Kenyon; Sorgi, Alan

    1995-01-01

    This project report focuses on: (1) Design and development of two Advanced Portable Workstation 2 (APW 2) units. These units incorporate advanced technology features such as a low-power Pentium processor, a high-resolution color display, National Television Standards Committee (NTSC) video handling capabilities, a Personal Computer Memory Card International Association (PCMCIA) interface, and Small Computer System Interface (SCSI) and ethernet interfaces. (2) Use of these units to integrate and demonstrate advanced wireless network and portable video capabilities. (3) Qualification of the APW 2 systems for use in specific experiments aboard the Mir Space Station. A major objective of the PCT Phase 2 program was to help guide future choices in computing platforms and techniques for meeting National Aeronautics and Space Administration (NASA) mission objectives, with a focus on the development of optimal configurations of computing hardware, software applications, and network technologies for use on NASA missions.

  6. WaveJava: Wavelet-based network computing

    NASA Astrophysics Data System (ADS)

    Ma, Kun; Jiao, Licheng; Shi, Zhuoer

    1997-04-01

    Wavelet theory is powerful, but its successful application still needs suitable programming tools. Java is a simple, object-oriented, distributed, interpreted, robust, secure, architecture-neutral, portable, high-performance, multi-threaded, dynamic language. This paper addresses the design and development of a cross-platform software environment for experimenting with and applying wavelet theory. WaveJava, a wavelet class library designed with object-oriented programming, is developed to take advantage of wavelet features such as multi-resolution analysis and parallel processing in network computing. A new application architecture is designed for the net-wide distributed client-server environment. The data are transmitted as multi-resolution packets. At distributed sites around the net, these data packets undergo matching or recognition processing in parallel, and the results are fed back to determine the next operation, so more robust results can be obtained quickly. WaveJava is easy to use and to extend for special applications. This paper gives a solution for a distributed fingerprint information processing system; it also fits other net-based multimedia information processing, such as network libraries, remote teaching and filmless picture archiving and communications.
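
    The multi-resolution packets described above can be illustrated with the simplest wavelet, the Haar transform. WaveJava itself is a Java library; this standalone Python sketch (assuming a signal whose length is a power of two) just shows how a signal decomposes into a coarsest average plus detail packets ordered coarse-to-fine, as they would be transmitted.

```python
def haar_step(signal):
    """One Haar analysis step: pairwise averages (coarse part)
    and pairwise half-differences (detail part)."""
    avg = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    det = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avg, det

def decompose(signal):
    """Full multi-resolution pyramid: the coarsest average plus
    detail packets, ordered coarse-to-fine for transmission."""
    packets = []
    while len(signal) > 1:
        signal, det = haar_step(signal)
        packets.append(det)
    return signal, packets[::-1]

coarse, details = decompose([9, 7, 3, 5])
# coarse = [6.0]; details = [[2.0], [1.0, -1.0]]
```

    A receiver can reconstruct progressively: the coarse packet alone gives a low-resolution preview, and each detail packet doubles the resolution, which is what enables the parallel matching on partial data described in the abstract.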

  7. Factors Impacting Adult Learner Achievement in a Technology Certificate Program on Computer Networks

    ERIC Educational Resources Information Center

    Delialioglu, Omer; Cakir, Hasan; Bichelmeyer, Barbara A.; Dennis, Alan R.; Duffy, Thomas M.

    2010-01-01

    This study investigates the factors impacting the achievement of adult learners in a technology certificate program on computer networks. We studied 2442 participants in 256 institutions. The participants were older than age 18 and were enrolled in the Cisco Certified Network Associate (CCNA) technology training program as "non-degree" or…

  8. Implementation and integration of regional health care data networks in the Hellenic National Health Service.

    PubMed

    Lampsas, Petros; Vidalis, Ioannis; Papanikolaou, Christos; Vagelatos, Aristides

    2002-12-01

    Modern health care is provided with close cooperation among many different institutions and professionals, using their specialized expertise in a common effort to deliver best-quality and, at the same time, cost-effective services. Within this context of the growing need for information exchange, the demand for realization of data networks interconnecting various health care institutions at a regional level, as well as a national level, has become a practical necessity. To present the technical solution that is under consideration for implementing and interconnecting regional health care data networks in the Hellenic National Health System. The most critical requirements for deploying such a regional health care data network were identified as: fast implementation, security, quality of service, availability, performance, and technical support. The solution proposed is the use of proper virtual private network technologies for implementing functionally-interconnected regional health care data networks. The regional health care data network is considered to be a critical infrastructure for further development and penetration of information and communication technologies in the Hellenic National Health System. Therefore, a technical approach was planned, in order to have a fast cost-effective implementation, conforming to certain specifications.

  9. Implementation and Integration of Regional Health Care Data Networks in the Hellenic National Health Service

    PubMed Central

    Vidalis, Ioannis; Papanikolaou, Christos; Vagelatos, Aristides

    2002-01-01

    Background Modern health care is provided with close cooperation among many different institutions and professionals, using their specialized expertise in a common effort to deliver best-quality and, at the same time, cost-effective services. Within this context of the growing need for information exchange, the demand for realization of data networks interconnecting various health care institutions at a regional level, as well as a national level, has become a practical necessity. Objectives To present the technical solution that is under consideration for implementing and interconnecting regional health care data networks in the Hellenic National Health System. Methods The most critical requirements for deploying such a regional health care data network were identified as: fast implementation, security, quality of service, availability, performance, and technical support. Results The solution proposed is the use of proper virtual private network technologies for implementing functionally-interconnected regional health care data networks. Conclusions The regional health care data network is considered to be a critical infrastructure for further development and penetration of information and communication technologies in the Hellenic National Health System. Therefore, a technical approach was planned, in order to have a fast cost-effective implementation, conforming to certain specifications. PMID:12554551

  10. Bias and precision of selected analytes reported by the National Atmospheric Deposition Program and National Trends Network, 1984

    USGS Publications Warehouse

    Brooks, M.H.; Schroder, L.J.; Willoughby, T.C.

    1987-01-01

    The U.S. Geological Survey operated a blind audit sample program during 1984 to test the effects of the sample handling and shipping procedures used by the National Atmospheric Deposition Program and National Trends Network on the quality of wet deposition data produced by the combined networks. Blind audit samples, which were dilutions of standard reference water samples, were submitted by network site operators to the central analytical laboratory disguised as actual wet deposition samples. Results from the analyses of blind audit samples were used to estimate the analyte bias associated with all network wet deposition samples analyzed in 1984 and to estimate analyte precision. Concentration differences between double-blind samples that were submitted to the central analytical laboratory and separate analyses of aliquots of those blind audit samples that had not undergone network sample handling and shipping were used to calculate the analyte masses apparently added to each blind audit sample by routine network handling and shipping procedures. These calculated masses indicated statistically significant biases for magnesium, sodium, potassium, chloride, and sulfate. Median calculated masses were 41.4 micrograms (ug) for calcium, 14.9 ug for magnesium, 23.3 ug for sodium, 0.7 ug for potassium, 16.5 ug for chloride, and 55.3 ug for sulfate. Analyte precision was estimated using two different sets of replicate measures performed by the central analytical laboratory. Estimated standard deviations were similar to those previously reported. (Author's abstract)
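    The bias computation the abstract describes, the mass apparently added in transit, is the concentration difference between the handled and unhandled aliquots times the sample volume, with a median taken across samples. A sketch; the pairing scheme, units, and numbers below are hypothetical, not the survey's data:

```python
def added_mass_ug(conc_handled_mg_per_l, conc_unhandled_mg_per_l, volume_ml):
    """Mass (ug) apparently added to one blind audit sample in transit.
    mg/L equals ug/mL, so the concentration difference times the volume
    in mL gives micrograms directly."""
    return (conc_handled_mg_per_l - conc_unhandled_mg_per_l) * volume_ml

def median(values):
    s = sorted(values)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2.0

# Hypothetical (handled, unhandled, volume) triples for one analyte.
pairs = [(0.52, 0.50, 250.0), (0.61, 0.55, 250.0), (0.48, 0.47, 250.0)]
masses = [added_mass_ug(h, u, v) for h, u, v in pairs]
median_mass = median(masses)
```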

  11. Biological neural networks as model systems for designing future parallel processing computers

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.

    1991-01-01

    One of the more interesting debates of the present day centers on whether human intelligence can be simulated by computer. The author works under the premise that neurons individually are not smart at all. Rather, they are physical units which are impinged upon continuously by other matter that influences the direction of voltage shifts across the units' membranes. It is only through the action of a great many neurons, billions in the case of the human nervous system, that intelligent behavior emerges. What is required to understand even the simplest neural system is painstaking analysis, bit by bit, of the architecture and the physiological functioning of its various parts. The biological neural networks studied, the vestibular utricular and saccular maculas of the inner ear, are among the simplest of the mammalian neural networks to understand and model. While there is still a long way to go to understand even this simplest neural network in sufficient detail for extrapolation to computers and robots, a start was made. Moreover, the insights obtained and the technologies developed help advance the understanding of the more complex neural networks that underlie human intelligence.

  12. Assessment of the National Park network of mainland Spain by the Insecurity Index of vertebrate species.

    PubMed

    Estrada, Alba; Real, Raimundo

    2018-01-01

    The evaluation of protected-area networks on their capacity to preserve species distributions is a key topic in conservation biology. There are different types of protected areas, with National Parks having the highest level of protection. National Parks can be declared on the basis of many ecological features, including the presence of certain animal species. Here, we selected 37 vertebrate species that were highlighted as having relevant natural value for at least one of the 10 National Parks of mainland Spain. We modelled species distributions with the favourability function, and applied the Insecurity Index to detect the degree of protection of favourable areas for each species. Two metrics of the Insecurity Index were defined for each species: the Insecurity Index in each of the cells, and the Overall Insecurity Index of a species. The former allows the identification of insecure areas for each species that can be used to establish spatial conservation priorities. The latter gives a value of Insecurity for each species, which we used to calculate the Representativeness of favourable areas for the species in the network. As expected, due to the limited extension of the National Park network, all species have high values of Insecurity; i.e., just a narrow proportion of their favourable areas is covered by a National Park. However, the favourable areas of most species are well represented in the network; i.e., the percentage of favourable areas covered by the National Park network is higher than the percentage of mainland Spain covered by the network (a result also supported by a randomization approach). Even if a reserve network covers only a low percentage of a country, the Overall Insecurity Index allows an objective assessment of its capacity to represent species. Beyond the results presented here, the Insecurity Index has the potential to be extrapolated to other areas and to cover a wide range of species.
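    On a gridded study area the two ideas, Insecurity and Representativeness, reduce to simple set arithmetic. This sketch uses a hypothetical grid and park layout, not the Spanish data or the authors' favourability models:

```python
def insecurity(favourable, protected):
    """Insecurity Index of a species: the fraction of its favourable
    cells that fall outside the protected-area network."""
    fav = set(favourable)
    return len(fav - set(protected)) / len(fav)

def representativeness(favourable, protected, territory):
    """Ratio of the network's coverage of favourable areas to its coverage
    of the whole territory; values > 1 mean favourable areas are
    over-represented in the network despite high Insecurity."""
    fav, prot, terr = set(favourable), set(protected), set(territory)
    return (len(fav & prot) / len(fav)) / (len(prot & terr) / len(terr))

# Hypothetical 10 x 10 grid: parks cover 5% of all cells but 40% of the
# species' favourable cells, so Insecurity is high yet the species is
# over-represented in the network.
territory = range(100)
parks = range(5)
favourable = [0, 1, 2, 3, 10, 11, 12, 13, 14, 15]
```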

  13. 76 FR 63811 - Structural Reforms To Improve the Security of Classified Networks and the Responsible Sharing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-13

    ... Structural Reforms To Improve the Security of Classified Networks and the Responsible Sharing and... classified national security information (classified information) on computer networks, it is hereby ordered as follows: Section 1. Policy. Our Nation's security requires classified information to be shared...

  14. Linking Geophysical Networks to International Economic Development Through Integration of Global and National Monitoring

    NASA Astrophysics Data System (ADS)

    Lerner-Lam, A.

    2007-05-01

    Outside of the research community and mission agencies, global geophysical monitoring rarely receives sustained attention except in the aftermath of a humanitarian disaster. The recovery and rebuilding period focuses attention and resources for a short time on regional needs for geophysical observation, often at the national or sub-national level. This can result in the rapid deployment of national monitoring networks, but may overlook the longer-term benefits of integration with global networks. Even in the case of multinational disasters, such as the Indian Ocean tsunami, it has proved difficult to promote the integration of national solutions with global monitoring, research and operations infrastructure. More importantly, continuing operations at the national or sub-national scale are difficult to sustain once the resources associated with recovery and rebuilding are depleted. Except for some notable examples, the vast infrastructure associated with global geophysical monitoring is not utilized constructively to promote the integration of national networks with international efforts. This represents a missed opportunity not only for monitoring, but for developing the international research and educational collaborations necessary for technological transfer and capacity building. The recent confluence of highly visible disasters, global multi-hazard risk assessments, evaluations of the relationships between natural disasters and socio-economic development, and shifts in development agency policies, provides an opportunity to link global geophysical monitoring initiatives to central issues in international development. Natural hazard risk reduction has not been the first priority of international development agendas for understandable, mainly humanitarian reasons. However, it is now recognized that the so-called risk premium associated with making development projects more risk-conscious or risk-resilient is small relative to potential losses. Thus

  15. Effects of maximum node degree on computer virus spreading in scale-free networks

    NASA Astrophysics Data System (ADS)

    Bamaarouf, O.; Ould Baba, A.; Lamzabi, S.; Rachadi, A.; Ez-Zahraouy, H.

    2017-10-01

    The increasing use of Internet networks favors the spread of viruses. In this paper, we studied the spread of viruses in scale-free networks with different topologies based on the Susceptible-Infected-External (SIE) model. It is found that the network structure influences virus spreading. We have also shown that nodes of high degree are more susceptible to infection than others. Furthermore, we have determined a critical maximum value of node degree (Kc), below which the network is more resistant and a computer virus cannot expand into the whole network. The influence of network size is also studied. We found that smaller networks are more effective at reducing the proportion of infected nodes.
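    A minimal sketch of the susceptible-infected core of such a simulation (the paper's SIE model adds an "External" state, omitted here) shows why a high-degree hub accelerates spreading, and hence why capping the maximum degree slows it:

```python
import random

def simulate_si(adj, seed_node, beta, steps, rng=None):
    """Discrete-time susceptible -> infected spread over an adjacency dict
    {node: set(neighbours)}; beta is the per-contact infection probability."""
    rng = rng or random.Random(0)
    infected = {seed_node}
    for _ in range(steps):
        new = set()
        for u in infected:
            for v in adj[u]:
                if v not in infected and rng.random() < beta:
                    new.add(v)
        infected |= new
    return infected

def star(hub, leaves):
    """A maximal-degree hub: the kind of node a degree cap Kc would limit."""
    adj = {hub: set(leaves)}
    for leaf in leaves:
        adj[leaf] = {hub}
    return adj
```

    At beta = 1 an infected hub of a star reaches every node in one step, while the same infection on a path advances only one node per step.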

  16. 77 FR 20010 - Notice of Public Workshop: “Designing for Impact: Workshop on Building the National Network for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-03

    ...: ``Designing for Impact: Workshop on Building the National Network for Manufacturing Innovation'' AGENCY...), housed at the National Institute of Standards and Technology (NIST), announces the first of a series of public workshops entitled ``Designing for Impact: Workshop on Building the National Network for...

  17. U.S. EPA's National Dioxin Air Monitoring Network: Analytical Issues

    EPA Science Inventory

    The U.S. EPA has established a National Dioxin Air Monitoring Network (NDAMN) to determine the temporal and geographical variability of atmospheric chlorinated dibenzo-p-dioxins (CDDs), furans (CDFs), and coplanar polychlorinated biphenyls (PCBs) at rural and non-impacted locatio...

  18. Identifying failure in a tree network of a parallel computer

    DOEpatents

    Archer, Charles J.; Pinnow, Kurt W.; Wallenfelt, Brian P.

    2010-08-24

    Methods, parallel computers, and products are provided for identifying failure in a tree network of a parallel computer. The parallel computer includes one or more processing sets including an I/O node and a plurality of compute nodes. For each processing set embodiments include selecting a set of test compute nodes, the test compute nodes being a subset of the compute nodes of the processing set; measuring the performance of the I/O node of the processing set; measuring the performance of the selected set of test compute nodes; calculating a current test value in dependence upon the measured performance of the I/O node of the processing set, the measured performance of the set of test compute nodes, and a predetermined value for I/O node performance; and comparing the current test value with a predetermined tree performance threshold. If the current test value is below the predetermined tree performance threshold, embodiments include selecting another set of test compute nodes. If the current test value is not below the predetermined tree performance threshold, embodiments include selecting from the test compute nodes one or more potential problem nodes and testing individually potential problem nodes and links to potential problem nodes.
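    The screening loop in the claims can be sketched as follows. The abstract does not give the exact formula for the test value, so the normalisation below (mean measured performance over a predetermined I/O baseline) and the heuristic for picking suspects are assumptions for illustration:

```python
def current_test_value(io_perf, test_node_perfs, io_baseline):
    """One plausible form of the test value: mean measured performance of
    the I/O node and the test compute nodes, normalised by the
    predetermined I/O-node performance value."""
    mean_nodes = sum(test_node_perfs) / len(test_node_perfs)
    return (io_perf + mean_nodes) / (2 * io_baseline)

def screen_processing_set(io_perf, test_node_perfs, io_baseline, threshold):
    """Below the tree-performance threshold: select another subset of test
    compute nodes. Otherwise: single out candidate problem nodes (here,
    the below-average performers) for individual testing."""
    if current_test_value(io_perf, test_node_perfs, io_baseline) < threshold:
        return "select-another-subset"
    mean = sum(test_node_perfs) / len(test_node_perfs)
    return sorted(i for i, p in enumerate(test_node_perfs) if p < mean)
```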

  19. Project UNIFY. National Dropout Prevention Center/Network Newsletter. Volume 22, Number 1

    ERIC Educational Resources Information Center

    Duckenfield, Marty, Ed.

    2011-01-01

    The "National Dropout Prevention Newsletter" is published quarterly by the National Dropout Prevention Center/Network. This issue contains the following articles: (1) Special Olympics Project UNIFY (Andrea Cahn); (2) The Impact of Project UNIFY; (3) Project UNIFY Brings Youth Together to Learn and Graduate (William H. Hughes); (4)…

  20. "Repeating Events" as Estimator of Location Precision: The China National Seismograph Network

    NASA Astrophysics Data System (ADS)

    Jiang, Changsheng; Wu, Zhongliang; Li, Yutong; Ma, Tengfei

    2014-03-01

    "Repeating earthquakes" identified by waveform cross-correlation, with inter-event separation of no more than 1 km, can be used for assessment of location precision. Assuming that the network-measured apparent inter-epicenter distance X of the "repeating doublets" indicates the location precision, we estimated the regionalized location quality of the China National Seismograph Network by comparing the "repeating events" in and around China by Schaff and Richards (Science 303: 1176-1178, 2004; J Geophys Res 116: B03309, 2011) and the monthly catalogue of the China Earthquake Networks Center. The comparison shows that the average X value of the China National Seismograph Network is approximately 10 km. The mis-location is larger for the Tibetan Plateau, west and north of Xinjiang, and east of Inner Mongolia, as indicated by larger X values. Mis-location is correlated with the completeness magnitude of the earthquake catalogue. Using the data from the Beijing Capital Circle Region, the dependence of the mis-location on the distribution of seismic stations can be further confirmed.
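    The apparent inter-epicentre distance X of a doublet is simply the great-circle distance between its two catalogue epicentres; since the true separation is under 1 km, X is dominated by network mislocation. A sketch with hypothetical catalogue coordinates:

```python
import math

def epicentral_distance_km(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two catalogue epicentres."""
    r_earth = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r_earth * math.asin(math.sqrt(a))

# Hypothetical doublet: catalogue epicentres ~9 km apart although the true
# separation (from waveform cross-correlation) is under 1 km, so nearly all
# of this apparent distance X is mislocation.
x_km = epicentral_distance_km(40.00, 116.00, 40.05, 116.08)
```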

  1. Dispatching packets on a global combining network of a parallel computer

    DOEpatents

    Almasi, Gheorghe [Ardsley, NY]; Archer, Charles J. [Rochester, MN]

    2011-07-19

    Methods, apparatus, and products are disclosed for dispatching packets on a global combining network of a parallel computer comprising a plurality of nodes connected for data communications using the network capable of performing collective operations and point to point operations that include: receiving, by an origin system messaging module on an origin node from an origin application messaging module on the origin node, a storage identifier and an operation identifier, the storage identifier specifying storage containing an application message for transmission to a target node, and the operation identifier specifying a message passing operation; packetizing, by the origin system messaging module, the application message into network packets for transmission to the target node, each network packet specifying the operation identifier and an operation type for the message passing operation specified by the operation identifier; and transmitting, by the origin system messaging module, the network packets to the target node.

  2. Computational properties of networks of synchronous groups of spiking neurons.

    PubMed

    Dayhoff, Judith E

    2007-09-01

    We demonstrate a model in which synchronously firing ensembles of neurons are networked to produce computational results. Each ensemble is a group of biological integrate-and-fire spiking neurons, with probabilistic interconnections between groups. An analogy is drawn in which each individual processing unit of an artificial neural network corresponds to a neuronal group in a biological model. The activation value of a unit in the artificial neural network corresponds to the fraction of active neurons, synchronously firing, in a biological neuronal group. Weights of the artificial neural network correspond to the product of the interconnection density between groups, the group size of the presynaptic group, and the postsynaptic potential heights in the synchronous group model. All three of these parameters can modulate connection strengths between neuronal groups in the synchronous group models. We give an example of nonlinear classification (XOR) and a function approximation example in which the capability of the artificial neural network can be captured by a neural network model with biological integrate-and-fire neurons configured as a network of synchronously firing ensembles of such neurons. We point out that the general function approximation capability proven for feedforward artificial neural networks appears to be approximated by networks of neuronal groups that fire in synchrony, where the groups comprise integrate-and-fire neurons. We discuss the advantages of this type of model for biological systems, its possible learning mechanisms, and the associated timing relationships.
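    The parameter mapping described above is direct to sketch; the function and variable names here are ours, not the paper's:

```python
def group_weight(conn_density, presyn_group_size, psp_height):
    """ANN weight implied by a pair of synchronous neuronal groups: the
    product of interconnection density, presynaptic group size, and
    postsynaptic potential height (the three parameters the model says
    modulate connection strength between groups)."""
    return conn_density * presyn_group_size * psp_height

def unit_activation(n_firing_synchronously, group_size):
    """ANN activation value: the fraction of the biological group
    firing in synchrony."""
    return n_firing_synchronously / group_size
```

    For instance, a 10% interconnection density from a 100-neuron group with 0.5 mV postsynaptic potentials corresponds to an ANN weight of 5, and 30 of 100 neurons firing synchronously to an activation of 0.3.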

  3. Global tree network for computing structures enabling global processing operations

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Hoenicke, Dirk; Steinmacher-Burow, Burkhard D.; Takken, Todd E.; Vranas, Pavlos M.

    2010-01-19

    A system and method for enabling high-speed, low-latency global tree network communications among processing nodes interconnected according to a tree network structure. The global tree network enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the tree via links to facilitate performance of low-latency global processing operations at nodes of the virtual tree and sub-tree structures. The global operations performed include one or more of: broadcast operations downstream from a root node to leaf nodes of a virtual tree, reduction operations upstream from leaf nodes to the root node in the virtual tree, and point-to-point message passing from any node to the root node. The global tree network is configurable to provide global barrier and interrupt functionality in asynchronous or synchronized manner, and, is physically and logically partitionable.
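    The upstream reduction operation can be sketched as a recursive combine over a tree, a software analogy for what the router hardware performs in-network (the hardware works bottom-up in parallel; the recursion here is for clarity only):

```python
def tree_reduce(values, children, op, node=0):
    """Leaf-to-root reduction over a tree given as {node: [child, ...]}:
    each node combines its own value with the reduced results of its
    subtrees before passing the result toward the root."""
    result = values[node]
    for child in children.get(node, []):
        result = op(result, tree_reduce(values, children, op, child))
    return result
```

    The same traversal run root-to-leaf, copying instead of combining, gives the broadcast operation the abstract lists.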

  4. Communication Environments for Local Networks.

    DTIC Science & Technology

    1982-12-01

    San Francisco, February-March 1979, pp. 272-275. [Frank 75] Frank, H., I. Gitman, and R. Van Slyke, "Packet radio system - Network considerations," in AFIPS Conference Proceedings, Volume 44: National Computer Conference, Anaheim, Calif., May 1975, pp. 217-231. [Frank 76a] Frank, H., I. Gitman, Local, Regional and Larger Scale Integrated Networks, Volume 2, 4 February 1976. [Frank 76b] Frank, H., I. Gitman, and R. Van Slyke, Local and Regional

  5. Workshop on Incomplete Network Data Held at Sandia National Labs – Livermore

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soundarajan, Sucheta; Wendt, Jeremy D.

    2016-06-01

    While network analysis is applied in a broad variety of scientific fields (including physics, computer science, biology, and the social sciences), how networks are constructed, and the resulting bias and incompleteness, have drawn more limited attention. For example, in biology, gene networks are typically developed via experiment; many actual interactions are likely yet to be discovered. In addition to this incompleteness, the data-collection processes can introduce significant bias into the observed network datasets. For instance, if you observe part of the World Wide Web network through a classic random walk, then high-degree nodes are more likely to be found than if you had selected nodes at random. Unfortunately, such incomplete and biased data-collection methods must often be used.

  6. Proceedings from the conference on high speed computing: High speed computing and national security

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirons, K.P.; Vigil, M.; Carlson, R.

    1997-07-01

    This meeting covered the following topics: technologies/national needs/policies: past, present and future; information warfare; crisis management/massive data systems; risk assessment/vulnerabilities; Internet law/privacy and rights of society; challenges to effective ASCI programmatic use of 100 TFLOPs systems; and new computing technologies.

  7. Requirements for a network storage service

    NASA Technical Reports Server (NTRS)

    Kelly, Suzanne M.; Haynes, Rena A.

    1991-01-01

    Sandia National Laboratories provides a high performance classified computer network as a core capability in support of its mission of nuclear weapons design and engineering, physical sciences research, and energy research and development. The network, locally known as the Internal Secure Network (ISN), comprises multiple distributed local area networks (LAN's) residing in New Mexico and California. The TCP/IP protocol suite is used for inter-node communications. Scientific workstations and mid-range computers, running UNIX-based operating systems, compose most LAN's. One LAN, operated by the Sandia Corporate Computing Directorate, is a general purpose resource providing a supercomputer and a file server to the entire ISN. The current file server on the supercomputer LAN is an implementation of the Common File Server (CFS). Subsequent to the design of the ISN, Sandia reviewed its mass storage requirements and chose to enter into a competitive procurement to replace the existing file server with one more adaptable to a UNIX/TCP/IP environment. The requirements study for the network was the starting point for the requirements study for the new file server. The file server is called the Network Storage Service (NSS) and its requirements are described. An application or functional description of the NSS is given. The final section adds performance, capacity, and access constraints to the requirements.

  8. Family Engagement. National Dropout Prevention Center/Network Newsletter. Volume 20, Number 2

    ERIC Educational Resources Information Center

    Duckenfield, Marty, Ed.

    2008-01-01

    The "National Dropout Prevention Newsletter" is published quarterly by the National Dropout Prevention Center/Network. This issue contains the following articles: (1) Family/School Relationships: Relationships That Matter; (2) Program Profile; (3) Engaging Families in the Pathway to College: Lessons From Schools That Are Beating the Odds (Anne T.…

  9. Characterization of computer network events through simultaneous feature selection and clustering of intrusion alerts

    NASA Astrophysics Data System (ADS)

    Chen, Siyue; Leung, Henry; Dondo, Maxwell

    2014-05-01

    As computer network security threats increase, many organizations implement multiple Network Intrusion Detection Systems (NIDS) to maximize the likelihood of intrusion detection and provide a comprehensive understanding of intrusion activities. However, NIDS trigger a massive number of alerts on a daily basis. This can be overwhelming for computer network security analysts since it is a slow and tedious process to manually analyse each alert produced. Thus, automated and intelligent clustering of alerts is important to reveal the structural correlation of events by grouping alerts with common features. As the nature of computer network attacks, and therefore alerts, is not known in advance, unsupervised alert clustering is a promising approach to achieve this goal. We propose a joint optimization technique for feature selection and clustering to aggregate similar alerts and to reduce the number of alerts that analysts have to handle individually. More precisely, each identified feature is assigned a binary value, which reflects the feature's saliency. This value is treated as a hidden variable and incorporated into a likelihood function for clustering. Since computing the optimal solution of the likelihood function directly is analytically intractable, we use the Expectation-Maximisation (EM) algorithm to iteratively update the hidden variable and use it to maximize the expected likelihood. Our empirical results, using a labelled Defense Advanced Research Projects Agency (DARPA) 2000 reference dataset, show that the proposed method gives better results than the EM clustering without feature selection in terms of the clustering accuracy.
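    The E/M alternation underlying the method can be illustrated with a minimal one-dimensional Gaussian-mixture EM; the paper's contribution, treating a per-feature saliency bit as an additional hidden variable, would add one more update to the same loop. This sketch is illustrative, not the authors' code:

```python
import math

def em_gmm_1d(xs, k=2, iters=50):
    """Minimal 1-D Gaussian-mixture EM: alternate computing responsibilities
    (E-step) and re-estimating parameters (M-step) to maximise the
    expected likelihood."""
    lo, hi = min(xs), max(xs)
    mus = [lo + i * (hi - lo) / (k - 1) for i in range(k)]
    sigmas = [1.0] * k
    pis = [1.0 / k] * k
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in xs:
            ws = [pis[j] / (sigmas[j] * math.sqrt(2 * math.pi))
                  * math.exp(-0.5 * ((x - mus[j]) / sigmas[j]) ** 2)
                  for j in range(k)]
            total = sum(ws)
            resp.append([w / total for w in ws])
        # M-step: re-estimate mixture weights, means, and variances.
        for j in range(k):
            nj = sum(r[j] for r in resp)
            mus[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
            var = sum(r[j] * (x - mus[j]) ** 2 for r, x in zip(resp, xs)) / nj
            sigmas[j] = max(math.sqrt(var), 1e-3)  # floor avoids collapse
            pis[j] = nj / len(xs)
    return mus, sigmas, pis
```

    In the alert-clustering setting, each data point would be an alert's feature vector and the saliency bit would down-weight uninformative features inside the same likelihood.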

  10. An Analysis of Attitudes toward Computer Networks and Internet Addiction.

    ERIC Educational Resources Information Center

    Tsai, Chin-Chung; Lin, Sunny S. J.

    The purpose of this study was to explore the interplay between young people's attitudes toward computer networks and Internet addiction. After analyzing questionnaire responses of an initial sample of 615 Taiwanese high school students, 78 subjects, viewed as possible Internet addicts, were selected for further explorations. It was found that…

  11. The Continuing Growth of Global Cooperation Networks in Research: A Conundrum for National Governments

    PubMed Central

    Wagner, Caroline S.; Park, Han Woo; Leydesdorff, Loet

    2015-01-01

    Global collaboration continues to grow as a share of all scientific cooperation, measured as coauthorships of peer-reviewed, published papers. The percent of all scientific papers that are internationally coauthored has more than doubled in 20 years, and they account for all the growth in output among the scientifically advanced countries. Emerging countries, particularly China, have increased their participation in global science, in part by doubling their spending on R&D; they are increasingly likely to appear as partners on internationally coauthored scientific papers. Given the growth of connections at the international level, it is helpful to examine the phenomenon as a communications network and to consider the network as a new organization on the world stage that adds to and complements national systems. When examined as interconnections across the globe over two decades, the global network has grown denser but not more clustered, meaning there are many more connections but they are not grouping into exclusive ‘cliques’. This suggests that power relationships are not reproducing those of the political system. The network has features of an open system, attracting productive scientists to participate in international projects. National governments could gain efficiencies and influence by developing policies and strategies designed to maximize network benefits—a model different from those designed for national systems. PMID:26196296
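    The distinction drawn above, denser but not more clustered, corresponds to two standard graph statistics, sketched here for an undirected network stored as an adjacency dict:

```python
def density(adj):
    """Fraction of all possible undirected edges that are present."""
    n = len(adj)
    m = sum(len(nbrs) for nbrs in adj.values()) // 2  # each edge counted twice
    return 2 * m / (n * (n - 1))

def clustering(adj, node):
    """Local clustering coefficient: the fraction of a node's neighbour
    pairs that are themselves linked (1.0 = neighbourhood is a clique)."""
    nbrs = list(adj[node])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2 * links / (k * (k - 1))
```

    A triangle has density and clustering both 1.0, while a 4-cycle is fairly dense (2/3) yet has zero clustering everywhere: many connections, no cliques, the pattern the study reports at global scale.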

  12. Rapid Sampling of Hydrogen Bond Networks for Computational Protein Design.

    PubMed

    Maguire, Jack B; Boyken, Scott E; Baker, David; Kuhlman, Brian

    2018-05-08

    Hydrogen bond networks play a critical role in determining the stability and specificity of biomolecular complexes, and the ability to design such networks is important for engineering novel structures, interactions, and enzymes. One key feature of hydrogen bond networks that makes them difficult to rationally engineer is that they are highly cooperative and are not energetically favorable until the hydrogen bonding potential has been satisfied for all buried polar groups in the network. Existing computational methods for protein design are ill-equipped for creating these highly cooperative networks because they rely on energy functions and sampling strategies that are focused on pairwise interactions. To enable the design of complex hydrogen bond networks, we have developed a new sampling protocol in the molecular modeling program Rosetta that explicitly searches for sets of amino acid mutations that can form self-contained hydrogen bond networks. For a given set of designable residues, the protocol often identifies many alternative sets of mutations/networks, and we show that it can readily be applied to large sets of residues at protein-protein interfaces or in the interior of proteins. The protocol builds on a recently developed method in Rosetta for designing hydrogen bond networks that has been experimentally validated for small symmetric systems but was not extensible to many larger protein structures and complexes. The sampling protocol we describe here not only recapitulates previously validated designs with performance improvements but also yields viable hydrogen bond networks for cases where the previous method fails, such as the design of large, asymmetric interfaces relevant to engineering protein-based therapeutics.

  13. Computational Modeling of Single Neuron Extracellular Electric Potentials and Network Local Field Potentials using LFPsim.

    PubMed

    Parasuram, Harilal; Nair, Bipin; D'Angelo, Egidio; Hines, Michael; Naldi, Giovanni; Diwakar, Shyam

    2016-01-01

    Local Field Potentials (LFPs) are population signals generated by complex spatiotemporal interactions of current sources and dipoles. Mathematical computation of LFPs allows the study of circuit functions and dysfunctions via simulations. This paper introduces LFPsim, a NEURON-based tool for computing population LFP activity and single-neuron extracellular potentials. LFPsim was developed to be used on existing cable-compartmental neuron and network models. Point-source, line-source, and RC-filter approximations can be used to compute extracellular activity. As a demonstration of efficient implementation, we showcase LFPs from mathematical models of electrotonically compact cerebellum granule neurons and morphologically complex neurons of the neocortical column. LFPsim reproduced neocortical LFP at 8, 32, and 56 Hz via current injection, in vitro post-synaptic N2a and N2b waves, and in vivo T-C waves in the cerebellum granular layer. LFPsim also includes a multi-electrode array simulation of LFPs in network populations, to aid computational inference between biophysical activity in neural networks and the corresponding multi-unit activity underlying extracellular and evoked LFP signals.
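The point-source approximation named in the abstract is the simplest of the three schemes: each compartment's membrane current is treated as a point source in a homogeneous resistive medium, and contributions superpose linearly. A minimal sketch, not LFPsim's actual code; the function names and the 0.3 S/m conductivity are illustrative assumptions:

```python
import math

def point_source_potential(i_m, r, sigma=0.3):
    """Extracellular potential (V) of a point current source.

    i_m   : compartment membrane current (A)
    r     : distance from source to electrode (m)
    sigma : extracellular conductivity (S/m); 0.3 S/m is a value
            commonly assumed for cortical tissue.
    """
    return i_m / (4.0 * math.pi * sigma * r)

def lfp_at_electrode(currents, positions, electrode, sigma=0.3):
    """Sum point-source contributions of many compartments; linearity of
    the extracellular medium lets us superpose them."""
    phi = 0.0
    for i_m, pos in zip(currents, positions):
        r = math.dist(pos, electrode)
        phi += point_source_potential(i_m, r, sigma)
    return phi
```

Line-source and filtered approximations refine this by integrating along dendritic segments and by frequency-filtering the contributions, but the superposition structure is the same.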

  14. A collaborative computing framework of cloud network and WBSN applied to fall detection and 3-D motion reconstruction.

    PubMed

    Lai, Chin-Feng; Chen, Min; Pan, Jeng-Shyang; Youn, Chan-Hyun; Chao, Han-Chieh

    2014-03-01

    As cloud computing and wireless body sensor network technologies mature, ubiquitous healthcare services can prevent accidents instantly and effectively, as well as provide relevant information to reduce related processing time and cost. This study proposes a co-processing intermediary framework that integrates cloud and wireless body sensor networks, applied mainly to fall detection and 3-D motion reconstruction. The main focuses of this study include distributed computing and resource allocation for processing sensing data over the computing architecture, network conditions, and performance evaluation. Through this framework, the transmission and computing time of sensing data are reduced, enhancing overall performance for fall-event detection and 3-D motion reconstruction services.

  15. Convolutional Deep Belief Networks for Single-Cell/Object Tracking in Computational Biology and Computer Vision.

    PubMed

    Zhong, Bineng; Pan, Shengnan; Zhang, Hongbo; Wang, Tian; Du, Jixiang; Chen, Duansheng; Cao, Liujuan

    2016-01-01

    In this paper, we propose a deep architecture to dynamically learn the most discriminative features from data for both single-cell and object tracking in computational biology and computer vision. Firstly, the discriminative features are automatically learned via a convolutional deep belief network (CDBN). Secondly, we design a simple yet effective method to transfer features learned by CDBNs on generic-purpose source tasks to the object tracking tasks using only a limited amount of training data. Finally, to alleviate the tracker drifting problem caused by model updating, we jointly consider three different types of positive samples. Extensive experiments validate the robustness and effectiveness of the proposed method.

  16. Convolutional Deep Belief Networks for Single-Cell/Object Tracking in Computational Biology and Computer Vision

    PubMed Central

    Pan, Shengnan; Zhang, Hongbo; Wang, Tian; Du, Jixiang; Chen, Duansheng; Cao, Liujuan

    2016-01-01

    In this paper, we propose a deep architecture to dynamically learn the most discriminative features from data for both single-cell and object tracking in computational biology and computer vision. Firstly, the discriminative features are automatically learned via a convolutional deep belief network (CDBN). Secondly, we design a simple yet effective method to transfer features learned by CDBNs on generic-purpose source tasks to the object tracking tasks using only a limited amount of training data. Finally, to alleviate the tracker drifting problem caused by model updating, we jointly consider three different types of positive samples. Extensive experiments validate the robustness and effectiveness of the proposed method. PMID:27847827

  17. Mobility-Aware Caching and Computation Offloading in 5G Ultra-Dense Cellular Networks

    PubMed Central

    Chen, Min; Hao, Yixue; Qiu, Meikang; Song, Jeungeun; Wu, Di; Humar, Iztok

    2016-01-01

    Recent trends show that Internet traffic is increasingly dominated by content, accompanied by exponential traffic growth. To cope with this phenomenon, network caching is introduced to utilize the storage capacity of diverse network devices. In this paper, we first summarize four basic caching placement strategies, i.e., local caching, Device-to-Device (D2D) caching, Small cell Base Station (SBS) caching, and Macrocell Base Station (MBS) caching. However, much of the research so far has ignored the impact of user mobility. Therefore, taking the effect of user mobility into consideration, we propose a joint mobility-aware caching and SBS density placement scheme (MS caching). In addition, differences and relationships between caching and computation offloading are discussed. We present a design for hybrid computation offloading and support it with experimental results, which demonstrate improved performance in terms of energy cost. Finally, we discuss the design of an incentive mechanism that considers network dynamics, differentiated users' quality of experience (QoE) and the heterogeneity of mobile terminals in terms of caching and computing capabilities. PMID:27347975

  18. National resource for computation in chemistry, phase I: evaluation and recommendations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1980-05-01

    The National Resource for Computation in Chemistry (NRCC) was inaugurated at the Lawrence Berkeley Laboratory (LBL) in October 1977, with joint funding by the Department of Energy (DOE) and the National Science Foundation (NSF). The chief activities of the NRCC include: assembling a staff of eight postdoctoral computational chemists, establishing an office complex at LBL, purchasing a midi-computer and graphics display system, administering grants of computer time, conducting nine workshops in selected areas of computational chemistry, compiling a library of computer programs with adaptations and improvements, initiating a software distribution system, providing user assistance and consultation on request. This report presents assessments and recommendations of an Ad Hoc Review Committee appointed by the DOE and NSF in January 1980. The recommendations are that NRCC should: (1) not fund grants for computing time or research but leave that to the relevant agencies, (2) continue the Workshop Program in a mode similar to Phase I, (3) abandon in-house program development and establish instead a competitive external postdoctoral program in chemistry software development administered by the Policy Board and Director, and (4) not attempt a software distribution system (leaving that function to the QCPE). Furthermore, (5) DOE should continue to make its computational facilities available to outside users (at normal cost rates) and should find some way to allow the chemical community to gain occasional access to a CRAY-level computer.

  19. National network television news coverage of contraception - a content analysis.

    PubMed

    Patton, Elizabeth W; Moniz, Michelle H; Hughes, Lauren S; Buis, Lorraine; Howell, Joel

    2017-01-01

    The objective was to describe and analyze national network television news framing of contraception, recognizing that onscreen news can influence the public's knowledge and beliefs. We used the Vanderbilt Television News Archives and the LexisNexis Database to obtain video and print transcripts of all relevant national network television news segments covering contraception from January 2010 to June 2014. We conducted a content analysis of 116 TV news segments covering contraception during the rollout of the Affordable Care Act. Segments were quantitatively coded for contraceptive methods covered, story sources used, and inclusion of medical and nonmedical content (intercoder reliability using Krippendorff's alpha ranged from 0.6 to 1 for coded categories). Most (55%) news stories focused on contraception in general rather than specific methods. The most effective contraceptive methods were rarely discussed (implant, 1%; intrauterine device, 4%). The most frequently used sources were political figures (40%), advocates (25%), the general public (25%) and Catholic Church leaders (16%); medical professionals (11%) and health researchers (4%) appeared in a minority of stories. A minority of stories (31%) featured medical content. National network news coverage of contraception frequently frames contraception in political and social terms and uses nonmedical figures such as politicians and church leaders as sources. This focus deemphasizes the public health aspects of contraception, so medical professionals and health content are rarely featured. Media coverage of contraception may influence patients' views about contraception. Understanding the content, sources and medical accuracy of current media portrayals of contraception may enable health care professionals to dispel popular misperceptions. Published by Elsevier Inc.

  20. Summer Learning. National Dropout Prevention Center/Network Newsletter. Volume 21, Number 3

    ERIC Educational Resources Information Center

    Duckenfield, Marty, Ed.

    2010-01-01

    The "National Dropout Prevention Newsletter" is published quarterly by the National Dropout Prevention Center/Network. This issue contains the following articles: (1) A New Vision of Summer Learning (Brenda McLaughlin); (2) Using Summers More Strategically to Bridge the 8th-9th Grade Transition (Brenda McLaughlin and Hillary Hardt); (3)…

  1. Measuring a year of child pornography trafficking by U.S. computers on a peer-to-peer network.

    PubMed

    Wolak, Janis; Liberatore, Marc; Levine, Brian Neil

    2014-02-01

    We used data gathered via investigative "RoundUp" software to measure a year of online child pornography (CP) trafficking activity by U.S. computers on the Gnutella peer-to-peer network. The data include millions of observations of Internet Protocol addresses sharing known CP files, identified as such in previous law enforcement investigations. We found that 244,920 U.S. computers shared 120,418 unique known CP files on Gnutella during the study year. More than 80% of these computers shared fewer than 10 such files during the study year or shared files for fewer than 10 days. However, less than 1% of computers (n=915) made high annual contributions to the number of known CP files available on the network (100 or more files). If law enforcement arrested the operators of these high-contribution computers and took their files offline, the number of distinct known CP files available in the P2P network could be reduced by as much as 30%. Our findings indicate widespread low-level CP trafficking by U.S. computers in one peer-to-peer network, while a small percentage of computers made high contributions to the problem. However, our measures were not comprehensive and should be considered lower-bound estimates. Nonetheless, our findings show that data can be systematically gathered and analyzed to develop an empirical grasp of the scope and characteristics of CP trafficking on peer-to-peer networks. Such measurements can be used to combat the problem. Further, investigative software tools can be used strategically to help law enforcement prioritize investigations. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. Privacy and the National Information Infrastructure.

    ERIC Educational Resources Information Center

    Rotenberg, Marc

    1994-01-01

    Explains the work of Computer Professionals for Social Responsibility regarding privacy issues in the use of electronic networks; recommends principles that should be adopted for a National Information Infrastructure privacy code; discusses the need for public education; and suggests pertinent legislative proposals. (LRW)

  3. GPs’ use of defibrillators and the national radio network in emergency primary healthcare in Norway

    PubMed Central

    Zakariassen, Erik; Hunskaar, Steinar

    2008-01-01

    Objective To study the geographic size of out-of-hours districts, the availability of defibrillators and use of the national radio network in Norway. Design Survey. Setting The emergency primary healthcare system in Norway. Subjects A total of 282 host municipalities responsible for 260 out-of-hours districts. Main outcome measures Size of out-of-hours districts, use of national radio network and access to a defibrillator in emergency situations. Results The out-of-hours districts have a wide range of areas, which gives a large variation in driving time for doctors on call. The median longest transport time for doctors in Norway is 45 minutes. In 46% of out-of-hours districts doctors bring their own defibrillator on emergency callouts. Doctors always use the national radio network in 52% of out-of-hours districts. Use of the radio network and access to a defibrillator are significantly greater in out-of-hours districts with a host municipality of fewer than 5000 inhabitants compared with host municipalities of more than 20 000 inhabitants. Conclusion In half of out-of-hours districts doctors on call always use the national radio network. Doctors in out-of-hours districts with a host municipality of fewer than 5000 inhabitants are in a better state of readiness to attend an emergency, compared with doctors working in larger host municipalities. PMID:18570012

  4. Offdiagonal complexity: A computationally quick complexity measure for graphs and networks

    NASA Astrophysics Data System (ADS)

    Claussen, Jens Christian

    2007-02-01

    A vast variety of biological, social, and economical networks shows topologies drastically differing from random graphs; yet the quantitative characterization remains unsatisfactory from a conceptual point of view. Motivated from the discussion of small scale-free networks, a biased link distribution entropy is defined, which takes an extremum for a power-law distribution. This approach is extended to the node-node link cross-distribution, whose nondiagonal elements characterize the graph structure beyond the link distribution, clustering coefficient, and average path length. From here a simple (and computationally cheap) complexity measure can be defined. This offdiagonal complexity (OdC) is proposed as a novel measure to characterize the complexity of an undirected graph, or network. While OdC is zero both for regular lattices and fully connected networks, it takes a moderately low value for a random graph and shows high values for apparently complex structures such as scale-free networks and hierarchical trees. The OdC approach is applied to the Helicobacter pylori protein interaction network and randomly rewired surrogates.
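A simplified reading of the construction can be sketched as follows: compute each node's degree, bin every link by the difference of its endpoint degrees (the offdiagonals of the node-node link cross-distribution), and take the entropy of that band distribution. This is a hedged approximation of Claussen's measure, not the paper's exact normalization:

```python
import math
from collections import defaultdict

def offdiagonal_complexity(edges):
    """Sketch of offdiagonal complexity (OdC) for an undirected graph:
    the entropy of the diagonal-band sums of the degree-degree link
    cross-distribution. `edges` is a list of (u, v) pairs."""
    # node degrees
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    # band[k] counts links whose endpoint degrees differ by k
    band = defaultdict(float)
    for u, v in edges:
        m, n = sorted((deg[u], deg[v]))
        band[n - m] += 1.0
    total = sum(band.values())
    # entropy of the normalized offdiagonal band distribution
    return -sum((a / total) * math.log(a / total) for a in band.values() if a > 0)
```

In a regular lattice or a fully connected graph every link joins equal-degree nodes, so all mass sits in the zero band and the entropy vanishes, matching the behavior the abstract describes.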

  5. NASA/DOD Aerospace Knowledge Diffusion Research Project. Report 35: The use of computer networks in aerospace engineering

    NASA Technical Reports Server (NTRS)

    Bishop, Ann P.; Pinelli, Thomas E.

    1995-01-01

    This study used survey research to explore and describe the use of computer networks by aerospace engineers. The study population included 2000 randomly selected U.S. aerospace engineers and scientists who subscribed to Aerospace Engineering. A total of 950 usable questionnaires were received by the cutoff date of July 1994. Study results contribute to existing knowledge about both computer network use and the nature of engineering work and communication. We found that 74 percent of mail survey respondents personally used computer networks. Electronic mail, file transfer, and remote login were the most widely used applications. Networks were used less often than face-to-face interactions in performing work tasks, but about equally with reading and telephone conversations, and more often than mail or fax. Network use was associated with a range of technical, organizational, and personal factors: lack of compatibility across systems, cost, inadequate access and training, and unwillingness to embrace new technologies and modes of work appear to discourage network use. The greatest positive impacts from networking appear to be increases in the amount of accurate and timely information available, better exchange of ideas across organizational boundaries, and enhanced work flexibility, efficiency, and quality. Involvement with classified or proprietary data and type of organizational structure did not distinguish network users from nonusers. The findings can be used by people involved in the design and implementation of networks in engineering communities to inform the development of more effective networking systems, services, and policies.

  6. A knowledge-based system with learning for computer communication network design

    NASA Technical Reports Server (NTRS)

    Pierre, Samuel; Hoang, Hai Hoc; Tropper-Hausen, Evelyne

    1990-01-01

    Computer communication network design is well known to be a complex, hard problem. For that reason, the most effective methods used to solve it are heuristic. The weaknesses of these techniques are listed, and a new approach based on artificial intelligence for solving this problem is presented. This approach is particularly recommended for large packet-switched communication networks, in the sense that it permits a high degree of reliability and offers a very flexible environment for dealing with many relevant design parameters, such as link cost, link capacity, and message delay.

  7. The International Postal Network and Other Global Flows as Proxies for National Wellbeing.

    PubMed

    Hristova, Desislava; Rutherford, Alex; Anson, Jose; Luengo-Oroz, Miguel; Mascolo, Cecilia

    2016-01-01

    The digital exhaust left by flows of physical and digital commodities provides a rich measure of the nature, strength and significance of relationships between countries in the global network. With this work, we examine how these traces and the network structure can reveal the socioeconomic profile of different countries. We take into account multiple international networks of physical and digital flows, including the previously unexplored international postal network. By measuring the position of each country in the Trade, Postal, Migration, International Flights, IP and Digital Communications networks, we are able to build proxies for a number of crucial socioeconomic indicators such as GDP per capita and the Human Development Index ranking along with twelve other indicators used as benchmarks of national well-being by the United Nations and other international organisations. In this context, we have also proposed and evaluated a global connectivity degree measure applying multiplex theory across the six networks that accounts for the strength of relationships between countries. We conclude by showing how countries with shared community membership over multiple networks have similar socioeconomic profiles. Combining multiple flow data sources can help understand the forces which drive economic activity on a global level. Such an ability to infer proxy indicators in a context of incomplete information is extremely timely in light of recent discussions on measurement of indicators relevant to the Sustainable Development Goals.

  8. The International Postal Network and Other Global Flows as Proxies for National Wellbeing

    PubMed Central

    Rutherford, Alex; Anson, Jose; Luengo-Oroz, Miguel; Mascolo, Cecilia

    2016-01-01

    The digital exhaust left by flows of physical and digital commodities provides a rich measure of the nature, strength and significance of relationships between countries in the global network. With this work, we examine how these traces and the network structure can reveal the socioeconomic profile of different countries. We take into account multiple international networks of physical and digital flows, including the previously unexplored international postal network. By measuring the position of each country in the Trade, Postal, Migration, International Flights, IP and Digital Communications networks, we are able to build proxies for a number of crucial socioeconomic indicators such as GDP per capita and the Human Development Index ranking along with twelve other indicators used as benchmarks of national well-being by the United Nations and other international organisations. In this context, we have also proposed and evaluated a global connectivity degree measure applying multiplex theory across the six networks that accounts for the strength of relationships between countries. We conclude by showing how countries with shared community membership over multiple networks have similar socioeconomic profiles. Combining multiple flow data sources can help understand the forces which drive economic activity on a global level. Such an ability to infer proxy indicators in a context of incomplete information is extremely timely in light of recent discussions on measurement of indicators relevant to the Sustainable Development Goals. PMID:27248142

  9. A Direct Position-Determination Approach for Multiple Sources Based on Neural Network Computation.

    PubMed

    Chen, Xin; Wang, Ding; Yin, Jiexin; Wu, Ying

    2018-06-13

    The most widely used localization technology is the two-step method that localizes transmitters by measuring one or more specified positioning parameters. Direct position determination (DPD) is a promising technique that directly localizes transmitters from sensor outputs and can offer superior localization performance. However, existing DPD algorithms such as maximum likelihood (ML)-based and multiple signal classification (MUSIC)-based estimations are computationally expensive, making it difficult to satisfy real-time demands. To solve this problem, we propose the use of a modular neural network for multiple-source DPD. In this method, the area of interest is divided into multiple sub-areas. Multilayer perceptron (MLP) neural networks are employed to detect the presence of a source in a sub-area and filter sources in other sub-areas, and radial basis function (RBF) neural networks are utilized for position estimation. Simulation results show that a number of appropriately trained neural networks can be successfully used for DPD. The performance of the proposed MLP-MLP-RBF method is comparable to the performance of the conventional MUSIC-based DPD algorithm for various signal-to-noise ratios and signal power ratios. Furthermore, the MLP-MLP-RBF network is less computationally intensive than the classical DPD algorithm and is therefore an attractive choice for real-time applications.
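The RBF stage of the proposed pipeline amounts to regression from filtered sensor features to source coordinates. The following toy sketch is not the paper's MLP-MLP-RBF system; the function names, the 1-D stand-in mapping, and the basis parameters are all assumptions, chosen only to illustrate how a Gaussian RBF layer with a least-squares readout can act as a position estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D localization task: map a normalized sensor feature
# to a source position inside the (sub-)area of interest [0, 1].
centers = np.linspace(0.0, 1.0, 10)   # RBF centers tiling the area
width = 0.15                          # shared Gaussian width (assumed)

def rbf_features(x):
    """Gaussian RBF activations for inputs x (shape (n,) -> (n, 10))."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

# Training data: the true feature -> position relation is the identity
# here, standing in for the mapping one would learn from simulations.
x_train = rng.uniform(0.0, 1.0, 200)
y_train = x_train.copy()

# Linear readout fitted by least squares (the usual RBF training step).
w, *_ = np.linalg.lstsq(rbf_features(x_train), y_train, rcond=None)

def estimate_position(x):
    return rbf_features(np.atleast_1d(x)) @ w
```

In the paper's scheme an MLP first gates which sub-area a source falls in; a per-sub-area estimator of this regression type then refines the position, which is what keeps the whole pipeline cheaper than a grid-search DPD.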

  10. Computer integration of engineering design and production: A national opportunity

    NASA Astrophysics Data System (ADS)

    1984-10-01

    The National Aeronautics and Space Administration (NASA), as a purchaser of a variety of manufactured products, including complex space vehicles and systems, clearly has a stake in the advantages of computer-integrated manufacturing (CIM). Two major NASA objectives are to launch a Manned Space Station by 1992 with a budget of $8 billion, and to be a leader in the development and application of productivity-enhancing technology. At the request of NASA, a National Research Council committee visited five companies that have been leaders in using CIM. Based on these case studies, technical, organizational, and financial issues that influence computer integration are described, guidelines for its implementation in industry are offered, and the use of CIM to manage the space station program is recommended.

  11. Computer integration of engineering design and production: A national opportunity

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The National Aeronautics and Space Administration (NASA), as a purchaser of a variety of manufactured products, including complex space vehicles and systems, clearly has a stake in the advantages of computer-integrated manufacturing (CIM). Two major NASA objectives are to launch a Manned Space Station by 1992 with a budget of $8 billion, and to be a leader in the development and application of productivity-enhancing technology. At the request of NASA, a National Research Council committee visited five companies that have been leaders in using CIM. Based on these case studies, technical, organizational, and financial issues that influence computer integration are described, guidelines for its implementation in industry are offered, and the use of CIM to manage the space station program is recommended.

  12. A Nanotechnology-Ready Computing Scheme based on a Weakly Coupled Oscillator Network

    NASA Astrophysics Data System (ADS)

    Vodenicarevic, Damir; Locatelli, Nicolas; Abreu Araujo, Flavio; Grollier, Julie; Querlioz, Damien

    2017-03-01

    With conventional transistor technologies reaching their limits, alternative computing schemes based on novel technologies are currently gaining considerable interest. Notably, promising computing approaches have proposed to leverage the complex dynamics emerging in networks of coupled oscillators based on nanotechnologies. The physical implementation of such architectures remains a true challenge, however, as most proposed ideas are not robust to nanotechnology devices’ non-idealities. In this work, we propose and investigate the implementation of an oscillator-based architecture, which can be used to carry out pattern recognition tasks, and which is tailored to the specificities of nanotechnologies. This scheme relies on a weak coupling between oscillators, and does not require a fine tuning of the coupling values. After evaluating its reliability under the severe constraints associated to nanotechnologies, we explore the scalability of such an architecture, suggesting its potential to realize pattern recognition tasks using limited resources. We show that it is robust to issues like noise, variability and oscillator non-linearity. Defining network optimization design rules, we show that nano-oscillator networks could be used for efficient cognitive processing.
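The dynamics such schemes rely on can be illustrated with the standard Kuramoto model of weakly coupled phase oscillators (a generic stand-in, not the paper's specific nano-oscillator architecture): oscillators with similar natural frequencies phase-lock under weak coupling, while dissimilar ones do not, and the degree of synchronization can be read out as a match signal.

```python
import cmath
import math

def simulate_kuramoto(omegas, k, steps=2000, dt=0.01):
    """Weakly coupled Kuramoto network:
        d(theta_i)/dt = omega_i + (k/N) * sum_j sin(theta_j - theta_i),
    integrated with forward Euler via the mean-field identity
    (k/N) * sum_j sin(theta_j - theta_i) = k * r * sin(psi - theta_i).
    Returns the final order parameter r in [0, 1]; r near 1 means the
    network has synchronized."""
    n = len(omegas)
    thetas = [0.1 * i for i in range(n)]  # spread initial phases
    for _ in range(steps):
        mean = sum(cmath.exp(1j * t) for t in thetas) / n
        r, psi = abs(mean), cmath.phase(mean)
        thetas = [t + dt * (w + k * r * math.sin(psi - t))
                  for t, w in zip(thetas, omegas)]
    return abs(sum(cmath.exp(1j * t) for t in thetas) / n)
```

Encoding pattern features as natural frequencies then turns pattern matching into a synchronization test, which is robust to the kind of device variability the abstract emphasizes.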

  13. A Nanotechnology-Ready Computing Scheme based on a Weakly Coupled Oscillator Network

    PubMed Central

    Vodenicarevic, Damir; Locatelli, Nicolas; Abreu Araujo, Flavio; Grollier, Julie; Querlioz, Damien

    2017-01-01

    With conventional transistor technologies reaching their limits, alternative computing schemes based on novel technologies are currently gaining considerable interest. Notably, promising computing approaches have proposed to leverage the complex dynamics emerging in networks of coupled oscillators based on nanotechnologies. The physical implementation of such architectures remains a true challenge, however, as most proposed ideas are not robust to nanotechnology devices’ non-idealities. In this work, we propose and investigate the implementation of an oscillator-based architecture, which can be used to carry out pattern recognition tasks, and which is tailored to the specificities of nanotechnologies. This scheme relies on a weak coupling between oscillators, and does not require a fine tuning of the coupling values. After evaluating its reliability under the severe constraints associated to nanotechnologies, we explore the scalability of such an architecture, suggesting its potential to realize pattern recognition tasks using limited resources. We show that it is robust to issues like noise, variability and oscillator non-linearity. Defining network optimization design rules, we show that nano-oscillator networks could be used for efficient cognitive processing. PMID:28322262

  14. A Nanotechnology-Ready Computing Scheme based on a Weakly Coupled Oscillator Network.

    PubMed

    Vodenicarevic, Damir; Locatelli, Nicolas; Abreu Araujo, Flavio; Grollier, Julie; Querlioz, Damien

    2017-03-21

    With conventional transistor technologies reaching their limits, alternative computing schemes based on novel technologies are currently gaining considerable interest. Notably, promising computing approaches have proposed to leverage the complex dynamics emerging in networks of coupled oscillators based on nanotechnologies. The physical implementation of such architectures remains a true challenge, however, as most proposed ideas are not robust to nanotechnology devices' non-idealities. In this work, we propose and investigate the implementation of an oscillator-based architecture, which can be used to carry out pattern recognition tasks, and which is tailored to the specificities of nanotechnologies. This scheme relies on a weak coupling between oscillators, and does not require a fine tuning of the coupling values. After evaluating its reliability under the severe constraints associated to nanotechnologies, we explore the scalability of such an architecture, suggesting its potential to realize pattern recognition tasks using limited resources. We show that it is robust to issues like noise, variability and oscillator non-linearity. Defining network optimization design rules, we show that nano-oscillator networks could be used for efficient cognitive processing.

  15. Improving a Computer Networks Course Using the Partov Simulation Engine

    ERIC Educational Resources Information Center

    Momeni, B.; Kharrazi, M.

    2012-01-01

    Computer networks courses are hard to teach as there are many details in the protocols and techniques involved that are difficult to grasp. Employing programming assignments as part of the course helps students to obtain a better understanding and gain further insight into the theoretical lectures. In this paper, the Partov simulation engine and…

  16. Optimal control strategy for a novel computer virus propagation model on scale-free networks

    NASA Astrophysics Data System (ADS)

    Zhang, Chunming; Huang, Haitao

    2016-06-01

    This paper aims to study the combined impact of reinstalling the system and network topology on the spread of computer viruses over the Internet. Based on a scale-free network, this paper proposes a novel computer virus propagation model, the SLBOS model. A systematic analysis of this new model shows that the virus-free equilibrium is globally asymptotically stable when its spreading threshold is less than one; conversely, it is proved that the viral equilibrium is permanent if the spreading threshold is greater than one. Then, the impacts of different model parameters on the spreading threshold are analyzed. Next, an optimally controlled SLBOS epidemic model on complex networks is also studied. We prove that an optimal control exists for the control problem. Some numerical simulations are finally given to illustrate the main results.
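Why the scale-free topology matters can be illustrated with the standard heterogeneous mean-field threshold for SIS-type spreading, lambda_c = &lt;k&gt;/&lt;k^2&gt; (a generic estimate, not the SLBOS model's own threshold expression): heavy-tailed degree distributions inflate the second moment and push the threshold toward zero.

```python
def spreading_threshold(degrees):
    """Heterogeneous mean-field estimate of the epidemic threshold for a
    network with the given degree sequence: lambda_c = <k> / <k^2>.
    This is the classic SIS-type result; it is used here only to show
    the qualitative dependence of a spreading threshold on topology."""
    n = len(degrees)
    k1 = sum(degrees) / n
    k2 = sum(d * d for d in degrees) / n
    return k1 / k2

# Two degree sequences with the same mean degree <k> = 4: one
# homogeneous, one heavy-tailed with a few hub nodes.
homogeneous = [4] * 1000
heavy_tailed = [2] * 900 + [22] * 100
```

The hub-dominated sequence yields a much smaller threshold, which is the mechanism behind the well-known fragility of scale-free networks to epidemic spreading.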

  17. Memristor-Based Analog Computation and Neural Network Classification with a Dot Product Engine.

    PubMed

    Hu, Miao; Graves, Catherine E; Li, Can; Li, Yunning; Ge, Ning; Montgomery, Eric; Davila, Noraica; Jiang, Hao; Williams, R Stanley; Yang, J Joshua; Xia, Qiangfei; Strachan, John Paul

    2018-03-01

    Using memristor crossbar arrays to accelerate computations is a promising approach to efficiently implement algorithms in deep neural networks. Early demonstrations, however, have been limited to simulations or small-scale problems, primarily due to materials and device challenges that limit the size of memristor crossbar arrays that can be reliably programmed to stable analog values; overcoming those limits is the focus of the current work. High-precision analog tuning and control of memristor cells across a 128 × 64 array is demonstrated, and the resulting vector-matrix multiplication (VMM) computing precision is evaluated. Single-layer neural network inference is performed in these arrays, and the performance is compared to a digital approach. The memristor computing system used here reaches a VMM accuracy equivalent to 6 bits, and an 89.9% recognition accuracy is achieved for the 10k MNIST handwritten digit test set. Forecasts show that with integrated (on-chip) and scaled memristors, a computational efficiency greater than 100 trillion operations per second per watt is possible. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
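The dot-product engine idea rests on Ohm's and Kirchhoff's laws: programming weights as conductances G makes the column currents of a crossbar equal to G^T V for applied row voltages V. A numpy sketch with an illustrative conductance range and a 64-level (roughly 6-bit-equivalent) programming precision; the mapping and constants are assumptions, not the authors' hardware calibration:

```python
import numpy as np

def dot_product_engine(weights, x, g_min=1e-6, g_max=1e-4, levels=64):
    """Analog VMM on an idealized memristor crossbar.

    weights : (n_inputs, n_outputs) matrix of real weights
    x       : input vector applied as row voltages
    Conductance window [g_min, g_max] and `levels` discrete states
    model the finite analog tuning precision."""
    w = np.asarray(weights, dtype=float)
    v = np.asarray(x, dtype=float)
    lo, hi = w.min(), w.max()
    # linear map of weights onto the usable conductance window
    g = g_min + (w - lo) / (hi - lo) * (g_max - g_min)
    # finite programming precision: snap each cell to one of `levels` states
    step = (g_max - g_min) / (levels - 1)
    g = g_min + np.round((g - g_min) / step) * step
    # Kirchhoff sums of Ohmic currents give the column readout I = G^T v
    currents = g.T @ v
    # invert the weight -> conductance map to recover the dot products
    s = v.sum()
    return (currents - g_min * s) / (g_max - g_min) * (hi - lo) + lo * s
```

The quantization step is what caps the effective precision: with 64 programmable states per cell, the recovered products carry roughly 6 bits of accuracy, mirroring the figure reported in the abstract.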

  18. Digital optical computers at the optoelectronic computing systems center

    NASA Technical Reports Server (NTRS)

    Jordan, Harry F.

    1991-01-01

    The Digital Optical Computing Program within the National Science Foundation Engineering Research Center for Opto-electronic Computing Systems has as its specific goal research on optical computing architectures suitable for use at the highest possible speeds. The program can be targeted toward exploiting the time domain because other programs in the Center are pursuing research on parallel optical systems, exploiting optical interconnection and optical devices and materials. Using a general-purpose computing architecture as the focus, we are developing design techniques, tools, and architectures for operation at the speed-of-light limit. Experimental work is being done with the relatively low-speed components currently available, but with architectures that will scale up in speed as faster devices are developed. The design algorithms and tools developed for a general-purpose, stored-program computer are being applied to other systems such as optimally controlled optical communication networks.

  19. [Forensic evidence-based medicine in computer communication networks].

    PubMed

    Qiu, Yun-Liang; Peng, Ming-Qi

    2013-12-01

    As an important component of judicial expertise, forensic science is broad and highly specialized. With the development of network technology, the growth of information resources, and the improvement of the public's legal consciousness, forensic scientists encounter many new problems and are required to meet higher evidentiary standards in litigation. In view of this, an evidence-based concept should be established in forensic medicine. We should find the most suitable methods in forensic science and related fields to solve specific problems in an evidence-based mode. Evidence-based practice can solve problems in the legal medical field, and it will play a great role in promoting the progress and development of forensic science. This article reviews the basic theory of evidence-based medicine and its effects, approaches, methods, and evaluation in forensic medicine in order to discuss the applied value of forensic evidence-based medicine in computer communication networks.

  20. Experimental system for computer network via satellite /CS/. III - Network control processor

    NASA Astrophysics Data System (ADS)

    Kakinuma, Y.; Ito, A.; Takahashi, H.; Uchida, K.; Matsumoto, K.; Mitsudome, H.

    1982-03-01

    A network control processor (NCP) generates traffic, controls links, and controls the transmission of bursts. The NCP executes protocols, monitors experiments, and gathers and compiles measurement data; these programs run on a minicomputer (MELCOM 70/40) with 512 KB of memory. In the experiment, the NCP acts as a traffic generator in place of a host computer; for this purpose, 15 simulated stations are realized in software at each user station. This paper describes the configuration of the NCP and the implementation of the protocols for the experimental system.

  1. An historical overview of the National Network of Libraries of Medicine, 1985-2015.

    PubMed

    Speaker, Susan L

    2018-04-01

    The National Network of Libraries of Medicine (NNLM), established as the Regional Medical Library Program in 1965, has a rich and remarkable history. The network's first twenty years were documented in a detailed 1987 history by Alison Bunting, AHIP, FMLA. This article traces the major trends in the network's development since then: reconceiving the Regional Medical Library staff as a "field force" for developing, marketing, and distributing a growing number of National Library of Medicine (NLM) products and services; subsequent expansion of outreach to health professionals who are unaffiliated with academic medical centers, particularly those in public health; the advent of the Internet during the 1990s, which brought the migration of NLM and NNLM resources and services to the World Wide Web, and a mandate to encourage and facilitate Internet connectivity in the network; and the further expansion of the NLM and NNLM mission to include providing consumer health resources to satisfy growing public demand. The concluding section discusses the many challenges that NNLM staff faced as they transformed the network from a system that served mainly academic medical researchers to a larger, denser organization that offers health information resources to everyone.

  2. Using the D-Wave 2X Quantum Computer to Explore the Formation of Global Terrorist Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ambrosiano, John Joseph; Roberts, Randy Mark; Sims, Benjamin Hayden

    Social networks with signed edges (+/-) play an important role in an area of social network theory called structural balance. In these networks, edges represent relationships that are labeled as either friendly (+) or hostile (-). A signed social network is balanced only if every cycle of three or more nodes in the graph has an even number of hostile edges. A fundamental property of a balanced network is that it can be cleanly divided into two factions, where all relationships within each faction are friendly and all relationships between members of different factions are hostile. The more unbalanced a network is, the more edges will fail to adhere to this rule, making factions more ambiguous. Social theory suggests unbalanced networks should be unstable, a finding that has been supported by research on gangs, which shows that unbalanced relationships are associated with greater violence, possibly due to this increased ambiguity about factional allegiances (Nakamura et al.). One way to estimate the imbalance in a network, if only edge relationships are known, is to assign nodes to factions so as to minimize the number of violations of the edge rule described above. This problem is known to be computationally NP-hard. However, Facchetti et al. have pointed out that it is equivalent to an Ising model with a Hamiltonian that effectively counts the number of edge-rule violations. Therefore, finding the assignment of factions that minimizes the energy of the equivalent Ising system yields an estimate of the imbalance in the network. Based on this Ising-model equivalence of the signed social network balance problem, we have used the D-Wave 2X quantum annealing computer to explore some aspects of signed social networks. Because connectivity in the D-Wave computer is limited to its particular native topology, arbitrary networks cannot be represented directly. Rather, they must be “embedded” using a technique in which multiple qubits are chained together with special
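    The faction-assignment formulation above can be checked directly on small graphs by brute force: enumerate all 2^n spin assignments and count edge-rule violations, which is exactly the quantity the Ising Hamiltonian H = -Σ J_ij s_i s_j penalizes. The exhaustive sketch below is illustrative only (the D-Wave approach replaces the enumeration with quantum annealing, since the problem is NP-hard):

```python
from itertools import product

def frustration(nodes, signed_edges):
    """Minimum number of edge-rule violations over all two-faction splits.

    signed_edges: dict {(u, v): +1 (friendly) or -1 (hostile)}.
    A friendly edge is violated when it crosses factions; a hostile edge
    when it lies within one. Equivalent to minimizing the Ising energy
    H = -sum_ij J_ij * s_i * s_j with spins s in {+1, -1}.
    """
    best = None
    for spins in product([+1, -1], repeat=len(nodes)):
        s = dict(zip(nodes, spins))
        viol = sum(1 for (u, v), j in signed_edges.items() if j * s[u] * s[v] < 0)
        if best is None or viol < best:
            best = viol
    return best

# A triangle with an even number of hostile edges is balanced (0 violations);
# an all-hostile triangle is frustrated and must violate at least one edge.
balanced = frustration("abc", {("a", "b"): 1, ("a", "c"): -1, ("b", "c"): -1})
frustrated = frustration("abc", {("a", "b"): -1, ("a", "c"): -1, ("b", "c"): -1})
```

The balanced triangle splits cleanly ({a, b} vs. {c}), while no split of the all-hostile triangle avoids a within-faction hostile edge, so its minimum violation count is one.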

  3. Survival in Very Preterm Infants: An International Comparison of 10 National Neonatal Networks.

    PubMed

    Helenius, Kjell; Sjörs, Gunnar; Shah, Prakesh S; Modi, Neena; Reichman, Brian; Morisaki, Naho; Kusuda, Satoshi; Lui, Kei; Darlow, Brian A; Bassler, Dirk; Håkansson, Stellan; Adams, Mark; Vento, Maximo; Rusconi, Franca; Isayama, Tetsuya; Lee, Shoo K; Lehtonen, Liisa

    2017-12-01

    To compare survival rates and age at death among very preterm infants in 10 national and regional neonatal networks. A cohort study of very preterm infants, born between 24 and 29 weeks' gestation and weighing <1500 g, admitted to participating neonatal units between 2007 and 2013 in the International Network for Evaluating Outcomes of Neonates. Survival was compared by using standardized ratios (SRs), comparing survival in each network with the survival estimate for the whole population. Network populations differed with respect to rates of cesarean birth, exposure to antenatal steroids, and birth in nontertiary hospitals. Network SRs for survival were highest in Japan (SR: 1.10; 99% confidence interval: 1.08-1.13) and lowest in Spain (SR: 0.88; 99% confidence interval: 0.85-0.90). Overall survival ranged from 78% to 93% among networks, with the largest differences at 24 weeks' gestation (range 35%-84%). Survival rates increased and differences between networks diminished with increasing gestational age (GA) (range 92%-98% at 29 weeks' gestation); yet relative differences in survival followed a similar pattern at all GAs. The median age at death varied from 4 days to 13 days across networks. The network ranking of survival rates for very preterm infants remained largely unchanged as GA increased; however, survival rates showed marked variation at lower GAs. The median age at death also varied among networks. These findings warrant further assessment of the representativeness of the study populations, organization of perinatal services, national guidelines, philosophy of care at extreme GAs, and resources used for decision-making. Copyright © 2017 by the American Academy of Pediatrics.
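    A standardized ratio of this kind is, in essence, observed survivors divided by the survivors expected under pooled stratum-specific rates. The sketch below is a simplified, hypothetical illustration with invented numbers; the study's SRs are computed per network with gestational-age adjustment and 99% confidence intervals, which are omitted here:

```python
def expected_survivors(strata):
    """Expected survivor count from pooled stratum-specific rates.

    strata: list of (n_admitted_in_network_stratum, pooled_survival_rate),
    e.g. one entry per gestational-age week.
    """
    return sum(n * rate for n, rate in strata)

def standardized_ratio(observed_survivors, expected):
    """SR = observed / expected; SR > 1 means better-than-pooled survival."""
    return observed_survivors / expected

# Hypothetical network: 100 infants at 24 wk (pooled rate 0.60) and
# 200 infants at 28 wk (pooled rate 0.95); the network observed 260 survivors.
exp_count = expected_survivors([(100, 0.60), (200, 0.95)])
sr = standardized_ratio(260, exp_count)
```

Here the expected count is 250, so the hypothetical network's SR of 1.04 indicates survival 4% above what the pooled rates predict, before any confidence interval is attached.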

  4. NASA/DOD Aerospace Knowledge Diffusion Research Project. Paper 39: The role of computer networks in aerospace engineering

    NASA Technical Reports Server (NTRS)

    Bishop, Ann P.; Pinelli, Thomas E.

    1994-01-01

    This paper presents selected results from an empirical investigation into the use of computer networks in aerospace engineering. Such networks allow aerospace engineers to communicate with people and access remote resources through electronic mail, file transfer, and remote log-in. The study drew its subjects from private sector, government and academic organizations in the U.S. aerospace industry. Data presented here were gathered in a mail survey, conducted in Spring 1993, that was distributed to aerospace engineers performing a wide variety of jobs. Results from the mail survey provide a snapshot of the current use of computer networks in the aerospace industry, suggest factors associated with the use of networks, and identify perceived impacts of networks on aerospace engineering work and communication.

  5. Update on Plans to Establish a National Phenology Network in the U.S.A.

    NASA Astrophysics Data System (ADS)

    Betancourt, J.; Schwartz, M.; Breshears, D.; Cayan, D.; Dettinger, M.; Inouye, D.; Post, E.; Reed, B.; Gray, S.

    2005-12-01

    The passing of the seasons is the most pervasive source of climatic and biological variability on Earth, yet phenological monitoring has been spotty worldwide. Formal phenological networks were recently established in Europe and Canada, and we are now following their lead in organizing a National Phenology Network (NPN) for the U.S.A. With support from federal agencies (NSF, USGS, NPS, USDA-FS, EPA, NOAA, NASA), on Aug. 22-26 we organized a workshop in Tucson, Arizona to begin planning a national-scale, multi-tiered phenological network. A prototype for a web-based NPN and preliminary workshop results are available at http://www.npn.uwm.edu. The main goals of NPN will be to: (1) facilitate thorough understanding of phenological phenomena, including causes and effects; (2) provide ground truthing to make the most of heavy public investment in remote sensing data; (3) allow detection and prediction of environmental change for a wide variety of applications; (4) harness the power of mass participation and engage tens of thousands of "citizen scientists" in meeting national needs in Education, Health, Commerce, Natural Resources and Agriculture; (5) develop a model system for substantive collaboration across different levels of government, academia and the private sector. Just as the national networks of weather stations and stream gauges are critical for providing weather, climate and water-related information, NPN will help safeguard and procure goods and services that ecosystems provide. We expect that NPN will consist of a four-tiered, expandable structure: 1) a backbone network linked to existing weather stations, run by recruited public observers; 2) a smaller, second tier of intensive observations, run by scientists at established research sites; 3) a much larger network of observations made by citizen scientists; and 4) remote sensing observations that can be validated with surface observations, thereby providing wall-to-wall coverage for the U.S.A.

  6. The Computer-Networked Writing Lab: One Instructor's View. ERIC Digest.

    ERIC Educational Resources Information Center

    Puccio, P. M.

    According to an instructor of basic writing in the Writing Lab at the University of Massachusetts in Amherst, he can teach differently in a computer-networked writing lab than he did in a conventional classroom. Because the room is designed to teach writing and nothing else, it offers a congenial workspace where the teacher can interact with…

  7. A computer program for the generation of logic networks from task chart data

    NASA Technical Reports Server (NTRS)

    Herbert, H. E.

    1980-01-01

    The Network Generation Program (NETGEN), which creates logic networks from task chart data is presented. NETGEN is written in CDC FORTRAN IV (Extended) and runs in a batch mode on the CDC 6000 and CYBER 170 series computers. Data is input via a two-card format and contains information regarding the specific tasks in a project. From this data, NETGEN constructs a logic network of related activities with each activity having unique predecessor and successor nodes, activity duration, descriptions, etc. NETGEN then prepares this data on two files that can be used in the Project Planning Analysis and Reporting System Batch Network Scheduling program and the EZPERT graphics program.

  8. Lawrence Livermore National Laboratories Perspective on Code Development and High Performance Computing Resources in Support of the National HED/ICF Effort

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clouse, C. J.; Edwards, M. J.; McCoy, M. G.

    2015-07-07

    Through its Advanced Scientific Computing (ASC) and Inertial Confinement Fusion (ICF) code development efforts, Lawrence Livermore National Laboratory (LLNL) provides a world-leading numerical simulation capability for the National HED/ICF program in support of the Stockpile Stewardship Program (SSP). In addition, the ASC effort provides high-performance computing platform capabilities on which these codes are run. LLNL remains committed to, and will work with, the national HED/ICF program community to help ensure that numerical simulation needs are met and to make those capabilities available, consistent with programmatic priorities and available resources.

  9. Site characterization of the national seismic network of Italy

    NASA Astrophysics Data System (ADS)

    Bordoni, Paola; Pacor, Francesca; Cultrera, Giovanna; Casale, Paolo; Cara, Fabrizio; Di Giulio, Giuseppe; Famiani, Daniela; Ladina, Chiara; PIschiutta, Marta; Quintiliani, Matteo

    2017-04-01

    The national seismic network of Italy (Rete Sismica Nazionale, RSN) run by Istituto Nazionale di Geofisica e Vulcanologia (INGV) consists of more than 400 seismic stations connected in real time to the institute data center in order to locate earthquakes for civil defense purposes. A critical issue in the performance of a network is the characterization of site condition at the recording stations. Recently INGV has started addressing this subject through the revision of all available geological and geophysical data, the acquisition of new information by means of ad-hoc field measurements and the analysis of seismic waveforms. The main effort is towards building a database, integrated with the other INGV infrastructures, designed to archive homogeneous parameters through the seismic network useful for a complete site characterization, including housing, geological, seismological and geotechnical features as well as the site class according to the European and Italian building codes. Here we present the ongoing INGV activities.

  10. Anticipated Ethics and Regulatory Challenges in PCORnet: The National Patient-Centered Clinical Research Network.

    PubMed

    Ali, Joseph; Califf, Robert; Sugarman, Jeremy

    2016-01-01

    PCORnet, the National Patient-Centered Clinical Research Network, seeks to establish a robust national health data network for patient-centered comparative effectiveness research. This article reports the results of a PCORnet survey designed to identify the ethics and regulatory challenges anticipated in network implementation. A 12-item online survey was developed by leadership of the PCORnet Ethics and Regulatory Task Force; responses were collected from the 29 PCORnet networks. The most pressing ethics issues identified related to informed consent, patient engagement, privacy and confidentiality, and data sharing. High priority regulatory issues included IRB coordination, privacy and confidentiality, informed consent, and data sharing. Over 150 IRBs and five different approaches to managing multisite IRB review were identified within PCORnet. Further empirical and scholarly work, as well as practical and policy guidance, is essential if important initiatives that rely on comparative effectiveness research are to move forward.

  11. U.S. Geological Survey external quality-assurance project report for the National Atmospheric Deposition Program / National Trends Network and Mercury Deposition Network, 2011-2012

    USGS Publications Warehouse

    Wetherbee, Gregory A.; Martin, RoseAnn

    2014-01-01

    The U.S. Geological Survey operated six distinct programs to provide external quality-assurance monitoring for the National Atmospheric Deposition Program (NADP) / National Trends Network (NTN) and Mercury Deposition Network (MDN) during 2011–2012. The field-audit program assessed the effects of onsite exposure, sample handling, and shipping on the chemistry of NTN samples; a system-blank program assessed the same effects for MDN. Two interlaboratory-comparison programs assessed the bias and variability of the chemical analysis data from the Central Analytical Laboratory and Mercury Analytical Laboratory (HAL). A blind-audit program was implemented for the MDN during 2011 to evaluate analytical bias in HAL total mercury concentration data. The co-located–sampler program was used to identify and quantify potential shifts in NADP data resulting from the replacement of original network instrumentation with new electronic recording rain gages and precipitation collectors that use optical precipitation sensors. The results indicate that NADP data continue to be of sufficient quality for the analysis of spatial distributions and time trends of chemical constituents in wet deposition across the United States. Co-located rain gage results indicate -3.7 to +6.5 percent bias in NADP precipitation-depth measurements. Co-located collector results suggest that the retrofit of the NADP networks with the new precipitation collectors could cause +10 to +36 percent shifts in NADP annual deposition values for ammonium, nitrate, and sulfate; -7.5 to +41 percent shifts for hydrogen-ion deposition; and larger shifts (-51 to +52 percent) for calcium, magnesium, sodium, potassium, and chloride. The prototype N-CON Systems bucket collector typically catches more precipitation than the NADP-approved Aerochem Metrics Model 301 collector.

  12. Planning and Establishment of a National Teledocumentation Network--Guidelines Based on the Spanish Experience.

    ERIC Educational Resources Information Center

    Mahon, F. V., Ed.

    Finding that the promotion of a national information industry can best be pursued through the planning and establishment of a national teledocumentation network, this study (based on the experiences of Spain) offers a model that may be of interest to UNESCO (United Nations Educational, Scientific and Cultural Organization) member states wishing to…

  13. The Handicap Principle for Trust in Computer Security, the Semantic Web and Social Networking

    NASA Astrophysics Data System (ADS)

    Ma, Zhanshan (Sam); Krings, Axel W.; Hung, Chih-Cheng

    Communication is a fundamental function of life, and it exists in almost all living things: from single-cell bacteria to human beings. Communication, together with competition and cooperation, are three fundamental processes in nature. Computer scientists are familiar with the study of competition or 'struggle for life' through Darwin's evolutionary theory, or even evolutionary computing. They may be equally familiar with the study of cooperation or altruism through the Prisoner's Dilemma (PD) game. However, they are likely to be less familiar with the theory of animal communication. The objective of this article is three-fold: (i) To suggest that the study of animal communication, especially the honesty (reliability) of animal communication, in which some significant advances in behavioral biology have been achieved in the last three decades, is on the verge of spawning important cross-disciplinary research similar to that generated by the study of cooperation with the PD game. One of the far-reaching advances in the field is marked by the publication of "The Handicap Principle: a Missing Piece of Darwin's Puzzle" by Zahavi (1997). The 'Handicap' principle [34][35], which states that communication signals must be costly in some proper way to be reliable (honest), is best elucidated with evolutionary games, e.g., the Sir Philip Sidney (SPS) game [23]. Accordingly, we suggest that the Handicap principle may serve as a fundamental paradigm for trust research in computer science. (ii) To suggest to computer scientists that their expertise in modeling computer networks may help behavioral biologists in their study of the reliability of animal communication networks. This is largely due to the historical reason that, until the last decade, animal communication was studied with the dyadic paradigm (sender-receiver) rather than with the network paradigm. (iii) To pose several open questions, the answers to which may bear some refreshing insights to trust research in

  14. The National Network for Technology Entrepreneurship and Commercialization (N2TEC): Bringing New Technologies to Market

    NASA Astrophysics Data System (ADS)

    Allen, Kathleen

    2003-03-01

    N2TEC, the National Network for Technology Entrepreneurship and Commercialization, is a National Science Foundation "Partnerships for Innovation" initiative designed to raise the level of innovation and technology commercialization in colleges, universities, and communities across the nation. N2TEC is creating a network of people and institutions, and a set of technology tools that will facilitate the pooling of resources and knowledge and enable faculty and students to share those resources and collaborate without regard to geographic boundaries. N2TEC will become the backbone by which educational institutions across the nation can move their technologies into new venture startups. The ultimate goal is to create new wealth and strengthen local, regional and national economies.

  15. Combining Topological Hardware and Topological Software: Color-Code Quantum Computing with Topological Superconductor Networks

    NASA Astrophysics Data System (ADS)

    Litinski, Daniel; Kesselring, Markus S.; Eisert, Jens; von Oppen, Felix

    2017-07-01

    We present a scalable architecture for fault-tolerant topological quantum computation using networks of voltage-controlled Majorana Cooper pair boxes and topological color codes for error correction. Color codes have a set of transversal gates which coincides with the set of topologically protected gates in Majorana-based systems, namely, the Clifford gates. In this way, we establish color codes as providing a natural setting in which advantages offered by topological hardware can be combined with those arising from topological error-correcting software for full-fledged fault-tolerant quantum computing. We provide a complete description of our architecture, including the underlying physical ingredients. We start by showing that in topological superconductor networks, hexagonal cells can be employed to serve as physical qubits for universal quantum computation, and we present protocols for realizing topologically protected Clifford gates. These hexagonal-cell qubits allow for a direct implementation of open-boundary color codes with ancilla-free syndrome read-out and logical T gates via magic-state distillation. For concreteness, we describe how the necessary operations can be implemented using networks of Majorana Cooper pair boxes, and we give a feasibility estimate for error correction in this architecture. Our approach is motivated by nanowire-based networks of topological superconductors, but it could also be realized in alternative settings such as quantum-Hall-superconductor hybrids.

  16. Celebrating 25 Years. National Dropout Prevention Center/Network Newsletter. Volume 22, Number 3

    ERIC Educational Resources Information Center

    Duckenfield, Marty, Ed.

    2011-01-01

    The "National Dropout Prevention Newsletter" is published quarterly by the National Dropout Prevention Center/Network. This issue contains the following articles: (1) Leading the Way in Dropout Prevention; (2) The 15 Effective Strategies in Action; (3) Technology Changes 1986-2011 (Marty Duckenfield); (4) 25 Years of Research and Support…

  17. Computational neural networks in chemistry: Model free mapping devices for predicting chemical reactivity from molecular structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elrod, D.W.

    1992-01-01

    Computational neural networks (CNNs) are a computational paradigm inspired by the brain's massively parallel network of highly interconnected neurons. The power of computational neural networks derives not so much from their ability to model the brain as from their ability to learn by example and to map highly complex, nonlinear functions without the need to explicitly specify the functional relationship. Two central questions about CNNs were investigated in the context of predicting chemical reactions: (1) the mapping properties of neural networks and (2) the representation of chemical information for use in CNNs. Chemical reactivity is here considered an example of a complex, nonlinear function of molecular structure. CNNs were trained using modifications of the back-propagation learning rule to map a three-dimensional response surface similar to those typically observed in quantitative structure-activity and structure-property relationships. The computational neural network's mapping of the response surface was found to be robust to the effects of training sample size, noisy data, and intercorrelated input variables. The investigation of chemical structure representation led to the development of a molecular-structure-based connection-table representation suitable for neural network training. An extension of this work led to a BE-matrix structure representation that was found to be general for several classes of reactions. The CNN prediction of chemical reactivity and regiochemistry was investigated for electrophilic aromatic substitution reactions, Markovnikov addition to alkenes, Saytzeff elimination from haloalkanes, Diels-Alder cycloaddition, and retro-Diels-Alder ring-opening reactions using these connectivity-matrix-derived representations. The reaction predictions made by the CNNs were more accurate than those of an expert system and were comparable to predictions made by chemists.
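    The core idea above, a network that learns a nonlinear response surface from examples without an explicit functional form, can be sketched with a one-hidden-layer network trained by back propagation. This is a minimal illustration, not the representations or networks used in the thesis; the surface y = sin(x1)·x2 and all hyperparameters are arbitrary choices:

```python
import math
import random

def train_mlp(samples, hidden=8, lr=0.1, epochs=500, seed=3):
    """One-hidden-layer tanh network trained by full-batch back propagation
    to map (x1, x2) -> y. Returns (initial_mse, final_mse)."""
    rng = random.Random(seed)
    n = len(samples)
    W1 = [[rng.uniform(-0.5, 0.5), rng.uniform(-0.5, 0.5)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    W2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]
    b2 = 0.0

    def forward(x):
        h = [math.tanh(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j])
             for j in range(hidden)]
        return h, sum(W2[j] * h[j] for j in range(hidden)) + b2

    def mse():
        return sum((forward(x)[1] - y) ** 2 for x, y in samples) / n

    first = mse()
    for _ in range(epochs):
        gW1 = [[0.0, 0.0] for _ in range(hidden)]
        gb1 = [0.0] * hidden
        gW2 = [0.0] * hidden
        gb2 = 0.0
        for x, y in samples:               # accumulate full-batch gradients
            h, out = forward(x)
            d_out = 2.0 * (out - y) / n
            for j in range(hidden):
                d_h = d_out * W2[j] * (1.0 - h[j] ** 2)  # tanh'(z) = 1 - tanh(z)^2
                gW2[j] += d_out * h[j]
                gW1[j][0] += d_h * x[0]
                gW1[j][1] += d_h * x[1]
                gb1[j] += d_h
            gb2 += d_out
        for j in range(hidden):            # gradient-descent update
            W2[j] -= lr * gW2[j]
            b1[j] -= lr * gb1[j]
            W1[j][0] -= lr * gW1[j][0]
            W1[j][1] -= lr * gW1[j][1]
        b2 -= lr * gb2
    return first, mse()

# A smooth nonlinear "response surface": y = sin(x1) * x2 on a small grid.
grid = [((a / 2.0, b / 2.0), math.sin(a / 2.0) * (b / 2.0))
        for a in range(-4, 5) for b in range(-4, 5)]
mse_before, mse_after = train_mlp(grid)
```

The network is never told the functional form; the training error dropping below its initial value is the "model-free mapping" property the abstract describes.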

  18. A Computational Network Biology Approach to Uncover Novel Genes Related to Alzheimer's Disease.

    PubMed

    Zanzoni, Andreas

    2016-01-01

    Recent advances in the fields of genetics and genomics have enabled the identification of numerous Alzheimer's disease (AD) candidate genes, although for many of them the role in AD pathophysiology has not been uncovered yet. Concomitantly, network biology studies have shown a strong link between protein network connectivity and disease. In this chapter I describe a computational approach that, by combining local and global network analysis strategies, allows the formulation of novel hypotheses on the molecular mechanisms involved in AD and prioritizes candidate genes for further functional studies.

  19. NFDRSPC: The National Fire-Danger Rating System on a Personal Computer

    Treesearch

    Bryan G. Donaldson; James T. Paul

    1990-01-01

    This user's guide is an introductory manual for using the 1988 version (Burgan 1988) of the National Fire-Danger Rating System on an IBM PC or compatible computer. NFDRSPC is a window-oriented, interactive computer program that processes observed and forecast weather with fuels data to produce NFDRS indices. Other program features include user-designed display...

  20. 78 FR 68030 - Draft Guidance on Intellectual Property Rights for the National Network for Manufacturing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-13

    ... Additive Manufacturing showed great promise for the defense, energy, space and commercial sectors of the Nation. In August, 2012, the selection of the National Additive Manufacturing Innovation Institute (NAMII...-01] Draft Guidance on Intellectual Property Rights for the National Network for Manufacturing...

  1. Computational Intelligence and Its Impact on Future High-Performance Engineering Systems

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler)

    1996-01-01

    This document contains presentations from the joint UVA/NASA Workshop on Computational Intelligence held at the Virginia Consortium of Engineering and Science Universities, Hampton, Virginia, June 27-28, 1995. The presentations addressed activities in the areas of fuzzy logic, neural networks, and evolutionary computation. Workshop attendees represented NASA, the National Science Foundation, the Department of Energy, the National Institute of Standards and Technology (NIST), the Jet Propulsion Laboratory, industry, and academia. The workshop objectives were to assess the state of technology in the computational intelligence area and to provide guidelines for future research.

  2. Strategic factors in the development of the National Technology Transfer Network

    NASA Technical Reports Server (NTRS)

    Root, Jonathan F.; Stone, Barbara A.

    1993-01-01

    Broad consensus among industry and government leaders has developed over the last decade on the importance of applying the U.S. leadership in research and development (R&D) to strengthen competitiveness in the global marketplace, and thus enhance national prosperity. This consensus has emerged against the backdrop of increasing economic competition, and the dramatic reduction of military threats to national security with the end of the Cold War. This paper reviews the key factors and considerations that shaped, and continue to influence, the development of the Regional Technology Transfer Centers (RTTC) and the National Technology Transfer Center (NTTC). The future role of the national network in support of emerging technology policy initiatives is also explored.

  3. Integrative Analysis of Many Weighted Co-Expression Networks Using Tensor Computation

    PubMed Central

    Li, Wenyuan; Liu, Chun-Chi; Zhang, Tong; Li, Haifeng; Waterman, Michael S.; Zhou, Xianghong Jasmine

    2011-01-01

    The rapid accumulation of biological networks poses new challenges and calls for powerful integrative analysis tools. Most existing methods capable of simultaneously analyzing a large number of networks were primarily designed for unweighted networks, and cannot easily be extended to weighted networks. However, it is known that transforming weighted into unweighted networks by dichotomizing the edges of weighted networks with a threshold generally leads to information loss. We have developed a novel, tensor-based computational framework for mining recurrent heavy subgraphs in a large set of massive weighted networks. Specifically, we formulate the recurrent heavy subgraph identification problem as a heavy 3D subtensor discovery problem with sparse constraints. We describe an effective approach to solving this problem by designing a multi-stage, convex relaxation protocol, and a non-uniform edge sampling technique. We applied our method to 130 co-expression networks, and identified 11,394 recurrent heavy subgraphs, grouped into 2,810 families. We demonstrated that the identified subgraphs represent meaningful biological modules by validating against a large set of compiled biological knowledge bases. We also showed that the likelihood for a heavy subgraph to be meaningful increases significantly with its recurrence in multiple networks, highlighting the importance of the integrative approach to biological network analysis. Moreover, our approach based on weighted graphs detects many patterns that would be overlooked using unweighted graphs. In addition, we identified a large number of modules that occur predominately under specific phenotypes. This analysis resulted in a genome-wide mapping of gene network modules onto the phenome. Finally, by comparing module activities across many datasets, we discovered high-order dynamic cooperativeness in protein complex networks and transcriptional regulatory networks. PMID:21698123

  4. Advancing environmental health surveillance in the US through a national human biomonitoring network.

    PubMed

    Latshaw, Megan Weil; Degeberg, Ruhiyyih; Patel, Surili Sutaria; Rhodes, Blaine; King, Ewa; Chaudhuri, Sanwat; Nassif, Julianne

    2017-03-01

    The United States lacks a comprehensive, nationally coordinated, state-based environmental health surveillance system. This lack of infrastructure leads to:
    • varying levels of understanding of chemical exposures at the state and local levels
    • often inefficient public health responses to chemical exposure emergencies (such as those that occurred in the Flint drinking water crisis, the Gold King mine spill, the Elk River spill, and the Gulf Coast oil spill)
    • reduced ability to measure the impact of public health interventions or environmental policies
    • less efficient use of resources for cleaning up environmental contamination
    Establishing the National Biomonitoring Network serves as a step toward building a national, state-based environmental health surveillance system. The Network builds upon CDC investments in emergency preparedness and environmental public health tracking, which have created advanced chemical analysis and information-sharing capabilities in state public health systems. The short-term goal of the network is to harmonize approaches to human biomonitoring in the US, thus increasing the comparability of human biomonitoring data across states and communities. The long-term goal is to compile baseline data on exposures at the state level, similar to the data found in CDC's National Report on Human Exposure to Environmental Chemicals. Barriers to success for this network include available resources, effective risk communication strategies, data comparability and sharing, and political will. Anticipated benefits include high-quality data on which to base public health and environmental decisions, data with which to assess the success of public health interventions, improved risk assessments for chemicals, and new ways to prioritize environmental health research. Copyright © 2016 Elsevier GmbH. All rights reserved.

  5. The USA National Phenology Network's Model for Collaborative Data Generation and Dissemination

    NASA Astrophysics Data System (ADS)

    Rosemartin, A.; Lincicome, A.; Denny, E. G.; Marsh, L.; Wilson, B. E.

    2010-12-01

    The USA National Phenology Network (USA-NPN) serves science and society by promoting a broad understanding of plant and animal phenology and the relationships among phenological patterns and all aspects of environmental change. The Network was founded as an NSF-funded Research Coordination Network, for the purpose of fostering collaboration among scientists, policy-makers and the general public to address the challenges posed by global change and its impact on ecosystems and human health. With this mission in mind, the USA-NPN has developed an Information Management System (IMS) to facilitate collaboration and participatory data collection and digitization. The IMS includes components for data storage, such as the National Phenology Database, as well as a Drupal website for information-sharing and data visualization, and a Java application for collection of contemporary observational data. The National Phenology Database is designed to efficiently accommodate large quantities of phenology data and to be flexible to the changing needs of the network. The database allows for the collection, storage and output of phenology data from multiple sources (e.g., partner organizations, researchers and citizen observers), as well as integration with legacy data sets. Participants in the network can submit records (as Drupal content types) for publications, legacy data sets and phenology-related festivals. The USA-NPN’s contemporary phenology data collection effort, Nature’s Notebook, also draws on the contributions of participants. Citizen scientists around the country submit data through this Java application (paired with the Drupal site through a shared login) on the life cycle stages of plants and animals in their yards and parks. The North American Bird Phenology Program, now a part of the USA-NPN, also relies on web-based crowdsourcing. Participants in this program are transcribing 6 million scanned paper cards that were collected by observers across the United States.

  6. Seeing the forest for the trees: Networked workstations as a parallel processing computer

    NASA Technical Reports Server (NTRS)

    Breen, J. O.; Meleedy, D. M.

    1992-01-01

    Unlike traditional 'serial' processing computers, in which one central processing unit performs one instruction at a time, parallel processing computers contain several processing units, thereby performing several instructions at once. Many of today's fastest supercomputers achieve their speed by employing thousands of processing elements working in parallel. Few institutions can afford these state-of-the-art parallel processors, but many already have the makings of a modest parallel processing system. Workstations on existing high-speed networks can be harnessed as nodes in a parallel processing environment, bringing the benefits of parallel processing to many. While such a system cannot rival the industry's latest machines, many common tasks can be accelerated greatly by spreading the processing burden and exploiting idle network resources. We study several aspects of this approach, from algorithms to select nodes to speed gains in specific tasks. With ever-increasing volumes of astronomical data, it becomes all the more necessary to utilize our computing resources fully.

  7. Adolescents, Health Education, and Computers: The Body Awareness Resource Network (BARN).

    ERIC Educational Resources Information Center

    Bosworth, Kris; And Others

    1983-01-01

    The Body Awareness Resource Network (BARN) is a computer-based system designed as a confidential, nonjudgmental source of health information for adolescents. Topics include alcohol and other drugs, diet and activity, family communication, human sexuality, smoking, and stress management; programs are available for high school and middle school…

  8. Data systems and computer science space data systems: Onboard networking and testbeds

    NASA Technical Reports Server (NTRS)

    Dalton, Dan

    1991-01-01

    The technical objectives are to develop high-performance, space-qualifiable, onboard computing, storage, and networking technologies. The topics are presented in viewgraph form and include the following: justification; technology challenges; program description; and state-of-the-art assessment.

  9. U.S. National PM2.5 Chemical Speciation Monitoring Networks – CSN and IMPROVE: Description of Networks

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) initiated the national PM2.5 Chemical Speciation Monitoring Network (CSN) in 2000 to support evaluation of long-term trends and to better quantify the impact of sources on particulate matter (PM) concentrations in the size range belo...

  10. Effects of equipment performance on data quality from the National Atmospheric Deposition Program/National Trends Network and the Mercury Deposition Network

    USGS Publications Warehouse

    Wetherbee, Gregory A.; Rhodes, Mark F.

    2013-01-01

    The U.S. Geological Survey Branch of Quality Systems operates the Precipitation Chemistry Quality Assurance project (PCQA) to provide independent, external quality assurance for the National Atmospheric Deposition Program (NADP). NADP is composed of five monitoring networks that measure the chemical composition of precipitation and ambient air. PCQA and the NADP Program Office completed five short-term studies to investigate the effects of equipment performance on National Trends Network (NTN) and Mercury Deposition Network (MDN) data quality: sample evaporation from NTN collectors; sample volume and mercury loss from MDN collectors; mercury adsorption to MDN collector glassware; grid-type precipitation sensors for precipitation collectors; and the effects of an NTN collector wind shield on sample catch efficiency. Sample-volume evaporation from an NTN Aerochem Metrics (ACM) collector ranged from 1.1 to 33 percent with a median of 4.7 percent. The results suggest that weekly NTN sample evaporation is small relative to sample volume. MDN sample evaporation occurs predominantly in western and southern regions of the United States (U.S.) and more frequently with modified ACM collectors than with N-CON Systems Inc. collectors due to differences in airflow through the collectors. Variations in mercury concentrations, measured to be as high as 47.5 percent per week with a median of 5 percent, are associated with MDN sample-volume loss. Small amounts of mercury are also lost from MDN samples by adsorption to collector glassware irrespective of collector type. MDN 11-grid sensors were found to open collectors sooner, keep them open longer, and cause fewer lid cycles than NTN 7-grid sensors. Wind shielding an NTN ACM collector resulted in collection of larger quantities of precipitation while also preserving sample integrity.

  11. Use of the computer and Internet among Italian families: first national study.

    PubMed

    Bricolo, Francesco; Gentile, Douglas A; Smelser, Rachel L; Serpelloni, Giovanni

    2007-12-01

    Although home Internet access has continued to increase, little is known about actual usage patterns in homes. This nationally representative study of over 4,700 Italian households with children measured computer and Internet use of each family member across 3 months. Data on actual computer and Internet usage were collected by Nielsen//NetRatings service and provide national baseline information on several variables for several age groups separately, including children, adolescents, and adult men and women. National averages are shown for the average amount of time spent using computers and on the Web, the percentage of each age group online, and the types of Web sites viewed. Overall, about one-third of children ages 2 to 11, three-fourths of adolescents and adult women, and over four-fifths of adult men access the Internet each month. Children spend an average of 22 hours/month on the computer, with a jump to 87 hours/month for adolescents. Adult women spend less time (about 60 hours/month), and adult men spend more (over 100). The types of Web sites visited are reported, including the top five for each age group. In general, search engines and Web portals are the top sites visited, regardless of age group. These data provide a baseline for comparisons across time and cultures.

  12. The National Research and Education Network (NREN): Promise of New Information Environments. ERIC Digest.

    ERIC Educational Resources Information Center

    Bishop, Ann P.

    This digest describes proposed legislation for the implementation of the National Research and Education Network (NREN). Issues and implications for teachers, students, researchers, and librarians are suggested and the emergence of the electronic network as a general communication and research tool is described. Developments in electronic…

  13. Inference of cancer-specific gene regulatory networks using soft computing rules.

    PubMed

    Wang, Xiaosheng; Gotoh, Osamu

    2010-03-24

    Perturbations of gene regulatory networks are essentially responsible for oncogenesis. Therefore, inferring the gene regulatory networks is a key step toward overcoming cancer. In this work, we propose a method for inferring directed gene regulatory networks based on soft computing rules, which can identify important cause-effect regulatory relations of gene expression. First, we identify important genes associated with a specific cancer (colon cancer) using a supervised learning approach. Next, we reconstruct the gene regulatory networks by inferring the regulatory relations among the identified genes, and their regulated relations by other genes within the genome. We obtain two meaningful findings. One is that upregulated genes are regulated by more genes than downregulated ones, while downregulated genes regulate more genes than upregulated ones. The other is that tumor suppressors suppress tumor activators and activate other tumor suppressors strongly, while tumor activators activate other tumor activators and suppress tumor suppressors weakly, indicating the robustness of biological systems. These findings provide valuable insights into the pathogenesis of cancer.
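
    The degree asymmetry reported here (upregulated genes having more regulators, downregulated genes having more targets) is a statement about in- and out-degrees in a directed graph. A toy version, with hypothetical gene labels rather than the paper's inferred colon-cancer network:

```python
# Directed regulatory edges: (a, b) means gene a regulates gene b.
# Labels are illustrative, not the network inferred in the paper.
edges = [("TP53", "MYC"), ("TP53", "RB1"), ("RB1", "E2F1"),
         ("MYC", "CCND1"), ("KRAS", "MYC"), ("E2F1", "MYC")]

def in_out_degree(gene):
    """(number of regulators, number of targets) for one gene."""
    regulators = sum(1 for a, b in edges if b == gene)
    targets = sum(1 for a, b in edges if a == gene)
    return regulators, targets

print(in_out_degree("MYC"))   # (3, 1): regulated by many, regulates few
print(in_out_degree("TP53"))  # (0, 2): regulates more than it is regulated
```

    Comparing these two counts across the sets of up- and downregulated genes is how such an asymmetry would be quantified.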

  14. Albemarle Sound demonstration study of the national monitoring network for US coastal waters and their tributaries

    Treesearch

    Michelle Moorman; Sharon Fitzgerald; Keith Loftin; Elizabeth Fensin

    2016-01-01

    The U.S. Geological Survey (USGS) is implementing a demonstration project in the Albemarle Sound for the National Monitoring Network for U.S. coastal waters and their tributaries. The goal of the National Monitoring Network is to provide information about the health of our oceans and coastal ecosystems and inland influences on coastal waters for improved resource...

  15. National Health Care Network for children with oral clefts: organization, functioning, and preliminary outcomes.

    PubMed

    Cassinelli, Agustina; Pauselli, Nadia; Piola, Agustina; Martinelli, Claudia; Alves de Azeved, José L; Bidondo, María P; Groisman, Boris; Barbero, Pablo; Liascovich, Rosa; Sala, Ana

    2018-02-01

    Oral clefts are major congenital anomalies that may affect the lip and/or palate, and that may also involve the nose and nostrils. In Argentina, their prevalence is approximately 15 per 10 000 births. In 2015, the Ministry of Health of Argentina created a national health care network for children with oral clefts in Argentina through the joint work with the National Registry of Congenital Anomalies (Red Nacional de Anomalías Congénitas, RENAC) (coordinating center for the national network) and the SUMAR Program. The objective of this study was to describe the health care network and its preliminary outcomes. A total of 61 centers that provided comprehensive treatment for oral clefts, either on their own or in collaboration with other centers, were identified and accredited. Maternity centers were connected with treating centers grouped in health care network nodes. In the period between March 2015 and February 2016, 550 newborn infants who were exclusively covered by the public health care system were identified. Among these, 18% had a cleft lip; 62%, cleft lip and palate; and 20%, cleft palate only; 75% were isolated cases and 25% occurred in association with other congenital anomalies. Approximately 70% of children were assessed by a certified treating institution and are receiving treatment. The network seeks to improve data systematization, include the largest number of centers possible, strengthen interdisciplinary team work, and promote high-quality standards for treatments. Sociedad Argentina de Pediatría.

  16. A graph-based computational framework for simulation and optimisation of coupled infrastructure networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jalving, Jordan; Abhyankar, Shrirang; Kim, Kibaek

    Here, we present a computational framework that facilitates the construction, instantiation, and analysis of large-scale optimization and simulation applications of coupled energy networks. The framework integrates the optimization modeling package PLASMO and the simulation package DMNetwork (built around PETSc). These tools use a common graph-based abstraction that enables us to achieve compatibility between data structures and to build applications that use network models of different physical fidelity. We also describe how to embed these tools within complex computational workflows using SWIFT, which is a tool that facilitates parallel execution of multiple simulation runs and management of input and output data. We discuss how to use these capabilities to target coupled natural gas and electricity systems.

  17. A graph-based computational framework for simulation and optimisation of coupled infrastructure networks

    DOE PAGES

    Jalving, Jordan; Abhyankar, Shrirang; Kim, Kibaek; ...

    2017-04-24

    Here, we present a computational framework that facilitates the construction, instantiation, and analysis of large-scale optimization and simulation applications of coupled energy networks. The framework integrates the optimization modeling package PLASMO and the simulation package DMNetwork (built around PETSc). These tools use a common graph-based abstraction that enables us to achieve compatibility between data structures and to build applications that use network models of different physical fidelity. We also describe how to embed these tools within complex computational workflows using SWIFT, which is a tool that facilitates parallel execution of multiple simulation runs and management of input and output data. We discuss how to use these capabilities to target coupled natural gas and electricity systems.
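
    The shared graph abstraction can be pictured as a single node/edge store in which each node carries network-specific data and dedicated edges couple the two physical systems. The dictionary layout below illustrates the idea only; it is not the PLASMO or DMNetwork API:

```python
# One graph holding both networks; node payloads differ by network type.
nodes = {
    "gas:junction1": {"network": "gas", "pressure_bar": 60.0},
    "gas:junction2": {"network": "gas", "pressure_bar": 55.0},
    "elec:bus1": {"network": "electric", "voltage_kv": 230.0},
    "elec:gen1": {"network": "electric", "output_mw": 150.0},
}
edges = [
    ("gas:junction1", "gas:junction2", {"type": "pipeline"}),
    ("elec:gen1", "elec:bus1", {"type": "line"}),
    # Coupling edge: this gas junction fuels the gas-fired generator.
    ("gas:junction2", "elec:gen1", {"type": "coupling"}),
]

def coupling_edges(edge_list):
    """Edges that cross between the two physical networks."""
    return [(a, b) for a, b, data in edge_list if data["type"] == "coupling"]

print(coupling_edges(edges))  # [('gas:junction2', 'elec:gen1')]
```

    Because both networks live in one graph, an optimizer and a simulator of different fidelities can exchange data along the coupling edges, which is the compatibility the framework is built around.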

  18. Application of artificial neural networks to identify equilibration in computer simulations

    NASA Astrophysics Data System (ADS)

    Leibowitz, Mitchell H.; Miller, Evan D.; Henry, Michael M.; Jankowski, Eric

    2017-11-01

    Determining which microstates generated by a thermodynamic simulation are representative of the ensemble for which sampling is desired is a ubiquitous, underspecified problem. Artificial neural networks are one type of machine learning algorithm that can provide a reproducible way to apply pattern recognition heuristics to underspecified problems. Here we use the open-source TensorFlow machine learning library and apply it to the problem of identifying which hypothetical observation sequences from a computer simulation are “equilibrated” and which are not. We generate training populations and test populations of observation sequences with embedded linear and exponential correlations. We train a two-neuron artificial network to distinguish the correlated and uncorrelated sequences. We find that this simple network is good enough for > 98% accuracy in identifying exponentially-decaying energy trajectories from molecular simulations.
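
    The classification task can be reproduced in miniature without TensorFlow: synthesize "equilibrated" (stationary noise) and unequilibrated (exponentially decaying) sequences, reduce each to a trend feature, and fit a single logistic unit by gradient descent. This is a one-neuron stand-in for the paper's two-neuron network, with made-up sequence parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_sequence(equilibrated, n=100):
    """Stationary noise if equilibrated; exponential decay plus noise if not."""
    noise = rng.normal(0.0, 0.1, n)
    return noise if equilibrated else np.exp(-np.arange(n) / 30.0) + noise

def trend(seq):
    """Correlation with time: near 0 when stationary, strongly negative for decay."""
    return np.corrcoef(np.arange(len(seq)), seq)[0, 1]

# Build a labeled training set and fit one logistic unit by gradient descent.
X = np.array([trend(make_sequence(e)) for e in [True, False] * 100])
y = np.array([1.0, 0.0] * 100)  # 1 = equilibrated
w = b = 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * X + b)))
    w -= 0.5 * np.mean((p - y) * X)  # cross-entropy gradient step
    b -= 0.5 * np.mean(p - y)
accuracy = np.mean(((w * X + b) > 0.0) == y)
print(f"training accuracy: {accuracy:.2f}")
```

    The paper's setup differs in the feature handling and network size, but the principle is the same: a tiny learned classifier separates correlated from uncorrelated observation sequences.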

  19. The Use of Computer Networks in Data Gathering and Data Analysis.

    ERIC Educational Resources Information Center

    Yost, Michael; Bremner, Fred

    This document describes the review, analysis, and decision-making process that Trinity University, Texas, went through to develop the three-part computer network that they use to gather and analyze EEG (electroencephalography) and EKG (electrocardiogram) data. The data are gathered in the laboratory on a PDP-11/24 minicomputer. Once…

  20. Analysis and synthesis of distributed-lumped-active networks by digital computer

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The use of digital computational techniques in the analysis and synthesis of DLA (distributed lumped active) networks is considered. This class of networks consists of three distinct types of elements, namely, distributed elements (modeled by partial differential equations), lumped elements (modeled by algebraic relations and ordinary differential equations), and active elements (modeled by algebraic relations). Such a characterization is applicable to a broad class of circuits, especially including those usually referred to as linear integrated circuits, since the fabrication techniques for such circuits readily produce elements which may be modeled as distributed, as well as the more conventional lumped and active ones.
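
    The lumped/distributed split can be made concrete with the classic example of a uniform RC line (a distributed element governed by a diffusion-type PDE) approximated by n lumped RC sections and integrated with forward Euler. Component values and step counts here are illustrative, not from the report:

```python
import numpy as np

# Approximate a distributed RC line by n identical lumped RC sections.
R_total, C_total, n = 1e3, 1e-9, 20   # 1 kOhm, 1 nF line, illustrative values
R, C = R_total / n, C_total / n
dt, steps = 5e-10, 4000               # dt/(R*C) = 0.2 < 0.5, so Euler is stable
v = np.zeros(n)                       # section node voltages; 1 V step input

for _ in range(steps):
    left = np.concatenate(([1.0], v[:-1]))    # voltage seen on each node's left
    right = np.concatenate((v[1:], [v[-1]]))  # open circuit at the far end
    # Net current into each node's capacitor determines its voltage change.
    v += dt * ((left - v) / R - (v - right) / R) / C

print(round(v[-1], 3))  # far-end voltage has nearly settled to 1 V
```

    Increasing n makes the lumped ladder converge to the distributed solution, which is why digital analysis of DLA networks can treat all three element types in one framework.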