Sample records for national computer network

  1. Cyber-Ed.

    ERIC Educational Resources Information Center

    Ruben, Barbara

    1994-01-01

    Reviews a number of interactive environmental computer education networks and software packages. Computer networks include National Geographic Kids Network, Global Lab, and Global Rivers Environmental Education Network. Software packages cover environmental decision making, simulation games, tropical rainforests, the ocean, the greenhouse…

  2. 10 CFR 727.2 - What are the definitions of the terms used in this part?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... information. Computer means desktop computers, portable computers, computer networks (including the DOE network and local area networks at or controlled by DOE organizations), network devices, automated.... DOE means the Department of Energy, including the National Nuclear Security Administration. DOE...

  3. 10 CFR 727.2 - What are the definitions of the terms used in this part?

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... information. Computer means desktop computers, portable computers, computer networks (including the DOE network and local area networks at or controlled by DOE organizations), network devices, automated.... DOE means the Department of Energy, including the National Nuclear Security Administration. DOE...

  4. 10 CFR 727.2 - What are the definitions of the terms used in this part?

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... information. Computer means desktop computers, portable computers, computer networks (including the DOE network and local area networks at or controlled by DOE organizations), network devices, automated.... DOE means the Department of Energy, including the National Nuclear Security Administration. DOE...

  5. 10 CFR 727.2 - What are the definitions of the terms used in this part?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... information. Computer means desktop computers, portable computers, computer networks (including the DOE network and local area networks at or controlled by DOE organizations), network devices, automated.... DOE means the Department of Energy, including the National Nuclear Security Administration. DOE...

  6. 10 CFR 727.2 - What are the definitions of the terms used in this part?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... information. Computer means desktop computers, portable computers, computer networks (including the DOE network and local area networks at or controlled by DOE organizations), network devices, automated.... DOE means the Department of Energy, including the National Nuclear Security Administration. DOE...

  7. Using satellite communications for a mobile computer network

    NASA Technical Reports Server (NTRS)

    Wyman, Douglas J.

    1993-01-01

    The topics discussed include the following: patrol car automation, mobile computer network, network requirements, network design overview, MCN mobile network software, MCN hub operation, mobile satellite software, hub satellite software, the benefits of patrol car automation, the benefits of satellite mobile computing, and national law enforcement satellite.

  8. Information Communication Highways in the 1990s: An Analysis of Their Potential Impact on Library Automation.

    ERIC Educational Resources Information Center

    Kibirige, Harry M.

    1991-01-01

    Discussion of the potential effects of fiber optic-based communication technology on information networks and systems design highlights library automation. Topics discussed include computers and telecommunications systems, the importance of information in national economies, microcomputers, local area networks (LANs), national computer networks,…

  9. National research and education network

    NASA Technical Reports Server (NTRS)

    Villasenor, Tony

    1991-01-01

    Some goals of this network are as follows: Extend U.S. technological leadership in high performance computing and computer communications; Provide wide dissemination and application of the technologies, both to speed the pace of innovation and to serve the national economy, national security, education, and the global environment; and Spur gains in U.S. productivity and industrial competitiveness by making high performance computing and networking technologies an integral part of the design and production process. Strategies for achieving these goals are as follows: Support solutions to important scientific and technical challenges through a vigorous R and D effort; Reduce the uncertainties to industry for R and D and use of this technology through increased cooperation among government, industry, and universities and by the continued use of government and government-funded facilities as prototype users for early commercial HPCC products; and Support the underlying research, network, and computational infrastructures on which U.S. high performance computing technology is based.

  10. The ASCI Network for SC 2000: Gigabyte Per Second Networking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    PRATT, THOMAS J.; NAEGLE, JOHN H.; MARTINEZ JR., LUIS G.

    2001-11-01

    This document highlights the DISCOM Distance Computing and Communication team's activities at the 2000 Supercomputing conference in Dallas, Texas. This conference is sponsored by the IEEE and ACM. Sandia's participation in the conference has now spanned a decade; for the last five years Sandia National Laboratories, Los Alamos National Laboratory, and Lawrence Livermore National Laboratory have come together at the conference under the DOE's ASCI (Accelerated Strategic Computing Initiative) program rubric to demonstrate ASCI's emerging capabilities in computational science and our combined expertise in high performance computer science and communication networking developments within the program. DISCOM2 uses this forum to demonstrate and focus communication and networking developments. At SC 2000, DISCOM demonstrated an infrastructure and pre-standard implementation of 10 Gigabit Ethernet, the first gigabyte-per-second IP network data transfer application, and VPN technology that enabled a remote Distributed Resource Management tools demonstration. Additionally, a national OC48 POS network was constructed to support applications running between the show floor and home facilities. This network created the opportunity to test PSE's Parallel File Transfer Protocol (PFTP) across a network with speeds and distances similar to the then-proposed DISCOM WAN. We also supported the production networking needs of the convention exhibit floor. SCinet at SC 2000 showcased wireless networking, and the networking team had the opportunity to explore this emerging technology while on the booth. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support DISCOM's overall strategies in high performance computing and networking.

  11. National High-Performance Computing and Networking Act. Report To Accompany S. 343, Senate, 102d Congress, 1st Session.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. Senate Committee on Energy and Natural Resources.

    The purpose of the bill (S. 343), as reported by the Senate Committee on Energy and Natural Resources, is to establish a federal commitment to the advancement of high-performance computing, improve interagency planning and coordination of federal high-performance computing and networking activities, authorize a national high-speed computer…

  12. SpecialNet. A National Computer-Based Communications Network.

    ERIC Educational Resources Information Center

    Morin, Alfred J.

    1986-01-01

    "SpecialNet," a computer-based communications network for educators at all administrative levels, has been established and is managed by National Systems Management, Inc. Users can send and receive electronic mail, share information on electronic bulletin boards, participate in electronic conferences, and send reports and other documents to each…

  13. The Reality of National Computer Networking for Higher Education. Proceedings of the 1978 EDUCOM Fall Conference. EDUCOM Series in Computing and Telecommunications in Higher Education 3.

    ERIC Educational Resources Information Center

    Emery, James C., Ed.

    A comprehensive review of the current status, prospects, and problems of computer networking in higher education is presented from the perspectives of both computer users and network suppliers. Several areas of computer use are considered including applications for instruction, research, and administration in colleges and universities. In the…

  14. Internet Basics. ERIC Digest.

    ERIC Educational Resources Information Center

    Tennant, Roy

    The Internet is a worldwide network of computer networks. In the United States, the National Science Foundation Network (NSFNet) serves as the Internet "backbone" (a very high speed network that connects key regions across the country). The NSFNet will likely evolve into the National Research and Education Network (NREN) as defined in…

  15. NASA/DOD Aerospace Knowledge Diffusion Research Project. Paper 47: The value of computer networks in aerospace

    NASA Technical Reports Server (NTRS)

    Bishop, Ann Peterson; Pinelli, Thomas E.

    1995-01-01

    This paper presents data on the value of computer networks that were obtained from a national survey of 2000 aerospace engineers that was conducted in 1993. Survey respondents reported the extent to which they used computer networks in their work and communication and offered their assessments of the value of various network types and applications. They also provided information about the positive impacts of networks on their work, which presents another perspective on value. Finally, aerospace engineers' recommendations on network implementation present suggestions for increasing the value of computer networks within aerospace organizations.

  16. Construction of Blaze at the University of Illinois at Chicago: A Shared, High-Performance, Visual Computer for Next-Generation Cyberinfrastructure-Accelerated Scientific, Engineering, Medical and Public Policy Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Maxine D.; Leigh, Jason

    2014-02-17

    The Blaze high-performance visual computing system serves the high-performance computing research and education needs of the University of Illinois at Chicago (UIC). Blaze consists of a state-of-the-art, networked computer cluster and an ultra-high-resolution visualization system called CAVE2(TM) that is currently not available anywhere else in Illinois. This system is connected via a high-speed 100-Gigabit network to the State of Illinois' I-WIRE optical network, as well as to national and international high-speed networks, such as Internet2 and the Global Lambda Integrated Facility. This enables Blaze to serve as an on-ramp to national cyberinfrastructure, such as the National Science Foundation's Blue Waters petascale computer at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign and the Department of Energy's Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory. DOE award # DE-SC005067, leveraged with NSF award #CNS-0959053 for "Development of the Next-Generation CAVE Virtual Environment (NG-CAVE)," enabled us to create a first-of-its-kind high-performance visual computing system. The UIC Electronic Visualization Laboratory (EVL) worked with two U.S. companies to advance their commercial products and maintain U.S. leadership in the global information technology economy. New applications are being enabled with the CAVE2/Blaze visual computing system that are advancing scientific research and education in the U.S. and globally and helping to train the next-generation workforce.

  17. Educational Technology Network: a computer conferencing system dedicated to applications of computers in radiology practice, research, and education.

    PubMed

    D'Alessandro, M P; Ackerman, M J; Sparks, S M

    1993-11-01

    Educational Technology Network (ET Net) is a free, easy to use, on-line computer conferencing system organized and funded by the National Library of Medicine that is accessible via the SprintNet (SprintNet, Reston, VA) and Internet (Merit, Ann Arbor, MI) computer networks. It is dedicated to helping bring together, in a single continuously running electronic forum, developers and users of computer applications in the health sciences, including radiology. ET Net uses the Caucus computer conferencing software (Camber-Roth, Troy, NY) running on a microcomputer. This microcomputer is located in the National Library of Medicine's Lister Hill National Center for Biomedical Communications and is directly connected to the SprintNet and the Internet networks. The advanced computer conferencing software of ET Net allows individuals who are separated in space and time to unite electronically to participate, at any time, in interactive discussions on applications of computers in radiology. A computer conferencing system such as ET Net allows radiologists to maintain contact with colleagues on a regular basis when they are not physically together. Topics of discussion on ET Net encompass all applications of computers in radiological practice, research, and education. ET Net has been in successful operation for 3 years and has a promising future aiding radiologists in the exchange of information pertaining to applications of computers in radiology.

  18. Computer network access to scientific information systems for minority universities

    NASA Astrophysics Data System (ADS)

    Thomas, Valerie L.; Wakim, Nagi T.

    1993-08-01

    The evolution of computer networking technology has led to the establishment of a massive networking infrastructure which interconnects various types of computing resources at many government, academic, and corporate institutions. A large segment of this infrastructure has been developed to facilitate information exchange and resource sharing within the scientific community. The National Aeronautics and Space Administration (NASA) supports both the development and the application of computer networks which provide its community with access to many valuable multi-disciplinary scientific information systems and on-line databases. Recognizing the need to extend the benefits of this advanced networking technology to the under-represented community, the National Space Science Data Center (NSSDC) in the Space Data and Computing Division at the Goddard Space Flight Center has developed the Minority University-Space Interdisciplinary Network (MU-SPIN) Program: a major networking and education initiative for Historically Black Colleges and Universities (HBCUs) and Minority Universities (MUs). In this paper, we will briefly explain the various components of the MU-SPIN Program while highlighting how, by providing access to scientific information systems and on-line data, it promotes a higher level of collaboration among faculty and students and NASA scientists.

  19. The National Special Education Alliance: One Year Later.

    ERIC Educational Resources Information Center

    Green, Peter

    1988-01-01

    The National Special Education Alliance (a national network of local computer resource centers associated with Apple Computer, Inc.) consists, one year after formation, of 24 non-profit support centers staffed largely by volunteers. The NSEA now reaches more than 1000 disabled computer users each month and more growth in the future is expected.…

  20. Documentary of MFENET, a national computer network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shuttleworth, B.O.

    1977-06-01

    The national Magnetic Fusion Energy Computer Network (MFENET) is a newly operational star network of geographically separated heterogeneous hosts and a communications subnetwork of PDP-11 processors. Host processors interfaced to the subnetwork currently include a CDC 7600 at the Central Computer Center (CCC) and several DECsystem-10's at User Service Centers (USC's). The network was funded by a U.S. government agency (ERDA) to provide in an economical manner the needed computational resources to magnetic confinement fusion researchers. Phase I operation of MFENET distributed the processing power of the CDC 7600 among the USC's through the provision of file transport between any two hosts and remote job entry to the 7600. Extending the capabilities of Phase I, MFENET Phase II provided interactive terminal access to the CDC 7600 from the USC's. A file management system is maintained at the CCC for all network users. The history and development of MFENET are discussed, with emphasis on the protocols used to link the host computers and the USC software. Comparisons are made of MFENET versus ARPANET (Advanced Research Projects Agency Computer Network) and DECNET (Digital Distributed Network Architecture). DECNET and MFENET host-to-host, host-to-CCP, and link protocols are discussed in detail. The USC--CCP interface is described briefly. 43 figures, 2 tables.

  1. The Role of Computer Networks in Aerospace Engineering.

    ERIC Educational Resources Information Center

    Bishop, Ann Peterson

    1994-01-01

    Presents selected results from an empirical investigation into the use of computer networks in aerospace engineering based on data from a national mail survey. The need for user-based studies of electronic networking is discussed, and a copy of the questionnaire used in the survey is appended. (Contains 46 references.) (LRW)

  2. HNET - A National Computerized Health Network

    PubMed Central

    Casey, Mark; Hamilton, Richard

    1988-01-01

    The HNET system demonstrated conceptually and technically a national text (and limited bit-mapped graphics) computer network for use between innovative members of the health care industry. The HNET configuration of a leased high-speed national packet switching network connecting any number of mainframe, mini, and micro computers was unique in its relatively low capital costs and freedom from obsolescence. With multiple simultaneous conferences, databases, bulletin boards, calendars, and advanced electronic mail and surveys, it is marketable to innovative hospitals, clinics, physicians, health care associations and societies, nurses, multisite research projects, libraries, etc. Electronic publishing and education capabilities, along with integrated voice and video transmission, are identified as future enhancements.

  3. Characteristics of Effective Networking Environments.

    ERIC Educational Resources Information Center

    Kaye, Judith C.

    This document chronicles a project called Model Nets, which studies the characteristics of computer networks that have a positive impact on K-12 learning. Los Alamos National Laboratory undertook the study so that its recommendations could help federal agencies wisely fund networking projects in an era when the national imperative has driven…

  4. 76 FR 63811 - Structural Reforms To Improve the Security of Classified Networks and the Responsible Sharing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-13

    ... Structural Reforms To Improve the Security of Classified Networks and the Responsible Sharing and... classified national security information (classified information) on computer networks, it is hereby ordered as follows: Section 1. Policy. Our Nation's security requires classified information to be shared...

  5. Real World Graph Connectivity

    ERIC Educational Resources Information Center

    Lind, Joy; Narayan, Darren

    2009-01-01

    We present the topic of graph connectivity along with a famous theorem of Menger in the real-world setting of the national computer network infrastructure of "National LambdaRail". We include a set of exercises where students reinforce their understanding of graph connectivity by analysing the "National LambdaRail" network. Finally, we give…
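
    For readers who want to experiment with the theorem described above, here is a minimal Python sketch using the networkx library. The backbone topology below is invented for illustration and is not the actual "National LambdaRail" map. Menger's theorem states that the minimum number of nodes whose removal disconnects two nonadjacent nodes equals the maximum number of internally node-disjoint paths between them.

      import networkx as nx

      # Hypothetical simplified backbone; the links are invented for illustration.
      G = nx.Graph()
      G.add_edges_from([
          ("Seattle", "Denver"), ("Seattle", "Sunnyvale"),
          ("Sunnyvale", "Denver"), ("Sunnyvale", "LosAngeles"),
          ("LosAngeles", "ElPaso"), ("Denver", "Chicago"),
          ("ElPaso", "Houston"), ("Houston", "Atlanta"),
          ("Chicago", "Atlanta"), ("Chicago", "NewYork"),
          ("Atlanta", "WashingtonDC"), ("NewYork", "WashingtonDC"),
      ])

      # Menger's theorem: minimum node cut size equals the maximum
      # number of internally node-disjoint paths between two nodes.
      u, v = "Seattle", "Atlanta"
      min_cut = nx.node_connectivity(G, u, v)
      paths = list(nx.node_disjoint_paths(G, u, v))
      assert min_cut == len(paths)
      print(f"{u}-{v}: connectivity = {min_cut}, disjoint paths = {paths}")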

  6. The NASA Science Internet: An integrated approach to networking

    NASA Technical Reports Server (NTRS)

    Rounds, Fred

    1991-01-01

    An integrated approach to building a networking infrastructure is an absolute necessity for meeting the multidisciplinary science networking requirements of the Office of Space Science and Applications (OSSA) science community. These networking requirements include communication connectivity between computational resources, databases, and library systems, as well as to other scientists and researchers around the world. A consolidated networking approach allows strategic use of the existing science networking within the Federal government, and it provides networking capability that takes into consideration national and international trends toward multivendor and multiprotocol service. It also offers a practical vehicle for optimizing costs and maximizing performance. Finally, and perhaps most important to the development of high-speed computing, an integrated network constitutes a focus for phasing to the National Research and Education Network (NREN). The NASA Science Internet (NSI) program, established in mid-1988, is structured to provide just such an integrated network. A description of the NSI is presented.

  7. National Geographic Society Kids Network: Report on 1994 teacher participants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    In 1994, National Geographic Society Kids Network, a computer/telecommunications-based science curriculum, was presented to elementary and middle school teachers through summer programs sponsored by NGS and US DOE. The network program assists teachers in understanding the process of doing science; understanding the role of computers and telecommunications in the study of science, math, and engineering; and utilizing computers and telecommunications appropriately in the classroom. The program enables teachers to integrate science, math, and technology with other subjects, with the ultimate goal of encouraging students of all abilities to pursue careers in science/math/engineering. This report assesses the impact of the network program on participating teachers.

  8. Distributed Problem Solving: Adaptive Networks with a Computer Intermediary Resource. Intelligent Executive Computer Communication

    DTIC Science & Technology

    1991-06-01

    Proceedings of the National Conference on Artificial Intelligence, pages 181-184, American Association for Artificial Intelligence, Pittsburgh... Intermediary Resource: Intelligent Executive Computer Communication, John Lyman and Carla J. Conaway, University of California at Los Angeles... Interim Report: Distributed Problem Solving: Adaptive Networks With a Computer Intermediary Resource: Intelligent Executive Computer Communication

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The Computing and Communications (C) Division is responsible for the Laboratory's Integrated Computing Network (ICN) as well as Laboratory-wide communications. Our computing network, used by 8,000 people distributed throughout the nation, constitutes one of the most powerful scientific computing facilities in the world. In addition to the stable production environment of the ICN, we have taken a leadership role in high-performance computing and have established the Advanced Computing Laboratory (ACL), the site of research on experimental, massively parallel computers; high-speed communication networks; distributed computing; and a broad variety of advanced applications. The computational resources available in the ACL are of the type needed to solve problems critical to national needs, the so-called "Grand Challenge" problems. The purpose of this publication is to inform our clients of our strategic and operating plans in these important areas. We review major accomplishments since late 1990 and describe our strategic planning goals and specific projects that will guide our operations over the next few years. Our mission statement, planning considerations, and management policies and practices are also included.

  10. Community Colleges and Cybersecurity Education.

    ERIC Educational Resources Information Center

    Teles, Elizabeth J.; Hovis, R. Corby

    2002-01-01

    Describes recent federal legislation (H.R. 3394) that charges the National Science Foundation with offering more grants to colleges and universities for degree programs in computer and network security, and to establish trainee programs for graduate students who pursue doctoral degrees in computer and network security. Discusses aspects of…

  11. NIF ICCS network design and loading analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tietbohl, G; Bryant, R

    The National Ignition Facility (NIF) is housed within a large facility about the size of two football fields. The Integrated Computer Control System (ICCS) is distributed throughout this facility and requires the integration of about 40,000 control points and over 500 video sources. This integration is provided by approximately 700 control computers distributed throughout the NIF facility and a network that provides the communication infrastructure. A main control room houses a set of seven computer consoles providing operator access and control of the various distributed front-end processors (FEPs). There are also remote workstations distributed within the facility that provide operator console functions while personnel are testing and troubleshooting throughout the facility. The operator workstations communicate with the FEPs, which implement the localized control and monitoring functions. There are different types of FEPs for the various subsystems being controlled. This report describes the design of the NIF ICCS network and how it meets the expected traffic loads and the requirements of the Sub-System Design Requirements (SSDRs). This document supersedes the earlier reports entitled Analysis of the National Ignition Facility Network, dated November 6, 1996, and The National Ignition Facility Digital Video and Control Network, dated July 9, 1996. For an overview of the ICCS, refer to the document NIF Integrated Computer Controls System Description (NIF-3738).

  12. Proceedings of a Conference on Telecommunication Technologies, Networkings and Libraries

    NASA Astrophysics Data System (ADS)

    Knight, N. K.

    1981-12-01

    Current and developing technologies for digital transmission of image data likely to have an impact on the operations of libraries and information centers, or to provide support for information networking, are reviewed. Technologies reviewed include slow-scan television, teleconferencing, and videodisc technology; standards development for computer network interconnection through hardware and software, particularly packet-switched networks; computer network protocols for library and information service applications; the structure of a national bibliographic telecommunications network; and the major policy issues involved in the regulation or deregulation of the common communications carrier industry.

  13. "TIS": An Intelligent Gateway Computer for Information and Modeling Networks. Overview.

    ERIC Educational Resources Information Center

    Hampel, Viktor E.; And Others

    TIS (Technology Information System) is being used at the Lawrence Livermore National Laboratory (LLNL) to develop software for Intelligent Gateway Computers (IGC) suitable for the prototyping of advanced, integrated information networks. Dedicated to information management, TIS leads the user to available information resources, on TIS or…

  14. Outline of CS application experiments

    NASA Astrophysics Data System (ADS)

    Otsu, Y.; Kondoh, K.; Matsumoto, M.

    1985-09-01

    To promote and investigate practical applications of satellite use, CS application experiments were performed for various social activity needs, including those of public services such as the National Police Agency and the Japanese National Railway, computer network services, news material transmission, and advanced teleconference activities. Public service satellite communications systems were developed and tested. Based on the results obtained, several public services have implemented CS-2 for practical disaster-backup uses. Practical application computer network and enhanced video-conference experiments have also been performed.

  15. The space physics analysis network

    NASA Astrophysics Data System (ADS)

    Green, James L.

    1988-04-01

    The Space Physics Analysis Network, or SPAN, is emerging as a viable method for solving an immediate communication problem for space and Earth scientists and has been operational for nearly 7 years. SPAN, with its extension into Europe, utilizes computer-to-computer communications allowing mail, binary and text file transfer, and remote logon capability to over 1000 space science computer systems. The network has been used to successfully transfer real-time data to remote researchers for rapid data analysis, but its primary function is non-real-time applications. One of the major advantages of using SPAN is its spacecraft mission independence. Space science researchers using SPAN are located in universities, industries, and government institutions all across the United States and Europe. These researchers are in such fields as magnetospheric physics, astrophysics, ionospheric physics, atmospheric physics, climatology, meteorology, oceanography, planetary physics, and solar physics. SPAN users have access to space and Earth science data bases, mission planning and information systems, and computational facilities for the purposes of facilitating correlative space data exchange, data analysis, and space research. For example, the National Space Science Data Center (NSSDC), which manages the network, is providing facilities on SPAN such as the Network Information Center (SPAN NIC). SPAN has interconnections with several national and international networks such as HEPNET and TEXNET, forming a transparent DECnet network. The combined total number of computers now reachable over these combined networks is about 2000. In addition, SPAN supports full function capabilities over the international public packet-switched networks (e.g., TELENET) and has mail gateways to ARPANET, BITNET, and JANET.

  16. HPCC and the National Information Infrastructure: an overview.

    PubMed Central

    Lindberg, D A

    1995-01-01

    The National Information Infrastructure (NII) or "information superhighway" is a high-priority federal initiative to combine communications networks, computers, databases, and consumer electronics to deliver information services to all U.S. citizens. The NII will be used to improve government and social services while cutting administrative costs. Operated by the private sector, the NII will rely on advanced technologies developed under the direction of the federal High Performance Computing and Communications (HPCC) Program. These include computing systems capable of performing trillions of operations (teraops) per second and networks capable of transmitting billions of bits (gigabits) per second. Among other activities, the HPCC Program supports the national supercomputer research centers, the federal portion of the Internet, and the development of interface software, such as Mosaic, that facilitates access to network information services. Health care has been identified as a critical demonstration area for HPCC technology and an important application area for the NII. As an HPCC participant, the National Library of Medicine (NLM) assists hospitals and medical centers to connect to the Internet through projects directed by the Regional Medical Libraries and through an Internet Connections Program cosponsored by the National Science Foundation. In addition to using the Internet to provide enhanced access to its own information services, NLM sponsors health-related applications of HPCC technology. Examples include the "Visible Human" project and recently awarded contracts for test-bed networks to share patient data and medical images, telemedicine projects to provide consultation and medical care to patients in rural areas, and advanced computer simulations of human anatomy for training in "virtual surgery." PMID:7703935

  17. Z39.50 and the Scholar's Workstation Concept.

    ERIC Educational Resources Information Center

    Phillips, Gary Lee

    1992-01-01

    Examines the potential application of the American National Standards Institute (ANSI)/National Information Standards Organization (NISO) Z39.50 library networking protocol as a client/server environment for a scholar's workstation. Computer networking models are described, and linking the workstation to an online public access catalog (OPAC) is…

  18. National Special Education Alliance.

    ERIC Educational Resources Information Center

    Pressman, Harvey

    1987-01-01

    The article describes the National Special Education Alliance, a network of parent-led organizations seeking to speed the delivery of computer technology to the disabled. Discussed are program origins, starting a local center, charter members of the alliance, benefits of Alliance membership, and the Alliance's relationship with Apple Computer. (DB)

  19. Integrated Engineering Information Technology, FY93 accomplishments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harris, R.N.; Miller, D.K.; Neugebauer, G.L.

    1994-03-01

    The Integrated Engineering Information Technology (IEIT) project is providing a comprehensive, easy-to-use computer network solution for communicating with coworkers both inside and outside Sandia National Laboratories. IEIT capabilities include computer networking, electronic mail, mechanical design, and data management. These network-based tools have one fundamental purpose: to help create a concurrent engineering environment that will enable Sandia organizations to excel in today's increasingly competitive business environment.

  1. Methods for computing water-quality loads at sites in the U.S. Geological Survey National Water Quality Network

    USGS Publications Warehouse

    Lee, Casey J.; Murphy, Jennifer C.; Crawford, Charles G.; Deacon, Jeffrey R.

    2017-10-24

    The U.S. Geological Survey publishes information on concentrations and loads of water-quality constituents at 111 sites across the United States as part of the U.S. Geological Survey National Water Quality Network (NWQN). This report details historical and updated methods for computing water-quality loads at NWQN sites. The primary updates to historical load estimation methods include (1) an adaptation to methods for computing loads to the Gulf of Mexico; (2) the inclusion of loads computed using the Weighted Regressions on Time, Discharge, and Season (WRTDS) method; and (3) the inclusion of loads computed using continuous water-quality data. Loads computed using WRTDS and continuous water-quality data are provided along with those computed using historical methods. Various aspects of method updates are evaluated in this report to help users of water-quality loading data determine which estimation methods best suit their particular application.
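
    The arithmetic these load methods build on is the same in every case: a load is a concentration multiplied by streamflow with a unit conversion. Below is a minimal Python sketch of that step; the sample values are invented, and this is not the USGS or WRTDS code.

      def daily_load_kg(conc_mg_per_l: float, discharge_m3_per_s: float) -> float:
          """Instantaneous constituent load, expressed in kg/day.

          1 mg/L equals 1 g/m^3, and a day has 86,400 s, so
          (g/m^3) * (m^3/s) * 86,400 s/day = 86,400 g/day = 86.4 kg/day.
          """
          return conc_mg_per_l * discharge_m3_per_s * 86.4

      # Hypothetical nitrate samples: (concentration in mg/L, discharge in m^3/s)
      for conc, q in [(2.5, 14.0), (1.8, 30.0), (3.1, 9.5)]:
          print(f"C = {conc} mg/L, Q = {q} m^3/s -> {daily_load_kg(conc, q):,.1f} kg/day")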

  2. Global information infrastructure.

    PubMed

    Lindberg, D A

    1994-01-01

    The High Performance Computing and Communications Program (HPCC) is a multiagency federal initiative under the leadership of the White House Office of Science and Technology Policy, established by the High Performance Computing Act of 1991. It has been assigned a critical role in supporting the international collaboration essential to science and to health care. Goals of the HPCC are to extend USA leadership in high performance computing and networking technologies; to improve technology transfer for economic competitiveness, education, and national security; and to provide a key part of the foundation for the National Information Infrastructure. The first component of the National Institutes of Health to participate in the HPCC, the National Library of Medicine (NLM), recently issued a solicitation for proposals to address a range of issues, from privacy to 'testbed' networks, 'virtual reality,' and more. These efforts will build upon the NLM's extensive outreach program and other initiatives, including the Unified Medical Language System (UMLS), MEDLARS, and Grateful Med. New Internet search tools are emerging, such as Gopher and 'Knowbots'. Medicine will succeed in developing future intelligent agents to assist in utilizing computer networks. Our ability to serve patients is so often restricted by lack of information and knowledge at the time and place of medical decision-making. The new technologies, properly employed, will also greatly enhance our ability to serve the patient.

  3. User's manual for a material transport code on the Octopus Computer Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naymik, T.G.; Mendez, G.D.

    1978-09-15

    A code to simulate material transport through porous media was developed at Oak Ridge National Laboratory. This code has been modified and adapted for use at Lawrence Livermore Laboratory. This manual, in conjunction with report ORNL-4928, explains the input, output, and execution of the code on the Octopus Computer Network.

  4. The Emergence Of The National Research And Education Network (NREN) And Its Implications For American Telecommunications

    NASA Astrophysics Data System (ADS)

    Maloff, Joel H.

    1990-01-01

    "The nation which most completely assimilates high performance computing into its economy will very likely emerge as the dominant intellectual, economic, and technological force in the next century", Senator Albert Gore, Jr., May 18, 1989, while introducing Senate Bill 1067, "The National High Performance Computer Technology Act of 1989". A national network designed to link supercomputers, particle accelerators, researchers, educators, government, and industry is beginning to emerge. The degree to which the United States can mobilize the resources inherent within our academic, industrial and government sectors towards the establishment of such a network infrastructure will have direct bearing on the economic and political stature of this country in the next century. This program will have significant impact on all forms of information transfer, and peripheral benefits to all walks of life similar to those experienced from the moon landing program of the 1960's. The key to our success is the involvement of scientists, librarians, network designers, and bureaucrats in the planning stages. Collectively, the resources resident within the United States are awesome; individually, their impact is somewhat more limited. The engineers, technicians, business people, and educators participating in this conference have a vital role to play in the success of the National Research and Education Network (NREN).

  5. What Presidents Need To Know about the Impact of Networking.

    ERIC Educational Resources Information Center

    Leadership Abstracts, 1993

    1993-01-01

    Many colleges and universities are undergoing cultural changes as a result of extensive voice, data, and video networking. Local area networks link large portions of most campuses, and national networks have evolved from specialized services for researchers in computer-related disciplines to general utilities on many campuses. Campuswide systems…

  6. Hello! Kids Network around the World.

    ERIC Educational Resources Information Center

    Lynes, Kristine

    1996-01-01

    Describes Kids Network, an educational network available from the National Geographic Society that allows students in grades four through six to become part of research teams that include students from around the world. Computer hardware requirements and a list of Kids Network research questions are listed in a sidebar. (JMV)

  7. Overview of NASA communications infrastructure

    NASA Technical Reports Server (NTRS)

    Arnold, Ray J.; Fuechsel, Charles

    1991-01-01

    The infrastructure of NASA communications systems for effecting coordination across NASA offices and with the national and international research and technological communities is discussed. The offices and networks of the communication system include the Office of Space Science and Applications (OSSA), which manages all NASA missions, and the Office of Space Operations, which furnishes communication support through the NASCOM, the mission critical communications support network, and the Program Support Communications network. The NASA Science Internet was established by OSSA to centrally manage, develop, and operate an integrated computer network service dedicated to NASA's space science and application research. Planned for the future is the National Research and Education Network, which will provide communications infrastructure to enhance science resources at a national level.

  8. High Performance Computing and Network Program. Hearing before the Subcommittee on Science of the Committee on Science, Space, and Technology, House of Representatives, One Hundred Third Congress, First Session.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. House Committee on Science, Space and Technology.

    The purpose of the hearing transcribed in this document was to obtain the views of representatives of network user and provider communities regarding the path the National Science Foundation (NSF) is taking for recompetition of the NSFNET computer network. In particular the committee was interested in the consistency of the evolution of NSFNET…

  9. Investigation of a Neural Network Implementation of a TCP Packet Anomaly Detection System

    DTIC Science & Technology

    2004-05-01

    …recognize new attack variants. Artificial neural networks (ANNs) have the capacity to learn from patterns and to… Computational Intelligence Techniques in Intrusion Detection Systems. In IASTED International Conference on Neural Networks and Computational Intelligence, pp… Neural Network Training: Overfitting May Be Harder than Expected. In Proceedings of the Fourteenth National Conference on Artificial Intelligence, AAAI-97
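
    As a rough sketch of the class of technique this report investigates, the following trains a single sigmoid neuron to separate synthetic "normal" from "anomalous" TCP header feature vectors. The features, data, and hyperparameters are invented stand-ins, not the report's model.

      import numpy as np

      rng = np.random.default_rng(0)
      # Synthetic feature columns: [header length, reserved-bits fraction, window scale]
      normal = rng.normal([20.0, 0.0, 0.5], 0.05, size=(200, 3))
      anomalous = rng.normal([28.0, 0.6, 0.9], 0.05, size=(200, 3))
      X = np.vstack([normal, anomalous])
      y = np.concatenate([np.zeros(200), np.ones(200)])
      X = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize features

      w, b = np.zeros(3), 0.0
      for _ in range(500):                          # plain gradient descent
          p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # sigmoid activation
          grad = p - y                              # d(cross-entropy)/dz
          w -= 0.1 * (X.T @ grad) / len(y)
          b -= 0.1 * grad.mean()

      p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
      print("training accuracy:", ((p > 0.5) == y).mean())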

  10. Cybersim: geographic, temporal, and organizational dynamics of malware propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santhi, Nandakishore; Yan, Guanhua; Eidenbenz, Stephan

    2010-01-01

    Cyber-infractions into a nation's strategic security envelope pose a constant and daunting challenge. We present the modular CyberSim tool, which has been developed in response to the need to realistically simulate, at a national level, software vulnerabilities and the resulting malware propagation in online social networks. The CyberSim suite (a) can generate realistic scale-free networks from a database of geocoordinated computers to closely model social networks arising from personal and business email contacts and online communities; (b) maintains for each host a list of installed software, along with the latest published vulnerabilities; (d) allows designated initial nodes where malware gets introduced; (e) simulates, using distributed discrete event-driven technology, the spread of malware exploiting a specific vulnerability, with packet delay and user online behavior models; and (f) provides a graphical visualization of the spread of infection, its severity, businesses affected, etc. to the analyst. We present sample simulations on a national-level network with millions of computers.
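
    As a toy illustration of this kind of simulation (not the CyberSim code itself), the sketch below spreads an SI-style infection across a scale-free contact graph using the networkx library; the graph size, infection probability, and step count are arbitrary assumptions.

      import random
      import networkx as nx

      random.seed(42)
      G = nx.barabasi_albert_graph(n=10_000, m=3)   # scale-free contact graph
      p_infect = 0.05                               # per-contact infection chance
      infected = {0}                                # designated initial node

      for step in range(20):                        # synchronous discrete time steps
          newly = set()
          for host in infected:
              for peer in G.neighbors(host):
                  if peer not in infected and random.random() < p_infect:
                      newly.add(peer)
          infected |= newly
          print(f"step {step:2d}: {len(infected):5d} hosts infected")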

  11. The National Research and Education Network (NREN): Research and Policy Perspectives.

    ERIC Educational Resources Information Center

    McClure, Charles R.; And Others

    This book provides an overview and status report on the progress made in developing the National Research and Education Network (NREN) as of early 1991. It reports on a number of investigations that provide a research and policy perspective on the NREN and computer-mediated communication (CMC), and brings together key source documents that have…

  12. Using high-performance networks to enable computational aerosciences applications

    NASA Technical Reports Server (NTRS)

    Johnson, Marjory J.

    1992-01-01

    One component of the U.S. Federal High Performance Computing and Communications Program (HPCCP) is the establishment of a gigabit network to provide a communications infrastructure for researchers across the nation. This gigabit network will provide new services and capabilities, in addition to increased bandwidth, to enable future applications. An understanding of these applications is necessary to guide the development of the gigabit network and other high-performance networks of the future. In this paper we focus on computational aerosciences applications run remotely using the Numerical Aerodynamic Simulation (NAS) facility located at NASA Ames Research Center. We characterize these applications in terms of network-related parameters and relate user experiences that reveal limitations imposed by the current wide-area networking infrastructure. Then we investigate how the development of a nationwide gigabit network would enable users of the NAS facility to work in new, more productive ways.

  13. Computer Science and Technology Publications. NBS Publications List 84.

    ERIC Educational Resources Information Center

    National Bureau of Standards (DOC), Washington, DC. Inst. for Computer Sciences and Technology.

    This bibliography lists publications of the Institute for Computer Sciences and Technology of the National Bureau of Standards. Publications are listed by subject in the areas of computer security, computer networking, and automation technology. Sections list publications of: (1) current Federal Information Processing Standards; (2) computer…

  14. Report on Computing and Networking in the Space Science Laboratory by the SSL Computer Committee

    NASA Technical Reports Server (NTRS)

    Gallagher, D. L. (Editor)

    1993-01-01

    The Space Science Laboratory (SSL) at Marshall Space Flight Center is a multiprogram facility. Scientific research is conducted in four discipline areas: earth science and applications, solar-terrestrial physics, astrophysics, and microgravity science and applications. Representatives from each of these discipline areas participate in a Laboratory computer requirements committee, which developed this document. The purpose is to establish and discuss Laboratory objectives for computing and networking in support of science. The purpose is also to lay the foundation for a collective, multiprogram approach to providing these services. Special recognition is given to the importance of the national and international efforts of our research communities toward the development of interoperable, network-based computer applications.

  15. Portable Computer Technology (PCT) Research and Development Program Phase 2

    NASA Technical Reports Server (NTRS)

    Castillo, Michael; McGuire, Kenyon; Sorgi, Alan

    1995-01-01

    This project report focused on: (1) the design and development of two Advanced Portable Workstation 2 (APW 2) units, which incorporate advanced technology features such as a low-power Pentium processor, a high-resolution color display, National Television Standards Committee (NTSC) video handling capabilities, a Personal Computer Memory Card International Association (PCMCIA) interface, and Small Computer System Interface (SCSI) and ethernet interfaces; (2) the use of these units to integrate and demonstrate advanced wireless network and portable video capabilities; and (3) the qualification of the APW 2 systems for use in specific experiments aboard the Mir Space Station. A major objective of the PCT Phase 2 program was to help guide future choices in computing platforms and techniques for meeting National Aeronautics and Space Administration (NASA) mission objectives, the focus being on the development of optimal configurations of computing hardware, software applications, and network technologies for use on NASA missions.

  16. High End Computer Network Testbedding at NASA Goddard Space Flight Center

    NASA Technical Reports Server (NTRS)

    Gary, James Patrick

    1998-01-01

    The Earth & Space Data Computing (ESDC) Division at the Goddard Space Flight Center is involved in developing and demonstrating various high-end computer networking capabilities. The ESDC has several high-end supercomputers. These are used (1) to run computer simulations of the climate system; (2) to support the Earth and Space Sciences (ESS) project; and (3) to support the Grand Challenge (GC) science, which is aimed at understanding the turbulent convection and dynamos in stars. GC research occurs at many sites throughout the country, and this research is enabled, in part, by multiple high performance network interconnections. The application drivers for High End Computer Networking use distributed supercomputing to support virtual reality applications, such as TerraVision (a three-dimensional browser of remotely accessed data) and Cave Automatic Virtual Environments (CAVE). Workstations can access and display data from multiple CAVEs with video servers, which allows for group/project collaborations using a combination of video, data, voice, and shared whiteboarding. The ESDC is also developing and demonstrating a high degree of interoperability between satellite and terrestrial-based networks. To this end, the ESDC is conducting research and evaluations of new computer networking protocols and related technologies which improve the interoperability of satellite and terrestrial networks. The ESDC is also involved in the Security Proof of Concept Keystone (SPOCK) program sponsored by the National Security Agency (NSA). The SPOCK activity provides a forum for government users and security technology providers to share information on security requirements, emerging technologies, and new product developments. Also, the ESDC is involved in the Trans-Pacific Digital Library Experiment, which aims to demonstrate and evaluate the use of high performance satellite communications and advanced data communications protocols to enable interactive digital library data access at 155 megabits per second between the U.S. Library of Congress, the National Library of Japan, and other digital library sites. The ESDC's participation in this program is the Trans-Pacific access to GLOBE visualizations in real time. ESDC is participating in the Department of Defense's ATDNet with Multiwavelength Optical Network (MONET), a fully switched wavelength division networking testbed. This presentation is in viewgraph format.

  17. A network of web multimedia medical information servers for a medical school and university hospital.

    PubMed

    Denier, P; Le Beux, P; Delamarre, D; Fresnel, A; Cleret, M; Courtin, C; Seka, L P; Pouliquen, B; Cleran, L; Riou, C; Burgun, A; Jarno, P; Leduff, F; Lesaux, H; Duvauferrier, R

    1997-08-01

    Modern medicine requires rapid access to information, including clinical data from medical records, bibliographic databases, knowledge bases, and nomenclature databases. This is especially true for University Hospitals and Medical Schools, for training as well as for fundamental and clinical research for diagnostic and therapeutic purposes. This implies the development of local, national, and international cooperation, which can be enhanced via the use of and access to computer networks such as the Internet. The development of professional cooperative networks goes with the development of telecommunication and computer networks, and our project is to make these new tools and technologies accessible to medical students, both during teaching time in the Medical School and during training periods at the University Hospital. We have developed a local area network between the School of Medicine and the Hospital which takes advantage of the new Web client-server technology, both internally (Intranet) and externally via access to the National Research Network (RENATER in France), which is connected to the Internet. The address of our public Web server is http://www.med.univ-rennes1.fr.

  18. Information Networks and Education: An Analytic Bibliography.

    ERIC Educational Resources Information Center

    Pritchard, Roger

    This literature review presents a broad and overall perspective on the various kinds of information networks that will be useful to educators in developing nations. There are five sections to the essay. The first section cites and briefly describes the literature dealing with library, information, and computer networks. Sections two and three…

  19. LINCS: Livermore's network architecture. [Octopus computing network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fletcher, J.G.

    1982-01-01

    Octopus, a local computing network that has been evolving at the Lawrence Livermore National Laboratory for over fifteen years, is currently undergoing a major revision. The primary purpose of the revision is to consolidate and redefine the variety of conventions and formats, which have grown up over the years, into a single standard family of protocols, the Livermore Interactive Network Communication Standard (LINCS). This standard treats the entire network as a single distributed operating system such that access to a computing resource is obtained in a single way, whether that resource is local (on the same computer as the accessing process) or remote (on another computer). LINCS encompasses not only communication but also such issues as the relationship of customer to server processes and the structure, naming, and protection of resources. The discussion includes: an overview of the Livermore user community and computing hardware, the functions and structure of each of the seven layers of LINCS protocol, the reasons why we have designed our own protocols, and why we are dissatisfied with the directions that current protocol standards are taking.

  20. The UK Human Genome Mapping Project online computing service.

    PubMed

    Rysavy, F R; Bishop, M J; Gibbs, G P; Williams, G W

    1992-04-01

    This paper presents an overview of computing and networking facilities developed by the Medical Research Council to provide online computing support to the Human Genome Mapping Project (HGMP) in the UK. The facility is connected to a number of other computing facilities in various centres of genetics and molecular biology research excellence, either directly via high-speed links or through national and international wide-area networks. The paper describes the design and implementation of the current system, a 'client/server' network of Sun, IBM, DEC and Apple servers, gateways and workstations. A short outline of online computing services currently delivered by this system to the UK human genetics research community is also provided. More information about the services and their availability could be obtained by a direct approach to the UK HGMP-RC.

  1. Requirements for a network storage service

    NASA Technical Reports Server (NTRS)

    Kelly, Suzanne M.; Haynes, Rena A.

    1992-01-01

    Sandia National Laboratories provides a high performance classified computer network as a core capability in support of its mission of nuclear weapons design and engineering, physical sciences research, and energy research and development. The network, locally known as the Internal Secure Network (ISN), was designed in 1989 and comprises multiple distributed local area networks (LAN's) residing in Albuquerque, New Mexico and Livermore, California. The TCP/IP protocol suite is used for inter-node communications. Scientific workstations and mid-range computers, running UNIX-based operating systems, compose most LAN's. One LAN, operated by the Sandia Corporate Computing Directorate, is a general purpose resource providing a supercomputer and a file server to the entire ISN. The current file server on the supercomputer LAN is an implementation of the Common File System (CFS) developed by Los Alamos National Laboratory. Subsequent to the design of the ISN, Sandia reviewed its mass storage requirements and chose to enter into a competitive procurement to replace the existing file server with one more adaptable to a UNIX/TCP/IP environment. The requirements study for the network was the starting point for the requirements study for the new file server. The new file server is called the Network Storage Service (NSS), and its requirements are described in this paper. The next section gives an application or functional description of the NSS. The final section adds performance, capacity, and access constraints to the requirements.

  2. 76 FR 7213 - ACRAnet, Inc.; SettlementOne Credit Corporation, and Sackett National Holdings, Inc.; Fajilan and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-09

    ... allege that hackers were able to exploit vulnerabilities in the computer networks of multiple end user clients, putting all consumer reports in those networks at risk. In multiple breaches, hackers accessed...

  3. Evolutionary in Technology, Revolutionary in Impact

    ERIC Educational Resources Information Center

    Grush, Mary

    2007-01-01

    Ken Klingenstein has led national networking initiatives for the past 25 years. He served as director of computing and network services at the University of Colorado at Boulder from 1985-1999, and today, Klingenstein is director of middleware and security for Internet2. Truth is, this networking innovator has participated in the development of the…

  4. Computational Intelligence and Its Impact on Future High-Performance Engineering Systems

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler)

    1996-01-01

    This document contains presentations from the joint UVA/NASA Workshop on Computational Intelligence held at the Virginia Consortium of Engineering and Science Universities, Hampton, Virginia, June 27-28, 1995. The presentations addressed activities in the areas of fuzzy logic, neural networks, and evolutionary computations. Workshop attendees represented NASA, the National Science Foundation, the Department of Energy, National Institute of Standards and Technology (NIST), the Jet Propulsion Laboratory, industry, and academia. The workshop objectives were to assess the state of technology in the computational intelligence area and to provide guidelines for future research.

  5. Corridor One: An Integrated Distance Visualization Environment for SSI+ASCI Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christopher R. Johnson, Charles D. Hansen

    2001-10-29

    The goal of Corridor One: An Integrated Distance Visualization Environment for ASCI and SSI Applications was to combine the forces of six leading-edge institutions working in the areas of visualization, distributed computing, and high performance networking (Argonne National Laboratory, Lawrence Berkeley National Laboratory, Los Alamos National Laboratory, University of Illinois, University of Utah and Princeton University) to develop and deploy the most advanced integrated distance visualization environment for large-scale scientific visualization and demonstrate it on applications relevant to the DOE SSI and ASCI programs. The Corridor One team brought world-class expertise in parallel rendering, deep image-based rendering, immersive environment technology, large-format multi-projector wall-based displays, volume and surface visualization algorithms, collaboration tools and streaming media technology, network protocols for image transmission, high-performance networking, quality-of-service technology, and distributed computing middleware. Our strategy was to build on the very successful teams that produced the I-WAY, "Computational Grids" and CAVE technology and to add these to the teams that had developed the fastest parallel visualization systems and the most widely used networking infrastructure for multicast and distributed media. Unfortunately, just as we were getting going on the Corridor One project, DOE cut the program after the first year. As such, our final report consists of our progress during year one of the grant.

  6. pSCANNER: patient-centered Scalable National Network for Effectiveness Research

    PubMed Central

    Ohno-Machado, Lucila; Agha, Zia; Bell, Douglas S; Dahm, Lisa; Day, Michele E; Doctor, Jason N; Gabriel, Davera; Kahlon, Maninder K; Kim, Katherine K; Hogarth, Michael; Matheny, Michael E; Meeker, Daniella; Nebeker, Jonathan R

    2014-01-01

    This article describes the patient-centered Scalable National Network for Effectiveness Research (pSCANNER), which is part of the recently formed PCORnet, a national network composed of learning healthcare systems and patient-powered research networks funded by the Patient Centered Outcomes Research Institute (PCORI). It is designed to be a stakeholder-governed federated network that uses a distributed architecture to integrate data from three existing networks covering over 21 million patients in all 50 states: (1) VA Informatics and Computing Infrastructure (VINCI), with data from Veteran Health Administration's 151 inpatient and 909 ambulatory care and community-based outpatient clinics; (2) the University of California Research exchange (UC-ReX) network, with data from UC Davis, Irvine, Los Angeles, San Francisco, and San Diego; and (3) SCANNER, a consortium of UCSD, Tennessee VA, and three federally qualified health systems in the Los Angeles area supplemented with claims and health information exchange data, led by the University of Southern California. Initial use cases will focus on three conditions: (1) congestive heart failure; (2) Kawasaki disease; (3) obesity. Stakeholders, such as patients, clinicians, and health service researchers, will be engaged to prioritize research questions to be answered through the network. We will use a privacy-preserving distributed computation model with synchronous and asynchronous modes. The distributed system will be based on a common data model that allows the construction and evaluation of distributed multivariate models for a variety of statistical analyses. PMID:24780722
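
    To make the distributed computation model concrete, the following minimal Python sketch illustrates the general principle behind privacy-preserving federated analysis: each site reduces its records to aggregate statistics, and only those aggregates, never patient-level data, leave the site. The function names, statistics, and data below are illustrative assumptions, not pSCANNER's actual protocol or API.

        # Illustrative sketch of privacy-preserving distributed computation:
        # each site shares only aggregate statistics with the coordinator.
        # This is NOT pSCANNER's actual protocol; all names and data are
        # hypothetical.

        def local_summary(values):
            """Each site reduces its records to count, sum, and sum of squares."""
            n = len(values)
            s = sum(values)
            ss = sum(x * x for x in values)
            return n, s, ss

        def pooled_mean_var(summaries):
            """The coordinator combines per-site aggregates into pooled estimates."""
            n = sum(t[0] for t in summaries)
            s = sum(t[1] for t in summaries)
            ss = sum(t[2] for t in summaries)
            mean = s / n
            variance = ss / n - mean ** 2
            return mean, variance

        # Hypothetical per-site measurements; only (n, sum, sum of squares)
        # ever leave each site.
        site_data = [[27.1, 31.4, 24.9], [29.8, 33.2], [22.5, 26.7, 30.1, 28.4]]
        print(pooled_mean_var([local_summary(d) for d in site_data]))

    In the synchronous mode the abstract mentions, the coordinator would wait for every site's summary before combining them; in an asynchronous mode it could update the pooled estimate as summaries arrive.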

  7. Communications among data and science centers

    NASA Technical Reports Server (NTRS)

    Green, James L.

    1990-01-01

    The ability to electronically access and query the contents of remote computer archives is of singular importance in space and earth sciences; the present evaluation of such on-line information networks' development status foresees swift expansion of their data capabilities and complexity, in view of the volumes of data that will continue to be generated by NASA missions. The U.S.'s National Space Science Data Center (NSSDC) manages NASA's largest science computer network, the Space Physics Analysis Network; a comprehensive account is given of the structure of NSSDC international access through BITNET, and of connections to the NSSDC available in the Americas via the International X.25 network.

  8. Wide-Area Network Resources for Teacher Education.

    ERIC Educational Resources Information Center

    Aust, Ronald

    A central feature of the High Performance Computing Act of 1991 is the establishment of a National Research and Education Network (NREN). The level of access that teachers and teacher educators will need to benefit from the NREN and the types of network resources that are most useful for educators are explored, along with design issues that are…

  9. Georgia Interactive Network--GaIN--for Medical Information. Final Grant Report.

    ERIC Educational Resources Information Center

    Rankin, Jocelyn A.

    This report describes the development of the Georgia Interactive Network for Medical Information (GaIN), a project initially funded by a three-year (1983-1986) National Library of Medicine resource project grant. Designed to serve as a model network to transmit information via computer directly to health professionals, GaIN now operates through a…

  10. Geo-spatial Service and Application based on National E-government Network Platform and Cloud

    NASA Astrophysics Data System (ADS)

    Meng, X.; Deng, Y.; Li, H.; Yao, L.; Shi, J.

    2014-04-01

    With the acceleration of China's informatization process, the Chinese government has taken substantive strides in advancing the development and application of digital technology, promoting the evolution of e-government. Meanwhile, cloud computing, a service model built on pooled resources, can connect huge resource pools to provide a variety of IT services, and it has become a relatively mature technical pattern through further study and massive practical application. Based on cloud computing technology and the national e-government network platform, the "National Natural Resources and Geospatial Database (NRGD)" project integrated and transformed natural resources and geospatial information dispersed across various sectors and regions, established a logically unified but physically distributed fundamental database, and developed a national integrated information database system supporting major e-government applications. Cross-sector e-government applications and services are thereby realized, providing long-term, stable, and standardized natural resources and geospatial fundamental information products and services for national e-government and public users.

  11. Biomedical informatics research network: building a national collaboratory to hasten the derivation of new understanding and treatment of disease.

    PubMed

    Grethe, Jeffrey S; Baru, Chaitan; Gupta, Amarnath; James, Mark; Ludaescher, Bertram; Martone, Maryann E; Papadopoulos, Philip M; Peltier, Steven T; Rajasekar, Arcot; Santini, Simone; Zaslavsky, Ilya N; Ellisman, Mark H

    2005-01-01

    Through support from the National Institutes of Health's National Center for Research Resources, the Biomedical Informatics Research Network (BIRN) is pioneering the use of advanced cyberinfrastructure for medical research. By synchronizing developments in advanced wide area networking, distributed computing, distributed database federation, and other emerging capabilities of e-science, the BIRN has created a collaborative environment that is paving the way for biomedical research and clinical information management. The BIRN Coordinating Center (BIRN-CC) is orchestrating the development and deployment of key infrastructure components for immediate and long-range support of biomedical and clinical research being pursued by domain scientists in three neuroimaging test beds.

  12. IDEANET to Connect U.S. Schools.

    ERIC Educational Resources Information Center

    TechTrends, 1994

    1994-01-01

    Describes projects in schools using innovative educational technology. These programs include an interactive television and computer network; an initiative to use the Internet in rural areas; a telecommunications network to share information throughout Tennessee; and an announcement of the Marlowe Froke National Award for Leadership in Educational…

  13. Privacy and the National Information Infrastructure.

    ERIC Educational Resources Information Center

    Rotenberg, Marc

    1994-01-01

    Explains the work of Computer Professionals for Social Responsibility regarding privacy issues in the use of electronic networks; recommends principles that should be adopted for a National Information Infrastructure privacy code; discusses the need for public education; and suggests pertinent legislative proposals. (LRW)

  14. Addressing Software Security

    NASA Technical Reports Server (NTRS)

    Bailey, Brandon

    2015-01-01

    Historically, security within organizations was thought of as an IT function (web sites/servers, email, workstation patching, etc.). The threat landscape has since evolved (script kiddies, hackers, advanced persistent threats (APT), nation states, etc.), and the attack surface has expanded as networks have become interconnected. Factors in an organization's security posture include the network layer (routers, firewalls, etc.), computer network defense (IPS/IDS, sensors, continuous monitoring, etc.), industrial control systems (ICS), and software security (COTS, FOSS, custom code, etc.).

  15. Apollo Ring Optical Switch

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maestas, J.H.

    1987-03-01

    An optical switch was designed, built, and installed at Sandia National Laboratories in Albuquerque, New Mexico, to facilitate the integration of two Apollo computer networks into a single network. This report presents an overview of the optical switch as well as its layout, switch testing procedure and test data, and installation.

  16. The medical science DMZ: a network design pattern for data-intensive medical science.

    PubMed

    Peisert, Sean; Dart, Eli; Barnett, William; Balas, Edward; Cuff, James; Grossman, Robert L; Berman, Ari; Shankar, Anurag; Tierney, Brian

    2017-10-06

    We describe a detailed solution for maintaining high-capacity, data-intensive network flows (eg, 10, 40, 100 Gbps+) in a scientific, medical context while still adhering to security and privacy laws and regulations. The materials and methods employed are high-end networking, packet-filter firewalls, and network intrusion-detection systems. We describe a "Medical Science DMZ" concept as an option for secure, high-volume transport of large, sensitive datasets between research institutions over national research networks, and give three detailed descriptions of implemented Medical Science DMZs. The exponentially increasing amounts of "omics" data, high-quality imaging, and other rapidly growing clinical datasets have resulted in the rise of biomedical research "Big Data." The storage, analysis, and network resources required to process these data and integrate them into patient diagnoses and treatments have grown to scales that strain the capabilities of academic health centers. Some data are not generated locally and cannot be sustained locally, and shared data repositories such as those provided by the National Library of Medicine, the National Cancer Institute, and international partners such as the European Bioinformatics Institute are rapidly growing. The ability to store and compute using these data must therefore be addressed by a combination of local, national, and industry resources that exchange large datasets. Maintaining data-intensive flows that comply with the Health Insurance Portability and Accountability Act (HIPAA) and other regulations presents a new challenge for biomedical research. We describe a strategy that marries performance and security by borrowing from and redefining the concept of a Science DMZ, a framework that is used in physical sciences and engineering research to manage high-capacity data flows. By implementing a Medical Science DMZ architecture, biomedical researchers can leverage the scale provided by high-performance computer and cloud storage facilities and national high-speed research networks while preserving privacy and meeting regulatory requirements.
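
    A defining choice in the Science DMZ pattern that this work borrows is that the high-speed data path is protected by narrowly scoped, stateless packet filters and out-of-band intrusion detection rather than by a stateful deep-inspection firewall. The Python sketch below illustrates that kind of stateless per-packet decision; the addresses, port, and peer list are hypothetical, and a real deployment would express such rules as router ACLs rather than application code.

        # Sketch of the narrowly scoped, stateless filtering a Science DMZ
        # places in front of a data transfer node (DTN) instead of a
        # deep-inspection firewall. Addresses, ports, and peers are
        # hypothetical examples.
        from ipaddress import ip_address, ip_network

        # Only known collaborating institutions may reach the DTN, and only
        # on the data-transfer service port.
        ALLOWED_PEERS = [ip_network("192.0.2.0/24"), ip_network("198.51.100.0/24")]
        DTN_HOST = ip_address("203.0.113.10")
        DTN_PORT = 2811  # e.g., a GridFTP-style transfer service (assumed)

        def permit(src, dst, dport):
            """Stateless per-packet decision: no connection tracking in the fast path."""
            return (dst == DTN_HOST
                    and dport == DTN_PORT
                    and any(src in net for net in ALLOWED_PEERS))

        print(permit(ip_address("192.0.2.7"), DTN_HOST, 2811))   # True: known peer
        print(permit(ip_address("203.0.113.99"), DTN_HOST, 22))  # False: wrong peer and port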

  17. National High Performance Computer Technology Act of 1989. Hearings Before the Subcommittee on Science, Technology, and Space of the Committee on Commerce, Science, and Transportation. United States Senate, One Hundred First Congress, First Session (June 21, July 26, and September 15, 1989).

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. Senate Committee on Commerce, Science, and Transportation.

    This collection of statements focuses on Title 2 of S. 1067, which calls for the National Science Foundation to establish a National Research and Education Network (NREN) by 1996. This is one of several titles in a bill to provide for a coordinated federal research program to ensure continued U.S. leadership in high performance computing. The…

  18. Building a Model for Distance Collaboration in the Computer-Assisted Business Communication Classroom.

    ERIC Educational Resources Information Center

    Lopez, Elizabeth Sanders; Nagelhout, Edwin

    1995-01-01

    Outlines a model for distance collaboration between business writing classrooms using network technology. Discusses ways to teach national and international audience awareness, problem solving, and the contextual nature of cases. Discusses goals for distance collaboration, sample assignments, and the pros and cons of network technologies. (SR)

  19. Internet2: Building and Deploying Advanced, Networked Applications.

    ERIC Educational Resources Information Center

    Hanss, Ted

    1997-01-01

    Internet2, a consortium effort of over 100 universities, is investing in upgrading campus and national computer network platforms for such applications as digital libraries, collaboration environments, tele-medicine, and distance-independent instruction. The project is described, issues the project intends to address are detailed, and ways in…

  20. 77 FR 3069 - Modification of Interlibrary Loan Fee Schedule

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-23

    ... collections of the National Agricultural Library (NAL). The revised fee schedule is based on the method of... through the Online Computer Library Center (OCLC) network's Interlibrary Fee Management program, a debit..., National Agricultural Library, 10301 Baltimore Avenue, Beltsville, MD 20705-2351. FOR FURTHER INFORMATION...

  1. SIPP ACCESS: Information Tools Improve Access to National Longitudinal Panel Surveys.

    ERIC Educational Resources Information Center

    Robbin, Alice; David, Martin

    1988-01-01

    A computer-based, integrated information system incorporating data and information about the data, SIPP ACCESS systematically links technologies of laser disk, mainframe computer, microcomputer, and electronic networks, and applies relational technology to provide access to information about complex statistical data collections. Examples are given…

  2. Inquiring Minds

    Science.gov Websites

    [Fermi National Accelerator Laboratory web pages covering high-performance computing, grid computing, networking, and mass storage, with a "Virtual Ask-a-Scientist" feature offering transcripts of past online chat sessions; last modified 1/04/2005.]

  3. ASCR Cybersecurity for Scientific Computing Integrity - Research Pathways and Ideas Workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peisert, Sean; Potok, Thomas E.; Jones, Todd

    At the request of the U.S. Department of Energy's (DOE) Office of Science (SC) Advanced Scientific Computing Research (ASCR) program office, a workshop was held June 2-3, 2015, in Gaithersburg, MD, to identify potential long term (10 to 20+ year) cybersecurity fundamental basic research and development challenges, strategies, and a roadmap facing future high performance computing (HPC), networks, data centers, and extreme-scale scientific user facilities. This workshop was a follow-on to the workshop held January 7-9, 2015, in Rockville, MD, that examined higher level ideas about scientific computing integrity specific to the mission of the DOE Office of Science. Issues included research computation and simulation that takes place on ASCR computing facilities and networks, as well as network-connected scientific instruments, such as those run by various DOE Office of Science programs. Workshop participants included researchers and operational staff from DOE national laboratories, as well as academic researchers and industry experts. Participants were selected based on the submission of abstracts relating to the topics discussed in the previous workshop report [1] and also from other ASCR reports, including "Abstract Machine Models and Proxy Architectures for Exascale Computing" [27], the DOE "Preliminary Conceptual Design for an Exascale Computing Initiative" [28], and the January 2015 machine learning workshop [29]. The workshop was also attended by several observers from DOE and other government agencies. The workshop was divided into three topic areas: (1) Trustworthy Supercomputing, (2) Extreme-Scale Data, Knowledge, and Analytics for Understanding and Improving Cybersecurity, and (3) Trust within High-end Networking and Data Centers. Participants were divided into three corresponding teams based on the category of their abstracts. The workshop began with a series of talks from the program manager and workshop chair, followed by the leaders for each of the three topics and a representative of each of the four major DOE Office of Science Advanced Scientific Computing Research Facilities: the Argonne Leadership Computing Facility (ALCF), the Energy Sciences Network (ESnet), the National Energy Research Scientific Computing Center (NERSC), and the Oak Ridge Leadership Computing Facility (OLCF). The rest of the workshop consisted of topical breakout discussions and focused writing periods that produced much of this report.

  4. Requirements for a network storage service

    NASA Technical Reports Server (NTRS)

    Kelly, Suzanne M.; Haynes, Rena A.

    1991-01-01

    Sandia National Laboratories provides a high performance classified computer network as a core capability in support of its mission of nuclear weapons design and engineering, physical sciences research, and energy research and development. The network, locally known as the Internal Secure Network (ISN), comprises multiple distributed local area networks (LAN's) residing in New Mexico and California. The TCP/IP protocol suite is used for inter-node communications. Scientific workstations and mid-range computers, running UNIX-based operating systems, compose most LAN's. One LAN, operated by the Sandia Corporate Computing Directorate, is a general purpose resource providing a supercomputer and a file server to the entire ISN. The current file server on the supercomputer LAN is an implementation of the Common File System (CFS). Subsequent to the design of the ISN, Sandia reviewed its mass storage requirements and chose to enter into a competitive procurement to replace the existing file server with one more adaptable to a UNIX/TCP/IP environment. The requirements study for the network was the starting point for the requirements study for the new file server. The file server is called the Network Storage Service (NSS) and its requirements are described. An application or functional description of the NSS is given. The final section adds performance, capacity, and access constraints to the requirements.

  5. The National Information Infrastructure: Agenda for Action.

    ERIC Educational Resources Information Center

    Department of Commerce, Washington, DC. Information Infrastructure Task Force.

    The National Information Infrastructure (NII) is planned as a web of communications networks, computers, databases, and consumer electronics that will put vast amounts of information at the users' fingertips. Private sector firms are beginning to develop this infrastructure, but essential roles remain for the Federal Government. The National…

  6. Grand Challenges 1993: High Performance Computing and Communications. A Report by the Committee on Physical, Mathematical, and Engineering Sciences. The FY 1993 U.S. Research and Development Program.

    ERIC Educational Resources Information Center

    Office of Science and Technology Policy, Washington, DC.

    This report presents the United States research and development program for 1993 for high performance computing and computer communications (HPCC) networks. The first of four chapters presents the program goals and an overview of the federal government's emphasis on high performance computing as an important factor in the nation's scientific and…

  7. Computer Assisted Diagnostic Prescriptive Program in Reading and Mathematics. An Exemplary Micro-Computer Program and a Developer/Demonstrator Project, National Diffusion Network.

    ERIC Educational Resources Information Center

    Roberson, E. Wayne; Glowinski, Debra J.

    The Computer Assisted Diagnostic Prescriptive Program (CADPP) is a customized databased curriculum management system which permits the user to load the following into a filing/retrieval software system: (1) learning characteristics of individual students (e.g., age, instructional level, learning modality); (2) skill-oriented characteristics of…

  8. The Rise of the CISO

    ERIC Educational Resources Information Center

    Gale, Doug

    2007-01-01

    The late 1980s was an exciting time to be a CIO in higher education. Computing was being decentralized as microcomputers replaced mainframes, networking was emerging, and the National Science Foundation Network (NSFNET) was introducing the concept of an "internet" to hundreds of thousands of new users. Security wasn't much of an issue;…

  9. University Hopes Campuswide Network Will Help Give It a Competitive Edge.

    ERIC Educational Resources Information Center

    Watkins, Beverly T.

    1992-01-01

    Case Western Reserve University (Ohio) is hoping a high-powered campus information system will help diversify its student body and provide innovative education. A new optical-fiber network will connect computers in dormitory rooms, faculty and staff offices, classrooms, libraries, and laboratories and be linked with local, national, and…

  10. ICT & Learning in Chilean Schools: Lessons Learned

    ERIC Educational Resources Information Center

    Sanchez, Jaime; Salinas, Alvaro

    2008-01-01

    By the early nineties a Chilean network on computers and education for public schools had emerged. There were both high expectancies that technology could revolutionize education as well as divergent voices that doubted the real impact of technology on learning. This paper presents an evaluation of the Enlaces network, a national Information and…

  11. The medical science DMZ: a network design pattern for data-intensive medical science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peisert, Sean; Dart, Eli; Barnett, William

    We describe a detailed solution for maintaining high-capacity, data-intensive network flows (eg, 10, 40, 100 Gbps+) in a scientific, medical context while still adhering to security and privacy laws and regulations. The materials and methods employed are high-end networking, packet-filter firewalls, and network intrusion-detection systems. We describe a "Medical Science DMZ" concept as an option for secure, high-volume transport of large, sensitive datasets between research institutions over national research networks, and give three detailed descriptions of implemented Medical Science DMZs. The exponentially increasing amounts of "omics" data, high-quality imaging, and other rapidly growing clinical datasets have resulted in the rise of biomedical research "Big Data." The storage, analysis, and network resources required to process these data and integrate them into patient diagnoses and treatments have grown to scales that strain the capabilities of academic health centers. Some data are not generated locally and cannot be sustained locally, and shared data repositories such as those provided by the National Library of Medicine, the National Cancer Institute, and international partners such as the European Bioinformatics Institute are rapidly growing. The ability to store and compute using these data must therefore be addressed by a combination of local, national, and industry resources that exchange large datasets. Maintaining data-intensive flows that comply with the Health Insurance Portability and Accountability Act (HIPAA) and other regulations presents a new challenge for biomedical research. We describe a strategy that marries performance and security by borrowing from and redefining the concept of a Science DMZ, a framework that is used in physical sciences and engineering research to manage high-capacity data flows. By implementing a Medical Science DMZ architecture, biomedical researchers can leverage the scale provided by high-performance computer and cloud storage facilities and national high-speed research networks while preserving privacy and meeting regulatory requirements.

  12. The Medical Science DMZ.

    PubMed

    Peisert, Sean; Barnett, William; Dart, Eli; Cuff, James; Grossman, Robert L; Balas, Edward; Berman, Ari; Shankar, Anurag; Tierney, Brian

    2016-11-01

    We describe use cases and an institutional reference architecture for maintaining high-capacity, data-intensive network flows (e.g., 10, 40, 100 Gbps+) in a scientific, medical context while still adhering to security and privacy laws and regulations. The materials and methods employed are high-end networking, packet filter firewalls, and network intrusion detection systems. We describe a "Medical Science DMZ" concept as an option for secure, high-volume transport of large, sensitive data sets between research institutions over national research networks. The exponentially increasing amounts of "omics" data, the rapid increase of high-quality imaging, and other rapidly growing clinical data sets have resulted in the rise of biomedical research "big data." The storage, analysis, and network resources required to process these data and integrate them into patient diagnoses and treatments have grown to scales that strain the capabilities of academic health centers. Some data are not generated locally and cannot be sustained locally, and shared data repositories such as those provided by the National Library of Medicine, the National Cancer Institute, and international partners such as the European Bioinformatics Institute are rapidly growing. The ability to store and compute using these data must therefore be addressed by a combination of local, national, and industry resources that exchange large data sets. Maintaining data-intensive flows that comply with HIPAA and other regulations presents a new challenge for biomedical research. Recognizing this, we describe a strategy that marries performance and security by borrowing from and redefining the concept of a "Science DMZ," a framework that is used in physical sciences and engineering research to manage high-capacity data flows. By implementing a Medical Science DMZ architecture, biomedical researchers can leverage the scale provided by high-performance computer and cloud storage facilities and national high-speed research networks while preserving privacy and meeting regulatory requirements.

  13. The Medical Science DMZ

    PubMed Central

    Barnett, William; Dart, Eli; Cuff, James; Grossman, Robert L; Balas, Edward; Berman, Ari; Shankar, Anurag; Tierney, Brian

    2016-01-01

    Objective: We describe use cases and an institutional reference architecture for maintaining high-capacity, data-intensive network flows (e.g., 10, 40, 100 Gbps+) in a scientific, medical context while still adhering to security and privacy laws and regulations. Materials and Methods: High-end networking, packet filter firewalls, network intrusion detection systems. Results: We describe a "Medical Science DMZ" concept as an option for secure, high-volume transport of large, sensitive data sets between research institutions over national research networks. Discussion: The exponentially increasing amounts of "omics" data, the rapid increase of high-quality imaging, and other rapidly growing clinical data sets have resulted in the rise of biomedical research "big data." The storage, analysis, and network resources required to process these data and integrate them into patient diagnoses and treatments have grown to scales that strain the capabilities of academic health centers. Some data are not generated locally and cannot be sustained locally, and shared data repositories such as those provided by the National Library of Medicine, the National Cancer Institute, and international partners such as the European Bioinformatics Institute are rapidly growing. The ability to store and compute using these data must therefore be addressed by a combination of local, national, and industry resources that exchange large data sets. Maintaining data-intensive flows that comply with HIPAA and other regulations presents a new challenge for biomedical research. Recognizing this, we describe a strategy that marries performance and security by borrowing from and redefining the concept of a "Science DMZ," a framework that is used in physical sciences and engineering research to manage high-capacity data flows. Conclusion: By implementing a Medical Science DMZ architecture, biomedical researchers can leverage the scale provided by high-performance computer and cloud storage facilities and national high-speed research networks while preserving privacy and meeting regulatory requirements. PMID:27136944

  14. Communication Environments for Local Networks.

    DTIC Science & Technology

    1982-12-01

    San Francisco, February-March 1979, pp. 272-275. [Frank 75] Frank, H., I. Gitman, and R. Van Slyke, "Packet radio system - Network considerations," in AFIPS Conference Proceedings, Volume 44: National Computer Conference, Anaheim, Calif., May 1975, pp. 217-231. [Frank 76a] Frank, H., I. Gitman, … Local, Regional and Larger Scale Integrated Networks, Volume 2, 4 February 1976. [Frank 76b] Frank, H., I. Gitman, and R. Van Slyke, Local and Regional…

  15. Minority University-Space Interdisciplinary Network Conference Proceedings of the Seventh Annual Users' Conference

    NASA Technical Reports Server (NTRS)

    Harrington, James L., Jr.; Brown, Robin L.; Shukla, Pooja

    1998-01-01

    Proceedings of the seventh annual Minority University-SPace Interdisciplinary Network (MU-SPIN) users' conference. MU-SPIN is cosponsored by NASA Goddard Space Flight Center and the National Science Foundation and is a comprehensive educational initiative for Historically Black Colleges and Universities and minority universities. MU-SPIN focuses on the transfer of advanced computer networking technologies to these institutions and on their use in supporting multidisciplinary research.

  16. 32 CFR 240.3 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... designated by the Department of Homeland Security and the NSA as a national center of excellence. IA. For the purpose of this part, the term “IA” includes computer security, network security, cybersecurity, cyber... the Department of Homeland Security and the NSA as a national center of excellence. CAE-R. An...

  17. Security: Progress and Challenges

    ERIC Educational Resources Information Center

    Luker, Mark A.

    2004-01-01

    The Homepage column in the March/April 2003 issue of "EDUCAUSE Review" explained the national implication of security vulnerabilities in higher education and the role of the EDUCAUSE/Internet2 Computer and Network Security Task Force in representing the higher education sector in the development of the National Strategy to Secure Cyberspace. Among…

  18. 32 CFR 240.3 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... designated by the Department of Homeland Security and the NSA as a national center of excellence. IA. For the purpose of this part, the term “IA” includes computer security, network security, cybersecurity, cyber... the Department of Homeland Security and the NSA as a national center of excellence. CAE-R. An...

  19. 32 CFR 240.3 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... designated by the Department of Homeland Security and the NSA as a national center of excellence. IA. For the purpose of this part, the term “IA” includes computer security, network security, cybersecurity, cyber... the Department of Homeland Security and the NSA as a national center of excellence. CAE-R. An...

  20. Utilizing Weak Indicators to Detect Anomalous Behaviors in Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Egid, Adin

    We consider the use of a novel weak indicator alongside more commonly used weak indicators to help detect anomalous behavior in a large computer network. The network data studied in this paper concern remote log-in information (Virtual Private Network, or VPN, sessions) from the internal network of Los Alamos National Laboratory (LANL). The novel indicator we are utilizing is something which, while novel in its application to data science/cyber security research, is a concept borrowed from the business world. The Herfindahl-Hirschman Index (HHI) is a computationally trivial index which provides a useful heuristic for regulatory agencies to ascertain the relative competitiveness of a particular industry. Using this index as a lagging indicator in the monthly format we have studied could help to detect anomalous behavior by a particular user or small set of users on the network.
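
    Since the abstract leans on the Herfindahl-Hirschman Index, a minimal Python sketch of the computation may help: the HHI is simply the sum of squared shares, here applied to monthly VPN session counts per user. The user names and counts below are invented for illustration; this is not LANL's data or code.

        # Minimal sketch of the Herfindahl-Hirschman Index (HHI) applied to
        # monthly VPN session counts per user. Data and names are
        # illustrative placeholders.
        from collections import Counter

        def hhi(counts):
            """Sum of squared shares; ranges from 1/n (even use) to 1.0 (one user)."""
            total = sum(counts)
            return sum((c / total) ** 2 for c in counts)

        # Hypothetical month of VPN logins keyed by user.
        sessions = Counter({"alice": 40, "bob": 35, "carol": 25})
        print(round(hhi(sessions.values()), 3))  # 0.345 for this example

        # A sharp month-over-month rise in HHI means sessions are concentrating
        # on fewer accounts -- the kind of lagging anomaly signal described above.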

  1. Computer-based communication in support of scientific and technical work. [conferences on management information systems used by scientists of NASA programs

    NASA Technical Reports Server (NTRS)

    Vallee, J.; Wilson, T.

    1976-01-01

    Results are reported from the first experiments with a computer conference management information system at the National Aeronautics and Space Administration. Between August 1975 and March 1976, two NASA projects with geographically separated participants (NASA scientists) used the PLANET computer conferencing system for portions of their work. The first project was a technology assessment of future transportation systems. The second project involved experiments with the Communications Technology Satellite. As part of this project, pre- and postlaunch operations were discussed in a computer conference. These conferences also provided the context for an analysis of the cost of computer conferencing. In particular, six cost components were identified: (1) terminal equipment, (2) communication with a network port, (3) network connection, (4) computer utilization, (5) data storage, and (6) administrative overhead.
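
    The six cost components lend themselves to a simple additive cost model. A toy Python sketch follows, with placeholder dollar figures rather than the study's actual rates:

        # Toy additive cost model over the six components the abstract lists.
        # All dollar figures are hypothetical placeholders, not the study's rates.
        COSTS = {
            "terminal_equipment": 150.0,    # amortized terminal cost per month
            "port_communication": 40.0,     # phone line to the network port
            "network_connection": 60.0,     # network connect charges
            "computer_utilization": 120.0,  # CPU charges on the host
            "data_storage": 25.0,           # conference transcript storage
            "admin_overhead": 55.0,         # administrative overhead
        }

        monthly_total = sum(COSTS.values())
        print(f"Monthly conferencing cost: ${monthly_total:.2f}")
        print(f"Per participant (10 users): ${monthly_total / 10:.2f}")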

  2. [Datanet 1 and the convergence of the computer and telecommunications].

    PubMed

    de Wit, Onno

    2008-01-01

    This article describes the efforts of the Dutch national company for telecommunication, PTT, in introducing and developing a public network for data communication in the Netherlands in the last decades of the twentieth century. As early as the 1960s, private companies started to connect their local computers. As a result, small private computer networks started to emerge. As the state company offering general access to public services in telephony, the PTT strove to develop a public data network, accessible to every user and telephone subscriber. This ambition was realized with Datanet 1, the public data network which was officially opened in 1982. In the years that followed, Datanet became the dominant network for data transmission, despite competing efforts by private companies and computer manufacturers. The large-scale application of Datanet in public municipal administration serves as a case study of data communication in practice, showing a gradual migration from X.25 to TCP/IP protocols. The article concludes by stating that the introduction and development of data transmission transformed the role of the PTT in Dutch society, brought new working practices, new services and new responsibilities, and resulted in a whole new phase in the history of the computer.

  3. US GEOLOGICAL SURVEY'S NATIONAL SYSTEM FOR PROCESSING AND DISTRIBUTION OF NEAR REAL-TIME HYDROLOGICAL DATA.

    USGS Publications Warehouse

    Shope, William G.; ,

    1987-01-01

    The US Geological Survey is utilizing a national network of more than 1000 satellite data-collection stations, four satellite-relay direct-readout ground stations, and more than 50 computers linked together in a private telecommunications network to acquire, process, and distribute hydrological data in near real-time. The four Survey offices operating a satellite direct-readout ground station provide near real-time hydrological data to computers located in other Survey offices through the Survey's Distributed Information System. The computerized distribution system permits automated data processing and distribution to be carried out in a timely manner under the control and operation of the Survey office responsible for the data-collection stations and for the dissemination of hydrological information to the water-data users.

  4. An Innovative Approach to Bridge a Skill Gap and Grow a Workforce Pipeline: The Computer System, Cluster, and Networking Summer Institute

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Connor, Carolyn Marie; Jacobson, Andree Lars; Bonnie, Amanda Marie

    Sustainable and effective computing infrastructure depends critically on the skills and expertise of domain scientists and of committed and well-trained advanced computing professionals. But in its ongoing High Performance Computing (HPC) work, Los Alamos National Laboratory has noted a persistent shortage of well-prepared applicants, particularly for entry-level cluster administration, file systems administration, and high-speed networking positions. Further, based upon recruiting efforts and interactions with universities graduating students in related majors of interest (e.g., computer science (CS)), there has been a long-standing skillset gap, as focused training in HPC topics is typically lacking or absent in undergraduate and even in many graduate programs. Given that the effective operation and use of HPC systems requires specialized and often advanced training, that there is a recognized HPC skillset gap, and that there is intense global competition for computing and computational science talent, there is a long-standing and critical need for innovative approaches to help bridge the gap and create a well-prepared, next-generation HPC workforce. Our paper places this need in the context of the HPC work and workforce requirements at Los Alamos National Laboratory (LANL) and presents one such innovative program conceived to address the need, bridge the gap, and grow an HPC workforce pipeline at LANL. The Computer System, Cluster, and Networking Summer Institute (CSCNSI) completed its 10th year in 2016. The story of the CSCNSI and its evolution is detailed below with a description of the design of its Boot Camp, and a summary of its success and some key factors that have enabled that success.

  5. An Innovative Approach to Bridge a Skill Gap and Grow a Workforce Pipeline: The Computer System, Cluster, and Networking Summer Institute

    DOE PAGES

    Connor, Carolyn Marie; Jacobson, Andree Lars; Bonnie, Amanda Marie; ...

    2016-11-01

    Sustainable and effective computing infrastructure depends critically on the skills and expertise of domain scientists and of committed and well-trained advanced computing professionals. But in its ongoing High Performance Computing (HPC) work, Los Alamos National Laboratory has noted a persistent shortage of well-prepared applicants, particularly for entry-level cluster administration, file systems administration, and high-speed networking positions. Further, based upon recruiting efforts and interactions with universities graduating students in related majors of interest (e.g., computer science (CS)), there has been a long-standing skillset gap, as focused training in HPC topics is typically lacking or absent in undergraduate and even in many graduate programs. Given that the effective operation and use of HPC systems requires specialized and often advanced training, that there is a recognized HPC skillset gap, and that there is intense global competition for computing and computational science talent, there is a long-standing and critical need for innovative approaches to help bridge the gap and create a well-prepared, next-generation HPC workforce. Our paper places this need in the context of the HPC work and workforce requirements at Los Alamos National Laboratory (LANL) and presents one such innovative program conceived to address the need, bridge the gap, and grow an HPC workforce pipeline at LANL. The Computer System, Cluster, and Networking Summer Institute (CSCNSI) completed its 10th year in 2016. The story of the CSCNSI and its evolution is detailed below with a description of the design of its Boot Camp, and a summary of its success and some key factors that have enabled that success.

  6. Minnesota Computer Aided Library System (MCALS); University of Minnesota Subsystem Cost/Benefits Analysis.

    ERIC Educational Resources Information Center

    Lourey, Eugene D., Comp.

    The Minnesota Computer Aided Library System (MCALS) provides a basis of unification for library service program development in Minnesota for eventual linkage to the national information network. A prototype plan for communications functions is illustrated. A cost/benefits analysis was made to show the cost/effectiveness potential for MCALS. System…

  7. Computer Communications, Cooperation or Confusion; A Communications Conference at San Jose State College.

    ERIC Educational Resources Information Center

    San Jose State Coll., CA.

    The papers from a conference on computer communication networks are divided into five groups--trends, applications, problems and impairments, solutions and tools, impact on society and education. The impact of such developing technologies as cable television, the "wired nation," the telephone industry, and analog data storage is…

  8. Education and Library Services for Community Information Utilities.

    ERIC Educational Resources Information Center

    Farquhar, John A.

    The concept of "computer utility"--the provision of computing and information service by a utility in the form of a national network to which any person desiring information could gain access--has been gaining interest among the public and among the technical community. This report on planning community information utilities discusses the…

  9. Stop the World--West Georgia Is Getting On.

    ERIC Educational Resources Information Center

    Mitchell, Phyllis R.

    1996-01-01

    In 5 years, the schools and community of Carrollton, Georgia, created a school systemwide network of 1,400 computers and 70 CD-ROMs connected by a fiber wide-area network to other city institutions and the Internet with grants from local, state, and national industry. After incorporating the new technologies into the curriculum, the dropout rate…

  10. Characterization of attacks on public telephone networks

    NASA Astrophysics Data System (ADS)

    Lorenz, Gary V.; Manes, Gavin W.; Hale, John C.; Marks, Donald; Davis, Kenneth; Shenoi, Sujeet

    2001-02-01

    The U.S. Public Telephone Network (PTN) is a massively connected distributed information system, much like the Internet. PTN signaling, transmission, and operations functions must be protected from physical and cyber attacks to ensure the reliable delivery of telecommunications services. The increasing convergence of PTNs with wireless communications systems, computer networks, and the Internet itself poses serious threats to our nation's telecommunications infrastructure. Legacy technologies and advanced services harbor well-known and as-yet-undiscovered vulnerabilities that render them susceptible to cyber attacks. This paper presents a taxonomy of cyber attacks on PTNs in converged environments that synthesizes exploits in the computer and communications network domains. The taxonomy provides an opportunity for the systematic exploration of mitigative and preventive strategies, as well as for the identification and classification of emerging threats.

  11. A National Information Network: Changing Our Lives in the 21st Century. 1992 Annual Review of the Institute for Information Studies.

    ERIC Educational Resources Information Center

    Aspen Inst., Queenstown, MD.

    In a workshop held by the National Research Council through their Board on Telecommunications and Computer Applications, the participants determined that the earlier vision of affordable telephone service for all, already fundamentally achieved in the United States, can be extended to a new national policy of affordable information for all. This…

  12. A network-based distributed, media-rich computing and information environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phillips, R.L.

    1995-12-31

    Sunrise is a Los Alamos National Laboratory (LANL) project started in October 1993. It is intended to be a prototype National Information Infrastructure development project. A main focus of Sunrise is to tie together enabling technologies (networking, object-oriented distributed computing, graphical interfaces, security, multi-media technologies, and data-mining technologies) with several specific applications. A diverse set of application areas was chosen to ensure that the solutions developed in the project are as generic as possible. Some of the application areas are materials modeling, medical records and image analysis, transportation simulations, and K-12 education. This paper provides a description of Sunrise and a view of the architecture and objectives of this evolving project. The primary objectives of Sunrise are three-fold: (1) to develop common information-enabling tools for advanced scientific research and its applications to industry; (2) to enhance the capabilities of important research programs at the Laboratory; (3) to define a new way of collaboration between computer science and industrially relevant research.

  13. A National Virtual Specimen Database for Early Cancer Detection

    NASA Technical Reports Server (NTRS)

    Crichton, Daniel; Kincaid, Heather; Kelly, Sean; Thornquist, Mark; Johnsey, Donald; Winget, Marcy

    2003-01-01

    Access to biospecimens is essential for enabling cancer biomarker discovery. The National Cancer Institute's (NCI) Early Detection Research Network (EDRN) integrates a large number of laboratories into a network in order to establish a collaborative scientific environment to discover and validate disease markers. The diversity of both the institutions and the collaborative focus has created the need for cross-disciplinary teams that integrate expertise in biomedical research, computation and biostatistics, and computer science. Given the collaborative design of the network, the EDRN needed an informatics infrastructure. The Fred Hutchinson Cancer Research Center, the National Cancer Institute, and NASA's Jet Propulsion Laboratory (JPL) teamed up to build an informatics infrastructure creating a collaborative, science-driven research environment despite the geographic distribution and structural differences of the information systems that existed within the diverse network. EDRN investigators identified the need to share biospecimen data captured across the country and managed in disparate databases. As a result, the informatics team initiated an effort to create a virtual tissue database whereby scientists could search and locate details about specimens located at collaborating laboratories. Each database, however, was locally implemented and integrated into collection processes and methods unique to each institution. This meant that efforts to integrate databases needed to be done in a manner that did not require redesign or re-implementation of existing systems.

  14. History, structure, and function of the Internet.

    PubMed

    Glowniak, J

    1998-04-01

    The Internet stands at the forefront of telecommunications in medicine. This worldwide system of computers had its beginnings in networking projects in the United States and western Europe in the 1960s and 1970s. The precursor of the Internet was ARPANET, a long-distance telecommunication network funded by the Department of Defense that linked together computers throughout the United States. In the 1980s, ARPANET was superseded by NSFNET, a series of networks created by the National Science Foundation, which established the present-day structure of the Internet. The physical structure of the Internet resembles and is integrated with the telephone system. Long-distance data transport services are provided by large telecommunication companies, called network service providers (NSPs), through high-capacity, high-speed national and international fiber optic cables. These transport services are accessed through Internet service providers (ISPs). ISPs, the equivalent of regional Bell operating companies, provide the physical link to the NSPs for individuals and organizations. Telecommunications on the Internet are standardized by a set of communications protocols, the TCP/IP protocol suite, that describe routing of messages over the Internet, computer naming conventions, and commonly used Internet services such as e-mail. At present, the Internet consists of over 20 million computers worldwide and is continuing to grow at a rapid rate. Along with the growth of the Internet, higher-speed access methods are offering a range of new services such as real-time video and voice communications. Medical education, teaching, and research, as well as clinical practice, will be affected in numerous ways by these advances.

  15. The Magic of Technology. NECC 1993: Proceedings of the Annual National Educational Computing Conference (14th, Orlando, Florida, June 27-30, 1993).

    ERIC Educational Resources Information Center

    Brubaker, Thomas A., Ed.; And Others

    These conference proceedings address the capabilities of technology in education. Papers and summaries of presentations are provided on the following topics: programs for special needs students; virtual realities; funding opportunities; videodiscs; future programs and perspectives; telecomputing; computer networks in the classroom; human…

  16. Jamming the Phone Lines: Pencils, Notebooks, and Modems (Computers in the Classroom).

    ERIC Educational Resources Information Center

    Holvig, Kenneth C.

    1989-01-01

    Describes how BreadNet (a national computer network of English teachers) has come to dominate the routine of a high school class. Notes that BreadNet gives students new motivation to write, inquire, and learn. Describes classroom electronic writing exchanges and an electronic writers' workshop which posted essays on BreadNet. (RS)

  17. Advanced networks and computing in healthcare

    PubMed Central

    Ackerman, Michael

    2011-01-01

    As computing and network capabilities continue to rise, it becomes increasingly important to understand the varied applications for using them to provide healthcare. The objective of this review is to identify key characteristics and attributes of healthcare applications involving the use of advanced computing and communication technologies, drawing upon 45 research and development projects in telemedicine and other aspects of healthcare funded by the National Library of Medicine over the past 12 years. Only projects publishing in the professional literature were included in the review. Four projects did not publish beyond their final reports. In addition, the authors drew on their first-hand experience as project officers, reviewers and monitors of the work. Major themes in the corpus of work were identified, characterizing key attributes of advanced computing and network applications in healthcare. Advanced computing and network applications are relevant to a range of healthcare settings and specialties, but they are most appropriate for solving a narrower range of problems in each. Healthcare projects undertaken primarily to explore potential have also demonstrated effectiveness and depend on the quality of network service as much as bandwidth. Many applications are enabling, making it possible to provide service or conduct research that previously was not possible or to achieve outcomes in addition to those for which projects were undertaken. Most notable are advances in imaging and visualization, collaboration and sense of presence, and mobility in communication and information-resource use. PMID:21486877

  18. Free geometric adjustment of the SECOR Equatorial Network (Solution SECOR-27)

    NASA Technical Reports Server (NTRS)

    Mueller, I. I.; Kumar, M.; Soler, T.

    1973-01-01

    The basic purpose of this experiment is to compute reduced normal equations from the observational data of the SECOR Equatorial Network obtained from the DMA/Topographic Center, D/Geodesy, Geosciences Div., Washington, D.C. These reduced normal equations are to be combined with reduced normal equations of other satellite networks of the National Geodetic Satellite Program to provide station coordinates from a single least squares adjustment. An individual SECOR solution was also obtained and is presented in this report, using direction constraints computed from BC-4 optical data from stations collocated with SECOR stations. Due to the critical configuration present in the range observations, weighted height constraints were also applied in order to break the near coplanarity of the observing stations.

  19. Silicon synaptic transistor for hardware-based spiking neural network and neuromorphic system

    NASA Astrophysics Data System (ADS)

    Kim, Hyungjin; Hwang, Sungmin; Park, Jungjin; Park, Byung-Gook

    2017-10-01

    Brain-inspired neuromorphic systems have attracted much attention as new computing paradigms for power-efficient computation. Here, we report a silicon synaptic transistor with two electrically independent gates to realize a hardware-based neural network system without any switching components. The spike-timing dependent plasticity characteristics of the synaptic devices are measured and analyzed. With the help of a device model based on the measured data, the pattern recognition capability of the hardware-based spiking neural network system is demonstrated using the Modified National Institute of Standards and Technology (MNIST) handwritten digit dataset. By comparing systems with and without the inhibitory synapse part, it is confirmed that the inhibitory synapse part is an essential element in obtaining effective and high pattern classification capability.

  20. Silicon synaptic transistor for hardware-based spiking neural network and neuromorphic system.

    PubMed

    Kim, Hyungjin; Hwang, Sungmin; Park, Jungjin; Park, Byung-Gook

    2017-10-06

    Brain-inspired neuromorphic systems have attracted much attention as new computing paradigms for power-efficient computation. Here, we report a silicon synaptic transistor with two electrically independent gates to realize a hardware-based neural network system without any switching components. The spike-timing dependent plasticity characteristics of the synaptic devices are measured and analyzed. With the help of a device model based on the measured data, the pattern recognition capability of the hardware-based spiking neural network system is demonstrated using the Modified National Institute of Standards and Technology (MNIST) handwritten digit dataset. By comparing systems with and without the inhibitory synapse part, it is confirmed that the inhibitory synapse part is an essential element in obtaining effective and high pattern classification capability.
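
    Both records above turn on spike-timing dependent plasticity (STDP): a synapse is strengthened when the presynaptic spike shortly precedes the postsynaptic spike and weakened in the reverse order. A minimal sketch of the standard pairwise exponential rule in Python; the constants are illustrative, not values measured from the reported devices.

        import math

        A_PLUS, A_MINUS, TAU = 0.05, 0.055, 20.0  # illustrative amplitudes, time constant (ms)

        def stdp_dw(t_pre, t_post):
            """Weight change for a single pre/post spike pair."""
            dt = t_post - t_pre
            if dt >= 0:                              # pre before post: potentiation
                return A_PLUS * math.exp(-dt / TAU)
            return -A_MINUS * math.exp(dt / TAU)     # post before pre: depression

        w = 0.5
        for t_pre, t_post in [(0.0, 5.0), (40.0, 38.0), (80.0, 81.0)]:
            w = min(1.0, max(0.0, w + stdp_dw(t_pre, t_post)))  # clip to [0, 1]
            print(f"pre={t_pre:5.1f}  post={t_post:5.1f}  ->  w={w:.3f}")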

  1. Radio Synthesis Imaging - A High Performance Computing and Communications Project

    NASA Astrophysics Data System (ADS)

    Crutcher, Richard M.

    The National Science Foundation has funded a five-year High Performance Computing and Communications project at the National Center for Supercomputing Applications (NCSA) for the direct implementation of several of the computing recommendations of the Astronomy and Astrophysics Survey Committee (the "Bahcall report"). This paper is a summary of the project goals and a progress report. The project will implement a prototype of the next generation of astronomical telescope systems - remotely located telescopes connected by high-speed networks to very high performance, scalable architecture computers and on-line data archives, which are accessed by astronomers over Gbit/sec networks. Specifically, a data link has been installed between the BIMA millimeter-wave synthesis array at Hat Creek, California and NCSA at Urbana, Illinois for real-time transmission of data to NCSA. Data are automatically archived, and may be browsed and retrieved by astronomers using the NCSA Mosaic software. In addition, an on-line digital library of processed images will be established. BIMA data will be processed on a very high performance distributed computing system, with I/O, user interface, and most of the software system running on the NCSA Convex C3880 supercomputer or Silicon Graphics Onyx workstations connected by HiPPI to the high performance, massively parallel Thinking Machines Corporation CM-5. The very computationally intensive algorithms for calibration and imaging of radio synthesis array observations will be optimized for the CM-5 and new algorithms which utilize the massively parallel architecture will be developed. Code running simultaneously on the distributed computers will communicate using the Data Transport Mechanism developed by NCSA. The project will also use the BLANCA Gbit/s testbed network between Urbana and Madison, Wisconsin to connect an Onyx workstation in the University of Wisconsin Astronomy Department to the NCSA CM-5, for development of long-distance distributed computing. Finally, the project is developing 2D and 3D visualization software as part of the international AIPS++ project. This research and development project is being carried out by a team of experts in radio astronomy, algorithm development for massively parallel architectures, high-speed networking, database management, and Thinking Machines Corporation personnel. The development of this complete software, distributed computing, and data archive and library solution to the radio astronomy computing problem will advance our expertise in high performance computing and communications technology and the application of these techniques to astronomical data processing.

  2. Scientific Computing Strategic Plan for the Idaho National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whiting, Eric Todd

    Scientific computing is a critical foundation of modern science. Without innovations in the field of computational science, the essential missions of the Department of Energy (DOE) would go unrealized. Taking a leadership role in such innovations is Idaho National Laboratory’s (INL’s) challenge and charge, and is central to INL’s ongoing success. Computing is an essential part of INL’s future. DOE science and technology missions rely firmly on computing capabilities in various forms. Modeling and simulation, fueled by innovations in computational science and validated through experiment, are a critical foundation of science and engineering. Big data analytics from an increasing number of widely varied sources is opening new windows of insight and discovery. Computing is a critical tool in education, science, engineering, and experiments. Advanced computing capabilities in the form of people, tools, computers, and facilities will position INL competitively to deliver results and solutions on important national science and engineering challenges. A computing strategy must include much more than simply computers. The foundational enabling component of computing at many DOE national laboratories is the combination of a showcase-like data center facility coupled with a very capable supercomputer. In addition, network connectivity, disk storage systems, and visualization hardware are critical and generally tightly coupled to the computer system and co-located in the same facility. The existence of these resources in a single data center facility opens the doors to many opportunities that would not otherwise be possible.

  3. Mind Transplants Or: The Role of Computer Assisted Instruction in the Future of the Library.

    ERIC Educational Resources Information Center

    Lyon, Becky J.

    Computer assisted instruction (CAI) may well represent the next phase in the involvement of the library or learning resources center with media and the educational process. The Lister Hill Center Experimental CAI Network was established in July, 1972, on the recommendation of the National Library of Medicine, to test the feasibility of sharing CAI…

  4. The Operation of a Specialized Scientific Information and Data Analysis Center With Computer Base and Associated Communications Network.

    ERIC Educational Resources Information Center

    Cottrell, William B.; And Others

    The Nuclear Safety Information Center (NSIC) is a highly sophisticated scientific information center operated at Oak Ridge National Laboratory (ORNL) for the U.S. Atomic Energy Commission. Its information file, which consists of both data and bibliographic information, is computer stored and numerous programs have been developed to facilitate the…

  5. R&D100 Finalist: Neuromorphic Cyber Microscope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Follett, David; Naegle, John; Suppona, Roger

    The Neuromorphic Cyber Microscope provides security analysts with unprecedented visibility into their network, computing, and storage assets. This processor is the world's first practical application of neuromorphic technology to a major computer science mission. Working with Lewis Rhodes Labs, engineers at Sandia National Laboratories have created a device that is orders of magnitude faster at analyzing data to identify cyber-attacks.

  6. Introduction

    NASA Astrophysics Data System (ADS)

    Dum, Ralph

    Diverse networks (communication networks, transport networks, global business networks, networks of friends, the Internet) shape our daily life and the way we think and act. We depend on various social, economic, and technological networks that weave a tissue of businesses, governments, and technologies and that contain us as citizens, users, or customers. We become aware of our dependence only when failures occur in these networks: when cities are plunged into darkness because of a breakdown of the power grid, as happened recently in New York; when national economies collapse because of a failure of global financial systems, as in the South-Asian banking crisis; or when computer viruses spread with mind-boggling speed over information networks, destroying or, even worse, exposing sensitive data.

  7. Report on the Program and Contract Infrastructure Technical Requirements Development for the Guam Realignment Program

    DTIC Science & Technology

    2012-02-08

    Excerpt from the report's acronym list: GRN, Guam Road Network; GWA, Guam Waterworks Authority; ICG, Interagency Coordination Group; JFY, Japanese Fiscal Year; JRM, Joint...; (PAC), Pacific; NCTS, Naval Computer and Telecommunications Station; NEPA, National Environmental Policy Act; NPDES, National Pollutant Discharge Elimination System; OPNAV, Operational Navy; UFC, Unified Facilities Criteria; U.S., United States; USC, United States Code; USDA, United States...

  8. Analysis of Delays in Transmitting Time Code Using an Automated Computer Time Distribution System

    DTIC Science & Technology

    1999-12-01

    jlevine@clock.bldrdoc.gov Abstract: An automated computer time distribution system broadcasts standard time to users using computers and modems via... contributed to delays: software platform (50% of the delay), transmission speed of time codes (25%), telephone network (15%), modem and others (10%). The... modems, and telephone lines. Users dial the ACTS server to receive time traceable to the national time scale of Singapore, UTC(PSB). The users can in...

  9. dfnWorks: A discrete fracture network framework for modeling subsurface flow and transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hyman, Jeffrey D.; Karra, Satish; Makedonska, Nataliia

    DFNWORKS is a parallelized computational suite to generate three-dimensional discrete fracture networks (DFN) and simulate flow and transport. Developed at Los Alamos National Laboratory over the past five years, it has been used to study flow and transport in fractured media at scales ranging from millimeters to kilometers. The networks are created and meshed using DFNGEN, which combines FRAM (the feature rejection algorithm for meshing) methodology to stochastically generate three-dimensional DFNs with the LaGriT meshing toolbox to create a high-quality computational mesh representation. The representation produces a conforming Delaunay triangulation suitable for high performance computing finite volume solvers in an intrinsically parallel fashion. Flow through the network is simulated in dfnFlow, which utilizes the massively parallel subsurface flow and reactive transport finite volume code PFLOTRAN. A Lagrangian approach to simulating transport through the DFN is adopted within DFNTRANS to determine pathlines and solute transport through the DFN. Example applications of this suite in the areas of nuclear waste repository science, hydraulic fracturing and CO2 sequestration are also included.

  10. dfnWorks: A discrete fracture network framework for modeling subsurface flow and transport

    DOE PAGES

    Hyman, Jeffrey D.; Karra, Satish; Makedonska, Nataliia; ...

    2015-11-01

    DFNWORKS is a parallelized computational suite to generate three-dimensional discrete fracture networks (DFN) and simulate flow and transport. Developed at Los Alamos National Laboratory over the past five years, it has been used to study flow and transport in fractured media at scales ranging from millimeters to kilometers. The networks are created and meshed using DFNGEN, which combines FRAM (the feature rejection algorithm for meshing) methodology to stochastically generate three-dimensional DFNs with the LaGriT meshing toolbox to create a high-quality computational mesh representation. The representation produces a conforming Delaunay triangulation suitable for high performance computing finite volume solvers in an intrinsically parallel fashion. Flow through the network is simulated in dfnFlow, which utilizes the massively parallel subsurface flow and reactive transport finite volume code PFLOTRAN. A Lagrangian approach to simulating transport through the DFN is adopted within DFNTRANS to determine pathlines and solute transport through the DFN. Example applications of this suite in the areas of nuclear waste repository science, hydraulic fracturing and CO2 sequestration are also included.

  11. Proceedings of the Third International Workshop on Neural Networks and Fuzzy Logic, volume 2

    NASA Technical Reports Server (NTRS)

    Culbert, Christopher J. (Editor)

    1993-01-01

    Papers presented at the Neural Networks and Fuzzy Logic Workshop sponsored by the National Aeronautics and Space Administration and cosponsored by the University of Houston, Clear Lake, held 1-3 Jun. 1992 at the Lyndon B. Johnson Space Center in Houston, Texas are included. During the three days approximately 50 papers were presented. Technical topics addressed included adaptive systems; learning algorithms; network architectures; vision; robotics; neurobiological connections; speech recognition and synthesis; fuzzy set theory and application, control and dynamics processing; space applications; fuzzy logic and neural network computers; approximate reasoning; and multiobject decision making.

  12. Climate Science's Globally Distributed Infrastructure

    NASA Astrophysics Data System (ADS)

    Williams, D. N.

    2016-12-01

    The Earth System Grid Federation (ESGF) is primarily funded by the Department of Energy's (DOE's) Office of Science (the Office of Biological and Environmental Research [BER] Climate Data Informatics Program and the Office of Advanced Scientific Computing Research Next Generation Network for Science Program), the National Oceanic and Atmospheric Administration (NOAA), the National Aeronautics and Space Administration (NASA), the National Science Foundation (NSF), the European Infrastructure for the European Network for Earth System Modeling (IS-ENES), and the Australian National University (ANU). Support also comes from other U.S. federal and international agencies. The federation works across multiple worldwide data centers and spans seven international network organizations to provide users with the ability to access, analyze, and visualize data using a globally federated collection of networks, computers, and software. Its architecture employs a series of geographically distributed peer nodes that are independently administered and united by common federation protocols and application programming interfaces (APIs). The full ESGF infrastructure has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the Coupled Model Intercomparison Project (CMIP; output used by the Intergovernmental Panel on Climate Change assessment reports), multiple model intercomparison projects (MIPs; endorsed by the World Climate Research Programme [WCRP]), and the Accelerated Climate Modeling for Energy (ACME; ESGF is included in the overarching ACME workflow process to store model output). ESGF is a successful example of integration of disparate open-source technologies into a cohesive functional system that serves the needs of the global climate science community. Data served by ESGF include not only model output but also observational data from satellites and instruments, reanalysis, and generated images.

  13. An open source high-performance solution to extract surface water drainage networks from diverse terrain conditions

    USGS Publications Warehouse

    Stanislawski, Larry V.; Survila, Kornelijus; Wendel, Jeffrey; Liu, Yan; Buttenfield, Barbara P.

    2018-01-01

    This paper describes a workflow for automating the extraction of elevation-derived stream lines using open source tools with parallel computing support and testing the effectiveness of procedures in various terrain conditions within the conterminous United States. Drainage networks are extracted from the US Geological Survey 1/3 arc-second 3D Elevation Program elevation data having a nominal cell size of 10 m. This research demonstrates the utility of open source tools with parallel computing support for extracting connected drainage network patterns and handling depressions in 30 subbasins distributed across humid, dry, and transitional climate regions and in terrain conditions exhibiting a range of slopes. Special attention is given to low-slope terrain, where network connectivity is preserved by generating synthetic stream channels through lake and waterbody polygons. Conflation analysis compares the extracted streams with a 1:24,000-scale National Hydrography Dataset flowline network and shows that similarities are greatest for second- and higher-order tributaries.
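
    At the heart of any such extraction workflow is a flow-direction and flow-accumulation pass over the elevation grid. The toy D8 version below (each cell drains to its steepest downslope neighbor, and cells whose accumulated upstream area exceeds a threshold are marked as stream cells) is a didactic stand-in for the parallel open-source tools the paper evaluates, not their implementation.

        import numpy as np

        dem = np.array([[9., 8., 7.],
                        [8., 6., 5.],
                        [7., 5., 3.]])        # tiny synthetic elevation grid
        rows, cols = dem.shape
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                   (0, 1), (1, -1), (1, 0), (1, 1)]

        def downslope(r, c):
            """Steepest downslope neighbor of cell (r, c), or None for a pit."""
            best, steepest = None, 0.0
            for dr, dc in offsets:
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    drop = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                    if drop > steepest:
                        best, steepest = (rr, cc), drop
            return best

        # Visit cells from highest to lowest so upstream area is accumulated
        # before being passed downstream.
        acc = np.ones_like(dem)
        for r, c in sorted(np.ndindex(rows, cols), key=lambda rc: -dem[rc]):
            nxt = downslope(r, c)
            if nxt is not None:
                acc[nxt] += acc[r, c]
        print(np.argwhere(acc >= 3))   # stream cells at a 3-cell threshold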

  14. Airport Simulations Using Distributed Computational Resources

    NASA Technical Reports Server (NTRS)

    McDermott, William J.; Maluf, David A.; Gawdiak, Yuri; Tran, Peter; Clancy, Daniel (Technical Monitor)

    2002-01-01

    The Virtual National Airspace Simulation (VNAS) will improve the safety of Air Transportation. In 2001, using simulation and information management software running over a distributed network of supercomputers, researchers at NASA Ames, Glenn, and Langley Research Centers developed a working prototype of a virtual airspace. This VNAS prototype modeled daily operations of the Atlanta airport by integrating measured operational data and simulation data on up to 2,000 flights a day. The concepts and architecture developed by NASA for this prototype are integral to the National Airspace Simulation, supporting the development of strategies for improving aviation safety and identifying precursors to component failure.

  15. IAS telecommunication infrastructure and value added network services provided by IASNET

    NASA Astrophysics Data System (ADS)

    Smirnov, Oleg L.; Marchenko, Sergei

    The topology of a packet switching network for the Soviet National Centre for Automated Data Exchange with Foreign Computer Networks and Databanks (NCADE), based on a design by the Institute for Automated Systems (IAS), is discussed. NCADE has partners all over the world: it is linked to East European countries via telephone lines, while satellites are used for communication with remote partners such as Cuba, Mongolia, and Vietnam. Moreover, there are connections to the Austrian, British, Canadian, Finnish, French, U.S., and other western networks through which users can access databases on each network. At the same time, NCADE provides western customers with access to more than 70 Soviet databases. The software and hardware of IASNET follow data exchange recommendations agreed with the International Organization for Standardization (ISO) and the International Telegraph and Telephone Consultative Committee (CCITT). Technical parameters of IASNET are compatible with the majority of foreign networks, such as DATAPAK, TRANSPAC, TELENET, and others. By means of IASNET, the NCADE connects Soviet and foreign users to information and computer centers around the world on the basis of the CCITT X.25 and X.75 recommendations. IASNET's information resources and value-added network services, such as computer teleconferences, e-mail, information retrieval systems, and intelligent support for access to databanks and databases, are discussed. The topology of ACADEMNET, connected to IASNET over an X.25 gateway, is also discussed.

  16. Crosscut report: Exascale Requirements Reviews, March 9–10, 2017 – Tysons Corner, Virginia. An Office of Science review sponsored by: Advanced Scientific Computing Research, Basic Energy Sciences, Biological and Environmental Research, Fusion Energy Sciences, High Energy Physics, Nuclear Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerber, Richard; Hack, James; Riley, Katherine

    The mission of the U.S. Department of Energy Office of Science (DOE SC) is the delivery of scientific discoveries and major scientific tools to transform our understanding of nature and to advance the energy, economic, and national security missions of the United States. To achieve these goals in today's world requires investments in not only the traditional scientific endeavors of theory and experiment, but also in computational science and the facilities that support large-scale simulation and data analysis. The Advanced Scientific Computing Research (ASCR) program addresses these challenges in the Office of Science. ASCR's mission is to discover, develop, and deploy computational and networking capabilities to analyze, model, simulate, and predict complex phenomena important to DOE. ASCR supports research in computational science; three high-performance computing (HPC) facilities, the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory and the Leadership Computing Facilities at Argonne (ALCF) and Oak Ridge (OLCF) National Laboratories; and the Energy Sciences Network (ESnet) at Berkeley Lab. ASCR is guided by science needs as it develops research programs, computers, and networks at the leading edge of technologies. As we approach the era of exascale computing, technology changes are creating challenges for science programs in SC for those who need to use high performance computing and data systems effectively. Numerous significant modifications to today's tools and techniques will be needed to realize the full potential of emerging computing systems and other novel computing architectures. To assess these needs and challenges, ASCR held a series of Exascale Requirements Reviews in 2015-2017, one with each of the six SC program offices, and a subsequent Crosscut Review that sought to integrate the findings from each. Participants at the reviews were drawn from the communities of leading domain scientists, experts in computer science and applied mathematics, ASCR facility staff, and DOE program managers in ASCR and the respective program offices. The purpose of these reviews was to identify mission-critical scientific problems within the DOE Office of Science (including experimental facilities) and determine the requirements for the exascale ecosystem that would be needed to address those challenges. The exascale ecosystem includes exascale computing systems, high-end data capabilities, efficient software at scale, libraries, tools, and other capabilities. This effort will contribute to the development of a strategic roadmap for ASCR compute and data facility investments and will help the ASCR Facility Division establish partnerships with Office of Science stakeholders. It will also inform the Office of Science research needs and agenda. The results of the six reviews have been published in reports available on the web at http://exascaleage.org/. This report presents a summary of the individual reports and of common and crosscutting findings, and it identifies opportunities for productive collaborations among the DOE SC program offices.

  17. The Virtual Earth-Solar Observatory of the SCiESMEX

    NASA Astrophysics Data System (ADS)

    De la Luz, V.; Gonzalez-Esparza, A.; Cifuentes-Nava, G.

    2015-12-01

    The Mexican Space Weather Service (SCiESMEX, http://www.sciesmex.unam.mx) started operations in October 2014. The project includes the Virtual Earth-Solar Observatory (VESO, http://www.veso.unam.mx). The VESO is an expanded project whose objective is to integrate the space weather instrumentation network of the National Autonomous University of Mexico (UNAM). The network includes the Mexican Array Radiotelescope (MEXART), the Callisto receptor (MEXART), a Neutron Telescope, a Cosmic Ray Telescope, the Schumann Antenna, the National Magnetic Service, and the Mexican GPS network (TlalocNet). The VESO facility is located at the Geophysics Institute campus in Michoacan (UNAM). We offer data storage, real-time data, and quasi-real-time data services. The VESO hardware includes a High Performance Computer (HPC) dedicated especially to big data storage.

  18. The Czech National Grid Infrastructure

    NASA Astrophysics Data System (ADS)

    Chudoba, J.; Křenková, I.; Mulač, M.; Ruda, M.; Sitera, J.

    2017-10-01

    The Czech National Grid Infrastructure is operated by MetaCentrum, a CESNET department responsible for coordinating and managing activities related to distributed computing. CESNET, as the Czech National Research and Education Network (NREN), provides many e-infrastructure services, which are used by 94% of the scientific and research community in the Czech Republic. Computing and storage resources owned by different organizations are connected by a sufficiently fast network to provide transparent access to all resources. We describe in more detail the computing infrastructure, which is based on several different technologies and covers grid, cloud, and map-reduce environments. While the largest part of the CPUs is still accessible via distributed Torque servers, providing an environment for long batch jobs, part of the infrastructure is available via standard EGI tools, a subset of NGI resources is provided into the EGI FedCloud environment with a cloud interface, and there is also a Hadoop cluster provided by the same e-infrastructure. A broad spectrum of computing servers is offered; users can choose from standard two-CPU servers to large SMP machines with up to 6 TB of RAM or servers with GPU cards. Different groups have different priorities on various resources, and resource owners can even have exclusive access. The software is distributed via AFS. Storage servers offering up to tens of terabytes of disk space to individual users are connected via NFS4 on top of GPFS, and access to long-term HSM storage with petabyte capacity is also provided. An overview of available resources and recent usage statistics is given.

  19. WE-B-BRD-01: Innovation in Radiation Therapy Planning II: Cloud Computing in RT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, K; Kagadis, G; Xing, L

    As defined by the National Institute of Standards and Technology, cloud computing is “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.” Despite the omnipresent role of computers in radiotherapy, cloud computing has yet to achieve widespread adoption in clinical or research applications, though the transition to such “on-demand” access is underway. As this transition proceeds, new opportunities for aggregate studies and efficient use of computational resources are set against new challenges in patient privacy protection, data integrity, and management of clinical informatics systems. In this Session, current and future applications of cloud computing and distributed computational resources will be discussed in the context of medical imaging, radiotherapy research, and clinical radiation oncology applications. Learning Objectives: 1. Understand basic concepts of cloud computing. 2. Understand how cloud computing could be used for medical imaging applications. 3. Understand how cloud computing could be employed for radiotherapy research. 4. Understand how clinical radiotherapy software applications would function in the cloud.

  20. 49 CFR 395.18 - Matter incorporated by reference.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Technology—Telecommunications and information exchange between systems—Local and metropolitan area networks...) Specifications,” IEEE Computer Society, Sponsored by the LAN/MAN Standards Committee: June 12, 2007 (IEEE Std... 446-2008, American National Standard for Information Technology—Identifying Attributes for Named...

  1. Delay/Disruption Tolerant Networking for the International Space Station (ISS)

    NASA Technical Reports Server (NTRS)

    Schlesinger, Adam; Willman, Brett M.; Pitts, Lee; Davidson, Suzanne R.; Pohlchuck, William A.

    2017-01-01

    Disruption Tolerant Networking (DTN) is an emerging data networking technology designed to abstract the hardware communication layer from the spacecraft/payload computing resources. DTN is specifically designed to operate in environments where link delays and disruptions are common (e.g., space-based networks). The National Aeronautics and Space Administration (NASA) has demonstrated DTN on several missions, such as the Deep Impact Network Experiment (DINET), the Earth Observing-1 (EO-1) mission, and the Lunar Laser Communication Demonstration (LLCD). To further the maturation of DTN, NASA is implementing DTN protocols on the International Space Station (ISS). This paper explains the architecture of the ISS DTN network, the operational support for the system, the results from integrated ground testing, and the future work for DTN expansion.
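
    The essence of DTN is that data survive link outages: bundles are held in persistent storage and forwarded only when a contact is available. The toy store-and-forward queue below illustrates that idea only; it is not the Bundle Protocol software flown on the ISS, and the names are invented.

        from collections import deque

        class DtnNode:
            """Toy DTN node: stores bundles while the link is down."""

            def __init__(self, name):
                self.name = name
                self.storage = deque()        # bundles awaiting a contact

            def send(self, bundle, link_up):
                self.storage.append(bundle)   # always store first
                if link_up:
                    self.flush()

            def flush(self):
                while self.storage:           # drain backlog in arrival order
                    print(f"{self.name} forwards: {self.storage.popleft()}")

        node = DtnNode("payload-node")
        node.send("telemetry-1", link_up=False)   # disruption: bundle held
        node.send("telemetry-2", link_up=False)
        node.send("telemetry-3", link_up=True)    # contact: backlog drains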

  2. An Internet-style Approach to Managing Wireless Link Errors

    DTIC Science & Technology

    2002-05-01

    ...implementation I used. Jamshid Mahdavi and Matt Mathis, then at the Pittsburgh Supercomputer Center, and Vern Paxson of the Lawrence Berkeley National... Exposition. IEEE CS Press, 2002. [19] P. Bhagwat, P. Bhattacharya, A. Krishna, and S. Tripathi. Enhancing throughput over wireless LANs using channel... performance over wireless networks at the link layer. ACM Mobile Networks and Applications, 5(1):57-71, March 2000. [97] Vern Paxson and Mark Allman...

  3. Integrated DoD Voice and Data Networks and Ground Packet Radio Technology

    DTIC Science & Technology

    1976-08-01

    ...as the traffic requirement level increases. Moreover, the satellite switch selection problem is only meaningful over a limited traffic range. When... Table 5: CPU times vs. number of switches, satellite switch selection algorithm (computer used: PDP-10; "0'5" means 0 minutes and 5 seconds). 5.30... "Saturation Algorithm for Topological Design of Packet-Switched Communications Networks," National Telecommunications Conference Proceedings, San...

  4. Our Plan for a Wireless Loan Service.

    ERIC Educational Resources Information Center

    Allmang, Nancy

    2003-01-01

    Discusses the planning for wireless technology at the research library of the National Institute of Standards and Technology (NIST). Highlights include computer equipment, including laptops and PDAs; local area networks; equipment loan service; writing a business plan; infrastructure; training programs; and future considerations, including…

  5. Proceedings of the Third International Workshop on Neural Networks and Fuzzy Logic, volume 1

    NASA Technical Reports Server (NTRS)

    Culbert, Christopher J. (Editor)

    1993-01-01

    Documented here are papers presented at the Neural Networks and Fuzzy Logic Workshop sponsored by the National Aeronautics and Space Administration and cosponsored by the University of Houston, Clear Lake. The workshop was held June 1-3, 1992 at the Lyndon B. Johnson Space Center in Houston, Texas. During the three days approximately 50 papers were presented. Technical topics addressed included adaptive systems; learning algorithms; network architectures; vision; robotics; neurobiological connections; speech recognition and synthesis; fuzzy set theory and application, control, and dynamics processing; space applications; fuzzy logic and neural network computers; approximate reasoning; and multiobject decision making.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, C. V.; Mendez, A. J.

    This was a collaborative effort between Lawrence Livermore National Security, LLC (formerly The Regents of the University of California)/Lawrence Livermore National Laboratory (LLNL) and Mendez R & D Associates (MRDA) to develop and demonstrate a reconfigurable and cost effective design for optical code division multiplexing (O-CDM) with high spectral efficiency and throughput, as applied to the field of distributed computing, including multiple accessing (sharing of communication resources) and bidirectional data distribution in fiber-to-the-premise (FTTx) networks.

  7. Services and the National Information Infrastructure. Report of the Information Infrastructure Task Force Committee on Applications and Technology, Technology Policy Working Group. Draft for Public Comment.

    ERIC Educational Resources Information Center

    Office of Science and Technology Policy, Washington, DC.

    In this report, the National Information Infrastructure (NII) services issue is addressed, and activities to advance the development of NII services are recommended. The NII is envisioned to grow into a seamless web of communications networks, computers, databases, and consumer electronics that will put vast amounts of information at users'…

  8. Dual Arm Work Package performance estimates and telerobot task network simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Draper, J.V.; Blair, L.M.

    1997-02-01

    This paper describes the methodology and results of a network simulation study of the Dual Arm Work Package (DAWP), to be employed for dismantling the Argonne National Laboratory CP-5 reactor. The development of the simulation model was based upon the results of a task analysis for the same system. This study was performed by the Oak Ridge National Laboratory (ORNL), in the Robotics and Process Systems Division. Funding was provided by the US Department of Energy's Office of Technology Development, Robotics Technology Development Program (RTDP). The RTDP is developing methods of computer simulation to estimate telerobotic system performance. Data were collected to provide point estimates to be used in a task network simulation model. Three skilled operators performed six repetitions of a pipe cutting task representative of typical teleoperation cutting operations.
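
    A task network simulation of this kind turns per-task point estimates into a distribution of total completion time. A minimal Monte Carlo sketch for a serial task network follows; the task names, means, and spread are hypothetical illustrations, not the DAWP task-analysis values.

        import math
        import random

        # Each task's duration is sampled from a lognormal whose median equals
        # the task's point estimate; SIGMA is an assumed spread parameter.
        TASKS = {"position arm": 30.0, "start saw": 5.0,
                 "cut pipe": 120.0, "retract arm": 25.0}   # mean seconds
        SIGMA = 0.25

        def simulate_once():
            """One pass through the serial task network."""
            return sum(random.lognormvariate(math.log(mean), SIGMA)
                       for mean in TASKS.values())

        runs = sorted(simulate_once() for _ in range(10_000))
        print(f"median={runs[5_000]:.0f} s   p90={runs[9_000]:.0f} s")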

  9. Status of NGS CORS Network and Its Contribution to the GGOS Infrastructure

    NASA Astrophysics Data System (ADS)

    Choi, K. K.; Haw, D.; Sun, L.

    2017-12-01

    Recent advances in satellite geodesy techniques can now contribute to the global frame realization needed to improve worldwide accuracies. These techniques rely on coordinates computed using continuously observed GPS data and corresponding satellite orbits. The GPS-based reference system continues to depend on the physical stability of a ground-based network of points as the primary foundation for these observations. NOAA's National Geodetic Survey (NGS) has been operating Continuously Operating Reference Stations (CORS) to provide direct access to the National Spatial Reference System (NSRS). By virtue of its scientific reputation and leadership in national and international geospatial issues, NGS has decided to increase its participation in the maintenance of the U.S. component of the global GPS tracking network in order to realize a long-term stable national terrestrial reference frame. NGS can do so by leveraging its national leadership role and scientific expertise to designate and upgrade a subset of the current tracking network for this purpose. This subset of stations must meet the highest operational standards to serve dual functions: being the U.S. contribution to the international frame and providing the link to the national datum. These stations deserve special attention to ensure that the highest possible levels of quality and stability are maintained. To meet this need, NGS is working with international scientific groups to add and designate these reference stations based on scientific merit, such as colocation with other geodetic techniques, geographic area, and monumentation stability.

  10. Networking: the view from HEP

    NASA Astrophysics Data System (ADS)

    McKee, Shawn

    2017-10-01

    Networks have played a critical role in high-energy physics (HEP), enabling us to access and effectively utilize globally distributed resources to meet the needs of our physicists. National and global-scale collaborations that characterize HEP would not be feasible without ubiquitous, capable networks. Because of their importance in enabling our grid computing infrastructure, many physicists have taken leading roles in research and education (R&E) networking, participating in, and even convening, network-related meetings and research programs with the broader networking community worldwide. This has led to HEP benefiting from excellent global networking capabilities for little to no direct cost. However, as other science domains ramp up their need for similar networking, it becomes less clear that this situation will continue unchanged. This paper will briefly discuss the history of networking in HEP, the current activities and challenges we are facing, and try to provide some understanding of where networking may be going in the next 5 to 10 years.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christoph, G.G; Jackson, K.A.; Neuman, M.C.

    An effective method for detecting computer misuse is the automatic auditing and analysis of on-line user activity. This activity is reflected in the system audit record, by changes in the vulnerability posture of the system configuration, and in other evidence found through active testing of the system. In 1989 we started developing an automatic misuse detection system for the Integrated Computing Network (ICN) at Los Alamos National Laboratory. Since 1990 this system has been operational, monitoring a variety of network systems and services. We call it the Network Anomaly Detection and Intrusion Reporter, or NADIR. During the last year and a half, we expanded NADIR to include processing of audit and activity records for the Cray UNICOS operating system. This new component is called the UNICOS Real-time NADIR, or UNICORN. UNICORN summarizes user activity and system configuration information in statistical profiles. In near real-time, it can compare current activity to historical profiles and test activity against expert rules that express our security policy and define improper or suspicious behavior. It reports suspicious behavior to security auditors and provides tools to aid in follow-up investigations. UNICORN is currently operational on four Crays in Los Alamos' main computing network, the ICN.
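
    The profile-plus-expert-rule scheme described can be reduced to a small sketch: flag a user whose current activity deviates strongly from their historical statistical profile, and independently apply hard policy rules. The field names and thresholds below are illustrative, not Los Alamos policy.

        import statistics

        history = {"alice": [4, 5, 6, 5, 4, 6, 5],    # past daily session counts
                   "bob":   [1, 1, 2, 1, 2, 1, 1]}
        today = {"alice": 5, "bob": 14}

        def alerts(history, today, z_limit=3.0, session_cap=10):
            out = []
            for user, counts in history.items():
                mean = statistics.mean(counts)
                sd = statistics.stdev(counts) or 1.0   # guard against zero spread
                z = (today[user] - mean) / sd
                if abs(z) > z_limit:                   # statistical profile test
                    out.append((user, f"z={z:.1f} versus historical profile"))
                if today[user] > session_cap:          # expert rule: hard cap
                    out.append((user, "expert rule: session cap exceeded"))
            return out

        print(alerts(history, today))   # bob trips both tests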

  12. Assessment of Microphysical Models in the National Combustion Code (NCC) for Aircraft Particulate Emissions: Particle Loss in Sampling Lines

    NASA Technical Reports Server (NTRS)

    Wey, Thomas; Liu, Nan-Suey

    2008-01-01

    This paper first describes the fluid network approach recently implemented in the National Combustion Code (NCC) for simulating the transport of aerosols (volatile particles and soot) in particulate sampling systems. This network-based approach complements the other two approaches already in the NCC, namely, the lower-order temporal approach and the CFD-based approach. The accuracy and computational costs of these three approaches are then investigated in terms of their application to the prediction of particle losses through sample transmission and distribution lines. Their predictive capabilities are assessed by comparing the computed results with experimental data. The present work will help establish standard methodologies for measuring the size and concentration of particles in high-temperature, high-velocity jet engine exhaust. It also represents the first step of a long-term effort to validate physics-based tools for the prediction of aircraft particulate emissions.

  13. Advanced Optical Burst Switched Network Concepts

    NASA Astrophysics Data System (ADS)

    Nejabati, Reza; Aracil, Javier; Castoldi, Piero; de Leenheer, Marc; Simeonidou, Dimitra; Valcarenghi, Luca; Zervas, Georgios; Wu, Jian

    In recent years, as the bandwidth and the speed of networks have increased significantly, a new generation of network-based applications using the concept of distributed computing and collaborative services is emerging (e.g., Grid computing applications). The use of the available fiber and DWDM infrastructure for these applications is a logical choice, offering huge amounts of cheap bandwidth and ensuring global reach of computing resources [230]. Currently, there is a great deal of interest in deploying optical circuit (wavelength) switched network infrastructure for distributed computing applications that require long-lived wavelength paths and address the specific needs of a small number of well-known users. Typical users are particle physicists who, due to their international collaborations and experiments, generate enormous amounts of data (Petabytes per year). These users require a network infrastructure that can support processing and analysis of large datasets through globally distributed computing resources [230]. However, providing wavelength-granularity bandwidth services is not an efficient and scalable solution for applications and services that address a wider base of user communities with different traffic profiles and connectivity requirements. Examples of such applications may be: scientific collaboration on a smaller scale (e.g., bioinformatics, environmental research), distributed virtual laboratories (e.g., remote instrumentation), e-health, national security and defense, personalized learning environments and digital libraries, and evolving broadband user services (e.g., high-resolution home video editing, real-time rendering, high-definition interactive TV). As a specific example, in e-health services, and in particular mammography applications, stringent network requirements are necessary due to the size and quantity of images produced by remote mammography. Initial calculations have shown that for 100 patients to be screened remotely, the network would have to securely transport 1.2 GB of data every 30 s [230]. From the above it is clear that these types of applications need a new network infrastructure and transport technology that makes large amounts of bandwidth at subwavelength granularity, storage, computation, and visualization resources potentially available to a wide user base for specified time durations. As these types of collaborative and network-based applications evolve to address a wide range and large number of users, it is infeasible to build dedicated networks for each application type or category. Consequently, there should be an adaptive network infrastructure able to support all application types, each with their own access, network, and resource usage patterns. This infrastructure should offer flexible and intelligent network elements and control mechanisms able to deploy new applications quickly and efficiently.
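
    The mammography figure quoted above implies a substantial sustained rate, which a quick check confirms (taking 1 GB as 10**9 bytes):

        # 1.2 GB transported every 30 s, expressed as a sustained bit rate.
        bits_per_second = 1.2e9 * 8 / 30
        print(f"{bits_per_second / 1e6:.0f} Mbit/s sustained")   # 320 Mbit/s

    That is a rate well below a full wavelength yet far above typical access links, which is the subwavelength-granularity regime the passage argues for.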

  14. NSI customer service representatives and user support office: NASA Science Internet

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The NASA Science Internet (NSI) was established in 1987 to provide NASA's Offices of Space Science and Applications (OSSA) missions with transparent wide-area data connectivity to NASA's researchers, computational resources, and databases. The NSI Office at NASA/Ames Research Center has the lead responsibility for implementing a total, open networking program to serve the OSSA community. NSI is a full-service communications provider whose services include science network planning, network engineering, applications development, network operations, and network information center/user support services. NSI's mission is to provide reliable high-speed communications to the NASA science community. To this end, the NSI Office manages and operates the NASA Science Internet, a multiprotocol network currently supporting both DECnet and TCP/IP protocols. NSI utilizes state-of-the-art network technology to meet its customers' requirements. The NASA Science Internet interconnects with other national networks, including the National Science Foundation's NSFNET, the Department of Energy's ESnet, and the Department of Defense's MILNET. NSI also has international connections to Japan, Australia, New Zealand, Chile, and several European countries. NSI cooperates with other government agencies as well as academic and commercial organizations to implement networking technologies which foster interoperability, improve reliability and performance, increase security and control, and expedite migration to the OSI protocols.

  15. Communication and computing technology in biocontainment laboratories using the NEIDL as a model.

    PubMed

    McCall, John; Hardcastle, Kath

    2014-07-01

    The National Emerging Infectious Diseases Laboratories (NEIDL), Boston University, is a globally unique biocontainment research facility housing biosafety level 2 (BSL-2), BSL-3, and BSL-4 laboratories. Located in the BioSquare area at the University's Medical Campus, it is part of a national network of secure facilities constructed to study infectious diseases of major public health concern. The NEIDL allows for basic, translational, and clinical phases of research to be carried out in a single facility with the overall goal of accelerating understanding, treatment, and prevention of infectious diseases. The NEIDL will also act as a center of excellence providing training and education in all aspects of biocontainment research. Within every detail of NEIDL operations is a primary emphasis on safety and security. The ultramodern NEIDL has required a new approach to communications technology solutions in order to ensure safety and security and meet the needs of investigators working in this complex building. This article discusses the implementation of secure wireless networks and private cloud computing to promote operational efficiency, biosecurity, and biosafety with additional energy-saving advantages. The utilization of a dedicated data center, virtualized servers, virtualized desktop integration, multichannel secure wireless networks, and a NEIDL-dedicated Voice over Internet Protocol (VoIP) network are all discussed. © 2014 Federation of European Microbiological Societies. Published by John Wiley & Sons Ltd. All rights reserved.

  16. CoDA 2014 special issue: Exploring data-focused research across the department of energy: Editorial

    DOE PAGES

    Myers, Kary Lynn

    2015-10-05

    Here, this collection of papers, written by researchers at the national labs, in academia, and in industry, presents real problems, massive and complex datasets, and novel statistical approaches motivated by the challenges of experimental and computational science. You'll find explorations of the trajectories of aircraft and of the light curves of supernovae, of computer network intrusions and of nuclear forensics, of photovoltaics and overhead imagery.

  17. NASDA knowledge-based network planning system

    NASA Technical Reports Server (NTRS)

    Yamaya, K.; Fujiwara, M.; Kosugi, S.; Yambe, M.; Ohmori, M.

    1993-01-01

    One of the SODS (Space Operation and Data System) subsystems, NP (network planning), was the first expert system used by NASDA (National Space Development Agency of Japan) for tracking and control of satellites. The major responsibilities of the NP system are: first, the allocation of network and satellite control resources and, second, the generation of the network operation plan data (NOP) used in automated control of the stations and control center facilities. Until now, the first task, network resource scheduling, was done by network operators. The NP system automatically generates schedules using its knowledge base, which contains information on satellite orbits, station availability, which computer is dedicated to which satellite, and how many stations must be available for a particular satellite pass or a certain time period. The NP system is introduced.
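
    The first NP task, resource allocation, can be pictured as matching satellite passes to ground stations subject to availability and dedication constraints. A greedy sketch follows; the station names, tables, and slots are invented for illustration and are not NASDA data.

        # Knowledge base: which slots each station can cover, and which
        # stations are dedicated to which satellite (hypothetical values).
        availability = {"Station-A": {1, 2, 3}, "Station-B": {2, 3, 4}}
        dedicated = {"SAT-1": ["Station-A", "Station-B"], "SAT-2": ["Station-B"]}
        passes = [("SAT-1", 2), ("SAT-2", 2), ("SAT-1", 3)]   # (satellite, slot)

        booked, plan = set(), {}
        for sat, slot in passes:
            for station in dedicated[sat]:
                if slot in availability[station] and (station, slot) not in booked:
                    booked.add((station, slot))       # reserve station for slot
                    plan[(sat, slot)] = station
                    break
            else:
                plan[(sat, slot)] = "UNSCHEDULED"     # no free dedicated station
        print(plan)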

  18. Workshop on Incomplete Network Data Held at Sandia National Labs – Livermore

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soundarajan, Sucheta; Wendt, Jeremy D.

    2016-06-01

    While network analysis is applied in a broad variety of scientific fields (including physics, computer science, biology, and the social sciences), how networks are constructed and the resulting bias and incompleteness have drawn more limited attention. For example, in biology, gene networks are typically developed via experiment; many actual interactions are likely yet to be discovered. In addition to this incompleteness, the data-collection processes can introduce significant bias into the observed network datasets. For instance, if you observe part of the World Wide Web network through a classic random walk, then high degree nodes are more likely to be found than if you had selected nodes at random. Unfortunately, such incomplete and biasing data collection methods must often be used.
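
    The random-walk bias is easy to reproduce: a walk arrives at a node in proportion to its degree, so the sampled mean degree is inflated relative to a uniform sample. A short simulation, assuming the networkx package and an arbitrary scale-free graph model:

        import random
        import networkx as nx

        G = nx.barabasi_albert_graph(n=2000, m=2, seed=1)   # arbitrary test graph

        node = random.choice(list(G))
        walk = []
        for _ in range(5000):                    # classic random walk
            node = random.choice(list(G.neighbors(node)))
            walk.append(node)

        uniform = random.choices(list(G), k=len(walk))      # uniform node sample
        walk_mean = sum(G.degree(v) for v in walk) / len(walk)
        unif_mean = sum(G.degree(v) for v in uniform) / len(uniform)
        print(f"random-walk mean degree: {walk_mean:.1f}")  # noticeably higher
        print(f"uniform mean degree:     {unif_mean:.1f}")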

  19. CRYSNET manual. Informal report. [Hardware and software of crystallographic computing network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None,

    1976-07-01

    This manual describes the hardware and software which together make up the crystallographic computing network (CRYSNET). The manual is intended as a users' guide and also provides general information for persons without any experience with the system. CRYSNET is a network of intelligent remote graphics terminals that are used to communicate with the CDC Cyber 70/76 computing system at the Brookhaven National Laboratory (BNL) Central Scientific Computing Facility. Terminals are in active use by four research groups in the field of crystallography. A protein data bank has been established at BNL to store in machine-readable form atomic coordinates and other crystallographic data for macromolecules. The bank currently includes data for more than 20 proteins. This structural information can be accessed at BNL directly by the CRYSNET graphics terminals. More than two years of experience has been accumulated with CRYSNET. During this period, it has been demonstrated that the terminals, which provide access to a large, fast third-generation computer, plus stand-alone interactive graphics capability, are useful for computations in crystallography, and in a variety of other applications as well. The terminal hardware, the actual operations of the terminals, and the operations of the BNL Central Facility are described in some detail, and documentation of the terminal and central-site software is given. (RWR)

  20. China's Chemical Information Online Service: ChI2Net.

    ERIC Educational Resources Information Center

    Naiyan, Yu; And Others

    1997-01-01

    Describes the Chemical Integrated Information Service Network (ChI2Net), a comprehensive online information service system built by the China National Chemical Information Centre on computer and modern communication technology, which includes chemical, technical, economic, market, news, and management information. (Author/LRW)

  1. Prototyping an institutional IAIMS/UMLS information environment for an academic medical center.

    PubMed

    Miller, P L; Paton, J A; Clyman, J I; Powsner, S M

    1992-07-01

    The paper describes a prototype information environment designed to link network-based information resources in an integrated fashion and thus enhance the information capabilities of an academic medical center. The prototype was implemented on a single Macintosh computer to permit exploration of the overall "information architecture" and to demonstrate the various desired capabilities prior to full-scale network-based implementation. At the heart of the prototype are two components: a diverse set of information resources available over an institutional computer network and an information sources map designed to assist users in finding and accessing information resources relevant to their needs. The paper describes these and other components of the prototype and presents a scenario illustrating its use. The prototype illustrates the link between the goals of two National Library of Medicine initiatives, the Integrated Academic Information Management System (IAIMS) and the Unified Medical Language System (UMLS).

  2. Introducing a Girl to Engineering Day

    NASA Image and Video Library

    2018-02-22

    The laptop computer in the foreground displays Rachel Power, left, of NASA’s Digital Expansion to Engage the Public (DEEP) Network; Bethanne’ Hull, center, of NASA Outreach; and NASA engineer Krista Shaffer inside Kennedy Space Center’s Vehicle Assembly Building during Introduce a Girl to Engineering Day. Held in conjunction with National Engineers Week and Girl Day, the event allowed students from throughout the nation to speak with female NASA scientists and technical experts.

  3. Uncovering and Managing the Impact of Methodological Choices for the Computational Construction of Socio-Technical Networks from Texts

    DTIC Science & Technology

    2012-09-01

    ...supported by the National Science Foundation (NSF) IGERT 9972762, the Army Research Institute (ARI) W91WAW07C0063, the Army Research Laboratory (ARL/CTA)... Figure-list excerpt: prediction models in AutoMap (p. 144); Figure 13, decision tree for prediction model selection in... generated for nationally funded initiatives and made available through the Linguistic Data Consortium (LDC). An overview of these datasets is provided in...

  4. ESCAP/POPIN Expert Working Group on Development of Population Information Centres and Networks, 20-23 June 1984, Bangkok, Thailand.

    PubMed

    1984-07-01

    An overview of current population information programs at the regional, national, and global level was presented at a meeting of the Expert Working Group on Development of Population Information Centres and Networks. On the global level, the decentralized Population Information Network (POPIN) was established, consisting of population libraries, clearinghouses, information systems, and documentation centers. The Economic and Social Commission for Asia and the Pacific (ESCAP) Regional Population Information Centre (PIC) has actively promoted the standardization of methodologies for the collection and processing of data, the use of compatible terminology, adoption of classification systems, computer-assisted data and information handling, and improved programs of publication and information dissemination, within and among national centers. Among the national PICs, 83% are attached to the primary national family planning/fertility control unit and 17% are attached to demographic data, research, and analysis units. Lack of access to specialized information handling equipment such as microcomputers, word processors, and computer terminals remains a problem for PICs. Recommendations were made by the Expert Working Group to improve the functions of PICs: 1) the mandate and responsibilities of the PIC should be explicitly stated; 2) PICs should collect, process, and disseminate population information in the most effective format to workers in the population field; 3) PICs should be given flexibility in the performance of activities by their governing bodies; 4) short-term training should be provided in computerization and dissemination of information; 5) research and evaluation mechanisms for PIC activities should be developed; 6) PIC staff should prepare policy briefs for decision makers; 7) access to parent organizations should be given to nongovernment PICs; 8) study tours to foreign PICs should be organized for PIC staff; and 9) on-the-job training in indexing and abstracting should be provided. Networking among PICs can be further facilitated by written acquisition policies, automation of bibliographic information, common classification systems, and exchange of ideas and experience between various systems.

  5. Science Information System in Japan. NIER Occasional Paper 02/83.

    ERIC Educational Resources Information Center

    Matsumura, Tamiko

    This paper describes the development of a proposed Japanese Science Information System (SIS), a nationwide network of research and academic libraries, large-scale computer centers, national research institutes, and other organizations, to be formed for the purpose of sharing information and resources in the natural sciences, technology, the…

  6. Information Operations and FATA Integration into the National Mainstream

    DTIC Science & Technology

    2012-09-01

    Edward L. Fisher. Recoverable front-matter fragments include the report documentation page (Form Approved OMB No. 0704-0188) and table-of-contents entries for an introduction, Computer Network Operations, and CNO as an IO Core Capability.

  7. The Arabization of a Full-Text Database Interface.

    ERIC Educational Resources Information Center

    Fayen, Emily Gallup; And Others

    The 1981 design specifications for the Egyptian National Scientific and Technical Information Network (ENSTINET) stipulated that major end-user facilities of the system should be bilingual in English and Arabic. Many characteristics of the Arabic alphabet and language impact computer applications, and there exists no universally accepted character…

  8. Networked Interactive Video for Group Training

    ERIC Educational Resources Information Center

    Eary, John

    2008-01-01

    The National Computing Centre (NCC) has developed an interactive video training system for the Scottish Police College to help train police supervisory officers in crowd control at major spectator events, such as football matches. This approach involves technology-enhanced training in a group-learning environment, and may have significant impact…

  9. High Definition Information Systems. Hearings before the Subcommittee on Technology and Competitiveness of the Committee on Science, Space, and Technology. U.S. House of Representatives, One Hundred Second Congress, First Session (May 14, 21, 1991).

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. House Committee on Science, Space and Technology.

    The report of these two hearings on high definition information systems begins by noting that they are digital, and that they are likely to handle computing, telecommunications, home security, computer imaging, storage, fiber optics networks, multi-dimensional libraries, and many other local, national, and international systems. (It is noted that…

  10. Interdependent networks - Topological percolation research and application in finance

    NASA Astrophysics Data System (ADS)

    Zhou, Di

    This dissertation covers the two major parts of my Ph.D. research: i) developing a theoretical framework of complex networks and applying simulation and numerical methods to study the robustness of the network system, and ii) applying statistical physics concepts and methods to quantitatively analyze complex systems and applying the theoretical framework to study real-world systems. In part I, we focus on developing theories of interdependent networks as well as building computer simulation models, which includes three parts: 1) We report on the effects of topology on failure propagation for a model system consisting of two interdependent networks. We find that the internal node correlations in each of the networks significantly change the critical density of failures that can trigger the total disruption of the two-network system. Specifically, we find that assortativity within a single network decreases the robustness of the entire system. 2) We study the percolation behavior of two interdependent scale-free (SF) networks under random failure of a 1-p fraction of nodes. We find that as the coupling strength q between the two networks is reduced from 1 (fully coupled) to 0 (no coupling), there exist two critical coupling strengths q1 and q2 that separate the behaviors of the giant component as a function of p into three different regions, and for q2 < q < q1 we observe a hybrid-order phase transition. 3) We study the robustness of n interdependent networks with a partially support-dependent relationship, both analytically and numerically. We study a starlike network of n Erdos-Renyi (ER) or SF networks and a looplike network of n ER networks, and we find that for starlike networks the phase transition regions change with n, but for looplike networks the phase regions change with the average degree k. In part II, we apply concepts and methods developed in statistical physics to study economic systems. We analyze stock market indices and foreign exchange daily returns for 60 countries over the period 1999-2012. We build a multi-layer network model based on different correlation measures, and introduce a dynamic network model to simulate and analyze the initiation and spreading of financial crises. Using different computational approaches and econometric tests, we find atypical behavior of the cross correlations and community formations in the financial networks that we study during the financial crisis of 2008. For example, the overall correlation of the stock market increases during a crisis while the correlation between the stock market and the foreign exchange market decreases. A dramatic increase in correlations between a specific nation and other nations may indicate that this nation could trigger a global financial crisis. Specifically, core countries that have higher correlations with other countries and larger Gross Domestic Product (GDP) values spread financial crisis quite effectively, yet some countries with small GDPs, like Greece and Cyprus, are also effective in propagating systemic risk and spreading global financial crisis.
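
    The cascading-failure process sketched above lends itself to a compact simulation. The following Python sketch (using networkx; all parameter values and function names are illustrative, not taken from the dissertation) couples two Erdos-Renyi networks one-to-one for a fraction q of their nodes, removes a random 1-p fraction of nodes from one network, and iterates the mutual giant-component pruning until the cascade stops.

    ```python
    import random
    import networkx as nx

    def giant(G):
        """Node set of the largest connected component (empty set if G is empty)."""
        return max(nx.connected_components(G), key=len) if len(G) else set()

    def cascade(n=1000, k=4, q=0.8, p=0.6, seed=0):
        """Fraction of nodes surviving in the mutual giant component after a
        random failure of a 1-p fraction of network A's nodes, with a fraction
        q of nodes coupled one-to-one between networks A and B."""
        rng = random.Random(seed)
        A = nx.gnp_random_graph(n, k / n, seed=rng.randint(0, 2**31))
        B = nx.gnp_random_graph(n, k / n, seed=rng.randint(0, 2**31))
        dependent = set(rng.sample(range(n), int(q * n)))   # coupled node ids
        A.remove_nodes_from(rng.sample(range(n), int((1 - p) * n)))
        while True:
            pruned_a = set(A) - giant(A)        # A nodes cut off from A's core
            A.remove_nodes_from(pruned_a)
            dead_b = {i for i in B if i in dependent and i not in A}
            B.remove_nodes_from(dead_b)         # their partners in B fail too
            pruned_b = set(B) - giant(B)        # same pruning on B's side
            B.remove_nodes_from(pruned_b)
            dead_a = {i for i in A if i in dependent and i not in B}
            A.remove_nodes_from(dead_a)         # propagates back to A
            if not (pruned_a or dead_b or pruned_b or dead_a):
                break                           # cascade has stabilized
        return len(giant(A)) / n

    print(cascade(q=0.9), cascade(q=0.2))       # strong vs. weak coupling
    ```

    Sweeping p for several values of q reproduces the qualitative picture described above: the surviving fraction collapses abruptly under strong coupling and changes smoothly under weak coupling.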

  11. Strengthening National, Homeland, and Economic Security. Networking and Information Technology Research and Development Supplement to the President’s FY 2003 Budget

    DTIC Science & Technology

    2002-07-01

    Recoverable fragments include section titles (Knowledge From Data; High-Confidence Software and Systems: Reliability, Security, and Safety), an account of NOAA’s Cessna Citation flying over the 16-acre World Trade Center site scanning with an Optech ALSM unit, and a note that large systems provide the data storage and compute power for intelligence analysis, high-performance national defense systems, and critical scientific research.

  12. Control System Applicable Use Assessment of the Secure Computing Corporation - Secure Firewall (Sidewinder)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hadley, Mark D.; Clements, Samuel L.

    2009-01-01

    Battelle’s National Security & Defense objective is “applying unmatched expertise and unique facilities to deliver homeland security solutions. From detection and protection against weapons of mass destruction to emergency preparedness/response and protection of critical infrastructure, we are working with industry and government to integrate policy, operational, technological, and logistical parameters that will secure a safe future”. In an ongoing effort to meet this mission, engagements with industry are necessary to improve the operational and technical attributes of commercial solutions related to national security initiatives. This ensures that commercial entities consider capabilities for protecting critical infrastructure assets in their development, design, and deployment lifecycles, thus addressing the deficiencies and improvements needed to support national cyber security initiatives. The Secure Firewall (Sidewinder) appliance by Secure Computing was assessed for applicable use in critical infrastructure control system environments, such as electric power, nuclear, and other facilities containing critical systems that require augmented protection from cyber threats. The testing was performed in the Pacific Northwest National Laboratory’s (PNNL) Electric Infrastructure Operations Center (EIOC). The Secure Firewall was tested in a network configuration that emulates a typical control center network and then evaluated. A number of observations and recommendations are included in this report relating to features currently included in the Secure Firewall that support critical infrastructure security needs.

  13. An improved approximate network blocking probability model for all-optical WDM Networks with heterogeneous link capacities

    NASA Astrophysics Data System (ADS)

    Khan, Akhtar Nawaz

    2017-11-01

    Currently, analytical models are used to compute approximate blocking probabilities in opaque and all-optical WDM networks with homogeneous link capacities. Existing analytical models can also be extended to opaque WDM networking with heterogeneous link capacities because of the wavelength conversion at each switch node. However, existing analytical models cannot be utilized for all-optical WDM networking with a heterogeneous structure of link capacities, due to the wavelength continuity constraint and unequal numbers of wavelength channels on different links. In this work, a mathematical model is extended for computing approximate network blocking probabilities in heterogeneous all-optical WDM networks, in which path blocking is dominated by the link along the path with the fewest wavelength channels. A wavelength assignment scheme is also proposed for dynamic traffic, termed last-fit-first wavelength assignment, in which the wavelength channel with the maximum index is assigned first to a lightpath request. Due to the heterogeneous structure of link capacities and the wavelength continuity constraint, the wavelength channels with maximum indexes are utilized for minimum-hop routes. Similarly, the wavelength channels with minimum indexes are utilized for multi-hop routes between source and destination pairs. The proposed scheme has lower blocking probability values compared to the existing heuristic for wavelength assignment. Finally, numerical results are computed in different network scenarios and are approximately equal to values obtained from simulations.
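
    The last-fit-first rule described above is simple enough to state in a few lines. The sketch below is a minimal illustration (function names and the data layout are assumptions, not the paper's code): it intersects the free-wavelength sets of every link on the route, honoring the wavelength continuity constraint, and picks the highest-index channel so that low-index channels remain available for longer routes.

    ```python
    def last_fit_first(path, free):
        """Last-fit-first wavelength assignment under the wavelength
        continuity constraint: a lightpath must use the same wavelength
        on every link, and the highest-index channel is preferred so that
        low-index channels stay free for multi-hop routes.

        path -- sequence of link identifiers along the route
        free -- dict mapping link id -> set of free wavelength indexes
                (sets may differ in size: heterogeneous link capacities)
        """
        usable = set.intersection(*(free[link] for link in path))
        if not usable:
            return None                # request is blocked
        w = max(usable)                # highest-index channel first
        for link in path:
            free[link].discard(w)      # occupy the channel on each link
        return w

    # Example: a two-hop route where link "b" has fewer channels than "a".
    free = {"a": {0, 1, 2, 3, 4, 5, 6, 7}, "b": {0, 1, 2, 3}}
    print(last_fit_first(["a", "b"], free))   # -> 3 (max common index)
    ```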

  14. Asynchronous transfer mode link performance over ground networks

    NASA Technical Reports Server (NTRS)

    Chow, E. T.; Markley, R. W.

    1993-01-01

    The results of an experiment to determine the feasibility of using asynchronous transfer mode (ATM) technology to support advanced spacecraft missions that require high-rate ground communications and, in particular, full-motion video are reported. Potential nodes in such a ground network include Deep Space Network (DSN) antenna stations, the Jet Propulsion Laboratory, and a set of national and international end users. The experiment simulated a lunar microrover, a lunar lander, the DSN ground communications system, and distributed science users. The users were equipped with video-capable workstations. A key feature was an optical fiber link between two high-performance workstations equipped with ATM interfaces. Video was also transmitted through JPL's institutional network to a user 8 km from the experiment. Variations in video quality depending on the networks and computers were observed, and the results are reported.

  15. Complex Failure Forewarning System - DHS Conference Proceedings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abercrombie, Robert K; Hively, Lee M; Prowell, Stacy J

    2011-01-01

    As the critical infrastructures of the United States have become more and more dependent on public and private networks, the potential for widespread national impact resulting from disruption or failure of these networks has also increased. Securing the nation's critical infrastructures requires protecting not only their physical systems but, just as important, the cyber portions of the systems on which they rely. A failure is inclusive of random events, design flaws, and instabilities caused by cyber (and/or physical) attack. One such domain, aging bridges, is used to explain the Complex Structure Failure Forewarning System. We discuss the workings of such a system in the context of the necessary sensors, command and control, and data collection, as well as the cyber security efforts that would support this system. Their application and the implications of this computing architecture are also discussed with respect to our nation's aging infrastructure.

  16. Forewarning of Failure in Complex Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abercrombie, Robert K; Hively, Lee M; Prowell, Stacy J

    2011-01-01

    As the critical infrastructures of the United States have become more and more dependent on public and private networks, the potential for widespread national impact resulting from disruption or failure of these networks has also increased. Securing the nation's critical infrastructures requires protecting not only their physical systems but, just as important, the cyber portions of the systems on which they rely. A failure is inclusive of random events, design flaws, and instabilities caused by cyber (and/or physical) attack. One such domain is failure in critical equipment. A second is aging bridges. We discuss the workings of such a system in the context of the necessary sensors, command and control, and data collection, as well as the cyber security efforts that would support this system. Their application and the implications of this computing architecture are also discussed with respect to our nation's aging infrastructure.

  17. Comprehensive, Multi-Source Cyber-Security Events Data Set

    DOE Data Explorer

    Kent, Alexander D. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-05-21

    This data set represents 58 consecutive days of de-identified event data collected from five sources within Los Alamos National Laboratory’s corporate, internal computer network. The data sources include Windows-based authentication events from both individual computers and centralized Active Directory domain controller servers; process start and stop events from individual Windows computers; Domain Name Service (DNS) lookups as collected on internal DNS servers; network flow data as collected at several key router locations; and a set of well-defined red teaming events that represent bad behavior within the 58 days. In total, the data set is approximately 12 gigabytes compressed across the five data elements and presents 1,648,275,307 events for 12,425 users, 17,684 computers, and 62,974 processes. Well-known, system-related users (e.g., SYSTEM, Local Service) were not de-identified, though well-known administrator accounts were de-identified. In the network flow data, well-known ports (e.g., 80, 443) were not de-identified. All other users, computers, processes, ports, times, and other details were de-identified as a unified set across all the data elements (e.g., U1 is the same U1 in all of the data). The specific timeframe used is not disclosed for security purposes. In addition, no data that allows association outside of LANL’s network is included. All data starts with a time epoch of 1 using a time resolution of 1 second. In the authentication data, failed authentication events are only included for users that had a successful authentication event somewhere within the data set.
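
    A short example of how such a data set might be consumed follows. The column layout (time, user, computer, outcome) is an assumption made for illustration; the published files have their own documented schemas, which should be consulted before use.

    ```python
    import csv
    from collections import defaultdict

    def auth_success_rates(path):
        """Count successful vs. failed authentication events per user.

        Assumes (for illustration only) a CSV layout of
        time,user,computer,outcome; see the data set's documentation
        for the actual schema of each of the five event files."""
        ok = defaultdict(int)
        fail = defaultdict(int)
        with open(path, newline="") as f:
            for time, user, computer, outcome in csv.reader(f):
                if outcome == "Success":
                    ok[user] += 1
                else:
                    fail[user] += 1
        # Per the collection notes, a failed event only appears for users
        # with at least one success, so every key in `fail` is also in `ok`.
        return {u: ok[u] / (ok[u] + fail[u]) for u in ok}
    ```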

  18. The Network of Global Corporate Control

    PubMed Central

    Vitali, Stefania; Glattfelder, James B.; Battiston, Stefano

    2011-01-01

    The structure of the control network of transnational corporations affects global market competition and financial stability. So far, only small national samples were studied and there was no appropriate methodology to assess control globally. We present the first investigation of the architecture of the international ownership network, along with the computation of the control held by each global player. We find that transnational corporations form a giant bow-tie structure and that a large portion of control flows to a small tightly-knit core of financial institutions. This core can be seen as an economic “super-entity” that raises new important issues both for researchers and policy makers. PMID:22046252
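
    The bow-tie decomposition mentioned above can be computed directly from a directed ownership graph. The sketch below (a minimal illustration using networkx, not the authors' methodology) takes the largest strongly connected component as the core and classifies the remaining nodes by reachability.

    ```python
    import networkx as nx

    def bow_tie(G):
        """Partition a directed network into bow-tie components: the largest
        strongly connected core, its IN set (nodes that reach the core), its
        OUT set (nodes reachable from the core), and everything else. Node
        semantics (firms, ownership edges) are illustrative."""
        core = max(nx.strongly_connected_components(G), key=len)
        anchor = next(iter(core))               # any core member will do
        in_set = nx.ancestors(G, anchor) - core
        out_set = nx.descendants(G, anchor) - core
        rest = set(G) - core - in_set - out_set
        return core, in_set, out_set, rest

    # Tiny example: node 1 feeds a 2-node core {2, 3}, which controls node 4.
    G = nx.DiGraph([(1, 2), (2, 3), (3, 2), (3, 4)])
    core, in_set, out_set, rest = bow_tie(G)
    print(sorted(core), sorted(in_set), sorted(out_set))   # [2, 3] [1] [4]
    ```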

  19. NINJA: a noninvasive framework for internal computer security hardening

    NASA Astrophysics Data System (ADS)

    Allen, Thomas G.; Thomson, Steve

    2004-07-01

    Vulnerabilities are a growing problem in both the commercial and government sectors. The latest vulnerability information compiled by CERT/CC for the year ending Dec. 31, 2002 reported 4129 vulnerabilities, representing a 100% increase over 2001 [1] (the 2003 report had not been published at the time of this writing). It doesn't take long to realize that the growth rate of vulnerabilities greatly exceeds the rate at which the vulnerabilities can be fixed. It also doesn't take long to realize that our nation's networks are growing less secure at an accelerating rate. As organizations become aware of vulnerabilities they may initiate efforts to resolve them, but quickly realize that the size of the remediation project is greater than their current resources can handle. In addition, many IT tools that suggest solutions to the problems in reality address only some of the vulnerabilities, leaving the organization unsecured and back to square one in searching for solutions. This paper proposes an auditing framework called NINJA (an acronym for Network Investigation Notification Joint Architecture) for noninvasive daily scanning/auditing based on common security vulnerabilities that repeatedly occur in a network environment. This framework is used for performing regular audits in order to harden an organization's security infrastructure. The framework is based on the results obtained by the Network Security Assessment Team (NSAT), which emulates adversarial computer network operations for US Air Force organizations. Auditing is the most time-consuming factor involved in securing an organization's network infrastructure. The framework discussed in this paper uses existing scripting technologies to maintain a security-hardened system at a defined level of performance as specified by the computer security audit team. Mobile agents, which were under development at the time of this writing, are used at a minimum to improve the noninvasiveness of our scans. In general, noninvasive scans with an adequate framework performed on a daily basis reduce the security workload and improve the timeliness of remediation, as verified by the NINJA framework. A vulnerability assessment/auditing architecture based on mobile agent technology is proposed and examined at the end of the article as an enhancement to the current NINJA architecture.
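
    Although the NINJA scripts themselves are not reproduced here, the flavor of a noninvasive daily audit can be conveyed with a small sketch: compare hashes of security-relevant files against a stored baseline and report drift without modifying anything. The file paths and baseline name below are illustrative assumptions.

    ```python
    import hashlib
    import json
    import pathlib

    # Illustrative placeholders, not NINJA's actual watch list.
    BASELINE = pathlib.Path("baseline.json")
    WATCHED = [pathlib.Path(p) for p in ("/etc/passwd", "/etc/ssh/sshd_config")]

    def digest(path):
        """SHA-256 hash of a file's contents."""
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def audit():
        """Report files whose contents changed since the last audit,
        then refresh the stored baseline. Read-only apart from the
        baseline file itself, in keeping with a noninvasive scan."""
        baseline = json.loads(BASELINE.read_text()) if BASELINE.exists() else {}
        current = {str(p): digest(p) for p in WATCHED if p.exists()}
        drift = {p: h for p, h in current.items() if baseline.get(p) != h}
        BASELINE.write_text(json.dumps(current, indent=2))
        return drift

    if __name__ == "__main__":
        for path in audit():
            print("changed:", path)
    ```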

  20. Enabling Research Network Connectivity to Clouds with Virtual Router Technology

    NASA Astrophysics Data System (ADS)

    Seuster, R.; Casteels, K.; Leavett-Brown, CR; Paterson, M.; Sobie, RJ

    2017-10-01

    The use of opportunistic cloud resources by HEP experiments has significantly increased over the past few years. Clouds that are owned or managed by the HEP community are connected to the LHCONE network or the research network with global access to HEP computing resources. Private clouds, such as those supported by non-HEP research funds, are generally connected to the international research network; however, commercial clouds are either not connected to the research network or connect only to research sites within their national boundaries. Since research network connectivity is a requirement for HEP applications, we need to find a solution that provides a high-speed connection. We are studying a solution with a virtual router that addresses the use case in which a commercial cloud has research network connectivity only in a limited region. In this situation, we host a virtual router at our HEP site and require that all traffic from the commercial site transit through the virtual router. Although this may lengthen the network path and increase the load on the HEP site, it is a workable solution that would enable the use of the remote cloud for low-I/O applications. We are exploring some simple open-source solutions. In this paper, we present the results of our studies and how they will benefit our use of private and public clouds for HEP computing.

  1. National Computer Security Conference Proceedings (10th): Computer Security--From Principles to Practices, 21-24 September 1987

    DTIC Science & Technology

    1987-09-24

    Some concerns take on increased significance in the network context. A rating (e.g., ’Zl’) adequately reflects how well the system provides each service, corresponding to how well a specific approach may be expected to achieve established thresholds (Minimum, Fair, Good); however, in specific cases, ratings such as "present" or "approved" are used. Supportive policies include identification and authentication policies, as well as policies for detecting the fact that access has occurred.

  2. 76 FR 43347 - Notice Pursuant to the National Cooperative Research and Production Act of 1993; Network Centric...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-20

    ... circumstances. Specifically, Wakelight Technologies, Inc., Honolulu, HI; LinQuest Corporation, Los Angeles, CA; and Computer Sciences Corporation, Rockville, MD, have withdrawn as parties to this venture. In... activity of the group research project. Membership in this group research project remains open, and NCOIC...

  3. Impacts and Characteristics of Computer-Based Science Inquiry Learning Environments for Precollege Students

    ERIC Educational Resources Information Center

    Donnelly, Dermot F.; Linn, Marcia C.; Ludvigsen, Sten

    2014-01-01

    The National Science Foundation-sponsored report "Fostering Learning in the Networked World" called for "a common, open platform to support communities of developers and learners in ways that enable both to take advantage of advances in the learning sciences." We review research on science inquiry learning environments (ILEs)…

  4. Modeling Computer Communication Networks in a Realistic 3D Environment

    DTIC Science & Technology

    2010-03-01

    Recoverable front-matter fragments include a table of contents, an abstract and acknowledgements, a comparison of visualization tools, a list of abbreviations (e.g., 2D: two-dimensional), and a reference to "Joint Fires Support" (National Defense and the Canadian Forces, http://www.cfd-cdf.forces.gc.ca/sites/).

  5. Distributed telemedicine for the National Information Infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forslund, D.W.; Lee, Seong H.; Reverbel, F.C.

    1997-08-01

    TeleMed is an advanced system that provides a distributed multimedia electronic medical record available over a wide area network. It uses object-based computing, distributed data repositories, advanced graphical user interfaces, and visualization tools along with innovative concept extraction of image information for storing and accessing medical records developed in a separate project from 1994-5. In 1996, we began the transition to Java, extended the infrastructure, and worked to begin deploying TeleMed-like technologies throughout the nation. Other applications are mentioned.

  6. NAS-current status and future plans

    NASA Technical Reports Server (NTRS)

    Bailey, F. R.

    1987-01-01

    The Numerical Aerodynamic Simulation (NAS) Program has met its first major milestone, the NAS Processing System Network (NPSN) Initial Operating Configuration (IOC). The program has met its goal of providing a national supercomputer facility capable of greatly enhancing the Nation's research and development efforts. Furthermore, the program is fulfilling its pathfinder role by defining and implementing a paradigm for supercomputing system environments. The IOC is only the beginning, and the NAS Program will aggressively continue to develop and implement emerging supercomputer, communications, storage, and software technologies to strengthen computations as a critical element in supporting the Nation's leadership role in aeronautics.

  7. The ASCI Network for SC '99: A Step on the Path to a 100 Gigabit Per Second Supercomputing Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    PRATT,THOMAS J.; TARMAN,THOMAS D.; MARTINEZ,LUIS M.

    2000-07-24

    This document highlights the DISCOM{sup 2} Distance Computing and Communication team's activities at the 1999 Supercomputing conference in Portland, Oregon. This conference is sponsored by the IEEE and ACM. Sandia, Lawrence Livermore, and Los Alamos National Laboratories have participated in this conference for eleven years. For the last four years the three laboratories have come together at the conference under the rubric of DOE's Accelerated Strategic Computing Initiative (ASCI). Communication support for the ASCI exhibit is provided by the ASCI DISCOM{sup 2} project. The DISCOM{sup 2} communication team uses this forum to demonstrate and focus communication and networking developments within the community. At SC '99, DISCOM built a prototype of the next-generation ASCI network, demonstrated remote clustering techniques, demonstrated the capabilities of the emerging terabit router products, demonstrated the latest technologies for delivering visualization data to scientific users, and demonstrated the latest in encryption methods, including IP VPN technologies and ATM encryption research. The authors also coordinated the other production networking activities within the booth and between their demonstration partners on the exhibit floor. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support Sandia's overall strategies in ASCI networking.

  8. An Adaptive QSE-reduced Nuclear Reaction Network for Silicon Burning

    NASA Astrophysics Data System (ADS)

    Parete-Koon, Suzanne; Hix, W.; Thielemann, F.

    2008-03-01

    The nuclei of the "iron peak" are formed in massive stars shortly before core collapse and during their supernova outbursts, as well as during thermonuclear supernovae. Complete and incomplete silicon burning during these events are responsible for the production of a wide range of nuclei with atomic mass numbers from 28 to 64. Because of the large number of nuclei involved, accurate modeling of silicon burning is computationally expensive. However, examination of the physics of silicon burning has revealed that the nuclear evolution is dominated by large groups of nuclei in mutual equilibrium. We present an improvement on our hybrid equilibrium-network scheme which takes advantage of this quasi-equilibrium in order to reduce the number of independent variables calculated. Because the size and membership of these groups vary as the temperature, density, and electron fraction change, achieving maximal efficiency requires dynamic adjustment of group number and membership. Toward this end, we are implementing a scheme beginning with a single QSE (NSE) group at appropriately high temperature, then progressing through 2, 3, and 4 group stages (with successively more independent variables) as temperature declines. This combination allows accurate prediction of the nuclear abundance evolution, deleptonization, and energy generation at a further reduced computational cost when compared to a conventional nuclear reaction network or our previous 3 fixed-group QSE-reduced network. During silicon burning, the resultant QSE-reduced network is up to 20 times faster than the full network it replaces, without significant loss of accuracy. These reductions in computational cost and the number of species evolved make QSE-reduced networks well suited for inclusion within hydrodynamic simulations, particularly in multi-dimensional applications. This work has been supported by the National Science Foundation, by the Department of Energy's Scientific Discovery through Advanced Computing Programs, and by the Joint Institute for Heavy Ion Research at ORNL.
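
    The dynamic group adjustment can be pictured as a simple temperature-indexed policy. The sketch below is only illustrative: the threshold temperatures are placeholders, not the values used by the authors, and the real scheme also weighs density and electron fraction.

    ```python
    def qse_group_count(T9):
        """Choose how many quasi-equilibrium (QSE) groups to evolve, based on
        temperature (T9 = temperature in GK). The thresholds are illustrative
        placeholders: at high temperature a single NSE group suffices, and
        more groups (hence more independent variables) are added as the
        material cools."""
        if T9 > 6.0:
            return 1      # full nuclear statistical equilibrium (NSE)
        elif T9 > 5.0:
            return 2
        elif T9 > 4.0:
            return 3
        elif T9 > 3.0:
            return 4
        else:
            return None   # below QSE validity: fall back to the full network
    ```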

  9. Recent Performance Results of VPIC on Trinity

    NASA Astrophysics Data System (ADS)

    Nystrom, W. D.; Bergen, B.; Bird, R. F.; Bowers, K. J.; Daughton, W. S.; Guo, F.; Le, A.; Li, H.; Nam, H.; Pang, X.; Stark, D. J.; Rust, W. N., III; Yin, L.; Albright, B. J.

    2017-10-01

    Trinity is a new DOE compute resource now in production at Los Alamos National Laboratory. Trinity has several new and unique features, including two compute partitions (one with dual-socket Intel Haswell Xeon compute nodes and one with Intel Knights Landing (KNL) Xeon Phi compute nodes), use of on-package high-bandwidth memory (HBM) on the KNL nodes, the ability to configure KNL nodes' HBM mode and on-die network topology in a variety of operational modes at run time, and use of solid-state storage via burst-buffer technology to reduce the time required to perform I/O. An effort is in progress to optimize VPIC on Trinity by taking advantage of these new architectural features. Results will be presented on the performance of VPIC on the Haswell and KNL partitions for single-node runs and runs at scale. Results include the use of burst buffers at scale to optimize I/O, a comparison of strategies for using MPI and threads, the performance benefits of using HBM, and the effectiveness of using intrinsics for vectorization. Work performed under the auspices of the U.S. Dept. of Energy by Los Alamos National Security, LLC, Los Alamos National Laboratory under contract DE-AC52-06NA25396 and supported by the LANL LDRD program.

  10. Evaluating the networking characteristics of the Cray XC-40 Intel Knights Landing-based Cori supercomputer at NERSC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doerfler, Douglas; Austin, Brian; Cook, Brandon

    There are many potential issues associated with deploying the Intel Xeon Phi™ (code named Knights Landing [KNL]) manycore processor in a large-scale supercomputer. One in particular is the ability to fully utilize the high-speed communications network, given that the serial performance of a Xeon Phi™ core is a fraction of a Xeon® core. In this paper, we take a look at the trade-offs associated with allocating enough cores to fully utilize the Aries high-speed network versus cores dedicated to computation, e.g., the trade-off between MPI and OpenMP. In addition, we evaluate new features of Cray MPI in support of KNL, such as internode optimizations. We also evaluate one-sided programming models such as Unified Parallel C. We quantify the impact of the above trade-offs and features using a suite of National Energy Research Scientific Computing Center applications.

  11. Quantifying the Digital Divide: A Scientific Overview of Network Connectivity and Grid Infrastructure in South Asian Countries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khan, Shahryar Muhammad; /SLAC /NUST, Rawalpindi; Cottrell, R.Les

    2007-10-30

    The future of computing in High Energy Physics (HEP) applications depends on both the network and Grid infrastructure. South Asian countries such as India and Pakistan are making significant progress by building clusters as well as improving their network infrastructure. However, to facilitate the use of these resources, they need to manage the issues of network connectivity in order to be among the leading participants in computing for HEP experiments. In this paper we classify the connectivity for academic and research institutions of South Asia. The quantitative measurements are carried out using the PingER methodology, an approach that induces minimal ICMP traffic to gather active end-to-end network statistics. The PingER project has been measuring Internet performance for the last decade. Currently the measurement infrastructure comprises over 700 hosts in more than 130 countries, which collectively represent approximately 99% of the world's Internet-connected population. Thus, we are well positioned to characterize the world's connectivity. Here we present the current state of the National Research and Educational Networks (NRENs) and Grid infrastructure in the South Asian countries and identify the areas of concern. We also present comparisons between South Asia and other developing as well as developed regions. We show that there is a strong correlation between network performance and several Human Development indices.
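
    In the spirit of the PingER methodology (active, low-rate ICMP probing), a minimal measurement loop might look like the following sketch. The beacon host names are placeholders and a Unix-style ping command is assumed; the actual project runs a managed worldwide infrastructure of monitoring hosts.

    ```python
    import re
    import statistics
    import subprocess

    COUNT = 4                                  # echo requests per host
    HOSTS = ["beacon.example.edu", "beacon.example.pk"]   # placeholders

    def ping_rtts(host):
        """Round-trip times (ms) parsed from the system ping output."""
        out = subprocess.run(["ping", "-c", str(COUNT), host],
                             capture_output=True, text=True).stdout
        return [float(m) for m in re.findall(r"time=([\d.]+)", out)]

    for host in HOSTS:
        rtts = ping_rtts(host)
        loss = 1 - len(rtts) / COUNT           # packet loss fraction
        if rtts:
            print(f"{host}: median RTT {statistics.median(rtts):.1f} ms, "
                  f"loss {loss:.0%}")
        else:
            print(f"{host}: unreachable (100% loss)")
    ```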

  12. Utilizing Weak Indicators to Detect Anomalous Behaviors in Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Egid, Adin Ezra

    We consider the use of a novel weak indicator alongside more commonly used weak indicators to help detect anomalous behavior in a large computer network. The network data studied in this paper concern remote log-in information (Virtual Private Network, or VPN, sessions) from the internal network of Los Alamos National Laboratory (LANL). The novel indicator we are utilizing is something which, while novel in its application to data science/cyber security research, is a concept borrowed from the business world. The Herfindahl-Hirschman Index (HHI) is a computationally trivial index which provides a useful heuristic for regulatory agencies to ascertain the relative competitiveness of a particular industry. Using this index as a lagging indicator in the monthly format we have studied could help to detect anomalous behavior by a particular user or small set of users on the network. Additionally, we study indicators related to the speed of movement of a user based on the physical locations of their current and previous logins. This data can be ascertained from the IP addresses of the users, and is likely very similar to the fraud detection schemes regularly utilized by credit card networks to detect anomalous activity. In future work we would look to find a way to combine these indicators for use as an internal fraud detection system.
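
    The HHI itself is, as the authors note, computationally trivial: it is the sum of squared shares. A minimal sketch follows; the VPN-session framing and user labels are illustrative assumptions.

    ```python
    from collections import Counter

    def hhi(events):
        """Herfindahl-Hirschman Index of a categorical sample: the sum of
        squared shares. Values near 1 mean activity is concentrated on one
        category (e.g., one user dominating VPN logins in a month); values
        near 1/n mean activity is evenly spread over n categories."""
        counts = Counter(events)
        total = sum(counts.values())
        return sum((c / total) ** 2 for c in counts.values())

    # Example: monthly VPN sessions keyed by (de-identified) user.
    march = ["U1", "U1", "U2", "U3", "U1", "U4"]
    april = ["U1"] * 20 + ["U2"]          # one user suddenly dominates
    print(hhi(march), hhi(april))         # a concentration jump may warrant a look
    ```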

  13. Providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer

    DOEpatents

    Archer, Charles J; Faraj, Ahmad A; Inglett, Todd A; Ratterman, Joseph D

    2013-04-16

    Methods, apparatus, and products are disclosed for providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer, each compute node connected to each adjacent compute node in the global combining network through a link, that include: receiving a network packet in a compute node, the network packet specifying a destination compute node; selecting, in dependence upon the destination compute node, at least one of the links for the compute node along which to forward the network packet toward the destination compute node; and forwarding the network packet along the selected link to the adjacent compute node connected to the compute node through the selected link.
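
    To make the claim concrete, the sketch below routes a packet hop by hop on a binary tree, choosing at each node the one link (parent or child) that leads toward the destination. The heap-style node numbering is an illustrative assumption, not the patent's addressing scheme.

    ```python
    def ancestors(node):
        """All nodes on the path from `node` up to the root (inclusive),
        using heap-style numbering where the parent of n is (n - 1) // 2."""
        path = {node}
        while node:
            node = (node - 1) // 2
            path.add(node)
        return path

    def next_link(current, destination):
        """Return the neighbor of `current` on the tree path to `destination`,
        or None if the packet has arrived."""
        if current == destination:
            return None
        if current in ancestors(destination):
            # Destination lies below: step into the child subtree holding it.
            child = destination
            while (child - 1) // 2 != current:
                child = (child - 1) // 2
            return child
        return (current - 1) // 2          # otherwise forward to the parent

    # Route from node 4 to node 6 in a 7-node tree: 4 -> 1 -> 0 -> 2 -> 6.
    hop, route = 4, [4]
    while (hop := next_link(hop, 6)) is not None:
        route.append(hop)
    print(route)
    ```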

  14. Crowd-Sourcing Seismic Data for Education and Research Opportunities with the Quake-Catcher Network

    NASA Astrophysics Data System (ADS)

    Sumy, D. F.; DeGroot, R. M.; Benthien, M. L.; Cochran, E. S.; Taber, J. J.

    2016-12-01

    The Quake Catcher Network (QCN; quakecatcher.net) uses low-cost micro-electro-mechanical system (MEMS) sensors hosted by volunteers to collect seismic data. Volunteers use accelerometers internal to laptop computers, phones, and tablets, or small (the size of a matchbox) MEMS sensors plugged into desktop computers with a USB connector, to collect scientifically useful data. Data are collected and sent to a central server using the Berkeley Open Infrastructure for Network Computing (BOINC) distributed computing software. Since 2008, sensors installed in museums, schools, offices, and residences have collected thousands of earthquake records, including the 2010 M8.8 Maule, Chile; the 2010 M7.1 Darfield, New Zealand; and the 2015 M7.8 Gorkha, Nepal earthquakes. In 2016, the QCN in the United States transitioned to the Incorporated Research Institutions for Seismology (IRIS) Consortium and the Southern California Earthquake Center (SCEC), which are facilities funded through the National Science Foundation and the United States Geological Survey, respectively. The transition has allowed for an influx of new ideas and new education-related efforts, which include focused installations in several school districts in southern California, on Native American reservations in North Dakota, and in Oklahoma, the most seismically active state in the contiguous U.S. We present and describe these recent educational opportunities, and highlight how QCN has engaged a wide sector of the public in scientific data collection, particularly through the QCN-EPIcenter Network and NASA Mars InSight teacher programs. QCN provides the public with information and insight into how seismic data are collected, and how researchers use these data to better understand and characterize seismic activity. Lastly, we describe how students use data recorded by QCN sensors installed in their classrooms to explore and investigate felt earthquakes, and look towards the bright future of the network.

  15. Providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archer, Charles J.; Faraj, Daniel A.; Inglett, Todd A.

    Methods, apparatus, and products are disclosed for providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer, each compute node connected to each adjacent compute node in the global combining network through a link, that include: receiving a network packet in a compute node, the network packet specifying a destination compute node; selecting, in dependence upon the destination compute node, at least one of the links for the compute node along which to forward the network packet toward the destination compute node; and forwarding the network packet along the selected link to the adjacent compute node connected to the compute node through the selected link.

  16. The School in Its Relations with the Community. Research Projects EUDISED 1975-1977.

    ERIC Educational Resources Information Center

    Documentation Centre for Education in Europe, Strasbourg (France).

    The document presents abstracts of 40 research projects dealing with the relationship between school and community in Europe. These have been compiled by the European Documentation and Information System for the Education Project, (EUDISED). The aim of the EUDISED project is to create a computer-based network of national agencies dealing with…

  17. Development of Igbo Language E-Learning System

    ERIC Educational Resources Information Center

    Oyelami, Olufemi Moses

    2008-01-01

    E-Learning involves using a variety of computer and networking technologies to access training materials. The United Nations report, quoted in one of the Nigerian dailies towards the end of year 2006, says that most of the minor languages in the world would be extinct by the year 2050. African languages are currently suffering from discard by…

  18. A Helping Hand in the Frederick Community—Ross Smith | Poster

    Cancer.gov

    By day, Ross Smith is the compliance and security officer for Data Management Services, Inc., assigned to the National Cancer Institute (NCI) at Frederick. His role is to ensure the secure operation of in-house computer systems, servers, and network connections. But in his spare time, Smith is also a volunteer firefighter and emergency medical technician (EMT).

  19. Advanced Decentralized Water/Energy Network Design for Sustainable Infrastructure presentation at the 2012 National Association of Home Builders (NAHB) International Builders'Show

    EPA Science Inventory

    A university/industry panel will report on the progress and findings of a five-year project funded by the US Environmental Protection Agency. The project's end product will be a Web-based, 3D computer-simulated residential environment with a decision support system to assist...

  20. Computer Link Offering Variable Educational Records (Project CLOVER). A National Diffusion Network Developer/Demonstrator Project.

    ERIC Educational Resources Information Center

    Arkansas State Dept. of Education, Little Rock.

    Project CLOVER (Computerized Link Offering Variable Educational Records) is a demonstration project designed to increase use of the Migrant Student Record Transfer System (MSRTS). Project CLOVER (1) helps to ensure that schools attended by migrant students have the capability to receive and transmit academic and medical information on students;…

  1. Summary of Internet Terms and Resources. NRC Fact Sheet

    ERIC Educational Resources Information Center

    Zubal, Rachael; Hall, Mair

    2010-01-01

    What is the Internet? The Internet is a worldwide network of computers communicating with each other. This paper offers some basic, easy-to-understand meanings of words about the Internet that individuals may have questions about.[The preparation of this fact sheet was supported in part by the National Resource Center on Supported Living and…

  2. Research Institute for Advanced Computer Science: Annual Report October 1998 through September 1999

    NASA Technical Reports Server (NTRS)

    Leiner, Barry M.; Gross, Anthony R. (Technical Monitor)

    1999-01-01

    The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center (ARC). It currently operates under a multiple year grant/cooperative agreement that began on October 1, 1997 and is up for renewal in the year 2002. ARC has been designated NASA's Center of Excellence in Information Technology. In this capacity, ARC is charged with the responsibility to build an Information Technology Research Program that is preeminent within NASA. RIACS serves as a bridge between NASA ARC and the academic community, and RIACS scientists and visitors work in close collaboration with NASA scientists. RIACS has the additional goal of broadening the base of researchers in these areas of importance to the nation's space and aeronautics enterprises. RIACS research focuses on the three cornerstones of information technology research necessary to meet the future challenges of NASA missions: (1) Automated Reasoning for Autonomous Systems. Techniques are being developed enabling spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise, and fulfill their missions with minimum direction from Earth. (2) Human-Centered Computing. Many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities. (3) High Performance Computing and Networking. Advances in the performance of computing and networking continue to have major impact on a variety of NASA endeavors, ranging from modeling and simulation to data analysis of large datasets to collaborative engineering, planning and execution. In addition, RIACS collaborates with NASA scientists to apply information technology research to a variety of NASA application domains. RIACS also engages in other activities, such as workshops, seminars, and visiting scientist programs, designed to encourage and facilitate collaboration between the university and NASA information technology research communities.

  3. Research Institute for Advanced Computer Science

    NASA Technical Reports Server (NTRS)

    Gross, Anthony R. (Technical Monitor); Leiner, Barry M.

    2000-01-01

    The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center. It currently operates under a multiple year grant/cooperative agreement that began on October 1, 1997 and is up for renewal in the year 2002. Ames has been designated NASA's Center of Excellence in Information Technology. In this capacity, Ames is charged with the responsibility to build an Information Technology Research Program that is preeminent within NASA. RIACS serves as a bridge between NASA Ames and the academic community, and RIACS scientists and visitors work in close collaboration with NASA scientists. RIACS has the additional goal of broadening the base of researchers in these areas of importance to the nation's space and aeronautics enterprises. RIACS research focuses on the three cornerstones of information technology research necessary to meet the future challenges of NASA missions: (1) Automated Reasoning for Autonomous Systems. Techniques are being developed enabling spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise, and fulfill their missions with minimum direction from Earth; (2) Human-Centered Computing. Many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities; (3) High Performance Computing and Networking. Advances in the performance of computing and networking continue to have major impact on a variety of NASA endeavors, ranging from modeling and simulation to data analysis of large datasets to collaborative engineering, planning and execution. In addition, RIACS collaborates with NASA scientists to apply information technology research to a variety of NASA application domains. RIACS also engages in other activities, such as workshops, seminars, and visiting scientist programs, designed to encourage and facilitate collaboration between the university and NASA information technology research communities.

  4. How Data Becomes Physics: Inside the RACF

    ScienceCinema

    Ernst, Michael; Rind, Ofer; Rajagopalan, Srini; Lauret, Jerome; Pinkenburg, Chris

    2018-06-22

    The RHIC & ATLAS Computing Facility (RACF) at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory sits at the center of a global computing network. It connects more than 2,500 researchers around the world with the data generated by millions of particle collisions taking place each second at Brookhaven Lab's Relativistic Heavy Ion Collider (RHIC, a DOE Office of Science User Facility for nuclear physics research), and the ATLAS experiment at the Large Hadron Collider in Europe. Watch this video to learn how the people and computing resources of the RACF serve these scientists to turn petabytes of raw data into physics discoveries.

  5. Validation of the thermal challenge problem using Bayesian Belief Networks.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McFarland, John; Swiler, Laura Painton

    The thermal challenge problem has been developed at Sandia National Laboratories as a testbed for demonstrating various types of validation approaches and prediction methods. This report discusses one particular methodology to assess the validity of a computational model given experimental data. This methodology is based on Bayesian Belief Networks (BBNs) and can incorporate uncertainty in experimental measurements, in physical quantities, and in the model. The approach uses the prior and posterior distributions of model output to compute a validation metric based on Bayesian hypothesis testing (a Bayes' factor). This report discusses various aspects of the BBN, specifically in the context of the thermal challenge problem. A BBN is developed for a given set of experimental data in a particular experimental configuration. The development of the BBN and the method for "solving" the BBN to develop the posterior distribution of model output through Markov chain Monte Carlo sampling are discussed in detail. The use of the BBN to compute a Bayes' factor is demonstrated.
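
    The posterior-sampling step mentioned above can be illustrated with a minimal Metropolis sampler. The Gaussian likelihood, Gaussian prior, and data values below are placeholders for illustration; the report's BBN couples many such quantities rather than a single parameter.

    ```python
    import math
    import random

    data = [1.02, 0.97, 1.05, 0.99]          # hypothetical measurements
    sigma = 0.05                              # assumed measurement noise

    def log_post(theta):
        """Unnormalized log posterior: Gaussian prior times Gaussian likelihood."""
        log_prior = -0.5 * ((theta - 1.0) / 0.2) ** 2
        log_like = sum(-0.5 * ((y - theta) / sigma) ** 2 for y in data)
        return log_prior + log_like

    def metropolis(n=10_000, step=0.02, theta=1.0, seed=1):
        """Random-walk Metropolis sampling of the posterior of theta."""
        rng = random.Random(seed)
        samples = []
        for _ in range(n):
            prop = theta + rng.gauss(0, step)
            if math.log(rng.random()) < log_post(prop) - log_post(theta):
                theta = prop                  # accept the proposed move
            samples.append(theta)
        return samples

    post = metropolis()
    print(sum(post) / len(post))              # posterior mean of theta
    ```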

  6. Sustaining and Extending the Open Science Grid: Science Innovation on a PetaScale Nationwide Facility (DE-FC02-06ER41436) SciDAC-2 Closeout Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Livny, Miron; Shank, James; Ernst, Michael

    Under this SciDAC-2 grant the project’s goal was to stimulate new discoveries by providing scientists with effective and dependable access to an unprecedented national distributed computational facility: the Open Science Grid (OSG). We proposed to achieve this through the work of the Open Science Grid Consortium: a unique hands-on multi-disciplinary collaboration of scientists, software developers, and providers of computing resources. Together the stakeholders in this consortium sustain and use a shared distributed computing environment that transforms simulation and experimental science in the US. The OSG consortium is an open collaboration that actively engages new research communities. We operate an open facility that brings together a broad spectrum of compute, storage, and networking resources and interfaces to other cyberinfrastructures, including the US XSEDE (previously TeraGrid), the European Grids for ESciencE (EGEE), as well as campus and regional grids. We leverage middleware provided by computer science groups, facility IT support organizations, and computing programs of application communities for the benefit of consortium members and the US national CI.

  7. Hypothesis generation using network structures on community health center cancer-screening performance.

    PubMed

    Carney, Timothy Jay; Morgan, Geoffrey P; Jones, Josette; McDaniel, Anna M; Weaver, Michael T; Weiner, Bryan; Haggstrom, David A

    2015-10-01

    Nationally sponsored cancer-care quality-improvement efforts have been deployed in community health centers to increase breast, cervical, and colorectal cancer-screening rates among vulnerable populations. Despite several immediate and short-term gains, screening rates remain below national benchmark objectives. Overall improvement has been both difficult to sustain over time in some organizational settings and/or challenging to diffuse to other settings as repeatable best practices. Reasons for this include facility-level changes, which typically occur in dynamic organizational environments that are complex, adaptive, and unpredictable. This study seeks to understand the factors that shape community health center facility-level cancer-screening performance over time. This study applies a computational-modeling approach, combining principles of health-services research, health informatics, network theory, and systems science. To investigate the roles of knowledge acquisition, retention, and sharing within the setting of the community health center and to examine their effects on the relationship between clinical decision support capabilities and improvement in cancer-screening rates, we employed Construct-TM to create simulated community health centers using previously collected point-in-time survey data. Construct-TM is a multi-agent model of network evolution. Because social, knowledge, and belief networks co-evolve, groups and organizations are treated as complex systems to capture the variability of human and organizational factors. In Construct-TM, individuals and groups interact by communicating, learning, and making decisions in a continuous cycle. Data from the survey were used to differentiate high-performing simulated community health centers from low-performing ones based on computer-based decision support usage and self-reported cancer-screening improvement. This virtual experiment revealed that patterns of overall network symmetry, agent cohesion, and connectedness varied by community health center performance level. Visual assessment of both the agent-to-agent knowledge-sharing network and agent-to-resource knowledge-use network diagrams demonstrated that community health centers labeled as high performers typically showed higher levels of collaboration and cohesiveness among agent classes, faster knowledge-absorption rates, and fewer agents that were unconnected to key knowledge resources. Conclusions and research implications: Using the point-in-time survey data outlining community health center cancer-screening practices, our computational model successfully distinguished between high and low performers. Results indicated that high-performance environments displayed distinctive network characteristics in patterns of interaction among agents, as well as in the access and utilization of key knowledge resources. Our study demonstrated how non-network-specific data obtained from a point-in-time survey can be employed to forecast community health center performance over time, thereby enhancing the sustainability of long-term strategic-improvement efforts. Our results revealed a strategic profile for community health center cancer-screening improvement via simulation over a projected 10-year period. The use of computational modeling allows additional inferential knowledge to be drawn from existing data when examining organizational performance in increasingly complex environments. Copyright © 2015 Elsevier Inc. All rights reserved.
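
    The intuition that denser, more cohesive agent networks absorb knowledge faster can be conveyed with a toy diffusion model. This is an illustration in that spirit, not Construct-TM, and all parameters are invented.

    ```python
    import random
    import networkx as nx

    def rounds_to_saturate(G, p_share=0.5, seed=0):
        """Rounds until every agent knows a fact that starts with one agent,
        when each knowing agent offers it to one random neighbor per round."""
        rng = random.Random(seed)
        knows = {min(G)}                      # seed one agent with the fact
        rounds = 0
        while len(knows) < len(G):
            rounds += 1
            for agent in list(knows):
                neighbor = rng.choice(list(G[agent]))
                if rng.random() < p_share:
                    knows.add(neighbor)
        return rounds

    # Connected small-world graphs: same size, different densities.
    sparse = nx.connected_watts_strogatz_graph(50, 4, 0.1, seed=1)
    dense = nx.connected_watts_strogatz_graph(50, 16, 0.1, seed=1)
    # The denser network typically saturates in fewer rounds.
    print(rounds_to_saturate(sparse), rounds_to_saturate(dense))
    ```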

  8. Dynamics of global supply chain and electric power networks: Models, pricing analysis, and computations

    NASA Astrophysics Data System (ADS)

    Matsypura, Dmytro

    In this dissertation, I develop a new theoretical framework for the modeling, pricing analysis, and computation of solutions to electric power supply chains with power generators, suppliers, transmission service providers, and the inclusion of consumer demands. In particular, I advocate the application of finite-dimensional variational inequality theory, projected dynamical systems theory, game theory, network theory, and other tools that have been recently proposed for the modeling and analysis of supply chain networks (cf. Nagurney (2006)) to electric power markets. This dissertation contributes to the extant literature on the modeling, analysis, and solution of supply chain networks, including global supply chains, in general, and electric power supply chains, in particular, in the following ways. It develops a theoretical framework for modeling, pricing analysis, and computation of electric power flows/transactions in electric power systems using the rationale for supply chain analysis. The models developed include both static and dynamic ones. The dissertation also adds a new dimension to the methodology of the theory of projected dynamical systems by proving that, irrespective of the speeds of adjustment, the equilibrium of the system remains the same. Finally, I include alternative fuel suppliers, along with their behavior into the supply chain modeling and analysis framework. This dissertation has strong practical implications. In an era in which technology and globalization, coupled with increasing risk and uncertainty, complicate electricity demand and supply within and between nations, the successful management of electric power systems and pricing become increasingly pressing topics with relevance not only for economic prosperity but also national security. This dissertation addresses such related topics by providing models, pricing tools, and algorithms for decentralized electric power supply chains. This dissertation is based heavily on the following coauthored papers: Nagurney, Cruz, and Matsypura (2003), Nagurney and Matsypura (2004, 2005, 2006), Matsypura and Nagurney (2005), Matsypura, Nagurney, and Liu (2006).
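
    The projected dynamical systems machinery referred to above has a compact numerical core: follow the flow x' = Proj_K(x - F(x)) - x by Euler steps, so trajectories remain in the feasible set K and stationary points solve the variational inequality <F(x*), x - x*> >= 0 for all x in K. The mapping F and box constraints below are illustrative placeholders, not the dissertation's models.

    ```python
    import numpy as np

    def F(x):
        """A monotone "market" mapping (illustrative placeholder)."""
        A = np.array([[2.0, 0.5], [0.5, 1.0]])
        b = np.array([1.0, 2.0])
        return A @ x - b

    def project(x, lo=0.0, hi=10.0):
        """Projection onto the feasible box K = [lo, hi]^n."""
        return np.clip(x, lo, hi)

    x = np.array([5.0, 5.0])
    for _ in range(2000):             # Euler discretization of the PDS flow
        x = project(x - 0.05 * F(x))
    print(x)                          # approximate equilibrium flows/prices
    ```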

  9. Multi-site videoconferencing for home-based education of older people with chronic conditions: the Telehealth Literacy Project.

    PubMed

    Banbury, Annie; Parkinson, Lynne; Nancarrow, Susan; Dart, Jared; Gray, Len; Buckley, Jennene

    2014-10-01

    We examined the acceptability of multi-site videoconferencing as a method of providing group education to older people in their homes. There were 9 groups comprising 52 participants (mean age 73 years) with an average of four chronic conditions. Tablet computers or PCs were installed in participants' homes and connected to the Internet by the National Broadband Network (a high-speed broadband network) or by the 4G wireless network. A health literacy and self-management programme was delivered by videoconference for 5 weeks. Participants were able to view and interact with all group members and the facilitator on their devices. During the study, 44 group videoconferences were conducted. Evaluation included 16 semi-structured interviews, 3 focus groups and a journal detailing project implementation. The participants reported enjoying home-based group education by videoconference and found the technology easy to use. Using home-based groups via videoconference was acceptable for providing group education, and considered particularly valuable for people living alone and/or with limited mobility. Audio difficulties were the most commonly reported problem. Participants connected with 4G experienced more problems (audio and visual) than participants on the National Broadband Network, and those living in multi-dwelling residences reported more problems than those living in single-dwelling residences. Older people with little computer experience can be supported to use telehealth equipment. Telehealth has the potential to improve access to education about chronic disease self-management. © The Author(s) 2014. Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  10. Social Network Analysis of Elders' Health Literacy and their Use of Online Health Information.

    PubMed

    Jang, Haeran; An, Ji-Young

    2014-07-01

    Utilizing social network analysis, this study aimed to analyze the main keywords in the literature regarding the health literacy of and the use of online health information by aged persons over 65. Medical Subject Heading keywords were extracted from articles on the PubMed database of the National Library of Medicine. For health literacy, 110 articles out of 361 were initially extracted. Seventy-one keywords out of 1,021 were finally selected after removing repeated keywords and applying pruning. Regarding the use of online health information, 19 articles out of 26 were selected. One hundred forty-four keywords were initially extracted. After removing the repeated keywords, 74 keywords were finally selected. Health literacy was found to be strongly connected with 'Health knowledge, attitudes, practices' and 'Patient education as topic.' 'Computer literacy' had strong connections with 'Internet' and 'Attitude towards computers.' 'Computer literacy' was connected to 'Health literacy,' and was studied according to the parameters 'Attitude towards health' and 'Patient education as topic.' The use of online health information was strongly connected with 'Health knowledge, attitudes, practices,' 'Consumer health information,' 'Patient education as topic,' etc. In the network, 'Computer literacy' was connected with 'Health education,' 'Patient satisfaction,' 'Self-efficacy,' 'Attitude to computer,' etc. Research on older citizens' health literacy and their use of online health information was conducted together with study of computer literacy, patient education, attitude towards health, health education, patient satisfaction, etc. In particular, self-efficacy was noted as an important keyword. Further research should be conducted to identify the effective outcomes of self-efficacy in the area of interest.
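
    A minimal sketch of the co-occurrence-network construction this kind of keyword analysis relies on, using networkx; the per-article keyword lists below are hypothetical stand-ins for the extracted MeSH terms.

      import itertools
      import networkx as nx

      # Hypothetical per-article MeSH keyword lists standing in for the
      # PubMed extraction described in the abstract.
      articles = [
          ["Health literacy", "Patient education as topic", "Aged"],
          ["Computer literacy", "Internet", "Attitude to computers"],
          ["Health literacy", "Computer literacy", "Self-efficacy"],
      ]

      G = nx.Graph()
      for keywords in articles:
          # Keywords that co-occur in an article are linked; repeated
          # co-occurrence increases the edge weight.
          for u, v in itertools.combinations(sorted(set(keywords)), 2):
              w = G.get_edge_data(u, v, {"weight": 0})["weight"]
              G.add_edge(u, v, weight=w + 1)

      # Degree centrality identifies the strongly connected keywords.
      for kw, c in sorted(nx.degree_centrality(G).items(), key=lambda x: -x[1])[:3]:
          print(f"{kw}: {c:.2f}")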

  11. Activities of Research Institute for Advanced Computer Science

    NASA Technical Reports Server (NTRS)

    Gross, Anthony R. (Technical Monitor); Leiner, Barry M.

    2001-01-01

    The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center, Moffett Field, California. RIACS research focuses on the three cornerstones of IT research necessary to meet the future challenges of NASA missions: 1. Automated Reasoning for Autonomous Systems: Techniques are being developed to enable spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise, and fulfill their missions with minimum direction from Earth. 2. Human-Centered Computing: Many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities. 3. High Performance Computing and Networking: Advances in the performance of computing and networking continue to have major impact on a variety of NASA endeavors, ranging from modeling and simulation to analysis of large scientific datasets to collaborative engineering, planning and execution. In addition, RIACS collaborates with NASA scientists to apply IT research to a variety of NASA application domains. RIACS also engages in other activities, such as workshops, seminars, visiting scientist programs and student summer programs, designed to encourage and facilitate collaboration between the university and NASA IT research communities.

  12. Resource Aware Intelligent Network Services (RAINS) Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehman, Tom; Yang, Xi

    The Resource Aware Intelligent Network Services (RAINS) project conducted research and developed technologies in the area of cyber infrastructure resource modeling and computation. The goal of this work was to provide a foundation to enable intelligent, software defined services which span the network AND the resources which connect to the network. A Multi-Resource Service Plane (MRSP) was defined, which allows resource owners/managers to locate and place themselves, from a topology and service availability perspective, within the dynamic networked cyberinfrastructure ecosystem. The MRSP enables the presentation of integrated topology views and computation results which can include resources across the spectrum of compute, storage, and networks. The MRSP developed by the RAINS project includes the following key components: i) Multi-Resource Service (MRS) Ontology/Multi-Resource Markup Language (MRML), ii) Resource Computation Engine (RCE), iii) Modular Driver Framework (to allow integration of a variety of external resources). The MRS/MRML is a general and extensible modeling framework that allows resource owners to model, or describe, a wide variety of resource types. All resources are described using three categories of elements: Resources, Services, and Relationships between the elements. This modeling framework defines a common method for the transformation of cyber infrastructure resources into data in the form of MRML models. In order to realize this infrastructure datification, the RAINS project developed a model-based computation system, the RAINS Computation Engine (RCE). The RCE has the ability to ingest, process, integrate, and compute based on automatically generated MRML models. The RCE interacts with the resources through system drivers which are specific to the type of external network or resource controller. The RAINS project developed a modular and pluggable driver system which facilitates a variety of resource controllers to automatically generate, maintain, and distribute MRML-based resource descriptions. Once all of the resource topologies are absorbed by the RCE, a connected graph of the full distributed system topology is constructed, which forms the basis for computation and workflow processing. The RCE includes a Modular Computation Element (MCE) framework which allows for tailoring of the computation process to the specific set of resources under control, and the services desired. The input and output of an MCE are both model data based on the MRS/MRML ontology and schema. Some of the RAINS project accomplishments include: development of a general and extensible multi-resource modeling framework; design of a Resource Computation Engine (RCE) system able to absorb a variety of multi-resource model types and build integrated models; a novel architecture which uses model-based communications across the full stack; flexible provision of abstract or intent-based user-facing interfaces; workflow processing based on model descriptions; release of the RCE as open source software; deployment of the RCE in the University of Maryland/Mid-Atlantic Crossroad ScienceDMZ in prototype mode, with a plan under way to transition to production; deployment at the Argonne National Laboratory DTN Facility in prototype mode; and selection of the RCE by the DOE SENSE (SDN for End-to-end Networked Science at the Exascale) project as the basis for their orchestration service.
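
    The MRML schema itself is not reproduced here, but the core idea (resources of different types described as a connected graph over which a computation engine answers provisioning questions) can be sketched as follows. All node names and attributes are illustrative assumptions, not MRML syntax.

      import networkx as nx

      # Toy multi-resource topology in the spirit of the MRS model:
      # nodes carry a resource type, edges are connectivity relationships.
      G = nx.Graph()
      G.add_node("cluster-A", type="compute", cores=512)
      G.add_node("dtn-1", type="storage", tb_free=200)
      G.add_node("sw-1", type="network", bandwidth_gbps=100)
      G.add_node("sw-2", type="network", bandwidth_gbps=100)
      G.add_edges_from([("cluster-A", "sw-1"), ("sw-1", "sw-2"), ("sw-2", "dtn-1")])

      # A computation-engine style question: is there a path from a compute
      # resource to a storage resource, and through which network devices?
      path = nx.shortest_path(G, "cluster-A", "dtn-1")
      print("provisioning path:", " -> ".join(path))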

  13. Reprocessing Multiyear GPS Data from Continuously Operating Reference Stations on Cloud Computing Platform

    NASA Astrophysics Data System (ADS)

    Yoon, S.

    2016-12-01

    To define the geodetic reference frame using GPS data collected by the Continuously Operating Reference Stations (CORS) network, historical GPS data needs to be reprocessed regularly. Reprocessing the GPS data collected by up to 2,000 CORS sites over the last two decades requires substantial computational resources. At the National Geodetic Survey (NGS), one reprocessing was completed in 2011, and a second reprocessing is currently under way. The first reprocessing effort used in-house computing resources; the current second effort uses an outsourced cloud computing platform. In this presentation, the data processing strategy at NGS is outlined, as well as the effort to parallelize the data processing procedure in order to maximize the benefit of cloud computing. The time and cost savings realized by the cloud computing approach will also be discussed.
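
    The parallelization strategy is not detailed in the abstract, but multiyear, multi-station reprocessing is a naturally parallel workload, so a per-station-day worker pool is the obvious pattern. A minimal sketch, with a placeholder in place of the actual geodetic processing step:

      from concurrent.futures import ProcessPoolExecutor

      def reprocess(station_day):
          """Placeholder for one station-day GPS solution; in practice this
          would invoke the geodetic processing software on the RINEX file."""
          station, day = station_day
          return station, day, "ok"

      # Tiny stand-in for the real work list (~2,000 stations over two decades).
      work = [(f"CORS{n:04d}", day) for n in range(3) for day in range(2)]

      if __name__ == "__main__":
          # Each worker handles independent station-days, so the job scales
          # out across as many cloud instances/cores as are available.
          with ProcessPoolExecutor() as pool:
              for station, day, status in pool.map(reprocess, work):
                  print(station, day, status)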

  14. Non-harmful insertion of data mimicking computer network attacks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neil, Joshua Charles; Kent, Alexander; Hash, Jr, Curtis Lee

    Non-harmful data mimicking computer network attacks may be inserted in a computer network. Anomalous real network connections may be generated between a plurality of computing systems in the network. Data mimicking an attack may also be generated. The generated data may be transmitted between the plurality of computing systems using the real network connections and measured to determine whether an attack is detected.
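
    At the level of detail given in the abstract, the technique can be sketched as: synthesize benign records shaped like an attack, inject them alongside real traffic, and measure whether the detector fires. The record format and the detector below are invented for illustration.

      import random

      random.seed(1)
      hosts = [f"host{i}" for i in range(6)]

      def mimic_attack_records(n=18):
          """Generate roughly n benign records shaped like a traversal attack:
          one source fanning out to several destinations (illustrative)."""
          src = random.choice(hosts)
          return [{"src": src, "dst": d, "mimic": True}
                  for d in random.sample([h for h in hosts if h != src], 3)
                  for _ in range(n // 3)]

      def toy_detector(records):
          """Flag any source contacting 3+ distinct destinations."""
          fanout = {}
          for r in records:
              fanout.setdefault(r["src"], set()).add(r["dst"])
          return {s for s, dsts in fanout.items() if len(dsts) >= 3}

      inserted = mimic_attack_records()
      # Measure whether the inserted, non-harmful traffic was detected.
      print("detected sources:", toy_detector(inserted))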

  15. A hierarchical network-based algorithm for multi-scale watershed delineation

    NASA Astrophysics Data System (ADS)

    Castronova, Anthony M.; Goodall, Jonathan L.

    2014-11-01

    Watershed delineation is a process for defining a land area that contributes surface water flow to a single outlet point. It is commonly used in water resources analysis to define the domain in which hydrologic process calculations are applied. There has been a growing effort over the past decade to improve surface elevation measurements in the U.S., which has had a significant impact on the accuracy of hydrologic calculations. Traditional watershed processing on these elevation rasters, however, becomes more burdensome as data resolution increases. As a result, processing of these datasets can be troublesome on standard desktop computers. This challenge has resulted in numerous works that aim to provide high performance computing solutions for large data, high resolution data, or both. This work proposes an efficient watershed delineation algorithm for use in desktop computing environments that leverages existing data, the U.S. Geological Survey (USGS) National Hydrography Dataset Plus (NHD+), and open source software tools to construct watershed boundaries. This approach makes use of U.S. national-level hydrography data that has been precomputed using raster processing algorithms coupled with quality control routines. Our approach uses carefully arranged data and mathematical graph theory to traverse river networks and identify catchment boundaries. We demonstrate this new watershed delineation technique, compare its accuracy with traditional algorithms that derive watersheds solely from digital elevation models, and then extend our approach to address subwatershed delineation. Our findings suggest that the open-source hierarchical network-based delineation procedure presented in this work is a promising approach to watershed delineation that can be used to summarize publicly available datasets for hydrologic model input pre-processing. Through our analysis, we explore the benefits of reusing the NHD+ datasets for watershed delineation, and find that our technique offers greater flexibility and extendability than traditional raster algorithms.
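
    The graph-traversal core of the approach (treat precomputed catchments as nodes of a directed river network and collect everything upstream of an outlet) can be sketched with networkx; the tiny network and catchment areas below are invented for illustration.

      import networkx as nx

      # Directed toy river network: edges point downstream, and each
      # flowline node carries its local (precomputed) catchment area.
      G = nx.DiGraph()
      G.add_edges_from([("A", "C"), ("B", "C"), ("C", "E"), ("D", "E"), ("E", "OUT")])
      area = {"A": 1.2, "B": 0.8, "C": 2.1, "D": 1.5, "E": 3.0, "OUT": 0.4}

      def delineate(outlet):
          """Union of the outlet's catchment with everything upstream of it."""
          upstream = nx.ancestors(G, outlet) | {outlet}
          return upstream, sum(area[n] for n in upstream)

      basin, total = delineate("E")
      print(sorted(basin), f"total area = {total:.1f} km^2")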

  16. IBM NJE protocol emulator for VAX/VMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engert, D.E.

    1981-01-01

    Communications software has been written at Argonne National Laboratory to enable a VAX/VMS system to participate as an end-node in a standard IBM network by emulating the Network Job Entry (NJE) protocol. NJE is actually a collection of programs that support job networking for the operating systems used on most large IBM-compatible computers (e.g., VM/370, MVS with JES2 or JES3, SVS, MVT with ASP or HASP). Files received by the VAX can be printed or saved in user-selected disk files. Files sent to the network can be routed to any node in the network for printing, punching, or job submission, as well as to a VM/370 user's virtual reader. Files sent from the VAX are queued and transmitted asynchronously to allow users to perform other work while files are awaiting transmission. No changes are required to the IBM software.
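
    The spooling behavior described above (queue files and transmit them asynchronously so users can keep working) is a classic producer/consumer pattern. A generic sketch with illustrative node and file names; nothing here is the actual NJE implementation.

      import queue
      import threading

      outbound = queue.Queue()

      def transmitter():
          """Background worker: drains the spool queue so users can keep
          working while files await transmission (the pattern the NJE
          emulator's abstract describes; details here are illustrative)."""
          while True:
              dest_node, filename = outbound.get()
              if dest_node is None:          # sentinel to stop the worker
                  break
              print(f"sending {filename} to {dest_node}")
              outbound.task_done()

      worker = threading.Thread(target=transmitter, daemon=True)
      worker.start()

      outbound.put(("VM370A", "JOB123.DAT"))    # queue files and return at once
      outbound.put(("JES2B", "PRINT01.TXT"))
      outbound.put((None, None))
      worker.join()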

  17. The National Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Hanisch, Robert J.

    2001-06-01

    The National Virtual Observatory is a distributed computational facility that will provide access to the ``virtual sky'': the federation of astronomical data archives, object catalogs, and associated information services. The NVO's ``virtual telescope'' is a common framework for requesting, retrieving, and manipulating information from diverse, distributed resources. The NVO will make it possible to seamlessly integrate data from the new all-sky surveys, enabling cross-correlations between multi-Terabyte catalogs and providing transparent access to the underlying image or spectral data. Success requires high performance computational systems, high bandwidth network services, agreed-upon standards for the exchange of metadata, and collaboration among astronomers, astronomical data and information service providers, information technology specialists, funding agencies, and industry. International cooperation at the onset will help to assure that the NVO simultaneously becomes a global facility.
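
    The ``virtual telescope'' style of access was later standardized around simple HTTP query services. A hedged sketch of such a request; the endpoint below is a placeholder, and the RA/DEC/radius parameter pattern follows the Virtual Observatory cone-search convention rather than any specific NVO service.

      import requests

      # Hypothetical cone-search endpoint; services of this kind take an
      # HTTP GET with RA/DEC (degrees) and a search radius, and return an
      # XML (VOTable) list of matching catalog objects.
      SERVICE = "https://example.org/catalog/conesearch"

      params = {"RA": 180.0, "DEC": 2.5, "SR": 0.1}
      resp = requests.get(SERVICE, params=params, timeout=30)
      resp.raise_for_status()
      print(resp.text[:200])   # VOTable XML describing the matched sources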

  18. Design and methodological considerations of an effectiveness trial of a computer-assisted intervention: an example from the NIDA Clinical Trials Network.

    PubMed

    Campbell, Aimee N C; Nunes, Edward V; Miele, Gloria M; Matthews, Abigail; Polsky, Daniel; Ghitza, Udi E; Turrigiano, Eva; Bailey, Genie L; VanVeldhuisen, Paul; Chapdelaine, Rita; Froias, Autumn; Stitzer, Maxine L; Carroll, Kathleen M; Winhusen, Theresa; Clingerman, Sara; Perez, Livangelie; McClure, Erin; Goldman, Bruce; Crowell, A Rebecca

    2012-03-01

    Computer-assisted interventions hold the promise of minimizing two problems that are ubiquitous in substance abuse treatment: the lack of ready access to treatment and the challenges to providing empirically-supported treatments. Reviews of research on computer-assisted treatments for mental health and substance abuse report promising findings, but study quality and methodological limitations remain an issue. In addition, relatively few computer-assisted treatments have been tested among illicit substance users. This manuscript describes the methodological considerations of a multi-site effectiveness trial conducted within the National Institute on Drug Abuse's (NIDA's) National Drug Abuse Treatment Clinical Trials Network (CTN). The study is evaluating a web-based version of the Community Reinforcement Approach, in addition to prize-based contingency management, among 500 participants enrolled in 10 outpatient substance abuse treatment programs. Several potential effectiveness trial designs were considered and the rationale for the choice of design in this study is described. The study uses a randomized controlled design (with independent treatment arm allocation), intention-to-treat primary outcome analysis, biological markers for the primary outcome of abstinence, long-term follow-up assessments, precise measurement of intervention dose, and a cost-effectiveness analysis. Input from community providers during protocol development highlighted potential concerns and helped to address issues of practicality and feasibility. Collaboration between providers and investigators supports the utility of infrastructures that enhance research partnerships to facilitate effectiveness trials and dissemination of promising, technologically innovative treatments. Outcomes from this study will further the empirical knowledge base on the effectiveness and cost-effectiveness of computer-assisted treatment in clinical treatment settings. Copyright © 2011 Elsevier Inc. All rights reserved.

  19. Design and Methodological Considerations of an Effectiveness Trial of a Computer-assisted Intervention: An Example from the NIDA Clinical Trials Network

    PubMed Central

    Campbell, Aimee N. C.; Nunes, Edward V.; Miele, Gloria M.; Matthews, Abigail; Polsky, Daniel; Ghitza, Udi E.; Turrigiano, Eva; Bailey, Genie L.; VanVeldhuisen, Paul; Chapdelaine, Rita; Froias, Autumn; Stitzer, Maxine L.; Carroll, Kathleen M.; Winhusen, Theresa; Clingerman, Sara; Perez, Livangelie; McClure, Erin; Goldman, Bruce; Crowell, A. Rebecca

    2011-01-01

    Computer-assisted interventions hold the promise of minimizing two problems that are ubiquitous in substance abuse treatment: the lack of ready access to treatment and the challenges to providing empirically-supported treatments. Reviews of research on computer-assisted treatments for mental health and substance abuse report promising findings, but study quality and methodological limitations remain an issue. In addition, relatively few computer-assisted treatments have been tested among illicit substance users. This manuscript describes the methodological considerations of a multi-site effectiveness trial conducted within the National Institute on Drug Abuse's (NIDA's) National Drug Abuse Treatment Clinical Trials Network (CTN). The study is evaluating a web-based version of the Community Reinforcement Approach, in addition to prize-based contingency management, among 500 participants enrolled in 10 outpatient substance abuse treatment programs. Several potential effectiveness trial designs were considered and the rationale for the choice of design in this study is described. The study uses a randomized controlled design (with independent treatment arm allocation), intention-to-treat primary outcome analysis, biological markers for the primary outcome of abstinence, long-term follow-up assessments, precise measurement of intervention dose, and a cost-effectiveness analysis. Input from community providers during protocol development highlighted potential concerns and helped to address issues of practicality and feasibility. Collaboration between providers and investigators supports the utility of infrastructures that enhance research partnerships to facilitate effectiveness trials and dissemination of promising, technologically innovative treatments. Outcomes from this study will further the empirical knowledge base on the effectiveness and cost-effectiveness of computer-assisted treatment in clinical treatment settings. PMID:22085803

  20. Digital optical computers at the optoelectronic computing systems center

    NASA Technical Reports Server (NTRS)

    Jordan, Harry F.

    1991-01-01

    The Digital Optical Computing Program within the National Science Foundation Engineering Research Center for Opto-electronic Computing Systems has as its specific goal research on optical computing architectures suitable for use at the highest possible speeds. The program can be targeted toward exploiting the time domain because other programs in the Center are pursuing research on parallel optical systems, exploiting optical interconnection and optical devices and materials. Using a general purpose computing architecture as the focus, we are developing design techniques, tools and architecture for operation at the speed of light limit. Experimental work is being done with the somewhat low speed components currently available but with architectures which will scale up in speed as faster devices are developed. The design algorithms and tools developed for a general purpose, stored program computer are being applied to other systems such as optimally controlled optical communication networks.

  1. Control and Information Systems for the National Ignition Facility

    DOE PAGES

    Brunton, Gordon; Casey, Allan; Christensen, Marvin; ...

    2017-03-23

    Orchestration of every National Ignition Facility (NIF) shot cycle is managed by the Integrated Computer Control System (ICCS), which uses a scalable software architecture running code on more than 1950 front-end processors, embedded controllers, and supervisory servers. The ICCS operates laser and industrial control hardware containing 66 000 control and monitor points to ensure that all of NIF’s laser beams arrive at the target within 30 ps of each other and are aligned to a pointing accuracy of less than 50 μm root-mean-square, while ensuring that a host of diagnostic instruments record data in a few billionths of a second. NIF’s automated control subsystems are built from a common object-oriented software framework that distributes the software across the computer network and achieves interoperation between different software languages and target architectures. A large suite of business and scientific software tools supports experimental planning, experimental setup, facility configuration, and post-shot analysis. Standard business services using open-source software, commercial workflow tools, and database and messaging technologies have been developed. An information technology infrastructure consisting of servers, network devices, and storage provides the foundation for these systems. This work is an overview of the control and information systems used to support a wide variety of experiments during the National Ignition Campaign.

  2. Control and Information Systems for the National Ignition Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunton, Gordon; Casey, Allan; Christensen, Marvin

    Orchestration of every National Ignition Facility (NIF) shot cycle is managed by the Integrated Computer Control System (ICCS), which uses a scalable software architecture running code on more than 1950 front-end processors, embedded controllers, and supervisory servers. The ICCS operates laser and industrial control hardware containing 66 000 control and monitor points to ensure that all of NIF’s laser beams arrive at the target within 30 ps of each other and are aligned to a pointing accuracy of less than 50 μm root-mean-square, while ensuring that a host of diagnostic instruments record data in a few billionths of a second. NIF’s automated control subsystems are built from a common object-oriented software framework that distributes the software across the computer network and achieves interoperation between different software languages and target architectures. A large suite of business and scientific software tools supports experimental planning, experimental setup, facility configuration, and post-shot analysis. Standard business services using open-source software, commercial workflow tools, and database and messaging technologies have been developed. An information technology infrastructure consisting of servers, network devices, and storage provides the foundation for these systems. This work is an overview of the control and information systems used to support a wide variety of experiments during the National Ignition Campaign.

  3. National Storage Laboratory: a collaborative research project

    NASA Astrophysics Data System (ADS)

    Coyne, Robert A.; Hulen, Harry; Watson, Richard W.

    1993-01-01

    The grand challenges of science and industry that are driving computing and communications have created corresponding challenges in information storage and retrieval. An industry-led collaborative project has been organized to investigate technology for storage systems that will be the future repositories of national information assets. Industry participants are IBM Federal Systems Company, Ampex Recording Systems Corporation, General Atomics DISCOS Division, IBM ADSTAR, Maximum Strategy Corporation, Network Systems Corporation, and Zitel Corporation. Industry members of the collaborative project are funding their own participation. Lawrence Livermore National Laboratory through its National Energy Research Supercomputer Center (NERSC) will participate in the project as the operational site and provider of applications. The expected result is the creation of a National Storage Laboratory to serve as a prototype and demonstration facility. It is expected that this prototype will represent a significant advance in the technology for distributed storage systems capable of handling gigabyte-class files at gigabit-per-second data rates. Specifically, the collaboration expects to make significant advances in hardware, software, and systems technology in four areas of need: (1) network-attached high performance storage; (2) multiple, dynamic, distributed storage hierarchies; (3) layered access to storage system services; and (4) storage system management.

  4. Organization of the secure distributed computing based on multi-agent system

    NASA Astrophysics Data System (ADS)

    Khovanskov, Sergey; Rumyantsev, Konstantin; Khovanskova, Vera

    2018-04-01

    Nowadays, the development of methods for distributed computing receives much attention, and one such method is the use of multi-agent systems. The organization of distributed computing based on conventional networked computers can be exposed to security threats performed by computational processes. The authors have developed a unified agent algorithm for a control system governing the operation of computing network nodes, with networked PCs used as the computing nodes. The proposed multi-agent control system for the implementation of distributed computing makes it possible, in a short time, to harness the processing power of the computers of any existing network to solve large tasks by creating a distributed computing system. Agents based on a computer network can configure a distributed computing system, distribute the computational load among the computers operated by agents, and optimize the distributed computing system according to the computing power of the computers on the network. The number of computers connected to the network can be increased by connecting computers to the new computer system, which leads to an increase in overall processing power. Adding a central agent to the multi-agent system increases the security of distributed computing. This organization of the distributed computing system reduces the problem-solving time and increases the fault tolerance (vitality) of computing processes in a changing computing environment (dynamic changes in the number of computers on the network). The developed multi-agent system detects cases of falsification of the results of the distributed system, which could otherwise lead to wrong decisions. In addition, the system checks and corrects wrong results.
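
    The abstract does not state how falsified results are detected and corrected; a common approach consistent with its description is to replicate each task on several nodes and majority-vote the answers. A sketch under that assumption, with an invented faulty node:

      from collections import Counter

      def run_on_node(task, node):
          """Simulate one node computing the task; node 2 is faulty and
          falsifies its result (illustrative)."""
          true_result = task * task
          return true_result + 1 if node == 2 else true_result

      def verified_result(task, nodes=(0, 1, 2)):
          # Replicate the task across nodes and majority-vote the answers,
          # which both detects falsification and corrects the final result.
          results = [run_on_node(task, n) for n in nodes]
          value, votes = Counter(results).most_common(1)[0]
          suspects = [n for n, r in zip(nodes, results) if r != value]
          return value, suspects

      value, suspects = verified_result(12)
      print(f"accepted result = {value}, disagreeing nodes = {suspects}")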

  5. Fbis report. Science and technology: China, October 18, 1995

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1995-10-18

    Partial Contents: Nanomaterials Fabrication, Applications Research Advances Noted; CAST Announces World's First Space-Grown Large-Diameter GaAs Monocrystal; Assay of Antiviral Activity of Antisense Phosphorothioate Oligodeoxynucleotide Against Dengue Virus; Expression and Antigenicity of Chimeric Proteins of Cholera Toxin B Subunit With Hepatitis C Virus; CNCOFIEC Signs Agreement With IBM for New Intelligent Building; Latest Reports on Optical Computing, Memory; BIDC To Introduce S3 Company's Multimedia Accelerator Chipset; Virtual Private PCN Ring Network Based on ATM VP Cross-Connection; Beijing Gets Nation's First Frame Relay Network; Situation of Power Industry Development and International Cooperation; Diagrams of China's Nuclear Waste Containment Vessels; Chinese-Developed Containment Vessel Material Reaches World Standards; Second Fuel Elements for Qinshan Plant Passes Inspection; and Geothermal Deep-Well Electric Pump Technology Developed.

  6. Information system evolution at the French National Network of Seismic Survey (BCSF-RENASS)

    NASA Astrophysics Data System (ADS)

    Engels, F.; Grunberg, M.

    2013-12-01

    The aging information system of the French National Network of Seismic Survey (BCSF-RENASS), located in Strasbourg (EOST), needed to be updated to keep pace with current practices in the computer science world. This meant evolving our system at several levels: development methods, datamining solutions, and system administration. The new system had to provide more agility for incoming projects. The main difficulty was maintaining the old system and the new one in parallel, with a small team, for the time needed to validate the new solutions. The solutions adopted here come from standards used by the seismological community and are inspired by the state of the art of the devops community. The new system is easier to maintain and benefits from a large community for support. This poster introduces the new system and the chosen solutions, such as Puppet, Fabric, MongoDB and FDSN Webservices.
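
    FDSN Webservices, one of the community standards adopted, expose event catalogs through plain HTTP queries. A minimal example against IRIS's public FDSN event endpoint, shown here only as a representative FDSN service (the BCSF-RENASS endpoints themselves are not given in the abstract):

      import requests

      URL = "https://service.iris.edu/fdsnws/event/1/query"
      params = {
          "starttime": "2013-01-01",
          "endtime": "2013-01-02",
          "minmagnitude": 5.0,
          "format": "text",     # plain-text output, supported by IRIS
      }
      resp = requests.get(URL, params=params, timeout=30)
      resp.raise_for_status()
      for line in resp.text.splitlines()[:5]:
          print(line)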

  7. New Partnerships: People, Technology, and Organizations. Proceedings of the International ADCIS Conference (35th, Nashville, Tennessee, February 15-19, 1994).

    ERIC Educational Resources Information Center

    Orey, Michael, Ed.

    The theme of the Association for the Development of Computer-Based Instructional Systems (ADCIS) 1994 conference was "New Partnerships: People, Technology, and Organizations." Included in the 38 papers and abstracts compiled in this document are the following topics: hypermedia; the National Research and Education Network and K-12…

  8. A Federal Vision for Future Computing: A Nanotechnology-Inspired Grand Challenge

    DTIC Science & Technology

    2016-07-29

    Science Foundation (NSF), Department of Defense (DOD), National Institute of Standards and Technology (NIST), Intelligence Community (IC) Introduction...multiple Federal agencies: • Intelligent big data sensors that act autonomously and are programmable via the network for increased flexibility, and... intelligence for scientific discovery enabled by rapid extreme-scale data analysis, capable of understanding and making sense of results and thereby

  9. Networking the Global Maritime Partnership

    DTIC Science & Technology

    2008-06-01

    how do the navies of disparate nations that desire to operate together at sea obtain the requisite, compatible C4ISR (command, control, communications, computers, intelligence, surveillance, and reconnaissance) systems that will enable them to truly...partnership. Coalition Naval Operations Maritime coalitions have existed for two and one-half millennia and navies have communicated at sea for

  10. Improving Data Mobility & Management for International Cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borrill, Julian; Dart, Eli; Gore, Brooklin

    In February 2015 the third workshop in the CrossConnects series, with a focus on Improving Data Mobility & Management for International Cosmology, was held at Lawrence Berkeley National Laboratory. Scientists from fields including astrophysics, cosmology, and astronomy collaborated with experts in computing and networking to outline strategic opportunities for enhancing scientific productivity and effectively managing the ever-increasing scale of scientific data.

  11. Synthesis of Available Research and Databases on the Migrant Education Program. Volume II: the Migrant Student Record Transfer System.

    ERIC Educational Resources Information Center

    Eckels, Elaine; Vorek, Robert

    The Migrant Student Record Transfer System (MSRTS) is a nationwide computer-based communications network originally designed to transfer the health and educational records of migrant workers' children. This report assesses MSRTS data from September 1984 through June 1986 to determine the potential utility of such data for national studies of the…

  12. Interscholastic Correspondence Exchanges in Celestin Freinet's Modern School Movement: Implications for Computer-Mediated Intercultural Learning Networks.

    ERIC Educational Resources Information Center

    Sayers, Dennis

    Although the work of Celestin Freinet has exerted considerable influence on European education, it remains largely unknown to English-speaking educators. The Modern School Movement (MSM), which Freinet founded in 1926, is worldwide in scope, and has affiliated organizations in 13 countries with correspondent groups in more than 20 nations. The MSM…

  13. Organisational Structure and Information Technology (IT): Exploring the Implications of IT for Future Military Structures

    DTIC Science & Technology

    2006-07-01

    4 Abbreviations AI Artificial Intelligence AM Artificial Memory CAD Computer Aided...memory (AM), artificial intelligence (AI), and embedded knowledge systems it is possible to expand the “effective span of competence” of...Technology J Joint J2 Joint Intelligence J3 Joint Operations NATO North Atlantic Treaty Organisation NCW Network Centric Warfare NHS National Health

  14. Mexican Space Weather Service (SCiESMEX)

    NASA Astrophysics Data System (ADS)

    Gonzalez-Esparza, J. A.; De la Luz, V.; Corona-Romero, P.; Mejia-Ambriz, J. C.; Gonzalez, L. X.; Sergeeva, M. A.; Romero-Hernandez, E.; Aguilar-Rodriguez, E.

    2017-01-01

    Legislative modifications of the General Civil Protection Law in Mexico in 2014 included specific references to space hazards and space weather phenomena. The legislation is consistent with United Nations promotion of international engagement and cooperation on space weather awareness, studies, and monitoring. These internal and external conditions motivated the creation of a space weather service in Mexico. The Mexican Space Weather Service (SCiESMEX in Spanish) (www.sciesmex.unam.mx) was initiated in October 2014 and is operated by the Institute of Geophysics at the Universidad Nacional Autonoma de Mexico (UNAM). SCiESMEX became a Regional Warning Center of the International Space Environment Services (ISES) in June 2015. We present the characteristics of the service, some products, and the initial actions for developing a space weather strategy in Mexico. The service operates a computing infrastructure including a web application, data repository, and a high-performance computing server to run numerical models. SCiESMEX uses data of the ground-based instrumental network of the National Space Weather Laboratory (LANCE), covering solar radio burst emissions, solar wind and interplanetary disturbances (by interplanetary scintillation observations), geomagnetic measurements, and analysis of the total electron content (TEC) of the ionosphere (by employing data from local networks of GPS receiver stations).

  15. A Urinalysis Result Reporting System for a Clinical Laboratory

    PubMed Central

    Sullivan, James E.; Plexico, Perry S.; Blank, David W.

    1987-01-01

    A menu-driven Urinalysis Result Reporting System based on multiple IBM-PC workstations connected by a local area network was developed for the Clinical Chemistry Section of the Clinical Pathology Department at the National Institutes of Health's Clinical Center. Two network file servers redundantly save the test results of each urine specimen. When all test results for a specimen are entered into the system, the results are transmitted to the Department's Laboratory Computer System, where they are made available to the ordering physician. The Urinalysis Data Management System has proven easy to learn and use.

  16. Final Report: Sensorpedia Phases 1 and 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorman, Bryan L; Resseguie, David R

    2010-08-01

    Over the past several years, ORNL has been actively involved in research to formalize the engineering principles and best practices behind emerging social media and social networking concepts to solve real-time data sharing problems for national security and defense, public health and safety, environmental and infrastructure awareness, and disaster preparedness and response. Sensorpedia, an ORNL web site, is a practical application of several key social media principles. Dubbed "the Wikipedia for sensors," Sensorpedia is currently in limited beta testing and was selected in 2009 by Federal Computer Week as one of the government's top 10 social networking sites.

  17. Space physics analysis network node directory (The Yellow Pages): Fourth edition

    NASA Technical Reports Server (NTRS)

    Peters, David J.; Sisson, Patricia L.; Green, James L.; Thomas, Valerie L.

    1989-01-01

    The Space Physics Analysis Network (SPAN) is a component of the global DECnet Internet, which has over 17,000 host computers. The growth of SPAN from its implementation in 1981 to its present size of well over 2,500 registered SPAN host computers has created a need for users to acquire timely information about the network through a central source. The SPAN Network Information Center (SPAN-NIC), an online facility managed by the National Space Science Data Center (NSSDC), was developed to meet this need for SPAN-wide information. The remote node descriptive information in this document is not currently contained in the SPAN-NIC database, but will be incorporated in the near future. Access to this information is also available to non-DECnet users over a variety of networks such as Telenet, the NASA Packet Switched System (NPSS), and the TCP/IP Internet. This publication serves as the Yellow Pages for SPAN node information. The document also provides key information concerning other computer networks connected to SPAN, nodes associated with each SPAN routing center, science discipline nodes, contacts for primary SPAN nodes, and SPAN reference information. A section on DECnet Internetworking discusses SPAN connections with other wide-area DECnet networks (many with thousands of nodes each). Another section lists node names and their disciplines, countries, and institutions in the SPAN Network Information Center Online Data Base System. All remote sites connected to US-SPAN and European-SPAN (E-SPAN) are indexed. Also provided is information on the SPAN tail circuits, i.e., those remote nodes connected directly to a SPAN routing center, which is the local point of contact for resolving SPAN-related problems. Reference material is included for those who wish to know more about SPAN. Because of the rapid growth of SPAN, the SPAN Yellow Pages is reissued periodically.

  18. Line-plane broadcasting in a data communications network of a parallel computer

    DOEpatents

    Archer, Charles J.; Berg, Jeremy E.; Blocksome, Michael A.; Smith, Brian E.

    2010-06-08

    Methods, apparatus, and products are disclosed for line-plane broadcasting in a data communications network of a parallel computer, the parallel computer comprising a plurality of compute nodes connected together through the network, the network optimized for point to point data communications and characterized by at least a first dimension, a second dimension, and a third dimension, that include: initiating, by a broadcasting compute node, a broadcast operation, including sending a message to all of the compute nodes along an axis of the first dimension for the network; sending, by each compute node along the axis of the first dimension, the message to all of the compute nodes along an axis of the second dimension for the network; and sending, by each compute node along the axis of the second dimension, the message to all of the compute nodes along an axis of the third dimension for the network.
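
    The claimed pattern is concrete enough to simulate: the root covers its line in the first dimension, each node on that line covers its line in the second dimension, and each node in the resulting plane covers its line in the third. A small illustrative simulation (the mesh size and coordinates are arbitrary):

      # Illustrative simulation of the line-plane broadcast pattern on a
      # small 3D mesh: root covers its X line, each X-line node covers its
      # Y line, and each node of that plane covers its Z line.
      DIMS = (4, 4, 4)

      def line_plane_broadcast(root):
          received = {root}
          x_line = [(i, root[1], root[2]) for i in range(DIMS[0])]
          received.update(x_line)                      # step 1: first dimension
          plane = [(n[0], j, n[2]) for n in x_line for j in range(DIMS[1])]
          received.update(plane)                       # step 2: second dimension
          received.update((n[0], n[1], k) for n in plane for k in range(DIMS[2]))
          return received                              # step 3: third dimension

      covered = line_plane_broadcast((0, 0, 0))
      assert len(covered) == DIMS[0] * DIMS[1] * DIMS[2]   # every node reached
      print(f"{len(covered)} nodes reached")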

  19. Line-plane broadcasting in a data communications network of a parallel computer

    DOEpatents

    Archer, Charles J.; Berg, Jeremy E.; Blocksome, Michael A.; Smith, Brian E.

    2010-11-23

    Methods, apparatus, and products are disclosed for line-plane broadcasting in a data communications network of a parallel computer, the parallel computer comprising a plurality of compute nodes connected together through the network, the network optimized for point to point data communications and characterized by at least a first dimension, a second dimension, and a third dimension, that include: initiating, by a broadcasting compute node, a broadcast operation, including sending a message to all of the compute nodes along an axis of the first dimension for the network; sending, by each compute node along the axis of the first dimension, the message to all of the compute nodes along an axis of the second dimension for the network; and sending, by each compute node along the axis of the second dimension, the message to all of the compute nodes along an axis of the third dimension for the network.

  20. Efficient Use of Distributed Systems for Scientific Applications

    NASA Technical Reports Server (NTRS)

    Taylor, Valerie; Chen, Jian; Canfield, Thomas; Richard, Jacques

    2000-01-01

    Distributed computing has been regarded as the future of high performance computing. Nationwide high speed networks such as vBNS are becoming widely available to interconnect high-speed computers, virtual environments, scientific instruments and large data sets. One of the major issues to be addressed with distributed systems is the development of computational tools that facilitate the efficient execution of parallel applications on such systems. These tools must exploit the heterogeneous resources (networks and compute nodes) in distributed systems. This paper presents a tool, called PART, which addresses this issue for mesh partitioning. PART takes advantage of the following heterogeneous system features: (1) processor speed; (2) number of processors; (3) local network performance; and (4) wide area network performance. Further, the different finite element applications under consideration may have different computational complexities, different communication patterns, and different element types, which also must be taken into consideration when partitioning. PART uses parallel simulated annealing to partition the domain, taking into consideration network and processor heterogeneity. The results of using PART for an explicit finite element application executing on two IBM SPs (located at Argonne National Laboratory and the San Diego Supercomputer Center) indicate an increase in efficiency by up to 36% as compared to METIS, a widely used mesh partitioning tool. The input to METIS was modified to take into consideration heterogeneous processor performance; METIS does not take into consideration heterogeneous networks. The execution times for these applications were reduced by up to 30% as compared to METIS. These results are given in Figure 1 for four irregular meshes, ranging from 11,451 elements for the Barth4 mesh to 30,269 elements for the Barth5 mesh. Future work with PART entails using the tool with an integrated application requiring distributed systems. In particular, this application entails an integration of finite element and fluid dynamic simulations to address the cooling of turbine blades in a gas turbine engine design. It is not uncommon to encounter high-temperature, film-cooled turbine airfoils with millions of degrees of freedom; this results from the complexity of the various components of the airfoils, which require fine-grained meshing for accuracy. Additional information is contained in the original.
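
    A toy version of heterogeneity-aware partitioning by simulated annealing: elements assigned to the slower processor cost more, so the annealer trades load balance against cut edges. The graph, speeds, and cost weights below are illustrative assumptions, not PART's actual objective function.

      import math
      import random

      random.seed(3)

      # Toy mesh as a graph: 8 elements in a ring; two processors, one twice
      # as fast as the other (the heterogeneity PART accounts for).
      n, edges = 8, [(i, (i + 1) % 8) for i in range(8)]
      speed = [2.0, 1.0]

      def cost(part):
          load = [0.0, 0.0]
          for v in range(n):
              load[part[v]] += 1.0 / speed[part[v]]   # slower CPU: higher cost/element
          cut = sum(1 for u, v in edges if part[u] != part[v])
          return max(load) + 0.5 * cut                # balance + communication terms

      part = [random.randint(0, 1) for _ in range(n)]
      T = 2.0
      while T > 1e-3:
          v = random.randrange(n)
          cand = part[:]
          cand[v] ^= 1                                # move one element
          delta = cost(cand) - cost(part)
          if delta < 0 or random.random() < math.exp(-delta / T):
              part = cand                             # anneal: accept some uphill moves
          T *= 0.995
      print(part, cost(part))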

  1. Toward a Dynamically Reconfigurable Computing and Communication System for Small Spacecraft

    NASA Technical Reports Server (NTRS)

    Kifle, Muli; Andro, Monty; Tran, Quang K.; Fujikawa, Gene; Chu, Pong P.

    2003-01-01

    Future science missions will require the use of multiple spacecraft with multiple sensor nodes autonomously responding and adapting to a dynamically changing space environment. The acquisition of random scientific events will require rapidly changing network topologies, distributed processing power, and a dynamic resource management strategy. Optimum utilization and configuration of spacecraft communications and navigation resources will be critical in meeting the demand of these stringent mission requirements. There are two important trends to follow with respect to NASA's (National Aeronautics and Space Administration) future scientific missions: the use of multiple satellite systems and the development of an integrated space communications network. Reconfigurable computing and communication systems may enable versatile adaptation of a spacecraft system's resources by dynamic allocation of the processor hardware to perform new operations or to maintain functionality due to malfunctions or hardware faults. Advancements in FPGA (Field Programmable Gate Array) technology make it possible to incorporate major communication and network functionalities in FPGA chips and provide the basis for a dynamically reconfigurable communication system. Advantages of higher computation speeds and accuracy are envisioned with tremendous hardware flexibility to ensure maximum survivability of future science mission spacecraft. This paper discusses the requirements, enabling technologies, and challenges associated with dynamically reconfigurable space communications systems.

  2. Development of a Web Based Simulating System for Earthquake Modeling on the Grid

    NASA Astrophysics Data System (ADS)

    Seber, D.; Youn, C.; Kaiser, T.

    2007-12-01

    Existing cyberinfrastructure-based information, data and computational networks now allow the development of state-of-the-art, user-friendly simulation environments that democratize access to high-end computational environments and provide new research opportunities for many research and educational communities. Within the Geosciences cyberinfrastructure network, GEON, we have developed the SYNSEIS (SYNthetic SEISmogram) toolkit to enable efficient computations of 2D and 3D seismic waveforms for a variety of research purposes, especially for helping to analyze EarthScope's USArray seismic data in a speedy and efficient environment. The underlying simulation software in SYNSEIS is a finite difference code, E3D, developed by LLNL (S. Larsen). The code is embedded within the SYNSEIS portlet environment and is used by our toolkit to simulate seismic waveforms of earthquakes at regional distances (<1000 km). Architecturally, SYNSEIS uses both Web Service and Grid computing resources in a portal-based work environment and has a built-in access mechanism to connect to national supercomputer centers as well as to a dedicated, small-scale compute cluster for its runs. Even though Grid computing is well established in many computing communities, its use among domain scientists is still not trivial because of the multiple levels of complexity encountered. We grid-enabled E3D using our own XML input dialect, whose inputs include geological models that are accessible through standard Web services within the GEON network. The XML inputs for this application contain structural geometries, source parameters, seismic velocity, density, attenuation values, the number of time steps to compute, and the number of stations. By enabling portal-based access to such a computational environment, coupled with a dynamic user interface, we enable a large user community to take advantage of such high-end calculations in their research and educational activities. Our system can be used to promote an efficient and effective modeling environment to help scientists as well as educators in their daily activities and speed up the scientific discovery process.

  3. Remote consultation and diagnosis in medical imaging using a global PACS backbone network

    NASA Astrophysics Data System (ADS)

    Martinez, Ralph; Sutaria, Bijal N.; Kim, Jinman; Nam, Jiseung

    1993-10-01

    A Global PACS is a national network which interconnects several PACS networks at medical and hospital complexes using a national backbone network. A Global PACS environment enables new and beneficial operations between radiologists and physicians when they are located in different geographical locations. One operation allows the radiologist to view the same image folder at both the Local and Remote sites so that a diagnosis can be performed. The paper describes the user interface, database management, and network communication software which has been developed in the Computer Engineering Research Laboratory and Radiology Research Laboratory. Specifically, a design for a file management system in a distributed environment is presented. In the remote consultation and diagnosis operation, a set of images is requested from the database archive system and sent to the Local and Remote workstation sites on the Global PACS network. Viewing the same images, the radiologists use pointing overlay commands, or frames, to point out features on the images. Each workstation transfers these frames to the other workstation, so that an interactive session for diagnosis takes place. In this phase, we use fixed frames and variable-size frames to outline an object. The data packets for these frames traverse the national backbone in real time. We accomplish this feature by using TCP/IP protocol sockets for communications. The remote consultation and diagnosis operation has been tested in real time between the University Medical Center and the Bowman Gray School of Medicine at Wake Forest University, over the Internet. In this paper, we show the feasibility of the operation in a Global PACS environment. Future improvements to the system will include real-time voice and interactive compressed video scenarios.
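
    The frame exchange lends itself to a small sketch: a pointing-overlay frame is serialized and sent over a TCP-style socket to the consulting workstation. The field names and JSON encoding are illustrative assumptions; the paper specifies only that the frames travel over TCP/IP sockets.

      import json
      import socket

      # Toy version of the pointing-overlay exchange: a frame marking an
      # image feature is serialized and sent to the consulting workstation.
      frame = {"image_id": "CT-042", "x": 128, "y": 96, "w": 40, "h": 40,
               "note": "lesion boundary"}

      local, remote = socket.socketpair()            # stands in for the WAN link
      local.sendall(json.dumps(frame).encode() + b"\n")

      received = json.loads(remote.makefile().readline())
      print("remote workstation draws overlay at",
            (received["x"], received["y"]), "on", received["image_id"])
      local.close(); remote.close()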

  4. The CzeCOS Network

    NASA Astrophysics Data System (ADS)

    Havránková, Kateřina; Taufarová, Klára; Šigut, Ladislav; McGloin, Ryan; Acosta, Manuel; Dušek, Jiří; Krupková, Lenka; Macálková-Mžourková, Lenka; Pavelka, Marian; Dařenová, Eva; Yadav, Shilpi; Nguyen, Vinh; Guerra, Carlos; Janous, Dalibor; Marek, Michal V.

    2017-04-01

    The Global Change Research Institute of the Czech Academy of Sciences (CzechGlobe) has established a well-equipped network of ecosystem stations, with modern instrumentation for eco-physiological, plant physiological and micrometeorological studies, and estimation of GHG emissions. The network of stations (CzeCOS) covers the main terrestrial ecosystems of the Czech Republic (young and old coniferous forest, deciduous forest, mixed floodplain forest, grassland, wetland and cropland). The ecosystem stations are equipped with eddy covariance systems, soil and stem chamber systems for CO2 efflux and instruments for making micrometeorological measurements. The network enables detailed research to be conducted on topics such as: the carbon balance of different ecosystems, energy balance closure, the impact of current climate conditions on production, and ecosystem disturbances during extreme weather conditions (drought, floods, winter storms, etc.) at regional, national and international scales. As part of global networks (Fluxnet, ANAEe, ICOS), CzeCOS participates in evaluating and predicting environmental change and helps in the proposal of mitigation measures. Another important issue studied at some of the CzeCOS sites is the use of the eddy covariance method in sloping terrain, in order to improve eddy covariance data processing for sites in this kind of terrain. Here we show specific results from the sites and outline the importance of the regional/national network for improving our knowledge about the exchange of matter and energy fluxes in different ecosystems. This study was supported by the Ministry of Education, Youth and Sports of CR within the National Sustainability Program I (NPU I), grant numbers LO1415 and LD 15040. Computational resources were provided by the CESNET LM2015042 and the CERIT Scientific Cloud LM2015085, provided under the programme "Projects of Large Research, Development, and Innovations Infrastructures".

  5. MPI implementation of PHOENICS: A general purpose computational fluid dynamics code

    NASA Astrophysics Data System (ADS)

    Simunovic, S.; Zacharia, T.; Baltas, N.; Spalding, D. B.

    1995-03-01

    PHOENICS is a suite of computational analysis programs that are used for simulation of fluid flow, heat transfer, and dynamical reaction processes. The parallel version of the solver EARTH for the Computational Fluid Dynamics (CFD) program PHOENICS has been implemented using Message Passing Interface (MPI) standard. Implementation of MPI version of PHOENICS makes this computational tool portable to a wide range of parallel machines and enables the use of high performance computing for large scale computational simulations. MPI libraries are available on several parallel architectures making the program usable across different architectures as well as on heterogeneous computer networks. The Intel Paragon NX and MPI versions of the program have been developed and tested on massively parallel supercomputers Intel Paragon XP/S 5, XP/S 35, and Kendall Square Research, and on the multiprocessor SGI Onyx computer at Oak Ridge National Laboratory. The preliminary testing results of the developed program have shown scalable performance for reasonably sized computational domains.
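
    The abstract does not show PHOENICS internals, but domain-decomposed solvers of this kind rest on a neighbor halo exchange between ranks. A generic sketch of that pattern with mpi4py, illustrative of the technique rather than PHOENICS code:

      # Generic halo-exchange pattern used by domain-decomposed CFD solvers.
      # Run with, e.g.: mpiexec -n 4 python halo.py
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      # Each rank owns a 1-D slab with one ghost cell on each side.
      local = np.full(10, float(rank))
      left = rank - 1 if rank > 0 else MPI.PROC_NULL
      right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

      # Exchange boundary values with neighbors before each solver sweep.
      left_ghost = comm.sendrecv(local[1], dest=left, source=left)
      right_ghost = comm.sendrecv(local[-2], dest=right, source=right)
      print(rank, left_ghost, right_ghost)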

  6. MPI implementation of PHOENICS: A general purpose computational fluid dynamics code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simunovic, S.; Zacharia, T.; Baltas, N.

    1995-04-01

    PHOENICS is a suite of computational analysis programs that are used for simulation of fluid flow, heat transfer, and dynamical reaction processes. The parallel version of the solver EARTH for the Computational Fluid Dynamics (CFD) program PHOENICS has been implemented using Message Passing Interface (MPI) standard. Implementation of MPI version of PHOENICS makes this computational tool portable to a wide range of parallel machines and enables the use of high performance computing for large scale computational simulations. MPI libraries are available on several parallel architectures making the program usable across different architectures as well as on heterogeneous computer networks. The Intel Paragon NX and MPI versions of the program have been developed and tested on massively parallel supercomputers Intel Paragon XP/S 5, XP/S 35, and Kendall Square Research, and on the multiprocessor SGI Onyx computer at Oak Ridge National Laboratory. The preliminary testing results of the developed program have shown scalable performance for reasonably sized computational domains.

  7. The Practical Obstacles of Data Transfer: Why researchers still love scp

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nam, Hai Ah; Hill, Jason J; Parete-Koon, Suzanne T

    The importance of computing facilities is heralded every six months with the announcement of the new Top500 list, showcasing the world's fastest supercomputers. Unfortunately, with great computing capability does not come great long-term data storage capacity, which often means users must move their data to their local site archive, to remote sites where they may be doing future computation or analysis, or back to their home institution, else face the dreaded data purge that most HPC centers employ to keep utilization of large parallel filesystems low to manage performance and capacity. At HPC centers, data transfer is crucial to the scientific workflow and will increase in importance as computing systems grow in size. The Energy Sciences Network (ESnet) recently launched its fifth generation network, a 100 Gbps high-performance, unclassified national network connecting more than 40 DOE research sites to support scientific research and collaboration. Despite the tenfold increase in bandwidth to DOE research sites amenable to multiple data transfer streams and high throughput, in practice, researchers often under-utilize the network and resort to painfully slow single-stream transfer methods such as scp to avoid the complexity of using multiple-stream tools such as GridFTP and bbcp, and contend with frustration from the lack of consistency of available tools between sites. In this study we survey and assess the data transfer methods provided at several DOE supported computing facilities, including both leadership-computing facilities, connected through ESnet. We present observed transfer rates, suggested optimizations, and discuss the obstacles the tools must overcome to receive wide-spread adoption over scp.

  8. Social Network Analysis of Elders' Health Literacy and their Use of Online Health Information

    PubMed Central

    Jang, Haeran

    2014-01-01

    Objectives: Utilizing social network analysis, this study aimed to analyze the main keywords in the literature regarding the health literacy of, and the use of online health information by, aged persons over 65. Methods: Medical Subject Heading keywords were extracted from articles on the PubMed database of the National Library of Medicine. For health literacy, 110 articles out of 361 were initially extracted. Seventy-one keywords out of 1,021 were finally selected after removing repeated keywords and applying pruning. Regarding the use of online health information, 19 articles out of 26 were selected. One hundred forty-four keywords were initially extracted. After removing the repeated keywords, 74 keywords were finally selected. Results: Health literacy was found to be strongly connected with 'Health knowledge, attitudes, practices' and 'Patient education as topic.' 'Computer literacy' had strong connections with 'Internet' and 'Attitude towards computers.' 'Computer literacy' was connected to 'Health literacy,' and was studied according to the parameters 'Attitude towards health' and 'Patient education as topic.' The use of online health information was strongly connected with 'Health knowledge, attitudes, practices,' 'Consumer health information,' 'Patient education as topic,' etc. In the network, 'Computer literacy' was connected with 'Health education,' 'Patient satisfaction,' 'Self-efficacy,' 'Attitude to computer,' etc. Conclusions: Research on older citizens' health literacy and their use of online health information was conducted together with study of computer literacy, patient education, attitude towards health, health education, patient satisfaction, etc. In particular, self-efficacy was noted as an important keyword. Further research should be conducted to identify the effective outcomes of self-efficacy in the area of interest. PMID:25152835
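
    The study's core method can be sketched in a few lines: build a keyword co-occurrence graph and rank keywords by centrality. The networkx example below uses invented toy keyword sets for illustration, not the paper's MeSH data.

```python
# Keyword co-occurrence network sketch with networkx; the article keyword
# sets are invented stand-ins, not data from the study.
from itertools import combinations
import networkx as nx

articles = [
    {"Health literacy", "Patient education as topic", "Internet"},
    {"Health literacy", "Computer literacy", "Attitude to computers"},
    {"Computer literacy", "Internet", "Self-efficacy"},
]

G = nx.Graph()
for keywords in articles:
    for a, b in combinations(sorted(keywords), 2):
        # Edge weight counts how many articles share the keyword pair.
        w = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

# Degree centrality highlights the most strongly connected keywords.
for kw, c in sorted(nx.degree_centrality(G).items(), key=lambda x: -x[1]):
    print(f"{kw}: {c:.2f}")
```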

  9. Collaborative Visualization Project: shared-technology learning environments for science learning

    NASA Astrophysics Data System (ADS)

    Pea, Roy D.; Gomez, Louis M.

    1993-01-01

    Project-enhanced science learning (PESL) provides students with opportunities for `cognitive apprenticeships' in authentic scientific inquiry using computers for data-collection and analysis. Student teams work on projects with teacher guidance to develop and apply their understanding of science concepts and skills. We are applying advanced computing and communications technologies to augment and transform PESL at-a-distance (beyond the boundaries of the individual school), which is limited today to asynchronous, text-only networking and unsuitable for collaborative science learning involving shared access to multimedia resources such as data, graphs, tables, pictures, and audio-video communication. Our work creates user technology (a Collaborative Science Workbench providing PESL design support and shared synchronous document views, program, and data access; a Science Learning Resource Directory for easy access to resources including two-way video links to collaborators, mentors, museum exhibits, and media-rich resources such as scientific visualization graphics) and refines enabling technologies (audiovisual and shared-data telephony, networking) for this PESL niche. We characterize participation scenarios for using these resources and we discuss national networked access to science education expertise.

  10. Optimization of analytical laboratory work using computer networking and databasing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Upp, D.L.; Metcalf, R.A.

    1996-06-01

    The Health Physics Analysis Laboratory (HPAL) performs around 600,000 analyses for radioactive nuclides each year at Los Alamos National Laboratory (LANL). Analysis matrices vary from nasal swipes, air filters, work area swipes, and liquids to the bottoms of shoes and cat litter. HPAL uses 8 liquid scintillation counters, 8 gas proportional counters, and 9 high-purity germanium detectors in 5 laboratories to perform these analyses. HPAL has developed a computer network between the labs and software to produce analysis results. The software and hardware package includes barcode sample tracking, log-in, chain of custody, analysis calculations, analysis result printing, and utility programs. All data are written to a database, mirrored on a central server, and eventually written to CD-ROM to provide online historical results. This system has greatly reduced the work required to provide analysis results as well as improved the quality of the work performed.
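
    A minimal sketch of the sample-tracking and database side of such a system, using sqlite3; the schema, column names, and status values are assumptions for illustration and not HPAL's actual design.

```python
# Barcode sample-tracking sketch with sqlite3. Schema and values are
# invented assumptions, not the HPAL production system.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE samples (
    barcode   TEXT PRIMARY KEY,
    matrix    TEXT,     -- e.g. 'air filter', 'work area swipe'
    status    TEXT,     -- 'logged-in', 'counting', 'reported'
    result_bq REAL)""")

con.execute("INSERT INTO samples VALUES ('LA-000123', 'air filter', 'logged-in', NULL)")
# Later, after counting, the record is updated in place (chain of custody
# would normally be a separate audit table).
con.execute("UPDATE samples SET status='reported', result_bq=0.42 "
            "WHERE barcode='LA-000123'")

for row in con.execute("SELECT * FROM samples"):
    print(row)
```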

  11. The Sunrise project: An R&D project for a national information infrastructure prototype

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Juhnyoung

    1995-02-01

    Sunrise is a Los Alamos National Laboratory (LANL) project started in October 1993. It is intended to be a prototype National Information Infrastructure (NII) development project. A main focus of Sunrise is to tie together enabling technologies (networking, object-oriented distributed computing, graphical interfaces, security, multimedia technologies, and data mining technologies) with several specific applications. A diverse set of application areas was chosen to ensure that the solutions developed in the project are as generic as possible. Some of the application areas are materials modeling, medical records and image analysis, transportation simulations, and education. This paper provides a description of Sunrise and a view of the architecture and objectives of this evolving project. The primary objectives of Sunrise are three-fold: (1) to develop common information-enabling tools for advanced scientific research and its applications to industry; (2) to enhance the capabilities of important research programs at the Laboratory; and (3) to define a new way of collaboration between computer science and industrially relevant research.

  12. Computing, Information and Communications Technology (CICT) Website

    NASA Technical Reports Server (NTRS)

    Hardman, John; Tu, Eugene (Technical Monitor)

    2002-01-01

    The Computing, Information and Communications Technology Program (CICT) was established in 2001 to ensure NASA's continuing leadership in emerging technologies. It is a coordinated, Agency-wide effort to develop and deploy key enabling technologies for a broad range of mission-critical tasks. The NASA CICT program is designed to address Agency-specific computing, information, and communications technology requirements beyond the projected capabilities of commercially available solutions. The areas of technical focus have been chosen for their impact on NASA's missions, their national importance, and the technical challenge they provide to the Program. In order to meet its objectives, the CICT Program is organized into the following four technology-focused projects: 1) Computing, Networking and Information Systems (CNIS); 2) Intelligent Systems (IS); 3) Space Communications (SC); 4) Information Technology Strategic Research (ITSR).

  13. Conversion and improvement of the Rutherford Laboratory's magnetostatic computer code GFUN3D to the NMFECC CDC 7600

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tucker, T.C.

    1980-06-01

    The implementation of a version of the Rutherford Laboratory's magnetostatic computer code GFUN3D on the CDC 7600 at the National Magnetic Fusion Energy Computer Center is reported. A new iteration technique that greatly increases the probability of convergence and reduces computation time by about 30% for calculations with nonlinear, ferromagnetic materials is included. The use of GFUN3D on the NMFE network is discussed, and suggestions for future work are presented. Appendix A consists of revisions to the GFUN3D User Guide (published by Rutherford Laboratory) that are necessary to use this version. Appendix B contains input and output for some sample calculations. Appendix C is a detailed discussion of the old and new iteration techniques.

  14. Fault tolerant hypercube computer system architecture

    NASA Technical Reports Server (NTRS)

    Madan, Herb S. (Inventor); Chow, Edward (Inventor)

    1989-01-01

    A fault-tolerant multiprocessor computer system of the hypercube type, comprising a hierarchy of computers of like kind which can be functionally substituted for one another as necessary, is disclosed. Communication between the working nodes is via one communications network, while communication between the working nodes and the watch dog nodes and load balancing nodes higher in the structure is via another communications network separate from the first. A typical branch of the hierarchy reporting to a master node or host computer comprises a plurality of first computing nodes and a first network of message conducting paths interconnecting the first computing nodes as a hypercube; the first network provides a path for message transfer between the first computing nodes. The branch further comprises a first watch dog node and a second network of message conducting paths connecting the first computing nodes to the first watch dog node independent from the first network; the second network provides an independent path for test message and reconfiguration-affecting transfers between the first computing nodes and the first watch dog node. There are, additionally, a plurality of second computing nodes and a third network of message conducting paths interconnecting the second computing nodes as a hypercube; the third network provides a path for message transfer between the second computing nodes. A fourth network of message conducting paths connects the second computing nodes to the first watch dog node independent from the third network; the fourth network provides an independent path for test message and reconfiguration-affecting transfers between the second computing nodes and the first watch dog node. A first multiplexer is disposed between the first watch dog node and the second and fourth networks, allowing the first watch dog node to selectively communicate with individual ones of the computing nodes through the second and fourth networks; a second watch dog node is operably connected to the first multiplexer, whereby the second watch dog node can selectively communicate with individual ones of the computing nodes through the second and fourth networks. The branch is completed by a first load balancing node and a second multiplexer connected between the first load balancing node and the first and second watch dog nodes, allowing the first load balancing node to selectively communicate with the first and second watch dog nodes.
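
    The hypercube interconnect underlying this architecture is easy to state in code: in a d-dimensional hypercube, each of the 2^d nodes links to the d nodes whose binary labels differ in exactly one bit. The sketch below shows the topology only, not the patent's watch dog or load balancing machinery.

```python
# Hypercube topology in a few lines: node labels are integers, and each
# node's d neighbors are reached by flipping one bit of its label.
def hypercube_neighbors(node: int, dim: int) -> list[int]:
    return [node ^ (1 << b) for b in range(dim)]

# 3-cube example: node 5 (binary 101) connects to 4 (100), 7 (111), 1 (001).
print(hypercube_neighbors(5, 3))   # [4, 7, 1]
```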

  15. Sense-making for intelligence analysis on social media data

    NASA Astrophysics Data System (ADS)

    Pritzkau, Albert

    2016-05-01

    Social networks, in particular online social networks as a subset, enable the analysis of social relationships which are represented by interaction, collaboration, or other sorts of influence between people. Any set of people and their internal social relationships can be modelled as a general social graph. These relationships are formed by exchanging emails, making phone calls, or carrying out a range of other activities that build up the network. This paper presents an overview of current approaches to utilizing social media as a ubiquitous sensor network in the context of national and global security. Exploitation of social media is usually an interdisciplinary endeavour, in which the relevant technologies and methods are identified and linked in order ultimately to demonstrate selected applications. Effective and efficient intelligence is usually accomplished in a combined human and computer effort. Indeed, the intelligence process heavily depends on combining a human's flexibility, creativity, and cognitive ability with the bandwidth and processing power of today's computers. To improve the usability and accuracy of intelligence analysis we will have to rely on data-processing tools at the level of natural language. Especially the collection and transformation of unstructured data into actionable, structured data requires scalable computational algorithms ranging from Artificial Intelligence, via Machine Learning, to Natural Language Processing (NLP). To support intelligence analysis on social media data, social media analytics is concerned with developing and evaluating computational tools and frameworks to collect, monitor, analyze, summarize, and visualize social media data. Analytics methods are employed to extract significant patterns that might not be obvious. As a result, different data representations rendering distinct aspects of content and interactions serve as a means to adapt the focus of the intelligence analysis to specific information requests.

  16. Data from selected U.S. Geological Survey national stream water-quality monitoring networks (WQN) on CD-ROM

    USGS Publications Warehouse

    Alexander, R.B.; Ludtke, A.S.; Fitzgerald, K.K.; Schertz, T.L.

    1996-01-01

    Data from two U.S. Geological Survey (USGS) national stream water-quality monitoring networks, the National Stream Quality Accounting Network (NASQAN) and the Hydrologic Benchmark Network (HBN), are now available in a two CD-ROM set. These data on CD-ROM are collectively referred to as WQN, water-quality networks. Data from these networks have been used at the national, regional, and local levels to estimate the rates of chemical flux from watersheds, quantify changes in stream water quality for periods during the past 30 years, and investigate relations between water quality and streamflow as well as the relations of water quality to pollution sources and various physical characteristics of watersheds. The networks include 679 monitoring stations in watersheds that represent diverse climatic, physiographic, and cultural characteristics. The HBN includes 63 stations in relatively small, minimally disturbed basins ranging in size from 2 to 2,000 square miles with a median drainage basin size of 57 square miles. NASQAN includes 618 stations in larger, more culturally influenced drainage basins ranging in size from one square mile to 1.2 million square miles with a median drainage basin size of about 4,000 square miles. The CD-ROMs contain data for 63 physical, chemical, and biological properties of water (122 total constituents, including analyses of dissolved and suspended-sediment samples) collected during more than 60,000 site visits. These data approximately span the periods 1962-95 for HBN and 1973-95 for NASQAN. The data reflect sampling over a wide range of streamflow conditions and the use of relatively consistent sampling and analytical methods. The CD-ROMs provide ancillary information and data-retrieval tools to allow the national network data to be properly and efficiently used. Ancillary information includes the following: descriptions of the network objectives and history, characteristics of the network stations and water-quality data, historical records of important changes in network sample collection and laboratory analytical methods, water reference sample data for estimating laboratory measurement bias and variability for 34 dissolved constituents for the period 1985-95, discussions of statistical methods for using water reference sample data to evaluate the accuracy of network stream water-quality data, and a bibliography of scientific investigations using national network data and other publications relevant to the networks. The data structure of the CD-ROMs is designed to allow users to efficiently load the water-quality data into user-supplied software packages, including statistical analysis, modeling, or geographic information systems. On one disc, all data are stored in ASCII form accessible from any computer system with a CD-ROM drive. The data also can be accessed using DOS-based retrieval software supplied on a second disc. This software supports logical queries of the water-quality data based on constituent concentrations, sample-collection date, river name, station name, county, state, hydrologic unit number, and 1990 population and 1987 land-cover characteristics for station watersheds. User-selected data may be output in a variety of formats, including dBASE, flat ASCII, delimited ASCII, or fixed-field, for subsequent use in other software packages.

  17. Data Structures for Extreme Scale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kahan, Simon

    As computing problems of national importance grow, the government meets the increased demand by funding the development of ever larger systems. The overarching goal of the work supported in part by this grant is to increase the efficiency of programming and performing computations on these large computing systems. In past work, we have demonstrated that some of these computations, once thought to require expensive hardware designs and/or complex, special-purpose programming, may be executed efficiently on low-cost commodity cluster computing systems using a general-purpose "latency-tolerant" programming framework. One important application of the ideas underlying this framework is graph database technology supporting social network pattern matching, used by US intelligence agencies to more quickly identify potential terrorist threats. This database application has been spun out by the Pacific Northwest National Laboratory, a Department of Energy laboratory, into a commercial start-up, Trovares Inc. We explore an alternative application of the same underlying ideas to a well-studied challenge arising in engineering: solving unstructured sparse linear equations. Solving these equations is key to predicting the behavior of large electronic circuits before they are fabricated. Predicting that behavior ahead of fabrication means that designs can be optimized and errors corrected ahead of the expense of manufacture.
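
    As a concrete toy instance of the target problem, the snippet below assembles and solves a small unstructured sparse system with scipy.sparse. The matrix is a stand-in rather than circuit data, and the direct solver shown is only one of many applicable methods.

```python
# Assemble a sparse matrix in triplet (COO) form and solve Ax = b with a
# direct sparse solver. The 4x4 system is a toy stand-in for the large
# unstructured systems circuit simulation produces.
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import spsolve

rows = [0, 0, 1, 1, 2, 2, 3]
cols = [0, 1, 0, 1, 2, 3, 3]
vals = [4.0, -1.0, -1.0, 4.0, 2.0, 1.0, 3.0]
A = csc_matrix((vals, (rows, cols)), shape=(4, 4))
b = np.array([1.0, 2.0, 3.0, 4.0])

x = spsolve(A, b)          # direct sparse solve
print(x)
```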

  18. Server-Based and Server-Less Byod Solutions to Support Electronic Learning

    DTIC Science & Technology

    2016-06-01

    Knowledge Online NSD National Security Directive OS operating system OWA Outlook Web Access PC personal computer PED personal electronic device PDA...mobile devices, institute mobile device policies and standards, and promote the development and use of DOD mobile and web -enabled applications” (DOD...with an isolated BYOD web server, properly educated system administrators must carry out and execute the necessary, pre-defined network security

  19. Putting the Information Infrastructure to Work. Report of the Information Infrastructure Task Force Committee on Applications and Technology. NIST Special Publication 857.

    ERIC Educational Resources Information Center

    National Inst. of Standards and Technology, Gaithersburg, MD.

    An interconnection of computer networks, telecommunications services, and applications, the National Information Infrastructure (NII) can open up new vistas and profoundly change much of American life. This report explores some of the opportunities and obstacles to the use of the NII by people and organizations. The goal is to express how…

  20. Radio Signal Augmentation for Improved Training of a Convolutional Neural Network

    DTIC Science & Technology

    2016-09-01

    official government endorsement or approval of commercial products or services referenced in this report. Bluetooth ® is a registered...trademark of Bluetooth SIG, Inc.. Nuand™ and blade RF™ are trademarks of Nurand, LLC. Released by E. R. Buckland, Head IO Support to National... Bluetooth ® computer mouse, and Bluetooth ® search from a mobile cellular phone. Qualitatively, model Moffset dramatically outperformed model Mclean in

  1. Social Software and National Security: An Initial Net Assessment

    DTIC Science & Technology

    2009-04-01

    networks. Government ignores this fact at its peril. Use of social software as ICT is creative and collaborative. Large corporations conduct...from the collaborative, distributed approaches promoted by responsible use of social software. Our recommendations are not exhaustive, but this... responsibilities are there for cyber security when using social software on government computers in a Web 2.0 environment?   67 This section might be

  2. The Blurring of Lines Between Combatants and Civilians in Twenty-First Century Armed Conflict

    DTIC Science & Technology

    2013-03-28

    concern for retirement, pensions , placement, or medical care. Speed, technical expertise, continuity, and flexibility are advantages gained by using...including the Internet, telecommunications networks, computer systems , and embedded processors and controllers.”42 Cyberspace and the technologies that... systems . Additionally, the Department of Defense relies heavily on its National Security Agency to defend the United States from attacks against its

  3. LLNL Partners with IBM on Brain-Like Computing Chip

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Essen, Brian

    Lawrence Livermore National Laboratory (LLNL) will receive a first-of-a-kind brain-inspired supercomputing platform for deep learning developed by IBM Research. Based on a breakthrough neurosynaptic computer chip called IBM TrueNorth, the scalable platform will process the equivalent of 16 million neurons and 4 billion synapses and consume the energy equivalent of a hearing aid battery – a mere 2.5 watts of power. The brain-like, neural network design of the IBM Neuromorphic System is able to infer complex cognitive tasks such as pattern recognition and integrated sensory processing far more efficiently than conventional chips.

  4. LLNL Partners with IBM on Brain-Like Computing Chip

    ScienceCinema

    Van Essen, Brian

    2018-06-25

    Lawrence Livermore National Laboratory (LLNL) will receive a first-of-a-kind brain-inspired supercomputing platform for deep learning developed by IBM Research. Based on a breakthrough neurosynaptic computer chip called IBM TrueNorth, the scalable platform will process the equivalent of 16 million neurons and 4 billion synapses and consume the energy equivalent of a hearing aid battery – a mere 2.5 watts of power. The brain-like, neural network design of the IBM Neuromorphic System is able to infer complex cognitive tasks such as pattern recognition and integrated sensory processing far more efficiently than conventional chips.

  5. Acoustic Sensor Planning for Gunshot Location in National Parks: A Pareto Front Approach

    PubMed Central

    González-Castaño, Francisco Javier; Alonso, Javier Vales; Costa-Montenegro, Enrique; López-Matencio, Pablo; Vicente-Carrasco, Francisco; Parrado-García, Francisco J.; Gil-Castiñeira, Felipe; Costas-Rodríguez, Sergio

    2009-01-01

    In this paper, we propose a solution for gunshot location in national parks. In Spain there are agencies such as SEPRONA that fight against poaching with considerable success. The DiANa project, which is endorsed by Cabaneros National Park and the SEPRONA service, proposes a system to automatically detect and locate gunshots. This work presents its technical aspects related to network design and planning. The system consists of a network of acoustic sensors that locate gunshots by hyperbolic multi-lateration estimation. The differences in sound time arrivals allow the computation of a low error estimator of gunshot location. The accuracy of this method depends on tight sensor clock synchronization, which an ad-hoc time synchronization protocol provides. On the other hand, since the areas under surveillance are wide, and electric power is scarce, it is necessary to maximize detection coverage and minimize system cost at the same time. Therefore, sensor network planning has two targets, i.e., coverage and cost. We model planning as an unconstrained problem with two objective functions. We determine a set of candidate solutions of interest by combining a derivative-free descent method we have recently proposed with a Pareto front approach. The results are clearly superior to random seeding in a realistic simulation scenario. PMID:22303135

  6. Acoustic sensor planning for gunshot location in national parks: a pareto front approach.

    PubMed

    González-Castaño, Francisco Javier; Alonso, Javier Vales; Costa-Montenegro, Enrique; López-Matencio, Pablo; Vicente-Carrasco, Francisco; Parrado-García, Francisco J; Gil-Castiñeira, Felipe; Costas-Rodríguez, Sergio

    2009-01-01

    In this paper, we propose a solution for gunshot location in national parks. In Spain there are agencies such as SEPRONA that fight against poaching with considerable success. The DiANa project, which is endorsed by Cabaneros National Park and the SEPRONA service, proposes a system to automatically detect and locate gunshots. This work presents its technical aspects related to network design and planning. The system consists of a network of acoustic sensors that locate gunshots by hyperbolic multi-lateration estimation. The differences in sound time arrivals allow the computation of a low error estimator of gunshot location. The accuracy of this method depends on tight sensor clock synchronization, which an ad-hoc time synchronization protocol provides. On the other hand, since the areas under surveillance are wide, and electric power is scarce, it is necessary to maximize detection coverage and minimize system cost at the same time. Therefore, sensor network planning has two targets, i.e., coverage and cost. We model planning as an unconstrained problem with two objective functions. We determine a set of candidate solutions of interest by combining a derivative-free descent method we have recently proposed with a Pareto front approach. The results are clearly superior to random seeding in a realistic simulation scenario.
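
    The hyperbolic multi-lateration step can be sketched as a small nonlinear least-squares problem: given differences in arrival times at known sensor positions, recover the source location. The sensor layout, sound speed, and starting guess below are invented for the demo and are unrelated to the DiANa deployment.

```python
# TDOA (time-difference-of-arrival) multilateration sketch: recover a
# source position from arrival-time differences via nonlinear least
# squares. Geometry and noise-free "measurements" are invented.
import numpy as np
from scipy.optimize import least_squares

C = 343.0                                  # speed of sound, m/s
sensors = np.array([[0, 0], [500, 0], [0, 500], [500, 500]], float)
true_src = np.array([180.0, 320.0])

# Simulated arrival-time differences relative to sensor 0.
t = np.linalg.norm(sensors - true_src, axis=1) / C
tdoa = (t - t[0])[1:]

def residuals(p):
    d = np.linalg.norm(sensors - p, axis=1) / C
    return (d - d[0])[1:] - tdoa

fit = least_squares(residuals, x0=np.array([250.0, 250.0]))
print(fit.x)        # ~ [180, 320]
```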

  7. Efficient Memory Access with NumPy Global Arrays using Local Memory Access

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daily, Jeffrey A.; Berghofer, Dan C.

    This paper discusses work on Global Arrays of data on distributed multi-computer systems and on improving their performance. The tasks were completed at Pacific Northwest National Laboratory in the Science Undergraduate Laboratory Internship program in the summer of 2013, for the Data Intensive Computing Group in the Fundamental and Computational Sciences Directorate, and were carried out on the Global Arrays Toolkit developed by this group. The toolkit is an interface that lets programmers more easily create arrays of data on networks of computers. This is useful because scientific computation is often done on amounts of data so large that individual computers cannot hold all of them. The data are held in array form and are best processed on supercomputers, which often consist of a network of individual computers computing in parallel. One major challenge for this sort of programming is that operations on arrays spread across multiple computers are very complex, so an interface is needed to make these arrays seem as if they are on a single computer; this is what Global Arrays does. The work described here uses more efficient operations on that data that require less copying, which saves considerable time because copying data across many different computers is time intensive. The way this challenge was solved is as follows: when the operands of a binary operation are on the same computer, they are not copied when they are accessed; when they are on separate computers, only one set is copied. This saves time through reduced copying, at the cost of additional data access operations.
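
    The copy-avoidance idea reads naturally in numpy, where slicing yields views of local memory rather than copies. The sketch below illustrates the principle only; it does not use the Global Arrays Toolkit itself.

```python
# Views vs. copies in miniature: operate on local data in place rather
# than copying it first. Pure-numpy illustration of the optimization idea.
import numpy as np

local = np.arange(1_000_000, dtype=np.float64)

a = local[0:500_000]            # view: no data copied
b = local[500_000:]             # view: no data copied
a += b                          # binary operation directly on local memory

copied = local[0:500_000].copy()  # the slower path: an extra copy first
```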

  8. The research of computer network security and protection strategy

    NASA Astrophysics Data System (ADS)

    He, Jian

    2017-05-01

    With the widespread popularity of computer network applications, network security has also received a high degree of attention. The factors affecting network safety are complex, and securing a network is systematic work that poses a considerable challenge. Addressing the safety and reliability problems of computer network systems, this paper draws on practical work experience to discuss threats to network security, security technologies, and system design principles, and offers suggestions and measures intended to help the broad community of computer network users raise their security awareness and master basic network security techniques.

  9. Creating an Effective Network: The GRACEnet Example

    NASA Astrophysics Data System (ADS)

    Follett, R. F.; Del Grosso, S.

    2008-12-01

    Networking activities require time, work, and nurturing. The objective of this presentation is to share the experience gained from the Greenhouse gas Reduction through Agricultural Carbon Enhancement network (GRACEnet). GRACEnet, formally established in 2005 by the ARS/USDA, resulted from workshops, teleconferences, and other activities beginning in at least 2002. A critical factor for its formation was the development and formalization of a common vision, goals, and objectives, which was accomplished in a 2005 workshop. The 4-person steering committee (now 5) was charged with coordinating the part-time (0.05- to 0.5 SY/location) efforts across 30 ARS locations to develop four products: (1) a national database, (2) regional/national guidelines of management practices, (3) computer models, and (4) "state-of-knowledge" summary publications. All locations are asked to contribute to the database from their field studies. Communication with everyone and periodic meetings are extremely important. Populating the database requires a common vision of sharing, a common format, and trust. Based upon the e-mail list, GRACEnet has expanded from about 30 to nearly 70 participants. Annual reports and a new website help facilitate this activity.

  10. A New Generation of Networks and Computing Models for High Energy Physics in the LHC Era

    NASA Astrophysics Data System (ADS)

    Newman, H.

    2011-12-01

    Wide area networks of increasing end-to-end capacity and capability are vital for every phase of high energy physicists' work. Our bandwidth usage, and the typical capacity of the major national backbones and intercontinental links used by our field, have progressed by a factor of several hundred times over the past decade. With the opening of the LHC era in 2009-10 and the prospects for discoveries in the upcoming LHC run, the outlook is for a continuation or an acceleration of these trends using next generation networks over the next few years. Responding to the need to rapidly distribute and access datasets of tens to hundreds of terabytes drawn from multi-petabyte data stores, high energy physicists working with network engineers and computer scientists are learning to use long range networks effectively on an increasing scale, and aggregate flows reaching the 100 Gbps range have been observed. The progress of the LHC, and the unprecedented ability of the experiments to produce results rapidly using worldwide distributed data processing and analysis, has sparked major, emerging changes in the LHC Computing Models, which are moving from the classic hierarchical model designed a decade ago to more agile peer-to-peer-like models that make more effective use of the resources at Tier2 and Tier3 sites located throughout the world. A new requirements working group has gauged the needs of Tier2 centers, and charged the LHCOPN group that runs the network interconnecting the LHC Tier1s with designing a new architecture interconnecting the Tier2s. As seen from the perspective of ICFA's Standing Committee on Inter-regional Connectivity (SCIC), the Digital Divide that separates physicists in several regions of the developing world from those in the developed world remains acute, although many countries have made major advances through the rapid installation of modern network infrastructures. A case in point is Africa, where a new round of undersea cables promises to transform the continent.

  11. Computer hardware fault administration

    DOEpatents

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-09-14

    Computer hardware fault administration is carried out in a parallel computer that includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, each of which includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying the location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.
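
    A toy version of the routing idea, using networkx: when a link in the first network is identified as defective, traffic for that hop is routed through the second, independent network. Both topologies below are invented stand-ins for the patent's interconnects.

```python
# Route around a defective link via a second, independent network.
# The two toy topologies are illustrative assumptions.
import networkx as nx

primary = nx.cycle_graph(6)        # first data communications network
secondary = nx.star_graph(5)       # second, independent network

defective = (2, 3)
primary.remove_edge(*defective)    # identified defective link

# Fall back to the secondary network for the broken hop.
detour = nx.shortest_path(secondary, 2, 3)
print("route around failed link:", detour)   # e.g. [2, 0, 3]
```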

  12. 76 FR 38124 - Applications for New Awards; Americans With Disabilities Act (ADA) National Network Regional...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-29

    ...) National Network Regional Centers and ADA National Network Collaborative Research Projects AGENCY: Office... National Network Regional Centers (formerly the Disability Business Technical Assistance Centers (DBTACs), and ADA National Network Collaborative Research Projects. Notice inviting applications for new awards...

  13. Software For Monitoring A Computer Network

    NASA Technical Reports Server (NTRS)

    Lee, Young H.

    1992-01-01

    SNMAT is a rule-based expert-system computer program designed to assist personnel in monitoring the status of a computer network and identifying defective computers, workstations, and other components of the network. It also assists in training network operators. The network for SNMAT is located at the Space Flight Operations Center (SFOC) at NASA's Jet Propulsion Laboratory. SNMAT is intended to serve as a data-reduction system providing windows, menus, and graphs, enabling users to focus on relevant information. It is expected to be adaptable to other computer networks, for example in management of repair, maintenance, and security, or in administration of planning systems, billing systems, or archives.
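
    A rule-based monitor of this kind reduces, at its core, to applying labeled predicates to per-host status records. The miniature sketch below is an invented illustration of that pattern, not SNMAT's rule base.

```python
# Toy rule-based network monitor: each rule is a label plus a predicate
# over a host's status record. Hosts, fields, and thresholds are invented.
STATUS = {"ws-01": {"ping_ms": 12,   "cpu": 0.41},
          "ws-02": {"ping_ms": None, "cpu": 0.00},   # unreachable
          "srv-1": {"ping_ms": 8,    "cpu": 0.97}}

RULES = [
    ("unreachable", lambda s: s["ping_ms"] is None),
    ("overloaded",  lambda s: s["ping_ms"] is not None and s["cpu"] > 0.9),
]

for host, stat in STATUS.items():
    for label, rule in RULES:
        if rule(stat):
            print(f"ALERT {host}: {label}")
```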

  14. Hacking Social Networks: Examining the Viability of Using Computer Network Attack Against Social Networks

    DTIC Science & Technology

    2007-03-01

    NAVAL POSTGRADUATE SCHOOL MONTEREY, CALIFORNIA THESIS Approved for public release; distribution is unlimited. HACKING SOCIAL NETWORKS : EXAMINING THE...VIABILITY OF USING COMPUTER NETWORK ATTACK AGAINST SOCIAL NETWORKS by Russell G. Schuhart II March 2007 Thesis Advisor: David Tucker Second Reader...Master’s Thesis 4. TITLE AND SUBTITLE: Hacking Social Networks : Examining the Viability of Using Computer Network Attack Against Social Networks 6. AUTHOR

  15. Constructing Precisely Computing Networks with Biophysical Spiking Neurons.

    PubMed

    Schwemmer, Michael A; Fairhall, Adrienne L; Denéve, Sophie; Shea-Brown, Eric T

    2015-07-15

    While spike timing has been shown to carry detailed stimulus information at the sensory periphery, its possible role in network computation is less clear. Most models of computation by neural networks are based on population firing rates. In equivalent spiking implementations, firing is assumed to be random such that averaging across populations of neurons recovers the rate-based approach. Recently, however, Denéve and colleagues have suggested that the spiking behavior of neurons may be fundamental to how neuronal networks compute, with precise spike timing determined by each neuron's contribution to producing the desired output (Boerlin and Denéve, 2011; Boerlin et al., 2013). By postulating that each neuron fires to reduce the error in the network's output, it was demonstrated that linear computations can be performed by networks of integrate-and-fire neurons that communicate through instantaneous synapses. This left open, however, the possibility that realistic networks, with conductance-based neurons with subthreshold nonlinearity and the slower timescales of biophysical synapses, may not fit into this framework. Here, we show how the spike-based approach can be extended to biophysically plausible networks. We then show that our network reproduces a number of key features of cortical networks including irregular and Poisson-like spike times and a tight balance between excitation and inhibition. Lastly, we discuss how the behavior of our model scales with network size or with the number of neurons "recorded" from a larger computing network. These results significantly increase the biological plausibility of the spike-based approach to network computation. We derive a network of neurons with standard spike-generating currents and synapses with realistic timescales that computes based upon the principle that the precise timing of each spike is important for the computation. We then show that our network reproduces a number of key features of cortical networks including irregular, Poisson-like spike times, and a tight balance between excitation and inhibition. These results significantly increase the biological plausibility of the spike-based approach to network computation, and uncover how several components of biological networks may work together to efficiently carry out computation.
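
    The building block of such networks, the leaky integrate-and-fire neuron, fits in a few lines of numpy. The sketch below shows only that basic ingredient with illustrative parameters; the paper's networks add conductance-based dynamics, realistic synaptic timescales, and the error-driven spiking rule.

```python
# Minimal leaky integrate-and-fire neuron. Parameters are illustrative,
# not taken from the paper.
import numpy as np

dt, T = 1e-4, 0.5               # time step and duration (s)
tau, v_th, v_reset = 20e-3, 1.0, 0.0
I = 1.2                         # constant suprathreshold drive

v, spikes = 0.0, []
for k in range(int(T / dt)):
    v += dt / tau * (-v + I)    # leaky integration toward the input
    if v >= v_th:               # threshold crossing: emit a spike
        spikes.append(k * dt)
        v = v_reset

print(f"{len(spikes)} spikes, rate ~ {len(spikes) / T:.0f} Hz")
```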

  16. Information-seeking behavior changes in community-based teaching practices.

    PubMed

    Byrnes, Jennifer A; Kulick, Tracy A; Schwartz, Diane G

    2004-07-01

    A National Library of Medicine information access grant allowed for a collaborative project to provide computer resources in fourteen clinical practice sites that enabled health care professionals to access medical information via PubMed and the Internet. Health care professionals were taught how to access quality, cost-effective information that was user friendly and would result in improved patient care. Selected sites were located in medically underserved areas and received a computer, a printer, and, during year one, a fax machine. Participants were provided dial-up Internet service or were connected to the affiliated hospital's network. Clinicians were trained in how to search PubMed as a tool for practicing evidence-based medicine and to support clinical decision making. Health care providers were also taught how to find patient-education materials and continuing education programs and how to network with other professionals. Prior to the training, participants completed a questionnaire to assess their computer skills and familiarity with searching the Internet, MEDLINE, and other health-related databases. Responses indicated favorable changes in information-seeking behavior, including an increased frequency in conducting MEDLINE searches and Internet searches for work-related information.

  17. Computer Networks as a New Data Base.

    ERIC Educational Resources Information Center

    Beals, Diane E.

    1992-01-01

    Discusses the use of communication on computer networks as a data source for psychological, social, and linguistic research. Differences between computer-mediated communication and face-to-face communication are described, the Beginning Teacher Computer Network is discussed, and examples of network conversations are appended. (28 references) (LRW)

  18. Lightning Radio Source Retrieval Using Advanced Lightning Direction Finder (ALDF) Networks

    NASA Technical Reports Server (NTRS)

    Koshak, William J.; Blakeslee, Richard J.; Bailey, J. C.

    1998-01-01

    A linear algebraic solution is provided for the problem of retrieving the location and time of occurrence of lightning ground strikes from an Advanced Lightning Direction Finder (ALDF) network. The ALDF network measures field strength, magnetic bearing and arrival time of lightning radio emissions. Solutions for the plane (i.e., no Earth curvature) are provided that implement all of the measurements mentioned above. Tests of the retrieval method are provided using computer-simulated data sets. We also introduce a quadratic planar solution that is useful when only three arrival time measurements are available. The algebra of the quadratic root results is examined in detail to clarify what portions of the analysis region lead to fundamental ambiguities in source location. Complex root results are shown to be associated with the presence of measurement errors when the lightning source lies near an outer sensor baseline of the ALDF network. In the absence of measurement errors, quadratic root degeneracy (no source location ambiguity) is shown to exist exactly on the outer sensor baselines for arbitrary non-collinear network geometries. The accuracy of the quadratic planar method is tested with computer-generated data sets. The results are generally better than those obtained from the three-station linear planar method when bearing errors are about 2 deg. We also note some of the advantages and disadvantages of these methods over the nonlinear method of chi-squared minimization employed by the National Lightning Detection Network (NLDN) and discussed in Cummins et al. (1993, 1995, 1998).
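
    Since ALDF sensors report magnetic bearings as well as arrival times, a planar location can already be triangulated from two bearings alone. The numpy sketch below intersects two bearing rays; the station geometry is invented, and the paper's retrieval additionally folds in arrival times and field strength.

```python
# Bearing-only triangulation sketch: intersect the rays from two
# direction-finder stations. Station positions and bearings are invented.
import numpy as np

def ray(bearing_deg):
    # Unit vector for a bearing measured clockwise from north (+y).
    b = np.radians(bearing_deg)
    return np.array([np.sin(b), np.cos(b)])

s1, b1 = np.array([0.0, 0.0]),   45.0   # station 1 and its bearing
s2, b2 = np.array([100.0, 0.0]), 315.0  # station 2 and its bearing

# Solve s1 + t1*d1 == s2 + t2*d2 for the ray parameters t1, t2.
d1, d2 = ray(b1), ray(b2)
t1, _ = np.linalg.solve(np.column_stack([d1, -d2]), s2 - s1)
print(s1 + t1 * d1)     # ~ [50, 50]
```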

  19. The GÉANT network: addressing current and future needs of the HEP community

    NASA Astrophysics Data System (ADS)

    Capone, Vincenzo; Usman, Mian

    2015-12-01

    The GÉANT infrastructure is the backbone that serves the scientific communities in Europe for their data movement needs and their access to international research and education networks. Using the extensive fibre footprint and infrastructure in Europe the GÉANT network delivers a portfolio of services aimed to best fit the specific needs of the users, including Authentication and Authorization Infrastructure, end-to-end performance monitoring, advanced network services (dynamic circuits, L2-L3VPN, MD-VPN). This talk will outline the factors that help the GÉANT network to respond to the needs of the High Energy Physics community, both in Europe and worldwide. The Pan-European network provides the connectivity between 40 European national research and education networks. In addition, GÉANT also connects the European NRENs to the R&E networks in other world region and has reach to over 110 NREN worldwide, making GÉANT the best connected Research and Education network, with its multiple intercontinental links to different continents e.g. North and South America, Africa and Asia-Pacific. The High Energy Physics computational needs have always had (and will keep having) a leading role among the scientific user groups of the GÉANT network: the LHCONE overlay network has been built, in collaboration with the other big world REN, specifically to address the peculiar needs of the LHC data movement. Recently, as a result of a series of coordinated efforts, the LHCONE network has been expanded to the Asia-Pacific area, and is going to include some of the main regional R&E network in the area. The LHC community is not the only one that is actively using a distributed computing model (hence the need for a high-performance network); new communities are arising, as BELLE II. GÉANT is deeply involved also with the BELLE II Experiment, to provide full support to their distributed computing model, along with a perfSONAR-based network monitoring system. GÉANT has also coordinated the setup of the network infrastructure to perform the BELLE II Trans-Atlantic Data Challenge, and has been active on helping the BELLE II community to sort out their end-to-end performance issues. In this talk we will provide information about the current GÉANT network architecture and of the international connectivity, along with the upcoming upgrades and the planned and foreseeable improvements. We will also describe the implementation of the solutions provided to support the LHC and BELLE II experiments.

  20. Georgia's Surface-Water Resources and Streamflow Monitoring Network, 2006

    USGS Publications Warehouse

    Nobles, Patricia L.; ,

    2006-01-01

    The U.S. Geological Survey (USGS) network of 223 real-time monitoring stations, the 'Georgia HydroWatch,' provides real-time water-stage data, with streamflow computed at 198 locations, and rainfall recorded at 187 stations. These sites continuously record data on 15-minute intervals and transmit the data via satellite to be incorporated into the USGS National Water Information System database. These data are automatically posted to the USGS Web site for public dissemination (http://waterdata.usgs.gov/ga/nwis/nwis). The real-time capability of this network provides information to help emergency-management officials protect human life and property during floods, and mitigate the effects of prolonged drought. The map at right shows the USGS streamflow monitoring network for Georgia and major watersheds. Streamflow is monitored at 198 sites statewide, more than 80 percent of which include precipitation gages. Various Federal, State, and local agencies fund these streamflow monitoring stations.
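
    Today these real-time data can also be pulled programmatically. The sketch below queries the USGS water services REST interface (an interface that postdates this 2006 fact sheet) for a day of 15-minute discharge values; the endpoint, the example Georgia site number, and parameter code 00060 (discharge) are stated here as assumptions about the current service, not as part of the original publication.

```python
# Fetch recent 15-minute streamflow for one gage from USGS water services.
# Endpoint, site number, and parameter code are assumptions about the
# present-day service; requests is a third-party HTTP library.
import requests

resp = requests.get(
    "https://waterservices.usgs.gov/nwis/iv/",
    params={"format": "json", "sites": "02336300",   # an example GA station
            "parameterCd": "00060", "period": "P1D"},
    timeout=30,
)
series = resp.json()["value"]["timeSeries"][0]["values"][0]["value"]
for point in series[:4]:
    print(point["dateTime"], point["value"], "cfs")
```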

  1. 23 CFR 658.21 - Identification of National Network.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 23 Highways 1 2011-04-01 2011-04-01 false Identification of National Network. 658.21 Section 658... Identification of National Network. (a) To identify the National Network, a State may sign the routes or provide maps or lists of highways describing the National Network. (b) Exceptional local conditions on the...

  2. 23 CFR 658.21 - Identification of National Network.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 23 Highways 1 2010-04-01 2010-04-01 false Identification of National Network. 658.21 Section 658... Identification of National Network. (a) To identify the National Network, a State may sign the routes or provide maps or lists of highways describing the National Network. (b) Exceptional local conditions on the...

  3. Get the Whole Story before You Plug into a Computer Network.

    ERIC Educational Resources Information Center

    Vernot, David

    1989-01-01

    Explains the myths and marvels of computer networks; cites how several schools are utilizing networking; and summarizes where the major computer companies stand today when it comes to networking. (MLF)

  4. Network Coding for Function Computation

    ERIC Educational Resources Information Center

    Appuswamy, Rathinakumar

    2011-01-01

    In this dissertation, the following "network computing problem" is considered. Source nodes in a directed acyclic network generate independent messages and a single receiver node computes a target function f of the messages. The objective is to maximize the average number of times f can be computed per network usage, i.e., the "computing…

  5. FluxSuite: a New Scientific Tool for Advanced Network Management and Cross-Sharing of Next-Generation Flux Stations

    NASA Astrophysics Data System (ADS)

    Burba, G. G.; Johnson, D.; Velgersdyk, M.; Beaty, K.; Forgione, A.; Begashaw, I.; Allyn, D.

    2015-12-01

    Significant increases in data generation and computing power in recent years have greatly improved spatial and temporal flux data coverage on multiple scales, from a single station to continental flux networks. At the same time, operating budgets for flux teams and station infrastructure are getting ever more difficult to acquire and sustain. With more stations and networks, larger data flows from each station, and smaller operating budgets, modern tools are needed to effectively and efficiently handle the entire process. This would help maximize the time dedicated to answering research questions, and minimize the time and expense spent on data processing, quality control, and station management. Cross-sharing the stations with external institutions may also help leverage available funding, increase scientific collaboration, and promote data analyses and publications. FluxSuite, a new advanced tool combining hardware, software, and web-service, was developed to address these specific demands. It automates key stages of the flux workflow, minimizes day-to-day site management, and modernizes the handling of data flows: each next-generation station measures all parameters needed for flux computations; a field microcomputer calculates final, fully corrected flux rates in real time, including computation-intensive Fourier transforms, spectra, co-spectra, multiple rotations, stationarity, and footprint; final fluxes, radiation, weather, and soil data are merged into a single quality-controlled file; multiple flux stations are linked into an automated, time-synchronized network; the flux network manager, or PI, can see all stations in real time, including fluxes, supporting data, automated reports, and email alerts; and the PI can assign rights and allow or restrict access to stations and data, so that selected stations can be shared via rights-managed access internally or with external institutions. Researchers without stations could form "virtual networks" for specific projects by collaborating with PIs from different actual networks. This presentation provides detailed examples of FluxSuite as currently utilized by two large flux networks in China (National Academy of Sciences & Agricultural Academy of Sciences) and by smaller networks with stations in the USA, Germany, Ireland, Malaysia, and other locations around the globe.
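
    The heart of the on-station computation is the eddy covariance itself: the turbulent flux is the mean product of the fluctuations of vertical wind speed and a scalar. A minimal numpy sketch with synthetic data follows; the real pipeline adds the rotations, spectral corrections, and quality control listed above.

```python
# Eddy covariance flux in miniature: flux = mean(w' * c'), where primes
# denote fluctuations about the averaging-period mean. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 18_000                              # 30 min at 10 Hz
w = rng.normal(0.0, 0.3, n)             # vertical wind speed (m/s)
c = 400 + 5 * w + rng.normal(0, 1, n)   # scalar correlated with w

w_prime = w - w.mean()
c_prime = c - c.mean()
flux = np.mean(w_prime * c_prime)       # covariance = turbulent flux
print(f"flux ~ {flux:.3f}")
```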

  6. Optical computing and neural networks; Proceedings of the Meeting, National Chiao Tung Univ., Hsinchu, Taiwan, Dec. 16, 17, 1992

    NASA Technical Reports Server (NTRS)

    Hsu, Ken-Yuh (Editor); Liu, Hua-Kuang (Editor)

    1992-01-01

    The present conference discusses optical neural networks, photorefractive nonlinear optics, optical pattern recognition, digital and analog processors, and holography and its applications. Attention is given to bifurcating optical information processing, neural structures in digital halftoning, an exemplar-based optical neural net classifier for color pattern recognition, volume storage in photorefractive disks, and microlaser-based compact optical neuroprocessors. Also treated are the optical implementation of a feature-enhanced optical interpattern-associative neural network model and its optical implementation, an optical pattern binary dual-rail logic gate module, a theoretical analysis for holographic associative memories, joint transform correlators, image addition and subtraction via the Talbot effect, and optical wavelet-matched filters. (No individual items are abstracted in this volume)

  7. Optical computing and neural networks; Proceedings of the Meeting, National Chiao Tung Univ., Hsinchu, Taiwan, Dec. 16, 17, 1992

    NASA Astrophysics Data System (ADS)

    Hsu, Ken-Yuh; Liu, Hua-Kuang

    The present conference discusses optical neural networks, photorefractive nonlinear optics, optical pattern recognition, digital and analog processors, and holography and its applications. Attention is given to bifurcating optical information processing, neural structures in digital halftoning, an exemplar-based optical neural net classifier for color pattern recognition, volume storage in photorefractive disks, and microlaser-based compact optical neuroprocessors. Also treated are the optical implementation of a feature-enhanced optical interpattern-associative neural network model and its optical implementation, an optical pattern binary dual-rail logic gate module, a theoretical analysis for holographic associative memories, joint transform correlators, image addition and subtraction via the Talbot effect, and optical wavelet-matched filters. (No individual items are abstracted in this volume)

  8. LaRC local area networks to support distributed computing

    NASA Technical Reports Server (NTRS)

    Riddle, E. P.

    1984-01-01

    The Langley Research Center's (LaRC) Local Area Network (LAN) effort is discussed. LaRC initiated the development of a LAN to support a growing distributed computing environment at the Center. The purpose of the network is to provide an improved capability (over interactive and RJE terminal access) for sharing multivendor computer resources. Specifically, the network will provide a data highway for the transfer of files between mainframe computers, minicomputers, work stations, and personal computers. An important influence on the overall network design was the vital need of LaRC researchers to efficiently utilize the large CDC mainframe computers in the central scientific computing facility. Although there was a steady migration from a centralized to a distributed computing environment at LaRC in recent years, the work load on the central resources increased. Major emphasis in the network design was on communication with the central resources within the distributed environment. The network to be implemented will allow researchers to utilize the central resources, distributed minicomputers, work stations, and personal computers to obtain the proper level of computing power to efficiently perform their jobs.

  9. Information resources at the National Center for Biotechnology Information.

    PubMed Central

    Woodsmall, R M; Benson, D A

    1993-01-01

    The National Center for Biotechnology Information (NCBI), part of the National Library of Medicine, was established in 1988 to perform basic research in the field of computational molecular biology as well as build and distribute molecular biology databases. The basic research has led to new algorithms and analysis tools for interpreting genomic data and has been instrumental in the discovery of human disease genes for neurofibromatosis and Kallmann syndrome. The principal database responsibility is the National Institutes of Health (NIH) genetic sequence database, GenBank. NCBI, in collaboration with international partners, builds, distributes, and provides online and CD-ROM access to over 112,000 DNA sequences. Another major program is the integration of multiple sequences databases and related bibliographic information and the development of network-based retrieval systems for Internet access. PMID:8374583
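
    Today such records are retrieved over NCBI's Entrez E-utilities; the Biopython sketch below fetches a single GenBank record. The accession number is illustrative, and NCBI asks callers to identify themselves with an e-mail address.

```python
# Fetch one GenBank record via NCBI Entrez using Biopython. The accession
# number is an illustrative example; network access is required.
from Bio import Entrez, SeqIO

Entrez.email = "you@example.org"   # required courtesy field for NCBI

handle = Entrez.efetch(db="nucleotide", id="U49845",
                       rettype="gb", retmode="text")
record = SeqIO.read(handle, "genbank")
handle.close()

print(record.id, len(record.seq), "bp")
```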

  10. The next generation of neural network chips

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beiu, V.

    There have been many national and international neural networks research initiatives: USA (DARPA, NIBS), Canada (IRIS), Japan (HFSP) and Europe (BRAIN, GALATEA, NERVES, ELENE NERVES 2) -- just to mention a few. Recent developments in the field of neural networks, cognitive science, bioengineering and electrical engineering have made it possible to understand more about the functioning of large ensembles of identical processing elements. There are more research papers than ever proposing solutions, and hardware implementations are by no means an exception. Two fields (computing and neuroscience) are interacting in ways nobody could imagine just several years ago, and -- with the advent of new technologies -- researchers are focusing on trying to copy the Brain. Such an exciting confluence may quite shortly lead to revolutionary new computers, and it is the aim of this invited session to bring to light some of the challenging research aspects dealing with the hardware realizability of future intelligent chips. Present-day (conventional) technology is (still) mostly digital and, thus, occupies wider areas and consumes much more power than the solutions envisaged. The innovative algorithmic and architectural ideas should represent important breakthroughs, paving the way towards making neural network chips available to the industry at competitive prices, in relatively small packages and consuming a fraction of the power required by equivalent digital solutions.

  11. Terminal-oriented computer-communication networks.

    NASA Technical Reports Server (NTRS)

    Schwartz, M.; Boorstyn, R. R.; Pickholtz, R. L.

    1972-01-01

    Four examples of currently operating computer-communication networks are described in this tutorial paper. They include the TYMNET network, the GE Information Services network, the NASDAQ over-the-counter stock-quotation system, and the Computer Sciences Infonet. These networks all use programmable concentrators for combining a multiplicity of terminals. Included in the discussion for each network is a description of the overall network structure, the handling and transmission of messages, communication requirements, routing and reliability considerations where applicable, operating data and design specifications where available, and unique design features in the area of computer communications.

  12. Do You Lock Your Network Doors? Some Network Management Precautions.

    ERIC Educational Resources Information Center

    Neray, Phil

    1997-01-01

    Discusses security problems and solutions for networked organizations with Internet connections. Topics include access to private networks from electronic mail information; computer viruses; computer software; corporate espionage; firewalls, that is, computers that stand between a local network and the Internet; passwords; and physical security.…

  13. HEPLIB `91: International users meeting on the support and environments of high energy physics computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnstad, H.

    The purpose of this meeting is to discuss the current and future HEP computing support and environments from the perspective of new horizons in accelerator, physics, and computing technologies. Topics of interest to the Meeting include (but are not limited to): the forming of the HEPLIB world user group for High Energy Physics computing; mandate, desirables, coordination, organization, funding; user experience, international collaboration; the roles of national labs, universities, and industry; range of software, Monte Carlo, mathematics, physics, interactive analysis, text processors, editors, graphics, data base systems, code management tools; program libraries, frequency of updates, distribution; distributed and interactive computing, data base systems, user interface, UNIX operating systems, networking, compilers, Xlib, X-Graphics; documentation, updates, availability, distribution; code management in large collaborations, keeping track of program versions; and quality assurance, testing, conventions, standards.

  14. HEPLIB 91: International users meeting on the support and environments of high energy physics computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnstad, H.

    The purpose of this meeting is to discuss the current and future HEP computing support and environments from the perspective of new horizons in accelerator, physics, and computing technologies. Topics of interest to the Meeting include (but are not limited to): the forming of the HEPLIB world user group for High Energy Physics computing; mandate, desirables, coordination, organization, funding; user experience, international collaboration; the roles of national labs, universities, and industry; range of software, Monte Carlo, mathematics, physics, interactive analysis, text processors, editors, graphics, data base systems, code management tools; program libraries, frequency of updates, distribution; distributed and interactive computing, data base systems, user interface, UNIX operating systems, networking, compilers, Xlib, X-Graphics; documentation, updates, availability, distribution; code management in large collaborations, keeping track of program versions; and quality assurance, testing, conventions, standards.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bland, Arthur S Buddy; Hack, James J; Baker, Ann E

    Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools and resources for next-generation systems.

  16. Performance of VPIC on Trinity

    NASA Astrophysics Data System (ADS)

    Nystrom, W. D.; Bergen, B.; Bird, R. F.; Bowers, K. J.; Daughton, W. S.; Guo, F.; Li, H.; Nam, H. A.; Pang, X.; Rust, W. N., III; Wohlbier, J.; Yin, L.; Albright, B. J.

    2016-10-01

    Trinity is a new major DOE computing resource which is going through final acceptance testing at Los Alamos National Laboratory. Trinity has several new and unique architectural features, including two compute partitions, one with dual-socket Intel Haswell Xeon compute nodes and one with Intel Knights Landing (KNL) Xeon Phi compute nodes. Additional unique features include use of on-package high bandwidth memory (HBM) on the KNL nodes, the ability to configure the KNL nodes with respect to HBM mode and on-die network topology in a variety of operational modes at run time, and use of solid state storage via burst buffer technology to reduce the time required to perform I/O. An effort is in progress to port and optimize VPIC to Trinity and evaluate its performance. Because VPIC was recently released as open source, it is being used as part of acceptance testing for Trinity and is participating in the Trinity Open Science Program, which has resulted in excellent collaboration activities with both Cray and Intel. Results of this work will be presented on the performance of VPIC on both the Haswell and KNL partitions, for both single-node runs and runs at scale. Work performed under the auspices of the U.S. Dept. of Energy by the Los Alamos National Security, LLC, Los Alamos National Laboratory under contract DE-AC52-06NA25396 and supported by the LANL LDRD program.

  17. Global thermal analysis of air-air cooled motor based on thermal network

    NASA Astrophysics Data System (ADS)

    Hu, Tian; Leng, Xue; Shen, Li; Liu, Haidong

    2018-02-01

    Air-air cooled motors, with their high efficiency, large starting torque, strong overload capacity, low noise, small vibration and other characteristics, are widely used across national industry, but their cooling structure is complex and places high demands on motor thermal management technology. The thermal network method is a common method for calculating the temperature field of a motor; it has the advantages of low computational cost and short run time, so it can save a great deal of time in the initial design phase of the motor. The thermal analysis of an air-air cooled motor and its cooler was performed with the thermal network method: a combined thermal network model was built, the temperatures of the main internal motor components and the external cooler were calculated and analyzed, and the results were compared with temperature-rise tests to verify the correctness of the combined thermal network model. The calculation method satisfies the needs of engineering design and provides a reference for the initial and optimum design of the motor.
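
    To make the thermal network method concrete, the sketch below (not from the paper; the three-node topology, conductances, and losses are invented for illustration) solves a small lumped-parameter network for the steady-state temperature rise by assembling a conductance matrix G and solving G*T = Q:

      import numpy as np

      # Hypothetical three-node motor model: winding -> core -> frame.
      g = {(0, 1): 2.0, (1, 2): 1.5}    # inter-node conductances, W/K
      g_amb = [0.1, 0.2, 3.0]           # each node's conductance to ambient, W/K
      q = np.array([120.0, 30.0, 0.0])  # heat injected at each node, W

      n = len(q)
      G = np.zeros((n, n))
      for (i, j), gij in g.items():     # standard nodal assembly
          G[i, i] += gij; G[j, j] += gij
          G[i, j] -= gij; G[j, i] -= gij
      for i, ga in enumerate(g_amb):
          G[i, i] += ga

      T = np.linalg.solve(G, q)         # temperature rise over ambient, K
      print(T.round(1))                 # e.g. [163.2 111.4  37.1]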

  18. National Fusion Collaboratory: Grid Computing for Simulations and Experiments

    NASA Astrophysics Data System (ADS)

    Greenwald, Martin

    2004-05-01

    The National Fusion Collaboratory Project is creating a computational grid designed to advance scientific understanding and innovation in magnetic fusion research by facilitating collaborations, enabling more effective integration of experiments, theory and modeling and allowing more efficient use of experimental facilities. The philosophy of FusionGrid is that data, codes, analysis routines, visualization tools, and communication tools should be thought of as network available services, easily used by the fusion scientist. In such an environment, access to services is stressed rather than portability. By building on a foundation of established computer science toolkits, deployment time can be minimized. These services all share the same basic infrastructure that allows for secure authentication and resource authorization which allows stakeholders to control their own resources such as computers, data and experiments. Code developers can control intellectual property, and fair use of shared resources can be demonstrated and controlled. A key goal is to shield scientific users from the implementation details such that transparency and ease-of-use are maximized. The first FusionGrid service deployed was the TRANSP code, a widely used tool for transport analysis. Tools for run preparation, submission, monitoring and management have been developed and shared among a wide user base. This approach saves user sites from the laborious effort of maintaining such a large and complex code while at the same time reducing the burden on the development team by avoiding the need to support a large number of heterogeneous installations. Shared visualization and A/V tools are being developed and deployed to enhance long-distance collaborations. These include desktop versions of the Access Grid, a highly capable multi-point remote conferencing tool and capabilities for sharing displays and analysis tools over local and wide-area networks.

  19. Computer Security for Commercial Nuclear Power Plants - Literature Review for Korea Hydro Nuclear Power Central Research Institute

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duran, Felicia Angelica; Waymire, Russell L.

    2013-10-01

    Sandia National Laboratories (SNL) is providing training and consultation activities on security planning and design for the Korea Hydro and Nuclear Power Central Research Institute (KHNP-CRI). As part of this effort, SNL performed a literature review on computer security requirements, guidance, and best practices that are applicable to an advanced nuclear power plant. This report documents the review of reports generated by SNL and other organizations [U.S. Nuclear Regulatory Commission, Nuclear Energy Institute, and International Atomic Energy Agency] related to protection of information technology resources, primarily digital controls and computer resources and their data networks. Copies of the key documents have also been provided to KHNP-CRI.

  20. Network Computer Technology. Phase I: Viability and Promise within NASA's Desktop Computing Environment

    NASA Technical Reports Server (NTRS)

    Paluzzi, Peter; Miller, Rosalind; Kurihara, West; Eskey, Megan

    1998-01-01

    Over the past several months, major industry vendors have made a business case for the network computer as a win-win solution toward lowering total cost of ownership. This report provides results from Phase I of the Ames Research Center network computer evaluation project. It identifies factors to be considered for determining cost of ownership; further, it examines where, when, and how network computer technology might fit in NASA's desktop computing architecture.

  1. Directly executable formal models of middleware for MANET and Cloud Networking and Computing

    NASA Astrophysics Data System (ADS)

    Pashchenko, D. V.; Sadeq Jaafar, Mustafa; Zinkin, S. A.; Trokoz, D. A.; Pashchenko, T. U.; Sinev, M. P.

    2016-04-01

    The article considers some “directly executable” formal models that are suitable for the specification of computing and networking in the cloud environment and other networks which are similar to wireless networks MANET. These models can be easily programmed and implemented on computer networks.

  2. Modeling Computer Communication Networks in a Realistic 3D Environment

    DTIC Science & Technology

    2010-03-01

  3. Education, Emerging Information Technology, and the NSF

    NASA Astrophysics Data System (ADS)

    Wink, Donald J.

    1998-11-01

    The National Science Foundation was the original organizational leader for the Internet, and it is still engaged in funding research and infrastructure related to the use of networked information. As it is written in the strategic plan for the Directorate for Computer and Information Science and Engineering, "These technologies promise to have at least as great an impact as did the invention of written language thousands of years ago."

  4. Advantages of Parallel Processing and the Effects of Communications Time

    NASA Technical Reports Server (NTRS)

    Eddy, Wesley M.; Allman, Mark

    2000-01-01

    Many computing tasks involve heavy mathematical calculations, or analyzing large amounts of data. These operations can take a long time to complete using only one computer. Networks such as the Internet provide many computers with the ability to communicate with each other. Parallel or distributed computing takes advantage of these networked computers by arranging them to work together on a problem, thereby reducing the time needed to obtain the solution. The drawback to using a network of computers to solve a problem is the time wasted in communicating between the various hosts. The application of distributed computing techniques to a space environment or to use over a satellite network would therefore be limited by the amount of time needed to send data across the network, which would typically take much longer than on a terrestrial network. This experiment shows how much faster a large job can be performed by adding more computers to the task, what role communications time plays in the total execution time, and the impact a long-delay network has on a distributed computing system.
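
    A toy model makes the tradeoff concrete (all numbers are assumed, not taken from the experiment): wall time is roughly the divisible compute time spread over the hosts plus a per-host communication cost, so a longer network delay pushes the optimal host count down and the best achievable time up:

      def total_time(compute_s, n_hosts, per_host_comm_s):
          # perfectly divisible work plus linear per-host messaging overhead
          return compute_s / n_hosts + n_hosts * per_host_comm_s

      WORK = 3600.0  # one hour of compute on a single host (assumed)

      for comm_s, label in [(0.5, "terrestrial"), (6.0, "satellite")]:
          best = min(range(1, 129), key=lambda n: total_time(WORK, n, comm_s))
          print(f"{label}: best host count = {best}, "
                f"wall time = {total_time(WORK, best, comm_s):.0f} s")
      # terrestrial: best host count = 85, wall time = 85 s
      # satellite:   best host count = 24, wall time = 294 s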

  5. LiverTox: Advanced QSAR and Toxicogeomic Software for Hepatotoxicity Prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, P-Y.; Yuracko, K.

    2011-02-25

    YAHSGS LLC and Oak Ridge National Laboratory (ORNL) established a CRADA in an attempt to develop a predictive system using a pre-existing ORNL computational neural network and wavelets format. This was in the interest of addressing national needs for a toxicity prediction system to help overcome the significant drain of resources (money and time) being directed toward developing chemical agents for commerce. The research project has been supported through an STTR mechanism and funded by the National Institute of Environmental Health Sciences, beginning with Phase I in 2004 (CRADA No. ORNL-04-0688) and extending through Phase II to 2007 (ORNL NFE-06-00020). To address the research objectives and aims outlined under this CRADA, state-of-the-art computational neural network and wavelet methods were used in an effort to design a predictive toxicity system that used two independent areas on which to base the system’s predictions. These two areas were quantitative structure-activity relationships and gene-expression data obtained from microarrays. A third area, using the new Massively Parallel Signature Sequencing (MPSS) technology to assess gene expression, also was attempted but had to be dropped because the company holding the rights to this promising MPSS technology went out of business. A research-scale predictive toxicity database system called Multi-Intelligent System for Toxicogenomic Applications (MISTA) was developed and its feasibility for use as a predictor of toxicological activity was tested. The fundamental focus of the CRADA was an effort to operate the MISTA database using the ORNL neural network. This effort indicated the potential that such a fully developed system might be used to assist in predicting such biological endpoints as hepatotoxicity and neurotoxicity. The MISTA/LiverTox approach, if eventually fully developed, might also be useful for automatic processing of microarray data to predict modes of action. A technical paper describing the methods and technology used in the CRADA, entitled “A Toxicity Evaluation and Predictive System Based on Neural Networks and Wavelets,” appeared in an American Chemical Society peer-reviewed publication (J. Chem. Inf. Model. 47: 676-685, 2007). A patent application was filed but later abandoned.

  6. Near real-time traffic routing

    NASA Technical Reports Server (NTRS)

    Yang, Chaowei (Inventor); Xie, Jibo (Inventor); Zhou, Bin (Inventor); Cao, Ying (Inventor)

    2012-01-01

    A near real-time physical transportation network routing system comprising a traffic simulation computing grid and a dynamic traffic routing service computing grid. The traffic simulator produces traffic network travel time predictions for a physical transportation network using a traffic simulation model and common input data. The physical transportation network is divided into multiple sections. Each section has a primary zone and a buffer zone. The traffic simulation computing grid includes multiple traffic simulation computing nodes. The common input data include static network characteristics, an origin-destination data table, dynamic traffic information data, and historical traffic data. The dynamic traffic routing service computing grid includes multiple dynamic traffic routing computing nodes and generates traffic route(s) using the traffic network travel time predictions.

  7. Discussion on the Technology and Method of Computer Network Security Management

    NASA Astrophysics Data System (ADS)

    Zhou, Jianlei

    2017-09-01

    With the rapid development of information technology, the application of computer network technology has penetrated all aspects of society, changed people's ways of living and working to a certain extent, and brought great convenience. But computer network technology is not a panacea: it can promote social development, but it can also cause damage to the community and the country. Because of the openness, ease of sharing, and other characteristics of computer networks, network security is strongly affected; in particular, loopholes in the technical aspects can lead to damage to network information. On this basis, this paper briefly analyzes computer network security management problems and security measures.

  8. Overview of the LINCS architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fletcher, J.G.; Watson, R.W.

    1982-01-13

    Computing at the Lawrence Livermore National Laboratory (LLNL) has evolved over the past 15 years into a computer-network-based resource-sharing environment. The increasing use of low-cost, high-performance micro, mini, and midi computers and commercially available local networking systems will accelerate this trend. Further, even the large-scale computer systems on which much of the LLNL scientific computing depends are evolving into multiprocessor systems. It is our belief that the most cost-effective use of this environment will depend on the development of application systems structured into cooperating concurrent program modules (processes) distributed appropriately over different nodes of the environment. A node is defined as one or more processors with a local (shared) high-speed memory. Given the latter view, the environment can be characterized as consisting of: multiple nodes communicating over noisy channels with arbitrary delays and throughput, heterogeneous base resources and information encodings, no single administration controlling all resources, distributed system state, and no uniform time base. The system design problem is: how to turn the heterogeneous base hardware/firmware/software resources of this environment into a coherent set of resources that facilitate development of cost-effective, reliable, and human-engineered applications. We believe the answer lies in developing a layered, communication-oriented distributed system architecture; layered and modular to support ease of understanding, reconfiguration, extensibility, and hiding of implementation or nonessential local details; communication-oriented because that is a central feature of the environment. The Livermore Interactive Network Communication System (LINCS) is a hierarchical architecture designed to meet the above needs. While having characteristics in common with other architectures, it differs in several respects.

  9. Mediator infrastructure for information integration and semantic data integration environment for biomedical research.

    PubMed

    Grethe, Jeffrey S; Ross, Edward; Little, David; Sanders, Brian; Gupta, Amarnath; Astakhov, Vadim

    2009-01-01

    This paper presents current progress in the development of a semantic data integration environment which is part of the Biomedical Informatics Research Network (BIRN; http://www.nbirn.net) project. BIRN is sponsored by the National Center for Research Resources (NCRR), a component of the National Institutes of Health (NIH). A goal is the development of a cyberinfrastructure for biomedical research that supports advanced data acquisition, data storage, data management, data integration, data mining, data visualization, and other computing and information processing services over the Internet. Each participating institution maintains storage of its experimental or computationally derived data. A mediator-based data integration system performs semantic integration over the databases to enable researchers to perform analyses based on larger and broader datasets than would be available from any single institution's data. This paper describes a recent revision of the system architecture, implementation, and capabilities of the semantically based data integration environment for BIRN.

  10. Development of the Centralized Storm Information System (CSIS) for use in severe weather prediction

    NASA Technical Reports Server (NTRS)

    Mosher, F. R.

    1984-01-01

    The centralized storm information system is now capable of ingesting and remapping radar scope presentations on a satellite projection. This can be color enhanced and superposed on other data types. Presentations from more than one radar can be composited on a single image. As with most other data sources, a simple macro establishes the loops and scheduling of the radar ingestions as well as the autodialing. There are approximately 60 NWS network 10 cm radars that can be interrogated. NSSFC forecasters have found this data source to be extremely helpful in severe weather situations. The capability to access lightning frequency data stored in a National Weather Service computer was added. Plans call for an interface with the National Meteorological Center to receive and display prognostic fields from operational computer forecast models. Programs are to be developed to plot and display locations of reported severe local storm events.

  11. ASCR Cybersecurity for Scientific Computing Integrity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peisert, Sean

    The Department of Energy (DOE) has the responsibility to address the energy, environmental, and nuclear security challenges that face our nation. Much of DOE’s enterprise involves distributed, collaborative teams; a significant fraction involves “open science,” which depends on multi-institutional, often international collaborations that must access or share significant amounts of information between institutions and over networks around the world. The mission of the Office of Science is the delivery of scientific discoveries and major scientific tools to transform our understanding of nature and to advance the energy, economic, and national security of the United States. The ability of DOE to execute its responsibilities depends critically on its ability to assure the integrity and availability of scientific facilities and computer systems, and of the scientific, engineering, and operational software and data that support its mission.

  12. Risk Assessment and Mapping of Fecal Contamination in the Ohio River Basin

    NASA Astrophysics Data System (ADS)

    Cabezas, A.; Morehead, D.; Teklitz, A.; Yeghiazarian, L.

    2014-12-01

    Decisions in many problems in engineering planning are invariably made under conditions of uncertainty imposed by the inherent randomness of natural phenomena. Water quality is one such problem. For example, the leading cause of surface-water impairment in the US is fecal microbial contamination, which can potentially trigger massive outbreaks of gastrointestinal disease. It is well known that the difficulty in predicting water contamination is rooted in the stochastic variability of microbes in the environment and in the complexity of environmental systems. To address these issues, we employ a risk-based design format to compute the variability in microbial concentrations and the probability of exceeding the E. coli target in the Ohio River Basin (ORB). This probability is then mapped onto the basin's stream network within the ArcGIS environment. We demonstrate how spatial risk maps can be used in support of watershed management decisions, in particular in the assessment of best management practices for reduction of the E. coli load in surface water. The modeling environment selected for the analysis is the Schematic Processor (SP), a suite of geoprocessing ArcGIS tools. SP operates on a schematic, link-and-node network model of the watershed. The National Hydrography Dataset (NHD) is used as the basis for this representation, as it provides the stream network, lakes, and catchment definitions. Given the schematic network of the watershed, SP adds the capability to perform mathematical computations along the links and at the nodes. This enables modeling the fate and transport of any entity over the network. Data from various sources have been integrated for this analysis. Catchment boundaries, lake locations, the stream network, and flow data have been retrieved from NHDPlus. Land use data come from the National Land Cover Database (NLCD), and microbial observation data from the Ohio River Sanitation Committee. The latter dataset is the result of a 2003-2007 longitudinal study. Samples for E. coli analysis were collected approximately every five miles along the entire length of the Ohio River, with additional samples collected at the mouths of over 125 direct tributaries to the Ohio River.
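
    As an illustration of the risk computation (a schematic stand-in, not the paper's actual model; the target and lognormal parameters below are assumed), the exceedance probability for one stream reach can be estimated by Monte Carlo:

      import math, random

      TARGET_CFU = 126.0         # assumed E. coli criterion, CFU/100 mL
      MEDIAN, SIGMA = 80.0, 0.9  # assumed lognormal median and log-space sigma

      def exceedance_probability(n_draws=100_000):
          mu = math.log(MEDIAN)  # mean of the underlying normal distribution
          hits = sum(1 for _ in range(n_draws)
                     if random.lognormvariate(mu, SIGMA) > TARGET_CFU)
          return hits / n_draws

      print(f"P(exceed {TARGET_CFU} CFU/100 mL) ~ {exceedance_probability():.2f}")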

  13. Implementation of Grid Tier 2 and Tier 3 facilities on a Distributed OpenStack Cloud

    NASA Astrophysics Data System (ADS)

    Limosani, Antonio; Boland, Lucien; Coddington, Paul; Crosby, Sean; Huang, Joanna; Sevior, Martin; Wilson, Ross; Zhang, Shunde

    2014-06-01

    The Australian Government is making a AUD 100 million investment in Compute and Storage for the academic community. The Compute facilities are provided in the form of 30,000 CPU cores located at 8 nodes around Australia in a distributed virtualized Infrastructure as a Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All will be linked via a 100 Gb/s network. This proceeding describes the development of a fully connected WLCG Tier-2 grid site as well as a general purpose Tier-3 computing cluster based on this architecture. The facility employs an extension to Torque to enable dynamic allocations of virtual machine instances. A base Scientific Linux virtual machine (VM) image is deployed in the OpenStack cloud and automatically configured as required using Puppet. Custom scripts are used to launch multiple VMs, integrate them into the dynamic Torque cluster and to mount remote file systems. We report on our experience in developing this nation-wide ATLAS and Belle II Tier 2 and Tier 3 computing infrastructure using the national Research Cloud and storage facilities.

  14. Providing nearest neighbor point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer

    DOEpatents

    Archer, Charles J.; Faraj, Ahmad A.; Inglett, Todd A.; Ratterman, Joseph D.

    2012-10-23

    Methods, apparatus, and products are disclosed for providing nearest neighbor point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer, each compute node connected to each adjacent compute node in the global combining network through a link, that include: identifying each link in the global combining network for each compute node of the operational group; designating one of a plurality of point-to-point class routing identifiers for each link such that no compute node in the operational group is connected to two adjacent compute nodes in the operational group with links designated for the same class routing identifiers; and configuring each compute node of the operational group for point-to-point communications with each adjacent compute node in the global combining network through the link between that compute node and that adjacent compute node using that link's designated class routing identifier.
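
    The designation constraint in this record amounts to a proper edge coloring of the node graph: no node may see the same class routing identifier on two of its links. A minimal greedy sketch follows (the five-link topology is hypothetical, not an actual global combining network):

      links = [("A", "B"), ("A", "C"), ("B", "D"), ("B", "E"), ("C", "F")]

      assignment = {}   # link -> class routing identifier
      used_at = {}      # node -> identifiers already used on its links

      for u, v in links:
          taken = used_at.get(u, set()) | used_at.get(v, set())
          class_id = next(c for c in range(len(links)) if c not in taken)
          assignment[(u, v)] = class_id
          used_at.setdefault(u, set()).add(class_id)
          used_at.setdefault(v, set()).add(class_id)

      for link, cid in sorted(assignment.items()):
          print(link, "-> class", cid)   # no node repeats an identifier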

  15. A Low Cost Micro-Computer Based Local Area Network for Medical Office and Medical Center Automation

    PubMed Central

    Epstein, Mel H.; Epstein, Lynn H.; Emerson, Ron G.

    1984-01-01

    A low-cost microcomputer-based local area network for medical office automation is described which makes use of an array of multiple and different personal computers interconnected by a local area network. Each computer on the network functions as a fully potent workstation for data entry and report generation. The network allows each workstation complete access to the entire database. Additionally, designated computers may serve as access ports for remote terminals. Through “Gateways” the network may serve as a front end for a large mainframe, or may interface with another network. The system provides for the medical office environment the expandability and flexibility of a multi-terminal mainframe system at a far lower cost without sacrifice of performance.

  16. Class network routing

    DOEpatents

    Bhanot, Gyan [Princeton, NJ; Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Takken, Todd E [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2009-09-08

    Class network routing is implemented in a network such as a computer network comprising a plurality of parallel compute processors at nodes thereof. Class network routing allows a compute processor to broadcast a message to a range (one or more) of other compute processors in the computer network, such as processors in a column or a row. Normally this type of operation requires a separate message to be sent to each processor. With class network routing pursuant to the invention, a single message is sufficient, which generally reduces the total number of messages in the network as well as the latency to do a broadcast. Class network routing is also applied to dense matrix inversion algorithms on distributed memory parallel supercomputers with hardware class function (multicast) capability. This is achieved by exploiting the fact that the communication patterns of dense matrix inversion can be served by hardware class functions, which results in faster execution times.

  17. Using 100G Network Technology in Support of Petascale Science

    NASA Technical Reports Server (NTRS)

    Gary, James P.

    2011-01-01

    NASA, in collaboration with a number of partners, conducted a set of individual experiments and demonstrations during SC10 that collectively were titled "Using 100G Network Technology in Support of Petascale Science". The partners included iCAIR, Internet2, LAC, MAX, National LambdaRail (NLR), NOAA and the SCinet Research Sandbox (SRS), as well as the vendors Ciena, Cisco, ColorChip, cPacket, Extreme Networks, Fusion-io, HP and Panduit, who most generously allowed some of their leading-edge 40G/100G optical transport, Ethernet switch and Internet Protocol router equipment and file server technologies to be involved. The experiments and demonstrations featured different vendor-provided 40G/100G network technology solutions for full-duplex 40G and 100G LAN data flows across SRS-deployed single-mode fiber-pairs among the exhibit booths of NASA, the National Center for Data Mining, NOAA and the SCinet Network Operations Center, as well as between the NASA exhibit booth in New Orleans and the StarLight communications exchange facility in Chicago, across special SC10-only 80- and 100-Gbps wide-area network links provisioned respectively by NLR and Internet2, then on to GSFC across a 40-Gbps link provisioned by the Mid-Atlantic Crossroads. The networks and vendor equipment were load-stressed by sets of NASA/GSFC High End Computer Network Team-built, relatively inexpensive net-test-workstations that are capable of demonstrating greater than 100-Gbps unidirectional nuttcp-enabled memory-to-memory data transfers, greater than 80-Gbps aggregate bidirectional memory-to-memory data transfers, and near 40-Gbps unidirectional disk-to-disk file copying. This paper summarizes the background context, key accomplishments and some significances of these experiments and demonstrations.

  18. 78 FR 24154 - Notice of Availability of a National Animal Health Laboratory Network Reorganization Concept Paper

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-24

    ...] Notice of Availability of a National Animal Health Laboratory Network Reorganization Concept Paper AGENCY... Network (NAHLN) for public review and comment. The NAHLN is a nationally coordinated network and... Coordinator, National Animal Health Laboratory Network, Veterinary Services, APHIS, 2140 Centre Avenue...

  19. Artificial neural networks as quantum associative memory

    NASA Astrophysics Data System (ADS)

    Hamilton, Kathleen; Schrock, Jonathan; Imam, Neena; Humble, Travis

    We present results related to the recall accuracy and capacity of Hopfield networks implemented on commercially available quantum annealers. The use of Hopfield networks and artificial neural networks as content-addressable memories offers robust storage and retrieval of classical information; however, implementation of these models using currently available quantum annealers faces several challenges: the limits of precision when setting synaptic weights, the effects of spurious spin-glass states, and minor embedding of densely connected graphs into fixed-connectivity hardware. We consider neural networks which are less than fully connected, and also neural networks which contain multiple sparsely connected clusters. We discuss the effect of weak edge dilution on the accuracy of memory recall, and discuss how the multiple-clique structure affects the storage capacity. Our work focuses on storage of patterns which can be embedded into physical hardware containing n < 1000 qubits. This work was supported by the United States Department of Defense and used resources of the Computational Research and Development Programs at Oak Ridge National Laboratory under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy.
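
    For readers unfamiliar with the underlying model, a minimal classical Hopfield sketch (NumPy here, not a quantum annealer; the patterns and probe are invented) shows the Hebbian storage and recall the abstract studies:

      import numpy as np

      patterns = np.array([[1, -1,  1, -1,  1, -1],
                           [1,  1,  1, -1, -1, -1]])
      W = sum(np.outer(p, p) for p in patterns).astype(float)
      np.fill_diagonal(W, 0.0)   # no neuron couples to itself

      state = np.array([1, -1, -1, -1, 1, -1])  # pattern 0 with one bit flipped
      for _ in range(5):                        # asynchronous threshold updates
          for i in range(len(state)):
              state[i] = 1 if W[i] @ state >= 0 else -1

      print("recalled pattern 0:", np.array_equal(state, patterns[0]))  # True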

  20. Computer network security for the radiology enterprise.

    PubMed

    Eng, J

    2001-08-01

    As computer networks become an integral part of the radiology practice, it is appropriate to raise concerns regarding their security. The purpose of this article is to present an overview of computer network security risks and preventive strategies as they pertain to the radiology enterprise. A number of technologies are available that provide strong deterrence against attacks on networks and networked computer systems in the radiology enterprise. While effective, these technologies must be supplemented with vigilant user and system management.

  1. Efficiently modeling neural networks on massively parallel computers

    NASA Technical Reports Server (NTRS)

    Farber, Robert M.

    1993-01-01

    Neural networks are a very useful tool for analyzing and modeling complex real-world systems. Applying neural network simulations to real-world problems generally involves large amounts of data and massive amounts of computation. To efficiently handle the computational requirements of large problems, we have implemented at Los Alamos a highly efficient neural network compiler for serial computers, vector computers, vector parallel computers, and fine-grain SIMD computers such as the CM-2 Connection Machine. This paper describes the mapping used by the compiler to implement feed-forward backpropagation neural networks for a SIMD (Single Instruction Multiple Data) architecture parallel computer. Thinking Machines Corporation has benchmarked our code at 1.3 billion interconnects per second (approximately 3 gigaflops) on a 64,000-processor CM-2 Connection Machine (Singer 1990). This mapping is applicable to other SIMD computers and can be implemented on MIMD computers such as the CM-5 Connection Machine. Our mapping has virtually no communications overhead with the exception of the communications required for a global summation across the processors, which has a sub-linear runtime growth on the order of O(log(number of processors)). We can efficiently model very large neural networks which have many neurons and interconnects, and our mapping can extend to arbitrarily large networks (within memory limitations) by merging the memory space of separate processors with fast adjacent-processor interprocessor communications. This paper considers the simulation of only feed-forward neural networks, although the method is extendable to recurrent networks.
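
    The one communication step the mapping needs, a global summation, can be illustrated with a toy pairwise-combining reduction (hypothetical values, not the CM-2 implementation): P partial results are combined in ceil(log2(P)) rounds rather than P-1 sequential additions:

      def tree_sum(values):
          """Combine per-processor partial sums in O(log P) pairwise rounds."""
          vals, rounds = list(values), 0
          while len(vals) > 1:
              pairs = [vals[i] + vals[i + 1] for i in range(0, len(vals) - 1, 2)]
              if len(vals) % 2:   # an odd processor carries its value forward
                  pairs.append(vals[-1])
              vals, rounds = pairs, rounds + 1
          return vals[0], rounds

      total, rounds = tree_sum(range(64))   # 64 hypothetical processors
      print(total, "in", rounds, "rounds")  # 2016 in 6 rounds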

  2. ATLAS computing on Swiss Cloud SWITCHengines

    NASA Astrophysics Data System (ADS)

    Haug, S.; Sciacca, F. G.; ATLAS Collaboration

    2017-10-01

    Consolidation towards more computing at flat budgets, beyond what pure chip technology can offer, is a requirement for the full scientific exploitation of the future data from the Large Hadron Collider at CERN in Geneva. One consolidation measure is to exploit cloud infrastructures whenever they are financially competitive. We report on the technical solutions used and the performance achieved running simulation tasks for the ATLAS experiment on SWITCHengines. SWITCHengines is a new infrastructure-as-a-service offering to Swiss academia by the National Research and Education Network SWITCH. While the solutions and performance are general, financial considerations and policies, on which we also report, are country specific.

  3. A computational system for lattice QCD with overlap Dirac quarks

    NASA Astrophysics Data System (ADS)

    Chiu, Ting-Wai; Hsieh, Tung-Han; Huang, Chao-Hsi; Huang, Tsung-Ren

    2003-05-01

    We outline the essential features of a Linux PC cluster which is now being developed at National Taiwan University, and discuss how to optimize its hardware and software for lattice QCD with overlap Dirac quarks. At present, the cluster consists of 30 nodes, each with one Pentium 4 processor (1.6/2.0 GHz), one Gbyte of PC800 RDRAM, one 40/80-Gbyte hard disk, and a network card. The speed of this system is estimated to be 30 Gflops, and its price/performance ratio is better than $1.0/Mflops for 64-bit (double precision) computations in quenched lattice QCD with overlap Dirac quarks.
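
    A back-of-envelope check of the quoted price/performance figure (the per-node cost below is an assumption for illustration, not stated in the abstract):

      nodes = 30
      cost_per_node_usd = 1_000        # assumed commodity PC price
      sustained_mflops = 30 * 1_000    # 30 Gflops estimated for the cluster
      print(nodes * cost_per_node_usd / sustained_mflops, "USD/Mflops")  # 1.0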

  4. Hyperswitch Communication Network Computer

    NASA Technical Reports Server (NTRS)

    Peterson, John C.; Chow, Edward T.; Priel, Moshe; Upchurch, Edwin T.

    1993-01-01

    The Hyperswitch Communications Network (HCN) computer is a prototype multiple-processor computer being developed. It incorporates an improved version of the hyperswitch communication network described in "Hyperswitch Network For Hypercube Computer" (NPO-16905) and is designed to support high-level software and expansion of itself. The HCN computer is a message-passing, multiple-instruction/multiple-data computer offering significant advantages over older single-processor and bus-based multiple-processor computers with respect to price/performance ratio, reliability, availability, and manufacturing. The design of the HCN operating-system software provides a flexible computing environment accommodating both parallel and distributed processing. It also achieves a balance among the following competing factors: performance in processing and communications, ease of use, and tolerance of (and recovery from) faults.

  5. Towards a Scalable and Adaptive Application Support Platform for Large-Scale Distributed E-Sciences in High-Performance Network Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Chase Qishi; Zhu, Michelle Mengxia

    The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to produce unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over the Internet or dedicated networks and hence exhibit an inherent dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows, as simple as linear pipelines or as complex as directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users oftentimes need to manually configure their computing tasks over networks in an ad hoc manner, significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. This project is to design and develop a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored for a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments. SWAMP will enable the automation and management of the entire process of scientific workflows with the convenience of a few mouse clicks while hiding the implementation and technical details from end users. Particularly, we will consider two types of applications with distinct performance requirements: data-centric and service-centric applications. For data-centric applications, the main workflow task involves large-volume data generation, catalog, storage, and movement, typically from supercomputers or experimental facilities to a team of geographically distributed users; for service-centric applications, the main focus of the workflow is on data archiving, preprocessing, filtering, synthesis, visualization, and other application-specific analysis. We will conduct a comprehensive comparison of existing workflow systems and choose the best-suited one with open-source code, a flexible system structure, and a large user base as the starting point for our development. Based on the chosen system, we will develop and integrate new components including a black-box design of computing modules, performance monitoring and prediction, and workflow optimization and reconfiguration, which are missing from existing workflow systems. A modular design separating specification, execution, and monitoring aspects will be adopted to establish a common generic infrastructure suited for a wide spectrum of science applications.
    We will further design and develop efficient workflow mapping and scheduling algorithms to optimize workflow performance in terms of minimum end-to-end delay, maximum frame rate, and highest reliability. We will develop and demonstrate the SWAMP system in a local environment, the grid network, and the 100 Gbps Advanced Network Initiative (ANI) testbed. The demonstration will target scientific applications in climate modeling and high energy physics, and the functions to be demonstrated include workflow deployment, execution, steering, and reconfiguration. Throughout the project period, we will work closely with the science communities in the fields of climate modeling and high energy physics, including the Spallation Neutron Source (SNS) and Large Hadron Collider (LHC) projects, to mature the system for production use.
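
    One ingredient of such mapping and scheduling can be sketched directly (the four-task workflow and costs below are invented): the critical path of a workflow DAG gives the minimum achievable end-to-end delay assuming unlimited resources:

      from functools import lru_cache

      tasks = {"acquire": 5, "filter": 3, "simulate": 20, "visualize": 4}
      deps = {"filter": ["acquire"], "simulate": ["filter"],
              "visualize": ["simulate"]}

      @lru_cache(maxsize=None)
      def earliest_finish(task):
          # a task starts once all of its dependencies have finished
          start = max((earliest_finish(d) for d in deps.get(task, ())), default=0)
          return start + tasks[task]

      print("end-to-end delay lower bound:",
            max(earliest_finish(t) for t in tasks))   # 32 time units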

  6. Network Patch Cables Demystified: A Super Activity for Computer Networking Technology

    ERIC Educational Resources Information Center

    Brown, Douglas L.

    2004-01-01

    This article de-mystifies network patch cable secrets so that people can connect their computers and transfer those pesky files--without screaming at the cables. It describes a network cabling activity that can offer students a great hands-on opportunity for working with the tools, techniques, and media used in computer networking. Since the…

  7. Computing, information, and communications: Technologies for the 21. Century

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1998-11-01

    To meet the challenges of a radically new and technologically demanding century, the Federal Computing, Information, and Communications (CIC) programs are investing in long-term research and development (R and D) to advance computing, information, and communications in the United States. CIC R and D programs help Federal departments and agencies to fulfill their evolving missions, assure the long-term national security, better understand and manage the physical environment, improve health care, help improve the teaching of children, provide tools for lifelong training and distance learning to the workforce, and sustain critical US economic competitiveness. One of the nine committees of the National Science and Technology Council (NSTC), the Committee on Computing, Information, and Communications (CCIC)--through its CIC R and D Subcommittee--coordinates R and D programs conducted by twelve Federal departments and agencies in cooperation with US academia and industry. These R and D programs are organized into five Program Component Areas: (1) HECC--High End Computing and Computation; (2) LSN--Large Scale Networking, including the Next Generation Internet Initiative; (3) HCS--High Confidence Systems; (4) HuCS--Human Centered Systems; and (5) ETHR--Education, Training, and Human Resources. A brief synopsis of FY 1997 accomplishments and FY 1998 goals by PCA is presented. This report, which supplements the President's Fiscal Year 1998 Budget, describes the interagency CIC programs.

  8. Computerized patient identification for the EMBRACA clinical trial using real-time data from the PRAEGNANT network for metastatic breast cancer patients.

    PubMed

    Hein, Alexander; Gass, Paul; Walter, Christina Barbara; Taran, Florin-Andrei; Hartkopf, Andreas; Overkamp, Friedrich; Kolberg, Hans-Christian; Hadji, Peyman; Tesch, Hans; Ettl, Johannes; Wuerstlein, Rachel; Lounsbury, Debra; Lux, Michael P; Lüftner, Diana; Wallwiener, Markus; Müller, Volkmar; Belleville, Erik; Janni, Wolfgang; Fehm, Tanja N; Wallwiener, Diethelm; Ganslandt, Thomas; Ruebner, Matthias; Beckmann, Matthias W; Schneeweiss, Andreas; Fasching, Peter A; Brucker, Sara Y

    2016-07-01

    As breast cancer is a diverse disease, clinical trials are becoming increasingly diversified and are consequently being conducted in very small subgroups of patients, making study recruitment increasingly difficult. The aim of this study was to assess the use of data from a remote data entry system that serves a large national registry for metastatic breast cancer. The PRAEGNANT network is a real-time registry with an integrated biomaterials bank that was designed as a scientific study and as a means of identifying patients who are eligible for clinical trials, based on clinical and molecular information. Here, we report on the automated use of the clinical data documented to identify patients for a clinical trial (EMBRACA) for patients with metastatic breast cancer. The patients' charts were assessed by two independent physicians involved in the clinical trial and also by a computer program that tested patients for eligibility using a structured query language script. In all, 326 patients from two study sites in the PRAEGNANT network were included in the analysis. Using expert assessment, 120 of the 326 patients (37 %) appeared to be eligible for inclusion in the EMBRACA study; with the computer algorithm assessment, a total of 129 appeared to be eligible. The sensitivity of the computer algorithm was 0.87 and its specificity was 0.88. Using computer-based identification of patients for clinical trials appears feasible. With the instrument's high specificity, its application in a large cohort of patients appears to be feasible, and the workload for reassessing the patients is limited.
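
    The reported accuracy figures can be reproduced from the counts in the abstract once a true-positive split is assumed (the abstract gives 326 patients, 120 expert-eligible, and 129 algorithm-eligible; the split below is inferred, not stated):

      tp = 104              # eligible per expert AND per algorithm (assumed)
      fn = 120 - tp         # expert-eligible but missed by the algorithm
      fp = 129 - tp         # algorithm-eligible but rejected by the expert
      tn = 326 - tp - fn - fp

      sensitivity = tp / (tp + fn)   # 104/120
      specificity = tn / (tn + fp)   # 181/206
      print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
      # -> sensitivity = 0.87, specificity = 0.88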

  9. Offline Social Relationships and Online Cancer Communication: Effects of Social and Family Support on Online Social Network Building.

    PubMed

    Namkoong, Kang; Shah, Dhavan V; Gustafson, David H

    2017-11-01

    This study investigates how social support and family relationship perceptions influence breast cancer patients' online communication networks in a computer-mediated social support (CMSS) group. To examine social interactions in the CMSS group, we identified two types of online social networks: open and targeted communication networks. The open communication network reflects group communication behaviors (i.e., one-to-many or "broadcast" communication) in which the intended audience is not specified; in contrast, the targeted communication network reflects interpersonal discourses (i.e., one-to-one or directed communication) in which the audience for the message is specified. The communication networks were constructed by tracking CMSS group usage data of 237 breast cancer patients who participated in one of two National Cancer Institute-funded randomized clinical trials. Eligible subjects were within 2 months of a diagnosis of primary breast cancer or recurrence at the time of recruitment. Findings reveal that breast cancer patients who perceived less availability of offline social support had a larger social network size in the open communication network. In contrast, those who perceived less family cohesion had a larger targeted communication network in the CMSS group, meaning they were inclined to use the CMSS group for developing interpersonal relationships.

  10. Chinese Herbal Medicine Meets Biological Networks of Complex Diseases: A Computational Perspective

    PubMed Central

    Gu, Shuo

    2017-01-01

    With the rapid development of cheminformatics, computational biology, and systems biology, great progress has been made recently in the computational research of Chinese herbal medicine, with an in-depth understanding of pharmacognosy. This paper summarized these studies in terms of computational methods, traditional Chinese medicine (TCM) compound databases, and TCM network pharmacology. Furthermore, we chose the arachidonic acid metabolic network as a case study to demonstrate the regulatory function of herbal medicine in the treatment of inflammation at the network level. Finally, a computational workflow for network-based TCM study, derived from our previous successful applications, was proposed. PMID:28690664

  11. Chinese Herbal Medicine Meets Biological Networks of Complex Diseases: A Computational Perspective.

    PubMed

    Gu, Shuo; Pei, Jianfeng

    2017-01-01

    With the rapid development of cheminformatics, computational biology, and systems biology, great progress has been made recently in the computational research of Chinese herbal medicine, with an in-depth understanding of pharmacognosy. This paper summarized these studies in terms of computational methods, traditional Chinese medicine (TCM) compound databases, and TCM network pharmacology. Furthermore, we chose the arachidonic acid metabolic network as a case study to demonstrate the regulatory function of herbal medicine in the treatment of inflammation at the network level. Finally, a computational workflow for network-based TCM study, derived from our previous successful applications, was proposed.

  12. Molecular Dynamics Study of the Proposed Proton Transport Pathways in [FeFe]-Hydrogenase

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ginovska-Pangovska, Bojana; Ho, Ming-Hsun; Linehan, John C.

    2014-01-15

    Possible proton channels in Clostridium pasteurianum [FeFe]-hydrogenase were investigated with molecular dynamics simulations. This study was undertaken to discern proposed channels, compare their properties, evaluate the functional channel, and provide insight into the features of an active proton channel. Our simulations suggest that protons are not transported through water wires. Instead, a five-residue motif (E282, S319, E279, HOH, C299) was found to be the likely channel, consistent with experimental observations. This channel connects the surface of the enzyme and the di-thiomethylamine bridge of the catalytic H-cluster, permitting the transport of protons. The channel was found to have a persistent hydrogen-bonded core (residues E279 to S319), with less persistent hydrogen bonds at the ends of the channel. The hydrogen bond occupancy in this network was found to be sensitive to the protonation state of the residues in the channel, with different protonation states enhancing or stabilizing hydrogen bonding in different regions of the network. Single-site mutations to non-hydrogen-bonding residues break the hydrogen bonding network at that residue, consistent with experimental observations showing catalyst inactivation. In many cases, these mutations alter the hydrogen bonding in other regions of the channel, which may be equally important in catalytic failure. A correlation between the protein dynamics near the proton channel and the redox partner binding regions was also found as a function of protonation state. The likely mechanism of proton movement in [FeFe]-hydrogenases is discussed based on the structural analysis presented here. This work was funded by the DOE Office of Science Early Career Research Program through the Office of Basic Energy Sciences. Computational resources were provided at W. R. Wiley Environmental Molecular Science Laboratory (EMSL), a national scientific user facility sponsored by the Department of Energy's Office of Biological and Environmental Research located at Pacific Northwest National Laboratory, and a portion of the research was performed using PNNL Institutional Computing at Pacific Northwest National Laboratory. Pacific Northwest National Laboratory is operated by Battelle for the U.S. Department of Energy.

  13. HeNCE: A Heterogeneous Network Computing Environment

    DOE PAGES

    Beguelin, Adam; Dongarra, Jack J.; Geist, George Al; ...

    1994-01-01

    Network computing seeks to utilize the aggregate resources of many networked computers to solve a single problem. In so doing it is often possible to obtain supercomputer performance from an inexpensive local area network. The drawback is that network computing is complicated and error prone when done by hand, especially if the computers have different operating systems and data formats and are thus heterogeneous. The heterogeneous network computing environment (HeNCE) is an integrated graphical environment for creating and running parallel programs over a heterogeneous collection of computers. It is built on a lower level package called parallel virtual machine (PVM). The HeNCE philosophy of parallel programming is to have the programmer graphically specify the parallelism of a computation and to automate, as much as possible, the tasks of writing, compiling, executing, debugging, and tracing the network computation. Key to HeNCE is a graphical language based on directed graphs that describe the parallelism and data dependencies of an application. Nodes in the graphs represent conventional Fortran or C subroutines and the arcs represent data and control flow. This article describes the present state of HeNCE, its capabilities, limitations, and areas of future research.
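
    The graph-based programming model described here can be illustrated with a short sketch: a hypothetical four-node task graph executed in parallel once each node's predecessors complete. This mirrors the HeNCE idea in plain Python; it is not HeNCE or PVM itself, and the graph and task functions are invented examples:

        # Minimal sketch: run a DAG of tasks, dispatching each node once its
        # dependencies have finished, in the spirit of HeNCE's graph language.
        from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

        graph = {                 # node -> list of nodes it depends on
            "load": [],
            "filter": ["load"],
            "stats": ["load"],
            "report": ["filter", "stats"],
        }
        tasks = {name: (lambda n=name: print(f"running {n}")) for name in graph}

        def run(graph, tasks):
            done, futures = set(), {}
            with ThreadPoolExecutor() as pool:
                while len(done) < len(graph):
                    # Submit every node whose dependencies are all complete.
                    ready = [n for n in graph
                             if n not in done and n not in futures
                             and all(d in done for d in graph[n])]
                    for n in ready:
                        futures[n] = pool.submit(tasks[n])
                    if not any(f.done() for f in futures.values()):
                        wait(list(futures.values()), return_when=FIRST_COMPLETED)
                    for n in list(futures):
                        if futures[n].done():
                            futures[n].result()   # re-raise any task error
                            done.add(n)
                            del futures[n]

        run(graph, tasks)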

  14. Big Data over a 100G network at Fermilab

    DOE PAGES

    Garzoglio, Gabriele; Mhashilkar, Parag; Kim, Hyunwoo; ...

    2014-06-11

    As the need for Big Data in science becomes ever more relevant, networks around the world are upgrading their infrastructure to support high-speed interconnections. To support its mission, the high-energy physics community as a pioneer in Big Data has always relied on the Fermi National Accelerator Laboratory to be at the forefront of storage and data movement. This need was reiterated in recent years with the data-taking rate of the major LHC experiments reaching tens of petabytes per year. At Fermilab, this resulted regularly in peaks of data movement on the Wide area network (WAN) in and out of the laboratory of about 30 Gbit/s, and on the Local area network (LAN) between storage and computational farms of 160 Gbit/s. To address these ever-increasing needs, as of this year Fermilab is connected to the Energy Sciences Network (ESnet) through a 100 Gb/s link. To understand the optimal system- and application-level configuration to interface computational systems with the new high-speed interconnect, Fermilab has deployed a Network Research & Development facility connected to the ESnet 100G Testbed. For the past two years, the High Throughput Data Program (HTDP) has been using the Testbed to identify gaps in data movement middleware [5] when transferring data at these high speeds. The program has published evaluations of technologies typically used in High Energy Physics, such as GridFTP [4], XrootD [9], and Squid [8]. Furthermore, this work presents the new R&D facility and the continuation of the evaluation program.
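
    As a toy illustration of the throughput measurements such testbed evaluations rest on, the sketch below times a memory-to-memory TCP transfer over the loopback interface. The buffer and transfer sizes are arbitrary assumptions; a real 100G evaluation would tune kernel windows and use tools such as GridFTP or XrootD, as the paper does:

        # Minimal sketch: measure TCP throughput over loopback.
        import socket, threading, time

        PAYLOAD = b"x" * (1 << 20)          # 1 MiB chunks
        TOTAL = 200 * len(PAYLOAD)          # 200 MiB per run

        def sink(server):
            conn, _ = server.accept()
            received = 0
            while received < TOTAL:
                chunk = conn.recv(1 << 20)
                if not chunk:
                    break
                received += len(chunk)
            conn.close()

        server = socket.create_server(("127.0.0.1", 0))
        port = server.getsockname()[1]
        t = threading.Thread(target=sink, args=(server,))
        t.start()

        client = socket.create_connection(("127.0.0.1", port))
        start = time.perf_counter()
        sent = 0
        while sent < TOTAL:
            client.sendall(PAYLOAD)
            sent += len(PAYLOAD)
        client.close()
        t.join()
        server.close()
        elapsed = time.perf_counter() - start
        print(f"{8 * TOTAL / elapsed / 1e9:.2f} Gbit/s over loopback")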

  15. Big Data over a 100G network at Fermilab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garzoglio, Gabriele; Mhashilkar, Parag; Kim, Hyunwoo

    As the need for Big Data in science becomes ever more relevant, networks around the world are upgrading their infrastructure to support high-speed interconnections. To support its mission, the high-energy physics community as a pioneer in Big Data has always relied on the Fermi National Accelerator Laboratory to be at the forefront of storage and data movement. This need was reiterated in recent years with the data-taking rate of the major LHC experiments reaching tens of petabytes per year. At Fermilab, this resulted regularly in peaks of data movement on the Wide area network (WAN) in and out of the laboratory of about 30 Gbit/s, and on the Local area network (LAN) between storage and computational farms of 160 Gbit/s. To address these ever-increasing needs, as of this year Fermilab is connected to the Energy Sciences Network (ESnet) through a 100 Gb/s link. To understand the optimal system- and application-level configuration to interface computational systems with the new high-speed interconnect, Fermilab has deployed a Network Research & Development facility connected to the ESnet 100G Testbed. For the past two years, the High Throughput Data Program (HTDP) has been using the Testbed to identify gaps in data movement middleware [5] when transferring data at these high speeds. The program has published evaluations of technologies typically used in High Energy Physics, such as GridFTP [4], XrootD [9], and Squid [8]. Furthermore, this work presents the new R&D facility and the continuation of the evaluation program.

  16. Analysis of Computer Network Information Based on "Big Data"

    NASA Astrophysics Data System (ADS)

    Li, Tianli

    2017-11-01

    As computer networks and big data become part of everyday life, people use computers to make their lives more convenient, but at the same time many network information-security problems demand attention. This paper analyzes the information security of computer networks from a "big data" perspective and puts forward some solutions.

  17. Pacific Educational Computer Network Study. Final Report.

    ERIC Educational Resources Information Center

    Hawaii Univ., Honolulu. ALOHA System.

    The Pacific Educational Computer Network Feasibility Study examined technical and non-technical aspects of the formation of an international Pacific Area computer network for higher education. The technical study covered the assessment of the feasibility of a packet-switched satellite and radio ground distribution network for data transmission…

  18. Mobile Computing and Ubiquitous Networking: Concepts, Technologies and Challenges.

    ERIC Educational Resources Information Center

    Pierre, Samuel

    2001-01-01

    Analyzes concepts, technologies and challenges related to mobile computing and networking. Defines basic concepts of cellular systems. Describes the evolution of wireless technologies that constitute the foundations of mobile computing and ubiquitous networking. Presents characterization and issues of mobile computing. Analyzes economical and…

  19. A Science Information Infrastructure for Access to Earth and Space Science Data through the Nation's Science Museums

    NASA Technical Reports Server (NTRS)

    Murray, S.

    1999-01-01

    In this project, we worked with the University of California at Berkeley/Center for Extreme Ultraviolet Astrophysics and five science museums (the National Air and Space Museum, the Science Museum of Virginia, the Lawrence Hall of Science, the Exploratorium, and the New York Hall of Science) to formulate plans for computer-based laboratories located at these museums. These Science Learning Laboratories would be networked and provided with real Earth and space science observations, as well as appropriate lesson plans, that would allow the general public to directly access and manipulate the actual remote sensing data, much as a scientist would.

  20. Assessment of the MHD capability in the ATHENA code using data from the ALEX facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roth, P.A.

    1989-03-01

    The ATHENA (Advanced Thermal Hydraulic Energy Network Analyzer) code is a system transient analysis code with multi-loop, multi-fluid capabilities, which is available to the fusion community at the National Magnetic Fusion Energy Computing Center (NMFECC). The work reported here assesses the ATHENA magnetohydrodynamic (MHD) pressure drop model for liquid metals flowing through a strong magnetic field. An ATHENA model was developed for two simple-geometry, adiabatic test sections used in the Argonne Liquid Metal Experiment (ALEX) at Argonne National Laboratory (ANL). The pressure drops calculated by ATHENA agreed well with the experimental results from the ALEX facility.

  1. The Social Network of Tracer Variations and O(100) Uncertain Photochemical Parameters in the Community Atmosphere Model

    NASA Astrophysics Data System (ADS)

    Lucas, D. D.; Labute, M.; Chowdhary, K.; Debusschere, B.; Cameron-Smith, P. J.

    2014-12-01

    Simulating the atmospheric cycles of ozone, methane, and other radiatively important trace gases in global climate models is computationally demanding and requires the use of hundreds of photochemical parameters with uncertain values. Quantitative analysis of the effects of these uncertainties on tracer distributions, radiative forcing, and other model responses is hindered by the "curse of dimensionality." We describe efforts to overcome this curse using ensemble simulations and advanced statistical methods. Uncertainties from 95 photochemical parameters in the trop-MOZART scheme were sampled using a Monte Carlo method and propagated through 10,000 simulations of the single-column version of the Community Atmosphere Model (CAM). The variance of the ensemble was represented as a network with nodes and edges, and the topology and connections in the network were analyzed using lasso regression, Bayesian compressive sensing, and centrality measures from the field of social network theory. Despite the limited sample size for this high-dimensional problem, our methods determined the key sources of variation and co-variation in the ensemble and identified important clusters in the network topology. Our results can be used to better understand the flow of photochemical uncertainty in simulations using CAM and other climate models. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and supported by the DOE Office of Science through the Scientific Discovery Through Advanced Computing (SciDAC).
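
    A minimal sketch of the statistical pipeline described above, on synthetic data: lasso regression selects the influential parameters from a Monte Carlo ensemble, and a toy co-selection network is then ranked with a centrality measure. scikit-learn and networkx are assumed available, and the trop-MOZART specifics are not reproduced:

        # Minimal sketch: identify influential uncertain parameters via lasso,
        # then inspect a toy network over the selected parameters.
        import numpy as np
        import networkx as nx
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(1)
        n_runs, n_params = 500, 95                  # ensemble size, uncertain inputs
        X = rng.normal(size=(n_runs, n_params))     # sampled parameter perturbations
        beta = np.zeros(n_params)
        beta[[3, 17, 42]] = [2.0, -1.5, 1.0]        # a few truly influential inputs
        y = X @ beta + 0.1 * rng.normal(size=n_runs)  # model response (e.g. ozone)

        model = Lasso(alpha=0.05).fit(X, y)
        active = np.flatnonzero(model.coef_)        # parameters lasso kept
        print("influential parameters:", active)

        # Link parameters whose effects co-occur; here simply chain the active set.
        G = nx.Graph(list(zip(active[:-1], active[1:])))
        print(nx.degree_centrality(G))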

  2. Engaging the Nation’s Critical Infrastructure Sector to Deter Cyber Threats

    DTIC Science & Technology

    2013-03-01

    is the component of CyberOps that extends cyber power beyond the defensive boundaries of the GIG to detect, deter, deny, and defeat adversaries... economy. DDOS attacks are based on multiple malware-infected personal computers, organized into networks called botnets, and are directed by... not condemn the actions of those involved. Of the two attacks, on Estonia and Georgia, it was Estonia that suffered the greater damage to its economy.

  3. Snow Leopard Cloud: A Multi-national Education Training and Experimentation Cloud and Its Security Challenges

    NASA Astrophysics Data System (ADS)

    Cayirci, Erdal; Rong, Chunming; Huiskamp, Wim; Verkoelen, Cor

    Military/civilian education training and experimentation networks (ETEN) are an important application area for the cloud computing concept. However, major security challenges have to be overcome to realize an ETEN. These challenges can be categorized as security challenges typical to any cloud and multi-level security challenges specific to an ETEN environment. The cloud approach for ETEN is introduced and its security challenges are explained in this paper.

  4. Mexican Space Weather Service (SCIESMEX)

    NASA Astrophysics Data System (ADS)

    Gonzalez-Esparza, A.; De la Luz, V.; Mejia-Ambriz, J. C.; Aguilar-Rodriguez, E.; Corona-Romero, P.; Gonzalez, L. X.

    2015-12-01

    Recent modifications of the Civil Protection Law in Mexico now include specific mention of space hazards and space weather phenomena. During the last few years, the UN has promoted international cooperation on Space Weather awareness, studies, and monitoring. Internal and external conditions motivated the creation of a Space Weather Service in Mexico (SCIESMEX). The SCIESMEX (www.sciesmex.unam.mx) is operated by the Geophysics Institute at the National Autonomous University of Mexico (UNAM). The UNAM has experience operating several critical national services, including the National Seismological Service (SSN), and has a well-established scientific group with expertise in space physics and solar-terrestrial phenomena. The SCIESMEX is also related to the recent creation of the Mexican Space Agency (AEM). The project combines a network of different ground instruments covering solar, interplanetary, geomagnetic, and ionospheric observations. The SCIESMEX already operates computing infrastructure running the web application, a virtual observatory, and a high-performance computing server to run numerical models. SCIESMEX participates in the International Space Environment Services (ISES) and in the Inter-programme Coordination Team on Space Weather (ICTSW) of the World Meteorological Organization (WMO).

  5. Interactive Forecasting with the National Weather Service River Forecast System

    NASA Technical Reports Server (NTRS)

    Smith, George F.; Page, Donna

    1993-01-01

    The National Weather Service River Forecast System (NWSRFS) consists of several major hydrometeorologic subcomponents to model the physics of the flow of water through the hydrologic cycle. The entire NWSRFS currently runs in both mainframe and minicomputer environments, using command-oriented text input to control the system computations. As computationally powerful and graphically sophisticated scientific workstations became available, the National Weather Service (NWS) recognized that a graphically based, interactive environment would enhance the accuracy and timeliness of NWS river and flood forecasts. Consequently, the operational forecasting portion of the NWSRFS has been ported to run under a UNIX operating system, with X windows as the display environment on a system of networked scientific workstations. In addition, the NWSRFS Interactive Forecast Program was developed to provide a graphical user interface to allow the forecaster to control NWSRFS program flow and to make adjustments to forecasts as necessary. The potential market for water resources forecasting is immense and largely untapped. Any private company able to market the river forecasting technologies currently developed by the NWS Office of Hydrology could provide benefits to many information users and profit from providing these services.

  6. A system to build distributed multivariate models and manage disparate data sharing policies: implementation in the scalable national network for effectiveness research.

    PubMed

    Meeker, Daniella; Jiang, Xiaoqian; Matheny, Michael E; Farcas, Claudiu; D'Arcy, Michel; Pearlman, Laura; Nookala, Lavanya; Day, Michele E; Kim, Katherine K; Kim, Hyeoneui; Boxwala, Aziz; El-Kareh, Robert; Kuo, Grace M; Resnic, Frederic S; Kesselman, Carl; Ohno-Machado, Lucila

    2015-11-01

    Centralized and federated models for sharing data in research networks currently exist. To build multivariate data analysis for centralized networks, transfer of patient-level data to a central computation resource is necessary. The authors implemented distributed multivariate models for federated networks in which patient-level data is kept at each site and data exchange policies are managed in a study-centric manner. The objective was to implement infrastructure that supports the functionality of some existing research networks (e.g., cohort discovery, workflow management, and estimation of multivariate analytic models on centralized data) while adding additional important new features, such as algorithms for distributed iterative multivariate models, a graphical interface for multivariate model specification, synchronous and asynchronous response to network queries, investigator-initiated studies, and study-based control of staff, protocols, and data sharing policies. Based on the requirements gathered from statisticians, administrators, and investigators from multiple institutions, the authors developed infrastructure and tools to support multisite comparative effectiveness studies using web services for multivariate statistical estimation in the SCANNER federated network. The authors implemented massively parallel (map-reduce) computation methods and a new policy management system to enable each study initiated by network participants to define the ways in which data may be processed, managed, queried, and shared. The authors illustrated the use of these systems among institutions with highly different policies and operating under different state laws. Federated research networks need not limit distributed query functionality to count queries, cohort discovery, or independently estimated analytic models. Multivariate analyses can be efficiently and securely conducted without patient-level data transport, allowing institutions with strict local data storage requirements to participate in sophisticated analyses based on federated research networks. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
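
    As an illustration of distributed estimation without patient-level data transport, the sketch below fits a logistic regression across three simulated sites by exchanging only aggregate gradients with a coordinator. This shows the general idea only; SCANNER's actual web-service protocol and policy controls are more involved:

        # Minimal sketch: federated logistic regression by gradient aggregation.
        # Each site keeps its patient-level data; only summed gradients move.
        import numpy as np

        rng = np.random.default_rng(2)

        def make_site(n):
            X = rng.normal(size=(n, 3))
            y = (X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n) > 0).astype(float)
            return X, y

        sites = [make_site(n) for n in (120, 200, 80)]   # three hospitals

        def site_gradient(X, y, w):
            p = 1.0 / (1.0 + np.exp(-X @ w))             # local predictions
            return X.T @ (p - y)                          # aggregate statistic only

        w = np.zeros(3)
        for _ in range(200):                              # synchronous iterations
            grad = sum(site_gradient(X, y, w) for X, y in sites)
            w -= 0.01 * grad / sum(len(y) for _, y in sites)
        print("pooled coefficients:", w.round(2))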

  7. Distributed Computer Networks in Support of Complex Group Practices

    PubMed Central

    Wess, Bernard P.

    1978-01-01

    The economics of medical computer networks are presented in context with the patient care and administrative goals of medical networks. Design alternatives and network topologies are discussed with an emphasis on medical network design requirements in distributed data base design, telecommunications, satellite systems, and software engineering. The success of the medical computer networking technology is predicated on the ability of medical and data processing professionals to design comprehensive, efficient, and virtually impenetrable security systems to protect data bases, network access and services, and patient confidentiality.

  8. Military clouds: utilization of cloud computing systems at the battlefield

    NASA Astrophysics Data System (ADS)

    Süleyman, Sarıkürk; Volkan, Karaca; İbrahim, Kocaman; Ahmet, Şirzai

    2012-05-01

    Cloud computing is known as a novel information technology (IT) concept, which involves facilitated and rapid access to networks, servers, data saving media, applications and services via the Internet with minimum hardware requirements. Use of information systems and technologies at the battlefield is not new. Information superiority is a force multiplier and is crucial to mission success. Recent advances in information systems and technologies provide new means to decision makers and users in order to gain information superiority. These developments in information technologies lead to a new term, which is known as network centric capability. Similar to network centric capable systems, cloud computing systems are operational today. In the near future extensive use of military clouds at the battlefield is predicted. Integrating cloud computing logic into network centric applications will increase the flexibility, cost-effectiveness, efficiency and accessibility of network-centric capabilities. In this paper, cloud computing and network centric capability concepts are defined. Some commercial cloud computing products and applications are mentioned. Network centric capable applications are covered. Cloud computing supported battlefield applications are analyzed. The effects of cloud computing systems on network centric capability and on the information domain in future warfare are discussed. Battlefield opportunities and novelties which might be introduced to network centric capability by cloud computing systems are researched. The role of military clouds in future warfare is proposed in this paper. It was concluded that military clouds will be indispensable components of the future battlefield. Military clouds have the potential of improving network centric capabilities, increasing situational awareness at the battlefield and facilitating the settlement of information superiority.

  9. CLARET user's manual: Mainframe Logs. Revision 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frobose, R.H.

    1984-11-12

    CLARET (Computer Logging and RETrieval) is a stand-alone PDP 11/23 system that can support 16 terminals. It provides a forms-oriented front end by which operators enter online activity logs for the Lawrence Livermore National Laboratory's OCTOPUS computer network. The logs are stored on the PDP 11/23 disks for later retrieval, and hardcopy reports are generated both automatically and upon request. Online viewing of the current logs is provided to management. As each day's logs are completed, the information is automatically sent to a CRAY and included in an online database system. The terminal used for the CLARET system is a dual-port Hewlett Packard 2626 terminal that can be used as either the CLARET logging station or as an independent OCTOPUS terminal. Because this is a stand-alone system, it does not depend on the availability of the OCTOPUS network to run and, in the event of a power failure, can be brought up independently.

  10. Processing large remote sensing image data sets on Beowulf clusters

    USGS Publications Warehouse

    Steinwand, Daniel R.; Maddox, Brian; Beckmann, Tim; Schmidt, Gail

    2003-01-01

    High-performance computing is often concerned with the speed at which floating-point calculations can be performed. The architectures of many parallel computers and/or their network topologies are based on these investigations. Often, benchmarks resulting from these investigations are compiled with little regard to how a large dataset would move about in these systems. This part of the Beowulf study addresses that concern by looking at specific applications software and system-level modifications. Applications include an implementation of a smoothing filter for time-series data, a parallel implementation of the decision tree algorithm used in the Landcover Characterization project, a parallel Kriging algorithm used to fit point data collected in the field on invasive species to a regular grid, and modifications to the Beowulf project's resampling algorithm to handle larger, higher resolution datasets at a national scale. Systems-level investigations include a feasibility study on Flat Neighborhood Networks and modifications of that concept with Parallel File Systems.
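
    One of the applications mentioned, a smoothing filter for time-series data, parallelizes naturally by splitting the series across workers, much as the study splits it across cluster nodes. A minimal sketch using processes on one machine, with boundary effects at chunk edges ignored for brevity:

        # Minimal sketch: data-parallel moving-average smoothing.
        import numpy as np
        from multiprocessing import Pool

        def smooth(chunk, window=5):
            kernel = np.ones(window) / window
            return np.convolve(chunk, kernel, mode="same")

        if __name__ == "__main__":
            series = np.random.default_rng(3).normal(size=1_000_000)
            chunks = np.array_split(series, 8)           # one chunk per worker
            with Pool(processes=8) as pool:
                smoothed = np.concatenate(pool.map(smooth, chunks))
            print(smoothed[:5])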

  11. Configuring compute nodes of a parallel computer in an operational group into a plurality of independent non-overlapping collective networks

    DOEpatents

    Archer, Charles J.; Inglett, Todd A.; Ratterman, Joseph D.; Smith, Brian E.

    2010-03-02

    Methods, apparatus, and products are disclosed for configuring compute nodes of a parallel computer in an operational group into a plurality of independent non-overlapping collective networks, the compute nodes in the operational group connected together for data communications through a global combining network, that include: partitioning the compute nodes in the operational group into a plurality of non-overlapping subgroups; designating one compute node from each of the non-overlapping subgroups as a master node; and assigning, to the compute nodes in each of the non-overlapping subgroups, class routing instructions that organize the compute nodes in that non-overlapping subgroup as a collective network such that the master node is a physical root.
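
    The partitioning and master-designation steps of the claim can be sketched in a few lines; the subgroup size is an arbitrary assumption, and the hardware-specific class routing instructions are omitted:

        # Minimal sketch: split an operational group into non-overlapping
        # subgroups and designate the first node of each as its master
        # (the physical root of that subgroup's collective network).
        nodes = list(range(32))          # ranks in the operational group
        subgroup_size = 8

        subgroups = [nodes[i:i + subgroup_size]
                     for i in range(0, len(nodes), subgroup_size)]
        masters = [sg[0] for sg in subgroups]

        for sg, master in zip(subgroups, masters):
            print(f"subgroup {sg} -> master node {master}")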

  12. Practical recommendations for strengthening national and regional laboratory networks in Africa in the Global Health Security era.

    PubMed

    Best, Michele; Sakande, Jean

    2016-01-01

    The role of national health laboratories in support of public health response has expanded beyond laboratory testing to include a number of other core functions such as emergency response, training and outreach, communications, laboratory-based surveillance and data management. These functions can only be accomplished by an efficient and resilient national laboratory network that includes public health, reference, clinical and other laboratories. It is a primary responsibility of the national health laboratory in the Ministry of Health to develop and maintain the national laboratory network in the country. In this article, we present practical recommendations, based on 17 years of network development experience, for the development of effective national laboratory networks. These recommendations, and examples of current laboratory networks, are provided to facilitate laboratory network development in other states. The development of resilient, integrated laboratory networks will enhance each state's public health system and is critical to the development of a robust national laboratory response network to meet global health security threats.

  13. Practical recommendations for strengthening national and regional laboratory networks in Africa in the Global Health Security era

    PubMed Central

    2016-01-01

    The role of national health laboratories in support of public health response has expanded beyond laboratory testing to include a number of other core functions such as emergency response, training and outreach, communications, laboratory-based surveillance and data management. These functions can only be accomplished by an efficient and resilient national laboratory network that includes public health, reference, clinical and other laboratories. It is a primary responsibility of the national health laboratory in the Ministry of Health to develop and maintain the national laboratory network in the country. In this article, we present practical recommendations, based on 17 years of network development experience, for the development of effective national laboratory networks. These recommendations, and examples of current laboratory networks, are provided to facilitate laboratory network development in other states. The development of resilient, integrated laboratory networks will enhance each state’s public health system and is critical to the development of a robust national laboratory response network to meet global health security threats. PMID:28879137

  14. Increasing Susceptibility of the Global Network of Food Trade to Climate Disturbances

    NASA Astrophysics Data System (ADS)

    Puma, M. J.; Bose, S.; Chon, S.; Cook, B.

    2013-12-01

    Globalization of agriculture through trade liberalization has led to a dramatic transformation of the global network of food trade. The many benefits of this globalization include greater and more efficient global agricultural production, reduced variability of regional and global food supplies, and savings in global water resources. However, a potential hidden cost is an increasingly fragile network that is more susceptible to shocks or disruptions. Recent studies suggest that complex systems, like the global food trade network, may have architectural features typically associated with the existence of tipping points and susceptibility to collapse. Here we present evidence that this global agricultural network is increasingly connected, homogeneous, and in a state where network nodes (here countries) can flip between alternate states. We use production and trade data from 1986 to 2009 to identify shifts in national self-sufficiency and to quantify changes in connectivity and homogeneity of the wheat, maize and rice trade. We then simulate the possible impacts of climate and crop-disease disruptions, which could potentially trigger a global food crisis through an export-restriction-induced domino effect.
    Figure caption: Changes in self-sufficiency ratio (SSR) over time for various country groups; the SSR is computed from production and trade of cereals and starchy roots. (Top row) Time series of SSR for the Group of Eight + Five (G8+5) countries, where "+ Five" refers to the five leading emerging economies. (Bottom row) Boxplots of average SSR over 1986-1990 and 2005-2009 for countries designated "Annex I" and "Least Developed Countries" (LDC) by the United Nations.
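
    The self-sufficiency ratio used in the figure can be computed directly from production and trade volumes. A minimal sketch with hypothetical country records, assuming the common definition SSR = production / (production + imports - exports):

        # Minimal sketch: self-sufficiency ratio from production and trade.
        def ssr(production, imports, exports):
            return production / (production + imports - exports)

        # Hypothetical country records (tonnes of cereal equivalent).
        records = {"A": (90.0, 30.0, 5.0), "B": (140.0, 10.0, 60.0)}
        for country, (p, i, e) in records.items():
            # SSR < 1 indicates import dependence; SSR > 1 a net exporter.
            print(country, round(ssr(p, i, e), 2))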

  15. History of the numerical aerodynamic simulation program

    NASA Technical Reports Server (NTRS)

    Peterson, Victor L.; Ballhaus, William F., Jr.

    1987-01-01

    The Numerical Aerodynamic Simulation (NAS) program has reached a milestone with the completion of the initial operating configuration of the NAS Processing System Network. This achievement is the first major milestone in the continuing effort to provide a state-of-the-art supercomputer facility for the national aerospace community and to serve as a pathfinder for the development and use of future supercomputer systems. The underlying factors that motivated the initiation of the program are first identified and then discussed. These include the emergence and evolution of computational aerodynamics as a powerful new capability in aerodynamics research and development, the computer power required for advances in the discipline, the complementary nature of computation and wind tunnel testing, and the need for the government to play a pathfinding role in the development and use of large-scale scientific computing systems. Finally, the history of the NAS program is traced from its inception in 1975 to the present time.

  16. LTSS compendium: an introduction to the CDC 7600 and the Livermore Timesharing System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fong, K. W.

    1977-08-15

    This report is an introduction to the CDC 7600 computer and to the Livermore Timesharing System (LTSS) used by the National Magnetic Fusion Energy Computer Center (NMFECC) and the Lawrence Livermore Laboratory Computer Center (LLLCC or Octopus network) on their 7600's. This report is based on a document originally written specifically about the system as it is implemented at NMFECC but has been broadened to point out differences in implementation at LLLCC. It also contains information about LLLCC not relevant to NMFECC. This report is written for computational physicists who want to prepare large production codes to run under LTSS on the 7600's. The generalized discussion of the operating system focuses on creating and executing controllees. This document and its companion, UCID-17557, CDC 7600 LTSS Programming Stratagems, provide a basis for understanding more specialized documents about individual parts of the system.

  17. Information-seeking behavior changes in community-based teaching practices

    PubMed Central

    Byrnes, Jennifer A.; Kulick, Tracy A.; Schwartz, Diane G.

    2004-01-01

    A National Library of Medicine information access grant allowed for a collaborative project to provide computer resources in fourteen clinical practice sites that enabled health care professionals to access medical information via PubMed and the Internet. Health care professionals were taught how to access quality, cost-effective information that was user friendly and would result in improved patient care. Selected sites were located in medically underserved areas and received a computer, a printer, and, during year one, a fax machine. Participants were provided dial-up Internet service or were connected to the affiliated hospital's network. Clinicians were trained in how to search PubMed as a tool for practicing evidence-based medicine and to support clinical decision making. Health care providers were also taught how to find patient-education materials and continuing education programs and how to network with other professionals. Prior to the training, participants completed a questionnaire to assess their computer skills and familiarity with searching the Internet, MEDLINE, and other health-related databases. Responses indicated favorable changes in information-seeking behavior, including an increased frequency in conducting MEDLINE searches and Internet searches for work-related information. PMID:15243639

  18. The Merit Computer Network

    ERIC Educational Resources Information Center

    Aupperle, Eric M.; Davis, Donna L.

    1978-01-01

    The successful Merit Computer Network is examined in terms of both technology and operational management. The network is fully operational and has a significant and rapidly increasing usage, with three major institutions currently sharing computer resources. (Author/CMV)

  19. Research and implementation of monitoring technology for illegal external connections of classified computers

    NASA Astrophysics Data System (ADS)

    Zhang, Hong

    2017-06-01

    In recent years, with the continuous development and application of network technology, network security has come into public view. Illegal external connections, in which a host on a classified internal network also connects to an outside network, are a major source of security threats. At present, most organizations pay a certain degree of attention to network security and have adopted measures such as physically isolating the internal network and installing firewalls at its exits. However, these measures are often defeated by user behavior that violates the security rules. For example, a host may reach the Internet through a wireless link or a second network card, inadvertently forming a two-way bridge between the external network and the classified computer [1]. As a result, important or confidential documents can leak even while the user remains completely unaware. Out-of-band monitoring of classified computers can largely prevent such violations by detecting the offending connection behavior. In this paper, we research and discuss this monitoring technology for classified computers.
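
    One simple client-side check implied by this discussion is detecting dual-homed hosts. The sketch below flags machines with more than one active network interface using the third-party psutil package; the loopback filter and the one-interface policy are illustrative assumptions, not the paper's method:

        # Minimal sketch: warn when a host has multiple active interfaces,
        # a possible sign of an illegal bridge between isolated and external nets.
        import psutil

        active = [name for name, stats in psutil.net_if_stats().items()
                  if stats.isup and name != "lo"]       # ignore loopback
        if len(active) > 1:
            print("WARNING: multiple active interfaces:", active)
        else:
            print("single active interface:", active)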

  20. 77 FR 33229 - Notice of Proposed Information Collection: Comment Request; National Resource Network

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-05

    ... Information Collection: Comment Request; National Resource Network AGENCY: Office of the Assistant Secretary... information: Title of Proposal: National Resource Network. OMB Control Number, if applicable: None... and reporting information related to the proposed National Resource Network. The U.S. Department of...

  1. A Study of Quality of Service Communication for High-Speed Packet-Switching Computer Sub-Networks

    NASA Technical Reports Server (NTRS)

    Cui, Zhenqian

    1999-01-01

    With the development of high-speed networking technology, computer networks, including local-area networks (LANs), wide-area networks (WANs) and the Internet, are extending their traditional roles of carrying computer data. They are being used for Internet telephony, multimedia applications such as conferencing and video on demand, distributed simulations, and other real-time applications. LANs are even used for distributed real-time process control and computing as a cost-effective approach. Differing from traditional data transfer, these new classes of high-speed network applications (video, audio, real-time process control, and others) are delay sensitive. The usefulness of data depends not only on the correctness of received data, but also the time that data are received. In other words, these new classes of applications require networks to provide guaranteed services or quality of service (QoS). Quality of service can be defined by a set of parameters and reflects a user's expectation about the underlying network's behavior. Traditionally, distinct services are provided by different kinds of networks. Voice services are provided by telephone networks, video services are provided by cable networks, and data transfer services are provided by computer networks. A single network providing different services is called an integrated-services network.
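
    A QoS contract is typically enforced with traffic-conditioning mechanisms at the network edge. As one classic example (not drawn from this paper), the sketch below implements a token-bucket shaper whose rate and burst depth stand in for two common QoS parameters:

        # Minimal sketch: token-bucket policing of a traffic rate contract.
        import time

        class TokenBucket:
            def __init__(self, rate_bps, depth_bits):
                self.rate = rate_bps          # token refill rate (bits/second)
                self.depth = depth_bits       # maximum burst size (bits)
                self.tokens = depth_bits
                self.last = time.monotonic()

            def allow(self, packet_bits):
                now = time.monotonic()
                # Refill tokens for the elapsed interval, capped at the depth.
                self.tokens = min(self.depth,
                                  self.tokens + (now - self.last) * self.rate)
                self.last = now
                if packet_bits <= self.tokens:
                    self.tokens -= packet_bits
                    return True               # packet conforms to the contract
                return False                  # packet would exceed the agreed rate

        bucket = TokenBucket(rate_bps=1_000_000, depth_bits=64_000)
        print([bucket.allow(12_000) for _ in range(8)])  # later packets rejected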

  2. A Study of Complex Deep Learning Networks on High Performance, Neuromorphic, and Quantum Computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Potok, Thomas E; Schuman, Catherine D; Young, Steven R

    Current Deep Learning models use highly optimized convolutional neural networks (CNN) trained on large graphical processing units (GPU)-based computers with a fairly simple layered network topology, i.e., highly connected layers, without intra-layer connections. Complex topologies have been proposed, but are intractable to train on current systems. Building the topologies of the deep learning network requires hand tuning, and implementing the network in hardware is expensive in both cost and power. In this paper, we evaluate deep learning models using three different computing architectures to address these problems: quantum computing to train complex topologies, high performance computing (HPC) to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. Due to input size limitations of current quantum computers we use the MNIST dataset for our evaluation. The results show the possibility of using the three architectures in tandem to explore complex deep learning networks that are untrainable using a von Neumann architecture. We show that a quantum computer can find high quality values of intra-layer connections and weights, while yielding a tractable time result as the complexity of the network increases; a high performance computer can find optimal layer-based topologies; and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low power memristive hardware. This represents a new capability that is not feasible with current von Neumann architecture. It potentially enables the ability to solve very complicated problems unsolvable with current computing technologies.
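
    The structural distinction the paper turns on, conventional layered connectivity versus complex topologies with intra-layer connections, can be made concrete as a graph. A minimal sketch using networkx, illustrating structure only (no training, and not the paper's actual networks):

        # Minimal sketch: a layered topology (edges only between consecutive
        # layers) versus a complex topology that adds intra-layer edges.
        import itertools
        import networkx as nx

        def layered(sizes):
            G, layers = nx.DiGraph(), []
            next_id = itertools.count()
            for size in sizes:
                layers.append([next(next_id) for _ in range(size)])
            for a, b in zip(layers, layers[1:]):
                G.add_edges_from((u, v) for u in a for v in b)  # fully connected
            return G, layers

        G, layers = layered([4, 3, 2])
        # A "complex" topology adds intra-layer connections, which the paper
        # notes are intractable to train at scale on von Neumann hardware.
        G.add_edge(layers[1][0], layers[1][1])
        print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")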

  3. Computer Mediated Social Network Approach to Software Support and Maintenance

    DTIC Science & Technology

    2010-06-01

    LTC J. Carlos Vega (student paper). This research highlights the preliminary findings on the potential of computer-mediated social networks. This research focused on social networks as...

  4. Standardized cardiovascular data for clinical research, registries, and patient care: a report from the Data Standards Workgroup of the National Cardiovascular Research Infrastructure project.

    PubMed

    Anderson, H Vernon; Weintraub, William S; Radford, Martha J; Kremers, Mark S; Roe, Matthew T; Shaw, Richard E; Pinchotti, Dana M; Tcheng, James E

    2013-05-07

    Relatively little attention has been focused on standardization of data exchange in clinical research studies and patient care activities. Both are usually managed locally using separate and generally incompatible data systems at individual hospitals or clinics. In the past decade there have been nascent efforts to create data standards for clinical research and patient care data, and to some extent these are helpful in providing a degree of uniformity. Nonetheless, these data standards generally have not been converted into accepted computer-based language structures that could permit reliable data exchange across computer networks. The National Cardiovascular Research Infrastructure (NCRI) project was initiated with a major objective of creating a model framework for standard data exchange in all clinical research, clinical registry, and patient care environments, including all electronic health records. The goal is complete syntactic and semantic interoperability. A Data Standards Workgroup was established to create or identify and then harmonize clinical definitions for a base set of standardized cardiovascular data elements that could be used in this network infrastructure. Recognizing the need for continuity with prior efforts, the Workgroup examined existing data standards sources. A basic set of 353 elements was selected. The NCRI staff then collaborated with the 2 major technical standards organizations in health care, the Clinical Data Interchange Standards Consortium and Health Level Seven International, as well as with staff from the National Cancer Institute Enterprise Vocabulary Services. Modeling and mapping were performed to represent (instantiate) the data elements in appropriate technical computer language structures for endorsement as an accepted data standard for public access and use. Fully implemented, these elements will facilitate clinical research, registry reporting, administrative reporting and regulatory compliance, and patient care. Copyright © 2013 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  5. Probabilistic Inference in General Graphical Models through Sampling in Stochastic Networks of Spiking Neurons

    PubMed Central

    Pecevski, Dejan; Buesing, Lars; Maass, Wolfgang

    2011-01-01

    An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows (“explaining away”) and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons. PMID:22219717
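
    The "explaining away" effect cited above can be demonstrated with ordinary Monte Carlo sampling in the textbook converging-arrows network (burglary and earthquake both cause an alarm); the probabilities below are illustrative, not taken from the paper:

        # Minimal sketch: explaining away via forward sampling.
        import random

        random.seed(4)

        def sample():
            b = random.random() < 0.1                 # burglary prior
            e = random.random() < 0.1                 # earthquake prior
            p_alarm = 0.95 if (b or e) else 0.01      # noisy alarm
            a = random.random() < p_alarm
            return b, e, a

        draws = [sample() for _ in range(200_000)]
        p_b_given_a = (sum(b for b, e, a in draws if a)
                       / sum(1 for b, e, a in draws if a))
        p_b_given_a_e = (sum(b for b, e, a in draws if a and e)
                         / sum(1 for b, e, a in draws if a and e))
        # Learning an earthquake occurred "explains away" the alarm, so the
        # posterior probability of burglary drops back toward its prior.
        print(f"P(burglary | alarm)             = {p_b_given_a:.2f}")
        print(f"P(burglary | alarm, earthquake) = {p_b_given_a_e:.2f}")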

  6. Spontaneous Ad Hoc Mobile Cloud Computing Network

    PubMed Central

    Lacuesta, Raquel; Sendra, Sandra; Peñalver, Lourdes

    2014-01-01

    Cloud computing helps users and companies to share computing resources instead of having local servers or personal devices to handle the applications. Smart devices are becoming one of the main information processing devices. Their computing features are reaching levels that let them create a mobile cloud computing network. But sometimes they are not able to create it and collaborate actively in the cloud because it is difficult for them to easily build a spontaneous network and configure its parameters. For this reason, in this paper, we present the design and deployment of a spontaneous ad hoc mobile cloud computing network. In order to achieve this, we have developed a trusted algorithm that is able to manage the activity of the nodes when they join and leave the network. The paper shows the network procedures and classes that have been designed. Our simulation results using Castalia show that our proposal presents good efficiency and network performance even with a high number of nodes. PMID:25202715

  7. Spontaneous ad hoc mobile cloud computing network.

    PubMed

    Lacuesta, Raquel; Lloret, Jaime; Sendra, Sandra; Peñalver, Lourdes

    2014-01-01

    Cloud computing helps users and companies to share computing resources instead of having local servers or personal devices to handle the applications. Smart devices are becoming one of the main information processing devices. Their computing features are reaching levels that let them create a mobile cloud computing network. But sometimes they are not able to create it and collaborate actively in the cloud because it is difficult for them to easily build a spontaneous network and configure its parameters. For this reason, in this paper, we present the design and deployment of a spontaneous ad hoc mobile cloud computing network. In order to achieve this, we have developed a trusted algorithm that is able to manage the activity of the nodes when they join and leave the network. The paper shows the network procedures and classes that have been designed. Our simulation results using Castalia show that our proposal presents good efficiency and network performance even with a high number of nodes.
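
    A minimal sketch of trust-managed joins and leaves in a spontaneous cloud, with a hypothetical vouching rule and threshold; the paper's actual trusted algorithm differs in detail:

        # Minimal sketch: manage node arrivals/departures with a trust score.
        class SpontaneousCloud:
            TRUST_THRESHOLD = 0.5

            def __init__(self):
                self.members = {}                     # node id -> trust score

            def join(self, node_id, vouchers=()):
                # New nodes inherit the average trust of members vouching for them.
                score = (sum(self.members[v] for v in vouchers) / len(vouchers)
                         if vouchers else 0.6)        # default trust for founders
                if score >= self.TRUST_THRESHOLD:
                    self.members[node_id] = score
                    return True
                return False

            def leave(self, node_id):
                self.members.pop(node_id, None)       # release the node's slot

        cloud = SpontaneousCloud()
        cloud.join("phone-1")
        cloud.join("phone-2", vouchers=["phone-1"])
        cloud.leave("phone-1")
        print(cloud.members)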

  8. Getting Online: A Friendly Guide for Teachers, Students, and Parents.

    ERIC Educational Resources Information Center

    Educational Resources Information Center (ED), Washington, DC.

    This brochure provides teachers, students, and parents with information on how to connect to a computer network; describes some of the education offerings available to network users; and offers hints to help make exploration of computer networks easy and successful. The brochure explains the equipment needed to connect to a computer network; ways…

  9. Structural Properties of the Brazilian Air Transportation Network.

    PubMed

    Couto, Guilherme S; da Silva, Ana Paula Couto; Ruiz, Linnyer B; Benevenuto, Fabrício

    2015-09-01

    The air transportation network in a country has a great impact on the local, national and global economy. In this paper, we analyze the air transportation network in Brazil with complex network features to better understand its characteristics. In our analysis, we built networks composed either by national or by international flights. We also consider the network when both types of flights are put together. Interesting conclusions emerge from our analysis. For instance, Viracopos Airport (Campinas City) is the most central and connected airport in the national flights network. Any operational problem at this airport separates the Brazilian national network into six distinct subnetworks. Moreover, the Brazilian air transportation network exhibits small-world characteristics, and the national connections network follows a power-law distribution. Therefore, our analysis sheds light on the current Brazilian air transportation infrastructure, bringing a novel understanding that may help address the recent fast growth in the usage of the Brazilian transport network.
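
    The hub-failure analysis described here corresponds to finding articulation points in the flight graph and counting the components left after removing one. A minimal sketch with a toy edge list (not real Brazilian flight data), using networkx:

        # Minimal sketch: find critical airports and simulate a hub failure.
        import networkx as nx

        flights = [("VCP", "GRU"), ("VCP", "BSB"), ("VCP", "REC"),
                   ("GRU", "BSB"), ("REC", "FOR"), ("VCP", "POA")]
        G = nx.Graph(flights)

        # Airports whose removal disconnects the network.
        cut_airports = list(nx.articulation_points(G))
        print("critical airports:", cut_airports)

        G.remove_node("VCP")   # simulate an operational problem at the hub
        parts = list(nx.connected_components(G))
        print(f"network splits into {len(parts)} subnetworks: {parts}")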

  10. National networks of Healthy Cities in Europe.

    PubMed

    Janss Lafond, Leah; Heritage, Zoë

    2009-11-01

    National networks of Healthy Cities emerged in the late 1980s as a spontaneous reaction to a great demand by cities to participate in the Healthy Cities movement. Today, they engage at least 1300 cities in the European region and form the backbone of the Healthy Cities movement. This article provides an analysis of the results of the regular surveys of national networks that have been carried out principally since 1997. The main functions and achievements of national networks are presented alongside some of their most pressing challenges. Although networks have differing priorities and organizational characteristics, they do share common goals and strategic directions based on the Healthy Cities model (see other articles in this special edition of HPI). Therefore, it has been possible to identify a set of organizational and strategic factors that contribute to the success of networks. These factors form the basis of a set of accreditation criteria for national networks and provide guidance for the establishment of new national networks. Although national networks have made substantial achievements, they continue to face a number of dilemmas that are discussed in the article. Problems a national network must deal with include how to obtain sustainable funding, how to raise the standard of work in cities without creating exclusive participation criteria and how to balance the need to provide direct support to cities with its role as a national player. These dilemmas are similar to other public sector networks. During the last 15 years, the pooling of practical expertise in urban health has made Healthy Cities networks an important resource for national as well as local governments. Not only do they provide valuable support to their members but they often advise ministries and other national institutions on effective models to promote sustainable urban health development.

  11. Throughput analysis for the National Airspace System

    NASA Astrophysics Data System (ADS)

    Sureshkumar, Chandrasekar

    The United States National Airspace System (NAS) network performance is currently measured using a variety of metrics based on delay. Developments in the fields of wireless communication, manufacturing, and other modes of transportation such as road and freight have explored various metrics that complement the delay metric. In this work, we develop a throughput concept for both the terminal and en-route phases of flight inspired by studies in the above areas and explore the applications of throughput metrics for the en-route airspace of the NAS. These metrics can be applied to NAS performance at each hierarchical level (sector, center, regional, and national) and will consist of multiple layers of networks, with the bottom level comprising the traffic pattern modelled as a network of individual sectors acting as nodes. This hierarchical approach is especially suited for executive-level decision making, as it gives an overall picture of not just the inefficiencies but also the aspects where the NAS has performed well in a given situation, from which specific information about the effects of a policy change on NAS performance at each level can be determined. These metrics are further validated with real traffic data using the Future Air Traffic Management Concepts Evaluation Tool (FACET) for three en-route sectors and an Air Route Traffic Control Center (ARTCC). Further, this work proposes a framework to compute the minimum makespan and the capacity of a runway system in any configuration. Towards this, an algorithm for optimal arrival and departure flight sequencing is proposed. The proposed algorithm is based on a branch-and-bound technique and allows for the efficient computation of the best runway assignment and sequencing of arrival and departure operations that minimize the makespan at a given airport. The lower and upper bounds of the cost of each branch for the best-first search in the branch-and-bound algorithm are computed based on the minimum separation standards between arrival and departure operations set by the Federal Aviation Administration. The optimal objective value is mathematically proved to lie between these bounds, and the algorithm uses these bounds to efficiently find promising branches, discard all others, and terminate with at least one sequence with the minimal makespan. The proposed algorithm is analyzed and validated through real traffic operations data at the Hartsfield-Jackson Atlanta International Airport.
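
    The branch-and-bound idea can be sketched for a single runway: order arrivals (A) and departures (D) to minimize the makespan induced by pairwise separation requirements, pruning any partial sequence whose elapsed time already meets or exceeds the best complete sequence found so far. The separation values below are illustrative, not FAA standards, and the sketch ignores runway assignment:

        # Minimal sketch: branch-and-bound sequencing to minimize makespan.
        SEP = {("A", "A"): 90, ("A", "D"): 60, ("D", "A"): 75, ("D", "D"): 60}
        ops = ["A", "A", "D", "A", "D"]            # operations to order

        best = {"makespan": float("inf"), "seq": None}

        def branch(seq, remaining, t):
            if t >= best["makespan"]:              # bound: prune dominated branches
                return
            if not remaining:
                best["makespan"], best["seq"] = t, seq
                return
            for i, op in enumerate(remaining):
                dt = SEP[(seq[-1], op)] if seq else 0
                branch(seq + [op], remaining[:i] + remaining[i + 1:], t + dt)

        branch([], ops, 0)
        print(best["seq"], "makespan:", best["makespan"], "s")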

  12. Ku-band signal design study. [space shuttle orbiter data processing network

    NASA Technical Reports Server (NTRS)

    Rubin, I.

    1978-01-01

    Analytical tools, methods, and techniques for assessing the design and performance of the space shuttle orbiter data processing system (DPS) are provided. The computer data processing network is evaluated in the key areas of queueing behavior, synchronization, and network reliability. The structure of the data processing network is described, as well as the system operation principles and the network configuration. The characteristics of the computer systems are indicated. System reliability measures are defined and studied. System and network invulnerability measures are computed. Communication path and network failure analysis techniques are included.

  13. Plan for the design, development, implementation, and operation of the National Water Information System

    USGS Publications Warehouse

    Edwards, M.D.

    1987-01-01

    The Water Resources Division of the U.S. Geological Survey is developing a National Water Information System (NWIS) that will integrate and replace its existing water data and information systems of the National Water Data Storage and Retrieval System, National Water Data Exchange, National Water-Use Information, and Water Resources Scientific Information Center programs. It will be a distributed data system operated as part of the Division's Distributed Information System, which is a network of computers linked together through a national telecommunication network known as GEONET. The NWIS is being developed as a series of prototypes that will be integrated as they are completed to allow the development and implementation of the system in a phased manner. It is also being developed in a distributed manner using personnel who work under the coordination of a central NWIS Project Office. Work on the development of the NWIS began in 1983, and it is scheduled for completion in 1990. This document presents an overall plan for the design, development, implementation, and operation of the system. Detailed discussions are presented on each of these phases of the NWIS life cycle. The planning, quality assurance, and configuration management phases of the life cycle are also discussed. The plan is intended to be a working document for use by NWIS management and participants in its design and development and to assist offices of the Division in planning and preparing for installation and operation of the system. (Author's abstract)

  14. TME10/380: Remote Transmission of Radiological Images by means of Intranet/Internet Technology

    PubMed Central

    Sicurello, F; Pizzi, R

    1999-01-01

    At the Istituto Nazionale Neurologico C. Besta in Milano, a network architecture based on Intranet technology has been developed to connect computers and diagnostic modalities and to give the hospital external access through the Internet. Internet technology has become the "glue" that links different computers and supports applications able to work independently of the hardware/software platform. Using a PACS (Picture Archiving and Communication System) integrated with the diagnostic modalities by means of the standardized DICOM image format, digital radiological images can be transferred, displayed, and processed on dedicated visualization workstations throughout the hospital. From these workstations the same images can be transferred in DICOM format to a teleconsulting workstation. The hospital is involved in a national project for the remote connection of many Italian hospitals. This national network is linked to already developed regional networks such as the Toscana MAN and the ATM Sirius Network. Some links are performed directly in ATM (155 Mbps), others are based on CDN (Direct Numerical Connection, 2 Mbps), and others simply on ISDN connections. The system simplifies and speeds up the already established daily exchange of radiological reports between the involved hospitals, especially between the Istituto Nazionale Neurologico and the Istituto Nazionale dei Tumori. All the actions performed by the radiologist are translated by the software into "events" and relayed to the remote workstation, and vice versa. In this way the radiologists can see each other, speak together, and act in real time on a common "board" of diagnostic images, each with his own pointer. The adopted technology is evolving toward a system based on a web architecture and Java applications, useful for small clinical centers not equipped with expensive information systems. These centers will be able to obtain consulting from the centers of excellence, making accurate diagnoses and therapy protocols available.
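
    Reading a DICOM object and its metadata, the format underpinning the image exchange described above, takes a few lines with the third-party pydicom package; the file path below is a placeholder, and a real deployment would move images through a DICOM network service rather than file copies:

        # Minimal sketch: inspect a DICOM file before transfer.
        import pydicom

        ds = pydicom.dcmread("study/slice_001.dcm")   # hypothetical path
        print(ds.Modality, ds.StudyDate)              # standard DICOM attributes
        print("image size:", ds.Rows, "x", ds.Columns)
        pixels = ds.pixel_array                       # image as a numpy array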

  15. A global network for the control of snail-borne disease using satellite surveillance and geographic information systems.

    PubMed

    Malone, J B; Bergquist, N R; Huh, O K; Bavia, M E; Bernardi, M; El Bahy, M M; Fuentes, M V; Kristensen, T K; McCarroll, J C; Yilma, J M; Zhou, X N

    2001-04-27

    At a team residency sponsored by the Rockefeller Foundation in Bellagio, Italy, 10-14 April 2000, an organizational plan was conceived to create a global network of collaborating health workers and earth scientists dedicated to the development of computer-based models that can be used for improved control programs for schistosomiasis and other snail-borne diseases of medical and veterinary importance. The models will be assembled using GIS methods, global climate model data, sensor data from earth observing satellites, disease prevalence data, the distribution and abundance of snail hosts, and digital maps of key environmental factors that affect development and propagation of snail-borne disease agents. A work plan was developed for research collaboration and data sharing, recruitment of new contributing researchers, and means of access for other medical scientists and national control program managers to GIS models that may be used for more effective control of snail-borne disease. Agreement was reached on the use of compatible GIS formats, software, methods, and data resources, including the definition of a 'minimum medical database' to enable seamless incorporation of results from each regional GIS project into a global model. The collaboration plan calls for linking a 'central resource group' at the World Health Organization, the Food and Agriculture Organization, Louisiana State University, and the Danish Bilharziasis Laboratory with regional GIS networks to be initiated in Eastern Africa, Southern Africa, West Africa, Latin America, and Southern Asia. An Internet site, www.gnosisGIS.org (GIS Network On Snail-borne Infections with special reference to Schistosomiasis), has been initiated to allow interaction of team members as a 'virtual research group'. When completed, the site will point users to a toolbox of common resources resident on computers at member organizations, provide assistance on routine use of GIS health maps in selected national disease control programs, and provide a forum for development of GIS models to predict the health impacts of water development projects and climate variation.

  16. Application of Near-Surface Remote Sensing and computer algorithms in evaluating impacts of agroecosystem management on Zea mays (corn) phenological development in the Platte River - High Plains Aquifer Long Term Agroecosystem Research Network field sites.

    NASA Astrophysics Data System (ADS)

    Okalebo, J. A.; Das Choudhury, S.; Awada, T.; Suyker, A.; LeBauer, D.; Newcomb, M.; Ward, R.

    2017-12-01

    The Long-term Agroecosystem Research (LTAR) network is a USDA-ARS effort focused on research that addresses current and emerging issues in agriculture related to the sustainability and profitability of agroecosystems in the face of climate change and population growth. There are 18 sites across the USA covering key agricultural production regions. In Nebraska, a partnership between the University of Nebraska - Lincoln and ARD/USDA resulted in the establishment of the Platte River - High Plains Aquifer (PR-HPA) LTAR site in 2014. The site conducts research to sustain multiple ecosystem services, focusing specifically on Nebraska's main agronomic production agroecosystems: corn, soybeans, managed grasslands and beef production. As part of the national LTAR network, PR-HPA contributes near-surface remotely sensed imagery of corn, soybean and grassland canopy phenology to the PhenoCam Network through high-resolution digital cameras. This poster highlights the application, advantages and usefulness of near-surface remotely sensed imagery in agroecosystem studies and management. It demonstrates how both infrared and red-green-blue imagery may be applied to monitor phenological events as well as crop abiotic stresses. Computer-based algorithms and analytic techniques proved instrumental in revealing crop phenological changes such as green-up and tasseling in corn. The poster also reports the suitability of corn-derived, computer-based algorithms for evaluating the phenological development of sorghum, since the two crops have similar phenology, with sorghum panicles resembling corn tassels. This latter assessment was carried out using a sorghum dataset obtained from the Transportation Energy Resources from Renewable Agriculture Phenotyping Reference Platform project, Maricopa Agricultural Center, Arizona.
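
    The abstract does not name the specific algorithms used; one widely used PhenoCam-style index for detecting green-up in RGB imagery is the green chromatic coordinate, GCC = G / (R + G + B). A minimal Python sketch, with the array shapes and file handling as assumptions:

    import numpy as np

    def green_chromatic_coordinate(rgb: np.ndarray) -> float:
        """Mean GCC over an image given as an (H, W, 3) uint8 RGB array."""
        rgb = rgb.astype(np.float64)
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        total = r + g + b
        # Guard against division by zero on black pixels.
        gcc = np.where(total > 0, g / np.maximum(total, 1e-9), 0.0)
        return float(gcc.mean())

    # A rising GCC time series over spring images typically signals green-up;
    # a plateau followed by decline can accompany tasseling and senescence.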

  17. Teaching Advanced Concepts in Computer Networks: VNUML-UM Virtualization Tool

    ERIC Educational Resources Information Center

    Ruiz-Martinez, A.; Pereniguez-Garcia, F.; Marin-Lopez, R.; Ruiz-Martinez, P. M.; Skarmeta-Gomez, A. F.

    2013-01-01

    In the teaching of computer networks the main problem that arises is the high price and limited number of network devices the students can work with in the laboratories. Nowadays, with virtualization we can overcome this limitation. In this paper, we present a methodology that allows students to learn advanced computer network concepts through…

  18. Computer Networks and Networking: A Primer.

    ERIC Educational Resources Information Center

    Collins, Mauri P.

    1993-01-01

    Provides a basic introduction to computer networks and networking terminology. Topics addressed include modems; the Internet; TCP/IP (Transmission Control Protocol/Internet Protocol); transmission lines; Internet Protocol numbers; network traffic; Fidonet; file transfer protocol (FTP); TELNET; electronic mail; discussion groups; LISTSERV; USENET;…

  19. The new space and earth science information systems at NASA's archive

    NASA Technical Reports Server (NTRS)

    Green, James L.

    1990-01-01

    The on-line interactive systems of the National Space Science Data Center (NSSDC) are examined. The worldwide computer network connections that allow access to NSSDC users are outlined. The services offered by the NSSDC new technology on-line systems are presented, including the IUE request system, ozone TOMS data, and data sets on astrophysics, atmospheric science, land sciences, and space plasma physics. Plans for future increases in the NSSDC data holdings are considered.

  20. The new space and Earth science information systems at NASA's archive

    NASA Technical Reports Server (NTRS)

    Green, James L.

    1990-01-01

    The on-line interactive systems of the National Space Science Data Center (NSSDC) are examined. The worldwide computer network connections that allow access to NSSDC users are outlined. The services offered by the NSSDC new technology on-line systems are presented, including the IUE request system, Total Ozone Mapping Spectrometer (TOMS) data, and data sets on astrophysics, atmospheric science, land sciences, and space plasma physics. Plans for future increases in the NSSDC data holdings are considered.

  1. The architecture of a distributed medical dictionary.

    PubMed

    Fowler, J; Buffone, G; Moreau, D

    1995-01-01

    Exploiting high-speed computer networks to provide a national medical information infrastructure is a goal for medical informatics. The Distributed Medical Dictionary under development at Baylor College of Medicine is a model for an architecture that supports collaborative development of a distributed online medical terminology knowledge-base. A prototype is described that illustrates the concept. Issues that must be addressed by such a system include high availability, acceptable response time, support for local idiom, and control of vocabulary.

  2. Georgia’s Cyber Left Hook

    DTIC Science & Technology

    2009-01-01

    Relations for the Joint Task Force-Global Network Operations (JTF-GNO/J5). He assists in development of cyber policy and strategy for operations and...History (Manchester, U.K.: Manchester Univ. Press, 2000), 1. 10. See The Steamship Appam, 243 U.S. 124 (1917). 11. Jeffrey T. G. Kelsey, “Hacking into...Arrest for Computer Hacking,” news release, 1 October 2007, http://www.cybercrime.gov/kingIndict.pdf. 39. Grant Gross, “FBI: Several Nations Eyeing U.S

  3. The development of computer networks: First results from a microeconomic model

    NASA Astrophysics Data System (ADS)

    Maier, Gunther; Kaufmann, Alexander

    Computer networks like the Internet are gaining importance in social and economic life. The accelerating adoption of network technologies for business purposes is a rather recent phenomenon, and many applications are still in an early, sometimes experimental, phase. Nevertheless, it seems certain that networks will change the socioeconomic structures we know today. This is the background for our special interest in the development of networks, in the role of spatial factors in their formation, in the consequences of networks for spatial structures, and in the role of externalities. This paper discusses a simple economic model - based on a microeconomic calculus - that incorporates the main factors generating the growth of computer networks, and it provides analytic results about their generation. The paper discusses (1) under what conditions economic factors will initiate the process of network formation, (2) the relationship between individual and social evaluation, and (3) the efficiency of a network generated by economic mechanisms.
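
    A toy sketch of the kind of adoption calculus such a model formalizes: an agent connects when its private benefit, which grows with the number of existing members (a positive network externality), exceeds its connection cost. The linear functional form and all parameters below are illustrative assumptions, not the authors' model.

    def simulate(costs, benefit_per_member):
        """Return how many agents end up connected, given one seed adopter."""
        joined = [False] * len(costs)
        members, changed = 1, True  # one seed adopter outside `costs`
        while changed:
            changed = False
            for i, c in enumerate(costs):
                # Join when the value of reaching current members exceeds cost.
                if not joined[i] and members * benefit_per_member > c:
                    joined[i] = True
                    members += 1
                    changed = True
        return members

    print(simulate([1.0, 2.0, 3.0, 4.0], benefit_per_member=1.1))  # full cascade: 5
    print(simulate([1.0, 2.0, 3.0, 4.0], benefit_per_member=0.9))  # never starts: 1

    The second call illustrates the critical-mass condition in item (1): below a threshold benefit level, no individually rational agent moves first and the network never forms.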

  4. Alternative Fuels Data Center: Electric Vehicle Charging Network Expands at National Parks

    Science.gov Websites


  5. 34 CFR 412.4 - What is the National Network of Directors Council?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 3 2010-07-01 2010-07-01 false What is the National Network of Directors Council? 412...) OFFICE OF VOCATIONAL AND ADULT EDUCATION, DEPARTMENT OF EDUCATION NATIONAL NETWORK FOR CURRICULUM COORDINATION IN VOCATIONAL AND TECHNICAL EDUCATION General § 412.4 What is the National Network of Directors...

  6. 34 CFR 412.1 - What is the National Network for Curriculum Coordination in Vocational and Technical Education?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 34 Education 3 2011-07-01 2011-07-01 false What is the National Network for Curriculum... EDUCATION NATIONAL NETWORK FOR CURRICULUM COORDINATION IN VOCATIONAL AND TECHNICAL EDUCATION General § 412.1 What is the National Network for Curriculum Coordination in Vocational and Technical Education? The...

  7. 34 CFR 412.4 - What is the National Network of Directors Council?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 34 Education 3 2011-07-01 2011-07-01 false What is the National Network of Directors Council? 412...) OFFICE OF VOCATIONAL AND ADULT EDUCATION, DEPARTMENT OF EDUCATION NATIONAL NETWORK FOR CURRICULUM COORDINATION IN VOCATIONAL AND TECHNICAL EDUCATION General § 412.4 What is the National Network of Directors...

  8. 34 CFR 412.1 - What is the National Network for Curriculum Coordination in Vocational and Technical Education?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 3 2010-07-01 2010-07-01 false What is the National Network for Curriculum... EDUCATION NATIONAL NETWORK FOR CURRICULUM COORDINATION IN VOCATIONAL AND TECHNICAL EDUCATION General § 412.1 What is the National Network for Curriculum Coordination in Vocational and Technical Education? The...

  9. Computer network defense system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Urias, Vincent; Stout, William M. S.; Loverro, Caleb

    A method and apparatus for protecting virtual machines. A computer system creates a copy of a group of the virtual machines from an operating network in a deception network, forming a group of cloned virtual machines in the deception network when the group of the virtual machines is accessed by an adversary. The computer system creates an emulation of components from the operating network in the deception network. The components are accessible by the group of the cloned virtual machines as if the group of the cloned virtual machines were in the operating network. The computer system moves network connections for the group of the virtual machines in the operating network used by the adversary from the group of the virtual machines in the operating network to the group of the cloned virtual machines, protecting the group of the virtual machines from actions performed by the adversary.
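
    A toy, self-contained Python sketch of the workflow this patent-style abstract describes: clone the targeted machines, emulate their surroundings, and re-route only the adversary's connections. A real implementation would drive hypervisor and software-defined-networking APIs; every class, field, and identifier here is a hypothetical stub.

    class Network:
        def __init__(self, name):
            self.name, self.vms, self.flows, self.services = name, {}, {}, []

        def clone_vm(self, vm_name, source):
            # Snapshot copy stands in for a live VM clone.
            self.vms[vm_name] = dict(source.vms[vm_name])

    def divert(operating, deception, targets, adversary_flows):
        for vm in targets:                               # 1. clone targeted VMs
            deception.clone_vm(vm, operating)
        deception.services = ["dns", "file-share"]       # 2. emulate surroundings
        for flow_id, vm in adversary_flows.items():      # 3. move adversary flows
            deception.flows[flow_id] = vm
            operating.flows.pop(flow_id, None)

    operating = Network("ops")
    operating.vms = {"web01": {"ip": "10.0.0.5"}}
    operating.flows = {"tcp:443:attacker": "web01"}
    deception = Network("deception")
    divert(operating, deception, ["web01"], {"tcp:443:attacker": "web01"})
    print(deception.vms, deception.flows, operating.flows)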

  10. Mississippi Curriculum Framework for Computer Information Systems Technology. Computer Information Systems Technology (Program CIP: 52.1201--Management Information Systems & Business Data). Computer Programming (Program CIP: 52.1201). Network Support (Program CIP: 52.1290--Computer Network Support Technology). Postsecondary Programs.

    ERIC Educational Resources Information Center

    Mississippi Research and Curriculum Unit for Vocational and Technical Education, State College.

    This document, which is intended for use by community and junior colleges throughout Mississippi, contains curriculum frameworks for two programs in the state's postsecondary-level computer information systems technology cluster: computer programming and network support. Presented in the introduction are program descriptions and suggested course…

  11. The iPad and EFL Digital Literacy

    NASA Astrophysics Data System (ADS)

    Meurant, Robert C.

    In the future, the uses of English by non-native speakers will be predominantly online, drawing on English-language digital resources and engaging in computer-mediated communication with other non-native speakers of English. Thus, for Korea to be competitive in the global economy, its EFL instruction should develop L2 digital literacy in English. With its fast Internet connections, Korea is the most wired nation on Earth, but ICT facilities in educational institutions need reorganization. Opportunities for computer-mediated second-language learning need to be increased, providing multimedia-capable mobile web solutions that put the Internet into the hands of all students and teachers. Wi-Fi networked campuses allow any campus space to act as a wireless classroom. Every classroom should have a teacher's computer console. All students should be provided with adequate computing facilities that are available anywhere, anytime. Ubiquitous computing has now become feasible by providing every student on enrollment with a tablet: a Wi-Fi+3G enabled Apple iPad.

  12. From photons to big-data applications: terminating terabits

    PubMed Central

    2016-01-01

    Computer architectures have entered a watershed as the quantity of network data generated by user applications exceeds the data-processing capacity of any individual computer end-system. It will become impossible to scale existing computer systems while a gap grows between the quantity of networked data and the capacity for per system data processing. Despite this, the growth in demand in both task variety and task complexity continues unabated. Networked computer systems provide a fertile environment in which new applications develop. As networked computer systems become akin to infrastructure, any limitation upon the growth in capacity and capabilities becomes an important constraint of concern to all computer users. Considering a networked computer system capable of processing terabits per second, as a benchmark for scalability, we critique the state of the art in commodity computing, and propose a wholesale reconsideration in the design of computer architectures and their attendant ecosystem. Our proposal seeks to reduce costs, save power and increase performance in a multi-scale approach that has potential application from nanoscale to data-centre-scale computers. PMID:26809573

  13. From photons to big-data applications: terminating terabits.

    PubMed

    Zilberman, Noa; Moore, Andrew W; Crowcroft, Jon A

    2016-03-06

    Computer architectures have entered a watershed as the quantity of network data generated by user applications exceeds the data-processing capacity of any individual computer end-system. It will become impossible to scale existing computer systems while a gap grows between the quantity of networked data and the capacity for per system data processing. Despite this, the growth in demand in both task variety and task complexity continues unabated. Networked computer systems provide a fertile environment in which new applications develop. As networked computer systems become akin to infrastructure, any limitation upon the growth in capacity and capabilities becomes an important constraint of concern to all computer users. Considering a networked computer system capable of processing terabits per second, as a benchmark for scalability, we critique the state of the art in commodity computing, and propose a wholesale reconsideration in the design of computer architectures and their attendant ecosystem. Our proposal seeks to reduce costs, save power and increase performance in a multi-scale approach that has potential application from nanoscale to data-centre-scale computers. © 2016 The Authors.

  14. Image Engine: an object-oriented multimedia database for storing, retrieving and sharing medical images and text.

    PubMed Central

    Lowe, H. J.

    1993-01-01

    This paper describes Image Engine, an object-oriented, microcomputer-based, multimedia database designed to facilitate the storage and retrieval of digitized biomedical still images, video, and text using inexpensive desktop computers. The current prototype runs on Apple Macintosh computers and allows network database access via peer-to-peer file-sharing protocols. Image Engine supports both free-text and controlled-vocabulary indexing of multimedia objects. The latter is implemented using the TView thesaurus model developed by the author. The current prototype of Image Engine uses the National Library of Medicine's Medical Subject Headings (MeSH) vocabulary (with UMLS Meta-1 extensions) as its indexing thesaurus. PMID:8130596

  15. The thinking of Cloud computing in the digital construction of the oil companies

    NASA Astrophysics Data System (ADS)

    CaoLei, Qizhilin; Dengsheng, Lei

    To speed up the digital construction of oil companies and to enhance productivity and decision-support capabilities, while avoiding the waste and duplicated development of earlier digitization efforts, this paper presents a cloud-based model for the digital construction of oil companies: national oil companies join, over a private network, their cloud data and service-center equipment into a single integrated cloud system, and each department can then provision its own virtual service center according to its needs, providing strong services and computing power for the oil companies.

  16. Designing a parallel evolutionary algorithm for inferring gene networks on the cloud computing environment.

    PubMed

    Lee, Wei-Po; Hsiao, Yu-Ting; Hwang, Wei-Che

    2014-01-16

    To improve on the tedious task of reconstructing gene networks by experimentally testing the possible interactions between genes, it has become a trend to adopt an automated reverse-engineering procedure instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks with an evolutionary algorithm, two important issues must be addressed: premature convergence and high computational cost. To tackle the former and enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel-model evolutionary algorithms. To overcome the latter and speed up the computation, cloud computing is a promising solution; the most popular approach is the MapReduce programming model, a fault-tolerant framework for implementing parallel algorithms to infer large gene networks. This work presents a practical framework to infer large gene networks by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results analyzed. They show that our parallel approach can successfully infer networks with the desired behaviors, and that computation time can be largely reduced. Parallel population-based algorithms can effectively determine network parameters and perform better than the widely used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. By coupling the parallel-model population-based optimization method with the parallel computational framework, high-quality solutions can be obtained within a relatively short time. This integrated approach is a promising way to infer large networks.
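
    A minimal island-model sketch of the parallelization idea described above: each "map" task evolves one sub-population of candidate network parameters and a "reduce" step keeps the best result. Python's multiprocessing stands in for Hadoop MapReduce, the quadratic toy fitness replaces the paper's network-inference score, and the PSO component is omitted; all of these are assumptions for illustration.

    import random
    from multiprocessing import Pool

    def fitness(params):
        # Placeholder: a real score measures how well the parameterized gene
        # network reproduces observed expression profiles.
        return -sum((p - 0.5) ** 2 for p in params)

    def evolve_island(population, generations=50):
        """Evolve one sub-population (the 'map' task) and return its champion."""
        for _ in range(generations):
            parents = sorted(population, key=fitness, reverse=True)[: len(population) // 2]
            offspring = [
                [min(1.0, max(0.0, p + random.gauss(0, 0.05)))
                 for p in random.choice(parents)]
                for _ in range(len(population) - len(parents))
            ]
            population = parents + offspring
        return max(population, key=fitness)

    if __name__ == "__main__":
        islands = [[[random.random() for _ in range(8)] for _ in range(20)]
                   for _ in range(4)]
        with Pool(4) as pool:
            champions = pool.map(evolve_island, islands)   # "map" phase
        print(max(champions, key=fitness))                 # "reduce" phase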

  17. Designing a parallel evolutionary algorithm for inferring gene networks on the cloud computing environment

    PubMed Central

    2014-01-01

    Background To improve on the tedious task of reconstructing gene networks by experimentally testing the possible interactions between genes, it has become a trend to adopt an automated reverse-engineering procedure instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks with an evolutionary algorithm, two important issues must be addressed: premature convergence and high computational cost. To tackle the former and enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel-model evolutionary algorithms. To overcome the latter and speed up the computation, cloud computing is a promising solution; the most popular approach is the MapReduce programming model, a fault-tolerant framework for implementing parallel algorithms to infer large gene networks. Results This work presents a practical framework to infer large gene networks by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results analyzed. They show that our parallel approach can successfully infer networks with the desired behaviors, and that computation time can be largely reduced. Conclusions Parallel population-based algorithms can effectively determine network parameters and perform better than the widely used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. By coupling the parallel-model population-based optimization method with the parallel computational framework, high-quality solutions can be obtained within a relatively short time. This integrated approach is a promising way to infer large networks. PMID:24428926

  18. Editorial [Special issue on software defined networks and infrastructures, network function virtualisation, autonomous systems and network management

    DOE PAGES

    Biswas, Amitava; Liu, Chen; Monga, Inder; ...

    2016-01-01

    For the last few years, there has been tremendous growth in data traffic due to the high adoption rate of mobile devices and cloud computing. The Internet of Things (IoT) will stimulate even further growth. This is increasing the scale and complexity of telecom/internet service provider (SP) and enterprise data centre (DC) compute and network infrastructures. As a result, managing these large network-compute converged infrastructures is becoming complex and cumbersome. To cope, network and DC operators are trying to automate network and system operations, administration and management (OAM) functions. OAM includes all the non-functional mechanisms that keep the network running.

  19. A Socio-Technical Approach to Preventing, Mitigating, and Recovering from Ransomware Attacks.

    PubMed

    Sittig, Dean F; Singh, Hardeep

    2016-01-01

    Recently there have been several high-profile ransomware attacks involving hospitals around the world. Ransomware is intended to damage or disable a user's computer unless the user makes a payment. Once the attack has been launched, users have three options: 1) try to restore their data from backup; 2) pay the ransom; or 3) lose their data. In this manuscript, we discuss a socio-technical approach to address ransomware and outline four overarching steps that organizations can undertake to secure an electronic health record (EHR) system and the underlying computing infrastructure. First, health IT professionals need to ensure adequate system protection by correctly installing and configuring computers and networks that connect them. Next, the health care organizations need to ensure more reliable system defense by implementing user-focused strategies, including simulation and training on correct and complete use of computers and network applications. Concomitantly, the organization needs to monitor computer and application use continuously in an effort to detect suspicious activities and identify and address security problems before they cause harm. Finally, organizations need to respond adequately to and recover quickly from ransomware attacks and take actions to prevent them in future. We also elaborate on recommendations from other authoritative sources, including the National Institute of Standards and Technology (NIST). Similar to approaches to address other complex socio-technical health IT challenges, the responsibility of preventing, mitigating, and recovering from these attacks is shared between health IT professionals and end-users.

  20. A Socio-Technical Approach to Preventing, Mitigating, and Recovering from Ransomware Attacks

    PubMed Central

    Singh, Hardeep

    2016-01-01

    Summary Recently there have been several high-profile ransomware attacks involving hospitals around the world. Ransomware is intended to damage or disable a user’s computer unless the user makes a payment. Once the attack has been launched, users have three options: 1) try to restore their data from backup; 2) pay the ransom; or 3) lose their data. In this manuscript, we discuss a socio-technical approach to address ransomware and outline four overarching steps that organizations can undertake to secure an electronic health record (EHR) system and the underlying computing infrastructure. First, health IT professionals need to ensure adequate system protection by correctly installing and configuring computers and networks that connect them. Next, the health care organizations need to ensure more reliable system defense by implementing user-focused strategies, including simulation and training on correct and complete use of computers and network applications. Concomitantly, the organization needs to monitor computer and application use continuously in an effort to detect suspicious activities and identify and address security problems before they cause harm. Finally, organizations need to respond adequately to and recover quickly from ransomware attacks and take actions to prevent them in future. We also elaborate on recommendations from other authoritative sources, including the National Institute of Standards and Technology (NIST). Similar to approaches to address other complex socio-technical health IT challenges, the responsibility of preventing, mitigating, and recovering from these attacks is shared between health IT professionals and end-users. PMID:27437066

  1. A computer tool to support in design of industrial Ethernet.

    PubMed

    Lugli, Alexandre Baratella; Santos, Max Mauro Dias; Franco, Lucia Regina Horta Rodrigues

    2009-04-01

    This paper presents a computer tool to support the design and development of an industrial Ethernet network. It verifies the physical layer (cable resistance and capacitance, scan time, network power supply under the Power over Ethernet (PoE) concept, and wireless links) and the occupation rate (the amount of information transmitted on the network versus the controller's network scan time). These checks are performed entirely by simulation, without a single physical element installed in the network. The tool's software presents a detailed view of the network to the user, flags possible network problems, and offers an extremely friendly environment.
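
    A back-of-the-envelope sketch of the occupation-rate check such a tool performs: the share of link capacity consumed by cyclic process data within one controller scan. The 100 Mbit/s link rate and the example figures are illustrative assumptions; real tools would also account for frame overhead.

    def occupation_rate(payload_bytes_per_scan: int, scan_time_s: float,
                        link_rate_bps: float = 100e6) -> float:
        """Fraction of link capacity used by cyclic data in one scan."""
        bits_per_scan = payload_bytes_per_scan * 8
        return bits_per_scan / (link_rate_bps * scan_time_s)

    # Example: 50 devices x 100 bytes each, every 10 ms, on Fast Ethernet.
    print(f"{occupation_rate(50 * 100, 0.010):.1%}")   # -> 4.0%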

  2. Sharing Writing through Computer Networking.

    ERIC Educational Resources Information Center

    Fey, Marion H.

    1997-01-01

    Suggests computer networking can support the essential purposes of the collaborative-writing movement, offering opportunities for sharing writing. Notes that literacy teachers are exploring the connectivity of computer networking through numerous designs that use either real-time or asynchronous communication. Discusses new roles for students and…

  3. Organising a University Computer System: Analytical Notes.

    ERIC Educational Resources Information Center

    Jacquot, J. P.; Finance, J. P.

    1990-01-01

    Thirteen trends in university computer system development are identified, system user requirements are analyzed, critical system qualities are outlined, and three options for organizing a computer system are presented. The three systems include a centralized network, local network, and federation of local networks. (MSE)

  4. Classroom Computer Network.

    ERIC Educational Resources Information Center

    Lent, John

    1984-01-01

    This article describes a computer network system that connects several microcomputers to a single disk drive and one copy of software. Many schools are switching to networks as a cheaper and more efficient means of computer instruction. Teachers may be faced with copyright problems when reproducing programs. (DF)

  5. Capability of the People’s Republic of China to Conduct Cyber Warfare and Computer Network Exploitation

    DTIC Science & Technology

    2009-10-09

    Capability of the People’s Republic of China to Conduct Cyber Warfare and Computer Network Exploitation Prepared for The US-China Economic and...the People’s Republic of China to Conduct Cyber Warfare and Computer Network Exploitation 5a. CONTRACT NUMBER 5b. GRANT NUMBER 5c. PROGRAM ELEMENT...Capability of the People’s Republic of China to Conduct Cyber Warfare and Computer Network Exploitation 2 US-China Economic and Security Review

  6. A method of non-contact reading code based on computer vision

    NASA Astrophysics Data System (ADS)

    Zhang, Chunsen; Zong, Xiaoyu; Guo, Bingxuan

    2018-03-01

    To guarantee the security of information exchange between an internal and an external network (a trusted and an untrusted network), a non-contact code-reading method based on machine vision is proposed, differing from existing physical network-isolation methods. Using computer monitors, a camera, and related equipment, the information to be exchanged is image-coded into a standard image, displayed, captured as an actual image, and then decoded after computing a homography matrix and correcting the image distortion determined in calibration. This achieves secure, non-contact, one-way transmission between the internal and external networks. The effectiveness of the proposed method is verified by experiments on real computer text data; a data-transfer speed of 24 kb/s is achieved. The experiments show that the algorithm offers high security, high speed, and low information loss. It can meet the daily needs of confidentiality departments to update data effectively and reliably, and it solves the difficulty of exchanging computer information between secret and non-secret networks, with distinctive originality, practicability, and practical research value.
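
    The rectification step described above is a standard computer-vision operation; below is a Python sketch using OpenCV's findHomography and warpPerspective. Corner detection and the decoding step are left out (the paper's coding scheme is not specified here), and the demo image and corner coordinates are assumptions.

    import cv2
    import numpy as np

    def rectify(photo: np.ndarray, corners: np.ndarray,
                width: int = 1024, height: int = 768) -> np.ndarray:
        """corners: 4x2 array, the displayed frame's corners found in the photo,
        ordered top-left, top-right, bottom-right, bottom-left."""
        target = np.float32([[0, 0], [width, 0], [width, height], [0, height]])
        H, _ = cv2.findHomography(corners.astype(np.float32), target)
        return cv2.warpPerspective(photo, H, (width, height))

    # Synthetic demo: rectify a noisy capture given hand-picked corners.
    demo = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)
    corners = np.array([[50, 40], [600, 60], [580, 430], [70, 410]])
    print(rectify(demo, corners).shape)   # (768, 1024, 3)

    # After rectification the frame can be thresholded and decoded back to bytes.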

  7. NetMOD Version 2.0 Mathematical Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merchant, Bion J.; Young, Christopher J.; Chael, Eric P.

    2015-08-01

    NetMOD (Network Monitoring for Optimal Detection) is a Java-based software package for conducting simulation of seismic, hydroacoustic and infrasonic networks. Network simulations have long been used to study network resilience to station outages and to determine where additional stations are needed to reduce monitoring thresholds. NetMOD makes use of geophysical models to determine the source characteristics, signal attenuation along the path between the source and station, and the performance and noise properties of the station. These geophysical models are combined to simulate the relative amplitudes of signal and noise that are observed at each of the stations. From these signal-to-noise ratios (SNR), the probabilities of signal detection at each station and event detection across the network of stations can be computed given a detection threshold. The purpose of this document is to clearly and comprehensively present the mathematical framework used by NetMOD, the software package developed by Sandia National Laboratories to assess the monitoring capability of ground-based sensor networks. Many of the NetMOD equations used for simulations are inherited from the NetSim network capability assessment package developed in the late 1980s by SAIC (Sereno et al., 1990).
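
    A toy Python version of the final step described above: turn per-station signal-to-noise ratios into detection probabilities, then combine them into a network-level probability that at least k stations detect the event. The Gaussian threshold model and all numbers are illustrative assumptions, not NetMOD's exact formulation.

    from itertools import combinations
    from math import erf, sqrt

    def p_detect(snr_db: float, threshold_db: float, sigma_db: float = 1.0) -> float:
        # Probability that measured SNR exceeds the threshold under Gaussian noise.
        return 0.5 * (1 + erf((snr_db - threshold_db) / (sigma_db * sqrt(2))))

    def p_network(p_stations: list[float], k: int = 3) -> float:
        """Probability that at least k of the stations detect the event."""
        n = len(p_stations)
        total = 0.0
        for m in range(k, n + 1):
            for idx in combinations(range(n), m):
                prob = 1.0
                for i in range(n):
                    prob *= p_stations[i] if i in idx else (1 - p_stations[i])
                total += prob
        return total

    stations = [p_detect(s, threshold_db=10.0) for s in (12.0, 9.5, 11.0, 8.0)]
    print(p_network(stations, k=2))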

  8. Closeness Possible through Computer Networking.

    ERIC Educational Resources Information Center

    Dodd, Julie E.

    1989-01-01

    Points out the benefits of computer networking for scholastic journalism. Discusses three systems currently offering networking possibilities for publications: the Student Press Information Network; the Youth Communication Service; and the Dow Jones Newspaper Fund's electronic mail system. (MS)

  9. Development of a forecasting model for brucellosis spreading in the Italian cattle trade network aimed to prioritise the field interventions.

    PubMed

    Savini, L; Candeloro, L; Conte, A; De Massis, F; Giovannini, A

    2017-01-01

    Brucellosis caused by Brucella abortus is an important zoonosis that constitutes a serious hazard to public health. Prevention of human brucellosis depends on the control of the disease in animals. Livestock movement data are a valuable source of information for understanding the pattern of contacts between holdings, which may determine the inter-herd and intra-herd spread of the disease. The manuscript addresses the use of computational epidemic models rooted in knowledge of the cattle trade network to assess the probabilities of brucellosis spread and to design control strategies. Three different network-based spread models were proposed: the DFC (Disease Flow Centrality) model, based only on the temporal cattle network structure and unrelated to the epidemiological disease parameters; a deterministic SIR (Susceptible-Infectious-Recovered) model; and a stochastic SEIR (Susceptible-Exposed-Infectious-Recovered) model in which epidemiological and demographic within-farm aspects were also modelled. Containment strategies based on farm centrality in the cattle network were tested and discussed. All three models started from the identification of the entire sub-network originating from an infected farm, up to the fifth order of contacts. Their performance was evaluated on data collected in Sicily in 2009 within the framework of the national brucellosis eradication plan. Results show that the proposed methods improve the efficacy and efficiency of tracing activities compared with the procedure currently adopted by the veterinary services for brucellosis control in Italy. An overall assessment shows that the SIR model is the most suitable for the practical needs of the veterinary services, being the one with the highest sensitivity and the shortest computation time.
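
    A minimal stochastic SIR sketch on a directed herd-contact network, in the spirit of the compartmental models compared above; the toy edge list and the beta/gamma values are illustrative assumptions, not the paper's calibrated parameters.

    import random

    def sir_on_network(edges, seed, beta=0.3, gamma=0.1, steps=100):
        neighbours = {}
        for a, b in edges:                       # directed trade movements a -> b
            neighbours.setdefault(a, set()).add(b)
        state = {n: "S" for ab in edges for n in ab}
        state[seed] = "I"
        for _ in range(steps):
            updates = {}
            for farm, s in state.items():
                if s == "I":
                    for nb in neighbours.get(farm, ()):
                        if state[nb] == "S" and random.random() < beta:
                            updates[nb] = "I"     # infection along a movement
                    if random.random() < gamma:
                        updates[farm] = "R"       # recovery / removal
            state.update(updates)
        return state

    edges = [("A", "B"), ("B", "C"), ("B", "D"), ("D", "E")]
    print(sir_on_network(edges, seed="A"))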

  10. Development of a forecasting model for brucellosis spreading in the Italian cattle trade network aimed to prioritise the field interventions

    PubMed Central

    Candeloro, L.; Conte, A.; De Massis, F.; Giovannini, A.

    2017-01-01

    Brucellosis caused by Brucella abortus is an important zoonosis that constitutes a serious hazard to public health. Prevention of human brucellosis depends on the control of the disease in animals. Livestock movement data are a valuable source of information for understanding the pattern of contacts between holdings, which may determine the inter-herd and intra-herd spread of the disease. The manuscript addresses the use of computational epidemic models rooted in knowledge of the cattle trade network to assess the probabilities of brucellosis spread and to design control strategies. Three different network-based spread models were proposed: the DFC (Disease Flow Centrality) model, based only on the temporal cattle network structure and unrelated to the epidemiological disease parameters; a deterministic SIR (Susceptible-Infectious-Recovered) model; and a stochastic SEIR (Susceptible-Exposed-Infectious-Recovered) model in which epidemiological and demographic within-farm aspects were also modelled. Containment strategies based on farm centrality in the cattle network were tested and discussed. All three models started from the identification of the entire sub-network originating from an infected farm, up to the fifth order of contacts. Their performance was evaluated on data collected in Sicily in 2009 within the framework of the national brucellosis eradication plan. Results show that the proposed methods improve the efficacy and efficiency of tracing activities compared with the procedure currently adopted by the veterinary services for brucellosis control in Italy. An overall assessment shows that the SIR model is the most suitable for the practical needs of the veterinary services, being the one with the highest sensitivity and the shortest computation time. PMID:28654703

  11. Large-Scale Data Collection Metadata Management at the National Computation Infrastructure

    NASA Astrophysics Data System (ADS)

    Wang, J.; Evans, B. J. K.; Bastrakova, I.; Ryder, G.; Martin, J.; Duursma, D.; Gohar, K.; Mackey, T.; Paget, M.; Siddeswara, G.

    2014-12-01

    Data Collection management has become an essential activity at the National Computation Infrastructure (NCI) in Australia. NCI's partners (CSIRO, Bureau of Meteorology, Australian National University, and Geoscience Australia), supported by the Australian Government and Research Data Storage Infrastructure (RDSI), have established a national data resource that is co-located with high-performance computing. This paper addresses the metadata management of these data assets over their lifetime. NCI manages 36 data collections (10+ PB) categorised as earth system sciences, climate and weather model data assets and products, earth and marine observations and products, geosciences, terrestrial ecosystem, water management and hydrology, astronomy, social science and biosciences. The data is largely sourced from NCI partners, the custodians of many of the national scientific records, and major research community organisations. The data is made available in an HPC and data-intensive environment - a ~56000 core supercomputer, virtual labs on a 3000 core cloud system, and data services. By assembling these large national assets, new opportunities have arisen to harmonise the data collections, making a powerful cross-disciplinary resource. To support the overall management, a Data Management Plan (DMP) has been developed to record the workflows, procedures, key contacts and responsibilities. The DMP has fields that can be exported to the ISO 19115 schema and to the collection-level catalogue of GeoNetwork. The subset- or file-level metadata catalogues are linked with the collection level through parent-child relationship definitions using UUIDs. A number of tools have been developed that support interactive metadata management, bulk loading of data, and support for computational workflows or data pipelines. NCI creates persistent identifiers for each of the assets. The data collection is tracked over its lifetime, and the recognition of the data providers, data owners, data generators and data aggregators is updated. A Digital Object Identifier is assigned using the Australian National Data Service (ANDS). Once the data has been quality assured, a DOI is minted and the metadata record updated. NCI's data citation policy establishes the relationship between research outcomes, data providers, and the data.
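
    A toy Python sketch of the parent-child metadata linking described above: collection-level and file-level records tied together by UUID. The field names are assumptions for illustration; real records would carry full ISO 19115 metadata.

    import uuid

    collection = {"uuid": str(uuid.uuid4()),
                  "title": "Climate model output",
                  "children": []}

    def add_child(parent: dict, title: str) -> dict:
        """Create a file-level record linked to its parent collection by UUID."""
        child = {"uuid": str(uuid.uuid4()), "title": title,
                 "parent": parent["uuid"]}
        parent["children"].append(child["uuid"])
        return child

    granule = add_child(collection, "tas_Amon_example_historical_r1i1p1.nc")
    print(granule["parent"] == collection["uuid"])   # True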

  12. [Dutch computer domestication, 1975-1990].

    PubMed

    Veraart, Frank

    2008-01-01

    A computer seems an indispensable tool among twenty-first-century households. Computers, however, did not come as manna from heaven. The domestication and appropriation of computers in Dutch households was a result of activities by various intermediary actors. Computers became household commodities only gradually. Technophile computer hobbyists imported the first computers into the Netherlands from the USA, and started small businesses from 1975 onwards. They developed a social network in which computer technology was made available for use by individuals. This network extended itself via shops, clubs, magazines, and other means of acquiring and exchanging computer hard- and software. Hobbyist culture established the software-copying habits of private computer users as well as their ambivalence to commercial software. They also made the computer into a game machine. Under the impulse of a national policy that aimed at transforming society into an 'Information Society', clubs and other actors extended their activities and tailored them to this new agenda. Hobby clubs presented themselves as consumer organizations and transformed into intermediary actors that filled the gap between suppliers and a growing group of users. They worked hard to give meaning to (proper) use of computers. A second impulse to the increasing use of computers in the household came from so-called 'private-PC' projects in the late 1980s. In these projects employers financially aided employees in purchasing their own private PCs. The initially important intermediary actors such as hobby clubs lost control and the agenda for personal computers was shifted to interoperability with office equipment. IBM-compatible PCs flooded the households. In the household the new equipment blended with the established uses, such as gaming. The copying habits together with the PC standard created a risky combination in which computer viruses could spread easily. New roles arose for intermediary actors in guiding and educating computer users. The activities of intermediaries had a lasting influence on contemporary computer use and user preferences. Technical choices and the nature of Dutch computer use in households can be explained by analyzing the historical developments of intermediaries and users.

  13. CDC 7600 LTSS programming stratagens: preparing your first production code for the Livermore Timesharing System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fong, K. W.

    1977-08-15

    This report deals with some techniques in applied programming using the Livermore Timesharing System (LTSS) on the CDC 7600 computers at the National Magnetic Fusion Energy Computer Center (NMFECC) and the Lawrence Livermore Laboratory Computer Center (LLLCC or Octopus network). This report is based on a document originally written specifically about the system as it is implemented at NMFECC but has been revised to accommodate differences between LLLCC and NMFECC implementations. Topics include: maintaining programs, debugging, recovering from system crashes, and using the central processing unit, memory, and input/output devices efficiently and economically. Routines that aid in these procedures are mentioned. The companion report, UCID-17556, An LTSS Compendium, discusses the hardware and operating system and should be read before reading this report.

  14. Los Alamos Plutonium Facility Waste Management System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, K.; Montoya, A.; Wieneke, R.

    1997-02-01

    This paper describes the new computer-based transuranic (TRU) Waste Management System (WMS) being implemented at the Plutonium Facility at Los Alamos National Laboratory (LANL). The Waste Management System is a distributed computer processing system stored in a Sybase database and accessed by a graphical user interface (GUI) written in Omnis7. It resides on the local area network at the Plutonium Facility and is accessible by authorized TRU waste originators, count room personnel, radiation protection technicians (RPTs), quality assurance personnel, and waste management personnel for data input and verification. Future goals include bringing outside groups like the LANL Waste Management Facility on-line to participate in this streamlined system. The WMS is changing the TRU paper trail into a computer trail, saving time and eliminating errors and inconsistencies in the process.

  15. 75 FR 8101 - 30-Day Federal Register Notice of Intention To Request Clearance of Collection of Information...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-23

    ..., National Manager, National Underground Railroad Network to Freedom Program, National Park Service, Midwest... Control Number: 1024-0232. Title: NPS National Underground Railroad Network to Freedom Application. Form: National Underground Railroad Network to Freedom Application. Expiration Date: 2/28/2010. Type of request...

  16. 78 FR 29775 - Information Collection Request Sent to the Office of Management and Budget (OMB) for Approval...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-21

    ... Underground Railroad Network to Freedom Program AGENCY: National Park Service, Interior. ACTION: Notice... Miller, National Manager, National Underground Railroad Network to Freedom Program, National Park Service...: OMB Control Number: 1024-0232. Title: National Underground Railroad Network to Freedom Program...

  17. Email networks and the spread of computer viruses

    NASA Astrophysics Data System (ADS)

    Newman, M. E.; Forrest, Stephanie; Balthrop, Justin

    2002-09-01

    Many computer viruses spread via electronic mail, making use of computer users' email address books as a source for email addresses of new victims. These address books form a directed social network of connections between individuals over which the virus spreads. Here we investigate empirically the structure of this network using data drawn from a large computer installation, and discuss the implications of this structure for the understanding and prevention of computer virus epidemics.
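
    A small Python sketch of the structure studied above: address books define a directed graph, and a mass-mailing virus can spread along out-edges from an infected user. The toy address books are assumptions; the reachable set bounds how far an outbreak from one user can extend.

    address_books = {
        "alice": ["bob", "carol"],
        "bob": ["alice"],
        "carol": ["dave"],
        "dave": [],
    }

    def reachable(start: str) -> set[str]:
        """Users a virus can reach from `start` by following address-book links."""
        seen, stack = {start}, [start]
        while stack:
            for nxt in address_books.get(stack.pop(), []):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

    print(reachable("alice"))   # {'alice', 'bob', 'carol', 'dave'}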

  18. Optimization of Emissions Sensor Networks Incorporating Tradeoffs Between Different Sensor Technologies

    NASA Astrophysics Data System (ADS)

    Nicholson, B.; Klise, K. A.; Laird, C. D.; Ravikumar, A. P.; Brandt, A. R.

    2017-12-01

    In order to comply with current and future methane emissions regulations, natural gas producers must develop emissions monitoring strategies for their facilities. In addition, regulators must develop air monitoring strategies over wide areas incorporating multiple facilities. However, in both of these cases, only a limited number of sensors can be deployed. With a wide variety of sensors to choose from in terms of cost, precision, accuracy, spatial coverage, location, orientation, and sampling frequency, it is difficult to design robust monitoring strategies for different scenarios while systematically considering the tradeoffs between different sensor technologies. In addition, the geography, weather, and other site-specific conditions can have a large impact on the performance of a sensor network. In this work, we demonstrate methods for calculating optimal sensor networks. Our approach can incorporate tradeoffs between vastly different sensor technologies, optimize over typical wind conditions for a particular area, and consider different objectives such as time to detection or geographic coverage. We do this by pre-computing site-specific scenarios and using them as input to a mixed-integer, stochastic programming problem that solves for a sensor network that maximizes the effectiveness of the detection program. Our methods and approach have been incorporated within an open source Python package called Chama with the goal of providing facility operators and regulators with tools for designing more effective and efficient monitoring systems. Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.
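
    The abstract's approach is a mixed-integer stochastic program (implemented in the Chama package); the Python sketch below substitutes a much simpler greedy scenario-coverage heuristic to show the shape of the placement problem. The toy detection data, sensor names, and the greedy method itself are assumptions for illustration, not Chama's API or formulation.

    def greedy_placement(detects: dict[str, set[str]], budget: int) -> list[str]:
        """Pick up to `budget` sensors maximizing the number of covered scenarios."""
        chosen, covered = [], set()
        for _ in range(budget):
            best = max(detects, key=lambda s: len(detects[s] - covered), default=None)
            if best is None or not detects[best] - covered:
                break   # nothing left to gain
            chosen.append(best)
            covered |= detects.pop(best)
        return chosen

    # Which precomputed leak scenarios each candidate sensor would detect.
    detects = {
        "s1": {"leakA", "leakB"},
        "s2": {"leakB", "leakC", "leakD"},
        "s3": {"leakD"},
    }
    print(greedy_placement(detects, budget=2))   # ['s2', 's1']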

  19. Digital information management: a progress report on the National Digital Mammography Archive

    NASA Astrophysics Data System (ADS)

    Beckerman, Barbara G.; Schnall, Mitchell D.

    2002-05-01

    Digital mammography creates very large images, which require new approaches to storage, retrieval, management, and security. The National Digital Mammography Archive (NDMA) project, funded by the National Library of Medicine (NLM), is developing a limited testbed that demonstrates the feasibility of a national breast imaging archive, with access to prior exams; patient information; computer aids for image processing, teaching, and testing tools; and security components to ensure confidentiality of patient information. There will be significant benefits to patients and clinicians in terms of accessible data with which to make a diagnosis and to researchers performing studies on breast cancer. Mammography was chosen for the project, because standards were already available for digital images, report formats, and structures. New standards have been created for communications protocols between devices, front-end portal and archive. NDMA is a distributed computing concept that provides for sharing and access across corporate entities. Privacy, auditing, and patient consent are all integrated into the system. Five sites, Universities of Pennsylvania, Chicago, North Carolina and Toronto, and BWXT Y12, are connected through high-speed networks to demonstrate functionality. We will review progress, including technical challenges, innovative research and development activities, standards and protocols being implemented, and potential benefits to healthcare systems.

  20. Characterizing Crowd Participation and Productivity of Foldit Through Web Scraping

    DTIC Science & Technology

    2016-03-01

    Berkeley Open Infrastructure for Network Computing CDF Cumulative Distribution Function CPU Central Processing Unit CSSG Crowdsourced Serious Game...computers at once can create a similar capacity. According to Anderson [6], principal investigator for the Berkeley Open Infrastructure for Network...extraterrestrial life. From this project, a software-based distributed computing platform called the Berkeley Open Infrastructure for Network Computing

  1. Solving Constraint Satisfaction Problems with Networks of Spiking Neurons

    PubMed Central

    Jonke, Zeno; Habenschuss, Stefan; Maass, Wolfgang

    2016-01-01

    Networks of neurons in the brain apply—unlike processors in our current generation of computer hardware—an event-based processing strategy, where short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However, it turns out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems on the level of single spikes, rather than rates of spikes. We present here a new method for designing networks of spiking neurons via an energy function. Furthermore, we show how the energy function of a network of stochastically firing neurons can be shaped in a transparent manner by composing the networks from simple stereotypical network motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource. Nevertheless, the timing of spikes plays an essential role in their computations. Furthermore, for the Traveling Salesman Problem, networks of spiking neurons carry out a more efficient stochastic search for good solutions than stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling. PMID:27065785

  2. A Network Primer: Full-Fledged Educational Networks.

    ERIC Educational Resources Information Center

    Lehrer, Ariella

    1988-01-01

    Discusses some of the factors included in choosing appropriate computer networks for the classroom. Describes such networks as those produced by Apple Computer, Corvus Systems, Velan, Berkeley Softworks, Tandy, LAN-TECH, Unisys, and International Business Machines (IBM). (TW)

  3. Biological modelling of a computational spiking neural network with neuronal avalanches.

    PubMed

    Li, Xiumin; Chen, Qing; Xue, Fangzheng

    2017-06-28

    In recent years, an increasing number of studies have demonstrated that networks in the brain can self-organize into a critical state where dynamics exhibit a mixture of ordered and disordered patterns. This critical branching phenomenon is termed neuronal avalanches. It has been hypothesized that the homeostatic level balanced between stability and plasticity of this critical state may be the optimal state for performing diverse neural computational tasks. However, the critical region for high performance is narrow and sensitive for spiking neural networks (SNNs). In this paper, we investigated the role of the critical state in neural computations based on liquid-state machines, a biologically plausible computational neural network model for real-time computing. The computational performance of an SNN when operating at the critical state and, in particular, with spike-timing-dependent plasticity for updating synaptic weights is investigated. The network is found to show the best computational performance when it is subjected to critical dynamic states. Moreover, the active-neuron-dominant structure refined from synaptic learning can remarkably enhance the robustness of the critical state and further improve computational accuracy. These results may have important implications in the modelling of spiking neural networks with optimal computational performance.This article is part of the themed issue 'Mathematical methods in medicine: neuroscience, cardiology and pathology'. © 2017 The Author(s).

  4. Biological modelling of a computational spiking neural network with neuronal avalanches

    NASA Astrophysics Data System (ADS)

    Li, Xiumin; Chen, Qing; Xue, Fangzheng

    2017-05-01

    In recent years, an increasing number of studies have demonstrated that networks in the brain can self-organize into a critical state where dynamics exhibit a mixture of ordered and disordered patterns. This critical branching phenomenon is termed neuronal avalanches. It has been hypothesized that the homeostatic level balanced between stability and plasticity of this critical state may be the optimal state for performing diverse neural computational tasks. However, the critical region for high performance is narrow and sensitive for spiking neural networks (SNNs). In this paper, we investigated the role of the critical state in neural computations based on liquid-state machines, a biologically plausible computational neural network model for real-time computing. The computational performance of an SNN when operating at the critical state and, in particular, with spike-timing-dependent plasticity for updating synaptic weights is investigated. The network is found to show the best computational performance when it is subjected to critical dynamic states. Moreover, the active-neuron-dominant structure refined from synaptic learning can remarkably enhance the robustness of the critical state and further improve computational accuracy. These results may have important implications in the modelling of spiking neural networks with optimal computational performance. This article is part of the themed issue `Mathematical methods in medicine: neuroscience, cardiology and pathology'.

  5. Active Computer Network Defense: An Assessment

    DTIC Science & Technology

    2001-04-01

    sufficient base of knowledge in information technology can be assumed to be working on some form of computer network warfare, even if only defensive in...the Defense Information Infrastructure (DII) to attack. Transmission Control Protocol/Internet Protocol (TCP/IP) networks are inherently resistant to...aims to create this part of information superiority, and computer network defense is one of its fundamental components. Most of these efforts center

  6. Embracing Statistical Challenges in the Information Technology Age

    DTIC Science & Technology

    2006-01-01

    computation and feature selection. Moreover, two research projects on network tomography and arctic cloud detection are used throughout the paper to bring...prominent Network Tomography problem, origin-destination (OD) traffic estimation. It demonstrates well how the two modes of data collection interact...software debugging (Biblit et al, 2005 [2]), and network tomography for computer network management. Computer system problems exist long before the IT

  7. Artificial Neural Network Metamodels of Stochastic Computer Simulations

    DTIC Science & Technology

    1994-08-10

    [Report-form fragment; recoverable text:] Artificial Neural Network Metamodels of Stochastic Computer Simulations, by Robert Allen Kilmer, B.S. in Education Mathematics, Indiana... "I dedicate this document to the memory of my father, William Ralph Kilmer."

  8. DISCRETE EVENT SIMULATION OF OPTICAL SWITCH MATRIX PERFORMANCE IN COMPUTER NETWORKS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Imam, Neena; Poole, Stephen W

    2013-01-01

    In this paper, we present the application of a Discrete Event Simulator (DES) for performance modeling of optical switching devices in computer networks. Network simulators are valuable tools in situations where one cannot investigate the system directly. This situation may arise if the system under study does not exist yet or the cost of studying the system directly is prohibitive. Most available network simulators are based on the paradigm of discrete-event-based simulation. As computer networks become increasingly larger and more complex, sophisticated DES tool chains have become available for both commercial and academic research. Some well-known simulators are NS2, NS3, OPNET, and OMNEST. For this research, we have applied OMNEST for the purpose of simulating multi-wavelength performance of optical switch matrices in computer interconnection networks. Our results suggest that the application of DES to computer interconnection networks provides valuable insight into device performance and aids in topology and system optimization.
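
    As background for how such simulators work internally, a discrete-event kernel is essentially a simulation clock plus a time-ordered event queue. The minimal Python sketch below illustrates only the paradigm; OMNEST itself is a full-featured C++ framework, and the periodic packet source here is a made-up example.

      import heapq

      class Simulator:
          def __init__(self):
              self.now = 0.0
              self._queue = []                     # (time, seq, callback, args)
              self._seq = 0                        # tie-breaker for equal times

          def schedule(self, delay, callback, *args):
              heapq.heappush(self._queue,
                             (self.now + delay, self._seq, callback, args))
              self._seq += 1

          def run(self, until=float("inf")):
              # Pop events in time order and advance the clock to each one.
              while self._queue and self._queue[0][0] <= until:
                  self.now, _, callback, args = heapq.heappop(self._queue)
                  callback(*args)

      sim = Simulator()

      def packet_arrival(port):
          print(f"t={sim.now:.2f}: packet switched on port {port}")
          if sim.now < 5:                          # periodic traffic source
              sim.schedule(1.0, packet_arrival, port)

      sim.schedule(0.0, packet_arrival, 1)
      sim.run()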

  9. Thermodynamic characterization of networks using graph polynomials

    NASA Astrophysics Data System (ADS)

    Ye, Cheng; Comin, César H.; Peron, Thomas K. DM.; Silva, Filipi N.; Rodrigues, Francisco A.; Costa, Luciano da F.; Torsello, Andrea; Hancock, Edwin R.

    2015-09-01

    In this paper, we present a method for characterizing the evolution of time-varying complex networks by adopting a thermodynamic representation of network structure computed from a polynomial (or algebraic) characterization of graph structure. Commencing from a representation of graph structure based on a characteristic polynomial computed from the normalized Laplacian matrix, we show how the polynomial is linked to the Boltzmann partition function of a network. This allows us to compute a number of thermodynamic quantities for the network, including the average energy and entropy. Assuming that the system does not change volume, we can also compute the temperature, defined as the rate of change of entropy with energy. All three thermodynamic variables can be approximated using low-order Taylor series that can be computed using the traces of powers of the Laplacian matrix, avoiding explicit computation of the normalized Laplacian spectrum. These polynomial approximations allow a smoothed representation of the evolution of networks to be constructed in the thermodynamic space spanned by entropy, energy, and temperature. We show how these thermodynamic variables can be computed in terms of simple network characteristics, e.g., the total number of nodes and node degree statistics for nodes connected by edges. We apply the resulting thermodynamic characterization to real-world time-varying networks representing complex systems in the financial and biological domains. The study demonstrates that the method provides an efficient tool for detecting abrupt changes and characterizing different stages in network evolution.
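
    The construction can be made concrete in a few lines of Python. The sketch below computes the partition function directly from the normalized-Laplacian spectrum (with Boltzmann's constant set to 1); note that the paper's contribution is precisely to avoid this explicit eigendecomposition via low-order Taylor and trace approximations, so this is the naive reference computation, not the authors' method.

      import numpy as np
      import networkx as nx

      def thermodynamic_variables(G, beta=1.0):
          # Partition function over the normalized-Laplacian eigenvalues,
          # Z = sum_i exp(-beta * lambda_i); then U = <lambda> and
          # S = beta * U + ln Z (units with k_B = 1).
          lam = nx.normalized_laplacian_spectrum(G).real
          weights = np.exp(-beta * lam)
          Z = weights.sum()
          p = weights / Z                          # occupation probabilities
          U = float(np.dot(p, lam))                # average energy
          S = float(beta * U + np.log(Z))          # entropy
          return U, S

      G = nx.erdos_renyi_graph(50, 0.1, seed=1)
      print(thermodynamic_variables(G))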

  10. Analysis of stationary availability factor of two-level backbone computer networks with arbitrary topology

    NASA Astrophysics Data System (ADS)

    Rahman, P. A.

    2018-05-01

    This paper deals with two-level backbone computer networks with arbitrary topology. A specialized method, offered by the author for calculating the stationary availability factor of two-level backbone computer networks, is discussed; it is based on Markov reliability models for a set of independent repairable elements with given failure and repair rates, together with methods of discrete mathematics. A specialized algorithm, offered by the author for analyzing network connectivity while taking into account different kinds of network equipment failures, is also described. Finally, the paper presents an example of calculating the stationary availability factor for a backbone computer network with a given topology.
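
    For a single repairable element with constant failure rate λ and repair rate μ, the underlying two-state Markov model gives the stationary availability A = μ/(λ + μ). The Python sketch below combines such elements in the two simplest structures, series and parallel; the author's method for arbitrary two-level backbone topologies is considerably more involved, so this is only the building block.

      def element_availability(lam, mu):
          # Stationary availability of one repairable element: mu / (lam + mu).
          return mu / (lam + mu)

      def series(avails):
          # Series structure: every element must be up.
          p = 1.0
          for a in avails:
              p *= a
          return p

      def parallel(avails):
          # Parallel structure: at least one element must be up.
          q = 1.0
          for a in avails:
              q *= (1.0 - a)
          return 1.0 - q

      a = element_availability(lam=1e-4, mu=1e-2)  # per-hour rates (example)
      print(a, series([a, a]), parallel([a, a]))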

  11. High performance network and channel-based storage

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.

    1991-01-01

    In the traditional mainframe-centered view of a computer system, storage devices are coupled to the system through complex hardware subsystems called input/output (I/O) channels. With the dramatic shift towards workstation-based computing, and its associated client/server model of computation, storage facilities are now found attached to file servers and distributed throughout the network. We discuss the underlying technology trends that are leading to high performance network-based storage, namely advances in networks, storage devices, and I/O controller and server architectures. We review several commercial systems and research prototypes that are leading to a new approach to high performance computing based on network-attached storage.

  12. Telecommunication Networks. Tech Use Guide: Using Computer Technology.

    ERIC Educational Resources Information Center

    Council for Exceptional Children, Reston, VA. Center for Special Education Technology.

    One of nine brief guides for special educators on using computer technology, this guide focuses on utilizing the telecommunications capabilities of computers. Network capabilities including electronic mail, bulletin boards, and access to distant databases are briefly explained. Networks useful to the educator, general commercial systems, and local…

  13. QADATA user's manual; an interactive computer program for the retrieval and analysis of the results from the external blind sample quality-assurance project of the U.S. Geological Survey

    USGS Publications Warehouse

    Lucey, K.J.

    1990-01-01

    The U.S. Geological Survey conducts an external blind sample quality-assurance project for its National Water Quality Laboratory in Denver, Colorado, based on the analysis of reference water samples. Reference samples containing selected inorganic and nutrient constituents are disguised as environmental samples at the Survey's office in Ocala, Florida, and are sent periodically through other Survey offices to the laboratory. The results of this blind sample project indicate the quality of analytical data produced by the laboratory. This report provides instructions on the use of QADATA, an interactive, menu-driven program that allows users to retrieve the results of the blind sample quality-assurance project. The QADATA program, which is available on the U.S. Geological Survey's national computer network, accesses a blind sample database that contains more than 50,000 determinations from the last five water years for approximately 40 constituents at various concentrations. The data can be retrieved from the database for any user-defined time period and for any or all available constituents. After the user defines the retrieval, the program prepares statistical tables, control charts, and precision plots and generates a report which can be transferred to the user's office through the computer network. A discussion of the interpretation of the program output is also included. This quality-assurance information will permit users to document the quality of the analytical results received from the laboratory. The blind sample data are entered into the database within weeks after being produced by the laboratory and can be retrieved to meet the needs of specific projects or programs. (USGS)

  14. Standardized Cardiovascular Data for Clinical Research, Registries, and Patient Care

    PubMed Central

    Anderson, H. Vernon; Weintraub, William S.; Radford, Martha J.; Kremers, Mark S.; Roe, Matthew T.; Shaw, Richard E.; Pinchotti, Dana M.; Tcheng, James E.

    2013-01-01

    Relatively little attention has been focused on standardization of data exchange in clinical research studies and patient care activities. Both are usually managed locally using separate and generally incompatible data systems at individual hospitals or clinics. In the past decade there have been nascent efforts to create data standards for clinical research and patient care data, and to some extent these are helpful in providing a degree of uniformity. Nevertheless these data standards generally have not been converted into accepted computer-based language structures that could permit reliable data exchange across computer networks. The National Cardiovascular Research Infrastructure (NCRI) project was initiated with a major objective of creating a model framework for standard data exchange in all clinical research, clinical registry, and patient care environments, including all electronic health records. The goal is complete syntactic and semantic interoperability. A Data Standards Workgroup was established to create or identify and then harmonize clinical definitions for a base set of standardized cardiovascular data elements that could be used in this network infrastructure. Recognizing the need for continuity with prior efforts, the Workgroup examined existing data standards sources. A basic set of 353 elements was selected. The NCRI staff then collaborated with the two major technical standards organizations in healthcare, the Clinical Data Interchange Standards Consortium and Health Level 7 International, as well as with staff from the National Cancer Institute Enterprise Vocabulary Services. Modeling and mapping were performed to represent (instantiate) the data elements in appropriate technical computer language structures for endorsement as an accepted data standard for public access and use. Fully implemented, these elements will facilitate clinical research, registry reporting, administrative reporting and regulatory compliance, and patient care. PMID:23500238

  15. Federal Emergency Management Information System (FEMIS) System Administration Guide for FEMIS Version 1.4.6

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arp, J.A.; Bower, J.C.; Burnett, R.A.

    The Federal Emergency Management Information System (FEMIS) is an emergency management planning and response tool that was developed by the Pacific Northwest National Laboratory (PNNL) under the direction of the U.S. Army Chemical Biological Defense Command. The FEMIS System Administration Guide provides information necessary for the system administrator to maintain the FEMIS system. The FEMIS system is designed for a single Chemical Stockpile Emergency Preparedness Program (CSEPP) site that has multiple Emergency Operations Centers (EOCs). Each EOC has personal computers (PCs) that emergency planners and operations personnel use to do their jobs. These PCs are connected via a local area network (LAN) to servers that provide EOC-wide services. Each EOC is interconnected to other EOCs via a Wide Area Network (WAN). Thus, FEMIS is an integrated software product that resides on a client/server computer architecture. The main body of FEMIS software, referred to as the FEMIS Application Software, resides on the PC client(s) and is directly accessible to emergency management personnel. The remainder of the FEMIS software, referred to as the FEMIS Support Software, resides on the UNIX server. The Support Software provides the communication, data distribution, and notification functionality necessary to operate FEMIS in a networked, client/server environment.

  16. Federal Emergency Management Information System (FEMIS), Installation Guide for FEMIS 1.4.6

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arp, J.A.; Burnett, R.A.; Carter, R.J.

    The Federal Emergency Management Information System (FEMIS) is an emergency management planning and response tool that was developed by the Pacific Northwest National Laboratory (PNNL) under the direction of the U.S. Army Chemical Biological Defense Command. The FEMIS System Administration Guide provides information necessary for the system administrator to maintain the FEMIS system. The FEMIS system is designed for a single Chemical Stockpile Emergency Preparedness Program (CSEPP) site that has multiple Emergency Operations Centers (EOCs). Each EOC has personal computers (PCs) that emergency planners and operations personnel use to do their jobs. These PCs are connected via a local area network (LAN) to servers that provide EOC-wide services. Each EOC is interconnected to other EOCs via a Wide Area Network (WAN). Thus, FEMIS is an integrated software product that resides on a client/server computer architecture. The main body of FEMIS software, referred to as the FEMIS Application Software, resides on the PC client(s) and is directly accessible to emergency management personnel. The remainder of the FEMIS software, referred to as the FEMIS Support Software, resides on the UNIX server. The Support Software provides the communication, data distribution, and notification functionality necessary to operate FEMIS in a networked, client/server environment.

  17. Efficient QR sequential least square algorithm for high frequency GNSS precise point positioning seismic application

    NASA Astrophysics Data System (ADS)

    Barbu, Alina L.; Laurent-Varin, Julien; Perosanz, Felix; Mercier, Flavien; Marty, Jean-Charles

    2018-01-01

    The implementation into the GINS CNES geodetic software of a more efficient filter was needed to satisfy users who want to compute high-rate GNSS PPP solutions. We selected the SRI approach and a QR factorization technique, including an innovative algorithm which optimizes the matrix reduction step. A full description of this algorithm is given for future users. The new capabilities of the software have been tested using a set of 1 Hz data from the Japanese GEONET network including the Mw 9.0 2011 Tohoku earthquake. The station coordinate solutions agreed at a sub-decimeter level with previous publications as well as with solutions we computed with the Natural Resources Canada software. An additional benefit of the SRI filter implementation is the capability to estimate high-rate tropospheric parameters as well. As the CPU time to estimate a 1 Hz kinematic solution from 1 h of data is now less than 1 min, we could produce series of coordinates for the full 1300 stations of the Japanese network. The corresponding movie shows the impressive co-seismic deformation as well as the wave propagation along the island. The processing was straightforward using a cluster of PCs, which illustrates the new potential of the GINS software for massive-network high-rate PPP processing.
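
    The essence of a square-root information (SRI) measurement update is easy to show: the state estimate solves R x = z in the least-squares sense, and each new observation block (H, y) is folded in by re-triangularizing the stacked system with a QR factorization. The NumPy sketch below is a generic textbook version of this update, not the optimized reduction-step algorithm implemented in GINS.

      import numpy as np

      def sri_update(R, z, H, y):
          # Fold observations y = H x + noise into the information pair (R, z)
          # by QR-factorizing the stacked system.
          A = np.vstack([R, H])
          b = np.concatenate([z, y])
          Q, R_new = np.linalg.qr(A)               # re-triangularize
          z_new = Q.T @ b
          return R_new, z_new

      n = 3
      R = np.eye(n) * 1e-3                         # weak prior information
      z = np.zeros(n)
      rng = np.random.default_rng(0)
      x_true = np.array([1.0, -2.0, 0.5])
      for _ in range(5):                           # five observation epochs
          H = rng.normal(size=(4, n))
          y = H @ x_true + 0.01 * rng.normal(size=4)
          R, z = sri_update(R, z, H, y)
      print(np.linalg.solve(R, z))                 # recovers x_true closely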

  18. Federal Emergency Management Information System (FEMIS) Data Management Guide for FEMIS Version 1.4.6

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Angel, L.K.; Bower, J.C.; Burnett, R.A.

    1999-06-29

    The Federal Emergency Management Information System (FEMIS) is an emergency management planning and response tool that was developed by the Pacific Northwest National Laboratory (PNNL) under the direction of the U.S. Army Chemical Biological Defense Command. The FEMIS System Administration Guide provides information necessary for the system administrator to maintain the FEMIS system. The FEMIS system is designed for a single Chemical Stockpile Emergency Preparedness Program (CSEPP) site that has multiple Emergency Operations Centers (EOCs). Each EOC has personal computers (PCs) that emergency planners and operations personnel use to do their jobs. These PCs are connected via a local area network (LAN) to servers that provide EOC-wide services. Each EOC is interconnected to other EOCs via a Wide Area Network (WAN). Thus, FEMIS is an integrated software product that resides on a client/server computer architecture. The main body of FEMIS software, referred to as the FEMIS Application Software, resides on the PC client(s) and is directly accessible to emergency management personnel. The remainder of the FEMIS software, referred to as the FEMIS Support Software, resides on the UNIX server. The Support Software provides the communication, data distribution, and notification functionality necessary to operate FEMIS in a networked, client/server environment.

  19. Network and computing infrastructure for scientific applications in Georgia

    NASA Astrophysics Data System (ADS)

    Kvatadze, R.; Modebadze, Z.

    2016-09-01

    The status of the network and computing infrastructure and the services available to the research and education community of Georgia are presented. The Research and Educational Networking Association, GRENA, provides the following network services: Internet connectivity, network services, cyber security, technical support, etc. Computing resources used by the research teams are located at GRENA and at major state universities. The GE-01-GRENA site is included in the European Grid infrastructure. The paper also contains information about the programs of the Learning Center and the research and development projects in which GRENA is participating.

  20. PROFEAT Update: A Protein Features Web Server with Added Facility to Compute Network Descriptors for Studying Omics-Derived Networks.

    PubMed

    Zhang, P; Tao, L; Zeng, X; Qin, C; Chen, S Y; Zhu, F; Yang, S Y; Li, Z R; Chen, W P; Chen, Y Z

    2017-02-03

    The studies of biological, disease, and pharmacological networks are facilitated by systems-level investigations using computational tools. In particular, the network descriptors developed in other disciplines have found increasing applications in the study of protein, gene regulatory, metabolic, disease, and drug-targeted networks. Public web servers provide facilities for computing network descriptors, but many descriptors are not covered, including ones used or useful for biological studies. We upgraded the PROFEAT web server http://bidd2.nus.edu.sg/cgi-bin/profeat2016/main.cgi for computing up to 329 network descriptors and protein-protein interaction descriptors. PROFEAT network descriptors comprehensively describe the topological and connectivity characteristics of unweighted (uniform binding constants and molecular levels), edge-weighted (varying binding constants), node-weighted (varying molecular levels), edge-node-weighted (varying binding constants and molecular levels), and directed (oriented processes) networks. The usefulness of the network descriptors is illustrated by the literature-reported studies of the biological networks derived from the genome, interactome, transcriptome, metabolome, and diseasome profiles. Copyright © 2016 Elsevier Ltd. All rights reserved.
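
    A few descriptors of the kind such a server computes can be reproduced with networkx for illustration; the measures below are generic graph descriptors, not PROFEAT's exact descriptor set or nomenclature.

      import networkx as nx

      def basic_network_descriptors(G):
          # A handful of standard topological/connectivity descriptors.
          return {
              "nodes": G.number_of_nodes(),
              "edges": G.number_of_edges(),
              "density": nx.density(G),
              "avg_clustering": nx.average_clustering(G),
              "degree_assortativity": nx.degree_assortativity_coefficient(G),
          }

      G = nx.karate_club_graph()                   # stand-in for a PPI network
      print(basic_network_descriptors(G))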

  1. Calculating a checksum with inactive networking components in a computing system

    DOEpatents

    Aho, Michael E; Chen, Dong; Eisley, Noel A; Gooding, Thomas M; Heidelberger, Philip; Tauferner, Andrew T

    2014-12-16

    Calculating a checksum utilizing inactive networking components in a computing system, including: identifying, by a checksum distribution manager, an inactive networking component, wherein the inactive networking component includes a checksum calculation engine for computing a checksum; sending, to the inactive networking component by the checksum distribution manager, metadata describing a block of data to be transmitted by an active networking component; calculating, by the inactive networking component, a checksum for the block of data; transmitting, to the checksum distribution manager from the inactive networking component, the checksum for the block of data; and sending, by the active networking component, a data communications message that includes the block of data and the checksum for the block of data.
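
    The claimed flow reads naturally as three cooperating roles, sketched below in plain Python. All class and method names are illustrative, and CRC-32 merely stands in for whatever checksum engine the idle hardware component provides; this is a paraphrase of the abstract, not the patent's implementation.

      import zlib

      class InactiveComponent:
          # An idle networking component whose checksum engine can be borrowed.
          def compute_checksum(self, block: bytes) -> int:
              return zlib.crc32(block)

      class ChecksumDistributionManager:
          # Hands work for an outgoing block to the idle component.
          def __init__(self, idle_component):
              self.idle = idle_component

          def offload(self, block: bytes) -> int:
              # The "metadata describing the block" is modeled here simply
              # as the block itself.
              return self.idle.compute_checksum(block)

      class ActiveComponent:
          # Transmits the data block together with the returned checksum.
          def send(self, block: bytes, checksum: int):
              return {"payload": block, "checksum": checksum}

      manager = ChecksumDistributionManager(InactiveComponent())
      data = b"data block to transmit"
      message = ActiveComponent().send(data, manager.offload(data))
      print(hex(message["checksum"]))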

  2. Calculating a checksum with inactive networking components in a computing system

    DOEpatents

    Aho, Michael E; Chen, Dong; Eisley, Noel A; Gooding, Thomas M; Heidelberger, Philip; Tauferner, Andrew T

    2015-01-27

    Calculating a checksum utilizing inactive networking components in a computing system, including: identifying, by a checksum distribution manager, an inactive networking component, wherein the inactive networking component includes a checksum calculation engine for computing a checksum; sending, to the inactive networking component by the checksum distribution manager, metadata describing a block of data to be transmitted by an active networking component; calculating, by the inactive networking component, a checksum for the block of data; transmitting, to the checksum distribution manager from the inactive networking component, the checksum for the block of data; and sending, by the active networking component, a data communications message that includes the block of data and the checksum for the block of data.

  3. Simulation of Water Levels and Salinity in the Rivers and Tidal Marshes in the Vicinity of the Savannah National Wildlife Refuge, Coastal South Carolina and Georgia

    USGS Publications Warehouse

    Conrads, Paul; Roehl, Edwin A.; Daamen, Ruby C.; Kitchens, Wiley M.

    2006-01-01

    The Savannah Harbor is one of the busiest ports on the East Coast of the United States and is located downstream from the Savannah National Wildlife Refuge, which is one of the Nation's largest freshwater tidal marshes. The Georgia Ports Authority and the U.S. Army Corps of Engineers funded hydrodynamic and ecological studies to evaluate the potential effects of a proposed deepening of Savannah Harbor as part of the Environmental Impact Statement. These studies included a three-dimensional (3D) model of the Savannah River estuary system, which was developed to simulate changes in water levels and salinity in the system in response to geometry changes as a result of the deepening of Savannah Harbor, and a marsh-succession model that predicts plant distribution in the tidal marshes in response to changes in the water-level and salinity conditions in the marsh. Beginning in May 2001, the U.S. Geological Survey entered into cooperative agreements with the Georgia Ports Authority to develop empirical models to simulate the water level and salinity of the rivers and tidal marshes in the vicinity of the Savannah National Wildlife Refuge and to link the 3D hydrodynamic river-estuary model and the marsh-succession model. For the development of these models, many different databases were created that describe the complexity and behaviors of the estuary. The U.S. Geological Survey has maintained a network of continuous streamflow, water-level, and specific-conductance (field measurement to compute salinity) river gages in the study area since the 1980s and a network of water-level and salinity marsh gages in the study area since 1999. The Georgia Ports Authority collected water-level and salinity data during summer 1997 and 1999 and collected continuous water-level and salinity data in the marsh and connecting tidal creeks from 1999 to 2002. Most of the databases comprise time series that differ by variable type, periods of record, measurement frequency, location, and reliability. Understanding freshwater inflows, tidal water levels, and specific conductance in the rivers and marshes is critical to enhancing the predictive capabilities of a successful marsh succession model. Data-mining techniques, including artificial neural network (ANN) models, were applied to address various needs of the ecology study and to integrate the riverine predictions from the 3D model to the marsh-succession model. ANN models were developed to simulate riverine water levels and specific conductance in the vicinity of the tidal marshes for the full range of historical conditions using data from the river gaging networks. ANN models were also developed to simulate the marsh water levels and pore-water salinities using data from the marsh gaging networks. Using the marsh ANN models, the continuous marsh network was hindcasted to be concurrent with the long-term riverine network. The hindcasted data allow ecologists to compute hydrologic parameters, such as hydroperiods and exposure frequency, to help analyze historical vegetation data. To integrate the 3D hydrodynamic model, the marsh-succession model, and various time-series databases, a decision support system (DSS) was developed to support the various needs of regulatory and scientific stakeholders. The DSS required the development of a spreadsheet application that integrates the database, 3D hydrodynamic model output, and ANN riverine and marsh models into a single package that is easy to use and can be readily disseminated.
The DSS allows users to evaluate water-level and salinity response for different hydrologic conditions. Savannah River streamflows can be controlled by the user as constant flow, a percentage of historical flows, a percentile daily flow hydrograph, or as a user-specified hydrograph. The DSS can also use output from the 3D model at stream gages near the Savannah National Wildlife Refuge to simulate the effects in the tidal marshes. The DSS is distributed with a two-dimensional (

  4. Integrated computer-aided design using minicomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, O. O.

    1980-01-01

    Computer-Aided Design/Computer-Aided Manufacturing (CAD/CAM), highly interactive software, has been implemented on minicomputers at the NASA Langley Research Center. CAD/CAM software integrates many formerly fragmented programs and procedures into one cohesive system; it also includes finite element modeling and analysis, and has been interfaced via a computer network to a relational database management system and offline plotting devices on mainframe computers. The CAD/CAM software system requires interactive graphics terminals operating at a minimum of 4800 bits/sec transfer rate to a computer. The system is portable and introduces 'interactive graphics', which permits the creation and modification of models interactively. The CAD/CAM system has already produced designs for a large area space platform, a national transonic facility fan blade, and a laminar flow control wind tunnel model. Besides its design/drafting and finite element analysis capability, CAD/CAM provides options to produce automatic program tooling code to drive a numerically controlled (N/C) machine. Reductions in time for design, engineering, drawing, finite element modeling, and N/C machining will benefit productivity through reduced costs, fewer errors, and a wider range of configurations.

  5. The NSFNET: Beginnings of a National Research Internet.

    ERIC Educational Resources Information Center

    Catlett, Charles E.

    1989-01-01

    Describes the development, current status, and possible future of NSFNET, which is a backbone network designed to connect five national supercomputer centers established by the National Science Foundation. The discussion covers the implications of this network for research and national networking needs. (CLB)

  6. ARACHNE: A neural-neuroglial network builder with remotely controlled parallel computing

    PubMed Central

    Rusakov, Dmitri A.; Savtchenko, Leonid P.

    2017-01-01

    Creating and running realistic models of neural networks has hitherto been a task for computing professionals rather than experimental neuroscientists. This is mainly because such networks usually engage substantial computational resources, the handling of which requires specific programming skills. Here we put forward a newly developed simulation environment, ARACHNE: it enables an investigator to build and explore cellular networks of arbitrary biophysical and architectural complexity using the logic of NEURON and a simple interface on a local computer or a mobile device. The interface can control, through the internet, an optimized computational kernel installed on a remote computer cluster. ARACHNE can combine neuronal (wired) and astroglial (extracellular volume-transmission driven) network types and adopt realistic cell models from the NEURON library. The program and documentation (current version) are available at GitHub repository https://github.com/LeonidSavtchenko/Arachne under the MIT License (MIT). PMID:28362877

  7. Model based verification of the Secure Socket Layer (SSL) Protocol for NASA systems

    NASA Technical Reports Server (NTRS)

    Powell, John D.; Gilliam, David

    2004-01-01

    The National Aeronautics and Space Administration (NASA) has tens of thousands of networked computer systems and applications. Software security vulnerabilities present risks such as lost or corrupted data, information theft, and unavailability of critical systems. These risks represent potentially enormous costs to NASA. The NASA Code Q research initiative 'Reducing Software Security Risk (RSSR) Through an Integrated Approach' offers formal verification of information technology (IT), through the creation of a Software Security Assessment Instrument (SSAI), to address software security risks.

  8. Exploring the Plausibility of a National Multi-Agency Communications System for the Homeland Security Community: A Southeast Ohio Half-Duplex Voice Over IP Case Study

    DTIC Science & Technology

    2009-03-01

    ...modeled after the use by computer gamers in MMORPGs (massively multiuser online role-playing games). This is a good example of engaging the larger... ...during grant evaluations. There are more and more peer-reviewed journal articles being written on MMORPGs, but this work is mainly geared towards... ...of the massively multiuser online role-playing games (MMORPGs). Most of the literature is about leadership and social networking. There is...

  9. Computer Network Attack and the Use of Force in International Law: Thoughts on a Normative Framework

    DTIC Science & Technology

    1999-06-01

    ...U.S. and its allies on the battlefield, but a credible threat to employ chemical or biological weapons in pursuit of national objectives would give... ...injury. Instrumentalities that produce them are weapons. There is little debate about whether the use of chemicals or biologicals falls... ...For an interesting projection of factors likely to affect the use of force in the future, see Anthony D'Amato, Megatrends in the Use of...

  10. Facilities at Indian Institute of Astrophysics and New Initiatives

    NASA Astrophysics Data System (ADS)

    Bhatt, Bhuwan Chandra

    2018-04-01

    The Indian Institute of Astrophysics is a premier national institute of India for study and research in astronomy, astrophysics and related subjects. The Institute's main campus in Bangalore city in southern India houses the main administrative setup, library and computer center, photonics lab and a state-of-the-art mechanical workshop. IIA has a network of laboratories and observatories located in various places in India, including Kodaikanal (Tamilnadu), Kavalur (Tamilnadu), Gauribidanur (Karnataka), Leh & Hanle (Jammu & Kashmir) and Hosakote (Karnataka).

  11. A new graph-based method for pairwise global network alignment

    PubMed Central

    Klau, Gunnar W

    2009-01-01

    Background: In addition to component-based comparative approaches, network alignments provide the means to study conserved network topology such as common pathways and more complex network motifs. Yet, unlike in classical sequence alignment, the comparison of networks becomes computationally more challenging, as most meaningful assumptions instantly lead to NP-hard problems. Most previous algorithmic work on network alignments is heuristic in nature. Results: We introduce the graph-based maximum structural matching formulation for pairwise global network alignment. We relate the formulation to previous work and prove NP-hardness of the problem. Based on the new formulation we build upon recent results in computational structural biology and present a novel Lagrangian relaxation approach that, in combination with a branch-and-bound method, computes provably optimal network alignments. The Lagrangian algorithm alone is a powerful heuristic method, which produces solutions that are often near-optimal and, unlike those computed by pure heuristics, come with a quality guarantee. Conclusion: Computational experiments on the alignment of protein-protein interaction networks and on the classification of metabolic subnetworks demonstrate that the new method is reasonably fast and has advantages over pure heuristics. Our software tool is freely available as part of the LISA library. PMID:19208162

  12. A Computational Solution to Automatically Map Metabolite Libraries in the Context of Genome Scale Metabolic Networks.

    PubMed

    Merlet, Benjamin; Paulhe, Nils; Vinson, Florence; Frainay, Clément; Chazalviel, Maxime; Poupin, Nathalie; Gloaguen, Yoann; Giacomoni, Franck; Jourdan, Fabien

    2016-01-01

    This article describes a generic programmatic method for mapping chemical compound libraries on organism-specific metabolic networks from various databases (KEGG, BioCyc) and flat file formats (SBML and Matlab files). We show how this pipeline was successfully applied to decipher the coverage of chemical libraries set up by two metabolomics facilities, MetaboHub (French National Infrastructure for Metabolomics and Fluxomics) and Glasgow Polyomics (GP), on the metabolic networks available in the MetExplore web server. The present generic protocol is designed to formalize and reduce the volume of information transfer between the library and the network database. Matching of metabolites between libraries and metabolic networks is based on InChIs or InChIKeys and therefore requires that these identifiers be specified in both libraries and networks. In addition to providing coverage statistics, this pipeline also allows the visualization of mapping results in the context of metabolic networks. In order to achieve this goal, we tackled issues of programmatic interaction between two servers, improvement of metabolite annotation in metabolic networks, and automatic loading of a mapping into the genome-scale metabolic network analysis tool MetExplore. It is important to note that this mapping can also be performed on a single organism or a selection of organisms of interest and is thus not limited to large facilities.

  13. Engaging Cyber Communities

    DTIC Science & Technology

    2010-04-01

    ...technology-centric operations such as computer network attack and computer network defense. This leads to the question of whether the US military is... ...information and infrastructure. For the purpose of military operations, CNO are divided into CNA, CND, and computer network exploitation (CNE) enabling... ...of a CNA if they take undesirable action,” and from a defensive stance in CND, “providing information about non-military threat to computers in...

  14. Synchronized Pair Configuration in Virtualization-Based Lab for Learning Computer Networks

    ERIC Educational Resources Information Center

    Kongcharoen, Chaknarin; Hwang, Wu-Yuin; Ghinea, Gheorghita

    2017-01-01

    More studies are concentrating on using virtualization-based labs to facilitate computer or network learning concepts. Some benefits are lower hardware costs and greater flexibility in reconfiguring computer and network environments. However, few studies have investigated effective mechanisms for using virtualization fully for collaboration.…

  15. Systems Librarian and Automation Review.

    ERIC Educational Resources Information Center

    Schuyler, Michael

    1992-01-01

    Discusses software sharing on computer networks and the need for proper bandwidth; and describes the technology behind FidoNet, a computer network made up of electronic bulletin boards. Network features highlighted include front-end mailers, Zone Mail Hour, Nodelist, NetMail, EchoMail, computer conferences, tosser and scanner programs, and host…

  16. Models of Dynamic Relations Among Service Activities, System State and Service Quality on Computer and Network Systems

    DTIC Science & Technology

    2010-01-01

    Service quality on computer and network systems has become increasingly important as many conventional service transactions are moved online. Service quality of computer and network services can be measured by the performance of the service process in throughput, delay, and so on. On a computer and network system, competing service requests of users and associated service activities change the state of limited system resources, which in turn affects the achieved service... ...relations of service activities, system state and service...

  17. I/O routing in a multidimensional torus network

    DOEpatents

    Chen, Dong; Eisley, Noel A.; Heidelberger, Philip

    2017-02-07

    A method, system and computer program product are disclosed for routing data packets in a computing system comprising a multidimensional torus compute node network including a multitude of compute nodes, and an I/O node network including a plurality of I/O nodes. In one embodiment, the method comprises assigning to each of the data packets a destination address identifying one of the compute nodes; providing each of the data packets with a toio value; routing the data packets through the compute node network to the destination addresses of the data packets; and when each of the data packets reaches the destination address assigned to said each data packet, routing said each data packet to one of the I/O nodes if the toio value of said each data packet is a specified value. In one embodiment, each of the data packets is also provided with an ioreturn value used to route the data packets through the compute node network.
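
    The routing decision in the abstract can be paraphrased in a few lines: a packet carries a destination compute-node address plus a toio flag, and once it arrives at that destination, a set flag diverts it to the attached I/O node. The Python sketch below uses simplified dimension-ordered torus routing and made-up field names; it follows the abstract's wording, not the patent's actual logic.

      TOIO_SET = 1

      def next_hop(packet, node, torus_size):
          # Dimension-ordered routing: fix one coordinate at a time, taking
          # the shorter way around each torus ring.
          for d, (here, dest) in enumerate(zip(node, packet["dest"])):
              if here != dest:
                  fwd = (dest - here) % torus_size[d]
                  step = 1 if fwd <= torus_size[d] // 2 else -1
                  hop = list(node)
                  hop[d] = (here + step) % torus_size[d]
                  return ("compute", tuple(hop))
          # At the destination compute node: divert to I/O if requested.
          if packet["toio"] == TOIO_SET:
              return ("io", node)
          return ("deliver", node)

      pkt = {"dest": (2, 0, 1), "toio": TOIO_SET}
      print(next_hop(pkt, node=(0, 0, 1), torus_size=(4, 4, 4)))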

  18. I/O routing in a multidimensional torus network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Dong; Eisley, Noel A.; Heidelberger, Philip

    A method, system and computer program product are disclosed for routing data packets in a computing system comprising a multidimensional torus compute node network including a multitude of compute nodes, and an I/O node network including a plurality of I/O nodes. In one embodiment, the method comprises assigning to each of the data packets a destination address identifying one of the compute nodes; providing each of the data packets with a toio value; routing the data packets through the compute node network to the destination addresses of the data packets; and when each of the data packets reaches the destination address assigned to said each data packet, routing said each data packet to one of the I/O nodes if the toio value of said each data packet is a specified value. In one embodiment, each of the data packets is also provided with an ioreturn value used to route the data packets through the compute node network.

  19. Water Intelligence and the Cyber-Infrastructure Revolution

    NASA Astrophysics Data System (ADS)

    Cline, D. W.

    2015-12-01

    As an intrinsic factor in national security, the global economy, food and energy production, and human and ecological health, fresh water resources are increasingly being considered by an ever-widening array of stakeholders. The U.S. intelligence community has identified water as a key factor in the Nation's security risk profile. Water industries are growing rapidly, and seek to revolutionize the role of water in the global economy, making water an economic value rather than a limitation on operations. Recent increased focus on the complex interrelationships and interdependencies between water, food, and energy signal a renewed effort to move towards integrated water resource management. Throughout all of this, hydrologic extremes continue to wreak havoc on communities and regions around the world, in some cases threatening long-term economic stability. This increased attention on water coincides with the "second IT revolution" of cyber-infrastructure (CI). The CI concept is a convergence of technology, data, applications and human resources, all coalescing into a tightly integrated global grid of computing, information, networking and sensor resources, and ultimately serving as an engine of change for collaboration, education and scientific discovery and innovation. In the water arena, we have unprecedented opportunities to apply the CI concept to help address complex water challenges and shape the future world of water resources - on both science and socio-economic application fronts. Providing actionable local "water intelligence" nationally or globally is now becoming feasible through high-performance computing, data technologies, and advanced hydrologic modeling. Further development on all of these fronts appears likely and will help advance this much-needed capability. Lagging behind are water observation systems, especially in situ networks, which need significant innovation to keep pace with and help fuel rapid advancements in water intelligence.

  20. Montage Version 3.0

    NASA Technical Reports Server (NTRS)

    Jacob, Joseph; Katz, Daniel; Prince, Thomas; Berriman, Graham; Good, John; Laity, Anastasia

    2006-01-01

    The final version (3.0) of the Montage software has been released. To recapitulate from previous NASA Tech Briefs articles about Montage: This software generates custom, science-grade mosaics of astronomical images on demand from input files that comply with the Flexible Image Transport System (FITS) standard and contain image data registered on projections that comply with the World Coordinate System (WCS) standards. This software can be executed on single-processor computers, multi-processor computers, and such networks of geographically dispersed computers as the National Science Foundation's TeraGrid or NASA's Information Power Grid. The primary advantage of running Montage in a grid environment is that computations can be done on a remote supercomputer for efficiency. Multiple computers at different sites can be used for different parts of a computation, a significant advantage in cases of computations for large mosaics that demand more processor time than is available at any one site. Version 3.0 incorporates several improvements over prior versions. The most significant improvement is that this version is accessible to scientists located anywhere, through operational Web services that provide access to data from several large astronomical surveys and construct mosaics on either local workstations or remote computational grids as needed.

  1. The Evaluation of Rekeying Protocols Within the Hubenko Architecture as Applied to Wireless Sensor Networks

    DTIC Science & Technology

    2009-03-01

    ...SENSOR NETWORKS. Thesis presented to the Faculty, Department of Electrical and Computer Engineering, Graduate School of Engineering and... ...hierarchical, and Secure Lock within a wireless sensor network (WSN) under the Hubenko architecture. Using a Matlab computer simulation, the impact of the... ...rekeying protocol should be applied given particular network parameters, such as WSN size. 1.3 Experimental Approach: A computer simulation in...

  2. Neural-Network Simulator

    NASA Technical Reports Server (NTRS)

    Mitchell, Paul H.

    1991-01-01

    F77NNS (FORTRAN 77 Neural Network Simulator) is a computer program that simulates the popular back-error-propagation neural network. It is designed to take advantage of vectorization when used on computers having this capability, but it can also be used on any computer equipped with an ANSI-77 FORTRAN compiler. Problems involving matching of patterns or mathematical modeling of systems fit the class of problems F77NNS is designed to solve. The program has a restart capability, so a neural network can be solved in stages suitable to the user's resources and desires. It enables the user to customize the patterns of connections between layers of the network. The size of the neural network to which F77NNS can be applied is limited only by the amount of random-access memory available to the user.
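
    The algorithm F77NNS implements, back-error propagation, fits in a few lines of NumPy; the sketch below trains a tiny network on the XOR pattern-matching task as an example. This is a generic illustration of the algorithm, not the FORTRAN 77 program itself, and the layer sizes, learning rate and iteration count are arbitrary choices.

      import numpy as np

      rng = np.random.default_rng(0)
      # XOR patterns; the constant third column provides a bias input.
      X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
      y = np.array([[0], [1], [1], [0]], dtype=float)

      W1 = rng.normal(scale=0.5, size=(3, 4))      # input -> hidden weights
      W2 = rng.normal(scale=0.5, size=(4, 1))      # hidden -> output weights
      sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))

      for _ in range(20000):
          h = sigmoid(X @ W1)                      # forward pass
          out = sigmoid(h @ W2)
          d_out = (out - y) * out * (1 - out)      # backpropagate the error
          d_h = (d_out @ W2.T) * h * (1 - h)
          W2 -= 0.5 * h.T @ d_out                  # gradient-descent updates
          W1 -= 0.5 * X.T @ d_h

      print(out.round(2))                          # typically ~ [0, 1, 1, 0]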

  3. Why Do Computer Viruses Survive In The Internet?

    NASA Astrophysics Data System (ADS)

    Ifti, Margarita; Neumann, Paul

    2010-01-01

    We use the three-species cyclic competition model (Rock-Paper-Scissors), described by the reactions A+B→2B, B+C→2C, C+A→2A, to emulate a computer network with e-mail viruses. Different topologies of the network bring about different dynamics of the epidemics. When the parameters of the network are varied, it is observed that very high clustering coefficients are necessary for a pandemic to happen. The differences between the networks of computer users, e-mail networks, and social networks, as well as their role in determining the nature of epidemics, are also discussed.
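
    The reaction scheme A+B→2B, B+C→2C, C+A→2A translates directly into a stochastic update rule on a contact network: when a node meets a neighbour carrying the species that consumes its own, it is converted. The Python sketch below runs this dynamic on a small-world graph whose clustering can be tuned via the rewiring probability; the graph type and all parameters are illustrative, not the paper's exact setup.

      import random
      import networkx as nx

      random.seed(1)
      G = nx.watts_strogatz_graph(200, k=8, p=0.1) # tunable clustering
      state = {v: random.choice("ABC") for v in G}
      invades = {"A": "B", "B": "C", "C": "A"}     # A+B->2B: B converts A, etc.

      def step(G, state):
          u = random.choice(list(G))               # a random node...
          v = random.choice(list(G[u]))            # ...meets a random neighbour
          if state[v] == invades[state[u]]:
              state[u] = state[v]                  # conversion event

      for _ in range(20000):
          step(G, state)
      counts = {s: sum(1 for v in G if state[v] == s) for s in "ABC"}
      print(counts)                                # coexistence or extinction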

  4. NHDPlusHR: A national geospatial framework for surface-water information

    USGS Publications Warehouse

    Viger, Roland; Rea, Alan H.; Simley, Jeffrey D.; Hanson, Karen M.

    2016-01-01

    The U.S. Geological Survey is developing a new geospatial hydrographic framework for the United States, called the National Hydrography Dataset Plus High Resolution (NHDPlusHR), that integrates a diversity of the best-available information, robustly supports ongoing dataset improvements, enables hydrographic generalization to derive alternate representations of the network while maintaining feature identity, and supports modern scientific computing and Internet accessibility needs. This framework is based on the High Resolution National Hydrography Dataset, the Watershed Boundaries Dataset, and elevation from the 3-D Elevation Program, and will provide an authoritative, high precision, and attribute-rich geospatial framework for surface-water information for the United States. Using this common geospatial framework will provide a consistent basis for indexing water information in the United States, eliminate redundancy, and harmonize access to, and exchange of water information.

  5. Modernization of the Slovenian National Seismic Network

    NASA Astrophysics Data System (ADS)

    Vidrih, R.; Godec, M.; Gosar, A.; Sincic, P.; Tasic, I.; Zivcic, M.

    2003-04-01

    The Environmental Agency of the Republic of Slovenia, Seismology Office, is responsible for fast and reliable information about earthquakes originating in the area of Slovenia and nearby. In the year 2000 the project Modernization of the Slovenian National Seismic Network started. The purpose of the modernized seismic network is to enable fast and accurate automatic location of earthquakes, to determine earthquake parameters and to collect data on local, regional and global earthquakes. The modernized network will be finished in the year 2004 and will consist of 25 remote broadband seismic station subsystems, based on Q730 data loggers, transmitting data in real time to the Data Center in Ljubljana, where the Seismology Office is located. The remote broadband station subsystems include 16 surface broadband seismometers CMG-40T, 5 broadband seismometers CMG-40T with strong-motion accelerographs EpiSensor, and 4 borehole broadband seismometers CMG-40T, all with accurate timing provided by GPS receivers. The seismic network will cover the entire Slovenian territory, an area of 20,256 km2. The network is planned so that more seismic stations are located around bigger urban centres and in regions with greater vulnerability (NW Slovenia, Krsko Brezice region). By the end of the year 2002, three old seismic stations had been modernized and ten new seismic stations had been built. All seismic stations transmit data to UNIX-based computers running Antelope system software. The data are transmitted in real time using TCP/IP protocols over the Government Wide Area Network. Real-time data are also exchanged with seismic networks in the neighbouring countries, which collect data from seismic stations close to the Slovenian border. A typical seismic station consists of the seismic shaft with the sensor and the data acquisition system, and the service shaft with communication equipment (modem, router) and a power supply with a battery box, which provides energy in case of mains failure. The data acquisition systems record continuous time series sampled at 200 sps, 20 sps and 1 sps.

  6. Optical interconnection networks for high-performance computing systems

    NASA Astrophysics Data System (ADS)

    Biberman, Aleksandr; Bergman, Keren

    2012-04-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in the computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining this growth in parallelism introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate the feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers.

  7. Research Activity in Computational Physics utilizing High Performance Computing: Co-authorship Network Analysis

    NASA Astrophysics Data System (ADS)

    Ahn, Sul-Ah; Jung, Youngim

    2016-10-01

    The research activities of computational physicists utilizing high-performance computing are analyzed by bibliometric approaches. This study aims at providing computational physicists utilizing high-performance computing, as well as policy planners, with useful bibliometric results for an assessment of research activities. To achieve this purpose, we carried out a co-authorship network analysis of journal articles to assess the research activities of researchers in high-performance computational physics as a case study. For this study, we used journal articles from the Scopus database from Elsevier covering the time period 2004-2013. We ranked authors in the physics field utilizing high-performance computing by the number of papers published during the ten years from 2004. Finally, we drew the co-authorship network for the 45 top authors and their coauthors, and described some features of the co-authorship network in relation to the author rank. Suggestions for further studies are discussed.
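
    For readers unfamiliar with the technique, a co-authorship network is built by linking every pair of authors who share a paper, with edge weights counting joint papers; simple author rankings then follow from centrality measures. A minimal Python illustration with made-up author lists (the study itself used Scopus records):

      import itertools
      import networkx as nx

      articles = [                                 # author lists per paper
          ["Kim", "Lee", "Park"],
          ["Kim", "Lee"],
          ["Park", "Cho"],
      ]

      G = nx.Graph()
      for authors in articles:
          for a, b in itertools.combinations(authors, 2):
              w = G.get_edge_data(a, b, {"weight": 0})["weight"]
              G.add_edge(a, b, weight=w + 1)       # joint papers as edge weight

      # A simple author ranking by number of distinct collaborators.
      print(sorted(G.degree, key=lambda kv: -kv[1]))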

  8. Node fingerprinting: an efficient heuristic for aligning biological networks.

    PubMed

    Radu, Alex; Charleston, Michael

    2014-10-01

    With the continuing increase in availability of biological data and improvements to biological models, biological network analysis has become a promising area of research. An emerging technique for the analysis of biological networks is through network alignment. Network alignment has been used to calculate genetic distance, similarities between regulatory structures, and the effect of external forces on gene expression, and to depict conditional activity of expression modules in cancer. Network alignment is algorithmically complex, and therefore we must rely on heuristics, ideally as efficient and accurate as possible. The majority of current techniques for network alignment rely on precomputed information, such as with protein sequence alignment, or on tunable network alignment parameters, which may introduce an increased computational overhead. Our presented algorithm, which we call Node Fingerprinting (NF), is appropriate for performing global pairwise network alignment without precomputation or tuning, can be fully parallelized, and is able to quickly compute an accurate alignment between two biological networks. It has performed as well as or better than existing algorithms on biological and simulated data, and with fewer computational resources. The algorithmic validation performed demonstrates the low computational resource requirements of NF.
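
    The flavour of a fingerprint-style heuristic can be conveyed with a toy: describe each node by a cheap local signature (its degree and the sorted degrees of its neighbours) and greedily pair the most similar nodes across the two networks. The sketch below only conveys the idea of alignment without precomputed sequence data; it is not the authors' actual NF algorithm.

      import networkx as nx

      def fingerprint(G, v):
          # Local signature: own degree plus sorted neighbour degrees.
          return (G.degree(v), tuple(sorted(G.degree(u) for u in G[v])))

      def similarity(a, b):
          # Higher is better: penalize degree and neighbourhood mismatches.
          return -abs(a[0] - b[0]) - sum(abs(x - y) for x, y in zip(a[1], b[1]))

      def align(G1, G2):
          f1 = {v: fingerprint(G1, v) for v in G1}
          f2 = {v: fingerprint(G2, v) for v in G2}
          pairs, used = [], set()
          for v1 in sorted(f1, key=lambda v: -f1[v][0]):   # high degree first
              candidates = [v2 for v2 in f2 if v2 not in used]
              if not candidates:
                  break
              best = max(candidates, key=lambda v2: similarity(f1[v1], f2[v2]))
              used.add(best)
              pairs.append((v1, best))
          return pairs

      print(align(nx.path_graph(5), nx.cycle_graph(5)))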

  9. A program to compute the soft Robinson-Foulds distance between phylogenetic networks.

    PubMed

    Lu, Bingxin; Zhang, Louxin; Leong, Hon Wai

    2017-03-14

    Over the past two decades, phylogenetic networks have been studied to model reticulate evolutionary events. The relationships among phylogenetic networks, phylogenetic trees and clusters serve as the basis for reconstruction and comparison of phylogenetic networks. To understand these relationships, two problems are raised: the tree containment problem, which asks whether a phylogenetic tree is displayed in a phylogenetic network, and the cluster containment problem, which asks whether a cluster is represented at a node in a phylogenetic network. Both problems are NP-complete. A fast exponential-time algorithm for the cluster containment problem on arbitrary networks is developed and implemented in C. The resulting program is further extended into a computer program for fast computation of the soft Robinson-Foulds distance between phylogenetic networks. Two computer programs are developed to facilitate reconstruction and validation of phylogenetic network models in evolutionary and comparative genomics. Our simulation tests indicate that they are fast enough for use in practice. Additionally, our simulation data demonstrate that the distribution of the soft Robinson-Foulds distance between phylogenetic networks is unlikely to be normal.
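
    For intuition about the cluster view, recall that for two trees the (hard) Robinson-Foulds distance is half the size of the symmetric difference of their cluster sets; the program described above generalizes the soft variant to networks, which is computationally far harder. A tree-only Python sketch, with trees as nested tuples and clusters as leaf sets:

      def clusters(tree):
          # Collect every internal node's leaf cluster from a nested tuple.
          out = set()
          def leaves(t):
              if isinstance(t, tuple):
                  s = frozenset().union(*(leaves(c) for c in t))
                  out.add(s)
                  return s
              return frozenset([t])
          leaves(tree)
          return out

      t1 = (("a", "b"), ("c", "d"))
      t2 = (("a", "c"), ("b", "d"))
      rf = len(clusters(t1) ^ clusters(t2)) / 2    # symmetric difference / 2
      print(rf)                                    # -> 2.0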

  10. Student and Instructor Perceptions of the Usefulness of Computer-Based Microworlds in Supporting the Teaching and Assessment of Computer Networking Skills: An Exploratory Study

    ERIC Educational Resources Information Center

    Dabbagh, Nada; Beattie, Mark

    2010-01-01

    Skill shortages in the area of computer network troubleshooting are becoming increasingly acute. According to research sponsored by Cisco's Learning Institute, the demand for professionals with computer networking skills in the United States and Canada will outpace the supply of workers with those skills by an average of eight percent per year…

  11. Spatiotemporal Dynamics and Reliable Computations in Recurrent Spiking Neural Networks

    NASA Astrophysics Data System (ADS)

    Pyle, Ryan; Rosenbaum, Robert

    2017-01-01

    Randomly connected networks of excitatory and inhibitory spiking neurons provide a parsimonious model of neural variability, but are notoriously unreliable for performing computations. We show that this difficulty is overcome by incorporating the well-documented dependence of connection probability on distance. Spatially extended spiking networks exhibit symmetry-breaking bifurcations and generate spatiotemporal patterns that can be trained to perform dynamical computations under a reservoir computing framework.
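
    The key ingredient, connection probability that decays with distance, is simple to set up. The Python sketch below wires neurons on a ring with a Gaussian profile; the profile shape and all parameter values are illustrative stand-ins for those used in the study.

      import numpy as np

      def distance_dependent_adjacency(n=200, sigma=0.1, p_max=0.5, seed=0):
          rng = np.random.default_rng(seed)
          pos = np.linspace(0.0, 1.0, n, endpoint=False)
          d = np.abs(pos[:, None] - pos[None, :])
          d = np.minimum(d, 1.0 - d)               # periodic (ring) distance
          p = p_max * np.exp(-(d / sigma) ** 2)    # decay with distance
          A = rng.random((n, n)) < p               # Bernoulli wiring
          np.fill_diagonal(A, False)               # no self-connections
          return A

      A = distance_dependent_adjacency()
      print(A.sum(), "synapses among", A.shape[0], "neurons")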

  12. Distributed computing testbed for a remote experimental environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butner, D.N.; Casper, T.A.; Howard, B.C.

    1995-09-18

    Collaboration is increasing as physics research becomes concentrated on a few large, expensive facilities, particularly in magnetic fusion energy research, with national and international participation. These facilities are designed for steady-state operation and interactive, real-time experimentation. We are developing tools to provide for the establishment of geographically distant centers for interactive operations; such centers would allow scientists to participate in experiments from their home institutions. A testbed is being developed for a Remote Experimental Environment (REE), a "Collaboratory." The testbed will be used to evaluate the ability of a remotely located group of scientists to conduct research on the DIII-D Tokamak at General Atomics. The REE will serve as a testing environment for advanced control and collaboration concepts applicable to future experiments. Process-to-process communications over high-speed wide area networks provide real-time synchronization and exchange of data among multiple computer networks, while the ability to conduct research is enhanced by adding audio/video communication capabilities. The Open Software Foundation's Distributed Computing Environment is being used to test concepts in distributed control, security, naming, remote procedure calls and distributed file access using the Distributed File Services. We are exploring the technology and sociology of remotely participating in the operation of a large-scale experimental facility.

  13. Deep learning for computational chemistry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goh, Garrett B.; Hodas, Nathan O.; Vishnu, Abhinav

    The rise and fall of artificial neural networks is well documented in the scientific literature of both computer science and computational chemistry. Yet almost two decades later, we are now seeing a resurgence of interest in deep learning, a machine learning algorithm based on "deep" neural networks. Within the last few years, we have seen the transformative impact of deep learning in the computer science domain, notably in speech recognition and computer vision, to the extent that the majority of practitioners in those fields are now regularly eschewing prior established models in favor of deep learning models. In this review, we provide an introductory overview of the theory of deep neural networks and their unique properties as compared to traditional machine learning algorithms used in cheminformatics. By providing an overview of the variety of emerging applications of deep neural networks, we highlight their ubiquity and broad applicability to a wide range of challenges in the field, including QSAR, virtual screening, protein structure modeling, QM calculations, materials synthesis and property prediction. In reviewing the performance of deep neural networks, we observed that they consistently outperformed non-neural-network state-of-the-art models across disparate research topics, and deep neural network based models often exceeded the "glass ceiling" expectations of their respective tasks. Coupled with the maturity of GPU-accelerated computing for training deep neural networks and the exponential growth of chemical data on which to train these networks, we anticipate that deep learning algorithms will be a useful tool and may grow into a pivotal role for various challenges in the computational chemistry field.
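
    As a toy illustration of the class of models the review surveys, the sketch below fits a small feed-forward network to a mock QSAR-style task. The random "fingerprints" and target values are stand-ins, and scikit-learn's MLPRegressor is one convenient implementation, not the tooling the authors discuss:

```python
# Sketch: a small "deep" feed-forward network regressing a molecular
# property from a fingerprint vector. Data here are random stand-ins;
# real QSAR work would use actual fingerprints and measured properties.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(200, 64)).astype(float)  # mock 64-bit fingerprints
y = X @ rng.normal(size=64) + rng.normal(scale=0.1, size=200)  # mock property

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, y)
print("train R^2:", round(model.score(X, y), 3))
```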

  14. Ubiquitous human computing.

    PubMed

    Zittrain, Jonathan

    2008-10-28

    Ubiquitous computing means network connectivity everywhere, linking devices and systems as small as a drawing pin and as large as a worldwide product distribution chain. What could happen when people are so readily networked? This paper explores issues arising from two possible emerging models of ubiquitous human computing: fungible networked brainpower and collective personal vital sign monitoring.

  15. Computer-Based Semantic Network in Molecular Biology: A Demonstration.

    ERIC Educational Resources Information Center

    Callman, Joshua L.; And Others

    This paper analyzes the hardware and software features that would be desirable in a computer-based semantic network system for representing biology knowledge. It then describes in detail a prototype network of molecular biology knowledge that has been developed using Filevision software and a Macintosh computer. The prototype contains about 100…

  16. Neural network approach to proximity effect corrections in electron-beam lithography

    NASA Astrophysics Data System (ADS)

    Frye, Robert C.; Cummings, Kevin D.; Rietman, Edward A.

    1990-05-01

    The proximity effect, caused by electron beam backscattering during resist exposure, is an important concern in writing submicron features. It can be compensated by appropriate local changes in the incident beam dose, but computation of the optimal correction usually requires a prohibitively long time. We present an example of such a computation on a small test pattern, which we performed by an iterative method. We then used this solution as a training set for an adaptive neural network. After training, the network computed the same correction as the iterative method, but in a much shorter time. Correcting the image with a software-based neural network decreased the computation time by a factor of 30, and a hardware-based network enhanced the computation speed by more than a factor of 1000. Both methods had an acceptably small error of 0.5% compared to the results of the iterative computation. Additionally, we verified that the neural network correctly generalized the solution of the problem to patterns not contained in its training set.
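
    The training strategy described above can be sketched as follows, with a smoothing-style stand-in for the iterative dose solver (the authors' actual physics is not reproduced here) and a generic regressor in place of their network:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Sketch of the paper's strategy: compute dose corrections for a small test
# pattern by a slow iterative method, then fit a network on (pattern, dose)
# pairs so new corrections come from a fast forward pass. The "iterative
# solution" below is an invented stand-in that boosts dose where the local
# neighborhood is dense, not the authors' backscatter model.
rng = np.random.default_rng(2)
patterns = rng.random((500, 25))              # 5x5 local exposure windows
# stand-in target: dose compensating neighborhood backscatter at the center
target_dose = 1.0 + 0.3 * (patterns.mean(axis=1) - patterns[:, 12])

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
net.fit(patterns, target_dose)                # training set = iterative results
print("fit R^2:", round(net.score(patterns, target_dose), 3))
```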

  17. Critical phenomena in communication/computation networks with various topologies and suboptimal to optimal resource allocation

    NASA Astrophysics Data System (ADS)

    Cogoni, Marco; Busonera, Giovanni; Anedda, Paolo; Zanetti, Gianluigi

    2015-01-01

    We generalize previous studies on critical phenomena in communication networks [1,2] by adding computational capabilities to the nodes. In our model, a set of tasks with random origin, destination and computational structure is distributed on a computational network, modeled as a graph. By varying the temperature of a Metropolis Monte Carlo, we explore the global latency for an optimal to suboptimal resource assignment at a given time instant. By computing the two-point correlation function for the local overload, we study the behavior of the correlation distance (both for links and nodes) while approaching the congested phase: a transition from peaked to spread g(r) is seen above a critical (Monte Carlo) temperature Tc. The average latency trend of the system is predicted by averaging over several network traffic realizations while maintaining spatially detailed information for each node: a sharp decrease in performance is found above Tc independently of the workload. The globally optimized computational resource allocation and network routing defines a baseline for a future comparison of the transition behavior with existing routing strategies [3,4] for different network topologies.
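
    The annealing step at the heart of this model can be sketched as below; the cost function (load of the most overloaded node) and all parameters are illustrative stand-ins for the paper's latency measure:

```python
import random, math

# Sketch: Metropolis sampling of a task-to-node assignment, the move the
# paper uses to sweep from optimal to suboptimal allocations by raising the
# Monte Carlo temperature. Cost here is a toy "latency": the load of the
# most overloaded node; graph structure and parameters are illustrative.
random.seed(0)
n_nodes, n_tasks = 8, 40
load = [random.random() for _ in range(n_tasks)]     # task compute demands
assign = [random.randrange(n_nodes) for _ in range(n_tasks)]

def cost(assign):
    per_node = [0.0] * n_nodes
    for t, node in enumerate(assign):
        per_node[node] += load[t]
    return max(per_node)            # latency ~ most overloaded node

T = 0.5                             # Metropolis temperature
for step in range(20000):
    t = random.randrange(n_tasks)
    old = assign[t]
    before = cost(assign)
    assign[t] = random.randrange(n_nodes)
    delta = cost(assign) - before
    if delta > 0 and random.random() > math.exp(-delta / T):
        assign[t] = old             # reject uphill move
print("final max node load:", round(cost(assign), 3))
```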

  18. Spatial spreading of infectious disease via local and national mobility networks in South Korea

    NASA Astrophysics Data System (ADS)

    Kwon, Okyu; Son, Woo-Sik

    2017-12-01

    We study the spread of infectious disease based on local- and national-scale mobility networks. We construct a local mobility network using data on urban bus services to estimate local-scale movement of people. We also construct a national mobility network from origin-destination data of vehicular traffic between highway tollgates to evaluate national-scale movement of people. A metapopulation model is used to simulate the spread of epidemics: the number of infected people is simulated using a susceptible-infectious-recovered (SIR) model within each administrative division, and inter-division spread of infected people is determined through the local and national mobility networks. In this paper, we consider two scenarios for epidemic spread. In the first, the infectious disease spreads only through local-scale movement of people, that is, the local mobility network. In the second, it spreads via both local and national mobility networks. For the former, the simulation results show infection spreading sequentially to neighboring divisions. For the latter, we observe a faster spreading pattern to distant divisions. Thus, we confirm that the national mobility network enhances synchronization among the incidence profiles of all administrative divisions.
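
    A compact sketch of a metapopulation SIR of this kind, with an invented 4-division mobility matrix in place of the bus and tollgate data:

```python
import numpy as np

# Sketch: a metapopulation SIR in the spirit of the paper: SIR dynamics
# inside each division, coupled by a mobility matrix M[i][j] (rate of
# travel from division i to j). The matrix and rates are invented for
# illustration; the study builds M from bus and tollgate data.
rng = np.random.default_rng(3)
n_div = 4
M = rng.random((n_div, n_div)) * 0.01
np.fill_diagonal(M, 0.0)

beta, gamma, dt = 0.3, 0.1, 0.1
S, I, R = np.ones(n_div), np.zeros(n_div), np.zeros(n_div)
I[0] = 0.001; S[0] -= 0.001          # seed the outbreak in division 0

for _ in range(2000):
    new_inf = beta * S * I * dt
    S, I, R = S - new_inf, I + new_inf - gamma * I * dt, R + gamma * I * dt
    I = I + (M.T @ I - M.sum(axis=1) * I) * dt   # infecteds move between divisions
print("final attack rates:", np.round(R, 3))
```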

  19. The Spider Center Wide File System: From Concept to Reality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shipman, Galen M; Dillow, David A; Oral, H Sarp

    2009-01-01

    The Leadership Computing Facility (LCF) at Oak Ridge National Laboratory (ORNL) has a diverse portfolio of computational resources ranging from a petascale XT4/XT5 simulation system (Jaguar) to numerous other systems supporting development, visualization, and data analytics. In order to support the vastly different I/O needs of these systems, Spider, a Lustre-based center-wide file system, was designed and deployed to provide over 240 GB/s of aggregate throughput with over 10 petabytes of formatted capacity. A multi-stage InfiniBand network, dubbed the Scalable I/O Network (SION), with over 889 GB/s of bisectional bandwidth was deployed as part of Spider to provide connectivity to our simulation, development, visualization, and other platforms. To our knowledge, at the time of writing, Spider is the largest and fastest POSIX-compliant parallel file system in production. This paper details the overall architecture of the Spider system, challenges in deploying and initial testing of a file system of this scale, and novel solutions to these challenges which offer key insights into future file system design.

  20. Extending Simple Network Management Protocol (SNMP) Beyond Network Management: A MIB Architecture for Network-Centric Services

    DTIC Science & Technology

    2007-03-01

    potential of moving closer to the goal of a fully service-oriented GIG by allowing even computing- and bandwidth-constrained elements to participate ... the functionality provided by core network assets with relatively unlimited bandwidth and computing resources. Finally, the nature of information is ... the Department of Defense is a requirement for ubiquitous computer connectivity. An espoused vehicle for delivering that ubiquity is the Global

  1. Modeling a Wireless Network for International Space Station

    NASA Technical Reports Server (NTRS)

    Alena, Richard; Yaprak, Ece; Lamouri, Saad

    2000-01-01

    This paper describes the application of wireless local area network (LAN) simulation modeling methods to the hybrid LAN architecture designed for supporting crew-computing tools aboard the International Space Station (ISS). These crew-computing tools, such as wearable computers and portable advisory systems, will provide crew members with real-time vehicle and payload status information and access to digital technical and scientific libraries, significantly enhancing human capabilities in space. A wireless network, therefore, will provide wearable computers and remote instruments with the high-performance computational power needed by next-generation 'intelligent' software applications. Wireless network performance in such simulated environments is characterized by the sustainable throughput of data under different traffic conditions. These data will be used to help plan the addition of more access points supporting new modules and more nodes for increased network capacity as the ISS grows.

  2. Distributed computing methodology for training neural networks in an image-guided diagnostic application.

    PubMed

    Plagianakos, V P; Magoulas, G D; Vrahatis, M N

    2006-03-01

    Distributed computing is a process through which a set of computers connected by a network is used collectively to solve a single problem. In this paper, we propose a distributed computing methodology for training neural networks for the detection of lesions in colonoscopy. Our approach is based on partitioning the training set across multiple processors using a parallel virtual machine. In this way, interconnected computers of varied architectures can be used for the distributed evaluation of the error function and gradient values, and, thus, training neural networks utilizing various learning methods. The proposed methodology has large granularity and low synchronization, and has been implemented and tested. Our results indicate that the parallel virtual machine implementation of the training algorithms developed leads to considerable speedup, especially when large network architectures and training sets are used.
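
    The partitioning scheme can be sketched as below, with each worker evaluating the gradient on its shard of the training set and the partial gradients summed afterwards. The linear least-squares gradient is a stand-in for the neural network error gradient, and Python's multiprocessing stands in for the parallel virtual machine the paper used:

```python
# Sketch of the data-partitioning idea: split the training set across
# workers, evaluate the error gradient per shard in parallel, and sum.
# The model is a toy linear least-squares stand-in for a neural network.
import numpy as np
from multiprocessing import Pool

def partial_gradient(args):
    X, y, w = args
    return 2 * X.T @ (X @ w - y)        # gradient of ||Xw - y||^2 on this shard

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    X, w = rng.random((1000, 5)), np.zeros(5)
    y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0])
    shards = [(Xs, ys, w) for Xs, ys in zip(np.array_split(X, 4),
                                            np.array_split(y, 4))]
    with Pool(4) as pool:
        grad = sum(pool.map(partial_gradient, shards))
    print("assembled gradient:", np.round(grad[:3], 2))
```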

  3. Final Report. Analysis and Reduction of Complex Networks Under Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marzouk, Youssef M.; Coles, T.; Spantini, A.

    2013-09-30

    The project was a collaborative effort among MIT, Sandia National Laboratories (local PI Dr. Habib Najm), the University of Southern California (local PI Prof. Roger Ghanem), and The Johns Hopkins University (local PI Prof. Omar Knio, now at Duke University). Our focus was the analysis and reduction of large-scale dynamical systems emerging from networks of interacting components. Such networks underlie myriad natural and engineered systems. Examples important to DOE include chemical models of energy conversion processes, and elements of national infrastructure—e.g., electric power grids. Time scales in chemical systems span orders of magnitude, while infrastructure networks feature both local and long-distance connectivity, with associated clusters of time scales. These systems also blend continuous and discrete behavior; examples include saturation phenomena in surface chemistry and catalysis, and switching in electrical networks. Reducing size and stiffness is essential to tractable and predictive simulation of these systems. Computational singular perturbation (CSP) has been effectively used to identify and decouple dynamics at disparate time scales in chemical systems, allowing reduction of model complexity and stiffness. In realistic settings, however, model reduction must contend with uncertainties, which are often greatest in the large-scale systems most in need of reduction. Uncertainty is not limited to parameters; one must also address structural uncertainties—e.g., whether a link is present in a network—and the impact of random perturbations, e.g., fluctuating loads or sources. Research under this project developed new methods for the analysis and reduction of complex multiscale networks under uncertainty, by combining computational singular perturbation (CSP) with probabilistic uncertainty quantification. CSP yields asymptotic approximations of reduced-dimensionality "slow manifolds" on which a multiscale dynamical system evolves. Introducing uncertainty in this context raised fundamentally new issues, e.g., how is the topology of slow manifolds transformed by parametric uncertainty? How does one construct dynamical models on these uncertain manifolds? To address these questions, we used stochastic spectral polynomial chaos (PC) methods to reformulate uncertain network models and analyzed them using CSP in probabilistic terms. Finding uncertain manifolds involved the solution of stochastic eigenvalue problems, facilitated by projection onto PC bases. These problems motivated us to explore the spectral properties of stochastic Galerkin systems. We also introduced novel methods for rank reduction in stochastic eigensystems—transformations of an uncertain dynamical system that lead to lower storage and solution complexity. These technical accomplishments are detailed below. This report focuses on the MIT portion of the joint project.
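
    The deterministic CSP step the project builds on can be illustrated on a textbook 2x2 stiff system (not one of the project's network models): eigendecompose the Jacobian, sort modes by time scale, and project onto the slow subspace.

```python
import numpy as np

# Sketch of the basic CSP idea: eigendecompose the Jacobian of a stiff
# system, split modes by time scale, and project the dynamics onto the
# slow subspace. This toy 2x2 system (one fast, one slow mode) is an
# illustrative assumption, not one of the project's models.
J = np.array([[-100.0, 1.0],
              [   1.0, -0.1]])         # stiff Jacobian: eigenvalues ~ -100, ~ -0.09
vals, vecs = np.linalg.eig(J)
order = np.argsort(np.abs(vals))       # slow modes first
print("time scales:", np.round(1 / np.abs(vals[order]), 3))
slow = vecs[:, order[:1]]              # basis of the slow ("manifold") direction
P_slow = slow @ np.linalg.pinv(slow)   # projector onto the slow subspace
print("slow projector:\n", np.round(P_slow, 3))
```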

  4. [Introduction].

    PubMed

    Alberts, Gerard; van den Bogaard, Adrienne

    2008-01-01

    Along with the international trends in the history of computing, Dutch contributions over the past twenty years moved away from a focus on machinery to the broader scope of the use of computers, the appropriation of computing technologies in various traditions, labour relations and professionalisation issues, and, lately, software. It is only natural that an emerging field like computer science sets out to write its genealogy and canonise the important steps in its intellectual endeavour. It is fair to say that a historiography diverging from such "home" interest started in 1987 with the work of Eda Kranakis--then active in The Netherlands--commissioned by the national bureau for technology assessment, and of Gerard Alberts, who turned a commemorative volume of the Mathematical Center into a history of that institute. History of computing in The Netherlands made a major leap in the spring of 1994 when Dirk de Wit, Jan van den Ende and Ellen van Oost defended their dissertations on the roads towards adoption of computing technology in banking, in science and engineering, and on the gender aspect in computing. Here, history of computing had already moved from machines to the use of computers. The three authors joined Gerard Alberts and Onno de Wit in preparing a volume on the rise of IT in The Netherlands, the sequel of which is now in preparation by a team led by Adrienne van den Bogaard. Dutch research reflected the international attention to professionalisation issues (Ensmenger, Haigh) very early on in the dissertation by Ruud van Dael, Something to do with computers (2001), revealing how occupations dealing with computers typically escape the pattern of closure by professionalisation expected by the (thus outdated) sociology of professions. History of computing not only takes use and users into consideration, but finally, as one may say, confronts head-on the technological side of putting the machine to use: software. The groundbreaking work of the 2000 Paderborn meeting and of Martin Campbell-Kelly resonates in work done in The Netherlands and recently in a major research project sponsored by the European Science Foundation: Software for Europe. The four contributions to this issue offer a true cross-section of ongoing history of computing in The Netherlands. Gerard Alberts and Huub de Beer return to the earliest computers at the Mathematical Center. As they do so from the perspective of using the machines, the result is, let us say, remarkable. Adrienne van den Bogaard compares the styles of software as practiced by Van der Poel and Dijkstra: so much did these two pioneers have in common, so different were the consequences they drew. Frank Veraart treats us to an excerpt from his recent dissertation on the domestication of microcomputer technology: the appropriation of computing technology is shown through the role of intermediate actors. Onno de Wit, finally, gives an account of the development, prior to the Internet, of a national data communication network among large-scale users and its remarkable persistence under competition with new network technologies.

  5. Truth in Reporting: How Data Capture Methods Obfuscate Actual Surgical Site Infection Rates within a Health Care Network System.

    PubMed

    Bordeianou, Liliana; Cauley, Christy E; Antonelli, Donna; Bird, Sarah; Rattner, David; Hutter, Matthew; Mahmood, Sadiqa; Schnipper, Deborah; Rubin, Marc; Bleday, Ronald; Kenney, Pardon; Berger, David

    2017-01-01

    Two systems measure surgical site infection rates following colorectal surgeries: the American College of Surgeons National Surgical Quality Improvement Program and the Centers for Disease Control and Prevention National Healthcare Safety Network. The Centers for Medicare & Medicaid Services pay-for-performance initiatives use National Healthcare Safety Network data for hospital comparisons. This study aimed to compare database concordance. This is a multi-institution cohort study of a systemwide Colorectal Surgery Collaborative. The National Surgical Quality Improvement Program requires rigorous, standardized data capture techniques; the National Healthcare Safety Network allows 5 data capture techniques. Standardized surgical site infection rates were compared between databases, and the Cohen κ-coefficient was calculated. This study was conducted at Boston-area hospitals. National Healthcare Safety Network or National Surgical Quality Improvement Program patients undergoing colorectal surgery were included. Standardized surgical site infection rates were the primary outcomes of interest. Thirty-day surgical site infection rates were compared for 3547 (National Surgical Quality Improvement Program) vs 5179 (National Healthcare Safety Network) colorectal procedures (2012-2014). Discrepancies appeared: the National Surgical Quality Improvement Program database of hospital 1 (N = 1480 patients) routinely found surgical site infection rates of approximately 10%, rates routinely deemed "exemplary" or "as expected" (100%). National Healthcare Safety Network data from the same hospital and time period (N = 1881) revealed a similar overall surgical site infection rate (10%), but standardized rates were deemed "worse than national average" 80% of the time. Overall, hospitals using less rigorous capture methods had better surgical site infection rates in the National Healthcare Safety Network than in standardized National Surgical Quality Improvement Program reports. The correlation coefficient between standardized infection rates was 0.03 (p = 0.88). During 25 site-time period observations, National Surgical Quality Improvement Program and National Healthcare Safety Network data matched for 52% of observations (13/25); κ = 0.10 (95% CI, -0.1366 to 0.3402; p = 0.403), indicating poor agreement. This study investigated hospitals located in the Northeastern United States only. Variation in Centers for Medicare & Medicaid Services-mandated National Healthcare Safety Network infection surveillance methodology leads to unreliable results, which is apparent when these results are compared with standardized data. Higher-quality data would improve care quality and allow outcomes to be compared among institutions.
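
    For reference, the agreement statistic reported above can be computed as below. The verdict lists are hypothetical, chosen only to show the mechanics (the study's own figures were 13/25 matches and κ = 0.10):

```python
# Sketch: Cohen's kappa, the agreement statistic the study reports. It
# compares observed agreement between two raters (here, the two databases'
# verdicts per site-time period) with the agreement expected by chance.
# The counts below are invented, illustrative stand-ins.
def cohens_kappa(a, b):
    labels = sorted(set(a) | set(b))
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    p_exp = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_obs - p_exp) / (1 - p_exp)

nsqip = ["ok"] * 20 + ["worse"] * 5          # hypothetical verdicts
nhsn  = ["ok"] * 12 + ["worse"] * 13
print(round(cohens_kappa(nsqip, nhsn), 2))
```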

  6. Local area networking: Ames centerwide network

    NASA Technical Reports Server (NTRS)

    Price, Edwin

    1988-01-01

    A computer network can benefit the user by making his/her work quicker and easier. A computer network is made up of seven different layers with the lowest being the hardware, the top being the user, and the middle being the software. These layers are discussed.

  7. Using E-Mail across Computer Networks.

    ERIC Educational Resources Information Center

    Hazari, Sunil

    1990-01-01

    Discusses the use of telecommunications technology to exchange electronic mail, files, and messages across different computer networks. Networks highlighted include ARPA Internet; BITNET; USENET; FidoNet; MCI Mail; and CompuServe. Examples of the successful use of networks in higher education are given. (Six references) (LRW)

  8. The DoD's High Performance Computing Modernization Program - Ensuring the National Earth Systems Prediction Capability Becomes Operational

    NASA Astrophysics Data System (ADS)

    Burnett, W.

    2016-12-01

    The Department of Defense's (DoD) High Performance Computing Modernization Program (HPCMP) provides high performance computing to address the most significant challenges in computational resources, software application support and nationwide research and engineering networks. Today, the HPCMP has a critical role in ensuring the National Earth System Prediction Capability (N-ESPC) achieves initial operational status in 2019. A 2015 study commissioned by the HPCMP found that N-ESPC computational requirements will exceed interconnect bandwidth capacity due to the additional load from data assimilation and from passing connecting data between ensemble codes. Memory bandwidth and I/O bandwidth will continue to be significant bottlenecks for the scalability of the Navy's Hybrid Coordinate Ocean Model (HYCOM) - by far the major driver of computing resource requirements in the N-ESPC. The study also found that few of the N-ESPC model developers have detailed plans to ensure their respective codes scale through 2024. Three HPCMP initiatives are designed to directly address and support these issues: Productivity Enhancement, Technology Transfer and Training (PETTT), the HPCMP Applications Software Initiative (HASI), and Frontier Projects. PETTT supports code conversion by providing assistance, expertise and training in scalable and high-end computing architectures. HASI addresses the continuing need for modern application software that executes effectively and efficiently on next-generation high-performance computers. Frontier Projects enable research and development that could not be achieved using typical HPCMP resources by providing multi-disciplinary teams with access to exceptional amounts of high performance computing resources. Finally, the Navy's DoD Supercomputing Resource Center (DSRC) currently operates a 6-petabyte system, of which Naval Oceanography receives 15% of operational computational system use, or approximately 1 petabyte of the processing capability. The DSRC will provide the DoD with future computing assets to initially operate the N-ESPC in 2019. This talk will further describe how the DoD's HPCMP will ensure the N-ESPC becomes operational, efficiently and effectively, using next-generation high performance computing.

  9. Inverse targeting — An effective immunization strategy

    NASA Astrophysics Data System (ADS)

    Schneider, C. M.; Mihaljev, T.; Herrmann, H. J.

    2012-05-01

    We propose a new method to immunize populations or computer networks against epidemics which is more efficient than any continuous immunization method considered before. The novelty of our method resides in the way the immunization targets are determined. First we identify those individuals or computers that contribute the least to the disease spreading, measured through their contribution to the size of the largest connected cluster in the social or computer network. The immunization process then follows the list of identified individuals or computers in inverse order, immunizing first those which are most relevant for the epidemic spreading. We have applied our immunization strategy to several model networks and two real networks, the Internet and the collaboration network of high-energy physicists. We find that our new immunization strategy is up to 14% more efficient for model networks, and up to 33% for real networks, than dynamically immunizing the most connected nodes in a network. Our strategy is also numerically efficient and can therefore be applied to large systems.
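
    A naive version of the target-selection step, using networkx and scoring each node by the drop in the giant component when it is removed (the paper's procedure is more refined; this brute-force recomputation is only for illustration):

```python
import networkx as nx

# Sketch of the targeting idea: score each node by how much the giant
# component shrinks when it is removed, then immunize in order of that
# contribution, most relevant first. Recomputing the giant component per
# removal is the naive version, shown here on a toy scale-free graph.
G = nx.barabasi_albert_graph(200, 2, seed=0)

def giant(G):
    return len(max(nx.connected_components(G), key=len))

base = giant(G)
contribution = {}
for v in G.nodes:
    H = G.copy()
    H.remove_node(v)
    contribution[v] = base - giant(H)   # drop in giant component size

# immunize nodes whose removal hurts the epidemic most, first
order = sorted(contribution, key=contribution.get, reverse=True)
print("first 5 immunization targets:", order[:5])
```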

  10. 78 FR 27249 - Announcement of Funding Awards for Fiscal Year 2012/2013; Strong Cities, Strong Communities...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-09

    ... Awards for Fiscal Year 2012/2013; Strong Cities, Strong Communities National Resource Network AGENCY... 2012/2013 Strong Cities, Strong Communities National Resource Network (SC2 Network). The purpose of... SC2 Network is a capacity building program targeted to assisting the nation's most distressed...

  11. Local-Area-Network Simulator

    NASA Technical Reports Server (NTRS)

    Gibson, Jim; Jordan, Joe; Grant, Terry

    1990-01-01

    Local Area Network Extensible Simulator (LANES) computer program provides method for simulating performance of high-speed local-area-network (LAN) technology. Developed as design and analysis software tool for networking computers on board proposed Space Station. Load, network, link, and physical layers of layered network architecture all modeled. Mathematically models according to different lower-layer protocols: Fiber Distributed Data Interface (FDDI) and Star*Bus. Written in FORTRAN 77.

  12. Encryption for Remote Control via Internet or Intranet

    NASA Technical Reports Server (NTRS)

    Lineberger, Lewis

    2005-01-01

    A data-communication protocol has been devised to enable secure, reliable remote control of processes and equipment via a collision-based network, while using minimal bandwidth and computation. The network could be the Internet or an intranet. Control is made secure by use of both a password and a dynamic key, which is sent transparently to a remote user by the controlled computer (that is, the computer, located at the site of the equipment or process to be controlled, that exerts direct control over the process). The protocol functions in the presence of network latency, overcomes errors caused by missed dynamic keys, and defeats attempts by unauthorized remote users to gain control. The protocol is not suitable for real-time control, but is well suited for applications in which control latencies up to about 0.5 second are acceptable. The encryption scheme involves the use of both a dynamic and a private key, without any additional overhead that would degrade performance. The dynamic key is embedded in the equipment- or process-monitor data packets sent out by the controlled computer: in other words, the dynamic key is a subset of the data in each such data packet. The controlled computer maintains a history of the last 3 to 5 data packets for use in decrypting incoming control commands. In addition, the controlled computer records a private key (password) that is given to the remote computer. The encrypted incoming command is permuted by both the dynamic and private key. A person who records the command data in a given packet for hostile purposes cannot use that packet after the dynamic key expires (typically within 3 seconds). Even a person in possession of an unauthorized copy of the command/remote-display software cannot use that software in the absence of the password. The use of a dynamic key embedded in the outgoing data makes the central-processing-unit overhead very small. The use of a National Instruments DataSocket(TradeMark) (or equivalent) protocol or the User Datagram Protocol makes it possible to obtain reasonably short response times: typical response times in event-driven control, using packets sized about 300 bytes, are <0.2 second for commands issued from locations anywhere on Earth. The protocol requires that control commands represent absolute values of controlled parameters (e.g., a specified temperature), as distinguished from changes in values of controlled parameters (e.g., a specified increment of temperature). Each command is issued three or more times to ensure delivery in crowded networks. The use of absolute-value commands prevents additional (redundant) commands from causing trouble. Because a remote controlling computer receives "talkback" in the form of data packets from the controlled computer, typically within a time interval of 1 second or less, the controlling computer can re-issue a command if network failure has occurred. The controlled computer, the process or equipment that it controls, and any human operator(s) at the site of the controlled equipment or process should be equipped with safety measures to prevent damage to equipment or injury to humans. These features could be a combination of software, external hardware, and intervention by the human operator(s). The protocol is not fail-safe, but by adopting these safety measures as part of the protocol, one makes the protocol a robust means of controlling remote processes and equipment by use of typical office computers via intranets and/or the Internet.
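
    A sketch of the protocol's central mechanism under stated assumptions: the packet layout, SHA-256 keystream, and expiry handling below are illustrative choices, not the NASA implementation. The controlled computer embeds a dynamic key in each monitor packet; a command is accepted only if it was permuted with both that key and the shared password:

```python
import hashlib, time

# Sketch: a command is XORed with a keystream derived from the dynamic key
# (taken from the latest monitor packet) plus the shared password, so a
# recorded packet becomes useless once the dynamic key rotates. Hash choice
# and key format are assumptions made for this illustration.
PASSWORD = b"shared-secret"

def keystream(dynamic_key: bytes, length: int) -> bytes:
    s = hashlib.sha256(dynamic_key + PASSWORD).digest()
    while len(s) < length:
        s += hashlib.sha256(s).digest()
    return s[:length]

def encrypt(command: bytes, dynamic_key: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(command, keystream(dynamic_key, len(command))))

decrypt = encrypt                        # an XOR stream is its own inverse

dyn = str(time.time()).encode()          # key embedded in the last monitor packet
packet = encrypt(b"SET TEMP 72.5", dyn)  # absolute-value command, as required
print(decrypt(packet, dyn))              # controlled computer recovers the command
```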

  13. Meteorological Processes Affecting the Transport of Emissions from the Navajo Generating Station to Grand Canyon National Park.

    NASA Astrophysics Data System (ADS)

    Lindsey, Charles G.; Chen, Jun; Dye, Timothy S.; Richards, L. Willard; Blumenthal, Donald L.

    1999-08-01

    During the 1990 Navajo Generating Station (NGS) Winter Visibility Study, a network of surface and upper-air meteorological measurement systems was operated in and around Grand Canyon National Park to investigate atmospheric processes in complex terrain that affected the transport of emissions from the nearby NGS. This network included 15 surface monitoring stations, eight balloon sounding stations (equipped with a mix of rawinsonde, tethersonde, and Airsonde sounding systems), three Doppler radar wind profilers, and four Doppler sodars. Measurements were made from 10 January through 31 March 1990. Data from this network were used to prepare objectively analyzed wind fields, trajectories, and streak lines to represent transport of emissions from the NGS, and to prepare isentropic analyses of the data. The results of these meteorological analyses were merged in the form of a computer animation that depicted the streak line analyses along with measurements of perfluorocarbon tracer, SO2, and sulfate aerosol concentrations, as well as visibility measurements collected by an extensive surface monitoring network. These analyses revealed that synoptic-scale circulations associated with the passage of low pressure systems followed by the formation of high pressure ridges accompanied the majority of cases when NGS emittants appeared to be transported to the Grand Canyon. The authors' results also revealed terrain influences on transport within the topography of the study area, especially mesoscale flows inside the Lake Powell basin and along the plain above the Marble Canyon.

  14. Object-oriented Tools for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Adler, Richard M.

    1993-01-01

    Distributed computing systems are proliferating, owing to the availability of powerful, affordable microcomputers and inexpensive communication networks. A critical problem in developing such systems is getting application programs to interact with one another across a computer network. Remote interprogram connectivity is particularly challenging across heterogeneous environments, where applications run on different kinds of computers and operating systems. NetWorks! (trademark) is an innovative software product that provides an object-oriented messaging solution to these problems. This paper describes the design and functionality of NetWorks! and illustrates how it is being used to build complex distributed applications for NASA and in the commercial sector.

  15. Distinguishing humans from computers in the game of go: A complex network approach

    NASA Astrophysics Data System (ADS)

    Coquidé, C.; Georgeot, B.; Giraud, O.

    2017-08-01

    We compare complex networks built from the game of go and obtained from databases of human-played games with those obtained from computer-played games. Our investigations show that statistical features of the human-based networks and the computer-based networks differ, and that these differences can be statistically significant on a relatively small number of games using specific estimators. We show that the deterministic or stochastic nature of the computer algorithm playing the game can also be distinguished from these quantities. This can be seen as a tool to implement a Turing-like test for go simulators.

  16. High-speed on-chip windowed centroiding using photodiode-based CMOS imager

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata (Inventor); Sun, Chao (Inventor); Yang, Guang (Inventor); Cunningham, Thomas J. (Inventor); Hancock, Bruce (Inventor)

    2003-01-01

    A centroid computation system is disclosed. The system has an imager array, a switching network, computation elements, and a divider circuit. The imager array has columns and rows of pixels. The switching network is adapted to receive pixel signals from the imager array. The plurality of computation elements operates to compute inner products for at least the x and y centroids. The plurality of computation elements has only passive elements to provide inner products of the pixel signals from the switching network. The divider circuit is adapted to receive the inner products and compute the x and y centroids.

  17. High-speed on-chip windowed centroiding using photodiode-based CMOS imager

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata (Inventor); Sun, Chao (Inventor); Yang, Guang (Inventor); Cunningham, Thomas J. (Inventor); Hancock, Bruce (Inventor)

    2004-01-01

    A centroid computation system is disclosed. The system has an imager array, a switching network, computation elements, and a divider circuit. The imager array has columns and rows of pixels. The switching network is adapted to receive pixel signals from the imager array. The plurality of computation elements operates to compute inner products for at least the x and y centroids. The plurality of computation elements has only passive elements to provide inner products of the pixel signals from the switching network. The divider circuit is adapted to receive the inner products and compute the x and y centroids.
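
    The computation both patents describe reduces to two inner products and a division, which the sketch below performs in numpy on a hypothetical 4x4 pixel window (the hardware performs the same arithmetic with passive elements and a divider circuit):

```python
import numpy as np

# Sketch of the windowed centroid: inner products of the pixel window with
# its column and row indices, then a divide. Numpy stands in for the
# switching network, passive computation elements, and divider circuit.
window = np.array([[0, 1, 2, 1],
                   [1, 4, 6, 2],
                   [0, 2, 3, 1],
                   [0, 0, 1, 0]], dtype=float)   # hypothetical pixel intensities
ys, xs = np.indices(window.shape)
total = window.sum()
x_centroid = (window * xs).sum() / total          # inner product, then divider
y_centroid = (window * ys).sum() / total
print(round(x_centroid, 3), round(y_centroid, 3))
```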

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dykstra, Dave; Garzoglio, Gabriele; Kim, Hyunwoo

    As of 2012, a number of US Department of Energy (DOE) National Laboratories have access to a 100 Gb/s wide-area network backbone. The ESnet Advanced Networking Initiative (ANI) project is intended to develop a prototype network, based on emerging 100 Gb/s Ethernet technology. The ANI network will support DOE's science research programs. A 100 Gb/s network test bed is a key component of the ANI project. The test bed offers the opportunity for early evaluation of 100 Gb/s network infrastructure for supporting the high-impact data movement typical of science collaborations and experiments. In order to make effective use of this advanced infrastructure, the applications and middleware currently used by the distributed computing systems of large-scale science need to be adapted and tested within the new environment, with gaps in functionality identified and corrected. As a user of the ANI test bed, Fermilab aims to study the issues related to end-to-end integration and use of 100 Gb/s networks for the event simulation and analysis applications of physics experiments. In this paper we discuss our findings from evaluating existing HEP physics middleware and application components, including GridFTP, Globus Online, etc., in the high-speed environment. These include possible recommendations to system administrators and to application and middleware developers on changes that would enable production use of the 100 Gb/s networks, including data storage, caching and wide-area access.

  19. OSI in the NASA science internet: An analysis

    NASA Technical Reports Server (NTRS)

    Nitzan, Rebecca

    1990-01-01

    The Open Systems Interconnection (OSI) protocol suite is the result of a world-wide effort to develop international standards for networking. OSI is formalized through the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). The goal of OSI is to provide interoperability between network products without relying on one particular vendor, and to do so on a multinational basis. The National Institute of Standards and Technology (NIST) has developed a Government OSI Profile (GOSIP) that specifies a subset of the OSI protocols as a Federal Information Processing Standard (FIPS 146). GOSIP compatibility has been adopted as the direction for all U.S. government networks. OSI is extremely diverse, and therefore adherence to a profile will facilitate interoperability within OSI networks. All major computer vendors have indicated current or future support of GOSIP-compliant OSI protocols in their products. The NASA Science Internet (NSI) is an operational network, serving user requirements under NASA's Office of Space Science and Applications. NSI consists of the Space Physics Analysis Network (SPAN), which uses the DECnet protocols, and the NASA Science Network (NSN), which uses TCP/IP protocols. The NSI Project Office is currently working on an OSI integration analysis and strategy. A long-term goal is to integrate SPAN and NSN into one unified network service, using a full OSI protocol suite, which will support the OSSA user community.

  20. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers.

    PubMed

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10% of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.
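
    The memory argument for exploiting this sparsity can be illustrated with a compressed-sparse-row layout (an assumption for illustration; NEST's two-tier structures are more elaborate): at a fixed in-degree, a dense adjacency matrix grows as N^2 while CSR storage grows as N * k.

```python
import numpy as np

# Sketch: with a hard cap on incoming connections per neuron, dense
# adjacency wastes memory at scale; a compressed sparse row (CSR) layout
# keeps only existing connections. Sizes are tiny stand-ins, and this
# layout is an illustrative assumption, not NEST's actual data structure.
rng = np.random.default_rng(5)
n_neurons, k_in = 1000, 10
indptr, indices = [0], []
for post in range(n_neurons):
    pre = rng.choice(n_neurons, size=k_in, replace=False)  # k_in sources per neuron
    indices.extend(pre.tolist())
    indptr.append(len(indices))

dense_bytes = n_neurons * n_neurons           # one byte per possible edge
csr_bytes = 4 * (len(indices) + len(indptr))  # 4-byte ints
print(f"dense: {dense_bytes} B  vs  CSR: {csr_bytes} B")
```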
