Sample records for network file system

  1. The Jade File System. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Rao, Herman Chung-Hwa

    1991-01-01

    File systems have long been the most important and most widely used form of shared permanent storage. File systems in traditional time-sharing systems, such as Unix, support a coherent sharing model for multiple users. Distributed file systems implement this sharing model in local area networks. However, most distributed file systems fail to scale from local area networks to an internet. Four characteristics of scalability were recognized: size, wide area, autonomy, and heterogeneity. Owing to size and wide area, techniques such as broadcasting, central control, and central resources, which are widely adopted by local area network file systems, are not adequate for an internet file system. An internet file system must also support the notion of autonomy because an internet is made up of a collection of independent organizations. Finally, heterogeneity is in the nature of an internet file system, not only because of its size, but also because of the autonomy of the organizations in an internet. The Jade File System, which provides a uniform way to name and access files in the internet environment, is presented. Jade is a logical system that integrates a heterogeneous collection of existing file systems, where heterogeneous means that the underlying file systems support different file access protocols. Because of autonomy, Jade is designed under the restriction that the underlying file systems may not be modified. In order to avoid the complexity of maintaining an internet-wide, global name space, Jade permits each user to define a private name space. In Jade's design, we pay careful attention to avoiding unnecessary network messages between clients and file servers in order to achieve acceptable performance. Jade's name space supports two novel features: (1) it allows multiple file systems to be mounted under one directory; and (2) it permits one logical name space to mount other logical name spaces. A prototype of Jade was implemented to examine and validate its design. The prototype consists of interfaces to the Unix File System, the Sun Network File System, and the File Transfer Protocol.
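
    The private name space idea is easy to picture as a per-user mount table. Below is a minimal illustrative sketch (the backend names and hosts are hypothetical, not Jade's actual interfaces) showing both novel features: several file systems mounted under one directory, and one logical name space mounted inside another.

    ```python
    # Toy per-user logical name space: longest-prefix mount table mapping
    # logical paths onto underlying file systems or other name spaces.
    class LogicalNameSpace:
        def __init__(self):
            self.mounts = {}  # logical prefix -> list of (backend, physical prefix)

        def mount(self, prefix, backend, physical):
            # Several backends may be mounted under the same logical directory.
            self.mounts.setdefault(prefix, []).append((backend, physical))

        def mount_namespace(self, prefix, other):
            # A logical name space can itself mount another logical name space.
            self.mounts.setdefault(prefix, []).append((other, ""))

        def resolve(self, path):
            # Pick the longest matching mount prefix, then try each backend
            # mounted there in order.
            best = max((p for p in self.mounts if path.startswith(p)),
                       key=len, default=None)
            if best is None:
                raise FileNotFoundError(path)
            rest = path[len(best):]
            results = []
            for backend, phys in self.mounts[best]:
                if isinstance(backend, LogicalNameSpace):
                    results.extend(backend.resolve(rest))
                else:
                    results.append((backend, phys + rest))
            return results

    shared = LogicalNameSpace()
    shared.mount("/pub", "ftp://archive.example.org", "/pub")

    user = LogicalNameSpace()
    user.mount("/src", "local-ufs", "/home/alice/src")
    user.mount("/src", "nfs://server.example.org", "/export/src")  # two FSs, one directory
    user.mount_namespace("/shared", shared)                        # NS mounting an NS
    print(user.resolve("/src/main.c"))
    print(user.resolve("/shared/pub/README"))
    ```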

  2. Zebra: A striped network file system

    NASA Technical Reports Server (NTRS)

    Hartman, John H.; Ousterhout, John K.

    1992-01-01

    The design of Zebra, a striped network file system, is presented. Zebra applies ideas from log-structured file system (LFS) and RAID research to network file systems, resulting in a network file system that has scalable performance, uses its servers efficiently even when its applications are using small files, and provides high availability. Zebra stripes file data across multiple servers, so that the file transfer rate is not limited by the performance of a single server. High availability is achieved by maintaining parity information for the file system. If a server fails, its contents can be reconstructed using the contents of the remaining servers and the parity information. Zebra differs from existing striped file systems in the way it stripes file data: Zebra does not stripe on a per-file basis; instead it stripes the stream of bytes written by each client. Clients write to the servers in units called stripe fragments, which are analogous to segments in an LFS. Stripe fragments contain file blocks that were written recently, without regard to which file they belong to. This method of striping has numerous advantages over per-file striping, including increased server efficiency, efficient parity computation, and the elimination of parity updates.
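
    A toy version of this striping scheme (assumed fragment size and plain XOR parity; not the authors' code) shows how a client's byte stream, rather than individual files, is cut into stripe fragments, and how a failed server's fragment is rebuilt from the survivors plus parity.

    ```python
    # Zebra-style stream striping with XOR parity (illustrative sketch).
    FRAGMENT_SIZE = 16  # bytes; real systems use much larger fragments

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def make_stripe(stream, n_data):
        # Cut the client's byte stream into n_data fixed-size fragments.
        frags = [stream[i * FRAGMENT_SIZE:(i + 1) * FRAGMENT_SIZE].ljust(FRAGMENT_SIZE, b"\0")
                 for i in range(n_data)]
        parity = frags[0]
        for f in frags[1:]:
            parity = xor_bytes(parity, f)
        return frags, parity

    def rebuild(frags, parity, lost):
        # Reconstruct fragment `lost` from the remaining fragments + parity.
        acc = parity
        for i, f in enumerate(frags):
            if i != lost:
                acc = xor_bytes(acc, f)
        return acc

    stream = b"log of recently written file blocks, regardless of file"
    frags, parity = make_stripe(stream, n_data=4)
    assert rebuild(frags, parity, lost=2) == frags[2]
    ```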

  3. Reliable file sharing in distributed operating system using web RTC

    NASA Astrophysics Data System (ADS)

    Dukiya, Rajesh

    2017-12-01

    Since the evolution of distributed operating systems, the distributed file system has become an important part of the operating system. P2P is a reliable approach to file sharing in a distributed operating system. Introduced in 1999, it later became a topic of intense research interest. A peer-to-peer network is a type of network in which peers share the network workload and other load-related tasks. A P2P network can also be a temporary connection, for example a group of computers connected by USB (Universal Serial Bus) ports to transfer files or enable disk sharing, i.e., file sharing. Currently, P2P requires a special network designed in a P2P fashion. Nowadays, browsers play a large role in everyday computing. In this project we study file-sharing mechanisms for distributed operating systems in web browsers, try to find performance bottlenecks, and aim to improve the performance and scalability of file sharing in distributed file systems. Additionally, we discuss the scope of WebTorrent file sharing and free-riding in peer-to-peer networks.

  4. P2P watch: personal health information detection in peer-to-peer file-sharing networks.

    PubMed

    Sokolova, Marina; El Emam, Khaled; Arbuckle, Luk; Neri, Emilio; Rose, Sean; Jonker, Elizabeth

    2012-07-09

    Users of peer-to-peer (P2P) file-sharing networks risk the inadvertent disclosure of personal health information (PHI). In addition to potentially causing harm to the affected individuals, this can heighten the risk of data breaches for health information custodians. Automated PHI detection tools that crawl the P2P networks can identify PHI and alert custodians. While there has been previous work on the detection of personal information in electronic health records, there has been a dearth of research on the automated detection of PHI in heterogeneous user files. To build a system that accurately detects PHI in files sent through P2P file-sharing networks. The system, which we call P2P Watch, uses a pipeline of text processing techniques to automatically detect PHI in files exchanged through P2P networks. P2P Watch processes unstructured texts regardless of the file format, document type, and content. We developed P2P Watch to extract and analyze PHI in text files exchanged on P2P networks. We labeled texts as PHI if they contained identifiable information about a person (eg, name and date of birth) and specifics of the person's health (eg, diagnosis, prescriptions, and medical procedures). We evaluated the system's performance through its efficiency and effectiveness on 3924 files gathered from three P2P networks. P2P Watch successfully processed 3924 P2P files of unknown content. A manual examination of 1578 randomly selected files marked by the system as non-PHI confirmed that these files indeed did not contain PHI, making the false-negative detection rate equal to zero. Of 57 files marked by the system as PHI, all contained both personally identifiable information and health information: 11 files were PHI disclosures, and 46 files contained organizational materials such as unfilled insurance forms, job applications by medical professionals, and essays. PHI can be successfully detected in free-form textual files exchanged through P2P networks. Once the files with PHI are detected, affected individuals or data custodians can be alerted to take remedial action.
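
    The pairing rule described above (identifiable information plus health specifics) can be sketched with a few illustrative patterns; these regexes are hypothetical stand-ins, not P2P Watch's actual pipeline.

    ```python
    # Rule-of-thumb PHI check: flag a file only if it pairs an identity
    # cue with a health cue, mirroring the labeling rule in the abstract.
    import re

    IDENTITY_PATTERNS = [
        re.compile(r"\bdate of birth\b", re.I),
        re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),   # e.g. 03/14/1975
        re.compile(r"\bname:\s*\w+", re.I),
    ]
    HEALTH_TERMS = re.compile(r"\b(diagnosis|prescription|diabetes|surgery)\b", re.I)

    def looks_like_phi(text):
        has_identity = any(p.search(text) for p in IDENTITY_PATTERNS)
        return has_identity and HEALTH_TERMS.search(text) is not None

    print(looks_like_phi("Name: J. Doe, date of birth 03/14/1975, diagnosis: diabetes"))  # True
    print(looks_like_phi("Insurance form template with no patient details"))              # False
    ```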

  5. An analysis of image storage systems for scalable training of deep neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Seung-Hwan; Young, Steven R; Patton, Robert M

    This study presents a principled empirical evaluation of image storage systems for training deep neural networks. We employ the Caffe deep learning framework to train neural network models for three different data sets, MNIST, CIFAR-10, and ImageNet. While training the models, we evaluate five different options to retrieve training image data: (1) PNG-formatted image files on local file system; (2) pushing pixel arrays from image files into a single HDF5 file on local file system; (3) in-memory arrays to hold the pixel arrays in Python and C++; (4) loading the training data into LevelDB, a log-structured merge tree based key-value storage; and (5) loading the training data into LMDB, a B+tree based key-value storage. The experimental results quantitatively highlight the disadvantage of using normal image files on local file systems to train deep neural networks and demonstrate reliable performance with key-value storage based storage systems. When training a model on the ImageNet dataset, the image file option was more than 17 times slower than the key-value storage option. Along with measurements on training time, this study provides in-depth analysis on the cause of performance advantages/disadvantages of each back-end to train deep neural networks. We envision the provided measurements and analysis will shed light on the optimal way to architect systems for training neural networks in a scalable manner.
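
    For reference, the LMDB-backed option looks roughly like the following sketch (assumes the Python `lmdb` and `numpy` packages; the database path and key scheme are made up).

    ```python
    # Store pixel arrays as raw bytes under sequential keys in LMDB,
    # avoiding per-image file-system overhead during training.
    import lmdb
    import numpy as np

    env = lmdb.open("train_db", map_size=1 << 30)  # 1 GiB map; size to your dataset

    # Write: one transaction for a small batch of fake 28x28 images.
    with env.begin(write=True) as txn:
        for i in range(100):
            img = np.random.randint(0, 256, (28, 28), dtype=np.uint8)
            txn.put(f"{i:08d}".encode(), img.tobytes())

    # Read back a training example by key.
    with env.begin() as txn:
        buf = txn.get(b"00000042")
        img = np.frombuffer(buf, dtype=np.uint8).reshape(28, 28)
    print(img.shape)
    ```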

  6. P2P Watch: Personal Health Information Detection in Peer-to-Peer File-Sharing Networks

    PubMed Central

    El Emam, Khaled; Arbuckle, Luk; Neri, Emilio; Rose, Sean; Jonker, Elizabeth

    2012-01-01

    Background Users of peer-to-peer (P2P) file-sharing networks risk the inadvertent disclosure of personal health information (PHI). In addition to potentially causing harm to the affected individuals, this can heighten the risk of data breaches for health information custodians. Automated PHI detection tools that crawl the P2P networks can identify PHI and alert custodians. While there has been previous work on the detection of personal information in electronic health records, there has been a dearth of research on the automated detection of PHI in heterogeneous user files. Objective To build a system that accurately detects PHI in files sent through P2P file-sharing networks. The system, which we call P2P Watch, uses a pipeline of text processing techniques to automatically detect PHI in files exchanged through P2P networks. P2P Watch processes unstructured texts regardless of the file format, document type, and content. Methods We developed P2P Watch to extract and analyze PHI in text files exchanged on P2P networks. We labeled texts as PHI if they contained identifiable information about a person (eg, name and date of birth) and specifics of the person’s health (eg, diagnosis, prescriptions, and medical procedures). We evaluated the system’s performance through its efficiency and effectiveness on 3924 files gathered from three P2P networks. Results P2P Watch successfully processed 3924 P2P files of unknown content. A manual examination of 1578 randomly selected files marked by the system as non-PHI confirmed that these files indeed did not contain PHI, making the false-negative detection rate equal to zero. Of 57 files marked by the system as PHI, all contained both personally identifiable information and health information: 11 files were PHI disclosures, and 46 files contained organizational materials such as unfilled insurance forms, job applications by medical professionals, and essays. Conclusions PHI can be successfully detected in free-form textual files exchanged through P2P networks. Once the files with PHI are detected, affected individuals or data custodians can be alerted to take remedial action. PMID:22776692

  7. Solving data-at-rest for the storage and retrieval of files in ad hoc networks

    NASA Astrophysics Data System (ADS)

    Knobler, Ron; Scheffel, Peter; Williams, Jonathan; Gaj, Kris; Kaps, Jens-Peter

    2013-05-01

    Based on current trends for both military and commercial applications, the use of mobile devices (e.g. smartphones and tablets) is greatly increasing. Several military applications consist of secure peer to peer file sharing without a centralized authority. For these military applications, if one or more of these mobile devices are lost or compromised, sensitive files can be compromised by adversaries, since COTS devices and operating systems are used. Complete system files cannot be stored on a device, since after compromising a device, an adversary can attack the data at rest, and eventually obtain the original file. Also after a device is compromised, the existing peer to peer system devices must still be able to access all system files. McQ has teamed with the Cryptographic Engineering Research Group at George Mason University to develop a custom distributed file sharing system to provide a complete solution to the data at rest problem for resource constrained embedded systems and mobile devices. This innovative approach scales very well to a large number of network devices, without a single point of failure. We have implemented the approach on representative mobile devices as well as developed an extensive system simulator to benchmark expected system performance based on detailed modeling of the network/radio characteristics, CONOPS, and secure distributed file system functionality. The simulator is highly customizable for the purpose of determining expected system performance for other network topologies and CONOPS.
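
    The abstract does not disclose the McQ/GMU scheme, but the data-at-rest idea can be illustrated with a simple n-of-n XOR split, in which no single device's share reveals anything about the file. (A real system that must keep files accessible after a device is lost would use a threshold k-of-n scheme instead; this is only a sketch of the splitting idea.)

    ```python
    # Illustrative n-of-n XOR secret split: each device holds one share,
    # and only the XOR of all shares recovers the original file bytes.
    import os
    from functools import reduce

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def split(data, n):
        shares = [os.urandom(len(data)) for _ in range(n - 1)]
        shares.append(reduce(xor, shares, data))  # last share completes the XOR
        return shares

    def combine(shares):
        return reduce(xor, shares)

    secret = b"mission file contents"
    shares = split(secret, 4)          # spread across 4 devices
    assert combine(shares) == secret   # all shares together recover the file
    # Any single captured share is uniformly random and reveals nothing.
    ```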

  8. The Global File System

    NASA Technical Reports Server (NTRS)

    Soltis, Steven R.; Ruwart, Thomas M.; O'Keefe, Matthew T.

    1996-01-01

    The global file system (GFS) is a prototype design for a distributed file system in which cluster nodes physically share storage devices connected via a network such as Fibre Channel. Networks and network-attached storage devices have advanced to a level of performance and extensibility so that the previous disadvantages of shared disk architectures are no longer valid. This shared storage architecture attempts to exploit the sophistication of storage device technologies, whereas a server architecture diminishes a device's role to that of a simple component. GFS distributes the file system responsibilities across processing nodes, storage across the devices, and file system resources across the entire storage pool. GFS caches data on the storage devices instead of the main memories of the machines. Consistency is established by using a locking mechanism maintained by the storage devices to facilitate atomic read-modify-write operations. The locking mechanism is being prototyped in the Silicon Graphics IRIX operating system and is accessed using standard Unix commands and modules.

  9. Performance Modeling of Network-Attached Storage Device Based Hierarchical Mass Storage Systems

    NASA Technical Reports Server (NTRS)

    Menasce, Daniel A.; Pentakalos, Odysseas I.

    1995-01-01

    Network attached storage devices improve I/O performance by separating control and data paths and eliminating host intervention during the data transfer phase. Devices are attached to both a high speed network for data transfer and to a slower network for control messages. Hierarchical mass storage systems use disks to cache the most recently used files and a combination of robotic and manually mounted tapes to store the bulk of the files in the file system. This paper shows how queuing network models can be used to assess the performance of hierarchical mass storage systems that use network attached storage devices as opposed to host attached storage devices. Simulation was used to validate the model. The analytic model presented here can be used, among other things, to evaluate the protocols involved in I/O over network attached devices.
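
    As a toy illustration of the modeling approach (a single M/M/1 queue, not the paper's full queuing network), separating control and data paths can be viewed as shrinking the host's per-transfer service demand; the service times below are assumed for exposition.

    ```python
    # M/M/1 response time: R = S / (1 - U), with utilization U = X * S,
    # where X is the arrival rate and S the per-transfer service time.
    def mm1_response(arrival_rate, service_time):
        util = arrival_rate * service_time
        assert util < 1, "queue is unstable"
        return service_time / (1 - util)

    rate = 20.0  # transfers per second
    print("host-attached   :", mm1_response(rate, 0.040))  # host moves data too (assumed 40 ms)
    print("network-attached:", mm1_response(rate, 0.015))  # host handles control only (assumed 15 ms)
    ```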

  10. IBM NJE protocol emulator for VAX/VMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engert, D.E.

    1981-01-01

    Communications software has been written at Argonne National Laboratory to enable a VAX/VMS system to participate as an end-node in a standard IBM network by emulating the Network Job Entry (NJE) protocol. NJE is actually a collection of programs that support job networking for the operating systems used on most large IBM-compatible computers (e.g., VM/370, MVS with JES2 or JES3, SVS, MVT with ASP or HASP). Files received by the VAX can be printed or saved in user-selected disk files. Files sent to the network can be routed to any node in the network for printing, punching, or job submission, as well as to a VM/370 user's virtual reader. Files sent from the VAX are queued and transmitted asynchronously to allow users to perform other work while files are awaiting transmission. No changes are required to the IBM software.

  11. Network survivability performance (computer diskette)

    NASA Astrophysics Data System (ADS)

    1993-11-01

    File characteristics: Data file; 1 file. Physical description: 1 computer diskette; 3 1/2 in.; high density; 2.0MB. System requirements: Mac; Word. This technical report has been developed to address the survivability of telecommunications networks including services. It responds to the need for a common understanding of, and assessment techniques for network survivability, availability, integrity, and reliability. It provides a basis for designing and operating telecommunication networks to user expectations for network survivability.

  12. Applications of Coding in Network Communications

    ERIC Educational Resources Information Center

    Chang, Christopher SungWook

    2012-01-01

    This thesis uses the tool of network coding to investigate fast peer-to-peer file distribution, anonymous communication, robust network construction under uncertainty, and prioritized transmission. In a peer-to-peer file distribution system, we use a linear optimization approach to show that the network coding framework significantly simplifies…

  13. Telematics and satellites. Part 1: Information systems

    NASA Astrophysics Data System (ADS)

    Burke, W. R.

    1980-06-01

    Telematic systems are identified and described. The applications are examined emphasizing the role played by satellite links. The discussion includes file transfer, examples of distributed processor systems, terminal communication, information retrieval systems, office information systems, electronic preparation and publishing of information, electronic systems for transfer of funds, electronic mail systems, record file transfer characteristics, intra-enterprise networks, and inter-enterprise networks.

  14. NJE; VAX-VMS IBM NJE network protocol emulator. [DEC VAX11/780; VAX-11 FORTRAN 77 (99%) and MACRO-11 (1%)]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engert, D.E.; Raffenetti, C.

    NJE is communications software developed to enable a VAX VMS system to participate as an end-node in a standard IBM network by emulating the Network Job Entry (NJE) protocol. NJE supports job networking for the operating systems used on most large IBM-compatible computers (e.g., VM/370, MVS with JES2 or JES3, SVS, MVT with ASP or HASP). Files received by the VAX can be printed or saved in user-selected disk files. Files sent to the network can be routed to any network node for printing, punching, or job submission, or to a VM/370 user's virtual reader. Files sent from the VAX are queued and transmitted asynchronously. No changes are required to the IBM software. DEC VAX11/780; VAX-11 FORTRAN 77 (99%) and MACRO-11 (1%); VMS 2.5; VAX11/780 with DUP-11 UNIBUS interface and 9600 baud synchronous modem.

  15. Requirements for a network storage service

    NASA Technical Reports Server (NTRS)

    Kelly, Suzanne M.; Haynes, Rena A.

    1992-01-01

    Sandia National Laboratories provides a high performance classified computer network as a core capability in support of its mission of nuclear weapons design and engineering, physical sciences research, and energy research and development. The network, locally known as the Internal Secure Network (ISN), was designed in 1989 and comprises multiple distributed local area networks (LAN's) residing in Albuquerque, New Mexico and Livermore, California. The TCP/IP protocol suite is used for inter-node communications. Scientific workstations and mid-range computers, running UNIX-based operating systems, compose most LAN's. One LAN, operated by the Sandia Corporate Computing Directorate, is a general purpose resource providing a supercomputer and a file server to the entire ISN. The current file server on the supercomputer LAN is an implementation of the Common File System (CFS) developed by Los Alamos National Laboratory. Subsequent to the design of the ISN, Sandia reviewed its mass storage requirements and chose to enter into a competitive procurement to replace the existing file server with one more adaptable to a UNIX/TCP/IP environment. The requirements study for the network was the starting point for the requirements study for the new file server. The file server is called the Network Storage Service (NSS), and its requirements are described in this paper. The next section gives an application or functional description of the NSS. The final section adds performance, capacity, and access constraints to the requirements.

  16. How to Handle the Avalanche of Online Documentation.

    ERIC Educational Resources Information Center

    Nolan, Maureen P.

    1981-01-01

    The method of handling the printed documentation associated with online information retrieval, which is described, involves the use of a series of separate but related files: database files, system files, network files, index sheets, and equipment files. (FM)

  17. Network issues for large mass storage requirements

    NASA Technical Reports Server (NTRS)

    Perdue, James

    1992-01-01

    File servers and supercomputing environments need high performance networks to balance the I/O requirements seen in today's demanding computing scenarios. UltraNet is one solution, permitting both high aggregate transfer rates and high task-to-task transfer rates, as demonstrated in actual tests. UltraNet provides this capability as both a server-to-server and server-to-client access network, giving the supercomputing center the following advantages: highest-performance transport-level connections (up to 40 MBytes/sec effective rates); throughput that matches the emerging high performance disk technologies, such as RAID, parallel head transfer devices, and software striping; support for standard network and file system applications using a sockets-based application program interface, such as FTP, rcp, rdump, etc.; support for access to the Network File System (NFS) and large aggregate bandwidth for heavy NFS usage; access to a distributed, hierarchical data server capability using the DISCOS UniTree product; and support for file server solutions available from multiple vendors, including Cray, Convex, Alliant, FPS, IBM, and others.

  18. 1995 Joseph E. Whitley, MD, Award. A World Wide Web gateway to the radiologic learning file.

    PubMed

    Channin, D S

    1995-12-01

    Computer networks in general, and the Internet specifically, are changing the way information is manipulated in the world at large and in radiology. The goal of this project was to develop a computer system in which images from the Radiologic Learning File, available previously only via a single-user laser disc, are made available over a generic, high-availability computer network to many potential users simultaneously. Using a networked workstation in our laboratory and freely available distributed hypertext software, we established a World Wide Web (WWW) information server for radiology. Images from the Radiologic Learning File are requested through the WWW client software, digitized from a single laser disc containing the entire teaching file and then transmitted over the network to the client. The text accompanying each image is incorporated into the transmitted document. The Radiologic Learning File is now on-line, and requests to view the cases result in the delivery of the text and images. Image digitization via a frame grabber takes 1/30th of a second. Conversion of the image to a standard computer graphic format takes 45-60 sec. Text and image transmission speed on a local area network varies between 200 and 400 kilobytes (KB) per second depending on the network load. We have made images from a laser disc of the Radiologic Learning File available through an Internet-based hypertext server. The images previously available through a single-user system located in a remote section of our department are now ubiquitously available throughout our department via the department's computer network. We have thus converted a single-user, limited functionality system into a multiuser, widely available resource.

  19. 75 FR 36456 - Channel America Television Network, Inc., EquiMed, Inc., Kore Holdings, Inc., Robotic Vision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-25

    ... SECURITIES AND EXCHANGE COMMISSION [File No. 500-1] Channel America Television Network, Inc., EquiMed, Inc., Kore Holdings, Inc., Robotic Vision Systems, Inc. (n/k/a Acuity Cimatrix, Inc.), Security... information concerning the securities of Channel America Television Network, Inc. because it has not filed any...

  20. RAMA: A file system for massively parallel computers

    NASA Technical Reports Server (NTRS)

    Miller, Ethan L.; Katz, Randy H.

    1993-01-01

    This paper describes a file system design for massively parallel computers which makes very efficient use of a few disks per processor. This overcomes the traditional I/O bottleneck of massively parallel machines by storing the data on disks within the high-speed interconnection network. In addition, the file system, called RAMA, requires little inter-node synchronization, removing another common bottleneck in parallel processor file systems. Support for a large tertiary storage system can easily be integrated into the file system; in fact, RAMA runs most efficiently when tertiary storage is used.
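
    RAMA-style placement is commonly described as hashing file blocks to nodes so that any node can locate data without a central map or inter-node synchronization; the sketch below assumes such a hash-based layout (the hash function and node count are illustrative, not the paper's).

    ```python
    # Deterministic block placement: every node computes the same location
    # for a block from (file id, block number), so no lookup traffic is
    # needed to find data in the interconnection network.
    import hashlib

    N_NODES = 64  # processors, each with a few local disks (assumed)

    def block_location(file_id, block_no):
        digest = hashlib.sha256(f"{file_id}:{block_no}".encode()).digest()
        return int.from_bytes(digest[:4], "big") % N_NODES

    # Any node can compute where block 7 of file 1234 lives, with no lookup:
    print(block_location(1234, 7))
    ```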

  1. Accessing files in an Internet: The Jade file system

    NASA Technical Reports Server (NTRS)

    Peterson, Larry L.; Rao, Herman C.

    1991-01-01

    Jade is a new distributed file system that provides a uniform way to name and access files in an internet environment. It makes two important contributions. First, Jade is a logical system that integrates a heterogeneous collection of existing file systems, where heterogeneous means that the underlying file systems support different file access protocols. Jade is designed under the restriction that the underlying file system may not be modified. Second, rather than providing a global name space, Jade permits each user to define a private name space. These private name spaces support two novel features: they allow multiple file systems to be mounted under one directory, and they allow one logical name space to mount other logical name spaces. A prototype of the Jade File System was implemented on Sun Workstations running Unix. It consists of interfaces to the Unix file system, the Sun Network File System, the Andrew File System, and FTP. This paper motivates Jade's design, highlights several aspects of its implementation, and illustrates applications that can take advantage of its features.

  2. Accessing files in an internet - The Jade file system

    NASA Technical Reports Server (NTRS)

    Rao, Herman C.; Peterson, Larry L.

    1993-01-01

    Jade is a new distributed file system that provides a uniform way to name and access files in an internet environment. It makes two important contributions. First, Jade is a logical system that integrates a heterogeneous collection of existing file systems, where heterogeneous means that the underlying file systems support different file access protocols. Jade is designed under the restriction that the underlying file system may not be modified. Second, rather than providing a global name space, Jade permits each user to define a private name space. These private name spaces support two novel features: they allow multiple file systems to be mounted under one directory, and they allow one logical name space to mount other logical name spaces. A prototype of the Jade File System was implemented on Sun Workstations running Unix. It consists of interfaces to the Unix file system, the Sun Network File System, the Andrew File System, and FTP. This paper motivates Jade's design, highlights several aspects of its implementation, and illustrates applications that can take advantage of its features.

  3. Interfacing a high performance disk array file server to a Gigabit LAN

    NASA Technical Reports Server (NTRS)

    Seshan, Srinivasan; Katz, Randy H.

    1993-01-01

    Our previous prototype, RAID-1, identified several bottlenecks in typical file server architectures. The most important bottleneck was the lack of a high-bandwidth path between disk, memory, and the network. Workstation servers, such as the Sun-4/280, have very slow access to peripherals on busses far from the CPU. For the RAID-2 system, we addressed this problem by designing a crossbar interconnect, the Xbus board, that provides a 40 MB/s path between disk, memory, and the network interfaces. However, this interconnect does not provide the system CPU with low latency access to control the various interfaces. To provide a high data rate to clients on the network, we were forced to design the network software carefully and efficiently. A block diagram of the system hardware architecture is given. In the following subsections, we describe pieces of the RAID-2 file server hardware that had a significant impact on the design of the network interface.

  4. Networks for Autonomous Formation Flying Satellite Systems

    NASA Technical Reports Server (NTRS)

    Knoblock, Eric J.; Konangi, Vijay K.; Wallett, Thomas M.; Bhasin, Kul B.

    2001-01-01

    The performance of three communications networks to support autonomous multi-spacecraft formation flying systems is presented. All systems are comprised of a ten-satellite formation arranged in a star topology, with one of the satellites designated as the central or "mother ship." All data is routed through the mother ship to the terrestrial network. The first system uses a TCP/IP over ATM protocol architecture within the formation; the second system uses the IEEE 802.11 protocol architecture within the formation; and the last system uses both of the previous architectures, with a constellation of geosynchronous satellites serving as an intermediate point of contact between the formation and the terrestrial network. The simulations consist of file transfers using either the File Transfer Protocol (FTP) or the Simple Automatic File Exchange (SAFE) Protocol. The results compare the IP queuing delay and IP processing delay at the mother ship, as well as the application-level round-trip time, for the systems. In all cases, using IEEE 802.11 within the formation yields less delay. Also, the throughput exhibited by SAFE is better than FTP.

  5. A high-speed network for cardiac image review.

    PubMed

    Elion, J L; Petrocelli, R R

    1994-01-01

    A high-speed fiber-based network for the transmission and display of digitized full-motion cardiac images has been developed. Based on Asynchronous Transfer Mode (ATM), the network is scaleable, meaning that the same software and hardware is used for a small local area network or for a large multi-institutional network. The system can handle uncompressed digital angiographic images, considered to be at the "high-end" of the bandwidth requirements. Along with the networking, a general-purpose multi-modality review station has been implemented without specialized hardware. This station can store a full injection sequence in "loop RAM" in a 512 x 512 format, then interpolate to 1024 x 1024 while displaying at 30 frames per second. The network and review stations connect to a central file server that uses a virtual file system to make a large high-speed RAID storage disk and associated off-line storage tapes and cartridges all appear as a single large file system to the software. In addition to supporting archival storage and review, the system can also digitize live video using high-speed Direct Memory Access (DMA) from the frame grabber to present uncompressed data to the network. Fully functional prototypes have provided the proof of concept, with full deployment in the institution planned as the next stage.
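
    A quick back-of-the-envelope check of the data rates implied above, assuming 8-bit grayscale pixels (the abstract does not state the bit depth):

    ```python
    # Uncompressed video bandwidth: width * height * bytes/pixel * frames/s.
    def mb_per_s(width, height, bytes_per_px, fps):
        return width * height * bytes_per_px * fps / 1e6

    print(mb_per_s(512, 512, 1, 30))    # ~7.9 MB/s captured into loop RAM
    print(mb_per_s(1024, 1024, 1, 30))  # ~31.5 MB/s displayed after interpolation
    ```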

  6. A high-speed network for cardiac image review.

    PubMed Central

    Elion, J. L.; Petrocelli, R. R.

    1994-01-01

    A high-speed fiber-based network for the transmission and display of digitized full-motion cardiac images has been developed. Based on Asynchronous Transfer Mode (ATM), the network is scaleable, meaning that the same software and hardware is used for a small local area network or for a large multi-institutional network. The system can handle uncompressed digital angiographic images, considered to be at the "high-end" of the bandwidth requirements. Along with the networking, a general-purpose multi-modality review station has been implemented without specialized hardware. This station can store a full injection sequence in "loop RAM" in a 512 x 512 format, then interpolate to 1024 x 1024 while displaying at 30 frames per second. The network and review stations connect to a central file server that uses a virtual file system to make a large high-speed RAID storage disk and associated off-line storage tapes and cartridges all appear as a single large file system to the software. In addition to supporting archival storage and review, the system can also digitize live video using high-speed Direct Memory Access (DMA) from the frame grabber to present uncompressed data to the network. Fully functional prototypes have provided the proof of concept, with full deployment in the institution planned as the next stage. PMID:7949964

  7. Network Configuration Analysis for Formation Flying Satellites

    NASA Technical Reports Server (NTRS)

    Knoblock, Eric J.; Wallett, Thomas M.; Konangi, Vijay K.; Bhasin, Kul B.

    2001-01-01

    The performance of two networks to support autonomous multi-spacecraft formation flying systems is presented. Both systems are comprised of a ten-satellite formation, with one of the satellites designated as the central or 'mother ship.' All data is routed through the mother ship to the terrestrial network. The first system uses a TCP/IP over ATM protocol architecture within the formation, and the second system uses the IEEE 802.11 protocol architecture within the formation. The simulations consist of file transfers using either the File Transfer Protocol (FTP) or the Simple Automatic File Exchange (SAFE) Protocol. The results compare the IP queuing delay, IP queue size and IP processing delay at the mother ship as well as end-to-end delay for both systems. In all cases, using IEEE 802.11 within the formation yields less delay. Also, the throughput exhibited by SAFE is better than FTP.

  8. A secure file manager for UNIX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeVries, R.G.

    1990-12-31

    The development of a secure file management system for a UNIX-based computer facility with supercomputers and workstations is described. Specifically, UNIX in its usual form does not address: (1) Operation which would satisfy rigorous security requirements. (2) Online space management in an environment where total data demands would be many times the actual online capacity. (3) Making the file management system part of a computer network in which users of any computer in the local network could retrieve data generated on any other computer in the network. The characteristics of UNIX can be exploited to develop a portable, secure file manager which would operate on computer systems ranging from workstations to supercomputers. Implementation considerations making unusual use of UNIX features, rather than requiring extensive internal system changes, are described, and implementation using the Cray Research Inc. UNICOS operating system is outlined.

  9. The Spider Center Wide File System; From Concept to Reality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shipman, Galen M; Dillow, David A; Oral, H Sarp

    2009-01-01

    The Leadership Computing Facility (LCF) at Oak Ridge National Laboratory (ORNL) has a diverse portfolio of computational resources, ranging from a petascale XT4/XT5 simulation system (Jaguar) to numerous other systems supporting development, visualization, and data analytics. To support the vastly different I/O needs of these systems, Spider, a Lustre-based center-wide file system, was designed and deployed to provide over 240 GB/s of aggregate throughput with over 10 petabytes of formatted capacity. A multi-stage InfiniBand network, dubbed the Scalable I/O Network (SION), with over 889 GB/s of bisectional bandwidth, was deployed as part of Spider to provide connectivity to our simulation, development, visualization, and other platforms. To our knowledge, at the time of writing, Spider is the largest and fastest POSIX-compliant parallel file system in production. This paper details the overall architecture of the Spider system, the challenges in deploying and initially testing a file system of this scale, and novel solutions to these challenges that offer key insights into future file system design.

  10. Requirements for a network storage service

    NASA Technical Reports Server (NTRS)

    Kelly, Suzanne M.; Haynes, Rena A.

    1991-01-01

    Sandia National Laboratories provides a high performance classified computer network as a core capability in support of its mission of nuclear weapons design and engineering, physical sciences research, and energy research and development. The network, locally known as the Internal Secure Network (ISN), comprises multiple distributed local area networks (LAN's) residing in New Mexico and California. The TCP/IP protocol suite is used for inter-node communications. Scientific workstations and mid-range computers, running UNIX-based operating systems, compose most LAN's. One LAN, operated by the Sandia Corporate Computing Directorate, is a general purpose resource providing a supercomputer and a file server to the entire ISN. The current file server on the supercomputer LAN is an implementation of the Common File System (CFS). Subsequent to the design of the ISN, Sandia reviewed its mass storage requirements and chose to enter into a competitive procurement to replace the existing file server with one more adaptable to a UNIX/TCP/IP environment. The requirements study for the network was the starting point for the requirements study for the new file server. The file server is called the Network Storage Service (NSS) and its requirements are described. An application or functional description of the NSS is given. The final section adds performance, capacity, and access constraints to the requirements.

  11. The Design and Application of Data Storage System in Miyun Satellite Ground Station

    NASA Astrophysics Data System (ADS)

    Xue, Xiping; Su, Yan; Zhang, Hongbo; Liu, Bin; Yao, Meijuan; Zhao, Shu

    2015-04-01

    China launched the Chang'E-3 satellite in 2013, achieving the first soft landing on the Moon by a Chinese lunar probe. In the Chang'E-3 mission, the Miyun satellite ground station first used a SAN storage network system based on the Stornext sharing software, and system performance fully meets the data storage requirements of the station. The Stornext file system is a high-performance sharing file system that allows multiple servers running different operating systems to access the file system at the same time, and supports access to data over a variety of topologies, such as SAN and LAN. Stornext focuses on data protection and big data management; it is announced that Quantum has sold more than 70,000 licenses of the Stornext file system worldwide, and its growing customer base marks its leading position in big data management. The responsibilities of the Miyun satellite ground station are the reception of Chang'E-3 satellite downlink data and the management of local data storage. The station mainly handles exploration mission management and the receiving and management of observation data, and provides comprehensive, centralized monitoring and control of the data receiving equipment. The ground station applied the SAN storage network system based on Stornext shared software to receive and manage data reliably. The computer system in the Miyun ground station is composed of operational servers, application workstations, and storage equipment, so the storage system needs a shared file system that supports heterogeneous operating systems. In practical applications, 10 nodes simultaneously write data to the file system through 16 channels, and the maximum data transfer rate of each channel is up to 15 MB/s; thus the network throughput of the file system must be no less than 240 MB/s. At the same time, the maximum size of each data file is up to 810 GB. As integrated, the sharing system can provide a 1020 MB/s aggregate write speed. When the master storage server fails, the backup storage server takes over the service; client reads and writes are not affected, and the switching time is less than 5 s. The designed and integrated storage system meets user requirements. Nevertheless, an all-fiber SAN is expensive, and SCSI hard disk transfer rates may still be the bottleneck of the entire storage system. Stornext provides users with efficient sharing, management, and automatic archiving of large numbers of files, together with hardware solutions, and occupies a leading position in big data management. There are also drawbacks to Stornext: first, the software is expensive and licensed per site, so the purchase cost is very high for large networks; second, tuning Stornext's parameters places high demands on the skills of technical staff, and problems are difficult to diagnose when they arise.

  12. 78 FR 63196 - Privacy Act System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-23

    ... Technology Center (ITC) staff and contractors, who maintain the FCC's computer network. Other FCC employees... and Offices (B/ Os); 2. Electronic data, records, and files that are stored in the FCC's computer.... Access to the FACA electronic records, files, and data, which are housed in the FCC's computer network...

  13. Sending Foreign Language Word Processor Files over Networks.

    ERIC Educational Resources Information Center

    Feustle, Joseph A., Jr.

    1992-01-01

    Advantages of using online systems are outlined, and specific techniques for successfully transmitting computer text files are described. Topics covered include Microsoft's Rich Text Format, WordPerfect encoding, text compression, and especially encoding and decoding with UNIX programs. (LB)

  14. How to Purchase, Set Up, & Safeguard a CD-ROM Network.

    ERIC Educational Resources Information Center

    Almquist, Arne J.

    1996-01-01

    Presents an overview of the hardware and software required to network CD-ROMs in schools. Topics include network infrastructures, networking software, file server-based systems, CD-ROM servers, vendors of network components, workstations, network utilities, and network management. (LRW)

  15. Understanding Customer Dissatisfaction with Underutilized Distributed File Servers

    NASA Technical Reports Server (NTRS)

    Riedel, Erik; Gibson, Garth

    1996-01-01

    An important trend in the design of storage subsystems is a move toward direct network attachment. Network-attached storage offers the opportunity to off-load distributed file system functionality from dedicated file server machines and execute many requests directly at the storage devices. For this strategy to lead to better performance, as perceived by users, the response time of distributed operations must improve. In this paper we analyze measurements of an Andrew File System (AFS) server that we recently upgraded in an effort to improve client performance in our laboratory. While the original server's overall utilization was only about 3%, we show how burst loads were sufficiently intense to lead to periods of poor response time significant enough to trigger customer dissatisfaction. In particular, we show how, after adjusting for network load and traffic to non-project servers, 50% of the variation in client response time was explained by variation in server central processing unit (CPU) use. That is, clients saw long response times in large part because the server was often over-utilized when it was used at all. Using these measures, we see that off-loading file server work in a network-attached storage architecture has the potential to benefit user response time. Computational power in such a system scales directly with storage capacity, so the slowdown during burst periods should be reduced.

  16. Accuracy comparison among different machine learning techniques for detecting malicious codes

    NASA Astrophysics Data System (ADS)

    Narang, Komal

    2016-03-01

    In this paper, a machine learning based model for malware detection is proposed. It can detect newly released malware, i.e., zero-day attacks, by analyzing operation codes on the Android operating system. The accuracy of Naïve Bayes, Support Vector Machine (SVM), and Neural Network classifiers for detecting malicious code has been compared for the proposed model. In the experiment, 400 benign files, 100 system files, and 500 malicious files were used to construct the model. The model yields its best accuracy, 88.9%, when a neural network is used as the classifier, and achieves 95% sensitivity and 82.8% specificity.
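
    The comparison described above can be reproduced in outline with scikit-learn on synthetic opcode-frequency vectors (the real Android opcode features are not given in the abstract, so the data here is made up; requires `scikit-learn` and `numpy`).

    ```python
    # Compare Naive Bayes, SVM, and a neural network on synthetic
    # opcode-frequency features for a benign/malicious split.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.svm import SVC
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    # 1000 files x 50 opcode frequencies; malicious files skew some opcodes.
    X = np.vstack([rng.normal(0.0, 1.0, (500, 50)),
                   rng.normal(0.6, 1.0, (500, 50))])
    y = np.array([0] * 500 + [1] * 500)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    for name, clf in [("Naive Bayes", GaussianNB()),
                      ("SVM", SVC()),
                      ("Neural Network", MLPClassifier(max_iter=1000, random_state=0))]:
        clf.fit(X_tr, y_tr)
        print(name, accuracy_score(y_te, clf.predict(X_te)))
    ```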

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haynes, R.A.

    The Network File System (NFS) is used in UNIX-based networks to provide transparent file sharing between heterogeneous systems. Although NFS is well-known for being weak in security, it is widely used and has become a de facto standard. This paper examines the user authentication shortcomings of NFS and the approach Sandia National Laboratories has taken to strengthen it with Kerberos. The implementation on a Cray Y-MP8/864 running UNICOS is described and resource/performance issues are discussed. 4 refs., 4 figs.

  18. Final Report for File System Support for Burst Buffers on HPC Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, W.; Mohror, K.

    Distributed burst buffers are a promising storage architecture for handling I/O workloads for exascale computing. As they are being deployed on more supercomputers, a file system that efficiently manages these burst buffers for fast I/O operations carries great consequence. Over the past year, the FSU team has undertaken several efforts to design, prototype, and evaluate distributed file systems for burst buffers on HPC systems. These include MetaKV, a key-value store for metadata management of distributed burst buffers; a user-level file system with multiple backends; and a specialized file system for large datasets of deep neural networks. Our progress on these respective efforts is elaborated further in this report.

  19. Transparency in Distributed File Systems

    DTIC Science & Technology

    1989-01-01

    Addresses the areas of naming, replication, consistency control, file and directory placement, and file and directory migration in a way that provides full network transparency.

  20. Deceit: A flexible distributed file system

    NASA Technical Reports Server (NTRS)

    Siegel, Alex; Birman, Kenneth; Marzullo, Keith

    1989-01-01

    Deceit, a distributed file system (DFS) being developed at Cornell, focuses on flexible file semantics in relation to efficiency, scalability, and reliability. Deceit servers are interchangeable and collectively provide the illusion of a single, large server machine to any clients of the Deceit service. Non-volatile replicas of each file are stored on a subset of the file servers. The user is able to set parameters on a file to achieve different levels of availability, performance, and one-copy serializability. Deceit also supports a file version control mechanism. In contrast with many recent DFS efforts, Deceit can behave like a plain Sun Network File System (NFS) server and can be used by any NFS client without modifying any client software. The current Deceit prototype uses the ISIS Distributed Programming Environment for all communication and process group management, an approach that reduces system complexity and increases system robustness.

  1. I/O performance evaluation of a Linux-based network-attached storage device

    NASA Astrophysics Data System (ADS)

    Sun, Zhaoyan; Dong, Yonggui; Wu, Jinglian; Jia, Huibo; Feng, Guanping

    2002-09-01

    In a Local Area Network (LAN), clients are permitted to access the files on high-density optical disks via a network server. But the quality of read service offered by a conventional server is unsatisfactory, because the server performs many other functions and serves too many callers. This paper develops a Linux-based Network-Attached Storage (NAS) server. The operating system (OS), composed of an optimized kernel and a miniaturized file system, is stored in flash memory. After initialization, the NAS device is connected to the LAN. The administrator and users can configure and access the server through web pages. In order to improve access quality, the management of the buffer cache in the file system is optimized. Benchmark programs were run to evaluate the I/O performance of the NAS device. Since data recorded on optical disks are usually accessed read-only, our attention is focused on the reading throughput of the device. The experimental results indicate that the I/O performance of our NAS device is excellent.
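
    The abstract says the buffer cache management was optimized but not how; as a stand-in, here is a minimal LRU read cache of the kind such a device might use for optical-disk blocks.

    ```python
    # Illustrative LRU read cache: hits are served from memory, misses
    # fall back to the (slow) optical disk and may evict the least
    # recently used block.
    from collections import OrderedDict

    class BufferCache:
        def __init__(self, capacity, read_block):
            self.capacity = capacity
            self.read_block = read_block  # fallback: fetch from optical disk
            self.cache = OrderedDict()

        def get(self, block_no):
            if block_no in self.cache:
                self.cache.move_to_end(block_no)  # mark most recently used
                return self.cache[block_no]
            data = self.read_block(block_no)
            self.cache[block_no] = data
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)    # evict least recently used
            return data

    cache = BufferCache(capacity=2, read_block=lambda n: f"block-{n}".encode())
    cache.get(1); cache.get(2); cache.get(1); cache.get(3)  # evicts block 2
    print(list(cache.cache))  # [1, 3]
    ```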

  2. Experimental Analysis of File Transfer Rates over Wide-Area Dedicated Connections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S.; Liu, Qiang; Sen, Satyabrata

    2016-12-01

    File transfers over dedicated connections, supported by large parallel file systems, have become increasingly important in high-performance computing and big data workflows. It remains a challenge to achieve peak rates for such transfers due to the complexities of file I/O, host, and network transport subsystems, and equally importantly, their interactions. We present extensive measurements of disk-to-disk file transfers using Lustre and XFS file systems mounted on multi-core servers over a suite of 10 Gbps emulated connections with 0-366 ms round trip times. Our results indicate that large buffer sizes and many parallel flows do not always guarantee high transfer rates. Furthermore, large variations in the measured rates necessitate repeated measurements to ensure confidence in inferences based on them. We propose a new method to efficiently identify the optimal joint file I/O and network transport parameters using a small number of measurements. We show that for XFS and Lustre with direct I/O, this method identifies configurations achieving 97% of the peak transfer rate while probing only 12% of the parameter space.
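
    The paper's search method is not spelled out in the abstract; the sketch below shows one generic way to probe only part of a (buffer size, parallel flows) grid by alternating one-dimensional sweeps, with `measure()` standing in for an actual timed transfer.

    ```python
    # Coordinate-descent search over a small 2-D parameter grid.
    import math

    BUFFERS = [2 ** k for k in range(16, 25)]  # 64 KiB .. 16 MiB (assumed grid)
    FLOWS = list(range(1, 17))

    def measure(buf_bytes, n_flows):
        # Stand-in for a timed disk-to-disk transfer (Gb/s); a single
        # broad peak that only mimics trends reported in such tests.
        return 9.5 - 0.02 * abs(n_flows - 8) - 0.4 * abs(math.log2(buf_bytes) - 21)

    probes = 0
    def probe(buf, flows):
        global probes
        probes += 1
        return measure(buf, flows)

    buf, flows = BUFFERS[0], FLOWS[0]
    for _ in range(2):  # alternate one-dimensional sweeps
        buf = max(BUFFERS, key=lambda b: probe(b, flows))
        flows = max(FLOWS, key=lambda f: probe(buf, f))

    print(buf, flows, f"{probes}/{len(BUFFERS) * len(FLOWS)} points probed")
    ```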

  3. 75 FR 76426 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-08

    ..., access control lists, file system permissions, intrusion detection and prevention systems and log..., address, mailing address, country, organization, phone, fax, mobile, pager, Defense Switched Network (DSN..., address, mailing address, country, organization, phone, fax, mobile, pager, Defense Switched Network (DSN...

  4. Implementing Journaling in a Linux Shared Disk File System

    NASA Technical Reports Server (NTRS)

    Preslan, Kenneth W.; Barry, Andrew; Brassow, Jonathan; Cattelan, Russell; Manthei, Adam; Nygaard, Erling; VanOort, Seth; Teigland, David; Tilstra, Mike; O'Keefe, Matthew

    2000-01-01

    In computer systems today, speed and responsiveness are often determined by network and storage subsystem performance. Faster, more scalable networking interfaces like Fibre Channel and Gigabit Ethernet provide the scaffolding from which higher performance computer systems may be constructed, but new thinking is required about how machines interact with network-enabled storage devices. In this paper we describe how we implemented journaling in the Global File System (GFS), a shared-disk cluster file system for Linux. Our previous three papers on GFS at the Mass Storage Symposium discussed our first three GFS implementations, their performance, and the lessons learned. Our fourth paper describes, appropriately enough, the evolution of GFS version 3 to version 4, which supports journaling and recovery from client failures. In addition, GFS scalability tests extending to 8 machines accessing 8 four-disk enclosures were conducted; these tests showed good scaling. We describe the GFS cluster infrastructure, which is necessary for proper recovery from machine and disk failures in a collection of machines sharing disks using GFS. Finally, we discuss the suitability of Linux for handling the big data requirements of supercomputing centers.
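
    GFS's on-disk journal format and cluster recovery are more involved than an abstract can show, but the core write-ahead idea, log the intent durably, apply it, and replay committed records after a crash, can be sketched as follows (illustrative only).

    ```python
    # Minimal write-ahead journaling: a record is made durable before the
    # "in-place" data is touched, so recovery can replay it safely.
    import json, os

    JOURNAL = "journal.log"
    DATA = {}  # stands in for fixed on-disk metadata blocks

    if os.path.exists(JOURNAL):
        os.remove(JOURNAL)  # start the demo with an empty journal

    def apply_op(op):
        DATA[op["key"]] = op["value"]

    def journal_write(op):
        with open(JOURNAL, "a") as j:
            j.write(json.dumps(op) + "\n")
            j.flush()
            os.fsync(j.fileno())  # record is durable before data is modified
        apply_op(op)

    def recover():
        # After a failure, replay every durable record; replaying a record
        # twice is safe because the operation is idempotent.
        with open(JOURNAL) as j:
            for line in j:
                apply_op(json.loads(line))

    journal_write({"key": "inode7.size", "value": 4096})
    DATA.clear()  # simulate a crash losing in-memory state
    recover()
    print(DATA)
    ```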

  5. Measurements of file transfer rates over dedicated long-haul connections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S; Settlemyer, Bradley W; Imam, Neena

    2016-01-01

    Wide-area file transfers are an integral part of several High-Performance Computing (HPC) scenarios. Dedicated network connections with high capacity, low loss rate, and low competing traffic are increasingly being provisioned over current HPC infrastructures to support such transfers. To gain insights into these file transfers, we collected transfer rate measurements for Lustre and xfs file systems between dedicated multi-core servers over emulated 10 Gbps connections with round trip times (rtt) in the 0-366 ms range. Memory transfer throughput over these connections is measured using iperf, and file I/O throughput on host systems is measured using xddprof. We consider two file system configurations: Lustre over an IB network, and xfs over SSD connected to the PCI bus. Files are transferred using xdd across these connections, and the transfer rates are measured, which indicate the need to jointly optimize the connection and host file I/O parameters to achieve peak transfer rates. In particular, these measurements indicate that (i) the peak file transfer rate is lower than peak connection and host I/O throughput, in some cases by as much as 50% or more, (ii) xdd request sizes that achieve peak throughput for host file I/O do not necessarily lead to peak file transfer rates, and (iii) parallelism in host I/O and TCP transport does not always improve the file transfer rates.

  6. Sawmill: A Logging File System for a High-Performance RAID Disk Array

    DTIC Science & Technology

    1995-01-01

    To keep the file server from limiting disk performance, new controller architectures connect the disks directly to the network so that data movement bypasses the file server. These developments raise two questions for file systems: how to get the best performance from a RAID, and how to use such a controller architecture. Sawmill is a logging file system that runs on the RAID-II storage system; this architecture provides a fast data path that moves data rapidly among the disks, high-speed controller memory, and the network.

  7. Issues in ATM Support of High-Performance, Geographically Distributed Computing

    NASA Technical Reports Server (NTRS)

    Claus, Russell W.; Dowd, Patrick W.; Srinidhi, Saragur M.; Blade, Eric D.G

    1995-01-01

    This report experimentally assesses the effect of the underlying network in a cluster-based computing environment. The assessment is quantified by application-level benchmarking, process-level communication, and network file input/output. Two testbeds were considered, one small cluster of Sun workstations and another large cluster composed of 32 high-end IBM RS/6000 platforms. The clusters had Ethernet, fiber distributed data interface (FDDI), Fibre Channel, and asynchronous transfer mode (ATM) network interface cards installed, providing the same processors and operating system for the entire suite of experiments. The primary goal of this report is to assess the suitability of an ATM-based, local-area network to support interprocess communication and remote file input/output systems for distributed computing.

  8. Hierarchical Data Distribution Scheme for Peer-to-Peer Networks

    NASA Astrophysics Data System (ADS)

    Bhushan, Shashi; Dave, M.; Patel, R. B.

    2010-11-01

    In the past few years, peer-to-peer (P2P) networks have become an extremely popular mechanism for large-scale content sharing. P2P systems have focused on specific application domains (e.g., music files, video files) or on providing file-system-like capabilities. P2P is a powerful paradigm that provides a large-scale and cost-effective mechanism for data sharing, and a P2P system may also be used to store data globally. This raises the question of whether a conventional database can be implemented on a P2P system; a successful implementation has yet to be reported. In this paper we present a mathematical model for the replication of partitions and a hierarchy-based data distribution scheme for P2P networks. We also analyze the resource utilization and throughput of the P2P system with respect to availability when a conventional database is implemented over the P2P system with a variable query rate. Simulation results show that database partitions placed on peers with a higher availability factor perform better. Degradation index, throughput, and resource utilization are the parameters evaluated with respect to the availability factor.

  9. A site of communication among enterprises for supporting occupational health and safety management system.

    PubMed

    Velonakis, E; Mantas, J; Mavrikakis, I

    2006-01-01

    Occupational health and safety management is a field of increasing interest. Institutions, in cooperation with enterprises, are making synchronized efforts to introduce quality management systems to this field. Computer networks can offer such services via TCP/IP, a reliable protocol for workflow management between enterprises and institutions. The design of such a network is based on several factors in order to achieve defined criteria and connectivity with other networks. The network consists of nodes responsible for informing executive personnel on occupational health and safety. A web database has been designed for inserting and searching documents and for answering and processing questionnaires. The submission of files to a server and the answers to questionnaires through the web help the experts to correct and improve their activities. Based on the requirements of enterprises, we have constructed a web file server to which files are submitted so that users can retrieve those they need. Access is limited to authorized users, and digital watermarks authenticate and protect digital objects. The health and safety management system follows ISO 18001, and its implementation through the web site is an aim. The whole application has been developed and implemented on a pilot basis for the health services sector. It is already installed within a hospital, supporting health and safety management among different departments of the hospital and allowing communication through the web with other hospitals.

  10. 75 FR 76428 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-08

    ..., country, organization, phone, fax, mobile, pager, Defense Switched Network (DSN) phone, other fax, other... to populate and maintain personal data elements in DoD Component networks and systems, such as.../Transport Layer Security (SSL/TLS) connections, access control lists, file system permissions, intrusion...

  11. Software Supports Distributed Operations via the Internet

    NASA Technical Reports Server (NTRS)

    Norris, Jeffrey; Backers, Paul; Steinke, Robert

    2003-01-01

    Multi-mission Encrypted Communication System (MECS) is a computer program that enables authorized, geographically dispersed users to gain secure access to a common set of data files via the Internet. MECS is compatible with legacy application programs and a variety of operating systems. The MECS architecture is centered around maintaining consistent replicas of data files cached on remote computers. MECS monitors these files and, whenever one is changed, the changed file is committed to a master database as soon as network connectivity makes it possible to do so. MECS provides subscriptions for remote users to automatically receive new data as they are generated. Remote users can be producers as well as consumers of data. Whereas a prior program that provides some of the same services treats disconnection of a user from the network of users as an error from which recovery must be effected, MECS treats disconnection as a nominal state of the network: This leads to a different design that is more efficient for serving many users, each of whom typically connects and disconnects frequently and wants only a small fraction of the data at any given time.
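
    A hedged sketch of the replica-commit pattern described above: locally changed files are queued while disconnected and committed to the master store when connectivity returns. This is an illustration only, not MECS code; every name in it is hypothetical.

        import pathlib
        import queue
        import shutil

        class ReplicaCache:
            """Hypothetical client-side cache that treats disconnection as
            a nominal state: changes queue up and commit on reconnect."""

            def __init__(self, master_dir: pathlib.Path):
                self.master_dir = master_dir
                self.pending: queue.Queue = queue.Queue()

            def file_changed(self, path: pathlib.Path) -> None:
                # No error path for "offline": queuing is the normal case.
                self.pending.put(path)

            def on_reconnect(self) -> None:
                # Commit every queued change to the master store.
                while not self.pending.empty():
                    src = self.pending.get()
                    shutil.copy2(src, self.master_dir / src.name)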

  12. 75 FR 69645 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-15

    ..., pager, Defense Switched Network (DSN) phone, other fax, other mobile, other pager, city, zip code, post... system may be used to populate and maintain personal data elements in DoD component networks and systems.../Transport Layer Security (SSL/TLS) connections, access control lists, file system permissions, intrusion...

  13. 76 FR 5973 - Privacy Act of 1974; Notice; Publication of the Systems of Records Managed by the Commodity...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-02

    ...: Paper records are stored in file folders, binders, computer files (eLaw) and computer disks. Electronic records, including computer files, are stored on the Commission's network and other electronic media as... physical security measures. Technical security measures within CFTC include restrictions on computer access...

  14. System and method for generating a relationship network

    DOEpatents

    Franks, Kasian; Myers, Cornelia A; Podowski, Raf M

    2015-05-05

    A computer-implemented system and process for generating a relationship network is disclosed. The system provides a set of data items to be related and generates variable length data vectors to represent the relationships between the terms within each data item. The system can be used to generate a relationship network for documents, images, or any other type of file. This relationship network can then be queried to discover the relationships between terms within the set of data items.
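
    As a hedged illustration of the idea, not the patented method itself: variable-length co-occurrence vectors can be built per term, and the resulting map queried for a term's relationships. All names below are invented for the example.

        from collections import defaultdict

        def term_vectors(data_items):
            """Build a variable-length co-occurrence vector per term:
            vectors[t][u] counts how often t and u share a data item."""
            vectors = defaultdict(lambda: defaultdict(int))
            for item in data_items:
                terms = set(item.lower().split())
                for t in terms:
                    for other in terms - {t}:
                        vectors[t][other] += 1
            return {t: dict(v) for t, v in vectors.items()}

        net = term_vectors(["gene protein pathway", "gene disease"])
        print(net["gene"])  # {'protein': 1, 'pathway': 1, 'disease': 1}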

  15. System and method for generating a relationship network

    DOEpatents

    Franks, Kasian [Kensington, CA]; Myers, Cornelia A [St. Louis, MO]; Podowski, Raf M [Pleasant Hill, CA]

    2011-07-26

    A computer-implemented system and process for generating a relationship network is disclosed. The system provides a set of data items to be related and generates variable length data vectors to represent the relationships between the terms within each data item. The system can be used to generate a relationship network for documents, images, or any other type of file. This relationship network can then be queried to discover the relationships between terms within the set of data items.

  16. TOXNET (TOXICOLOGY DATA NETWORK)

    EPA Science Inventory

    TOXNET (Toxicology Data Network) is a computerized system of files oriented to toxicology and related areas. It is managed by the National Library of Medicine's Toxicology and Environmental Health Information Program (TEHIP) and runs on a series of microcomputers in a networked cl...

  17. Identifying compromised systems through correlation of suspicious traffic from malware behavioral analysis

    NASA Astrophysics Data System (ADS)

    Camilo, Ana E. F.; Grégio, André; Santos, Rafael D. C.

    2016-05-01

    Malware detection may be accomplished through analysis of infection behavior. To do so, dynamic analysis systems run malware samples and extract their operating system activities and network traffic. This traffic may represent malware accessing external systems, either to steal sensitive data from victims or to fetch other malicious artifacts (configuration files, additional modules, commands). In this work, we propose the use of visualization as a tool to identify compromised systems, based on correlating malware communications in the form of graphs and finding isomorphisms between them. We produced graphs from over six thousand distinct network traffic files captured during malware execution and analyzed the existing relationships among malware samples and IP addresses.

  18. Derived virtual devices: a secure distributed file system mechanism

    NASA Technical Reports Server (NTRS)

    VanMeter, Rodney; Hotz, Steve; Finn, Gregory

    1996-01-01

    This paper presents the design of derived virtual devices (DVDs). DVDs are the mechanism used by the Netstation Project to provide secure shared access to network-attached peripherals distributed in an untrusted network environment. DVDs improve Input/Output efficiency by allowing user processes to perform I/O operations directly from devices without intermediate transfer through the controlling operating system kernel. The security enforced at the device through the DVD mechanism includes resource boundary checking, user authentication, and restricted operations, e.g., read-only access. To illustrate the application of DVDs, we present the interactions between a network-attached disk and a file system designed to exploit the DVD abstraction. We further discuss third-party transfer as a mechanism intended to provide for efficient data transfer in a typical NAP environment. We show how DVDs facilitate third-party transfer, and provide the security required in a more open network environment.

  19. 76 FR 28499 - Data Fortress Systems Group Ltd., Digital Youth Network Corp., Fantom Technologies, Inc., and KIK...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-17

    ... SECURITIES AND EXCHANGE COMMISSION [File No. 500-1] Data Fortress Systems Group Ltd., Digital Youth Network Corp., Fantom Technologies, Inc., and KIK Technology International, Inc., Order of... of current and accurate information concerning the securities of Data Fortress Systems Group Ltd...

  20. A Prototype System for a Computer-Based Statewide Film Library Network: A Model for Operation. Statewide Film Library Network: System-1 Specifications - Files.

    ERIC Educational Resources Information Center

    Sullivan, Todd

    Using an IBM System/360 Model 50 computer, the New York Statewide Film Library Network schedules film use, reports on materials handling and statistics, and provides for interlibrary loan of films. Communications between the film libraries and the computer are maintained by Teletype model 33 ASR Teletypewriter terminals operating on TWX…

  1. Enabling Dynamic Security Management of Networked Systems via Device-Embedded Security (Self-Securing Devices)

    DTIC Science & Technology

    2007-01-15

    it can detect specifically proscribed content changes to critical files (e.g., illegal shells inserted into /etc/passwd). Fourth, it can detect the...UNIX password management involves a pair of inter-related files (/etc/passwd and /etc/shadow). The corresponding access patterns seen at the storage...content integrity verification is utilized. As a concrete example, consider a UNIX system password file (/etc/passwd), which consists of a set of well

  2. A peer-to-peer music sharing system based on query-by-humming

    NASA Astrophysics Data System (ADS)

    Wang, Jianrong; Chang, Xinglong; Zhao, Zheng; Zhang, Yebin; Shi, Qingwei

    2007-09-01

    Today, the main traffic in peer-to-peer (P2P) networks is still multimedia files, including large numbers of music files. The study of Music Information Retrieval (MIR) has brought many encouraging achievements to the music search area. Nevertheless, research on MIR-based music search in P2P networks is still insufficient. Query by Humming (QBH) is one MIR technology that has been studied for years. In this paper, we present a server-based P2P music sharing system built on QBH and integrated with a Hierarchical Index Structure (HIS) to enhance the relation between surface data and latent information. HIS evolves automatically based on the music-related items carried by each peer, such as MIDI files, lyrics, and so forth. Instead of adding a large amount of redundancy, the system generates a compact index serving multiple kinds of search input, which greatly improves on the traditional keyword-based text search mode. As network bandwidth, speed, and the like cease to be bottlenecks of Internet service, end users become more concerned with the accessibility and accuracy of the information the Internet provides.

  3. The Cheetah Data Management System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kunz, P.F.; Word, G.B.

    1991-03-01

    Cheetah is a data management system based on the C programming language. The premise of Cheetah is that the "banks" of FORTRAN-based systems should be "structures" as defined by the C language. Cheetah is a system to manage these structures while preserving the use of the C language in its native form. For C structures managed by Cheetah, the user can use Cheetah utilities for reading and writing both binary and text files, in a machine-independent form, to disk or over a network. Files written by Cheetah also contain a dictionary describing in detail the data contained in the file; such information is intended to be used by interactive programs for presenting the contents of the file. Cheetah has been ported to many different operating systems with no operating-system-dependent switches.

  4. Cloud Engineering Principles and Technology Enablers for Medical Image Processing-as-a-Service.

    PubMed

    Bao, Shunxing; Plassard, Andrew J; Landman, Bennett A; Gokhale, Aniruddha

    2017-04-01

    Traditional in-house, laboratory-based medical imaging studies use hierarchical data structures (e.g., NFS file stores) or databases (e.g., COINS, XNAT) for storage and retrieval. The resulting performance from these approaches is, however, impeded by standard network switches since they can saturate network bandwidth during transfer from storage to processing nodes for even moderate-sized studies. To that end, a cloud-based "medical image processing-as-a-service" offers promise in utilizing the ecosystem of Apache Hadoop, which is a flexible framework providing distributed, scalable, fault tolerant storage and parallel computational modules, and HBase, which is a NoSQL database built atop Hadoop's distributed file system. Despite this promise, HBase's load distribution strategy of region split and merge is detrimental to the hierarchical organization of imaging data (e.g., project, subject, session, scan, slice). This paper makes two contributions to address these concerns by describing key cloud engineering principles and technology enhancements we made to the Apache Hadoop ecosystem for medical imaging applications. First, we propose a row-key design for HBase, which is a necessary step that is driven by the hierarchical organization of imaging data. Second, we propose a novel data allocation policy within HBase to strongly enforce collocation of hierarchically related imaging data. The proposed enhancements accelerate data processing by minimizing network usage and localizing processing to machines where the data already exist. Moreover, our approach is amenable to the traditional scan, subject, and project-level analysis procedures, and is compatible with standard command line/scriptable image processing software. Experimental results for an illustrative sample of imaging data reveal that our new HBase policy results in a three-fold time improvement in conversion of classic DICOM to NiFTI file formats when compared with the default HBase region split policy, and nearly a six-fold improvement over a commonly available network file system (NFS) approach even for relatively small file sets. Moreover, file access latency is lower than network attached storage.
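
    The row-key contribution can be illustrated with a hedged sketch: compose the key from the hierarchy levels so that HBase's lexicographic row ordering keeps related imaging data adjacent. The separator and field widths below are assumptions for illustration, not the paper's actual design.

        def imaging_row_key(project: str, subject: int, session: int,
                            scan: int, slice_no: int) -> bytes:
            # Zero-padded, fixed-width fields make byte-wise lexicographic
            # order match the project/subject/session/scan/slice hierarchy,
            # so all rows for one subject or session share a key prefix.
            return (f"{project}:{subject:06d}:{session:04d}:"
                    f"{scan:04d}:{slice_no:04d}").encode()

        key = imaging_row_key("ProjA", 42, 1, 3, 17)
        # b'ProjA:000042:0001:0003:0017'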

  5. 77 FR 73096 - Self-Regulatory Organizations; Chicago Board Options Exchange, Incorporated; Notice of Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-07

    ... systems in Chicago (the ``Disaster Recovery Systems'') in case of the occurrence of some kind of disaster which prevents NY4 from operating. These Disaster Recovery Systems can be accessed via Network Access... Disaster Recovery Network Access Ports in order to be able to connect to the Disaster Recovery Systems in...

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mercier, C.W.

    The Network File System (NFS) will be the user interface to a High-Performance Data System (HPDS) being developed at Los Alamos National Laboratory (LANL). HPDS will manage high-capacity, high-performance storage systems connected directly to a high-speed network from distributed workstations. NFS will be modified to maximize performance and to manage massive amounts of data. 6 refs., 3 figs.

  7. Supplement B: Research Networking Systems Characteristics Profiles. A Companion to the OCLC Research Report, Registering Researchers in Authority Files

    ERIC Educational Resources Information Center

    Smith-Yoshimura, Karen; Altman, Micah; Conlon, Michael; Cristán, Ana Lupe; Dawson, Laura; Dunham, Joanne; Hickey, Thom; Hill, Amanda; Hook, Daniel; Horstmann, Wolfram; MacEwan, Andrew; Schreur, Philip; Smart, Laura; Wacker, Melanie; Woutersen, Saskia

    2014-01-01

    The OCLC Research Report, "Registering Researchers in Authority Files", [Accessible in ERIC as ED564924] summarizes the results of the research conducted by the OCLC Research Registering Researchers in Authority Files Task Group in 2012-2014. Details of this research are in supplementary data sets: (1) "Supplement A: Use Cases. A…

  8. BOREAS AFM-5 Level-1 Upper Air Network Data

    NASA Technical Reports Server (NTRS)

    Barr, Alan; Hrynkiw, Charmaine; Newcomer, Jeffrey A. (Editor); Hall, Forrest G. (Editor); Smith, David E. (Technical Monitor)

    2000-01-01

    The Boreal Ecosystem-Atmosphere Study (BOREAS) Airborne Fluxes and Meteorology (AFM)-5 team collected and processed data from the numerous radiosonde flights during the project. The goals of the AFM-05 team were to provide large-scale definition of the atmosphere by supplementing the existing Atmospheric Environment Service (AES) aerological network, both temporally and spatially. This data set includes basic upper-air parameters collected from the network of upper-air stations during the 1993, 1994, and 1996 field campaigns over the entire study region. The data are contained in tabular ASCII files. The level-1 upper-air network data are available from the Earth Observing System Data and Information System (EOSDIS) Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC). The data files also are available on a CD-ROM (see document number 20010000884).

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, Kevin

    The software provides a simple web API to allow users to request a time window during which a file will not be removed from cache. HPSS provides the concept of a "purge lock": when a purge lock is set on a file, the file will not be removed from disk to enter tape-only state. Many network file protocols assume a file is on disk, so it is good to purge lock a file before transferring it using one of those protocols. HPSS's purge lock system is very coarse grained, though: a file is either purge locked or not. Nothing enforces quotas, timely unlocking of purge locks, or management of the races inherent in multiple users wanting to lock or unlock the same file. The Purge Lock Server lets you, through a simple REST API, specify a list of files to purge lock and an expire time, and the system ensures these things happen properly.
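
    As a hedged illustration of what a request to such an interface might look like (the host, port, endpoint, and field names below are invented for the example and are not documented parts of the Purge Lock Server):

        import json
        import urllib.request

        # Hypothetical request: purge-lock two files for one hour.
        payload = json.dumps({
            "files": ["/hpss/projA/run1.dat", "/hpss/projA/run2.dat"],
            "expire_seconds": 3600,
        }).encode()

        req = urllib.request.Request(
            "http://purgelock.example.org:8080/locks",  # hypothetical endpoint
            data=payload,
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            print(resp.status, resp.read().decode())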

  10. 77 FR 32141 - Privacy Act of 1974, as Amended; System of Records Notices

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-31

    ... records titled ``Internal Collaboration Network''. SUMMARY: The National Archives and Records... 43, the Internal Collaboration Network, which contains files with information on National Archives.... SUPPLEMENTARY INFORMATION: The Internal Collaboration Network is a web- based platform that allows users to...

  11. FTP: Full-Text Publishing?

    ERIC Educational Resources Information Center

    Jul, Erik

    1992-01-01

    Describes the use of file transfer protocol (FTP) on the INTERNET computer network and considers its use as an electronic publishing system. The differing electronic formats of text files are discussed; the preparation and access of documents are described; and problems are addressed, including a lack of consistency. (LRW)

  12. Multipurpose Controller with EPICS integration and data logging: BPM application for ESS Bilbao

    NASA Astrophysics Data System (ADS)

    Arredondo, I.; del Campo, M.; Echevarria, P.; Jugo, J.; Etxebarria, V.

    2013-10-01

    This work presents a multipurpose configurable control system that can be integrated into an EPICS control network, with this functionality configured through an XML file. The core of the system is the so-called Hardware Controller, which is in charge of managing the control hardware, setting up and communicating with the EPICS network, and storing the data. The reconfigurable nature of the controller rests on a single XML file, allowing any end user to easily modify and adjust the control system to any specific requirement. The selected Java development environment ensures multiplatform operation and great versatility, even with regard to the hardware to be controlled. This paper focuses on fast control based on a high-performance FPGA and also describes an application of the approach to ESS Bilbao's Beam Position Monitoring system. The implementation of the XML configuration file and the satisfactory performance achieved are presented, along with a general description of the Multipurpose Controller itself.

  13. Moving Large Data Sets Over High-Performance Long Distance Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hodson, Stephen W; Poole, Stephen W; Ruwart, Thomas

    2011-04-01

    In this project we look at the performance characteristics of three tools used to move large data sets over dedicated long-distance networking infrastructure. Although performance studies of wide area networks have been a frequent topic of interest, performance analyses have tended to focus on network latency characteristics and peak throughput using network traffic generators. In this study we instead perform an end-to-end long-distance networking analysis that includes reading large data sets from a source file system and committing large data sets to a destination file system. An evaluation of end-to-end data movement is also an evaluation of the system configurations employed and the tools used to move the data. For this paper, we have built several storage platforms and connected them with a high-performance long-distance network configuration. We use these systems to analyze the capabilities of three data movement tools: BBcp, GridFTP, and XDD. Our studies demonstrate that existing data movement tools do not provide efficient performance levels or exercise the storage devices in their highest performance modes. We describe the device information required to achieve high levels of I/O performance and discuss how this data is applicable in use cases beyond data movement performance.

  14. An alternative to sneakernet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Orrell, S.; Ralstin, S.

    1992-04-01

    Many computer security plans specify that only a small percentage of the data processed will be classified; thus, the bulk of the data on secure systems must be unclassified. Secure limited-access sites operating approved classified computing systems sometimes also have a system ostensibly containing only unclassified files but operating within the secure environment. That system could be networked or otherwise connected to classified systems so that both can use common resources for file storage or computing power. Such a system must operate under the same rules as the secure classified systems. It is in the nature of unclassified files that they either came from, or will eventually migrate to, a non-secure system. Today, unclassified files are typically exported from systems within the secure environment by loading transport media and carrying them to an open system; import of unclassified files is handled similarly. This media transport process, sometimes referred to as sneaker net, is often manually logged and controlled only by administrative procedures. A comprehensive system for secure bi-directional transfer of unclassified files between secure and open environments has yet to be developed. Any such secure file transport system should be required to meet several stringent criteria; it is the purpose of this document to begin a definition of these criteria.

  15. An alternative to sneakernet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Orrell, S.; Ralstin, S.

    1992-01-01

    Many computer security plans specify that only a small percentage of the data processed will be classified; thus, the bulk of the data on secure systems must be unclassified. Secure limited-access sites operating approved classified computing systems sometimes also have a system ostensibly containing only unclassified files but operating within the secure environment. That system could be networked or otherwise connected to classified systems so that both can use common resources for file storage or computing power. Such a system must operate under the same rules as the secure classified systems. It is in the nature of unclassified files that they either came from, or will eventually migrate to, a non-secure system. Today, unclassified files are typically exported from systems within the secure environment by loading transport media and carrying them to an open system; import of unclassified files is handled similarly. This media transport process, sometimes referred to as sneaker net, is often manually logged and controlled only by administrative procedures. A comprehensive system for secure bi-directional transfer of unclassified files between secure and open environments has yet to be developed. Any such secure file transport system should be required to meet several stringent criteria; it is the purpose of this document to begin a definition of these criteria.

  16. Communications network design and costing model programmers manual

    NASA Technical Reports Server (NTRS)

    Logan, K. P.; Somes, S. S.; Clark, C. A.

    1983-01-01

    Optimization algorithms and techniques used in the communications network design and costing model for least-cost route and least-cost network problems are examined from the programmer's point of view. All system program modules, the data structures within the model, and the files that make up the data base are described.

  17. Technological Networks

    NASA Astrophysics Data System (ADS)

    Mitra, Bivas

    The study of networks in the form of mathematical graph theory is one of the fundamental pillars of discrete mathematics. However, recent years have witnessed a substantial new movement in network research. The focus of the research is shifting away from the analysis of small graphs and the properties of individual vertices or edges to consideration of statistical properties of large scale networks. This new approach has been driven largely by the availability of technological networks like the Internet [12], World Wide Web network [2], etc. that allow us to gather and analyze data on a scale far larger than previously possible. At the same time, technological networks have evolved as a socio-technological system, as the concepts of social systems that are based on self-organization theory have become unified in technological networks [13]. In today’s society, we have simple and universal access to great amounts of information and services. These information services are based upon the infrastructure of the Internet and the World Wide Web. The Internet is the system composed of ‘computers’ connected by cables or some other form of physical connection. Over this physical network, it is possible to exchange e-mails, transfer files, etc. On the other hand, the World Wide Web (commonly shortened to the Web) is a system of interlinked hypertext documents accessed via the Internet, where nodes represent web pages and links represent hyperlinks between the pages. Peer-to-peer (P2P) networks [26] have also recently become a popular medium through which huge amounts of data can be shared. P2P file sharing systems, where files are searched and downloaded among peers without the help of central servers, have emerged as a major component of Internet traffic. An important advantage of P2P networks is that all clients provide resources, including bandwidth, storage space, and computing power. In this chapter, we discuss these technological networks in detail. The review is organized as follows. Section 2 presents an introduction to the Internet and different protocols related to it. This section also specifies the socio-technological properties of the Internet, like scale invariance, the small-world property, network resilience, etc. Section 3 describes the P2P networks, their categorization, and other related issues like search, stability, etc. Section 4 concludes the chapter.

  18. Distributing File-Based Data to Remote Sites Within the BABAR Collaboration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gowdy, Stephen J.

    BABAR [1] uses two formats for its data: Objectivity database and root [2] files. This poster concerns the distribution of the latter; for Objectivity data see [3]. The BABAR analysis data is stored in root files, one per physics run and analysis selection channel, maintained in a large directory tree. Currently BABAR has more than 4.5 TBytes in 200,000 root files. This data is (mostly) produced at SLAC, but is required for analysis at universities and research centers throughout the US and Europe. Two basic problems confront us when we seek to import bulk data from SLAC to an institute's local storage via the network. We must determine which files must be imported (depending on the local site requirements and which files have already been imported), and we must make the optimum use of the network when transferring the data. Basic ftp-like tools (ftp, scp, etc.) do not attempt to solve the first problem. More sophisticated tools like rsync [4], the widely used mirror/synchronization program, compare local and remote file systems, checking for changes (based on file date, size and, if desired, an elaborate checksum) in order to copy only new or modified files. However, rsync allows for only limited file selection. Also, when, as in BABAR, an extremely large directory structure must be scanned, rsync can take several hours just to determine which files need to be copied. Although rsync (and scp) provides on-the-fly compression, it does not allow us to optimize the network transfer by using multiple streams, adjusting the TCP window size, or separating encrypted authentication from unencrypted data channels.
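
    The first problem, deciding which files to import, is essentially a filtered set difference between a remote catalog and the local tree. A hedged sketch, where the catalog format and the channel-based selection rule are assumptions for illustration:

        import pathlib

        def files_to_import(remote_catalog: dict, local_root: pathlib.Path,
                            wanted_channels: set) -> list:
            """Return remote files still needed locally: in a wanted
            selection channel, and absent or differing in size."""
            needed = []
            for rel_path, (channel, size) in remote_catalog.items():
                if channel not in wanted_channels:
                    continue
                local = local_root / rel_path
                if not local.exists() or local.stat().st_size != size:
                    needed.append(rel_path)
            return needed

    Unlike a full rsync scan, a site walks only the catalog entries for the channels it wants, which is where the selection advantage over general mirroring tools would come from.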

  19. DSN command system Mark III-78. [data processing

    NASA Technical Reports Server (NTRS)

    Stinnett, W. G.

    1978-01-01

    The Deep Space Network Command System Mark III-78 data processing system includes a capability for store-and-forward command handling. The functions of (1) storing the command files at a Deep Space Station, (2) attaching the files to a queue, and (3) radiating the commands to the spacecraft are straightforward. However, the total data processing capability results from assuming worst-case, failure-recovery, or nonnominal operating conditions. Optional data processing functions include file erase, clearing the queue, suspend radiation, command abort, resume command radiation, and close-window time override.

  20. [PVFS 2000: An operational parallel file system for Beowulf

    NASA Technical Reports Server (NTRS)

    Ligon, Walt

    2004-01-01

    The approach has been to develop Parallel Virtual File System version 2 (PVFS2), retaining the basic philosophy of the original file system but completely rewriting the code. The architecture comprises server and client components built on two abstraction layers. BMI is the network abstraction layer; it is designed with a common driver and modules for each protocol supported, its interface is non-blocking, and it provides mechanisms for optimizations including pinning user buffers. Currently TCP/IP and GM (Myrinet) modules have been implemented. Trove is the storage abstraction layer; it provides for storing both data spaces and name/value pairs, and it can be implemented using different underlying storage mechanisms including native files, raw disk partitions, SQL and other databases. The current implementation uses native files for data spaces and Berkeley DB for name/value pairs.

  1. Cache-enabled small cell networks: modeling and tradeoffs.

    PubMed

    Baştuǧ, Ejder; Bennis, Mehdi; Kountouris, Marios; Debbah, Mérouane

    We consider a network model where small base stations (SBSs) have caching capabilities as a means to alleviate the backhaul load and satisfy users' demand. The SBSs are stochastically distributed over the plane according to a Poisson point process (PPP) and serve their users either (i) by bringing the content from the Internet through a finite rate backhaul or (ii) by serving them from the local caches. We derive closed-form expressions for the outage probability and the average delivery rate as a function of the signal-to-interference-plus-noise ratio (SINR), SBS density, target file bitrate, storage size, file length, and file popularity. We then analyze the impact of key operating parameters on the system performance. It is shown that a certain outage probability can be achieved either by increasing the number of base stations or the total storage size. Our results and analysis provide key insights into the deployment of cache-enabled small cell networks (SCNs), which are seen as a promising solution for future heterogeneous cellular networks.
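
    One ingredient of such an analysis can be illustrated in isolation: if file popularity follows a Zipf law and each SBS caches the most popular files, the chance that a request is served locally grows with storage size. The sketch below is a generic illustration under that assumption, not the paper's closed-form SINR-based expressions, and the parameter values are invented.

        def zipf_cache_hit_prob(catalog_size: int, cache_size: int,
                                alpha: float = 0.8) -> float:
            # Popularity of the file ranked i is proportional to i**(-alpha).
            weights = [i ** -alpha for i in range(1, catalog_size + 1)]
            # The cache holds the cache_size most popular files.
            return sum(weights[:cache_size]) / sum(weights)

        print(zipf_cache_hit_prob(10_000, 100))  # ~0.30
        print(zipf_cache_hit_prob(10_000, 200))  # ~0.37

    Doubling the cache here raises the local-hit probability, mirroring the paper's observation that a target outage can be met by adding either base stations or storage.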

  2. Development of an e-VLBI Data Transport Software Suite with VDIF

    NASA Technical Reports Server (NTRS)

    Sekido, Mamoru; Takefuji, Kazuhiro; Kimura, Moritaka; Hobiger, Thomas; Kokado, Kensuke; Nozawa, Kentarou; Kurihara, Shinobu; Shinno, Takuya; Takahashi, Fujinobu

    2010-01-01

    We have developed a software library (KVTP-lib) for VLBI data transmission over the network with the VDIF (VLBI Data Interchange Format), the newly proposed standard VLBI data format designed for electronic data transfer over the network. The software package keeps the application layer (VDIF frame) and the transmission layer separate, so that each layer can be developed efficiently. The real-time VLBI data transmission tool sudp-send is an application tool based on the KVTP-lib library. sudp-send captures the VLBI data stream from the VSI-H interface with the K5/VSI PC board and writes the data to file in standard Linux file format or transmits it to the network using the simple-UDP (SUDP) protocol. Another tool, sudp-recv, receives the data stream from the network and writes the data to file in a specific VLBI format (K5/VSSP, VDIF, or Mark 5B). This software system has been implemented on the Wettzell-Tsukuba baseline; evaluation before operational employment is under way.
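
    A hedged sketch of the general send/receive pattern such tools follow: fixed-size frames streamed over UDP on one side, appended to a file on the other. The port, frame size, and function names are illustrative assumptions; this is not the KVTP-lib code.

        import socket

        FRAME = 8192                   # assumed frame payload size
        ADDR = ("localhost", 50000)    # assumed destination address

        def sudp_send_like(path: str) -> None:
            """Stream a file's bytes over UDP in fixed-size frames."""
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            with open(path, "rb") as f:
                while chunk := f.read(FRAME):
                    sock.sendto(chunk, ADDR)
            sock.sendto(b"", ADDR)     # empty datagram marks end-of-stream

        def sudp_recv_like(path: str) -> None:
            """Receive frames and write them to a file until end-of-stream."""
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.bind(ADDR)
            with open(path, "wb") as f:
                while data := sock.recvfrom(FRAME)[0]:
                    f.write(data)

    A real transport would add sequence numbers and loss handling; keeping that framing logic separate from the VDIF application layer is the layer separation the library aims at.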

  3. Research and realization of info-net security controlling system

    NASA Astrophysics Data System (ADS)

    Xu, Tao; Zhang, Wei; Li, Xuhong; Wang, Xia; Pan, Wenwen

    2017-03-01

    This paper introduces some relevant concepts from Network Cybernetics, and we design and realize a new info-net security controlling system based on them. The system can control endpoints, safely store files, encrypt communication, supervise user actions, and display security conditions, in order to realize full-scale security management. Finally, we simulate the functions of the system. The results show that the system can ensure the controllability of users and devices and supervise them in real time. The system can maximize the security of the network and its users.

  4. Workload Characterization and Performance Implications of Large-Scale Blog Servers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeon, Myeongjae; Kim, Youngjae; Hwang, Jeaho

    With the ever-increasing popularity of social network services (SNSs), an understanding of the characteristics of these services and their effects on the behavior of their host servers is critical. However, there has been a lack of research on the workload characterization of servers running SNS applications such as blog services. To fill this void, we empirically characterized real-world web server logs collected from one of the largest South Korean blog hosting sites for 12 consecutive days. The logs consist of more than 96 million HTTP requests and 4.7 TB of network traffic. Our analysis reveals the following: (i) the transfer size of non-multimedia files and blog articles can be modeled using a truncated Pareto distribution and a log-normal distribution, respectively; (ii) user access to blog articles does not show temporal locality, but is strongly biased towards articles posted with image or audio files. We additionally discuss the potential performance improvement from clustering the small files on a blog page into contiguous disk blocks, which benefits from the observed file access patterns. Trace-driven simulations show that, on average, the suggested approach achieves 60.6% better system throughput and reduces the processing time for file access by 30.8% compared to the best performance of the Ext4 file system.
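
    To experiment with such a size model, samples can be drawn from a truncated Pareto by inverse-CDF sampling. The parameters below are placeholders, not the paper's fitted values.

        import random

        def truncated_pareto(alpha: float, lo: float, hi: float) -> float:
            """One draw from Pareto(alpha) truncated to [lo, hi], using the
            inverse of F(x) = (1 - (lo/x)**alpha) / (1 - (lo/hi)**alpha)."""
            u = random.random()
            scale = 1.0 - (lo / hi) ** alpha
            return lo * (1.0 - u * scale) ** (-1.0 / alpha)

        # Placeholder parameters: sizes between 1 KB and 10 MB, alpha = 1.2.
        sizes = [truncated_pareto(1.2, 1e3, 1e7) for _ in range(10_000)]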

  5. Data update in a land information network

    NASA Astrophysics Data System (ADS)

    Mullin, Robin C.

    1988-01-01

    The on-going update of data exchanged in a land information network is examined. In the past, major developments have been undertaken to enable the exchange of data between land information systems. A model of a land information network and the data update process have been developed. Based on these, a functional description of the database and software to perform data updating is presented. A prototype of the data update process was implemented using the ARC/INFO geographic information system. This was used to test four approaches to data updating, i.e., bulk, block, incremental, and alert updates. A bulk update is performed by replacing a complete file with an updated file. A block update requires that the data set be partitioned into blocks. When an update occurs, only the blocks which are affected need to be transferred. An incremental update approach records each feature which is added or deleted and transmits only the features needed to update the copy of the file. An alert is a marker indicating that an update has occurred. It can be placed in a file to warn a user that if he is active in an area containing markers, updated data is available. The four approaches have been tested using a cadastral data set.
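
    The incremental approach, which transmits only the features that changed, reads naturally as a change log applied to a local copy. A hedged sketch (the log format is an assumption for illustration):

        def apply_incremental_update(local_features: dict, change_log: list) -> None:
            """Apply an ordered log of ('add' | 'delete', feature_id, data)
            entries to a local copy of the data set."""
            for op, feature_id, data in change_log:
                if op == "add":
                    local_features[feature_id] = data
                elif op == "delete":
                    local_features.pop(feature_id, None)

        parcels = {"P1": "polygon-1", "P2": "polygon-2"}
        apply_incremental_update(parcels, [("delete", "P2", None),
                                           ("add", "P3", "polygon-3")])
        # parcels == {"P1": "polygon-1", "P3": "polygon-3"}

    Bulk replacement avoids this bookkeeping at the cost of transferring the whole file; block and alert updates sit between the two extremes.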

  6. A design for a new catalog manager and associated file management for the Land Analysis System (LAS)

    NASA Technical Reports Server (NTRS)

    Greenhagen, Cheryl

    1986-01-01

    Due to the large number of different types of files used in an image processing system, a mechanism for file management beyond the bounds of typical operating systems is necessary. The Transportable Applications Executive (TAE) Catalog Manager was written to meet this need. Land Analysis System (LAS) users at the EROS Data Center (EDC) encountered some problems in using the TAE catalog manager, including catalog corruption, networking difficulties, and lack of a reliable tape storage and retrieval capability. These problems, coupled with the complexity of the TAE catalog manager, led to the decision to design a new file management system for LAS, tailored to the needs of the EDC user community. This design effort, which addressed catalog management, label services, associated data management, and enhancements to LAS applications, is described. The new file management design will provide many benefits including improved system integration, increased flexibility, enhanced reliability, enhanced portability, improved performance, and improved maintainability.

  7. 78 FR 53124 - First Responder Network Authority Filing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-28

    ...-229; and WT Docket No. 06-150; DA 13-1775] First Responder Network Authority Filing AGENCY: Federal... public comment on a filing submitted by the First Responder Network Authority (FirstNet) on August 2... Commission provides seven days for public comment on matters raised by the First Responder Network Authority...

  8. Sharing digital micrographs and other data files between computers.

    PubMed

    Entwistle, A

    2004-01-01

    It ought to be easy to exchange digital micrographs and other computer data files with a colleague, even one on another continent. In practice, this often is not the case. The advantages and disadvantages of various methods that are available for exchanging data files between computers are discussed. When possible, data should be transferred through computer networking. When data are to be exchanged locally between computers with similar operating systems, the use of a local area network is recommended. For computers in commercial or academic environments that have dissimilar operating systems or are more widely spaced, the use of FTP is recommended. Failing this, posting the data on a website and transferring them by hypertext transfer protocol is suggested. If peer-to-peer exchange between computers in domestic environments is needed, the use of messenger services such as Microsoft Messenger or Yahoo Messenger is the method of choice. When it is not possible to transfer the data files over the Internet, single-use writable CD-ROMs are the best media for transferring data. If for some reason this is not possible, DVD-R/RW, DVD+R/RW, 100 MB Zip disks, and USB flash media are potentially useful media for exchanging data files.
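
    As a hedged illustration of the recommended FTP route, using Python's standard ftplib (the host, credentials, and file name are placeholders):

        from ftplib import FTP

        # Placeholder host and credentials; substitute your own.
        with FTP("ftp.example.org") as ftp:
            ftp.login("user", "password")
            ftp.cwd("/incoming")
            with open("micrograph_001.tif", "rb") as f:
                # STOR uploads the file; binary mode preserves image data.
                ftp.storbinary("STOR micrograph_001.tif", f)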

  9. Casimage project: a digital teaching files authoring environment.

    PubMed

    Rosset, Antoine; Muller, Henning; Martins, Martina; Dfouni, Natalia; Vallée, Jean-Paul; Ratib, Osman

    2004-04-01

    The goal of the Casimage project is to offer an authoring and editing environment integrated with the Picture Archiving and Communication Systems (PACS) for creating image-based electronic teaching files. This software is based on a client/server architecture allowing users remote access to a central database. This authoring environment allows radiologists to create reference databases and collections of digital images for teaching and research directly from clinical cases being reviewed on PACS diagnostic workstations. The environment includes all tools to create teaching files, including textual description, annotations, and image manipulation. The software also allows users to generate stand-alone CD-ROMs and web-based teaching files to easily share their collections. The system includes a web server compatible with the Medical Imaging Resource Center standard (MIRC, http://mirc.rsna.org) to easily integrate collections in the RSNA web network dedicated to teaching files. This software could be installed on any PACS workstation to allow users to add new cases at any time and anywhere during clinical operations. Several image collections were created with this tool, including a thoracic imaging collection that was subsequently made available on a CD-ROM, on our web site, and through the MIRC network for public access.

  10. Developing Large-Scale Bayesian Networks by Composition: Fault Diagnosis of Electrical Power Systems in Aircraft and Spacecraft

    NASA Technical Reports Server (NTRS)

    Mengshoel, Ole Jakob; Poll, Scott; Kurtoglu, Tolga

    2009-01-01

    This CD contains files that support the talk (see CASI ID 20100021404). There are 24 models that relate to the ADAPT system and 1 Excel worksheet. In the paper an investigation into the use of Bayesian networks to construct large-scale diagnostic systems is described. The high-level specifications, Bayesian networks, clique trees, and arithmetic circuits representing 24 different electrical power systems are described in the talk. The data in the CD are the models of the 24 different power systems.

  11. NASA Langley Research Center's distributed mass storage system

    NASA Technical Reports Server (NTRS)

    Pao, Juliet Z.; Humes, D. Creig

    1993-01-01

    There is a trend in institutions with high performance computing and data management requirements to explore mass storage systems with peripherals directly attached to a high speed network. The Distributed Mass Storage System (DMSS) Project at NASA LaRC is building such a system and expects to put it into production use by the end of 1993. This paper presents the design of the DMSS, some experiences in its development and use, and a performance analysis of its capabilities. The special features of this system are: (1) workstation class file servers running UniTree software; (2) third party I/O; (3) HIPPI network; (4) HIPPI/IPI3 disk array systems; (5) Storage Technology Corporation (STK) ACS 4400 automatic cartridge system; (6) CRAY Research Incorporated (CRI) CRAY Y-MP and CRAY-2 clients; (7) file server redundancy provision; and (8) a transition mechanism from the existent mass storage system to the DMSS.

  12. Multi-Gigabit Free-Space Optical Data Communication and Network System

    DTIC Science & Technology

    2016-04-01

    Infrared (IR), Ultraviolet (UV), Laser Transceiver, Adaptive Beam Tracking, Electronic Attack (EA), Cyber Attack, Multipoint-to-Multipoint Network, Adaptive...Free Space Optical Datalink Timeline: Phase 1 point-to-point demonstration (2012); future: adaptive optics and quantum cascade laser

  13. A trace-driven analysis of name and attribute caching in a distributed system

    NASA Technical Reports Server (NTRS)

    Shirriff, Ken W.; Ousterhout, John K.

    1992-01-01

    This paper presents the results of simulating file name and attribute caching on client machines in a distributed file system. The simulation used trace data gathered on a network of about 40 workstations. Caching was found to be advantageous: a cache on each client containing just 10 directories had a 91 percent hit rate on name lookups. Entry-based name caches (holding individual directory entries) had poorer performance for several reasons, resulting in a maximum hit rate of about 83 percent. File attribute caching obtained a 90 percent hit rate with a cache on each machine of the attributes for 30 files. The simulations show that maintaining cache consistency between machines is not a significant problem; only 1 in 400 name component lookups required invalidation of a remotely cached entry. Process migration to remote machines had little effect on caching. Caching was less successful in heavily shared and modified directories such as /tmp, but there weren't enough references to /tmp overall to affect the results significantly. We estimate that adding name and attribute caching to the Sprite operating system could reduce server load by 36 percent and the number of network packets by 30 percent.
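
    A hedged sketch of the kind of simulation described: replay a trace of (client, directory) name lookups against a small per-client LRU cache and measure the hit rate. The trace format and default capacity are assumptions for illustration.

        from collections import OrderedDict

        def directory_cache_hit_rate(trace, capacity=10):
            """Replay (client, directory) lookups against per-client LRU
            caches holding `capacity` directories; return the hit rate."""
            caches = {}
            hits = total = 0
            for client, directory in trace:
                cache = caches.setdefault(client, OrderedDict())
                total += 1
                if directory in cache:
                    hits += 1
                    cache.move_to_end(directory)   # refresh LRU position
                else:
                    cache[directory] = True
                    if len(cache) > capacity:
                        cache.popitem(last=False)  # evict least recently used
            return hits / total if total else 0.0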

  14. Cloud Engineering Principles and Technology Enablers for Medical Image Processing-as-a-Service

    PubMed Central

    Bao, Shunxing; Plassard, Andrew J.; Landman, Bennett A.; Gokhale, Aniruddha

    2017-01-01

    Traditional in-house, laboratory-based medical imaging studies use hierarchical data structures (e.g., NFS file stores) or databases (e.g., COINS, XNAT) for storage and retrieval. The resulting performance from these approaches is, however, impeded by standard network switches since they can saturate network bandwidth during transfer from storage to processing nodes for even moderate-sized studies. To that end, a cloud-based “medical image processing-as-a-service” offers promise in utilizing the ecosystem of Apache Hadoop, which is a flexible framework providing distributed, scalable, fault tolerant storage and parallel computational modules, and HBase, which is a NoSQL database built atop Hadoop’s distributed file system. Despite this promise, HBase’s load distribution strategy of region split and merge is detrimental to the hierarchical organization of imaging data (e.g., project, subject, session, scan, slice). This paper makes two contributions to address these concerns by describing key cloud engineering principles and technology enhancements we made to the Apache Hadoop ecosystem for medical imaging applications. First, we propose a row-key design for HBase, which is a necessary step that is driven by the hierarchical organization of imaging data. Second, we propose a novel data allocation policy within HBase to strongly enforce collocation of hierarchically related imaging data. The proposed enhancements accelerate data processing by minimizing network usage and localizing processing to machines where the data already exist. Moreover, our approach is amenable to the traditional scan, subject, and project-level analysis procedures, and is compatible with standard command line/scriptable image processing software. Experimental results for an illustrative sample of imaging data reveal that our new HBase policy results in a three-fold time improvement in conversion of classic DICOM to NiFTI file formats when compared with the default HBase region split policy, and nearly a six-fold improvement over a commonly available network file system (NFS) approach even for relatively small file sets. Moreover, file access latency is lower than network attached storage. PMID:28884169

  15. CD-ROM Network Configurations: Good, Better, Best!

    ERIC Educational Resources Information Center

    McClanahan, Gloria

    1996-01-01

    Rates three methods of arranging CD-ROM school networks: (1) peer-to-peer; (2) daisy chain configurations; and (3) dedicated CD-ROM file server. Describes the following network components: the file server, network adapters and wiring, the CD-ROM file server, and CD-ROM drives. Discusses issues involved in assembling these components into a working…

  16. Volume serving and media management in a networked, distributed client/server environment

    NASA Technical Reports Server (NTRS)

    Herring, Ralph H.; Tefend, Linda L.

    1993-01-01

    The E-Systems Modular Automated Storage System (EMASS) is a family of hierarchical mass storage systems providing complete storage/'file space' management. The EMASS volume server provides the flexibility to work with different clients (file servers), different platforms, and different archives with a 'mix and match' capability. The EMASS design considers all file management programs as clients of the volume server system. System storage capacities are tailored to customer needs ranging from small data centers to large central libraries serving multiple users simultaneously. All EMASS hardware is commercial off the shelf (COTS), selected to provide the performance and reliability needed in current and future mass storage solutions. All interfaces use standard commercial protocols and networks suitable to service multiple hosts. EMASS is designed to efficiently store and retrieve in excess of 10,000 terabytes of data. Current clients include CRAY's YMP Model E based Data Migration Facility (DMF), IBM's RS/6000 based Unitree, and CONVEX based EMASS File Server software. The VolSer software provides the capability to accept client or graphical user interface (GUI) commands from the operator's console and translate them to the commands needed to control any configured archive. The VolSer system offers advanced features to enhance media handling and particularly media mounting such as: automated media migration, preferred media placement, drive load leveling, registered MediaClass groupings, and drive pooling.

  17. Fail-over file transfer process

    NASA Technical Reports Server (NTRS)

    Semancik, Susan K. (Inventor); Conger, Annette M. (Inventor)

    2005-01-01

    The present invention provides a fail-over file transfer process to handle data file transfer when the transfer is unsuccessful in order to avoid unnecessary network congestion and enhance reliability in an automated data file transfer system. If a file cannot be delivered after attempting to send the file to a receiver up to a preset number of times, and the receiver has indicated the availability of other backup receiving locations, then the file delivery is automatically attempted to one of the backup receiving locations up to the preset number of times. Failure of the file transfer to one of the backup receiving locations results in a failure notification being sent to the receiver, and the receiver may retrieve the file from the location indicated in the failure notification when ready.
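
    The retry-then-fail-over logic reads directly as a small algorithm. A hedged sketch, in which the transfer and notification calls are placeholders supplied by the caller:

        def deliver(file_path, primary, backups, attempts, send, notify):
            """Try the primary receiver up to `attempts` times, then one
            backup location; on total failure, send a notification naming
            the location from which the receiver may retrieve the file."""
            for _ in range(attempts):
                if send(file_path, primary):
                    return primary
            if backups:
                backup = backups[0]        # a backup receiving location
                for _ in range(attempts):
                    if send(file_path, backup):
                        return backup
                notify(primary, file_path, backup)
            return None

    Capping the attempts before failing over is what keeps a dead receiver from congesting the network with endless retries.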

  18. Peer-to-Peer Content Distribution and Over-The-Top TV: An Analysis of Value Networks

    NASA Astrophysics Data System (ADS)

    de Boever, Jorn; de Grooff, Dirk

    The convergence of Internet and TV, i.e., the Over-The-Top TV (OTT TV) paradigm, created opportunities for P2P content distribution, as these systems reduce bandwidth expenses for media companies. This resulted in the arrival of legal, commercial P2P systems, which increases the importance of studying the economic aspects of these business operations. This chapter examines the value networks of three cases (Kontiki, Zattoo, and BitTorrent) in order to compare how different actors position and distinguish themselves from competitors by creating value in different ways. The value networks of legal systems have different compositions depending on their market orientation: Business-to-Business (B2B) and/or Business-to-Consumer (B2C). In addition, legal systems differ from illegal systems, as legal companies are not inclined to grant control to users, whereas users have most control in the value networks of illegal, self-organizing file sharing communities. In conclusion, the OTT TV paradigm made P2P technology a partner for the media industry rather than an enemy. However, we argue that the lack of control granted to users will remain a seed-bed for the success of illegal P2P file sharing communities.

  19. CFDP for Interplanetary Overlay Network

    NASA Technical Reports Server (NTRS)

    Burleigh, Scott C.

    2011-01-01

    The CCSDS (Consultative Committee for Space Data Systems) File Delivery Protocol for Interplanetary Overlay Network (CFDP-ION) is an implementation of CFDP that uses ION's DTN (delay-tolerant networking) implementation as its UT (unit-data transfer) layer. Because the DTN protocols effect automatic, reliable transmission via multiple relays, CFDP-ION need only satisfy the requirements for Class 1 ("unacknowledged") CFDP. This keeps the implementation small, but without loss of capability. This innovation minimizes processing resources by using zero-copy objects for file data transmission. It runs without modification in VxWorks, Linux, Solaris, and OS/X. As such, this innovation can be used without modification in both flight and ground systems. Integration with DTN enables the CFDP implementation itself to be very simple, and therefore very small. Use of ION infrastructure minimizes consumption of storage and processing resources while maximizing safety.

  20. Dagik: A Quick Look System of the Geospace Data in KML format

    NASA Astrophysics Data System (ADS)

    Yoshida, D.; Saito, A.

    2007-12-01

    Dagik (Daily Geospace data in KML) is a quick-look plot sharing system using Google Earth as a data browser. It provides daily data lists that contain network links to the KML/KMZ files of various geospace data. KML is a markup language to display data on Google Earth, and KMZ is a compressed form of KML. Users can browse the KML/KMZ files with the following procedure: 1) download "dagik.kml" from the Dagik homepage (http://www-step.kugi.kyoto-u.ac.jp/dagik/) and open it with Google Earth, 2) select a date, 3) select the data type to browse. Dagik is a collection of network links to KML/KMZ files. Daily Dagik files are available since 1957, though the early periods contain only geomagnetic index data. There are three activities of Dagik: the first is the generation of the daily data lists, the second is to provide several useful tools, such as observatory lists, and the third is to assist researchers in making KML/KMZ data plots. To make plot browsing easy, there are three rules for the Dagik plot format: 1) one file contains one UT day of data, 2) use a common plot panel size, 3) share the data list. There are three steps to join Dagik as a plot provider: 1) make KML/KMZ files of the data, 2) put the KML/KMZ files on the web, 3) notify the Dagik group of the URL address and description of the files; the KML/KMZ files will then be included in the Dagik data list. As of September 2007, quick looks of several geospace data sets, such as GPS total electron content data, ionosonde data, magnetometer data, satellite FUV imaging data, ground-based airglow data, and satellite footprint data, are available. The system of Dagik is introduced in the presentation.
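
    A hedged sketch of the kind of KML a daily Dagik list is made of: a NetworkLink element pointing at one day's KMZ plot. The URL and names below are invented for the example, not actual Dagik files.

        def network_link_kml(name: str, href: str) -> str:
            """Return a minimal KML document with one NetworkLink."""
            return f"""<?xml version="1.0" encoding="UTF-8"?>
        <kml xmlns="http://www.opengis.net/kml/2.2">
          <NetworkLink>
            <name>{name}</name>
            <Link><href>{href}</href></Link>
          </NetworkLink>
        </kml>"""

        # Hypothetical daily entry: a total electron content plot.
        print(network_link_kml("GPS TEC 2007-09-01",
                               "http://example.org/dagik/20070901_tec.kmz"))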

  1. 78 FR 27261 - Self-Regulatory Organizations; New York Stock Exchange LLC; NYSE MKT LLC; Order Granting Approval...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-09

    ... management systems and routing networks, such member organizations may not be able to fully segregate Retail... SECURITIES AND EXCHANGE COMMISSION [Release No. 34-69513; File Nos. SR-NYSE-2013-08; SR-NYSEMKT... with NYSE, the ``Exchanges'') each filed with the Securities and Exchange Commission (``Commission...

  2. Task Report for Task Authorization 1 for: Technology Demonstration of the Joint Network Defence and Management System (JNDMS) Project

    DTIC Science & Technology

    2009-01-30

    tool written in Java to support the automated creation of simulated subnets. It can be run giving it a subnet, the number of hosts to create, the...network and can also be used to create subnets with specific profiles. Subnet Creator command line: > java -jar SubnetCreator.jar -j [path to client...command: > java -jar jss_client.jar com.mdacorporation.jndms.JSS.Client.JSSBatchClient [file] 5. Software: This is the output file that will store the

  3. The 60 Minute Network Security Guide (First Steps Towards a Secure Network Environment)

    DTIC Science & Technology

    2001-10-16

    default/passwd file in UNIX. Administrators should obtain and run password-guessing programs (i.e., "John the Ripper," "L0phtCrack," and "Crack...system on which it is running, it is a good idea to transfer the encrypted passwords (the dumped SAM database for Windows and the /etc/passwd and /etc...ownership by root and group sys. The /etc/passwd file should have permissions 644 with owner root and group root. Be cracked every month to find
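
    The permission rule quoted above (mode 644 with owner root and group root for /etc/passwd) is easy to verify programmatically. Below is a small Python sketch that performs that check; it assumes a Unix host and only inspects the file, changing nothing.

      import os
      import stat

      def check_passwd(path="/etc/passwd"):
          st = os.stat(path)
          mode = stat.S_IMODE(st.st_mode)
          ok = (mode == 0o644 and st.st_uid == 0 and st.st_gid == 0)
          print(f"{path}: mode={oct(mode)}, uid={st.st_uid}, gid={st.st_gid} -> "
                f"{'OK' if ok else 'does not match the recommended 644 root:root'}")
          return ok

      if __name__ == "__main__":
          check_passwd()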

  4. Mass Storage Systems.

    ERIC Educational Resources Information Center

    Ranade, Sanjay; Schraeder, Jeff

    1991-01-01

    Presents an overview of the mass storage market and discusses mass storage systems as part of computer networks. Systems for personal computers, workstations, minicomputers, and mainframe computers are described; file servers are explained; system integration issues are raised; and future possibilities are suggested. (LRW)

  5. Glossary of Internet Terms.

    ERIC Educational Resources Information Center

    Microcomputers for Information Management, 1995

    1995-01-01

    Provides definitions for 71 terms related to the Internet, including Archie, bulletin board system, cyberspace, e-mail (electronic mail), file transfer protocol, gopher, hypertext, integrated services digital network, local area network, listserv, modem, packet switching, server, telnet, UNIX, WAIS (wide area information servers), and World Wide…

  6. THE EPANET PROGRAMMER'S TOOLKIT FOR ANALYSIS OF WATER DISTRIBUTION SYSTEMS

    EPA Science Inventory

    The EPANET Programmer's Toolkit is a collection of functions that helps simplify computer programming of water distribution network analyses. The functions can be used to read in a pipe network description file, modify selected component properties, run multiple hydraulic and wa...
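
    For readers unfamiliar with the Toolkit's calling pattern, a typical session opens an input network file, runs a hydraulic solution, and closes the project. The sketch below drives the C library from Python via ctypes; the shared-library filename and the example file names are assumptions, while ENopen, ENsolveH, ENreport, and ENclose are long-standing Toolkit entry points.

      import ctypes

      # Assumed library name; on Windows this would be something like epanet2.dll.
      lib = ctypes.CDLL("libepanet2.so")

      def run(inp="net1.inp", rpt="net1.rpt", out="net1.out"):
          # ENopen(inputFile, reportFile, binaryOutputFile) returns 0 on success.
          err = lib.ENopen(inp.encode(), rpt.encode(), out.encode())
          if err != 0:
              raise RuntimeError(f"ENopen failed with code {err}")
          try:
              lib.ENsolveH()   # run a complete hydraulic simulation
              lib.ENreport()   # write results to the report file
          finally:
              lib.ENclose()    # always release the project

      if __name__ == "__main__":
          run()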

  7. XpressWare Installation User guide

    NASA Astrophysics Data System (ADS)

    Duffey, K. P.

    XpressWare is a set of X terminal software, released by Tektronix, Inc., that accommodates the X Window System on a range of host computers. The software comprises boot files (the X server image), configuration files, fonts, and font tools to support the X terminal. The files can be installed on one host or distributed across multiple hosts. The purpose of this guide is to present the system or network administrator with a step-by-step account of how to install XpressWare and how subsequently to configure the X terminals appropriately for the environment in which they operate.

  8. High performance network and channel-based storage

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.

    1991-01-01

    In the traditional mainframe-centered view of a computer system, storage devices are coupled to the system through complex hardware subsystems called input/output (I/O) channels. With the dramatic shift towards workstation-based computing, and its associated client/server model of computation, storage facilities are now found attached to file servers and distributed throughout the network. We discuss the underlying technology trends that are leading to high performance network-based storage, namely advances in networks, storage devices, and I/O controller and server architectures. We review several commercial systems and research prototypes that are leading to a new approach to high performance computing based on network-attached storage.

  9. Tools for Administration of a UNIX-Based Network

    NASA Technical Reports Server (NTRS)

    LeClaire, Stephen; Farrar, Edward

    2004-01-01

    Several computer programs have been developed to enable efficient administration of a large, heterogeneous, UNIX-based computing and communication network that includes a variety of computers connected to a variety of subnetworks. One program provides secure software tools for administrators to create, modify, lock, and delete accounts of specific users. This program also provides tools for users to change their UNIX passwords and log-in shells. These tools check for errors. Another program comprises a client and a server component that, together, provide a secure mechanism to create, modify, and query quota levels on a network file system (NFS) mounted by use of the VERITAS File System software. The client software resides on an internal secure computer with a secure Web interface; one can gain access to the client software from any authorized computer capable of running web-browser software. The server software resides on a UNIX computer configured with the VERITAS software system. Directories where VERITAS quotas are applied are NFS-mounted. Another program is a Web-based, client/server Internet Protocol (IP) address tool that facilitates the maintenance and lookup of information about IP addresses for a network of computers.

  10. TOAD Editor

    NASA Technical Reports Server (NTRS)

    Bingle, Bradford D.; Shea, Anne L.; Hofler, Alicia S.

    1993-01-01

    Transferable Output ASCII Data (TOAD) computer program (LAR-13755) implements format designed to facilitate transfer of data across communication networks and dissimilar host computer systems. Any data file conforming to TOAD format standard called TOAD file. TOAD Editor is interactive software tool for manipulating contents of TOAD files. Commonly used to extract filtered subsets of data for visualization of results of computation. Also offers such user-oriented features as on-line help, clear English error messages, startup file, macroinstructions defined by user, command history, user variables, UNDO features, and full complement of mathematical, statistical, and conversion functions. Companion program, TOAD Gateway (LAR-14484), converts data files from variety of other file formats to that of TOAD. TOAD Editor written in FORTRAN 77.

  11. Mass storage technology in networks

    NASA Astrophysics Data System (ADS)

    Ishii, Katsunori; Takeda, Toru; Itao, Kiyoshi; Kaneko, Reizo

    1990-08-01

    Trends and features of mass storage subsystems in networks are surveyed and their key technologies spotlighted. Storage subsystems are becoming increasingly important in new network systems in which communications and data processing are systematically combined. These systems require a new class of high-performance mass-information storage in order to effectively utilize their processing power. The requirements of high transfer rates, high transaction rates, and large storage capacities, coupled with high functionality, fault tolerance, and flexibility in configuration, are major challenges in storage subsystems. Recent progress in optical disk technology has improved the performance of on-line external memories based on optical disk drives, which are competing with mid-range magnetic disks. Optical disks are more effective than magnetic disks for low-traffic, random-access file storage of multimedia data that requires large capacity, such as archive use and information distribution by ROM disks. Finally, image-coded document file servers for local area network use that employ 130mm rewritable magneto-optical disk subsystems are demonstrated.

  12. A Database of Computer Attacks for the Evaluation of Intrusion Detection Systems

    DTIC Science & Technology

    1999-06-01

    administrator whenever a system binary file (such as the ps, login , or ls program) is modified. Normal users have no legitimate reason to alter these files...development of EMERALD [46], which combines statistical anomaly detection from NIDES with signature verification. Specification-based intrusion detection...the creation of a single host that can act as many hosts. Daemons that provide network services—including telnetd, ftpd, and login — display banners

  13. A Novel Network Attack Audit System based on Multi-Agent Technology

    NASA Astrophysics Data System (ADS)

    Jianping, Wang; Min, Chen; Xianwen, Wu

    A network attack audit system comprising a network attack audit Agent, a host audit Agent, and a management control center audit Agent is proposed. Improved multi-agent technology is employed in the network attack audit Agent, which has achieved satisfactory audit results. The system audits network attacks in depth, and as the network attack audit Agent's functions improve, different attacks will be analyzed and audited more effectively. In addition, the management control center Agent manages and analyzes audit results from the AA (or HA) and audit data in a timely manner. The history files of network packets and host log data should also be audited to find deeper violations that cannot be detected in real time.

  14. Peregrine System Configuration | High-Performance Computing | NREL

    Science.gov Websites

    Compute nodes and storage are connected by a high-speed InfiniBand network. Compute nodes are diskless; directories are mounted on all nodes, along with a file system dedicated to shared projects. Nodes have processors with 64 GB of memory.

  15. CNES-NASA Disruption-Tolerant Networking (DTN) Interoperability

    NASA Technical Reports Server (NTRS)

    Mortensen, Dale; Eddy, Wesley M.; Reinhart, Richard C.; Lassere, Francois

    2014-01-01

    Future missions requiring robust internetworking services may use Delay/Disruption-Tolerant Networking (DTN) technology. CNES, NASA, and other international space agencies are committed to using CCSDS standards in their space and ground mission communications systems. The experiment described in this presentation will evaluate operations concepts, assess system performance, and advance technology readiness for the use of DTN protocols in conjunction with CCSDS ground systems, CCSDS data links, and CCSDS file transfer applications.

  16. 76 FR 47576 - DCP Intrastate Network, LLC; Notice of Filing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-05

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. PR11-120-000] DCP Intrastate Network, LLC; Notice of Filing Take notice that on July 26, 2011, DCP Intrastate Network, LLC filed to provide notice of its cancellation of its Statement of Operating Conditions for Interstate Gas...

  17. 76 FR 9012 - DCP Intrastate Network, LLC; Notice of Filing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-16

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. PR11-85-000] DCP Intrastate Network, LLC; Notice of Filing Take notice that on February 1, 2011, DCP Intrastate Network, LLC (DCPIN) filed to provide notice of its withdrawal of rates for transportation service under Section 311 of the...

  18. 12 CFR 235.8 - Reporting requirements and record retention.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    .... 235.8 Section 235.8 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED) BOARD OF GOVERNORS OF THE... the requirements of this part under § 235.5(a) and each payment card network shall file a report with the Board in accordance with this section. (b) Report. Each entity required to file a report with the...

  19. 12 CFR 235.8 - Reporting requirements and record retention.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    .... 235.8 Section 235.8 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED) BOARD OF GOVERNORS OF THE... the requirements of this part under § 235.5(a) and each payment card network shall file a report with the Board in accordance with this section. (b) Report. Each entity required to file a report with the...

  20. 12 CFR 235.8 - Reporting requirements and record retention.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    .... 235.8 Section 235.8 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED) BOARD OF GOVERNORS OF THE... the requirements of this part under § 235.5(a) and each payment card network shall file a report with the Board in accordance with this section. (b) Report. Each entity required to file a report with the...

  1. 78 FR 13915 - Self-Regulatory Organizations; BATS Y-Exchange, Inc.; Notice of Filing of Proposed Rule Change...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-01

    ....'' The Exchange further understands that limitations in order management systems and routing networks... SECURITIES AND EXCHANGE COMMISSION [Release No. 34-68975; File No. SR-BYX-2013-008] Self..., 2013. Pursuant to Section 19(b)(1) of the Securities Exchange Act of 1934 (the ``Act''),\\1\\ and Rule...

  2. Mass-storage management for distributed image/video archives

    NASA Astrophysics Data System (ADS)

    Franchi, Santina; Guarda, Roberto; Prampolini, Franco

    1993-04-01

    The realization of an image/video database requires a specific design for both database structures and mass storage management. This issue was addressed in the project of the digital image/video database system designed at the IBM SEMEA Scientific & Technical Solution Center. Proper database structures have been defined to catalog the image/video coding technique with its related parameters and the description of image/video contents. User workstations and servers are distributed along a local area network. Image/video files are not managed directly by the DBMS server. Because of their large size, they are stored outside the database on network devices. The database contains the pointers to the image/video files and the description of the storage devices. The system can use different kinds of storage media, organized in a hierarchical structure. Three levels of functions are available to manage the storage resources. The functions of the lower level provide media management; they allow cataloging devices and modifying device status and device network location. The medium level manages image/video files on a physical basis, handling file migration between high-capacity media and low-access-time media. The functions of the upper level work on image/video files on a logical basis, as they archive, move, and copy image/video data selected by user-defined queries. These functions are used to support the implementation of a storage management strategy. The database information about the characteristics of both storage devices and coding techniques is used by the third-level functions to meet delivery/visualization requirements and to reduce archiving costs.

  3. Networking CD-ROMs: A Tutorial Introduction.

    ERIC Educational Resources Information Center

    Perone, Karen

    1996-01-01

    Provides an introduction to CD-ROM networking. Highlights include LAN (local area network) architectures for CD-ROM networks, peer-to-peer networks, shared file and dedicated file servers, commercial software/vendor solutions, problems, multiple hardware platforms, and multimedia. Six figures illustrate network architectures and a sidebar contains…

  4. High-performance mass storage system for workstations

    NASA Technical Reports Server (NTRS)

    Chiang, T.; Tang, Y.; Gupta, L.; Cooperman, S.

    1993-01-01

    Reduced Instruction Set Computer (RISC) workstations and Personal Computers (PCs) are very popular tools for office automation, command and control, scientific analysis, database management, and many other applications. However, when running Input/Output (I/O) intensive applications, RISC workstations and PCs are often overburdened with the tasks of collecting, staging, storing, and distributing data. Also, even with standard high-performance peripherals and storage devices, the I/O function can still be a common bottleneck. Therefore, the high-performance mass storage system developed by Loral AeroSys' Independent Research and Development (IR&D) engineers can offload I/O-related functions from a RISC workstation and provide high-performance I/O functions and external interfaces. The high-performance mass storage system has the capabilities to ingest high-speed real-time data, perform signal or image processing, and stage, archive, and distribute the data. This mass storage system uses a hierarchical storage structure, thus reducing the total data storage cost while maintaining high I/O performance. The high-performance mass storage system is a network of low-cost parallel processors and storage devices. The nodes in the network have special I/O functions such as SCSI controller, Ethernet controller, gateway controller, RS232 controller, IEEE488 controller, and digital/analog converter. The nodes are interconnected through high-speed direct memory access links to form a network. The topology of the network is easily reconfigurable to maximize system throughput for various applications. This high-performance mass storage system takes advantage of a 'busless' architecture for maximum expandability. The mass storage system consists of magnetic disks, a WORM optical disk jukebox, and an 8mm helical scan tape to form a hierarchical storage structure. Commonly used files are kept on magnetic disk for fast retrieval. The optical disks are used as archive media, and the tapes are used as backup media. The storage system is managed by the IEEE mass storage reference model-based UniTree software package. UniTree keeps track of all files in the system, automatically migrates lesser-used files to archive media, and stages files when needed by the system. The user can access the files without knowledge of their physical location. The high-performance mass storage system developed by Loral AeroSys significantly boosts system I/O performance and reduces the overall data storage cost. This storage system provides a highly flexible and cost-effective architecture for a variety of applications (e.g., real-time data acquisition with signal and image processing requirements, long-term data archiving and distribution, and image analysis and enhancement).

  5. The Aerospace Energy Systems Laboratory: A BITBUS networking application

    NASA Technical Reports Server (NTRS)

    Glover, Richard D.; Oneill-Rood, Nora

    1989-01-01

    The NASA Ames-Dryden Flight Research Facility developed a computerized aircraft battery servicing facility called the Aerospace Energy Systems Laboratory (AESL). This system employs distributed processing with communications provided by a 2.4-megabit BITBUS local area network. Customized handlers provide real-time status, remote command, and file transfer protocols between a central system running the iRMX-II operating system and ten slave stations running the iRMX-I operating system. The hardware configuration and software components required to implement this BITBUS application are described.

  6. KNBD: A Remote Kernel Block Server for Linux

    NASA Technical Reports Server (NTRS)

    Becker, Jeff

    1999-01-01

    I am developing a prototype of a Linux remote disk block server whose purpose is to serve as a lower-level component of a parallel file system. Parallel file systems are an important component of high-performance supercomputers and clusters. Although supercomputer vendors such as SGI and IBM have their own custom solutions, there has been a void, and hence a demand, for such a system on Beowulf-type PC clusters. Recently, the Parallel Virtual File System (PVFS) project at Clemson University has begun to address this need (1). Although their system provides much of the functionality of (and indeed was inspired by) the equivalent file systems in the commercial supercomputer market, their system runs entirely in user space. Migrating their I/O services to the kernel could provide a performance boost by obviating the need for expensive system calls. Thanks to Pavel Machek, the Linux kernel has provided the network block device (2) with kernels 2.1.101 and later. You can configure this block device to redirect reads and writes to a remote machine's disk. This can be used as a building block for constructing a striped file system across several nodes.
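
    To make the idea of a remote block server concrete, here is a toy Python sketch: the server exposes a flat file as a "disk" and answers fixed-format read/write requests over TCP. It illustrates the concept only; it is not the Linux network block device wire protocol, and the header layout and port number are invented for this example.

      import socket
      import struct

      HDR = struct.Struct(">cQI")  # op ('R' or 'W'), byte offset, length

      def serve(backing="blocks.img", port=10809):
          with socket.create_server(("", port)) as srv, open(backing, "r+b") as dev:
              conn, _ = srv.accept()
              with conn:
                  while True:
                      hdr = conn.recv(HDR.size, socket.MSG_WAITALL)
                      if len(hdr) < HDR.size:
                          break  # client closed the connection
                      op, offset, length = HDR.unpack(hdr)
                      dev.seek(offset)
                      if op == b"R":
                          conn.sendall(dev.read(length))
                      else:  # 'W': a payload of `length` bytes follows the header
                          dev.write(conn.recv(length, socket.MSG_WAITALL))

      if __name__ == "__main__":
          serve()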

  7. The design and implementation of the HY-1B Product Archive System

    NASA Astrophysics Data System (ADS)

    Liu, Shibin; Liu, Wei; Peng, Hailong

    2010-11-01

    Product Archive System (PAS), as a background system, is the core part of the Product Archive and Distribution System (PADS), which is the center for data management of the Ground Application System of the HY-1B satellite hosted by the National Satellite Ocean Application Service of China. PAS integrates a series of up-to-date methods and technologies, such as a suitable data transmittal mode, flexible configuration files, and log information, in order to give the system several desirable characteristics, such as ease of maintenance, stability, and minimal complexity. This paper describes the seven major components of the PAS (Network Communicator module, File Collector module, File Copy module, Task Collector module, Metadata Extractor module, Product data Archive module, Metadata catalogue import module) and some of the unique features of the system, as well as the technical problems encountered and resolved.

  8. Documentation of a daily mean stream temperature module—An enhancement to the Precipitation-Runoff Modeling System

    USGS Publications Warehouse

    Sanders, Michael J.; Markstrom, Steven L.; Regan, R. Steven; Atkinson, R. Dwight

    2017-09-15

    A module for simulation of daily mean water temperature in a network of stream segments has been developed as an enhancement to the U.S. Geological Survey Precipitation-Runoff Modeling System (PRMS). This new module is based on the U.S. Fish and Wildlife Service Stream Network Temperature model, a mechanistic, one-dimensional heat transport model, and is integrated in PRMS. Stream-water temperature simulation is activated by selecting the appropriate input flags in the PRMS Control File and by providing the necessary additional inputs in standard PRMS input files. This report includes a comprehensive discussion of the methods relevant to the stream temperature calculations and detailed instructions for model input preparation.

  9. Using an Inductive Learning Algorithm to Improve Antibody Generation in a Single Packet Computer Defense Immune System

    DTIC Science & Technology

    2002-03-01

    allow the network to perform as a much better classifier. Another disadvantage of neural networks is that it is difficult to know what is going on...could be shown to have a cause and effect relationship in producing bad antibodies, the next phase of the research was destined to go more smoothly...[fragment of a table listing "Number Good" and "Number Bad" counts for the training and testing files]

  10. XRootd, disk-based, caching proxy for optimization of data access, data placement and data replication

    NASA Astrophysics Data System (ADS)

    Bauerdick, L. A. T.; Bloom, K.; Bockelman, B.; Bradley, D. C.; Dasu, S.; Dost, J. M.; Sfiligoi, I.; Tadel, A.; Tadel, M.; Wuerthwein, F.; Yagil, A.; Cms Collaboration

    2014-06-01

    Following the success of the XRootd-based US CMS data federation, the AAA project investigated extensions of the federation architecture by developing two sample implementations of an XRootd, disk-based, caching proxy. The first one simply starts fetching a whole file as soon as a file-open request is received and is suitable when completely random file access is expected, or when it is already known that the whole file will be read. The second implementation supports on-demand downloading of partial files. Extensions to the Hadoop Distributed File System have been developed to allow for an immediate fallback to network access when local HDFS storage fails to provide the requested block. Both cache implementations are in pre-production testing at UCSD.
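
    The first implementation's behavior (fetch the whole file on the first open, then serve every later access from local disk) can be sketched in a few lines. The Python sketch below uses plain HTTP in place of the XRootd protocol, and the cache directory and origin URL are assumptions for illustration.

      import os
      import urllib.request

      CACHE_DIR = "/tmp/xcache-demo"   # assumed local cache location

      def cached_open(origin_url):
          """On first access, download the whole file; afterwards serve it locally."""
          os.makedirs(CACHE_DIR, exist_ok=True)
          local = os.path.join(CACHE_DIR, os.path.basename(origin_url))
          if not os.path.exists(local):
              # Whole-file prefetch, as in the first sample implementation.
              urllib.request.urlretrieve(origin_url, local)
          return open(local, "rb")

      with cached_open("https://example.org/data/run2012.root") as f:
          print(f"read {len(f.read(1024))} bytes from the local cache copy")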

  11. Implementing MANETS in Android based environment using Wi-Fi direct

    NASA Astrophysics Data System (ADS)

    Waqas, Muhammad; Babar, Mohammad Inayatullah Khan; Zafar, Mohammad Haseeb

    2015-05-01

    Packet loss occurs in real-time voice transmission over wireless broadcast ad-hoc networks, which creates disruptions in sound. The basic objective of this research is to design a wireless ad-hoc network based on two Android devices by using the Wireless Fidelity (Wi-Fi) Direct Application Programming Interface (API) and to apply a network codec, the Reed-Solomon code. The network codec is used to encode the data of a music WAV file and recover any lost packets; packets are dropped by a loss module at the transmitting device so that performance can be analyzed, with the objective of retrieving the original file at the receiving device using the network codec. This resulted in faster transmission of the files despite dropped packets. In the end, both devices held the original formatted music files, with a complete performance analysis based on the transmission delay.
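
    As an illustration of how a Reed-Solomon codec recovers damaged audio data, the sketch below uses the third-party reedsolo Python package (an assumption made for illustration; the paper's Android implementation is not this code). Ten parity symbols per codeword allow recovery from up to five corrupted bytes.

      # pip install reedsolo  (assumed third-party package)
      from reedsolo import RSCodec

      rsc = RSCodec(10)                 # 10 parity symbols per codeword
      payload = b"wav-file-chunk-0001"  # stand-in for a slice of the music file

      encoded = rsc.encode(payload)

      # Simulate transmission damage: corrupt a few bytes of the codeword.
      damaged = bytearray(encoded)
      damaged[0] ^= 0xFF
      damaged[5] ^= 0xFF

      # decode() returns (decoded_msg, full_codeword, errata_positions)
      # in recent reedsolo releases.
      recovered = rsc.decode(bytes(damaged))[0]
      assert recovered == payload
      print("payload recovered despite corrupted bytes")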

  12. The Defense Message System and the U.S. Coast Guard

    DTIC Science & Technology

    1992-06-01

    these mail services, the Internet also provides a File Transfer Protocol (FTP) and remote login between host computers (TELNET) capabilities. 17 [Ref...the Joint Maritime Intelligence Element (JMIE), Zincdust, and Emerald . [Ref. 27] 4. Secure Data Network The Coast Guard’s Secure Data Network (SDN

  13. 76 FR 58491 - Combined Notice of Filings #2

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-21

    ... tariff filing per 35.12: BPA Interconnection Agreement--Orcas Island to be effective 10/ 1/2011. Filed... Sound Energy, Inc. submits tariff filing per 35.12: BPA Network Integratn TX Service Agreemt for Orcas...: BPA Network Operating Agreement for Orcas, Original Service Agreemt No 527 to be effective 10/1/2011...

  14. The key image and case log application: new radiology software for teaching file creation and case logging that incorporates elements of a social network.

    PubMed

    Rowe, Steven P; Siddiqui, Adeel; Bonekamp, David

    2014-07-01

    To create novel radiology key image software that is easy to use for novice users, incorporates elements adapted from social networking Web sites, facilitates resident and fellow education, and can serve as the engine for departmental sharing of interesting cases and follow-up studies. Using open-source programming languages and software, radiology key image software (the key image and case log application, KICLA) was developed. This system uses a lightweight interface with the institutional picture archiving and communication systems and enables the storage of key images, image series, and cine clips. It was designed to operate with minimal disruption to the radiologists' daily workflow. Many features of the user interface were inspired by social networking Web sites, including image organization into private or public folders, flexible sharing with other users, and integration of departmental teaching files into the system. We also review the performance, usage, and acceptance of this novel system. KICLA was implemented at our institution and achieved widespread popularity among radiologists. A large number of key images have been transmitted to the system since it became available. After this early experience period, the most commonly encountered radiologic modalities are represented. A survey distributed to users revealed that most of the respondents found the system easy to use (89%) and fast at allowing them to record interesting cases (100%). One hundred percent of respondents also stated that they would recommend a system such as KICLA to their colleagues. The system described herein represents a significant upgrade to the Digital Imaging and Communications in Medicine teaching file paradigm, with efforts made to maximize its ease of use and the inclusion of characteristics inspired by social networking Web sites that give the system additional functionality, such as individual case logging. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.

  15. 78 FR 7842 - Self-Regulatory Organizations; NYSE MKT LLC; Notice of Filing of Proposed Rule Change Amending...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-04

    ... limitations in order management systems and routing networks used by such member organizations may make it... SECURITIES AND EXCHANGE COMMISSION [Release No. 34-68746; File No. SR-NYSEMKT-2013-07] Self.... Pursuant to Section 19(b)(1) \\1\\ of the Securities Exchange Act of 1934 (the ``Act'') \\2\\ and Rule 19b-4...

  16. 78 FR 61433 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing and Immediate Effectiveness of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-03

    ... that limitations in order management systems and routing networks used by such ETP Holders may make it... SECURITIES AND EXCHANGE COMMISSION [Release No. 34-70565; File No. SR-NYSEARCA-2013-98] Self... September 30, 2013. Pursuant to Section 19(b)(1) \\1\\ of the Securities Exchange Act of 1934 (the ``Act'') \\2...

  17. BSD Portals for LINUX 2.0

    NASA Technical Reports Server (NTRS)

    McNab, A. David; woo, Alex (Technical Monitor)

    1999-01-01

    Portals, an experimental feature of 4.4BSD, extend the file system name space by exporting certain open() requests to a user-space daemon. A portal daemon is mounted into the file name space as if it were a standard file system. When the kernel resolves a pathname and encounters a portal mount point, the remainder of the path is passed to the portal daemon. Depending on the portal "pathname" and the daemon's configuration, some type of open(2) is performed. The resulting file descriptor is passed back to the kernel, which eventually returns it to the user, to whom it appears that a "normal" open has occurred. A proxy portalfs file system is responsible for kernel interaction with the daemon. The overall effect is that the portal daemon performs an open(2) on behalf of the kernel, possibly hiding substantial complexity from the calling process. One particularly useful application is implementing a connection service that allows simple scripts to open network sockets. This paper describes the implementation of portals for LINUX 2.0.
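
    The essential trick in portals (a daemon performs the open on someone else's behalf and hands back the resulting descriptor) can be imitated in user space with SCM_RIGHTS file-descriptor passing. In the Python sketch below (which needs Python 3.9+ for socket.send_fds/recv_fds), a forked "daemon" opens a requested path and passes the descriptor to a "client" over a Unix socket pair; this is an analogy to the portal mechanism, not the 4.4BSD kernel code.

      import os
      import socket

      # A socketpair stands in for the kernel<->daemon channel.
      client, daemon = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

      pid = os.fork()
      if pid == 0:
          # "Portal daemon": open the requested path, pass back the descriptor.
          client.close()
          path = daemon.recv(1024).decode()
          fd = os.open(path, os.O_RDONLY)
          socket.send_fds(daemon, [b"ok"], [fd])   # SCM_RIGHTS transfer
          os._exit(0)

      # "Client": ask the daemon to open a path on our behalf.
      daemon.close()
      client.sendall(b"/etc/hostname")
      msg, fds, flags, addr = socket.recv_fds(client, 1024, maxfds=1)
      with os.fdopen(fds[0], "rb") as f:
          print(f.read())
      os.waitpid(pid, 0)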

  18. Composition and Realization of Source-to-Sink High-Performance Flows: File Systems, Storage, Hosts, LAN and WAN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Chase Qishi

    A number of Department of Energy (DOE) science applications, involving exascale computing systems and large experimental facilities, are expected to generate large volumes of data, in the range of petabytes to exabytes, which will be transported over wide-area networks for the purpose of storage, visualization, and analysis. To support such capabilities, significant progress has been made in various components including the deployment of 100 Gbps networks with future 1 Tbps bandwidth, increases in end-host capabilities with multiple cores and buses, capacity improvements in large disk arrays, and deployment of parallel file systems such as Lustre and GPFS. High-performance source-to-sink data flows must be composed of these component systems, which requires significant optimizations of the storage-to-host data and execution paths to match the edge and long-haul network connections. In particular, end systems are currently supported by 10-40 Gbps Network Interface Cards (NIC) and 8-32 Gbps storage Host Channel Adapters (HCAs), which carry the individual flows that collectively must reach network speeds of 100 Gbps and higher. Indeed, such data flows must be synthesized using multicore, multibus hosts connected to high-performance storage systems on one side and to the network on the other side. Current experimental results show that the constituent flows must be optimally composed and preserved from storage systems, across the hosts and the networks with minimal interference. Furthermore, such a capability must be made available transparently to the science users without placing undue demands on them to account for the details of underlying systems and networks. And, this task is expected to become even more complex in the future due to the increasing sophistication of hosts, storage systems, and networks that constitute the high-performance flows. The objectives of this proposal are to (1) develop and test the component technologies and their synthesis methods to achieve source-to-sink high-performance flows, and (2) develop tools that provide these capabilities through simple interfaces to users and applications. In terms of the former, we propose to develop (1) optimization methods that align and transition multiple storage flows to multiple network flows on multicore, multibus hosts; and (2) edge and long-haul network path realization and maintenance using advanced provisioning methods including OSCARS and OpenFlow. We also propose synthesis methods that combine these individual technologies to compose high-performance flows using a collection of constituent storage-network flows, and realize them across the storage and local network connections as well as long-haul connections. We propose to develop automated user tools that profile the hosts, storage systems, and network connections; compose the source-to-sink complex flows; and set up and maintain the needed network connections. These solutions will be tested using (1) 100 Gbps connection(s) between Oak Ridge National Laboratory (ORNL) and Argonne National Laboratory (ANL) with storage systems supported by Lustre and GPFS file systems with an asymmetric connection to University of Memphis (UM); (2) ORNL testbed with multicore and multibus hosts, switches with OpenFlow capabilities, and network emulators; and (3) 100 Gbps connections from ESnet and their OpenFlow testbed, and other experimental connections. This proposal brings together the expertise and facilities of the two national laboratories, ORNL and ANL, and UM. It also represents a collaboration between DOE and the Department of Defense (DOD) projects at ORNL by sharing technical expertise and personnel costs, and leveraging the existing DOD Extreme Scale Systems Center (ESSC) facilities at ORNL.

  19. Lessons Learned in Deploying the World's Largest Scale Lustre File System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dillow, David A; Fuller, Douglas; Wang, Feiyi

    2010-01-01

    The Spider system at the Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) is the world's largest-scale Lustre parallel file system. Envisioned as a shared parallel file system capable of delivering both the bandwidth and capacity requirements of the OLCF's diverse computational environment, the project had a number of ambitious goals. To support the workloads of the OLCF's diverse computational platforms, the aggregate performance and storage capacity of Spider exceed those of our previously deployed systems by factors of 6x (240 GB/sec) and 17x (10 petabytes), respectively. Furthermore, Spider supports over 26,000 clients concurrently accessing the file system, which exceeds our previously deployed systems by nearly 4x. In addition to these scalability challenges, moving to a center-wide shared file system required dramatically improved resiliency and fault-tolerance mechanisms. This paper details our efforts in designing, deploying, and operating Spider. Through a phased approach of research and development, prototyping, deployment, and transition to operations, this work has resulted in a number of insights into large-scale parallel file system architectures, from both the design and the operational perspectives. We present in this paper our solutions to issues such as network congestion, performance baselining and evaluation, file system journaling overheads, and high availability in a system with tens of thousands of components. We also discuss areas of continued challenges, such as stressed metadata performance and the need for file system quality of service, along with our efforts to address them. Finally, operational aspects of managing a system of this scale are discussed along with real-world data and observations.

  20. An Optimal Mobile Service for Telecare Data Synchronization using a Role-based Access Control Model and Mobile Peer-to-Peer Technology.

    PubMed

    Ke, Chih-Kun; Lin, Zheng-Hua

    2015-09-01

    The progress of information and communication technologies (ICT) has promoted the development of healthcare, enabling the exchange of resources and services between organizations. Organizations want to integrate mobile devices into their hospital information systems (HIS) because of the convenience to employees, who are then able to perform specific healthcare processes from any location. The collection and merging of healthcare data from discrete mobile devices is worth exploring for further use, especially in remote districts without a public data network (PDN) to connect to the HIS. In this study, we propose an optimal mobile service that automatically synchronizes telecare file resources among discrete mobile devices. The proposed service employs several technical methods: a role-based access control model defines the mechanism for accessing telecare file resources; a symmetric data encryption method protects telecare file resources transmitted over a mobile peer-to-peer network; and the multi-criteria decision analysis method ELECTRE (Elimination Et Choice Translating Reality) evaluates multiple criteria of the candidate mobile devices to determine a ranking order, which optimizes the synchronization of telecare file resources among the devices. A prototype system was implemented to examine the proposed mobile service. The results of the experiment show that the proposed service can automatically and effectively synchronize telecare file resources among discrete mobile devices. The contribution of this experiment is an optimal mobile service that enhances the security of telecare file resource synchronization and strengthens an organization's mobility.
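
    To give a flavor of how an outranking method in the ELECTRE family orders candidate devices, the sketch below computes a weighted concordance score for each pair of alternatives and ranks by net score. This is a heavily simplified, hypothetical illustration: full ELECTRE also involves discordance tests and thresholds that are omitted here, and the criteria, scores, and weights are invented.

      # Hypothetical device scores per criterion (higher is better):
      # battery level, free storage, link quality.
      devices = {
          "phoneA": (0.9, 0.4, 0.7),
          "phoneB": (0.5, 0.8, 0.6),
          "tablet": (0.6, 0.9, 0.3),
      }
      weights = (0.5, 0.3, 0.2)  # invented criterion weights

      def concordance(a, b):
          """Sum the weights of criteria on which a is at least as good as b."""
          return sum(w for w, x, y in zip(weights, devices[a], devices[b]) if x >= y)

      def rank():
          names = list(devices)
          net = {a: sum(concordance(a, b) - concordance(b, a)
                        for b in names if b != a) for a in names}
          return sorted(names, key=net.get, reverse=True)

      print(rank())  # best synchronization source first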

  1. The Standard Autonomous File Server, A Customized, Off-the-Shelf Success Story

    NASA Technical Reports Server (NTRS)

    Semancik, Susan K.; Conger, Annette M.; Obenschain, Arthur F. (Technical Monitor)

    2001-01-01

    The Standard Autonomous File Server (SAFS), which includes both off-the-shelf hardware and software, uses an improved automated file transfer process to provide quicker, more reliable, prioritized file distribution for customers of near real-time data without interfering with the assets involved in the acquisition and processing of the data. It operates as a stand-alone solution, monitoring itself and providing an automated fail-over process to enhance reliability. This paper describes the unique problems and lessons learned both during the COTS selection and integration into SAFS and during the system's first year of operation in support of NASA's satellite ground network. COTS was the key factor in allowing the two-person development team to deploy systems in less than a year, meeting the required launch schedule. The SAFS system has been so successful that it is becoming a NASA standard resource, leading to its nomination for NASA's Software of the Year Award in 1999.

  2. Improving Reliability in a Stochastic Communication Network

    DTIC Science & Technology

    1990-12-01

    and GINO. In addition, the following computers were used: a Sun 386i workstation, a Digital Equipment Corporation (DEC) 11/785 miniframe , and a DEC...operating system. The DEC 11/785 miniframe used in the experiment was running Unix Version 4.3 (Berkley System Domain). Maxflo was run on the DEC 11/785...the file was still called Mod- ifyl.for). 4. The Maxflo program was started on the DEC 11/785 miniframe . 5. At this time the Convert.max file, created

  3. Folksonomical P2P File Sharing Networks Using Vectorized KANSEI Information as Search Tags

    NASA Astrophysics Data System (ADS)

    Ohnishi, Kei; Yoshida, Kaori; Oie, Yuji

    We present the concept of folksonomical peer-to-peer (P2P) file sharing networks that allow participants (peers) to freely assign structured search tags to files. These networks are similar to folksonomies in the present Web from the point of view that users assign search tags to information distributed over a network. As a concrete example, we consider an unstructured P2P network using vectorized Kansei (human sensitivity) information as structured search tags for file search. Vectorized Kansei information as search tags indicates what participants feel about their files and is assigned by the participant to each of their files. A search query has the same form of search tags and indicates what participants want to feel about files that they will eventually obtain. A method that enables file search using vectorized Kansei information is the Kansei query-forwarding method, which probabilistically propagates a search query to peers that are likely to hold more files having search tags similar to the query. The similarity between the search query and the search tags is measured in terms of their dot product. The simulation experiments examine whether the Kansei query-forwarding method can provide equal search performance for all peers in a network in which only the Kansei information and the tendency with respect to file collection differ among the peers. The simulation results show that the Kansei query-forwarding method and a random-walk-based query-forwarding method, used for comparison, work effectively in different situations and are complementary. Furthermore, the Kansei query-forwarding method is shown, through simulations, to be superior or equal to the random-walk-based one in terms of search speed.
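
    The forwarding rule described above (propagate a query toward peers whose tag vectors have a high dot product with the query vector) can be written compactly. The Python sketch below is a schematic rendition using invented peer data; it is not the authors' simulator.

      import random

      def dot(u, v):
          return sum(a * b for a, b in zip(u, v))

      def forward(query, neighbor_tags):
          """Pick a neighbor with probability proportional to query/tag similarity."""
          scores = {peer: max(dot(query, tags), 0.0)
                    for peer, tags in neighbor_tags.items()}
          total = sum(scores.values())
          if total == 0:
              return random.choice(list(neighbor_tags))  # fall back to a random walk
          r = random.uniform(0, total)
          for peer, s in scores.items():
              r -= s
              if r <= 0:
                  break
          return peer

      # Invented 3-dimensional Kansei vectors for two neighboring peers.
      neighbors = {"peer1": (0.9, 0.1, 0.3), "peer2": (0.2, 0.8, 0.5)}
      print(forward((1.0, 0.0, 0.2), neighbors))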

  4. The Data Base and Decision Making in Public Schools.

    ERIC Educational Resources Information Center

    Hedges, William D.

    1984-01-01

    Describes generic types of databases--file management systems, relational database management systems, and network/hierarchical database management systems--with their respective strengths and weaknesses; discusses factors to be considered in determining whether a database is desirable; and provides evaluative criteria for use in choosing…

  5. [Improving the physician-dental surgeon relationship to improve patient care].

    PubMed

    Tenenbaum, Annabelle; Folliguet, Marysette; Berdougo, Brice; Hervé, Christian; Moutel, Grégoire

    2008-04-01

    This study had two aims: to assess the nature of the relationship between general practitioners (GPs) and dental surgeons in relation to patient care and to evaluate qualitatively their interest in the changes that health networks and shared patient medical files could bring. Questionnaires were completed by 12 GPs belonging to ASDES, a private practitioner-hospital health network that seeks to promote a partnership between physicians and dental surgeons, and by 13 private dental surgeons in the network catchment area. The GPs and dentists had quite different perceptions of their relationship. Most dentists rated their relationship with GPs as "good" to "excellent" and did not wish to modify it, while GPs rated their relationship with dentists as nonexistent and expressed a desire to change the situation. Some GPs and some dentists supported data exchange by sharing personal medical files through the network. Many obstacles hinder communication between GPs and dentists. There is insufficient coordination between professionals. Health professionals must be made aware of how changes in the health care system (health networks, personal medical files, etc) can help to provide patients with optimal care. Technical innovations in medicine will not be beneficial to patients unless medical education and training begins to include interdisciplinary and holistic approaches to health care and preventive care.

  6. Dial-in flow cytometry data analysis.

    PubMed

    Battye, Francis L

    2002-02-01

    As listmode data files continue to grow larger, access via network connections of any kind becomes more and more troublesome because of the enormous traffic generated. The limited speed of transmission via modem makes analysis almost impossible. This unit presents a solution to these problems, one that involves installing, at the central storage facility, a small computer program called a Web servlet. Operating in concert with a Web server, the servlet assists the analysis by extracting the display array from the data file and organizing its transmission over the network to a remote client program that creates the data display. The author discusses a recent implementation of this solution and the results for model transmission of two typical data files. The system greatly speeds access to remotely stored data yet retains the flexibility of manipulation expected with local access.
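
    The server-side extraction idea (send the small display array rather than the whole listmode file) can be sketched with Python's standard HTTP server standing in for the Java servlet. Everything here is an assumption for illustration: the events are random stand-ins for a listmode file, and the 64x64 display binning is arbitrary.

      import json
      import random
      from http.server import BaseHTTPRequestHandler, HTTPServer

      BINS = 64  # assumed display resolution

      # Stand-in for a large listmode data file: (x, y) event pairs in [0, 1).
      EVENTS = [(random.random(), random.random()) for _ in range(100_000)]

      class DisplayArrayHandler(BaseHTTPRequestHandler):
          def do_GET(self):
              # Bin the events server-side; only the small 2-D histogram
              # travels over the network, not the full listmode file.
              hist = [[0] * BINS for _ in range(BINS)]
              for x, y in EVENTS:
                  hist[int(y * BINS)][int(x * BINS)] += 1
              body = json.dumps(hist).encode()
              self.send_response(200)
              self.send_header("Content-Type", "application/json")
              self.end_headers()
              self.wfile.write(body)

      if __name__ == "__main__":
          HTTPServer(("localhost", 8080), DisplayArrayHandler).serve_forever()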

  7. Integrating data from biological experiments into metabolic networks with the DBE information system.

    PubMed

    Borisjuk, Ljudmilla; Hajirezaei, Mohammad-Reza; Klukas, Christian; Rolletschek, Hardy; Schreiber, Falk

    2005-01-01

    Modern 'omics' technologies result in huge amounts of data about life processes. For analysis and data-mining purposes, this data has to be considered in the context of the underlying biological networks. This work presents an approach for integrating data from biological experiments into metabolic networks by mapping the data onto network elements and visualising the data-enriched networks automatically. This methodology is implemented in DBE, an information system that supports the analysis and visualisation of experimental data in the context of metabolic networks. It consists of five parts: (1) the DBE-Database for consistent data storage, (2) the Excel-Importer application for data import, (3) the DBE-Website as the interface for the system, (4) the DBE-Pictures application for the upload and download of binary (e.g., image) files, and (5) DBE-Gravisto, a network analysis and graph visualisation system. The usability of this approach is demonstrated in two examples.

  8. Timeline Resource Analysis Program (TRAP): User's manual and program document

    NASA Technical Reports Server (NTRS)

    Sessler, J. G.

    1981-01-01

    The Timeline Resource Analysis Program (TRAP), developed for scheduling and timelining problems, is described. Given an activity network, TRAP generates timeline plots, resource histograms, and tabular summaries of the network, schedules, and resource levels. It is written in ANSI FORTRAN for the Honeywell SIGMA 5 computer and operates in the interactive mode using the TEKTRONIX 4014-1 graphics terminal. The input network file may be a standard SIGMA 5 file or one generated using the Interactive Graphics Design System. The timeline plots can be displayed in two orderings: according to the sequence in which the tasks were read on input, and a waterfall sequence in which the tasks are ordered by start time. The input order is especially meaningful when the network consists of several interacting subnetworks. The waterfall sequence is helpful in assessing the project status at any point in time.

  9. FNV: light-weight flash-based network and pathway viewer.

    PubMed

    Dannenfelser, Ruth; Lachmann, Alexander; Szenk, Mariola; Ma'ayan, Avi

    2011-04-15

    Network diagrams are commonly used to visualize biochemical pathways by displaying the relationships between genes, proteins, mRNAs, microRNAs, metabolites, regulatory DNA elements, diseases, viruses and drugs. While there are several currently available web-based pathway viewers, there is still room for improvement. To this end, we have developed a flash-based network viewer (FNV) for the visualization of small to moderately sized biological networks and pathways. Written in Adobe ActionScript 3.0, the viewer accepts simple Extensible Markup Language (XML) formatted input files to display pathways in vector graphics on any web page, providing flexible layout options, interactivity with the user through tool tips, hyperlinks, and the ability to rearrange nodes on the screen. FNV was utilized as a component in several web-based systems, namely Genes2Networks, Lists2Networks, KEA, ChEA and PathwayGenerator. In addition, FNV can be used to embed pathways inside PDF files for the communication of pathways in soft publication materials. FNV is available for use and download along with the supporting documentation and sample networks at http://www.maayanlab.net/FNV. avi.maayan@mssm.edu.

  10. 75 FR 69644 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-15

    ..., organization, phone, fax, mobile, pager, Defense Switched Network (DSN) phone, other fax, other mobile, other.../Transport Layer Security (SSL/ TLS) connections, access control lists, file system permissions, intrusion detection and prevention systems and log monitoring. Complete access to all records is restricted to and...

  11. The development of an information system and installation of an Internet web database for the purposes of the occupational health and safety management system.

    PubMed

    Mavrikakis, I; Mantas, J; Diomidous, M

    2007-01-01

    This paper is based on research into the possible structure of an information system for the purposes of occupational health and safety management. We initiated a questionnaire in order to gauge the potential interest of prospective users in the subject of occupational health and safety. Establishing this potential interest is vital both for the software analysis cycle and for development according to previous models. The evaluation of the results leads to pilot applications among different enterprises. Documentation and process improvements, assured quality of services, operational support, and occupational health and safety advice are the basics of these applications. Communication and codified information among interested parties is the other target of the survey regarding health issues. Computer networks can offer such services. The network will consist of certain nodes responsible for informing executives on occupational health and safety. A web database has been installed for inserting and searching documents. The submission of files to a server and the answering of questionnaires through the web help the experts perform their activities. Based on the requirements of enterprises, we have constructed a web file server: we submit files so that users can retrieve those they need. Access is limited to authorized users. Digital watermarks authenticate and protect digital objects.

  12. NSSDC provides network access to key data via NDADS

    NASA Technical Reports Server (NTRS)

    Behnke, Jeanne; King, Joseph

    1994-01-01

    The National Space Science Data Center (NSSDC) is making a growing fraction of its most customer-desired data electronically accessible via both local and wide area networks. NSSDC is witnessing a great increase in its data dissemination owing to this network accessibility. To provide its customers the best data accessibility, the NSSDC makes data available from a nearline mass storage system, the NSSDC Data Archive and Dissemination Service (NDADS). NDADS, whose initial version was made available in January 1992, is a customized system of hardware and software that provides users access to the nearline data via anonymous FTP, an e-mail interface (ARMS), and a C-based software library. In January 1992, NDADS registered 416 requests for 1,957 files. By December 1994, NDADS had been populated with 800 gigabytes of electronically accessible data and registered 1,458 requests for 20,887 files. In this report we describe the NDADS system, both hardware and software. Later in the report, we discuss some of the lessons learned as a result of operating NDADS, particularly in the areas of ingest and dissemination.

  13. Content-aware network storage system supporting metadata retrieval

    NASA Astrophysics Data System (ADS)

    Liu, Ke; Qin, Leihua; Zhou, Jingli; Nie, Xuejun

    2008-12-01

    Nowadays, content-based network storage has become a major research topic in academia and industry [1]. In order to solve the problem of hit-rate decline caused by migration, and to support content-based queries, we developed a new content-aware storage system that supports metadata retrieval to improve query performance. First, we extend the SCSI command descriptor block to enable the system to understand self-defined query requests. Second, the extracted metadata is encoded in Extensible Markup Language to improve universality. Third, according to the demands of information lifecycle management (ILM), we store data at different storage levels and use a corresponding query strategy to retrieve it. Fourth, as the file content identifier plays an important role in locating data and calculating block correlation, we use it to fetch files and sort query results through a friendly user interface. Finally, experiments indicate that the retrieval strategy and the sort algorithm enhance retrieval efficiency and precision.
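
    The second step (encoding extracted file metadata as XML for universality) might look like the following sketch using Python's standard library; the element names and the sample metadata fields are invented for illustration and are not the paper's schema.

      import xml.etree.ElementTree as ET

      def encode_metadata(content_id, attrs):
          """Wrap extracted file metadata in a small XML document (invented schema)."""
          root = ET.Element("file", {"contentId": content_id})
          for key, value in attrs.items():
              ET.SubElement(root, "meta", {"name": key}).text = str(value)
          return ET.tostring(root, encoding="unicode")

      print(encode_metadata(
          "hypothetical-content-id-0001",  # content identifier used to locate data
          {"type": "image/jpeg", "owner": "alice", "ilm-tier": "online"},
      ))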

  14. Visual behavior characterization for intrusion and misuse detection

    NASA Astrophysics Data System (ADS)

    Erbacher, Robert F.; Frincke, Deborah

    2001-05-01

    As computer and network intrusions become more and more of a concern, the need for better capabilities to assist in the detection and analysis of intrusions also increases. System administrators typically rely on log files to analyze usage and detect misuse. However, as a consequence of the amount of data collected by each machine, multiplied by the tens or hundreds of machines under the system administrator's auspices, the entirety of the data available is neither collected nor analyzed. This is compounded by the need to analyze network traffic data as well. We propose a methodology for visually analyzing network and computer log information based on the behavior of the users. Each user's behavior is the key to determining their intent and overriding activity, whether they attempt to hide their actions or not. Proficient hackers will attempt to hide their ultimate activities, which hinders the reliability of log file analysis. Visually analyzing the users' behavior, however, is much more adaptable and difficult to counteract.

  15. Interfacing the VAX 11/780 Using Berkeley Unix 4.2.BSD and Ethernet Based Xerox Network Systems. Volume 1.

    DTIC Science & Technology

    1984-12-01

    [Excerpt from the report's table of contents: 3Com Corporation (A-18); Ethernet Controller Support (A-19); Host Systems Support (A-20); Personal Computers Support (A-23); VAX EtherSeries Software (A-23); Network Research Corporation (A-24); File Transfer Service (A-25); Virtual Terminal Service.] ...Control office is planning to acquire a Digital Equipment Corporation VAX 11/780 mainframe computer with the Unix Berkeley 4.2BSD operating system. They

  16. Public census data on CD-ROM at Lawrence Berkeley Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merrill, D.W.

    The Comprehensive Epidemiologic Data Resource (CEDR) and Populations at Risk to Environmental Pollution (PAREP) projects, of the Information and Computing Sciences Division (ICSD) at Lawrence Berkeley Laboratory (LBL), are using public socio-economic and geographic data files which are available to CEDR and PAREP collaborators via LBL's computing network. At this time 70 CD-ROM diskettes (approximately 36 gigabytes) are on line via the Unix file server cedrcd.lbl.gov. Most of the files are from the US Bureau of the Census, and most pertain to the 1990 Census of Population and Housing. All the CD-ROM diskettes contain documentation in the form of ASCII text files. Printed documentation for most files is available for inspection at University of California Data and Technical Assistance (UC DATA), or the UC Documents Library. Many of the CD-ROM diskettes distributed by the Census Bureau contain software for PC-compatible computers for easily accessing the data. Shared access to the data is maintained through a collaboration among the CEDR and PAREP projects at LBL, UC DATA, and the UC Documents Library. Via the Sun Network File System (NFS), these data can be exported to Internet computers for direct access by the user's application program(s).

  18. The SPAN cookbook: A practical guide to accessing SPAN

    NASA Technical Reports Server (NTRS)

    Mason, Stephanie; Tencati, Ronald D.; Stern, David M.; Capps, Kimberly D.; Dorman, Gary; Peters, David J.

    1990-01-01

    This is a manual for remote users who wish to send electronic mail messages from the Space Physics Analysis Network (SPAN) to scientific colleagues on other computer networks and vice versa. In several instances more than one gateway has been included for the same network. Users are provided with an introduction to each network listed with helpful details about accessing the system and mail syntax examples. Also included is information on file transfers, remote logins, and help telephone numbers.

  19. Computer Security Products Technology Overview

    DTIC Science & Technology

    1988-10-01

    [Table-of-contents residue from the original report omitted: 3. DATABASE MANAGEMENT SYSTEMS; Definition.] ...this paper addresses fall into the areas of multi-user hosts, database management systems (DBMS), workstations, networks, guards and gateways, and... provide a portion of that protection, for example, a password scheme, a file protection mechanism, a secure database management system, or even a

  20. The University of Minnesota's Internet Gopher System: A Tool for Accessing Network-Based Electronic Information.

    ERIC Educational Resources Information Center

    Wiggins, Rich

    1993-01-01

    Describes the Gopher system developed at the University of Minnesota for accessing information on the Internet. Highlights include the need for navigation tools; Gopher clients; FTP (File Transfer Protocol); campuswide information systems; navigational enhancements; privacy and security issues; electronic publishing; multimedia; and future…

  1. 77 FR 29637 - Game Show Network, LLC v. Cablevision Systems Corp.

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-18

    ... FEDERAL COMMUNICATIONS COMMISSION [MB Docket No. 12-122; File No. CSR-8529-P; DA 12-739] Game Show... Administrative Law Judge (``ALJ'') to resolve the factual disputes and to return an Initial Decision. DATES: Game... control. 6. GSN is a national cable network launched on December 1, 1994 under the name ``Game Show...

  2. Cyber-Security Concerns Mount as Student Hacking Hits Schools: Districts Straining to Safeguard Online Networks

    ERIC Educational Resources Information Center

    Borja, Rhea R.

    2006-01-01

    While schools rightly fear break-ins to their computer systems by professional criminals, students are increasingly giving educators almost as much to worry about. Reports of students' gaining access to school networks to change grades, delete teachers' files, or steal data are becoming more common, experts say, and many districts remain highly…

  3. A performance analysis of advanced I/O architectures for PC-based network file servers

    NASA Astrophysics Data System (ADS)

    Huynh, K. D.; Khoshgoftaar, T. M.

    1994-12-01

    In the personal computing and workstation environments, more and more I/O adapters are becoming complete functional subsystems that are intelligent enough to handle I/O operations on their own without much intervention from the host processor. The IBM Subsystem Control Block (SCB) architecture has been defined to enhance the potential of these intelligent adapters by defining services and conventions that deliver command information and data to and from the adapters. In recent years, a new storage architecture, the Redundant Array of Independent Disks (RAID), has been quickly gaining acceptance in the world of computing. In this paper, we discuss critical system design issues that are important to the performance of a network file server. We then present a performance analysis of the SCB architecture and disk array technology in typical network file server environments based on personal computers (PCs). One of the key issues investigated in this paper is whether a disk array can outperform a group of disks (of the same type, data capacity, and cost) operating independently, not in parallel as in a disk array.

  4. Rig Diagnostic Tools

    NASA Technical Reports Server (NTRS)

    Soileau, Kerry M.; Baicy, John W.

    2008-01-01

    Rig Diagnostic Tools is a suite of applications designed to allow an operator to monitor the status and health of complex networked systems using a unique interface between Java applications and UNIX scripts. The suite consists of Java applications, C scripts, VxWorks applications, UNIX utilities, C programs, and configuration files. The UNIX scripts retrieve data from the system and write them to a certain set of files. The Java side monitors these files and presents the data in user-friendly formats for operators to use in making troubleshooting decisions. This design allows for rapid prototyping and expansion of higher-level displays without affecting the basic data-gathering applications. The suite is designed to be extensible, with the ability to add new system components in building block fashion without affecting existing system applications. This allows for monitoring of complex systems for which unplanned shutdown time comes at a prohibitive cost.
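
    A minimal sketch of the data-gathering pattern described above, assuming (hypothetically) that the UNIX scripts append one status line per sample to a known file; the Java display side of the real suite is replaced by a single Python poller for brevity.

      import os
      import time

      STATUS_FILE = "/tmp/rig_status.dat"   # hypothetical path written by the UNIX scripts

      def follow(path, poll_seconds=1.0):
          """Yield new lines appended to 'path', polling like the display side would."""
          offset = 0
          while True:
              if os.path.exists(path) and os.path.getsize(path) > offset:
                  with open(path) as f:
                      f.seek(offset)
                      for line in f:
                          yield line.rstrip()
                      offset = f.tell()
              time.sleep(poll_seconds)

      for line in follow(STATUS_FILE):
          print("status update:", line)      # a real display would render this for operators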

  5. Turning Archival Tapes into an Online “Cardless” Catalog

    PubMed Central

    Zuckerman, Alan E.; Ewens, Wilma A.; Cannard, Bonnie G.; Broering, Naomi C.

    1982-01-01

    Georgetown University has created an online card catalog based on machine readable cataloging records (MARC) loaded from archival tapes or online via the OCLC network. The system is programmed in MUMPS and uses the medical subject headings (MeSH) authority file created by the National Library of Medicine. The online catalog may be searched directly by library users and has eliminated the need for manual filing of catalog cards.

  6. Data storage and retrieval system

    NASA Technical Reports Server (NTRS)

    Nakamoto, Glen

    1991-01-01

    The Data Storage and Retrieval System (DSRS) consists of off-the-shelf system components integrated as a file server supporting very large files. These files are on the order of one gigabyte of data per file, although smaller files on the order of one megabyte can be accommodated as well. For instance, one gigabyte of data occupies approximately six 9 track tape reels (recorded at 6250 bpi). Due to this large volume of media, it was desirable to shrink the size of the proposed media to a single portable cassette. In addition to large size, a key requirement was that the data needs to be transferred to a (VME based) workstation at very high data rates. One gigabyte (GB) of data needed to be transferred from archivable media on a file server to a workstation in less than 5 minutes. Equivalent size, on-line data needed to be transferred in less than 3 minutes. These requirements imply effective transfer rates on the order of four to eight megabytes per second (4-8 MB/s). The DSRS also needed to be able to send and receive data from a variety of other sources accessible from an Ethernet local area network.

  7. Data storage and retrieval system

    NASA Technical Reports Server (NTRS)

    Nakamoto, Glen

    1992-01-01

    The Data Storage and Retrieval System (DSRS) consists of off-the-shelf system components integrated as a file server supporting very large files. These files are on the order of one gigabyte of data per file, although smaller files on the order of one megabyte can be accommodated as well. For instance, one gigabyte of data occupies approximately six 9-track tape reels (recorded at 6250 bpi). Due to this large volume of media, it was desirable to 'shrink' the size of the proposed media to a single portable cassette. In addition to large size, a key requirement was that the data needs to be transferred to a (VME based) workstation at very high data rates. One gigabyte (GB) of data needed to be transferred from archivable media on a file server to a workstation in less than 5 minutes. Equivalent size, on-line data needed to be transferred in less than 3 minutes. These requirements imply effective transfer rates on the order of four to eight megabytes per second (4-8 MB/s). The DSRS also needed to be able to send and receive data from a variety of other sources accessible from an Ethernet local area network.
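
    A quick check of the arithmetic behind these requirements (using 1 GB = 1024 MB):

      GB = 1024                      # MB per GB
      archive_rate = GB / (5 * 60)   # 1 GB from archival media in under 5 minutes
      online_rate = GB / (3 * 60)    # 1 GB of on-line data in under 3 minutes
      print(f"archive: >= {archive_rate:.1f} MB/s, on-line: >= {online_rate:.1f} MB/s")
      # prints ~3.4 MB/s and ~5.7 MB/s; the quoted 4-8 MB/s design target leaves
      # headroom for protocol and file-system overhead above these minimum rates.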

  8. A general UNIX interface for biocomputing and network information retrieval software.

    PubMed

    Kiong, B K; Tan, T W

    1993-10-01

    We describe a UNIX program, HYBROW, which can integrate without modification a wide range of UNIX biocomputing and network information retrieval software. HYBROW works in conjunction with a separate set of ASCII files containing embedded hypertext-like links. The program operates like a hypertext browser featuring five basic links: file link, execute-only link, execute-display link, directory-browse link and field-filling link. Useful features of the interface may be developed using combinations of these links with simple shell scripts and examples of these are briefly described. The system manager who supports biocomputing users should find the program easy to maintain, and useful in assisting new and infrequent users; it is also simple to incorporate new programs. Moreover, the individual user can customize the interface, create dynamic menus, hypertext a document, invoke shell scripts and new programs simply with a basic understanding of the UNIX operating system and any text editor. This program was written in C language and uses the UNIX curses and termcap libraries. It is freely available as a tar compressed file (by anonymous FTP from nuscc.nus.sg).
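
    The five link types suggest a simple dispatch table. The sketch below is a loose Python reconstruction; HYBROW itself is a C/curses program, and the tag names used here are invented, since the abstract does not give its link syntax.

      import subprocess

      def browse(link_type, target):
          """Dispatch on the five HYBROW-style link types (tag names are hypothetical)."""
          if link_type == "file":            # file link: show another hypertext page
              print(open(target).read())
          elif link_type == "exec":          # execute-only link: run, discard output
              subprocess.run(target, shell=True, stdout=subprocess.DEVNULL)
          elif link_type == "exec-display":  # execute-display link: run and show output
              result = subprocess.run(target, shell=True, capture_output=True, text=True)
              print(result.stdout)
          elif link_type == "dir":           # directory-browse link
              subprocess.run(["ls", "-l", target])
          elif link_type == "field":         # field-filling link: prompt for a value
              return input(f"{target}: ")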

  9. NAFFS: network attached flash file system for cloud storage on portable consumer electronics

    NASA Astrophysics Data System (ADS)

    Han, Lin; Huang, Hao; Xie, Changsheng

    Cloud storage technology has become a research hotspot in recent years, but existing cloud storage services are mainly designed for use over stable, high-speed Internet connections. Mobile Internet connections are often unstable and relatively slow, and these characteristics limit the use of cloud storage on portable consumer electronics. The Network Attached Flash File System (NAFFS) presents the idea of using a portable device's built-in NAND flash memory as the front-end cache of a virtualized cloud storage device. Modern portable devices with Internet connectivity have more than 1 GB of built-in NAND flash, which is quite enough for daily data storage. The data transfer rate of a NAND flash device is much higher than that of mobile Internet connections [1], and its non-volatility makes it well suited as a cache for Internet cloud storage on portable devices, which often have unstable power supplies and intermittent connectivity. In the present work, NAFFS is evaluated with several benchmarks, and its performance is compared with traditional network-attached file systems such as NFS. Our evaluation indicates that NAFFS achieves an average access speed of 3.38 MB/s, which is about 3 times faster than directly accessing cloud storage over a mobile Internet connection, and offers a more stable interface than direct use of a cloud storage API. Unstable Internet connections and sudden power-off conditions are tolerated, and no cached data are lost in such situations.
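
    A minimal sketch of the caching idea, assuming a dictionary stands in for the NAND flash and a hypothetical cloud object exposing get/put: writes land in local flash immediately and are flushed to the cloud only when the link is up, so a dropped connection or sudden power-off loses nothing that reached flash.

      class FlashFrontedCloud:
          """Write-back cache: local flash absorbs writes, cloud syncs opportunistically."""
          def __init__(self, cloud):
              self.cloud = cloud          # assumed to expose put(name, data) / get(name)
              self.flash = {}             # stands in for non-volatile NAND flash
              self.dirty = set()          # files written locally but not yet uploaded

          def write(self, name, data):
              self.flash[name] = data     # fast local write; survives power loss on real flash
              self.dirty.add(name)

          def read(self, name):
              if name in self.flash:      # cache hit: no network round trip
                  return self.flash[name]
              data = self.cloud.get(name) # miss: fetch once, keep a copy in flash
              self.flash[name] = data
              return data

          def sync(self):
              """Call whenever the mobile link is up; tolerates intermittent connectivity."""
              for name in list(self.dirty):
                  self.cloud.put(name, self.flash[name])
                  self.dirty.discard(name)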

  10. CycADS: an annotation database system to ease the development and update of BioCyc databases

    PubMed Central

    Vellozo, Augusto F.; Véron, Amélie S.; Baa-Puyoulet, Patrice; Huerta-Cepas, Jaime; Cottret, Ludovic; Febvay, Gérard; Calevro, Federica; Rahbé, Yvan; Douglas, Angela E.; Gabaldón, Toni; Sagot, Marie-France; Charles, Hubert; Colella, Stefano

    2011-01-01

    In recent years, genomes from an increasing number of organisms have been sequenced, but their annotation remains a time-consuming process. The BioCyc databases offer a framework for the integrated analysis of metabolic networks. The Pathway Tools software suite allows the automated construction of a database starting from an annotated genome, but it requires prior integration of all annotations into a specific summary file or into a GenBank file. To allow the easy creation and update of a BioCyc database starting from the multiple genome annotation resources available over time, we have developed an ad hoc data management system that we call the Cyc Annotation Database System (CycADS). CycADS is centred on a specific database model and on a set of Java programs to import, filter and export relevant information. Data from GenBank and other annotation sources (including, for example, KAAS, PRIAM, Blast2GO and PhylomeDB) are collected into a database to be subsequently filtered and extracted to generate a complete annotation file. This file is then used to build an enriched BioCyc database using the PathoLogic program of Pathway Tools. The CycADS pipeline for annotation management was used to build the AcypiCyc database for the pea aphid (Acyrthosiphon pisum), whose genome was recently sequenced. The AcypiCyc database webpage also includes, for comparative analyses, two other metabolic reconstruction BioCyc databases generated using CycADS: TricaCyc for Tribolium castaneum and DromeCyc for Drosophila melanogaster. Owing to its flexible design, CycADS offers a powerful software tool for the generation and regular updating of enriched BioCyc databases. The CycADS system is particularly suited for metabolic gene annotation and network reconstruction in newly sequenced genomes. Because of the uniform annotation used for metabolic network reconstruction, CycADS is particularly useful for comparative analysis of the metabolism of different organisms. Database URL: http://www.cycadsys.org PMID:21474551

  11. Automatic River Network Extraction from LIDAR Data

    NASA Astrophysics Data System (ADS)

    Maderal, E. N.; Valcarcel, N.; Delgado, J.; Sevilla, C.; Ojeda, J. C.

    2016-06-01

    National Geographic Institute of Spain (IGN-ES) has launched a new production system for automatic river network extraction for the Geospatial Reference Information (GRI) hydrography theme. The goal is to obtain an accurate and updated river network, extracted as automatically as possible. To this end, IGN-ES has full LiDAR coverage of the whole Spanish territory at a density of 0.5 points per square meter. To implement this work, the technical feasibility was validated and a methodology was developed to automate each production phase: generation of hydrological terrain models with a 2 meter grid size, and river network extraction combining hydrographic criteria (topographic network) with hydrological criteria (flow-accumulation river network); production was then launched. The key points of this work have been managing a big data environment of more than 160,000 LiDAR data files, and the infrastructure to store (up to 40 TB of results and intermediate files) and process the data using local virtualization and Amazon Web Services (AWS), which allowed this automatic production to be completed within 6 months. Also important were the stability of the software (TerraScan-TerraSolid, GlobalMapper-Blue Marble, FME-Safe, ArcGIS-Esri) and, finally, the management of human resources. The result of this production has been an accurate automatic river network extraction for the whole country, with a significant improvement in the altimetric component of the 3D linear vector. This article presents the technical feasibility, the production methodology, the automatic river network extraction production, and its advantages over traditional vector extraction systems.
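
    The hydrological criterion above (a flow-accumulation river network) can be illustrated with a toy D8-style accumulation over an elevation grid; this is the generic textbook algorithm, not IGN-ES's production code.

      import numpy as np

      def flow_accumulation(dem):
          """Toy D8 accumulation: each cell drains to its steepest downhill neighbour."""
          rows, cols = dem.shape
          acc = np.ones_like(dem, dtype=float)            # every cell contributes one unit
          for idx in np.argsort(dem, axis=None)[::-1]:    # visit cells highest to lowest
              r, c = divmod(int(idx), cols)
              best_drop, target = 0.0, None
              for dr in (-1, 0, 1):
                  for dc in (-1, 0, 1):
                      rr, cc = r + dr, c + dc
                      if (dr, dc) != (0, 0) and 0 <= rr < rows and 0 <= cc < cols:
                          if dem[r, c] - dem[rr, cc] > best_drop:
                              best_drop, target = dem[r, c] - dem[rr, cc], (rr, cc)
              if target is not None:
                  acc[target] += acc[r, c]                # pass accumulated flow downhill
          return acc                                      # thresholding acc traces the rivers

      dem = np.array([[3.0, 2.0, 1.0], [4.0, 3.0, 2.0], [5.0, 4.0, 3.0]])
      print(flow_accumulation(dem))                       # flow concentrates toward (0, 2)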

  12. Cardio-PACs: a new opportunity

    NASA Astrophysics Data System (ADS)

    Heupler, Frederick A., Jr.; Thomas, James D.; Blume, Hartwig R.; Cecil, Robert A.; Heisler, Mary

    2000-05-01

    It is now possible to replace film-based image management in the cardiac catheterization laboratory with a Cardiology Picture Archiving and Communication System (Cardio-PACS) based on digital imaging technology. The first step in the conversion process is installation of a digital image acquisition system that is capable of generating high-quality DICOM-compatible images. The next three steps, which are the subject of this presentation, involve image display, distribution, and storage. Clinical requirements and associated cost considerations for these three steps are listed below: Image display: (1) Image quality equal to film, with DICOM format, lossless compression, image processing, desktop PC-based with color monitor, and physician-friendly imaging software; (2) Performance specifications include: acquire 30 frames/sec; replay 15 frames/sec; access to file server 5 seconds, and to archive 5 minutes; (3) Compatibility of image file, transmission, and processing formats; (4) Image manipulation: brightness, contrast, gray scale, zoom, biplane display, and quantification; (5) User-friendly control of image review. Image distribution: (1) Standard IP-based network between cardiac catheterization laboratories, file server, long-term archive, review stations, and remote sites; (2) Non-proprietary formats; (3) Bidirectional distribution. Image storage: (1) CD-ROM vs disk vs tape; (2) Verification of data integrity; (3) User-designated storage capacity for catheterization laboratory, file server, long-term archive. Costs: (1) Image acquisition equipment, file server, long-term archive; (2) Network infrastructure; (3) Review stations and software; (4) Maintenance and administration; (5) Future upgrades and expansion; (6) Personnel.

  13. Measuring a year of child pornography trafficking by U.S. computers on a peer-to-peer network.

    PubMed

    Wolak, Janis; Liberatore, Marc; Levine, Brian Neil

    2014-02-01

    We used data gathered via investigative "RoundUp" software to measure a year of online child pornography (CP) trafficking activity by U.S. computers on the Gnutella peer-to-peer network. The data include millions of observations of Internet Protocol addresses sharing known CP files, identified as such in previous law enforcement investigations. We found that 244,920 U.S. computers shared 120,418 unique known CP files on Gnutella during the study year. More than 80% of these computers shared fewer than 10 such files during the study year or shared files for fewer than 10 days. However, less than 1% of computers (n=915) made high annual contributions to the number of known CP files available on the network (100 or more files). If law enforcement arrested the operators of these high-contribution computers and took their files offline, the number of distinct known CP files available in the P2P network could be reduced by as much as 30%. Our findings indicate widespread low level CP trafficking by U.S. computers in one peer-to-peer network, while a small percentage of computers made high contributions to the problem. However, our measures were not comprehensive and should be considered lower bounds estimates. Nonetheless, our findings show that data can be systematically gathered and analyzed to develop an empirical grasp of the scope and characteristics of CP trafficking on peer-to-peer networks. Such measurements can be used to combat the problem. Further, investigative software tools can be used strategically to help law enforcement prioritize investigations. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. The Standard Autonomous File Server, a Customized, Off-the-Shelf Success Story

    NASA Technical Reports Server (NTRS)

    Semancik, Susan K.; Conger, Annette M.; Obenschain, Arthur F. (Technical Monitor)

    2001-01-01

    The Standard Autonomous File Server (SAFS), which includes both off-the-shelf hardware and software, uses an improved automated file transfer process to provide quicker, more reliable, prioritized file distribution for customers of near real-time data without interfering with the assets involved in the acquisition and processing of the data. It operates as a stand-alone solution, monitoring itself and providing an automated fail-over process to enhance reliability. This paper describes the unique problems and lessons learned during the COTS selection and integration into SAFS, and during the system's first year of operation in support of NASA's satellite ground network. COTS was the key factor in allowing the two-person development team to deploy systems in less than a year, meeting the required launch schedule. The SAFS system has been so successful that it is becoming a NASA standard resource, leading to its nomination for NASA's Software of the Year Award in 1999.

  15. The challenge of a data storage hierarchy

    NASA Technical Reports Server (NTRS)

    Ruderman, Michael

    1992-01-01

    A discussion of Mesa Archival Systems' data archiving system is presented. This data archiving system is strictly a software system that is implemented on a mainframe and manages the data into permanent file storage. Emphasis is placed on the fact that any kind of client system on the network can be connected through the Unix interface of the data archiving system.

  16. 75 FR 9989 - Self-Regulatory Organizations; NASDAQ OMX PHLX, Inc.; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-04

    ... Exchanges to operate a stand-alone system or ``Linkage'' for sending order-flow between exchanges to limit trade-throughs.\\6\\ The Options Clearing Corporation (``OCC'') operated the Linkage system (the ``System.... options markets are linked together on a real-time basis through a network capable of transporting orders...

  17. Company's Data Security - Case Study

    NASA Astrophysics Data System (ADS)

    Stera, Piotr

    This paper describes the computer network and data security problems of an existing company. Two main issues are pointed out: data loss protection and uncontrolled data copying. A security system was designed and implemented, consisting of many dedicated programs. The system protects against data loss and detects unauthorized copying of files from the company's server by a dishonest employee.

  18. An Xrootd Italian Federation

    NASA Astrophysics Data System (ADS)

    Boccali, T.; Donvito, G.; Diacono, D.; Marzulli, G.; Pompili, A.; Della Ricca, G.; Mazzoni, E.; Argiro, S.; Gregori, D.; Grandi, C.; Bonacorsi, D.; Lista, L.; Fabozzi, F.; Barone, L. M.; Santocchia, A.; Riahi, H.; Tricomi, A.; Sgaravatto, M.; Maron, G.

    2014-06-01

    The Italian community in CMS has built a geographically distributed network in which all the data stored in the Italian region are available to all users for their everyday work. This activity involves, at different levels, all the CMS centers: the Tier1 at CNAF, all four Tier2s (Bari, Rome, Legnaro and Pisa), and a few Tier3s (Trieste, Perugia, Torino, Catania, Napoli, ...). The federation uses the new network connections provided by GARR, our NREN (National Research and Education Network), which supplies a minimum of 10 Gbit/s to all the sites via the GARR-X [2] project. The federation is currently based on Xrootd [1] technology and on a Redirector that seamlessly connects all the sites, giving the logical view of a single entity. A special configuration has been put in place for the Tier1, CNAF, where ad-hoc Xrootd changes have been implemented to protect the tape system from excessive stress by disallowing WAN access to tape-only files, on a file-by-file basis. To improve overall read performance, in terms of both bandwidth and latency, a hierarchy of Xrootd redirectors has been implemented: a dedicated Redirector where all the INFN sites are registered, regardless of their status (T1, T2, or T3 sites). An interesting use case the federation has been able to cover is disk-less Tier3s. The caching solution allows a local storage site to be operated with minimal human intervention: transfers are done automatically on a single-file basis, and the cache is kept operational by automatic removal of old files.

  19. Network systems security analysis

    NASA Astrophysics Data System (ADS)

    Yilmaz, İsmail

    2015-05-01

    Network systems security analysis is of utmost importance in today's world. Many companies, such as banks that give priority to data management, test their own data security systems with penetration tests from time to time. In this context, companies must also test their own network and server systems and take precautions, as data security draws increasing attention. Based on this idea, in this study cyber-attacks are researched thoroughly and penetration-testing techniques are examined. With this information, cyber-attacks are classified and the security of network systems is then tested systematically. After the testing period, all data are reported and filed for future reference. It is found that human beings are the weakest link in the chain and that simple mistakes may unintentionally cause huge problems. Thus, it is clear that precautions, such as keeping security software up to date, must be taken to avoid such threats.

  20. Jefferson Lab Mass Storage and File Replication Services

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ian Bird; Ying Chen; Bryan Hess

    Jefferson Lab has implemented a scalable, distributed, high performance mass storage system - JASMine. The system is entirely implemented in Java, provides access to robotic tape storage and includes disk cache and stage manager components. The disk manager subsystem may be used independently to manage stand-alone disk pools. The system includes a scheduler to provide policy-based access to the storage systems. Security is provided by pluggable authentication modules and is implemented at the network socket level. The tape and disk cache systems have well defined interfaces in order to provide integration with grid-based services. The system is in production and being used to archive 1 TB per day from the experiments, and currently moves over 2 TB per day total. This paper will describe the architecture of JASMine, discuss the rationale for building the system, and present a transparent third-party file replication service to move data to collaborating institutes using JASMine, XML, and servlet technology interfacing to grid-based file transfer mechanisms.

  1. A Urinalysis Result Reporting System for a Clinical Laboratory

    PubMed Central

    Sullivan, James E.; Plexico, Perry S.; Blank, David W.

    1987-01-01

    A menu driven Urinalysis Result Reporting System based on multiple IBM-PC Workstations connected together by a local area network was developed for the Clinical Chemistry Section of the Clinical Pathology Department at the National Institutes of Health's Clinical Center. Two Network File Servers redundantly save the test results of each urine specimen. When all test results for a specimen are entered into the system, the results are transmitted to the Department's Laboratory Computer System where they are made available to the ordering physician. The Urinalysis Data Management System has proven easy to learn and use.

  2. Measurements over distributed high performance computing and storage systems

    NASA Technical Reports Server (NTRS)

    Williams, Elizabeth; Myers, Tom

    1993-01-01

    A strawman proposal is given for a framework for presenting a common set of metrics for supercomputers, workstations, file servers, mass storage systems, and the networks that interconnect them. Production control and database systems are also included. Though other applications and third-party software systems are not addressed, it is important to measure them as well.

  3. Building an FTP guard

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sands, P.D.

    1998-08-01

    Classified designs usually include lesser classified (including unclassified) components. An engineer working on such a design needs access to the various sub-designs at lower classification levels. For simplicity, the problem is presented with only two levels: high and low. If the low-classification component designs are stored in the high network, they become inaccessible to persons working on a low network. In order to keep the networks separate, the component designs may be duplicated in all networks, resulting in a synchronization problem. Alternatively, they may be stored in the low network and brought into the high network when needed. The latter solution results in the use of sneaker-net (copying the files from the low system to a tape and carrying the tape to a high system) or a file transfer guard. This paper shows how an FTP Guard was constructed and implemented without degrading the security of the underlying B3 platform. The paper then shows how the guard can be extended to an FTP proxy server or an HTTP proxy server. The extension is accomplished by allowing the high-side user to select among items that already exist on the low-side. No high-side data can be directly compromised by the extension, but a mechanism must be developed to handle the low-bandwidth covert channel that would be introduced by the application.

  4. Exploring the use of I/O nodes for computation in a MIMD multiprocessor

    NASA Technical Reports Server (NTRS)

    Kotz, David; Cai, Ting

    1995-01-01

    As parallel systems move into the production scientific-computing world, the emphasis will be on cost-effective solutions that provide high throughput for a mix of applications. Cost effective solutions demand that a system make effective use of all of its resources. Many MIMD multiprocessors today, however, distinguish between 'compute' and 'I/O' nodes, the latter having attached disks and being dedicated to running the file-system server. This static division of responsibilities simplifies system management but does not necessarily lead to the best performance in workloads that need a different balance of computation and I/O. Of course, computational processes sharing a node with a file-system service may receive less CPU time, network bandwidth, and memory bandwidth than they would on a computation-only node. In this paper we begin to examine this issue experimentally. We found that high performance I/O does not necessarily require substantial CPU time, leaving plenty of time for application computation. There were some complex file-system requests, however, which left little CPU time available to the application. (The impact on network and memory bandwidth still needs to be determined.) For applications (or users) that cannot tolerate an occasional interruption, we recommend that they continue to use only compute nodes. For tolerant applications needing more cycles than those provided by the compute nodes, we recommend that they take full advantage of both compute and I/O nodes for computation, and that operating systems should make this possible.

  5. A mass storage system for supercomputers based on Unix

    NASA Technical Reports Server (NTRS)

    Richards, J.; Kummell, T.; Zarlengo, D. G.

    1988-01-01

    The authors present the design, implementation, and utilization of a large mass storage subsystem (MSS) for the numerical aerodynamics simulation. The MSS supports a large networked, multivendor Unix-based supercomputing facility. The MSS at Ames Research Center provides all processors on the numerical aerodynamics system processing network, from workstations to supercomputers, the ability to store large amounts of data in a highly accessible, long-term repository. The MSS uses Unix System V and is capable of storing hundreds of thousands of files ranging from a few bytes to 2 Gb in size.

  6. Information Retrieval Using ADABAS-NATURAL (with Applications for Television and Radio).

    ERIC Educational Resources Information Center

    Silbergeld, I.; Kutok, P.

    1984-01-01

    Describes use of the software ADABAS (general purpose database management system) and NATURAL (interactive programing language) in development and implementation of an information retrieval system for the National Television and Radio Network of Israel. General design considerations, files contained in each archive, search strategies, and keywords…

  7. Online Writing Labs (OWLs): A Taxonomy of Options and Issues.

    ERIC Educational Resources Information Center

    Harris, Muriel; Pemberton, Michael

    1995-01-01

    Offers an overview and schema for understanding frequently used network technologies available for Online Writing Labs (OWLs)--electronic mail, gopher, World Wide Web, newsgroups, synchronous chat systems, and automated file retrieval systems. Considers ways writing centers' choices among these technologies are impacted by user access, network…

  8. Web usage data mining agent

    NASA Astrophysics Data System (ADS)

    Madiraju, Praveen; Zhang, Yanqing

    2002-03-01

    When a user logs in to a website, behind the scenes the user leaves his/her impressions, usage patterns and access patterns in the web server's log file. A web usage mining agent can analyze these web logs to help web developers improve the organization and presentation of their websites, and to help system administrators improve system performance. Web logs also provide invaluable help in creating adaptive web sites and in network traffic analysis. This paper presents the design and implementation of a web usage mining agent for digging into web log files.
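
    As a concrete example of this kind of digging, the sketch below parses Common Log Format entries (the standard web-server log layout) and ranks page popularity and per-host activity; the paper's own agent design is not reproduced here.

      import re
      from collections import Counter

      # Common Log Format: host ident user [time] "request" status bytes
      LOG_RE = re.compile(r'(\S+) \S+ \S+ \[(.*?)\] "(\S+) (\S+) \S+" (\d{3}) (\S+)')

      def mine(log_lines):
          pages, hosts = Counter(), Counter()
          for line in log_lines:
              m = LOG_RE.match(line)
              if m:
                  host, _time, _method, path, status, _size = m.groups()
                  hosts[host] += 1
                  if status.startswith("2"):   # count only successful fetches
                      pages[path] += 1
          return pages.most_common(10), hosts.most_common(10)

      sample = ['10.0.0.1 - - [01/Mar/2002:10:00:00 +0000] "GET /index.html HTTP/1.0" 200 1043']
      print(mine(sample))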

  9. 77 FR 73508 - Self-Regulatory Organizations; Chicago Board Options Exchange, Incorporated; Notice of Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-10

    ... ``Disaster Recovery Systems'') in case of the occurrence of some manner of disaster which prevents NY4 from operating. These Disaster Recovery Systems can be accessed via Network Access Ports in Chicago (the... Access Ports in order to be able to connect to the Disaster Recovery Systems in case of such disaster...

  10. Large File Transfers from Space Using Multiple Ground Terminals and Delay-Tolerant Networking

    NASA Technical Reports Server (NTRS)

    Ivancic, William D.; Paulsen, Phillip; Stewart, Dave; Eddy, Wesley; McKim, James; Taylor, John; Lynch, Scott; Heberle, Jay; Northam, James; Jackson, Chris

    2010-01-01

    We use Delay-Tolerant Networking (DTN) to break control loops between space-ground communication links and ground-ground communication links to increase overall file delivery efficiency, as well as to enable large files to be proactively fragmented and received across multiple ground stations. DTN proactive fragmentation and reactive fragmentation were demonstrated from the UK-DMC satellite using two independent ground stations. The files were reassembled at a bundle agent located at Glenn Research Center in Cleveland, Ohio. The first space-based demonstration of this occurred on September 30 and October 1, 2009. This paper details those experiments. Keywords: communication, delay-tolerant networking, DTN, satellite, Internet, protocols, bundle, IP, TCP.

  11. Associative programming language and virtual associative access manager

    NASA Technical Reports Server (NTRS)

    Price, C.

    1978-01-01

    APL provides convenient associative data manipulation functions in a high level language. Six statements were added to PL/1 via a preprocessor: CREATE, INSERT, FIND, FOR EACH, REMOVE, and DELETE. They allow complete control of all data base operations. During execution, data base management programs perform the functions required to support the APL language. VAAM is the data base management system designed to support the APL language. APL/VAAM is used by CADANCE, an interactive graphic computer system. VAAM is designed to support heavily referenced files. Virtual memory files, which utilize the paging mechanism of the operating system, are used. VAAM supports a full network data structure. The two basic blocks in a VAAM file are entities and sets. Entities are the basic information element and correspond to PL/1 based structures defined by the user. Sets contain the relationship information and are implemented as arrays.

  12. File Server-Based CD-ROM Networking: Using SCSI Express.

    ERIC Educational Resources Information Center

    McQueen, Howard

    1992-01-01

    Provides guidelines for evaluating SCSI Express Novell 386, a new product allowing CD-ROM drives to be attached to a Netware 3.11 file server, increasing CD-ROM networking capability. Specific limitations concerning software, hardware, and human resources are outlined, as well as its unique features and potential for future networking uses. (EA)

  13. 77 FR 36305 - Stream Communications Network & Media, Inc.; Order of Suspension of Trading

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-18

    ... SECURITIES AND EXCHANGE COMMISSION [File No. 500-1] Stream Communications Network & Media, Inc.; Order of Suspension of Trading June 14, 2012. It appears to the Securities and Exchange Commission that... Network & Media, Inc. because it has not filed any periodic reports since the period ended December 31...

  14. 76 FR 28117 - Order of Suspension of Trading; City Network, Inc.

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-13

    ... SECURITIES AND EXCHANGE COMMISSION [File No. 500-1] Order of Suspension of Trading; City Network, Inc. May 11, 2011. It appears to the Securities and Exchange Commission that there is a lack of current and accurate information concerning the securities of City Network, Inc. because it has not filed...

  15. Economics of Computing: The Case of Centralized Network File Servers.

    ERIC Educational Resources Information Center

    Solomon, Martin B.

    1994-01-01

    Discusses computer networking and the cost effectiveness of decentralization, including local area networks. A planned experiment with a centralized approach to the operation and management of file servers at the University of South Carolina is described that hopes to realize cost savings and the avoidance of staffing problems. (Contains four…

  16. 76 FR 79169 - Power Network New Mexico, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-21

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER12-605-000] Power Network New Mexico, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for... Power Network New Mexico, LLC's application for market-based rate authority, with an accompanying rate...

  17. High-Performance, Multi-Node File Copies and Checksums for Clustered File Systems

    NASA Technical Reports Server (NTRS)

    Kolano, Paul Z.; Ciotti, Robert B.

    2012-01-01

    Modern parallel file systems achieve high performance using a variety of techniques, such as striping files across multiple disks to increase aggregate I/O bandwidth and spreading disks across multiple servers to increase aggregate interconnect bandwidth. To achieve peak performance from such systems, it is typically necessary to utilize multiple concurrent readers/writers from multiple systems to overcome various single-system limitations, such as number of processors and network bandwidth. The standard cp and md5sum tools of GNU coreutils found on every modern Unix/Linux system, however, utilize a single execution thread on a single CPU core of a single system, and hence cannot take full advantage of the increased performance of clustered file systems. Mcp and msum are drop-in replacements for the standard cp and md5sum programs that utilize multiple types of parallelism and other optimizations to achieve maximum copy and checksum performance on clustered file systems. Multi-threading is used to ensure that nodes are kept as busy as possible. Read/write parallelism allows individual operations of a single copy to be overlapped using asynchronous I/O. Multi-node cooperation allows different nodes to take part in the same copy/checksum. Split-file processing allows multiple threads to operate concurrently on the same file. Finally, hash trees allow inherently serial checksums to be performed in parallel. Mcp and msum provide significant performance improvements over standard cp and md5sum using multiple types of parallelism and other optimizations. The total speed-ups from all improvements are significant. Mcp improves cp performance over 27x, msum improves md5sum performance almost 19x, and the combination of mcp and msum improves verified copies via cp and md5sum by almost 22x. These improvements come in the form of drop-in replacements for cp and md5sum, so are easily used and are available for download as open source software at http://mutil.sourceforge.net.
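
    The split-file idea can be sketched as follows: hash fixed-size chunks in a thread pool, then hash the concatenation of the chunk digests (a two-level hash tree). This mirrors the technique only; it is not msum's actual algorithm, on-disk format, or CLI, and its root digest differs from a plain md5sum of the same file.

      import hashlib
      import os
      from concurrent.futures import ThreadPoolExecutor

      CHUNK = 64 * 1024 * 1024   # 64 MiB chunks, hashed independently

      def _chunk_digest(path, offset):
          with open(path, "rb") as f:
              f.seek(offset)
              return hashlib.md5(f.read(CHUNK)).digest()

      def parallel_checksum(path, workers=8):
          """Two-level hash tree: one leaf digest per chunk, one root digest over them."""
          offsets = range(0, max(os.path.getsize(path), 1), CHUNK)
          with ThreadPoolExecutor(max_workers=workers) as pool:
              leaves = pool.map(lambda off: _chunk_digest(path, off), offsets)
          root = hashlib.md5()
          for leaf in leaves:        # map() yields leaves in file order
              root.update(leaf)
          return root.hexdigest()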

  18. 75 FR 77885 - Government-Owned Inventions; Availability for Licensing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-14

    ... of federally-funded research and development. Foreign patent applications are filed on selected... applications. Software System for Quantitative Assessment of Vasculature in Three Dimensional Images... three dimensional vascular networks from medical and basic research images. Deregulation of angiogenesis...

  19. Online data handling and storage at the CMS experiment

    NASA Astrophysics Data System (ADS)

    Andre, J.-M.; Andronidis, A.; Behrens, U.; Branson, J.; Chaze, O.; Cittolin, S.; Darlea, G.-L.; Deldicque, C.; Demiragli, Z.; Dobson, M.; Dupont, A.; Erhan, S.; Gigi, D.; Glege, F.; Gómez-Ceballos, G.; Hegeman, J.; Holzner, A.; Jimenez-Estupiñán, R.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, RK; Morovic, S.; Nuñez-Barranco-Fernández, C.; O'Dell, V.; Orsini, L.; Paus, C.; Petrucci, A.; Pieri, M.; Racz, A.; Roberts, P.; Sakulin, H.; Schwick, C.; Stieger, B.; Sumorok, K.; Veverka, J.; Zaza, S.; Zejdl, P.

    2015-12-01

    During the LHC Long Shutdown 1, the CMS Data Acquisition (DAQ) system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and support new detector back-end electronics. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. All the metadata needed for bookkeeping are stored in files as well, in the form of small documents using the JSON encoding. The Storage and Transfer System (STS) is responsible for aggregating these files produced by the HLT, storing them temporarily and transferring them to the T0 facility at CERN for subsequent offline processing. The STS merger service aggregates the output files from the HLT from ∼62 sources produced with an aggregate rate of ∼2 GB/s. An estimated bandwidth of 7 GB/s in concurrent read/write mode is needed. Furthermore, the STS has to be able to store several days of continuous running, so an estimated 250 TB of total usable disk space is required. In this article we present the various technological and implementation choices of the three components of the STS: the distributed file system, the merger service and the transfer system.

  20. Online Data Handling and Storage at the CMS Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andre, J. M.; et al.

    2015-12-23

    During the LHC Long Shutdown 1, the CMS Data Acquisition (DAQ) system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and support new detector back-end electronics. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. All the metadata needed for bookkeeping are stored in files as well, in the form of small documents using the JSON encoding. The Storage and Transfer System (STS) is responsible for aggregating these files produced by the HLT, storing them temporarily and transferring them to the T0 facility at CERN for subsequent offline processing. The STS merger service aggregates the output files from the HLT from ~62 sources produced with an aggregate rate of ~2 GB/s. An estimated bandwidth of 7 GB/s in concurrent read/write mode is needed. Furthermore, the STS has to be able to store several days of continuous running, so an estimated 250 TB of total usable disk space is required. In this article we present the various technological and implementation choices of the three components of the STS: the distributed file system, the merger service and the transfer system.
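
    The bookkeeping scheme of small JSON documents accompanying each data file can be illustrated as below; the field names and file-name pattern are invented, since the article does not list the exact schema.

      import glob
      import json

      def merge_metadata(pattern):
          """Aggregate per-source JSON bookkeeping documents, as a merger service might."""
          total_events, total_bytes, inputs = 0, 0, []
          for path in sorted(glob.glob(pattern)):   # one small document per HLT source
              with open(path) as f:
                  doc = json.load(f)
              total_events += doc["events"]         # hypothetical field names
              total_bytes += doc["size"]
              inputs.append(doc["file"])
          return {"events": total_events, "size": total_bytes, "merged_from": inputs}

      # e.g. merge_metadata("run000001_ls0042_*.jsn") -> one summary document
      # describing the merged data file shipped onward to the T0.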

  1. The new CMS DAQ system for run-2 of the LHC

    DOE PAGES

    Bawej, Tomasz; Behrens, Ulf; Branson, James; ...

    2015-05-21

    The data acquisition (DAQ) system of the CMS experiment at the CERN Large Hadron Collider assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s to the high level trigger (HLT) farm. The HLT farm selects interesting events for storage and offline analysis at a rate of around 1 kHz. The DAQ system has been redesigned during the accelerator shutdown in 2013/14. The motivation is twofold: firstly, the current compute nodes, networking, and storage infrastructure will have reached the end of their lifetime by the time the LHC restarts. Secondly, in order to handle higher LHC luminosities and event pileup, a number of sub-detectors will be upgraded, increasing the number of readout channels and replacing the off-detector readout electronics with a μTCA implementation. The new DAQ architecture will take advantage of the latest developments in the computing industry. For data concentration, 10/40 Gb/s Ethernet technologies will be used, as well as an implementation of a reduced TCP/IP in FPGA for reliable transport between custom electronics and commercial computing hardware. A Clos network based on 56 Gb/s FDR Infiniband has been chosen for the event builder with a throughput of ~4 Tb/s. The HLT processing is entirely file based. This allows the DAQ and HLT systems to be independent, and to use the HLT software in the same way as for the offline processing. The fully built events are sent to the HLT with 1/10/40 Gb/s Ethernet via network file systems. Hierarchical collection of HLT-accepted events and monitoring meta-data are stored into a global file system. This paper presents the requirements, technical choices, and performance of the new system.

  2. Next Generation Space Telescope Integrated Science Module Data System

    NASA Technical Reports Server (NTRS)

    Schnurr, Richard G.; Greenhouse, Matthew A.; Jurotich, Matthew M.; Whitley, Raymond; Kalinowski, Keith J.; Love, Bruce W.; Travis, Jeffrey W.; Long, Knox S.

    1999-01-01

    The data system for the Next Generation Space Telescope (NGST) Integrated Science Module (ISIM) is the primary data interface between the spacecraft, telescope, and science instrument systems. This poster includes block diagrams of the ISIM data system and its components derived during the pre-phase A Yardstick feasibility study. The poster details the hardware and software components used to acquire and process science data for the Yardstick instrument complement, and depicts the baseline external interfaces to science instruments and other systems. This baseline data system is a fully redundant, high-performance computing system. Each redundant computer contains three 150 MHz PowerPC processors. All processors execute a commercially available real-time multi-tasking operating system supporting preemptive multi-tasking, file management and network interfaces. The six processors in the system are networked together. The spacecraft interface baseline is an extension of the network which links the six processors. The final selection of processor buses, processor chips, network interfaces, and high-speed data interfaces will be made during mid-2002.

  3. Multimedia Information Networks in Social Media

    NASA Astrophysics Data System (ADS)

    Cao, Liangliang; Qi, Guojun; Tsai, Shen-Fu; Tsai, Min-Hsuan; Pozo, Andrey Del; Huang, Thomas S.; Zhang, Xuemei; Lim, Suk Hwan

    The popularity of personal digital cameras and online photo/video sharing communities has led to an explosion of multimedia information. Unlike traditional multimedia data, many new multimedia datasets are organized in a structural way, incorporating rich information such as semantic ontology, social interaction, community media, and geographical maps, in addition to the multimedia content itself. Studies of such structured multimedia data have resulted in a new research area, which is referred to as Multimedia Information Networks. Multimedia information networks are closely related to social networks, but focus especially on understanding the topics and semantics of multimedia files in the context of network structure. This chapter reviews different categories of recent systems related to multimedia information networks, summarizes the popular inference methods used in recent works, and discusses the applications related to multimedia information networks. We also discuss a wide range of topics including public datasets, related industrial systems, and potential future research directions in this field.

  4. 78 FR 43234 - Agency Information Collection Activities; Submission to OMB for Reinstatement, With Change, of a...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-19

    ...) administrator, the Financial Crimes Enforcement Network (FinCEN) transitioned from a system originally designed... forms. FinCEN's objective is to have one electronically-filed dynamic and interactive BSA-SAR that will... and oversight of supervised institutions. \\2\\ The Board of Governors of the Federal Reserve System...

  5. Design, Development, and Testing of a Network Frequency Selection Service (NFSS)

    DTIC Science & Technology

    1994-02-14

    ...commercial simulation software (Sim++), word processor (FrameMaker), editor (Gnu Emacs), software version control (Revision Control System (RCS)), system... of FrameMaker ".mif" files. When viewed using FrameMaker or a PostScript reader, each page of results appears as two columns by four rows of graphics.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bordetsky, A; Dougan, A D; Nekoogar, F

    The paper addresses technological and operational challenges of developing a global plug-and-play Maritime Domain Security testbed for the Global War on Terrorism mission. This joint NPS-LLNL project is based on the NPS Tactical Network Topology (TNT) composed of long-haul OFDM networks combined with self-forming wireless mesh links to air, surface, ground, and underwater unmanned vehicles. This long-haul network is combined with ultra-wideband (UWB) communications systems for wireless communications in harsh radio propagation channels. LLNL's UWB communication prototypes are designed to overcome shortcomings of the present narrowband communications systems in heavy metallic and constricted corridors inside ships. In the center of our discussion are networking solutions for the Maritime Interdiction Operation (MIO) Experiments in which geographically distributed command centers and subject matter experts collaborate with the Boarding Party in real time to facilitate situational understanding and course of action selection. The most recent experiment conducted via the testbed extension to the Alameda Island exercised several key technologies aimed at improving MIO. These technologies included UWB communications from within the ship to the Boarding Party leader sending data files and pictures, advanced radiation detection equipment for search and identification, biometric equipment to record and send fingerprint files to facilitate rapid positive identification of crew members, and the latest updates of the NPS Tactical Network Topology facilitating reachback to LLNL, Biometric Fusion Center, USCG, and DTRA experts.

  7. Documentary of MFENET, a national computer network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shuttleworth, B.O.

    1977-06-01

    The national Magnetic Fusion Energy Computer Network (MFENET) is a newly operational star network of geographically separated heterogeneous hosts and a communications subnetwork of PDP-11 processors. Host processors interfaced to the subnetwork currently include a CDC 7600 at the Central Computer Center (CCC) and several DECsystem-10's at User Service Centers (USC's). The network was funded by a U.S. government agency (ERDA) to provide in an economical manner the needed computational resources to magnetic confinement fusion researchers. Phase I operation of MFENET distributed the processing power of the CDC 7600 among the USC's through the provision of file transport between any two hosts and remote job entry to the 7600. Extending the capabilities of Phase I, MFENET Phase II provided interactive terminal access to the CDC 7600 from the USC's. A file management system is maintained at the CCC for all network users. The history and development of MFENET are discussed, with emphasis on the protocols used to link the host computers and the USC software. Comparisons are made of MFENET versus ARPANET (Advanced Research Projects Agency Computer Network) and DECNET (Digital Distributed Network Architecture). DECNET and MFENET host-to-host, host-to-CCP, and link protocols are discussed in detail. The USC--CCP interface is described briefly. 43 figures, 2 tables.

  8. PIYAS-proceeding to intelligent service oriented memory allocation for flash based data centric sensor devices in wireless sensor networks.

    PubMed

    Rizvi, Sanam Shahla; Chung, Tae-Sun

    2010-01-01

    Flash memory has become a widespread storage medium for modern wireless devices because of its effective characteristics: non-volatility, small size, light weight, fast access speed, shock resistance, high reliability and low power consumption. Sensor nodes are highly resource-constrained in terms of processing speed, runtime memory, persistent storage, communication bandwidth and energy. Therefore, for wireless sensor networks supporting sense, store, merge and send schemes, an efficient and reliable file system that respects sensor-node constraints is highly desirable. In this paper, we propose a novel log-structured file system for external NAND flash memory, called Proceeding to Intelligent service oriented memorY Allocation for flash based data centric Sensor devices in wireless sensor networks (PIYAS). This is the extended version of our previously proposed PIYA [1]. The main goals of the PIYAS scheme are to achieve instant mounting and a reduced SRAM footprint by keeping memory-mapping information very small, and to provide high query-response throughput by allocating memory to sensor data according to network business rules. The scheme intelligently samples and stores the raw data and provides high in-network data availability by keeping aggregate data for a longer period of time than any previous scheme. We propose effective garbage collection and wear-leveling schemes as well. The experimental results show that PIYAS is an optimized memory management scheme allowing high performance for wireless sensor networks.
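
    The log-structured core of such flash file systems can be sketched as follows: every write appends to the next free page, a small in-RAM map tracks the newest copy of each sector, and stale pages are reclaimed by garbage collection. This is a generic illustration, not the PIYAS layout.

      class LogStructuredFlash:
          """Generic log-structured flash sketch: out-of-place writes plus GC."""
          def __init__(self, num_pages):
              self.pages = [None] * num_pages   # simulated NAND: pages programmed in order
              self.head = 0                     # next free page in the log
              self.map = {}                     # sector id -> page holding its newest copy

          def write(self, sector, data):
              """Out-of-place update: append to the log, then redirect the mapping."""
              if self.head == len(self.pages):
                  self._garbage_collect()       # reclaim pages holding stale versions
                  if self.head == len(self.pages):
                      raise RuntimeError("flash full: every page holds live data")
              self.pages[self.head] = (sector, data)
              self.map[sector] = self.head      # any older copy is now stale
              self.head += 1

          def read(self, sector):
              return self.pages[self.map[sector]][1]

          def _garbage_collect(self):
              """Copy only live pages back to the front; stale versions are dropped."""
              live = [(s, self.read(s)) for s in self.map]
              self.pages = [None] * len(self.pages)
              self.map, self.head = {}, 0
              for s, d in live:
                  self.write(s, d)

      flash = LogStructuredFlash(num_pages=8)
      for i in range(20):                       # repeated updates of two sectors
          flash.write(i % 2, f"sample-{i}")
      print(flash.read(0), flash.read(1))       # latest copies survive garbage collection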

  9. SBEToolbox: A Matlab Toolbox for Biological Network Analysis

    PubMed Central

    Konganti, Kranti; Wang, Gang; Yang, Ence; Cai, James J.

    2013-01-01

    We present SBEToolbox (Systems Biology and Evolution Toolbox), an open-source Matlab toolbox for biological network analysis. It takes a network file as input, calculates a variety of centralities and topological metrics, clusters nodes into modules, and displays the network using different graph layout algorithms. Straightforward implementation and the inclusion of high-level functions allow the functionality to be easily extended or tailored through developing custom plugins. SBEGUI, a menu-driven graphical user interface (GUI) of SBEToolbox, enables easy access to various network and graph algorithms for programmers and non-programmers alike. All source code and sample data are freely available at https://github.com/biocoder/SBEToolbox/releases. PMID:24027418
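
    The pipeline the abstract describes (read a network file, compute centralities and topological metrics, cluster nodes into modules, lay the graph out) has a close conceptual parallel in Python's networkx, sketched below. SBEToolbox itself is a Matlab toolbox, so none of these calls are its API; the input file name is hypothetical.

    ```python
    # Conceptual parallel to the SBEToolbox pipeline using networkx; this is
    # not SBEToolbox's API, and "network.txt" is a hypothetical edge list.
    import networkx as nx
    from networkx.algorithms import community

    G = nx.read_edgelist("network.txt")

    # a few of the centralities / topological metrics such toolboxes compute
    degree = nx.degree_centrality(G)
    between = nx.betweenness_centrality(G)
    clustering = nx.average_clustering(G)

    # module detection and a layout for display
    modules = community.greedy_modularity_communities(G)
    pos = nx.spring_layout(G)

    print(f"{G.number_of_nodes()} nodes, {len(modules)} modules, "
          f"mean clustering {clustering:.3f}")
    ```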

  10. SBEToolbox: A Matlab Toolbox for Biological Network Analysis.

    PubMed

    Konganti, Kranti; Wang, Gang; Yang, Ence; Cai, James J

    2013-01-01

    We present SBEToolbox (Systems Biology and Evolution Toolbox), an open-source Matlab toolbox for biological network analysis. It takes a network file as input, calculates a variety of centralities and topological metrics, clusters nodes into modules, and displays the network using different graph layout algorithms. Straightforward implementation and the inclusion of high-level functions allow the functionality to be easily extended or tailored through developing custom plugins. SBEGUI, a menu-driven graphical user interface (GUI) of SBEToolbox, enables easy access to various network and graph algorithms for programmers and non-programmers alike. All source code and sample data are freely available at https://github.com/biocoder/SBEToolbox/releases.

  11. 77 FR 27108 - Order of Suspension of Trading; In the Matter of Anthracite Capital, Inc., Auto Data Network Inc...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-08

    ... of Anthracite Capital, Inc., Auto Data Network Inc., Avenue Group, Inc., Ckrush, Inc., Clickable... securities of Auto Data Network Inc. because it has not filed any periodic reports since the period ended... accurate information concerning the securities of Avenue Group, Inc. because it has not filed any periodic...

  12. 75 FR 77882 - Government-Owned Inventions; Availability for Licensing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-14

    ... of federally-funded research and development. Foreign patent applications are filed on selected... applications. Software System for Quantitative Assessment of Vasculature in Three Dimensional Images... vascular networks from medical and basic research images. Deregulation of angiogenesis plays a major role...

  13. JNDMS Task Authorization 2 Report

    DTIC Science & Technology

    2013-10-01

    uses Barnyard to store alarms from all DREnet Snort sensors in a MySQL database. Barnyard is an open source tool designed to work with Snort to take… Technology; ITI: Information Technology Infrastructure; J2EE: Java 2 Enterprise Edition; JAR: Java Archive, an archive file format defined by Java … standards; JDBC: Java Database Connectivity; JDW: JNDMS Data Warehouse; JNDMS: Joint Network and Defence Management System; JNDMS: Joint Network Defence and

  14. A framework for visualization of battlefield network behavior

    NASA Astrophysics Data System (ADS)

    Perzov, Yury; Yurcik, William

    2006-05-01

    An extensible network simulation application was developed to study wireless battlefield communications. The application monitors node mobility and depicts broadcast and unicast traffic as expanding rings and directed links. The network simulation was specially designed to support fault injection to show the impact of air strikes on disabling nodes. The application takes standard ns-2 trace files as an input and provides for performance data output in different graphical forms (histograms and x/y plots). Network visualization via animation of simulation output can be saved in AVI format that may serve as a basis for a real-time battlefield awareness system.
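
    As a rough illustration of the front end such a visualizer needs, the sketch below parses classic ns-2 wired trace lines. The field layout varies between ns-2 trace flavors (wired vs. wireless), and the file name is hypothetical, so treat this as a sketch rather than a complete parser.

    ```python
    # Minimal sketch of reading classic ns-2 trace lines of the form
    #   r 0.118 0 1 tcp 1000 ------- 1 0.0 1.0 0 2
    # Field layout varies with trace flavor, so this is illustrative only.
    def parse_ns2_line(line):
        f = line.split()
        return {
            "event": f[0],          # s/r/d/f: sent, received, dropped, forwarded
            "time": float(f[1]),
            "from_node": int(f[2]),
            "to_node": int(f[3]),
            "pkt_type": f[4],
            "size": int(f[5]),
        }

    with open("out.tr") as tr:      # hypothetical trace file name
        drops = [e for e in map(parse_ns2_line, tr) if e["event"] == "d"]
    print(f"{len(drops)} packets dropped")
    ```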

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ward, Lee H.; Laros, James H., III

    This paper describes a methodology for implementing disk-less cluster systems using the Network File System (NFS) that scales to thousands of nodes. This method has been successfully deployed and is currently in use on several production systems at Sandia National Labs. This paper will outline our methodology and implementation, discuss hardware and software considerations in detail and present cluster configurations with performance numbers for various management operations like booting.
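
    One common disk-less pattern, a read-only NFS root shared by all nodes plus a small writable area per node, can be sketched as below. The paths and export options are assumptions for illustration, not the configuration actually used at Sandia.

    ```python
    # Illustrative only: generate server-side /etc/exports entries for a
    # shared read-only root plus per-node writable areas. Paths and options
    # are assumptions, not the paper's configuration.
    NODES = [f"node{i:04d}" for i in range(1, 5)]

    exports = ["/exports/rootfs *(ro,no_root_squash,async)"]
    exports += [f"/exports/rw/{n} {n}(rw,no_root_squash,sync)" for n in NODES]
    print("\n".join(exports))

    # matching client-side mount line, e.g. in each node's fstab:
    print("server:/exports/rootfs  /  nfs  ro,nolock  0 0")
    ```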

  16. The Design of an Interactive Computer Based System for the Training of Signal Corps Officers in Communications Network Management

    DTIC Science & Technology

    1985-08-01

    from the mainframe to the terminals is approximately 56k bits per second (21:3). Score: 8. Expandability. The number of terminals available to the 0… the systems controllers may access any files. For modem link-up, a callback system is to be implemented to prevent unauthorized off-post access (10:2

  17. Ship to Shore Data Communication and Prioritization

    DTIC Science & Technology

    2011-12-01

    First Out; FTP: File Transfer Protocol; GCCS-M: Global Command and Control System Maritime; HAIPE: High Assurance Internet Protocol Encryptor; HTTP: Hypertext…Transfer Protocol (World Wide Web protocol); IBS: Integrated Bar Code System; IDEF0: Integration Definition; IER: Information Exchange Requirements…; INTEL: Intelligence; IP: Internet Protocol; IPT: Integrated Product Team; ISEA: In-Service Engineering Agent; ISNS: Integrated Shipboard Network System; IT

  18. Arctic Boreal Vulnerability Experiment (ABoVE) Science Cloud

    NASA Astrophysics Data System (ADS)

    Duffy, D.; Schnase, J. L.; McInerney, M.; Webster, W. P.; Sinno, S.; Thompson, J. H.; Griffith, P. C.; Hoy, E.; Carroll, M.

    2014-12-01

    The effects of climate change are being revealed at alarming rates in the Arctic and Boreal regions of the planet. NASA's Terrestrial Ecology Program has launched a major field campaign to study these effects over the next 5 to 8 years. The Arctic Boreal Vulnerability Experiment (ABoVE) will challenge scientists to take measurements in the field, study remote observations, and run models to better understand the impacts of a rapidly changing climate in areas of Alaska and western Canada. The NASA Center for Climate Simulation (NCCS) at the Goddard Space Flight Center (GSFC) has partnered with the Terrestrial Ecology Program to create a science cloud designed for this field campaign - the ABoVE Science Cloud. The cloud combines traditional high performance computing with emerging technologies to create an environment specifically designed for large-scale climate analytics. The ABoVE Science Cloud utilizes (1) virtualized high-speed InfiniBand networks, (2) a combination of high-performance file systems and object storage, and (3) virtual system environments tailored for data-intensive science applications. At the center of the architecture is a large object storage environment, much like a traditional high-performance file system, that supports data-proximal processing using technologies like MapReduce on a Hadoop Distributed File System (HDFS). Surrounding the storage is a cloud of high-performance compute resources with many processing cores and large memory, coupled to the storage through an InfiniBand network. Virtual systems can be tailored to a specific scientist and provisioned on the compute resources with extremely high-speed network connectivity to the storage and to other virtual systems. In this talk, we present the architectural components of the science cloud and examples of how it is being used to meet the needs of the ABoVE campaign. In our experience, the science cloud approach significantly lowers the barriers and risks to organizations that require high performance computing solutions and provides the NCCS with the agility required to meet our customers' rapidly increasing and evolving requirements.
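
    Data-proximal processing with MapReduce on HDFS is conventionally expressed as a mapper and a reducer that Hadoop runs next to the data blocks. Below is a Hadoop-streaming-style sketch; the comma-separated (station, temperature) input format is invented for illustration and is not an ABoVE data product.

    ```python
    # Hadoop-streaming-style sketch: mapper and reducer read stdin and emit
    # tab-separated key/value pairs; Hadoop sorts mapper output by key before
    # the reduce phase. The (station, temperature) input is hypothetical.
    import sys

    def mapper():
        for line in sys.stdin:
            station, temp = line.split(",")[:2]
            print(f"{station}\t{temp.strip()}")

    def reducer():
        current, vals = None, []
        for line in sys.stdin:
            key, val = line.rstrip("\n").split("\t")
            if key != current and current is not None:
                print(f"{current}\t{sum(vals) / len(vals):.2f}")  # mean per station
                vals = []
            current = key
            vals.append(float(val))
        if current is not None:
            print(f"{current}\t{sum(vals) / len(vals):.2f}")

    if __name__ == "__main__":
        (mapper if sys.argv[1:] == ["map"] else reducer)()
    ```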

  19. Evaluation of a data dictionary system. [information dissemination and computer systems programs

    NASA Technical Reports Server (NTRS)

    Driggers, W. G.

    1975-01-01

    The usefulness was investigated of a data dictionary/directory system for achieving optimum benefits from existing and planned investments in computer data files in the Data Systems Development Branch and the Institutional Data Systems Division. Potential applications of the data catalogue system are discussed along with an evaluation of the system. Other topics discussed include data description, data structure, programming aids, programming languages, program networks, and test data.

  20. Astrochem: Abundances of chemical species in the interstellar medium

    NASA Astrophysics Data System (ADS)

    Maret, Sébastien; Bergin, Edwin A.

    2015-07-01

    Astrochem computes the abundances of chemical species in the interstellar medium as a function of time. It studies the chemistry in a variety of astronomical objects, including diffuse clouds, dense clouds, photodissociation regions, prestellar cores, protostars, and protostellar disks. Astrochem reads a network of chemical reactions from a text file, builds up a system of kinetic rate equations, and solves it using a state-of-the-art stiff ordinary differential equation (ODE) solver. The Jacobian matrix of the system is computed implicitly, so the resolution of the system is extremely fast: large networks containing several thousands of reactions are usually solved in a few seconds. A variety of gas-phase processes are considered, as well as simple gas-grain interactions, such as freeze-out and desorption via several mechanisms (thermal desorption, cosmic-ray desorption and photo-desorption). The computed abundances are written to an HDF5 file and can be plotted in different ways with the tools provided with Astrochem. Chemical reactions and their rates are written in a format which is meant to be easy to read and to edit. A tool to convert chemical networks from the OSU and KIDA databases into this format is also provided. Astrochem is written in C, and its source code is distributed under the terms of the GNU General Public License (GPL).
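
    The rate-equation approach itself is compact enough to sketch: turn each reaction into production/destruction terms and hand the resulting stiff system to a BDF solver. The toy two-reaction network and rate constants below are invented for illustration; Astrochem's own networks and solver internals are not reproduced.

    ```python
    # Sketch of the kinetic-rate-equation approach on a toy network:
    #   A + A -> B  (rate k1)      B -> A + A  (rate k2)
    # integrated with a stiff (BDF) solver, as Astrochem does for much
    # larger networks. Species and rate values are invented.
    from scipy.integrate import solve_ivp

    k1, k2 = 1e-9, 1e-12        # illustrative rate coefficients

    def rates(t, n):
        nA, nB = n
        form = k1 * nA * nA     # A + A -> B
        destroy = k2 * nB       # B -> A + A
        return [-2 * form + 2 * destroy, form - destroy]

    sol = solve_ivp(rates, (0, 1e13), [1e4, 0.0], method="BDF", rtol=1e-8)
    print("final abundances:", sol.y[:, -1])
    ```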

  1. Multimedial data base and management system for self-education and testing the students' knowledge on pathomorphology.

    PubMed

    Szymaś, J; Gawroński, M

    1993-01-01

    This paper summarizes our experience in creating and using a multimedia database of examination questions and its management system. The system is implemented on IBM PC-compatible microcomputers and works in the NetWare 3.11 network system. The test bank now exceeds 2000 questions. The package consists of two functionally individual programs: ASSISTANT, the administrator of the databases, and EXAMINATOR, the executive program. The system can use text files and add images to each question, adjusted for display on standard graphics devices (VGA). The standard format of the notation files makes it possible to analyze the results in order to estimate the distribution of answers and to find correlations between the results.

  2. Secure Reliable Processing Systems

    DTIC Science & Technology

    1984-02-21

    be attainable in principle, the more difficult goal is to meet all of the above while still maintaining good performance within the framework of a well… managing the network, the user sees a conceptually simpler storage facility, composed merely of files, without machine boundaries, replicated copies

  3. Tracking state deployments of commercial vehicle information systems and networks : 1998 Washington State report

    DOT National Transportation Integrated Search

    1999-12-01

    Volume III of the Logical Architecture contract deliverable documents the Data Dictionary. This formatted version of the Teamwork model data dictionary is mechanically produced from the Teamwork CDIF (Case Data Interchange Format) output file. It is ...

  4. An Implementation Plan for NFS at NASA's NAS Facility

    NASA Technical Reports Server (NTRS)

    Lam, Terance L.; Kutler, Paul (Technical Monitor)

    1998-01-01

    This document discusses how NASA's NAS can benefit from the Sun Microsystems' Network File System (NFS). A case study is presented to demonstrate the effects of NFS on the NAS supercomputing environment. Potential problems are addressed and an implementation strategy is proposed.

  5. Gaining Access to the Internet.

    ERIC Educational Resources Information Center

    Notess, Greg R.

    1992-01-01

    Discusses Internet services and protocols (i.e., electronic mail, file transfer, and remote login) and provides instructions for retrieving guides and directories of the Internet. Services providing access to the Internet are described, including bulletin board systems, regional networks, nationwide connections, and library organizations; and a…

  6. The inadvertent disclosure of personal health information through peer-to-peer file sharing programs

    PubMed Central

    Neri, Emilio; Jonker, Elizabeth; Sokolova, Marina; Peyton, Liam; Neisa, Angelica; Scassa, Teresa

    2010-01-01

    Objective: There has been a consistent concern about the inadvertent disclosure of personal information through peer-to-peer file sharing applications, such as Limewire and Morpheus. Examples of personal health and financial information being exposed have been published. We wanted to estimate the extent to which personal health information (PHI) is being disclosed in this way, and compare that to the extent of disclosure of personal financial information (PFI). Design: After careful review and approval of our protocol by our institutional research ethics board, files were downloaded from peer-to-peer file sharing networks and manually analyzed for the presence of PHI and PFI. The geographic region of the IP addresses was determined, and classified as either USA or Canada. Measurement: We estimated the proportion of files that contain personal health and financial information for each region. We also estimated the proportion of search terms that return files with personal health and financial information. We ascertained and discuss the ethical issues related to this study. Results: Approximately 0.4% of Canadian IP addresses had PHI, as did 0.5% of US IP addresses. There was more disclosure of financial information, at 1.7% of Canadian IP addresses and 4.7% of US IP addresses. An analysis of search terms used in these file sharing networks showed that a small percentage of the terms would return PHI and PFI files (i.e., there are people successfully searching for PFI and PHI on the peer-to-peer file sharing networks). Conclusion: There is a real risk of inadvertent disclosure of PHI through peer-to-peer file sharing networks, although the risk is not as large as for PFI. Anyone keeping PHI on their computers should avoid installing file sharing applications on their computers, or if they have to use such tools, actively manage the risks of inadvertent disclosure of their, their family's, their clients', or patients' PHI. PMID:20190057

  7. SurvNet: a web server for identifying network-based biomarkers that most correlate with patient survival data.

    PubMed

    Li, Jun; Roebuck, Paul; Grünewald, Stefan; Liang, Han

    2012-07-01

    An important task in biomedical research is identifying biomarkers that correlate with patient clinical data, and these biomarkers then provide a critical foundation for the diagnosis and treatment of disease. Conventionally, such an analysis is based on individual genes, but the results are often noisy and difficult to interpret. Using a biological network as the searching platform, network-based biomarkers are expected to be more robust and provide deep insights into the molecular mechanisms of disease. We have developed a novel bioinformatics web server for identifying network-based biomarkers that most correlate with patient survival data, SurvNet. The web server takes three input files: one biological network file, representing a gene regulatory or protein interaction network; one molecular profiling file, containing any type of gene- or protein-centred high-throughput biological data (e.g. microarray expression data or DNA methylation data); and one patient survival data file (e.g. patients' progression-free survival data). Given user-defined parameters, SurvNet will automatically search for subnetworks that most correlate with the observed patient survival data. As the output, SurvNet will generate a list of network biomarkers and display them through a user-friendly interface. SurvNet can be accessed at http://bioinformatics.mdanderson.org/main/SurvNet.
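
    The general shape of a network-based biomarker search, growing a subnetwork greedily while an association score against survival improves, can be sketched as below. The `survival_score` here is a deliberately simple placeholder (absolute correlation of mean expression with survival time); SurvNet's actual statistic and search procedure are not reproduced.

    ```python
    # Hedged sketch of greedy network-based biomarker search. `expr` is
    # assumed to be a pandas DataFrame (samples x genes); `survival_score`
    # is a placeholder statistic, not SurvNet's scoring method.
    import numpy as np
    import networkx as nx

    def survival_score(genes, expr, surv_time):
        profile = expr[list(genes)].mean(axis=1)   # subnetwork's mean profile
        return abs(np.corrcoef(profile, surv_time)[0, 1])

    def greedy_subnetwork(G, expr, surv_time, seed, max_size=10):
        sub = {seed}
        best = survival_score(sub, expr, surv_time)
        while len(sub) < max_size:
            frontier = {v for u in sub for v in G[u]} - sub
            if not frontier:
                break
            cand = max(frontier,
                       key=lambda v: survival_score(sub | {v}, expr, surv_time))
            score = survival_score(sub | {cand}, expr, surv_time)
            if score <= best:           # stop when no neighbor improves the score
                break
            sub.add(cand)
            best = score
        return sub, best
    ```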

  8. Discovery in a World of Mashups

    NASA Astrophysics Data System (ADS)

    King, T. A.; Ritschel, B.; Hourcle, J. A.; Moon, I. S.

    2014-12-01

    When the first digital information was stored electronically, discovery of what existed was through file names and the organization of the file system. With the advent of networks, digital information was shared on a wider scale, but discovery remained based on file and folder names. With a growing number of information sources, name-based discovery quickly became ineffective. The keyword-based search engine was one of the first types of mashup in the world of Web 1.0. Embedded links from one document to another created prescribed relationships between files, and the world of Web 2.0 was formed. Search engines like Google used the links to improve search results, and a worldwide mashup was formed. While a vast improvement, the need for semantic (meaning-rich) discovery was clear, especially for the discovery of scientific data. In response, every science discipline defined schemas to describe its type of data. Some core schemas were shared, but most schemas are custom-tailored even though they share many common concepts. As with the networking of information sources, science increasingly relies on data from multiple disciplines, so there is a need to bring together multiple sources of semantically rich information. We explore how harvesting, conceptual mapping, facet-based search engines, search term promotion, and style sheets can be combined to create the next generation of mashups in the emerging world of Web 3.0. We use NASA's Planetary Data System and NASA's Heliophysics Data Environment to illustrate how to create a multi-discipline mashup.

  9. Development and Integration of WWW-Based Services in an Existing University Environment.

    ERIC Educational Resources Information Center

    Garofalakis, John; Kappos, Panagiotis; Tsakalidis, Athanasios; Tsaknakis, John; Tzimas, Giannis; Vassiliadis, Vassilios

    This paper describes the experience and the problems solved in the process of developing and integrating advanced World Wide Web-based services into the University of Patras (Greece) system. In addition to basic network services (e.g., e-mail, file transfer protocol), the final system will integrate the following set of advanced services: a…

  10. Precise Network Modeling of Systems Genetics Data Using the Bayesian Network Webserver.

    PubMed

    Ziebarth, Jesse D; Cui, Yan

    2017-01-01

    The Bayesian Network Webserver (BNW, http://compbio.uthsc.edu/BNW ) is an integrated platform for Bayesian network modeling of biological datasets. It provides a web-based network modeling environment that seamlessly integrates advanced algorithms for probabilistic causal modeling and reasoning with Bayesian networks. BNW is designed for precise modeling of relatively small networks that contain less than 20 nodes. The structure learning algorithms used by BNW guarantee the discovery of the best (most probable) network structure given the data. To facilitate network modeling across multiple biological levels, BNW provides a very flexible interface that allows users to assign network nodes into different tiers and define the relationships between and within the tiers. This function is particularly useful for modeling systems genetics datasets that often consist of multiscalar heterogeneous genotype-to-phenotype data. BNW enables users to, within seconds or minutes, go from having a simply formatted input file containing a dataset to using a network model to make predictions about the interactions between variables and the potential effects of experimental interventions. In this chapter, we will introduce the functions of BNW and show how to model systems genetics datasets with BNW.
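
    The tier constraint is what keeps exact structure learning tractable: if edges may only point from earlier tiers to later ones, a decomposable score lets each node's parent set be optimized independently, and the per-node optima assemble into the globally best DAG. The sketch below illustrates that idea under the simplifying assumption of strictly earlier-tier parents (BNW also permits within-tier edges); `local_score` is a placeholder for a real scoring function such as BIC, and none of this is BNW's internal code.

    ```python
    # Sketch of tier-constrained exact structure search. Assumes parents
    # come only from strictly earlier tiers; `local_score(v, parents, data)`
    # is a placeholder for a decomposable score (e.g. BIC).
    from itertools import combinations

    def best_network(nodes, tier, data, local_score, max_parents=3):
        best = {}
        for v in nodes:
            allowed = [u for u in nodes if tier[u] < tier[v]]  # earlier tiers only
            candidates = (set(ps)
                          for k in range(min(max_parents, len(allowed)) + 1)
                          for ps in combinations(allowed, k))
            # decomposable score => per-node optimum is globally optimal
            best[v] = max(candidates, key=lambda ps: local_score(v, ps, data))
        return best   # maps v -> best parent set; edges u->v for u in best[v]
    ```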

  11. Network Security and the NPS Internet Firewall.

    DTIC Science & Technology

    1994-09-16

    NIS between clients and servers. The ypxfr service (Network Information Service) is used for transferring the /etc/passwd file from the master server to the slave servers. If the NIS domain name is guessed, an outsider can get a copy of the /etc/passwd file. Table 5: Network Services With Known… (TCP ports 20 and 21), for the ftp network service: outside% ftp 131.120.50.151 ftp> get /etc/passwd /tmp/passwd.inside (login process…) ftp> cd nttcp

  12. Optimizing Targeting of Intrusion Detection Systems in Social Networks

    NASA Astrophysics Data System (ADS)

    Puzis, Rami; Tubi, Meytal; Elovici, Yuval

    Internet users communicate with each other in various ways: by email, instant messaging, social networking, accessing Web sites, etc. In the course of communicating, users may unintentionally copy files contaminated with computer viruses and worms [1, 2] to their computers and spread them to other users [3]. (Hereafter we will use the term "threats" rather than computer viruses and computer worms.) The Internet is the chief source of these threats [4].

  13. Networking—a statistical physics perspective

    NASA Astrophysics Data System (ADS)

    Yeung, Chi Ho; Saad, David

    2013-03-01

    Networking encompasses a variety of tasks related to the communication of information on networks; it has a substantial economic and societal impact on a broad range of areas including transportation systems, wired and wireless communications and a range of Internet applications. As transportation and communication networks become increasingly more complex, the ever increasing demand for congestion control, higher traffic capacity, quality of service, robustness and reduced energy consumption requires new tools and methods to meet these conflicting requirements. The new methodology should serve for gaining better understanding of the properties of networking systems at the macroscopic level, as well as for the development of new principled optimization and management algorithms at the microscopic level. Methods of statistical physics seem best placed to provide new approaches as they have been developed specifically to deal with nonlinear large-scale systems. This review aims at presenting an overview of tools and methods that have been developed within the statistical physics community and that can be readily applied to address the emerging problems in networking. These include diffusion processes, methods from disordered systems and polymer physics, probabilistic inference, which have direct relevance to network routing, file and frequency distribution, the exploration of network structures and vulnerability, and various other practical networking applications.

  14. Strategies for Sharing Seismic Data Among Multiple Computer Platforms

    NASA Astrophysics Data System (ADS)

    Baker, L. M.; Fletcher, J. B.

    2001-12-01

    Seismic waveform data is readily available from a variety of sources, but it often comes in a distinct, instrument-specific data format. For example, data may be from portable seismographs, such as those made by Refraction Technology or Kinemetrics, from permanent seismograph arrays, such as the USGS Parkfield Dense Array, from public data centers, such as the IRIS Data Center, or from personal communication with other researchers through e-mail or ftp. A computer must be selected to import the data - usually whichever is the most suitable for reading the originating format. However, the computer best suited for a specific analysis may not be the same. When copies of the data are then made for analysis, a proliferation of copies of the same data results, in possibly incompatible, computer-specific formats. In addition, if an error is detected and corrected in one copy, or some other change is made, all the other copies must be updated to preserve their validity. Keeping track of what data is available, where it is located, and which copy is authoritative requires an effort that is easy to neglect. We solve this problem by importing waveform data to a shared network file server that is accessible to all our computers on our campus LAN. We use a Network Appliance file server running Sun's Network File System (NFS) software. Using an NFS client software package on each analysis computer, waveform data can then be read by our MatLab or Fortran applications without first copying the data. Since there is a single copy of the waveform data in a single location, the NFS file system hierarchy provides an implicit complete waveform data catalog and the single copy is inherently authoritative. Another part of our solution is to convert the original data into a blocked-binary format (known historically as USGS DR100 or VFBB format) that is interpreted by MatLab or Fortran library routines available on each computer so that the idiosyncrasies of each machine are not visible to the user. Commercial software packages, such as MatLab, also have the ability to share data in their own formats across multiple computer platforms. Our Fortran applications can create plot files in Adobe PostScript, Illustrator, and Portable Document Format (PDF) formats. Vendor support for reading these files is readily available on multiple computer platforms. We will illustrate by example our strategies for sharing seismic data among our multiple computer platforms, and we will discuss our positive and negative experiences. We will include our solutions for handling the different byte ordering, floating-point formats, and text file "end-of-line" conventions on the various computer platforms we use (6 different operating systems on 5 processor architectures).
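
    The byte-ordering tactic the abstract alludes to, fixing one canonical on-disk byte order and converting on read, is easy to illustrate with numpy's explicit-endianness dtypes. The file name and the choice of big-endian 32-bit floats below are assumptions for illustration, not the authors' DR100/VFBB layout.

    ```python
    # Sketch of canonical-byte-order storage: write big-endian 32-bit floats
    # regardless of the host CPU, convert to native order on read. The file
    # name and sample values are illustrative.
    import numpy as np

    # write: ">f4" forces big-endian on any platform
    samples = np.asarray([0.1, -0.2, 0.3], dtype=">f4")
    samples.tofile("waveform.bin")

    # read on any platform, then convert to the host's native float32
    data = np.fromfile("waveform.bin", dtype=">f4").astype(np.float32)
    print(data)
    ```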

  15. On Data Transfers Over Wide-Area Dedicated Connections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S.; Liu, Qiang

    Dedicated wide-area network connections are employed in big data and high-performance computing scenarios, since the absence of cross-traffic promises to make it easier to analyze and optimize data transfers over them. However, nonlinear transport dynamics and end-system complexity due to multi-core hosts and distributed file systems make these tasks surprisingly challenging. We present an overview of methods to analyze memory and disk file transfers using extensive measurements over 10 Gbps physical and emulated connections with 0–366 ms round trip times (RTTs). For memory transfers, we derive performance profiles of TCP and UDT throughput as a function of RTT, which show concave regions in contrast to the entirely convex regions predicted by previous models. These highly desirable concave regions can be expanded by utilizing large buffers and more parallel flows. We also present Poincaré maps and Lyapunov exponents of TCP and UDT throughput traces that indicate complex throughput dynamics. For disk file transfers, we show that throughput can be optimized using a combination of parallel I/O and network threads under direct I/O mode. Our initial throughput measurements of Lustre file systems mounted over long-haul connections using LNet routers show convex profiles indicative of I/O limits.
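
    For intuition about the RTT dependence being profiled, the classic Mathis et al. loss-based approximation, T ≈ MSS / (RTT · √p), shows why long RTTs depress single-flow TCP throughput and why parallel flows help. This is a textbook model offered only as background; it is not the profile measured in the paper.

    ```python
    # Classic Mathis et al. approximation: T ~ MSS / (RTT * sqrt(p)).
    # Aggregate throughput scales roughly with the number of parallel flows.
    # Parameter values are illustrative, not the paper's measurements.
    from math import sqrt

    def tcp_throughput_bps(mss_bytes=1460, rtt_s=0.1, loss=1e-6, flows=1):
        return flows * (mss_bytes * 8) / (rtt_s * sqrt(loss))

    for rtt_ms in (1, 10, 100, 366):
        gbps = tcp_throughput_bps(rtt_s=rtt_ms / 1000) / 1e9
        print(f"RTT {rtt_ms:>3} ms: ~{gbps:6.2f} Gbps per flow")
    ```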

  16. A computer program for the generation of logic networks from task chart data

    NASA Technical Reports Server (NTRS)

    Herbert, H. E.

    1980-01-01

    The Network Generation Program (NETGEN), which creates logic networks from task chart data is presented. NETGEN is written in CDC FORTRAN IV (Extended) and runs in a batch mode on the CDC 6000 and CYBER 170 series computers. Data is input via a two-card format and contains information regarding the specific tasks in a project. From this data, NETGEN constructs a logic network of related activities with each activity having unique predecessor and successor nodes, activity duration, descriptions, etc. NETGEN then prepares this data on two files that can be used in the Project Planning Analysis and Reporting System Batch Network Scheduling program and the EZPERT graphics program.
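
    The predecessor/successor bookkeeping NETGEN performs is simple to sketch. The task records below are invented stand-ins for its two-card input format, and the earliest-start pass is a generic scheduling illustration rather than NETGEN's actual output.

    ```python
    # Sketch of building an activity network with unique predecessor and
    # successor links, plus an earliest-start pass. Task records are
    # hypothetical stand-ins for NETGEN's two-card input format.
    tasks = [
        # (task id, duration, predecessors)
        ("design", 5, []),
        ("build",  8, ["design"]),
        ("test",   3, ["build"]),
        ("docs",   2, ["design"]),
    ]

    dur = {t: d for t, d, _ in tasks}
    succ = {t: [] for t, _, _ in tasks}
    for t, _, preds in tasks:
        for p in preds:
            succ[p].append(t)

    # earliest-start schedule (tasks are listed in dependency order)
    start = {}
    for t, _, preds in tasks:
        start[t] = max((start[p] + dur[p] for p in preds), default=0)
    print(succ)
    print(start)   # {'design': 0, 'build': 5, 'test': 13, 'docs': 5}
    ```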

  17. Approaches in highly parameterized inversion-PESTCommander, a graphical user interface for file and run management across networks

    USGS Publications Warehouse

    Karanovic, Marinko; Muffels, Christopher T.; Tonkin, Matthew J.; Hunt, Randall J.

    2012-01-01

    Models of environmental systems have become increasingly complex, incorporating increasingly large numbers of parameters in an effort to represent physical processes on a scale approaching that at which they occur in nature. Consequently, the inverse problem of parameter estimation (specifically, model calibration) and subsequent uncertainty analysis have become increasingly computation-intensive endeavors. Fortunately, advances in computing have made computational power equivalent to that of dozens to hundreds of desktop computers accessible through a variety of alternate means: modelers have various possibilities, ranging from traditional Local Area Networks (LANs) to cloud computing. Commonly used parameter estimation software is well suited to take advantage of the availability of such increased computing power. Unfortunately, logistical issues become increasingly important as an increasing number and variety of computers are brought to bear on the inverse problem. To facilitate efficient access to disparate computer resources, the PESTCommander program documented herein has been developed to provide a Graphical User Interface (GUI) that facilitates the management of model files ("file management") and remote launching and termination of "slave" computers across a distributed network of computers ("run management"). In version 1.0, described here, PESTCommander can access and ascertain resources across traditional Windows LANs; however, the architecture of PESTCommander has been developed with the intent that future releases will be able to access computing resources (1) via trusted domains established in Wide Area Networks (WANs) in multiple remote locations and (2) via heterogeneous networks of Windows- and Unix-based operating systems. The design of PESTCommander also makes it suitable for extension to other computational resources, such as those that are available via cloud computing. Version 1.0 of PESTCommander was developed primarily to work with the parameter estimation software PEST; the discussion presented in this report focuses on the use of PESTCommander together with Parallel PEST. However, PESTCommander can be used with a wide variety of programs and models that require management, distribution, and cleanup of files before or after model execution. In addition to its use with the Parallel PEST program suite, discussion is also included in this report regarding the use of PESTCommander with the Global Run Manager GENIE, which was developed simultaneously with PESTCommander.

  18. Characteristics of Urbanization in Five Watersheds of Anchorage, Alaska: Geographic Information System Data

    USGS Publications Warehouse

    Moran, Edward H.

    2002-01-01

    The report contains environmental and urban geographic information system data for 14 sites in 5 watersheds in Anchorage, Alaska. These sites were examined during summer in 1999 and 2000 to determine effects of urbanization on water quality. The data sets are Environmental Systems Research Institute, Inc., shapefiles, coverages, and images. Also included are an elevation grid and a triangulated irregular network. Although the data are intended for users with advanced geographic information system capabilities, simple images of the data also are available. An ArcView 3.2 project, an ArcGIS project, and 16 ArcExplorer2 projects are linked to the PDF file-based report. Some of these coverages are large files over 10 MB. The largest coverage, impervious cover, is 208 MB.

  19. Fault-Tolerant Local-Area Network

    NASA Technical Reports Server (NTRS)

    Morales, Sergio; Friedman, Gary L.

    1988-01-01

    This local-area network (LAN) for computers prevents a single-point failure from interrupting communication between network nodes. It includes two complete cables, LAN 1 and LAN 2. Microprocessor-based slave switches link the cables to such network-node devices as workstations, print servers, and file servers. Slave switches respond to commands from a master switch, connecting nodes to the two cable networks or disconnecting them so they are completely isolated. A system monitor and control computer (SMC) acts as gateway, allowing nodes on either cable to communicate with each other and ensuring that LAN 1 and LAN 2 are fully used when functioning properly. The network monitors and controls itself, automatically routes traffic for efficient use of resources, and isolates and corrects its own faults, with a potential dramatic reduction in time out of service.

  20. Experiences From NASA/Langley's DMSS Project

    NASA Technical Reports Server (NTRS)

    1996-01-01

    There is a trend in institutions with high performance computing and data management requirements to explore mass storage systems with peripherals directly attached to a high speed network. The Distributed Mass Storage System (DMSS) Project at the NASA Langley Research Center (LaRC) has placed such a system into production use. This paper will present the experiences, both good and bad, we have had with this system since putting it into production usage. The system comprises: 1) National Storage Laboratory (NSL)/UniTree 2.1, 2) IBM 9570 HIPPI attached disk arrays (both RAID 3 and RAID 5), 3) an IBM RS6000 server, 4) HIPPI/IPI3 third party transfers between the disk array systems and the supercomputer clients, a CRAY Y-MP and a CRAY 2, 5) a "warm spare" file server, 6) transition software to convert from CRAY's Data Migration Facility (DMF) based system to DMSS, 7) an NSC PS32 HIPPI switch, and 8) a STK 4490 robotic library accessed from the IBM RS6000 block mux interface. This paper will cover: the performance of the DMSS in the areas of file transfer rates, migration and recall, and file manipulation (listing, deleting, etc.); the appropriateness of a workstation class of file server for NSL/UniTree with LaRC's present storage requirements in mind; the role of the third party transfers between the supercomputers and the DMSS disk array systems in DMSS; a detailed comparison (both in performance and functionality) between the DMF and DMSS systems; LaRC's enhancements to the NSL/UniTree system administration environment; the mechanism for DMSS to provide file server redundancy; the statistics on the availability of DMSS; and the design of and experiences with the locally developed transparent transition software which allowed us to make over 1.5 million DMF files available to NSL/UniTree with minimal system outage.

  1. Development and Implementation of Kumamoto Technopolis Regional Database T-KIND

    NASA Astrophysics Data System (ADS)

    Onoue, Noriaki

    T-KIND (Techno-Kumamoto Information Network for Data-Base) is a system for effectively searching information on the technology, human resources, and industries needed to realize Kumamoto Technopolis. It is composed of a coded database, an image database, and a LAN inside the techno-research park that is the center of R & D in the Technopolis. It is built as an on-line system by networking general-purpose computers, minicomputers, optical disk file systems, and so on, and provides the service through public telephone lines. Two databases are now available: enterprise information and human resource information. The former covers about 4,000 enterprises, and the latter about 2,000 persons.

  2. Network of anatomical texts (NAnaTex), an open-source project for visualizing the interaction between anatomical terms.

    PubMed

    Momota, Ryusuke; Ohtsuka, Aiji

    2018-01-01

    Anatomy is the science and art of understanding the structure of the body and its components in relation to the functions of the whole-body system. Medicine is based on a deep understanding of anatomy, but quite a few introductory-level learners are overwhelmed by the sheer amount of anatomical terminology that must be understood, so they regard anatomy as a dull and dense subject. To help them learn anatomical terms in a more contextual way, we started a new open-source project, the Network of Anatomical Texts (NAnaTex), which visualizes relationships of body components by integrating text-based anatomical information using Cytoscape, a network visualization software platform. Here, we present a network of bones and muscles produced from literature descriptions. As this network is primarily text-based and does not require any programming knowledge, it is easy to implement new functions or provide extra information by making changes to the original text files. To facilitate collaborations, we deposited the source code files for the network into the GitHub repository ( https://github.com/ryusukemomota/nanatex ) so that anybody can participate in the evolution of the network and use it for their own non-profit purposes. This project should help not only introductory-level learners but also professional medical practitioners, who could use it as a quick reference.
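
    Cytoscape can load plain SIF ("node relation node") edge lists, so a text-derived anatomy network like the one described can be emitted with a few lines. The relations below are invented examples for illustration, not NAnaTex's actual data or code.

    ```python
    # Write a Cytoscape-readable SIF edge list from text-derived relations.
    # The anatomical relations here are invented examples.
    relations = [
        ("biceps_brachii", "originates_from", "scapula"),
        ("biceps_brachii", "inserts_on", "radius"),
        ("deltoid", "inserts_on", "humerus"),
    ]

    with open("anatomy.sif", "w") as f:
        for source, rel, target in relations:
            f.write(f"{source}\t{rel}\t{target}\n")
    ```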

  3. Collaborative, Trust-Based Security Mechanisms for a National Utility Intranet

    DTIC Science & Technology

    2007-09-01

    time_message_created … username bearnold operation_type copy from_file C:\etc\passwd\MPLpw.txt from_file_data_type ND //network data … time_message_created … username bearnold operation_type paste from_file C:\etc\passwd\MPLpw.txt //logon server password file … from_file_data_type ND from_file_caveat restricted-release to_file F:\Copy of C:\etc\passwd\MPLpw.txt //removable drive //end message data

  4. ION Configuration Editor

    NASA Technical Reports Server (NTRS)

    Borgen, Richard L.

    2013-01-01

    The configuration of ION (Interplanetary Overlay Network) network nodes is a manual task that is complex, time-consuming, and error-prone. This program seeks to accelerate this job and produce reliable configurations. The ION Configuration Editor is a model-based smart editor based on Eclipse Modeling Framework technology. An ION network designer uses this Eclipse-based GUI to construct a data model of the complete target network and then generate configurations. The data model is captured in an XML file. Intrinsic editor features aid in achieving model correctness, such as field fill-in, type-checking, lists of valid values, and suitable default values. Additionally, an explicit "validation" feature executes custom rules to catch more subtle model errors. A "survey" feature provides a set of reports providing an overview of the entire network, enabling a quick assessment of the model's completeness and correctness. The "configuration" feature produces the main final result, a complete set of ION configuration files (eight distinct file types) for each ION node in the network.

  5. 75 FR 41559 - In the Matter of E-Sync Networks, Inc. (n/k/a ESNI, Inc.), EchoCath, Inc., Edison Brothers Stores...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-16

    ... SECURITIES AND EXCHANGE COMMISSION [File No. 500-1] In the Matter of E[dash]Sync Networks, Inc. (n/k/a ESNI, Inc.), EchoCath, Inc., Edison Brothers Stores, Inc., Electronic Technology Group, Inc. (n... information concerning the securities of E-Sync Networks, Inc. (n/k/a ESNI, Inc.) because it has not filed any...

  6. Expert System Software

    NASA Technical Reports Server (NTRS)

    1989-01-01

    C Language Integrated Production System (CLIPS) is a software shell for developing expert systems, designed to allow research and development of artificial intelligence on conventional computers. Originally developed by Johnson Space Center, it enables highly efficient pattern matching. A collection of conditions and actions to be taken if the conditions are met is built into a rule network. Additional pertinent facts are matched against the rule network. Using the program, E.I. DuPont de Nemours & Co. is monitoring chemical production machines; California Polytechnic State University is investigating artificial intelligence in computer-aided design; Mentor Graphics has built a new Circuit Synthesis system; and Brooke and Brooke, a law firm, can determine which facts from a file are most important.
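
    The condition/action matching idea can be conveyed with a few-line forward-chaining sketch: rules fire when all their condition facts are present, possibly asserting new facts that enable further rules. This illustrates the concept only; it is not CLIPS syntax and does not implement CLIPS's efficient Rete-style matcher.

    ```python
    # Naive forward-chaining sketch of condition/action rules. Rule contents
    # are invented; real CLIPS uses its own rule language and a Rete network.
    rules = [
        ({"temperature_high", "pressure_rising"}, "open_relief_valve"),
        ({"open_relief_valve"}, "log_incident"),
    ]
    facts = {"temperature_high", "pressure_rising"}

    changed = True
    while changed:
        changed = False
        for conditions, action in rules:
            if conditions <= facts and action not in facts:
                facts.add(action)      # firing a rule asserts a new fact
                changed = True
    print(facts)
    ```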

  7. Computer Assisted Diagnostic Prescriptive Program in Reading and Mathematics. An Exemplary Micro-Computer Program and a Developer/Demonstrator Project, National Diffusion Network.

    ERIC Educational Resources Information Center

    Roberson, E. Wayne; Glowinski, Debra J.

    The Computer Assisted Diagnostic Prescriptive Program (CADPP) is a customized databased curriculum management system which permits the user to load the following into a filing/retrieval software system: (1) learning characteristics of individual students (e.g., age, instructional level, learning modality); (2) skill-oriented characteristics of…

  8. Service Without Servers

    DTIC Science & Technology

    1993-08-01

    Abstract: We propose a new style of operating system architecture appropriate for microkernel-based operating systems: services are implemented as a… retaining all the modularity advantages of microkernel technology. Since services reside in libraries, an application is free to use the library that… Keywords: Operating Systems, Microkernel, Network communication, File organization. 1. Introduction. In the

  9. Secure Mobile Distributed File System (MDFS)

    DTIC Science & Technology

    2011-03-01

    dissemination of data. In a mobile ad hoc network, there are two classes of devices: content generators and content consumers. One implementation of… use of infrastructure mode is necessary because current Android implementations do not support mobile ad hoc networking without modification of the…

  10. SNS programming environment user's guide

    NASA Technical Reports Server (NTRS)

    Tennille, Geoffrey M.; Howser, Lona M.; Humes, D. Creig; Cronin, Catherine K.; Bowen, John T.; Drozdowski, Joseph M.; Utley, Judith A.; Flynn, Theresa M.; Austin, Brenda A.

    1992-01-01

    The computing environment is briefly described for the Supercomputing Network Subsystem (SNS) of the Central Scientific Computing Complex of NASA Langley. The major SNS computers are a CRAY-2, a CRAY Y-MP, a CONVEX C-210, and a CONVEX C-220. The software is described that is common to all of these computers, including: the UNIX operating system, computer graphics, networking utilities, mass storage, and mathematical libraries. Also described is file management, validation, SNS configuration, documentation, and customer services.

  11. Taxonomic Workstation System User Guide

    DTIC Science & Technology

    1993-05-01

    of the Methods below: Method 1: - Type SAVELIST ACME.DEP - From the Main Menu, select DEVOP, then M RBASE S1VIURE, then INDEX LISTS. For Missions… utility allows the user to convert downloaded IWM ASCII files to Micro SAINT task networks. To use Hooker you must be fully conversant in the use and… hook up follow-on tasks/networks using a graphic point-and-shoot interface (Note: All other Micro SAINT variables, such as Release Conditions

  12. Performance Evaluation of Peer-to-Peer Progressive Download in Broadband Access Networks

    NASA Astrophysics Data System (ADS)

    Shibuya, Megumi; Ogishi, Tomohiko; Yamamoto, Shu

    P2P (Peer-to-Peer) file sharing architectures have scalable and cost-effective features. Hence, the application of P2P architectures to media streaming is attractive and expected to be an alternative to the current video streaming using IP multicast or content delivery systems because the current systems require expensive network infrastructures and large scale centralized cache storage systems. In this paper, we investigate the P2P progressive download enabling Internet video streaming services. We demonstrated the capability of the P2P progressive download in both laboratory test network as well as in the Internet. Through the experiments, we clarified the contribution of the FTTH links to the P2P progressive download in the heterogeneous access networks consisting of FTTH and ADSL links. We analyzed the cause of some download performance degradation occurred in the experiment and discussed about the effective methods to provide the video streaming service using P2P progressive download in the current heterogeneous networks.

  13. Hand-waving and interpretive dance: an introductory course on tensor networks

    NASA Astrophysics Data System (ADS)

    Bridgeman, Jacob C.; Chubb, Christopher T.

    2017-06-01

    The curse of dimensionality associated with the Hilbert space of spin systems provides a significant obstruction to the study of condensed matter systems. Tensor networks have proven an important tool in attempting to overcome this difficulty in both the numerical and analytic regimes. These notes form the basis for a seven lecture course, introducing the basics of a range of common tensor networks and algorithms. In particular, we cover: introductory tensor network notation, applications to quantum information, basic properties of matrix product states, a classification of quantum phases using tensor networks, algorithms for finding matrix product states, basic properties of projected entangled pair states, and multiscale entanglement renormalisation ansatz states. The lectures are intended to be generally accessible, although the relevance of many of the examples may be lost on students without a background in many-body physics/quantum information. For each lecture, several problems are given, with worked solutions in an ancillary file.
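
    A small taste of the bookkeeping these notes introduce: a matrix product state stored as one (bond, physical, bond) tensor per site, with its norm computed by contracting site by site. The dimensions and random tensors below are arbitrary illustrations, not an example from the lectures.

    ```python
    # Tiny MPS example: n sites, physical dimension d, bond dimension chi.
    # The "environment" E starts as a 1x1 matrix and absorbs one site tensor
    # (and its conjugate) per step, yielding <psi|psi> at the end.
    import numpy as np

    n, d, chi = 6, 2, 4
    rng = np.random.default_rng(0)
    mps = [rng.standard_normal((1 if i == 0 else chi, d,
                                1 if i == n - 1 else chi)) for i in range(n)]

    E = np.ones((1, 1))
    for A in mps:
        # E'[c, e] = sum_{a,b,s} E[a, b] * A[a, s, c] * conj(A)[b, s, e]
        E = np.einsum("ab,asc,bse->ce", E, A, A.conj())
    print("<psi|psi> =", float(E[0, 0]))
    ```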

  14. Analog-to-digital clinical data collection on networked workstations with graphic user interface.

    PubMed

    Lunt, D

    1991-02-01

    An innovative respiratory examination system has been developed that combines physiological response measurement, real-time graphic displays, user-driven operating sequences, and networked file archiving and review into a scientific research and clinical diagnosis tool. This newly constructed computer network is being used to enhance the research center's ability to perform patient pulmonary function examinations. Respiratory data are simultaneously acquired and graphically presented during patient breathing maneuvers and rapidly transformed into graphic and numeric reports, suitable for statistical analysis or database access. The environment consists of the hardware (Macintosh computer, MacADIOS converters, analog amplifiers), the software (HyperCard v2.0, HyperTalk, XCMDs), and the network (AppleTalk, fileservers, printers) as building blocks for data acquisition, analysis, editing, and storage. System operation modules include: Calibration, Examination, Reports, On-line Help Library, Graphic/Data Editing, and Network Storage.

  15. 77 FR 26796 - Order of Suspension of Trading; Airtrax, Inc., Amedia Networks, Inc., American Business Financial...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-07

    ... SECURITIES AND EXCHANGE COMMISSION [File No. 500-1] Order of Suspension of Trading; Airtrax, Inc., Amedia Networks, Inc., American Business Financial Services, Inc., Appalachian Bancshares, Inc., and... information concerning the securities of American Business Financial Services, Inc. because it has not filed...

  16. Access to Inter-Organization Computer Networks.

    DTIC Science & Technology

    1985-08-01

    management of computing and information systems, system management. When two… necessary control mechanisms. Message-based gateways that support non-real-time invocation of services (e.g., file and print servers, financial… operations (C.2.3), electronic mail (H.4.3), public policy issues (K.4.1), organizational impacts (K.4.3), management of computing and information systems (K.6

  17. 75 FR 16784 - Combined Notice of Filings #1

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-02

    ... Services, Inc. Description: Alabama Power Company et al submits for filing an amendment to the Network Integration Transmission Service Agreement with PowerSouth Energy Cooperative et al. Filed Date: 03/25/2010... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Combined Notice of Filings 1 March 25...

  18. New directions in the CernVM file system

    NASA Astrophysics Data System (ADS)

    Blomer, Jakob; Buncic, Predrag; Ganis, Gerardo; Hardi, Nikola; Meusel, Rene; Popescu, Radu

    2017-10-01

    The CernVM File System today is commonly used to host and distribute application software stacks. In addition to this core task, recent developments expand the scope of the file system into two new areas. Firstly, CernVM-FS emerges as a good match for container engines to distribute the container image contents. Compared to native container image distribution (e.g. through the "Docker registry"), CernVM-FS massively reduces the network traffic for image distribution. This has been shown, for instance, by a prototype integration of CernVM-FS into Mesos developed by Mesosphere, Inc. We present a path for a smooth integration of CernVM-FS and Docker. Secondly, CernVM-FS recently raised new interest as an option for the distribution of experiment conditions data. Here, the focus is on improved versioning capabilities of CernVM-FS that allow the conditions data of a run period to be linked to the state of a CernVM-FS repository. Lastly, CernVM-FS has been extended to provide a name space for physics data for the LIGO and CMS collaborations. Searching through a data namespace is often done by a central, experiment-specific database service. A name space on CernVM-FS can particularly benefit from an existing, scalable infrastructure and from the POSIX file system interface.

  19. A convertor and user interface to import CAD files into worldtoolkit virtual reality systems

    NASA Technical Reports Server (NTRS)

    Wang, Peter Hor-Ching

    1996-01-01

    Virtual Reality (VR) is a rapidly developing human-to-computer interface technology. VR can be considered a three-dimensional, computer-generated Virtual World (VW) which can sense particular aspects of a user's behavior, allow the user to manipulate objects interactively, and render the VW in real time accordingly. The user is totally immersed in the virtual world and feels a sense of being transported into that VW. NASA/MSFC's Computer Application Virtual Environments (CAVE) lab has been developing space-related VR applications since 1990. The VR systems in the CAVE lab are based on the VPL RB2 system, which consists of a VPL RB2 control tower, an LX eyephone, an Isotrak polhemus sensor, two Fastrak polhemus sensors, a Flock of Birds sensor, and two VPL DG2 DataGloves. A dynamics animator called Body Electric from VPL is used as the control system to interface with all the input/output devices and to provide network communications as well as a VR programming environment. RB2 Swivel 3D is used as the modelling program to construct the VWs. A severe limitation of the VPL VR system is the use of RB2 Swivel 3D, which restricts files to a maximum of 1020 objects and lacks advanced graphics texture mapping. The other limitation is that the VPL VR system is a turn-key system which does not provide the flexibility for the user to add new sensors or a C language interface. Recently, the NASA/MSFC CAVE lab has provided VR systems built on Sense8 WorldToolKit (WTK), a C library for creating VR development environments. WTK provides device drivers for most of the sensors and eyephones available on the VR market. WTK accepts several CAD file formats, such as the Sense8 Neutral File Format, AutoCAD DXF and 3D Studio file formats, the Wavefront OBJ file format, the VideoScape GEO file format, and the Intergraph EMS and CATIA stereolithography (STL) file formats. WTK functions are object-oriented in their naming convention, are grouped into classes, and provide an easy C language interface. Using a CAD or modelling program to build a VW for WTK VR applications, we typically construct the stationary universe with all the geometric objects except the dynamic objects, and create each dynamic object in an individual file.

  20. Management and development of local area network upgrade prototype

    NASA Technical Reports Server (NTRS)

    Fouser, T. J.

    1981-01-01

    Given the situation of having management and development users accessing a central computing facility, and given that these same users need local computation and storage, a commercially available networking system such as CP/NET from Digital Research provides the building blocks for connecting intelligent microsystems to file and print services. The major problems to be overcome in the implementation of such a network are the dearth of intelligent communication front-ends for the microcomputers and the lack of a rich set of management and software development tools.

  1. A Computerized Bibliographic Service for the Blind and Physically Handicapped

    ERIC Educational Resources Information Center

    Friedman, Morton H.

    1975-01-01

    Describes a three-year plan and a system study designed to produce a computerized union catalog and an in-process file for both the Library of Congress Division for the Blind and Physically Handicapped and a network of almost 200 libraries throughout the nation. (Author/PF)

  2. Execute-Only Attacks against Execute-Only Defenses

    DTIC Science & Technology

    2016-02-18

    and network cards, do not undergo translation by the MMU and are unaffected by EPT permissions. The idea of exploiting systems via DMA is well studied… dump. There are two ways an attacker can gain access to a file opened using O_DIRECT. In the most straightforward scenario, the victim process may

  3. Are Computer Science Students Ready for the Real World.

    ERIC Educational Resources Information Center

    Elliot, Noreen

    The typical undergraduate program in computer science includes an introduction to hardware and operating systems, file processing and database organization, data communication and networking, and programming. However, many graduates may lack the ability to integrate the concepts "learned" into a skill set and pattern of approaching problems that…

  4. 75 FR 69717 - Self-Regulatory Organizations; The NASDAQ Stock Market LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-15

    ... new Nasdaq product offerings, pending the resolution to this matter. Thus, offering a Managed Data... trading systems ("ATSs"), including dark pools and electronic communication networks ("ECNs"). Each.... A proliferation of dark pools and other ATSs operate profitably with fragmentary shares of...

  5. 76 FR 43354 - Records Schedules; Availability and Request for Comments

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-20

    ...). Master files of an electronic information system containing network security information such as lists of... scores. 11. Department of Homeland Security, U.S. Immigration and Customs Enforcement (N1-567-11-5, 2... enforcement officers to receive agency credentials. 12. Department of Homeland Security, U.S. Immigration and...

  6. Data Hemorrhages in the Health-Care Sector

    NASA Astrophysics Data System (ADS)

    Johnson, M. Eric

    Confidential data hemorrhaging from health-care providers poses financial risks to firms and medical risks to patients. We examine the consequences of data hemorrhages, including privacy violations, medical fraud, financial identity theft, and medical identity theft. We also examine the types and sources of data hemorrhages, focusing on inadvertent disclosures. Through an analysis of leaked files, we examine data hemorrhages stemming from inadvertent disclosures on internet-based file sharing networks. We characterize the security risk for a group of health-care organizations using a direct analysis of leaked files. These files contained highly sensitive medical and personal information that could be maliciously exploited by criminals seeking to commit medical and financial identity theft. We also present evidence of the threat by examining user-issued searches. Our analysis demonstrates both the substantial threat and vulnerability for the health-care sector and the unique complexity exhibited by the US health-care system.

  7. Saguaro: a distributed operating system based on pools of servers. Annual report, 1 January 1984-31 December 1986

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, G.R.

    1986-03-03

    Prototypes of components of the Saguaro distributed operating system were implemented and the design of the entire system refined based on the experience. The philosophy behind Saguaro is to support the illusion of a single virtual machine while taking advantage of the concurrency and robustness that are possible in a network architecture. Within the system, these advantages are realized by the use of pools of server processes and decentralized allocation protocols. Potential concurrency and robustness are also made available to the user through low-cost mechanisms to control placement of executing commands and files, and to support semi-transparent file replication and access. Another unique aspect of Saguaro is its extensive use of a type system to describe user data such as files and to specify the types of arguments to commands and procedures. This enables the system to assist in type checking and leads to a user interface in which command-specific templates are available to facilitate command invocation. A mechanism, channels, is also provided to enable users to construct applications containing general graphs of communicating processes.

  8. ILRS Station Reporting

    NASA Technical Reports Server (NTRS)

    Noll, Carey E.; Pearlman, Michael Reisman; Torrence, Mark H.

    2013-01-01

    Network stations provided system configuration documentation upon joining the ILRS. This information, found in the various site and system log files available on the ILRS website, is essential to the ILRS analysis centers, combination centers, and general user community. Therefore, it is imperative that station personnel inform the ILRS community in a timely fashion when changes to the system occur. This poster provides some information about the various documentation that must be maintained. The ILRS network consists of over fifty global sites actively ranging to over sixty satellites as well as five lunar reflectors. Information about these stations is available on the ILRS website (http://ilrs.gsfc.nasa.gov/network/stations/index.html). The ILRS Analysis Centers must have current information about the stations and their system configuration in order to use their data in generation of derived products. However, not all information available on the ILRS website is as up-to-date as necessary for correct analysis of their data.

  9. Secure Peer-to-Peer Networks for Scientific Information Sharing

    NASA Technical Reports Server (NTRS)

    Karimabadi, Homa

    2012-01-01

    The most common means of remote scientific collaboration today includes the trio of e-mail for electronic communication, FTP for file sharing, and personalized Web sites for dissemination of papers and research results. With the growth of broadband Internet, there has been a desire to share large files (movies, files, scientific data files) over the Internet. Email has limits on the size of files that can be attached and transmitted. FTP is often used to share large files, but this requires the user to set up an FTP site for which it is hard to set group privileges, it is not straightforward for everyone, and the content is not searchable. Peer-to-peer technology (P2P), which has been overwhelmingly successful in popular content distribution, is the basis for development of a scientific collaboratory called Scientific Peer Network (SciPerNet). This technology combines social networking with P2P file sharing. SciPerNet will be a standalone application, written in Java and Swing, thus ensuring portability to a number of different platforms. Some of the features include user authentication, search capability, seamless integration with a data center, the ability to create groups and social networks, and on-line chat. In contrast to P2P networks such as Gnutella, BitTorrent, and others, SciPerNet incorporates three design elements that are critical to application of P2P for scientific purposes: user authentication, data integrity validation, and reliable searching. SciPerNet also provides a complementary solution to virtual observatories by enabling distributed collaboration and sharing of downloaded and/or processed data among scientists. This will, in turn, increase scientific returns from NASA missions. As such, SciPerNet can serve a two-fold purpose for NASA: a cost-savings software as well as a productivity tool for scientists working with data from NASA missions.
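
    As an illustration of the data-integrity validation listed above, the following minimal Python sketch verifies a file received from a peer against a digest published by its owner; the function names and the choice of SHA-256 are assumptions for illustration, not details of SciPerNet itself.

    ```python
    import hashlib

    def sha256_of_file(path: str, chunk_size: int = 65536) -> str:
        """Compute the SHA-256 digest of a file in streaming fashion."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify_download(path: str, published_digest: str) -> bool:
        """Accept a peer-supplied file only if it matches the published digest."""
        return sha256_of_file(path) == published_digest
    ```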

  10. Computer network environment planning and analysis

    NASA Technical Reports Server (NTRS)

    Dalphin, John F.

    1989-01-01

    The GSFC Computer Network Environment provides a broadband RF cable between campus buildings and ethernet spines in buildings for the interlinking of Local Area Networks (LANs). This system provides terminal and computer linkage among host and user systems thereby providing E-mail services, file exchange capability, and certain distributed computing opportunities. The Environment is designed to be transparent and supports multiple protocols. Networking at Goddard has a short history and has been under coordinated control of a Network Steering Committee for slightly more than two years; network growth has been rapid with more than 1500 nodes currently addressed and greater expansion expected. A new RF cable system with a different topology is being installed during summer 1989; consideration of a fiber optics system for the future will begin soon. Summer study was directed toward Network Steering Committee operation and planning plus consideration of Center Network Environment analysis and modeling. Biweekly Steering Committee meetings were attended to learn the background of the network and the concerns of those managing it. Suggestions for historical data gathering have been made to support future planning and modeling. Data Systems Dynamic Simulator, a simulation package developed at NASA and maintained at GSFC, was studied as a possible modeling tool for the network environment. A modeling concept based on a hierarchical model was hypothesized for further development. Such a model would allow input of newly updated parameters and would provide an estimation of the behavior of the network.

  11. Computer assisted audit techniques for UNIX (UNIX-CAATS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polk, W.T.

    1991-12-31

    Federal and DOE regulations impose specific requirements for internal controls of computer systems. These controls include adequate separation of duties and sufficient controls for access of system and data. The DOE Inspector General's Office has the responsibility to examine internal controls, as well as efficient use of computer system resources. As a result, DOE supported NIST development of computer assisted audit techniques to examine BSD UNIX computers (UNIX-CAATS). These systems were selected due to the increasing number of UNIX workstations in use within DOE. This paper describes the design and development of these techniques, as well as the results of testing at NIST and the first audit at a DOE site. UNIX-CAATS consists of tools which examine security of passwords, file systems, and network access. In addition, a tool was developed to examine efficiency of disk utilization. Test results at NIST indicated inadequate password management, as well as weak network resource controls. File system security was considered adequate. Audit results at a DOE site indicated weak password management and inefficient disk utilization. During the audit, we also found improvements to UNIX-CAATS were needed when applied to large systems. NIST plans to enhance the techniques developed for DOE/IG in future work. This future work would leverage currently available tools, along with needed enhancements. These enhancements would enable DOE/IG to audit large systems, such as supercomputers.
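
    The paper does not reproduce its tooling, but an audit check in the same spirit — flagging world-writable files as a file-system control weakness — can be sketched in a few lines of Python; the helper name and the starting directory are illustrative, not part of UNIX-CAATS.

    ```python
    import os
    import stat

    def world_writable(root: str):
        """Walk a directory tree and yield files that any user may modify,
        one kind of file-system weakness an audit tool would flag."""
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    mode = os.stat(path).st_mode
                except OSError:
                    continue  # unreadable or vanished; skip it
                if mode & stat.S_IWOTH:
                    yield path

    for finding in world_writable("/etc"):  # starting directory is illustrative
        print("world-writable:", finding)
    ```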

  12. ECFS: A decentralized, distributed and fault-tolerant FUSE filesystem for the LHCb online farm

    NASA Astrophysics Data System (ADS)

    Rybczynski, Tomasz; Bonaccorsi, Enrico; Neufeld, Niko

    2014-06-01

    The LHCb experiment records millions of proton collisions every second, but only a fraction of them are useful for LHCb physics. In order to filter out the "bad events" a large farm of x86 servers (~2000 nodes) has been put in place. These servers boot from and run from NFS; however, they use their local disks to temporarily store data which cannot be processed in real time ("data deferring"). These events are subsequently processed when there are no live data coming in. The effective CPU power is thus greatly increased. This gain in CPU power depends critically on the availability of the local disks. For cost and power reasons, mirroring (RAID-1) is not used, leading to many operational headaches with failing disks and disk errors or server failures induced by faulty disks. To mitigate these problems and increase the reliability of the LHCb farm, while at the same time keeping cost and power consumption low, an extensive study of existing highly available and distributed file systems has been done. While many distributed file systems provide reliability by file replication, none of the evaluated ones supports erasure algorithms. A decentralized, distributed and fault-tolerant "write once read many" file system has been designed and implemented as a proof of concept, with fault tolerance that avoids expensive (in terms of disk space) file replication and a unified namespace as its main goals. This paper describes the design and the implementation of the Erasure Codes File System (ECFS) and presents the specialised FUSE interface for Linux. Depending on the encoding algorithm, ECFS will use a certain number of target directories as a backend to store the segments that compose the encoded data. When the target directories are mounted via nfs/autofs, ECFS acts as a network file system over block-level RAID spanning multiple servers.
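
    The simplest instance of the erasure-coding idea — a single XOR parity fragment that lets any one lost segment be rebuilt from the rest — can be sketched as follows; this is a minimal illustration of the technique, not the ECFS code, and the function names are assumptions.

    ```python
    def encode_with_parity(data: bytes, k: int):
        """Split data into k equal-size fragments plus one XOR parity fragment.
        Any single lost fragment can be rebuilt from the remaining k."""
        frag_len = -(-len(data) // k)             # ceiling division
        padded = data.ljust(frag_len * k, b"\0")  # pad to a multiple of k
        frags = [padded[i * frag_len:(i + 1) * frag_len] for i in range(k)]
        parity = bytes(frag_len)                  # all-zero accumulator
        for frag in frags:
            parity = bytes(a ^ b for a, b in zip(parity, frag))
        return frags, parity

    def rebuild(frags, parity, missing: int):
        """Reconstruct the fragment at index `missing` by XOR-ing the others."""
        acc = parity
        for i, frag in enumerate(frags):
            if i != missing:
                acc = bytes(a ^ b for a, b in zip(acc, frag))
        return acc

    frags, parity = encode_with_parity(b"LHCb event data", k=4)
    assert rebuild(frags, parity, missing=2) == frags[2]
    ```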

  13. Satellite control system nucleus for the Brazilian complete space mission

    NASA Astrophysics Data System (ADS)

    Yamaguti, Wilson; Decarvalhovieira, Anastacio Emanuel; Deoliveira, Julia Leocadia; Cardoso, Paulo Eduardo; Dacosta, Petronio Osorio

    1990-10-01

    The nucleus of the satellite control system for the Brazilian data collecting and remote sensing satellites is described. The system is based on Digital Equipment Corporation computers and the VAX/VMS operating system. The nucleus provides access control, system configuration, event management, history file management, time synchronization, wall display control, and X.25 data communication network access facilities. The architecture of the nucleus and its main implementation aspects are described. The implementation experience acquired is considered.

  14. Network Solutions.

    ERIC Educational Resources Information Center

    Vietzke, Robert; And Others

    1996-01-01

    This special section explains the latest developments in networking technologies, profiles school districts benefiting from successful implementations, and reviews new products for building networks. Highlights include ATM (asynchronous transfer mode), cable modems, networking switches, Internet screening software, file servers, network management…

  15. Low-Speed Fingerprint Image Capture System User's Guide, June 1, 1993

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitus, B.R.; Goddard, J.S.; Jatko, W.B.

    1993-06-01

    The Low-Speed Fingerprint Image Capture System (LS-FICS) uses a Sun workstation controlling a Lenzar ElectroOptics Opacity 1000 imaging system to digitize fingerprint card images to support the Federal Bureau of Investigation's (FBI's) Automated Fingerprint Identification System (AFIS) program. The system also supports the operations performed by the Oak Ridge National Laboratory (ORNL)-developed Image Transmission Network (ITN) prototype card scanning system. The input to the system is a single FBI fingerprint card of the agreed-upon standard format and a user-specified identification number. The output is a file formatted to be compatible with the National Institute of Standards and Technology (NIST) draft standard for fingerprint data exchange dated June 10, 1992. These NIST-compatible files contain the required print and text images. The LS-FICS is designed to provide the FBI with the capability of scanning fingerprint cards into a digital format. The FBI will replicate the system to generate a database of test images. The Host Workstation contains the image data paths and the compression algorithm. A local area network interface, disk storage, and tape drive are used for image storage and retrieval, and the Lenzar Opacity 1000 scanner is used to acquire the image. The scanner is capable of resolving 500 pixels/in. in both the x and y directions. The print images are maintained in full 8-bit gray scale and compressed with an FBI-approved wavelet-based compression algorithm. The text fields are downsampled to 250 pixels/in. and 2-bit gray scale. The text images are then compressed using a lossless Huffman coding scheme. The text fields retrieved from the output files are easily interpreted when displayed on the screen. Detailed procedures are provided for system calibration and operation. Software tools are provided to verify proper system operation.

  16. Data transfer nodes and demonstration of 100-400 Gbps wide area throughput using the Caltech SDN testbed

    NASA Astrophysics Data System (ADS)

    Mughal, A.; Newman, H.

    2017-10-01

    We review and demonstrate the design of efficient data transfer nodes (DTNs), from the perspective of the highest throughput over both local and wide area networks, as well as the highest performance per unit cost. A careful system-level design is required for the hardware, firmware, OS and software components. Furthermore, additional tuning of these components, and the identification and elimination of any remaining bottlenecks, is needed once the system is assembled and commissioned, in order to obtain optimal performance. For high throughput data transfers, specialized software is used to overcome the traditional limits in performance caused by the OS, file system, file structures used, etc. Concretely, we will discuss and present the latest results using Fast Data Transfer (FDT), developed by Caltech. We present and discuss the design choices for three generations of Caltech DTNs. Their transfer capabilities range from 40 Gbps to 400 Gbps. Disk throughput is still the biggest challenge in the current generation of available hardware. However, new NVMe drives combined with RDMA and a new NVMe network fabric are expected to improve the overall data-transfer throughput and simultaneously reduce the CPU load on the end nodes.

  17. Small Aircraft Data Distribution System

    NASA Technical Reports Server (NTRS)

    Chazanoff, Seth L.; Dinardo, Steven J.

    2012-01-01

    The CARVE Small Aircraft Data Distribution System acquires the aircraft location and attitude data that is required by the various programs running on a distributed network. This system distributes the data it acquires to the data acquisition programs for inclusion in their data files. It uses UDP (User Datagram Protocol) to broadcast data over a LAN (Local Area Network) to any programs that might have a use for the data. The program is easily adaptable to acquire additional data and log that data to disk. The current version also drives displays using precision pitch and roll information to aid the pilot in maintaining a level-level attitude for radar/radiometer mapping beyond the degree available by flying visually or using a standard gyro-driven attitude indicator. The software is designed to acquire an array of data to help the mission manager make real-time decisions as to the effectiveness of the flight. This data is displayed for the mission manager and broadcast to the other experiments on the aircraft for inclusion in their data files. The program also drives real-time precision pitch and roll displays for the pilot and copilot to aid them in maintaining the desired attitude, when required, during data acquisition on mapping lines.
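
    The UDP broadcast pattern described above can be sketched in a few lines of Python; the port number and the JSON field names are hypothetical, not those of the CARVE software, and in practice the sender and receiver run as separate processes on the LAN.

    ```python
    import json
    import socket

    # Sender: broadcast one attitude sample on the LAN (fields are illustrative).
    sample = {"pitch_deg": 0.4, "roll_deg": -0.2, "lat": 64.84, "lon": -147.72}
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    tx.sendto(json.dumps(sample).encode(), ("255.255.255.255", 5005))

    # Receiver (normally a separate program): any host bound to the same
    # port receives the datagram and can log it into its own data files.
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("", 5005))
    data, sender = rx.recvfrom(4096)
    print(sender, json.loads(data))
    ```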

  18. Prototype Implementation of Web and Desktop Applications for ALMA Science Verification Data and the Lessons Learned

    NASA Astrophysics Data System (ADS)

    Eguchi, S.; Kawasaki, W.; Shirasaki, Y.; Komiya, Y.; Kosugi, G.; Ohishi, M.; Mizumoto, Y.

    2013-10-01

    ALMA is estimated to generate TB-scale data in a single observation; astronomers need to identify which part of the data they are really interested in. We have been developing new GUI software for this purpose utilizing the VO interface: the ALMA Web Quick Look System (ALMA WebQL) and the ALMA Desktop Application (Vissage). The former is written in JavaScript and HTML5 generated from Java code by the Google Web Toolkit, and the latter is in pure Java. An essential point of our approach is how to reduce network traffic: we prepare, in advance, "compressed" FITS files binned 2x2x1 (horizontal, vertical, and spectral directions, respectively), 2x2x2, 4x4x2, and so on. These files are hidden from users, and WebQL automatically chooses the proper one for each user operation. Through this work, we find that network traffic in our system is still a bottleneck towards TB-scale data distribution. Hence we have to develop alternative data containers for much faster data processing. In this paper, we introduce our data analysis systems, and describe what we learned through the development.
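
    The pre-computed binned files can be illustrated with a short NumPy sketch that averages a spectral cube over blocks; the array shapes and the choice of mean binning are assumptions for illustration, not the WebQL implementation.

    ```python
    import numpy as np

    def bin_cube(cube: np.ndarray, fy: int, fx: int, fz: int) -> np.ndarray:
        """Average a (nz, ny, nx) spectral cube over fz x fy x fx blocks,
        mimicking the pre-computed 2x2x1, 2x2x2, ... binned files."""
        nz, ny, nx = cube.shape
        cube = cube[:nz - nz % fz, :ny - ny % fy, :nx - nx % fx]  # trim remainders
        return cube.reshape(nz // fz, fz, ny // fy, fy,
                            nx // fx, fx).mean(axis=(1, 3, 5))

    cube = np.random.rand(100, 512, 512).astype(np.float32)  # toy cube
    preview = bin_cube(cube, fy=2, fx=2, fz=1)  # 2x2 spatial, no spectral binning
    print(preview.shape)                        # -> (100, 256, 256)
    ```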

  19. Filmless PACS in a multiple facility environment

    NASA Astrophysics Data System (ADS)

    Wilson, Dennis L.; Glicksman, Robert A.; Prior, Fred W.; Siu, Kai-Yeung; Goldburgh, Mitchell M.

    1996-05-01

    A Picture Archiving and Communication System centered on a shared image file server can support a filmless hospital. Systems based on this architecture have proven themselves in over four years of clinical operation. Changes in healthcare delivery are causing radiology groups to support multiple facilities for remote clinic support and consolidation of services. There will be a corresponding need for communicating over a standardized wide area network (WAN). Interactive workflow, a natural extension to the single facility case, requires a means to work effectively and seamlessly across moderate to low speed communication networks. Several schemes for supporting a consortium of medical treatment facilities over a WAN are explored. Both centralized and distributed database approaches are evaluated against several WAN scenarios. Likewise, several architectures for distributing image file servers or buffers over a WAN are explored, along with the caching and distribution strategies that support them. An open system implementation is critical to the success of a wide area system. The role of the Digital Imaging and Communications in Medicine (DICOM) standard in supporting multi-facility and multi-vendor open systems is also addressed. An open system can be achieved by using a DICOM server to provide a view of the system-wide distributed database. The DICOM server interface to a local version of the global database lets a local workstation treat the multiple, distributed data servers as though they were one local server for purposes of examination queries. The query will recover information about the examination that will permit retrieval over the network from the server on which the examination resides. For efficiency reasons, the ability to build cross-facility radiologist worklists and clinician-oriented patient folders is essential. The technologies of the World Wide Web can be used to generate worklists and patient folders across facilities. A reliable broadcast protocol may be a convenient way to notify many different users and many image servers about new activities in the network of image servers. In addition to ensuring reliability of message delivery and global serialization of each broadcast message in the network, the broadcast protocol should not introduce significant communication overhead.

  1. Tooth labeling in cone-beam CT using deep convolutional neural network for forensic identification

    NASA Astrophysics Data System (ADS)

    Miki, Yuma; Muramatsu, Chisako; Hayashi, Tatsuro; Zhou, Xiangrong; Hara, Takeshi; Katsumata, Akitoshi; Fujita, Hiroshi

    2017-03-01

    In large disasters, dental records play an important role in forensic identification. However, filing dental charts for corpses is not an easy task for general dentists, and it is laborious and time-consuming work in cases of large-scale disasters. We have been investigating a tooth labeling method for dental cone-beam CT images for the purpose of automatic filing of dental charts. In our method, individual teeth in CT images are detected and classified into seven tooth types using deep convolutional neural networks. We employed a fully convolutional network using the AlexNet architecture for detecting each tooth and applied our previous method using regular AlexNet for classifying the detected teeth into the seven tooth types. From 52 CT volumes obtained by two imaging systems, five volumes from each system were randomly selected as test data, and the remaining 42 cases were used as training data. The result showed a tooth detection accuracy of 77.4% with an average of 5.8 false detections per image. The result indicates the potential utility of the proposed method for automatic recording of dental information.

  2. The US Nuclear Data Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1993-10-19

    This report discusses the following topics: US Nuclear Data Network Meeting; TUNL A=3-20 Data Project Activity Report 1993; INEL Mass-chain Evaluation Project Activity Report for 1993; 1993 Isotopes; Nuclear Data Project Activity Report; The NNDC Activity Report Parts A and B; Minutes of the Formats and Procedures Subcommittee; Evaluation of High-spin Nuclear Data for ENSDF and Table of Superdeformed Nuclear Bands; Proposal for Support of an Experimental High-spin Data File/Data-Network Coordinator; Radioactive Decay and Applications; A Plan for a Horizontal Evaluation of Decay Data; ENSDF On-line System; The MacNuclide Project; Expanding the Scope of the Nuclear Structure Reference File; ENSDAT: Evaluated Nuclear Structure Drawings and Tables; Cross Section Evaluation Working Group (CSEWG) and CSEWG Strategy Session; A Draft Proposal for a USNDN Program Advisory Council; Recommendations of Focus Group 1; Recommendations of Focus Group 2; Recommendations of Focus Group 3; Recommendations of Focus Group 4; The Table of Isotopes; The Isotopes CD-ROM; Electronic Table of Isotopes (ETOI); and Electronic Access to Nuclear Data.

  3. 31 CFR 1024.311 - Filing obligations.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... CRIMES ENFORCEMENT NETWORK, DEPARTMENT OF THE TREASURY RULES FOR MUTUAL FUNDS Reports Required To Be Made By Mutual Funds § 1024.311 Filing obligations. Refer to § 1010.311 of this chapter for reports of transactions in currency filing obligations for mutual funds. ...

  4. Report on Approaches to Database Translation. Final Report.

    ERIC Educational Resources Information Center

    Gallagher, Leonard; Salazar, Sandra

    This report describes approaches to database translation (i.e., transferring data and data definitions from a source, either a database management system (DBMS) or a batch file, to a target DBMS), and recommends a method for representing the data structures of newly-proposed network and relational data models in a form suitable for database…

  5. Electronic Mail Is One High-Tech Management Tool that Really Delivers.

    ERIC Educational Resources Information Center

    Parker, Donald C.

    1987-01-01

    Describes an electronic mail system used by the Horseheads (New York) Central School District's eight schools and central office that saves time and enhances productivity. This software calls up information from the district's computer network and sends it to other users' special files--electronic "mailboxes" set aside for messages and…

  6. A Survey of Some Approaches to Distributed Data Base & Distributed File System Architecture.

    DTIC Science & Technology

    1980-01-01

    Figure 7-1: MUFFIN logical architecture (A = A Cell, D = D Cell). ...and Applied Mathematics (14), December 1966. [Kimbleton 79] Kimbleton, Stephen; Wang, Pearl; and Fong, Elizabeth. XNDM: An Experimental Network

  7. 77 FR 2331 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing and Immediate Effectiveness of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-17

    ... trading systems ("ATSs"), including dark pools and electronic communication networks ("ECNs"). Each... data at no charge on its Web site in order to attract more order flow, and it uses market data revenue... users. A proliferation of dark pools and other...

  8. 76 FR 75593 - Self-Regulatory Organizations; The NASDAQ Stock Market, LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-02

    ... available on the Exchange's Web site at http://www.nasdaq.cchwallstreet.com, at the principal office of the... trading systems ("ATSs"), including dark pools and electronic communication networks ("ECNs"). Each... ECN, BATS Trading and Direct Edge. A proliferation of dark pools and other ATSs operate profitably...

  9. 78 FR 33136 - Self-Regulatory Organizations; BATS Y-Exchange, Inc.; Order Granting Approval to Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-03

    ... that, due to technical limitations in order management systems and routing networks, such member... SECURITIES AND EXCHANGE COMMISSION [Release No. 34-69643; File Nos. SR-BYX-2013-008] Self... the Securities and Exchange Commission ("Commission") pursuant to Section 19(b)(1) of the Securities...

  10. A Low Cost Microcomputer Laboratory for Investigating Computer Architecture.

    ERIC Educational Resources Information Center

    Mitchell, Eugene E., Ed.

    1980-01-01

    Described is a microcomputer laboratory at the United States Military Academy at West Point, New York, which provides easy access to non-volatile memory and a single input/output file system for 16 microcomputer laboratory positions. A microcomputer network that has a centralized data base is implemented using the concepts of computer network…

  11. Network Visualization Project (NVP)

    DTIC Science & Technology

    2016-07-01

    Keywords: network visualization, network traffic analysis, network forensics. ... Dshell is a command-line framework used for network forensic analysis. Dshell processes existing pcap files and filters output information based on

  12. Situational Awareness of Network System Roles (SANSR)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huffer, Kelly M; Reed, Joel W

    In a large enterprise it is difficult for cyber security analysts to know what services and roles every machine on the network is performing (e.g., file server, domain name server, email server). Using network flow data already collected by most enterprises, we developed a proof-of-concept tool that discovers the roles of a system using both clustering and categorization techniques. The tool's role information would allow cyber analysts to detect consequential changes in the network, initiate incident response plans, and optimize their security posture. The results of this proof-of-concept tool proved to be quite accurate on three real data sets. We will present the algorithms used in the tool, describe the results of preliminary testing, provide visualizations of the results, and discuss areas for future work. Without this kind of situational awareness, cyber analysts cannot quickly diagnose an attack or prioritize remedial actions.
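
    A toy version of the clustering step — grouping hosts by per-port traffic profiles derived from flow records — might look like the following; the feature set and the use of scikit-learn's KMeans are assumptions for illustration, not the tool's actual algorithm.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical per-host features aggregated from flow records:
    # [fraction of traffic on port 53, port 25, port 445, mean bytes/flow].
    # In practice the features would be scaled so no one column dominates.
    hosts = ["10.0.0.5", "10.0.0.9", "10.0.0.12", "10.0.0.40"]
    features = np.array([
        [0.90, 0.01, 0.02, 180.0],    # profile resembling a DNS server
        [0.02, 0.85, 0.01, 4200.0],   # profile resembling a mail server
        [0.01, 0.02, 0.91, 52000.0],  # profile resembling a file server
        [0.88, 0.03, 0.01, 150.0],    # another DNS-like host
    ])

    labels = KMeans(n_clusters=3, n_init=10).fit_predict(features)
    for host, label in zip(hosts, labels):
        print(host, "-> role cluster", label)
    ```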

  13. Livermore Big Artificial Neural Network Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Essen, Brian Van; Jacobs, Sam; Kim, Hyojin

    2016-07-01

    LBANN is a toolkit designed to train artificial neural networks efficiently on high performance computing architectures. It is optimized to take advantage of key high performance computing features to accelerate neural network training. Specifically, it is optimized for low-latency, high-bandwidth interconnects, node-local NVRAM, node-local GPU accelerators, and high-bandwidth parallel file systems. It is built on top of the open source Elemental distributed-memory dense and sparse-direct linear algebra and optimization library, which is released under the BSD license. The algorithms contained within LBANN are drawn from the academic literature and implemented to work within a distributed-memory framework.

  14. Description of individual data items and codes in CRIB

    USGS Publications Warehouse

    Keefer, Eleanor K.; Calkins, James Alfred

    1978-01-01

    The U.S. Geological Survey's Computerized Resources Information Bank (CRIB) is being made available for public use through the computer facilities of the University of Oklahoma and the General Electric Company, U.S.A. The use of General Electric's worldwide information-services network provides access to the CRIB file to a worldwide clientele. This manual, which consists of two chapters, is intended as a guide to users who wish to interrogate the file. Chapter A contains a description of the CRIB file, information on the use of the GIPSY retrieval system, and a description of the General Electric MARK III Service. Chapter B contains a description of the individual data items in the CRIB record as well as code lists. CRIB consists of a set of variable-length records on the metallic and nonmetallic mineral resources of the United States and other countries. At present, 31,645 records in the master file are being made available. The record contains information on mineral deposits and mineral commodities. Some topics covered are: deposit name, location, commodity information, description of deposit, geology, production, reserves, potential resources, and references. The data are processed by the GIPSY program, which maintains the data file and builds, updates, searches, and prints the records using simple yet versatile command statements. Searching and selecting records is accomplished by specifying the presence, absence, or content of any element of information in the record; these specifications can be logically linked to prepare sophisticated search strategies. Output is available in the form of the complete record, a listing of selected parts of the record, or fixed-field tabulations. The General Electric MARK III Service is a computerized information services network operating internationally by land lines, satellites, and undersea cables. The service is available by local telephone to 500 cities in North America, Western Europe, Australia, Southeast Asia, Japan, and Saudi Arabia. An interface called the 'foreground driver' is used to link the GIPSY program to the General Electric system.

  15. Bandwidth characteristics of multimedia data traffic on a local area network

    NASA Technical Reports Server (NTRS)

    Chuang, Shery L.; Doubek, Sharon; Haines, Richard F.

    1993-01-01

    Limited spacecraft communication links call for users to investigate the potential use of video compression and multimedia technologies to optimize bandwidth allocations. The objective was to determine the transmission characteristics of multimedia data - motion video, text or bitmap graphics, and files transmitted independently and simultaneously over an Ethernet local area network. Commercial desktop video teleconferencing hardware and software and Intel's proprietary Digital Video Interactive (DVI) video compression algorithm were used, and typical task scenarios were selected. The transmission time, packet size, number of packets, and network utilization of the data were recorded. Each data type - compressed motion video, text and/or bitmapped graphics, and a compressed image file - was first transmitted independently and its characteristics recorded. The results showed that an average bandwidth of 7.4 kilobits per second (kbps) was used to transmit graphics; an average bandwidth of 86.8 kbps was used to transmit an 18.9-kilobyte (kB) image file; a bandwidth of 728.9 kbps was used to transmit compressed motion video at 15 frames per second (fps); and a bandwidth of 75.9 kbps was used to transmit compressed motion video at 1.5 fps. Average packet sizes were 933 bytes for graphics, 498.5 bytes for the image file, 345.8 bytes for motion video at 15 fps, and 341.9 bytes for motion video at 1.5 fps. Simultaneous transmission of multimedia data types was also characterized. The multimedia packets used transmission bandwidths of 341.4 kbps and 105.8 kbps. Bandwidth utilization varied according to the frame rate (frames per second) setting for the transmission of motion video. Packet size did not vary significantly between the data types. When these characteristics are applied to Space Station Freedom (SSF), the packet sizes fall within the maximum specified by the Consultative Committee for Space Data Systems (CCSDS). The uplink of imagery to SSF may be performed at minimal frame rates and/or within seconds of delay, depending on the user's allocated bandwidth. Further research to identify the acceptable delay interval and its impact on human performance is required. Additional studies in network performance using various video compression algorithms and integrated multimedia techniques are needed to determine the optimal design approach for utilizing SSF's data communications system.
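
    As a quick consistency check on the reported averages (a back-of-envelope calculation, not a computation from the paper), the transfer time and packet count implied for the image file can be worked out directly:

    ```python
    # Reported averages for the image-file transfer (protocol overhead ignored).
    file_bytes = 18.9e3       # 18.9 kB image file
    throughput_bps = 86.8e3   # 86.8 kbps average bandwidth
    packet_bytes = 498.5      # average packet size for the image file

    transfer_s = file_bytes * 8 / throughput_bps
    packets = file_bytes / packet_bytes
    print(f"~{transfer_s:.1f} s on the wire, ~{packets:.0f} packets")
    # -> ~1.7 s on the wire, ~38 packets
    ```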

  16. High Performance Data Transfer for Distributed Data Intensive Sciences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Chin; Cottrell, R 'Les' A.; Hanushevsky, Andrew B.

    We report on the development of the ZX software, which provides high performance data transfer and encryption. The design scales in computation power, network interfaces, and IOPS while carefully balancing the available resources. Two U.S. patent-pending algorithms help tackle data sets containing lots of small files and very large files, and provide insensitivity to network latency. It has a cluster-oriented architecture, using peer-to-peer technologies to ease deployment, operation, usage, and resource discovery. Its unique optimizations enable effective use of flash memory. Using a pair of existing data transfer nodes at SLAC and NERSC, we compared its performance to that of bbcp and GridFTP and determined that they were comparable. With a proof of concept created using two four-node clusters with multiple distributed multi-core CPUs, network interfaces and flash memory, we achieved 155 Gbps memory-to-memory over a 2x100 Gbps link-aggregated channel and 70 Gbps file-to-file with encryption over a 5000-mile 100 Gbps link.

  17. Application-Defined Decentralized Access Control

    PubMed Central

    Xu, Yuanzhong; Dunn, Alan M.; Hofmann, Owen S.; Lee, Michael Z.; Mehdi, Syed Akbar; Witchel, Emmett

    2014-01-01

    DCAC is a practical OS-level access control system that supports application-defined principals. It allows normal users to perform administrative operations within their privilege, enabling isolation and privilege separation for applications. It does not require centralized policy specification or management, giving applications freedom to manage their principals while the policies are still enforced by the OS. DCAC uses hierarchically-named attributes as a generic framework for user-defined policies such as groups defined by normal users. For both local and networked file systems, its execution time overhead is between 0%–9% on file system microbenchmarks, and under 1% on applications. This paper shows the design and implementation of DCAC, as well as several real-world use cases, including sandboxing applications, enforcing server applications’ security policies, supporting NFS, and authenticating user-defined sub-principals in SSH, all with minimal code changes. PMID:25426493
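
    The hierarchical attribute idea can be sketched as a prefix-dominance check; the naming style shown (e.g. `.u.alice`) follows the paper's hierarchical attribute names, but the rule below is a simplified illustration rather than DCAC's full policy semantics.

    ```python
    def dominates(held: str, required: str) -> bool:
        """A hierarchically-named attribute covers itself and its descendants,
        e.g. holding '.u.alice' covers '.u.alice.photos' (illustrative rule)."""
        return required == held or required.startswith(held + ".")

    def may_access(subject_attrs: set, object_policy: set) -> bool:
        """Grant access if any held attribute covers an attribute the object accepts."""
        return any(dominates(h, r) for h in subject_attrs for r in object_policy)

    assert may_access({".u.alice"}, {".u.alice.photos"})       # parent covers child
    assert not may_access({".u.alice.photos"}, {".u.alice"})   # child does not cover parent
    ```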

  18. The Metadata Cloud: The Last Piece of a Distributed Data System Model

    NASA Astrophysics Data System (ADS)

    King, T. A.; Cecconi, B.; Hughes, J. S.; Walker, R. J.; Roberts, D.; Thieman, J. R.; Joy, S. P.; Mafi, J. N.; Gangloff, M.

    2012-12-01

    Distributed data systems have existed ever since systems were networked together. Over the years the model for distributed data systems has evolved from basic file transfer to client-server to multi-tiered to grid and finally to cloud based systems. Initially metadata was tightly coupled to the data either by embedding the metadata in the same file containing the data or by co-locating the metadata in commonly named files. As the sources of data multiplied, data volumes have increased and services have specialized to improve efficiency; a cloud system model has emerged. In a cloud system computing and storage are provided as services with accessibility emphasized over physical location. Computation and data clouds are common implementations. Effectively using the data and computation capabilities requires metadata. When metadata is stored separately from the data, a metadata cloud is formed. With a metadata cloud, information and knowledge about data resources can migrate efficiently from system to system, enabling services and allowing the data to remain efficiently stored until used. This is especially important with "Big Data" where movement of the data is limited by bandwidth. We examine how the metadata cloud completes a general distributed data system model, how standards play a role and relate this to the existing types of cloud computing. We also look at the major science data systems in existence and compare each to the generalized cloud system model.

  19. Rule-based simulation models

    NASA Technical Reports Server (NTRS)

    Nieten, Joseph L.; Seraphine, Kathleen M.

    1991-01-01

    Procedural modeling systems, rule based modeling systems, and a method for converting a procedural model to a rule based model are described. Simulation models are used to represent real time engineering systems. A real time system can be represented by a set of equations or functions connected so that they perform in the same manner as the actual system. Most modeling system languages are based on FORTRAN or some other procedural language. Therefore, they must be enhanced with a reaction capability. Rule based systems are reactive by definition. Once the engineering system has been decomposed into a set of calculations using only basic algebraic unary operations, a knowledge network of calculations and functions can be constructed. The knowledge network required by a rule based system can be generated by a knowledge acquisition tool or a source level compiler. The compiler would take an existing model source file, a syntax template, and a symbol table and generate the knowledge network. Thus, existing procedural models can be translated and executed by a rule based system. Neural models can provide the high-capacity data manipulation required by the most complex real-time models.

  20. A Korean Space Situational Awareness Program : OWL Network

    NASA Astrophysics Data System (ADS)

    Park, J.; Choi, Y.; Jo, J.; Moon, H.; Im, H.; Park, J.

    2012-09-01

    We present a brief introduction to the OWL (Optical Wide-field patroL) network, one of Korea's space situational awareness facilities. The primary objectives of the OWL network are 1) to obtain orbital information of Korean domestic LEO satellites using optical methods, 2) to monitor the GEO belt over the territory of Korea, and 3) to alleviate the collision risks posed to Korean satellites by space debris. For these purposes, we are planning to build a global network of telescopes which consists of five small wide-field telescopes and one 2m-class telescope. The network of small telescopes will be dedicated mainly to the observation of domestic LEO satellites, but many slots will be open to other scientific programs such as GRB follow-up observations. The main targets of the 2m telescope include not only artificial objects such as GEO debris and LEO debris with low inclination and high eccentricity, but also natural objects such as near-Earth asteroids. We expect to monitor space objects in GEO down to 10 cm in size using the 2m telescope system. Main research topics include the size distribution and evolution of space debris. We also expect to utilize this facility for physical characterization and population studies of near-Earth asteroids. The aperture of the small telescope system is 0.5m in a Ritchey-Chretien configuration, and its field of view is 1.75 deg x 1.75 deg. It is equipped with a 4K CCD with 9 um pixels, and its plate scale is 1.3 arcsec/pixel. A chopper wheel is employed to maximize astrometric solutions in a single CCD frame, and a de-rotator is used to compensate for the field rotation of the alt-az mount. We have designed a compact end unit that integrates three rotating parts (chopper wheel, filter wheel, de-rotator) and a CCD camera, as well as dedicated telescope/site control boards for the OWL network. The design of the 2m-class telescope is still under discussion but is expected to be fixed in the first half of 2013 at the latest. The OWL network will be operated in a fully autonomous mode based on scheduled observation. We have designed a compact and robust system for fully robotic operation. The network operating system located at the headquarters issues command files for observation, which are transferred to each local site. The site operating system then interprets the command files and controls each telescope system. In this way, we obtain and update orbital information of domestic satellites based on a purely optical method. A prototype of the network telescope system will be installed at a test bed in Korea in the commissioning phase. After the test operation, the design of the network telescope system will be finalized at the end of 2012. The installation of the telescope systems at three local sites will be completed in 2013, and the so-called "OWL basic network" will start normal operations. In the first two years of the second stage of the OWL Project (2014-2015), we plan to place two more small wide-field telescopes and build the 2m telescope system to complete the OWL network in 2016.

  1. An Open Software Platform for Sharing Water Resource Models, Code and Data

    NASA Astrophysics Data System (ADS)

    Knox, Stephen; Meier, Philipp; Mohamed, Khaled; Korteling, Brett; Matrosov, Evgenii; Huskova, Ivana; Harou, Julien; Rosenberg, David; Tilmant, Amaury; Medellin-Azuara, Josue; Wicks, Jon

    2016-04-01

    The modelling of managed water resource systems requires new approaches in the face of increasing future uncertainty. Water resources management models, even if applied to diverse problem areas, use common approaches such as representing the problem as a network of nodes and links. We propose a data management software platform, called Hydra, that uses this commonality to allow multiple models using a node-link structure to be managed and run using a single software system. Hydra's user interface allows users to manage network topology and associated data. Hydra feeds this data directly into a model, importing from and exporting to different file formats using Apps. An App connects Hydra to a custom model, a modelling system such as GAMS or MATLAB or to different file formats such as MS Excel, CSV and ESRI Shapefiles. Hydra allows users to manage their data in a single, consistent place. Apps can be used to run domain-specific models and allow users to work with their own required file formats. The Hydra App Store offers a collaborative space where model developers can publish, review and comment on Apps, models and data. Example Apps and open-source libraries are available in a variety of languages (Python, Java and .NET). The App Store can act as a hub for water resource modellers to view and share Apps, models and data easily. This encourages an ecosystem of development using a shared platform, resulting in more model integration and potentially greater unity within resource modelling communities. www.hydraplatform.org www.hydraappstore.com
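
    A minimal sketch of the node-link representation such a platform manages, with JSON export of the kind an exporter App might perform; the class and field names here are hypothetical, not Hydra's actual schema.

    ```python
    import json
    from dataclasses import dataclass, field, asdict

    @dataclass
    class Node:
        name: str
        attributes: dict = field(default_factory=dict)

    @dataclass
    class Link:
        source: str
        target: str
        attributes: dict = field(default_factory=dict)

    @dataclass
    class Network:
        name: str
        nodes: list = field(default_factory=list)
        links: list = field(default_factory=list)

    net = Network("demo-basin")
    net.nodes += [Node("reservoir", {"capacity_Mm3": 120.0}),
                  Node("city", {"demand_Mm3": 35.0})]
    net.links += [Link("reservoir", "city", {"max_flow_Mm3": 40.0})]

    # Serialize the topology so an exporter App could hand it to a model.
    print(json.dumps(asdict(net), indent=2))
    ```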

  2. A collaborative framework for contributing DICOM RT PHI (Protected Health Information) to augment data mining in clinical decision support

    NASA Astrophysics Data System (ADS)

    Deshpande, Ruchi; Thuptimdang, Wanwara; DeMarco, John; Liu, Brent J.

    2014-03-01

    We have built a decision support system that provides recommendations for customizing radiation therapy treatment plans, based on patient models generated from a database of retrospective planning data. This database consists of relevant metadata and information derived from the following DICOM objects - CT images, RT Structure Set, RT Dose and RT Plan. The usefulness and accuracy of such patient models partly depends on the sample size of the learning data set. Our current goal is to increase this sample size by expanding our decision support system into a collaborative framework to include contributions from multiple collaborators. Potential collaborators are often reluctant to upload even anonymized patient files to repositories outside their local organizational network in order to avoid any conflicts with HIPAA Privacy and Security Rules. We have circumvented this problem by developing a tool that can parse DICOM files on the client's side and extract de-identified numeric and text data from DICOM RT headers for uploading to a centralized system. As a result, the DICOM files containing PHI remain local to the client side. This is a novel workflow that results in adding only relevant yet valuable data from DICOM files to the centralized decision support knowledge base in such a way that the DICOM files never leave the contributor's local workstation in a cloud-based environment. Such a workflow serves to encourage clinicians to contribute data for research endeavors by ensuring protection of electronic patient data.
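
    The client-side extraction step might look like the following sketch using the pydicom library; the whitelist of "safe" keywords is a placeholder for illustration, and a real tool would follow a vetted de-identification profile rather than this short list.

    ```python
    import pydicom  # third-party library: pip install pydicom

    # Illustrative whitelist of header keywords assumed to carry no PHI.
    SAFE_KEYWORDS = ["Modality", "SOPClassUID", "DoseUnits", "DoseType"]

    def extract_deidentified(path: str) -> dict:
        """Read a DICOM RT file locally and return only whitelisted header
        values, so the file itself (and any PHI) never leaves the workstation."""
        ds = pydicom.dcmread(path, stop_before_pixels=True)
        return {kw: str(ds.get(kw)) for kw in SAFE_KEYWORDS if kw in ds}

    record = extract_deidentified("plan.dcm")  # hypothetical local file
    # `record` (plain numbers/text) is what would be uploaded, not the DICOM file.
    ```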

  3. Computer Networks and Networking: A Primer.

    ERIC Educational Resources Information Center

    Collins, Mauri P.

    1993-01-01

    Provides a basic introduction to computer networks and networking terminology. Topics addressed include modems; the Internet; TCP/IP (Transmission Control Protocol/Internet Protocol); transmission lines; Internet Protocol numbers; network traffic; Fidonet; file transfer protocol (FTP); TELNET; electronic mail; discussion groups; LISTSERV; USENET;…

  4. AMPS/PC - AUTOMATIC MANUFACTURING PROGRAMMING SYSTEM

    NASA Technical Reports Server (NTRS)

    Schroer, B. J.

    1994-01-01

    The AMPS/PC system is a simulation tool designed to aid the user in defining the specifications of a manufacturing environment and then automatically writing code for the target simulation language, GPSS/PC. The domain of problems that AMPS/PC can simulate are manufacturing assembly lines with subassembly lines and manufacturing cells. The user defines the problem domain by responding to the questions from the interface program. Based on the responses, the interface program creates an internal problem specification file. This file includes the manufacturing process network flow and the attributes for all stations, cells, and stock points. AMPS then uses the problem specification file as input for the automatic code generator program to produce a simulation program in the target language GPSS. The output of the generator program is the source code of the corresponding GPSS/PC simulation program. The system runs entirely on an IBM PC running PC DOS Version 2.0 or higher and is written in Turbo Pascal Version 4 requiring 640K memory and one 360K disk drive. To execute the GPSS program, the PC must have resident the GPSS/PC System Version 2.0 from Minuteman Software. The AMPS/PC program was developed in 1988.

  5. Experience in running relational databases on clustered storage

    NASA Astrophysics Data System (ADS)

    Gaspar Aparicio, Ruben; Potocky, Miroslav

    2015-12-01

    For the past eight years, the CERN IT Database group has based its backend storage on a NAS (Network-Attached Storage) architecture, providing database access via the NFS (Network File System) protocol. In the last two and a half years, our storage has evolved from a scale-up architecture to a scale-out one. This paper describes our setup and a set of functionalities providing key features to other services such as Database on Demand [1] or the CERN Oracle backup and recovery service. It also outlines a possible evolutionary path that storage for databases could follow.

  6. Design and implementation of embedded un-interruptible power supply system (EUPSS) for web-based mobile application

    NASA Astrophysics Data System (ADS)

    Zhang, De-gan; Zhang, Xiao-dan

    2012-11-01

    As embedded application systems, which are built into devices and offer access to those devices over the internet, handle growing amounts of information, that information must be saved systematically so that client access and local processing can be served more efficiently. To support mobile applications, a design and implementation of an embedded uninterruptible power supply (UPS) system (EUPSS) for Web-based long-distance monitoring and control of a UPS is presented. The implementation is based on ATmega161, RTL8019AS, and ARM chips with a TCP/IP protocol suite for communication. Within the embedded UPS system, an embedded file system is designed and implemented that saves data and index information on a serial EEPROM chip in a structured way and communicates with the microcontroller unit through an I2C bus. By embedding the file system into the UPS system or other information appliances, users can access and manipulate local data from the web client side. Embedded file systems on chips will play a major role in the growth of IP networking. Based on our experimental tests, mobile users can easily monitor and control UPS units in different places over long distances. The performance of EUPSS satisfies the requirements of all kinds of Web-based mobile applications.
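
    The structured index such a file system keeps on the EEPROM can be sketched with fixed-size packed entries; the 16-byte name field and 16-bit offsets below are assumptions sized for a small serial EEPROM, not the authors' actual layout.

    ```python
    import struct

    # A minimal flat file index: fixed-size entries of (name, offset, length),
    # packed big-endian so the blob can be written verbatim to the EEPROM.
    ENTRY = struct.Struct(">16sHH")  # 16-byte name, 16-bit offset, 16-bit length

    def pack_index(entries):
        """Pack (name, offset, length) tuples into a byte blob."""
        return b"".join(ENTRY.pack(name.encode().ljust(16, b"\0"), off, ln)
                        for name, off, ln in entries)

    def unpack_index(blob):
        """Recover (name, offset, length) tuples from a packed index blob."""
        for name, off, ln in ENTRY.iter_unpack(blob):
            yield name.rstrip(b"\0").decode(), off, ln

    blob = pack_index([("log.txt", 64, 120), ("cfg.bin", 184, 32)])
    print(list(unpack_index(blob)))
    # -> [('log.txt', 64, 120), ('cfg.bin', 184, 32)]
    ```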

  7. Integrated Autonomous Network Management (IANM) Multi-Topology Route Manager and Analyzer

    DTIC Science & Technology

    2008-02-01

    Figure 6-2: Internal software organization (components include zebra, tmg, mtrcli, xinetd (tftp), mysql, the configuration files mtrrm.conf and mtrrmAggregator.properties, tftp files under /tftpboot, NetFlow PDUs, configuration upload/download via snmp and telnet, OSPFv2, and a user interface). Figure 6-2 illustrates the main

  8. A pervasive health monitoring service system based on ubiquitous network technology.

    PubMed

    Lin, Chung-Chih; Lee, Ren-Guey; Hsiao, Chun-Chieh

    2008-07-01

    Population aging has given rise to problems such as shortages of medical resources and reduced quality of healthcare services. This paper presents a system infrastructure for pervasive and long-term healthcare applications: a ubiquitous network composed of a wireless local area network (WLAN) and a cable television (CATV) network serving as a platform for monitoring physiological signals. Users can record vital signs including heart rate, blood pressure, and body temperature at any time, either at home or at frequently visited public places, in order to create a personal health file. The whole system was formally implemented in December 2004. Analysis of 2000 questionnaires indicates that 85% of users were satisfied with the provided community-wide healthcare services. Among the services provided by our system, health consultation offered by family doctors was rated the most important service by 17.9% of respondents, followed by control of one's own health condition (16.4% of respondents). Convenience of data access was rated most important by roughly 14.3% of respondents. We proposed and implemented a long-term healthcare system integrating WLAN and CATV networks into a ubiquitous network that provides a service platform for physiological monitoring. The system can classify a resident's health level according to trends in his or her physiological signals, providing an important reference for health management.

  9. Heterogeneous information sharing of sensor information in contested environments

    NASA Astrophysics Data System (ADS)

    Wampler, Jason A.; Hsieh, Chien; Toth, Andrew; Sheatsley, Ryan

    2017-05-01

    The inherent nature of unattended sensors makes these devices most vulnerable to detection, exploitation, and denial in contested environments. Physical access is often cited as the easiest way to compromise any device or network. A new mechanism for mitigating these types of attacks, developed under the Assistant Secretary of Defense for Research and Engineering, ASD(R&E), project "Smoke Screen in Cyberspace", was demonstrated in a live, over-the-air experiment. Smoke Screen encrypts, slices up, and disperses redundant fragments of files throughout the network. Recovery is only possible after recovering all fragments, and attacking or denying one or more nodes does not limit the availability of other fragment copies in the network. This experiment proved the feasibility of redundant file fragmentation and is the foundation for developing sophisticated methods to blacklist compromised nodes, move data fragments away from risk of compromise, and forward stored data fragments closer to the anticipated retrieval point. This paper outlines initial results on scalability in the number of node members, fragment size, file size, and performance in a heterogeneous network consisting of the Wireless Network after Next (WNaN) radio and the Common Sensor Radio (CSR).
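
    The fragment-and-disperse idea can be sketched as follows; the fragment counts, node names, and round-robin placement are illustrative assumptions, and the encryption step the paper describes is elided here.

    ```python
    import itertools

    def fragment(blob: bytes, n_frags: int, copies: int, nodes: list):
        """Slice an (already encrypted) blob into n_frags pieces and assign
        each piece to `copies` nodes round-robin; with 2 <= copies <= len(nodes),
        every fragment survives the loss of any single node."""
        frag_len = -(-len(blob) // n_frags)  # ceiling division
        frags = [blob[i * frag_len:(i + 1) * frag_len] for i in range(n_frags)]
        ring = itertools.cycle(nodes)
        placement = {i: [next(ring) for _ in range(copies)]
                     for i in range(n_frags)}
        return frags, placement

    frags, placement = fragment(b"sensor report 0x41", n_frags=4, copies=2,
                                nodes=["nodeA", "nodeB", "nodeC"])
    print(placement)  # e.g. {0: ['nodeA', 'nodeB'], 1: ['nodeC', 'nodeA'], ...}
    ```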

  10. 75 FR 47668 - Self-Regulatory Organizations; NASDAQ Stock Market LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-06

    ... systems ("ATSs"), including dark pools and electronic communication networks ("ECNs"). Each SRO market... ECN, BATS Trading and Direct Edge. Today, BATS publishes certain data at no charge on its Web site and... resulting executions to maintain low execution charges for its users. A proliferation of dark pools and...

  11. 76 FR 20054 - Self-Regulatory Organizations; the NASDAQ Stock Market LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-11

    ... over 50,000,000 investors on Web sites operated by Google, Interactive Data, and Dow Jones, among... systems (``ATSs''), including dark pools and electronic communication networks (``ECNs''). Each SRO market..., Attain, TracECN, BATS Trading and Direct Edge. Today, BATS publishes its data at no charge on its Web...

  12. 78 FR 54697 - Self-Regulatory Organizations; NYSE MKT LLC; Notice of Filing and Immediate Effectiveness of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-05

    ... execution systems through the same order gateway, regardless of whether the sender is co-located in the data... a 40 Gigabit Liquidity Center Network Connection in the Exchange Data Center August 29, 2013.... The Commission is publishing this notice to solicit comments on the proposed rule change from...

  13. Online network of subspecialty aortic disease experts: Impact of "cloud" technology on management of acute aortic emergencies.

    PubMed

    Schoenhagen, Paul; Roselli, Eric E; Harris, C Martin; Eagleton, Matthew; Menon, Venu

    2016-07-01

    For the management of acute aortic syndromes, regional treatment networks have been established to coordinate diagnosis and treatment between local emergency rooms and central specialized centers. Triage of acute aortic syndromes requires definitive imaging, resulting in complex data files. Modern information technology network structures, specifically "cloud" technology, coupled with mobile communication, increasingly support sharing of these data in a network of experts using mobile, online access and communication. Although this network is technically complex, the potential benefit of online sharing of data files between professionals at multiple locations within a treatment network appears obvious; however, clinical experience is limited, and further evaluation is needed. Copyright © 2016 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.

  14. Robo-line storage: Low latency, high capacity storage systems over geographically distributed networks

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.; Anderson, Thomas E.; Ousterhout, John K.; Patterson, David A.

    1991-01-01

    Rapid advances in high performance computing are making possible more complete and accurate computer-based modeling of complex physical phenomena, such as weather front interactions, dynamics of chemical reactions, numerical aerodynamic analysis of airframes, and ocean-land-atmosphere interactions. Many of these 'grand challenge' applications are as demanding of the underlying storage system, in terms of their capacity and bandwidth requirements, as they are of the computational power of the processor. A global view of the Earth's ocean chlorophyll and land vegetation requires over 2 terabytes of raw satellite image data. In this paper, we describe our planned research program in high capacity, high bandwidth storage systems. The project has four overall goals. First, we will examine new methods for high capacity storage systems, made possible by low cost, small form factor magnetic and optical tape systems. Second, access to the storage system will be low latency and high bandwidth. To achieve this, we must interleave data transfer at all levels of the storage system, including devices, controllers, servers, and communications links. Latency will be reduced by extensive caching throughout the storage hierarchy. Third, we will provide effective management of a storage hierarchy, extending the techniques already developed for the Log Structured File System. Finally, we will construct a prototype high capacity file server, suitable for use on the National Research and Education Network (NREN). Such research must be a cornerstone of any coherent program in high performance computing and communications.

  15. Sandbox for Mac Malware v 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walkup, Elizabeth

    This software is an analyzer for automated sandbox analysis of malware on the OS X operating system. It runs inside an OS X virtual machine to collect data about what happens when a given file is opened or run. As of August 2014, there was no sandbox software for Mac OS X malware, as it requires different methods from those used on the Windows OS (which most sandboxes are written for). This software adds OS X analysis capabilities to an existing open-source sandbox, Cuckoo Sandbox (http://cuckoosandbox.org/), which previously only worked for Windows. The analyzer itself can take many different types of files as input: the traditional Mach-O and FAT executables, .app files, zip files, Python scripts, Java archives, and web pages, as well as PDFs and other documents. While the file is running, the analyzer also simulates rudimentary human interaction with clicks and mouse movements in order to bypass the tests some malware use to see if they are being analyzed. The analyzer outputs several different kinds of data: function call traces, network captures, screenshots, and all created and modified files. This work also includes a static analysis Cuckoo module for Mach-O binary files. It extracts file structures, code library imports and exports, and signatures. This data can be used along with the analyzer results to create signatures for malware.

  16. Inadvertent Exposure to Pornography on the Internet: Implications of Peer-to-Peer File-Sharing Networks for Child Development and Families

    ERIC Educational Resources Information Center

    Greenfield, P.M.

    2004-01-01

    This essay comprises testimony to the Congressional Committee on Government Reform. The Committee's concern was the possibility of exposure to pornography when children and teens participate in peer-to-peer file-sharing networks, which are extremely popular in these age groups. A review of the relevant literature led to three major conclusions:…

  17. Designing Secure Library Networks.

    ERIC Educational Resources Information Center

    Breeding, Michael

    1997-01-01

    Focuses on designing a library network to maximize security. Discusses UNIX and file servers; connectivity to campus, corporate networks and the Internet; separation of staff from public servers; controlling traffic; the threat of network sniffers; hubs that eliminate eavesdropping; dividing the network into subnets; Switched Ethernet;…

  18. Availability of software services for a hospital information system.

    PubMed

    Sakamoto, N

    1998-03-01

    Hospital information systems (HISs) are becoming more important and covering more parts of daily hospital operations as order-entry systems become popular and electronic charts are introduced. Thus, HISs today need to be able to provide the necessary services for hospital operations 24 hours a day, 365 days a year. The provision of services discussed here does not simply mean the availability of computers, in which all that matters is that the computer is functioning. It means the provision of the information necessary for hospital operations by the computer software, and we will call it the availability of software services. HISs these days are mostly client-server systems. To increase the availability of software services in these systems, it is not enough to use system structures that are highly reliable in existing host-centred systems. Four main components support the availability of software services: network systems, client computers, server computers, and application software. In this paper, we suggest how to structure these four components to provide the minimum requested software services even if part of the system stops functioning. The network system should be double-protected in strata, using Asynchronous Transfer Mode (ATM) as its base network. Client computers should be fat clients with as much application logic as possible, and reference information that does not require frequent updates (master files, for example) should be replicated in clients. It would be best if all server computers could be double-protected. However, if that is physically impossible, one database file should be made accessible by several server computers. Still, at least the basic patient information and the latest clinical records should be double-protected physically. Application software should be tested carefully before introduction. Different versions of the application software should always be kept and managed in case the new version has problems. If a hospital information system is designed and developed with these points in mind, its availability of software services should increase greatly.
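
    The server-duplication point lends itself to a short sketch; the mirrored server addresses, endpoint layout, and fetch_record helper below are hypothetical, illustrating only the principle that any one of several duplicated servers, plus a client-side replica of rarely updated master files, keeps software services available:

        import urllib.request

        # Hypothetical mirrored servers holding the same patient database;
        # the addresses and endpoint are illustrative, not from the paper.
        SERVERS = ["http://his-primary.example/records",
                   "http://his-standby.example/records"]

        # Reference data that changes rarely (master files) is replicated on
        # the fat client, so lookups keep working even if every server is down.
        LOCAL_MASTER_CACHE = {"ward_codes": {"01": "Cardiology", "02": "Surgery"}}

        def fetch_record(patient_id: str) -> bytes:
            """Try each duplicated server in turn; any one of them suffices."""
            for url in SERVERS:
                try:
                    with urllib.request.urlopen(f"{url}/{patient_id}", timeout=2) as resp:
                        return resp.read()
                except OSError:
                    continue          # this server or network segment is down
            raise RuntimeError("all replicated servers unavailable")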

  19. HPNAIDM: The High-Performance Network Anomaly/Intrusion Detection and Mitigation System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yan

    Identifying traffic anomalies and attacks rapidly and accurately is critical for large network operators. With the rapid growth of network bandwidth, such as the next generation DOE UltraScience Network, and the fast emergence of new attacks/viruses/worms, existing network intrusion detection systems (IDS) are insufficient because they: • Are mostly host-based and not scalable to high-performance networks; • Are mostly signature-based and unable to adaptively recognize flow-level unknown attacks; • Cannot differentiate malicious events from unintentional anomalies. To address these challenges, we proposed and developed a new paradigm called the high-performance network anomaly/intrusion detection and mitigation (HPNAIDM) system. The new paradigm is significantly different from existing IDSes, with the following features (research thrusts): • Online traffic recording and analysis on high-speed networks; • Online adaptive flow-level anomaly/intrusion detection and mitigation; • Integrated approach for false positive reduction. Our research prototype and evaluation demonstrate that the HPNAIDM system is highly effective and economically feasible. Beyond satisfying the pre-set goals, we even exceeded them significantly (see more details in the next section). Overall, our project harvested 23 publications (2 book chapters, 6 journal papers and 15 peer-reviewed conference/workshop papers). Besides, we built a website for technique dissemination, which hosts two system prototype releases to the research community. We also filed a patent application and developed strong international and domestic collaborations which span both academia and industry.
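
    The adaptive flow-level detection thrust can be illustrated with a generic online detector; the EWMA-with-threshold scheme below is a stand-in of our own choosing for illustration, not the HPNAIDM algorithm itself:

        class FlowAnomalyDetector:
            """Online per-flow rate monitor: flags flows whose packet rate
            departs from an exponentially weighted moving average (a generic
            stand-in for adaptive flow-level anomaly detection)."""

            def __init__(self, alpha: float = 0.1, k: float = 4.0):
                self.alpha, self.k = alpha, k
                self.mean, self.var = {}, {}

            def observe(self, flow: str, pkts_per_sec: float) -> bool:
                m = self.mean.get(flow, pkts_per_sec)
                v = self.var.get(flow, 1.0)
                anomalous = abs(pkts_per_sec - m) > self.k * (v ** 0.5)
                # update statistics online so the detector adapts to drift
                self.mean[flow] = (1 - self.alpha) * m + self.alpha * pkts_per_sec
                self.var[flow] = (1 - self.alpha) * v + self.alpha * (pkts_per_sec - m) ** 2
                return anomalous

        det = FlowAnomalyDetector()
        for t in range(100):
            det.observe("10.0.0.5->10.0.0.9:443", 50.0)
        print(det.observe("10.0.0.5->10.0.0.9:443", 5000.0))  # True: sudden burst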

  20. The Development of Design Tools for Fault Tolerant Quantum Dot Cellular Automata Based Logic

    NASA Technical Reports Server (NTRS)

    Armstrong, Curtis D.; Humphreys, William M.

    2003-01-01

    We are developing software to explore the fault tolerance of quantum dot cellular automata gate architectures in the presence of manufacturing variations and device defects. The Topology Optimization Methodology using Applied Statistics (TOMAS) framework extends the capabilities of AQUINAS (A Quantum Interconnected Network Array Simulator) by adding front-end and back-end software and creating an environment that integrates all of these components. The front-end tools establish all simulation parameters, configure the simulation system, automate the Monte Carlo generation of simulation files, and execute the simulation of these files. The back-end tools perform automated data parsing, statistical analysis and report generation.
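
    A hedged sketch of what front-end Monte Carlo file generation might look like; the input-file format and the parameter names (dot_diameter_nm, missing_cell) are invented for illustration and are not the actual TOMAS/AQUINAS formats:

        import random, pathlib

        def generate_qca_runs(n_runs: int, nominal_dot_nm: float = 5.0,
                              sigma_nm: float = 0.2, out_dir: str = "runs") -> None:
            """Automate Monte Carlo generation of simulation input files by
            perturbing a nominal cell geometry with manufacturing variation."""
            out = pathlib.Path(out_dir)
            out.mkdir(exist_ok=True)
            for i in range(n_runs):
                dot = random.gauss(nominal_dot_nm, sigma_nm)   # device-size variation
                defect = random.random() < 0.01                # rare missing-cell defect
                (out / f"run_{i:04d}.in").write_text(
                    f"dot_diameter_nm = {dot:.3f}\nmissing_cell = {defect}\n")

        generate_qca_runs(1000)   # the runs are then executed and parsed in bulk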

  1. BrainNet Viewer: a network visualization tool for human brain connectomics.

    PubMed

    Xia, Mingrui; Wang, Jinhui; He, Yong

    2013-01-01

    The human brain is a complex system whose topological organization can be represented using connectomics. Recent studies have shown that human connectomes can be constructed using various neuroimaging technologies and further characterized using sophisticated analytic strategies, such as graph theory. These methods reveal the intriguing topological architectures of human brain networks in healthy populations and explore the changes throughout normal development and aging and under various pathological conditions. However, given the huge complexity of this methodology, toolboxes for graph-based network visualization are still lacking. Here, using MATLAB with a graphical user interface (GUI), we developed a graph-theoretical network visualization toolbox, called BrainNet Viewer, to illustrate human connectomes as ball-and-stick models. Within this toolbox, several combinations of defined files with connectome information can be loaded to display different combinations of brain surface, nodes and edges. In addition, display properties, such as the color and size of network elements or the layout of the figure, can be adjusted within a comprehensive but easy-to-use settings panel. Moreover, BrainNet Viewer draws the brain surface, nodes and edges in sequence and displays brain networks in multiple views, as required by the user. The figure can be manipulated with certain interaction functions to display more detailed information. Furthermore, the figures can be exported as commonly used image file formats or demonstration video for further use. BrainNet Viewer helps researchers to visualize brain networks in an easy, flexible and quick manner, and this software is freely available on the NITRC website (www.nitrc.org/projects/bnv/).
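
    A minimal ball-and-stick rendering in the same spirit can be sketched with matplotlib; the node coordinates and connectivity matrix below are toy values, and BrainNet Viewer itself is a MATLAB GUI, not this Python code:

        import numpy as np
        import matplotlib.pyplot as plt

        # Illustrative node coordinates (mm) and a binary connectivity matrix;
        # real BrainNet Viewer input comes from defined node/edge files.
        coords = np.array([[-40, 20, 30], [45, 15, 28], [0, -70, 35], [-25, -60, 50]])
        conn = np.array([[0, 1, 0, 1],
                         [1, 0, 1, 0],
                         [0, 1, 0, 1],
                         [1, 0, 1, 0]])

        fig = plt.figure()
        ax = fig.add_subplot(projection="3d")
        ax.scatter(*coords.T, s=120, c="tomato")          # balls: network nodes
        for i, j in zip(*np.triu_indices_from(conn, k=1)):
            if conn[i, j]:                                # sticks: connected edges
                ax.plot(*coords[[i, j]].T, c="gray")
        plt.show()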

  2. Method and system for a network mapping service

    DOEpatents

    Bynum, Leo

    2017-10-17

    A method and system of publishing a map includes providing access to a plurality of map data files or mapping services between at least one publisher and at least one subscriber; defining a map in a map context comprising parameters and descriptors that substantially duplicate a map by reference to mutually accessible data or mapping services; publishing the map to a channel in a table file on a server; accessing the channel by at least one subscriber; transmitting the map context from the server to the at least one subscriber; executing the map context by the at least one subscriber; and generating the map on display software associated with the at least one subscriber by reconstituting the map from the references and other data in the map context.
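
    A rough sketch of the publish/subscribe flow in the claim, with a JSON "map context" standing in for the patent's parameters and descriptors; all field names, URLs, and layer names are hypothetical:

        import json

        # A hypothetical map context: parameters and descriptors that let a
        # subscriber reconstitute the map from mutually accessible services,
        # rather than shipping rendered map tiles.
        map_context = {
            "channel": "hydrology",
            "extent": {"xmin": -124.5, "ymin": 41.9, "xmax": -116.4, "ymax": 46.3},
            "layers": [
                {"service": "https://maps.example/wms", "layer": "rivers_100k"},
                {"service": "https://maps.example/wms", "layer": "basins"},
            ],
        }

        def publish(table_file: str, context: dict) -> None:
            """Publish the context to a channel entry in a table file on the server."""
            with open(table_file, "a") as f:
                f.write(json.dumps(context) + "\n")

        def subscribe(table_file: str, channel: str) -> list[dict]:
            """A subscriber reads the channel and re-executes each map context."""
            with open(table_file) as f:
                return [c for line in f
                        if (c := json.loads(line))["channel"] == channel]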

  3. Production data in media systems and press front ends: capture, formats and database methods

    NASA Astrophysics Data System (ADS)

    Karttunen, Simo

    1997-02-01

    The nature, purpose and data presentation features of media jobs are analyzed in relation to the content, document, process and resource management in media production. Formats are the natural way of presenting, collecting and storing information, contents, document components and final documents. The state of the art and the trends in the media formats and production data are reviewed. The types and the amount of production data are listed, e.g. events, schedules, product descriptions, reports, visual support, quality, process states and color data. The data exchange must be vendor-neutral. Adequate infrastructure and system architecture are defined for production and media data. The roles of open servers and intranets are evaluated and their potential roles as future solutions are anticipated. The press frontend is the part of print media production where large files dominate. The new output alternatives, i.e. film recorders, direct plate output (CTP and CTP-on-press) and digital, plateless printing lines, need new workflow tools and very efficient file and format management. The paper analyzes the capture, formatting and storing of job files and respective production data, such as the event logs of the processes. Intranets, browsers, Java applets and open web servers will be used to capture production data, especially where intranets are used anyhow, or where several companies are networked to plan, design and use documents and printed products. The user aspects of installing intranets are stressed since there are numerous more traditional and more dedicated networking solutions on the market.

  4. Communication security in open health care networks.

    PubMed

    Blobel, B; Pharow, P; Engel, K; Spiegel, V; Krohn, R

    1999-01-01

    Fulfilling the shared care paradigm, health care networks providing open systems' interoperability in health care are needed. Such communicating and co-operating health information systems, dealing with sensitive personal medical information across organisational, regional, national or even international boundaries, require appropriate security solutions. Based on the generic security model, within the European MEDSEC project an open approach for secure EDI like HL7, EDIFACT, XDT or XML has been developed. The consideration includes both securing the message in an insecure network and the transport of the unprotected information via secure channels (SSL, TLS etc.). Regarding EDI, an open and widely usable security solution has been specified and practically implemented for the examples of secure mailing and secure file transfer (FTP) via wrapping the sensitive information expressed by the corresponding protocols. The results are currently prepared for standardisation.
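
    The "wrapping" of a sensitive EDI payload before it crosses an insecure network can be sketched as follows; Fernet authenticated encryption from the Python cryptography package is a modern stand-in for the MEDSEC wrapper, and the HL7 fragment is illustrative:

        from cryptography.fernet import Fernet

        key = Fernet.generate_key()          # exchanged out of band or via PKI
        f = Fernet(key)

        hl7_message = b"MSH|^~\\&|LAB|HOSP|...|ORU^R01|..."
        wrapped = f.encrypt(hl7_message)     # safe to carry over plain FTP/SMTP

        # The receiving side unwraps; the authenticated token also verifies
        # that the message was not altered in transit.
        assert f.decrypt(wrapped) == hl7_message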

  5. In-Space Networking on NASA's SCAN Testbed

    NASA Technical Reports Server (NTRS)

    Brooks, David E.; Eddy, Wesley M.; Clark, Gilbert J.; Johnson, Sandra K.

    2016-01-01

    The NASA Space Communications and Navigation (SCaN) Testbed, an external payload onboard the International Space Station, is equipped with three software defined radios and a flight computer for supporting in-space communication research. New technologies being studied using the SCaN Testbed include advanced networking, coding, and modulation protocols designed to support the transition of NASA's mission systems from primarily point-to-point data links and preplanned routes towards adaptive, autonomous internetworked operations needed to meet future mission objectives. Networking protocols implemented on the SCaN Testbed include the Advanced Orbiting Systems (AOS) link-layer protocol, Consultative Committee for Space Data Systems (CCSDS) Encapsulation Packets, Internet Protocol (IP), Space Link Extension (SLE), CCSDS File Delivery Protocol (CFDP), and Delay-Tolerant Networking (DTN) protocols including the Bundle Protocol (BP) and Licklider Transmission Protocol (LTP). The SCaN Testbed end-to-end system provides three S-band data links and one Ka-band data link to exchange space and ground data through NASA's Tracking Data Relay Satellite System or a direct-to-ground link to ground stations. The multiple data links and nodes provide several upgradable elements on both the space and ground systems. This paper will provide a general description of the testbed's system design and capabilities, discuss in detail the design and lessons learned in the implementation of the network protocols, and describe future plans for continuing research to meet the communication needs for evolving global space systems.

  6. Performance Evaluation of an Enhanced Uplink 3.5G System for Mobile Healthcare Applications.

    PubMed

    Komnakos, Dimitris; Vouyioukas, Demosthenes; Maglogiannis, Ilias; Constantinou, Philip

    2008-01-01

    The present paper studies the prospects and performance of a forthcoming high-speed third generation (3.5G) networking technology, called enhanced uplink, for delivering mobile health (m-health) applications. The performance of 3.5G networks is a critical factor for successful development of m-health services perceived by end users. In this paper, we propose a methodology for performance assessment based on the joint uplink transmission of voice, real-time video, biological data (such as electrocardiogram, vital signals, and heart sounds), and healthcare record file transfer. Various scenarios were considered in terms of real-time, nonreal-time, and emergency applications in random locations, where no other system but 3.5G is available. The achievement of quality of service (QoS) was explored through step-by-step improvement of the enhanced uplink system's parameters, tuning the network for the best performance in the context of the desired m-health services.

  7. Performance Evaluation of an Enhanced Uplink 3.5G System for Mobile Healthcare Applications

    PubMed Central

    Komnakos, Dimitris; Vouyioukas, Demosthenes; Maglogiannis, Ilias; Constantinou, Philip

    2008-01-01

    The present paper studies the prospects and performance of a forthcoming high-speed third generation (3.5G) networking technology, called enhanced uplink, for delivering mobile health (m-health) applications. The performance of 3.5G networks is a critical factor for successful development of m-health services perceived by end users. In this paper, we propose a methodology for performance assessment based on the joint uplink transmission of voice, real-time video, biological data (such as electrocardiogram, vital signals, and heart sounds), and healthcare record file transfer. Various scenarios were considered in terms of real-time, nonreal-time, and emergency applications in random locations, where no other system but 3.5G is available. The achievement of quality of service (QoS) was explored through step-by-step improvement of the enhanced uplink system's parameters, tuning the network for the best performance in the context of the desired m-health services. PMID:19132096

  8. Social Networking Adapted for Distributed Scientific Collaboration

    NASA Technical Reports Server (NTRS)

    Karimabadi, Homa

    2012-01-01

    Sci-Share is a social networking site with novel, specially designed feature sets to enable simultaneous remote collaboration and sharing of large data sets among scientists. The site will include not only the standard features found on popular consumer-oriented social networking sites such as Facebook and Myspace, but also a number of powerful tools to extend its functionality to a science collaboration site. A Virtual Observatory is a promising technology for making data accessible from various missions and instruments through a Web browser. Sci-Share augments services provided by Virtual Observatories by enabling distributed collaboration and sharing of downloaded and/or processed data among scientists. This will, in turn, increase science returns from NASA missions. Sci-Share also enables better utilization of NASA's high-performance computing resources by providing an easy and central mechanism to access and share large files in users' space or those saved on mass storage. The most common means of remote scientific collaboration today remains the trio of e-mail for electronic communication, FTP for file sharing, and personalized Web sites for dissemination of papers and research results. Each of these tools has well-known limitations. Sci-Share transforms the social networking paradigm into a scientific collaboration environment by offering powerful tools for cooperative discourse and digital content sharing. Sci-Share differentiates itself by serving as an online repository for users' digital content with the following unique features: a) Sharing of any file type, any size, from anywhere; b) Creation of projects and groups for controlled sharing; c) Module for sharing files on HPC (High Performance Computing) sites; d) Universal accessibility of staged files as embedded links on other sites (e.g. Facebook) and tools (e.g. e-mail); e) Drag-and-drop transfer of large files, replacing awkward e-mail attachments (and file size limitations); f) Enterprise-level data and messaging encryption; and g) Easy-to-use intuitive workflow.

  9. Addressing the Tension Between Strong Perimeter Control and Usability

    NASA Technical Reports Server (NTRS)

    Hinke, Thomas H.; Kolano, Paul Z.; Keller, Chris

    2006-01-01

    This paper describes a strong perimeter control system for a general purpose processing system, with the perimeter control system taking significant steps to address usability issues, thus mitigating the tension between strong perimeter protection and usability. A secure front end enforces two-factor authentication for all interactive access to an enclave that contains a large supercomputer and various associated systems, each requiring its own authentication. Usability is addressed through a design in which the user has to perform two-factor authentication at the secure front end in order to gain access to the enclave, while an agent transparently performs public key authentication as needed to authenticate to specific systems within the enclave. The paper then describes a proxy system that allows users to transfer files into the enclave under script control when the user is not present to perform two-factor authentication. This uses a pre-authorization approach based on public key technology, which is still strongly tied to both two-factor authentication and strict control over where files can be transferred on the target system. Finally, the paper describes an approach to support network applications and systems such as grids or parallel file transfer protocols that require the use of many ports through the perimeter: a least privilege approach that dynamically opens ports on a host-specific, if-authorized, as-needed, just-in-time basis.
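
    The least-privilege port-opening idea might look roughly like the sketch below; the iptables/at invocation is an assumed mechanism chosen for illustration, not the paper's implementation:

        import subprocess

        def open_port_just_in_time(client_ip: str, port: int, seconds: int = 60) -> None:
            """Least-privilege perimeter rule: accept traffic on one port, from
            one authorized host, and schedule its removal once the authorized
            window closes."""
            rule = ["INPUT", "-p", "tcp", "-s", client_ip,
                    "--dport", str(port), "-j", "ACCEPT"]
            subprocess.run(["iptables", "-I"] + rule, check=True)
            # remove the same rule after the window expires
            delete_cmd = "iptables -D " + " ".join(rule)
            subprocess.run(["at", "now", "+", f"{max(1, seconds // 60)}", "minutes"],
                           input=delete_cmd.encode(), check=True)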

  10. cadcVOFS: A FUSE Based File System Layer for VOSpace

    NASA Astrophysics Data System (ADS)

    Kavelaars, J.; Dowler, P.; Jenkins, D.; Hill, N.; Damian, A.

    2012-09-01

    The CADC is now making extensive use of the VOSpace protocol for user-managed storage. The VOSpace standard allows a diverse set of rich data services to be delivered to users via a simple protocol. We have recently developed cadcVOFS, a FUSE-based file-system layer for VOSpace. cadcVOFS provides a filesystem layer on top of VOSpace so that standard Unix tools (such as ‘find’, ‘emacs’, ‘awk’ etc) can be used directly on the data objects stored in VOSpace. Once mounted, the VOSpace appears as a network storage volume inside the operating system. Within the CADC Cloud Computing project (CANFAR) we have used VOSpace as the method for retrieving and storing processing inputs and products. The abstraction of storage is an important component of Cloud Computing, and the high use level of our VOSpace service reflects this.
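
    A minimal read-only FUSE layer in the same style can be sketched with the fusepy package; the in-memory object store below stands in for calls to a VOSpace service, and the mount point is illustrative:

        import errno, stat, time
        from fuse import FUSE, FuseOSError, Operations   # fusepy

        class ToyVOFS(Operations):
            """Read-only FUSE layer over an in-memory store standing in for a
            VOSpace service; a real cadcVOFS translates these calls into
            VOSpace web-service requests instead."""
            def __init__(self, objects: dict):
                self.objects = objects                   # {"/name": b"bytes"}

            def getattr(self, path, fh=None):
                now = time.time()
                if path == "/":
                    return dict(st_mode=stat.S_IFDIR | 0o755, st_nlink=2,
                                st_mtime=now, st_atime=now, st_ctime=now)
                if path not in self.objects:
                    raise FuseOSError(errno.ENOENT)
                return dict(st_mode=stat.S_IFREG | 0o444, st_nlink=1,
                            st_size=len(self.objects[path]),
                            st_mtime=now, st_atime=now, st_ctime=now)

            def readdir(self, path, fh):
                return [".", ".."] + [p.lstrip("/") for p in self.objects]

            def read(self, path, size, offset, fh):
                return self.objects[path][offset:offset + size]

        if __name__ == "__main__":
            # Once mounted, standard Unix tools (find, awk, ...) work directly.
            FUSE(ToyVOFS({"/readme.txt": b"hello VOSpace\n"}), "/mnt/vospace",
                 foreground=True, ro=True)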

  11. Definition and maintenance of a telemetry database dictionary

    NASA Technical Reports Server (NTRS)

    Knopf, William P. (Inventor)

    2007-01-01

    A telemetry dictionary database includes a component for receiving spreadsheet workbooks of telemetry data over a web-based interface from other computer devices. Another component routes the spreadsheet workbooks to a specified directory on the host processing device. A process then checks the received spreadsheet workbooks for errors, and if no errors are detected the spreadsheet workbooks are routed to another directory to await initiation of a remote database loading process. The loading process first converts the spreadsheet workbooks to comma separated value (CSV) files. Next, a network connection with the computer system that hosts the telemetry dictionary database is established and the CSV files are ported to the computer system that hosts the telemetry dictionary database. This is followed by a remote initiation of a database loading program. Upon completion of loading, a flatfile generation program is manually initiated to generate a flatfile to be used in a mission operations environment by the core ground system.
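
    The CSV-to-database loading step might be sketched as follows; the table schema and column names are assumptions made for illustration, not the patent's actual telemetry dictionary schema:

        import csv, sqlite3

        def load_csv_into_db(csv_path: str, db_path: str = "telemetry.db") -> int:
            """Load a checked, CSV-converted workbook into the dictionary
            database; returns the number of rows loaded."""
            con = sqlite3.connect(db_path)
            con.execute("""CREATE TABLE IF NOT EXISTS telemetry_dictionary
                           (mnemonic TEXT PRIMARY KEY, subsystem TEXT, units TEXT)""")
            with open(csv_path, newline="") as f:
                rows = [(r["mnemonic"], r["subsystem"], r["units"])
                        for r in csv.DictReader(f)]
            con.executemany(
                "INSERT OR REPLACE INTO telemetry_dictionary VALUES (?,?,?)", rows)
            con.commit()
            con.close()
            return len(rows)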

  12. Twiddlenet: Metadata Tagging and Data Dissemination in Mobile Device Networks

    DTIC Science & Technology

    2007-09-01

    hosting a distributed data dissemination application. Stated simply, there are a multitude of handheld devices on the market that can communicate in...content (UGC) across a network of distributed devices. This sharing is accomplished through the use of descriptive metadata tags that are assigned to a...file once it has been shared. These metadata files are uploaded to a centralized portal and arranged for efficient UGC location and searching.

  13. Network Basics.

    ERIC Educational Resources Information Center

    Tennant, Roy

    1992-01-01

    Explains how users can find and access information resources available on the Internet. Highlights include network information centers (NICs); lists, both formal and informal; computer networking protocols, including international standards; electronic mail; remote log-in; and file transfer. (LRW)

  14. Home teleradiology system

    NASA Astrophysics Data System (ADS)

    Komo, Darmadi; Garra, Brian S.; Freedman, Matthew T.; Mun, Seong K.

    1997-05-01

    The Home Teleradiology Server system has been developed and installed at the Department of Radiology, Georgetown University Medical Center. The main purpose of the system is to provide a service for on-call physicians to view patients' medical images at home during off-hours. This service will reduce the overhead time required by on-call physicians to travel to the hospital, thereby increasing the efficiency of patient care and improving the total quality of health care. Typically, when a new case is conducted, the medical images generated from CT, US, and/or MRI modalities are transferred to a central server at the hospital via DICOM messages over an existing hospital network. The server has a DICOM network agent that listens to DICOM messages sent by CT, US, and MRI modalities and stores them into separate DICOM files for sending purposes. The server also has general purpose, flexible scheduling software that can be configured to send image files to specific user(s) at certain times on any day(s) of the week. The server will then distribute the medical images to on-call physicians' homes via a high-speed modem. All file transmissions occur in the background without human interaction after the scheduling software is pre-configured accordingly. At the receiving end, the physicians' computers are high-end workstations that have high-speed modems to receive the medical images sent by the central server from the hospital, and DICOM compatible viewer software to view the transmitted medical images in DICOM format. A technician from the hospital will notify the physician(s) after all the image files have been completely sent. The physician(s) will then examine the medical images and decide if it is necessary to travel to the hospital for further examination of the patients. Overall, the Home Teleradiology system provides on-call physicians with a cost-effective and convenient environment for viewing patients' medical images at home.

  15. Functional evaluation of telemedicine with super high definition images and B-ISDN.

    PubMed

    Takeda, H; Matsumura, Y; Okada, T; Kuwata, S; Komori, M; Takahashi, T; Minatom, K; Hashimoto, T; Wada, M; Fujio, Y

    1998-01-01

    In order to determine whether a super high definition (SHD) image format of 2048 x 2048 pixels at 60 frames/sec was suitable for telemedicine, we established a filing system for medical images and performed two experiments in transmission of high quality images. All images of various types produced from one case of ischemic heart disease were digitized and registered in the filing system. Images consisted of plain chest x-ray, electrocardiogram, ultrasound cardiogram, cardiac scintigram, coronary angiogram, left ventriculogram and so on. All images were animated and totaled 243. We prepared a graphical user interface (GUI) for image retrieval based on medical events and modalities. Twenty-one cardiac specialists evaluated the quality of the SHD images as somewhat poor compared to the original pictures but sufficient for making diagnoses, and effective as a tool for teaching and case study purposes. The system's capability of simultaneously displaying several animated images was deemed especially effective for grasping a comprehensive diagnosis. Efficient input methods and the capacity to file all produced images are future issues. Using a B-ISDN network, the SHD file was prefetched to the servers at Kyoto University Hospital and the BBCC (Broadband ISDN Business Chance & Culture Creation) laboratory as a telemedicine experiment. A simultaneous video conference system, control of image retrieval and a pointing function made the teleconference successful in terms of high quality of medical images, quick response time and interactive data exchange.

  16. Creation of lumped parameter thermal model by the use of finite elements

    NASA Technical Reports Server (NTRS)

    1978-01-01

    In the finite difference technique, the thermal network is represented by an analogous electrical network. The development of this network model, which is used to describe a physical system, often requires tedious manual data preparation and checkout by the analyst, which can be greatly reduced through the use of computer programs to automatically develop the mathematical model and associated input data and to graphically display the analytical model to facilitate model verification. Three separate programs are involved, linked through common mass storage files and data card formats. These programs are SPAR, CINGEN and GEOMPLT, and are used to (1) develop thermal models for the MITAS II thermal analyzer program; (2) produce geometry plots of the thermal network; and (3) produce temperature distribution and time history plots.

  17. Biological Investigations of Adaptive Networks: Neuronal Control of Conditioned Responses

    DTIC Science & Technology

    1989-07-01

    The program also controls A/D sampling of the voltage trace from the NMR transducer and disk files for NMR, neural spikes, and synchronization. HSAD: Basic...format which ANALYZE (by John Desmond) can read. FIG.HIRES: Reads C-64 HSAD files and EVENT NMR files and generates oscilloscope-like figures showing

  18. 76 FR 76463 - Self-Regulatory Organizations; International Securities Exchange, LLC; Notice of Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-07

    ... SECURITIES AND EXCHANGE COMMISSION [Release No. 34-65861; File No. SR-ISE-2011-77] Self-Regulatory Organizations; International Securities Exchange, LLC; Notice of Filing and Immediate Effectiveness of Proposed Rule Change Relating to Network and Gateway Fees December 1, 2011. Pursuant to Section 19(b)(1) of the...

  19. 76 FR 21416 - Self-Regulatory Organizations; International Securities Exchange, LLC; Notice of Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-15

    ... SECURITIES AND EXCHANGE COMMISSION [Release No. 34-64292; File No. SR-ISE-2011-22] Self-Regulatory Organizations; International Securities Exchange, LLC; Notice of Filing and Immediate Effectiveness of Proposed Rule Change Relating to Network Fees April 11, 2011. Pursuant to Section 19(b)(1) of the Securities...

  20. 77 FR 14847 - Self-Regulatory Organizations; International Securities Exchange, LLC; Notice of Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-13

    ... SECURITIES AND EXCHANGE COMMISSION [Release No. 34-66525; File No. SR-ISE-2012-09] Self-Regulatory Organizations; International Securities Exchange, LLC; Notice of Filing and Immediate Effectiveness of Proposed Rule Change Relating to Network Fees March 7, 2012. Pursuant to Section 19(b)(1) of the Securities...

  1. An algorithm of a real time image tracking system using a camera with pan/tilt motors on an embedded system

    NASA Astrophysics Data System (ADS)

    Kim, Hie-Sik; Nam, Chul; Ha, Kwan-Yong; Ayurzana, Odgeral; Kwon, Jong-Won

    2005-12-01

    Embedded systems have been applied to many fields, including households and industrial sites. User interface technology with a simple on-screen display has been implemented more and more. User demands are increasing, and embedded systems have ever more fields of application due to the high penetration rate of the Internet; therefore, the demand for embedded systems tends to rise. An embedded system for image tracking was implemented. The system uses a fixed IP address for reliable server operation on TCP/IP networks. Using a USB camera on the embedded Linux system, real-time broadcasting of video images on the Internet was developed. The digital camera is connected to the USB host port of the embedded board. All input images from the video camera are continuously stored as compressed JPEG files in a directory on the Linux web server, and each frame image from the web camera is compared to measure a displacement vector, using a block matching algorithm and an edge detection algorithm for fast processing speed. The displacement vector is then used for pan/tilt motor control through an RS232 serial cable. The embedded board utilizes the S3C2410 MPU with the ARM920T core from Samsung. The operating system was ported to an embedded Linux kernel and a root file system was mounted. The stored images are sent to the client PC through the web browser, using the network functions of Linux and a program developed with the TCP/IP protocol.
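
    The displacement-vector step can be illustrated with a minimal exhaustive block matching routine; the block and search-window sizes below are illustrative parameters, not those of the paper:

        import numpy as np

        def displacement_vector(prev: np.ndarray, curr: np.ndarray,
                                block: int = 16, search: int = 8) -> tuple[int, int]:
            """Estimate the motion of the central block between two grayscale
            frames by exhaustive block matching (sum of absolute differences)."""
            cy, cx = prev.shape[0] // 2, prev.shape[1] // 2
            ref = prev[cy:cy + block, cx:cx + block].astype(np.int32)
            best, best_dy, best_dx = None, 0, 0
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    cand = curr[cy + dy:cy + dy + block,
                                cx + dx:cx + dx + block].astype(np.int32)
                    sad = np.abs(ref - cand).sum()       # matching cost
                    if best is None or sad < best:
                        best, best_dy, best_dx = sad, dy, dx
            return best_dy, best_dx        # this vector drives the pan/tilt motors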

  2. Displaying Composite and Archived Soundings in the Advanced Weather Interactive Processing System

    NASA Technical Reports Server (NTRS)

    Barrett, Joe H., III; Volkmer, Matthew R.; Blottman, Peter F.; Sharp, David W.

    2008-01-01

    In a previous task, the Applied Meteorology Unit (AMU) developed spatial and temporal climatologies of lightning occurrence based on eight atmospheric flow regimes. The AMU created climatological, or composite, soundings of wind speed and direction, temperature, and dew point temperature at four rawinsonde observation stations at Jacksonville, Tampa, Miami, and Cape Canaveral Air Force Station, for each of the eight flow regimes. The composite soundings were delivered to the National Weather Service (NWS) Melbourne (MLB) office for display using the National version of the Skew-T Hodograph analysis and Research Program (NSHARP) software program. The NWS MLB requested the AMU make the composite soundings available for display in the Advanced Weather Interactive Processing System (AWIPS), so they could be overlaid on current observed soundings. This will allow the forecasters to compare the current state of the atmosphere with climatology. This presentation describes how the AMU converted the composite soundings from NSHARP Archive format to Network Common Data Form (NetCDF) format, so that the soundings could be displayed in AWIPS. NetCDF is a set of data formats, programming interfaces, and software libraries used to read and write scientific data files. In AWIPS, each meteorological data type, such as soundings or surface observations, has a unique NetCDF format. Each format is described by a NetCDF template file. Although NetCDF files are in binary format, they can be converted to a text format called network Common data form Description Language (CDL). A software utility called ncgen is used to create a NetCDF file from a CDL file, while the ncdump utility is used to create a CDL file from a NetCDF file. AWIPS receives soundings in Binary Universal Form for the Representation of Meteorological data (BUFR) format (http://dss.ucar.edu/docs/formats/bufr/), and then decodes them into NetCDF format. Only two sounding files are generated in AWIPS per day. One file contains all of the soundings received worldwide between 0000 UTC and 1200 UTC, and the other includes all soundings between 1200 UTC and 0000 UTC. In order to add the composite soundings into AWIPS, a procedure was created to configure, or localize, AWIPS. This involved modifying and creating several configuration text files. A unique four-character site identifier was created for each of the 32 soundings so each could be viewed separately. The first three characters were based on the site identifier of the observed sounding, while the last character was based on the flow regime. While researching the localization process for soundings, the AMU discovered a method of archiving soundings so old soundings would not get purged automatically by AWIPS. This method could provide an alternative way of localizing AWIPS for composite soundings. In addition, this would allow forecasters to use archived soundings in AWIPS for case studies. A test sounding file in NetCDF format was written in order to verify the correct format for soundings in AWIPS. After the file was viewed successfully in AWIPS, the AMU wrote a software program in the Tool Command Language/Tool Kit (Tcl/Tk) language to convert the 32 composite soundings from NSHARP Archive to CDL format. The ncgen utility was then used to convert the CDL file to a NetCDF file. The NetCDF file could then be read and displayed in AWIPS.
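
    Writing such a sounding file programmatically might look like the sketch below, using the netCDF4 Python package rather than ncgen; the variable names and the XMRA site identifier are illustrative assumptions, and an operational file must follow the AWIPS sounding template exactly:

        from netCDF4 import Dataset
        import numpy as np

        # One toy composite sounding: pressure levels and temperatures.
        levels = np.array([1000.0, 850.0, 700.0, 500.0])        # hPa
        temps = np.array([298.2, 290.1, 283.0, 267.4])          # K

        with Dataset("composite_sounding.nc", "w") as nc:
            nc.createDimension("level", len(levels))
            p = nc.createVariable("pressure", "f4", ("level",))
            t = nc.createVariable("temperature", "f4", ("level",))
            p.units, t.units = "hPa", "K"
            p[:], t[:] = levels, temps
            # Hypothetical four-character identifier: three characters for the
            # observed-sounding site plus one for the flow regime, as above.
            nc.siteId = "XMRA"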

  3. Performance of the engineering analysis and data system 2 common file system

    NASA Technical Reports Server (NTRS)

    Debrunner, Linda S.

    1993-01-01

    The Engineering Analysis and Data System (EADS) was used from April 1986 to July 1993 to support large scale scientific and engineering computation (e.g. computational fluid dynamics) at Marshall Space Flight Center. The need for an updated system resulted in an RFP in June 1991, after which a contract was awarded to Cray Grumman. EADS II was installed in February 1993, and by July 1993 most users were migrated. EADS II is a network of heterogeneous computer systems supporting scientific and engineering applications. The Common File System (CFS) is a key component of this system. The CFS provides a seamless, integrated environment to the users of EADS II including both disk and tape storage. UniTree software is used to implement this hierarchical storage management system. The performance of the CFS suffered during the early months of the production system. Several of the performance problems were traced to software bugs which have been corrected. Other problems were associated with hardware. However, the use of NFS in UniTree UCFM software limits the performance of the system. The performance issues related to the CFS have led to a need to develop a greater understanding of the CFS organization. This paper will first describe the EADS II with emphasis on the CFS. Then, a discussion of mass storage systems will be presented, and methods of measuring the performance of the Common File System will be outlined. Finally, areas for further study will be identified and conclusions will be drawn.

  4. Effects of Data Replication on Data Exfiltration in Mobile Ad Hoc Networks Utilizing Reactive Protocols

    DTIC Science & Technology

    2015-03-01

    ...routing scheme can prove problematic. Two prominent proactive protocols, Destination-Sequenced Distance-Vector (DSDV) and Optimized Link State...distributed file management systems such as Tahoe-LAFS as part of its replication scheme. Altman and De Pellegrini [4] examine the impact of FEC and

  5. ORA User’s Guide 2010

    DTIC Science & Technology

    2010-06-03

    ...Started. Welcome to ORA's Help File system! The ORA Help and examples contained herein are written with a specific data set in mind: Stargate-SG1. More...www.casos.cs.cmu.edu/computational_tools/datasets/internal/stargate/index2.html As an added data set to use, a network model of The Tragedy of Julius Caesar will

  6. SU-E-T-142: Automatic Linac Log File: Analysis and Reporting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gainey, M; Rothe, T

    Purpose: End-to-end QA for IMRT/VMAT is time consuming. Automated linac log file analysis and recalculation of the daily recorded fluence, and hence dose, distribution bring this closer. Methods: Matlab (R2014b, Mathworks) software was written to read in and analyse IMRT/VMAT trajectory log files (TrueBeam 1.5, Varian Medical Systems) overnight; the files are archived on a backed-up network drive. A summary report (PDF) is sent by email to the duty linac physicist. A structured summary report (PDF) for each patient is automatically updated for embedding into the R&V system (Mosaiq 2.5, Elekta AG). The report contains cross-referenced hyperlinks to ease navigation between treatment fractions. Gamma analysis can be performed on planned (DICOM RTPlan) and treated (trajectory log) fluence distributions. Trajectory log files can be converted into RTPlan files for dose distribution calculation (Eclipse, AAA10.0.28, VMS). Results: All leaf positions are within ±0.10 mm: 57% within ±0.01 mm; 89% within ±0.05 mm. Mean leaf position deviation is 0.02 mm. Gantry angle variations lie in the range −0.1 to 0.3 degrees, mean 0.04 degrees. Fluence verification shows excellent agreement between planned and treated fluence. Agreement between the planned dose distribution and the treated dose distribution derived from log files is very good. Conclusion: Automated log file analysis is a valuable tool for the busy physicist, enabling potential treated fluence distribution errors to be quickly identified. In the near future we will correlate trajectory log analysis with routine IMRT/VMAT QA analysis. This has the potential to reduce, but not eliminate, the QA workload.
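
    The leaf-position statistics quoted above can be reproduced with a few lines of analysis once the log has been parsed; the sketch below assumes planned and actual leaf positions are already extracted as arrays (parsing Varian's binary trajectory log format is omitted), and the demo data are synthetic:

        import numpy as np

        def leaf_position_report(planned: np.ndarray, actual: np.ndarray) -> dict:
            """Summarize MLC leaf-position deviations (mm); the tolerance bands
            mirror those quoted in the abstract."""
            dev = actual - planned
            return {
                "mean_dev_mm": float(dev.mean()),
                "max_abs_dev_mm": float(np.abs(dev).max()),
                "pct_within_0.01mm": float((np.abs(dev) <= 0.01).mean() * 100),
                "pct_within_0.05mm": float((np.abs(dev) <= 0.05).mean() * 100),
                "pct_within_0.10mm": float((np.abs(dev) <= 0.10).mean() * 100),
            }

        rng = np.random.default_rng(0)
        planned = rng.uniform(-100, 100, size=10000)          # synthetic demo data
        print(leaf_position_report(planned,
                                   planned + rng.normal(0, 0.02, 10000)))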

  7. Securing electronic mail: The risks and future of electronic mail

    NASA Astrophysics Data System (ADS)

    Weeber, S. A.

    1993-03-01

    The network explosion of the past decade has significantly affected how many of us conduct our day-to-day work. We increasingly rely on network services such as electronic mail, file transfer, and network newsgroups to collect and distribute information. Unfortunately, few of the network services in use today were designed with the security issues of large heterogeneous networks in mind. In particular, electronic mail, although heavily relied upon, is notoriously insecure. Messages can be forged, snooped, and even altered by users with only a moderate level of system proficiency. The level of trust that can be assigned at present to these services needs to be carefully considered. In the past few years, standards and tools have begun to appear addressing the security concerns of electronic mail. Principal among these are RFCs 1421, 1422, 1423, and 1424, which propose Internet standards in the areas of message encipherment, key management, and algorithms for privacy enhanced mail (PEM). Additionally, three PEM systems, offering varying levels of compliance with the PEM RFCs, have also recently emerged: PGP, RIPEM, and TIS/PEM. This paper addresses the motivations and requirements for more secure electronic mail, and evaluates the suitability of the currently available PEM systems.

  8. SAN/CXFS test report to LLNL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruwart, T M; Eldel, A

    2000-01-01

    The primary objectives of this project were to evaluate the performance of the SGI CXFS File System in a Storage Area Network (SAN) and compare/contrast it to the performance of a locally attached XFS file system on the same computer and storage subsystems. The University of Minnesota participants were asked to verify that the performance of the SAN/CXFS configuration did not fall below 85% of the performance of the XFS local configuration. There were two basic hardware test configurations constructed from the following equipment: two Onyx 2 computer systems, each with two Qlogic-based Fibre Channel/XIO Host Bus Adapters (HBAs); one 8-Port Brocade Silkworm 2400 Fibre Channel Switch; and four Ciprico RF7000 RAID Disk Arrays populated with Seagate Barracuda 50GB disk drives. The Operating System on each of the ONYX 2 computer systems was IRIX 6.5.6. The first hardware configuration consisted of directly connecting the Ciprico arrays to the Qlogic controllers without the Brocade switch. The purpose of this configuration was to establish baseline performance data on the Qlogic controller / Ciprico disk raw subsystem. This baseline performance data would then be used to demonstrate any performance differences arising from the addition of the Brocade Fibre Channel Switch. Furthermore, the performance of the Qlogic controllers could be compared to that of the older, Adaptec-based XIO dual-channel Fibre Channel adapters previously used on these systems. It should be noted that only raw device tests were performed on this configuration; no file system testing was performed on it. The second hardware configuration introduced the Brocade Fibre Channel Switch. Two FC ports from each of the ONYX2 computer systems were attached to four ports of the switch and the four Ciprico arrays were attached to the remaining four. Raw disk subsystem tests were performed on the SAN configuration in order to demonstrate the performance differences between the direct-connect and the switched configurations. After this testing was completed, the Ciprico arrays were formatted with an XFS file system and performance numbers were gathered to establish a File System Performance Baseline. Finally, the disks were formatted with CXFS and further tests were run to demonstrate the performance of the CXFS file system. A summary of the results of these tests is given.

  9. Lab Streaming Layer Enabled Myo Data Collection Software User Manual

    DTIC Science & Technology

    2017-06-07

    time-series data over a local network. LSL handles the networking, time-synchronization, (near-) real-time access as well as, optionally, the...series data collection (e.g., brain activity, heart activity, muscle activity) using the LSL application programming interface (API). Time-synchronized...saved to a single extensible data format (XDF) file. Once the time-series data are collected in a Lab Recorder XDF file, users will be able to query
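
    A minimal producer/consumer pair using the pylsl API illustrates the streaming model; the stream name, type, and channel count below are placeholders, not the tool's actual configuration:

        from pylsl import StreamInfo, StreamOutlet, StreamInlet, resolve_byprop

        # Producer: advertise an 8-channel EMG stream on the local network.
        info = StreamInfo(name="Myo", type="EMG", channel_count=8,
                          nominal_srate=200, channel_format="float32",
                          source_id="myo-001")
        outlet = StreamOutlet(info)
        outlet.push_sample([0.0] * 8)        # LSL timestamps the sample itself

        # Consumer (e.g., Lab Recorder) resolves the stream and pulls samples
        # with their synchronized timestamps.
        streams = resolve_byprop("type", "EMG", timeout=5)
        inlet = StreamInlet(streams[0])
        sample, timestamp = inlet.pull_sample(timeout=1.0)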

  10. Improved image classification with neural networks by fusing multispectral signatures with topological data

    NASA Technical Reports Server (NTRS)

    Harston, Craig; Schumacher, Chris

    1992-01-01

    Automated schemes are needed to classify multispectral remotely sensed data. Human intelligence is often required to correctly interpret images from satellites and aircraft. Humans succeed because they use various types of cues about a scene to accurately define the contents of the image. Consequently, it follows that computer techniques that integrate and use different types of information would perform better than single source approaches. This research illustrated that multispectral signatures and topographical information could be used in concert. Significantly, this dual source tactic classified a remotely sensed image better than the multispectral classification alone. These classifications were accomplished by fusing spectral signatures with topographical information using neural network technology. A neural network was trained to classify Landsat multispectral signatures. A file of georeferenced ground truth classifications was used as the training criterion. The network was trained to classify urban, agriculture, range, and forest with an accuracy of 65.7 percent. Another neural network was programmed and trained to fuse these multispectral signature results with a file of georeferenced altitude data. This topological file contained 10 levels of elevation. When this nonspectral elevation information was fused with the spectral signatures, the classifications were improved to 73.7 and 75.7 percent.
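
    The fusion step can be illustrated with a toy two-source classifier; the data below are synthetic stand-ins for Landsat band signatures and quantized elevation levels, not the study's data, and the comparison merely shows why the dual-source features can outperform spectral features alone:

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(42)
        n = 2000
        spectral = rng.random((n, 4))                 # 4 toy band signatures
        elevation = rng.integers(0, 10, (n, 1))       # 10 quantized altitude levels

        # Toy ground truth in which class depends on both sources, so fusion helps:
        labels = (spectral[:, 3] > 0.5).astype(int) + 2 * (elevation[:, 0] > 4)

        X_spec = spectral                              # spectral-only features
        X_fused = np.hstack([spectral, elevation / 9]) # fused spectral + topography

        for name, X in [("spectral only", X_spec), ("fused", X_fused)]:
            net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                                random_state=0)
            net.fit(X[:1500], labels[:1500])
            print(name, net.score(X[1500:], labels[1500:]))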

  11. Distributed Virtual System (DIVIRS) Project

    NASA Technical Reports Server (NTRS)

    Schorr, Herbert; Neuman, B. Clifford

    1993-01-01

    As outlined in our continuation proposal 92-ISI-50R (revised) on contract NCC 2-539, we are (1) developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to program parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; (2) developing communications routines that support the abstractions implemented in item one; (3) continuing the development of file and information systems based on the virtual system model; and (4) incorporating appropriate security measures to allow the mechanisms developed in items 1 through 3 to be used on an open network. The goal throughout our work is to provide a uniform model that can be applied to both parallel and distributed systems. We believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. Our work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.

  12. DIstributed VIRtual System (DIVIRS) project

    NASA Technical Reports Server (NTRS)

    Schorr, Herbert; Neuman, B. Clifford

    1994-01-01

    As outlined in our continuation proposal 92-ISI-50R (revised) on NASA cooperative agreement NCC2-539, we are (1) developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to develop and execute parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; (2) developing communications routines that support the abstractions implemented in item one; (3) continuing the development of file and information systems based on the Virtual System Model; and (4) incorporating appropriate security measures to allow the mechanisms developed in items 1 through 3 to be used on an open network. The goal throughout our work is to provide a uniform model that can be applied to both parallel and distributed systems. We believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. Our work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.

  13. DIstributed VIRtual System (DIVIRS) project

    NASA Technical Reports Server (NTRS)

    Schorr, Herbert; Neuman, Clifford B.

    1995-01-01

    As outlined in our continuation proposal 92-ISI-50R (revised) on NASA cooperative agreement NCC2-539, we are (1) developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to develop and execute parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; (2) developing communications routines that support the abstractions implemented in item one; (3) continuing the development of file and information systems based on the Virtual System Model; and (4) incorporating appropriate security measures to allow the mechanisms developed in items 1 through 3 to be used on an open network. The goal throughout our work is to provide a uniform model that can be applied to both parallel and distributed systems. We believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. Our work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.

  14. NALNET book system: Cost benefit study

    NASA Technical Reports Server (NTRS)

    Dewath, N. V.; Palmour, V. E.; Foley, J. R.; Henderson, M. M.; Shockley, C. W.

    1981-01-01

    The goals of the NASA's library network system, NALNET, the functions of the current book system, the products and services of a book system required by NASA Center libraries, and the characteristics of a system that would best supply those products and services were assessed. Emphasis was placed on determining the most cost effective means of meeting NASA's requirements for an automated book system. Various operating modes were examined including the current STIMS file, the PUBFILE, developing software improvements for products as appropriate to the Center needs, and obtaining cataloging and products from the bibliographic utilities including at least OCLC, RLIN, BNA, and STIF. It is recommended that NALNET operate under the STIMS file mode and obtain cataloging and products from the bibliographic utilities. The recommendations are based on the premise that given the current state of the art in library automation it is not cost effective for NASA to maintain a full range of cataloging services on its own system. The bibliographic utilities can support higher quality systems with a greater range of services at a lower total cost.

  15. Distributed Virtual System (DIVIRS) project

    NASA Technical Reports Server (NTRS)

    Schorr, Herbert; Neuman, B. Clifford

    1993-01-01

    As outlined in the continuation proposal 92-ISI-50R (revised) on NASA cooperative agreement NCC 2-539, the investigators are developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to develop and execute parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; developing communications routines that support the abstractions implemented; continuing the development of file and information systems based on the Virtual System Model; and incorporating appropriate security measures to allow the mechanisms developed to be used on an open network. The goal throughout the work is to provide a uniform model that can be applied to both parallel and distributed systems. The authors believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. The work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.

  16. Methodology used to produce an encoded 1:100,000-scale digital hydrographic data layer for the Pacific Northwest

    USGS Publications Warehouse

    Fisher, B.J.

    1996-01-01

    The U.S. Geological Survey (USGS) has produced a River Reach File data layer for the Pacific Northwest for use in water-resource management applications. The Pacific Northwest (PNW) River Reach Files, a geo-referenced river reach data layer at 1:100,000-scale, are encoded with the U.S. Environmental Protection Agency's (EPA) reach numbers. The encoding was a primary task of the River Reach project, because EPA's reach identifiers are also an integral hydrologic component in a regional Northwest Environmental Data Base, an ongoing effort by Federal and State agencies to compile information on reach-specific resources on rivers in Oregon, Idaho, Washington, and western Montana. A unique conflation algorithm was developed by the USGS to transfer the EPA reach codes and other meaningful attributes from the 1:250,000-scale EPA TRACE graphic files to the PNW Reach Files. The PNW Reach Files also were designed so that reach-specific information upstream or downstream from a point in the stream network could be extracted from feature attribute tables or from a Geographic Information System. This report documents the methodology used to create this 1:100,000-scale hydrologic data layer.
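
    The upstream/downstream queries described above amount to walking a connectivity table. A minimal Python sketch, assuming a hypothetical table that maps each reach code to its downstream neighbor; the codes and field layout here are invented for illustration, not the actual PNW Reach File schema:

      # Downstream traversal over a toy reach-connectivity table. The
      # reach codes and the mapping are hypothetical, not the actual
      # PNW Reach File attribute schema.
      reach_table = {
          "17010101-001": "17010101-002",
          "17010101-002": "17010101-003",
          "17010101-003": None,          # terminal reach at the outlet
      }

      def reaches_downstream(reach_id, table):
          """Yield every reach from reach_id down to the network outlet."""
          seen = set()
          while reach_id is not None and reach_id not in seen:
              seen.add(reach_id)         # guard against cyclic coding errors
              yield reach_id
              reach_id = table.get(reach_id)

      print(list(reaches_downstream("17010101-001", reach_table)))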

  17. Web Extensible Display Manager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slominski, Ryan; Larrieu, Theodore L.

    Jefferson Lab's Web Extensible Display Manager (WEDM) allows staff to access EDM control system screens from a web browser in remote offices and from mobile devices. Native browser technologies are leveraged to avoid installing and managing software on remote clients such as browser plugins, tunnel applications, or an EDM environment. Since standard network ports are used, firewall exceptions are minimized. To avoid security concerns from remote users modifying a control system, WEDM exposes read-only access, and basic web authentication can be used to further restrict access. Updates of monitored EPICS channels are delivered via a Web Socket using a web gateway. The software translates EDM description files (denoted with the edl suffix) to HTML with Scalable Vector Graphics (SVG) following EDM's edl file vector drawing rules to create faithful screen renderings. The WEDM server parses edl files and creates the HTML equivalent in real time, allowing existing screens to work without modification. Alternatively, the familiar drag-and-drop EDM screen creation tool can be used to create optimized screens sized specifically for smart phones and then rendered by WEDM.
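
    The edl-to-SVG translation can be pictured with a toy Python example; the simplified property dictionary below is an assumption for illustration, not EDM's actual edl object grammar, which the WEDM server parses in full:

      # Toy rendering of one edl-style rectangle object as SVG markup.
      # The property names (x, y, w, h, fill) are assumed for the sketch;
      # real edl files use EDM's own object grammar.
      def rect_to_svg(props):
          return ('<rect x="{x}" y="{y}" width="{w}" height="{h}" '
                  'fill="{fill}" />').format(**props)

      edl_object = {"x": 10, "y": 20, "w": 100, "h": 40, "fill": "#c0c0c0"}
      svg = ('<svg xmlns="http://www.w3.org/2000/svg">'
             + rect_to_svg(edl_object) + '</svg>')
      print(svg)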

  18. Using E-Mail across Computer Networks.

    ERIC Educational Resources Information Center

    Hazari, Sunil

    1990-01-01

    Discusses the use of telecommunications technology to exchange electronic mail, files, and messages across different computer networks. Networks highlighted include ARPA Internet; BITNET; USENET; FidoNet; MCI Mail; and CompuServe. Examples of the successful use of networks in higher education are given. (Six references) (LRW)

  19. Implementation of a Campuswide Distributed Mass Storage Service: the Dream Versus Reality

    NASA Technical Reports Server (NTRS)

    Prahst, Stephen; Armstead, Betty Jo

    1996-01-01

    In 1990, a technical team at NASA Lewis Research Center, Cleveland, Ohio, began defining a Mass Storage Service to provide long-term archival storage, short-term storage for very large files, distributed Network File System access, and backup services for critical data that resides on workstations and personal computers. Because of software availability and budgets, the total service was phased in over several years. During the process of building the service from the commercial technologies available, our Mass Storage Team refined the original vision and learned from the problems and mistakes that occurred. We also enhanced some technologies to better meet the needs of users and system administrators. This report describes our team's journey from dream to reality, outlines some of the problem areas that still exist, and suggests some solutions.

  20. Contact Graph Routing Enhancements Developed in ION for DTN

    NASA Technical Reports Server (NTRS)

    Segui, John S.; Burleigh, Scott

    2013-01-01

    The Interplanetary Overlay Network (ION) software suite is an open-source, flight-ready implementation of networking protocols including the Delay/Disruption Tolerant Networking (DTN) Bundle Protocol (BP), the CCSDS (Consultative Committee for Space Data Systems) File Delivery Protocol (CFDP), and many others including the Contact Graph Routing (CGR) DTN routing system. While DTN offers the capability to tolerate disruption and long signal propagation delays in transmission, without an appropriate routing protocol, no data can be delivered. CGR was built for space exploration networks with scheduled communication opportunities (typically based on trajectories and orbits), represented as a contact graph. Since CGR uses knowledge of future connectivity, the contact graph can grow rather large, and so efficient processing is desired. These enhancements allow CGR to scale to predicted NASA space network complexities and beyond. This software improves upon CGR by adopting an earliest-arrival-time cost metric and using the Dijkstra path selection algorithm. Moving to Dijkstra path selection also enables construction of an earliest-arrival-time tree for multicast routing. The enhancements have been rolled into ION 3.0, available on sourceforge.net.
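
    The routing idea reduces to Dijkstra's algorithm with earliest arrival time as the cost, run over a list of scheduled contacts. A minimal Python sketch; the contact tuples are invented for illustration and do not reflect ION's internal contact-plan structures:

      import heapq

      # Contacts: (from_node, to_node, start, end, one_way_light_time).
      contacts = [
          ("lander", "orbiter", 100, 200, 5),
          ("orbiter", "relay", 150, 400, 10),
          ("relay", "earth", 300, 500, 20),
      ]

      def earliest_arrival(source, dest, t0):
          """Dijkstra over the contact graph, minimizing arrival time."""
          best = {source: t0}
          heap = [(t0, source)]
          while heap:
              t, node = heapq.heappop(heap)
              if node == dest:
                  return t
              if t > best.get(node, float("inf")):
                  continue                  # stale queue entry
              for frm, to, start, end, owlt in contacts:
                  if frm != node:
                      continue
                  depart = max(t, start)    # wait for the contact to open
                  if depart <= end:         # transmit before it closes
                      arrive = depart + owlt
                      if arrive < best.get(to, float("inf")):
                          best[to] = arrive
                          heapq.heappush(heap, (arrive, to))
          return None                       # destination unreachable

      print(earliest_arrival("lander", "earth", 90))   # -> 320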

  1. Entropy based file type identification and partitioning

    DTIC Science & Technology

    2017-06-01

    … energy spectrum,” Proceedings of the Twenty-Ninth International Florida Artificial Intelligence Research Society Conference, pp. 288–293, 2016 … ABBREVIATIONS: AES, Advanced Encryption Standard; ANN, Artificial Neural Network; ASCII, American Standard Code for Information Interchange; CWT … the identification of file types and file partitioning. This approach has applications in cybersecurity as it allows for a quick determination of …
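
    The core computation behind entropy-based file type identification is the Shannon entropy of byte frequencies, evaluated over fixed-size windows to partition a file into regions. A minimal Python sketch; the window size is an arbitrary choice for illustration:

      import math
      from collections import Counter

      def shannon_entropy(data: bytes) -> float:
          """Shannon entropy in bits per byte (0.0 empty, 8.0 maximum)."""
          if not data:
              return 0.0
          n = len(data)
          return -sum(c / n * math.log2(c / n)
                      for c in Counter(data).values())

      def window_entropies(data: bytes, window: int = 4096):
          """Per-window entropy profile used to partition a file."""
          return [shannon_entropy(data[i:i + window])
                  for i in range(0, len(data), window)]

      # Encrypted or compressed regions approach 8 bits per byte.
      print(shannon_entropy(b"aaaa"), shannon_entropy(bytes(range(256))))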

  2. Code 672 observational science branch computer networks

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Shirk, H. G.

    1988-01-01

    In general, networking increases productivity due to the speed of transmission, easy access to remote computers, ability to share files, and increased availability of peripherals. Two different networks within the Observational Science Branch are described in detail.

  3. 31 CFR 1023.311 - Filing obligations.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... CRIMES ENFORCEMENT NETWORK, DEPARTMENT OF THE TREASURY RULES FOR BROKERS OR DEALERS IN SECURITIES Reports Required To Be Made By Brokers or Dealers in Securities § 1023.311 Filing obligations. Refer to § 1010.311... securities. ...

  4. 31 CFR 1023.311 - Filing obligations.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... CRIMES ENFORCEMENT NETWORK, DEPARTMENT OF THE TREASURY RULES FOR BROKERS OR DEALERS IN SECURITIES Reports Required To Be Made By Brokers or Dealers in Securities § 1023.311 Filing obligations. Refer to § 1010.311... securities. ...

  5. 31 CFR 1023.311 - Filing obligations.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... CRIMES ENFORCEMENT NETWORK, DEPARTMENT OF THE TREASURY RULES FOR BROKERS OR DEALERS IN SECURITIES Reports Required To Be Made By Brokers or Dealers in Securities § 1023.311 Filing obligations. Refer to § 1010.311... securities. ...

  6. 31 CFR 1023.311 - Filing obligations.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... CRIMES ENFORCEMENT NETWORK, DEPARTMENT OF THE TREASURY RULES FOR BROKERS OR DEALERS IN SECURITIES Reports Required To Be Made By Brokers or Dealers in Securities § 1023.311 Filing obligations. Refer to § 1010.311... securities. ...

  7. The Near Future Trend: Combining Web Access and Local CD Networks. Experience and a Few Suggestions.

    ERIC Educational Resources Information Center

    Ma, Wei

    1998-01-01

    Focuses on the trend to combine Web access and CD networks, benefits of considering the community network environment as a whole, and need for flexibility in considering new technologies. Describes the Occidental College Library (California) experience of building and sharing a network and network file server. (PEN)

  8. Cytoscape file of chemical networks

    EPA Pesticide Factsheets

    The maximum connectivity scores of pairwise chemical conditions summarized from Cmap results in a file with Cytoscape format (http://www.cytoscape.org/). The figures in the publication were generated from this file. The Cytoscape file is formed by importing the eight text files therein. This dataset is associated with the following publication: Wang, R., A. Biales, N. Garcia-Reyero, E. Perkins, D. Villeneuve, G. Ankley, and D. Bencic. Fish Connectivity Mapping: Linking Chemical Stressors by Their MOA-Driven Transcriptomic Profiles. BMC Genomics. BioMed Central Ltd, London, UK, 17(84): 1-20, (2016).

  9. RAID-2: Design and implementation of a large scale disk array controller

    NASA Technical Reports Server (NTRS)

    Katz, R. H.; Chen, P. M.; Drapeau, A. L.; Lee, E. K.; Lutz, K.; Miller, E. L.; Seshan, S.; Patterson, D. A.

    1992-01-01

    We describe the implementation of a large scale disk array controller and subsystem incorporating over 100 high performance 3.5 inch disk drives. It is designed to provide 40 MB/s sustained performance and 40 GB capacity in three 19 inch racks. The array controller forms an integral part of a file server that attaches to a Gb/s local area network. The controller implements a high bandwidth interconnect between an interleaved memory, an XOR calculation engine, the network interface (HIPPI), and the disk interfaces (SCSI). The system is now functionally operational, and we are tuning its performance. We review the design decisions, history, and lessons learned from this three year university implementation effort to construct a truly large scale system assembly.
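
    The XOR calculation engine's job can be shown in a few lines: stripe parity is the bytewise XOR of the data blocks, and any single lost block is recovered by XORing the surviving blocks with the parity. A toy Python sketch (a real controller operates on full sectors in hardware):

      def xor_blocks(blocks):
          """Bytewise XOR of equal-length blocks, as a parity engine does."""
          out = bytearray(len(blocks[0]))
          for block in blocks:
              for i, b in enumerate(block):
                  out[i] ^= b
          return bytes(out)

      stripe = [b"\x01\x02\x03", b"\x04\x05\x06", b"\x07\x08\x09"]
      parity = xor_blocks(stripe)

      # Lose block 1, then rebuild it from the survivors plus parity.
      recovered = xor_blocks([stripe[0], stripe[2], parity])
      assert recovered == stripe[1]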

  10. Modeling And Simulation Of Multimedia Communication Networks

    NASA Astrophysics Data System (ADS)

    Vallee, Richard; Orozco-Barbosa, Luis; Georganas, Nicolas D.

    1989-05-01

    In this paper, we present a simulation study of a browsing system involving radiological image servers. The proposed IEEE 802.6 DQDB MAN standard is designated as the computer network to transfer radiological images from file servers to medical workstations, and to simultaneously support real time voice communications. Storage and transmission of original raster scanned images and images compressed according to pyramid data structures are considered. Different types of browsing as well as various image sizes and bit rates in the DQDB MAN are also compared. The elapsed time, measured from the time an image request is issued until the image is displayed on the monitor, is the parameter considered to evaluate the system performance. Simulation results show that image browsing can be supported by the DQDB MAN.

  11. Description of CRIB, the GIPSY retrieval mechanism, and the interface to the General Electric MARK III Service : CRIB, the mineral resources data bank of the U.S. Geological Survey--guide for public users, 1977

    USGS Publications Warehouse

    Calkins, James Alfred; Keefer, Eleanor K.; Ofsharick, Regina A.; Mason, George T.; Tracy, Patricia; Atkins, Mary

    1978-01-01

    The U.S. Geological Survey's Computerized Resources Information Bank (CRIB) is being made available for public use through the computer facilities of the University of Oklahoma and the General Electric Company, U.S.A. The use of General Electric's worldwide information-services network provides access to the CRIB file to a worldwide clientele. This manual, which consists of two chapters, is intended as a guide to users who wish to interrogate the file. Chapter A contains a description of the CRIB file, information on the use of the GIPSY retrieval system, and a description of the General Electric MARK III Service. Chapter B contains a description of the individual data items in the CRIB record as well as code lists. CRIB consists of a set of variable-length records on the metallic and nonmetallic mineral resources of the United States and other countries. At present, 31,645 records in the master file are being made available. The record contains information on mineral deposits and mineral commodities. Some topics covered are: deposit name, location, commodity information, description of deposit, geology, production, reserves, potential resources, and references. The data are processed by the GIPSY program, which maintains the data file and builds, updates, searches, and prints the records using simple yet versatile command statements. Searching and selecting records is accomplished by specifying the presence, absence, or content of any element of information in the record; these specifications can be logically linked to prepare sophisticated search strategies. Output is available in the form of the complete record, a listing of selected parts of the record, or fixed-field tabulations. The General Electric MARK III Service is a computerized information services network operating internationally by land lines, satellites, and undersea cables. The service is available by local telephone to 500 cities in North America, Western Europe, Australia, Southeast Asia, Japan, and Saudi Arabia. An interface called the 'foreground driver' is used to link the GIPSY program to the General Electric system.

  12. X-Graphs: Language and Algorithms for Heterogeneous Graph Streams

    DTIC Science & Technology

    2017-09-01

    … METHODS, ASSUMPTIONS, AND PROCEDURES … Software Abstractions for Graph Analytic Applications … High-Performance Platforms for Graph Processing … data is stored in a distributed file system. … implementations of novel methods for network analysis: several methods for detection of overlapping communities, personalized PageRank, node embeddings into a d…

  13. Free Space Optical Communication in the Military Environment

    DTIC Science & Technology

    2014-09-01

    … Communications Commission; FDA, Food and Drug Administration; FMV, Full Motion Video; FOB, Forward Operating Base; FOENEX, Free-Space Optical Experimental Network … from radio and voice to chat message and email. Data-rich multimedia content, such as high-definition pictures, video chat, video files, and … introduction of full-motion video (FMV) via numerous different Intelligence Surveillance and Reconnaissance (ISR) systems, such as targeting pods on …

  14. Field-Deployable Video Cloud Solution

    DTIC Science & Technology

    2016-03-01

    … 2. Shipboard Server or Video Cloud System … 3. 4G LTE and Wi-Fi … LAN, local area network; LED, light-emitting diode; Li-ion, lithium ion; LTE, long term evolution; Mbps, megabits per second; MBps, megabytes per second … restrictions on distribution. File size is dependent on both bit rate and content length. Bit rate is a value measured in bits per second (bps) and is …

  15. Survey on Security Issues in File Management in Cloud Computing Environment

    NASA Astrophysics Data System (ADS)

    Gupta, Udit

    2015-06-01

    Cloud computing has pervaded every aspect of information technology in the past decade. With the advent of cloud networks, it has become easier to process the plethora of data generated by various devices in real time. The privacy of users' data is maintained by data centers around the world, and hence it has become feasible to operate on that data from lightweight portable devices. But with ease of processing comes the security aspect of the data. One such security aspect is secure file transfer, either internally within a cloud or externally from one cloud network to another. File management is central to cloud computing and it is paramount to address the security concerns which arise out of it. This survey paper aims to elucidate the various protocols which can be used for secure file transfer and analyze the ramifications of using each protocol.
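
    As one concrete instance of the secure file transfer protocols such a survey covers, here is a minimal SFTP upload using the third-party paramiko library; the host, credentials, and paths are placeholders, and real deployments should verify host keys rather than auto-accept them:

      import paramiko

      client = paramiko.SSHClient()
      # Demo only: production code should load and verify known host keys.
      client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
      client.connect("cloud.example.com", username="user", password="secret")

      sftp = client.open_sftp()
      sftp.put("report.pdf", "/remote/reports/report.pdf")  # encrypted in transit
      sftp.close()
      client.close()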

  16. DSSTox chemical-index files for exposure-related ...

    EPA Pesticide Factsheets

    The Distributed Structure-Searchable Toxicity (DSSTox) ARYEXP and GEOGSE files are newly published, structure-annotated files of the chemical-associated and chemical exposure-related summary experimental content contained in the ArrayExpress Repository and Gene Expression Omnibus (GEO) Series (based on data extracted on September 20, 2008). ARYEXP and GEOGSE contain 887 and 1064 unique chemical substances mapped to 1835 and 2381 chemical exposure-related experiment accession IDs, respectively. The standardized files allow one to assess, compare and search the chemical content in each resource, in the context of the larger DSSTox toxicology data network, as well as across large public cheminformatics resources such as PubChem (http://pubchem.ncbi.nlm.nih.gov).

  17. Archiving and Distributing Seismic Data at the Southern California Earthquake Data Center (SCEDC)

    NASA Astrophysics Data System (ADS)

    Appel, V. L.

    2002-12-01

    The Southern California Earthquake Data Center (SCEDC) archives and provides public access to earthquake parametric and waveform data gathered by the Southern California Seismic Network and since January 1, 2001, the TriNet seismic network, southern California's earthquake monitoring network. The parametric data in the archive includes earthquake locations, magnitudes, moment-tensor solutions and phase picks. The SCEDC waveform archive prior to TriNet consists primarily of short-period, 100-samples-per-second waveforms from the SCSN. The addition of the TriNet array added continuous recordings of 155 broadband stations (20 samples per second or less), and triggered seismograms from 200 accelerometers and 200 short-period instruments. Since the Data Center and TriNet use the same Oracle database system, new earthquake data are available to the seismological community in near real-time. Primary access to the database and waveforms is through the Seismogram Transfer Program (STP) interface. The interface enables users to search the database for earthquake information, phase picks, and continuous and triggered waveform data. Output is available in SAC, miniSEED, and other formats. Both the raw counts format (V0) and the gain-corrected format (V1) of COSMOS (Consortium of Organizations for Strong-Motion Observation Systems) are now supported by STP. EQQuest is an interface to prepackaged waveform data sets for select earthquakes in Southern California stored at the SCEDC. Waveform data for large-magnitude events have been prepared and new data sets will be available for download in near real-time following major events. The parametric data from 1981 to present has been loaded into the Oracle 9.2.0.1 database system and the waveforms for that time period have been converted to mSEED format and are accessible through the STP interface. The DISC optical-disk system (the "jukebox") that currently serves as the mass-storage for the SCEDC is in the process of being replaced with a series of inexpensive high-capacity (1.6 Tbyte) magnetic-disk RAIDs. These systems are built with PC-technology components, using 16 120-Gbyte IDE disks, hot-swappable disk trays, two RAID controllers, dual redundant power supplies and a Linux operating system. The system is configured over a private gigabit network that connects to the two Data Center servers and spans between the Seismological Lab and the USGS. To ensure data integrity, each RAID disk system constantly checks itself against its twin and verifies file integrity using 128-bit MD5 file checksums that are stored separate from the system. The final level of data protection is a Sony AIT-3 tape backup of the files. The primary advantage of the magnetic-disk approach is faster data access because magnetic disk drives have almost no latency. This means that the SCEDC can provide better "on-demand" interactive delivery of the seismograms in the archive.
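
    The integrity scheme described (MD5 digests stored apart from the archive and periodically re-verified) reduces to a manifest check. A minimal Python sketch; the file path and digest below are illustrative:

      import hashlib

      def md5_of(path, chunk=1 << 20):
          """128-bit MD5 of a file, streamed in 1 MiB chunks."""
          h = hashlib.md5()
          with open(path, "rb") as f:
              for block in iter(lambda: f.read(chunk), b""):
                  h.update(block)
          return h.hexdigest()

      # Manifest recorded at write time and kept on separate media.
      manifest = {"waveforms/2002/event001.mseed":
                  "d41d8cd98f00b204e9800998ecf8427e"}

      for path, expected in manifest.items():
          print(path, "OK" if md5_of(path) == expected else "CORRUPT")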

  18. a Schema for Extraction of Indoor Pedestrian Navigation Grid Network from Floor Plans

    NASA Astrophysics Data System (ADS)

    Niu, Lei; Song, Yiquan

    2016-06-01

    The requirements of indoor-navigation tasks such as emergency evacuation call for efficient solutions for handling data sources. Therefore, the extraction of navigation grids from existing floor plans draws attention. To this end, we have to thoroughly analyse the source data, such as AutoCAD DXF files. Then we can establish a sound navigation solution, which first complements the basic navigation rectangle boundaries, second subdivides these rectangles, and finally generates accessible networks from these refined rectangles. Test files are introduced to validate the whole workflow and evaluate the solution's performance. In conclusion, we have achieved the preliminary step of forming an accessible network from the navigation grids.

  19. Digital surveying and mapping of forest road network for development of a GIS tool for the effective protection and management of natural ecosystems

    NASA Astrophysics Data System (ADS)

    Drosos, Vasileios C.; Liampas, Sarantis-Aggelos G.; Doukas, Aristotelis-Kosmas G.

    2014-08-01

    In our time, Geographic Information Systems (GIS) have become important tools, not only in the geosciences and environmental sciences, but for virtually all research that requires monitoring, planning, or land management. The purpose of this paper was to develop a planning and decision-making tool using AutoCAD Map, ArcGIS, and Google Earth, with emphasis on investigating the suitability of forest-road mapping and the range of its implementation in Greece at the prefecture level. Integrating spatial information into a database makes data available throughout the organization, improving quality, productivity, and data management. Working in such an environment, one can access and edit information, integrate and analyze data, and communicate effectively, and can select desirable information such as the forest road network at a very early stage in the planning of silviculture operations, for example before the planning of the harvest is carried out. AutoCAD Map was used to export the GPS data to shape files, ArcGIS (ArcGlobe) to process the shape files, and Google Earth with KML (Keyhole Markup Language) files to better visualize and evaluate existing conditions, design in a real-world context, and exchange information with government agencies, utilities, and contractors in both CAD and GIS data formats. The automation of the updating procedure and the transfer of files between agencies and departments are among the main tasks the integrated GIS tool should address.

  20. Network Patch Cables Demystified: A Super Activity for Computer Networking Technology

    ERIC Educational Resources Information Center

    Brown, Douglas L.

    2004-01-01

    This article de-mystifies network patch cable secrets so that people can connect their computers and transfer those pesky files--without screaming at the cables. It describes a network cabling activity that can offer students a great hands-on opportunity for working with the tools, techniques, and media used in computer networking. Since the…

  1. Friendly Neighborhood Computer Project. Extension of the IBM NJE network to DEC VAX computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raffenetti, R.C.; Bertoncini, P.J.; Engert, D.E.

    1984-07-01

    This manual is divided into six chapters. The first is an overview of the VAX NJE emulator system and describes what can be done with the VAX NJE emulator software. The second chapter describes the commands that users of the VAX systems will use. Each command description includes the format of the command, a list of valid options and parameters and their meanings, and several short examples of command use. The third chapter describes the commands and capabilities for sending general, sequential files from and to VAX VMS nodes. The fourth chapter describes how to transmit data to a VAX from other computer systems on the network. The fifth chapter explains how to exchange electronic mail with IBM CMS users and with users of other VAX VMS systems connected by NJE communications. The sixth chapter describes operator procedures and the additional commands operators may use.

  2. Simulation platform of LEO satellite communication system based on OPNET

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Zhang, Yong; Li, Xiaozhuo; Wang, Chuqiao; Li, Haihao

    2018-02-01

    To verify communication protocols for the low earth orbit (LEO) satellite communication system, a simulation platform based on the Optimized Network Engineering Tool (OPNET) is built. Using the three-layer modeling mechanism, the network model, the node model and the process model of the satellite communication system are built respectively from top to bottom, and the protocol is implemented with finite state machines and the Proto-C language. According to satellite orbit parameters, orbit files are generated via the Satellite Tool Kit (STK) and imported into OPNET, and the satellite nodes move along their orbits. The simulation platform adopts a time-slot-driven mode: it divides simulation time into continuous time slots and allocates a slot number to each time slot. A resource allocation strategy is simulated on this platform, and simulation results such as resource utilization rate, system throughput and packet delay are analyzed, indicating that this simulation platform has outstanding versatility.
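
    The time-slot-driven mode amounts to a fixed-step loop in which simulation time advances one numbered slot at a time. A schematic Python sketch; the slot length, slot count, and handler are invented for illustration:

      SLOT_LEN = 0.005                      # simulated seconds per time slot

      def process_slot(slot_number, sim_time):
          # Placeholder for per-slot work: resource allocation, packet
          # transmission, statistics collection, and so on.
          pass

      sim_time = 0.0
      for slot_number in range(10_000):     # continuous, numbered time slots
          process_slot(slot_number, sim_time)
          sim_time += SLOT_LEN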

  3. Method and tool for network vulnerability analysis

    DOEpatents

    Swiler, Laura Painton [Albuquerque, NM; Phillips, Cynthia A [Albuquerque, NM

    2006-03-14

    A computer system analysis tool and method that will allow for qualitative and quantitative assessment of security attributes and vulnerabilities in systems including computer networks. The invention is based on generation of attack graphs wherein each node represents a possible attack state and each edge represents a change in state caused by a single action taken by an attacker or unwitting assistant. Edges are weighted using metrics such as attacker effort, likelihood of attack success, or time to succeed. Generation of an attack graph is accomplished by matching information about attack requirements (specified in "attack templates") to information about computer system configuration (contained in a configuration file that can be updated to reflect system changes occurring during the course of an attack) and assumed attacker capabilities (reflected in "attacker profiles"). High risk attack paths, which correspond to those considered suited to application of attack countermeasures given limited resources for applying countermeasures, are identified by finding "epsilon optimal paths."
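
    Finding high-risk attack paths over such a weighted graph is a shortest-path computation. A minimal sketch using the third-party networkx package; the states, edges, and effort weights are invented for illustration, not taken from the patent:

      import networkx as nx

      g = nx.DiGraph()                                # nodes are attack states
      g.add_edge("outside", "dmz_shell", effort=3)    # exploit web server
      g.add_edge("dmz_shell", "db_creds", effort=5)   # sniff credentials
      g.add_edge("dmz_shell", "fileserver", effort=8) # lateral movement
      g.add_edge("db_creds", "fileserver", effort=2)  # reuse credentials

      path = nx.shortest_path(g, "outside", "fileserver", weight="effort")
      cost = nx.shortest_path_length(g, "outside", "fileserver",
                                     weight="effort")
      print(path, cost)  # ['outside', 'dmz_shell', 'db_creds', 'fileserver'] 10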

  4. Incorporating Brokers within Collaboration Environments

    NASA Astrophysics Data System (ADS)

    Rajasekar, A.; Moore, R.; de Torcy, A.

    2013-12-01

    A collaboration environment, such as the integrated Rule Oriented Data System (iRODS - http://irods.diceresearch.org), provides interoperability mechanisms for accessing storage systems, authentication systems, messaging systems, information catalogs, networks, and policy engines from a wide variety of clients. The interoperability mechanisms function as brokers, translating actions requested by clients to the protocol required by a specific technology. The iRODS data grid is used to enable collaborative research within hydrology, seismology, earth science, climate, oceanography, plant biology, astronomy, physics, and genomics disciplines. Although each domain has unique resources, data formats, semantics, and protocols, the iRODS system provides a generic framework that is capable of managing collaborative research initiatives that span multiple disciplines. Each interoperability mechanism (broker) is linked to a name space that enables unified access across the heterogeneous systems. The collaboration environment provides not only support for brokers, but also support for virtualization of name spaces for users, files, collections, storage systems, metadata, and policies. The broker enables access to data or information in a remote system using the appropriate protocol, while the collaboration environment provides a uniform naming convention for accessing and manipulating each object. Within the NSF DataNet Federation Consortium project (http://www.datafed.org), three basic types of interoperability mechanisms have been identified and applied: 1) drivers for managing manipulation at the remote resource (such as data subsetting), 2) micro-services that execute the protocol required by the remote resource, and 3) policies for controlling the execution. For example, drivers have been written for manipulating NetCDF and HDF formatted files within THREDDS servers. Micro-services have been written that manage interactions with the CUAHSI data repository, the DataONE information catalog, and the GeoBrain broker. Policies have been written that manage transfer of messages between an iRODS message queue and the Advanced Message Queuing Protocol. Examples of these brokering mechanisms will be presented. The DFC collaboration environment serves as the intermediary between community resources and compute grids, enabling reproducible data-driven research. It is possible to create an analysis workflow that retrieves data subsets from a remote server, assemble the required input files, automate the execution of the workflow, automatically track the provenance of the workflow, and share the input files, workflow, and output files. A collaborator can re-execute a shared workflow, compare results, change input files, and re-execute an analysis.

  5. Database Deposit Service through JOIS : JAFIC File on Food Industry and Osaka Urban Engineering File

    NASA Astrophysics Data System (ADS)

    Kataoka, Akihiro

    JICST has launched a database deposit service for databases of excellent quality but small to medium size, which have no dissemination network of their own. The JAFIC File on Food Industry, produced by the Japan Food Industry Center, and the Osaka Urban Engineering File, produced by Osaka City, have been in service on JOIS since March 2, 1987. In this paper the outline of the above databases is introduced, focusing on the items covered and retrieved by JOIS.

  6. Dialable Cryptography for Wireless Networks

    DTIC Science & Technology

    2008-03-01

    … key size increased the file size differences for RSA and ELG-E. For example, ELG-E with key size 768 had a smaller file size difference than ELG-E with … (not tested at key size 768). Figure 11 shows the file size differences for RSA and ElGamal for the different key sizes (all file size differences … times for key sizes 1024 and 1280 (key size 768 was only tested with ElGamal). Once the key size increased above 1280, RSA rose slower than ElGamal …

  7. Volcanic observation data and simulation database at NIED, Japan (Invited)

    NASA Astrophysics Data System (ADS)

    Fujita, E.; Ueda, H.; Kozono, T.

    2009-12-01

    NIED (Nat’l Res. Inst. for Earth Sci. & Disast. Prev.) has a project to develop two volcanic database systems: (1) a volcanic observation database and (2) a volcanic simulation database. The volcanic observation database is the archive center for data obtained by the geophysical observation networks at Mt. Fuji, Miyake, Izu-Oshima, Iwo-jima and Nasu volcanoes, central Japan. The data consist of seismic records (both high-sensitivity and broadband), ground deformation measurements (tiltmeter, GPS) and readings from other sensors (e.g., rain gauge, gravimeter, magnetometer, pressure gauge). These data are originally stored in “WIN format,” the Japanese standard format, which is also used by Hi-net (High sensitivity seismic network Japan, http://www.hinet.bosai.go.jp/). NIED has joined WOVOdat and we have prepared to upload our data in an XML format. Our concept of the XML format is that it serves as: 1) a common format for intermediate files uploaded into the WOVOdat DB; 2) a format for data files downloaded from the WOVOdat DB; 3) a format for data exchanges between observatories without the WOVOdat DB; 4) a common data file format within each observatory; 5) a format for data communications between systems and software; and 6) a format for software. NIED is now preparing (2), the volcanic simulation database. The objective of this project is to support the development of a “real-time” hazard map, i.e., a system that can evaluate volcanic hazards in an emergency, taking up-to-date conditions into account. Our system will include lava flow simulation (LavaSIM) and pyroclastic flow simulation (grvcrt). The database will keep many precomputed simulation cases so that the most probable case can be picked as a first evaluation once an eruption starts. The final goal of both databases is to realize volcanic eruption prediction and forecasting in real time by combining monitoring data and numerical simulations.

  8. Standardized data sharing in a paediatric oncology research network--a proof-of-concept study.

    PubMed

    Hochedlinger, Nina; Nitzlnader, Michael; Falgenhauer, Markus; Welte, Stefan; Hayn, Dieter; Koumakis, Lefteris; Potamias, George; Tsiknakis, Manolis; Saraceno, Davide; Rinaldi, Eugenia; Ladenstein, Ruth; Schreier, Günter

    2015-01-01

    Data collected in the course of clinical trials are potentially valuable for additional scientific research questions in so-called secondary use scenarios. This is of particular importance in rare disease areas like paediatric oncology. If data from several research projects need to be connected, so-called Core Datasets can be used to define which information needs to be extracted from every involved source system. In this work, the utility of the Clinical Data Interchange Standards Consortium (CDISC) Operational Data Model (ODM) as a format for Core Datasets was evaluated and a web tool was developed which received source ODM XML files and generated, via Extensible Stylesheet Language Transformation (XSLT), standardized Core Dataset ODM XML files. Using this tool, data from different source systems were extracted and pooled for joint analysis in a proof-of-concept study, facilitating both basic syntactic and semantic interoperability.
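
    The ODM-to-Core-Dataset step is an ordinary XSLT transformation. A minimal Python sketch with the third-party lxml library; the input document and stylesheet are trivial stand-ins, not real CDISC ODM:

      from lxml import etree

      source = etree.XML('<ODM><ItemData Name="age" Value="7"/></ODM>')
      stylesheet = etree.XML('''
      <xsl:stylesheet version="1.0"
          xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
        <xsl:template match="/ODM">
          <CoreDataset><xsl:copy-of select="ItemData"/></CoreDataset>
        </xsl:template>
      </xsl:stylesheet>''')

      transform = etree.XSLT(stylesheet)      # compile the stylesheet
      result = transform(source)              # apply it to the source ODM
      print(etree.tostring(result, pretty_print=True).decode())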

  9. Land Boundary Conditions for the Goddard Earth Observing System Model Version 5 (GEOS-5) Climate Modeling System: Recent Updates and Data File Descriptions

    NASA Technical Reports Server (NTRS)

    Mahanama, Sarith P.; Koster, Randal D.; Walker, Gregory K.; Takacs, Lawrence L.; Reichle, Rolf H.; De Lannoy, Gabrielle; Liu, Qing; Zhao, Bin; Suarez, Max J.

    2015-01-01

    The Earth's land surface boundary conditions in the Goddard Earth Observing System version 5 (GEOS-5) modeling system were updated using recent high spatial and temporal resolution global data products. The updates include: (i) construction of a global 10-arcsec land-ocean lakes-ice mask; (ii) incorporation of a 10-arcsec Globcover 2009 land cover dataset; (iii) implementation of Level 12 Pfafstetter hydrologic catchments; (iv) use of hybridized SRTM global topography data; (v) construction of the HWSDv1.21-STATSGO2 merged global 30-arcsec soil mineral and carbon data in conjunction with a highly-refined soil classification system; (vi) production of diffuse visible and near-infrared 8-day MODIS albedo climatologies at 30-arcsec from the period 2001-2011; and (vii) production of the GEOLAND2 and MODIS merged 8-day LAI climatology at 30-arcsec for GEOS-5. The global data sets were preprocessed and used to construct global raster data files for the software (mkCatchParam) that computes parameters on catchment-tiles for various atmospheric grids. The updates also include a few bug fixes in mkCatchParam, as well as changes (improvements in algorithms, etc.) to mkCatchParam that allow it to produce tile-space parameters efficiently for high resolution AGCM grids. The update process also includes the construction of data files describing the vegetation type fractions, soil background albedo, nitrogen deposition and mean annual 2m air temperature to be used with the future Catchment CN model and the global stream channel network to be used with the future global runoff routing model. This report provides detailed descriptions of the data production process and data file format of each updated data set.

  10. An operating system for future aerospace vehicle computer systems

    NASA Technical Reports Server (NTRS)

    Foudriat, E. C.; Berman, W. J.; Will, R. W.; Bynum, W. L.

    1984-01-01

    The requirements for future aerospace vehicle computer operating systems are examined in this paper. The computer architecture is assumed to be distributed with a local area network connecting the nodes. Each node is assumed to provide a specific functionality. The network provides for communication so that the overall tasks of the vehicle are accomplished. The O/S structure is based upon the concept of objects. The mechanisms for integrating node unique objects with node common objects in order to implement both the autonomy and the cooperation between nodes is developed. The requirements for time critical performance and reliability and recovery are discussed. Time critical performance impacts all parts of the distributed operating system; e.g., its structure, the functional design of its objects, the language structure, etc. Throughout the paper the tradeoffs - concurrency, language structure, object recovery, binding, file structure, communication protocol, programmer freedom, etc. - are considered to arrive at a feasible, maximum performance design. Reliability of the network system is considered. A parallel multipath bus structure is proposed for the control of delivery time for time critical messages. The architecture also supports immediate recovery for the time critical message system after a communication failure.

  11. Version 6 of the consensus yeast metabolic network refines biochemical coverage and improves model performance

    PubMed Central

    Heavner, Benjamin D.; Smallbone, Kieran; Price, Nathan D.; Walker, Larry P.

    2013-01-01

    Updates to maintain a state-of-the-art reconstruction of the yeast metabolic network are essential to reflect our understanding of yeast metabolism and functional organization, to eliminate any inaccuracies identified in earlier iterations, to improve predictive accuracy and to continue to expand into novel subsystems to extend the comprehensiveness of the model. Here, we present version 6 of the consensus yeast metabolic network (Yeast 6) as an update to the community effort to computationally reconstruct the genome-scale metabolic network of Saccharomyces cerevisiae S288c. Yeast 6 comprises 1458 metabolites participating in 1888 reactions, which are annotated with 900 yeast genes encoding the catalyzing enzymes. Compared with Yeast 5, Yeast 6 demonstrates improved sensitivity, specificity and positive and negative predictive values for predicting gene essentiality in glucose-limited aerobic conditions when analyzed with flux balance analysis. Additionally, Yeast 6 improves the accuracy of predicting the likelihood that a mutation will cause auxotrophy. The network reconstruction is available as a Systems Biology Markup Language (SBML) file enriched with Minimum Information Requested in the Annotation of Biochemical Models (MIRIAM)-compliant annotations. Small- and macromolecules in the network are referenced to authoritative databases such as Uniprot or ChEBI. Molecules and reactions are also annotated with appropriate publications that contain supporting evidence. Yeast 6 is freely available at http://yeast.sf.net/ as three separate SBML files: a model using the SBML level 3 Flux Balance Constraint package, a model compatible with the MATLAB® COBRA Toolbox for backward compatibility and a reconstruction containing only reactions for which there is experimental evidence (without the non-biological reactions necessary for simulating growth). Database URL: http://yeast.sf.net/ PMID:23935056
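
    The essentiality analysis described maps onto a few calls in the COBRA ecosystem. A sketch using the third-party cobrapy package, assuming a local copy of the Yeast 6 SBML file; the filename is a placeholder, and the result column names follow recent cobrapy releases and may differ by version:

      import cobra
      from cobra.flux_analysis import single_gene_deletion

      model = cobra.io.read_sbml_model("yeast_6.sbml")  # local download
      wild_type = model.optimize().objective_value      # baseline growth

      knockouts = single_gene_deletion(model)  # one FBA run per gene
      # Call a gene essential if its knockout abolishes predicted growth.
      essential = knockouts[knockouts["growth"] < 1e-6]
      print(len(essential), "of", len(knockouts), "knockouts are lethal")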

  12. Development of a user-centered radiology teaching file system

    NASA Astrophysics Data System (ADS)

    dos Santos, Marcelo; Fujino, Asa

    2011-03-01

    Learning radiology requires systematic and comprehensive study of a large knowledge base of medical images. This work presents the development of a digital radiology teaching file system. The proposed system was created to offer a set of services customized to users' contexts and their informational needs. This is done by means of an electronic infrastructure that provides easy and integrated access to all relevant patient data at the time of image interpretation, so that radiologists and researchers can examine all available data to reach well-informed conclusions, while protecting patient data privacy and security. The system is presented as an environment which implements a distributed clinical database, including medical images, authoring tools, a repository for multimedia documents, and also a peer-review model which assures dataset quality. The current implementation has shown that creating clinical data repositories on networked computer environments is a good solution in terms of providing means to review information management practices in electronic environments and to create customized, context-based tools for users connected to the system through electronic interfaces.

  13. The world's microbiology laboratories can be a global microbial sensor network.

    PubMed

    O'Brien, Thomas F; Stelling, John

    2014-04-01

    The microbes that infect us spread in global and local epidemics, and the resistance genes that block their treatment spread within and between them. All we can know about where they are, in order to track and contain them, comes from the only places that can see them: the world's microbiology laboratories. But most report each patient's microbe only to that patient's caregiver. Sensors, ranging from instruments to birdwatchers, are now being linked in electronic networks to monitor and interpret algorithmically in real time ocean currents, atmospheric carbon, supply-chain inventory, bird migration, etc. To link the world's microbiology laboratories as exquisite sensors in a truly lifesaving real-time network, their data must be accessed and fully subtyped. Microbiology laboratories put individual reports into inaccessible paper or mutually incompatible electronic reporting systems, but those from more than 2,200 laboratories in more than 108 countries worldwide are now accessed and translated into compatible WHONET files. These increasingly web-based files could initiate a global microbial sensor network. Unused microbiology laboratory byproduct data, now from drug susceptibility and biochemical testing but increasingly from new technologies (genotyping, MALDI-TOF, etc.), can be reused to subtype microbes of each genus/species into sub-groupings that are discriminated and traced with greater sensitivity. Ongoing statistical delineation of subtypes from global sensor network data will improve detection of movement into any patient of a microbe or resistance gene from another patient, medical center or country. Growing data on clinical manifestations and global distributions of subtypes can automate comments for patients' reports, select microbes to genotype and alert responders.

  14. Nuclear Data Networks

    Science.gov Websites

    … calibrations. NSDD: the international network of Nuclear Structure and Decay Data evaluators … evaluation and updating of nuclear structure data contained in the Evaluated Nuclear Structure Data File (ENSDF) …

  15. On-Line Data Reconstruction in Redundant Disk Arrays.

    DTIC Science & Technology

    1994-05-01

    … each sale; file servers that support a large number of clients with differing work schedules; and automated teller networks in banking systems … 24KB; head scheduling: FIFO; user data layout: sequential in address space of array; disk spindles: synchronized (Table 2.2: default array parameters) … package and a set of scheduling and queueing routines. 2.3.3. Default workload: This dissertation reports on many performance evaluations. In order to …

  16. Process Synchronization and Data Communication between Processes in Real Time Local Area Networks.

    DTIC Science & Technology

    1985-12-01

    … APPENDIX A: PROCEDURE MAKETABLE … APPENDIX B: PROCEDURE MAKEMESSAGE … APPENDIX C: PROCEDURE … item. The relation table is built by the driver during system initialization by the procedure MAKETABLE (see Appendix A). This procedure reads the file … Procedure MAKETABLE is the first procedure called by the driver. It sets up the relation table in local RAM of SBC 1 by reading the information …

  17. Gravity change from 2014 to 2015, Sierra Vista Subwatershed, Upper San Pedro Basin, Arizona

    USGS Publications Warehouse

    Kennedy, Jeffrey R.

    2016-09-13

    Relative-gravity data and absolute-gravity data were collected at 68 stations in the Sierra Vista Subwatershed, Upper San Pedro Basin, Arizona, in May–June 2015 for the purpose of estimating aquifer-storage change. Similar data from 2014 and a description of the survey network were published in U.S. Geological Survey Open-File Report 2015–1086. Data collection and network adjustment results are presented in this report, which is accompanied by a supporting Web Data Release (http://dx.doi.org/10.5066/F7SQ8XHX). Station positions, determined from a Global Positioning System campaign, are also presented to establish station elevations.
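
    At its core, a network adjustment of relative-gravity observations is a least-squares fit of station values to observed station-to-station differences. A minimal numpy sketch; the stations, observations, and datum constraint are invented for illustration:

      import numpy as np

      stations = ["A", "B", "C"]
      # Observed gravity differences (from, to, value in microGal); invented.
      obs = [("A", "B", 120.0), ("B", "C", -45.0), ("A", "C", 76.0)]

      A = np.zeros((len(obs) + 1, len(stations)))
      y = np.zeros(len(obs) + 1)
      for i, (frm, to, d) in enumerate(obs):
          A[i, stations.index(to)] = 1.0
          A[i, stations.index(frm)] = -1.0
          y[i] = d
      A[-1, 0] = 1.0        # datum: fix station A at 0 (absolute-gravity tie)

      g, *_ = np.linalg.lstsq(A, y, rcond=None)
      print(dict(zip(stations, g.round(2))))   # adjusted station values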

  18. Software Comparison for Renewable Energy Deployment in a Distribution Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, David Wenzhong; Muljadi, Eduard; Tian, Tian

    The main objective of this report is to evaluate different software options for performing robust distributed generation (DG) power system modeling. The features and capabilities of four simulation tools, OpenDSS, GridLAB-D, CYMDIST, and PowerWorld Simulator, are compared to analyze their effectiveness in analyzing distribution networks with DG. OpenDSS and GridLAB-D, two open source software packages, have the capability to simulate networks with fluctuating data values. These packages allow the running of a simulation at each time instant by iterating only the main script file. CYMDIST, a commercial software package, allows for time-series simulation to study variations on network controls. PowerWorld Simulator, another commercial tool, has a batch mode simulation function through the 'Time Step Simulation' tool, which obtains solutions for a list of specified time points. PowerWorld Simulator is intended for analysis of transmission-level systems, while the other three are designed for distribution systems. CYMDIST and PowerWorld Simulator feature easy-to-use graphical user interfaces (GUIs). OpenDSS and GridLAB-D, on the other hand, are based on command-line programs, which increase the time necessary to become familiar with the software packages.

  19. A study of inventiveness among Society of Interventional Radiology members and the impact of their social networks.

    PubMed

    Murphy, Kieran J; Elias, Gavin; Jaffer, Hussein; Mandani, Rashesh

    2013-07-01

    To investigate the nature of inventiveness among members of the Society of Interventional Radiology (SIR) and learn what influenced the inventors and assisted their creativity. The membership directory of the SIR was cross-referenced with filings at the United States Patent and Trademark Office (USPTO) and under the Patent Cooperation Treaty (PCT). The inventors were queried with an online survey to illuminate their institutions of training and practice as well as enabling or inhibiting factors to their inventiveness. Responses were analyzed through the construction of social network maps and thematic and graphical analysis. It was found that 457 members of the SIR held 2,492 patents or patent filings. After 1986, there was a marked and sustained increase in patent filings. The online survey was completed by 73 inventors holding 470 patents and patent filings. The social network maps show the key role of large academic interventional radiology departments and individual inventors in the formation of interconnectivity among inventors and the creation of intellectual property (IP). Key inhibitors of the inventive process include lack of mentorship, of industry contacts, and of legal advice. Key enablers include mentorship, motivation, and industry contacts. Creativity and inventiveness in SIR members stem from institutions that are hubs of innovation and networks of key innovators; inventors are facilitated by personal motivation, mentorship, and strong industry contacts.

  20. Computer-aided diagnosis workstation and teleradiology network system for chest diagnosis using the web medical image conference system with a new information security solution

    NASA Astrophysics Data System (ADS)

    Satoh, Hitoshi; Niki, Noboru; Eguchi, Kenji; Ohmatsu, Hironobu; Kaneko, Masahiro; Kakinuma, Ryutaro; Moriyama, Noriyuki

    2010-03-01

    Diagnostic MDCT imaging requires a considerable number of images to be read. Moreover, Japan has a shortage of doctors who can diagnose medical images. Against this background, we have provided diagnostic assistance methods to medical screening specialists by developing a lung cancer screening algorithm that automatically detects suspected lung cancers in helical CT images, a coronary artery calcification screening algorithm that automatically detects suspected coronary artery calcification, and a vertebra body analysis algorithm for quantitative evaluation of osteoporosis. We have also developed a teleradiology network system based on a web medical image conference system. In a teleradiology network system, the security of the information network is a very important subject. Our teleradiology network system can hold web medical image conferences among medical institutions at remote locations using the web medical image conference system. We completed a basic proof-of-concept experiment of the web medical image conference system with an information security solution. The screen of the web medical image conference system can be shared by two or more web conference terminals at the same time. Opinions can be exchanged using a camera and a microphone connected to the workstation, which has several diagnostic assistance methods built in. Biometric face authentication used at the teleradiology site makes file encryption and login control effective. The privacy and information security technology of our information security solution ensures compliance with Japanese regulations, so that patients' private information is protected. Based on these diagnostic assistance methods, we have developed a new computer-aided workstation and a new teleradiology network that can display suspected lesions three-dimensionally in a short time. The results of this study indicate that our filmless radiological information system, using the computer-aided diagnosis workstation and our teleradiology network, can increase diagnostic speed and accuracy and improve the security of medical information.

  1. NASA/DOD Aerospace Knowledge Diffusion Research Project. Report 35: The use of computer networks in aerospace engineering

    NASA Technical Reports Server (NTRS)

    Bishop, Ann P.; Pinelli, Thomas E.

    1995-01-01

    This research used survey research to explore and describe the use of computer networks by aerospace engineers. The study population included 2000 randomly selected U.S. aerospace engineers and scientists who subscribed to Aerospace Engineering. A total of 950 usable questionnaires were received by the cutoff date of July 1994. Study results contribute to existing knowledge about both computer network use and the nature of engineering work and communication. We found that 74 percent of mail survey respondents personally used computer networks. Electronic mail, file transfer, and remote login were the most widely used applications. Networks were used less often than face-to-face interactions in performing work tasks, but about equally with reading and telephone conversations, and more often than mail or fax. Network use was associated with a range of technical, organizational, and personal factors: lack of compatibility across systems, cost, inadequate access and training, and unwillingness to embrace new technologies and modes of work appear to discourage network use. The greatest positive impacts from networking appear to be increases in the amount of accurate and timely information available, better exchange of ideas across organizational boundaries, and enhanced work flexibility, efficiency, and quality. Involvement with classified or proprietary data and type of organizational structure did not distinguish network users from nonusers. The findings can be used by people involved in the design and implementation of networks in engineering communities to inform the development of more effective networking systems, services, and policies.

  2. Establishing Malware Attribution and Binary Provenance Using Multicompilation Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramshaw, M. J.

    2017-07-28

    Malware is a serious problem for computer systems and costs businesses and customers billions of dollars a year in addition to compromising their private information. Detecting malware is particularly difficult because malware source code can be compiled in many different ways and generate many different digital signatures, which causes problems for most anti-malware programs that rely on static signature detection. Our project uses a convolutional neural network to identify malware programs but these require large amounts of data to be effective. Towards that end, we gather thousands of source code files from publicly available programming contest sites and compile them with several different compilers and flags. Building upon current research, we then transform these binary files into image representations and use them to train a long-term recurrent convolutional neural network that will eventually be used to identify how a malware binary was compiled. This information will include the compiler, version of the compiler and the options used in compilation, information which can be critical in determining where a malware program came from and even who authored it.
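
    The binary-to-image step the project builds on is straightforward: each byte becomes one grayscale pixel in a fixed-width two-dimensional array. A minimal numpy sketch; the image width and zero padding are arbitrary choices, and the file name is a placeholder:

      import numpy as np

      def binary_to_image(path, width=256):
          """Render a binary file as a fixed-width grayscale pixel array."""
          data = np.fromfile(path, dtype=np.uint8)
          pad = (-len(data)) % width          # pad out the last row
          data = np.pad(data, (0, pad), constant_values=0)
          return data.reshape(-1, width)      # one byte per pixel

      # img = binary_to_image("sample.exe")   # shape: (n_rows, 256)
      # Resize/normalize before feeding to the convolutional network.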

  3. 31 CFR 1026.311 - Filing obligations.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 31 Money and Finance:Treasury 3 2014-07-01 2014-07-01 false Filing obligations. 1026.311 Section 1026.311 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FINANCIAL CRIMES ENFORCEMENT NETWORK, DEPARTMENT OF THE TREASURY RULES FOR FUTURES COMMISSION MERCHANTS AND...

  4. 31 CFR 1026.311 - Filing obligations.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 31 Money and Finance:Treasury 3 2013-07-01 2013-07-01 false Filing obligations. 1026.311 Section 1026.311 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FINANCIAL CRIMES ENFORCEMENT NETWORK, DEPARTMENT OF THE TREASURY RULES FOR FUTURES COMMISSION MERCHANTS AND...

  5. 31 CFR 1026.311 - Filing obligations.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 31 Money and Finance:Treasury 3 2012-07-01 2012-07-01 false Filing obligations. 1026.311 Section 1026.311 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FINANCIAL CRIMES ENFORCEMENT NETWORK, DEPARTMENT OF THE TREASURY RULES FOR FUTURES COMMISSION MERCHANTS AND...

  6. 31 CFR 1022.311 - Filing obligations.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 31 Money and Finance:Treasury 3 2012-07-01 2012-07-01 false Filing obligations. 1022.311 Section 1022.311 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FINANCIAL CRIMES ENFORCEMENT NETWORK, DEPARTMENT OF THE TREASURY RULES FOR MONEY SERVICES BUSINESSES Reports...

  7. 31 CFR 1022.311 - Filing obligations.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 31 Money and Finance:Treasury 3 2013-07-01 2013-07-01 false Filing obligations. 1022.311 Section 1022.311 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FINANCIAL CRIMES ENFORCEMENT NETWORK, DEPARTMENT OF THE TREASURY RULES FOR MONEY SERVICES BUSINESSES Reports...

  8. 31 CFR 1022.311 - Filing obligations.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 31 Money and Finance:Treasury 3 2011-07-01 2011-07-01 false Filing obligations. 1022.311 Section 1022.311 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FINANCIAL CRIMES ENFORCEMENT NETWORK, DEPARTMENT OF THE TREASURY RULES FOR MONEY SERVICES BUSINESSES Reports...

  9. 31 CFR 1022.311 - Filing obligations.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 31 Money and Finance:Treasury 3 2014-07-01 2014-07-01 false Filing obligations. 1022.311 Section 1022.311 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FINANCIAL CRIMES ENFORCEMENT NETWORK, DEPARTMENT OF THE TREASURY RULES FOR MONEY SERVICES BUSINESSES Reports...

  10. Introduction to the Space Physics Analysis Network (SPAN)

    NASA Technical Reports Server (NTRS)

    Green, J. L. (Editor); Peters, D. J. (Editor)

    1985-01-01

    The Space Physics Analysis Network or SPAN is emerging as a viable method for solving an immediate communication problem for the space scientist. SPAN provides low-rate communication capability with co-investigators and colleagues, and access to space science data bases and computational facilities. SPAN utilizes up-to-date hardware and software for computer-to-computer communications, allowing binary file transfer and remote log-on capability to over 25 nationwide space science computer systems. SPAN is not discipline or mission dependent, with participation from scientists in such fields as magnetospheric, ionospheric, planetary, and solar physics. Basic information on the network and its use is provided. It is anticipated that SPAN will grow rapidly over the next few years, not only from the standpoint of more network nodes, but also because, as scientists become more proficient in the use of telescience, more capability will be needed to satisfy the demands.

  11. A QoS adaptive multimedia transport system: design, implementation and experiences

    NASA Astrophysics Data System (ADS)

    Campbell, Andrew; Coulson, Geoff

    1997-03-01

    The long awaited 'new environment' of high speed broadband networks and multimedia applications is fast becoming a reality. However, few systems in existence today, whether they be large scale pilots or small scale test-beds in research laboratories, offer a fully integrated and flexible environment where multimedia applications can maximally exploit the quality of service (QoS) capabilities of supporting networks and end-systems. In this paper we describe the implementation of an adaptive transport system that incorporates a QoS oriented API and a range of mechanisms to assist applications in exploiting QoS and adapting to fluctuations in QoS. The system, which is an instantiation of the Lancaster QoS Architecture, is implemented in a multi-ATM-switch network environment with Linux based PC end systems and continuous media file servers. A performance evaluation of the system configured to support a video-on-demand application scenario is presented and discussed. Emphasis is placed on novel features of the system and on their integration into a complete prototype. The most prominent novelty of our design is a 'distributed QoS adaptation' scheme which allows applications to delegate to the system responsibility for augmenting and reducing the perceptual quality of video and audio flows when resource availability increases or decreases.

  12. LHCb Online event processing and filtering

    NASA Astrophysics Data System (ADS)

    Alessio, F.; Barandela, C.; Brarda, L.; Frank, M.; Franek, B.; Galli, D.; Gaspar, C.; Herwijnen, E. v.; Jacobsson, R.; Jost, B.; Köstner, S.; Moine, G.; Neufeld, N.; Somogyi, P.; Stoica, R.; Suman, S.

    2008-07-01

    The first level trigger of LHCb accepts one million events per second. After preprocessing in custom FPGA-based boards, these events are distributed to a large farm of PC-servers using a high-speed Gigabit Ethernet network. Synchronisation and event management is achieved by the Timing and Trigger system of LHCb. Due to the complex nature of the selection of B-events, which are the main interest of LHCb, a full event-readout is required. Event processing on the servers is parallelised on an event basis. The reduction factor is typically 1/500. The remaining events are forwarded to a formatting layer, where the raw data files are formed and temporarily stored. A small part of the events is also forwarded to a dedicated farm for calibration and monitoring. The files are subsequently shipped to the CERN Tier0 facility for permanent storage and from there to the various Tier1 sites for reconstruction. In parallel, the files are used by various monitoring and calibration processes running within the LHCb Online system. The entire data-flow is controlled and configured by means of a SCADA system and several databases. After an overview of the LHCb data acquisition and its design principles, this paper will emphasize the LHCb event filter system, which is now implemented using the final hardware and will be ready for data-taking at the LHC startup. Control, configuration and security aspects will also be discussed.

  13. James Webb Space Telescope - L2 Communications for Science Data Processing

    NASA Technical Reports Server (NTRS)

    Johns, Alan; Seaton, Bonita; Gal-Edd, Jonathan; Jones, Ronald; Fatig, Curtis; Wasiak, Francis

    2008-01-01

    JWST is the first NASA mission at the second Lagrange point (L2) to identify the need for data rates higher than 10 megabits per second (Mbps). JWST will produce approximately 235 Gigabits of science data every day that will be downlinked to the Deep Space Network (DSN). Achieving the desired data rates required moving away from X-band frequencies to Ka-band frequencies. To accomplish this transition, the DSN is upgrading its infrastructure. This new range of frequencies is becoming the new standard for high-data-rate science missions at L2. With the new frequency range, the issues of alternative antenna deployments, off-nominal scenarios, NASA's implementation of the Ka-band 26 GHz allocation, and navigation requirements will be discussed in this paper. JWST is also using the Consultative Committee for Space Data Systems (CCSDS) standard process for reliable file transfer, the CCSDS File Delivery Protocol (CFDP). For JWST, the use of the CFDP protocol provides level zero processing at the DSN site. This paper will address NASA's implementation of ground stations in support of Ka-band 26 GHz and lessons learned from implementing a file-based (CFDP) protocol operational system.

  14. A systems neurophysiology approach to voluntary event coding.

    PubMed

    Petruo, Vanessa A; Stock, Ann-Kathrin; Münchau, Alexander; Beste, Christian

    2016-07-15

    Mechanisms responsible for the integration of perceptual events and appropriate actions (sensorimotor processes) have been subject to intense research. Different theoretical frameworks have been put forward, with the "Theory of Event Coding (TEC)" being one of the most influential. In the current study, we focus on the concept of 'event files' within TEC and examine which sub-processes, dissociable by means of cognitive-neurophysiological methods, are involved in voluntary event coding. This was combined with EEG source localization. We also introduce reward manipulations to delineate the neurophysiological sub-processes most relevant for performance variations during event coding. The results show that the processes involved in voluntary event coding predominantly included stimulus categorization, feature unbinding, and response selection, which were reflected by distinct neurophysiological processes (the P1, N2 and P3 ERPs). On a systems neurophysiology level, voluntary event-file coding is thus related to widely distributed parietal-medial frontal networks. Attentional selection processes (the N1 ERP) turned out to be less important. Reward modulated stimulus categorization in parietal regions, likely reflecting aspects of perceptual decision making, but did not modulate the other processes. The perceptual categorization stage appears central for voluntary event-file coding. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Implementing an Automated Antenna Measurement System

    NASA Technical Reports Server (NTRS)

    Valerio, Matthew D.; Romanofsky, Robert R.; VanKeuls, Fred W.

    2003-01-01

    We developed an automated measurement system using a PC running a LabView application, a Velmex BiSlide X-Y positioner, and an HP8510C network analyzer. The system provides high positioning accuracy and requires no user supervision. After the user inputs the necessary parameters into the LabView application, LabView controls the motor positioning and performs the data acquisition. Current parameters and measured data are shown on the PC display in two 3-D graphs and updated after every data point is collected. The final output is a formatted data file for later processing.

  16. Web servlet-assisted, dial-in flow cytometry data analysis.

    PubMed

    Battye, F

    2001-02-01

    The obvious benefits of centralized data storage notwithstanding, the size of modern flow cytometry data files discourages their transmission over commonly used telephone modem connections. The proposed solution is to install at the central location a web servlet that can extract compact data arrays, of a form dependent on the requested display type, from the stored files and transmit them to a remote client computer program for display. A client program and a web servlet, both written in the Java programming language, were designed to communicate over standard network connections. The client program creates familiar numerical and graphical display types and allows the creation of gates from combinations of user-defined regions. Data compression techniques further reduce transmission times for data arrays that are already much smaller than the data file itself. For typical data files, network transmission times were reduced more than 700-fold for extraction of one-dimensional (1-D) histograms, between 18- and 120-fold for 2-D histograms, and 6-fold for color-coded dot plots. Numerous display formats are possible without further access to the data file. This scheme enables telephone modem access to centrally stored data without restricting flexibility of display format or preventing comparisons with locally stored files. Copyright 2001 Wiley-Liss, Inc.
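
    A minimal sketch of the core idea, reducing data server-side before transmission. It is written in Python with NumPy rather than the paper's Java, and all names and sizes are illustrative, not taken from Battye's servlet:

        # Sketch: server-side reduction of flow cytometry list-mode data to a
        # compact 1-D histogram before transmission (illustrative only).
        import numpy as np

        def histogram_payload(events: np.ndarray, channel: int, bins: int = 256):
            """Reduce one parameter of an event matrix to a fixed-size histogram."""
            counts, _ = np.histogram(events[:, channel], bins=bins)
            return counts.astype(np.uint32)

        # 100,000 events x 8 parameters of 16-bit data is ~1.6 MB on the wire;
        # the histogram is 256 x 4 bytes = 1 kB, the same order of reduction
        # as the >700-fold reported for 1-D histograms.
        rng = np.random.default_rng(0)
        events = rng.integers(0, 1024, size=(100_000, 8))
        payload = histogram_payload(events, channel=2)
        print(payload.nbytes, "bytes instead of", events.size * 2)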

  17. DDN New User Guide. Revision.

    DTIC Science & Technology

    1992-10-01

    Excerpt (OCR fragments from the guide's table of contents and text): 2.1 Network Overview; 2.2 Network Access Methods; information for TAC and Mini-TAC users, such as common error messages, TAC commands, and instructions for performing file transfers; Section 5, Network Use. The network's packet switches were originally known as Interface Message Processors, or IMPs; packets of a message do not necessarily take the same route.

  18. DMFS: A Data Migration File System for NetBSD

    NASA Technical Reports Server (NTRS)

    Studenmund, William

    1999-01-01

    I have recently developed dmfs, a Data Migration File System, for NetBSD. This file system is based on the overlay file system, which is discussed in a separate paper, and provides kernel support for the data migration system being developed by my research group here at NASA/Ames. The file system utilizes an underlying file store to provide the file backing, and coordinates user and system access to the files. It stores its internal meta data in a flat file, which resides on a separate file system. Our data migration system provides archiving and file migration services. System utilities scan the dmfs file system for recently modified files, and archive them to two separate tape stores. Once a file has been doubly archived, files larger than a specified size will be truncated to that size, potentially freeing up large amounts of the underlying file store. Some sites will choose to retain none of the file (deleting its contents entirely from the file system) while others may choose to retain a portion, for instance a preamble describing the remainder of the file. The dmfs layer coordinates access to the file, retaining user-perceived access and modification times, file size, and restricting access to partially migrated files to the portion actually resident. When a user process attempts to read from the non-resident portion of a file, it is blocked and the dmfs layer sends a request to a system daemon to restore the file. As more of the file becomes resident, the user process is permitted to begin accessing the now-resident portions of the file. For simplicity, our data migration system divides a file into two portions, a resident portion followed by an optional non-resident portion. Also, a file is in one of three states: fully resident, fully resident and archived, and (partially) non-resident and archived. For a file which is only partially resident, any attempt to write or truncate the file, or to read a non-resident portion, will trigger a file restoration. Truncations and writes are blocked until the file is fully restored so that a restoration which only partially succeeds does not leave the file in an indeterminate state with portions existing only on tape and other portions only in the disk file system. We chose layered file system technology as it permits us to focus on the data migration functionality, and permits end system administrators to choose the underlying file store technology. We chose the overlay layered file system instead of the null layer for two reasons: first to permit our layer to better preserve meta data integrity and second to prevent even root processes from accessing migrated files. This is achieved as the underlying file store becomes inaccessible once the dmfs layer is mounted. We are quite pleased with how the layered file system has turned out. Of the 45 vnode operations in NetBSD, 20 (forty-four percent) required no intervention by our file layer - they are passed directly to the underlying file store. Of the twenty-five we do intercept, nine (such as vop_create()) are intercepted only to ensure meta data integrity. Most of the functionality was concentrated in five operations: vop_read, vop_write, vop_getattr, vop_setattr, and vop_fcntl. The first four are the core operations for controlling access to migrated files and preserving the user experience. vop_fcntl, a call generated for a certain class of fcntl codes, provides the command channel used by privileged user programs to communicate with the dmfs layer.
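
    The residency rules above lend themselves to a compact model. The following Python sketch mirrors the three file states and the read/write triggers described in the abstract; it is a user-space illustration only, since the real dmfs is a NetBSD kernel layer intercepting vnode operations:

        # Minimal model of dmfs residency handling (illustrative only).
        from enum import Enum, auto

        class State(Enum):
            RESIDENT = auto()            # fully resident
            RESIDENT_ARCHIVED = auto()   # fully resident and on tape
            PARTIAL_ARCHIVED = auto()    # leading portion resident, rest on tape

        class MigratedFile:
            def __init__(self, data: bytes, resident_len: int):
                self.archive = data                  # stand-in for the tape copy
                self.resident = data[:resident_len]  # leading resident portion
                self.state = State.PARTIAL_ARCHIVED

            def restore(self):
                """Stand-in for the daemon restoring the file from tape."""
                self.resident = self.archive
                self.state = State.RESIDENT_ARCHIVED

            def read(self, offset: int, length: int) -> bytes:
                # Reads beyond the resident portion block until restoration,
                # mirroring how dmfs suspends the user process.
                if offset + length > len(self.resident):
                    self.restore()
                return self.resident[offset:offset + length]

            def write(self, offset: int, data: bytes):
                # Writes and truncations force a full restore first, so a
                # partial restore never leaves the file split between media.
                if self.state is State.PARTIAL_ARCHIVED:
                    self.restore()
                buf = bytearray(self.resident)
                buf[offset:offset + len(data)] = data
                self.resident = bytes(buf)
                self.state = State.RESIDENT  # modified copy no longer matches tape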

  19. Networked Resources.

    ERIC Educational Resources Information Center

    Nickerson, Gord

    1991-01-01

    Explains File Transfer Protocol (FTP), an application software program that allows a user to transfer files from one computer to another. The benefit of the rapid operating speed of FTP is discussed, the use of FTP on microcomputers, minicomputers, and workstations is described, and FTP problems are considered. (four references) (LRW)

  20. Development and evaluation of a low-cost and high-capacity DICOM image data storage system for research.

    PubMed

    Yakami, Masahiro; Ishizu, Koichi; Kubo, Takeshi; Okada, Tomohisa; Togashi, Kaori

    2011-04-01

    Thin-slice CT data, useful for clinical diagnosis and research, is now widely available but is typically discarded in many institutions after a short period of time due to data storage capacity limitations. We designed and built a low-cost high-capacity Digital Imaging and COmmunication in Medicine (DICOM) storage system able to store thin-slice image data for years, using off-the-shelf consumer hardware components, such as a Macintosh computer, a Windows PC, and network-attached storage units. "Ordinary" hierarchical file systems, instead of a centralized data management system such as a relational database, were adopted to manage patient DICOM files by arranging them in directories, enabling quick and easy access to the DICOM files of each study by following the directory trees with Windows Explorer via study date and patient ID. Software used for this system was open-source OsiriX and additional programs we developed ourselves, both of which were freely available via the Internet. The initial cost of this system was about $3,600 with an incremental storage cost of about $900 per 1 terabyte (TB). This system has been running since 7th Feb 2008 with the data stored increasing at the rate of about 1.3 TB per month. Total data stored was 21.3 TB on 23rd June 2009. The maintenance workload was found to be about 30 to 60 min once every 2 weeks. In conclusion, this newly developed DICOM storage system is useful for research due to its cost-effectiveness, enormous capacity, high scalability, sufficient reliability, and easy data access.
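
    A short sketch of the "ordinary hierarchical file system" layout the authors describe: DICOM objects filed by study date and patient ID so studies can be located with a plain file browser. The use of the pydicom package and the exact directory scheme are assumptions for illustration, not the authors' code:

        # Sketch: file DICOM objects into a date/patient directory tree.
        from pathlib import Path
        import shutil
        import pydicom

        def file_dicom(src: Path, root: Path) -> Path:
            ds = pydicom.dcmread(src, stop_before_pixels=True)
            study_date = getattr(ds, "StudyDate", "unknown_date")   # e.g. "20080207"
            patient_id = getattr(ds, "PatientID", "unknown_patient")
            dest_dir = root / study_date / patient_id
            dest_dir.mkdir(parents=True, exist_ok=True)
            dest = dest_dir / src.name
            shutil.copy2(src, dest)  # keep the original; storage holds copies
            return dest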

  1. Simple Automatic File Exchange (SAFE) to Support Low-Cost Spacecraft Operation via the Internet

    NASA Technical Reports Server (NTRS)

    Baker, Paul; Repaci, Max; Sames, David

    1998-01-01

    Various issues associated with Simple Automatic File Exchange (SAFE) are presented in viewgraph form. Specific topics include: 1) Packet telemetry, Internet IP networks and cost reduction; 2) Basic functions and technical features of SAFE; 3) Project goals, including low-cost satellite transmission to data centers to be distributed via an Internet; 4) Operations with a replicated file protocol; 5) File exchange operation; 6) Ground stations as gateways; 7) Lessons learned from demonstrations and tests with SAFE; and 8) Feedback and future initiatives.

  2. Online handwritten mathematical expression recognition

    NASA Astrophysics Data System (ADS)

    Büyükbayrak, Hakan; Yanikoglu, Berrin; Erçil, Aytül

    2007-01-01

    We describe a system for recognizing online, handwritten mathematical expressions. The system is designed with a user-interface for writing scientific articles, supporting the recognition of basic mathematical expressions as well as integrals, summations, matrices, etc. A feed-forward neural network recognizes symbols which are assumed to be single-stroke, and a recursive algorithm parses the expression by combining neural network output and the structure of the expression. Preliminary results show that writer-dependent recognition rates are very high (99.8%) while writer-independent symbol recognition rates are lower (75%). The interface associated with the proposed system integrates the built-in recognition capabilities of Microsoft's Tablet PC API for recognizing textual input and supports conversion of hand-drawn figures into PNG format. This enables the user to enter text, mathematics and draw figures in a single interface. After recognition, all output is combined into one LaTeX code and compiled into a PDF file.

  3. Bringing Control System User Interfaces to the Web

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Xihui; Kasemir, Kay

    With the evolution of web based technologies, especially HTML5 [1], it becomes possible to create web-based control system user interfaces (UI) that are cross-browser and cross-device compatible. This article describes two technologies that facilitate this goal. The first one is the WebOPI [2], which can seamlessly display CSS BOY [3] Operator Interfaces (OPI) in web browsers without modification to the original OPI file. The WebOPI leverages the powerful graphical editing capabilities of BOY and provides the convenience of re-using existing OPI files. On the other hand, it uses generic JavaScript and a generic communication mechanism between the web browser and web server. It is not optimized for a control system, which results in unnecessary network traffic and resource usage. Our second technology is the WebSocket-based Process Data Access (WebPDA) [4]. It is a protocol that provides efficient control system data communication using WebSocket [5], so that users can create web-based control system UIs using standard web page technologies such as HTML, CSS and JavaScript. WebPDA is control system independent, potentially supporting any type of control system.
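
    A minimal client in the spirit of WebPDA: subscribe to a process variable over a WebSocket and print monitor updates. The endpoint URL and the JSON message shapes below are assumptions for illustration, not the published WebPDA protocol; the third-party Python websockets package stands in for a browser client:

        # Sketch: WebSocket subscription to a process variable (illustrative).
        import asyncio, json
        import websockets  # pip install websockets

        async def monitor(url: str, pv: str):
            async with websockets.connect(url) as ws:
                # Hypothetical subscribe message; real WebPDA defines its own.
                await ws.send(json.dumps({"command": "subscribe", "pv": pv}))
                async for message in ws:
                    update = json.loads(message)
                    print(pv, "=", update.get("value"))

        # asyncio.run(monitor("ws://localhost:8080/webpda", "sim://sine"))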

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    North, Michael J.

    SchemaOnRead provides tools for implementing schema-on-read including a single function call (e.g., schemaOnRead("filename")) that reads text (TXT), comma separated value (CSV), raster image (BMP, PNG, GIF, TIFF, and JPG), R data (RDS), HDF5, NetCDF, spreadsheet (XLS, XLSX, ODS, and DIF), Weka Attribute-Relation File Format (ARFF), Epi Info (REC), Pajek network (PAJ), R network (NET), Hypertext Markup Language (HTML), SPSS (SAV), Systat (SYS), and Stata (DTA) files. It also recursively reads folders (e.g., schemaOnRead("folder")), returning a nested list of the contained elements.
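
    SchemaOnRead itself is an R package, but the dispatch-by-extension idea is easy to sketch. The following Python analogue (reader set and names are illustrative, not part of the package) picks a reader from the file suffix at read time and recurses into folders to return a nested structure, mirroring schemaOnRead("filename") and schemaOnRead("folder"):

        # Python analogue of schema-on-read dispatch (illustrative).
        import csv, json
        from pathlib import Path

        def _read_csv(p: Path):
            with p.open(newline="") as fh:
                return list(csv.reader(fh))

        READERS = {
            ".txt": lambda p: p.read_text(),
            ".json": lambda p: json.loads(p.read_text()),
            ".csv": _read_csv,
        }

        def schema_on_read(path):
            p = Path(path)
            if p.is_dir():  # folders become a nested mapping of their contents
                return {c.name: schema_on_read(c) for c in sorted(p.iterdir())}
            reader = READERS.get(p.suffix.lower())
            return reader(p) if reader else p.read_bytes()  # fallback: raw bytes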

  5. Science network resources: Distributed systems

    NASA Technical Reports Server (NTRS)

    Cline, Neal

    1991-01-01

    The Master Directory, which contains overview information about whole data sets, is outlined. The data system environment is depicted. The question of what constitutes a prototype international directory, including its purpose and features, is explored. Advantages of on-line directories are listed. Interconnected directory assumptions are given. A description is given of DIF (Directory Interchange Format), an exchange file format for directory information, along with the information content of DIF and directories. The directory population status is given in a percentage viewgraph. The present and future directory interconnection status at GSFC is also listed.

  6. Bringing the medical library to the office desktop.

    PubMed

    Brown, S R; Decker, G; Pletzke, C J

    1991-01-01

    This demonstration illustrates LRC Remote Computer Services, a dual operating system, multi-protocol system for delivering medical library services to the medical professional's desktop. A working model draws resources from CD-ROM and magnetic media file services, Novell and AppleTalk network protocol suites and gating, LAN and asynchronous (dial-in) access strategies, commercial applications for MS-DOS and Macintosh workstations, and custom user interfaces. The demonstration includes a discussion of issues relevant to the delivery of these services, particularly with respect to maintenance, security, training/support, staffing, software licensing, and costs.

  7. Metadata and Service at the GFZ ISDC Portal

    NASA Astrophysics Data System (ADS)

    Ritschel, B.

    2008-05-01

    The online service portal of the GFZ Potsdam Information System and Data Center (ISDC) is an access point for all manner of geoscientific geodata, its corresponding metadata, scientific documentation, and software tools. At present almost 2000 national and international users and user groups have the opportunity to request Earth science data from a portfolio of 275 different product types and more than 20 million individual data files with a total volume of approximately 12 TByte. The majority of the data and information the portal currently offers to the public are global geomonitoring products such as satellite orbit and Earth gravity field data as well as geomagnetic and atmospheric data for exploration. These products for Earth's changing system are provided via state-of-the-art retrieval techniques. The data product catalog system behind these techniques is based on the extensive use of standardized metadata, which describe the different geoscientific product types and data products in a uniform way. Whereas all ISDC product types are specified by NASA's Directory Interchange Format (DIF), Version 9.0 parent XML DIF metadata files, the individual data files are described by extended DIF metadata documents. Depending on when the scientific project began, one part of the data files is described by extended DIF, Version 6 metadata documents and the other part by child XML DIF metadata documents. Both the product-type-dependent parent DIF metadata documents and the data-file-dependent child DIF metadata documents are derived from a base-DIF.xsd XML schema file. The ISDC metadata philosophy defines a geoscientific product as a package consisting of usually one, sometimes more than one, data file plus one extended DIF metadata file. Because NASA's DIF metadata standard was developed to specify a collection of data only, the extension of the DIF standard consists of new, specific attributes that are necessary for the explicit identification of single data files and the set-up of a comprehensive Earth science data catalog. The large ISDC data catalog is realized by product-type-dependent tables filled with data-file-related metadata, which have relations to corresponding metadata tables. The product-type-describing parent DIF XML metadata documents are stored and managed in ORACLE's XML storage structures. In order to improve the interoperability of the ISDC service portal, the existing proprietary catalog system will be extended by an ISO 19115 based web catalog service. In addition, there is ISDC-related work on a semantic network of different kinds of metadata resources, including standardized and non-standardized metadata documents and literature, as well as Web 2.0 user-generated information derived from tagging activities and social navigation data.

  8. Monitoring and Identification Packet in Wireless With Deep Packet Inspection Method

    NASA Astrophysics Data System (ADS)

    Fali Oklilas, Ahmad; Tasmi

    2017-04-01

    Layer 2 and Layer 3 information is commonly used for network monitoring, but with the development of applications on the network such as p2p file sharing, VoIP, encrypted traffic, and many applications that already share the same port, a system is required that can classify network traffic by more than port number alone. This paper reports the implementation of the deep packet inspection method to analyse data packets based on the packet header and payload, to be used in packet data classification. If each application can be grouped based on the application layer, then we can determine the pattern of internet users and also perform network management for the computer science department. In this study, a prototype wireless network and an SSO application were developed to detect active users. The focus is on the ability of OpenDPI and nDPI to detect the payload of an application, and the results are elaborated in this paper.
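
    A toy illustration of the payload step of deep packet inspection: classify a TCP payload by byte signature rather than port number. The signatures below are simplified for illustration; real engines such as nDPI use far richer protocol dissectors:

        # Toy DPI classifier: match payload bytes, not ports (illustrative).
        SIGNATURES = [
            (b"BitTorrent protocol", "bittorrent"),
            (b"GET ", "http"),
            (b"POST ", "http"),
            (b"\x16\x03", "tls-handshake"),  # TLS record type 22, version 3.x
        ]

        def classify_payload(payload: bytes) -> str:
            for magic, label in SIGNATURES:
                # Check the start of the payload and the first 64 bytes.
                if payload.startswith(magic) or magic in payload[:64]:
                    return label
            return "unknown"

        print(classify_payload(b"GET /index.html HTTP/1.1\r\n"))  # -> http
        print(classify_payload(b"\x13BitTorrent protocol"))       # -> bittorrent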

  9. Unravelling Responses for the Canadian National Seismic Network

    NASA Astrophysics Data System (ADS)

    Mulder, T. L.

    2009-12-01

    There are a number of attendant difficulties any network must deal with, ranging from defining the transfer function to instrument naming conventions to the choice of final local file format. These choices ultimately determine the ease of conversion to other data formats and therefore directly impact usability. In particular, the ease of data exchange and the use of established software that depends on standard data types are affected. This becomes particularly critical with large (terabyte) dataset processing and when integrating external datasets into analysis procedures. Transfer functions, often referred to as instrument responses, are a key component in describing instrumentation. The transfer function describes the complete response of the seismic system. The seismic system is designed to be a linear system that can be decomposed into discrete components. Analogue or digital convolution can be represented as multiplication in the frequency domain. The two basic elements of a seismic system are the sensor and the datalogger. The analogue sensor can be represented mathematically as poles and zeroes. The datalogger can be further broken down into its discrete analogue and digital components: the preamp, the A/D converter, and the FIR filters. The Canadian seismic network (CNSN) digitizers have an additional complication. To save telemetry bandwidth, a transmission gain is removed from the 32-bit signal from the digitizer. The transmission gain (txgain) represents the number of least significant bits truncated from the sample (2^txgain), after which the data are compressed and transmitted. While telemetry bandwidth is no longer the constraint it once was, now that many sites have IP connectivity, this user-programmable transmission gain is still in use and can vary from station to station. The processes receiving the transmitted data do not restore the pre-transmission scaling; consequently the archived waveform files can vary in bit weight over time and from station to station depending on the value of the transmission gain, so the transmission gain must be factored into the transfer function. This presentation describes the process for generating the transfer function based on the constituent components discussed here. A MATLAB routine run on the database generates the transfer function plots for the network.
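
    A worked example of the txgain correction described above: each archived count is worth 2^txgain raw digitizer counts, so the station sensitivity must fold that factor in. The abstract mentions a MATLAB routine; this Python sketch uses placeholder constants, not real CNSN calibration values:

        # Bit-weight restoration after transmission-gain truncation (illustrative).
        def bit_weight(adc_volts_per_count: float, txgain: int) -> float:
            """Volts per archived count, given txgain truncated LSBs."""
            return adc_volts_per_count * (2 ** txgain)

        adc = 2.38e-6  # volts per raw count (placeholder, not a CNSN constant)
        for txgain in (0, 4, 8):
            print(f"txgain={txgain}: {bit_weight(adc, txgain):.3e} V/count")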

  10. Design and implementation of wireless dose logger network for radiological emergency decision support system.

    PubMed

    Gopalakrishnan, V; Baskaran, R; Venkatraman, B

    2016-08-01

    A decision support system (DSS) is implemented in Radiological Safety Division, Indira Gandhi Centre for Atomic Research for providing guidance for emergency decision making in case of an inadvertent nuclear accident. Real time gamma dose rate measurement around the stack is used for estimating the radioactive release rate (source term) by using inverse calculation. Wireless gamma dose logging network is designed, implemented, and installed around the Madras Atomic Power Station reactor stack to continuously acquire the environmental gamma dose rate and the details are presented in the paper. The network uses XBee-Pro wireless modules and PSoC controller for wireless interfacing, and the data are logged at the base station. A LabView based program is developed to receive the data, display it on the Google Map, plot the data over the time scale, and register the data in a file to share with DSS software. The DSS at the base station evaluates the real time source term to assess radiation impact.

  11. Design and implementation of wireless dose logger network for radiological emergency decision support system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gopalakrishnan, V.; Baskaran, R.; Venkatraman, B.

    A decision support system (DSS) is implemented in Radiological Safety Division, Indira Gandhi Centre for Atomic Research for providing guidance for emergency decision making in case of an inadvertent nuclear accident. Real time gamma dose rate measurement around the stack is used for estimating the radioactive release rate (source term) by using inverse calculation. Wireless gamma dose logging network is designed, implemented, and installed around the Madras Atomic Power Station reactor stack to continuously acquire the environmental gamma dose rate and the details are presented in the paper. The network uses XBee-Pro wireless modules and PSoC controller for wireless interfacing, and the data are logged at the base station. A LabView based program is developed to receive the data, display it on the Google Map, plot the data over the time scale, and register the data in a file to share with DSS software. The DSS at the base station evaluates the real time source term to assess radiation impact.

  12. Cyber-Physical Multi-Core Optimization for Resource and Cache Effects (C2ORES)

    DTIC Science & Technology

    2014-03-01

    DoD-sponsored ATAACK mobile cloud testbed funded through the DURIP program, which is deployed at Virginia Tech and Vanderbilt University to conduct...0.9.2. Jug was configured to use a filesystem (network file system, NFS) backend for locking and task synchronization. 4.1.7.2 Experiment 1...and performance-aware virtual machine placement technique that is realized as cloud infrastructure middleware. The key contributions of iPlace include

  13. Network Security Issues

    DTIC Science & Technology

    1989-01-01

    access. 8 An example of a Trojan Horse was one that affected many Macintosh users in 1987. The program called "Sexy Ladies" deleted files as the...be malicious, just the disruption and freezing of the system would be enough to send a panic throughout the financial world. Gold prices would soar...Protection Products," Computers and Security, Apr 88, p. 159. 15 Neil Rubenking, "Antivirus Programs Fight Data Loss," PC Magazine (First Look), 28 Jun

  14. Patent Abstract Digest. Volume II.

    DTIC Science & Technology

    1981-03-01

    OCR fragments of a patent cover page prepared for the Air Force Systems Command: United States Patent 4,190,815, Albanese, Feb. 26, 1980, "High Power Hybrid Switch" (filed Mar. 9, 1978). High levels of R.F. power are controlled and switched by means of a hybrid switching network that employs broadband quadrature 3 dB hybrids; switching is accomplished by selectively inserting a 180 degree phase shift into the lower power path.

  15. Ensuring a C2 Level of Trust and Interoperability in a Networked Windows NT Environment

    DTIC Science & Technology

    1996-09-01

    In addition, it should be noted that the device drivers, microkernel, memory manager, and Hardware Abstraction Layer are all hardware dependent. The executive is further divided into three conceptual layers, referred to as the Hardware Abstraction Layer (HAL), the Microkernel, and the Executive subsystems. [Figure 3 residue: Executive subsystems (I/O Manager, Cache Manager, File Systems), Microkernel, Device Drivers, Hardware Abstraction Layer, Hardware.]

  16. Please Move Inactive Files Off the /projects File System

    Science.gov Websites

    January 11, 2018. The /projects file system is a shared resource. This year this has created a space crunch - the file system is now about 90% full and we need your help.

  17. National Crime Information Center (NCIC) Training Videos.

    ERIC Educational Resources Information Center

    Federal Bureau of Investigation, Washington, DC. National Crime Information Center.

    The Federal Bureau of Investigation's National Crime Information Center (NCIC) maintains a set of computerized files of documented criminal justice information reported by a network of over 60,000 participating national, regional, state, and local agencies. The files, dealing with wanted persons, missing persons, unidentified persons, and stolen…

  18. 76 FR 39757 - Filing Procedures

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-06

    ... an optical character recognition process, such a document may contain recognition errors. CAUTION... network speed e-filing of these documents may be difficult. Pursuant to section II(C) above, the Secretary... optical scan format or a typed ``electronic signature,'' e.g., ``/s/Jane Doe.'' (3) In the case of a...

  19. BOREAS AFM-5 Level-2 Upper Air Network Standard Pressure Level Data

    NASA Technical Reports Server (NTRS)

    Barr, Alan; Hrynkiw, Charmaine; Hall, Forrest G. (Editor); Newcomer, Jeffrey A. (Editor); Smith, David E. (Technical Monitor)

    2000-01-01

    The BOREAS AFM-5 team collected and processed data from the numerous radiosonde flights during the project. The goals of the AFM-05 team were to provide large-scale definition of the atmosphere by supplementing the existing AES aerological network, both temporally and spatially. This data set includes basic upper-air parameters interpolated at 0.5 kiloPascal increments of atmospheric pressure from data collected from the network of upper-air stations during the 1993, 1994, and 1996 field campaigns over the entire study region. The data are contained in tabular ASCII files. The data files are available on a CD-ROM (see document number 20010000884) or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).

  20. Automatic image database generation from CAD for 3D object recognition

    NASA Astrophysics Data System (ADS)

    Sardana, Harish K.; Daemi, Mohammad F.; Ibrahim, Mohammad K.

    1993-06-01

    The development and evaluation of multiple-view 3-D object recognition systems is based on a large set of model images. Due to the various advantages of using CAD, it is becoming more and more practical to use existing CAD data in computer vision systems. Current PC-level CAD systems are capable of providing physical image modelling and rendering involving positional variations in cameras, light sources, etc. We have formulated a modular scheme for automatic generation of various aspects (views) of the objects in a model-based 3-D object recognition system. These views are generated at desired orientations on the unit Gaussian sphere. With a suitable network file system (NFS), the images can be stored directly in a database located on a file server. This paper presents the image modelling solutions using CAD in relation to the multiple-view approach. Our modular scheme for data conversion and automatic image database storage for such a system is discussed. We have used this approach in 3-D polyhedron recognition. An overview of the results, advantages, and limitations of using CAD data, and conclusions from using such a scheme, are also presented.

  1. Cyber secure systems approach for NPP digital control systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCreary, T. J.; Hsu, A.

    2006-07-01

    Whether fossil or nuclear power, the chief operations goal is to generate electricity. The heart of most plant operations is the I and C system. With the march towards open architecture, the I and C system is more vulnerable than ever to system security attacks (denial of service, virus attacks and others), thus jeopardizing plant operations. Plant staff must spend large amounts of time and money setting up and monitoring a variety of security strategies to counter the threats and actual attacks to the system. This time and money is a drain on the financial performance of a plant and distracts valuable operations resources from their real goal: the product. The pendulum towards complete open architecture may have swung too far. Not all aspects of proprietary hardware and software are necessarily 'bad'. As the aging U.S. fleet of nuclear power plants starts to engage in replacing legacy control systems, and given the on-going (and legitimate) concern about the security of present digital control systems, decisions about how best to approach cyber security are vital to the specification and selection of control system vendors for these upgrades. The authors maintain that, utilizing certain resources available in today's digital technology, plant control systems can be configured from the onset to be inherently safe, so that plant staff can concentrate on the operational issues of the plant. The authors postulate the concept of the plant I and C being bounded in a 'Cyber Security Zone' and present a design approach that can alleviate the concern and cost at the plant level of dealing with system security strategies. Present approaches through various IT cyber strategies, commercial software, and even postulated standards from various industry/trade organizations are almost entirely reactive and simply add to cost and complexity. This Cyber Security Zone design demonstrates protection from four classes of cyber security attacks: 1) threat from an intruder attempting to disrupt network communications by entering the system from an attached utility network or utilizing a modem connected to a control system PC that is in turn connected to a publicly accessible phone; 2) threat from a user connecting an unauthorized computer to the control network; 3) threat from a security attack when an unauthorized user gains access to a PC connected to the plant network; 4) threat from internal disruption (by plant staff, whether malicious or otherwise) by unauthorized usage of files or file handling media that opens the system to security threat (as typified by the current situation in most control rooms). The plant I and C system cyber security design and the plant specific procedures should adequately demonstrate protection from the four pertinent classes of cyber security attacks. The combination of these features should demonstrate that the system is not vulnerable to any analyzed cyber security attacks either from internal sources or through network connections. The authors will provide configurations that will demonstrate the Cyber Security Zone. (authors)

  2. 76 FR 65718 - Western Area Power Administration; Notice of Filing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-24

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. EF11-14-000] Western Area Power Administration; Notice of Filing Take notice that on September 23, 2011, Western Area Power Administration submitted its Rate Order No. WAPA-151 concerning rate and repayment data for Network Integration...

  3. 47 CFR 1.7001 - Scope and content of filed reports.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Telecommunications Capability Data § 1.7001 Scope and content of filed reports. (a) Definitions. Terms used in this... services over their own facilities or over Unbundled Network Elements (UNEs), special access lines, and other leased lines and wireless channels that the entity obtains from a communications service provider...

  4. SiLK: A Tool Suite for Unsampled Network Flow Analysis at Scale

    DTIC Science & Technology

    2014-06-01

    file format,” [Accessed: Feb 9, 2014]. [Online]. Available: https://tools.netsa.cert.org/silk/faq.html#file-formats [12] “2012 data breach investigations...report (DBIR),” Verizon, Tech. Rep., 2012. [Online]. Available: http://www.verizonenterprise.com/DBIR/2012/ [13] “2013 data breach investigations

  5. Can Social Network Analysis Help Address the High Rates of Bacterial Sexually Transmitted Infections in Saskatchewan?

    PubMed

    Trecker, Molly A; Dillon, Jo-Anne R; Lloyd, Kathy; Hennink, Maurice; Jolly, Ann; Waldner, Cheryl

    2017-06-01

    Saskatchewan has one of the highest rates of gonorrhea among the Canadian provinces, more than double the national rate. In light of these high rates, and the growing threat of untreatable infections, improved understanding of gonorrhea transmission dynamics in the province and evaluation of the current system and tools for disease control are important. We extracted data from a cross-sectional sample of laboratory-confirmed gonorrhea cases between 2003 and 2012 from the notifiable disease files of the Regina Qu'Appelle Health Region. The database was stratified by calendar year, and social network analysis combined with statistical modeling was used to identify associations between measures of connection within the network and the odds of repeat gonorrhea and risk of coinfection with chlamydia at the time of diagnosis. Networks were highly fragmented. Younger age and component size were positively associated with being coinfected with chlamydia. Being coinfected, reporting sex trade involvement, and component size were all positively associated with repeat infection. This is the first study to apply social network analysis to gonorrhea transmission in Saskatchewan and contributes important information about the relationship of network connections to gonorrhea/chlamydia coinfection and repeat gonorrhea. This study also suggests several areas for change of systems-related factors that could greatly increase understanding of social networks and enhance the potential for bacterial sexually transmitted infection control in Saskatchewan.

  6. Fuzzy/Neural Software Estimates Costs of Rocket-Engine Tests

    NASA Technical Reports Server (NTRS)

    Douglas, Freddie; Bourgeois, Edit Kaminsky

    2005-01-01

    The Highly Accurate Cost Estimating Model (HACEM) is a software system for estimating the costs of testing rocket engines and components at Stennis Space Center. HACEM is built on a foundation of adaptive-network-based fuzzy inference systems (ANFIS), a hybrid software concept that combines the adaptive capabilities of neural networks with the ease of development and additional benefits of fuzzy-logic-based systems. In ANFIS, fuzzy inference systems are trained by use of neural networks. HACEM includes selectable subsystems that utilize various numbers and types of inputs, various numbers of fuzzy membership functions, and various input-preprocessing techniques. The inputs to HACEM are parameters of specific tests or series of tests. These parameters include test type (component or engine test), number and duration of tests, and thrust level(s) (in the case of engine tests). The ANFIS in HACEM are trained by use of sets of these parameters, along with costs of past tests. Thereafter, the user feeds HACEM a simple input text file that contains the parameters of a planned test or series of tests, the user selects the desired HACEM subsystem, and the subsystem processes the parameters into an estimate of cost(s).

  7. Highway Safety Information System guidebook for the Minnesota state data files. Volume 1 : SAS file formats

    DOT National Transportation Integrated Search

    2001-02-01

    The Minnesota data system includes the following basic files: Accident data (Accident File, Vehicle File, Occupant File); Roadlog File; Reference Post File; Traffic File; Intersection File; Bridge (Structures) File; and RR Grade Crossing File. For ea...

  8. Interconnection of electronic medical record with clinical data management system by CDISC ODM.

    PubMed

    Matsumura, Yasushi; Hattori, Atsushi; Manabe, Shiro; Takeda, Toshihiro; Takahashi, Daiyo; Yamamoto, Yuichiro; Murata, Taizo; Mihara, Naoki

    2014-01-01

    EDC systems have been used in the field of clinical research. The current EDC system does not connect with the electronic medical record system (EMR), so medical staff have to transcribe the data in the EMR to the EDC system manually. This redundant process causes not only inefficiency but also human error. We developed an EDC system cooperating with the EMR, in which the data required for a clinical research form (CRF) are transcribed automatically from the EMR to an electronic CRF (eCRF) and sent via network. We call this system the "eCRF reporter". The interface module of the eCRF reporter can retrieve the data in the EMR database, including patient biographical data, laboratory test data, prescription data, and data entered by template in progress notes. The eCRF reporter also enables users to enter data directly into the eCRF. The eCRF reporter generates a CDISC ODM file and a PDF, which is a translated form of the clinical data in ODM. After the eCRF is stored in the EMR, it is transferred via VPN to a clinical data management system (CDMS) which can receive the eCRF files and parse ODM. We have started several clinical research studies using this system. This system is expected to improve the efficiency and rigor of clinical research.
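
    A minimal ODM-shaped export using only the Python standard library. The element names follow the CDISC ODM vocabulary (ClinicalData, SubjectData, ItemData, and so on), but the structure is simplified and all OIDs and values are placeholders, not the authors' eCRF reporter output:

        # Sketch: write a tiny CDISC-ODM-style XML snapshot (illustrative).
        import xml.etree.ElementTree as ET

        odm = ET.Element("ODM", FileType="Snapshot", FileOID="eCRF-0001")
        clinical = ET.SubElement(odm, "ClinicalData",
                                 StudyOID="STUDY-01", MetaDataVersionOID="v1")
        subject = ET.SubElement(clinical, "SubjectData", SubjectKey="PT-0042")
        event = ET.SubElement(subject, "StudyEventData", StudyEventOID="VISIT1")
        form = ET.SubElement(event, "FormData", FormOID="LABS")
        group = ET.SubElement(form, "ItemGroupData", ItemGroupOID="CBC")
        ET.SubElement(group, "ItemData", ItemOID="HGB", Value="13.2")

        ET.ElementTree(odm).write("ecrf_snapshot.xml",
                                  xml_declaration=True, encoding="utf-8")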

  9. Long-Term file activity patterns in a UNIX workstation environment

    NASA Technical Reports Server (NTRS)

    Gibson, Timothy J.; Miller, Ethan L.

    1998-01-01

    As mass storage technology becomes more affordable for sites smaller than supercomputer centers, understanding their file access patterns becomes crucial for developing systems to store rarely used data on tertiary storage devices such as tapes and optical disks. This paper presents a new way to collect and analyze file system statistics for UNIX-based file systems. The collection system runs in user-space and requires no modification of the operating system kernel. The statistics package provides details about file system operations at the file level: creations, deletions, modifications, etc. The paper analyzes four months of file system activity on a university file system. The results confirm previously published results gathered from supercomputer file systems, but differ in several important areas. Files in this study were considerably smaller than those at supercomputer centers, and they were accessed less frequently. Additionally, the long-term creation rate on workstation file systems is sufficiently low so that all data more than a day old could be cheaply saved on a mass storage device, allowing the integration of time travel into every file system.
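
    The paper's key observation, that data more than a day old could be cheaply migrated, reduces to a simple user-space scan. A sketch with the standard library (the one-day threshold mirrors the abstract; the path is a placeholder):

        # User-space scan: find files untouched for more than a day, the
        # candidates for migration to tertiary storage (illustrative).
        import os, time

        def stale_files(root: str, max_age_days: float = 1.0):
            cutoff = time.time() - max_age_days * 86400
            for dirpath, _dirnames, filenames in os.walk(root):
                for name in filenames:
                    path = os.path.join(dirpath, name)
                    try:
                        st = os.stat(path)
                    except OSError:
                        continue  # file vanished or is unreadable; skip it
                    if st.st_mtime < cutoff:
                        yield path, st.st_size

        total = sum(size for _path, size in stale_files("/tmp"))
        print("bytes eligible for migration:", total)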

  10. QuakeSim Project Networking

    NASA Astrophysics Data System (ADS)

    Kong, D.; Donnellan, A.; Pierce, M. E.

    2012-12-01

    QuakeSim is an online computational framework focused on using remotely sensed geodetic imaging data to model and understand earthquakes. With the rise in online social networking over the last decade, many tools and concepts have been developed that are useful to research groups. In particular, QuakeSim is interested in the ability for researchers to post, share, and annotate files generated by modeling tools in order to facilitate collaboration. To accomplish this, features were added to the preexisting QuakeSim site that include single sign-on, automated saving of output from modeling tools, and a personal user space to manage sharing permissions on these saved files. These features implement OpenID and Lightweight Directory Access Protocol (LDAP) technologies to manage files across several different servers, including a web server running Drupal and other servers hosting the computational tools themselves.

  11. Software for hyperspectral, joint photographic experts group (.JPG), portable network graphics (.PNG) and tagged image file format (.TIFF) segmentation

    NASA Astrophysics Data System (ADS)

    Bruno, L. S.; Rodrigo, B. P.; Lucio, A. de C. Jorge

    2016-10-01

    This paper presents a system that applies a Multilayer Perceptron neural network to the segmentation of drone-acquired agricultural images. The application allows a user to train, in supervised fashion, the classes that will later be interpreted by the neural network. These classes are generated manually with pre-selected attributes in the application. After attribute selection, a segmentation process is performed to extract the relevant information for different types of images, RGB or hyperspectral. The application can extract the geographical coordinates from the image metadata, geo-referencing every pixel in the image. Despite the high memory consumption of hyperspectral image regions of interest, it is possible to perform segmentation using bands chosen by the user, which can be combined in different ways to obtain different results.
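
    A pixelwise sketch of the approach: train an MLP on user-labelled spectra, then classify every pixel of an image. The data here are synthetic and scikit-learn stands in for the authors' custom application; band counts, class names, and network size are illustrative:

        # Pixelwise MLP segmentation sketch (illustrative, synthetic data).
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(1)
        # Training set: (n_pixels, n_bands) spectra, crop=1 / soil=0 labels.
        X_train = np.vstack([rng.normal(0.6, 0.05, (200, 4)),   # "crop" spectra
                             rng.normal(0.2, 0.05, (200, 4))])  # "soil" spectra
        y_train = np.array([1] * 200 + [0] * 200)

        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
        clf.fit(X_train, y_train)

        image = rng.normal(0.4, 0.2, (64, 64, 4))   # 64x64 image, 4 bands
        labels = clf.predict(image.reshape(-1, 4)).reshape(64, 64)
        print("crop pixels:", int(labels.sum()))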

  12. Network, system, and status software enhancements for the autonomously managed electrical power system breadboard. Volume 4: Graphical status display

    NASA Technical Reports Server (NTRS)

    Mckee, James W.

    1990-01-01

    This volume (4 of 4) contains the description, structured flow charts, prints of the graphical displays, and source code to generate the displays for the AMPS graphical status system. The function of these displays is to present to the manager of the AMPS system a graphical status display with hot boxes that allow the manager to get more detailed status on selected portions of the AMPS system. The development of the graphical displays is divided into two processes: the creation of the screen images and their storage in files on the computer, and the running of the status program, which uses the screen images.

  13. DTS: The NOAO Data Transport System

    NASA Astrophysics Data System (ADS)

    Fitzpatrick, M.; Semple, T.

    2014-05-01

    The NOAO Data Transport System (DTS) provides high-throughput, reliable data transfer between telescopes, pipelines, and archive centers located in the Northern and Southern hemispheres. It is a distributed application using XML-RPC for command and control, and either parallel-TCP or UDT protocols for bulk data transport. The system is data-agnostic, allowing arbitrary files or directories to be moved using the same infrastructure. Data paths are configurable in the system by connecting nodes as the source or destination of data in a queue. Each leg of a data path may be configured independently based on the network environment between the sites. A queueing model is currently implemented to manage the automatic movement of data; a streaming model is planned to support arbitrarily large transfers (e.g., as in a disk recovery scenario) or to provide a 'pass-thru' interface to minimize overheads. A web-based monitor allows anyone to get a graphical overview of the DTS system as it runs; operators will be able to control individual nodes in the system. Through careful tuning of the network paths, DTS is able to achieve in excess of 80 percent of the nominal wire speed using only commodity networks, making it ideal for long-haul transport of large volumes of data.
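
    A command-and-control sketch in the spirit of the XML-RPC layer described above, using Python's standard library. The method names and port are assumptions, not the actual DTS interface, and bulk data would move over a separate parallel-TCP or UDT channel:

        # Sketch: XML-RPC queue control separate from bulk transport (illustrative).
        from xmlrpc.server import SimpleXMLRPCServer

        queue = []

        def submit(path: str) -> int:
            """Queue a file for transfer and return its queue position."""
            queue.append(path)
            return len(queue)

        def status() -> list:
            """Return the current transfer queue."""
            return list(queue)

        server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
        server.register_function(submit)
        server.register_function(status)
        # server.serve_forever()
        # A client would call xmlrpc.client.ServerProxy("http://host:8000").submit(...)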

  14. 78 FR 50480 - In the Matter of Redfin Network, Inc.; Order of Suspension of Trading

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-19

    ... SECURITIES AND EXCHANGE COMMISSION [File No. 500-1] In the Matter of Redfin Network, Inc.; Order of Suspension of Trading August 15, 2013. It appears to the Securities and Exchange Commission that there is a lack of current and accurate information concerning the securities of Redfin Network, Inc...

  15. 78 FR 17946 - Consolidated Tape Association; Notice of Filing and Immediate Effectiveness of the Sixteenth...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-25

    ... Network A and Network B data feeds. Consistent with current practice, within each of a firm's billable... fee schedules by compressing the current 14-tier Network A device rate schedule into four tiers, by... products, unprecedented levels of trading, internationalization and developments in portfolio analysis and...

  16. 75 FR 57307 - Consolidated Tape Association; Notice of Filing and Immediate Effectiveness of the Fourteenth...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-20

    ... to a vendor's dissemination of a real-time Network B last sale price information ticker over...: BATS Exchange, Inc.; Chicago Board Options Exchange, Inc.; Chicago Stock Exchange, Inc.; Financial.... Network B Television Ticker Fees The amendment seeks to establish as a permanent part of the Network B...

  17. 75 FR 6229 - Consolidated Tape Association; Notice of Filing of the Fifteenth Substantive Amendment to the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-08

    ... Plan (``Amendments'') would amend the Plans to provide that the Participants pay the Network B Administrator a fixed annual fee in exchange for its performance of Network B administrator functions under the... Amendments Network Administrator Fees under the Plans. Section XII (``Financial Matters'') of the CTA and...

  18. Processing large remote sensing image data sets on Beowulf clusters

    USGS Publications Warehouse

    Steinwand, Daniel R.; Maddox, Brian; Beckmann, Tim; Schmidt, Gail

    2003-01-01

    High-performance computing is often concerned with the speed at which floating-point calculations can be performed. The architectures of many parallel computers and/or their network topologies are based on these investigations. Often, benchmarks resulting from these investigations are compiled with little regard to how a large dataset would move about in these systems. This part of the Beowulf study addresses that concern by looking at specific applications software and system-level modifications. Applications include an implementation of a smoothing filter for time-series data, a parallel implementation of the decision tree algorithm used in the Landcover Characterization project, a parallel Kriging algorithm used to fit point data collected in the field on invasive species to a regular grid, and modifications to the Beowulf project's resampling algorithm to handle larger, higher resolution datasets at a national scale. Systems-level investigations include a feasibility study on Flat Neighborhood Networks and modifications of that concept with Parallel File Systems.
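
    As a concrete instance of the first application mentioned, here is a minimal time-series smoothing filter in Python; the study's actual filter is not specified, so a simple moving average stands in.

        import numpy as np

        # Illustrative time-series smoothing filter of the kind mentioned
        # above (a simple moving average; the study's filter is unspecified).
        def smooth(series, width=5):
            kernel = np.ones(width) / width
            return np.convolve(series, kernel, mode="same")

        noisy = np.sin(np.linspace(0, 6, 200)) + 0.3 * np.random.randn(200)
        print(smooth(noisy)[:5])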

  19. 10 CFR 13.2 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... identity when filing documents and serving participants electronically through the E-Filing system, and... transmitted electronically from the E-Filing system to the submitter confirming receipt of electronic filing... presentation of the docket and a link to its files. E-Filing System means an electronic system that receives...

  20. Internet Protocol Enhanced over Satellite Networks

    NASA Technical Reports Server (NTRS)

    Ivancic, William D.

    1999-01-01

    Extensive research conducted by the Satellite Networks and Architectures Branch of the NASA Lewis Research Center led to an experimental change to the Internet's Transmission Control Protocol (TCP) that will increase performance over satellite channels. The change raises the size of the initial burst of data TCP can send from 1 packet to 4 packets or roughly 4 kilobytes (kB), whichever is less. TCP is used daily by everyone on the Internet for e-mail and World Wide Web access, as well as other services. TCP is one of the core protocols used in computer communications for reliable data delivery and file transfer. Increasing TCP's initial data burst from the previously specified single segment to approximately 4 kB may improve data transfer rates by up to 27 percent for very small files. This is significant because most file transfers in wide-area networks today are small files, 4 kilobytes or less. In addition, because data transfers over geostationary satellites can take 5 to 20 times longer than over typical terrestrial connections, increasing the initial burst of data that can be sent is extremely important. This research, along with research from other institutions, has led to the release of two new Requests for Comments from the Internet Engineering Task Force (IETF, the international body that sets Internet standards). In addition, two studies of the implications of this mechanism were also funded by NASA Lewis.
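
    The arithmetic of the rule is easy to check; the sketch below encodes the "4 packets or roughly 4 kB, whichever is less" reading from the abstract (the IETF documents state the limit slightly differently, as min(4*MSS, max(2*MSS, 4380 bytes))).

        def initial_window(mss, packets=4, cap=4096):
            """Initial TCP burst as described above: 4 packets or ~4 kB,
            whichever is less (a simplified reading of the IETF rule)."""
            return min(packets * mss, cap)

        # With 536-byte segments the 4-packet limit applies (2144 bytes);
        # with 1460-byte segments the ~4 kB cap applies first.
        print(initial_window(536))   # 2144
        print(initial_window(1460))  # 4096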

  1. Salient Feature Selection Using Feed-Forward Neural Networks and Signal-to-Noise Ratios with a Focus Toward Network Threat Detection and Classification

    DTIC Science & Technology

    2014-03-27

    0.8.0. The virtual machine's network adapter was set to internal network only to keep any outside traffic from interfering. A MySQL-based query...primary output of Fullstats is the ARFF file format, intended for use with the WEKA Java-based data mining software developed at the University of Waikato

  2. Sig2BioPAX: Java tool for converting flat files to BioPAX Level 3 format.

    PubMed

    Webb, Ryan L; Ma'ayan, Avi

    2011-03-21

    The World Wide Web plays a critical role in enabling molecular, cell, systems and computational biologists to exchange, search, visualize, integrate, and analyze experimental data. Such efforts can be further enhanced through the development of semantic web concepts. The semantic web idea is to enable machines to understand data through the development of protocol-free data exchange formats such as the Resource Description Framework (RDF) and the Web Ontology Language (OWL). These standards provide formal descriptors of objects, object properties and their relationships within a specific knowledge domain. However, the overhead of converting datasets typically stored in data tables such as Excel, text or PDF into RDF or OWL formats is not trivial for non-specialists and as such creates a barrier to seamless data exchange between researchers, databases and analysis tools. This problem is of particular importance in the field of network systems biology, where biochemical interactions between genes and their protein products are abstracted to networks. For the purpose of converting biochemical interactions into the BioPAX format, which is the leading standard developed by the computational systems biology community, we developed an open-source command line tool that takes as input tabular data describing different types of molecular biochemical interactions. The tool converts such interactions into the BioPAX Level 3 OWL format. We used the tool to convert several existing and new mammalian networks of protein interactions, signalling pathways, and transcriptional regulatory networks into BioPAX. Some of these networks were deposited into PathwayCommons, a repository for consolidating and organizing biochemical networks. The software tool Sig2BioPAX is a resource that enables experimental and computational systems biologists to contribute their identified networks and pathways of molecular interactions for integration and reuse by the rest of the research community.
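
    The abstract does not show the tool's internals, but the flat-file-to-OWL step can be illustrated generically with rdflib; the class and property choices and the example row below are assumptions for illustration, not Sig2BioPAX's actual mapping.

        from rdflib import Graph, Literal, Namespace, RDF

        # Toy conversion of one tabular interaction row into RDF triples.
        # A generic rdflib illustration, not the Sig2BioPAX tool itself.
        BP = Namespace("http://www.biopax.org/release/biopax-level3.owl#")
        EX = Namespace("http://example.org/interactions#")

        g = Graph()
        source, target, kind = "MAPK1", "ELK1", "phosphorylation"  # one row
        ixn = EX[source + "_" + target]
        g.add((ixn, RDF.type, BP.MolecularInteraction))
        g.add((ixn, BP.participant, EX[source]))
        g.add((ixn, BP.participant, EX[target]))
        g.add((ixn, BP.comment, Literal(kind)))
        print(g.serialize(format="xml"))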

  3. PCDS as a tool in teaching and research at the University of Michigan

    NASA Technical Reports Server (NTRS)

    Abreu, V.

    1986-01-01

    The Space Physics Research Laboratory's (SPRL) use of the Pilot Climate Data System (PCDS) is discussed. For this purpose, a computer center was established to provide the hardware and software necessary to fully utilize existing databases for research and teaching purposes. A schematic of the SPRL network is given. The core of the system consists of two VAX 11/750s and a VAX 8600, networked through ETHERNET to several LSI 11/23 microprocessors. Much of the system is used for external communications with major networks and data centers. A VAX 11/750 provides DECNET services through the SPAN network to the PCDS. A functional diagram of PCDS usage is given. The browsing capabilities of the PCDS are used to generate data files, which are later transferred to the SPRL center for further data manipulation and display. This mode of operation will be used in classroom instruction to make effective use of terminals and to simplify use of the database. The Atmosphere Explorer database has been used successfully in a similar manner in courses related to the thermosphere and ionosphere. The main motivation to access the PCDS was to complement research efforts related to the High Resolution Doppler Imager (HRDI), to be flown on the Upper Atmosphere Research Satellite (UARS).

  4. Multi-Resolution Playback of Network Trace Files

    DTIC Science & Technology

    2015-06-01

    a complete MySQL database, C++ developer tools and the libraries utilized in the development of the system (Boost and Libcrafter), and Wireshark...XE suite has a limit to the allowed size of each database. In order to be scalable, the project had to switch to the MySQL database suite. The...programs that access the database use the MySQL C++ connector, provided by Oracle, and the supplied methods and libraries.

  5. Examining the Return on Investment of a Security Information and Event Management Solution in a Notional Department of Defense Network Environment

    DTIC Science & Technology

    2013-06-01

    collection are the facts that devices lack encryption or compression methods and that the log file must be saved on the host system prior to transfer...time. Statistical correlation utilizes numerical algorithms to detect deviations from normal event levels and other routine activities (Chuvakin...can also assist in detecting low volume threats. Although easy and logical to implement, the implementation of statistical correlation algorithms

  6. Very High-Speed Report File System

    DTIC Science & Technology

    1992-12-15

    1.5 and 45 Mb/s and is expected to reach 150 Mb/s. These new technologies pose some challenges to the Internet Protocol (IP) family (IP... The Internet Engineering Task Force (IETF) has taken up the issue, but a definitive answer is probably some time away. The basic issues are the choice of AAL...by an IEEE 802.1a Subnetwork Access Protocol (SNAP) header. However, with a large number of networks... The third proposal identifies the protocol

  7. Computer-aided diagnosis workstation and network system for chest diagnosis based on multislice CT images

    NASA Astrophysics Data System (ADS)

    Satoh, Hitoshi; Niki, Noboru; Eguchi, Kenji; Moriyama, Noriyuki; Ohmatsu, Hironobu; Masuda, Hideo; Machida, Suguru

    2008-03-01

    Mass screening based on multi-helical CT images requires a considerable number of images to be read. It is this time-consuming step that makes the use of helical CT for mass screening impractical at present. To overcome this problem, we have provided diagnostic assistance methods to medical screening specialists by developing a lung cancer screening algorithm that automatically detects suspected lung cancers in helical CT images, a coronary artery calcification screening algorithm that automatically detects suspected coronary artery calcification, and a vertebra body analysis algorithm for quantitative evaluation of osteoporosis likelihood, using a helical CT scanner for lung cancer mass screening. Functions for observing suspicious shadows in detail are provided in a computer-aided diagnosis workstation incorporating these screening algorithms. We have also developed a telemedicine network using a Web medical image conference system with improved security of image transmission, a biometric fingerprint authentication system, and a biometric face authentication system. Biometric face authentication used at the telemedicine site makes file encryption and login verification effective; as a result, patients' private information is protected. Based on these diagnostic assistance methods, we have developed a new computer-aided workstation and a new telemedicine network that can display suspected lesions three-dimensionally in a short time. The results of this study indicate that our filmless radiological information system, using the computer-aided diagnosis workstation, and our telemedicine network system can increase diagnostic speed and diagnostic accuracy while improving the security of medical information.

  8. Policies for implementing network firewalls

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, C.D.

    1994-05-01

    Corporate networks are frequently protected by "firewalls" or gateway systems that control access to/from other networks, e.g., the Internet, in order to reduce the network's vulnerability to hackers and other unauthorized access. Firewalls typically limit access to particular network nodes and application protocols, and they often perform special authentication and authorization functions. One of the difficult issues associated with network firewalls is determining which applications should be permitted through the firewall. For example, many networks permit the exchange of electronic mail with the outside but do not permit file access to be initiated by outside users, as this might allow outside users to access sensitive data or to surreptitiously modify data or programs (e.g., to install Trojan Horse software). However, if access through firewalls is severely restricted, legitimate network users may find it difficult or impossible to collaborate with outside users and to share data. Some of the most serious issues regarding firewalls involve setting policies for firewalls with the goal of achieving an acceptable balance between the need for greater functionality and the associated risks. Two common firewall implementation techniques, screening routers and application gateways, are discussed below, followed by some common policies implemented by network firewalls.
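
    A screening-router policy of the kind described ("mail in, externally initiated file access blocked") reduces to an ordered rule table. The Python sketch below is a toy first-match evaluator with illustrative port numbers, not a recommended policy.

        # Minimal sketch of the screening-router idea: permit mail exchange
        # but deny externally initiated file access. Rules are illustrative.
        RULES = [
            {"direction": "inbound", "port": 25,   "action": "allow"},  # SMTP
            {"direction": "inbound", "port": 2049, "action": "deny"},   # NFS
            {"direction": "inbound", "port": 21,   "action": "deny"},   # FTP
        ]

        def screen(direction, port, default="deny"):
            """Return the action of the first matching rule (first match wins)."""
            for rule in RULES:
                if rule["direction"] == direction and rule["port"] == port:
                    return rule["action"]
            return default

        assert screen("inbound", 25) == "allow"
        assert screen("inbound", 2049) == "deny"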

  9. Brain model of text animation as a data mining strategy.

    PubMed

    Astakhova, Tamara; Astakhov, Vadim

    2009-01-01

    Imagination is a critical point in developing realistic artificial intelligence (AI) systems. One way to approach imagination would be to simulate its properties and operations. We developed two models, "Brain Network Hierarchy of Languages" and "Semantical Holographic Calculus," and a simulation system, ScriptWriter, that emulate the process of imagination through automatic animation of English texts. The purpose of this paper is to demonstrate the model and present the "ScriptWriter" system http://nvo.sdsc.edu/NVO/JCSG/get_SRB_mime_file2.cgi//home/tamara.sdsc/test/demo.zip?F=/home/tamara.sdsc/test/demo.zip&M=application/x-gtar for simulation of the imagination.

  10. Integrated clinical workstations for image and text data capture, display, and teleconsultation.

    PubMed

    Dayhoff, R; Kuzmak, P M; Kirin, G

    1994-01-01

    The Department of Veterans Affairs (VA) DHCP Imaging System digitally records clinically significant diagnostic images selected by medical specialists in a variety of hospital departments, including radiology, cardiology, gastroenterology, pathology, dermatology, hematology, surgery, podiatry, dental clinic, and emergency room. These images, which include true color and gray scale images, scanned documents, and electrocardiogram waveforms, are stored on network file servers and displayed on workstations located throughout a medical center. All images are managed by the VA's hospital information system (HIS), allowing integrated displays of text and image data from all medical specialties. Two VA medical centers currently have DHCP Imaging Systems installed, and other installations are underway.

  11. Measuring driver satisfaction with an urban arterial before and after deployment of an adaptive timing signal system

    DOT National Transportation Integrated Search

    2001-02-01

    The Minnesota data system includes the following basic files: Accident data (Accident File, Vehicle File, Occupant File); Roadlog File; Reference Post File; Traffic File; Intersection File; Bridge (Structures) File; and RR Grade Crossing File. For ea...

  12. Educational use of World Wide Web pages on CD-ROM.

    PubMed

    Engel, Thomas P; Smith, Michael

    2002-01-01

    The World Wide Web is increasingly important for medical education. Internet-served pages may also be used from a local hard disk or CD-ROM without a network or server. This allows authors to reuse existing content and provide access to users without a network connection. CD-ROM offers advantages over network delivery of Web pages for several applications. However, creating Web pages for CD-ROM requires careful planning. Issues include file names, relative links, directory names, default pages, server-created content, image maps, other file types, and embedded programming. With care, it is possible to create server-based pages that can be copied directly to CD-ROM. In addition, Web pages on CD-ROM may reference Internet-served pages to provide the best features of both methods.

  13. Modeling and Performance Simulation of the Mass Storage Network Environment

    NASA Technical Reports Server (NTRS)

    Kim, Chan M.; Sang, Janche

    2000-01-01

    This paper describes the application of modeling and simulation in evaluating and predicting the performance of the mass storage network environment. Network traffic is generated to mimic the realistic pattern of file transfer, electronic mail, and web browsing. The behavior and performance of the mass storage network and a typical client-server Local Area Network (LAN) are investigated by modeling and simulation. Performance characteristics in throughput and delay demonstrate the important role of modeling and simulation in network engineering and capacity planning.

  14. BOREAS AFM-07 SRC Surface Meteorological Data

    NASA Technical Reports Server (NTRS)

    Osborne, Heather; Hall, Forrest G. (Editor); Newcomer, Jeffrey A. (Editor); Young, Kim; Wittrock, Virginia; Shewchuck, Stan; Smith, David E. (Technical Monitor)

    2000-01-01

    The Saskatchewan Research Council (SRC) collected surface meteorological and radiation data from December 1993 until December 1996. The data set comprises Suite A (meteorological and energy balance measurements) and Suite B (diffuse solar and longwave measurements) components. Suite A measurements were taken at each of ten sites, and Suite B measurements were made at five of the Suite A sites. The data cover an approximate area of 500 km (North-South) by 1000 km (East-West) (a large portion of northern Manitoba and northern Saskatchewan). The measurement network was designed to provide researchers with a sufficient record of near-surface meteorological and radiation measurements. The data are provided in tabular ASCII files, and were collected by Aircraft Flux and Meteorology (AFM)-7. The surface meteorological and radiation data are available from the Earth Observing System Data and Information System (EOSDIS) Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC). The data files are available on a CD-ROM (see document number 20010000884).

  15. File-based data flow in the CMS Filter Farm

    NASA Astrophysics Data System (ADS)

    Andre, J.-M.; Andronidis, A.; Bawej, T.; Behrens, U.; Branson, J.; Chaze, O.; Cittolin, S.; Darlea, G.-L.; Deldicque, C.; Dobson, M.; Dupont, A.; Erhan, S.; Gigi, D.; Glege, F.; Gomez-Ceballos, G.; Hegeman, J.; Holzner, A.; Jimenez-Estupiñán, R.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; Morovic, S.; Nunez-Barranco-Fernandez, C.; O'Dell, V.; Orsini, L.; Paus, C.; Petrucci, A.; Pieri, M.; Racz, A.; Roberts, P.; Sakulin, H.; Schwick, C.; Stieger, B.; Sumorok, K.; Veverka, J.; Zaza, S.; Zejdl, P.

    2015-12-01

    During the LHC Long Shutdown 1, the CMS Data Acquisition system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and prepare the ground for future upgrades of the detector front-ends. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. This approach provides additional decoupling between the HLT algorithms and the input and output data flow. All the metadata needed for bookkeeping of the data flow and the HLT process lifetimes are also generated in the form of small “documents” using the JSON encoding, by either services in the flow of the HLT execution (for rates etc.) or watchdog processes. These “files” can remain memory-resident or be written to disk if they are to be used in another part of the system (e.g. for aggregation of output data). We discuss how this redesign improves the robustness and flexibility of the CMS DAQ and the performance of the system currently being commissioned for the LHC Run 2.
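
    A minimal sketch of such a metadata "document", assuming invented field names; the atomic rename mimics how a writer can publish a file so that a concurrent aggregator never reads a half-written document.

        import json, os, tempfile

        # A service in the HLT flow periodically records its rates in a small
        # JSON document; field names here are assumptions for illustration.
        doc = {"process": "hlt-worker-17", "lumisection": 42,
               "events_in": 11800, "events_accepted": 310}

        # Write to a temporary file, then rename: the rename is atomic, so
        # readers see either the old state or the complete new document.
        fd, tmp = tempfile.mkstemp(dir=".", suffix=".json")
        with os.fdopen(fd, "w") as f:
            json.dump(doc, f)
        os.replace(tmp, "hlt-worker-17_ls42.json")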

  16. VizieR Online Data Catalog: MARVEL analysis of TiO energy levels (McKemmish+, 2017)

    NASA Astrophysics Data System (ADS)

    McKemmish, L. K.; Masseron, T.; Sheppard, S.; Sandeman, E.; Schofield, Z.; Furtenbacher, T.; Csaszar, A. G.; Tennyson, J.; Sousa-Silva, C.

    2017-04-01

    48Ti-16OFFNca_33.energies, which contains the relative energies of the free-floating network incorporating the c1Φ v=3 and a1Δ v=3 states, and three directories containing sorted folders and files with predicted transition frequencies using the MARVEL energies. (2 data files).

  17. 75 FR 78302 - Self-Regulatory Organizations; The NASDAQ Stock Market LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-15

    ... last-sale price disseminated by a network processor over a five-minute rolling period measured...-Regulatory Organizations; The NASDAQ Stock Market LLC; Notice of Filing and Immediate Effectiveness of Proposed Rule Change To Extend the Pilot Period of the Trading Pause for Individual Stocks Contained in the...

  18. 77 FR 22776 - Agency Information Collection Activities OMB Responses

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-17

    ... for Pulp and Paper Production; in 40 CFR part 63 subparts A and S; OMB filed comment on 03/02/2012...; OMB filed comment on 03/02/2012. EPA ICR Number 1811.08; NESHAP for Polyether Polyol Production; in 40... Number 2313.02; Ambient Ozone Monitoring Regulations: Revisions to Network Design Requirements (Final...

  19. Utilizing Graphics Processing Units for Network Anomaly Detection

    DTIC Science & Technology

    2012-09-13

    pages 1266–1271, 2003. [Nis12a] Steffen Nissen. Fann datatypes - activation function enum, 2012. http://leenissen.dk/fann/html/files/fann_data-h.html#fann_activationfunc_enum. Last accessed: 7 Aug 2012. [Nis12b] Steffen Nissen. Fann datatypes - train enum, 2012. http://leenissen.dk/fann/html/files/fann

  20. 76 FR 78055 - Self-Regulatory Organizations; NASDAQ OMX PHLX LLC; Notice of Filing of Proposed Rule Change To...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-15

    ...-Location Fee Schedule Regarding Low Latency Network Connections; Correction AGENCY: Securities And Exchange Commission. ACTION: Notice; correction. SUMMARY: The Securities and Exchange Commission published a document... SECURITIES AND EXCHANGE COMMISSION [Release No. 34-65689A; File No. SR-Phlx-2011-142] Self...

  1. DMFS: A Data Migration File System for NetBSD

    NASA Technical Reports Server (NTRS)

    Studenmund, William

    2000-01-01

    I have recently developed DMFS, a Data Migration File System, for NetBSD. This file system provides kernel support for the data migration system being developed by my research group at NASA/Ames. The file system utilizes an underlying file store to provide the file backing, and coordinates user and system access to the files. It stores its internal metadata in a flat file, which resides on a separate file system. This paper will first describe our data migration system to provide a context for DMFS, then it will describe DMFS. It also will describe the changes to NetBSD needed to make DMFS work. Then it will give an overview of the file archival and restoration procedures, and describe how some typical user actions are modified by DMFS. Lastly, the paper will present simple performance measurements which indicate that there is little performance loss due to the use of the DMFS layer.
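
    A user-space toy of the migration idea, for intuition only: DMFS does this inside the kernel, and the stub convention and archive path below are invented for the sketch.

        import os, shutil

        # If a file's contents have been migrated to archival storage,
        # restore them before handing the file to the caller. Here an empty
        # file stands in for a migration stub (an assumed convention).
        ARCHIVE = "/archive"

        def migrated_open(path, mode="rb"):
            if os.path.getsize(path) == 0:        # stub left after migration
                archived = os.path.join(ARCHIVE, os.path.basename(path))
                shutil.copyfile(archived, path)   # restore the file backing
            return open(path, mode)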

  2. Distributed Finite Element Analysis Using a Transputer Network

    NASA Technical Reports Server (NTRS)

    Watson, James; Favenesi, James; Danial, Albert; Tombrello, Joseph; Yang, Dabby; Reynolds, Brian; Turrentine, Ronald; Shephard, Mark; Baehmann, Peggy

    1989-01-01

    The principal objective of this research effort was to demonstrate the extraordinarily cost-effective acceleration of finite element structural analysis problems using a transputer-based parallel processing network. This objective was accomplished in the form of a commercially viable parallel processing workstation. The workstation is a desktop size, low-maintenance computing unit capable of supercomputer performance yet costs two orders of magnitude less. To achieve the principal research objective, a transputer-based structural analysis workstation termed XPFEM was implemented with linear static structural analysis capabilities resembling commercially available NASTRAN. Finite element model files, generated using the on-line preprocessing module or external preprocessing packages, are downloaded to a network of 32 transputers for accelerated solution. The system currently executes at about one third Cray X-MP24 speed but additional acceleration appears likely. For the NASA selected demonstration problem of a Space Shuttle main engine turbine blade model with about 1500 nodes and 4500 independent degrees of freedom, the Cray X-MP24 required 23.9 seconds to obtain a solution while the transputer network, operated from an IBM PC-AT compatible host computer, required 71.7 seconds. Consequently, the $80,000 transputer network demonstrated a cost-performance ratio about 60 times better than the $15,000,000 Cray X-MP24 system (the Cray cost roughly 190 times as much while running only about 3 times faster).

  3. Analysis Report for Exascale Storage Requirements for Scientific Data.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruwart, Thomas M.

    Over the next 10 years, the Department of Energy will be transitioning from Petascale to Exascale Computing, resulting in data storage, networking, and infrastructure requirements increasing by three orders of magnitude. The technologies and best practices used today are the result of a relatively slow evolution of ancestral technologies developed in the 1950s and 1960s. These include magnetic tape, magnetic disk, networking, databases, file systems, and operating systems. These technologies will continue to evolve over the next 10 to 15 years on a reasonably predictable path. Experience with the challenges involved in transitioning these fundamental technologies from Terascale to Petascale computing systems has raised questions about how these will scale another 3 or 4 orders of magnitude to meet the requirements imposed by Exascale computing systems. This report is focused on the most concerning scaling issues with data storage systems as they relate to High Performance Computing, and presents options for a path forward. Given the ability to store exponentially increasing amounts of data, far more advanced concepts and use of metadata will be critical to managing data in Exascale computing systems.

  4. Cloud Based Drive Forensic and DDoS Analysis on Seafile as Case Study

    NASA Astrophysics Data System (ADS)

    Bahaweres, R. B.; Santo, N. B.; Ningsih, A. S.

    2017-01-01

    The rapid development of the Internet, driven by increasing data rates through both broadband cable networks and 4G wireless mobile, makes it easy for everyone to connect. Storage as a Service (StaaS) is increasingly popular, and many users want to store their data in one place so that they can easily access it anywhere and anytime in the cloud. However, such services can be used by someone to commit a crime, or the cloud storage service itself can be targeted by a Denial of Service (DoS) attack. Criminals can use cloud storage services to store, upload, and download illegal files or documents. In this study, we implement a private cloud storage using Seafile on a Raspberry Pi and perform simulations in Local Area Network and Wi-Fi environments to show forensically that a criminal act can be discovered, traced, and proved. We also identify, collect, and analyze server and client artifacts, such as the desktop client's registry entries, the file system, the Seafile logs, the browser cache, and the database.

  5. 75 FR 8116 - Notice Pursuant to the National Cooperative Research and Production Act of 1993-Network Centric...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-23

    ... Production Act of 1993--Network Centric Operations Industry Consortium, Inc. Notice is hereby given that, on..., 15 U.S.C. 4301 et seq. (``the Act'') Network Centric Operations Industry Consortium, Inc. has filed... venture. No other changes have been made in either the membership or planned activity of the group...

  6. 47 CFR 51.333 - Notice of network changes: Short term notice, objections thereto and objections to retirement of...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Notice of network changes: Short term notice... Additional Obligations of Incumbent Local Exchange Carriers § 51.333 Notice of network changes: Short term... changes, the public notice or certification that it files with the Commission must include a certificate...

  7. 78 FR 44984 - Consolidated Tape Association; Notice of Filing and Immediate Effectiveness of the Nineteenth...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-25

    ... Network A quotation information under the CQ Plan.) Market data users have told the Participants that they... by market data recipients, the Network A Administrator has discovered improper use of the employee... which currently receive both last sale and quotation information. Network B has a small number of data...

  8. The development of the International Network for Frontier Research on Earthquake Precursors (INFREP) by designing new analysing software and by setting up new recording locations of radio VLF/LF signals in Romania

    NASA Astrophysics Data System (ADS)

    Moldovan, Iren-Adelina; Petruta Constantin, Angela; Emilian Toader, Victorin; Toma-Danila, Dragos; Biagi, Pier Francesco; Maggipinto, Tommaso; Dolea, Paul; Septimiu Moldovan, Adrian

    2014-05-01

    Based on scientific evidence supporting the causality between earthquake preparatory stages, space weather, solar activity, and different types of electromagnetic (EM) disturbances, together with the benefit of full access to ground- and space-based EM data, INFREP proposes a complex and cross-correlated investigation of phenomena that occur in the coupled Lithosphere-Atmosphere-Ionosphere system in order to identify possible causes responsible for anomalous effects observed in the propagation characteristics of radio waves, especially at low (LF) and very low frequency (VLF). INFREP, a network of VLF (20-60 kHz) and LF (150-300 kHz) radio receivers, was put into operation in Europe in 2009, with the principal goal of studying the disturbances produced by earthquakes in the propagation properties of these signals. The Romanian NIEP VLF/LF monitoring system, consisting of a radio receiver (made by Elettronika S.R.L., Italy, and provided by the University of Bari) and the infrastructure necessary to record and transmit the collected data, is part of the international INFREP initiative. The NIEP VLF/LF receiver installed in Romania was put into operation in February 2009 in Bucharest and relocated to the Black Sea shore (Dobruja Seismologic Observatory) in December 2009. The first development of the Romanian EM monitoring system was needed because, after changing the receiving site from Bucharest to Eforie, we obtained unsatisfactory monitoring data, characterized by large fluctuations of the received signals' intensities. Trying to understand this behavior led to the conclusion that the electric component of the electromagnetic field was possibly influenced by local conditions. Starting from this observation, we ran some tests and replaced the vertical antenna with a loop-type antenna that is more appropriate in highly electric-field-polluted environments. Since the amount of recorded data is huge, we have streamlined the research process by automating the transfer, storage, and initial processing of data using the LabVIEW software platform. The specially designed LabVIEW application, which accesses the VLF/LF receiver through the Internet, opens the receiver's web page and automatically retrieves the list of data files to synchronize the user-side data with the receiver's data. Missing zipped files are also automatically downloaded. The application performs primary, statistical correlation and spectral analysis, appends daily files into monthly and annual files, and produces 3D color-coded maps with graphic representations of VLF and LF signals' intensities versus the minute-of-the-day and the day-of-the-month, facilitating near real-time observation of VLF and LF electromagnetic wave propagation. Another feature of the software is the correlation of the daily recorded files for the studied frequencies by overlaying the 24-hour radio activity and taking sunrise and sunset into account. The next step in developing the Romanian EM recording system is to enlarge the INFREP network with new VLF/LF receivers for better coverage and separation of European seismogenic zones. This will be done using national resources. Unitary seismotectonic zoning of Romania and of Europe as a whole is a very important step toward this goal.
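
    The synchronization step lends itself to a short sketch. This Python version assumes a hypothetical receiver URL and a plain-text file listing; the actual system described above is a LabVIEW application.

        import os
        import urllib.request

        # Fetch the receiver's list of data files and download whichever are
        # missing locally. URL and listing format are assumptions.
        BASE = "http://vlf-receiver.example.org/data/"

        listing = urllib.request.urlopen(BASE + "files.txt").read().decode()
        for name in listing.split():
            if not os.path.exists(name):          # missing zipped file
                urllib.request.urlretrieve(BASE + name, name)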

  9. Summary of available hydrogeologic data for the northeast portion of the alluvial aquifer at Louisville, Kentucky

    USGS Publications Warehouse

    Unthank, Michael D.; Nelson, Hugh L.

    2006-01-01

    The hydrogeologic characteristics of the unconsolidated glacial outwash sand and gravel deposits that compose the northeast portion of the alluvial aquifer at Louisville, Kentucky, indicate a prolific water-bearing formation with approximately 7 billion gallons of ground-water storage and an estimated sustainable yield of over 280 million gallons per day. This abundance of ground water and the need to properly develop and manage this resource has prompted many past investigations (since 1956), which have produced reports, maps, and data files covering a variety of topics relative to the movement, availability, and use of ground water in this area. These data have been compiled into a single report to assist in future development and use of the ground-water resources. Available ground-water data for the alluvial aquifer at Louisville, Kentucky, from Beargrass Creek to Harrods Creek, were compiled from the U.S. Geological Survey National Water Information System and the Kentucky Groundwater Data Repository. Data contained in these databases include ground-water well-construction details and historical ground-water levels, drillers' logs, and water-quality information. Additional data and information were gathered from project files at the U.S. Geological Survey--Kentucky Water Science Center and files at the Louisville Water Company. Information contained in these files included data from area pumping tests describing aquifer characteristics and ground-water flow. Data describing current conditions of the ground-water system in the northeast portion of the alluvial aquifer also are included. Ground-water levels from a network of observation wells show recent trends in the flow system, and information from the Kentucky Division of Water-Groundwater Branch lists current permitted ground-water withdrawals in the area.

  10. Efficient File Sharing by Multicast - P2P Protocol Using Network Coding and Rank Based Peer Selection

    NASA Technical Reports Server (NTRS)

    Stoenescu, Tudor M.; Woo, Simon S.

    2009-01-01

    In this work, we consider information dissemination and sharing in a distributed peer-to-peer (P2P) highly dynamic communication network. In particular, we explore a network coding technique for transmission and a rank-based peer selection method for network formation. The combined approach has been shown to improve information sharing and delivery to all users under the challenges imposed by space network environments.
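
    A minimal network-coding sketch over GF(2), assuming nothing from the paper beyond the general idea: an intermediate peer forwards a combination of packets, and any receiver holding one original can recover the other. Practical schemes use random linear codes over larger fields.

        import os

        # XOR two equal-length packets (addition over GF(2)).
        def xor(a, b):
            return bytes(x ^ y for x, y in zip(a, b))

        p1, p2 = os.urandom(1024), os.urandom(1024)
        coded = xor(p1, p2)          # what the intermediate peer transmits
        assert xor(coded, p1) == p2  # receiver with p1 recovers p2
        assert xor(coded, p2) == p1  # receiver with p2 recovers p1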

  11. VisIO: enabling interactive visualization of ultra-scale, time-series data via high-bandwidth distributed I/O systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, Christopher J; Ahrens, James P; Wang, Jun

    2010-10-15

    Petascale simulations compute at resolutions ranging into billions of cells and write terabytes of data for visualization and analysis. Interactive visualization of this time series is a desired step before starting a new run. The I/O subsystem and associated network often are a significant impediment to interactive visualization of time-varying data, as they are not configured or provisioned to provide necessary I/O read rates. In this paper, we propose a new I/O library for visualization applications: VisIO. Visualization applications commonly use N-to-N reads within their parallel-enabled readers, which provides an incentive for a shared-nothing approach to I/O, similar to other data-intensive approaches such as Hadoop. However, unlike other data-intensive applications, visualization requires: (1) interactive performance for large data volumes, (2) compatibility with MPI and POSIX file system semantics for compatibility with existing infrastructure, and (3) use of existing file formats and their stipulated data partitioning rules. VisIO provides a mechanism for using a non-POSIX distributed file system to provide linear scaling of I/O bandwidth. In addition, we introduce a novel scheduling algorithm that helps to co-locate visualization processes on nodes with the requested data. Testing using VisIO integrated into ParaView was conducted using the Hadoop Distributed File System (HDFS) on TACC's Longhorn cluster. A representative dataset, VPIC, across 128 nodes showed a 64.4% read performance improvement compared to the provided Lustre installation. Also tested was a dataset representing a global ocean salinity simulation that showed a 51.4% improvement in read performance over Lustre when using our VisIO system. VisIO provides powerful high-performance I/O services to visualization applications, allowing for interactive performance with ultra-scale, time-series data.

  12. Evidence of Probabilistic Behaviour in Protein Interaction Networks

    DTIC Science & Technology

    2008-01-31

    Evidence of degree-weighted connectivity in nine PPI networks. a, Homo sapiens (human); b, Drosophila melanogaster (fruit fly); c-e, Saccharomyces...illustrates maps for the networks of Homo sapiens and Drosophila melanogaster, while maps for the remaining networks are provided in Additional file 2. As...protein-protein interaction networks. a, Homo sapiens; b, Drosophila melanogaster. Distances shown as average shortest path lengths L(k1, k2) between

  13. FTP Extensions for Variable Protocol Specification

    NASA Technical Reports Server (NTRS)

    Allman, Mark; Ostermann, Shawn

    2000-01-01

    The specification for the File Transfer Protocol (FTP) assumes that the underlying network protocols use a 32-bit network address and a 16-bit transport address (specifically IP version 4 and TCP). With the deployment of version 6 of the Internet Protocol, network addresses will no longer be 32 bits. This paper specifies extensions to FTP that will allow the protocol to work over a variety of network and transport protocols.
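
    For a flavor of the protocol change, the closely related RFC 2428 extensions (Allman and Ostermann are among its authors) replace PORT/PASV with commands that carry an explicit address-family number instead of assuming 32-bit addresses; the addresses and ports below follow the RFC's examples.

        C> EPRT |1|132.235.1.2|6275|              (client data port, IPv4)
        C> EPRT |2|1080::8:800:200C:417A|5282|    (same request, IPv6)
        C> EPSV                                   (ask server to listen instead)
        S> 229 Entering Extended Passive Mode (|||6446|)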

  14. Enabling Discoveries in Earth Sciences Through the Geosciences Network (GEON)

    NASA Astrophysics Data System (ADS)

    Seber, D.; Baru, C.; Memon, A.; Lin, K.; Youn, C.

    2005-12-01

    Taking advantage of state-of-the-art information technology resources, GEON researchers are building a cyberinfrastructure designed to enable data sharing, semantic data integration, high-end computation, and 4D visualization in easy-to-use web-based environments. The GEON network currently allows users to search and register Earth science resources such as data sets (GIS layers, GMT files, geoTIFF images, ASCII files, relational databases, etc.), software applications, and ontologies. Portal-based access mechanisms enable developers to build dynamic user interfaces to conduct advanced processing and modeling efforts across distributed computers and supercomputers. Researchers and educators can access the networked resources through the GEON portal and its portlets, which were developed to conduct better and more comprehensive scientific and educational studies. For example, the SYNSEIS portlet in GEON enables users to access seismic waveforms in near-real time from the IRIS Data Management Center, easily build a 3D geologic model within the area of the seismic station(s) and the epicenter, and perform a 3D synthetic seismogram analysis to understand the lithospheric structure and earthquake source parameters for any given earthquake in the US. Similarly, GEON's workbench area enables users to create their own work environment; copy, visualize, and analyze any data sets within the network; and create subsets of the data sets for their own purposes. Since all these resources are built as part of a Service-Oriented Architecture (SOA), they can also be used in other development platforms. One such platform is the Kepler workflow system, which can access web-service-based resources and provides users with graphical programming interfaces to build a model to conduct computations and/or visualization efforts using the networked resources. Developments in the area of semantic integration of the networked datasets continue to advance, and prototype studies can be accessed via the GEON portal at www.geongrid.org

  15. PACS quality control and automatic problem notifier

    NASA Astrophysics Data System (ADS)

    Honeyman-Buck, Janice C.; Jones, Douglas; Frost, Meryll M.; Staab, Edward V.

    1997-05-01

    One side effect of installing a clinical PACS is that users become dependent upon the technology, and in some cases it can be very difficult to revert to a film-based system if components fail. System failures range from slow deterioration of function, as seen in the loss of monitor luminance, through sudden catastrophic loss of the entire PACS network. This paper describes the quality control procedures in place at the University of Florida and the automatic notification system that alerts PACS personnel when a failure has happened or is anticipated. The goal is to recover from a failure with a minimum of downtime and no data loss. Routine quality control is practiced on all aspects of PACS, from acquisition, through network routing, through display, and including archiving. Whenever possible, the system components perform self-checks and between-platform checks for active processes, file system status, errors in log files, and system uptime. When an error is detected or an exception occurs, an automatic page is sent to a pager with a diagnostic code. Documentation on each code, troubleshooting procedures, and repairs is kept on an intranet server accessible only to people involved in maintaining the PACS. In addition to the automatic paging system for error conditions, acquisition is assured by an automatic fax report sent on a daily basis to all technologists acquiring PACS images, used as a cross check that all studies are archived prior to being removed from the acquisition systems. Daily quality control is performed to assure that studies can be moved from each acquisition system and that contrast adjustment is correct. The results of selected quality control reports will be presented. The intranet documentation server will be described along with the automatic pager system. Monitor quality control reports will be described, and the cost of quality control will be quantified. As PACS is accepted as a clinical tool, the same standards of quality control must be established as are expected of other equipment used in the diagnostic process.
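
    The automatic notifier can be pictured as a small log-scanning loop; everything specific below (log path, error format, send_page) is a placeholder invented for this sketch.

        import re, time

        def send_page(code):
            # Stand-in for the paging gateway described above.
            print("PAGE: diagnostic code", code)

        def watch(logfile="/var/log/pacs.log", interval=60):
            seen = 0
            while True:
                with open(logfile) as f:
                    lines = f.readlines()
                for line in lines[seen:]:          # only newly appended lines
                    m = re.search(r"ERROR (\d+)", line)
                    if m:
                        send_page(m.group(1))
                seen = len(lines)
                time.sleep(interval)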

  16. FITSManager: Management of Personal Astronomical Data

    NASA Astrophysics Data System (ADS)

    Cui, Chenzhou; Fan, Dongwei; Zhao, Yongheng; Kembhavi, Ajit; He, Boliang; Cao, Zihuang; Li, Jian; Nandrekar, Deoyani

    2011-07-01

    With the increase of personal storage capacity, it is easy to find hundreds to thousands of FITS files on the personal computer of an astrophysicist. Because the Flexible Image Transport System (FITS) is a professional data format initiated by astronomers and used mainly within this small community, data management toolkits for FITS files are very few. Astronomers need a powerful tool to help them manage their local astronomical data. Although the Virtual Observatory (VO) is a network-oriented astronomical research environment, its applications and related technologies provide useful solutions to enhance the management and utilization of astronomical data hosted on an astronomer's personal computer. FITSManager is such a tool, providing astronomers efficient management and utilization of their local data and bringing the VO to astronomers in a seamless and transparent way. FITSManager provides a rich set of functions for FITS file management, such as thumbnails, previews, type-dependent icons, header keyword indexing and search, and collaboration with other tools and online services. The development of FITSManager is an effort to fill the gap between management and analysis of astronomical data.
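
    Header-keyword indexing of a local FITS collection is easy to sketch with astropy (an assumption for illustration; the abstract does not say what FITSManager uses internally).

        from pathlib import Path
        from astropy.io import fits

        # Build a small index of selected header keywords for every FITS
        # file under the current directory.
        index = {}
        for path in Path(".").glob("**/*.fits"):
            header = fits.getheader(str(path))
            index[str(path)] = {k: header.get(k) for k in ("OBJECT", "DATE-OBS")}

        # Search the index for observations of a given object.
        hits = [p for p, h in index.items() if h.get("OBJECT") == "M31"]
        print(hits)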

  17. Assessing Inhalation Exposures Associated with Contamination Events inWater Distribution Systems

    EPA Pesticide Factsheets

    EPANET network models (inp files) used in the paper. The file "cdf2003-12singles.txt", developed using ATUS data, contains tab-separated values for the starting times and cumulative probabilities plotted in Fig. 2 of the supporting design report. There are 101 rows in the file. The first entry in each row is the cumulative probability (0 to 1.0) and the second entry is the corresponding starting time (0.0 to 24.0 hours). The second file ("two events 2003-12.txt") contains data for all 36,652 ATUS respondents who reported two grooming events in 2003 to 2012. Results in this file are used in TEVA-SPOT to generate random starting times for individuals who take two showers per day. The file has 36,652 rows and five tab-separated columns. The first column contains the year the data were collected and the second column contains the ATUS identifiers used for the respondents. The third column contains the starting times in hours local time for the first event and the fourth column contains the starting time in hours local time for the second event. The fifth column provides the ATUS weights for the respondents. Weights are needed to compensate for the manner in which sampling and data collection were carried out in ATUS. The Report (EPA/600/R-15/271) documents the design for incorporating the capability for estimating inhalation doses in TEVA-SPOT. This dataset is associated with the following publication: Janke, R., M. Davis, and T. Taxon. Assessing In
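
    Given the stated layout of the 101-row CDF file (cumulative probability first, start time second), drawing a random start time is a few lines of inverse-CDF sampling; the filename is taken from the description above.

        import bisect, random

        # Read the cumulative distribution of shower start times.
        probs, times = [], []
        with open("cdf2003-12singles.txt") as f:
            for line in f:
                p, t = line.split()
                probs.append(float(p))
                times.append(float(t))

        def random_start_time():
            """Inverse-CDF sample: hours local time, 0.0 to 24.0."""
            u = random.random()
            return times[bisect.bisect_left(probs, u)]

        print(random_start_time())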

  18. System for Automated Calibration of Vector Modulators

    NASA Technical Reports Server (NTRS)

    Lux, James; Boas, Amy; Li, Samuel

    2009-01-01

    Vector modulators are used to impose baseband modulation on RF signals, but non-ideal behavior limits the overall performance. The non-ideal behavior of the vector modulator is compensated using data collected with the use of an automated test system driven by a LabVIEW program that systematically applies thousands of control-signal values to the device under test and collects RF measurement data. The technology innovation automates several steps in the process. First, an automated test system, using computer-controlled digital-to-analog converters (DACs) and a computer-controlled vector network analyzer (VNA), systematically can apply different I and Q signals (which represent the complex number by which the RF signal is multiplied) to the vector modulator under test (VMUT), while measuring the RF performance, specifically gain and phase. The automated test system uses the LabVIEW software to control the test equipment, collect the data, and write it to a file. The input to the LabVIEW program is either user input for systematic variation, or is provided in a file containing specific test values that should be fed to the VMUT. The output file contains both the control signals and the measured data. The second step is to post-process the file to determine the correction functions as needed. The result of the entire process is a tabular representation, which allows translation of a desired I/Q value to the required analog control signals to produce a particular RF behavior. In some applications, corrected performance is needed only for a limited range. If the vector modulator is being used as a phase shifter, there is only a need to correct I and Q values that represent points on a circle, not the entire plane. This innovation has been used to calibrate 2-GHz MMIC (monolithic microwave integrated circuit) vector modulators in the High EIRP Cluster Array project (EIRP is high effective isotropic radiated power). These calibrations were then used to create correction tables to allow the commanding of the phase shift in each of four channels used as a phased array for beam steering of a Ka-band (32-GHz) signal. The system also was the basis of a breadboard electronic beam steering system. In this breadboard, the goal was not to make systematic measurements of the properties of a vector modulator, but to drive the breadboard with a series of test patterns varying in phase and amplitude. This is essentially the same calibration process, but with the difference that the data collection process is oriented toward collecting breadboard performance, rather than the measurement of output from a network analyzer.
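
    The calibration loop reduces to "sweep the controls, record the complex response, invert by lookup". In this numpy sketch, measure() stands in for the VNA-driven test system, and the toy non-ideality (a fixed gain and phase error) is invented for illustration.

        import numpy as np

        # Stand-in for the automated test system: returns the measured
        # complex response for a given (I, Q) control pair.
        def measure(i, q):
            return (i + 1j * q) * 1.05 * np.exp(1j * 0.1)  # toy non-ideality

        # Sweep a grid of control values and record the responses.
        grid = np.linspace(-1.0, 1.0, 41)
        controls = np.array([(i, q) for i in grid for q in grid])
        measured = np.array([measure(i, q) for i, q in controls])

        def correct(target):
            """Control pair whose measured response best matches target
            (nearest-neighbor inversion of the calibration table)."""
            return controls[np.argmin(np.abs(measured - target))]

        print(correct(0.5 + 0.5j))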

  19. A hydrologic network supporting spatially referenced regression modeling in the Chesapeake Bay watershed

    USGS Publications Warehouse

    Brakebill, J.W.; Preston, S.D.

    2003-01-01

    The U.S. Geological Survey has developed a methodology for statistically relating nutrient sources and land-surface characteristics to nutrient loads of streams. The methodology is referred to as SPAtially Referenced Regressions On Watershed attributes (SPARROW), and relates measured stream nutrient loads to nutrient sources using nonlinear statistical regression models. A spatially detailed digital hydrologic network of stream reaches, stream-reach characteristics such as mean streamflow, water velocity, reach length, and travel time, and their associated watersheds supports the regression models. This network serves as the primary framework for spatially referencing potential nutrient source information such as atmospheric deposition, septic systems, point sources, land use, land cover, and agricultural sources and land-surface characteristics such as land use, land cover, average-annual precipitation and temperature, slope, and soil permeability. In the Chesapeake Bay watershed, which covers parts of Delaware, Maryland, Pennsylvania, New York, Virginia, West Virginia, and Washington D.C., SPARROW was used to generate models estimating loads of total nitrogen and total phosphorus representing 1987 and 1992 land-surface conditions. The 1987 models used a hydrologic network derived from an enhanced version of the U.S. Environmental Protection Agency's digital River Reach File and coarse-resolution Digital Elevation Models (DEMs). A new hydrologic network was created to support the 1992 models by generating stream reaches representing surface-water pathways defined by flow direction and flow accumulation algorithms from higher resolution DEMs. On a reach-by-reach basis, stream reach characteristics essential to the modeling were transferred to the newly generated pathways or reaches from the enhanced River Reach File used to support the 1987 models. To complete the new network, watersheds for each reach were generated using the direction of surface-water flow derived from the DEMs. This network improves upon existing digital stream data by increasing the level of spatial detail and providing consistency between the reach locations and topography. The hydrologic network also aids in illustrating the spatial patterns of predicted nutrient loads and sources contributed locally to each stream, and the percentages of nutrient load that reach Chesapeake Bay.

  20. The space physics analysis network

    NASA Astrophysics Data System (ADS)

    Green, James L.

    1988-04-01

    The Space Physics Analysis Network, or SPAN, is emerging as a viable method for solving an immediate communication problem for space and Earth scientists and has been operational for nearly 7 years. SPAN, with its extension into Europe, utilizes computer-to-computer communications allowing mail, binary and text file transfer, and remote logon capability to over 1000 space science computer systems. The network has been used to successfully transfer real-time data to remote researchers for rapid data analysis, but its primary function is for non-real-time applications. One of the major advantages of using SPAN is its spacecraft mission independence. Space science researchers using SPAN are located in universities, industries and government institutions all across the United States and Europe. These researchers are in such fields as magnetospheric physics, astrophysics, ionospheric physics, atmospheric physics, climatology, meteorology, oceanography, planetary physics and solar physics. SPAN users have access to space and Earth science databases, mission planning and information systems, and computational facilities for the purposes of facilitating correlative space data exchange, data analysis and space research. For example, the National Space Science Data Center (NSSDC), which manages the network, provides facilities on SPAN such as the Network Information Center (SPAN NIC). SPAN has interconnections with several national and international networks such as HEPNET and TEXNET, forming a transparent DECnet network. The combined total number of computers now reachable over these combined networks is about 2000. In addition, SPAN supports full function capabilities over the international public packet-switched networks (e.g. TELENET) and has mail gateways to ARPANET, BITNET and JANET.
