Sample records for virtual file system

  1. File System Virtual Appliances: Portable File System Implementations

    DTIC Science & Technology

    2009-05-01

    Mobile Computing Systems and Applications, Santa Cruz, CA, 1994. IEEE. [10] Michael Eisler, Peter Corbett, Michael Kazar, Daniel S. Nydick, and...Gingell, Joseph P. Moran, and William A. Shannon. Virtual Memory Architecture in SunOS. In USENIX Summer Conference, pages 81–94, Berkeley, CA, 1987

  2. Virtual file system on NoSQL for processing high volumes of HL7 messages.

    PubMed

    Kimura, Eizen; Ishihara, Ken

    2015-01-01

    The Standardized Structured Medical Information Exchange (SS-MIX) is intended to be the standard repository for HL7 messages, but it depends on a local file system and its scalability is therefore limited. We implemented a virtual file system using NoSQL to incorporate modern computing technology into SS-MIX and allow the system to integrate local patient IDs from different healthcare systems into a universal system. We discuss its implementation using the database MongoDB and describe its performance in a case study.
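    To make the idea concrete, below is a minimal sketch of such a NoSQL-backed store, assuming MongoDB's GridFS and invented field names (universal_pid, local_pid); the paper's actual schema is not described in this abstract.

      # Hypothetical sketch: keep SS-MIX-style HL7 message files in MongoDB GridFS,
      # keyed by a universal patient ID so records from different hospital systems
      # can be merged. Field names are illustrative, not taken from the paper.
      from pymongo import MongoClient
      import gridfs

      client = MongoClient("mongodb://localhost:27017")
      fs = gridfs.GridFS(client["ssmix"])

      def put_message(universal_pid, local_pid, hl7_bytes):
          # GridFS stores the raw HL7 payload; the metadata fields stand in for
          # the directory hierarchy a local file system would otherwise provide.
          fs.put(hl7_bytes, universal_pid=universal_pid, local_pid=local_pid)

      def messages_for_patient(universal_pid):
          # A virtual "directory listing" becomes a metadata query.
          return [f.read() for f in fs.find({"universal_pid": universal_pid})]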

  3. File System Virtual Appliances: Portable File System Implementations

    DTIC Science & Technology

    2010-04-01

    computing. Santa Cruz, CA, 1994. [12] Michael Eisler, Peter Corbett, Michael Kazar, Daniel S. Nydick, and Christopher Wagner. Data ONTAP GX: a...fuse.sourceforge.net. [15] R. A. Gingell, J. P. Moran, and W. A. Shannon. Virtual memory architecture in SunOS. USENIX ATC, pages 81–94, 1987. [16

  4. 77 FR 5008 - Solios Power Mid-Atlantic Virtual LLC; Supplemental Notice That Initial Market-Based Rate Filing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-01

    ...-referenced proceeding are accessible in the Commission's eLibrary system by clicking on the appropriate link... Mid-Atlantic Virtual LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request... of Solios Power Mid-Atlantic Virtual LLC's application for market-based rate authority, with an...

  5. Virtual file system for PSDS

    NASA Technical Reports Server (NTRS)

    Runnels, Tyson D.

    1993-01-01

    This is a case study. It deals with the use of a 'virtual file system' (VFS) for Boeing's UNIX-based Product Standards Data System (PSDS). One of the objectives of PSDS is to store digital standards documents. The file-storage requirements are that the files must be rapidly accessible, stored for long periods of time - as though they were paper, protected from disaster, and accumulative to about 80 billion characters (80 gigabytes). This volume of data will be approached in the first two years of the project's operation. The approach chosen is to install a hierarchical file migration system using optical disk cartridges. Files are migrated from high-performance media to lower performance optical media based on a least-frequency-used algorithm. The optical media are less expensive per character stored and are removable. Vital statistics about the removable optical disk cartridges are maintained in a database. The assembly of hardware and software acts as a single virtual file system transparent to the PSDS user. The files are copied to 'backup-and-recover' media whose vital statistics are also stored in the database. Seventeen months into operation, PSDS is storing 49 gigabytes. A number of operational and performance problems were overcome. Costs are under control. New and/or alternative uses for the VFS are being considered.
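    The migration policy described above reduces to a small selection routine. A minimal sketch, assuming files are tracked as (path, size, access-count) tuples; PSDS's actual implementation is not shown in the abstract.

      # Least-frequently-used selection for migrating files from fast disk to
      # removable optical cartridges once disk space runs low (illustrative only).
      def pick_migration_candidates(files, bytes_needed):
          """files: list of (path, size_bytes, access_count) tuples."""
          freed, chosen = 0, []
          for path, size, count in sorted(files, key=lambda f: f[2]):
              if freed >= bytes_needed:
                  break          # enough space will be freed on the fast media
              chosen.append(path)
              freed += size
          return chosen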

  6. 78 FR 5765 - Wireline Competition Bureau Releases Connect America Phase II Cost Model Virtual Workshop...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-28

    ... variation by geography; inter-office transport cost; voice capability; wire center facilities; sizing of... through traditional channels at the FCC, such as the Commission's Electronic Comment Filing System (ECFS... Electronic Comment Filing System (ECFS). In the meantime, parties are encouraged to examine both the Virtual...

  7. File System Virtual Appliances: Third-Party File System Implementations Without the Pain

    DTIC Science & Technology

    2008-05-01

    Eifeldt. POSIX: a developer’s view of standards. USENIX ATC, pages 24–24. USENIX Association, 1997. [12] M. Eisler, P. Corbett, M. Kazar, D. S...Gingell, J. P. Moran, and W. A. Shannon. Virtual Memory Architecture in SunOS. USENIX Summer Conference, pages 81–94, 1987. [17] D. Gupta, L. Cherkasova, R

  8. A convertor and user interface to import CAD files into worldtoolkit virtual reality systems

    NASA Technical Reports Server (NTRS)

    Wang, Peter Hor-Ching

    1996-01-01

    Virtual Reality (VR) is a rapidly developing human-to-computer interface technology. VR can be considered a three-dimensional, computer-generated Virtual World (VW) which can sense particular aspects of a user's behavior, allow the user to manipulate objects interactively, and render the VW in real time accordingly. The user is totally immersed in the virtual world and feels a sense of being transported into that VW. NASA/MSFC Computer Application Virtual Environments (CAVE) has been developing space-related VR applications since 1990. The VR systems in the CAVE lab are based on the VPL RB2 system, which consists of a VPL RB2 control tower, an LX eyephone, an Isotrak Polhemus sensor, two Fastrak Polhemus sensors, a Flock of Birds sensor, and two VPL DG2 DataGloves. A dynamics animator called Body Electric from VPL is used as the control system to interface with all the input/output devices and to provide network communications as well as the VR programming environment. RB2 Swivel 3D is used as the modelling program to construct the VWs. A severe limitation of the VPL VR system is the use of RB2 Swivel 3D, which restricts files to a maximum of 1020 objects and lacks advanced graphics texture mapping. The other limitation is that the VPL VR system is a turn-key system which gives the user no flexibility to add new sensors or a C language interface. Recently, the NASA/MSFC CAVE lab has provided VR systems built on Sense8 WorldToolKit (WTK), a C library for creating VR development environments. WTK provides device drivers for most of the sensors and eyephones available on the VR market. WTK accepts several CAD file formats, such as the Sense8 Neutral File Format, AutoCAD DXF and 3D Studio file formats, the Wavefront OBJ file format, the VideoScape GEO file format, and the Intergraph EMS and CATIA stereolithography (STL) file formats. WTK functions are object-oriented in their naming convention, are grouped into classes, and provide an easy C language interface. Using a CAD or modelling program to build a VW for WTK VR applications, we typically construct the stationary universe with all the geometric objects except the dynamic objects, and create each dynamic object in an individual file.

  9. Robotics virtual rail system and method

    DOEpatents

    Bruemmer, David J [Idaho Falls, ID; Few, Douglas A [Idaho Falls, ID; Walton, Miles C [Idaho Falls, ID

    2011-07-05

    A virtual track or rail system and method is described for execution by a robot. A user, through a user interface, generates a desired path comprised of at least one segment representative of the virtual track for the robot. Start and end points are assigned to the desired path and velocities are also associated with each of the at least one segment of the desired path. A waypoint file is generated including positions along the virtual track representing the desired path with the positions beginning from the start point to the end point including the velocities of each of the at least one segment. The waypoint file is sent to the robot for traversing along the virtual track.
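    The waypoint-file generation step lends itself to a short sketch. The following Python fragment is a hedged illustration, assuming straight-line segments and a plain-text format; the patent specifies neither.

      # Generate a waypoint file from path segments: positions from start to end,
      # each carrying its segment's velocity. Spacing and format are assumptions.
      def write_waypoint_file(path, segments, spacing=0.1):
          """segments: list of ((x0, y0), (x1, y1), velocity) tuples."""
          with open(path, "w") as f:
              for (x0, y0), (x1, y1), v in segments:
                  length = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
                  steps = max(1, int(length / spacing))
                  for i in range(steps + 1):
                      t = i / steps
                      # Each waypoint is a position on the virtual rail plus velocity.
                      f.write(f"{x0 + t*(x1 - x0):.3f} {y0 + t*(y1 - y0):.3f} {v:.2f}\n")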

  10. Integrating UniTree with the data migration API

    NASA Technical Reports Server (NTRS)

    Schrodel, David G.

    1994-01-01

    The Data Migration Application Programming Interface (DMAPI) has the potential to allow developers of open systems Hierarchical Storage Management (HSM) products to virtualize native file systems without the requirement to make changes to the underlying operating system. This paper describes advantages of virtualizing native file systems in hierarchical storage management systems, the DMAPI at a high level, what the goals are for the interface, and the integration of the Convex UniTree+HSM with DMAPI along with some of the benefits derived in the resulting product.

  11. 78 FR 36765 - Combined Notice of Filings #1

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-19

    ... & OA re Up-to-Congestion & Virtual Transactions to be effective 8/9/2013. Filed Date: 6/10/13.... Comments Due: 5 p.m. ET 6/17/13. The filings are accessible in the Commission's eLibrary system by clicking...

  12. Efficient Checkpointing of Virtual Machines using Virtual Machine Introspection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aderholdt, Ferrol; Han, Fang; Scott, Stephen L

    Cloud Computing environments rely heavily on system-level virtualization. This is due to the inherent benefits of virtualization, including fault tolerance through checkpoint/restart (C/R) mechanisms. Because clouds are the abstraction of large data centers and large data centers have a higher potential for failure, it is imperative that a C/R mechanism for such an environment provide minimal latency as well as a small checkpoint file size. Recently, there has been much research into C/R with respect to virtual machines (VM), providing excellent solutions to reduce either checkpoint latency or checkpoint file size. However, these approaches do not provide both. This paper presents a method of checkpointing VMs by utilizing virtual machine introspection (VMI). Through the usage of VMI, we are able to determine which pages of memory within the guest are used or free and are better able to reduce the amount of pages written to disk during a checkpoint. We have validated this work by using various benchmarks to measure the latency along with the checkpoint size. With respect to checkpoint file size, our approach results in file sizes within 24% or less of the actual used memory within the guest. Additionally, the checkpoint latency of our approach is up to 52% faster than KVM's default method.
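    The core idea, skipping free guest pages, fits in a few lines. A sketch under stated assumptions: is_page_free() stands in for real introspection of the guest's page allocator, and the output format is invented.

      # Write only used page frames to the checkpoint file, recording each
      # frame number so the memory image can be reassembled on restore.
      PAGE_SIZE = 4096

      def checkpoint(guest_memory, is_page_free, out_path):
          with open(out_path, "wb") as out:
              for pfn in range(len(guest_memory) // PAGE_SIZE):
                  if is_page_free(pfn):
                      continue  # free pages need not be saved, shrinking the file
                  out.write(pfn.to_bytes(8, "little"))
                  out.write(guest_memory[pfn * PAGE_SIZE:(pfn + 1) * PAGE_SIZE])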

  13. LVFS: A Scalable Petabyte/Exabyte Data Storage System

    NASA Astrophysics Data System (ADS)

    Golpayegani, N.; Halem, M.; Masuoka, E. J.; Ye, G.; Devine, N. K.

    2013-12-01

    Managing petabytes of data with hundreds of millions of files is the first step necessary towards an effective big data computing and collaboration environment in a distributed system. We describe here the MODAPS LAADS Virtual File System (LVFS), a new storage architecture which replaces the previous MODAPS operational Level 1 Land Atmosphere Archive Distribution System (LAADS) NFS-based approach to storing and distributing datasets from several instruments, such as MODIS, MERIS, and VIIRS. LAADS is responsible for the distribution of over 4 petabytes of data and over 300 million files across more than 500 disks. We present here the first LVFS big data comparative performance results and new capabilities not previously possible with the LAADS system. We consider two aspects in addressing inefficiencies of massive scales of data. First is dealing in a reliable and resilient manner with the volume and quantity of files in such a dataset, and second is minimizing the discovery and lookup times for accessing files in such large datasets. There are several popular file systems that successfully deal with the first aspect of the problem. Their solution, in general, is through distribution, replication, and parallelism of the storage architecture. The Hadoop Distributed File System (HDFS), Parallel Virtual File System (PVFS), and Lustre are examples of such file systems that deal with petabyte data volumes. The second aspect deals with data discovery among billions of files, the largest bottleneck in reducing access time. The metadata of a file, generally represented in a directory layout, is stored in ways that are not readily scalable. This is true for HDFS, PVFS, and Lustre as well. Recent experimental file systems, such as Spyglass or Pantheon, have attempted to address this problem through redesign of the metadata directory architecture. LVFS takes a radically different architectural approach by eliminating the need for a separate directory within the file system. The LVFS system replaces the NFS disk-mounting approach of LAADS and utilizes the already existing, highly optimized metadata database server, which is applicable to most scientific big-data-intensive compute systems. Thus, LVFS ties the existing storage system to the existing metadata infrastructure system, which we believe leads to a scalable exabyte virtual file system. The uniqueness of the implemented design is not limited to LAADS but can be employed with most scientific data processing systems. By utilizing Filesystem in Userspace (FUSE), a kernel module available in many operating systems, LVFS was able to replace the NFS system while staying POSIX compliant. As a result, the LVFS system becomes scalable to exabyte sizes owing to the use of highly scalable database servers optimized for metadata storage. The flexibility of the LVFS design allows it to organize data on the fly in different ways, such as by region, date, instrument or product, without the need for duplication, symbolic links, or any other replication methods. We propose here a strategic reference architecture that addresses the inefficiencies of scientific petabyte/exabyte file system access through the dynamic integration of the observing system's large metadata file.
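    The architectural point, path lookup answered by a metadata database rather than an on-disk directory tree, can be illustrated with a toy resolver; the schema and virtual path layout below are assumptions, not LVFS internals.

      # Toy LVFS-style resolver: the metadata server maps a virtual path such as
      # /MODIS/MOD021KM/2013-06-01/granule.hdf to a physical location, so the same
      # file can appear under many layouts (by date, region, ...) without symlinks.
      import sqlite3

      con = sqlite3.connect(":memory:")
      con.execute("CREATE TABLE granules (instrument TEXT, product TEXT, "
                  "date TEXT, physical_path TEXT)")

      def resolve(virtual_path):
          _, instrument, product, date, _name = virtual_path.split("/")
          row = con.execute("SELECT physical_path FROM granules "
                            "WHERE instrument=? AND product=? AND date=?",
                            (instrument, product, date)).fetchone()
          if row is None:
              raise FileNotFoundError(virtual_path)
          return row[0]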

  14. Registered File Support for Critical Operations Files at (Space Infrared Telescope Facility) SIRTF

    NASA Technical Reports Server (NTRS)

    Turek, G.; Handley, Tom; Jacobson, J.; Rector, J.

    2001-01-01

    The SIRTF Science Center's (SSC) Science Operations System (SOS) has to contend with nearly one hundred critical operations files via comprehensive file management services. The management is accomplished via the registered file system (otherwise known as TFS) which manages these files in a registered file repository composed of a virtual file system accessible via a TFS server and a file registration database. The TFS server provides controlled, reliable, and secure file transfer and storage by registering all file transactions and meta-data in the file registration database. An API is provided for application programs to communicate with TFS servers and the repository. A command line client implementing this API has been developed as a client tool. This paper describes the architecture, current implementation, but more importantly, the evolution of these services based on evolving community use cases and emerging information system technology.

  15. 77 FR 37394 - Combined Notice of Filings #1

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-21

    ...: Revisions to the Tariff Att Q to modify PJM's Credit Standards for Virtual Bids to be effective 8/8/2012.../12. The filings are accessible in the Commission's eLibrary system by clicking on the links or...

  16. Beyond a Terabyte File System

    NASA Technical Reports Server (NTRS)

    Powers, Alan K.

    1994-01-01

    The Numerical Aerodynamics Simulation Facility's (NAS) CRAY C916/1024 accesses a "virtual" on-line file system, which is expanding beyond a terabyte of information. This paper presents some options for fine-tuning the Data Migration Facility (DMF) to stretch the on-line disk capacity and explores transitions to newer devices (STK 4490, ER90, RAID).

  17. Associative programming language and virtual associative access manager

    NASA Technical Reports Server (NTRS)

    Price, C.

    1978-01-01

    APL provides convenient associative data manipulation functions in a high-level language. Six statements were added to PL/1 via a preprocessor: CREATE, INSERT, FIND, FOR EACH, REMOVE, and DELETE. They allow complete control of all data base operations. During execution, data base management programs perform the functions required to support the APL language. VAAM is the data base management system designed to support the APL language. APL/VAAM is used by CADANCE, an interactive graphic computer system. VAAM is designed to support heavily referenced files. Virtual memory files, which utilize the paging mechanism of the operating system, are used. VAAM supports a full network data structure. The two basic blocks in a VAAM file are entities and sets. Entities are the basic information elements and correspond to PL/1-based structures defined by the user. Sets contain the relationship information and are implemented as arrays.
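    The entity/set layout described here maps naturally onto a toy model; the sketch below is illustrative Python, not the PL/1 implementation, and all names are invented.

      # Entities hold the data (like the user-defined PL/1-based structures);
      # sets hold the relationships, implemented as arrays of entity references.
      class Entity:
          def __init__(self, **fields):
              self.__dict__.update(fields)

      class Set:
          """Relationship block: an array of member entities under one owner."""
          def __init__(self, owner):
              self.owner, self.members = owner, []
          def insert(self, entity):
              self.members.append(entity)
          def find(self, predicate):
              return [e for e in self.members if predicate(e)]

      drawing = Entity(name="bracket")                     # CREATE
      lines = Set(owner=drawing)
      lines.insert(Entity(x0=0, y0=0, x1=5, y1=5))         # INSERT
      long_lines = lines.find(lambda e: e.x1 - e.x0 > 1)   # FIND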

  18. Distributed Virtual System (DIVIRS) Project

    NASA Technical Reports Server (NTRS)

    Schorr, Herbert; Neuman, B. Clifford

    1993-01-01

    As outlined in our continuation proposal 92-ISI-50R (revised) on contract NCC 2-539, we are (1) developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to program parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; (2) developing communications routines that support the abstractions implemented in item one; (3) continuing the development of file and information systems based on the virtual system model; and (4) incorporating appropriate security measures to allow the mechanisms developed in items 1 through 3 to be used on an open network. The goal throughout our work is to provide a uniform model that can be applied to both parallel and distributed systems. We believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. Our work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.

  19. DIstributed VIRtual System (DIVIRS) project

    NASA Technical Reports Server (NTRS)

    Schorr, Herbert; Neuman, B. Clifford

    1994-01-01

    As outlined in our continuation proposal 92-ISI-50R (revised) on NASA cooperative agreement NCC2-539, we are (1) developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to develop and execute parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; (2) developing communications routines that support the abstractions implemented in item one; (3) continuing the development of file and information systems based on the Virtual System Model; and (4) incorporating appropriate security measures to allow the mechanisms developed in items 1 through 3 to be used on an open network. The goal throughout our work is to provide a uniform model that can be applied to both parallel and distributed systems. We believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. Our work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.

  20. DIstributed VIRtual System (DIVIRS) project

    NASA Technical Reports Server (NTRS)

    Schorr, Herbert; Neuman, Clifford B.

    1995-01-01

    As outlined in our continuation proposal 92-ISI-50R (revised) on NASA cooperative agreement NCC2-539, we are (1) developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to develop and execute parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; (2) developing communications routines that support the abstractions implemented in item one; (3) continuing the development of file and information systems based on the Virtual System Model; and (4) incorporating appropriate security measures to allow the mechanisms developed in items 1 through 3 to be used on an open network. The goal throughout our work is to provide a uniform model that can be applied to both parallel and distributed systems. We believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. Our work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.

  21. Distributed Virtual System (DIVIRS) project

    NASA Technical Reports Server (NTRS)

    Schorr, Herbert; Neuman, B. Clifford

    1993-01-01

    As outlined in the continuation proposal 92-ISI-50R (revised) on NASA cooperative agreement NCC 2-539, the investigators are developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to develop and execute parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; developing communications routines that support the abstractions implemented; continuing the development of file and information systems based on the Virtual System Model; and incorporating appropriate security measures to allow the mechanisms developed to be used on an open network. The goal throughout the work is to provide a uniform model that can be applied to both parallel and distributed systems. The authors believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. The work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.

  22. Design of a steganographic virtual operating system

    NASA Astrophysics Data System (ADS)

    Ashendorf, Elan; Craver, Scott

    2015-03-01

    A steganographic file system is a secure file system whose very existence on a disk is concealed. Customarily, these systems hide an encrypted volume within unused disk blocks, slack space, or atop conventional encrypted volumes. These file systems are far from undetectable, however: aside from their ciphertext footprint, they require a software or driver installation whose presence can attract attention and then targeted surveillance. We describe a new steganographic operating environment that requires no visible software installation, launching instead from a concealed bootstrap program that can be extracted and invoked with a chain of common Unix commands. Our system conceals its payload within innocuous files that typically contain high-entropy data, producing a footprint that is far less conspicuous than existing methods. The system uses a local web server to provide a file system, user interface and applications through a web architecture.
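    The extraction step can be pictured with a deliberately simple sketch; the offset, length, and one-byte XOR "decryption" are invented stand-ins for the paper's concealed bootstrap and real cryptography.

      # Recover a concealed payload from a fixed offset inside a high-entropy
      # carrier file, in the spirit of the chain of common Unix commands
      # (e.g., dd/tail) the abstract mentions. Purely illustrative.
      def extract_payload(carrier, offset, length, key=0x5A):
          with open(carrier, "rb") as f:
              f.seek(offset)
              hidden = f.read(length)
          # A real system would decrypt; a one-byte XOR stands in for that step.
          return bytes(b ^ key for b in hidden)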

  23. File System Virtual Appliances

    DTIC Science & Technology

    2010-05-01

    Technical Conference. USENIX Association, Berkeley, CA, 24–24. [31] Eisler, M., Corbett, P., Kazar, M., Nydick, D. S., and Wagner, C. 2007. Data ONTAP GX...and Operations Market Through 2012. http://www.gartner.com/it/page.jsp?id=638207. [37] Gingell, R. A., Moran, J. P., and Shannon, W. A. 1987. Virtual

  24. The virtual microscopy database - sharing digital microscope images for research and education.

    PubMed

    Lee, Lisa M J; Goldman, Haviva M; Hortsch, Michael

    2018-02-14

    Over the last 20 years, virtual microscopy has become the predominant mode of teaching the structural organization of cells, tissues, and organs, replacing the use of optical microscopes and glass slides in a traditional histology or pathology laboratory setting. Although virtual microscopy image files can easily be duplicated, creating them requires not only quality histological glass slides but also an expensive whole slide microscopic scanner and massive data storage devices. These resources are not available to all educators and researchers, especially at new institutions in developing countries. This leaves many schools without access to virtual microscopy resources. The Virtual Microscopy Database (VMD) is a new resource established to address this problem. It is a virtual image file-sharing website that allows researchers and educators easy access to a large repository of virtual histology and pathology image files. With support from the American Association of Anatomists (Bethesda, MD) and MBF Bioscience Inc. (Williston, VT), registration and use of the VMD are currently free of charge. However, the VMD site is restricted to faculty and staff of research and educational institutions. Virtual Microscopy Database users can upload their own collection of virtual slide files, as well as view and download image files for their own non-profit educational and research purposes that have been deposited by other VMD clients. Anat Sci Educ. © 2018 American Association of Anatomists.

  25. Analysis towards VMEM File of a Suspended Virtual Machine

    NASA Astrophysics Data System (ADS)

    Song, Zheng; Jin, Bo; Sun, Yongqing

    With the popularity of virtual machines, forensic investigators are challenged with more complicated situations, among which discovering evidence in virtualized environments is of significant importance. This paper mainly analyzes the file suffixed with .vmem in VMware Workstation, which stores all pseudo-physical memory into an image. The internal structure of the .vmem file is studied and disclosed. Key information about processes and threads of a suspended virtual machine is revealed. Further investigation into the Windows XP SP3 heap contents is conducted and a proof-of-concept tool is provided. Different methods to obtain forensic memory images are introduced, with both advantages and limits analyzed. We conclude with an outlook.

  26. Solid Freeform Fabrication Proceedings - 1999

    DTIC Science & Technology

    1999-08-11

    geometry of the stylus. Some geometries cannot be used to acquire data if the part geometry interferes with a feature on the part. Thus, the data...fabrication processing systems such as surface micro-machining and lithography. Conclusion: The LCVD system (figure 6) has the versatility and...part, creating STL (STereoLithography) or VRML (Virtual Reality Modeling Language) files, slicing them, converting into laser path files, and

  27. Students' Acceptance of File Sharing Systems as a Tool for Sharing Course Materials: The Case of Google Drive

    ERIC Educational Resources Information Center

    Sadik, Alaa

    2017-01-01

    Students' perceptions about both ease of use and usefulness are fundamental factors in determining their acceptance and successful use of technology in higher education. File sharing systems are one of these technologies and can be used to manage and deliver course materials and coordinate virtual teams. The aim of this study is to explore how…

  28. Cooperative storage of shared files in a parallel computing system with dynamic block size

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2015-11-10

    Improved techniques are provided for parallel writing of data to a shared object in a parallel computing system. A method is provided for storing data generated by a plurality of parallel processes to a shared object in a parallel computing system. The method is performed by at least one of the processes and comprises: dynamically determining a block size for storing the data; exchanging a determined amount of the data with at least one additional process to achieve a block of the data having the dynamically determined block size; and writing the block of the data having the dynamically determined block size to a file system. The determined block size comprises, e.g., a total amount of the data to be stored divided by the number of parallel processes. The file system comprises, for example, a log structured virtual parallel file system, such as a Parallel Log-Structured File System (PLFS).
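    The abstract states the block-size rule directly: the total data to store divided by the number of parallel processes. A one-function sketch (the data-exchange step between processes is elided):

      # Each process ends up writing one equally sized, aligned block.
      def dynamic_block_size(total_bytes, num_procs):
          return -(-total_bytes // num_procs)   # ceiling division

      assert dynamic_block_size(10_000_000, 8) == 1_250_000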

  29. iRODS: A Distributed Data Management Cyberinfrastructure for Observatories

    NASA Astrophysics Data System (ADS)

    Rajasekar, A.; Moore, R.; Vernon, F.

    2007-12-01

    Large-scale and long-term preservation of both observational and synthesized data requires a system that virtualizes data management concepts. A methodology is needed that can work across long distances in space (distribution) and long-periods in time (preservation). The system needs to manage data stored on multiple types of storage systems including new systems that become available in the future. This concept is called infrastructure independence, and is typically implemented through virtualization mechanisms. Data grids are built upon concepts of data and trust virtualization. These concepts enable the management of collections of data that are distributed across multiple institutions, stored on multiple types of storage systems, and accessed by multiple types of clients. Data virtualization ensures that the name spaces used to identify files, users, and storage systems are persistent, even when files are migrated onto future technology. This is required to preserve authenticity, the link between the record and descriptive and provenance metadata. Trust virtualization ensures that access controls remain invariant as files are moved within the data grid. This is required to track the chain of custody of records over time. The Storage Resource Broker (http://www.sdsc.edu/srb) is one such data grid used in a wide variety of applications in earth and space sciences such as ROADNet (roadnet.ucsd.edu), SEEK (seek.ecoinformatics.org), GEON (www.geongrid.org) and NOAO (www.noao.edu). Recent extensions to data grids provide one more level of virtualization - policy or management virtualization. Management virtualization ensures that execution of management policies can be automated, and that rules can be created that verify assertions about the shared collections of data. When dealing with distributed large-scale data over long periods of time, the policies used to manage the data and provide assurances about the authenticity of the data become paramount. The integrated Rule-Oriented Data System (iRODS) (http://irods.sdsc.edu) provides the mechanisms needed to describe not only management policies, but also to track how the policies are applied and their execution results. The iRODS data grid maps management policies to rules that control the execution of the remote micro-services. As an example, a rule can be created that automatically creates a replica whenever a file is added to a specific collection, or extracts its metadata automatically and registers it in a searchable catalog. For the replication operation, the persistent state information consists of the replica location, the creation date, the owner, the replica size, etc. The mechanism used by iRODS for providing policy virtualization is based on well-defined functions, called micro-services, which are chained into alternative workflows using rules. A rule engine, based on the event-condition-action paradigm executes the rule-based workflows after an event. Rules can be deferred to a pre-determined time or executed on a periodic basis. As the data management policies evolve, the iRODS system can implement new rules, new micro-services, and new state information (metadata content) needed to manage the new policies. Each sub- collection can be managed using a different set of policies. The discussion of the concepts in rule-based policy virtualization and its application to long-term and large-scale data management for observatories such as ORION and NEON will be the basis of the paper.
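    The event-condition-action pattern described above can be sketched in a few lines of plain Python (this is not the iRODS rule language; the collection name and actions are invented):

      # A rule fires on an event when its condition holds, chaining
      # micro-services such as replication and metadata registration.
      RULES = []

      def rule(event, condition):
          def register(action):
              RULES.append((event, condition, action))
              return action
          return register

      @rule("put", condition=lambda f: f["collection"] == "/obs/roadnet")
      def replicate_and_index(f):
          print(f"replicating {f['path']} to a secondary resource")
          print(f"registering metadata for {f['path']} in the catalog")

      def fire(event, f):
          for ev, cond, action in RULES:
              if ev == event and cond(f):
                  action(f)

      fire("put", {"path": "/obs/roadnet/x.dat", "collection": "/obs/roadnet"})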

  30. Doing Your Science While You're in Orbit

    NASA Astrophysics Data System (ADS)

    Green, Mark L.; Miller, Stephen D.; Vazhkudai, Sudharshan S.; Trater, James R.

    2010-11-01

    Large-scale neutron facilities such as the Spallation Neutron Source (SNS) located at Oak Ridge National Laboratory need easy-to-use access to Department of Energy Leadership Computing Facilities and experiment repository data. The Orbiter thick- and thin-client and its supporting Service Oriented Architecture (SOA) based services (available at https://orbiter.sns.gov) consist of standards-based components that are reusable and extensible for accessing high performance computing, data and computational grid infrastructure, and cluster-based resources easily from a user configurable interface. The primary Orbiter system goals consist of (1) developing infrastructure for the creation and automation of virtual instrumentation experiment optimization, (2) developing user interfaces for thin- and thick-client access, (3) providing a prototype incorporating major instrument simulation packages, and (4) facilitating neutron science community access and collaboration. Secure Orbiter SOA authentication and authorization are achieved through the developed Virtual File System (VFS) services, which use Role-Based Access Control (RBAC) for data repository file access, thin- and thick-client functionality and application access, and computational job workflow management. The VFS Relational Database Management System (RDMS) consists of approximately 45 database tables describing 498 user accounts with 495 groups over 432,000 directories with 904,077 repository files. Over 59 million NeXus file metadata records are associated with the 12,800 unique NeXus file field/class names generated from the 52,824 repository NeXus files. Services that enable (a) summary dashboards of data repository status with Quality of Service (QoS) metrics, (b) data repository NeXus file field/class name full-text search capabilities within a Google-like interface, (c) a fully functional RBAC browser for the read-only data repository and shared areas, (d) user/group defined and shared metadata for data repository files, and (e) user, group, repository, and web 2.0-based global positioning with additional service capabilities are currently available. The SNS based Orbiter SOA integration progress with the Distributed Data Analysis for Neutron Scattering Experiments (DANSE) software development project is summarized with an emphasis on DANSE Central Services and the Virtual Neutron Facility (VNF). Additionally, the DANSE utilization of the Orbiter SOA authentication, authorization, and data transfer services best practice implementations are presented.

  31. MOLA: a bootable, self-configuring system for virtual screening using AutoDock4/Vina on computer clusters.

    PubMed

    Abreu, Rui Mv; Froufe, Hugo Jc; Queiroz, Maria João Rp; Ferreira, Isabel Cfr

    2010-10-28

    Virtual screening of small molecules using molecular docking has become an important tool in drug discovery. However, large-scale virtual screening is time demanding and usually requires dedicated computer clusters. There are a number of software tools that perform virtual screening using AutoDock4, but they require access to dedicated Linux computer clusters. Also, no software is available for performing virtual screening with Vina using computer clusters. In this paper we present MOLA, an easy-to-use graphical user interface tool that automates parallel virtual screening using AutoDock4 and/or Vina in bootable non-dedicated computer clusters. MOLA automates several tasks including: ligand preparation, parallel AutoDock4/Vina job distribution and result analysis. When the virtual screening project finishes, an OpenOffice spreadsheet file opens with the ligands ranked by binding energy and distance to the active site. All result files can automatically be recorded on a USB flash drive or on the hard-disk drive using VirtualBox. MOLA works inside a customized live-CD GNU/Linux operating system, developed by us, that bypasses the original operating system installed on the computers used in the cluster. This operating system boots from a CD on the master node and then clusters other computers as slave nodes via Ethernet connections. MOLA is an ideal virtual screening tool for non-experienced users, with a limited number of multi-platform heterogeneous computers available and no access to dedicated Linux computer clusters. When a virtual screening project finishes, the computers can simply be restarted to their original operating system. The originality of MOLA lies in the fact that any platform-independent computer available can be added to the cluster, without ever using the computer's hard-disk drive and without interfering with the installed operating system. With a cluster of 10 processors, and a potential maximum speed-up of 10×, the parallel algorithm of MOLA performed with a speed-up of 8.64× using AutoDock4 and 8.60× using Vina.
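    The result-analysis step, ranking ligands by binding energy and distance to the active site, reduces to a short sort; the CSV column names below are assumptions, not MOLA's actual output format.

      # Rank docked ligands: lowest (most negative) binding energy first,
      # ties broken by distance to the active site.
      import csv

      def rank_ligands(results_csv):
          with open(results_csv) as f:
              rows = list(csv.DictReader(f))   # columns: ligand, energy, distance
          return sorted(rows, key=lambda r: (float(r["energy"]),
                                             float(r["distance"])))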

  32. Data General Corporation Advanced Operating System/Virtual Storage (AOS/VS). Revision 7.60

    DTIC Science & Technology

    1989-02-22

    control list for each directory and data file. An access control list includes the users who can and cannot access files as well as the access...and any required data, it can operate asynchronously and in parallel...memory. The IOC can perform the data transfer without further intervention from the CPU. The I/O channels interface with the processor or system

  33. 77 FR 9266 - Notice Pursuant to the National Cooperative Research and Production Act of 1993-Interchangeable...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-16

    ... Production Act of 1993--Interchangeable Virtual Instruments Foundation, Inc. Notice is hereby given that, on..., 15 U.S.C. 4301 et seq. (``the Act''), Interchangeable Virtual Instruments Foundation, Inc. has filed... Interchangeable Virtual Instruments Foundation, Inc. intends to file additional written notifications disclosing...

  34. 75 FR 28294 - Notice Pursuant to the National Cooperative Research and Production Act of 1993-Interchangeable...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-20

    ... Production Act of 1993--Interchangeable Virtual Instruments Foundation, Inc. Notice is hereby given that, on..., 15 U.S.C. 4301 et seq. (``the Act''), Interchangeable Virtual Instruments Foundation, Inc. has filed... remains open, and Interchangeable Virtual Instruments Foundation, Inc. intends to file additional written...

  35. PVFS 2000: An operational parallel file system for Beowulf

    NASA Technical Reports Server (NTRS)

    Ligon, Walt

    2004-01-01

    The approach has been to develop Parallel Virtual File System version 2 (PVFS2), retaining the basic philosophy of the original file system but completely rewriting the code. The server and client components are built on two abstraction layers. BMI is the network abstraction layer. It is designed with a common driver and modules for each protocol supported. The interface is non-blocking and provides mechanisms for optimizations, including pinning user buffers. Currently, TCP/IP and GM (Myrinet) modules have been implemented. Trove is the storage abstraction layer. It provides for storing both data spaces and name/value pairs. Trove can also be implemented using different underlying storage mechanisms, including native files, raw disk partitions, SQL and other databases. The current implementation uses native files for data spaces and Berkeley DB for name/value pairs.

  36. Arctic Boreal Vulnerability Experiment (ABoVE) Science Cloud

    NASA Astrophysics Data System (ADS)

    Duffy, D.; Schnase, J. L.; McInerney, M.; Webster, W. P.; Sinno, S.; Thompson, J. H.; Griffith, P. C.; Hoy, E.; Carroll, M.

    2014-12-01

    The effects of climate change are being revealed at alarming rates in the Arctic and Boreal regions of the planet. NASA's Terrestrial Ecology Program has launched a major field campaign to study these effects over the next 5 to 8 years. The Arctic Boreal Vulnerability Experiment (ABoVE) will challenge scientists to take measurements in the field, study remote observations, and even run models to better understand the impacts of a rapidly changing climate for areas of Alaska and western Canada. The NASA Center for Climate Simulation (NCCS) at the Goddard Space Flight Center (GSFC) has partnered with the Terrestrial Ecology Program to create a science cloud designed for this field campaign - the ABoVE Science Cloud. The cloud combines traditional high performance computing with emerging technologies to create an environment specifically designed for large-scale climate analytics. The ABoVE Science Cloud utilizes (1) virtualized high-speed InfiniBand networks, (2) a combination of high-performance file systems and object storage, and (3) virtual system environments tailored for data intensive, science applications. At the center of the architecture is a large object storage environment, much like a traditional high-performance file system, that supports data proximal processing using technologies like MapReduce on a Hadoop Distributed File System (HDFS). Surrounding the storage is a cloud of high performance compute resources with many processing cores and large memory coupled to the storage through an InfiniBand network. Virtual systems can be tailored to a specific scientist and provisioned on the compute resources with extremely high-speed network connectivity to the storage and to other virtual systems. In this talk, we will present the architectural components of the science cloud and examples of how it is being used to meet the needs of the ABoVE campaign. In our experience, the science cloud approach significantly lowers the barriers and risks to organizations that require high performance computing solutions and provides the NCCS with the agility required to meet our customers' rapidly increasing and evolving requirements.

  37. Prospects for Evidence-Based Software Assurance: Models and Analysis

    DTIC Science & Technology

    2015-09-01

    virtual machine is much lighter than the workstation. The virtual machine doesn’t need to run anti-virus, firewalls, intrusion prevention systems... [34] Maiorca, D., Corona, I., and Giacinto, G. Looking at the bag is not enough to find the bomb: An evasion of structural methods for malicious PDF...CCS ’13, ACM, pp. 119–130. [35] Maiorca, D., Giacinto, G., and Corona, I. A pattern recognition system for malicious PDF files detection. In

  38. Derived virtual devices: a secure distributed file system mechanism

    NASA Technical Reports Server (NTRS)

    VanMeter, Rodney; Hotz, Steve; Finn, Gregory

    1996-01-01

    This paper presents the design of derived virtual devices (DVDs). DVDs are the mechanism used by the Netstation Project to provide secure shared access to network-attached peripherals distributed in an untrusted network environment. DVDs improve Input/Output efficiency by allowing user processes to perform I/O operations directly from devices without intermediate transfer through the controlling operating system kernel. The security enforced at the device through the DVD mechanism includes resource boundary checking, user authentication, and restricted operations, e.g., read-only access. To illustrate the application of DVDs, we present the interactions between a network-attached disk and a file system designed to exploit the DVD abstraction. We further discuss third-party transfer as a mechanism intended to provide for efficient data transfer in a typical NAP environment. We show how DVDs facilitate third-party transfer, and provide the security required in a more open network environment.

  39. The Fluke Security Project

    DTIC Science & Technology

    2000-04-01

    be an extension of Utah’s nascent Quarks system, oriented to closely coupled cluster environments. However, the grant did not actually begin until... Intel x86, implemented ten virtual machine monitors and servers, including a virtual memory manager, a checkpointer, a process manager, a file server...Fluke, we developed a novel hierarchical processor scheduling framework called CPU inheritance scheduling [5]. This is a framework for scheduling

  40. Considerations of persistence and security in CHOICES, an object-oriented operating system

    NASA Technical Reports Server (NTRS)

    Campbell, Roy H.; Madany, Peter W.

    1990-01-01

    The current design of the CHOICES persistent object implementation is summarized, and research in progress is outlined. CHOICES is implemented as an object-oriented system, and persistent objects appear to simplify and unify many functions of the system. It is demonstrated that persistent data can be accessed through an object-oriented file system model as efficiently as by an existing optimized commercial file system. The object-oriented file system can be specialized to provide an object store for persistent objects. The problems that arise in building an efficient persistent object scheme in a 32-bit virtual address space that only uses paging are described. Despite its limitations, the solution presented allows quite large numbers of objects to be active simultaneously, and permits sharing and efficient method calls.

  41. A data distribution strategy for the 1990s (files are not enough)

    NASA Technical Reports Server (NTRS)

    Tankenson, Mike; Wright, Steven

    1993-01-01

    Virtually all of the data distribution strategies being contemplated for the EOSDIS era revolve around the use of files. Most, if not all, mass storage technologies are based around the file model. However, files may be the wrong primary abstraction for supporting scientific users in the 1990s and beyond. Other abstractions more closely matching the respective scientific discipline of the end user may be more appropriate. JPL has built a unique multimission data distribution system based on a strategy of telemetry stream emulation to match the responsibilities of spacecraft team and ground data system operators supporting our nation's suite of planetary probes. The current system, operational since 1989 and the launch of the Magellan spacecraft, is supporting over 200 users at 15 remote sites. This stream-oriented data distribution model can provide important lessons learned to builders of future data systems.

  42. SAM-FS: LSC's New Solaris-Based Storage Management Product

    NASA Technical Reports Server (NTRS)

    Angell, Kent

    1996-01-01

    SAM-FS is a full-featured hierarchical storage management (HSM) product that operates as a file system on Solaris-based machines. The SAM-FS file system provides the user with all of the standard UNIX system utilities and calls, and adds some new commands, i.e., archive, release, stage, sls, sfind, and a family of maintenance commands. The system also offers enhancements such as high-performance virtual disk read and write, control of the disk through an extent array, and the ability to dynamically allocate block size. SAM-FS provides 'archive sets', which are groupings of data to be copied to secondary storage. In practice, as soon as a file is written to disk, SAM-FS will make copies onto secondary media. SAM-FS is a scalable storage management system. The system can manage millions of files per system, though this is limited today by the speed of UNIX and its utilities. In the future, a new search algorithm will be implemented that will remove logical and performance restrictions on the number of files managed.

  43. LabVIEW 2010 Computer Vision Platform Based Virtual Instrument and Its Application for Pitting Corrosion Study.

    PubMed

    Ramos, Rogelio; Zlatev, Roumen; Valdez, Benjamin; Stoytcheva, Margarita; Carrillo, Mónica; García, Juan-Francisco

    2013-01-01

    A virtual instrumentation (VI) system called VI localized corrosion image analyzer (LCIA), based on LabVIEW 2010, was developed, allowing rapid, automatic determination of the number of pits on large corroded specimens, free of subjective error. The VI LCIA synchronously controls the digital microscope image taking and its analysis, finally resulting in a map file containing the coordinates of the zones on the investigated specimen likely to contain pits. Pit area, traverse length, and density are also determined by the VI using binary large object (blob) analysis. The resulting map file can be used further by a scanning vibrating electrode technique (SVET) system for a rapid (one-pass) "true/false" SVET check of the probable zones only, passing through the pit centers and thus avoiding a scan of the entire specimen. A complete SVET scan over the zones already proved "true" can then determine the corrosion rate in any of the zones.
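    The blob-analysis step can be sketched with standard image tooling; the threshold and minimum area below are invented, and the paper's VI does this in LabVIEW rather than Python.

      # Threshold a grayscale micrograph, label connected components (blobs),
      # and report pit centers and areas; the centers feed the SVET scan map.
      import numpy as np
      from scipy import ndimage

      def find_pits(image, thresh=80, min_area=5):
          mask = image < thresh                       # pits appear dark
          labels, n = ndimage.label(mask)             # blob labeling
          idx = list(range(1, n + 1))
          centers = ndimage.center_of_mass(mask, labels, idx)
          areas = ndimage.sum(mask, labels, idx)
          return [(c, a) for c, a in zip(centers, areas) if a >= min_area]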

  44. IBM NJE protocol emulator for VAX/VMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engert, D.E.

    1981-01-01

    Communications software has been written at Argonne National Laboratory to enable a VAX/VMS system to participate as an end-node in a standard IBM network by emulating the Network Job Entry (NJE) protocol. NJE is actually a collection of programs that support job networking for the operating systems used on most large IBM-compatible computers (e.g., VM/370, MVS with JES2 or JES3, SVS, MVT with ASP or HASP). Files received by the VAX can be printed or saved in user-selected disk files. Files sent to the network can be routed to any node in the network for printing, punching, or job submission, as well as to a VM/370 user's virtual reader. Files sent from the VAX are queued and transmitted asynchronously to allow users to perform other work while files are awaiting transmission. No changes are required to the IBM software.

  45. Game-Based Virtual Worlds as Decentralized Virtual Activity Systems

    NASA Astrophysics Data System (ADS)

    Scacchi, Walt

    There is widespread interest in the development and use of decentralized systems and virtual world environments as possible new places for engaging in collaborative work activities. Similarly, there is widespread interest in stimulating new technological innovations that enable people to come together through social networking, file/media sharing, and networked multi-player computer game play. A decentralized virtual activity system (DVAS) is a networked computer supported work/play system whose elements and social activities can be both virtual and decentralized (Scacchi et al. 2008b). Massively multi-player online games (MMOGs) such as World of Warcraft and online virtual worlds such as Second Life are each popular examples of a DVAS. Furthermore, these systems are beginning to be used for research, development, and education activities in different science, technology, and engineering domains (Bainbridge 2007, Bohannon et al. 2009; Rieber 2005; Scacchi and Adams 2007; Shaffer 2006), which are also of interest here. This chapter explores two case studies of DVASs developed at the University of California at Irvine that employ game-based virtual worlds to support collaborative work/play activities in different settings. The settings include those that model and simulate practical or imaginative physical worlds in different domains of science, technology, or engineering through alternative virtual worlds where players/workers engage in different kinds of quests or quest-like workflows (Jakobsson 2006).

  46. Request queues for interactive clients in a shared file system of a parallel computing system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin

    Interactive requests are processed from users of log-in nodes. A metadata server node is provided for use in a file system shared by one or more interactive nodes and one or more batch nodes. The interactive nodes comprise interactive clients to execute interactive tasks and the batch nodes execute batch jobs for one or more batch clients. The metadata server node comprises a virtual machine monitor; an interactive client proxy to store metadata requests from the interactive clients in an interactive client queue; a batch client proxy to store metadata requests from the batch clients in a batch client queue; and a metadata server to store the metadata requests from the interactive client queue and the batch client queue in a metadata queue based on an allocation of resources by the virtual machine monitor. The metadata requests can be prioritized, for example, based on one or more of a predefined policy and predefined rules.
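    The two-queue arrangement can be illustrated with a toy scheduler; the 3:1 interactive-to-batch weighting is an invented stand-in for the monitor's resource allocation, not the patent's policy.

      # Drain the interactive and batch proxy queues into the single metadata
      # queue, favoring interactive requests so log-in users stay responsive.
      from collections import deque

      def schedule(interactive, batch, interactive_share=3):
          metadata_queue = deque()
          while interactive or batch:
              for _ in range(interactive_share):
                  if interactive:
                      metadata_queue.append(interactive.popleft())
              if batch:
                  metadata_queue.append(batch.popleft())
          return metadata_queue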

  47. KNBD: A Remote Kernel Block Server for Linux

    NASA Technical Reports Server (NTRS)

    Becker, Jeff

    1999-01-01

    I am developing a prototype of a Linux remote disk block server whose purpose is to serve as a lower level component of a parallel file system. Parallel file systems are an important component of high performance supercomputers and clusters. Although supercomputer vendors such as SGI and IBM have their own custom solutions, there has been a void and hence a demand for such a system on Beowulf-type PC Clusters. Recently, the Parallel Virtual File System (PVFS) project at Clemson University has begun to address this need (1). Although their system provides much of the functionality of (and indeed was inspired by) the equivalent file systems in the commercial supercomputer market, their system is all in user-space. Migrating their I/O services to the kernel could provide a performance boost, by obviating the need for expensive system calls. Thanks to Pavel Machek, the Linux kernel has provided the network block device (2) with kernels 2.1.101 and later. You can configure this block device to redirect reads and writes to a remote machine's disk. This can be used as a building block for constructing a striped file system across several nodes.
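    The request/response loop of such a block server is tiny; the sketch below uses an invented wire format (the real network block device protocol differs) and assumes each 16-byte header arrives whole.

      # Toy remote block server: the client sends (offset, length) and gets raw
      # bytes from a backing file, the building block for a striped file system.
      import socket, struct

      def serve(backing_file, port=2000):
          with open(backing_file, "rb") as disk, \
               socket.create_server(("", port)) as srv:
              conn, _ = srv.accept()
              with conn:
                  while header := conn.recv(16):
                      offset, length = struct.unpack("!QQ", header)
                      disk.seek(offset)
                      conn.sendall(disk.read(length))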

  48. Virtualizing Resources for the Application Services and Framework Team

    NASA Technical Reports Server (NTRS)

    Varner, Justin T.; Crawford, Linda K.

    2010-01-01

    Virtualization is an emerging technology that will undoubtedly have a major impact on the future of Information Technology. It allows for the centralization of resources in an enterprise system without the need to make any changes to the host operating system, file system, or registry. In turn, this significantly reduces cost and administration, and provides a much greater level of security, compatibility, and efficiency. This experiment examined the practicality, methodology, challenges, and benefits of implementing the technology for the Launch Control System (LCS), and more specifically the Application Services (AS) group of the National Aeronautics and Space Administration (NASA) at the Kennedy Space Center (KSC). In order to carry out this experiment, I used several tools from the virtualization company known as VMware; these programs included VMware ThinApp, VMware Workstation, and VMware ACE. Used in conjunction, these utilities provided the engine necessary to virtualize and deploy applications in a desktop environment on any Windows platform available. The results clearly show that virtualization is a viable technology that can, when implemented properly, dramatically cut costs, enhance stability and security, and provide easier management for administrators.

  49. New Web Server - the Java Version of Tempest - Produced

    NASA Technical Reports Server (NTRS)

    York, David W.; Ponyik, Joseph G.

    2000-01-01

    A new software design and development effort has produced a Java (Sun Microsystems, Inc.) version of the award-winning Tempest software (refs. 1 and 2). In 1999, the Embedded Web Technology (EWT) team received a prestigious R&D 100 Award for Tempest, Java Version. In this article, "Tempest" will refer to the Java version of Tempest, a World Wide Web server for desktop or embedded systems. Tempest was designed at the NASA Glenn Research Center at Lewis Field to run on any platform for which a Java Virtual Machine (JVM, Sun Microsystems, Inc.) exists. The JVM acts as a translator between the native code of the platform and the byte code of Tempest, which is compiled in Java. These byte code files are Java executables with a ".class" extension. Multiple byte code files can be zipped together as a "*.jar" file for more efficient transmission over the Internet. Today's popular browsers, such as Netscape (Netscape Communications Corporation) and Internet Explorer (Microsoft Corporation) have built-in Virtual Machines to display Java applets.

  10. From stereoscopic recording to virtual reality headsets: Designing a new way to learn surgery.

    PubMed

    Ros, M; Trives, J-V; Lonjon, N

    2017-03-01

    To improve surgical practice, there are several different approaches to simulation. Thanks to wearable technologies, recording 3D movies is now easy. The development of virtual reality headsets suggests a different way of watching these videos: using dedicated software to increase interactivity in a 3D immersive experience. The objective was to record 3D movies from the main surgeon's perspective, to watch the files using virtual reality headsets, and to validate the pedagogic interest of the approach. Surgical procedures were recorded using a system combining two side-by-side cameras placed on a helmet. We added two LEDs just below the cameras to enhance luminosity. Two files were obtained in mp4 format and edited using dedicated software to create 3D movies. The files obtained were then played using a virtual reality headset. Surgeons who tried the immersive experience completed a questionnaire to evaluate the interest of this procedure for surgical learning. Twenty surgical procedures were recorded. The movies capture a scene extending 180° horizontally and 90° vertically. The immersive experience created by the device conveys a genuine feeling of being in the operating room and seeing the procedure first-hand through the eyes of the main surgeon. All surgeons indicated that they believe in the pedagogical interest of this method. We succeeded in recording the main surgeon's point of view in 3D and watching it on a virtual reality headset. This new approach enhances the understanding of surgery; most of the surgeons appreciated its pedagogic value. This method could be an effective learning tool in the future. Copyright © 2016. Published by Elsevier Masson SAS.

  11. AliEn—ALICE environment on the GRID

    NASA Astrophysics Data System (ADS)

    Saiz, P.; Aphecetche, L.; Bunčić, P.; Piskač, R.; Revsbech, J.-E.; Šego, V.; Alice Collaboration

    2003-04-01

    AliEn ( http://alien.cern.ch) (ALICE Environment) is a Grid framework built on top of the latest Internet standards for information exchange and authentication (SOAP, PKI) and common Open Source components. AliEn provides a virtual file catalogue that allows transparent access to distributed datasets and a number of collaborating Web services which implement the authentication, job execution, file transport, performance monitor and event logging. In the paper we will present the architecture and components of the system.

  12. Factors to keep in mind when introducing virtual microscopy.

    PubMed

    Glatz-Krieger, Katharina; Spornitz, Udo; Spatz, Alain; Mihatsch, Michael J; Glatz, Dieter

    2006-03-01

    Digitization of glass slides and delivery of so-called virtual slides (VS) emulating a real microscope over the Internet have become reality due to recent improvements in technology. We have implemented a virtual microscope for instruction of medical students and for continuing medical education. Up to 30,000 images per slide are captured using a microscope with an automated stage. The images are post-processed and then served by a plain hypertext transfer protocol (http) server. A virtual slide client (vMic) based on Macromedia's Flash MX, a widely adopted technology available in every modern Web browser, has been developed. All necessary virtual slide parameters are stored in an XML file together with the image. Evaluation of the courses by questionnaire indicated that most students and many, but not all, pathologists regard virtual slides as an adequate replacement for traditional slides. All our virtual slides are publicly accessible over the World Wide Web (WWW) at http://vmic.unibas.ch . Recently, several commercially available virtual slide acquisition systems (VSAS) have been developed that use various technologies to acquire and distribute virtual slides. These systems differ in speed, image quality, compatibility, viewer functionality and price. This paper gives an overview of the factors to keep in mind when introducing virtual microscopy.
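
    The vMic parameter schema is not given in the abstract, so the element names in the short sketch below are invented purely to illustrate how a viewer might read per-slide parameters from such an XML file:

        # Hypothetical virtual-slide descriptor; element names are invented
        # for illustration and are not the actual vMic schema.
        import xml.etree.ElementTree as ET

        XML = """<slide>
          <tilesX>170</tilesX>
          <tilesY>176</tilesY>
          <tileSize>256</tileSize>
          <magnification>40</magnification>
        </slide>"""

        root = ET.fromstring(XML)
        params = {
            "tiles_x": int(root.findtext("tilesX")),
            "tiles_y": int(root.findtext("tilesY")),
            "tile_size": int(root.findtext("tileSize")),
            "magnification": float(root.findtext("magnification")),
        }
        print(params)   # 170 x 176 tiles is on the order of the ~30,000 images per slide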

  13. DOVIS 2.0: an efficient and easy to use parallel virtual screening tool based on AutoDock 4.0.

    PubMed

    Jiang, Xiaohui; Kumar, Kamal; Hu, Xin; Wallqvist, Anders; Reifman, Jaques

    2008-09-08

    Small-molecule docking is an important tool in studying receptor-ligand interactions and in identifying potential drug candidates. Previously, we developed a software tool (DOVIS) to perform large-scale virtual screening of small molecules in parallel on Linux clusters, using AutoDock 3.05 as the docking engine. DOVIS enables the seamless screening of millions of compounds on high-performance computing platforms. In this paper, we report significant advances in the software implementation of DOVIS 2.0, including enhanced screening capability, improved file system efficiency, and extended usability. To keep DOVIS up-to-date, we upgraded the software's docking engine to the more accurate AutoDock 4.0 code. We developed a new parallelization scheme to improve runtime efficiency and modified the AutoDock code to reduce excessive file operations during large-scale virtual screening jobs. We also implemented an algorithm to output docked ligands in an industry standard format, sd-file format, which can be easily interfaced with other modeling programs. Finally, we constructed a wrapper-script interface to enable automatic rescoring of docked ligands by arbitrarily selected third-party scoring programs. The significance of the new DOVIS 2.0 software compared with the previous version lies in its improved performance and usability. The new version makes the computation highly efficient by automating load balancing, significantly reducing excessive file operations by more than 95%, providing outputs that conform to industry standard sd-file format, and providing a general wrapper-script interface for rescoring of docked ligands. The new DOVIS 2.0 package is freely available to the public under the GNU General Public License.
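
    The wrapper-script interface itself is not reproduced in the abstract; the following is only a schematic of the rescoring idea, splitting the multi-record sd file and handing each docked ligand to a placeholder third-party scorer called my_scorer:

        # Schematic rescoring wrapper: "my_scorer" is a placeholder for an
        # arbitrary third-party scoring program, not part of DOVIS.
        import subprocess

        def iter_sd_records(path):
            """Yield one sd-file record at a time (records end with '$$$$')."""
            record = []
            with open(path) as f:
                for line in f:
                    record.append(line)
                    if line.strip() == "$$$$":
                        yield "".join(record)
                        record = []

        for i, ligand in enumerate(iter_sd_records("docked.sdf")):
            name = f"ligand_{i}.sdf"
            with open(name, "w") as out:
                out.write(ligand)
            subprocess.run(["my_scorer", name], check=True)  # rescore this pose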

  14. LabVIEW 2010 Computer Vision Platform Based Virtual Instrument and Its Application for Pitting Corrosion Study

    PubMed Central

    Ramos, Rogelio; Zlatev, Roumen; Valdez, Benjamin; Stoytcheva, Margarita; Carrillo, Mónica; García, Juan-Francisco

    2013-01-01

    A virtual instrumentation (VI) system called VI localized corrosion image analyzer (LCIA), based on LabVIEW 2010, was developed, allowing rapid, automatic determination of the number of pits on large corroded specimens, free of subjective error. The VI LCIA synchronously controls the digital microscope image acquisition and its analysis, finally producing a map file containing the coordinates of the zones on the investigated specimen that probably contain pits. The pit area, traverse length, and density are also determined by the VI using binary large object (blob) analysis. The resulting map file can be used further by a scanning vibrating electrode technique (SVET) system for a rapid (one-pass) “true/false” SVET check of only the probable zones, passing through the pit centers and thus avoiding a scan of the entire specimen. A complete SVET scan over the zones already proved “true” could then determine the corrosion rate in any of the zones. PMID:23691434
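
    The original analyzer is a LabVIEW instrument; as a language-neutral illustration of the blob-analysis step, the sketch below thresholds a synthetic grayscale image, labels connected dark regions as candidate pits, and writes their centers and areas to a map file (the threshold and file names are assumed):

        # Synthetic "corroded specimen": dark pits on a brighter background.
        import numpy as np
        from scipy import ndimage

        image = np.full((200, 200), 200, dtype=np.uint8)
        image[50:55, 60:65] = 30        # pit 1
        image[120:128, 140:150] = 25    # pit 2

        pits = image < 100                                   # assumed threshold
        labels, count = ndimage.label(pits)                  # blob labelling
        centers = ndimage.center_of_mass(pits, labels, range(1, count + 1))
        areas = ndimage.sum(pits, labels, range(1, count + 1))

        with open("pit_map.txt", "w") as mapfile:            # map of probable pit zones
            for (row, col), area in zip(centers, areas):
                mapfile.write(f"{col:.1f} {row:.1f} {int(area)}\n")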

  15. BioImg.org: A Catalog of Virtual Machine Images for the Life Sciences

    PubMed Central

    Dahlö, Martin; Haziza, Frédéric; Kallio, Aleksi; Korpelainen, Eija; Bongcam-Rudloff, Erik; Spjuth, Ola

    2015-01-01

    Virtualization is becoming increasingly important in bioscience, enabling assembly and provisioning of complete computer setups, including operating system, data, software, and services packaged as virtual machine images (VMIs). We present an open catalog of VMIs for the life sciences, where scientists can share information about images and optionally upload them to a server equipped with a large file system and fast Internet connection. Other scientists can then search for and download images that can be run on the local computer or in a cloud computing environment, providing easy access to bioinformatics environments. We also describe applications where VMIs aid life science research, including distributing tools and data, supporting reproducible analysis, and facilitating education. BioImg.org is freely available at: https://bioimg.org. PMID:26401099

  16. Design and implementation of a reliable and cost-effective cloud computing infrastructure: the INFN Napoli experience

    NASA Astrophysics Data System (ADS)

    Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.

    2012-12-01

    Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost-effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and its ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of two main subsystems: a clustered storage solution, built on top of disk servers running the GlusterFS file system, and a virtual machine execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, thereby providing live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration, and automated restart in case of hypervisor failures.

  17. Barriers to success: physical separation optimizes event-file retrieval in shared workspaces.

    PubMed

    Klempova, Bibiana; Liepelt, Roman

    2017-07-08

    Sharing tasks with other persons can simplify our work and life, but seeing and hearing other people's actions may also be very distracting. The joint Simon effect (JSE) is a standard measure of referential response coding when two persons share a Simon task. Sequential modulations of the joint Simon effect (smJSE) are interpreted as a measure of event-file processing containing stimulus information, response information, and information about the currently relevant control state in a given social situation. This study tested effects of physical (Experiment 1) and virtual (Experiment 2) separation of shared workspaces on referential coding and event-file processing using a joint Simon task. In Experiment 1, participants performed this task in individual (go-nogo), joint, and standard Simon task conditions with and without a transparent curtain (physical separation) placed along the imagined vertical midline of the monitor. In Experiment 2, participants performed the same tasks with and without background music (virtual separation). For response times, physical separation enhanced event-file retrieval, indicated by an enlarged smJSE in the joint Simon task with the curtain compared to without it (Experiment 1), but did not change referential response coding. In line with this, we also found evidence for enhanced event-file processing through physical separation in the joint Simon task for error rates. Virtual separation affected neither event-file processing nor referential coding, but generally slowed down response times in the joint Simon task. For errors, virtual separation hampered event-file processing in the joint Simon task. For the cognitively more demanding standard two-choice Simon task, we found music to have a degrading effect on event-file retrieval for response times. Our findings suggest that adding a physical separation optimizes event-file processing in shared workspaces, while music seems to lead to a more relaxed task-processing mode under shared task conditions. In addition, music had an interfering impact on joint error processing and, more generally, when dealing with a more complex task in isolation.

  18. Kinematic/Dynamic Characteristics for Visual and Kinesthetic Virtual Environments

    NASA Technical Reports Server (NTRS)

    Bortolussi, Michael R. (Compiler); Adelstein, B. D.; Gold, Miriam

    1996-01-01

    Work was carried out on two topics of principal importance to current progress in virtual environment research at NASA Ames and elsewhere. The first topic was directed at maximizing the temporal dynamic response of visually presented Virtual Environments (VEs) through reorganization and optimization of system hardware and software. The final result of this portion of the work was a VE system in the Advanced Display and Spatial Perception Laboratory at NASA Ames capable of updating at 60 Hz (the maximum hardware refresh rate) with latencies approaching 30 msec. In the course of achieving this system performance, specialized hardware and software tools for measurement of VE latency, and analytic models correlating update rate and latency for different system configurations, were developed. The second area of activity was the preliminary development and analysis of a novel kinematic architecture for three Degree-Of-Freedom (DOF) haptic interfaces--devices that provide force feedback for manipulative interaction with virtual and remote environments. An invention disclosure was filed on this work and a patent application is being pursued by NASA Ames. Activities in these two areas are expanded upon below.

  1. Cloud services for the Fermilab scientific stakeholders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timm, S.; Garzoglio, G.; Mhashilkar, P.

    As part of the Fermilab/KISTI cooperative research project, Fermilab has successfully run an experimental simulation workflow at scale on a federation of Amazon Web Services (AWS), FermiCloud, and local FermiGrid resources. We used the CernVM-FS (CVMFS) file system to deliver the application software. We established Squid caching servers in AWS as well, using the Shoal system to let each individual virtual machine find the closest squid server. We also developed an automatic virtual machine conversion system so that we could transition virtual machines made on FermiCloud to Amazon Web Services. We used this system to successfully run a cosmic ray simulation of the NOvA detector at Fermilab, making use of both AWS spot pricing and network bandwidth discounts to minimize the cost. On FermiCloud we also were able to run the workflow at the scale of 1000 virtual machines, using a private network routable inside of Fermilab. We present in detail the technological improvements that were used to make this work a reality.

  2. A virtual data language and system for scientific workflow management in data grid environments

    NASA Astrophysics Data System (ADS)

    Zhao, Yong

    With advances in scientific instrumentation and simulation, scientific data is growing fast in both size and analysis complexity. So-called Data Grids aim to provide high-performance, distributed data analysis infrastructure for data-intensive sciences, where scientists distributed worldwide need to extract information from large collections of data, and to share both data products and the resources needed to produce and store them. However, the description, composition, and execution of even logically simple scientific workflows are often complicated by the need to deal with "messy" issues like heterogeneous storage formats and ad-hoc file system structures. We show how these difficulties can be overcome via a typed workflow notation called virtual data language, within which issues of physical representation are cleanly separated from logical typing, and by the implementation of this notation within the context of a powerful virtual data system that supports distributed execution. The resulting language and system are capable of expressing complex workflows in a simple compact form, enacting those workflows in distributed environments, monitoring and recording the execution processes, and tracing the derivation history of data products. We describe the motivation, design, implementation, and evaluation of the virtual data language and system, and the application of the virtual data paradigm in various science disciplines, including astronomy and cognitive neuroscience.

  3. Three Dimensional (3D) Printing: A Straightforward, User-Friendly Protocol to Convert Virtual Chemical Models to Real-Life Objects

    ERIC Educational Resources Information Center

    Rossi, Sergio; Benaglia, Maurizio; Brenna, Davide; Porta, Riccardo; Orlandi, Manuel

    2015-01-01

    A simple procedure to convert protein data bank files (.pdb) into a stereolithography file (.stl) using the VMD software (Visual Molecular Dynamics) is reported. This tutorial allows generating, with a very simple protocol, three-dimensional customized structures that can be printed by a low-cost 3D printer and used for teaching chemical education…

  4. Virtual reality system for treatment of the fear of public speaking using image-based rendering and moving pictures.

    PubMed

    Lee, Jae M; Ku, Jeong H; Jang, Dong P; Kim, Dong H; Choi, Young H; Kim, In Y; Kim, Sun I

    2002-06-01

    The fear of speaking is often cited as the world's most common social phobia. The rapid growth of computer technology has enabled us to use virtual reality (VR) for the treatment of the fear of public speaking. Two techniques have been used to construct a virtual environment for the treatment of the fear of public speaking: model-based and movie-based. Virtual audiences and virtual environments made by the model-based technique are unrealistic and unnatural. The movie-based technique has the disadvantage that each virtual audience member cannot be controlled individually, because all virtual audience members are included in one moving picture file. To address this disadvantage, this paper presents a virtual environment made by using image-based rendering (IBR) and chroma keying simultaneously. IBR enables us to make the virtual environment realistic because the images are stitched panoramically from photos taken with a digital camera. And the use of chroma keying allows each virtual audience member to be controlled individually. In addition, a real-time capture technique was applied in constructing the virtual environment to give the subjects more interaction, in that they can talk with a therapist or another subject.
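
    As a toy illustration of the chroma-keying step (the paper's implementation details are not given), the sketch below composites a frame over a background image wherever the frame's pixels look like an assumed green backdrop:

        # Replace an (assumed) green backdrop with background pixels; arrays
        # stand in for real video frames, thresholds are arbitrary.
        import numpy as np

        def chroma_key(frame, background, g_min=150, rb_max=100):
            """Composite frame over background wherever frame looks green."""
            r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
            mask = (g > g_min) & (r < rb_max) & (b < rb_max)
            out = frame.copy()
            out[mask] = background[mask]
            return out

        frame = np.zeros((480, 640, 3), dtype=np.uint8)
        frame[..., 1] = 200                  # a dummy all-green frame
        backdrop = np.full((480, 640, 3), 128, dtype=np.uint8)
        composite = chroma_key(frame, backdrop)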

  5. Implementation of a Landscape Lighting System to Display Images

    NASA Astrophysics Data System (ADS)

    Sun, Gi-Ju; Cho, Sung-Jae; Kim, Chang-Beom; Moon, Cheol-Hong

    The system implemented in this study consists of a PC, a MASTER, SLAVEs, and MODULEs. The PC sets up the various landscape lighting displays, and the image files can be sent to the MASTER through a virtual serial port connected over USB (Universal Serial Bus). The MASTER sends a sync signal to the SLAVE. The SLAVE uses the signal received from the MASTER to drive the landscape lighting display pattern. The video file is saved in NAND Flash memory, and the R, G, B signals are separated using the self-made display signal and sent to the MODULE so that it can display the image.
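
    A minimal sketch of the PC-side upload described above, assuming the pyserial library, a port name of /dev/ttyUSB0, and an arbitrary chunk size (none of these details appear in the abstract):

        # Stream an image file to the MASTER over a virtual (USB) serial port.
        # Port name, baud rate, and chunk size are assumptions for illustration.
        import serial

        with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as port, \
                open("pattern.bmp", "rb") as image:
            while chunk := image.read(256):
                port.write(chunk)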

  6. 78 FR 17431 - Antitrust Division

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-21

    ... Production Act of 1993--Interchangeable Virtual Instruments Foundation, Inc. Notice is hereby given that, on..., 15 U.S.C. 4301 et seq. (``the Act''), Interchangeable Virtual Instruments Foundation, Inc. has filed... in this group research project remains open, and Interchangeable Virtual Instruments Foundation, Inc...

  7. Applications and challenges of digital pathology and whole slide imaging.

    PubMed

    Higgins, C

    2015-07-01

    Virtual microscopy is a method for digitizing images of tissue on glass slides and using a computer to view, navigate, change magnification, focus and mark areas of interest. Virtual microscope systems (also called digital pathology or whole slide imaging systems) offer several advantages for biological scientists who use slides as part of their general, pharmaceutical, biotechnology or clinical research. The systems usually are based on one of two methodologies: area scanning or line scanning. Virtual microscope systems enable automatic sample detection, virtual-Z acquisition and creation of focal maps. Virtual slides are layered with multiple resolutions at each location, including the highest resolution needed to allow more detailed review of specific regions of interest. Scans may be acquired at 2, 10, 20, 40, 60 and 100 × or a combination of magnifications to highlight important detail. Digital microscopy starts when a slide collection is put into an automated or manual scanning system. The original slides are archived, then a server allows users to review multilayer digital images of the captured slides either by a closed network or by the internet. One challenge for adopting the technology is the lack of a universally accepted file format for virtual slides. Additional challenges include maintaining focus in an uneven sample, detecting specimens accurately, maximizing color fidelity with optimal brightness and contrast, optimizing resolution and keeping the images artifact-free. There are several manufacturers in the field and each has not only its own approach to these issues, but also its own image analysis software, which provides many options for users to enhance the speed, quality and accuracy of their process through virtual microscopy. Virtual microscope systems are widely used and are trusted to provide high quality solutions for teleconsultation, education, quality control, archiving, veterinary medicine, research and other fields.

  8. Determining of a robot workspace using the integration of a CAD system with a virtual control system

    NASA Astrophysics Data System (ADS)

    Herbuś, K.; Ociepka, P.

    2016-08-01

    The paper presents a method for determining the workspace of an industrial robot using an approach consisting in the integration of a 3D model of the robot with a virtual control system. The robot model and its work environment, prepared for motion simulation, were created in the “Motion Simulation” module of the Siemens PLM NX software. In the mentioned model, components of the “link” type were created, which map the geometrical form of particular elements of the robot, and components of the “joint” type, which map the way components of the “link” type cooperate. The paper proposes a solution in which the control process of the virtual robot is similar to the control process of a real robot using the manual control panel (teach pendant). For this purpose, the control application “JOINT” was created, which provides manipulation of the virtual robot in accordance with its internal control system. A set of procedures stored in an .xlsx file is the element integrating the 3D robot model, working in the CAD/CAE-class system, with the elaborated control application.
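
    The layout of the .xlsx procedure file is not described in the abstract, so the sketch below assumes a simple one-row-per-step layout (joint name, target angle, speed) purely to illustrate the integration idea:

        # Read robot-motion procedures from an .xlsx file and replay them as
        # joint commands; the workbook layout and column meanings are invented.
        from openpyxl import load_workbook

        workbook = load_workbook("procedures.xlsx", read_only=True)
        for joint, angle, speed in workbook.active.iter_rows(min_row=2, values_only=True):
            # stand-in for the call into the virtual control application
            print(f"move {joint} to {angle} deg at {speed} deg/s")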

  9. Virtual Environment User Interfaces to Support RLV and Space Station Simulations in the ANVIL Virtual Reality Lab

    NASA Technical Reports Server (NTRS)

    Dumas, Joseph D., II

    1998-01-01

    Several virtual reality I/O peripherals were successfully configured and integrated as part of the author's 1997 Summer Faculty Fellowship work. These devices, which were not supported by the developers of VR software packages, use new software drivers and configuration files developed by the author to allow them to be used with simulations developed using those software packages. The successful integration of these devices has added significant capability to the ANVIL lab at MSFC. In addition, the author was able to complete the integration of a networked virtual reality simulation of the Space Shuttle Remote Manipulator System docking Space Station modules which was begun as part of his 1996 Fellowship. The successful integration of this simulation demonstrates the feasibility of using VR technology for ground-based training as well as on-orbit operations.

  10. Prospero - A tool for organizing Internet resources

    NASA Technical Reports Server (NTRS)

    Neuman, B. C.

    1992-01-01

    This article describes Prospero, a distributed file system based on the Virtual System Model. Prospero provides tools to help users organize Internet resources. These tools allow users to construct customized views of available resources, while taking advantage of the structure imposed by others. Prospero provides a framework that can tie together various indexing services producing the fabric on which resource discovery techniques can be applied.

  11. NJE; VAX-VMS IBM NJE network protocol emulator. [DEC VAX11/780; VAX-11 FORTRAN 77 (99%) and MACRO-11 (1%)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engert, D.E.; Raffenetti, C.

    NJE is communications software developed to enable a VAX VMS system to participate as an end-node in a standard IBM network by emulating the Network Job Entry (NJE) protocol. NJE supports job networking for the operating systems used on most large IBM-compatible computers (e.g., VM/370, MVS with JES2 or JES3, SVS, MVT with ASP or HASP). Files received by the VAX can be printed or saved in user-selected disk files. Files sent to the network can be routed to any network node for printing, punching, or job submission, or to a VM/370 user's virtual reader. Files sent from the VAX are queued and transmitted asynchronously. No changes are required to the IBM software. DEC VAX11/780; VAX-11 FORTRAN 77 (99%) and MACRO-11 (1%); VMS 2.5; VAX11/780 with DUP-11 UNIBUS interface and 9600 baud synchronous modem.

  12. 76 FR 35872 - Commission Information Collection Activities (Ferc-603); Comment Request; Submitted for OMB Review

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-20

    .... Critical infrastructure means existing and proposed systems and assets, whether physical or virtual, the....ferc.gov/help/submission-guide.asp . To file the document electronically, access the Commission's Web... using the ``eLibrary'' link. For user assistance, contact [email protected] or toll-free at...

  13. LHCb experience with running jobs in virtual machines

    NASA Astrophysics Data System (ADS)

    McNab, A.; Stagni, F.; Luzzi, C.

    2015-12-01

    The LHCb experiment has been running production jobs in virtual machines since 2013 as part of its DIRAC-based infrastructure. We describe the architecture of these virtual machines and the steps taken to replicate the WLCG worker node environment expected by user and production jobs. This relies on the uCernVM system for providing root images for virtual machines. We use the CernVM-FS distributed filesystem to supply the root partition files, the LHCb software stack, and the bootstrapping scripts necessary to configure the virtual machines for us. Using this approach, we have been able to minimise the amount of contextualisation which must be provided by the virtual machine managers. We explain the process by which the virtual machine is able to receive payload jobs submitted to DIRAC by users and production managers, and how this differs from payloads executed within conventional DIRAC pilot jobs on batch queue based sites. We describe our operational experiences in running production on VM based sites managed using Vcycle/OpenStack, Vac, and HTCondor Vacuum. Finally we show how our use of these resources is monitored using Ganglia and DIRAC.

  14. BOREAS AFM-06 Mean Temperature Profile Data

    NASA Technical Reports Server (NTRS)

    Wilczak, James; Hall, Forrest G. (Editor); Newcomer, Jeffrey A. (Editor); Smith, David E. (Technical Monitor)

    2000-01-01

    The Boreal Ecosystem-Atmosphere Study (BOREAS) Airborne Fluxes and Meteorology (AFM)-06 team from the National Oceanic and Atmospheric Administration/Environmental Technology Laboratory (NOAA/ETL) operated a 915-MHz wind/Radio Acoustic Sounding System (RASS) profiler system in the Southern Study Area (SSA) near the Old Jack Pine (OJP) tower from 21 May 1994 to 20 Sep 1994. The data set provides temperature profiles at 15 heights, containing the variables of virtual temperature, vertical velocity, the speed of sound, and w-bar. The data are stored in tabular ASCII files. The mean temperature profile data are available from the Earth Observing System Data and Information System (EOSDIS) Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC). The data files are available on a CD-ROM (see document number 20010000884).

  15. Status and Roadmap of CernVM

    NASA Astrophysics Data System (ADS)

    Berzano, D.; Blomer, J.; Buncic, P.; Charalampidis, I.; Ganis, G.; Meusel, R.

    2015-12-01

    Cloud resources nowadays contribute an essential share of the resources for computing in high-energy physics. Such resources can be provided either by private or public IaaS clouds (e.g. OpenStack, Amazon EC2, Google Compute Engine) or by volunteers' computers (e.g. LHC@Home 2.0). In any case, experiments need to prepare a virtual machine image that provides the execution environment for the physics application at hand. The CernVM virtual machine since version 3 is a minimal and versatile virtual machine image capable of booting different operating systems. The virtual machine image is less than 20 megabytes in size. The actual operating system is delivered on demand by the CernVM File System. CernVM 3 has matured from a prototype to a production environment. It is used, for instance, to run LHC applications in the cloud, to tune event generators using a network of volunteer computers, and as a container for the historic Scientific Linux 5 and Scientific Linux 4 based software environments in the course of the long-term data preservation efforts of the ALICE, CMS, and ALEPH experiments. We present experience and lessons learned from the use of CernVM at scale. We also provide an outlook on upcoming developments. These include adding support for Scientific Linux 7, the use of container virtualization, such as that provided by Docker, and the streamlining of virtual machine contextualization towards the cloud-init industry standard.

  16. Virtual Machine Language

    NASA Technical Reports Server (NTRS)

    Grasso, Christopher; Page, Dennis; O'Reilly, Taifun; Fteichert, Ralph; Lock, Patricia; Lin, Imin; Naviaux, Keith; Sisino, John

    2005-01-01

    Virtual Machine Language (VML) is a mission-independent, reusable software system for programming spacecraft operations. Features of VML include a rich set of data types, named functions, parameters, IF and WHILE control structures, polymorphism, and on-the-fly creation of spacecraft commands from calculated values. Spacecraft functions can be abstracted into named blocks that reside in files aboard the spacecraft. These named blocks accept parameters and execute in a repeatable fashion. The sizes of uplink products are minimized by the ability to call blocks that implement most of the command steps. This block approach also enables some autonomous operations aboard the spacecraft, such as aerobraking, telemetry conditional monitoring, and anomaly response, without developing autonomous flight software. Operators on the ground write blocks and command sequences in a concise, high-level, human-readable programming language (also called VML). A compiler translates the human-readable blocks and command sequences into binary files (the operations products). The flight portion of VML interprets the uplinked binary files. The ground subsystem of VML also includes an interactive sequence-execution tool hosted on workstations, which runs sequences at several thousand times real-time speed, affords debugging, and generates reports. This tool enables iterative development of blocks and sequences within times of the order of seconds.

  17. Management of scientific information with Google Drive.

    PubMed

    Kubaszewski, Łukasz; Kaczmarczyk, Jacek; Nowakowski, Andrzej

    2013-09-20

    The amount and diversity of scientific publications requires a modern management system. By "management" we mean the process of gathering interesting information for the purpose of reading and archiving it for quick access in future clinical practice and research activity. In the past, such a system required the physical existence of a library, either institutional or private. Nowadays, in an era dominated by electronic information, it is natural to migrate entire systems to a digital form. In the following paper we describe the structure and functions of an individual electronic library system (IELiS) for the management of scientific publications based on the Google Drive service. Architecture of the system: the system consists of a central element and peripheral devices. The central element of the system is the virtual Google Drive storage provided by Google Inc. Physical elements of the system include a tablet with the Android operating system and a personal computer, both with Internet access. Required software includes a program to view and edit files in PDF format for mobile devices and another to synchronize the files. Functioning of the system: the first step in creating the system is the collection of scientific papers in PDF format and their analysis. This step is performed most frequently on a tablet. At this stage, after being read, the papers are cataloged in a system of folders and subfolders, according to individual demands. During this stage, but not exclusively, the PDF files are annotated by the reader. This allows the user to quickly track down interesting information in the review or research process. Modification of the document title is performed at this stage as well. The second element of the system is the creation of a mirror database in the Google Drive virtual storage. Modified and cataloged papers are synchronized with Google Drive. At this stage, a fully functional electronic library of scientific information becomes available online. The third element of the system is a periodic two-way synchronization of data between Google Drive and the tablet, as occasional modification of the files with annotations or recataloging may be performed at both locations. The system architecture is designed to gather, catalog, and analyze scientific publications. All steps are electronic, eliminating paper forms. Indexed files are available for re-reading and modification. The system allows fast access with full-text search and additional features that make research easier. Team collaboration is also possible, with full control of user privileges. Particularly important is the safety of the collected data. In our opinion, the system exceeds many commercially available applications in terms of functionality and versatility.

  18. 78 FR 57921 - Patch International, Inc., QuadTech International, Inc., Strategic Resources, Ltd., and Virtual...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-20

    ... SECURITIES AND EXCHANGE COMMISSION [File No. 500-1] Patch International, Inc., QuadTech International, Inc., Strategic Resources, Ltd., and Virtual Medical Centre, Inc.; Order of Suspension of Trading... lack of current and accurate information concerning the securities of Virtual Medical Centre, Inc...

  19. Detecting Hardware-assisted Hypervisor Rootkits within Nested Virtualized Environments

    DTIC Science & Technology

    2012-06-14

    Excerpt from the appendix's VM setup steps: when prompted for memory, allocate at least the minimum required for the guest OS; for 64-bit Windows 7 the minimum required is 2048 MB (Figures 66, 79). Within the virtual disk creation wizard, select VDI for the file type (Figure 81).

  20. Virtual Sonography Through the Internet: Volume Compression Issues

    PubMed Central

    Vilarchao-Cavia, Joseba; Troyano-Luque, Juan-Mario; Clavijo, Matilde

    2001-01-01

    Background: Three-dimensional ultrasound images allow virtual sonography even at a distance. However, the size of the final 3-D files limits their transmission through slow networks such as the Internet. Objective: To analyze compression techniques that transform ultrasound images into small 3-D volumes that can be transmitted through the Internet without loss of relevant medical information. Methods: Samples were selected from ultrasound examinations performed during 1999-2000 in the Obstetrics and Gynecology Department at the University Hospital in La Laguna, Canary Islands, Spain. The conventional ultrasound video output was recorded at 25 fps (frames per second) on a PC, producing 100- to 120-MB files (for 500 to 550 frames). Processing to obtain 3-D images progressively reduced file size. Results: The original frames passed through different compression stages: selecting the region of interest, rendering techniques, and compression for storage. Final 3-D volumes reached 1:25 compression rates (1.5- to 2-MB files). Those volumes need 7 to 8 minutes to be transmitted through the Internet at a mean data throughput of 6.6 Kbytes per second. At the receiving site, virtual sonography is possible using orthogonal projections or oblique cuts. Conclusions: Modern volume-rendering techniques allowed distant virtual sonography through the Internet. This is the result of their efficient data compression, which maintains its attractiveness as a main criterion for distant diagnosis. PMID:11720963

  1. Image fusion in craniofacial virtual reality modeling based on CT and 3dMD photogrammetry.

    PubMed

    Xin, Pengfei; Yu, Hongbo; Cheng, Huanchong; Shen, Shunyao; Shen, Steve G F

    2013-09-01

    The aim of this study was to demonstrate the feasibility of building a craniofacial virtual reality model by image fusion of 3-dimensional (3D) CT models and the 3dMD stereophotogrammetric facial surface. A CT scan and stereophotography were performed. The 3D CT models were reconstructed with Materialise Mimics software, and the stereophotogrammetric facial surface was reconstructed with 3dMD patient software. All 3D CT models were exported in the Stereo Lithography file format, and the 3dMD model was exported in the Virtual Reality Modeling Language file format. Image registration and fusion were performed in Mimics software. A genetic algorithm was used for precise image fusion alignment with minimum error. The 3D CT models and the 3dMD stereophotogrammetric facial surface were finally merged into a single file and displayed using Deep Exploration software. Errors between the CT soft tissue model and the 3dMD facial surface were also analyzed. The virtual model based on CT-3dMD image fusion clearly showed the photorealistic face and bone structures. Image registration errors in the virtual face are mainly located in the bilateral cheeks and eyeballs, where the errors are more than 1.5 mm. However, the image fusion of the whole point cloud sets of CT and 3dMD is acceptable, with a minimum error of less than 1 mm. The ease of use and high reliability of CT-3dMD image fusion allows the 3D virtual head to serve as an accurate, realistic, and widespread tool, and is of great benefit to virtual face modeling.

  2. Autoplot: a Browser for Science Data on the Web

    NASA Astrophysics Data System (ADS)

    Faden, J.; Weigel, R. S.; West, E. E.; Merka, J.

    2008-12-01

    Autoplot (www.autoplot.org) is software for plotting data from many different sources and in many different file formats. Data from CDF, CEF, FITS, NetCDF, and OpenDAP can be plotted, along with many other sources such as ASCII tables and Excel spreadsheets. This is done by adapting these various data formats and APIs to a common data model that borrows from the netCDF and CDF data models. Autoplot uses a web browser metaphor to simplify use. The user specifies a parameter URL, for example a CDF file accessible via http with a parameter name appended, and the file resource is downloaded and the parameter is rendered in a scientifically meaningful way. When data span multiple files, the user can use a file name template in the URL to aggregate (combine) a set of remote files. So the problem of aggregating data across file boundaries is handled on the client side, allowing simple web servers to be used. The das2 graphics library provides rich controls for exploring the data. Scripting is supported through Python, providing not just programmatic control but also the ability to calculate new parameters in a language that will look familiar to IDL and Matlab users. Autoplot is Java-based software and will run on most computers without a burdensome installation process. It can also be used as an applet or as a servlet that serves static images. Autoplot was developed as part of the Virtual Radiation Belt Observatory (ViRBO) project and is also being used for the Virtual Magnetospheric Observatory (VMO). It is expected that this flexible, general-purpose plotting tool will be useful for allowing a data provider to add instant visualization capabilities to a directory of files or for general use in the Virtual Observatory environment.
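
    The file-name-template aggregation can be pictured with the short sketch below; it uses generic Python strftime templates rather than Autoplot's own URI syntax, and the per-file loader is left abstract:

        # Expand a file-name template over a date range and concatenate the
        # per-file arrays; template syntax and loader are illustrative only.
        from datetime import date, timedelta
        import numpy as np

        def aggregate(template, start, stop, load):
            """Combine data from one file per day in [start, stop]."""
            parts = []
            day = start
            while day <= stop:
                parts.append(load(day.strftime(template)))
                day += timedelta(days=1)
            return np.concatenate(parts)

        # e.g. aggregate("data_%Y%m%d.npy", date(2008, 1, 1), date(2008, 1, 7), np.load)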

  3. USCCR: United States Commission on Civil Rights > Home Page

    Science.gov Websites

  4. Forensic Analysis of Window’s(Registered) Virtual Memory Incorporating the System’s Page-File

    DTIC Science & Technology

    2008-12-01

    ...data in a meaningful way. One reason for this is how memory is managed by the operating system. Data belonging to one process can be distributed arbitrarily across...

  5. [Construction of information management-based virtual forest landscape and its application].

    PubMed

    Chen, Chongcheng; Tang, Liyu; Quan, Bing; Li, Jianwei; Shi, Song

    2005-11-01

    Based on an analysis of the contents and technical characteristics of forest visualization modeling at different scales, this paper puts forward the principles and technical systems for constructing an information management-based virtual forest landscape. Combining process modeling with tree geometric structure description, a software method for interactive, parameterized tree modeling was developed, and the corresponding rendering and geometric-element simplification algorithms were delineated to speed up run-time rendering. As a pilot study, geometric model bases for the typical tree categories in Zhangpu County of Fujian Province, southeast China, were established as template files. A Virtual Forest Management System prototype was developed with a GIS component (ArcObjects), the OpenGL graphics environment, and the Visual C++ language, based on forest inventory and remote sensing data. The prototype can be used for roaming between 2D and 3D views, information query and analysis, and virtual, interactive forest growth simulation, and its realism and accuracy can meet the needs of forest resource management. Some typical interfaces of the system and illustrative scene cross-sections of simulated masson pine growth under conditions of competition and thinning are presented.

  6. 76 FR 80405 - Notice Pursuant to the National Cooperative Research and Production Act of 1993-Interchangeable...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-23

    ... Production Act of 1993--Interchangeable Virtual Instruments Foundation, Inc. Notice is hereby given that, on..., 15 U.S.C. 4301 et seq. (``the Act''), Interchangeable Virtual Instruments Foundation, Inc. has filed... in this group research project remains open, and Interchangeable Virtual Instruments Foundation, Inc...

  7. 78 FR 117 - Notice Pursuant to the National Cooperative Research and Production Act of 1993-Interchangeable...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-02

    ... Production Act of 1993--Interchangeable Virtual Instruments Foundation, Inc. Notice is hereby given that, on..., 15 U.S.C. 4301 et seq. (``the Act''), Interchangeable Virtual Instruments Foundation, Inc. has filed... research project. Membership in this group research project remains open, and Interchangeable Virtual...

  8. 76 FR 29267 - Notice Pursuant to the National Cooperative Research and Production Act of 1993-Interchangeable...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-20

    ... Production Act of 1993--Interchangeable Virtual Instruments Foundation, Inc. Notice is hereby given that, on..., 15 U.S.C. 4301 et seq. (``the Act''), Interchangeable Virtual Instruments Foundation, Inc. has filed... in this group research project remains open, and Interchangeable Virtual Instruments Foundation, Inc...

  9. 76 FR 16820 - Notice Pursuant to the National Cooperative Research and Production Act of 1993-Interchangeable...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-25

    ... Production Act of 1993--Interchangeable Virtual Instruments Foundation, Inc. Notice is hereby given that, on..., 15 U.S.C. 4301 et seq. (``the Act''), Interchangeable Virtual Instruments Foundation, Inc. has filed... research project. Membership in this group research project remains open, and Interchangeable Virtual...

  10. 75 FR 54652 - Notice Pursuant to the National Cooperative Research and Production Act of 1993-Interchangeable...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-08

    ... Production Act of 1993--Interchangeable Virtual Instruments Foundation, Inc. Notice is hereby given that, on..., 15 U.S.C. 4301 et seq. (``the Act''), Interchangeable Virtual Instruments Foundation, Inc. has filed... group research project remains open, and Interchangeable Virtual Instruments Foundation, Inc. intends to...

  11. The Infrastructure of an Integrated Virtual Reality Environment for International Space Welding Experiment

    NASA Technical Reports Server (NTRS)

    Wang, Peter Hor-Ching

    1996-01-01

    This study is a continuation of the author's summer research in the 1995 NASA/ASEE Summer Faculty Fellowship Program. The effort is to provide the infrastructure of an integrated Virtual Reality (VR) environment for the International Space Welding Experiment (ISWE) Analytical Tool and Trainer and the Microgravity Science Glovebox (MSG) Analytical Tool study. Due to the unavailability of the MSG CAD files and the 3D-CAD converter, little was done on the MSG study. However, the infrastructure of the integrated VR environment for ISWE is capable of supporting the MSG study when the CAD files become available. Two primary goals were established for this research. First, the essential peripheral devices for an integrated VR environment would be studied and developed for the ISWE and MSG studies. Second, training of the flight crew (astronauts) in general orientation and procedures, and in the location, orientation, and sequencing of the welding samples and tools, would be built into the VR system for studying the welding process and training the astronauts.

  12. In Internet-Based Visualization System Study about Breakthrough Applet Security Restrictions

    NASA Astrophysics Data System (ADS)

    Chen, Jie; Huang, Yan

    In realizing an Internet-based visualization system for protein molecules, the system needs to allow users to observe the molecular structure on their local computer; that is, clients generate the three-dimensional graphics from a PDB file on the client machine. This requires the Applet to access local files, which runs into the Applet security restrictions. This paper covers two realization methods: 1. Use the signature tools, key management tools, and Policy Editor provided by the JDK to digitally sign and authenticate the Java Applet, breaking through certain security restrictions in the browser. 2. Use a Servlet agent to implement indirect data access, breaking through the traditional Java Virtual Machine sandbox model's restriction on Applet capability. Both ways can break through the Applet's security restrictions, but each has its own strengths.

  13. LVFS: A Big Data File Storage Bridge for the HPC Community

    NASA Astrophysics Data System (ADS)

    Golpayegani, N.; Halem, M.; Mauoka, E.; Fonseca, L. F.

    2015-12-01

    Merging Big Data capabilities into High Performance Computing architecture starts at the file storage level. Heterogeneous storage systems are emerging which offer enhanced features for dealing with Big Data, such as the IBM GPFS storage system's integration with Hadoop Map-Reduce. Taking advantage of these capabilities requires file storage systems to be adaptive and to accommodate these new storage technologies. We present the extension of the Lightweight Virtual File System (LVFS), currently running as the production system for the MODIS Level 1 and Atmosphere Archive and Distribution System (LAADS), to incorporate a flexible plugin architecture which allows easy integration of new HPC hardware and/or software storage technologies without disrupting workflows or system architectures, and with only minimal impact on existing tools. We consider two essential aspects provided by the LVFS plugin architecture needed by the future HPC community. First, it allows the seamless integration of new and emerging hardware technologies which are significantly different from existing technologies, such as Seagate's Kinetic disks and Intel's 3D XPoint non-volatile storage. Second is the transparent and instantaneous conversion between new software technologies and various file formats. With most current storage systems, a switch in file format would require costly reprocessing and a near doubling of storage requirements. We will install LVFS on UMBC's IBM iDataPlex cluster with a heterogeneous storage architecture utilizing local, remote, and Seagate Kinetic storage as a case study. LVFS merges different kinds of storage architectures to show users a uniform layout and, therefore, prevents any disruption of workflows, architecture designs, or tool usage. We will show how LVFS converts HDF data into GeoTIFF for visualization; the data are CO2 surface fluxes produced by applying machine learning algorithms to XCO2 Level 2 data from the OCO-2 satellite.
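
    The LVFS plugin API is not spelled out in the abstract, but the flavor of such an architecture can be sketched as a registry mapping storage schemes to interchangeable backends (all names below are invented):

        # Toy plugin registry: new storage backends plug in without changes
        # to the core layer. Class and scheme names are illustrative only.
        HANDLERS = {}

        def register(scheme):
            def decorator(cls):
                HANDLERS[scheme] = cls
                return cls
            return decorator

        @register("local")
        class LocalStore:
            def read(self, path):
                with open(path, "rb") as f:
                    return f.read()

        @register("kinetic")
        class KineticStore:
            def read(self, path):
                # would speak a key-value protocol to a Kinetic disk
                raise NotImplementedError

        def read_object(url):
            scheme, _, path = url.partition("://")
            return HANDLERS[scheme]().read(path)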

  14. DIstributed VIRtual System (DIVIRS) Project

    NASA Technical Reports Server (NTRS)

    Schorr, Herbert; Neuman, B. Clifford; Gaines, Stockton R.; Mizell, David

    1996-01-01

    The development of Prospero moved from the University of Washington to ISI, and several new versions of the software were released from ISI during the contract period. Changes in the first release from ISI included bug fixes and extensions to support the needs of specific users. Among these changes was a new option for directory queries that allows attributes to be returned for all files in a directory together with the directory listing. This change greatly improves the performance of their server and reduces the number of packets sent across their trans-Pacific connection to the rest of the Internet. Several new access methods were added to the Prospero file method. The Prospero Data Access Protocol was designed to support secure retrieval of data from systems running Prospero.

  15. Interfacing the VAX 11/780 Using Berkeley Unix 4.2.BSD and Ethernet Based Xerox Network Systems. Volume 1.

    DTIC Science & Technology

    1984-12-01

    [Table-of-contents excerpt: 3Com Corporation; Ethernet Controller Support; Host Systems Support; Personal Computers Support; VAX EtherSeries Software; Network Research Corporation; File Transfer Service; Virtual Terminal Service.] ...Control office is planning to acquire a Digital Equipment Corporation VAX 11/780 mainframe computer with the Unix Berkeley 4.2BSD operating system. They...

  16. Radio data archiving system

    NASA Astrophysics Data System (ADS)

    Knapic, C.; Zanichelli, A.; Dovgan, E.; Nanni, M.; Stagni, M.; Righini, S.; Sponza, M.; Bedosti, F.; Orlati, A.; Smareglia, R.

    2016-07-01

    Radio astronomical data models are becoming very complex given the huge range of instrumental configurations available with modern radio telescopes. Data formats that in the past marked the frontier of efficiency and flexibility are now evolving, with new strategies and methodologies enabling the persistence of very complex, hierarchical, and multi-purpose information. Such an evolution of data models and data formats requires new data archiving techniques in order to guarantee data preservation, following the directives of the Open Archival Information System and the International Virtual Observatory Alliance for data sharing and publication. Currently, various formats (FITS, MBFITS, VLBI's XML description files, and ancillary files) of data acquired with the Medicina and Noto Radio Telescopes can be stored and handled by a common Radio Archive, which is planned to be released to the (inter)national community by the end of 2016. This state-of-the-art archiving system for radio astronomical data aims at delegating to the software, as much as possible, the decisions of how and where the descriptors (metadata) are saved, while users perform user-friendly queries that the web interface translates into complex interrogations of the database to retrieve data. In this way, the Archive is ready to be Virtual Observatory compliant and as user-friendly as possible.

  17. 78 FR 61941 - Combined Notice of Filings #1

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-07

    .... Applicants: Southwest Power Pool, Inc. Description: Revisions to Day-Ahead Virtual Energy Transaction Fee to...: 20130924-5121. Comments Due: 5 p.m. e.t. 10/15/13. The filings are accessible in the Commission's eLibrary...

  18. Import and visualization of clinical medical imagery into multiuser VR environments

    NASA Astrophysics Data System (ADS)

    Mehrle, Andreas H.; Freysinger, Wolfgang; Kikinis, Ron; Gunkel, Andreas; Kral, Florian

    2005-03-01

    The graphical representation of three-dimensional data obtained from tomographic imaging has been the central problem since this technology became available. Neither the representation as a set of two-dimensional slices nor the 2D projection of three-dimensional models yields satisfactory results. In this paper a way is outlined which permits the investigation of volumetric clinical data obtained from standard CT, MR, PET, SPECT or experimental very-high-resolution CT scanners in a three-dimensional environment within a few work steps. Volumetric datasets are converted into surface data (segmentation process) using the 3D Slicer software tool, saved as .vtk files, and exported as a collection of primitives in any common file format (.iv, .pfb). Subsequently these files can be displayed and manipulated in the CAVE virtual reality center. The CAVE is a walkable multiuser virtual room consisting of several walls onto which stereoscopic images are projected by rear-projection beamers. Adequate tracking of the head position and separate image calculation for each eye yield a vivid impression for one or several users. With a separately tracked 6D joystick, manipulations such as rotation, translation, zooming, decomposition or highlighting can be done intuitively. The CAVE technology opens new possibilities especially in surgical training ("hands-on effect") and as an educational tool (availability of pathological data). Unlike competing technologies, the CAVE permits a walk-through of the virtual scene but preserves enough physical perception to allow interaction between multiple users, e.g. gestures and movements. Training in a virtual environment may, on the one hand, considerably improve the learning process of students studying complex anatomic findings and, on the other hand, allow unaccustomed views such as the one through a microscope or endoscope to be trained in advance. The availability of low-cost PC-based CAVE-like systems and the rapidly decreasing price of high-performance video beamers make the CAVE an affordable alternative to conventional surgical training techniques, without the limitations of handling cadavers.

  19. The use of ZFP lossy floating point data compression in tornado-resolving thunderstorm simulations

    NASA Astrophysics Data System (ADS)

    Orf, L.

    2017-12-01

    In the field of atmospheric science, numerical models are used to produce forecasts of weather and climate and serve as virtual laboratories for scientists studying atmospheric phenomena. In both operational and research arenas, atmospheric simulations exploiting modern supercomputing hardware can produce a tremendous amount of data. During model execution, the transfer of floating point data from memory to the file system is often a significant bottleneck where I/O can dominate wallclock time. One way to reduce the I/O footprint is to compress the floating point data, which reduces the amount of data saved to the file system. In this presentation we introduce LOFS, a file system developed specifically for use in three-dimensional numerical weather models that are run on massively parallel supercomputers. LOFS utilizes the core (in-memory buffered) HDF5 driver and includes compression options such as ZFP, a lossy floating point data compression algorithm. ZFP offers several mechanisms for specifying the amount of lossy compression to be applied to floating point data, including the ability to specify the maximum absolute error allowed in each compressed 3D array. We explore different maximum error tolerances in a tornado-resolving supercell thunderstorm simulation for model variables including cloud and precipitation, temperature, wind velocity and vorticity magnitude. We find that average compression ratios exceeding 20:1 in scientifically interesting regions of the simulation domain produce visually identical results to uncompressed data in visualizations and plots. Since LOFS splits the model domain across many files, compression ratios for a given error tolerance can be compared across different locations within the model domain. We find that regions of high spatial variability (which tend to be where scientifically interesting things are occurring) show the lowest compression ratios, whereas regions of the domain with little spatial variability compress extremely well. We observe that the overhead for compressing data with ZFP is low, and that compressing data in memory reduces the amount of memory needed to store the virtual files before they are flushed to disk.
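
    ZFP's fixed-accuracy mode is directly scriptable; a minimal sketch, assuming the zfpy Python bindings are installed (the array contents and tolerance are illustrative, not values from the study):

        import numpy as np
        import zfpy  # Python bindings for LLNL's ZFP compressor

        # Synthetic 3D field standing in for a model variable such as vorticity.
        field = np.random.default_rng(0).normal(size=(64, 64, 64)).astype(np.float32)

        # Fixed-accuracy mode: bound the maximum absolute error per value,
        # analogous to the per-array error tolerances described above.
        compressed = zfpy.compress_numpy(field, tolerance=1e-3)
        restored = zfpy.decompress_numpy(compressed)

        # Random noise is a worst case: as the abstract notes, regions of
        # high spatial variability compress poorly, smooth regions very well.
        print("ratio: %.1f:1" % (field.nbytes / len(compressed)))
        print("max abs error:", np.abs(field - restored).max())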

  20. Saguaro: a distributed operating system based on pools of servers. Annual report, 1 January 1984-31 December 1986

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, G.R.

    1986-03-03

    Prototypes of components of the Saguaro distributed operating system were implemented, and the design of the entire system was refined based on the experience. The philosophy behind Saguaro is to support the illusion of a single virtual machine while taking advantage of the concurrency and robustness that are possible in a network architecture. Within the system, these advantages are realized by the use of pools of server processes and decentralized allocation protocols. Potential concurrency and robustness are also made available to the user through low-cost mechanisms to control placement of executing commands and files, and to support semi-transparent file replication and access. Another unique aspect of Saguaro is its extensive use of a type system to describe user data such as files and to specify the types of arguments to commands and procedures. This enables the system to assist in type checking and leads to a user interface in which command-specific templates are available to facilitate command invocation. A mechanism, channels, is also provided to enable users to construct applications containing general graphs of communicating processes.

  1. WLCG Transfers Dashboard: a Unified Monitoring Tool for Heterogeneous Data Transfers

    NASA Astrophysics Data System (ADS)

    Andreeva, J.; Beche, A.; Belov, S.; Kadochnikov, I.; Saiz, P.; Tuckett, D.

    2014-06-01

    The Worldwide LHC Computing Grid provides resources for the four main virtual organizations. Along with data processing, data distribution is the key computing activity on the WLCG infrastructure. The scale of this activity is very large, the ATLAS virtual organization (VO) alone generates and distributes more than 40 PB of data in 100 million files per year. Another challenge is the heterogeneity of data transfer technologies. Currently there are two main alternatives for data transfers on the WLCG: File Transfer Service and XRootD protocol. Each LHC VO has its own monitoring system which is limited to the scope of that particular VO. There is a need for a global system which would provide a complete cross-VO and cross-technology picture of all WLCG data transfers. We present a unified monitoring tool - WLCG Transfers Dashboard - where all the VOs and technologies coexist and are monitored together. The scale of the activity and the heterogeneity of the system raise a number of technical challenges. Each technology comes with its own monitoring specificities and some of the VOs use several of these technologies. This paper describes the implementation of the system with particular focus on the design principles applied to ensure the necessary scalability and performance, and to easily integrate any new technology providing additional functionality which might be specific to that technology.

  2. Sensor Webs in Digital Earth

    NASA Astrophysics Data System (ADS)

    Heavner, M. J.; Fatland, D. R.; Moeller, H.; Hood, E.; Schultz, M.

    2007-12-01

    The University of Alaska Southeast is currently implementing a sensor web identified as the SouthEast Alaska MOnitoring Network for Science, Telecommunications, Education, and Research (SEAMONSTER). From power systems and instrumentation through data management, visualization, education, and public outreach, SEAMONSTER is designed with modularity in mind. We are utilizing virtual earth infrastructures to enhance both sensor web management and data access. We will describe how the design philosophy of using open, modular components contributes to the exploration of different virtual earth environments. We will also describe the sensor web physical implementation and how the many components have corresponding virtual earth representations. This presentation will provide an example of the integration of sensor webs into a virtual earth. We suggest that IPY sensor networks and sensor webs may integrate into virtual earth systems and provide an IPY legacy easily accessible to both scientists and the public. SEAMONSTER utilizes geobrowsers for education and public outreach, sensor web management, data dissemination, and enabling collaboration. We generate near-real-time auto-updating geobrowser files of the data. In this presentation we will describe how we have implemented these technologies to date, the lessons learned, and our efforts towards greater OGC standard implementation. A major focus will be on demonstrating how geobrowsers have made this project possible.

  3. Java bioinformatics analysis web services for multiple sequence alignment--JABAWS:MSA.

    PubMed

    Troshin, Peter V; Procter, James B; Barton, Geoffrey J

    2011-07-15

    JABAWS is a web services framework that simplifies the deployment of web services for bioinformatics. JABAWS:MSA provides services for five multiple sequence alignment (MSA) methods (Probcons, T-coffee, Muscle, Mafft and ClustalW), and is the system employed by the Jalview multiple sequence analysis workbench since version 2.6. A fully functional, easy-to-set-up server is provided as a Virtual Appliance (VA), which can be run on most operating systems that support a virtualization environment such as VMware or Oracle VirtualBox. JABAWS is also distributed as a Web Application aRchive (WAR) and can be configured to run on a single computer and/or a cluster managed by Grid Engine, LSF or other queuing systems that support DRMAA. JABAWS:MSA provides clients full access to each application's parameters, and allows administrators to specify named parameter preset combinations and execution limits for each application through simple configuration files. The JABAWS command-line client allows integration of JABAWS services into conventional scripts. JABAWS is made freely available under the Apache 2 license and can be obtained from: http://www.compbio.dundee.ac.uk/jabaws.

  4. Universal Serial Bus Architecture for Removable Media (USB-ARM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2011-03-09

    USB-ARM creates operating system drivers which sit between removable media and the user and applications. The drivers isolate the media and submit the contents of the media to a virtual machine containing an entire scanning system. This scanning system may include traditional anti-virus, but also allows more detailed analysis of files, including dynamic run-time analysis, helping to prevent "zero-day" threats not already identified in anti-virus signatures. Once cleared, the media is presented to the operating system, at which point it becomes available to users and applications.

  5. Implementation of relational data base management systems on micro-computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, C.L.

    1982-01-01

    This dissertation describes an implementation of a Relational Data Base Management System on a microcomputer. Specific floppy-disk-based hardware called TERAK is used, and a high-level query interface similar to a subset of the SEQUEL language is provided. The system contains sub-systems such as I/O, file management, virtual memory management, query system, B-tree management, scanner, command interpreter, expression compiler, garbage collection, linked list manipulation, disk space management, etc. The software has been implemented to fulfill the following goals: (1) It is highly modularized. (2) The system is physically segmented into 16 logically independent, overlayable segments, in such a way that a minimal amount of memory is needed at execution time. (3) A virtual memory system is simulated that provides the system with seemingly unlimited memory space. (4) A language translator is applied to recognize user requests in the query language. The code generator of this translator produces compact code for the execution of UPDATE, DELETE, and QUERY commands. (5) A complete set of basic functions needed for on-line data base manipulations is provided through a friendly query interface. (6) Dependency on the environment (both software and hardware) is eliminated as much as possible, so that it would be easy to transplant the system to other computers. (7) Each relation is simulated as a sequential file. The system is intended to be a highly efficient, single-user system suited for use by small or medium sized organizations for, say, administrative purposes. Experiments show that quite satisfying results have indeed been achieved.

  6. 78 FR 48851 - Wireline Competition Bureau Announces Closing of the Bureau's Cost Model Virtual Workshop

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-12

    ... questions through Public Notice. DATES: Virtual workshop closure effective August 12, 2013. ADDRESSES: You.... The filing hours are 8:00 a.m. to 7:00 p.m. All hand deliveries must be held together with rubber...

  7. Data Management System for the National Energy-Water System (NEWS) Assessment Framework

    NASA Astrophysics Data System (ADS)

    Corsi, F.; Prousevitch, A.; Glidden, S.; Piasecki, M.; Celicourt, P.; Miara, A.; Fekete, B. M.; Vorosmarty, C. J.; Macknick, J.; Cohen, S. M.

    2015-12-01

    Aiming at providing a comprehensive assessment of the water-energy nexus, the National Energy-Water System (NEWS) project requires the integration of data to support a modeling framework that links climate, hydrological, power production, transmission, and economic models. Large amounts of georeferenced data have to be streamed to the components of the inter-disciplinary model to explore future challenges and tradeoffs in US power production, based on climate scenarios, power plant locations and technologies, available water resources, ecosystem sustainability, and economic demand. We used open-source and in-house-built software components to build a system that addresses two major data challenges: on-the-fly re-projection, re-gridding, interpolation, extrapolation, no-data patching, merging, and temporal and spatial aggregation of static and time-series datasets, in virtually any file format and file structure and for any geographic extent, performed for the models' I/O directly at run time; and comprehensive data management based on metadata cataloguing and discovery in repositories utilizing the MAGIC Table (Manipulation and Geographic Inquiry Control database). This innovative concept allows models to access data on the fly by data ID, irrespective of file path, file structure, or file format, and regardless of its GIS specifications. In addition, a web-based information and computational system is being developed to control the I/O of spatially distributed Earth system, climate, hydrological, power grid, and economic data flow within the NEWS framework. The system allows scenario building, data exploration, visualization, querying, and manipulation of any loaded gridded, point, or vector polygon dataset. The system has demonstrated its potential for applications in other fields of Earth science modeling, education, and outreach. Over time, this implementation of the system will provide near real-time assessment of various current and future scenarios of the water-energy nexus.
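
    The on-the-fly re-gridding of a static dataset onto a region of interest can be sketched with numpy and scipy (the grids and extents below are illustrative; the MAGIC Table machinery itself is not shown):

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        # Source grid (e.g., a coarse 1-degree climate field).
        src_lat = np.linspace(-90, 90, 181)
        src_lon = np.linspace(-180, 180, 361)
        field = np.random.rand(src_lat.size, src_lon.size)

        interp = RegularGridInterpolator((src_lat, src_lon), field,
                                         method="linear")

        # Re-grid onto a finer mesh for a region of interest at run time.
        tgt_lat = np.linspace(35, 45, 200)
        tgt_lon = np.linspace(-80, -70, 200)
        pts = np.array(np.meshgrid(tgt_lat, tgt_lon,
                                   indexing="ij")).reshape(2, -1).T
        regridded = interp(pts).reshape(tgt_lat.size, tgt_lon.size)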

  8. Mash-up of techniques between data crawling/transfer, data preservation/stewardship and data processing/visualization technologies on a science cloud system designed for Earth and space science: a report of successful operation and science projects of the NICT Science Cloud

    NASA Astrophysics Data System (ADS)

    Murata, K. T.

    2014-12-01

    Data-intensive or data-centric science is the 4th paradigm, after observational and/or experimental science (1st paradigm), theoretical science (2nd paradigm) and numerical science (3rd paradigm). A science cloud is an infrastructure for this 4th methodology. The NICT Science Cloud is designed for big data sciences of Earth, space and other fields, based on modern informatics and information technologies [1]. Data flows on the cloud through the following three techniques: (1) data crawling and transfer, (2) data preservation and stewardship, and (3) data processing and visualization. Original tools and applications for these techniques have been designed and implemented. We mash up these tools and applications on the NICT Science Cloud to build customized systems for each project. In this paper, we discuss science data processing through these three steps. For big data science, data file deployment on a distributed storage system should be well designed in order to save storage cost and transfer time. We developed a high-bandwidth virtual remote storage system (HbVRS) together with data crawling tools, NICTY/DLA and the Wide-area Observation Network Monitoring (WONM) system. Data files are saved on the cloud storage system according to both the data preservation policy and the data processing plan. The storage system is built on distributed file system middleware (Gfarm: GRID datafarm). It is effective since disaster recovery (DR) and parallel data processing are carried out simultaneously without moving big data from storage to storage. Data files are managed by our Web application, WSDBank (World Science Data Bank). Big data on the cloud are processed via Pwrake, a workflow tool with high I/O bandwidth. There are several visualization tools on the cloud: VirtualAurora for the magnetosphere and ionosphere, VDVGE for Google Earth, STICKER for urban environment data and STARStouch for multi-disciplinary data. There are 30 projects running on the NICT Science Cloud for Earth and space science. In 2013, 56 refereed papers were published. Finally, we introduce a couple of successful Earth and space science results obtained using these three techniques on the NICT Science Cloud. [1] http://sc-web.nict.go.jp

  9. MISR Center Block Time Tool

    Atmospheric Science Data Center

    2013-04-01

      MISR Center Block Time Tool The misr_time tool calculates the block center times for MISR Level 1B2 files. This is ... version of the IDL package or by using the IDL Virtual Machine application. The IDL Virtual Machine is bundled with IDL and is ...

  10. Personal pervasive environments: practice and experience.

    PubMed

    Ballesteros, Francisco J; Guardiola, Gorka; Soriano, Enrique

    2012-01-01

    In this paper we present our experience designing and developing two different systems to enable personal pervasive computing environments, Plan B and the Octopus. These systems were fully implemented and have been used on a daily basis for years. Both are based on synthetic (virtual) file system interfaces and provide mechanisms to adapt to changes in the context and reconfigure the system to support pervasive applications. We also present the main differences between them, focusing on architectural and reconfiguration aspects. Finally, we analyze the pitfalls and successes of both systems and review the lessons we learned while designing, developing, and using them.

  11. Personal Pervasive Environments: Practice and Experience

    PubMed Central

    Ballesteros, Francisco J.; Guardiola, Gorka; Soriano, Enrique

    2012-01-01

    In this paper we present our experience designing and developing two different systems to enable personal pervasive computing environments, Plan B and the Octopus. These systems were fully implemented and have been used on a daily basis for years. Both are based on synthetic (virtual) file system interfaces and provide mechanisms to adapt to changes in the context and reconfigure the system to support pervasive applications. We also present the main differences between them, focusing on architectural and reconfiguration aspects. Finally, we analyze the pitfalls and successes of both systems and review the lessons we learned while designing, developing, and using them. PMID:22969340

  12. Best practices for virtual participation in meetings: Experiences from synthesis centers

    USGS Publications Warehouse

    Hampton, Stephanie E.; Halpern, Benjamin S.; Winter, Marten; Balch, Jennifer K.; Parker, John N.; Baron, Jill S.; Palmer, Margaret; Schildhauer, Mark P.; Bishop, Pamela; Meagher, Thomas R.; Specht, Alison

    2017-01-01

    The earth environment is a complex system, in which collaborative scientific approaches can provide major benefits by bringing together diverse perspectives, methods, and data, to achieve robust, synthetic understanding (Fig. 1). Face-to-face scientific meetings remain extremely valuable because of the opportunity to build deep mutual trust and understanding, and develop new collaborations and sometimes even lifelong friendships (Alberts 2013, Cooke and Hilton 2015). However, it has been argued that ecologists should be particularly sensitive to the environmental footprint of travel (Fox et al. 2009); such concerns, along with the time demands for travel, particularly for multi-national working groups, provide strong motivation for exploring virtual attendance. While not replacing the richness of face-to-face interactions entirely, it is now feasible to virtually participate in meetings through services that allow video, audio, and file sharing, as well as other Web-enabled communication.

  13. Large Scale Hierarchical K-Means Based Image Retrieval With MapReduce

    DTIC Science & Technology

    2014-03-27

    Fragments from the report: references to "The hadoop distributed file system: Architecture and design" (2007) and G. Bradski, Dr. Dobb's Journal of Software Tools (2000); results for 15 million images running on 20 virtual machines are shown. Subject terms: Image Retrieval, MapReduce, Hierarchical K-Means, Big Data, Hadoop.

  14. JOVIAL/Ada Microprocessor Study.

    DTIC Science & Technology

    1982-04-01

    Fragments from the final technical report: An interesting feature of the nodes is that they provide multiple virtual terminals, so it is possible to monitor several ... A more elaborate system could allow such features as spooling, background jobs or multiple users. ... Another editor feature is the buffer. Buffers may hold small amounts of text or entire text objects. They allow multiple files to be edited simultaneously.

  15. Virtual Labs vs. Remote Labs: Between Myth & Reality.

    ERIC Educational Resources Information Center

    Alhalabi, Bassem; Hamza, M. Khalid; Hsu, Sam; Romance, Nancy

    Many United States institutions of higher education have established Web-based educational environments that provide higher education curricula via the Internet and diverse modalities. Success has been limited primarily to virtual classrooms (real audio/video transmission) and/or test taking (online form filing). An extensive survey was carried…

  16. Template-based combinatorial enumeration of virtual compound libraries for lipids

    PubMed Central

    2012-01-01

    A variety of software packages are available for the combinatorial enumeration of virtual libraries for small molecules, starting from specifications of core scaffolds with attachment points and lists of R-groups as SMILES or SD files. Although SD files include atomic coordinates for core scaffolds and R-groups, it is not possible to control the 2-dimensional (2D) layout of the enumerated structures generated for virtual compound libraries, because different packages generate different 2D representations for the same structure. We have developed a software package called LipidMapsTools for the template-based combinatorial enumeration of virtual compound libraries for lipids. Virtual libraries are enumerated for the specified lipid abbreviations using matching lists of pre-defined templates and chain abbreviations, instead of core scaffolds and lists of R-groups provided by the user. 2D structures of the enumerated lipids are drawn in a specific and consistent fashion adhering to the framework for representing lipid structures proposed by the LIPID MAPS consortium. LipidMapsTools is lightweight, relatively fast, and contains no external dependencies. It is an open-source package and freely available under the terms of the modified BSD license. PMID:23006594

  17. Template-based combinatorial enumeration of virtual compound libraries for lipids.

    PubMed

    Sud, Manish; Fahy, Eoin; Subramaniam, Shankar

    2012-09-25

    A variety of software packages are available for the combinatorial enumeration of virtual libraries for small molecules, starting from specifications of core scaffolds with attachment points and lists of R-groups as SMILES or SD files. Although SD files include atomic coordinates for core scaffolds and R-groups, it is not possible to control the 2-dimensional (2D) layout of the enumerated structures generated for virtual compound libraries, because different packages generate different 2D representations for the same structure. We have developed a software package called LipidMapsTools for the template-based combinatorial enumeration of virtual compound libraries for lipids. Virtual libraries are enumerated for the specified lipid abbreviations using matching lists of pre-defined templates and chain abbreviations, instead of core scaffolds and lists of R-groups provided by the user. 2D structures of the enumerated lipids are drawn in a specific and consistent fashion adhering to the framework for representing lipid structures proposed by the LIPID MAPS consortium. LipidMapsTools is lightweight, relatively fast, and contains no external dependencies. It is an open-source package and freely available under the terms of the modified BSD license.
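
    The template-plus-chain-abbreviation idea can be sketched with RDKit (the template, chain fragments, and naming below are hypothetical illustrations, not LipidMapsTools' actual templates, and the consistent 2D-layout control that distinguishes the package is not reproduced here):

        from rdkit import Chem

        # Hypothetical template: a glycerol backbone with two acyl attachment
        # points, written as a SMILES string with Python placeholders.
        TEMPLATE = "OCC(O{sn2})CO{sn1}"

        # Chain abbreviations mapped to acyl-group SMILES fragments (illustrative).
        CHAINS = {
            "16:0": "C(=O)CCCCCCCCCCCCCCC",
            "18:0": "C(=O)CCCCCCCCCCCCCCCCC",
        }

        def enumerate_lipids():
            for n1, sn1 in CHAINS.items():
                for n2, sn2 in CHAINS.items():
                    smiles = TEMPLATE.format(sn1=sn1, sn2=sn2)
                    mol = Chem.MolFromSmiles(smiles)  # validate assembled SMILES
                    if mol is not None:
                        yield f"DG({n1}/{n2})", Chem.MolToSmiles(mol)

        for name, smi in enumerate_lipids():
            print(name, smi)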

  18. Legacy literature-a need for virtual libraries

    USDA-ARS?s Scientific Manuscript database

    After years of conducting, writing up, and reviewing research, many entomologists have examined, organized, and annotated as much as 2-3 gigabytes of PDFs and 4-5 file cabinets of hard-copy articles, in addition to thousands of spreadsheets, docs, jpgs, and wav files of data. This is a useful legacy th...

  19. Sandbox for Mac Malware v 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walkup, Elizabeth

    This software is an analyzer for automated sandbox analysis of malware on the OS X operating system. It runs inside an OS X virtual machine to collect data about what happens when a given file is opened or run. As of August 2014, there was no sandbox software for Mac OS X malware, as it requires different methods from those used on the Windows OS (which most sandboxes are written for). This software adds OS X analysis capabilities to an existing open-source sandbox, Cuckoo Sandbox (http://cuckoosandbox.org/), which previously only worked for Windows. The analyzer itself can take many different types of files as input: the traditional Mach-O and FAT executables, .app files, zip files, Python scripts, Java archives, and web pages, as well as PDFs and other documents. While the file is running, the analyzer also simulates rudimentary human interaction with clicks and mouse movements in order to bypass the tests some malware use to see if they are being analyzed. The analyzer outputs several different kinds of data: function call traces, network captures, screenshots, and all created and modified files. This work also includes a static analysis Cuckoo module for Mach-O binary files. It extracts file structures, code library imports and exports, and signatures. This data can be used along with the analyzer results to create signatures for malware.

  20. Fast in-situ tool inspection based on inverse fringe projection and compact sensor heads

    NASA Astrophysics Data System (ADS)

    Matthias, Steffen; Kästner, Markus; Reithmeier, Eduard

    2016-11-01

    Inspection of machine elements is an important task in production processes in order to ensure the quality of produced parts and to gather feedback for the continuous improvement process. A new measuring system is presented, which is capable of performing the inspection of critical tool geometries, such as gearing elements, inside the forming machine. To meet the constraints on sensor head size and inspection time imposed by the limited space inside the machine and the cycle time of the process, the measuring device employs a combination of endoscopy techniques with the fringe projection principle. Compact gradient-index lenses enable a compact design of the sensor head, which is connected to a CMOS camera and a flexible micro-mirror-based projector via flexible fiber bundles. Using common fringe projection patterns, the system achieves measuring times of less than five seconds. To further reduce the time required for inspection, the generation of inverse fringe projection patterns has been implemented for the system. Inverse fringe projection speeds up the inspection process by employing object-adapted patterns, which enable the detection of geometry deviations in a single image. Two different approaches to generating object-adapted patterns are presented. The first approach uses a reference measurement of a manufactured tool master to generate the inverse pattern. The second approach is based on a virtual master geometry in the form of a CAD file and a ray-tracing model of the measuring system. Virtual modeling of the measuring device and inspection setup allows geometric tolerancing of free-form surfaces by the tool designer in the CAD file. A new approach is presented which uses virtual tolerance specifications and additional simulation steps to enable fast checking of metric tolerances. Following the description of the pattern generation process, the image processing steps required for inspection are demonstrated on captures of gearing geometries.
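
    The inverse patterns themselves are derived from a reference measurement or a CAD-based ray-tracing model, which is beyond a short example, but the phase-shifted sinusoidal fringes that standard fringe projection starts from can be sketched with numpy (all parameters are illustrative):

        import numpy as np

        def fringe_patterns(width=1024, height=768, period=32, steps=4):
            # 'steps' phase-shifted sinusoidal fringe images with values in [0, 1].
            x = np.arange(width)
            shifts = [2 * np.pi * k / steps for k in range(steps)]
            return [np.tile(0.5 + 0.5 * np.cos(2 * np.pi * x / period + s),
                            (height, 1)) for s in shifts]

        def wrapped_phase(i0, i1, i2, i3):
            # Classic 4-step phase-shift formula: I_k = A + B*cos(phi + k*pi/2),
            # hence phi = atan2(I3 - I1, I0 - I2), wrapped to (-pi, pi].
            return np.arctan2(i3 - i1, i0 - i2)

        captured = fringe_patterns()    # stand-ins for camera captures
        phi = wrapped_phase(*captured)  # a real surface's geometry deviations
                                        # would appear as phase distortions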

  1. Collaborative Workspaces within Distributed Virtual Environments.

    DTIC Science & Technology

    1996-12-01

    Fragments from the report: ... such as a text document, a 3D model, or a captured image using a collaborative workspace called the InPerson Whiteboard. The Whiteboard contains ... commands for editing objects drawn on the screen. Finally, when the call is completed, the Whiteboard can be saved to a file for future use. IRIS Annotator ... and a shared whiteboard that includes a number of multimedia annotation tools. Both systems are also mindful of bandwidth limitations and can ...

  2. Cyber-Physical Multi-Core Optimization for Resource and Cache Effects (C2ORES)

    DTIC Science & Technology

    2014-03-01

    Fragments from the report: the DoD-sponsored ATAACK mobile cloud testbed, funded through the DURIP program, is deployed at Virginia Tech and Vanderbilt University to conduct ... Jug was configured to use a filesystem (network file system, NFS) backend for locking and task synchronization. ... a performance-aware virtual machine placement technique that is realized as cloud infrastructure middleware. The key contributions of iPlace include ...

  3. Military Review. Volume 89, Number 5, September-October 2009

    DTIC Science & Technology

    2009-10-01

    Fragments from the issue: ... to scrutinize paradigms, systemic thinking, and promotion of team learning. The principal challenge of innovation is to identify a problem and ... cognitive abilities of the students in a professionally facilitated forum. Virtual knowledge management forums do the same but on an Army-wide ... could file from the middle of an artillery duel in Tuzla ... they need to truly and properly inform the American public ...

  4. The Virtual City: Putting Charleston on the World Wide Web.

    ERIC Educational Resources Information Center

    Beagle, Donald

    1996-01-01

    Describes the Charleston Multimedia Project, a World Wide Web guide to the history, architecture, and culture of Charleston, South Carolina, which includes a timeline and virtual tours. Incorporates materials issued by many agencies that were previously held in vertical files. The Charleston County Library's role and future plans are also…

  5. Encouraging Social Presence and a Sense of Community in a Virtual Residential School

    ERIC Educational Resources Information Center

    Robinson, Kathleen

    2009-01-01

    This study describes the theoretical rationale underpinning the design and implementation of a career-related activity as an optional element of a virtual residential school. The activity comprised an interview with a practising chartered psychologist recorded as an MP3 audio file, which was subsequently supported by an asynchronous discussion…

  6. Uncoupling File System Components for Bridging Legacy and Modern Storage Architectures

    NASA Astrophysics Data System (ADS)

    Golpayegani, N.; Halem, M.; Tilmes, C.; Prathapan, S.; Earp, D. N.; Ashkar, J. S.

    2016-12-01

    Long-running Earth science projects can span decades of architectural changes in both processing and storage environments. As storage architecture designs change over decades, such projects need to adjust their tools, systems, and expertise to properly integrate the new technologies with their legacy systems. Traditional file systems lack the necessary support to accommodate such hybrid storage infrastructures, resulting in more complex tool development to encompass all possible storage architectures used by the project. The MODIS Adaptive Processing System (MODAPS) and the Level 1 and Atmospheres Archive and Distribution System (LAADS) is an example of a project spanning several decades which has evolved into a hybrid storage architecture. MODAPS/LAADS has developed the Lightweight Virtual File System (LVFS), which ensures seamless integration of all the different storage architectures, from standard block-based POSIX-compliant storage disks, to object-based architectures such as the S3-compliant HGST Active Archive System, to the Seagate Kinetic disks utilizing the Kinetic protocol. With LVFS, all analysis and processing tools used for the project continue to function unmodified regardless of the underlying storage architecture, enabling MODAPS/LAADS to easily integrate any new storage architecture without the costly need to modify existing tools. Most file systems are designed as a single application responsible for using metadata to organize the data into a tree, determining the location for data storage, and providing a method of data retrieval. We will show how LVFS' unique approach of treating these components in a loosely coupled fashion enables it to merge different storage architectures into a single uniform storage system which bridges the underlying hybrid architecture.
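
    A minimal read-only sketch of this uncoupling, using the fusepy bindings (the backend callables and mount point are hypothetical; LVFS itself is a production implementation and differs in detail):

        import errno, stat, time
        from fuse import FUSE, Operations, FuseOSError

        class UniformView(Operations):
            """Presents files from several backends as one directory tree;
            the namespace logic is decoupled from where bytes actually live."""

            def __init__(self, backends):
                self.backends = backends  # name -> callable returning bytes

            def getattr(self, path, fh=None):
                now = time.time()
                if path == "/":
                    return dict(st_mode=stat.S_IFDIR | 0o755, st_nlink=2,
                                st_ctime=now, st_mtime=now, st_atime=now)
                name = path.lstrip("/")
                if name in self.backends:
                    data = self.backends[name]()
                    return dict(st_mode=stat.S_IFREG | 0o444, st_nlink=1,
                                st_size=len(data), st_ctime=now,
                                st_mtime=now, st_atime=now)
                raise FuseOSError(errno.ENOENT)

            def readdir(self, path, fh):
                return [".", ".."] + list(self.backends)

            def read(self, path, size, offset, fh):
                data = self.backends[path.lstrip("/")]()
                return data[offset:offset + size]

        if __name__ == "__main__":
            backends = {"local.txt": lambda: b"from a POSIX disk\n",
                        "object.txt": lambda: b"from an object store\n"}
            FUSE(UniformView(backends), "/mnt/uniform", foreground=True)

    The point of the sketch is that swapping a backend callable changes where the bytes come from without touching the namespace code, which is the loose coupling the abstract describes.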

  7. JWST science data products

    NASA Astrophysics Data System (ADS)

    Swade, Daryl; Bushouse, Howard; Greene, Gretchen; Swam, Michael

    2014-07-01

    Science data products for James Webb Space Telescope (JWST) observations will be generated by the Data Management Subsystem (DMS) within the JWST Science and Operations Center (S&OC) at the Space Telescope Science Institute (STScI). Data processing pipelines within the DMS will produce uncalibrated and calibrated exposure files, as well as higher level data products that result from combined exposures, such as mosaic images. Information to support the science observations, for example data from engineering telemetry, proposer inputs, and observation planning will be captured and incorporated into the science data products. All files will be generated in Flexible Image Transport System (FITS) format. The data products will be made available through the Mikulski Archive for Space Telescopes (MAST) and adhere to International Virtual Observatory Alliance (IVOA) standard data protocols.

  8. Agentless Cloud-Wide Monitoring of Virtual Disk State

    DTIC Science & Technology

    2015-10-01

    Fragments from the report: ... packages include Apache, MySQL, PHP, Ruby on Rails, Java Application Servers, and many others. Figure 2.12 shows the results of a run of the software ... the Linux, Apache, MySQL, PHP (LAMP) set of applications. Thus, many file-level update logs will contain the same versions of files repeated across many ...

  9. A high-speed network for cardiac image review.

    PubMed

    Elion, J L; Petrocelli, R R

    1994-01-01

    A high-speed fiber-based network for the transmission and display of digitized full-motion cardiac images has been developed. Based on Asynchronous Transfer Mode (ATM), the network is scalable, meaning that the same software and hardware are used for a small local area network or for a large multi-institutional network. The system can handle uncompressed digital angiographic images, considered to be at the "high end" of the bandwidth requirements. Along with the networking, a general-purpose multi-modality review station has been implemented without specialized hardware. This station can store a full injection sequence in "loop RAM" in a 512 x 512 format, then interpolate to 1024 x 1024 while displaying at 30 frames per second. The network and review stations connect to a central file server that uses a virtual file system to make a large high-speed RAID storage disk and associated off-line storage tapes and cartridges all appear as a single large file system to the software. In addition to supporting archival storage and review, the system can also digitize live video using high-speed Direct Memory Access (DMA) from the frame grabber to present uncompressed data to the network. Fully functional prototypes have provided the proof of concept, with full deployment in the institution planned as the next stage.

  10. A high-speed network for cardiac image review.

    PubMed Central

    Elion, J. L.; Petrocelli, R. R.

    1994-01-01

    A high-speed fiber-based network for the transmission and display of digitized full-motion cardiac images has been developed. Based on Asynchronous Transfer Mode (ATM), the network is scalable, meaning that the same software and hardware are used for a small local area network or for a large multi-institutional network. The system can handle uncompressed digital angiographic images, considered to be at the "high end" of the bandwidth requirements. Along with the networking, a general-purpose multi-modality review station has been implemented without specialized hardware. This station can store a full injection sequence in "loop RAM" in a 512 x 512 format, then interpolate to 1024 x 1024 while displaying at 30 frames per second. The network and review stations connect to a central file server that uses a virtual file system to make a large high-speed RAID storage disk and associated off-line storage tapes and cartridges all appear as a single large file system to the software. In addition to supporting archival storage and review, the system can also digitize live video using high-speed Direct Memory Access (DMA) from the frame grabber to present uncompressed data to the network. Fully functional prototypes have provided the proof of concept, with full deployment in the institution planned as the next stage. PMID:7949964

  11. Distributed Computing for the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Chudoba, J.

    2015-12-01

    The Pierre Auger Observatory operates the largest system of detectors for ultra-high-energy cosmic ray measurements. Comparison of theoretical models of interactions with recorded data requires thousands of computing cores for Monte Carlo simulations. Since 2007, distributed resources connected via the EGI grid have been successfully used. The first and second versions of the production system, based on bash scripts and a MySQL database, were able to submit jobs to all reliable sites supporting the Virtual Organization auger. For many years VO auger has belonged to the top ten EGI users by total computing time. Migration of the production system to the DIRAC interware started in 2014. Pilot jobs improve the efficiency of computing jobs and eliminate problems with the small and less reliable sites used for the bulk production. The new system also has the possibility of using available resources in clouds. The Dirac File Catalog replaced LFC for new files, which are organized in datasets defined via metadata. CVMFS has been used for software distribution since 2014. In the presentation we compare the old and the new production systems and report the experience of migrating to the new system.

  12. 78 FR 54626 - Announcing Approval of Federal Information Processing Standard (FIPS) Publication 201-2, Personal...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-05

    ... its virtual contact interface be made mandatory as soon as possible for the many beneficial features... messaging and the virtual contact interface in the Standard, some Federal departments and agencies have... Laboratory Programs. [FR Doc. 2013-21491 Filed 9-4-13; 8:45 am] BILLING CODE 3510-13-P ...

  13. Making Sense of Students' Actions in an Open-Ended Virtual Laboratory Environment

    ERIC Educational Resources Information Center

    Gal, Ya'akov; Uzan, Oriel; Belford, Robert; Karabinos, Michael; Yaron, David

    2015-01-01

    A process for analyzing log files collected from open-ended learning environments is developed and tested on a virtual lab problem involving reaction stoichiometry. The process utilizes a set of visualization tools that, by grouping student actions in a hierarchical manner, helps experts make sense of the linear list of student actions recorded in…

  14. The Invasive Species Forecasting System (ISFS): An iRODS-Based, Cloud-Enabled Decision Support System for Invasive Species Habitat Suitability Modeling

    NASA Technical Reports Server (NTRS)

    Gill, Roger; Schnase, John L.

    2012-01-01

    The Invasive Species Forecasting System (ISFS) is an online decision support system that allows users to load point occurrence field sample data for a plant species of interest and quickly generate habitat suitability maps for geographic regions of interest, such as a national park, monument, forest, or refuge. Target customers for ISFS are natural resource managers and decision makers who have a need for scientifically valid, model-based predictions of the habitat suitability of plant species of management concern. In a joint project involving NASA and the Maryland Department of Natural Resources, ISFS has been used to model the potential distribution of Wavyleaf Basketgrass in Maryland's Chesapeake Bay Watershed. Maximum entropy techniques are used to generate predictive maps using predictor datasets derived from remotely sensed data and climate simulation outputs. The workflow to run a model is implemented in an iRODS microservice using a custom ISFS file driver that clips and re-projects data to geographic regions of interest, then shells out to perform MaxEnt processing on the input data. When the model completes, all output files and maps from the model run are registered in iRODS and made accessible to the user. The ISFS user interface is a web browser that uses the iRODS PHP client to interact with the ISFS/iRODS server. ISFS is designed to reside in a VMware virtual machine running SLES 11 and iRODS 3.0. The ISFS virtual machine is hosted in a VMware vSphere private cloud infrastructure to deliver the online service.

  15. Multi-tiered S-SOA, Parameter-Driven New Islamic Syariah Products of Holistic Islamic Banking System (HiCORE): Virtual Banking Environment

    NASA Astrophysics Data System (ADS)

    Halimah, B. Z.; Azlina, A.; Sembok, T. M.; Sufian, I.; Sharul Azman, M. N.; Azuraliza, A. B.; Zulaiha, A. O.; Nazlia, O.; Salwani, A.; Sanep, A.; Hailani, M. T.; Zaher, M. Z.; Azizah, J.; Nor Faezah, M. Y.; Choo, W. O.; Abdullah, Chew; Sopian, B.

    The Holistic Islamic Banking System (HiCORE), a banking system suitable for a virtual banking environment, was created through a university-industry collaboration initiative between Universiti Kebangsaan Malaysia (UKM) and Fuziq Software Sdn Bhd. HiCORE is modeled on a multi-tiered Simple Services Oriented Architecture (S-SOA), using a parameter-based semantic approach. HiCORE's existence is timely, as the financial world is looking for a new approach to creating banking and financial products that are interest-free or based on Islamic Syariah principles and jurisprudence. Interest-free banking has currently caught the interest of bankers and financiers all over the world. HiCORE's parameter-based module houses the Customer Information File (CIF), Deposit and Financing components. The parameter-based module represents the third tier of the multi-tiered Simple SOA approach. This paper highlights the multi-tiered, parameter-driven approach to the creation of new Islamic products based on the 'dalil' (Quran), 'syarat' (rules) and 'rukun' (procedures) required by Syariah principles and jurisprudence, as reflected by the semantic ontology embedded in the parameter module of the system.

  16. Three-dimensional virtual bone bank system for selecting massive bone allograft in orthopaedic oncology.

    PubMed

    Wu, Zhigang; Fu, Jun; Wang, Zhen; Li, Xiangdong; Li, Jing; Pei, Yanjun; Pei, Guoxian; Li, Dan; Guo, Zheng; Fan, Hongbin

    2015-06-01

    Although structural bone allografts have been used for years to treat large defects caused by tumour or trauma, selecting the most appropriate allograft is still challenging. The objectives of this study were to: (1) describe the establishment of a visual bone bank system and the workflow of allograft selection, and (2) show mid-term follow-up results of patients after allograft implantation. Allografts were scanned and stored as Digital Imaging and Communications in Medicine (DICOM) files. Then, image segmentation was conducted and 3D models were reconstructed to establish a visual bone bank system. Based on the volume registration method, allografts were selected after a careful matching process. From November 2010 to June 2013, with the help of the Computer-Assisted Orthopaedic Surgery (CAOS) navigation system, allografts were implanted in 14 patients to fill defects after tumour resection. By combining the virtual bone bank and CAOS, selection time was reduced and matching accuracy was increased. After 27.5 months of follow-up, the mean Musculoskeletal Tumor Society (MSTS) 93 functional score was 25.7 ± 1.1 points. Except for two patients with pulmonary metastases, 12 patients were alive without evidence of disease at the time this report was written. The virtual bone bank system was helpful for allograft selection, tumour excision and bone reconstruction, thereby improving the safety and effectiveness of limb-salvage surgery.

  17. Navigating protected genomics data with UCSC Genome Browser in a Box.

    PubMed

    Haeussler, Maximilian; Raney, Brian J; Hinrichs, Angie S; Clawson, Hiram; Zweig, Ann S; Karolchik, Donna; Casper, Jonathan; Speir, Matthew L; Haussler, David; Kent, W James

    2015-03-01

    Genome Browser in a Box (GBiB) is a small virtual machine version of the popular University of California Santa Cruz (UCSC) Genome Browser that can be run on a researcher's own computer. Once GBiB is installed, a standard web browser is used to access the virtual server and add personal data files from the local hard disk. Annotation data are loaded on demand through the Internet from UCSC or can be downloaded to the local computer for faster access. Software downloads and installation instructions are freely available for non-commercial use at https://genome-store.ucsc.edu/. GBiB requires the installation of open-source software VirtualBox, available for all major operating systems, and the UCSC Genome Browser, which is open source and free for non-commercial use. Commercial use of GBiB and the Genome Browser requires a license (http://genome.ucsc.edu/license/). © The Author 2014. Published by Oxford University Press.

  18. The INTERLISP Virtual Machine Specification,

    DTIC Science & Technology

    1976-09-01


  19. Fundamental study of compression for movie files of coronary angiography

    NASA Astrophysics Data System (ADS)

    Ando, Takekazu; Tsuchiya, Yuichiro; Kodera, Yoshie

    2005-04-01

    When network distribution of movie files is considered as a reference use case, lossy-compressed movie files with small file sizes could be useful. We chose three kinds of coronary stricture movies with different motion speeds as examination objects: movies with slow, normal, and fast heart rates. Movies in MPEG-1, DivX5.11, WMV9 (Windows Media Video 9), and WMV9-VCM (Windows Media Video 9-Video Compression Manager) formats were made from three kinds of AVI-format movies with different motion speeds. Five kinds of movies, the four kinds of compressed movies and the non-compressed AVI instead of the DICOM format, were evaluated by Thurstone's method. The evaluation factors for the movies were "sharpness, granularity, contrast, and comprehensive evaluation." In the virtual bradycardia movie, AVI received the best evaluation on all factors except granularity. In the virtual normal movie, a different compression technique excelled on each evaluation factor. In the virtual tachycardia movie, MPEG-1 received the best evaluation on all factors except contrast. The best compression format depends on the speed of the movie because of differences in the compression algorithms. This is thought to reflect the influence of inter-frame compression: a movie compression algorithm combines compression between frames with intra-frame compression. Since each compression method influences the image differently, it is necessary to examine the relation between the compression algorithm and our results.
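
    Thurstone's method reduces pairwise preference judgements to interval scale values; a minimal Case V sketch with scipy, using entirely hypothetical preference counts rather than the study's data:

        import numpy as np
        from scipy.stats import norm

        formats = ["AVI", "MPEG-1", "DivX5.11", "WMV9", "WMV9-VCM"]
        # Hypothetical counts: wins[i, j] = observers preferring format i over j.
        wins = np.array([[ 0, 12, 14, 15, 13],
                         [ 8,  0, 11, 12, 10],
                         [ 6,  9,  0, 11,  9],
                         [ 5,  8,  9,  0,  8],
                         [ 7, 10, 11, 12,  0]], dtype=float)

        n = wins + wins.T                 # total judgements per pair
        p = np.divide(wins, n, out=np.full_like(wins, 0.5), where=n > 0)
        p = np.clip(p, 0.01, 0.99)        # keep z-scores finite
        z = norm.ppf(p)                   # z[i, j] estimates S_i - S_j (Case V:
                                          # equal comparatal dispersions)
        scale = z.mean(axis=1)            # scale value of each format

        for name, s in sorted(zip(formats, scale), key=lambda t: -t[1]):
            print(f"{name:10s} {s:+.3f}")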

  20. New virtual laboratories presenting advanced motion control concepts

    NASA Astrophysics Data System (ADS)

    Goubej, Martin; Krejčí, Alois; Reitinger, Jan

    2015-11-01

    The paper deals with the development of a software framework for rapid generation of remote virtual laboratories. A client-server architecture is chosen in order to employ a real-time simulation core running on a dedicated server. An ordinary web browser is used as the final renderer to achieve a hardware-independent solution which can run on different target platforms including laptops, tablets and mobile phones. The provided toolchain allows automatic generation of the virtual laboratory source code from a configuration file created in the open-source Inkscape graphic editor. Three virtual laboratories presenting advanced motion control algorithms have been developed, showing the applicability of the proposed approach.

  1. Disaster Relief and Emergency Medical Services Project (DREAMS TM): Digital EMS

    DTIC Science & Technology

    2000-10-01

    Fragments from the report: ... exchanges between the hospital and the EMS vehicle. By creating the virtual presence of a physician at or near the emergency scene, more lives will be saved. ... address, cross street, zip code etc. The map can be saved to the clipboard or to an EMF graphics file for use by other applications in the system. ... section can be found in Appendix B. The EMS personnel on board the ambulance can benefit greatly from technology integration. Several time-saving ...

  2. Iris: Constructing and Analyzing Spectral Energy Distributions with the Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Laurino, O.; Budynkiewicz, J.; Busko, I.; Cresitello-Dittmar, M.; D'Abrusco, R.; Doe, S.; Evans, J.; Pevunova, O.

    2014-05-01

    We present Iris 2.0, the latest release of the Virtual Astronomical Observatory application for building and analyzing Spectral Energy Distributions (SEDs). With Iris, users may read in and display SEDs, inspect and edit any selection of SED data, fit models to SEDs in arbitrary spectral ranges, and calculate confidence limits on best-fit parameters. SED data may be loaded into the application from VOTable and FITS files compliant with the International Virtual Observatory Alliance interoperable data models, or retrieved directly from NED or the Italian Space Agency Science Data Center; data in non-standard formats may also be converted within the application. Users may seamlessly exchange data between Iris and other Virtual Observatory tools using the Simple Application Messaging Protocol. Iris 2.0 also provides a tool for redshifting, interpolating, and measuring integrated fluxes, and allows simple aperture corrections for individual points and SED segments. Custom Python functions, template models and template libraries may be imported into Iris for fitting SEDs. Iris may be extended through Java plugins; users can install third-party packages, or develop their own plugin using Iris' Software Development Kit. Iris 2.0 is available for Linux and Mac OS X systems.
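
    Outside Iris, an IVOA-style VOTable SED can be read with astropy; a minimal sketch (the file name and column names are hypothetical, since archives label SED columns differently):

        from astropy.io.votable import parse_single_table
        import matplotlib.pyplot as plt

        # Load the first table of a VOTable SED file into an astropy Table.
        table = parse_single_table("sed.vot").to_table()
        spectral = table["SpectralCoord"]   # hypothetical column name
        flux = table["Flux"]                # hypothetical column name

        # SEDs typically span decades in both axes, hence the log-log plot.
        plt.loglog(spectral, flux, ".")
        plt.xlabel("Spectral coordinate")
        plt.ylabel("Flux")
        plt.show()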

  3. The Integration of CloudStack and OCCI/OpenNebula with DIRAC

    NASA Astrophysics Data System (ADS)

    Méndez Muñoz, Víctor; Fernández Albor, Víctor; Graciani Diaz, Ricardo; Casajús Ramo, Adriàn; Fernández Pena, Tomás; Merino Arévalo, Gonzalo; José Saborido Silva, Juan

    2012-12-01

    The increasing availability of Cloud resources is emerging as a realistic alternative to the Grid as a paradigm for enabling scientific communities to access large distributed computing resources. The DIRAC framework for distributed computing provides an easy way to access resources from both systems efficiently. This paper explains the integration of DIRAC with two open-source Cloud managers: OpenNebula (taking advantage of the OCCI standard) and CloudStack. These are computing tools to manage the complexity and heterogeneity of distributed data center infrastructures, allowing the creation of virtual clusters on demand, including public, private and hybrid clouds. This approach has required the development of an extension to the previous DIRAC Virtual Machine engine, which was developed for Amazon EC2, allowing the connection with these new cloud managers. In the OpenNebula case, the development has been based on the CernVM Virtual Software Appliance with appropriate contextualization, while in the case of CloudStack, the infrastructure has been kept more general, which permits other Virtual Machine sources and operating systems to be used. In both cases, the CernVM File System has been used to facilitate software distribution to the computing nodes. With the resulting infrastructure, the cloud resources are transparent to the users through a friendly interface, like the DIRAC Web Portal. The main purpose of this integration is to obtain a system that can manage cloud and grid resources at the same time. This particular feature pushes DIRAC to a new conceptual denomination as interware, integrating different middleware. Users from different communities do not need to care about the installation of the standard software that is available at the nodes, nor about the operating system of the host machine, which is transparent to the user. This paper presents an analysis of the overhead of the virtual layer, with tests comparing the proposed approach with the existing Grid solution. License Notice: Published under licence in Journal of Physics: Conference Series by IOP Publishing Ltd.

  4. WriteShield: A Pseudo Thin Client for Prevention of Information Leakage

    NASA Astrophysics Data System (ADS)

    Kirihata, Yasuhiro; Sameshima, Yoshiki; Onoyama, Takashi; Komoda, Norihisa

    While thin-client systems are diffusing as an effective security method in enterprises and organizations, there is a new approach called the pseudo thin-client system. In this system, the local disks of clients are write-protected and user data is forced to be saved on the central file server, realizing the same security effect as conventional thin-client systems. Since it takes a purely software-based approach, it does not require network or server hardware upgrades, which reduces installation cost. However, there are several problems, such as no write control for external media, possible memory depletion, and lower security because of the exceptional write permission granted to system processes. In this paper, we propose WriteShield, a pseudo thin-client system which solves these issues. In this system, the local disks are write-protected with a volume filter driver, and a virtual cache mechanism extends the memory cache size available for the write protection. This paper presents design and implementation details of WriteShield. We also describe a security analysis and a simulation evaluation of paging algorithms for the virtual cache mechanism, and measure disk I/O performance to verify its feasibility in a real environment.
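    The paper evaluates paging algorithms for the virtual cache by simulation. As a minimal sketch of such a simulation for one common policy (LRU), assuming an invented trace of page numbers and cache size:

        from collections import OrderedDict

        def simulate_lru(trace, cache_pages):
            """Count page faults for an LRU cache over a trace of page numbers."""
            cache = OrderedDict()          # page -> None, in recency order
            faults = 0
            for page in trace:
                if page in cache:
                    cache.move_to_end(page)        # mark as most recently used
                else:
                    faults += 1
                    if len(cache) >= cache_pages:
                        cache.popitem(last=False)  # evict least recently used
                    cache[page] = None
            return faults

        # Toy write trace: page numbers touched by intercepted disk writes.
        print(simulate_lru([1, 2, 3, 1, 4, 5, 2, 1], cache_pages=3))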

  5. Virtual and Printed 3D Models for Teaching Crystal Symmetry and Point Groups

    ERIC Educational Resources Information Center

    Casas, Lluís; Estop, Eugènia

    2015-01-01

    Both virtual and printed 3D crystal models can help students and teachers deal with chemical education topics such as symmetry and point groups. In the present paper, two freely downloadable tools (interactive PDF files and a mobile app) are presented as examples of the application of 3D design to study point-symmetry. The use of 3D printing to…

  6. Dynamic Test Generation for Large Binary Programs

    DTIC Science & Technology

    2009-11-12

    the fuzzing@whitestar.linuxbox.org mailing list, including Jared DeMott, Disco Jonny, and Ari Takanen, for discussions on fuzzing tradeoffs. Martin...as is the case for large applications where exercising all execution paths is virtually hopeless anyway. This point will be further discussed in...consumes trace files generated by iDNA and virtually re-executes the recorded runs. TruScan offers several features that substantially simplify symbolic

  7. Construction of a virtual combinatorial library using SMILES strings to discover potential structure-diverse PPAR modulators.

    PubMed

    Liao, Chenzhong; Liu, Bing; Shi, Leming; Zhou, Jiaju; Lu, Xian-Ping

    2005-07-01

    Based on the structural characteristics of PPAR modulators, a virtual combinatorial library containing 1,226,625 compounds was constructed using SMILES strings. Selected ADME filters were employed to expel compounds with poor drug-like properties from this library. The library was converted to sdf and mol2 files by CONCORD 4.0, and was then docked to PPARgamma by DOCK 4.0 to identify new chemical entities that may be potential drug leads against type 2 diabetes and other metabolic diseases. The method of constructing a virtual combinatorial library using SMILES strings was further implemented with a visual front-end in Visual Basic .NET, which can facilitate the generation of other types of virtual combinatorial libraries.
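    The abstract does not give the enumeration scheme, but combinatorial SMILES libraries are commonly built by splicing substituent fragments into a core scaffold. A toy sketch under that assumption (the scaffold and fragments are invented; real work would validate each SMILES with a cheminformatics toolkit):

        from itertools import product

        # Hypothetical scaffold with two attachment points written as
        # '{R1}' and '{R2}'; the fragment lists are illustrative only.
        scaffold = "OC(=O)C(Cc1ccc({R1})cc1){R2}"
        r1_fragments = ["OCCc2ccccc2", "OCC(=O)N", "Br"]
        r2_fragments = ["N", "NC(=O)C", "O"]

        # Enumerate every R1 x R2 combination into candidate SMILES strings.
        library = [scaffold.format(R1=r1, R2=r2)
                   for r1, r2 in product(r1_fragments, r2_fragments)]
        print(len(library), "candidate SMILES, e.g.", library[0])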

  8. Conversion of school nurse policy and procedure manual to electronic format.

    PubMed

    Randall, Joellyn; Knee, Rachel; Galemore, Cynthia

    2006-10-01

    Policy and procedure manuals are essential to establishing standards of practice and ensuring quality of care to students and families. The Olathe District Schools (Kansas) Technology Department created the Virtual File Cabinet to provide online access to employee policies, school board policies, forms, and other documents. A task force of school nurses was formed to convert the nursing department's policies, procedures, protocols, and forms from hard copy to electronic format and make them available on the district's Virtual File Cabinet. Having the policy and procedure manuals in electronic format allows for quick access and ease in updating information, thereby guaranteeing the school nurses have access to the most current information. Cost savings were realized by reducing the amount of paper and staff time needed to copy, collate, and assemble materials.

  9. PyGOLD: a python based API for docking based virtual screening workflow generation.

    PubMed

    Patel, Hitesh; Brinkjost, Tobias; Koch, Oliver

    2017-08-15

    Molecular docking is one of the most successful approaches in the structure-based discovery and development of bioactive molecules in chemical biology and medicinal chemistry. Due to the huge amount of computational time that is still required, docking is often the last step in a virtual screening approach. Such screenings are set up as workflows spanning many steps, each aimed at a different filtering task. Large parts of these workflows can be automated using Python-based toolkits, except for docking with the docking software GOLD. Within an automated virtual screening workflow, however, it is not feasible to use the GUI between every step to change the GOLD configuration file. Thus, a Python module called PyGOLD was developed to parse, edit and write the GOLD configuration file and to automate docking-based virtual screening workflows. The latest version of PyGOLD, its documentation and example scripts are available at: http://www.ccb.tu-dortmund.de/koch or http://www.agkoch.de. PyGOLD is implemented in Python and can be imported as a standard Python module without any further dependencies. oliver.koch@agkoch.de, oliver.koch@tu-dortmund.de. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
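    PyGOLD's actual API is documented at the links above. Purely to illustrate the kind of round trip it automates, here is a generic sketch that rewrites one key in a 'key = value' style configuration file; the file name, key name, and format details are assumptions rather than GOLD's real syntax:

        def set_conf_value(path, key, value):
            """Rewrite 'key = value' lines in a simple text configuration file."""
            lines = []
            with open(path) as fh:
                for line in fh:
                    name, sep, _ = line.partition("=")
                    if sep and name.strip() == key:
                        line = f"{name.rstrip()} = {value}\n"
                    lines.append(line)
            with open(path, "w") as fh:
                fh.writelines(lines)

        # E.g., point each docking run of a screening loop at a new ligand file.
        set_conf_value("gold.conf", "ligand_data_file", "ligand_0042.mol2")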

  10. Geovisualisation of relief in a virtual reality system on the basis of low-level aerial imagery

    NASA Astrophysics Data System (ADS)

    Halik, Łukasz; Smaczyński, Maciej

    2017-12-01

    The aim of the following paper was to present the geomatic process of transforming low-level aerial imagery obtained with unmanned aerial vehicles (UAV) into a digital terrain model (DTM) and implementing the model in a virtual reality (VR) system. The object of the study was a natural aggregate heap of irregular shape with elevation differences of up to 11 m. Based on the obtained photos, three point clouds (varying in the level of detail) were generated for the 20,000 m2 area. For further analyses, the researchers selected the point cloud with the best ratio of accuracy to output file size. This choice was made based on seven control points of the heap surveyed in the field and the corresponding points in the generated 3D model. The obtained differences of several centimetres between the control points in the field and the ones from the model may testify to the usefulness of the described algorithm for creating large-scale DTMs for engineering purposes. Finally, the chosen model was implemented in the VR system, which enables lifelike exploration of the 3D terrain relief in real time thanks to the first-person view (FPV) mode. In this mode, the user observes an object with the aid of a head-mounted display (HMD), experiencing the geovisualisation from the inside and virtually analysing the terrain as a direct animator of the observations.

  11. NPSNET: Aural cues for virtual world immersion

    NASA Astrophysics Data System (ADS)

    Dahl, Leif A.

    1992-09-01

    NPSNET is a low-cost visual and aural simulation system designed and implemented at the Naval Postgraduate School. NPSNET is an example of a virtual world simulation environment that incorporates real-time aural cues through software-hardware interaction. In the current implementation of NPSNET, a graphics workstation functions in the sound server role, which involves sending and receiving networked sound message packets across a Local Area Network composed of multiple graphics workstations. The network messages contain sound file identification information that is transmitted from the sound server across an RS-422 communication line to a serial-to-MIDI (Musical Instrument Digital Interface) converter. The MIDI converter, in turn, relays the sound byte to a sampler, an electronic recording and playback device. The sampler correlates the hexadecimal input with a specific note or stored sound and sends it as an audio signal to speakers via an amplifier. The realism of a simulation is improved by involving multiple participant senses and removing external distractions. This thesis describes the incorporation of sound as aural cues and the enhancement they provide in the virtual simulation environment of NPSNET.

  12. Field Experiments using Telepresence and Virtual Reality to Control Remote Vehicles: Application to Mars Rover Missions

    NASA Technical Reports Server (NTRS)

    Stoker, Carol

    1994-01-01

    This paper will describe a series of field experiments to develop and demonstrate the use of telepresence and virtual reality systems for controlling rover vehicles on planetary surfaces. In 1993, NASA Ames deployed a Telepresence-Controlled Remotely Operated underwater Vehicle (TROV) into an ice-covered sea environment in Antarctica. The goal of the mission was to perform scientific exploration of an unknown environment using a remote vehicle with telepresence and virtual reality as a user interface. The vehicle was operated both locally, from above a dive hole in the ice through which it was launched, and remotely over a satellite communications link from a control room at NASA's Ames Research Center, for over two months. Remote control used a bidirectional Internet link to the vehicle control computer. The operator viewed live stereo video from the TROV along with a computer-generated graphic representation of the underwater terrain showing the vehicle state and other related information. The actual vehicle could be driven either from within the virtual environment or through a telepresence interface. In March 1994, a second field experiment was performed in which the remote control system developed for the Antarctic TROV mission was used to control the Russian Marsokhod rover, an advanced planetary surface rover intended for launch in 1998. Marsokhod consists of a 6-wheel chassis and is capable of traversing several kilometers of terrain each day. The rover can be controlled remotely, but is also capable of performing autonomous traverses. The rover was outfitted with a manipulator arm capable of deploying a small instrument, collecting soil samples, etc. The Marsokhod rover was deployed at Amboy Crater in the Mojave Desert, a Mars analog site, and controlled remotely from Los Angeles in two operating modes: (1) a Mars rover mission simulation with long time delay and (2) a Lunar rover mission simulation with live action video. A team of planetary geologists participated in the mission simulation. The scientific goal was to determine what could be learned about the geologic context of the site using the imaging and mobility capabilities provided by the Marsokhod system in these two modes of operation. I will discuss the lessons learned from these experiments in terms of the strategy for performing Mars surface exploration using rovers. This research is supported by the Solar System Exploration Exobiology, Geology, and Advanced Technology programs.

  13. 3D for Geosciences: Interactive Tangibles and Virtual Models

    NASA Astrophysics Data System (ADS)

    Pippin, J. E.; Matheney, M.; Kitsch, N.; Rosado, G.; Thompson, Z.; Pierce, S. A.

    2016-12-01

    Point cloud processing provides a method of studying and modelling geologic features relevant to geoscience systems and processes. Here, software including Skanect, MeshLab, Blender, PDAL, and PCL is used in conjunction with 3D scanning hardware, including a Structure scanner and a Kinect camera, to create and analyze point cloud images of small-scale topography, karst features, tunnels, and structures at high resolution. This project successfully scanned internal karst features ranging from small stalactites to large rooms, as well as an external waterfall feature. For comparison purposes, multiple scans of the same object were merged into single object files both automatically, using commercial software, and manually, using open source libraries and code. Files in .ply format were manually converted into numeric data sets and analyzed for similar regions between files in order to match them together. A numeric process can be assumed to be more powerful and efficient than the manual method, although it may lack useful features that GUIs offer. The digital models have applications in mining as an efficient replacement for topography functions such as measuring distances and areas. Additionally, it is possible to build simulation models such as drilling templates and calculations related to 3D spaces. Advantages of the methods described here include the relatively short time needed to obtain data and the easy transport of the equipment. With regard to open-pit mining, obtaining precise 3D images of large surfaces and georeferencing the scan data to interactive maps would be a high-value tool. The digital 3D images obtained from scans may be saved as printable files to create tangible 3D-printed models based on scientific information, as well as digital "worlds" able to be navigated virtually. The data, models, and algorithms explored here can be used to convey complex scientific ideas to a range of professionals and audiences.
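    As a concrete example of the manual route described above, an ASCII .ply file can be reduced to a numeric vertex array with only the standard library; this sketch assumes x, y, z are the first three vertex properties, and the file name is illustrative:

        def read_ply_vertices(path):
            """Parse an ASCII PLY file and return its vertices as [x, y, z] rows."""
            with open(path) as fh:
                assert fh.readline().strip() == "ply"
                n_vertices = 0
                for line in fh:                       # scan the header
                    if line.startswith("element vertex"):
                        n_vertices = int(line.split()[-1])
                    elif line.strip() == "end_header":
                        break
                return [[float(v) for v in fh.readline().split()[:3]]
                        for _ in range(n_vertices)]

        verts = read_ply_vertices("heap_scan.ply")
        print(len(verts), "vertices; first:", verts[0])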

  14. Incorporating Brokers within Collaboration Environments

    NASA Astrophysics Data System (ADS)

    Rajasekar, A.; Moore, R.; de Torcy, A.

    2013-12-01

    A collaboration environment, such as the integrated Rule Oriented Data System (iRODS - http://irods.diceresearch.org), provides interoperability mechanisms for accessing storage systems, authentication systems, messaging systems, information catalogs, networks, and policy engines from a wide variety of clients. The interoperability mechanisms function as brokers, translating actions requested by clients into the protocol required by a specific technology. The iRODS data grid is used to enable collaborative research within the hydrology, seismology, earth science, climate, oceanography, plant biology, astronomy, physics, and genomics disciplines. Although each domain has unique resources, data formats, semantics, and protocols, the iRODS system provides a generic framework that is capable of managing collaborative research initiatives that span multiple disciplines. Each interoperability mechanism (broker) is linked to a name space that enables unified access across the heterogeneous systems. The collaboration environment provides not only support for brokers, but also support for virtualization of name spaces for users, files, collections, storage systems, metadata, and policies. The broker enables access to data or information in a remote system using the appropriate protocol, while the collaboration environment provides a uniform naming convention for accessing and manipulating each object. Within the NSF DataNet Federation Consortium project (http://www.datafed.org), three basic types of interoperability mechanisms have been identified and applied: 1) drivers for managing manipulation at the remote resource (such as data subsetting), 2) micro-services that execute the protocol required by the remote resource, and 3) policies for controlling the execution. For example, drivers have been written for manipulating NetCDF and HDF formatted files within THREDDS servers. Micro-services have been written that manage interactions with the CUAHSI data repository, the DataONE information catalog, and the GeoBrain broker. Policies have been written that manage the transfer of messages between an iRODS message queue and the Advanced Message Queuing Protocol. Examples of these brokering mechanisms will be presented. The DFC collaboration environment serves as the intermediary between community resources and compute grids, enabling reproducible data-driven research. It is possible to create an analysis workflow that retrieves data subsets from a remote server, assembles the required input files, automates the execution of the workflow, automatically tracks its provenance, and shares the input files, workflow, and output files. A collaborator can re-execute a shared workflow, compare results, change input files, and re-execute an analysis.
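    For a flavor of what the unified name space looks like from a client, here is a minimal sketch using the python-irodsclient package; the host, credentials, and logical path are placeholders:

        from irods.session import iRODSSession

        # Placeholder connection details for an iRODS zone.
        with iRODSSession(host="irods.example.org", port=1247,
                          user="alice", password="secret",
                          zone="demoZone") as session:
            # The same logical path works regardless of which storage
            # resource or broker actually holds the bytes -- the grid
            # virtualizes the name space.
            obj = session.data_objects.get("/demoZone/home/alice/obs/run42.nc")
            with obj.open("r") as fh:
                header = fh.read(64)
            print(obj.name, obj.size, header[:16])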

  15. Zen and the Art of Virtual Observatory Maintenance

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2014-12-01

    The NASA Science Mission Directorate Science Plan stresses that the primary goals of Heliophysics research focus on understanding the Sun's influence on the Earth and other bodies in the solar system. The NASA Heliophysics Division has adopted the Virtual Observatory, or VxO, concept in order to enable scientists to easily discover and access all data products relevant to these goals via web portals that act as clearinghouses. Furthermore, Heliophysics discipline scientists have defined the Space Physics Archive Search and Extract (SPASE) metadata schema in order to describe the contents of such data products with detail extending all the way down to the parameter level. One SPASE metadata description file must be written to describe each data product at the global level. The collection of such data product metadata description files, stored in repositories, provides the searchable content that VxO web sites require in order to match the list of products to the unique needs of each researcher. The VxO metadata repository content also allows one to provide links to each unique data file contained in the full complement of files, on a per-data-product basis. These links are contained within SPASE "Granule" description files and permit uniform access, worldwide, regardless of data server location, thus enabling the VxO clearinghouse capability. The VxO concept is sound in theory but difficult in practice, given that the Heliophysics data environment is diverse, ever expanding, and volatile. Thus, it is imperative to keep the VxO metadata repositories up to date in order to provide a complete, accurate, and current portrayal of the data environment. Such attention to detail is not a VxO desire but a necessity in order to support Heliophysics researchers and foster VxO user loyalty. We present an application of these basic tenets to the construction of a VxO repository dedicated to providing access to the CDF-formatted data collection hosted on the NASA Goddard CDAWeb data server. Note that the CDF format is self-describing and thus provides a source of information for initiating SPASE metadata descriptions at the data product level. Also, the CDAWeb data server provides high-quality data product tracking down to the individual data file level, permitting easy updating of SPASE Granule metadata.

  16. Local Area Network Strategies and Guidelines for a Peruvian Air Force Computer Center

    DTIC Science & Technology

    1991-03-01

    service elements to support application processes such as job management, and financial data exchange. The layer also supports the virtual terminal and... virtual file concept. [Ref.3 :p. 285] Essentially, the lowest three layers are concerned with the communication protocols associated with the data...General de la Fuerza Aerea Peruana Lima, Republica del Peru 5. Escuela de Oficiales de la Fuerza Aerea Peruana 2 Biblioteca del Grupo del Instruccion Base

  17. FITSManager: Management of Personal Astronomical Data

    NASA Astrophysics Data System (ADS)

    Cui, Chenzhou; Fan, Dongwei; Zhao, Yongheng; Kembhavi, Ajit; He, Boliang; Cao, Zihuang; Li, Jian; Nandrekar, Deoyani

    2011-07-01

    With the increase of personal storage capacity, it is easy to find hundreds to thousands of FITS files on the personal computer of an astrophysicist. Because the Flexible Image Transport System (FITS) is a professional data format initiated by astronomers and used mainly within that small community, data management toolkits for FITS files are few. Astronomers need a powerful tool to help them manage their local astronomical data. Although the Virtual Observatory (VO) is a network-oriented astronomical research environment, its applications and related technologies provide useful solutions to enhance the management and utilization of astronomical data hosted on an astronomer's personal computer. FITSManager is such a tool, providing astronomers efficient management and utilization of their local data and bringing the VO to astronomers in a seamless and transparent way. FITSManager provides a rich set of functions for FITS file management, such as thumbnails, previews, type-dependent icons, header keyword indexing and search, and collaboration with other tools and online services. The development of FITSManager is an effort to fill the gap between the management and analysis of astronomical data.
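    Header keyword indexing of a local FITS collection, one of the functions listed above, can be sketched in a few lines with astropy; the directory layout and the choice of indexed keywords are assumptions:

        import glob
        from astropy.io import fits

        # Build an in-memory index of selected header keywords for every FITS
        # file under a directory; a real tool would persist this to a database.
        index = {}
        for path in glob.glob("data/**/*.fits", recursive=True):
            with fits.open(path) as hdul:
                hdr = hdul[0].header
                index[path] = {key: hdr.get(key) for key in
                               ("OBJECT", "DATE-OBS", "TELESCOP", "EXPTIME")}

        # Search: all files observing a given object.
        hits = [p for p, meta in index.items() if meta["OBJECT"] == "M31"]
        print(len(hits), "files for M31")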

  18. Five years of experience teaching pathology to dental students using the WebMicroscope

    PubMed Central

    2011-01-01

    Background We describe the development and evaluation of the user-friendly web-based virtual microscopy system WebMicroscope for teaching dental students basic and oral pathology. Traditional student microscopes were replaced by computer workstations. Methods The transition of the basic and oral pathology courses from light to virtual microscopy was completed gradually over a five-year period. A pilot study was conducted in the academic year 2005/2006 to estimate the feasibility of integrating virtual microscopy into a traditional light microscopy-based pathology course. The entire training set of glass slides was subsequently converted to virtual slides and placed on the WebMicroscope server. With access to fully digitized slides on the web through a browser and a viewer plug-in, the computer has become a perfect companion for the student. Results The study material now consists of over 400 fully digitized slides covering 15 entities in basic and systemic pathology and 15 entities in oral pathology. Digitized slides are linked with still macro- and microscopic images, organized with clinical information into virtual cases, and supplemented with text files, a syllabus, PowerPoint presentations and animations on the web, serving additionally as material for individual studies. After their examinations, the students rated the use of the software, the quality of the images, the ease of handling the images, and the effective use of virtual slides during the laboratory practicals. Responses were evaluated on a standardized scale. Owing to the positive opinions and support from the students, the satisfaction surveys have shown progressive improvement over the past 5 years. The WebMicroscope as a didactic tool for laboratory practicals was rated over 8 on a 1-10 scale for basic and systemic pathology and 9/10 for oral pathology, especially as various students' suggestions were implemented. Overall, the quality of the images was rated as very good. Conclusions An overwhelming majority of our students regarded the possibility of using virtual slides at their convenience as highly desirable. Our students and faculty consider the use of the virtual microscope for the study of basic as well as oral pathology a significant improvement over the light microscope. PMID:21489183

  19. eF-seek: prediction of the functional sites of proteins by searching for similar electrostatic potential and molecular surface shape.

    PubMed

    Kinoshita, Kengo; Murakami, Yoichi; Nakamura, Haruki

    2007-07-01

    We have developed a method to predict ligand-binding sites in a new protein structure by searching for similar binding sites in the Protein Data Bank (PDB). The similarities are measured according to the shapes of the molecular surfaces and their electrostatic potentials. A new web server, eF-seek, provides an interface to our search method. It simply requires a coordinate file in the PDB format, and generates a prediction result as a virtual complex structure, with the putative ligands in a PDB format file as the output. In addition, the predicted interacting interface is displayed to facilitate the examination of the virtual complex structure on our own applet viewer with the web browser (URL: http://eF-site.hgc.jp/eF-seek).

  20. MolabIS--an integrated information system for storing and managing molecular genetics data.

    PubMed

    Truong, Cong V C; Groeneveld, Linn F; Morgenstern, Burkhard; Groeneveld, Eildert

    2011-10-31

    Long-term sample storage, tracing of data flow and data export for subsequent analyses are of great importance in genetics studies. Therefore, molecular labs need a proper information system to handle an increasing amount of data from different projects. We have developed a molecular lab information management system (MolabIS). It was implemented as a web-based system allowing users to capture original data at each step of their workflow. MolabIS provides essential functionality for managing information on individuals, tracking samples and storage locations, capturing raw files, importing final data from external files, searching results, and accessing and modifying data. Further important features are options to generate ready-to-print reports and to convert sequence and microsatellite data into various data formats, which can be used as input files in subsequent analyses. Moreover, MolabIS also provides a tool for data migration. MolabIS is designed for small-to-medium sized labs conducting Sanger sequencing and microsatellite genotyping to store and efficiently handle a relatively large amount of data. MolabIS not only helps to avoid time-consuming tasks but also ensures the availability of data for further analyses. The software is packaged as a virtual appliance which can run on different platforms (e.g. Linux, Windows). MolabIS can be distributed to a wide range of molecular genetics labs since it was developed according to a general data model. Released under the GPL, MolabIS is freely available at http://www.molabis.org.

  1. MolabIS - An integrated information system for storing and managing molecular genetics data

    PubMed Central

    2011-01-01

    Background Long-term sample storage, tracing of data flow and data export for subsequent analyses are of great importance in genetics studies. Therefore, molecular labs need a proper information system to handle an increasing amount of data from different projects. Results We have developed a molecular lab information management system (MolabIS). It was implemented as a web-based system allowing users to capture original data at each step of their workflow. MolabIS provides essential functionality for managing information on individuals, tracking samples and storage locations, capturing raw files, importing final data from external files, searching results, and accessing and modifying data. Further important features are options to generate ready-to-print reports and to convert sequence and microsatellite data into various data formats, which can be used as input files in subsequent analyses. Moreover, MolabIS also provides a tool for data migration. Conclusions MolabIS is designed for small-to-medium sized labs conducting Sanger sequencing and microsatellite genotyping to store and efficiently handle a relatively large amount of data. MolabIS not only helps to avoid time-consuming tasks but also ensures the availability of data for further analyses. The software is packaged as a virtual appliance which can run on different platforms (e.g. Linux, Windows). MolabIS can be distributed to a wide range of molecular genetics labs since it was developed according to a general data model. Released under the GPL, MolabIS is freely available at http://www.molabis.org. PMID:22040322

  2. Development of climate data storage and processing model

    NASA Astrophysics Data System (ADS)

    Okladnikov, I. G.; Gordov, E. P.; Titov, A. G.

    2016-11-01

    We present a storage and processing model for climate datasets elaborated in the framework of a virtual research environment (VRE) for climate and environmental monitoring and analysis of the impact of climate change on socio-economic processes on local and regional scales. The model is based on a "shared nothing" distributed computing architecture and assumes a computing network where each computing node is independent and self-sufficient. Each node holds dedicated software for the processing and visualization of geospatial data, providing programming interfaces to communicate with the other nodes. The nodes are interconnected by a local network or the Internet and exchange data and control instructions via SSH connections and web services. Geospatial data are represented by collections of netCDF files stored in a directory hierarchy within a file system. To speed up data reading and processing, three approaches are proposed: precalculation of intermediate products, distribution of data across multiple storage systems (with or without redundancy), and caching and reuse of previously obtained products. For fast search and retrieval of the required data, a metadata database is developed according to the data storage and processing model. It contains descriptions of the space-time features of the datasets available for processing and their locations, as well as descriptions and run options of the software components for data analysis and visualization. Together, the model and the metadata database will provide a reliable technological basis for the development of a high-performance virtual research environment for climatic and environmental monitoring.
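    A minimal version of such a metadata database can be populated by walking the netCDF directory tree; this sketch assumes the netCDF4 package and records only file paths and variable names, where a real system would also store space-time coverage:

        import glob
        import sqlite3
        from netCDF4 import Dataset

        db = sqlite3.connect("metadata.db")
        db.execute("CREATE TABLE IF NOT EXISTS datasets (path TEXT, variable TEXT)")

        # Walk the collection and record which variables live in which file.
        for path in glob.glob("climate/**/*.nc", recursive=True):
            with Dataset(path) as ds:
                db.executemany("INSERT INTO datasets VALUES (?, ?)",
                               [(path, name) for name in ds.variables])
        db.commit()

        # A node can now locate files holding a given variable without
        # opening every file in the hierarchy.
        rows = db.execute("SELECT path FROM datasets WHERE variable = ?",
                          ("t2m",)).fetchall()
        print(len(rows), "files contain t2m")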

  3. Accelerating Virtual High-Throughput Ligand Docking: current technology and case study on a petascale supercomputer.

    PubMed

    Ellingson, Sally R; Dakshanamurthy, Sivanesan; Brown, Milton; Smith, Jeremy C; Baudry, Jerome

    2014-04-25

    In this paper we give the current state of high-throughput virtual screening. We describe a case study of using a task-parallel MPI (Message Passing Interface) version of Autodock4 [1], [2] to run a virtual high-throughput screen of one million compounds on the Jaguar Cray XK6 supercomputer at Oak Ridge National Laboratory. We include a description of the scripts developed to increase the efficiency of the predocking file preparation and postdocking analysis. A detailed tutorial, scripts, and source code for this MPI version of Autodock4 are available online at http://www.bio.utk.edu/baudrylab/autodockmpi.htm.
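    The task-parallel structure described above can be sketched with mpi4py: each rank docks a strided slice of the ligand list. The directory layout and the exact autodock4 invocation are illustrative, not the paper's scripts:

        import glob
        import subprocess
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        ligands = sorted(glob.glob("ligands/*.pdbqt"))

        # Round-robin assignment: rank r docks ligands r, r+size, r+2*size, ...
        for lig in ligands[rank::size]:
            subprocess.run(["autodock4",
                            "-p", lig.replace(".pdbqt", ".dpf"),
                            "-l", lig.replace(".pdbqt", ".dlg")], check=True)

        comm.Barrier()
        if rank == 0:
            print("screen complete:", len(ligands), "ligands over", size, "ranks")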

  4. [Whole slide imaging technology: from digitization to online applications].

    PubMed

    Ameisen, David; Le Naour, Gilles; Daniel, Christel

    2012-11-01

    As e-health becomes essential to modern care, whole slide images (virtual slides) are now an important clinical, teaching and research tool in pathology. Virtual microscopy consists of digitizing a glass slide by acquiring hundreds of tiles of regions of interest at different zoom levels and assembling them into a structured file. This gigapixel image can then be viewed remotely on a terminal, exactly the way pathologists use a microscope. In this article, we first describe the key elements of this technology, from acquisition, using a scanner or a motorized microscope, to the broadcasting of virtual slides through a local or remote viewer over an intranet or Internet connection. As virtual slides are now commonly used in virtual classrooms, clinical data and research databases, we highlight the main issues regarding their use in modern pathology. Emphasis is placed on quality assurance policies, standardization and scaling. © 2012 médecine/sciences – Inserm / SRMS.
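    The tile pyramid at the heart of a virtual slide can be illustrated with Pillow on a modestly sized image (true gigapixel slides require dedicated readers such as OpenSlide); the tile size and paths are arbitrary choices:

        import os
        from PIL import Image

        TILE = 256
        os.makedirs("tiles", exist_ok=True)
        img = Image.open("slide_region.png")   # illustrative input

        # Level 0 is full resolution; each next level halves both dimensions,
        # mirroring the zoom levels a virtual-slide viewer requests.
        level = 0
        while img.width >= TILE and img.height >= TILE:
            for y in range(0, img.height - img.height % TILE, TILE):
                for x in range(0, img.width - img.width % TILE, TILE):
                    tile = img.crop((x, y, x + TILE, y + TILE))
                    tile.save(f"tiles/L{level}_x{x}_y{y}.png")
            img = img.resize((img.width // 2, img.height // 2))
            level += 1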

  5. Information integration for a sky survey by data warehousing

    NASA Astrophysics Data System (ADS)

    Luo, A.; Zhang, Y.; Zhao, Y.

    The virtualization service of the data system for the sky survey LAMOST is very important for astronomers. The service needs to integrate information from data collections, catalogs and references, and to support simple federation of a set of distributed files and associated metadata. Data warehousing has been in existence for several years and has demonstrated superiority over traditional relational database management systems by providing novel indexing schemes that support efficient on-line analytical processing (OLAP) of large databases. Relational database systems such as Oracle now support the warehouse capability, including extensions to the SQL language to support OLAP operations, and a number of metadata management tools have been created. By applying data warehousing, the information integration for LAMOST can effectively provide data and knowledge on-line.

  6. Cyber Moat: Adaptive Virtualized Network Framework for Deception and Disinformation

    DTIC Science & Technology

    2016-12-12

    As one type of bots, web crawlers have been leveraged by search engines (e.g., Googlebot by Google) to popularize websites through website indexing...However, the number of malicious bots is increasing too. To regulate the behavior of crawlers, most websites include a file called "robots.txt" that...However, "robots.txt" only provides a guideline, and almost all malicious robots ignore it. Moreover, since this file is publicly available, malicious

  7. Protecting Files Hosted on Virtual Machines With Out-of-Guest Access Control

    DTIC Science & Technology

    2017-12-01

    analyzes the design and methodology of the implemented mechanism, while Chapter 4 explains the test methodology, test cases, and performance testing ...SACL, we verify that the user or group accessing the file has sufficient permissions. If that is correct, the callback function returns control to...ferify. In the first section, we validate our design of ferify. Next, we explain the tests we performed to verify that ferify has the results we expected

  8. Data Elevator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BYNA, SUNRENDRA; DONG, BIN; WU, KESHENG

    Data Elevator: Efficient Asynchronous Data Movement in Hierarchical Storage Systems. Multi-layer storage subsystems, including SSD-based burst buffers and disk-based parallel file systems (PFS), are becoming part of HPC systems. However, software for this storage hierarchy is still in its infancy. Applications may have to explicitly move data among the storage layers. We propose Data Elevator for transparently and efficiently moving data between a burst buffer and a PFS. Users specify the final destination for their data, typically on the PFS; Data Elevator intercepts the I/O calls, stages data on the burst buffer, and then asynchronously transfers the data to their final destination in the background. This system allows extensive optimizations, such as overlapping read and write operations, choosing I/O modes, and aligning buffer boundaries. In tests with large-scale scientific applications, Data Elevator is as much as 4.2X faster than Cray DataWarp, the state-of-the-art software for burst buffers, and 4X faster than directly writing to the PFS. The Data Elevator library uses HDF5's Virtual Object Layer (VOL) to intercept parallel I/O calls that write data to the PFS. The intercepted calls are redirected to the Data Elevator, which provides a handle to write the file to a faster, intermediate burst buffer system. Once the application finishes writing the data to the burst buffer, the Data Elevator job uses HDF5 to move the data to the final destination in an asynchronous manner. Hence, the Data Elevator library is currently useful for applications that call HDF5 for writing data files. Also, the Data Elevator depends on the HDF5 VOL functionality.
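    The stage-then-drain pattern that Data Elevator implements inside the HDF5 VOL can be mimicked, crudely, at application level: write to a burst-buffer path, then move the file to the PFS in a background thread. The paths below are placeholders; the actual library does this transparently and with far more optimization:

        import shutil
        import threading
        import numpy as np
        import h5py

        BURST = "/mnt/burst/out.h5"      # fast intermediate tier (placeholder)
        PFS = "/mnt/pfs/out.h5"          # final destination (placeholder)

        # The application writes at burst-buffer speed.
        with h5py.File(BURST, "w") as f:
            f.create_dataset("field", data=np.random.rand(1024, 1024))

        # Drain to the parallel file system asynchronously; compute continues.
        mover = threading.Thread(target=shutil.move, args=(BURST, PFS))
        mover.start()
        # ... next simulation step runs here ...
        mover.join()
        print("data now at", PFS)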

  9. ISIS and META projects

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth; Cooper, Robert; Marzullo, Keith

    1990-01-01

    The ISIS project has developed a new methodology, virtual synchrony, for writing robust distributed software. High-performance multicast, large-scale applications, and wide area networks are the focus of interest. Several interesting applications that exploit the strengths of ISIS, including an NFS-compatible replicated file system, are being developed. The META project addresses distributed control in a soft real-time environment incorporating feedback. This domain encompasses examples as diverse as monitoring inventory and consumption on a factory floor, and performing load-balancing on a distributed computing system. One of the first uses of META is distributed application management: the tasks of configuring a distributed program, dynamically adapting to failures, and monitoring its performance. Recent progress and current plans are reported.

  10. VirGO: A Visual Browser for the ESO Science Archive Facility

    NASA Astrophysics Data System (ADS)

    Chéreau, Fabien

    2012-04-01

    VirGO is the next generation Visual Browser for the ESO Science Archive Facility developed by the Virtual Observatory (VO) Systems Department. It is a plug-in for the popular open source software Stellarium adding capabilities for browsing professional astronomical data. VirGO gives astronomers the possibility to easily discover and select data from millions of observations in a new visual and intuitive way. Its main feature is to perform real-time access and graphical display of a large number of observations by showing instrumental footprints and image previews, and to allow their selection and filtering for subsequent download from the ESO SAF web interface. It also allows the loading of external FITS files or VOTables, the superimposition of Digitized Sky Survey (DSS) background images, and the visualization of the sky in a 'real life' mode as seen from the main ESO sites. All data interfaces are based on Virtual Observatory standards which allow access to images and spectra from external data centers, and interaction with the ESO SAF web interface or any other VO applications supporting the PLASTIC messaging system.

  11. SensorDB: a virtual laboratory for the integration, visualization and analysis of varied biological sensor data.

    PubMed

    Salehi, Ali; Jimenez-Berni, Jose; Deery, David M; Palmer, Doug; Holland, Edward; Rozas-Larraondo, Pablo; Chapman, Scott C; Georgakopoulos, Dimitrios; Furbank, Robert T

    2015-01-01

    To our knowledge, there is no software or database solution that supports large volumes of biological time series sensor data efficiently and enables data visualization and analysis in real time. Existing solutions for managing data typically use unstructured file systems or relational databases. These systems are not designed to provide instantaneous response to user queries. Furthermore, they do not support rapid data analysis and visualization to enable interactive experiments. In large scale experiments, this behaviour slows research discovery, discourages the widespread sharing and reuse of data that could otherwise inform critical decisions in a timely manner and encourage effective collaboration between groups. In this paper we present SensorDB, a web based virtual laboratory that can manage large volumes of biological time series sensor data while supporting rapid data queries and real-time user interaction. SensorDB is sensor agnostic and uses web-based, state-of-the-art cloud and storage technologies to efficiently gather, analyse and visualize data. Collaboration and data sharing between different agencies and groups is thereby facilitated. SensorDB is available online at http://sensordb.csiro.au.

  12. VO-KOREL: A Fourier Disentangling Service of the Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Škoda, Petr; Hadrava, Petr; Fuchs, Jan

    2012-04-01

    VO-KOREL is a web service exploiting Virtual Observatory technology to provide astronomers with an intuitive graphical front-end and a distributed computing back-end running the most recent version of the Fourier disentangling code KOREL. The system integrates the ideas of the e-shop basket, preserving the privacy of every user through transfer encryption and access authentication, with features of a laboratory notebook, allowing easy housekeeping of both input parameters and final results, and it explores the newly emerging technology of cloud computing. While the web-based front-end allows the user to submit data and parameter files, edit parameters, manage a job list, resubmit or cancel running jobs and, mainly, watch the text and graphical results of a disentangling process, the main part of the back-end is a simple job queue submission system executing multiple instances of the FORTRAN code KOREL in parallel. This may be easily extended for GRID-based deployment on massively parallel computing clusters. A short introduction to the underlying technologies is given, briefly mentioning advantages as well as bottlenecks of the design used.
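    A back-end of this shape (jobs in a queue, parallel FORTRAN processes out) can be sketched with the standard library alone; the 'korel' executable name and per-job directory layout are placeholders:

        import subprocess
        from concurrent.futures import ProcessPoolExecutor

        def run_job(job_dir):
            """Run one KOREL instance in its own directory of input files."""
            result = subprocess.run(["korel"], cwd=job_dir,
                                    capture_output=True, text=True)
            return job_dir, result.returncode

        if __name__ == "__main__":
            jobs = ["jobs/usr1_0001", "jobs/usr1_0002", "jobs/usr2_0001"]

            # Execute several disentangling runs in parallel, as the
            # back-end's queue submission system does.
            with ProcessPoolExecutor(max_workers=2) as pool:
                for job_dir, rc in pool.map(run_job, jobs):
                    print(job_dir, "finished with exit code", rc)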

  13. 76 FR 69311 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing and Immediate Effectiveness of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-08

    ...'') and broker-dealers increased authority and flexibility to offer new and unique market data to..., providing virtually limitless opportunities for entrepreneurs who wish to produce and distribute their own...

  14. Legato: Personal Computer Software for Analyzing Pressure-Sensitive Paint Data

    NASA Technical Reports Server (NTRS)

    Schairer, Edward T.

    2001-01-01

    'Legato' is personal computer software for analyzing radiometric pressure-sensitive paint (PSP) data. The software is written in the C programming language and executes under Windows 95/98/NT operating systems. It includes all operations normally required to convert pressure-paint image intensities to normalized pressure distributions mapped to physical coordinates of the test article. The program can analyze data from both single- and bi-luminophore paints and provides for both in situ and a priori paint calibration. In addition, there are functions for determining paint calibration coefficients from calibration-chamber data. The software is designed as a self-contained, interactive research tool that requires as input only the bare minimum of information needed to accomplish each function, e.g., images, model geometry, and paint calibration coefficients (for a priori calibration) or pressure-tap data (for in situ calibration). The program includes functions that can be used to generate needed model geometry files for simple model geometries (e.g., airfoils, trapezoidal wings, rotor blades) based on the model planform and airfoil section. All data files except images are in ASCII format and thus are easily created, read, and edited. The program does not use database files. This simplifies setup but makes the program inappropriate for analyzing massive amounts of data from production wind tunnels. Program output consists of Cartesian plots, false-colored real and virtual images, pressure distributions mapped to the surface of the model, assorted ASCII data files, and a text file of tabulated results. Graphical output is displayed on the computer screen and can be saved as publication-quality (PostScript) files.

  15. Implementation of a fast 16-Bit dynamic clamp using LabVIEW-RT.

    PubMed

    Kullmann, Paul H M; Wheeler, Diek W; Beacom, Joshua; Horn, John P

    2004-01-01

    The dynamic-clamp method provides a powerful electrophysiological tool for creating virtual ionic conductances in living cells and studying their influence on membrane potential. Here we describe G-clamp, a new way to implement a dynamic clamp using the real-time version of the LabVIEW programming environment together with a Windows host, an embedded microprocessor that runs a real-time operating system, and a multifunction data-acquisition board. The software includes descriptions of a fast voltage-dependent sodium conductance, delayed rectifier, M-type and A-type potassium conductances, and a leak conductance. The system can also read synaptic conductance waveforms from preassembled data files. These virtual conductances can be reliably implemented at speeds of up to 43 kHz while simultaneously saving two channels of data with 16-bit precision. G-clamp also includes utilities for measuring current-voltage relations, synaptic strength, and synaptic gain. Taking an approach built on a commercially available software/hardware platform has resulted in a system that is easy to assemble and upgrade. In addition, the graphical programming structure of LabVIEW should make it relatively easy for others to adapt G-clamp for new experimental applications.

  16. Simple re-instantiation of small databases using cloud computing.

    PubMed

    Tan, Tin Wee; Xie, Chao; De Silva, Mark; Lim, Kuan Siong; Patro, C Pawan K; Lim, Shen Jean; Govindarajan, Kunde Ramamoorthy; Tong, Joo Chuan; Choo, Khar Heng; Ranganathan, Shoba; Khan, Asif M

    2013-01-01

    Small bioinformatics databases, unlike institutionally funded large databases, are vulnerable to discontinuation and many reported in publications are no longer accessible. This leads to irreproducible scientific work and redundant effort, impeding the pace of scientific progress. We describe a Web-accessible system, available online at http://biodb100.apbionet.org, for archival and future on demand re-instantiation of small databases within minutes. Depositors can rebuild their databases by downloading a Linux live operating system (http://www.bioslax.com), preinstalled with bioinformatics and UNIX tools. The database and its dependencies can be compressed into an ".lzm" file for deposition. End-users can search for archived databases and activate them on dynamically re-instantiated BioSlax instances, run as virtual machines over the two popular full virtualization standard cloud-computing platforms, Xen Hypervisor or vSphere. The system is adaptable to increasing demand for disk storage or computational load and allows database developers to use the re-instantiated databases for integration and development of new databases. Herein, we demonstrate that a relatively inexpensive solution can be implemented for archival of bioinformatics databases and their rapid re-instantiation should the live databases disappear.

  17. Simple re-instantiation of small databases using cloud computing

    PubMed Central

    2013-01-01

    Background Small bioinformatics databases, unlike institutionally funded large databases, are vulnerable to discontinuation and many reported in publications are no longer accessible. This leads to irreproducible scientific work and redundant effort, impeding the pace of scientific progress. Results We describe a Web-accessible system, available online at http://biodb100.apbionet.org, for archival and future on demand re-instantiation of small databases within minutes. Depositors can rebuild their databases by downloading a Linux live operating system (http://www.bioslax.com), preinstalled with bioinformatics and UNIX tools. The database and its dependencies can be compressed into an ".lzm" file for deposition. End-users can search for archived databases and activate them on dynamically re-instantiated BioSlax instances, run as virtual machines over the two popular full virtualization standard cloud-computing platforms, Xen Hypervisor or vSphere. The system is adaptable to increasing demand for disk storage or computational load and allows database developers to use the re-instantiated databases for integration and development of new databases. Conclusions Herein, we demonstrate that a relatively inexpensive solution can be implemented for archival of bioinformatics databases and their rapid re-instantiation should the live databases disappear. PMID:24564380

  18. Social Networking Adapted for Distributed Scientific Collaboration

    NASA Technical Reports Server (NTRS)

    Karimabadi, Homa

    2012-01-01

    Sci-Share is a social networking site with novel, specially designed feature sets to enable simultaneous remote collaboration and sharing of large data sets among scientists. The site will include not only the standard features found on popular consumer-oriented social networking sites such as Facebook and Myspace, but also a number of powerful tools to extend its functionality to a science collaboration site. A Virtual Observatory is a promising technology for making data accessible from various missions and instruments through a Web browser. Sci-Share augments services provided by Virtual Observatories by enabling distributed collaboration and sharing of downloaded and/or processed data among scientists. This will, in turn, increase science returns from NASA missions. Sci-Share also enables better utilization of NASA's high-performance computing resources by providing an easy and central mechanism to access and share large files in users' space or those saved on mass storage. The most common means of remote scientific collaboration today remains the trio of e-mail for electronic communication, FTP for file sharing, and personalized Web sites for dissemination of papers and research results. Each of these tools has well-known limitations. Sci-Share transforms the social networking paradigm into a scientific collaboration environment by offering powerful tools for cooperative discourse and digital content sharing. Sci-Share differentiates itself by serving as an online repository for users' digital content with the following unique features: a) sharing of any file type, any size, from anywhere; b) creation of projects and groups for controlled sharing; c) a module for sharing files on HPC (High Performance Computing) sites; d) universal accessibility of staged files as embedded links on other sites (e.g. Facebook) and tools (e.g. e-mail); e) drag-and-drop transfer of large files, replacing awkward e-mail attachments (and file size limitations); f) enterprise-level data and messaging encryption; and g) an easy-to-use intuitive workflow.

  19. Review of Enabling Technologies to Facilitate Secure Compute Customization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aderholdt, Ferrol; Caldwell, Blake A; Hicks, Susan Elaine

    High performance computing environments are often used for a wide variety of workloads ranging from simulation, data transformation and analysis, and complex workflows to name just a few. These systems may process data for a variety of users, often requiring strong separation between job allocations. There are many challenges to establishing these secure enclaves within the shared infrastructure of high-performance computing (HPC) environments. The isolation mechanisms in the system software are the basic building blocks for enabling secure compute enclaves. There are a variety of approaches, and the focus of this report is to review the different virtualization technologies that facilitate the creation of secure compute enclaves. The report reviews current operating system (OS) protection mechanisms and modern virtualization technologies to better understand their performance/isolation properties. We also examine the feasibility of running "virtualized" computing resources as non-privileged users, and providing controlled administrative permissions for standard users running within a virtualized context. Our examination includes technologies such as Linux containers (LXC [32], Docker [15]) and full virtualization (KVM [26], Xen [5]). We categorize these different approaches to virtualization into two broad groups: OS-level virtualization and system-level virtualization. OS-level virtualization uses containers to allow a single OS kernel to be partitioned to create Virtual Environments (VE), e.g., LXC; the resources within the host's kernel are only virtualized in the sense of separate namespaces. In contrast, system-level virtualization uses hypervisors to manage multiple OS kernels and virtualize the physical resources (hardware) to create Virtual Machines (VM), e.g., Xen, KVM. This terminology of VE and VM, detailed in Section 2, is used throughout the report to distinguish between the two different approaches to providing virtualized execution environments. As part of our technology review we analyzed several current virtualization solutions to assess their vulnerabilities. This included a review of common vulnerabilities and exposures (CVEs) for Xen, KVM, LXC and Docker to gauge their susceptibility to different attacks. The complete details are provided in Section 5 on page 33. Based on this review we concluded that system-level virtualization solutions have many more vulnerabilities than OS-level virtualization solutions. As such, security mechanisms like sVirt (Section 3.3) should be considered when using system-level virtualization solutions in order to protect the host against exploits. The majority of vulnerabilities related to KVM, LXC, and Docker are in specific regions of the system. Therefore, future "zero day attacks" are likely to be in the same regions, which suggests that protecting these areas can simplify the protection of the host and maintain the isolation between users. The evaluations of virtualization technologies done thus far are discussed in Section 4. This includes experiments with 'user' namespaces in VEs, which provide the ability to isolate user privileges and allow a user to run with different UIDs within the container while mapping them to non-privileged UIDs on the host. We have identified Linux namespaces as a promising mechanism to isolate shared resources, while maintaining good performance.
In Section 4.1 we describe our tests with LXC as a non-root user, leveraging namespaces to control UID/GID mappings and support controlled sharing of parallel file systems. We highlight several of these namespace capabilities in Section 6.2.3. The other evaluations performed during this initial phase of work provide baseline performance data for comparing VEs and VMs to purely native execution. In Section 4.2 we present tests using the High-Performance Computing Conjugate Gradient (HPCCG) benchmark to establish baseline performance for a scientific application when run on the native (host) machine in contrast with execution under Docker and KVM. Our tests verified prior studies showing roughly 2-4% overheads in application execution time and MFlops when running in hypervisor-based environments (VMs), as compared to near-native performance with VEs. For more details, see Figures 4.5 (page 28), 4.6 (page 28), and 4.7 (page 29). Additionally, in Section 4.3 we include network measurements for TCP bandwidth performance over the 10GigE interface in our testbed. The native and Docker based tests achieved >= ~9 Gbits/sec, while the KVM configuration only achieved 2.5 Gbits/sec (Table 4.6 on page 32). This may be a configuration issue with our KVM installation, and is a point for further testing as we refine the network settings in the testbed. The initial network tests were done using a bridged networking configuration. The report outline is as follows: Section 1 introduces the report and clarifies the scope of the proj...
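    The 'user' namespace behaviour discussed in this report can be tried on any recent Linux kernel via util-linux's unshare, here driven from Python; inside the new namespace the caller appears as root while gaining no privilege outside it:

        import subprocess

        # Run 'id' inside a new user namespace, mapping the unprivileged
        # caller to UID 0 within that namespace only (util-linux 'unshare').
        out = subprocess.run(["unshare", "--user", "--map-root-user", "id"],
                             capture_output=True, text=True)
        print(out.stdout)   # e.g. uid=0(root) ... valid only in the namespace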

  20. High-Performance Tiled WMS and KML Web Server

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian

    2007-01-01

    This software is an Apache 2.0 module implementing a high-performance map server to support interactive map viewers and virtual planet client software. It can be used in applications that require access to very-high-resolution geolocated images, such as GIS, virtual planet applications, and flight simulators. It serves Web Map Service (WMS) requests that comply with a given request grid from an existing tile dataset. It also generates the KML super-overlay configuration files required to access the WMS image tiles.

  1. Lecturing with a Virtual Whiteboard

    NASA Astrophysics Data System (ADS)

    Milanovic, Zoran

    2006-09-01

    Recent advances in computer technology, word processing software, and projection systems have made traditional whiteboard lecturing obsolete. Tablet personal computers connected to display projectors and running handwriting software have replaced the marker-on-whiteboard method of delivering a lecture. Since the notes can be saved into an electronic file, they can be uploaded to a class website to be perused by the students later. This paper will describe the author's experiences in using this new technology to deliver physics lectures at an engineering school. The benefits and problems discovered will be reviewed and results from a survey of student opinions will be discussed.

  2. Open source tools for management and archiving of digital microscopy data to allow integration with patient pathology and treatment information

    PubMed Central

    2013-01-01

    Background Virtual microscopy includes digitisation of histology slides and the use of computer technologies for complex investigation of diseases such as cancer. However, automated image analysis, or website publishing of such digital images, is hampered by their large file sizes. Results We have developed two Java-based open source tools: Snapshot Creator and NDPI-Splitter. Snapshot Creator converts a portion of a large digital slide into a JPEG image of the desired quality. The image is linked to the patient’s clinical and treatment information in a customised open source cancer data management software (Caisis) in use at the Australian Breast Cancer Tissue Bank (ABCTB) and then published on the ABCTB website (http://www.abctb.org.au) using Deep Zoom open source technology. Using the ABCTB online search engine, digital images can be searched by defining various criteria such as cancer type, or biomarkers expressed. NDPI-Splitter splits a large image file into smaller sections of TIFF images so that they can be easily analysed by image analysis software such as Metamorph or Matlab. NDPI-Splitter also has the capacity to filter out empty images. Conclusions Snapshot Creator and NDPI-Splitter are novel open source Java tools. They convert digital slides into files of smaller size for further processing. In conjunction with other open source tools such as Deep Zoom and Caisis, this suite of tools is used for the management and archiving of digital microscopy images, enabling digitised images to be explored and zoomed online. Our online image repository also has the capacity to be used as a teaching resource. These tools also enable large files to be sectioned for image analysis. Virtual Slides The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5330903258483934 PMID:23402499
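
    The published tools are Java; purely as an illustration of the snapshot idea (crop a region of a large slide and save it as a quality-controlled JPEG), a Pillow-based sketch might look like the following, with file names and coordinates assumed.

        from PIL import Image

        # Digitised slides exceed Pillow's default decompression-bomb limit.
        Image.MAX_IMAGE_PIXELS = None

        def snapshot(slide_path, box, out_path, quality=80):
            """Crop the region box = (left, upper, right, lower) and save as JPEG."""
            with Image.open(slide_path) as slide:
                region = slide.crop(box)
                region.convert("RGB").save(out_path, "JPEG", quality=quality)

        snapshot("slide.tif", (4096, 4096, 6144, 6144), "snapshot.jpg", quality=85)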

  3. [Method of file sorting for mini- and microcomputers].

    PubMed

    Chau, N; Legras, B; Benamghar, L; Martin, J

    1983-05-01

    The authors describe a new file-sorting method that belongs to the class of direct-addressing sorting methods. It makes use of a variant of the classical technique of 'virtual memory'. It is particularly well suited to mini- and micro-computers which have a small core memory (32 K words, for example) and are fitted with a direct-access peripheral device, such as a disc unit. When the file to be sorted is medium-sized (a few thousand records), the program runs essentially inside core memory and is consequently very fast. This is important because most medical files handled in our laboratory are in this category. However, the method is also suitable for big computers and large files, and its implementation is easy. It does not require any magnetic tape unit, and it seems to us to be one of the fastest methods available.
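
    The paper's exact algorithm is not reproduced here, but the general direct-addressing idea it builds on can be sketched in a few lines: each record's key is used directly as an address into a table of buckets, so no comparisons are needed. The authors' variant pages this table to disk when it exceeds core memory; the sketch below keeps everything in memory.

        def direct_address_sort(records, key, key_range):
            table = [[] for _ in range(key_range)]   # one bucket per possible key
            for rec in records:
                table[key(rec)].append(rec)          # "address" = key value
            return [rec for bucket in table for rec in bucket]

        # Example: sort (age, name) records by age; keys must lie in [0, key_range).
        patients = [(37, "A"), (5, "B"), (21, "C"), (5, "D")]
        print(direct_address_sort(patients, key=lambda r: r[0], key_range=100))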

  4. Implementation of Grid Tier 2 and Tier 3 facilities on a Distributed OpenStack Cloud

    NASA Astrophysics Data System (ADS)

    Limosani, Antonio; Boland, Lucien; Coddington, Paul; Crosby, Sean; Huang, Joanna; Sevior, Martin; Wilson, Ross; Zhang, Shunde

    2014-06-01

    The Australian Government is making a AUD 100 million investment in Compute and Storage for the academic community. The Compute facilities are provided in the form of 30,000 CPU cores located at 8 nodes around Australia in a distributed virtualized Infrastructure as a Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All will be linked via a 100 Gb/s network. This proceeding describes the development of a fully connected WLCG Tier-2 grid site as well as a general purpose Tier-3 computing cluster based on this architecture. The facility employs an extension to Torque to enable dynamic allocations of virtual machine instances. A base Scientific Linux virtual machine (VM) image is deployed in the OpenStack cloud and automatically configured as required using Puppet. Custom scripts are used to launch multiple VMs, integrate them into the dynamic Torque cluster and to mount remote file systems. We report on our experience in developing this nation-wide ATLAS and Belle II Tier 2 and Tier 3 computing infrastructure using the national Research Cloud and storage facilities.

  5. Utilization and acceptance of virtual patients in veterinary basic sciences - the vetVIP-project.

    PubMed

    Kleinsorgen, Christin; Kankofer, Marta; Gradzki, Zbigniew; Mandoki, Mira; Bartha, Tibor; von Köckritz-Blickwede, Maren; Naim, Hassan Y; Beyerbach, Martin; Tipold, Andrea; Ehlers, Jan P

    2017-01-01

    Context: In medical and veterinary medical education the use of problem-based and case-based learning has steadily increased over time. At veterinary faculties, this development has mainly been evident in the clinical phase of veterinary education. Therefore, a consortium of teachers of biochemistry and physiology together with technical and didactical experts launched the EU-funded project "vetVIP" to create and implement veterinary virtual patients and problems for basic science instruction. In this study the implementation and utilization of virtual patients occurred at the veterinary faculties in Budapest, Hannover and Lublin. Methods: This report describes the investigation of the utilization and acceptance of optional online learning material by students studying veterinary basic sciences concurrently with regular biochemistry and physiology instruction. The reaction of students towards this offer of clinical case-based learning in basic sciences was analysed using quantitative and qualitative data. Quantitative data were collected automatically as user log files within the chosen software system, CASUS. Responses regarding the quality of the virtual patients were obtained using an online questionnaire. Furthermore, subjective evaluation by authors was performed using a focus group discussion and an online questionnaire. Results: Implementation as well as usage and acceptance varied between the three participating locations. High approval was documented in Hannover and Lublin, based upon the high proportion of students (>70%) voluntarily using optional virtual patients; in Budapest, however, the participation rate was below 1%. Usage patterns suggest that students prefer virtual patients and problems created in their native language and developed at their own university. In addition, the statement that assessment drives learning was supported by the observation that utilization peaked just prior to summative examinations. Conclusion: Veterinary virtual patients in basic sciences can be introduced and used for the presentation of integrative clinical case scenarios. Student post-course comments also supported the conclusion that overall the virtual cases increased their motivation for learning veterinary basic sciences.

  6. NAFFS: network attached flash file system for cloud storage on portable consumer electronics

    NASA Astrophysics Data System (ADS)

    Han, Lin; Huang, Hao; Xie, Changsheng

    Cloud storage technology has become a research hotspot in recent years, while the existing cloud storage services are mainly designed for data storage needs with a stable high-speed Internet connection. Mobile Internet connections are often unstable and their speed is relatively low. These native features of mobile Internet limit the use of cloud storage in portable consumer electronics. The Network Attached Flash File System (NAFFS) presents the idea of using the portable device's built-in NAND flash memory as a front-end cache for a virtualized cloud storage device. Modern portable devices with an Internet connection have more than 1 GB of built-in NAND flash, which is quite enough for daily data storage. The data transfer rate of a NAND flash device is much higher than that of mobile Internet connections [1], and its non-volatility makes it well suited as a cache device for Internet cloud storage on portable devices, which often have unstable power supplies and intermittent Internet connections. In the present work, NAFFS is evaluated with several benchmarks, and its performance is compared with traditional network attached file systems, such as NFS. Our evaluation results indicate that NAFFS achieves an average access speed of 3.38 MB/s, which is about 3 times faster than directly accessing cloud storage over a mobile Internet connection, and offers a more stable interface than direct use of a cloud storage API. Unstable Internet connections and sudden power-off conditions are tolerated, and no cached data are lost in such situations.
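
    A toy sketch of the caching scheme described above, assuming a hypothetical cloud client object with get/put methods and a hypothetical flash mount point: reads are served from the flash directory when possible, and writes land on flash (non-volatile) before being uploaded.

        import os

        CACHE_DIR = "/flash/cache"       # hypothetical mount point of built-in NAND flash

        class FlashCachedStore:
            def __init__(self, cloud):   # `cloud` is any object with get/put methods
                self.cloud = cloud
                os.makedirs(CACHE_DIR, exist_ok=True)

            def read(self, name):
                local = os.path.join(CACHE_DIR, name)
                if not os.path.exists(local):        # miss: fetch once over mobile link
                    with open(local, "wb") as f:
                        f.write(self.cloud.get(name))
                with open(local, "rb") as f:         # hit: served at flash speed
                    return f.read()

            def write(self, name, data):
                local = os.path.join(CACHE_DIR, name)
                with open(local, "wb") as f:         # durable immediately, even offline
                    f.write(data)
                self.cloud.put(name, data)           # upload when connected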

  7. Charliecloud

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Priedhorsky, Reid; Randles, Tim

    Charliecloud is a set of scripts to let users run a virtual cluster of virtual machines (VMs) on a desktop or supercomputer. Key functions include: 1. Creating (typically by installing an operating system from vendor media) and updating VM images; 2. Running a single VM; 3. Running multiple VMs in a virtual cluster. The virtual machines can talk to one another over the network and (in some cases) the outside world. This is accomplished by calling external programs such as QEMU and the Virtual Distributed Ethernet (VDE) suite. The goal is to let users have a virtual cluster containing nodes where they have privileged access, while isolating that privilege within the virtual cluster so it cannot affect the physical compute resources. Host configuration enforces security; this is not included in Charliecloud, though security guidelines are included in its documentation and Charliecloud is designed to facilitate such configuration. Charliecloud manages passing information from host computers into and out of the virtual machines, such as parameters of the virtual cluster, input data specified by the user, output data from virtual compute jobs, VM console display, and network connections (e.g., SSH or X11). Parameters for the virtual cluster (number of VMs, RAM and disk per VM, etc.) are specified by the user or gathered from the environment (e.g., SLURM environment variables). Example job scripts are included. These include computation examples (such as a "hello world" MPI job) as well as performance tests. They also include a security test script to verify that the virtual cluster is appropriately sandboxed. Tests include: 1. Pinging hosts inside and outside the virtual cluster to explore connectivity; 2. Port scans (again inside and outside) to see what services are available; 3. Sniffing tests to see what traffic is visible to running VMs; 4. IP address spoofing to test network functionality in this case; 5. File access tests to make sure host access permissions are enforced. This test script is not a comprehensive scanner and does not test for specific vulnerabilities. Importantly, no information about physical hosts or network topology is included in this script (or any of Charliecloud); while part of a sensible test, such information is specified by the user when the test is run. That is, one cannot learn anything about the LANL network or computing infrastructure by examining Charliecloud code.
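
    As an illustration of gathering virtual-cluster parameters from the environment as described above: SLURM_NNODES is a standard SLURM variable, while the VC_RAM_MB knob and the QEMU invocation below are assumptions for the sketch, not Charliecloud's actual interface.

        import os

        n_vms  = int(os.environ.get("SLURM_NNODES", 2))   # nodes allocated by SLURM
        ram_mb = int(os.environ.get("VC_RAM_MB", 2048))   # hypothetical per-VM RAM knob

        for i in range(n_vms):
            cmd = ["qemu-system-x86_64", "-m", str(ram_mb),
                   "-drive", f"file=node{i}.img", "-nographic"]
            print("would launch:", " ".join(cmd))         # a real script would exec this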

  8. Satellite medical centers project

    NASA Astrophysics Data System (ADS)

    Aggarwal, Arvind

    2002-08-01

    World-class health care for the common man at low, affordable cost: anywhere, anytime. The project envisages setting up a national network of Satellite Medical Centers (SMCs). Each SMC would be staffed by doctors, nurses and technicians; six of each would be required to provide 24-hour cover, with each SMC operating 24 hours a day, 7 days a week. It would be equipped with digital telemedicine devices for capturing clinical patient information and investigations in the form of voice, images and data, creating an audiovisual text file - a virtual digital patient. Through broadband connectivity the virtual patient can be sent to the central hub, staffed by specialists. Several specialists sitting together can view the virtual patient and provide a specialized opinion; they can observe the examination online through videoconference or even PCs, talk to the patient and the doctor at the SMC, and control the capturing of information during examination and investigation of the patient at the SMC - thus creating a virtual digital consultant at the SMC. The central hub shall be connected to doctors and consultants in remote locations or tertiary-care hospitals anywhere in the world, thus creating a virtual hub. This hierarchical system shall provide upgradation of knowledge to the doctors in the central hub and the SMCs, and thus continued medical education, and shall benefit patients through world-class treatment in the SMC located at their doorstep. SMCs shall be set up by franchisees, who get a safe business opportunity with high returns; patients get low-cost, user-friendly, world-class health care anywhere, anytime; doctors get meaningful self-employment with better earnings and flexibility of working time and place. SMCs shall provide a wide variety of services, from primary care to world-class global consultation for difficult patients.

  9. Data Services Upgrade: Perfecting the ISIS-I Topside Digital Ionogram Database

    NASA Technical Reports Server (NTRS)

    Wang, Yongli; Benson, Robert F.; Bilitza, Dieter; Fung, Shing. F.; Chu, Philip; Huang, Xueqin; Truhlik, Vladimir

    2015-01-01

    The ionospheric topside sounders of the International Satellites for Ionospheric Studies (ISIS) program were designed as analog systems. More than 16,000 of the original telemetry tapes from three satellites were used to produce topside digital ionograms, via an analog-to-digital (A/D) conversion process, suitable for modern analysis techniques. Unfortunately, many of the resulting digital topside ionogram files could not be auto-processed to produce topside Ne(h) profiles because of problems encountered during the A/D process. Software has been written to resolve these problems and here we report on (1) the first application of this software to a significant portion of the ISIS-1 digital topside-ionogram database, (2) software improvements motivated by this activity, (3) Ne(h) profiles automatically produced from these corrected ISIS-1 digital ionogram files, and (4) the availability via the Virtual Wave Observatory (VWO) of the corrected ISIS-1 digital topside ionogram files for research. We will also demonstrate the use of these Ne(h) profiles for making refinements in the International Reference Ionosphere (IRI) and in the determination of transition heights from oxygen ions to hydrogen ions.

  10. PATSTAGS - PATRAN-STAGSC-1 TRANSLATOR

    NASA Technical Reports Server (NTRS)

    Otte, N. E.

    1994-01-01

    PATSTAGS translates PATRAN finite element model data into STAGS (Structural Analysis of General Shells) input records to be used for engineering analysis. The program reads data from a PATRAN neutral file and writes STAGS input records into a STAGS input file and a UPRESS data file. It supports translation of nodal constraints and nodal, element, force and pressure data. PATSTAGS uses three files: the PATRAN neutral file to be translated, a STAGS input file and a STAGS pressure data file. The user provides the name of the neutral file and the desired names of the STAGS files to be created. The pressure data file contains the element live pressure data used in the STAGS subroutine UPRESS. PATSTAGS is written in FORTRAN 77 for DEC VAX series computers running VMS. The main memory requirement for execution is approximately 790K of virtual memory. Output blocks can be modified to output the data in any format desired, allowing the program to be used to translate model data to analysis codes other than STAGSC-1 (HQN-10967). This program is available in DEC VAX BACKUP format on a 9-track magnetic tape or TK50 tape cartridge. Documentation is included in the price of the program. PATSTAGS was developed in 1990. DEC, VAX, TK50 and VMS are trademarks of Digital Equipment Corporation.

  11. System Administrator for LCS Development Sets

    NASA Technical Reports Server (NTRS)

    Garcia, Aaron

    2013-01-01

    The Spaceport Command and Control System Project is creating a Checkout and Control System that will eventually launch the next generation of vehicles from Kennedy Space Center. KSC has a large set of development and operational equipment already deployed in several facilities, including the Launch Control Center, which requires support. The system administrator position involves completing tasks across multiple platforms (Linux/Windows), many of them virtual. The Hardware Branch of the Control and Data Systems Division at the Kennedy Space Center uses system administrators for a variety of tasks. The position comes with many responsibilities: maintaining computer systems, repairing or setting up hardware, installing software, creating backups, and recovering drive images are a sample of the jobs one must complete. Other duties may include working with clients in person or over the phone and resolving their computer system needs. Training is a major part of learning how an organization functions and operates, and NASA is no exception: training on how to better protect the NASA computer infrastructure will be one topic, followed by NASA work policies. Attending meetings and discussing progress will be expected. A system administrator will have an account with root access, which gives a user full access to a computer system and/or network. System admins can remove critical system files and recover files using a tape backup. Problem solving will be an important skill to develop in order to complete the many tasks.

  12. A Virtual Instrument Panel and Serial Interface for the Parr 1672 Thermometer

    ERIC Educational Resources Information Center

    Salter, Gail; Range, Kevin; Salter, Carl

    2005-01-01

    The various features of a Visual Basic program that provides a virtual instrument panel and serial interface for the Parr 1672 thermometer are described. The program permits remote control of the calorimetry experiment and also provides control of the flow of data and of file storage.

  13. VizieR Online Data Catalog: NLTE spectral analysis of white dwarf G191-B2B (Rauch+, 2013)

    NASA Astrophysics Data System (ADS)

    Rauch, T.; Werner, K.; Bohlin, R.; Kruk, J. W.

    2013-08-01

    In the framework of the Virtual Observatory, the German Astrophysical Virtual Observatory developed the registered service TheoSSA. It provides easy access to stellar spectral energy distributions (SEDs) and is intended to ingest SEDs calculated by any model-atmosphere code. In case of the DA white dwarf G191-B2B, we demonstrate that the model reproduces not only its overall continuum shape but also the numerous metal lines exhibited in its ultraviolet spectrum. (3 data files).

  14. iRODS-Based Climate Data Services and Virtualization-as-a-Service in the NASA Center for Climate Simulation

    NASA Astrophysics Data System (ADS)

    Schnase, J. L.; Duffy, D. Q.; Tamkin, G. S.; Strong, S.; Ripley, D.; Gill, R.; Sinno, S. S.; Shen, Y.; Carriere, L. E.; Brieger, L.; Moore, R.; Rajasekar, A.; Schroeder, W.; Wan, M.

    2011-12-01

    Scientific data services are becoming an important part of the NASA Center for Climate Simulation's mission. Our technological response to this expanding role is built around the concept of specialized virtual climate data servers, repetitive cloud provisioning, image-based deployment and distribution, and virtualization-as-a-service. A virtual climate data server (vCDS) is an OAIS-compliant, iRODS-based data server designed to support a particular type of scientific data collection. iRODS is data grid middleware that provides policy-based control over collection-building, managing, querying, accessing, and preserving large scientific data sets. We have developed prototype vCDSs to manage NetCDF, HDF, and GeoTIFF data products. We use RPM scripts to build vCDS images in our local computing environment, our local Virtual Machine Environment, NASA's Nebula Cloud Services, and Amazon's Elastic Compute Cloud. Once provisioned into these virtualized resources, multiple vCDSs can use iRODS's federation and realized object capabilities to create an integrated ecosystem of data servers that can scale and adapt to changing requirements. This approach enables platform- or software-as-a-service deployment of the vCDSs and allows the NCCS to offer virtualization-as-a-service, a capacity to respond in an agile way to new customer requests for data services, and a path for migrating existing services into the cloud. We have registered MODIS Atmosphere data products in a vCDS that contains 54 million registered files, 630 TB of data, and over 300 million metadata values. We are now assembling IPCC AR5 data into a production vCDS that will provide the platform upon which NCCS's Earth System Grid (ESG) node publishes to the extended science community. In this talk, we describe our approach, experiences, lessons learned, and plans for the future.

  15. Using virtualization to protect the proprietary material science applications in volunteer computing

    NASA Astrophysics Data System (ADS)

    Khrapov, Nikolay P.; Rozen, Valery V.; Samtsevich, Artem I.; Posypkin, Mikhail A.; Sukhomlin, Vladimir A.; Oganov, Artem R.

    2018-04-01

    USPEX is world-leading software for computational materials design. In essence, USPEX splits a simulation into a large number of workunits that can be processed independently. This scheme ideally fits the desktop grid architecture. Workunit processing is done by a simulation package aimed at energy minimization. Many such packages are proprietary and should be protected from unauthorized access when running on a volunteer PC. In this paper we present an original approach based on virtualization. In a nutshell, the proprietary code and input files are stored in an encrypted folder and run inside a virtual machine image that is also password protected. The paper describes this approach in detail and discusses its application in the USPEX@home volunteer project.
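
    The project's actual mechanism is an encrypted folder inside a password-protected VM image; as a rough sketch of the same protect-then-decrypt-inside-the-VM idea, the snippet below uses symmetric encryption from the cryptography package's Fernet, with a hypothetical input file name.

        from cryptography.fernet import Fernet

        key = Fernet.generate_key()          # would be baked into the protected VM image
        cipher = Fernet(key)

        # Encrypt a proprietary input file before shipping it to a volunteer host.
        with open("POSCAR", "rb") as f:      # hypothetical input file
            token = cipher.encrypt(f.read())
        with open("POSCAR.enc", "wb") as f:
            f.write(token)

        # Inside the VM, the workunit is recovered with the embedded key.
        plaintext = cipher.decrypt(token)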

  16. Virtual machine provisioning, code management, and data movement design for the Fermilab HEPCloud Facility

    NASA Astrophysics Data System (ADS)

    Timm, S.; Cooper, G.; Fuess, S.; Garzoglio, G.; Holzman, B.; Kennedy, R.; Grassano, D.; Tiradani, A.; Krishnamurthy, R.; Vinayagam, S.; Raicu, I.; Wu, H.; Ren, S.; Noh, S.-Y.

    2017-10-01

    The Fermilab HEPCloud Facility Project has as its goal to extend the current Fermilab facility interface to provide transparent access to disparate resources including commercial and community clouds, grid federations, and HPC centers. This facility enables experiments to perform the full spectrum of computing tasks, including data-intensive simulation and reconstruction. We have evaluated the use of the commercial cloud to provide elasticity to respond to peaks of demand without overprovisioning local resources. Full scale data-intensive workflows have been successfully completed on Amazon Web Services for two High Energy Physics Experiments, CMS and NOνA, at the scale of 58000 simultaneous cores. This paper describes the significant improvements that were made to the virtual machine provisioning system, code caching system, and data movement system to accomplish this work. The virtual image provisioning and contextualization service was extended to multiple AWS regions, and to support experiment-specific data configurations. A prototype Decision Engine was written to determine the optimal availability zone and instance type to run on, minimizing cost and job interruptions. We have deployed a scalable on-demand caching service to deliver code and database information to jobs running on the commercial cloud. It uses the frontiersquid server and CERN VM File System (CVMFS) clients on EC2 instances and utilizes various services provided by AWS to build the infrastructure (stack). We discuss the architecture and load testing benchmarks on the squid servers. We also describe various approaches that were evaluated to transport experimental data to and from the cloud, and the optimal solutions that were used for the bulk of the data transport. Finally, we summarize lessons learned from this scale test, and our future plans to expand and improve the Fermilab HEP Cloud Facility.
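
    A toy version of the Decision Engine's choice described above: pick the availability zone and instance type that minimize expected cost. The prices and interruption probabilities here are invented illustration data, and the restart-penalty model is an assumption, not the prototype's actual logic.

        offers = [
            # (zone,        instance,      $/core-hour, interruption probability)
            ("us-east-1a", "c4.8xlarge",  0.045,       0.10),
            ("us-east-1b", "c4.8xlarge",  0.040,       0.35),
            ("us-west-2a", "m4.10xlarge", 0.050,       0.02),
        ]

        def expected_cost(price, p_interrupt, restart_penalty=0.5):
            # An interrupted job repeats a fraction of its work; penalize accordingly.
            return price * (1.0 + restart_penalty * p_interrupt)

        zone, instance, price, p = min(offers, key=lambda o: expected_cost(o[2], o[3]))
        print(f"provision {instance} in {zone} "
              f"(effective ${expected_cost(price, p):.4f}/core-hr)")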

  17. Virtual Machine Provisioning, Code Management, and Data Movement Design for the Fermilab HEPCloud Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timm, S.; Cooper, G.; Fuess, S.

    The Fermilab HEPCloud Facility Project has as its goal to extend the current Fermilab facility interface to provide transparent access to disparate resources including commercial and community clouds, grid federations, and HPC centers. This facility enables experiments to perform the full spectrum of computing tasks, including data-intensive simulation and reconstruction. We have evaluated the use of the commercial cloud to provide elasticity to respond to peaks of demand without overprovisioning local resources. Full scale data-intensive workflows have been successfully completed on Amazon Web Services for two High Energy Physics Experiments, CMS and NOνA, at the scale of 58000 simultaneous cores. This paper describes the significant improvements that were made to the virtual machine provisioning system, code caching system, and data movement system to accomplish this work. The virtual image provisioning and contextualization service was extended to multiple AWS regions, and to support experiment-specific data configurations. A prototype Decision Engine was written to determine the optimal availability zone and instance type to run on, minimizing cost and job interruptions. We have deployed a scalable on-demand caching service to deliver code and database information to jobs running on the commercial cloud. It uses the frontiersquid server and CERN VM File System (CVMFS) clients on EC2 instances and utilizes various services provided by AWS to build the infrastructure (stack). We discuss the architecture and load testing benchmarks on the squid servers. We also describe various approaches that were evaluated to transport experimental data to and from the cloud, and the optimal solutions that were used for the bulk of the data transport. Finally, we summarize lessons learned from this scale test, and our future plans to expand and improve the Fermilab HEP Cloud Facility.

  18. So Wide a Web, So Little Time.

    ERIC Educational Resources Information Center

    McConville, David; And Others

    1996-01-01

    Discusses new trends in the World Wide Web. Highlights include multimedia; digitized audio-visual files; compression technology; telephony; virtual reality modeling language (VRML); open architecture; and advantages of Java, an object-oriented programming language, including platform independence, distributed development, and pay-per-use software.…

  19. ViDI: Virtual Diagnostics Interface. Volume 2; Unified File Format and Web Services as Applied to Seamless Data Transfer

    NASA Technical Reports Server (NTRS)

    Fleming, Gary A. (Technical Monitor); Schwartz, Richard J.

    2004-01-01

    The desire to revolutionize the aircraft design cycle from its currently lethargic pace to a fast turn-around operation enabling the optimization of non-traditional configurations is a critical challenge facing the aeronautics industry. In response, a large-scale effort is underway to not only advance the state of the art in wind tunnel testing, computational modeling, and information technology, but to unify these often disparate elements into a cohesive design resource. This paper will address Seamless Data Transfer, the critical central nervous system that will enable a wide variety of components to work together.

  20. GeoMapApp, Virtual Ocean, and other Free Data Resources for the 21st Century Classroom

    NASA Astrophysics Data System (ADS)

    Goodwillie, A. M.; Ryan, W.; Carbotte, S.; Melkonian, A.; Coplan, J.; Arko, R.; Ferrini, V.; O'Hara, S.; Leung, A.; Bonckzowski, J.

    2008-12-01

    With funding from the U.S. National Science Foundation, the Marine Geoscience Data System (MGDS) (http://www.marine-geo.org/) is developing GeoMapApp (http://www.geomapapp.org) - a computer application that provides wide-ranging map-based visualization and manipulation options for interdisciplinary geosciences research and education. The novelty comes from the use of this visual tool to discover and explore data, with seamless links to further discovery using traditional text-based approaches. Users can generate custom maps and grids and import their own data sets. Built-in functionality allows users to readily explore a broad suite of interactive data sets and interfaces. Examples include multi-resolution global digital models of topography, gravity, sediment thickness, and crustal ages; rock, fluid, biology and sediment sample information; research cruise underway geophysical and multibeam data; earthquake events; submersible dive photos of hydrothermal vents; geochemical analyses; DSDP/ODP core logs; seismic reflection profiles; contouring, shading, profiling of grids; and many more. On-line audio-visual tutorials lead users step-by-step through GeoMapApp functionality (http://www.geomapapp.org/tutorials/). Virtual Ocean (http://www.virtualocean.org/) integrates GeoMapApp with a 3-D earth browser based upon NASA WorldWind, providing yet more powerful capabilities. The searchable MGDS Media Bank (http://media.marine-geo.org/) supports viewing of remarkable images and video from the NSF Ridge 2000 and MARGINS programs. For users familiar with Google Earth (tm), KML files are available for viewing several MGDS data sets (http://www.marine-geo.org/education/kmls.php). Examples of accessing and manipulating a range of geoscience data sets from various NSF-funded programs will be shown. GeoMapApp, Virtual Ocean, the MGDS Media Bank and KML files are free MGDS data resources and work on any type of computer. They are currently used by educators, researchers, school teachers and the general public.

  1. Publication Bias ( The "File-Drawer Problem") in Scientific Inference

    NASA Technical Reports Server (NTRS)

    Scargle, Jeffrey D.; DeVincenzi, Donald (Technical Monitor)

    1999-01-01

    Publication bias arises whenever the probability that a study is published depends on the statistical significance of its results. This bias, often called the file-drawer effect since the unpublished results are imagined to be tucked away in researchers' file cabinets, is potentially a severe impediment to combining the statistical results of studies collected from the literature. With almost any reasonable quantitative model for publication bias, only a small number of studies lost in the file drawer will produce a significant bias. This result contradicts the well-known Fail Safe File Drawer (FSFD) method for setting limits on the potential harm of publication bias, widely used in social, medical and psychic research. This method incorrectly treats the file drawer as unbiased, and almost always misestimates the seriousness of publication bias. A large body of studies, not only in psychic research but also in medicine and social science, has mistakenly relied on this method to validate claimed discoveries. Statistical combination can be trusted only if it is known with certainty that all studies that have been carried out are included. Such certainty is virtually impossible to achieve in literature surveys.
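
    For concreteness, the FSFD calculation being criticized is, in its standard Rosenthal form, the number N of unpublished null results needed to drag a combined Stouffer Z below significance; a sketch with invented Z-scores follows.

        z_alpha = 1.645                      # one-tailed 5% criterion
        studies = [2.1, 1.8, 2.5, 1.2, 2.9]  # Z-scores of k published studies (invented)
        k = len(studies)
        z_sum = sum(studies)

        # Combined Z with N extra Z=0 studies: z_sum / sqrt(k + N) = z_alpha
        # => N = (z_sum / z_alpha)**2 - k
        n_fs = (z_sum / z_alpha) ** 2 - k
        print(f"fail-safe N = {n_fs:.1f}")

        # The abstract's point: this assumes file-drawer studies average Z = 0
        # (unbiased), whereas publication bias implies they are systematically low.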

  2. VirGO: A Visual Browser for the ESO Science Archive Facility

    NASA Astrophysics Data System (ADS)

    Chéreau, F.

    2008-08-01

    VirGO is the next-generation visual browser for the ESO Science Archive Facility, developed by the Virtual Observatory (VO) Systems Department. It is a plug-in for the popular open source software Stellarium, adding capabilities for browsing professional astronomical data. VirGO gives astronomers the ability to easily discover and select data from millions of observations in a new visual and intuitive way. Its main feature is to perform real-time access and graphical display of a large number of observations by showing instrumental footprints and image previews, and to allow their selection and filtering for subsequent download from the ESO SAF web interface. It also allows the loading of external FITS files or VOTables, the superimposition of Digitized Sky Survey (DSS) background images, and the visualization of the sky in a 'real life' mode as seen from the main ESO sites. All data interfaces are based on Virtual Observatory standards, which allow access to images and spectra from external data centers, and interaction with the ESO SAF web interface or any other VO applications supporting the PLASTIC messaging system. The main website for VirGO is at http://archive.eso.org/cms/virgo.

  3. iScreen: world's first cloud-computing web server for virtual screening and de novo drug design based on TCM database@Taiwan

    NASA Astrophysics Data System (ADS)

    Tsai, Tsung-Ying; Chang, Kai-Wei; Chen, Calvin Yu-Chian

    2011-06-01

    The rapidly advancing research on traditional Chinese medicine (TCM) has greatly intrigued pharmaceutical industries worldwide. To take the initiative in the next generation of drug development, we constructed a cloud-computing system for TCM intelligent screening (iScreen) based on TCM Database@Taiwan. iScreen is a compact web server for TCM docking followed by customized de novo drug design. We further implemented a protein preparation tool that both extracts the protein of interest from a raw input file and estimates the size of the ligand binding site. In addition, iScreen is designed with a user-friendly graphical interface for users who have less experience with command-line systems. For customized docking, multiple docking services, including standard, in-water, pH-environment, and flexible docking modes, are implemented. Users can download the first 200 TCM compounds from the best docking results. For TCM de novo drug design, iScreen provides multiple molecular descriptors of a user's interest. iScreen is the world's first web server that employs the world's largest TCM database for virtual screening and de novo drug design. We believe our web server can lead TCM research into a new era of drug development. The TCM docking and screening server is available at http://iScreen.cmu.edu.tw/.

  4. iScreen: world's first cloud-computing web server for virtual screening and de novo drug design based on TCM database@Taiwan.

    PubMed

    Tsai, Tsung-Ying; Chang, Kai-Wei; Chen, Calvin Yu-Chian

    2011-06-01

    The rapidly advancing research on traditional Chinese medicine (TCM) has greatly intrigued pharmaceutical industries worldwide. To take the initiative in the next generation of drug development, we constructed a cloud-computing system for TCM intelligent screening (iScreen) based on TCM Database@Taiwan. iScreen is a compact web server for TCM docking followed by customized de novo drug design. We further implemented a protein preparation tool that both extracts the protein of interest from a raw input file and estimates the size of the ligand binding site. In addition, iScreen is designed with a user-friendly graphical interface for users who have less experience with command-line systems. For customized docking, multiple docking services, including standard, in-water, pH-environment, and flexible docking modes, are implemented. Users can download the first 200 TCM compounds from the best docking results. For TCM de novo drug design, iScreen provides multiple molecular descriptors of a user's interest. iScreen is the world's first web server that employs the world's largest TCM database for virtual screening and de novo drug design. We believe our web server can lead TCM research into a new era of drug development. The TCM docking and screening server is available at http://iScreen.cmu.edu.tw/.

  5. An effective XML based name mapping mechanism within StoRM

    NASA Astrophysics Data System (ADS)

    Corso, E.; Forti, A.; Ghiselli, A.; Magnoni, L.; Zappi, R.

    2008-07-01

    In a Grid environment the naming capability allows users to refer to specific data resources in a physical storage system using a high-level logical identifier. This logical identifier is typically organized in a file-system-like structure, a hierarchical tree of names. Storage Resource Manager (SRM) services map the logical identifier to the physical location of data by evaluating a set of parameters such as the desired quality of service and the VOMS attributes specified in the requests. StoRM is an SRM service developed by INFN and ICTP-EGRID to manage files and space on standard POSIX and high-performing parallel and cluster file systems. An upcoming requirement in the Grid data scenario is the orthogonality of the logical name and the physical location of data, in order to refer, with the same identifier, to different copies of data archived in various storage areas with different qualities of service. The mapping mechanism proposed in StoRM is based on an XML document that represents the different storage components managed by the service, the storage areas defined by the site administrator, the quality of service they provide, and the Virtual Organizations that want to use the storage areas. An appropriate directory tree is realized in each storage component reflecting the XML schema. In this scenario StoRM is able to identify the physical location of requested data by evaluating the logical identifier and the specified attributes following the XML schema, without querying any database service. This paper presents the namespace schema defined, the different entities represented and the technical details of the StoRM implementation.
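
    A sketch of the mapping idea with a hypothetical schema (not StoRM's actual namespace document): each storage area advertises the Virtual Organization it serves, its quality of service, and the physical root under which its directory tree is realized, so a logical name plus request attributes resolve to a physical path without any database query.

        import xml.etree.ElementTree as ET

        NAMESPACE_XML = """
        <namespace>
          <storage-area name="atlas-disk" vo="atlas" quality="replica"
                        root="/gpfs/atlas/disk"/>
          <storage-area name="atlas-tape" vo="atlas" quality="custodial"
                        root="/gpfs/atlas/tape"/>
        </namespace>
        """

        def resolve(logical_name, vo, quality):
            """Map a logical identifier to a physical path from the XML alone."""
            tree = ET.fromstring(NAMESPACE_XML)
            for sa in tree.findall("storage-area"):
                if sa.get("vo") == vo and sa.get("quality") == quality:
                    return sa.get("root") + logical_name
            raise LookupError("no storage area matches the requested attributes")

        print(resolve("/data/run123/file.root", vo="atlas", quality="replica"))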

  6. Global Software Development with Cloud Platforms

    NASA Astrophysics Data System (ADS)

    Yara, Pavan; Ramachandran, Ramaseshan; Balasubramanian, Gayathri; Muthuswamy, Karthik; Chandrasekar, Divya

    Offshore and outsourced distributed software development models and processes are facing challenges, previously unknown, with respect to computing capacity, bandwidth, storage, security, complexity, reliability, and business uncertainty. Clouds promise to address these challenges by adopting recent advances in virtualization, parallel and distributed systems, utility computing, and software services. In this paper, we envision a cloud-based platform that addresses some of these core problems. We outline a generic cloud architecture, its design and our first implementation results for three cloud forms - a compute cloud, a storage cloud and a cloud-based software service - in the context of global distributed software development (GSD). Our "compute cloud" provides computational services such as continuous code integration and a compile server farm, our "storage cloud" offers storage (block or file-based) services with an on-line virtual storage service, and the on-line virtual labs represent a useful cloud-based software service. We note some of the use cases for clouds in GSD and the lessons learned with our prototypes, and identify challenges that must be conquered before realizing the full business benefits. We believe that in the future, software practitioners will focus more on these cloud computing platforms and see clouds as a means to supporting an ecosystem of clients, developers and other key stakeholders.

  7. 20 CFR 30.411 - What happens if the opinion of the physician selected by OWCP differs from the opinion of the...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... reports of virtually equal weight and rationale reach opposing conclusions. (b) If a conflict exists... the case. Also, a case file may be sent to a physician who conforms to the standards regarding...

  8. IDSP- INTERACTIVE DIGITAL SIGNAL PROCESSOR

    NASA Technical Reports Server (NTRS)

    Mish, W. H.

    1994-01-01

    The Interactive Digital Signal Processor, IDSP, consists of a set of time series analysis "operators" based on the various algorithms commonly used for digital signal analysis work. The processing of a digital time series to extract information is usually achieved by the application of a number of fairly standard operations. However, it is often desirable to "experiment" with various operations and combinations of operations to explore their effect on the results. IDSP is designed to provide an interactive and easy-to-use system for this type of digital time series analysis. The IDSP operators can be applied in any sensible order (even recursively), and can be applied to single time series or to simultaneous time series. IDSP is being used extensively to process data obtained from scientific instruments onboard spacecraft. It is also an excellent teaching tool for demonstrating the application of time series operators to artificially-generated signals. IDSP currently includes over 43 standard operators. Processing operators provide for Fourier transformation operations, design and application of digital filters, and Eigenvalue analysis. Additional support operators provide for data editing, display of information, graphical output, and batch operation. User-developed operators can be easily interfaced with the system to provide for expansion and experimentation. Each operator application generates one or more output files from an input file. The processing of a file can involve many operators in a complex application. IDSP maintains historical information as an integral part of each file so that the user can display the operator history of the file at any time during an interactive analysis. IDSP is written in VAX FORTRAN 77 for interactive or batch execution and has been implemented on a DEC VAX-11/780 operating under VMS. The IDSP system generates graphics output for a variety of graphics systems. The program requires the use of Versaplot and Template plotting routines and IMSL Math/Library routines. These software packages are not included in IDSP. The virtual memory requirement for the program is approximately 2.36 MB. The IDSP system was developed in 1982 and was last updated in 1986. Versaplot is a registered trademark of Versatec Inc. Template is a registered trademark of Template Graphics Software Inc. IMSL Math/Library is a registered trademark of IMSL Inc.

  9. Harmonize Pipeline and Archiving System: PESSTO@IA2 Use Case

    NASA Astrophysics Data System (ADS)

    Smareglia, R.; Knapic, C.; Molinaro, M.; Young, D.; Valenti, S.

    2013-10-01

    Italian Astronomical Archives Center (IA2) is a research infrastructure project that aims at coordinating different national and international initiatives to improve the quality of astrophysical data services. IA2 is now also involved in the PESSTO (Public ESO Spectroscopic Survey of Transient Objects) collaboration, developing a complete archiving system to store calibrated post processed data (including sensitive intermediate products), a user interface to access private data and Virtual Observatory (VO) compliant web services to access public fast reduction data via VO tools. The archive system shall rely on the PESSTO Marshall to provide file data and its associated metadata output by the PESSTO data-reduction pipeline. To harmonize the object repository, data handling and archiving system, new tools are under development. These systems must have a strong cross-interaction without increasing the complexities of any single task, in order to improve the performances of the whole system and must have a sturdy logic in order to perform all operations in coordination with the other PESSTO tools. MySQL Replication technology and triggers are used for the synchronization of new data in an efficient, fault tolerant manner. A general purpose library is under development to manage data starting from raw observations to final calibrated ones, open to the overriding of different sources, formats, management fields, storage and publication policies. Configurations for all the systems are stored in a dedicated schema (no configuration files), but can be easily updated by a planned Archiving System Configuration Interface (ASCI).

  10. Providing Access to a Diverse Set of Global Reanalysis Dataset Collections

    NASA Astrophysics Data System (ADS)

    Schuster, D.; Worley, S. J.

    2015-12-01

    The National Center for Atmospheric Research (NCAR) Research Data Archive (RDA, http://rda.ucar.edu) provides open access to a variety of global reanalysis dataset collections to support atmospheric and related sciences research worldwide. These include products from the European Centre for Medium-Range Weather Forecasts (ECMWF), Japan Meteorological Agency (JMA), National Centers for Environmental Prediction (NCEP), National Oceanic and Atmospheric Administration (NOAA), and NCAR.All RDA hosted reanalysis collections are freely accessible to registered users through a variety of methods. Standard access methods include traditional browser and scripted HTTP file download. Enhanced downloads are available through the Globus GridFTP "fire and forget" data transfer service, which provides an efficient, reliable, and preferred alternative to traditional HTTP-based methods. For those that favor interoperable access using compatible tools, the Unidata THREDDS Data server provides remote access to complete reanalysis collections by virtual dataset aggregation "files". Finally, users can request data subsets and format conversions to be prepared for them through web interface form requests or web service API batch requests. This approach uses NCAR HPC and central file systems to effectively prepare products from the high-resolution and very large reanalyses archives. The presentation will include a detailed inventory of all RDA reanalysis dataset collection holdings, and highlight access capabilities to these collections through use case examples.

  11. Using Technology To Enhance Literacy in Elementary School Children.

    ERIC Educational Resources Information Center

    Christie, Alice

    The electronic information age is here, and adults as well as children are using new ways to gather and generate information. Electronics users are writing in hypertext; exploring cyberspace; living in virtual communities; scooping interactively with CD-ROMs and laserdiscs; using File Transfer Protocols to upload and download information from…

  12. A Google Earth Grand Tour of the Terrestrial Planets

    ERIC Educational Resources Information Center

    De Paor, Declan; Coba, Filis; Burgin, Stephen

    2016-01-01

    Google Earth is a powerful instructional resource for geoscience education. We have extended the virtual globe to include all terrestrial planets. Downloadable Keyhole Markup Language (KML) files (Google Earth's scripting language) associated with this paper include lessons about Mercury, Venus, the Moon, and Mars. We created "grand…

  13. Browsing Your Virtual Library: The Case of Expanding Universe.

    ERIC Educational Resources Information Center

    Daniels, Wayne; Enright, Jeanne; Mackenzie, Scott

    1997-01-01

    Describes "Expanding Universe: a classified search tool for amateur astronomy," a Web site maintained by the Metropolitan Toronto Reference Library which uses a modified form of the Dewey Decimal Classification to organize a large file of astronomy hotlinks. Highlights include structure, HTML coding, design requirements, and future…

  14. An Analysis of Newspaper Antitrust Actions: 1980-1986.

    ERIC Educational Resources Information Center

    Busterna, John C.

    The American Newspaper Association's 1986 compilation of 45 newspaper antitrust actions filed since 1980 revealed that the majority of antitrust actions during that period involved disputes over advertising practices. The federal government was virtually absent in its enforcement of antitrust laws against newspapers. About one-third of the…

  15. Web Surveys to Digital Movies: Technological Tools of the Trade.

    ERIC Educational Resources Information Center

    Fetterman, David M.

    2002-01-01

    Highlights some of the technological tools used by educational researchers today, focusing on data collection related tools such as Web surveys, digital photography, voice recognition and transcription, file sharing and virtual office, videoconferencing on the Internet, instantaneous chat and chat rooms, reporting and dissemination, and digital…

  16. Real-time, rapidly updating severe weather products for virtual globes

    NASA Astrophysics Data System (ADS)

    Smith, Travis M.; Lakshmanan, Valliappa

    2011-01-01

    It is critical that weather forecasters are able to put severe weather information from a variety of observational and modeling platforms into a geographic context so that warning information can be effectively conveyed to the public, emergency managers, and disaster response teams. The availability of standards for the specification and transport of virtual globe data products has made it possible to generate spatially precise, geo-referenced images and to distribute these centrally created products via a web server to a wide audience. In this paper, we describe the data and methods for enabling severe weather threat analysis information inside a KML framework. The method of creating severe weather diagnosis products and translating them to KML and image files is described. We illustrate some of the practical applications of these data when they are integrated into a virtual globe display. The availability of standards for interoperable virtual globe clients has not completely alleviated the need for custom solutions. We conclude by pointing out several of the limitations of the general-purpose virtual globe clients currently available.
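
    As a sketch of the product-to-KML step, a centrally generated image can be wrapped as a KML GroundOverlay so that any virtual globe client drapes it at the right coordinates; the product name, image URL, and geographic bounds below are invented examples.

        # Wrap one radar-derived image as a KML GroundOverlay.
        KML_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
        <kml xmlns="http://www.opengis.net/kml/2.2">
          <GroundOverlay>
            <name>{name}</name>
            <Icon><href>{image}</href></Icon>
            <LatLonBox>
              <north>{north}</north><south>{south}</south>
              <east>{east}</east><west>{west}</west>
            </LatLonBox>
          </GroundOverlay>
        </kml>"""

        with open("rotation_tracks.kml", "w") as f:
            f.write(KML_TEMPLATE.format(name="Rotation tracks 2300 UTC",
                                        image="http://example.org/rot_2300.png",
                                        north=37.5, south=35.0,
                                        east=-97.0, west=-100.0))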

  17. Virtual and flexible digital signal processing system based on software PnP and component works

    NASA Astrophysics Data System (ADS)

    He, Tao; Wu, Qinghua; Zhong, Fei; Li, Wei

    2005-05-01

    An idea of software PnP (Plug & Play), analogous to hardware PnP, is put forward, and based on this idea a virtual flexible digital signal processing system (FVDSPS) is implemented. FVDSPS is composed of a main control center, many sub-function modules and other hardware I/O modules. The main control center sends commands to the sub-function modules and manages the running order, parameters and results of the sub-functions. The software kernel of FVDSPS is the DSP (Digital Signal Processing) module, which communicates with the main control center through defined protocols, accepting commands or sending requests. Data sharing and exchange between the main control center and the DSP modules are carried out and managed by the file system of the Windows operating system through this communication. FVDSPS is truly oriented to objects, to engineers, and to engineering problems. With FVDSPS, users can freely plug and play, and quickly reconfigure a signal processing system for an engineering problem without programming: what you see is what you get. Thus, an engineer can address engineering problems directly, pay more attention to the problems themselves, and improve the flexibility, reliability and accuracy of the testing system. Because FVDSPS is built on the TCP/IP protocol, testing engineers and technology experts can be connected freely over the Internet regardless of location, and engineering problems can be solved quickly and effectively. FVDSPS can be used in many fields such as instruments and meters, fault diagnosis, device maintenance and quality control.

  18. How to Make a Virtual Landscape with Outcrops for Use in Geoscience Teaching

    NASA Astrophysics Data System (ADS)

    Houghton, J.; Gordon, C.; Craven, B.; Robinson, A.; Lloyd, G. E. E.; Morgan, D. J.

    2016-12-01

    We are using screen-based virtual reality landscapes to augment the teaching of basic geological field skills and to enhance 3D visualisation skills. Here we focus on the processes of creating these landscapes, both imagined and real, in the Unity 3D game engine. The virtual landscapes are terrains with embedded data for mapping exercises, or draped geological maps for understanding the 3D interaction of the geology with the topography. The nature of the landscapes built depends on the learning outcomes of the intended teaching exercise. For example, a simple model of two hills and a valley over which to drape a series of different geological maps can be used to enhance the understanding of the 3D interaction of the geology with the topography. A more complex topography reflecting the underlying geology can be used for geological mapping exercises. The process starts with a contour image or DEM, which needs to be converted into RAW files to be imported into Unity. Within Unity itself, there are a series of steps needed to create a world around the terrain (the setting of cameras, lighting, skyboxes etc) before the terrain can be painted with vegetation and populated with assets or before a splatmap of the geology can be added. We discuss how additional features such as a GPS unit or compass can be included. We are also working to create landscapes based on real localities, both in response to the demand for greater realism and to support students unable to access the field due to health or mobility issues. This includes adding 3D photogrammetric images of outcrops into the worlds. This process uses the open source/freeware tools VisualSFM and MeshLab to create files suitable to be imported into Unity. This project is a collaboration between the University of Leeds and Leeds College of Art, UK, and all our virtual landscapes are freely available online at www.see.leeds.ac.uk/virtual-landscapes/.

  19. Using Cesium for 3D Thematic Visualisations on the Web

    NASA Astrophysics Data System (ADS)

    Gede, Mátyás

    2018-05-01

    Cesium (http://cesiumjs.org) is an open source, WebGL-based JavaScript library for virtual globes and 3D maps. It is an excellent tool for 3D thematic visualisations, but to use its full functionality it has to be fed its own file format, CZML. Unfortunately, this format is not yet supported by any major GIS software. This paper introduces a plugin for QGIS, developed by the author, which facilitates the creation of CZML files for various types of visualisations. The usability of Cesium is also examined in various hardware/software environments.
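
    Since CZML is simply a JSON array of "packets" headed by a document packet, a minimal file can be produced without the plugin; the sketch below (hand-rolled, not the plugin's output) writes one labeled point that Cesium can load directly, with example coordinates.

        import json

        czml = [
            {"id": "document", "name": "thematic-demo", "version": "1.0"},
            {
                "id": "budapest",
                "position": {"cartographicDegrees": [19.04, 47.50, 0]},  # lon, lat, height
                "point": {"pixelSize": 10},
                "label": {"text": "Budapest"},
            },
        ]

        with open("demo.czml", "w") as f:
            json.dump(czml, f, indent=2)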

  20. A Virtual Mission Operations Center: Collaborative Environment

    NASA Technical Reports Server (NTRS)

    Medina, Barbara; Bussman, Marie; Obenschain, Arthur F. (Technical Monitor)

    2002-01-01

    The intent of the Virtual Mission Operations Center - Collaborative Environment (VMOC-CE) is to provide a central access point for all the resources used in a collaborative mission operations environment, to assist mission operators in communicating on-site and off-site in the investigation and resolution of anomalies. It is a framework that, as a minimum, incorporates online chat, real-time file sharing and remote application sharing components in one central location. The use of a collaborative environment in mission operations opens up the possibility of a central framework for other project members to access and interact with mission operations staff remotely. The goal of the Virtual Mission Operations Center (VMOC) Project is to identify, develop, and infuse technology to enable mission control by on-call personnel in geographically dispersed locations. In order to achieve this goal, the following capabilities are needed: autonomous mission control systems; automated systems to contact on-call personnel; synthesis and presentation of mission control status and history information; desktop tools for data and situation analysis; a secure mechanism for remote collaborative commanding; and a collaborative environment for remote cooperative work. The VMOC-CE is a collaborative environment that facilitates remote cooperative work. It is an application instance of the Virtual System Design Environment (VSDE), developed by NASA Goddard Space Flight Center's (GSFC) Systems Engineering Services & Advanced Concepts (SESAC) Branch. The VSDE is a web-based portal that includes a knowledge repository and collaborative environment to serve science and engineering teams in product development. It is a "one stop shop" for product design, providing users real-time access to product development data, engineering and management tools, and relevant design specifications and resources through the Internet. The initial focus of the VSDE has been to serve teams working in the early portion of the system/product lifecycle - concept development, proposal preparation, and formulation. The VMOC-CE expands the application of the VSDE into the operations portion of the system lifecycle. It will enable meaningful, real-time collaboration regardless of the geographical distribution of project team members. Team members will be able to interact in satellite operations, specifically for resolving anomalies, through access to a desktop computer and the Internet. Mission operations management will be able to participate and monitor up-to-the-minute status of anomalies or other mission operations issues. In this paper we present the VMOC-CE project, system capabilities, and technologies.

  1. Demonstration of a Data Distribution System for ALMA Data Cubes

    NASA Astrophysics Data System (ADS)

    Eguchi, S.; Kawasaki, W.; Shirasaki, Y.; Komiya, Y.; Kosugi, G.; Ohishi, M.; Mizumoto, Y.; Kobayashi, T.

    2014-05-01

    The Atacama Large Millimeter/submillimeter Array (ALMA) is the world's largest radio telescope, in Chile. As a part of the Japanese Virtual Observatory (JVO) system, we have been constructing a prototype data service to distribute ALMA data, which consist of three- or four-dimensional cubes and are expected to exceed 2 TB in total size in the next three years, corresponding to 75 days of transfer at the world-averaged Internet bandwidth of 2.6 Mbps. To make the most of the limited bandwidth, our system adopts a higher-dimensional version of so-called "deep zoom": the system generates and stores lower-resolution FITS data cubes with various binning parameters in both the spatial and frequency directions. Users of our portal site can easily visualize and cut out those data cubes by using ALMAWebQL, a web application built on customized GWT. Once the FITS files are downloaded via ALMAWebQL, one can visualize them in more detail using Vissage, a Java-based FITS cube browser. We exhibited our web and desktop viewers "fresh from the oven" at the last ADASS conference (Shirasaki et al. 2013). Subsequent improvements in performance and functionality have brought the system close to a practical level. The performance problem of ALMAWebQL reported last year (Eguchi et al. 2013) was overcome by optimizing the network topology and applying a just-in-time endian conversion algorithm; the latest ALMAWebQL can follow user actions almost in real time for files smaller than 5 GB. It also enables users to define a sub-region or sub-frequency range and move it freely on the graphical user interface, providing more detailed information about the FITS file. In addition, the latest Vissage supports data from other telescopes, including HST, Subaru, and Chandra, and can overlay two images. In this paper, we introduce the latest version of our VO system.
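
    The "deep zoom" idea amounts to precomputing block-averaged cubes at several resolutions. A minimal sketch of such spatial/spectral binning with numpy and astropy (the input file name and binning factors are illustrative, not the JVO pipeline itself):

        import numpy as np
        from astropy.io import fits

        def bin_cube(cube, fv, fy, fx):
            # Block-average a (nv, ny, nx) cube by integer factors,
            # trimming each axis so it is divisible by its factor.
            nv, ny, nx = cube.shape
            cube = cube[:nv - nv % fv, :ny - ny % fy, :nx - nx % fx]
            return cube.reshape(cube.shape[0] // fv, fv,
                                cube.shape[1] // fy, fy,
                                cube.shape[2] // fx, fx).mean(axis=(1, 3, 5))

        with fits.open("alma_cube.fits") as hdul:      # hypothetical input
            data = hdul[0].data.astype(np.float64)
            for f in (2, 4, 8):                        # illustrative factors
                fits.writeto(f"alma_cube_bin{f}.fits",
                             bin_cube(data, f, f, f), overwrite=True)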

  2. The Careful Puppet Master: Reducing risk and fortifying acceptance testing with Jenkins CI

    NASA Astrophysics Data System (ADS)

    Smith, Jason A.; Richman, Gabriel; DeStefano, John; Pryor, James; Rao, Tejas; Strecker-Kellogg, William; Wong, Tony

    2015-12-01

    Centralized configuration management, including the use of automation tools such as Puppet, can greatly increase provisioning speed and efficiency when configuring new systems or making changes to existing systems, reduce duplication of work, and improve automated processes. However, centralized management also brings with it a level of inherent risk: a single change in just one file can quickly be pushed out to thousands of computers and, if that change is not properly and thoroughly tested and contains an error, could result in catastrophic damage to many services, potentially bringing an entire computer facility offline. Change management procedures can—and should—be formalized in order to prevent such accidents. However, like the configuration management process itself, if such procedures are not automated, they can be difficult to enforce strictly. Therefore, to reduce the risk of merging potentially harmful changes into our production Puppet environment, we have created an automated testing system, built around the Jenkins CI tool, to manage our Puppet testing process. This system applies the proposed changes and runs Puppet on a pool of dozens of Red Hat Enterprise Virtualization (RHEV) virtual machines (VMs) that replicate most of our important production services for testing purposes. This paper describes our automated test system and how it hooks into our production approval process for automatic acceptance testing. All pending changes must pass this validation process before they can be approved and merged into production.
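
    One simple flavor of such validation can be sketched in a few lines (the VM names and SSH setup are hypothetical; the actual system is driven from Jenkins and covers far more than this): Puppet's --noop and --detailed-exitcodes options make a dry run machine-checkable.

        import subprocess

        # Hypothetical pool of test VMs that mirror production services.
        TEST_VMS = ["web01.test", "db01.test", "batch01.test"]

        def puppet_ok(host):
            # --noop previews changes without applying them; with
            # --detailed-exitcodes: 0 = no changes, 2 = changes would apply,
            # 4 = failures, 6 = changes plus failures.
            result = subprocess.run(
                ["ssh", host, "sudo", "puppet", "agent", "--test",
                 "--noop", "--detailed-exitcodes"],
                capture_output=True, text=True)
            return result.returncode in (0, 2)

        failures = [h for h in TEST_VMS if not puppet_ok(h)]
        if failures:
            raise SystemExit("Puppet dry run failed on: " + ", ".join(failures))
        print("All test VMs passed; the change can move to approval.")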

  3. Data Publishing and Sharing Via the THREDDS Data Repository

    NASA Astrophysics Data System (ADS)

    Wilson, A.; Caron, J.; Davis, E.; Baltzer, T.

    2007-12-01

    The terms "Team Science" and "Networked Science" have been coined to describe a virtual organization of researchers tied together by some intellectual challenge, but often located in different organizations and locations. A critical component of these endeavors is publishing and sharing of content, including scientific data. Imagine pointing your web browser to a web page that interactively lets you upload data and metadata to a repository residing on a remote server, which can then be accessed by others in a secure fashion via the web. While any content can be added to this repository, it is designed particularly for storing and sharing scientific data and metadata. Server support includes uploading of data files that can subsequently be subsetted, aggregated, and served in NetCDF or other scientific data formats. Metadata can be associated with the data and interactively edited. The THREDDS Data Repository (TDR) is a server that provides client-initiated, on-demand, location-transparent storage for data of any type, which can then be served by the THREDDS Data Server (TDS). The TDR provides functionality to:
    * securely store and "own" data files and associated metadata
    * upload files via HTTP and GridFTP
    * upload a collection of data as a single file
    * modify and restructure repository contents
    * incorporate metadata provided by the user
    * generate additional metadata programmatically
    * edit individual metadata elements
    The TDR can exist separately from a TDS, serving content via HTTP. It can also work in conjunction with the TDS, which includes functionality to provide:
    * access to data in a variety of formats via OPeNDAP, the OGC Web Coverage Service (for gridded datasets), and bulk HTTP file transfer
    * a NetCDF view of datasets in NetCDF, OPeNDAP, HDF-5, GRIB, and NEXRAD formats
    * serving of very large volume datasets, such as NEXRAD radar
    * aggregation into virtual datasets
    * subsetting via OPeNDAP and NetCDF subsetting services
    This talk will discuss TDR/TDS capabilities as well as how users can install this software to create their own repositories.
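
    From a client's point of view, an upload to such a repository is an authenticated HTTP POST. A hedged sketch with the requests library (the endpoint URL and form fields here are hypothetical; the actual TDR API is defined by the installation):

        import requests

        # Hypothetical TDR upload endpoint and form fields.
        TDR_URL = "https://example.edu/thredds/repository/upload"

        with open("soundings_20071015.nc", "rb") as f:
            resp = requests.post(
                TDR_URL,
                files={"file": f},
                data={"collection": "field-campaign-2007",
                      "title": "Radiosonde soundings"},
                auth=("username", "password"))   # repository access is secured
        resp.raise_for_status()
        print("Server response:", resp.text)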

  4. Arab-Norman Heritage: State of Knowledge and New Actions and Innovative Proposal

    NASA Astrophysics Data System (ADS)

    Prescia, R.; Scianna, A.

    2017-05-01

    This paper offers a survey of the state of studies concerning the Arab-Norman architecture of Palermo, inscribed by UNESCO in 2015, and describes research in progress which, starting from a recognition of the peculiarities of the restoration work carried out on these buildings, namely the identification of authentic material and constructive values and/or reconstructions, aims to develop a concrete cataloguing proposal for a more informed knowledge of the heritage. It also seeks to contribute to its genuine enhancement through targeted communication strategies that use innovative means capable, on the one hand, of attracting the greatest possible number of users and, on the other, of planning further conservation interventions consistent with the recorded data. The intended product is a databank that allows the "networking" of the monuments, which become waypoints of virtual itineraries; it can be updated periodically, and its records meet cataloguing and documentation needs while being geo-referenced, compatible with the conservation and management of the heritage and with the need for real and virtual usability.

  5. The Time Series Data Server (TSDS) for Standards-Compliant, Convenient, and Efficient Access to Time Series Data

    NASA Astrophysics Data System (ADS)

    Lindholm, D. M.; Weigel, R. S.; Wilson, A.; Ware Dewolfe, A.

    2009-12-01

    Data analysis in the physical sciences is often plagued by the difficulty of acquiring the desired data. A great deal of work has been done in the area of metadata and data discovery; however, many such discoveries simply provide links that lead directly to a data file. Often these files are impractically large, containing more time samples or variables than desired, and are slow to access. Once these files are downloaded, format issues further complicate using the data. Some data servers have begun to address these problems by improving data virtualization and ease of use. However, these services often don't scale to large datasets. Also, the generic nature of the data models used by these servers, while providing greater flexibility, may complicate setting up such a service for data providers and limit the semantics that would otherwise simplify use for clients, machine or human. The Time Series Data Server (TSDS) aims to address these problems within the limited, yet common, domain of time series data. With the simplifying assumption that all data products served are a function of time, the server can optimize for data access based on time subsets, a common use case. The server also supports requests for specific variables, which can be of type scalar, structure, or sequence, as well as data types with higher-level semantics, such as "spectrum." The TSDS is implemented using Java Servlet technology and can be dropped into any servlet container and customized for a data provider's needs. The interface is based on OPeNDAP (http://opendap.org) and conforms to the Data Access Protocol (DAP) 2.0, a NASA standard (ESDS-RFC-004), which defines a simple HTTP request and response paradigm. Thus a TSDS server instance is a compliant OPeNDAP server that can be accessed by any OPeNDAP client or directly via RESTful web service requests. The TSDS reads the data that it serves into a common data model via the NetCDF Markup Language (NcML, http://www.unidata.ucar.edu/software/netcdf/ncml/), which enables dataset virtualization. An NcML file can expose a single file, a subset, or an aggregation of files as a single, logical dataset. With the appropriate NcML adapter, the TSDS can read data from its native format, eliminating the need for data providers to reformat their data and lowering the barrier for integration. Data can even be read via remote services, which is important for enabling VxOs to be truly virtual. The TSDS provides reading, writing, and filtering capabilities through a modular framework. A collection of standard modules is available, and customized modules are easy to create and integrate. This way the TSDS can read and write data in a variety of formats and apply filters to them in a manner customizable to meet the needs of both the data providers and consumers. The TSDS server is currently in use serving solar irradiance data from the LASP Interactive Solar IRradiance Datacenter (LISIRD, http://lasp.colorado.edu/lisird/), and is being introduced into the space physics virtual observatory community. The TSDS software is open source and available at SourceForge.
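
    Because a TSDS instance is a compliant DAP 2.0 server, a time subset of selected variables is just an HTTP GET with a constraint expression. A sketch (the server URL, dataset, and variable names are illustrative):

        import requests

        # DAP 2.0 constraint expressions name variables and index ranges;
        # the ".asc" suffix requests an ASCII-encoded response.
        url = ("http://tsds.example.org/tsds/irradiance.asc"
               "?time[0:1:99],irradiance[0:1:99]")

        resp = requests.get(url)
        resp.raise_for_status()
        print(resp.text[:500])   # header plus the first few samples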

  6. Use of virtual slide system for quick frozen intra-operative telepathology diagnosis in Kyoto, Japan.

    PubMed

    Tsuchihashi, Yasunari; Takamatsu, Terumasa; Hashimoto, Yukimasa; Takashima, Tooru; Nakano, Kooji; Fujita, Setsuya

    2008-07-15

    We started to use virtual slide (VS) and virtual microscopy (VM) systems for quick frozen intra-operative telepathology diagnosis in Kyoto, Japan. The system uses a digital slide scanner, VASSALO by CLARO Inc., and a broadband optic fibre provided by NTT West Japan Inc. with a best-effort capacity of 100 Mbps. The client is the pathology laboratory of Yamashiro Public Hospital, one of the local centre hospitals located in the south of Kyoto Prefecture, where a full-time pathologist is not present. The client is connected by VPN to the telepathology centre of our institute located in central Kyoto. Based on the 15 most recent test cases of VS telepathology diagnosis, including cases judging negative or positive surgical margins, we were able to assess the usefulness of VS in intra-operative remote diagnosis. The time required to make a frozen section VS file was found to be around 10 min when a x10 objective is used and the maximal dimension of the frozen sample is less than 20 mm. Correct focus of VS images was attained in all cases and in all fields of each tissue specimen. So far, the best-effort broadband capacity appears sufficient to reach a diagnosis in time during the operation. Telepathology diagnosis was achieved within 5 minutes in most cases using the VS viewer provided by CLARO Inc. The VS telepathology system was found to be superior to the conventional still-image telepathology system using a robotic microscope, since the former provides far more image information within the limited intra-operative time, and in a much more efficient way. In the near future VS telepathology will replace conventional still-image telepathology with a robotic microscope even in quick frozen intra-operative diagnosis.

  7. Use of virtual slide system for quick frozen intra-operative telepathology diagnosis in Kyoto, Japan

    PubMed Central

    Tsuchihashi, Yasunari; Takamatsu, Terumasa; Hashimoto, Yukimasa; Takashima, Tooru; Nakano, Kooji; Fujita, Setsuya

    2008-01-01

    We started to use virtual slide (VS) and virtual microscopy (VM) systems for quick frozen intra-operative telepathology diagnosis in Kyoto, Japan. The system uses a digital slide scanner, VASSALO by CLARO Inc., and a broadband optic fibre provided by NTT West Japan Inc. with a best-effort capacity of 100 Mbps. The client is the pathology laboratory of Yamashiro Public Hospital, one of the local centre hospitals located in the south of Kyoto Prefecture, where a full-time pathologist is not present. The client is connected by VPN to the telepathology centre of our institute located in central Kyoto. Based on the 15 most recent test cases of VS telepathology diagnosis, including cases judging negative or positive surgical margins, we were able to assess the usefulness of VS in intra-operative remote diagnosis. The time required to make a frozen section VS file was found to be around 10 min when a ×10 objective is used and the maximal dimension of the frozen sample is less than 20 mm. Correct focus of VS images was attained in all cases and in all fields of each tissue specimen. So far, the best-effort broadband capacity appears sufficient to reach a diagnosis in time during the operation. Telepathology diagnosis was achieved within 5 minutes in most cases using the VS viewer provided by CLARO Inc. The VS telepathology system was found to be superior to the conventional still-image telepathology system using a robotic microscope, since the former provides far more image information within the limited intra-operative time, and in a much more efficient way. In the near future VS telepathology will replace conventional still-image telepathology with a robotic microscope even in quick frozen intra-operative diagnosis. PMID:18673520

  8. Development of the Large-Scale Statistical Analysis System of Satellites Observations Data with Grid Datafarm Architecture

    NASA Astrophysics Data System (ADS)

    Yamamoto, K.; Murata, K.; Kimura, E.; Honda, R.

    2006-12-01

    In the Solar-Terrestrial Physics (STP) field, the amount of satellite observation data has been increasing every year. Three problems must be solved to achieve large-scale statistical analyses of such data. (i) More CPU power and larger memory and disk sizes are required; personal computers are not powerful enough to analyze such amounts of data, while super-computers provide high-performance CPUs and ample memory but are usually separated from the Internet, or connected only for programming or data file transfer. (ii) Most of the observation data files are managed at distributed data sites over the Internet, so users have to know where the data files are located. (iii) Since no common data format is available in the STP field, users have to write a reader program for each dataset themselves. To overcome problems (i) and (ii), we constructed a parallel and distributed data analysis environment based on Gfarm, the reference implementation of the Grid Datafarm architecture. Gfarm shares computational resources, performs parallel distributed processing, and provides the Gfarm filesystem, which acts as a virtual directory tree spanning the nodes. The Gfarm environment is composed of three parts: a metadata server that manages information on the distributed files, filesystem nodes that provide computational resources, and a client that submits jobs to the metadata server and manages processing schedules. In the present study, both data files and data processing are parallelized on a Gfarm with 6 filesystem nodes; each node has a 1 GHz Pentium CPU, 256 MB of memory and a 40 GB disk. To evaluate the performance of the present Gfarm system, we scanned a large number of data files, each about 300 MB in size, using three processing methods: sequential processing on one node, sequential processing by each node, and parallel processing by each node. Comparing the number of files against the elapsed time, parallel and distributed processing reduced the elapsed time to one fifth of that of sequential processing. On the other hand, sequential processing was faster in another experiment in which each file was smaller than 100 KB; in that case the elapsed time to scan one file is within one second, which implies that disk swapping took place during parallel processing on each node. We note that the operation became unstable when the number of files exceeded 1000. To overcome problem (iii), we developed an original data class. This class supports reading data files in various formats: it defines a schema for each data type, encapsulates the structure of the data files, and converts them into a common internal format. In addition, since this class provides a time re-sampling function, users can easily convert multiple arrays with different time resolutions onto a common time base. Finally, using the Gfarm, we achieved a high-performance environment for large-scale statistical data analyses. It should be noted that the present method is effective only when individual data files are large enough. At present, we are building a new Gfarm environment with 8 nodes; each node has a 2 GHz Athlon 64 X2 dual-core CPU, 2 GB of memory and 1.2 TB of disk (RAID 0). Our original class is to be implemented on this new Gfarm environment. In the present talk, we show the latest results of applying this system to analyses of huge numbers of satellite observation data files.
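
    As an aside, the time re-sampling facility can be pictured as simple mean-binning onto a common time base. A minimal sketch of that idea (the original class, its schemata, and its format converters are not reproduced here):

        import numpy as np

        def resample(times, values, new_times):
            # Average samples into bins centred on new_times; bin edges are
            # the midpoints between consecutive new time stamps.
            edges = np.concatenate(([times.min()],
                                    (new_times[1:] + new_times[:-1]) / 2.0,
                                    [times.max()]))
            idx = np.clip(np.digitize(times, edges) - 1, 0, len(new_times) - 1)
            return np.array([values[idx == i].mean() if np.any(idx == i)
                             else np.nan for i in range(len(new_times))])

        # Two hypothetical series with different time resolutions:
        t_fast = np.linspace(0.0, 3600.0, 3601)     # 1-second sampling
        v_fast = np.sin(t_fast / 600.0)
        t_common = np.linspace(30.0, 3570.0, 60)    # ~1-minute common base
        v_common = resample(t_fast, v_fast, t_common)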

  9. State of the art of teledermatopathology.

    PubMed

    Massone, Cesare; Brunasso, Alexandra M G; Campbell, Terri M; Soyer, H Peter

    2008-10-01

    Teledermatopathology may involve real-time transmission of images from distant locations to consulting pathologists through the remote manipulation of a robotic microscope. Alternatively, the static store-and-forward option involves the single-file transmission of subjectively preselected and captured areas of microscopic images by a referring physician. The recent introduction of virtual slide systems (VSS) involves the digitization of whole slides at high resolution, enabling the user to view any part of the specimen at any magnification. Such technology has surmounted previous restrictions caused by the size of preselected areas and specimen sampling for telepathology. In terms of client access, these virtual slides may be stored on a virtual slide server and made available on the Web for remote consultation by pathologists via an integrated virtual slide client network. Although store-and-forward teledermatopathology is the most frequently used and least expensive approach, VSS represent the future of this discipline. Recent pilot studies suggest that the use of remote expert consultants in diagnostic dermatopathology can be integrated into daily routine, teleconsultation, and teleteaching. The new technology enables rapid and reproducible diagnoses, but despite its usability, VSS is not yet completely feasible for teledermatopathology of inflammatory skin diseases, as performance seems to be influenced by the availability of complete clinical data. Improvements in diagnostic capability will no doubt follow from further development of the VSS, the slide processor, and, of course, training in the use of the virtual microscope. Undoubtedly, as technology becomes even more sophisticated in the future, VSS will overcome the present drawbacks and find its place in all facets of teledermatopathology.

  10. Conversion of School Nurse Policy and Procedure Manual to Electronic Format

    ERIC Educational Resources Information Center

    Randall, Joellyn; Knee, Rachel; Galemore, Cynthia

    2006-01-01

    Policy and procedure manuals are essential to establishing standards of practice and ensuring quality of care to students and families. The Olathe District Schools (Kansas) Technology Department created the Virtual File Cabinet to provide online access to employee policies, school board policies, forms, and other documents. A task force of school…

  11. 77 FR 40338 - Announcing Revised Draft Federal Information Processing Standard (FIPS) 201-2, Personal Identity...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-09

    ... may be sent to: Chief, Computer Security Division, Information Technology Laboratory, ATTN: Comments... introduces the concept of a virtual contact interface, over which all functionality of the PIV Card is... Laboratory Programs. [FR Doc. 2012-16725 Filed 7-6-12; 8:45 am] BILLING CODE 3510-13-P ...

  12. Social Networking in Libraries: New Tricks of the Trade, Part I

    ERIC Educational Resources Information Center

    Cooke, Nicole A.

    2008-01-01

    While not a brand new phenomenon, online social networking sites continue to be exceedingly popular and seem to be where students spend much of their time. Not only are participants blogging, texting, chatting, sharing files, gaming, and existing virtually online, they also are forming communities and new cultures and exhibiting new informational…

  13. 77 FR 17095 - Solicitation for a Cooperative Agreement-Development of a Core Correctional Practices Curriculum

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-23

    ... could include the following elements: (A) Agency and facilitator/trainer/coach readiness survey: virtual... developed during the project and in a design and format appropriate for public dissemination. A draft of... government's requirement for accessibility (508 PDF or HTML file). The awardee must provide descriptive text...

  14. Open source tools for management and archiving of digital microscopy data to allow integration with patient pathology and treatment information.

    PubMed

    Khushi, Matloob; Edwards, Georgina; de Marcos, Diego Alonso; Carpenter, Jane E; Graham, J Dinny; Clarke, Christine L

    2013-02-12

    Virtual microscopy includes digitisation of histology slides and the use of computer technologies for complex investigation of diseases such as cancer. However, automated image analysis, or website publishing of such digital images, is hampered by their large file sizes. We have developed two Java based open source tools: Snapshot Creator and NDPI-Splitter. Snapshot Creator converts a portion of a large digital slide into a desired quality JPEG image. The image is linked to the patient's clinical and treatment information in a customised open source cancer data management software (Caisis) in use at the Australian Breast Cancer Tissue Bank (ABCTB) and then published on the ABCTB website (http://www.abctb.org.au) using Deep Zoom open source technology. Using the ABCTB online search engine, digital images can be searched by defining various criteria such as cancer type, or biomarkers expressed. NDPI-Splitter splits a large image file into smaller sections of TIFF images so that they can be easily analysed by image analysis software such as Metamorph or Matlab. NDPI-Splitter also has the capacity to filter out empty images. Snapshot Creator and NDPI-Splitter are novel open source Java tools. They convert digital slides into files of smaller size for further processing. In conjunction with other open source tools such as Deep Zoom and Caisis, this suite of tools is used for the management and archiving of digital microscopy images, enabling digitised images to be explored and zoomed online. Our online image repository also has the capacity to be used as a teaching resource. These tools also enable large files to be sectioned for image analysis. The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5330903258483934.
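
    The tiling step NDPI-Splitter performs can be pictured with a short sketch: cut a large image into fixed-size tiles and drop the near-empty ones. This assumes the slide has already been exported to a format PIL can read (NDPI itself requires a dedicated reader), and the emptiness threshold is arbitrary:

        from PIL import Image
        import numpy as np

        Image.MAX_IMAGE_PIXELS = None          # allow very large images
        TILE = 2048

        img = Image.open("slide_region.tif")   # hypothetical exported region
        w, h = img.size
        for y in range(0, h, TILE):
            for x in range(0, w, TILE):
                tile = img.crop((x, y, min(x + TILE, w), min(y + TILE, h)))
                arr = np.asarray(tile.convert("L"))
                if arr.std() < 2.0:            # skip near-empty tiles
                    continue
                tile.save(f"tile_{x}_{y}.tif")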

  15. Intra-prosthetic breast MR virtual navigation: a preliminary study for a new evaluation of silicone breast implants.

    PubMed

    Moschetta, Marco; Telegrafo, Michele; Capuano, Giulia; Rella, Leonarda; Scardapane, Arnaldo; Angelelli, Giuseppe; Stabile Ianora, Amato Antonio

    2013-10-01

    To assess the contribution of intra-prosthetic MRI virtual navigation for evaluating breast implants and detecting implant ruptures. Forty-five breast implants were evaluated by MR examination. Only patients with a clinical indication were assessed. A 1.5-T device equipped with a 4-channel breast coil was used, acquiring axial TSE-T2, axial silicone-only, axial silicone-suppression and sagittal STIR images. The resulting DICOM files were also analyzed using virtual navigation software. Two blinded radiologists evaluated all MR and virtual images. Eight patients, for a total of 13 implants, underwent surgical replacement. Sensitivity, specificity, accuracy, positive predictive value (PPV) and negative predictive value (NPV) were calculated for both imaging strategies. Intra-capsular rupture was diagnosed in 13 out of 45 (29%) implants using MRI. Based on virtual navigation, 9 (20%) cases of intra-capsular rupture were diagnosed. Sensitivity, specificity, accuracy, PPV and NPV values of 100%, 86%, 89%, 62% and 100%, respectively, were found for MRI. Virtual navigation increased these values to 100%, 97%, 98%, 89% and 100%. Intra-prosthetic breast MR virtual navigation is a promising additional tool for the evaluation of breast implants, able to reduce false positives and to provide a more accurate detection of intra-capsular implant rupture signs. Copyright © 2013 Elsevier Inc. All rights reserved.

  16. Virtual Environments for Visualizing Structural Health Monitoring Sensor Networks, Data, and Metadata.

    PubMed

    Napolitano, Rebecca; Blyth, Anna; Glisic, Branko

    2018-01-16

    Visualization of sensor networks, data, and metadata is becoming one of the most pivotal aspects of the structural health monitoring (SHM) process. Without the ability to communicate efficiently and effectively between disparate groups working on a project, an SHM system can be underused, misunderstood, or even abandoned. For this reason, this work seeks to evaluate visualization techniques in the field, identify flaws in current practices, and devise a new method for visualizing and accessing SHM data and metadata in 3D. More precisely, the work presented here reflects a method and digital workflow for integrating SHM sensor networks, data, and metadata into a virtual reality environment by combining spherical imaging and informational modeling. Both intuitive and interactive, this method fosters communication on a project, enabling diverse practitioners of SHM to efficiently consult and use the sensor networks, data, and metadata. The method is presented through its implementation on a case study, Streicker Bridge on the Princeton University campus. To illustrate the efficiency of the new method, the time and data file size were compared to other potential methods used for visualizing and accessing SHM sensor networks, data, and metadata in 3D. Additionally, feedback from civil engineering students familiar with SHM is used for validation. Recommendations on how different groups working together on an SHM project can create an SHM virtual environment and convey data to the proper audiences are also included.

  17. Virtual Environments for Visualizing Structural Health Monitoring Sensor Networks, Data, and Metadata

    PubMed Central

    Napolitano, Rebecca; Blyth, Anna; Glisic, Branko

    2018-01-01

    Visualization of sensor networks, data, and metadata is becoming one of the most pivotal aspects of the structural health monitoring (SHM) process. Without the ability to communicate efficiently and effectively between disparate groups working on a project, an SHM system can be underused, misunderstood, or even abandoned. For this reason, this work seeks to evaluate visualization techniques in the field, identify flaws in current practices, and devise a new method for visualizing and accessing SHM data and metadata in 3D. More precisely, the work presented here reflects a method and digital workflow for integrating SHM sensor networks, data, and metadata into a virtual reality environment by combining spherical imaging and informational modeling. Both intuitive and interactive, this method fosters communication on a project, enabling diverse practitioners of SHM to efficiently consult and use the sensor networks, data, and metadata. The method is presented through its implementation on a case study, Streicker Bridge on the Princeton University campus. To illustrate the efficiency of the new method, the time and data file size were compared to other potential methods used for visualizing and accessing SHM sensor networks, data, and metadata in 3D. Additionally, feedback from civil engineering students familiar with SHM is used for validation. Recommendations on how different groups working together on an SHM project can create an SHM virtual environment and convey data to the proper audiences are also included. PMID:29337877

  18. ARC+(Registered Trademark) and ARC PC Welding Simulators: Teach Welders with Virtual Interactive 3D Technologies

    NASA Technical Reports Server (NTRS)

    Choquet, Claude

    2011-01-01

    123 Certification Inc., a Montreal-based company, has developed an innovative hands-on welding simulator solution to help build the welding workforce in the simplest way. The solution lies in virtual reality technology, which has been fully tested since the early 90's. President and founder of 123 Certification Inc., Mr. Claude Choquet, Ing. M.Sc. IWE, acts as a bridge between the welding and the programming worlds. Working in these fields for more than 20 years, he has filed 12 patents worldwide for a gesture control platform with leading-edge hardware related to simulation. In the summer of 2006, Mr. Choquet was proud to be invited to the annual IIW International Welding Congress in Quebec City to launch the ARC+ welding simulator. A 100% virtual reality system and web-based training center was developed to simulate multi-process, multi-material, multi-position and multi-pass welding. The simulator is intended to train welding students and apprentices in schools or industries. The welding simulator is composed of a real welding electrode holder (SMAW-GTAW) and gun (GMAW-FCAW), a head-mounted display (HMD), a 6-degrees-of-freedom tracking system for interaction between the user's hands and head, as well as external audio speakers. Both guns and HMD interact online and simultaneously. The welding simulation is based on the laws of physics and empirical results from detailed analysis of a series of welding tests based on industrial applications tested over the last 20 years. The simulation runs in real time, using a local logic network to determine the quality and shape of the created weld. These results are based on the orientation, distance, and speed of the welding torch and the depth of penetration. The welding process and resulting weld bead are displayed in a virtual environment with screenplay interactive training modules. For review, weld quality and recorded process values can be displayed and diagnosed after welding. To help in the learning process, a learning curve for each student and each Virtual Welding Class can be plotted for an instructor's review or a required third-party evaluation.

  19. MATLAB software for viewing and processing u-channel and discrete sample paleomagnetic data: UPmag and DPmag

    NASA Astrophysics Data System (ADS)

    Xuan, C.; Channell, J. E.

    2009-12-01

    With the increasing efficiency of acquiring paleomagnetic data from u-channel or discrete samples, large volumes of data can be accumulated within a short time period. It is often critical to visualize and process these data in “real time” as measurements proceed, so that the measurement plan can be adapted accordingly. New MATLAB™ software packages, UPmag and DPmag, are introduced for easy and rapid analysis of natural remanent magnetization (NRM) and laboratory-induced remanent magnetization data for u-channel and discrete samples, respectively. UPmag comprises three MATLAB™ graphical user interfaces: UVIEW, UDIR, and UINT. UVIEW allows users to open and check through measurement data from the magnetometer, to correct detected flux jumps in the data, and to export files for further treatment. UDIR reads the *.dir file generated by UVIEW, automatically calculates component directions using selectable demagnetization range(s) with anchored or free origin, and displays orthogonal projections and stepwise intensity plots for any position along the u-channel sample. UDIR can also display data on equal-area stereographic projections and draw virtual geomagnetic poles (VGPs) on various map projections. UINT provides a convenient platform to evaluate relative paleointensity estimates using the *.int files that can be exported from UVIEW. DPmag comprises two MATLAB™ graphical user interfaces: DDIR and DFISHER. DDIR reads output files from the discrete sample magnetometer measurement system, allows users to calculate component directions for each discrete sample, plots the demagnetization data on orthogonal and equal-area projections, and shows the stepwise intensity data. DFISHER reads the *.pca file exported from DDIR, calculates VGPs and Fisher statistics for data from selected groups of samples, and plots the results on equal-area projections and as VGPs on a range of map projections. Data and plots from UPmag and DPmag can be exported to various file formats.

  20. AstroCloud, a Cyber-Infrastructure for Astronomy Research: Data Access and Interoperability

    NASA Astrophysics Data System (ADS)

    Fan, D.; He, B.; Xiao, J.; Li, S.; Li, C.; Cui, C.; Yu, C.; Hong, Z.; Yin, S.; Wang, C.; Cao, Z.; Fan, Y.; Mi, L.; Wan, W.; Wang, J.

    2015-09-01

    The data access and interoperability module connects observation proposals, data, virtual machines and software. Using the unique identifier of a PI (principal investigator), an email address, or an internal ID, data can be collected by the PI's proposals or through search interfaces such as cone search. Files associated with the search results can easily be transferred to cloud storage, including the storage attached to virtual machines, or to commercial platforms such as Dropbox. Benefiting from the standards of the IVOA (International Virtual Observatory Alliance), VOTable-formatted search results can be sent to various VO software tools. Future work will integrate more data and connect archives and other astronomical resources.
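
    Cone search itself is a deliberately simple IVOA protocol: an HTTP GET carrying RA, DEC, and a search radius SR (all in decimal degrees), answered with a VOTable. A sketch against a hypothetical AstroCloud endpoint:

        from io import BytesIO

        import requests
        from astropy.io.votable import parse_single_table

        # RA/DEC/SR are the parameters defined by the IVOA Simple Cone
        # Search standard; the service URL below is hypothetical.
        resp = requests.get("http://astrocloud.example.org/conesearch",
                            params={"RA": 180.0, "DEC": 2.5, "SR": 0.1})
        resp.raise_for_status()

        table = parse_single_table(BytesIO(resp.content)).to_table()
        print(table.colnames, len(table))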

  1. Extending Iris: The VAO SED Analysis Tool

    NASA Astrophysics Data System (ADS)

    Laurino, O.; Busko, I.; Cresitello-Dittmar, M.; D'Abrusco, R.; Doe, S.; Evans, J.; Pevunova, O.

    2013-10-01

    Iris is a tool developed by the Virtual Astronomical Observatory (VAO) for building and analyzing Spectral Energy Distributions (SEDs). Iris was designed to be extensible, so that new components and models can be developed by third parties and then included at runtime. Iris can be extended in different ways: new file readers allow users to integrate data in custom formats into Iris SEDs; new models can be fitted to the data, in the form of template libraries for template fitting, data tables, and arbitrary Python functions. The interoperability-centered design of Iris and the Virtual Observatory standards and protocols can enable new science functionalities involving SED data.
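
    As an illustration of the "arbitrary Python functions" extension point, a user-defined model is in essence just a function of the spectral coordinate with free parameters; the exact signature Iris expects is not reproduced here, so the following only shows the mathematical shape of such a model:

        import numpy as np

        def powerlaw_model(x, norm=1.0, index=-1.5, ref=5000.0):
            # Power-law SED component: f(x) = norm * (x / ref)**index.
            #   x     : wavelength (or frequency) grid
            #   norm  : flux normalization at the reference point (free)
            #   index : spectral index (free)
            #   ref   : fixed reference wavelength/frequency
            return norm * (np.asarray(x, dtype=float) / ref) ** index

        # Evaluated on a grid, as a fitting engine would do at each step:
        wavelengths = np.linspace(1000.0, 10000.0, 50)
        flux = powerlaw_model(wavelengths, norm=2.0e-13, index=-1.2)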

  2. A convenient and adaptable package of DNA sequence analysis programs for microcomputers.

    PubMed Central

    Pustell, J; Kafatos, F C

    1982-01-01

    We describe a package of DNA data handling and analysis programs designed for microcomputers. The package is convenient for immediate use by persons with little or no computer experience, and has been optimized by trial in our group for a year. By typing a single command, the user enters a system which asks questions or gives instructions in English. The system will enter, alter, and manage sequence files or a restriction enzyme library. It generates the reverse complement, translates, calculates codon usage, finds restriction sites, finds homologies with various degrees of mismatch, and graphs amino acid composition or base frequencies. A number of options for data handling and printing can be used to produce figures for publication. The package will be available in ANSI Standard FORTRAN for use with virtually any FORTRAN compiler. PMID:6278412
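
    For a flavor of the operations listed, here is what two of them, reverse complement and codon usage, look like in a modern scripting language (Python rather than the package's FORTRAN; the sequence is made up):

        from collections import Counter

        COMPLEMENT = str.maketrans("ACGT", "TGCA")

        def reverse_complement(seq):
            # Complement each base, then reverse the strand.
            return seq.translate(COMPLEMENT)[::-1]

        def codon_usage(seq):
            # Count codons in reading frame 1.
            return Counter(seq[i:i + 3] for i in range(0, len(seq) - 2, 3))

        seq = "ATGGCCATTGTAATG"
        print(reverse_complement(seq))   # CATTACAATGGCCAT
        print(codon_usage(seq))          # ATG: 2; GCC, ATT, GTA: 1 each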

  3. A Uniform Ontology for Software Interfaces

    NASA Technical Reports Server (NTRS)

    Feyock, Stefan

    2002-01-01

    It is universally the case that computer users who are not also computer specialists prefer to deal with computers in terms of a familiar ontology, namely that of their application domains. For example, the well-known Windows ontology assumes that the user is an office worker, and therefore should be presented with a "desktop environment" featuring entities such as (virtual) file folders, documents, appointment calendars, and the like, rather than a world of machine registers and machine language instructions, or even the DOS command level. The central theme of this research has been the proposition that the user interacting with a software system should have at his disposal both the ontology underlying the system and a model of the system. This information is necessary for understanding the system in use, as well as for the automatic generation of assistance for the user, both in solving the problem for which the application is designed and in providing guidance on the capabilities and use of the system.

  4. Digital surveying and mapping of forest road network for development of a GIS tool for the effective protection and management of natural ecosystems

    NASA Astrophysics Data System (ADS)

    Drosos, Vasileios C.; Liampas, Sarantis-Aggelos G.; Doukas, Aristotelis-Kosmas G.

    2014-08-01

    Geographic Information Systems (GIS) have become important tools, not only in the geosciences and environmental sciences but in virtually all research that requires monitoring, planning, or land management. The purpose of this paper was to develop a planning and decision-making tool using AutoCAD Map, ArcGIS and Google Earth, with emphasis on investigating the suitability of forest road mapping and the range of its implementation in Greece at the prefecture level. Integrating spatial information into a database makes data available throughout the organization, improving quality, productivity, and data management. Working in such an environment, one can access and edit information, integrate and analyze data, and communicate effectively, and can select desired information, such as the forest road network, at a very early stage in the planning of silvicultural operations, for example before harvest planning is carried out. The software used was AutoCAD Map to export the GPS data to shapefiles, ArcGIS for the shapefiles (ArcGlobe), and Google Earth with KML (Keyhole Markup Language) files, in order to better visualize and evaluate existing conditions, design in a real-world context, and exchange information with government agencies, utilities, and contractors in both CAD and GIS data formats. The automation of updating and transferring files between agencies and departments is one of the main tasks the integrated GIS tool should address.
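
    Since KML is plain XML, a GPS-surveyed road centerline can be written for Google Earth in a few lines; a minimal sketch (the coordinates below are illustrative placeholders, not survey data):

        # Minimal KML writer for a road centerline (longitude, latitude pairs).
        points = [(24.101, 41.352), (24.103, 41.354), (24.106, 41.355)]

        coords = " ".join(f"{lon},{lat},0" for lon, lat in points)
        kml = f"""<?xml version="1.0" encoding="UTF-8"?>
        <kml xmlns="http://www.opengis.net/kml/2.2">
          <Placemark>
            <name>Forest road segment</name>
            <LineString><coordinates>{coords}</coordinates></LineString>
          </Placemark>
        </kml>"""

        with open("forest_road.kml", "w") as f:
            f.write(kml)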

  5. Islandora A Flexible Drupal-Based Virtual Research Environment

    NASA Astrophysics Data System (ADS)

    Leggott, M.; Pan, J.

    2011-12-01

    Research today exists in a landscape where data flood in, literature grows exponentially, and disciplinary boundaries are increasingly porous. Many of the greatest challenges facing researchers are related to managing the information produced during the research life cycle - from the discussion of new projects to the creation of funding proposals, the production and analysis of data, and the presentation of findings via conferences and scholarly publications. The Islandora framework provides a system that stewards digital data in any form (textual, numeric, scientific, multimedia) along the entire course of this research continuum. It facilitates collaboration not just among physically distant members of research groups but also among research groups and their associated support groups. Because Islandora accommodates both the project-specific, experiment-based context and the cross-project, interdisciplinary exploration context of data, the approach to the creation and discovery of data can be called 'discipline-agnostic.' UPEI's Virtual Research Environment (VRE) has demonstrated the immense benefits of such an approach. In one example, scientists collect samples and create detailed metadata for each sample, potentially generating thousands of data files of various kinds, which can all be loaded in one step. Software (some of it developed specifically for this project) then combines, recombines, and transforms these data into alternate formats for analysis, thereby saving scientists hundreds of hours of manual labor. Wherever possible, data are translated from proprietary file formats to standard XML and stored, thereby exposing the data to a larger audience that may bring them together with quite different samples or experiments in novel ways. The same computer processes and software workflows brought to bear in the context of one research program can be re-used in other areas and across completely different disciplines, since the data are represented by similar streams of bits and bytes. Islandora is developing a strong set of features of interest to the geoscience community, including a generic XML form builder (with current support for DC, FGDC, EML, KML, NCD, Darwin Core, DDI, PREMIS, and more coming). Strong support for large image files via JPEG2000, document formats, entity extraction, geo-referencing functions, OpenLayers integration, mobile iPad/iPhone interfaces and more make Islandora an ideal digital asset management system for geoscience researchers. Islandora is an open source project developed at the University of PEI, with a full suite of services available from the UPEI spin-off DiscoveryGarden Inc. Islandora is built around the Drupal content management system and the Fedora repository, providing a robust and flexible digital asset management framework. Examples of Islandora systems include a variety of research data repositories, including some in the earth sciences, such as the ESDORA system at Oak Ridge National Laboratory. The system will be described in detail, using a number of research systems as examples.

  6. Biological data integration: wrapping data and tools.

    PubMed

    Lacroix, Zoé

    2002-06-01

    Nowadays scientific data are inevitably digital and stored in a wide variety of formats in heterogeneous systems. Scientists need access to an integrated view of remote or local heterogeneous data sources with advanced tools for data access, analysis, and visualization. Building a digital library for scientific data requires accessing and manipulating data extracted from flat files or databases, documents retrieved from the Web, as well as data generated by software. We present an approach to wrapping web data sources, databases, flat files, or data generated by tools through a database view mechanism. Generally, a wrapper has two tasks: first, it sends a query to the source to retrieve data and, second, it builds the expected output with respect to the virtual structure. Our wrappers are composed of a retrieval component based on an intermediate object view mechanism called search views, mapping the source capabilities to attributes, and an eXtensible Markup Language (XML) engine, respectively, to perform these two tasks. The originality of the approach consists of: 1) a generic view mechanism to seamlessly access data sources with limited capabilities and 2) the ability to wrap data sources as well as the useful specific tools they may provide. Our approach has been developed and demonstrated as part of a multidatabase system supporting queries via uniform Object Protocol Model (OPM) interfaces.
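
    As a hedged sketch of the two wrapper tasks described (retrieve from the source, then shape the result to the virtual structure), using a hypothetical flat-file source:

        import csv
        import xml.etree.ElementTree as ET

        class FlatFileWrapper:
            # Toy wrapper: a 'search view' over a CSV source plus an XML
            # builder that shapes hits to the virtual (integrated) structure.

            def __init__(self, path):
                self.path = path

            def retrieve(self, **criteria):
                # Task 1: query the source through the view's capabilities.
                with open(self.path, newline="") as f:
                    for row in csv.DictReader(f):
                        if all(row.get(k) == v for k, v in criteria.items()):
                            yield row

            def to_xml(self, rows):
                # Task 2: build output conforming to the virtual structure.
                root = ET.Element("records")
                for row in rows:
                    rec = ET.SubElement(root, "record")
                    for field, value in row.items():
                        ET.SubElement(rec, field).text = value
                return ET.tostring(root, encoding="unicode")

        # w = FlatFileWrapper("proteins.csv")            # hypothetical source
        # print(w.to_xml(w.retrieve(organism="yeast")))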

  7. Operational Interoperable Web Coverage Service for Earth Observing Satellite Data: Issues and Lessons Learned

    NASA Astrophysics Data System (ADS)

    Yang, W.; Min, M.; Bai, Y.; Lynnes, C.; Holloway, D.; Enloe, Y.; di, L.

    2008-12-01

    In the past few years, there has been growing interest among major Earth observing satellite (EOS) data providers in serving data through the interoperable Web Coverage Service (WCS) interface protocol, developed by the Open Geospatial Consortium (OGC). The interface protocol defined in the WCS specifications allows client software to make customized requests for multi-dimensional EOS data, including spatial and temporal subsetting, resampling and interpolation, and coordinate reference system (CRS) transformation. A WCS server describes an offered coverage, i.e., a data product, through its response to a client's DescribeCoverage request. The description includes the offered coverage's spatial/temporal extents and resolutions, supported CRSs, supported interpolation methods, and supported encoding formats. Based on such information, a client can request the entire coverage or a subset in any spatial/temporal resolution and in any of the supported CRSs, formats, and interpolation methods. When implementing a WCS server, a data provider can take different approaches to presenting its data holdings to clients. One of the most straightforward, and commonly used, approaches is to offer individual physical data files as separate coverages. Such an implementation, however, results in too many offered coverages for large data holdings, and it cannot fully capture the relationships among different, but spatially and/or temporally associated, data files. It is desirable to decouple offered coverages from physical data files so that the former are more coherent, especially in the spatial and temporal domains. Therefore, some servers offer one single coverage for a set of spatially coregistered time series data files, such as a daily global precipitation coverage linked to many global single-day precipitation files; others offer one single coverage for multiple temporally coregistered files that together form a large spatial extent. In either case, a server needs to assemble an output coverage in real time by combining a potentially large number of physical files, which can be operationally difficult. The task becomes more challenging if an offered coverage involves spatially and temporally unregistered physical files. In this presentation, we discuss issues and lessons learned in providing NASA's AIRS Level 2 atmospheric products, which are in a satellite swath CRS and in 6-minute segment granule files, as virtual global coverages. We'll discuss the WCS server's on-the-fly georectification, mosaicking, quality screening, performance, and scalability.
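
    For concreteness, a WCS 1.0.0 GetCoverage call is a key-value HTTP GET; a sketch against a hypothetical server offering one of the virtual global coverages discussed (the endpoint, coverage name, and grid values are illustrative):

        import requests

        # Standard WCS 1.0.0 key-value parameters; the endpoint and the
        # coverage name below are hypothetical.
        params = {
            "service": "WCS",
            "version": "1.0.0",
            "request": "GetCoverage",
            "coverage": "AIRS_L2_Temperature",
            "crs": "EPSG:4326",
            "bbox": "-180,-90,180,90",
            "time": "2008-07-01",
            "width": "720", "height": "360",
            "format": "GeoTIFF",
        }
        resp = requests.get("http://example.gov/wcs", params=params)
        resp.raise_for_status()
        with open("airs_global.tif", "wb") as f:
            f.write(resp.content)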

  8. A Software Prototype For Accessing Large Climate Simulation Data Through Digital Globe Interface

    NASA Astrophysics Data System (ADS)

    Chaudhuri, A.; Sorokine, A.

    2010-12-01

    The IPCC suite of global Earth system models produced terabytes of data for the CMIP3/AR4 archive and is expected to reach the petabyte scale by CMIP5/AR5. Dynamic downscaling of global models based on regional climate models can potentially lead to even larger data volumes. The model simulations for global or regional climate models like CCSM3 or WRF are typically run on supercomputers like the ORNL/DOE Jaguar, and the results are stored on high-performance storage systems. Access to these results from a user workstation is impeded by a number of factors, such as enormous data size, the limited bandwidth of standard office networks, and data formats that are not fully supported by applications. A user-friendly interface for accessing and visualizing these results over a standard Internet connection is therefore required to facilitate collaborative work among geographically dispersed groups of scientists. To address this problem, we have developed a virtual-globe-based application which enables scientists to query, visualize and analyze the results without large data transfers to desktops and department-level servers. We have used the open-source NASA WorldWind as the virtual globe platform and extended it with modules capable of visualizing model outputs stored in NetCDF format while the data reside on the high-performance system. Based on the query placed by the scientist, our system initiates data processing routines on the high-performance storage system to subset the data and reduce its size, and then transfers it back to the scientist's workstation through a secure shell tunnel. The whole operation is kept transparent to the scientist and for the most part is controlled from a point-and-click GUI. The virtual globe also serves as a common platform for geospatial data, allowing smooth integration of the model simulation results with geographic data from other sources, such as various web services or user-specific data in local files, if required. The system can also build and update a metadata catalog on the high-performance storage that presents a simplified summary of the stored variables, hiding low-level details such as the physical location, size or format of the files from the user. Since data are often contributed to the system from multiple sources, the metadata catalog provides the user with a bird's-eye view of the current status of the database. As a next step, we plan to parallelize the metadata updating and query-driven data selection routines to reduce the query response time. At the current stage, the system can be immediately useful in making climate model simulation results available to a greater number of researchers who need simple and intuitive visualization of the simulation data or want to perform some analysis on it. The system's utility can reach beyond this particular application, since it is generic enough to be ported to other high-performance systems and to enable easy access to other types of geographic data.
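
    The server-side subsetting step is what keeps the transfers small. A hedged sketch of such a routine (file, variable, and dimension names are illustrative, and a 2D field on a regular lat/lon grid is assumed):

        import numpy as np
        from netCDF4 import Dataset

        def subset(src_path, dst_path, var, lat_rng, lon_rng):
            # Extract a lat/lon window of one 2D variable into a small file.
            with Dataset(src_path) as src, Dataset(dst_path, "w") as dst:
                lat = src.variables["lat"][:]
                lon = src.variables["lon"][:]
                li = np.where((lat >= lat_rng[0]) & (lat <= lat_rng[1]))[0]
                lj = np.where((lon >= lon_rng[0]) & (lon <= lon_rng[1]))[0]
                dst.createDimension("lat", len(li))
                dst.createDimension("lon", len(lj))
                for name, idx in (("lat", li), ("lon", lj)):
                    v = dst.createVariable(name, "f4", (name,))
                    v[:] = src.variables[name][idx]
                out = dst.createVariable(var, "f4", ("lat", "lon"))
                out[:] = src.variables[var][li[0]:li[-1] + 1,
                                            lj[0]:lj[-1] + 1]

        # subset("ccsm3_tas.nc", "tas_window.nc", "tas", (30, 60), (-110, -70))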

  9. GO, an exec for running the programs: CELL, COLLIDER, MAGIC, PATRICIA, PETROS, TRANSPORT, and TURTLE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shoaee, H.

    1982-05-01

    An exec has been written and placed on the PEP group's public disk to facilitate the use of several PEP related computer programs available on VM. The exec's program list currently includes: CELL, COLLIDER, MAGIC, PATRICIA, PETROS, TRANSPORT, and TURTLE. In addition, provisions have been made to allow addition of new programs to this list as they become available. The GO exec is directly callable from inside the Wylbur editor (in fact, currently this is the only way to use the GO exec). It provides the option of running any of the above programs in either interactive or batch mode. In the batch mode, the GO exec sends the data in the Wylbur active file along with the information required to run the job to the batch monitor (BMON, a virtual machine that schedules and controls execution of batch jobs). This enables the user to proceed with other VM activities at his/her terminal while the job executes, thus making it of particular interest to users with jobs requiring much CPU time to execute and/or those wishing to run multiple jobs independently. In the interactive mode, useful for small jobs requiring less CPU time, the job is executed by the user's own Virtual Machine using the data in the active file as input. At the termination of an interactive job, the GO exec facilitates examination of the output by placing it in the Wylbur active file.

  10. GO, an exec for running the programs: CELL, COLLIDER, MAGIC, PATRICIA, PETROS, TRANSPORT and TURTLE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shoaee, H.

    1982-05-01

    An exec has been written and placed on the PEP group's public disk (PUBRL 192) to facilitate the use of several PEP related computer programs available on VM. The exec's program list currently includes: CELL, COLLIDER, MAGIC, PATRICIA, PETROS, TRANSPORT, and TURTLE. In addition, provisions have been made to allow addition of new programs to this list as they become available. The GO exec is directly callable from inside the Wylbur editor (in fact, currently this is the only way to use the GO exec). It provides the option of running any of the above programs in either interactive or batch mode. In the batch mode, the GO exec sends the data in the Wylbur active file along with the information required to run the job to the batch monitor (BMON, a virtual machine that schedules and controls execution of batch jobs). This enables the user to proceed with other VM activities at his/her terminal while the job executes, thus making it of particular interest to users with jobs requiring much CPU time to execute and/or those wishing to run multiple jobs independently. In the interactive mode, useful for small jobs requiring less CPU time, the job is executed by the user's own Virtual Machine using the data in the active file as input. At the termination of an interactive job, the GO exec facilitates examination of the output by placing it in the Wylbur active file.

  11. DockoMatic 2.0: high throughput inverse virtual screening and homology modeling.

    PubMed

    Bullock, Casey; Cornia, Nic; Jacob, Reed; Remm, Andrew; Peavey, Thomas; Weekes, Ken; Mallory, Chris; Oxford, Julia T; McDougal, Owen M; Andersen, Timothy L

    2013-08-26

    DockoMatic is a free and open source application that unifies a suite of software programs within a user-friendly graphical user interface (GUI) to facilitate molecular docking experiments. Here we describe the release of DockoMatic 2.0; significant software advances include the ability to (1) conduct high throughput inverse virtual screening (IVS); (2) construct 3D homology models; and (3) customize the user interface. Users can now efficiently set up, start, and manage IVS experiments through the DockoMatic GUI by specifying receptor(s), ligand(s), grid parameter file(s), and the docking engine (either AutoDock or AutoDock Vina). DockoMatic automatically generates the needed experiment input files and output directories and allows the user to manage and monitor job progress. Upon job completion, a summary of results is generated by DockoMatic to facilitate interpretation by the user. DockoMatic functionality has also been expanded to facilitate the construction of 3D protein homology models using the Timely Integrated Modeler (TIM) wizard. The TIM wizard provides an interface that accesses the basic local alignment search tool (BLAST) and MODELER programs and guides the user through the steps needed to easily and efficiently create 3D homology models for biomacromolecular structures. The DockoMatic GUI can be customized by the user, and the software design makes it relatively easy to integrate additional docking engines, scoring functions, or third-party programs. DockoMatic is a free, comprehensive molecular docking software program for all levels of scientists in both research and education.
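
    What the GUI automates for an IVS run is essentially the generation of one docking job per receptor-ligand pair. A hedged sketch of that setup step for AutoDock Vina (file names and grid-box values are illustrative; this is not DockoMatic's internal code):

        import itertools
        import pathlib

        receptors = ["kinase_a.pdbqt", "kinase_b.pdbqt"]   # illustrative inputs
        ligands = ["hit1.pdbqt", "hit2.pdbqt"]

        for rec, lig in itertools.product(receptors, ligands):
            name = f"{pathlib.Path(rec).stem}_{pathlib.Path(lig).stem}"
            job = pathlib.Path("jobs") / name
            job.mkdir(parents=True, exist_ok=True)
            # Standard AutoDock Vina configuration keys; the grid box
            # would normally come from the grid parameter file.
            (job / "conf.txt").write_text(
                f"receptor = {rec}\n"
                f"ligand = {lig}\n"
                "center_x = 10.0\ncenter_y = 12.5\ncenter_z = -3.0\n"
                "size_x = 20\nsize_y = 20\nsize_z = 20\n"
                f"out = {job / 'poses.pdbqt'}\n")
            # Each job can then be run as: vina --config jobs/<name>/conf.txt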

  12. Comparison of the accuracy of direct and indirect three-dimensional digitizing processes for CAD/CAM systems - An in vitro study.

    PubMed

    Vecsei, Bálint; Joós-Kovács, Gellért; Borbély, Judit; Hermann, Péter

    2017-04-01

    To compare the accuracy (trueness, precision) of direct and indirect scanning CAD/CAM methods. A master cast with prepared abutments and edentulous parts was created from polymethyl methacrylate (PMMA). A high-resolution industrial scanner was used to create a reference model. Polyvinyl-siloxane (PVS) impressions and digital impressions with three intraoral scanners (iTero, Cerec, Trios) were made (n=10 for each) from the PMMA model. A laboratory scanner (Scan CS2) was used to digitize the sectioned cast made from the PVS impressions. The stereolithographic (STL) files of the impressions (n=40) were exported. Each file was compared to the reference using Geomagic Verify software. Six points were assigned to enable virtual calliper measurement of three distances of varying size within the arch. Methods were compared using interquartile range regression and equality-of-variance tests for precision, and mixed-effects linear regression for trueness. The mean (SD) deviation of short distance measurements from the reference value was -40.3 (79.7) μm using the indirect, and 22.3 (40.0) μm using the direct method. For the medium distance, indirect measurements deviated by 5.2 (SD: 111.3) μm, and direct measurements by 115.8 (SD: 50.7) μm, on average; for the long distance, the corresponding estimates were -325.8 (SD: 134.1) μm with the indirect, and -163.5 (SD: 145.5) μm with the direct method. Significant differences were found between the two methods (p<0.05). With both methods, the shorter the distance, the more accurate results were achieved. Virtual models obtained by digital impressions can be more accurate than their conventional counterparts. Copyright © 2016 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.

  13. ISIS and META projects

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth; Cooper, Robert; Marzullo, Keith

    1990-01-01

    ISIS and META are two distributed systems projects at Cornell University. The ISIS project has developed a new methodology, virtual synchrony, for writing robust distributed software. This approach is directly supported by the ISIS Toolkit, a programming system that is distributed to over 300 academic and industrial sites. Several interesting applications that exploit the strengths of ISIS, including an NFS-compatible replicated file system, are being developed. The META project is about distributed control in a soft real-time environment incorporating feedback. This domain encompasses examples as diverse as monitoring inventory and consumption on a factory floor and performing load-balancing on a distributed computing system. One of the first uses of META is for distributed application management: the tasks of configuring a distributed program, dynamically adapting to failures, and monitoring its performance. Recent progress and current plans are presented. This approach to distributed computing, a philosophy that is believed to significantly distinguish the work from that of others in the field, is explained.

  14. Virtual Ligand Screening Using PL-PatchSurfer2, a Molecular Surface-Based Protein-Ligand Docking Method.

    PubMed

    Shin, Woong-Hee; Kihara, Daisuke

    2018-01-01

    Virtual screening is a computational technique for predicting a potent binding compound for a receptor protein from a ligand library. It has been widely used in the drug discovery field to reduce the effort required of medicinal chemists to find hit compounds by experiment. Here, we introduce our novel structure-based virtual screening program, PL-PatchSurfer, which uses a molecular surface representation based on three-dimensional Zernike descriptors, an effective mathematical representation for identifying physicochemical complementarities between local surfaces of a target protein and a ligand. The advantage of the surface-patch description is its tolerance of variation in receptor and compound structures. PL-PatchSurfer2 achieves higher accuracy on apo form and computationally modeled receptor structures than conventional structure-based virtual screening programs. Thus, PL-PatchSurfer2 opens up an opportunity for targets that do not have crystal structures. The program is provided as a stand-alone program at http://kiharalab.org/plps2 . We also provide files for two ligand libraries, ChEMBL and ZINC Drug-like.

  15. Virtual Systems Pharmacology (ViSP) software for simulation from mechanistic systems-level models.

    PubMed

    Ermakov, Sergey; Forster, Peter; Pagidala, Jyotsna; Miladinov, Marko; Wang, Albert; Baillie, Rebecca; Bartlett, Derek; Reed, Mike; Leil, Tarek A

    2014-01-01

    Multiple software programs are available for designing and running large scale system-level pharmacology models used in the drug development process. Depending on the problem, scientists may be forced to use several modeling tools, which can increase model development time, IT costs and so on. Therefore, it is desirable to have a single platform that allows setting up and running large-scale simulations for models that have been developed with different modeling tools. We developed a workflow and a software platform in which a model file is compiled into a self-contained executable that is no longer dependent on the software that was used to create the model. At the same time the full model specifics are preserved by presenting all model parameters as input parameters for the executable. This platform was implemented as a model-agnostic, therapeutic-area-agnostic and web-based application with a database back-end that can be used to configure, manage and execute large-scale simulations for multiple models by multiple users. The user interface is designed to be easily configurable to reflect the specifics of the model and the user's particular needs, and the back-end database has been implemented to store and manage all aspects of the system, such as Models, Virtual Patients, User Interface Settings, and Results. The platform can be adapted and deployed on an existing cluster or cloud computing environment. Its use was demonstrated with a metabolic disease systems pharmacology model that simulates the effects of two antidiabetic drugs, metformin and fasiglifam, in type 2 diabetes mellitus patients.
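
    A minimal sketch of the execution pattern described, assuming the compiled model is an executable that accepts every model parameter as a command-line flag and writes results as JSON; the executable name, flag syntax, and output format are assumptions for illustration, not the ViSP interface.

        import subprocess, json

        def run_virtual_patients(model_exe, patients):
            """patients: list of dicts mapping parameter name -> value."""
            results = []
            for params in patients:
                args = [model_exe]
                for key, value in params.items():
                    args += [f"--{key}", str(value)]  # all parameters exposed as inputs
                proc = subprocess.run(args, capture_output=True, text=True, check=True)
                results.append(json.loads(proc.stdout))  # assume JSON on stdout
            return results

        # e.g. run_virtual_patients("./t2dm_model", [{"metformin_dose": 500}, ...])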

  16. All-optical virtual private network system in OFDM based long-reach PON using RSOA re-modulation technique

    NASA Astrophysics Data System (ADS)

    Kim, Chang-Hun; Jung, Sang-Min; Kang, Su-Min; Han, Sang-Kook

    2015-01-01

    We propose an all-optical virtual private network (VPN) system in an orthogonal frequency division multiplexing (OFDM) based long reach PON (LR-PON). In the optical access network field, technologies based on fundamental upstream (U/S) and downstream (D/S) transmission have been actively researched to accommodate the explosion of data capacity. However, data transmission among end users, arising from cloud computing, file sharing and interactive gaming, accounts for a large share of internet traffic. Moreover, this traffic is predicted to increase further as Internet of Things (IoT) services are activated. In a conventional PON, VPN data is transmitted through ONU-OLT-ONU via U/S and D/S carriers. This leads to wasted bandwidth and energy due to O-E-O conversion in the OLT and round-trip propagation between the OLT and the remote node (RN). It also places an unavoidable load on the OLT for electrical buffering, scheduling and routing. The network inefficiency becomes more critical in an LR-PON, which has been researched as an effort to reduce CAPEX and OPEX through metro-access consolidation. In the proposed system, the VPN data is separated from conventional U/S and re-modulated onto the D/S carrier by using an RSOA in the ONUs, avoiding the bandwidth consumption of U/S and D/S seen in previously reported systems. Moreover, the transmitted VPN data is re-directed to the ONUs by a wavelength selective reflector device in the RN without passing through the OLT. The VPN communication system has been experimentally demonstrated and verified in an OFDM based LR-PON.

  17. Virtual Systems Pharmacology (ViSP) software for simulation from mechanistic systems-level models

    PubMed Central

    Ermakov, Sergey; Forster, Peter; Pagidala, Jyotsna; Miladinov, Marko; Wang, Albert; Baillie, Rebecca; Bartlett, Derek; Reed, Mike; Leil, Tarek A.

    2014-01-01

    Multiple software programs are available for designing and running large scale system-level pharmacology models used in the drug development process. Depending on the problem, scientists may be forced to use several modeling tools, which can increase model development time, IT costs and so on. Therefore, it is desirable to have a single platform that allows setting up and running large-scale simulations for models that have been developed with different modeling tools. We developed a workflow and a software platform in which a model file is compiled into a self-contained executable that is no longer dependent on the software that was used to create the model. At the same time the full model specifics are preserved by presenting all model parameters as input parameters for the executable. This platform was implemented as a model-agnostic, therapeutic-area-agnostic and web-based application with a database back-end that can be used to configure, manage and execute large-scale simulations for multiple models by multiple users. The user interface is designed to be easily configurable to reflect the specifics of the model and the user's particular needs, and the back-end database has been implemented to store and manage all aspects of the system, such as Models, Virtual Patients, User Interface Settings, and Results. The platform can be adapted and deployed on an existing cluster or cloud computing environment. Its use was demonstrated with a metabolic disease systems pharmacology model that simulates the effects of two antidiabetic drugs, metformin and fasiglifam, in type 2 diabetes mellitus patients. PMID:25374542

  18. Fully Three-Dimensional Virtual-Reality System

    NASA Technical Reports Server (NTRS)

    Beckman, Brian C.

    1994-01-01

    Proposed virtual-reality system presents visual displays to simulate free flight in three-dimensional space. System, virtual space pod, is testbed for control and navigation schemes. Unlike most virtual-reality systems, virtual space pod would not depend for orientation on ground plane, which hinders free flight in three dimensions. Space pod provides comfortable seating, convenient controls, and dynamic virtual-space images for virtual traveler. Controls include buttons plus joysticks with six degrees of freedom.

  19. DMFS: A Data Migration File System for NetBSD

    NASA Technical Reports Server (NTRS)

    Studenmund, William

    1999-01-01

    I have recently developed dmfs, a Data Migration File System, for NetBSD. This file system is based on the overlay file system, which is discussed in a separate paper, and provides kernel support for the data migration system being developed by my research group here at NASA/Ames. The file system utilizes an underlying file store to provide the file backing, and coordinates user and system access to the files. It stores its internal meta data in a flat file, which resides on a separate file system. Our data migration system provides archiving and file migration services. System utilities scan the dmfs file system for recently modified files, and archive them to two separate tape stores. Once a file has been doubly archived, files larger than a specified size will be truncated to that size, potentially freeing up large amounts of the underlying file store. Some sites will choose to retain none of the file (deleting its contents entirely from the file system) while others may choose to retain a portion, for instance a preamble describing the remainder of the file. The dmfs layer coordinates access to the file, retaining user-perceived access and modification times, file size, and restricting access to partially migrated files to the portion actually resident. When a user process attempts to read from the non-resident portion of a file, it is blocked and the dmfs layer sends a request to a system daemon to restore the file. As more of the file becomes resident, the user process is permitted to begin accessing the now-resident portions of the file. For simplicity, our data migration system divides a file into two portions, a resident portion followed by an optional non-resident portion. Also, a file is in one of three states: fully resident, fully resident and archived, and (partially) non-resident and archived. For a file which is only partially resident, any attempt to write or truncate the file, or to read a non-resident portion, will trigger a file restoration. Truncations and writes are blocked until the file is fully restored so that a restoration which only partially succeeds does not leave the file in an indeterminate state with portions existing only on tape and other portions only in the disk file system. We chose layered file system technology as it permits us to focus on the data migration functionality, and permits end system administrators to choose the underlying file store technology. We chose the overlay layered file system instead of the null layer for two reasons: first, to permit our layer to better preserve meta data integrity, and second, to prevent even root processes from accessing migrated files. This is achieved as the underlying file store becomes inaccessible once the dmfs layer is mounted. We are quite pleased with how the layered file system has turned out. Of the 45 vnode operations in NetBSD, 20 (forty-four percent) required no intervention by our file layer; they are passed directly to the underlying file store. Of the twenty-five we do intercept, nine (such as vop_create()) are intercepted only to ensure meta data integrity. Most of the functionality was concentrated in five operations: vop_read, vop_write, vop_getattr, vop_setattr, and vop_fcntl. The first four are the core operations for controlling access to migrated files and preserving the user experience. vop_fcntl, a call generated for a certain class of fcntl codes, provides the command channel used by privileged user programs to communicate with the dmfs layer.
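
    The read-path behavior described above can be summarized in a small model. The following is a hedged Python sketch of the logic, not the NetBSD kernel code: a file is a resident prefix plus an archived tail, and a read past the resident boundary blocks until the restore daemon has brought enough of the file back.

        import threading

        class MigratedFile:
            """Toy model of a dmfs-style partially resident file."""
            def __init__(self, resident_bytes, full_size):
                self.resident = resident_bytes   # bytes currently on disk
                self.size = full_size            # user-perceived file size
                self.restored = threading.Condition()

            def read(self, offset, length):
                end = min(offset + length, self.size)
                with self.restored:
                    while end > self.resident:
                        self._request_restore(end)   # ask the restore daemon
                        self.restored.wait()         # block the reading process
                return "<%d bytes from resident store>" % (end - offset)

            def on_restore_progress(self, new_resident):
                # Called as the daemon brings more of the file back from tape;
                # readers waiting below the new boundary may now proceed.
                with self.restored:
                    self.resident = new_resident
                    self.restored.notify_all()

            def _request_restore(self, up_to):
                pass  # placeholder: send a message to the restore daemon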

  20. Adding EUNIS and VAULT rocket data to the VSO with Modern Perl frameworks

    NASA Astrophysics Data System (ADS)

    Mansky, Edmund

    2017-08-01

    A new Perl code is described that uses the modern object-oriented Moose framework to add EUNIS and VAULT rocket data to the Virtual Solar Observatory website. The code permits the easy fixing of FITS header fields in cases where some required FITS fields are missing from the original data files. The code makes novel use of the Moose extensions “before” and “after” to build in dependencies, so that database creation of tables occurs before the loading of data, and validation of file-dependent tables occurs after the loading is completed. Also described is the computation and loading of the deferred FITS field CHECKSUM into the database following the loading and validation of the file-dependent tables. The loading of the EUNIS 2006 and 2007 flight data, and the VAULT 2.0 flight data, is described in detail as illustrative examples.
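
    The Moose “before” and “after” method modifiers used for those dependencies have a close analogue in Python decorators; the sketch below mimics the described ordering (table creation before loading, validation after loading completes) under invented class and method names, not the VSO code.

        def with_hooks(before=None, after=None):
            """Wrap a method so hooks run before and after it, Moose-style."""
            def wrap(method):
                def wrapper(self, *args, **kwargs):
                    if before:
                        before(self, *args, **kwargs)
                    result = method(self, *args, **kwargs)
                    if after:
                        after(self, *args, **kwargs)
                    return result
                return wrapper
            return wrap

        class RocketDataLoader:  # hypothetical name for illustration
            def create_tables(self, *_):   print("creating database tables")
            def validate_tables(self, *_): print("validating file-dependent tables")

            @with_hooks(before=create_tables, after=validate_tables)
            def load(self, fits_files):
                print(f"loading {len(fits_files)} FITS files")

        RocketDataLoader().load(["eunis_2006.fits", "vault_2.0.fits"])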

  1. Suggestions for Improvement of User Access to GOCE L2 Data

    NASA Astrophysics Data System (ADS)

    Tscherning, C. C.

    2011-07-01

    ESA has required that most GOCE L2 products be delivered in XML format. This creates difficulties for users because a parser written in Perl is needed to convert the files to files without XML tags. However, several products, such as the spherical harmonic coefficients, are made available in standard form through the International Center for Global Gravity Field Models. The variance-covariance information for the gravity field models is only available without XML tags. It is suggested that all XML products be made available in the Virtual Data Archive as files without tags. Besides making the data directly usable by a FORTRAN program, this would also reduce the size (storage requirements) of the products to about 30%. A further reduction of the storage used could be achieved by tuning the number of digits for the individual quantities in the products, so that it corresponds to the actual number of significant digits.
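
    For concreteness, a de-tagging step of the kind the author argues should be done once centrally might look like the following Python sketch, which flattens each XML record into one whitespace-separated line readable by a FORTRAN program. The record and element names are invented for illustration; real GOCE L2 products have their own schema.

        import xml.etree.ElementTree as ET

        def xml_to_columns(xml_path, txt_path, record_tag="record"):
            """Write one whitespace-separated line per XML record."""
            tree = ET.parse(xml_path)
            with open(txt_path, "w") as out:
                for rec in tree.getroot().iter(record_tag):
                    # child element text becomes plain columns, tags discarded
                    out.write(" ".join((c.text or "").strip() for c in rec) + "\n")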

  2. Landsat Data Continuity Mission (LDCM) space to ground mission data architecture

    USGS Publications Warehouse

    Nelson, Jack L.; Ames, J.A.; Williams, J.; Patschke, R.; Mott, C.; Joseph, J.; Garon, H.; Mah, G.

    2012-01-01

    The Landsat Data Continuity Mission (LDCM) is a scientific endeavor to extend the longest continuous multi-spectral imaging record of Earth's land surface. The observatory consists of a spacecraft bus integrated with two imaging instruments; the Operational Land Imager (OLI), built by Ball Aerospace & Technologies Corporation in Boulder, Colorado, and the Thermal Infrared Sensor (TIRS), an in-house instrument built at the Goddard Space Flight Center (GSFC). Both instruments are integrated aboard a fine-pointing, fully redundant, spacecraft bus built by Orbital Sciences Corporation, Gilbert, Arizona. The mission is scheduled for launch in January 2013. This paper will describe the innovative end-to-end approach for efficiently managing high volumes of simultaneous realtime and playback of image and ancillary data from the instruments to the reception at the United States Geological Survey's (USGS) Landsat Ground Network (LGN) and International Cooperator (IC) ground stations. The core enabling capability lies within the spacecraft Command and Data Handling (C&DH) system and Radio Frequency (RF) communications system implementation. Each of these systems uniquely contributes to the efficient processing of high speed image data (up to 265 Mbps) from each instrument, and provides virtually error free data delivery to the ground. Onboard methods include a combination of lossless data compression, Consultative Committee for Space Data Systems (CCSDS) data formatting, a file-based/managed Solid State Recorder (SSR), and Low Density Parity Check (LDPC) forward error correction. The 440 Mbps wideband X-Band downlink uses Class 1 CCSDS File Delivery Protocol (CFDP), and an earth coverage antenna to deliver an average of 400 scenes per day to a combination of LGN and IC ground stations. This paper will also describe the integrated capabilities and processes at the LGN ground stations for data reception using adaptive filtering, and the mission operations approach from the LDCM Mission Operations Center (MOC) to perform the CFDP accounting, file retransmissions, and management of the autonomous features of the SSR.

  3. Vroom: designing an augmented environment for remote collaboration in digital cinema production

    NASA Astrophysics Data System (ADS)

    Margolis, Todd; Cornish, Tracy

    2013-03-01

    As media technologies become increasingly affordable, compact and inherently networked, new generations of telecollaborative platforms continue to arise that integrate these new affordances. Virtual reality has been primarily concerned with creating simulations of environments that can transport participants to real or imagined spaces that replace the "real world". Meanwhile Augmented Reality systems have evolved to interleave objects from Virtual Reality environments into the physical landscape. Perhaps now there is a new class of systems that reverse this precept to enhance dynamic media landscapes and immersive physical display environments to enable intuitive data exploration through collaboration. Vroom (Virtual Room) is a next-generation reconfigurable tiled display environment in development at the California Institute for Telecommunications and Information Technology (Calit2) at the University of California, San Diego. Vroom enables freely scalable digital collaboratories, connecting distributed, high-resolution visualization resources for collaborative work in the sciences, engineering and the arts. Vroom transforms a physical space into an immersive media environment with large format interactive display surfaces, video teleconferencing and spatialized audio built on a high-speed optical network backbone. Vroom enables group collaboration for local and remote participants to share knowledge and experiences. Possible applications include: remote learning, command and control, storyboarding, post-production editorial review, high resolution video playback, 3D visualization, screencasting and image, video and multimedia file sharing. To support these various scenarios, Vroom features support for multiple user interfaces (optical tracking, touch UI, gesture interface, etc.), support for directional and spatialized audio, giga-pixel image interactivity, 4K video streaming, 3D visualization and telematic production. This paper explains the design process that has been utilized to make Vroom an accessible and intuitive immersive environment for remote collaboration, specifically for digital cinema production.

  4. Virtual Network Configuration Management System for Data Center Operations and Management

    NASA Astrophysics Data System (ADS)

    Okita, Hideki; Yoshizawa, Masahiro; Uehara, Keitaro; Mizuno, Kazuhiko; Tarui, Toshiaki; Naono, Ken

    Virtualization technologies are widely deployed in data centers to improve system utilization. However, they increase the workload for operators, who have to manage the structure of virtual networks in data centers. A virtual-network management system that automates the integration of virtual-network configurations is presented. The proposed system collects the configurations from server virtualization platforms and VLAN-supported switches, and integrates these configurations according to a newly developed XML-based management information model for virtual-network configurations. Preliminary evaluations show that the proposed system helps operators by reducing the time needed to acquire the configurations from devices and to correct inconsistencies in the operators' configuration management database by about 40 percent. Further, they also show that the proposed system has excellent scalability; the system takes less than 20 minutes to acquire the virtual-network configurations from a large scale network that includes 300 virtual machines. These results imply that the proposed system is effective for improving the configuration management process for virtual networks in data centers.
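
    A hedged sketch of the integration step: per-device configurations from server virtualization platforms and VLAN-supported switches merged into one XML document keyed by VLAN ID. The element names below stand in for the paper's XML-based management information model, which is not reproduced here.

        import xml.etree.ElementTree as ET

        def integrate(vm_configs, switch_configs):
            """vm_configs: [(vm_name, vlan_id)]; switch_configs: [(switch, port, vlan_id)]."""
            root = ET.Element("VirtualNetworks")
            vlans = {}

            def vlan_node(vlan_id):
                # one <VLAN> element per VLAN id, created on first reference
                if vlan_id not in vlans:
                    vlans[vlan_id] = ET.SubElement(root, "VLAN", id=str(vlan_id))
                return vlans[vlan_id]

            for vm, vlan in vm_configs:
                ET.SubElement(vlan_node(vlan), "VirtualMachine", name=vm)
            for switch, port, vlan in switch_configs:
                ET.SubElement(vlan_node(vlan), "SwitchPort",
                              switch=switch, port=str(port))
            return ET.tostring(root, encoding="unicode")

        print(integrate([("web01", 100)], [("sw1", 12, 100)]))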

  5. Accessing files in an Internet: The Jade file system

    NASA Technical Reports Server (NTRS)

    Peterson, Larry L.; Rao, Herman C.

    1991-01-01

    Jade is a new distributed file system that provides a uniform way to name and access files in an internet environment. It makes two important contributions. First, Jade is a logical system that integrates a heterogeneous collection of existing file systems, where heterogeneous means that the underlying file systems support different file access protocols. Jade is designed under the restriction that the underlying file system may not be modified. Second, rather than providing a global name space, Jade permits each user to define a private name space. These private name spaces support two novel features: they allow multiple file systems to be mounted under one directory, and they allow one logical name space to mount other logical name spaces. A prototype of the Jade File System was implemented on Sun Workstations running Unix. It consists of interfaces to the Unix file system, the Sun Network File System, the Andrew File System, and FTP. This paper motivates Jade's design, highlights several aspects of its implementation, and illustrates applications that can take advantage of its features.
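
    The private name space idea can be illustrated with a toy resolver: several underlying file systems, each with its own access protocol, are mounted into one logical tree and chosen by longest-prefix match, which also shows how two file systems can share one directory. The protocol names and handler interface below are illustrative assumptions, not Jade's implementation.

        class PrivateNameSpace:
            def __init__(self):
                self.mounts = {}  # logical path prefix -> (protocol, remote root)

            def mount(self, prefix, protocol, remote_root):
                self.mounts[prefix.rstrip("/")] = (protocol, remote_root)

            def resolve(self, logical_path):
                # longest matching prefix picks the underlying file system
                best = max((p for p in self.mounts if logical_path.startswith(p)),
                           key=len, default=None)
                if best is None:
                    raise FileNotFoundError(logical_path)
                protocol, root = self.mounts[best]
                return protocol, root + logical_path[len(best):]

        ns = PrivateNameSpace()
        ns.mount("/home", "nfs", "server:/export/home")
        ns.mount("/home/archive", "ftp", "ftp.example.org/pub")  # two FSs, one directory
        print(ns.resolve("/home/archive/data.txt"))
        # -> ('ftp', 'ftp.example.org/pub/data.txt')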

  6. Accessing files in an internet - The Jade file system

    NASA Technical Reports Server (NTRS)

    Rao, Herman C.; Peterson, Larry L.

    1993-01-01

    Jade is a new distributed file system that provides a uniform way to name and access files in an internet environment. It makes two important contributions. First, Jade is a logical system that integrates a heterogeneous collection of existing file systems, where heterogeneous means that the underlying file systems support different file access protocols. Jade is designed under the restriction that the underlying file system may not be modified. Second, rather than providing a global name space, Jade permits each user to define a private name space. These private name spaces support two novel features: they allow multiple file systems to be mounted under one directory, and they allow one logical name space to mount other logical name spaces. A prototype of the Jade File System was implemented on Sun Workstations running Unix. It consists of interfaces to the Unix file system, the Sun Network File System, the Andrew File System, and FTP. This paper motivates Jade's design, highlights several aspects of its implementation, and illustrates applications that can take advantage of its features.

  7. The Implementation of Virtual Instruction in Relation to X-ray Anatomy and Positioning in a Chiropractic Degree Program: A Descriptive Paper.

    PubMed

    Rush, Perry O; Boone, William R

    2009-01-01

    This article provides information regarding the introduction of virtual education into classroom instruction, wherein a method of classroom instruction was developed with the use of a computer, digital camera, and various software programs. This approach simplified testing procedures, thus reducing institutional costs substantially by easing the demand for manpower, and seemed to improve average grade performance. Organized files with hundreds of digital pictures have created a range of instructor resources. Much of the new course materials were organized onto compact disks to complement course notes. Customizing presentations with digital technology holds potential benefits for students, instructors and the institution.

  8. Integration of Geophysical and Geochemical Data

    NASA Astrophysics Data System (ADS)

    Yamagishi, Y.; Suzuki, K.; Tamura, H.; Nagao, H.; Yanaka, H.; Tsuboi, S.

    2006-12-01

    Integration of geochemical and geophysical data would give us new insight into the nature of the Earth. It should advance our understanding of the dynamics of the Earth's interior and surface processes. Today various geochemical and geophysical data are available on the Internet. These data are stored in various database systems; each system is isolated and provides data in its own format. The goal of this study is to display both the geochemical and geophysical data obtained from such databases together visually. We adopt Google Earth as the presentation tool. Google Earth is virtual globe software provided free of charge by Google, Inc. Google Earth displays the Earth's surface using satellite images with a mean resolution of ~15 m. Any graphical features can be displayed on Google Earth by means of KML format files, and we have developed software to convert geochemical and geophysical data to KML. First of all, we tried to overlay data from Georoc and PetDB and seismic tomography data on Google Earth. Georoc and PetDB are both online database systems for geochemical data. The data format of Georoc is CSV, that of PetDB is Microsoft Excel, and the tomography data we used is plain text; the conversion software can process all of these file formats. The geochemical data (e.g. compositional abundance) is displayed as a three-dimensional column on the Earth's surface. The shape and color of the column indicate the element type, and the size and color tone vary according to the abundance of the element. The tomography data can be converted into a KML file for each depth. This overlay plot of geochemical and tomography data should help us to correlate internal temperature anomalies with geochemical anomalies observed at the surface of the Earth. Our tool can convert any geophysical or geochemical data to KML as long as the data is associated with longitude and latitude. We are going to support more geophysical data formats. In addition, we are currently trying to obtain scientific insights into the Earth's interior based on the view of both geophysical and geochemical data on Google Earth.
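
    A minimal sketch of such a converter, assuming samples reduced to (latitude, longitude, element, abundance) tuples; the field names are assumptions rather than the Georoc or PetDB formats, and a real converter would also vary the column's shape, size and color tone by element and abundance as described.

        def samples_to_kml(samples, out_path):
            """samples: iterable of (lat, lon, element, abundance) tuples."""
            placemarks = []
            for lat, lon, element, abundance in samples:
                placemarks.append(f"""
          <Placemark>
            <name>{element}</name>
            <description>abundance: {abundance}</description>
            <Point><coordinates>{lon},{lat},0</coordinates></Point>
          </Placemark>""")  # KML orders coordinates lon,lat,alt
            kml = ('<?xml version="1.0" encoding="UTF-8"?>\n'
                   '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
                   + "".join(placemarks) + "\n</Document></kml>")
            with open(out_path, "w") as f:
                f.write(kml)

        samples_to_kml([(35.36, 138.73, "SiO2", 57.1)], "overlay.kml")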

  9. Development and application of General Purpose Data Acquisition Shell (GPDAS) at advanced photon source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chung, Youngjoo; Kim, Keeman.

    1991-01-01

    An operating system shell GPDAS (General Purpose Data Acquisition Shell) on MS-DOS-based microcomputers has been developed to provide flexibility in data acquisition and device control for magnet measurements at the Advanced Photon Source. GPDAS is both a command interpreter and an integrated script-based programming environment. It also incorporates the MS-DOS shell to make use of the existing utility programs for file manipulation and data analysis. Features include: alias definition, virtual memory, windows, graphics, data and procedure backup, background operation, script programming language, and script level debugging. Data acquisition system devices can be controlled through IEEE488 board, multifunction I/O board, digital I/O board and Gespac crate via Euro G-64 bus. GPDAS is now being used for diagnostics R&D and accelerator physics studies as well as for magnet measurements. Their hardware configurations will also be discussed. 3 refs., 3 figs.

  10. Archive Management of NASA Earth Observation Data to Support Cloud Analysis

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Baynes, Kathleen; McInerney, Mark A.

    2017-01-01

    NASA collects, processes and distributes petabytes of Earth Observation (EO) data from satellites, aircraft, in situ instruments and model output, with an order of magnitude increase expected by 2024. Cloud-based web object storage (WOS) of these data can simplify accommodating such an increase. More importantly, it can also facilitate user analysis of those volumes by making the data available to the massively parallel computing power in the cloud. However, storing EO data in cloud WOS has a ripple effect throughout the NASA archive system, with unexpected challenges and opportunities. One challenge is modifying data servicing software (such as Web Coverage Service servers) to access and subset data that are no longer on a directly accessible file system, but rather in cloud WOS. Opportunities include refactoring of the archive software to a cloud-native architecture; virtualizing data products by computing on demand; and reorganizing data to be more analysis-friendly.
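
    One concrete form of the data-servicing challenge noted above is that a subsetting service can no longer seek() into a POSIX file; with web object storage it must issue HTTP range reads instead. A hedged boto3 sketch, with invented bucket and key names, and with the caching and format-aware indexing a real service would need omitted:

        import boto3

        s3 = boto3.client("s3")

        def read_byte_range(bucket, key, start, end):
            """Fetch bytes [start, end] of an object without downloading it all."""
            resp = s3.get_object(Bucket=bucket, Key=key,
                                 Range=f"bytes={start}-{end}")
            return resp["Body"].read()

        # e.g. just the header block of a granule:
        # header = read_byte_range("eo-archive", "granules/scene0001.hdf", 0, 65535)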

  11. Forming an ad-hoc nearby storage, based on IKAROS and social networking services

    NASA Astrophysics Data System (ADS)

    Filippidis, Christos; Cotronis, Yiannis; Markou, Christos

    2014-06-01

    We present an ad-hoc "nearby" storage, based on IKAROS and social networking services, such as Facebook. By design, IKAROS is able to increase or decrease the number of nodes of the I/O system instance on the fly, without bringing everything down or losing data. IKAROS can decide the file partition distribution schema by taking into account requests from the user or an application, as well as a domain or Virtual Organization policy. In this way, it is possible to form multiple instances of smaller-capacity, higher-bandwidth storage utilities capable of responding in an ad-hoc manner. This approach, focusing on flexibility, can scale both up and down and so can provide more cost effective infrastructures for both large scale and smaller size systems. A set of experiments is performed comparing IKAROS with PVFS2, using multiple client requests under the HPC IOR benchmark and MPICH2.
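
    The file partition distribution idea might be sketched as follows, assuming a simple round-robin policy over the currently enrolled nodes; the node naming and the policy itself are illustrative assumptions, not IKAROS code.

        def partition_plan(file_size, chunk_size, nodes):
            """Map each chunk (offset, length) of a file to a node, round-robin."""
            plan = []
            offset, i = 0, 0
            while offset < file_size:
                length = min(chunk_size, file_size - offset)
                plan.append((offset, length, nodes[i % len(nodes)]))
                offset += length
                i += 1
            return plan

        # Nodes can be added or removed between files without rebuilding the store:
        print(partition_plan(10 * 2**20, 4 * 2**20, ["nodeA", "nodeB", "nodeC"]))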

  12. 78 FR 23773 - Center for Scientific Review; Notice of Closed Meetings

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-22

    ... public in accordance with the provisions set forth in sections 552b(c)(4) and 552b(c)(6), Title 5 U.S.C..., (Virtual Meeting). Contact Person: Ping Wu, Ph.D., Scientific Review Officer, HDM IRG, Center for... Federal Advisory Committee Policy. [FR Doc. 2013-09309 Filed 4-19-13; 8:45 am] BILLING CODE 4140-01-P ...

  13. Google earth as a source of ancillary material in a history of psychology class.

    PubMed

    Stevison, Blake K; Biggs, Patrick T; Abramson, Charles I

    2010-06-01

    This article discusses the use of Google Earth to visit significant geographical locations associated with events in the history of psychology. The process of opening files, viewing content, adding placemarks, and saving customized virtual tours on Google Earth are explained. Suggestions for incorporating Google Earth into a history of psychology course are also described.

  14. The Jade File System. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Rao, Herman Chung-Hwa

    1991-01-01

    File systems have long been the most important and most widely used form of shared permanent storage. File systems in traditional time-sharing systems, such as Unix, support a coherent sharing model for multiple users. Distributed file systems implement this sharing model in local area networks. However, most distributed file systems fail to scale from local area networks to an internet. Four characteristics of scalability were recognized: size, wide area, autonomy, and heterogeneity. Owing to size and wide area, techniques such as broadcasting, central control, and central resources, which are widely adopted by local area network file systems, are not adequate for an internet file system. An internet file system must also support the notion of autonomy because an internet is made up of a collection of independent organizations. Finally, heterogeneity is the nature of an internet file system, not only because of its size, but also because of the autonomy of the organizations in an internet. The Jade File System, which provides a uniform way to name and access files in the internet environment, is presented. Jade is a logical system that integrates a heterogeneous collection of existing file systems, where heterogeneous means that the underlying file systems support different file access protocols. Because of autonomy, Jade is designed under the restriction that the underlying file systems may not be modified. In order to avoid the complexity of maintaining an internet-wide, global name space, Jade permits each user to define a private name space. In Jade's design, we pay careful attention to avoiding unnecessary network messages between clients and file servers in order to achieve acceptable performance. Jade's name space supports two novel features: (1) it allows multiple file systems to be mounted under one directory; and (2) it permits one logical name space to mount other logical name spaces. A prototype of Jade was implemented to examine and validate its design. The prototype consists of interfaces to the Unix File System, the Sun Network File System, and the File Transfer Protocol.

  15. Transforming clinical imaging and 3D data for virtual reality learning objects: HTML5 and mobile devices implementation.

    PubMed

    Trelease, Robert B; Nieder, Gary L

    2013-01-01

    Web deployable anatomical simulations or "virtual reality learning objects" can easily be produced with QuickTime VR software, but their use for online and mobile learning is being limited by the declining support for web browser plug-ins for personal computers and unavailability on popular mobile devices like Apple iPad and Android tablets. This article describes complementary methods for creating comparable, multiplatform VR learning objects in the new HTML5 standard format, circumventing platform-specific limitations imposed by the QuickTime VR multimedia file format. Multiple types or "dimensions" of anatomical information can be embedded in such learning objects, supporting different kinds of online learning applications, including interactive atlases, examination questions, and complex, multi-structure presentations. Such HTML5 VR learning objects are usable on new mobile devices that do not support QuickTime VR, as well as on personal computers. Furthermore, HTML5 VR learning objects can be embedded in "ebook" document files, supporting the development of new types of electronic textbooks on mobile devices that are increasingly popular and self-adopted for mobile learning. © 2012 American Association of Anatomists.

  16. The smiling scan technique: Facially driven guided surgery and prosthetics.

    PubMed

    Pozzi, Alessandro; Arcuri, Lorenzo; Moy, Peter K

    2018-04-11

    To introduce a proof-of-concept technique and new integrated workflow to optimize the functional and esthetic outcome of implant-supported restorations by means of a 3-dimensional (3D), facially driven, digitally assisted treatment plan. The Smiling Scan technique permits the creation of a virtual dental patient (VDP) showing a broad smile under static conditions. The patient undergoes a cone beam computed tomography (CBCT) scan while displaying a broad smile for the duration of the examination. Intraoral optical surface scanning (IOS) of the dental and soft tissue anatomy or extraoral optical surface scanning (EOS) of the study casts is performed. The superimposition of the digital imaging and communications in medicine (DICOM) files with the standard tessellation language (STL) files is performed in the virtual planning software program, permitting the creation of a VDP. The smiling scan is an effective, easy to use, and low-cost technique to develop a more comprehensive and simplified facially driven computer-assisted treatment plan, allowing prosthetically driven implant placement and the delivery of an immediate computer-aided design/computer-aided manufacturing (CAD/CAM) temporary fixed dental prosthesis. Copyright © 2018 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.

  17. Mobile Virtual Reality : A Solution for Big Data Visualization

    NASA Astrophysics Data System (ADS)

    Marshall, E.; Seichter, N. D.; D'sa, A.; Werner, L. A.; Yuen, D. A.

    2015-12-01

    Pursuits in the geological sciences and other branches of quantitative science often require data visualization frameworks that are in continual need of improvement and new ideas. Virtual reality is a visualization medium with a large audience, originally designed for gaming purposes. Virtual reality can be delivered in Cave-like environments, but these are unwieldy and expensive to maintain. Recent efforts by major companies such as Facebook have focused more on the larger market; the Oculus is the first of this new kind of mobile device. The Unity game engine makes it possible for us to convert data files into a mesh of isosurfaces rendered in 3D. A user is immersed inside the virtual reality and is able to move within and around the data using arrow keys and other steering devices, similar to those employed with an Xbox. With the introduction of products like the Oculus Rift and HoloLens, combined with ever-increasing mobile computing strength, mobile virtual reality data visualization can be implemented for better analysis of 3D geological and mineralogical data sets. As new products like the Surface Pro 4 and other high-powered yet very mobile computers are introduced to the market, the RAM and graphics card capacity necessary to run these models is more widely available, opening doors to this new reality. The computing requirements needed to run these models are a mere 8 GB of RAM and 2 GHz of CPU speed, which many mobile computers are starting to exceed. Using Unity 3D software to create a virtual environment containing a visual representation of the data, any data set converted into FBX or OBJ format can be traversed by wearing the Oculus Rift device. This new method of analysis, in conjunction with 3D scanning, has potential applications in many fields, including the analysis of precious stones or jewelry. Using hologram technology to capture in high resolution the 3D shape, color, and imperfections of minerals and stones, detailed review and analysis of a stone can be done remotely without ever seeing the real thing. This strategy can be a game-changer for shoppers, who no longer have to go to the store.

  18. Please Move Inactive Files Off the /projects File System | High-Performance

    Science.gov Websites

    Computing | NREL. January 11, 2018. The /projects file system is a shared resource. This year this has created a space crunch: the file system is now about 90% full and we need your help.

  19. A Low-cost System for Generating Near-realistic Virtual Actors

    NASA Astrophysics Data System (ADS)

    Afifi, Mahmoud; Hussain, Khaled F.; Ibrahim, Hosny M.; Omar, Nagwa M.

    2015-06-01

    Generating virtual actors is one of the most challenging fields in computer graphics. The reconstruction of realistic virtual actors has received attention from both academic research and the film industry, with the aim of generating human-like virtual actors. Many movies have been acted by human-like virtual actors that the audience cannot distinguish from real actors. The synthesis of realistic virtual actors is considered a complex process, and the many techniques used to generate a realistic virtual actor usually require expensive hardware equipment. In this paper, a low-cost system that generates near-realistic virtual actors is presented. The facial features of the real actor are blended with a virtual head that is attached to the actor's body. Compared with other techniques that generate virtual actors, the proposed system is a low-cost system requiring only one camera that records the scene, without any expensive hardware equipment. The results show that the system generates good near-realistic virtual actors that can be used in many applications.

  20. VERAIn

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simunovic, Srdjan

    2015-02-16

    CASL's modeling and simulation technology, the Virtual Environment for Reactor Applications (VERA), incorporates coupled physics and science-based models, state-of-the-art numerical methods, modern computational science, integrated uncertainty quantification (UQ) and validation against data from operating pressurized water reactors (PWRs), single-effect experiments, and integral tests. The computational simulation component of VERA is the VERA Core Simulator (VERA-CS). The core simulator is the specific collection of multi-physics computer codes used to model and deplete a LWR core over multiple cycles. The core simulator has a single common input file that drives all of the different physics codes. The parser code, VERAIn, converts VERA Input into an XML file that is used as input to different VERA codes.
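
    A hedged sketch of the parser pattern described, converting a block-structured input deck into XML with Python's standard library; the "[block] / key value" syntax and XML element names below are illustrative stand-ins, not the actual VERA input format or schema.

        import xml.etree.ElementTree as ET

        def deck_to_xml(deck_text):
            """Turn a simple block-structured input deck into one XML document."""
            root = ET.Element("ParameterList", name="VERA")
            block = root
            for raw in deck_text.splitlines():
                line = raw.split("!")[0].strip()   # drop comments and blanks
                if not line:
                    continue
                if line.startswith("[") and line.endswith("]"):
                    block = ET.SubElement(root, "ParameterList", name=line[1:-1])
                else:
                    key, _, value = line.partition(" ")
                    ET.SubElement(block, "Parameter", name=key, value=value.strip())
            return ET.tostring(root, encoding="unicode")

        print(deck_to_xml("[CORE]\n  rated_power 3411.0 ! MWt\n[ASSEMBLY]\n  npin 17"))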

  1. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data Format (CDF) served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted files, or the addition of new or the deletion of old data products. Next, ADAPT routines analyzed the query results and issued updates to the metadata stored in the UCLA CDAWEB and SPDF metadata registries. In this way, the SPASE metadata registries generated by ADAPT can be relied on to provide up-to-date and complete access to Heliophysics CDF data resources on a daily basis.
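
    The nightly update cycle reduces to a snapshot diff; the following Python sketch, with an assumed JSON snapshot format, emits the added, modified, and deleted files that would drive SPASE Granule registry updates.

        import json

        def diff_listings(old, new):
            """old/new: dict of file path -> (size, mtime)."""
            added    = [p for p in new if p not in old]
            deleted  = [p for p in old if p not in new]
            modified = [p for p in new if p in old and new[p] != old[p]]
            return added, modified, deleted

        def nightly_update(snapshot_path, current_listing):
            try:
                with open(snapshot_path) as f:
                    old = {k: tuple(v) for k, v in json.load(f).items()}
            except FileNotFoundError:
                old = {}  # first run: everything counts as added
            added, modified, deleted = diff_listings(old, current_listing)
            # ...generate or refresh SPASE Granule descriptions for added and
            # modified files, retire descriptions for deleted files, then
            # persist tonight's listing as the new snapshot...
            with open(snapshot_path, "w") as f:
                json.dump(current_listing, f)
            return added, modified, deleted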

  2. Rightful Discharge: Making "Termination" Mean It Is Really Over: Part 1-Issues and Legislation.

    PubMed

    Mitchell, Michael S; Koen, Clifford M; Carmichael, Amanda J

    One of the most difficult undertakings for any employer is carrying out a decision to terminate an employee. Of all the employment-related actions taken by employers, the act of termination creates the greatest risk of legal liability. Many claims of employment discrimination filed with the Equal Employment Opportunity Commission arise from the act of termination. In many federal courts, employment-related lawsuits account for more than 50% of all court filings; these lawsuits cover a wide range of subjects, such as failure to hire, defamation, breach of contract, and harassment, to name a few. However, most employees sue because they have lost their job or fear they will lose their job. Because these individuals have virtually nothing to lose, they often see filing a claim with the Equal Employment Opportunity Commission or filing a lawsuit as the only viable option, often suing for wrongful discharge. With a thoughtful review of the issues and the legislation addressed in this article, health care managers can reduce the unnecessary risk of expensive, time-consuming litigation.

  3. Three-Dimensional Static Articulation Accuracy of Virtual Models-Part II: Effect of Model Scanner-CAD Systems and Articulation Method.

    PubMed

    Yee, Sophia Hui Xin; Esguerra, Roxanna Jean; Chew, Amelia Anya Qin'An; Wong, Keng Mun; Tan, Keson Beng Choon

    2018-02-01

    Accurate maxillomandibular relationship transfer is important for CAD/CAM prostheses. This study compared the 3D accuracy (trueness, precision) of virtual model static articulation in three laboratory scanner-CAD systems (Ceramill Map400 [AG], inEos X5 [SIR], Scanner S600 Arti [ZKN]) using two virtual articulation methods: mounted models (MO) and interocclusal record (IR). The master model simulated a single crown opposing a 3-unit fixed partial denture. Reference values were obtained by measuring interarch and interocclusal reference features with a coordinate measuring machine (CMM). MO group stone casts were articulator-mounted with acrylic resin bite registrations while IR group casts were hand-articulated with poly(vinyl siloxane) bite registrations. Five test model sets were scanned and articulated virtually with each system (6 test groups, 15 data sets). STL files of the virtual models were measured with CMM software. dR_R, dR_C, and dR_L represented interarch global distortions at the right, central, and left sides, respectively, while dR_M, dX_M, dY_M, and dZ_M represented interocclusal global and linear distortions between preparations. Mean interarch 3D distortion ranged from -348.7 to 192.2 μm for dR_R, -86.3 to 44.1 μm for dR_C, and -168.1 to 4.4 μm for dR_L. Mean interocclusal distortion ranged from -257.2 to -85.2 μm for dR_M, -285.7 to 183.9 μm for dX_M, -100.5 to 114.8 μm for dY_M, and -269.1 to -50.6 μm for dZ_M. ANOVA showed that articulation method had a significant effect on dR_R and dX_M, while system had a significant effect on dR_R, dR_C, dR_L, dR_M, and dZ_M. There were significant differences between the 6 test groups for dR_R, dR_L, dX_M, and dZ_M. dR_R and dX_M were significantly greater in AG-IR, and this was significantly different from SIR-IR, ZKN-IR, and all MO groups. Interarch and interocclusal distances increased in MO groups, while they decreased in IR groups. AG-IR had the greatest interarch distortion as well as interocclusal superior-inferior distortion. The other groups performed similarly to each other, and the overall interarch distortion did not exceed 0.7%. In these systems and articulation methods, interocclusal distortions may result in hyper- or infra-occluded prostheses. © 2017 by the American College of Prosthodontists.

  4. Design and evaluation of an augmented reality simulator using leap motion.

    PubMed

    Wright, Trinette; de Ribaupierre, Sandrine; Eagleson, Roy

    2017-10-01

    Advances in virtual and augmented reality (AR) are having an impact on the medical field in areas such as surgical simulation. Improvements to surgical simulation will provide students and residents with additional training and evaluation methods. This is particularly important for procedures such as the endoscopic third ventriculostomy (ETV), which residents perform regularly. Simulators such as NeuroTouch have been designed to aid in training associated with this procedure. The authors have designed an affordable and easily accessible ETV simulator, and compare it with the existing NeuroTouch for its usability and training effectiveness. This simulator was developed using Unity, Vuforia and the leap motion (LM) for an AR environment. The participants, 16 novices and two expert neurosurgeons, were asked to complete 40 targeting tasks. Participants used the NeuroTouch tool or a virtual hand controlled by the LM to select the position and orientation for these tasks. The length of time to complete each task was recorded and the trajectory log files were used to calculate performance. The resulting data on the novices' and experts' speed and accuracy are compared, and the authors discuss the objective performance of training in terms of the speed and accuracy of targeting for each system.

  5. Design and evaluation of an augmented reality simulator using leap motion

    PubMed Central

    de Ribaupierre, Sandrine; Eagleson, Roy

    2017-01-01

    Advances in virtual and augmented reality (AR) are having an impact on the medical field in areas such as surgical simulation. Improvements to surgical simulation will provide students and residents with additional training and evaluation methods. This is particularly important for procedures such as the endoscopic third ventriculostomy (ETV), which residents perform regularly. Simulators such as NeuroTouch have been designed to aid in training associated with this procedure. The authors have designed an affordable and easily accessible ETV simulator, and compare it with the existing NeuroTouch for its usability and training effectiveness. This simulator was developed using Unity, Vuforia and the leap motion (LM) for an AR environment. The participants, 16 novices and two expert neurosurgeons, were asked to complete 40 targeting tasks. Participants used the NeuroTouch tool or a virtual hand controlled by the LM to select the position and orientation for these tasks. The length of time to complete each task was recorded and the trajectory log files were used to calculate performance. The resulting data on the novices' and experts' speed and accuracy are compared, and the authors discuss the objective performance of training in terms of the speed and accuracy of targeting for each system. PMID:29184667

  6. Virtual performer: single camera 3D measuring system for interaction in virtual space

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Taneji, Shoto

    2006-10-01

    The authors developed interaction media systems in 3D virtual space. In these systems, a musician virtually plays an instrument, such as a theremin, in the virtual space, or a performer stages a show using a virtual character such as a puppet. This interactive virtual media system consists of image capture, measurement of the performer's position, detection and recognition of motions, and synthesis of the video image using a personal computer. In this paper, we propose some applications of interaction media systems: a virtual musical instrument and a superimposed CG character. Moreover, this paper describes a method for measuring the positions of the performer, his/her head and both eyes using a single camera.

  7. Highway Safety Information System guidebook for the Minnesota state data files. Volume 1 : SAS file formats

    DOT National Transportation Integrated Search

    2001-02-01

    The Minnesota data system includes the following basic files: Accident data (Accident File, Vehicle File, Occupant File); Roadlog File; Reference Post File; Traffic File; Intersection File; Bridge (Structures) File; and RR Grade Crossing File. For ea...

  8. preAssemble: a tool for automatic sequencer trace data processing.

    PubMed

    Adzhubei, Alexei A; Laerdahl, Jon K; Vlasova, Anna V

    2006-01-17

    Trace or chromatogram files (raw data) are produced by automatic nucleic acid sequencing equipment, or sequencers. Each file contains information which can be interpreted by specialised software to reveal the sequence (base calling). This is done by the sequencer's proprietary software or by publicly available programs. Depending on the size of a sequencing project, the number of trace files can vary from just a few to thousands of files. Sequencing quality assessment on various criteria is important at the stage preceding clustering and contig assembly. Two major publicly available packages, Phred and Staden, are used by preAssemble to perform sequence quality processing. The preAssemble pre-assembly sequence processing pipeline has been developed for small to large scale automatic processing of DNA sequencer chromatogram (trace) data. The Staden Package Pregap4 module and the base-calling program Phred are utilized in the pipeline, which produces detailed and self-explanatory output that can be displayed with a web browser. preAssemble can be used successfully with very little previous experience; however, options for parameter tuning are provided for advanced users. preAssemble runs under the UNIX and LINUX operating systems. It is available for download and will run as stand-alone software. It can also be accessed on the Norwegian Salmon Genome Project web site, where preAssemble jobs can be run on the project server. preAssemble is a tool for performing quality assessment of sequences generated by automatic sequencing equipment. preAssemble is flexible, since both interactive jobs on the preAssemble server and the stand-alone downloadable version are available. Virtually no previous experience is necessary to run a default preAssemble job; on the other hand, options for parameter tuning are provided. Consequently, preAssemble can be used as efficiently for just several trace files as for large scale sequence processing.
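
    The base-calling step such a pipeline wraps can be sketched as below: Phred is run over a directory of chromatograms and the resulting quality file is screened by mean quality. The Phred options shown (-id for the input directory, -sa and -qa for the combined sequence and quality output files) are commonly documented, but the screening logic and paths are illustrative assumptions, not preAssemble internals.

        import subprocess
        from pathlib import Path

        def base_call(chromat_dir, out_prefix="reads"):
            # Phred reads every trace in -id and writes one FASTA plus quality file
            subprocess.run(["phred", "-id", chromat_dir,
                            "-sa", f"{out_prefix}.fasta",
                            "-qa", f"{out_prefix}.fasta.qual"], check=True)

        def mean_qualities(qual_file):
            """Yield (read name, mean Phred quality) from a .qual file."""
            name, scores = None, []
            # trailing ">end" sentinel flushes the final record
            for line in Path(qual_file).read_text().splitlines() + [">end"]:
                if line.startswith(">"):
                    if name is not None and scores:
                        yield name, sum(scores) / len(scores)
                    name, scores = line[1:].split()[0], []
                else:
                    scores.extend(int(x) for x in line.split())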

  9. Long-Term file activity patterns in a UNIX workstation environment

    NASA Technical Reports Server (NTRS)

    Gibson, Timothy J.; Miller, Ethan L.

    1998-01-01

    As mass storage technology becomes more affordable for sites smaller than supercomputer centers, understanding their file access patterns becomes crucial for developing systems to store rarely used data on tertiary storage devices such as tapes and optical disks. This paper presents a new way to collect and analyze file system statistics for UNIX-based file systems. The collection system runs in user-space and requires no modification of the operating system kernel. The statistics package provides details about file system operations at the file level: creations, deletions, modifications, etc. The paper analyzes four months of file system activity on a university file system. The results confirm previously published results gathered from supercomputer file systems, but differ in several important areas. Files in this study were considerably smaller than those at supercomputer centers, and they were accessed less frequently. Additionally, the long-term creation rate on workstation file systems is sufficiently low so that all data more than a day old could be cheaply saved on a mass storage device, allowing the integration of time travel into every file system.
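
    In the same user-space spirit (no kernel modification), a periodic scan can approximate the statistics described: per-file size and time since last modification, from which the share of data idle for more than a day, the candidate set for tertiary storage, falls out. A minimal Python sketch:

        import os, time

        def scan(root, idle_seconds=86400):
            """Count files and total bytes unmodified for longer than idle_seconds."""
            now = time.time()
            total, idle_bytes = 0, 0
            for dirpath, _, files in os.walk(root):
                for fname in files:
                    try:
                        st = os.stat(os.path.join(dirpath, fname))
                    except OSError:
                        continue  # file vanished mid-scan
                    total += 1
                    if now - st.st_mtime > idle_seconds:
                        idle_bytes += st.st_size  # candidate for tertiary storage
            return total, idle_bytes

        files, migratable = scan("/home")
        print(f"{files} files; {migratable} bytes idle for more than a day")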

  10. Enhanced virtual microscopy for collaborative education.

    PubMed

    Triola, Marc M; Holloway, William J

    2011-01-26

    Curricular reform efforts and a desire to use novel educational strategies that foster student collaboration are challenging the traditional microscope-based teaching of histology. Computer-based histology teaching tools and Virtual Microscopes (VM), computer-based digital slide viewers, have been shown to be effective and efficient educational strategies. We developed an open-source VM system based on the Google Maps engine to transform our histology education and introduce new teaching methods. This VM allows students and faculty to collaboratively create content, annotate slides with markers, and it is enhanced with social networking features to give the community of learners more control over the system. We currently have 1,037 slides in our VM system comprised of 39,386,941 individual JPEG files that take up 349 gigabytes of server storage space. Of those slides 682 are for general teaching and available to our students and the public; the remaining 355 slides are used for practical exams and have restricted access. The system has seen extensive use with 289,352 unique slide views to date. Students viewed an average of 56.3 slides per month during the histology course and accessed the system at all hours of the day. Of the 621 annotations added to 126 slides 26.2% were added by faculty and 73.8% by students. The use of the VM system reduced the amount of time faculty spent administering the course by 210 hours, but did not reduce the number of laboratory sessions or the number of required faculty. Laboratory sessions were reduced from three hours to two hours each due to the efficiencies in the workflow of the VM system. Our virtual microscope system has been an effective solution to the challenges facing traditional histopathology laboratories and the novel needs of our revised curriculum. The web-based system allowed us to empower learners to have greater control over their content, as well as the ability to work together in collaborative groups. The VM system saved faculty time and there was no significant difference in student performance on an identical practical exam before and after its adoption. We have made the source code of our VM freely available and encourage use of the publically available slides on our website.

  11. Developing defensive aids suite technology on a virtual battlefield

    NASA Astrophysics Data System (ADS)

    Rapanotti, John L.; DeMontigny-Leboeuf, Annie; Palmarini, Marc; Cantin, Andre

    2002-07-01

    Modern anti-tank missiles and the requirement of rapid deployment are limiting the use of passive armour in protecting land vehicles. Vehicle survivability is becoming more dependent on sensors, computers and countermeasures to detect and avoid threats. The integration of various technologies into a Defensive Aids Suite (DAS) can be designed and analyzed by combining field trials and laboratory data with modeling and simulation. MATLAB is used as a quick prototyping tool to model DAS systems and facilitate transfer to other researchers. The DAS model can be transferred from MATLAB or programmed directly in ModSAF (Modular Semi-Automated Forces), which is used to construct the virtual battlefield. Through scripted input files, a fixed-battle approach ensures that implementation and analysis meet the requirements of three different communities: scientists and engineers, the military, and operations research. This approach ensures the modeling of processes known to be important regardless of the level of information available about the system; a system can be modeled phenomenologically until more information is available. Further processing of the simulation can be used to optimize the vehicle for a specific mission. ModSAF will be used to analyze and plan trials and to develop DAS technology for future vehicles. Survivability of a DAS-equipped vehicle can be assessed relative to a basic vehicle without a DAS. In later stages, more complete DAS systems will be analyzed to determine the optimum configuration of the DAS components and the effectiveness of a DAS-equipped vehicle for specific missions. These concepts and this approach are discussed in the paper.

  12. Use of a Parallel Micro-Platform for the Simulation of Space Exploration

    NASA Astrophysics Data System (ADS)

    Velasco Herrera, Victor Manuel; Velasco Herrera, Graciela; Rosano, Felipe Lara; Rodriguez Lozano, Salvador; Lucero Roldan Serrato, Karen

    The purpose of this work is to create a parallel micro-platform that simulates the virtual movements of space exploration in 3D. One of the innovations presented in this design is the application of a lever mechanism for the transmission of movement. The development of such a robot is a challenging task, very different from that of industrial manipulators, because of a totally different target system of requirements. This work presents the computer-aided study and simulation of the movement of this parallel manipulator. The model was developed on the Unigraphics computer-aided design platform, in which the geometric modeling of each component and of the final assembly (CAD) was carried out, the files for the computer-aided manufacture (CAM) of each piece were generated, and the kinematics of the system were simulated under different driving schemes. We used the MATLAB aerospace toolbox and created an adaptive control module to simulate the system.

  13. DockoMatic: automated peptide analog creation for high throughput virtual screening.

    PubMed

    Jacob, Reed B; Bullock, Casey W; Andersen, Tim; McDougal, Owen M

    2011-10-01

    The purpose of this manuscript is threefold: (1) to describe an update to DockoMatic that allows the user to generate cyclic peptide analog structure files based on Protein Data Bank (PDB) files, (2) to test the accuracy of the peptide analog structure generation utility, and (3) to evaluate the high throughput capacity of DockoMatic. The DockoMatic graphical user interface works with the software program Treepack to create user-defined peptide analogs. To validate this approach, DockoMatic-produced cyclic peptide analogs were tested for three-dimensional structure consistency and binding affinity against four experimentally determined peptide structure files available in the Research Collaboratory for Structural Bioinformatics database. The peptides used to evaluate this new functionality were the alpha-conotoxins ImI, PnIA, and their published analogs. Peptide analogs were generated by DockoMatic and tested for their ability to bind to X-ray crystal structure models of the acetylcholine binding protein originating from Aplysia californica. The results, consisting of more than 300 simulations, demonstrate that DockoMatic predicts the binding energy of peptide structures to within 3.5 kcal mol(-1), and the orientation of the bound ligand agrees with experimental data to within 1.8 Å root mean square deviation. Evaluation of high throughput virtual screening capacity demonstrated that DockoMatic can collect, evaluate, and summarize the output of 10,000 AutoDock jobs in less than 2 hours of computational time, while 100,000 jobs require approximately 15 hours and 1,000,000 jobs are estimated to take up to a week. Copyright © 2011 Wiley Periodicals, Inc.
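
    A generic sketch of the ligand-orientation comparison reported above: root mean square deviation between matched atom coordinates of a docked pose and the experimental structure. This is textbook RMSD, not DockoMatic's own code, and the file names are hypothetical plain-text N x 3 coordinate tables.

        import numpy as np

        def rmsd(a: np.ndarray, b: np.ndarray) -> float:
            """RMSD between two matched (N, 3) coordinate arrays, in input units."""
            assert a.shape == b.shape
            return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))

        docked = np.loadtxt("docked_coords.txt")      # hypothetical coordinate table
        reference = np.loadtxt("crystal_coords.txt")  # matched atom order assumed
        print(f"RMSD: {rmsd(docked, reference):.2f} A")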

  14. DockoMatic 2.0: High Throughput Inverse Virtual Screening and Homology Modeling

    PubMed Central

    Bullock, Casey; Cornia, Nic; Jacob, Reed; Remm, Andrew; Peavey, Thomas; Weekes, Ken; Mallory, Chris; Oxford, Julia T.; McDougal, Owen M.; Andersen, Timothy L.

    2013-01-01

    DockoMatic is a free and open source application that unifies a suite of software programs within a user-friendly Graphical User Interface (GUI) to facilitate molecular docking experiments. Here we describe the release of DockoMatic 2.0; significant software advances include the ability to: (1) conduct high throughput Inverse Virtual Screening (IVS); (2) construct 3D homology models; and (3) customize the user interface. Users can now efficiently set up, start, and manage IVS experiments through the DockoMatic GUI by specifying a receptor(s), ligand(s), grid parameter file(s), and docking engine (either AutoDock or AutoDock Vina). DockoMatic automatically generates the needed experiment input files and output directories and allows the user to manage and monitor job progress. Upon job completion, a summary of results is generated by DockoMatic to facilitate interpretation by the user. DockoMatic functionality has also been expanded to facilitate the construction of 3D protein homology models using the Timely Integrated Modeler (TIM) wizard. The TIM wizard provides an interface that accesses the basic local alignment search tool (BLAST) and MODELLER programs and guides the user through the necessary steps to easily and efficiently create 3D homology models for biomacromolecular structures. The DockoMatic GUI can be customized by the user, and the software design makes it relatively easy to integrate additional docking engines, scoring functions, or third party programs. DockoMatic is a free comprehensive molecular docking software program for all levels of scientists in both research and education. PMID:23808933

  15. 10 CFR 13.2 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... identity when filing documents and serving participants electronically through the E-Filing system, and... transmitted electronically from the E-Filing system to the submitter confirming receipt of electronic filing... presentation of the docket and a link to its files. E-Filing System means an electronic system that receives...

  16. Accuracy of open-source software segmentation and paper-based printed three-dimensional models.

    PubMed

    Szymor, Piotr; Kozakiewicz, Marcin; Olszewski, Raphael

    2016-02-01

    In this study, we aimed to verify the accuracy of models created with the help of open-source Slicer 3.6.3 software (Surgical Planning Lab, Harvard Medical School, Harvard University, Boston, MA, USA) and the Mcor Matrix 300 paper-based 3D printer. Our study focused on the accuracy of recreating the walls of the right orbit of a cadaveric skull. Cone beam computed tomography (CBCT) of the skull was performed (0.25-mm pixel size, 0.5-mm slice thickness). Acquired DICOM data were imported into Slicer 3.6.3 software, where segmentation was performed. A virtual model was created and saved as an .STL file and imported into Netfabb Studio professional 4.9.5 software. Three different virtual models were created by cutting the original file along three different planes (coronal, sagittal, and axial). All models were printed with a Selective Deposition Lamination Technology Matrix 300 3D printer using 80 gsm A4 paper. The models were printed so that their cutting plane was parallel to the paper sheets creating the model. Each model (coronal, sagittal, and axial) consisted of three separate parts (∼200 sheets of paper each) that were glued together to form a final model. The skull and created models were scanned with a three-dimensional (3D) optical scanner (Breuckmann smart SCAN) and were saved as .STL files. Comparisons of the orbital walls of the skull, the virtual model, and each of the three paper models were carried out with GOM Inspect 7.5SR1 software. Deviations measured between the models analysed were presented in the form of a colour-labelled map and covered with an evenly distributed network of points automatically generated by the software. An average of 804.43 ± 19.39 points for each measurement was created. Differences measured in each point were exported as a .csv file. The results were statistically analysed using Statistica 10, with statistical significance set at p < 0.05. The average number of points created on models for each measurement was 804.43 ± 19.39; however, deviation in some of the generated points could not be calculated, and those points were excluded from further calculations. From 94% to 99% of the measured absolute deviations were <1 mm. The mean absolute deviation between the skull and virtual model was 0.15 ± 0.11 mm, between the virtual and printed models was 0.15 ± 0.12 mm, and between the skull and printed models was 0.24 ± 0.21 mm. Using the optical scanner and specialized inspection software for measurements of accuracy of the created parts is recommended, as it allows one not only to measure 2-dimensional distances between anatomical points but also to perform more clinically suitable comparisons of whole surfaces. However, it requires specialized software and a very accurate scanner in order to be useful. Threshold-based, manually corrected segmentation of orbital walls performed with 3D Slicer software is accurate enough to be used for creating a virtual model of the orbit. The accuracy of the paper-based Mcor Matrix 300 3D printer is comparable to those of other commonly used 3-dimensional printers and allows one to create precise anatomical models for clinical use. The method of dividing the model into smaller parts and sticking them together seems to be quite accurate, although we recommend it only for creating small, solid models with as few parts as possible to minimize shift associated with gluing. Copyright © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
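
    A short sketch of summarizing per-point surface deviations of the kind exported above as .csv: points without a computable deviation are dropped, then the mean absolute deviation and the share of points under 1 mm are reported. The file name and layout (one deviation per row, in mm) are assumptions.

        import numpy as np

        dev = np.genfromtxt("deviations.csv", delimiter=",")  # one value per point
        dev = dev[~np.isnan(dev)]     # exclude points with no computable deviation
        abs_dev = np.abs(dev)
        print(f"mean |deviation| = {abs_dev.mean():.2f} +/- {abs_dev.std():.2f} mm")
        print(f"share under 1 mm = {100 * (abs_dev < 1.0).mean():.1f}%")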

  17. Archive of Digital Chirp Subbottom Profile Data Collected During USGS Cruise 14BIM05 Offshore of Breton Island, Louisiana, August 2014

    USGS Publications Warehouse

    Forde, Arnell S.; Flocks, James G.; Wiese, Dana S.; Fredericks, Jake J.

    2016-03-29

    The archived trace data are in standard SEG Y rev. 0 format (Barry and others, 1975); the first 3,200 bytes of the card image header are in American Standard Code for Information Interchange (ASCII) format instead of Extended Binary Coded Decimal Interchange Code (EBCDIC) format. The SEG Y files are available on the DVD version of this report or online, downloadable via the USGS Coastal and Marine Geoscience Data System (http://cmgds.marine.usgs.gov). The data are also available for viewing using GeoMapApp (http://www.geomapapp.org) and Virtual Ocean (http://www.virtualocean.org) multi-platform open source software. The Web version of this archive does not contain the SEG Y trace files. To obtain the complete DVD archive, contact USGS Information Services at 1-888-ASK-USGS or infoservices@usgs.gov. The SEG Y files may be downloaded and processed with commercial or public domain software such as Seismic Unix (SU) (Cohen and Stockwell, 2010). See the How To Download SEG Y Data page for download instructions. The printable profiles are provided as Graphics Interchange Format (GIF) images processed and gained using SU software and can be viewed from the Profiles page or by using the links located on the trackline maps; refer to the Software page for links to example SU processing scripts.
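
    A standard-library sketch of inspecting the 3,200-byte card image header described above, decoding it as ASCII (this archive's convention) and falling back to EBCDIC (standard SEG Y rev. 0) if that fails. The EBCDIC codec choice (cp037) and the file name are assumptions; check your data.

        def read_card_image_header(path: str) -> str:
            """Return the 40 x 80-character textual header of a SEG Y file."""
            with open(path, "rb") as f:
                raw = f.read(3200)
            try:
                text = raw.decode("ascii")   # this archive stores ASCII
            except UnicodeDecodeError:
                text = raw.decode("cp037")   # standard SEG Y rev. 0 uses EBCDIC
            # The header is 40 "cards" of 80 characters each.
            return "\n".join(text[i:i + 80] for i in range(0, 3200, 80))

        print(read_card_image_header("line01.sgy"))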

  18. FEMA Database Requirements Assessment and Resource Directory Model.

    DTIC Science & Technology

    1982-05-01

    File (NRCM) -- Contains information on organizations that are information resources on virtually any subject. * NEW YORK TIMES ONLINE -- Full text...version of the New York Times. * Newsearch: The Daily Index (Newsearch) -- Daily indexing of the periodicals in Magazine Index, newspapers in National...* NEXIS -- Full text of all general and business news covered in a variety of newspapers, magazines and wire services. * Oceanic Abstracts

  19. Salient Feature Selection Using Feed-Forward Neural Networks and Signal-to-Noise Ratios with a Focus Toward Network Threat Detection and Classification

    DTIC Science & Technology

    2014-03-27

    0.8.0. The virtual machine's network adapter was set to internal network only to keep any outside traffic from interfering. A MySQL-based query...primary output of Fullstats is the ARFF file format, intended for use with the WEKA Java-based data mining software developed at the University of Waikato

  20. JSOU and NDIA SO/LIC Division Essays (2007)

    DTIC Science & Technology

    2007-04-01

    Create several content-rich Darknet environments—a private virtual network where users connect only to people they trust—that offer e-mail, file...chat rooms, and Darknets). [Figure: structure of the cyber-herding process, with node and relationship networks for gathering, construction, and demolition.] ...the extremist messages, concentrating Web sites, and developing Darknets. A visual illustration of the entire process follows Phase 7. Phase 5

  1. DATALINK: Records inventory data collection software. User's guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cole, B.A.

    1995-03-01

    DATALINK was created to provide an easy-to-use data collection program for records management software products. It provides several useful tools for capturing and validating record index data in the field. It also allows users to easily create a comma-delimited ASCII text file for data export into most records management software products. It runs on virtually any computer using MS-DOS.
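
    A present-day sketch of the export step DATALINK performed: writing record-index data as a comma-delimited ASCII file that records management software can import. The field names are illustrative only.

        import csv

        records = [
            {"box": "001", "title": "FY94 invoices", "location": "Bldg 12"},
            {"box": "002", "title": "Personnel files", "location": "Bldg 7"},
        ]

        with open("inventory.csv", "w", newline="", encoding="ascii") as f:
            writer = csv.DictWriter(f, fieldnames=["box", "title", "location"])
            writer.writeheader()
            writer.writerows(records)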

  2. 20 CFR 10.321 - What happens if the opinion of the physician selected by OWCP differs from the opinion of the...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... two reports of virtually equal weight and rationale reach opposing conclusions (see James P. Roberts... has had no prior connection with the case. The employee is not entitled to have anyone present at the... employee needs an interpreter, the presence of an interpreter would be allowed. Also, a case file may be...

  3. 20 CFR 10.321 - What happens if the opinion of the physician selected by OWCP differs from the opinion of the...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... two reports of virtually equal weight and rationale reach opposing conclusions (see James P. Roberts... has had no prior connection with the case. The employee is not entitled to have anyone present at the... employee needs an interpreter, the presence of an interpreter would be allowed. Also, a case file may be...

  4. Stereoscopic 3D graphics generation

    NASA Astrophysics Data System (ADS)

    Li, Zhi; Liu, Jianping; Zan, Y.

    1997-05-01

    Stereoscopic display technology is one of the key techniques in areas such as simulation, multimedia, entertainment, and virtual reality, and stereoscopic 3D graphics generation is an important part of a stereoscopic 3D display system. In this paper, we first describe the principle of stereoscopic display and summarize some methods of generating stereoscopic 3D graphics. Second, to overcome the problems of user-defined-model methods (such as inconvenience and long modification cycles), we put forward a method based on vector graphics files. This allows us to design more directly, modify the model simply and easily, and generate graphics more conveniently, while making full use of the graphics accelerator card. Finally, we discuss how to speed up the generation.

  5. Development of a virtual reality training system for endoscope-assisted submandibular gland removal.

    PubMed

    Miki, Takehiro; Iwai, Toshinori; Kotani, Kazunori; Dang, Jianwu; Sawada, Hideyuki; Miyake, Minoru

    2016-11-01

    Endoscope-assisted surgery has widely been adopted as a basic surgical procedure, with various training systems using virtual reality developed for this procedure. In the present study, a basic training system comprising virtual reality for the removal of submandibular glands under endoscope assistance was developed. The efficacy of the training system was verified in novice oral surgeons. A virtual reality training system was developed using existing haptic devices. Virtual reality models were constructed from computed tomography data to ensure anatomical accuracy. Novice oral surgeons were trained using the developed virtual reality training system. The developed virtual reality training system included models of the submandibular gland and surrounding connective tissues and blood vessels entering the submandibular gland. Cutting or abrasion of the connective tissue and manipulations, such as elevation of blood vessels, were reproduced by the virtual reality system. A training program using the developed system was devised. Novice oral surgeons were trained in accordance with the devised training program. Our virtual reality training system for endoscope-assisted removal of the submandibular gland is effective in the training of novice oral surgeons in endoscope-assisted surgery. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.

  6. Measuring driver satisfaction with an urban arterial before and after deployment of an adaptive timing signal system

    DOT National Transportation Integrated Search

    2001-02-01

    The Minnesota data system includes the following basic files: Accident data (Accident File, Vehicle File, Occupant File); Roadlog File; Reference Post File; Traffic File; Intersection File; Bridge (Structures) File; and RR Grade Crossing File. For ea...

  7. Automated Concurrent Blackboard System Generation in C++

    NASA Technical Reports Server (NTRS)

    Kaplan, J. A.; McManus, J. W.; Bynum, W. L.

    1999-01-01

    In his 1992 Ph.D. thesis, "Design and Analysis Techniques for Concurrent Blackboard Systems", John McManus defined several performance metrics for concurrent blackboard systems and developed a suite of tools for creating and analyzing such systems. These tools allow a user to analyze a concurrent blackboard system design and predict the performance of the system before any code is written. The design can be modified until simulated performance is satisfactory. Then, the code generator can be invoked to generate automatically all of the code required for the concurrent blackboard system except for the code implementing the functionality of each knowledge source. We have completed the port of the source code generator and a simulator for a concurrent blackboard system. The source code generator generates the necessary C++ source code to implement the concurrent blackboard system using Parallel Virtual Machine (PVM) running on a heterogeneous network of UNIX(trademark) workstations. The concurrent blackboard simulator uses the blackboard specification file to predict the performance of the concurrent blackboard design. The only part of the source code for the concurrent blackboard system that the user must supply is the code implementing the functionality of the knowledge sources.

  8. Zebra: A striped network file system

    NASA Technical Reports Server (NTRS)

    Hartman, John H.; Ousterhout, John K.

    1992-01-01

    The design of Zebra, a striped network file system, is presented. Zebra applies ideas from log-structured file system (LFS) and RAID research to network file systems, resulting in a network file system that has scalable performance, uses its servers efficiently even when its applications are using small files, and provides high availability. Zebra stripes file data across multiple servers, so that the file transfer rate is not limited by the performance of a single server. High availability is achieved by maintaining parity information for the file system. If a server fails, its contents can be reconstructed using the contents of the remaining servers and the parity information. Zebra differs from existing striped file systems in the way it stripes file data: Zebra does not stripe on a per-file basis; instead, it stripes the stream of bytes written by each client. Clients write to the servers in units called stripe fragments, which are analogous to segments in an LFS. Stripe fragments contain file blocks that were written recently, without regard to which file they belong to. This method of striping has numerous advantages over per-file striping, including increased server efficiency, efficient parity computation, and the elimination of parity updates.
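
    A toy sketch of the striping scheme described above: a client's write stream, not individual files, is chopped into fixed-size stripe fragments distributed across data servers, and each stripe carries an XOR parity fragment from which any single lost fragment can be rebuilt. Fragment size and server count are invented here, and real Zebra's logging and metadata handling are omitted.

        FRAG = 64 * 1024     # stripe fragment size (illustrative)
        NSERVERS = 4         # data servers per stripe; parity goes to a fifth

        def stripe(stream: bytes):
            """Yield (stripe_id, data_fragments, parity_fragment) per stripe."""
            stripe_bytes = FRAG * NSERVERS
            for sid, off in enumerate(range(0, len(stream), stripe_bytes)):
                frags = [
                    stream[off + i * FRAG:off + (i + 1) * FRAG].ljust(FRAG, b"\0")
                    for i in range(NSERVERS)
                ]
                # XOR of parity with any three surviving fragments
                # reconstructs the missing fourth.
                parity = bytes(a ^ b ^ c ^ d for a, b, c, d in zip(*frags))
                yield sid, frags, parity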

  9. Monte Carlo calculations for reporting patient organ doses from interventional radiology

    NASA Astrophysics Data System (ADS)

    Huo, Wanli; Feng, Mang; Pi, Yifei; Chen, Zhi; Gao, Yiming; Xu, X. George

    2017-09-01

    This paper describes a project to generate organ dose data for the purposes of extending the VirtualDose software from CT imaging to interventional radiology (IR) applications. A library of 23 mesh-based anthropometric patient phantoms was used in Monte Carlo simulations for the database calculations. Organ doses and effective doses of IR procedures with specific beam projection, field of view (FOV), and beam quality for all parts of the body were obtained. Comparing organ doses generated by VirtualDose-IR for different beam qualities, beam projections, patient ages, and patient body mass indexes (BMIs), significant discrepancies were observed. Because IR procedures involve relatively long exposures, IR doses depend on beam quality, beam direction, and patient size. Therefore, VirtualDose-IR, which is based on the latest anatomically realistic patient phantoms, can generate accurate doses for IR treatment. This software is suitable for clinical IR dose management as an effective tool to estimate patient doses and optimize IR treatment plans.

  10. Comparison of a virtual microscope laboratory to a regular microscope laboratory for teaching histology.

    PubMed

    Harris, T; Leaven, T; Heidger, P; Kreiter, C; Duncan, J; Dick, F

    2001-02-01

    Emerging technology now exists to digitize a gigabyte of information from a glass slide, save it in a highly compressed file format, and deliver it over the web. By accessing these images with a standard web browser and viewer plug-in, a computer can emulate a real microscope and glass slide. Using this new technology, the immediate aims of our project were to digitize the glass slides from urinary tract, male genital, and endocrine units and implement them in the Spring 2000 Histology course at the University of Iowa, and to carry out a formative evaluation of the virtual slides of these three units in a side-by-side comparison with the regular microscope laboratory. The methods and results of this paper will describe the technology employed to create the virtual slides, and the formative evaluation carried out in the course. Anat Rec (New Anat) 265:10-14, 2001. Copyright 2001 Wiley-Liss, Inc.

  11. Optimal Access to NASA Water Cycle Data for Water Resources Management

    NASA Astrophysics Data System (ADS)

    Teng, W. L.; Arctur, D. K.; Espinoza, G. E.; Rui, H.; Strub, R. F.; Vollmer, B.

    2016-12-01

    A "Digital Divide" in data representation exists between the preferred way of data access by the hydrology community (i.e., as time series of discrete spatial objects) and the common way of data archival by earth science data centers (i.e., as continuous spatial fields, one file per time step). This Divide has been an obstacle, specifically, between the Consortium of Universities for the Advancement of Hydrologic Science, Inc. Hydrologic Information System (CUAHSI HIS) and NASA Goddard Earth Sciences Data and Information Services Center (GES DISC). An optimal approach to bridging the Divide, developed by the GES DISC, is to reorganize data from the way they are archived to some way that is optimal for the desired method of data access. Specifically for CUAHSI HIS, selected data sets were reorganized into time series files, one per geographical "point." These time series files, termed "data rods," are pre-generated or virtual (generated on-the-fly). Data sets available as data rods include North American Land Data Assimilation System (NLDAS), Global Land Data Assimilation System (GLDAS), TRMM Multi-satellite Precipitation Analysis (TMPA), Land Parameter Retrieval Model (LPRM), Modern-Era Retrospective Analysis for Research and Applications (MERRA)-Land, and Groundwater and Soil Moisture Conditions from Gravity Recovery and Climate Experiment (GRACE) Data Assimilation drought indicators for North America Drought Monitor (GRACE-DA-DM). In order to easily avail the operational water resources community the benefits of optimally reorganized data, we have developed multiple methods of making these data more easily accessible and usable. These include direct access via RESTful Web services, a browser-based Web map and statistical tool for selected NLDAS variables for the U.S. (CONUS), a HydroShare app (Data Rods Explorer, under development) on the Tethys Platform, and access via the GEOSS Portal. Examples of drought-related applications of these data and data access methods are provided.

  12. DMFS: A Data Migration File System for NetBSD

    NASA Technical Reports Server (NTRS)

    Studenmund, William

    2000-01-01

    I have recently developed DMFS, a Data Migration File System, for NetBSD. This file system provides kernel support for the data migration system being developed by my research group at NASA/Ames. The file system utilizes an underlying file store to provide the file backing, and coordinates user and system access to the files. It stores its internal metadata in a flat file, which resides on a separate file system. This paper will first describe our data migration system to provide a context for DMFS, then it will describe DMFS. It also will describe the changes to NetBSD needed to make DMFS work. Then it will give an overview of the file archival and restoration procedures, and describe how some typical user actions are modified by DMFS. Lastly, the paper will present simple performance measurements which indicate that there is little performance loss due to the use of the DMFS layer.

  13. A Virtual Emergency Telemedicine Serious Game in Medical Training: A Quantitative, Professional Feedback-Informed Evaluation Study

    PubMed Central

    Constantinou, Riana; Marangos, Charis; Kyriacou, Efthyvoulos; Bamidis, Panagiotis; Dafli, Eleni; Pattichis, Constantinos S

    2015-01-01

    Background Serious games involving virtual patients in medical education can provide a controlled setting within which players can learn in an engaging way, while avoiding the risks associated with real patients. Moreover, serious games align with medical students’ preferred learning styles. The Virtual Emergency TeleMedicine (VETM) game is a simulation-based game that was developed in collaboration with the mEducator Best Practice network in response to calls to integrate serious games in medical education and training. The VETM game makes use of data from an electrocardiogram to train practicing doctors, nurses, or medical students for problem-solving in real-life clinical scenarios through a telemedicine system and virtual patients. The study responds to two gaps: the limited number of games in emergency cardiology and the lack of evaluations by professionals. Objective The objective of this study is a quantitative, professional feedback-informed evaluation of one scenario of VETM, involving cardiovascular complications. The study has the following research question: “What are professionals’ perceptions of the potential of the Virtual Emergency Telemedicine game for training people involved in the assessment and management of emergency cases?” Methods The evaluation of the VETM game was conducted with 90 professional ambulance crew nursing personnel specializing in the assessment and management of emergency cases. After collaboratively trying out one VETM scenario, participants individually completed an evaluation of the game (36 questions on a 5-point Likert scale) and provided written and verbal comments. The instrument assessed six dimensions of the game: (1) user interface, (2) difficulty level, (3) feedback, (4) educational value, (5) user engagement, and (6) terminology. Data sources of the study were 90 questionnaires, including written comments from 51 participants, 24 interviews with 55 participants, and 379 log files of their interaction with the game. Results Overall, the results were positive in all dimensions of the game that were assessed, as means ranged from 3.2 to 3.99 out of 5, with user engagement receiving the highest score (mean 3.99, SD 0.87). Users’ perceived difficulty level received the lowest score (mean 3.20, SD 0.65), a finding which agrees with the analysis of log files that showed a rather low success rate (20.6%). Even though professionals saw the educational value and usefulness of the tool for pre-hospital emergency training (mean 3.83, SD 1.05), they identified confusing features and provided input for improving them. Conclusions Overall, the results of the professional feedback-informed evaluation of the game provide a strong indication of its potential as an educational tool for emergency training. Professionals’ input will serve to improve the game. Further research will aim to validate VETM in a randomized pre-test, post-test control group study to examine possible learning gains in participants’ problem-solving skills in treating a patient’s symptoms in an emergency situation. PMID:26084866

  14. A Virtual Emergency Telemedicine Serious Game in Medical Training: A Quantitative, Professional Feedback-Informed Evaluation Study.

    PubMed

    Nicolaidou, Iolie; Antoniades, Athos; Constantinou, Riana; Marangos, Charis; Kyriacou, Efthyvoulos; Bamidis, Panagiotis; Dafli, Eleni; Pattichis, Constantinos S

    2015-06-17

    Serious games involving virtual patients in medical education can provide a controlled setting within which players can learn in an engaging way, while avoiding the risks associated with real patients. Moreover, serious games align with medical students' preferred learning styles. The Virtual Emergency TeleMedicine (VETM) game is a simulation-based game that was developed in collaboration with the mEducator Best Practice network in response to calls to integrate serious games in medical education and training. The VETM game makes use of data from an electrocardiogram to train practicing doctors, nurses, or medical students for problem-solving in real-life clinical scenarios through a telemedicine system and virtual patients. The study responds to two gaps: the limited number of games in emergency cardiology and the lack of evaluations by professionals. The objective of this study is a quantitative, professional feedback-informed evaluation of one scenario of VETM, involving cardiovascular complications. The study has the following research question: "What are professionals' perceptions of the potential of the Virtual Emergency Telemedicine game for training people involved in the assessment and management of emergency cases?" The evaluation of the VETM game was conducted with 90 professional ambulance crew nursing personnel specializing in the assessment and management of emergency cases. After collaboratively trying out one VETM scenario, participants individually completed an evaluation of the game (36 questions on a 5-point Likert scale) and provided written and verbal comments. The instrument assessed six dimensions of the game: (1) user interface, (2) difficulty level, (3) feedback, (4) educational value, (5) user engagement, and (6) terminology. Data sources of the study were 90 questionnaires, including written comments from 51 participants, 24 interviews with 55 participants, and 379 log files of their interaction with the game. Overall, the results were positive in all dimensions of the game that were assessed, as means ranged from 3.2 to 3.99 out of 5, with user engagement receiving the highest score (mean 3.99, SD 0.87). Users' perceived difficulty level received the lowest score (mean 3.20, SD 0.65), a finding which agrees with the analysis of log files that showed a rather low success rate (20.6%). Even though professionals saw the educational value and usefulness of the tool for pre-hospital emergency training (mean 3.83, SD 1.05), they identified confusing features and provided input for improving them. Overall, the results of the professional feedback-informed evaluation of the game provide a strong indication of its potential as an educational tool for emergency training. Professionals' input will serve to improve the game. Further research will aim to validate VETM in a randomized pre-test, post-test control group study to examine possible learning gains in participants' problem-solving skills in treating a patient's symptoms in an emergency situation.

  15. Computer network defense system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Urias, Vincent; Stout, William M. S.; Loverro, Caleb

    A method and apparatus for protecting virtual machines. A computer system creates a copy of a group of the virtual machines in an operating network in a deception network to form a group of cloned virtual machines in the deception network when the group of the virtual machines is accessed by an adversary. The computer system creates an emulation of components from the operating network in the deception network. The components are accessible by the group of the cloned virtual machines as if the group of the cloned virtual machines was in the operating network. The computer system moves network connections for the group of the virtual machines in the operating network used by the adversary from the group of the virtual machines in the operating network to the group of the cloned virtual machines, enabling protection of the group of the virtual machines from actions performed by the adversary.

  16. The use of a virtual patient case in an OSCE-based exam--a pilot study.

    PubMed

    Courteille, O; Bergin, R; Stockeld, D; Ponzer, S; Fors, U

    2008-01-01

    This study focuses on a skills-test-based clinical assessment in which 118 fourth-year medical students at the four teaching hospitals of Karolinska Institutet participated in the same 12-module OSCE. The goal of one of the twelve examination modules was to assess the students' skills and ability to solve a virtual patient (VP) case (the ISP system), which included medical history taking, lab tests, physical examinations, and suggestion of a preliminary diagnosis. The primary aim of this study was to evaluate the potential of a VP as a possible tool for assessment of clinical reasoning and problem-solving ability among medical students. The feeling of realism of the VP and its possible affective impact on the students' confidence were also investigated. We observed and analysed students' reactions, engagement, and performance (activity log files) during their interactive sessions with the simulation. An individual human assistant was provided along with the computer simulation, and the videotaped student/assistant interaction was then analysed in detail and related to the students' outcomes. The results indicate possible advantages of using ISP-like systems for assessment. The VP was, for instance, able to reliably differentiate between students' performances, but some weaknesses were also identified, such as a confounding influence of the assistants on students' outcomes. Significant differences affecting the results were found between the students in their degree of affective response towards the system as well as in the perceived usefulness of assistance. Students need to be trained beforehand in mastering the assessment tool. Rating compliance needs to be addressed before VP-based systems like ISP can be used in exams, and if such systems are used in high-stakes exams, the use of human assistants should be limited and scoring rubrics validated (and preferably automated).

  17. Astronomical virtual observatory and the place and role of Bulgarian one

    NASA Astrophysics Data System (ADS)

    Petrov, Georgi; Dechev, Momchil; Slavcheva-Mihova, Luba; Duchlev, Peter; Mihov, Bojko; Kochev, Valentin; Bachev, Rumen

    2009-07-01

    A virtual observatory could be defined as a collection of integrated astronomical data archives and software tools that use computer networks to create an environment in which research can be conducted. Several countries have initiated national virtual observatory programs that combine existing databases from ground-based and orbiting observatories. As a result, data from all the world's major observatories will be available to all users and to the public. This is significant not only because of the immense volume of astronomical data but also because the data on stars and galaxies have been compiled from observations in a variety of wavelengths: optical, radio, infrared, gamma ray, X-ray, and more. In a virtual observatory environment, all of these data are integrated so that they can be synthesized and used in a given study. In the autumn of 2001 (26.09.2001), six European organizations, ESO, ESA, AstroGrid, CDS, CNRS, and Jodrell Bank, initiated the establishment of the Astronomical Virtual Observatory (AVO) (Dolensky et al., 2003). Its aims have been outlined as follows: to provide comparative analysis of large sets of multiwavelength data; to reuse data collected by a single source; to provide uniform access to data; to make data available to less-advantaged communities; and to be an educational tool. The virtual observatory includes: tools that make it easy to locate and retrieve data from catalogues, archives, and databases worldwide; tools for data analysis, simulation, and visualization; tools to compare observations with results obtained from models, simulations, and theory; interoperability, i.e., services that can be used regardless of the client's computing platform, operating system, and software capabilities; access to data in near real time, archived data, and historical data; and additional information such as documentation, user guides, reports, publications, and news. This large growth in astronomical data and the necessity of easy access to those data led to the foundation of the International Virtual Observatory Alliance (IVOA). The IVOA was formed in June 2002; by January 2005, it had grown to include 15 funded VO projects from Australia, Canada, China, Europe, France, Germany, Hungary, India, Italy, Japan, Korea, Russia, Spain, the United Kingdom, and the United States. At present, Bulgaria is not a member of the European Astronomical Virtual Observatory, and as the Bulgarian Virtual Observatory is not a legal entity, we are not members of the IVOA. The main purpose of the project is for the Bulgarian Virtual Observatory to join the leading virtual astronomical institutions in the world. Initially, the Bulgarian Virtual Observatory will include: the BG Galaxian virtual observatory; the BG Solar virtual observatory; the Star Clusters department of IA, BAS; and the WFPDB group of IA, BAS. All available data will be integrated in the Bulgarian centers of astronomical data, coordinated by the Wide Field Plate Archive data centre. For this purpose, PostgreSQL and/or MySQL will be installed on the BG-VO server, along with SAADA tools, ESO-MEX, and/or the DAL ToolKit, to transform our FITS files into a standard format for VO tools. Some of the participants became acquainted with the principles of these products during the "Days of the Virtual Observatory in Sofia" in January 2008.

  18. Head-mounted active noise control system with virtual sensing technique

    NASA Astrophysics Data System (ADS)

    Miyazaki, Nobuhiro; Kajikawa, Yoshinobu

    2015-03-01

    In this paper, we apply a virtual sensing technique to a head-mounted active noise control (ANC) system we have already proposed. The proposed ANC system can reduce narrowband noise while improving the noise reduction ability at the desired locations. A head-mounted ANC system based on an adaptive feedback structure can reduce noise with periodicity or narrowband components. However, since quiet zones are formed only at the locations of error microphones, an adequate noise reduction cannot be achieved at the locations where error microphones cannot be placed such as near the eardrums. A solution to this problem is to apply a virtual sensing technique. A virtual sensing ANC system can achieve higher noise reduction at the desired locations by measuring the system models from physical sensors to virtual sensors, which will be used in the online operation of the virtual sensing ANC algorithm. Hence, we attempt to achieve the maximum noise reduction near the eardrums by applying the virtual sensing technique to the head-mounted ANC system. However, it is impossible to place the microphone near the eardrums. Therefore, the system models from physical sensors to virtual sensors are estimated using the Head And Torso Simulator (HATS) instead of human ears. Some simulation, experimental, and subjective assessment results demonstrate that the head-mounted ANC system with virtual sensing is superior to that without virtual sensing in terms of the noise reduction ability at the desired locations.

  19. PipeOnline 2.0: automated EST processing and functional data sorting.

    PubMed

    Ayoubi, Patricia; Jin, Xiaojing; Leite, Saul; Liu, Xianghui; Martajaja, Jeson; Abduraham, Abdurashid; Wan, Qiaolan; Yan, Wei; Misawa, Eduardo; Prade, Rolf A

    2002-11-01

    Expressed sequence tags (ESTs) are generated and deposited in the public domain as redundant, unannotated, single-pass reactions with virtually no biological content. PipeOnline automatically analyses and transforms large collections of raw DNA-sequence data from chromatograms or FASTA files: it calls base quality, screens and removes vector sequences, assembles redundant input files and rewrites consensus sequences into a unigene EST data set, and finally derives functional annotation through translation, amino acid sequence similarity searches, and annotation from public databases. PipeOnline generates an annotated database, retaining the processed unigene sequence, clone/file history, alignments with similar sequences, and a proposed functional classification, if available. Functional annotation is automatic and based on a novel method that relies on homology of amino acid sequence multiplicity within GenBank records. Records can be examined through a function-ordered browser or keyword queries, with automated export of results. PipeOnline offers customization for individual projects (MyPipeOnline), automated updating, and an alert service. PipeOnline is available at http://stress-genomics.org.

  20. Virtual hand: a 3D tactile interface to virtual environments

    NASA Astrophysics Data System (ADS)

    Rogowitz, Bernice E.; Borrel, Paul

    2008-02-01

    We introduce a novel system that allows users to experience the sensation of touch in a computer graphics environment. In this system, the user places his/her hand on an array of pins, which is moved about space on a 6 degree-of-freedom robot arm. The surface of the pins defines a surface in the virtual world. This "virtual hand" can move about the virtual world. When the virtual hand encounters an object in the virtual world, the heights of the pins are adjusted so that they represent the object's shape, surface, and texture. A control system integrates pin and robot arm motions to transmit information about objects in the computer graphics world to the user. It also allows the user to edit, change and move the virtual objects, shapes and textures. This system provides a general framework for touching, manipulating, and modifying objects in a 3-D computer graphics environment, which may be useful in a wide range of applications, including computer games, computer aided design systems, and immersive virtual worlds.
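
    A toy sketch of the pin-array control idea: when the virtual hand sits at (cx, cy), the virtual surface's height is sampled under each pin of the array and clamped to the pins' travel. The surface function, array size, pitch, and units are invented for illustration.

        import numpy as np

        def surface(x, y):
            """Stand-in virtual object: a smooth bump centred at the origin."""
            return np.maximum(0.0, 1.0 - (x ** 2 + y ** 2))

        def pin_targets(cx, cy, pins=8, pitch=0.5, travel=2.0):
            """Target heights for a pins x pins array centred at (cx, cy)."""
            offsets = (np.arange(pins) - (pins - 1) / 2) * pitch
            xx, yy = np.meshgrid(cx + offsets, cy + offsets)
            return np.clip(surface(xx, yy), 0.0, travel)

        print(pin_targets(0.3, -0.2))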

  1. DaCHS: Data Center Helper Suite

    NASA Astrophysics Data System (ADS)

    Demleitner, Markus

    2018-04-01

    DaCHS, the Data Center Helper Suite, is an integrated package for publishing astronomical data sets to the Virtual Observatory. Network-facing, it speaks the major VO protocols (SCS, SIAP, SSAP, TAP, Datalink, etc.). Operator-facing, it processes many input formats, including FITS/WCS, ASCII files, and VOTable, into publication-ready data. DaCHS puts particular emphasis on integrated metadata handling, which facilitates a tight integration with the VO's Registry.

  2. Twin imaging phenomenon of integral imaging.

    PubMed

    Hu, Juanmei; Lou, Yimin; Wu, Fengmin; Chen, Aixi

    2018-05-14

    The imaging principles and phenomena of the integral imaging technique have been studied in detail using geometrical optics, wave optics, or light field theory. However, most of the conclusions are suited only to integral imaging systems using diffused illumination. In this work, a twin imaging phenomenon and its mechanism have been observed in a non-diffused-illumination reflective integral imaging system. Interactive twin images, a real and a virtual 3D image of one object, can be activated in the system. The imaging phenomenon is similar to the conjugate imaging effect of a hologram, but it is based on refraction and reflection instead of diffraction. The imaging characteristics and mechanisms, which differ from those of traditional integral imaging, are deduced analytically. Thin-film integral imaging systems with 80 μm thickness have also been made to verify the imaging phenomenon. Vivid, lighting-interactive twin 3D images have been realized using a light-emitting diode (LED) light source. When the LED is moving, the twin 3D images move synchronously. This interesting phenomenon shows good application prospects in interactive 3D display, augmented reality, and security authentication.

  3. [Constructing 3-dimensional colorized digital dental model assisted by digital photography].

    PubMed

    Ye, Hong-qiang; Liu, Yu-shu; Liu, Yun-song; Ning, Jing; Zhao, Yi-jiao; Zhou, Yong-sheng

    2016-02-18

    To explore a method of constructing a universal 3-dimensional (3D) colorized digital dental model which can be displayed and edited in common 3D software (such as the Geomagic series), in order to improve the visual effect of digital dental models in 3D software. The morphological data of teeth and gingivae were obtained by an intra-oral scanning system (3Shape TRIOS), constructing 3D digital dental models. The 3D digital dental models were exported as STL files. Meanwhile, referring to the accredited photography guide of the American Academy of Cosmetic Dentistry (AACD), five selected digital photographs of the patients' teeth and gingivae were taken by a digital single-lens reflex camera (DSLR) with the same exposure parameters (except occlusal views) to capture the color data. In Geomagic Studio 2013, after the STL file of the 3D digital dental model was imported, the digital photographs were projected onto the 3D digital dental model at the corresponding positions and angles. The junctions of different photos were carefully trimmed to get continuous and natural color transitions. The 3D colorized digital dental model was then constructed and exported as an OBJ file or a WRP file, a format specific to Geomagic-series software. For the purpose of evaluating the visual effect of the 3D colorized digital model, a rating scale on the color simulation effect as judged by patients was used. Sixteen patients were recruited and their scores on colored and non-colored digital dental models were recorded. The data were analyzed using the McNemar-Bowker test in SPSS 20. A universal 3D colorized digital dental model with better color simulation was constructed based on intra-oral scanning and digital photography. For clinical application, the 3D colorized digital dental models, combined with 3D face images, were introduced into the 3D smile design of aesthetic rehabilitation, which could improve patients' understanding of the esthetic digital design and the virtual prosthetic effect. A universal 3D colorized digital dental model with better color simulation can be constructed with the aid of a 3D dental scanning system and digital photography. In clinical practice, communication between dentist and patients could be improved by the better visual perception afforded by colorized 3D digital dental models with better color simulation.

  4. Efficient Generation and Selection of Virtual Populations in Quantitative Systems Pharmacology Models.

    PubMed

    Allen, R J; Rieger, T R; Musante, C J

    2016-03-01

    Quantitative systems pharmacology models mechanistically describe a biological system and the effect of drug treatment on system behavior. Because these models rarely are identifiable from the available data, the uncertainty in physiological parameters may be sampled to create alternative parameterizations of the model, sometimes termed "virtual patients." In order to reproduce the statistics of a clinical population, virtual patients are often weighted to form a virtual population that reflects the baseline characteristics of the clinical cohort. Here we introduce a novel technique to efficiently generate virtual patients and, from this ensemble, demonstrate how to select a virtual population that matches the observed data without the need for weighting. This approach improves confidence in model predictions by mitigating the risk that spurious virtual patients become overrepresented in virtual populations.
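
    A hedged sketch of selection without weighting, in the spirit described above (generic acceptance sampling, not necessarily the authors' exact algorithm): candidate virtual patients are simulated, then accepted with probability proportional to the target density of their outputs, so the accepted ensemble matches the observed distribution directly. The model, parameter ranges, and target distribution are all stand-ins.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        def simulate(params):
            """Stand-in for the QSP model: maps parameters to one observable."""
            return 2.0 * params[0] + params[1]

        target = stats.norm(loc=5.0, scale=1.0)   # observed clinical statistic

        accepted = []
        for _ in range(20000):
            p = rng.uniform([0.0, 0.0], [5.0, 5.0])   # plausible parameter ranges
            y = simulate(p)
            # Accept in proportion to how typical y is of the clinical data:
            if rng.uniform() < target.pdf(y) / target.pdf(target.mean()):
                accepted.append(p)

        print(f"virtual population size: {len(accepted)}")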

  5. Small file aggregation in a parallel computing system

    DOEpatents

    Faibish, Sorin; Bent, John M.; Tzelnic, Percy; Grider, Gary; Zhang, Jingwang

    2014-09-02

    Techniques are provided for small file aggregation in a parallel computing system. An exemplary method for storing a plurality of files generated by a plurality of processes in a parallel computing system comprises aggregating the plurality of files into a single aggregated file; and generating metadata for the single aggregated file. The metadata comprises an offset and a length of each of the plurality of files in the single aggregated file. The metadata can be used to unpack one or more of the files from the single aggregated file.
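
    A minimal sketch of the aggregation idea in this record: many small per-process files are packed into one aggregated file, with a metadata index of (offset, length) per original file so any one can be unpacked later. The JSON index is our simplification of the patent's metadata.

        import json

        def aggregate(paths, agg_path, index_path):
            """Pack the given files into agg_path; record offsets in index_path."""
            index, offset = {}, 0
            with open(agg_path, "wb") as out:
                for p in paths:
                    with open(p, "rb") as f:
                        data = f.read()
                    out.write(data)
                    index[p] = {"offset": offset, "length": len(data)}
                    offset += len(data)
            with open(index_path, "w") as f:
                json.dump(index, f)

        def extract(name, agg_path, index_path):
            """Unpack one original file from the aggregated file."""
            with open(index_path) as f:
                entry = json.load(f)[name]
            with open(agg_path, "rb") as f:
                f.seek(entry["offset"])
                return f.read(entry["length"])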

  6. Software for Managing Parametric Studies

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; McCann, Karen M.; DeVivo, Adrian

    2003-01-01

    The Information Power Grid Virtual Laboratory (ILab) is a Practical Extraction and Reporting Language (PERL) graphical-user-interface computer program that generates shell scripts to facilitate parametric studies performed on the Grid. (The Grid denotes a worldwide network of supercomputers used for scientific and engineering computations involving data sets too large to fit on desktop computers.) Heretofore, parametric studies on the Grid have been impeded by the need to create control-language scripts and edit input data files, painstaking tasks that are necessary for managing multiple jobs on multiple computers. ILab reflects an object-oriented approach to automation of these tasks: all data and operations are organized into packages in order to accelerate development and debugging. A container or document object in ILab, called an experiment, contains all the information (data and file paths) necessary to define a complex series of repeated, sequenced, and/or branching processes. For convenience and to enable reuse, this object is serialized to and from disk storage. At run time, the current ILab experiment is used to generate required input files and shell scripts, create directories, copy data files, and then both initiate and monitor the execution of all computational processes.
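
    A small sketch of the "experiment" container idea: one serializable object holds the template and parameter ranges needed to regenerate every input file of a parametric study. The structure and names here are ours, not ILab's (which is written in PERL).

        import itertools
        import json
        from dataclasses import dataclass, field, asdict

        @dataclass
        class Experiment:
            template: str                       # input template with {mach}, {alpha}
            parameters: dict = field(default_factory=dict)

            def save(self, path):
                """Serialize to disk so the experiment can be reloaded and reused."""
                with open(path, "w") as f:
                    json.dump(asdict(self), f)

            def generate(self):
                """Write one input file per point of the parameter cross-product."""
                names = list(self.parameters)
                for combo in itertools.product(*self.parameters.values()):
                    case = dict(zip(names, combo))
                    tag = "_".join(f"{k}{v}" for k, v in case.items())
                    with open(f"case_{tag}.inp", "w") as f:
                        f.write(self.template.format(**case))

        exp = Experiment("mach = {mach}\nalpha = {alpha}\n",
                         {"mach": [0.6, 0.8], "alpha": [0, 2, 4]})
        exp.save("experiment.json")
        exp.generate()    # writes six input files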

  7. Collective operations in a file system based execution model

    DOEpatents

    Shinde, Pravin; Van Hensbergen, Eric

    2013-02-12

    A mechanism is provided for group communications using a MULTI-PIPE synthetic file system. A master application creates a multi-pipe synthetic file in the MULTI-PIPE synthetic file system, the master application indicating a multi-pipe operation to be performed. The master application then writes a header-control block of the multi-pipe synthetic file specifying at least one of a multi-pipe synthetic file system name, a message type, a message size, a specific destination, or a specification of the multi-pipe operation. Any other application participating in the group communications then opens the same multi-pipe synthetic file. A MULTI-PIPE file system module then implements the multi-pipe operation as identified by the master application. The master application and the other applications then either read or write operation messages to the multi-pipe synthetic file and the MULTI-PIPE synthetic file system module performs appropriate actions.

  8. Collective operations in a file system based execution model

    DOEpatents

    Shinde, Pravin; Van Hensbergen, Eric

    2013-02-19

    A mechanism is provided for group communications using a MULTI-PIPE synthetic file system. A master application creates a multi-pipe synthetic file in the MULTI-PIPE synthetic file system, the master application indicating a multi-pipe operation to be performed. The master application then writes a header-control block of the multi-pipe synthetic file specifying at least one of a multi-pipe synthetic file system name, a message type, a message size, a specific destination, or a specification of the multi-pipe operation. Any other application participating in the group communications then opens the same multi-pipe synthetic file. A MULTI-PIPE file system module then implements the multi-pipe operation as identified by the master application. The master application and the other applications then either read or write operation messages to the multi-pipe synthetic file and the MULTI-PIPE synthetic file system module performs appropriate actions.

  9. Design and Implementation of a Metadata-rich File System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ames, S; Gokhale, M B; Maltzahn, C

    2010-01-19

    Despite continual improvements in the performance and reliability of large scale file systems, the management of user-defined file system metadata has changed little in the past decade. The mismatch between the size and complexity of large scale data stores and their ability to organize and query their metadata has led to a de facto standard in which raw data is stored in traditional file systems, while related, application-specific metadata is stored in relational databases. This separation of data and semantic metadata requires considerable effort to maintain consistency and can result in complex, slow, and inflexible system operation. To address these problems, we have developed the Quasar File System (QFS), a metadata-rich file system in which files, user-defined attributes, and file relationships are all first class objects. In contrast to hierarchical file systems and relational databases, QFS defines a graph data model composed of files and their relationships. QFS incorporates Quasar, an XPATH-extended query language for searching the file system. Results from our QFS prototype show the effectiveness of this approach. Compared to the de facto standard, the QFS prototype shows superior ingest performance and comparable query performance on user metadata-intensive operations and superior performance on normal file metadata operations.
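
    Quasar's actual query syntax is not reproduced here, but the core idea of QFS (files, attributes, and relationships as first-class graph objects) can be caricatured in a few lines; the file names, attributes, and relationship labels below are invented:

```python
# Toy graph data model: files carry attributes; edges carry relationship labels.
files = {
    "run42.h5":  {"experiment": "A", "quality": "good"},
    "run42.log": {"experiment": "A"},
}
edges = [("run42.log", "describes", "run42.h5")]

def related(target, label):
    """Find files linked to `target` by relationship `label`."""
    return [src for src, lab, dst in edges if dst == target and lab == label]

print(related("run42.h5", "describes"))  # -> ['run42.log']
```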

  10. Accessible and informative sectioned images, color-coded images, and surface models of the ear.

    PubMed

    Park, Hyo Seok; Chung, Min Suk; Shin, Dong Sun; Jung, Yong Wook; Park, Jin Seo

    2013-08-01

    In our previous research, we created state-of-the-art sectioned images, color-coded images, and surface models of the human ear. Our ear data would be more beneficial and informative if they were more easily accessible. Therefore, the purpose of this study was to distribute the browsing software and the PDF file in which the ear images can be readily obtained and freely explored. Another goal was to inform other researchers of our methods for establishing the browsing software and the PDF file. To achieve this, sectioned images and color-coded images of the ear were prepared (voxel size 0.1 mm). In the color-coded images, structures related to hearing and equilibrium, as well as structures originating from the first and second pharyngeal arches, were additionally segmented. The sectioned and color-coded images of the right ear were added to the browsing software, which displays the images serially along with structure names. The surface models were reconstructed and combined into the PDF file, where they can be freely manipulated. Using the browsing software and the PDF file, the sectional and three-dimensional shapes of ear structures can be comprehended in detail. Furthermore, using the PDF file, clinical knowledge can be acquired through virtual otoscopy. The presented educational tools will therefore be helpful to medical students and otologists by improving their knowledge of ear anatomy. The browsing software and PDF file can be downloaded without charge or registration from our homepage (http://anatomy.dongguk.ac.kr/ear/). Copyright © 2013 Wiley Periodicals, Inc.

  11. Methods and apparatus for multi-resolution replication of files in a parallel computing system using semantic information

    DOEpatents

    Faibish, Sorin; Bent, John M.; Tzelnic, Percy; Grider, Gary; Torres, Aaron

    2015-10-20

    Techniques are provided for storing files in a parallel computing system using different resolutions. A method is provided for storing at least one file generated by a distributed application in a parallel computing system. The file comprises one or more of a complete file and a sub-file. The method comprises the steps of obtaining semantic information related to the file; generating a plurality of replicas of the file with different resolutions based on the semantic information; and storing the file and the plurality of replicas of the file in one or more storage nodes of the parallel computing system. The different resolutions comprise, for example, a variable number of bits and/or a different sub-set of data elements from the file. A plurality of the sub-files can be merged to reproduce the file.
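
    A toy rendering of the two kinds of reduced resolution the abstract mentions (fewer bits per element, and a sub-set of data elements); this is illustrative only, not the patented method:

```python
import numpy as np

def make_replicas(data: np.ndarray):
    """Generate lower-resolution replicas of an array-valued file:
    one with reduced numeric precision, one with a strided subset."""
    full = data                         # original resolution
    low_bits = data.astype(np.float16)  # fewer bits per element
    subset = data[::4]                  # every 4th data element
    return {"full": full, "half_precision": low_bits, "strided": subset}

replicas = make_replicas(np.linspace(0.0, 1.0, 1_000_000))
for name, arr in replicas.items():
    print(name, arr.nbytes, "bytes")
```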

  12. Reciproc versus Twisted file for root canal filling removal: assessment of apically extruded debris.

    PubMed

    Altunbas, Demet; Kutuk, Betul; Toyoglu, Mustafa; Kutlu, Gizem; Kustarci, Alper; Er, Kursat

    2016-01-01

    The aim of this study was to evaluate the amount of apically extruded debris during endodontic retreatment with different file systems. Sixty extracted human mandibular premolar teeth were used in this study. Root canals of the teeth were instrumented and filled before being randomly assigned to three groups. Gutta-percha was removed using the Reciproc system, the Twisted File system (TF), and Hedström files (H-file). Apically extruded debris was collected and dried in pre-weighed Eppendorf tubes. The amount of extruded debris was assessed with an electronic balance. Data were statistically analyzed using one-way ANOVA, Kruskal-Wallis, and Mann-Whitney U tests. The Reciproc and TF systems extruded significantly less debris than the H-file (p<0.05). However, no significant difference was found between the Reciproc and TF systems. All tested file systems caused apical extrusion of debris. Both the rotary file (TF) and the reciprocating single-file (Reciproc) systems were associated with less apical extrusion compared with the H-file.

  13. Cloud object store for checkpoints of high performance computing applications using decoupling middleware

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-04-19

    Cloud object storage is enabled for checkpoints of high performance computing applications using a middleware process. A plurality of files, such as checkpoint files, generated by a plurality of processes in a parallel computing system are stored by obtaining said plurality of files from said parallel computing system; converting said plurality of files to objects using a log structured file system middleware process; and providing said objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.
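
    A schematic of the middleware role described above (many checkpoint writes absorbed as log appends, then handed to an object store); PLFS internals are far more involved, and the object keys here are invented:

```python
import json

class LogStructuredWriter:
    """Caricature of the middleware role: absorb many checkpoint writes
    as appends to a log, then hand the log off as cloud objects."""
    def __init__(self):
        self.log = bytearray()
        self.index = []  # (logical_name, offset, length)

    def write(self, name: str, data: bytes):
        self.index.append((name, len(self.log), len(data)))
        self.log.extend(data)

    def to_objects(self):
        """Return (key, body) pairs ready for a PUT to an object store."""
        return [("ckpt/log", bytes(self.log)),
                ("ckpt/index", json.dumps(self.index).encode())]

w = LogStructuredWriter()
w.write("rank0.ckpt", b"state-of-rank-0")
w.write("rank1.ckpt", b"state-of-rank-1")
for key, body in w.to_objects():
    print(key, len(body), "bytes")
```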

  14. Distributed PACS using distributed file system with hierarchical meta data servers.

    PubMed

    Hiroyasu, Tomoyuki; Minamitani, Yoshiyuki; Miki, Mitsunori; Yokouchi, Hisatake; Yoshimi, Masato

    2012-01-01

    In this research, we propose a new distributed PACS (Picture Archiving and Communication System) that can integrate the several PACSs that exist in individual medical institutions. A conventional PACS stores DICOM files in a single database. In the proposed system, by contrast, each DICOM file is separated into metadata and image data, which are stored individually. Because operations such as finding files or changing titles do not need to access the entire file, they can be performed at high speed. At the same time, because a distributed file system is used, access to image files achieves both high speed and high fault tolerance. A further significant point of the proposed system is the simplicity of integrating several PACSs: only the metadata servers need to be integrated to construct the combined system. The system also scales file access with the number and size of files. On the other hand, because the metadata server is centralized, it is the weak point of the system. To overcome this defect, hierarchical metadata servers are introduced; this increases not only fault tolerance but also the scalability of file access. To evaluate the proposed system, a prototype was implemented using Gfarm, and its file search times were compared with those of NFS.
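
    The essence of the proposed split (small metadata to the metadata server, bulk image data to the distributed file system) can be sketched as follows; the field names are illustrative, and a real system would use a DICOM library rather than a plain dict:

```python
# Toy split of a DICOM-like record into metadata (for the metadata server)
# and bulk pixel data (for the distributed file system), as proposed above.
def split_dicom(record: dict):
    image = record.pop("PixelData")  # large blob -> distributed file system
    meta = record                    # small fields -> metadata server
    return meta, image

record = {"PatientID": "12345", "StudyDate": "20120101",
          "Modality": "CT", "PixelData": b"\x00" * 512 * 512}
meta, image = split_dicom(record)
# Title changes or searches now touch only `meta`, never the pixel blob.
print(meta, len(image), "bytes of image data")
```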

  15. Toward information management in corporations (2)

    NASA Astrophysics Data System (ADS)

    Shibata, Mitsuru

    If the construction of in-house information management systems in an advanced information society is to be positioned alongside social information management, its foundation begins with a review of current paper filing systems. Since the problems inherent in in-house information management systems that utilize OA equipment also inhere in paper filing systems, the first step toward full-scale in-house information management should be to grasp and solve the fundamental problems in current filing systems. This paper describes an analysis of the fundamental problems in filing systems, the creation of new types of offices, an analysis of needs for improving filing systems, and some key points in improving filing systems.

  16. Efficient Generation and Selection of Virtual Populations in Quantitative Systems Pharmacology Models

    PubMed Central

    Rieger, TR; Musante, CJ

    2016-01-01

    Quantitative systems pharmacology models mechanistically describe a biological system and the effect of drug treatment on system behavior. Because these models rarely are identifiable from the available data, the uncertainty in physiological parameters may be sampled to create alternative parameterizations of the model, sometimes termed “virtual patients.” In order to reproduce the statistics of a clinical population, virtual patients are often weighted to form a virtual population that reflects the baseline characteristics of the clinical cohort. Here we introduce a novel technique to efficiently generate virtual patients and, from this ensemble, demonstrate how to select a virtual population that matches the observed data without the need for weighting. This approach improves confidence in model predictions by mitigating the risk that spurious virtual patients become overrepresented in virtual populations. PMID:27069777
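
    The paper's exact selection algorithm is not reproduced here, but the idea of selecting (rather than weighting) virtual patients so that the accepted ensemble matches an observed distribution can be illustrated with simple rejection sampling; the biomarker, prior range, and target distribution are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Plausible virtual patients: a baseline biomarker sampled from a wide prior.
virtual_patients = rng.uniform(50, 250, size=100_000)

# Observed clinical distribution (assumed normal here, mean 120, sd 25).
mu, sigma = 120.0, 25.0
density = lambda x: np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Accept each virtual patient with probability proportional to the target
# density; the accepted set then matches the cohort with no per-patient weights.
accept = rng.random(virtual_patients.size) < density(virtual_patients)
virtual_population = virtual_patients[accept]
print(virtual_population.mean(), virtual_population.std())  # ~120, ~25
```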

  17. Status Report for Remediation Decision Support Project, Task 1, Activity 1.B – Physical and Hydraulic Properties Database and Interpretation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rockhold, Mark L.

    2008-09-26

    The objective of Activity 1.B of the Remediation Decision Support (RDS) Project is to compile all available physical and hydraulic property data for sediments from the Hanford Site, to port these data into the Hanford Environmental Information System (HEIS), and to make the data web-accessible to anyone on the Hanford Local Area Network via the so-called Virtual Library. In past years, efforts were made by RDS project staff to compile all available physical and hydraulic property data for Hanford sediments and to transfer these data into SoilVision®, a commercial geotechnical software package designed for storing, analyzing, and manipulating soils data. Although SoilVision® has proven to be useful, its access and use restrictions have been recognized as a limitation to the effective use of the physical and hydraulic property databases by the broader group of potential users involved in Hanford waste site issues. In order to make these data more widely available and usable, a decision was made to port them to HEIS and to make them web-accessible via a Virtual Library module. In FY08 the objectives of Activity 1.B of the RDS Project were to: (1) ensure traceability and defensibility of all physical and hydraulic property data currently residing in the SoilVision® database maintained by PNNL, (2) transfer the physical and hydraulic property data from the Microsoft Access database files used by SoilVision® into HEIS, which has most recently been maintained by Fluor-Hanford, Inc., (3) develop a Virtual Library module for accessing these data from HEIS, and (4) write a User's Manual for the Virtual Library module. The development of the Virtual Library module was to be performed by a third party under subcontract to Fluor. The intent of these activities is to make the available physical and hydraulic property data more readily accessible and usable by technical staff and operable unit managers involved in waste site assessments and remedial action decisions for Hanford. This status report describes the history of this development effort and progress to date.

  18. American Telephone and Telegraph System V/MLS Release 1.1.2 Running on Unix System V Release 3.1.1

    DTIC Science & Technology

    1989-10-18

    Evaluation Report, AT&T System V/MLS, System Overview: ...what is specified in the /mls/passwd file. For a complete description of how this works, see page 62... from the publicly readable files /etc/passwd and /etc/group, to the protected files /mls/passwd and /mls/group. These protected files are ASCII files which are referred to as "shadow files"... /mls/passwd contains the

  19. 78 FR 21930 - Aquenergy Systems, Inc.; Notice of Intent To File License Application, Filing of Pre-Application...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-12

    ... Systems, Inc.; Notice of Intent To File License Application, Filing of Pre-Application Document, and Approving Use of the Traditional Licensing Process a. Type of Filing: Notice of Intent to File License...: November 11, 2012. d. Submitted by: Aquenergy Systems, Inc., a fully owned subsidiary of Enel Green Power...

  20. Storing files in a parallel computing system based on user-specified parser function

    DOEpatents

    Faibish, Sorin; Bent, John M; Tzelnic, Percy; Grider, Gary; Manzanares, Adam; Torres, Aaron

    2014-10-21

    Techniques are provided for storing files in a parallel computing system based on a user-specified parser function. A plurality of files generated by a distributed application in a parallel computing system are stored by obtaining a parser from the distributed application for processing the plurality of files prior to storage; and storing one or more of the plurality of files in one or more storage nodes of the parallel computing system based on the processing by the parser. The plurality of files comprise one or more of a plurality of complete files and a plurality of sub-files. The parser can optionally store only those files that satisfy one or more semantic requirements of the parser. The parser can also extract metadata from one or more of the files and the extracted metadata can be stored with one or more of the plurality of files and used for searching for files.
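
    A minimal sketch of the described flow, in which the storage layer consults an application-supplied parser before storing anything (the parser contract and file format here are invented, not the patented interface):

```python
# Toy version of the flow above: the application supplies a parser; the
# storage layer applies it before deciding what (and what metadata) to store.
def parser(name: str, data: bytes):
    """Application-supplied: return extracted metadata, or None to skip
    files that fail the parser's semantic requirements."""
    if not data.startswith(b"#OK"):
        return None
    return {"n_bytes": len(data), "first_line": data.splitlines()[0].decode()}

store = {}

def store_files(files: dict):
    for name, data in files.items():
        meta = parser(name, data)
        if meta is not None:                  # keep only accepted files
            store[name] = {"data": data, "meta": meta}  # metadata aids search

store_files({"a.dat": b"#OK run 1\n...", "b.dat": b"junk"})
print(list(store))  # -> ['a.dat']
```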

  1. Methods and apparatus for capture and storage of semantic information with sub-files in a parallel computing system

    DOEpatents

    Faibish, Sorin; Bent, John M; Tzelnic, Percy; Grider, Gary; Torres, Aaron

    2015-02-03

    Techniques are provided for storing files in a parallel computing system using sub-files with semantically meaningful boundaries. A method is provided for storing at least one file generated by a distributed application in a parallel computing system. The file comprises one or more of a complete file and a plurality of sub-files. The method comprises the steps of obtaining a user specification of semantic information related to the file; providing the semantic information as a data structure description to a data formatting library write function; and storing the semantic information related to the file with one or more of the sub-files in one or more storage nodes of the parallel computing system. The semantic information provides a description of data in the file. The sub-files can be replicated based on semantically meaningful boundaries.

  2. Efficient operating system level virtualization techniques for cloud resources

    NASA Astrophysics Data System (ADS)

    Ansu, R.; Samiksha; Anju, S.; Singh, K. John

    2017-11-01

    Cloud computing is an advancing technology which provides infrastructure, platform, and software as services. Virtualization and utility computing are the keys to cloud computing. The number of cloud users is increasing day by day, so it is the need of the hour to make resources available on demand to satisfy user requirements. Virtualization is the technique by which resources, namely storage, processing power, memory, and network or I/O, are abstracted. Various virtualization techniques are available for executing operating systems: Full System Virtualization and Para Virtualization. In Full Virtualization, the whole hardware architecture is duplicated virtually; no modifications are required in the guest OS, as the OS deals with the VM hypervisor directly. In Para Virtualization, the guest OS must be modified to run in parallel with other operating systems, and for the guest OS to access the hardware, the host OS must provide a Virtual Machine Interface. OS virtualization has many advantages, such as migrating applications transparently, server consolidation, online maintenance of the OS, and improved security. This paper briefly describes both virtualization techniques and discusses the issues in OS-level virtualization.

  3. Some thoughts on cartographic and geographic information systems for the 1980's

    USGS Publications Warehouse

    Starr, L.E.; Anderson, Kirk E.

    1981-01-01

    The U.S. Geological Survey is adopting computer techniques to meet the expanding need for cartographic base category data. Digital methods are becoming increasingly important in the mapmaking process, and the demand is growing for physical, social, and economic data. Recognizing these emerging needs, the National Mapping Division began, several years ago, an active program to develop advanced digital methods to support cartographic and geographic data processing. An integrated digital cartographic database would meet the anticipated needs. Such a database would contain data from various sources, and could provide a variety of standard and customized map and digital data file products. This cartographic database soon will be technologically feasible. The present trends in the economics of cartographic and geographic data handling and the growing needs for integrated physical, social, and economic data make such a database virtually mandatory.

  4. Storing files in a parallel computing system using list-based index to identify replica files

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faibish, Sorin; Bent, John M.; Tzelnic, Percy

    Improved techniques are provided for storing files in a parallel computing system using a list-based index to identify file replicas. A file and at least one replica of the file are stored in one or more storage nodes of the parallel computing system. An index for the file comprises at least one list comprising a pointer to a storage location of the file and a storage location of the at least one replica of the file. The file comprises one or more of a complete file and one or more sub-files. The index may also comprise a checksum value for one or more of the file and the replica(s) of the file. The checksum value can be evaluated to validate the file and/or the file replica(s). A query can be processed using the list.

  5. Web-based X-ray quality control documentation.

    PubMed

    David, George; Burnett, Lou Ann; Schenkel, Robert

    2003-01-01

    The department of radiology at the Medical College of Georgia Hospital and Clinics has developed an equipment quality control web site. Our goal is to provide immediate access to virtually all medical physics survey data. The web site is designed to assist equipment engineers, department management, and technologists. By improving communications and access to equipment documentation, we believe productivity is enhanced. The creation of the quality control web site was accomplished in three distinct steps. First, survey data had to be placed in a computer format. The second step was to convert these various computer files to a format supported by commercial web browsers. Third, a comprehensive home page had to be designed to provide convenient access to the multitude of surveys done in the various x-ray rooms. Because we had previously spent years fine-tuning the computerization of the medical physics quality control program, most survey documentation was already in spreadsheet or database format. A major technical decision was the method of converting survey spreadsheet and database files into documentation appropriate for the web. After an unsatisfactory experience with a HyperText Markup Language (HTML) converter (packaged with spreadsheet and database software), we tried creating Portable Document Format (PDF) files using Adobe Acrobat software. This process preserves the original formatting of the document and takes no longer than conventional printing; therefore, it has been very successful. Although the PDF file generated by Adobe Acrobat is a proprietary format, it can be displayed through a conventional web browser using the freely distributed Adobe Acrobat Reader program, which is available for virtually all platforms. Once a user installs the software, it is automatically invoked by the web browser whenever the user follows a link to a file with a PDF extension. Although no confidential patient information is available on the web site, our legal department recommended that we secure the site in order to keep out those wishing to make mischief. Our interim solution has been not to password-protect the page, since we feared a password would hinder access for occasional legitimate users, but simply not to provide links to it from other hospital and department pages. Utility and productivity were improved, and time and money were saved, by making radiological equipment quality control documentation instantly available on-line.

  6. Cloud object store for archive storage of high performance computing data using decoupling middleware

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2015-06-30

    Cloud object storage is enabled for archived data, such as checkpoints and results, of high performance computing applications using a middleware process. A plurality of archived files, such as checkpoint files and results, generated by a plurality of processes in a parallel computing system are stored by obtaining the plurality of archived files from the parallel computing system; converting the plurality of archived files to objects using a log structured file system middleware process; and providing the objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.

  7. Monitoring System for the GRID Monte Carlo Mass Production in the H1 Experiment at DESY

    NASA Astrophysics Data System (ADS)

    Bystritskaya, Elena; Fomenko, Alexander; Gogitidze, Nelly; Lobodzinski, Bogdan

    2014-06-01

    The H1 Virtual Organization (VO), as one of the small VOs, employs most components of the EMI or gLite Middleware. In this framework, a monitoring system has been designed for the H1 Experiment to identify and recognize within the GRID the most suitable resources for the execution of CPU-time-consuming Monte Carlo (MC) simulation tasks (jobs). Monitored resources are Computing Elements (CEs), Storage Elements (SEs), WMS servers (WMSs), the CernVM File System (CVMFS) available to the VO HONE, and local GRID User Interfaces (UIs). The general principle of monitoring GRID elements is based on the execution of short test jobs on different CE queues, using submission through various WMSs as well as directly to the CREAM-CEs. Real H1 MC production jobs with a small number of events are used to perform the tests. Test jobs are periodically submitted into GRID queues, the status of these jobs is checked, output files of completed jobs are retrieved, the result of each job is analyzed, and the waiting time and run time are derived. Using this information, the status of the GRID elements is estimated and the most suitable ones are included in the automatically generated configuration files for use in H1 MC production. The monitoring system allows problems at the GRID sites to be identified and reacted to promptly (for example, by sending GGUS (Global Grid User Support) trouble tickets). The system can easily be adapted to identify the optimal resources for tasks other than MC production, simply by changing to the relevant test jobs. The monitoring system is written mostly in Python and Perl, with a few shell scripts. In addition to the test monitoring system, we use information from real production jobs to monitor the availability and quality of the GRID resources. The monitoring tools register the number of job resubmissions and the percentage of failed and finished jobs relative to all jobs on the CEs, and determine the average waiting and running times for the involved GRID queues. CEs which do not meet the set criteria can be removed from the production chain by including them in an exception table. All of these monitoring actions lead to a more reliable and faster execution of MC requests.
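
    The structure of such a test-job monitor can be sketched as follows; submit() and status() are hypothetical placeholders for the site-specific WMS/CREAM-CE submission and status-query commands, which are not reproduced here:

```python
import time

# Hypothetical placeholders: submit() and status() stand in for the
# site-specific gLite/EMI submission and status-query commands.
def submit(ce_queue):
    """Submit a short test MC job to `ce_queue`; return a job identifier."""
    ...

def status(job_id):
    """Return the job state, e.g. 'Done', 'Running', or 'Aborted'."""
    ...

def probe(ce_queues, timeout=3600):
    """Probe every queue with a test job and keep only the healthy ones,
    which would then go into the auto-generated production config files."""
    jobs = {q: (submit(q), time.time()) for q in ce_queues}
    good = []
    for queue, (job_id, t0) in jobs.items():
        if status(job_id) == "Done" and time.time() - t0 < timeout:
            good.append(queue)
    return good
```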

  8. Storing files in a parallel computing system based on user or application specification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faibish, Sorin; Bent, John M.; Nick, Jeffrey M.

    2016-03-29

    Techniques are provided for storing files in a parallel computing system based on a user-specification. A plurality of files generated by a distributed application in a parallel computing system are stored by obtaining a specification from the distributed application indicating how the plurality of files should be stored; and storing one or more of the plurality of files in one or more storage nodes of a multi-tier storage system based on the specification. The plurality of files comprise a plurality of complete files and/or a plurality of sub-files. The specification can optionally be processed by a daemon executing on one or more nodes in a multi-tier storage system. The specification indicates how the plurality of files should be stored, for example, identifying one or more storage nodes where the plurality of files should be stored.

  9. System-Level Virtualization for High Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vallee, Geoffroy R; Naughton, III, Thomas J; Engelmann, Christian

    2008-01-01

    System-level virtualization has been a research topic since the 1970s but regained popularity during the past few years because of the availability of efficient solutions such as Xen and the implementation of hardware support in commodity processors (e.g., Intel-VT, AMD-V). However, the majority of system-level virtualization projects are guided by the server consolidation market. As a result, current virtualization solutions appear not to be suitable for high performance computing (HPC), which is typically based on large-scale systems. On the other hand, there is significant interest in exploiting virtual machines (VMs) within HPC for a number of other reasons. By virtualizing the machine, one is able to run a variety of operating systems and environments as needed by the applications. Virtualization allows users to isolate workloads, improving security and reliability. It is also possible to support non-native environments and/or legacy operating environments through virtualization. In addition, it is possible to balance workloads, use migration techniques to relocate applications from failing machines, and isolate faulty systems for repair. This document presents the challenges for the implementation of a system-level virtualization solution for HPC. It also presents a brief survey of the different approaches and techniques to address these challenges.

  10. Novel virtual reality system integrating online self-face viewing and mirror visual feedback for stroke rehabilitation: rationale and feasibility.

    PubMed

    Shiri, Shimon; Feintuch, Uri; Lorber-Haddad, Adi; Moreh, Elior; Twito, Dvora; Tuchner-Arieli, Maya; Meiner, Zeev

    2012-01-01

    To introduce the rationale of a novel virtual reality system based on self-face viewing and mirror visual feedback, and to examine its feasibility as a rehabilitation tool for poststroke patients. A novel motion capture virtual reality system integrating online self-face viewing and mirror visual feedback has been developed for stroke rehabilitation.The system allows the replacement of the impaired arm by a virtual arm. Upon making small movements of the paretic arm, patients view themselves virtually performing healthy full-range movements. A sample of 6 patients in the acute poststroke phase received the virtual reality treatment concomitantly with conservative rehabilitation treatment. Feasibility was assessed during 10 sessions for each participant. All participants succeeded in operating the system, demonstrating its feasibility in terms of adherence and improvement in task performance. Patients' performance within the virtual environment and a set of clinical-functional measures recorded before the virtual reality treatment, at 1 week, and after 3 months indicated neurological status and general functioning improvement. These preliminary results indicate that this newly developed virtual reality system is safe and feasible. Future randomized controlled studies are required to assess whether this system has beneficial effects in terms of enhancing upper limb function and quality of life in poststroke patients.

  11. The computerized OMAHA system in microsoft office excel.

    PubMed

    Lai, Xiaobin; Wong, Frances K Y; Zhang, Peiqiang; Leung, Carenx W Y; Lee, Lai H; Wong, Jessica S Y; Lo, Yim F; Ching, Shirley S Y

    2014-01-01

    The OMAHA System was adopted as the documentation system in an interventional study. To systematically record client care and facilitate data analysis, two Office Excel files were developed. The first Excel file (File A) was designed to record problems, care procedures, and outcomes for individual clients according to the OMAHA System. It was used by the intervention nurses in the study. The second Excel file (File B) was a summary of all clients, automatically extracted from File A. Data in File B can be analyzed directly in Excel or imported into PASW for further analysis. Both files have four parts, recording basic information and the three parts of the OMAHA System. The computerized OMAHA System simplified the documentation procedure and facilitated the management and analysis of data.
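
    The File A to File B extraction could, for example, be automated outside Excel as sketched below; the folder, sheet, and column layout are assumptions for illustration, not the authors' actual templates:

```python
from pathlib import Path
from openpyxl import Workbook, load_workbook

# Assumed layout: one per-client workbook ("File A") per client in clients/,
# each with a "Problems" sheet holding (problem, outcome_score) rows.
summary = Workbook()
out = summary.active
out.append(["client_id", "problem", "outcome_score"])

for path in Path("clients").glob("*.xlsx"):
    wb = load_workbook(path, read_only=True)
    ws = wb["Problems"]                           # assumed sheet name
    for row in ws.iter_rows(min_row=2, values_only=True):
        out.append([path.stem, row[0], row[1]])   # extract into "File B"

summary.save("file_b_summary.xlsx")  # analyze in Excel or import into PASW
```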

  12. Interactive Visualization of Near Real-Time and Production Global Precipitation Mission Data Online Using CesiumJS

    NASA Astrophysics Data System (ADS)

    Lammers, M.

    2016-12-01

    Advancements in the capabilities of JavaScript frameworks and web browsing technology make online visualization of large geospatial datasets viable. Commonly this is done using static image overlays, pre-rendered animations, or cumbersome geoservers. These methods can limit interactivity and/or place a large burden on server-side post-processing and storage of data. Geospatial data, and satellite data specifically, benefit from being visualized both on and above a three-dimensional surface. The open-source JavaScript framework CesiumJS, developed by Analytical Graphics, Inc., leverages the WebGL protocol to do just that. It has entered the void left by the abandonment of the Google Earth Web API, and it serves as a capable and well-maintained platform upon which data can be displayed. This paper will describe the technology behind the two primary products developed as part of the NASA Precipitation Processing System STORM website: GPM Near Real Time Viewer (GPMNRTView) and STORM Virtual Globe (STORM VG). GPMNRTView reads small post-processed CZML files derived from various Level 1 through 3 near real-time products. For swath-based products, several brightness temperature channels or precipitation-related variables are available for animating in virtual real-time as the satellite observed them on and above the Earth's surface. With grid-based products, only precipitation rates are available, but the grid points are visualized in such a way that they can be interactively examined to explore raw values. STORM VG reads values directly off the HDF5 files, converting the information into JSON on the fly. All data points both on and above the surface can be examined here as well. Both the raw values and, if relevant, elevations are displayed. Surface and above-ground precipitation rates from select Level 2 and 3 products are shown. Examples from both products will be shown, including visuals from high impact events observed by GPM constellation satellites.
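
    CZML itself is plain JSON, so the post-processing step that produces the small files GPMNRTView reads can be sketched in a few lines of Python; the packet below (a document header plus one time-tagged point) uses invented values, not real GPM data:

```python
import json

# Minimal CZML: a document packet followed by one time-tagged moving point.
czml = [
    {"id": "document", "name": "demo", "version": "1.0"},
    {
        "id": "precip-sample",
        "position": {
            "epoch": "2016-01-01T00:00:00Z",
            # time (s), longitude, latitude, height (m), repeated per sample
            "cartographicDegrees": [0, -80.0, 25.0, 0, 60, -79.5, 25.4, 0],
        },
        "point": {"pixelSize": 8},
    },
]
print(json.dumps(czml, indent=2))
```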

  13. Interactive Visualization of Near Real Time and Production Global Precipitation Measurement (GPM) Mission Data Online Using CesiumJS

    NASA Technical Reports Server (NTRS)

    Lammers, Matthew

    2016-01-01

    Advancements in the capabilities of JavaScript frameworks and web browsing technology make online visualization of large geospatial datasets viable. Commonly this is done using static image overlays, pre-rendered animations, or cumbersome geoservers. These methods can limit interactivity and/or place a large burden on server-side post-processing and storage of data. Geospatial data, and satellite data specifically, benefit from being visualized both on and above a three-dimensional surface. The open-source JavaScript framework CesiumJS, developed by Analytical Graphics, Inc., leverages the WebGL protocol to do just that. It has entered the void left by the abandonment of the Google Earth Web API, and it serves as a capable and well-maintained platform upon which data can be displayed. This paper will describe the technology behind the two primary products developed as part of the NASA Precipitation Processing System STORM website: GPM Near Real Time Viewer (GPMNRTView) and STORM Virtual Globe (STORM VG). GPMNRTView reads small post-processed CZML files derived from various Level 1 through 3 near real-time products. For swath-based products, several brightness temperature channels or precipitation-related variables are available for animating in virtual real-time as the satellite observed them on and above the Earth's surface. With grid-based products, only precipitation rates are available, but the grid points are visualized in such a way that they can be interactively examined to explore raw values. STORM VG reads values directly off the HDF5 files, converting the information into JSON on the fly. All data points both on and above the surface can be examined here as well. Both the raw values and, if relevant, elevations are displayed. Surface and above-ground precipitation rates from select Level 2 and 3 products are shown. Examples from both products will be shown, including visuals from high impact events observed by GPM constellation satellites.

  14. Incorporating Speech Recognition into a Natural User Interface

    NASA Technical Reports Server (NTRS)

    Chapa, Nicholas

    2017-01-01

    The Augmented/Virtual Reality (AVR) Lab has been working to study the applicability of recent virtual and augmented reality hardware and software to KSC operations. This includes the Oculus Rift, HTC Vive, Microsoft HoloLens, and the Unity game engine. My project in this lab is to integrate voice recognition and voice commands into an easy-to-modify system that can be added to an existing portion of a Natural User Interface (NUI). A NUI is an intuitive and simple-to-use interface incorporating visual, touch, and speech recognition. The inclusion of speech recognition capability will allow users to perform actions or make inquiries using only their voice. The simplicity of needing only to speak to control an on-screen object or enact some digital action means that any user can quickly become accustomed to using this system. Multiple programs were tested for use in a speech command and recognition system. Sphinx4 translates speech to text using a Hidden Markov Model (HMM) based language model, an acoustic model, and a word dictionary, and runs on Java. PocketSphinx has similar functionality to Sphinx4 but is written in C. However, neither of these programs was ideal, as building a Java or C wrapper slowed performance. The most suitable speech recognition system tested was the Unity Engine Grammar Recognizer. A Context-Free Grammar (CFG) structure is written in an XML file to specify the structure of phrases and words that will be recognized by the Unity Grammar Recognizer. Using Speech Recognition Grammar Specification (SRGS) 1.0 makes modifying the recognized combinations of words and phrases very simple and quick. With SRGS 1.0, semantic information can also be added to the XML file, which allows for even more control over how spoken words and phrases are interpreted by Unity. Additionally, using a CFG with SRGS 1.0 produces Finite State Machine (FSM) functionality, limiting the potential for incorrectly heard words or phrases. The purpose of my project was to investigate options for a speech recognition system. To that end, I attempted to integrate Sphinx4 into a user interface. Sphinx4 had great accuracy and is the only free program able to perform offline speech dictation. However, it had a limited dictionary of words that could be recognized, single-syllable words were almost impossible for it to hear, and since it ran on Java it could not be integrated into the Unity-based NUI. PocketSphinx ran much faster than Sphinx4, which would have made it ideal as a plugin to the Unity NUI; unfortunately, creating a C# wrapper for the C code made the program unusable with Unity, because the wrapper slowed code execution and class files became unreachable. The Unity Grammar Recognizer is the ideal speech recognition interface: it is flexible in recognizing multiple variations of the same command, and it is the most accurate program in recognizing speech because it uses an XML grammar to specify speech structure instead of relying solely on a dictionary and language model. The Unity Grammar Recognizer will be used with the NUI for these reasons, as well as because it is written in C#, which further simplifies its incorporation.

  15. Atlasmaker: A Grid-based Implementation of the Hyperatlas

    NASA Astrophysics Data System (ADS)

    Williams, R.; Djorgovski, S. G.; Feldmann, M. T.; Jacob, J.

    2004-07-01

    The Atlasmaker project is using Grid technology, in combination with NVO interoperability, to create new knowledge resources in astronomy. The product is a multi-faceted, multi-dimensional, scientifically trusted image atlas of the sky, made by federating many different surveys at different wavelengths, times, resolutions, polarizations, etc. The Atlasmaker software does resampling and mosaicking of image collections, and is well-suited to operate with the Hyperatlas standard. Requests can be satisfied via on-demand computations or by accessing a data cache. Computed data is stored in a distributed virtual file system, such as the Storage Resource Broker (SRB). We expect these atlases to be a new and powerful paradigm for knowledge extraction in astronomy, as well as a magnificent way to build educational resources. The system is being incorporated into the data analysis pipeline of the Palomar-Quest synoptic survey, and is being used to generate all-sky atlases from the 2MASS, SDSS, and DPOSS surveys for joint object detection.

  16. The effects of parameter variation on MSET models of the Crystal River-3 feedwater flow system.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miron, A.

    1998-04-01

    In this paper we develop further the results reported in Reference 1 to include a systematic study of the effects of varying MSET models and model parameters for the Crystal River-3 (CR) feedwater flow system. The study used archived CR process computer files from November 1-December 15, 1993 that were provided by Florida Power Corporation engineers Fairman Bockhorst and Brook Julias. The results support the conclusion that an optimal MSET model, properly trained and deriving its inputs in real-time from no more than 25 of the sensor signals normally provided to a PWR plant process computer, should be able to reliably detect anomalous variations in the feedwater flow venturis of less than 0.1%, and in the absence of a venturi sensor signal should be able to generate a virtual signal that will be within 0.1% of the correct value of the missing signal.

  17. Computer-aided design of tooth preparations for automated development of fixed prosthodontics.

    PubMed

    Yuan, Fusong; Sun, Yuchun; Wang, Yong; Lv, Peijun

    2014-01-01

    This paper introduces a method to digitally design a virtual model of a tooth preparation of the mandibular first molar, using the commercial three-dimensional (3D) computer-aided design software packages Geomagic and Imageware, and to use the model as input to an automatic tooth preparation system. The procedure included acquisition of 3D data from dentate casts and digital modeling of the shape of the tooth preparation components, such as the margin, occlusal surface, and axial surface. The completed model data were stored as stereolithography (STL) files, which were used in the tooth preparation system to help plan the trajectory. The mathematical models required in the design process are also introduced. The method was used to make an individualized tooth preparation of the mandibular first molar; the entire process took 15 min. Using the method presented, a straightforward 3D shape of a full crown can be obtained to meet clinical needs prior to tooth preparation. © 2013 Published by Elsevier Ltd.

  18. Archive Management of NASA Earth Observation Data to Support Cloud Analysis

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Baynes, Kathleen; McInerney, Mark

    2017-01-01

    NASA collects, processes and distributes petabytes of Earth Observation (EO) data from satellites, aircraft, in situ instruments and model output, with an order of magnitude increase expected by 2024. Cloud-based web object storage (WOS) of these data can simplify the execution of such an increase. More importantly, it can also facilitate user analysis of those volumes by making the data available to the massively parallel computing power in the cloud. However, storing EO data in cloud WOS has a ripple effect throughout the NASA archive system with unexpected challenges and opportunities. One challenge is modifying data servicing software (such as Web Coverage Service servers) to access and subset data that are no longer on a directly accessible file system, but rather in cloud WOS. Opportunities include refactoring of the archive software to a cloud-native architecture; virtualizing data products by computing on demand; and reorganizing data to be more analysis-friendly. Reviewed by Mark McInerney ESDIS Deputy Project Manager.

  19. Using ProHits to store, annotate and analyze affinity purification - mass spectrometry (AP-MS) data

    PubMed Central

    Liu, Guomin; Zhang, Jianping; Choi, Hyungwon; Lambert, Jean-Philippe; Srikumar, Tharan; Larsen, Brett; Nesvizhskii, Alexey I.; Raught, Brian; Tyers, Mike; Gingras, Anne-Claude

    2012-01-01

    Affinity purification coupled with mass spectrometry (AP-MS) is a robust technique used to identify protein-protein interactions. With recent improvements in sample preparation, and dramatic advances in MS instrumentation speed and sensitivity, this technique is becoming more widely used throughout the scientific community. To meet the needs of research groups both large and small, we have developed software solutions for tracking, scoring and analyzing AP-MS data. Here, we provide details for the installation and utilization of ProHits, a Laboratory Information Management System designed specifically for AP-MS interaction proteomics. This protocol explains: (i) how to install the complete ProHits system, including modules for the management of mass spectrometry files and the analysis of interaction data, and (ii) alternative options for the use of pre-existing search results in simpler versions of ProHits, including a virtual machine implementation of our ProHits Lite software. We also describe how to use the main features of the software to analyze AP-MS data. PMID:22948730

  20. Data Mining as a Service (DMaaS)

    NASA Astrophysics Data System (ADS)

    Tejedor, E.; Piparo, D.; Mascetti, L.; Moscicki, J.; Lamanna, M.; Mato, P.

    2016-10-01

    Data Mining as a Service (DMaaS) is a software and computing infrastructure that allows interactive mining of scientific data in the cloud. It allows users to run advanced data analyses by leveraging the widely adopted Jupyter notebook interface. Furthermore, the system makes it easier to share results and scientific code, access scientific software, produce tutorials and demonstrations as well as preserve the analyses of scientists. This paper describes how a first pilot of the DMaaS service is being deployed at CERN, starting from the notebook interface that has been fully integrated with the ROOT analysis framework, in order to provide all the tools for scientists to run their analyses. Additionally, we characterise the service backend, which combines a set of IT services such as user authentication, virtual computing infrastructure, mass storage, file synchronisation, development portals or batch systems. The added value acquired by the combination of the aforementioned categories of services is discussed, focusing on the opportunities offered by the CERNBox synchronisation service and its massive storage backend, EOS.

  1. VERAView

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Ronald W.; Collins, Benjamin S.; Godfrey, Andrew T.

    2016-12-09

    In order to support engineering analysis of Virtual Environment for Reactor Analysis (VERA) model results, the Consortium for Advanced Simulation of Light Water Reactors (CASL) needs a tool that provides visualizations of HDF5 files that adhere to the VERAOUT specification. VERAView provides an interactive graphical interface for the visualization and engineering analyses of output data from VERA. The Python-based software provides instantaneous 2D and 3D images, 1D plots, and alphanumeric data from VERA multi-physics simulations.
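
    Reading such a file with h5py might look like the sketch below; the group and dataset names are placeholders, since the actual layout is defined by the VERAOUT specification:

```python
import h5py

# Group/dataset names below are placeholders; the actual layout is defined
# by the VERAOUT specification that VERAView expects.
with h5py.File("vera_output.h5", "r") as f:
    pin_powers = f["STATE_0001/pin_powers"][...]   # hypothetical dataset path
    print(pin_powers.shape, pin_powers.dtype)
```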

  2. Automated Camouflage Pattern Generation Technology Survey.

    DTIC Science & Technology

    1985-08-07

    supported by high speed data communications? Costs: What are your rates? $/CPU hour: $/MB disk storage/day: $/connect hour: other charges: What are your... data to the workstation, tape drives are needed for backing up and archiving completed patterns, 256 megabytes of on-line hard disk space as a minimum... is needed to support multiple processes and data files, and 4 megabytes of actual or virtual memory is needed to process the largest expected single

  3. Pentagon 9/11

    DTIC Science & Technology

    2007-09-01

    before McNair, Wills, Stevens, and Maxfield arrived. Finding no way to open one of the renovated windows, Petrovich had picked up a nearby laser printer... force. Most of the columns in the collapsed zone had been destroyed and some of the remaining ones were "stripped and bowed, retaining little structural..." Desks and filing cabinets were reduced to scrap metal. Asbestos lay exposed and lead paint peeled off walls. A layer of black soot covered virtually

  4. Enhanced Virtual Presence for Immersive Visualization of Complex Situations for Mission Rehearsal

    DTIC Science & Technology

    1997-06-01

    taken. We propose to join both these technologies together in a registration device. The registration device would be small and portable and easily... registering the panning of the camera (or other sensing device) and also stitch together the shots to automatically generate the panoramic files necessary to... database and as the base information changes each of the linked drawings is automatically updated. Filename format: a specific naming convention should be

  5. Improving the interactivity and functionality of Web-based radiology teaching files with the Java programming language.

    PubMed

    Eng, J

    1997-01-01

    Java is a programming language that runs on a "virtual machine" built into World Wide Web (WWW)-browsing programs on multiple hardware platforms. Web pages were developed with Java to enable Web-browsing programs to overlay transparent graphics and text on displayed images so that the user could control the display of labels and annotations on the images, a key feature not available with standard Web pages. This feature was extended to include the presentation of normal radiologic anatomy. Java programming was also used to make Web browsers compatible with the Digital Imaging and Communications in Medicine (DICOM) file format. By enhancing the functionality of Web pages, Java technology should provide greater incentive for using a Web-based approach in the development of radiology teaching material.

  6. SutraPrep, a pre-processor for SUTRA, a model for ground-water flow with solute or energy transport

    USGS Publications Warehouse

    Provost, Alden M.

    2002-01-01

    SutraPrep facilitates the creation of three-dimensional (3D) input datasets for the USGS ground-water flow and transport model SUTRA Version 2D3D.1. It is most useful for applications in which the geometry of the 3D model domain and the spatial distribution of physical properties and boundary conditions is relatively simple. SutraPrep can be used to create a SUTRA main input (".inp") file, an initial conditions (".ics") file, and a 3D plot of the finite-element mesh in Virtual Reality Modeling Language (VRML) format. Input and output are text-based. The code can be run on any platform that has a standard FORTRAN-90 compiler. Executable code is available for Microsoft Windows.

  7. Permanent-File-Validation Utility Computer Program

    NASA Technical Reports Server (NTRS)

    Derry, Stephen D.

    1988-01-01

    Errors in files detected and corrected during operation. Permanent File Validation (PFVAL) utility computer program provides CDC CYBER NOS sites with mechanism to verify integrity of permanent file base. Locates and identifies permanent file errors in Mass Storage Table (MST) and Track Reservation Table (TRT), in permanent file catalog entries (PFC's) in permit sectors, and in disk sector linkage. All detected errors written to listing file and system and job day files. Program operates by reading system tables, catalog track, permit sectors, and disk linkage bytes to validate expected and actual file linkages. Used extensively to identify and locate errors in permanent files and enable online correction, reducing computer-system downtime.

  8. Air Traffic Complexity Measurement Environment (ACME): Software User's Guide

    NASA Technical Reports Server (NTRS)

    1996-01-01

    A user's guide for the Air Traffic Complexity Measurement Environment (ACME) software is presented. The ACME consists of two major components, a complexity analysis tool and user interface. The Complexity Analysis Tool (CAT) analyzes complexity off-line, producing data files which may be examined interactively via the Complexity Data Analysis Tool (CDAT). The Complexity Analysis Tool is composed of three independently executing processes that communicate via PVM (Parallel Virtual Machine) and Unix sockets. The Runtime Data Management and Control process (RUNDMC) extracts flight plan and track information from a SAR input file, and sends the information to GARP (Generate Aircraft Routes Process) and CAT (Complexity Analysis Task). GARP in turn generates aircraft trajectories, which are utilized by CAT to calculate sector complexity. CAT writes flight plan, track and complexity data to an output file, which can be examined interactively. The Complexity Data Analysis Tool (CDAT) provides an interactive graphic environment for examining the complexity data produced by the Complexity Analysis Tool (CAT). CDAT can also play back track data extracted from System Analysis Recording (SAR) tapes. The CDAT user interface consists of a primary window, a controls window, and miscellaneous pop-ups. Aircraft track and position data is displayed in the main viewing area of the primary window. The controls window contains miscellaneous control and display items. Complexity data is displayed in pop-up windows. CDAT plays back sector complexity and aircraft track and position data as a function of time. Controls are provided to start and stop playback, adjust the playback rate, and reposition the display to a specified time.

  9. M4AST - A Tool for Asteroid Modelling

    NASA Astrophysics Data System (ADS)

    Birlan, Mirel; Popescu, Marcel; Irimiea, Lucian; Binzel, Richard

    2016-10-01

    M4AST (Modelling for Asteroids) is an online tool devoted to the analysis and interpretation of reflection spectra of asteroids in the visible and near-infrared spectral intervals. It consists of a spectral database of individual objects and a set of analysis routines which address scientific aspects such as taxonomy, curve matching with laboratory spectra, space weathering models, and mineralogical diagnosis. Spectral data were obtained using ground-based facilities; part of these data are compiled from the literature [1]. The database is composed of permanent and temporary files. Each permanent file contains a header and two or three columns (wavelength, spectral reflectance, and the error on the spectral reflectance). Temporary files can be uploaded anonymously and are purged in order to protect the ownership of the submitted data. The computing routines are organized to accomplish several scientific objectives: visualize spectra, compute the asteroid taxonomic class, compare an asteroid spectrum with similar spectra of meteorites, and compute mineralogical parameters. A facility for using Virtual Observatory protocols was also developed. A new version of the service was released in June 2016. This release of M4AST contains a database and facilities to model more than 6,000 asteroid spectra, and a new web interface was designed. This development brings new functionalities into a user-friendly environment. A bridge system for accessing and exploiting the SMASS-MIT database (http://smass.mit.edu) allows the treatment and analysis of these data in the framework of the M4AST environment. Reference: [1] M. Popescu, M. Birlan, and D.A. Nedelcu, "Modeling of asteroids: M4AST," Astronomy & Astrophysics 544, A130, 2012.

  10. Novel interactive virtual showcase based on 3D multitouch technology

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Liu, Yue; Lu, You; Wang, Yongtian

    2009-11-01

    A new interactive virtual showcase is proposed in this paper. With the help of virtual reality technology, the user of the proposed system can watch the virtual objects floating in the air from all four sides and interact with the virtual objects by touching the four surfaces of the virtual showcase. Unlike a traditional multitouch system, this system can not only realize multi-touch on a plane to implement 2D translation, 2D scaling, and 2D rotation of objects; it can also realize 3D interaction with the virtual objects by recognizing and analyzing multi-touch input captured simultaneously from the four planes. Experimental results show the potential of the proposed system to be applied in the exhibition of historical relics and other precious goods.

  11. Service architecture challenges in building the KNMI Data Centre

    NASA Astrophysics Data System (ADS)

    Som de Cerff, Wim; van de Vegte, John; Plieger, Maarten; de Vreede, Ernst; Sluiter, Raymond; Willem Noteboom, Jan; van der Neut, Ian; Verhoef, Hans; van Versendaal, Robert; van Binnendijk, Martin; Kalle, Henk; Knopper, Arthur; Calis, Gijs; Ha, Siu Siu; van Moosel, WIm; Klein Ikkink, Henk-Jan; Tosun, Tuncay

    2013-04-01

    One of the objectives of KNMI is to act as a national data centre for weather, climate and seismological data. KNMI has curated data for many years; however, important scientific data are not well accessible, and new technologies are available to improve the current infrastructure. A data curation programme was therefore initiated with two main goals: to set up a Satellite Data Platform (SDP) and a KNMI Data Centre (KDC). Besides curation, KDC provides data access and a storage and retrieval portal for KNMI data. The first requirements were gathered in 2010, the main architecture was sketched in 2011, and KDC was implemented in 2012; it is available at http://data.knmi.nl KDC was built with the data providers involved, with one key challenge: 'adding a dataset should be as simple as creating an HTML page'. This is enabled by a three-step process, in which the data provider is responsible for the first two steps: 1. Provide dataset metadata: an easy-to-use web interface for entering metadata, with automated validation. Metadata consist of an ISO 19115 profile (matching INSPIRE and WMO requirements) plus additional technical metadata describing the data structure and access rights. The interface hides certain metadata fields, which are filled in automatically by KDC. 2. Provide data: once the metadata have been entered, an upload location for the dataset is provided; scripts for pushing large datasets are also available. 3. Process and publish: once files are uploaded, they are processed for metadata (e.g., geolocation, time, version) and made available in KDC. The data are archived and made available through the in-house developed Virtual File System, which provides a persistent virtual path to the data. For the end user, KDC provides a web interface with search filters on keywords, geolocation and time. Data can be downloaded over HTTP or FTP, and downloads can be scripted. Users can register to gain access to restricted datasets. The architecture combines open-source components (e.g. GeoNetwork, Magnolia, MongoDB, MySQL) with in-house software (ADAGUC, NADC) and newly developed software. Challenges faced and solved include: how to deal with the different file formats used at KNMI (e.g. NetCDF, GRIB, BUFR, ASCII); how to deal with the different metadata profiles while hiding their complexity from the user; how to incorporate the existing archives; and how to make KDC a node in several networks (WMO WIS, INSPIRE, Open Data). In the presentation/poster we describe what has been done for each of these challenges and how it is implemented in KDC.
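
    The persistent virtual path idea can be pictured in a few lines: the user-facing path never changes, while a catalogue (a plain dict below, standing in for a database such as MongoDB) maps it to the current physical location. The paths and dataset names in this sketch are hypothetical, not KNMI's actual layout.

```python
# Catalogue mapping persistent virtual paths to physical archive locations.
# Entries can be rewritten when data moves, without breaking user links.
catalogue = {
    "/KNMI/radar_reflectivity/1.0/2012/06/01/scan.h5":
        "/archive/vol7/radar/2012/06/01/scan.h5",
}

def resolve(virtual_path: str) -> str:
    """Resolve a persistent virtual path to its current physical location."""
    try:
        return catalogue[virtual_path]
    except KeyError:
        raise FileNotFoundError(virtual_path)

print(resolve("/KNMI/radar_reflectivity/1.0/2012/06/01/scan.h5"))
```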

  12. 76 FR 66695 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-27

    .... DWHS P04 System name: Reduction-In-Force Case Files (February 11, 2011, 76 FR 7825). Changes....'' * * * * * DWHS P04 System name: Reduction-In-Force Case Files. System location: Human Resources Directorate... system: Storage: Paper file folders. Retrievability: Filed alphabetically by last name. Safeguards...

  13. Automatic River Network Extraction from LIDAR Data

    NASA Astrophysics Data System (ADS)

    Maderal, E. N.; Valcarcel, N.; Delgado, J.; Sevilla, C.; Ojeda, J. C.

    2016-06-01

    The National Geographic Institute of Spain (IGN-ES) has launched a new production system for automatic river network extraction for the Geospatial Reference Information (GRI) hydrography theme. The goal is to obtain an accurate and updated river network, extracted as automatically as possible. For this purpose, IGN-ES has full LiDAR coverage of the whole Spanish territory with a density of 0.5 points per square meter. To implement this work, the technical feasibility was validated and a methodology was developed to automate each production phase: generation of hydrological terrain models with a 2 meter grid size, and river network extraction combining hydrographic criteria (topographic network) and hydrological criteria (flow accumulation river network); finally, production was launched. The key points of this work have been managing a big data environment (more than 160,000 LiDAR data files, with up to 40 TB of storage between results and intermediate files), the processing infrastructure, which used local virtualization and Amazon Web Services (AWS) and allowed this automatic production to be completed within 6 months, the stability of the software (TerraScan-TerraSolid, GlobalMapper-Blue Marble, FME-Safe, ArcGIS-Esri) and, finally, the management of human resources. The result of this production has been an accurate automatic river network extraction for the whole country, with a significant improvement in the altimetric component of the 3D linear vector. This article presents the technical feasibility, the production methodology, the automatic river network extraction production and its advantages over traditional vector extraction systems.

  14. Z-depth integration: a new technique for manipulating z-depth properties in composited scenes

    NASA Astrophysics Data System (ADS)

    Steckel, Kayla; Whittinghill, David

    2014-02-01

    This paper presents a new technique in the production pipeline of asset creation for virtual environments called Z-Depth Integration (ZeDI). ZeDI is intended to reduce the time required to place elements at the appropriate z-depth within a scene. Though ZeDI is intended for use primarily in two-dimensional scene composition, depth-dependent "flat" animated objects are often critical elements of augmented and virtual reality applications (AR/VR). ZeDI is derived from "deep image compositing", a capacity implemented within the OpenEXR file format. In order to trick the human eye into perceiving overlapping scene elements as being in front of or behind one another, the developer must manually manipulate which pixels of an element are visible in relation to other objects embedded within the environment's image sequence. ZeDI improves on this process by providing a means for interacting with procedurally extracted z-depth data from a virtual environment scene. By streamlining the process of defining objects' depth characteristics, it is expected that the time and energy required for developers to create compelling AR/VR scenes will be reduced. In the proof of concept presented in this manuscript, ZeDI is implemented for pre-rendered virtual scene construction via an AfterEffects software plug-in.

  15. Personal File Management for the Health Sciences.

    ERIC Educational Resources Information Center

    Apostle, Lynne

    Written as an introduction to the concepts of creating a personal or reprint file, this workbook discusses both manual and computerized systems, with emphasis on the preliminary groundwork that needs to be done before starting any filing system. A file assessment worksheet is provided; considerations in developing a personal filing system are…

  16. Study on virtual instrument developing system based on intelligent virtual control

    NASA Astrophysics Data System (ADS)

    Tang, Baoping; Cheng, Fabin; Qin, Shuren

    2005-01-01

    The paper introduces a non-programming developing system for virtual instruments (VIs), i.e., a virtual measurement instrument developing system (VMIDS) based on intelligent virtual control (IVC). The background of the IVC-based VMIDS is described briefly, and the hierarchical message bus (HMB)-based software architecture of VMIDS is discussed in detail. The three parts of VMIDS and their functions are introduced, and the process of developing a VI without programming is described.

  17. Next Generation Landsat Products Delivered Using Virtual Globes and OGC Standard Services

    NASA Astrophysics Data System (ADS)

    Neiers, M.; Dwyer, J.; Neiers, S.

    2008-12-01

    The Landsat Data Continuity Mission (LDCM) is the next in the series of Landsat satellite missions and is tasked with the objective of delivering data acquired by the Operational Land Imager (OLI). The OLI instrument will provide data continuity to over 30 years of global multispectral data collected by the Landsat series of satellites. The U.S. Geological Survey Earth Resources Observation and Science (USGS EROS) Center has responsibility for the development and operation of the LDCM ground system. One of the mission objectives of the LDCM is to distribute OLI data products electronically over the Internet to the general public on a nondiscriminatory basis and at no cost. To ensure the user community and general public can easily access LDCM data from multiple clients, the User Portal Element (UPE) of the LDCM ground system will use OGC standards and services such as Keyhole Markup Language (KML), Web Map Service (WMS), Web Coverage Service (WCS), and Geographic encoding of Really Simple Syndication (GeoRSS) feeds for both access to and delivery of LDCM products. The USGS has developed and tested the capabilities of several successful UPE prototypes for delivery of Landsat metadata, full resolution browse, and orthorectified (L1T) products from clients such as Google Earth, Google Maps, ESRI ArcGIS Explorer, and Microsoft's Virtual Earth. Prototyping efforts included the following services: using virtual globes to search the historical Landsat archive by dynamic generation of KML; notification of and access to new Landsat acquisitions and L1T downloads from GeoRSS feeds; Google indexing of KML files containing links to full resolution browse and data downloads; WMS delivery of reduced resolution browse, full resolution browse, and cloud mask overlays; and custom data downloads using WCS clients. These various prototypes will be demonstrated and LDCM service implementation plans will be discussed during this session.

  18. 47 CFR 1.10008 - What are IBFS file numbers?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Random Selection International Bureau Filing System § 1.10008 What are IBFS file numbers? (a) We assign...) For a description of file number information, see The International Bureau Filing System File Number... 47 Telecommunication 1 2013-10-01 2013-10-01 false What are IBFS file numbers? 1.10008 Section 1...

  19. 47 CFR 1.10008 - What are IBFS file numbers?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Bureau Filing System § 1.10008 What are IBFS file numbers? (a) We assign file numbers to electronic... information, see The International Bureau Filing System File Number Format Public Notice, DA-04-568 (released... 47 Telecommunication 1 2010-10-01 2010-10-01 false What are IBFS file numbers? 1.10008 Section 1...

  20. 47 CFR 1.10008 - What are IBFS file numbers?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Random Selection International Bureau Filing System § 1.10008 What are IBFS file numbers? (a) We assign...) For a description of file number information, see The International Bureau Filing System File Number... 47 Telecommunication 1 2012-10-01 2012-10-01 false What are IBFS file numbers? 1.10008 Section 1...

  1. 47 CFR 1.10008 - What are IBFS file numbers?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Bureau Filing System § 1.10008 What are IBFS file numbers? (a) We assign file numbers to electronic... information, see The International Bureau Filing System File Number Format Public Notice, DA-04-568 (released... 47 Telecommunication 1 2011-10-01 2011-10-01 false What are IBFS file numbers? 1.10008 Section 1...

  2. 47 CFR 1.10008 - What are IBFS file numbers?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Random Selection International Bureau Filing System § 1.10008 What are IBFS file numbers? (a) We assign...) For a description of file number information, see The International Bureau Filing System File Number... 47 Telecommunication 1 2014-10-01 2014-10-01 false What are IBFS file numbers? 1.10008 Section 1...

  3. Methods and systems relating to an augmented virtuality environment

    DOEpatents

    Nielsen, Curtis W; Anderson, Matthew O; McKay, Mark D; Wadsworth, Derek C; Boyce, Jodie R; Hruska, Ryan C; Koudelka, John A; Whetten, Jonathan; Bruemmer, David J

    2014-05-20

    Systems and methods relating to an augmented virtuality system are disclosed. A method of operating an augmented virtuality system may comprise displaying imagery of a real-world environment in an operating picture. The method may further include displaying a plurality of virtual icons in the operating picture representing at least some assets of a plurality of assets positioned in the real-world environment. Additionally, the method may include displaying at least one virtual item in the operating picture representing data sensed by one or more of the assets of the plurality of assets and remotely controlling at least one asset of the plurality of assets by interacting with a virtual icon associated with the at least one asset.

  4. Cloud-based opportunities in scientific computing: insights from processing Suomi National Polar-Orbiting Partnership (S-NPP) Direct Broadcast data

    NASA Astrophysics Data System (ADS)

    Evans, J. D.; Hao, W.; Chettri, S.

    2013-12-01

    The cloud is proving to be a uniquely promising platform for scientific computing. Our experience with processing satellite data using Amazon Web Services highlights several opportunities for enhanced performance, flexibility, and cost effectiveness in the cloud relative to traditional computing -- for example: - Direct readout from a polar-orbiting satellite such as the Suomi National Polar-Orbiting Partnership (S-NPP) requires bursts of processing a few times a day, separated by quiet periods when the satellite is out of receiving range. In the cloud, by starting and stopping virtual machines in minutes, we can marshal significant computing resources quickly when needed, but not pay for them when not needed. To take advantage of this capability, we are automating a data-driven approach to the management of cloud computing resources, in which new data availability triggers the creation of new virtual machines (of variable size and processing power) which last only until the processing workflow is complete. - 'Spot instances' are virtual machines that run as long as one's asking price is higher than the provider's variable spot price. Spot instances can greatly reduce the cost of computing -- for software systems that are engineered to withstand unpredictable interruptions in service (as occurs when a spot price exceeds the asking price). We are implementing an approach to workflow management that allows data processing workflows to resume with minimal delays after temporary spot price spikes. This will allow systems to take full advantage of variably-priced 'utility computing.' - Thanks to virtual machine images, we can easily launch multiple, identical machines differentiated only by 'user data' containing individualized instructions (e.g., to fetch particular datasets or to perform certain workflows or algorithms). This is particularly useful when (as is the case with S-NPP data) we need to launch many very similar machines to process an unpredictable number of data files concurrently. Our experience shows the viability and flexibility of this approach to workflow management for scientific data processing. - Finally, cloud computing is a promising platform for distributed volunteer ('interstitial') computing, via mechanisms such as the Berkeley Open Infrastructure for Network Computing (BOINC) popularized with the SETI@Home project and others such as ClimatePrediction.net and NASA's Climate@Home. Interstitial computing faces significant challenges as commodity computing shifts from (always on) desktop computers towards smartphones and tablets (untethered and running on scarce battery power); but cloud computing offers significant slack capacity. This capacity includes virtual machines with unused RAM or underused CPUs; virtual storage volumes allocated (& paid for) but not full; and virtual machines that are paid up for the current hour but whose work is complete. We are devising ways to facilitate the reuse of these resources (i.e., cloud-based interstitial computing) for satellite data processing and related analyses. We will present our findings and research directions on these and related topics.
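
    As a hedged sketch of the data-driven resource management pattern described above (written against today's boto3 SDK rather than whatever tooling the authors used), the following launches a worker VM whose user data tells it which granule to process and lets the VM terminate itself when done. The AMI id, instance type, bucket, and script paths are all placeholders.

```python
import boto3

# EC2 client; the region is an assumption for the sketch.
ec2 = boto3.client("ec2", region_name="us-east-1")

def launch_worker(granule_key: str) -> str:
    """Start one worker VM for a newly arrived data granule."""
    # User data individualizes an otherwise identical machine image:
    # fetch this granule, run the pipeline, then shut down (terminate).
    user_data = f"""#!/bin/bash
aws s3 cp s3://example-snpp-bucket/{granule_key} /data/
/opt/pipeline/process.sh /data/{granule_key}
shutdown -h now
"""
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # hypothetical worker image
        InstanceType="c5.xlarge",          # sized to the workflow
        MinCount=1, MaxCount=1,
        UserData=user_data,
        InstanceInitiatedShutdownBehavior="terminate",
    )
    return resp["Instances"][0]["InstanceId"]
```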

  5. [Chronic atrophic gastritis: endoscopic and histological concordances, associated injuries and application of virtual chromoendoscopy].

    PubMed

    Liu Bejarano, Humberto

    2011-01-01

    Due to the poor agreement between endoscopy and histology, gastric biopsy remains the gold standard for the diagnosis of chronic atrophic gastritis. Virtual chromoendoscopy systems allow better observation of the gastric mucosa. Objective: to evaluate the agreement between the Kimura-Takemoto endoscopic classification and the histological OLGA system (Operative Link on Gastritis Assessment), and to evaluate the application of virtual chromoendoscopy. In a prospective, longitudinal cohort study, 138 patients were included. Endoscopic atrophy was graded with the Kimura-Takemoto (K-T) system, both with conventional optics and with the seventh filter of virtual chromoendoscopy, and then compared with the histological findings of the OLGA pathology system; the lesions associated with each OLGA stage were also determined. The kappa index of agreement between conventional endoscopy and the OLGA system was 0.859, and with virtual chromoendoscopy it was 0.822; preneoplastic and neoplastic gastric lesions were associated with atrophy stages III and IV. The endoscopic and histological correlation with both systems is very good, with or without the use of virtual chromoendoscopy. Keywords: chronic atrophic gastritis, virtual chromoendoscopy, OLGA system, Kimura-Takemoto system.

  6. A File Archival System

    NASA Technical Reports Server (NTRS)

    Fanselow, J. L.; Vavrus, J. L.

    1984-01-01

    ARCH, a file archival system for the DEC VAX, provides for easy offline storage and retrieval of arbitrary files on a DEC VAX system. The system is designed to eliminate situations that tie up disk space and lead to confusion when different programmers develop different versions of the same programs and associated files.

  7. Interactive voxel graphics in virtual reality

    NASA Astrophysics Data System (ADS)

    Brody, Bill; Chappell, Glenn G.; Hartman, Chris

    2002-06-01

    Interactive voxel graphics in virtual reality poses significant research challenges in terms of interface, file I/O, and real-time algorithms. Voxel graphics is not so new, as it is the focus of a good deal of scientific visualization. Interactive voxel creation and manipulation is a more innovative concept. Scientists are understandably reluctant to manipulate data. They collect or model data. A scientific analogy to interactive graphics is the generation of initial conditions for some model. It is used as a method to test those models. We, however, are in the business of creating new data in the form of graphical imagery. In our endeavor, science is a tool and not an end. Nevertheless, there is a whole class of interactions and associated data generation scenarios that are natural to our way of working and that are also appropriate to scientific inquiry. Annotation by sketching or painting to point to and distinguish interesting and important information is very significant for science as well as art. Annotation in 3D is difficult without a good 3D interface. Interactive graphics in virtual reality is an appropriate approach to this problem.

  8. A VM-shared desktop virtualization system based on OpenStack

    NASA Astrophysics Data System (ADS)

    Liu, Xi; Zhu, Mingfa; Xiao, Limin; Jiang, Yuanjie

    2018-04-01

    With the increasing popularity of cloud computing, desktop virtualization has risen in recent years as a branch of virtualization technology. However, existing desktop virtualization systems are mostly designed in a one-to-one mode, in which one VM can be accessed by only one user. Meanwhile, previous desktop virtualization systems perform weakly in terms of response time and cost saving. This paper proposes a novel VM-shared desktop virtualization system based on the OpenStack platform. We modified the connection process and the display data transmission process of the remote display protocol SPICE to support the VM-shared function. In addition, we propose a server-push display mode to improve the user's interactive experience. The experimental results show that our system performs well in response time and achieves low CPU consumption.

  9. Generating Mosaics of Astronomical Images

    NASA Technical Reports Server (NTRS)

    Bergou, Attila; Berriman, Bruce; Good, John; Jacob, Joseph; Katz, Daniel; Laity, Anastasia; Prince, Thomas; Williams, Roy

    2005-01-01

    "Montage" is the name of a service of the National Virtual Observatory (NVO), and of software being developed to implement the service via the World Wide Web. Montage generates science-grade custom mosaics of astronomical images on demand from input files that comply with the Flexible Image Transport System (FITS) standard and contain image data registered on projections that comply with the World Coordinate System (WCS) standards. "Science-grade" in this context signifies that terrestrial and instrumental features are removed from images in a way that can be described quantitatively. "Custom" refers to user-specified parameters of projection, coordinates, size, rotation, and spatial sampling. The greatest value of Montage is expected to lie in its ability to analyze images at multiple wavelengths, delivering them on a common projection, coordinate system, and spatial sampling, and thereby enabling further analysis as though they were part of a single, multi-wavelength image. Montage will be deployed as a computation-intensive service through existing astronomy portals and other Web sites. It will be integrated into the emerging NVO architecture and will be executed on the TeraGrid. The Montage software will also be portable and publicly available.

  10. 75 FR 65467 - Combined Notice of Filings No. 1

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-25

    ...: Venice Gathering System, L.L.C. Description: Venice Gathering System, L.L.C. submits tariff filing per 154.203: Venice Gathering System Rate Settlement Compliance Filing to be effective 11/1/2010. Filed...

  11. Register file soft error recovery

    DOEpatents

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.

    2013-10-15

    Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.

  12. 48 CFR 304.803-70 - Contract/order file organization and use of checklists.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 4 2013-10-01 2013-10-01 false Contract/order file organization and use of checklists. 304.803-70 Section 304.803-70 Federal Acquisition Regulations System HEALTH... content of HHS contract and order files, OPDIVs shall use the folder filing system and accompanying file...

  13. 48 CFR 304.803-70 - Contract/order file organization and use of checklists.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Contract/order file organization and use of checklists. 304.803-70 Section 304.803-70 Federal Acquisition Regulations System HEALTH... content of HHS contract and order files, OPDIVs shall use the folder filing system and accompanying file...

  14. 48 CFR 304.803-70 - Contract/order file organization and use of checklists.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 4 2011-10-01 2011-10-01 false Contract/order file organization and use of checklists. 304.803-70 Section 304.803-70 Federal Acquisition Regulations System HEALTH... content of HHS contract and order files, OPDIVs shall use the folder filing system and accompanying file...

  15. 48 CFR 304.803-70 - Contract/order file organization and use of checklists.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 4 2012-10-01 2012-10-01 false Contract/order file organization and use of checklists. 304.803-70 Section 304.803-70 Federal Acquisition Regulations System HEALTH... content of HHS contract and order files, OPDIVs shall use the folder filing system and accompanying file...

  16. 48 CFR 304.803-70 - Contract/order file organization and use of checklists.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 4 2014-10-01 2014-10-01 false Contract/order file organization and use of checklists. 304.803-70 Section 304.803-70 Federal Acquisition Regulations System HEALTH... content of HHS contract and order files, OPDIVs shall use the folder filing system and accompanying file...

  17. To evaluate and compare the efficacy, cleaning ability of hand and two rotary systems in root canal retreatment.

    PubMed

    Shivanand, Sunita; Patil, Chetan R; Thangala, Venugopal; Kumar, Pabbati Ravi; Sachdeva, Jyoti; Krishna, Akash

    2013-05-01

    To evaluate and compare the efficacy and cleaning ability of hand instruments and two rotary systems in root canal retreatment. Sixty extracted premolars were retreated with the following systems: Group 1-ProTaper Universal retreatment files, Group 2-ProFile system, Group 3-H-files. Specimens were split longitudinally, and the amount of gutta-percha remaining on the canal walls was assessed by direct visual scoring with the aid of a stereomicroscope. Results were statistically analyzed using the ANOVA test. Completely clean root canal walls were not achieved with any of the techniques investigated. However, all three systems proved to be effective for gutta-percha removal. A significant difference was found between the ProTaper Universal retreatment file and the H-file, and also between the ProFile and the H-file. Under the conditions of the present study, ProTaper Universal retreatment files left significantly less gutta-percha and sealer than the ProFile and H-file systems. Rotary systems in combination with gutta-percha solvents can perform better than time-tested traditional hand instrumentation in root canal retreatment.

  18. High Performance Input/Output for Parallel Computer Systems

    NASA Technical Reports Server (NTRS)

    Ligon, W. B.

    1996-01-01

    The goal of our project is to study the I/O characteristics of parallel applications used in Earth science data processing systems such as Regional Data Centers (RDCs) or EOSDIS. Our approach is to study the runtime behavior of typical programs and the effect of key parameters of the I/O subsystem, both under simulation and with direct experimentation on parallel systems. Our three-year activity has focused on two items: developing a test bed that facilitates experimentation with parallel I/O, and studying representative programs from the Earth science data processing application domain. The Parallel Virtual File System (PVFS) has been developed for use on a number of platforms including the Tiger Parallel Architecture Workbench (TPAW) simulator, the Intel Paragon, a cluster of DEC Alpha workstations, and the Beowulf system (at CESDIS). PVFS provides considerable flexibility in configuring I/O in a UNIX-like environment, and access to key performance parameters facilitates experimentation. We have studied several key applications from levels 1, 2 and 3 of the typical RDC processing scenario, including instrument calibration and navigation, image classification, and numerical modeling codes. We have also considered large-scale scientific database codes used to organize image data.
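
    The abstract does not describe PVFS internals, but the core mechanism of a parallel file system of this kind, round-robin striping of a logical file across I/O servers, can be sketched as follows; the stripe size and server count are illustrative only.

```python
STRIPE_SIZE = 64 * 1024        # bytes per stripe unit (illustrative)
N_SERVERS = 4                  # number of I/O servers (illustrative)

def locate(offset: int) -> tuple[int, int]:
    """Map a logical byte offset to (server index, offset on that server)."""
    unit = offset // STRIPE_SIZE                 # which stripe unit
    server = unit % N_SERVERS                    # round-robin placement
    local = (unit // N_SERVERS) * STRIPE_SIZE + offset % STRIPE_SIZE
    return server, local

# A few sample offsets and where they land.
for off in (0, 65536, 262144, 300000):
    print(off, "->", locate(off))
```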

  19. An Audio Architecture Integrating Sound and Live Voice for Virtual Environments

    DTIC Science & Technology

    2002-09-01

    implementation of a virtual environment. As real world training locations become scarce and training budgets are trimmed, training system developers ...look more and more towards virtual environments as the answer. Virtual environments provide training system developers with several key benefits

  20. LVC interaction within a mixed-reality training system

    NASA Astrophysics Data System (ADS)

    Pollock, Brice; Winer, Eliot; Gilbert, Stephen; de la Cruz, Julio

    2012-03-01

    The United States military is increasingly pursuing advanced live, virtual, and constructive (LVC) training systems for reduced cost, greater training flexibility, and decreased training times. Combining the advantages of realistic training environments and virtual worlds, mixed reality LVC training systems can enable live and virtual trainees to interact as if co-located. However, LVC interaction in these systems often requires constructing immersive environments, developing hardware for live-virtual interaction, tracking in occluded environments, and an architecture that supports real-time transfer of entity information across many systems. This paper discusses a system that overcomes these challenges to empower LVC interaction in a reconfigurable, mixed reality environment. This system was developed and tested in an immersive, reconfigurable, and mixed reality LVC training system for the dismounted warfighter at ISU, known as the Veldt, to overcome LVC interaction challenges and as a test bed for cutting-edge technology to meet future U.S. Army battlefield requirements. Trainees interact physically in the Veldt and virtually through commercial and developed game engines. Evaluation involving military trained personnel found this system to be effective, immersive, and useful for developing the critical decision-making skills necessary for the battlefield. Procedural terrain modeling, model-matching database techniques, and a central communication server process all live and virtual entity data from system components to create a cohesive virtual world across all distributed simulators and game engines in real time. This system achieves rare LVC interaction within multiple physical and virtual immersive environments for training in real time across many distributed systems.

  1. A virtual source model for Monte Carlo simulation of helical tomotherapy.

    PubMed

    Yuan, Jiankui; Rong, Yi; Chen, Quan

    2015-01-08

    The purpose of this study was to present a Monte Carlo (MC) simulation method based on a virtual source, jaw, and MLC model to calculate dose in the patient for helical tomotherapy without the need to calculate phase-space files (PSFs). Current studies on tomotherapy MC simulation adopt a full MC model, which includes extensive modeling of the radiation source, primary and secondary jaws, and multileaf collimator (MLC). In the full MC model, PSFs need to be created at different scoring planes to facilitate the patient dose calculations. In the present work, the virtual source model (VSM) we established was based on the gold standard beam data of a tomotherapy unit, which can be exported from the treatment planning station (TPS). The TPS-generated sinograms were extracted from the archived patient XML (eXtensible Markup Language) files. The fluence map for the MC sampling was created by incorporating the percentage leaf open time (LOT) with the leaf filter, jaw penumbra, and leaf latency obtained from the sinogram files. The VSM was validated for various geometry setups and clinical situations involving heterogeneous media and delivery quality assurance (DQA) cases. An agreement of < 1% was obtained between the measured and simulated results for percent depth doses (PDDs) and open beam profiles for all three jaw settings in the VSM commissioning. The accuracy of the VSM leaf filter model was verified by comparing the measured and simulated results for a Picket Fence pattern. An agreement of < 2% was achieved between the presented VSM and a published full MC model for heterogeneous phantoms. For complex clinical head and neck (HN) cases, the VSM-based MC simulation of DQA plans agreed with the film measurement with 98% of planar dose pixels passing the 2%/2 mm gamma criteria. For patient treatment plans, results showed comparable dose-volume histograms (DVHs) for planning target volumes (PTVs) and organs at risk (OARs). Deviations observed in this study were consistent with the literature. The VSM-based MC simulation approach can be feasibly built from the gold standard beam model of a tomotherapy unit. The accuracy of the VSM was validated against measurements in homogeneous media, as well as against a published full MC model in heterogeneous media.
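
    The fluence-map sampling idea can be illustrated with a toy sketch: per-leaf fractional open times define a discrete fluence row from which Monte Carlo sample positions are drawn. The leaf count, the single "leaf filter" factor, and all numbers below are illustrative stand-ins, not the authors' actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-leaf fractional open times (LOT) for one projection,
# modulated by a single toy leakage/latency factor standing in for the
# leaf filter and latency corrections described in the abstract.
lot = np.array([0.0, 0.2, 0.8, 1.0, 0.6, 0.1, 0.0])
leaf_filter = 0.98
fluence = lot * leaf_filter

# Sample leaf indices with probability proportional to fluence, then a
# uniform position within the chosen leaf (in units of leaf width).
probs = fluence / fluence.sum()
leaves = rng.choice(len(lot), size=5, p=probs)
positions = leaves + rng.random(5)
print(positions)
```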

  2. A virtual source model for Monte Carlo simulation of helical tomotherapy

    PubMed Central

    Yuan, Jiankui; Rong, Yi

    2015-01-01

    The purpose of this study was to present a Monte Carlo (MC) simulation method based on a virtual source, jaw, and MLC model to calculate dose in patient for helical tomotherapy without the need of calculating phase‐space files (PSFs). Current studies on the tomotherapy MC simulation adopt a full MC model, which includes extensive modeling of radiation source, primary and secondary jaws, and multileaf collimator (MLC). In the full MC model, PSFs need to be created at different scoring planes to facilitate the patient dose calculations. In the present work, the virtual source model (VSM) we established was based on the gold standard beam data of a tomotherapy unit, which can be exported from the treatment planning station (TPS). The TPS‐generated sinograms were extracted from the archived patient XML (eXtensible Markup Language) files. The fluence map for the MC sampling was created by incorporating the percentage leaf open time (LOT) with leaf filter, jaw penumbra, and leaf latency contained from sinogram files. The VSM was validated for various geometry setups and clinical situations involving heterogeneous media and delivery quality assurance (DQA) cases. An agreement of <1% was obtained between the measured and simulated results for percent depth doses (PDDs) and open beam profiles for all three jaw settings in the VSM commissioning. The accuracy of the VSM leaf filter model was verified in comparing the measured and simulated results for a Picket Fence pattern. An agreement of <2% was achieved between the presented VSM and a published full MC model for heterogeneous phantoms. For complex clinical head and neck (HN) cases, the VSM‐based MC simulation of DQA plans agreed with the film measurement with 98% of planar dose pixels passing on the 2%/2 mm gamma criteria. For patient treatment plans, results showed comparable dose‐volume histograms (DVHs) for planning target volumes (PTVs) and organs at risk (OARs). Deviations observed in this study were consistent with literature. The VSM‐based MC simulation approach can be feasibly built from the gold standard beam model of a tomotherapy unit. The accuracy of the VSM was validated against measurements in homogeneous media, as well as published full MC model in heterogeneous media. PACS numbers: 87.53.‐j, 87.55.K‐ PMID:25679157

  3. Software platform virtualization in chemistry research and university teaching

    PubMed Central

    2009-01-01

    Background Modern chemistry laboratories operate with a wide range of software applications under different operating systems, such as Windows, LINUX or Mac OS X. Instead of installing software on different computers it is possible to install those applications on a single computer using Virtual Machine software. Software platform virtualization allows a single guest operating system to execute multiple other operating systems on the same computer. We apply and discuss the use of virtual machines in chemistry research and teaching laboratories. Results Virtual machines are commonly used for cheminformatics software development and testing. Benchmarking multiple chemistry software packages we have confirmed that the computational speed penalty for using virtual machines is low and around 5% to 10%. Software virtualization in a teaching environment allows faster deployment and easy use of commercial and open source software in hands-on computer teaching labs. Conclusion Software virtualization in chemistry, mass spectrometry and cheminformatics is needed for software testing and development of software for different operating systems. In order to obtain maximum performance the virtualization software should be multi-core enabled and allow the use of multiprocessor configurations in the virtual machine environment. Server consolidation, by running multiple tasks and operating systems on a single physical machine, can lead to lower maintenance and hardware costs especially in small research labs. The use of virtual machines can prevent software virus infections and security breaches when used as a sandbox system for internet access and software testing. Complex software setups can be created with virtual machines and are easily deployed later to multiple computers for hands-on teaching classes. We discuss the popularity of bioinformatics compared to cheminformatics as well as the missing cheminformatics education at universities worldwide. PMID:20150997

  4. Software platform virtualization in chemistry research and university teaching.

    PubMed

    Kind, Tobias; Leamy, Tim; Leary, Julie A; Fiehn, Oliver

    2009-11-16

    Modern chemistry laboratories operate with a wide range of software applications under different operating systems, such as Windows, LINUX or Mac OS X. Instead of installing software on different computers it is possible to install those applications on a single computer using Virtual Machine software. Software platform virtualization allows a single guest operating system to execute multiple other operating systems on the same computer. We apply and discuss the use of virtual machines in chemistry research and teaching laboratories. Virtual machines are commonly used for cheminformatics software development and testing. Benchmarking multiple chemistry software packages we have confirmed that the computational speed penalty for using virtual machines is low and around 5% to 10%. Software virtualization in a teaching environment allows faster deployment and easy use of commercial and open source software in hands-on computer teaching labs. Software virtualization in chemistry, mass spectrometry and cheminformatics is needed for software testing and development of software for different operating systems. In order to obtain maximum performance the virtualization software should be multi-core enabled and allow the use of multiprocessor configurations in the virtual machine environment. Server consolidation, by running multiple tasks and operating systems on a single physical machine, can lead to lower maintenance and hardware costs especially in small research labs. The use of virtual machines can prevent software virus infections and security breaches when used as a sandbox system for internet access and software testing. Complex software setups can be created with virtual machines and are easily deployed later to multiple computers for hands-on teaching classes. We discuss the popularity of bioinformatics compared to cheminformatics as well as the missing cheminformatics education at universities worldwide.

  5. A Pyramid Scheme for Constructing Geologic Maps on Geobrowsers

    NASA Astrophysics Data System (ADS)

    Whitmeyer, S. J.; de Paor, D. G.; Daniels, J.; Jeremy, N.; Michael, R.; Santangelo, B.

    2008-12-01

    Hundreds of geologic maps have been draped onto Google Earth (GE) using the ground overlay tag of Keyhole Markup Language (KML) and dozens have been published on academic and survey web pages as downloadable KML or KMZ (zipped KML) files. The vast majority of these are small KML docs that link to single, large - often very large - image files (jpegs, tiffs, etc.) Files that exceed 50 MB in size defeat the purpose of GE as an interactive and responsive, and therefore fast, virtual terrain medium. KML supports super-overlays (a.k.a. image pyramids), which break large graphic files into manageable tiles that load only when they are in the visible region at a sufficient level of detail (LOD), and several automatic tile-generating applications have been written. The process of exporting map data from applications such as ArcGIS® to KML format is becoming more manageable but still poses challenges. Complications arise, for example, because of differences between grid-north at a point on a map and true north at the equivalent location on the virtual globe. In our recent field season, we devised ways of overcoming many of these obstacles in order to generate responsive, panable, zoomable geologic maps in which data is layered in a pyramid structure similar to the image pyramid used for default GE terrain. The structure of our KML code for each level of the pyramid is self-similar: (i) check whether the current tile is in the visible region, (ii) if so, render the current overlay, (iii) add the current data level, and (iv) using four network links, check the visibility and LOD of four nested tiles. By using this pyramid structure we provide the user with access to geologic and map data at multiple levels of observation. For example, when the viewpoint is distant, regional structures and stratigraphy (e.g. lithological groups and terrane boundaries) are visible. As the user zooms to lower elevations, formations and ultimately individual outcrops come into focus. The pyramid structure is ideally suited to geologic data which tends to be unevenly exposed across the earth's surface.
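
    A minimal Python sketch of this self-similar structure follows: each tile document carries a Region with an LOD threshold, a GroundOverlay for its own image, and four NetworkLinks that defer the children until they enter the visible region at sufficient detail. The tile image names, LOD threshold, and geographic extent are hypothetical.

```python
# Template for one tile: a Region gating visibility, the tile's own
# ground overlay, and the (optional) network links to its children.
TILE = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2"><Document>
  <Region>
    <LatLonAltBox><north>{n}</north><south>{s}</south>
      <east>{e}</east><west>{w}</west></LatLonAltBox>
    <Lod><minLodPixels>128</minLodPixels><maxLodPixels>-1</maxLodPixels></Lod>
  </Region>
  <GroundOverlay>
    <Icon><href>{name}.png</href></Icon>
    <LatLonBox><north>{n}</north><south>{s}</south>
      <east>{e}</east><west>{w}</west></LatLonBox>
  </GroundOverlay>
  {links}
</Document></kml>"""

LINK = """<NetworkLink>
  <Region><LatLonAltBox><north>{n}</north><south>{s}</south>
    <east>{e}</east><west>{w}</west></LatLonAltBox>
    <Lod><minLodPixels>128</minLodPixels></Lod></Region>
  <Link><href>{name}.kml</href>
    <viewRefreshMode>onRegion</viewRefreshMode></Link>
</NetworkLink>"""

def write_tile(name, n, s, e, w, depth):
    """Write one tile's KML and recurse into its four quadrant children."""
    links = ""
    if depth > 0:
        mid_lat, mid_lon = (n + s) / 2, (e + w) / 2
        children = [(f"{name}0", n, mid_lat, mid_lon, w),   # NW quadrant
                    (f"{name}1", n, mid_lat, e, mid_lon),   # NE quadrant
                    (f"{name}2", mid_lat, s, mid_lon, w),   # SW quadrant
                    (f"{name}3", mid_lat, s, e, mid_lon)]   # SE quadrant
        for cname, cn, cs, ce, cw in children:
            links += LINK.format(name=cname, n=cn, s=cs, e=ce, w=cw)
            write_tile(cname, cn, cs, ce, cw, depth - 1)
    with open(f"{name}.kml", "w") as f:
        f.write(TILE.format(name=name, n=n, s=s, e=e, w=w, links=links))

# A two-level pyramid over a hypothetical one-degree map extent.
write_tile("geomap", 39.0, 38.0, -77.0, -78.0, depth=2)
```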

  6. Mission Operations Center (MOC) - Precipitation Processing System (PPS) Interface Software System (MPISS)

    NASA Technical Reports Server (NTRS)

    Ferrara, Jeffrey; Calk, William; Atwell, William; Tsui, Tina

    2013-01-01

    MPISS is an automatic file transfer system that implements a combination of standard and mission-unique transfer protocols required by the Global Precipitation Measurement Mission (GPM) Precipitation Processing System (PPS) to control the flow of data between the MOC and the PPS. The primary features of MPISS are file transfers (both with and without PPS-specific protocols), logging of file transfer and system events to local files and a standard messaging bus, short-term storage of data files to facilitate retransmissions, and generation of file transfer accounting reports. The system includes a graphical user interface (GUI) to control the system, allow manual operations, and display events in real time. The PPS-specific protocols are an enhanced version of those developed for the Tropical Rainfall Measuring Mission (TRMM). All file transfers between the MOC and the PPS use the SSH File Transfer Protocol (SFTP). For reports and data files generated within the MOC, no additional protocols are used when transferring files to the PPS. For observatory data files, an additional handshaking protocol of data notices and data receipts is used. For each observatory data file transmitted, MPISS generates and sends to the PPS a data notice containing the data start and stop times along with a checksum for the file. MPISS retrieves the PPS-generated data receipts that indicate the success or failure of the PPS to ingest the data file and/or notice, and retransmits the appropriate files as indicated in the receipt when required. MPISS also automatically retrieves files from the PPS. The unique feature of this software is the use of both standard and PPS-specific protocols in parallel. The advantage of this capability is that it supports users that require the PPS protocol as well as those that do not. The system is highly configurable to accommodate the needs of future users.
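
    As an illustration of the handshaking layer described above, here is a hedged Python sketch that sends a file over SFTP (via the paramiko library) and follows it with a data notice carrying start/stop times and a checksum. The host name, notice format, and field names are placeholders, not the actual MOC-PPS interface definition.

```python
import hashlib
import paramiko

def send_with_notice(local_path: str, remote_dir: str,
                     start: str, stop: str) -> None:
    """Upload a data file over SFTP, then upload its data notice."""
    with open(local_path, "rb") as f:
        checksum = hashlib.md5(f.read()).hexdigest()

    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect("pps.example.gov", username="moc")   # hypothetical host
    sftp = ssh.open_sftp()
    try:
        name = local_path.split("/")[-1]
        sftp.put(local_path, f"{remote_dir}/{name}")
        # Data notice: start/stop times plus checksum, sent after the file
        # so the receiver can verify ingest and issue a receipt.
        with sftp.file(f"{remote_dir}/{name}.notice", "w") as notice:
            notice.write(f"start={start}\nstop={stop}\nmd5={checksum}\n")
    finally:
        sftp.close()
        ssh.close()
```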

  7. A Study of Multi-Representation of Geometry Problem Solving with Virtual Manipulatives and Whiteboard System

    ERIC Educational Resources Information Center

    Hwang, Wu-Yuin; Su, Jia-Han; Huang, Yueh-Min; Dong, Jian-Jie

    2009-01-01

    In this paper, the development of an innovative Virtual Manipulatives and Whiteboard (VMW) system is described. The VMW system allowed users to manipulate virtual objects in 3D space and find clues to solve geometry problems. To assist with multi-representation transformation, translucent multimedia whiteboards were used to provide a virtual 3D…

  8. 47 CFR 1.10006 - Is electronic filing mandatory?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 1 2011-10-01 2011-10-01 false Is electronic filing mandatory? 1.10006 Section... International Bureau Filing System § 1.10006 Is electronic filing mandatory? Electronic filing is mandatory for... System (IBFS) form is available. Applications for which an electronic form is not available must be filed...

  9. Instrumentation to Aid in Steel Bridge Fabrication : Bridge Virtual Assembly System

    DOT National Transportation Integrated Search

    2018-05-01

    This pool funded project developed a BRIDGE VIRTUAL ASSEMBLY SYSTEM (BRIDGE VAS) that improves manufacturing processes and enhances quality control for steel bridge fabrication. The system replaces conventional match-drilling with virtual assembly me...

  10. Developing and integrating an adverse drug reaction reporting system with the hospital information system.

    PubMed

    Kataoka, Satoshi; Ohe, Kazuhiko; Mochizuki, Mayumi; Ueda, Shiro

    2002-01-01

    We have developed an adverse drug reaction (ADR) reporting system and integrated it with the Hospital Information System (HIS) of the University of Tokyo Hospital. Since the system is written in Java, it is portable, without recompilation, to any operating system on which a Java virtual machine runs. In this system, we implemented an automatic data-filling function using XML-based (eXtensible Markup Language) files generated by the HIS. This function decreases the time physicians and pharmacists need to fill in spontaneous ADR reports. By clicking a button, the report is sent to the text database as electronic mail over the Simple Mail Transfer Protocol (SMTP). The destination of the report mail can be changed arbitrarily by administrators, which gives the system more flexibility in practical operation. Although we tried to use the SGML-based (Standard Generalized Markup Language) ICH M2 guideline to follow the global standard for case reports, we eventually adopted XML as the output report format, because we found problems in handling two-byte characters with the ICH guideline, and XML offers many useful features. According to our pilot survey conducted at the University of Tokyo Hospital, many physicians answered that integrating an ADR reporting system with the HIS would increase the number of ADR reports.
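
    A minimal sketch of this reporting path, building a small XML report and handing it to an SMTP relay with Python's standard library, is shown below; the element names, mail addresses, and relay host are illustrative, not the hospital's actual schema or infrastructure.

```python
import smtplib
import xml.etree.ElementTree as ET
from email.mime.text import MIMEText

# Build a minimal ADR report as XML (hypothetical element names).
report = ET.Element("adr_report")
ET.SubElement(report, "patient_id").text = "P-0001"
ET.SubElement(report, "drug").text = "example-drug"
ET.SubElement(report, "reaction").text = "rash"
xml_body = ET.tostring(report, encoding="unicode")

# Wrap the XML in a mail message; the destination is configurable.
msg = MIMEText(xml_body, "plain", "utf-8")
msg["Subject"] = "ADR report"
msg["From"] = "his@example-hospital.jp"
msg["To"] = "adr-db@example-hospital.jp"

# Hand the report to an SMTP relay (hypothetical host).
with smtplib.SMTP("mail.example-hospital.jp") as relay:
    relay.send_message(msg)
```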

  11. Integration of the virtual 3D model of a control system with the virtual controller

    NASA Astrophysics Data System (ADS)

    Herbuś, K.; Ociepka, P.

    2015-11-01

    Nowadays the design process includes simulation analysis of different components of a constructed object, which creates the need to integrate different virtual objects in order to simulate the whole investigated technical system. The paper presents issues related to the integration of a virtual 3D model of a chosen control system with a virtual controller. The goal of the integration is to verify the operation of the adopted object in accordance with the established control program. The object of the simulation work is the drive system of a tunneling machine for trenchless work. In the first stage of the work, an interactive visualization of the functioning of the 3D virtual model of the tunneling machine was created. For this purpose, VR (Virtual Reality) class software was applied. In the interactive application, procedures were created for controlling the translatory drive system, the rotary drive system, and the drive system of the manipulator. Additionally, a procedure was created for turning the crushing head, mounted on the last element of the manipulator, on and off. Procedures were also established for receiving input data from external software on the basis of dynamic data exchange (DDE), which allow the actuators of the particular control systems of the considered machine to be controlled. In the next stage of the work, a program for the virtual controller was created in the ladder diagram (LD) language. The control program was developed on the basis of the adopted work cycle of the tunneling machine. The element integrating the virtual model of the tunneling machine with the virtual controller is an application written in a high-level language (Visual Basic). In this application, procedures were created that collect data from the virtual controller running in simulation mode and transfer them to the interactive application, in which the operation of the adopted research object is verified. The work carried out allowed for the integration of the virtual model of the control system of the tunneling machine with the virtual controller, enabling the verification of its operation.
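
    The integration glue can be pictured as a polling loop that reads the controller outputs each cycle and applies them to the simulated actuators. The sketch below uses hypothetical stand-in classes for the DDE link and the VR application; it illustrates the data flow only, not the authors' Visual Basic implementation.

```python
import time

class ControllerLink:
    """Stand-in for the DDE connection to the virtual PLC."""
    def read_outputs(self) -> dict:
        # Hypothetical output tags of the ladder-diagram program.
        return {"drive_forward": True, "rotate": False, "crush_head": True}

class MachineModel:
    """Stand-in for the interactive 3D model of the tunneling machine."""
    def apply(self, outputs: dict) -> None:
        print("applying actuator states:", outputs)

plc, model = ControllerLink(), MachineModel()
for _ in range(3):          # three control cycles of the glue loop
    model.apply(plc.read_outputs())
    time.sleep(0.1)         # cycle time of the data exchange
```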

  12. Checkpoint-Restart in User Space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    CRUISE implements a user-space file system that stores data in main memory and transparently spills over to other storage, such as local flash memory or the parallel file system, as needed. CRUISE also exposes file contents for remote direct memory access, allowing external tools to copy files to the parallel file system in the background with reduced CPU interruption.

  13. An Ephemeral Burst-Buffer File System for Scientific Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Teng; Moody, Adam; Yu, Weikuan

    BurstFS is a distributed file system for node-local burst buffers on high performance computing systems. BurstFS presents a shared file system space across the burst buffers so that applications that use shared files can access the highly-scalable burst buffers without changing their applications.

  14. 5 CFR 293.504 - Composition of, and access to, the Employee Medical File System.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Employee Medical File System. 293.504 Section 293.504 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PERSONNEL RECORDS Employee Medical File System Records § 293.504 Composition of, and access to, the Employee Medical File System. (a) All employee occupational medical records...

  15. 5 CFR 293.504 - Composition of, and access to, the Employee Medical File System.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Employee Medical File System. 293.504 Section 293.504 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PERSONNEL RECORDS Employee Medical File System Records § 293.504 Composition of, and access to, the Employee Medical File System. (a) All employee occupational medical records...

  16. 5 CFR 293.504 - Composition of, and access to, the Employee Medical File System.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Employee Medical File System. 293.504 Section 293.504 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PERSONNEL RECORDS Employee Medical File System Records § 293.504 Composition of, and access to, the Employee Medical File System. (a) All employee occupational medical records...

  17. 5 CFR 293.504 - Composition of, and access to, the Employee Medical File System.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Employee Medical File System. 293.504 Section 293.504 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PERSONNEL RECORDS Employee Medical File System Records § 293.504 Composition of, and access to, the Employee Medical File System. (a) All employee occupational medical records...

  18. 5 CFR 293.504 - Composition of, and access to, the Employee Medical File System.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Employee Medical File System. 293.504 Section 293.504 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PERSONNEL RECORDS Employee Medical File System Records § 293.504 Composition of, and access to, the Employee Medical File System. (a) All employee occupational medical records...

  19. VirGO: A Visual Browser for the ESO Science Archive Facility

    NASA Astrophysics Data System (ADS)

    Hatziminaoglou, Evanthia; Chéreau, Fabien

    2009-03-01

    VirGO is the next generation Visual Browser for the ESO Science Archive Facility (SAF) developed in the Virtual Observatory Project Office. VirGO enables astronomers to discover and select data easily from millions of observations in a visual and intuitive way. It allows real-time access and the graphical display of a large number of observations by showing instrumental footprints and image previews, as well as their selection and filtering for subsequent download from the ESO SAF web interface. It also permits the loading of external FITS files or VOTables, as well as the superposition of Digitized Sky Survey images to be used as background. All data interfaces are based on Virtual Observatory (VO) standards that allow access to images and spectra from external data centres, and interaction with the ESO SAF web interface or any other VO applications.

  20. Virtual fringe projection system with nonparallel illumination based on iteration

    NASA Astrophysics Data System (ADS)

    Zhou, Duo; Wang, Zhangying; Gao, Nan; Zhang, Zonghua; Jiang, Xiangqian

    2017-06-01

    Fringe projection profilometry has been widely applied in many fields. To set up an ideal measuring system, a virtual fringe projection technique has been studied to assist in the design of hardware configurations. However, existing virtual fringe projection systems use parallel illumination and have a fixed optical framework. This paper presents a virtual fringe projection system with nonparallel illumination. Using an iterative method to calculate intersection points between rays and reference planes or object surfaces, the proposed system can simulate projected fringe patterns and captured images. A new explicit calibration method has been presented to validate the precision of the system. Simulated results indicate that the proposed iterative method outperforms previous systems. Our virtual system can be applied to error analysis, algorithm optimization, and help operators to find ideal system parameter settings for actual measurements.
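
    The iterative ray-surface intersection at the heart of such a nonparallel-illumination simulator can be sketched as a fixed-point iteration on the ray parameter; the height-field surface below is a toy stand-in for the measured object, and the tolerances are illustrative.

```python
import numpy as np

def surface(x, y):
    """Toy height field z = f(x, y) standing in for the measured object."""
    return 5.0 + 0.5 * np.sin(x) * np.cos(y)

def intersect(origin, direction, tol=1e-9, max_iter=100):
    """Iteratively solve origin + t*direction landing on z = surface(x, y)."""
    o, d = np.asarray(origin, float), np.asarray(direction, float)
    t = 0.0
    for _ in range(max_iter):
        x, y = o[0] + t * d[0], o[1] + t * d[1]
        t_new = (surface(x, y) - o[2]) / d[2]   # re-solve the ray parameter
        if abs(t_new - t) < tol:
            break
        t = t_new
    return o + t_new * d

# A diverging projector ray from a point source above the surface.
print(intersect(origin=[0.0, 0.0, 20.0], direction=[0.1, 0.05, -1.0]))
```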

  1. ViRPET--combination of virtual reality and PET brain imaging

    DOEpatents

    Majewski, Stanislaw; Brefczynski-Lewis, Julie

    2017-05-23

    Various methods, systems and apparatus are provided for brain imaging during virtual reality stimulation. In one example, among others, a system for virtual ambulatory environment brain imaging includes a mobile brain imager configured to obtain positron emission tomography (PET) scans of a subject in motion, and a virtual reality (VR) system configured to provide one or more stimuli to the subject during the PET scans. In another example, a method for virtual ambulatory environment brain imaging includes providing stimulation to a subject through a virtual reality (VR) system; and obtaining a positron emission tomography (PET) scan of the subject while moving in response to the stimulation from the VR system. The mobile brain imager can be positioned on the subject with an array of imaging photodetector modules distributed about the head of the subject.

  2. Botnets, Cybercrime, and Cyberterrorism: Vulnerabilities and Policy Issues for Congress

    DTIC Science & Technology

    2008-01-29

Crime and the Internet, December 2006, [http://www.sigma.com.pl/pliki/albums/userpics/10007/Virtual_Criminology_Report_2006.pdf]. 22 Gnutella emerged...as the first fully decentralized peer-to-peer protocol in 2000, and was used on the Internet to share and swap music files in MP3 compression format...The music industry was often frustrated in their efforts to counter this peer-to-peer technology because it could not identify a main controlling

  3. Botnets, Cybercrime, and Cyberterrorism: Vulnerabilities and Policy Issues for Congress

    DTIC Science & Technology

    2007-11-15

Organized Crime and the Internet, December 2006, [http://www.sigma.com.pl/pliki/albums/userpics/10007/Virtual_Criminology_Report_2006.pdf]. 22 Gnutella...emerged as the first fully decentralized peer-to-peer protocol in 2000, and was used on the Internet to share and swap music files in MP3 compression...format. The music industry was often frustrated in their efforts to counter this peer-to-peer technology because it could not identify a main

  4. Multimodal Virtual Environments: MAGIC Toolkit and Visual-Haptic Interaction Paradigms

    DTIC Science & Technology

    1998-01-01

2.7.3 Load/Save Options; 2.7.4 Information Display; 2.8 Library Files; 2.9 Evaluation; 3 Visual-Haptic Interactions; 3.1...Northwestern University [Colgate, 1994]. It is possible for a user to touch one side of a thin object and be propelled out the opposite side, because...when there is a high correlation in motion and force between the visual and haptic realms. Chapter 7 concludes with an evaluation of the application

  5. CloudMC: a cloud computing application for Monte Carlo simulation.

    PubMed

    Miras, H; Jiménez, R; Miras, C; Gomà, C

    2013-04-21

This work presents CloudMC, a cloud computing application, developed in Windows Azure®, the platform of the Microsoft® cloud, for the parallelization of Monte Carlo simulations in a dynamic virtual cluster. CloudMC is a web application designed to be independent of the Monte Carlo code on which the simulations are based; the simulations just need to be of the form: input files → executable → output files. To study the performance of CloudMC in Windows Azure®, Monte Carlo simulations with penelope were performed on different instance (virtual machine) sizes and for different numbers of instances. The instance size was found to have no effect on the simulation runtime. It was also found that the decrease in time with the number of instances followed Amdahl's law, with a slight deviation due to the increase in the fraction of non-parallelizable time with increasing number of instances. A simulation that would have required 30 h of CPU on a single instance was completed in 48.6 min when executed on 64 instances in parallel (a speedup of 37×). Furthermore, the use of cloud computing for parallel computing offers some advantages over conventional clusters: high accessibility, scalability and pay-per-usage. Therefore, it is strongly believed that cloud computing will play an important role in making Monte Carlo dose calculation a reality in future clinical practice.
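
    The reported scaling is easy to sanity-check against Amdahl's law. A small sketch (the roughly 1% serial fraction below is inferred from the reported numbers, not stated in the record):

    ```python
    # Amdahl's law: with serial fraction s, the speedup on N instances
    # is 1 / (s + (1 - s)/N).
    def amdahl_speedup(n_instances, serial_fraction):
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_instances)

    # 30 h / 48.6 min ~= 37x on 64 instances implies a serial fraction of
    # roughly 1%: amdahl_speedup(64, 0.011) ~= 37.8.
    for n in (1, 8, 16, 32, 64):
        print(n, round(amdahl_speedup(n, 0.011), 1))
    ```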

  6. RANS Simulation (Virtual Blade Model [VBM]) of Single Lab Scaled DOE RM1 MHK Turbine

    DOE Data Explorer

    Javaherchi, Teymour; Stelzenmuller, Nick; Aliseda, Alberto; Seydel, Joseph

    2014-04-15

Attached are the .cas and .dat files for the Reynolds-Averaged Navier-Stokes (RANS) simulation of a single lab-scaled DOE RM1 turbine implemented in the ANSYS FLUENT CFD package. The lab-scaled DOE RM1 is a re-designed geometry, based on the full-scale DOE RM1 design, that produces the same power output as the full-scale model while operating at matched Tip Speed Ratio values at laboratory-achievable Reynolds numbers (see attached paper). In this case study the flow field around and in the wake of the lab-scaled DOE RM1 turbine is simulated with the Blade Element Model (a.k.a. Virtual Blade Model) by solving the RANS equations coupled with the k-ω turbulence closure model. It should be highlighted that in this simulation the actual geometry of the rotor blade is not modeled; the effect of the rotating blades is represented using Blade Element Theory. This simulation provides an accurate estimate of the performance of the device and the structure of its turbulent far wake. Owing to the simplifications used to model the rotating blades, VBM cannot capture details of the flow field in the near-wake region of the device. The required User Defined Functions (UDFs) and the look-up table of lift and drag coefficients are included along with the .cas and .dat files.
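
    The essence of the Blade Element approach, reading lift and drag coefficients from a look-up table and converting them to per-element forces, can be sketched briefly. The coefficient table and all parameter values below are illustrative assumptions, not the distributed UDF:

    ```python
    import numpy as np

    # Assumed airfoil coefficient data, indexed by angle of attack (deg).
    alpha_tab = np.array([-10.0, -5.0, 0.0, 5.0, 10.0, 15.0])
    cl_tab    = np.array([-0.8, -0.3, 0.2, 0.8, 1.2, 1.4])
    cd_tab    = np.array([0.03, 0.015, 0.01, 0.015, 0.03, 0.08])

    def element_forces(alpha_deg, v_rel, chord, dr, rho=998.0):
        """Lift and drag (N) on one blade element of span dr and chord c."""
        cl = np.interp(alpha_deg, alpha_tab, cl_tab)
        cd = np.interp(alpha_deg, alpha_tab, cd_tab)
        q = 0.5 * rho * v_rel**2 * chord * dr   # dynamic pressure times area
        return q * cl, q * cd

    lift, drag = element_forces(alpha_deg=6.0, v_rel=3.0, chord=0.08, dr=0.02)
    ```

    In a VBM-style simulation these element forces are summed over the rotor disk and applied to the flow as momentum sources, which is why the near wake is not resolved.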

  7. Distributed attitude synchronization of formation flying via consensus-based virtual structure

    NASA Astrophysics Data System (ADS)

    Cong, Bing-Long; Liu, Xiang-Dong; Chen, Zhen

    2011-06-01

This paper presents a general framework for synchronized multiple spacecraft rotations via a consensus-based virtual structure. In this framework, attitude control systems for the formation spacecraft and the virtual structure are designed separately. Both parametric uncertainty and external disturbance are taken into account. A time-varying sliding mode control (TVSMC) algorithm is designed to improve the robustness of the actual attitude control system. As for the virtual attitude control system, a behavioral consensus algorithm is presented to accomplish the attitude maneuver of the entire formation and guarantee a consistent attitude among the local virtual structure counterparts during the attitude maneuver. A multiple virtual sub-structures (MVSSs) system is introduced to enhance the current virtual structure scheme when large numbers of spacecraft are involved in the formation. The attitude of each spacecraft is represented by modified Rodrigues parameters (MRPs) for their non-redundancy. Finally, a numerical simulation with three synchronization situations is employed to illustrate the effectiveness of the proposed strategy.
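
    The consensus idea underlying the virtual-structure scheme can be illustrated with a deliberately simplified kinematic update. This is not the paper's TVSMC law; the gain, topology, and initial attitudes are assumptions:

    ```python
    import numpy as np

    # Simplified discrete consensus on MRP attitude vectors: each virtual
    # sub-structure nudges its attitude toward its neighbours', so the
    # formation converges to a common attitude.
    def consensus_step(sigmas, adjacency, gain=0.1):
        sigmas = np.asarray(sigmas, float)
        new = sigmas.copy()
        for i, row in enumerate(adjacency):
            for j, connected in enumerate(row):
                if connected:
                    new[i] += gain * (sigmas[j] - sigmas[i])
        return new

    # Three spacecraft in a ring topology with distinct initial attitudes.
    sig = [[0.1, 0.0, 0.0], [0.0, 0.2, 0.0], [0.0, 0.0, -0.1]]
    ring = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
    for _ in range(50):
        sig = consensus_step(sig, ring)   # attitudes converge toward a mean
    ```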

  8. Quantitative evaluation of apically extruded debris with different single-file systems: Reciproc, F360 and OneShape versus Mtwo.

    PubMed

    Bürklein, S; Benten, S; Schäfer, E

    2014-05-01

To assess in a laboratory setting the amount of apically extruded debris associated with different single-file nickel-titanium instrumentation systems compared to one multiple-file rotary system. Eighty human mandibular central incisors were randomly assigned to four groups (n = 20 teeth per group). The root canals were instrumented according to the manufacturers' instructions using the reciprocating single-file system Reciproc, the single-file rotary systems F360 and OneShape, and the multiple-file rotary Mtwo instruments. The apically extruded debris was collected and dried in pre-weighed glass vials. The amount of debris was assessed with a micro balance and statistically analysed using ANOVA and the post hoc Student-Newman-Keuls test. The time required to prepare the canals with the different instruments was also recorded. Reciproc produced significantly more debris compared to all other systems (P < 0.05). No significant difference was noted between the two single-file rotary systems and the multiple-file rotary system (P > 0.05). Instrumentation with the three single-file systems was significantly faster than with Mtwo (P < 0.05). Under the conditions of this study, all systems caused apical debris extrusion. Rotary instrumentation was associated with less debris extrusion compared to reciprocating instrumentation. © 2013 International Endodontic Journal. Published by John Wiley & Sons Ltd.

  9. Usage analysis of user files in UNIX

    NASA Technical Reports Server (NTRS)

    Devarakonda, Murthy V.; Iyer, Ravishankar K.

    1987-01-01

Presented is a user-oriented analysis of short-term file usage in a 4.2 BSD UNIX environment. The key aspect of this analysis is a characterization of users and files, which is a departure from the traditional approach of analyzing file references. Two characterization measures are employed: accesses-per-byte (combining the fraction of a file referenced and the number of references) and file size. This new approach is shown to distinguish differences in files as well as users, which can be used in efficient file system design and in creating realistic test workloads for simulations. A multi-stage gamma distribution is shown to closely model the file usage measures. Even though overall file sharing is small, some files belonging to a bulletin board system are accessed by many users, simultaneously and otherwise. Over 50% of users referenced files owned by other users, and over 80% of all files were involved in such references. Based on the differences in files and users, suggestions to improve system performance are also made.
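
    A rough sketch of the two characterization measures and the gamma fit, assuming a simple trace format and one plausible reading of accesses-per-byte; the paper's exact weighting and its multi-stage gamma model are not reproduced here:

    ```python
    import numpy as np
    from scipy import stats

    # trace: list of (file_id, bytes_referenced, n_references, file_size)
    trace = [("a", 4096, 12, 8192), ("b", 100, 1, 100), ("c", 65536, 3, 1 << 20)]

    # One plausible reading: references per byte of file, weighted by the
    # fraction of the file actually touched.
    apb = np.array([n_ref * (ref_bytes / size) / size
                    for _, ref_bytes, n_ref, size in trace])

    # Fit a (single-stage) gamma distribution to the measure.
    shape, loc, scale = stats.gamma.fit(apb, floc=0)
    ```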

  10. 48 CFR 204.802 - Contract files.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 3 2014-10-01 2014-10-01 false Contract files. 204.802 Section 204.802 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM, DEPARTMENT OF DEFENSE GENERAL ADMINISTRATIVE MATTERS Contract Files 204.802 Contract files. Official contract...

  11. 48 CFR 204.802 - Contract files.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Contract files. 204.802 Section 204.802 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM, DEPARTMENT OF DEFENSE GENERAL ADMINISTRATIVE MATTERS Contract Files 204.802 Contract files. Official contract...

  12. 48 CFR 204.802 - Contract files.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Contract files. 204.802 Section 204.802 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM, DEPARTMENT OF DEFENSE GENERAL ADMINISTRATIVE MATTERS Contract Files 204.802 Contract files. Official contract...

  13. 48 CFR 204.802 - Contract files.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 3 2012-10-01 2012-10-01 false Contract files. 204.802 Section 204.802 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM, DEPARTMENT OF DEFENSE GENERAL ADMINISTRATIVE MATTERS Contract Files 204.802 Contract files. Official contract...

  14. 48 CFR 204.802 - Contract files.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false Contract files. 204.802 Section 204.802 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM, DEPARTMENT OF DEFENSE GENERAL ADMINISTRATIVE MATTERS Contract Files 204.802 Contract files. Official contract...

  15. Twin-tailed fail-over for fileservers maintaining full performance in the presence of a failure

    DOEpatents

    Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Steinmacher-Burow, Burkhard D.

    2008-02-12

A method for maintaining full performance of a file system in the presence of a failure is provided. The file system has N storage devices, where N is an integer greater than zero, and N primary file servers, each operatively connected to a corresponding storage device for accessing files therein. The file system further has a secondary file server operatively connected to at least one of the N storage devices. The method includes: switching the connection of one of the N storage devices to the secondary file server upon a failure of one of the N primary file servers; and switching the connections of one or more of the remaining storage devices to a primary file server other than the failed file server as necessary, so as to prevent a loss in performance and to provide each storage device with an operating file server.
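
    The switching logic can be caricatured in a few lines. This toy sketch (names and the single-secondary topology are assumptions) shows only the device-to-server remapping on failure, not the cascading re-balancing the claim also covers:

    ```python
    # Toy fail-over: each storage device is mapped to a file server; on a
    # primary-server failure, its device moves to the secondary server so
    # every device keeps an operating server.
    def fail_over(mapping, failed, secondary="S"):
        """mapping: device -> server. Reassign devices of the failed server."""
        return {device: (secondary if server == failed else server)
                for device, server in mapping.items()}

    mapping = {"d0": "P0", "d1": "P1", "d2": "P2"}
    print(fail_over(mapping, failed="P1"))   # d1 now served by the secondary
    ```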

  16. RAMA: A file system for massively parallel computers

    NASA Technical Reports Server (NTRS)

    Miller, Ethan L.; Katz, Randy H.

    1993-01-01

This paper describes a file system design for massively parallel computers which makes very efficient use of a few disks per processor. This overcomes the traditional I/O bottleneck of massively parallel machines by storing the data on disks within the high-speed interconnection network. In addition, the file system, called RAMA, requires little inter-node synchronization, removing another common bottleneck in parallel processor file systems. Support for a large tertiary storage system can easily be integrated into the file system; in fact, RAMA runs most efficiently when tertiary storage is used.

  17. 10 CFR 110.89 - Filing and service.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...: Rulemakings and Adjudications Staff or via the E-Filing system, following the procedure set forth in 10 CFR 2.302. Filing by mail is complete upon deposit in the mail. Filing via the E-Filing system is completed... residence with some occupant of suitable age and discretion; (2) Following the requirements for E-Filing in...

  18. CLIPS++: Embedding CLIPS into C++

    NASA Technical Reports Server (NTRS)

    Obermeyer, Lance; Miranker, Daniel P.

    1994-01-01

    This paper describes a set of C++ extensions to the CLIPS language and their embodiment in CLIPS++. These extensions and the implementation approach of CLIPS++ provide a new level of embeddability with C and C++. These extensions are a C++ include statement and a defcontainer construct; (include (c++-header-file.h)) and (defcontainer (c++-type)). The include construct allows C++ functions to be embedded in both the LHS and RHS of CLIPS rules. The header file in an include construct is the same header file the programmer uses for his/her own C++ code, independent of CLIPS. The defcontainer construct allows the inference engine to treat C++ class instances as CLIPS deftemplate facts. Consequently existing C++ class libraries may be transparently imported into CLIPS. These C++ types may use advanced features like inheritance, virtual functions, and templates. The implementation has been tested with several class libraries, including Rogue Wave Software's Tools.h++, GNU's libg++, and USL's C++ Standard Components. The execution speed of CLIPS++ has been determined to be 5 to 700 times the execution speed of CLIPS 6.0 (10 to 20X typical).

  19. Using smartphone technology to deliver a virtual pedestrian environment: usability and validation.

    PubMed

    Schwebel, David C; Severson, Joan; He, Yefei

    2017-09-01

    Various programs effectively teach children to cross streets more safely, but all are labor- and cost-intensive. Recent developments in mobile phone technology offer opportunity to deliver virtual reality pedestrian environments to mobile smartphone platforms. Such an environment may offer a cost- and labor-effective strategy to teach children to cross streets safely. This study evaluated usability, feasibility, and validity of a smartphone-based virtual pedestrian environment. A total of 68 adults completed 12 virtual crossings within each of two virtual pedestrian environments, one delivered by smartphone and the other a semi-immersive kiosk virtual environment. Participants completed self-report measures of perceived realism and simulator sickness experienced in each virtual environment, plus self-reported demographic and personality characteristics. All participants followed system instructions and used the smartphone-based virtual environment without difficulty. No significant simulator sickness was reported or observed. Users rated the smartphone virtual environment as highly realistic. Convergent validity was detected, with many aspects of pedestrian behavior in the smartphone-based virtual environment matching behavior in the kiosk virtual environment. Anticipated correlations between personality and kiosk virtual reality pedestrian behavior emerged for the smartphone-based system. A smartphone-based virtual environment can be usable and valid. Future research should develop and evaluate such a training system.

  20. The New Cloud Absorption Radiometer (CAR) Software: One Model for NASA Remote Sensing Virtual Instruments

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Rapchun, David A.; Jones, Hollis H.

    2001-01-01

The Cloud Absorption Radiometer (CAR) instrument has been the most frequently used airborne instrument built in-house at NASA Goddard Space Flight Center, having flown scientific research missions on-board various aircraft to many locations in the United States, Azores, Brazil, and Kuwait since 1983. The CAR instrument is capable of measuring light scattered by clouds in fourteen spectral bands in the UV, visible and near-infrared regions. This document describes the control, data acquisition, display, and file storage software for the new version of CAR. This software completely replaces the prior CAR Data System and Control Panel with a compact and robust virtual instrument computer interface. Additionally, the instrument is now usable for the first time for taking data in an off-aircraft mode. The new instrument is controlled via a LabVIEW v5.1.1-developed software interface that utilizes (1) serial port writes to write commands to the controller module of the instrument, and (2) serial port reads to acquire data from the controller module of the instrument. Step-by-step operational procedures are provided in this document. A suite of other software programs has been developed to complement the actual CAR virtual instrument. These programs include: (1) a simulator mode that allows pretesting of new features that might be added in the future, as well as demonstrations to CAR customers and development at times when the instrument/hardware is off-location, and (2) a post-experiment data viewer that can be used to view all segments of individual data cycles and to locate positions where 'start' and 'stop' byte sequences were incorrectly formulated by the instrument controller. The CAR software described here is expected to be the basis for CAR operation for many missions and many years to come.
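
    The command/acquire pattern described, writing a command over a serial port and then reading back data, looks roughly like the following. This is a hedged Python/pyserial sketch rather than the actual LabVIEW implementation; port name, baud rate, and byte sequences are hypothetical:

    ```python
    import serial  # pyserial; an assumption -- the real system uses LabVIEW

    # Write a command to the instrument controller over a serial port,
    # then read back one block of raw data.
    def query_instrument(port="/dev/ttyS0", command=b"START\r", n_bytes=512):
        with serial.Serial(port, baudrate=9600, timeout=2) as ser:
            ser.write(command)        # serial port write: send a command
            return ser.read(n_bytes)  # serial port read: acquire raw data
    ```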

  1. 75 FR 27986 - Electronic Filing System-Web (EFS-Web) Contingency Option

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-19

    ...] Electronic Filing System--Web (EFS-Web) Contingency Option AGENCY: United States Patent and Trademark Office... availability of its patent electronic filing system, Electronic Filing System--Web (EFS-Web) by providing a new contingency option when the primary portal to EFS-Web has an unscheduled outage. Previously, the entire EFS...

  2. 29 CFR 102.119 - Privacy Act Regulations: notification as to whether a system of records contains records...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Regional Office Files (NLRB-25), Regional Advice and Injunction Litigation System (RAILS) and Associated Headquarters Files (NLRB-28), and Appeals Case Tracking System (ACTS) and Associated Headquarters Files (NLRB... Judicial Case Management Systems-Pending Case List (JCMS-PCL) and Associated Headquarters Files (NLRB-21...

  3. 29 CFR 102.119 - Privacy Act Regulations: notification as to whether a system of records contains records...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Regional Office Files (NLRB-25), Regional Advice and Injunction Litigation System (RAILS) and Associated Headquarters Files (NLRB-28), and Appeals Case Tracking System (ACTS) and Associated Headquarters Files (NLRB... Judicial Case Management Systems-Pending Case List (JCMS-PCL) and Associated Headquarters Files (NLRB-21...

  4. 29 CFR 102.119 - Privacy Act Regulations: notification as to whether a system of records contains records...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Regional Office Files (NLRB-25), Regional Advice and Injunction Litigation System (RAILS) and Associated Headquarters Files (NLRB-28), and Appeals Case Tracking System (ACTS) and Associated Headquarters Files (NLRB... Judicial Case Management Systems-Pending Case List (JCMS-PCL) and Associated Headquarters Files (NLRB-21...

  5. 29 CFR 102.119 - Privacy Act Regulations: notification as to whether a system of records contains records...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Regional Office Files (NLRB-25), Regional Advice and Injunction Litigation System (RAILS) and Associated Headquarters Files (NLRB-28), and Appeals Case Tracking System (ACTS) and Associated Headquarters Files (NLRB... Judicial Case Management Systems-Pending Case List (JCMS-PCL) and Associated Headquarters Files (NLRB-21...

  6. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    As the I/O needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. The interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. We discuss Galley's file structure and application interface, as well as an application that has been implemented using that interface.

  7. Deceit: A flexible distributed file system

    NASA Technical Reports Server (NTRS)

    Siegel, Alex; Birman, Kenneth; Marzullo, Keith

    1989-01-01

    Deceit, a distributed file system (DFS) being developed at Cornell, focuses on flexible file semantics in relation to efficiency, scalability, and reliability. Deceit servers are interchangeable and collectively provide the illusion of a single, large server machine to any clients of the Deceit service. Non-volatile replicas of each file are stored on a subset of the file servers. The user is able to set parameters on a file to achieve different levels of availability, performance, and one-copy serializability. Deceit also supports a file version control mechanism. In contrast with many recent DFS efforts, Deceit can behave like a plain Sun Network File System (NFS) server and can be used by any NFS client without modifying any client software. The current Deceit prototype uses the ISIS Distributed Programming Environment for all communication and process group management, an approach that reduces system complexity and increases system robustness.

  8. Winning against big tobacco. Let's take the time to get it right.

    PubMed Central

    Humphrey, H H

    1997-01-01

Three years ago, the state of Minnesota became the second state to sue the tobacco industry for wrongdoing and the first to charge consumer fraud and conspiracy. Together with our co-plaintiff, Blue Cross/Blue Shield of Minnesota, we filed a lawsuit against the six major U.S. cigarette manufacturers, two tobacco trade organizations, and British American Tobacco Industries (BAT), the parent company of Brown and Williamson. Specifically, the lawsuit alleges that the industry defrauded consumers and engaged in false advertising, deceptive practices, and anti-trust violations, including conspiracy to stifle development of safer cigarettes and to conceal information on smoking and health. Many later-filing states patterned complaints after Minnesota's, and virtually all have incorporated some or all of the claims first pled in the Minnesota complaint. PMID:9323388

  9. System-Level Virtualization Research at Oak Ridge National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott, Stephen L; Vallee, Geoffroy R; Naughton, III, Thomas J

    2010-01-01

System-level virtualization, which began as a technique for sharing what were then considered large computing resources, faded from the spotlight as individual workstations gained popularity with a one machine, one user approach; today it is enjoying a rebirth. One reason for this resurgence is that the simple workstation has grown in capability to rival anything available in the past. Thus, computing centers are again looking at the price/performance benefit of sharing a single computing box via server consolidation. However, industry is concentrating only on the benefits of using virtualization for server consolidation (enterprise computing), whereas our interest is in leveraging virtualization to advance high-performance computing (HPC). While these two interests may appear to be orthogonal, one consolidating multiple applications and users on a single machine while the other requires all the power from many machines to be dedicated solely to its purpose, we propose that virtualization does provide attractive capabilities that may be exploited to the benefit of HPC interests. This raises two fundamental questions: is the concept of virtualization (a machine-sharing technology) really suitable for HPC, and if so, how does one go about leveraging these virtualization capabilities for the benefit of HPC? To address these questions, this document presents ongoing studies on the usage of system-level virtualization in an HPC context. These studies include an analysis of the benefits of system-level virtualization for HPC, a presentation of research efforts based on virtualization for system availability, and a presentation of research efforts for the management of virtual systems. The basis for this document was material presented by Stephen L. Scott at the Collaborative and Grid Computing Technologies meeting held in Cancun, Mexico on April 12-14, 2007.

  10. An alternative model to distribute VO software to WLCG sites based on CernVM-FS: a prototype at PIC Tier1

    NASA Astrophysics Data System (ADS)

    Lanciotti, E.; Merino, G.; Bria, A.; Blomer, J.

    2011-12-01

In a distributed computing model such as WLCG, experiment-specific application software has to be efficiently distributed to any site on the Grid. Application software is currently installed in a shared area of the site, visible to all Worker Nodes (WNs) of the site through some protocol (NFS, AFS or other). The software is installed at the site by jobs which run on a privileged node of the computing farm where the shared area is mounted in write mode. This model presents several drawbacks which cause a non-negligible rate of job failure. An alternative model for software distribution based on the CERN Virtual Machine File System (CernVM-FS) has been tried at PIC, the Spanish Tier1 site of WLCG. The test bed used and the results are presented in this paper.

  11. Three-Dimensional Audio Client Library

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.

    2005-01-01

    The Three-Dimensional Audio Client Library (3DAudio library) is a group of software routines written to facilitate development of both stand-alone (audio only) and immersive virtual-reality application programs that utilize three-dimensional audio displays. The library is intended to enable the development of three-dimensional audio client application programs by use of a code base common to multiple audio server computers. The 3DAudio library calls vendor-specific audio client libraries and currently supports the AuSIM Gold-Server and Lake Huron audio servers. 3DAudio library routines contain common functions for (1) initiation and termination of a client/audio server session, (2) configuration-file input, (3) positioning functions, (4) coordinate transformations, (5) audio transport functions, (6) rendering functions, (7) debugging functions, and (8) event-list-sequencing functions. The 3DAudio software is written in the C++ programming language and currently operates under the Linux, IRIX, and Windows operating systems.

  12. 48 CFR 204.805 - Disposal of contract files.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Disposal of contract files. 204.805 Section 204.805 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM, DEPARTMENT OF DEFENSE GENERAL ADMINISTRATIVE MATTERS Contract Files 204.805 Disposal of contract files. (1...

  13. 48 CFR 204.804 - Closeout of contract files.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 3 2014-10-01 2014-10-01 false Closeout of contract files. 204.804 Section 204.804 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM, DEPARTMENT OF DEFENSE GENERAL ADMINISTRATIVE MATTERS Contract Files 204.804 Closeout of contract files. (1...

  14. 48 CFR 204.804 - Closeout of contract files.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Closeout of contract files. 204.804 Section 204.804 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM, DEPARTMENT OF DEFENSE GENERAL ADMINISTRATIVE MATTERS Contract Files 204.804 Closeout of contract files...

  15. 48 CFR 204.805 - Disposal of contract files.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 3 2014-10-01 2014-10-01 false Disposal of contract files. 204.805 Section 204.805 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM, DEPARTMENT OF DEFENSE GENERAL ADMINISTRATIVE MATTERS Contract Files 204.805 Disposal of contract files. (1...

  16. 48 CFR 204.805 - Disposal of contract files.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 3 2012-10-01 2012-10-01 false Disposal of contract files. 204.805 Section 204.805 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM, DEPARTMENT OF DEFENSE GENERAL ADMINISTRATIVE MATTERS Contract Files 204.805 Disposal of contract files. (1...

  17. 48 CFR 204.805 - Disposal of contract files.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false Disposal of contract files. 204.805 Section 204.805 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM, DEPARTMENT OF DEFENSE GENERAL ADMINISTRATIVE MATTERS Contract Files 204.805 Disposal of contract files. (1...

  18. 48 CFR 204.804 - Closeout of contract files.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Closeout of contract files. 204.804 Section 204.804 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM, DEPARTMENT OF DEFENSE GENERAL ADMINISTRATIVE MATTERS Contract Files 204.804 Closeout of contract files...

  19. 48 CFR 204.804 - Closeout of contract files.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 3 2012-10-01 2012-10-01 false Closeout of contract files. 204.804 Section 204.804 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM, DEPARTMENT OF DEFENSE GENERAL ADMINISTRATIVE MATTERS Contract Files 204.804 Closeout of contract files. (1...

  20. 48 CFR 204.805 - Disposal of contract files.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Disposal of contract files. 204.805 Section 204.805 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM, DEPARTMENT OF DEFENSE GENERAL ADMINISTRATIVE MATTERS Contract Files 204.805 Disposal of contract files. (1...

  1. 48 CFR 204.804 - Closeout of contract files.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false Closeout of contract files. 204.804 Section 204.804 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM, DEPARTMENT OF DEFENSE GENERAL ADMINISTRATIVE MATTERS Contract Files 204.804 Closeout of contract files. (1...

  2. User Guide: How to Use and Operate Virtual Reality Equipment in the Systems Assessment and Usability Laboratory (SAUL) for Conducting Demonstrations

    DTIC Science & Technology

    2017-08-01

ARL-TN-0839 ● AUG 2017 ● US Army Research Laboratory. User Guide: How to Use and Operate Virtual Reality Equipment in the Systems Assessment and Usability Laboratory (SAUL) for Conducting Demonstrations

  3. Virtual reality in surgical training.

    PubMed

    Lange, T; Indelicato, D J; Rosen, J M

    2000-01-01

    Virtual reality in surgery and, more specifically, in surgical training, faces a number of challenges in the future. These challenges are building realistic models of the human body, creating interface tools to view, hear, touch, feel, and manipulate these human body models, and integrating virtual reality systems into medical education and treatment. A final system would encompass simulators specifically for surgery, performance machines, telemedicine, and telesurgery. Each of these areas will need significant improvement for virtual reality to impact medicine successfully in the next century. This article gives an overview of, and the challenges faced by, current systems in the fast-changing field of virtual reality technology, and provides a set of specific milestones for a truly realistic virtual human body.

  4. The Convergence of High Performance Computing and Large Scale Data Analytics

    NASA Astrophysics Data System (ADS)

    Duffy, D.; Bowen, M. K.; Thompson, J. H.; Yang, C. P.; Hu, F.; Wills, B.

    2015-12-01

    As the combinations of remote sensing observations and model outputs have grown, scientists are increasingly burdened with both the necessity and complexity of large-scale data analysis. Scientists are increasingly applying traditional high performance computing (HPC) solutions to solve their "Big Data" problems. While this approach has the benefit of limiting data movement, the HPC system is not optimized to run analytics, which can create problems that permeate throughout the HPC environment. To solve these issues and to alleviate some of the strain on the HPC environment, the NASA Center for Climate Simulation (NCCS) has created the Advanced Data Analytics Platform (ADAPT), which combines both HPC and cloud technologies to create an agile system designed for analytics. Large, commonly used data sets are stored in this system in a write once/read many file system, such as Landsat, MODIS, MERRA, and NGA. High performance virtual machines are deployed and scaled according to the individual scientist's requirements specifically for data analysis. On the software side, the NCCS and GMU are working with emerging commercial technologies and applying them to structured, binary scientific data in order to expose the data in new ways. Native NetCDF data is being stored within a Hadoop Distributed File System (HDFS) enabling storage-proximal processing through MapReduce while continuing to provide accessibility of the data to traditional applications. Once the data is stored within HDFS, an additional indexing scheme is built on top of the data and placed into a relational database. This spatiotemporal index enables extremely fast mappings of queries to data locations to dramatically speed up analytics. These are some of the first steps toward a single unified platform that optimizes for both HPC and large-scale data analysis, and this presentation will elucidate the resulting and necessary exascale architectures required for future systems.
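
    The spatiotemporal indexing step can be sketched with a small relational table that maps time and bounding-box ranges to chunk locations. Schema, values, and the HDFS path below are assumptions for illustration:

    ```python
    import sqlite3

    # A tiny index mapping (time, bounding box) to data chunk locations,
    # so a query can be routed straight to the right data.
    con = sqlite3.connect(":memory:")
    con.execute("""CREATE TABLE chunk_index (
        path TEXT, t0 REAL, t1 REAL,
        lat0 REAL, lat1 REAL, lon0 REAL, lon1 REAL)""")
    con.execute("INSERT INTO chunk_index VALUES "
                "('hdfs://nccs/merra/1980_01.nc', 0, 744, -90, 90, -180, 180)")

    def locate(t, lat, lon):
        """Return the paths of chunks covering a query point."""
        rows = con.execute(
            "SELECT path FROM chunk_index WHERE ? BETWEEN t0 AND t1 "
            "AND ? BETWEEN lat0 AND lat1 AND ? BETWEEN lon0 AND lon1",
            (t, lat, lon))
        return [r[0] for r in rows]

    print(locate(100.0, 38.9, -77.0))
    ```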

  5. Computational studies of steering nanoparticles with magnetic gradients

    NASA Astrophysics Data System (ADS)

    Aylak, Sultan Suleyman

Magnetic Resonance Imaging (MRI) guided nanorobotic systems that could perform diagnostic, curative, and reconstructive treatments in the human body at the cellular and subcellular level in a controllable manner have recently been proposed. The concept of an MRI-guided nanorobotic system is based on the use of an MRI scanner to induce the required external driving forces to guide magnetic nanocapsules to a specific target. However, the maximum magnetic gradient specifications of existing clinical MRI systems are not capable of driving magnetic nanocapsules against the blood flow. This thesis presents the visualization of nanoparticles inside a blood vessel, a Graphical User Interface (GUI) for updating a file of initial parameters and demonstrating the particle simulation, and C++ code for computing magnetic and fluidic forces. The visualization and GUI were designed using the Virtual Reality Modeling Language (VRML), MATLAB, and C#. The addition of software for the MRI-guided nanorobotic system provides simulation results. Preliminary simulation results demonstrate that an external magnetic field causes aggregation of nanoparticles while they flow in the vessel. This promising result, in accordance with similar experimental results, encourages further investigation of nanoparticle-based self-assembly structures for use in nanorobotic drug delivery.
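
    The force balance that makes steering difficult can be estimated from standard formulas. The thesis's force code is in C++; the sketch below is a Python equivalent, and the particle size, susceptibility, field gradient, and flow speed are illustrative assumptions:

    ```python
    import numpy as np

    MU0 = 4e-7 * np.pi                    # vacuum permeability (T*m/A)

    def magnetic_force(chi, volume, B, gradB):
        """Magnetophoretic force ~ (chi*V/mu0) * B * dB/dx for small chi."""
        return chi * volume * B * gradB / MU0

    def stokes_drag(radius, velocity, mu=3.5e-3):
        """Stokes drag on a sphere in blood (viscosity ~ 3.5 mPa*s)."""
        return 6.0 * np.pi * mu * radius * velocity

    r = 250e-9                            # particle radius (m)
    V = 4.0 / 3.0 * np.pi * r**3
    f_mag = magnetic_force(chi=1.0, volume=V, B=1.5, gradB=0.4)  # 0.4 T/m
    f_drag = stokes_drag(r, velocity=0.1)                        # 0.1 m/s flow
    print(f_mag, f_drag)   # drag exceeds the magnetic force by orders of magnitude
    ```

    The several-orders-of-magnitude gap between the two forces at clinical gradient strengths is exactly why aggregation (which increases effective volume) matters for steerability.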

  6. TGeoCad: an Interface between ROOT and CAD Systems

    NASA Astrophysics Data System (ADS)

    Luzzi, C.; Carminati, F.

    2014-06-01

In the simulation of high energy physics experiments a very high precision in the description of the detector geometry is essential to achieve the required performance. The physicists in charge of the Monte Carlo simulation of the detector need to collaborate efficiently with the engineers working on the mechanical design of the detector. Often, this collaboration is made hard by the usage of different and incompatible software. ROOT is an object-oriented C++ framework used by physicists for storing, analyzing and simulating data produced by high-energy physics experiments, while CAD (Computer-Aided Design) software is used for mechanical design in the engineering field. The necessity to improve the level of communication between physicists and engineers led to the implementation of an interface between the ROOT geometrical modeler, used by the virtual Monte Carlo simulation software, and CAD systems. In this paper we describe the design and implementation of the TGeoCad interface that has been developed to enable the use of ROOT geometrical models in several CAD systems. To achieve this goal, the ROOT geometry description is converted into the STEP file format (ISO 10303), which can be imported and used by many CAD systems.

  7. pcircle - A Suite of Scalable Parallel File System Tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    WANG, FEIYI

    2015-10-01

Most software tools related to file systems are written for conventional local file systems; they are serialized and cannot take advantage of a large-scale parallel file system. The "pcircle" software builds on ubiquitous MPI in a cluster computing environment and a "work-stealing" pattern to provide a scalable, high-performance suite of file system tools. In particular, it implements parallel data copying and parallel data checksumming, with advanced features such as asynchronous progress reporting, checkpoint and restart, and integrity checking.
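
    The embarrassingly parallel core of the checksumming task can be shown with a local process pool. This is a simplified stand-in: pcircle itself uses MPI with work-stealing across cluster nodes, which this sketch does not reproduce, and the file list is hypothetical:

    ```python
    import hashlib
    from multiprocessing import Pool

    # Hash many files concurrently with a process pool.
    def checksum(path, chunk=1 << 20):
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(chunk), b""):
                h.update(block)
        return path, h.hexdigest()

    if __name__ == "__main__":
        files = ["a.dat", "b.dat", "c.dat"]   # hypothetical file list
        with Pool() as pool:
            for path, digest in pool.map(checksum, files):
                print(path, digest)
    ```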

  8. Optimizing Input/Output Using Adaptive File System Policies

    NASA Technical Reports Server (NTRS)

    Madhyastha, Tara M.; Elford, Christopher L.; Reed, Daniel A.

    1996-01-01

    Parallel input/output characterization studies and experiments with flexible resource management algorithms indicate that adaptivity is crucial to file system performance. In this paper we propose an automatic technique for selecting and refining file system policies based on application access patterns and execution environment. An automatic classification framework allows the file system to select appropriate caching and pre-fetching policies, while performance sensors provide feedback used to tune policy parameters for specific system environments. To illustrate the potential performance improvements possible using adaptive file system policies, we present results from experiments involving classification-based and performance-based steering.
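
    A toy version of classification-based policy selection: classify a window of recent block offsets, then map the class to a prefetching policy. The heuristic, thresholds, and policy names are assumptions, not the paper's classifier:

    ```python
    # Classify an access pattern from consecutive block offsets and pick
    # a file system policy accordingly.
    def classify(offsets):
        steps = [b - a for a, b in zip(offsets, offsets[1:])]
        if steps and all(s == steps[0] > 0 for s in steps):
            return "sequential" if steps[0] == 1 else "strided"
        return "random"

    POLICY = {"sequential": "prefetch-ahead",
              "strided": "strided-prefetch",
              "random": "no-prefetch, LRU cache"}

    for trace in ([1, 2, 3, 4], [0, 8, 16, 24], [5, 1, 9, 2]):
        print(trace, "->", POLICY[classify(trace)])
    ```

    Performance sensors would then close the loop, e.g. shrinking the prefetch depth when the observed hit rate drops.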

  9. Education about Hallucinations Using an Internet Virtual Reality System: A Qualitative Survey

    ERIC Educational Resources Information Center

    Yellowlees, Peter M.; Cook, James N.

    2006-01-01

    Objective: The authors evaluate an Internet virtual reality technology as an education tool about the hallucinations of psychosis. Method: This is a pilot project using Second Life, an Internet-based virtual reality system, in which a virtual reality environment was constructed to simulate the auditory and visual hallucinations of two patients…

  10. Apically extruded dentin debris by reciprocating single-file and multi-file rotary system.

    PubMed

    De-Deus, Gustavo; Neves, Aline; Silva, Emmanuel João; Mendonça, Thais Accorsi; Lourenço, Caroline; Calixto, Camila; Lima, Edson Jorge Moreira

    2015-03-01

This study aims to evaluate the apical extrusion of debris by two reciprocating single-file systems: WaveOne and Reciproc. A conventional multi-file rotary system was used as a reference for comparison. The hypotheses tested were (i) the reciprocating single-file systems extrude more than the conventional multi-file rotary system and (ii) the reciprocating single-file systems extrude similar amounts of dentin debris. After solid selection criteria, 80 mesial roots of lower molars were included in the present study. The use of four different instrumentation techniques resulted in four groups (n = 20): G1 (hand-file technique), G2 (ProTaper), G3 (WaveOne), and G4 (Reciproc). The apparatus used to evaluate the collection of apically extruded debris was a typical double-chamber collector. Statistical analysis was performed for multiple comparisons. No significant difference was found in the amount of debris extruded between the two reciprocating systems. In contrast, the conventional multi-file rotary system group extruded significantly more debris than both reciprocating groups. The hand instrumentation group extruded significantly more debris than all other groups. The present results yielded favorable input for both reciprocating single-file systems, inasmuch as they showed an improved control of apically extruded debris. Apical extrusion of debris has been studied extensively because of its clinical relevance, particularly since it may cause flare-ups, originated by the introduction of bacteria, pulpal tissue, and irrigating solutions into the periapical tissues.

  11. Sacramento-Watt Avenue transit priority and mobility enhancement demonstration project: phase III evaluation report

    DOT National Transportation Integrated Search

    2001-02-01

    The Minnesota data system includes the following basic files: Accident data (Accident File, Vehicle File, Occupant File); Roadlog File; Reference Post File; Traffic File; Intersection File; Bridge (Structures) File; and RR Grade Crossing File. For ea...

  12. On-Board File Management and Its Application in Flight Operations

    NASA Technical Reports Server (NTRS)

    Kuo, N.

    1998-01-01

    In this paper, the author presents the minimum functions required for an on-board file management system. We explore file manipulation processes and demonstrate how the file transfer along with the file management system will be utilized to support flight operations and data delivery.

  13. 47 CFR 1.10006 - Is electronic filing mandatory?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 1 2014-10-01 2014-10-01 false Is electronic filing mandatory? 1.10006 Section... Random Selection International Bureau Filing System § 1.10006 Is electronic filing mandatory? Electronic... International Bureau Filing System (IBFS) form is available. Applications for which an electronic form is not...

  14. 47 CFR 1.10006 - Is electronic filing mandatory?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 1 2012-10-01 2012-10-01 false Is electronic filing mandatory? 1.10006 Section... Random Selection International Bureau Filing System § 1.10006 Is electronic filing mandatory? Electronic... International Bureau Filing System (IBFS) form is available. Applications for which an electronic form is not...

  15. 47 CFR 1.10006 - Is electronic filing mandatory?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 1 2013-10-01 2013-10-01 false Is electronic filing mandatory? 1.10006 Section... Random Selection International Bureau Filing System § 1.10006 Is electronic filing mandatory? Electronic... International Bureau Filing System (IBFS) form is available. Applications for which an electronic form is not...

  16. 10 CFR 2.302 - Filing of documents.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... this part shall be electronically transmitted through the E-Filing system, unless the Commission or... all methods of filing have been completed. (e) For filings by electronic transmission, the filer must... digital ID certificates, the NRC permits participants in the proceeding to access the E-Filing system to...

  17. 48 CFR 1404.805 - Disposal of contract files.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 5 2013-10-01 2013-10-01 false Disposal of contract files. 1404.805 Section 1404.805 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR GENERAL ADMINISTRATIVE MATTERS Contract Files 1404.805 Disposal of contract files. Disposition of files shall be...

  18. 48 CFR 1404.802 - Contract files.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Contract files. 1404.802 Section 1404.802 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR GENERAL ADMINISTRATIVE MATTERS Contract Files 1404.802 Contract files. In addition to the requirements in FAR 4.802, files shall...

  19. 48 CFR 1404.802 - Contract files.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 5 2013-10-01 2013-10-01 false Contract files. 1404.802 Section 1404.802 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR GENERAL ADMINISTRATIVE MATTERS Contract Files 1404.802 Contract files. In addition to the requirements in FAR 4.802, files shall...

  20. 48 CFR 1404.802 - Contract files.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 5 2012-10-01 2012-10-01 false Contract files. 1404.802 Section 1404.802 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR GENERAL ADMINISTRATIVE MATTERS Contract Files 1404.802 Contract files. In addition to the requirements in FAR 4.802, files shall...

  1. 48 CFR 1404.805 - Disposal of contract files.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 5 2014-10-01 2014-10-01 false Disposal of contract files. 1404.805 Section 1404.805 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR GENERAL ADMINISTRATIVE MATTERS Contract Files 1404.805 Disposal of contract files. Disposition of files shall be...

  2. 48 CFR 1404.805 - Disposal of contract files.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Disposal of contract files. 1404.805 Section 1404.805 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR GENERAL ADMINISTRATIVE MATTERS Contract Files 1404.805 Disposal of contract files. Disposition of files shall be...

  3. 48 CFR 1404.805 - Disposal of contract files.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Disposal of contract files. 1404.805 Section 1404.805 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR GENERAL ADMINISTRATIVE MATTERS Contract Files 1404.805 Disposal of contract files. Disposition of files shall be...

  4. 48 CFR 1404.802 - Contract files.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 5 2014-10-01 2014-10-01 false Contract files. 1404.802 Section 1404.802 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR GENERAL ADMINISTRATIVE MATTERS Contract Files 1404.802 Contract files. In addition to the requirements in FAR 4.802, files shall...

  5. 48 CFR 1404.805 - Disposal of contract files.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 5 2012-10-01 2012-10-01 false Disposal of contract files. 1404.805 Section 1404.805 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR GENERAL ADMINISTRATIVE MATTERS Contract Files 1404.805 Disposal of contract files. Disposition of files shall be...

  6. 48 CFR 1404.802 - Contract files.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Contract files. 1404.802 Section 1404.802 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR GENERAL ADMINISTRATIVE MATTERS Contract Files 1404.802 Contract files. In addition to the requirements in FAR 4.802, files shall...

  7. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

  8. Virtual data

    NASA Astrophysics Data System (ADS)

    Bjorklund, E.

    1994-12-01

    In the 1970s, when computers were memory limited, operating system designers created the concept of "virtual memory", which gave users the ability to address more memory than physically existed. In the 1990s, many large control systems have the potential of becoming data limited. We propose that many of the principles behind virtual memory systems (working sets, locality, caching and clustering) can also be applied to data-limited systems, creating, in effect, "virtual data systems". At the Los Alamos National Laboratory's Clinton P. Anderson Meson Physics Facility (LAMPF), we have applied these principles to a moderately sized (10 000 data points) data acquisition and control system. To test the principles, we measured the system's performance during tune-up, production, and maintenance periods. In this paper, we present a general discussion of the principles of a virtual data system along with some discussion of our own implementation and the results of our performance measurements.
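
    The analogy to virtual memory suggests a straightforward miniature: a fixed-capacity cache over a slow fetch path, evicting by least-recent use so the working set stays resident. This is purely illustrative; the LAMPF implementation is not described at this level of detail, and the channel names are hypothetical:

    ```python
    from collections import OrderedDict

    # A fixed-size LRU cache of data-point values; misses fall through to
    # a slow fetch, exactly as pages do in a virtual memory system.
    class VirtualData:
        def __init__(self, fetch, capacity=1024):
            self.fetch, self.capacity = fetch, capacity
            self.cache = OrderedDict()

        def read(self, point):
            if point in self.cache:
                self.cache.move_to_end(point)       # mark recently used
            else:
                if len(self.cache) >= self.capacity:
                    self.cache.popitem(last=False)  # evict least recently used
                self.cache[point] = self.fetch(point)
            return self.cache[point]

    vd = VirtualData(fetch=lambda p: f"value-of-{p}", capacity=3)
    for p in ["BPM01", "BPM02", "BPM01", "QUAD07", "BEND12", "BPM02"]:
        vd.read(p)
    ```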

  9. Virtual reality: past, present and future.

    PubMed

    Gobbetti, E; Scateni, R

    1998-01-01

    This report provides a short survey of the field of virtual reality, highlighting application domains, technological requirements, and currently available solutions. The report is organized as follows: section 1 presents the background and motivation of virtual environment research and identifies typical application domain, section 2 discusses the characteristics a virtual reality system must have in order to exploit the perceptual and spatial skills of users, section 3 surveys current input/output devices for virtual reality, section 4 surveys current software approaches to support the creation of virtual reality systems, and section 5 summarizes the report.

  10. Units of Instruction for Vocational Office Education. Volume 1. Filing, Office Machines, and General Office Clerical Occupations. Teacher's Guide.

    ERIC Educational Resources Information Center

    East Texas State Univ., Commerce. Occupational Curriculum Lab.

    Nineteen units on filing, office machines, and general office clerical occupations are presented in this teacher's guide. The unit topics include indexing, alphabetizing, and filing (e.g., business names); labeling and positioning file folders and guides; establishing a correspondence filing system; utilizing charge-out and follow-up file systems;…

  11. Advanced Collaborative Environments Supporting Systems Integration and Design

    DTIC Science & Technology

    2003-03-01

concurrently view a virtual system or product model while maintaining natural, human communication. These virtual systems operate within a computer-generated...These environments allow multiple individuals to concurrently view a virtual system or product model while simultaneously maintaining natural, human communication. As a result, TARDEC researchers and system developers are using this advanced high-end visualization technology to develop future

  12. Psychobiological Assessment and Enhancement of Team Cohesion and Psychological Resilience in ROTC Cadets Using a Virtual-Reality Team Cohesion Test

    DTIC Science & Technology

    2017-06-01

assigned to the individual study. The same is done in case only two people or one person show(s) up. Storing of large data files: When preparing...AWARD NUMBER: W81XWH-15-1-0042 TITLE: Psychobiological Assessment & Enhancement of Team Cohesion and Psychological Resilience in ROTC Cadets...

  13. An interactive three-dimensional virtual body structures system for anatomical training over the internet.

    PubMed

    Temkin, Bharti; Acosta, Eric; Malvankar, Ameya; Vaidyanath, Sreeram

    2006-04-01

    The Visible Human digital datasets make it possible to develop computer-based anatomical training systems that use virtual anatomical models (virtual body structures-VBS). Medical schools are combining these virtual training systems and classical anatomy teaching methods that use labeled images and cadaver dissection. In this paper we present a customizable web-based three-dimensional anatomy training system, W3D-VBS. W3D-VBS uses National Library of Medicine's (NLM) Visible Human Male datasets to interactively locate, explore, select, extract, highlight, label, and visualize, realistic 2D (using axial, coronal, and sagittal views) and 3D virtual structures. A real-time self-guided virtual tour of the entire body is designed to provide detailed anatomical information about structures, substructures, and proximal structures. The system thus facilitates learning of visuospatial relationships at a level of detail that may not be possible by any other means. The use of volumetric structures allows for repeated real-time virtual dissections, from any angle, at the convenience of the user. Volumetric (3D) virtual dissections are performed by adding, removing, highlighting, and labeling individual structures (and/or entire anatomical systems). The resultant virtual explorations (consisting of anatomical 2D/3D illustrations and animations), with user selected highlighting colors and label positions, can be saved and used for generating lesson plans and evaluation systems. Tracking users' progress using the evaluation system helps customize the curriculum, making W3D-VBS a powerful learning tool. Our plan is to incorporate other Visible Human segmented datasets, especially datasets with higher resolutions, that make it possible to include finer anatomical structures such as nerves and small vessels. (c) 2006 Wiley-Liss, Inc.

  14. NASIS data base management system - IBM 360/370 OS MVT implementation. 6: NASIS message file

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The message file for the NASA Aerospace Safety Information System (NASIS) is discussed. The message file contains all the message and term explanations for the system. The data contained in the file can be broken down into three separate sections: (1) global terms, (2) local terms, and (3) system messages. The various terms are defined and their use within the system is explained.
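
    The three-way split the abstract describes suggests a simple keyed lookup. The sketch below is a hypothetical illustration of such a message file in modern terms; the Section enum, the record contents, and the lookup order are assumptions for illustration, not the IBM 360/370 implementation.

        from enum import Enum

        class Section(Enum):
            GLOBAL_TERMS = "global"        # terms shared across the whole system
            LOCAL_TERMS = "local"          # terms specific to one subsystem
            SYSTEM_MESSAGES = "messages"   # messages shown to users at runtime

        # Toy message file: each section maps a term or message code to its
        # explanation, mirroring the abstract's three sections.
        message_file = {
            Section.GLOBAL_TERMS: {"RECORD": "A single unit of stored data."},
            Section.LOCAL_TERMS: {"NASIS": "NASA Aerospace Safety Information System."},
            Section.SYSTEM_MESSAGES: {"E001": "Term not found in any section."},
        }

        def explain(term: str) -> str:
            """Resolve a term or message code, trying local terms first,
            then global terms, then system messages."""
            for section in (Section.LOCAL_TERMS, Section.GLOBAL_TERMS,
                            Section.SYSTEM_MESSAGES):
                if term in message_file[section]:
                    return message_file[section][term]
            return message_file[Section.SYSTEM_MESSAGES]["E001"]

        print(explain("NASIS"))  # -> NASA Aerospace Safety Information System.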

  15. NASIS data base management system: IBM 360 TSS implementation. Volume 6: NASIS message file

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The message file for the NASA Aerospace Safety Information System (NASIS) is discussed. The message file contains all the message and term explanations for the system. The data contained in the file can be broken down into three separate sections: (1) global terms, (2) local terms, and (3) system messages. The various terms are defined and their use within the system is explained.

  16. Dynamic Non-Hierarchical File Systems for Exascale Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, Darrell E.; Miller, Ethan L

    This constitutes the final report for “Dynamic Non-Hierarchical File Systems for Exascale Storage”. The ultimate goal of this project was to improve data management in scientific computing and high-end computing (HEC) applications. To achieve this goal we proposed: to develop the first HEC-targeted file system featuring rich metadata and provenance collection, extreme scalability, and future storage hardware integration as core design goals; and to evaluate and develop a flexible non-hierarchical file system interface suitable for providing more powerful and intuitive data management interfaces to HEC and scientific computing users. Data management is swiftly becoming a serious problem in the scientific community – while copious amounts of data are good for obtaining results, finding the right data is often daunting and sometimes impossible. Scientists participating in a Department of Energy workshop noted that most of their time was spent “...finding, processing, organizing, and moving data and it’s going to get much worse”. Scientists should not be forced to become data mining experts in order to retrieve the data they want, nor should they be expected to remember the naming convention they used several years ago for a set of experiments they now wish to revisit. Ideally, locating the data you need would be as easy as browsing the web. Unfortunately, existing data management approaches are usually based on hierarchical naming, a 40-year-old technology designed to manage thousands of files, not exabytes of data. Today’s systems do not take advantage of the rich array of metadata that current HEC file systems can gather, including content-based metadata and provenance information. As a result, current metadata search approaches are typically ad hoc and often work by providing a parallel management system to the “main” file system, as is done in Linux (the locate utility), personal computers, and enterprise search appliances. These search applications are often optimized for a single file system, making it difficult to move files and their metadata between file systems. Users have tried to solve this problem in several ways, including the use of separate databases to index file properties, the encoding of file properties into file names, and separately gathering and managing provenance data, but none of these approaches has worked well, due to limited usefulness, limited scalability, or both. Our research addressed several key issues: high-performance, real-time metadata harvesting (extracting important attributes from files dynamically and immediately updating the indexes used to improve search); transparent, automatic, and secure provenance capture (recording the data inputs and processing steps used in the production of each file in the system); scalable indexing (indexes optimized for integration with the file system); and dynamic file system structure (our approach provides dynamic directories similar to those in semantic file systems, but as the native organization rather than a feature grafted onto a conventional system). In addition to these goals, our research effort included evaluating the impact of new storage technologies on file system design and performance. In particular, the indexing and metadata harvesting functions can potentially benefit from the performance improvements promised by new storage class memories.
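
    The report's contrast between hierarchical naming and attribute-based access can be made concrete with a toy inverted index: metadata is harvested as files arrive, and a "directory" is computed on demand as a query over attributes rather than stored as a fixed hierarchy. This is a minimal sketch of that idea only; the index structure, harvest API, and query form are assumptions, not the project's actual design.

        from collections import defaultdict

        # Inverted index: (attribute, value) -> set of file identifiers.
        index: dict[tuple[str, str], set[str]] = defaultdict(set)

        def harvest(file_id: str, attributes: dict[str, str]) -> None:
            """Real-time metadata harvesting: update the index as a file
            is ingested, so search results are immediately current."""
            for key, value in attributes.items():
                index[(key, value)].add(file_id)

        def dynamic_directory(**query: str) -> set[str]:
            """A 'directory' is the set of files matching all query
            attributes, computed on demand instead of stored as a path."""
            results = [index[(k, v)] for k, v in query.items()]
            return set.intersection(*results) if results else set()

        harvest("run-042.h5", {"experiment": "plasma", "year": "2011", "owner": "elm"})
        harvest("run-043.h5", {"experiment": "plasma", "year": "2012", "owner": "elm"})
        print(dynamic_directory(experiment="plasma", owner="elm"))
        # -> {'run-042.h5', 'run-043.h5'}

    The same query mechanism sidesteps the naming-convention problem the report describes: a scientist asks for files by experiment and owner instead of recalling a directory path chosen years earlier.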

  17. The Effects of a Virtual Tutee System on Academic Reading Engagement in a College Classroom

    ERIC Educational Resources Information Center

    Park, Seung Won; Kim, ChanMin

    2016-01-01

    Poor student engagement with academic readings has been frequently reported in college classrooms. As an effort to improve college students' reading engagement, researchers have developed a virtual environment in which students take on the role of tutor and teach a virtual tutee, the virtual tutee system (VTS). This research examined the…

  18. Intelligent Virtual Assistant's Impact on Technical Proficiency within Virtual Teams

    ERIC Educational Resources Information Center

    Graham, Christian; Jones, Nory B.

    2016-01-01

    Information-systems development continues to be a difficult process, particularly for virtual teams that do not have the luxury of meeting face-to-face. The research literature on this topic reinforces this point: the greater part of database systems development projects ends in failure. The use of virtual teams to complete projects further…

  19. V-ROOM: a virtual meeting system with intelligent structured summarisation

    NASA Astrophysics Data System (ADS)

    James, Anne E.; Nanos, Antonios G.; Thompson, Philip

    2016-10-01

    With the growth of virtual organisations and multinational companies, virtual collaboration tasks are becoming more important for employees. This paper describes the development of a virtual meeting system called V-ROOM. An exploration of the facilities required in such a system was conducted. The findings highlighted that intelligent support is needed, especially since the volume of information individuals must absorb and process is vast. The survey results showed that meeting summarisation is one of the most important new features to add to virtual meeting systems for enterprises. This paper highlights the innovative methods employed in V-ROOM to produce relevant meeting summaries. V-ROOM's approach is compared to other methods from the literature, and it is shown how the use of metadata provided by parts of the V-ROOM system can improve the quality of the summaries produced.
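
    The abstract does not spell out V-ROOM's algorithm, but the claim that meeting metadata improves summaries can be illustrated with a small extractive scorer that biases utterance selection using metadata signals. Everything below -- the Utterance fields, the weights, and the scoring rule -- is an illustrative assumption, not the paper's method.

        from dataclasses import dataclass

        @dataclass
        class Utterance:
            speaker: str
            text: str
            agenda_item: str | None   # metadata: which agenda item was active
            is_action_item: bool      # metadata: flagged as an action during the meeting

        def score(u: Utterance, chair: str) -> float:
            """Combine simple content and metadata signals into one relevance score."""
            s = len(u.text.split()) / 50.0   # longer utterances carry more content
            if u.agenda_item is not None:
                s += 1.0                     # on-agenda talk outranks side chatter
            if u.is_action_item:
                s += 2.0                     # decisions and actions dominate
            if u.speaker == chair:
                s += 0.5                     # the chair's utterances frame the meeting
            return s

        def summarise(transcript: list[Utterance], chair: str, k: int = 3) -> list[str]:
            """Pick the k highest-scoring utterances, kept in meeting order."""
            top = sorted(transcript, key=lambda u: score(u, chair), reverse=True)[:k]
            return [u.text for u in transcript if u in top]

    A plain extractive summariser would rely on the text alone; the metadata terms are what let structured meeting context (agenda, actions, roles) shape the summary, which is the general effect the abstract attributes to V-ROOM.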

  20. A Desktop Virtual Reality Earth Motion System in Astronomy Education

    ERIC Educational Resources Information Center

    Chen, Chih Hung; Yang, Jie Chi; Shen, Sarah; Jeng, Ming Chang

    2007-01-01

    In this study, a desktop virtual reality earth motion system (DVREMS) is designed and developed to be applied in the classroom. The system is implemented to assist elementary school students in clarifying earth motion concepts using virtual reality principles. A study was conducted to observe the influence of the proposed system on learning…
