Storage system software solutions for high-end user needs
NASA Technical Reports Server (NTRS)
Hogan, Carole B.
1992-01-01
Today's high-end storage user requires rapid access to a reliable terabyte-capacity storage system running in a distributed environment. This paper discusses conventional storage system software and concludes that this software, designed for other purposes, cannot meet high-end storage requirements. The paper also reviews the philosophy and design of evolving storage system software. It concludes that this new software, designed with high-end requirements in mind, has the potential to solve not only the storage needs of today but those of the foreseeable future as well.
Open systems storage platforms
NASA Technical Reports Server (NTRS)
Collins, Kirby
1992-01-01
The building blocks for an open storage system include a system platform, a selection of storage devices and interfaces, system software, and storage applications. CONVEX storage systems are based on the DS Series Data Server systems. These systems are a variant of the C3200 supercomputer with expanded I/O capabilities, and they support a variety of medium- and high-speed interfaces to networks and peripherals. System software is provided in the form of ConvexOS, a POSIX-compliant derivative of 4.3BSD UNIX. Storage applications include products such as UNITREE and EMASS. With the DS Series of storage systems, Convex has developed a set of products which provide open-system solutions for storage management applications. The systems are highly modular, assembled from off-the-shelf components with industry-standard interfaces. The C Series system architecture provides a stable base, with the performance and reliability of a general-purpose platform. This combination of a proven system architecture with a variety of choices in peripherals and application software allows wide flexibility in configurations, and delivers the benefits of open systems to the mass storage world.
75 FR 38945 - Airworthiness Directives; The Boeing Company Model 777-200 and -300 Series Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-07
..., 2006. The service bulletin describes procedures for installing new operational software in the cabin... loading the new cabin services system central storage device software and CSCP OPS into the MMC. FAA's... cabin services system central storage device software and cabin system control panel operational...
Data storage technology: Hardware and software, Appendix B
NASA Technical Reports Server (NTRS)
Sable, J. D.
1972-01-01
This project involves the development of more economical ways of integrating and interfacing new storage devices and data processing programs into a computer system. It involves developing interface standards and a software/hardware architecture which will make it possible to develop machine-independent devices and programs. These will interface with the machine-dependent operating systems of particular computers. The goal of the development project is not to develop the software which would ordinarily be the responsibility of the manufacturer to supply, but to develop the standards with which that software is expected to conform in providing an interface with the user or storage system.
NASA Technical Reports Server (NTRS)
1973-01-01
A description of each of the software modules of the Image Data Processing System (IDAPS) is presented. The changes in the software modules are the result of additions to the application software of the system and an upgrade of the IBM 7094 Mod(1) computer to a 1301 disk storage configuration. Necessary information about IDAPS software is supplied to the computer programmer who desires to make changes in the software system or who desires to use portions of the software outside of the IDAPS system. Each software module is documented with: module name, purpose, usage, common block(s) description, method (algorithm of subroutine), flow diagram (if needed), subroutines called, and storage requirements.
iSDS: a self-configurable software-defined storage system for enterprise
NASA Astrophysics Data System (ADS)
Chen, Wen-Shyen Eric; Huang, Chun-Fang; Huang, Ming-Jen
2018-01-01
Storage is one of the most important aspects of IT infrastructure for various enterprises. But enterprises are interested in more than just data storage; they are interested in such things as more reliable data protection, higher performance, and reduced resource consumption. Traditional enterprise-grade storage satisfies these requirements at high cost, because it is usually designed and constructed with customised field-programmable gate arrays to achieve high-end functionality. However, in this ever-changing environment, enterprises request storage with more flexible deployment and at lower cost. Moreover, the rise of new application fields, such as social media, big data, and video streaming services, makes operational tasks for administrators more complex. In this article, a new storage system called intelligent software-defined storage (iSDS), based on software-defined storage, is described. More specifically, this approach advocates using software to replace features provided by traditional customised chips. To alleviate the management burden, it also advocates applying machine learning to automatically configure storage to meet the dynamic requirements of workloads running on the storage. This article focuses on the analysis feature of the iSDS cluster by detailing its architecture and design.
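To make the self-configuration idea concrete, here is a minimal sketch of a feedback loop that classifies a workload from observed metrics and selects a storage policy. The thresholds, labels, and policies are invented for illustration; iSDS itself learns such a mapping with machine learning rather than fixed rules.

```python
# Illustrative sketch only: classify a workload from coarse metrics and pick
# a storage policy. All thresholds, labels, and policy values are invented.
def classify(iops, avg_request_kb):
    if avg_request_kb >= 256:
        return "streaming"
    return "transactional" if iops > 10_000 else "mixed"

POLICY = {
    "streaming":     {"stripe_kb": 1024, "cache": "read-ahead"},
    "transactional": {"stripe_kb": 8,    "cache": "write-back"},
    "mixed":         {"stripe_kb": 64,   "cache": "adaptive"},
}

def reconfigure(metrics):
    workload = classify(metrics["iops"], metrics["avg_request_kb"])
    return workload, POLICY[workload]

# Small OLTP-like requests at high IOPS map to the transactional policy.
print(reconfigure({"iops": 25_000, "avg_request_kb": 4}))
```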
Current state of the mass storage system reference model
NASA Technical Reports Server (NTRS)
Coyne, Robert
1993-01-01
IEEE SSSWG was chartered in May 1990 to abstract the hardware and software components of existing and emerging storage systems and to define the software interfaces between these components. The immediate goal is the decomposition of a storage system into interoperable functional modules which vendors can offer as separate commercial products. The ultimate goal is to develop interoperable standards which define the software interfaces, and in the distributed case, the associated protocols to each of the architectural modules in the model. The topics are presented in viewgraph form and include the following: IEEE SSSWG organization; IEEE SSSWG subcommittees & chairs; IEEE standards activity board; layered view of the reference model; layered access to storage services; IEEE SSSWG emphasis; and features for MSSRM version 5.
Mass Storage System Upgrades at the NASA Center for Computational Sciences
NASA Technical Reports Server (NTRS)
Tarshish, Adina; Salmon, Ellen; Macie, Medora; Saletta, Marty
2000-01-01
The NASA Center for Computational Sciences (NCCS) provides supercomputing and mass storage services to over 1200 Earth and space scientists. During the past two years, the mass storage system at the NCCS went through a great deal of changes both major and minor. Tape drives, silo control software, and the mass storage software itself were upgraded, and the mass storage platform was upgraded twice. Some of these upgrades were aimed at achieving year-2000 compliance, while others were simply upgrades to newer and better technologies. In this paper we will describe these upgrades.
Expert systems applied to fault isolation and energy storage management, phase 2
NASA Technical Reports Server (NTRS)
1987-01-01
A user's guide for the Fault Isolation and Energy Storage (FIES) II system is provided. Included are a brief discussion of the background and scope of this project, a discussion of basic and advanced operating installation and problem determination procedures for the FIES II system and information on hardware and software design and implementation. A number of appendices are provided including a detailed specification for the microprocessor software, a detailed description of the expert system rule base and a description and listings of the LISP interface software.
An efficient, modular and simple tape archiving solution for LHC Run-3
NASA Astrophysics Data System (ADS)
Murray, S.; Bahyl, V.; Cancio, G.; Cano, E.; Kotlyar, V.; Kruse, D. F.; Leduc, J.
2017-10-01
The IT Storage group at CERN develops the software responsible for archiving to tape the custodial copy of the physics data generated by the LHC experiments. Physics run 3 will start in 2021 and will introduce two major challenges for which the tape archive software must be evolved. Firstly, the software will need to make more efficient use of tape drives in order to sustain the predicted data rate of 150 petabytes per year, as opposed to the current 50 petabytes per year. Secondly, the software will need to be seamlessly integrated with EOS, which has become the de facto disk storage system provided by the IT Storage group for physics data. The tape storage software for LHC physics run 3 is code-named CTA (the CERN Tape Archive). This paper describes how CTA will introduce a pre-emptive drive scheduler to use tape drives more efficiently, will encapsulate all tape software into a single module that will sit behind one or more EOS systems, and will be simpler by dropping support for obsolete backwards compatibility.
Architecture for removable media USB-ARM
Shue, Craig A.; Lamb, Logan M.; Paul, Nathanael R.
2015-07-14
A storage device is coupled to a computing system comprising an operating system and application software. Access to the storage device is blocked by a kernel filter driver, except exclusive access is granted to a first anti-virus engine. The first anti-virus engine is directed to scan the storage device for malicious software and report results. Exclusive access may be granted to one or more other anti-virus engines and they may be directed to scan the storage device and report results. Approval of all or a portion of the information on the storage device is based on the results from the first anti-virus engine and the other anti-virus engines. The storage device is presented to the operating system and access is granted to the approved information. The operating system may be a Microsoft Windows operating system. The kernel filter driver and usage of anti-virus engines may be configurable by a user.
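The scan-then-approve flow described in this patent abstract can be sketched as follows. This is a userspace Python analogue of the idea, not the kernel filter driver itself, and every name in it is hypothetical.

```python
# Sketch: a device stays blocked until every scanner approves its contents.
from dataclasses import dataclass, field

@dataclass
class StorageDevice:
    files: dict                       # path -> bytes
    approved: set = field(default_factory=set)
    unblocked: bool = False

def gate_device(device, engines):
    """Grant each engine (a callable returning the set of clean paths)
    access in turn, and intersect their verdicts."""
    clean = set(device.files)
    for engine in engines:
        clean &= engine(device.files)
    device.approved = clean
    device.unblocked = True           # present device; only approved paths readable

def read(device, path):
    if not device.unblocked or path not in device.approved:
        raise PermissionError(f"{path} is blocked pending approval")
    return device.files[path]

dev = StorageDevice({"a.txt": b"ok", "b.exe": b"\x90\x90"})
flag_exe = lambda files: {p for p in files if not p.endswith(".exe")}
gate_device(dev, [flag_exe])
print(read(dev, "a.txt"))             # allowed; read(dev, "b.exe") would raise
```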
Enabling Co-Design of Multi-Layer Exascale Storage Architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carothers, Christopher
Growing demands for computing power in applications such as energy production, climate analysis, computational chemistry, and bioinformatics have propelled computing systems toward the exascale: systems with 10^18 floating-point operations per second. These systems, to be designed and constructed over the next decade, will create unprecedented challenges in component counts, power consumption, resource limitations, and system complexity. Data storage and access are an increasingly important and complex component in extreme-scale computing systems, and significant design work is needed to develop successful storage hardware and software architectures at exascale. Co-design of these systems will be necessary to find the best possible design points for exascale systems. The goal of this work has been to enable the exploration and co-design of exascale storage systems by providing a detailed, accurate, and highly parallel simulation of exascale storage and the surrounding environment. Specifically, this simulation has (1) portrayed realistic application checkpointing and analysis workloads, (2) captured the complexity, scale, and multilayer nature of exascale storage hardware and software, and (3) executed in a timeframe that enables "what if" exploration of design concepts. We developed models of the major hardware and software components in an exascale storage system, as well as the application I/O workloads that drive them. We used our simulation system to investigate critical questions in reliability and concurrency at exascale, helping guide the design of future exascale hardware and software architectures. Additionally, we provided this system to interested vendors and researchers so that others can explore the design space. We validated the capabilities of our simulation environment by configuring the simulation to represent the Argonne Leadership Computing Facility Blue Gene/Q system and comparing simulation results for application I/O patterns to the results of executions of these I/O kernels on the actual system.
Integrating new Storage Technologies into EOS
NASA Astrophysics Data System (ADS)
Peters, Andreas J.; van der Ster, Dan C.; Rocha, Joaquim; Lensing, Paul
2015-12-01
The EOS[1] storage software was designed to cover CERN disk-only storage use cases in the medium term, trading scalability against latency. To cover and prepare for long-term requirements, the CERN IT data and storage services group (DSS) is actively conducting R&D and open source contributions to experiment with a next-generation storage software based on CEPH[3] and ethernet-enabled disk drives. CEPH provides a scale-out object storage system, RADOS, and additionally various optional high-level services such as an S3 gateway, RADOS block devices, and a POSIX-compliant file system, CephFS. The acquisition of CEPH by Redhat underlines the promising role of CEPH as the open source storage platform of the future. CERN IT is running a CEPH service in the context of OpenStack on a moderate scale of 1 PB of replicated storage. Building a 100+ PB storage system based on CEPH will require software and hardware tuning, and it is of capital importance to demonstrate the feasibility and iron out bottlenecks and blocking issues beforehand. The main idea behind this R&D is to leverage and contribute to existing building blocks in the CEPH storage stack and implement a few CERN-specific requirements in a thin, customisable storage layer. A second research topic is the integration of ethernet-enabled disks. This paper introduces various ongoing open source developments, their status and applicability.
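For readers unfamiliar with RADOS, the scale-out object store mentioned above, a minimal usage sketch with the python-rados bindings follows. It assumes a reachable cluster, a standard /etc/ceph/ceph.conf, and a pool named 'physics' (the pool name is an assumption for illustration).

```python
# Minimal sketch: write and read back one object via python-rados.
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("physics")        # pool name is hypothetical
    try:
        ioctx.write_full("event-0001", b"raw detector payload")
        print(ioctx.read("event-0001"))           # b'raw detector payload'
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```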
NASA Astrophysics Data System (ADS)
Gan, Chenquan; Yang, Xiaofan
2015-05-01
In this paper, a new computer virus propagation model, which incorporates the effects of removable storage media and antivirus software, is proposed and analyzed. The global stability of the unique equilibrium of the model is independent of system parameters. Numerical simulations not only verify this result, but also illustrate the influences of removable storage media and antivirus software on viral spread. On this basis, some applicable measures for suppressing virus prevalence are suggested.
Survey of Mass Storage Systems
1975-09-01
…software that Precision Instruments can provide. System Name: IBM 3850 Mass Storage System. Manufacturer and Location: International Business Machines… Datamation, pp. 52-58, October 1973… International Business Machines, IBM 3850 Mass Storage System Facts Folder, White Plains, NY, n.d. … International Business Machines, Introduction to the IBM 3850 Mass Storage System (MSS), White Plains, NY, n.d. … International Business Machines
Technology for national asset storage systems
NASA Technical Reports Server (NTRS)
Coyne, Robert A.; Hulen, Harry; Watson, Richard
1993-01-01
An industry-led collaborative project, called the National Storage Laboratory, was organized to investigate technology for storage systems that will be the future repositories for our national information assets. Industry participants are IBM Federal Systems Company, Ampex Recording Systems Corporation, General Atomics DISCOS Division, IBM ADSTAR, Maximum Strategy Corporation, Network Systems Corporation, and Zitel Corporation. Industry members of the collaborative project are funding their own participation. Lawrence Livermore National Laboratory through its National Energy Research Supercomputer Center (NERSC) will participate in the project as the operational site and the provider of applications. The expected result is an evaluation of a high performance storage architecture assembled from commercially available hardware and software, with some software enhancements to meet the project's goals. It is anticipated that the integrated testbed system will represent a significant advance in the technology for distributed storage systems capable of handling gigabyte class files at gigabit-per-second data rates. The National Storage Laboratory was officially launched on 27 May 1992.
Earth Science Data Grid System
NASA Astrophysics Data System (ADS)
Chi, Y.; Yang, R.; Kafatos, M.
2004-05-01
The Earth Science Data Grid System (ESDGS) is a software system in support of earth science data storage and access. It is built upon the Storage Resource Broker (SRB) data grid technology. We have developed a complete data grid system consisting of an SRB server, providing users uniform access to diverse storage resources in a heterogeneous computing environment, and a metadata catalog server (MCAT), managing the metadata associated with data sets, users, and resources. We have also developed earth science application metadata; geospatial, temporal, and content-based indexing; and some other tools. In this paper, we describe the software architecture and components of the data grid system, and use a practical example in support of storage and access of rainfall data from the Tropical Rainfall Measuring Mission (TRMM) to illustrate its functionality and features.
Earth Science Data Grid System
NASA Astrophysics Data System (ADS)
Chi, Y.; Yang, R.; Kafatos, M.
2004-12-01
The Earth Science Data Grid System (ESDGS) is a software system in support of earth science data storage and access. It is built upon the Storage Resource Broker (SRB) data grid technology. We have developed a complete data grid system consisting of an SRB server, providing users uniform access to diverse storage resources in a heterogeneous computing environment, and a metadata catalog server (MCAT), managing the metadata associated with data sets, users, and resources. We are also developing additional services of 1) metadata management, 2) geospatial, temporal, and content-based indexing, and 3) near/on-site data processing, in response to the unique needs of Earth science applications. In this paper, we describe the software architecture and components of the system, and use a practical example in support of storage and access of rainfall data from the Tropical Rainfall Measuring Mission (TRMM) to illustrate its functionality and features.
NASA Astrophysics Data System (ADS)
Tang, Li; Liu, Jing-Ning; Feng, Dan; Tong, Wei
2008-12-01
Existing security solutions in network storage environments perform poorly because cryptographic operations (encryption and decryption) implemented in software can dramatically reduce system performance. In this paper we propose a cryptographic hardware accelerator on a dynamically reconfigurable platform for the security of high-performance network storage systems. We employ a dynamically reconfigurable platform based on an FPGA to implement a PowerPC-based embedded system, which executes cryptographic algorithms. To reduce the reconfiguration latency, we apply prefetch scheduling. Moreover, the processing elements can be dynamically configured to support different cryptographic algorithms according to the requests received by the accelerator. In the experiment, we have implemented the AES (Rijndael) and 3DES cryptographic algorithms in the reconfigurable accelerator. Our proposed reconfigurable cryptographic accelerator can dramatically increase performance compared with traditional software-based network storage systems.
Application and design of solar photovoltaic system
NASA Astrophysics Data System (ADS)
Tianze, Li; Hengwei, Lu; Chuan, Jiang; Luan, Hou; Xia, Zhang
2011-02-01
A photovoltaic system, as described in this paper, comprises solar modules; power electronic equipment, including the charge-discharge controller, the inverter, test instrumentation, and computer monitoring; and the storage battery or other energy storage and auxiliary generating plant. PV system design should meet the load supply requirements at low cost, consider the design of software and hardware carefully, and carry out general software design before hardware design. Taking the design of a PV system as an example, the paper analyzes the design of system software and hardware, the economic benefit, and the basic ideas and steps for installing and connecting the system. It elaborates on information acquisition, the software and hardware design of the system, and the evaluation and optimization of the system. Finally, it analyzes the prospects for applying photovoltaic technology in outer space, solar lamps, freeways, and communications.
The Design and Application of Data Storage System in Miyun Satellite Ground Station
NASA Astrophysics Data System (ADS)
Xue, Xiping; Su, Yan; Zhang, Hongbo; Liu, Bin; Yao, Meijuan; Zhao, Shu
2015-04-01
China launched the Chang'E-3 satellite in 2013, achieving the first soft landing on the moon by a Chinese lunar probe. The Miyun satellite ground station first used a SAN storage network system based on Stornext sharing software in the Chang'E-3 mission, and system performance fully meets the data storage requirements of the Miyun ground station. The Stornext file system is a high-performance shared file system which supports multiple servers running different operating systems accessing the file system at the same time, and supports access to data over a variety of topologies, such as SAN and LAN. Stornext is focused on data protection and big data management; Quantum has announced that it has sold more than 70,000 licenses of the Stornext file system worldwide, and its customer base is growing, which marks its leading position in big data management. The responsibilities of the Miyun satellite ground station are the reception of Chang'E-3 downlink data and the management of local data storage. The station mainly completes exploration mission management and the receiving and management of observation data, and provides comprehensive, centralized monitoring and control of the data receiving equipment. The ground station applied the SAN storage network system based on Stornext shared software to receive and manage data reliably. The computer system at the Miyun ground station is composed of operational servers, application workstations, and storage equipment, so the storage system needs a shared file system which supports heterogeneous operating systems. In practical applications, 10 nodes simultaneously write data to the file system through 16 channels, and the maximum data transfer rate of each channel is up to 15 MB/s, so the network throughput of the file system must be no less than 240 MB/s. At the same time, the maximum size of each data file is up to 810 GB. The planned storage system therefore requires that 10 nodes simultaneously write data to the file system through 16 channels with 240 MB/s network throughput. As integrated, the sharing system can provide a simultaneous write speed of 1020 MB/s. When the master storage server fails, the backup storage server takes over the service; client reads and writes are not affected, and the switching time is less than 5 s. The designed and integrated storage system meets the users' requirements. Nevertheless, an all-fiber SAN is expensive, and SCSI hard disk transfer rates may still be the bottleneck in the development of the entire storage system. Stornext can provide users with efficient sharing, management, and automatic archiving of large numbers of files, along with hardware solutions, and it occupies a leading position in big data management, but it also has drawbacks: first, the Stornext software is expensive and licensed per site, so when the network scale is large the purchase cost is very high; second, setting the parameters of the Stornext software makes heavy demands on the skills of technical staff, and when a problem occurs it is difficult to troubleshoot.
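The sizing figures quoted in this abstract can be checked with a line of arithmetic, as in this short sketch:

```python
# 10 nodes write through 16 channels at up to 15 MB/s per channel, so the
# shared file system must sustain at least 16 * 15 = 240 MB/s aggregate.
channels, per_channel_mb_s = 16, 15
required = channels * per_channel_mb_s
print(required)   # 240 MB/s, matching the stated requirement
# The integrated system's 1020 MB/s write rate leaves roughly 4x headroom.
```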
Mass storage system reference model, Version 4
NASA Technical Reports Server (NTRS)
Coleman, Sam (Editor); Miller, Steve (Editor)
1993-01-01
The high-level abstractions that underlie modern storage systems are identified. The information used to generate the model was collected from major practitioners who have built and operated large storage facilities, and represents a distillation of the wisdom they have acquired over the years. The model provides a common terminology and set of concepts to allow existing systems to be examined and new systems to be discussed and built. It is intended that the model and the interfaces identified from it will allow and encourage vendors to develop mutually compatible storage components that can be combined to form integrated storage systems and services. The reference model presents an abstract view of the concepts and organization of storage systems. From this abstraction will come the identification of the interfaces and modules that will be used in IEEE storage system standards. The model is not yet suitable as a standard: it does not contain implementation decisions, such as how abstract objects should be broken up into software modules or how software modules should be mapped to hosts; it does not give policy specifications, such as when files should be migrated; it does not describe how the abstract objects should be used or connected; and it does not refer to specific hardware components. In particular, it does not fully specify the interfaces.
System and Method for Providing a Climate Data Persistence Service
NASA Technical Reports Server (NTRS)
Schnase, John L. (Inventor); Ripley, III, William David (Inventor); Duffy, Daniel Q. (Inventor); Thompson, John H. (Inventor); Strong, Savannah L. (Inventor); McInerney, Mark (Inventor); Sinno, Scott (Inventor); Tamkin, Glenn S. (Inventor); Nadeau, Denis (Inventor)
2018-01-01
A system, method and computer-readable storage devices for providing a climate data persistence service. A system configured to provide the service can include a climate data server that performs data and metadata storage and management functions for climate data objects, a compute-storage platform that provides the resources needed to support a climate data server, provisioning software that allows climate data server instances to be deployed as virtual climate data servers in a cloud computing environment, and a service interface, wherein persistence service capabilities are invoked by software applications running on a client device. The climate data objects can be in various formats, such as International Organization for Standardization (ISO) Open Archival Information System (OAIS) Reference Model Submission Information Packages, Archive Information Packages, and Dissemination Information Packages. The climate data server can enable scalable, federated storage, management, discovery, and access, and can be tailored for particular use cases.
LVFS: A Big Data File Storage Bridge for the HPC Community
NASA Astrophysics Data System (ADS)
Golpayegani, N.; Halem, M.; Mauoka, E.; Fonseca, L. F.
2015-12-01
Merging Big Data capabilities into High Performance Computing architecture starts at the file storage level. Heterogeneous storage systems are emerging which offer enhanced features for dealing with Big Data, such as the IBM GPFS storage system's integration into Hadoop Map-Reduce. Taking advantage of these capabilities requires file storage systems to be adaptive and to accommodate these new storage technologies. We present the extension of the Lightweight Virtual File System (LVFS), currently running as the production system for the MODIS Level 1 and Atmosphere Archive and Distribution System (LAADS), to incorporate a flexible plugin architecture which allows easy integration of new HPC hardware and/or software storage technologies without disrupting workflows or system architectures, and with only minimal impact on existing tools. We consider two essential aspects provided by the LVFS plugin architecture needed for the future HPC community. First, it allows for the seamless integration of new and emerging hardware technologies which are significantly different from existing technologies, such as Seagate's Kinetic disks and Intel's 3D XPoint non-volatile storage. Second is the transparent and instantaneous conversion between new software technologies and various file formats. With most current storage systems, a switch in file format would require costly reprocessing and a near doubling of storage requirements. We will install LVFS on UMBC's IBM iDataPlex cluster with a heterogeneous storage architecture utilizing local, remote, and Seagate Kinetic storage as a case study. LVFS merges different kinds of storage architectures to show users a uniform layout and, therefore, prevent any disruption in workflows, architecture design, or tool usage. We will show how LVFS converts into GeoTIFF, for visualization, the HDF data produced by applying machine learning algorithms to XCO2 Level 2 data from the OCO-2 satellite to derive CO2 surface fluxes.
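A plugin architecture of the kind LVFS describes can be sketched as a uniform backend interface behind a routing layer that presents one namespace. The class and method names below are hypothetical, not LVFS's actual API.

```python
# Sketch: pluggable storage backends behind a single virtual namespace.
from abc import ABC, abstractmethod
from pathlib import Path
import os, tempfile

class StorageBackend(ABC):
    @abstractmethod
    def read(self, path: str) -> bytes: ...
    @abstractmethod
    def write(self, path: str, data: bytes) -> None: ...

class LocalDisk(StorageBackend):
    def read(self, path):        return Path(path).read_bytes()
    def write(self, path, data): Path(path).write_bytes(data)

class VirtualFS:
    """Routes each path to the backend with the longest matching prefix,
    so new backends slot in without changing tools or workflows."""
    def __init__(self): self.mounts = {}
    def mount(self, prefix, backend): self.mounts[prefix] = backend
    def _backend(self, path):
        prefix = max((p for p in self.mounts if path.startswith(p)), key=len)
        return self.mounts[prefix]
    def read(self, path):        return self._backend(path).read(path)
    def write(self, path, data): self._backend(path).write(path, data)

tmp = tempfile.mkdtemp()
vfs = VirtualFS()
vfs.mount(tmp, LocalDisk())                 # a Kinetic or 3D XPoint backend
vfs.write(os.path.join(tmp, "granule.hdf"), b"data")  # would mount the same way
print(vfs.read(os.path.join(tmp, "granule.hdf")))
```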
Global Software Development with Cloud Platforms
NASA Astrophysics Data System (ADS)
Yara, Pavan; Ramachandran, Ramaseshan; Balasubramanian, Gayathri; Muthuswamy, Karthik; Chandrasekar, Divya
Offshore and outsourced distributed software development models and processes are facing challenges, previously unknown, with respect to computing capacity, bandwidth, storage, security, complexity, reliability, and business uncertainty. Clouds promise to address these challenges by adopting recent advances in virtualization, parallel and distributed systems, utility computing, and software services. In this paper, we envision a cloud-based platform that addresses some of these core problems. We outline a generic cloud architecture, its design and our first implementation results for three cloud forms - a compute cloud, a storage cloud and a cloud-based software service - in the context of global distributed software development (GSD). Our "compute cloud" provides computational services such as continuous code integration and a compile server farm, "storage cloud" offers storage (block or file-based) services with an on-line virtual storage service, whereas the on-line virtual labs represent a useful cloud service. We note some of the use cases for clouds in GSD, the lessons learned with our prototypes and identify challenges that must be conquered before realizing the full business benefits. We believe that in the future, software practitioners will focus more on these cloud computing platforms and see clouds as a means to supporting an ecosystem of clients, developers and other key stakeholders.
NASA Astrophysics Data System (ADS)
Welsch, Bastian; Rühaak, Wolfram; Schulte, Daniel O.; Bär, Kristian; Sass, Ingo
2016-04-01
Seasonal thermal energy storage in borehole heat exchanger arrays is a promising technology to reduce primary energy consumption and carbon dioxide emissions. These systems usually consist of several subsystems: the heat source (e.g. solar thermal collectors or a combined heat and power plant), the heat consumer (e.g. a heating system), diurnal storages (i.e. water tanks), the borehole thermal energy storage, additional heat sources for peak load coverage (e.g. a heat pump or a gas boiler), and the distribution network. For the design of an integrated system, numerical simulations of all subsystems are imperative. A separate simulation of the borehole energy storage is well established but represents a simplification. In reality, the subsystems interact with each other: the fluid temperatures of the heat generation system, the heating system, and the underground storage are interdependent and affect the performance of each subsystem. To take these interdependencies into account, we coupled a software package for the simulation of the above-ground facilities with a finite element software package for modeling the heat flow in the subsurface and the borehole heat exchangers. This allows for a more realistic view of the entire system. Consequently, a finer adjustment of the system components and a more precise prognosis of the system's performance can be ensured.
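The coupling described above amounts to a fixed-point iteration between two simulators that exchange interface temperatures at each time step. The following is a toy sketch under that assumption, with trivial stand-in models in place of the actual above-ground and subsurface simulators.

```python
# Toy coupling loop: iterate until the plant and storage models agree on
# the fluid temperatures at their shared interface. Both models here are
# invented stand-ins, not the commercial/finite-element codes coupled.
def plant_model(t_from_storage):
    return 0.5 * t_from_storage + 30.0      # toy: plant heats the fluid

def storage_model(t_inlet):
    return 0.8 * t_inlet                    # toy: ground extracts some heat

def coupled_step(t_guess=20.0, tol=1e-6, max_iter=100):
    for _ in range(max_iter):
        t_supply = plant_model(t_guess)     # plant outlet -> storage inlet
        t_return = storage_model(t_supply)  # storage outlet -> plant inlet
        if abs(t_return - t_guess) < tol:
            break
        t_guess = t_return
    return t_supply, t_return

print(coupled_step())   # converged interface temperatures for one time step
```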
The Bartlesville System; TGISS Software Documentation.
ERIC Educational Resources Information Center
Roberts, Tommy L.; And Others
TGISS (Total Guidance Information Support System) is an information storage and retrieval system specifically designed to meet the needs and requirements of a counselor in the Bartlesville Public School environment. The system, which is a combination of man/machine capabilities, includes the hardware and software necessary to extend the…
NASA Technical Reports Server (NTRS)
Kobler, Benjamin (Editor); Hariharan, P. C. (Editor); Blasso, L. G. (Editor)
1992-01-01
Papers and viewgraphs from the conference are presented. This conference served as a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include magnetic disk and tape technologies, optical disks and tape, software storage and file management systems, and experiences with the use of a large, distributed storage system. The technical presentations describe, among other things, integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990's.
A model of cloud application assignments in software-defined storages
NASA Astrophysics Data System (ADS)
Bolodurina, Irina P.; Parfenov, Denis I.; Polezhaev, Petr N.; E Shukhman, Alexander
2017-01-01
The aim of this study is to analyze the structure and mechanisms of interaction of typical cloud applications and to suggest approaches to optimizing their placement in storage systems. In this paper, we describe a generalized model of cloud applications including three basic layers: a model of the application, a model of the service, and a model of the resource. The distinctive feature of the suggested model is that it analyzes cloud resources from the user's point of view and from the point of view of a software-defined infrastructure of the virtual data center (DC). The innovative character of this model lies in describing, at the same time, the application data placements as well as the state of the virtual environment, taking into account the network topology. The model of software-defined storage has been developed as a submodel within the resource model. This model allows implementing an algorithm for controlling cloud application assignments in software-defined storages. Experimental research showed that this algorithm decreases cloud application response time and increases the performance of user request processing. The use of software-defined data storage allows a decrease in the number of physical storage devices, which demonstrates the efficiency of our algorithm.
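The assignment problem the model addresses can be illustrated with a simple capacity-balancing heuristic. The abstract does not specify the paper's actual algorithm, so this greedy placement is only a stand-in for it.

```python
# Sketch: place application data volumes onto software-defined storage
# nodes, always choosing the least-loaded node (greedy load balancing).
import heapq

def assign(volumes, nodes):
    """volumes: {name: size_gb}; nodes: list of node names."""
    heap = [(0, n) for n in nodes]            # (used capacity, node)
    heapq.heapify(heap)
    placement = {}
    for name, size in sorted(volumes.items(), key=lambda kv: -kv[1]):
        used, node = heapq.heappop(heap)      # least-loaded node so far
        placement[name] = node
        heapq.heappush(heap, (used + size, node))
    return placement

print(assign({"db": 500, "logs": 120, "media": 800}, ["sds-1", "sds-2"]))
```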
Large-Scale Wireless Temperature Monitoring System for Liquefied Petroleum Gas Storage Tanks.
Fan, Guangwen; Shen, Yu; Hao, Xiaowei; Yuan, Zongming; Zhou, Zhi
2015-09-18
Temperature distribution is a critical indicator of the health condition for Liquefied Petroleum Gas (LPG) storage tanks. In this paper, we present a large-scale wireless temperature monitoring system to evaluate the safety of LPG storage tanks. The system includes wireless sensors networks, high temperature fiber-optic sensors, and monitoring software. Finally, a case study on real-world LPG storage tanks proves the feasibility of the system. The unique features of wireless transmission, automatic data acquisition and management, local and remote access make the developed system a good alternative for temperature monitoring of LPG storage tanks in practical applications.
Systems aspects of COBE science data compression
NASA Technical Reports Server (NTRS)
Freedman, I.; Boggess, E.; Seiler, E.
1993-01-01
A general approach to compression of diverse data from large scientific projects has been developed, and this paper addresses the appropriate system and scientific constraints together with the algorithm development and test strategy. This framework has been implemented for the COsmic Background Explorer spacecraft (COBE) by retrofitting the existing VAX-based data management system with high-performance compression software permitting random access to the data. Algorithms which incorporate scientific knowledge and consume relatively few system resources are preferred over ad hoc methods. COBE exceeded its planned storage by a large and growing factor, and the retrieval of data significantly affects the processing, delaying the availability of data for scientific usage and software test. Embedded compression software is planned to make the project tractable by reducing the data storage volume to an acceptable level during normal processing.
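One standard way to obtain random access into compressed data, as the COBE retrofit required, is to compress fixed-size chunks independently and index their offsets, so any record can be read without inflating the whole stream. A minimal sketch of the concept (not COBE's actual algorithms) follows.

```python
# Sketch: chunked compression with an offset index for random access.
import zlib

CHUNK = 4096

def compress_indexed(raw: bytes):
    blobs, index, off = [], [], 0
    for i in range(0, len(raw), CHUNK):
        c = zlib.compress(raw[i:i + CHUNK])   # each chunk compressed alone
        index.append((off, len(c)))           # (offset, compressed length)
        blobs.append(c)
        off += len(c)
    return b"".join(blobs), index

def read_chunk(stream: bytes, index, n: int) -> bytes:
    off, length = index[n]                    # jump straight to chunk n
    return zlib.decompress(stream[off:off + length])

data = bytes(range(256)) * 64
stream, idx = compress_indexed(data)
assert read_chunk(stream, idx, 1) == data[CHUNK:2 * CHUNK]
```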
A class Hierarchical, object-oriented approach to virtual memory management
NASA Technical Reports Server (NTRS)
Russo, Vincent F.; Campbell, Roy H.; Johnston, Gary M.
1989-01-01
The Choices family of operating systems exploits class hierarchies and object-oriented programming to facilitate the construction of customized operating systems for shared memory and networked multiprocessors. The software is being used in the Tapestry laboratory to study the performance of algorithms, mechanisms, and policies for parallel systems. Described here are the architectural design and class hierarchy of the Choices virtual memory management system. The software and hardware mechanisms and policies of a virtual memory system implement a memory hierarchy that exploits the trade-off between response times and storage capacities. In Choices, the notion of a memory hierarchy is captured by abstract classes. Concrete subclasses of those abstractions implement a virtual address space, segmentation, paging, physical memory management, secondary storage, and remote (that is, networked) storage. Captured in the notion of a memory hierarchy are classes that represent memory objects. These classes provide a storage mechanism that contains encapsulated data and have methods to read or write the memory object. Each of these classes provides specializations to represent the memory hierarchy.
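The abstract-class/concrete-subclass pattern described for Choices can be sketched as follows, rendered in Python rather than Choices' C++, with illustrative class names standing in for the actual hierarchy.

```python
# Sketch: an abstract MemoryObject with uniform read/write methods,
# specialized by concrete levels of the memory hierarchy.
from abc import ABC, abstractmethod

class MemoryObject(ABC):
    """Encapsulated storage exposing uniform read/write methods."""
    @abstractmethod
    def read(self, offset: int, length: int) -> bytes: ...
    @abstractmethod
    def write(self, offset: int, data: bytes) -> None: ...

class PhysicalMemory(MemoryObject):
    def __init__(self, size): self.buf = bytearray(size)
    def read(self, offset, length): return bytes(self.buf[offset:offset + length])
    def write(self, offset, data): self.buf[offset:offset + len(data)] = data

class PagedSegment(MemoryObject):
    """A higher level of the hierarchy backed by a lower one."""
    def __init__(self, backing: MemoryObject): self.backing = backing
    def read(self, offset, length): return self.backing.read(offset, length)
    def write(self, offset, data): self.backing.write(offset, data)

seg = PagedSegment(PhysicalMemory(1024))
seg.write(0, b"choices")
print(seg.read(0, 7))
```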
Development of medical data information systems
NASA Technical Reports Server (NTRS)
Anderson, J.
1971-01-01
Computerized storage and retrieval of medical information is discussed. Tasks which were performed in support of the project are: (1) flight crew health stabilization computer system, (2) medical data input system, (3) graphic software development, (4) lunar receiving laboratory support, and (5) Statos V printer/plotter software development.
Towards Portable Large-Scale Image Processing with High-Performance Computing.
Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A
2018-05-03
High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly a half-million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The initial deployment was natively deployed (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, we describe herein recent innovations using containerization techniques with XNAT/DAX to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from the system level to the application level, (2) flexible and dynamic software development and expansion, and (3) scalable spider deployment compatible with HPC clusters and local workstations.
Large-Scale Wireless Temperature Monitoring System for Liquefied Petroleum Gas Storage Tanks
Fan, Guangwen; Shen, Yu; Hao, Xiaowei; Yuan, Zongming; Zhou, Zhi
2015-01-01
Temperature distribution is a critical indicator of the health condition for Liquefied Petroleum Gas (LPG) storage tanks. In this paper, we present a large-scale wireless temperature monitoring system to evaluate the safety of LPG storage tanks. The system includes wireless sensors networks, high temperature fiber-optic sensors, and monitoring software. Finally, a case study on real-world LPG storage tanks proves the feasibility of the system. The unique features of wireless transmission, automatic data acquisition and management, local and remote access make the developed system a good alternative for temperature monitoring of LPG storage tanks in practical applications. PMID:26393596
Using Utility Functions to Control a Distributed Storage System
2008-05-01
Pinheiro et al. [2007] suggest this is not an accurate assumption. Nicola and Goyal [1990] examined correlated failures across multiversion software… Nicola, V. F. and Goyal, A. (1990). Modeling of correlated failures and community error recovery in multiversion software. IEEE Transactions on Software Engineering.
21 CFR 862.2570 - Instrumentation for clinical multiplex test systems.
Code of Federal Regulations, 2010 CFR
2010-04-01
... HUMAN SERVICES (CONTINUED) MEDICAL DEVICES CLINICAL CHEMISTRY AND CLINICAL TOXICOLOGY DEVICES Clinical... hardware components, as well as raw data storage mechanisms, data acquisition software, and software to...
NASA Technical Reports Server (NTRS)
Kobler, Ben (Editor); Hariharan, P. C. (Editor); Blasso, L. G. (Editor)
1992-01-01
This report contains copies of nearly all of the technical papers and viewgraphs presented at the NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Application. This conference served as a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include the following: magnetic disk and tape technologies; optical disk and tape; software storage and file management systems; and experiences with the use of a large, distributed storage system. The technical presentations describe, among other things, integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990's.
NASA Technical Reports Server (NTRS)
Kobler, Ben (Editor); Hariharan, P. C. (Editor); Blasso, L. G. (Editor)
1992-01-01
This report contains copies of nearly all of the technical papers and viewgraphs presented at the National Space Science Data Center (NSSDC) Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications. This conference served as a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include magnetic disk and tape technologies, optical disk and tape, software storage and file management systems, and experiences with the use of a large, distributed storage system. The technical presentations describe, among other things, integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990s.
The challenge of a data storage hierarchy
NASA Technical Reports Server (NTRS)
Ruderman, Michael
1992-01-01
A discussion of Mesa Archival Systems' data archiving system is presented. This data archiving system is strictly a software system that is implemented on a mainframe and manages the data into permanent file storage. Emphasis is placed on the fact that any kind of client system on the network can be connected through the Unix interface of the data archiving system.
NASA Technical Reports Server (NTRS)
Regalado Reyes, Bjorn Constant
2015-01-01
1. Kennedy Space Center (KSC) is developing a mobile launching system with autonomous propellant loading capabilities for liquid-fueled rockets. An autonomous system will be responsible for monitoring and controlling the storage, loading and transferring of cryogenic propellants. The Physics Simulation Software will reproduce the sensor data seen during the delivery of cryogenic fluids including valve positions, pressures, temperatures and flow rates. The simulator will provide insight into the functionality of the propellant systems and demonstrate the effects of potential faults. This will provide verification of the communications protocols and the autonomous system control. 2. The High Pressure Gas Facility (HPGF) stores and distributes hydrogen, nitrogen, helium and high pressure air. The hydrogen and nitrogen are stored in cryogenic liquid state. The cryogenic fluids pose several hazards to operators and the storage and transfer equipment. Constant monitoring of pressures, temperatures and flow rates are required in order to maintain the safety of personnel and equipment during the handling and storage of these commodities. The Gas House Autonomous System Monitoring software will be responsible for constantly observing and recording sensor data, identifying and predicting faults and relaying hazard and operational information to the operators.
Measurements over distributed high performance computing and storage systems
NASA Technical Reports Server (NTRS)
Williams, Elizabeth; Myers, Tom
1993-01-01
A strawman proposal is given for a framework for presenting a common set of metrics for supercomputers, workstations, file servers, mass storage systems, and the networks that interconnect them. Production control and database systems are also included. Though other applications and third-party software systems are not addressed, it is important to measure them as well.
NASA Technical Reports Server (NTRS)
Fournelle, John; Carpenter, Paul
2006-01-01
Modern electron microprobe systems have become increasingly sophisticated. These systems utilize either UNIX or PC computer systems for measurement, automation, and data reduction. These systems have undergone major improvements in processing, storage, display, and communications, due to increased capabilities of hardware and software. Instrument specifications are typically utilized at the time of purchase and concentrate on hardware performance. The microanalysis community includes analysts, researchers, software developers, and manufacturers, who could benefit from an exchange of ideas and the ultimate development of core community specifications (CCS) for hardware and software components of microprobe instrumentation and operating systems.
Toward a digital library strategy for a National Information Infrastructure
NASA Technical Reports Server (NTRS)
Coyne, Robert A.; Hulen, Harry
1993-01-01
Bills currently before the House and Senate would give support to the development of a National Information Infrastructure, in which digital libraries and storage systems would be an important part. A simple model is offered to show the relationship of storage systems, software, and standards to the overall information infrastructure. Some elements of a national strategy for digital libraries are proposed, based on the mission of the nonprofit National Storage System Foundation.
NASA Technical Reports Server (NTRS)
Warren, A. W.; Esinger, A. W.
1979-01-01
Procedures are given for using the SIMWEST program on CDC 6000 series computers. This expanded software package includes wind and/or photovoltaic systems utilizing any combination of five types of storage (pumped hydro, battery, thermal, flywheel, and pneumatic).
ERIC Educational Resources Information Center
Roberts, Tommy L.; And Others
The Total Guidance Information Support System (TGISS), is an information storage and retrieval system for counselors. The total TGISS, including hardware and software, extends the counselor's capabilities by providing ready access to student information under secure conditions. The hardware required includes: (1) IBM 360/50 central processing…
Information Technology: A Survey from the Perspective of Higher Education.
ERIC Educational Resources Information Center
Van Houweling, Douglas E.
1986-01-01
Survey of the history and current development of information technology covers hardware (economies of scale, communications technology, magnetic and optical forms of storage), and the evolution of systems software ("tool" software, applications software, and nonprocedural languages). The effect of new computer technologies on human…
Sirocco Storage Server v. pre-alpha 0.1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curry, Matthew L.; Danielson, Geoffrey; Ward, H. Lee
Sirocco is a parallel storage system under development, designed for write-intensive workloads on large-scale HPC platforms. It implements a key-value object store on top of a set of loosely federated storage servers that cooperate to ensure data integrity and performance. It includes support for a range of different types of storage transactions. This software release constitutes a conformant storage server, along with the client-side libraries to access the storage over a network.
A high-speed, large-capacity, 'jukebox' optical disk system
NASA Technical Reports Server (NTRS)
Ammon, G. J.; Calabria, J. A.; Thomas, D. T.
1985-01-01
Two optical disk 'jukebox' mass storage systems which provide access to any data in a store of 10^13 bits (1250 gigabytes) within six seconds have been developed. The optical disk jukebox system is divided into two units: a hardware/software controller and a disk drive. The controller provides flexibility and adaptability through a ROM-based microcode-driven data processor and a ROM-based software-driven control processor. The cartridge storage module contains 125 optical disks housed in protective cartridges. Attention is given to a conceptual view of the disk drive unit, the NASA optical disk system, the NASA database management system configuration, the NASA optical disk system interface, and an open systems interconnect reference model.
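The capacity figure converts as follows:

```python
# Sanity check: 10^13 bits / 8 bits per byte = 1.25e12 bytes,
# i.e. 1250 GB in decimal gigabytes, as stated in the abstract.
bits = 10**13
print(bits / 8 / 1e9)   # 1250.0
```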
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bent, John M.; Faibish, Sorin; Pedone, Jr., James M.
A cluster file system is provided having a plurality of distributed metadata servers with shared access to one or more shared low-latency persistent key-value metadata stores. A metadata server comprises an abstract storage interface comprising a software interface module that communicates with at least one shared persistent key-value metadata store providing a key-value interface for persistent storage of key-value metadata. The software interface module provides the key-value metadata to the at least one shared persistent key-value metadata store in a key-value format. The shared persistent key-value metadata store is accessed by a plurality of metadata servers. A metadata request can be processed by a given metadata server independently of other metadata servers in the cluster file system. A distributed metadata storage environment is also disclosed that comprises a plurality of metadata servers having an abstract storage interface to at least one shared persistent key-value metadata store.
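The abstract storage interface in this patent can be sketched as a pluggable key-value interface behind the metadata server, so the backing store is swappable. The names below are hypothetical illustrations, not the patent's implementation.

```python
# Sketch: metadata servers talk to any shared key-value store through one
# abstract interface, making the backing store pluggable.
from abc import ABC, abstractmethod

class KVMetadataStore(ABC):
    @abstractmethod
    def put(self, key: str, value: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryKV(KVMetadataStore):
    """Stand-in for a shared low-latency persistent key-value store."""
    def __init__(self): self.d = {}
    def put(self, key, value): self.d[key] = value
    def get(self, key): return self.d[key]

class MetadataServer:
    def __init__(self, store: KVMetadataStore): self.store = store
    def set_attr(self, path, attrs: bytes): self.store.put(f"meta:{path}", attrs)
    def get_attr(self, path): return self.store.get(f"meta:{path}")

srv = MetadataServer(InMemoryKV())        # any KVMetadataStore would do
srv.set_attr("/data/run42", b"owner=alice,size=7")
print(srv.get_attr("/data/run42"))
```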
High-performance equation solvers and their impact on finite element analysis
NASA Technical Reports Server (NTRS)
Poole, Eugene L.; Knight, Norman F., Jr.; Davis, D. Dale, Jr.
1990-01-01
The role of equation solvers in modern structural analysis software is described. Direct and iterative equation solvers which exploit vectorization on modern high-performance computer systems are described and compared. The direct solvers are two Cholesky factorization methods. The first method utilizes a novel variable-band data storage format to achieve very high computation rates and the second method uses a sparse data storage format designed to reduce the number of operations. The iterative solvers are preconditioned conjugate gradient methods. Two different preconditioners are included; the first uses a diagonal matrix storage scheme to achieve high computation rates and the second requires a sparse data storage scheme and converges to the solution in fewer iterations than the first. The impact of using all of the equation solvers in a common structural analysis software system is demonstrated by solving several representative structural analysis problems.
High-performance equation solvers and their impact on finite element analysis
NASA Technical Reports Server (NTRS)
Poole, Eugene L.; Knight, Norman F., Jr.; Davis, D. D., Jr.
1992-01-01
The role of equation solvers in modern structural analysis software is described. Direct and iterative equation solvers which exploit vectorization on modern high-performance computer systems are described and compared. The direct solvers are two Cholesky factorization methods. The first method utilizes a novel variable-band data storage format to achieve very high computation rates and the second method uses a sparse data storage format designed to reduce the number of operations. The iterative solvers are preconditioned conjugate gradient methods. Two different preconditioners are included; the first uses a diagonal matrix storage scheme to achieve high computation rates and the second requires a sparse data storage scheme and converges to the solution in fewer iterations than the first. The impact of using all of the equation solvers in a common structural analysis software system is demonstrated by solving several representative structural analysis problems.
Comparing direct and iterative equation solvers in a large structural analysis software system
NASA Technical Reports Server (NTRS)
Poole, E. L.
1991-01-01
Two direct Choleski equation solvers and two iterative preconditioned conjugate gradient (PCG) equation solvers used in a large structural analysis software system are described. The two direct solvers are implementations of the Choleski method for variable-band matrix storage and sparse matrix storage. The two iterative PCG solvers include the Jacobi conjugate gradient method and an incomplete Choleski conjugate gradient method. The performance of the direct and iterative solvers is compared by solving several representative structural analysis problems. Some key factors affecting the performance of the iterative solvers relative to the direct solvers are identified.
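Of the solvers compared in these abstracts, the Jacobi (diagonally) preconditioned conjugate gradient method is the simplest to sketch. A compact NumPy version for a symmetric positive-definite system A x = b:

```python
# Jacobi-preconditioned conjugate gradient: the preconditioner is simply
# the diagonal of A, applied as an elementwise scaling of the residual.
import numpy as np

def jacobi_pcg(A, b, tol=1e-10, max_iter=1000):
    M_inv = 1.0 / np.diag(A)            # Jacobi preconditioner
    x = np.zeros_like(b)
    r = b - A @ x                       # initial residual
    z = M_inv * r                       # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p       # new search direction
        rz = rz_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(jacobi_pcg(A, b))                 # ~[0.0909, 0.6364] = [1/11, 7/11]
```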
NASA Technical Reports Server (NTRS)
Blackwell, Kim; Blasso, Len (Editor); Lipscomb, Ann (Editor)
1991-01-01
The proceedings of the National Space Science Data Center Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications held July 23 through 25, 1991 at the NASA/Goddard Space Flight Center are presented. The program includes a keynote address, invited technical papers, and selected technical presentations to provide a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include magnetic disk and tape technologies, optical disk and tape, software storage and file management systems, and experiences with the use of a large, distributed storage system. The technical presentations describe integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990's.
SIMWEST - A simulation model for wind energy storage systems
NASA Technical Reports Server (NTRS)
Edsinger, R. W.; Warren, A. W.; Gordon, L. H.; Chang, G. C.
1978-01-01
This paper describes a comprehensive and efficient computer program for the modeling of wind energy systems with storage. The level of detail of SIMWEST (SImulation Model for Wind Energy STorage) is consistent with evaluating the economic feasibility as well as the general performance of wind energy systems with energy storage options. The software package consists of two basic programs and a library of system, environmental, and control components. The first program is a precompiler which allows the library components to be put together in building block form. The second program performs the technoeconomic system analysis with the required input/output, and the integration of system dynamics. An example of the application of the SIMWEST program to a current 100 kW wind energy storage system is given.
The ATLAS Tier-3 in Geneva and the Trigger Development Facility
NASA Astrophysics Data System (ADS)
Gadomski, S.; Meunier, Y.; Pasche, P.; Baud, J.-P.; ATLAS Collaboration
2011-12-01
The ATLAS Tier-3 farm at the University of Geneva provides storage and processing power for analysis of ATLAS data. In addition the facility is used for development, validation and commissioning of the High Level Trigger of ATLAS [1]. The latter purpose leads to additional requirements on the availability of latest software and data, which will be presented. The farm is also a part of the WLCG [2], and is available to all members of the ATLAS Virtual Organization. The farm currently provides 268 CPU cores and 177 TB of storage space. A grid Storage Element, implemented with the Disk Pool Manager software [3], is available and integrated with the ATLAS Distributed Data Management system [4]. The batch system can be used directly by local users, or with a grid interface provided by NorduGrid ARC middleware [5]. In this article we will present the use cases that we support, as well as the experience with the software and the hardware we are using. Results of I/O benchmarking tests, which were done for our DPM Storage Element and for the NFS servers we are using, will also be presented.
McCullough, J Mac; Goodin, Kate
2016-01-01
Numerous software and data storage systems are employed by local health departments (LHDs) to manage clinical and nonclinical data needs. Leveraging electronic systems may yield improvements in public health practice. However, information is lacking regarding current usage patterns among LHDs. The objective of this study was to analyze clinical and nonclinical data storage and software types used by LHDs. Data came from the 2015 Informatics Capacity and Needs Assessment Survey, conducted by Georgia Southern University in collaboration with the National Association of County and City Health Officials. A total of 324 LHDs from all 50 states completed the survey (response rate: 50%). Outcome measures included LHD's primary clinical service data system, nonclinical data system(s) used, and plans to adopt an electronic clinical data system (if not already in use). Predictors of interest included jurisdiction size and governance type, and other informatics capacities within the LHD. Bivariate analyses were performed using χ² and t tests. Up to 38.4% of LHDs reported using an electronic health record (EHR). Usage was especially common among LHDs that provide primary care and/or dental services. LHDs serving smaller populations and those with state-level governance were both less likely to use an EHR. Paper records were a common data storage approach for both clinical data (28.9%) and nonclinical data (59.4%). Among LHDs without an EHR, 84.7% reported implementation plans. Our findings suggest that LHDs are increasingly using EHRs as a clinical data storage solution and that more LHDs are likely to adopt EHRs in the foreseeable future. Yet use of paper records remains common. Correlates of electronic system usage emerged across a range of factors. Program- or system-specific needs may be barriers or facilitators to EHR adoption. Policy makers can tailor resources to address barriers specific to LHD size, governance, service portfolio, existing informatics capabilities, and other pertinent characteristics.
A digital acquisition and elaboration system for nuclear fast pulse detection
NASA Astrophysics Data System (ADS)
Esposito, B.; Riva, M.; Marocco, D.; Kaschuck, Y.
2007-03-01
A new digital acquisition and elaboration system has been developed and assembled in ENEA-Frascati for the direct sampling of fast pulses from nuclear detectors such as scintillators and diamond detectors. The system is capable of performing the digital sampling of the pulses (200 MSamples/s, 14-bit) and the simultaneous (compressed) data transfer for further storage and software elaboration. The design (FPGA-based) is oriented to real-time applications and has been developed in order to allow acquisition with no loss of pulses and data storage for long-time intervals (tens of s at MHz pulse count rates) without the need of large on-board memory. A dedicated pulse analysis software, written in LabVIEW™, performs the treatment of the acquired pulses, including pulse recognition, pile-up rejection, baseline removal, pulse shape particle separation and pulse height spectra analysis. The acquisition and pre-elaboration programs have been fully integrated with the analysis software.
Microcomputers, software combine to provide daily product, movement inventory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cable, T.
1985-06-01
This paper describes the efforts of Santa Fe Pipelines Inc. in keeping track of product inventory on the 810 mile, 12-in. Chaparral Pipeline and the 1,913 mile, 8- and 10-in. Gulf Central Pipeline. The decision to use a PC for monitoring the inventory was significant. The application was completed by TRON, Inc. The system is actually two major subsystems. The pipeline system accounts for injections into the pipeline and deliveries of product. This feeds the storage and the terminal inventory system, where inventories are maintained at storage locations by shipper and supplier account. The paper further explains the inventory monitoring process in detail. Communications software is described as well.
Leveraging Cloud Computing to Improve Storage Durability, Availability, and Cost for MER Maestro
NASA Technical Reports Server (NTRS)
Chang, George W.; Powell, Mark W.; Callas, John L.; Torres, Recaredo J.; Shams, Khawaja S.
2012-01-01
The Maestro for MER (Mars Exploration Rover) software is the premiere operation and activity planning software for the Mars rovers, and it is required to deliver all of the processed image products to scientists on demand. These data span multiple storage arrays sized at 2 TB, and a backup scheme ensures data is not lost. In a catastrophe, these data would currently recover at 20 GB/hour, taking several days for a restoration. A seamless solution provides access to highly durable, highly available, scalable, and cost-effective storage capabilities. This approach also employs a novel technique that enables storage of the majority of data on the cloud and some data locally. This feature is used to store the most recent data locally in order to guarantee utmost reliability in case of an outage or disconnect from the Internet. This also obviates any changes to the software that generates the most recent data set, as it still has the same interface to the file system as it did before the updates.
Reliable data storage system design and implementation for acoustic logging while drilling
NASA Astrophysics Data System (ADS)
Hao, Xiaolong; Ju, Xiaodong; Wu, Xiling; Lu, Junqiang; Men, Baiyong; Yao, Yongchao; Liu, Dong
2016-12-01
Owing to the limitations of real-time transmission, reliable downhole data storage and fast ground reading have become key technologies in developing tools for acoustic logging while drilling (LWD). In order to improve the reliability of the downhole storage system in conditions of high temperature, intense vibration and periodic power supply, improvements were made in terms of hardware and software. In hardware, we integrated the storage system and data acquisition control module into one circuit board, to reduce the complexity of the storage process, by adopting the controller combination of digital signal processor and field programmable gate array. In software, we developed a systematic management strategy for reliable storage. Multiple-backup independent storage was employed to increase the data redundancy. A traditional error checking and correction (ECC) algorithm was improved and we embedded the calculated ECC code into all management data and waveform data. A real-time storage algorithm for arbitrary length data was designed to actively preserve the storage scene and ensure the independence of the stored data. The recovery procedure of management data was optimized to realize reliable self-recovery. A new bad block management idea of static block replacement and dynamic page mark was proposed to make the period of data acquisition and storage more balanced. In addition, we developed a portable ground data reading module based on a new reliable high speed bus to Ethernet interface to achieve fast reading of the logging data. Experiments have shown that this system can work stably below 155 °C with a periodic power supply. The effective ground data reading rate reaches 1.375 Mbps with a 99.7% one-time success rate at room temperature. This work is of high practical significance for improving the reliability and field efficiency of acoustic LWD tools.
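The multiple-backup scheme with embedded error checking lends itself to a brief illustration. The following is a minimal sketch only, not the authors' implementation: a CRC-32 checksum stands in for their improved ECC algorithm, ordinary files stand in for independent flash regions, and all names are invented.

    import struct, zlib

    def store(record, paths):
        # Frame the record with a CRC-32 header, then write one copy per
        # independent storage region (plain files in this sketch).
        framed = struct.pack(">I", zlib.crc32(record)) + record
        for p in paths:
            with open(p, "wb") as f:
                f.write(framed)

    def recover(paths):
        # Return the first backup copy whose checksum still verifies.
        for p in paths:
            try:
                data = open(p, "rb").read()
            except OSError:
                continue
            crc, payload = struct.unpack(">I", data[:4])[0], data[4:]
            if zlib.crc32(payload) == crc:
                return payload
        raise IOError("all backup copies corrupted")

    copies = ["copy_a.bin", "copy_b.bin", "copy_c.bin"]
    store(b"waveform-frame-0001", copies)
    assert recover(copies) == b"waveform-frame-0001"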
Performance of the engineering analysis and data system 2 common file system
NASA Technical Reports Server (NTRS)
Debrunner, Linda S.
1993-01-01
The Engineering Analysis and Data System (EADS) was used from April 1986 to July 1993 to support large scale scientific and engineering computation (e.g. computational fluid dynamics) at Marshall Space Flight Center. The need for an updated system resulted in a RFP in June 1991, after which a contract was awarded to Cray Grumman. EADS II was installed in February 1993, and by July 1993 most users were migrated. EADS II is a network of heterogeneous computer systems supporting scientific and engineering applications. The Common File System (CFS) is a key component of this system. The CFS provides a seamless, integrated environment to the users of EADS II including both disk and tape storage. UniTree software is used to implement this hierarchical storage management system. The performance of the CFS suffered during the early months of the production system. Several of the performance problems were traced to software bugs which have been corrected. Other problems were associated with hardware. However, the use of NFS in UniTree UCFM software limits the performance of the system. The performance issues related to the CFS have led to a need to develop a greater understanding of the CFS organization. This paper will first describe the EADS II with emphasis on the CFS. Then, a discussion of mass storage systems will be presented, and methods of measuring the performance of the Common File System will be outlined. Finally, areas for further study will be identified and conclusions will be drawn.
Scientific Data Storage for Cloud Computing
NASA Astrophysics Data System (ADS)
Readey, J.
2014-12-01
Traditionally data storage used for geophysical software systems has centered on file-based systems and libraries such as NetCDF and HDF5. In contrast cloud based infrastructure providers such as Amazon AWS, Microsoft Azure, and the Google Cloud Platform generally provide storage technologies based on an object based storage service (for large binary objects) complemented by a database service (for small objects that can be represented as key-value pairs). These systems have been shown to be highly scalable, reliable, and cost effective. We will discuss a proposed system that leverages these cloud-based storage technologies to provide an API-compatible library for traditional NetCDF and HDF5 applications. This system will enable cloud storage suitable for geophysical applications that can scale up to petabytes of data and thousands of users. We'll also cover other advantages of this system such as enhanced metadata search.
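The chunked mapping such a library could use is sketched below. Plain dictionaries stand in for the object store and the key-value service, and all names are invented; this is a concept sketch under those assumptions, not the proposed system's design.

    # Dataset metadata lives in a key-value table; fixed-size chunks become
    # individually addressed objects (e.g. S3/Azure blobs in a real system).
    object_store = {}
    kv_store = {}

    def write_dataset(name, data, chunk_size=4):
        kv_store[name] = {"size": len(data), "chunk_size": chunk_size}
        for i in range(0, len(data), chunk_size):
            object_store[f"{name}/chunk/{i // chunk_size}"] = data[i:i + chunk_size]

    def read_range(name, start, stop):
        # Fetch only the chunks that cover [start, stop), then trim.
        cs = kv_store[name]["chunk_size"]
        out = b""
        for c in range(start // cs, (stop - 1) // cs + 1):
            out += object_store[f"{name}/chunk/{c}"]
        offset = start - (start // cs) * cs
        return out[offset:offset + (stop - start)]

    write_dataset("temperature", b"ABCDEFGHIJKL")
    assert read_range("temperature", 5, 9) == b"FGHI"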
Volume serving and media management in a networked, distributed client/server environment
NASA Technical Reports Server (NTRS)
Herring, Ralph H.; Tefend, Linda L.
1993-01-01
The E-Systems Modular Automated Storage System (EMASS) is a family of hierarchical mass storage systems providing complete storage/'file space' management. The EMASS volume server provides the flexibility to work with different clients (file servers), different platforms, and different archives with a 'mix and match' capability. The EMASS design considers all file management programs as clients of the volume server system. System storage capacities are tailored to customer needs ranging from small data centers to large central libraries serving multiple users simultaneously. All EMASS hardware is commercial off the shelf (COTS), selected to provide the performance and reliability needed in current and future mass storage solutions. All interfaces use standard commercial protocols and networks suitable to service multiple hosts. EMASS is designed to efficiently store and retrieve in excess of 10,000 terabytes of data. Current clients include CRAY's YMP Model E based Data Migration Facility (DMF), IBM's RS/6000 based Unitree, and CONVEX based EMASS File Server software. The VolSer software provides the capability to accept client or graphical user interface (GUI) commands from the operator's console and translate them to the commands needed to control any configured archive. The VolSer system offers advanced features to enhance media handling and particularly media mounting such as: automated media migration, preferred media placement, drive load leveling, registered MediaClass groupings, and drive pooling.
A NASA family of minicomputer systems, Appendix A
NASA Technical Reports Server (NTRS)
Deregt, M. P.; Dulfer, J. E.
1972-01-01
This investigation was undertaken to establish sufficient specifications, or standards, for minicomputer hardware and software to provide NASA with realizable economies in quantity purchases, interchangeability of minicomputers, software, storage and peripherals, and a uniformly high quality. The standards will define minicomputer system component types, each specialized to its intended NASA application, in as many levels of capacity as required.
Expeditionary Oblong Mezzanine
2016-03-01
...providing infrastructure as a service (IaaS) and software as a service (SaaS) cloud computing technologies. IaaS is a way of providing computing services such as servers, storage, and network equipment services (Mell & Grance, 2009). SaaS is a means of providing software and applications as an on...
INTERFACING SAS TO ORACLE IN THE UNIX ENVIRONMENT
SAS is an EPA standard data and statistical analysis software package, while ORACLE is EPA's standard data base management system software package. ORACLE has the advantage over SAS in data retrieval and storage capabilities but has limited data and statistical analysis capability.
Conceptual design for the National Water Information System
Edwards, Melvin D.; Putnam, Arthur L.; Hutchison, Norman E.
1986-01-01
The Water Resources Division of the U.S. Geological Survey began the design and development of a National Water Information System (NWIS) in 1983. The NWIS will replace and integrate the existing data systems of the National Water Data Storage and Retrieval System, National Water Data Exchange, National Water-Use Information Program, and Water Resources Scientific Information Center. The NWIS has been designed as an interactive, distributed data system. The software system has been designed in a modular manner which integrates existing software functions and allows multiple use of software modules. The data base has been designed as a relational data model that allows integrated storage of the existing water data, water-use data, and water-data indexing information by using a common relational data base management system. The NWIS will be operated on microcomputers located in each of the Water Resources Division's District offices and many of its State, subdistrict, and field offices. The microcomputers will be linked together through a national telecommunication network maintained by the U.S. Geological Survey. The NWIS is scheduled to be placed in operation in 1990.
Data storage and retrieval system abstract
NASA Technical Reports Server (NTRS)
Matheson, Barbara
1992-01-01
The STX mass storage system design is intended for environments requiring high speed access to large volumes of data (terabyte and greater). Prior to commitment to a product design plan, STX conducted an exhaustive study of the commercially available off-the-shelf hardware and software. STX also conducted research into the area of emerging technologies in networks and storage media so that the design could easily accommodate new interfaces and peripherals as they came on the market. All the selected system elements were brought together in a demo suite sponsored jointly by STX and ALLIANT where the system elements were evaluated based on actual operation using a client-server mirror image configuration. Testing was conducted to assess the various component overheads and results were compared against vendor data claims. The resultant system, while adequate to meet our capacity requirements, fell short of transfer speed expectations. A product team led by STX was assembled and chartered with solving the bottleneck issues. Optimization efforts yielded a 60 percent improvement in throughput performance. The ALLIANT computer platform provided the I/O flexibility needed to accommodate a multitude of peripheral interfaces including the following: up to twelve 25MB/s VME I/O channels; up to five HiPPI I/O full duplex channels; IPI-s, SCSI, SMD, and RAID disk array support; standard networking software support for TCP/IP, NFS, and FTP; open architecture based on standard RISC processors; and V.4/POSIX-based operating system (Concentrix). All components including the software are modular in design and can be reconfigured as needs and system uses change. Users can begin with a small system and add modules as needed in the field. Most add-ons can be accomplished seamlessly without revision, recompilation or re-linking of software.
Optical Storage System For Small Software Package Distribution
NASA Astrophysics Data System (ADS)
Wehrenberg, Paul J.
1985-04-01
This paper describes an optical mass storage system being developed for extremely low cost distribution of small software packages. The structure of the media, design of the optical playback system, and some aspects of mastering and media production are discussed. This read-only system is designed solely for the purpose of downloading code in a spooling fashion from the media to the host machine. The media is configured as a plastic card with dimensions 85 mm x 12 mm x 2 mm. Each data region on a card is a rectangle 1.33 mm x 59.4 mm which carries up to 64 KB of user data. Cost estimates for production are $0.06 per card for the media and $38.00 for the playback device. The mastering process for the production tooling uses photolithography techniques and can provide production tooling within a few hours of software release. The playback mechanism is rugged and small, and does not require the use of any electromechanical servos.
NASA Astrophysics Data System (ADS)
Ames, D.; Kadlec, J.; Horsburgh, J. S.; Maidment, D. R.
2009-12-01
The Consortium of Universities for the Advancement of Hydrologic Sciences (CUAHSI) Hydrologic Information System (HIS) project includes extensive development of data storage and delivery tools and standards, including WaterML (a language for sharing hydrologic data sets via web services) and HIS Server (a software tool set for delivering WaterML from a server). These and other CUAHSI HIS tools have been under development and deployment for several years and, together, present a relatively complete software “stack” to support the consistent storage and delivery of hydrologic and other environmental observation data. This presentation describes the development of a new HIS software tool called “HydroDesktop” and the development of an online open source software development community to update and maintain the software. HydroDesktop is a local (i.e. not server-based) client side software tool that ultimately will run on multiple operating systems and will provide a highly usable level of access to HIS services. The software provides many key capabilities including data query, map-based visualization, data download, local data maintenance, editing, graphing, data export to selected model-specific data formats, linkage with integrated modeling systems such as OpenMI, and ultimately upload to HIS servers from the local desktop software. As the software is presently in the early stages of development, this presentation will focus on design approach and paradigm and is viewed as an opportunity to encourage participation in the open development community. Indeed, recognizing the value of community based code development as a means of ensuring end-user adoption, this project has adopted an “iterative” or “spiral” software development approach which will be described in this presentation.
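To make the WaterML idea concrete, here is a small parsing sketch. The XML fragment is a simplified, hypothetical stand-in; real WaterML documents from CUAHSI HIS services are richer and namespace-qualified, so the element names here should not be taken as the actual schema.

    import xml.etree.ElementTree as ET

    # Simplified, WaterML-style fragment (invented for illustration).
    waterml = """<timeSeriesResponse>
      <timeSeries>
        <variable><variableName>Discharge</variableName></variable>
        <values>
          <value dateTime="2009-10-01T00:00:00">12.4</value>
          <value dateTime="2009-10-01T01:00:00">12.1</value>
        </values>
      </timeSeries>
    </timeSeriesResponse>"""

    root = ET.fromstring(waterml)
    name = root.findtext(".//variableName")
    series = [(v.get("dateTime"), float(v.text)) for v in root.iter("value")]
    print(name, series)  # Discharge [('2009-10-01T00:00:00', 12.4), ...]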
NASA Langley Research Center's distributed mass storage system
NASA Technical Reports Server (NTRS)
Pao, Juliet Z.; Humes, D. Creig
1993-01-01
There is a trend in institutions with high performance computing and data management requirements to explore mass storage systems with peripherals directly attached to a high speed network. The Distributed Mass Storage System (DMSS) Project at NASA LaRC is building such a system and expects to put it into production use by the end of 1993. This paper presents the design of the DMSS, some experiences in its development and use, and a performance analysis of its capabilities. The special features of this system are: (1) workstation class file servers running UniTree software; (2) third party I/O; (3) HIPPI network; (4) HIPPI/IPI3 disk array systems; (5) Storage Technology Corporation (STK) ACS 4400 automatic cartridge system; (6) CRAY Research Incorporated (CRI) CRAY Y-MP and CRAY-2 clients; (7) file server redundancy provision; and (8) a transition mechanism from the existing mass storage system to the DMSS.
Grand challenges in mass storage: A system integrator's perspective
NASA Technical Reports Server (NTRS)
Mintz, Dan; Lee, Richard
1993-01-01
The grand challenges are the following: to develop more innovation in approach; to expand the I/O barrier; to achieve increased volumetric efficiency and incremental cost improvements; to reinforce the 'weakest link' software; to implement improved architectures; and to minimize the impact of self-destructing technologies. Mass storage is defined as any type of storage system exceeding 100 GBytes in total size, under the control of a centralized file management scheme. The topics covered are presented in viewgraph form.
Evaluation of the Huawei UDS cloud storage system for CERN specific data
NASA Astrophysics Data System (ADS)
Zotes Resines, M.; Heikkila, S. S.; Duellmann, D.; Adde, G.; Toebbicke, R.; Hughes, J.; Wang, L.
2014-06-01
Cloud storage is an emerging architecture aiming to provide increased scalability and access performance, compared to more traditional solutions. CERN is evaluating this promise using Huawei UDS and OpenStack SWIFT storage deployments, focusing on the needs of high-energy physics. Both deployed setups implement S3, one of the protocols that are emerging as a standard in the cloud storage market. A set of client machines is used to generate I/O load patterns to evaluate the storage system performance. The presented read and write test results indicate scalability from both metadata and data perspectives. Further, the Huawei UDS cloud storage is shown to be able to recover from a major failure involving the loss of 16 disks. Both cloud storage systems are finally demonstrated to function as back-end storage for a filesystem, which is used to deliver high energy physics software.
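Load generation of the kind described can be sketched briefly. This is a schematic example, not the CERN harness: it assumes the boto3 client library for the S3 protocol, and the endpoint, bucket name, and object size are placeholders (credentials are taken from the environment).

    import time
    import boto3

    # Placeholder endpoint and bucket; any S3-compatible service can stand in.
    s3 = boto3.client("s3", endpoint_url="http://s3.example.org:8080")
    BUCKET = "benchmark"
    payload = b"x" * (4 * 1024 * 1024)  # 4 MiB objects

    def timed_put_get(n):
        # Time n sequential PUTs, then n sequential GETs, of the same objects.
        t0 = time.time()
        for i in range(n):
            s3.put_object(Bucket=BUCKET, Key=f"obj-{i}", Body=payload)
        t1 = time.time()
        for i in range(n):
            s3.get_object(Bucket=BUCKET, Key=f"obj-{i}")["Body"].read()
        t2 = time.time()
        mb = n * len(payload) / 1e6
        return mb / (t1 - t0), mb / (t2 - t1)  # write and read rates, MB/s

    print(timed_put_get(16))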
On Correlated Failures in Survivable Storage Systems
2002-05-01
High-performance mass storage system for workstations
NASA Technical Reports Server (NTRS)
Chiang, T.; Tang, Y.; Gupta, L.; Cooperman, S.
1993-01-01
Reduced Instruction Set Computer (RISC) workstations and Personal Computers (PCs) are very popular tools for office automation, command and control, scientific analysis, database management, and many other applications. However, when using Input/Output (I/O) intensive applications, the RISC workstations and PCs are often overburdened with the tasks of collecting, staging, storing, and distributing data. Also, by using standard high-performance peripherals and storage devices, the I/O function can still be a common bottleneck process. Therefore, the high-performance mass storage system, developed by Loral AeroSys' Independent Research and Development (IR&D) engineers, can offload a RISC workstation of I/O related functions and provide high-performance I/O functions and external interfaces. The high-performance mass storage system has the capabilities to ingest high-speed real-time data, perform signal or image processing, and stage, archive, and distribute the data. This mass storage system uses a hierarchical storage structure, thus reducing the total data storage cost while maintaining high I/O performance. The high-performance mass storage system is a network of low-cost parallel processors and storage devices. The nodes in the network have special I/O functions such as: SCSI controller, Ethernet controller, gateway controller, RS232 controller, IEEE488 controller, and digital/analog converter. The nodes are interconnected through high-speed direct memory access links to form a network. The topology of the network is easily reconfigurable to maximize system throughput for various applications. This high-performance mass storage system takes advantage of a 'busless' architecture for maximum expandability. The mass storage system consists of magnetic disks, a WORM optical disk jukebox, and an 8mm helical scan tape to form a hierarchical storage structure. Commonly used files are kept on the magnetic disks for fast retrieval. The optical disks are used as archive media, and the tapes are used as backup media. The storage system is managed by the IEEE mass storage reference model-based UniTree software package. UniTree software will keep track of all files in the system, will automatically migrate the lesser used files to archive media, and will stage the files when needed by the system. The user can access the files without knowledge of their physical location. The high-performance mass storage system developed by Loral AeroSys will significantly boost the system I/O performance and reduce the overall data storage cost. This storage system provides a highly flexible and cost-effective architecture for a variety of applications (e.g., real-time data acquisition with a signal and image processing requirement, long-term data archiving and distribution, and image analysis and enhancement).
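The migrate-and-stage policy that UniTree-style hierarchical storage applies can be caricatured in a few lines. This is a toy sketch under invented names and thresholds, not UniTree's actual policy engine.

    # Toy hierarchical-storage sketch: files idle longer than a threshold
    # migrate from disk to archive; staging pulls them back on demand.
    disk = {"scan_001": 2, "scan_002": 45}   # file -> days since last use
    archive = {}

    def migrate(days_idle_threshold=30):
        for name, idle in list(disk.items()):
            if idle > days_idle_threshold:   # lesser-used file: archive it
                archive[name] = disk.pop(name)

    def stage(name):
        # Bring an archived file back to disk, transparently to the user.
        if name in archive:
            archive.pop(name)
            disk[name] = 0                   # freshly used again
        return name in disk

    migrate()
    assert "scan_002" in archive
    stage("scan_002")
    assert "scan_002" in disk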
Evaluation of ZFS as an efficient WLCG storage backend
NASA Astrophysics Data System (ADS)
Ebert, M.; Washbrook, A.
2017-10-01
A ZFS based software raid system was tested for performance against a hardware raid system providing storage based on the traditional Linux file systems XFS and EXT4. These tests were done for a healthy raid array as well as for a degraded raid array and during the rebuild of a raid array. It was found that ZFS performs better in almost all test scenarios. In addition, distinct features of ZFS were tested for WLCG data storage use, like compression and higher raid levels with triple redundancy information. The long term reliability was observed after converting all production storage servers at the Edinburgh WLCG Tier-2 site to ZFS, resulting in about 1.2PB of ZFS based storage at this site.
NASA Technical Reports Server (NTRS)
Moore, Reagan W.
2004-01-01
The long-term preservation of digital entities requires mechanisms to manage the authenticity of massive data collections that are written to archival storage systems. Preservation environments impose authenticity constraints and manage the evolution of the storage system technology by building infrastructure independent solutions. This seeming paradox, the need for large archives, while avoiding dependence upon vendor specific solutions, is resolved through use of data grid technology. Data grids provide the storage repository abstractions that make it possible to migrate collections between vendor specific products, while ensuring the authenticity of the archived data. Data grids provide the software infrastructure that interfaces vendor-specific storage archives to preservation environments.
A simulation model for wind energy storage systems. Volume 1: Technical report
NASA Technical Reports Server (NTRS)
Warren, A. W.; Edsinger, R. W.; Chan, Y. K.
1977-01-01
A comprehensive computer program for the modeling of wind energy and storage systems utilizing any combination of five types of storage (pumped hydro, battery, thermal, flywheel and pneumatic) was developed. The level of detail of SIMWEST (Simulation Model for Wind Energy Storage) is consistent with its role of evaluating the economic feasibility as well as the general performance of wind energy systems. The software package consists of two basic programs and a library of system, environmental, and load components. The first program is a precompiler which generates computer models (in FORTRAN) of complex wind source storage application systems from user specifications, using the respective library components. The second program provides the techno-economic system analysis with the respective I/O, the integration of system dynamics, and the iteration for conveyance of variables. The SIMWEST program, as described, runs on the UNIVAC 1100 series computers.
A Custom Approach for a Flexible, Real-Time and Reliable Software Defined Utility.
Zaballos, Agustín; Navarro, Joan; Martín De Pozuelo, Ramon
2018-02-28
Information and communication technologies (ICTs) have enabled the evolution of traditional electric power distribution networks towards a new paradigm referred to as the smart grid. However, the different elements that compose the ICT plane of a smart grid are usually conceived as isolated systems that typically result in rigid hardware architectures, which are hard to interoperate, manage and adapt to new situations. In recent years, software-defined systems that take advantage of software and high-speed data network infrastructures have emerged as a promising alternative to classic ad hoc approaches in terms of integration, automation, real-time reconfiguration and resource reusability. The purpose of this paper is to propose the usage of software-defined utilities (SDUs) to address the latent deployment and management limitations of smart grids. More specifically, the implementation of a smart grid's data storage and management system prototype by means of SDUs is introduced, which exhibits the feasibility of this alternative approach. This system features a hybrid cloud architecture able to meet the data storage requirements of electric utilities and adapt itself to their ever-evolving needs. The experiments conducted endorse the feasibility of this solution and encourage practitioners to direct their efforts in this direction.
NASA Technical Reports Server (NTRS)
Burleigh, Scott C.
2011-01-01
Sptrace is a general-purpose space utilization tracing system that is conceptually similar to the commercial Purify product used to detect leaks and other memory usage errors. It is designed to monitor space utilization in any sort of heap, i.e., a region of data storage on some device (nominally memory; possibly shared and possibly persistent) with a flat address space. This software can trace usage of shared and/or non-volatile storage in addition to private RAM (random access memory). Sptrace is implemented as a set of C function calls that are invoked from within the software that is being examined. The function calls fall into two broad classes: (1) functions that are embedded within the heap management software [e.g., JPL's SDR (Simple Data Recorder) and PSM (Personal Space Management) systems] to enable heap usage analysis by populating a virtual time-sequenced log of usage activity, and (2) reporting functions that are embedded within the application program whose behavior is suspect. For ease of use, these functions may be wrapped privately inside public functions offered by the heap management software. Sptrace can be used for VxWorks or RTEMS real-time systems as easily as for Linux or OS/X systems.
NASA Astrophysics Data System (ADS)
Karmazikov, Y. V.; Fainberg, E. M.
2005-06-01
Work with DICOM-compatible equipment integrated into hardware and software systems for medical purposes is considered. The structure of the data reception and transformation process is presented using the example of the digital roentgenography and angiography systems included in the DIMOL-IK hardware-software complex. Algorithms for the reception and analysis of the data are proposed, and questions of the further processing and storage of the received data are considered.
National Storage Laboratory: a collaborative research project
NASA Astrophysics Data System (ADS)
Coyne, Robert A.; Hulen, Harry; Watson, Richard W.
1993-01-01
The grand challenges of science and industry that are driving computing and communications have created corresponding challenges in information storage and retrieval. An industry-led collaborative project has been organized to investigate technology for storage systems that will be the future repositories of national information assets. Industry participants are IBM Federal Systems Company, Ampex Recording Systems Corporation, General Atomics DISCOS Division, IBM ADSTAR, Maximum Strategy Corporation, Network Systems Corporation, and Zitel Corporation. Industry members of the collaborative project are funding their own participation. Lawrence Livermore National Laboratory through its National Energy Research Supercomputer Center (NERSC) will participate in the project as the operational site and provider of applications. The expected result is the creation of a National Storage Laboratory to serve as a prototype and demonstration facility. It is expected that this prototype will represent a significant advance in the technology for distributed storage systems capable of handling gigabyte-class files at gigabit-per-second data rates. Specifically, the collaboration expects to make significant advances in hardware, software, and systems technology in four areas of need: (1) network-attached high performance storage; (2) multiple, dynamic, distributed storage hierarchies; (3) layered access to storage system services; and (4) storage system management.
NASA Astrophysics Data System (ADS)
Poluyan, L. V.; Syutkina, E. V.; Guryev, E. S.
2017-11-01
The comparative analysis of key features of the software systems TOXI+Risk and ALOHA is presented. The authors compared the domestic (TOXI+Risk) and foreign (ALOHA) software systems, both of which give quantitative assessments of impact areas (pressure, thermal, toxic) for hypothetical emergencies at potentially hazardous facilities of the oil, gas, chemical, petrochemical and oil-processing industries. The two software systems use different mathematical models to assess the release rate of a chemically hazardous substance from a storage tank and its evaporation. A comparison of the impact areas computed by both software systems on verification examples shows good agreement between the two products. The analysis results showed that the ALOHA software can be actively used for forecasting and immediate assessment of emergency situations and of the damage resulting from emergencies on the territories of municipalities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warrant, Marilyn M.; Garcia, Rudy J.; Zhang, Pengchu
2004-09-15
Tomcat-Projects_RF is a software package for analyzing sensor data obtained from a database and displaying the results with JavaServer Pages (JSP). SQL views into the dataset are tailored for personnel having different roles in monitoring the items in a storage facility. For example, an inspector, a host treaty compliance officer, a system engineer, and software developers were identified as users needing access to the data at different levels of detail. The analysis provides a high-level status of the storage facility and allows the user to go deeper into the data details if desired.
NASA Technical Reports Server (NTRS)
1985-01-01
The second task in the Space Station Data System (SSDS) Analysis/Architecture Study is the development of an information base that will support the conduct of trade studies and provide sufficient data to make key design/programmatic decisions. This volume identifies the preferred options in the technology category and characterizes these options with respect to performance attributes, constraints, cost, and risk. The technology category includes advanced materials, processes, and techniques that can be used to enhance the implementation of SSDS design structures. The specific areas discussed are mass storage, including space and ground on-line storage and off-line storage; man/machine interface; data processing hardware, including flight computers and advanced/fault tolerant computer architectures; and software, including data compression algorithms, on-board high level languages, and software tools. Also discussed are artificial intelligence applications and hard-wire communications.
Optimizing the Use of Storage Systems Provided by Cloud Computing Environments
NASA Astrophysics Data System (ADS)
Gallagher, J. H.; Potter, N.; Byrne, D. A.; Ogata, J.; Relph, J.
2013-12-01
Cloud computing systems present a set of features that include familiar computing resources (albeit augmented to support dynamic scaling of processing power) bundled with a mix of conventional and unconventional storage systems. The Linux base on which many cloud environments (e.g., Amazon) are built makes it tempting to assume that any Unix software will run efficiently in this environment without change. OPeNDAP and NODC collaborated on a short project to explore how the S3 and Glacier storage systems provided by the Amazon cloud computing infrastructure could be used with a data server developed primarily to access data stored in a traditional Unix file system. Our work used the Amazon cloud system, but we strove for designs that could be adapted easily to other systems like OpenStack. Lastly, we evaluated different architectures from a computer security perspective. We found that there are considerable issues associated with treating S3 as if it were a traditional file system, even though doing so is conceptually simple. These issues include performance penalties: a software tool that emulates a traditional file system on top of S3 performs poorly when compared to storing data directly in S3. We also found there are important benefits beyond performance to ensuring that data written to S3 can be accessed directly, without relying on a specific software tool. To provide a hierarchical organization to the data stored in S3, we wrote 'catalog' files, using XML. These catalog files map discrete files to S3 access keys. Like a traditional file system's directories, the catalogs can also contain references to other catalogs, providing a simple but effective hierarchy overlaid on top of S3's flat storage space. An added benefit of these catalogs is that they can be viewed in a web browser; our storage scheme provides both efficient access for the data server and access via a web browser. We also looked at the Glacier storage system and found that its response characteristics are very different from a traditional file system or database; it behaves like a near-line storage system. To be used by a traditional data server, the underlying access protocol must support asynchronous accesses. This is because the Glacier system takes a minimum of four hours to deliver any data object, so systems built with the expectation of instant access (i.e., most web systems) must be fundamentally changed to use Glacier. Part of a related project has been to develop an asynchronous access mode for OPeNDAP, and we have developed a design using that new addition to the DAP protocol with Glacier as a near-line mass store. In summary, we found that both S3 and Glacier require special treatment to be used effectively by a data server. It is important to add (new) interfaces to data servers that enable them to use these storage devices through their native interfaces. We also found that our designs could easily map to a cloud environment based on OpenStack. Lastly, we noted that while these designs invite more liberal use of remote references for data objects, that can expose software to new security risks.
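The XML catalog idea is easy to illustrate. The fragment and element names below are invented for illustration, not the project's actual catalog schema; only the concept (files map to S3 keys, catalogs may nest) comes from the text above.

    import xml.etree.ElementTree as ET

    # Hypothetical catalog: files map to S3 keys; catalogs may nest.
    catalog_xml = """<catalog name="root">
      <file name="sst.nc" s3key="bucket/a1b2c3"/>
      <catalog name="2013">
        <file name="chl.nc" s3key="bucket/d4e5f6"/>
      </catalog>
    </catalog>"""

    def resolve(node, path=""):
        # Flatten nested catalogs into directory-like paths -> S3 keys.
        entries = {}
        for child in node:
            if child.tag == "file":
                entries[f"{path}/{child.get('name')}"] = child.get("s3key")
            elif child.tag == "catalog":
                entries.update(resolve(child, f"{path}/{child.get('name')}"))
        return entries

    print(resolve(ET.fromstring(catalog_xml)))
    # {'/sst.nc': 'bucket/a1b2c3', '/2013/chl.nc': 'bucket/d4e5f6'}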
MINDS: A microcomputer interactive data system for 8086-based controllers
NASA Technical Reports Server (NTRS)
Soeder, J. F.
1985-01-01
A microcomputer interactive data system (MINDS) software package for the 8086 family of microcomputers is described. To enhance program understandability and ease of code maintenance, the software is written in PL/M-86, Intel Corporation's high-level system implementation language. The MINDS software is intended to run in residence with real-time digital control software to provide displays of steady-state and transient data. In addition, the MINDS package provides classic monitor capabilities along with extended provisions for debugging an executing control system. The software uses the CP/M-86 operating system developed by Digital Research, Inc., to provide program load capabilities along with a uniform file structure for data and table storage. Finally, a library of input and output subroutines to be used with consoles equipped with PL/M-86 and assembly language is described.
Power, Avionics and Software Communication Network Architecture
NASA Technical Reports Server (NTRS)
Ivancic, William D.; Sands, Obed S.; Bakula, Casey J.; Oldham, Daniel R.; Wright, Ted; Bradish, Martin A.; Klebau, Joseph M.
2014-01-01
This document describes the communication architecture for the Power, Avionics and Software (PAS) 2.0 subsystem for the Advanced Extravehicular Mobile Unit (AEMU). The following systems are described in detail: Caution Warning and Control System, Informatics, Storage, Video, Audio, Communication, and Monitoring Test and Validation. This document also provides some background as well as the purpose and goals of the PAS project at Glenn Research Center (GRC).
NASA Astrophysics Data System (ADS)
Zhang, Xinhua; Zhou, Zhongkang; Chen, Xiaochun; Song, Jishuang; Shi, Maolin
2017-05-01
A hybrid energy storage system is proposed based on the NaS battery and the lithium-ion battery: the former is the main large-scale energy storage technology used and developed world-wide, and the latter is a flexible technology offering both power and energy capacity. The hybrid energy storage system, which takes advantage of the two complementary technologies to provide large power and energy capacities, is evaluated economically and environmentally on the basis of critical excess electricity production (CEEP), CO2 emissions, and annual total costs, calculated for the specific given conditions using the EnergyPLAN software. The results show that the hybrid storage system has strengths in environmental benefits, can absorb more otherwise-discarded wind power than a single storage system, and is a potential way to push forward the application of wind power and other types of renewable energy resources.
NASA Technical Reports Server (NTRS)
Mukhopadhyay, A. K.
1978-01-01
The Data Storage Subsystem Simulator (DSSSIM), which simulates (by ground software) the occurrence of discrete events in the Voyager mission, is described. Functional requirements for Data Storage Subsystem (DSS) simulation are discussed, and discrete event simulation/DSSSIM processing is covered. Four types of outputs associated with a typical DSSSIM run are presented, and DSSSIM limitations and constraints are outlined.
NASA Astrophysics Data System (ADS)
Davis, G. A.; Battistuz, B.; Foley, S.; Vernon, F. L.; Eakins, J. A.
2009-12-01
Since April 2004 the Earthscope USArray Transportable Array (TA) network has grown to over 400 broadband seismic stations that stream multi-channel data in near real-time to the Array Network Facility in San Diego. In total, over 1.7 terabytes per year of 24-bit, 40 samples-per-second seismic and state of health data is recorded from the stations. The ANF provides analysts access to real-time and archived data, as well as state-of-health data, metadata, and interactive tools for station engineers and the public via a website. Additional processing and recovery of missing data from on-site recorders (balers) at the stations is performed before the final data is transmitted to the IRIS Data Management Center (DMC). Assembly of the final data set requires additional storage and processing capabilities to combine the real-time data with baler data. The infrastructure supporting these diverse computational and storage needs currently consists of twelve virtualized Sun Solaris Zones executing on nine physical server systems. The servers are protected against failure by redundant power, storage, and networking connections. Storage needs are provided by a hybrid iSCSI and Fiber Channel Storage Area Network (SAN) with access to over 40 terabytes of RAID 5 and 6 storage. Processing tasks are assigned to systems based on parallelization and floating-point calculation needs. On-site buffering at the data-loggers provide protection in case of short-term network or hardware problems, while backup acquisition systems at the San Diego Supercomputer Center and the DMC protect against catastrophic failure of the primary site. Configuration management and monitoring of these systems is accomplished with open-source (Cfengine, Nagios, Solaris Community Software) and commercial tools (Intermapper). In the evolution from a single server to multiple virtualized server instances, Sun Cluster software was evaluated and found to be unstable in our environment. Shared filesystem architectures using PxFS and QFS were found to be incompatible with our software architecture, so sharing of data between systems is accomplished via traditional NFS. Linux was found to be limited in terms of deployment flexibility and consistency between versions. Despite the experimentation with various technologies, our current virtualized architecture is stable to the point of an average daily real time data return rate of 92.34% over the entire lifetime of the project to date.
A data-management system for detailed areal interpretive data
Ferrigno, C.F.
1986-01-01
A data storage and retrieval system has been developed to organize and preserve areal interpretive data. This system can be used by any study where there is a need to store areal interpretive data that generally is presented in map form. This system provides the capability to grid areal interpretive data for input to groundwater flow models at any spacing and orientation. The data storage and retrieval system is designed to be used for studies that cover small areas such as counties. The system is built around a hierarchically structured data base consisting of related latitude-longitude blocks. The information in the data base can be stored at different levels of detail, with the finest detail being a block of 6 sec of latitude by 6 sec of longitude (approximately 0.01 sq mi). This system was implemented on a mainframe computer using a hierarchical data base management system. The computer programs are written in Fortran IV and PL/1. The design and capabilities of the data storage and retrieval system, and the computer programs that are used to implement the system are described. Supplemental sections contain the data dictionary, user documentation of the data-system software, changes that would need to be made to use this system for other studies, and information on the computer software tape. (Lantz-PTT)
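The hierarchical block addressing described above can be sketched with a short function. The key format is invented for illustration; the only details taken from the text are the finest 6-second-by-6-second block size and the existence of coarser parent levels.

    # Map a coordinate to its latitude-longitude block at a given resolution.
    def block_key(lat_deg, lon_deg, seconds_per_block=6):
        lat_s = int(round(lat_deg * 3600))  # position in arc-seconds
        lon_s = int(round(lon_deg * 3600))
        return (lat_s // seconds_per_block, lon_s // seconds_per_block)

    # A coarser parent block contains its finer children, giving the hierarchy.
    fine = block_key(38.8895, -77.0353, 6)     # finest level (~0.01 sq mi)
    coarse = block_key(38.8895, -77.0353, 60)  # one coarser level
    print(fine, coarse)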
Microcomputers in Libraries: The Quiet Revolution.
ERIC Educational Resources Information Center
Boss, Richard
1985-01-01
This article defines three separate categories of microcomputers--personal, desk-top, multi-user devices--and relates storage capabilities (expandability, floppy disks) to library applications. Highlights include de facto standards, operating systems, database management systems, applications software, circulation control systems, dumb and…
Software Defined Cyberinfrastructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foster, Ian; Blaiszik, Ben; Chard, Kyle
Within and across thousands of science labs, researchers and students struggle to manage data produced in experiments, simulations, and analyses. Largely manual research data lifecycle management processes mean that much time is wasted, research results are often irreproducible, and data sharing and reuse remain rare. In response, we propose a new approach to data lifecycle management in which researchers are empowered to define the actions to be performed at individual storage systems when data are created or modified: actions such as analysis, transformation, copying, and publication. We term this approach software-defined cyberinfrastructure because users can implement powerful data management policies by deploying rules to local storage systems, much as software-defined networking allows users to configure networks by deploying rules to switches. We argue that this approach can enable a new class of responsive distributed storage infrastructure that will accelerate research innovation by allowing any researcher to associate data workflows with data sources, whether local or remote, for such purposes as data ingest, characterization, indexing, and sharing. We report on early experiments with this approach in the context of experimental science, in which a simple if-trigger-then-action (IFTA) notation is used to define rules.
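The if-trigger-then-action idea admits a compact sketch. This is a minimal illustration only, with invented event fields and actions; it is not the notation or rule engine the authors report.

    # Minimal IFTA-style rule engine: rules pair a trigger predicate with an
    # action, and fire when a storage event matches.
    rules = []

    def rule(trigger):
        def register(action):
            rules.append((trigger, action))
            return action
        return register

    @rule(lambda e: e["type"] == "created" and e["path"].endswith(".tif"))
    def index_image(event):
        print("extract metadata and index:", event["path"])

    @rule(lambda e: e["type"] == "modified")
    def replicate(event):
        print("copy to shared store:", event["path"])

    def on_storage_event(event):
        for trigger, action in rules:
            if trigger(event):
                action(event)

    on_storage_event({"type": "created", "path": "/beamline/scan42.tif"})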
NASA Technical Reports Server (NTRS)
Jamsek, Damir A.
1993-01-01
A brief example of the use of formal methods techniques in the specification of a software system is presented. The report is part of a larger effort targeted at defining a formal methods pilot project for NASA. One possible application domain that may be used to demonstrate the effective use of formal methods techniques within the NASA environment is presented. It is not intended to provide a tutorial on either formal methods techniques or the application being addressed. It should, however, provide an indication that the application being considered is suitable for a formal methods approach by showing how such a task may be started. The particular system being addressed is the Structured File Services (SFS), which is a part of the Data Storage and Retrieval Subsystem (DSAR), which in turn is part of the Data Management System (DMS) onboard Space Station Freedom. This is a software system that is currently under development for NASA. An informal mathematical development is presented. Section 3 contains the same development using Penelope (23), an Ada specification and verification system. The complete text of the English version Software Requirements Specification (SRS) is reproduced in Appendix A.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, R.E.
1994-11-02
This document provides the software development plan for the Waste Receiving and Processing (WRAP) Module 1 Data Management System (DMS). The DMS is one of the plant computer systems for the new WRAP 1 facility (Project W-026). The DMS will collect, store, and report data required to certify the low level waste (LLW) and transuranic (TRU) waste items processed at WRAP 1 as acceptable for shipment, storage, or disposal.
Monitoring SLAC High Performance UNIX Computing Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lettsome, Annette K.; /Bethune-Cookman Coll. /SLAC
2005-12-15
Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. The monitoring of such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed for high performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process used in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.
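A metric table of the kind the paper's MySQL database might hold can be sketched as follows. sqlite3 stands in for MySQL here so the example is self-contained, and the schema is invented for illustration rather than taken from the paper.

    import sqlite3, time

    # In-memory stand-in for the MySQL metric store.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE metrics (host TEXT, metric TEXT, value REAL, ts INTEGER)")

    def record(host, metric, value):
        # One row per sample, as a script polling Ganglia might insert.
        db.execute("INSERT INTO metrics VALUES (?, ?, ?, ?)",
                   (host, metric, value, int(time.time())))

    record("node01", "load_one", 0.42)
    record("node01", "load_one", 0.57)
    for row in db.execute("SELECT host, metric, AVG(value) "
                          "FROM metrics GROUP BY host, metric"):
        print(row)  # e.g. ('node01', 'load_one', 0.495)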
FFI: A software tool for ecological monitoring
Duncan C. Lutes; Nathan C. Benson; MaryBeth Keifer; John F. Caratti; S. Austin Streetman
2009-01-01
A new monitoring tool called FFI (FEAT/FIREMON Integrated) has been developed to assist managers with collection, storage and analysis of ecological information. The tool was developed through the complementary integration of two fire effects monitoring systems commonly used in the United States: FIREMON and the Fire Ecology Assessment Tool. FFI provides software...
NASA Technical Reports Server (NTRS)
Yao, S. S. (Principal Investigator)
1981-01-01
A preliminary evaluation of the processing capability of the VICAR software for land information support system needs is presented. The geometric and radiometric properties of four sets of LANDSAT data taken over the Elk River, Idaho quadrangle were compared. Storage of data sets, the means of location, pixel resolution, and radiometric and geometric characteristics are described. Recommended modifications of VICAR programs are presented.
A Dependable Massive Storage Service for Medical Imaging.
Núñez-Gaona, Marco Antonio; Marcelín-Jiménez, Ricardo; Gutiérrez-Martínez, Josefina; Aguirre-Meneses, Heriberto; Gonzalez-Compean, José Luis
2018-05-18
We present the construction of Babel, a distributed storage system that meets stringent requirements on dependability, availability, and scalability. Together with Babel, we developed an application that uses our system to store medical images. Accordingly, we show the feasibility of our proposal to provide an alternative solution for massive scientific storage and describe the software architecture style that manages the DICOM image life cycle, utilizing Babel as a virtual local storage component for a picture archiving and communication system (PACS-Babel Interface). Furthermore, we describe the communication interface in the Unified Modeling Language (UML) and show how it can be extended to manage the demanding data migration processes on PACS in case of updates or disaster recovery.
NASA Astrophysics Data System (ADS)
Jaworski, Allan
1993-08-01
The Earth Observing System (EOS) Data and Information System (EOSDIS) will serve as a major resource for the earth science community, supporting both command and control of complex instruments onboard the EOS spacecraft and the archiving, distribution, and analysis of data. The scale of EOSDIS and the volume of multidisciplinary research to be conducted using EOSDIS resources will produce unparalleled needs for technology transparency, data integration, and system interoperability. The scale of this effort far exceeds that of any previous scientific data system in its breadth or operational and performance needs. Modern hardware technology can meet the EOSDIS technical challenge. Multiprocessing speeds of many gigaflops are being realized by modern computers. Online storage disk, optical disk, and videocassette libraries with storage capacities of many terabytes are now commercially available. Radio frequency and fiber optics communications networks with gigabit rates are demonstrable today. It remains, of course, to perform the system engineering to establish the requirements, architectures, and designs that will implement the EOSDIS systems. Software technology, however, has not enjoyed the price/performance advances of hardware. Although we have learned to engineer hardware systems which have several orders of magnitude greater complexity and performance than those built in the 1960s, we have not made comparable progress in dramatically reducing the cost of software development. This lack of progress may significantly reduce our capabilities to achieve economically the types of highly interoperable, responsive, integrated, and productive environments which are needed by the earth science community. This paper describes some of the EOSDIS software requirements and current activities in the software community which are applicable to meeting the EOSDIS challenge. Some of these areas include intelligent user interfaces, software reuse libraries, and domain engineering. Also included are discussions of applicable standards in the areas of operating systems interfaces, user interfaces, communications interfaces, data transport, and science algorithm support, and their role in supporting the software development process.
An expanded system simulation model for solar energy storage (technical report), volume 1
NASA Technical Reports Server (NTRS)
Warren, A. W.
1979-01-01
The simulation model for wind energy storage (SIMWEST) program now includes wind and/or photovoltaic systems utilizing any combination of five types of storage (pumped hydro, battery, thermal, flywheel and pneumatic) and is available for the UNIVAC 1100 series and the CDC 6000 series computers. The level of detail is consistent with its role of evaluating the economic feasibility as well as the general performance of wind and/or photovoltaic energy systems. The software package consists of two basic programs and a library of system, environmental, and load components. The first program is a precompiler which generates computer models (in FORTRAN) of complex wind and/or photovoltaic source/storage/application systems from user specifications, using the respective library components. The second program provides the techno-economic system analysis with the respective I/O, the integration of system dynamics, and the iteration for convergence of variables.
The INFN-CNAF Tier-1 GEMSS Mass Storage System and database facility activity
NASA Astrophysics Data System (ADS)
Ricci, Pier Paolo; Cavalli, Alessandro; Dell'Agnello, Luca; Favaro, Matteo; Gregori, Daniele; Prosperini, Andrea; Pezzi, Michele; Sapunenko, Vladimir; Zizzi, Giovanni; Vagnoni, Vincenzo
2015-05-01
The consolidation of Mass Storage services at the INFN-CNAF Tier1 Storage department over the last 5 years has resulted in a reliable, high-performance and moderately easy-to-manage facility that provides data access, archive, backup and database services to several different use cases. At present, the GEMSS Mass Storage System, developed and installed at CNAF and based upon an integration between the IBM GPFS parallel filesystem and the Tivoli Storage Manager (TSM) tape management software, is one of the largest hierarchical storage sites in Europe. It provides storage resources for about 12% of LHC data, as well as for data of other non-LHC experiments. Files are accessed using standard SRM Grid services provided by the Storage Resource Manager (StoRM), also developed at CNAF. Data access is also provided by XRootD and HTTP/WebDAV endpoints. Besides these services, an Oracle database facility is in production, characterized by an effective level of parallelism, redundancy and availability. This facility runs databases for storing and accessing relational data objects and for providing database services to the currently active use cases. It takes advantage of several Oracle technologies, like Real Application Cluster (RAC), Automatic Storage Manager (ASM) and the Enterprise Manager centralized management tools, together with other technologies for performance optimization, ease of management and downtime reduction. The aim of the present paper is to illustrate the state-of-the-art of the INFN-CNAF Tier1 Storage department infrastructures and software services, and to give a brief outlook on forthcoming projects. A description of the administrative, monitoring and problem-tracking tools that play a primary role in managing the whole storage framework is also given.
Design and Implementation of Telemedicine based on Java Media Framework
NASA Astrophysics Data System (ADS)
Xiong, Fengguang; Jia, Zhiyan
After analyzing the importance of telemedicine and the problems it faces, this paper proposes a telemedicine system based on JMF and designs and implements the capture, compression, storage, transmission, reception and playback of medical audio and video. The telemedicine system can solve existing problems such as unshared medical information, high platform dependence, and software incompatibilities. Experimental data prove that the system has low hardware cost, is easy to transmit and store, and is portable and powerful.
NASA Astrophysics Data System (ADS)
Dulǎu, Lucian Ioan
2015-12-01
This paper describes the simulation of a microgrid system with storage technologies. The microgrid comprises 6 distributed generators (DGs), 3 loads and a 150 kW storage unit. The installed capacity of the generators is 1100 kW, while the total load demand is 900 kW. The simulation is performed using SCADA software, considering the power generation costs, the load demand and the system's power losses. The generators access the system in order of their power generation cost. The simulation is performed for the entire day.
CARGO: effective format-free compressed storage of genomic information
Roguski, Łukasz; Ribeca, Paolo
2016-01-01
The recent super-exponential growth in the amount of sequencing data generated worldwide has put techniques for compressed storage into focus. Most available solutions, however, are strictly tied to specific bioinformatics formats, sometimes inheriting from them suboptimal design choices; this hinders flexible and effective data sharing. Here, we present CARGO (Compressed ARchiving for GenOmics), a high-level framework to automatically generate software systems optimized for the compressed storage of arbitrary types of large genomic data collections. Straightforward applications of our approach to FASTQ and SAM archives require a few lines of code, produce solutions that match and sometimes outperform specialized format-tailored compressors, and scale well to multi-TB datasets. All CARGO software components can be freely downloaded for academic and non-commercial use from http://bio-cargo.sourceforge.net. PMID:27131376
NASA Astrophysics Data System (ADS)
Vikhlyantsev, O. P.; Generalov, L. N.; Kuryakin, A. V.; Karpov, I. A.; Gurin, N. E.; Tumkin, A. D.; Fil'chagin, S. V.
2017-12-01
A hardware-software complex for measurement of energy and angular distributions of charged particles formed in nuclear reactions is presented. The hardware and software structures of the complex, the basic set of modular nuclear-physics apparatus of a multichannel detecting system based on ΔE-E telescopes of silicon detectors, and the hardware for experimental data collection, storage, and processing are presented and described.
Buttles, John W
2013-04-23
Wireless communication devices include a software-defined radio coupled to processing circuitry. The system controller is configured to execute computer programming code. Storage media is coupled to the system controller and includes computer programming code configured to cause the system controller to configure and reconfigure the software-defined radio to operate on each of a plurality of communication networks according to a selected sequence. Methods for communicating with a wireless device and methods of wireless network-hopping are also disclosed.
Understanding I/O workload characteristics of a Peta-scale storage system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Youngjae; Gunasekaran, Raghul
2015-01-01
Understanding workload characteristics is critical for optimizing and improving the performance of current systems and software, and for architecting new storage systems based on observed workload patterns. In this paper, we characterize the I/O workloads of scientific applications of one of the world's fastest high performance computing (HPC) storage clusters, Spider, at the Oak Ridge Leadership Computing Facility (OLCF). OLCF's flagship petascale simulation platform, Titan, and other large HPC clusters, with over 250 thousand compute cores in total, depend on Spider for their I/O needs. We characterize the system utilization, the demands of reads and writes, idle time, storage space utilization, and the distribution of read requests to write requests for the peta-scale storage systems. From this study, we develop synthesized workloads, and we show that the read and write I/O bandwidth usage as well as the inter-arrival time of requests can be modeled as a Pareto distribution. We also study I/O load imbalance problems using I/O performance data collected from the Spider storage system.
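As a hedged illustration of the modeling step, the following sketch fits a Pareto distribution to stand-in inter-arrival times with SciPy; the parameters are invented, not the paper's measurements:

```python
# Illustrative sketch (not the paper's code): model request inter-arrival
# times as a Pareto distribution, as the Spider study reports.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Stand-in data; in the study this would be measured inter-arrival times.
observed = stats.pareto.rvs(b=1.5, scale=0.01, size=10_000, random_state=rng)

# Fit shape (b) and scale; loc pinned at 0 for the standard Pareto form.
b, loc, scale = stats.pareto.fit(observed, floc=0)
print(f"fitted shape={b:.2f}, scale={scale:.4f}")

# A synthesized workload can then be driven by draws from the fitted model.
synthetic = stats.pareto.rvs(b=b, loc=loc, scale=scale, size=1_000,
                             random_state=rng)
```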
Development of an Automated Security Incident Reporting System (SIRS) for Bus Transit
DOT National Transportation Integrated Search
1986-12-01
The Security Incident Reporting System (SIRS) is a microcomputer-based software program demonstrated at the Metropolitan Transit Commission (MTC) in Minneapolis, MN. SIRS is designed to provide convenient storage, update and retrieval of security inc...
Simulation model for wind energy storage systems. Volume II. Operation manual. [SIMWEST code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warren, A.W.; Edsinger, R.W.; Burroughs, J.D.
1977-08-01
The effort developed a comprehensive computer program for the modeling of wind energy/storage systems utilizing any combination of five types of storage (pumped hydro, battery, thermal, flywheel and pneumatic). An acronym for the program is SIMWEST (Simulation Model for Wind Energy Storage). The level of detail of SIMWEST is consistent with its role of evaluating the economic feasibility as well as the general performance of wind energy systems. The software package consists of two basic programs and a library of system, environmental, and load components. Volume II, the SIMWEST operation manual, describes the usage of the SIMWEST program, the design of the library components, and a number of simple example simulations intended to familiarize the user with the program's operation. Volume II also contains a listing of each SIMWEST library subroutine.
Bathymetric Survey and Storage Capacity of Upper Lake Mary near Flagstaff, Arizona
Hornewer, Nancy J.; Flynn, Marilyn E.
2008-01-01
Upper Lake Mary is a preferred drinking-water source for the City of Flagstaff, Arizona. Therefore, storage capacity and sedimentation issues in Upper Lake Mary are of interest to the City. The U.S. Geological Survey, in cooperation with the City of Flagstaff, collected bathymetric and land-survey data in Upper Lake Mary during late August through October 2006. Water-depth data were collected using a single-beam, high-definition fathometer. Position data were collected using real-time differential global position system receivers. Data were processed using commercial software and imported into geographic information system software to produce contour maps of lakebed elevations and for the computation of area and storage-capacity information. At full pool (spillway elevation of 6,828.5 feet above mean sea level), Upper Lake Mary has a storage capacity of 16,300 acre-feet, a surface area of 939 acres, a mean depth of 17.4 feet, and a depth near the dam of 39 feet. It is 5.6 miles long and varies in width from 308 feet near the central, narrow portion of the lake to 2,630 feet in the upper portion. Comparisons between this survey and a previous survey conducted in the 1950s indicate no apparent decrease in reservoir area or storage capacity between the two surveys.
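The reported figures are mutually consistent; a quick check (mean depth is storage volume divided by surface area, and acre-feet per acre gives feet):

```python
# Consistency check of the reported survey figures: mean depth equals
# storage volume divided by surface area (acre-feet / acres = feet).
volume_acre_ft = 16_300
area_acres = 939
mean_depth_ft = volume_acre_ft / area_acres
print(f"mean depth = {mean_depth_ft:.1f} ft")  # ~17.4 ft, matching the report
```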
Liu, Hu; Su, Rong-jia; Wu, Min-jie; Zhang, Yi; Qiu, Xiang-jun; Feng, Jian-gang; Xie, Ting; Lu, Shu-liang
2012-06-01
To form a wound information management scheme with objectivity, standardization, and convenience by means of a wound information management system. A wound information management system was set up with an acquisition terminal, defined wound descriptions, a data bank, and related software. The efficacy of this system was evaluated in clinical practice. The acquisition terminal was composed of a third generation mobile phone and its software. It was feasible to access wound information, including descriptions, images, and therapeutic plans, from the data bank by mobile phone. Over 4 months, a total of 232 wound treatment records were entered, from which standardized data for 38 patients were formed automatically. This system can provide standardized wound information management through standardized techniques for the acquisition, transmission, and storage of wound information. It can be used widely in hospitals, especially primary medical institutions. The data resource of the system makes large-sample epidemiological studies possible in the future.
Choosing an Optical Disc System: A Guide for Users and Resellers.
ERIC Educational Resources Information Center
Vane-Tempest, Stewart
1995-01-01
Presents a guide for selecting an optical disc system. Highlights include storage hierarchy; standards; data life cycles; security; implementing an optical jukebox system; optimizing the system; performance; quality and reliability; software; cost of online versus near-line; and growing opportunities. Sidebars provide additional information on…
Application research of Ganglia in Hadoop monitoring and management
NASA Astrophysics Data System (ADS)
Li, Gang; Ding, Jing; Zhou, Lixia; Yang, Yi; Liu, Lei; Wang, Xiaolei
2017-03-01
There are many applications of the Hadoop system in the fields of big data and cloud computing. The storage and application test bench of the seismic network at the Earthquake Administration of Tianjin runs on a Hadoop system, which is operated and monitored with the open source software Ganglia. This paper reviews the functions of Ganglia, its installation and configuration process, and its effect in operating and monitoring the Hadoop system. It also briefly introduces the idea and effect of monitoring the Hadoop system with the Nagios software. This is valuable for the industry in monitoring cloud computing platforms.
Storing, Browsing, Querying, and Sharing Data: the THREDDS Data Repository (TDR)
NASA Astrophysics Data System (ADS)
Wilson, A.; Lindholm, D.; Baltzer, T.
2005-12-01
The Unidata Internet Data Distribution (IDD) network delivers gigabytes of data per day in near real time to sites across the U.S. and beyond. The THREDDS Data Server (TDS) supports public browsing of metadata and data access via OPeNDAP enabled URLs for datasets such as these. With such large quantities of data, sites generally employ a simple data management policy, keeping the data for a relatively short term on the order of hours to perhaps a week or two. In order to save interesting data in longer term storage and make it available for sharing, a user must move the data herself. In this case the user is responsible for determining where space is available, executing the data movement, generating any desired metadata, and setting access control to enable sharing. This task sequence is generally based on execution of a sequence of low level operating system specific commands with significant user involvement. The LEAD (Linked Environments for Atmospheric Discovery) project is building a cyberinfrastructure to support research and education in mesoscale meteorology. LEAD orchestrations require large, robust, and reliable storage with speedy access to stage data and store both intermediate and final results. These requirements suggest storage solutions that involve distributed storage, replication, and interfacing to archival storage systems such as mass storage systems and tape or removable disks. LEAD requirements also include metadata generation and access in order to support querying. In support of both THREDDS and LEAD requirements, Unidata is designing and prototyping the THREDDS Data Repository (TDR), a framework for a modular data repository to support distributed data storage and retrieval using a variety of back end storage media and interchangeable software components. The TDR interface will provide high level abstractions for long term storage, controlled, fast and reliable access, and data movement capabilities via a variety of technologies such as OPeNDAP and gridftp. The modular structure will allow substitution of software components so that both simple and complex storage media can be integrated into the repository. It will also allow integration of different varieties of supporting software. For example, if replication is desired, replica management could be handled via a simple hash table or a complex solution such as Replica Locater Service (RLS). In order to ensure that metadata is available for all the data in the repository, the TDR will also generate THREDDS metadata when necessary. Users will be able to establish levels of access control to their metadata and data. Coupled with a THREDDS Data Server, both browsing via THREDDS catalogs and querying capabilities will be supported. This presentation will describe the motivating factors, current status, and future plans of the TDR. References: IDD: http://www.unidata.ucar.edu/content/software/idd/index.html THREDDS: http://www.unidata.ucar.edu/content/projects/THREDDS/tech/server/ServerStatus.html LEAD: http://lead.ou.edu/ RLS: http://www.isi.edu/~annc/papers/chervenakRLSjournal05.pdf
Software system for data management and distributed processing of multichannel biomedical signals.
Franaszczuk, P J; Jouny, C C
2004-01-01
The presented software is designed for efficient utilization of a cluster of PC computers for signal analysis of multichannel physiological data. The system consists of three main components: 1) a library of input and output procedures, 2) a database storing additional information about data location in the storage system, 3) a user interface for selecting data for analysis, choosing programs for analysis, and distributing computation and output data across cluster nodes. The system allows for processing multichannel time series data in multiple binary formats. Descriptions of the data format, channels, and time of recording are included in separate text files. Definition and selection of multiple channel montages are possible. Epochs for analysis can be selected both manually and automatically. Implementation of new signal processing procedures is possible with minimal programming overhead for the input/output processing and user interface. The number of cluster nodes used for computations and the amount of storage can be changed with no major modification to the software. Current implementations include the time-frequency analysis of multiday, multichannel recordings of intracranial EEG of epileptic patients as well as evoked response analyses of repeated cognitive tasks.
Frank, M S; Schultz, T; Dreyer, K
2001-06-01
To provide a standardized and scalable mechanism for exchanging digital radiologic educational content between software systems that use disparate authoring, storage, and presentation technologies. Our institution uses two distinct software systems for creating educational content for radiology. Each system is used to create in-house educational content as well as commercial educational products. One system is an authoring and viewing application that facilitates the input and storage of hierarchical knowledge and associated imagery, and is capable of supporting a variety of entity relationships. This system is primarily used for the production and subsequent viewing of educational CD-ROMs. Another software system is primarily used for radiologic education on the world wide web. This system facilitates input and storage of interactive knowledge and associated imagery, delivering this content over the internet in a Socratic manner simulating in-person interaction with an expert. A subset of knowledge entities common to both systems was derived. An additional subset of knowledge entities that could be bidirectionally mapped via algorithmic transforms was also derived. An extensible markup language (XML) object model and associated lexicon were then created to represent these knowledge entities and their interactive behaviors. Forward-looking attention was exercised in the creation of the object model in order to facilitate straightforward future integration of other sources of educational content. XML generators and interpreters were written for both systems. Deriving the XML object model and lexicon was the most critical and time-consuming aspect of the project. The coding of the XML generators and interpreters required only a few hours for each environment. Subsequently, the transfer of hundreds of educational cases and thematic presentations between the systems can now be accomplished in a matter of minutes. The use of algorithmic transforms results in nearly 100% transfer of context as well as content, thus providing "presentation-ready" outcomes. The automation of knowledge exchange between dissimilar digital teaching environments magnifies the efforts of educators and enriches the learning experience for participants. XML is a powerful and useful mechanism for transferring educational content, as well as the context and interactive behaviors of such content, between disparate systems.
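For illustration only, a sketch of what one knowledge entity might look like in XML; the element names and behaviors below are invented, since the paper's actual object model and lexicon are not reproduced here:

```python
# Hypothetical shape of an XML knowledge entity for case exchange; the
# tags and attributes are illustrative, not the paper's lexicon.
import xml.etree.ElementTree as ET

case = ET.Element("teachingCase", id="case-001")
ET.SubElement(case, "diagnosis").text = "pneumothorax"
finding = ET.SubElement(case, "finding", behavior="revealOnClick")
finding.text = "Visceral pleural line without peripheral lung markings"
ET.SubElement(case, "image", href="img/cxr-001.dcm")

xml_bytes = ET.tostring(case, encoding="utf-8")  # generator side
parsed = ET.fromstring(xml_bytes)                # interpreter side
print(parsed.find("diagnosis").text)
```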
Research on Multi - Person Parallel Modeling Method Based on Integrated Model Persistent Storage
NASA Astrophysics Data System (ADS)
Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying
2018-03-01
This paper studies a multi-person parallel modeling method based on integrated model persistent storage. The integrated model refers to a set of MDDT modeling graphics systems, which can carry out multi-angle, multi-level and multi-stage description of aerospace general embedded software. Persistent storage refers to converting the data model in memory into a storage model and converting the storage model back into a data model in memory, where the data model is the object model and the storage model is a binary stream. Multi-person parallel modeling refers to the need for multi-person collaboration, separation of roles, and even real-time remote synchronized modeling.
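A minimal illustration of the object-model/binary-stream round trip described above, using Python's pickle as a stand-in serializer (the MDDT system's actual binary format is not specified in the abstract):

```python
# Object model -> binary stream -> object model, the "persistent storage"
# round trip; pickle is a stand-in for the system's own serializer.
import pickle

class ModelNode:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

data_model = ModelNode("root", [ModelNode("sensor"), ModelNode("actuator")])

storage_model = pickle.dumps(data_model)   # data model -> storage model
restored = pickle.loads(storage_model)     # storage model -> data model
print(restored.name, [c.name for c in restored.children])
```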
Current Methods for Evaluation of Physical Security System Effectiveness.
1981-05-01
It also helps the user modify a data set before further processing. (c) Safeguards Engineering and Analysis Data Base (SEAD) -- To complete SAFE's... graphic display software in addition to a Fortran compiler, and up to about 35,000 words of storage. For a fairly complex problem, a single run through... operational software.
Tautomerism in chemical information management systems
NASA Astrophysics Data System (ADS)
Warr, Wendy A.
2010-06-01
Tautomerism has an impact on many of the processes in chemical information management systems including novelty checking during registration into chemical structure databases; storage of structures; exact and substructure searching in chemical structure databases; and depiction of structures retrieved by a search. The approaches taken by 27 different software vendors and database producers are compared. It is hoped that this comparison will act as a discussion document that could ultimately improve databases and software for researchers in the future.
Using S3 cloud storage with ROOT and CvmFS
NASA Astrophysics Data System (ADS)
Arsuaga-Ríos, María; Heikkilä, Seppo S.; Duellmann, Dirk; Meusel, René; Blomer, Jakob; Couturier, Ben
2015-12-01
Amazon S3 is a widely adopted web API for scalable cloud storage that could also fulfill storage requirements of the high-energy physics community. CERN has been evaluating this option using some key HEP applications such as ROOT and the CernVM filesystem (CvmFS) with S3 back-ends. In this contribution we present an evaluation of two versions of the Huawei UDS storage system stressed with a large number of clients executing HEP software applications. The performance of concurrently storing individual objects is presented alongside more complex data access patterns as produced by the ROOT data analysis framework. Both Huawei UDS generations scale successfully and support multiple byte-range requests, in contrast with Amazon S3 or Ceph, which do not support these commonly used HEP operations. We further report on the S3 integration with recent CvmFS versions and summarize the experience with CvmFS/S3 for publishing daily releases of the full LHCb experiment software stack.
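For context, a hedged sketch of the access pattern at issue: a partial read from ROOT maps onto an HTTP Range request against an S3 endpoint. Standard S3 honors one byte range per request; the multiple byte-range requests issued by HEP applications (several ranges in a single request) are what the tested UDS system supported and Amazon S3 and Ceph did not. The bucket, key, and endpoint below are placeholders:

```python
# Single byte-range GET against an S3-compatible endpoint via boto3;
# bucket/key/endpoint are placeholders, not CERN's actual setup.
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.example.org")
# Fetch one byte range (one ROOT basket, say) without the whole file.
# A multi-range request would put several ranges in one Range header,
# e.g. "bytes=0-99,1024-2047", which standard S3 does not accept.
resp = s3.get_object(Bucket="hep-data", Key="events.root",
                     Range="bytes=1024-2047")
chunk = resp["Body"].read()
```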
NASA Technical Reports Server (NTRS)
1983-01-01
Kennedy Space Center's primary institutional computer is a 4-megabyte IBM 4341 with 3.175 billion characters of IBM 3350 disc storage. This system utilizes the Software AG product known as ADABAS, with the online, user-oriented features of NATURAL and COMPLETE, as a Data Base Management System (DBMS). It is operational under OS/VSI and is currently supporting batch/online applications such as Personnel, Training, Physical Space Management, Procurement, Office Equipment Maintenance, and Equipment Visibility. A third and by far the largest DBMS application is known as the Shuttle Inventory Management System (SIMS), which is operational on a dedicated Honeywell 6660 computer system utilizing Honeywell Integrated Data Storage I (IDSI) as the DBMS. The SIMS application is designed to provide central supply system acquisition, inventory control, receipt, storage, and issue of spares, supplies, and materials.
Development of fuel oil management system software: Phase 1, Tank management module. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lange, H.B.; Baker, J.P.; Allen, D.
1992-01-01
The Fuel Oil Management System (FOMS) is a micro-computer based software system being developed to assist electric utilities that use residual fuel oils with oil purchase and end-use decisions. The Tank Management Module (TMM) is the first FOMS module to be produced. TMM enables the user to follow the mixing status of oils contained in a number of oil storage tanks. The software contains a computational model of residual fuel oil mixing which addresses mixing that occurs as one oil is added to another in a storage tank and also purposeful mixing of the tank by propellers, recirculation or convection. The model also addresses the potential for sludge formation due to incompatibility of oils being mixed. Part 1 of the report presents a technical description of the mixing model and a description of its development. Steps followed in developing the mixing model included: (1) definition of ranges of oil properties and tank design factors used by utilities; (2) review and adaptation of prior applicable work; (3) laboratory development; and (4) field verification. Also, a brief laboratory program was devoted to exploring the suitability of suggested methods for predicting viscosities, flash points and pour points of oil mixtures. Part 2 of the report presents a functional description of the TMM software and a description of its development. The software development program consisted of the following steps: (1) on-site interviews at utilities to prioritize needs and characterize user environments; (2) construction of the user interface; and (3) field testing the software.
NASA Astrophysics Data System (ADS)
Huang, Y.; Liu, B. Z.; Wang, K. Y.; Ai, X.
2017-12-01
In response to the new requirements that the operation mode of wind-storage combined systems and demand side response place on transmission network planning, this paper presents a joint planning model for energy storage and transmission considering wind-storage combined systems and demand side response. Firstly, the charge-discharge strategy of the energy storage system installed at the outlet of the wind farm and the demand side response strategy are analysed, so that their coordination achieves the best comprehensive benefit. Secondly, in the general transmission network planning model with wind power, both the energy storage cost and the demand side response cost are added to the objective function, and both energy storage operation constraints and demand side response constraints are introduced into the constraint set. Based on the classical formulation of TEP, a new formulation is developed that simultaneously incorporates the charge-discharge strategy of the storage at the wind farm outlet and the demand side response strategy; it is a typical mixed integer linear programming model that can be solved by mature optimization software. A case study based on the Garver 6-bus system verifies the validity of the proposed model by comparison with the general transmission network planning model. Furthermore, the results demonstrate that the joint planning model can gain more economic benefits across the different cases studied.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-28
...-Filing system does not support unlisted software, and the NRC Meta System Help Desk will not be able to... Setpoint Methodology for LSSS [Limiting Safety System Setting] Functions,'' which included the instrument... System Instrumentation,'' Function 3, Condensate Storage Tank Level--Low. The supporting TS Bases will...
HUC--A User Designed System for All Recorded Knowledge and Information.
ERIC Educational Resources Information Center
Hilton, Howard J.
This paper proposes a user designed system, HUC, intended to provide a single index and retrieval system covering all recorded knowledge and information capable of being retrieved from all modes of storage, from manual to the most sophisticated retrieval system. The concept integrates terminal hardware, software, and database structure to allow…
Implementation of a Campuswide Distributed Mass Storage Service: the Dream Versus Reality
NASA Technical Reports Server (NTRS)
Prahst, Stephen; Armstead, Betty Jo
1996-01-01
In 1990, a technical team at NASA Lewis Research Center, Cleveland, Ohio, began defining a Mass Storage Service to provide long-term archival storage, short-term storage for very large files, distributed Network File System access, and backup services for critical data that resides on workstations and personal computers. Because of software availability and budgets, the total service was phased in over several years. During the process of building the service from the commercial technologies available, our Mass Storage Team refined the original vision and learned from the problems and mistakes that occurred. We also enhanced some technologies to better meet the needs of users and system administrators. This report describes our team's journey from dream to reality, outlines some of the problem areas that still exist, and suggests some solutions.
Distributed Power Systems for Sustainable Energy
2012-10-01
capital investment in state-of-the-art cogeneration technologies, renewable sources, energy storage, and interconnection hardware and software. It is... capacity may not be well suited to support building- or campus-scale microgrids. This is because new thermal and electrical energy storage devices... constraints, as well as the site location, weather, and consumption patterns. These factors change over the life of the energy microgrid. • Tradeoffs
NASA Technical Reports Server (NTRS)
Cudmore, Alan; Leath, Tim; Ferrer, Art; Miller, Todd; Walters, Mark; Savadkin, Bruce; Wu, Ji-Wei; Slegel, Steve; Stagmer, Emory
2007-01-01
The command-and-data-handling (C&DH) software of the Wilkinson Microwave Anisotropy Probe (WMAP) spacecraft functions as the sole interface between (1) the spacecraft and its instrument subsystem and (2) ground operations equipment. This software includes a command-decoding and -distribution system, a telemetry/data-handling system, and a data-storage-and-playback system. This software performs onboard processing of attitude sensor data and generates commands for attitude-control actuators in a closed-loop fashion. It also processes stored commands and monitors health and safety functions for the spacecraft and its instrument subsystems. The basic functionality of this software is the same as that of the older C&DH software of the Rossi X-Ray Timing Explorer (RXTE) spacecraft, the main difference being the addition of the attitude-control functionality. Previously, the C&DH and attitude-control computations were performed by different processors because a single RXTE processor did not have enough processing power. The WMAP spacecraft includes a more powerful processor capable of performing both computations.
Key Technologies of Phone Storage Forensics Based on ARM Architecture
NASA Astrophysics Data System (ADS)
Zhang, Jianghan; Che, Shengbing
2018-03-01
Smart phones mainly run three mobile operating systems: Android, iOS, and Windows Phone. Android smart phones have the largest market share, and their processor chips almost all use the ARM architecture. The memory address mapping mechanism of the ARM architecture differs from that of the x86 architecture. To perform forensics on an Android smart phone, we need to understand three key technologies: memory data acquisition, the conversion mechanism from virtual addresses to physical addresses, and locating the system's key data. This article presents a viable solution that does not rely on operating system APIs to address these three issues.
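As a hedged sketch of the virtual-to-physical step on Linux (which underlies Android), the kernel's pagemap interface exposes the translation to user space; this is our illustration, not the paper's method, and reading pagemap normally requires root privileges:

```python
# Translate a virtual address to a physical address via /proc/<pid>/pagemap:
# each 64-bit entry holds the page frame number in bits 0-54 and a
# "page present" flag in bit 63. ARM-specific page-table formats are
# abstracted away by this kernel interface.
import os
import struct
from typing import Optional

def virt_to_phys(pid: int, vaddr: int) -> Optional[int]:
    page_size = os.sysconf("SC_PAGE_SIZE")
    with open(f"/proc/{pid}/pagemap", "rb") as f:
        f.seek((vaddr // page_size) * 8)       # one 8-byte entry per page
        (entry,) = struct.unpack("<Q", f.read(8))
    if not entry & (1 << 63):                  # page not present in RAM
        return None
    pfn = entry & ((1 << 55) - 1)              # bits 0-54: page frame number
    return pfn * page_size + (vaddr % page_size)
```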
Integration experiences and performance studies of A COTS parallel archive systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Hsing-bung; Scott, Cody; Grider, Bary
2010-01-01
Current and future Archive Storage Systems have been asked to (a) scale to very high bandwidths, (b) scale in metadata performance, (c) support policy-based hierarchical storage management capability, (d) scale in supporting changing needs of very large data sets, (e) support standard interfaces, and (f) utilize commercial-off-the-shelf (COTS) hardware. Parallel file systems have been asked to do the same things but at one or more orders of magnitude faster performance. Archive systems continue to move closer to file systems in their design due to the need for speed and bandwidth, especially metadata searching speeds, with more caching and less robust semantics. Currently the number of extremely scalable parallel archive solutions is very small, especially those that will move a single large striped parallel disk file onto many tapes in parallel. We believe that a hybrid storage approach of using COTS components and innovative software technology can bring new capabilities into a production environment for the HPC community much faster than the approach of creating and maintaining a complete end-to-end unique parallel archive software solution. In this paper, we relay our experience of integrating a global parallel file system and a standard backup/archive product with a very small amount of additional code to provide a scalable, parallel archive. Our solution has a high degree of overlap with current parallel archive products, including (a) doing parallel movement to/from tape for a single large parallel file, (b) hierarchical storage management, (c) ILM features, (d) high volume (non-single parallel file) archives for backup/archive/content management, and (e) leveraging all free file movement tools in Linux such as copy, move, ls, tar, etc. We have successfully applied our working COTS Parallel Archive System to the current world's first petaflop/s computing system, LANL's Roadrunner, and demonstrated its capability to address requirements of future archival storage systems.
NASA Astrophysics Data System (ADS)
Ren, Zhengyi; Huang, Tong; Feng, Jiajia; Zhou, Yuanwei
2018-05-01
In this paper, a 600 Wh vertical maglev energy storage flywheel rotor system is taken as the model. The motion equations of a rigid rotor, considering the gyroscopic effect and the center-of-mass offset, are obtained from the centroid theorem and verified experimentally. Using the state variable method, Matlab software was used to program and simulate the radial displacement and radial electromagnetic force of the rotor system at each speed. The results show that the established system model is consistent with the designed 600 Wh vertical maglev energy storage flywheel model. The simulation results are helpful for further understanding the dynamic behavior of the flywheel rotor at different transient speeds.
Globally distributed software defined storage (proposal)
NASA Astrophysics Data System (ADS)
Shevel, A.; Khoruzhnikov, S.; Grudinin, V.; Sadov, O.; Kairkanov, A.
2017-10-01
The volume of data coming in HEP is growing, and the volume of data to be held for a long time is growing as well. Large volumes of data - big data - are distributed around the planet. Methods and approaches for organizing and managing globally distributed data storage are therefore required. Distributed storage has several examples for personal needs, such as own-cloud.org, pydio.com, seafile.com, and sparkleshare.org. At the enterprise level there are a number of systems: SWIFT, a distributed storage system (part of OpenStack), CEPH, and the like, which are mostly object storage. When the resources of several data centers are integrated, the organization of data links becomes a very important issue, especially if several parallel data links between data centers are used. The situation in data centers and in data links may vary from hour to hour. All this means each part of the distributed data storage has to be able to rearrange its usage of data links and storage servers in each data center. In addition, different requirements could appear for each customer of the distributed storage. These topics are planned to be discussed in the data storage proposal.
Engineering the Implementation of Pumped Hydro Energy Storage in the Arizona Power Grid
NASA Astrophysics Data System (ADS)
Dixon, William Jesse J.
This thesis addresses the issue of making an economic case for bulk energy storage in the Arizona bulk power system. Pumped hydro energy storage (PHES) is used in this study. Bulk energy storage has often been suggested for large scale electric power systems in order to levelize load: store energy when it is inexpensive (energy demand is low) and discharge energy when it is expensive (energy demand is high). It also has the potential to provide opportunities to avoid transmission and generation expansion, and to provide generation reserve margins. As the level of renewable energy resources increases, the uncertainty and variability of wind and solar resources may be mitigated by bulk energy storage technologies. For this study, the MATLab software platform (a mathematically based modeling language), optimization solvers (specifically Gurobi), and a power flow solver (PowerWorld) are used to simulate an economic dispatch problem that includes energy storage and transmission losses. A program was created which utilizes quadratic programming to analyze various cases using a 2010 summer peak load from the Arizona portion of the Western Electricity Coordinating Council (WECC) system. Actual data from industry are used in this test bed. In this thesis, the full capabilities of Gurobi are not utilized (e.g., integer variables, binary variables). However, the formulation shown here creates a platform into which future, more sophisticated modeling may readily be incorporated. The developed software is used to assess the Arizona test bed with a low level of energy storage to study how the storage power limit affects several optimization outputs, such as the system-wide operating cost. Large levels of energy storage are then added to see how high levels of energy storage affect peak shaving, load factor, and other system applications. Finally, various constraint relaxations are made to analyze why the applications tested eventually approach a constant value. This research illustrates the use of energy storage to help minimize the system-wide generator operating cost by "shaving" energy off of the peak demand. The thesis builds on the work of another recent researcher, with the objectives of strengthening the assumptions used, checking the solutions obtained, utilizing higher level simulation languages to affirm results, and expanding the results and conclusions. One important point not fully discussed in the present thesis is the impact of efficiency in the pumped hydro cycle. The efficiency of the cycle for modern units is estimated at higher than 90%. Inclusion of pumped hydro losses is relegated to future work.
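One plausible way to write the storage-augmented economic dispatch the thesis describes is sketched below; the symbols are generic, not the thesis's own notation, and the loss term is simplified to a lumped quantity:

```latex
% Quadratic generator costs give the quadratic program; P_{s,t} > 0
% denotes storage discharge. L_t lumps transmission losses.
\begin{align}
\min_{P_{g,t},\,P_{s,t}} \quad
  & \sum_{t=1}^{T} \sum_{g} \bigl( a_g P_{g,t}^2 + b_g P_{g,t} + c_g \bigr) \\
\text{s.t.} \quad
  & \sum_{g} P_{g,t} + P_{s,t} = D_t + L_t
      && \text{(load balance with losses)} \nonumber \\
  & \underline{P}_g \le P_{g,t} \le \overline{P}_g,
    \qquad |P_{s,t}| \le \overline{P}_s
      && \text{(generator and storage power limits)} \nonumber \\
  & E_{t+1} = E_t - \Delta t\, P_{s,t},
    \qquad 0 \le E_t \le \overline{E}
      && \text{(reservoir energy bookkeeping)} \nonumber
\end{align}
```

Discharging (positive \(P_{s,t}\)) during high-demand hours displaces the most expensive generation, which is the "peak shaving" effect the thesis measures as the storage power limit \(\overline{P}_s\) is varied.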
A Concept for the One Degree Imager (ODI) Data Reduction Pipeline and Archiving System
NASA Astrophysics Data System (ADS)
Knezek, Patricia; Stobie, B.; Michael, S.; Valdes, F.; Marru, S.; Henschel, R.; Pierce, M.
2010-05-01
The One Degree Imager (ODI), currently being built by the WIYN Observatory, will provide tremendous possibilities for conducting diverse scientific programs. ODI will be a complex instrument, using non-conventional Orthogonal Transfer Array (OTA) detectors. Due to its large field of view, small pixel size, use of OTA technology, and expected frequent use, ODI will produce vast amounts of astronomical data. If ODI is to achieve its full potential, a data reduction pipeline must be developed. Long-term archiving must also be incorporated into the pipeline system to ensure the continued value of ODI data. This paper presents a concept for an ODI data reduction pipeline and archiving system. To limit costs and development time, our plan leverages existing software and hardware, including existing pipeline software, Science Gateways, Computational Grid & Cloud Technology, Indiana University's Data Capacitor and Massive Data Storage System, and TeraGrid compute resources. Existing pipeline software will be augmented to add functionality required to meet challenges specific to ODI, enhance end-user control, and enable the execution of the pipeline on grid resources including national grid resources such as the TeraGrid and Open Science Grid. The planned system offers consistent standard reductions and end-user flexibility when working with images beyond the initial instrument signature removal. It also gives end-users access to computational and storage resources far beyond what are typically available at most institutions. Overall, the proposed system provides a wide array of software tools and the necessary hardware resources to use them effectively.
NASA Astrophysics Data System (ADS)
Palivos, Marios; Vokas, Georgios A.; Anastasiadis, Anestis; Papageorgas, Panagiotis; Salame, Chafic
2018-05-01
One of the major energy issues of our day is reliable and effective energy generation and supply for electricity grids. In recent years there has been rapid development and implementation of Renewable Energy Sources (RES) worldwide. On one hand, many gigawatts of grid-connected renewables are being installed; on the other, many megawatts of hybrid renewable systems for residential use are being installed that make use of electric battery systems in order to cover all daily energy and power needs. New types of batteries are being developed, and many companies have made great progress in providing a variety of electricity storage products. The purpose of this research is, firstly, to highlight the necessity and importance of energy storage systems and, secondly, through detailed technical and financial simulation analysis using the HOMER Pro optimization software, to compare the technical characteristics and performance of energy storage systems from various leading companies when installed in a residential renewable energy system with a specific load, and at the same time to identify the most economically efficient system. Results concerning the operation and choice of a storage system are derived.
Data collection with the ACTS propagation terminal
NASA Technical Reports Server (NTRS)
Remaklus, Will
1993-01-01
Viewgraphs on data collection with the ACTS propagation terminal are included. Topics covered include: DACS system overview; DACS board; APT data collection computer; APT software downloading; data storage for APT; and ACTS propagation experiment data flow.
21 CFR 876.1300 - Ingestible telemetric gastrointestinal capsule imaging system.
Code of Federal Regulations, 2012 CFR
2012-04-01
... images of the small bowel with a wireless camera contained in a capsule. This device includes an... receiving/recording unit, a data storage device, computer software to process the images, and accessories...
21 CFR 876.1300 - Ingestible telemetric gastrointestinal capsule imaging system.
Code of Federal Regulations, 2013 CFR
2013-04-01
... images of the small bowel with a wireless camera contained in a capsule. This device includes an... receiving/recording unit, a data storage device, computer software to process the images, and accessories...
21 CFR 876.1300 - Ingestible telemetric gastrointestinal capsule imaging system.
Code of Federal Regulations, 2014 CFR
2014-04-01
... images of the small bowel with a wireless camera contained in a capsule. This device includes an... receiving/recording unit, a data storage device, computer software to process the images, and accessories...
21 CFR 876.1300 - Ingestible telemetric gastrointestinal capsule imaging system.
Code of Federal Regulations, 2011 CFR
2011-04-01
... images of the small bowel with a wireless camera contained in a capsule. This device includes an... receiving/recording unit, a data storage device, computer software to process the images, and accessories...
A high-speed network for cardiac image review.
Elion, J L; Petrocelli, R R
1994-01-01
A high-speed fiber-based network for the transmission and display of digitized full-motion cardiac images has been developed. Based on Asynchronous Transfer Mode (ATM), the network is scalable, meaning that the same software and hardware are used for a small local area network or for a large multi-institutional network. The system can handle uncompressed digital angiographic images, considered to be at the "high end" of the bandwidth requirements. Along with the networking, a general-purpose multi-modality review station has been implemented without specialized hardware. This station can store a full injection sequence in "loop RAM" in a 512 x 512 format, then interpolate to 1024 x 1024 while displaying at 30 frames per second. The network and review stations connect to a central file server that uses a virtual file system to make a large high-speed RAID storage disk and the associated off-line storage tapes and cartridges all appear as a single large file system to the software. In addition to supporting archival storage and review, the system can also digitize live video using high-speed Direct Memory Access (DMA) from the frame grabber to present uncompressed data to the network. Fully functional prototypes have provided the proof of concept, with full deployment in the institution planned as the next stage.
Systems, methods, and products for graphically illustrating and controlling a droplet actuator
NASA Technical Reports Server (NTRS)
Brafford, Keith R. (Inventor); Pamula, Vamsee K. (Inventor); Paik, Philip Y. (Inventor); Pollack, Michael G. (Inventor); Sturmer, Ryan A. (Inventor); Smith, Gregory F. (Inventor)
2010-01-01
Systems for controlling a droplet microactuator are provided. According to one embodiment, a system is provided and includes a controller, a droplet microactuator electronically coupled to the controller, and a display device displaying a user interface electronically coupled to the controller, wherein the system is programmed and configured to permit a user to effect a droplet manipulation by interacting with the user interface. According to another embodiment, a system is provided and includes a processor, a display device electronically coupled to the processor, and software loaded and/or stored in a storage device electronically coupled to the controller, a memory device electronically coupled to the controller, and/or the controller and programmed to display an interactive map of a droplet microactuator. According to yet another embodiment, a system is provided and includes a controller, a droplet microactuator electronically coupled to the controller, a display device displaying a user interface electronically coupled to the controller, and software for executing a protocol loaded and/or stored in a storage device electronically coupled to the controller, a memory device electronically coupled to the controller, and/or the controller.
VIDANA: Data Management System for Nano Satellites
NASA Astrophysics Data System (ADS)
Montenegro, Sergio; Walter, Thomas; Dilger, Erik
2013-08-01
A VIDANA data management system is a network of software and hardware components. This implies a software network, a hardware network and a smooth connection between the two. Our strategy is based on our innovative middleware: a reliable interconnection network (SW & HW) which can interconnect many unreliable redundant components such as sensors, actuators, communication devices, computers, storage elements, and software components. Component failures are detected, the affected device is disabled, and its function is taken over by a redundant component. Our middleware connects not only software, but also devices and software together. Software and hardware communicate with each other without having to distinguish which functions are implemented in software and which in hardware. Components may be turned on and off at any time, and the whole system will autonomously adapt to its new configuration in order to continue fulfilling its task. In VIDANA we aim at dynamic adaptability (run time), static adaptability (tailoring), and unified HW/SW communication protocols. For many of these aspects we "learn from nature", where we can find astonishing reference implementations.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-26
... the NRC's E-Filing system does not support unlisted software, and the NRC Meta System Help Desk will... Osmosis (RO) system borated water storage tank suction connections. Basis for proposed no significant... requirement. For the SFP, the suction to the RO system is above the required TS water level, therefore, the...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-30
... a level of > 22 inches water column in support of SGIG system operation. Exelon is submitting this... site, but should note that the NRC's E-Filing system does not support unlisted software, and the NRC... EDGs and the associated support systems, such as the fuel oil storage and transfer systems, are...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-16
...-Filing system does not support unlisted software, and the NRC Meta System Help Desk will not be able to... reverse osmosis system during normal plant operation to purify the water in the borated water storage... result of water returned from the RO System with lower boron concentration. Thus, no adverse effects from...
Using the K-25 C TD Common File System: A guide to CFSI (CFS Interface)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1989-12-01
A CFS (Common File System) is a large, centralized file management and storage facility based on software developed at Los Alamos National Laboratory. This manual is a guide to use of the CFS available to users of the Cray UNICOS system at Martin Marietta Energy Systems, Inc., in Oak Ridge, Tennessee.
A self-configuring control system for storage and computing departments at INFN-CNAF Tierl
NASA Astrophysics Data System (ADS)
Gregori, Daniele; Dal Pra, Stefano; Ricci, Pier Paolo; Pezzi, Michele; Prosperini, Andrea; Sapunenko, Vladimir
2015-05-01
The storage and farming departments at the INFN-CNAF Tier1[1] manage thousands of computing nodes and several hundred servers that provide access to the disk and tape storage. In particular, the storage server machines provide the following services: efficient access to about 15 petabytes of disk space organized in different GPFS file system clusters, data transfers between LHC Tier sites (Tier0, Tier1 and Tier2) via a GridFTP cluster and the Xrootd protocol, and finally write and read operations on the magnetic tape backend. One of the most important and essential points for a reliable service is a control system that can warn when problems arise and that is able to perform automatic recovery operations in case of service interruptions or major failures. Moreover, configurations can change during daily operations: the roles of GPFS cluster nodes can be modified, so obsolete nodes must be removed from the production control system and new servers added to those already present. Managing all these changes manually can be difficult when there are many of them; it can take a long time and is easily subject to human error or misconfiguration. For these reasons we have developed a control system with the ability to configure itself whenever a change occurs. This system has now been in production for about a year at the INFN-CNAF Tier1 with good results and hardly any major drawbacks. There are three key points in this system. The first is a software configuration service (e.g. Quattor or Puppet) for the server machines to be monitored by the control system; this service must ensure the presence of appropriate sensors and custom scripts on the monitored nodes, and it should be able to install and update software packages on them. The second key element is a database containing information, in a suitable format, on all the machines in production and able to provide for each of them the principal information, such as the hardware type, the network switch to which the machine is connected, whether the machine is physical or virtual, the hypervisor to which it belongs, and so on. The last key point is the control system software itself (in our implementation, Nagios), capable of assessing the status of servers and services, attempting to restore the working state, restarting or inhibiting software services, and sending suitable alarm messages to the site administrators. The integration of these three elements is achieved by scripts and custom implementations that allow the self-configuration of the system according to a decisional logic; the combination of all the above-mentioned components is discussed in depth in this paper.
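The self-configuration step described above is easy to picture in code. The following is a minimal sketch, assuming a hypothetical SQLite machine database with name, address, role and in_production columns; it is not the Tier1 implementation, only an illustration of generating Nagios host definitions from such a database:

```python
#!/usr/bin/env python3
"""Illustrative sketch only: auto-generate Nagios host definitions from a
machine database, in the spirit of the self-configuring control system
described above. Table and column names are hypothetical."""
import sqlite3

NAGIOS_HOST_TEMPLATE = """define host {{
    use         generic-host
    host_name   {name}
    address     {address}
    hostgroups  {role}
}}
"""

def generate_nagios_config(db_path: str, out_path: str) -> None:
    # The production database would also hold hardware type, switch port,
    # physical/virtual flag, hypervisor, etc.; only name/address/role
    # are used in this sketch.
    conn = sqlite3.connect(db_path)
    rows = conn.execute("SELECT name, address, role FROM machines "
                        "WHERE in_production = 1")
    with open(out_path, "w") as cfg:
        for name, address, role in rows:
            cfg.write(NAGIOS_HOST_TEMPLATE.format(name=name,
                                                  address=address,
                                                  role=role))
    conn.close()

if __name__ == "__main__":
    generate_nagios_config("machines.db", "hosts.cfg")
```

Regenerating the configuration from the database on every change, rather than editing it by hand, is what removes the window for human error the abstract describes.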
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trujillo, Angelina Michelle
Strategy, planning, acquiring: very-large-scale computing platforms come and go, and planning for immensely scalable machines often precedes actual procurement by three years; procurement can take another year or more. Integration: after acquisition, machines must be integrated into the computing environments at LANL and connected to scalable storage via large-scale storage networking, while assuring correct and secure operations. Management and utilization: ongoing operations, maintenance, and troubleshooting of the hardware and systems software at massive scale are required.
In-Storage Embedded Accelerator for Sparse Pattern Processing
2016-09-13
We present a novel system architecture for sparse pattern processing in which an in-storage embedded accelerator performs the computation. As a result, a very small processor can be used and still make full use of storage device bandwidth.
Information Loss from Technological Progress
NASA Astrophysics Data System (ADS)
Townsend, P. D.
2014-12-01
Progress in electronics and optics offers faster computers and rapid communication via the internet, matched by ever larger and evolving storage systems. Instinctively one assumes that this must be totally beneficial. However, advances in software and storage media frequently progress in ways that are incompatible with earlier systems, and economics and commercial pressures rarely guarantee backward compatibility. Instead, the industries actively choose to force users to purchase new systems and software. Thus we are moving forward with new technological variants that may have access only to the most recent systems, and we will have lost the earlier alternatives. The reality is that increased processing speed and storage capacity are matched by an equally rapid decline in the access and survival lifetime of older information. This pattern is not limited to modern electronic systems but is evident throughout history, from writing on stone and clay tablets to papyrus and paper. It is equally evident in image systems from painting, through film, to magnetic tapes and digital cameras. In sound recording we have variously progressed from wax discs to vinyl, magnetic tape and CD formats. In each case the need for better definition and greater capacity has forced the earlier systems into oblivion. Indeed, proposed interactive music systems could similarly relegate music CDs to specialist collections. The article will track some of these examples and discuss the consequences, noting that this information loss is further compounded by developments in language and changes in the cultural views of different societies.
Architectures and Evaluation for Adjustable Control Autonomy for Space-Based Life Support Systems
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Schreckenghost, Debra K.
2001-01-01
In the past five years, a number of automation applications for control of crew life support systems have been developed and evaluated in the Adjustable Autonomy Testbed at NASA's Johnson Space Center. This paper surveys progress on an adjustable autonomous control architecture for situations where software and human operators work together to manage anomalies and other system problems. When problems occur, the level of control autonomy can be adjusted, so that operators and software agents can work together on diagnosis and recovery. In 1997 adjustable autonomy software was developed to manage gas transfer and storage in a closed life support test. Four crewmembers lived and worked in a chamber for 91 days, with both air and water recycling. CO2 was converted to O2 by gas processing systems and wheat crops. With the automation software, significantly fewer hours were spent monitoring operations. System-level validation testing of the software by interactive hybrid simulation revealed problems both in software requirements and implementation. Since that time, we have been developing multi-agent approaches for automation software and human operators, to cooperatively control systems and manage problems. Each new capability has been tested and demonstrated in realistic dynamic anomaly scenarios, using the hybrid simulation tool.
NASA Astrophysics Data System (ADS)
Syafiqah Syahirah Mohamed, Nor; Amalina Banu Mohamat Adek, Noor; Hamid, Nurul Farhana Abd
2018-03-01
This paper presents the development of Graphical User Interface (GUI) software for sizing the main components of an AC-coupled photovoltaic (PV) hybrid power system under the Malaysian climate. The software provides guidelines for PV system integrators to size components and choose a system configuration that matches the system and load requirements to the geographical conditions. The concept of the proposed software is to balance the annual average renewable energy generation against the load demand. In this study, the PV-to-diesel-generator (DG) ratio is introduced by considering the hybrid system's energy contribution. The GUI software is able to size the main components of the PV hybrid system to meet a set target for the energy contribution ratio. The rated powers of the components to be defined are the PV array, grid-tie inverter, bi-directional inverter, battery storage and DG. The GUI performs all the system sizing procedures, providing a user-friendly sizing tool for AC-coupled PV hybrid systems. The GUI is developed using Visual Studio 2015 and is based on real data for the Malaysian climate.
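As a rough illustration of the balancing idea, a sizing calculation of this kind might look like the sketch below; the peak-sun-hours figure, performance ratio and DG run-hours are illustrative assumptions, not values from the paper:

```python
"""A minimal sizing sketch under stated assumptions: balance annual average
PV generation against annual load for a target PV energy-contribution
ratio; the remainder is assigned to the diesel generator (DG)."""

def size_pv_hybrid(daily_load_kwh: float, pv_ratio: float,
                   peak_sun_hours: float = 4.5, perf_ratio: float = 0.75):
    # Annual energy targets from the desired PV:DG contribution ratio.
    annual_load = daily_load_kwh * 365.0
    pv_energy = annual_load * pv_ratio          # kWh/year from PV
    dg_energy = annual_load * (1.0 - pv_ratio)  # kWh/year from DG

    # PV array rating needed to deliver pv_energy, derated by losses.
    pv_kwp = pv_energy / (peak_sun_hours * 365.0 * perf_ratio)

    # Crude DG rating: assume the DG runs about 8 h/day when contributing.
    dg_kw = dg_energy / (8.0 * 365.0)
    return pv_kwp, dg_kw

print(size_pv_hybrid(daily_load_kwh=100.0, pv_ratio=0.6))
```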
DOE Office of Scientific and Technical Information (OSTI.GOV)
BYNA, SUNRENDRA; DONG, BIN; WU, KESHENG
Data Elevator: Efficient Asynchronous Data Movement in Hierarchical Storage Systems. Multi-layer storage subsystems, including SSD-based burst buffers and disk-based parallel file systems (PFS), are becoming part of HPC systems. However, software for this storage hierarchy is still in its infancy, and applications may have to explicitly move data among the storage layers. We propose Data Elevator for transparently and efficiently moving data between a burst buffer and a PFS. Users specify the final destination for their data, typically on the PFS; Data Elevator intercepts the I/O calls, stages data on the burst buffer, and then asynchronously transfers the data to their final destination in the background. This system allows extensive optimizations, such as overlapping read and write operations, choosing I/O modes, and aligning buffer boundaries. In tests with large-scale scientific applications, Data Elevator is as much as 4.2X faster than Cray DataWarp, the state-of-the-art software for burst buffers, and 4X faster than directly writing to the PFS. The Data Elevator library uses HDF5's Virtual Object Layer (VOL) for intercepting parallel I/O calls that write data to the PFS. The intercepted calls are redirected to the Data Elevator, which provides a handle to write the file in a faster, intermediate burst buffer system. Once the application finishes writing the data to the burst buffer, the Data Elevator job uses HDF5 to move the data to the final destination in an asynchronous manner. Hence, the Data Elevator library is currently useful for applications that call HDF5 for writing data files. Also, the Data Elevator depends on the HDF5 VOL functionality.
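The staging pattern itself (fast writes, background drain) can be sketched independently of HDF5. The following toy version, with hypothetical mount points, illustrates the idea but not the actual VOL-based implementation:

```python
"""Sketch of the staging idea only, not the HDF5 VOL implementation:
writes land on a fast burst-buffer path and a background thread drains
them to the slower PFS destination. Paths are hypothetical."""
import os
import queue
import shutil
import threading

BURST_BUFFER = "/mnt/bb"      # fast intermediate tier (assumed mount point)
PFS = "/mnt/lustre"           # final destination (assumed mount point)

pending = queue.Queue()

def drain():
    # Asynchronously move staged files to their final destination.
    while True:
        relpath = pending.get()
        if relpath is None:
            break
        dst = os.path.join(PFS, relpath)
        os.makedirs(os.path.dirname(dst), exist_ok=True)
        shutil.move(os.path.join(BURST_BUFFER, relpath), dst)
        pending.task_done()

def write_staged(relpath: str, data: bytes):
    # The application writes to the burst buffer and returns immediately;
    # the drain thread handles the PFS transfer in the background.
    path = os.path.join(BURST_BUFFER, relpath)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "wb") as f:
        f.write(data)
    pending.put(relpath)

mover = threading.Thread(target=drain, daemon=True)
mover.start()
```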
Wilson, A; Fram, D; Sistar, J
1981-06-01
An Imsai 8080 microcomputer is being used to simultaneously generate a color graphics stimulus display and to record visual-evoked cortical potentials. A brief description of the hardware and software developed for this system is presented. Data storage and analysis techniques are also discussed.
DECISION-SUPPORT SOFTWARE FOR SOIL VAPOR EXTRACTION TECHNOLOGY APPLICATION: HYPERVENTILATE
The U.S. Environmental Protection Agency (EPA) estimates that 15% to 20% of the approximately 1.7 million underground storage tank (UST) systems containing petroleum products are either leaking or will leak in the near future. These UST systems could pose a serious threat to p...
A Computerized Interactive Vocabulary Development System for Advanced Learners.
ERIC Educational Resources Information Center
Kukulska-Hulme, Agnes
1988-01-01
Argues that the process of recording newly encountered vocabulary items in a typical language learning situation can be improved through a computerized system of vocabulary storage based on database management software that improves the discovery and recording of meaning, subsequent retrieval of items for productive use, and memory retention.…
ERIC Educational Resources Information Center
Barker, Philip
1986-01-01
Discussion of developments in information storage technology likely to have significant impact upon library utilization focuses on hardware (videodisc technology) and software developments (knowledge databases; computer networks; database management systems; interactive video, computer, and multimedia user interfaces). Three generic computer-based…
Storage, retrieval, and analysis of ST data
NASA Technical Reports Server (NTRS)
Albrecht, R.
1984-01-01
Space Telescope can generate multidimensional image data, very similar in nature to data produced with microdensitometers. An overview is presented of the ST science ground system, from carrying out the observations to the interactive analysis of preprocessed data. The ground system elements used in data archival and retrieval are described and operational procedures are discussed. Emphasis is given to aspects of the ground system that are relevant to the science user and to general principles of system software development in a production environment. While the system being developed uses relatively conservative concepts for the launch baseline, concepts were developed to enhance the ground system. These include networking, remote access, and the utilization of alternate data storage technologies.
dCache on Steroids - Delegated Storage Solutions
NASA Astrophysics Data System (ADS)
Mkrtchyan, T.; Adeyemi, F.; Ashish, A.; Behrmann, G.; Fuhrmann, P.; Litvintsev, D.; Millar, P.; Rossi, A.; Sahakyan, M.; Starek, J.
2017-10-01
For over a decade, dCache.org has delivered robust software used at more than 80 universities and research institutes around the world, allowing these sites to provide reliable storage services for the WLCG experiments as well as many other scientific communities. The flexible architecture of dCache allows running it in a wide variety of configurations and platforms - from a SoC based all-in-one Raspberry-Pi up to hundreds of nodes in a multipetabyte installation. Due to the lack of managed storage at the time, dCache implemented data placement, replication and data integrity directly. Today, many alternatives are available: S3, GlusterFS, CEPH and others. While such solutions position themselves as scalable storage systems, they cannot be used by many scientific communities out of the box. The absence of community-accepted authentication and authorization mechanisms, the use of product-specific protocols and the lack of a namespace are some of the reasons that prevent wide-scale adoption of these alternatives. Most of these limitations are already solved by dCache. By delegating low-level storage management functionality to the above-mentioned new systems and providing the missing layer through dCache, we provide a solution which combines the benefits of both worlds - industry-standard storage building blocks with the access protocols and authentication required by scientific communities. In this paper, we focus on CEPH, a popular software for clustered storage that supports file, block and object interfaces. CEPH is often used in modern computing centers, for example as a backend to OpenStack services. We will show prototypes of dCache running with a CEPH backend and discuss the benefits and limitations of such an approach. We will also outline the roadmap for supporting 'delegated storage' within the dCache releases.
Patton, John M.; Ketchum, David C.; Guy, Michelle R.
2015-11-02
This document provides an overview of the capabilities, design, and use cases of the data acquisition and archiving subsystem at the U.S. Geological Survey National Earthquake Information Center. The Edge and Continuous Waveform Buffer software supports the National Earthquake Information Center’s worldwide earthquake monitoring mission in direct station data acquisition, data import, short- and long-term data archiving, data distribution, query services, and playback, among other capabilities. The software design and architecture can be configured to support acquisition and (or) archiving use cases. The software continues to be developed in order to expand the acquisition, storage, and distribution capabilities.
NASA Astrophysics Data System (ADS)
Astafiev, A.; Orlov, A.; Privezencev, D.
2018-01-01
The article is devoted to the development of technology and software for positioning and control systems in industrial plants, based on aggregation of computer vision and radio-frequency identification to determine the current storage area. It describes the design of hardware for a system that positions industrial products on the territory of a plant on the basis of a radio-frequency grid, the design of hardware for a positioning system based on computer vision methods, and the development of the aggregation method that combines computer vision and radio-frequency identification to determine the current storage area. Experimental studies in laboratory and production conditions have been conducted and are described in the article.
Small PACS implementation using publicly available software
NASA Astrophysics Data System (ADS)
Passadore, Diego J.; Isoardi, Roberto A.; Gonzalez Nicolini, Federico J.; Ariza, P. P.; Novas, C. V.; Omati, S. A.
1998-07-01
Building cost-effective PACS solutions is a main concern in developing countries, where hardware and software components are generally much more expensive than in developed countries and tighter financial constraints are the main reasons for the slow rate of PACS implementation. The extensive use of the Internet for sharing resources and information has brought a broad number of freely available software packages to an ever-increasing number of users. In the field of medical imaging it is possible to find image format conversion packages, DICOM-compliant servers for all kinds of service classes, databases, web servers, image visualization, manipulation and analysis tools, etc. This paper describes a PACS implementation for review and storage built on freely available software. It currently integrates four diagnostic modalities (PET, CT, MR and NM), a Radiotherapy Treatment Planning workstation and several computers in a local area network, for image storage, database management and image review, processing and analysis. It also includes a web-based application that allows remote users to query the archive for studies from any workstation and to view the corresponding images and reports. We conclude that the advantage of using this approach is twofold: it allows a full understanding of all the issues involved in the implementation of a PACS, and it keeps costs down while enabling the development of a functional system for storage, distribution and review that can prove helpful for radiologists and referring physicians.
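As a flavor of what freely available software makes possible, a minimal archive index might be built along the following lines; pydicom (an open-source package) and the SQLite schema here are this sketch's assumptions, not the components the authors used:

```python
"""Toy study index: read DICOM metadata with pydicom and store it in
SQLite so a web front end could query the archive. A sketch only."""
import sqlite3
import pydicom

def index_dicom_file(db: sqlite3.Connection, path: str) -> None:
    # Read header only; pixel data is not needed for indexing.
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    db.execute(
        "INSERT INTO studies (patient_id, study_uid, modality, path) "
        "VALUES (?, ?, ?, ?)",
        (str(ds.get("PatientID", "")), str(ds.get("StudyInstanceUID", "")),
         str(ds.get("Modality", "")), path))
    db.commit()

db = sqlite3.connect("archive.db")
db.execute("CREATE TABLE IF NOT EXISTS studies "
           "(patient_id TEXT, study_uid TEXT, modality TEXT, path TEXT)")
# index_dicom_file(db, "image0001.dcm")  # hypothetical file
```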
Gas flow calculation method of a ramjet engine
NASA Astrophysics Data System (ADS)
Kostyushin, Kirill; Kagenov, Anuar; Eremin, Ivan; Zhiltsov, Konstantin; Shuvarikov, Vladimir
2017-11-01
In the present study, a calculation methodology for the gas dynamics equations in a ramjet engine is presented. The algorithm is based on Godunov's scheme. To implement the calculation algorithm, a data storage scheme is proposed that does not depend on mesh topology and allows computational meshes with an arbitrary number of cell faces. An algorithm for building a block-structured grid is given. The calculation algorithm is implemented in the software package "FlashFlow". The software package is verified on calculations of simple air intake configurations and scramjet models.
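A face-based layout is one common way to store a mesh independently of its topology; the sketch below illustrates the idea under that assumption (field names are illustrative, not FlashFlow's):

```python
"""Minimal sketch of topology-independent mesh storage: each face stores
its two neighbouring cell indices, so cells may have any number of faces
and the flux loop of a Godunov-type scheme never consults element shape."""
from dataclasses import dataclass, field

@dataclass
class FaceBasedMesh:
    # face_cells[i] = (left cell, right cell); -1 marks a boundary side.
    face_cells: list = field(default_factory=list)
    face_area: list = field(default_factory=list)
    cell_state: list = field(default_factory=list)  # conserved variables

def accumulate_fluxes(mesh: FaceBasedMesh, riemann_flux):
    """One Godunov-style sweep: loop over faces, not over cell types."""
    residual = [0.0] * len(mesh.cell_state)
    for (l, r), area in zip(mesh.face_cells, mesh.face_area):
        ul = mesh.cell_state[l]
        ur = mesh.cell_state[r] if r >= 0 else ul  # boundary: mirror state
        f = riemann_flux(ul, ur) * area
        residual[l] -= f
        if r >= 0:
            residual[r] += f
    return residual
```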
Set processing in a network environment. [data bases and magnetic disks and tapes
NASA Technical Reports Server (NTRS)
Hardgrave, W. T.
1975-01-01
A combination of a local network, a mass storage system, and an autonomous set processor serving as a data/storage management machine is described. Its characteristics include: content-accessible data bases usable from all connected devices; efficient storage/access of large data bases; simple and direct programming with data manipulation and storage management handled by the set processor; simple data base design and entry from source representation to set processor representation with no predefinition necessary; capability available for user sort/order specification; significant reduction in tape/disk pack storage and mounts; flexible environment that allows upgrading hardware/software configuration without causing major interruptions in service; minimal traffic on data communications network; and improved central memory usage on large processors.
Workload Characterization of a Leadership Class Storage Cluster
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Youngjae; Gunasekaran, Raghul; Shipman, Galen M
2010-01-01
Understanding workload characteristics is critical for optimizing and improving the performance of current systems and software, and for architecting new storage systems based on observed workload patterns. In this paper, we characterize the scientific workloads of the world's fastest HPC (High Performance Computing) storage cluster, Spider, at the Oak Ridge Leadership Computing Facility (OLCF). Spider provides an aggregate bandwidth of over 240 GB/s with over 10 petabytes of RAID 6 formatted capacity. OLCF's flagship petascale simulation platform, Jaguar, and other large HPC clusters, in total over 250 thousand compute cores, depend on Spider for their I/O needs. We characterize the system utilization, the demands of reads and writes, idle time, and the distribution of read requests to write requests for the storage system observed over a period of 6 months. From this study we develop synthesized workloads, and we show that the read and write I/O bandwidth usage as well as the inter-arrival time of requests can be modeled as a Pareto distribution.
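For illustration, fitting a Pareto distribution to inter-arrival samples takes only a few lines with SciPy; the data below are synthetic, not the Spider traces:

```python
"""Hedged illustration of Pareto modeling of request inter-arrival times."""
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-in for measured inter-arrival times.
inter_arrivals = stats.pareto.rvs(b=1.5, size=10_000, random_state=rng)

# Fit the shape parameter (b) and scale by maximum likelihood, location fixed.
b, loc, scale = stats.pareto.fit(inter_arrivals, floc=0.0)
print(f"fitted shape={b:.2f}, scale={scale:.2f}")
```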
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-14
... level be raised to support SSF RC Makeup System operability. Thus, the SFP water level will not be..., but should note that the NRC's E-Filing system does not support unlisted software, and the NRC Meta... a reverse osmosis system during normal plant operation to remove silica from borated water storage...
Mass Memory Storage Devices for AN/SLQ-32(V).
1985-06-01
Tactical programs and libraries are loaded into the AN/UYK-19 computer, the RP-16 microprocessor, and other peripheral processors (e.g., ADLS and Band 1). The software must be loaded into computer memory from the 4-track magnetic tape cartridges (MTCs) on which the programs are stored. Future computer programs, which will reside in peripheral processors, include the Automated Decoy Launching System (ADLS) and Band 1.
A Survey of Rollback-Recovery Protocols in Message-Passing Systems
1999-06-01
Changes between consecutive checkpoints can be captured by using the dirty-bit of the memory protection hardware, by emulating a dirty-bit in software [4], or by comparing the program's state with the previous checkpoint in software and writing the difference in a new checkpoint [46].
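The software-diff variant can be sketched compactly; the page size and state layout below are illustrative assumptions:

```python
"""Sketch of diff-based incremental checkpointing: compare the current
state with the previous checkpoint and persist only the pages that
changed. Assumes both byte strings have the same length."""
import pickle

PAGE = 4096

def incremental_checkpoint(state: bytes, prev: bytes, path: str) -> None:
    # Record (offset, bytes) pairs for every page that differs.
    delta = []
    for off in range(0, len(state), PAGE):
        page = state[off:off + PAGE]
        if page != prev[off:off + PAGE]:
            delta.append((off, page))
    with open(path, "wb") as f:
        pickle.dump(delta, f)

def apply_checkpoint(prev: bytes, path: str) -> bytes:
    # Rebuild the state by patching the previous checkpoint with the delta.
    buf = bytearray(prev)
    with open(path, "rb") as f:
        for off, page in pickle.load(f):
            buf[off:off + len(page)] = page
    return bytes(buf)
```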
The Optimization dispatching of Micro Grid Considering Load Control
NASA Astrophysics Data System (ADS)
Zhang, Pengfei; Xie, Jiqiang; Yang, Xiu; He, Hongli
2018-01-01
This paper proposes an optimization model for the economic operation of a micro-grid system. It coordinates new energy sources and storage with the diesel generator output so as to achieve economic operation of the micro-grid. The micro-grid economic operation model is transformed into a mixed-integer programming problem, which is solved by mature commercial software. The new model is shown to be economical, and the load control strategy can reduce the number of charge and discharge cycles of the energy storage devices, extending their service life to a certain extent.
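A toy version of such a mixed-integer dispatch problem is sketched below, written with the open-source PuLP library rather than the commercial solver used in the paper, and with made-up coefficients:

```python
"""Toy mixed-integer micro-grid dispatch: minimise diesel fuel cost while
a battery shifts energy; a binary per-step mode variable forbids
simultaneous charge/discharge. All numbers are illustrative."""
import pulp

T = list(range(24))
load = [50 + 20 * (8 <= t <= 20) for t in T]        # kW, synthetic profile
pv = [40 * max(0, 1 - abs(t - 12) / 6) for t in T]  # kW, synthetic PV

prob = pulp.LpProblem("microgrid_dispatch", pulp.LpMinimize)
dg = pulp.LpVariable.dicts("dg", T, lowBound=0, upBound=80)        # kW
ch = pulp.LpVariable.dicts("charge", T, lowBound=0, upBound=30)    # kW
dis = pulp.LpVariable.dicts("discharge", T, lowBound=0, upBound=30)
mode = pulp.LpVariable.dicts("mode", T, cat="Binary")  # 1 = charging
soc = pulp.LpVariable.dicts("soc", T, lowBound=10, upBound=100)    # kWh

prob += pulp.lpSum(0.3 * dg[t] for t in T)  # fuel cost, assumed $/kWh

for t in T:
    prob += pv[t] + dg[t] + dis[t] - ch[t] == load[t]  # power balance
    prob += ch[t] <= 30 * mode[t]           # no simultaneous charge...
    prob += dis[t] <= 30 * (1 - mode[t])    # ...and discharge
    prev = soc[t - 1] if t > 0 else 50      # initial state of charge
    prob += soc[t] == prev + 0.95 * ch[t] - (1 / 0.95) * dis[t]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status], pulp.value(prob.objective))
```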
A Fault-Tolerant Radiation-Robust Mass Storage Concept for Highly Scaled Flash Memory
NASA Astrophysics Data System (ADS)
Fuchs, Cristian M.; Trinitis, Carsten; Appel, Nicolas; Langer, Martin
2015-09-01
Future space missions will require vast amounts of data to be stored and processed aboard spacecraft. While satisfying operational mission requirements, storage systems must guarantee data integrity and recover damaged data throughout the mission. NAND-flash memories have become popular for space-borne high-performance mass memory scenarios, though future storage concepts will rely upon highly scaled flash or other memory technologies. With modern flash memory, single-bit erasure coding and RAID-based concepts are insufficient; thus, a fully run-time configurable, high-performance, dependable storage concept is needed that requires only a minimal set of logic or software. The solution presented is based on composite erasure coding and can be adjusted for altered mission duration or changing environmental conditions.
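As a loose illustration of combining codes at more than one level, the sketch below uses plain XOR parity at two granularities; the actual system uses stronger composite erasure codes, and this is not its implementation:

```python
"""Deliberately simplified two-level parity sketch: inner parity protects
each page pair, outer parity protects the whole block. Real composite
erasure coding uses stronger codes than XOR."""

def xor_bytes(parts):
    out = bytearray(len(parts[0]))
    for p in parts:
        for i, b in enumerate(p):
            out[i] ^= b
    return bytes(out)

def encode(pages):
    """pages: list of equal-length byte strings forming one block."""
    inner = [xor_bytes(pages[i:i + 2]) for i in range(0, len(pages), 2)]
    outer = xor_bytes(pages)
    return inner, outer

# A lost page can be rebuilt from its pair partner plus the inner parity;
# the outer parity adds a second, block-wide line of defence.
```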
Experiences From NASA/Langley's DMSS Project
NASA Technical Reports Server (NTRS)
1996-01-01
There is a trend in institutions with high performance computing and data management requirements to explore mass storage systems with peripherals directly attached to a high speed network. The Distributed Mass Storage System (DMSS) Project at the NASA Langley Research Center (LaRC) has placed such a system into production use. This paper will present the experiences, both good and bad, we have had with this system since putting it into production usage. The system is comprised of: 1) National Storage Laboratory (NSL)/UniTree 2.1, 2) IBM 9570 HIPPI attached disk arrays (both RAID 3 and RAID 5), 3) IBM RS6000 server, 4) HIPPI/IPI3 third party transfers between the disk array systems and the supercomputer clients, a CRAY Y-MP and a CRAY 2, 5) a "warm spare" file server, 6) transition software to convert from CRAY's Data Migration Facility (DMF) based system to DMSS, 7) an NSC PS32 HIPPI switch, and 8) a STK 4490 robotic library accessed from the IBM RS6000 block mux interface. This paper will cover: the performance of the DMSS in the areas of file transfer rates, migration and recall, and file manipulation (listing, deleting, etc.); the appropriateness of a workstation class of file server for NSL/UniTree with LaRC's present storage requirements in mind; the role of the third party transfers between the supercomputers and the DMSS disk array systems; a detailed comparison (both in performance and functionality) between the DMF and DMSS systems; LaRC's enhancements to the NSL/UniTree system administration environment; the mechanism for DMSS to provide file server redundancy; the statistics on the availability of DMSS; and the design of and experiences with the locally developed transparent transition software which allowed us to make over 1.5 million DMF files available to NSL/UniTree with minimal system outage.
NASA Technical Reports Server (NTRS)
Bergmann, E.
1976-01-01
The current baseline method and software implementation of the space shuttle reaction control subsystem failure detection and identification (RCS FDI) system is presented. This algorithm is recommended for inclusion in the redundancy management (RM) module of the space shuttle guidance, navigation, and control system. Supporting software is presented and recommended for inclusion in the system management (SM) and display and control (D&C) systems. RCS FDI uses data from sensors in the jets, in the manifold isolation valves, and in the RCS fuel and oxidizer storage tanks. A list of jet failures and fuel imbalance warnings is generated for use by the jet selection algorithm of the on-orbit and entry flight control systems, and to inform the crew and ground controllers of RCS failure status. Manifold isolation valve close commands are generated in the event of failed-on or leaking jets to prevent loss of large quantities of RCS fuel.
Morales, M L; Callejón, R M; Ordóñez, J L; Troncoso, A M; García-Parrilla, M C
2017-11-03
Five free software packages were compared to assess their utility for the non-targeted study of changes in the volatile profile during the storage of a novel strawberry beverage. AMDIS coupled to Gavin software turned out to be easy to use, required minimal handling for subsequent data treatment, and gave results most similar to those obtained by manual integration. However, AMDIS coupled to SpectConnect software provided more information for the study of volatile profile changes during the storage of the strawberry beverage. During storage, the volatile profile changed, differentiating beverages stored at different temperatures, and this difference increased with time; these results were also supported by PCA. As expected, it seems that cold temperature is the best means of preservation for this product during long-term storage. Variable Importance in the Projection (VIP) and correlation scores pointed to four volatile compounds as potential shelf-life markers for our strawberry beverage: 2-phenylethyl acetate, decanoic acid, γ-decalactone and furfural. Copyright © 2017 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weiss, Chester J
The software solves the three-dimensional Poisson equation div(k grad(u)) = f by the finite element method for the case when material properties, k, are distributed over a hierarchy of edges, facets and tetrahedra in the finite element mesh. The method is described in Weiss, CJ, Finite element analysis for model parameters distributed on a hierarchy of geometric simplices, Geophysics, v82, E155-167, doi:10.1190/GEO2017-0058.1 (2017). A standard finite element method for solving Poisson's equation is augmented by including in the 3D stiffness matrix additional 2D and 1D stiffness matrices representing the contributions from material properties associated with mesh faces and edges, respectively. The resulting linear system is solved iteratively using the conjugate gradient method with Jacobi preconditioning. To minimize computer storage during program execution, the linear solver computes matrix-vector contractions element-by-element over the mesh, without explicit storage of the global stiffness matrix. Program output is vtk-compliant for visualization and rendering by 3rd-party software. The program uses dynamic memory allocation, so there are no hard limits on problem size beyond those imposed by the operating system and configuration on which the software is run. The dimension, N, of the finite element solution vector is constrained by the addressable space of 32- versus 64-bit operating systems. Total working space required for the program is approximately 13*N double precision words.
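The solver strategy, matrix-free operator application with Jacobi preconditioning, can be sketched as follows; the small stand-in operator replaces the element-by-element mesh contractions of the actual code:

```python
"""Matrix-free preconditioned conjugate gradient: the operator A is
applied through a callback (in the real code, a sum of element-by-element
contractions over the mesh) and the Jacobi preconditioner divides by the
assembled diagonal of A."""
import numpy as np

def pcg(apply_A, diag, b, tol=1e-8, max_iter=1000):
    x = np.zeros_like(b)
    r = b - apply_A(x)
    z = r / diag                  # Jacobi preconditioner: M^-1 = 1/diag(A)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = r / diag
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Stand-in operator: a small SPD matrix instead of mesh contractions.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
x = pcg(lambda v: A @ v, np.diag(A), np.array([1.0, 2.0]))
```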
ERIC Educational Resources Information Center
Beiser, Karl
1986-01-01
Describes a product--BiblioFile, Library Corporation's catalog production system--and a service--reproduction of public domain software on CD-ROM for sale to those interested--which revolve around the ultra-high-density storage capacity of CD-ROM discs. Criteria for selecting microcomputers are briefly reviewed. (MBR)
ERIC Educational Resources Information Center
McConnell, Terry
2004-01-01
Monica Adams, head librarian at Robinson Secondary in Fairfax County, Virginia, states that librarians should have the technical knowledge to support projects related to digital video editing. The process of digital video editing, along with the cables, storage issues, and the computer system and software involved, is described.
Towards Efficient Scientific Data Management Using Cloud Storage
NASA Technical Reports Server (NTRS)
He, Qiming
2013-01-01
A software prototype allows users to back up and restore data to/from both public and private cloud storage such as Amazon's S3 and NASA's Nebula. Unlike other off-the-shelf tools, this software ensures user data security in the cloud (through encryption) and minimizes users' operating costs by using space- and bandwidth-efficient compression and incremental backup. Parallel data processing utilities have also been developed by using massively scalable cloud computing in conjunction with cloud storage. One of the innovations in this software is the use of modified open source components to work with a private cloud like NASA Nebula. Another innovation is porting the complex backup-to-cloud software to embedded Linux, running on home networking devices, in order to benefit more users.
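The space-efficient incremental approach can be illustrated as follows; the bucket name, manifest layout, and use of Fernet encryption are this sketch's assumptions, not details of the prototype:

```python
"""Sketch of incremental, compressed, encrypted backup to object storage:
hash-based change detection skips unchanged files, zlib compresses, and
Fernet (from the `cryptography` package) encrypts before upload."""
import hashlib
import json
import os
import zlib

import boto3
from cryptography.fernet import Fernet

s3 = boto3.client("s3")
BUCKET = "my-backup-bucket"  # hypothetical bucket, assumed to exist

def backup(root: str, manifest_path: str, key: bytes) -> None:
    fernet = Fernet(key)
    manifest = (json.load(open(manifest_path))
                if os.path.exists(manifest_path) else {})
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            data = open(path, "rb").read()
            digest = hashlib.sha256(data).hexdigest()
            if manifest.get(path) == digest:
                continue  # incremental: unchanged since last backup
            blob = fernet.encrypt(zlib.compress(data))
            s3.put_object(Bucket=BUCKET, Key=digest, Body=blob)
            manifest[path] = digest
    json.dump(manifest, open(manifest_path, "w"))
```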
A Report on Simulation-Driven Reliability and Failure Analysis of Large-Scale Storage Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wan, Lipeng; Wang, Feiyi; Oral, H. Sarp
High-performance computing (HPC) storage systems provide data availability and reliability using various hardware and software fault tolerance techniques. Usually, reliability and availability are calculated at the subsystem or component level using limited metrics such as mean time to failure (MTTF) or mean time to data loss (MTTDL). This often means settling on simple and disconnected failure models (such as an exponential failure rate) to achieve tractable and closed-form solutions. However, such models have been shown to be insufficient in assessing end-to-end storage system reliability and availability. We propose a generic simulation framework aimed at analyzing the reliability and availability of storage systems at scale, and at investigating what-if scenarios. The framework is designed for an end-to-end storage system, accommodating the various components and subsystems, their interconnections, failure patterns and propagation, and performs dependency analysis to capture a wide range of failure cases. We evaluate the framework against a large-scale storage system that is in production and analyze its failure projections toward and beyond the end of its lifecycle. We also examine the potential operational impact by studying how different types of components affect the overall system reliability and availability, and present the preliminary results.
SQA of finite element method (FEM) codes used for analyses of pit storage/transport packages
DOE Office of Scientific and Technical Information (OSTI.GOV)
Russel, E.
1997-11-01
This report contains viewgraphs on the software quality assurance of finite element method codes used for analyses of pit storage and transport projects. This methodology utilizes the ISO 9000-3: Guideline for application of 9001 to the development, supply, and maintenance of software, for establishing well-defined software engineering processes to consistently maintain high quality management approaches.
Using Cloud-based Storage Technologies for Earth Science Data
NASA Astrophysics Data System (ADS)
Michaelis, A.; Readey, J.; Votava, P.
2016-12-01
Cloud based infrastructure may offer several key benefits of scalability, built in redundancy and reduced total cost of ownership as compared with a traditional data center approach. However, most of the tools and software systems developed for NASA data repositories were not developed with a cloud based infrastructure in mind and do not fully take advantage of commonly available cloud-based technologies. Object storage services are provided through all the leading public (Amazon Web Service, Microsoft Azure, Google Cloud, etc.) and private (Open Stack) clouds, and may provide a more cost-effective means of storing large data collections online. We describe a system that utilizes object storage rather than traditional file system based storage to vend earth science data. The system described is not only cost effective, but shows superior performance for running many different analytics tasks in the cloud. To enable compatibility with existing tools and applications, we outline client libraries that are API compatible with existing libraries for HDF5 and NetCDF4. Performance of the system is demonstrated using clouds services running on Amazon Web Services.
A flexible continuous-variable QKD system using off-the-shelf components
NASA Astrophysics Data System (ADS)
Comandar, Lucian C.; Brunner, Hans H.; Bettelli, Stefano; Fung, Fred; Karinou, Fotini; Hillerkuss, David; Mikroulis, Spiros; Wang, Dawei; Kuschnerov, Maxim; Xie, Changsong; Poppe, Andreas; Peev, Momtchil
2017-10-01
We present the development of a robust and versatile CV-QKD architecture based on commercially available optical and electronic components. The system uses a pilot tone for phase synchronization with a local oscillator, as well as local feedback loops to mitigate frequency and polarization drifts. Transmit and receive-side digital signal processing is performed fully in software, allowing for rapid protocol reconfiguration. The quantum link is complemented with a software stack for secure-key processing, key storage and encrypted communication. All these features allow for the system to be at the same time a prototype for a future commercial product and a research platform.
Adopting Industry Standards for Control Systems Within Advanced Life Support
NASA Technical Reports Server (NTRS)
Young, James Scott; Boulanger, Richard
2002-01-01
This paper gives a description of OPC (Object Linking and Embedding for Process Control) standards for process control and outlines the experiences at JSC with using these standards to interface with I/O hardware from three independent vendors. The I/O hardware was integrated with a commercially available SCADA/HMI software package to make up the control and monitoring system for the Environmental Systems Test Stand (ESTS). OPC standards were utilized for communicating with I/O hardware and the software was used for implementing monitoring, PC-based distributed control, and redundant data storage over an Ethernet physical layer using an embedded din-rail mounted PC.
Modeling Pumped Thermal Energy Storage with Waste Heat Harvesting
NASA Astrophysics Data System (ADS)
Abarr, Miles L. Lindsey
This work introduces a new concept for a utility-scale combined energy storage and generation system. The proposed design utilizes a pumped thermal energy storage (PTES) system that also uses waste heat leaving a natural gas peaker plant, creating a low-cost utility-scale energy storage system by leveraging this dual functionality. This dissertation first presents a review of previous work in PTES as well as the details of the proposed integrated bottoming and energy storage system. A time-domain system model was developed in MathWorks R2016a Simscape and Simulink software to analyze this system. Validation of both the fluid state model and the thermal energy storage model is provided. The experimental results showed that the average error in cumulative fluid energy between simulation and measurement was +/- 0.3% per hour. Comparison to a Finite Element Analysis (FEA) model showed <1% error for bottoming-mode heat transfer. The system model was used to conduct sensitivity analysis, baseline performance, and levelized cost of energy studies of the recently proposed Pumped Thermal Energy Storage and Bottoming System (Bot-PTES) that uses ammonia as the working fluid. This analysis focused on the effects of hot thermal storage utilization, system pressure, and evaporator/condenser size on system performance. This work presents the estimated performance for a proposed baseline Bot-PTES. Results of this analysis showed that all selected parameters had significant effects on efficiency, with the evaporator/condenser size having the largest effect over the selected ranges. Results for the baseline case showed stand-alone energy storage efficiencies between 51 and 66% for varying power levels and charge states, and a stand-alone bottoming efficiency of 24%. The resulting efficiencies for this case were low compared to competing technologies; however, the dual functionality of the Bot-PTES enables it to have a higher capacity factor, leading to a levelized cost of energy of $91-197/MWh, compared to $262-284/MWh for batteries and $172-254/MWh for Compressed Air Energy Storage.
Integration of cloud-based storage in BES III computing environment
NASA Astrophysics Data System (ADS)
Wang, L.; Hernandez, F.; Deng, Z.
2014-06-01
We present an on-going work that aims to evaluate the suitability of cloud-based storage as a supplement to the Lustre file system for storing experimental data for the BES III physics experiment and as a backend for storing files belonging to individual members of the collaboration. In particular, we discuss our findings regarding the support of cloud-based storage in the software stack of the experiment. We report on our development work that improves the support of CERN's ROOT data analysis framework and allows efficient remote access to data through several cloud storage protocols. We also present our efforts providing the experiment with efficient command line tools for navigating and interacting with cloud storage-based data repositories both from interactive sessions and grid jobs.
Zuhtuogullari, Kursat; Allahverdi, Novruz; Arikan, Nihat
2013-01-01
Systems with high-dimensional input spaces require long processing times and large amounts of memory. Most attribute selection algorithms suffer from limits on input dimensionality and from information storage problems. These problems are eliminated by the feature reduction software developed here, which uses a new modified selection mechanism that adds solution candidates from the middle region. The hybrid system software is constructed for reducing the input attributes of systems with large numbers of input variables. The software also supports the roulette wheel selection mechanism, and linear order crossover is used as the recombination operator. In genetic-algorithm-based soft computing methods, locking to local solutions is also a problem; this is eliminated by the developed software. Faster and more effective results are obtained in the test procedures. Twelve input variables of the urological system have been reduced to reducts (reduced input attribute sets) with seven, six, and five elements. The obtained results show that the developed software with modified selection has advantages in memory allocation, execution time, classification accuracy, sensitivity, and specificity when compared with other reduction algorithms on the urological test data. PMID:23573172
NASA Technical Reports Server (NTRS)
Baxes, Gregory A. (Inventor); Linger, Timothy C. (Inventor)
2011-01-01
Systems and methods are provided for progressive mesh storage and reconstruction using wavelet-encoded height fields. A method for progressive mesh storage includes reading raster height field data, and processing the raster height field data with a discrete wavelet transform to generate wavelet-encoded height fields. In another embodiment, a method for progressive mesh storage includes reading texture map data, and processing the texture map data with a discrete wavelet transform to generate wavelet-encoded texture map fields. A method for reconstructing a progressive mesh from wavelet-encoded height field data includes determining terrain blocks, and a level of detail required for each terrain block, based upon a viewpoint. Triangle strip constructs are generated from vertices of the terrain blocks, and an image is rendered utilizing the triangle strip constructs. Software products that implement these methods are provided.
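The encode/reconstruct cycle can be illustrated with PyWavelets as a stand-in for the patented transform; the wavelet choice and level handling below are assumptions of this sketch, not the inventors' method:

```python
"""Sketch of wavelet-encoded height fields with level-of-detail
reconstruction: decompose a raster height field once, then rebuild
coarser terrain-block levels by zeroing the finest detail coefficients."""
import numpy as np
import pywt

height = np.random.rand(256, 256)  # stand-in raster height field

# Multi-level 2-D discrete wavelet transform of the height field.
coeffs = pywt.wavedec2(height, "haar", level=4)

def reconstruct(coeffs, keep_levels):
    """Keep the approximation plus the coarsest `keep_levels` detail
    bands; drop finer detail (lower LOD for distant terrain blocks)."""
    trimmed = [coeffs[0]]
    for i, band in enumerate(coeffs[1:], start=1):
        if i <= keep_levels:
            trimmed.append(band)
        else:
            trimmed.append(tuple(np.zeros_like(d) for d in band))
    return pywt.waverec2(trimmed, "haar")

coarse = reconstruct(coeffs, keep_levels=2)  # low level of detail
full = reconstruct(coeffs, keep_levels=4)    # full detail
```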
NASA Technical Reports Server (NTRS)
Baxes, Gregory A. (Inventor)
2010-01-01
Systems and methods are provided for progressive mesh storage and reconstruction using wavelet-encoded height fields. A method for progressive mesh storage includes reading raster height field data, and processing the raster height field data with a discrete wavelet transform to generate wavelet-encoded height fields. In another embodiment, a method for progressive mesh storage includes reading texture map data, and processing the texture map data with a discrete wavelet transform to generate wavelet-encoded texture map fields. A method for reconstructing a progressive mesh from wavelet-encoded height field data includes determining terrain blocks, and a level of detail required for each terrain block, based upon a viewpoint. Triangle strip constructs are generated from vertices of the terrain blocks, and an image is rendered utilizing the triangle strip constructs. Software products that implement these methods are provided.
System and Mass Storage Study for Defense Mapping Agency Topographic Center (DMATC/HC)
1977-04-01
The assessment should be based on carefully designed control conditions (data volume, resolution, function, etc.). The software falls into two categories: hardware control and library management support. This software is designed to interface with IBM 360/370 OS and OS/VS. The laser recording unit includes a programmable recorder control subsystem which can be designed to provide a compatible hardware and software interface.
A cyber infrastructure for the SKA Telescope Manager
NASA Astrophysics Data System (ADS)
Barbosa, Domingos; Barraca, João. P.; Carvalho, Bruno; Maia, Dalmiro; Gupta, Yashwant; Natarajan, Swaminathan; Le Roux, Gerhard; Swart, Paul
2016-07-01
The Square Kilometre Array Telescope Manager (SKA TM) will be responsible for assisting SKA Operations and Observation Management, carrying out system diagnosis and collecting monitoring and control data from the SKA subsystems and components. To provide adequate compute resources, scalability, operational continuity and high availability, as well as strict quality of service, the TM cyber-infrastructure (embodied in the Local Infrastructure, LINFRA) consists of COTS hardware and infrastructural software (for example: server monitoring software, host operating systems, virtualization software, device firmware), providing a specially tailored Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) solution. The TM infrastructure provides services in the form of computational power, software-defined networking, power and storage abstractions, and high-level, state-of-the-art IaaS and PaaS management interfaces. This cyber platform will be tailored to each of the two SKA Phase 1 telescope instances (SKA_MID in South Africa and SKA_LOW in Australia), each presenting different computational and storage infrastructures conditioned by location. This cyber platform will provide a compute model enabling TM to manage the deployment and execution of its multiple components (observation scheduler, proposal submission tools, M&C components, forensic tools, several databases, etc.). In this sense, the TM LINFRA is primarily focused on the provision of isolated instances, mostly resorting to virtualization technologies, while defaulting to bare hardware if specifically required for performance, security, availability, or other requirements.
Development and prototype testing of MgCl 2 /graphite foam latent heat thermal energy storage system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Dileep; Yu, Wenhua; Zhao, Weihuan
Composites of graphite foam infiltrated with a magnesium chloride phase-change material have been developed as high-temperature thermal energy storage media for concentrated solar power applications. This storage medium provides a high thermal energy storage density, a narrow operating temperature range, and excellent heat transfer characteristics. In this study, experimental investigations were conducted on laboratory-scale prototypes with the magnesium chloride/graphite foam composite as the latent heat thermal energy storage system. Prototypes were designed and built to monitor the melt front movement during the charging/discharging tests. A test loop was built to ensure the charging/discharging of the prototypes at temperatures above 700 degrees C. Repeated thermal cycling experiments were carried out on the fabricated prototypes, and the experimental temperature profiles were compared to the predicted results from numerical simulations using COMSOL Multiphysics software. Experimental results were found to be in good agreement with the simulations, validating the thermal models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McDonald, K; Curran, B
I. Information Security Background (Speaker: Kevin McDonald): Evolution of Medical Devices; Living and Working in a Hostile Environment; Attack Motivations; Attack Vectors; Simple Safety Strategies; Medical Device Security in the News; Medical Devices and Vendors Summary.
II. Keeping Radiation Oncology IT Systems Secure (Speaker: Bruce Curran): Hardware Security (double-lock requirements, "foreign" computer systems, portable device encryption, patient data storage system requirements); Network Configuration (isolating critical devices, isolating clinical networks, remote access considerations); Software Applications/Configuration (passwords/screen savers, restricted services/access, software configuration restriction, use of DNS to restrict access, patches/upgrades); Awareness (intrusion prevention, intrusion detection, threat risk analysis); Conclusion.
Learning Objectives: Understand how hospital IT requirements affect radiation oncology IT systems. Illustrate sample practices for hardware, network, and software security. Discuss implementation of good IT security practices in radiation oncology. Understand the overall risk and threat scenario in a networked environment.
ATLAS tile calorimeter cesium calibration control and analysis software
NASA Astrophysics Data System (ADS)
Solovyanov, O.; Solodkov, A.; Starchenko, E.; Karyukhin, A.; Isaev, A.; Shalanda, N.
2008-07-01
An online control system to calibrate and monitor the ATLAS barrel hadronic calorimeter (TileCal) with a movable radioactive source, driven by liquid flow, is described. To read out and control the system, online software has been developed using ATLAS TDAQ components such as DVS (Diagnostic and Verification System) to verify the hardware before running, IS (Information Server) for data and status exchange between networked computers, and DDC (DCS to DAQ Connection) to connect to the PVSS-based slow control systems of the Tile Calorimeter, high voltage and low voltage. A system of scripting facilities, based on the Python language, is used to handle all the calibration and monitoring processes, from the hardware level to final data storage, including various abnormal situations. A Qt-based graphical user interface to display the status of the calibration system during the cesium source scan is described. The software for analysis of the detector response, using online data, is discussed. Performance of the system and first experience from the ATLAS pit are presented.
The amino acid's backup bone - storage solutions for proteomics facilities.
Meckel, Hagen; Stephan, Christian; Bunse, Christian; Krafzik, Michael; Reher, Christopher; Kohl, Michael; Meyer, Helmut Erich; Eisenacher, Martin
2014-01-01
Proteomics methods, especially high-throughput mass spectrometry analysis, have been continually developed and improved over the years. The analysis of complex biological samples produces large volumes of raw data. Data storage and recovery management pose substantial challenges to biomedical or proteomics facilities regarding backup and archiving concepts as well as hardware requirements. In this article we describe the differences between the terms backup and archive with regard to manual and automatic approaches. We also introduce different storage concepts and technologies, from transportable media to professional solutions such as redundant array of independent disks (RAID) systems, network-attached storage (NAS) and storage area networks (SAN). Moreover, we present a software solution, which we developed for the purpose of long-term preservation of large mass spectrometry raw data files on an object storage device (OSD) archiving system. Finally, advantages, disadvantages, and experiences from routine operations of the presented concepts and technologies are evaluated and discussed. This article is part of a Special Issue entitled: Computational Proteomics in the Post-Identification Era. Guest Editors: Martin Eisenacher and Christian Stephan. Copyright © 2013. Published by Elsevier B.V.
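The distinction drawn above between backup and archive suggests a simple long-term preservation pattern: a one-time, fixity-checked copy of each raw file into archive storage plus an append-only manifest for later retrieval and verification. The following Python sketch illustrates that generic idea; the function names, paths, and manifest format are invented for illustration and are not the OSD archiving software described in the article.

import hashlib
import shutil
from pathlib import Path

def sha256_of(path: Path, chunk: int = 1 << 20) -> str:
    # Hash in 1 MiB chunks so multi-gigabyte raw files never sit in memory whole.
    h = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def archive_raw_file(src: Path, archive_dir: Path, manifest: Path) -> str:
    # Copy one raw file into the archive and record its checksum for fixity checks.
    digest = sha256_of(src)
    archive_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, archive_dir / src.name)  # copy2 preserves timestamps
    with manifest.open("a", encoding="utf-8") as m:
        m.write(f"{src.name}\t{digest}\n")  # tab-separated: file name, SHA-256
    return digest

Unlike a rotating backup, nothing here is overwritten or expired; verification at retrieval time consists of recomputing the hash and comparing it against the manifest entry.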
User's guide to noise data acquisition and analysis programs for HP9845: Nicolet analyzers
NASA Technical Reports Server (NTRS)
Mcgary, M. C.
1982-01-01
A software interface package was written for use with a desktop computer and two models of single channel Fast Fourier analyzers. This software features a portable measurement and analysis system with several options. Two types of interface hardware can alternately be used in conjunction with the software. Either an IEEE-488 Bus interface or a 16-bit parallel system may be used. Two types of storage medium, either tape cartridge or floppy disc can be used with the software. Five types of data may be stored, plotted, and/or printed. The data types include time histories, narrow band power spectra, and narrow band, one-third octave band, or octave band sound pressure level. The data acquisition programming includes a front panel remote control option for the FFT analyzers. Data analysis options include choice of line type and pen color for plotting.
An Operating Environment for the Jellybean Machine
1988-05-01
MODEL 48 5.4.4 Restarting a Context. The operating system provides one primitive message (RESTART-CONTEXT) and two system calls (XFERID and XFER.ADDR) to... efficient, powerful services is required to support this system. To provide this supportive operating environment, I developed an operating system kernel that... serves many of the initial needs of our machine. This Jellybean Operating System Software provides an object-based storage model, where typed
A case for automated tape in clinical imaging.
Bookman, G; Baune, D
1998-08-01
Electronic archiving of radiology images over many years will require many terabytes of storage with a need for rapid retrieval of these images. As more large PACS installations are implemented, a data crisis occurs. Storing this large amount of data using the traditional method of optical jukeboxes or online disk alone becomes unworkable: the floor space, the number of optical jukeboxes, and the off-line shelf storage required to store the images become unmanageable. With the recent advances in tape and tape drives, the use of tape for long-term storage of PACS data has become the preferred alternative. A PACS system consisting of a centrally managed system of RAID disk, software, and, at the heart of the system, tape presents a solution that for the first time solves the problems of multi-modality high-end PACS, non-DICOM image, electronic medical record, and ADT data storage. This paper will examine the installation of the University of Utah Department of Radiology PACS system and the integration of an automated tape archive. The tape archive is also capable of storing data other than traditional PACS data. The implementation of an automated data archive to serve the many other needs of a large hospital will also be discussed, including the integration of a filmless cardiology department and the backup/archival needs of a traditional MIS department. The need for high bandwidth to tape with a large RAID cache will be examined, along with how, through an interface to a RIS pre-fetch engine, tape can be a superior solution to optical platters or other archival solutions. The data management software will be discussed in detail, and the performance and cost of RAID disk cache and automated tape will be compared with a solution that includes optical storage.
Development of the updated system of city underground pipelines based on Visual Studio
NASA Astrophysics Data System (ADS)
Zhang, Jianxiong; Zhu, Yun; Li, Xiangdong
2009-10-01
Our city has an integrated pipeline network management system built with ArcGIS Engine 9.1 as the underlying development platform and Oracle9i as the basic database for storing data. In this system, ArcGIS SDE 9.1 is applied as the spatial data engine, and the system is a synthetic management application developed with the Visual Studio visual development tools. Because the pipeline update function of the system suffered from slow updates and occasional data loss, and to ensure that the underground pipeline data can be updated conveniently and frequently in real time while preserving the currency and integrity of the data, we added a new update module, developed and researched by ourselves. The module has powerful data update functions and realizes input, output, and rapid update of large volumes of data. The new module adopts the Visual Studio visual development tools and uses Access as the basic database for storing data. Graphics can be edited in the AutoCAD software, and the database update is realized through a link between the graphics and the system. Practice shows that the update module has good compatibility with the original system and provides reliable, highly efficient updating of the database.
Building a DAM To Last: Archiving Digital Assets.
ERIC Educational Resources Information Center
Zeichick, Alan
2003-01-01
Discusses archiving digital information and the need for organizations to develop policies regarding digital asset management (DAM) and storage. Topics include determining the value of digital assets; formats of digital information; use of stored information; and system architecture, including hardware and asset management software. (LRW)
Beyond Information Retrieval: Ways To Provide Content in Context.
ERIC Educational Resources Information Center
Wiley, Deborah Lynne
1998-01-01
Provides an overview of information retrieval from mainframe systems to Web search engines; discusses collaborative filtering, data extraction, data visualization, agent technology, pattern recognition, classification and clustering, and virtual communities. Argues that rather than huge data-storage centers and proprietary software, we need…
ERIC Educational Resources Information Center
MacKenzie, Douglas
1996-01-01
Discusses the use of computer systems for archival applications based on experiences at the Demarco European Arts Foundation (Scotland) and the TAMH Project, an attempt to build a virtual museum of Tay Valley maritime history. Highlights include hardware; development software; data representation, including storage space versus quality;…
Key-value store with internal key-value storage interface
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bent, John M.; Faibish, Sorin; Ting, Dennis P. J.
A key-value store is provided having one or more key-value storage interfaces. A key-value store on at least one compute node comprises a memory for storing a plurality of key-value pairs; and an abstract storage interface comprising a software interface module that communicates with at least one persistent storage device providing a key-value interface for persistent storage of one or more of the plurality of key-value pairs, wherein the software interface module provides the one or more key-value pairs to the at least one persistent storage device in a key-value format. The abstract storage interface optionally processes one or more batch operations on the plurality of key-value pairs. A distributed embodiment for a partitioned key-value store is also provided.
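As a rough sketch of the pattern this abstract describes, the Python below models an abstract storage interface: a software module that forwards key-value pairs, singly or batched, from a compute-node store to any pluggable persistent backend exposing a key-value API. All class and method names are hypothetical; the actual interfaces are not given in the abstract.

from abc import ABC, abstractmethod

class PersistentKVBackend(ABC):
    # Abstract storage interface: any persistent device exposing a key-value API.
    @abstractmethod
    def put(self, key: bytes, value: bytes) -> None: ...
    @abstractmethod
    def get(self, key: bytes) -> bytes: ...

class DictBackend(PersistentKVBackend):
    # Stand-in "device" backed by a dict; a real backend would wrap disk or flash.
    def __init__(self):
        self._store = {}
    def put(self, key, value):
        self._store[key] = value
    def get(self, key):
        return self._store[key]

class KeyValueStore:
    # Compute-node store: an in-memory table in front of a persistent backend.
    def __init__(self, backend: PersistentKVBackend):
        self._pairs = {}
        self._backend = backend
    def put(self, key: bytes, value: bytes) -> None:
        self._pairs[key] = value
        self._backend.put(key, value)  # handed over already in key-value format
    def put_batch(self, pairs) -> None:
        for key, value in pairs:  # a smarter backend could accept the whole batch
            self.put(key, value)
    def get(self, key: bytes) -> bytes:
        if key in self._pairs:
            return self._pairs[key]
        return self._backend.get(key)

Because the store talks to the device in key-value terms rather than blocks or files, the backend can be swapped without touching the application.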
Data Mining as a Service (DMaaS)
NASA Astrophysics Data System (ADS)
Tejedor, E.; Piparo, D.; Mascetti, L.; Moscicki, J.; Lamanna, M.; Mato, P.
2016-10-01
Data Mining as a Service (DMaaS) is a software and computing infrastructure that allows interactive mining of scientific data in the cloud. It allows users to run advanced data analyses by leveraging the widely adopted Jupyter notebook interface. Furthermore, the system makes it easier to share results and scientific code, access scientific software, produce tutorials and demonstrations as well as preserve the analyses of scientists. This paper describes how a first pilot of the DMaaS service is being deployed at CERN, starting from the notebook interface that has been fully integrated with the ROOT analysis framework, in order to provide all the tools for scientists to run their analyses. Additionally, we characterise the service backend, which combines a set of IT services such as user authentication, virtual computing infrastructure, mass storage, file synchronisation, development portals or batch systems. The added value acquired by the combination of the aforementioned categories of services is discussed, focusing on the opportunities offered by the CERNBox synchronisation service and its massive storage backend, EOS.
Digital imaging technology assessment: Digital document storage project
NASA Technical Reports Server (NTRS)
1989-01-01
An ongoing technical assessment and requirements definition project is examining the potential role of digital imaging technology at NASA's STI facility. The focus is on the basic components of imaging technology in today's marketplace as well as the components anticipated in the near future. Presented is a requirement specification for a prototype project, an initial examination of current image processing at the STI facility, and an initial summary of image processing projects at other sites. Operational imaging systems incorporate scanners, optical storage, high resolution monitors, processing nodes, magnetic storage, jukeboxes, specialized boards, optical character recognition gear, pixel addressable printers, communications, and complex software processes.
Jha, Ashish Kumar
2015-01-01
Glomerular filtration rate (GFR) estimation by the plasma sampling method is considered the gold standard. However, this method is not widely used because of the complex technique and cumbersome calculations, coupled with the lack of availability of user-friendly software. The routinely used serum creatinine method (SrCrM) of GFR estimation also requires the use of online calculators, which cannot be used without internet access. We have developed user-friendly software, "GFR estimation software", which gives the option to estimate GFR by the plasma sampling method as well as by SrCrM. We used Microsoft Windows(®) as the operating system, Visual Basic 6.0 as the front end, and Microsoft Access(®) as the database tool to develop this software. We used Russell's formula for GFR calculation by the plasma sampling method. GFR calculations using serum creatinine were done using the MIRD, Cockcroft-Gault, Schwartz, and Counahan-Barratt methods. The developed software performs the mathematical calculations correctly and is user-friendly. It also enables storage and easy retrieval of the raw data, patient information, and calculated GFR for further processing and comparison. This is user-friendly software to calculate GFR by various plasma sampling methods and blood parameters, and a good system for storing the raw and processed data for future analysis.
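The serum creatinine estimators named above are published closed-form equations, so the calculation itself is small. As one concrete example, a minimal Python sketch of the Cockcroft-Gault estimate (the formula is standard; this is not the authors' Visual Basic code):

def cockcroft_gault(age_years: float, weight_kg: float,
                    serum_creatinine_mg_dl: float, female: bool) -> float:
    # Estimated creatinine clearance in mL/min by the Cockcroft-Gault equation.
    crcl = ((140 - age_years) * weight_kg) / (72 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

# Example: a 60-year-old, 70 kg male with serum creatinine 1.0 mg/dL
print(round(cockcroft_gault(60, 70, 1.0, female=False), 1))  # about 77.8 mL/min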
NASA Technical Reports Server (NTRS)
Salmon, Ellen; Tarshish, Adina; Palm, Nancy; Patel, Sanjay; Saletta, Marty; Vanderlan, Ed; Rouch, Mike; Burns, Lisa; Duffy, Daniel; Caine, Robert
2004-01-01
This paper presents the data management issues associated with a large center like the NCCS and how these issues are addressed. More specifically, the focus of this paper is on the recent transition from a legacy UniTree (Legato) system to a SAM-QFS (Sun) system. Therefore, this paper will describe the motivations, from both a hardware and software perspective, for migrating from one system to another. Coupled with the migration from UniTree into SAM-QFS, the complete mass storage environment was upgraded to provide high availability, redundancy, and enhanced performance. This paper will describe the resulting solution and lessons learned throughout the migration process.
The Impact of Implementing a Demand Forecasting System into a Low-Income Country’s Supply Chain
Mueller, Leslie E.; Haidari, Leila A.; Wateska, Angela R.; Phillips, Roslyn J.; Schmitz, Michelle M.; Connor, Diana L.; Norman, Bryan A.; Brown, Shawn T.; Welling, Joel S.; Lee, Bruce Y.
2016-01-01
OBJECTIVE: To evaluate the potential impact and value of applications (e.g., adjusting ordering levels, storage capacity, transportation capacity, distribution frequency) of data from demand forecasting systems implemented in a lower-income country's vaccine supply chain with different levels of population change to urban areas. MATERIALS AND METHODS: Using our software, HERMES, we generated a detailed discrete event simulation model of Niger's entire vaccine supply chain, including every refrigerator, freezer, transport, personnel, vaccine, cost, and location. We represented the introduction of a demand forecasting system to adjust vaccine ordering that could be implemented with increasing delivery frequencies and/or additions of cold chain equipment (storage and/or transportation) across the supply chain during varying degrees of population movement. RESULTS: Implementing a demand forecasting system with increased storage and transport frequency increased the number of successfully administered vaccine doses and lowered the logistics cost per dose by up to 34%. Implementing a demand forecasting system without storage/transport increases actually decreased vaccine availability in certain circumstances. DISCUSSION: The potential maximum gains of a demand forecasting system may only be realized if the system is implemented to augment both the supply chain cold storage and transportation. Implementation may have some impact but, in certain circumstances, may hurt delivery. Therefore, implementation of demand forecasting systems with additional storage and transport may be the better approach. Significant decreases in the logistics cost per dose with more administered vaccines support investment in these forecasting systems. CONCLUSION: Demand forecasting systems have the potential to greatly improve vaccine demand fulfillment and decrease logistics cost per dose when implemented together with storage and transportation increases. Simulation modeling can demonstrate the potential health and economic benefits of supply chain improvements. PMID:27219341
The impact of implementing a demand forecasting system into a low-income country's supply chain.
Mueller, Leslie E; Haidari, Leila A; Wateska, Angela R; Phillips, Roslyn J; Schmitz, Michelle M; Connor, Diana L; Norman, Bryan A; Brown, Shawn T; Welling, Joel S; Lee, Bruce Y
2016-07-12
To evaluate the potential impact and value of applications (e.g. adjusting ordering levels, storage capacity, transportation capacity, distribution frequency) of data from demand forecasting systems implemented in a lower-income country's vaccine supply chain with different levels of population change to urban areas. Using our software, HERMES, we generated a detailed discrete event simulation model of Niger's entire vaccine supply chain, including every refrigerator, freezer, transport, personnel, vaccine, cost, and location. We represented the introduction of a demand forecasting system to adjust vaccine ordering that could be implemented with increasing delivery frequencies and/or additions of cold chain equipment (storage and/or transportation) across the supply chain during varying degrees of population movement. Implementing a demand forecasting system with increased storage and transport frequency increased the number of successfully administered vaccine doses and lowered the logistics cost per dose by up to 34%. Implementing a demand forecasting system without storage/transport increases actually decreased vaccine availability in certain circumstances. The potential maximum gains of a demand forecasting system may only be realized if the system is implemented to augment both the supply chain cold storage and transportation. Implementation may have some impact but, in certain circumstances, may hurt delivery. Therefore, implementation of demand forecasting systems with additional storage and transport may be the better approach. Significant decreases in the logistics cost per dose with more administered vaccines support investment in these forecasting systems. Demand forecasting systems have the potential to greatly improve vaccine demand fulfilment, and decrease logistics cost/dose when implemented with storage and transportation increases. Simulation modeling can demonstrate the potential health and economic benefits of supply chain improvements. Copyright © 2016 Elsevier Ltd. All rights reserved.
RDS-SL VS Communication System
DOE Office of Scientific and Technical Information (OSTI.GOV)
2012-09-12
The RDS-SL VS Communication System is a component of the Radiation Detection System for Strategic, Low-Volume Seaports. Its purpose is to acquire real-time data from radiation portal monitors and cameras, record that data in a database, and make it available to system operators and administrators via a web interface. The software system contains two components: a standalone data acquisition and storage component and an ASP.NET web application that implements the web interface.
NASA Astrophysics Data System (ADS)
Pérez Peña, José Vicente; Baldó, Mane; Acosta, Yarci; Verschueren, Laurent; Thibaud, Kenmognie; Bilivogui, Pépé; Jean-Paul Ngandu, Alain; Beavogui, Maoro
2017-04-01
In the last decade the increasing concern for public health has promoted specific regulations for the transport, storage, transformation and/or elimination of potentially toxic waste. Special attention should focus on the effective management of biomedical waste, due to the environmental and health risks associated with it. The first stage in the effective management of these wastes is the selection of the best sites for facilities for their storage and/or elimination. Best-site selection is accomplished by means of multi-criteria decision analysis (MCDA), which aims to minimize the social and environmental impact and to maximize management efficiency. In this work we present a methodology that uses open-source software and data to analyze the best location for the implementation of a centralized waste management system in a developing country (Guinea, Conakry). We applied an analytical hierarchy process (AHP) using different thematic layers such as land use, soil type, distance to and type of roads, hydrography, and distance to densely populated areas. Land-use data were derived from up-to-date Sentinel-2 remote sensing images, whereas roads and hydrography were obtained from the OpenStreetMap database and later validated against administrative data. We performed the AHP analysis with the aid of the open-source QGIS Geographic Information System. This methodology is very effective for developing countries as it uses open-source software and data for the MCDA analysis, thus reducing costs in these first stages of the integrated analysis.
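For readers unfamiliar with AHP, the numerical core is compact: expert pairwise judgments form a reciprocal comparison matrix, the principal eigenvector of that matrix gives the criterion weights, and Saaty's consistency ratio flags incoherent judgments. A minimal Python/NumPy sketch with a hypothetical three-criterion matrix (the study's actual criteria and judgments are not reproduced here):

import numpy as np

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}  # Saaty's random consistency index

def ahp_weights(pairwise: np.ndarray):
    # Weights = normalized principal eigenvector of the pairwise comparison matrix.
    n = pairwise.shape[0]
    eigvals, eigvecs = np.linalg.eig(pairwise)
    k = np.argmax(eigvals.real)  # index of the principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    ci = (eigvals[k].real - n) / (n - 1)  # consistency index
    cr = ci / RI[n]  # consistency ratio; below 0.1 is conventionally acceptable
    return w, cr

# Hypothetical criteria: land use vs. distance to roads vs. distance to population
A = np.array([[1.0, 3.0, 5.0],
              [1 / 3, 1.0, 2.0],
              [1 / 5, 1 / 2, 1.0]])
weights, cr = ahp_weights(A)
print(weights.round(3), round(cr, 3))

In a GIS workflow such as the one described, each thematic raster is then rescaled to a common suitability score and the weighted layers are summed cell by cell to produce the final site-suitability map.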
Software Tools for Developing and Simulating the NASA LaRC CMF Motion Base
NASA Technical Reports Server (NTRS)
Bryant, Richard B., Jr.; Carrelli, David J.
2006-01-01
The NASA Langley Research Center (LaRC) Cockpit Motion Facility (CMF) motion base has provided many design and analysis challenges. In the process of addressing these challenges, a comprehensive suite of software tools was developed. The software tools development began with a detailed MATLAB/Simulink model of the motion base which was used primarily for safety loads prediction, design of the closed loop compensator and development of the motion base safety systems. A Simulink model of the digital control law, from which a portion of the embedded code is directly generated, was later added to this model to form a closed loop system model. Concurrently, software that runs on a PC was created to display and record motion base parameters. It includes a user interface for controlling time history displays, strip chart displays, data storage, and initializing of function generators used during motion base testing. Finally, a software tool was developed for kinematic analysis and prediction of mechanical clearances for the motion system. These tools work together in an integrated package to support normal operations of the motion base, simulate the end to end operation of the motion base system providing facilities for software-in-the-loop testing, mechanical geometry and sensor data visualizations, and function generator setup and evaluation.
NASA Astrophysics Data System (ADS)
Abdiwe, Ramadan; Haider, Markus
2017-06-01
In this study the thermochemical system using ammonia as an energy storage carrier is investigated, and a transient mathematical model was developed in MATLAB to predict the behavior of the ammonia closed-loop storage system, including but not limited to the ammonia solar reactor and the ammonia synthesis reactor. The MATLAB model contains transient mass and energy balances as well as a chemical equilibrium model for each relevant system component. Because of the importance of the dissociation and formation processes in the system, a Computational Fluid Dynamics (CFD) simulation of the ammonia solar and synthesis reactors has been performed. The commercial CFD package FLUENT is used for the simulation study, and all the important mechanisms for packed bed reactors are taken into account, such as momentum, heat and mass transfer, and chemical reactions. The FLUENT simulation reveals the profiles inside both reactors and compares them with the profiles from the MATLAB code.
DICOM implementation on online tape library storage system
NASA Astrophysics Data System (ADS)
Komo, Darmadi; Dai, Hailei L.; Elghammer, David; Levine, Betty A.; Mun, Seong K.
1998-07-01
The main purpose of this project is to implement a Digital Imaging and Communications in Medicine (DICOM) compliant online tape library system over the Internet. Once finished, the system will be used to store medical exams generated by the U.S. Army Mobile Army Surgical Hospital (MASH) in Tuzla, Bosnia. A modified UC Davis implementation of the DICOM storage class is used for this project. The DICOM storage class user and provider are implemented as the system's interface to the Internet. The DICOM software provides flexible configuration options such as types of modalities and trusted remote DICOM hosts. Metadata is extracted from each exam and indexed in a relational database for query and retrieve purposes. The medical images are stored inside the Wolfcreek-9360 tape library system from StorageTek Corporation. The tape library system has nearline access to more than 1000 tapes. Each tape has a capacity of 800 megabytes, giving total nearline tape storage of around 1 terabyte. The tape library uses the Application Storage Manager (ASM), which provides cost-effective file management, storage, archival, and retrieval services. ASM automatically and transparently copies files from expensive magnetic disk to less expensive nearline tape, and restores the files when they are needed. The ASM also provides a crash recovery tool, which enables an entire file system to be restored in a short time. A graphical user interface (GUI) is used to view the contents of the storage systems. This GUI also allows the user to retrieve stored exams and send them anywhere on the Internet using DICOM protocols. With the integration of the different components of the system, we have implemented a high-capacity online tape library storage system that is flexible and easy to use. Using tape as an alternative storage medium, as opposed to magnetic disk, has great potential for cost savings in terms of dollars per megabyte of storage. As this system matures, the Hospital Information System/Radiology Information System (HIS/RIS) or other components can be developed as interfaces to the outside world, thus widening the usage of the tape library system.
Development of software for geodynamic processes monitoring system
NASA Astrophysics Data System (ADS)
Kabanov, M. M.; Kapustin, S. N.; Gordeev, V. F.; Botygin, I. A.; Tartakovsky, V. A.
2017-11-01
This article justifies the use of the method of logging the Earth's natural pulsed electromagnetic noise for mapping anomalies in the stress-strain state of the Earth's crust. The methods and technologies for gathering, processing and systematizing data collected by ground-based multi-channel geophysical loggers for monitoring the geomagnetic situation have been experimentally tested, and software has been developed. The data are consolidated in network storage and can be accessed without any specialized client software. The article proposes ways to distinguish global and regional small-scale space-time variations of the Earth's natural electromagnetic field. For research purposes, the software provides a way to export data for any given period of time for any loggers and displays measurement data charts for a selected set of stations.
The geospatial modeling interface (GMI) framework for deploying and assessing environmental models
USDA-ARS?s Scientific Manuscript database
Geographical information systems (GIS) software packages have been used for close to three decades as analytical tools in environmental management for geospatial data assembly, processing, storage, and visualization of input data and model output. However, with increasing availability and use of ful...
ERIC Educational Resources Information Center
Burton, Adrian P.
1995-01-01
Discusses accessing online electronic documents at the European Telecommunications Satellite Organization (EUTELSAT). Highlights include off-site paper document storage, the document management system, benefits, the EUTELSAT Standard IBM Access software, implementation, the development process, and future enhancements. (AEF)
Integration of XRootD into the cloud infrastructure for ALICE data analysis
NASA Astrophysics Data System (ADS)
Kompaniets, Mikhail; Shadura, Oksana; Svirin, Pavlo; Yurchenko, Volodymyr; Zarochentsev, Andrey
2015-12-01
Cloud technologies allow easy load balancing between different tasks and projects. From the viewpoint of data analysis in the ALICE experiment, the cloud makes it possible to deploy software using the CERN Virtual Machine (CernVM) and the CernVM File System (CVMFS), to run different (including outdated) versions of software for long-term data preservation, and to dynamically allocate resources for different computing activities, e.g. a grid site, the ALICE Analysis Facility (AAF), and possible usage for local projects or other LHC experiments. We present a cloud solution for Tier-3 sites based on OpenStack and Ceph distributed storage with an integrated XRootD-based storage element (SE). A key feature of the solution is that Ceph is used as a backend for the Cinder Block Storage service of OpenStack and, at the same time, as a storage backend for XRootD, with redundancy and availability of data preserved by the Ceph settings. For faster and easier OpenStack deployment, the Packstack solution was applied, which is based on the Puppet configuration management system. Ceph installation and configuration operations are structured, converted to Puppet manifests describing node configurations, and integrated into Packstack. This solution can be easily deployed, maintained and used even by small groups with limited computing resources and by small organizations, which usually lack IT support. The proposed infrastructure has been tested on two different clouds (SPbSU & BITP) and integrates successfully with the ALICE data analysis model.
PC Software graphics tool for conceptual design of space/planetary electrical power systems
NASA Technical Reports Server (NTRS)
Truong, Long V.
1995-01-01
This paper describes the Decision Support System (DSS), a personal computer software graphics tool for designing conceptual space and/or planetary electrical power systems. By using the DSS, users can obtain desirable system design and operating parameters, such as system weight, electrical distribution efficiency, and bus power. With this tool, a large-scale specific power system was designed in a matter of days. It is an excellent tool to help designers make tradeoffs between system components, hardware architectures, and operation parameters in the early stages of the design cycle. The DSS is a user-friendly, menu-driven tool with online help and a custom graphical user interface. An example design and results are illustrated for a typical space power system with multiple types of power sources, frequencies, energy storage systems, and loads.
Initial Experience With A Prototype Storage System At The University Of North Carolina
NASA Astrophysics Data System (ADS)
Creasy, J. L.; Loendorf, D. D.; Hemminger, B. M.
1986-06-01
A prototype archiving system manufactured by the 3M Corporation has been in place at the University of North Carolina for approximately 12 months. The system was installed as a result of a collaboration between 3M and UNC, with 3M seeking testing of their system, and UNC realizing the need for an archiving system as an essential part of their PACS test-bed facilities. System hardware includes appropriate network and disk interface devices as well as media for both short and long term storage of images and their associated information. The system software includes those procedures necessary to communicate with the network interface elements (NIEs) as well as those necessary to interpret the ACR-NEMA header blocks and to store the images. A subset of the total ACR-NEMA header is parsed and stored in a relational database system. The entire header is stored on disk with the completed study. Interactive programs have been developed that allow radiologists to easily retrieve information about the archived images and to send the full images to a viewing console. Initial experience with the system has consisted primarily of hardware and software debugging. Although the system is ACR-NEMA compatible, further objective and subjective assessment of system performance awaits the connection of compatible consoles and acquisition devices to the network.
NASA Technical Reports Server (NTRS)
Le, Diana; Cooper, David M. (Technical Monitor)
1994-01-01
Just imagine a mass storage system that consists of a machine with 2 CPUs, 1 gigabyte (GB) of memory, 400 GB of disk space, 16,800 cartridge tapes in the automated tape silos, 88,000 tapes located in the vault, and the software to manage the system. This system is designed to be a data repository; it will always have disk space to store all the incoming data. Currently 9.14 GB of new data per day enters the system, with this rate doubling each year. To ensure there is always disk space available for new data, the system has to move data from the expensive disk to a much less expensive medium such as 3480 cartridge tapes. Once the data is archived to tape, it should be able to move back to disk when someone wants to access it, and the data movement should be transparent to the user. Now imagine all the tasks that a system administrator must perform to keep this system running 24 hours a day, 7 days a week. Since the filesystem maintains the illusion of unlimited disk space, data that comes to the system must be moved to tape in an efficient manner. This paper will describe the mass storage system running at the Numerical Aerodynamic Simulation (NAS) facility at NASA Ames Research Center in both its software and hardware aspects, and then describe all of the tasks the system administrator has to perform on this system.
NASA Astrophysics Data System (ADS)
Zhao, Liang; Xing, Yuming; Liu, Xin; Rui, Zhoufeng
2018-01-01
The use of thermal energy storage systems can effectively reduce energy consumption and improve system performance. One of the promising approaches for thermal energy storage is the application of phase change materials (PCMs). In this study, a two-dimensional numerical model is presented to investigate heat transfer enhancement during the melting/solidification process in a triplex tube heat exchanger (TTHX), using the FLUENT software. Both thermal conduction and natural convection are taken into account in the simulation of the melting/solidification process. With the volume fraction of fins kept constant, the influence of the proposed fin arrangement on the temporal profile of liquid fraction over the melting process is studied and reported. By rotating the unit through different angles, the simulation shows that the melting time varies little, which means that sensitivity to installation error is reduced by the selected fin arrangement. Investigation of the solidification process shows that the proposed fin arrangement can also effectively reduce the solidification time of the PCM. To summarize, this work presents a shape optimization for the improvement of a thermal energy storage system considering both the thermal energy charging and discharging processes.
R&D100: Lightweight Distributed Metric Service
Gentile, Ann; Brandt, Jim; Tucker, Tom; Showerman, Mike
2018-06-12
On today's High Performance Computing platforms, the complexity of applications and configurations makes efficient use of resources difficult. The Lightweight Distributed Metric Service (LDMS) is monitoring software developed by Sandia National Laboratories to provide detailed metrics of system performance. LDMS provides collection, transport, and storage of data from extreme-scale systems at fidelities and timescales to provide understanding of application and system performance with no statistically significant impact on application performance.
R&D100: Lightweight Distributed Metric Service
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gentile, Ann; Brandt, Jim; Tucker, Tom
2015-11-19
On today's High Performance Computing platforms, the complexity of applications and configurations makes efficient use of resources difficult. The Lightweight Distributed Metric Service (LDMS) is monitoring software developed by Sandia National Laboratories to provide detailed metrics of system performance. LDMS provides collection, transport, and storage of data from extreme-scale systems at fidelities and timescales to provide understanding of application and system performance with no statistically significant impact on application performance.
A Disk-Based System for Producing and Distributing Science Products from MODIS
NASA Technical Reports Server (NTRS)
Masuoka, Edward; Wolfe, Robert; Sinno, Scott; Ye Gang; Teague, Michael
2007-01-01
Since beginning operations in 1999, the MODIS Adaptive Processing System (MODAPS) has evolved to take advantage of trends in information technology, such as the falling cost of computing cycles and disk storage and the availability of high quality open-source software (Linux, Apache and Perl), to achieve substantial gains in processing and distribution capacity and throughput while driving down the cost of system operations.
Mass Storage Performance Information System
NASA Technical Reports Server (NTRS)
Scheuermann, Peter
2000-01-01
The purpose of this task is to develop a data warehouse to enable system administrators and their managers to gather information by querying the data logs of the MDSDS. Currently detailed logs capture the activity of the MDSDS internal to the different systems. The elements to be included in the data warehouse are requirements analysis, data cleansing, database design, database population, hardware/software acquisition, data transformation, query and report generation, and data mining.
Storage Media for Microcomputers.
ERIC Educational Resources Information Center
Trautman, Rodes
1983-01-01
Reviews computer storage devices designed to provide additional memory for microcomputers--chips, floppy disks, hard disks, optical disks--and describes how secondary storage is used (file transfer, formatting, ingredients of incompatibility); disk/controller/software triplet; magnetic tape backup; storage volatility; disk emulator; and…
Introducing a New Software for Geodetic Analysis
NASA Astrophysics Data System (ADS)
Hjelle, G. A.; Dähnn, M.; Fausk, I.; Kirkvik, A. S.; Mysen, E.
2016-12-01
At the Norwegian Mapping Authority, we are currently developing Where, a new software for geodetic analysis. Where is built on our experiences with the Geosat software, and will be able to analyse and combine data from VLBI, SLR, GNSS and DORIS. The software is mainly written in Python which has proved very fruitful. The code is quick to write and the architecture is easily extendable and maintainable. The Python community provides a rich eco-system of tools for doing data-analysis, including effective data storage and powerful visualization. Python interfaces well with other languages so that we can easily reuse existing, well-tested code like the SOFA and IERS libraries. This presentation will show some of the current capabilities of Where, including benchmarks against other software packages. In addition we will report on some simple investigations we have done using the software, and outline our plans for further progress.
A versatile nondestructive evaluation imaging workstation
NASA Technical Reports Server (NTRS)
Chern, E. James; Butler, David W.
1994-01-01
Ultrasonic C-scan and eddy current imaging systems are pointwise evaluation systems that rely on a mechanical scanner to physically maneuver a probe relative to the specimen, point by point, in order to acquire data and generate images. Since ultrasonic C-scan and eddy current imaging systems are based on the same mechanical scanning mechanisms, the two systems can be combined on the same PC platform with a common mechanical manipulation subsystem and integrated data acquisition software. Based on this concept, we have developed an IBM PC-based combined ultrasonic C-scan and eddy current imaging system. The system is modularized and provides capacity for future hardware and software expansions. Advantages associated with the combined system are: (1) eliminated duplication of computer and mechanical hardware, (2) unified data acquisition, processing and storage software, (3) reduced setup time for repetitious ultrasonic and eddy current scans, and (4) improved system efficiency. The concept can be adapted to many engineering systems by integrating related PC-based instruments into one multipurpose workstation for dispensing, machining, packaging, sorting, and other industrial applications.
A versatile nondestructive evaluation imaging workstation
NASA Astrophysics Data System (ADS)
Chern, E. James; Butler, David W.
1994-02-01
Ultrasonic C-scan and eddy current imaging systems are pointwise evaluation systems that rely on a mechanical scanner to physically maneuver a probe relative to the specimen, point by point, in order to acquire data and generate images. Since ultrasonic C-scan and eddy current imaging systems are based on the same mechanical scanning mechanisms, the two systems can be combined on the same PC platform with a common mechanical manipulation subsystem and integrated data acquisition software. Based on this concept, we have developed an IBM PC-based combined ultrasonic C-scan and eddy current imaging system. The system is modularized and provides capacity for future hardware and software expansions. Advantages associated with the combined system are: (1) eliminated duplication of computer and mechanical hardware, (2) unified data acquisition, processing and storage software, (3) reduced setup time for repetitious ultrasonic and eddy current scans, and (4) improved system efficiency. The concept can be adapted to many engineering systems by integrating related PC-based instruments into one multipurpose workstation for dispensing, machining, packaging, sorting, and other industrial applications.
Implementation of a data management software system for SSME test history data
NASA Technical Reports Server (NTRS)
Abernethy, Kenneth
1986-01-01
The implementation of a software system for managing Space Shuttle Main Engine (SSME) test/flight historical data is presented. The software system uses the database management system RIM7 for primary data storage and routine data management, but includes several FORTRAN programs, described here, which provide customized access to the RIM7 database. The consolidation, modification, and transfer of data from the database THIST to the RIM7 database THISRM are discussed. The RIM7 utility modules for generating some standard reports from THISRM and performing some routine updating and maintenance are briefly described. The FORTRAN accessing programs described include programs for initial loading of large data sets into the database, capturing data from files for database inclusion, and producing specialized statistical reports which cannot be provided by the RIM7 report generator utility. An expert system tutorial, constructed using the expert system shell product INSIGHT2, is described. Finally, a potential expert system, which would analyze data in the database, is outlined. This system could also use INSIGHT2 and would take advantage of RIM7's compatibility with the microcomputer database system RBase 5000.
NASA Technical Reports Server (NTRS)
Vos, R. G.; Beste, D. L.; Gregg, J.
1984-01-01
The User Manual for the Integrated Analysis Capability (IAC) Level 1 system is presented. The IAC system currently supports the thermal, structures, controls and system dynamics technologies, and its development is influenced by the requirements for design/analysis of large space systems. The system has many features which make it applicable to general problems in engineering, and to management of data and software. Information includes basic IAC operation, executive commands, modules, solution paths, data organization and storage, IAC utilities, and module implementation.
USDA-ARS?s Scientific Manuscript database
Geographical information systems (GIS) software packages have been used for nearly three decades as analytical tools in natural resource management for geospatial data assembly, processing, storage, and visualization of input data and model output. However, with increasing availability and use of fu...
Facilitating Internet-Scale Code Retrieval
ERIC Educational Resources Information Center
Bajracharya, Sushil Krishna
2010-01-01
Internet-Scale code retrieval deals with the representation, storage, and access of relevant source code from a large amount of source code available on the Internet. Internet-Scale code retrieval systems support common emerging practices among software developers related to finding and reusing source code. In this dissertation we focus on some…
Comprehensive Benefit Platforms to Simplify Complex HR Processes
ERIC Educational Resources Information Center
Ehrsam, Hank
2012-01-01
Paying for employee turnover costs, data storage, and multiple layers of benefits can be difficult for fiscally constrained institutions, especially as budget cuts and finance-limiting legislation abound in school districts across the country. Many traditional paper-based systems have been replaced with automated, software-based services, helping…
Painless File Extraction: The A(rc)--Z(oo) of Internet Archive Formats.
ERIC Educational Resources Information Center
Simmonds, Curtis
1993-01-01
Discusses extraction programs needed to postprocess software downloaded from the Internet that has been archived and compressed for the purposes of storage and file transfer. Archiving formats for DOS, Macintosh, and UNIX operating systems are described; and cross-platform compression utilities are explained. (LRW)
Yakami, Masahiro; Ishizu, Koichi; Kubo, Takeshi; Okada, Tomohisa; Togashi, Kaori
2011-04-01
Thin-slice CT data, useful for clinical diagnosis and research, is now widely available but is typically discarded in many institutions after a short period of time due to data storage capacity limitations. We designed and built a low-cost, high-capacity Digital Imaging and Communications in Medicine (DICOM) storage system able to store thin-slice image data for years, using off-the-shelf consumer hardware components, such as a Macintosh computer, a Windows PC, and network-attached storage units. "Ordinary" hierarchical file systems, instead of a centralized data management system such as a relational database, were adopted to manage patient DICOM files by arranging them in directories, enabling quick and easy access to the DICOM files of each study by following the directory trees with Windows Explorer via study date and patient ID. The software used for this system consisted of the open-source OsiriX and additional programs we developed ourselves, both freely available via the Internet. The initial cost of this system was about $3,600, with an incremental storage cost of about $900 per terabyte (TB). This system has been running since 7 February 2008, with the data stored increasing at a rate of about 1.3 TB per month; the total data stored was 21.3 TB on 23 June 2009. The maintenance workload was found to be about 30 to 60 minutes once every 2 weeks. In conclusion, this newly developed DICOM storage system is useful for research due to its cost-effectiveness, enormous capacity, high scalability, sufficient reliability, and easy data access.
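A minimal sketch of the directory-tree filing described above, reading only the DICOM header to route each file under study date and patient ID, could look like the following Python (the open-source pydicom library stands in for the authors' OsiriX-based tooling, and the exact directory naming is an assumption):

import shutil
from pathlib import Path
import pydicom

def archive_dicom_file(src: Path, archive_root: Path) -> Path:
    # File one DICOM object under <root>/<study date>/<patient ID>/ so studies
    # can be browsed by date and patient with an ordinary file explorer.
    ds = pydicom.dcmread(src, stop_before_pixels=True)  # header alone suffices
    study_date = str(ds.get("StudyDate", "") or "unknown-date")
    patient_id = str(ds.get("PatientID", "") or "unknown-patient")
    dest_dir = archive_root / study_date / patient_id
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / src.name
    shutil.copy2(src, dest)  # keep file timestamps with the copy
    return dest

Because the hierarchy itself encodes the two lookup keys, no database server needs to stay consistent with the files, which is part of what keeps the maintenance workload low.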
Novel optimization technique of isolated microgrid with hydrogen energy storage.
Beshr, Eman Hassan; Abdelghany, Hazem; Eteiba, Mahmoud
2018-01-01
This paper presents a novel optimization technique for energy management studies of an isolated microgrid. The system is supplied by various Distributed Energy Resources (DERs): a Diesel Generator (DG), a Wind Turbine Generator (WTG), and Photovoltaic (PV) arrays, supported by a fuel cell/electrolyzer hydrogen storage system for short-term storage. Multi-objective optimization through a non-dominated sorting genetic algorithm is used to meet the load requirements under the given constraints, and a novel multi-objective flower pollination algorithm is utilized to check the results. The pros and cons of the two optimization techniques are compared and evaluated. An isolated microgrid is modelled using the MATLAB software package; dispatch of active/reactive power and optimal load flow analysis with slack bus selection are carried out to minimize fuel cost and line losses under realistic constraints. The performance of the system is studied and analyzed during both summer and winter conditions, and three case studies are presented for each condition. The modified IEEE 15-bus system is used to validate the proposed algorithm.
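The ranking step at the heart of the non-dominated sorting genetic algorithm is the Pareto-dominance test. A minimal Python sketch for a two-objective minimization, using hypothetical (fuel cost, line losses) values rather than the paper's results:

def dominates(a, b):
    # True if objective vector a Pareto-dominates b (all objectives minimized).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    # The non-dominated subset: the first front of NSGA-II-style sorting.
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Hypothetical (fuel cost in $, line losses in kW) for four candidate dispatches
solutions = [(120.0, 4.1), (135.0, 3.2), (118.0, 5.0), (140.0, 3.9)]
print(pareto_front(solutions))  # (140.0, 3.9) is dominated by (135.0, 3.2)

Repeatedly peeling off such fronts, plus a crowding-distance tie-break, is what lets the genetic algorithm trade off cost against losses without collapsing them into a single weighted objective.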
Novel optimization technique of isolated microgrid with hydrogen energy storage
Abdelghany, Hazem; Eteiba, Mahmoud
2018-01-01
This paper presents a novel optimization technique for energy management studies of an isolated microgrid. The system is supplied by various Distributed Energy Resources (DERs): a Diesel Generator (DG), a Wind Turbine Generator (WTG), and Photovoltaic (PV) arrays, supported by a fuel cell/electrolyzer hydrogen storage system for short-term storage. Multi-objective optimization through a non-dominated sorting genetic algorithm is used to meet the load requirements under the given constraints, and a novel multi-objective flower pollination algorithm is utilized to check the results. The pros and cons of the two optimization techniques are compared and evaluated. An isolated microgrid is modelled using the MATLAB software package; dispatch of active/reactive power and optimal load flow analysis with slack bus selection are carried out to minimize fuel cost and line losses under realistic constraints. The performance of the system is studied and analyzed during both summer and winter conditions, and three case studies are presented for each condition. The modified IEEE 15-bus system is used to validate the proposed algorithm. PMID:29466433
NASA Astrophysics Data System (ADS)
Cristóbal-Hornillos, D.; Varela, J.; Ederoclite, A.; Vázquez Ramió, H.; López-Sainz, A.; Hernández-Fuertes, J.; Civera, T.; Muniesa, D.; Moles, M.; Cenarro, A. J.; Marín-Franch, A.; Yanes-Díaz, A.
2015-05-01
The Observatorio Astrofísico de Javalambre consists of two main telescopes: JST/T250, a 2.5 m telescope with a FoV of 3 deg, and JAST/T80, an 83 cm telescope with a 2 deg FoV. JST/T250 will be devoted to completing the Javalambre-PAU Astronomical Survey (J-PAS), a photometric survey with a system of 54 narrow-band plus 3 broad-band filters covering an area of 8500 deg². The JAST/T80 will perform the J-PLUS survey, covering the same area in a system of 12 filters. This contribution presents the software and hardware architecture designed to store and process the data. The processing pipeline runs daily and is devoted to correcting the instrumental signature on the science images, performing astrometric and photometric calibration, and computing individual image catalogs. In a second stage, the pipeline performs the combination of the tile mosaics and the computation of final catalogs. The catalogs are ingested into a scientific database to be provided to the community. The processing software is connected to a management database that stores persistent information about the pipeline operations performed on each frame. The processing pipeline is executed in a computing cluster under a batch queuing system. The storage system combines disk and tape technologies: the disk storage system has the capacity to store the data accessed by the pipeline, while the tape library stores and archives the raw data and earlier data releases with lower access frequency.
NASA Astrophysics Data System (ADS)
Strotov, Valery V.; Taganov, Alexander I.; Konkin, Yuriy V.; Kolesenkov, Aleksandr N.
2017-10-01
Processing and analysis of Earth remote sensing data on board an ultra-small spacecraft is a relevant task, considering the significant energy expenditure of data transfer and the low productivity of onboard computers. This raises the issue of effective and reliable storage of the general information flow obtained from the onboard data collection systems, including Earth remote sensing data, in a specialized database. The paper considers the peculiarities of database management system operation with a multilevel memory structure. For data storage, a format has been developed that describes the physical structure of the database and contains the parameters required for loading information. Such a structure reduces the memory occupied by the database because key values need not be stored separately. The paper presents the architecture of a relational database management system designed for embedding into onboard ultra-small spacecraft software. A database for storage of different information, including Earth remote sensing data, can be developed with this database management system for subsequent processing. The suggested architecture places low demands on the computing power and memory resources available on board an ultra-small spacecraft. Data integrity is ensured during input and change of the structured information.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Randy R.; Bass, Robert B.; Kouzes, Richard T.
2003-01-20
This paper provides a brief overview of the implementation of the Advanced Encryption Standard (AES) as a hash function for confirming the identity of software resident on a computer system. The PNNL Software Authentication team chose to use a hash function to confirm software identity on a system for situations where: (1) there is limited time to perform the confirmation and (2) access to the system is restricted to keyboard or thumbwheel input and output can only be displayed on a monitor. PNNL reviewed three popular algorithms: the Secure Hash Algorithm-1 (SHA-1), the Message Digest-5 (MD-5), and the Advanced Encryption Standard (AES), and selected the AES to incorporate in the software confirmation tool we developed. This paper gives a brief overview of SHA-1, MD-5, and the AES and cites references for further detail. It then explains the overall processing steps of the AES used to reduce a large amount of generic data (the plaintext, such as is present in memory and other data storage media in a computer system) to a small amount of data (the hash digest), which is a mathematically unique representation, or signature, of the former that can be displayed on a computer's monitor. The paper starts with a simple definition and example to illustrate the use of a hash function. It concludes with a description of how the software confirmation tool uses the hash function to confirm the identity of software on a computer system.
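The abstract does not spell out the chaining mode used, but a standard, well-studied way to build a one-way compression function from a block cipher such as AES is the Davies-Meyer construction. The Python sketch below (using the pycryptodome package) illustrates that generic construction only; it is not the PNNL confirmation tool:

from Crypto.Cipher import AES  # pycryptodome

def aes_davies_meyer(data: bytes) -> bytes:
    # Davies-Meyer over AES-128: each 16-byte message block keys the cipher,
    # and the running state is encrypted and then XORed with itself.
    padded = data + b"\x80" + b"\x00" * ((15 - len(data)) % 16)  # pad to 16n bytes
    state = bytes(16)  # fixed all-zero initial value
    for i in range(0, len(padded), 16):
        block = padded[i:i + 16]  # message block serves as the AES key
        enc = AES.new(block, AES.MODE_ECB).encrypt(state)
        state = bytes(a ^ b for a, b in zip(enc, state))
    return state  # 16-byte digest, small enough to display on a monitor

print(aes_davies_meyer(b"software image contents").hex())

The XOR feed-forward is what makes each step one-way: without it, the per-block encryption would be invertible by anyone who knows the message block.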
ERIC Educational Resources Information Center
Sieverts, Eric G.; And Others
1993-01-01
Reports on tests evaluating nine microcomputer software packages designed for information storage and retrieval: BRS-Search, dtSearch, InfoBank, Micro-OPC, Q&A, STN-PFS, Strix, TINman, and ZYindex. Tables and narrative evaluations detail results related to security, hardware, user features, search capability, indexing, input, maintenance of files,…
Waste Management Information System (WMIS) User Guide
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. E. Broz
2008-12-22
This document provides the user of the Waste Management Information System (WMIS) with instructions on how to use the WMIS software. WMIS allows users to initiate, track, and close waste packages. The modular design supports integration and utilization of data through the various stages of waste management. The phases of the waste management work process include generation, designation, packaging, container management, procurement, storage, treatment, transportation, and disposal.
Rubegni, Pietro; Nami, Niccolò; Poggiali, Sara; Tataranno, Domenico; Fimiani, M
2009-05-01
Because the skin is the only organ completely accessible to visual examination, digital technology has attracted the attention of dermatologists for documenting, monitoring, measuring and classifying morphological manifestations. Our aims were to describe a digital image management system dedicated to dermatological health care environments and to compare it with other existing software for digital image storage. We designed a reliable hardware structure that can ensure future scaling, because storage needs tend to grow exponentially. For the software, we chose a client-web server application based on a relational database and with a 'minimalist' user interface. We developed software with a ready-made, adaptable index of skin pathologies. It facilitates classification by pathology, patient and visit, with an advanced search option allowing access to all images according to personalized criteria. The software also offers the possibility of comparing two or more digital images (follow-up). The fact that archives of years of digital photos acquired and saved on PCs can easily be imported into the program distinguishes it from others on the market; this option is fundamental for bringing all the photos taken over years of practice into the program without entering them one by one. The program is available to any user connected to the local Intranet, and the system may be made directly available from the Internet in the future. All clinics and surgeries, especially those that rely on digital images, are obliged to keep up with technological advances. It is therefore hoped that our project will become a model for medical structures intending to rationalise digital and other data according to statutory requirements.
Advanced program development management software system. Software description and user's manual
NASA Technical Reports Server (NTRS)
1990-01-01
The objectives of this project were to apply emerging techniques and tools from the computer science discipline of paperless management to the activities of the Space Transportation and Exploration Office (PT01) in Marshall Space Flight Center (MSFC) Program Development, thereby enhancing the productivity of the workforce, the quality of the data products, and the collection, dissemination, and storage of information. The approach used to accomplish the objectives emphasized utilizing finished-form (off-the-shelf) software products to the greatest extent possible without impacting the performance of the end product, pursuing developments when necessary in a rapid prototyping environment to provide a mechanism for frequent feedback from the users, and providing a full range of user support functions during the development process to promote testing of the software.
Cloud Computing for radiologists.
Kharat, Amit T; Safvi, Amjad; Thind, Ss; Singh, Amarjit
2012-07-01
Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as computer software and hardware on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It frees radiology from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues which need to be addressed to ensure the success of Cloud computing in the future.
Computer Administering of the Psychological Investigations: Set-Relational Representation
NASA Astrophysics Data System (ADS)
Yordzhev, Krasimir
Computer administering of a psychological investigation is the computer representation of the entire procedure of psychological assessments - test construction, test implementation, results evaluation, storage and maintenance of the developed database, its statistical processing, analysis and interpretation. A mathematical description of psychological assessment with the aid of personality tests is discussed in this article. Set theory and relational algebra are used in this description. A relational model of the data needed to design a computer system for automation of certain psychological assessments is given. Some finite sets and relations on them, which are necessary for creating a personality psychological test, are described. The described model could be used to develop real software for computer administering of any psychological test, with full automation of the whole process: test construction, test implementation, result evaluation, storage of the developed database, statistical processing, analysis and interpretation. A software project for computer administering personality psychological tests is suggested.
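An illustrative toy example of the set-relational idea, with invented question and answer sets; a response is modelled as a relation on QUESTIONS x ANSWERS and evaluated to a score:

```python
# Invented finite sets: the "universe" of a tiny personality test.
QUESTIONS = {"q1", "q2", "q3"}
ANSWERS = {"yes", "no"}

# A test response is a relation R on QUESTIONS x ANSWERS (here a function:
# exactly one answer per question).
response = {("q1", "yes"), ("q2", "no"), ("q3", "yes")}
assert all(q in QUESTIONS and a in ANSWERS for q, a in response)

# Result evaluation: map the relation to a scale score (invented rule).
score = sum(1 for _, a in response if a == "yes")
print(f"scale score: {score} of {len(QUESTIONS)}")
```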
Cloud Computing for radiologists
Kharat, Amit T; Safvi, Amjad; Thind, SS; Singh, Amarjit
2012-01-01
Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as computer software and hardware on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It frees radiology from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues which need to be addressed to ensure the success of Cloud computing in the future. PMID:23599560
Safety and Suitability for Service Assessment Testing of Large Caliber Ammunition Greater Than 40MM
2013-07-02
[Recoverable fragments of an extracted table of contents and body text: 9.2 Insensitive Munitions Assessment; 9.3 Munition Software System; environments the munition will encounter during storage and transportation; 3.12 Weapon System: a weapon and those components required for its operation, comprising the aggregate of ...; 6.9: provide a positive indexing system on the cartridge case to ensure proper orientation of the case when it is loaded into the weapon.]
Control and Information Systems for the National Ignition Facility
Brunton, Gordon; Casey, Allan; Christensen, Marvin; ...
2017-03-23
Orchestration of every National Ignition Facility (NIF) shot cycle is managed by the Integrated Computer Control System (ICCS), which uses a scalable software architecture running code on more than 1950 front-end processors, embedded controllers, and supervisory servers. The ICCS operates laser and industrial control hardware containing 66 000 control and monitor points to ensure that all of NIF’s laser beams arrive at the target within 30 ps of each other and are aligned to a pointing accuracy of less than 50 μm root-mean-square, while ensuring that a host of diagnostic instruments record data in a few billionths of a second. NIF’s automated control subsystems are built from a common object-oriented software framework that distributes the software across the computer network and achieves interoperation between different software languages and target architectures. A large suite of business and scientific software tools supports experimental planning, experimental setup, facility configuration, and post-shot analysis. Standard business services using open-source software, commercial workflow tools, and database and messaging technologies have been developed. An information technology infrastructure consisting of servers, network devices, and storage provides the foundation for these systems. This work is an overview of the control and information systems used to support a wide variety of experiments during the National Ignition Campaign.
Control and Information Systems for the National Ignition Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brunton, Gordon; Casey, Allan; Christensen, Marvin
Orchestration of every National Ignition Facility (NIF) shot cycle is managed by the Integrated Computer Control System (ICCS), which uses a scalable software architecture running code on more than 1950 front-end processors, embedded controllers, and supervisory servers. The ICCS operates laser and industrial control hardware containing 66 000 control and monitor points to ensure that all of NIF’s laser beams arrive at the target within 30 ps of each other and are aligned to a pointing accuracy of less than 50 μm root-mean-square, while ensuring that a host of diagnostic instruments record data in a few billionths of a second. NIF’s automated control subsystems are built from a common object-oriented software framework that distributes the software across the computer network and achieves interoperation between different software languages and target architectures. A large suite of business and scientific software tools supports experimental planning, experimental setup, facility configuration, and post-shot analysis. Standard business services using open-source software, commercial workflow tools, and database and messaging technologies have been developed. An information technology infrastructure consisting of servers, network devices, and storage provides the foundation for these systems. This work is an overview of the control and information systems used to support a wide variety of experiments during the National Ignition Campaign.
Multiple-function multi-input/multi-output digital control and on-line analysis
NASA Technical Reports Server (NTRS)
Hoadley, Sherwood T.; Wieseman, Carol D.; Mcgraw, Sandra M.
1992-01-01
The design and capabilities of two digital controller systems for aeroelastic wind-tunnel models are described. The first allowed control of flutter while performing roll maneuvers with wing load control as well as coordinating the acquisition, storage, and transfer of data for on-line analysis. This system, which employs several digital signal multi-processor (DSP) boards programmed in high-level software languages, is housed in a SUN Workstation environment. A second DCS provides a measure of wind-tunnel safety by functioning as a trip system during testing in the case of high model dynamic response or in case the first DCS fails. The second DCS uses National Instruments LabVIEW Software and Hardware within a Macintosh environment.
The Oklahoma Geographic Information Retrieval System
NASA Technical Reports Server (NTRS)
Blanchard, W. A.
1982-01-01
The Oklahoma Geographic Information Retrieval System (OGIRS) is a highly interactive data entry, storage, manipulation, and display software system for use with geographically referenced data. Although originally developed for a project concerned with coal strip mine reclamation, OGIRS is capable of handling any geographically referenced data for a variety of natural resource management applications. A special effort has been made to integrate remotely sensed data into the information system. The timeliness and synoptic coverage of satellite data are particularly useful attributes for inclusion into the geographic information system.
The MK VI - A second generation attitude control system
NASA Astrophysics Data System (ADS)
Meredith, P. J.
1986-10-01
The MK VI, a new multipurpose attitude control system for the exoatmospheric attitude control of sounding rocket payloads, is described. The system employs reprogrammable microcomputer memory for storage of basic control logic and for specific mission event control data. The paper includes descriptions of MK VI specifications and configuration; sensor characteristics; the electronic, analog, and digital sections; the pneumatic system; ground equipment; the system operation; and software. A review of the MK VI performance for the Comet Halley flight is presented. Block diagrams are included.
Medical-Information-Management System
NASA Technical Reports Server (NTRS)
Alterescu, Sidney; Friedman, Carl A.; Frankowski, James W.
1989-01-01
Medical Information Management System (MIMS) computer program interactive, general-purpose software system for storage and retrieval of information. Offers immediate assistance where manipulation of large data bases required. User quickly and efficiently extracts, displays, and analyzes data. Used in management of medical data and handling all aspects of data related to care of patients. Other applications include management of data on occupational safety in public and private sectors, handling judicial information, systemizing purchasing and procurement systems, and analyses of cost structures of organizations. Written in Microsoft FORTRAN 77.
WinTRAX: A raytracing software package for the design of multipole focusing systems
NASA Astrophysics Data System (ADS)
Grime, G. W.
2013-07-01
The software package TRAX was a simulation tool for modelling the path of charged particles through linear cylindrical multipole fields described by analytical expressions and was a development of the earlier OXRAY program (Grime and Watt, 1983; Grime et al., 1982) [1,2]. In a 2005 comparison of raytracing software packages (Incerti et al., 2005) [3], TRAX/OXRAY was compared with Geant4 and Zgoubi and was found to give close agreement with the more modern codes. TRAX was a text-based program which was only available for operation in a now rare VMS workstation environment, so a new program, WinTRAX, has been developed for the Windows operating system. This implements the same basic computing strategy as TRAX, and key sections of the code are direct translations from FORTRAN to C++, but the Windows environment is exploited to make an intuitive graphical user interface which simplifies and enhances many operations including system definition and storage, optimisation, beam simulation (including with misaligned elements) and aberration coefficient determination. This paper describes the program and presents comparisons with other software and real installations.
Lemaitre, D; Sauquet, D; Fofol, I; Tanguy, L; Jean, F C; Degoulet, P
1995-01-01
Legacy systems are crucial for organizations since they support key functionalities. But they become obsolete with aging and the emergence of new techniques. Managing their evolution is a key issue in software engineering. This paper presents a strategy that has been developed at Broussais University Hospital in Paris to make a legacy system devoted to the management of health care units evolve toward new, up-to-date software. A two-phase evolution pathway is described. The first phase consists of separating the interface from the data storage and application control and of using a communication channel between the individualized components. The second phase proposes to use an object-oriented DBMS in place of the homegrown system. An application example for the management of hypertensive patients is described.
Sensible heat receiver for solar dynamic space power system
NASA Astrophysics Data System (ADS)
Perez-Davis, Marla E.; Gaier, James R.; Petrefski, Chris
A sensible heat receiver is considered which uses a vapor grown carbon fiber-carbon (VGCF/C) composite as the thermal storage medium and which was designed for a 7-kW Brayton engine. This heat receiver stores the required energy to power the system during eclipse in the VGCF/C composite. The heat receiver thermal analysis was conducted through the Systems Improved Numerical Differencing Analyzer and Fluid Integrator (SINDA) software package. The sensible heat receiver compares well with other latent and advanced sensible heat receivers analyzed in other studies, while avoiding the problems associated with latent heat storage salts and liquid metal heat pipes. The concept also satisfies the design requirements for a 7-kW Brayton engine system. The weight and size of the system can be optimized by changes in geometry and technology advances for this new material.
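A back-of-envelope check of the sensible-heat sizing implied above, from Q = m·c·ΔT; all numbers (eclipse length, specific heat, temperature swing) are assumptions for illustration, not values from the study:

```python
P_ECLIPSE_W = 7_000.0    # Brayton engine output to sustain (per the abstract)
T_ECLIPSE_S = 36 * 60.0  # assumed low-Earth-orbit eclipse length, ~36 min
C_P = 1_500.0            # assumed J/(kg*K) for the carbon composite at temperature
DELTA_T = 300.0          # assumed allowable temperature swing, K

# Energy drawn during eclipse (ignoring engine conversion efficiency, which
# would raise the thermal requirement).
energy_J = P_ECLIPSE_W * T_ECLIPSE_S
mass_kg = energy_J / (C_P * DELTA_T)  # from Q = m * c * dT
print(f"required energy: {energy_J / 1e6:.1f} MJ, storage mass: {mass_kg:.0f} kg")
```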
Design of energy storage system to improve inertial response for large scale PV generation
Wang, Xiaoyu; Yue, Meng
2016-07-01
With high-penetration levels of renewable generating sources being integrated into the existing electric power grid, conventional generators are being replaced and grid inertial response is deteriorating. This technical challenge is more severe with photovoltaic (PV) generation than with wind generation because PV generation systems cannot provide inertial response unless special countermeasures are adopted. To enhance the inertial response, this paper proposes to use battery energy storage systems (BESS) as the remediation approach to accommodate the degrading inertial response when high penetrations of PV generation are integrated into the existing power grid. A sample power system was adopted and simulated using PSS/E software. Here, impacts of different penetration levels of PV generation on the system inertial response were investigated and then BESS was incorporated to improve the frequency dynamics.
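A toy swing-equation sketch of the inertial-response argument (not a PSS/E model): a sudden generation deficit drives frequency down at a rate set by the inertia constant H, and a fast proportional BESS injection limits the dip. All parameters are invented:

```python
F0 = 60.0      # nominal frequency, Hz
H = 3.0        # system inertia constant, s (falls as PV displaces machines)
DP = -0.1      # sudden generation deficit, per unit on the system base
DT = 0.01      # integration step, s

def freq_nadir(bess_gain: float) -> float:
    f = nadir = F0
    for _ in range(int(10 / DT)):            # simulate 10 seconds
        p_bess = bess_gain * (F0 - f)        # proportional frequency support
        dfdt = (DP + p_bess) * F0 / (2 * H)  # swing equation, per-unit power
        f += dfdt * DT
        nadir = min(nadir, f)
    return nadir

print("nadir without BESS:", round(freq_nadir(0.0), 2), "Hz")
print("nadir with BESS:   ", round(freq_nadir(0.5), 2), "Hz")
```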
Sensible heat receiver for solar dynamic space power system
NASA Technical Reports Server (NTRS)
Perez-Davis, Marla E.; Gaier, James R.; Petrefski, Chris
1991-01-01
A sensible heat receiver considered in this study uses a vapor grown carbon fiber-carbon (VGCF/C) composite as the thermal storage media and was designed for a 7 kW Brayton engine. The proposed heat receiver stores the required energy to power the system during eclipse in the VGCF/C composite. The heat receiver thermal analysis was conducted through the Systems Improved Numerical Differencing Analyzer and Fluid Integrator (SINDA) software package. The sensible heat receiver compares well with other latent and advanced sensible heat receivers analyzed in other studies while avoiding the problems associated with latent heat storage salts and liquid metal heat pipes. The concept also satisfies the design requirements for a 7 kW Brayton engine system. The weight and size of the system can be optimized by changes in geometry and technology advances for this new material.
Sensible heat receiver for solar dynamic space power system
NASA Technical Reports Server (NTRS)
Perez-Davis, Marla E.; Gaier, James R.; Petrefski, Chris
1991-01-01
A sensible heat receiver is considered which uses a vapor grown carbon fiber-carbon (VGCF/C) composite as the thermal storage medium and which was designed for a 7-kW Brayton engine. This heat receiver stores the required energy to power the system during eclipse in the VGCF/C composite. The heat receiver thermal analysis was conducted through the Systems Improved Numerical Differencing Analyzer and Fluid Integrator (SINDA) software package. The sensible heat receiver compares well with other latent and advanced sensible heat receivers analyzed in other studies, while avoiding the problems associated with latent heat storage salts and liquid metal heat pipes. The concept also satisfies the design requirements for a 7-kW Brayton engine system. The weight and size of the system can be optimized by changes in geometry and technology advances for this new material.
Evaluation of Optical Disk Jukebox Software.
ERIC Educational Resources Information Center
Ranade, Sanjay; Yee, Fonald
1989-01-01
Discusses software that is used to drive and access optical disk jukeboxes, which are used for data storage. Categories of the software are described, user categories are explained, the design of implementation approaches is discussed, and representative software products are reviewed. (eight references) (LRW)
Long-term room temperature preservation of corpse soft tissue: an approach for tissue sample storage
2011-01-01
Background Disaster victim identification (DVI) represents one of the most difficult challenges in forensic sciences, and subsequent DNA typing is essential. Collected samples for DNA-based human identification are usually stored at low temperature to halt the degradation processes of human remains. We have developed a simple and reliable procedure for soft tissue storage and preservation for DNA extraction. It ensures high quality DNA suitable for PCR-based DNA typing after at least 1 year of room temperature storage. Methods Fragments of human psoas muscle were exposed to three different environmental conditions for diverse time periods at room temperature. Storage conditions included: (a) a preserving medium consisting of solid sodium chloride (salt), (b) no additional substances and (c) garden soil. DNA was extracted with proteinase K/SDS followed by organic solvent treatment and concentration by centrifugal filter devices. Quantification was carried out by real-time PCR using commercial kits. Short tandem repeat (STR) typing profiles were analysed with 'expert software'. Results DNA quantities recovered from samples stored in salt were similar up to the complete storage time and underscored the effectiveness of the preservation method. It was possible to reliably and accurately type different genetic systems including autosomal STRs and mitochondrial and Y-chromosome haplogroups. Autosomal STR typing quality was evaluated by expert software, denoting high quality profiles from DNA samples obtained from corpse tissue stored in salt for up to 365 days. Conclusions The procedure proposed herein is a cost efficient alternative for storage of human remains in challenging environmental areas, such as mass disaster locations, mass graves and exhumations. This technique should be considered as an additional method for sample storage when preservation of DNA integrity is required for PCR-based DNA typing. PMID:21846338
The Current State of Data Transmission Channels from Pushchino to Moscow and Perspectives
NASA Astrophysics Data System (ADS)
Dumsky, D. V.; Isaev, E. A.; Samodurov, V. A.; Shatskaya, M. V.
Since the operation of the unique space radio telescope in the international VLBI project "Radioastron" has been extended to 2017, the transmission and storage of the large volumes of scientific and telemetry data obtained during the experiments remains a pressing task. The project is carried out by the Astro Space Center of the Lebedev Physical Institute in Moscow, Russia. It requires us to maintain in an operating state the high-speed link that merges the buffer data center in Pushchino and the scientific information center in Moscow into a single LAN. Also still relevant are the monitoring system for the channel equipment and the storage systems, as well as timely hardware replacement, software upgrades, backups, and documentation of the network infrastructure.
NASA Astrophysics Data System (ADS)
McFall, Steve
1994-03-01
With the increase in business automation and the widespread availability and low cost of computer systems, law enforcement agencies have seen a corresponding increase in criminal acts involving computers. The examination of computer evidence is a new field of forensic science with numerous opportunities for research and development. Research is needed to develop new software utilities to examine computer storage media, expert systems capable of finding criminal activity in large amounts of data, and methods of recovering data from chemically and physically damaged computer storage media. Defeating encryption and password protection of computer files is another topic requiring more research and development.
DOE Office of Scientific and Technical Information (OSTI.GOV)
The system is developed to collect, process, store and present the information provided by radio frequency identification (RFID) devices. The system contains three parts: the application software, the database, and the web page. The application software manages multiple RFID devices, such as readers and portals, simultaneously. It communicates with the devices through an application programming interface (API) provided by the device vendor. The application software converts data collected by the RFID readers and portals to readable information. It is capable of encrypting data using the 256-bit Advanced Encryption Standard (AES). The application software has a graphical user interface (GUI). The GUI mimics the configurations of the nuclear material storage sites or transport vehicles. The GUI gives the user and system administrator an intuitive way to read the information and/or configure the devices. The application software is capable of sending the information to a remote, dedicated and secured web and database server. Two captured screen samples, one for storage and one for transport, are attached. The database is constructed to handle a large number of RFID tag readers and portals. A SQL server is employed for this purpose. An XML script is used to update the database once the information is sent from the application software. The design of the web page imitates the design of the application software. The web page retrieves data from the database and presents it in different panels. The user needs a user name combined with a password to access the web page. The web page is capable of sending e-mail and text messages based on preset criteria, such as when alarm thresholds are exceeded. A captured screen sample is attached. The application software is designed to be installed on a local computer. The local computer is directly connected to the RFID devices and can be controlled locally or remotely. There are multiple local computers managing different sites or transport vehicles. Control from remote sites and information transmission to a central database server take place over a secured Internet connection. The information stored in the central database server is shown on the web page. Users can view the web page on the Internet. A dedicated and secured web and database server (https) is used to provide information security.
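A sketch of the 256-bit AES encryption step the abstract mentions, using the third-party `cryptography` package's AES-GCM mode; the actual tool's cipher mode, key handling, and message format are not specified above, so those choices are assumptions:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # a 256-bit AES key
aesgcm = AESGCM(key)
nonce = os.urandom(12)                     # must be unique per message

# A hypothetical RFID event record to protect in transit.
reading = b'{"tag_id": "A1B2C3", "portal": 7, "event": "exit"}'
ciphertext = aesgcm.encrypt(nonce, reading, None)
assert aesgcm.decrypt(nonce, ciphertext, None) == reading
```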
Cloud Based Educational Systems and Its Challenges and Opportunities and Issues
ERIC Educational Resources Information Center
Paul, Prantosh Kr.; Lata Dangwal, Kiran
2014-01-01
Cloud Computing (CC) is a set of hardware, software, networks, storage, services, and interfaces combined to deliver aspects of computing as a service. Cloud Computing (CC) actually uses central remote servers to maintain data and applications. Practically, Cloud Computing (CC) is an extension of Grid computing with independency and…
System of experts for intelligent data management (SEIDAM)
NASA Technical Reports Server (NTRS)
Goodenough, David G.; Iisaka, Joji; Fung, KO
1993-01-01
Research and development is proposed on a system of expert systems for intelligent data management (SEIDAM). CCRS has much expertise in developing systems for integrating geographic information with space and aircraft remote sensing data and in managing large archives of remotely sensed data. SEIDAM will be composed of expert systems grouped in three levels. At the lowest level, the expert systems will manage and integrate data from diverse sources, taking account of symbolic representation differences and varying accuracies. Existing software can be controlled by these expert systems, without rewriting existing software into an Artificial Intelligence (AI) language. At the second level, SEIDAM will take the interpreted data (symbolic and numerical) and combine these with data models. At the top level, SEIDAM will respond to user goals for predictive outcomes given existing data. The SEIDAM Project will address the research areas of expert systems, data management, storage and retrieval, and user access and interfaces.
System of Experts for Intelligent Data Management (SEIDAM)
NASA Technical Reports Server (NTRS)
Goodenough, David G.; Iisaka, Joji; Fung, KO
1992-01-01
It is proposed to conduct research and development on a system of expert systems for intelligent data management (SEIDAM). CCRS has much expertise in developing systems for integrating geographic information with space and aircraft remote sensing data and in managing large archives of remotely sensed data. SEIDAM will be composed of expert systems grouped in three levels. At the lowest level, the expert systems will manage and integrate data from diverse sources, taking account of symbolic representation differences and varying accuracies. Existing software can be controlled by these expert systems, without rewriting existing software into an Artificial Intelligence (AI) language. At the second level, SEIDAM will take the interpreted data (symbolic and numerical) and combine these with data models. At the top level, SEIDAM will respond to user goals for predictive outcomes given existing data. The SEIDAM Project will address the research areas of expert systems, data management, storage and retrieval, and user access and interfaces.
Cardiovascular imaging environment: will the future be cloud-based?
Kawel-Boehm, Nadine; Bluemke, David A
2017-07-01
In cardiovascular CT and MR imaging, large datasets have to be stored, post-processed, analyzed and distributed. Besides basic assessment of volume and function in cardiac magnetic resonance imaging, for example, more sophisticated quantitative analysis is requested, requiring specific software. Many institutions cannot afford various types of software or provide the expertise to perform sophisticated analysis. Areas covered: Various cloud services exist for data storage and analysis specifically for cardiovascular CT and MR imaging. Instead of on-site data storage, cloud providers offer flexible storage services on a pay-per-use basis. To avoid purchase and maintenance of specialized software for cardiovascular image analysis, e.g. to assess myocardial iron overload, MR 4D flow and fractional flow reserve, evaluation can be performed with cloud-based software by the consumer, or complete analysis is performed by the cloud provider. However, challenges to widespread implementation of cloud services include regulatory issues regarding patient privacy and data security. Expert commentary: If patient privacy and data security are guaranteed, cloud imaging is a valuable option to cope with storage of large image datasets and offers sophisticated cardiovascular image analysis for institutions of all sizes.
Building a Snow Data Management System using Open Source Software (and IDL)
NASA Astrophysics Data System (ADS)
Goodale, C. E.; Mattmann, C. A.; Ramirez, P.; Hart, A. F.; Painter, T.; Zimdars, P. A.; Bryant, A.; Brodzik, M.; Skiles, M.; Seidel, F. C.; Rittger, K. E.
2012-12-01
At NASA's Jet Propulsion Laboratory, free and open source software is used every day to support a wide range of projects, from planetary to climate to research and development. In this abstract I will discuss the key role that open source software has played in building a robust science data processing pipeline for snow hydrology research, and how the system is also able to leverage programs written in IDL, making JPL's Snow Data System a hybrid of open source and proprietary software. Main points:
- The design of the Snow Data System (illustrating how the collection of sub-systems is combined to create a complete data processing pipeline)
- The challenges of moving from a single algorithm on a laptop to running hundreds of parallel algorithms on a cluster of servers (lessons learned): code changes, software-license-related challenges, and storage requirements
- System evolution (from data archiving, to data processing, to data on a map, to near-real-time products and maps)
- Road map for the next 6 months (including how easily we re-used the snowDS code base to support the Airborne Snow Observatory mission)
Software in use and software licenses:
- IDL: used for pre- and post-processing of data; licensed under a proprietary software license held by Excelis
- Apache OODT: used for data management and workflow processing; licensed under the Apache License Version 2
- GDAL: geospatial data processing library, currently used for data re-projection; licensed under the X/MIT license
- GeoServer: WMS server; licensed under the General Public License Version 2.0
- Leaflet.js: JavaScript web mapping library; licensed under the Berkeley Software Distribution License
- Python: glue code and miscellaneous data processing support; licensed under the Python Software Foundation License
- Perl: script wrapper for running the SCAG algorithm; licensed under the General Public License Version 3
- PHP: front-end web application programming; licensed under the PHP License Version 3.01
Lessons Learned During the Refurbishment and Testing of an Observatory After Long-Term Storage
NASA Technical Reports Server (NTRS)
Hawk, John; Peabody, Sharon; Stavely, Richard
2015-01-01
Thermal Fluids Analysis Workshop (TFAWS) 2015, Silver Spring, MD NCTS 21070-15. This paper addresses the lessons learned during the refurbishment and testing of the thermal control system for a spacecraft which was placed into long-term storage. The DSCOVR (Deep Space Climate Observatory) Observatory (formerly known as Triana) was originally scheduled to launch on the Space Shuttle in 2002. With the Triana spacecraft nearly complete, the mission was canceled and the satellite was abruptly put into storage in 2001. In 2008 the observatory was removed from storage to begin refurbishment and testing. Problems arose associated with hardware that was not currently manufactured, coatings degradation, and a significant lack of documentation. Also addressed is the conversion of the thermal and geometric math models for use with updated thermal analysis software tools.
Development of climate data storage and processing model
NASA Astrophysics Data System (ADS)
Okladnikov, I. G.; Gordov, E. P.; Titov, A. G.
2016-11-01
We present a storage and processing model for climate datasets elaborated in the framework of a virtual research environment (VRE) for climate and environmental monitoring and analysis of the impact of climate change on socio-economic processes on local and regional scales. The model is based on a "shared-nothing" distributed computing architecture and assumes a computing network where each computing node is independent and self-sufficient. Each node holds dedicated software for the processing and visualization of geospatial data, providing programming interfaces to communicate with the other nodes. The nodes are interconnected by a local network or the Internet and exchange data and control instructions via SSH connections and web services. Geospatial data is represented by collections of netCDF files stored in a hierarchy of directories in the framework of a file system. To speed up data reading and processing, three approaches are proposed: precalculation of intermediate products, distribution of data across multiple storage systems (with or without redundancy), and caching and reuse of previously obtained products. For fast search and retrieval of the required data, a metadata database is developed according to the data storage and processing model. It contains descriptions of the space-time features of the datasets available for processing, their locations, as well as descriptions and run options of the software components for data analysis and visualization. Together, the model and the metadata database will provide a reliable technological basis for the development of a high-performance virtual research environment for climatic and environmental monitoring.
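A minimal sketch of the metadata-database idea: walking a hypothetical directory hierarchy of netCDF files and indexing file-level facts into SQLite. A real implementation would also record space-time features read with a netCDF library; the paths and schema here are invented:

```python
import os
import sqlite3

db = sqlite3.connect("climate_metadata.db")  # hypothetical metadata database
db.execute("""CREATE TABLE IF NOT EXISTS dataset
              (path TEXT PRIMARY KEY, node TEXT, size_bytes INTEGER)""")

NODE = "node-01"  # the storage node holding these files (invented name)
for root, _dirs, files in os.walk("/data/climate"):  # hypothetical data root
    for name in files:
        if name.endswith(".nc"):  # netCDF files in the directory hierarchy
            full = os.path.join(root, name)
            db.execute("INSERT OR REPLACE INTO dataset VALUES (?, ?, ?)",
                       (full, NODE, os.path.getsize(full)))
db.commit()
```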
The ATLAS conditions database architecture for the Muon spectrometer
NASA Astrophysics Data System (ADS)
Verducci, Monica; ATLAS Muon Collaboration
2010-04-01
The Muon System, facing challenging requirements for conditions data storage, has started to use the conditions database project 'COOL' extensively as the basis for all of its conditions data storage, both at CERN and throughout the worldwide collaboration, as decided by the ATLAS Collaboration. The management of the Muon COOL conditions database will be one of the most challenging applications for the Muon System, both in terms of data volumes and rates and in terms of the variety of data stored. The Muon conditions database is responsible for storage of almost all of the 'non-event' data and detector quality flags needed for debugging detector operations and for performing reconstruction and analysis. The COOL database allows database applications to be written independently of the underlying database technology and ensures long-term compatibility with the entire ATLAS software. COOL implements an interval-of-validity database: objects stored or referenced in COOL have an associated start and end time between which they are valid. The data is stored in folders, which are themselves arranged in a hierarchical structure of folder sets. The structure is simple and mainly optimized to store and retrieve objects associated with a particular time. In this work, an overview of the entire Muon conditions database architecture is given, including the different sources of the data and the storage model used. In addition, the software interfaces used to access the conditions data are described, with emphasis on the offline reconstruction framework ATHENA and the services developed to provide the conditions data to the reconstruction.
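An illustrative interval-of-validity lookup in the spirit described above (not the COOL API): each stored object carries a validity range, and retrieval finds the object valid at a given time:

```python
import bisect

# One "folder": non-overlapping [start, end) intervals sorted by start time.
folder = [(0, 100, "calib-v1"), (100, 250, "calib-v2"), (250, 10**18, "calib-v3")]
starts = [iov[0] for iov in folder]

def lookup(t: int):
    i = bisect.bisect_right(starts, t) - 1  # last interval starting <= t
    if i >= 0 and folder[i][0] <= t < folder[i][1]:
        return folder[i][2]
    return None  # no object valid at time t

print(lookup(120))  # -> "calib-v2"
```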
An interactive environment for the analysis of large Earth observation and model data sets
NASA Technical Reports Server (NTRS)
Bowman, Kenneth P.; Walsh, John E.; Wilhelmson, Robert B.
1993-01-01
We propose to develop an interactive environment for the analysis of large Earth science observation and model data sets. We will use a standard scientific data storage format and a large capacity (greater than 20 GB) optical disk system for data management; develop libraries for coordinate transformation and regridding of data sets; modify the NCSA X Image and X DataSlice software for typical Earth observation data sets by including map transformations and missing data handling; develop analysis tools for common mathematical and statistical operations; integrate the components described above into a system for the analysis and comparison of observations and model results; and distribute software and documentation to the scientific community.
An interactive environment for the analysis of large Earth observation and model data sets
NASA Technical Reports Server (NTRS)
Bowman, Kenneth P.; Walsh, John E.; Wilhelmson, Robert B.
1992-01-01
We propose to develop an interactive environment for the analysis of large Earth science observation and model data sets. We will use a standard scientific data storage format and a large capacity (greater than 20 GB) optical disk system for data management; develop libraries for coordinate transformation and regridding of data sets; modify the NCSA X Image and X Data Slice software for typical Earth observation data sets by including map transformations and missing data handling; develop analysis tools for common mathematical and statistical operations; integrate the components described above into a system for the analysis and comparison of observations and model results; and distribute software and documentation to the scientific community.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1997-05-01
The Interactive Computer-Enhanced Remote Viewing System (ICERVS) is a software tool for complex three-dimensional (3-D) visualization and modeling. Its primary purpose is to facilitate the use of robotic and telerobotic systems in remote and/or hazardous environments, where spatial information is provided by 3-D mapping sensors. ICERVS provides a robust, interactive system for viewing sensor data in 3-D and combines this with interactive geometric modeling capabilities that allow an operator to construct CAD models to match the remote environment. Part I of this report traces the development of ICERVS through three evolutionary phases: (1) development of first-generation software to render orthogonal view displays and wireframe models; (2) expansion of this software to include interactive viewpoint control, surface-shaded graphics, material (scalar and nonscalar) property data, cut/slice planes, color and visibility mapping, and generalized object models; (3) demonstration of ICERVS as a tool for the remediation of underground storage tanks (USTs) and the dismantlement of contaminated processing facilities. Part II of this report details the software design of ICERVS, with particular emphasis on its object-oriented architecture and user interface.
NASA Astrophysics Data System (ADS)
Miyaji, Kousuke; Sun, Chao; Soga, Ayumi; Takeuchi, Ken
2014-01-01
A relational database management system (RDBMS) is designed based on NAND flash solid-state drive (SSD) storage. By vertically integrating the storage engine (SE) and the flash translation layer (FTL), system performance is maximized and the internal SSD overhead is minimized. The proposed RDBMS SE utilizes physical information about the NAND flash memory which is supplied by the FTL. The query operation is also optimized for SSD. With these techniques, page-copy-less garbage collection is achieved and data fragmentation in the NAND flash memory is suppressed. As a result, RDBMS performance increases by 3.8 times, power consumption of the SSD decreases by 46% and SSD lifetime is increased by 61%. The effectiveness of the proposed scheme increases with larger erase block sizes, which matches the future scaling trend of three-dimensional (3D) NAND flash memories. The preferred row size for the proposed scheme is below 500 bytes for a 16-kbyte page size.
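The arithmetic behind the closing row-size remark, under the assumption that rows are packed whole into pages: smaller rows mean more tuples share one flash page, so fewer page rewrites per updated row:

```python
PAGE_BYTES = 16 * 1024  # flash page size from the abstract
ROW_BYTES = 500         # preferred maximum row size from the abstract
# Assumes rows are packed whole into a page with negligible overhead.
rows_per_page = PAGE_BYTES // ROW_BYTES
print(rows_per_page, "rows share one program/erase unit")  # -> 32
```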
Configurable memory system and method for providing atomic counting operations in a memory device
Bellofatto, Ralph E.; Gara, Alan G.; Giampapa, Mark E.; Ohmacht, Martin
2010-09-14
A memory system and method for providing atomic memory-based counter operations to operating systems and applications that make most efficient use of counter-backing memory and virtual and physical address space, while simplifying operating system memory management, and enabling the counter-backing memory to be used for purposes other than counter-backing storage when desired. The encoding and address decoding enabled by the invention provides all this functionality through a combination of software and hardware.
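A software sketch of the semantics only (the patent's hardware encoding and address decoding are not modelled): an atomic fetch-and-add counter whose read-modify-write is indivisible across threads:

```python
import threading

class AtomicCounter:
    def __init__(self) -> None:
        self._value = 0
        self._lock = threading.Lock()

    def fetch_and_add(self, amount: int = 1) -> int:
        # The read-modify-write below is indivisible w.r.t. other threads.
        with self._lock:
            old = self._value
            self._value += amount
            return old

counter = AtomicCounter()
workers = [threading.Thread(target=lambda: [counter.fetch_and_add()
                                            for _ in range(1000)])
           for _ in range(8)]
for t in workers:
    t.start()
for t in workers:
    t.join()
assert counter.fetch_and_add(0) == 8000  # no lost updates
```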
Programmable, automated transistor test system
NASA Technical Reports Server (NTRS)
Truong, L. V.; Sundburg, G. R.
1986-01-01
A programmable, automated transistor test system was built to supply experimental data on new and advanced power semiconductors. The data will be used for analytical models and by engineers in designing space and aircraft electric power systems. A pulsed power technique was used at low duty cycles in a nondestructive test to examine the dynamic switching characteristic curves of power transistors in the 500 to 1000 V, 10 to 100 A range. Data collection, manipulation, storage, and output are operator interactive but are guided and controlled by the system software.
A Communication Architecture for an Advanced Extravehicular Mobile Unit
NASA Technical Reports Server (NTRS)
Ivancic, William D.; Sands, Obed S.; Bakula, Casey J.; Oldham, Daniel R.; Wright, Ted; Bradish, Martin A.; Klebau, Joseph M.
2014-01-01
This document describes the communication architecture for the Power, Avionics and Software (PAS) 1.0 subsystem for the Advanced Extravehicular Mobility Unit (AEMU). The following systems are described in detail: Caution Warning and Control System, Informatics, Storage, Video, Audio, Communication, and Monitoring Test and Validation. This document also provides some background as well as the purpose and goals of the PAS subsystem being developed at Glenn Research Center (GRC).
SnoVault and encodeD: A novel object-based storage system and applications to ENCODE metadata.
Hitz, Benjamin C; Rowe, Laurence D; Podduturi, Nikhil R; Glick, David I; Baymuradov, Ulugbek K; Malladi, Venkat S; Chan, Esther T; Davidson, Jean M; Gabdank, Idan; Narayana, Aditi K; Onate, Kathrina C; Hilton, Jason; Ho, Marcus C; Lee, Brian T; Miyasato, Stuart R; Dreszer, Timothy R; Sloan, Cricket A; Strattan, J Seth; Tanaka, Forrest Y; Hong, Eurie L; Cherry, J Michael
2017-01-01
The Encyclopedia of DNA elements (ENCODE) project is an ongoing collaborative effort to create a comprehensive catalog of functional elements initiated shortly after the completion of the Human Genome Project. The current database exceeds 6500 experiments across more than 450 cell lines and tissues using a wide array of experimental techniques to study the chromatin structure, regulatory and transcriptional landscape of the H. sapiens and M. musculus genomes. All ENCODE experimental data, metadata, and associated computational analyses are submitted to the ENCODE Data Coordination Center (DCC) for validation, tracking, storage, unified processing, and distribution to community resources and the scientific community. As the volume of data increases, the identification and organization of experimental details becomes increasingly intricate and demands careful curation. The ENCODE DCC has created a general purpose software system, known as SnoVault, that supports metadata and file submission, a database used for metadata storage, web pages for displaying the metadata and a robust API for querying the metadata. The software is fully open source; code and installation instructions can be found at: http://github.com/ENCODE-DCC/snovault/ (for the generic database) and http://github.com/ENCODE-DCC/encoded/ to store genomic data in the manner of ENCODE. The core database engine, SnoVault (which is completely independent of ENCODE, genomic data, or bioinformatic data) has been released as a separate Python package.
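A hedged example of the kind of metadata query the API supports, using the third-party `requests` package against the public ENCODE search endpoint; the URL pattern and parameters follow common usage and should be verified against the current API documentation:

```python
import requests

resp = requests.get(
    "https://www.encodeproject.org/search/",
    params={"type": "Experiment", "assay_title": "ChIP-seq",
            "format": "json", "limit": "5"},
    headers={"Accept": "application/json"},
    timeout=30,
)
resp.raise_for_status()
for hit in resp.json()["@graph"]:  # search results are listed under "@graph"
    print(hit["@id"])
```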
SnoVault and encodeD: A novel object-based storage system and applications to ENCODE metadata
Podduturi, Nikhil R.; Glick, David I.; Baymuradov, Ulugbek K.; Malladi, Venkat S.; Chan, Esther T.; Davidson, Jean M.; Gabdank, Idan; Narayana, Aditi K.; Onate, Kathrina C.; Hilton, Jason; Ho, Marcus C.; Lee, Brian T.; Miyasato, Stuart R.; Dreszer, Timothy R.; Sloan, Cricket A.; Strattan, J. Seth; Tanaka, Forrest Y.; Hong, Eurie L.; Cherry, J. Michael
2017-01-01
The Encyclopedia of DNA elements (ENCODE) project is an ongoing collaborative effort to create a comprehensive catalog of functional elements initiated shortly after the completion of the Human Genome Project. The current database exceeds 6500 experiments across more than 450 cell lines and tissues using a wide array of experimental techniques to study the chromatin structure, regulatory and transcriptional landscape of the H. sapiens and M. musculus genomes. All ENCODE experimental data, metadata, and associated computational analyses are submitted to the ENCODE Data Coordination Center (DCC) for validation, tracking, storage, unified processing, and distribution to community resources and the scientific community. As the volume of data increases, the identification and organization of experimental details becomes increasingly intricate and demands careful curation. The ENCODE DCC has created a general purpose software system, known as SnoVault, that supports metadata and file submission, a database used for metadata storage, web pages for displaying the metadata and a robust API for querying the metadata. The software is fully open source; code and installation instructions can be found at: http://github.com/ENCODE-DCC/snovault/ (for the generic database) and http://github.com/ENCODE-DCC/encoded/ to store genomic data in the manner of ENCODE. The core database engine, SnoVault (which is completely independent of ENCODE, genomic data, or bioinformatic data) has been released as a separate Python package. PMID:28403240
NASA Technical Reports Server (NTRS)
Callac, Christopher; Lunsford, Michelle
2005-01-01
The NASA Records Database, comprising a Web-based application program and a database, is used to administer an archive of paper records at Stennis Space Center. The system begins with an electronic form, into which a user enters information about records that the user is sending to the archive. The form is smart: it provides instructions for entering information correctly and prompts the user to enter all required information. Once complete, the form is digitally signed and submitted to the database. The system determines which storage locations are not in use, assigns the user's boxes of records to some of them, and enters these assignments in the database. Thereafter, the software tracks the boxes and can be used to locate them. By use of the search capabilities of the software, specific records can be sought by box storage locations, accession numbers, record dates, submitting organizations, or details of the records themselves. Boxes can be marked with such statuses as checked out, lost, transferred, and destroyed. The system can generate reports showing boxes awaiting destruction or transfer. When boxes are transferred to the National Archives and Records Administration (NARA), the system can automatically fill out NARA records-transfer forms. Currently, several other NASA Centers are considering deploying the NASA Records Database to help automate their records archives.
A Highly Scalable Data Service (HSDS) using Cloud-based Storage Technologies for Earth Science Data
NASA Astrophysics Data System (ADS)
Michaelis, A.; Readey, J.; Votava, P.; Henderson, J.; Willmore, F.
2017-12-01
Cloud-based infrastructure may offer several key benefits of scalability, built-in redundancy, security mechanisms and reduced total cost of ownership as compared with a traditional data center approach. However, most of the tools and legacy software systems developed for online data repositories within the federal government were not developed with a cloud-based infrastructure in mind and do not fully take advantage of commonly available cloud-based technologies. Moreover, services based on object storage are well established and provided by all the leading cloud service providers (Amazon Web Services, Microsoft Azure, Google Cloud, etc.), which can often provide unmatched "scale-out" capabilities and data availability to a large and growing consumer base at a price point unachievable with in-house solutions. We describe a system that utilizes object storage rather than traditional file-system-based storage to vend earth science data. The system described is not only cost effective, but shows a performance advantage for running many different analytics tasks in the cloud. To enable compatibility with existing tools and applications, we outline client libraries that are API compatible with existing libraries for HDF5 and NetCDF4. Performance of the system is demonstrated using cloud services running on Amazon Web Services.
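A sketch of the API-compatibility claim, assuming the HDF Group's h5pyd client library (an h5py-compatible package for object-store-backed HDF5 services); the domain path and dataset name are invented:

```python
import h5pyd  # pip install h5pyd; mirrors h5py's call signatures

# "/shared/nasa/sample.h5" is an invented server-side domain, not a local file.
with h5pyd.File("/shared/nasa/sample.h5", "r") as f:
    dset = f["temperature"]    # datasets address just like h5py datasets
    print(dset.shape, dset.dtype)
    tile = dset[0:10, 0:10]    # range reads become object-store requests
```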
Ada (Trademark) Reusability Guidelines.
1985-04-01
generators. Neighbors discusses another approach to reusable software using models. He describes a particular modeling technique using the Draco System ... experience with the Draco system. SECTION 4: DESIGN GUIDELINES. As noted earlier, reusability is first and foremost a design issue ... to be reused in another system that had a different type of physical data storage device, only this layer needs to be changed to deal with the new
2002-07-01
Knowledge From Data ... HIGH-CONFIDENCE SOFTWARE AND SYSTEMS: Reliability, Security, and Safety for ... NOAA's Cessna Citation flew over the 16-acre World Trade Center site, scanning with an Optech ALSM unit. The system recorded data points from 33,000 ... provide the data storage and compute power for intelligence analysis, high-performance national defense systems, and critical scientific research ... Large
Swertz, Morris A; De Brock, E O; Van Hijum, Sacha A F T; De Jong, Anne; Buist, Girbe; Baerends, Richard J S; Kok, Jan; Kuipers, Oscar P; Jansen, Ritsert C
2004-09-01
Genomic research laboratories need adequate infrastructure to support management of their data production and research workflow. But what makes infrastructure adequate? A lack of appropriate criteria makes any decision on buying or developing a system difficult. Here, we report on the decision process for the case of a molecular genetics group establishing a microarray laboratory. Five typical requirements for experimental genomics database systems were identified: (i) the ability to evolve to keep up with the fast-developing genomics field; (ii) a suitable data model to deal with local diversity; (iii) suitable storage of data files in the system; (iv) easy exchange with other software; and (v) low maintenance costs. The computer scientists and the researchers of the local microarray laboratory considered alternative solutions for these five requirements and chose the following options: (i) use of automatic code generation; (ii) a customized data model based on standards; (iii) storage of datasets as black boxes instead of decomposing them into database tables; (iv) loose links to other programs for improved flexibility; and (v) a low-maintenance web-based user interface. Our team evaluated existing microarray databases and then decided to build a new system, the Molecular Genetics Information System (MOLGENIS), implemented using code generation in a period of three months. This case can provide valuable insights and lessons to both software developers and a user community embarking on large-scale genomic projects. http://www.molgenis.nl
International Inventory of Software Packages in the Information Field.
ERIC Educational Resources Information Center
Keren, Carl, Ed.; Sered, Irina, Ed.
Designed to provide guidance in selecting appropriate software for library automation, information storage and retrieval, or management of bibliographic databases, this inventory describes 188 computer software packages. The information was obtained through a questionnaire survey of 600 software suppliers and developers who were asked to describe…
Cataloging and Organizing Microcomputer Software--Where Do We Go from First Base?
ERIC Educational Resources Information Center
Choi, Susan E.
This position paper addresses general topics to be considered when organizing library software collections. Tasks involved in organizing and cataloging educational software collections are discussed, including arrangement/classification; the type of catalog; descriptions of the software; the general materials designator; storage requirements; and…
Nine Easy Steps to Avoiding Software Copyright Infringement.
ERIC Educational Resources Information Center
Gamble, Lanny R.; Anderson, Larry S.
1989-01-01
To avoid microcomputer software copyright infringement, administrators must be aware of the law, read the software agreements, maintain good records, submit all software registration cards, provide secure storage, post warnings, be consistent when establishing and enforcing policies, consider a site license, and ensure the legality of currently…
The Role of Free/Libre and Open Source Software in Learning Health Systems.
Paton, C; Karopka, T
2017-08-01
Objective: To give an overview of the role of Free/Libre and Open Source Software (FLOSS) in the context of secondary use of patient data to enable Learning Health Systems (LHSs). Methods: We conducted an environmental scan of the academic and grey literature utilising the MedFLOSS database of open source systems in healthcare to inform a discussion of the role of open source in developing LHSs that reuse patient data for research and quality improvement. Results: A wide range of FLOSS is identified that contributes to the information technology (IT) infrastructure of LHSs including operating systems, databases, frameworks, interoperability software, and mobile and web apps. The recent literature around the development and use of key clinical data management tools is also reviewed. Conclusions: FLOSS already plays a critical role in modern health IT infrastructure for the collection, storage, and analysis of patient data. The nature of FLOSS systems to be collaborative, modular, and modifiable may make open source approaches appropriate for building the digital infrastructure for a LHS.
2015-01-19
Proteus measurement and analysis software for data acquisition, storage and evaluation on the MS WINDOWS platform, which enables multitasking with simultaneous evaluation and operation.
Patient Safety—Incorporating Drawing Software into Root Cause Analysis Software
Williams, Linda; Grayson, Diana; Gosbee, John
2001-01-01
Drawing software from Lassalle Technologies (France) designed for Visual Basic is the tool we used to standardize the creation, storage, and retrieval of flow diagrams containing information about adverse events and close calls.
Patient Safety—Incorporating Drawing Software into Root Cause Analysis Software
Williams, Linda; Grayson, Diana; Gosbee, John
2002-01-01
Drawing software from Lassalle Technologies (France) designed for Visual Basic is the tool we used to standardize the creation, storage, and retrieval of flow diagrams containing information about adverse events and close calls.
New Automotive Air Conditioning System Simulation Tool Developed in MATLAB/Simulink
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kiss, T.; Chaney, L.; Meyer, J.
Further improvements in vehicle fuel efficiency require accurate evaluation of the vehicle's transient total power requirement. When operated, the air conditioning (A/C) system is the largest auxiliary load on a vehicle; therefore, accurate evaluation of the load it places on the vehicle's engine and/or energy storage system is especially important. Vehicle simulation software, such as 'Autonomie,' has been used by OEMs to evaluate vehicles' energy performance. A transient A/C simulation tool incorporated into vehicle simulation models would also provide a means of developing more efficient A/C systems through thorough consideration of transient A/C system performance. The dynamic system simulation software Matlab/Simulink was used to develop new and more efficient vehicle energy system controls. The various modeling methods used for the new simulation tool are described in detail. Comparison with measured data is provided to demonstrate the validity of the model.
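A toy transient cabin-cooling model in the same spirit (not the MATLAB/Simulink tool described above): a first-order energy balance integrated with explicit Euler, with all parameters invented for illustration:

```python
C_CABIN = 60_000.0  # cabin thermal capacitance, J/K (assumed)
UA = 150.0          # cabin-to-ambient conductance, W/K (assumed)
Q_AC = 2_500.0      # A/C cooling power when on, W (assumed)
T_AMB = 35.0        # ambient temperature, deg C
DT = 1.0            # time step, s

t_cabin = 50.0      # hot-soaked starting temperature, deg C
for step in range(1801):                       # 30 minutes of pull-down
    q_leak = UA * (T_AMB - t_cabin)            # heat exchanged with ambient
    t_cabin += (q_leak - Q_AC) / C_CABIN * DT  # dT/dt = (q_leak - q_ac) / C
    if step % 300 == 0:
        print(f"t = {step:4d} s  cabin = {t_cabin:5.1f} C")
```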
Development of a data management front-end for use with a LANDSAT-based information system
NASA Technical Reports Server (NTRS)
Turner, B. J.
1982-01-01
The development and implementation of a data management front-end system for use with a LANDSAT-based information system that facilitates the processing of both LANDSAT and ancillary data was examined. The final tasks, reported on here, involved: (1) the implementation of the VICAR image processing software system at Penn State and the development of a user-friendly front-end for this system; (2) the implementation of JPL-developed software, based on VICAR, for mosaicking LANDSAT scenes; (3) the creation and storage of a mosaic of 1981 summer LANDSAT data for the entire state of Pennsylvania; (4) demonstrations of the defoliation assessment procedure for Perry and Centre Counties, and presentation of the results at the 1982 National Gypsy Moth Review Meeting; and (5) the training of Pennsylvania Bureau of Forestry personnel in the use of the defoliation analysis system.
NASA Astrophysics Data System (ADS)
Petitjean, Gilles; de Hauteclocque, Bertrand
2004-06-01
EADS Defence and Security Systems (EADS DS SA) has developed expertise as an integrator of archive management systems for both its commercial and defence customers (ESA, CNES, EC, EUMETSAT, French MOD, US DOD, etc.), especially in the Earth Observation and Meteorology fields. The concern of owners of valuable data is both the long-term preservation of that data and the integration of the archive into their information system, in particular with efficient access to archived data for their user community. The system integrator answers this requirement with a methodology combining an understanding of user needs, exhaustive knowledge of existing hardware and software solutions, and development and integration ability. The system integrator completes the facility development with support activities. The long-term preservation of archived data obviously involves a pertinent selection of the storage media and archive library. This selection relies on a survey of storage technology, but the selection criteria depend on the analysis of the user needs. The system integrator will recommend the best compromise for implementing an archive management facility, thanks to its knowledge of, and independence from, the storage market and through the analysis of the user requirements. It will provide a solution that is able to evolve to take advantage of progress in storage technology. But preserving data for the long term is not only a question of storage technology. Some functions are required to secure the archive management system against contingency situations: multiple data set copies using operational procedures, active quality control of the archived data, and a migration policy optimising the cost of ownership.
Research on Key Technologies of Cloud Computing
NASA Astrophysics Data System (ADS)
Zhang, Shufen; Yan, Hongcan; Chen, Xuebin
With the development of multi-core processors, virtualization, distributed storage, broadband Internet, and automatic management, a new computing mode named cloud computing has emerged. It distributes computation tasks over a resource pool consisting of massive numbers of computers, so that application systems can obtain computing power, storage space, and software services according to demand. It can concentrate all the computing resources and manage them automatically through software, without human intervention. This frees application providers from tedious details and lets them concentrate on their business, which is advantageous for innovation and cost reduction. The ultimate goal of cloud computing is to provide computation, services, and applications as a public utility, so that people can use computer resources just as they use water, electricity, gas, and the telephone. Currently, the understanding of cloud computing is still developing and changing, and cloud computing has no unanimous definition. This paper describes the three main service forms of cloud computing: SAAS, PAAS, and IAAS; compares the definitions of cloud computing given by Google, Amazon, IBM, and other companies; summarizes the basic characteristics of cloud computing; and emphasizes key technologies such as data storage, data management, virtualization, and the programming model.
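To make the programming-model discussion above concrete, here is a minimal sketch of the MapReduce pattern as a plain Python word count; the three phases and all names are illustrative, not any vendor's API.

    # Minimal illustration of the MapReduce programming model: word count.
    from collections import defaultdict
    from itertools import chain

    def map_phase(document):
        # Emit (word, 1) pairs for each word in the document.
        return [(word, 1) for word in document.split()]

    def shuffle_phase(pairs):
        # Group values by key, as the framework would between map and reduce.
        groups = defaultdict(list)
        for key, value in pairs:
            groups[key].append(value)
        return groups

    def reduce_phase(groups):
        # Sum the counts for each word.
        return {word: sum(counts) for word, counts in groups.items()}

    documents = ["cloud computing distributes computation", "cloud storage scales"]
    pairs = list(chain.from_iterable(map_phase(d) for d in documents))
    print(reduce_phase(shuffle_phase(pairs)))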
Reference System of DNA and Protein Sequences on CD-ROM
NASA Astrophysics Data System (ADS)
Nasu, Hisanori; Ito, Toshiaki
DNASIS-DBREF31 is a database of DNA and protein sequences in the form of optical Compact Disk (CD) ROM, developed and commercialized by Hitachi Software Engineering Co., Ltd. Both nucleic acid base sequences and protein amino acid sequences can be retrieved from a single CD-ROM. Existing databases are offered in the form of on-line services, floppy disks, or magnetic tape, all of which have some problem or other, such as usability or storage capacity. DNASIS-DBREF31 newly adopts a CD-ROM as the database device to realize mass storage and personal use of the database.
NASA Astrophysics Data System (ADS)
Schaller, S. C.; Bjorklund, E. A.; Carr, G. P.; Faucett, J. A.; Oothoudt, M. A.
1997-05-01
The Los Alamos Neutron Scattering Center (LANSCE) Proton Storage Ring (PSR) control system upgrade was completed in 1996. In previous work, much of a PDP-11-based control system was replaced with Experimental Physics and Industrial Control System (EPICS) controls. Several parts of the old control system which used a VAX for operator displays and direct access to a CAMAC serial highway still remained. The old system was preserved as a "fallback" if the new EPICS-based system had problems. The control system upgrade completion included conversion of several application programs to EPICS-based operator interfaces, moving some data acquisition hardware to EPICS Input-Output Controllers (IOCs), and the implementation of new gateway software to complete the overall control system interoperability. Many operator interface (OPI) screens, written by LANSCE operators, have been incorporated in the new system. The old PSR control system hardware was removed. The robustness and reliability of the new controls obviated the need for a fallback capability.
Lemaitre, D.; Sauquet, D.; Fofol, I.; Tanguy, L.; Jean, F. C.; Degoulet, P.
1995-01-01
Legacy systems are crucial for organizations since they support key functionalities. But they become obsolete with aging and the appearance of new techniques. Managing their evolution is a key issue in software engineering. This paper presents a strategy that has been developed at Broussais University Hospital in Paris to make a legacy system devoted to the management of health care units evolve towards new, up-to-date software. A two-phase evolution pathway is described. The first phase consists in separating the interface from the data storage and application control and in using a communication channel between the individualized components. The second phase proposes to use an object-oriented DBMS in place of the homegrown system. An application example for the management of hypertensive patients is described. PMID:8563252
Data systems and computer science programs: Overview
NASA Technical Reports Server (NTRS)
Smith, Paul H.; Hunter, Paul
1991-01-01
An external review of the Integrated Technology Plan for the Civil Space Program is presented. The topics are presented in viewgraph form and include the following: onboard memory and storage technology; advanced flight computers; special purpose flight processors; onboard networking and testbeds; information archive, access, and retrieval; visualization; neural networks; software engineering; and flight control and operations.
NASA Technical Reports Server (NTRS)
Auty, David
1988-01-01
The risk to the development of program reliability derives from the use of a new language and from the potential use of new storage management techniques. With Ada and associated support software, there is a lack of established guidelines and procedures, drawn from experience and common usage, which assure reliable behavior. The risk is identified and clarified. In order to provide a framework for future consideration of dynamic storage management in Ada, a description of the relevant aspects of the language is presented in two sections: program data sources, and declaration and allocation in Ada. Storage-management characteristics of the Ada language and storage-management characteristics of Ada implementations are differentiated. Terms that are used are defined in a narrow and precise sense. The storage-management implications of the Ada language are described. The storage-management options available to the Ada implementor and the implications of the implementor's choice for the Ada programmer are also described.
Online data monitoring in the LHCb experiment
NASA Astrophysics Data System (ADS)
Callot, O.; Cherukuwada, S.; Frank, M.; Gaspar, C.; Graziani, G.; Herwijnen, E. v.; Jost, B.; Neufeld, N.; P-Altarelli, M.; Somogyi, P.; Stoica, R.
2008-07-01
The High Level Trigger and Data Acquisition system selects about 2 kHz of events out of the 40 MHz of beam crossings. The selected events are sent to permanent storage for subsequent analysis. In order to ensure the quality of the collected data, identify possible malfunctions of the detector and perform calibration and alignment checks, a small fraction of the accepted events is sent to a monitoring farm, which consists of a few tens of general purpose processors. This contribution introduces the architecture of the data stream splitting mechanism from the storage system to the monitoring farm, where the raw data are analyzed by dedicated tasks. It describes the collaborating software components that are all based on the Gaudi event processing framework.
Portable Map-Reduce Utility for MIT SuperCloud Environment
2015-09-17
The big data architecture, which is designed to address these challenges, is made up of the computing resources, scheduler, central storage file system, databases, analytics software and web interfaces [1]. These components are common to many big data and supercomputing systems.
Mixed Traffic Information Collection System based on Pressure Sensor
NASA Astrophysics Data System (ADS)
Liao, Wenzhe; Liu, Mingsheng; Meng, Qingli
Traffic information collection is the basis of intelligent traffic systems. At present, mixed traffic conditions exist on urban roads in China. This paper describes the research and implementation of a system that collects mixed vehicle and bicycle traffic flow parameters based on pressure sensors. According to the information collection requirements, we selected the pressure sensor and designed the data collection, storage, and other hardware circuitry as well as the information processing software. Experiments show that the system can meet the demands of traffic information collection in actual conditions.
Security of patient data when decommissioning ultrasound systems.
Moggridge, James
2017-02-01
Although ultrasound systems generally archive to Picture Archiving and Communication Systems (PACS), their archiving workflow typically involves storage to an internal hard disk before data are transferred onwards. Deleting records from the local system will delete entries in the database and from the file allocation table or equivalent but, as with a PC, files can be recovered. Great care is taken with disposal of media from a healthcare organisation to prevent data breaches, but ultrasound systems are routinely returned to lease companies, sold on or donated to third parties without such controls. In this project, five methods of hard disk erasure were tested on nine ultrasound systems being decommissioned: the system's own delete function; full reinstallation of system software; the manufacturer's own disk wiping service; and open source disk wiping software, for both full-disk erasure and blank-space-only erasure. Attempts were then made to recover data using open source recovery tools. All methods deleted patient data as viewable from the ultrasound system and from browsing the disk from a PC. However, patient identifiable data (PID) could be recovered following the system's own deletion and the reinstallation methods. No PID could be recovered after using the manufacturer's wiping service or the open source wiping software. The typical method of reinstalling an ultrasound system's software may not prevent PID from being recovered. When transferring ownership, care should be taken that an ultrasound system's hard disk has been wiped to a sufficient level, particularly if the scanner is to be returned with approved parts and in a fully working state.
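As an illustration of the blank-space erasure tested above, the following is a minimal Python sketch that overwrites a volume's free space with random data and then removes the fill file. The path and chunk size are assumptions, and production wiping tools do considerably more (verification, multiple passes, handling of journals and wear levelling).

    # Minimal sketch of blank-space (free-space) erasure: fill the volume's
    # unallocated blocks with random data, then delete the fill file.
    import os

    def wipe_free_space(mount_point, chunk_size=1024 * 1024):
        fill_path = os.path.join(mount_point, "wipe.tmp")  # hypothetical path
        try:
            with open(fill_path, "wb") as f:
                while True:
                    f.write(os.urandom(chunk_size))  # overwrite free blocks
                    f.flush()
                    os.fsync(f.fileno())
        except OSError:
            pass  # disk full: the free space has been overwritten
        finally:
            if os.path.exists(fill_path):
                os.remove(fill_path)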
15 CFR 740.9 - Temporary imports, exports, reexports, and transfers (in-country) (TMP).
Code of Federal Regulations, 2014 CFR
2014-01-01
... commodities and software may be placed in a bonded warehouse or a storage facility provided that the exporter... the end of the beta test period as defined by the software producer or, if the software producer does... software. (a) Temporary exports, reexports, and transfers (in-country). License Exception TMP authorizes...
ERIC Educational Resources Information Center
Davies, Denise M.
1985-01-01
Discusses design, development, and use of a database to provide organization and access to a computer software collection at the University of Hawaii School of Library Studies. Field specifications, samples of report forms, and a description of the physical organization of the software collection are included. (MBR)
Integrated software system for low level waste management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Worku, G.
1995-12-31
In the continually changing and uncertain world of low level waste management, many generators in the US are faced with the prospect of having to store their waste on site for the indefinite future. This consequently increases the set of tasks performed by the generators in the areas of packaging, characterizing, classifying, screening (if a set of acceptance criteria applies), and managing the inventory for the duration of onsite storage. When disposal sites become available, it is expected that the work will require re-evaluating the waste packages, including possible re-processing, re-packaging, or re-classifying in preparation for shipment for disposal under the regulatory requirements of the time. In this day and age, when there is wide use of computers and computer literacy is at high levels, an important waste management tool would be an integrated software system that aids waste management personnel in conducting these tasks quickly and accurately. It has become evident that such an integrated radwaste management software system offers great benefits to radwaste generators both in the US and other countries. This paper discusses one such approach to integrated radwaste management utilizing some globally accepted radiological assessment software applications.
NASA Astrophysics Data System (ADS)
Arias Muñoz, C.; Brovelli, M. A.; Kilsedar, C. E.; Moreno-Sanchez, R.; Oxoli, D.
2017-09-01
The availability of water-related data and information across different geographical and jurisdictional scales is of critical importance for the conservation and management of water resources in the 21st century. Today information assets are often found fragmented across multiple agencies that use incompatible data formats and procedures for data collection, storage, maintenance, analysis, and distribution. The growing adoption of Web mapping systems in the water domain is reducing the gap between data availability and its practical use and accessibility. Nevertheless, more attention must be given to the design and development of these systems to achieve high levels of interoperability and usability while fulfilling different end user informational needs. This paper first presents a brief overview of technologies used in the water domain, and then presents three examples of Web mapping architectures based on free and open source software (FOSS) and the use of open specifications (OS) that address different users' needs for data sharing, visualization, manipulation, scenario simulations, and map production. The purpose of the paper is to illustrate how the latest developments in OS for geospatial and water-related data collection, storage, and sharing, combined with the use of mature FOSS projects facilitate the creation of sophisticated interoperable Web-based information systems in the water domain.
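As a small example of the open specifications (OS) mentioned above, the following hypothetical Python snippet requests a map image from an OGC Web Map Service endpoint; the server URL and layer name are placeholders, and the parameters follow the standard WMS 1.3.0 GetMap request.

    # Hedged sketch of an OGC WMS GetMap request; URL and layer are assumed.
    import requests  # third-party HTTP library

    params = {
        "service": "WMS",
        "version": "1.3.0",
        "request": "GetMap",
        "layers": "hydrology:river_basins",  # hypothetical layer name
        "bbox": "45.0,9.0,46.0,10.0",        # lat/lon order for EPSG:4326
        "crs": "EPSG:4326",
        "width": 800,
        "height": 600,
        "format": "image/png",
    }
    response = requests.get("https://example.org/geoserver/wms", params=params)
    with open("basins.png", "wb") as f:
        f.write(response.content)  # save the rendered map tile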
Intelligent sensor and controller framework for the power grid
Akyol, Bora A.; Haack, Jereme Nathan; Craig, Jr., Philip Allen; Tews, Cody William; Kulkarni, Anand V.; Carpenter, Brandon J.; Maiden, Wendy M.; Ciraci, Selim
2015-07-28
Disclosed below are representative embodiments of methods, apparatus, and systems for monitoring and using data in an electric power grid. For example, one disclosed embodiment comprises a sensor for measuring an electrical characteristic of a power line, electrical generator, or electrical device; a network interface; a processor; and one or more computer-readable storage media storing computer-executable instructions. In this embodiment, the computer-executable instructions include instructions for implementing an authorization and authentication module for validating a software agent received at the network interface; instructions for implementing one or more agent execution environments for executing agent code that is included with the software agent and that causes data from the sensor to be collected; and instructions for implementing an agent packaging and instantiation module for storing the collected data in a data container of the software agent and for transmitting the software agent, along with the stored data, to a next destination.
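A minimal sketch of the agent pattern described in this abstract, with hypothetical Python names standing in for the validation module, the agent execution environment, and the agent's data container:

    # Illustrative mobile-agent sketch: the agent carries a data container,
    # each host validates it, runs its code, and stores a sensor reading.
    from dataclasses import dataclass, field

    @dataclass
    class SoftwareAgent:
        agent_id: str
        signature: str                            # checked before execution
        data: list = field(default_factory=list)  # the agent's data container

    def validate(agent: SoftwareAgent) -> bool:
        # Stand-in for the authorization and authentication module.
        return agent.signature == "trusted"

    def execute_on_host(agent: SoftwareAgent, read_sensor) -> None:
        # Stand-in for the agent execution environment.
        if validate(agent):
            agent.data.append(read_sensor())  # store reading in the container

    agent = SoftwareAgent("a1", "trusted")
    execute_on_host(agent, read_sensor=lambda: 239.8)  # e.g. line voltage
    print(agent.data)  # forwarded to the next destination with stored data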
HEP Data Grid Applications in Korea
NASA Astrophysics Data System (ADS)
Cho, Kihyeon; Oh, Youngdo; Son, Dongchul; Kim, Bockjoo; Lee, Sangsan
2003-04-01
We introduce the national HEP Data Grid applications in Korea. Through a five-year HEP Data Grid project (2002-2006) for the CMS, AMS, CDF, PHENIX, K2K and Belle experiments in Korea, the Center for High Energy Physics at Kyungpook National University will construct a 1,000-PC cluster and a related storage system, such as a 1,200 TByte RAID disk system. This project includes a master plan to construct an Asia Regional Data Center by 2006 for the CMS and AMS experiments and a DCAF (DeCentralized Analysis Farm) for the CDF experiment. During the first year of the project, we constructed a cluster of around 200 CPUs with 50 TBytes of storage. We will present our first year's experience with the software and hardware applications for the HEP Data Grid on the EDG and SAM Grid testbeds.
Sustainable Software Decisions for Long-term Projects (Invited)
NASA Astrophysics Data System (ADS)
Shepherd, A.; Groman, R. C.; Chandler, C. L.; Gaylord, D.; Sun, M.
2013-12-01
Adopting new, emerging technologies can be difficult for established projects that are positioned to exist for years to come. In some cases the challenge lies in the pre-existing software architecture. In others, the challenge lies in the fluctuation of resources like people, time and funding. The Biological and Chemical Oceanography Data Management Office (BCO-DMO) was created in late 2006 by combining the data management offices for the U.S. GLOBEC and U.S. JGOFS programs to publish data for researchers funded by the National Science Foundation (NSF). Since its inception, BCO-DMO has been supporting access and discovery of these data through web-accessible software systems, and the office has worked through many of the challenges of incorporating new technologies into its software systems. From migrating human readable, flat file metadata storage into a relational database, and now, into a content management system (Drupal) to incorporating controlled vocabularies, new technologies can radically affect the existing software architecture. However, through the use of science-driven use cases, effective resource management, and loosely coupled software components, BCO-DMO has been able to adapt its existing software architecture to adopt new technologies. One of the latest efforts at BCO-DMO revolves around applying metadata semantics for publishing linked data in support of data discovery. This effort primarily affects the metadata web interface software at http://bco-dmo.org and the geospatial interface software at http://mapservice.bco-dmo.org/. With guidance from science-driven use cases and consideration of our resources, implementation decisions are made using a strategy to loosely couple the existing software systems to the new technologies. The results of this process led to the use of REST web services and a combination of contributed and custom Drupal modules for publishing BCO-DMO's content using the Resource Description Framework (RDF) via an instance of the Virtuoso Open-Source triplestore.
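As a hedged illustration of the linked-data publishing described above, the following Python snippet uses the open source rdflib library to express a dataset record as RDF; the namespace and identifiers are hypothetical placeholders, not BCO-DMO's actual vocabulary.

    # Sketch of publishing dataset metadata as RDF triples with rdflib.
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import DCTERMS, RDF

    BCO = Namespace("http://example.org/bco-dmo/")  # hypothetical namespace
    g = Graph()
    dataset = URIRef("http://example.org/bco-dmo/dataset/12345")
    g.add((dataset, RDF.type, BCO.Dataset))
    g.add((dataset, DCTERMS.title, Literal("CTD casts, example cruise")))
    g.add((dataset, DCTERMS.creator, Literal("Example Investigator")))
    print(g.serialize(format="turtle"))  # Turtle output for a triplestore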
Cardio-PACs: a new opportunity
NASA Astrophysics Data System (ADS)
Heupler, Frederick A., Jr.; Thomas, James D.; Blume, Hartwig R.; Cecil, Robert A.; Heisler, Mary
2000-05-01
It is now possible to replace film-based image management in the cardiac catheterization laboratory with a Cardiology Picture Archiving and Communication System (Cardio-PACS) based on digital imaging technology. The first step in the conversion process is installation of a digital image acquisition system that is capable of generating high-quality DICOM-compatible images. The next three steps, which are the subject of this presentation, involve image display, distribution, and storage. Clinical requirements and associated cost considerations for these three steps are listed below: Image display: (1) Image quality equal to film, with DICOM format, lossless compression, image processing, desktop PC-based with color monitor, and physician-friendly imaging software; (2) Performance specifications include: acquire 30 frames/sec; replay 15 frames/sec; access to file server 5 seconds, and to archive 5 minutes; (3) Compatibility of image file, transmission, and processing formats; (4) Image manipulation: brightness, contrast, gray scale, zoom, biplane display, and quantification; (5) User-friendly control of image review. Image distribution: (1) Standard IP-based network between cardiac catheterization laboratories, file server, long-term archive, review stations, and remote sites; (2) Non-proprietary formats; (3) Bidirectional distribution. Image storage: (1) CD-ROM vs disk vs tape; (2) Verification of data integrity; (3) User-designated storage capacity for catheterization laboratory, file server, long-term archive. Costs: (1) Image acquisition equipment, file server, long-term archive; (2) Network infrastructure; (3) Review stations and software; (4) Maintenance and administration; (5) Future upgrades and expansion; (6) Personnel.
Improved memory loading techniques for the TSRV display system
NASA Technical Reports Server (NTRS)
Easley, W. C.; Lynn, W. A.; Mcluer, D. G.
1986-01-01
A recent upgrade of the TSRV research flight system at NASA Langley Research Center retained the original monochrome display system. However, the display memory loading equipment was replaced requiring design and development of new methods of performing this task. This paper describes the new techniques developed to load memory in the display system. An outdated paper tape method for loading the BOOTSTRAP control program was replaced by EPROM storage of the characters contained on the tape. Rather than move a tape past an optical reader, a counter was implemented which steps sequentially through EPROM addresses and presents the same data to the loader circuitry. A cumbersome cassette tape method for loading the applications software was replaced with a floppy disk method using a microprocessor terminal installed as part of the upgrade. The cassette memory image was transferred to disk and a specific software loader was written for the terminal which duplicates the function of the cassette loader.
CytometryML binary data standards
NASA Astrophysics Data System (ADS)
Leif, Robert C.
2005-03-01
CytometryML is a proposed new Analytical Cytology (Cytomics) data standard, which is based on a common set of XML schemas for encoding flow cytometry and digital microscopy text based data types (metadata). CytometryML schemas reference both DICOM (Digital Imaging and Communications in Medicine) codes and FCS keywords. Flow Cytometry Standard (FCS) list-mode has been mapped to the DICOM Waveform Information Object. The separation of the large binary data objects (list mode and image data) from the XML description of the metadata permits the metadata to be directly displayed, analyzed, and reported with standard commercial software packages; the direct use of XML languages; and direct interfacing with clinical information systems. The separation of the binary data into its own files simplifies parsing because all extraneous header data has been eliminated. The storage of images as two-dimensional arrays without any extraneous data, such as in the Adobe Photoshop RAW format, facilitates the development by scientists of their own analysis and visualization software. Adobe Photoshop provided the display infrastructure and the translation facility to interconvert between the image data from commercial formats and RAW format. Similarly, the storage and parsing of a list-mode binary data type with a group of parameters that are specified at compilation time is straightforward. However, when the user is permitted at run-time to select a subset of the parameters and/or specify the results of mathematical manipulations, the development of special software was required. The use of CytometryML will permit investigators to create their own interoperable data analysis software and to employ commercially available software to disseminate their data.
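A minimal sketch of reading such a headerless RAW image into a two-dimensional array with NumPy; the file name, dimensions, and pixel type are assumptions that in practice would come from the CytometryML metadata.

    # Read a headerless RAW image of known shape and dtype into a 2-D array.
    import numpy as np

    width, height = 512, 512                  # must be known from the metadata
    pixels = np.fromfile("cell_image.raw", dtype=np.uint16)  # assumed file
    image = pixels.reshape((height, width))   # two-dimensional array, no header
    print(image.mean())                       # e.g. mean intensity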
Operating systems. [of computers
NASA Technical Reports Server (NTRS)
Denning, P. J.; Brown, R. L.
1984-01-01
A computer operating system creates a hierarchy of levels of abstraction, so that at a given level all details concerning lower levels can be ignored. This hierarchical structure separates functions according to their complexity, characteristic time scale, and level of abstraction. The lowest levels include the system's hardware; concepts associated explicitly with the coordination of multiple tasks appear at intermediate levels, which conduct 'primitive processes'. The software semaphore is the mechanism controlling primitive processes that must be synchronized. At higher levels lie, in rising order, access to the secondary storage devices of a particular machine, a 'virtual memory' scheme for managing the main and secondary memories, communication between processes by way of a mechanism called a 'pipe', access to external input and output devices, and a hierarchy of directories cataloguing the hardware and software objects to which access must be controlled.
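To illustrate the semaphore mechanism described above, here is a minimal Python sketch in which a counting semaphore synchronizes two 'primitive processes' (threads standing in for processes):

    # A counting semaphore coordinating a producer and a consumer.
    import threading

    buffer = []
    items = threading.Semaphore(0)  # counts items available to the consumer

    def producer():
        for i in range(3):
            buffer.append(i)
            items.release()          # signal: one more item is available

    def consumer():
        for _ in range(3):
            items.acquire()          # block until the producer signals
            print("consumed", buffer.pop(0))

    t1 = threading.Thread(target=producer)
    t2 = threading.Thread(target=consumer)
    t1.start(); t2.start(); t1.join(); t2.join()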
Real-time monitoring and control of the plasma hearth process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Power, M.A.; Carney, K.P.; Peters, G.G.
1996-05-01
A distributed monitoring and control system is proposed for a plasma hearth, which will be used to decompose hazardous organic materials, encapsulate actinide waste in an obsidian-like slag, and reduce storage volume of actinide waste. The plasma hearth will be installed at ANL-West with the assistance of SAIC. Real-time monitoring of the off-gas system is accomplished using a Sun Workstation and embedded PCs. LabWindows/CVI software serves as the graphical user interface.
A Recommender System in the Cyber Defense Domain
2014-03-27
The host monitoring software is a java based program sending updates to the database on the sensor machine; it gathers information about... A MySQL database located on the sensor machine acts as the storage for the sensors on the network. Snort, Nmap, vulnerability scores, and... The machine with the IDS and the recommender is labeled "sensor". The recommender system code is written in java and compiled using java version 1.6.024.
Automatic Integration Testbeds validation on Open Science Grid
NASA Astrophysics Data System (ADS)
Caballero, J.; Thapa, S.; Gardner, R.; Potekhin, M.
2011-12-01
A recurring challenge in deploying high quality production middleware is the extent to which realistic testing occurs before release of the software into the production environment. We describe here an automated system for validating releases of the Open Science Grid software stack that leverages the (pilot-based) PanDA job management system developed and used by the ATLAS experiment. The system was motivated by a desire to subject the OSG Integration Testbed to more realistic validation tests. In particular those which resemble to every extent possible actual job workflows used by the experiments thus utilizing job scheduling at the compute element (CE), use of the worker node execution environment, transfer of data to/from the local storage element (SE), etc. The context is that candidate releases of OSG compute and storage elements can be tested by injecting large numbers of synthetic jobs varying in complexity and coverage of services tested. The native capabilities of the PanDA system can thus be used to define jobs, monitor their execution, and archive the resulting run statistics including success and failure modes. A repository of generic workflows and job types to measure various metrics of interest has been created. A command-line toolset has been developed so that testbed managers can quickly submit "VO-like" jobs into the system when newly deployed services are ready for testing. A system for automatic submission has been crafted to send jobs to integration testbed sites, collecting the results in a central service and generating regular reports for performance and reliability.
Test Telemetry And Command System (TTACS)
NASA Technical Reports Server (NTRS)
Fogel, Alvin J.
1994-01-01
The Jet Propulsion Laboratory has developed a multimission Test Telemetry and Command System (TTACS) which provides a multimission telemetry and command data system in a spacecraft test environment. TTACS reuses, in the spacecraft test environment, components of the same data system used for flight operations; no new software is developed for the spacecraft test environment. Additionally, the TTACS is transportable to any spacecraft test site, including the launch site. The TTACS is currently operational in the Galileo spacecraft testbed; it is also being provided to support the Cassini and Mars Surveyor Program projects. Minimal personnel data system training is required in the transition from pre-launch spacecraft test to post-launch flight operations since test personnel are already familiar with the data system's operation. Additionally, data system components, e.g. data display, can be reused to support spacecraft software development; and the same data system components will again be reused during the spacecraft integration and system test phases. TTACS usage also results in early availability of spacecraft data to data system development and, as a result, early data system development feedback to spacecraft system developers. The TTACS consists of a multimission spacecraft support equipment interface and components of the multimission telemetry and command software adapted for a specific project. The TTACS interfaces to the spacecraft, e.g., Command Data System (CDS), support equipment. The TTACS telemetry interface to the CDS support equipment performs serial (RS-422)-to-ethernet conversion at rates between 1 bps and 1 mbps, telemetry data blocking and header generation, guaranteed data transmission to the telemetry data system, and graphical downlink routing summary and control. The TTACS command interface to the CDS support equipment is nominally a command file transferred in non-real-time via ethernet. The CDS support equipment is responsible for metering the commands to the CDS; additionally, for Galileo, TTACS includes a real-time interface to the CDS support equipment. The TTACS provides the basic functionality of the multimission telemetry and command data system used during flight operations. TTACS telemetry capabilities include frame synchronization, Reed-Solomon decoding, packet extraction and channelization, and data storage/query. Multimission data display capabilities are also available. TTACS command capabilities include command generation, verification, and storage.
Adopting Open Source Software to Address Software Risks during the Scientific Data Life Cycle
NASA Astrophysics Data System (ADS)
Vinay, S.; Downs, R. R.
2012-12-01
Software enables the creation, management, storage, distribution, discovery, and use of scientific data throughout the data lifecycle. However, the capabilities offered by software also present risks for the stewardship of scientific data, since future access to digital data is dependent on the use of software. From operating systems to applications for analyzing data, the dependence of data on software presents challenges for the stewardship of scientific data. Adopting open source software provides opportunities to address some of the proprietary risks of data dependence on software. For example, in some cases, open source software can be deployed to avoid licensing restrictions for using, modifying, and transferring proprietary software. The availability of the source code of open source software also enables the inclusion of modifications, which may be contributed by various community members who are addressing similar issues. Likewise, an active community that is maintaining open source software can be a valuable source of help, providing an opportunity to collaborate to address common issues facing adopters. As part of the effort to meet the challenges of software dependence for scientific data stewardship, risks from software dependence have been identified that exist during various times of the data lifecycle. The identification of these risks should enable the development of plans for mitigating software dependencies, where applicable, using open source software, and to improve understanding of software dependency risks for scientific data and how they can be reduced during the data life cycle.
GeoTess: A generalized Earth model software utility
Ballard, Sanford; Hipp, James; Kraus, Brian; ...
2016-03-23
GeoTess is a model parameterization and software support library that manages the construction, population, storage, and interrogation of data stored in 2D and 3D Earth models. Here, the software is available in Java and C++, with a C interface to the C++ library.
Tagliaferri, Luca; Gobitti, Carlo; Colloca, Giuseppe Ferdinando; Boldrini, Luca; Farina, Eleonora; Furlan, Carlo; Paiar, Fabiola; Vianello, Federica; Basso, Michela; Cerizza, Lorenzo; Monari, Fabio; Simontacchi, Gabriele; Gambacorta, Maria Antonietta; Lenkowicz, Jacopo; Dinapoli, Nicola; Lanzotti, Vito; Mazzarotto, Renzo; Russi, Elvio; Mangoni, Monica
2018-07-01
The big data approach offers a powerful alternative to evidence-based medicine. This approach could guide cancer management thanks to the application of machine learning to large-scale data. The aim of the Thyroid CoBRA (Consortium for Brachytherapy Data Analysis) project is to develop a standardized web data collection system focused on thyroid cancer. The Metabolic Radiotherapy Working Group of the Italian Association of Radiation Oncology (AIRO) endorsed the implementation of a consortium directed at thyroid cancer management and data collection. The agreement conditions, the ontology of the collected data and the related software services were defined by a multicentre ad hoc working group (WG). Six Italian cancer centres first started the project and defined and signed the Thyroid COBRA consortium agreement. Three data set tiers were identified: Registry, Procedures and Research. The COBRA Storage System (C-SS) appeared not to be time-consuming and to respect privacy, as data can be extracted directly from each centre's storage platform through a secured connection that ensures reliable encryption of sensitive data. Automatic data archiving could be performed directly from the Image Hospital Storage System or the Radiotherapy Treatment Planning Systems. The C-SS architecture will allow "cloud storage" or "distributed learning" approaches for predictive model definition and the further development of clinical decision support tools. The development of the Thyroid COBRA data storage system (C-SS) through a multicentre consortium approach appeared to be a feasible way to set up a complex and privacy-preserving data sharing system oriented to the management of thyroid cancer and, in the near future, of every cancer type. Copyright © 2018 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Cudmore, Alan; Flanegan, Mark
1993-01-01
The NASA Small Explorer Data System (SEDS), a space flight data system developed to support the Small Explorer (SMEX) project, is addressed. The system was flown on the Solar Anomalous Magnetospheric Particle Explorer (SAMPEX) SMEX mission, and with reconfiguration for different requirements will fly on the X-ray Timing Explorer (XTE) and the Tropical Rainfall Measuring Mission (TRMM). SEDS is also foreseen for the Hubble repair mission. Its name was changed to Spacecraft Data System (SDS) in view of expansions. Objectives, SDS hardware, and software are described. Each SDS box contains two computers, data storage memory, uplink (command) reception circuitry, downlink (telemetry) encoding circuitry, Instrument Telemetry Controller (ITC), and spacecraft timing circuitry. The SDS communicates with other subsystems over the MIL-STD-1773 data bus. The SDS software uses a real time Operating System (OS) and the C language. The OS layer, communications and scheduling layer, application task layer, and diagnostic software, are described. Decisions on the use of advanced technologies, such as ASIC's (Application Specific Integrated Circuits) and fiber optics, led to technical improvements, such as lower power and weight, without increasing the risk associated with the data system. The result was a successful SAMPEX development, integration and test, and mission using SEDS, and the upgrading of that system to SDS for TRMM and XTE.
Virtualization and cloud computing in dentistry.
Chow, Frank; Muftu, Ali; Shorter, Richard
2014-01-01
The use of virtualization and cloud computing has changed the way we use computers. Virtualization is a method of placing software called a hypervisor on the hardware of a computer or a host operating system. It allows a guest operating system to run on top of the physical computer with a virtual machine (i.e., virtual computer). Virtualization allows multiple virtual computers to run on top of one physical computer and to share its hardware resources, such as printers, scanners, and modems. This increases the efficient use of the computer by decreasing costs (e.g., hardware, electricity, administration, and management) since only one physical computer is needed and running. This virtualization platform is the basis for cloud computing. It has expanded into areas of server and storage virtualization. One of the commonly used dental storage systems is cloud storage. Patient information is encrypted as required by the Health Insurance Portability and Accountability Act (HIPAA) and stored on off-site private cloud services for a monthly service fee. As computer costs continue to increase, so too will the need for more storage and processing power. Virtual and cloud computing will be a method for dentists to minimize costs and maximize computer efficiency in the near future. This article will provide some useful information on current uses of cloud computing.
NSSDC provides network access to key data via NDADS
NASA Technical Reports Server (NTRS)
Behnke, Jeanne; King, Joseph
1994-01-01
The National Space Science Data Center (NSSDC) is making a growing fraction of its most customer-desirable data electronically accessible via both local and wide area networks. NSSDC is witnessing a great increase in its data dissemination owing to this network accessibility. To provide its customers the best data accessibility, the NSSDC makes data available from a nearline mass storage system, the NSSDC Data Archive and Dissemination Service (NDADS). NDADS, whose initial version was made available in January 1992, is a customized system of hardware and software that provides users access to the nearline data via ANONYMOUS FTP, an e-mail interface (ARMS), and a C-based software library. In January 1992, NDADS registered 416 requests for 1,957 files. By December of 1994, NDADS had been populated with 800 gigabytes of electronically accessible data and had registered 1,458 requests for 20,887 files. In this report we describe the NDADS system, both hardware and software. Later in the report, we discuss some of the lessons that were learned as a result of operating NDADS, particularly in the area of ingest and dissemination.
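A minimal sketch of the ANONYMOUS FTP access path described above, using Python's standard ftplib; the host and file names are hypothetical.

    # Anonymous FTP retrieval of an archived data file (names assumed).
    from ftplib import FTP

    with FTP("ndads.example.gov") as ftp:    # hypothetical host
        ftp.login()                          # anonymous login, no credentials
        with open("dataset.dat", "wb") as f:
            ftp.retrbinary("RETR pub/data/dataset.dat", f.write)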
Comparison of existing digital image analysis systems for the analysis of Thematic Mapper data
NASA Technical Reports Server (NTRS)
Likens, W. C.; Wrigley, R. C.
1984-01-01
Most existing image analysis systems were designed with the Landsat Multi-Spectral Scanner in mind, leaving open the question of whether or not these systems could adequately process Thematic Mapper data. In this report, both hardware and software systems have been evaluated for compatibility with TM data. Lack of spectral analysis capability was not found to be a problem, though techniques for spatial filtering and texture varied. Computer processing speed and data storage of currently existing mini-computer based systems may be less than adequate. Upgrading to more powerful hardware may be required for many TM applications.
Gao, Wu; Xu, Wenjie; Bian, Xuecheng; Chen, Yunmin
2017-11-01
The settlement of any position in the municipal solid waste (MSW) body during the landfilling process and after closure affects the integrity of the internal structure and the storage capacity of the landfill. This paper proposes a practical approach for calculating the settlement and storage capacity of landfills based on space and time discretization of the landfilling process. The MSW body in the landfill was divided into independent column units, and the filling process of each column unit was determined by a simplified complete landfilling process. The settlement of a position in the landfill was calculated from the compression of each MSW layer in every column unit. The simultaneous settlement of all the column units was then integrated to obtain the settlement of the landfill and the storage capacity of all the column units, which allowed the storage capacity of the landfill to be obtained using the layer-wise summation method. When the compression of each MSW layer was calculated, the effects of the fluctuation of the main leachate level and the variation in the unit weight of the MSW on the overburden effective stress were taken into consideration by introducing the main leachate level's proportion and the unit weight versus buried depth curve. This approach is especially significant for MSW with a high kitchen waste content and for landfills in developing countries. The stress-biodegradation compression model was used to calculate the compression of each MSW layer. A software program, Settlement and Storage Capacity Calculation System for Landfills, was developed by integrating the space and time discretization of the landfilling process with the settlement and storage capacity algorithms. The landfilling process of phase IV of the Shanghai Laogang Landfill was simulated using this software. The maximum error in the geometric volume of the landfill between the calculated and measured values is only 2.02%, and the error in accumulated filling weight between the calculated and measured values is less than 5%. These results show that this approach calculates the settlement and storage capacity satisfactorily and reliably. In addition, the development of the elevation lines in the landfill sections created with the software demonstrates that the optimization of the design of the structures should be based on the settlement of the landfill. Since this practical approach can reasonably calculate the storage capacity of landfills and efficiently provide the development of the settlement at each landfilling stage, it can be used for the optimization of landfilling schemes and structural designs. Copyright © 2017 Elsevier Ltd. All rights reserved.
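The layer-wise summation can be sketched in a few lines of Python: the settlement of a column unit is the sum of its layer compressions, and the landfill-wide figures aggregate over column units. The compression function below is a deliberate placeholder, not the paper's stress-biodegradation model.

    # Layer-wise summation of settlement over independent column units.
    def layer_compression(thickness_m, strain):
        # Placeholder model: compression = initial thickness x strain.
        return thickness_m * strain

    def column_settlement(layers):
        # layers: list of (thickness_m, strain) tuples for one column unit.
        return sum(layer_compression(t, s) for t, s in layers)

    columns = [
        [(2.0, 0.15), (2.0, 0.10), (2.0, 0.05)],  # column unit 1 (assumed)
        [(2.0, 0.12), (2.0, 0.08)],               # column unit 2 (assumed)
    ]
    settlements = [column_settlement(c) for c in columns]
    print(settlements)       # settlement of each column unit (m)
    print(sum(settlements))  # aggregate feeding the storage-capacity figure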
48 CFR 1552.215-72 - Instructions for the Preparation of Proposals.
Code of Federal Regulations, 2013 CFR
2013-10-01
... of the information, to expedite review of the proposal, submit an IBM-compatible software or storage... offeror used another spreadsheet program, indicate the software program used to create this information... submission of a compatible software or device will expedite review, failure to submit a disk will not affect...
25 CFR 543.7 - What are the minimum internal control standards for bingo?
Code of Federal Regulations, 2011 CFR
2011-04-01
... software upgrades, data storage media replacement, etc.). The information recorded must be used when...., draw objects and back-up draw objects); and (ii) Random number generator software. (Additional information technology security standards can be found in § 543.16 of this part.) (2) The game software...
48 CFR 1552.215-72 - Instructions for the Preparation of Proposals.
Code of Federal Regulations, 2014 CFR
2014-10-01
... of the information, to expedite review of the proposal, submit an IBM-compatible software or storage... offeror used another spreadsheet program, indicate the software program used to create this information... submission of a compatible software or device will expedite review, failure to submit a disk will not affect...
25 CFR 543.7 - What are the minimum internal control standards for bingo?
Code of Federal Regulations, 2012 CFR
2012-04-01
... software upgrades, data storage media replacement, etc.). The information recorded must be used when...., draw objects and back-up draw objects); and (ii) Random number generator software. (Additional information technology security standards can be found in § 543.16 of this part.) (2) The game software...
Development of a platform-independent receiver control system for SISIFOS
NASA Astrophysics Data System (ADS)
Lemke, Roland; Olberg, Michael
1998-05-01
Up to now, receiver control software has been a time-consuming development, usually written by receiver engineers who had mainly the hardware in mind. We present a low-cost and very flexible system that uses a minimal interface to the real hardware and is easy to adapt to new receivers. Our system uses Tcl/Tk as a graphical user interface (GUI), SpecTcl as a GUI builder, Pgplot as plotting software, a simple query language (SQL) database for information storage and retrieval, Ethernet socket-to-socket communication, and SCPI as a command control language. The complete system is in principle platform independent, but for cost-saving reasons we are actually running it on a PC486 under Linux 2.0.30, which is a copylefted Unix. The only hardware-dependent parts are the digital input/output boards and the analog-to-digital and digital-to-analog converters. In the case of the Linux PC we use a device driver development kit to integrate the boards fully into the kernel of the operating system, which indeed makes them look like ordinary devices. The advantage of this system is firstly the low price and secondly the clear separation between the different software components, which are available for many operating systems. If it is not possible, due to CPU performance limitations, to run all the software on a single machine, the SQL database or the graphical user interface could be installed on separate computers.
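For example, the SCPI-over-Ethernet command path described above can be exercised with a few lines of Python; the address and port are illustrative, and *IDN? is the standard SCPI identification query.

    # Send a SCPI command to an instrument over a raw Ethernet socket.
    import socket

    # 192.0.2.10 is a documentation address; 5025 is a commonly used
    # raw-socket port for SCPI instruments (both assumed here).
    with socket.create_connection(("192.0.2.10", 5025), timeout=5) as sock:
        sock.sendall(b"*IDN?\n")                 # standard identification query
        reply = sock.recv(4096).decode().strip() # instrument's ID string
        print(reply)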
NASA Astrophysics Data System (ADS)
Mohan, C.
In this paper, I survey briefly some of the recent and emerging trends in hardware and software features which impact high performance transaction processing and data analytics applications. These features include multicore processor chips, ultra large main memories, flash storage, storage class memories, database appliances, field programmable gate arrays, transactional memory, key-value stores, and cloud computing. While some applications, e.g., Web 2.0 ones, were initially built without traditional transaction processing functionality in mind, slowly system architects and designers are beginning to address such previously ignored issues. The availability, analytics and response time requirements of these applications were initially given more importance than ACID transaction semantics and resource consumption characteristics. A project at IBM Almaden is studying the implications of phase change memory on transaction processing, in the context of a key-value store. Bitemporal data management has also become an important requirement, especially for financial applications. Power consumption and heat dissipation properties are also major considerations in the emergence of modern software and hardware architectural features. Considerations relating to ease of configuration, installation, maintenance and monitoring, and improvement of total cost of ownership have resulted in database appliances becoming very popular. The MapReduce paradigm is now quite popular for large scale data analysis, in spite of the major inefficiencies associated with it.
Extreme I/O on HPC for HEP using the Burst Buffer at NERSC
NASA Astrophysics Data System (ADS)
Bhimji, Wahid; Bard, Debbie; Burleigh, Kaylan; Daley, Chris; Farrell, Steve; Fasel, Markus; Friesen, Brian; Gerhardt, Lisa; Liu, Jialin; Nugent, Peter; Paul, Dave; Porter, Jeff; Tsulaia, Vakho
2017-10-01
In recent years there has been increasing use of HPC facilities for HEP experiments. This has initially focussed on less I/O intensive workloads such as generator-level or detector simulation. We now demonstrate the efficient running of I/O-heavy analysis workloads on HPC facilities at NERSC, for the ATLAS and ALICE LHC collaborations as well as astronomical image analysis for DESI and BOSS. To do this we exploit a new 900 TB NVRAM-based storage system recently installed at NERSC, termed a Burst Buffer. This is a novel approach to HPC storage that builds on-demand filesystems on all-SSD hardware that is placed on the high-speed network of the new Cori supercomputer. We describe the hardware and software involved in this system, and give an overview of its capabilities, before focusing in detail on how the ATLAS, ALICE and astronomical workflows were adapted to work on this system. We describe these modifications and the resulting performance results, including comparisons to other filesystems. We demonstrate that we can meet the challenging I/O requirements of HEP experiments and scale to many thousands of cores accessing a single shared storage system.
Research on distributed optical fiber sensing data processing method based on LabVIEW
NASA Astrophysics Data System (ADS)
Li, Zhonghu; Yang, Meifang; Wang, Luling; Wang, Jinming; Yan, Junhong; Zuo, Jing
2018-01-01
Pipeline leak detection and leak location have received extensive attention in industry. In this paper, a distributed optical fiber sensing system is designed for a heat supply pipeline, and the data processing method for distributed optical fiber sensing based on LabVIEW is studied in detail. The hardware system includes the laser, sensing optical fiber, wavelength division multiplexer, photoelectric detector, data acquisition card, and computer. The software system is developed using LabVIEW and adopts a wavelet denoising method to process the temperature information, which improves the SNR. By extracting characteristic values from the fiber temperature information, the system realizes temperature measurement, leak location, and measurement signal storage and query. Compared with the traditional negative pressure wave or acoustic signal methods, the distributed optical fiber temperature measuring system can measure several temperatures in one measurement and locate the leak point accurately. It has broad application prospects.
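As a rough illustration of the wavelet denoising step described above (outside LabVIEW), the following Python sketch uses the open source PyWavelets package on a synthetic temperature trace; the wavelet family, decomposition level, and threshold are illustrative choices.

    # Soft-threshold wavelet denoising of a noisy 1-D signal with PyWavelets.
    import numpy as np
    import pywt

    rng = np.random.default_rng(0)
    signal = (np.sin(np.linspace(0, 4 * np.pi, 1024))
              + 0.3 * rng.standard_normal(1024))     # synthetic noisy trace

    coeffs = pywt.wavedec(signal, "db4", level=4)    # decompose
    threshold = 0.2                                  # illustrative value
    coeffs = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft")
                            for c in coeffs[1:]]     # shrink detail coeffs
    denoised = pywt.waverec(coeffs, "db4")[:len(signal)]  # reconstruct
    print(float(np.std(signal - denoised)))          # size of removed noise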
NASA Astrophysics Data System (ADS)
Li, Lihua; Ma, Jianshe; Liu, Lin; Pan, Longfa; Zhang, Jianyong; Lu, Junhui
2005-09-01
It is well known that the optical pick-up (OPU) plays a very important role in an optical storage system, and the quality of an OPU can be measured by the characteristics of its read-out spot for high density optical storage. This paper therefore designs an OPU model for high density optical storage to study the characteristics of the OPU read-out spot. First, it analyses the optical read-out principle of the OPU and devises an optical read-out system based on this theory; in this step it chiefly designs the grating, splitter, collimator lens, and objective lens. Second, based on the aberration analysis and theory of the splitter, collimator lens, and objective lens, the paper uses the software CODE V to calculate the aberrations and to optimize the optical read-out system. The author thereby obtains an ideal OPU read-out spot for high density optical storage and its characteristics, and also analyses factors that can directly affect the characteristics of the OPU read-out spot. Third, based on these data, the author manufactures a real optical pick-up to validate the designed optical read-out system and uses an Optical Spot Analyzer to capture the image of the read-out spot. Comparing the ideal image to the actual image of the designed optical read-out system, the author finds that the above analysis and design are suitable for high density storage and can be used in actual production. The author also concludes that the main influences on the characteristics of the OPU read-out spot for high density optical storage lie not only in the design of the grating, splitter, collimator lens, and objective lens, but also in the precision of the assembly work.
2016-02-19
power converter, a solar photovoltaic (PV) system with inverter, and eighteen breakers. (Future work will require either validation of these models...custom control software. (For this project, this was done for the energy storage, solar PV, and breakers.) Implement several relay protection functions...for the PV array is given in Section A.3. This profile was generated by applying a decimation/interpolation filter to the signal from a solar flux
2016-02-23
power converter, a solar photovoltaic (PV) system with inverter, and eighteen breakers. (Future work will require either validation of these models or...control software. (For this project, this was done for the energy storage, solar PV, and breakers.) Implement several relay protection functions to...the PV array is given in Section A.3. This profile was generated by applying a decimation/interpolation filter to the signal from a solar flux point
CGNS Mid-Level Software Library and Users Guide
NASA Technical Reports Server (NTRS)
Poirier, Diane; Smith, Charles A. (Technical Monitor)
1998-01-01
The "CFD General Notation System" (CGNS) consists of a collection of conventions, and conforming software, for the storage and retrieval of Computational Fluid Dynamics (CFD) data. It facilitates the exchange of data between sites and applications, and helps stabilize the archiving of aerodynamic data. This effort was initiated in order to streamline the procedures in exchanging data and software between NASA and its customers, but the goal is to develop CGNS into a National Standard for the exchange of aerodynamic data. The CGNS development team is comprised of members from Boeing Commercial Airplane Group, NASA-Ames, NASA-Langley, NASA-Lewis, McDonnell-Douglas Corporation (now Boeing-St. Louis), Air Force-Wright Lab., and ICEM-CFD Engineering. The elements of CGNS address all activities associated with the storage of data on external media and its movement to and from application programs. These elements include: - The Advanced Data Format (ADF) Database manager, consisting of both a file format specification and its I/O software, which handles the actual reading and writing of data from and to external storage media; - The Standard Interface Data Structures (SIDS), which specify the intellectual content of CFD data and the conventions governing naming and terminology; - The SIDS-to-ADF File Mapping conventions, which specify the exact location where the CFD data defined by the SIDS is to be stored within the ADF file(s); and - The CGNS Mid-level Library, which provides CFD-knowledgeable routines suitable for direct installation into application codes. The CGNS Mid-level Library was designed to ease the implementation of CGNS by providing developers with a collection of handy I/O functions. Since knowledge of the ADF core is not required to use this library, it will greatly facilitate the task of interfacing with CGNS. There are currently 48 user callable functions that comprise the Mid-level library and are described in the Users Guide. The library is written in C, but each function has a FORTRAN counterpart.
SIDS-to-ADF File Mapping Manual
NASA Technical Reports Server (NTRS)
McCarthy, Douglas; Smith, Matthew; Poirier, Diane; Smith, Charles A. (Technical Monitor)
2002-01-01
The "CFD General Notation System" (CGNS) consists of a collection of conventions, and conforming software, for the storage and retrieval of Computational Fluid Dynamics (CFD) data. It facilitates the exchange of data between sites and applications, and helps stabilize the archiving of aerodynamic data. This effort was initiated in order to streamline the procedures in exchanging data and software between NASA and its customers, but the goal is to develop CGNS into a National Standard for the exchange of aerodynamic data. The CGNS development team is comprised of members from Boeing Commercial Airplane Group, NASA-Ames, NASA-Langley, NASA-Lewis, McDonnell-Douglas Corporation (now Boeing-St. Louis), Air Force-Wright Lab., and ICEM-CFD Engineering. The elements of CGNS address all activities associated with the storage of data on external media and its movement to and from application programs. These elements include: 1) The Advanced Data Format (ADF) Database manager, consisting of both a file format specification and its I/O software, which handles the actual reading and writing of data from and to external storage media; 2) The Standard Interface Data Structures (SIDS), which specify the intellectual content of CFD data and the conventions governing naming and terminology; 3) The SIDS-to-ADF File Mapping conventions, which specify the exact location where the CFD data defined by the SIDS is to be stored within the ADF file(s); and 4) The CGNS Mid-level Library, which provides CFD-knowledgeable routines suitable for direct installation into application codes. The SIDS-toADF File Mapping Manual specifies the exact manner in which, under CGNS conventions, CFD data structures (the SIDS) are to be stored in (i.e., mapped onto) the file structure provided by the database manager (ADF). The result is a conforming CGNS database. Adherence to the mapping conventions guarantees uniform meaning and location of CFD data within ADF files, and thereby allows the construction of universal software to read and write the data.
From a paper-based to an electronic registry in physiotherapy.
Buyl, Ronald; Nyssen, Marc
2008-01-01
During the past decade the healthcare industry has evolved from paper-based storage of clinical data into the digital era. Electronic healthcare records play a crucial role in meeting the growing need for integrated data storage and data communication. In this context a new law was issued in Belgium on December 7th, 2005, which requires physiotherapists (but also nurses and speech therapists) to keep an electronic version of the registry. This electronic registry contains all physiotherapeutic acts, starting from January 1, 2007; until that day, a paper version of the registry had to be created every month. This article describes the development of an electronic version of the registry that not only meets all legal constraints, but also makes it possible to verify the traceability and inalterability of the generated documents by means of SHA-256 codes. One of the major concerns of the process was that the rationale behind the electronic registry should conform well to the common practice of the physiotherapist. We therefore opted for a periodic recording of a standardized "image" of the controllable data, taken from the patient database of the software system, into the XML registry messages. The proposed XSLT schema can also form a basis for the development of tools to be used by the controlling authorities. Hopefully the electronic registry for physiotherapists will be a first step towards the future development of a fully integrated electronic physiotherapy record. By means of a certification procedure for the software systems, we succeeded in developing a user-friendly system that enables end-users of a quality-labeled software package to automatically produce all the legally necessary documents concerning the registry. Moreover, we hope that this development will be an incentive for non-users to start working in an electronic way.
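As an illustration of the integrity mechanism named above, here is a minimal SHA-256 sketch in Python; the XML layout and field names are hypothetical, not the Belgian registry schema.

```python
# Sketch: a SHA-256 digest over an XML registry message lets a controlling
# authority detect any later alteration of the stored document.
import hashlib

def registry_digest(xml_bytes: bytes) -> str:
    """Return the SHA-256 hex digest of an XML registry message."""
    return hashlib.sha256(xml_bytes).hexdigest()

# Hypothetical registry message (not the real schema):
message = b"<registry><act code='123' date='2007-01-15'/></registry>"
stored_digest = registry_digest(message)

# Recomputing the digest later: any change to the message changes the
# digest, so inalterability is verifiable.
assert registry_digest(message) == stored_digest
```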
Data Processing Center of Radioastron Project: 3 years of operation.
NASA Astrophysics Data System (ADS)
Shatskaya, Marina
The ASC DATA PROCESSING CENTER (DPC) of the Radioastron Project is a fail-safe, centralized complex of interconnected software and hardware components together with organizational procedures. The tasks facing the scientific data processing center are the organization of service information exchange, the collection of scientific data, the storage of all scientific data, and science-oriented data processing. The DPC takes part in the information exchange with two tracking stations in Pushchino (Russia) and Green Bank (USA), about 30 ground telescopes, the ballistic center, tracking headquarters and the session scheduling center. Enormous flows of information arrive at the Astro Space Center. To handle these enormous data volumes, we have developed specialized network infrastructure, Internet channels and storage. The computer complex has been designed at the Astro Space Center (ASC) of the Lebedev Physical Institute and includes: 800 TB of on-line storage; a 2000 TB hard drive archive; a backup system on magnetic tapes (2000 TB); 24 TB of redundant storage at Pushchino Radio Astronomy Observatory; Web and FTP servers; and DPC management and data transmission networks. The structure and functions of the ASC Data Processing Center are fully adequate to the data processing requirements of the Radioastron Mission, and this has been successfully confirmed during the Fringe Search, the Early Science Program and the first year of the Key Science Program.
Development of an automated electrical power subsystem testbed for large spacecraft
NASA Technical Reports Server (NTRS)
Hall, David K.; Lollar, Louis F.
1990-01-01
The NASA Marshall Space Flight Center (MSFC) has developed two autonomous electrical power system breadboards. The first breadboard, the autonomously managed power system (AMPS), is a two-power-channel system featuring energy generation and storage and 24 kW of switchable loads, all under computer control. The second breadboard, the space station module/power management and distribution (SSM/PMAD) testbed, is a two-bus 120-Vdc model of the Space Station power subsystem featuring smart switchgear and multiple knowledge-based control systems. NASA/MSFC is combining these two breadboards to form a complete autonomous source-to-load power system called the large autonomous spacecraft electrical power system (LASEPS). LASEPS is a high-power, intelligent, physical electrical power system testbed which can be used to derive and test new power system control techniques, new power switching components, and new energy storage elements in a more accurate and realistic fashion. LASEPS has the potential to be interfaced with other spacecraft subsystem breadboards in order to simulate an entire space vehicle. The two individual systems, the combined systems (hardware and software), and the current and future uses of LASEPS are described.
Mercury⊕: An evidential reasoning image classifier
NASA Astrophysics Data System (ADS)
Peddle, Derek R.
1995-12-01
MERCURY⊕ is a multisource evidential reasoning classification software system based on the Dempster-Shafer theory of evidence. The design and implementation of this software package is described for improving the classification and analysis of multisource digital image data necessary for addressing advanced environmental and geoscience applications. In the remote-sensing context, the approach provides a more appropriate framework for classifying modern, multisource, and ancillary data sets which may contain a large number of disparate variables with different statistical properties, scales of measurement, and levels of error which cannot be handled using conventional Bayesian approaches. The software uses a nonparametric, supervised approach to classification, and provides a more objective and flexible interface to the evidential reasoning framework using a frequency-based method for computing support values from training data. The MERCURY⊕ software package has been implemented efficiently in the C programming language, with extensive use made of dynamic memory allocation procedures and compound linked list and hash-table data structures to optimize the storage and retrieval of evidence in a Knowledge Look-up Table. The software is complete with a full user interface and runs under the Unix, Ultrix, VAX/VMS, MS-DOS, and Apple Macintosh operating systems. An example of classifying alpine land cover and permafrost active layer depth in northern Canada is presented to illustrate the use and application of these ideas.
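The core operation of such an evidential reasoning classifier is Dempster's rule of combination. A minimal sketch follows, with illustrative mass functions rather than MERCURY⊕'s frequency-based supports computed from training data.

```python
# Sketch of Dempster's rule: combine two mass functions over a frame of
# discernment; the masses below are made-up examples.
def combine(m1, m2):
    """Combine two mass functions (dicts: frozenset -> mass)."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    k = 1.0 - conflict            # Dempster normalization factor
    return {s: m / k for s, m in combined.items()}

# Two independent sources of evidence over the classes {water, forest}:
m_spectral = {frozenset({"water"}): 0.7, frozenset({"water", "forest"}): 0.3}
m_ancillary = {frozenset({"forest"}): 0.4, frozenset({"water", "forest"}): 0.6}
print(combine(m_spectral, m_ancillary))
```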
The control system of the multi-strip ionization chamber for the HIMM
NASA Astrophysics Data System (ADS)
Li, Min; Yuan, Y. J.; Mao, R. S.; Xu, Z. G.; Li, Peng; Zhao, T. C.; Zhao, Z. L.; Zhang, Nong
2015-03-01
The Heavy Ion Medical Machine (HIMM) is a carbon ion cancer treatment facility being built by the Institute of Modern Physics (IMP) in China. In this facility, the transverse profile and intensity of the beam at the treatment terminals will be measured by a multi-strip ionization chamber. In order to provide the beam position feedback needed for automatic beam commissioning, the Data Acquisition (DAQ) system of this detector must achieve a reaction time of less than 1 ms. Therefore, the control system and the software framework for the DAQ have been redesigned and developed with National Instruments Compact Reconfigurable Input/Output (CompactRIO) instead of PXI 6133. The software is LabVIEW-based and developed following the producer-consumer pattern, with a message mechanism and queue technology. The newly designed control system has been tested with carbon beam at the Heavy Ion Research Facility at Lanzhou-Cooler Storage Ring (HIRFL-CSR), where it provided a single beam profile measurement in less than 1 ms with 1 mm beam position resolution. The fast reaction time and high-precision data processing during the beam test have verified the usability and maintainability of the software framework. Furthermore, such a software architecture is easily adapted to applications with different detectors, such as a wire scanner detector.
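A minimal sketch of the producer-consumer pattern with a queue that the abstract names, in Python rather than LabVIEW; the profile size and the centroid computation are illustrative assumptions.

```python
# Sketch: a queue decouples the detector readout (producer) from the
# beam-position processing (consumer), so readout never blocks on analysis.
import queue
import threading

buf = queue.Queue(maxsize=1024)

def producer(n_events=100):
    for i in range(n_events):
        profile = [i] * 64          # stand-in for one multi-strip readout
        buf.put(profile)            # blocks only if the consumer falls behind
    buf.put(None)                   # sentinel: end of run

def consumer():
    while (profile := buf.get()) is not None:
        # Beam position as the charge-weighted centroid of the strips.
        centroid = sum(j * q for j, q in enumerate(profile)) / max(sum(profile), 1)
        # ... feed 'centroid' to the beam-position feedback here ...

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start(); t1.join(); t2.join()
```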
Software defined photon counting system for time resolved x-ray experiments.
Acremann, Y; Chembrolu, V; Strachan, J P; Tyliszczak, T; Stöhr, J
2007-01-01
The time structure of synchrotron radiation allows time-resolved experiments with sub-100 ps temporal resolution using a pump-probe approach. However, the relaxation time of the samples may require a lower repetition rate of the pump pulse compared to the full repetition rate of the x-ray pulses from the synchrotron. Using only the x-ray pulse immediately following the pump pulse is not efficient and often requires special operation modes in which only a few buckets of the storage ring are filled. We designed a novel software-defined photon counting system that makes it possible to implement a variety of pump-probe schemes at the full repetition rate. The large number of photon counters allows the response of the sample to be detected at multiple time delays simultaneously, thus improving the efficiency of the experiment. The system has been successfully applied to time-resolved scanning transmission x-ray microscopy; the technique, however, is applicable more generally.
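A minimal sketch of the multi-delay counting idea described above: every photon is sorted into a counter by its delay relative to the most recent pump pulse, so no x-ray bucket is wasted. The pump period and bin width here are purely illustrative assumptions.

```python
# Sketch: per-delay photon counters for a pump-probe scheme running at the
# full x-ray repetition rate (all timing numbers are assumptions).
import numpy as np

pump_period_ns = 2000.0           # pump fires every 2 us (assumed)
bin_width_ns = 2.0                # x-ray bucket spacing (assumed)
n_bins = int(pump_period_ns / bin_width_ns)
counters = np.zeros(n_bins, dtype=np.int64)

def record(photon_times_ns):
    """Accumulate photon counts into per-delay counters."""
    delays = np.mod(photon_times_ns, pump_period_ns)   # delay after pump
    idx = (delays / bin_width_ns).astype(int)
    np.add.at(counters, idx, 1)   # every bucket contributes simultaneously

record(np.random.uniform(0, 1e7, size=100_000))  # synthetic photon arrivals
print("counts at first five delays:", counters[:5])
```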
An Object-Relational Ifc Storage Model Based on Oracle Database
NASA Astrophysics Data System (ADS)
Li, Hang; Liu, Hua; Liu, Yong; Wang, Yuan
2016-06-01
As building models become increasingly complicated, the level of collaboration across professionals attracts more attention in the architecture, engineering and construction (AEC) industry. In order to adapt to this change, buildingSMART developed the Industry Foundation Classes (IFC) to facilitate interoperability between software platforms. However, IFC data are currently shared in the form of text files, which has drawbacks. In this paper, considering the object-based inheritance hierarchy of IFC and the storage features of different database management systems (DBMS), we propose a novel object-relational storage model that uses an Oracle database to store IFC data. First, we establish the mapping rules between the data types in the IFC specification and the Oracle database. Second, we design the IFC database according to the relationships among IFC entities. Third, we parse the IFC file and extract the IFC data. Lastly, we store the IFC data into the corresponding tables of the IFC database, as sketched below. In the experiments, three different building models are selected to demonstrate the effectiveness of our storage model. The comparison of experimental statistics proves that IFC data are lossless during data exchange.
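A minimal sketch of the parse-and-store steps, with sqlite3 standing in for Oracle; the IfcWall example, the line parsing, and the table layout are illustrative assumptions, not the paper's actual mapping rules.

```python
# Sketch: map one IFC entity instance from a STEP physical file line into a
# relational table (sqlite3 stands in for the Oracle database of the paper).
import re
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE IfcWall (id INTEGER PRIMARY KEY, guid TEXT, name TEXT)")

# One line of IFC data, as found in the exchanged text file (illustrative):
line = "#42=IFCWALL('2O2Fr$t4X7Zf8NOew3FLOH',$,'Wall-001',$,$,$,$,$,$);"

m = re.match(r"#(\d+)=IFCWALL\('([^']*)',\$,'([^']*)'", line)
if m:
    entity_id, guid, name = int(m.group(1)), m.group(2), m.group(3)
    conn.execute("INSERT INTO IfcWall VALUES (?, ?, ?)", (entity_id, guid, name))

print(conn.execute("SELECT * FROM IfcWall").fetchall())
```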
Joining the petabyte club with direct attached storage
NASA Astrophysics Data System (ADS)
Haupt, Andreas; Leffhalm, Kai; Wegner, Peter; Wiesand, Stephan
2011-12-01
Our site successfully runs more than a Petabyte of online disk, using nothing but Direct Attached Storage. The bulk of this capacity is grid-enabled and served by dCache, but sizable amounts are provided by traditional AFS or modern Lustre filesystems as well. While each of these storage flavors has a different purpose, owing to their respective strengths and weaknesses for certain use cases, their instances are all built from the same universal storage bricks. These are managed using the same scale-out techniques used for compute nodes, and run the same operating system as those, thus fully leveraging the existing know-how and infrastructure. As a result, this storage is cost effective especially regarding total cost of ownership. It is also competitive in terms of aggregate performance, performance per capacity, and - due to the possibility to make use of the latest technology early - density and power efficiency. Further advantages include a high degree of flexibility and complete avoidance of vendor lock-in. Availability and reliability in practice turn out to be more than adequate for a HENP site's major tasks. We present details about this Ansatz for online storage, hardware and software used, tweaking and tuning, lessons learned, and the actual result in practice.
NASA Astrophysics Data System (ADS)
Oya, I.; Anguner, E. A.; Behera, B.; Birsin, E.; Fuessling, M.; Lindemann, R.; Melkumyan, D.; Schlenstedt, S.; Schmidt, T.; Schwanke, U.; Sternberger, R.; Wegner, P.; Wiesand, S.
2014-07-01
The Cherenkov Telescope Array (CTA) will be the next-generation ground-based very-high-energy gamma-ray observatory. CTA will consist of two arrays: one in the Northern hemisphere composed of about 20 telescopes, and one in the Southern hemisphere composed of about 100 telescopes, both arrays containing telescopes of different sizes and types, plus numerous auxiliary devices. In order to provide a test ground for the CTA array control, the steering software of the 12-m medium size telescope (MST) prototype deployed in Berlin has been implemented using the tools and design concepts under consideration for the control of the CTA array. The prototype control system is implemented on the Atacama Large Millimeter/submillimeter Array (ALMA) Common Software (ACS) control middleware, with components implemented in Java, C++ and Python. The interfacing to the hardware is standardized via the Object Linking and Embedding for Process Control Unified Architecture (OPC UA). In order to access the OPC UA servers from the ACS framework in a common way, a library has been developed that makes it possible to tie the OPC UA server nodes, methods and events to their equivalents in ACS components. The front-end of the archive system is able to identify the deployed components and to sample the monitoring points of each component following time and value-change triggers, according to the selected configurations. The back-end of the archive system of the prototype is composed of two different databases: MySQL and MongoDB. MySQL has been selected as the storage for system configurations, while MongoDB is used for efficient storage of device monitoring data, CCD images, logging and alarm information. In this contribution, the details and conclusions on the implementation of the control software of the MST prototype are presented.
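A minimal sketch of the monitoring back end described above: storing a device monitoring sample in MongoDB via pymongo only when a value-change trigger fires. The collection name, document fields and deadband value are assumptions, not the prototype's actual schema.

```python
# Sketch: archive a monitoring point to MongoDB when it changes by more than
# a deadband (a simple value-change trigger); fields are hypothetical.
from datetime import datetime, timezone
from pymongo import MongoClient

monitor = MongoClient("mongodb://localhost:27017")["archive"]["monitoring"]
last_value = {}

def sample(component, point, value, deadband=0.1):
    """Store a monitoring point only when it changes beyond the deadband."""
    prev = last_value.get((component, point))
    if prev is None or abs(value - prev) > deadband:
        monitor.insert_one({"component": component, "point": point,
                            "value": value,
                            "t": datetime.now(timezone.utc)})
        last_value[(component, point)] = value

sample("MST-Drive", "azimuth_deg", 181.3)
sample("MST-Drive", "azimuth_deg", 181.35)   # within deadband: not stored
```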
Preparing for the Next Generation of Direct Broadcast
NASA Astrophysics Data System (ADS)
Shin, H.; Friedman Dubey, K.; Baptiste, E.; Prasad, K.; Lawrence, D.
2010-12-01
With the anticipated launch of NPP, JPSS-1 and GOES-R in the next five years, the flow of weather data to users will increase tenfold (Berchoff, 2009). This volume of data will put a strain on the government infrastructure tasked with data distribution, which could limit real-time data distribution to government users only, forcing others to retrieve their data days to weeks later. To receive real-time data, direct reception will become a necessity. SeaSpace Corporation has created a complete solution in anticipation of the forthcoming needs of data users. This solution is made up of four parts: 1) ground reception stations, 2) software to process the data into products, 3) data storage hardware, and 4) data cataloging software and server. The ground station component consists of two systems, an X/L/S-band tracking system and an L-band geostationary system. The combined X-, L-, and S-band reception capabilities are included to ensure the user can receive the maximum amount of data. The X-band receiver in this system can receive data from Terra, Aqua, NPP, JPSS, Oceansat-2, and FY-3. The L-band receiver can currently receive NOAA and MetOp. The follow-on to MetOp will be assigned the mid-morning orbit in the next-generation constellation, ensuring L-band reception will continue to be a necessity. The S-band is used for DMSP reception, which may, in the near future, become more widely available to non-defense clients. The L-band stationary antenna in the proposed solution is used for reception of geostationary satellites, such as GOES, COMS, and MTSAT. Upon launch, GOES-R data can be received with a hardware/software upgrade. Once the data is received by the ground stations, TeraScan's Rapid Environmental Processing System (REPS) automatically processes the data through level 3 products using the official NOAA and NASA algorithms. REPS can process large amounts of satellite data quickly: for instance, all MODIS products are produced in less than fifteen minutes. After processing, the raw data and products are moved to TeraVault™, SeaSpace's data storage solution. TeraVault™ comes standard with 84 TB of storage, can be easily expanded, and provides online, readily accessible storage for data. To easily manage data of this volume, SeaSpace recommends the TeraCat™ data catalog and retrieval system, which gives users and their customers a web-based interface to search for and order their data. A full direct-reception solution is the only way to guarantee real-time access to the next generation of environmental satellite data. The currently over-tasked system of data distribution via the internet is ill-equipped to serve local and foreign customers on a real-time basis now, and this will only get worse as more data comes online.
Centralized Duplicate Removal Video Storage System with Privacy Preservation in IoT.
Yan, Hongyang; Li, Xuan; Wang, Yu; Jia, Chunfu
2018-06-04
In recent years, the Internet of Things (IoT) has found wide application and attracted much attention. Since most of the end-terminals in IoT have limited capabilities for storage and computing, it has become a trend to outsource data from local storage to the cloud. To further reduce the communication bandwidth and storage space, data deduplication has been widely adopted to eliminate the redundant data. However, since data collected in IoT are sensitive and closely related to users' personal information, the privacy protection of users' information becomes a challenge. As the channels, like the wireless channels between the terminals and the cloud servers in IoT, are public and the cloud servers are not fully trusted, data have to be encrypted before being uploaded to the cloud. However, encryption makes deduplication by the cloud server difficult, because the ciphertexts will differ even if the underlying plaintexts are identical. In this paper, we build a centralized privacy-preserving duplicate removal storage system, which supports both file-level and block-level deduplication. In order to avoid the leakage of statistical information about the data, Intel Software Guard Extensions (SGX) technology is utilized to protect the deduplication process on the cloud server. The results of the experimental analysis demonstrate that the new scheme can significantly improve the deduplication efficiency and enhance the security. It is envisioned that the duplicate removal system with privacy preservation will be of great use in the centralized storage environment of IoT.
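A minimal sketch of block-level deduplication by content hash. In the described system this check runs over encrypted data inside an SGX enclave, which the sketch omits; the block size is also an assumption.

```python
# Sketch: duplicate blocks are detected by SHA-256 digest and stored once;
# an ordered digest list is enough to reassemble each file.
import hashlib

BLOCK = 4096
store = {}           # digest -> block (the deduplicated block store)
index = []           # ordered digests referencing stored blocks

def put(data: bytes):
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # duplicate blocks stored once
        index.append(digest)

put(b"A" * 8192 + b"B" * 4096)
put(b"A" * 4096)                          # entirely duplicate: no new block
print(f"{len(index)} block refs, {len(store)} unique blocks stored")
```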
Thermal Impact of Medium Deep Borehole Thermal Energy Storage on the Shallow Subsurface
NASA Astrophysics Data System (ADS)
Welsch, Bastian; Schulte, Daniel O.; Rühaak, Wolfram; Bär, Kristian; Sass, Ingo
2017-04-01
Borehole heat exchanger arrays are a well-suited and already widely applied method for exploiting the shallow subsurface as seasonal heat storage. However, in most populated regions the shallow subsurface also comprises an important aquifer system used for drinking water production. Thus, the operation of shallow geothermal heat storage systems leads to a significant increase in groundwater temperatures in the proximity of the borehole heat exchanger array. The magnitude of the impact on groundwater quality and microbiology associated with this temperature rise is controversially discussed. Nevertheless, the protection of shallow groundwater resources has priority. Accordingly, water authorities often follow restrictive permission policies for building such storage systems. An alternative approach that avoids this issue is the application of medium deep borehole heat exchanger arrays instead of shallow ones. The thermal impact on shallow aquifers can be significantly reduced, as heat is stored at greater depth, and it can be further diminished by the installation of thermally insulating material in the upper section of the borehole heat exchangers. Based on a numerical simulation study, the advantageous effects of medium deep borehole thermal energy storage are demonstrated and quantified. Finite element software is used to model the heat transport in the subsurface in 3D, while the heat transport in the borehole heat exchangers is solved analytically in 1D. For this purpose, an extended analytical solution is implemented, which also allows for the consideration of a thermally insulating borehole section.
Means of storage and automated monitoring of versions of text technical documentation
NASA Astrophysics Data System (ADS)
Leonovets, S. A.; Shukalov, A. V.; Zharinov, I. O.
2018-03-01
The paper considers the automation of the preparation, storage and version monitoring of text design and program documentation by means of specialized software. Automation of documentation preparation is based on processing of the engineering data contained in the specifications and technical documentation. Data handling assumes the existence of strictly structured electronic documents, prepared in widespread formats according to templates based on industry standards, and the automated generation of the program or design text document. The subsequent life cycle of the document, and of the engineering data it contains, is then controlled, with archival data storage carried out at each stage of the life cycle. A study of the processing speed of different widespread document formats under automated monitoring and storage is given. The newly developed software and the workbenches available to the developer of instrumentation equipment are described.
INTELLIGENT COMPUTING SYSTEM FOR RESERVOIR ANALYSIS AND RISK ASSESSMENT OF THE RED RIVER FORMATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kenneth D. Luff
2002-06-30
Integrated software has been written that comprises the tool kit for the Intelligent Computing System (ICS). Luff Exploration Company is applying these tools for analysis of carbonate reservoirs in the southern Williston Basin. The integrated software programs are designed to be used by a small team consisting of an engineer, a geologist and a geophysicist. The software tools are flexible and robust, allowing application in many environments for hydrocarbon reservoirs. Keystone elements of the software tools include clustering and neural-network techniques. The tools are used to transform seismic attribute data to reservoir characteristics such as storage (phi-h), probable oil-water contacts, structural depths and structural growth history. When these reservoir characteristics are combined with neural network or fuzzy logic solvers, they can provide a more complete description of the reservoir. This leads to better estimates of hydrocarbons in place, areal limits and potential for infill or step-out drilling. These tools were developed and tested using seismic, geologic and well data from the Red River Play in Bowman County, North Dakota and Harding County, South Dakota. The geologic setting for the Red River Formation is shallow-shelf carbonate at a depth from 8,000 to 10,000 ft.
A microprocessor controlled pressure scanning system
NASA Technical Reports Server (NTRS)
Anderson, R. C.
1976-01-01
A microprocessor-based controller and data logger for pressure scanning systems is described. The microcomputer positions and manages data from as many as four 48-port electro-mechanical pressure scanners. The maximum scanning rate is 80 pressure measurements per second (20 ports per second on each of four scanners). The system features on-line calibration, position-directed data storage, and once-per-scan display in engineering units of data from a selected port. The system is designed to be interfaced to a facility computer through a shared memory. System hardware and software are described. Factors affecting measurement error in this type of system are also discussed.
An open-source data storage and visualization back end for experimental data.
Nielsen, Kenneth; Andersen, Thomas; Jensen, Robert; Nielsen, Jane H; Chorkendorff, Ib
2014-04-01
In this article, a flexible free and open-source software system for data logging and presentation will be described. The system is highly modular and adaptable and can be used in any laboratory in which continuous and/or ad hoc measurements require centralized storage. A presentation component for the data back end has furthermore been written that enables live visualization of data on any device capable of displaying Web pages. The system consists of three parts: data-logging clients, a data server, and a data presentation Web site. The logging of data from independent clients leads to high resilience to equipment failure, whereas the central storage of data dramatically eases backup and data exchange. The visualization front end allows direct monitoring of acquired data to see live progress of long-duration experiments. This enables the user to alter experimental conditions based on these data and to interfere with the experiment if needed. The data stored consist both of specific measurements and of continuously logged system parameters. The latter is crucial to a variety of automation and surveillance features, and three cases of such features are described: monitoring system health, getting status of long-duration experiments, and implementation of instant alarms in the event of failure.
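A minimal sketch of an independent data-logging client in the spirit of the system described above; the HTTP endpoint, codename and JSON layout are hypothetical, not the package's actual protocol.

```python
# Sketch: a client logs timestamped measurements to a central data server;
# because clients run independently, one failing instrument does not stop
# logging from the others. Endpoint and payload format are assumptions.
import json
import time
import urllib.request

SERVER = "http://dataserver.local:8080/insert"   # hypothetical endpoint

def log_point(codename, value):
    """Send one timestamped measurement to the central data server."""
    payload = json.dumps({"codename": codename,
                          "time": time.time(),
                          "value": value}).encode()
    req = urllib.request.Request(SERVER, data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=5)

for _ in range(3):                               # continuous system logging
    log_point("chamber_pressure", 2.3e-9)        # stand-in for a real reading
    time.sleep(60)
```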
Repository Planning, Design, and Engineering: Part II-Equipment and Costing.
Baird, Phillip M; Gunter, Elaine W
2016-08-01
Part II of this article discusses and provides guidance on the equipment and systems necessary to operate a repository. The various types of storage equipment and monitoring and support systems are presented in detail. While the material focuses on the large repository, the requirements for a small-scale startup are also presented. Cost estimates and a cost model for establishing a repository are presented. The cost model presents an expected range of acquisition costs for the large capital items in developing a repository. A range of 5,000-7,000 ft² constructed has been assumed, with 50 frozen storage units, to reflect a successful operation with growth potential. No design or engineering costs, permit or regulatory costs, or smaller items such as the computers, software, furniture, phones, and barcode readers required for operations have been included.
Security of patient data when decommissioning ultrasound systems
2017-01-01
Background Although ultrasound systems generally archive to Picture Archiving and Communication Systems (PACS), their archiving workflow typically involves storage to an internal hard disk before data are transferred onwards. Deleting records from the local system will delete entries in the database and from the file allocation table or equivalent but, as with a PC, files can be recovered. Great care is taken with disposal of media from a healthcare organisation to prevent data breaches, but ultrasound systems are routinely returned to lease companies, sold on or donated to third parties without such controls. Methods In this project, five methods of hard disk erasure were tested on nine ultrasound systems being decommissioned: the system's own delete function; full reinstallation of system software; the manufacturer's own disk wiping service; open source disk wiping software for full and just blank space erasure. Attempts were then made to recover data using open source recovery tools. Results All methods deleted patient data as viewable from the ultrasound system and from browsing the disk from a PC. However, patient identifiable data (PID) could be recovered following the system's own deletion and the reinstallation methods. No PID could be recovered after using the manufacturer's wiping service or the open source wiping software. Conclusion The typical method of reinstalling an ultrasound system's software may not prevent PID from being recovered. When transferring ownership, care should be taken that an ultrasound system's hard disk has been wiped to a sufficient level, particularly if the scanner is to be returned with approved parts and in a fully working state. PMID:28228821
NASA Astrophysics Data System (ADS)
Elfman, Mikael; Ros, Linus; Kristiansson, Per; Nilsson, E. J. Charlotta; Pallon, Jan
2016-03-01
With the recent advances towards modern Ion Beam Analysis (IBA), going from one- or few-parameter detector systems to multi-parameter systems, it has been necessary to expand and replace the more than twenty-year-old CAMAC based system. A new VME multi-parameter (presently up to 200 channels) data acquisition and control system has been developed and implemented at the Lund Ion Beam Analysis Facility (LIBAF). The system is based on the VX-511 Single Board Computer (SBC), acting as master with arbiter functionality, and consists of standard VME modules such as Analog to Digital Converters (ADCs), Charge to Digital Converters (QDCs), Time to Digital Converters (TDCs), scalers, IO-cards, and high-voltage and waveform units. The modules have been specially selected to support all of the present detector systems in the laboratory, with the option of future expansion. Typically, the detector systems consist of silicon strip detectors, silicon drift detectors and scintillator detectors, for detection of charged particles, X-rays and γ-rays. The flow of the raw data buffers from the VME bus to their final storage place on a 16-terabyte network-attached storage disk (NAS disk) is described. The acquisition process, remotely controlled over one of the SBC's Ethernet channels, is also discussed. The user interface is written in the Kmax software package, and is used to control the acquisition process as well as for advanced online and offline data analysis through a user-friendly graphical user interface (GUI). In this work the system implementation, layout and performance are presented. The user interface and possibilities for advanced offline analysis are also discussed and illustrated.
Zhou, Chunshan; Zhang, Chao; Tian, Di; Wang, Ke; Huang, Mingzhi; Liu, Yanbiao
2018-01-02
In order to manage water resources, a software sensor model was designed to estimate water quality using a hybrid fuzzy neural network (FNN) in the Guangzhou section of the Pearl River, China. The software sensor system was composed of a data storage module, a fuzzy decision-making module, a neural network module and a fuzzy reasoning generator module. Fuzzy subtractive clustering was employed to capture the character of the model and to optimize the network architecture for enhanced performance. The results indicate that, on the basis of the available on-line measured variables, the software sensor model can accurately predict water quality through the relationship between chemical oxygen demand (COD) and dissolved oxygen (DO), pH and NH4+-N. Owing to its ability to recognize time-series patterns and non-linear characteristics, the software sensor based on the FNN is clearly superior to the traditional neural network model; its R (correlation coefficient), MAPE (mean absolute percentage error) and RMSE (root mean square error) are 0.8931, 10.9051 and 0.4634, respectively.
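As a simplified stand-in for the hybrid FNN (which additionally uses fuzzy decision-making and subtractive clustering), here is a plain neural regressor mapping the on-line variables (DO, pH, NH4+-N) to COD; the data and the assumed relationship are synthetic.

```python
# Sketch of the software-sensor idea: learn COD from on-line measurable
# variables. Synthetic data; not the paper's model or measurements.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform([2.0, 6.5, 0.1], [8.0, 8.5, 5.0], size=(500, 3))  # DO, pH, NH4
# Assumed relationship: COD falls with DO and rises with NH4 (illustrative).
y = 40 - 3.0 * X[:, 0] + 4.0 * X[:, 2] + rng.normal(0, 0.5, 500)

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(X[:400], y[:400])

rmse = np.sqrt(np.mean((model.predict(X[400:]) - y[400:]) ** 2))
print(f"hold-out RMSE: {rmse:.3f} mg/L")
```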
Design of an FPGA-based electronic flow regulator (EFR) for spacecraft propulsion system
NASA Astrophysics Data System (ADS)
Manikandan, J.; Jayaraman, M.; Jayachandran, M.
2011-02-01
This paper describes a scheme for electronically regulating the flow of propellant to the thrusters from a high-pressure storage tank used in spacecraft applications. Precise delivery of propellant to the thrusters ensures that the propulsion system operates at best efficiency by maximizing propellant and power utilization for the mission. The proposed field programmable gate array (FPGA) based electronic flow regulator (EFR) ensures a precise flow of propellant to the thrusters from the high-pressure storage tank. The paper presents the hardware and software design of the electronic flow regulator and the implementation of the regulation logic on an FPGA. The motivation for the proposed FPGA-based electronic flow regulation lies in the disadvantages of the conventional approach of using analog circuits. Digital flow regulation overcomes its analog equivalent because digital circuits are highly flexible, are little affected by noise, deliver repeatable and accurate performance, interface easily to computers, allow storage facilities, and have a lower failure rate. An FPGA also has certain advantages over an ASIC and a microprocessor/micro-controller, which motivated the choice of an FPGA-based electronic flow regulator. Moreover, since the control algorithm is implemented in software, it can be modified without changing the hardware. The scheme is simple enough to adopt for a wide range of applications where flow must be regulated for efficient operation. The proposed scheme is based on a space-qualified re-configurable field programmable gate array (FPGA) and a hybrid micro circuit (HMC). A graphical user interface (GUI) based application software has also been developed for debugging, monitoring and controlling the electronic flow regulator from a PC COM port.
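The abstract does not spell out the regulation law. As an assumption, the following sketches a proportional-integral loop of the kind such a regulator might run, in Python rather than FPGA logic; the gains, the clamping range and the toy plant model are all hypothetical.

```python
# Sketch of a PI flow-regulation loop (an assumed control law, not the
# paper's): drive a valve command from the flow error.
def make_pi_regulator(kp, ki, dt, out_min=0.0, out_max=1.0):
    integral = 0.0
    def step(setpoint, measured_flow):
        nonlocal integral
        error = setpoint - measured_flow
        integral += error * dt
        u = kp * error + ki * integral
        return min(max(u, out_min), out_max)   # clamp to valve command range
    return step

regulate = make_pi_regulator(kp=0.8, ki=0.2, dt=0.01)
flow = 0.0
for _ in range(5):
    valve = regulate(setpoint=1.0, measured_flow=flow)
    flow += 0.5 * (valve - flow)               # crude first-order plant model
    print(f"valve={valve:.3f} flow={flow:.3f}")
```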
General consumer communication tools for improved image management and communication in medicine.
Rosset, Chantal; Rosset, Antoine; Ratib, Osman
2005-12-01
We elected to explore new technologies emerging on the general consumer market that can improve and facilitate image and data communication in medical and clinical environments. These new technologies, developed for communication and storage of data, can improve user convenience and facilitate the communication and transport of images and related data beyond the usual limits and restrictions of a traditional picture archiving and communication system (PACS) network. We specifically tested and implemented three new technologies provided on Apple computer platforms. (1) We adopted the iPod, an MP3 portable player with hard disk storage, to easily and quickly move large numbers of DICOM images. (2) We adopted iChat, a videoconference and instant-messaging software, to transmit DICOM images in real time to a distant computer for conferencing teleradiology. (3) Finally, we developed a direct secure interface to the iDisk service, a file-sharing service based on the WebDAV technology, to send and share DICOM files between distant computers. These three technologies were integrated into a new open-source image navigation and display software called OsiriX, allowing for manipulation and communication of multimodality and multidimensional DICOM image data sets. This software is freely available as an open-source project at http://homepage.mac.com/rossetantoine/OsiriX. Our experience showed that the implementation of these technologies allowed us to significantly enhance the existing PACS with valuable new features without any additional investment or the need for complex extensions of our infrastructure. The added features, such as teleradiology, secure and convenient image and data communication, and the use of external data storage services, open the gate to a much broader extension of our imaging infrastructure to the outside world.
ERIC Educational Resources Information Center
Miller, Michael J.
1984-01-01
Description of the Macintosh personal, educational, and business computer produced by Apple covers cost; physical characteristics including display devices, circuit boards, and built-in features; company-produced software; third-party produced software; memory and storage capacity; word-processing features; and graphics capabilities. (MBR)
An optimized video system for augmented reality in endodontics: a feasibility study.
Bruellmann, D D; Tjaden, H; Schwanecke, U; Barth, P
2013-03-01
We propose an augmented reality system for the reliable detection of root canals in video sequences based on k-nearest neighbor color classification, and introduce a simple geometric criterion for teeth. The new software was implemented using C++, Qt, and the image processing library OpenCV. Teeth are detected in video images to restrict the segmentation of the root canal orifices, using a k-nearest neighbor algorithm. The locations of the root canal orifices were determined using Euclidean distance-based image segmentation. A set of 126 human teeth with known and verified locations of the root canal orifices was used for evaluation. The software detects root canal orifices for automatic classification of the teeth in video images and stores the location and size of the found structures. Overall, 287 of 305 root canals were correctly detected. The overall sensitivity was about 94%. Classification accuracy ranged from 65.0 to 81.2% for molars and from 85.7 to 96.7% for premolars. The realized software shows that observations made in anatomical studies can be exploited to automate real-time detection of root canal orifices and tooth classification with a software system. Automatic storage of the location, size, and orientation of the found structures with this software can be used for future anatomical studies. Thus, statistical tables with canal locations will be derived, which can improve anatomical knowledge of the teeth and ease root canal detection in the future. For this purpose the software is freely available at: http://www.dental-imaging.zahnmedizin.uni-mainz.de/.
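A minimal sketch of the k-nearest-neighbor color classification step using OpenCV's KNearest on per-pixel color values; the training colors, labels and k are illustrative, not the paper's trained model.

```python
# Sketch: classify each pixel as tooth-colored (0) or dark orifice (1) by
# k-NN over RGB values; training samples are made-up examples.
import numpy as np
import cv2

samples = np.array([[200, 190, 170], [210, 200, 185],   # tooth-colored
                    [60, 30, 30], [80, 40, 35]],        # dark orifice
                   dtype=np.float32)
labels = np.array([[0], [0], [1], [1]], dtype=np.float32)

knn = cv2.ml.KNearest_create()
knn.train(samples, cv2.ml.ROW_SAMPLE, labels)

image = np.random.randint(0, 255, (4, 4, 3), dtype=np.uint8)  # stand-in frame
pixels = image.reshape(-1, 3).astype(np.float32)
_, results, _, _ = knn.findNearest(pixels, k=3)
mask = results.reshape(4, 4)          # 1 where a pixel looks like an orifice
print(mask)
```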
Development of new data acquisition system for COMPASS experiment
NASA Astrophysics Data System (ADS)
Bodlak, M.; Frolov, V.; Jary, V.; Huber, S.; Konorov, I.; Levit, D.; Novy, J.; Salac, R.; Virius, M.
2016-04-01
This paper presents the development and recent status of the new data acquisition system of the COMPASS experiment at CERN, with a trigger rate of up to 50 kHz and an average event size of 36 kB during a 10-second beam period followed by approximately 40 seconds without beam. In the original DAQ, event building is performed by software deployed on a switched computer network, and the data readout is based on deprecated PCI technology; the new system replaces the event building network with custom FPGA-based hardware. The custom cards are introduced and the advantages of FPGA technology for DAQ-related tasks are discussed. In this paper, we focus on the software part, which is mainly responsible for control and monitoring. Most of the system can run as slow control; only the readout process has real-time requirements. The design of the software is built on state machines implemented using the Qt framework; communication between the remote nodes that form the software architecture is based on the DIM library and IPBus technology. Furthermore, the PHP and JS languages are used to maintain the system configuration, and the MySQL database was selected as storage for both the configuration of the system and system messages. The system has been designed with a maximum throughput of 1500 MB/s and a large buffering capability used to spread the load on readout computers over a longer period of time. Great emphasis is put on data latency, data consistency, and timing checks, which are performed at each stage of event assembly. The system collects the results of these checks, which, together with a special data format, allow the software to localize the origin of problems in the data transmission process. A prototype version of the system has already been developed and tested; it fulfills all given requirements. It is expected that the full-scale version of the system will be finalized in June 2014 and deployed in September, provided that tests with a cosmic run succeed.
Lou, Jerry J; Andrechak, Gary; Riben, Michael; Yong, William H
2011-01-01
Patient safety initiatives throughout the anatomic laboratory and in biorepository laboratories have mandated increasing emphasis on the need for accurately identifying and tracking biospecimen assets throughout their production lifecycle and for archiving/retrieval purposes. However, increasing production volume along with complex workflow characteristics, reliance on manual production processes, and required asset movement to disparate destinations throughout asset lifecycles continue to challenge laboratory efforts. Radio Frequency Identification (RFID) technology, the use of radio waves to communicate data between electronic tags attached to objects and a reader, shows significant potential to facilitate and overcome these hurdles. Advantages over traditional barcode labeling include readability without direct line-of-sight alignment to the reader, the ability to read multiple tags simultaneously, higher data storage capacity, a faster data transmission rate, and the capacity to perform multiple read-writes of data to the tag. Most importantly, the use of radio waves decreases the need to manually scan each asset at each step where an identification or tracking event is needed. Temperature monitoring by on-board sensors and three-dimensional position tracking are additional potential benefits of using RFID technology. To date, barriers to implementation of RFID systems in the anatomic laboratory include the increased costs of tags, readers and system software, data security concerns, the lack of specific data standards for stored information, and the potential for technological obsolescence during decades of specimen storage. Novel RFID production techniques and increased production capacity are projected to lower the costs of some tags to a few cents each. Potentially, information security concerns can be addressed by techniques such as shielding, data encryption, and tag pseudonyms. Commitment by stakeholder groups to develop RFID tag data standards for anatomic pathology and biorepository laboratories could avoid or mitigate the "islands of data" dilemma presented by barcode usage, where there are innumerable standards and a consequent paucity of hardware or software "plug and play" interoperability. Work remains to be done to establish the durability and appropriate shielding of individual tag types for use in harsh laboratory environmental conditions, and for long-term archival storage. Finally, given the requirements for long-term storage of biospecimen assets, consideration should be given to ways of mitigating data isolation due to eventual technological obsolescence of a particular RFID technology or software.
The Muon Conditions Data Management: Database Architecture and Software Infrastructure
NASA Astrophysics Data System (ADS)
Verducci, Monica
2010-04-01
The management of the Muon Conditions Database will be one of the most challenging applications for the Muon System, both in terms of data volumes and rates and in terms of the variety of the data stored and their analysis. The Muon conditions database is responsible for the storage of almost all of the 'non-event' data and detector quality flags needed for debugging the detector operations and for performing reconstruction and analysis. In particular for the early data, knowledge of the detector performance and of the corrections in terms of efficiency and calibration will be extremely important for the correct reconstruction of the events. In this work, an overview of the entire Muon conditions database architecture is given, covering the different sources of the data and the storage model used, including the associated database technology. Particular emphasis is given to the Data Quality chain: the flow of the data, the analysis and the final results are described. In addition, the software interfaces used to access the conditions data are described, in particular within ATHENA, the ATLAS offline reconstruction framework.
Golberg, Alexander; Linshiz, Gregory; Kravets, Ilia; Stawski, Nina; Hillson, Nathan J; Yarmush, Martin L; Marks, Robert S; Konry, Tania
2014-01-01
We report an all-in-one platform - ScanDrop - for the rapid and specific capture, detection, and identification of bacteria in drinking water. The ScanDrop platform integrates droplet microfluidics, a portable imaging system, and cloud-based control software and data storage. The cloud-based control software and data storage enables robotic image acquisition, remote image processing, and rapid data sharing. These features form a "cloud" network for water quality monitoring. We have demonstrated the capability of ScanDrop to perform water quality monitoring via the detection of an indicator coliform bacterium, Escherichia coli, in drinking water contaminated with feces. Magnetic beads conjugated with antibodies to E. coli antigen were used to selectively capture and isolate specific bacteria from water samples. The bead-captured bacteria were co-encapsulated in pico-liter droplets with fluorescently-labeled anti-E. coli antibodies, and imaged with an automated custom designed fluorescence microscope. The entire water quality diagnostic process required 8 hours from sample collection to online-accessible results compared with 2-4 days for other currently available standard detection methods.
48 CFR 252.239-7018 - Supply chain risk.
Code of Federal Regulations, 2014 CFR
2014-10-01
... subsystem(s) of equipment, that is used in the automatic acquisition, storage, analysis, evaluation..., ancillary equipment (including imaging peripherals, input, output, and storage devices necessary for... of a computer, software, firmware and similar procedures, services (including support services), and...
A Grid Infrastructure for Supporting Space-based Science Operations
NASA Technical Reports Server (NTRS)
Bradford, Robert N.; Redman, Sandra H.; McNair, Ann R. (Technical Monitor)
2002-01-01
Emerging technologies for computational grid infrastructures have the potential to revolutionize the way computers are used in all aspects of our lives. Computational grids are currently being implemented to provide large-scale, dynamic, and secure research and engineering environments based on standards and next-generation reusable software, enabling greater science and engineering productivity through shared resources and distributed computing for less cost than traditional architectures. Combined with the emerging technologies of high-performance networks, grids provide researchers, scientists and engineers the first real opportunity for an effective distributed collaborative environment with access to resources such as computational and storage systems, instruments, and software tools and services for the most computationally challenging applications.
NASA Astrophysics Data System (ADS)
Zhao, Ziyue; Gan, Xiaochuan; Zou, Zhi; Ma, Liqun
2018-01-01
The dynamic envelope measurement plays a very important role in the external dimension design of high-speed trains. To date there has been no digital measurement system to solve this problem. This paper develops an optoelectronic measurement system based on monocular digital cameras, and presents research on the measurement theory, visual target design, calibration algorithm design, software programming and so on. The system consists of several CMOS digital cameras, several luminous measurement targets, a scale bar, data processing software and a terminal computer. The system has such advantages as a large measurement scale, a high degree of automation, strong anti-interference ability, noise rejection and real-time measurement. In this paper, we solve key technical problems such as the transfer, storage and processing of the multiple cameras' high-resolution digital images. The experimental data show that the repeatability of the system is within 0.02 mm and the distance error of the system is within 0.12 mm over the whole workspace. The experiment verifies the rationality of the system scheme and the correctness, precision and effectiveness of the relevant methods.
Digital readout for image converter cameras
NASA Astrophysics Data System (ADS)
Honour, Joseph
1991-04-01
There is an increasing need for fast and reliable analysis of recorded sequences from image converter cameras so that experimental information can be readily evaluated without recourse to more time-consuming photographic procedures. A digital readout system has been developed using a randomly triggerable high-resolution CCD camera, the output of which is suitable for use with an IBM AT-compatible PC. Within half a second of receipt of the trigger pulse, the frame reformatter displays the image, and transfer to storage media can be readily achieved via the PC and dedicated software. Two software programmes offer different levels of image manipulation, which include enhancement routines and parameter calculations with accuracy down to the pixel level. Hard-copy prints can be acquired using a specially adapted Polaroid printer; outputs for laser and video printers extend the overall versatility of the system.
An R package for the design, analysis and operation of reservoir systems
NASA Astrophysics Data System (ADS)
Turner, Sean; Ng, Jia Yi; Galelli, Stefano
2016-04-01
We present a new R package - named "reservoir" - which has been designed for rapid and easy routing of runoff through storage. The package comprises well-established tools for capacity design (e.g., the sequent peak algorithm), performance analysis (storage-yield-reliability and reliability-resilience-vulnerability analysis) and release policy optimization (Stochastic Dynamic Programming). Operating rules can be optimized for water supply, flood control and amenity objectives, as well as for maximum hydropower production. Storage-depth-area relationships are in-built, allowing users to incorporate evaporation from the reservoir surface. We demonstrate the capabilities of the software for global studies using thousands of reservoirs from the Global Reservoir and Dam (GRanD) database fed by historical monthly inflow time series from a 0.5 degree gridded global runoff dataset. The package is freely available through the Comprehensive R Archive Network (CRAN).
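To make the capacity-design step concrete, here is a minimal Python sketch of the sequent peak algorithm named in the abstract; it illustrates the idea only and does not reproduce the "reservoir" package's own R interface.

```python
# A minimal sketch of the sequent peak algorithm: the required storage
# capacity is the largest accumulated deficit between demand and inflow.
def sequent_peak(inflows, demand):
    """Return the storage needed to meet a constant demand (draft).

    Deficit recursion: K_t = max(0, K_{t-1} + demand - inflow_t);
    the required capacity is the largest deficit reached.
    """
    k, capacity = 0.0, 0.0
    for q in inflows:
        k = max(0.0, k + demand - q)
        capacity = max(capacity, k)
    return capacity

# Monthly inflows with a dry spell; constant draft of 5 units/month:
print(sequent_peak([8, 6, 2, 1, 3, 9, 12], demand=5))  # -> 9.0
```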
An Automated Medical Information Management System (OpScan-MIMS) in a Clinical Setting
Margolis, S.; Baker, T.G.; Ritchey, M.G.; Alterescu, S.; Friedman, C.
1981-01-01
This paper describes an automated medical information management system within a clinic setting. The system includes an optically scanned data entry system (OpScan), a generalized, interactive retrieval and storage software system (Medical Information Management System, MIMS) and the use of time-sharing. The system has the advantages of minimal hardware purchase and maintenance, rapid data entry and retrieval, user-created programs, and no need for user knowledge of computer language or technology, and is cost effective. The OpScan-MIMS system has been operational for approximately 16 months in a sexually transmitted disease clinic. The system's applications to medical audit, quality assurance, clinic management and clinical training are demonstrated.
Active Storage with Analytics Capabilities and I/O Runtime System for Petascale Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choudhary, Alok
Computational scientists must understand results from experimental, observational and computational simulation generated data to gain insights and perform knowledge discovery. As systems approach the petascale range, problems that were unimaginable a few years ago are within reach. With the increasing volume and complexity of data produced by ultra-scale simulations and high-throughput experiments, understanding the science is largely hampered by the lack of comprehensive I/O, storage, acceleration of data manipulation, analysis, and mining tools. Scientists require techniques, tools and infrastructure to facilitate better understanding of their data, in particular the ability to effectively perform complex data analysis, statistical analysis and knowledge discovery. The goal of this work is to enable more effective analysis of scientific datasets through the integration of enhancements in the I/O stack, from active storage support at the file system layer to MPI-IO and high-level I/O library layers. We propose to provide software components to accelerate data analytics, mining, I/O, and knowledge discovery for large-scale scientific applications, thereby increasing the productivity of both scientists and systems. Our approaches include: 1) designing interfaces in high-level I/O libraries, such as parallel netCDF, for applications to activate data mining operations at the lower I/O layers; 2) enhancing MPI-IO runtime systems to incorporate the functionality developed as part of the runtime system design; 3) developing parallel data mining programs as part of the runtime library for the server-side file system in PVFS; and 4) prototyping an active storage cluster, which will utilize multicore CPUs, GPUs, and FPGAs to carry out the data mining workload.
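The core idea of active storage can be sketched in a few lines: run the analysis kernel where the data resides and ship only the result. The Python below is a conceptual illustration under that assumption; the class and method names are invented, not the project's API.

```python
# Conceptual sketch of "active storage": execute a reduction on the
# storage node and return only the result, instead of moving the full
# array to the client. All names here are illustrative inventions.
import numpy as np

class ActiveStorageNode:
    def __init__(self, data: np.ndarray):
        self.data = data  # data resident on the storage node

    def read(self) -> np.ndarray:
        return self.data  # traditional path: ship everything

    def reduce(self, op: str):
        # active path: run the analysis kernel server-side
        return {"min": self.data.min, "max": self.data.max,
                "mean": self.data.mean}[op]()

node = ActiveStorageNode(np.random.rand(1_000_000))
print(node.reduce("mean"))  # one scalar crosses the "network", not ~8 MB
```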
Baran, Michael C; Moseley, Hunter N B; Sahota, Gurmukh; Montelione, Gaetano T
2002-10-01
Modern protein NMR spectroscopy laboratories have a rapidly growing need for an easily queried local archival system of raw experimental NMR datasets. SPINS (Standardized ProteIn Nmr Storage) is an object-oriented relational database that provides facilities for high-volume NMR data archival, organization of analyses, and dissemination of results to the public domain by automatic preparation of the header files required for submission of data to the BioMagResBank (BMRB). The current version of SPINS coordinates the process from data collection to BMRB deposition of raw NMR data by standardizing and integrating the storage and retrieval of these data in a local laboratory file system. Additional facilities include a data mining query tool, graphical database administration tools, and an NMRStar v2.1.1 file generator. SPINS also includes a user-friendly internet-based graphical user interface, which is optionally integrated with Varian VNMR NMR data collection software. This paper provides an overview of the data model underlying the SPINS database system, a description of its implementation in Oracle, and an outline of future plans for the SPINS project.
Image matrix processor for fast multi-dimensional computations
Roberson, George P.; Skeate, Michael F.
1996-01-01
An apparatus for multi-dimensional computation which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination.
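The source-cache/coefficient-table/target-cache pattern described above amounts to forming each target pixel as a weighted combination of source pixels, with the weights drawn from the coefficient table. A minimal Python sketch follows; the 3x3 smoothing table is an arbitrary example, not the patented transform.

```python
# Illustrative sketch of the source-cache / coefficient-table /
# target-cache pattern: each target pixel is a weighted sum of source
# pixels. The smoothing table is an invented example.
import numpy as np

def apply_coefficient_table(source: np.ndarray, table: np.ndarray) -> np.ndarray:
    kh, kw = table.shape
    h, w = source.shape
    target = np.zeros((h - kh + 1, w - kw + 1))    # the "target cache"
    for i in range(target.shape[0]):
        for j in range(target.shape[1]):
            window = source[i:i + kh, j:j + kw]    # read from "source cache"
            target[i, j] = np.sum(window * table)  # weights from the table
    return target

src = np.arange(25, dtype=float).reshape(5, 5)
smooth = np.full((3, 3), 1.0 / 9.0)                # uniform 3x3 smoothing
print(apply_coefficient_table(src, smooth))
```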
Integration of communications with the Intelligent Gateway Processor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hampel, V.E.
1986-01-01
The Intelligent Gateway Processor (IGP) software is being used to interconnect users equipped with different personal computers and ASCII terminals to mainframe machines of different make. This integration is made possible by the IGP's unique user interface and networking software. Prototype systems of the table-driven, interpreter-based IGP have been adapted to very different programmatic requirements and have demonstrated substantial increases in end-user productivity. Procedures previously requiring days can now be carried out in minutes. The IGP software has been under development by the Technology Information Systems (TIS) program at Lawrence Livermore National Laboratory (LLNL) since 1975 and has been in use by several federal agencies since 1983: The Air Force is prototyping applications which range from automated identification of spare parts for aircraft to office automation and the controlled storage and distribution of technical orders and engineering drawings. Other applications of the IGP are the Information Management System (IMS) for aviation statistics in the Federal Aviation Administration (FAA), the Nuclear Criticality Information System (NCIS) and a nationwide Cost Estimating System (CES) in the Department of Energy, the library automation network of the Defense Technical Information Center (DTIC), and the modernization program in the Office of the Secretary of Defense (OSD). 31 refs., 9 figs.
Software for Optical Archive and Retrieval (SOAR) user's guide, version 4.2
NASA Technical Reports Server (NTRS)
Davis, Charles
1991-01-01
The optical disk is an emerging technology. Because it is not a magnetic medium, it offers a number of distinct advantages over established forms of storage, advantages that make it extremely attractive. They are as follows: (1) the ability to store much more data within the same space; (2) the random access characteristics of the Write Once Read Many optical disk; (3) a much longer life than that of traditional storage media; and (4) a much greater data access rate. The Software for Optical Archive and Retrieval (SOAR) user's guide is presented.
Data archiving and serving system implementation in CLEP's GRAS Core System
NASA Astrophysics Data System (ADS)
Zuo, Wei; Zeng, Xingguo; Zhang, Zhoubin; Geng, Liang; Li, Chunlai
2017-04-01
The Ground Research & Applications System (GRAS) is one of the five systems of China's Lunar Exploration Project (CLEP); it is responsible for data acquisition, processing, management and application, and it is also the operation control center during in-orbit satellite and payload operation management. Chang'E-1, Chang'E-2 and Chang'E-3 have collected abundant lunar exploration data. The aim of this work is to present the implementation of data archiving and serving in CLEP's GRAS Core System software. This first approach provides a client-side API and server-side software allowing the creation of a simplified version of the CLEPDB data archiving software, and implements all elements required to complete the data archiving flow from data acquisition to persistent storage. The client side includes all necessary components that run on devices that acquire or produce data, distributing and streaming it to configured remote archiving servers. The server side comprises an archiving service that stores all received data into PDS files. The archiving solution aims at storing data coming from the Data Acquisition Subsystem, the Operation Management Subsystem, the Data Preprocessing Subsystem and the Scientific Application & Research Subsystem. The serving solution aims at serving data to the various business systems, scientific researchers and public users. Data-driven and component-clustering methods were adopted in this system; the former is used to solve real-time data archiving and data persistence services, while the latter is used to keep the archive and service continuously able to support new data from Chang'E missions. Meanwhile, it can save software development cost as well.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ballard, Sanford; Hipp, James; Kraus, Brian
GeoTess is a model parameterization and software support library that manages the construction, population, storage, and interrogation of data stored in 2D and 3D Earth models. Here, the software is available in Java and C++, with a C interface to the C++ library.
The CMS High Level Trigger System: Experience and Future Development
NASA Astrophysics Data System (ADS)
Bauer, G.; Behrens, U.; Bowen, M.; Branson, J.; Bukowiec, S.; Cittolin, S.; Coarasa, J. A.; Deldicque, C.; Dobson, M.; Dupont, A.; Erhan, S.; Flossdorf, A.; Gigi, D.; Glege, F.; Gomez-Reino, R.; Hartl, C.; Hegeman, J.; Holzner, A.; Hwong, Y. L.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; O'Dell, V.; Orsini, L.; Paus, C.; Petrucci, A.; Pieri, M.; Polese, G.; Racz, A.; Raginel, O.; Sakulin, H.; Sani, M.; Schwick, C.; Shpakov, D.; Simon, S.; Spataru, A. C.; Sumorok, K.
2012-12-01
The CMS experiment at the LHC features a two-level trigger system. Events accepted by the first level trigger, at a maximum rate of 100 kHz, are read out by the Data Acquisition system (DAQ), and subsequently assembled in memory in a farm of computers running a software high-level trigger (HLT), which selects interesting events for offline storage and analysis at a rate of order a few hundred Hz. The HLT algorithms consist of sequences of offline-style reconstruction and filtering modules, executed on a farm of O(10000) CPU cores built from commodity hardware. Experience from the operation of the HLT system in the collider run 2010/2011 is reported. The current architecture of the CMS HLT, its integration with the CMS reconstruction framework and the CMS DAQ, are discussed in the light of future development. The possible short- and medium-term evolution of the HLT software infrastructure to support extensions of the HLT computing power, and to address remaining performance and maintenance issues, is discussed.
Burdick, David B; Cavnor, Chris C; Handcock, Jeremy; Killcoyne, Sarah; Lin, Jake; Marzolf, Bruz; Ramsey, Stephen A; Rovira, Hector; Bressler, Ryan; Shmulevich, Ilya; Boyle, John
2010-07-14
High throughput sequencing has become an increasingly important tool for biological research. However, the existing software systems for managing and processing these data have not provided the flexible infrastructure that research requires. Existing software solutions provide static and well-established algorithms in a restrictive package. However as high throughput sequencing is a rapidly evolving field, such static approaches lack the ability to readily adopt the latest advances and techniques which are often required by researchers. We have used a loosely coupled, service-oriented infrastructure to develop SeqAdapt. This system streamlines data management and allows for rapid integration of novel algorithms. Our approach also allows computational biologists to focus on developing and applying new methods instead of writing boilerplate infrastructure code. The system is based around the Addama service architecture and is available at our website as a demonstration web application, an installable single download and as a collection of individual customizable services.
NASA Technical Reports Server (NTRS)
Bartram, Peter N.
1989-01-01
The current Life Sciences Laboratory Equipment (LSLE) microcomputer for life sciences experiment data acquisition is now obsolete. Among the weaknesses of the current microcomputer are small memory size, relatively slow analog data sampling rates, and the lack of a bulk data storage device. While life science investigators normally prefer data to be transmitted to Earth as it is taken, this is not always possible. No down-link exists for experiments performed in the Shuttle middeck region. One important aspect of a replacement microcomputer is provision for in-flight storage of experimental data. The Write Once, Read Many (WORM) optical disk was studied because of its high storage density, data integrity, and the availability of a space-qualified unit. In keeping with the goals for a replacement microcomputer based upon commercially available components and standard interfaces, the system studied includes a Small Computer System Interface (SCSI) for interfacing the WORM drive. The system itself is designed around the STD bus, using readily available boards. Configurations examined were: (1) master processor board and slave processor board with the SCSI interface; (2) master processor with SCSI interface; (3) master processor with SCSI and Direct Memory Access (DMA); (4) master processor controlling a separate STD bus SCSI board; and (5) master processor controlling a separate STD bus SCSI board with DMA.
High resolution image processing on low-cost microcomputers
NASA Technical Reports Server (NTRS)
Miller, R. L.
1993-01-01
Recent advances in microcomputer technology have resulted in systems that rival the speed, storage, and display capabilities of traditionally larger machines. Low-cost microcomputers can provide a powerful environment for image processing. A new software program which offers sophisticated image display and analysis on IBM-based systems is presented. Designed specifically for a microcomputer, this program provides a wide-range of functions normally found only on dedicated graphics systems, and therefore can provide most students, universities and research groups with an affordable computer platform for processing digital images. The processing of AVHRR images within this environment is presented as an example.
NASA Technical Reports Server (NTRS)
Schneider, Michelle; Lippincott, Jeff; Chubb, Steve; Whitaker, Jimmy; Rice, Jim; Gillis, Robert; Sims, Chris; Sellers, Donna; Bailey, Darrell (Technical Monitor)
2002-01-01
The Telescience Resource Kit (TReK) is a PC based ground control system. It can be used by a single individual or in a group environment to monitor and control spacecraft systems and payloads. Capabilities include data receipt, data processing, data storage, data management, and data transmission. Commercial-Off-The-Shelf (COTS) hardware and software have been employed to reduce development costs, operations and maintenance costs, and to effectively take advantage of new commercial products as they become available. The TReK system is currently being used to monitor and control payloads aboard the International Space Station. It is located at sites around the world.
NASA Technical Reports Server (NTRS)
1992-01-01
Although the need for orthopaedic shoes is increasing, the number of skilled shoemakers has declined. This has led to the development of a CAD/CAM system to design and fabricate orthopaedic footwear. The NASA-developed RIM database management system is the central repository for CUSTOMLAST's information storage. Several other modules also comprise the system. The project was initiated by Langley Research Center and Research Triangle Institute in cooperation with the Veterans Administration and the National Institute for Disability and Rehabilitation Research. Later development was done by North Carolina State University and the University of Missouri-Columbia. The software is licensed by both universities.
Automated documentation generator for advanced protein crystal growth
NASA Technical Reports Server (NTRS)
Maddux, Gary A.; Provancha, Anna; Chattam, David
1994-01-01
To achieve an environment less dependent on the flow of paper, automated techniques of data storage and retrieval must be utilized. This software system, 'Automated Payload Experiment Tool,' seeks to provide a knowledge-based, hypertext environment for the development of NASA documentation. Once developed, the final system should be able to guide a Principal Investigator through the documentation process in a more timely and efficient manner, while supplying more accurate information to the NASA payload developer. The current system is designed for the development of the Science Requirements Document (SRD), the Experiment Requirements Document (ERD), the Project Plan, and the Safety Requirements Document.
Scalable cloud without dedicated storage
NASA Astrophysics Data System (ADS)
Batkovich, D. V.; Kompaniets, M. V.; Zarochentsev, A. K.
2015-05-01
We present a prototype of a scalable computing cloud. It is intended to be deployed on the basis of a cluster without separate dedicated storage. The dedicated storage is replaced by distributed software storage. In addition, all cluster nodes are used both as computing nodes and as storage nodes. This solution increases utilization of the cluster resources and improves the fault tolerance and performance of the distributed storage. Another advantage of this solution is high scalability with a relatively low initial and maintenance cost. The solution is built on the basis of open source components like OpenStack, CEPH, etc.
Orthos, an alarm system for the ALICE DAQ operations
NASA Astrophysics Data System (ADS)
Chapeland, Sylvain; Carena, Franco; Carena, Wisla; Chibante Barroso, Vasco; Costa, Filippo; Denes, Ervin; Divia, Roberto; Fuchs, Ulrich; Grigore, Alexandru; Simonetti, Giuseppe; Soos, Csaba; Telesca, Adriana; Vande Vyvre, Pierre; von Haller, Barthelemy
2012-12-01
ALICE (A Large Ion Collider Experiment) is the heavy-ion detector studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). The DAQ (Data Acquisition System) facilities handle the data flow from the detectors electronics up to the mass storage. The DAQ system is based on a large farm of commodity hardware consisting of more than 600 devices (Linux PCs, storage, network switches), and controls hundreds of distributed hardware and software components interacting together. This paper presents Orthos, the alarm system used to detect, log, report, and follow-up abnormal situations on the DAQ machines at the experimental area. The main objective of this package is to integrate alarm detection and notification mechanisms with a full-featured issues tracker, in order to prioritize, assign, and fix system failures optimally. This tool relies on a database repository with a logic engine, SQL interfaces to inject or query metrics, and dynamic web pages for user interaction. We describe the system architecture, the technologies used for the implementation, and the integration with existing monitoring tools.
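The inject-metric/raise-alarm pattern described above can be sketched with Python's standard-library sqlite3 module; the schema, metric name and threshold below are illustrative assumptions, not Orthos' actual design.

```python
# Minimal sketch of an alarm-system pattern: metrics injected through an
# SQL interface, with a logic rule promoting out-of-range values to
# alarms. Schema, metric name and threshold are invented assumptions.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE metrics (host TEXT, name TEXT, value REAL)")
db.execute("CREATE TABLE alarms (host TEXT, name TEXT, value REAL)")

def inject(host, name, value, disk_limit=90.0):
    db.execute("INSERT INTO metrics VALUES (?, ?, ?)", (host, name, value))
    # "logic engine": promote threshold violations to trackable alarms
    if name == "disk_used_pct" and value > disk_limit:
        db.execute("INSERT INTO alarms VALUES (?, ?, ?)", (host, name, value))

inject("daq-node-17", "disk_used_pct", 95.5)
print(db.execute("SELECT * FROM alarms").fetchall())
```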
Next Generation Image-Based Phenotyping of Root System Architecture
NASA Astrophysics Data System (ADS)
Davis, T. W.; Shaw, N. M.; Cheng, H.; Larson, B. G.; Craft, E. J.; Shaff, J. E.; Schneider, D. J.; Piñeros, M. A.; Kochian, L. V.
2016-12-01
The development of the Plant Root Imaging and Data Acquisition (PRIDA) hardware/software system enables researchers to collect digital images, along with all the relevant experimental details, of a range of hydroponically grown agricultural crop roots for 2D and 3D trait analysis. Previous efforts of image-based root phenotyping focused on young cereals, such as rice; however, there is a growing need to measure both older and larger root systems, such as those of maize and sorghum, to improve our understanding of the underlying genetics that control favorable rooting traits for plant breeding programs to combat the agricultural risks presented by climate change. Therefore, a larger imaging apparatus has been prototyped for capturing 3D root architecture with an adaptive control system and innovative plant root growth media that retains three-dimensional root architectural features. New publicly available multi-platform software has been released with considerations for both high throughput (e.g., 3D imaging of a single root system in under ten minutes) and high portability (e.g., support for the Raspberry Pi computer). The software features unified data collection, management, exploration and preservation for continued trait and genetics analysis of root system architecture. The new system makes data acquisition efficient and includes features that address the needs of researchers and technicians, such as reduced imaging time, semi-automated camera calibration with uncertainty characterization, and safe storage of the critical experimental data.
Online data handling and storage at the CMS experiment
NASA Astrophysics Data System (ADS)
Andre, J.-M.; Andronidis, A.; Behrens, U.; Branson, J.; Chaze, O.; Cittolin, S.; Darlea, G.-L.; Deldicque, C.; Demiragli, Z.; Dobson, M.; Dupont, A.; Erhan, S.; Gigi, D.; Glege, F.; Gómez-Ceballos, G.; Hegeman, J.; Holzner, A.; Jimenez-Estupiñán, R.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, RK; Morovic, S.; Nuñez-Barranco-Fernández, C.; O'Dell, V.; Orsini, L.; Paus, C.; Petrucci, A.; Pieri, M.; Racz, A.; Roberts, P.; Sakulin, H.; Schwick, C.; Stieger, B.; Sumorok, K.; Veverka, J.; Zaza, S.; Zejdl, P.
2015-12-01
During the LHC Long Shutdown 1, the CMS Data Acquisition (DAQ) system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and support new detector back-end electronics. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. All the metadata needed for bookkeeping are stored in files as well, in the form of small documents using the JSON encoding. The Storage and Transfer System (STS) is responsible for aggregating these files produced by the HLT, storing them temporarily and transferring them to the T0 facility at CERN for subsequent offline processing. The STS merger service aggregates the output files from the HLT from ∼62 sources produced with an aggregate rate of ∼2GB/s. An estimated bandwidth of 7GB/s in concurrent read/write mode is needed. Furthermore, the STS has to be able to store several days of continuous running, so an estimated 250TB of total usable disk space is required. In this article we present the various technological and implementation choices of the three components of the STS: the distributed file system, the merger service and the transfer system.
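Since bookkeeping is file-based with small JSON documents, the merger's aggregation step can be illustrated with a toy Python sketch; the field names are assumptions for illustration, not the actual CMS metadata format.

```python
# Toy sketch of merging per-source JSON bookkeeping documents by summing
# counters, mirroring what a merger service might do with HLT output.
# The "events"/"bytes" field names are invented for illustration.
import json

def merge_metadata(docs):
    merged = {"events": 0, "bytes": 0}
    for doc in docs:
        record = json.loads(doc)
        merged["events"] += record["events"]
        merged["bytes"] += record["bytes"]
    return merged

sources = [json.dumps({"events": 1200, "bytes": 3_500_000}),
           json.dumps({"events": 980, "bytes": 2_900_000})]
print(merge_metadata(sources))  # totals across sources
```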
Grand challenges in mass storage: A systems integrators perspective
NASA Technical Reports Server (NTRS)
Lee, Richard R.; Mintz, Daniel G.
1993-01-01
Within today's much-ballyhooed supercomputing environment, with its GFLOPS of CPU power and Gigabit networks, there exists a major roadblock to computing success: that of Mass Storage. The solution to this mass storage problem is considered to be one of the 'Grand Challenges' facing the computer industry today, as well as long into the future. It has become obvious to us, as well as many others in the industry, that there is no clear single solution in sight. The Systems Integrator today is faced with a myriad of quandaries in approaching this challenge. He must first be innovative in approach, and second choose hardware solutions that are volumetrically efficient, high in signal bandwidth, available from multiple sources, competitively priced, and extensible for forward growth. In addition he must comply with a variety of mandated, and often conflicting, software standards (GOSIP, POSIX, IEEE, MSRM 4.0, and others), and finally he must deliver a systems solution with the 'most bang for the buck' in terms of cost vs. performance. These quandaries challenge the Systems Integrator to 'push the envelope' in terms of his or her ingenuity and innovation on an almost daily basis. This dynamic is explored further, and an attempt is made to acquaint the audience with rational approaches to this 'Grand Challenge'.
Louisiana: a model for advancing regional e-Research through cyberinfrastructure.
Katz, Daniel S; Allen, Gabrielle; Cortez, Ricardo; Cruz-Neira, Carolina; Gottumukkala, Raju; Greenwood, Zeno D; Guice, Les; Jha, Shantenu; Kolluru, Ramesh; Kosar, Tevfik; Leger, Lonnie; Liu, Honggao; McMahon, Charlie; Nabrzyski, Jarek; Rodriguez-Milla, Bety; Seidel, Ed; Speyrer, Greg; Stubblefield, Michael; Voss, Brian; Whittenburg, Scott
2009-06-28
Louisiana researchers and universities are leading a concentrated, collaborative effort to advance statewide e-Research through a new cyberinfrastructure: computing systems, data storage systems, advanced instruments and data repositories, visualization environments and people, all linked together by software programs and high-performance networks. This effort has led to a set of interlinked projects that have started making a significant difference in the state, and has created an environment that encourages increased collaboration, leading to new e-Research. This paper describes the overall effort, the new projects and environment and the results to date.
DAS: A Data Management System for Instrument Tests and Operations
NASA Astrophysics Data System (ADS)
Frailis, M.; Sartor, S.; Zacchei, A.; Lodi, M.; Cirami, R.; Pasian, F.; Trifoglio, M.; Bulgarelli, A.; Gianotti, F.; Franceschi, E.; Nicastro, L.; Conforti, V.; Zoli, A.; Smart, R.; Morbidelli, R.; Dadina, M.
2014-05-01
The Data Access System (DAS) is a data management software system, providing a reusable solution for the storage of data acquired both from telescopes and from auxiliary data sources during the instrument development phases and operations. It is part of the Customizable Instrument WorkStation system (CIWS-FW), a framework for the storage, processing and quick-look analysis of the data acquired from scientific instruments. The DAS provides a data access layer mainly targeted to software applications: quick-look displays, pre-processing pipelines and scientific workflows. It is logically organized in three main components: an intuitive and compact Data Definition Language (DAS DDL) in XML format, aimed at user-defined data types; an Application Programming Interface (DAS API), automatically adding classes and methods supporting the DDL data types, and providing an object-oriented query language; and a data management component, which maps the metadata of the DDL data types onto a relational Data Base Management System (DBMS), and stores the data in a shared (network) file system. With the DAS DDL, developers define the data model for a particular project, specifying for each data type the metadata attributes, the data format and layout (if applicable), and named references to related or aggregated data types. Together with the DDL user-defined data types, the DAS API acts as the only interface to store, query and retrieve the metadata and data in the DAS system, providing both an abstract interface and a data-model-specific one in C, C++ and Python. The mapping of metadata onto the back-end database is automatic and supports several relational DBMSs, including MySQL, Oracle and PostgreSQL.
Role of the ATLAS Grid Information System (AGIS) in Distributed Data Analysis and Simulation
NASA Astrophysics Data System (ADS)
Anisenkov, A. V.
2018-03-01
In modern high-energy physics experiments, particular attention is paid to the global integration of information and computing resources into a unified system for efficient storage and processing of experimental data. Annually, the ATLAS experiment performed at the Large Hadron Collider at the European Organization for Nuclear Research (CERN) produces tens of petabytes of raw data from the recording electronics and several petabytes of data from the simulation system. For processing and storage of such super-large volumes of data, the computing model of the ATLAS experiment is based on a heterogeneous, geographically distributed computing environment, which includes the worldwide LHC computing grid (WLCG) infrastructure and is able to meet the requirements of the experiment for processing huge data sets and to provide a high degree of their accessibility (hundreds of petabytes). The paper considers the ATLAS grid information system (AGIS) used by the ATLAS collaboration to describe the topology and resources of the computing infrastructure, to configure and connect the high-level software systems of computer centers, and to describe and store all possible parameters, control, configuration, and other auxiliary information required for the effective operation of the ATLAS distributed computing applications and services. The role of the AGIS system in the development of a unified description of the computing resources provided by grid sites, supercomputer centers, and cloud computing into a consistent information model for the ATLAS experiment is outlined. This approach has allowed the collaboration to extend the computing capabilities of the WLCG project and integrate supercomputers and cloud computing platforms into the software components of the production and distributed analysis workload management system (PanDA, ATLAS).
Design notes for the next generation persistent object manager for CAP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Isely, M.; Fischler, M.; Galli, M.
1995-05-01
The CAP query system software at Fermilab has several major components, including SQS (for managing the query), the retrieval system (for fetching auxiliary data), and the query software itself. The central query software in particular is essentially a modified version of the `ptool` product created at UIC (University of Illinois at Chicago) as part of the PASS project under Bob Grossman. The original UIC version was designed for use in a single-user, non-distributed Unix environment. The Fermi modifications were an attempt to permit multi-user access to a data set distributed over a set of storage nodes. (The hardware is an IBM SP-x system - a cluster of AIX POWER2 nodes with an IBM-proprietary high-speed switch interconnect.) Since the implementation work of the Fermi-ized ptool, the CAP members have learned quite a bit about the nature of queries and where the current performance bottlenecks exist. This has led them to design a persistent object manager that will overcome these problems. For backwards compatibility with ptool, the ptool persistent object API will largely be retained, but the implementation will be entirely different.
Cross-Matching of Very Large Catalogs
NASA Astrophysics Data System (ADS)
Martynov, M. V.; Bodryagin, D. V.
Modern astronomical catalogs and sky surveys, which contain billions of objects, belong to the "big data" class. Existing available services have limited functionality and do not include all required and available catalogs. The software package ACrId (Astronomical Cross Identification) for cross-matching large astronomical catalogs, which uses the HEALPix sphere pixelation algorithm, the ReiserFS file system and JSON-type text files for storage, has been developed at the Research Institution "Mykolaiv Astronomical Observatory".
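A pixel-bucketing cross-match in the spirit of the HEALPix approach can be sketched with the healpy package in Python; ACrId's actual implementation and file layout are not reproduced, and the coordinates below are invented.

```python
# Hedged sketch of pixel-based cross-matching with HEALPix (healpy).
# Objects are bucketed by pixel index; candidate matches share a pixel.
# Requires `pip install healpy`; the catalogs here are invented.
import healpy as hp
from collections import defaultdict

def bucket_by_pixel(ra, dec, nside=1024):
    """Map each (ra, dec) in degrees to its HEALPix pixel index."""
    pixels = hp.ang2pix(nside, ra, dec, lonlat=True)
    buckets = defaultdict(list)
    for idx, pix in enumerate(pixels):
        buckets[pix].append(idx)
    return buckets

# A full matcher would also search neighbouring pixels and apply an
# angular-distance cut; this only finds same-pixel candidates.
cat_a = bucket_by_pixel([10.68, 56.75], [41.27, 24.12])
cat_b = bucket_by_pixel([10.68, 83.82], [41.27, -5.39])
print(set(cat_a) & set(cat_b))  # pixels with candidates in both catalogs
```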
2013-04-01
products for energy generation, including solar, wind, and gas turbines, energy storage, power conversion, grid integration, and software for...potential security advantages over centralized systems • DER promote fuel diversity (e.g., biomass, landfill gas, flare gas, wind, solar) and...therefore reduce overall energy price volatility • Renewable DER such as wind and solar photovoltaics provide emissions-free energy • DER offer a quicker
2011-01-18
Observations, and Micronucleus Scoring Data Table 10: Summary of Micronucleus Assay Results Appendix I: Software Systems Attachment A: Material Safety...compliance with U.S. Food and Drug Administration regulations set forth in 21 CFR, Part 58, and with the Organization for Economic Co-Operation and...Solubility: Insoluble in water pH: 7 Storage Conditions: Room Temperature Safety Precautions: Standard Toxikon Laboratory Safety Precautions, Bovine
2011-09-01
solutions to address these important challenges. The Air Force is seeking innovative architectures to process and store massive data sets in a flexible...Google Earth, the Video LAN Client (VLC) media player, and the Environmental Systems Research Institute corporation's (ESRI) ArcGIS product — to...Earth, Quantum GIS, VLC Media Player, NASA WorldWind, ESRI ArcGIS and many others. Open source GIS and media visualization software can also be
2006-01-01
cool, the ink is solid and does not flow. When the cantilever is heated, the ink melts and flows from the tip onto the surface. Moving the tip...IBM for use in the "Millipede" memory storage system. Thermal cantilevers may be designed to give rapid heating (1 to 20 µs) and cooling (1 to 50 µs...ability of combining reconfigurable hardware devices with optimization software capable of executing real-time autonomous reconfiguration opens up a
A molecular dynamics study on sI hydrogen hydrate.
Mondal, S; Ghosh, S; Chattaraj, P K
2013-07-01
A molecular dynamics simulation is carried out to explore the possibility of using sI clathrate hydrate as hydrogen storage material. Metastable hydrogen hydrate structures are generated using the LAMMPS software. Different binding energies and radial distribution functions provide important insights into the behavior of the various types of hydrogen and oxygen atoms present in the system. Clathrate hydrate cages become more stable in the presence of guest molecules like hydrogen.
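The radial distribution functions mentioned above can be illustrated with a simplified g(r) computation for a periodic box; a production analysis would rely on the simulation package's own tools, and the random coordinates below merely stand in for a trajectory frame.

```python
# Simplified radial distribution function g(r) for an isotropic bulk
# system with cubic periodic boundaries (minimum-image convention).
import numpy as np

def rdf(positions, box_length, n_bins=50, r_max=None):
    n = len(positions)
    r_max = r_max or box_length / 2
    bins = np.linspace(0, r_max, n_bins + 1)
    counts = np.zeros(n_bins)
    for i in range(n):
        d = positions - positions[i]
        d -= box_length * np.round(d / box_length)  # minimum image
        r = np.sqrt((d ** 2).sum(axis=1))
        r = r[(r > 0) & (r < r_max)]                # drop self-distance
        counts += np.histogram(r, bins=bins)[0]
    shell_vol = 4 / 3 * np.pi * (bins[1:] ** 3 - bins[:-1] ** 3)
    density = n / box_length ** 3
    return bins[1:], counts / (n * shell_vol * density)  # normalize to ideal gas

pos = np.random.rand(200, 3) * 10.0  # stand-in for one trajectory frame
r, g = rdf(pos, box_length=10.0)
```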
Concierge: Personal Database Software for Managing Digital Research Resources
Sakai, Hiroyuki; Aoyama, Toshihiro; Yamaji, Kazutsuna; Usui, Shiro
2007-01-01
This article introduces a desktop application, named Concierge, for managing personal digital research resources. Using simple operations, it enables storage of various types of files and indexes them based on content descriptions. A key feature of the software is a high level of extensibility. By installing optional plug-ins, users can customize and extend the usability of the software based on their needs. In this paper, we also introduce a few optional plug-ins: literature management, electronic laboratory notebook, and XooNIps client plug-ins. XooNIps is a content management system developed to share digital research resources among neuroscience communities. It has been adopted as the standard database system in Japanese neuroinformatics projects. Concierge, therefore, offers comprehensive support from management of personal digital research resources to their sharing in open-access neuroinformatics databases such as XooNIps. This interaction between personal and open-access neuroinformatics databases is expected to enhance the dissemination of digital research resources. Concierge is developed as an open source project; Mac OS X and Windows XP versions have been released at the official site (http://concierge.sourceforge.jp). PMID:18974800
NASA Astrophysics Data System (ADS)
Piasecki, M.; Ji, P.
2014-12-01
Geoscience data comes in many flavors determined by the type of data: continuous data on a grid or mesh, or discrete data collected at points either as one-time samples or as a stream coming off sensors; it can also encompass digital files of any type, such as text files, WORD or EXCEL documents, or audio and video files. We present a storage facility that is comprised of 6 nodes, each specialized to host a certain data type: grid-based data (netCDF on a THREDDS server), GIS data (shapefiles using GeoServer), point time series data (CUAHSI ODM), sample data (EDBS), and any digital data (RAMADAA), plus a server for remote sensing data and its products. While there is overlap in data type storage capabilities (rasters can go into several of these nodes), we prefer to use dedicated storage facilities that are a) freeware, b) have a good degree of maturity, and c) have shown their utility for storing a certain type. In addition, this allows us to place these commonly used software stacks and storage solutions side-by-side to develop interoperability strategies. We have used a DRUPAL-based system to handle user registration and authentication, and also use the system for data submission and data search. In support of this system we developed an extensive controlled vocabulary system that is an amalgamation of various CVs used in the geoscience community, in order to achieve as high a degree of recognition as possible: the CF conventions, CUAHSI CVs, NASA (GCMD), EPA and USGS taxonomies, and GEMET, in addition to ontological representations such as SWEET.
The science of computing - The evolution of parallel processing
NASA Technical Reports Server (NTRS)
Denning, P. J.
1985-01-01
The present paper is concerned with the approaches to be employed to overcome the set of limitations in software technology which currently impedes an effective use of parallel hardware technology. The process required to solve the arising problems is found to involve four different stages. At the present time, Stage One is nearly finished, while Stage Two is under way. Tentative explorations are beginning on Stage Three, and Stage Four is more distant. In Stage One, parallelism is introduced into the hardware of a single computer, which consists of one or more processors, a main storage system, a secondary storage system, and various peripheral devices. In Stage Two, parallel execution of cooperating programs on different machines becomes explicit, while in Stage Three, new languages will make parallelism implicit. In Stage Four, there will be very high level user interfaces capable of interacting with scientists at the same level of abstraction as scientists do with each other.
Analog-to-digital clinical data collection on networked workstations with graphic user interface.
Lunt, D
1991-02-01
An innovative respiratory examination system has been developed that combines physiological response measurement, real-time graphic displays, user-driven operating sequences, and networked file archiving and review into a scientific research and clinical diagnosis tool. This newly constructed computer network is being used to enhance the research center's ability to perform patient pulmonary function examinations. Respiratory data are simultaneously acquired and graphically presented during patient breathing maneuvers and rapidly transformed into graphic and numeric reports, suitable for statistical analysis or database access. The environment consists of the hardware (Macintosh computer, MacADIOS converters, analog amplifiers), the software (HyperCard v2.0, HyperTalk, XCMDs), and the network (AppleTalk, fileservers, printers) as building blocks for data acquisition, analysis, editing, and storage. System operation modules include: Calibration, Examination, Reports, On-line Help Library, Graphic/Data Editing, and Network Storage.
A GIS Based 3D Online Decision Assistance System for Underground Energy Storage in Northern Germany
NASA Astrophysics Data System (ADS)
Nolde, M.; Schwanebeck, M.; Biniyaz, E.; Duttmann, R.
2014-12-01
We would like to present a GIS-based 3D online decision assistance system for underground energy storage. Its aim is to support the local land use planning authorities through pre-selection of possible sites for thermal, electrical and substantial underground energy storages. Since the extension of renewable energies has become a legal requirement in Germany, the underground storage of surplus green energy (such as that produced during a heavy wind event) in the form of compressed air, gas or heated water has become increasingly important. However, the selection of suitable sites is a complex task. The assistance system uses data on geological features such as rock layers, salt caverns and faults, enriched with attribute data such as rock porosity and permeability. This information is combined with surface data on the existing energy infrastructure, such as locations of wind and biogas stations, power line arrangement and cable capacity, and energy distribution stations. Furthermore, legal obligations such as protected areas on the surface and current underground mining permissions are used in the decision finding process. Not only the current situation but also prospective scenarios, such as expected growth in the amount of energy produced, are incorporated in the system. The decision process is carried out via the 'Analytic Hierarchy Process' (AHP) methodology of the 'Multi-Objective Decision Making' (MODM) approach. While the process itself is completely automated, the user has full control of the weighting of the different factors via the web interface. The system is implemented as an online 3D server GIS environment, with no software needing to be installed on the user side. The results are visualized as interactive 3D graphics. The implementation of the assistance system is based exclusively on free and open source software, and utilizes the 'Python' programming language in combination with current web technologies, such as 'HTML5', 'CSS3' and 'JavaScript'. It is developed at Kiel University for the federal state of Schleswig-Holstein in northern Germany. This work is part of project 'ANGUS+', led by Prof. Dr. Sebastian Bauer and funded by the German Ministry for Education and Research (BMBF).
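The AHP weighting step can be illustrated briefly: criterion weights are obtained from the principal eigenvector of a pairwise comparison matrix. The Python sketch below uses an invented 3x3 comparison matrix; the system's actual criteria and weights are not reproduced.

```python
# Minimal sketch of the AHP weighting step: weights come from the
# principal eigenvector of a pairwise comparison matrix. The matrix and
# the criteria named below are invented examples.
import numpy as np

def ahp_weights(pairwise: np.ndarray) -> np.ndarray:
    eigvals, eigvecs = np.linalg.eig(pairwise)
    principal = eigvecs[:, np.argmax(eigvals.real)].real
    return principal / principal.sum()   # normalize to weights summing to 1

# e.g. criteria: geology suitability, grid distance, legal constraints
comparison = np.array([[1.0, 3.0, 5.0],
                       [1/3, 1.0, 2.0],
                       [1/5, 1/2, 1.0]])
print(ahp_weights(comparison))  # relative importance of each criterion
```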
NASA Astrophysics Data System (ADS)
Goyal, A.; Yadav, H.; Tyagi, H.; Gosain, A. K.; Khosa, R.
2017-12-01
Increased imperviousness due to rapid urbanization has changed the urban hydrological cycle. As watersheds are urbanized, infiltration and groundwater recharge decrease, and the surface runoff hydrograph shows a higher peak, indicating larger volumes of surface runoff in shorter durations. The ultimate panacea is to reduce the peak of the hydrograph or increase the retention time of surface flow. SWMM is a widely used hydrologic and hydraulic software package which helps to simulate urban storm water management, with the provision to apply different techniques to prevent flooding. A model was set up to simulate the surface runoff and channel flow in a small urban catchment. It provides temporal and spatial information on flooding in a catchment. Incorporating detention storages in the drainage network helps achieve reduced flooding. Detention storages governed by predefined algorithms were provided for controlling pluvial flooding in urban watersheds. The algorithm, based on control theory, automated the functioning of the detention storages, ensuring that the storages become active on occurrence of a flood in the storm water drains and shut down when flooding is over. Detention storages can be implemented either at the source or at several downstream control points. The proposed piece of work helps to mitigate the wastage of rainfall water, achieve desirable groundwater recharge and attain a controlled urban storm water management system.
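The on/off control rule described above can be illustrated with a toy mass-balance loop in Python; all numbers are invented for illustration, and this is not SWMM code.

```python
# Conceptual sketch of an on/off detention-storage rule: divert flow to
# storage while the drain floods, release it back once flooding subsides.
# Capacities and the inflow series are invented illustrations.
def simulate_detention(inflow, drain_capacity, storage_max):
    stored, hydrograph = 0.0, []
    for q in inflow:
        if q > drain_capacity:                       # flooding: clip the peak
            divert = min(q - drain_capacity, storage_max - stored)
            stored += divert
            q -= divert
        elif stored > 0:                             # recovery: drain storage
            release = min(stored, drain_capacity - q)
            stored -= release
            q += release
        hydrograph.append(q)
    return hydrograph

print(simulate_detention([2, 5, 9, 12, 7, 3, 1], drain_capacity=6,
                         storage_max=10))  # peak held at drain capacity
```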
Justification of Estimates for Fiscal Year 1983 Submitted to Congress.
1982-02-01
hierarchies to aid software production; completion of the components of an adaptive suspension vehicle including a storage energy unit, hydraulics, laser...and corrosion (long storage times), and radiation-induced breakdown. Solid-lubricated main engine bearings for cruise missile engines would offer...environments will cause "soft errors" (computational and memory storage errors) in advanced microelectronic circuits. Research on high-speed, low-power
A pipelined FPGA implementation of an encryption algorithm based on genetic algorithm
NASA Astrophysics Data System (ADS)
Thirer, Nonel
2013-05-01
With the evolution of digital data storage and exchange, it is essential to protect confidential information from unauthorized access. High-performance encryption algorithms have been developed and implemented in software and hardware, and many methods of attacking ciphertext have also been developed. In recent years, the genetic algorithm has gained much interest in the cryptanalysis of ciphertext and also in encryption ciphers. This paper analyses the possibility of using the genetic algorithm as a multiple key sequence generator for an AES (Advanced Encryption Standard) cryptographic system, and of using a three-stage pipeline (with four main blocks: input data, AES core, key generator, output data) to provide fast encryption and storage/transmission of a large amount of data.
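A genetic algorithm acting as a key-sequence generator can be sketched as follows; the bit-balance fitness function here is a stand-in assumption, and neither the paper's actual fitness criterion nor the AES core itself is reproduced.

```python
# Hedged sketch of a GA-based key-sequence generator. The fitness simply
# rewards balanced bit counts as a stand-in criterion; the evolved keys
# would feed the AES stage of the pipeline, which is not shown here.
import random

KEY_BITS = 128

def fitness(key: int) -> float:
    ones = bin(key).count("1")
    return 1.0 - abs(ones - KEY_BITS / 2) / (KEY_BITS / 2)

def evolve_key(pop_size=32, generations=50):
    pop = [random.getrandbits(KEY_BITS) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]                 # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            mask = random.getrandbits(KEY_BITS)       # uniform crossover
            child = (a & mask) | (b & ~mask)
            child ^= 1 << random.randrange(KEY_BITS)  # point mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(hex(evolve_key()))  # one 128-bit key for the AES stage
```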
Radiology and Enterprise Medical Imaging Extensions (REMIX).
Erdal, Barbaros S; Prevedello, Luciano M; Qian, Songyue; Demirer, Mutlu; Little, Kevin; Ryu, John; O'Donnell, Thomas; White, Richard D
2018-02-01
Radiology and Enterprise Medical Imaging Extensions (REMIX) is a platform originally designed to support both the clinical and the clinical-research medical-imaging-driven operational needs of the Department of Radiology of The Ohio State University Wexner Medical Center. REMIX accommodates the storage and handling of "big imaging data," as needed for large multi-disciplinary cancer-focused programs. The evolving REMIX platform contains an array of integrated tools/software packages for the following: (1) server and storage management; (2) image reconstruction; (3) digital pathology; (4) de-identification; (5) business intelligence; (6) texture analysis; and (7) artificial intelligence. These capabilities, along with documentation and guidance explaining how to interact with commercial systems (e.g., PACS, EHR, commercial databases) that currently exist in clinical environments, are to be made freely available.
NASA Technical Reports Server (NTRS)
2003-01-01
Topics covered include: Real-Time, High-Frequency QRS Electrocardiograph; Software for Improved Extraction of Data From Tape Storage; Radio System for Locating Emergency Workers; Software for Displaying High-Frequency Test Data; Capacitor-Chain Successive-Approximation ADC; Simpler Alternative to an Optimum FQPSK-B Viterbi Receiver; Multilayer Patch Antenna Surrounded by a Metallic Wall; Software To Secure Distributed Propulsion Simulations; Explicit Pore Pressure Material Model in Carbon-Cloth Phenolic; Meshed-Pumpkin Super-Pressure Balloon Design; Corrosion Inhibitors as Penetrant Dyes for Radiography; Transparent Metal-Salt-Filled Polymeric Radiation Shields; Lightweight Energy Absorbers for Blast Containers; Brush-Wheel Samplers for Planetary Exploration; Dry Process for Making Polyimide/ Carbon-and-Boron-Fiber Tape; Relatively Inexpensive Rapid Prototyping of Small Parts; Magnetic Field Would Reduce Electron Backstreaming in Ion Thrusters; Alternative Electrochemical Systems for Ozonation of Water; Interferometer for Measuring Displacement to Within 20 pm; UV-Enhanced IR Raman System for Identifying Biohazards; Prognostics Methodology for Complex Systems; Algorithms for Haptic Rendering of 3D Objects; Modeling and Control of Aerothermoelastic Effects; Processing Digital Imagery to Enhance Perceptions of Realism; Analysis of Designs of Space Laboratories; Shields for Enhanced Protection Against High-Speed Debris; Study of Dislocation-Ordered In(x)Ga(1-x)As/GaAs Quantum Dots; and Tilt-Sensitivity Analysis for Space Telescopes.
A data acquisition and storage system for the ion auxiliary propulsion system cyclic thruster test
NASA Technical Reports Server (NTRS)
Hamley, John A.
1989-01-01
A nine-track tape drive interfaced to a standard personal computer was used to transport data from a remote test site to the NASA Lewis mainframe computer for analysis. The Cyclic Ground Test of the Ion Auxiliary Propulsion System (IAPS), which successfully achieved its goal of 2557 cycles and 7057 hr of thrusting beam-on time, generated several megabytes of test data over many months of continuous testing. A flight-like controller and power supply were used to control the thruster and acquire data. Thruster data were converted to RS232 format and transmitted to a personal computer, which stored the raw digital data on the nine-track tape. The tape format was such that, with minor modifications, mainframe flight data analysis software could be used to analyze the Cyclic Ground Test data. The personal computer also converted the digital data to engineering units and displayed real-time thruster parameters. Hardcopy data was printed at a rate dependent on thruster operating conditions. The tape drive provided a convenient means to transport the data to the mainframe for analysis, and avoided a development effort for new data analysis software for the Cyclic test. This paper describes the data system, interfacing and software requirements.
GPU Lossless Hyperspectral Data Compression System
NASA Technical Reports Server (NTRS)
Aranki, Nazeeh I.; Keymeulen, Didier; Kiely, Aaron B.; Klimesh, Matthew A.
2014-01-01
Hyperspectral imaging systems onboard aircraft or spacecraft can acquire large amounts of data, putting a strain on limited downlink and storage resources. Onboard data compression can mitigate this problem but may require a system capable of a high throughput. In order to achieve a high throughput with a software compressor, a graphics processing unit (GPU) implementation of a compressor was developed targeting the current state-of-the-art GPUs from NVIDIA(R). The implementation is based on the fast lossless (FL) compression algorithm reported in "Fast Lossless Compression of Multispectral-Image Data" (NPO- 42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26, which operates on hyperspectral data and achieves excellent compression performance while having low complexity. The FL compressor uses an adaptive filtering method and achieves state-of-the-art performance in both compression effectiveness and low complexity. The new Consultative Committee for Space Data Systems (CCSDS) Standard for Lossless Multispectral & Hyperspectral image compression (CCSDS 123) is based on the FL compressor. The software makes use of the highly-parallel processing capability of GPUs to achieve a throughput at least six times higher than that of a software implementation running on a single-core CPU. This implementation provides a practical real-time solution for compression of data from airborne hyperspectral instruments.
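The predictive-coding idea underlying the FL compressor can be illustrated with a deliberately simplified sketch: predict each sample from a neighbour and keep only the residuals, which entropy-code well. The real CCSDS-123 predictor is adaptive and three-dimensional; the one-step predictor below is a stand-in.

```python
# Simplified illustration of predictive lossless compression: predict
# each sample from its left neighbour and encode only residuals. This is
# a stand-in for the adaptive 3D predictor of the actual FL/CCSDS-123.
import numpy as np

def residuals(band: np.ndarray) -> np.ndarray:
    pred = np.roll(band, 1, axis=1)    # predict from left neighbour
    pred[:, 0] = 0                     # no predictor for first column
    return band - pred                 # small residuals compress well

def reconstruct(res: np.ndarray) -> np.ndarray:
    return np.cumsum(res, axis=1)      # exact inverse of the predictor

band = np.array([[100, 102, 101], [98, 99, 103]])
r = residuals(band)
assert (reconstruct(r) == band).all()  # lossless round trip
print(r)
```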
Tschmelak, Jens; Proll, Guenther; Riedt, Johannes; Kaiser, Joachim; Kraemmer, Peter; Bárzaga, Luis; Wilkinson, James S; Hua, Ping; Hole, J Patrick; Nudd, Richard; Jackson, Michael; Abuknesha, Ram; Barceló, Damià; Rodriguez-Mozaz, Sara; de Alda, Maria J López; Sacher, Frank; Stien, Jan; Slobodník, Jaroslav; Oswald, Peter; Kozmenko, Helena; Korenková, Eva; Tóthová, Lívia; Krascsenits, Zoltan; Gauglitz, Guenter
2005-02-15
A novel analytical system, AWACSS (automated water analyser computer-supported system), based on immunochemical technology has been developed that can measure several organic pollutants at the low nanogram-per-litre level in a single few-minutes analysis without any prior sample pre-concentration or pre-treatment steps. With the actual needs of water-sector managers related to the implementation of the Drinking Water Directive (DWD) (98/83/EC, 1998) and Water Framework Directive (WFD) (2000/60/EC, 2000) in mind, drinking, ground, surface, and waste waters were the major media used for the evaluation of the system performance. The instrument was equipped with remote control and surveillance facilities. The system's software allows for internet-based networking between the measurement and control stations, global management, trend analysis, and early-warning applications. The experience of water laboratories was utilised in the design of the instrument's hardware and software in order to make the system rugged and user-friendly. Several market surveys were conducted during the project to assess the applicability of the final system. A web-based AWACSS database was created for automated evaluation and storage of the obtained data in a format compatible with major databases of environmental organic pollutants in Europe. This first part gives the reader an overview of the aims and scope of the AWACSS project as well as details about the basic technology, immunoassays, software, and networking developed and utilised within the research project. The second part reports on the system performance, first real-sample measurements, and an international collaborative trial (inter-laboratory tests) comparing the biosensor with conventional analytical methods.
PASSIM--an open source software system for managing information in biomedical studies.
Viksna, Juris; Celms, Edgars; Opmanis, Martins; Podnieks, Karlis; Rucevskis, Peteris; Zarins, Andris; Barrett, Amy; Neogi, Sudeshna Guha; Krestyaninova, Maria; McCarthy, Mark I; Brazma, Alvis; Sarkans, Ugis
2007-02-09
One of the crucial aspects of day-to-day laboratory information management is collection, storage and retrieval of information about research subjects and biomedical samples. An efficient link between sample data and experiment results is absolutely imperative for a successful outcome of a biomedical study. Currently available software solutions are largely limited to large-scale, expensive commercial Laboratory Information Management Systems (LIMS). Acquiring such a LIMS indeed can bring laboratory information management to a higher level, but often implies a substantial investment of time, effort and funds, which are not always available. There is a clear need for lightweight open source systems for patient and sample information management. We present a web-based tool for submission, management and retrieval of sample and research subject data. The system secures confidentiality by separating anonymized sample information from individuals' records. It is simple and generic, and can be customised for various biomedical studies. Information can be both entered and accessed using the same web interface. User groups and their privileges can be defined. The system is open-source and is supplied with an on-line tutorial and necessary documentation. It has proven to be successful in a large international collaborative project. The presented system closes the gap between the need and the availability of lightweight software solutions for managing information in biomedical studies involving human research subjects.
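The separation of anonymized sample data from identifying subject records can be pictured with a minimal relational sketch; the tables, columns, and random sample codes below are illustrative assumptions rather than PASSIM's actual schema, and in a real deployment the link table would sit behind stricter access privileges than the sample table.

```python
import sqlite3, uuid

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE subject (              -- identifying data: restricted access
    subject_id INTEGER PRIMARY KEY, name TEXT, date_of_birth TEXT);
CREATE TABLE sample (               -- anonymized data: visible to analysts
    sample_code TEXT PRIMARY KEY,   -- random code, reveals nothing about identity
    tissue TEXT, collected_on TEXT);
CREATE TABLE link (                 -- mapping kept under separate privilege
    subject_id INTEGER, sample_code TEXT);
""")

def register_sample(subject_id, tissue, collected_on):
    code = uuid.uuid4().hex                       # anonymized sample code
    con.execute("INSERT INTO sample VALUES (?,?,?)", (code, tissue, collected_on))
    con.execute("INSERT INTO link VALUES (?,?)", (subject_id, code))
    return code

con.execute("INSERT INTO subject VALUES (1, 'A. Person', '1970-01-01')")
print(register_sample(1, "blood", "2007-01-15"))
```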
StatsDB: platform-agnostic storage and understanding of next generation sequencing run metrics
Ramirez-Gonzalez, Ricardo H.; Leggett, Richard M.; Waite, Darren; Thanki, Anil; Drou, Nizar; Caccamo, Mario; Davey, Robert
2014-01-01
Modern sequencing platforms generate enormous quantities of data in ever-decreasing amounts of time. Additionally, techniques such as multiplex sequencing allow one run to contain hundreds of different samples. With such data comes a significant challenge to understand its quality and to understand how the quality and yield are changing across instruments and over time. As well as the desire to understand historical data, sequencing centres often have a duty to provide clear summaries of individual run performance to collaborators or customers. We present StatsDB, an open-source software package for storage and analysis of next generation sequencing run metrics. The system has been designed for incorporation into a primary analysis pipeline, either at the programmatic level or via integration into existing user interfaces. Statistics are stored in an SQL database and APIs provide the ability to store and access the data while abstracting the underlying database design. This abstraction allows simpler, wider querying across multiple fields than is possible by the manual steps and calculation required to dissect individual reports, e.g. "provide metrics about nucleotide bias in libraries using adaptor barcode X, across all runs on sequencer A, within the last month". The software is supplied with modules for storage of statistics from FastQC, a commonly used tool for analysis of sequence reads, but the open nature of the database schema means it can be easily adapted to other tools. Currently at The Genome Analysis Centre (TGAC), reports are accessed through our LIMS system or through a standalone GUI tool, but the API and supplied examples make it easy to develop custom reports and to interface with other packages. PMID:24627795
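The quoted example maps naturally onto a single SQL query. The sketch below runs it against a hypothetical three-table layout (run, library, metric); StatsDB's real schema and APIs differ, so every table and column name here is an assumption.

```python
import sqlite3

con = sqlite3.connect("statsdb.sqlite")            # hypothetical database file
rows = con.execute("""
    SELECT m.position, AVG(m.value)                -- bias per read position
    FROM metric m
    JOIN library l ON l.library_id = m.library_id
    JOIN run r     ON r.run_id     = l.run_id
    WHERE m.name = 'nucleotide_bias'
      AND l.barcode = ?                            -- adaptor barcode X
      AND r.instrument = ?                         -- sequencer A
      AND r.run_date >= date('now', '-1 month')
    GROUP BY m.position
""", ("X", "A")).fetchall()
```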
The DAQ system for the AEḡIS experiment
NASA Astrophysics Data System (ADS)
Prelz, F.; Aghion, S.; Amsler, C.; Ariga, T.; Bonomi, G.; Brusa, R. S.; Caccia, M.; Caravita, R.; Castelli, F.; Cerchiari, G.; Comparat, D.; Consolati, G.; Demetrio, A.; Di Noto, L.; Doser, M.; Ereditato, A.; Evans, C.; Ferragut, R.; Fesel, J.; Fontana, A.; Gerber, S.; Giammarchi, M.; Gligorova, A.; Guatieri, F.; Haider, S.; Hinterberger, A.; Holmestad, H.; Kellerbauer, A.; Krasnický, D.; Lagomarsino, V.; Lansonneur, P.; Lebrun, P.; Malbrunot, C.; Mariazzi, S.; Matveev, V.; Mazzotta, Z.; Müller, S. R.; Nebbia, G.; Nedelec, P.; Oberthaler, M.; Pacifico, N.; Pagano, D.; Penasa, L.; Petracek, V.; Prevedelli, M.; Ravelli, L.; Rienaecker, B.; Robert, J.; Røhne, O. M.; Rotondi, A.; Sacerdoti, M.; Sandaker, H.; Santoro, R.; Scampoli, P.; Simon, M.; Smestad, L.; Sorrentino, F.; Testera, G.; Tietje, I. C.; Widmann, E.; Yzombard, P.; Zimmer, C.; Zmeskal, J.; Zurlo, N.
2017-10-01
In the sociology of small- to mid-sized (O(100) collaborators) experiments, the issue of data collection and storage is sometimes treated as a residual problem for which well-established solutions exist. Still, the DAQ system can be one of the few forces that drives the integration of otherwise loosely coupled detector systems, and as such it may be hard to build from off-the-shelf components alone. LabVIEW and ROOT are the (only) two software systems that could be assumed familiar to all collaborators of the AEḡIS (AD6) experiment at CERN: starting from the GXML representation of LabVIEW data types, a semantically equivalent representation as ROOT TTrees was developed for permanent storage and analysis. All data in the experiment is cast into this common format and can be produced and consumed on both systems and transferred over TCP and/or multicast over UDP for immediate sharing over the experiment LAN. We describe the setup that has been able to cater to all run-data logging and long-term monitoring needs of the AEḡIS experiment so far.
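The heart of the mapping is that a nested LabVIEW cluster becomes a set of flat, named branches, one per leaf value, as in a ROOT TTree. The sketch below illustrates only that flattening step; the event structure and names are invented, and the real system works from the GXML serialization rather than a Python dict.

```python
def flatten(node, prefix=""):
    """Flatten a nested cluster into TTree-like branch-name/value pairs."""
    out = {}
    for key, val in node.items():
        name = f"{prefix}.{key}" if prefix else key
        if isinstance(val, dict):
            out.update(flatten(val, name))     # recurse into sub-clusters
        else:
            out[name] = val                    # leaf becomes a branch
    return out

event = {"trigger": {"id": 42, "time": 1.25e-3}, "temperature_K": 4.2}
print(flatten(event))
# {'trigger.id': 42, 'trigger.time': 0.00125, 'temperature_K': 4.2}
```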
Astro-WISE: Chaining to the Universe
NASA Astrophysics Data System (ADS)
Valentijn, E. A.; McFarland, J. P.; Snigula, J.; Begeman, K. G.; Boxhoorn, D. R.; Rengelink, R.; Helmich, E.; Heraudeau, P.; Verdoes Kleijn, G.; Vermeij, R.; Vriend, W.-J.; Tempelaar, M. J.; Deul, E.; Kuijken, K.; Capaccioli, M.; Silvotti, R.; Bender, R.; Neeser, M.; Saglia, R.; Bertin, E.; Mellier, Y.
2007-10-01
The recent explosion of recorded digital data and its processed derivatives threatens to overwhelm researchers when analysing their experimental data or looking up data items in archives and file systems. While current hardware developments allow the acquisition, processing and storage of hundreds of terabytes of data at the cost of a modern sports car, the software systems to handle these data are lagging behind. This problem is very general and is well recognized by various scientific communities; several large projects have been initiated, e.g., DATAGRID/EGEE {http://www.eu-egee.org/} federates compute and storage power across the high-energy physics community, while the international astronomical community is building an Internet-geared Virtual Observatory {http://www.euro-vo.org/pub/} (Padovani 2006) connecting archival data. These large projects either focus on a specific distribution aspect or aim to connect many sub-communities and have a relatively long trajectory for setting standards and a common layer. Here, we report first light of a very different solution (Valentijn & Kuijken 2004) to the problem initiated by a smaller astronomical IT community. It provides an abstract scientific information layer which integrates distributed scientific analysis with distributed processing and federated archiving and publishing. By designing new abstractions and mixing in old ones, a Science Information System with fully scalable cornerstones has been achieved, transforming data systems into knowledge systems. This breakthrough is facilitated by the full end-to-end linking of all dependent data items, which allows full backward chaining from the observer/researcher to the experiment. Key is the notion that information is intrinsic in nature, and thus so is the data acquired by a scientific experiment. The new abstraction is that software systems guide the user to that intrinsic information by forcing full backward and forward chaining in the data modelling.
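Backward chaining reduces to a recursive walk over recorded lineage: every data item keeps references to the items it was derived from. A minimal sketch, with invented item names, is below; Astro-WISE implements this over persistent, federated objects rather than an in-memory dict.

```python
def backward_chain(item, lineage):
    """Collect everything a data item transitively depends on."""
    seen = set()
    def walk(i):
        for parent in lineage.get(i, []):
            if parent not in seen:
                seen.add(parent)
                walk(parent)
    walk(item)
    return seen

lineage = {  # item -> direct inputs (illustrative names)
    "calibrated_image": ["raw_image", "flat_field", "bias_frame"],
    "source_catalog": ["calibrated_image", "detection_config"],
}
print(backward_chain("source_catalog", lineage))
# {'calibrated_image', 'raw_image', 'flat_field', 'bias_frame', 'detection_config'}
```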
Run Environment and Data Management for Earth System Models
NASA Astrophysics Data System (ADS)
Widmann, H.; Lautenschlager, M.; Fast, I.; Legutke, S.
2009-04-01
The Integrating Model and Data Infrastructure (IMDI) developed and maintained by the Model and Data Group (M&D) comprises the Standard Compile Environment (SCE) and the Standard Run Environment (SRE). The IMDI software has a modular design, which allows a suite of model components to be combined and coupled, and the individual tasks to be executed independently and on various platforms. Furthermore, the modular structure enables extension to new model combinations and new platforms. The SRE presented here enables the configuration and execution of earth system model experiments, from model integration up to storage and visualization of data. We focus on recently implemented tasks such as synchronous database filling, graphical monitoring, and automatic generation of metadata in XML form during run time. We also address the capability to run experiments in heterogeneous IT environments with different computing systems for model integration, data processing, and storage. These features are demonstrated for model configurations and platforms used in current or upcoming projects, e.g. MILLENNIUM or IPCC AR5.
Attaching IBM-compatible 3380 disks to Cray X-MP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engert, D.E.; Midlock, J.L.
1989-01-01
A method of attaching IBM-compatible 3380 disks directly to a Cray X-MP via the XIOP with a BMC is described. The IBM 3380 disks appear to the UNICOS operating system as DD-29 disks with UNICOS file systems. IBM 3380 disks provide cheap, reliable, large-capacity disk storage. Combined with a small number of high-speed Cray disks, the IBM disks provide the bulk of the storage for small files and infrequently used files. Cray Research designed the BMC and its supporting software in the XIOP to allow IBM tapes and other devices to be attached to the X-MP. No hardware changes were necessary, and we added less than 2000 lines of code to the XIOP to accomplish this project. This system has been in operation for over eight months. Future enhancements such as the use of a cache controller and attachment to a Y-MP are also described. 1 tab.
Bathymetric contours of Breckenridge Reservoir, Quantico, Virginia
Wicklein, S.M.; Lotspeich, R.R.; Banks, R.B.
2012-01-01
Breckenridge Reservoir, built in 1938, is fed by Chopawamsic Creek and South Branch Chopawamsic Creek. The Reservoir is a main source of drinking water for the U.S. Marine Corps (USMC) Base in Quantico, Virginia. The U.S. Geological Survey (USGS), in cooperation with the USMC, conducted a bathymetric survey of Breckenridge Reservoir in March 2009. The survey was conducted to provide the USMC Natural Resources and Environmental Affairs (NREA) with information regarding reservoir storage capacity and general bathymetric properties. The bathymetric survey can provide a baseline for future work on sediment loads and deposition rates for the reservoir. Bathymetric data were collected using a boat-mounted Wide Area Augmentation System (WAAS) differential global positioning system (DGPS), echo depth-sounding equipment, and computer software. Data were exported into a geographic information system (GIS) for mapping and calculating area and volume. Reservoir storage volume at the time of the survey was about 22,500,000 cubic feet (517 acre-feet) with a surface area of about 1,820,000 square feet (41.9 acres).
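As a quick arithmetic check of the reported figures (1 acre-foot = 43,560 cubic feet; 1 acre = 43,560 square feet):

```python
CUBIC_FEET_PER_ACRE_FOOT = 43_560
SQUARE_FEET_PER_ACRE = 43_560

print(22_500_000 / CUBIC_FEET_PER_ACRE_FOOT)   # ~516.5, reported as 517 acre-feet
print(1_820_000 / SQUARE_FEET_PER_ACRE)        # ~41.8, reported as 41.9 acres
```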
[The research in a foot pressure measuring system based on LabVIEW].
Li, Wei; Qiu, Hong; Xu, Jiang; He, Jiping
2011-01-01
This paper presents a foot pressure measuring system based on LabVIEW. The hardware and software designs are worked out, and LabVIEW is used to build the application interface for displaying plantar pressure. The system realizes plantar pressure data acquisition, data storage, waveform display, and waveform playback. Testing showed that the system's results follow the changing trend of normal gait, in agreement with human systems engineering theory, which demonstrates the system's reliability. The system gives vivid, visual results and provides a new method of measuring foot pressure as well as references for the design of insole systems.
ACE: A distributed system to manage large data archives
NASA Technical Reports Server (NTRS)
Daily, Mike I.; Allen, Frank W.
1993-01-01
Competitive pressures in the oil and gas industry are requiring a much tighter integration of technical data into E and P business processes. The development of new systems to accommodate this business need must comprehend the significant numbers of large, complex data objects which the industry generates. The life cycle of the data objects is a four-phase progression from data acquisition, to data processing, through data interpretation, and ending finally with data archival. In order to implement a cost-effective system which provides an efficient conversion from data to information and allows effective use of this information, an organization must consider the technical data management requirements in all four phases. A set of technical issues which may differ in each phase must be addressed to ensure an overall successful development strategy. The technical issues include standardized data formats and media for data acquisition, data management during processing, plus networks, applications software, and GUIs for interpretation of the processed data. Mass storage hardware and software is required to provide cost-effective storage and retrieval during the latter three stages as well as long-term archival. Mobil Oil Corporation's Exploration and Producing Technical Center (MEPTEC) has addressed the technical and cost issues of designing, building, and implementing an Advanced Computing Environment (ACE) to support the petroleum E and P function, which is critical to the corporation's continued success. Mobil views ACE as a cost-effective, technically viable solution that can give it a competitive edge.
High End Computing Technologies for Earth Science Applications: Trends, Challenges, and Innovations
NASA Technical Reports Server (NTRS)
Parks, John (Technical Monitor); Biswas, Rupak; Yan, Jerry C.; Brooks, Walter F.; Sterling, Thomas L.
2003-01-01
Earth science applications of the future will stress the capabilities of even the highest performance supercomputers in the areas of raw compute power, mass storage management, and software environments. These NASA mission critical problems demand usable multi-petaflops and exabyte-scale systems to fully realize their science goals. With an exciting vision of the technologies needed, NASA has established a comprehensive program of advanced research in computer architecture, software tools, and device technology to ensure that, in partnership with US industry, it can meet these demanding requirements with reliable, cost effective, and usable ultra-scale systems. NASA will exploit, explore, and influence emerging high end computing architectures and technologies to accelerate the next generation of engineering, operations, and discovery processes for NASA Enterprises. This article captures this vision and describes the concepts, accomplishments, and the potential payoff of the key thrusts that will help meet the computational challenges in Earth science applications.
Archive Management of NASA Earth Observation Data to Support Cloud Analysis
NASA Technical Reports Server (NTRS)
Lynnes, Christopher; Baynes, Kathleen; McInerney, Mark A.
2017-01-01
NASA collects, processes and distributes petabytes of Earth Observation (EO) data from satellites, aircraft, in situ instruments and model output, with an order of magnitude increase expected by 2024. Cloud-based web object storage (WOS) of these data can simplify the accommodation of such an increase. More importantly, it can also facilitate user analysis of those volumes by making the data available to the massively parallel computing power in the cloud. However, storing EO data in cloud WOS has a ripple effect throughout the NASA archive system with unexpected challenges and opportunities. One challenge is modifying data servicing software (such as Web Coverage Service servers) to access and subset data that are no longer on a directly accessible file system, but rather in cloud WOS. Opportunities include refactoring of the archive software to a cloud-native architecture; virtualizing data products by computing on demand; and reorganizing data to be more analysis-friendly.
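One concrete consequence of the move from file systems to WOS is that subsetting services must fetch byte ranges over HTTP rather than seek within a local file. A minimal sketch of that pattern with boto3 follows; the bucket, key, and byte range are invented, and a real service would derive the range from the data format's internal index.

```python
import boto3

s3 = boto3.client("s3")
resp = s3.get_object(
    Bucket="eo-archive",                    # hypothetical bucket
    Key="granules/example-granule.hdf",     # hypothetical object key
    Range="bytes=4096-1052671",             # fetch only the chunk needed
)
chunk = resp["Body"].read()                 # bytes for the requested subset
```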
EDGE3: A web-based solution for management and analysis of Agilent two color microarray experiments
Vollrath, Aaron L; Smith, Adam A; Craven, Mark; Bradfield, Christopher A
2009-01-01
Background The ability to generate transcriptional data on the scale of entire genomes has been a boon both in the improvement of biological understanding and in the amount of data generated. The latter, the amount of data generated, has implications when it comes to effective storage, analysis and sharing of these data. A number of software tools have been developed to store, analyze, and share microarray data. However, a majority of these tools do not offer all of these features nor do they specifically target the commonly used two color Agilent DNA microarray platform. Thus, the motivating factor for the development of EDGE3 was to incorporate the storage, analysis and sharing of microarray data in a manner that would provide a means for research groups to collaborate on Agilent-based microarray experiments without a large investment in software-related expenditures or extensive training of end-users. Results EDGE3 has been developed with two major functions in mind. The first function is to provide a workflow process for the generation of microarray data by a research laboratory or a microarray facility. The second is to store, analyze, and share microarray data in a manner that doesn't require complicated software. To satisfy the first function, EDGE3 has been developed as a means to establish a well defined experimental workflow and information system for microarray generation. To satisfy the second function, the software application utilized as the user interface of EDGE3 is a web browser. Within the web browser, a user is able to access the entire functionality, including, but not limited to, the ability to perform a number of bioinformatics based analyses, collaborate between research groups through a user-based security model, and access to the raw data files and quality control files generated by the software used to extract the signals from an array image. Conclusion Here, we present EDGE3, an open-source, web-based application that allows for the storage, analysis, and controlled sharing of transcription-based microarray data generated on the Agilent DNA platform. In addition, EDGE3 provides a means for managing RNA samples and arrays during the hybridization process. EDGE3 is freely available for download at http://edge.oncology.wisc.edu/. PMID:19732451
Relative gravimeter prototype based on micro electro mechanical system
NASA Astrophysics Data System (ADS)
Rozy, A. S. A.; Nugroho, H. A.; Yusuf, M.
2018-03-01
This research aims to build a gravity measurement system in the Gal range using a sensor based on a micro-electro-mechanical system (MEMS). The system design consists of three parts: hardware, software, and interface. The hardware design covers the sensor arrangement needed to measure a stable gravity acceleration value; the ADXL345 and ADXL335 sensors are tuned to obtain stable measurements. The instrumentation design then integrates the sensor, a microcontroller, and GPS. The programming algorithm is implemented with the Arduino IDE software. The interface uses a 20x4 LCD display to show the gravity acceleration value and stores the data on storage media. The system is housed in an iron box with a leveling plate to minimize measurement errors. Sensor tests show that the ADXL345 has the more stable output. The system was evaluated against gravity measurements from an A-10 gravimeter at the Bandung observation post; the average correction value of the system was 0.19 Gal. The system is intended for use in mineral exploration, water supply analysis, and earthquake precursor studies.
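To illustrate the data handling such a system performs, the sketch below converts raw z-axis counts to acceleration and averages many samples for stability. The ±2 g scale factor of about 3.9 mg/LSB is the ADXL345's typical datasheet value, assumed here, and the raw readings are fabricated.

```python
import statistics

# Typical ADXL345 sensitivity at +/-2 g: ~3.9 mg per LSB (datasheet value, assumed).
LSB_TO_MS2 = 0.0039 * 9.80665

def mean_gravity(z_counts):
    """Convert raw z-axis counts to m/s^2 and average many samples for stability."""
    samples = [c * LSB_TO_MS2 for c in z_counts]
    return statistics.mean(samples), statistics.stdev(samples)

g, sigma = mean_gravity([256, 257, 255, 256, 258])   # fabricated raw readings
print(f"g = {g:.3f} m/s^2 (sigma = {sigma:.4f})")
```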
Generalizing the extensibility of a dynamic geometry software
NASA Astrophysics Data System (ADS)
Herceg, Đorđe; Radaković, Davorka; Herceg, Dejana
2012-09-01
Plug-and-play visual components in a Dynamic Geometry Software (DGS) enable the development of visually attractive, rich and highly interactive dynamic drawings. We are developing SLGeometry, a DGS that contains a custom programming language, a computer algebra system (CAS engine) and a graphics subsystem. The basic extensibility framework of SLGeometry supports the dynamic addition of new functions from attribute-annotated classes that register their metadata in code at runtime. We present a general plug-in framework for dynamically importing arbitrary Silverlight user interface (UI) controls into SLGeometry at runtime. The CAS engine maintains a metadata storage that describes each imported visual component and enables two-way communication between the expressions stored in the engine and the UI controls on the screen.
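SLGeometry registers functions through C# attribute annotations; the same registration pattern can be sketched compactly in Python, with a decorator standing in for the attribute. The registry layout, function name, and arity metadata below are illustrative assumptions.

```python
FUNCTION_REGISTRY = {}             # stands in for the CAS engine's metadata store

def register(name, arity):
    """Register a function and its metadata at import time (attribute analogue)."""
    def wrap(fn):
        FUNCTION_REGISTRY[name] = {"impl": fn, "arity": arity}
        return fn
    return wrap

@register("Midpoint", arity=2)
def midpoint(a, b):
    return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

# The engine can now discover the function and its metadata at runtime:
meta = FUNCTION_REGISTRY["Midpoint"]
print(meta["arity"], meta["impl"]((0, 0), (4, 2)))   # 2 (2.0, 1.0)
```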
EPANET Multi-Species Extension Software and User's Manual ...
EPANET is used in homeland security research to model contamination threats to water systems. Historically, EPANET has been limited to tracking the dynamics of a single chemical transported through a network of pipes and storage tanks, such as fluoride used in a tracer study or free chlorine used in a disinfection decay study. Recently, the NHSRC released a new extension to EPANET called EPANET-MSX (Multi-Species eXtension) that allows for the consideration of multiple interacting species in the bulk flow and on the pipe walls. This capability has been incorporated into both a stand-alone executable program and a toolkit library of functions that programmers can use to build customized applications.
Experiments and Demonstrations in Physics: Bar-Ilan Physics Laboratory (2nd Edition)
NASA Astrophysics Data System (ADS)
Kraftmakher, Yaakov
2014-08-01
The following sections are included:
* Data-acquisition systems from PASCO
* ScienceWorkshop 750 Interface and DataStudio software
* 850 Universal Interface and Capstone software
* Mass on spring
* Torsional pendulum
* Hooke's law
* Characteristics of DC source
* Digital storage oscilloscope
* Charging and discharging a capacitor
* Charge and energy stored in a capacitor
* Speed of sound in air
* Lissajous patterns
* I-V characteristics
* Light bulb
* Short time intervals
* Temperature measurements
* Oersted's great discovery
* Magnetic field measurements
* Magnetic force
* Magnetic braking
* Curie's point I
* Electric power in AC circuits
* Faraday's law of induction I
* Self-inductance and mutual inductance
* Electromagnetic screening
* LCR circuit I
* Coupled LCR circuits
* Probability functions
* Photometric laws
* Kirchhoff's rule for thermal radiation
* Malus' law
* Infrared radiation
* Irradiance and illuminance
PANDA: A distributed multiprocessor operating system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chubb, P.
1989-01-01
PANDA is a design for a distributed multiprocessor and an operating system. PANDA is designed to allow easy expansion of both hardware and software. As such, the PANDA kernel provides only message passing and memory and process management. The other features needed for the system (device drivers, secondary storage management, etc.) are provided as replaceable user tasks. The thesis presents PANDA's design and implementation, both hardware and software. PANDA uses multiple 68010 processors sharing memory on a VME bus, each such node potentially connected to others via a high speed network. The machine is completely homogeneous: there are no differences between processors that are detectable by programs running on the machine. A single two-processor node has been constructed. Each processor contains memory management circuits designed to allow processors to share page tables safely. PANDA presents a programmers' model similar to the hardware model: a job is divided into multiple tasks, each having its own address space. Within each task, multiple processes share code and data. Tasks can send messages to each other, and set up virtual circuits between themselves. Peripheral devices such as disc drives are represented within PANDA by tasks. PANDA divides secondary storage into volumes, each volume being accessed by a volume access task, or VAT. All knowledge about the way that data is stored on a disc is kept in its volume's VAT. The design is such that PANDA should provide a useful testbed for file systems and device drivers, as these can be installed without recompiling PANDA itself, and without rebooting the machine.
Smartphone-coupled rhinolaryngoscopy at the point of care
NASA Astrophysics Data System (ADS)
Mink, Jonah; Bolton, Frank J.; Sebag, Cathy M.; Peterson, Curtis W.; Assia, Shai; Levitz, David
2018-02-01
Rhinolaryngoscopy remains difficult to perform in resource-limited settings due to the high cost of purchasing and maintaining equipment as well as the need for specialists to interpret exam findings. While the lack of expertise can be obviated by adopting telemedicine-based approaches, the capture, storage, and sharing of images/video is not a common native functionality of medical devices. Most rhinolaryngoscopy systems consist of an endoscope that interfaces with the patient's naso/oropharynx, and a tower of modules that record video/images. However, these expensive and bulky modules can be replaced by a smartphone that can fulfill the same functions at a lower cost. To demonstrate this, a commercially available rhinolaryngoscope was coupled to a smartphone using a 3D-printed adapter. Software developed for other clinical applications was repurposed for ENT use, including an application that controls image and video capture, a HIPAA-compliant image/video storage and transfer cloud database, and customized software features developed to improve practitioner competency. Audio recording capabilities to assess speech pathology were also integrated into the smartphone rhinolaryngoscope system. The illumination module coupled onto the endoscope remained unchanged. The spatial resolution of the rhinolaryngoscope system was defined by the fiber diameter of the endoscope fiber bundle, rather than by the smartphone camera. The mobile rhinolaryngoscope system was used with appropriate patients by a general practitioner in an office setting. The general practitioner then consulted with an ENT specialist via the HIPAA-compliant cloud database and workflow modules on difficult cases. These results suggest the smartphone-based rhinolaryngoscope holds promise for use in low-resource settings.
A microcomputer controlled snow ski binding system--I. Instrumentation and field evaluation.
MacGregor, D; Hull, M L; Dorius, L K
1985-01-01
This paper presents the design and field evaluation of the first microcomputer controlled ski binding system. This system incorporates an Intel 8086 microcomputer controller and an integral binding/dynamometer. This instrumentation system not only undertakes real-time control but also records dynamometer data via a miniature digital cassette tape recorder. The integral binding/dynamometer offers the same operational and mounting convenience as commercially available mechanical bindings. The binding may be released either manually or electrically via the controller. Comprised of four octagonal half strain rings, the strain gage dynamometer measures the three moment load components at the boot. To enable the user to conveniently operate the computer, extensive operating software was developed. The operating software is discussed in relation to both the acquisition and storage of data from the dynamometer and the control of the electro-mechanical snow ski binding. The binding system has been used successfully both to record boot moment components and to control ski binding release during actual skiing maneuvers. Moment histories typical of three common recreational skiing maneuvers are presented.
A Pipeline for 3D Digital Optical Phenotyping Plant Root System Architecture
NASA Astrophysics Data System (ADS)
Davis, T. W.; Shaw, N. M.; Schneider, D. J.; Shaff, J. E.; Larson, B. G.; Craft, E. J.; Liu, Z.; Kochian, L. V.; Piñeros, M. A.
2017-12-01
This work presents a new pipeline for digital optical phenotyping of the root system architecture of agricultural crops. The pipeline begins with a 3D root-system imaging apparatus for hydroponically grown crop lines of interest. The apparatus acts as a self-contained darkroom, which includes an imaging tank, a motorized rotating bearing, and a digital camera. The pipeline continues with the Plant Root Imaging and Data Acquisition (PRIDA) software, which is responsible for image capturing and storage. Once root images have been captured, image post-processing is performed using the Plant Root Imaging Analysis (PRIA) command-line tool, which extracts root pixels from color images. Following the pre-processing binarization of digital root images, 3D trait characterization is performed using the next-generation RootReader3D software. RootReader3D measures global root system architecture traits, such as total root system volume and length, total number of roots, and maximum rooting depth and width. While designed to work together, the four stages of the phenotyping pipeline are modular and stand-alone, which provides flexibility and adaptability for various research endeavors.
Hiraki, Masahiko; Kato, Ryuichi; Nagai, Minoru; Satoh, Tadashi; Hirano, Satoshi; Ihara, Kentaro; Kudo, Norio; Nagae, Masamichi; Kobayashi, Masanori; Inoue, Michio; Uejima, Tamami; Oda, Shunichiro; Chavas, Leonard M G; Akutsu, Masato; Yamada, Yusuke; Kawasaki, Masato; Matsugaki, Naohiro; Igarashi, Noriyuki; Suzuki, Mamoru; Wakatsuki, Soichi
2006-09-01
Protein crystallization remains one of the bottlenecks in crystallographic analysis of macromolecules. An automated large-scale protein-crystallization system named PXS has been developed, consisting of the following subsystems, which proceed in parallel under unified control software: dispensing of precipitants and protein solutions, sealing of crystallization plates, a carrying robot, incubators, an observation system and an image-storage server. A sitting-drop crystallization plate specialized for PXS has also been designed and developed. PXS can set up 7680 drops for vapour diffusion per hour, which includes time for replenishing supplies such as disposable tips and crystallization plates. Images of the crystallization drops are automatically recorded according to a preprogrammed schedule and can be viewed by users remotely using web-based browser software. A number of protein crystals were successfully produced and several protein structures could be determined directly from crystals grown by PXS. In other cases, X-ray quality crystals were obtained by further optimization by manual screening based on the conditions found by PXS.
Comparative analysis of data base management systems
NASA Technical Reports Server (NTRS)
Smith, R.
1983-01-01
A study to determine if the Remote File Inquiry (RFI) system would handle the future requirements of the user community is discussed. RFI is a locally written and locally maintained on-line query/update package. The current and future on-line requirements of the user community were studied. Additional consideration was given to the types of data structuring the users required. The survey indicated the features of greatest benefit were: sort, subtotals, totals, record selection, storage of queries, global updating, and the ability to page break. The major deficiencies were: one level of hierarchy, excessive response time, software unreliability, difficulty in adding, deleting, and modifying records, complicated error messages, and the lack of ability to perform interfield comparisons. Missing features users required were: formatted screens, interfield comparisons, interfield arithmetic, multiple file access, security, and data integrity. The survey team recommended Kennedy Space Center move forward to state-of-the-art software: a Data Base Management System which is thoroughly tested and easy to implement and use.
NASA Technical Reports Server (NTRS)
Mckee, James W.
1990-01-01
This volume (4 of 4) contains the description, structured flow charts, prints of the graphical displays, and source code to generate the displays for the AMPS graphical status system. The function of these displays is to present to the manager of the AMPS system a graphical status display with hot boxes that allow the manager to get more detailed status on selected portions of the AMPS system. The development of the graphical displays is divided into two processes: the creation of the screen images and their storage in files on the computer, and the running of the status program which uses the screen images.
Fundamental Fortran for Social Scientists.
ERIC Educational Resources Information Center
Veldman, Donald J.
An introduction to Fortran programming specifically for social science statistical and routine data processing is provided. The first two sections of the manual describe the components of computer hardware and software. Topics include input, output, and mass storage devices; central memory; central processing unit; internal storage of data; and…
Advanced Data Format (ADF) Software Library and Users Guide
NASA Technical Reports Server (NTRS)
Smith, Matthew; Smith, Charles A. (Technical Monitor)
1998-01-01
The "CFD General Notation System" (CGNS) consists of a collection of conventions, and conforming software, for the storage and retrieval of Computational Fluid Dynamics (CFD) data. It facilitates the exchange of data between sites and applications, and helps stabilize the archiving of aerodynamic data. This effort was initiated in order to streamline the procedures in exchanging data and software between NASA and its customers, but the goal is to develop CGNS into a National Standard for the exchange of aerodynamic data. The CGNS development team is comprised of members from Boeing Commercial. Airplane Group, NASA-Ames, NASA-Langley, NASA-Lewis, McDonnell-Douglas Corporation (now Boeing-St. Louis), Air Force-Wright Lab., and ICEM-CFD Engineering. The elements of CGNS address all activities associated with the storage of data on external media and its movement to and from application programs. These elements include: 1) The Advanced Data Format (ADF) Database manager, consisting of both a file format specification and its 1/0 software, which handles the actual reading and writing of data from and to external storage media; 2) The Standard Interface Data Structures (SIDS), which specify the intellectual content of CFD data and the conventions governing naming and terminology; 3) The SIDS-to-ADF File Mapping conventions, which specify the exact location where the CFD data defined by the SIDS is to be stored within the ADF file(s); and 4) The CGNS Mid-level Library, which provides CFD-knowledgeable routines suitable for direct installation into application codes. The ADF is a generic database manager with minimal intrinsic capability. It was written for the purpose of storing large numerical datasets in an efficient, platform independent manner. To be effective, it must be used in conjunction with external agreements on how the data will be organized within the ADF database such defined by the SIDS. There are currently 34 user callable functions that comprise the ADF Core library and are described in the Users Guide. The library is written in C, but each function has a FORTRAN counterpart.
NASA Astrophysics Data System (ADS)
Puzyrkov, Dmitry; Polyakov, Sergey; Podryga, Viktoriia; Markizov, Sergey
2018-02-01
At the present stage of computer technology development it is possible to study the properties and processes of complex systems at the molecular and even atomic levels, for example by means of molecular dynamics methods. The most interesting problems are related to the study of complex processes under real physical conditions. Solving such problems requires the use of high-performance computing systems of various types, for example GRID systems and HPC clusters. Given how time-consuming these computational tasks are, software for automatic and unified monitoring of such computations becomes necessary. A complex computational task can be performed over different HPC systems, which requires output data synchronization between the storage chosen by a scientist and the HPC system used for the computations. The design of the computational domain is also quite a problem; it requires complex software tools and algorithms for proper atomistic data generation on HPC systems. The paper describes the prototype of a cloud service intended for the design of large-volume atomistic systems for further detailed molecular dynamics calculations and for managing those calculations, and presents the part of its concept aimed at initial data generation on the HPC systems.
SNS programming environment user's guide
NASA Technical Reports Server (NTRS)
Tennille, Geoffrey M.; Howser, Lona M.; Humes, D. Creig; Cronin, Catherine K.; Bowen, John T.; Drozdowski, Joseph M.; Utley, Judith A.; Flynn, Theresa M.; Austin, Brenda A.
1992-01-01
The computing environment is briefly described for the Supercomputing Network Subsystem (SNS) of the Central Scientific Computing Complex of NASA Langley. The major SNS computers are a CRAY-2, a CRAY Y-MP, a CONVEX C-210, and a CONVEX C-220. The software common to all of these computers is described, including: the UNIX operating system, computer graphics, networking utilities, mass storage, and mathematical libraries. Also described are file management, validation, SNS configuration, documentation, and customer services.
Vehicle security encryption based on unlicensed encryption
NASA Astrophysics Data System (ADS)
Huang, Haomin; Song, Jing; Xu, Zhijia; Ding, Xiaoke; Deng, Wei
2018-03-01
Current vehicle keys are easily destroyed or damaged, so the use of an elliptic-curve encryption algorithm is proposed to improve the reliability of the vehicle security system. Based on the encryption rules of elliptic curves, the chip's framework and hardware structure are designed, and the chip's calculation process is then analyzed by software simulation. The simulation achieved the expected target. Finally, some issues are pointed out concerning the chip's storage control and other modules in the data calculation.
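The paper targets a hardware chip, but the elliptic-curve authentication idea behind such a key can be sketched with the Python cryptography library: the vehicle challenges the key fob, which proves possession of its private key via an ECDSA signature. The protocol, curve choice, and key handling below are illustrative assumptions, not the paper's design.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Key fob holds a private key; the vehicle stores the matching public key.
fob_key = ec.generate_private_key(ec.SECP256R1())
vehicle_known_pub = fob_key.public_key()

# Vehicle issues a random challenge; the fob signs it with ECDSA.
challenge = os.urandom(32)
signature = fob_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# Vehicle unlocks only if the signature verifies (raises InvalidSignature otherwise).
vehicle_known_pub.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("challenge-response OK; unlock")
```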
Visualizations: A Tool to Achieve Optimized Operational Decision Making and Data Integration
Hudson, Paul C.; Rzasa, Jeffrey A.
2015-06-01
[Thesis; only fragments of this record survive. The recoverable text mentions the Hadoop Distributed File System (HDFS), Accumulo-based knowledge stores built on OWL/RDF, and cloud-based Apache software.]
Ward, Adam S.; Kelleher, Christa A.; Mason, Seth J. K.; Wagener, Thorsten; McIntyre, Neil; McGlynn, Brian L.; Runkel, Robert L.; Payn, Robert A.
2017-01-01
Researchers and practitioners alike often need to understand and characterize how water and solutes move through a stream in terms of the relative importance of in-stream and near-stream storage and transport processes. In-channel and subsurface storage processes are highly variable in space and time and difficult to measure. Storage estimates are commonly obtained using transient-storage models (TSMs) of the experimentally obtained solute-tracer test data. The TSM equations represent key transport and storage processes with a suite of numerical parameters. Parameter values are estimated via inverse modeling, in which parameter values are iteratively changed until model simulations closely match observed solute-tracer data. Several investigators have shown that TSM parameter estimates can be highly uncertain. When this is the case, parameter values cannot be used reliably to interpret stream-reach functioning. However, authors of most TSM studies do not evaluate or report parameter certainty. Here, we present a software tool linked to the One-dimensional Transport with Inflow and Storage (OTIS) model that enables researchers to conduct uncertainty analyses via Monte-Carlo parameter sampling and to visualize uncertainty and sensitivity results. We demonstrate application of our tool to 2 case studies and compare our results to output obtained from more traditional implementation of the OTIS model. We conclude by suggesting best practices for transient-storage modeling and recommend that future applications of TSMs include assessments of parameter certainty to support comparisons and more reliable interpretations of transport processes.
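The Monte-Carlo analysis the tool performs can be sketched as follows; a toy Gaussian curve stands in for an actual OTIS run, and the parameter names and ranges are invented. The spread among the well-fitting parameter sets is what indicates how certain each parameter is.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)                     # time axis (illustrative)
observed = np.exp(-((t - 4.0) ** 2) / 2.0)      # stand-in tracer curve

def toy_model(p):
    """Stand-in for an OTIS run; a real analysis would execute OTIS itself."""
    return np.exp(-((t - p["lag"]) ** 2) / (2 * p["spread"] ** 2))

ranges = {"lag": (1.0, 8.0), "spread": (0.3, 3.0)}   # invented parameters

results = []
for _ in range(5000):                           # Monte-Carlo parameter sampling
    p = {k: rng.uniform(lo, hi) for k, (lo, hi) in ranges.items()}
    rmse = np.sqrt(np.mean((toy_model(p) - observed) ** 2))
    results.append((rmse, p))

best = sorted(results, key=lambda r: r[0])[:100]     # best-fitting sets
lags = [p["lag"] for _, p in best]
print(f"lag: {np.mean(lags):.2f} +/- {np.std(lags):.2f}")   # tight spread => certain
```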
EOSDIS: Archive and Distribution Systems in the Year 2000
NASA Technical Reports Server (NTRS)
Behnke, Jeanne; Lake, Alla
2000-01-01
Earth Science Enterprise (ESE) is a long-term NASA research mission to study the processes leading to global climate change. The Earth Observing System (EOS) is a NASA campaign of satellite observatories that are a major component of ESE. The EOS Data and Information System (EOSDIS) is another component of ESE that will provide the Earth science community with easy, affordable, and reliable access to Earth science data. EOSDIS is a distributed system, with major facilities at seven Distributed Active Archive Centers (DAACs) located throughout the United States. The EOSDIS software architecture is being designed to receive, process, and archive several terabytes of science data on a daily basis. Thousands of science users and perhaps several hundred thousand non-science users are expected to access the system. The first major set of data to be archived in the EOSDIS is from Landsat-7. Another EOS satellite, Terra, was launched on December 18, 1999. With the Terra launch, the EOSDIS will be required to support approximately one terabyte of data into and out of the archives per day. Since EOS is a multi-mission program, including the launch of more satellites and many other missions, the role of the archive systems becomes larger and more critical. In 1995, at the fourth convening of the NASA Mass Storage Systems and Technologies Conference, the development plans for the EOSDIS information system and archive were described. Five years later, many changes have occurred in the effort to field an operational system. It is interesting to reflect on some of the changes driving the archive technology and system development for EOSDIS. This paper principally describes the Data Server subsystem, including how the other subsystems access the archive, the nature of the data repository, and the mass-storage I/O management. The paper reviews the system architecture (both hardware and software) of the basic components of the archive. It discusses the operations concept, code development, and testing phase of the system. Finally, it describes the future plans for the archive.
Clinical results of HIS, RIS, PACS integration using data integration CASE tools
NASA Astrophysics Data System (ADS)
Taira, Ricky K.; Chan, Hing-Ming; Breant, Claudine M.; Huang, Lu J.; Valentino, Daniel J.
1995-05-01
Current infrastructure research in PACS is dominated by the development of communication networks (local area networks, teleradiology, ATM networks, etc.), multimedia display workstations, and hierarchical image storage architectures. However, limited work has been performed on developing flexible, expansible, and intelligent information processing architectures for the vast decentralized image and text data repositories prevalent in healthcare environments. Patient information is often distributed among multiple data management systems. Current large-scale efforts to integrate medical information and knowledge sources have been costly with limited retrieval functionality. Software integration strategies to unify distributed data and knowledge sources are still lacking commercially. Systems heterogeneity (i.e., differences in hardware platforms, communication protocols, database management software, nomenclature, etc.) is at the heart of the problem and is unlikely to be standardized in the near future. In this paper, we demonstrate the use of newly available CASE (computer-aided software engineering) tools to rapidly integrate HIS, RIS, and PACS information systems. The advantages of these tools include fast development time (low-level code is generated from graphical specifications) and easy system maintenance (excellent documentation, easy changes, and a centralized code repository in an object-oriented database). The CASE tools are used to develop and manage the 'middleware' in our client-mediator-server architecture for systems integration. Our architecture is scalable and can accommodate heterogeneous database and communication protocols.
DICOM-compliant PACS with CD-based image archival
NASA Astrophysics Data System (ADS)
Cox, Robert D.; Henri, Christopher J.; Rubin, Richard K.; Bret, Patrice M.
1998-07-01
This paper describes the design and implementation of a low-cost PACS conforming to the DICOM 3.0 standard. The goal was to provide an efficient image archival and management solution on a heterogeneous hospital network as a basis for filmless radiology. The system follows a distributed, client/server model and was implemented at a fraction of the cost of a commercial PACS. It provides reliable archiving on recordable CD and allows access to digital images throughout the hospital and on the Internet. Dedicated servers have been designed for short-term storage, CD-based archival, data retrieval and remote data access or teleradiology. The short-term storage devices provide DICOM storage and query/retrieve services to scanners and workstations and approximately twelve weeks of 'on-line' image data. The CD-based archival and data retrieval processes are fully automated with the exception of CD loading and unloading. The system employs lossless compression on both short- and long-term storage devices. All servers communicate via the DICOM protocol in conjunction with both local and 'master' SQL-patient databases. Records are transferred from the local to the master database independently, ensuring that storage devices will still function if the master database server cannot be reached. The system features rules-based work-flow management and WWW servers to provide multi-platform remote data access. The WWW server system is distributed on the storage, retrieval and teleradiology servers, allowing viewing of locally stored image data directly in a WWW browser without the need for data transfer to a central WWW server. An independent system monitors disk usage, processes, network and CPU load on each server and reports errors to the image management team via email. The PACS was implemented using a combination of off-the-shelf hardware, freely available software and applications developed in-house. The system has enabled filmless operation in CT, MR and ultrasound within the radiology department and throughout the hospital. The use of WWW technology has enabled the development of an intuitive web-based teleradiology and image management solution that provides complete access to image data.
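This PACS predates today's open-source DICOM toolkits, but the storage-SCP role its short-term servers play can be sketched with the pynetdicom library; the AE title, port, and archive path below are assumptions.

```python
from pynetdicom import AE, evt, AllStoragePresentationContexts

def handle_store(event):
    """Write each received image to short-term storage as a DICOM file."""
    ds = event.dataset
    ds.file_meta = event.file_meta
    ds.save_as(f"/archive/shortterm/{ds.SOPInstanceUID}.dcm",
               write_like_original=False)
    return 0x0000                                  # DICOM success status

ae = AE(ae_title="SHORTTERM")                      # hypothetical AE title
ae.supported_contexts = AllStoragePresentationContexts
# Blocks and serves C-STORE requests from scanners and workstations.
ae.start_server(("", 11112), block=True,
                evt_handlers=[(evt.EVT_C_STORE, handle_store)])
```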
Initial clinical results with a new needle screen storage phosphor system in chest radiograms.
Körner, M; Wirth, S; Treitl, M; Reiser, M; Pfeifer, K-J
2005-11-01
To evaluate image quality and anatomical detail depiction in dose-reduced digital plain chest radiograms using a new needle screen storage phosphor (NIP) in comparison to full-dose conventional powder screen storage phosphor (PIP) images. 24 supine chest radiograms were obtained with PIP at standard dose and compared to follow-up studies of the same patients obtained with NIP at a dose reduced to 50% of the PIP dose (all imaging systems: AGFA-Gevaert, Mortsel, Belgium). In both systems identical versions of post-processing software supplied by the manufacturer were used with matched parameters. Six independent readers blinded to both modality and dose evaluated the images for depiction and differentiation of defined anatomical regions (peripheral lung parenchyma, central lung parenchyma, hilum, heart, diaphragm, upper mediastinum, and bone). All NIP images were compared to the corresponding PIP images using a five-point scale (-2, clearly inferior, to +2, clearly superior). Overall image quality was rated for each PIP and NIP image separately (1, not usable, to 5, excellent). PIP and dose-reduced NIP images were rated equivalent. Mean image noise impression was only slightly higher on NIP images. Mean image quality for NIP showed no significant differences (p > 0.05, Mann-Whitney U test). With the use of the new needle-structured storage phosphors in chest radiography, a dose reduction of up to 50% is possible without detracting from image quality or detail depiction. Especially in patients with multiple follow-up studies, the overall dose can be decreased significantly.
RS/1 in the Clinical Environment
Kush, Thomas
1980-01-01
This paper describes the design of RS/1,™ the Research System, and its use in clinical patient studies. RS/1 is an interactive computer software system developed by the Medical Systems Group at BBN. Investigators and technicians who have never before used computers can learn RS/1 with a few hours of training. It uses familiar and intuitive concepts for data handling and data analysis, such as the “automated notebook” format of data storage, the direct use of graphs in curve-fitting, and a simple command language. Its versatility has made RS/1 useful in clinical research contexts, especially for studies involving patient care data.
Image matrix processor for fast multi-dimensional computations
Roberson, G.P.; Skeate, M.F.
1996-10-15
An apparatus for multi-dimensional computation is disclosed which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination. 10 figs.
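The source-cache/coefficient-table/target-cache flow amounts to computing each output pixel as a table-driven weighted sum of source pixels. A NumPy sketch of that pattern follows, using a toy 2x-averaging table; the table contents are illustrative, not the patented hardware's.

```python
import numpy as np

def table_transform(source, indices, weights):
    """Each output pixel is a weighted sum of source pixels, with the source
    positions (indices) and weights precomputed in a coefficient table."""
    flat = source.ravel()
    return (flat[indices] * weights).sum(axis=1)

# Toy coefficient table: 2x averaging of an 8-sample signal.
src = np.arange(8, dtype=float)
idx = np.array([[0, 1], [2, 3], [4, 5], [6, 7]])   # source positions per output
w = np.full((4, 2), 0.5)                           # weights per output
print(table_transform(src, idx, w))                # [0.5 2.5 4.5 6.5]
```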
A computerized aircraft battery servicing facility
NASA Technical Reports Server (NTRS)
Glover, Richard D.
1992-01-01
The latest upgrade to the Aerospace Energy Systems Laboratory (AESL) is described. The AESL is a distributed digital system consisting of a central system and battery servicing stations connected by a high-speed serial data bus. The entire system is located in two adjoining rooms; the bus length is approximately 100 ft. Each battery station contains a digital processor, data acquisition, floppy diskette data storage, and operator interfaces. The operator initiates a servicing task and thereafter the battery station monitors the progress of the task and terminates it at the appropriate time. The central system provides data archives, manages the data bus, and provides a timeshare interface for multiple users. The system also hosts software production tools for the battery stations and the central system.
de Beer, R; Graveron-Demilly, D; Nastase, S; van Ormondt, D
2004-03-01
Recently we have developed a Java-based heterogeneous distributed computing system for the field of magnetic resonance imaging (MRI). It is a software system for embedding the various image reconstruction algorithms that we have created for handling MRI data sets with sparse sampling distributions. Since these data sets may result from multi-dimensional MRI measurements our system has to control the storage and manipulation of large amounts of data. In this paper we describe how we have employed the extensible markup language (XML) to realize this data handling in a highly structured way. To that end we have used Java packages, recently released by Sun Microsystems, to process XML documents and to compile pieces of XML code into Java classes. We have effectuated a flexible storage and manipulation approach for all kinds of data within the MRI system, such as data describing and containing multi-dimensional MRI measurements, data configuring image reconstruction methods and data representing and visualizing the various services of the system. We have found that the object-oriented approach, possible with the Java programming environment, combined with the XML technology is a convenient way of describing and handling various data streams in heterogeneous distributed computing systems.
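As an illustration of XML-described configuration data of the kind the system uses, the sketch below parses a small, hypothetical reconstruction-method document with Python's standard library; the original work did this in Java with Sun's XML packages, and the element and attribute names here are invented.

```python
import xml.etree.ElementTree as ET

# Hypothetical configuration document for one reconstruction method.
doc = """
<reconstruction method="bayesian">
  <param name="iterations">200</param>
  <param name="prior_weight">0.15</param>
</reconstruction>
"""

root = ET.fromstring(doc)
params = {p.get("name"): float(p.text) for p in root.findall("param")}
print(root.get("method"), params)
# bayesian {'iterations': 200.0, 'prior_weight': 0.15}
```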
A novel anti-piracy optical disk with photochromic diarylethene
NASA Astrophysics Data System (ADS)
Liu, Guodong; Cao, Guoqiang; Huang, Zhen; Wang, Shenqian; Zou, Daowen
2005-09-01
Diarylethene is a photochromic material with many advantages and one of the most promising recording materials for high-capacity optical data storage. Diarylethene has two forms, which can be converted into each other by laser beams of different wavelengths. The material has been researched for rewritable optical disks. Volatile data storage is one of its properties, which was long considered an obstacle to practical use, and much research has been devoted to overcoming it. In fact, volatile data storage is very useful for anti-piracy optical data storage. Piracy is a social and economic problem. One approach to anti-piracy optical data storage is to limit readout of the data recorded in the material by encryption software; with the development of computer technologies, however, this kind of software is more and more easily cracked. Using photochromic diarylethene as the optical recording material, the signals of the data recorded in the material degrade as they are read, and readout of the data is limited. Because this method uses hardware to realize anti-piracy, it cannot be cracked. In this paper, we introduce this usage of the material. Some experiments are presented to prove its feasibility.
The development of a clinical outcomes survey research application: Assessment Center.
Gershon, Richard; Rothrock, Nan E; Hanrahan, Rachel T; Jansky, Liz J; Harniss, Mark; Riley, William
2010-06-01
The National Institutes of Health-sponsored Patient-Reported Outcome Measurement Information System (PROMIS) aimed to create item banks and computerized adaptive tests (CATs) across multiple domains for individuals with a range of chronic diseases. Web-based software was created to enable a researcher to create study-specific websites that could administer PROMIS CATs and other instruments to research participants or clinical samples. This paper outlines the process used to develop a user-friendly, free, Web-based resource (Assessment Center) for the storage, retrieval, organization, sharing, and administration of patient-reported outcomes (PRO) instruments. Joint Application Design (JAD) sessions were conducted with representatives from numerous institutions to supply a general wish list of features. Use Cases were then written to ensure that end-user expectations matched programmer specifications. Program development included daily programmer "scrum" sessions, weekly Usability Acceptability Testing (UAT), and continuous Quality Assurance (QA) activities pre- and post-release. Assessment Center includes features that promote instrument development, including item histories, data management, and storage of statistical analysis results. This case study of software development highlights the collection and incorporation of user input throughout the development process. Potential future applications of Assessment Center in clinical research are discussed.
Reconfigurable HIL Testing of Earth Satellites
NASA Technical Reports Server (NTRS)
2008-01-01
In recent years, hardware-in-the-loop (HIL) testing has carved a strong niche in several industries, such as automotive, aerospace, telecom, and consumer electronics. As desktop computers have realized gains in speed, memory size, and data storage capacity, hardware/software platforms have evolved into high-performance, deterministic HIL platforms capable of hosting the most demanding applications for testing components and subsystems. Using simulation software to emulate the digital and analog I/O signals of system components, engineers of all disciplines can now test new systems in realistic environments to evaluate their function and performance prior to field deployment. Within the aerospace industry, space-borne satellite systems are arguably some of the most demanding in terms of their requirements for custom engineering and testing. Typically, spacecraft are built one or a few at a time to fulfill a space science or defense mission. In contrast to other industries, which can amortize the cost of HIL systems over thousands, even millions of units, spacecraft HIL systems have been built as one-of-a-kind solutions, expensive in terms of schedule, cost, and risk, to assure satellite and spacecraft system reliability. The focus of this paper is to present a new approach to HIL testing for spacecraft systems that takes advantage of a highly flexible hardware/software architecture based on National Instruments PXI reconfigurable hardware and virtual instruments developed using LabVIEW. This new approach is based on a multistage/multimode spacecraft bus emulation development model called Reconfigurable Hardware-In-the-Loop, or RHIL.
Virtualized Networks and Virtualized Optical Line Terminal (vOLT)
NASA Astrophysics Data System (ADS)
Ma, Jonathan; Israel, Stephen
2017-03-01
The success of the Internet and the proliferation of Internet of Things (IoT) devices are forcing telecommunications carriers to re-architect the central office as a datacenter (CORD) so as to bring datacenter economics and cloud agility to the central office (CO). The Open Network Operating System (ONOS) is the first open-source software-defined networking (SDN) operating system capable of managing and controlling network, computing, and storage resources to support the CORD infrastructure and network virtualization. The virtualized Optical Line Termination (vOLT) is one of the key components in such virtualized networks.
Computer system for scanning tunneling microscope automation
NASA Astrophysics Data System (ADS)
Aguilar, M.; García, A.; Pascual, P. J.; Presa, J.; Santisteban, A.
1987-03-01
A computerized system for the automation of a scanning tunneling microscope is presented. It is based on an IBM personal computer (PC), either an XT or an AT, which performs the control, data acquisition, and storage operations, displays the STM "images" in real time, and provides image processing tools for the restoration and analysis of data. It supports different data acquisition and control cards and image display cards. The software has been designed in a modular way to allow the replacement of these cards and other equipment improvements, as well as the inclusion of user routines for data analysis.
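The modular card-replacement design suggests a driver-interface pattern. The sketch below is purely an illustrative modern analogue (the original 1987 software predates Java), and the interface names are invented:

```java
// Illustrative driver-interface pattern for swappable acquisition and
// display cards; a modern sketch only, not the original implementation.
interface AcquisitionCard {
    double readTunnelCurrent();          // one sample from the ADC card
}

interface DisplayCard {
    void plot(int x, int y, double z);   // one pixel of the STM "image"
}

public class StmScanner {
    public static void main(String[] args) {
        AcquisitionCard adc = () -> Math.random();   // stub card driver
        DisplayCard display = (x, y, z) ->
                System.out.printf("(%d,%d)=%.3f%n", x, y, z);
        for (int y = 0; y < 4; y++)                   // tiny raster scan
            for (int x = 0; x < 4; x++)
                display.plot(x, y, adc.readTunnelCurrent());
    }
}
```

Swapping a card then means supplying a new implementation of the corresponding interface, leaving the scan logic untouched.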
NAS-current status and future plans
NASA Technical Reports Server (NTRS)
Bailey, F. R.
1987-01-01
The Numerical Aerodynamic Simulation (NAS) Program has met its first major milestone, the NAS Processing System Network (NPSN) Initial Operating Configuration (IOC). The program has met its goal of providing a national supercomputer facility capable of greatly enhancing the Nation's research and development efforts. Furthermore, the program is fulfilling its pathfinder role by defining and implementing a paradigm for supercomputing system environments. The IOC is only the beginning, and the NAS Program will aggressively continue to develop and implement emerging supercomputer, communications, storage, and software technologies to strengthen computation as a critical element in supporting the Nation's leadership role in aeronautics.
Louisiana: a model for advancing regional e-Research through cyberinfrastructure
Katz, Daniel S.; Allen, Gabrielle; Cortez, Ricardo; Cruz-Neira, Carolina; Gottumukkala, Raju; Greenwood, Zeno D.; Guice, Les; Jha, Shantenu; Kolluru, Ramesh; Kosar, Tevfik; Leger, Lonnie; Liu, Honggao; McMahon, Charlie; Nabrzyski, Jarek; Rodriguez-Milla, Bety; Seidel, Ed; Speyrer, Greg; Stubblefield, Michael; Voss, Brian; Whittenburg, Scott
2009-01-01
Louisiana researchers and universities are leading a concentrated, collaborative effort to advance statewide e-Research through a new cyberinfrastructure: computing systems, data storage systems, advanced instruments and data repositories, visualization environments and people, all linked together by software programs and high-performance networks. This effort has led to a set of interlinked projects that have started making a significant difference in the state, and has created an environment that encourages increased collaboration, leading to new e-Research. This paper describes the overall effort, the new projects and environment and the results to date. PMID:19451102
Maratt, Joseph D; Srinivasan, Ramesh C; Dahl, William J; Schilling, Peter L; Urquhart, Andrew G
2012-08-01
As digital radiography becomes more prevalent, several systems for digital preoperative planning have become available. The purpose of this study was to evaluate the accuracy and efficiency of an inexpensive, cloud-based digital templating system. Its accuracy was comparable with that of acetate templating; however, cloud-based templating was substantially faster and more convenient than either acetate templating or locally installed software. Although this is a practical solution for this particular medical application, regulatory changes are necessary before the tremendous advantages of cloud-based storage and computing can be realized in medical research and clinical practice. Copyright 2012, SLACK Incorporated.
SDI (Strategic Defense Initiative) Software Technology Program Plan
1987-06-01
[Fragmentary excerpt] ...station control, and defense. c. Simulation Display Generator (SDG) [Patterson 83]: SDG supports the creation, display, modification, storage, and ... [The remaining recoverable text consists of citations: "Proceedings Trends and Applications 1981," IEEE (May 28, 1981); Parnas, D.L., "When can Software be Trustworthy?," Keynote Address to COMPASS.]
The Stabilization, Exploration, and Expression of Computer Game History
ERIC Educational Resources Information Center
Kaltman, Eric
2017-01-01
Computer games are now a significant cultural phenomenon, and a significant artistic output of humanity. However, little effort and attention have been paid to how the medium of games and interactive software developed, and even less to the historical storage of software development documentation. This thesis borrows methodologies and practices…
Science Gateways, Scientific Workflows and Open Community Software
NASA Astrophysics Data System (ADS)
Pierce, M. E.; Marru, S.
2014-12-01
Science gateways and scientific workflows occupy different ends of the spectrum of user-focused cyberinfrastructure. Gateways, sometimes called science portals, provide a way of enabling large numbers of users to take advantage of advanced computing resources (supercomputers, advanced storage systems, science clouds) by providing Web and desktop interfaces and supporting services. Scientific workflows, at the other end of the spectrum, support advanced usage of cyberinfrastructure that enables "power users" to undertake computational experiments that are not easily done through the usual mechanisms (managing simulations across multiple sites, for example). Despite these different target communities, gateways and workflows share many similarities and can potentially be accommodated by the same software system. For example, pipelines to process InSAR imagery sets or to data-mine GPS time series are workflows. The results and the ability to make downstream products may be made available through a gateway, and power users may want to provide their own custom pipelines. In this abstract, we discuss our efforts to build an open-source software system, Apache Airavata, that can accommodate both gateway and workflow use cases. Our approach is general, and we have applied the software to problems in a number of scientific domains. In this talk, we discuss our applications to usage scenarios specific to earth science, focusing on earthquake physics examples drawn from the QuakeSim.org and GeoGateway.org efforts. We also examine the role of the Apache Software Foundation's open community model as a way to build up common community codes that do not depend upon a single "owner" to sustain them. Pushing beyond open-source software, we also see the need to provide gateways and workflow systems as cloud services. These services centralize operations, provide well-defined programming interfaces, scale elastically, and have global-scale fault tolerance. We discuss our work providing Apache Airavata as a hosted service to provide these features.
e!DAL--a framework to store, share and publish research data.
Arend, Daniel; Lange, Matthias; Chen, Jinbo; Colmsee, Christian; Flemming, Steffen; Hecht, Denny; Scholz, Uwe
2014-06-24
The life-science community faces a major challenge in handling "big data", highlighting the need for high quality infrastructures capable of sharing and publishing research data. Data preservation, analysis, and publication are the three pillars in the "big data life cycle". The infrastructures currently available for managing and publishing data are often designed to meet domain-specific or project-specific requirements, resulting in the repeated development of proprietary solutions and lower quality data publication and preservation overall. e!DAL is a lightweight software framework for publishing and sharing research data. Its main features are version tracking, metadata management, information retrieval, registration of persistent identifiers (DOI), an embedded HTTP(S) server for public data access, access as a network file system, and a scalable storage backend. e!DAL is available as an API for local non-shared storage and as a remote API featuring distributed applications. It can be deployed "out-of-the-box" as an on-site repository. e!DAL was developed based on experiences coming from decades of research data management at the Leibniz Institute of Plant Genetics and Crop Plant Research (IPK). Initially developed as a data publication and documentation infrastructure for the IPK's role as a data center in the DataCite consortium, e!DAL has grown towards being a general data archiving and publication infrastructure. The e!DAL software has been deployed into the Maven Central Repository. Documentation and Software are also available at: http://edal.ipk-gatersleben.de.
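A hypothetical, in-memory illustration of the kind of versioned-storage API the abstract describes (version tracking plus persistent-identifier registration); the names and structure here are illustrative and are not the actual e!DAL signatures:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical in-memory sketch of a versioned research-data store with
// DOI registration, in the spirit of the e!DAL description; not the real API.
public class VersionedStoreDemo {
    static final Map<String, List<byte[]>> versions = new HashMap<>();

    static int store(String id, byte[] content) {
        versions.computeIfAbsent(id, k -> new ArrayList<>()).add(content);
        return versions.get(id).size();           // 1-based version number
    }

    static String registerDoi(String id) {
        // A real repository would call the DataCite service here;
        // 10.5072 is the DataCite test prefix.
        return "10.5072/demo." + id;
    }

    public static void main(String[] args) throws Exception {
        Path f = Files.createTempFile("dataset", ".csv");
        Files.writeString(f, "gene,expression\nabc1,0.42\n");
        int v1 = store("dataset-1", Files.readAllBytes(f));
        Files.writeString(f, "gene,expression\nabc1,0.43\n");
        int v2 = store("dataset-1", Files.readAllBytes(f)); // version tracking
        System.out.printf("versions: %d, %d; DOI: %s%n",
                v1, v2, registerDoi("dataset-1"));
    }
}
```

The point of the pattern is that re-storing under the same identifier creates a new version rather than overwriting, so published DOIs continue to resolve to stable data.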
Meenan, Christopher; Daly, Barry; Toland, Christopher; Nagy, Paul
2006-01-01
Rapid advances are changing the technology and applications of multidetector computed tomography (CT) scanners. The major increase in data associated with this new technology, however, breaks most commercial picture archiving and communication system (PACS) architectures by preventing them from delivering data in real time to radiologists and outside clinicians. We proposed a phased model for 3D workflow, installed a thin-slice archive and measured thin-slice data storage over a period of 5 months. A mean of 1,869 CT studies were stored per month, with an average of 643 images per study and a mean total volume of 588 GB/month. We also surveyed 48 radiologists to determine diagnostic use, impressions of thin-slice value, and requirements for retention times. The majority of radiologists thought thin slice was helpful for diagnosis and regularly used the application. Permanent storage of thin slice CT is likely to become best practice and a mission-critical pursuit for the health care enterprise.
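As a rough consistency check on the reported figures (this arithmetic is ours, not the authors'):

$$\frac{588~\text{GB/month}}{1869~\text{studies/month}} \approx 0.31~\text{GB/study}, \qquad \frac{0.31~\text{GB} \times 1024}{643~\text{images}} \approx 0.50~\text{MB/image},$$

which matches an uncompressed 512 × 512, 16-bit CT slice (512 × 512 × 2 B ≈ 0.52 MB), suggesting the archive stored thin slices with little or no compression.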
A Layered Solution for Supercomputing Storage
Grider, Gary
2018-06-13
To solve the supercomputing challenge of memory keeping up with processing speed, a team at Los Alamos National Laboratory developed two innovative memory management and storage technologies. Burst buffers peel off data onto flash memory to support the checkpoint/restart paradigm of large simulations. MarFS adds a thin software layer enabling a new tier for campaign storage, based on inexpensive, failure-prone disk drives, between disk drives and tape archives.
Robust media processing on programmable power-constrained systems
NASA Astrophysics Data System (ADS)
McVeigh, Jeff
2005-03-01
To achieve consumer-level quality, media systems must process continuous streams of audio and video data while maintaining exacting tolerances on sampling rate, jitter, synchronization, and latency. While it is relatively straightforward to design fixed-function hardware implementations to satisfy worst-case conditions, there is a growing trend to utilize programmable multi-tasking solutions for media applications. The flexibility of these systems enables support for multiple current and future media formats, which can reduce design costs and time-to-market. This paper provides practical engineering solutions to achieve robust media processing on such systems, with specific attention given to power-constrained platforms. The techniques covered in this article utilize the fundamental concepts of algorithm and software optimization, software/hardware partitioning, stream buffering, hierarchical prioritization, and system resource and power management. A novel enhancement to dynamically adjust processor voltage and frequency based on buffer fullness to reduce system power consumption is examined in detail. The application of these techniques is provided in a case study of a portable video player implementation based on a general-purpose processor running a non real-time operating system that achieves robust playback of synchronized H.264 video and MP3 audio from local storage and streaming over 802.11.
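A minimal sketch of the buffer-fullness-driven voltage/frequency idea described above; the watermarks, frequency table, and setFrequency() hook are assumptions, since real platforms expose DVFS through OS- or driver-specific interfaces (e.g., cpufreq on Linux):

```java
// Illustrative buffer-fullness-driven dynamic frequency scaling.
public class BufferDrivenDvfs {
    static final double LOW = 0.25, HIGH = 0.75;     // hypothetical watermarks
    static final int[] FREQS_MHZ = {300, 600, 1200}; // hypothetical P-states
    static int level = 1;

    static void setFrequency(int mhz) {
        System.out.println("set CPU to " + mhz + " MHz"); // stub for platform call
    }

    // Called periodically with the decoder's output-buffer fullness in [0,1].
    static void onBufferSample(double fullness) {
        if (fullness < LOW && level < FREQS_MHZ.length - 1) {
            setFrequency(FREQS_MHZ[++level]);  // draining: speed up to avoid underrun
        } else if (fullness > HIGH && level > 0) {
            setFrequency(FREQS_MHZ[--level]);  // ample margin: slow down, save power
        }
    }

    public static void main(String[] args) {
        double[] trace = {0.8, 0.6, 0.3, 0.2, 0.15, 0.5, 0.9};
        for (double f : trace) onBufferSample(f);
    }
}
```

Raising the clock only when the buffer risks underrun, and lowering it when there is ample margin, is what lets the player trade latency headroom for power savings without missing playback deadlines.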
Minati, L; Ghielmetti, F; Ciobanu, V; D'Incerti, L; Maccagnano, C; Bizzi, A; Bruzzone, M G
2007-03-01
Advanced neuroimaging techniques, such as functional magnetic resonance imaging (fMRI), chemical shift spectroscopy imaging (CSI), diffusion tensor imaging (DTI), and perfusion-weighted imaging (PWI), create novel challenges in terms of data storage and management: huge amounts of raw data are generated, the results of analysis may depend on the software and settings that were used, and intermediate files are most often inherently not compliant with the current DICOM (digital imaging and communication in medicine) standard, as they contain multidimensional complex and tensor arrays and various other data structures. A software architecture, referred to as the Bio-Image Warehouse System (BIWS), which can be used alongside a radiology information system/picture archiving and communication system (RIS/PACS) to store neuroimaging data for research purposes, is presented. The system architecture is conceived to enable querying by diagnosis according to a predefined two-layer classification taxonomy. The operational impact of the system and the time needed to become acquainted with the web-based interface and with the taxonomy are found to be limited. The development of modules enabling the automated creation of statistical templates is proposed.
Matam, B Rajeswari; Duncan, Heather
2018-06-01
Most existing expert monitoring systems provide neither the real-time continuous analysis of monitored physiological data that is necessary to detect transient or combined vital-sign indicators nor long-term storage of the data for retrospective analyses. In this paper we examine the feasibility of implementing a long-term data storage system able to incorporate real-time data analytics, describe the system design, report the main technical issues encountered and the solutions implemented, and give statistics on the data recorded. The expertise McLaren Electronic Systems uses to continually monitor and analyse data from F1 racing cars in real time was applied to implement a similar real-time data recording platform, adapted with real-time analytics to suit the requirements of the intensive care environment. We encountered many technical (hardware and software) implementation challenges, but once operational the system offered many advantages: (1) the ability to store data for long periods, giving access to historical physiological data; (2) the ability to contract or expand the time axis over periods of interest; (3) the ability to store and review ECG morphology retrospectively; (4) detailed post-event data (after cardiac/respiratory arrest or other clinically significant deteriorations) that can be reviewed clinically, as opposed to trend data, providing valuable clinical insight and allowing informed mortality and morbidity reviews; and (5) storage of captured waveform data for use in developing algorithms for adaptive early-warning systems. Recording data from bedside monitors in intensive care and on wards is feasible, and real-time recording with long-term storage can be implemented. Such systems can in future be improved with additional patient-specific metrics that predict patient status, paving the way for real-time predictive monitoring.
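A minimal sketch of the two data paths described: each sample is appended to durable long-term storage and simultaneously fed to a real-time analytic. The file-based store and the heart-rate thresholds are stand-ins for the actual McLaren-derived platform, whose internals are not public:

```java
import java.io.FileWriter;
import java.io.IOException;
import java.time.Instant;

public class BedsideRecorder {
    private final FileWriter archive;   // stand-in for a time-series database

    BedsideRecorder(String path) throws IOException {
        archive = new FileWriter(path, true);   // append: long-term retention
    }

    void onSample(String patientId, String signal, double value) throws IOException {
        archive.write(String.format("%s,%s,%s,%.1f%n",
                Instant.now(), patientId, signal, value));  // durable store
        if ("HR".equals(signal) && (value < 40 || value > 180)) {
            // Trivial real-time analytic; illustrative thresholds only.
            System.out.println("ALERT " + patientId + " HR=" + value);
        }
    }

    public static void main(String[] args) throws IOException {
        BedsideRecorder r = new BedsideRecorder("vitals.csv");
        r.onSample("bed-07", "HR", 82);
        r.onSample("bed-07", "HR", 190);   // triggers the alert path
        r.archive.close();
    }
}
```

Keeping the archival write on the hot path is what later enables retrospective review of full waveform morphology rather than only trend summaries.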
NASA Astrophysics Data System (ADS)
Badea, G.; Felseghi, R. A.; Aşchilean, I.; Rǎboacǎ, S. M.; Şoimoşan, T.
2017-12-01
The concept of sustainable development aims to meet the needs of the present without compromising the needs of future generations. Achieving the desideratum of a "low-carbon energy system" requires, in the domain of energy production, the use of innovative low-carbon technologies providing maximum efficiency and minimum pollution. One such technology is the fuel cell; as fuel cells are developed further, obtaining energy from hydrogen will become a reality. Thus, hydrogen produced by electrolysis of water using different forms of renewable resources becomes a secure and sustainable energy alternative. In this context, the present paper reports a comparative study of two different hybrid power generation systems for a residential building located in Cluj-Napoca. Into these energy systems were integrated renewable energy sources (photovoltaic panels and a wind turbine) together with, respectively, a backup and storage system based on hydrogen (fuel cell, electrolyser and hydrogen storage tank) and a backup and storage system based on traditional technologies (diesel generator and battery). The iHOGA software was used to simulate the operating performance of the two hybrid systems. The aim of this study was to compare the energy, environmental and economic performances of the two systems and to define possible future scenarios of competitiveness between traditional and new innovative technologies. After analyzing and comparing the simulation results, it can be concluded that fuel cell technology, together with hydrogen, integrated in a hybrid system may be the key to energy production systems with high energy efficiency, making possible increased capitalization of renewable energy sources with low environmental impact.
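A simplified view of the hydrogen path in such a system, with illustrative assumed component efficiencies rather than values from the paper: surplus renewable energy drives the electrolyser, the hydrogen is stored, and the fuel cell converts it back to electricity, so the recoverable energy is

$$E_{\text{out}} = \eta_{\text{elz}}\,\eta_{\text{stor}}\,\eta_{\text{FC}}\,E_{\text{surplus}}, \qquad \text{e.g. } 0.7 \times 0.95 \times 0.5 \approx 0.33,$$

i.e. a round-trip efficiency on the order of one third, which is why the economics of the hydrogen option depend strongly on how much surplus renewable energy would otherwise be curtailed.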
AstroCloud, a Cyber-Infrastructure for Astronomy Research: Data Access and Interoperability
NASA Astrophysics Data System (ADS)
Fan, D.; He, B.; Xiao, J.; Li, S.; Li, C.; Cui, C.; Yu, C.; Hong, Z.; Yin, S.; Wang, C.; Cao, Z.; Fan, Y.; Mi, L.; Wan, W.; Wang, J.
2015-09-01
The data access and interoperability module connects observation proposals, data, virtual machines and software. Using the unique identifier of a PI (principal investigator), an email address or an internal ID, data can be collected through the PI's proposals or through the search interfaces, e.g. cone search. Files associated with the search results can easily be transferred to cloud storage, including the storage attached to virtual machines, or to commercial platforms such as Dropbox. Thanks to the standards of the IVOA (International Virtual Observatory Alliance), VOTable-formatted search results can be sent to various kinds of VO software. Future work will integrate more data and connect archives and other astronomical resources.
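A minimal IVOA Simple Cone Search request, the kind of standard interface such a module builds on; RA, DEC, and SR (all in decimal degrees) are the parameters the cone-search standard defines, but the endpoint URL here is a placeholder:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ConeSearchDemo {
    public static void main(String[] args) throws Exception {
        String base = "https://example.org/csc/conesearch"; // hypothetical endpoint
        URI uri = URI.create(base + "?RA=180.0&DEC=30.0&SR=0.1");
        HttpResponse<String> resp = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(uri).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        // Per the standard, the response body is a VOTable document,
        // which downstream VO tools can consume directly.
        System.out.println(resp.body().substring(0,
                Math.min(200, resp.body().length())));
    }
}
```

Because every compliant service answers the same three parameters with a VOTable, results from different archives can be fed to the same VO software without per-archive adapters.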
Service Management Database for DSN Equipment
NASA Technical Reports Server (NTRS)
Zendejas, Silvino; Bui, Tung; Bui, Bach; Malhotra, Shantanu; Chen, Fannie; Wolgast, Paul; Allen, Christopher; Luong, Ivy; Chang, George; Sadaqathulla, Syed
2009-01-01
This data- and event-driven persistent storage system leverages commercial software provided by Oracle for portability, ease of maintenance, scalability, and ease of integration with embedded, client-server, and multi-tiered applications. In this role, the Service Management Database (SMDB) is a key component of the overall end-to-end process involved in the scheduling, preparation, and configuration of the Deep Space Network (DSN) equipment needed to perform the various telecommunication services the DSN provides to its customers worldwide. SMDB makes efficient use of triggers, stored procedures, queuing functions, e-mail capabilities, data management, and Java integration features provided by the Oracle relational database management system. SMDB uses a third-normal-form schema design that allows for simple data maintenance procedures and thin layers of integration with client applications. The software provides an integrated event-logging system with the ability to publish events to a JMS messaging system for synchronous and asynchronous delivery to subscribed applications. It provides a structured classification of events and application-level messages, stored in database tables that are accessible by monitoring applications for real-time monitoring or for troubleshooting and analysis over historical archives.
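A sketch of the JMS publication path the abstract mentions; the JNDI names, topic, and JSON payload are assumptions, since SMDB's actual event schema is not given:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;
import javax.naming.InitialContext;

// Illustrative publication of an SMDB-style event to a JMS topic.
public class EventPublisher {
    public static void main(String[] args) throws Exception {
        InitialContext jndi = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) jndi.lookup("jms/ConnectionFactory");
        Topic topic = (Topic) jndi.lookup("jms/smdbEvents");  // hypothetical topic

        Connection conn = cf.createConnection();
        try {
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(topic);
            TextMessage msg = session.createTextMessage(
                    "{\"event\":\"EQUIPMENT_CONFIGURED\",\"antenna\":\"DSS-14\"}");
            producer.send(msg);  // delivered to all subscribed monitoring apps
        } finally {
            conn.close();
        }
    }
}
```

Publishing through a topic rather than calling consumers directly is what allows both synchronous monitors and asynchronous archival subscribers to receive the same event stream.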
Bendels, Michael H K; Beed, Prateep; Leibold, Christian; Schmitz, Dietmar; Johenning, Friedrich W
2008-10-30
Optical uncaging of caged compounds is a well-established method for studying the functional anatomy of a brain region at the circuit level. We present an alternative approach to existing experimental setups. Using a low-magnification objective we acquire images for planning the spatial patterns of stimulation. High-magnification objectives are then used during laser stimulation, providing a laser spot between 2 μm and 20 μm in size. The core of this system is video-based control software that monitors and controls the connected devices, allows for planning of the experiment, coordinates the stimulation process, and manages automatic data storage. This combines high-resolution analysis of neuronal circuits with flexible and efficient online planning and execution of a grid of spatial stimulation patterns on a larger scale. The software offers special optical features that enable the system to achieve a maximum degree of spatial reliability. The hardware is built mainly from standard laboratory devices and is thus ideally suited to cost-effectively complementing existing electrophysiological setups with a minimal amount of additional equipment. Finally, we demonstrate the performance of the system by mapping the excitatory and inhibitory connections of entorhinal cortex layer II stellate neurons, and we present an approach for the analysis of photo-induced synaptic responses under high spontaneous activity.
ZFS on RBODs - Leveraging RAID Controllers for Metrics and Enclosure Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stearman, D. M.
2015-03-30
Traditionally, the Lustre file system has relied on the ldiskfs file system with reliable RAID (Redundant Array of Independent Disks) storage underneath. As of Lustre 2.4, ZFS was added as a backend file system, with built-in software RAID, thereby removing the need for expensive RAID controllers. ZFS was designed to work with JBOD (Just a Bunch Of Disks) storage enclosures under the Solaris operating system, which provided a rich device management system. Long-time users of the Lustre file system have relied on RAID controllers to provide metrics and enclosure monitoring and management services, with rich APIs and command-line interfaces. This paper studies a hybrid approach using an advanced, full-featured RAID enclosure that is presented to the host as a JBOD. This RBOD (RAIDed Bunch Of Disks) allows ZFS to do the RAID protection and error correction, while the RAID controller handles management of the disks and monitors the enclosure. It was hoped that the value of the RAID controller features would offset the additional cost, and that performance would not suffer in this mode. The test results revealed that the hybrid RBOD approach did suffer reduced performance.
2013-01-01
Analyzing and storing data and results from next-generation sequencing (NGS) experiments is a challenging task, hampered by ever-increasing data volumes and frequent updates of analysis methods and tools. Storage and computation have grown beyond the capacity of personal computers and there is a need for suitable e-infrastructures for processing. Here we describe UPPNEX, an implementation of such an infrastructure, tailored to the needs of data storage and analysis of NGS data in Sweden, serving various labs and multiple instruments from the major sequencing technology platforms. UPPNEX comprises resources for high-performance computing, large-scale and high-availability storage, an extensive bioinformatics software suite, up-to-date reference genomes and annotations, a support function with system and application experts, as well as a web portal and support ticket system. UPPNEX applications are numerous and diverse, and include whole-genome, de novo and exome sequencing, targeted resequencing, SNP discovery, RNA-Seq, and methylation analysis. Over 300 projects utilize UPPNEX, including large undertakings such as the sequencing of the flycatcher and the Norwegian spruce. We describe the strategic decisions made when investing in hardware, setting up maintenance and support, and allocating resources, and we illustrate major challenges such as managing data growth. We conclude by summarizing our experiences and observations with UPPNEX to date, providing insights into the successful and less successful decisions made. PMID:23800020