Sample records for integrated software infrastructure

  1. SeaBIRD: A Flexible and Intuitive Planetary Datamining Infrastructure

    NASA Astrophysics Data System (ADS)

    Politi, R.; Capaccioni, F.; Giardino, M.; Fonte, S.; Capria, M. T.; Turrini, D.; De Sanctis, M. C.; Piccioni, G.

    2018-04-01

    Description of SeaBIRD (Searchable and Browsable Infrastructure for Repository of Data), a software and hardware infrastructure for multi-mission planetary datamining, with a web-based GUI and an API set for integration into users' software.

  2. An Overview of the Distributed Space Exploration Simulation (DSES) Project

    NASA Technical Reports Server (NTRS)

    Crues, Edwin Z.; Chung, Victoria I.; Blum, Michael G.; Bowman, James D.

    2007-01-01

    This paper describes the Distributed Space Exploration Simulation (DSES) Project, a research and development collaboration between NASA centers that investigates technologies and processes related to integrated, distributed simulation of complex space systems in support of NASA's Exploration Initiative. In particular, it describes the three major components of DSES: network infrastructure, software infrastructure and simulation development. With regard to network infrastructure, DSES is developing a Distributed Simulation Network for use by all NASA centers. With regard to software, DSES is developing software models, tools and procedures that streamline distributed simulation development and provide an interoperable infrastructure for agency-wide integrated simulation. Finally, with regard to simulation development, DSES is developing an integrated end-to-end simulation capability to support NASA development of new exploration spacecraft and missions. This paper presents the current status and plans for these three areas, including examples of specific simulations.

  3. LHCb Build and Deployment Infrastructure for run 2

    NASA Astrophysics Data System (ADS)

    Clemencic, M.; Couturier, B.

    2015-12-01

    After the successful run 1 of the LHC, the LHCb Core software team has taken advantage of the long shutdown to consolidate and improve its build and deployment infrastructure. Several of the related projects have already been presented, such as the build system using Jenkins and the LHCb Performance and Regression testing infrastructure. Some components are completely new, like the Software Configuration Database (using the graph database Neo4j) or the new packaging installation using RPM packages. Furthermore, all these parts are integrated to allow easier and quicker releases of the LHCb software stack, thereby reducing the risk of operational errors. Integration and regression tests are also now easier to implement, allowing the software checks to be improved further.
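    The abstract names the technologies but not the schema. As a hedged sketch of how such a Software Configuration Database might be populated (node labels, versions and credentials below are illustrative assumptions, not the LHCb schema), the official neo4j Python driver (5.x) can record dependencies between released project versions:

```python
# Illustrative sketch: recording software-stack dependencies in Neo4j,
# in the spirit of the Software Configuration Database described above.
# Labels, properties and credentials are assumptions, not the LHCb schema.
from neo4j import GraphDatabase  # pip install neo4j (driver 5.x)

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))

def record_release(tx, project, version, dep, dep_version):
    # MERGE keeps the graph idempotent: re-recording a release is a no-op.
    tx.run(
        "MERGE (p:Project {name: $p, version: $pv}) "
        "MERGE (d:Project {name: $d, version: $dv}) "
        "MERGE (p)-[:DEPENDS_ON]->(d)",
        p=project, pv=version, d=dep, dv=dep_version,
    )

with driver.session() as session:
    # Hypothetical versions of two real LHCb projects (DaVinci builds on Gaudi).
    session.execute_write(record_release, "DaVinci", "v36r0", "Gaudi", "v23r0")
driver.close()
```

    A transitive Cypher query over DEPENDS_ON edges can then resolve the full stack a release needs, which is the kind of question a release manager would ask before triggering an RPM build.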

  4. THE EPA MULTIMEDIA INTEGRATED MODELING SYSTEM SOFTWARE SUITE

    EPA Science Inventory

    The U.S. EPA is developing a Multimedia Integrated Modeling System (MIMS) framework that will provide a software infrastructure or environment to support constructing, composing, executing, and evaluating complex modeling studies. The framework will include (1) common software ...

  5. Creating an open environment software infrastructure

    NASA Technical Reports Server (NTRS)

    Jipping, Michael J.

    1992-01-01

    As the development of complex computer hardware accelerates at increasing rates, the ability of software to keep pace is essential. The development of software design tools, however, is falling behind the development of hardware for several reasons, the most prominent of which is the lack of a software infrastructure to provide an integrated environment for all parts of a software system. The research was undertaken to provide a basis for answering this problem by investigating the requirements of open environments.

  6. The Human Physiome: how standards, software and innovative service infrastructures are providing the building blocks to make it achievable

    PubMed Central

    2016-01-01

    Reconstructing and understanding the Human Physiome virtually is a complex mathematical problem, and a highly demanding computational challenge. Mathematical models spanning from the molecular level through to whole populations of individuals must be integrated, then personalized. This requires interoperability with multiple disparate and geographically separated data sources, and myriad computational software tools. Extracting and producing knowledge from such sources, even when the databases and software are readily available, is a challenging task. Despite the difficulties, researchers must frequently perform these tasks so that available knowledge can be continually integrated into the common framework required to realize the Human Physiome. Software and infrastructures that support the communities that generate these, together with their underlying standards to format, describe and interlink the corresponding data and computer models, are pivotal to the Human Physiome being realized. They provide the foundations for integrating, exchanging and re-using data and models efficiently, and correctly, while also supporting the dissemination of growing knowledge in these forms. In this paper, we explore the standards, software tooling, repositories and infrastructures that support this work, and detail what makes them vital to realizing the Human Physiome. PMID:27051515

  7. The Human Physiome: how standards, software and innovative service infrastructures are providing the building blocks to make it achievable.

    PubMed

    Nickerson, David; Atalag, Koray; de Bono, Bernard; Geiger, Jörg; Goble, Carole; Hollmann, Susanne; Lonien, Joachim; Müller, Wolfgang; Regierer, Babette; Stanford, Natalie J; Golebiewski, Martin; Hunter, Peter

    2016-04-06

    Reconstructing and understanding the Human Physiome virtually is a complex mathematical problem, and a highly demanding computational challenge. Mathematical models spanning from the molecular level through to whole populations of individuals must be integrated, then personalized. This requires interoperability with multiple disparate and geographically separated data sources, and myriad computational software tools. Extracting and producing knowledge from such sources, even when the databases and software are readily available, is a challenging task. Despite the difficulties, researchers must frequently perform these tasks so that available knowledge can be continually integrated into the common framework required to realize the Human Physiome. Software and infrastructures that support the communities that generate these, together with their underlying standards to format, describe and interlink the corresponding data and computer models, are pivotal to the Human Physiome being realized. They provide the foundations for integrating, exchanging and re-using data and models efficiently, and correctly, while also supporting the dissemination of growing knowledge in these forms. In this paper, we explore the standards, software tooling, repositories and infrastructures that support this work, and detail what makes them vital to realizing the Human Physiome.

  8. The Distributed Space Exploration Simulation (DSES)

    NASA Technical Reports Server (NTRS)

    Crues, Edwin Z.; Chung, Victoria I.; Blum, Mike G.; Bowman, James D.

    2007-01-01

    The paper describes the Distributed Space Exploration Simulation (DSES) Project, a research and development collaboration between NASA centers which focuses on the investigation and development of technologies, processes and integrated simulations related to the collaborative distributed simulation of complex space systems in support of NASA's Exploration Initiative. This paper describes the three major components of DSES: network infrastructure, software infrastructure and simulation development. In the network work area, DSES is developing a Distributed Simulation Network that will provide agency-wide support for distributed simulation between all NASA centers. In the software work area, DSES is developing a collection of software models, tools and procedures that ease the burden of developing distributed simulations and provide a consistent interoperability infrastructure for agency-wide participation in integrated simulation. Finally, for simulation development, DSES is developing an integrated end-to-end simulation capability to support NASA development of new exploration spacecraft and missions. This paper will present the current status and plans for each of these work areas, with specific examples of simulations that support NASA's exploration initiatives.

  9. National Intelligent Transportation Infrastructure Initiative

    DOT National Transportation Integrated Search

    1997-09-19

    This report gives an overview of the National Intelligent Transportation Infrastructure Initiative (NITI). NITI refers to the integrated electronics, communications, and hardware and software elements that are available to support Intelligent Transpo...

  10. An Introduction to Flight Software Development: FSW Today, FSW 2010

    NASA Technical Reports Server (NTRS)

    Gouvela, John

    2004-01-01

    Experience and knowledge gained from ongoing maintenance of Space Shuttle Flight Software and new development projects, including the Cockpit Avionics Upgrade, are applied to the projected needs of the National Space Exploration Vision through Spiral 2. Lessons learned from these current activities are applied to create a sustainable, reliable model for development of critical software to support Project Constellation. This presentation introduces the technologies, methodologies, and infrastructure needed to produce and sustain high quality software. It proposes what is needed to support a Vision for Space Exploration that places demands on the innovation and productivity needed to support future space exploration. The technologies in use today within FSW development include tools that provide requirements tracking, integrated change management, and modeling and simulation software. Specific challenges that have been met include the introduction and integration of a Commercial Off-The-Shelf (COTS) Real-Time Operating System for critical functions. Though technology prediction has proved to be imprecise, Project Constellation requirements will need continued integration of new technology with evolving methodologies and changing project infrastructure. Targets for continued technology investment are integrated health monitoring and management, self-healing software, standard payload interfaces, autonomous operation, and improvements in training. Emulation of the target hardware will also allow significant streamlining of development and testing. The methodologies in use today for FSW development are object-oriented UML design and iterative development using independent components, as well as rapid prototyping. In addition, Lean Six Sigma and CMMI play a critical role in the quality and efficiency of the workforce processes. Over the next six years, we expect these methodologies to merge with other improvements into a consolidated office culture, with all processes being guided by automated office assistants. The infrastructure in use today includes strict software development and configuration management procedures, including strong control of resource management and critical skills coverage. This will evolve to a fully integrated staff organization with efficient and effective communication throughout all levels, guided by a Mission-Systems Architecture framework with a focus on risk management and attention toward inevitable product obsolescence. This infrastructure of computing equipment, software and processes will itself be subject to technological change and the need for management of change and improvement.

  11. Implications of Responsive Space on the Flight Software Architecture

    NASA Technical Reports Server (NTRS)

    Wilmot, Jonathan

    2006-01-01

    The Responsive Space initiative has several implications for flight software that need to be addressed not only within the run-time element, but in the development infrastructure and software life-cycle process elements as well. The runtime element must at a minimum support Plug & Play, while the development and process elements need to incorporate methods to quickly generate the needed documentation, code, tests, and all of the artifacts required of flight quality software. Very rapid response times go even further, and imply little or no new software development, requiring instead the use of only pre-developed and certified software modules that can be integrated and tested through automated methods. These elements have typically been addressed individually, with significant benefits, but it is when they are combined that they can have the greatest impact on Responsive Space. The Flight Software Branch at NASA's Goddard Space Flight Center has been developing the runtime, infrastructure and process elements needed for rapid integration with the Core Flight software System (CFS) architecture. The CFS architecture consists of three main components: the core Flight Executive (cFE), the component catalog, and the Integrated Development Environment (IDE). This paper will discuss the design of the components, how they facilitate rapid integration, and lessons learned as the architecture is utilized for an upcoming spacecraft.
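    The abstract describes the catalog-plus-runtime pattern only at a high level. As a rough illustration of the plug-and-play idea (not the actual cFE API, which is a C framework built around a software-bus messaging layer), a component catalog can be sketched as a registry from which certified modules are instantiated at integration time; all names below are hypothetical:

```python
# Toy registry illustrating "component catalog + plug & play" integration:
# certified modules are registered once and assembled at integration time
# without writing new flight code.
import importlib

class ComponentCatalog:
    def __init__(self):
        self._entries = {}  # component name -> (module path, class name)

    def register(self, name, module_path, class_name):
        self._entries[name] = (module_path, class_name)

    def load(self, name, **config):
        module_path, class_name = self._entries[name]
        cls = getattr(importlib.import_module(module_path), class_name)
        return cls(**config)  # instantiate a pre-certified component

catalog = ComponentCatalog()
catalog.register("scheduler", "cfs_components.scheduler", "Scheduler")
catalog.register("telemetry", "cfs_components.telemetry", "TelemetryOutput")
# At integration time a mission selects and configures catalog entries,
# e.g. scheduler = catalog.load("scheduler", rate_hz=10)  (the module is
# hypothetical, so the call is left commented out).
```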

  12. Epos Working Group 10 Infrastructure for Georesources

    NASA Astrophysics Data System (ADS)

    Orlecka-Sikora, Beata; Lasocki, Stanisław; Kwiatek, Grzegorz

    2013-04-01

    Working Group 10 "Infrastructure for Georesources" deals primarily with induced seismicity (IS) infrastructure. Established during the EPOS Annual Meeting in Utrecht, November 2011, WG10 aims to integrate the research infrastructure in the area of seismicity induced by human activity: tremors and rockbursts in underground mines; seismicity associated with conventional and unconventional oil and gas production, induced by geothermal energy extraction and by underground deposition and storage of liquids (e.g. water disposal associated with energy extraction) and gases (CO2 sequestration, inter alia); and seismicity triggered by filling surface water reservoirs. Until now, research in the area of IS has been organized around the inducing technologies rather than the physical problems common to these shallow seismic processes. This has hampered the integration of the IS research community and the progress of research. WG10 intends to work out a first step towards changing the IS research perspective from the present, technology-oriented one to a physical-problems-oriented one without, however, losing touch with the technological conditions of IS generation. This will be achieved by the integration of IS Research Infrastructure (ISRI) and the creation of an Induced Seismicity Node within EPOS. The ISRI to be integrated has three components: data, software and reports. The IS data consist of seismic data and auxiliary data: geological, displacement, geomechanical, geodetic, etc., and, last but by no means least, technological data; research in the field of IS cannot do without this last class of data. The IS software comprises common software tools for data handling and visualisation, standard and advanced software for research, and software based on newly proposed algorithms for tests and development. The IS reports are both peer-reviewed and unreviewed, complemented by an internet forum. In addition, the IS Node will play a significant role in integrating the IS community and accelerating research; it will help to develop a synergy between the research community and industrial partners. WG10 is working out strategic solutions for integration and the core services to be provided by the future IS node for European and other research groups, industrial partners, educational centers, and central and local administration bodies. A measurable benefit of the integrated ISRI will be the intensification of studies on hazard and risk associated with anthropogenic seismicity and on methods of anthropogenic seismic risk mitigation. Best practices will be disseminated to industrial partners and relevant bodies of public administration. It is also planned to have an information node for public use.

  13. The Information Technology Infrastructure for the Translational Genomics Core and the Partners Biobank at Partners Personalized Medicine

    PubMed Central

    Boutin, Natalie; Holzbach, Ana; Mahanta, Lisa; Aldama, Jackie; Cerretani, Xander; Embree, Kevin; Leon, Irene; Rathi, Neeta; Vickers, Matilde

    2016-01-01

    The Biobank and Translational Genomics core at Partners Personalized Medicine requires robust software and hardware. This Information Technology (IT) infrastructure enables the storage and transfer of large amounts of data, drives efficiencies in the laboratory, maintains data integrity from the time of consent to the time that genomic data is distributed for research, and enables the management of complex genetic data. Here, we describe the functional components of the research IT infrastructure at Partners Personalized Medicine and how they integrate with existing clinical and research systems, review some of the ways in which this IT infrastructure maintains data integrity and security, and discuss some of the challenges inherent to building and maintaining such infrastructure. PMID:26805892

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foucar, James G.; Salinger, Andrew G.; Deakin, Michael

    CIME is the software infrastructure for configuring, building, running, and testing an Earth system model. It can be developed and tested as stand-alone software, but its main role is to be integrated into the CESM and ACME Earth system models.

  15. caGrid 1.0 : an enterprise Grid infrastructure for biomedical research.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oster, S.; Langella, S.; Hastings, S.

    To develop software infrastructure that will provide support for discovery, characterization, integrated access, and management of diverse and disparate collections of information sources, analysis methods, and applications in biomedical research. Design: An enterprise Grid software infrastructure, called caGrid version 1.0 (caGrid 1.0), has been developed as the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG™) program. It is designed to support a wide range of use cases in basic, translational, and clinical research, including (1) discovery, (2) integrated and large-scale data analysis, and (3) coordinated study. Measurements: The caGrid is built as a Grid software infrastructure and leverages Grid computing technologies and the Web Services Resource Framework standards. It provides a set of core services, toolkits for the development and deployment of new community provided services, and application programming interfaces for building client applications. Results: The caGrid 1.0 was released to the caBIG community in December 2006. It is built on open source components and caGrid source code is publicly and freely available under a liberal open source license. The core software, associated tools, and documentation can be downloaded from the following URL: https://cabig.nci.nih.gov/workspaces/Architecture/caGrid.

  16. LCG/AA build infrastructure

    NASA Astrophysics Data System (ADS)

    Hodgkins, Alex Liam; Diez, Victor; Hegner, Benedikt

    2012-12-01

    The Software Process & Infrastructure (SPI) project provides a build infrastructure for regular integration testing and release of the LCG Applications Area software stack. In the past, regular builds were provided by a system that grew constantly to include more features, such as server-client communication, long-term build history and a summary web interface using present-day web technologies. However, the ad-hoc style of software development resulted in a setup that is hard to monitor, inflexible and difficult to expand. The new version of the infrastructure is based on the Django Python framework, which allows for a structured and modular design, facilitating later additions. Transparency in the workflows and ease of monitoring have been among the priorities in the design. Formerly missing functionality, such as on-demand builds and release triggering, will support the transition to a more agile development process.
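    The abstract names Django but shows no schema. A minimal sketch of the kind of models a build-history dashboard might define (class and field names are assumptions, not the SPI schema; this runs inside a Django app):

```python
# Hypothetical Django models for a nightly-build dashboard in the spirit
# of the SPI infrastructure described above; not the project's real schema.
from django.db import models

class Slot(models.Model):
    name = models.CharField(max_length=64)       # e.g. a nightly build slot
    platform = models.CharField(max_length=64)   # OS/compiler tag

class Build(models.Model):
    slot = models.ForeignKey(Slot, on_delete=models.CASCADE)
    started = models.DateTimeField(auto_now_add=True)
    status = models.CharField(
        max_length=8,
        choices=[("ok", "ok"), ("warn", "warnings"), ("fail", "failed")],
    )
    log_url = models.URLField(blank=True)  # link to the full build log
```

    Modular apps of this kind are what make later additions, such as on-demand builds or release triggering, straightforward to bolt on.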

  17. caGrid 1.0: An Enterprise Grid Infrastructure for Biomedical Research

    PubMed Central

    Oster, Scott; Langella, Stephen; Hastings, Shannon; Ervin, David; Madduri, Ravi; Phillips, Joshua; Kurc, Tahsin; Siebenlist, Frank; Covitz, Peter; Shanbhag, Krishnakant; Foster, Ian; Saltz, Joel

    2008-01-01

    Objective To develop software infrastructure that will provide support for discovery, characterization, integrated access, and management of diverse and disparate collections of information sources, analysis methods, and applications in biomedical research. Design An enterprise Grid software infrastructure, called caGrid version 1.0 (caGrid 1.0), has been developed as the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG™) program. It is designed to support a wide range of use cases in basic, translational, and clinical research, including 1) discovery, 2) integrated and large-scale data analysis, and 3) coordinated study. Measurements The caGrid is built as a Grid software infrastructure and leverages Grid computing technologies and the Web Services Resource Framework standards. It provides a set of core services, toolkits for the development and deployment of new community provided services, and application programming interfaces for building client applications. Results The caGrid 1.0 was released to the caBIG community in December 2006. It is built on open source components and caGrid source code is publicly and freely available under a liberal open source license. The core software, associated tools, and documentation can be downloaded from the following URL: https://cabig.nci.nih.gov/workspaces/Architecture/caGrid. Conclusions While caGrid 1.0 is designed to address use cases in cancer research, the requirements associated with discovery, analysis and integration of large scale data, and coordinated studies are common in other biomedical fields. In this respect, caGrid 1.0 is the realization of a framework that can benefit the entire biomedical community. PMID:18096909

  18. caGrid 1.0: an enterprise Grid infrastructure for biomedical research.

    PubMed

    Oster, Scott; Langella, Stephen; Hastings, Shannon; Ervin, David; Madduri, Ravi; Phillips, Joshua; Kurc, Tahsin; Siebenlist, Frank; Covitz, Peter; Shanbhag, Krishnakant; Foster, Ian; Saltz, Joel

    2008-01-01

    To develop software infrastructure that will provide support for discovery, characterization, integrated access, and management of diverse and disparate collections of information sources, analysis methods, and applications in biomedical research. An enterprise Grid software infrastructure, called caGrid version 1.0 (caGrid 1.0), has been developed as the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG) program. It is designed to support a wide range of use cases in basic, translational, and clinical research, including 1) discovery, 2) integrated and large-scale data analysis, and 3) coordinated study. The caGrid is built as a Grid software infrastructure and leverages Grid computing technologies and the Web Services Resource Framework standards. It provides a set of core services, toolkits for the development and deployment of new community provided services, and application programming interfaces for building client applications. The caGrid 1.0 was released to the caBIG community in December 2006. It is built on open source components and caGrid source code is publicly and freely available under a liberal open source license. The core software, associated tools, and documentation can be downloaded from the following URL: https://cabig.nci.nih.gov/workspaces/Architecture/caGrid. While caGrid 1.0 is designed to address use cases in cancer research, the requirements associated with discovery, analysis and integration of large scale data, and coordinated studies are common in other biomedical fields. In this respect, caGrid 1.0 is the realization of a framework that can benefit the entire biomedical community.

  19. Department of Energy's Virtual Lab Infrastructure for Integrated Earth System Science Data

    NASA Astrophysics Data System (ADS)

    Williams, D. N.; Palanisamy, G.; Shipman, G.; Boden, T.; Voyles, J.

    2014-12-01

    The U.S. Department of Energy (DOE) Office of Biological and Environmental Research (BER) Climate and Environmental Sciences Division (CESD) produces a diversity of data, information, software, and model codes across its research and informatics programs and facilities. This information includes raw and reduced observational and instrumentation data, model codes, model-generated results, and integrated data products. Currently, most of this data and information is prepared and shared for program-specific activities, corresponding to CESD organization research. A major challenge facing BER CESD is how best to inventory, integrate, and deliver these vast and diverse resources for the purpose of accelerating Earth system science research. This talk provides a concept for a CESD Integrated Data Ecosystem and an initial roadmap for its implementation to address this integration challenge in the "Big Data" domain. Towards this end, a new BER Virtual Laboratory Infrastructure will be presented, which will include services and software connecting the heterogeneous CESD data holdings, constructed with open source software based on industry standards, protocols, and state-of-the-art technology.

  20. Globus Quick Start Guide. Globus Software Version 1.1

    NASA Technical Reports Server (NTRS)

    1999-01-01

    The Globus Project is a community effort, led by Argonne National Laboratory and the University of Southern California's Information Sciences Institute. Globus is developing the basic software infrastructure for computations that integrate geographically distributed computational and information resources.

  21. HPC Software Stack Testing Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garvey, Cormac

    The HPC Software Stack Testing Framework (hpcswtest) is used in the INL Scientific Computing Department to test the basic sanity and integrity of the HPC software stack (compilers, MPI, numerical libraries and applications) and to quickly discover hard failures; as a by-product, it indirectly checks the HPC infrastructure (network, PBS and licensing servers).
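    The abstract describes the idea but not the implementation. A minimal sketch of such a stack sanity check (the tool list and flags are illustrative; the real hpcswtest also exercises MPI jobs, numerical libraries and PBS/licensing servers):

```python
# Smoke-test sketch in the spirit of hpcswtest: run each tool with a
# harmless command and report hard failures.
import shutil
import subprocess

SMOKE_TESTS = {
    "compiler": ["gcc", "--version"],
    "mpi": ["mpirun", "--version"],
    "numerics": ["python3", "-c", "import numpy"],
}

def run_smoke_tests(tests):
    failures = []
    for name, cmd in tests.items():
        if shutil.which(cmd[0]) is None:
            failures.append((name, f"{cmd[0]} not found on PATH"))
            continue
        proc = subprocess.run(cmd, capture_output=True, text=True)
        if proc.returncode != 0:
            failures.append((name, proc.stderr.strip()[:200]))
    return failures

if __name__ == "__main__":
    for name, reason in run_smoke_tests(SMOKE_TESTS):
        print(f"FAIL {name}: {reason}")
```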

  22. DEVELOP MULTI-STRESSOR, OPEN ARCHITECTURE MODELING FRAMEWORK FOR ECOLOGICAL EXPOSURE FROM SITE TO WATERSHED SCALE

    EPA Science Inventory

    A number of multimedia modeling frameworks are currently being developed. The Multimedia Integrated Modeling System (MIMS) is one of these frameworks. A framework should be seen as more of a multimedia modeling infrastructure than a single software system. This infrastructure do...

  23. Integrating a geographic information system, a scientific visualization system and an orographic precipitation model

    USGS Publications Warehouse

    Hay, L.; Knapp, L.

    1996-01-01

    Investigating natural, potential, and man-induced impacts on hydrological systems commonly requires complex modelling with overlapping data requirements, and massive amounts of one- to four-dimensional data at multiple scales and formats. Given the complexity of most hydrological studies, the requisite software infrastructure must incorporate many components including simulation modelling, spatial analysis and flexible, intuitive displays. There is a general requirement for a set of capabilities to support scientific analysis which, at this time, can only come from an integration of several software components. Integration of geographic information systems (GISs) and scientific visualization systems (SVSs) is a powerful technique for developing and analysing complex models. This paper describes the integration of an orographic precipitation model, a GIS and a SVS. The combination of these individual components provides a robust infrastructure which allows the scientist to work with the full dimensionality of the data and to examine the data in a more intuitive manner.

  24. Toolkit of Available EPA Green Infrastructure Modeling Software: Watershed Management Optimization Support Tool (WMOST)

    EPA Science Inventory

    Watershed Management Optimization Support Tool (WMOST) is a software application designed tofacilitate integrated water resources management across wet and dry climate regions. It allows waterresources managers and planners to screen a wide range of practices across their watersh...

  25. A Roadmap to Continuous Integration for ATLAS Software Development

    NASA Astrophysics Data System (ADS)

    Elmsheuser, J.; Krasznahorkay, A.; Obreshkov, E.; Undrus, A.; ATLAS Collaboration

    2017-10-01

    The ATLAS software infrastructure facilitates the efforts of more than 1000 developers working on a code base of 2200 packages with 4 million lines of C++ and 1.4 million lines of Python code. The ATLAS offline code management system is a powerful, flexible framework for processing new package version requests, probing code changes in the Nightly Build System, migrating to new platforms and compilers, deploying production releases for worldwide access and supporting physicists with tools and interfaces for efficient software use. It maintains a multi-stream, parallel development environment with about 70 multi-platform branches of nightly releases and provides vast opportunities for testing new packages, for verifying patches to existing software and for migrating to new platforms and compilers. The system's evolution is currently aimed at the adoption of modern continuous integration (CI) practices focused on building nightly releases early and often, with rigorous unit and integration testing. This paper describes the CI incorporation program for the ATLAS software infrastructure. It brings modern open source tools such as Jenkins and GitLab into the ATLAS Nightly System, rationalizes hardware resource allocation and administrative operations, and gives developers improved feedback and the means to fix broken builds promptly. Once adopted, ATLAS CI practices will improve and accelerate innovation cycles and result in increased confidence in new software deployments. The paper reports the status of the Jenkins integration with the ATLAS Nightly System as well as short and long term plans for the incorporation of CI practices.
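    As a hedged illustration of the Jenkins integration mentioned above (the job name, server URL, credentials and platform tag are placeholders; only Jenkins' generic buildWithParameters endpoint is assumed), a nightly release could be triggered remotely like this:

```python
# Sketch: triggering a parameterized nightly build via Jenkins' REST API.
import requests

JENKINS = "https://jenkins.example.org"  # placeholder server
AUTH = ("builder", "api-token")          # user + API token, not a password

def trigger_nightly(branch: str, platform: str) -> None:
    resp = requests.post(
        f"{JENKINS}/job/nightly-release/buildWithParameters",
        params={"BRANCH": branch, "PLATFORM": platform},
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()  # Jenkins answers 201 with a queue-item URL

trigger_nightly("master", "x86_64-slc6-gcc62-opt")  # illustrative platform tag
```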

  26. Final report for the Integrated and Robust Security Infrastructure (IRSI) laboratory directed research and development project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hutchinson, R.L.; Hamilton, V.A.; Istrail, G.G.

    1997-11-01

    This report describes the results of a Sandia-funded laboratory-directed research and development project titled "Integrated and Robust Security Infrastructure" (IRSI). IRSI was to provide a broad range of commercial-grade security services to any software application. IRSI has two primary goals: application transparency and a manageable public key infrastructure. IRSI must provide its security services to any application without the need to modify the application to invoke the security services. Public key mechanisms are well suited for a network with many end users and systems. There are many issues that make it difficult to deploy and manage a public key infrastructure. IRSI addressed some of these issues to create a more manageable public key infrastructure.

  27. 47 CFR 59.3 - Information concerning deployment of new services and equipment.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    Excerpt (as indexed): information concerning deployment of new services and equipment, including any software or upgrades of software integral to the use or operation of the services and equipment. Title 47, Telecommunication; Federal Communications Commission (continued); Common Carrier Services (continued); Infrastructure Sharing; § 59.3, Information concerning deployment of new services and equipment.

  28. NCI's High Performance Computing (HPC) and High Performance Data (HPD) Computing Platform for Environmental and Earth System Data Science

    NASA Astrophysics Data System (ADS)

    Evans, Ben; Allen, Chris; Antony, Joseph; Bastrakova, Irina; Gohar, Kashif; Porter, David; Pugh, Tim; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2015-04-01

    The National Computational Infrastructure (NCI) has established a powerful and flexible in-situ petascale computational environment to enable both high performance computing and Data-intensive Science across a wide spectrum of national environmental and earth science data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods that support data analysis, and 3) the progress so far in harmonising the underlying data collections for future interdisciplinary research across these large volume data collections. NCI has established 10+ PBytes of major national and international data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the major Australian national-scale scientific collections), leading research communities, and collaborating overseas organisations. New infrastructures created at NCI mean the data collections are now accessible within an integrated High Performance Computing and Data (HPC-HPD) environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large-scale high-bandwidth Lustre filesystems. The hardware was designed at inception to ensure that it would allow the layered software environment to flexibly accommodate the advancement of future data science. New approaches to software technology and data models have also had to be developed to enable access to these large and exponentially increasing data volumes at NCI. Traditional HPC and data environments are still made available in a way that flexibly provides the tools, services and supporting software systems on these new petascale infrastructures. But to enable the research to take place at this scale, the data, metadata and software now need to evolve together - creating a new integrated high performance infrastructure. The new infrastructure at NCI currently supports a catalogue of integrated, reusable software and workflows from earth system and ecosystem modelling, weather research, satellite and other observed data processing and analysis. One of the challenges for NCI has been to support existing techniques and methods, while carefully preparing the underlying infrastructure for the transition needed for the next class of Data-intensive Science. In doing so, a flexible range of techniques and software can be made available for application across the corpus of data collections available, and to provide a new infrastructure for future interdisciplinary research.

  29. A Research Agenda for Service-Oriented Architecture (SOA): Maintenance and Evolution of Service-Oriented Systems

    DTIC Science & Technology

    2010-03-01

    service consumers, and infrastructure. Techniques from any iterative and incremental software development methodology followed by the organization ... Service-Oriented Architecture Environment (CMU/SEI-2008-TN-008). Software Engineering Institute, Carnegie Mellon University, 2008. http://www.sei.cmu.edu ... "Integrating Legacy Software into a Service Oriented Architecture." Proceedings of the 10th European Conference on Software Maintenance (CSMR 2006). Bari

  30. Towards Portable Large-Scale Image Processing with High-Performance Computing.

    PubMed

    Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A

    2018-05-03

    High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The initial deployment was native (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, we describe herein recent innovations using containerization techniques with XNAT/DAX, which are used to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability, from the system level to the application level, (2) flexible and dynamic software development and expansion, and (3) scalable spider deployment compatible with HPC clusters and local workstations.
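    As a rough sketch of the containerization pattern described (the image name and paths are hypothetical, and DAX's real Docker/Singularity interface is not shown in the abstract), one isolated "spider" step might be launched as follows:

```python
# Run one containerized processing "spider" so it carries its own
# dependencies, independent of the host HPC software environment.
import subprocess

def run_spider(image: str, in_dir: str, out_dir: str) -> None:
    subprocess.run(
        [
            "docker", "run", "--rm",
            "-v", f"{in_dir}:/input:ro",   # session images staged from XNAT
            "-v", f"{out_dir}:/output",    # results collected back by DAX
            image,
        ],
        check=True,
    )

run_spider("vuiis/example-spider:1.0", "/scratch/sess01/in", "/scratch/sess01/out")
```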

  31. Maintaining Enterprise Resiliency via Kaleidoscopic Adaption and Transformation of Software Services (MEERKATS)

    DTIC Science & Technology

    2016-04-01

    infrastructure. The work is motivated by the fact that today's clouds are very static, uniform, and predictable, allowing attackers who identify a ... vulnerability in one of the services or infrastructure components to spread their effect to other, mission-critical services. Our goal is to integrate into ... clouds by elevating continuous change, evolution, and misinformation as first-rate design principles of the cloud's infrastructure. Our work is

  32. Requirements Engineering in Building Climate Science Software

    NASA Astrophysics Data System (ADS)

    Batcheller, Archer L.

    Software has an important role in supporting scientific work. This dissertation studies teams that build scientific software, focusing on the way that they determine what the software should do. These requirements engineering processes are investigated through three case studies of climate science software projects. The Earth System Modeling Framework assists modeling applications, the Earth System Grid distributes data via a web portal, and the NCAR (National Center for Atmospheric Research) Command Language is used to convert, analyze and visualize data. Document analysis, observation, and interviews were used to investigate the requirements-related work. The first research question is about how and why stakeholders engage in a project, and what they do for the project. Two key findings arise. First, user counts are a vital measure of project success, which makes adoption important and makes counting tricky and political. Second, despite the importance of quantities of users, a few particular "power users" develop a relationship with the software developers and play a special role in providing feedback to the software team and integrating the system into user practice. The second research question focuses on how project objectives are articulated and how they are put into practice. The team seeks to both build a software system according to product requirements but also to conduct their work according to process requirements such as user support. Support provides essential communication between users and developers that assists with refining and identifying requirements for the software. It also helps users to learn and apply the software to their real needs. User support is a vital activity for scientific software teams aspiring to create infrastructure. The third research question is about how change in scientific practice and knowledge leads to changes in the software, and vice versa. The "thickness" of a layer of software infrastructure impacts whether the software team or users have control and responsibility for making changes in response to new scientific ideas. Thick infrastructure provides more functionality for users, but gives them less control of it. The stability of infrastructure trades off against the responsiveness that the infrastructure can have to user needs.

  33. Integrating and Managing Bim in GIS, Software Review

    NASA Astrophysics Data System (ADS)

    El Meouche, R.; Rezoug, M.; Hijazi, I.

    2013-08-01

    Since the advent of Computer-Aided Design (CAD) and Geographical Information System (GIS) tools, project participants have been increasingly leveraging these tools throughout the different phases of a civil infrastructure project. In recent years, the number of GIS software packages that provide tools for integrating building information in a geographic context has risen sharply. More and more GIS packages are adding tools for this purpose, and other software projects are regularly extending these tools. However, each package has its own strengths, weaknesses and intended uses. This paper provides a thorough review to investigate the software capabilities and clarify their purposes. For this study, Autodesk Revit 2012, a BIM editor, was used to create BIMs. In the first step, three building models were created; the resulting models were converted to BIM format, and the software under review was then used to integrate them. For the evaluation of the software, general characteristics were studied, such as the user interface, the supported formats (import/export), and the way building information is imported.

  34. The multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) high performance computing infrastructure: applications in neuroscience and neuroinformatics research

    PubMed Central

    Goscinski, Wojtek J.; McIntosh, Paul; Felzmann, Ulrich; Maksimenko, Anton; Hall, Christopher J.; Gureyev, Timur; Thompson, Darren; Janke, Andrew; Galloway, Graham; Killeen, Neil E. B.; Raniga, Parnesh; Kaluza, Owen; Ng, Amanda; Poudel, Govinda; Barnes, David G.; Nguyen, Toan; Bonnington, Paul; Egan, Gary F.

    2014-01-01

    The Multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) is a national imaging and visualization facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific Industrial Research Organization (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software, and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computer tomography (CT), electron microscopy and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinfomatics and brain imaging research. PMID:24734019

  35. Software as a service approach to sensor simulation software deployment

    NASA Astrophysics Data System (ADS)

    Webster, Steven; Miller, Gordon; Mayott, Gregory

    2012-05-01

    Traditionally, military simulation has been problem-domain specific. Executing an exercise currently requires multiple simulation software providers to specialize, deploy, and configure their respective implementations, integrate the collection of software to achieve a specific system behavior, and then execute it for the purpose at hand. This approach leads to rigid system integrations that require simulation expertise for each deployment due to changes in location, hardware, and software. Our alternative is Software as a Service (SaaS), predicated on the virtualization of Night Vision and Electronic Sensors Directorate (NVESD) sensor simulations as an exemplary case. Management middleware elements layer self-provisioning, configuration, and integration services onto the virtualized sensors to present a system of services at run time. Given an Infrastructure as a Service (IaaS) environment, this enabled and managed system of simulations yields durable SaaS delivery without requiring user simulation expertise. Persistent SaaS simulations would provide on-demand availability to connected users, decrease integration costs and timelines, and benefit the domain community through immediate deployment of lessons learned.

  36. Surface transportation weather decision support requirements : advanced-integrated decision support using weather information for surface transportation decisions makers : draft (truncated*) version 1.0

    DOT National Transportation Integrated Search

    1997-09-19

    This report gives an overview of the National Intelligent Transportation Infrastructure Initiative (NITI). NITI refers to the integrated electronics, communications, and hardware and software elements that are available to support Intelligent Transpo...

  37. CMS Distributed Computing Integration in the LHC sustained operations era

    NASA Astrophysics Data System (ADS)

    Grandi, C.; Bockelman, B.; Bonacorsi, D.; Fisk, I.; González Caballero, I.; Farina, F.; Hernández, J. M.; Padhi, S.; Sarkar, S.; Sciabà, A.; Sfiligoi, I.; Spiga, F.; Úbeda García, M.; Van Der Ster, D. C.; Zvada, M.

    2011-12-01

    After many years of preparation, the CMS computing system has reached a situation where stability in operations limits the possibility to introduce innovative features. Nevertheless, it is the same need for stability and smooth operations that requires the introduction of features that were considered not strategic in the previous phases. Examples are: adequate authorization to control and prioritize access to storage and computing resources; improved monitoring to investigate problems and identify bottlenecks in the infrastructure; increased automation to reduce the manpower needed for operations; and an effective process to deploy new releases of the software tools in production. We present the work of the CMS Distributed Computing Integration Activity, which is responsible for providing a liaison between the CMS distributed computing infrastructure and the software providers, both internal and external to CMS. In particular, we describe the introduction of new middleware features during the last 18 months, as well as the requirements placed on Grid and Cloud software developers for the future.

  38. Multiphysics Application Coupling Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, Michael T.

    2013-12-02

    This particular consortium implementation of the software integration infrastructure will, in large part, refactor portions of the Rocstar multiphysics infrastructure. Development of this infrastructure originated at the University of Illinois DOE ASCI Center for Simulation of Advanced Rockets (CSAR) to support the center's massively parallel multiphysics simulation application, Rocstar, and has continued at IllinoisRocstar, a small company formed near the end of the University-based program. IllinoisRocstar is now licensing these new developments as free, open source software, in hopes of improving their own and others' access to infrastructure that can be readily utilized in developing coupled or composite software systems, with particular attention to more rapid production and utilization of multiphysics applications in the HPC environment. There are two major pieces to the consortium implementation: the Application Component Toolkit (ACT) and the Multiphysics Application Coupling Toolkit (MPACT). The current development focus is the ACT, which is (will be) the substrate for MPACT. The ACT itself is built up from the components described in the technical approach. In particular, the ACT has the following major components: (1) the Component Object Manager (COM), which provides encapsulation of user applications and their data, as well as the inter-component function call mechanism; and (2) the System Integration Manager (SIM), which provides constructs and mechanisms for orchestrating composite systems of multiply integrated pieces.
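    The COM's inter-component function-call mechanism is only named, not shown. A toy sketch of the idea (all names are illustrative, not the ACT API): components register named functions with a shared manager, and a coupling driver invokes them by name without compile-time coupling:

```python
# Toy component object manager: registration plus call-by-name dispatch.
class ComponentManager:
    def __init__(self):
        self._functions = {}

    def register(self, component: str, name: str, func) -> None:
        self._functions[f"{component}.{name}"] = func

    def call(self, qualified_name: str, *args, **kwargs):
        return self._functions[qualified_name](*args, **kwargs)

com = ComponentManager()
# Stand-in "physics" from two hypothetical coupled codes:
com.register("fluid", "get_pressure", lambda cell: 101325.0)
com.register("solid", "apply_load", lambda p: f"applied {p} Pa")

# A generic coupling driver can now orchestrate the two components:
print(com.call("solid.apply_load", com.call("fluid.get_pressure", 0)))
```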

  39. Building the European Seismological Research Infrastructure: results from 4 years NERIES EC project

    NASA Astrophysics Data System (ADS)

    van Eck, T.; Giardini, D.

    2010-12-01

    The EC Research Infrastructure (RI) project Network of Research Infrastructures for European Seismology (NERIES) implemented a comprehensive, scalable and sustainable European integrated RI for earthquake seismological data. NERIES opened a significant amount of additional seismological data, integrated different distributed data archives, and produced advanced analysis tools and software packages. A single seismic data portal provides a single access point and overview for European seismological data available to the earth science research community. Additional data access tools and sites have been implemented to meet user and robustness requirements, notably those at the EMSC and ORFEUS. The datasets compiled in NERIES and available through the portal include, among others: - The expanded Virtual European Broadband Seismic Network (VEBSN) with real-time access to more than 500 stations from > 53 observatories. This data is continuously monitored, quality controlled and archived in the European Integrated Distributed waveform Archive (EIDA). - A unique integration of acceleration datasets from seven networks in seven European or associated countries, centrally accessible in a homogeneous format, thus forming the core comprehensive European acceleration database. Standardized parameter analysis and the corresponding software are included in the database. - A Distributed Archive of Historical Earthquake Data (AHEAD) for research purposes, containing among others a comprehensive European Macroseismic Database and Earthquake Catalogue (1000 - 1963, M ≥ 5.8), including analysis tools. - Data from three one-year OBS deployments at three sites (Atlantic, Ionian and Ligurian Sea) within the general SEED format, thus creating the core integrated database for ocean, sea and land based seismological observatories. Tools to facilitate analysis and data mining of the RI datasets are: - A comprehensive set of European seismological velocity reference models, including a standardized model description with several visualisation tools, currently being adapted to the global scale. - An integrated approach to seismic hazard modelling and forecasting, a community-accepted forecast testing and model validation approach, and the core hazard portal developed with the same technologies as the NERIES data portal. - Homogeneous shakemap estimation tools implemented at several large European observatories and a complementary new loss estimation software tool. - A comprehensive set of new techniques for geotechnical site characterization, with the relevant software packages documented and maintained (www.geopsy.org). - A set of software packages for data mining, data reduction, data exchange and information management in seismology, serving as research and observatory analysis tools. NERIES has a long-term impact and is coordinated with the related US initiatives IRIS and EarthScope. The follow-up EC project of NERIES, NERA (2010 - 2014), is funded and will integrate the seismological and earthquake engineering infrastructures. NERIES also provided the proof of concept for the ESFRI 2008 initiative, the European Plate Observing System (EPOS), whose preparatory phase (2010 - 2014) is also funded by the EC.
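    For readers who want to touch such data today: waveforms from ORFEUS, one of the access points named above, can be fetched with ObsPy's standard FDSN client (ObsPy, and the station chosen, are assumptions for illustration, not NERIES deliverables):

```python
# Hedged example: one hour of broadband data from a Dutch ORFEUS station.
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

client = Client("ORFEUS")
t0 = UTCDateTime("2010-01-01T00:00:00")
stream = client.get_waveforms(
    network="NL", station="HGN", location="*", channel="BH?",
    starttime=t0, endtime=t0 + 3600,
)
print(stream)  # one Stream with the matching traces
```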

  40. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCaskey, Alexander J.

    Hybrid programming models for beyond-CMOS technologies will prove critical for integrating new computing technologies alongside our existing infrastructure. Unfortunately, the software infrastructure required to enable this is largely lacking. XACC is a programming framework for extreme-scale, post-exascale accelerator architectures that integrates alongside existing conventional applications. It is a pluggable framework for programming languages developed for next-generation computing hardware architectures, such as quantum and neuromorphic computing. It lets computational scientists efficiently off-load classically intractable work to attached accelerators through user-friendly kernel definitions. XACC makes post-exascale hybrid programming approachable for domain computational scientists.

  41. Managing a tier-2 computer centre with a private cloud infrastructure

    NASA Astrophysics Data System (ADS)

    Bagnasco, Stefano; Berzano, Dario; Brunetti, Riccardo; Lusso, Stefano; Vallero, Sara

    2014-06-01

    In a typical scientific computing centre, several applications coexist and share a single physical infrastructure. An underlying private cloud infrastructure eases the management and maintenance of such heterogeneous applications (such as multipurpose or application-specific batch farms, Grid sites, interactive data analysis facilities and others), allowing dynamic allocation of resources to any application. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques. Such infrastructures are being deployed in some large centres (see e.g. the CERN Agile Infrastructure project), but with several open-source tools reaching maturity this is becoming viable also for smaller sites. In this contribution we describe the private cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 centre, an Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The private cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem and the OpenWRT Linux distribution (used for network virtualization); a future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs such as EC2 and OCCI.

  42. Using Docker Compose for the Simple Deployment of an Integrated Drug Target Screening Platform.

    PubMed

    List, Markus

    2017-06-10

    Docker virtualization allows software tools to be executed in an isolated and controlled environment referred to as a container. In Docker containers, dependencies are provided exactly as intended by the developer and, consequently, they simplify the distribution of scientific software and foster reproducible research. The Docker paradigm is that each container encapsulates one particular software tool. However, to analyze complex biomedical data sets, it is often necessary to combine several software tools into elaborate workflows. To address this challenge, several Docker containers need to be instantiated and properly integrated, which complicates the software deployment process unnecessarily. Here, we demonstrate how an extension to Docker, Docker Compose, can be used to mitigate these problems by providing a unified setup routine that deploys several tools in an integrated fashion. We demonstrate the power of this approach by the example of a Docker Compose setup for a drug target screening platform consisting of five integrated web applications and shared infrastructure, deployable in just two lines of code.
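    As a hedged sketch of the pattern (service names and images below are placeholders, not the actual screening platform), a compose file declares the integrated services and their shared infrastructure, after which deployment collapses to a couple of commands:

```python
# Write a minimal two-service compose file and bring the stack up.
import pathlib
import subprocess

COMPOSE = """\
services:
  webapp:
    image: example/screening-web:latest   # placeholder application image
    ports: ["8080:8080"]
    depends_on: [db]
  db:
    image: postgres:15                    # shared infrastructure service
    environment:
      POSTGRES_PASSWORD: example
    volumes: [dbdata:/var/lib/postgresql/data]
volumes:
  dbdata:
"""

pathlib.Path("docker-compose.yml").write_text(COMPOSE)
subprocess.run(["docker", "compose", "up", "-d"], check=True)
```

    The paper's "two lines" are presumably the build and up invocations; everything else lives declaratively in the compose file.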

  3. Software to Manage the Unmanageable

    NASA Technical Reports Server (NTRS)

    2005-01-01

    In 1995, NASA's Jet Propulsion Laboratory (JPL) contracted Redmond, Washington-based Lucidoc Corporation to design a technology infrastructure to automate the intersection between policy management and operations management with advanced software that automates document workflow, document status, and uniformity of document layout. JPL had very specific parameters for the software. It expected to store and catalog over 8,000 technical and procedural documents integrated with hundreds of processes. The project ended in 2000, but NASA still uses the resulting highly secure document management system, and Lucidoc has since helped other organizations, large and small, integrate document flow and operations management to ensure a compliance-ready culture.

  4. The EPOS e-Infrastructure

    NASA Astrophysics Data System (ADS)

    Jeffery, Keith; Bailo, Daniele

    2014-05-01

    The European Plate Observing System (EPOS) is integrating geoscientific information concerning earth movements in Europe. We are approaching the end of the PP (Preparatory Project) phase and in October 2014 expect to continue with the full project within ESFRI (the European Strategy Forum on Research Infrastructures). The key aspects of EPOS concern providing services to allow homogeneous access by end-users over heterogeneous data, software, facilities, equipment and services. The e-infrastructure of EPOS is the heart of the project since it integrates the work on organisational, legal, economic and scientific aspects. Following the creation of an inventory of relevant organisations, persons, facilities, equipment, services, datasets and software (RIDE), the scale of integration required became apparent. The EPOS e-infrastructure architecture has been developed systematically, based on recorded primary (user) requirements and secondary (interoperation with other systems) requirements, through Strawman, Woodman and Ironman phases, with the specification - and the confirmatory prototypes developed alongside it - becoming more precise and progressively moving from paper to implemented system. The EPOS architecture is based on global core services (Integrated Core Services - ICS) which access thematic nodes (domain-specific Europe-wide collections, called Thematic Core Services - TCS), national nodes and specific institutional nodes. The key aspect is the metadata catalogue. In one dimension this is described at three levels: (1) discovery metadata, using well-known and commonly used standards such as DC (Dublin Core), to enable users (via an intelligent user interface) to search for objects within the EPOS environment relevant to their needs; (2) contextual metadata, providing the context of the object described in the catalogue to enable a user or the system to determine the relevance of the discovered object(s) to their requirement - the context includes projects, funding, organisations involved, persons involved, related publications, facilities, equipment and others, and utilises the CERIF (Common European Research Information Format) standard (see www.eurocris.org); (3) detailed metadata, which is specific to a domain or to a particular object and includes the schema describing the object to processing software. The other dimension of the metadata concerns the objects described. These are classified into users, services (including software), data and resources (computing, data storage, instruments and scientific equipment). An alternative architecture has been considered: brokering. This technique has been used especially in North American geoscience projects to interoperate datasets. It involves writing software to interconvert between any two node datasets; given n nodes, this implies writing n*(n-1) convertors. EPOS Working Group 7 (e-infrastructures and virtual community), which deals with the design and implementation of a prototype of the EPOS services, chose an approach which endows the system with extreme flexibility and sustainability. It is called the Metadata Catalogue approach. With the use of the catalogue the EPOS system can: 1. interoperate with software, services, users, organisations, facilities, equipment etc. as well as datasets; 2. avoid writing n*(n-1) software convertors and instead, drawing as far as possible on the information contained in the catalogue, generate only n convertors - a huge saving, especially in maintenance as the datasets (or other node resources) evolve; we are working on (semi-)automation of convertor generation by metadata mapping, which is leading-edge computer science research; 3. make extensive use of contextual metadata, which enables a user or a machine to (i) improve discovery of resources at nodes, (ii) improve precision and recall in search, (iii) drive the systems for identification, authentication, authorisation, security and privacy by recording the relevant attributes of the node resources and of the user, and (iv) manage provenance and long-term digital preservation. The linkage between the Integrated Services, which provide the integration of data and services, and the diverse Thematic Services Nodes is provided by means of a compatibility layer, which includes the aforementioned metadata catalogue. This layer provides 'connectors' to make local data, software and services available through the EPOS Integrated Services layer. In conclusion, we believe the EPOS e-infrastructure architecture is fit for purpose, including long-term sustainability and pan-European access to data and services.
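
    A toy sketch of why the catalogue-mediated approach needs only n convertors instead of n*(n-1): each node registers one mapping to and from a common canonical model, and any pair of nodes is bridged through it. The node names and record fields below are invented for illustration and are not the EPOS implementation.

    ```python
    # Toy hub-and-spoke conversion: each node registers one convertor pair
    # (to/from a canonical model), so any two of n nodes interoperate with
    # n convertor pairs instead of n*(n-1) direct convertors.
    CONVERTORS = {
        "node_a": {
            "to_canonical":   lambda r: {"lat": r["latitude"], "lon": r["longitude"]},
            "from_canonical": lambda c: {"latitude": c["lat"], "longitude": c["lon"]},
        },
        "node_b": {
            "to_canonical":   lambda r: {"lat": r["y"], "lon": r["x"]},
            "from_canonical": lambda c: {"y": c["lat"], "x": c["lon"]},
        },
    }

    def convert(src: str, dst: str, record: dict) -> dict:
        """Bridge any two registered nodes through the canonical model."""
        canonical = CONVERTORS[src]["to_canonical"](record)
        return CONVERTORS[dst]["from_canonical"](canonical)

    print(convert("node_a", "node_b", {"latitude": 45.0, "longitude": 7.6}))
    # -> {'y': 45.0, 'x': 7.6}
    ```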

  5. ICT Integration in Turkey: Evaluation of English Language E-Content of the FATIH Project

    ERIC Educational Resources Information Center

    Kizilet, Esra; Özmen, Kemal Sinan

    2017-01-01

    A nationwide technology integration movement, the FATIH Project, was initiated by the Ministry of National Education. The FATIH Project, whose main objective is to provide equal opportunities to learners during compulsory education, is made up of many components: hardware supply, procurement of software and e-content, infrastructure set-up, and teacher…

  6. Geospatial-enabled Data Exploration and Computation through Data Infrastructure Building Blocks

    NASA Astrophysics Data System (ADS)

    Song, C. X.; Biehl, L. L.; Merwade, V.; Villoria, N.

    2015-12-01

    Geospatial data are present everywhere today with the proliferation of location-aware computing devices and sensors. This is especially true in the scientific community where large amounts of data are driving research and education activities in many domains. Collaboration over geospatial data, for example, in modeling, data analysis and visualization, must still overcome the barriers of specialized software and expertise among other challenges. The GABBs project aims at enabling broader access to geospatial data exploration and computation by developing spatial data infrastructure building blocks that leverage capabilities of end-to-end application service and virtualized computing framework in HUBzero. Funded by NSF Data Infrastructure Building Blocks (DIBBS) initiative, GABBs provides a geospatial data architecture that integrates spatial data management, mapping and visualization and will make it available as open source. The outcome of the project will enable users to rapidly create tools and share geospatial data and tools on the web for interactive exploration of data without requiring significant software development skills, GIS expertise or IT administrative privileges. This presentation will describe the development of geospatial data infrastructure building blocks and the scientific use cases that help drive the software development, as well as seek feedback from the user communities.

  7. ICW eHealth Framework.

    PubMed

    Klein, Karsten; Wolff, Astrid C; Ziebold, Oliver; Liebscher, Thomas

    2008-01-01

    The ICW eHealth Framework (eHF) is a powerful infrastructure and platform for the development of service-oriented solutions in the health care business. It is the culmination of many years of experience of ICW in the development and use of in-house health care solutions and represents the foundation of ICW product developments based on the Java Enterprise Edition (Java EE). The ICW eHealth Framework has been leveraged to allow development by external partners - enabling adopters a straightforward integration into ICW solutions. The ICW eHealth Framework consists of reusable software components, development tools, architectural guidelines and conventions defining a full software-development and product lifecycle. From the perspective of a partner, the framework provides services and infrastructure capabilities for integrating applications within an eHF-based solution. This article introduces the ICW eHealth Framework's basic architectural concepts and technologies. It provides an overview of its module and component model, describes the development platform that supports the complete software development lifecycle of health care applications and outlines technological aspects, mainly focusing on application development frameworks and open standards.

  8. Los Angeles congestion reduction demonstration (Metro ExpressLanes) program. National evaluation : environmental data test plan.

    DOT National Transportation Integrated Search

    1997-09-19

    The term National Intelligent Transportation Infrastructure (NITI) refers to the integrated electronics, communications, and hardware and software elements that can support Intelligent Transportation System (ITS) services and products. NITI is not ju...

  9. Software Infrastructure for Computer-aided Drug Discovery and Development, a Practical Example with Guidelines.

    PubMed

    Moretti, Loris; Sartori, Luca

    2016-09-01

    In the field of Computer-Aided Drug Discovery and Development (CADDD), the proper software infrastructure is essential for everyday investigations. The creation of such an environment should be carefully planned and implemented with certain features in order to be productive and efficient. Here we describe a solution to integrate standard computational services into a functional unit that empowers modelling applications for drug discovery. This system allows users with various levels of expertise to run in silico experiments automatically, without the burden of formatting files for different software, managing the actual computation, keeping track of the activities or graphically rendering the structural outcomes. To showcase the potential of this approach, the performances of five different docking programs on an HIV-1 protease test set are presented. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
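
    A minimal sketch of the kind of wrapper such an infrastructure provides, hiding input preparation, job dispatch and result collection behind one call; the dock command-line tool, its flags and the file names are hypothetical placeholders, not the programs benchmarked in the paper.

    ```python
    # Sketch: hiding file formatting and job management behind one function call.
    # The "dock" executable and its flags are hypothetical placeholders.
    import subprocess
    from pathlib import Path

    def run_docking(receptor: Path, ligand: Path, workdir: Path) -> Path:
        """Prepare outputs, launch one docking run and return the result file."""
        workdir.mkdir(parents=True, exist_ok=True)
        result = workdir / f"{ligand.stem}_poses.out"
        subprocess.run(
            ["dock", "--receptor", str(receptor), "--ligand", str(ligand),
             "--out", str(result)],
            check=True,
        )
        return result

    # Screen a small ligand library against one target without manual bookkeeping.
    receptor = Path("hiv1_protease.pdb")
    for ligand in Path("ligands").glob("*.mol2"):
        poses = run_docking(receptor, ligand, Path("results"))
        print("finished", poses)
    ```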

  10. Rapid Processing of Radio Interferometer Data for Transient Surveys

    NASA Astrophysics Data System (ADS)

    Bourke, S.; Mooley, K.; Hallinan, G.

    2014-05-01

    We report on a software infrastructure and pipeline developed to process large radio interferometer datasets. The pipeline is implemented using a radical redesign of the AIPS processing model. An infrastructure we have named AIPSlite is used to spawn, at runtime, minimal AIPS environments across a cluster. The pipeline then distributes and processes its data in parallel. The system is entirely free of the traditional AIPS distribution and is self-configuring at runtime. This software has so far been used to process an EVLA Stripe 82 transient survey and the data for the JVLA-COSMOS project, and has been used to process most of the EVLA L-Band data archive, imaging each integration to search for short-duration transients.

  11. Collaboration and decision making tools for mobile groups

    NASA Astrophysics Data System (ADS)

    Abrahamyan, Suren; Balyan, Serob; Ter-Minasyan, Harutyun; Degtyarev, Alexander

    2017-12-01

    Nowadays the use of distributed collaboration tools is widespread in many areas of human activity, but a lack of mobility and dependence on specific equipment create difficulties and slow the development and integration of such technologies. Mobile technologies allow individuals to interact with each other without the need for traditional office spaces and regardless of location. Hence, realizing special infrastructures on mobile platforms, with the help of ad-hoc wireless local networks, could eliminate this hardware attachment and also be useful from a scientific standpoint. Tools based on mobile infrastructures range from basic internet messengers to complex software for online collaboration in large-scale workgroups. Despite the growth of mobile infrastructures, applied distributed solutions for group decision-making and e-collaboration are not common. In this article we propose a software complex for real-time collaboration and decision-making based on mobile devices, describe its architecture and evaluate its performance.

  12. Enabling Agile Testing through Continuous Integration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stolberg, Sean E.

    2009-08-24

    A Continuous Integration system is often considered one of the key elements involved in supporting an agile software development and testing environment. As a traditional software tester transitioning to an agile development environment it became clear to me that I would need to put this essential infrastructure in place and promote improved development practices in order to make the transition to agile testing possible. This experience report discusses a continuous integration implementation I led last year. The initial motivations for implementing continuous integration are discussed and a pre- and post-assessment using Martin Fowler's "Practices of Continuous Integration" is provided along with the technical specifics of the implementation. Finally, I’ll wrap up with a retrospective of my experiences implementing and promoting continuous integration within the context of agile testing.

  13. Web accessibility and open source software.

    PubMed

    Obrenović, Zeljko

    2009-07-01

    A Web browser provides a uniform user interface to different types of information. Making this interface universally accessible and more interactive is a long-term goal still far from being achieved. Universally accessible browsers require novel interaction modalities and additional functionalities, for which existing browsers tend to provide only partial solutions. Although functionality for Web accessibility can be found as open source and free software components, their reuse and integration is complex because they were developed in diverse implementation environments, following standards and conventions incompatible with the Web. To address these problems, we have started several activities that aim at exploiting the potential of open-source software for Web accessibility. The first of these activities is the development of the Adaptable Multi-Interface COmmunicator (AMICO):WEB, an infrastructure that facilitates efficient reuse and integration of open source software components into the Web environment. The main contribution of AMICO:WEB is in enabling the syntactic and semantic interoperability between Web extension mechanisms and a variety of integration mechanisms used by open source and free software components. Its design is based on our experiences in solving practical problems where we have used open source components to improve accessibility of rich media Web applications. The second of our activities involves improving education, where we have used our platform to teach students how to build advanced accessibility solutions from diverse open-source software. We are also partially involved in the recently started Eclipse project called the Accessibility Tools Framework (ACTF), the aim of which is the development of an extensible infrastructure upon which developers can build a variety of utilities that help to evaluate and enhance the accessibility of applications and content for people with disabilities. In this article we briefly report on these activities.

  14. Managing Watersheds with WMOST (Watershed Management Optimization Support Tool)

    EPA Science Inventory

    EPA’s Green Infrastructure research program and EPA Region 1 recently released a new public-domain software application, WMOST, which supports community applications of Integrated Water Resources Management (IWRM) principles (http://cfpub.epa.gov/si/si_public_record_report....

  15. Auscope: Australian Earth Science Information Infrastructure using Free and Open Source Software

    NASA Astrophysics Data System (ADS)

    Woodcock, R.; Cox, S. J.; Fraser, R.; Wyborn, L. A.

    2013-12-01

    Since 2005 the Australian Government has supported a series of initiatives providing researchers with access to major research facilities and information networks necessary for world-class research. Starting with the National Collaborative Research Infrastructure Strategy (NCRIS) the Australian earth science community established an integrated national geoscience infrastructure system called AuScope. AuScope is now in operation, providing a number of components to assist in understanding the structure and evolution of the Australian continent. These include the acquisition of subsurface imaging , earth composition and age analysis, a virtual drill core library, geological process simulation, and a high resolution geospatial reference framework. To draw together information from across the earth science community in academia, industry and government, AuScope includes a nationally distributed information infrastructure. Free and Open Source Software (FOSS) has been a significant enabler in building the AuScope community and providing a range of interoperable services for accessing data and scientific software. A number of FOSS components have been created, adopted or upgraded to create a coherent, OGC compliant Spatial Information Services Stack (SISS). SISS is now deployed at all Australian Geological Surveys, many Universities and the CSIRO. Comprising a set of OGC catalogue and data services, and augmented with new vocabulary and identifier services, the SISS provides a comprehensive package for organisations to contribute their data to the AuScope network. This packaging and a variety of software testing and documentation activities enabled greater trust and notably reduced barriers to adoption. FOSS selection was important, not only for technical capability and robustness, but also for appropriate licensing and community models to ensure sustainability of the infrastructure in the long term. Government agencies were sensitive to these issues and AuScope's careful selection has been rewarded by adoption. In some cases the features provided by the SISS solution are now significantly in advance of COTS offerings which will create expectations that can be passed back from users to their preferred vendors. Using FOSS, AuScope has addressed the challenge of data exchange across organisations nationally. The data standards (e.g. GeosciML) and platforms that underpin AuScope provide important new datasets and multi-agency links independent of underlying software and hardware differences. AuScope has created an infrastructure, a platform of technologies and the opportunity for new ways of working with and integrating disparate data at much lower cost. Research activities are now exploiting the information infrastructure to create virtual laboratories for research ranging from geophysics through water and the environment. Once again the AuScope community is making heavy use of FOSS to provide access to processing software and Cloud computing and HPC. The successful use of FOSS by AuScope, and the efforts made to ensure it is suitable for adoption, have resulted in the SISS being selected as a reference implementation for a number of Australian Government initiatives beyond AuScope in environmental information and bioregional assessments.
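
    As a hedged illustration of consuming one of these OGC-compliant services, the sketch below queries a Web Feature Service with the OWSLib Python client; the endpoint URL is a hypothetical placeholder and gsml:MappedFeature is just one plausible GeosciML feature type.

    ```python
    # Sketch: pulling a handful of GeosciML features from an OGC Web Feature
    # Service in a SISS-style deployment. The endpoint URL is a placeholder.
    from owslib.wfs import WebFeatureService

    wfs = WebFeatureService(
        url="https://geology.example.gov.au/wfs",  # hypothetical SISS node
        version="1.1.0",
    )

    # List the feature types the node advertises.
    print(list(wfs.contents))

    # Fetch a few mapped geological features as GML/GeosciML.
    response = wfs.getfeature(typename=["gsml:MappedFeature"], maxfeatures=5)
    print(response.read()[:500])
    ```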

  16. Integrating multiple scientific computing needs via a Private Cloud infrastructure

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Berzano, D.; Brunetti, R.; Lusso, S.; Vallero, S.

    2014-06-01

    In a typical scientific computing centre, diverse applications coexist and share a single physical infrastructure. An underlying Private Cloud facility eases the management and maintenance of heterogeneous use cases such as multipurpose or application-specific batch farms, Grid sites catering to different communities, parallel interactive data analysis facilities and others. It makes it possible to dynamically and efficiently allocate resources to any application and to tailor the virtual machines according to the applications' requirements. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques; for example, rolling updates can be performed easily while minimizing downtime. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 site, a dynamically expandable PROOF-based Interactive Analysis Facility for the ALICE experiment at the CERN LHC, and several smaller scientific computing applications. The Private Cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem (used in two different configurations for worker- and service-class hypervisors) and the OpenWRT Linux distribution (used for network virtualization). A future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and by using mainstream contextualization tools like CloudInit.

  17. Bim and Gis: when Parametric Modeling Meets Geospatial Data

    NASA Astrophysics Data System (ADS)

    Barazzetti, L.; Banfi, F.

    2017-12-01

    Geospatial data have a crucial role in several projects related to infrastructures and land management. GIS software is able to perform advanced geospatial analyses, but it lacks many of the instruments and tools for parametric modelling typically available in BIM. At the same time, BIM software designed for buildings has limited tools to handle geospatial data. As things stand at the moment, BIM and GIS could appear as complementary solutions, although research work is currently under way to ensure a better level of interoperability, especially at the scale of the building. On the other hand, the transition from the local (building) scale to the infrastructure (where geospatial data cannot be neglected) has already demonstrated that parametric modelling integrated with geoinformation is a powerful tool to simplify and speed up some phases of the design workflow. This paper reviews such mixed approaches with both simulated and real examples, demonstrating that integration is already a reality at specific scales, which are not dominated by "pure" GIS or BIM. The paper will also demonstrate that some traditional operations carried out with GIS software are also available in parametric modelling software for BIM, such as transformation between reference systems, DEM generation, feature extraction, and geospatial queries. A real case study is illustrated and discussed to show the advantage of a combined use of both technologies. BIM and GIS integration can generate greater usage of geospatial data in the AECOO (Architecture, Engineering, Construction, Owner and Operator) industry, as well as new solutions for parametric modelling with additional geoinformation.

  18. Effective Team Support: From Modeling to Software Agents

    NASA Technical Reports Server (NTRS)

    Remington, Roger W. (Technical Monitor); John, Bonnie; Sycara, Katia

    2003-01-01

    The purpose of this research contract was to perform multidisciplinary research among CMU psychologists, computer scientists and engineers and NASA researchers to design a next-generation collaborative system to support a team of human experts and intelligent agents. To achieve robust performance enhancement of such a system, we had proposed to perform task and cognitive modeling to thoroughly understand the impact technology makes on the organization and on key individual personnel. Guided by cognitively-inspired requirements, we would then develop software agents that support the human team in decision making, information filtering, and information distribution and integration to enhance team situational awareness. During the period covered by this final report, we made substantial progress in modeling infrastructure and task infrastructure. Work is continuing under a different contract to complete empirical data collection, cognitive modeling, and the building of software agents to support the team's task.

  19. MISSION: Mission and Safety Critical Support Environment. Executive overview

    NASA Technical Reports Server (NTRS)

    Mckay, Charles; Atkinson, Colin

    1992-01-01

    For mission and safety critical systems it is necessary to: improve definition, evolution and sustenance techniques; lower development and maintenance costs; support safe, timely and affordable system modifications; and support fault tolerance and survivability. The goal of the MISSION project is to lay the foundation for a new generation of integrated systems software providing a unified infrastructure for mission and safety critical applications and systems. This will involve the definition of a common, modular target architecture and a supporting infrastructure.

  20. The Integration of CloudStack and OCCI/OpenNebula with DIRAC

    NASA Astrophysics Data System (ADS)

    Méndez Muñoz, Víctor; Fernández Albor, Víctor; Graciani Diaz, Ricardo; Casajús Ramo, Adriàn; Fernández Pena, Tomás; Merino Arévalo, Gonzalo; José Saborido Silva, Juan

    2012-12-01

    The increasing availability of Cloud resources makes them a realistic alternative to the Grid as a paradigm for enabling scientific communities to access large distributed computing resources. The DIRAC framework for distributed computing is an easy way to efficiently access resources from both systems. This paper explains the integration of DIRAC with two open-source Cloud Managers: OpenNebula (taking advantage of the OCCI standard) and CloudStack. These are computing tools to manage the complexity and heterogeneity of distributed data center infrastructures, allowing the creation of virtual clusters on demand, including public, private and hybrid clouds. This approach has required the development of an extension to the previous DIRAC Virtual Machine engine, which was developed for Amazon EC2, allowing the connection with these new cloud managers. In the OpenNebula case, the development has been based on the CernVM Virtual Software Appliance with appropriate contextualization, while in the case of CloudStack, the infrastructure has been kept more general, which permits other Virtual Machine sources and operating systems to be used. In both cases, the CernVM File System has been used to facilitate software distribution to the computing nodes. With the resulting infrastructure, the cloud resources are transparent to the users through a friendly interface, like the DIRAC Web Portal. The main purpose of this integration is to get a system that can manage cloud and grid resources at the same time. This particular feature pushes DIRAC to a new conceptual denomination as interware, integrating different middleware. Users from different communities do not need to care about the installation of the standard software available at the nodes, nor about the operating system of the host machine, which is transparent to the user. This paper presents an analysis of the overhead of the virtual layer, with tests comparing the proposed approach to the existing Grid solution. License Notice: Published under licence in Journal of Physics: Conference Series by IOP Publishing Ltd.

  1. Open Architecture SDR for Space

    NASA Technical Reports Server (NTRS)

    Smith, Carl; Long, Chris; Liebetreu, John; Reinhart, Richard C.

    2005-01-01

    This paper describes an open-architecture SDR (software defined radio) infrastructure that is suitable for space-based operations (Space-SDR). SDR technologies will endow space and planetary exploration systems with dramatically increased capability, reduced power consumption, and significantly less mass than conventional systems, at costs reduced by vigorous competition, hardware commonality, dense integration, reduced obsolescence, interoperability, and software re-use. Significant progress has been recorded on developments like the Joint Tactical Radio System (JTRS) Software Communications Architecture (SCA), which is oriented toward reconfigurable radios for defense forces operating in multiple theaters of engagement. The JTRS-SCA presents a consistent software interface for waveform development, and facilitates interoperability, waveform portability, software re-use, and technology evolution.

  2. Integration of the NRL Digital Library.

    ERIC Educational Resources Information Center

    King, James

    2001-01-01

    The Naval Research Laboratory (NRL) Library has identified six primary areas that need improvement: infrastructure, InfoWeb, TORPEDO Ultra, journal data management, classified data, and linking software. It is rebuilding InfoWeb and TORPEDO Ultra as database-driven Web applications, upgrading the STILAS library catalog, and creating other support…

  3. Linking earth science informatics resources into uninterrupted digital value chains

    NASA Astrophysics Data System (ADS)

    Woodcock, Robert; Angreani, Rini; Cox, Simon; Fraser, Ryan; Golodoniuc, Pavel; Klump, Jens; Rankine, Terry; Robertson, Jess; Vote, Josh

    2015-04-01

    The CSIRO Mineral Resources Flagship was established to tackle medium- to long-term challenges facing the Australian mineral industry across the value chain from exploration and mining through mineral processing within the framework of an economically, environmentally and socially sustainable minerals industry. This broad portfolio demands collaboration and data exchange with a broad range of participants and data providers across government, research and industry. It is an ideal environment to link geoscience informatics platforms to application across the resource extraction industry and to unlock the value of data integration between traditionally discrete parts of the minerals digital value chain. Despite the potential benefits, data integration remains an elusive goal within research and industry. Many projects use only a subset of available data types in an integrated manner, often maintaining the traditional discipline-based data 'silos'. Integrating data across the entire minerals digital value chain is an expensive proposition involving multiple disciplines and, significantly, multiple data sources both internal and external to any single organisation. Differing vocabularies and data formats, along with access regimes to appropriate analysis software and equipment all hamper the sharing and exchange of information. AuScope has addressed the challenge of data exchange across organisations nationally, and established a national geosciences information infrastructure using open standards-based web services. Federated across a wide variety of organisations, the resulting infrastructure contains a wide variety of live and updated data types. The community data standards and infrastructure platforms that underpin AuScope provide important new datasets and multi-agency links independent of software and hardware differences. AuScope has thus created an infrastructure, a platform of technologies and the opportunity for new ways of working with and integrating disparate data at much lower cost. An early example of this approach is the value generated by combining geological and metallurgical data sets as part of the rapidly growing field of geometallurgy. This not only provides a far better understanding of the impact of geological variability on ore processing but also leads to new thinking on the types and characteristics of data sets collected at various stages of the exploration and mining process. The Minerals Resources Flagship is linking its research activities to the AuScope infrastructure, exploiting the technology internally to create a platform for integrated research across the minerals value chain and improved interaction with industry. Referred to as the 'Early Access Virtual Lab', the system will be fully interoperable with AuScope and international infrastructures using open standards like GeosciML. Secured access is provided to allow confidential collaboration with industry when required. This presentation will discuss how the CSIRO Mineral Resources Flagship is building on the AuScope infrastructure to transform the way that data and data products are identified, shared, integrated, and reused, to unlock the benefits of true integration of research efforts across the minerals digital value chain.

  4. SEQADAPT: an adaptable system for the tracking, storage and analysis of high throughput sequencing experiments.

    PubMed

    Burdick, David B; Cavnor, Chris C; Handcock, Jeremy; Killcoyne, Sarah; Lin, Jake; Marzolf, Bruz; Ramsey, Stephen A; Rovira, Hector; Bressler, Ryan; Shmulevich, Ilya; Boyle, John

    2010-07-14

    High throughput sequencing has become an increasingly important tool for biological research. However, the existing software systems for managing and processing these data have not provided the flexible infrastructure that research requires. Existing software solutions provide static and well-established algorithms in a restrictive package. However as high throughput sequencing is a rapidly evolving field, such static approaches lack the ability to readily adopt the latest advances and techniques which are often required by researchers. We have used a loosely coupled, service-oriented infrastructure to develop SeqAdapt. This system streamlines data management and allows for rapid integration of novel algorithms. Our approach also allows computational biologists to focus on developing and applying new methods instead of writing boilerplate infrastructure code. The system is based around the Addama service architecture and is available at our website as a demonstration web application, an installable single download and as a collection of individual customizable services.

  5. SEQADAPT: an adaptable system for the tracking, storage and analysis of high throughput sequencing experiments

    PubMed Central

    2010-01-01

    Background High throughput sequencing has become an increasingly important tool for biological research. However, the existing software systems for managing and processing these data have not provided the flexible infrastructure that research requires. Results Existing software solutions provide static and well-established algorithms in a restrictive package. However as high throughput sequencing is a rapidly evolving field, such static approaches lack the ability to readily adopt the latest advances and techniques which are often required by researchers. We have used a loosely coupled, service-oriented infrastructure to develop SeqAdapt. This system streamlines data management and allows for rapid integration of novel algorithms. Our approach also allows computational biologists to focus on developing and applying new methods instead of writing boilerplate infrastructure code. Conclusion The system is based around the Addama service architecture and is available at our website as a demonstration web application, an installable single download and as a collection of individual customizable services. PMID:20630057

  6. The INDIGO-Datacloud Authentication and Authorization Infrastructure

    NASA Astrophysics Data System (ADS)

    Ceccanti, A.; Hardt, M.; Wegh, B.; Millar, AP; Caberletti, M.; Vianello, E.; Licehammer, S.

    2017-10-01

    Contemporary distributed computing infrastructures (DCIs) are not easily and securely accessible by scientists. These computing environments are typically hard to integrate due to interoperability problems resulting from the use of different authentication mechanisms, identity negotiation protocols and access control policies. Such limitations have a big impact on the user experience, making it hard for user communities to port and run their scientific applications on resources aggregated from multiple providers. The INDIGO-DataCloud project aims to provide the services and tools needed to enable a secure composition of resources from multiple providers in support of scientific applications. In order to do so, a common AAI architecture has to be defined that supports multiple authentication mechanisms, supports delegated authorization across services and can be easily integrated into off-the-shelf software. In this contribution we introduce the INDIGO Authentication and Authorization Infrastructure, describing its main components and their status and how authentication, delegation and authorization flows are implemented across services.
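
    As a hedged illustration of the token-based flows such an AAI supports, the sketch below obtains an OAuth2 access token from an OpenID Connect provider and presents it to a protected service; the endpoint URLs, client credentials and scopes are hypothetical placeholders.

    ```python
    # Sketch: obtaining a bearer token from an OpenID Connect / OAuth2 provider
    # and calling a protected service with it. URLs and credentials are placeholders.
    import requests

    TOKEN_ENDPOINT = "https://iam.example.org/token"       # hypothetical provider
    SERVICE_URL = "https://storage.example.org/api/files"  # hypothetical protected service

    # Client-credentials grant: the client authenticates directly to the provider.
    token_response = requests.post(
        TOKEN_ENDPOINT,
        data={"grant_type": "client_credentials", "scope": "openid profile"},
        auth=("my-client-id", "my-client-secret"),
        timeout=30,
    )
    token_response.raise_for_status()
    access_token = token_response.json()["access_token"]

    # Present the bearer token to the downstream service.
    files = requests.get(
        SERVICE_URL,
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=30,
    )
    print(files.status_code)
    ```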

  7. DNAseq Workflow in a Diagnostic Context and an Example of a User Friendly Implementation.

    PubMed

    Wolf, Beat; Kuonen, Pierre; Dandekar, Thomas; Atlan, David

    2015-01-01

    Over recent years next generation sequencing (NGS) technologies evolved from costly tools used by very few, to a much more accessible and economically viable technology. Through this recently gained popularity, its use-cases expanded from research environments into clinical settings. But the technical know-how and infrastructure required to analyze the data remain an obstacle for a wider adoption of this technology, especially in smaller laboratories. We present GensearchNGS, a commercial DNAseq software suite distributed by Phenosystems SA. The focus of GensearchNGS is the optimal usage of already existing infrastructure, while keeping its use simple. This is achieved through the integration of existing tools in a comprehensive software environment, as well as custom algorithms developed with the restrictions of limited infrastructures in mind. This includes the possibility to connect multiple computers to speed up computing intensive parts of the analysis such as sequence alignments. We present a typical DNAseq workflow for NGS data analysis and the approach GensearchNGS takes to implement it. The presented workflow goes from raw data quality control to the final variant report. This includes features such as gene panels and the integration of online databases, like Ensembl for annotations or Cafe Variome for variant sharing.

  8. Experiences integrating autonomous components and legacy systems into tsunami early warning systems

    NASA Astrophysics Data System (ADS)

    Reißland, S.; Herrnkind, S.; Guenther, M.; Babeyko, A.; Comoglu, M.; Hammitzsch, M.

    2012-04-01

    Fostered by and embedded in the general development of Information and Communication Technology (ICT) the evolution of Tsunami Early Warning Systems (TEWS) shows a significant development from seismic-centred to multi-sensor system architectures using additional sensors, e.g. sea level stations for the detection of tsunami waves and GPS stations for the detection of ground displacements. Furthermore, the design and implementation of a robust and scalable service infrastructure supporting the integration and utilisation of existing resources serving near real-time data not only includes sensors but also other components and systems offering services such as the delivery of feasible simulations used for forecasting in an imminent tsunami threat. In the context of the development of the German Indonesian Tsunami Early Warning System (GITEWS) and the project Distant Early Warning System (DEWS) a service platform for both sensor integration and warning dissemination has been newly developed and demonstrated. In particular, standards of the Open Geospatial Consortium (OGC) and the Organization for the Advancement of Structured Information Standards (OASIS) have been successfully incorporated. In the project Collaborative, Complex, and Critical Decision-Support in Evolving Crises (TRIDEC) new developments are used to extend the existing platform to realise a component-based technology framework for building distributed TEWS. This talk will describe experiences made in GITEWS, DEWS and TRIDEC while integrating legacy stand-alone systems and newly developed special-purpose software components into TEWS using different software adapters and communication strategies to make the systems work together in a corporate infrastructure. The talk will also cover task management and data conversion between the different systems. Practical approaches and software solutions for the integration of sensors, e.g. providing seismic and sea level data, and utilisation of special-purpose components, such as simulation systems, in TEWS will be presented.

  9. Software defined networking (SDN) over space division multiplexing (SDM) optical networks: features, benefits and experimental demonstration.

    PubMed

    Amaya, N; Yan, S; Channegowda, M; Rofoee, B R; Shu, Y; Rashidi, M; Ou, Y; Hugues-Salas, E; Zervas, G; Nejabati, R; Simeonidou, D; Puttnam, B J; Klaus, W; Sakaguchi, J; Miyazawa, T; Awaji, Y; Harai, H; Wada, N

    2014-02-10

    We present results from the first demonstration of a fully integrated SDN-controlled bandwidth-flexible and programmable SDM optical network utilizing sliceable self-homodyne spatial superchannels to support dynamic bandwidth and QoT provisioning, infrastructure slicing and isolation. Results show that SDN is a suitable control plane solution for the high-capacity flexible SDM network. It is able to provision end-to-end bandwidth and QoT requests according to user requirements, considering the unique characteristics of the underlying SDM infrastructure.

  10. An integrated bioinformatics infrastructure essential for advancing pharmacogenomics and personalized medicine in the context of the FDA's Critical Path Initiative.

    PubMed

    Tong, Weida; Harris, Stephen C; Fang, Hong; Shi, Leming; Perkins, Roger; Goodsaid, Federico; Frueh, Felix W

    2007-01-01

    Pharmacogenomics (PGx) is identified in the FDA Critical Path document as a major opportunity for advancing medical product development and personalized medicine. An integrated bioinformatics infrastructure for use in FDA data review is crucial to realize the benefits of PGx for public health. We have developed an integrated bioinformatics tool, called ArrayTrack, for managing, analyzing and interpreting genomic and other biomarker data (e.g. proteomic and metabolomic data). ArrayTrack is a highly flexible and robust software platform, which allows it to evolve with technological advances and changing user needs. ArrayTrack is used in the routine review of genomic data submitted to the FDA; here, three hypothetical examples of its use in the Voluntary eXploratory Data Submission (VXDS) program are illustrated. © Published by Elsevier Ltd.

  11. Manned/Unmanned Common Architecture Program (MCAP) net centric flight tests

    NASA Astrophysics Data System (ADS)

    Johnson, Dale

    2009-04-01

    Properly architected avionics systems can reduce the costs of periodic functional improvements, maintenance, and obsolescence. With this in mind, the U.S. Army Aviation Applied Technology Directorate (AATD) initiated the Manned/Unmanned Common Architecture Program (MCAP) in 2003 to develop an affordable, high-performance embedded mission processing architecture for potential application to multiple aviation platforms. MCAP analyzed Army helicopter and unmanned air vehicle (UAV) missions, identified supporting subsystems, surveyed advanced hardware and software technologies, and defined computational infrastructure technical requirements. The project selected a set of modular open systems standards and market-driven commercial-off-the-shelf (COTS) electronics and software, and developed experimental mission processors, network architectures, and software infrastructures supporting the integration of new capabilities, interoperability, and life cycle cost reductions. MCAP integrated the new mission processing architecture into an AH-64D Apache Longbow and participated in Future Combat Systems (FCS) network-centric operations field experiments in 2006 and 2007 at White Sands Missile Range (WSMR), New Mexico and at the Nevada Test and Training Range (NTTR) in 2008. The MCAP Apache also participated in PM C4ISR On-the-Move (OTM) Capstone Experiments 2007 (E07) and 2008 (E08) at Ft. Dix, NJ and conducted Mesa, Arizona local area flight tests in December 2005, February 2006, and June 2008.

  12. Exchange of Veterans Affairs medical data using national and local networks.

    PubMed

    Dayhoff, R E; Maloney, D L

    1992-12-17

    Remote data exchange is extremely useful to a number of medical applications. It requires an infrastructure including systems, network and software tools. With such an infrastructure, existing local applications can be extended to serve national needs. There are many approaches to providing remote data exchange. Selection of an approach for an application requires balancing of various factors, including the need for rapid interactive access to data and ad hoc queries, the adequacy of access to predefined data sets, the need for an integrated view of the data, the ability to provide adequate security protection, the amount of data required, and the time frame in which data is required. The applications described here demonstrate new ways that the VA is reaping benefits from its infrastructure and its compatible integrated hospital information systems located at its facilities. The needs that have been met are also needs of private hospitals. However, in many cases the infrastructure to allow data exchange is not present. The VA's experiences may serve to establish the benefits that can be obtained by all hospitals.

  13. The Computational Infrastructure for Geodynamics: An Example of Software Curation and Citation in the Geodynamics Community

    NASA Astrophysics Data System (ADS)

    Hwang, L.; Kellogg, L. H.

    2017-12-01

    Curation of software promotes discoverability and accessibility and works hand in hand with scholarly citation to ascribe value to, and provide recognition for software development. To meet this challenge, the Computational Infrastructure for Geodynamics (CIG) maintains a community repository built on custom and open tools to promote discovery, access, identification, credit, and provenance of research software for the geodynamics community. CIG (geodynamics.org) originated from recognition of the tremendous effort required to develop sound software and the need to reduce duplication of effort and to sustain community codes. CIG curates software across 6 domains and has developed and follows software best practices that include establishing test cases, documentation, and a citable publication for each software package. CIG software landing web pages provide access to current and past releases; many are also accessible through the CIG community repository on github. CIG has now developed abc - attribution builder for citation to enable software users to give credit to software developers. abc uses zenodo as an archive and as the mechanism to obtain a unique identifier (DOI) for scientific software. To assemble the metadata, we searched the software's documentation and research publications and then requested the primary developers to verify. In this process, we have learned that each development community approaches software attribution differently. The metadata gathered is based on guidelines established by groups such as FORCE11 and OntoSoft. The rollout of abc is gradual as developers are forward-looking, rarely willing to go back and archive prior releases in zenodo. Going forward all actively developed packages will utilize the zenodo and github integration to automate the archival process when a new release is issued. How to handle legacy software, multi-authored libraries, and assigning roles to software remain open issues.

  14. Semantic Service Matchmaking in the ATM Domain Considering Infrastructure Capability Constraints

    NASA Astrophysics Data System (ADS)

    Moser, Thomas; Mordinyi, Richard; Sunindyo, Wikan Danar; Biffl, Stefan

    In a service-oriented environment business processes flexibly build on software services provided by systems in a network. A key design challenge is the semantic matchmaking of business processes and software services in two steps: 1. Find for one business process the software services that meet or exceed the BP requirements; 2. Find for all business processes the software services that can be implemented within the capability constraints of the underlying network, which poses a major problem since even for small scenarios the solution space is typically very large. In this chapter we analyze requirements from mission-critical business processes in the Air Traffic Management (ATM) domain and introduce an approach for semi-automatic semantic matchmaking for software services, the “System-Wide Information Sharing” (SWIS) business process integration framework. A tool-supported semantic matchmaking process like SWIS can provide system designers and integrators with a set of promising software service candidates and therefore strongly reduces the human matching effort by focusing on a much smaller space of matchmaking candidates. We evaluate the feasibility of the SWIS approach in an industry use case from the ATM domain.
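
    A minimal sketch of the two matchmaking steps described above, using invented requirement and capability fields rather than the actual SWIS ontology:

    ```python
    # Sketch of the two-step matchmaking idea: (1) per business process, keep the
    # services that meet its requirements; (2) keep only combinations that fit the
    # network's capability budget. Data structures are invented for illustration.
    from dataclasses import dataclass
    from itertools import product
    from typing import List

    @dataclass(frozen=True)
    class Service:
        name: str
        latency_ms: int        # what the service can guarantee
        bandwidth_kbps: int    # what the service consumes

    @dataclass(frozen=True)
    class Process:
        name: str
        max_latency_ms: int    # requirement the service must meet or exceed

    def candidates(process: Process, services: List[Service]) -> List[Service]:
        """Step 1: services meeting a single business process's requirements."""
        return [s for s in services if s.latency_ms <= process.max_latency_ms]

    def feasible_assignments(processes, services, network_bandwidth_kbps):
        """Step 2: assignments for all processes within the network capability."""
        per_process = [candidates(p, services) for p in processes]
        for combo in product(*per_process):
            if sum(s.bandwidth_kbps for s in combo) <= network_bandwidth_kbps:
                yield dict(zip((p.name for p in processes), combo))

    services = [Service("radar-feed", 50, 800), Service("flight-plan", 200, 100)]
    processes = [Process("conflict-detection", 100), Process("arrival-manager", 300)]
    for assignment in feasible_assignments(processes, services, 1000):
        print(assignment)
    ```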

  15. Distributed Processing of Sentinel-2 Products using the BIGEARTH Platform

    NASA Astrophysics Data System (ADS)

    Bacu, Victor; Stefanut, Teodor; Nandra, Constantin; Mihon, Danut; Gorgan, Dorian

    2017-04-01

    The constellation of observational satellites orbiting around Earth is constantly increasing, providing more data that need to be processed in order to extract meaningful information and knowledge from it. Sentinel-2 satellites, part of the Copernicus Earth Observation program, aim to be used in agriculture, forestry and many other land management applications. ESA's SNAP toolbox can be used to process data gathered by Sentinel-2 satellites but is limited to the resources provided by a stand-alone computer. In this paper we present a cloud based software platform that makes use of this toolbox together with other remote sensing software applications to process Sentinel-2 products. The BIGEARTH software platform [1] offers an integrated solution for processing Earth Observation data coming from different sources (such as satellites or on-site sensors). The flow of processing is defined as a chain of tasks based on the WorDeL description language [2]. Each task could rely on a different software technology (such as Grass GIS and ESA's SNAP) in order to process the input data. One important feature of the BIGEARTH platform comes from this possibility of interconnection and integration, throughout the same flow of processing, of the various well known software technologies. All this integration is transparent from the user perspective. The proposed platform extends the SNAP capabilities by enabling specialists to easily scale the processing over distributed architectures, according to their specific needs and resources. The software platform [3] can be used in multiple configurations. In the basic one the software platform runs as a standalone application inside a virtual machine. Obviously in this case the computational resources are limited but it will give an overview of the functionalities of the software platform, and also the possibility to define the flow of processing and later on to execute it on a more complex infrastructure. The most complex and robust configuration is based on cloud computing and allows the installation on a private or public cloud infrastructure. In this configuration, the processing resources can be dynamically allocated and the execution time can be considerably improved by the available virtual resources and the number of parallelizable sequences in the processing flow. The presentation highlights the benefits and issues of the proposed solution by analyzing some significant experimental use cases. Main references for further information: [1] BigEarth project, http://cgis.utcluj.ro/projects/bigearth [2] Constantin Nandra, Dorian Gorgan: "Defining Earth data batch processing tasks by means of a flexible workflow description language", ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., III-4, 59-66, (2016). [3] Victor Bacu, Teodor Stefanut, Dorian Gorgan, "Adaptive Processing of Earth Observation Data on Cloud Infrastructures Based on Workflow Description", Proceedings of the Intelligent Computer Communication and Processing (ICCP), IEEE-Press, pp.444-454, (2015).
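
    A toy illustration of the chain-of-tasks idea in Python; the real platform describes flows in the WorDeL language and dispatches each task to tools such as SNAP or GRASS GIS, so the task names below are purely hypothetical.

    ```python
    # Toy illustration of defining and running a processing chain, where each task
    # could be backed by a different tool (e.g. SNAP, GRASS GIS). Task names are
    # hypothetical; the real platform describes flows in WorDeL, not Python.
    from typing import Callable, List

    Task = Callable[[dict], dict]

    def atmospheric_correction(product: dict) -> dict:
        return {**product, "corrected": True}

    def compute_ndvi(product: dict) -> dict:
        return {**product, "ndvi": "ndvi_raster.tif"}

    def run_chain(tasks: List[Task], product: dict) -> dict:
        """Execute tasks in order, feeding each task the previous task's output."""
        for task in tasks:
            product = task(product)   # each step could run on a different virtual machine
        return product

    result = run_chain([atmospheric_correction, compute_ndvi],
                       {"scene": "S2A_MSIL1C_20170101"})
    print(result)
    ```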

  16. Waggle: A Framework for Intelligent Attentive Sensing and Actuation

    NASA Astrophysics Data System (ADS)

    Sankaran, R.; Jacob, R. L.; Beckman, P. H.; Catlett, C. E.; Keahey, K.

    2014-12-01

    Advances in sensor-driven computation and computationally steered sensing will greatly enable future research in fields including environmental and atmospheric sciences. We will present "Waggle," an open-source hardware and software infrastructure developed with two goals: (1) reducing the separation and latency between sensing and computing and (2) improving the reliability and longevity of sensing-actuation platforms in challenging and costly deployments. Inspired by "deep-space probe" systems, the Waggle platform design includes features that can support longitudinal studies, deployments with varying communication links, and remote management capabilities. Waggle lowers the barrier for scientists to incorporate real-time data from their sensors into their computations and to manipulate the sensors or provide feedback through actuators. A standardized software and hardware design allows quick addition of new sensors/actuators and associated software in the nodes and enables them to be coupled with computational codes both in situ and on external compute infrastructure. The Waggle framework currently drives the deployment of two observational systems - a portable and self-sufficient weather platform for study of small-scale effects in Chicago's urban core and an open-ended distributed instrument in Chicago that aims to support several research pursuits across a broad range of disciplines including urban planning, microbiology and computer science. Built around open-source software, hardware, and a Linux OS, the Waggle system comprises two components - the Waggle field-node and the Waggle cloud-computing infrastructure. The Waggle field-node affords a modular, scalable, fault-tolerant, secure, and extensible platform for hosting sensors and actuators in the field. It supports in situ computation and data storage, and integration with cloud-computing infrastructure. The Waggle cloud infrastructure is designed with the goal of scaling to several hundreds of thousands of Waggle nodes. It supports aggregating data from sensors hosted by the nodes, staging computation, relaying feedback to the nodes and serving data to end-users. We will discuss the Waggle design principles and their applicability to various observational research pursuits, and demonstrate its capabilities.

  17. Cyberinfrastructure for the digital brain: spatial standards for integrating rodent brain atlases

    PubMed Central

    Zaslavsky, Ilya; Baldock, Richard A.; Boline, Jyl

    2014-01-01

    Biomedical research entails capture and analysis of massive data volumes and new discoveries arise from data-integration and mining. This is only possible if data can be mapped onto a common framework such as the genome for genomic data. In neuroscience, the framework is intrinsically spatial and based on a number of paper atlases. This cannot meet today's data-intensive analysis and integration challenges. A scalable and extensible software infrastructure that is standards based but open for novel data and resources, is required for integrating information such as signal distributions, gene-expression, neuronal connectivity, electrophysiology, anatomy, and developmental processes. Therefore, the International Neuroinformatics Coordinating Facility (INCF) initiated the development of a spatial framework for neuroscience data integration with an associated Digital Atlasing Infrastructure (DAI). A prototype implementation of this infrastructure for the rodent brain is reported here. The infrastructure is based on a collection of reference spaces to which data is mapped at the required resolution, such as the Waxholm Space (WHS), a 3D reconstruction of the brain generated using high-resolution, multi-channel microMRI. The core standards of the digital atlasing service-oriented infrastructure include Waxholm Markup Language (WaxML): XML schema expressing a uniform information model for key elements such as coordinate systems, transformations, points of interest (POI)s, labels, and annotations; and Atlas Web Services: interfaces for querying and updating atlas data. The services return WaxML-encoded documents with information about capabilities, spatial reference systems (SRSs) and structures, and execute coordinate transformations and POI-based requests. Key elements of INCF-DAI cyberinfrastructure have been prototyped for both mouse and rat brain atlas sources, including the Allen Mouse Brain Atlas, UCSD Cell-Centered Database, and Edinburgh Mouse Atlas Project. PMID:25309417

  18. Cyberinfrastructure for the digital brain: spatial standards for integrating rodent brain atlases.

    PubMed

    Zaslavsky, Ilya; Baldock, Richard A; Boline, Jyl

    2014-01-01

    Biomedical research entails capture and analysis of massive data volumes and new discoveries arise from data integration and mining. This is only possible if data can be mapped onto a common framework such as the genome for genomic data. In neuroscience, the framework is intrinsically spatial and based on a number of paper atlases. This cannot meet today's data-intensive analysis and integration challenges. A scalable and extensible software infrastructure that is standards-based but open to novel data and resources is required for integrating information such as signal distributions, gene expression, neuronal connectivity, electrophysiology, anatomy, and developmental processes. Therefore, the International Neuroinformatics Coordinating Facility (INCF) initiated the development of a spatial framework for neuroscience data integration with an associated Digital Atlasing Infrastructure (DAI). A prototype implementation of this infrastructure for the rodent brain is reported here. The infrastructure is based on a collection of reference spaces to which data is mapped at the required resolution, such as the Waxholm Space (WHS), a 3D reconstruction of the brain generated using high-resolution, multi-channel microMRI. The core standards of the digital atlasing service-oriented infrastructure include the Waxholm Markup Language (WaxML), an XML schema expressing a uniform information model for key elements such as coordinate systems, transformations, points of interest (POIs), labels, and annotations; and Atlas Web Services, interfaces for querying and updating atlas data. The services return WaxML-encoded documents with information about capabilities, spatial reference systems (SRSs) and structures, and execute coordinate transformations and POI-based requests. Key elements of the INCF-DAI cyberinfrastructure have been prototyped for both mouse and rat brain atlas sources, including the Allen Mouse Brain Atlas, UCSD Cell-Centered Database, and Edinburgh Mouse Atlas Project.

  19. Intelligent systems technology infrastructure for integrated systems

    NASA Technical Reports Server (NTRS)

    Lum, Henry

    1991-01-01

    A system infrastructure must be properly designed and integrated from the conceptual development phase to accommodate evolutionary intelligent technologies. Several technology development activities were identified that may have application to rendezvous and capture systems. Optical correlators in conjunction with fuzzy logic control might be used for the identification, tracking, and capture of either cooperative or non-cooperative targets without the intensive computational requirements associated with vision processing. A hybrid digital/analog system was developed and tested with a robotic arm. An aircraft refueling application demonstration is planned within two years. Initially this demonstration will be ground based with a follow-on air based demonstration. System dependability measurement and modeling techniques are being developed for fault management applications. This involves usage of incremental solution/evaluation techniques and modularized systems to facilitate reuse and to take advantage of natural partitions in system models. Though not yet commercially available and currently subject to accuracy limitations, technology is being developed to perform optical matrix operations to enhance computational speed. Optical terrain recognition using camera image sequencing processed with optical correlators is being developed to determine position and velocity in support of lander guidance. The system is planned for testing in conjunction with Dryden Flight Research Facility. Advanced architecture technology is defining open architecture design constraints, test bed concepts (processors, multiple hardware/software and multi-dimensional user support, knowledge/tool sharing infrastructure), and software engineering interface issues.

  20. Structure simulation with calculated NMR parameters - integrating COSMOS into the CCPN framework.

    PubMed

    Schneider, Olaf; Fogh, Rasmus H; Sternberg, Ulrich; Klenin, Konstantin; Kondov, Ivan

    2012-01-01

    The Collaborative Computing Project for NMR (CCPN) has built a software framework consisting of the CCPN data model (with APIs) for NMR-related data, the CcpNmr Analysis program and additional tools like CcpNmr FormatConverter. The open architecture allows for the integration of external software to extend the abilities of the CCPN framework with additional calculation methods. Recently, we have carried out the first steps toward integrating our software, Computer Simulation of Molecular Structures (COSMOS), into the CCPN framework. The COSMOS-NMR force field unites quantum chemical routines for the calculation of molecular properties with a molecular mechanics force field yielding the relative molecular energies. COSMOS-NMR allows NMR parameters to be introduced as constraints into molecular mechanics calculations. The resulting infrastructure will be made available for the NMR community. As a first application we have tested the evaluation of calculated protein structures using COSMOS-derived 13C Cα and Cβ chemical shifts. In this paper we give an overview of the methodology and a roadmap for future developments and applications.

  1. Cancer Genomics: Integrative and Scalable Solutions in R / Bioconductor | Informatics Technology for Cancer Research (ITCR)

    Cancer.gov

    This proposal develops scalable R / Bioconductor software infrastructure and data resources to integrate complex, heterogeneous, and large cancer genomic experiments. The falling cost of genomic assays facilitates collection of multiple data types (e.g., gene and transcript expression, structural variation, copy number, methylation, and microRNA data) from a set of clinical specimens. Furthermore, substantial resources are now available from large consortium activities like The Cancer Genome Atlas (TCGA).
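
    The integration step described here amounts to aligning heterogeneous assays measured on shared clinical specimens. The sketch below illustrates that idea in Python with pandas rather than the R / Bioconductor stack the project actually builds on; file names and column labels are assumptions for illustration only.

        # Illustrative sketch (pandas, not R/Bioconductor): align expression,
        # copy-number, and methylation assays on shared specimen identifiers.
        import pandas as pd

        expr = pd.read_csv("expression.csv", index_col="specimen_id")
        cnv = pd.read_csv("copy_number.csv", index_col="specimen_id")
        meth = pd.read_csv("methylation.csv", index_col="specimen_id")

        # Keep only specimens measured on all three assays, mirroring the kind of
        # sample matching a multi-assay container performs.
        shared = expr.index.intersection(cnv.index).intersection(meth.index)
        integrated = pd.concat(
            {"expr": expr.loc[shared], "cnv": cnv.loc[shared], "meth": meth.loc[shared]},
            axis=1,
        )
        print(integrated.shape)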

  2. Medical image informatics infrastructure design and applications.

    PubMed

    Huang, H K; Wong, S T; Pietka, E

    1997-01-01

    A picture archiving and communication system (PACS) is a system integration of multimodality images and health information systems designed to improve the operation of a radiology department. As it evolves, PACS becomes a hospital image document management system with a voluminous image and related data file repository. A medical image informatics infrastructure can be designed to take advantage of existing data, providing PACS with add-on value for health care service, research, and education. A medical image informatics infrastructure (MIII) consists of the following components: medical images and associated data (including the PACS database), image processing, data/knowledge base management, visualization, graphic user interface, communication networking, and application-oriented software. This paper describes these components and their logical connection, and illustrates some applications based on the concept of the MIII.

  3. BioContainers: an open-source and community-driven framework for software standardization.

    PubMed

    da Veiga Leprevost, Felipe; Grüning, Björn A; Alves Aflitos, Saulo; Röst, Hannes L; Uszkoreit, Julian; Barsnes, Harald; Vaudel, Marc; Moreno, Pablo; Gatto, Laurent; Weber, Jonas; Bai, Mingze; Jimenez, Rafael C; Sachsenberg, Timo; Pfeuffer, Julianus; Vera Alvarez, Roberto; Griss, Johannes; Nesvizhskii, Alexey I; Perez-Riverol, Yasset

    2017-08-15

    BioContainers (biocontainers.pro) is an open-source and community-driven framework which provides platform-independent executable environments for bioinformatics software. BioContainers allows labs of all sizes to easily install bioinformatics software, maintain multiple versions of the same software and combine tools into powerful analysis pipelines. BioContainers is based on the popular open-source Docker and rkt container frameworks, which allow software to be installed and executed in an isolated and controlled environment. Also, it provides infrastructure and basic guidelines to create, manage and distribute bioinformatics containers with a special focus on omics technologies. These containers can be integrated into more comprehensive bioinformatics pipelines and different architectures (local desktop, cloud environments or HPC clusters). The software is freely available at github.com/BioContainers/. yperez@ebi.ac.uk. © The Author(s) 2017. Published by Oxford University Press.
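
    As a rough sketch of how a containerized tool of this kind can be driven from a pipeline, the Python example below invokes the Docker CLI and mounts a host directory for inputs and outputs. The image name, tag, tool name, and arguments are placeholders, not a specific published BioContainers image.

        # Illustrative sketch: run a containerized bioinformatics tool from Python
        # via the Docker CLI. Image name and tool arguments are hypothetical.
        import subprocess
        from pathlib import Path

        data_dir = Path("/data/run01").resolve()   # assumed host directory with inputs

        cmd = [
            "docker", "run", "--rm",
            "-v", f"{data_dir}:/work",             # mount inputs/outputs into the container
            "example/sometool:1.0",                # hypothetical container image
            "sometool", "--in", "/work/sample.fastq", "--out", "/work/result.txt",
        ]

        result = subprocess.run(cmd, capture_output=True, text=True)
        print(result.returncode)
        print(result.stdout[-500:])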

  4. BioContainers: an open-source and community-driven framework for software standardization

    PubMed Central

    da Veiga Leprevost, Felipe; Grüning, Björn A.; Alves Aflitos, Saulo; Röst, Hannes L.; Uszkoreit, Julian; Barsnes, Harald; Vaudel, Marc; Moreno, Pablo; Gatto, Laurent; Weber, Jonas; Bai, Mingze; Jimenez, Rafael C.; Sachsenberg, Timo; Pfeuffer, Julianus; Vera Alvarez, Roberto; Griss, Johannes; Nesvizhskii, Alexey I.; Perez-Riverol, Yasset

    2017-01-01

    Abstract Motivation BioContainers (biocontainers.pro) is an open-source and community-driven framework which provides platform-independent executable environments for bioinformatics software. BioContainers allows labs of all sizes to easily install bioinformatics software, maintain multiple versions of the same software and combine tools into powerful analysis pipelines. BioContainers is based on the popular open-source Docker and rkt container frameworks, which allow software to be installed and executed in an isolated and controlled environment. Also, it provides infrastructure and basic guidelines to create, manage and distribute bioinformatics containers with a special focus on omics technologies. These containers can be integrated into more comprehensive bioinformatics pipelines and different architectures (local desktop, cloud environments or HPC clusters). Availability and Implementation The software is freely available at github.com/BioContainers/. Contact yperez@ebi.ac.uk PMID:28379341

  5. System Architecture Development for Energy and Water Infrastructure Data Management and Geovisual Analytics

    NASA Astrophysics Data System (ADS)

    Berres, A.; Karthik, R.; Nugent, P.; Sorokine, A.; Myers, A.; Pang, H.

    2017-12-01

    Building an integrated data infrastructure that can meet the needs of a sustainable energy-water resource management requires a robust data management and geovisual analytics platform, capable of cross-domain scientific discovery and knowledge generation. Such a platform can facilitate the investigation of diverse complex research and policy questions for emerging priorities in Energy-Water Nexus (EWN) science areas. Using advanced data analytics, machine learning techniques, multi-dimensional statistical tools, and interactive geovisualization components, such a multi-layered federated platform is being developed, the Energy-Water Nexus Knowledge Discovery Framework (EWN-KDF). This platform utilizes several enterprise-grade software design concepts and standards such as extensible service-oriented architecture, open standard protocols, event-driven programming model, enterprise service bus, and adaptive user interfaces to provide a strategic value to the integrative computational and data infrastructure. EWN-KDF is built on the Compute and Data Environment for Science (CADES) environment in Oak Ridge National Laboratory (ORNL).

  6. About Distributed Simulation-based Optimization of Forming Processes using a Grid Architecture

    NASA Astrophysics Data System (ADS)

    Grauer, Manfred; Barth, Thomas

    2004-06-01

    Permanently increasing complexity of products and their manufacturing processes combined with a shorter "time-to-market" leads to more and more use of simulation and optimization software systems for product design. Finding a "good" design of a product implies the solution of computationally expensive optimization problems based on the results of simulation. Due to the computational load caused by the solution of these problems, the requirements on the Information & Telecommunication (IT) infrastructure of an enterprise or research facility are shifting from stand-alone resources towards the integration of software and hardware resources in a distributed environment for high-performance computing. Resources can comprise software systems, hardware systems, or communication networks. An appropriate IT infrastructure must provide the means to integrate all these resources and enable their use even across a network to cope with requirements from geographically distributed scenarios, e.g. in computational engineering and/or collaborative engineering. Integrating experts' knowledge into the optimization process is essential in order to reduce the complexity caused by the number of design variables and the high dimensionality of the design space. Hence, utilization of knowledge-based systems must be supported by providing data management facilities as a basis for knowledge extraction from product data. In this paper, the focus is on a distributed problem-solving environment (PSE) capable of providing access to a variety of necessary resources and services. A distributed approach integrating simulation and optimization on a network of workstations and cluster systems is presented. For geometry generation, the CAD system CATIA is used; it is coupled with the FEM simulation system INDEED for the simulation of sheet-metal forming processes and with the problem-solving environment OpTiX for distributed optimization.
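
    The core pattern here, farming out expensive simulation-based objective evaluations to distributed workers, can be sketched in a few lines of Python. The run_forming_simulation() function below is a toy stand-in, not the CATIA/INDEED/OpTiX coupling described in the paper.

        # Illustrative sketch: distribute expensive simulation-based objective
        # evaluations across worker processes, in the spirit of the distributed PSE.
        from concurrent.futures import ProcessPoolExecutor
        import math


        def run_forming_simulation(design):
            """Placeholder for a costly sheet-metal forming simulation."""
            thickness, radius = design
            # Toy surrogate objective: penalize thinning and small bend radii.
            return (1.0 / thickness) + math.exp(-radius)


        def main():
            candidate_designs = [(t / 10.0, r) for t in range(5, 15) for r in range(1, 6)]
            with ProcessPoolExecutor(max_workers=4) as pool:
                objectives = list(pool.map(run_forming_simulation, candidate_designs))
            best = min(zip(objectives, candidate_designs))
            print("best objective %.3f for design %s" % best)


        if __name__ == "__main__":
            main()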

  7. A SCORM Thin Client Architecture for E-Learning Systems Based on Web Services

    ERIC Educational Resources Information Center

    Casella, Giovanni; Costagliola, Gennaro; Ferrucci, Filomena; Polese, Giuseppe; Scanniello, Giuseppe

    2007-01-01

    In this paper we propose an architecture of e-learning systems characterized by the use of Web services and a suitable middleware component. These technical infrastructures allow us to extend the system with new services as well as to integrate and reuse heterogeneous software e-learning components. Moreover, they let us better support the…

  8. Navy Collaborative Integrated Information Technology Initiative

    DTIC Science & Technology

    2000-01-11

    investigating the development and application of collaborative multimedia conferencing software for education and other groupwork activities. We are extending...an alternative environment for place-based synchronous groupwork. The new environment is based on the same collaborative infrastructure as the...alternative environment for place-based synchronous groupwork. This information is being used as an initial user profile, requirements analysis

  9. Integrated remote sensing and visualization (IRSV) system for transportation infrastructure operations and management, phase two, volume 5 : aerial bridge deck imaging data collection and software revision.

    DOT National Transportation Integrated Search

    2012-02-01

    For rapid deployment of bridge scan missions, sub-inch aerial imaging using small format aerial photography is suggested. Under-belly photography is used to generate high resolution aerial images that can be geo-referenced and used for quantifyin...

  10. A semi-automated workflow for biodiversity data retrieval, cleaning, and quality control

    PubMed Central

    Mathew, Cherian; Obst, Matthias; Vicario, Saverio; Haines, Robert; Williams, Alan R.; de Jong, Yde; Goble, Carole

    2014-01-01

    Abstract The compilation and cleaning of data needed for analyses and prediction of species distributions is a time consuming process requiring a solid understanding of data formats and service APIs provided by biodiversity informatics infrastructures. We designed and implemented a Taverna-based Data Refinement Workflow which integrates taxonomic data retrieval, data cleaning, and data selection into a consistent, standards-based, and effective system hiding the complexity of underlying service infrastructures. The workflow can be freely used both locally and through a web-portal which does not require additional software installations by users. PMID:25535486

  11. A primer on precision medicine informatics.

    PubMed

    Sboner, Andrea; Elemento, Olivier

    2016-01-01

    In this review, we describe key components of a computational infrastructure for a precision medicine program that is based on clinical-grade genomic sequencing. Specific aspects covered in this review include software components and hardware infrastructure, reporting, integration into Electronic Health Records for routine clinical use and regulatory aspects. We emphasize informatics components related to reproducibility and reliability in genomic testing, regulatory compliance, traceability and documentation of processes, integration into clinical workflows, privacy requirements, prioritization and interpretation of results to report based on clinical needs, rapidly evolving knowledge base of genomic alterations and clinical treatments and return of results in a timely and predictable fashion. We also seek to differentiate between the use of precision medicine in germline and cancer. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  12. DOIDB: Reusing DataCite's search software as metadata portal for GFZ Data Services

    NASA Astrophysics Data System (ADS)

    Elger, K.; Ulbricht, D.; Bertelmann, R.

    2016-12-01

    GFZ Data Services is the central service point for the publication of research data at the Helmholtz Centre Potsdam GFZ German Research Centre for Geosciences (GFZ). It provides data publishing services to scientists of GFZ, associated projects, and associated institutions. The publishing services aim to make research data and physical samples visible and citable by assigning persistent identifiers (DOI, IGSN) and by complementing existing IT infrastructure. To integrate several research domains, a modular software stack made of free software components has been created to manage data and metadata as well as register persistent identifiers [1]. The pivotal component for the registration of DOIs is the DOIDB. It has been derived from three software components provided by DataCite [2] that moderate the registration of DOIs and the deposition of metadata, allow the dissemination of metadata, and provide a user interface to navigate and discover datasets. The DOIDB acts as a proxy to the DataCite infrastructure and, in addition to the DataCite metadata schema, allows metadata to be deposited and disseminated following the ISO19139 and NASA GCMD DIF schemas. The search component has been modified to meet the requirements of a geosciences metadata portal. In particular, the search component has been altered to make use of Apache SOLR's capability to index and query spatial coordinates. Furthermore, the user interface has been adjusted to provide a first impression of the data by showing a map, summary information and subjects. DOIDB and its components are available on GitHub [3]. We present a software solution for registration of DOIs that allows existing data systems to be integrated, keeps track of registered DOIs, and provides a metadata portal to discover datasets [4]. [1] Ulbricht, D.; Elger, K.; Bertelmann, R.; Klump, J. panMetaDocs, eSciDoc, and DOIDB—An Infrastructure for the Curation and Publication of File-Based Datasets for GFZ Data Services. ISPRS Int. J. Geo-Inf. 2016, 5, 25. http://doi.org/10.3390/ijgi5030025 [2] https://github.com/datacite [3] https://github.com/ulbricht/search/tree/doidb , https://github.com/ulbricht/mds/tree/doidb , https://github.com/ulbricht/oaip/tree/doidb [4] http://doidb.wdc-terra.org
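
    The spatial search capability mentioned above can be illustrated with a small Python sketch of a Solr query using the standard geofilt filter. The Solr URL, core name, and field names are assumptions for illustration; only the geofilt syntax itself is standard Solr spatial filtering.

        # Illustrative sketch: spatial query against a Solr index of dataset
        # metadata, of the kind the modified DOIDB search component performs.
        import requests

        SOLR_SELECT = "http://localhost:8983/solr/doidb/select"  # assumed core name

        params = {
            "q": "*:*",
            # Return records within 100 km of an assumed point near Potsdam;
            # "coordinates" is an assumed spatial field name.
            "fq": "{!geofilt sfield=coordinates pt=52.38,13.06 d=100}",
            "wt": "json",
            "rows": 10,
        }

        resp = requests.get(SOLR_SELECT, params=params, timeout=30)
        resp.raise_for_status()
        for doc in resp.json()["response"]["docs"]:
            print(doc.get("doi"), doc.get("title"))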

  13. Integrating software architectures for distributed simulations and simulation analysis communities.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldsby, Michael E.; Fellig, Daniel; Linebarger, John Michael

    2005-10-01

    The one-year Software Architecture LDRD (No. 79819) was a cross-site effort between Sandia California and Sandia New Mexico. The purpose of this research was to further develop and demonstrate integrating software architecture frameworks for distributed simulation and distributed collaboration in the homeland security domain. The integrated frameworks were initially developed through the Weapons of Mass Destruction Decision Analysis Center (WMD-DAC), sited at SNL/CA, and the National Infrastructure Simulation & Analysis Center (NISAC), sited at SNL/NM. The primary deliverable was a demonstration of both a federation of distributed simulations and a federation of distributed collaborative simulation analysis communities in the context of the same integrated scenario, which was the release of smallpox in San Diego, California. To our knowledge this was the first time such a combination of federations under a single scenario has ever been demonstrated. A secondary deliverable was the creation of the standalone GroupMeld™ collaboration client, which uses the GroupMeld™ synchronous collaboration framework. In addition, a small pilot experiment that used both integrating frameworks allowed a greater range of crisis management options to be performed and evaluated than would have been possible without the use of the frameworks.

  14. Exploiting IoT Technologies and Open Source Components for Smart Seismic Network Instrumentation

    NASA Astrophysics Data System (ADS)

    Germenis, N. G.; Koulamas, C. A.; Foundas, P. N.

    2017-12-01

    The data collection infrastructure of any seismic network poses a number of requirements and trade-offs related to accuracy, reliability, power autonomy and installation & operational costs. Given the right hardware design at the edge of this infrastructure, the embedded software running inside the instruments is the heart of pre-processing and communication services and of their integration with the central storage and processing facilities of the seismic network. This work demonstrates the feasibility and benefits of exploiting software components from heterogeneous sources in order to realize a smart seismic data logger, achieving higher reliability, faster integration and lower development and testing costs for critical functionality that is in turn responsible for the cost- and power-efficient operation of the device. The instrument's software builds on top of widely used open-source components around the Linux kernel with real-time extensions, the core Debian Linux distribution, and the Earthworm and SeisComP tooling frameworks, as well as components from the Internet of Things (IoT) world, such as the CoAP and MQTT protocols for the signaling plane, alongside the widely used de facto standards of the application domain at the data plane, such as the SeedLink protocol. By integrating lower-level GPL components of the SeisComP suite with higher-level Earthworm processing components, coupled with IoT protocol extensions to the latter, the instrument can implement smart functionality such as network-controlled, event-triggered data transmission in parallel with edge archiving and on-demand, short-term historical data retrieval.
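
    The event-triggered transmission idea can be sketched compactly: a simple STA/LTA detector decides when a waveform window is worth pushing to the network. The thresholds, window lengths, and the publish() stub below are assumptions; the actual instrument builds on Earthworm/SeisComP components and IoT protocols rather than this code.

        # Illustrative sketch of event-triggered transmission with a simple
        # STA/LTA trigger on a synthetic trace.
        import numpy as np


        def sta_lta(signal, nsta, nlta):
            """Ratio of short-term to long-term average of the squared signal."""
            sq = signal ** 2
            sta = np.convolve(sq, np.ones(nsta) / nsta, mode="same")
            lta = np.convolve(sq, np.ones(nlta) / nlta, mode="same")
            lta[lta == 0] = np.finfo(float).eps
            return sta / lta


        def publish(window):
            """Stand-in for an MQTT/CoAP publish of a triggered window."""
            print("would transmit %d samples" % window.size)


        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            trace = rng.normal(0, 1, 20_000)
            trace[12_000:12_400] += 8 * np.sin(np.linspace(0, 40 * np.pi, 400))  # synthetic event
            ratio = sta_lta(trace, nsta=50, nlta=1_000)
            if ratio.max() > 4.0:                 # assumed trigger threshold
                onset = int(np.argmax(ratio > 4.0))
                publish(trace[max(0, onset - 500): onset + 2_000])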

  15. Increasing the resilience and security of the United States' power infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Happenny, Sean F.

    2015-08-01

    The United States' power infrastructure is aging, underfunded, and vulnerable to cyber attack. Emerging smart grid technologies may take some of the burden off of existing systems and make the grid as a whole more efficient, reliable, and secure. The Pacific Northwest National Laboratory (PNNL) is funding research into several aspects of smart grid technology and grid security, creating a software simulation tool that will allow researchers to test power infrastructure control and distribution paradigms by utilizing different smart grid technologies to determine how the grid and these technologies react under different circumstances. Understanding how these systems behave in real-world conditions will lead to new ways to make our power infrastructure more resilient and secure. Demonstrating security in embedded systems is another research area PNNL is tackling. Many of the systems controlling the U.S. critical infrastructure, such as the power grid, lack integrated security and the aging networks protecting them are becoming easier to attack.

  16. The Other Infrastructure: Distance Education's Digital Plant.

    ERIC Educational Resources Information Center

    Boettcher, Judith V.; Kumar, M. S. Vijay

    2000-01-01

    Suggests a new infrastructure--the digital plant--for supporting flexible Web campus environments. Describes four categories which make up the infrastructure: personal communication tools and applications; network of networks for the Web campus; dedicated servers and software applications; software applications and services from external…

  17. Intelligent systems technology infrastructure for integrated systems

    NASA Technical Reports Server (NTRS)

    Lum, Henry, Jr.

    1991-01-01

    Significant advances have occurred during the last decade in intelligent systems technologies (a.k.a. knowledge-based systems, KBS) including research, feasibility demonstrations, and technology implementations in operational environments. Evaluation and simulation data obtained to date in real-time operational environments suggest that cost-effective utilization of intelligent systems technologies can be realized for Automated Rendezvous and Capture applications. The successful implementation of these technologies involves a complex system infrastructure integrating the requirements of transportation, vehicle checkout and health management, and communication systems without compromise to system reliability and performance. The resources that must be invoked to accomplish these tasks include remote ground operations and control, built-in system fault management and control, and intelligent robotics. To ensure long-term evolution and integration of new validated technologies over the lifetime of the vehicle, system interfaces must also be addressed and integrated into the overall system interface requirements. An approach for defining and evaluating the system infrastructures, including the testbed currently being used to support the ongoing evaluations for the evolutionary Space Station Freedom Data Management System, is presented and discussed. Intelligent system technologies discussed include artificial intelligence (real-time replanning and scheduling), high performance computational elements (parallel processors, photonic processors, and neural networks), real-time fault management and control, and system software development tools for rapid prototyping capabilities.

  18. Modernization of the USGS Hawaiian Volcano Observatory Seismic Processing Infrastructure

    NASA Astrophysics Data System (ADS)

    Antolik, L.; Shiro, B.; Friberg, P. A.

    2016-12-01

    The USGS Hawaiian Volcano Observatory (HVO) operates a Tier 1 Advanced National Seismic System (ANSS) seismic network to monitor, characterize, and report on volcanic and earthquake activity in the State of Hawaii. Upgrades at the observatory since 2009 have improved the digital telemetry network, computing resources, and seismic data processing with the adoption of the ANSS Quake Management System (AQMS). HVO aims to build on these efforts by further modernizing its seismic processing infrastructure and strengthening its ability to meet ANSS performance standards. Most notably, this will also allow HVO to support redundant systems, both onsite and offsite, in order to provide better continuity of operation during intermittent power and network outages. We are in the process of implementing a number of upgrades and improvements to HVO's seismic processing infrastructure, including: 1) Virtualization of AQMS physical servers; 2) Migration of server operating systems from Solaris to Linux; 3) Consolidation of AQMS real-time and post-processing services to a single server; 4) Upgrading the database from Oracle 10 to Oracle 12; and 5) Upgrading to the latest Earthworm and AQMS software. These improvements will make server administration more efficient, minimize hardware resources required by AQMS, simplify the Oracle replication setup, and provide better integration with HVO's existing state-of-health monitoring tools and backup system. Ultimately, it will provide HVO with the latest and most secure software available while making the software easier to deploy and support.

  19. IHE cross-enterprise document sharing for imaging: interoperability testing software

    PubMed Central

    2010-01-01

    Background With the deployments of Electronic Health Records (EHR), interoperability testing in healthcare is becoming crucial. EHR enables access to prior diagnostic information in order to assist in health decisions. It is a virtual system that results from the cooperation of several heterogeneous distributed systems. Interoperability between peers is therefore essential. Achieving interoperability requires various types of testing. Implementations need to be tested using software that simulates communication partners, and that provides test data and test plans. Results In this paper we describe software that is used to test systems that are involved in sharing medical images within the EHR. Our software is used as part of the Integrating the Healthcare Enterprise (IHE) testing process to test the Cross Enterprise Document Sharing for imaging (XDS-I) integration profile. We describe its architecture and functionalities; we also expose the challenges encountered and discuss the chosen design solutions. Conclusions EHR is being deployed in several countries. The EHR infrastructure will be continuously evolving to embrace advances in the information technology domain. Our software is built on a web framework to allow easy evolution with web technology. The testing software is publicly available; it can be used by system implementers to test their implementations. It can also be used by site integrators to verify and test the interoperability of systems, or by developers to understand specification ambiguities or to resolve implementation difficulties. PMID:20858241

  20. IHE cross-enterprise document sharing for imaging: interoperability testing software.

    PubMed

    Noumeir, Rita; Renaud, Bérubé

    2010-09-21

    With the deployments of Electronic Health Records (EHR), interoperability testing in healthcare is becoming crucial. EHR enables access to prior diagnostic information in order to assist in health decisions. It is a virtual system that results from the cooperation of several heterogeneous distributed systems. Interoperability between peers is therefore essential. Achieving interoperability requires various types of testing. Implementations need to be tested using software that simulates communication partners, and that provides test data and test plans. In this paper we describe software that is used to test systems that are involved in sharing medical images within the EHR. Our software is used as part of the Integrating the Healthcare Enterprise (IHE) testing process to test the Cross Enterprise Document Sharing for imaging (XDS-I) integration profile. We describe its architecture and functionalities; we also expose the challenges encountered and discuss the chosen design solutions. EHR is being deployed in several countries. The EHR infrastructure will be continuously evolving to embrace advances in the information technology domain. Our software is built on a web framework to allow easy evolution with web technology. The testing software is publicly available; it can be used by system implementers to test their implementations. It can also be used by site integrators to verify and test the interoperability of systems, or by developers to understand specification ambiguities or to resolve implementation difficulties.

  1. OSiRIS: a distributed Ceph deployment using software defined networking for multi-institutional research

    NASA Astrophysics Data System (ADS)

    McKee, Shawn; Kissel, Ezra; Meekhof, Benjeman; Swany, Martin; Miller, Charles; Gregorowicz, Michael

    2017-10-01

    We report on the first year of the OSiRIS project (NSF Award #1541335, UM, IU, MSU and WSU) which is targeting the creation of a distributed Ceph storage infrastructure coupled together with software-defined networking to provide high-performance access for well-connected locations on any participating campus. The project's goal is to provide a single scalable, distributed storage infrastructure that allows researchers at each campus to read, write, manage and share data directly from their own computing locations. The NSF CC*DNI DIBBS program, which funded OSiRIS, is seeking solutions to the challenges of multi-institutional collaborations involving large amounts of data and we are exploring the creative use of Ceph and networking to address those challenges. While OSiRIS will eventually be serving a broad range of science domains, its first adopter will be the LHC ATLAS detector project via the ATLAS Great Lakes Tier-2 (AGLT2) jointly located at the University of Michigan and Michigan State University. Part of our presentation will cover how ATLAS is using the OSiRIS infrastructure and our experiences integrating our first user community. The presentation will also review the motivations for and goals of the project, the technical details of the OSiRIS infrastructure, the challenges in providing such an infrastructure, and the technical choices made to address those challenges. We will conclude with our plans for the remaining 4 years of the project and our vision for what we hope to deliver by the project's end.
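
    To give a flavor of the kind of direct object access a well-connected site could make against such a Ceph infrastructure, here is a minimal Python sketch using the python3-rados bindings. The configuration path and pool name are assumed for illustration and are not OSiRIS-specific settings.

        # Illustrative sketch: write and read an object in a Ceph pool via librados.
        import rados

        cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")  # assumed config location
        cluster.connect()
        try:
            ioctx = cluster.open_ioctx("osiris-demo")          # hypothetical pool name
            try:
                ioctx.write_full("hello.txt", b"shared object for a multi-campus workflow")
                print(ioctx.read("hello.txt"))
            finally:
                ioctx.close()
        finally:
            cluster.shutdown()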

  2. Multiuser Collaboration with Networked Mobile Devices

    NASA Technical Reports Server (NTRS)

    Tso, Kam S.; Tai, Ann T.; Deng, Yong M.; Becks, Paul G.

    2006-01-01

    In this paper we describe a multiuser collaboration infrastructure that enables multiple mission scientists to remotely and collaboratively interact with visualization and planning software, using wireless networked personal digital assistants(PDAs) and other mobile devices. During ground operations of planetary rover and lander missions, scientists need to meet daily to review downlinked data and plan science activities. For example, scientists use the Science Activity Planner (SAP) in the Mars Exploration Rover (MER) mission to visualize downlinked data and plan rover activities during the science meetings [1]. Computer displays are projected onto large screens in the meeting room to enable the scientists to view and discuss downlinked images and data displayed by SAP and other software applications. However, only one person can interact with the software applications because input to the computer is limited to a single mouse and keyboard. As a result, the scientists have to verbally express their intentions, such as selecting a target at a particular location on the Mars terrain image, to that person in order to interact with the applications. This constrains communication and limits the returns of science planning. Furthermore, ground operations for Mars missions are fundamentally constrained by the short turnaround time for science and engineering teams to process and analyze data, plan the next uplink, generate command sequences, and transmit the uplink to the vehicle [2]. Therefore, improving ground operations is crucial to the success of Mars missions. The multiuser collaboration infrastructure enables users to control software applications remotely and collaboratively using mobile devices. The infrastructure includes (1) human-computer interaction techniques to provide natural, fast, and accurate inputs, (2) a communications protocol to ensure reliable and efficient coordination of the input devices and host computers, (3) an application-independent middleware that maintains the states, sessions, and interactions of individual users of the software applications, (4) an application programming interface to enable tight integration of applications and the middleware. The infrastructure is able to support any software applications running under the Windows or Unix platforms. The resulting technologies not only are applicable to NASA mission operations, but also useful in other situations such as design reviews, brainstorming sessions, and business meetings, as they can benefit from having the participants concurrently interact with the software applications (e.g., presentation applications and CAD design tools) to illustrate their ideas and provide inputs.

  3. The Satellite Data Thematic Core Service within the EPOS Research Infrastructure

    NASA Astrophysics Data System (ADS)

    Manunta, Michele; Casu, Francesco; Zinno, Ivana; De Luca, Claudio; Buonanno, Sabatino; Zeni, Giovanni; Wright, Tim; Hooper, Andy; Diament, Michel; Ostanciaux, Emilie; Mandea, Mioara; Walter, Thomas; Maccaferri, Francesco; Fernandez, Josè; Stramondo, Salvatore; Bignami, Christian; Bally, Philippe; Pinto, Salvatore; Marin, Alessandro; Cuomo, Antonio

    2017-04-01

    EPOS, the European Plate Observing System, is a long-term plan to facilitate the integrated use of data, data products, software and services, available from distributed Research Infrastructures (RI), for solid Earth science in Europe. Indeed, EPOS integrates a large number of existing European RIs belonging to several fields of the Earth science, from seismology to geodesy, near fault and volcanic observatories as well as anthropogenic hazards. The EPOS vision is that the integration of the existing national and trans-national research infrastructures will increase access and use of the multidisciplinary data recorded by the solid Earth monitoring networks, acquired in laboratory experiments and/or produced by computational simulations. The establishment of EPOS will foster the interoperability of products and services in the Earth science field to a worldwide community of users. Accordingly, the EPOS aim is to integrate the diverse and advanced European Research Infrastructures for solid Earth science, and build on new e-science opportunities to monitor and understand the dynamic and complex solid-Earth System. One of the EPOS Thematic Core Services (TCS), referred to as Satellite Data, aims at developing, implementing and deploying advanced satellite data products and services, mainly based on Copernicus data (namely Sentinel acquisitions), for the Earth science community. This work intends to present the technological enhancements, fostered by EPOS, to deploy effective satellite services in a harmonized and integrated way. In particular, the Satellite Data TCS will deploy five services, EPOSAR, GDM, COMET, 3D-Def and MOD, which are mainly based on the exploitation of SAR data acquired by the Sentinel-1 constellation and designed to provide information on Earth surface displacements. In particular, the planned services will provide both advanced DInSAR products (deformation maps, velocity maps, deformation time series) and value-added measurements (source model, 3D displacement maps, seismic hazard maps). Moreover, the services will release both on-demand and systematic products. The latter will be generated and made available to the users on a continuous basis, by processing each Sentinel-1 data once acquired, over a defined number of areas of interest; while the former will allow users to select data, areas, and time period to carry out their own analyses via an on-line platform. The satellite components will be integrated within the EPOS infrastructure through a common and harmonized interface that will allow users to search, process and share remote sensing images and results. This gateway to the satellite services will be represented by the ESA- Geohazards Exploitation Platform (GEP), a new cloud-based platform for the satellite Earth Observations designed to support the scientific community in the understanding of high impact natural disasters. Satellite Data TCS will use GEP as the common interface toward the main EPOS portal to provide EPOS users not only with data products but also with relevant processing and visualisation software, thus allowing users to gather and process on a cloud-computing infrastructure large datasets without any need to download them locally.

  4. Integration of XRootD into the cloud infrastructure for ALICE data analysis

    NASA Astrophysics Data System (ADS)

    Kompaniets, Mikhail; Shadura, Oksana; Svirin, Pavlo; Yurchenko, Volodymyr; Zarochentsev, Andrey

    2015-12-01

    Cloud technologies allow easy load balancing between different tasks and projects. From the viewpoint of data analysis in the ALICE experiment, the cloud allows software to be deployed using the CERN Virtual Machine (CernVM) and CernVM File System (CVMFS), different (including outdated) versions of software to be run for long-term data preservation, and resources to be dynamically allocated for different computing activities, e.g. a grid site, the ALICE Analysis Facility (AAF), and possible usage for local projects or other LHC experiments. We present a cloud solution for Tier-3 sites based on OpenStack and Ceph distributed storage with an integrated XRootD based storage element (SE). One of the key features of the solution is that Ceph is used as a backend for the OpenStack Cinder Block Storage service and, at the same time, as a storage backend for XRootD, with redundancy and availability of data preserved by the Ceph settings. For faster and easier OpenStack deployment, the Packstack solution, which is based on the Puppet configuration management system, was applied. Ceph installation and configuration operations are structured and converted to Puppet manifests describing node configurations and integrated into Packstack. This solution can be easily deployed, maintained and used even in small groups with limited computing resources and in small organizations, which usually lack IT support. The proposed infrastructure has been tested on two different clouds (SPbSU & BITP) and integrates successfully with the ALICE data analysis model.

  5. Dynamic Collaboration Infrastructure for Hydrologic Science

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Idaszak, R.; Castillo, C.; Yi, H.; Jiang, F.; Jones, N.; Goodall, J. L.

    2016-12-01

    Data and modeling infrastructure is becoming increasingly accessible to water scientists. HydroShare is a collaborative environment that currently offers water scientists the ability to access modeling and data infrastructure in support of data intensive modeling and analysis. It supports the sharing of and collaboration around "resources" which are social objects defined to include both data and models in a structured standardized format. Users collaborate around these objects via comments, ratings, and groups. HydroShare also supports web services and cloud based computation for the execution of hydrologic models and analysis and visualization of hydrologic data. However, the quantity and variety of data and modeling infrastructure available that can be accessed from environments like HydroShare is increasing. Storage infrastructure can range from one's local PC to campus or organizational storage to storage in the cloud. Modeling or computing infrastructure can range from one's desktop to departmental clusters to national HPC resources to grid and cloud computing resources. How does one orchestrate this vast number of data and computing infrastructure without needing to correspondingly learn each new system? A common limitation across these systems is the lack of efficient integration between data transport mechanisms and the corresponding high-level services to support large distributed data and compute operations. A scientist running a hydrology model from their desktop may require processing a large collection of files across the aforementioned storage and compute resources and various national databases. To address these community challenges a proof-of-concept prototype was created integrating HydroShare with RADII (Resource Aware Data-centric collaboration Infrastructure) to provide software infrastructure to enable the comprehensive and rapid dynamic deployment of what we refer to as "collaborative infrastructure." In this presentation we discuss the results of this proof-of-concept prototype which enabled HydroShare users to readily instantiate virtual infrastructure marshaling arbitrary combinations, varieties, and quantities of distributed data and computing infrastructure in addressing big problems in hydrology.

  6. Evolving PSTN to NGN

    NASA Astrophysics Data System (ADS)

    Wu, Liang T.

    2004-04-01

    The concept of Next Generation Network (NGN) was conceived around 1998 as an integrated solution to combine the quality and features of the PSTN with the low cost and routing flexibility of the Internet to provide a single infrastructure for the future public network. This carrier grade Internet solution calls for the creation of a consolidated, packet transport and switching infrastructure and the development of a flexible, open, software switch (softswitch) to handle voice telephony as well as multimedia services. Almost all the telecom equipment manufacturers as well as some Internet equipment vendors immediately subscribed to this vision and joined the race to create convergent products for the NGN market.

  7. An Open Computing Infrastructure that Facilitates Integrated Product and Process Development from a Decision-Based Perspective

    NASA Technical Reports Server (NTRS)

    Hale, Mark A.

    1996-01-01

    Computer applications for design have evolved rapidly over the past several decades, and significant payoffs are being achieved by organizations through reductions in design cycle times. These applications are overwhelmed by the requirements imposed during complex, open engineering systems design. Organizations are faced with a number of different methodologies, numerous legacy disciplinary tools, and a very large amount of data. Yet they are also faced with few interdisciplinary tools for design collaboration or methods for achieving the revolutionary product designs required to maintain a competitive advantage in the future. These organizations are looking for a software infrastructure that integrates current corporate design practices with newer simulation and solution techniques. Such an infrastructure must be robust to changes in both corporate needs and enabling technologies. In addition, this infrastructure must be user-friendly, modular and scalable. This need is the motivation for the research described in this dissertation. The research is focused on the development of an open computing infrastructure that facilitates product and process design. In addition, this research explicitly deals with human interactions during design through a model that focuses on the role of a designer as that of decision-maker. The research perspective here is taken from that of design as a discipline with a focus on Decision-Based Design, Theory of Languages, Information Science, and Integration Technology. Given this background, a Model of IPPD is developed and implemented along the lines of a traditional experimental procedure: with the steps of establishing context, formalizing a theory, building an apparatus, conducting an experiment, reviewing results, and providing recommendations. Based on this Model, Design Processes and Specification can be explored in a structured and implementable architecture. An architecture for exploring design called DREAMS (Developing Robust Engineering Analysis Models and Specifications) has been developed which supports the activities of both meta-design and actual design execution. This is accomplished through a systematic process which is comprised of the stages of Formulation, Translation, and Evaluation. During this process, elements from a Design Specification are integrated into Design Processes. In addition, a software infrastructure was developed and is called IMAGE (Intelligent Multidisciplinary Aircraft Generation Environment). This represents a virtual apparatus in the Design Experiment conducted in this research. IMAGE is an innovative architecture because it explicitly supports design-related activities. This is accomplished through a GUI driven and Agent-based implementation of DREAMS. A HSCT design has been adopted from the Framework for Interdisciplinary Design Optimization (FIDO) and is implemented in IMAGE. This problem shows how Design Processes and Specification interact in a design system. In addition, the problem utilizes two different solution models concurrently: optimal and satisfying. The satisfying model allows for more design flexibility and allows a designer to maintain design freedom. As a result of following this experimental procedure, this infrastructure is an open system that it is robust to changes in both corporate needs and computer technologies. 
The development of this infrastructure leads to a number of significant intellectual contributions: 1) A new approach to implementing IPPD with the aid of a computer; 2) A formal Design Experiment; 3) A combined Process and Specification architecture that is language-based; 4) An infrastructure for exploring design; 5) An integration strategy for implementing computer resources; and 6) A seamless modeling language. The need for these contributions is emphasized by the demand by industry and government agencies for the development of these technologies.

  8. Better software, better research: the challenge of preserving your research and your reputation

    NASA Astrophysics Data System (ADS)

    Chue Hong, N.

    2017-12-01

    Software is fundamental to research. From short, thrown-together temporary scripts, through an abundance of complex spreadsheets analysing collected data, to the hundreds of software engineers and millions of lines of code behind international efforts such as the Large Hadron Collider and the Square Kilometre Array, software has made an invaluable contribution to advancing our research knowledge. Within the earth and space sciences, data is being generated, collected, processed and analysed in ever greater amounts and detail. However the pace of this improvement leads to challenges around the persistence of research outputs and artefacts. A specific challenge in this field is that often experiments and measurements cannot be repeated, yet the infrastructure used to manage, store and process this data must be continually updated and developed: constant change just to stay still. The UK-based Software Sustainability Institute (SSI) aims to improve research software sustainability, working with researchers, funders, research software engineers, managers, and other stakeholders across the research spectrum. In this talk, I will present lessons learned and good practice based on the work of the Institute and its collaborators. I will summarise some of the work that is being done to improve the integration of infrastructure for managing research outputs, including around software citation and reward, extending data management plans, and improving researcher skills: "better software, better research". Ultimately, being a modern researcher in the geosciences requires you to efficiently balance the pursuit of new knowledge with making your work reusable and reproducible. And as scientists are placed under greater scrutiny about whether others can trust their results, the preservation of your artefacts has a key role in the preservation of your reputation.

  9. Modular Infrastructure for Rapid Flight Software Development

    NASA Technical Reports Server (NTRS)

    Pires, Craig

    2010-01-01

    This slide presentation reviews the use of modular infrastructure to assist in the development of flight software. A feature of this program is the use of model based approach for application unique software. A review of two programs that this approach was use on are: the development of software for Hover Test Vehicle (HTV), and Lunar Atmosphere and Dust Environment Experiment (LADEE).

  10. The NCI High Performance Computing (HPC) and High Performance Data (HPD) Platform to Support the Analysis of Petascale Environmental Data Collections

    NASA Astrophysics Data System (ADS)

    Evans, B. J. K.; Pugh, T.; Wyborn, L. A.; Porter, D.; Allen, C.; Smillie, J.; Antony, J.; Trenham, C.; Evans, B. J.; Beckett, D.; Erwin, T.; King, E.; Hodge, J.; Woodcock, R.; Fraser, R.; Lescinsky, D. T.

    2014-12-01

    The National Computational Infrastructure (NCI) has co-located a priority set of national data assets within an HPC research platform. This powerful in-situ computational platform has been created to help serve and analyse the massive amounts of data across the spectrum of environmental collections - in particular the climate, observational data and geoscientific domains. This paper examines the infrastructure, innovation and opportunity for this significant research platform. NCI currently manages nationally significant data collections (10+ PB) categorised as 1) earth system sciences, climate and weather model data assets and products, 2) earth and marine observations and products, 3) geosciences, 4) terrestrial ecosystem, 5) water management and hydrology, and 6) astronomy, social science and biosciences. The data is largely sourced from the NCI partners (who include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. By co-locating these large valuable data assets, new opportunities have arisen by harmonising the data collections, making a powerful transdisciplinary research platform. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large-scale and high-bandwidth Lustre filesystems. New scientific software, cloud-scale techniques, server-side visualisation and data services have been harnessed and integrated into the platform, so that analysis is performed seamlessly across the traditional boundaries of the underlying data domains. Characterisation of the techniques along with performance profiling ensures scalability of each software component, all of which can either be enhanced or replaced through future improvements. A Development-to-Operations (DevOps) framework has also been implemented to manage the scale of the software complexity alone. This ensures that software is both upgradable and maintainable, and can be readily reused with complex, integrated systems and become part of the growing global trusted community tools for cross-disciplinary research.

  11. BioPortal: An Open-Source Community-Based Ontology Repository

    NASA Astrophysics Data System (ADS)

    Noy, N.; NCBO Team

    2011-12-01

    Advances in computing power and new computational techniques have changed the way researchers approach science. In many fields, one of the most fruitful approaches has been to use semantically aware software to break down the barriers among disparate domains, systems, data sources, and technologies. Such software facilitates data aggregation, improves search, and ultimately allows the detection of new associations that were previously not detectable. Achieving these analyses requires software systems that take advantage of the semantics and that can intelligently negotiate domains and knowledge sources, identifying commonality across systems that use different and conflicting vocabularies, while understanding apparent differences that may be concealed by the use of superficially similar terms. An ontology, a semantically rich vocabulary for a domain of interest, is the cornerstone of software for bridging systems, domains, and resources. However, as ontologies become the foundation of all semantic technologies in e-science, we must develop an infrastructure for sharing ontologies, finding and evaluating them, integrating and mapping among them, and using ontologies in applications that help scientists process their data. BioPortal [1] is an open-source on-line community-based ontology repository that has been used as a critical component of semantic infrastructure in several domains, including biomedicine and bio-geochemical data. BioPortal uses social approaches in the Web 2.0 style to bring structure and order to the collection of biomedical ontologies. It enables users to provide and discuss a wide array of knowledge components, from submitting the ontologies themselves, to commenting on and discussing classes in the ontologies, to reviewing ontologies in the context of their own ontology-based projects, to creating mappings between overlapping ontologies and discussing and critiquing the mappings. Critically, it provides web-service access to all its content, enabling its integration in semantically enriched applications. [1] Noy, N.F., Shah, N.H., et al., BioPortal: ontologies and integrated data resources at the click of a mouse. Nucleic Acids Res, 2009. 37(Web Server issue): p. W170-3.
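
    The web-service access mentioned above can be illustrated with a short Python sketch that searches the BioPortal REST service for ontology classes matching a term. The endpoint and response fields follow the publicly documented BioPortal REST API as the author understands it and should be treated as illustrative; the API key is a placeholder obtained from a BioPortal account.

        # Illustrative sketch: search BioPortal's REST service for matching classes.
        import requests

        API_KEY = "YOUR_BIOPORTAL_API_KEY"   # placeholder credential

        resp = requests.get(
            "https://data.bioontology.org/search",
            params={"q": "melanoma", "apikey": API_KEY},
            timeout=30,
        )
        resp.raise_for_status()
        for cls in resp.json().get("collection", [])[:5]:
            print(cls.get("prefLabel"), cls.get("@id"))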

  12. Sustainable access to data, products, services and software from the European seismological Research Infrastructures: the EPOS TCS Seismology

    NASA Astrophysics Data System (ADS)

    Haslinger, Florian; Dupont, Aurelien; Michelini, Alberto; Rietbrock, Andreas; Sleeman, Reinoud; Wiemer, Stefan; Basili, Roberto; Bossu, Rémy; Cakti, Eser; Cotton, Fabrice; Crawford, Wayne; Diaz, Jordi; Garth, Tom; Locati, Mario; Luzi, Lucia; Pinho, Rui; Pitilakis, Kyriazis; Strollo, Angelo

    2016-04-01

    Easy, efficient and comprehensive access to data, data products, scientific services and scientific software is a key ingredient in enabling research at the frontiers of science. Organizing this access across the European Research Infrastructures in the field of seismology, so that it best serves user needs, takes advantage of state-of-the-art ICT solutions, provides cross-domain interoperability, and is organizationally and financially sustainable in the long term, is the core challenge of the implementation phase of the Thematic Core Service (TCS) Seismology within the EPOS-IP project. Building upon the existing European-level infrastructures ORFEUS for seismological waveforms, EMSC for seismological products, and EFEHR for seismological hazard and risk information, and implementing a pilot Computational Earth Science service starting from the results of the VERCE project, the work within the EPOS-IP project focuses on improving and extending the existing services and aligning them with global developments, to ultimately produce a well-coordinated framework that is technically, organizationally, and financially integrated with the EPOS architecture. This framework needs to respect the roles and responsibilities of the underlying national research infrastructures that are the data owners and main providers of data and products, and allow for active input and feedback from the (scientific) user community. At the same time, it needs to remain flexible enough to cope with unavoidable challenges in the availability of resources and dynamics of contributors. The technical work during the next years is organized in four areas: - constructing the next generation software architecture for the European Integrated (waveform) Data Archive EIDA, developing advanced metadata and station information services, fully integrating strong motion waveforms and derived parametric engineering-domain data, and advancing the integration of mobile (temporary) networks and OBS deployments in EIDA; - further development and expansion of services to access seismological products of scientific interest as provided by the community, by implementing a common collection and development (IT) platform, improvements in the earthquake information services, e.g. by introducing more robust quality indicators and diversifying collection and dissemination mechanisms, as well as improving historical earthquake data services; - development of a comprehensive suite of earthquake hazard products, tools, and services harmonized on the European level and available through a common access platform, encompassing information on seismic sources, seismogenic faults, ground-motion prediction equations, geotechnical information, and strong-motion recordings in buildings, together with an interface to earthquake risk; - a portal implementation of computational seismology tools and services, specifically for seismic waveform propagation in complex 3D media following the results of the VERCE project, and initiating the inclusion of further suitable codes on that portal in discussion with the community, forming the basis of the EPOS computational earth science infrastructure. This will be accompanied by the development and implementation of integrated and interoperable metadata structures, adequate and referenceable persistent identifiers, and appropriate user access and authorization mechanisms.
Here we present further detail on the work plan, in an attempt to foster interaction with the target user community on the spectrum of services as well as on feedback mechanisms and governance.

  13. Realizing software longevity over a system's lifetime

    NASA Astrophysics Data System (ADS)

    Lanclos, Kyle; Deich, William T. S.; Kibrick, Robert I.; Allen, Steven L.; Gates, John

    2010-07-01

    A successful instrument or telescope will measure its productive lifetime in decades; over that period, the technology behind the control hardware and software will evolve, and be replaced on a per-component basis. These new components must successfully integrate with the old, and the difficulty of that integration depends strongly on the design decisions made over the course of the facility's history. The same decisions impact the ultimate success of each upgrade, as measured in terms of observing efficiency and maintenance cost. We offer a case study of these critical design decisions, analyzing the layers of software deployed for instruments under the care of UCO/Lick Observatory, including recent upgrades to the Low Resolution Imaging Spectrometer (LRIS) at Keck Observatory in Hawaii, as well as the Kast spectrograph, Lick Adaptive Optics system, and Hamilton spectrograph, all at Lick Observatory's Shane 3-meter Telescope at Mt. Hamilton. These issues play directly into design considerations for the software intended for use at the next generation of telescopes, such as the Thirty Meter Telescope. We conduct our analysis with the future of observational astronomy infrastructure firmly in mind.

  14. Software Engineering Infrastructure in a Large Virtual Campus

    ERIC Educational Resources Information Center

    Cristobal, Jesus; Merino, Jorge; Navarro, Antonio; Peralta, Miguel; Roldan, Yolanda; Silveira, Rosa Maria

    2011-01-01

    Purpose: The design, construction and deployment of a large virtual campus are a complex issue. Present virtual campuses are made of several software applications that complement e-learning platforms. In order to develop and maintain such virtual campuses, a complex software engineering infrastructure is needed. This paper aims to analyse the…

  15. High-throughput neuroimaging-genetics computational infrastructure

    PubMed Central

    Dinov, Ivo D.; Petrosyan, Petros; Liu, Zhizhong; Eggert, Paul; Hobel, Sam; Vespa, Paul; Woo Moon, Seok; Van Horn, John D.; Franco, Joseph; Toga, Arthur W.

    2014-01-01

    Many contemporary neuroscientific investigations face significant challenges in terms of data management, computational processing, data mining, and results interpretation. These four pillars define the core infrastructure necessary to plan, organize, orchestrate, validate, and disseminate novel scientific methods, computational resources, and translational healthcare findings. Data management includes protocols for data acquisition, archival, query, transfer, retrieval, and aggregation. Computational processing involves the necessary software, hardware, and networking infrastructure required to handle large amounts of heterogeneous neuroimaging, genetics, clinical, and phenotypic data and meta-data. Data mining refers to the process of automatically extracting data features, characteristics and associations, which are not readily visible by human exploration of the raw dataset. Results interpretation includes scientific visualization, community validation of findings, and reproducibility of findings. In this manuscript we describe the novel high-throughput neuroimaging-genetics computational infrastructure available at the Institute for Neuroimaging and Informatics (INI) and the Laboratory of Neuro Imaging (LONI) at the University of Southern California (USC). INI and LONI include ultra-high-field and standard-field MRI brain scanners along with an imaging-genetics database for storing the complete provenance of the raw and derived data and meta-data. In addition, the institute provides a large number of software tools for image and shape analysis, mathematical modeling, genomic sequence processing, and scientific visualization. A unique feature of this architecture is the Pipeline environment, which integrates the data management, processing, transfer, and visualization. Through its client-server architecture, the Pipeline environment provides a graphical user interface for designing, executing, monitoring, validating, and disseminating complex protocols that utilize diverse suites of software tools and web-services. These pipeline workflows are represented as portable XML objects which transfer the execution instructions and user specifications from the client user machine to remote pipeline servers for distributed computing. Using Alzheimer's and Parkinson's data, we provide several examples of translational applications using this infrastructure. PMID:24795619
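
    The abstract describes Pipeline workflows as portable XML objects. The sketch below is purely illustrative and uses hypothetical element and attribute names, not the actual LONI Pipeline schema, to show how such a workflow description might be parsed into modules and dependencies.

```python
# Illustrative only: hypothetical workflow XML, not the LONI Pipeline schema.
import xml.etree.ElementTree as ET

workflow_xml = """
<workflow name="demo">
  <module id="skullstrip" executable="bet"  host="remote-server-1"/>
  <module id="segment"    executable="fast" host="remote-server-2"/>
  <link from="skullstrip" to="segment"/>
</workflow>
"""

root = ET.fromstring(workflow_xml)
for module in root.findall("module"):
    # Each module names an executable and the remote server it should run on.
    print(module.get("id"), "->", module.get("executable"), "on", module.get("host"))
for link in root.findall("link"):
    # Links encode execution order between modules.
    print("dependency:", link.get("from"), "precedes", link.get("to"))
```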

  16. Unidata Cyberinfrastructure in the Cloud

    NASA Astrophysics Data System (ADS)

    Ramamurthy, M. K.; Young, J. W.

    2016-12-01

    Data services, software, and user support are critical components of geosciences cyber-infrastructure that help researchers advance science. With the maturity of and significant advances in cloud computing, it has recently emerged as an alternative new paradigm for developing and delivering a broad array of services over the Internet. Cloud computing is now mature enough in usability in many areas of science and education, bringing the benefits of virtualized and elastic remote services to infrastructure, software, computation, and data. Cloud environments reduce the amount of time and money spent to procure, install, and maintain new hardware and software, and reduce costs through resource pooling and shared infrastructure. Given the enormous potential of cloud-based services, Unidata has been moving to augment its software, services, and data delivery mechanisms to align with the cloud-computing paradigm. To realize the above vision, Unidata has worked toward: * Providing access to many types of data from a cloud (e.g., via the THREDDS Data Server, RAMADDA and EDEX servers); * Deploying data-proximate tools to easily process, analyze, and visualize those data in a cloud environment for consumption by anyone, on any device, from anywhere, at any time; * Developing and providing a range of pre-configured and well-integrated tools and services that can be deployed by any university in their own private or public cloud settings. Specifically, Unidata has adopted Docker for "containerized applications", making them easy to deploy. Docker helps to create "disposable" installs and eliminates many configuration challenges. Containerized applications include tools for data transport, access, analysis, and visualization: THREDDS Data Server, Integrated Data Viewer, GEMPAK, Local Data Manager, RAMADDA Data Server, and Python tools; * Leveraging Jupyter as a central platform and hub with its powerful set of interlinking tools to connect interactively data servers, Python scientific libraries, scripts, and workflows; * Exploring end-to-end modeling and prediction capabilities in the cloud; * Partnering with NOAA and public cloud vendors (e.g., Amazon and OCC) on the NOAA Big Data Project to harness their capabilities and resources for the benefit of the academic community.
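
    A minimal sketch of interacting with one of the containerized services named above (a THREDDS Data Server) using Unidata's Siphon package; the catalog URL is an example and may not match a live server's current layout.

```python
# Hedged sketch: browse a THREDDS catalog with Siphon; the URL is an example only.
from siphon.catalog import TDSCatalog

cat = TDSCatalog(
    "https://thredds.ucar.edu/thredds/catalog/grib/NCEP/GFS/"
    "Global_0p25deg/latest.xml"
)
print(list(cat.datasets))        # names of datasets advertised by the catalog
ds = cat.datasets[0]             # pick the first dataset
print(ds.access_urls)            # access protocols (OPeNDAP, HTTPServer, ...) it offers
```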

  17. SparkMed: a framework for dynamic integration of multimedia medical data into distributed m-Health systems.

    PubMed

    Constantinescu, Liviu; Kim, Jinman; Feng, David Dagan

    2012-01-01

    With the advent of 4G and other long-term evolution (LTE) wireless networks, the traditional boundaries of patient record propagation are diminishing as networking technologies extend the reach of hospital infrastructure and provide on-demand mobile access to medical multimedia data. However, due to legacy and proprietary software, storage and decommissioning costs, and the price of centralization and redevelopment, it remains complex, expensive, and often unfeasible for hospitals to deploy their infrastructure for online and mobile use. This paper proposes the SparkMed data integration framework for mobile healthcare (m-Health), which significantly benefits from the enhanced network capabilities of LTE wireless technologies, by enabling a wide range of heterogeneous medical software and database systems (such as the picture archiving and communication systems, hospital information system, and reporting systems) to be dynamically integrated into a cloud-like peer-to-peer multimedia data store. Our framework allows medical data applications to share data with mobile hosts over a wireless network (such as WiFi and 3G), by binding to existing software systems and deploying them as m-Health applications. SparkMed integrates techniques from multimedia streaming, rich Internet applications (RIA), and remote procedure call (RPC) frameworks to construct a Self-managing, Pervasive Automated netwoRK for Medical Enterprise Data (SparkMed). Further, it is resilient to failure, and able to use mobile and handheld devices to maintain its network, even in the absence of dedicated server devices. We have developed a prototype of the SparkMed framework for evaluation on a radiological workflow simulation, which uses SparkMed to deploy a radiological image viewer as an m-Health application for telemedical use by radiologists and stakeholders. We have evaluated our prototype using ten devices over WiFi and 3G, verifying that our framework meets its two main objectives: 1) interactive delivery of medical multimedia data to mobile devices; and 2) attaching to non-networked medical software processes without significantly impacting their performance. Consistent response times of under 500 ms and graphical frame rates of over 5 frames per second were observed under intended usage conditions. Further, overhead measurements displayed linear scalability and low resource requirements.

  18. Next generation simulation tools: the Systems Biology Workbench and BioSPICE integration.

    PubMed

    Sauro, Herbert M; Hucka, Michael; Finney, Andrew; Wellock, Cameron; Bolouri, Hamid; Doyle, John; Kitano, Hiroaki

    2003-01-01

    Researchers in quantitative systems biology make use of a large number of different software packages for modelling, analysis, visualization, and general data manipulation. In this paper, we describe the Systems Biology Workbench (SBW), a software framework that allows heterogeneous application components--written in diverse programming languages and running on different platforms--to communicate and use each other's capabilities via a fast binary encoded-message system. Our goal was to create a simple, high performance, open-source software infrastructure which is easy to implement and understand. SBW enables applications (potentially running on separate, distributed computers) to communicate via a simple network protocol. The interfaces to the system are encapsulated in client-side libraries that we provide for different programming languages. We describe in this paper the SBW architecture, a selection of current modules, including Jarnac, JDesigner, and SBWMeta-tool, and the close integration of SBW into BioSPICE, which enables both frameworks to share tools and complement and strengthen each other's capabilities.
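
    The sketch below is not SBW's actual binary protocol or client library; it is a deliberately simplified, in-process analog of the broker idea the abstract describes: components register named services with a broker, and other components invoke them by name.

```python
# Simplified in-process analog of a service broker; SBW's real implementation
# exchanges binary-encoded messages over a network socket.
class Broker:
    def __init__(self):
        self._services = {}

    def register(self, module, service, func):
        """A module announces a callable service to the broker."""
        self._services[(module, service)] = func

    def call(self, module, service, *args):
        """Another component invokes the registered service by name."""
        return self._services[(module, service)](*args)


broker = Broker()
# A hypothetical "simulator" module exposes a trivial service for illustration.
broker.register("simulator", "steady_state", lambda rates: min(rates))
print(broker.call("simulator", "steady_state", [0.4, 1.2, 0.9]))
```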

  19. OpenSoC Fabric

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-08-21

    Recent advancements in technology scaling have shown a trend towards greater integration with large-scale chips containing thousands of processors connected to memories and other I/O devices using non-trivial network topologies. Software simulation proves insufficient to study the tradeoffs in such complex systems due to slow execution time, whereas hardware RTL development is too time-consuming. We present OpenSoC Fabric, an on-chip network generation infrastructure which aims to provide a parameterizable and powerful on-chip network generator for evaluating future high performance computing architectures based on SoC technology. OpenSoC Fabric leverages a new hardware DSL, Chisel, which contains powerful abstractions provided by its base language, Scala, and generates both software (C++) and hardware (Verilog) models from a single code base. The OpenSoC Fabric infrastructure is modeled after existing state-of-the-art simulators, offers large and powerful collections of configuration options, and follows object-oriented design and functional programming to make functionality extension as easy as possible.

  20. Data Mining as a Service (DMaaS)

    NASA Astrophysics Data System (ADS)

    Tejedor, E.; Piparo, D.; Mascetti, L.; Moscicki, J.; Lamanna, M.; Mato, P.

    2016-10-01

    Data Mining as a Service (DMaaS) is a software and computing infrastructure that allows interactive mining of scientific data in the cloud. It allows users to run advanced data analyses by leveraging the widely adopted Jupyter notebook interface. Furthermore, the system makes it easier to share results and scientific code, access scientific software, produce tutorials and demonstrations as well as preserve the analyses of scientists. This paper describes how a first pilot of the DMaaS service is being deployed at CERN, starting from the notebook interface that has been fully integrated with the ROOT analysis framework, in order to provide all the tools for scientists to run their analyses. Additionally, we characterise the service backend, which combines a set of IT services such as user authentication, virtual computing infrastructure, mass storage, file synchronisation, development portals or batch systems. The added value acquired by the combination of the aforementioned categories of services is discussed, focusing on the opportunities offered by the CERNBox synchronisation service and its massive storage backend, EOS.
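
    A minimal sketch of the kind of analysis such a notebook service might run, assuming a Python kernel with PyROOT available; the histogram example is generic and not taken from the paper.

```python
# Hedged sketch of a ROOT-based notebook analysis (assumes PyROOT is installed).
import ROOT

h = ROOT.TH1F("h", "Gaussian sample;x;entries", 100, -5, 5)
h.FillRandom("gaus", 10000)          # fill with 10k samples from a unit Gaussian
canvas = ROOT.TCanvas("c", "c", 800, 600)
h.Draw()
canvas.SaveAs("gaussian.png")        # in a notebook, the plot could render inline instead
print("mean =", h.GetMean(), "rms =", h.GetStdDev())
```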

  1. Are Earth System model software engineering practices fit for purpose? A case study.

    NASA Astrophysics Data System (ADS)

    Easterbrook, S. M.; Johns, T. C.

    2009-04-01

    We present some analysis and conclusions from a case study of the culture and practices of scientists at the Met Office and Hadley Centre working on the development of software for climate and Earth System models using the MetUM infrastructure. The study examined how scientists think about software correctness, prioritize their requirements in making changes, and develop a shared understanding of the resulting models. We conclude that highly customized techniques driven strongly by scientific research goals have evolved for verification and validation of such models. In a formal software engineering context these represent costly, but invaluable, software integration tests with considerable benefits. The software engineering practices seen also exhibit recognisable features of both agile and open source software development projects - self-organisation of teams consistent with a meritocracy rather than top-down organisation, extensive use of informal communication channels, and software developers who are generally also users and science domain experts. We draw some general conclusions on whether these practices work well, and what new software engineering challenges may lie ahead as Earth System models become ever more complex and petascale computing becomes the norm.

  2. Clinical results of HIS, RIS, PACS integration using data integration CASE tools

    NASA Astrophysics Data System (ADS)

    Taira, Ricky K.; Chan, Hing-Ming; Breant, Claudine M.; Huang, Lu J.; Valentino, Daniel J.

    1995-05-01

    Current infrastructure research in PACS is dominated by the development of communication networks (local area networks, teleradiology, ATM networks, etc.), multimedia display workstations, and hierarchical image storage architectures. However, limited work has been performed on developing flexible, expansible, and intelligent information processing architectures for the vast decentralized image and text data repositories prevalent in healthcare environments. Patient information is often distributed among multiple data management systems. Current large-scale efforts to integrate medical information and knowledge sources have been costly with limited retrieval functionality. Software integration strategies to unify distributed data and knowledge sources are still lacking commercially. Systems heterogeneity (i.e., differences in hardware platforms, communication protocols, database management software, nomenclature, etc.) is at the heart of the problem and is unlikely to be standardized in the near future. In this paper, we demonstrate the use of newly available CASE (computer-aided software engineering) tools to rapidly integrate HIS, RIS, and PACS information systems. The advantages of these tools include fast development time (low-level code is generated from graphical specifications), and easy system maintenance (excellent documentation, easy to perform changes, and centralized code repository in an object-oriented database). The CASE tools are used to develop and manage the 'middleware' in our client-mediator-server architecture for systems integration. Our architecture is scalable and can accommodate heterogeneous database and communication protocols.

  3. Software-Reconfigurable Processors for Spacecraft

    NASA Technical Reports Server (NTRS)

    Farrington, Allen; Gray, Andrew; Bell, Bryan; Stanton, Valerie; Chong, Yong; Peters, Kenneth; Lee, Clement; Srinivasan, Jeffrey

    2005-01-01

    A report presents an overview of an architecture for a software-reconfigurable network data processor for a spacecraft engaged in scientific exploration. When executed on suitable electronic hardware, the software performs the functions of a physical layer (in effect, acts as a software radio in that it performs modulation, demodulation, pulse-shaping, error correction, coding, and decoding), a data-link layer, a network layer, a transport layer, and application-layer processing of scientific data. The software-reconfigurable network processor is undergoing development to enable rapid prototyping and rapid implementation of communication, navigation, and scientific signal-processing functions; to provide a long-lived communication infrastructure; and to provide greatly improved scientific-instrumentation and scientific-data-processing functions by enabling science-driven in-flight reconfiguration of computing resources devoted to these functions. This development is an extension of terrestrial radio and network developments (e.g., in the cellular-telephone industry) implemented in software running on such hardware as field-programmable gate arrays, digital signal processors, traditional digital circuits, and mixed-signal application-specific integrated circuits (ASICs).

  4. Crowd-Sourced Help with Emergent Knowledge for Optimized Formal Verification (CHEKOFV)

    DTIC Science & Technology

    2016-03-01

    up game Binary Fission, which was deployed during Phase Two of CHEKOFV. Xylem: The Code of Plants is a casual game for players using mobile ...there are the design and engineering challenges of building a game infrastructure that integrates verification technology with crowd participation...the backend processes that annotate the originating software. Allowing players to construct their own equations opened up the flexibility to receive

  5. AstroCloud, a Cyber-Infrastructure for Astronomy Research: Data Access and Interoperability

    NASA Astrophysics Data System (ADS)

    Fan, D.; He, B.; Xiao, J.; Li, S.; Li, C.; Cui, C.; Yu, C.; Hong, Z.; Yin, S.; Wang, C.; Cao, Z.; Fan, Y.; Mi, L.; Wan, W.; Wang, J.

    2015-09-01

    The data access and interoperability module connects observation proposals, data, virtual machines and software. Using the unique identifier of the PI (principal investigator), an email address, or an internal ID, data can be collected by the PI's proposals or through the search interfaces, e.g. cone search. Files associated with the search results can be easily transferred to cloud storage, including the storage attached to virtual machines, or to several commercial platforms like Dropbox. Benefiting from the standards of the IVOA (International Virtual Observatory Alliance), VOTable-formatted search results can be sent to various kinds of VO software. Future work will aim to integrate more data and to connect archives and other astronomical resources.

  6. Secured Advanced Federated Environment (SAFE): A NASA Solution for Secure Cross-Organization Collaboration

    NASA Technical Reports Server (NTRS)

    Chow, Edward; Spence, Matthew Chew; Pell, Barney; Stewart, Helen; Korsmeyer, David; Liu, Joseph; Chang, Hsin-Ping; Viernes, Conan; Gogorth, Andre

    2003-01-01

    This paper discusses the challenges and security issues inherent in building complex cross-organizational collaborative projects and software systems within NASA. By applying the design principles of compartmentalization, organizational hierarchy and inter-organizational federation, the Secured Advanced Federated Environment (SAFE) is laying the foundation for a collaborative virtual infrastructure for the NASA community. A key element of SAFE is the Micro Security Domain (MSD) concept, which balances the need to collaborate and the need to enforce enterprise and local security rules. With the SAFE approach, security is an integral component of enterprise software and network design, not an afterthought.

  7. Atlas - a data warehouse for integrative bioinformatics.

    PubMed

    Shah, Sohrab P; Huang, Yong; Xu, Tao; Yuen, Macaire M S; Ling, John; Ouellette, B F Francis

    2005-02-21

    We present a biological data warehouse called Atlas that locally stores and integrates biological sequences, molecular interactions, homology information, functional annotations of genes, and biological ontologies. The goal of the system is to provide data, as well as a software infrastructure for bioinformatics research and development. The Atlas system is based on relational data models that we developed for each of the source data types. Data stored within these relational models are managed through Structured Query Language (SQL) calls that are implemented in a set of Application Programming Interfaces (APIs). The APIs include three languages: C++, Java, and Perl. The methods in these API libraries are used to construct a set of loader applications, which parse and load the source datasets into the Atlas database, and a set of toolbox applications which facilitate data retrieval. Atlas stores and integrates local instances of GenBank, RefSeq, UniProt, Human Protein Reference Database (HPRD), Biomolecular Interaction Network Database (BIND), Database of Interacting Proteins (DIP), Molecular Interactions Database (MINT), IntAct, NCBI Taxonomy, Gene Ontology (GO), Online Mendelian Inheritance in Man (OMIM), LocusLink, Entrez Gene and HomoloGene. The retrieval APIs and toolbox applications are critical components that offer end-users flexible, easy, integrated access to this data. We present use cases that use Atlas to integrate these sources for genome annotation, inference of molecular interactions across species, and gene-disease associations. The Atlas biological data warehouse serves as data infrastructure for bioinformatics research and development. It forms the backbone of the research activities in our laboratory and facilitates the integration of disparate, heterogeneous biological sources of data enabling new scientific inferences. Atlas achieves integration of diverse data sets at two levels. First, Atlas stores data of similar types using common data models, enforcing the relationships between data types. Second, integration is achieved through a combination of APIs, ontology, and tools. The Atlas software is freely available under the GNU General Public License at: http://bioinformatics.ubc.ca/atlas/

  8. Atlas – a data warehouse for integrative bioinformatics

    PubMed Central

    Shah, Sohrab P; Huang, Yong; Xu, Tao; Yuen, Macaire MS; Ling, John; Ouellette, BF Francis

    2005-01-01

    Background We present a biological data warehouse called Atlas that locally stores and integrates biological sequences, molecular interactions, homology information, functional annotations of genes, and biological ontologies. The goal of the system is to provide data, as well as a software infrastructure for bioinformatics research and development. Description The Atlas system is based on relational data models that we developed for each of the source data types. Data stored within these relational models are managed through Structured Query Language (SQL) calls that are implemented in a set of Application Programming Interfaces (APIs). The APIs include three languages: C++, Java, and Perl. The methods in these API libraries are used to construct a set of loader applications, which parse and load the source datasets into the Atlas database, and a set of toolbox applications which facilitate data retrieval. Atlas stores and integrates local instances of GenBank, RefSeq, UniProt, Human Protein Reference Database (HPRD), Biomolecular Interaction Network Database (BIND), Database of Interacting Proteins (DIP), Molecular Interactions Database (MINT), IntAct, NCBI Taxonomy, Gene Ontology (GO), Online Mendelian Inheritance in Man (OMIM), LocusLink, Entrez Gene and HomoloGene. The retrieval APIs and toolbox applications are critical components that offer end-users flexible, easy, integrated access to this data. We present use cases that use Atlas to integrate these sources for genome annotation, inference of molecular interactions across species, and gene-disease associations. Conclusion The Atlas biological data warehouse serves as data infrastructure for bioinformatics research and development. It forms the backbone of the research activities in our laboratory and facilitates the integration of disparate, heterogeneous biological sources of data enabling new scientific inferences. Atlas achieves integration of diverse data sets at two levels. First, Atlas stores data of similar types using common data models, enforcing the relationships between data types. Second, integration is achieved through a combination of APIs, ontology, and tools. The Atlas software is freely available under the GNU General Public License at: PMID:15723693
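
    Atlas ships C++, Java, and Perl APIs rather than Python; the sketch below is a hypothetical analog of the loader/toolbox pattern both records describe, using SQLite only to make the idea concrete.

```python
# Hypothetical analog of Atlas's loader/toolbox pattern, not its actual API:
# a loader parses a source record into a relational model, and a toolbox call
# retrieves it through a thin query wrapper.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sequence (accession TEXT PRIMARY KEY, organism TEXT, seq TEXT)")

def load_record(accession, organism, seq):
    """Loader: insert one parsed source record into the warehouse."""
    conn.execute("INSERT INTO sequence VALUES (?, ?, ?)", (accession, organism, seq))

def get_sequences_by_organism(organism):
    """Toolbox: retrieval API wrapping a SQL query."""
    cur = conn.execute("SELECT accession, seq FROM sequence WHERE organism = ?", (organism,))
    return cur.fetchall()

load_record("NM_000546", "Homo sapiens", "ATGGAGGAGCCGCAGTCAGAT")
print(get_sequences_by_organism("Homo sapiens"))
```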

  9. ibex: An open infrastructure software platform to facilitate collaborative work in radiomics

    PubMed Central

    Zhang, Lifei; Fried, David V.; Fave, Xenia J.; Hunter, Luke A.; Court, Laurence E.

    2015-01-01

    Purpose: Radiomics, which is the high-throughput extraction and analysis of quantitative image features, has been shown to have considerable potential to quantify the tumor phenotype. However, at present, a lack of software infrastructure has impeded the development of radiomics and its applications. Therefore, the authors developed the imaging biomarker explorer (IBEX), an open infrastructure software platform that flexibly supports common radiomics workflow tasks such as multimodality image data import and review, development of feature extraction algorithms, model validation, and consistent data sharing among multiple institutions. Methods: The IBEX software package was developed using the MATLAB and C/C++ programming languages. The software architecture deploys the modern model-view-controller, unit testing, and function handle programming concepts to isolate each quantitative imaging analysis task, to validate if their relevant data and algorithms are fit for use, and to plug in new modules. On one hand, IBEX is self-contained and ready to use: it has implemented common data importers, common image filters, and common feature extraction algorithms. On the other hand, IBEX provides an integrated development environment on top of MATLAB and C/C++, so users are not limited to its built-in functions. In the IBEX developer studio, users can plug in, debug, and test new algorithms, extending IBEX's functionality. IBEX also supports quality assurance for data and feature algorithms: image data, regions of interest, and feature algorithm-related data can be reviewed, validated, and/or modified. More importantly, two key elements in collaborative workflows, the consistency of data sharing and the reproducibility of calculation result, are embedded in the IBEX workflow: image data, feature algorithms, and model validation including newly developed ones from different users can be easily and consistently shared so that results can be more easily reproduced between institutions. Results: Researchers with a variety of technical skill levels, including radiation oncologists, physicists, and computer scientists, have found the IBEX software to be intuitive, powerful, and easy to use. IBEX can be run on any computer with the Windows operating system and 1 GB RAM. The authors fully validated the implementation of all importers, preprocessing algorithms, and feature extraction algorithms. Windows version 1.0 beta of stand-alone IBEX and IBEX's source code can be downloaded. Conclusions: The authors successfully implemented IBEX, an open infrastructure software platform that streamlines common radiomics workflow tasks. Its transparency, flexibility, and portability can greatly accelerate the pace of radiomics research and pave the way toward successful clinical translation. PMID:25735289

  10. IBEX: an open infrastructure software platform to facilitate collaborative work in radiomics.

    PubMed

    Zhang, Lifei; Fried, David V; Fave, Xenia J; Hunter, Luke A; Yang, Jinzhong; Court, Laurence E

    2015-03-01

    Radiomics, which is the high-throughput extraction and analysis of quantitative image features, has been shown to have considerable potential to quantify the tumor phenotype. However, at present, a lack of software infrastructure has impeded the development of radiomics and its applications. Therefore, the authors developed the imaging biomarker explorer (IBEX), an open infrastructure software platform that flexibly supports common radiomics workflow tasks such as multimodality image data import and review, development of feature extraction algorithms, model validation, and consistent data sharing among multiple institutions. The IBEX software package was developed using the MATLAB and C/C++ programming languages. The software architecture deploys the modern model-view-controller, unit testing, and function handle programming concepts to isolate each quantitative imaging analysis task, to validate if their relevant data and algorithms are fit for use, and to plug in new modules. On one hand, IBEX is self-contained and ready to use: it has implemented common data importers, common image filters, and common feature extraction algorithms. On the other hand, IBEX provides an integrated development environment on top of MATLAB and C/C++, so users are not limited to its built-in functions. In the IBEX developer studio, users can plug in, debug, and test new algorithms, extending IBEX's functionality. IBEX also supports quality assurance for data and feature algorithms: image data, regions of interest, and feature algorithm-related data can be reviewed, validated, and/or modified. More importantly, two key elements in collaborative workflows, the consistency of data sharing and the reproducibility of calculation result, are embedded in the IBEX workflow: image data, feature algorithms, and model validation including newly developed ones from different users can be easily and consistently shared so that results can be more easily reproduced between institutions. Researchers with a variety of technical skill levels, including radiation oncologists, physicists, and computer scientists, have found the IBEX software to be intuitive, powerful, and easy to use. IBEX can be run on any computer with the Windows operating system and 1 GB RAM. The authors fully validated the implementation of all importers, preprocessing algorithms, and feature extraction algorithms. Windows version 1.0 beta of stand-alone IBEX and IBEX's source code can be downloaded. The authors successfully implemented IBEX, an open infrastructure software platform that streamlines common radiomics workflow tasks. Its transparency, flexibility, and portability can greatly accelerate the pace of radiomics research and pave the way toward successful clinical translation.
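
    The sketch below is not IBEX's API (IBEX is a MATLAB/C++ environment); it is an illustrative stand-in showing the flavor of a radiomics feature-extraction step: computing simple first-order features over a region of interest.

```python
# Illustrative radiomics-style feature extraction over a stand-in image and ROI;
# this is not IBEX code, just the general pattern such tools automate.
import numpy as np

rng = np.random.default_rng(0)
image = rng.normal(100, 20, size=(64, 64))        # stand-in for a CT slice
roi_mask = np.zeros_like(image, dtype=bool)
roi_mask[20:40, 20:40] = True                     # stand-in tumor ROI

voxels = image[roi_mask]
features = {
    "mean": float(voxels.mean()),
    "std": float(voxels.std()),
    "skewness": float(((voxels - voxels.mean()) ** 3).mean() / voxels.std() ** 3),
    "energy": float((voxels ** 2).sum()),
}
print(features)
```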

  11. Software and hardware infrastructure for research in electrophysiology

    PubMed Central

    Mouček, Roman; Ježek, Petr; Vařeka, Lukáš; Řondík, Tomáš; Brůha, Petr; Papež, Václav; Mautner, Pavel; Novotný, Jiří; Prokop, Tomáš; Štěbeták, Jan

    2014-01-01

    As in other areas of experimental science, the operation of an electrophysiological laboratory, the design and performance of electrophysiological experiments, the collection, storage and sharing of experimental data and metadata, the analysis and interpretation of these data, and the publication of results are time consuming activities. If these activities are well organized and supported by a suitable infrastructure, the work efficiency of researchers increases significantly. This article deals with the main concepts, design, and development of software and hardware infrastructure for research in electrophysiology. The described infrastructure has been primarily developed for the needs of the neuroinformatics laboratory at the University of West Bohemia, the Czech Republic. However, from the beginning it has also been designed and developed to be open and applicable in laboratories that do similar research. After introducing the laboratory and the whole architectural concept, the individual parts of the infrastructure are described. The central element of the software infrastructure is a web-based portal that enables community researchers to store, share, download and search data and metadata from electrophysiological experiments. The data model, domain ontology and usage of semantic web languages and technologies are described. The current data publication policy used in the portal is briefly introduced. The registration of the portal within the Neuroscience Information Framework is described. Then the methods used for processing of electrophysiological signals are presented. The specific modifications of these methods introduced by laboratory researchers are summarized; the methods are organized into a laboratory workflow. Other parts of the software infrastructure include mobile and offline solutions for data/metadata storing and a hardware stimulator communicating with an EEG amplifier and recording software. PMID:24639646

  12. Software and hardware infrastructure for research in electrophysiology.

    PubMed

    Mouček, Roman; Ježek, Petr; Vařeka, Lukáš; Rondík, Tomáš; Brůha, Petr; Papež, Václav; Mautner, Pavel; Novotný, Jiří; Prokop, Tomáš; Stěbeták, Jan

    2014-01-01

    As in other areas of experimental science, the operation of an electrophysiological laboratory, the design and performance of electrophysiological experiments, the collection, storage and sharing of experimental data and metadata, the analysis and interpretation of these data, and the publication of results are time consuming activities. If these activities are well organized and supported by a suitable infrastructure, the work efficiency of researchers increases significantly. This article deals with the main concepts, design, and development of software and hardware infrastructure for research in electrophysiology. The described infrastructure has been primarily developed for the needs of the neuroinformatics laboratory at the University of West Bohemia, the Czech Republic. However, from the beginning it has also been designed and developed to be open and applicable in laboratories that do similar research. After introducing the laboratory and the whole architectural concept, the individual parts of the infrastructure are described. The central element of the software infrastructure is a web-based portal that enables community researchers to store, share, download and search data and metadata from electrophysiological experiments. The data model, domain ontology and usage of semantic web languages and technologies are described. The current data publication policy used in the portal is briefly introduced. The registration of the portal within the Neuroscience Information Framework is described. Then the methods used for processing of electrophysiological signals are presented. The specific modifications of these methods introduced by laboratory researchers are summarized; the methods are organized into a laboratory workflow. Other parts of the software infrastructure include mobile and offline solutions for data/metadata storing and a hardware stimulator communicating with an EEG amplifier and recording software.

  13. Scalable collaborative risk management technology for complex critical systems

    NASA Technical Reports Server (NTRS)

    Campbell, Scott; Torgerson, Leigh; Burleigh, Scott; Feather, Martin S.; Kiper, James D.

    2004-01-01

    We describe here our project and plans to develop methods, software tools, and infrastructure tools to address challenges relating to geographically distributed software development. Specifically, this work is creating an infrastructure that supports applications working over distributed geographical and organizational domains and is using this infrastructure to develop a tool that supports project development using risk management and analysis techniques where the participants are not collocated.

  14. Coupling Sensing Hardware with Data Interrogation Software for Structural Health Monitoring

    DOE PAGES

    Farrar, Charles R.; Allen, David W.; Park, Gyuhae; ...

    2006-01-01

    The process of implementing a damage detection strategy for aerospace, civil and mechanical engineering infrastructure is referred to as structural health monitoring (SHM). The authors' approach is to address the SHM problem in the context of a statistical pattern recognition paradigm. In this paradigm, the process can be broken down into four parts: (1) Operational Evaluation, (2) Data Acquisition and Cleansing, (3) Feature Extraction and Data Compression, and (4) Statistical Model Development for Feature Discrimination. These processes must be implemented through hardware or software and, in general, some combination of these two approaches will be used. This paper will discuss each portion of the SHM process with particular emphasis on the coupling of a general purpose data interrogation software package for structural health monitoring with a modular wireless sensing and processing platform. More specifically, this paper will address the need to take an integrated hardware/software approach to developing SHM solutions.

  15. caCORE: a common infrastructure for cancer informatics.

    PubMed

    Covitz, Peter A; Hartel, Frank; Schaefer, Carl; De Coronado, Sherri; Fragoso, Gilberto; Sahni, Himanso; Gustafson, Scott; Buetow, Kenneth H

    2003-12-12

    Sites with substantive bioinformatics operations are challenged to build data processing and delivery infrastructure that provides reliable access and enables data integration. Locally generated data must be processed and stored such that relationships to external data sources can be presented. Consistency and comparability across data sets requires annotation with controlled vocabularies and, further, metadata standards for data representation. Programmatic access to the processed data should be supported to ensure the maximum possible value is extracted. Confronted with these challenges at the National Cancer Institute Center for Bioinformatics, we decided to develop a robust infrastructure for data management and integration that supports advanced biomedical applications. We have developed an interconnected set of software and services called caCORE. Enterprise Vocabulary Services (EVS) provide controlled vocabulary, dictionary and thesaurus services. The Cancer Data Standards Repository (caDSR) provides a metadata registry for common data elements. Cancer Bioinformatics Infrastructure Objects (caBIO) implements an object-oriented model of the biomedical domain and provides Java, Simple Object Access Protocol and HTTP-XML application programming interfaces. caCORE has been used to develop scientific applications that bring together data from distinct genomic and clinical science sources. caCORE downloads and web interfaces can be accessed from links on the caCORE web site (http://ncicb.nci.nih.gov/core). caBIO software is distributed under an open source license that permits unrestricted academic and commercial use. Vocabulary and metadata content in the EVS and caDSR, respectively, is similarly unrestricted, and is available through web applications and FTP downloads. http://ncicb.nci.nih.gov/core/publications contains links to the caBIO 1.0 class diagram and the caCORE 1.0 Technical Guide, which provide detailed information on the present caCORE architecture, data sources and APIs. Updated information appears on a regular basis on the caCORE web site (http://ncicb.nci.nih.gov/core).

  16. Integrated web system of geospatial data services for climate research

    NASA Astrophysics Data System (ADS)

    Okladnikov, Igor; Gordov, Evgeny; Titov, Alexander

    2016-04-01

    Georeferenced datasets are currently actively used for modeling, interpretation and forecasting of climatic and ecosystem changes on different spatial and temporal scales. Due to the inherent heterogeneity of environmental datasets as well as their huge size (up to tens of terabytes for a single dataset), special software supporting studies in the climate and environmental change areas is required. An approach for integrated analysis of georeferenced climatological data sets, based on a combination of web and GIS technologies in the framework of the spatial data infrastructure paradigm, is presented. According to this approach, a dedicated data-processing web system for integrated analysis of heterogeneous georeferenced climatological and meteorological data is being developed. It is based on Open Geospatial Consortium (OGC) standards and involves many modern solutions such as an object-oriented programming model, modular composition, and JavaScript libraries based on the GeoExt library, ExtJS Framework and OpenLayers software. This work is supported by the Ministry of Education and Science of the Russian Federation, Agreement #14.613.21.0037.
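
    A minimal sketch of consuming an OGC Web Map Service of the kind such a system builds on, using the OWSLib package; the service URL and layer name are placeholders, not the system described above.

```python
# Hedged sketch of an OGC WMS request with OWSLib; URL and layer are placeholders.
from owslib.wms import WebMapService

wms = WebMapService("https://example.org/geoserver/wms", version="1.1.1")
print(list(wms.contents))                       # layers advertised by the server

img = wms.getmap(
    layers=["climate:tasmax_mean"],             # hypothetical layer name
    srs="EPSG:4326",
    bbox=(60.0, 40.0, 120.0, 80.0),             # lon/lat window of interest
    size=(800, 400),
    format="image/png",
)
with open("tasmax_mean.png", "wb") as f:
    f.write(img.read())
```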

  17. A cyber infrastructure for the SKA Telescope Manager

    NASA Astrophysics Data System (ADS)

    Barbosa, Domingos; Barraca, João. P.; Carvalho, Bruno; Maia, Dalmiro; Gupta, Yashwant; Natarajan, Swaminathan; Le Roux, Gerhard; Swart, Paul

    2016-07-01

    The Square Kilometre Array Telescope Manager (SKA TM) will be responsible for assisting the SKA Operations and Observation Management, carrying out System diagnosis and collecting Monitoring and Control data from the SKA subsystems and components. To provide adequate compute resources, scalability, operation continuity and high availability, as well as strict Quality of Service, the TM cyber-infrastructure (embodied in the Local Infrastructure - LINFRA) consists of COTS hardware and infrastructural software (for example: server monitoring software, host operating system, virtualization software, device firmware), providing a specially tailored Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) solution. The TM infrastructure provides services in the form of computational power, software defined networking, power, storage abstractions, and high level, state of the art IaaS and PaaS management interfaces. This cyber platform will be tailored to each of the two SKA Phase 1 telescope instances (SKA_MID in South Africa and SKA_LOW in Australia), each presenting different computational and storage infrastructures and conditioned by location. This cyber platform will provide a compute model enabling TM to manage the deployment and execution of its multiple components (observation scheduler, proposal submission tools, M&C components, forensic tools, several databases, etc.). In this sense, the TM LINFRA is primarily focused towards the provision of isolated instances, mostly resorting to virtualization technologies, while defaulting to bare hardware if specifically required due to performance, security, availability, or other requirements.

  18. 2015 ESGF Progress Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, D. N.

    2015-06-22

    The Earth System Grid Federation (ESGF) is a multi-agency, international collaboration whose purpose is to develop the software infrastructure needed to facilitate and empower the study of climate change on a global scale. ESGF’s architecture employs a system of geographically distributed peer nodes that are independently administered yet united by common federation protocols and application programming interfaces. The cornerstones of its interoperability are the peer-to-peer messaging, which is continuously exchanged among all nodes in the federation; a shared architecture for search and discovery; and a security infrastructure based on industry standards. ESGF integrates popular application engines available from the open-source community with custom components (for data publishing, searching, user interface, security, and messaging) that were developed collaboratively by the team. The full ESGF infrastructure has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the Coupled Model Intercomparison Project (CMIP)—output used by the Intergovernmental Panel on Climate Change assessment reports. ESGF is a successful example of integration of disparate open-source technologies into a cohesive functional system that serves the needs of the global climate science community.
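
    A hedged sketch of querying the federation's search infrastructure with the esgf-pyclient (pyesgf) package; the index node URL, facet names, and attribute names are given from memory and should be verified against current ESGF documentation.

```python
# Hedged sketch of an ESGF federated search using the pyesgf client library.
from pyesgf.search import SearchConnection

conn = SearchConnection("https://esgf-node.llnl.gov/esg-search", distrib=True)
ctx = conn.new_context(project="CMIP6", variable="tas", frequency="mon")
print("matching datasets:", ctx.hit_count)

results = ctx.search()
for i in range(min(3, len(results))):   # inspect the first few hits
    print(results[i].dataset_id)
```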

  19. Understanding Patterns for System-of-Systems Integration

    DTIC Science & Technology

    2013-12-01

    would be completely newly constructed. 2. Brownfield : there exists something, but we can (in principle) modify the realization of it. Here access to...systems (SoS constitu- ents). If we consider again the enterprise software infrastructure introduced above, a Brownfield scenario occurs in a SoS...context if the company did initially build its own SoS that now must be modified (e.g., by introducing an ESB5 backbone). A Brownfield scenario also

  20. The Human-Robot Interaction Operating System

    NASA Technical Reports Server (NTRS)

    Fong, Terrence; Kunz, Clayton; Hiatt, Laura M.; Bugajska, Magda

    2006-01-01

    In order for humans and robots to work effectively together, they need to be able to converse about abilities, goals and achievements. Thus, we are developing an interaction infrastructure called the "Human-Robot Interaction Operating System" (HRI/OS). The HRI/OS provides a structured software framework for building human-robot teams, supports a variety of user interfaces, enables humans and robots to engage in task-oriented dialogue, and facilitates integration of robots through an extensible API.

  1. Whole earth modeling: developing and disseminating scientific software for computational geophysics.

    NASA Astrophysics Data System (ADS)

    Kellogg, L. H.

    2016-12-01

    Historically, a great deal of specialized scientific software for modeling and data analysis has been developed by individual researchers or small groups of scientists working on their own specific research problems. As the magnitude of available data and computer power has increased, so has the complexity of scientific problems addressed by computational methods, creating both a need to sustain existing scientific software, and expand its development to take advantage of new algorithms, new software approaches, and new computational hardware. To that end, communities like the Computational Infrastructure for Geodynamics (CIG) have been established to support the use of best practices in scientific computing for solid earth geophysics research and teaching. Working as a scientific community enables computational geophysicists to take advantage of technological developments, improve the accuracy and performance of software, build on prior software development, and collaborate more readily. The CIG community, and others, have adopted an open-source development model, in which code is developed and disseminated by the community in an open fashion, using version control and software repositories like Git. One emerging issue is how to adequately identify and credit the intellectual contributions involved in creating open source scientific software. The traditional method of disseminating scientific ideas, peer-reviewed publication, was not designed for reviewing or crediting scientific software, although emerging publication strategies such as software journals are attempting to address the need. We are piloting an integrated approach in which authors are identified and credited as scientific software is developed and run. Successful software citation requires integration with the scholarly publication and indexing mechanisms as well, to assign credit, ensure discoverability, and provide provenance for software.
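
    One concrete way to make such credit machine-readable is to ship citation metadata alongside the code; the sketch below writes a CodeMeta-style codemeta.json, with a hypothetical project name, author, and repository.

```python
# Hedged sketch of CodeMeta-style software citation metadata; the project name,
# author, and repository are hypothetical examples.
import json

codemeta = {
    "@context": "https://doi.org/10.5063/schema/codemeta-2.0",
    "@type": "SoftwareSourceCode",
    "name": "MantleFlow",                           # hypothetical package
    "version": "2.1.0",
    "codeRepository": "https://github.com/example/mantleflow",
    "license": "https://spdx.org/licenses/GPL-3.0",
    "author": [
        {"@type": "Person", "givenName": "Ada", "familyName": "Researcher"},
    ],
}

with open("codemeta.json", "w") as f:
    json.dump(codemeta, f, indent=2)
```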

  2. Robonaut's Flexible Information Technology Infrastructure

    NASA Technical Reports Server (NTRS)

    Askew, Scott; Bluethmann, William; Alder, Ken; Ambrose, Robert

    2003-01-01

    Robonaut, NASA's humanoid robot, is designed to work as both an astronaut assistant and, in certain situations, an astronaut surrogate. This highly dexterous robot performs complex tasks under telepresence control that could previously only be carried out directly by humans. Currently with 47 degrees of freedom (DOF), Robonaut is a state-of-the-art human-size telemanipulator system. While many of Robonaut's embedded components have been custom designed to meet packaging or environmental requirements, the primary computing systems used in Robonaut are currently commercial-off-the-shelf (COTS) products which have some correlation to flight qualified computer systems. This loose coupling of information technology (IT) resources allows Robonaut to exploit cost-effective solutions while floating the technology base to take advantage of the rapid pace of IT advances. These IT systems utilize a software development environment which is both compatible with COTS hardware as well as flight proven computing systems, preserving the majority of software development for a flight system. The ability to use highly integrated and flexible COTS software development tools improves productivity while minimizing redesign for a space flight system. Further, the flexibility of Robonaut's software and communication architecture has allowed it to become a widely used distributed development testbed for integrating new capabilities and furthering experimental research.

  3. FOSS Tools for Research Data Management

    NASA Astrophysics Data System (ADS)

    Stender, Vivien; Jankowski, Cedric; Hammitzsch, Martin; Wächter, Joachim

    2017-04-01

    Established initiatives and organizations, e.g. the Initiative for Scientific Cyberinfrastructures (NSF, 2007) or the European Strategy Forum on Research Infrastructures (ESFRI, 2008), promote and foster the development of sustainable research infrastructures. These infrastructures aim to provide services that support scientists in searching, visualizing and accessing data, in collaborating and exchanging information, as well as in publishing data and other results. In this regard, Research Data Management (RDM) gains importance and thus requires support by appropriate tools integrated in these infrastructures. Different projects provide their own solutions to manage research data. However, within two projects - SUMARIO for land and water management and TERENO for environmental monitoring - solutions to manage research data have been developed based on Free and Open Source Software (FOSS) components. The resulting framework provides essential components for harvesting, storing and documenting research data, as well as for discovering, visualizing and downloading these data on the basis of standardized services, stimulated considerably by the enhanced data management approaches of Spatial Data Infrastructures (SDI). In order to fully exploit the potential of these developments for enhancing data management in the Geosciences, the publication of software components, e.g. via GitHub, is not sufficient. We will use our experience to move these solutions into the cloud, e.g. as PaaS or SaaS offerings. Our contribution will present data management solutions for the Geosciences developed in two projects. A sort of construction kit with FOSS components builds the backbone for the assembly and implementation of project-specific platforms. Furthermore, an approach is presented to stimulate the reuse of FOSS RDM solutions with cloud concepts. In future projects, specific RDM platforms can be set up much faster, customized to individual needs, and tools can be added during run time.

  4. The Australian Computational Earth Systems Simulator

    NASA Astrophysics Data System (ADS)

    Mora, P.; Muhlhaus, H.; Lister, G.; Dyskin, A.; Place, D.; Appelbe, B.; Nimmervoll, N.; Abramson, D.

    2001-12-01

    Numerical simulation of the physics and dynamics of the entire earth system offers an outstanding opportunity for advancing earth system science and technology, but represents a major challenge due to the range of scales and physical processes involved, as well as the magnitude of the software engineering effort required. However, new simulation and computer technologies are bringing this objective within reach. Under a special competitive national funding scheme to establish new Major National Research Facilities (MNRF), the Australian government, together with a consortium of universities and research institutions, has funded construction of the Australian Computational Earth Systems Simulator (ACcESS). The Simulator, or computational virtual earth, will provide the Australian earth systems science community with the research infrastructure required for simulations of dynamical earth processes at scales ranging from microscopic to global. It will consist of thematic supercomputer infrastructure and an earth systems simulation software system. The Simulator models and software will be constructed over a five-year period by a multi-disciplinary team of computational scientists, mathematicians, earth scientists, civil engineers and software engineers. The construction team will integrate numerical simulation models (3D discrete element/lattice solid models, particle-in-cell large-deformation finite-element methods, stress reconstruction models, multi-scale continuum models, etc.) with geophysical, geological and tectonic models, through advanced software engineering and visualization technologies. When fully constructed, the Simulator aims to provide the software and hardware infrastructure needed to model solid earth phenomena including global-scale dynamics and mineralisation processes, crustal-scale processes including plate tectonics, mountain building and interacting fault system dynamics, and micro-scale processes that control the geological, physical and dynamic behaviour of earth systems. ACcESS represents a part of Australia's contribution to the APEC Cooperation for Earthquake Simulation (ACES) international initiative. Together with other national earth systems science initiatives including the Japanese Earth Simulator and US General Earthquake Model projects, ACcESS aims to provide a driver for scientific advancement and technological breakthroughs including: quantum leaps in understanding of earth evolution at global, crustal, regional and microscopic scales; new knowledge of the physics of crustal fault systems required to underpin the grand challenge of earthquake prediction; and new understanding and predictive capabilities of geological processes such as tectonics and mineralisation.

  5. Architecture for distributed design and fabrication

    NASA Astrophysics Data System (ADS)

    McIlrath, Michael B.; Boning, Duane S.; Troxel, Donald E.

    1997-01-01

    We describe a flexible, distributed system architecture capable of supporting collaborative design and fabrication of semiconductor devices and integrated circuits. Such capabilities are of particular importance in the development of new technologies, where both equipment and expertise are limited. Distributed fabrication enables direct, remote, physical experimentation in the development of leading-edge technology, where the necessary manufacturing resources are new, expensive, and scarce. Computational resources, software, processing equipment, and people may all be widely distributed; their effective integration is essential in order to achieve the realization of new technologies for specific product requirements. Our architecture leverages current vendor and consortia developments to define software interfaces and infrastructure based on existing and emerging networking, CIM, and CAD standards. Process engineers and product designers access processing and simulation results through a common interface and collaborate across the distributed manufacturing environment.

  6. A Study to Identify the Critical Success Factors for ERP Implementation in an Indian SME: A Case Based Approach

    NASA Astrophysics Data System (ADS)

    Upadhyay, Parijat; Dan, Pranab K.

    To achieve synergy across product lines, businesses are implementing a set of standard business applications and consistent data definitions across all business units. ERP packages are extremely useful in integrating a global company and provide a "common language" throughout the company. Companies are not only implementing a standardized application but are also moving to a common architecture and infrastructure. For many companies, a standardized software rollout is a good time to do some consolidation of their IT infrastructure across various locations. Companies are also finding that ERP solutions help them get rid of their legacy systems, most of which may not be compliant with modern-day business requirements.

  7. Implementation of a large-scale hospital information infrastructure for multi-unit health-care services.

    PubMed

    Yoo, Sun K; Kim, Dong Keun; Kim, Jung C; Park, Youn Jung; Chang, Byung Chul

    2008-01-01

    With the increase in demand for high-quality medical services, the need for an innovative hospital information system has become essential. An improved system has been implemented in all hospital units of the Yonsei University Health System. Interoperability between multiple units required appropriate hardware infrastructure and software architecture. This large-scale hospital information system encompassed PACS (Picture Archiving and Communications Systems), EMR (Electronic Medical Records) and ERP (Enterprise Resource Planning). It involved two tertiary hospitals and 50 community hospitals. The monthly data production rate of the integrated hospital information system is about 1.8 TByte, and the total quantity of data produced so far is about 60 TByte. Large-scale information exchange and sharing will be particularly useful for telemedicine applications.

  8. Rearchitecting IT: Simplify. Simplify

    ERIC Educational Resources Information Center

    Panettieri, Joseph C.

    2006-01-01

    Simplifying and securing an IT infrastructure is not easy. It frequently requires rethinking years of hardware and software investments, and a gradual migration to modern systems. Even so, writes the author, universities can take six practical steps to success: (1) Audit software infrastructure; (2) Evaluate current applications; (3) Centralize…

  9. Moving code - Sharing geoprocessing logic on the Web

    NASA Astrophysics Data System (ADS)

    Müller, Matthias; Bernard, Lars; Kadner, Daniel

    2013-09-01

    Efficient data processing is a long-standing challenge in remote sensing. Effective and efficient algorithms are required for product generation in ground processing systems, event-based or on-demand analysis, environmental monitoring, and data mining. Furthermore, the increasing number of survey missions and the exponentially growing data volume in recent years have created demand for better software reuse as well as an efficient use of scalable processing infrastructures. Solutions that address both demands simultaneously have slowly begun to appear, but they seldom consider the possibility of coordinating development and maintenance efforts across different institutions, community projects, and software vendors. This paper presents a new approach to share, reuse, and possibly standardise geoprocessing logic in the field of remote sensing. Drawing from the principles of service-oriented design and distributed processing, this paper introduces moving-code packages as self-describing software components that contain algorithmic code and machine-readable descriptions of the provided functionality, platform, and infrastructure, as well as basic information about exploitation rights. Furthermore, the paper presents a lean publishing mechanism by which to distribute these packages on the Web and to integrate them in different processing environments ranging from monolithic workstations to elastic computational environments or "clouds". The paper concludes with an outlook toward community repositories for reusable geoprocessing logic and their possible impact on data-driven science in general.
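
    To make the idea of a self-describing package concrete, here is a minimal Python sketch of a manifest and a deployability check. The field names are invented for illustration and are not the actual moving-code package description schema.

      # Illustrative only: a hypothetical manifest in the spirit of
      # moving-code packages, pairing algorithmic code with machine-readable
      # descriptions of functionality, platform needs and exploitation rights.
      import json

      manifest = {
          "functionality": "ndvi-calculation",            # what the code provides
          "runtime": {"language": "python", "version": ">=3.8"},
          "platform": {"memory_mb": 512, "scalable": True},
          "exploitation_rights": "CC-BY-4.0",             # basic rights information
          "entrypoint": "run.py",
      }

      def is_deployable(manifest, available_memory_mb):
          """Check whether a target processing environment can host the package."""
          return manifest["platform"]["memory_mb"] <= available_memory_mb

      print(json.dumps(manifest, indent=2))
      print("deployable:", is_deployable(manifest, available_memory_mb=1024))

    The point of the machine-readable description is exactly this kind of automated decision: a workstation or cloud scheduler can test a package against its own capabilities before fetching the code.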

  10. Modernized build and test infrastructure for control software at ESO: highly flexible building, testing, and automatic quality practices for telescope control software

    NASA Astrophysics Data System (ADS)

    Pellegrin, F.; Jeram, B.; Haucke, J.; Feyrin, S.

    2016-07-01

    The paper describes the introduction of a new automated build and test infrastructure, based on the open-source software Jenkins, into the ESO Very Large Telescope control software to replace the preexisting in-house solution. A brief introduction to software quality practices is given, followed by a description of the previous solution, its limitations, and new upcoming requirements. The modifications required to adapt the new system are described, along with how these were applied to the current software and the results obtained. An overview of how the new system may be used in future projects is also presented.
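
    Jenkins-based infrastructures like the one described are usually driven programmatically through the Jenkins REST API. As a minimal sketch (server URL, job name and credentials are placeholders, not ESO's actual setup), a parameterized build can be queued like this:

      # Minimal sketch: queueing a parameterized Jenkins build via its REST API.
      # All names and credentials below are placeholders.
      import requests

      JENKINS = "https://jenkins.example.org"
      resp = requests.post(
          f"{JENKINS}/job/vlt-control-sw/buildWithParameters",
          params={"BRANCH": "main"},
          auth=("builder", "api-token"),   # user name + Jenkins API token
          timeout=30,
      )
      resp.raise_for_status()
      # Jenkins answers 201 and points at the queue item it created.
      print("build queued:", resp.headers.get("Location"))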

  11. Cyber Security Threats to Safety-Critical, Space-Based Infrastructures

    NASA Astrophysics Data System (ADS)

    Johnson, C. W.; Atencia Yepez, A.

    2012-01-01

    Space-based systems play an important role within national critical infrastructures. They are being integrated into advanced air-traffic management applications, rail signalling systems, energy distribution software, etc. Unfortunately, the end users of communications, location sensing and timing applications often fail to understand that these infrastructures are vulnerable to a wide range of security threats. The following pages focus on concerns associated with potential cyber-attacks. These are important because future attacks may invalidate many of the safety assumptions that support the provision of critical space-based services. These safety assumptions are based on standard forms of hazard analysis that ignore cyber-security considerations. This is a significant limitation when, for instance, security attacks can simultaneously exploit multiple vulnerabilities in a manner that would never occur without a deliberate enemy seeking to damage space-based systems and ground infrastructures. We address this concern through the development of a combined safety and security risk assessment methodology. The aim is to identify attack scenarios that justify the allocation of additional design resources so that safety barriers can be strengthened to increase our resilience against security threats.

  12. A Microarray Tool Provides Pathway and GO Term Analysis.

    PubMed

    Koch, Martin; Royer, Hans-Dieter; Wiese, Michael

    2011-12-01

    Analysis of gene expression profiles is no longer exclusively a task for bioinformatic experts. However, gaining statistically significant results is challenging and requires both biological knowledge and computational know-how. Here we present a novel, user-friendly microarray reporting tool called maRt. The software provides access to bioinformatic resources, like gene ontology terms and biological pathways, by use of the DAVID and BioMart web services. Results are summarized in structured HTML reports, each presenting a different layer of information. In these reports, content from diverse sources is integrated and interlinked. To speed up processing, maRt takes advantage of the multi-core technology of modern desktop computers by using parallel processing. Since the software is built upon an RCP infrastructure, it may serve as a starting point for developers aiming to integrate novel R-based applications. Installer, documentation and various kinds of tutorials are available under the LGPL license at the website of our institute http://www.pharma.uni-bonn.de/www/mart. This software is free for academic use. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
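
    The BioMart calls that tools like maRt wrap are plain web-service requests. As a rough illustration (not part of maRt itself, which is R/RCP-based), the following Python sketch fetches GO term IDs for a gene via the Ensembl BioMart XML query interface; dataset, filter and attribute names follow Ensembl BioMart conventions but should be checked against the live service.

      # Minimal sketch of a BioMart web-service query: GO term IDs for TP53.
      import requests

      XML_QUERY = """<?xml version="1.0" encoding="UTF-8"?>
      <!DOCTYPE Query>
      <Query virtualSchemaName="default" formatter="TSV" header="0">
        <Dataset name="hsapiens_gene_ensembl" interface="default">
          <Filter name="hgnc_symbol" value="TP53"/>
          <Attribute name="ensembl_gene_id"/>
          <Attribute name="go_id"/>
        </Dataset>
      </Query>"""

      resp = requests.get(
          "https://www.ensembl.org/biomart/martservice",
          params={"query": XML_QUERY},
          timeout=60,
      )
      for line in resp.text.strip().splitlines():
          print(line)          # TSV rows: gene ID, GO term ID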

  13. Executable research compendia in geoscience research infrastructures

    NASA Astrophysics Data System (ADS)

    Nüst, Daniel

    2017-04-01

    From generation through analysis and collaboration to communication, scientific research requires the right tools. Scientists create their own software using third-party libraries and platforms. Cloud computing, Open Science, public data infrastructures, and Open Source offer scientists unprecedented opportunities, nowadays often in a field "Computational X" (e.g. computational seismology) or X-informatics (e.g. geoinformatics) [0]. This increases complexity and generates more innovation, e.g. Environmental Research Infrastructures (environmental RIs [1]). Researchers in Computational X write their software relying on both source code (e.g. from https://github.com) and binary libraries (e.g. from package managers such as APT, https://wiki.debian.org/Apt, or CRAN, https://cran.r-project.org/). They download data from domain-specific (cf. https://re3data.org) or generic (e.g. https://zenodo.org) data repositories, and deploy computations remotely (e.g. European Open Science Cloud). The results themselves are archived, given persistent identifiers, connected to other works (e.g. using https://orcid.org/), and listed in metadata catalogues. A single researcher, intentionally or not, interacts with all sub-systems of RIs: data acquisition, data access, data processing, data curation, and community support [2]. To preserve computational research, [3] proposes the Executable Research Compendium (ERC), a container format closing the gap of dependency preservation by encapsulating the runtime environment. ERCs and RIs can be integrated for different uses: (i) Coherence: ERC services validate completeness, integrity and results; (ii) Metadata: ERCs connect the different parts of a piece of research and facilitate discovery; (iii) Exchange and Preservation: ERCs as usable building blocks are the shared and archived entity; (iv) Self-consistency: ERCs remove dependence on ephemeral sources; (v) Execution: ERC services create and execute a packaged analysis but integrate with existing platforms for display and control. These integrations are vital for capturing workflows in RIs and connect key stakeholders (scientists, publishers, librarians). They are demonstrated using developments by the DFG-funded project Opening Reproducible Research (http://o2r.info); a sketch of a compendium completeness check follows below. Semi-automatic creation of ERCs based on research workflows is a core goal of the project. References [0] Tony Hey, Stewart Tansley, Kristin Tolle (eds), 2009. The Fourth Paradigm: Data-Intensive Scientific Discovery. Microsoft Research. [1] P. Martin et al., Open Information Linking for Environmental Research Infrastructures, 2015 IEEE 11th International Conference on e-Science, Munich, 2015, pp. 513-520. doi: 10.1109/eScience.2015.66 [2] Y. Chen et al., Analysis of Common Requirements for Environmental Science Research Infrastructures, The International Symposium on Grids and Clouds (ISGC) 2013, Taipei, 2013, http://pos.sissa.it/archive/conferences/179/032/ISGC [3] Opening Reproducible Research, Geophysical Research Abstracts Vol. 18, EGU2016-7396, 2016, http://meetingorganizer.copernicus.org/EGU2016/EGU2016-7396.pdf
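
    The "Coherence" use (i) amounts to an automated completeness check over a compendium. The Python sketch below illustrates the idea under stated assumptions: the manifest file name and required fields are guesses for illustration, not the normative ERC specification.

      # Illustrative sketch of a completeness check an RI service might run
      # on an Executable Research Compendium. Manifest name, required fields
      # and expected files are assumptions, not the o2r/ERC spec.
      from pathlib import Path
      import yaml   # PyYAML

      REQUIRED = {"id", "spec_version"}          # assumed mandatory metadata
      EXPECTED_FILES = ["Dockerfile", "data"]    # runtime environment + input data

      def check_erc(compendium: Path) -> list[str]:
          problems = []
          manifest = compendium / "erc.yml"      # assumed manifest file name
          if not manifest.exists():
              return ["missing erc.yml manifest"]
          meta = yaml.safe_load(manifest.read_text())
          problems += [f"missing field: {f}" for f in REQUIRED - set(meta)]
          problems += [f"missing file: {f}" for f in EXPECTED_FILES
                       if not (compendium / f).exists()]
          return problems

      print(check_erc(Path("my-compendium")) or "compendium looks complete")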

  14. The EGI-Engage EPOS Competence Center - Interoperating heterogeneous AAI mechanisms and Orchestrating distributed computational resources

    NASA Astrophysics Data System (ADS)

    Bailo, Daniele; Scardaci, Diego; Spinuso, Alessandro; Sterzel, Mariusz; Schwichtenberg, Horst; Gemuend, Andre

    2016-04-01

    The mission of the EGI-Engage project [1] is to accelerate the implementation of the Open Science Commons vision, where researchers from all disciplines have easy and open access to the innovative digital services, data, knowledge and expertise they need for collaborative and excellent research. The Open Science Commons is grounded on three pillars: the e-Infrastructure Commons, an ecosystem of services that constitute the foundation layer of distributed infrastructures; the Open Data Commons, where observations, results and applications are increasingly available for scientific research and for anyone to use and reuse; and the Knowledge Commons, in which communities have shared ownership of knowledge, participate in the co-development of software and are technically supported to exploit state-of-the-art digital services. To develop the Knowledge Commons, EGI-Engage is supporting the work of a set of community-specific Competence Centres, with participants from user communities (scientific institutes), National Grid Initiatives (NGIs), technology and service providers. Competence Centres collect and analyse requirements, integrate community-specific applications into state-of-the-art services, foster interoperability across e-Infrastructures, and evolve services through a user-centric development model. One of these Competence Centres is focussed on the European Plate Observing System (EPOS) [2] as representative of the solid earth science communities. EPOS is a pan-European long-term plan to integrate data, software and services from the distributed (and already existing) Research Infrastructures all over Europe in the domain of solid earth science. EPOS will enable innovative multidisciplinary research for a better understanding of the Earth's physical and chemical processes that control earthquakes, volcanic eruptions, ground instability and tsunami, as well as the processes driving tectonics and Earth's surface dynamics. EPOS will improve our ability to better manage the use of the subsurface of the Earth. EPOS started its Implementation Phase in October 2015 and is now actively working to integrate multidisciplinary data into a single e-infrastructure. Multidisciplinary data are organized and governed by the Thematic Core Services (TCS) - European-wide organizations and e-Infrastructures providing community-specific data and data products - and are driven by various scientific communities encompassing a wide spectrum of Earth science disciplines. TCS data, data products and services will be integrated into the Integrated Core Services (ICS) system, which will ensure their interoperability and access to these services by the scientific community as well as other users within society. The goal of the EPOS Competence Centre (EPOS CC) is to tackle two of the main challenges that the ICS will face in the near future, by taking advantage of the technical solutions provided by EGI. In order to do this, we will present the two pilot use cases the EGI-EPOS CC is developing: 1) The AAI pilot, dealing with the provision of transparent and homogeneous access to the ICS infrastructure for users holding different kinds of credentials (e.g. eduGAIN, OpenID Connect, X.509 certificates, etc.). Here the focus is on the mechanisms which allow credential delegation. 2) The computational pilot, improving the back-end services of an existing application in the field of computational seismology, developed in the context of the EC-funded project VERCE.
The application allows the processing and comparison of data from simulations of seismic wave propagation following a real earthquake and of real measurements recorded by seismographs. While the simulation data is produced directly by the users and stored in a Data Management System, the observations need to be pre-staged from institutional data services, which are maintained by the community itself. This use case aims at exploiting the EGI FedCloud e-infrastructure for data-intensive analysis and also explores possible interaction with other common data infrastructure initiatives such as EUDAT. In the presentation, the state of the art of the two use cases, together with the open challenges and future applications, will be discussed. Also, possible integration of EGI solutions with EPOS and other e-infrastructure providers will be considered. [1] EGI-ENGAGE https://www.egi.eu/about/egi-engage/ [2] EPOS http://www.epos-eu.org/

  15. Setting the stage for the EPOS ERIC: Integration of the legal, governance and financial framework

    NASA Astrophysics Data System (ADS)

    Atakan, Kuvvet; Bazin, Pierre-Louis; Bozzoli, Sabrina; Freda, Carmela; Giardini, Domenico; Hoffmann, Thomas; Kohler, Elisabeth; Kontkanen, Pirjo; Lauterjung, Jörn; Pedersen, Helle; Saleh, Kauzar; Sangianantoni, Agata

    2017-04-01

    EPOS - the European Plate Observing System - is the ESFRI infrastructure serving the need of the solid Earth science community at large. The EPOS mission is to create a single sustainable, and distributed infrastructure that integrates the diverse European Research Infrastructures for solid Earth science under a common framework. Thematic Core Services (TCS) and Integrated Core Services (Central Hub, ICS-C and Distributed, ICS-D) are key elements, together with NRIs (National Research Infrastructures), in the EPOS architecture. Following the preparatory phase, EPOS has initiated formal steps to adopt an ERIC legal framework (European Research Infrastructure Consortium). The statutory seat of EPOS will be in Rome, Italy, while the ICS-C will be jointly operated by France, UK and Denmark. The TCS planned so far cover: seismology, near-fault observatories, GNSS data and products, volcano observations, satellite data, geomagnetic observations, anthropogenic hazards, geological information modelling, multiscale laboratories and geo-energy test beds for low carbon energy. In the ERIC process, EPOS and all its services must achieve sustainability from a legal, governance, financial, and technical point of view, as well as full harmonization with national infrastructure roadmaps. As EPOS is a distributed infrastructure, the TCSs have to be linked to the future EPOS ERIC from legal and governance perspectives. For this purpose the TCSs have started to organize themselves as consortia and negotiate agreements to define the roles of the different actors in the consortium as well as their commitment to contribute to the EPOS activities. The link to the EPOS ERIC shall be made by service agreements of dedicated Service Providers. A common EPOS data policy has also been developed, based on the general principles of Open Access and paying careful attention to licensing issues, quality control, and intellectual property rights, which shall apply to the data, data products, software and services (DDSS) accessible through EPOS. From a financial standpoint, EPOS elaborated common guidelines for all institutions providing services, and selected a costing model and funding approach which foresees a mixed support of the services via national contributions and ERIC membership fees. In the EPOS multi-disciplinary environment, harmonization and integration are required at different levels and with a variety of different stakeholders; to this purpose, a Service Coordination Board (SCB) and technical Harmonization Groups (HGs) were established to develop the EPOS metadata standards with the EPOS Integrated Central Services, and to harmonize data and product standards with other projects at European and international level, including e.g. ENVRI+, EUDAT and EarthCube (US).

  16. Developing an Open Source, Reusable Platform for Distributed Collaborative Information Management in the Early Detection Research Network

    NASA Technical Reports Server (NTRS)

    Hart, Andrew F.; Verma, Rishi; Mattmann, Chris A.; Crichton, Daniel J.; Kelly, Sean; Kincaid, Heather; Hughes, Steven; Ramirez, Paul; Goodale, Cameron; Anton, Kristen; hide

    2012-01-01

    For the past decade, the NASA Jet Propulsion Laboratory, in collaboration with Dartmouth University has served as the center for informatics for the Early Detection Research Network (EDRN). The EDRN is a multi-institution research effort funded by the U.S. National Cancer Institute (NCI) and tasked with identifying and validating biomarkers for the early detection of cancer. As the distributed network has grown, increasingly formal processes have been developed for the acquisition, curation, storage, and dissemination of heterogeneous research information assets, and an informatics infrastructure has emerged. In this paper we discuss the evolution of EDRN informatics, its success as a mechanism for distributed information integration, and the potential sustainability and reuse benefits of emerging efforts to make the platform components themselves open source. We describe our experience transitioning a large closed-source software system to a community driven, open source project at the Apache Software Foundation, and point to lessons learned that will guide our present efforts to promote the reuse of the EDRN informatics infrastructure by a broader community.

  17. Future Standardization of Space Telecommunications Radio System with Core Flight System

    NASA Technical Reports Server (NTRS)

    Hickey, Joseph P.; Briones, Janette C.; Roche, Rigoberto; Handler, Louis M.; Hall, Steven

    2016-01-01

    NASA Glenn Research Center (GRC) is integrating the NASA Space Telecommunications Radio System (STRS) Standard with the Core Flight System (cFS). The STRS standard provides a common, consistent framework to develop, qualify, operate and maintain complex, reconfigurable and reprogrammable radio systems. The cFS is a flexible, open architecture that features a plug-and-play software executive called the Core Flight Executive (cFE), a reusable library of software components for flight and space missions and an integrated tool suite. Together, STRS and cFS create a development environment that allows STRS-compliant applications to reference the STRS APIs through the cFS infrastructure. These APIs are used to standardize the communication protocols on NASA's space SDRs. The cFE-STRS Operating Environment (OE) is a portable cFS library, which adds the ability to run STRS applications on existing cFS platforms. The purpose of this paper is to discuss the cFE-STRS OE prototype and preliminary experimental results obtained using the Advanced Space Radio Platform (ASRP), the GRC S-band Ground Station and the SCaN (Space Communication and Navigation) Testbed currently flying onboard the International Space Station. Additionally, this paper presents a demonstration of the Consultative Committee for Space Data Systems (CCSDS) Spacecraft Onboard Interface Services (SOIS) using electronic data sheets inside cFE. This configuration allows the data sheets to specify binary formats for data exchange between STRS applications; an illustrative sketch follows below. The integration of STRS with cFS leverages mission-proven platform functions and mitigates barriers to integration with future missions. This reduces flight software development time and the costs of software-defined radio (SDR) platforms. Furthermore, the combined benefits of STRS standardization and the flexibility of cFS provide an effective, reliable and modular framework to minimize software development efforts for spaceflight missions.
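
    To illustrate what "a data sheet specifying a binary format" buys you, here is a minimal Python sketch: a hypothetical electronic data sheet fixes a big-endian message layout, and both sides pack and unpack against it. The field layout is invented for illustration, not taken from a real CCSDS/SOIS data sheet.

      # Illustrative sketch: packing/unpacking a message per a hypothetical
      # electronic data sheet -- big-endian uint8 app ID, uint16 counter,
      # float32 gain. Flight code would do this in C; Python shows the idea.
      import struct

      EDS_FORMAT = ">BHf"        # the layout a data sheet might prescribe

      def pack_msg(app_id: int, counter: int, gain: float) -> bytes:
          return struct.pack(EDS_FORMAT, app_id, counter, gain)

      def unpack_msg(payload: bytes):
          return struct.unpack(EDS_FORMAT, payload)

      raw = pack_msg(app_id=7, counter=42, gain=1.5)
      print(raw.hex(), unpack_msg(raw))

    Because both applications derive their (de)serialization from the same sheet, the wire format is fixed independently of either implementation.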

  18. A Measurement & Analysis Training Solution Supporting CMMI & Six Sigma Transition

    DTIC Science & Technology

    2004-10-01

    [Extraction fragments from a 2004 Carnegie Mellon University (Software Engineering Institute) briefing: designing an integrated measurement and analysis training solution; references ISO 12207, EIA 632, ISO 9000, ITIL, COBIT, PSM, GQIM and scorecards; notes that the model, like several Capability Maturity Models, reflects Crosby's five maturity levels, focuses on infrastructure and process maturity, and is intended for software.]

  19. NASA Technology Transfer System

    NASA Technical Reports Server (NTRS)

    Tran, Peter B.; Okimura, Takeshi

    2017-01-01

    NTTS is the IT infrastructure for the Agency's Technology Transfer (T2) program, containing a portfolio of 60,000+ technologies and supporting all ten NASA field centers and HQ. It is the enterprise IT system for facilitating the Agency's technology transfer process, which includes reporting of new technologies (e.g., technology invention disclosures, NF1679), protecting intellectual property (e.g., patents), and commercializing technologies through various technology licenses, software releases, spinoffs, and success stories, using custom-built workflow, reporting, data consolidation, integration, and search engines.

  20. Degraded Operational Environment: Integration of Social Network Infrastructure Concept in a Traditional Military C2 System

    DTIC Science & Technology

    2013-06-01

    [Extraction fragments from an ICCRTS 2013 briefing by UNIGE - D.I.M.E.: describes a simplified ACA (Android Communication Applet), developed with the free MIT App Inventor application and the Android Software Development Kit, running on upgradable, portable COTS Android devices; the applet ("C24U") contacts a server by IP number and sends/receives SEFL (Simple Exchange...) messages for degraded C2 operation.]

  1. Development of a Dynamically Configurable,Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation

    NASA Technical Reports Server (NTRS)

    Afjeh, Abdollah A.; Reed, John A.

    2003-01-01

    This research is aimed at developing a new and advanced simulation framework that will significantly improve the overall efficiency of aerospace systems design and development. This objective will be accomplished through an innovative integration of object-oriented and Web-based technologies with both new and proven simulation methodologies. The basic approach involves three major areas of research: aerospace system and component representation using a hierarchical object-oriented component model, which enables the use of multimodels and enforces component interoperability; a collaborative software environment that streamlines the process of developing, sharing and integrating aerospace design and analysis models; and development of a distributed infrastructure which enables Web-based exchange of models to simplify the collaborative design process, and to support computationally intensive aerospace design and analysis processes. Research for the first year dealt with the design of the basic architecture and supporting infrastructure, an initial implementation of that design, and a demonstration of its application to an example aircraft engine system simulation.

  2. First use of LHC Run 3 Conditions Database infrastructure for auxiliary data files in ATLAS

    NASA Astrophysics Data System (ADS)

    Aperio Bella, L.; Barberis, D.; Buttinger, W.; Formica, A.; Gallas, E. J.; Rinaldi, L.; Rybkin, G.; ATLAS Collaboration

    2017-10-01

    Processing of the large amount of data produced by the ATLAS experiment requires fast and reliable access to what we call Auxiliary Data Files (ADF). These files, produced by Combined Performance, Trigger and Physics groups, contain conditions, calibrations, and other derived data used by the ATLAS software. In ATLAS this data has, thus far for historical reasons, been collected and accessed outside the ATLAS Conditions Database infrastructure and related software. For this reason, along with the fact that ADF are effectively read by the software as binary objects, this class of data appears ideal for testing the proposed Run 3 conditions data infrastructure now in development. This paper describes this implementation as well as the lessons learned in exploring and refining the new infrastructure with the potential for deployment during Run 2.

  3. Integrated homeland security system with passive thermal imaging and advanced video analytics

    NASA Astrophysics Data System (ADS)

    Francisco, Glen; Tillman, Jennifer; Hanna, Keith; Heubusch, Jeff; Ayers, Robert

    2007-04-01

    A complete detection, management, and control security system is absolutely essential to preempting criminal and terrorist assaults on key assets and critical infrastructure. According to Tom Ridge, former Secretary of the US Department of Homeland Security, "Voluntary efforts alone are not sufficient to provide the level of assurance Americans deserve and they must take steps to improve security." Further, it is expected that Congress will mandate private sector investment of over $20 billion in infrastructure protection between 2007 and 2015, which is incremental to funds currently being allocated to key sites by the Department of Homeland Security. Nearly 500,000 individual sites have been identified by the US Department of Homeland Security as critical infrastructure sites that would suffer severe and extensive damage if a security breach should occur. In fact, one major breach in any of 7,000 critical infrastructure facilities threatens more than 10,000 people. And one major breach in any of 123 facilities (identified as "most critical" among the 500,000) threatens more than 1,000,000 people. Current visible, night-vision or near-infrared imaging technology alone has limited foul-weather viewing capability, poor nighttime performance, and limited nighttime range. And many systems today yield excessive false alarms, are managed by fatigued operators, are unable to manage the voluminous data captured, or lack the ability to pinpoint where an intrusion occurred. In our 2006 paper, "Critical Infrastructure Security Confidence Through Automated Thermal Imaging", we showed how a highly effective security solution can be developed by integrating now-available "next-generation technologies", which include: (1) thermal imaging for highly effective detection of intruders in the dark of night and in challenging weather conditions at the sensor imaging level - the passive thermal sensor level detection building block; (2) automated software detection for creating initial alerts - software level detection, the next building block; (3) immersive 3D visual assessment for situational awareness and for managing the reaction process - automated intelligent situational awareness, a third building block; and (4) wide-area command and control capabilities allowing control from a remote location - the management and process control building block, which integrates the lower-level building elements. In addition, this paper describes three live installations of complete systems that incorporate visible and thermal cameras as well as advanced video analytics. Discussion of both system elements and design is extensive.

  4. A framework for integration of scientific applications into the OpenTopography workflow

    NASA Astrophysics Data System (ADS)

    Nandigam, V.; Crosby, C.; Baru, C.

    2012-12-01

    The NSF-funded OpenTopography facility provides online access to Earth science-oriented high-resolution LIDAR topography data, online processing tools, and derivative products. The underlying cyberinfrastructure employs a multi-tier service-oriented architecture comprised of an infrastructure tier, a processing services tier, and an application tier. The infrastructure tier consists of storage and compute resources as well as supporting databases. The services tier consists of the set of processing routines, each deployed as a Web service. The applications tier provides client interfaces to the system (e.g., a portal). We propose a "pluggable" infrastructure design that will allow new scientific algorithms and processing routines developed and maintained by the community to be integrated into the OpenTopography system so that the wider earth science community can benefit from their availability. All core components in OpenTopography are available as Web services using a customized open-source Opal toolkit. The Opal toolkit provides mechanisms to manage and track job submissions with the help of a back-end database, and allows monitoring of job and system status by providing charting tools. All core components in OpenTopography have been developed, maintained and wrapped as Web services using Opal by OpenTopography developers. However, as the scientific community develops new processing and analysis approaches, this integration approach does not scale efficiently. Most of the new scientific applications will have their own active development teams performing regular updates, maintenance and other improvements. It would be optimal to have each application co-located where its developers can continue to actively work on it while still making it accessible within the OpenTopography workflow for processing capabilities. We will utilize a software framework for remote integration of these scientific applications into the OpenTopography system. This will be accomplished by virtually extending the OpenTopography service over the various infrastructures running these scientific applications and processing routines. This involves packaging and distributing a customized instance of the Opal toolkit that will wrap the software application as an Opal-based web service and integrate it into the OpenTopography framework. We plan to make this as automated as possible. A structured specification of service inputs and outputs, along with metadata annotations encoded in XML, can be utilized to automate the generation of user interfaces, with appropriate tool tips and user help features, and the generation of other internal software. The OpenTopography Opal toolkit will also include customizations that enable security authentication, authorization, and the ability to write application usage and job statistics back to the OpenTopography databases. This usage information can then be reported to the original service providers and used for auditing and performance improvements. This pluggable framework will enable application developers to continue to enhance their applications while making the latest iteration available in a timely manner to the earth sciences community. It will also help us establish an overall framework that other scientific application providers will be able to use going forward.
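
    The client-side pattern for a service wrapped this way is submit-then-poll. The Python sketch below shows the general idea only; the endpoint paths and JSON field names are hypothetical and are not the actual Opal toolkit API.

      # Illustrative only: submitting a job to a wrapped processing service
      # and polling its status. Endpoints and fields are hypothetical.
      import time
      import requests

      SERVICE = "https://example.org/services/lidar-dem"   # placeholder service

      job = requests.post(SERVICE + "/jobs",
                          json={"dataset": "tile_042", "resolution_m": 1.0},
                          timeout=30).json()

      while True:
          status = requests.get(f"{SERVICE}/jobs/{job['id']}", timeout=30).json()
          if status["state"] in ("DONE", "FAILED"):
              break
          time.sleep(10)                  # poll until the job finishes

      print(status["state"], status.get("output_url"))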

  5. Study the effect of reservoir spatial heterogeneity on CO2 sequestration under an uncertainty quantification (UQ) software framework

    NASA Astrophysics Data System (ADS)

    Fang, Y.; Hou, J.; Engel, D.; Lin, G.; Yin, J.; Han, B.; Fang, Z.; Fountoulakis, V.

    2011-12-01

    In this study, we introduce an uncertainty quantification (UQ) software framework for carbon sequestration, with a focus on the effect of spatial heterogeneity of reservoir properties on CO2 migration. We use a sequential Gaussian simulation method (SGSIM) to generate realizations of permeability fields with various spatial statistical attributes. To deal with the computational difficulties, we integrate the following ideas/approaches: 1) we use three different sampling approaches (probabilistic collocation, quasi-Monte Carlo, and adaptive sampling) to reduce the required forward calculations while exploring the parameter space and quantifying the input uncertainty (a quasi-Monte Carlo sketch follows below); 2) we use eSTOMP as the forward modeling simulator. eSTOMP is implemented using the Global Arrays toolkit (GA), which is based on one-sided inter-processor communication and supports a shared-memory programming style on distributed-memory platforms, providing highly scalable performance. It uses a data model to partition most of the large-scale data structures into a relatively small number of distinct classes. The lower-level simulator infrastructure (e.g., meshing support, associated data structures, and data mapping to processors) is separated from the higher-level physics and chemistry algorithmic routines using a grid component interface; and 3) besides the faster model and more efficient algorithms to speed up the forward calculation, we built an adaptive system infrastructure to select the best possible data transfer mechanisms, to optimally allocate system resources to improve performance, and to integrate software packages and data for composing carbon sequestration simulation, computation, analysis, estimation and visualization. We will demonstrate the framework with a given CO2 injection scenario in a heterogeneous sandstone reservoir.
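
    As a minimal sketch of the quasi-Monte Carlo sampling mentioned in item 1), the following Python code draws a Sobol sequence over a three-dimensional parameter space with scipy.stats.qmc. The parameter bounds are made up for illustration and are not the study's actual reservoir statistics.

      # Minimal sketch: quasi-Monte Carlo (Sobol) sampling of a 3-parameter
      # space, the kind of design used to cut down forward simulations.
      from scipy.stats import qmc

      sampler = qmc.Sobol(d=3, scramble=True, seed=7)
      unit_samples = sampler.random_base2(m=6)      # 2**6 = 64 points in [0,1)^3

      # Scale to hypothetical ranges, e.g. permeability correlation lengths (m)
      # in x and y plus an anisotropy ratio.
      lower, upper = [10.0, 10.0, 0.1], [500.0, 500.0, 1.0]
      samples = qmc.scale(unit_samples, lower, upper)
      print(samples[:3])

    A low-discrepancy design like this covers the parameter space far more evenly than plain Monte Carlo for the same number of (expensive) forward runs.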

  6. Unidata cyberinfrastructure in the cloud: A progress report

    NASA Astrophysics Data System (ADS)

    Ramamurthy, Mohan

    2016-04-01

    Data services, software, and committed support are critical components of geosciences cyberinfrastructure that can help scientists address problems of unprecedented complexity, scale, and scope. Unidata is currently working on innovative ideas, new paradigms, and novel techniques to complement and extend its offerings. Our goal is to empower users so that they can tackle major, heretofore difficult problems. Unidata recognizes that its products and services must evolve to support new approaches to research and education. After years of hype and ambiguity, cloud computing is maturing in usability in many areas of science and education, bringing the benefits of virtualized and elastic remote services to infrastructure, software, computation, and data. Cloud environments reduce the amount of time and money spent to procure, install, and maintain new hardware and software, and reduce costs through resource pooling and shared infrastructure. Cloud services aimed at providing any resource, at any time, from any place, using any device are increasingly being embraced by all types of organizations. Given this trend and the enormous potential of cloud-based services, Unidata is moving to augment its products, services, data delivery mechanisms and applications to align with the cloud-computing paradigm. To realize the above vision, Unidata is working toward: * providing access to many types of data from a cloud (e.g., TDS, RAMADDA and EDEX); * deploying data-proximate tools to easily process, analyze and visualize those data in a cloud environment for consumption by anyone, on any device, from anywhere, at any time; * developing and providing a range of pre-configured and well-integrated tools and services that can be deployed by any university in their own private or public cloud settings. Specifically, Unidata has adopted Docker for "containerized applications", making them easy to deploy; Docker helps to create "disposable" installs and eliminates many configuration challenges (a deployment sketch follows below). Containerized applications include tools for data transport, access, analysis, and visualization: THREDDS Data Server, Integrated Data Viewer, GEMPAK, Local Data Manager, RAMADDA Data Server, and Python tools; * fostering partnerships with NOAA and public cloud vendors (e.g., Amazon) to harness their capabilities and resources for the benefit of the academic community.
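
    As a minimal sketch of such a containerized deployment, the following Python code launches a THREDDS Data Server container with the Docker SDK for Python. The image tag follows Unidata's published Docker images but should be treated as an assumption here.

      # Minimal sketch: running a containerized THREDDS Data Server locally.
      import docker

      client = docker.from_env()
      container = client.containers.run(
          "unidata/thredds-docker:latest",   # assumed image tag
          detach=True,
          ports={"8080/tcp": 8080},          # map the TDS web interface to the host
          name="tds",
      )
      print(container.status, "-> http://localhost:8080/thredds/")

    The "disposable install" property the abstract mentions comes for free: removing the container leaves the host untouched.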

  7. Raising Virtual Laboratories in Australia onto global platforms

    NASA Astrophysics Data System (ADS)

    Wyborn, L. A.; Barker, M.; Fraser, R.; Evans, B. J. K.; Moloney, G.; Proctor, R.; Moise, A. F.; Hamish, H.

    2016-12-01

    Across the globe, Virtual Laboratories (VLs), Science Gateways (SGs), and Virtual Research Environments (VREs) are being developed that enable users who are not co-located to actively work together at various scales to share data, models, tools, software, workflows, best practices, etc. Outcomes range from enabling 'long tail' researchers to more easily access specific data collections, to facilitating complex workflows on powerful supercomputers. In Australia, government funding has facilitated the development of a range of VLs through the National eResearch Collaborative Tools and Resources (NeCTAR) program. The VLs provide highly collaborative, research-domain-oriented, integrated software infrastructures that meet user community needs. Twelve VLs have been funded since 2012, including the Virtual Geophysics Laboratory (VGL); Virtual Hazards, Impact and Risk Laboratory (VHIRL); Climate and Weather Science Laboratory (CWSLab); Marine Virtual Laboratory (MarVL); and Biodiversity and Climate Change Virtual Laboratory (BCCVL). These VLs share similar technical challenges, with common issues emerging around the integration of tools, applications and access to data collections via both cloud-based environments and other distributed resources. While each VL began with a focus on a specific research domain, communities of practice have now formed across the VLs around common issues, facilitating the identification of best-practice case studies and new standards. As a result, tools are now being shared where the VLs access data via data services using international standards such as ISO, OGC and W3C. The sharing of these approaches is starting to facilitate the re-usability of infrastructure and is a step towards supporting interdisciplinary research. Whilst the focus of the VLs is Australia-centric, by using standards these environments can be extended to analysis of other international datasets. Many VL datasets are subsets of global datasets, so extension to global coverage is a small (and often requested) step. Similarly, most of the tools, software, and other technologies could be shared across infrastructures globally. It is therefore now time to better connect the Australian VLs with similar initiatives elsewhere to create international platforms that can contribute to global research challenges.

  8. A Serviced-based Approach to Connect Seismological Infrastructures: Current Efforts at the IRIS DMC

    NASA Astrophysics Data System (ADS)

    Ahern, Tim; Trabant, Chad

    2014-05-01

    As part of the COOPEUS initiative to build infrastructure that connects European and US research infrastructures, IRIS has advocated for the development of federated services based upon internationally recognized standards using web services. By deploying International Federation of Digital Seismograph Networks (FDSN) endorsed web services at multiple data centers in the US and Europe, we have shown that integration within the seismological domain can be realized. Deploying identical methods to invoke the web services at multiple centers significantly eases the way a scientist accesses seismic data (time series, metadata, and earthquake catalogs) from distributed federated centers. IRIS has developed a federator that helps a user identify where seismic data from global seismic networks can be accessed. The web-services-based federator builds the appropriate URLs and returns them to client software running on the scientist's own computer. These URLs are then used to pull data directly from the distributed centers in a peer-based fashion. IRIS is also involved in deploying web services across horizontal domains. As part of the US National Science Foundation's (NSF) EarthCube effort, an IRIS-led EarthCube Building Blocks project is underway. When completed, this project will aid in the discovery, access, and usability of data across multiple geoscience domains. This presentation will summarize current IRIS efforts in building vertical integration infrastructure within seismology, working closely with 5 centers in Europe and 2 centers in the US, as well as our first steps toward horizontal integration of data from 14 different domains in the US, in Europe, and around the world.
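
    The practical payoff of the FDSN standard is that one client works against any federated center. As a minimal sketch using ObsPy's FDSN client (station and time window chosen arbitrarily for illustration):

      # Minimal sketch: fetching waveforms via the standardized FDSN web
      # services; swapping "IRIS" for another FDSN center ("ORFEUS", "GFZ",
      # ...) leaves the client code unchanged.
      from obspy import UTCDateTime
      from obspy.clients.fdsn import Client

      client = Client("IRIS")
      t0 = UTCDateTime("2014-01-01T00:00:00")
      st = client.get_waveforms(network="IU", station="ANMO", location="00",
                                channel="BHZ", starttime=t0, endtime=t0 + 3600)
      print(st)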

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Happenny, Sean F.

    The United States’ power infrastructure is aging, underfunded, and vulnerable to cyber attack. Emerging smart grid technologies may take some of the burden off of existing systems and make the grid as a whole more efficient, reliable, and secure. The Pacific Northwest National Laboratory (PNNL) is funding research into several aspects of smart grid technology and grid security, creating a software simulation tool that will allow researchers to test power distribution networks utilizing different smart grid technologies to determine how the grid and these technologies react under different circumstances. Demonstrating security in embedded systems is another research area PNNL is tackling. Many of the systems controlling the U.S. critical infrastructure, such as the power grid, lack integrated security, and the networks protecting them are becoming easier to breach. Providing a virtual power substation network to each student team at the National Collegiate Cyber Defense Competition, thereby supporting the education of future cyber security professionals, is another way PNNL is helping to strengthen the security of the nation’s power infrastructure.

  10. Space Images for NASA JPL Android Version

    NASA Technical Reports Server (NTRS)

    Nelson, Jon D.; Gutheinz, Sandy C.; Strom, Joshua R.; Arca, Jeremy M.; Perez, Martin; Boggs, Karen; Stanboli, Alice

    2013-01-01

    This software addresses the demand for easily accessible NASA JPL images and videos by providing a user-friendly and simple graphical user interface that can be run via the Android platform from any location where an Internet connection is available. This app is complementary to the iPhone version of the application. A backend infrastructure stores, tracks, and retrieves space images from the JPL Photojournal and Institutional Communications Web server, and catalogs the information into a streamlined rating infrastructure. This system consists of four distinguishing components: image repository, database, server-side logic, and Android mobile application. The image repository contains images from various JPL flight projects. The database stores the image information as well as the user rating. The server-side logic retrieves the image information from the database and categorizes each image for display. The Android mobile application is an interfacing delivery system that retrieves the image information from the server for each Android mobile device user. Also created is a reporting and tracking system for charting and monitoring usage. Unlike other Android mobile image applications, this system uses the latest emerging technologies to produce image listings based directly on user input, allowing for countless combinations of images returned. The backend infrastructure uses industry-standard coding and database methods, enabling future software improvement and technology updates. The flexibility of the system design framework permits multiple levels of display possibilities and provides integration capabilities. Unique features of the software include image/video retrieval from a selected set of categories, image Web links that can be shared among e-mail users, sharing to Facebook/Twitter, marking as user's favorites, and image metadata searchable for instant results.

  11. Remote software upload techniques in future vehicles and their performance analysis

    NASA Astrophysics Data System (ADS)

    Hossain, Irina

    Updating software in vehicle Electronic Control Units (ECUs) will become a mandatory requirement for a variety of reasons, for example, to update or fix the functionality of an existing system, add new functionality, remove software bugs, and keep up with ITS infrastructure. Software modules of advanced vehicles can be updated using the Remote Software Upload (RSU) technique. RSU employs an infrastructure-based wireless communication technique where the software supplier sends the software to the targeted vehicle via a roadside Base Station (BS). However, security is critically important in RSU to avoid any disasters due to malfunctions of the vehicle, or to protect the proprietary algorithms from hackers, competitors or people with malicious intent. In this thesis, a mechanism for secure software upload in advanced vehicles is presented which employs mutual authentication of the software provider and the vehicle using a pre-shared authentication key before sending the software. The software packets are sent encrypted with a secret key along with the Message Digest (MD). To increase the security level, it is proposed that the vehicle receive more than one copy of the software, each accompanied by its MD; the vehicle installs the new software only when it receives more than one identical copy, as sketched below. To validate the proposition, analytical expressions for the average number of packet transmissions for a successful software update are determined. Different cases are investigated depending on the vehicle's buffer size and verification methods. The analytical and simulation results show that it is sufficient to send two copies of the software to the vehicle to thwart any security attack while uploading the software. The above-mentioned unicast method for RSU is suitable when software needs to be uploaded to a single vehicle. Since multicasting is the most efficient method of group communication, updating software in the ECUs of a large number of vehicles could benefit from it. However, as with unicast RSU, meeting the security requirements of multicast communication, i.e., authenticity, confidentiality and integrity of the transmitted software and access control of the group members, is challenging. In this thesis, infrastructure-based mobile multicasting for RSU in vehicle ECUs is proposed, where an ECU receives the software from a remote software distribution center using the roadside BSs as gateways. The Vehicular Software Distribution Network (VSDN) is divided into small regions administered by a Regional Group Manager (RGM). Two multicast Group Key Management (GKM) techniques are proposed based on the degree of trust in the BSs, named Fully-trusted (FT) and Semi-trusted (ST) systems. Analytical models are developed to find the multicast session establishment latency and handover latency for these two protocols. The average latency to perform mutual authentication of the software vendor and a vehicle, and to send the multicast session key by the software provider during multicast session initialization, and the handoff latency during a multicast session, are calculated. Analytical and simulation results show that the link establishment latency per vehicle of our proposed schemes is in the range of a few seconds, with the ST system requiring a few ms more than the FT system. The handoff latency is also in the range of a few seconds, and in some cases the ST system requires less handoff time than the FT system.
Thus, it is possible to build an efficient GKM protocol without placing too much trust in the BSs.
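
    The core acceptance rule described above (install only when multiple received copies carry identical digests) is easy to state in code. This Python sketch is illustrative only; the keyed-digest construction (HMAC-SHA256) and the toy payloads are assumptions, not the thesis's exact scheme.

      # Illustrative sketch: accept a software update only if more than one
      # received copy is present and all keyed message digests agree.
      import hashlib
      import hmac

      SHARED_KEY = b"pre-shared-authentication-key"   # placeholder key

      def message_digest(image: bytes) -> bytes:
          return hmac.new(SHARED_KEY, image, hashlib.sha256).digest()

      def accept_update(copies: list[bytes]) -> bool:
          """Install only if >1 copies are received and all digests match."""
          digests = [message_digest(c) for c in copies]
          return len(copies) > 1 and all(hmac.compare_digest(digests[0], d)
                                         for d in digests[1:])

      copy = b"ecu-firmware-v2.bin-contents"
      print(accept_update([copy, copy]))         # True: two identical copies
      print(accept_update([copy, copy + b"x"]))  # False: copies disagree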

  12. BioNet Digital Communications Framework

    NASA Technical Reports Server (NTRS)

    Gifford, Kevin; Kuzminsky, Sebastian; Williams, Shea

    2010-01-01

    BioNet v2 is a peer-to-peer middleware that enables digital communication devices to talk to each other. It provides a software development framework, standardized application, network-transparent device integration services, a flexible messaging model, and network communications for distributed applications. BioNet is an implementation of the Constellation Program Command, Control, Communications and Information (C3I) Interoperability specification, given in CxP 70022-01. The system architecture provides the necessary infrastructure for the integration of heterogeneous wired and wireless sensing and control devices into a unified data system with a standardized application interface, providing plug-and-play operation for hardware and software systems. BioNet v2 features a naming schema for mobility and coarse-grained localization information, data normalization within a network-transparent device driver framework, the enabling of network communications to non-IP devices, and fine-grained application control of data subscription bandwidth usage. BioNet directly integrates Disruption Tolerant Networking (DTN) as a communications technology, enabling networked communications with assets that are only intermittently connected, including orbiting relay satellites and planetary rover vehicles.

  13. OOI CyberInfrastructure - Next Generation Oceanographic Research

    NASA Astrophysics Data System (ADS)

    Farcas, C.; Fox, P.; Arrott, M.; Farcas, E.; Klacansky, I.; Krueger, I.; Meisinger, M.; Orcutt, J.

    2008-12-01

    Software has become a key enabling technology for scientific discovery, observation, modeling, and exploitation of natural phenomena. New value emerges from the integration of individual subsystems into networked federations of capabilities exposed to the scientific community. Such data-intensive interoperability networks are crucial for future collaborative scientific research, as they open up new ways of fusing data from different sources and across various domains, and of analyzing data over wide geographic areas. The recently established NSF OOI program, through its CyberInfrastructure component, addresses this challenge by providing broad access from sensor networks for data acquisition up to computational grids for massive computations, and binding infrastructure facilitating policy management and governance of the emerging system-of-scientific-systems. We provide insight into the integration core of this effort, namely a hierarchic service-oriented architecture for a robust, performant, and maintainable implementation. We first discuss the relationship between data management and CI crosscutting concerns such as identity management, policy and governance, which define the organizational contexts for data access and usage. Next, we detail critical services including data ingestion, transformation, preservation, inventory, and presentation. To address interoperability issues between data represented in various formats, we employ a semantic framework derived from the Earth System Grid technology, a canonical representation for scientific data based on DAP/OPeNDAP, and related data publishers such as ERDDAP. Finally, we briefly present the underlying transport, based on a messaging infrastructure over the AMQP protocol, and the preservation approach, based on a distributed file system through SDSC iRODS.
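
    Since AMQP is an open protocol, the transport layer described above can be exercised with any standard client. A minimal Python sketch with pika follows; the broker address, exchange name, routing key and payload are placeholders, not the OOI deployment's actual topology.

      # Minimal sketch: publishing an observation message over AMQP with pika.
      import json
      import pika

      conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
      channel = conn.channel()
      channel.exchange_declare(exchange="ooi.data", exchange_type="topic")

      payload = {"instrument": "ctd-01", "temperature_c": 8.31}
      channel.basic_publish(exchange="ooi.data",
                            routing_key="ingest.ctd",       # placeholder key
                            body=json.dumps(payload))
      conn.close()

    Topic exchanges of this kind let downstream services (ingestion, preservation, presentation) subscribe to just the message streams they need.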

  14. Future Standardization of Space Telecommunications Radio System with Core Flight System

    NASA Technical Reports Server (NTRS)

    Briones, Janette C.; Hickey, Joseph P.; Roche, Rigoberto; Handler, Louis M.; Hall, Charles S.

    2016-01-01

    NASA Glenn Research Center (GRC) is integrating the NASA Space Telecommunications Radio System (STRS) Standard with the Core Flight System (cFS), an avionics software operating environment. The STRS standard provides a common, consistent framework to develop, qualify, operate and maintain complex, reconfigurable and reprogrammable radio systems. The cFS is a flexible, open architecture that features a plug-and-play software executive called the Core Flight Executive (cFE), a reusable library of software components for flight and space missions and an integrated tool suite. Together, STRS and cFS create a development environment that allows STRS-compliant applications to reference the STRS application programmer interfaces (APIs) that use the cFS infrastructure. These APIs are used to standardize the communication protocols on NASA's space SDRs. The cFS-STRS Operating Environment (OE) is a portable cFS library, which adds the ability to run STRS applications on existing cFS platforms. The purpose of this paper is to discuss the cFS-STRS OE prototype, preliminary experimental results performed using the Advanced Space Radio Platform (ASRP), the GRC S-band Ground Station and the SCaN (Space Communication and Navigation) Testbed currently flying onboard the International Space Station (ISS). Additionally, this paper presents a demonstration of the Consultative Committee for Space Data Systems (CCSDS) Spacecraft Onboard Interface Services (SOIS) using electronic data sheets (EDS) inside cFE. This configuration allows the data sheets to specify binary formats for data exchange between STRS applications. The integration of STRS with cFS leverages mission-proven platform functions and mitigates barriers to integration with future missions. This reduces flight software development time and the costs of software-defined radio (SDR) platforms. Furthermore, the combined benefits of STRS standardization with the flexibility of cFS provide an effective, reliable and modular framework to minimize software development efforts for spaceflight missions.

  15. Experimental demonstration of an OpenFlow based software-defined optical network employing packet, fixed and flexible DWDM grid technologies on an international multi-domain testbed.

    PubMed

    Channegowda, M; Nejabati, R; Rashidi Fard, M; Peng, S; Amaya, N; Zervas, G; Simeonidou, D; Vilalta, R; Casellas, R; Martínez, R; Muñoz, R; Liu, L; Tsuritani, T; Morita, I; Autenrieth, A; Elbers, J P; Kostecki, P; Kaczmarek, P

    2013-03-11

    Software defined networking (SDN) and flexible grid optical transport technology are two key technologies that allow network operators to customize their infrastructure based on application requirements, thereby minimizing the extra capital and operational costs required for hosting new applications. In this paper, we report for the first time on the design, implementation and demonstration of a novel OpenFlow-based SDN unified control plane allowing seamless operation across heterogeneous state-of-the-art optical and packet transport domains. We verify and experimentally evaluate OpenFlow protocol extensions for flexible DWDM grid transport technology along with its integration with fixed DWDM grid and layer-2 packet switching.

  16. iTools: a framework for classification, categorization and integration of computational biology resources.

    PubMed

    Dinov, Ivo D; Rubin, Daniel; Lorensen, William; Dugan, Jonathan; Ma, Jeff; Murphy, Shawn; Kirschner, Beth; Bug, William; Sherman, Michael; Floratos, Aris; Kennedy, David; Jagadish, H V; Schmidt, Jeanette; Athey, Brian; Califano, Andrea; Musen, Mark; Altman, Russ; Kikinis, Ron; Kohane, Isaac; Delp, Scott; Parker, D Stott; Toga, Arthur W

    2008-05-28

    The advancement of the computational biology field hinges on progress in three fundamental directions--the development of new computational algorithms, the availability of informatics resource management infrastructures and the capability of tools to interoperate and synergize. There is an explosion in algorithms and tools for computational biology, which makes it difficult for biologists to find, compare and integrate such resources. We describe a new infrastructure, iTools, for managing the query, traversal and comparison of diverse computational biology resources. Specifically, iTools stores information about three types of resources--data, software tools and web-services. The iTools design, implementation and resource meta-data content reflect the broad research, computational, applied and scientific expertise available at the seven National Centers for Biomedical Computing. iTools provides a system for classification, categorization and integration of different computational biology resources across space-and-time scales, biomedical problems, computational infrastructures and mathematical foundations. A large number of resources are already iTools-accessible to the community and this infrastructure is rapidly growing. iTools includes human and machine interfaces to its resource meta-data repository. Investigators or computer programs may utilize these interfaces to search, compare, expand, revise and mine meta-data descriptions of existent computational biology resources. We propose two ways to browse and display the iTools dynamic collection of resources. The first one is based on an ontology of computational biology resources, and the second one is derived from hyperbolic projections of manifolds or complex structures onto planar discs. iTools is an open source project both in terms of the source code development as well as its meta-data content. iTools employs a decentralized, portable, scalable and lightweight framework for long-term resource management. We demonstrate several applications of iTools as a framework for integrated bioinformatics. iTools and the complete details about its specifications, usage and interfaces are available at the iTools web page http://iTools.ccb.ucla.edu.

  17. iTools: A Framework for Classification, Categorization and Integration of Computational Biology Resources

    PubMed Central

    Dinov, Ivo D.; Rubin, Daniel; Lorensen, William; Dugan, Jonathan; Ma, Jeff; Murphy, Shawn; Kirschner, Beth; Bug, William; Sherman, Michael; Floratos, Aris; Kennedy, David; Jagadish, H. V.; Schmidt, Jeanette; Athey, Brian; Califano, Andrea; Musen, Mark; Altman, Russ; Kikinis, Ron; Kohane, Isaac; Delp, Scott; Parker, D. Stott; Toga, Arthur W.

    2008-01-01

    The advancement of the computational biology field hinges on progress in three fundamental directions – the development of new computational algorithms, the availability of informatics resource management infrastructures and the capability of tools to interoperate and synergize. There is an explosion in algorithms and tools for computational biology, which makes it difficult for biologists to find, compare and integrate such resources. We describe a new infrastructure, iTools, for managing the query, traversal and comparison of diverse computational biology resources. Specifically, iTools stores information about three types of resources–data, software tools and web-services. The iTools design, implementation and resource meta-data content reflect the broad research, computational, applied and scientific expertise available at the seven National Centers for Biomedical Computing. iTools provides a system for classification, categorization and integration of different computational biology resources across space-and-time scales, biomedical problems, computational infrastructures and mathematical foundations. A large number of resources are already iTools-accessible to the community and this infrastructure is rapidly growing. iTools includes human and machine interfaces to its resource meta-data repository. Investigators or computer programs may utilize these interfaces to search, compare, expand, revise and mine meta-data descriptions of existent computational biology resources. We propose two ways to browse and display the iTools dynamic collection of resources. The first one is based on an ontology of computational biology resources, and the second one is derived from hyperbolic projections of manifolds or complex structures onto planar discs. iTools is an open source project both in terms of the source code development as well as its meta-data content. iTools employs a decentralized, portable, scalable and lightweight framework for long-term resource management. We demonstrate several applications of iTools as a framework for integrated bioinformatics. iTools and the complete details about its specifications, usage and interfaces are available at the iTools web page http://iTools.ccb.ucla.edu. PMID:18509477

  18. Software Quality Control at Belle II

    NASA Astrophysics Data System (ADS)

    Ritter, M.; Kuhr, T.; Hauth, T.; Gebard, T.; Kristof, M.; Pulvermacher, C.; Belle Software Group, II

    2017-10-01

    Over the last seven years the software stack of the next generation B factory experiment Belle II has grown to over one million lines of C++ and Python code, counting only the part included in offline software releases. There are several thousand commits to the central repository by about 100 individual developers per year. Keeping the software stack coherent and of high quality, so that it can be sustained and used efficiently for data acquisition, simulation, reconstruction, and analysis over the lifetime of the Belle II experiment, is a challenge. A set of tools is employed to monitor the quality of the software and provide fast feedback to the developers. They are integrated into a machinery that is controlled by a buildbot master and automates the quality checks. The tools include different compilers, cppcheck, the clang static analyzer, valgrind memcheck, doxygen, a geometry overlap checker, a check for missing or extra library links, unit tests, steering file level tests, a sophisticated high-level validation suite, and an issue tracker. The technological development infrastructure is complemented by organizational means to coordinate the development.

  19. Award ER25750: Coordinated Infrastructure for Fault Tolerance Systems Indiana University Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lumsdaine, Andrew

    2013-03-08

    The main purpose of the Coordinated Infrastructure for Fault Tolerance in Systems initiative has been to conduct research with a goal of providing end-to-end fault tolerance on a systemwide basis for applications and other system software. While fault tolerance has been an integral part of most high-performance computing (HPC) system software developed over the past decade, it has been treated mostly as a collection of isolated stovepipes. Visibility and response to faults has typically been limited to the particular hardware and software subsystems in which they are initially observed. Little fault information is shared across subsystems, allowing little flexibility or control on a system-wide basis, making it practically impossible to provide cohesive end-to-end fault tolerance in support of scientific applications. As an example, consider faults such as communication link failures that can be seen by a network library but are not directly visible to the job scheduler, or consider faults related to node failures that can be detected by system monitoring software but are not inherently visible to the resource manager. If information about such faults could be shared by the network libraries or monitoring software, then other system software, such as a resource manager or job scheduler, could ensure that failed nodes or failed network links were excluded from further job allocations and that further diagnosis could be performed. As a founding member and one of the lead developers of the Open MPI project, our efforts over the course of this project have been focused on making Open MPI more robust to failures by supporting various fault tolerance techniques, and using fault information exchange and coordination between MPI and the HPC system software stack from the application, numeric libraries, and programming language runtime to other common system components such as job schedulers, resource managers, and monitoring tools.

  20. Software tools and e-infrastructure services to support the long term preservation of earth science data - new functionality from the SCIDIP-ES project

    NASA Astrophysics Data System (ADS)

    Riddick, Andrew; Glaves, Helen; Crompton, Shirley; Giaretta, David; Ritchie, Brian; Pepler, Sam; De Smet, Wim; Marelli, Fulvio; Mantovani, Pier-Luca

    2014-05-01

    The ability to preserve earth science data for the long-term is a key requirement to support on-going research and collaboration within and between earth science disciplines. A number of critically important current research initiatives (e.g. understanding climate change or ensuring sustainability of natural resources) typically rely on the continuous availability of data collected over several decades in a form which can be easily accessed and used by scientists. In many earth science disciplines the capture of key observational data may be difficult or even impossible to repeat. For example, a specific geological exposure or subsurface borehole may be only temporarily available, and earth observation data derived from a particular satellite mission is often unique. Another key driver for long-term data preservation is that the grand challenges of the kind described above frequently involve cross-disciplinary research utilising raw and interpreted data from a number of related earth science disciplines. Adopting effective data preservation strategies supports this requirement for interoperability as well as ensuring long term usability of earth science data, and has the added potential for stimulating innovative earth science research. The EU-funded SCIDIP-ES project seeks to address these challenges by developing a Europe-wide e-infrastructure for long-term data preservation by providing appropriate software tools and infrastructure services to enable and promote long-term preservation of earth science data. This poster will describe the current status of this e-infrastructure and outline the integration of the prototype SCIDIP-ES software components into the existing systems used by earth science archives and data providers. These prototypes utilise a system architecture which stores preservation information in a standardised OAIS-compliant way, and connects and adds value to existing earth science archives. A SCIDIP-ES test-bed has been implemented by the National Geoscience Data Centre (NGDC) and the British Atmospheric Data Centre (BADC) in the UK, which allows datasets to be more easily integrated and preserved for future use. Many of the data preservation requirements of these two key Natural Environment Research Council (NERC) data centres are common to other earth science data providers and are therefore more widely applicable. The capability for interoperability between datasets stored in different formats is a common requirement for the long-term preservation of data, and the way in which this is supported by the SCIDIP-ES tools and services will be explained.

  1. Initial implementation of a comparative data analysis ontology.

    PubMed

    Prosdocimi, Francisco; Chisham, Brandon; Pontelli, Enrico; Thompson, Julie D; Stoltzfus, Arlin

    2009-07-03

    Comparative analysis is used throughout biology. When entities under comparison (e.g. proteins, genomes, species) are related by descent, evolutionary theory provides a framework that, in principle, allows N-ary comparisons of entities, while controlling for non-independence due to relatedness. Powerful software tools exist for specialized applications of this approach, yet it remains under-utilized in the absence of a unifying informatics infrastructure. A key step in developing such an infrastructure is the definition of a formal ontology. The analysis of use cases and existing formalisms suggests that a significant component of evolutionary analysis involves a core problem of inferring a character history, relying on key concepts: "Operational Taxonomic Units" (OTUs), representing the entities to be compared; "character-state data" representing the observations compared among OTUs; "phylogenetic tree", representing the historical path of evolution among the entities; and "transitions", the inferred evolutionary changes in states of characters that account for observations. Using the Web Ontology Language (OWL), we have defined these and other fundamental concepts in a Comparative Data Analysis Ontology (CDAO). CDAO has been evaluated for its ability to represent token data sets and to support simple forms of reasoning. With further development, CDAO will provide a basis for tools (for semantic transformation, data retrieval, validation, integration, etc.) that make it easier for software developers and biomedical researchers to apply evolutionary methods of inference to diverse types of data, so as to integrate this powerful framework for reasoning into their research.
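
    A minimal sketch of how the core concepts named above (OTUs, character-state data, phylogenetic trees, transitions) might be declared as OWL classes, using the rdflib library. The namespace URI, class names and property shown here are assumptions for illustration and are not taken from the published CDAO ontology.

    ```python
    from rdflib import Graph, Namespace
    from rdflib.namespace import OWL, RDF, RDFS

    # Hypothetical namespace; the real CDAO ontology defines its own IRIs.
    CDAO = Namespace("http://example.org/cdao#")

    g = Graph()
    g.bind("cdao", CDAO)

    # Declare the four core concepts from the abstract as OWL classes.
    for cls in ("OTU", "CharacterStateDatum", "PhylogeneticTree", "Transition"):
        g.add((CDAO[cls], RDF.type, OWL.Class))

    # A transition is an inferred evolutionary change located on a tree.
    g.add((CDAO.occursOn, RDF.type, OWL.ObjectProperty))
    g.add((CDAO.occursOn, RDFS.domain, CDAO.Transition))
    g.add((CDAO.occursOn, RDFS.range, CDAO.PhylogeneticTree))

    print(g.serialize(format="turtle"))
    ```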

  2. Lbs Augmented Reality Assistive System for Utilities Infrastructure Management Through Galileo and Egnos

    NASA Astrophysics Data System (ADS)

    Stylianidis, E.; Valaria, E.; Smagas, K.; Pagani, A.; Henriques, J.; Garca, A.; Jimeno, E.; Carrillo, I.; Patias, P.; Georgiadis, C.; Kounoudes, A.; Michail, K.

    2016-06-01

    There is a continuous and increasing demand for solutions, both software and hardware-based, that are able to productively handle underground utilities geospatial data. Innovative approaches that are based on the use of the European GNSS, Galileo and EGNOS, sensor technologies and LBS, are able to monitor, document and manage utility infrastructures' data with an intuitive 3D augmented visualisation and navigation/positioning technology. A software and hardware-based system called LARA, currently under development through a H2020 co-funded project, aims at meeting that demand. The concept of LARA is to integrate the different innovative components of existing technologies in order to design and develop an integrated navigation/positioning and information system which coordinates GNSS, AR, 3D GIS and geodatabases on a mobile platform for monitoring, documenting and managing utility infrastructures on-site. The LARA system will guide utility field workers to locate the working area by helping them see beneath the ground, rendering the complexity of the 3D models of the underground grid such as water, gas and electricity. The capacity and benefits of LARA are scheduled to be tested in two case studies located in Greece and the United Kingdom with various underground utilities. The paper aspires to present the first results from this initiative. The project leading to this application has received funding from the European GNSS Agency under the European Union's Horizon 2020 research and innovation programme under grant agreement No 641460.

  3. The Education and Public Engagement (EPE) Component of the Ocean Observatories Initiative (OOI): Enabling Near Real-Time Data Use in Undergraduate Classrooms

    NASA Astrophysics Data System (ADS)

    Glenn, S. M.; Companion, C.; Crowley, M.; deCharon, A.; Fundis, A. T.; Kilb, D. L.; Levenson, S.; Lichtenwalner, C. S.; McCurdy, A.; McDonnell, J. D.; Overoye, D.; Risien, C. M.; Rude, A.; Wieclawek, J., III

    2011-12-01

    The National Science Foundation's Ocean Observatories Initiative (OOI) is constructing observational and computer infrastructure that will provide sustained ocean measurements to study climate variability, ocean circulation, ecosystem dynamics, air-sea exchange, seafloor processes, and plate-scale geodynamics over the next ~25-30 years. To accomplish this, the Consortium for Ocean Leadership established four Implementing Organizations: (1) Regional Scale Nodes; (2) Coastal and Global Scale Nodes; (3) Cyberinfrastructure (CI); and (4) Education and Public Engagement (EPE). The EPE, which we represent, was just recently established to provide a new layer of cyber-interactivity for educators to bring near real-time data, images and videos of our Earth's oceans into their learning environments. Our focus over the next four years is engaging educators of undergraduates and free-choice learners. Demonstration projects of the OOI capabilities will use an Integrated Education Toolkit to access OOI data through the Cyberinfrastructure's On Demand Measurement Processing capability. We will present our plans to develop six education infrastructure software modules: Education Web Services (middleware), Visualization Tools, Concept Map and Lab/Lesson Builders, Collaboration Tools, and an Education Resources Database. The software release of these tools is staggered to coincide with other major OOI releases. The first release will include stand-alone versions of the first four EPE modules (Fall 2012). Next, all six EPE modules will be integrated within the OOI cyber-framework (Fall 2013). The last release will include advanced capabilities for all six modules within a collaborative network that leverages the CI's Integrated Observatory Network (Fall 2014). We are looking for undergraduate and informal science educators to provide feedback and guidance on the project, please contact us if you are interested in partnering with us.

  4. Continuous integration and quality control for scientific software

    NASA Astrophysics Data System (ADS)

    Neidhardt, A.; Ettl, M.; Brisken, W.; Dassing, R.

    2013-08-01

    Modern software has to be stable, portable, fast and reliable. This is becoming more and more important for scientific software as well. But it requires a sophisticated way to inspect, check and evaluate the quality of source code with a suitable, automated infrastructure. A centralized server with a software repository and a version control system is one essential part, to manage the code basis and to control the different development versions. While each project can be compiled separately, the whole code basis can also be compiled with one central “Makefile”. This is used to create automated, nightly builds. Additionally all sources are inspected automatically with static code analysis and inspection tools, which check well-known error situations, memory and resource leaks, performance issues, or style issues. In combination with an automatic documentation generator it is possible to create the developer documentation directly from the code and the inline comments. All reports and generated information are presented as HTML pages on a Web server. Because this environment increased the stability and quality of the software of the Geodetic Observatory Wettzell tremendously, it is now also available for scientific communities. One regular customer is already the developer group of the DiFX software correlator project.
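
    A minimal sketch of the kind of nightly job the abstract describes: build everything from the central checkout, run a static analysis tool, generate documentation, and leave an HTML report behind on the web server. The paths, project layout and report location are assumptions; cppcheck and doxygen are invoked with commonly used options rather than the observatory's actual configuration.

    ```python
    import datetime
    import pathlib
    import subprocess

    CHECKOUT = pathlib.Path("/srv/nightly/checkout")   # hypothetical working copy
    REPORTS = pathlib.Path("/srv/nightly/reports")     # served by the web server

    def run(cmd, logfile):
        """Run one build/check step and capture its output for the report."""
        with open(logfile, "w") as log:
            return subprocess.run(cmd, cwd=CHECKOUT, stdout=log,
                                  stderr=subprocess.STDOUT).returncode

    def nightly():
        outdir = REPORTS / datetime.date.today().isoformat()
        outdir.mkdir(parents=True, exist_ok=True)

        steps = [
            (["make", "all"], "build.log"),                       # central Makefile build
            (["cppcheck", "--enable=all", "src/"], "cppcheck.log"),  # static analysis
            (["doxygen", "Doxyfile"], "doxygen.log"),             # docs from inline comments
        ]
        results = [(cmd[0], run(cmd, outdir / log)) for cmd, log in steps]

        # Very small HTML summary page for the report server.
        rows = "".join(f"<tr><td>{name}</td><td>{'OK' if rc == 0 else 'FAILED'}</td></tr>"
                       for name, rc in results)
        (outdir / "index.html").write_text(f"<table>{rows}</table>")

    if __name__ == "__main__":
        nightly()
    ```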

  5. Long-term real-time structural health monitoring using wireless smart sensor

    NASA Astrophysics Data System (ADS)

    Jang, Shinae; Mensah-Bonsu, Priscilla O.; Li, Jingcheng; Dahal, Sushil

    2013-04-01

    Improving the safety and security of civil infrastructure has been a critical issue for decades, since it plays a central role in the economics and politics of a modern society. Structural health monitoring of civil infrastructure using wireless smart sensor networks has recently emerged as a promising solution to increase structural reliability, enhance inspection quality, and reduce maintenance costs. Though hardware and software frameworks are well developed for wireless smart sensors, long-term real-time health monitoring strategies are still not available due to the lack of a systematic interface. In this paper, the Imote2 smart sensor platform is employed, and a graphical user interface for long-term real-time structural health monitoring has been developed based on Matlab for the Imote2 platform. This computer-aided engineering platform enables control and visualization of measured data, as well as a safety-alarm feature based on fluctuations in modal properties. A new decision-making strategy to check safety is also developed and integrated in this software. Laboratory validation of the computer-aided engineering platform for the Imote2 on a truss bridge and a building structure has shown the potential of the interface for long-term real-time structural health monitoring.
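
    A minimal sketch of the decision rule implied above: track identified natural frequencies over time and raise an alarm when they drift beyond a tolerance from a healthy baseline. The baseline values, tolerance and function name are hypothetical; the paper's actual decision-making strategy is not reproduced here.

    ```python
    # Hypothetical baseline natural frequencies (Hz) identified when the
    # structure was known to be healthy, and an allowed relative drift.
    BASELINE_HZ = [2.41, 7.88, 15.32]
    TOLERANCE = 0.05  # 5 % relative change triggers the alarm

    def check_modal_health(current_hz, baseline_hz=BASELINE_HZ, tol=TOLERANCE):
        """Return a list of (mode index, relative change) pairs that exceed the tolerance."""
        alarms = []
        for i, (f_now, f_ref) in enumerate(zip(current_hz, baseline_hz)):
            drift = abs(f_now - f_ref) / f_ref
            if drift > tol:
                alarms.append((i, drift))
        return alarms

    # Example: mode 2 has dropped noticeably, which would raise the safety alarm.
    print(check_modal_health([2.40, 7.85, 14.10]))   # -> [(2, 0.0796...)]
    ```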

  6. cOSPREY: A Cloud-Based Distributed Algorithm for Large-Scale Computational Protein Design

    PubMed Central

    Pan, Yuchao; Dong, Yuxi; Zhou, Jingtian; Hallen, Mark; Donald, Bruce R.; Xu, Wei

    2016-01-01

    Finding the global minimum energy conformation (GMEC) of a huge combinatorial search space is the key challenge in computational protein design (CPD) problems. Traditional algorithms lack a scalable and efficient distributed design scheme, preventing researchers from taking full advantage of current cloud infrastructures. We design cloud OSPREY (cOSPREY), an extension to the widely used protein design software OSPREY, to allow the original design framework to scale to commercial cloud infrastructures. We propose several novel designs to integrate both algorithm and system optimizations, such as GMEC-specific pruning, state search partitioning, asynchronous algorithm state sharing, and fault tolerance. We evaluate cOSPREY on three different cloud platforms using different technologies and show that it can solve a number of large-scale protein design problems that have not been possible with previous approaches. PMID:27154509

  7. Software Reuse Methods to Improve Technological Infrastructure for e-Science

    NASA Technical Reports Server (NTRS)

    Marshall, James J.; Downs, Robert R.; Mattmann, Chris A.

    2011-01-01

    Social computing has the potential to contribute to scientific research. Ongoing developments in information and communications technology improve capabilities for enabling scientific research, including research fostered by social computing capabilities. The recent emergence of e-Science practices has demonstrated the benefits from improvements in the technological infrastructure, or cyber-infrastructure, that has been developed to support science. Cloud computing is one example of this e-Science trend. Our own work in the area of software reuse offers methods that can be used to improve new technological development, including cloud computing capabilities, to support scientific research practices. In this paper, we focus on software reuse and its potential to contribute to the development and evaluation of information systems and related services designed to support new capabilities for conducting scientific research.

  8. Authentic Astronomical Discovery in Planetariums: Bringing Data to Domes

    NASA Astrophysics Data System (ADS)

    Wyatt, Ryan Jason; Subbarao, Mark; Christensen, Lars; Emmons, Ben; Hurt, Robert

    2018-01-01

    Planetariums offer a unique opportunity to disseminate astronomical discoveries using data visualization at all levels of complexity: the technical infrastructure to display data and a sizeable cohort of enthusiastic educators to interpret results. “Data to Dome” is an initiative of the International Planetarium Society to develop our community’s capacity to integrate data in fulldome planetarium systems—including via open source software platforms such as WorldWide Telescope and OpenSpace. We are cultivating a network of planetarium professionals who integrate data into their presentations and share their content with others. Furthermore, we propose to shorten the delay between discovery and dissemination in planetariums. Currently, the “latest science” is often presented days or weeks after discoveries are announced, and we can shorten this to hours or even minutes. The Data2Dome (D2D) initiative, led by the European Southern Observatory, proposes technical infrastructure and data standards that will streamline content flow from research institutions to planetariums, offering audiences a unique opportunity to access the latest astronomical data in near real time.

  9. Integrated System Health Management: Foundational Concepts, Approach, and Implementation

    NASA Technical Reports Server (NTRS)

    Figueroa, Fernando

    2009-01-01

    A sound basis to guide the community in the conception and implementation of ISHM (Integrated System Health Management) capability in operational systems was provided. The concept of "ISHM Model of a System" and a related architecture defined as a unique Data, Information, and Knowledge (DIaK) architecture were described. The ISHM architecture is independent of the typical system architecture, which is based on grouping physical elements that are assembled to make up a subsystem, with subsystems combining to form systems, etc. It was emphasized that ISHM capability needs to be implemented first at a low functional capability level (FCL), or limited ability to detect anomalies, diagnose, determine consequences, etc. As algorithms and tools to augment or improve the FCL are identified, they should be incorporated into the system. This means that the architecture, DIaK management, and software must be modular and standards-based, in order to enable systematic augmentation of FCL (no ad-hoc modifications). A set of technologies (and tools) needed to implement ISHM was described. One essential tool is a software environment to create the ISHM Model. The software environment encapsulates DIaK, and an infrastructure to focus DIaK on determining health (detect anomalies, determine causes, determine effects, and provide integrated awareness of the system to the operator). The environment includes gateways to communicate in accordance with standards, especially the IEEE 1451.1 Standard for Smart Sensors and Actuators.

  10. Applications of the pipeline environment for visual informatics and genomics computations

    PubMed Central

    2011-01-01

    Background Contemporary informatics and genomics research require efficient, flexible and robust management of large heterogeneous data, advanced computational tools, powerful visualization, reliable hardware infrastructure, interoperability of computational resources, and detailed data and analysis-protocol provenance. The Pipeline is a client-server distributed computational environment that facilitates the visual graphical construction, execution, monitoring, validation and dissemination of advanced data analysis protocols. Results This paper reports on the applications of the LONI Pipeline environment to address two informatics challenges - graphical management of diverse genomics tools, and the interoperability of informatics software. Specifically, this manuscript presents the concrete details of deploying general informatics suites and individual software tools to new hardware infrastructures, the design, validation and execution of new visual analysis protocols via the Pipeline graphical interface, and integration of diverse informatics tools via the Pipeline eXtensible Markup Language syntax. We demonstrate each of these processes using several established informatics packages (e.g., miBLAST, EMBOSS, mrFAST, GWASS, MAQ, SAMtools, Bowtie) for basic local sequence alignment and search, molecular biology data analysis, and genome-wide association studies. These examples demonstrate the power of the Pipeline graphical workflow environment to enable integration of bioinformatics resources which provide a well-defined syntax for dynamic specification of the input/output parameters and the run-time execution controls. Conclusions The LONI Pipeline environment http://pipeline.loni.ucla.edu provides a flexible graphical infrastructure for efficient biomedical computing and distributed informatics research. The interactive Pipeline resource manager enables the utilization and interoperability of diverse types of informatics resources. The Pipeline client-server model provides computational power to a broad spectrum of informatics investigators - experienced developers and novice users, users with or without access to advanced computational resources (e.g., Grid, data), as well as basic and translational scientists. The open development, validation and dissemination of computational networks (pipeline workflows) facilitates the sharing of knowledge, tools, protocols and best practices, and enables the unbiased validation and replication of scientific findings by the entire community. PMID:21791102

  11. SDR/STRS Flight Experiment and the Role of SDR-Based Communication and Navigation Systems

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.

    2008-01-01

    This presentation describes an open architecture SDR (software defined radio) infrastructure, suitable for space-based radios and operations, entitled Space Telecommunications Radio System (STRS). SDR technologies will endow space and planetary exploration systems with dramatically increased capability, reduced power consumption, and less mass than conventional systems, at costs reduced by vigorous competition, hardware commonality, dense integration, minimizing the impact of parts obsolescence, improved interoperability, and software re-use. To advance the SDR architecture technology and demonstrate its applicability in space, NASA is developing a space experiment of multiple SDRs, each with various waveforms, to communicate with NASA's TDRSS satellite and ground networks, and the GPS constellation. An experiments program will investigate S-band and Ka-band communications, navigation, and networking technologies and operations.

  12. Toolkit of Available EPA Green Infrastructure Modeling Software. National Stormwater Calculator

    EPA Science Inventory

    This webinar will present a toolkit consisting of five EPA green infrastructure models and tools, along with communication material. This toolkit can be used as a teaching and quick reference resource for use by planners and developers when making green infrastructure implementat...

  13. Tool for Smart Integration of Solar Power

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Becker, Alan

    2017-01-31

    Kevala addresses a significant problem in solar deployment - reducing the risk of investing in solar by determining the inherent value of solar electricity based on the location where it is produced. Kevala’s product will transform the way solar assets are proposed, assessed, and financed resulting in lower capital costs, opening new markets and streamlining siting and customer acquisition. Using detailed electricity infrastructure data, pricing information, GIS mapping, and proprietary algorithms, Kevala’s Grid Assessor software lowers financial risk by providing transparency into the current and future value of projects based on their location.

  14. Designing a low cost bedside workstation for intensive care units.

    PubMed Central

    Michel, A.; Zörb, L.; Dudeck, J.

    1996-01-01

    The paper describes the design and implementation of a software architecture for a low cost bedside workstation for intensive care units. The development is fully integrated into the information infrastructure of the existing hospital information system (HIS) at the University Hospital of Giessen. It provides cost-efficient and reliable access for data entry and review from the HIS database from within patient rooms, even in very space-limited environments. The architecture further supports automatic data input from medical devices. First results from three different intensive care units are reported. PMID:8947771

  15. Design and Implementation of a Secure Modbus Protocol

    NASA Astrophysics Data System (ADS)

    Fovino, Igor Nai; Carcano, Andrea; Masera, Marcelo; Trombetta, Alberto

    The interconnectivity of modern and legacy supervisory control and data acquisition (SCADA) systems with corporate networks and the Internet has significantly increased the threats to critical infrastructure assets. Meanwhile, traditional IT security solutions such as firewalls, intrusion detection systems and antivirus software are relatively ineffective against attacks that specifically target vulnerabilities in SCADA protocols. This paper describes a secure version of the Modbus SCADA protocol that incorporates integrity, authentication, non-repudiation and anti-replay mechanisms. Experimental results using a power plant testbed indicate that the augmented protocol provides good security functionality without significant overhead.
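
    To make the integrity and anti-replay mechanisms concrete, here is a generic sketch of wrapping a Modbus application data unit with a monotonically increasing sequence number and an HMAC, which the receiver verifies before accepting the frame. This illustrates the general technique only; it is not the specific secure-Modbus format defined in the paper, and the shared key, field widths and framing are assumptions.

    ```python
    import hashlib
    import hmac
    import struct

    SHARED_KEY = b"example-preshared-key"   # assumption: key distribution is out of scope

    def wrap_adu(adu: bytes, seq: int, key: bytes = SHARED_KEY) -> bytes:
        """Prepend a 4-byte sequence number and append an HMAC-SHA256 tag."""
        header = struct.pack(">I", seq)
        tag = hmac.new(key, header + adu, hashlib.sha256).digest()
        return header + adu + tag

    def unwrap_adu(frame: bytes, last_seq: int, key: bytes = SHARED_KEY):
        """Verify integrity and reject replays; return (seq, adu) or raise ValueError."""
        header, adu, tag = frame[:4], frame[4:-32], frame[-32:]
        expected = hmac.new(key, header + adu, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("integrity check failed")
        (seq,) = struct.unpack(">I", header)
        if seq <= last_seq:
            raise ValueError("replayed or out-of-order frame")
        return seq, adu

    # Example: a 'read holding registers' request (function code 0x03) survives the
    # round trip, while a replay of the same frame would be rejected.
    request = bytes([0x03, 0x00, 0x00, 0x00, 0x02])
    frame = wrap_adu(request, seq=1)
    print(unwrap_adu(frame, last_seq=0))   # (1, b'\x03\x00\x00\x00\x02')
    ```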

  16. Software reconfigurable processor technologies: the key to long-life infrastructure for future space missions

    NASA Technical Reports Server (NTRS)

    Srinivasan, J.; Farrington, A.; Gray, A.

    2001-01-01

    The authors present an overview of long-life reconfigurable processor technologies and of a specific architecture for implementing a software-reconfigurable (software-defined) network processor for space applications.

  17. Solving the Software Legacy Problem with RISA

    NASA Astrophysics Data System (ADS)

    Ibarra, A.; Gabriel, C.

    2012-09-01

    Nowadays hardware and system infrastructure evolve on time scales much shorter than the typical duration of space astronomy missions. Data processing software capabilities have to evolve to preserve the scientific return during the entire experiment lifetime. Software preservation is a key issue that has to be tackled before the end of the project to keep the data usable over many years. We present RISA (Remote Interface to Science Analysis) as a solution to decouple the data processing software and infrastructure life-cycles, using Java applications and web-service wrappers around existing software. This architecture employs embedded SAS in virtual machines, ensuring a homogeneous job execution environment. We will also present the first studies to reactivate the data processing software of the EXOSAT mission, the first ESA X-ray astronomy mission launched in 1983, using the generic RISA approach.

  18. Next generation of decision making software for nanopatterns characterization: application to semiconductor industry

    NASA Astrophysics Data System (ADS)

    Dervilllé, A.; Labrosse, A.; Zimmermann, Y.; Foucher, J.; Gronheid, R.; Boeckx, C.; Singh, A.; Leray, P.; Halder, S.

    2016-03-01

    Dimensional scaling in IC manufacturing strongly drives the demands on CD and defect metrology techniques and their measurement uncertainties. Defect review has become as important as CD metrology, and together they create a new metrology paradigm with a completely new need for flexible, robust and scalable metrology software. Current software architectures and metrology algorithms perform well, but they must be pushed to a higher level in order to keep pace with roadmap speed and requirements: for example, handling defects and CD in a single-step algorithm, customizing algorithms and output features for each R&D team's environment, and providing software updates every day or every week so that R&D teams can easily explore various development strategies. The final goal is to avoid spending hours and days manually tuning algorithms to analyze metrology data, and to allow R&D teams to stay focused on their expertise. The benefits are drastic cost reduction, more efficient R&D teams and better process quality. In this paper, we propose a new generation of software platform and development infrastructure which can integrate specific metrology business modules. For example, we will show the integration of a chemistry module dedicated to electronics materials such as Directed Self-Assembly features. We will show a new generation of image analysis algorithms able to manage defect rates, image classification, CD and roughness measurements at the same time, with the high-throughput performance needed to be compatible with HVM. In a second part, we will assess the reliability, the customizability of the algorithms and the software platform's ability to meet new semiconductor metrology software requirements: flexibility, robustness, high throughput and scalability. Finally, we will demonstrate how such an environment has allowed a drastic reduction of the data analysis cycle time.

  19. The Contribution for Improving GNSS Data and Derived Products for Solid Earth Sciences Promoted by EPOS-IP

    NASA Astrophysics Data System (ADS)

    Fernandes, R. M. S.; Bos, M. S.; Bruyninx, C.; Crocker, P.; Dousa, J.; Walpersdorf, A.; Socquet, A.; Avallone, A.; Ganas, A.; Ionescu, C.; Kenyeres, A.; Ofeigsson, B.; Ozener, H.; Vergnolle, M.; Lidberg, M.; Liwosz, T.; Soehne, W.; Bezdeka, P.; Cardoso, R.; Cotte, N.; Couto, R.; D'Agostino, N.; Deprez, A.; Fabian, A.; Gonçalves, H.; Féres, L.; Legrand, J.; Menut, J. L.; Nastase, E.; Ngo, K. M.; Sigurðarson, F.; Vaclavovic, P.

    2017-12-01

    The GNSS working group, part of the EPOS-IP (European Plate Observing System - Implementation Phase) project, oversees the implementation of services focused on GNSS data and derived products for the use of the geo-sciences community. The objective is to serve essentially the Solid Earth community, but other scientific and technical communities will also be able to benefit from the efforts being carried out to access the data (and derived products) of the European Geodetic Infrastructures. The geodetic component of EPOS deals essentially with implementing an e-infrastructure to store and disseminate continuous GNSS data (and derived solutions) from existing Research Infrastructures and new dedicated services. Present efforts focus on developing an integrated software package, called GLASS, that will permit quality-controlled data (checked using dedicated tools) to be disseminated in a seamless way from dozens of Geodetic Research Infrastructures in Europe. Conceptually, GLASS can be used in a single Research Infrastructure or in hundreds of cooperating ones. We present and discuss the status of the implementation of these services, including the generation of products - time series, velocity fields and strain-rate fields. Specifically, we will present the results of the current validation phase of these services and we will discuss in detail the technical and cooperative efforts being implemented. EPOS-IP is an ESFRI project funded by the European Union.

  20. Towards the ecotourism: a decision support model for the assessment of sustainability of mountain huts in the Alps.

    PubMed

    Stubelj Ars, Mojca; Bohanec, Marko

    2010-12-01

    This paper studies mountain hut infrastructure in the Alps as an important element of ecotourism in the Alpine region. To improve the decision-making process regarding the implementation of future infrastructure and improvement of existing infrastructure in the vulnerable natural environment of mountain ecosystems, a new decision support model has been developed. The methodology is based on qualitative multi-attribute modelling supported by the DEXi software. The integrated rule-based model is hierarchical and consists of two submodels that cover the infrastructure of the mountain huts and that of the huts' surroundings. The final goal for the designed tool is to help minimize the ecological footprint of tourists in environmentally sensitive and undeveloped mountain areas and contribute to mountain ecotourism. The model has been tested in the case study of four mountain huts in Triglav National Park in Slovenia. Study findings provide a new empirical approach to evaluating existing mountain infrastructure and predicting improvements for the future. The assessment results are of particular interest for decision makers in protected areas, such as Alpine national parks managers and administrators. In a way, this model proposes an approach to the management assessment of mountain huts with the main aim of increasing the quality of life of mountain environment visitors as well as the satisfaction of tourists who may eventually become ecotourists. Copyright © 2010 Elsevier Ltd. All rights reserved.
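
    A minimal sketch of qualitative multi-attribute aggregation in the spirit of DEXi: two sub-model scores (hut infrastructure and surroundings) are combined by simple if-then rules into an overall sustainability class. The attribute names, value scales and rules are invented for illustration and are not the model published in the paper.

    ```python
    # Qualitative scale, ordered from worst to best (illustrative only).
    SCALE = ["poor", "acceptable", "good"]

    def combine(infrastructure: str, surroundings: str) -> str:
        """Rule-based aggregation of the two sub-model outputs (hypothetical rules)."""
        i, s = SCALE.index(infrastructure), SCALE.index(surroundings)
        if i == 0 or s == 0:          # either sub-model rated 'poor' caps the overall class
            return "poor"
        if i == 2 and s == 2:
            return "good"
        return "acceptable"

    # Example assessment of one hut.
    print(combine(infrastructure="good", surroundings="acceptable"))  # -> acceptable
    ```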

  1. Building analytical platform with Big Data solutions for log files of PanDA infrastructure

    NASA Astrophysics Data System (ADS)

    Alekseev, A. A.; Barreiro Megino, F. G.; Klimentov, A. A.; Korchuganova, T. A.; Maendo, T.; Padolski, S. V.

    2018-05-01

    The paper describes the implementation of a high-performance system for the processing and analysis of log files for the PanDA infrastructure of the ATLAS experiment at the Large Hadron Collider (LHC), responsible for the workload management of the order of 2M daily jobs across the Worldwide LHC Computing Grid. The solution is based on the ELK technology stack, which includes several components: Filebeat, Logstash, ElasticSearch (ES), and Kibana. Filebeat is used to collect data from logs. Logstash processes the data and exports it to Elasticsearch. ES is responsible for centralized data storage. Data accumulated in ES can be viewed using Kibana. These components were integrated with the PanDA infrastructure and replaced previous log processing systems for increased scalability and usability. The authors describe all the components and their configuration tuning for the current tasks, the scale of the actual system, and give several real-life examples of how this centralized log processing and storage service is used, to showcase the advantages for daily operations.
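
    As an illustration of how such a centralized store is typically queried from analysis code, the snippet below uses the official Elasticsearch Python client to list recent high-severity PanDA log entries for one day. The host, index pattern and field names are assumptions for illustration; the actual index layout of the system described above is not given in the abstract.

    ```python
    from elasticsearch import Elasticsearch

    # Host, index pattern and field names below are illustrative assumptions.
    es = Elasticsearch("http://localhost:9200")

    query = {
        "bool": {
            "filter": [
                {"range": {"@timestamp": {"gte": "2018-05-01", "lt": "2018-05-02"}}},
                {"term": {"log_level": "ERROR"}},
            ]
        }
    }

    resp = es.search(index="panda-logs-*", query=query, size=5,
                     sort=[{"@timestamp": "desc"}])
    print("matching entries:", resp["hits"]["total"]["value"])
    for hit in resp["hits"]["hits"]:
        print(hit["_source"].get("message", ""))
    ```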

  2. Resilient workflows for computational mechanics platforms

    NASA Astrophysics Data System (ADS)

    Nguyên, Toàn; Trifan, Laurentiu; Désidéri, Jean-Antoine

    2010-06-01

    Workflow management systems have recently been the focus of much interest and of considerable research and deployment effort for scientific applications worldwide [26, 27]. Their ability to abstract applications by wrapping application codes has also underlined the usefulness of such systems for multidiscipline applications [23, 24]. When complex applications need to provide seamless interfaces hiding the technicalities of the computing infrastructures, their high-level modeling, monitoring and execution functionalities help give production teams seamless and effective facilities [25, 31, 33]. Software integration infrastructures based on programming paradigms such as Python, Matlab and Scilab have also provided evidence of the usefulness of such approaches for the tight coupling of multidiscipline application codes [22, 24]. In addition, high-performance computing based on multi-core, multi-cluster infrastructures opens new opportunities for more accurate, more extensive and more robust multi-discipline simulations in the decades to come [28]. This supports the goal of full flight-dynamics simulation of 3D aircraft models within the next decade, opening the way to virtual flight tests and certification of aircraft in the future [23, 24, 29].

  3. Implementing Production Grids

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Ziobarth, John (Technical Monitor)

    2002-01-01

    We have presented the essence of experience gained in building two production Grids, and provided some of the global context for this work. As the reader might imagine, there were a lot of false starts, refinements to the approaches and to the software, and several substantial integration projects (SRB and Condor integrated with Globus) to get where we are today. However, the point of this paper is to try and make it substantially easier for others to get to the point where Information Power Grids (IPG) and the DOE Science Grids are today. This is what is needed in order to move us toward the vision of a common cyber infrastructure for science. The author would also like to remind the readers that this paper primarily represents the actual experiences that resulted from specific architectural and software choices during the design and implementation of these two Grids. The choices made were dictated by the criteria laid out in section 1. There is a lot more Grid software available today than there was four years ago, and several of these packages are being integrated into IPG and the DOE Grids. However, the foundation choices of Globus, SRB, and Condor would not be significantly different today than they were four years ago. Nonetheless, if the GGF is successful in its work - and we have every reason to believe that it will be - then in a few years we will see that the 28 functions provided by these packages will be defined in terms of protocols and MIS, and there will be several robust implementations available for each of the basic components, especially the Grid Common Services. The impact of the emerging Web Grid Services work is not yet clear. It will likely have a substantial impact on building higher-level services; however, it is the opinion of the author that this will in no way obviate the need for the Grid Common Services. These are the foundation of Grids, and the focus of almost all of the operational and persistent infrastructure aspects of Grids.

  4. FOSS Tools for Research Infrastructures - A Success Story?

    NASA Astrophysics Data System (ADS)

    Stender, V.; Schroeder, M.; Wächter, J.

    2015-12-01

    Established initiatives and mandated organizations, e.g. the Initiative for Scientific Cyberinfrastructures (NSF, 2007) or the European Strategy Forum on Research Infrastructures (ESFRI, 2008), promote and foster the development of sustainable research infrastructures. The basic idea behind these infrastructures is the provision of services supporting scientists to search, visualize and access data, to collaborate and exchange information, as well as to publish data and other results. Especially the management of research data is gaining more and more importance. In geosciences these developments have to be merged with the enhanced data management approaches of Spatial Data Infrastructures (SDI). The Centre for GeoInformationTechnology (CeGIT) at the GFZ German Research Centre for Geosciences has the objective to establish concepts and standards of SDIs as an integral part of research infrastructure architectures. In different projects, solutions to manage research data for land- and water management or environmental monitoring have been developed based on a framework consisting of Free and Open Source Software (FOSS) components. The framework provides basic components supporting the import and storage of data, discovery and visualization, as well as data documentation (metadata). In our contribution, we present our data management solutions developed in three projects, Central Asian Water (CAWa), Sustainable Management of River Oases (SuMaRiO) and Terrestrial Environmental Observatories (TERENO), where FOSS components form the backbone of the data management platform. The repeated use and validation of these tools helped establish a standardized architectural blueprint serving as a contribution to Research Infrastructures. We examine the question of whether FOSS tools are really a sustainable choice and whether the increased maintenance effort is justified. Finally, this should help answer the question of whether the use of FOSS for Research Infrastructures is a success story.

  5. A microkernel design for component-based parallel numerical software systems.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balay, S.

    1999-01-13

    What is the minimal software infrastructure and what type of conventions are needed to simplify development of sophisticated parallel numerical application codes using a variety of software components that are not necessarily available as source code? We propose an opaque object-based model where the objects are dynamically loadable from the file system or network. The microkernel required to manage such a system needs to include, at most: (1) a few basic services, namely--a mechanism for loading objects at run time via dynamic link libraries, and consistent schemes for error handling and memory management; and (2) selected methods that all objects share, to deal with object life (destruction, reference counting, relationships), and object observation (viewing, profiling, tracing). We are experimenting with these ideas in the context of extensible numerical software within the ALICE (Advanced Large-scale Integrated Computational Environment) project, where we are building the microkernel to manage the interoperability among various tools for large-scale scientific simulations. This paper presents some preliminary observations and conclusions from our work with microkernel design.
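
    A minimal sketch of the microkernel idea described above: load component implementations at run time by name, and manage object lifetime with explicit reference counting and a destroy hook. The module/class naming convention and the base class are assumptions for illustration; the ALICE microkernel itself works against dynamically linked libraries rather than Python modules.

    ```python
    import importlib

    class ManagedObject:
        """Base class giving every component reference counting and a destroy hook."""
        def __init__(self):
            self._refcount = 1

        def retain(self):
            self._refcount += 1
            return self

        def release(self):
            self._refcount -= 1
            if self._refcount == 0:
                self.destroy()

        def destroy(self):
            """Subclasses free their resources here."""

    def load_component(spec: str) -> ManagedObject:
        """Load 'package.module:ClassName' at run time, as a stand-in for dlopen()."""
        module_name, _, class_name = spec.partition(":")
        module = importlib.import_module(module_name)   # dynamic loading step
        cls = getattr(module, class_name)
        if not issubclass(cls, ManagedObject):
            raise TypeError(f"{spec} does not implement the shared object protocol")
        return cls()

    # Usage (assumes a hypothetical 'solvers.krylov:GMRES' component is installed):
    #   solver = load_component("solvers.krylov:GMRES")
    #   solver.retain(); ...; solver.release(); solver.release()
    ```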

  6. Integrated Service Provisioning in an Ipv6 over ATM Research Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eli Dart; Helen Chen; Jerry Friesen

    1999-02-01

    During the past few years, the worldwide Internet has grown at a phenomenal rate, which has spurred the proposal of innovative network technologies to support the fast, efficient and low-latency transport of a wide spectrum of multimedia traffic types. Existing network infrastructures have been plagued by their inability to provide for real-time application traffic as well as their general lack of resources and resilience to congestion. This work proposes to address these issues by implementing a prototype high-speed network infrastructure consisting of Internet Protocol Version 6 (IPv6) on top of an Asynchronous Transfer Mode (ATM) transport medium. Since ATM is connection-oriented whereas IP uses a connection-less paradigm, the efficient integration of IPv6 over ATM is especially challenging and has generated much interest in the research community. We propose, in collaboration with an industry partner, to implement IPv6 over ATM using a unique approach that integrates IP over fast ATM hardware while still preserving IP's connection-less paradigm. This is achieved by replacing ATM's control software with IP's routing code and by caching IP's forwarding decisions in ATM's VPI/VCI translation tables. Prototype "VR" and distributed-parallel-computing applications will also be developed to exercise the realtime capability of our IPv6 over ATM network.

  7. Online catalog access and distribution of remotely sensed information

    NASA Astrophysics Data System (ADS)

    Lutton, Stephen M.

    1997-09-01

    Remote sensing is providing voluminous data and value-added information products. Electronic sensors, communication electronics, computer software, hardware, and network communications technology have matured to the point where a distributed infrastructure for remotely sensed information is a reality. The amount of remotely sensed data and information is making distributed infrastructure almost a necessity. This infrastructure provides data collection, archiving, cataloging, browsing, processing, and viewing for applications from scientific research to economic, legal, and national security decision making. The remote sensing field is entering a new exciting stage of commercial growth and expansion into the mainstream of government and business decision making. This paper overviews this new distributed infrastructure and then focuses on describing a software system for on-line catalog access and distribution of remotely sensed information.

  8. Constraints and Opportunities in GCM Model Development

    NASA Technical Reports Server (NTRS)

    Schmidt, Gavin; Clune, Thomas

    2010-01-01

    Over the past 30 years climate models have evolved from relatively simple representations of a few atmospheric processes to complex multi-disciplinary system models which incorporate physics from bottom of the ocean to the mesopause and are used for seasonal to multi-million year timescales. Computer infrastructure over that period has gone from punchcard mainframes to modern parallel clusters. Constraints of working within an ever evolving research code mean that most software changes must be incremental so as not to disrupt scientific throughput. Unfortunately, programming methodologies have generally not kept pace with these challenges, and existing implementations now present a heavy and growing burden on further model development as well as limiting flexibility and reliability. Opportunely, advances in software engineering from other disciplines (e.g. the commercial software industry) as well as new generations of powerful development tools can be incorporated by the model developers to incrementally and systematically improve underlying implementations and reverse the long term trend of increasing development overhead. However, these methodologies cannot be applied blindly, but rather must be carefully tailored to the unique characteristics of scientific software development. We will discuss the need for close integration of software engineers and climate scientists to find the optimal processes for climate modeling.

  9. Middleware Case Study: MeDICi

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wynne, Adam S.

    2011-05-05

    In many application domains in science and engineering, data produced by sensors, instruments and networks is naturally processed by software applications structured as a pipeline. Pipelines comprise a sequence of software components that progressively process discrete units of data to produce a desired outcome. For example, in a Web crawler that is extracting semantics from text on Web sites, the first stage in the pipeline might be to remove all HTML tags to leave only the raw text of the document. The second step may parse the raw text to break it down into its constituent grammatical parts, such as nouns, verbs and so on. Subsequent steps may look for names of people or places, interesting events or times so documents can be sequenced on a time line. Each of these steps can be written as a specialized program that works in isolation from the other steps in the pipeline. In many applications, simple linear software pipelines are sufficient. However, more complex applications require topologies that contain forks and joins, creating pipelines comprising branches where parallel execution is desirable. It is also increasingly common for pipelines to process very large files or high volume data streams which impose end-to-end performance constraints. Additionally, processes in a pipeline may have specific execution requirements and hence need to be distributed as services across a heterogeneous computing and data management infrastructure. From a software engineering perspective, these more complex pipelines become problematic to implement. While simple linear pipelines can be built using minimal infrastructure such as scripting languages, complex topologies and large, high volume data processing requires suitable abstractions, run-time infrastructures and development tools to construct pipelines with the desired qualities-of-service and flexibility to evolve to handle new requirements. The above summarizes the reasons we created the MeDICi Integration Framework (MIF) that is designed for creating high-performance, scalable and modifiable software pipelines. MIF exploits a low friction, robust, open source middleware platform and extends it with component and service-based programmatic interfaces that make implementing complex pipelines simple. The MIF run-time automatically handles queues between pipeline elements in order to handle request bursts, and automatically executes multiple instances of pipeline elements to increase pipeline throughput. Distributed pipeline elements are supported using a range of configurable communications protocols, and the MIF interfaces provide efficient mechanisms for moving data directly between two distributed pipeline elements.
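
    As a rough illustration of the pipeline pattern described above (and not of the MIF API itself), the following sketch chains two stages through queues so that bursts are buffered between elements; the stage functions and sample document are invented for the example.

      # Toy two-stage pipeline: each stage reads from an input queue and writes to
      # an output queue, so bursts are buffered between pipeline elements.
      import queue
      import threading

      def stage(fn, q_in, q_out):
          while True:
              item = q_in.get()
              if item is None:          # sentinel: propagate shutdown downstream
                  q_out.put(None)
                  break
              q_out.put(fn(item))

      strip_tags = lambda doc: doc.replace("<p>", "").replace("</p>", "")
      tokenize = lambda text: text.split()

      q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
      threading.Thread(target=stage, args=(strip_tags, q1, q2)).start()
      threading.Thread(target=stage, args=(tokenize, q2, q3)).start()

      q1.put("<p>interesting events happened in 2011</p>")
      q1.put(None)
      print(q3.get())  # -> ['interesting', 'events', 'happened', 'in', '2011']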

  10. JACOB: an enterprise framework for computational chemistry.

    PubMed

    Waller, Mark P; Dresselhaus, Thomas; Yang, Jack

    2013-06-15

    Here, we present just a collection of beans (JACOB): an integrated batch-based framework designed for the rapid development of computational chemistry applications. The framework expedites developer productivity by handling the generic infrastructure tier, and can be easily extended by user-specific scientific code. Paradigms from enterprise software engineering were rigorously applied to create a scalable, testable, secure, and robust framework. A centralized web application is used to configure and control the operation of the framework. The application-programming interface provides a set of generic tools for processing large-scale noninteractive jobs (e.g., systematic studies), or for coordinating systems integration (e.g., complex workflows). The code for the JACOB framework is open sourced and is available at: www.wallerlab.org/jacob. Copyright © 2013 Wiley Periodicals, Inc.

  11. Development of a telemedicine model for emerging countries: a case study on pediatric oncology in Brazil.

    PubMed

    Hira, A Y; Nebel de Mello, A; Faria, R A; Odone Filho, V; Lopes, R D; Zuffo, M K

    2006-01-01

    This article discusses a telemedicine model for emerging countries, through the description of ONCONET, a telemedicine initiative applied to pediatric oncology in Brazil. The ONCONET core technology is a Web-based system that offers health information and other services specialized in childhood cancer such as electronic medical records and cooperative protocols for complex treatments. All Web-based services are supported by the use of high performance computing infrastructure based on clusters of commodity computers. The system was fully implemented using an open-source and free-software approach. Aspects of modeling, implementation and integration are covered. A model, both technologically and economically viable, was created through the research and development of in-house solutions adapted to the reality of emerging countries and with a focus on scalability both in the total number of patients and in the national infrastructure.

  12. Integrating Puppet and Gitolite to provide a novel solution for scalable system management at the MPPMU Tier2 centre

    NASA Astrophysics Data System (ADS)

    Delle Fratte, C.; Kennedy, J. A.; Kluth, S.; Mazzaferro, L.

    2015-12-01

    In a grid computing infrastructure, tasks such as continuous upgrades, service installations and software deployments are part of an administrator's daily work. In such an environment, tools to help with the management, provisioning and monitoring of the deployed systems and services have become crucial. As experiments such as the LHC increase in scale, the computing infrastructure also becomes larger and more complex. Moreover, today's admins increasingly work within teams that share responsibilities and tasks. Such a scaled up situation requires tools that not only simplify the workload on administrators but also enable them to work seamlessly in teams. This paper presents our experience of managing the Max Planck Institute Tier2 using Puppet and Gitolite in a cooperative way to support system administrators in their daily work. In addition to describing the Puppet-Gitolite system, best practices and customizations will also be shown.

  13. Building Thematic and Integrated Services for European Solid Earth Sciences: the EPOS Integrated Approach

    NASA Astrophysics Data System (ADS)

    Harrison, M.; Cocco, M.

    2017-12-01

    EPOS (European Plate Observing System) has been designed with the vision of creating a pan-European infrastructure for solid Earth science to support a safe and sustainable society. In accordance with this scientific vision, the EPOS mission is to integrate the diverse and advanced European Research Infrastructures for solid Earth science relying on new e-science opportunities to monitor and unravel the dynamic and complex Earth System. EPOS will enable innovative multidisciplinary research for a better understanding of the Earth's physical and chemical processes that control earthquakes, volcanic eruptions, ground instability and tsunami as well as the processes driving tectonics and Earth's surface dynamics. To accomplish its mission, EPOS is engaging different stakeholders, to allow the Earth sciences to open new horizons in our understanding of the planet. EPOS also aims at contributing to prepare society for geo-hazards and to responsibly manage the exploitation of geo-resources. Through integration of data, models and facilities, EPOS will allow the Earth science community to make a step change in developing new concepts and tools for key answers to scientific and socio-economic questions concerning geo-hazards and geo-resources as well as Earth sciences applications to the environment and human welfare. The research infrastructures (RIs) that EPOS is coordinating include: i) distributed geophysical observing systems (seismological and geodetic networks); ii) local observatories (including geomagnetic, near-fault and volcano observatories); iii) analytical and experimental laboratories; iv) integrated satellite data and geological information services; v) new services for natural and anthropogenic hazards; vi) access to geo-energy test beds. Here we present the activities planned for the implementation phase focusing on the TCS, the ICS and on their interoperability. We will discuss the data, data-products, software and services (DDSS) presently under implementation, which will be validated and tested during 2018. Particular attention in this talk will be given to connecting EPOS with similar global initiatives and identifying common best practice and approaches.

  14. The Virtual Environment for Reactor Applications (VERA): Design and architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, John A., E-mail: turnerja@ornl.gov; Clarno, Kevin; Sieger, Matt

    VERA, the Virtual Environment for Reactor Applications, is the system of physics capabilities being developed and deployed by the Consortium for Advanced Simulation of Light Water Reactors (CASL). CASL was established for the modeling and simulation of commercial nuclear reactors. VERA consists of integrating and interfacing software together with a suite of physics components adapted and/or refactored to simulate relevant physical phenomena in a coupled manner. VERA also includes the software development environment and computational infrastructure needed for these components to be effectively used. We describe the architecture of VERA from both software and numerical perspectives, along with the goals and constraints that drove major design decisions, and their implications. We explain why VERA is an environment rather than a framework or toolkit, why these distinctions are relevant (particularly for coupled physics applications), and provide an overview of results that demonstrate the use of VERA tools for a variety of challenging applications within the nuclear industry.

  15. Integrating Infrastructures in the United States: Experience and Prospects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilbanks, Thomas

    Infrastructure integration has been limited in the United States because infrastructure management responsibilities are fragmented by divisions between sectors and between the public and the private sector, but some changes are under way. Stimulated by a number of extreme events in recent decades, data and modeling capabilities for simulating infrastructure interdependencies have been developed and applied, and infrastructure integration in some cities has been encouraged by such foci as emergency preparedness and “green infrastructure” strategies. Integrative strategies have been explored for energy and water resource systems, in some cases related to other sectors as well. In summary, infrastructure integration in the United States is occurring from the ground up, due in many cases to climate change impacts and risks. A number of examples of successes, supported by broad coalitions of interested parties (with evident sociopolitical payoffs), suggest that integration will increase through time.

  16. BioShaDock: a community driven bioinformatics shared Docker-based tools registry

    PubMed Central

    Moreews, François; Sallou, Olivier; Ménager, Hervé; Le bras, Yvan; Monjeaud, Cyril; Blanchet, Christophe; Collin, Olivier

    2015-01-01

    Linux container technologies, as represented by Docker, provide an alternative to complex and time-consuming installation processes needed for scientific software. The ease of deployment and the process isolation they enable, as well as the reproducibility they permit across environments and versions, are among the qualities that make them interesting candidates for the construction of bioinformatic infrastructures, at any scale from single workstations to high throughput computing architectures. The Docker Hub is a public registry which can be used to distribute bioinformatic software as Docker images. However, its lack of curation and its genericity make it difficult for a bioinformatics user to find the most appropriate images needed. BioShaDock is a bioinformatics-focused Docker registry, which provides a local and fully controlled environment to build and publish bioinformatic software as portable Docker images. It provides a number of improvements over the base Docker registry on authentication and permissions management, that enable its integration in existing bioinformatic infrastructures such as computing platforms. The metadata associated with the registered images are domain-centric, including for instance concepts defined in the EDAM ontology, a shared and structured vocabulary of commonly used terms in bioinformatics. The registry also includes user defined tags to facilitate its discovery, as well as a link to the tool description in the ELIXIR registry if it already exists. If it does not, the BioShaDock registry will synchronize with the registry to create a new description in the Elixir registry, based on the BioShaDock entry metadata. This link will help users get more information on the tool such as its EDAM operations, input and output types. This allows integration with the ELIXIR Tools and Data Services Registry, thus providing the appropriate visibility of such images to the bioinformatics community. PMID:26913191
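
    To make the idea of domain-centric image metadata concrete, the sketch below shows the kind of record such a registry could hold for one image. The endpoint URL, field names and EDAM identifier are placeholders invented for illustration and are not BioShaDock's actual API.

      # Hypothetical metadata record for a registered Docker image; field names,
      # EDAM identifier and endpoint are placeholders, not the BioShaDock API.
      import json
      from urllib.request import Request

      entry = {
          "image": "example-org/seqtool:1.2",      # image name and tag
          "edam_operations": ["operation_0292"],   # EDAM concepts (illustrative value)
          "tags": ["alignment", "fastq"],          # user-defined discovery tags
          "elixir_tool_id": None,                  # link to an ELIXIR registry entry, if any
      }

      req = Request("https://registry.example.org/api/containers",  # placeholder URL
                    data=json.dumps(entry).encode(),
                    headers={"Content-Type": "application/json"})
      # urllib.request.urlopen(req) would submit it; omitted because the URL is fictional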

  17. BioShaDock: a community driven bioinformatics shared Docker-based tools registry.

    PubMed

    Moreews, François; Sallou, Olivier; Ménager, Hervé; Le Bras, Yvan; Monjeaud, Cyril; Blanchet, Christophe; Collin, Olivier

    2015-01-01

    Linux container technologies, as represented by Docker, provide an alternative to complex and time-consuming installation processes needed for scientific software. The ease of deployment and the process isolation they enable, as well as the reproducibility they permit across environments and versions, are among the qualities that make them interesting candidates for the construction of bioinformatic infrastructures, at any scale from single workstations to high throughput computing architectures. The Docker Hub is a public registry which can be used to distribute bioinformatic software as Docker images. However, its lack of curation and its genericity make it difficult for a bioinformatics user to find the most appropriate images needed. BioShaDock is a bioinformatics-focused Docker registry, which provides a local and fully controlled environment to build and publish bioinformatic software as portable Docker images. It provides a number of improvements over the base Docker registry on authentication and permissions management, that enable its integration in existing bioinformatic infrastructures such as computing platforms. The metadata associated with the registered images are domain-centric, including for instance concepts defined in the EDAM ontology, a shared and structured vocabulary of commonly used terms in bioinformatics. The registry also includes user defined tags to facilitate its discovery, as well as a link to the tool description in the ELIXIR registry if it already exists. If it does not, the BioShaDock registry will synchronize with the registry to create a new description in the Elixir registry, based on the BioShaDock entry metadata. This link will help users get more information on the tool such as its EDAM operations, input and output types. This allows integration with the ELIXIR Tools and Data Services Registry, thus providing the appropriate visibility of such images to the bioinformatics community.

  18. Searching for Physics Beyond the Standard Model: Strongly-Coupled Field Theories at the Intensity and Energy Frontiers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brower, Richard C.

    This proposal is to develop the software and algorithmic infrastructure needed for the numerical study of quantum chromodynamics (QCD), and of theories that have been proposed to describe physics beyond the Standard Model (BSM) of high energy physics, on current and future computers. This infrastructure will enable users (1) to improve the accuracy of QCD calculations to the point where they no longer limit what can be learned from high-precision experiments that seek to test the Standard Model, and (2) to determine the predictions of BSM theories in order to understand which of them are consistent with the data that will soon be available from the LHC. Work will include the extension and optimizations of community codes for the next generation of leadership class computers, the IBM Blue Gene/Q and the Cray XE/XK, and for the dedicated hardware funded for our field by the Department of Energy. Members of our collaboration at Brookhaven National Laboratory and Columbia University worked on the design of the Blue Gene/Q, and have begun to develop software for it. Under this grant we will build upon their experience to produce high-efficiency production codes for this machine. Cray XE/XK computers with many thousands of GPU accelerators will soon be available, and the dedicated commodity clusters we obtain with DOE funding include growing numbers of GPUs. We will work with our partners in NVIDIA's Emerging Technology group to scale our existing software to thousands of GPUs, and to produce highly efficient production codes for these machines. Work under this grant will also include the development of new algorithms for the effective use of heterogeneous computers, and their integration into our codes. It will include improvements of Krylov solvers and the development of new multigrid methods in collaboration with members of the FASTMath SciDAC Institute, using their HYPRE framework, as well as work on improved symplectic integrators.

  19. Flexible Workflow Software enables the Management of an Increased Volume and Heterogeneity of Sensors, and evolves with the Expansion of Complex Ocean Observatory Infrastructures.

    NASA Astrophysics Data System (ADS)

    Tomlin, M. C.; Jenkyns, R.

    2015-12-01

    Ocean Networks Canada (ONC) collects data from observatories in the northeast Pacific, Salish Sea, Arctic Ocean, Atlantic Ocean, and land-based sites in British Columbia. Data are streamed, collected autonomously, or transmitted via satellite from a variety of instruments. The Software Engineering group at ONC develops and maintains Oceans 2.0, an in-house software system that acquires and archives data from sensors, and makes data available to scientists, the public, government and non-government agencies. The Oceans 2.0 workflow tool was developed by ONC to manage a large volume of tasks and processes required for instrument installation, recovery and maintenance activities. Since 2013, the workflow tool has supported 70 expeditions and grown to include 30 different workflow processes for the increasing complexity of infrastructures at ONC. The workflow tool strives to keep pace with an increasing heterogeneity of sensors, connections and environments by supporting versioning of existing workflows, and allowing the creation of new processes and tasks. Despite challenges in training and gaining mutual support from multidisciplinary teams, the workflow tool has become invaluable in project management in an innovative setting. It provides a collective place to contribute to ONC's diverse projects and expeditions and encourages more repeatable processes, while promoting interactions between the multidisciplinary teams who manage various aspects of instrument development and the data they produce. The workflow tool inspires documentation of terminologies and procedures, and effectively links to other tools at ONC such as JIRA, Alfresco and Wiki. Motivated by growing sensor schemes, modes of collecting data, archiving, and data distribution at ONC, the workflow tool ensures that infrastructure is managed completely from instrument purchase to data distribution. It integrates all areas of expertise and helps fulfill ONC's mandate to offer quality data to users.

  20. HyRAM V1.0 User Guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Groth, Katrina M.; Zumwalt, Hannah Ruth; Clark, Andrew Jordan

    2016-03-01

    Hydrogen Risk Assessment Models (HyRAM) is a prototype software toolkit that integrates data and methods relevant to assessing the safety of hydrogen fueling and storage infrastructure. The HyRAM toolkit integrates deterministic and probabilistic models for quantifying accident scenarios, predicting physical effects, and characterizing the impact of hydrogen hazards, including thermal effects from jet fires and thermal pressure effects from deflagration. HyRAM version 1.0 incorporates generic probabilities for equipment failures for nine types of components, and probabilistic models for the impact of heat flux on humans and structures, with computationally and experimentally validated models of various aspects of gaseous hydrogen release and flame physics. This document provides an example of how to use HyRAM to conduct analysis of a fueling facility. This document will guide users through the software and how to enter and edit certain inputs that are specific to the user-defined facility. Description of the methodology and models contained in HyRAM is provided in [1]. This User’s Guide is intended to capture the main features of HyRAM version 1.0 (any HyRAM version numbered as 1.0.X.XXX). This user guide was created with HyRAM 1.0.1.798. Due to ongoing software development activities, newer versions of HyRAM may have differences from this guide.
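
    The frequency-times-consequence structure that such a toolkit combines can be sketched in a few lines. The leak frequencies and probabilities below are invented placeholders, not HyRAM defaults, and the calculation is only meant to show the shape of the computation.

      # Back-of-the-envelope risk roll-up: annual leak frequency x ignition
      # probability x harm probability, summed over leak sizes. All numbers are
      # placeholders and do not come from HyRAM.
      leak_frequency_per_year = {"0.01% area": 1e-3, "0.1% area": 3e-4, "1% area": 1e-4}
      p_ignition = 0.05              # placeholder ignition probability
      p_harm_given_fire = 0.1        # placeholder consequence-model output

      risk = sum(freq * p_ignition * p_harm_given_fire
                 for freq in leak_frequency_per_year.values())
      print("illustrative risk per year:", risk)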

  1. Towards AN Integration of GIS and Bim Data: what are the Geometric and Topological Issues?

    NASA Astrophysics Data System (ADS)

    Arroyo Ohori, K.; Biljecki, F.; Diakité, A.; Krijnen, T.; Ledoux, H.; Stoter, J.

    2017-10-01

    Geographic information and building information modelling both model buildings and infrastructure, but the way in which they are modelled is usually complementary and BIM-GIS integration is widely considered as a way forward for both domains. For one, more detailed BIM data can feed more general GIS data and GIS data can provide the context that is necessary for BIM data. While previous studies have focused on the theoretical aspects of such an integration at a schema level, in this paper we focus on explaining the geometric and topological issues we have found while trying to develop software to realise such an integration in practice and at a data level. In our preliminary results, which are presented here, we have found that many issues for such an integration remain: handling the geometric and topological problems in BIM models, dealing with bad georeferencing and figuring out the best way to convert data between IFC and CityGML are all open issues.

  2. Infrastructure for Multiphysics Software Integration in High Performance Computing-Aided Science and Engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, Michael T.; Safdari, Masoud; Kress, Jessica E.

    The project described in this report constructed and exercised an innovative multiphysics coupling toolkit called the Illinois Rocstar MultiPhysics Application Coupling Toolkit (IMPACT). IMPACT is an open source, flexible, natively parallel infrastructure for coupling multiple uniphysics simulation codes into multiphysics computational systems. IMPACT works with codes written in several high-performance-computing (HPC) programming languages, and is designed from the beginning for HPC multiphysics code development. It is designed to be minimally invasive to the individual physics codes being integrated, and has few requirements on those physics codes for integration. The goal of IMPACT is to provide the support needed to enable coupling existing tools together in unique and innovative ways to produce powerful new multiphysics technologies without extensive modification and rewrite of the physics packages being integrated. There are three major outcomes from this project: 1) construction, testing, application, and open-source release of the IMPACT infrastructure, 2) production of example open-source multiphysics tools using IMPACT, and 3) identification and engagement of interested organizations in the tools and applications resulting from the project. This last outcome represents the incipient development of a user community and application ecosystem being built using IMPACT. Multiphysics coupling standardization can only come from organizations working together to define needs and processes that span the space of necessary multiphysics outcomes, which Illinois Rocstar plans to continue driving toward. The IMPACT system, including source code, documentation, and test problems, is now available through the public GitHub.org system to anyone interested in multiphysics code coupling. Many of the basic documents explaining use and architecture of IMPACT are also attached as appendices to this document. Online HTML documentation is available through the GitHub site. There are over 100 unit tests provided that run through the Illinois Rocstar Application Development (IRAD) lightweight testing infrastructure that is also supplied along with IMPACT. The package as a whole provides an excellent base for developing high-quality multiphysics applications using modern software development practices. To facilitate understanding how to utilize IMPACT effectively, two multiphysics systems have been developed and are available open-source through GitHub. The simpler of the two systems, named ElmerFoamFSI in the repository, is a multiphysics, fluid-structure-interaction (FSI) coupling of the solid mechanics package Elmer with a fluid dynamics module from OpenFOAM. This coupling illustrates how to take software packages that are unrelated by either author or architecture and combine them into a robust, parallel multiphysics system. A more complex multiphysics tool is the Illinois Rocstar Rocstar Multiphysics code that was rebuilt during the project around IMPACT. Rocstar Multiphysics was already an HPC multiphysics tool, but now that it has been rearchitected around IMPACT, it can be readily expanded to capture new and different physics in the future. In fact, during this project, the Elmer and OpenFOAM tools were also coupled into Rocstar Multiphysics and demonstrated. The full Rocstar Multiphysics codebase is also available on GitHub, and licensed for any organization to use as they wish. Finally, the new IMPACT product is already being used in several multiphysics code coupling projects for the Air Force, NASA and the Missile Defense Agency, and initial work on expansion of the IMPACT-enabled Rocstar Multiphysics has begun in support of a commercial company. These initiatives promise to expand the interest and reach of IMPACT and Rocstar Multiphysics, ultimately leading to the envisioned standardization and consortium of users that was one of the goals of this project.
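
    The kind of orchestration such a coupling infrastructure provides can be suggested with a schematic fluid-structure loop. The solver classes and method names below are invented for illustration and do not reflect the IMPACT interfaces.

      # Schematic fluid-structure coupling loop: the two single-physics solvers and
      # their method names are invented; only the exchange pattern is the point.
      class FluidSolver:
          def advance(self, dt, interface_displacement):
              # ...solve the flow with the displaced boundary (omitted)...
              return 0.0                          # traction on the shared interface

      class SolidSolver:
          def advance(self, dt, interface_traction):
              # ...solve the structure under the fluid loads (omitted)...
              return 0.0                          # displacement of the shared interface

      def couple(fluid, solid, dt, steps):
          displacement = 0.0
          for _ in range(steps):
              traction = fluid.advance(dt, displacement)      # fluid step
              displacement = solid.advance(dt, traction)      # structural response
          return displacement

      couple(FluidSolver(), SolidSolver(), dt=1e-3, steps=10)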

  3. MicROS-drt: supporting real-time and scalable data distribution in distributed robotic systems.

    PubMed

    Ding, Bo; Wang, Huaimin; Fan, Zedong; Zhang, Pengfei; Liu, Hui

    A primary requirement in distributed robotic software systems is the dissemination of data to all interested collaborative entities in a timely and scalable manner. However, providing such a service in a highly dynamic and resource-limited robotic environment is a challenging task, and existing robot software infrastructure has limitations in this aspect. This paper presents a novel robot software infrastructure, micROS-drt, which supports real-time and scalable data distribution. The solution is based on a loosely coupled data publish-subscribe model with the ability to support various time-related constraints. To realize this model, a mature data distribution standard, the Data Distribution Service for Real-Time Systems (DDS), is adopted as the foundation of the transport layer of this software infrastructure. By elaborately adapting and encapsulating the capability of the underlying DDS middleware, micROS-drt can meet the requirement of real-time and scalable data distribution in distributed robotic systems. Evaluation results in terms of scalability, latency jitter and transport priority as well as the experiment on real robots validate the effectiveness of this work.
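
    A toy publish-subscribe sketch with a per-subscription freshness deadline illustrates the time-constrained dissemination idea; it stands in for, and is much simpler than, the DDS middleware the paper actually builds on.

      # Minimal time-aware publish-subscribe: samples older than a subscription's
      # deadline are not delivered. This only illustrates the idea, not DDS.
      import time
      from collections import defaultdict

      class Bus:
          def __init__(self):
              self.subs = defaultdict(list)   # topic -> [(callback, deadline_s)]

          def subscribe(self, topic, callback, deadline_s):
              self.subs[topic].append((callback, deadline_s))

          def publish(self, topic, sample, stamped_at=None):
              stamped_at = time.monotonic() if stamped_at is None else stamped_at
              for callback, deadline_s in self.subs[topic]:
                  if time.monotonic() - stamped_at <= deadline_s:  # still fresh
                      callback(sample)

      bus = Bus()
      bus.subscribe("/pose", lambda s: print("fresh pose:", s), deadline_s=0.1)
      bus.publish("/pose", {"x": 1.0, "y": 2.0})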

  4. Managing Critical Infrastructures C.I.M. Suite

    ScienceCinema

    Dudenhoeffer, Donald

    2018-05-23

    See how a new software package developed by INL researchers could help protect infrastructure during natural disasters, terrorist attacks and electrical outages. For more information about INL research, visit http://www.facebook.com/idahonationallaboratory.

  5. First year of ALMA site software deployment: where everything comes together

    NASA Astrophysics Data System (ADS)

    González, Víctor; Mora, Matias; Araya, Rodrigo; Arredondo, Diego; Bartsch, Marcelo; Burgos, Pablo; Ibsen, Jorge; Reveco, Johnny; Sáez, Norman; Schemrl, Anton; Sepulveda, Jorge; Shen, Tzu-Chiang; Soto, Rubén; Troncoso, Nicolás; Zambrano, Mauricio; Barriga, Nicolás; Glendenning, Brian; Raffi, Gianni; Kern, Jeff

    2010-07-01

    Starting in 2009, the ALMA project initiated one of its most exciting phases within construction: the first antenna from one of the vendors was delivered to the Assembly, Integration and Verification team. With this milestone and the closure of the ALMA Test Facility in New Mexico, the JAO Computing Group in Chile found itself in the front line of the project's software deployment and integration effort. Among the group's main responsibilities are the deployment, configuration and support of the observation systems, in addition to infrastructure administration, all of which needs to be done in close coordination with the development groups in Europe, North America and Japan. Software support has been the primary interaction key with the current users (mainly scientists, operators and hardware engineers), as the software is normally the most visible part of the system. During this first year of work with the production hardware, three consecutive software releases have been deployed and commissioned. Also, the first three antennas have been moved to the Array Operations Site, at 5,000 meters elevation, and the complete end-to-end system has been successfully tested. This paper shares the experience of this 15-person group as part of the construction team at the ALMA site, working together with the Computing IPT, on the achievements and problems overcome during this period. It explores the excellent results of teamwork, and also some of the troubles that such a complex and geographically distributed project can run into. Finally, it approaches the challenges still to come, with the transition to the ALMA operations plan.

  6. HTTP as a Data Access Protocol: Trials with XrootD in CMS’s AAA Project

    NASA Astrophysics Data System (ADS)

    Balcas, J.; Bockelman, B. P.; Kcira, D.; Newman, H.; Vlimant, J.; Hendricks, T. W.; CMS Collaboration

    2017-10-01

    The main goal of the project is to demonstrate the ability to use HTTP data federations in a manner analogous to the existing AAA infrastructure of the CMS experiment. An initial testbed at Caltech has been built and changes in the CMS software (CMSSW) are being implemented in order to improve HTTP support. The testbed consists of a set of machines at the Caltech Tier2 that improve the support infrastructure for data federations at CMS. As a first step, we are building systems that produce and ingest network data transfers up to 80 Gbps. In collaboration with AAA, HTTP support is enabled at the US redirector and the Caltech testbed. A plugin for CMSSW is being developed for HTTP access based on the DaviX software. It will replace the present fork/exec or curl for HTTP access. In addition, extensions to the XRootD HTTP implementation are being developed to add functionality to it, such as client-based monitoring identifiers. In the future, patches will be developed to better integrate HTTP-over-XRootD with the Open Science Grid (OSG) distribution. First results of the transfer tests using HTTP are presented in this paper together with details about the initial setup.
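
    Remote HTTP data access of this kind ultimately rests on standard byte-range requests; the small sketch below shows that mechanism with a placeholder URL and is not the CMSSW/DaviX plugin itself.

      # Fetch a byte range of a remote file over HTTP; a server that supports
      # ranges answers with 206 Partial Content carrying only the slice.
      from urllib.request import Request, urlopen

      def read_range(url, start, length):
          headers = {"Range": "bytes=%d-%d" % (start, start + length - 1)}
          with urlopen(Request(url, headers=headers)) as resp:
              return resp.read()

      # chunk = read_range("https://example.org/store/file.root", 0, 4096)  # placeholder URL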

  7. Multimedia courseware in an open-systems environment: a DoD strategy

    NASA Astrophysics Data System (ADS)

    Welsch, Lawrence A.

    1991-03-01

    The federal government is about to invest billions of dollars to develop multimedia training materials for delivery on computer-based interactive training systems. Acquisition of a variety of computers and peripheral devices hosting various operating systems and suites of authoring system software will be necessary to facilitate the development of this courseware. There is no single source that will satisfy all needs. Although high-performance, low-cost interactive training hardware is available, the products have proprietary software interfaces. Because the interfaces are proprietary, expensive reprogramming is usually required to adapt such software products to other platforms. This costly reprogramming could be eliminated by adopting standard software interfaces. DoD's Portable Courseware Project (PORTCO) is typical of projects worldwide that require standard software interfaces. This paper articulates the strategy whereby PORTCO leverages the open systems movement and the new realities of information technology. These realities encompass changes in the pace at which new technology becomes available, changes in organizational goals and philosophy, new roles of vendors and users, changes in the procurement process, and acceleration toward open system environments. The PORTCO strategy is applicable to all projects and systems that require open systems to achieve mission objectives. The federal goal is to facilitate the creation of an environment in which high quality portable courseware is available as commercial off-the-shelf products and is competitively supplied by a variety of vendors. In order to achieve this goal, a system architecture incorporating standards to meet the users' needs must be established. The Request for Architecture (RFA) developed cooperatively by DoD and the National Institute of Standards and Technology (NIST) will generate the PORTCO systems architecture. This architecture must freely integrate the courseware and authoring software from the lower levels of machine architecture and systems service implementation. In addition, the systems architecture will establish how the application-specific technologies relate to other technologies. Further, a computer-based interactive training applications profile must be developed. This profile, along with the systems architecture derived as a result of the RFA, provides the basis for identifying the needed standards. NIST will then accelerate the development of these standards using, but not restricted to, existing standards activities within established standards forums. The federal multimedia courseware effort has adopted the Interactive Multimedia Association (IMA) Recommended Practices for Interactive Video Portability as the baseline for the migration of computer-based interactive training systems to an open systems environment based upon international standards. The PORTCO strategy includes an evolutionary migration to a standards-based Open System Environment (OSE). An important aspect of this migration strategy is to move to open systems via stepwise evolution rather than via quantum leaps. Another area of concern is that of infrastructure issues, such as maintaining and supporting the technologies required for computer-based interactive training. The federal multimedia initiative will use the RFA-based architecture to differentiate between those technologies that can be maintained and supported by existing infrastructure mechanisms and those that require new mechanisms. 
Existing infrastructure mechanisms will be used and where infrastructure mechanisms do not exist, the approach will be to place high priority on establishing the appropriate mechanisms. Establishing an infrastructure mechanism is a nontrivial task requiring sustained investment of resources.

  8. Unidata's Vision for Transforming Geoscience by Moving Data Services and Software to the Cloud

    NASA Astrophysics Data System (ADS)

    Ramamurthy, M. K.; Fisher, W.; Yoksas, T.

    2014-12-01

    Universities are facing many challenges: shrinking budgets, rapidly evolving information technologies, exploding data volumes, multidisciplinary science requirements, and high student expectations. These changes are upending traditional approaches to accessing and using data and software. It is clear that Unidata's products and services must evolve to support new approaches to research and education. After years of hype and ambiguity, cloud computing is maturing in usability in many areas of science and education, bringing the benefits of virtualized and elastic remote services to infrastructure, software, computation, and data. Cloud environments reduce the amount of time and money spent to procure, install, and maintain new hardware and software, and reduce costs through resource pooling and shared infrastructure. Cloud services aimed at providing any resource, at any time, from any place, using any device are increasingly being embraced by all types of organizations. Given this trend and the enormous potential of cloud-based services, Unidata is moving to augment its products, services, data delivery mechanisms and applications to align with the cloud-computing paradigm. Specifically, Unidata is working toward establishing a community-based development environment that supports the creation and use of software services to build end-to-end data workflows. The design encourages the creation of services that can be broken into small, independent chunks that provide simple capabilities. Chunks could be used individually to perform a task, or chained into simple or elaborate workflows. The services will also be portable, allowing their use in researchers' own cloud-based computing environments. In this talk, we present a vision for Unidata's future in cloud-enabled data services and discuss our initial efforts to deploy a subset of Unidata data services and tools in the Amazon EC2 and Microsoft Azure cloud environments, including the transfer of real-time meteorological data into its cloud instances, product generation using those data, and the deployment of TDS, McIDAS ADDE and AWIPS II data servers and the Integrated Data Viewer visualization tool.

  9. The ATLAS Simulation Infrastructure

    DOE PAGES

    Aad, G.; Abbott, B.; Abdallah, J.; ...

    2010-09-25

    The simulation software for the ATLAS Experiment at the Large Hadron Collider is being used for large-scale production of events on the LHC Computing Grid. This simulation requires many components, from the generators that simulate particle collisions, through packages simulating the response of the various detectors and triggers. All of these components come together under the ATLAS simulation infrastructure. In this paper, that infrastructure is discussed, including that supporting the detector description, interfacing the event generation, and combining the GEANT4 simulation of the response of the individual detectors. Also described are the tools allowing the software validation, performance testing, and the validation of the simulated output against known physics processes.

  10. ENES the European Network for Earth System modelling and its infrastructure projects IS-ENES

    NASA Astrophysics Data System (ADS)

    Guglielmo, Francesca; Joussaume, Sylvie; Parinet, Marie

    2016-04-01

    The scientific community working on climate modelling is organized within the European Network for Earth System modelling (ENES). In the past decade, several European university departments, research centres, meteorological services, computer centres, and industrial partners engaged in the creation of ENES with the purpose of working together and cooperating towards the further development of the network, by signing a Memorandum of Understanding. As of 2015, the consortium counts 47 partners. The climate modelling community, and thus ENES, faces challenges which are both science-driven, i.e. analysing the full complexity of the Earth System to improve our understanding and prediction of climate changes, and have multi-faceted societal implications, as a better representation of climate change on regional scales leads to improved understanding and prediction of impacts and to the development and provision of climate services. ENES, promoting and endorsing projects and initiatives, helps develop and evaluate state-of-the-art climate and Earth system models, facilitates model inter-comparison studies, encourages exchanges of software and model results, and fosters the use of high performance computing facilities dedicated to high-resolution multi-model experiments. ENES brings together public and private partners, integrates countries underrepresented in climate modelling studies, and reaches out to different user communities, thus enhancing European expertise and competitiveness. Given this need for sophisticated models, world-class high-performance computers, and state-of-the-art software solutions to make efficient use of models, data and hardware, a key role is played by the constitution and maintenance of a solid infrastructure that develops and provides services to the different user communities. ENES has investigated the infrastructural needs and has received funding from the EU FP7 program for the IS-ENES (InfraStructure for ENES) phase I and II projects. We present here the case study of an existing network of institutions brought together toward common goals by a non-binding agreement, ENES, and of its two IS-ENES projects. The latter will be discussed in their double role: as a means to provide and/or maintain the actual infrastructure (hardware, software, skilled human resources, services) needed to achieve ENES scientific goals (fulfilling the aims set in a strategy document), and as a way to give the network a structured way of working and of interacting with the extended community. The genesis and evolution of the network and the interaction between network and projects will also be analysed in terms of long-term sustainability.

  11. Sustainable, Reliable Mission-Systems Architecture

    NASA Technical Reports Server (NTRS)

    O'Neil, Graham; Orr, James K.; Watson, Steve

    2005-01-01

    A mission-systems architecture, based on a highly modular infrastructure utilizing open-standards hardware and software interfaces as the enabling technology, is essential for affordable and sustainable space exploration programs. This mission-systems architecture requires (a) robust communication between heterogeneous systems, (b) high reliability, (c) minimal mission-to-mission reconfiguration, (d) affordable development, system integration, and verification of systems, and (e) minimal sustaining engineering. This paper proposes such an architecture. Lessons learned from the Space Shuttle program and Earthbound complex engineered systems are applied to define the model. Technology projections reaching out 5 years are made to refine model details.

  12. A Sustainable, Reliable Mission-Systems Architecture that Supports a System of Systems Approach to Space Exploration

    NASA Technical Reports Server (NTRS)

    Watson, Steve; Orr, Jim; O'Neil, Graham

    2004-01-01

    A mission-systems architecture based on a highly modular "systems of systems" infrastructure utilizing open-standards hardware and software interfaces as the enabling technology is absolutely essential for an affordable and sustainable space exploration program. This architecture requires (a) robust communication between heterogeneous systems, (b) high reliability, (c) minimal mission-to-mission reconfiguration, (d) affordable development, system integration, and verification of systems, and (e) minimum sustaining engineering. This paper proposes such an architecture. Lessons learned from the space shuttle program are applied to help define and refine the model.

  13. Sustainable, Reliable Mission-Systems Architecture

    NASA Technical Reports Server (NTRS)

    O'Neil, Graham; Orr, James K.; Watson, Steve

    2007-01-01

    A mission-systems architecture, based on a highly modular infrastructure utilizing open-standards hardware and software interfaces as the enabling technology, is essential for affordable and sustainable space exploration programs. This mission-systems architecture requires (a) robust communication between heterogeneous systems, (b) high reliability, (c) minimal mission-to-mission reconfiguration, (d) affordable development, system integration, and verification of systems, and (e) minimal sustaining engineering. This paper proposes such an architecture. Lessons learned from the Space Shuttle program and Earthbound complex engineered systems are applied to define the model. Technology projections reaching out 5 years are made to refine model details.

  14. NASA Space Technology Draft Roadmap Area 13: Ground and Launch Systems Processing

    NASA Technical Reports Server (NTRS)

    Clements, Greg

    2011-01-01

    This slide presentation reviews the technology development roadmap for the area of ground and launch systems processing. The scope of this technology area includes: (1) Assembly, integration, and processing of the launch vehicle, spacecraft, and payload hardware (2) Supply chain management (3) Transportation of hardware to the launch site (4) Transportation to and operations at the launch pad (5) Launch processing infrastructure and its ability to support future operations (6) Range, personnel, and facility safety capabilities (7) Launch and landing weather (8) Environmental impact mitigations for ground and launch operations (9) Launch control center operations and infrastructure (10) Mission integration and planning (11) Mission training for both ground and flight crew personnel (12) Mission control center operations and infrastructure (13) Telemetry and command processing and archiving (14) Recovery operations for flight crews, flight hardware, and returned samples. This technology roadmap also identifies ground, launch and mission technologies that will: (1) Dramatically transform future space operations, with significant improvement in life-cycle costs (2) Improve the quality of life on Earth, while exploring in co-existence with the environment (3) Increase reliability and mission availability using low/zero maintenance materials and systems, comprehensive capabilities to ascertain and forecast system health/configuration, data integration, and the use of advanced/expert software systems (4) Enhance methods to assess safety and mission risk posture, which would allow for timely and better decision making. Several key technologies are identified, with a couple of slides devoted to one of these technologies (i.e., corrosion detection and prevention). Development of these technologies can enhance life on Earth and have a major impact on how we access space, eventually enabling routine commercial space access and improving building, manufacturing, and weather forecasting, for example, as these process improvements reach our daily lives.

  15. A flexible architecture for advanced process control solutions

    NASA Astrophysics Data System (ADS)

    Faron, Kamyar; Iourovitski, Ilia

    2005-05-01

    Advanced Process Control (APC) is now mainstream practice in the semiconductor manufacturing industry. Over the past decade and a half APC has evolved from a "good idea" and "wouldn't it be great" concept to mandatory manufacturing practice. APC developments have primarily dealt with two major thrusts, algorithms and infrastructure, and often the line between them has been blurred. The algorithms have evolved from very simple single variable solutions to sophisticated and cutting edge adaptive multivariable (input and output) solutions. Spending patterns in recent times have demanded that the economics of a comprehensive APC infrastructure be completely justified for any and all cost conscious manufacturers. There are studies suggesting integration costs as high as 60% of the total APC solution costs. Such cost prohibitive figures clearly diminish the return on APC investments. This has limited the acceptance and development of pure APC infrastructure solutions for many fabs. Modern APC solution architectures must satisfy the wide array of requirements from very manual R&D environments to very advanced and automated "lights out" manufacturing facilities. A majority of commercially available control solutions and most in house developed solutions lack important attributes of scalability, flexibility, and adaptability and hence require significant resources for integration, deployment, and maintenance. Many APC improvement efforts have been abandoned and delayed due to legacy systems and inadequate architectural design. Recent advancements (Service Oriented Architectures) in the software industry have delivered ideal technologies for delivering scalable, flexible, and reliable solutions that can seamlessly integrate into any fab's existing systems and business practices. In this publication we shall evaluate the various attributes of the architectures required by fabs and illustrate the benefits of a Service Oriented Architecture to satisfy these requirements. Blue Control Technologies has developed an advanced service-oriented-architecture Run-to-Run Control System which addresses these requirements.

  16. Cultural and Technological Issues and Solutions for Geodynamics Software Citation

    NASA Astrophysics Data System (ADS)

    Heien, E. M.; Hwang, L.; Fish, A. E.; Smith, M.; Dumit, J.; Kellogg, L. H.

    2014-12-01

    Computational software and custom-written codes play a key role in scientific research and teaching, providing tools to perform data analysis and forward modeling through numerical computation. However, development of these codes is often hampered by the fact that there is no well-defined way for the authors to receive credit or professional recognition for their work through the standard methods of scientific publication and subsequent citation of the work. This in turn may discourage researchers from publishing their codes or making them easier for other scientists to use. We investigate the issues involved in citing software in a scientific context, and introduce features that should be components of a citation infrastructure, particularly oriented towards the codes and scientific culture in the area of geodynamics research. The codes used in geodynamics are primarily specialized numerical modeling codes for continuum mechanics problems; they may be developed by individual researchers, teams of researchers, geophysicists in collaboration with computational scientists and applied mathematicians, or by coordinated community efforts such as the Computational Infrastructure for Geodynamics. Some but not all geodynamics codes are open-source. These characteristics are common to many areas of geophysical software development and use. We provide background on the problem of software citation and discuss some of the barriers preventing adoption of such citations, including social/cultural barriers, insufficient technological support infrastructure, and an overall lack of agreement about what a software citation should consist of. We suggest solutions in an initial effort to create a system to support citation of software and promotion of scientific software development.

  17. Development of Web GIS for complex processing and visualization of climate geospatial datasets as an integral part of dedicated Virtual Research Environment

    NASA Astrophysics Data System (ADS)

    Gordov, Evgeny; Okladnikov, Igor; Titov, Alexander

    2017-04-01

    For comprehensive usage of large geospatial meteorological and climate datasets it is necessary to create a distributed software infrastructure based on the spatial data infrastructure (SDI) approach. Currently, it is generally accepted that the development of client applications as integrated elements of such infrastructure should be based on the usage of modern web and GIS technologies. The paper describes the Web GIS for complex processing and visualization of geospatial (mainly in NetCDF and PostGIS formats) datasets as an integral part of the dedicated Virtual Research Environment for comprehensive study of ongoing and possible future climate change, and analysis of their implications, providing full information and computing support for the study of economic, political and social consequences of global climate change at the global and regional levels. The Web GIS consists of two basic software parts:
    1. Server-side part representing PHP applications of the SDI geoportal and realizing the functionality of interaction with the computational core backend and WMS/WFS/WPS cartographical services, as well as implementing an open API for browser-based client software. Being the secondary one, this part provides a limited set of procedures accessible via a standard HTTP interface.
    2. Front-end part representing the Web GIS client developed according to "single page application" technology based on the JavaScript libraries OpenLayers (http://openlayers.org/), ExtJS (https://www.sencha.com/products/extjs) and GeoExt (http://geoext.org/). It implements the application business logic and provides an intuitive user interface similar to the interface of such popular desktop GIS applications as uDIG, QuantumGIS etc. The Boundless/OpenGeo architecture was used as a basis for Web GIS client development.
    According to general INSPIRE requirements for data visualization, the Web GIS provides such standard functionality as data overview, image navigation, scrolling, scaling and graphical overlay, displaying map legends and corresponding metadata information. The specialized Web GIS client contains three basic tiers:
    • Tier of NetCDF metadata in JSON format
    • Middleware tier of JavaScript objects implementing methods to work with:
      o NetCDF metadata
      o XML file of selected calculations configuration (XML task)
      o WMS/WFS/WPS cartographical services
    • Graphical user interface tier representing JavaScript objects realizing general application business logic
    The Web GIS developed provides launching of computational processing services to support solving tasks in the area of environmental monitoring, as well as presenting calculation results in the form of WMS/WFS cartographical layers in raster (PNG, JPG, GeoTIFF), vector (KML, GML, Shape), and binary (NetCDF) formats. It has shown its effectiveness in the process of solving real climate change research problems and disseminating investigation results in cartographical formats. The work is supported by the Russian Science Foundation grant No 16-19-10257.
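
    For readers unfamiliar with the cartographical services mentioned above, a WMS GetMap request is simply an HTTP call with standardized parameters. The server URL and layer name below are placeholders; the parameter set follows the WMS 1.3.0 standard.

      # Build a standard WMS 1.3.0 GetMap URL; server and layer are placeholders.
      from urllib.parse import urlencode

      params = {
          "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
          "LAYERS": "air_temperature_anomaly",         # placeholder layer name
          "CRS": "EPSG:4326", "BBOX": "40,60,80,120",  # min lat, min lon, max lat, max lon
          "WIDTH": 800, "HEIGHT": 400, "FORMAT": "image/png",
      }
      print("https://geoportal.example.org/wms?" + urlencode(params))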

  18. Coordinated Fault-Tolerance for High-Performance Computing Final Project Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Panda, Dhabaleswar Kumar; Beckman, Pete

    2011-07-28

    With the Coordinated Infrastructure for Fault Tolerance Systems (CIFTS, as the original project came to be called) project, our aim has been to understand and tackle the following broad research questions, the answers to which will help the HEC community analyze and shape the direction of research in the field of fault tolerance and resiliency on future high-end leadership systems. Will availability of global fault information, obtained by fault information exchange between the different HEC software on a system, allow individual system software to better detect, diagnose, and adaptively respond to faults? If fault-awareness is raised throughout the system through fault information exchange, is it possible to get all system software working together to provide a more comprehensive end-to-end fault management on the system? What are the missing fault-tolerance features that widely used HEC system software lacks today that would inhibit such software from taking advantage of systemwide global fault information? What are the practical limitations of a systemwide approach for end-to-end fault management based on fault awareness and coordination? What mechanisms, tools, and technologies are needed to bring about fault awareness and coordination of responses on a leadership-class system? What standards, outreach, and community interaction are needed for adoption of the concept of fault awareness and coordination for fault management on future systems? Keeping our overall objectives in mind, the CIFTS team has taken a parallel fourfold approach. Our central goal was to design and implement a light-weight, scalable infrastructure with a simple, standardized interface to allow communication of fault-related information through the system and facilitate coordinated responses. This work led to the development of the Fault Tolerance Backplane (FTB) publish-subscribe API specification, together with a reference implementation and several experimental implementations on top of existing publish-subscribe tools. We enhanced the intrinsic fault tolerance capabilities of representative implementations of a variety of key HPC software subsystems and integrated them with the FTB. Targeted software subsystems included: MPI communication libraries, checkpoint/restart libraries, resource managers and job schedulers, and system monitoring tools. Leveraging the aforementioned infrastructure, as well as developing and utilizing additional tools, we have examined issues associated with expanded, end-to-end fault response from both system and application viewpoints. From the standpoint of system operations, we have investigated log and root cause analysis, anomaly detection and fault prediction, and generalized notification mechanisms. Our applications work has included libraries for fault-tolerant linear algebra, application frameworks for coupled multiphysics applications, and external frameworks to support the monitoring and response for general applications. Our final goal was to engage the high-end computing community to increase awareness of tools and issues around coordinated end-to-end fault management.
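
    The publish-subscribe idea behind such a fault backplane can be reduced to a few lines. The event fields and function names below are invented for illustration and are not the FTB API specification.

      # Minimal fault-notification bus: subsystems publish fault events and every
      # subscriber sees them; field names are invented, not the FTB schema.
      subscribers = []

      def subscribe(handler):
          subscribers.append(handler)

      def publish(event):
          for handler in subscribers:
              handler(event)            # e.g. scheduler, checkpoint library, MPI runtime

      def scheduler_handler(event):
          if event["kind"] == "node_down":
              print("scheduler: draining jobs from", event["node"])

      subscribe(scheduler_handler)
      publish({"kind": "node_down", "node": "cn042", "source": "monitoring"})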

  19. Testing as a Service with HammerCloud

    NASA Astrophysics Data System (ADS)

    Medrano Llamas, Ramón; Barrand, Quentin; Elmsheuser, Johannes; Legger, Federica; Sciacca, Gianfranco; Sciabà, Andrea; van der Ster, Daniel

    2014-06-01

    HammerCloud was designed, and born, out of the grid community's need to test resources and automate operations from a user perspective. Recent developments in the IT space propose a shift to software-defined data centres, in which every layer of the infrastructure can be offered as a service. Testing and monitoring are an integral part of the development, validation and operation of big systems like the grid. This area is not escaping the paradigm shift, and Testing as a Service (TaaS) offerings are starting to seem natural; they allow any infrastructure service, such as the Infrastructure as a Service (IaaS) platforms being deployed at many grid sites, to be tested from both the functional and stress perspectives. This work will review the recent developments in HammerCloud and its evolution towards a TaaS conception, in particular its deployment on the Agile Infrastructure platform at CERN and the testing of many IaaS providers across Europe in the context of experiment requirements. The first section will review the architectural changes that a service running in the cloud needs, such as an orchestration service or new storage arrangements, in order to provide functional and stress testing. The second section will review the first tests of infrastructure providers, focusing on the challenges discovered from the architectural point of view. Finally, the third section will evaluate future scalability requirements and features to increase testing productivity.
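
    A hedged sketch of the functional ("canary") testing loop the paper describes: periodically submit a short validation job to each target site and record the outcome. The site list and submit_job are placeholders, not HammerCloud's actual interfaces.

        # Sketch of a periodic functional ("canary") test loop; submit_job is a
        # placeholder for whatever submission backend (grid or IaaS) is under test.
        import time

        SITES = ["site-a.example.org", "site-b.example.org"]  # hypothetical endpoints

        def submit_job(site):
            """Placeholder: submit a short validation job, return True on success."""
            raise NotImplementedError

        def run_functional_tests(interval_seconds=900):
            while True:
                for site in SITES:
                    try:
                        ok = submit_job(site)
                    except Exception:
                        ok = False
                    print(f"{site}: {'OK' if ok else 'FAILED'}")
                time.sleep(interval_seconds)

        # run_functional_tests()  # blocks; repeats the test cycle every 15 minutes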

  20. Integration of the EventIndex with other ATLAS systems

    NASA Astrophysics Data System (ADS)

    Barberis, D.; Cárdenas Zárate, S. E.; Gallas, E. J.; Prokoshin, F.

    2015-12-01

    The ATLAS EventIndex System, developed for use in LHC Run 2, is designed to index every processed event in ATLAS, replacing the TAG System used in Run 1. Its storage infrastructure, based on the Hadoop open-source software framework, necessitates revamping how information in this system relates to other ATLAS systems. It will store more indexes, since the fundamental mechanisms for retrieving them will be better integrated into all stages of data processing, allowing more events from later stages of processing to be indexed than was possible with the previous system. Connections with other systems (conditions database, monitoring) are critical for assessing dataset completeness, identifying data duplication, and checking data integrity, and they also enhance access to information in the EventIndex through user and system interfaces. This paper gives an overview of the ATLAS systems involved and the relevant metadata, and describes the technologies we are deploying to complete these connections.
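
    The core service the EventIndex provides can be pictured as a lookup from a (run, event) pair to pointers into the files produced at each processing stage. In the sketch below an in-memory dict stands in for the Hadoop-based store; the keys and record fields are illustrative assumptions.

        # Illustrative event-index lookup; a plain dict stands in for the Hadoop store.
        index = {}

        def add_event(run, event, stage, guid, offset):
            # One event may be indexed at several processing stages (RAW, ESD, AOD, ...).
            index.setdefault((run, event), []).append(
                {"stage": stage, "file_guid": guid, "offset": offset}
            )

        def lookup(run, event):
            return index.get((run, event), [])

        add_event(run=281411, event=12345678, stage="AOD",
                  guid="A1B2-C3D4", offset=9042)  # hypothetical values
        print(lookup(281411, 12345678))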

  1. Integrated Modeling, Mapping, and Simulation (IMMS) Framework for Exercise and Response Planning

    NASA Technical Reports Server (NTRS)

    Mapar, Jalal; Hoette, Trisha; Mahrous, Karim; Pancerella, Carmen M.; Plantenga, Todd; Yang, Christine; Yang, Lynn; Hopmeier, Michael

    2011-01-01

    Emergency management personnel at federal, state, and local levels can benefit from the increased situational awareness and operational efficiency afforded by simulation and modeling for emergency preparedness, including planning, training and exercises. To support this goal, the Department of Homeland Security's Science & Technology Directorate is funding the Integrated Modeling, Mapping, and Simulation (IMMS) program to create an integrating framework that brings together diverse models for use by the emergency response community. SUMMIT, one piece of the IMMS program, is the initial software framework that connects users such as emergency planners and exercise developers with modeling resources, bridging the gap in expertise and technical skills between these two communities. SUMMIT was recently deployed to support exercise planning for National Level Exercise 2010. Threat, casualty, infrastructure, and medical surge models were combined within SUMMIT to estimate health care resource requirements for the exercise ground truth.

  2. Identification of needs and requirements defined by services subordinated to the Minister of the Interior and Administration in key technology and user interfaces to develop a concept of the Video Signals Integrator (VSI) system

    NASA Astrophysics Data System (ADS)

    Bukowiecka, Danuta; Tyburska, Agata; Struniawski, Jarosław; Jastrzebski, Pawel; Jewartowski, Blazej; Pozniak, Krzysztof; Kasprowicz, Grzegorz; Pastuszak, Grzegorz; Trochimiuk, Maciej; Abramowski, Andrzej; Gaska, Michal; Frasunek, Przemysław; Nalbach-Moszynska, Małgorzata; Brawata, Sebastian; Bubak, Iwona; Gloza, Małgorzata

    2016-09-01

    Preventing and eliminating the risks of terrorist attacks or natural disasters, as well as increasing the security of mass events and critical infrastructure, requires the application of modern technologies. There is therefore a proposal to construct a tool that integrates the video signals transmitted by devices that are part of the video monitoring systems functioning in Poland. The article presents selected results of research conducted by the Police Academy in Szczytno under the implemented project for national defense and security on the "Video Signals Integrator" (acronym VSI). Project leader: Warsaw University of Technology. The consortium: Police Academy in Szczytno, Atende Software Ltd., VORTEX Ltd. No. DOBBio7/01/02/2015, funded by the National Centre for Research and Development.

  3. The dependence of educational infrastructure on clinical infrastructure.

    PubMed Central

    Cimino, C.

    1998-01-01

    The Albert Einstein College of Medicine needed to assess the growth of its infrastructure for educational computing as a first step to determining if student needs were being met. Included in computing infrastructure are space, equipment, software, and computing services. The infrastructure was assessed by reviewing purchasing and support logs for a six-year period from 1992 to 1998. This included equipment, software, and e-mail accounts provided to students and to faculty for educational purposes. Student space has grown at a constant rate (averaging a 14% increase each year). Student equipment on campus has grown by a constant amount each year (an average of 8.3 computers each year). Student infrastructure off campus and educational support of faculty have not kept pace; they have either declined or remained level over the six-year period. The availability of electronic mail clearly demonstrates this, with accounts being used by 99% of students, 78% of Basic Science Course Leaders, 38% of Clerkship Directors, 18% of Clerkship Site Directors, and 8% of Clinical Elective Directors. The collection of the initial descriptive infrastructure data has revealed problems that may generalize to other medical schools. The discrepancy between the infrastructure available to students and faculty on campus and that available off campus creates a setting where students perceive a paradoxical decline in support for computer use as they progress through medical school. While clinical infrastructure may be growing, it is at the expense of educational infrastructure at affiliate hospitals. PMID:9929262

  4. Science of Security Lablet - Scalability and Usability

    DTIC Science & Technology

    2014-12-16

    mobile computing [19]. However, the high-level infrastructure design and our own implementation (both described throughout this paper) can easily...critical and infrastructural systems demands high levels of sophistication in the technical aspects of cybersecurity, software and hardware design...Forget, S. Komanduri, Alessandro Acquisti, Nicolas Christin, Lorrie Cranor, Rahul Telang. "Security Behavior Observatory: Infrastructure for Long-term

  5. Software and the future of programming languages.

    PubMed

    Aho, Alfred V

    2004-02-27

    Although software is the key enabler of the global information infrastructure, the amount and extent of software in use in the world today are not widely understood, nor are the programming languages and paradigms that have been used to create the software. The vast size of the embedded base of existing software and the increasing costs of software maintenance, poor security, and limited functionality are posing significant challenges for the software R&D community.

  6. The StratusLab cloud distribution: Use-cases and support for scientific applications

    NASA Astrophysics Data System (ADS)

    Floros, E.

    2012-04-01

    The StratusLab project is integrating an open cloud software distribution that enables organizations to set up and provide their own private or public IaaS (Infrastructure as a Service) computing clouds. The StratusLab distribution capitalizes on popular infrastructure virtualization solutions like KVM, the OpenNebula virtual machine manager, the Claudia service manager and the SlipStream deployment platform, which are further enhanced and expanded with additional components developed within the project. The StratusLab distribution covers the core aspects of a cloud IaaS architecture, namely computing (life-cycle management of virtual machines), storage, appliance management and networking. The resulting software stack provides a packaged turn-key solution for deploying cloud computing services. The cloud computing infrastructures deployed using StratusLab can support a wide range of scientific and business use cases. Grid computing has been the primary use case pursued by the project, and for this reason the initial priority has been support for the deployment and operation of fully virtualized production-level grid sites; a goal that has already been achieved by operating such a site as part of EGI's (European Grid Initiative) pan-European grid infrastructure. In this area the project is currently working to provide non-trivial capabilities like elastic and autonomic management of grid site resources. Although grid computing has been the motivating paradigm, StratusLab's cloud distribution can support a wider range of use cases. To this end, we have developed and currently provide support for setting up general-purpose computing solutions like Hadoop, MPI and Torque clusters. Regarding scientific applications, the project is collaborating closely with the bioinformatics community to prepare VM appliances and deploy optimized services for bioinformatics applications. In a similar manner, additional scientific disciplines like Earth science can take advantage of StratusLab cloud solutions. Interested users are welcome to join StratusLab's user community by getting access to the reference cloud services deployed by the project and offered to the public.

  7. Conceptual Design of a 150-Passenger Civil Tiltrotor

    NASA Technical Reports Server (NTRS)

    Costa, Guillermo

    2012-01-01

    The conceptual design of a short-haul civil tiltrotor aircraft is presented. The concept vehicle is designed for runway-independent operations to increase the capacity of the National Airspace System without the need for increased infrastructure. This necessitates a vehicle that is capable of integrating with conventional air traffic without interfering with established flightpaths. The NASA Design and Analysis of Rotorcraft software was used to size the concept vehicle based on the mission requirements of this market. The final configuration was selected based upon performance metrics such as acquisition and maintenance costs, fuel fraction, empty weight, and required engine power. The concept presented herein has a proposed initial operating capability date of 2035, and is intended to integrate with conventional air traffic as well as proposed future air transportation concepts.

  8. Information Technology and Community Restoration Studies/Task 1: Information Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Upton, Jaki F.; Lesperance, Ann M.; Stein, Steven L.

    2009-11-19

    Executive Summary The Interagency Biological Restoration Demonstration—a program jointly funded by the Department of Defense's Defense Threat Reduction Agency and the Department of Homeland Security's (DHS's) Science and Technology Directorate—is developing policies, methods, plans, and applied technologies to restore large urban areas, critical infrastructures, and Department of Defense installations following the intentional release of a biological agent (anthrax) by terrorists. There is a perception that there should be a common system that can share information both vertically and horizontally amongst participating organizations as well as support analyses. A key question is: "How far away from this are we?" As part of this program, Pacific Northwest National Laboratory conducted research to identify the current information technology tools that would be used by organizations in the greater Seattle urban area in such a scenario, to define criteria for use in evaluating information technology tools, and to identify current gaps. Researchers interviewed 28 individuals representing 25 agencies in civilian and military organizations to identify the tools they currently use to capture data needed to support operations and decision making. The organizations can be grouped into five broad categories: defense (Department of Defense), environmental/ecological (Environmental Protection Agency/Ecology), public health and medical services, emergency management, and critical infrastructure. The types of information that would be communicated in a biological terrorism incident include critical infrastructure and resource status, safety and protection information, laboratory test results, and general emergency information. The most commonly used tools are WebEOC (web-enabled crisis information management systems with real-time information sharing), mass notification software, resource tracking software, and NW WARN (web-based information to protect critical infrastructure systems). It appears that the current information management tools are used primarily for information gathering and sharing—not decision making. Respondents identified the following criteria for a future software system: it is easy to learn, updates information in real time, works with all agencies, is secure, uses a visualization or geographic information system feature, enables varying permission levels, flows information from one stage to another, works with other databases, feeds decision support tools, is compliant with appropriate standards, and is reasonably priced. Current tools have security issues, lack visual/mapping functions and critical infrastructure status, and do not integrate with other tools. It is clear that there is a need for an integrated, common operating system. The system would need to be accessible by all the organizations that would have a role in managing an anthrax incident to enable regional decision making. The most useful tool would feature a GIS visualization that would allow for a common operating picture that is updated in real time. To capitalize on information gained from the interviews, the following activities are recommended: • Rate emergency management decision tools against the criteria specified by the interviewees. • Identify and analyze other current activities focused on information sharing in the greater Seattle urban area. • Identify and analyze information sharing systems/tools used in other regions.

  9. Seqcrawler: biological data indexing and browsing platform.

    PubMed

    Sallou, Olivier; Bretaudeau, Anthony; Roult, Aurelien

    2012-07-24

    Seqcrawler has its roots in software like SRS or Lucegene. It provides an indexing platform to ease the search of data and metadata in biological banks, and it can scale to cope with the current flow of data. While many biological bank search tools are available on the Internet, mainly provided by large organizations to search their data, there is a lack of free and open-source solutions for browsing one's own data with a flexible query system that can scale from a single computer to a cloud system. A personal index platform will help labs and bioinformaticians to search their metadata but also to build a larger information system with custom subsets of data. The software is scalable from a single computer to a cloud-based infrastructure. It has been successfully tested in a private cloud with 3 index shards (pieces of index) hosting ~400 million sequence records (the whole of GenBank, UniProt, PDB and others) for a total size of 600 GB in a fault-tolerant architecture (high availability). It has also been successfully integrated with software that adds extra metadata from BLAST results to enhance users' result analysis. Seqcrawler provides a complete open-source search and store solution for labs or platforms needing to manage large amounts of data/metadata with a flexible and customizable web interface. All components (search engine, visualization and data storage), though independent, share a common and coherent data system that can be queried with a simple HTTP interface. The solution scales easily and can also provide a high-availability infrastructure.
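
    The abstract notes that the data system can be queried through a simple HTTP interface; the sketch below shows what such a client call could look like using the requests library. The host, path, and parameter names are assumptions for illustration, not Seqcrawler's documented endpoint.

        # Hypothetical query against a Seqcrawler-style HTTP search endpoint;
        # the URL and parameter names are illustrative assumptions.
        import requests

        def search(query, start=0, rows=10):
            response = requests.get(
                "http://localhost:8080/search",  # assumed local deployment
                params={"q": query, "start": start, "rows": rows},
                timeout=30,
            )
            response.raise_for_status()
            return response.json()

        for hit in search("P53 AND organism:human").get("results", []):
            print(hit)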

  10. Cloud flexibility using DIRAC interware

    NASA Astrophysics Data System (ADS)

    Fernandez Albor, Víctor; Seco Miguelez, Marcos; Fernandez Pena, Tomas; Mendez Muñoz, Victor; Saborido Silva, Juan Jose; Graciani Diaz, Ricardo

    2014-06-01

    Communities in different locations run their computing jobs on dedicated infrastructures without the need to worry about software, hardware or even the site where their programs are going to be executed. Nevertheless, this usually implies that they are restricted to certain types or versions of an operating system, because either their software needs a specific version of a system library or a specific platform is required by the collaboration to which they belong. In this scenario, if a data center wants to serve software to incompatible communities, it has to split its physical resources among those communities. This splitting inevitably leads to an underuse of resources, because the data centers are bound to have periods where one or more of their subclusters are idle. It is in this situation that cloud computing provides the flexibility and reduction in computational cost that data centers are searching for. This paper describes a set of realistic tests that we ran on one such implementation. The tests comprise software from three different HEP communities (Auger, LHCb and QCD phenomenologists) and the Parsec Benchmark Suite running on one or more of three Linux flavors (SL5, Ubuntu 10.04 and Fedora 13). The implemented infrastructure has, at the cloud level, CloudStack, which manages the virtual machines (VMs) and the hosts on which they run, and, at the user level, the DIRAC framework along with a VM extension that submits, monitors and keeps track of the user jobs and also requests CloudStack to start or stop the necessary VMs. In this infrastructure, the community software is distributed via CernVM-FS, which has proven to be a reliable and scalable software distribution system. With the resulting infrastructure, users can send their jobs transparently to the data center. The main purpose of this system is the creation of a flexible, multiplatform cluster with a scalable method of software distribution for several VOs. Users from different communities need not care about the installation of the standard software that is available at the nodes, nor about the operating system of the host machine, which is transparent to the user.
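
    A sketch of the elasticity logic the paper implies: compare the jobs queued for each OS flavour against the running virtual machines and ask the cloud manager to start or stop instances accordingly. start_vm and stop_vm are placeholders for the CloudStack operations driven through DIRAC's VM extension; all names and the jobs-per-VM ratio are illustrative.

        # Illustrative scale-up/scale-down decision per OS flavour; start_vm/stop_vm
        # are placeholders for the CloudStack operations driven by DIRAC's VM extension.
        def reconcile(queued_jobs, running_vms, start_vm, stop_vm, jobs_per_vm=4):
            for flavour, queued in queued_jobs.items():
                wanted = -(-queued // jobs_per_vm)          # ceiling division
                running = running_vms.get(flavour, 0)
                for _ in range(max(0, wanted - running)):
                    start_vm(flavour)
                for _ in range(max(0, running - wanted)):
                    stop_vm(flavour)

        reconcile(
            queued_jobs={"SL5": 10, "Ubuntu-10.04": 0},
            running_vms={"SL5": 1, "Ubuntu-10.04": 2},
            start_vm=lambda f: print("start", f),
            stop_vm=lambda f: print("stop", f),
        )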

  11. Computational Infrastructure for Geodynamics (CIG)

    NASA Astrophysics Data System (ADS)

    Gurnis, M.; Kellogg, L. H.; Bloxham, J.; Hager, B. H.; Spiegelman, M.; Willett, S.; Wysession, M. E.; Aivazis, M.

    2004-12-01

    Solid earth geophysicists have a long tradition of writing scientific software to address a wide range of problems. In particular, computer simulations came into wide use in geophysics during the decade after the plate tectonic revolution. Solution schemes and numerical algorithms that developed in other areas of science, most notably engineering, fluid mechanics, and physics, were adapted with considerable success to geophysics. This software has largely been the product of individual efforts, and although this approach has proven successful, it is now starting to show its limitations as we try to share codes and algorithms or to recombine codes in novel ways to produce new science. With funding from the NSF, the US community has embarked on a Computational Infrastructure for Geodynamics (CIG) that will develop, support, and disseminate community-accessible software for the greater geodynamics community, from model developers to end-users. The software is being developed for problems involving mantle and core dynamics, crustal and earthquake dynamics, magma migration, seismology, and other related topics. With a high level of community participation, CIG is leveraging state-of-the-art scientific computing into a suite of open-source tools and codes. The infrastructure that we are now starting to develop will consist of: (a) a coordinated effort to develop reusable, well-documented and open-source geodynamics software; (b) the basic building blocks - an infrastructure layer - of software by which state-of-the-art modeling codes can be quickly assembled; (c) extension of existing software frameworks to interlink multiple codes and data through a superstructure layer; (d) strategic partnerships with the larger world of computational science and geoinformatics; and (e) specialized training and workshops for both the geodynamics and broader Earth science communities. The CIG initiative has already started to leverage and develop long-term strategic partnerships with open-source development efforts within the larger thrusts of scientific computing and geoinformatics. These strategic partnerships are essential as the frontier has moved into multi-scale and multi-physics problems in which many investigators now want to use simulation software for data interpretation, data assimilation, and hypothesis testing.

  12. The new meaning of quality in the information age.

    PubMed

    Prahalad, C K; Krishnan, M S

    1999-01-01

    Software applications are now a mission-critical source of competitive advantage for most companies. They are also a source of great risk, as the Y2K bug has made clear. Yet many line managers still haven't confronted software issues--partly because they aren't sure how best to define the quality of the applications in their IT infrastructures. Some companies such as Wal-Mart and the Gap have successfully integrated the software in their networks, but most have accumulated an unwieldy number of incompatible applications--all designed to perform the same tasks. The authors provide a framework for measuring the performance of software in a company's IT portfolio. Quality traditionally has been measured according to a product's ability to meet certain specifications; other views of quality have emerged that measure a product's adaptability to customers' needs and a product's ability to encourage innovation. To judge software quality properly, argue the authors, managers must measure applications against all three approaches. Understanding the domain of a software application is an important part of that process. The domain is the body of knowledge about a user's needs and expectations for a product. Software domains change frequently based on how a consumer chooses to use, for example, Microsoft Word or a spreadsheet application. The domain can also be influenced by general changes in technology, such as the development of a new software platform. Thus, applications can't be judged only according to whether they conform to specifications. The authors discuss how to identify domain characteristics and software risks and suggest ways to reduce the variability of software domains.

  13. The TJO-OAdM robotic observatory: OpenROCS and dome control

    NASA Astrophysics Data System (ADS)

    Colomé, Josep; Francisco, Xavier; Ribas, Ignasi; Casteels, Kevin; Martín, Jonatan

    2010-07-01

    The Telescope Joan Oró at the Montsec Astronomical Observatory (TJO - OAdM) is a small-class observatory operating under completely unattended control. There are key problems to solve when robotic control is envisaged, in both hardware and software. We present OpenROCS (Robotic Observatory Control System), an open source platform developed for the robotic control of the TJO - OAdM and similar astronomical observatories. It is a complex software architecture, composed of several applications for hardware control, event handling, environment monitoring, target scheduling, the image reduction pipeline, etc. The code is developed in Java, C++, Python and Perl. The software infrastructure used is based on the Internet Communications Engine (Ice), an object-oriented middleware that provides object-oriented remote procedure call, grid computing, and publish/subscribe functionality. We also describe the subsystem in charge of dome control: several hardware and software elements developed to specially protect the system at this identified single point of failure. It integrates redundant control and a rain-detector signal for alarm triggering, and it responds autonomously in case communication with any of the control elements is lost (watchdog functionality). The self-developed control software suite (OpenROCS) and the dome control system have proven to be highly reliable.
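
    A minimal sketch of the watchdog behaviour described above: if no heartbeat arrives from a control element within a timeout, the system autonomously triggers the safe action (closing the dome). The class, the 30 s timeout, and the close action are illustrative placeholders, not OpenROCS code.

        # Watchdog sketch: close the dome if heartbeats from a control element stop.
        # The close action and the 30 s timeout are illustrative placeholders.
        import time

        class Watchdog:
            def __init__(self, timeout_s, on_timeout):
                self.timeout_s = timeout_s
                self.on_timeout = on_timeout
                self.last_heartbeat = time.monotonic()

            def heartbeat(self):
                self.last_heartbeat = time.monotonic()

            def check(self):
                if time.monotonic() - self.last_heartbeat > self.timeout_s:
                    self.on_timeout()

        dome_watchdog = Watchdog(timeout_s=30, on_timeout=lambda: print("closing dome"))
        dome_watchdog.check()  # call periodically from the event loop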

  14. From WSN towards WoT: Open API Scheme Based on oneM2M Platforms.

    PubMed

    Kim, Jaeho; Choi, Sung-Chan; Ahn, Il-Yeup; Sung, Nak-Myoung; Yun, Jaeseok

    2016-10-06

    Thanks to advances in computing and network technologies such as wireless sensor networks (WSNs), conventional computing systems can now be embedded in daily objects and connected to each other, forming a global network infrastructure called the Internet of Things (IoT). To support the interconnection and interoperability between heterogeneous IoT systems, the availability of standardized, open application programming interfaces (APIs) is one of the key features of common software platforms for IoT devices, gateways, and servers. In this paper, we present a standardized way of extending previously-existing WSNs towards IoT systems, building the world of the Web of Things (WoT). Based on the oneM2M software platforms developed in the previous project, we introduce a well-designed open API scheme and device-specific thing adaptation software (TAS) enabling WSN elements, such as a wireless sensor node, to be accessed in a standardized way on a global scale. Three pilot services are implemented (i.e., a WiFi-enabled smart flowerpot, voice-based control for ZigBee-connected home appliances, and WiFi-connected AR.Drone control) to demonstrate the practical usability of the open API scheme and TAS modules. Full details on the method of integrating WSN elements into the three example systems are described at the programming-code level, which is expected to help future researchers integrate their WSN systems into IoT platforms such as oneM2M. We hope that the flexibly-deployable, easily-reusable common open API scheme and the TAS-based integration method working with the oneM2M platforms will help conventional WSNs in diverse industries evolve into emerging WoT solutions.
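
    A hedged sketch of what a TAS module's upstream call can look like: posting one sensor reading into a oneM2M resource tree as a contentInstance over HTTP. The header names follow the oneM2M HTTP binding, while the CSE address, originator ID, and container path are assumptions for illustration.

        # Sketch: a thing adaptation software (TAS) module posting a sensor reading
        # to a oneM2M CSE as a contentInstance. CSE URL, originator and container
        # path are illustrative assumptions; headers follow the oneM2M HTTP binding.
        import requests

        def post_reading(value):
            response = requests.post(
                "http://cse.example.org:8080/cse-base/flowerpot/humidity",  # assumed container
                headers={
                    "X-M2M-Origin": "Ctas01",                  # assumed originator ID
                    "X-M2M-RI": "req-0001",                    # request identifier
                    "Content-Type": "application/json;ty=4",   # ty=4: contentInstance
                },
                json={"m2m:cin": {"con": str(value)}},
                timeout=10,
            )
            response.raise_for_status()

        post_reading(42.5)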

  15. From WSN towards WoT: Open API Scheme Based on oneM2M Platforms

    PubMed Central

    Kim, Jaeho; Choi, Sung-Chan; Ahn, Il-Yeup; Sung, Nak-Myoung; Yun, Jaeseok

    2016-01-01

    Thanks to advances in computing and network technologies such as wireless sensor networks (WSNs), conventional computing systems can now be embedded in daily objects and connected to each other, forming a global network infrastructure called the Internet of Things (IoT). To support the interconnection and interoperability between heterogeneous IoT systems, the availability of standardized, open application programming interfaces (APIs) is one of the key features of common software platforms for IoT devices, gateways, and servers. In this paper, we present a standardized way of extending previously-existing WSNs towards IoT systems, building the world of the Web of Things (WoT). Based on the oneM2M software platforms developed in the previous project, we introduce a well-designed open API scheme and device-specific thing adaptation software (TAS) enabling WSN elements, such as a wireless sensor node, to be accessed in a standardized way on a global scale. Three pilot services are implemented (i.e., a WiFi-enabled smart flowerpot, voice-based control for ZigBee-connected home appliances, and WiFi-connected AR.Drone control) to demonstrate the practical usability of the open API scheme and TAS modules. Full details on the method of integrating WSN elements into the three example systems are described at the programming-code level, which is expected to help future researchers integrate their WSN systems into IoT platforms such as oneM2M. We hope that the flexibly-deployable, easily-reusable common open API scheme and the TAS-based integration method working with the oneM2M platforms will help conventional WSNs in diverse industries evolve into emerging WoT solutions. PMID:27782058

  16. Integrated Exoplanet Modeling with the GSFC Exoplanet Modeling & Analysis Center (EMAC)

    NASA Astrophysics Data System (ADS)

    Mandell, Avi M.; Hostetter, Carl; Pulkkinen, Antti; Domagal-Goldman, Shawn David

    2018-01-01

    Our ability to characterize the atmospheres of extrasolar planets will be revolutionized by JWST, WFIRST and future ground- and space-based telescopes. In preparation, the exoplanet community must develop an integrated suite of tools with which we can comprehensively predict and analyze observations of exoplanets, in order to characterize the planetary environments and ultimately search them for signs of habitability and life.The GSFC Exoplanet Modeling and Analysis Center (EMAC) will be a web-accessible high-performance computing platform with science support for modelers and software developers to host and integrate their scientific software tools, with the goal of leveraging the scientific contributions from the entire exoplanet community to improve our interpretations of future exoplanet discoveries. Our suite of models will include stellar models, models for star-planet interactions, atmospheric models, planet system science models, telescope models, instrument models, and finally models for retrieving signals from observational data. By integrating this suite of models, the community will be able to self-consistently calculate the emergent spectra from the planet whether from emission, scattering, or in transmission, and use these simulations to model the performance of current and new telescopes and their instrumentation.The EMAC infrastructure will not only provide a repository for planetary and exoplanetary community models, modeling tools and intermodel comparisons, but it will include a "run-on-demand" portal with each software tool hosted on a separate virtual machine. The EMAC system will eventually include a means of running or “checking in” new model simulations that are in accordance with the community-derived standards. Additionally, the results of intermodel comparisons will be used to produce open source publications that quantify the model comparisons and provide an overview of community consensus on model uncertainties on the climates of various planetary targets.

  17. The HACMS program: using formal methods to eliminate exploitable bugs

    PubMed Central

    Launchbury, John; Richards, Raymond

    2017-01-01

    For decades, formal methods have offered the promise of verified software that does not have exploitable bugs. Until recently, however, it has not been possible to verify software of sufficient complexity to be useful. Recently, that situation has changed. SeL4 is an open-source operating system microkernel efficient enough to be used in a wide range of practical applications. Its designers proved it to be fully functionally correct, ensuring the absence of buffer overflows, null pointer exceptions, use-after-free errors, etc., and guaranteeing integrity and confidentiality. The CompCert Verifying C Compiler maps source C programs to provably equivalent assembly language, ensuring the absence of exploitable bugs in the compiler. A number of factors have enabled this revolution, including faster processors, increased automation, more extensive infrastructure, specialized logics and the decision to co-develop code and correctness proofs rather than verify existing artefacts. In this paper, we explore the promise and limitations of current formal-methods techniques. We discuss these issues in the context of DARPA’s HACMS program, which had as its goal the creation of high-assurance software for vehicles, including quadcopters, helicopters and automobiles. This article is part of the themed issue ‘Verified trustworthy software systems’. PMID:28871050

  18. A Custom Approach for a Flexible, Real-Time and Reliable Software Defined Utility.

    PubMed

    Zaballos, Agustín; Navarro, Joan; Martín De Pozuelo, Ramon

    2018-02-28

    Information and communication technologies (ICTs) have enabled the evolution of traditional electric power distribution networks towards a new paradigm referred to as the smart grid. However, the different elements that compose the ICT plane of a smart grid are usually conceived as isolated systems that typically result in rigid hardware architectures, which are hard to interoperate, manage and adapt to new situations. In recent years, software-defined systems that take advantage of software and high-speed data network infrastructures have emerged as a promising alternative to classic ad hoc approaches in terms of integration, automation, real-time reconfiguration and resource reusability. The purpose of this paper is to propose the usage of software-defined utilities (SDUs) to address the latent deployment and management limitations of smart grids. More specifically, the implementation of a smart grid's data storage and management system prototype by means of SDUs is introduced, which exhibits the feasibility of this alternative approach. This system features a hybrid cloud architecture able to meet the data storage requirements of electric utilities and adapt itself to their ever-evolving needs. Conducted experimentations endorse the feasibility of this solution and encourage practitioners to point their efforts in this direction.

  19. A Custom Approach for a Flexible, Real-Time and Reliable Software Defined Utility

    PubMed Central

    2018-01-01

    Information and communication technologies (ICTs) have enabled the evolution of traditional electric power distribution networks towards a new paradigm referred to as the smart grid. However, the different elements that compose the ICT plane of a smart grid are usually conceived as isolated systems that typically result in rigid hardware architectures, which are hard to interoperate, manage and adapt to new situations. In recent years, software-defined systems that take advantage of software and high-speed data network infrastructures have emerged as a promising alternative to classic ad hoc approaches in terms of integration, automation, real-time reconfiguration and resource reusability. The purpose of this paper is to propose the usage of software-defined utilities (SDUs) to address the latent deployment and management limitations of smart grids. More specifically, the implementation of a smart grid’s data storage and management system prototype by means of SDUs is introduced, which exhibits the feasibility of this alternative approach. This system features a hybrid cloud architecture able to meet the data storage requirements of electric utilities and adapt itself to their ever-evolving needs. Conducted experimentations endorse the feasibility of this solution and encourage practitioners to point their efforts in this direction. PMID:29495599

  20. The HACMS program: using formal methods to eliminate exploitable bugs.

    PubMed

    Fisher, Kathleen; Launchbury, John; Richards, Raymond

    2017-10-13

    For decades, formal methods have offered the promise of verified software that does not have exploitable bugs. Until recently, however, it has not been possible to verify software of sufficient complexity to be useful. Recently, that situation has changed. SeL4 is an open-source operating system microkernel efficient enough to be used in a wide range of practical applications. Its designers proved it to be fully functionally correct, ensuring the absence of buffer overflows, null pointer exceptions, use-after-free errors, etc., and guaranteeing integrity and confidentiality. The CompCert Verifying C Compiler maps source C programs to provably equivalent assembly language, ensuring the absence of exploitable bugs in the compiler. A number of factors have enabled this revolution, including faster processors, increased automation, more extensive infrastructure, specialized logics and the decision to co-develop code and correctness proofs rather than verify existing artefacts. In this paper, we explore the promise and limitations of current formal-methods techniques. We discuss these issues in the context of DARPA's HACMS program, which had as its goal the creation of high-assurance software for vehicles, including quadcopters, helicopters and automobiles. This article is part of the themed issue 'Verified trustworthy software systems'. © 2017 The Authors.

  1. Characterizing Crowd Participation and Productivity of Foldit Through Web Scraping

    DTIC Science & Technology

    2016-03-01

    Berkeley Open Infrastructure for Network Computing CDF Cumulative Distribution Function CPU Central Processing Unit CSSG Crowdsourced Serious Game...computers at once can create a similar capacity. According to Anderson [6], principal investigator for the Berkeley Open Infrastructure for Network...extraterrestrial life. From this project, a software-based distributed computing platform called the Berkeley Open Infrastructure for Network Computing

  2. Digital Library Storage using iRODS Data Grids

    NASA Astrophysics Data System (ADS)

    Hedges, Mark; Blanke, Tobias; Hasan, Adil

    Digital repository software provides a powerful and flexible infrastructure for managing and delivering complex digital resources and metadata. However, issues can arise in managing the very large, distributed data files that may constitute these resources. This paper describes an implementation approach that combines the Fedora digital repository software with a storage layer implemented as a data grid, using the iRODS middleware developed by DICE (Data Intensive Cyber Environments) as the successor to SRB. This approach allows us to use Fedora's flexible architecture to manage the structure of resources and to provide application-layer services to users. The grid-based storage layer provides efficient support for managing and processing the underlying distributed data objects, which may be very large (e.g. audio-visual material). The Rule Engine built into iRODS is used to integrate complex workflows at the data level that need not be visible to users, e.g. digital preservation functionality.
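
    For a feel of the storage-layer interaction, the sketch below puts a large master file into an iRODS data grid and streams it back, using the later python-irodsclient package (the paper's implementation predates it); the host, zone, credentials, and paths are placeholders.

        # Sketch using python-irodsclient (a later client library, used here only
        # to illustrate the data-grid layer); host, zone, and paths are placeholders.
        from irods.session import iRODSSession

        with iRODSSession(host="irods.example.org", port=1247, user="rods",
                          password="secret", zone="tempZone") as session:
            # Store a large audio-visual master file in the grid...
            session.data_objects.put("master.mov", "/tempZone/home/rods/master.mov")
            # ...then read the first bytes of the stored object back.
            obj = session.data_objects.get("/tempZone/home/rods/master.mov")
            with obj.open("r") as f:
                print(f.read(16))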

  3. The Chandra Source Catalog 2.0: Building The Catalog

    NASA Astrophysics Data System (ADS)

    Grier, John D.; Plummer, David A.; Allen, Christopher E.; Anderson, Craig S.; Budynkiewicz, Jamie A.; Burke, Douglas; Chen, Judy C.; Civano, Francesca Maria; D'Abrusco, Raffaele; Doe, Stephen M.; Evans, Ian N.; Evans, Janet D.; Fabbiano, Giuseppina; Gibbs, Danny G., II; Glotfelty, Kenny J.; Graessle, Dale E.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; Houck, John C.; Lauer, Jennifer L.; Laurino, Omar; Lee, Nicholas P.; Martínez-Galarza, Juan Rafael; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph; McLaughlin, Warren; Morgan, Douglas L.; Mossman, Amy E.; Nguyen, Dan T.; Nichols, Joy S.; Nowak, Michael A.; Paxson, Charles; Primini, Francis Anthony; Rots, Arnold H.; Siemiginowska, Aneta; Sundheim, Beth A.; Tibbetts, Michael; Van Stone, David W.; Zografou, Panagoula

    2018-01-01

    To build release 2.0 of the Chandra Source Catalog (CSC2), we require scientific software tools and processing pipelines to evaluate and analyze the data. Additionally, software and hardware infrastructure is needed to coordinate and distribute pipeline execution, manage data I/O, and handle data for Quality Assurance (QA) intervention. We also provide data product staging for archive ingestion. Release 2 utilizes a database-driven system for integration and production. Included are four distinct instances of the Automatic Processing (AP) system (Source Detection, Master Match, Source Properties and Convex Hulls) and a high-performance computing (HPC) cluster that is managed to provide efficient catalog processing. In this poster we highlight the internal systems developed to meet the CSC2 challenge. This work has been supported by NASA under contract NAS 8-03060 to the Smithsonian Astrophysical Observatory for operation of the Chandra X-ray Center.

  4. Role of the ATLAS Grid Information System (AGIS) in Distributed Data Analysis and Simulation

    NASA Astrophysics Data System (ADS)

    Anisenkov, A. V.

    2018-03-01

    In modern high-energy physics experiments, particular attention is paid to the global integration of information and computing resources into a unified system for efficient storage and processing of experimental data. Annually, the ATLAS experiment performed at the Large Hadron Collider at the European Organization for Nuclear Research (CERN) produces tens of petabytes of raw data from the recording electronics and several petabytes of data from the simulation system. For processing and storage of such super-large volumes of data, the computing model of the ATLAS experiment is based on a heterogeneous, geographically distributed computing environment, which includes the Worldwide LHC Computing Grid (WLCG) infrastructure and is able to meet the requirements of the experiment for processing huge data sets and to provide a high degree of accessibility to them (hundreds of petabytes). The paper considers the ATLAS Grid Information System (AGIS), used by the ATLAS collaboration to describe the topology and resources of the computing infrastructure, to configure and connect the high-level software systems of computer centers, and to describe and store all possible parameters, control, configuration, and other auxiliary information required for the effective operation of the ATLAS distributed computing applications and services. The role of the AGIS system in unifying the description of the computing resources provided by grid sites, supercomputer centers, and cloud computing platforms into a consistent information model for the ATLAS experiment is outlined. This approach has allowed the collaboration to extend the computing capabilities of the WLCG project and integrate supercomputers and cloud computing platforms into the software components of the production and distributed analysis workload management system (PanDA, ATLAS).

  5. The DYNES Instrument: A Description and Overview

    NASA Astrophysics Data System (ADS)

    Zurawski, Jason; Ball, Robert; Barczyk, Artur; Binkley, Mathew; Boote, Jeff; Boyd, Eric; Brown, Aaron; Brown, Robert; Lehman, Tom; McKee, Shawn; Meekhof, Benjeman; Mughal, Azher; Newman, Harvey; Rozsa, Sandor; Sheldon, Paul; Tackett, Alan; Voicu, Ramiro; Wolff, Stephen; Yang, Xi

    2012-12-01

    Scientific innovation continues to increase the requirements placed on the computing and networking infrastructures of the world. Collaborative partners, instrumentation, storage, and processing facilities are often geographically and topologically separated, as is the case with LHC virtual organizations. These separations challenge the technology used to interconnect available resources, often delivered by Research and Education (R&E) networking providers, and lead to complications in the overall process of end-to-end data management. Capacity and traffic management are key concerns of R&E network operators; a delicate balance is required to serve both long-lived, high-capacity network flows and more traditional end-user activities. The advent of dynamic circuit services, a technology that enables the creation of variable-duration, guaranteed-bandwidth networking channels, allows for the efficient use of common network infrastructures. These gains are seen particularly in locations where overall capacity is scarce compared to the (sustained peak) needs of user communities. Related efforts, including those of the LHCOPN [3] operations group and the emerging LHCONE [4] project, may take advantage of available resources by designating specific network activities as “high priority”, allowing reservation of dedicated bandwidth or optimizing for deadline scheduling and predictable delivery patterns. This paper presents the DYNES instrument, an NSF-funded cyberinfrastructure project designed to facilitate end-to-end dynamic circuit services [2]. This combination of hardware and software innovation is being deployed across R&E networks in the United States at selected end-sites located on university campuses. DYNES is peering with international efforts in other countries using similar solutions, and is increasing the reach of this emerging technology. This global data movement solution could be integrated into computing paradigms such as cloud and grid computing platforms, and through the use of APIs can be integrated into existing data movement software.
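
    The essence of a dynamic circuit service is a reservation call: guaranteed bandwidth between two endpoints for a fixed time window. The sketch below shows the shape of such a request; the function and its fields are hypothetical, standing in for the OSCARS/IDC-style interfaces DYNES builds on.

        # Hypothetical dynamic-circuit reservation request; the function and field
        # names are illustrative, standing in for an OSCARS/IDC-style API.
        from datetime import datetime, timedelta, timezone

        def reserve_circuit(src, dst, bandwidth_mbps, start, end):
            """Placeholder for the provisioning call to the circuit controller."""
            print(f"reserve {bandwidth_mbps} Mb/s {src} -> {dst}: {start} to {end}")

        start = datetime.now(timezone.utc) + timedelta(minutes=5)
        reserve_circuit(src="campus-a.example.edu", dst="campus-b.example.edu",
                        bandwidth_mbps=1000, start=start, end=start + timedelta(hours=2))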

  6. EUDAT and EPOS moving towards the efficient management of scientific data sets

    NASA Astrophysics Data System (ADS)

    Fiameni, Giuseppe; Bailo, Daniele; Cacciari, Claudio

    2016-04-01

    This abstract presents the collaboration between the European Collaborative Data Infrastructure (EUDAT) and the pan-European infrastructure for solid Earth science (EPOS), which draws on the management of scientific data sets through a reciprocal support agreement. EUDAT is a consortium of European data centers and scientific communities whose focus is the development and realisation of the Collaborative Data Infrastructure (CDI), a common model for managing data spanning all European research data centres and data repositories and providing an interoperable layer of common data services. The EUDAT Service Suite is a set of (a) implementations of the CDI model and (b) standards, developed and offered by members of the EUDAT Consortium. These EUDAT services include a baseline of CDI-compliant interface and API services - a "CDI Gateway" - plus a number of web-based GUIs and command-line client tools. The EPOS initiative, in turn, aims at creating a pan-European infrastructure for solid Earth science to support a safe and sustainable society. In accordance with this scientific vision, the mission of EPOS is to integrate the diverse and advanced European research infrastructures for solid Earth science, relying on new e-science opportunities to monitor and unravel the dynamic and complex Earth system. EPOS will enable innovative multidisciplinary research for a better understanding of the Earth's physical and chemical processes that control earthquakes, volcanic eruptions, ground instability and tsunamis, as well as the processes driving tectonics and Earth surface dynamics. Through the integration of data, models and facilities, EPOS will allow the Earth science community to make a step change in developing new concepts and tools for key answers to scientific and socio-economic questions concerning geo-hazards and geo-resources, as well as Earth science applications to the environment and to human welfare. To achieve this integration challenge and the interoperability among all involved communities, EPOS has designed an architecture capable of organizing and managing distributed discipline-oriented centers (called Thematic Core Services - TCS). This design envisages the creation of an integrating e-infrastructure called the Integrated Core Service (ICS), whose aim is to collect and integrate data, data products, software and services, and to provide homogeneous access to them for the end user, hiding all the complexity of the underlying network of TCS and national data centers. EPOS can therefore take advantage of the EUDAT CDI at different levels: at the TCS level, providing technologies, knowledge and B2* services to discipline-oriented communities, and at the ICS level, by facilitating the integration and interoperability of communities with different levels of maturity in terms of technology expertise. EUDAT services are particularly suitable for facilitating this process, as they can be deployed across the community centers to complement or augment the existing services of more mature communities, as well as be used by less mature communities as a gateway towards the EPOS integration. To this purpose, a pilot is being carried out in the context of the EPOS seismological community to foster the uptake of EUDAT services among centers and thus ensure the efficient and sustainable management of scientific data sets. Data sets, e.g. seismic waveforms collected through the Italian Seismic Network and the ORFEUS organization, are currently replicated onto EUDAT resources to ensure their long-term preservation and accessibility. The pilot will be extended to cover other use cases, such as the management of metadata and fine-grained access control.
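
    The replication described above reduces, at its core, to a replicate-and-verify step: copy a data set to a second store and compare checksums before trusting the replica. The sketch below illustrates that pattern with local file paths as placeholders; in EUDAT the equivalent logic is provided by its replication services on top of iRODS.

        # Sketch: replicate a data set and verify the copy by checksum before
        # registering it; paths are placeholders for community- and EUDAT-side stores.
        import hashlib
        import shutil

        def sha256(path):
            digest = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    digest.update(chunk)
            return digest.hexdigest()

        def replicate(src, dst):
            shutil.copyfile(src, dst)
            if sha256(src) != sha256(dst):
                raise IOError(f"replica mismatch for {src}")
            return dst

        replicate("waveforms/2016.001.mseed", "replica/2016.001.mseed")  # hypothetical paths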

  7. Control and Information Systems for the National Ignition Facility

    DOE PAGES

    Brunton, Gordon; Casey, Allan; Christensen, Marvin; ...

    2017-03-23

    Orchestration of every National Ignition Facility (NIF) shot cycle is managed by the Integrated Computer Control System (ICCS), which uses a scalable software architecture running code on more than 1950 front-end processors, embedded controllers, and supervisory servers. The ICCS operates laser and industrial control hardware containing 66 000 control and monitor points to ensure that all of NIF’s laser beams arrive at the target within 30 ps of each other and are aligned to a pointing accuracy of less than 50 μm root-mean-square, while ensuring that a host of diagnostic instruments record data in a few billionths of a second. NIF’s automated control subsystems are built from a common object-oriented software framework that distributes the software across the computer network and achieves interoperation between different software languages and target architectures. A large suite of business and scientific software tools supports experimental planning, experimental setup, facility configuration, and post-shot analysis. Standard business services using open-source software, commercial workflow tools, and database and messaging technologies have been developed. An information technology infrastructure consisting of servers, network devices, and storage provides the foundation for these systems. This work is an overview of the control and information systems used to support a wide variety of experiments during the National Ignition Campaign.

  8. Control and Information Systems for the National Ignition Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunton, Gordon; Casey, Allan; Christensen, Marvin

    Orchestration of every National Ignition Facility (NIF) shot cycle is managed by the Integrated Computer Control System (ICCS), which uses a scalable software architecture running code on more than 1950 front-end processors, embedded controllers, and supervisory servers. The ICCS operates laser and industrial control hardware containing 66 000 control and monitor points to ensure that all of NIF’s laser beams arrive at the target within 30 ps of each other and are aligned to a pointing accuracy of less than 50 μm root-mean-square, while ensuring that a host of diagnostic instruments record data in a few billionths of a second. NIF’s automated control subsystems are built from a common object-oriented software framework that distributes the software across the computer network and achieves interoperation between different software languages and target architectures. A large suite of business and scientific software tools supports experimental planning, experimental setup, facility configuration, and post-shot analysis. Standard business services using open-source software, commercial workflow tools, and database and messaging technologies have been developed. An information technology infrastructure consisting of servers, network devices, and storage provides the foundation for these systems. This work is an overview of the control and information systems used to support a wide variety of experiments during the National Ignition Campaign.

  9. Freva - Freie Univ Evaluation System Framework for Scientific Infrastructures in Earth System Modeling

    NASA Astrophysics Data System (ADS)

    Kadow, Christopher; Illing, Sebastian; Kunst, Oliver; Schartner, Thomas; Kirchner, Ingo; Rust, Henning W.; Cubasch, Ulrich; Ulbrich, Uwe

    2016-04-01

    The Freie Univ Evaluation System Framework (Freva - freva.met.fu-berlin.de) is a software infrastructure for standardized data and tool solutions in Earth system science. Freva runs on high-performance computers to handle customizable evaluation systems of research projects, institutes or universities. It combines different software technologies into one common hybrid infrastructure, including all features present in the shell and web environment. The database interface satisfies the international standards provided by the Earth System Grid Federation (ESGF). Freva indexes different data projects into one common search environment by storing the metadata of the self-describing model, reanalysis and observational data sets in a database. This metadata system, with its advanced but easy-to-handle search tool, supports users, developers and their plugins in retrieving the required information. A generic application programming interface (API) allows scientific developers to connect their analysis tools with the evaluation system independently of the programming language used. Users of the evaluation techniques benefit from the common interface of the evaluation system without any need to understand the different scripting languages. Facilitating the provision and usage of tools and climate data automatically increases the number of scientists working with the data sets and identifying discrepancies. The integrated web-shell (shellinabox) adds a degree of freedom in the choice of the working environment and can be used as a gate to the research project's HPC system. Plugins can integrate their post-processed results into the user's database; this allows, for example, post-processing plugins to feed statistical analysis plugins, which fosters an active exchange between the plugin developers of a research project. Additionally, the history and configuration sub-system stores every analysis performed with the evaluation system in a database. Configurations and results of the tools can be shared among scientists via the shell or web system; plugged-in tools therefore benefit from transparency and reproducibility. Furthermore, if configurations match when an evaluation plugin is started, the system suggests using results already produced by other users - saving CPU/h, I/O, disk space and time. The efficient interaction between different technologies improves the Earth system modeling science framed by Freva.
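
    A hedged sketch of what a language-independent plugin contract can look like: the framework discovers a tool, exposes its parameters in both shell and web, runs it against the metadata database, and records the configuration. The class and method names below are illustrative assumptions, not Freva's actual API.

        # Illustrative plugin contract in the spirit of Freva's generic API; the
        # class and method names are assumptions, not the framework's real interface.
        class AnalysisPlugin:
            name = "trend_analysis"
            parameters = {"variable": "tas", "start_year": 1980, "end_year": 2010}

            def run(self, config, search):
                # 'search' abstracts the ESGF-standard metadata database.
                files = search(variable=config["variable"], project="CMIP5")
                print(f"analysing {len(files)} files "
                      f"{config['start_year']}-{config['end_year']}")
                return {"files_used": len(files)}

        plugin = AnalysisPlugin()
        result = plugin.run(plugin.parameters,
                            search=lambda **q: ["file1.nc", "file2.nc"])  # stub search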

  10. Freva - Freie Univ Evaluation System Framework for Scientific HPC Infrastructures in Earth System Modeling

    NASA Astrophysics Data System (ADS)

    Kadow, C.; Illing, S.; Schartner, T.; Grieger, J.; Kirchner, I.; Rust, H.; Cubasch, U.; Ulbrich, U.

    2017-12-01

    The Freie Univ Evaluation System Framework (Freva - freva.met.fu-berlin.de) is a software infrastructure for standardized data and tool solutions in Earth system science (e.g. www-miklip.dkrz.de, cmip-eval.dkrz.de). Freva runs on high-performance computers to handle customizable evaluation systems of research projects, institutes or universities. It combines different software technologies into one common hybrid infrastructure, including all features present in the shell and web environment. The database interface satisfies the international standards provided by the Earth System Grid Federation (ESGF). Freva indexes different data projects into one common search environment by storing the metadata of the self-describing model, reanalysis and observational data sets in a database. This metadata system, with its advanced but easy-to-handle search tool, supports users, developers and their plugins in retrieving the required information. A generic application programming interface (API) allows scientific developers to connect their analysis tools with the evaluation system independently of the programming language used. Users of the evaluation techniques benefit from the common interface of the evaluation system without any need to understand the different scripting languages. The integrated web-shell (shellinabox) adds a degree of freedom in the choice of the working environment and can be used as a gate to the research project's HPC system. Plugins can integrate their post-processed results into the user's database; this allows, for example, post-processing plugins to feed statistical analysis plugins, which fosters an active exchange between the plugin developers of a research project. Additionally, the history and configuration sub-system stores every analysis performed with the evaluation system in a database. Configurations and results of the tools can be shared among scientists via the shell or web system. Furthermore, if configurations match when an evaluation plugin is started, the system suggests using results already produced by other users - saving CPU/h, I/O, disk space and time. The efficient interaction between different technologies improves the Earth system modeling science framed by Freva.

  11. The Earth System Grid Federation (ESGF) Project

    NASA Astrophysics Data System (ADS)

    Carenton-Madiec, Nicolas; Denvil, Sébastien; Greenslade, Mark

    2015-04-01

    The Earth System Grid Federation (ESGF) Peer-to-Peer (P2P) enterprise system is a collaboration that develops, deploys and maintains software infrastructure for the management, dissemination, and analysis of model output and observational data. ESGF's primary goal is to facilitate advancements in Earth System Science. It is an interagency and international effort led by the US Department of Energy (DOE), and co-funded by the National Aeronautics and Space Administration (NASA), the National Oceanic and Atmospheric Administration (NOAA), the National Science Foundation (NSF), the Infrastructure for the European Network of Earth System Modelling (IS-ENES) and international laboratories such as the Max Planck Institute for Meteorology (MPI-M), the German Climate Computing Centre (DKRZ), the Australian National University (ANU) National Computational Infrastructure (NCI), the Institut Pierre-Simon Laplace (IPSL), and the British Atmospheric Data Centre (BADC). Its main mission is to support current CMIP5 activities and prepare for future assessments. The ESGF architecture is based on a system of autonomous and distributed nodes, which interoperate through common acceptance of federation protocols and trust agreements. Data is stored at multiple nodes around the world and served through local data and metadata services. Nodes exchange information about their data holdings and services, and trust each other to register users and establish access control decisions. The net result is that a user can use a web browser, connect to any node, and seamlessly find and access data throughout the federation. This collaborative organization and distributed architecture highlighted the need to define integration and testing processes that ensure the quality and interoperability of software releases. This presentation will introduce the ESGF project and demonstrate the range of tools and processes that have been set up to support release management activities.
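
    As a sketch of what federated discovery looks like from the client side, the snippet below queries an ESGF index node's public search service; the node URL and facet values are examples and should be checked against the node you actually use.

        import requests

        # Ask one index node; federation-wide publishing means the answer can
        # cover datasets hosted elsewhere in the federation.
        resp = requests.get(
            "https://esgf-node.llnl.gov/esg-search/search",
            params={
                "project": "CMIP6",                # facet: data project
                "variable": "tas",                 # facet: near-surface air temperature
                "format": "application/solr+json",
                "limit": 3,
            },
            timeout=30,
        )
        resp.raise_for_status()
        for doc in resp.json()["response"]["docs"]:
            print(doc.get("id"))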

  12. Open Source Dataturbine (OSDT) Android Sensorpod in Environmental Observing Systems

    NASA Astrophysics Data System (ADS)

    Fountain, T. R.; Shin, P.; Tilak, S.; Trinh, T.; Smith, J.; Kram, S.

    2014-12-01

    The OSDT Android SensorPod is a custom-designed mobile computing platform for assembling wireless sensor networks for environmental monitoring applications. Funded by an award from the Gordon and Betty Moore Foundation, the OSDT SensorPod represents a significant technological advance in the application of mobile and cloud computing technologies to near-real-time applications in environmental science, natural resources management, and disaster response and recovery. It provides a modular architecture based on open standards and open-source software that allows system developers to align their projects with industry best practices and technology trends, while avoiding commercial vendor lock-in to expensive proprietary software and hardware systems. The integration of mobile and cloud-computing infrastructure represents a disruptive technology in the field of environmental science, since basic assumptions about technology requirements are now open to revision, e.g., the roles of special-purpose data loggers and dedicated site infrastructure. The OSDT Android SensorPod was designed with these considerations in mind, and the resulting system is flexible, efficient and robust. The system was developed and tested in three science applications: 1) a fresh water limnology deployment in Wisconsin, 2) a near-coastal marine science deployment at the UCSD Scripps Pier, and 3) a terrestrial ecological deployment in the mountains of Taiwan. As part of a public education and outreach effort, a Facebook page with daily ocean pH measurements from the UCSD Scripps pier was developed. Wireless sensor networks and the virtualization of data and network services are the future of environmental science infrastructure. The OSDT Android SensorPod was designed and developed to harness these new technology developments for environmental monitoring applications.

  13. The SISMA Project: A pre-operative seismic hazard monitoring system.

    NASA Astrophysics Data System (ADS)

    Chersich, Massimiliano; Amodio, Angelo; Francia, Andrea; Sparpaglione, Claudio

    2009-04-01

    Galileian Plus is currently leading the development, in collaboration with several Italian universities, of the SISMA (Seismic Information System for Monitoring and Alert) Pilot Project financed by the Italian Space Agency. The system is devoted to the continuous monitoring of seismic risk and is intended to support the Italian Civil Protection decisional process. Completion of the Pilot Project is planned for the beginning of 2010. The main scientific paradigm of SISMA is an innovative deterministic approach integrating geophysical models, geodesy and active tectonics. This paper gives a general overview of the project along with its progress status, with a particular focus on the architectural design details and the software implementation choices. SISMA is built on top of a software infrastructure developed by Galileian Plus to integrate the scientific programs devoted to updating seismic risk maps. The main characteristics of the system may be summarized as follows: automatic download of input data; integration of scientific programs; definition and scheduling of chains of processes; monitoring and control of the system through a graphical user interface (GUI); compatibility of the products with ESRI ArcGIS, by means of post-processing conversion. a) Automatic download of input data: SISMA needs input data such as GNSS observations, updated seismic catalogues, SAR satellite orbits, etc. that are periodically updated and made available from remote servers through FTP and HTTP. This task is accomplished by a dedicated, user-configurable component. b) Integration of scientific programs: SISMA integrates many scientific programs written in different languages (Fortran, C, C++, Perl and Bash) and running on different operating systems. These design requirements led to the development of a distributed system which is platform independent and able to run any terminal-based program following a few simple predefined rules. c) Definition and scheduling of chains of processes: Processes are bound to each other, in the sense that the output of process "A" should be passed as input to process "B". In this case process "B" must run automatically as soon as the required input is ready. In SISMA this issue is handled with the "data-driven" activation concept, allowing the user to specify that a process should start as soon as the needed input datum has been made available in the archive. Moreover, SISMA may run processes on a "time-driven" basis: the infrastructure provides a configurable scheduler allowing the user to define the start time and the periodicity of such processes. d) Monitoring and control: The operator of the system needs to monitor and control every process running in the system. The SISMA infrastructure allows the user, through its GUI, to view log messages of running and past processes; stop running processes; monitor process executions; and monitor resource status (available RAM, network reachability, and available disk space) for every machine in the system. e) Compatibility with ESRI Shapefiles: Nearly all SISMA data carry some geographic information, and it is useful to integrate them into a Geographic Information System (GIS). Processor outputs are georeferenced, but they are generated as ASCII files in a proprietary format and thus cannot be loaded directly into a GIS. The infrastructure provides a simple framework for adding filters that read the data in the proprietary format and convert it to the ESRI Shapefile format.
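
    A toy rendition of the "data-driven" activation described above, with process B triggered as soon as process A's output datum appears in the archive; the file names and polling mechanism are purely illustrative.

        import time
        from pathlib import Path

        ARCHIVE = Path("archive")
        ARCHIVE.mkdir(exist_ok=True)

        def process_a():
            # Stand-in for a download/processing step producing a datum.
            (ARCHIVE / "gnss_input.txt").write_text("raw GNSS observations")

        def process_b():
            datum = (ARCHIVE / "gnss_input.txt").read_text()
            print("process B consuming:", datum)

        def wait_for(path, timeout=5.0, poll=0.1):
            # Data-driven trigger: block until the needed input datum exists.
            deadline = time.time() + timeout
            while time.time() < deadline:
                if path.exists():
                    return True
                time.sleep(poll)
            return False

        process_a()
        if wait_for(ARCHIVE / "gnss_input.txt"):
            process_b()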

  14. DIaaS: Data-Intensive workflows as a service - Enabling easy composition and deployment of data-intensive workflows on Virtual Research Environments

    NASA Astrophysics Data System (ADS)

    Filgueira, R.; Ferreira da Silva, R.; Deelman, E.; Atkinson, M.

    2016-12-01

    We present the Data-Intensive workflows as a Service (DIaaS) model for enabling easy data-intensive workflow composition and deployment on clouds using containers. The backbone of the DIaaS model is Asterism, an integrated solution for running data-intensive stream-based applications on heterogeneous systems, which combines the benefits of the dispel4py and Pegasus workflow systems. The stream-based executions of an Asterism workflow are managed by dispel4py, while the data movement between different e-Infrastructures and the coordination of the application execution are automatically managed by Pegasus. DIaaS combines the Asterism framework with Docker containers to provide an integrated, complete, easy-to-use, portable approach to running data-intensive workflows on distributed platforms. Three containers make up the DIaaS model: a Pegasus node, an MPI cluster, and an Apache Storm cluster. Container images are described as Dockerfiles (available online at http://github.com/dispel4py/pegasus_dispel4py), linked to Docker Hub for continuous integration (automated image builds) and image storage and sharing. In this model, all the software (workflow systems and execution engines) required for running scientific applications is packed into the containers, which significantly reduces the effort (and possible human errors) required by scientists or VRE administrators to build such systems. The most common use of DIaaS will be to act as a backend of VREs or Scientific Gateways to run data-intensive applications, deploying cloud resources upon request. We have demonstrated the feasibility of DIaaS using the data-intensive seismic ambient noise cross-correlation application (Figure 1). The application preprocesses (Phase 1) and cross-correlates (Phase 2) traces from several seismic stations. The application is submitted via Pegasus (Container 1), and Phase 1 and Phase 2 are executed in the MPI (Container 2) and Storm (Container 3) clusters respectively. Although both phases could be executed within the same environment, this setup demonstrates the flexibility of DIaaS to run applications across e-Infrastructures. In summary, DIaaS delivers specialized software to execute data-intensive applications in a scalable, efficient, and robust manner, reducing engineering time and computational cost.
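
    As a sketch of the container-per-engine idea, the snippet below uses the Docker SDK for Python to launch two short-lived containers standing in for the two phases; the image names and commands are placeholders, not the project's published images.

        import docker  # pip install docker; needs a running Docker daemon

        client = docker.from_env()
        # One container per execution engine, mirroring the DIaaS separation.
        phase1 = client.containers.run("alpine", "echo preprocessing traces",
                                       remove=True)
        phase2 = client.containers.run("alpine", "echo cross-correlating traces",
                                       remove=True)
        print(phase1.decode().strip())
        print(phase2.decode().strip())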

  15. 17 CFR 39.18 - System safeguards.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... physical infrastructure or personnel necessary for it to conduct activities necessary to the clearing and... transportation, telecommunications, power, water, or other critical infrastructure components in a relevant area... Division of Clearing and Risk promptly of: (1) Any hardware or software malfunction, cyber security...

  16. 17 CFR 39.18 - System safeguards.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... physical infrastructure or personnel necessary for it to conduct activities necessary to the clearing and... transportation, telecommunications, power, water, or other critical infrastructure components in a relevant area... Division of Clearing and Risk promptly of: (1) Any hardware or software malfunction, cyber security...

  17. 17 CFR 39.18 - System safeguards.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... physical infrastructure or personnel necessary for it to conduct activities necessary to the clearing and... transportation, telecommunications, power, water, or other critical infrastructure components in a relevant area... Division of Clearing and Risk promptly of: (1) Any hardware or software malfunction, cyber security...

  18. Preparing to use vehicle infrastructure integration (VII) in transportation operations : phase II.

    DOT National Transportation Integrated Search

    2009-01-01

    Vehicle infrastructure integration (VII) is an emerging approach intended to create an enabling communication capability to support vehicle-to-vehicle and vehicle-to-infrastructure communications for safety and mobility applications. The Virginia Dep...

  19. The TSO Logic and G2 Software Product

    NASA Technical Reports Server (NTRS)

    Davis, Derrick D.

    2014-01-01

    This internship assignment for spring 2014 was at John F. Kennedy Space Center (KSC), in NASA's Engineering and Technology (NE) group in support of the Control and Data Systems Division (NE-C) within the Systems Hardware Engineering Branch (NE-C4). The primary focus was system integration and benchmarking utilizing two separate computer software products. The first half of the internship was spent assisting NE-C4's Electronics and Embedded Systems Engineer, Kelvin Ruiz, and fellow intern Scott Ditto with the evaluation of a new piece of software called G2, developed by the Gensym Corporation and introduced to the group as a tool for monitoring launch environments. All fellow interns and employees of the G2 group worked together in order to better understand the significance of the G2 application and how KSC can benefit from its capabilities. The second stage of the project was to assist with the ongoing integration of a benchmarking tool developed by a group of engineers from a Canadian organization known as TSO Logic. Guided by NE-C4's Computer Engineer, Allen Villorin, the NASA 2014 interns put forth great effort in helping to integrate TSO's software into the Spaceport Processing Systems Development Laboratory (SPSDL) for further testing and evaluation. The TSO Logic group claims that their software, designed for monitoring and reducing energy consumption at in-house server farms and large data centers, allows data centers to control the power state of servers without impacting availability or performance and without changes to infrastructure, and the focus of the assignment was to test this claim. TSO's founder and CEO Aaron Rallo and CTO Chris Tivel both came to KSC to assist with the installation of their software in the SPSDL laboratory. TSO's software was installed onto 24 individual workstations running three different operating systems. The workstations were divided into three groups of eight, each group having its own operating system: the first ran Ubuntu's Debian-based Linux, the second Windows 7 Professional, and the third Red Hat Linux. The highlight of this portion of the assignment was to compose documentation expressing the overall impression of the software and its capabilities.

  20. Development of Network-based Communications Architectures for Future NASA Missions

    NASA Technical Reports Server (NTRS)

    Slywczak, Richard A.

    2007-01-01

    Since the Vision for Space Exploration (VSE) announcement, NASA has been developing a communications infrastructure that combines existing terrestrial techniques with newer concepts and capabilities. The overall goal is to develop a flexible, modular, and extensible architecture that leverages and enhances terrestrial networking technologies that can either be directly applied or modified for the space regime. In addition, where existing technologies leave gaps, new technologies must be developed; one example is dynamic routing that accounts for constrained power and bandwidth environments. Using these enhanced technologies, NASA can develop nodes that provide characteristics such as routing, store-and-forward, and access-on-demand capabilities. But with the development of the new infrastructure, challenges and obstacles will arise. The current communications infrastructure has been developed on a mission-by-mission basis rather than with an end-to-end approach; this has led to a larger ground infrastructure but has not encouraged communications between space-based assets. This alone is one of the key challenges that NASA must overcome. With the development of the new Crew Exploration Vehicle (CEV), NASA has the opportunity to provide an integration path for the new vehicles and provide standards for their development. Some of the newer capabilities these vehicles could include are routing, security, and Software Defined Radios (SDRs). To meet these needs, the NASA Glenn Research Center's (GRC) Network Emulation Laboratory (NEL) has been using both simulation and emulation to study and evaluate these architectures. These techniques provide options to NASA that directly impact architecture development. This paper identifies components of the infrastructure that play a pivotal role in the new NASA architecture, develops a scheme using simulation and emulation for testing these architectures, and demonstrates how NASA can strengthen the new infrastructure by implementing these concepts.
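
    The store-and-forward capability mentioned above can be pictured with the toy node below: bundles that arrive while the link is down are buffered and forwarded at the next contact window. This is a schematic of the concept only, not any NASA implementation.

        from collections import deque

        class RelayNode:
            def __init__(self):
                self.buffer = deque()   # bundles held while no link is available
                self.link_up = False

            def receive(self, bundle):
                if self.link_up:
                    self.forward(bundle)
                else:
                    self.buffer.append(bundle)   # store until the next contact

            def contact_window(self):
                # A contact window opens: drain everything that was stored.
                self.link_up = True
                while self.buffer:
                    self.forward(self.buffer.popleft())

            def forward(self, bundle):
                print("forwarded:", bundle)

        node = RelayNode()
        node.receive("telemetry frame 1")   # link down: stored
        node.receive("telemetry frame 2")   # link down: stored
        node.contact_window()               # both frames forwarded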

  1. The Virtual Environment for Reactor Applications (VERA): Design and architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, John A.; Clarno, Kevin; Sieger, Matt

    VERA, the Virtual Environment for Reactor Applications, is the system of physics capabilities being developed and deployed by the Consortium for Advanced Simulation of Light Water Reactors (CASL), the first DOE Hub, which was established in July 2010 for the modeling and simulation of commercial nuclear reactors. VERA consists of integrating and interfacing software together with a suite of physics components adapted and/or refactored to simulate relevant physical phenomena in a coupled manner. VERA also includes the software development environment and computational infrastructure needed for these components to be effectively used. We describe the architecture of VERA from both a software and a numerical perspective, along with the goals and constraints that drove the major design decisions and their implications. As a result, we explain why VERA is an environment rather than a framework or toolkit, why these distinctions are relevant (particularly for coupled physics applications), and provide an overview of results that demonstrate the application of VERA tools for a variety of challenging problems within the nuclear industry.

  2. The Virtual Environment for Reactor Applications (VERA): Design and architecture

    DOE PAGES

    Turner, John A.; Clarno, Kevin; Sieger, Matt; ...

    2016-09-08

    VERA, the Virtual Environment for Reactor Applications, is the system of physics capabilities being developed and deployed by the Consortium for Advanced Simulation of Light Water Reactors (CASL), the first DOE Hub, which was established in July 2010 for the modeling and simulation of commercial nuclear reactors. VERA consists of integrating and interfacing software together with a suite of physics components adapted and/or refactored to simulate relevant physical phenomena in a coupled manner. VERA also includes the software development environment and computational infrastructure needed for these components to be effectively used. We describe the architecture of VERA from both a software and a numerical perspective, along with the goals and constraints that drove the major design decisions and their implications. As a result, we explain why VERA is an environment rather than a framework or toolkit, why these distinctions are relevant (particularly for coupled physics applications), and provide an overview of results that demonstrate the application of VERA tools for a variety of challenging problems within the nuclear industry.

  3. The Next Generation of Lab and Classroom Computing - The Silver Lining

    DTIC Science & Technology

    2016-12-01

    A virtual desktop infrastructure (VDI) solution, as well as the computing solutions at three universities, was selected as the basis for comparison. Keywords: infrastructure, VDI, hardware cost, software cost, manpower, availability, cloud computing, private cloud, bring your own device, BYOD, thin client.

  4. Space Telecommunications Radio System (STRS) Compliance Testing

    NASA Technical Reports Server (NTRS)

    Handler, Louis M.

    2011-01-01

    The Space Telecommunications Radio System (STRS) defines an open architecture for software defined radios. This document describes the testing methodology to aid in determining the degree of compliance with the STRS architecture. Non-compliances are reported to the software and hardware developers as well as the NASA project manager so that any non-compliances may be fixed or waivers issued. Since the software developers may be divided into those that provide the operating environment including the operating system and STRS infrastructure (OE) and those that supply the waveform applications, the tests are divided accordingly. The static tests are also divided by the availability of an automated tool that determines whether the source code and configuration files contain the appropriate items. Thus, there are six separate step-by-step test procedures described as well as the corresponding requirements that they test. The six types of STRS compliance tests are: STRS application automated testing, STRS infrastructure automated testing, STRS infrastructure testing by compiling WFCCN with the infrastructure, STRS configuration file testing, STRS application manual code testing, and STRS infrastructure manual code testing. Examples of the input and output of the scripts are shown in the appendices, as well as more specific information about what to configure and test in WFCCN for non-compliance. In addition, each STRS requirement is listed and the type of testing briefly described. Also attached is a set of guidelines on what to look for in addition to the requirements to aid in the document review process.
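
    An automated static test of the kind described could, in outline, resemble the sketch below, which flags source files that never reference a required infrastructure call; the identifier being checked is hypothetical and not taken from the STRS specification.

        import re
        from pathlib import Path

        # Hypothetical required entry point; a real checker would carry a table
        # of required and forbidden items from the architecture document.
        REQUIRED = re.compile(r"\bSTRS_Configure\b")

        def check_sources(root):
            findings = []
            for src in Path(root).rglob("*.c"):
                if not REQUIRED.search(src.read_text(errors="ignore")):
                    findings.append(f"{src}: missing required STRS call")
            return findings

        for finding in check_sources("waveform_src"):
            print(finding)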

  5. Extensible Infrastructure for Browsing and Searching Abstracted Spacecraft Data

    NASA Technical Reports Server (NTRS)

    Wallick, Michael N.; Crockett, Thomas M.; Joswig, Joseph C.; Torres, Recaredo J.; Norris, Jeffrey S.; Fox, Jason M.; Powell, Mark W.; Mittman, David S.; Abramyan, Lucy; Shams, Khawaja S.

    2009-01-01

    A computer program has been developed to provide a common interface for all space mission data, and allows different types of data to be displayed in the same context. This software provides an infrastructure for representing any type of mission data.

  6. Scaling Agile Infrastructure to People

    NASA Astrophysics Data System (ADS)

    Jones, B.; McCance, G.; Traylen, S.; Barrientos Arias, N.

    2015-12-01

    When CERN migrated its infrastructure away from homegrown fabric management tools to emerging industry-standard open-source solutions, the immediate technical challenges and motivation were clear. The move to a multi-site Cloud Computing model meant that the tool chains growing around this ecosystem would be a good choice; the challenge was to leverage them. The use of open-source tools brings challenges other than merely how to deploy them. Homegrown software, for all the deficiencies identified at the outset of the project, has the benefit of growing with the organization. This paper examines the challenges in adapting open-source tools to the needs of the organization, particularly in the areas of multi-group development and security. Additionally, the increase in scale of the plant required changes to how Change Management was organized and managed. Continuous Integration techniques are used in order to manage the rate of change across multiple groups, and the tools and workflow for this will be examined.

  7. DES Science Portal: II- Creating Science-Ready Catalogs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fausti Neto, Angelo; et al.

    We present a novel approach for creating science-ready catalogs through a software infrastructure developed for the Dark Energy Survey (DES). We integrate the data products released by the DES Data Management and additional products created by the DES collaboration in an environment known as the DES Science Portal. Each step involved in the creation of a science-ready catalog is recorded in a relational database and can be recovered at any time. We describe how the DES Science Portal automates the creation and characterization of lightweight catalogs for the DES Year 1 Annual Release, and show its flexibility in creating multiple catalogs with different inputs and configurations. Finally, we discuss the advantages of this infrastructure for large surveys such as DES and the Large Synoptic Survey Telescope. The capability of creating science-ready catalogs efficiently and with full control of the inputs and configurations used is an important asset for supporting science analysis using data from large astronomical surveys.
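
    A minimal sketch of the record-every-step idea, using an invented SQLite schema (the portal's actual relational model is not described in the abstract):

        import json
        import sqlite3
        import time

        db = sqlite3.connect("provenance.db")
        db.execute("CREATE TABLE IF NOT EXISTS steps "
                   "(run_id TEXT, step TEXT, config TEXT, finished REAL)")

        def record_step(run_id, step, config):
            # Store the full configuration so the catalog can be re-derived later.
            db.execute("INSERT INTO steps VALUES (?, ?, ?, ?)",
                       (run_id, step, json.dumps(config, sort_keys=True),
                        time.time()))
            db.commit()

        record_step("y1-cat-001", "photoz_cut", {"z_max": 1.4})
        record_step("y1-cat-001", "star_galaxy_sep", {"classifier": "extended"})
        for row in db.execute("SELECT step, config FROM steps WHERE run_id = ?",
                              ("y1-cat-001",)):
            print(row)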

  8. Software and cyber-infrastructure development to control the Observatorio Astrofísico de Javalambre (OAJ)

    NASA Astrophysics Data System (ADS)

    Yanes-Díaz, A.; Antón, J. L.; Rueda-Teruel, S.; Guillén-Civera, L.; Bello, R.; Jiménez-Mejías, D.; Chueca, S.; Lasso-Cabrera, N. M.; Suárez, O.; Rueda-Teruel, F.; Cenarro, A. J.; Cristobal-Hornillos, D.; Marin-Franch, A.; Luis-Simoes, R.; López-Alegre, G.; Rodríguez-Hernández, M. A. C.; Moles, M.; Ederoclite, A.; Varela, J.; Vazquez Ramió, H.; Díaz-Martín, M. C.; Iglesias-Marzoa, R.; Maicas, N.; Lamadrid, J. L.; Lopez-Sainz, A.; Hernández-Fuertes, J.; Valdivielso, L.; Mendes de Oliveira, C.; Penteado, P.; Schoenell, W.; Kanaan, A.

    2014-07-01

    The Observatorio Astrofísico de Javalambre (OAJ) is a new astronomical facility located at the Sierra de Javalambre (Teruel, Spain) whose primary role will be to conduct all-sky astronomical surveys with two unprecedented telescopes of unusually large fields of view: the JST/T250, a 2.55m telescope with a 3 deg field of view, and the JAST/T80, an 83cm telescope with a 2 deg field of view. The CEFCA engineering team has designed the OAJ control system as a global concept to manage, monitor, control and maintain all the observatory systems, including not only astronomical subsystems but also infrastructure and other facilities. In order to provide quality, reliability and efficiency, the OAJ control system (OCS) design is based on CIA (Control Integrated Architecture) and OEE (Overall Equipment Effectiveness) as keys to improving day and night operation processes. The OCS spans from the low-level hardware layer, including IOs connected directly to sensors and actuators deployed around the whole observatory, including telescopes and astronomical instrumentation, up to the high-level software layer as a tool to perform observatory operations efficiently. We give an overview of the OAJ control system design and implementation from an engineering point of view, giving details of the design criteria, technology, architecture, standards, functional blocks, model structure, development, deployment and goals, a report on the current status, and next steps.

  9. Visual Decision Support Tool for Supporting Asset ...

    EPA Pesticide Factsheets

    Abstract: Managing urban water infrastructures faces the challenge of jointly dealing with assets of diverse types, useful life, cost, ages and condition. Service quality and sustainability require sound long-term planning, well aligned with tactical and operational planning and management. In summary, the objective of an integrated approach to infrastructure asset management is to assist utilities answer the following questions: • Who are we at present? • What service do we deliver? • What do we own? • Where do we want to be in the long-term? • How do we get there? The AWARE-P approach (www.aware-p.org) offers a coherent methodological framework and a valuable portfolio of software tools. It is designed to assist water supply and wastewater utility decision-makers in their analyses and planning processes. It is based on a Plan-Do-Check-Act process and is in accordance with the key principles of the International Standards Organization (ISO) 55000 standards on asset management. It is compatible with, and complementary to, WERF’s SIMPLE framework. The software assists in strategic, tactical, and operational planning, through a non-intrusive, web-based, collaborative environment where objectives and metrics drive IAM planning. It is aimed at industry professionals and managers, as well as at the consultants and technical experts that support them. It is easy to use and maximizes the value of information from multiple existing data sources, both in da

  10. Utilizing an integrated infrastructure for outcomes research: a systematic review.

    PubMed

    Dixon, Brian E; Whipple, Elizabeth C; Lajiness, John M; Murray, Michael D

    2016-03-01

    To explore the ability of an integrated health information infrastructure to support outcomes research. A systematic review of articles published from 1983 to 2012 by Regenstrief Institute investigators using data from an integrated electronic health record infrastructure involving multiple provider organisations was performed. Articles were independently assessed and classified by study design, disease and other metadata including bibliometrics. A total of 190 articles were identified. Diseases included cognitive (16), cardiovascular (16), infectious (15), chronic illness (14) and cancer (12). Publications grew steadily (26 in the first decade vs. 100 in the last) as did the number of investigators (from 15 in 1983 to 62 in 2012). The proportion of articles involving non-Regenstrief authors also expanded from 54% in the first decade to 72% in the last decade. During this period, the infrastructure grew from a single health system into a health information exchange network covering more than 6 million patients. Analysis of journal and article metrics reveals high impact for clinical trials and comparative effectiveness research studies that utilised data available in the integrated infrastructure. Integrated information infrastructures support growth in high quality observational studies and diverse collaboration consistent with the goals for the learning health system. More recent publications demonstrate growing external collaborations facilitated by greater access to the infrastructure and improved opportunities to study broader disease and health outcomes. Integrated information infrastructures can stimulate learning from electronic data captured during routine clinical care but require time and collaboration to reach full potential. © 2015 Health Libraries Group.

  11. VectorBase: a data resource for invertebrate vector genomics

    PubMed Central

    Lawson, Daniel; Arensburger, Peter; Atkinson, Peter; Besansky, Nora J.; Bruggner, Robert V.; Butler, Ryan; Campbell, Kathryn S.; Christophides, George K.; Christley, Scott; Dialynas, Emmanuel; Hammond, Martin; Hill, Catherine A.; Konopinski, Nathan; Lobo, Neil F.; MacCallum, Robert M.; Madey, Greg; Megy, Karine; Meyer, Jason; Redmond, Seth; Severson, David W.; Stinson, Eric O.; Topalis, Pantelis; Birney, Ewan; Gelbart, William M.; Kafatos, Fotis C.; Louis, Christos; Collins, Frank H.

    2009-01-01

    VectorBase (http://www.vectorbase.org) is an NIAID-funded Bioinformatic Resource Center focused on invertebrate vectors of human pathogens. VectorBase annotates and curates vector genomes providing a web accessible integrated resource for the research community. Currently, VectorBase contains genome information for three mosquito species: Aedes aegypti, Anopheles gambiae and Culex quinquefasciatus, a body louse Pediculus humanus and a tick species Ixodes scapularis. Since our last report VectorBase has initiated a community annotation system, a microarray and gene expression repository and controlled vocabularies for anatomy and insecticide resistance. We have continued to develop both the software infrastructure and tools for interrogating the stored data. PMID:19028744

  12. Evolution of Autonomous Self-Righting Behaviors for Articulated Nanorovers

    NASA Technical Reports Server (NTRS)

    Tunstel, Edward

    1999-01-01

    Miniature rovers with articulated mobility mechanisms are being developed for planetary surface exploration on Mars and small solar system bodies. These vehicles are designed to be capable of autonomous recovery from overturning during surface operations. This paper describes a computational means of developing motion behaviors that achieve the autonomous recovery function. It proposes a control software design approach aimed at reducing the effort involved in developing self-righting behaviors. The approach is based on the integration of evolutionary computing with a dynamics simulation environment for evolving and evaluating motion behaviors. The automated behavior design approach is outlined and its underlying genetic programming infrastructure is described.
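
    The shape of such an evolutionary loop is shown in the self-contained sketch below; the fitness function is a stub standing in for the dynamics simulation, and the encoding and parameters are illustrative only.

        import random

        GENES, POP, GENS = 6, 30, 40   # command-sequence length, population, generations

        def fitness(individual):
            # Stub for the dynamics simulation: pretend the rover self-rights
            # when the joint commands sum to roughly 1.0.
            return -abs(sum(individual) - 1.0)

        def mutate(individual, rate=0.2):
            return [g + random.gauss(0, 0.1) if random.random() < rate else g
                    for g in individual]

        population = [[random.uniform(-1, 1) for _ in range(GENES)]
                      for _ in range(POP)]
        for _ in range(GENS):
            population.sort(key=fitness, reverse=True)
            elite = population[: POP // 5]          # keep the best fifth
            population = elite + [mutate(random.choice(elite))
                                  for _ in range(POP - len(elite))]

        best = max(population, key=fitness)
        print("best command sequence:", [round(g, 2) for g in best])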

  13. Progress Toward Cancer Data Ecosystems.

    PubMed

    Grossman, Robert L

    One of the recommendations of the Cancer Moonshot Blue Ribbon Panel report from 2016 was the creation of a national cancer data ecosystem. We review some of the approaches for building cancer data ecosystems and some of the progress that has been made. A data commons is the colocation of data with cloud computing infrastructure and commonly used software services, tools, and applications for managing, integrating, analyzing, and sharing data to create an interoperable resource for the research community. We discuss data commons and their potential role in cancer data ecosystems and, in particular, how multiple data commons can interoperate to form part of the foundation for a cancer data ecosystem.

  14. Computational Materials Science and Chemistry: Accelerating Discovery and Innovation through Simulation-Based Engineering and Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crabtree, George; Glotzer, Sharon; McCurdy, Bill

    This report is based on a SC Workshop on Computational Materials Science and Chemistry for Innovation on July 26-27, 2010, to assess the potential of state-of-the-art computer simulations to accelerate understanding and discovery in materials science and chemistry, with a focus on potential impacts in energy technologies and innovation. The urgent demand for new energy technologies has greatly exceeded the capabilities of today's materials and chemical processes. To convert sunlight to fuel, efficiently store energy, or enable a new generation of energy production and utilization technologies requires the development of new materials and processes of unprecedented functionality and performance. New materials and processes are critical pacing elements for progress in advanced energy systems and virtually all industrial technologies. Over the past two decades, the United States has developed and deployed the world's most powerful collection of tools for the synthesis, processing, characterization, and simulation and modeling of materials and chemical systems at the nanoscale, dimensions of a few atoms to a few hundred atoms across. These tools, which include world-leading x-ray and neutron sources, nanoscale science facilities, and high-performance computers, provide an unprecedented view of the atomic-scale structure and dynamics of materials and the molecular-scale basis of chemical processes. For the first time in history, we are able to synthesize, characterize, and model materials and chemical behavior at the length scale where this behavior is controlled. This ability is transformational for the discovery process and, as a result, confers a significant competitive advantage. Perhaps the most spectacular increase in capability has been demonstrated in high performance computing. Over the past decade, computational power has increased by a factor of a million due to advances in hardware and software. This rate of improvement, which shows no sign of abating, has enabled the development of computer simulations and models of unprecedented fidelity. We are at the threshold of a new era where the integrated synthesis, characterization, and modeling of complex materials and chemical processes will transform our ability to understand and design new materials and chemistries with predictive power. In turn, this predictive capability will transform technological innovation by accelerating the development and deployment of new materials and processes in products and manufacturing. Harnessing the potential of computational science and engineering for the discovery and development of materials and chemical processes is essential to maintaining leadership in these foundational fields that underpin energy technologies and industrial competitiveness. Capitalizing on the opportunities presented by simulation-based engineering and science in materials and chemistry will require an integration of experimental capabilities with theoretical and computational modeling; the development of a robust and sustainable infrastructure to support the development and deployment of advanced computational models; and the assembly of a community of scientists and engineers to implement this integration and infrastructure. This community must extend to industry, where incorporating predictive materials science and chemistry into design tools can accelerate the product development cycle and drive economic competitiveness.
The confluence of new theories, new materials synthesis capabilities, and new computer platforms has created an unprecedented opportunity to implement a "materials-by-design" paradigm with wide-ranging benefits in technological innovation and scientific discovery. The Workshop on Computational Materials Science and Chemistry for Innovation was convened in Bethesda, Maryland, on July 26-27, 2010. Sponsored by the Department of Energy (DOE) Offices of Advanced Scientific Computing Research and Basic Energy Sciences, the workshop brought together 160 experts in materials science, chemistry, and computational science representing more than 65 universities, laboratories, and industries, and four agencies. The workshop examined seven foundational challenge areas in materials science and chemistry: materials for extreme conditions, self-assembly, light harvesting, chemical reactions, designer fluids, thin films and interfaces, and electronic structure. Each of these challenge areas is critical to the development of advanced energy systems, and each can be accelerated by the integrated application of predictive capability with theory and experiment. The workshop concluded that emerging capabilities in predictive modeling and simulation have the potential to revolutionize the development of new materials and chemical processes. Coupled with world-leading materials characterization and nanoscale science facilities, this predictive capability provides the foundation for an innovation ecosystem that can accelerate the discovery, development, and deployment of new technologies, including advanced energy systems. Delivering on the promise of this innovation ecosystem requires the following: Integration of synthesis, processing, characterization, theory, and simulation and modeling. Many of the newly established Energy Frontier Research Centers and Energy Hubs are exploiting this integration. Achieving/strengthening predictive capability in foundational challenge areas. Predictive capability in the seven foundational challenge areas described in this report is critical to the development of advanced energy technologies. Developing validated computational approaches that span vast differences in time and length scales. This fundamental computational challenge crosscuts all of the foundational challenge areas. Similarly challenging is coupling of analytical data from multiple instruments and techniques that are required to link these length and time scales. Experimental validation and quantification of uncertainty in simulation and modeling. Uncertainty quantification becomes increasingly challenging as simulations become more complex. Robust and sustainable computational infrastructure, including software and applications. For modeling and simulation, software equals infrastructure. To validate the computational tools, software is critical infrastructure that effectively translates huge arrays of experimental data into useful scientific understanding. An integrated approach for managing this infrastructure is essential. Efficient transfer and incorporation of simulation-based engineering and science in industry. Strategies for bridging the gap between research and industrial applications and for widespread industry adoption of integrated computational materials engineering are needed.

  15. Vehicle infrastructure integration (VII) based road-condition warning system for highway collision prevention.

    DOT National Transportation Integrated Search

    2009-05-01

    As a major ITS initiative, the Vehicle Infrastructure Integration (VII) program is to revolutionize : transportation by creating an enabling communication infrastructure that will open up a wide range of : safety applications. The road-condition warn...

  16. Weather applications and products enabled through vehicle infrastructure integration (VII) : feasibility and concept development study

    DOT National Transportation Integrated Search

    2007-01-01

    Vehicle Infrastructure Integration (VII) involves the two-way wireless transmission of data from vehicle-to-vehicle and vehicle-to-infrastructure utilizing Dedicated Short Range Communications (DSRC). VII will enable the development of weather-relate...

  17. GABBs: Cyberinfrastructure for Self-Service Geospatial Data Exploration, Computation, and Sharing

    NASA Astrophysics Data System (ADS)

    Song, C. X.; Zhao, L.; Biehl, L. L.; Merwade, V.; Villoria, N.

    2016-12-01

    Geospatial data are present everywhere today with the proliferation of location-aware computing devices. This is especially true in the scientific community, where large amounts of data are driving research and education activities in many domains. Collaboration over geospatial data, for example in modeling, data analysis and visualization, must still overcome the barriers of specialized software and expertise, among other challenges. In addressing these needs, the Geospatial data Analysis Building Blocks (GABBs) project aims at building geospatial modeling, data analysis and visualization capabilities in an open source web platform, HUBzero. Funded by NSF's Data Infrastructure Building Blocks initiative, GABBs is creating a geospatial data architecture that integrates spatial data management, mapping and visualization, and interfaces in the HUBzero platform for scientific collaborations. The geo-rendering-enabled Rappture toolkit, a generic Python mapping library, geospatial data exploration and publication tools, and an integrated online geospatial data management solution are among the software building blocks from the project. The GABBs software will be available as Amazon AWS Marketplace VM images and as open source; hosting services are also available to the user community. The outcome of the project will enable researchers and educators to self-manage their scientific data, rapidly create GIS-enabled tools, share geospatial data and tools on the web, and build dynamic workflows connecting data and tools, all without requiring significant software development skills, GIS expertise or IT administrative privileges. This presentation will describe the GABBs architecture, toolkits and libraries, and showcase the scientific use cases that utilize GABBs capabilities, as well as the challenges and solutions for GABBs to interoperate with other cyberinfrastructure platforms.

  18. dCache, towards Federated Identities & Anonymized Delegation

    NASA Astrophysics Data System (ADS)

    Ashish, A.; Millar, AP; Mkrtchyan, T.; Fuhrmann, P.; Behrmann, G.; Sahakyan, M.; Adeyemi, O. S.; Starek, J.; Litvintsev, D.; Rossi, A.

    2017-10-01

    For over a decade, dCache has relied on the authentication and authorization infrastructure (AAI) offered by VOMS, Kerberos, Xrootd etc. Although the established infrastructure has worked well and provided sufficient security, the implementation of procedures and the underlying software is often seen as a burden, especially by smaller communities trying to adopt existing HEP software stacks [1]. Moreover, scientists are increasingly dependent on service portals for data access [2]. In this paper, we describe how federated identity management systems can facilitate the transition from traditional AAI infrastructure to novel solutions like OpenID Connect. We investigate the advantages offered by OpenID Connect with regard to ‘delegation of authentication’ and ‘credential delegation for offline access’. Additionally, we demonstrate how macaroons can provide a more fine-grained authorization mechanism that supports anonymized delegation.
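
    A short example of the macaroon mechanism using the pymacaroons library: caveats baked into the token restrict what the (possibly anonymous) bearer may do. The location, identifier and caveat strings here are invented for illustration; dCache's actual caveat vocabulary may differ.

        from pymacaroons import Macaroon, Verifier  # pip install pymacaroons

        key = "server-secret-key"
        m = Macaroon(location="dcache.example.org", identifier="token-1", key=key)
        m.add_first_party_caveat("activity = DOWNLOAD")   # reads only
        m.add_first_party_caveat("path = /data/public")   # this subtree only
        token = m.serialize()   # hand this string to a portal or colleague

        v = Verifier()
        v.satisfy_exact("activity = DOWNLOAD")
        v.satisfy_exact("path = /data/public")
        # verify() returns True when all caveats are satisfied, else raises.
        print(v.verify(Macaroon.deserialize(token), key))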

  19. Software architecture standard for simulation virtual machine, version 2.0

    NASA Technical Reports Server (NTRS)

    Sturtevant, Robert; Wessale, William

    1994-01-01

    The Simulation Virtual Machine (SVM) is an Ada architecture which eases the effort involved in real-time software maintenance and sustaining engineering. The Software Architecture Standard defines the infrastructure from which all the simulation models are built. SVM was developed for and used in the Space Station Verification and Training Facility.

  20. Contingency theoretic methodology for agent-based web-oriented manufacturing systems

    NASA Astrophysics Data System (ADS)

    Durrett, John R.; Burnell, Lisa J.; Priest, John W.

    2000-12-01

    The development of distributed, agent-based, web-oriented, N-tier Information Systems (IS) must be supported by a design methodology capable of responding to the convergence of shifts in business process design, organizational structure, computing, and telecommunications infrastructures. We introduce a contingency theoretic model for the use of open, ubiquitous software infrastructure in the design of flexible organizational IS. Our basic premise is that developers should change the way they view the software design process, from the solution of a problem to the dynamic creation of teams of software components. We postulate that developing effective, efficient, flexible, component-based distributed software requires reconceptualizing the current development model. The basic concepts of distributed software design are merged with the environment-causes-structure relationship from contingency theory; the task-uncertainty of organizational-information-processing relationships from information processing theory; and the concept of inter-process dependencies from coordination theory. Software processes are considered as employees, groups of processes as software teams, and distributed systems as software organizations. Design techniques already used in the design of flexible business processes and well researched in the domain of the organizational sciences are presented. Guidelines that can be utilized in the creation of component-based distributed software will be discussed.

  1. Services Oriented Smart City Platform Based On 3d City Model Visualization

    NASA Astrophysics Data System (ADS)

    Prandi, F.; Soave, M.; Devigili, F.; Andreolli, M.; De Amicis, R.

    2014-04-01

    The rapid technological evolution that is characterizing all the disciplines involved within the wide concept of smart cities is becoming a key factor to trigger true user-driven innovation. However, to extend the Smart City concept to a wide geographical target, an infrastructure is required that allows the integration of heterogeneous geographical information and sensor networks into a common technological ground. In this context 3D city models will play an increasingly important role in our daily lives and become an essential part of the modern city information infrastructure (Spatial Data Infrastructure). The work presented in this paper describes an innovative Services Oriented Architecture software platform aimed at providing smart-city services on top of 3D urban models. 3D city models are the basis of many applications and can become the platform for integrating city information within the Smart-Cities context. In particular, the paper investigates how the efficient visualisation of 3D city models using different levels of detail (LODs) is one of the pivotal technological challenges for supporting Smart-Cities applications. The goal is to provide the final user with realistic and abstract 3D representations of the urban environment and the possibility to interact with the massive amounts of semantic information contained in the geospatial 3D city model. The proposed solution, using OGC standards and a custom service to provide 3D city models, lets users consume the services and interact with the 3D model via the Web in a more effective way.

  2. Digital data collection in paleoanthropology.

    PubMed

    Reed, Denné; Barr, W Andrew; Mcpherron, Shannon P; Bobe, René; Geraads, Denis; Wynn, Jonathan G; Alemseged, Zeresenay

    2015-01-01

    Understanding patterns of human evolution across space and time requires synthesizing data collected by independent research teams, and this effort is part of a larger trend to develop cyber infrastructure and e-science initiatives. At present, paleoanthropology cannot easily answer basic questions about the total number of fossils and artifacts that have been discovered, or exactly how those items were collected. In this paper, we examine the methodological challenges to data integration, with the hope that mitigating the technical obstacles will further promote data sharing. At a minimum, data integration efforts must document what data exist and how the data were collected (discovery), after which we can begin standardizing data collection practices with the aim of achieving combined analyses (synthesis). This paper outlines a digital data collection system for paleoanthropology. We review the relevant data management principles for a general audience and supplement this with technical details drawn from over 15 years of paleontological and archeological field experience in Africa and Europe. The system outlined here emphasizes free open-source software (FOSS) solutions that work on multiple computer platforms; it builds on recent advances in open-source geospatial software and mobile computing. © 2015 Wiley Periodicals, Inc.

  3. Educational process in modern climatology within the web-GIS platform "Climate"

    NASA Astrophysics Data System (ADS)

    Gordova, Yulia; Gorbatenko, Valentina; Gordov, Evgeny; Martynova, Yulia; Okladnikov, Igor; Titov, Alexander; Shulgina, Tamara

    2013-04-01

    These days, the problem of training scientists, common to all scientific fields, is exacerbated in the environmental sciences by the need to develop new computational and information technology skills in distributed multi-disciplinary teams. To address this and other pressing problems of Earth system sciences, a software infrastructure for information support of integrated research in the geosciences was created based on modern information and computational technologies, and a software and hardware platform "Climate" (http://climate.scert.ru/) was developed. In addition to the direct analysis of geophysical data archives, the platform is aimed at teaching the basics of the study of changes in regional climate. The educational component of the platform includes a series of lectures on climate, environmental and meteorological modeling and laboratory work cycles on the basics of analysis of current and potential future regional climate change, using the territory of Siberia as an example. The educational process within the platform is implemented using the distance learning system Moodle (www.moodle.org). This work is partially supported by the Ministry of Education and Science of the Russian Federation (contract #8345), SB RAS project VIII.80.2.1, RFBR grant #11-05-01190a, and integrated project SB RAS #131.

  4. Evolving a lingua franca and associated software infrastructure for computational systems biology: the Systems Biology Markup Language (SBML) project.

    PubMed

    Hucka, M; Finney, A; Bornstein, B J; Keating, S M; Shapiro, B E; Matthews, J; Kovitz, B L; Schilstra, M J; Funahashi, A; Doyle, J C; Kitano, H

    2004-06-01

    Biologists are increasingly recognising that computational modelling is crucial for making sense of the vast quantities of complex experimental data that are now being collected. The systems biology field needs agreed-upon information standards if models are to be shared, evaluated and developed cooperatively. Over the last four years, our team has been developing the Systems Biology Markup Language (SBML) in collaboration with an international community of modellers and software developers. SBML has become a de facto standard format for representing formal, quantitative and qualitative models at the level of biochemical reactions and regulatory networks. In this article, we summarise the current and upcoming versions of SBML and our efforts at developing software infrastructure for supporting and broadening its use. We also provide a brief overview of the many SBML-compatible software tools available today.
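
    For a feel of the software infrastructure side, a minimal model built with the libSBML Python bindings looks roughly as follows (a bare-bones sketch, not a complete model ready for any particular simulator):

        import libsbml  # pip install python-libsbml

        doc = libsbml.SBMLDocument(3, 1)      # SBML Level 3 Version 1
        model = doc.createModel()
        model.setId("minimal_model")

        comp = model.createCompartment()
        comp.setId("cell")
        comp.setConstant(True)
        comp.setSize(1.0)

        sp = model.createSpecies()
        sp.setId("ATP")
        sp.setCompartment("cell")
        sp.setInitialAmount(10.0)
        # Attributes required by L3V1 for every species:
        sp.setConstant(False)
        sp.setHasOnlySubstanceUnits(False)
        sp.setBoundaryCondition(False)

        print(libsbml.writeSBMLToString(doc)[:200], "...")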

  5. Experiments Toward the Application of Multi-Robot Systems to Disaster-Relief Scenarios

    DTIC Science & Technology

    2015-09-01

    responsibility is assessment, such as dislocated populations, degree of property damage, and remaining communications infrastructure. These are all... specific problems: evaluating damage to infrastructure in the environment, e.g., traversability of roads; and localizing particular targets of interest... regarding hardware and software infrastructure are driven by the need for these systems to “survive the field” and allow for reliable evaluation of autonomy

  6. Developing a Web-based system by integrating VGI and SDI for real estate management and marketing

    NASA Astrophysics Data System (ADS)

    Salajegheh, J.; Hakimpour, F.; Esmaeily, A.

    2014-10-01

    The importance of property in various respects, especially its impact on various sectors of the economy and on the country's macroeconomics, is clear. Because of the real, multi-dimensional and heterogeneous nature of housing as a commodity, the lack of an integrated system containing comprehensive information about property, the lack of awareness of some actors in this field about such comprehensive information, and the lack of clear and comprehensive rules and regulations for trading and pricing, several problems arise for the people involved in this field. This research aims at the implementation of a crowd-sourced Web-based real estate support system. Creating a Spatial Data Infrastructure (SDI) in this system for collecting, updating and integrating all official data about property is also pursued in this study. The system uses a Web 2.0 broker and technologies such as Web services and service composition. This work aims to provide comprehensive and diverse information about property from different sources. For this purpose a five-level real estate support system architecture is used. The PostgreSQL DBMS is used to implement the system, GeoServer is used as the map server and reference implementation of OGC (Open Geospatial Consortium) standards, and the Apache server is used to serve web pages and user interfaces. Integrating the introduced methods and technologies provides a proper environment for various users to use the system and share their information. This goal is only achieved by cooperation between all organizations involved in real estate, with implementation of their required infrastructures as interoperable Web services.
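
    As an example of consuming such an SDI through OGC standards, the sketch below pulls features from a WFS endpoint with OWSLib; the service URL and layer name are placeholders for whatever the participating organizations publish.

        from owslib.wfs import WebFeatureService  # pip install OWSLib

        wfs = WebFeatureService(url="https://sdi.example.org/geoserver/wfs",
                                version="1.1.0")
        # Request official cadastral parcels inside a bounding box.
        response = wfs.getfeature(typename=["cadastre:parcels"],
                                  bbox=(57.0, 30.2, 57.2, 30.4),
                                  outputFormat="application/json")
        print(response.read()[:200])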

  7. Delivering integrated HAZUS-MH flood loss analyses and flood inundation maps over the Web.

    PubMed

    Hearn, Paul P; Longenecker, Herbert E; Aguinaldo, John J; Rahav, Ami N

    2013-01-01

    Catastrophic flooding is responsible for more loss of life and damages to property than any other natural hazard. Recently developed flood inundation mapping technologies make it possible to view the extent and depth of flooding on the land surface over the Internet; however, by themselves these technologies are unable to provide estimates of losses to property and infrastructure. The Federal Emergency Management Agency's (FEMA's) HAZUS-MH software is extensively used to conduct flood loss analyses in the United States, providing a nationwide database of population and infrastructure at risk. Unfortunately, HAZUS-MH requires a dedicated Geographic Information System (GIS) workstation and a trained operator, and analyses are not adapted for convenient delivery over the Web. This article describes a cooperative effort by the US Geological Survey (USGS) and FEMA to make HAZUS-MH output GIS and Web compatible and to integrate these data with digital flood inundation maps in USGS's newly developed Inundation Mapping Web Portal. By running the computationally intensive HAZUS-MH flood analyses offline and converting the output to a Web-GIS compatible format, detailed estimates of flood losses can now be delivered to anyone with Internet access, thus dramatically increasing the availability of these forecasts to local emergency planners and first responders.

  8. Delivering integrated HAZUS-MH flood loss analyses and flood inundation maps over the Web

    USGS Publications Warehouse

    Hearn,, Paul P.; Longenecker, Herbert E.; Aguinaldo, John J.; Rahav, Ami N.

    2013-01-01

    Catastrophic flooding is responsible for more loss of life and damages to property than any other natural hazard. Recently developed flood inundation mapping technologies make it possible to view the extent and depth of flooding on the land surface over the Internet; however, by themselves these technologies are unable to provide estimates of losses to property and infrastructure. The Federal Emergency Management Agency’s (FEMA's) HAZUS-MH software is extensively used to conduct flood loss analyses in the United States, providing a nationwide database of population and infrastructure at risk. Unfortunately, HAZUS-MH requires a dedicated Geographic Information System (GIS) workstation and a trained operator, and analyses are not adapted for convenient delivery over the Web. This article describes a cooperative effort by the US Geological Survey (USGS) and FEMA to make HAZUS-MH output GIS and Web compatible and to integrate these data with digital flood inundation maps in USGS’s newly developed Inundation Mapping Web Portal. By running the computationally intensive HAZUS-MH flood analyses offline and converting the output to a Web-GIS compatible format, detailed estimates of flood losses can now be delivered to anyone with Internet access, thus dramatically increasing the availability of these forecasts to local emergency planners and first responders.

  9. Leveraging geospatial data, technology, and methods for improving the health of communities: priorities and strategies from an expert panel convened by the CDC.

    PubMed

    Elmore, Kim; Flanagan, Barry; Jones, Nicholas F; Heitgerd, Janet L

    2010-04-01

    In 2008, CDC convened an expert panel to gather input on the use of geospatial science in surveillance, research and program activities focused on CDC's Healthy Communities Goal. The panel suggested six priorities: spatially enable and strengthen public health surveillance infrastructure; develop metrics for geospatial categorization of community health and health inequity; evaluate the feasibility and validity of standard metrics of community health and health inequities; support and develop GIScience and geospatial analysis; provide geospatial capacity building, training and education; and, engage non-traditional partners. Following the meeting, the strategies and action items suggested by the expert panel were reviewed by a CDC subcommittee to determine priorities relative to ongoing CDC geospatial activities, recognizing that many activities may need to occur either in parallel, or occur multiple times across phases. Phase A of the action items centers on developing leadership support. Phase B focuses on developing internal and external capacity in both physical (e.g., software and hardware) and intellectual infrastructure. Phase C of the action items plan concerns the development and integration of geospatial methods. In summary, the panel members provided critical input to the development of CDC's strategic thinking on integrating geospatial methods and research issues across program efforts in support of its Healthy Communities Goal.

  10. General consumer communication tools for improved image management and communication in medicine.

    PubMed

    Rosset, Chantal; Rosset, Antoine; Ratib, Osman

    2005-12-01

    We elected to explore new technologies emerging on the general consumer market that can improve and facilitate image and data communication in medical and clinical environments. These new technologies, developed for communication and storage of data, can improve user convenience and facilitate the communication and transport of images and related data beyond the usual limits and restrictions of a traditional picture archiving and communication system (PACS) network. We specifically tested and implemented three new technologies provided on Apple computer platforms. (1) We adopted the iPod, an MP3 portable player with hard disk storage, to easily and quickly move large numbers of DICOM images. (2) We adopted iChat, a videoconferencing and instant-messaging application, to transmit DICOM images in real time to a distant computer for conference teleradiology. (3) Finally, we developed a direct secure interface to the iDisk service, a file-sharing service based on WebDAV technology, to send and share DICOM files between distant computers. These three technologies were integrated in a new open-source image navigation and display software package called OsiriX, allowing for manipulation and communication of multimodality and multidimensional DICOM image data sets. This software is freely available as an open-source project at http://homepage.mac.com/rossetantoine/OsiriX. Our experience showed that the implementation of these technologies allowed us to significantly enhance the existing PACS with valuable new features without any additional investment or the need for complex extensions of our infrastructure. The added features, such as teleradiology, secure and convenient image and data communication, and the use of external data storage services, open the gate to a much broader extension of our imaging infrastructure to the outside world.
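
    A hedged sketch of the kind of ad hoc DICOM transfer the authors describe (copying studies onto consumer storage such as an iPod-class disk) is shown below using the pydicom library; the paths and the minimal anonymization step are illustrative assumptions and do not reproduce the OsiriX implementation.

        # Hedged sketch: copy DICOM files onto portable consumer storage with light anonymization.
        # Paths and the anonymized fields are hypothetical; OsiriX itself is not reproduced here.
        from pathlib import Path
        import pydicom

        source_dir = Path("/data/study_001")          # assumed local PACS export
        portable_dir = Path("/Volumes/IPOD/dicom")    # assumed mounted portable disk
        portable_dir.mkdir(parents=True, exist_ok=True)

        for dicom_path in source_dir.glob("*.dcm"):
            ds = pydicom.dcmread(dicom_path)
            # Minimal de-identification before the data leaves the hospital network.
            ds.PatientName = "ANONYMOUS"
            ds.PatientID = "0000"
            ds.save_as(portable_dir / dicom_path.name)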

  11. Automated sensor networks to advance ocean science

    NASA Astrophysics Data System (ADS)

    Schofield, O.; Orcutt, J. A.; Arrott, M.; Vernon, F. L.; Peach, C. L.; Meisinger, M.; Krueger, I.; Kleinert, J.; Chao, Y.; Chien, S.; Thompson, D. R.; Chave, A. D.; Balasuriya, A.

    2010-12-01

    The National Science Foundation has funded the Ocean Observatories Initiative (OOI), which over the next five years will deploy infrastructure to expand scientists' ability to remotely study the ocean. The deployed infrastructure will be linked by a robust cyberinfrastructure (CI) that will integrate marine observatories into a coherent system-of-systems. OOI is committed to engaging the ocean sciences community during the construction phase. For the CI, this is being enabled by a “spiral design strategy” allowing for input throughout the construction phase. In Fall 2009, the OOI CI development team used an existing ocean observing network in the Mid-Atlantic Bight (MAB) to test OOI CI software. The objective of this CI test was to aggregate data from ships, autonomous underwater vehicles (AUVs), shore-based radars, and satellites and make it available to five different data-assimilating ocean forecast models. Scientists used these multi-model forecasts to automate future glider missions in order to demonstrate the feasibility of two-way interactivity between the sensor web and predictive models. The CI software coordinated and prioritized the shared resources, allowed for the semi-automated reconfiguration of asset tasking, and thus enabled autonomous execution of observation plans for the fixed and mobile observation platforms. Efforts were coordinated through a web portal that provided an access point for the observational data and model forecasts. Researchers could use the CI software in tandem with the web data portal to assess the performance of individual numerical model results, or multi-model ensembles, through real-time comparisons with satellite, shore-based radar, and in situ robotic measurements. The resulting sensor net will enable a new means to explore and study the world’s oceans by providing scientists with a responsive network that can be accessed via any wireless network.
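
    The real-time comparison of multi-model ensembles against observations mentioned above can be illustrated by a hedged sketch; the forecast and glider-observation arrays below are synthetic placeholders, not OOI CI data.

        # Hedged sketch: compare a multi-model ensemble of ocean forecasts with in situ observations.
        # The numbers are synthetic; the OOI CI itself aggregates these feeds automatically.
        import numpy as np

        # Sea-surface temperature forecasts (degC) from five assumed models at ten stations.
        forecasts = np.array([
            [18.2, 18.5, 18.1, 18.7, 18.4, 18.9, 19.0, 18.6, 18.3, 18.8],
            [18.0, 18.4, 18.2, 18.6, 18.5, 18.8, 19.1, 18.5, 18.2, 18.7],
            [18.3, 18.6, 18.0, 18.8, 18.3, 19.0, 18.9, 18.7, 18.4, 18.9],
            [17.9, 18.3, 18.1, 18.5, 18.4, 18.7, 19.2, 18.4, 18.1, 18.6],
            [18.1, 18.5, 18.2, 18.7, 18.6, 18.9, 19.0, 18.6, 18.3, 18.8],
        ])
        observations = np.array([18.1, 18.4, 18.0, 18.6, 18.5, 18.8, 19.1, 18.5, 18.2, 18.7])

        ensemble_mean = forecasts.mean(axis=0)
        rmse_per_model = np.sqrt(((forecasts - observations) ** 2).mean(axis=1))
        rmse_ensemble = np.sqrt(((ensemble_mean - observations) ** 2).mean())

        print("RMSE per model:", np.round(rmse_per_model, 3))
        print("RMSE of ensemble mean:", round(rmse_ensemble, 3))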

  12. The Role of Free/Libre and Open Source Software in Learning Health Systems.

    PubMed

    Paton, C; Karopka, T

    2017-08-01

    Objective: To give an overview of the role of Free/Libre and Open Source Software (FLOSS) in the context of secondary use of patient data to enable Learning Health Systems (LHSs). Methods: We conducted an environmental scan of the academic and grey literature utilising the MedFLOSS database of open source systems in healthcare to inform a discussion of the role of open source in developing LHSs that reuse patient data for research and quality improvement. Results: A wide range of FLOSS is identified that contributes to the information technology (IT) infrastructure of LHSs, including operating systems, databases, frameworks, interoperability software, and mobile and web apps. The recent literature around the development and use of key clinical data management tools is also reviewed. Conclusions: FLOSS already plays a critical role in modern health IT infrastructure for the collection, storage, and analysis of patient data. The collaborative, modular, and modifiable nature of FLOSS systems may make open source approaches appropriate for building the digital infrastructure of an LHS.

  13. Vehicle infrastructure integration proof-of-concept results and findings--infrastructure : final report, volume 3B.

    DOT National Transportation Integrated Search

    2009-05-01

    In 2005, the US Department of Transportation (DOT) initiated a program to develop and test a 5.9 GHz-based Vehicle Infrastructure Integration (VII) proof of concept (POC). The POC was implemented in the northwest suburbs of Detroit, Michigan. Th...

  14. Web-based spatial analysis with the ILWIS open source GIS software and satellite images from GEONETCast

    NASA Astrophysics Data System (ADS)

    Lemmens, R.; Maathuis, B.; Mannaerts, C.; Foerster, T.; Schaeffer, B.; Wytzisk, A.

    2009-12-01

    This paper addresses easily accessible, integrated Web-based analysis of satellite images with plug-in based open source software. The paper is targeted at both users and developers of geospatial software. Guided by a use case scenario, we describe the ILWIS software and its toolbox for accessing satellite images through the GEONETCast broadcasting system. The last two decades have shown a major shift from stand-alone software systems to networked ones, often client/server applications using distributed geo-(web-)services. This allows organisations to combine their own data with remotely available data and processing functionality without much effort. Key to this integrated spatial data analysis is low-cost access to data from within user-friendly and flexible software. Web-based open source software solutions are increasingly a powerful option for developing countries. The Integrated Land and Water Information System (ILWIS) is a PC-based GIS and Remote Sensing software package, comprising a complete suite of image processing, spatial analysis and digital mapping tools, and was developed as commercial software from the early nineties onwards. Recent project efforts have migrated ILWIS into a modular, plug-in-based open source software and provide web-service support for OGC-based web mapping and processing. The core objective of the ILWIS Open source project is to provide a maintainable framework for researchers and software developers to implement training components, scientific toolboxes and (web-) services. The latest plug-ins have been developed for multi-criteria decision making, water resources analysis and spatial statistics analysis. The development of this framework has been carried out since 2007 in the context of 52°North, an open initiative that advances the development of cutting-edge open source geospatial software using the GPL license. GEONETCast, as part of the emerging Global Earth Observation System of Systems (GEOSS), puts essential environmental data at the fingertips of users around the globe. This user-friendly and low-cost information dissemination provides global information as a basis for decision-making in a number of critical areas, including public health, energy, agriculture, weather, water, climate, natural disasters and ecosystems. GEONETCast makes satellite images available via Digital Video Broadcast (DVB) technology. An OGC WMS interface and plug-ins which convert GEONETCast data streams allow an ILWIS user to integrate various distributed data sources with data stored locally on his machine. Our paper describes a use case in which ILWIS is used with GEONETCast satellite imagery for decision-making processes in Ghana. We also explain how the ILWIS software can be extended with additional functionality by means of plug-ins and outline our plans to implement other OGC standards, such as WCS and WPS, in the same context. The latter in particular can be seen as a major step forward in moving well-proven desktop-based processing functionality to the web, enabling the embedding of ILWIS functionality in Spatial Data Infrastructures or even execution in scalable, on-demand cloud computing environments.
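
    For developers, the OGC WMS integration described above can be sketched programmatically; the following hedged Python example uses the OWSLib client library against an assumed WMS endpoint, with the service URL and layer name as illustrative placeholders rather than an actual ILWIS or GEONETCast service.

        # Hedged sketch: request a GEONETCast-derived layer from an assumed OGC WMS endpoint.
        # The service URL and layer name are hypothetical placeholders.
        from owslib.wms import WebMapService

        wms = WebMapService("http://example.org/geoserver/wms", version="1.1.1")

        # Inspect which layers the server offers before requesting a map.
        print(list(wms.contents))

        img = wms.getmap(
            layers=["geonetcast:rainfall_estimate"],   # hypothetical layer name
            styles=[""],
            srs="EPSG:4326",
            bbox=(-3.5, 4.5, 1.5, 11.5),               # approximate extent of Ghana
            size=(600, 800),
            format="image/png",
            transparent=True,
        )

        with open("rainfall_estimate.png", "wb") as f:
            f.write(img.read())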

  15. iMAGE cloud: medical image processing as a service for regional healthcare in a hybrid cloud environment.

    PubMed

    Liu, Li; Chen, Weiping; Nie, Min; Zhang, Fengjuan; Wang, Yu; He, Ailing; Wang, Xiaonan; Yan, Gen

    2016-11-01

    To handle the emergence of the regional healthcare ecosystem, physicians and surgeons in various departments and healthcare institutions must process medical images securely, conveniently, and efficiently, and must integrate them with electronic medical records (EMRs). In this manuscript, we propose a software as a service (SaaS) cloud called the iMAGE cloud. A three-layer hybrid cloud was created to provide medical image processing services in the smart city of Wuxi, China, in April 2015. In the first step, medical images and EMR data were received and integrated via the hybrid regional healthcare network. Then, traditional and advanced image processing functions were proposed and computed in a unified manner in the high-performance cloud units. Finally, the image processing results were delivered to regional users using the virtual desktop infrastructure (VDI) technology. Security infrastructure was also taken into consideration. Integrated information query and many advanced medical image processing functions-such as coronary extraction, pulmonary reconstruction, vascular extraction, intelligent detection of pulmonary nodules, image fusion, and 3D printing-were available to local physicians and surgeons in various departments and healthcare institutions. Implementation results indicate that the iMAGE cloud can provide convenient, efficient, compatible, and secure medical image processing services in regional healthcare networks. The iMAGE cloud has been proven to be valuable in applications in the regional healthcare system, and it could have a promising future in the healthcare system worldwide.

  16. A generic open-source software framework supporting scenario simulations in bioterrorist crises.

    PubMed

    Falenski, Alexander; Filter, Matthias; Thöns, Christian; Weiser, Armin A; Wigger, Jan-Frederik; Davis, Matthew; Douglas, Judith V; Edlund, Stefan; Hu, Kun; Kaufman, James H; Appel, Bernd; Käsbohrer, Annemarie

    2013-09-01

    Since the 2001 anthrax attack in the United States, awareness of threats originating from bioterrorism has grown. This led internationally to increased research efforts to improve knowledge of and approaches to protecting human and animal populations against the threat from such attacks. A collaborative effort in this context is the extension of the open-source Spatiotemporal Epidemiological Modeler (STEM) simulation and modeling software for agro- or bioterrorist crisis scenarios. STEM, originally designed to enable community-driven public health disease models and simulations, was extended with new features that enable integration of proprietary data as well as visualization of agent spread along supply and production chains. STEM now provides a fully developed open-source software infrastructure supporting critical modeling tasks such as ad hoc model generation, parameter estimation, simulation of scenario evolution, estimation of effects of mitigation or management measures, and documentation. This open-source software resource can be used free of charge. Additionally, STEM provides critical features like built-in worldwide data on administrative boundaries, transportation networks, or environmental conditions (eg, rainfall, temperature, elevation, vegetation). Users can easily combine their own confidential data with built-in public data to create customized models of desired resolution. STEM also supports collaborative and joint efforts in crisis situations by extended import and export functionalities. In this article we demonstrate specifically those new software features implemented to accomplish STEM application in agro- or bioterrorist crisis scenarios.

  17. Cloud Infrastructure & Applications - CloudIA

    NASA Astrophysics Data System (ADS)

    Sulistio, Anthony; Reich, Christoph; Doelitzscher, Frank

    The idea behind Cloud Computing is to deliver Infrastructure-as-a-Service and Software-as-a-Service over the Internet on an easy pay-per-use business model. To harness the potential of Cloud Computing for e-Learning and research purposes, and to make it available to small- and medium-sized enterprises, the Hochschule Furtwangen University has established a new project called Cloud Infrastructure & Applications (CloudIA). The CloudIA project is a market-oriented cloud infrastructure that leverages different virtualization technologies by supporting Service-Level Agreements for various service offerings. This paper describes the CloudIA project in detail and presents our early experiences in building a private cloud using existing infrastructure.

  18. Department of Defense statement on the X-ray Lithography Program to the Research and Development Subcommittee of the House Armed Services Committee of 100th Congress, second session

    NASA Astrophysics Data System (ADS)

    Maynard, E. D., Jr.

    1988-03-01

    The Department has a broad and necessarily diverse program in semiconductor science and technology. The three principal goals of that effort are: Reduce the gap between commercial integrated circuit usage and its deployment in military systems, assure a healthy on-shore industrial base to support our defense needs, enhance the producibility of specialized military semiconductor products. The major effort to achieve the first of these objectives is the Very High Speed Integrated Circuits (VHSIC) Program which is nearing completion. The Microwave/millimeter wave Monolithic Integrated Circuit (MIMIC) program has just completed a study program to define the product mix needed to meet military system requirements for radar, electronic warfare, smart weapons and telecommunications. We are bringing together the system requirements of all DoD with the device fabrication and product delivery capabilities of industry in an Infrared Focal Plane Array (IRFPA) program. The goal of the Software Initiative is to enhance our warfighting capability through development of efficient software generation technology and products plus the creation of a technology infusion infrastructure to couple the technology and products to system applications. The X-Ray Lithography Program will begin to establish the industrial base which will be required to sustain U.S. leadership in the semiconductor industry for the late 1990s.

  19. Unidata's Vision for Transforming Geoscience by Moving Data Services and Software to the Cloud

    NASA Astrophysics Data System (ADS)

    Ramamurthy, Mohan; Fisher, Ward; Yoksas, Tom

    2015-04-01

    Universities are facing many challenges: shrinking budgets, rapidly evolving information technologies, exploding data volumes, multidisciplinary science requirements, and high expectations from students who have grown up with smartphones and tablets. These changes are upending traditional approaches to accessing and using data and software. Unidata recognizes that its products and services must evolve to support new approaches to research and education. After years of hype and ambiguity, cloud computing is maturing in usability in many areas of science and education, bringing the benefits of virtualized and elastic remote services to infrastructure, software, computation, and data. Cloud environments reduce the amount of time and money spent to procure, install, and maintain new hardware and software, and reduce costs through resource pooling and shared infrastructure. Cloud services aimed at providing any resource, at any time, from any place, using any device are increasingly being embraced by all types of organizations. Given this trend and the enormous potential of cloud-based services, Unidata is moving to augment its products, services, data delivery mechanisms and applications to align with the cloud-computing paradigm. Specifically, Unidata is working toward establishing a community-based development environment that supports the creation and use of software services to build end-to-end data workflows. The design encourages the creation of services that can be broken into small, independent chunks that provide simple capabilities. Chunks could be used individually to perform a task, or chained into simple or elaborate workflows. The services will also be portable in the form of downloadable Unidata-in-a-box virtual images, allowing their use in researchers' own cloud-based computing environments. In this talk, we present a vision for Unidata's future in cloud-enabled data services and discuss our ongoing efforts to deploy a suite of Unidata data services and tools in the Amazon EC2 and Microsoft Azure cloud environments, including the transfer of real-time meteorological data into its cloud instances, product generation using those data, and the deployment of TDS, McIDAS ADDE and AWIPS II data servers and the Integrated Data Viewer (IDV) visualization tool.
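
    The idea of small, independent chunks chained into workflows can be illustrated with a hedged Python sketch; the chunk functions (subsetting, unit conversion, summarizing) are illustrative stand-ins, not actual Unidata services.

        # Hedged sketch: compose small, independent processing "chunks" into a simple workflow.
        # The chunk functions are illustrative stand-ins for cloud-hosted data services.
        from functools import reduce

        def subset(data):
            # Keep only values inside an assumed region of interest.
            return [v for v in data if v["lat"] > 30.0]

        def convert_units(data):
            # Kelvin to degrees Celsius for a temperature field.
            return [{**v, "temp": v["temp"] - 273.15} for v in data]

        def summarize(data):
            # Reduce the stream to a small data product instead of the full volume.
            temps = [v["temp"] for v in data]
            return {"count": len(temps), "mean_temp_c": sum(temps) / len(temps)}

        def run_workflow(data, *chunks):
            # Chain the chunks left to right, mirroring a simple service pipeline.
            return reduce(lambda acc, step: step(acc), chunks, data)

        observations = [
            {"lat": 42.1, "temp": 285.3},
            {"lat": 25.4, "temp": 299.8},
            {"lat": 38.7, "temp": 290.1},
        ]
        print(run_workflow(observations, subset, convert_units, summarize))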

  20. SBSI: an extensible distributed software infrastructure for parameter estimation in systems biology.

    PubMed

    Adams, Richard; Clark, Allan; Yamaguchi, Azusa; Hanlon, Neil; Tsorman, Nikos; Ali, Shakir; Lebedeva, Galina; Goltsov, Alexey; Sorokin, Anatoly; Akman, Ozgur E; Troein, Carl; Millar, Andrew J; Goryanin, Igor; Gilmore, Stephen

    2013-03-01

    Complex computational experiments in Systems Biology, such as fitting model parameters to experimental data, can be challenging to perform. Not only do they frequently require a high level of computational power, but the software needed to run the experiment needs to be usable by scientists with varying levels of computational expertise, and modellers need to be able to obtain up-to-date experimental data resources easily. We have developed a software suite, the Systems Biology Software Infrastructure (SBSI), to facilitate the parameter-fitting process. SBSI is a modular software suite composed of three major components: SBSINumerics, a high-performance library containing parallelized algorithms for performing parameter fitting; SBSIDispatcher, a middleware application to track experiments and submit jobs to back-end servers; and SBSIVisual, an extensible client application used to configure optimization experiments and view results. Furthermore, we have created a plugin infrastructure to enable project-specific modules to be easily installed. Plugin developers can take advantage of the existing user-interface and application framework to customize SBSI for their own uses, facilitated by SBSI's use of standard data formats. All SBSI binaries and source-code are freely available from http://sourceforge.net/projects/sbsi under an Apache 2 open-source license. The server-side SBSINumerics runs on any Unix-based operating system; both SBSIVisual and SBSIDispatcher are written in Java and are platform independent, allowing use on Windows, Linux and Mac OS X. The SBSI project website at http://www.sbsi.ed.ac.uk provides documentation and tutorials.
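
    As a hedged illustration of the parameter-fitting task that SBSINumerics parallelizes, the short Python sketch below fits a simple exponential-decay model to synthetic data with SciPy; the model, data and optimizer choice are placeholders and do not reproduce SBSI's own algorithms.

        # Hedged sketch: fit a kinetic parameter to time-course data, the core task SBSI supports.
        # The model, data and optimizer choice are illustrative; SBSI's algorithms differ.
        import numpy as np
        from scipy.optimize import least_squares

        # Synthetic "experimental" time course of a decaying species.
        t = np.linspace(0.0, 10.0, 20)
        true_k = 0.45
        observed = 5.0 * np.exp(-true_k * t) + np.random.normal(0.0, 0.05, t.size)

        def residuals(params):
            k, x0 = params
            predicted = x0 * np.exp(-k * t)
            return predicted - observed

        fit = least_squares(residuals, x0=[1.0, 1.0])
        print("estimated decay rate k:", round(fit.x[0], 3))
        print("estimated initial value x0:", round(fit.x[1], 3))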

  1. Toward a digital library strategy for a National Information Infrastructure

    NASA Technical Reports Server (NTRS)

    Coyne, Robert A.; Hulen, Harry

    1993-01-01

    Bills currently before the House and Senate would give support to the development of a National Information Infrastructure, in which digital libraries and storage systems would be an important part. A simple model is offered to show the relationship of storage systems, software, and standards to the overall information infrastructure. Some elements of a national strategy for digital libraries are proposed, based on the mission of the nonprofit National Storage System Foundation.

  2. CrossTalk: The Journal of Defense Software Engineering. Volume 27, Number 5, September/October 2014

    DTIC Science & Technology

    2014-10-01

    CMSP Infrastructure. 24. CMSP Infrastructure sends message via broadcast to mobile devices in the designated area(s). 25. Mobile device users... infrastructure could potentially threaten our way of life. Given the swiftness of technological change, it is excusable that organizations might...system, which is diagrammed in Fig. 1, would expand these options to mobile devices. FEMA established the message structure and the approvals needed to

  3. Scalable Analysis Methods and In Situ Infrastructure for Extreme Scale Knowledge Discovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duque, Earl P.N.; Whitlock, Brad J.

    High performance computers have for many years been on a trajectory that gives them extraordinary compute power with the addition of more and more compute cores. At the same time, other system parameters such as the amount of memory per core and bandwidth to storage have remained constant or have barely increased. This creates an imbalance in the computer, giving it the ability to compute a lot of data that it cannot reasonably save out due to time and storage constraints. While technologies have been invented to mitigate this problem (burst buffers, etc.), software has been adapting to employ in situ libraries which perform data analysis and visualization on simulation data while it is still resident in memory. This avoids the need to ever have to pay the costs of writing many terabytes of data files. Instead, in situ enables the creation of more concentrated data products such as statistics, plots, and data extracts, which are all far smaller than the full-sized volume data. With the increasing popularity of in situ, multiple in situ infrastructures have been created, each with its own mechanism for integrating with a simulation. To make it easier to instrument a simulation with multiple in situ infrastructures and include custom analysis algorithms, this project created the SENSEI framework.
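
    A hedged sketch of the in situ idea (reduce data while it is still in memory instead of writing full volumes) is given below; the toy "simulation" loop and the chosen statistics are placeholders, not the SENSEI API.

        # Hedged sketch: in situ reduction of simulation data into small products each timestep.
        # The toy "simulation" and the statistics stand in for a real solver and SENSEI hooks.
        import numpy as np

        n_steps, grid_shape = 5, (64, 64, 64)
        summary = []

        field = np.random.rand(*grid_shape)
        for step in range(n_steps):
            field = field + 0.01 * np.random.randn(*grid_shape)   # advance the toy simulation

            # Instead of writing the full 64^3 volume, keep only a concentrated data product.
            summary.append({
                "step": step,
                "min": float(field.min()),
                "max": float(field.max()),
                "mean": float(field.mean()),
            })

        # Only the small summary ever reaches storage.
        print(summary[-1])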

  4. Run Environment and Data Management for Earth System Models

    NASA Astrophysics Data System (ADS)

    Widmann, H.; Lautenschlager, M.; Fast, I.; Legutke, S.

    2009-04-01

    The Integrating Model and Data Infrastructure (IMDI) developed and maintained by the Model and Data Group (M&D) comprises the Standard Compile Environment (SCE) and the Standard Run Environment (SRE). The IMDI software has a modular design that allows a suite of model components to be combined and coupled, and the individual tasks to be executed independently on various platforms. The modular structure also enables extension to new model combinations and new platforms. The SRE presented here enables the configuration and execution of earth system model experiments, from model integration up to storage and visualization of data. We focus on recently implemented tasks such as synchronous database filling, graphical monitoring and automatic generation of metadata in XML forms during run time. We also address the capability to run experiments in heterogeneous IT environments with different computing systems for model integration, data processing and storage. These features are demonstrated for model configurations and platforms used in current or upcoming projects, e.g. MILLENNIUM or IPCC AR5.

  5. Integration of robotic resources into FORCEnet

    NASA Astrophysics Data System (ADS)

    Nguyen, Chinh; Carroll, Daniel; Nguyen, Hoa

    2006-05-01

    The Networked Intelligence, Surveillance, and Reconnaissance (NISR) project integrates robotic resources into Composeable FORCEnet to control and exploit unmanned systems over extremely long distances. The foundations are built upon FORCEnet-the U.S. Navy's process to define C4ISR for net-centric operations-and the Navy Unmanned Systems Common Control Roadmap to develop technologies and standards for interoperability, data sharing, publish-and-subscribe methodology, and software reuse. The paper defines the goals and boundaries for NISR with focus on the system architecture, including the design tradeoffs necessary for unmanned systems in a net-centric model. Special attention is given to two specific scenarios demonstrating the integration of unmanned ground and water surface vehicles into the open-architecture web-based command-and-control information-management system of Composeable FORCEnet. Planned spiral development for NISR will improve collaborative control, expand robotic sensor capabilities, address multiple domains including underwater and aerial platforms, and extend distributive communications infrastructure for battlespace optimization for unmanned systems in net-centric operations.

  6. The Image Data Resource: A Bioimage Data Integration and Publication Platform.

    PubMed

    Williams, Eleanor; Moore, Josh; Li, Simon W; Rustici, Gabriella; Tarkowska, Aleksandra; Chessel, Anatole; Leo, Simone; Antal, Bálint; Ferguson, Richard K; Sarkans, Ugis; Brazma, Alvis; Salas, Rafael E Carazo; Swedlow, Jason R

    2017-08-01

    Access to primary research data is vital for the advancement of science. To extend the data types supported by community repositories, we built a prototype Image Data Resource (IDR) that collects and integrates imaging data acquired across many different imaging modalities. IDR links data from several imaging modalities, including high-content screening, super-resolution and time-lapse microscopy, digital pathology, public genetic or chemical databases, and cell and tissue phenotypes expressed using controlled ontologies. Using this integration, IDR facilitates the analysis of gene networks and reveals functional interactions that are inaccessible to individual studies. To enable re-analysis, we also established a computational resource based on Jupyter notebooks that allows remote access to the entire IDR. IDR is also an open source platform that others can use to publish their own image data. Thus IDR provides both a novel on-line resource and a software infrastructure that promotes and extends publication and re-analysis of scientific image data.

  7. Electronic Business Transaction Infrastructure Analysis Using Petri Nets and Simulation

    ERIC Educational Resources Information Center

    Feller, Andrew Lee

    2010-01-01

    Rapid growth in eBusiness has made industry and commerce increasingly dependent on the hardware and software infrastructure that enables high-volume transaction processing across the Internet. Large transaction volumes at major industrial-firm data centers rely on robust transaction protocols and adequately provisioned hardware capacity to ensure…

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Billings, Jay J.; Bonior, Jason D.; Evans, Philip G.

    Securely transferring timing information in the electrical grid is a critical component of securing the nation's infrastructure from cyber attacks. One solution to this problem is to use quantum information to securely transfer the timing information across sites. This software provides such an infrastructure using a standard Java webserver that pulls the quantum information from associated hardware.

  9. Design of a Mobile Agent-Based Adaptive Communication Middleware for Federations of Critical Infrastructure Simulations

    NASA Astrophysics Data System (ADS)

    Görbil, Gökçe; Gelenbe, Erol

    The simulation of critical infrastructures (CI) can involve the use of diverse domain specific simulators that run on geographically distant sites. These diverse simulators must then be coordinated to run concurrently in order to evaluate the performance of critical infrastructures which influence each other, especially in emergency or resource-critical situations. We therefore describe the design of an adaptive communication middleware that provides reliable and real-time one-to-one and group communications for federations of CI simulators over a wide-area network (WAN). The proposed middleware is composed of mobile agent-based peer-to-peer (P2P) overlays, called virtual networks (VNets), to enable resilient, adaptive and real-time communications over unreliable and dynamic physical networks (PNets). The autonomous software agents comprising the communication middleware monitor their performance and the underlying PNet, and dynamically adapt the P2P overlay and migrate over the PNet in order to optimize communications according to the requirements of the federation and the current conditions of the PNet. Reliable communications is provided via redundancy within the communication middleware and intelligent migration of agents over the PNet. The proposed middleware integrates security methods in order to protect the communication infrastructure against attacks and provide privacy and anonymity to the participants of the federation. Experiments with an initial version of the communication middleware over a real-life networking testbed show that promising improvements can be obtained for unicast and group communications via the agent migration capability of our middleware.

  10. Fast Risk Assessment Software For Natural Hazard Phenomena Using Georeference Population And Infrastructure Data Bases

    NASA Astrophysics Data System (ADS)

    Marrero, J. M.; Pastor Paz, J. E.; Erazo, C.; Marrero, M.; Aguilar, J.; Yepes, H. A.; Estrella, C. M.; Mothes, P. A.

    2015-12-01

    Disaster Risk Reduction (DRR) requires an integrated multi-hazard assessment approach towards natural hazard mitigation. In the case of volcanic risk, long-term hazard maps are generally developed on the basis of the most probable scenarios (likelihood of occurrence) or worst cases. However, in the short term, expected scenarios may vary substantially depending on monitoring data or new knowledge. In this context, the time required to obtain and process data is critical for optimum decision making. Availability of up-to-date volcanic scenarios is as crucial as having these data accompanied by efficient estimates of their impact on populations and infrastructure. To address this impact estimation during volcanic crises, or other natural hazards, a web interface has been developed to execute an ANSI C application. This application allows one to compute - in a matter of seconds - the demographic and infrastructure impact that any natural hazard may cause, employing an overlay-layer approach. The web interface is tailored to users involved in the volcanic crisis management of Cotopaxi volcano (Ecuador). The population database and the cartographic base used are in the public domain, published by the National Office of Statistics of Ecuador (INEC, by its Spanish acronym). To run the application and obtain results, the user uploads a raster file containing information related to the volcanic hazard or any other natural hazard, and defines categories to group the population or infrastructure potentially affected. The results are displayed in a user-friendly report.
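
    The overlay-layer computation described above can be sketched in a few lines of hedged Python: given a hazard raster and a co-registered population raster, sum the population falling in each hazard category. The arrays here are synthetic placeholders rather than the Cotopaxi datasets, and the operational tool itself is written in ANSI C.

        # Hedged sketch: overlay a hazard raster on a population raster and total exposed
        # population per hazard category. The arrays are synthetic placeholders.
        import numpy as np

        # Co-registered rasters: hazard category per cell (0 = unaffected) and people per cell.
        hazard = np.array([
            [0, 1, 1, 2],
            [0, 1, 2, 2],
            [0, 0, 1, 3],
        ])
        population = np.array([
            [120,  80,  60,  40],
            [200,  90,  30,  20],
            [150, 110,  70,  10],
        ])

        for category in range(1, hazard.max() + 1):
            exposed = population[hazard == category].sum()
            print(f"hazard category {category}: {exposed} people potentially affected")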

  11. How to Purchase, Set Up, & Safeguard a CD-ROM Network.

    ERIC Educational Resources Information Center

    Almquist, Arne J.

    1996-01-01

    Presents an overview of the hardware and software required to network CD-ROMs in schools. Topics include network infrastructures, networking software, file server-based systems, CD-ROM servers, vendors of network components, workstations, network utilities, and network management. (LRW)

  12. Building climate adaptation capabilities through technology and community

    NASA Astrophysics Data System (ADS)

    Murray, D.; McWhirter, J.; Intsiful, J. D.; Cozzini, S.

    2011-12-01

    To effectively plan for adaptation to changes in climate, decision makers require infrastructure and tools that will provide them with timely access to current and future climate information. For example, climate scientists and operational forecasters need to access global and regional model projections and current climate information that they can use to prepare monitoring products and reports and then publish these for the decision makers. Through the UNDP African Adaption Programme, an infrastructure is being built across Africa that will provide multi-tiered access to such information. Web accessible servers running RAMADDA, an open source content management system for geoscience information, will provide access to the information at many levels: from the raw and processed climate model output to real-time climate conditions and predictions to documents and presentation for government officials. Output from regional climate models (e.g. RegCM4) and downscaled global climate models will be accessible through RAMADDA. The Integrated Data Viewer (IDV) is being used by scientists to create visualizations that assist the understanding of climate processes and projections, using the data on these as well as external servers. Since RAMADDA is more than a data server, it is also being used as a publishing platform for the generated material that will be available and searchable by the decision makers. Users can wade through the enormous volumes of information and extract subsets for their region or project of interest. Participants from 20 countries attended workshops at ICTP during 2011. They received training on setting up and installing the servers and necessary software and are now working on deploying the systems in their respective countries. This is the first time an integrated and comprehensive approach to climate change adaptation has been widely applied in Africa. It is expected that this infrastructure will enhance North-South collaboration and improve the delivery of technical support and services. This improved infrastructure will enhance the capacity of countries to provide a wide range of robust products and services in a timely manner.

  13. Swiss Experiment: Design, implementation and use of a cross-disciplinary infrastructure for data intensive science

    NASA Astrophysics Data System (ADS)

    Dawes, N.; Salehi, A.; Clifton, A.; Bavay, M.; Aberer, K.; Parlange, M. B.; Lehning, M.

    2010-12-01

    It has long been known that environmental processes are cross-disciplinary, but data has continued to be acquired and held for a single purpose. Swiss Experiment is a rapidly evolving cross-disciplinary, distributed sensor data infrastructure, where tools for the environmental science community stem directly from computer science research. The platform uses the bleeding edge of computer science to acquire, store and distribute data and metadata from all environmental science disciplines at a variety of temporal and spatial resolutions. SwissEx is simultaneously developing new technologies to allow low cost, high spatial and temporal resolution measurements such that small areas can be intensely monitored. This data is then combined with existing widespread, low density measurements in the cross-disciplinary platform to provide well documented datasets, which are of use to multiple research disciplines. We present a flexible, generic infrastructure at an advanced stage of development. The infrastructure makes the most of Web 2.0 technologies for a collaborative working environment and as a user interface for a metadata database. This environment is already closely integrated with GSN, an open-source database middleware developed under Swiss Experiment for acquisition and storage of generic time-series data (2D and 3D). GSN can be queried directly by common data processing packages and makes data available in real-time to models and 3rd party software interfaces via its web service interface. It also provides real-time push or pull data exchange between instances, a user management system which leaves data owners in charge of their data, advanced real-time processing and much more. The SwissEx interface is increasingly gaining users and supporting environmental science in Switzerland. It is also an integral part of environmental education projects ClimAtscope and O3E, where the technologies can provide rapid feedback of results for children of all ages and where the data from their own stations can be compared to national data networks.

  14. EPOS-Seismology: building the Thematic Core Service for Seismology during the EPOS Implementation Phase

    NASA Astrophysics Data System (ADS)

    Haslinger, Florian; EPOS Seismology Consortium, the

    2015-04-01

    After the successful completion of the EPOS Preparatory Phase, the community of European Research Infrastructures in Seismology is now moving ahead with the build-up of the Thematic Core Service (TCS) for Seismology in EPOS, EPOS-Seismology. Seismology is a domain where European-level infrastructures have been developed since decades, often supported by large-scale EU projects. Today these infrastructures provide services to access earthquake waveforms (ORFEUS), parameters (EMSC) and hazard data and products (EFEHR). The existing organizations constitute the backbone of infrastructures that also in future will continue to manage and host the services of the TCS EPOS-Seismology. While the governance and internal structure of these organizations will remain active, and continue to provide direct interaction with the community, EPOS-Seismology will provide the integration of these within EPOS. The main challenge in the build-up of the TCS EPOS-Seismology is to improve and extend these existing services, producing a single framework which is technically, organizationally and financially integrated with the EPOS architecture, and to further engage various kinds of end users (e.g. scientists, engineers, public managers, citizen scientists). On the technical side the focus lies on four major tasks: - the construction of the next generation software architecture for the European Integrated (waveform) Data Archive EIDA, developing advanced metadata and station information services, fully integrate strong motion waveforms and derived parametric engineering-domain data, and advancing the integration of mobile (temporary) networks and OBS deployments in EIDA; - the further development and expansion of services to access seismological products of scientific interest as provided by the community by implementing a common collection and development (IT) platform, improvements in the earthquake information services e.g. by introducing more robust quality indicators and diversifying collection and dissemination mechanisms, as well as improving historical earthquake data services; - the development of a comprehensive suite of earthquake hazard products, tools, and services harmonized on the European level and available through a common access platform, encompassing information on seismic sources, seismogenic faults, ground-motion prediction equations, geotechnical information, and strong-motion recordings in buildings, together with an interface to earthquake risk; - a portal implementation of computational seismology tools and services, specifically for seismic waveform propagation in complex 3D media following the results of the VERCE project, and initiating the inclusion of further suitable codes on that portal in discussion with the community, forming the basis of EPOS computational earth science infrastructure. Important features common to all tasks are the development of EPOS-wide integrated and interoperable metadata structures, the introduction and utilization of adequate and referencable persistent identifiers for data and products, and the implementation of appropriate user access and authorization mechanisms. 
Here we present further details on the technical work plan for Seismology during the EPOS Implementation Phase and its integration into the overall EPOS build-up, together with the current view and state of the discussion on the development of adequate governance structures, and discuss how we envision the interaction with and involvement of the wider community outside the consortium in these activities.

  15. Reusability in ESOC mission control systems developments - the SMART-1 mission case

    NASA Astrophysics Data System (ADS)

    Pignède, Max; Davies, Kevin

    2002-07-01

    The European Space Operations Centre (ESOC) has long experience in spacecraft mission control system development and uses a large number of existing elements in the build-up of control systems for new missions. The integration of such elements in a new system covers not only the direct re-use of infrastructure software but also the re-use of concepts and work methodology. Applying reusability is a major asset in ESOC's strategy, especially for low-cost space missions. This paper describes the re-use of existing elements in the ESOC production of the SMART-1 mission control system (S1MCS) and explores the following areas. The most significant re-used elements (and major cost-saving contributors) are the Spacecraft Control and Operations System (SCOS-2000) and the Network Control and TM/TC Router System (NCTRS) infrastructure systems. These systems are designed precisely to allow all general mission parameters to be configured easily without any change in the software (in particular, the NCTRS configuration for SMART-1 was time and cost effective). Further, large parts of the ESOC ROSETTA and INTEGRAL software systems (also SCOS-2000 based) were directly re-used, such as the on-board command schedule maintenance and modelling subsystem (OBQ), the time correlator (TCO) and the external file transfer subsystem (FTS). The INTEGRAL spacecraft database maintenance system (both the editors and the configuration control mechanism) and its export facilities into the S1MCS runtime system are directly re-used. A special kind of re-use concerns the ENVISAT approach to saving both the telemetry (TM) and telecommanding (TC) context in the redundant server system in order to enable smooth support of operations in case of prime server failure. In this case no software or tools can be re-used, because the S1MCS is based on a much more modern technology than the ENVISAT mission control system as well as on largely differing workstation architectures, but the ENVISAT-validated capabilities to support hot-standby system reconfiguration and machine and data resynchronisation following failures in all mission phases make them a good candidate for re-use by newer missions. Common methods and tools for requirements production, test plan production and problem tracking, which are used by most of the other ESOC mission development teams in their daily work, are also re-used without any changes. Finally, conclusions are drawn about reusability in perspective with the latest state of the S1MCS and about benefits to other SCOS-2000 based "client" missions. Lessons learned for ESOC space missions (whether for mission control systems currently under development or for up-and-coming space missions) and related considerations for the wider space community are presented, reflecting ESOC skills and expertise in mission operations and control.

  16. Development and implementation of an Integrated Water Resources Management System (IWRMS)

    NASA Astrophysics Data System (ADS)

    Flügel, W.-A.; Busch, C.

    2011-04-01

    One of the innovative objectives in the EC project BRAHMATWINN was the development of a stakeholder-oriented Integrated Water Resources Management System (IWRMS). The toolset integrates the findings of the project and presents them in a user-friendly way for decision support in sustainable integrated water resources management (IWRM) in river basins. IWRMS is a framework that integrates different types of basin information and supports the development of IWRM options for climate change mitigation. It is based on the River Basin Information System (RBIS) data models and delivers a graphical user interface for stakeholders. A special interface was developed for the integration of the enhanced DANUBIA model input and the NetSyMod model with its Mulino decision support system (mulino mDss) component. The web-based IWRMS contains and combines different types of data and methods to provide river basin data and information for decision support. IWRMS is based on a three-tier software framework which uses (i) HTML/JavaScript at the client tier, (ii) the PHP programming language to realize the application tier, and (iii) a PostgreSQL/PostGIS database tier to manage and store all data except the DANUBIA modelling raw data, which are file based and registered in the database tier. All three tiers can reside on one or on different computers and are adapted to the local hardware infrastructure. IWRMS as well as RBIS are based on Open Source Software (OSS) components, and flexible, time-saving access to the database is provided by web-based interfaces for data visualization and retrieval. The IWRMS is accessible via the BRAHMATWINN homepage: http://www.brahmatwinn.uni-jena.de and a user manual for the RBIS is available for download as well.
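
    The three-tier pattern described above (browser client, application tier, spatial database tier) can be illustrated with a hedged sketch. The actual IWRMS application tier is written in PHP, so the Python/Flask endpoint, table and column names below are purely illustrative assumptions of how an application tier can expose PostGIS data to a client tier as JSON.

        # Hedged sketch: a minimal application-tier endpoint that serves basin data from a
        # PostgreSQL/PostGIS database tier to an HTML/JavaScript client tier as JSON.
        # The IWRMS itself uses PHP; table and column names here are hypothetical.
        import json
        import psycopg2
        from flask import Flask, jsonify

        app = Flask(__name__)

        def get_connection():
            return psycopg2.connect(
                host="localhost", dbname="rbis", user="rbis_reader", password="secret"
            )

        @app.route("/basins/<int:basin_id>/stations")
        def stations(basin_id):
            conn = get_connection()
            try:
                with conn.cursor() as cur:
                    cur.execute(
                        "SELECT name, ST_AsGeoJSON(geom) FROM stations WHERE basin_id = %s",
                        (basin_id,),
                    )
                    rows = cur.fetchall()
            finally:
                conn.close()
            return jsonify([
                {"name": name, "geometry": json.loads(geom)} for name, geom in rows
            ])

        if __name__ == "__main__":
            app.run(debug=True)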

  17. NASA-evolving to Ada: Five-year plan. A plan for implementing recommendations made by the Ada and software management assessment working group

    NASA Technical Reports Server (NTRS)

    1989-01-01

    At their March 1988 meeting, members of the National Aeronautics and Space Administration (NASA) Information Resources Management (IRM) Council expressed concern that NASA may not have the infrastructure necessary to support the use of Ada for major NASA software projects. Members also observed that the agency has no coordinated strategy for applying its experiences with Ada to subsequent projects (Hinners, 27 June 1988). To deal with these problems, the IRM Council chair appointed an intercenter Ada and Software Management Assessment Working Group (ASMAWG). They prepared a report (McGarry et al., March 1989) entitled, 'Ada and Software Management in NASA: Findings and Recommendations'. That report presented a series of recommendations intended to enable NASA to develop better software at lower cost through the use of Ada and other state-of-the-art software engineering technologies. The purpose here is to describe the steps (called objectives) by which this goal may be achieved, to identify the NASA officials or organizations responsible for carrying out the steps, and to define a schedule for doing so. This document sets forth four goals: adopt agency-wide software standards and policies; use Ada as the programming language for all mission software; establish an infrastructure to support software engineering, including the use of Ada, and to leverage the agency's software experience; and build the agency's knowledge base in Ada and software engineering. A schedule for achieving the objectives and goals is given.

  18. Preparing to use vehicle infrastructure integration in transportation operations : phase I.

    DOT National Transportation Integrated Search

    2007-01-01

    The close integration of vehicles and the infrastructure in the surface transportation system has been envisioned for years, but recent advances in wireless communications have made such integration feasible. Given this feasibility, a coalition of the...

  19. A Semantic Web Management Model for Integrative Biomedical Informatics

    PubMed Central

    Deus, Helena F.; Stanislaus, Romesh; Veiga, Diogo F.; Behrens, Carmen; Wistuba, Ignacio I.; Minna, John D.; Garner, Harold R.; Swisher, Stephen G.; Roth, Jack A.; Correa, Arlene M.; Broom, Bradley; Coombes, Kevin; Chang, Allen; Vogel, Lynn H.; Almeida, Jonas S.

    2008-01-01

    Background: Data, data everywhere. The diversity and magnitude of the data generated in the Life Sciences defies automated articulation among complementary efforts. The additional need in this field for managing property and access permissions compounds the difficulty very significantly. This is particularly the case when the integration involves multiple domains and disciplines, even more so when it includes clinical and high throughput molecular data. Methodology/Principal Findings: The emergence of Semantic Web technologies brings the promise of meaningful interoperation between data and analysis resources. In this report we identify a core model for biomedical Knowledge Engineering applications and demonstrate how this new technology can be used to weave a management model where multiple intertwined data structures can be hosted and managed by multiple authorities in a distributed management infrastructure. Specifically, the demonstration is performed by linking data sources associated with the Lung Cancer SPORE awarded to The University of Texas MD Anderson Cancer Center at Houston and the Southwestern Medical Center at Dallas. A software prototype, available as open source at www.s3db.org, was developed, and its proposed design has been made publicly available as an open source instrument for shared, distributed data management. Conclusions/Significance: Semantic Web technologies have the potential to address the need for distributed and evolvable representations that are critical for systems biology and translational biomedical research. As this technology is incorporated into application development we can expect that both general purpose productivity software and domain specific software installed on our personal computers will become increasingly integrated with the relevant remote resources. In this scenario, the acquisition of a new dataset should automatically trigger the delegation of its analysis. PMID:18698353
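
    A hedged sketch of the kind of Semantic Web representation discussed here is shown below using the rdflib library: a few triples link a hypothetical sample to a study managed by another authority. The namespace, resources and predicates are illustrative assumptions, not the S3DB schema.

        # Hedged sketch: express linked biomedical metadata as RDF triples with rdflib.
        # The namespace, resources and predicates are hypothetical, not the S3DB model.
        from rdflib import Graph, Literal, Namespace, RDF

        EX = Namespace("http://example.org/biomed/")
        g = Graph()
        g.bind("ex", EX)

        sample = EX["sample/S-001"]
        study = EX["study/lung-spore"]

        g.add((sample, RDF.type, EX.TissueSample))
        g.add((sample, EX.partOfStudy, study))
        g.add((sample, EX.diagnosis, Literal("adenocarcinoma")))
        g.add((study, EX.managedBy, Literal("MD Anderson Cancer Center")))

        # Serialize to Turtle so another authority's triple store can ingest it.
        print(g.serialize(format="turtle"))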

  20. GeoDeepDive: Towards a Machine Reading-Ready Digital Library and Information Integration Resource

    NASA Astrophysics Data System (ADS)

    Husson, J. M.; Peters, S. E.; Livny, M.; Ross, I.

    2015-12-01

    Recent developments in machine reading and learning approaches to text and data mining hold considerable promise for accelerating the pace and quality of literature-based data synthesis, but these advances have outpaced even basic levels of access to the published literature. For many geoscience domains, particularly those based on physical samples and field-based descriptions, this limitation is significant. Here we describe a general infrastructure to support published literature-based machine reading and learning approaches to information integration and knowledge base creation. This infrastructure supports rate-controlled automated fetching of original documents, along with full bibliographic citation metadata, from remote servers, the secure storage of original documents, and the utilization of considerable high-throughput computing resources for the pre-processing of these documents by optical character recognition, natural language parsing, and other document annotation and parsing software tools. New tools and versions of existing tools can be automatically deployed against original documents when they are made available. The products of these tools (text/XML files) are managed by MongoDB and are available for use in data extraction applications. Basic search and discovery functionality is provided by ElasticSearch, which is used to identify documents of potential relevance to a given data extraction task. Relevant files derived from the original documents are then combined into basic starting points for application building; these starting points are kept up-to-date as new relevant documents are incorporated into the digital library. Currently, our digital library contains more than 360K documents supplied by Elsevier and the USGS, and we are actively seeking additional content providers. By focusing on building a dependable infrastructure to support the retrieval, storage, and pre-processing of published content, we are establishing a foundation for complex, and continually improving, information integration and data extraction applications. We have developed one such application, which we present as an example, and invite new collaborations to develop other such applications.
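
    The document-discovery step described above can be sketched with a hedged example that queries an Elasticsearch index over its REST API; the host, index name and field name are assumed placeholders, not the GeoDeepDive deployment.

        # Hedged sketch: find candidate documents for a data-extraction task via Elasticsearch.
        # Host, index name and field names are hypothetical placeholders.
        import requests

        ES_URL = "http://localhost:9200/articles/_search"   # assumed Elasticsearch endpoint

        query = {
            "size": 10,
            "query": {"match": {"body_text": "carbon isotope stratigraphy"}},
        }

        resp = requests.post(ES_URL, json=query, timeout=30)
        resp.raise_for_status()

        for hit in resp.json()["hits"]["hits"]:
            # Each hit points back to a pre-processed document in the digital library.
            print(hit["_id"], hit["_score"])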

  1. Lean and Efficient Software: Whole-Program Optimization of Executables

    DTIC Science & Technology

    2013-01-03

    staffing for the project; implementing the necessary infrastructure (testing, performance evaluation, needed support software, bug and issue...in the SOW. The result of the planning discussions is shown in the milestone table (section 6). In addition, we selected appropriate engineering

  2. Sharing the Code.

    ERIC Educational Resources Information Center

    Olsen, Florence

    2003-01-01

    Colleges and universities are beginning to consider collaborating on open-source-code projects as a way to meet critical software and computing needs. Points out the attractive features of noncommercial open-source software and describes some examples in use now, especially for the creation of Web infrastructure. (SLD)

  3. Using neural networks in software repositories

    NASA Technical Reports Server (NTRS)

    Eichmann, David (Editor); Srinivas, Kankanahalli; Boetticher, G.

    1992-01-01

    The first topic is an exploration of the use of neural network techniques to improve the effectiveness of retrieval in software repositories. The second topic relates to a series of experiments conducted to evaluate the feasibility of using adaptive neural networks as a means of deriving (or more specifically, learning) measures on software. Taken together, these two efforts illuminate a very promising mechanism supporting software infrastructures - one based upon a flexible and responsive technology.

  4. Collaboratively Architecting a Scalable and Adaptable Petascale Infrastructure to Support Transdisciplinary Scientific Research for the Australian Earth and Environmental Sciences

    NASA Astrophysics Data System (ADS)

    Wyborn, L. A.; Evans, B. J. K.; Pugh, T.; Lescinsky, D. T.; Foster, C.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) at the Australian National University (ANU) is a partnership between CSIRO, ANU, the Bureau of Meteorology (BoM) and Geoscience Australia. Recent investments in a 1.2 PFlop supercomputer (Raijin), ~20 PB of data storage using Lustre filesystems and a 3000-core high-performance cloud have created a hybrid platform for high-performance computing and data-intensive science to enable large-scale earth and climate systems modelling and analysis. There are >3000 users actively logging in and >600 projects on the NCI system. Efficiently scaling and adapting data and software systems to petascale infrastructures requires the collaborative development of an architecture that is designed, programmed and operated to enable users to interactively invoke different forms of in-situ computation over complex and large-scale data collections. NCI makes available major and long-tail data collections from both the government and research sectors based on six themes: 1) weather, climate and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology and 6) astronomy, bio and social. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. Collections are the operational form for data management and access. Similar data types from individual custodians are managed cohesively. Use of international standards for discovery and interoperability allows complex interactions within and between the collections. This design facilitates a transdisciplinary approach to research and enables a shift from small-scale, 'stove-piped' science efforts to large-scale, collaborative systems science. This new and complex infrastructure requires a move to shared, globally trusted software frameworks that can be maintained and updated. Workflow engines become essential and need to integrate provenance, versioning, traceability, repeatability and publication. There are also human-resource challenges, as highly skilled HPC/HPD specialists, specialist programmers, and data scientists are required whose skills can support scaling to the new paradigm of effective and efficient data-intensive earth science analytics on petascale, and soon exascale, systems.

  5. Sunderland Software City: An Innovative Approach to Knowledge Exchange in the North East of England

    ERIC Educational Resources Information Center

    Hall, Lynne; Irons, Alastair; MacIntyre, John; Sellers, Charles; Smith, Peter

    2010-01-01

    This paper presents a collaborative initiative within the North East of England which aims to grow and sustain a software industry, based on the strengths of regional players, including in particular the local university. The project Sunderland Software City has the ambitious aim of developing the people, the infrastructure and the business and…

  6. Architecting Service-Oriented Systems

    DTIC Science & Technology

    2011-08-01

    Abstract Service orientation is an approach to software systems development that has become a popular way to implement distributed, loosely coupled...runtime. The later you defer binding, the more flexibility service providers and service consumers have to develop their software systems independently...Enterprise Service Bus An Enterprise Service Bus (ESB) is a software pattern that can be part of a SOA infrastructure and acts as an intermediary

  7. Testing in Service-Oriented Environments

    DTIC Science & Technology

    2010-03-01

    software releases (versions, service packs, vulnerability patches) for one common ESB during the 13-month period from January 1, 2008 through...impact on quality of service: Unlike traditional software components, a single instance of a web service can be used by multiple consumers. Since the...distributed, with heterogeneous hardware and software (SOA infrastructure, services, operating systems, and databases). Because of cost and security, it

  8. Information resources assessment of a healthcare integrated delivery system.

    PubMed Central

    Gadd, C. S.; Friedman, C. P.; Douglas, G.; Miller, D. J.

    1999-01-01

    While clinical healthcare systems may have lagged behind computer applications in other fields in the shift from mainframes to client-server architectures, the rapid deployment of newer applications is closing that gap. Organizations considering the transition to client-server must identify and position themselves to provide the resources necessary to implement and support the infrastructure requirements of client-server architectures and to manage the accelerated complexity at the desktop, including hardware and software deployment, training, and maintenance needs. This paper describes an information resources assessment of the recently aligned Pennsylvania regional Veterans Administration Stars and Stripes Health Network (VISN4), in anticipation of the shift from a predominantly mainframe to a client-server information systems architecture in its well-established VistA clinical information system. The multimethod assessment study is described here to demonstrate this approach and its value to regional healthcare networks undergoing organizational integration and/or significant information technology transformations. PMID:10566414

  9. Applications integration in a hybrid cloud computing environment: modelling and platform

    NASA Astrophysics Data System (ADS)

    Li, Qing; Wang, Ze-yuan; Li, Wei-hua; Li, Jun; Wang, Cheng; Du, Rui-yang

    2013-08-01

    With the development of application service providers and cloud computing, more and more small- and medium-sized business enterprises use software services, and even infrastructure services, provided by professional information service companies to replace all or part of their information systems (ISs). These information service companies provide applications, such as data storage, computing processes, document sharing and even management information system services, as public resources to support the business process management of their customers. However, no cloud computing service vendor can satisfy the full functional IS requirements of an enterprise. As a result, enterprises often have to simultaneously use systems distributed in different clouds as well as their intra-enterprise ISs. Thus, this article presents a framework to integrate applications deployed in public clouds with intra-enterprise ISs. A run-time platform is developed, and a cross-computing-environment process modelling technique is also developed, to improve the feasibility of ISs under hybrid cloud computing environments.
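
    The integration idea can be sketched as a uniform interface over components deployed in different environments (a minimal illustration under stated assumptions, not the article's platform; the file path and URL are hypothetical):

        from abc import ABC, abstractmethod
        import json
        import urllib.request

        class DocumentStore(ABC):
            """Uniform interface hiding where a business document actually lives."""
            @abstractmethod
            def fetch(self, doc_id: str) -> dict: ...

        class IntraEnterpriseStore(DocumentStore):
            def fetch(self, doc_id: str) -> dict:
                with open(f"/srv/erp/documents/{doc_id}.json") as f:  # local IS
                    return json.load(f)

        class CloudStore(DocumentStore):
            def fetch(self, doc_id: str) -> dict:
                url = f"https://cloud.example.com/api/documents/{doc_id}"  # hypothetical service
                with urllib.request.urlopen(url) as resp:
                    return json.load(resp)

        def run_process_step(store: DocumentStore, doc_id: str) -> None:
            # The business-process logic is identical whichever environment hosts the data.
            doc = store.fetch(doc_id)
            print(doc.get("status"))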

  10. Multi-Level Data-Security and Data-Protection in a Distributed Search Infrastructure for Digital Medical Samples.

    PubMed

    Witt, Michael; Krefting, Dagmar

    2016-01-01

    Human sample data is stored in biobanks, with software managing the digital data derived from the samples. When these stand-alone components are connected and a search infrastructure is employed, users can collect the required research data from different data sources. Data protection, patient rights, data heterogeneity and access control are major challenges for such an infrastructure. This dissertation investigates concepts for a multi-level security architecture that complies with these requirements.

  11. The EPOS ICT Architecture

    NASA Astrophysics Data System (ADS)

    Jeffery, Keith; Harrison, Matt; Bailo, Daniele

    2016-04-01

    The EPOS-PP project (2010-2014) proposed an architecture and demonstrated feasibility with a prototype. Requirements based on use cases were collected and an inventory of assets (e.g. datasets, software, users, computing resources, equipment/detectors, laboratory services) (RIDE) was developed. The architecture evolved through three stages of refinement, with much consultation both with the EPOS community representing EPOS users and participants in geoscience, and with the overall ICT community, especially those working on research initiatives such as the RDA (Research Data Alliance). The architecture consists of a central ICS (Integrated Core Services) comprising a portal and a catalog, the latter providing end-users with a 'map' of all EPOS resources (datasets, software, users, computing, equipment/detectors etc.). ICS is extended to ICS-d (distributed ICS) for certain services (such as visualisation software services or Cloud computing resources) and CES (Computational Earth Science) for specific simulation or analytical processing. ICS also communicates with TCS (Thematic Core Services), which represent European-wide portals to national and local assets, resources and services in the various specific domains of EPOS (e.g. seismology, volcanology, geodesy). The EPOS-IP project (2015-2019) started in October 2015. Two work-packages cover the ICT aspects: WP6 involves interaction with the TCS, while WP7 concentrates on ICS, including interoperation with ICS-d and CES offerings; in short, the ICT architecture. Based on the experience and results of EPOS-PP, the ICT team held a pre-meeting in July 2015 and set out a project plan. The first major activity involved requirements (re-)collection with use cases, and updating the inventory of assets held by the various TCS in EPOS. The RIDE database of assets is currently being converted to CERIF (Common European Research Information Format - an EU Recommendation to Member States) to provide the basis for the EPOS-IP ICS catalog. In parallel, the ICT team is tracking developments in ICT for relevance to EPOS-IP. In particular, the potential utilisation of e-Is (e-Infrastructures) such as GEANT (network), AARC (security), EGI (Grid computing), EUDAT (data curation), PRACE (High Performance Computing) and HELIX-Nebula / Open Science Cloud (Cloud computing) is being assessed. Similarly, relationships to other e-RIs (e-Research Infrastructures) such as ENVRI+, EXCELERATE and other ESFRI (European Strategic Forum for Research Infrastructures) projects are being developed to share experience and technology and to promote interoperability. EPOS ICT team members are also involved in VRE4EIC, a project developing a reference architecture and component software services for a Virtual Research Environment to be superimposed on EPOS-ICS. The challenge now being tackled is therefore to maintain consistency and interoperability among the different modules, initiatives and actors which participate in the process of running the EPOS platform. This implies both continuously tracking the IT aspects of the mentioned initiatives and refining the e-architecture designed so far. One major aspect of EPOS-IP is the ICT support for the legal, financial and governance aspects of the EPOS ERIC to be initiated during EPOS-IP. This implies a sophisticated AAAI (Authentication, Authorisation and Accounting Infrastructure) with consistency throughout the software, communications and data stack.

  12. Jungle Computing: Distributed Supercomputing Beyond Clusters, Grids, and Clouds

    NASA Astrophysics Data System (ADS)

    Seinstra, Frank J.; Maassen, Jason; van Nieuwpoort, Rob V.; Drost, Niels; van Kessel, Timo; van Werkhoven, Ben; Urbani, Jacopo; Jacobs, Ceriel; Kielmann, Thilo; Bal, Henri E.

    In recent years, the application of high-performance and distributed computing in scientific practice has become increasingly widespread. Among the most widely available platforms to scientists are clusters, grids, and cloud systems. Such infrastructures are currently undergoing revolutionary change due to the integration of many-core technologies, providing orders-of-magnitude speed improvements for selected compute kernels. With high-performance and distributed computing systems thus becoming more heterogeneous and hierarchical, programming complexity is vastly increased. Further complexities arise because the urgent desire for scalability, together with issues including data distribution, software heterogeneity, and ad hoc hardware availability, commonly forces scientists into the simultaneous use of multiple platforms (e.g., clusters, grids, and clouds used concurrently). A true computing jungle.

  13. Judicious use of custom development in an open source component architecture

    NASA Astrophysics Data System (ADS)

    Bristol, S.; Latysh, N.; Long, D.; Tekell, S.; Allen, J.

    2014-12-01

    Modern software engineering is not as much programming from scratch as innovative assembly of existing components. Seamlessly integrating disparate components into scalable, performant architecture requires sound engineering craftsmanship and can often result in increased cost efficiency and accelerated capabilities if software teams focus their creativity on the edges of the problem space. ScienceBase is part of the U.S. Geological Survey scientific cyberinfrastructure, providing data and information management, distribution services, and analysis capabilities in a way that strives to follow this pattern. ScienceBase leverages open source NoSQL and relational databases, search indexing technology, spatial service engines, numerous libraries, and one proprietary but necessary software component in its architecture. The primary engineering focus is cohesive component interaction, including construction of a seamless Application Programming Interface (API) across all elements. The API allows researchers and software developers alike to leverage the infrastructure in unique, creative ways. Scaling the ScienceBase architecture and core API with increasing data volume (more databases) and complexity (integrated science problems) is a primary challenge addressed by judicious use of custom development in the component architecture. Other data management and informatics activities in the earth sciences have independently resolved to a similar design of reusing and building upon established technology and are working through similar issues for managing and developing information (e.g., U.S. Geoscience Information Network; NASA's Earth Observing System Clearing House; GSToRE at the University of New Mexico). Recent discussions facilitated through the Earth Science Information Partners are exploring potential avenues to exploit the implicit relationships between similar projects for explicit gains in our ability to more rapidly advance global scientific cyberinfrastructure.

  14. Three-Dimensional Space to Assess Cloud Interoperability

    DTIC Science & Technology

    2013-03-01

    1. Portability and Mobility ...collection of network-enabled services that guarantees to provide a scalable, easily accessible, reliable, and personalized computing infrastructure, based on...are used in research to describe cloud models, such as SaaS (Software as a Service), PaaS (Platform as a Service), IaaS (Infrastructure as a Service

  15. Cloud Computing in Support of Applied Learning: A Baseline Study of Infrastructure Design at Southern Polytechnic State University

    ERIC Educational Resources Information Center

    Conn, Samuel S.; Reichgelt, Han

    2013-01-01

    Cloud computing represents an architecture and paradigm of computing designed to deliver infrastructure, platforms, and software as constructible computing resources on demand to networked users. As campuses are challenged to better accommodate academic needs for applications and computing environments, cloud computing can provide an accommodating…

  16. InterMine Webservices for Phytozome

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlson, Joseph; Hayes, David; Goodstein, David

    2014-01-10

    A data warehousing framework for biological information provides a useful infrastructure for providers and users of genomic data. For providers, the infrastructure gives them a consistent mechanism for extracting raw data, while for users, the web services supported by the software allow them to make either simple and common, or complex and unique, queries of the data.
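
    Such queries can be issued from the InterMine Python client, sketched below under assumptions: the service URL and the Gene fields follow general InterMine conventions and should be checked against the live Phytozome deployment.

        from intermine.webservice import Service

        service = Service("https://phytozome.jgi.doe.gov/phytomine/service")  # assumed URL

        query = service.new_query("Gene")
        query.add_view("primaryIdentifier", "organism.shortName")
        query.add_constraint("organism.shortName", "=", "A. thaliana")  # example filter

        for row in query.rows(size=10):
            print(row["primaryIdentifier"], row["organism.shortName"])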

  17. The CMS High Level Trigger System: Experience and Future Development

    NASA Astrophysics Data System (ADS)

    Bauer, G.; Behrens, U.; Bowen, M.; Branson, J.; Bukowiec, S.; Cittolin, S.; Coarasa, J. A.; Deldicque, C.; Dobson, M.; Dupont, A.; Erhan, S.; Flossdorf, A.; Gigi, D.; Glege, F.; Gomez-Reino, R.; Hartl, C.; Hegeman, J.; Holzner, A.; Hwong, Y. L.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; O'Dell, V.; Orsini, L.; Paus, C.; Petrucci, A.; Pieri, M.; Polese, G.; Racz, A.; Raginel, O.; Sakulin, H.; Sani, M.; Schwick, C.; Shpakov, D.; Simon, S.; Spataru, A. C.; Sumorok, K.

    2012-12-01

    The CMS experiment at the LHC features a two-level trigger system. Events accepted by the first-level trigger, at a maximum rate of 100 kHz, are read out by the Data Acquisition system (DAQ) and subsequently assembled in memory in a farm of computers running a software high-level trigger (HLT), which selects interesting events for offline storage and analysis at a rate of order a few hundred Hz. The HLT algorithms consist of sequences of offline-style reconstruction and filtering modules, executed on a farm of O(10000) CPU cores built from commodity hardware. Experience from the operation of the HLT system in the collider run of 2010/2011 is reported. The current architecture of the CMS HLT and its integration with the CMS reconstruction framework and the CMS DAQ are discussed in the light of future development. The possible short- and medium-term evolution of the HLT software infrastructure to support extensions of the HLT computing power, and to address remaining performance and maintenance issues, is discussed.

  18. ImTK: an open source multi-center information management toolkit

    NASA Astrophysics Data System (ADS)

    Alaoui, Adil; Ingeholm, Mary Lou; Padh, Shilpa; Dorobantu, Mihai; Desai, Mihir; Cleary, Kevin; Mun, Seong K.

    2008-03-01

    The Information Management Toolkit (ImTK) Consortium is an open source initiative to develop robust, freely available tools related to the information management needs of basic, clinical, and translational research. An open source framework and agile programming methodology can enable distributed software development, while an open architecture will encourage interoperability across different environments. The ISIS Center has conceptualized a prototype data sharing network that simulates a multi-center environment based on a federated data access model. This model includes the development of software tools to enable efficient exchange, sharing, management, and analysis of multimedia medical information such as clinical information, images, and bioinformatics data from multiple data sources. The envisioned ImTK data environment will include an open architecture and data model implementation that complies with existing standards such as Digital Imaging and Communications in Medicine (DICOM), Health Level 7 (HL7), and the technical framework and workflow defined by the Integrating the Healthcare Enterprise (IHE) Information Technology Infrastructure initiative, mainly the Cross Enterprise Document Sharing (XDS) specifications.

  19. Remote monitoring system for the cryogenic system of superconducting magnets in the SuperKEKB interaction region

    NASA Astrophysics Data System (ADS)

    Aoki, K.; Ohuchi, N.; Zong, Z.; Arimoto, Y.; Wang, X.; Yamaoka, H.; Kawai, M.; Kondou, Y.; Makida, Y.; Hirose, M.; Endou, T.; Iwasaki, M.; Nakamura, T.

    2017-12-01

    A remote monitoring system was developed, based on the software infrastructure of the Experimental Physics and Industrial Control System (EPICS), for the cryogenic system of the superconducting magnets in the interaction region of the SuperKEKB accelerator. SuperKEKB has been constructed to conduct high-energy physics experiments at KEK. The superconducting magnets comprise three apparatuses: the Belle II detector solenoid and the QCSL and QCSR accelerator magnets. They are contained in three cryostats cooled by dedicated helium cryogenic systems. The monitoring system was developed to read data from the EX-8000, an integrated instrumentation system that controls all cryogenic components. The monitoring system uses the I/O control tools of the EPICS software for TCP/IP, archiving techniques using a relational database, and an easy-to-use human-computer interface. Using this monitoring system, it is possible to remotely monitor all real-time data of the superconducting magnets and cryogenic systems. It also makes it convenient to share data among multiple groups.
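
    The read-and-archive loop can be sketched with the pyepics client and a relational store (a minimal sketch, not the KEK implementation; the PV names are hypothetical placeholders for the cryogenic channels):

        import sqlite3
        import time
        from epics import caget

        PVS = ["CRYO:QCSL:HeLevel", "CRYO:QCSR:HeLevel", "CRYO:BELLE:SolCurrent"]

        db = sqlite3.connect("cryo_archive.db")
        db.execute("CREATE TABLE IF NOT EXISTS readings (ts REAL, pv TEXT, value REAL)")

        while True:
            now = time.time()
            for pv in PVS:
                value = caget(pv, timeout=1.0)  # returns None if the channel times out
                if value is not None:
                    db.execute("INSERT INTO readings VALUES (?, ?, ?)", (now, pv, value))
            db.commit()
            time.sleep(10)  # polling; a production archiver would use channel monitors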

  20. Teacher-Pedagogy Approach for Sustainable Proficiency

    ERIC Educational Resources Information Center

    Nath, Baiju K.; Balan, Meera

    2010-01-01

    Quality concerns of an institution can be explained in terms of hardware and software. The hardware comprises buildings and other infrastructural facilities, while the software involves teachers, students and administrative staff. Various agencies such as the National Council for Educational Research & Training (NCERT), National Council for Teacher…

  1. Security Isn't Just for Techies Anymore

    ERIC Educational Resources Information Center

    Mills, Lane B.

    2004-01-01

    School district networks are particularly difficult to protect given the diverse types of users, software, equipment and connections that most school districts provide. Vulnerabilities in the security of a school district's technology infrastructure can relate to users, data, software, hardware and transmission. This article discusses different…

  2. HiCAT Software Infrastructure: Safe hardware control with object oriented Python

    NASA Astrophysics Data System (ADS)

    Moriarty, Christopher; Brooks, Keira; Soummer, Remi

    2018-01-01

    High contrast imaging for Complex Aperture Telescopes (HiCAT) is a testbed designed to demonstrate coronagraphy and wavefront control for segmented on-axis space telescopes such as envisioned for LUVOIR. To limit air movements in the testbed room, software interfaces for several different hardware components were developed to completely automate operations. When developing software interfaces for many different pieces of hardware, unhandled errors are commonplace and can prevent the software from properly closing a hardware resource. Some fragile components (e.g. deformable mirrors) can be permanently damaged because of this. We present an object oriented Python-based infrastructure to safely automate hardware control and optical experiments; specifically, conducting high-contrast imaging experiments while monitoring humidity and power status, with graceful shutdown processes even for unexpected errors. Python contains a construct called a “context manager” that allows you to define code to run when a resource is opened or closed. Context managers ensure that a resource is properly closed, even when unhandled errors occur. Harnessing the context manager design, we also use Python’s multiprocessing library to monitor humidity and power status without interrupting the experiment. Upon detecting a safety problem, an event is signalled that triggers the context managers to gracefully close any open resources. This infrastructure allows us to queue up several experiments and safely operate the testbed without a human in the loop.
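
    The pattern can be reduced to a short sketch (a simplified illustration of the idea, not the HiCAT code; the resource and sensor are stand-ins):

        import multiprocessing
        import time

        class DeformableMirror:
            """Stand-in for a fragile hardware resource."""
            def __enter__(self):
                print("DM opened")
                return self
            def __exit__(self, exc_type, exc, tb):
                print("DM safely closed")  # runs even on unhandled errors
            def apply_command(self, v):
                print(f"DM command {v}")

        def read_humidity():
            return 25.0  # placeholder; a real testbed would query a sensor

        def humidity_monitor(stop_event):
            while not stop_event.is_set():
                if read_humidity() > 30.0:   # safety threshold (illustrative)
                    stop_event.set()         # signal the experiment to wind down
                time.sleep(1)

        if __name__ == "__main__":
            stop = multiprocessing.Event()
            multiprocessing.Process(target=humidity_monitor, args=(stop,), daemon=True).start()
            with DeformableMirror() as dm:   # closed even if the loop raises
                for step in range(100):
                    if stop.is_set():
                        break
                    dm.apply_command(step)
                    time.sleep(0.1)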

  3. Community-driven computational biology with Debian Linux.

    PubMed

    Möller, Steffen; Krabbenhöft, Hajo Nils; Tille, Andreas; Paleino, David; Williams, Alan; Wolstencroft, Katy; Goble, Carole; Holland, Richard; Belhachemi, Dominique; Plessy, Charles

    2010-12-21

    The Open Source movement and its technologies are popular in the bioinformatics community because they provide freely available tools and resources for research. In order to feed the steady demand for updates on software and associated data, a service infrastructure is required for sharing and providing these tools to heterogeneous computing environments. The Debian Med initiative provides ready and coherent software packages for medical informatics and bioinformatics. These packages can be used together in Taverna workflows via the UseCase plugin to manage execution on local or remote machines. If such packages are available in cloud computing environments, the underlying hardware and the analysis pipelines can be shared along with the software. Debian Med closes the gap between developers and users. It provides a simple method for offering new releases of software and data resources, thus provisioning a local infrastructure for computational biology. For geographically distributed teams it can ensure they are working on the same versions of tools, in the same conditions. This contributes to the world-wide networking of researchers.

  4. Hadoop distributed batch processing for Gaia: a success story

    NASA Astrophysics Data System (ADS)

    Riello, Marco

    2015-12-01

    The DPAC Cambridge Data Processing Centre (DPCI) is responsible for the photometric calibration of the Gaia data, including the low resolution spectra. The large data volume produced by Gaia (~26 billion transits/year), the complexity of its data stream and the self-calibrating approach pose unique challenges for the scalability, reliability and robustness of both the software pipelines and the operations infrastructure. DPCI was the first in DPAC to realise the potential of Hadoop and Map/Reduce and to adopt them as the core technologies for its infrastructure. This has proven a winning choice, giving DPCI unmatched processing throughput and reliability within DPAC, to the point that other DPCs have started following in our footsteps. In this talk we will present the software infrastructure developed to build the distributed and scalable batch data processing system currently used in production at DPCI, and the excellent performance results of the system.
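
    The Map/Reduce model at the heart of that choice can be illustrated with a toy Hadoop-streaming-style job (not the Gaia pipeline; the CSV layout is hypothetical): the mapper emits one key-value pair per transit record and the reducer counts transits per source.

        import sys
        from itertools import groupby

        def mapper(lines):
            for line in lines:
                source_id = line.split(",")[0]  # assumed CSV: source_id,field,...
                yield source_id, 1

        def reducer(pairs):
            # sorted() mimics Hadoop's shuffle/sort phase when run locally.
            for key, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
                yield key, sum(count for _, count in group)

        if __name__ == "__main__":
            for source, n in reducer(mapper(sys.stdin)):
                print(f"{source}\t{n}")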

  5. Cafe: A Generic Configurable Customizable Composite Cloud Application Framework

    NASA Astrophysics Data System (ADS)

    Mietzner, Ralph; Unger, Tobias; Leymann, Frank

    In this paper we present Cafe (Composite Application Framework), an approach to describing configurable composite service-oriented applications and automatically provisioning them across different providers. Cafe enables independent software vendors to describe their composite service-oriented applications and the components that are used to assemble them. Components can be internal or external to the application and can be deployed in any of the delivery models present in the cloud. The components are annotated with requirements for the infrastructure they will later need to run on. Providers, on the other hand, advertise their infrastructure services by describing them as infrastructure capabilities. The separation of software vendors and providers enables end users and providers to follow a best-of-breed strategy by combining arbitrary applications with arbitrary providers. We show how such applications can be automatically provisioned and present an architecture and a prototype that implements these concepts.
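
    The requirement/capability matching at the core of this separation can be sketched as follows (a hedged toy model, not Cafe's actual metamodel; names and attributes are illustrative):

        def satisfies(capabilities: dict, requirements: dict) -> bool:
            """A provider qualifies only if every annotated requirement is met."""
            return all(capabilities.get(k) == v for k, v in requirements.items())

        def candidate_providers(component: dict, providers: list) -> list:
            return [p["name"] for p in providers
                    if satisfies(p["capabilities"], component["requirements"])]

        component = {"name": "billing-service",
                     "requirements": {"runtime": "jvm", "region": "eu"}}
        providers = [
            {"name": "provider-a", "capabilities": {"runtime": "jvm", "region": "eu"}},
            {"name": "provider-b", "capabilities": {"runtime": "python", "region": "eu"}},
        ]
        print(candidate_providers(component, providers))  # -> ['provider-a']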

  6. Integrating child welfare, juvenile justice, and other agencies in a continuum of services.

    PubMed

    Howell, James C; Kelly, Marion R; Palmer, James; Mangum, Ronald L

    2004-01-01

    This article presents a comprehensive strategy framework for integrating mental health, child welfare, education, substance abuse, and juvenile justice system services. It proposes an infrastructure of information exchange, cross-agency client referrals, a networking protocol, interagency councils, and service integration models. This infrastructure facilitates integrated service delivery.

  7. Australia's TERN: Building, Sustaining and Advancing Collaborative Long Term Ecosystem Research Networks

    NASA Astrophysics Data System (ADS)

    Held, A. A.; Phinn, S. R.

    2012-12-01

    TERN, Australia's Terrestrial Ecosystem Research Network (www.tern.org.au), is one of several environmental data collection, storage and sharing projects developed through the government's research infrastructure programs 2008-2014. This includes terrestrial and coastal ecosystem data collection infrastructure across multiple disciplines, along with the hardware, software and processes used to store, analyse and integrate data sets. TERN's overall objective is to build the collaborations, infrastructure and programs needed to meet the needs of Australia's ecosystem science communities in the long term, through: the institutional frameworks necessary to establish a national terrestrial ecosystem site and observational network; coordinated networks enabling cooperation and operational experience; public access to quality-assured and appropriately licensed data; and enabling the terrestrial ecosystem research community to define and sustain the terrestrial observing paradigm into the longer term. This paper explains how TERN was originally established and now operates, along with plans to sustain itself in the future. TERN is implemented through discipline/technical groups referred to as "TERN Facilities". Combined, the facilities provide observations of surface mass and energy fluxes over key ecosystems, biophysical remote sensing data, ecological survey plots, soils information, and coastal ecosystems and associated water quality variables across Australia. Additional integrative facilities cover elements of ecoinformatics, data-scaling and modelling, and linking science to management. A central coordination and portal facility provides meta-data storage, data identification, and legal and licensing support. Data access, uploading, meta-data generation, DOI attachment and licensing are completed at each facility's own portal level. TERN also acts as the open-data repository of choice for Australian scientists required to publish their data. Several key lessons we have learnt will be presented during the talk.

  8. SaDA: From Sampling to Data Analysis—An Extensible Open Source Infrastructure for Rapid, Robust and Automated Management and Analysis of Modern Ecological High-Throughput Microarray Data

    PubMed Central

    Singh, Kumar Saurabh; Thual, Dominique; Spurio, Roberto; Cannata, Nicola

    2015-01-01

    One of the most crucial characteristics of day-to-day laboratory information management is the collection, storage and retrieval of information about research subjects and environmental or biomedical samples. An efficient link between sample data and experimental results is essential for the successful outcome of a collaborative project. Currently available software solutions are largely limited to large-scale, expensive commercial Laboratory Information Management Systems (LIMS). Acquiring such a LIMS can indeed bring laboratory information management to a higher level, but most of the time this requires a significant investment of money, time and technical effort. There is a clear need for a lightweight open source system which can easily be managed on local servers and handled by individual researchers. Here we present software named SaDA for storing, retrieving and analyzing data originating from microorganism monitoring experiments. SaDA is fully integrated in the management of environmental samples, oligonucleotide sequences, microarray data and the subsequent downstream analysis procedures. It is simple and generic software, and can be extended and customized for various environmental and biomedical studies. PMID:26047146

  9. SaDA: From Sampling to Data Analysis-An Extensible Open Source Infrastructure for Rapid, Robust and Automated Management and Analysis of Modern Ecological High-Throughput Microarray Data.

    PubMed

    Singh, Kumar Saurabh; Thual, Dominique; Spurio, Roberto; Cannata, Nicola

    2015-06-03

    One of the most crucial characteristics of day-to-day laboratory information management is the collection, storage and retrieval of information about research subjects and environmental or biomedical samples. An efficient link between sample data and experimental results is essential for the successful outcome of a collaborative project. Currently available software solutions are largely limited to large-scale, expensive commercial Laboratory Information Management Systems (LIMS). Acquiring such a LIMS can indeed bring laboratory information management to a higher level, but most of the time this requires a significant investment of money, time and technical effort. There is a clear need for a lightweight open source system which can easily be managed on local servers and handled by individual researchers. Here we present software named SaDA for storing, retrieving and analyzing data originating from microorganism monitoring experiments. SaDA is fully integrated in the management of environmental samples, oligonucleotide sequences, microarray data and the subsequent downstream analysis procedures. It is simple and generic software, and can be extended and customized for various environmental and biomedical studies.

  10. Image-guided navigation: a cost effective practical introduction using the Image-Guided Surgery Toolkit (IGSTK).

    PubMed

    Güler, Özgür; Yaniv, Ziv

    2012-01-01

    Teaching the key technical aspects of image-guided interventions using a hands-on approach is a challenging task, primarily due to the high cost of, and lack of accessibility to, imaging and tracking systems. We provide a software and data infrastructure which addresses both challenges. Our infrastructure allows students, patients, and clinicians to develop an understanding of the key technologies by using them, and possibly by developing additional components and integrating them into a simple navigation system which we provide. Our approach requires minimal hardware: LEGO blocks to construct a phantom, for which we provide CT scans, and a webcam which, when combined with our software, provides the functionality of a tracking system. A premise of this approach is that the tracking accuracy is sufficient for our purpose. We evaluate the accuracy provided by a consumer-grade webcam and show that it is sufficient for educational use. We provide an open source implementation of all the components required for basic image-guided navigation as part of the Image-Guided Surgery Toolkit (IGSTK). It has long been known that in education there is no substitute for hands-on experience; to quote Sophocles, "One must learn by doing the thing; for though you think you know it, you have no certainty, until you try." Our work provides this missing capability in the context of image-guided navigation, enabling a wide audience to learn and experience the use of a navigation system.
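
    One key step such a navigation system must implement is paired-point rigid registration between the tracker and CT coordinate systems. A generic textbook sketch via the SVD (Kabsch) method, not IGSTK's code, is:

        import numpy as np

        def rigid_register(moving, fixed):
            """Return rotation R and translation t mapping moving -> fixed (Nx3 arrays)."""
            cm, cf = moving.mean(axis=0), fixed.mean(axis=0)
            H = (moving - cm).T @ (fixed - cf)   # cross-covariance of centered points
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:             # correct a reflection if one appears
                Vt[-1, :] *= -1
                R = Vt.T @ U.T
            return R, cf - R @ cm

        # Fiducial positions in CT (fixed) and as seen by the webcam tracker (moving).
        fixed = np.array([[0., 0., 0.], [10., 0., 0.], [0., 10., 0.], [0., 0., 10.]])
        rot_z = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
        moving = fixed @ rot_z.T + [5., 2., 1.]
        R, t = rigid_register(moving, fixed)
        print(np.allclose(moving @ R.T + t, fixed))  # True: transform recovered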

  11. A qualitative study identifying the cost categories associated with electronic health record implementation in the UK

    PubMed Central

    Slight, Sarah P; Quinn, Casey; Avery, Anthony J; Bates, David W; Sheikh, Aziz

    2014-01-01

    Objective We conducted a prospective evaluation of different forms of electronic health record (EHR) systems to better understand the costs incurred during implementation and the factors that can influence these costs. Methods We selected a range of diverse organizations across three different geographical areas in England that were at different stages of implementing three centrally procured applications, that is, iSOFT's Lorenzo Regional Care, Cerner's Millennium, and CSE's RiO. 41 semi-structured interviews were conducted with hospital staff, members of the implementation team, and those involved in the implementation at a national level. Results Four main overarching cost categories were identified: infrastructure (eg, hardware and software), personnel (eg, training team), estates/facilities (eg, space), and other (eg, training materials). Many factors were felt to impact on these costs, with different hospitals choosing varying amounts and types of infrastructure, diverse training approaches for staff, and different software applications to integrate with the new system. Conclusions Improving the quality and safety of patient care through EHR adoption is a priority area for UK and US governments and policy makers worldwide. With cost considered one of the most significant barriers, it is important for hospitals and governments to be clear from the outset of the major cost categories involved and the factors that may impact on these costs. Failure to adequately train staff or to follow key steps in implementation has preceded many of the failures in this domain, which can create new safety hazards. PMID:24523391

  12. Controlling Infrastructure Costs: Right-Sizing the Mission Control Facility

    NASA Technical Reports Server (NTRS)

    Martin, Keith; Sen-Roy, Michael; Heiman, Jennifer

    2009-01-01

    Johnson Space Center's Mission Control Center is a space vehicle, space program agnostic facility. The current operational design is essentially identical to the original facility architecture that was developed and deployed in the mid-90's. In an effort to streamline the support costs of the mission critical facility, the Mission Operations Division (MOD) of Johnson Space Center (JSC) has sponsored an exploratory project to evaluate and inject current state-of-the-practice Information Technology (IT) tools, processes and technology into legacy operations. The general push in the IT industry has been trending towards a data-centric computer infrastructure for the past several years. Organizations facing challenges with facility operations costs are turning to creative solutions combining hardware consolidation, virtualization and remote access to meet and exceed performance, security, and availability requirements. The Operations Technology Facility (OTF) organization at the Johnson Space Center has been chartered to build and evaluate a parallel Mission Control infrastructure, replacing the existing, thick-client distributed computing model and network architecture with a data center model utilizing virtualization to provide the MCC Infrastructure as a Service. The OTF will design a replacement architecture for the Mission Control Facility, leveraging hardware consolidation through the use of blade servers, increasing utilization rates for compute platforms through virtualization while expanding connectivity options through the deployment of secure remote access. The architecture demonstrates the maturity of the technologies generally available in industry today and the ability to successfully abstract the tightly coupled relationship between thick-client software and legacy hardware into a hardware agnostic "Infrastructure as a Service" capability that can scale to meet future requirements of new space programs and spacecraft. This paper discusses the benefits and difficulties that a migration to cloud-based computing philosophies has uncovered when compared to the legacy Mission Control Center architecture. The team consists of system and software engineers with extensive experience with the MCC infrastructure and software currently used to support the International Space Station (ISS) and Space Shuttle program (SSP).

  13. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    NASA Astrophysics Data System (ADS)

    Anisenkov, Alexey; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-10-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store the different parameters and configuration data needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services. Being an intermediate middleware system between clients and external information sources (like central BDII, GOCDB, MyOSG), AGIS defines the relations between experiment-specific used resources and physical distributed computing capabilities. Having been in production during LHC Run 1, AGIS became the central information system for Distributed Computing in ATLAS, and it is continuously evolving to fulfil new user requests, enable enhanced operations and follow the extension of the ATLAS Computing model. The ATLAS Computing model and the data structures used by Distributed Computing applications and services are continuously evolving to fit newer requirements from the ADC community. In this note, we describe the evolution and recent developments of AGIS functionalities related to the integration of new technologies that have recently become widely used in ATLAS Computing, such as flexible computing utilization of opportunistic Cloud and HPC resources, ObjectStore services integration for the Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, unified storage protocols declaration required for the PanDA Pilot site movers, and others. The improvements of the information model and general updates are also shown; in particular, we explain how other collaborations outside ATLAS could benefit from the system as a computing resources information catalogue. AGIS is evolving towards a common information system, not coupled to a specific experiment.

  14. Integrated Seismological Network of Brazil: Key developments in technology.

    NASA Astrophysics Data System (ADS)

    Pirchiner, Marlon; Assumpção, Marcelo; Ferreira, Joaquim; França, George

    2010-05-01

    The Integrated Seismological Network of Brazil - BRASIS - will integrate the main Brazilian seismology groups in an extensive permanent broadband network with a (near) real-time acquisition system and automatic preliminary processing of epicenters and magnitudes. About 60 stations will cover the whole country to continuously monitor seismic activity. Most stations will be operating by the end of 2010. Data will be received from remote stations at each research group and redistributed to all other groups. In addition to issuing a national catalog of earthquakes, each institution will make its own analysis and issue its own bulletins, taking into account local and regional lithospheric structure. We chose the SEED format, the seedlink protocol and SeisComP as the standard data format, exchange protocol and software framework for network management, respectively. All the different existing equipment (e.g., Guralp/Scream, Geotech/CD1.1, RefTek/RTP, Quanterra/seedlink) will be integrated into the same system. We plan to provide: 1) improved station management through remote control, and indexes for quality control of acquired data, sending alerts to operators in critical cases; 2) automatic processing: picking, location with local and regional models and determination of magnitudes, issuing bulletins and alerts; 3) maintenance of an earthquake catalog and a waveform database; 4) query tools and access to metadata, catalogs and waveforms for all researchers. In addition, the catalog of earthquakes and other seismological data will be available as layers in a Spatial Data Infrastructure with open source standards, providing new analysis capabilities to all geoscientists. SeisComP3 has already been installed in two centers (UFRN and USP), with successful tests of the Q330, Guralp, RefTek and Geotech plug-ins to the seedlink protocol. We will discuss the main difficulties of our project and the solutions adopted to improve the Brazilian seismological infrastructure.
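
    For waveform handling of this kind, ObsPy is a commonly used Python toolkit (shown here as a hedged illustration; the abstract does not state which analysis tools BRASIS uses, and the file name is a placeholder):

        from obspy import read

        stream = read("example_station.mseed")   # miniSEED waveform file
        for trace in stream:
            print(trace.id, trace.stats.sampling_rate, trace.stats.npts)

        stream.detrend("linear")                 # basic pre-processing steps
        stream.filter("bandpass", freqmin=0.5, freqmax=10.0)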

  15. Towards an integrated European strong motion data distribution

    NASA Astrophysics Data System (ADS)

    Luzi, Lucia; Clinton, John; Cauzzi, Carlo; Puglia, Rodolfo; Michelini, Alberto; Van Eck, Torild; Sleeman, Reinhoud; Akkar, Sinan

    2013-04-01

    Recent decades have seen a significant increase in the quality and quantity of strong motion data collected in Europe, as dense, often real-time and continuously monitored broadband strong motion networks have been constructed in many nations. There has been a concurrent increase in demand for access to strong motion data, not only from researchers for engineering and seismological studies, but also from civil authorities and seismic networks for the rapid assessment of ground motion and shaking intensity following significant earthquakes (e.g. ShakeMaps). Aside from a few notable exceptions on the national scale, databases providing access to strong motion data have not kept pace with these developments. In the framework of the EC infrastructure project NERA (2010-2014), which integrates key research infrastructures in Europe for monitoring earthquakes and assessing their hazard and risk, the network activity NA3 deals with the networking of acceleration networks and strong motion data. Within the NA3 activity two infrastructures are being constructed: i) a Rapid Response Strong Motion (RRSM) database, which, following a strong event, automatically parameterises all available on-scale waveform data within the European Integrated waveform Data Archives (EIDA) and makes the waveforms easily available to the seismological community within minutes of an event; and ii) a European Strong Motion (ESM) database of accelerometric records, with associated metadata relevant to the earthquake engineering and seismology research communities, using standard, manual processing that reflects the state of the art and research needs in these fields. These two separate repositories form the core infrastructures being built to distribute strong motion data in Europe in order to guarantee rapid and long-term availability of high quality waveform data to both the international scientific community and the hazard mitigation communities. These infrastructures will provide access to strong motion data in an eventual EPOS seismological service. A working group on strong motion data is being created at ORFEUS in 2013. This body, consisting of experts in strong motion data collection, processing and research from across Europe, will provide the umbrella organisation that will 1) have the political clout to negotiate data sharing agreements with strong motion data providers and 2) manage the software during the transition from the end of NERA to the EPOS community. We expect the community providing data to the RRSM and ESM to grow gradually, under the supervision of ORFEUS, and eventually include strong motion data from networks in all European countries that can have an open data policy.

  16. The CARMEN software as a service infrastructure.

    PubMed

    Weeks, Michael; Jessop, Mark; Fletcher, Martyn; Hodge, Victoria; Jackson, Tom; Austin, Jim

    2013-01-28

    The CARMEN platform allows neuroscientists to share data, metadata, services and workflows, and to execute these services and workflows remotely via a Web portal. This paper describes how we implemented a service-based infrastructure in the CARMEN Virtual Laboratory. A Software as a Service framework was developed to allow generic new and legacy code to be deployed as services on a heterogeneous execution framework. Users can submit analysis code, typically written in Matlab, Python, C/C++ or R, as non-interactive standalone command-line applications, and wrap it as services in a form suitable for deployment on the platform. The CARMEN Service Builder tool enables neuroscientists to quickly wrap their analysis software for deployment to the CARMEN platform as a service, without knowledge of the service framework or the CARMEN system. A metadata schema describes each service in terms of both system and user requirements. The search functionality allows services to be quickly discovered from among the many services available. Within the platform, services may be combined into more complicated analyses using the workflow tool. CARMEN and the service infrastructure are targeted towards the neuroscience community; however, it is a generic platform and can be targeted towards any discipline.
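
    The wrap-and-execute idea can be sketched as a metadata description plus a thin command-line wrapper (a hedged illustration in the spirit of the Service Builder, not its actual schema; all names are hypothetical):

        import subprocess

        SERVICE_METADATA = {
            "name": "spike-detect",
            "inputs": [{"name": "recording", "type": "file"}],
            "outputs": [{"name": "spikes", "type": "file"}],
            "command": ["python", "spike_detect.py", "--in", "{recording}", "--out", "{spikes}"],
        }

        def run_service(metadata: dict, **bindings: str) -> int:
            """Substitute concrete file paths into the command template and execute it."""
            command = [arg.format(**bindings) for arg in metadata["command"]]
            return subprocess.run(command, check=True).returncode

        # run_service(SERVICE_METADATA, recording="session1.dat", spikes="session1_spikes.csv")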

  17. Use of Google Earth to strengthen public health capacity and facilitate management of vector-borne diseases in resource-poor environments.

    PubMed

    Lozano-Fuentes, Saul; Elizondo-Quiroga, Darwin; Farfan-Ale, Jose Arturo; Loroño-Pino, Maria Alba; Garcia-Rejon, Julian; Gomez-Carro, Salvador; Lira-Zumbardo, Victor; Najera-Vazquez, Rosario; Fernandez-Salas, Ildefonso; Calderon-Martinez, Joaquin; Dominguez-Galera, Marco; Mis-Avila, Pedro; Morris, Natashia; Coleman, Michael; Moore, Chester G; Beaty, Barry J; Eisen, Lars

    2008-09-01

    Novel, inexpensive solutions are needed for improved management of vector-borne and other diseases in resource-poor environments. Emerging free software providing access to satellite imagery and simple editing tools (e.g. Google Earth) complements existing geographic information system (GIS) software and provides new opportunities for: (i) strengthening overall public health capacity through development of information for city infrastructures; and (ii) display of public health data directly on an image of the physical environment. We used freely accessible satellite imagery and a set of feature-making tools included in the software (allowing for production of polygons, lines and points) to generate information for city infrastructure and to display disease data in a dengue decision support system (DDSS) framework. Two cities in Mexico (Chetumal and Merida) were used to demonstrate that a basic representation of city infrastructure, useful as a spatial backbone in a DDSS, can be rapidly developed at minimal cost. Data layers generated included labelled polygons representing city blocks, lines representing streets, and points showing the locations of schools and health clinics. City blocks were colour-coded to show the presence of dengue cases. The data layers were successfully imported, in the shapefile format, into GIS software. The combination of Google Earth and free GIS software (e.g. HealthMapper, developed by WHO, and SIGEpi, developed by PAHO) has tremendous potential to strengthen overall public health capacity and facilitate decision support system approaches to prevention and control of vector-borne diseases in resource-poor environments.
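
    A data layer of the kind described, one colour-coded city block and one point of interest, can be generated as plain KML with nothing beyond the standard library (a minimal sketch; the coordinates are illustrative only, in lon,lat order):

        KML = """<?xml version="1.0" encoding="UTF-8"?>
        <kml xmlns="http://www.opengis.net/kml/2.2">
          <Document>
            <Placemark>
              <name>Block 12 (dengue case present)</name>
              <Style><PolyStyle><color>7f0000ff</color></PolyStyle></Style>
              <Polygon><outerBoundaryIs><LinearRing><coordinates>
                -88.305,18.500 -88.304,18.500 -88.304,18.501 -88.305,18.501 -88.305,18.500
              </coordinates></LinearRing></outerBoundaryIs></Polygon>
            </Placemark>
            <Placemark>
              <name>Health clinic</name>
              <Point><coordinates>-88.3045,18.5005</coordinates></Point>
            </Placemark>
          </Document>
        </kml>
        """

        with open("city_blocks.kml", "w", encoding="utf-8") as f:
            f.write(KML)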

  18. Challenges in Managing Trustworthy Large-scale Digital Science

    NASA Astrophysics Data System (ADS)

    Evans, B. J. K.

    2017-12-01

    The increased use of large-scale international digital science has opened a number of challenges for managing, handling, using and preserving scientific information. The large volumes of information are driven by three main categories: model outputs, including coupled models and ensembles; data products that have been processed to a level of usability; and increasingly heuristically driven data analysis. These data products are increasingly the ones that are usable by the broad communities, far in excess of the raw instrument data outputs. The data, software and workflows are then shared and replicated to allow broad use at an international scale, which places further demands on infrastructure to support how the information is managed reliably across distributed resources. Users necessarily rely on these underlying "black boxes" so that they can be productive in producing new scientific outcomes. The software for these systems depends on computational infrastructure, interconnected software systems, and information capture systems. This ranges from the fundamentals of the reliability of the compute hardware, through system software stacks and libraries, to the model software. Due to these complexities and the capacity of the infrastructure, there is an increased emphasis on transparency of approach and robustness of methods over full reproducibility. Furthermore, with large-volume data management, it is increasingly difficult to store the historical versions of all model and derived data. Instead, the emphasis is on the ability to access the updated products and the reliability with which previous outcomes remain relevant and can be updated with the new information. We will discuss these challenges and some of the approaches underway to address these issues.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Edward J., Jr.; Henry, Karen Lynne

    Sandia National Laboratories develops technologies to: (1) sustain, modernize, and protect our nuclear arsenal; (2) prevent the spread of weapons of mass destruction; (3) provide new capabilities to our armed forces; (4) protect our national infrastructure; (5) ensure the stability of our nation's energy and water supplies; and (6) defend our nation against terrorist threats. We identified the need for a single overarching Integrated Workplace Management System (IWMS) that would enable us to focus on customer missions and improve FMOC processes. Our team selected highly configurable commercial-off-the-shelf (COTS) software with out-of-the-box workflow processes that integrate strategic planning, project management, facility assessments, and space management, and can interface with existing systems, such as Oracle, PeopleSoft, Maximo, Bentley, and FileNet. We selected the Integrated Workplace Management System (IWMS) from Tririga, Inc. Facility Management System (FMS) benefits are: (1) create a single reliable source for facility data; (2) improve transparency with oversight organizations; (3) streamline FMOC business processes with a single, integrated facility-management tool; (4) give customers simple tools and real-time information; (5) reduce indirect costs; (6) replace approximately 30 FMOC systems and 60 homegrown tools (such as Microsoft Access databases); and (7) integrate with FIMS.

  20. A Scalable Data Integration and Analysis Architecture for Sensor Data of Pediatric Asthma.

    PubMed

    Stripelis, Dimitris; Ambite, José Luis; Chiang, Yao-Yi; Eckel, Sandrah P; Habre, Rima

    2017-04-01

    According to the Centers for Disease Control, there are 6.8 million children living with asthma in the United States. Despite the importance of the disease, the available prognostic tools are not sufficient for biomedical researchers to thoroughly investigate the potential risks of the disease at scale. To overcome these challenges, we present a big data integration and analysis infrastructure developed by our Data and Software Coordination and Integration Center (DSCIC) of the NIBIB-funded Pediatric Research using Integrated Sensor Monitoring Systems (PRISMS) program. Our goal is to help biomedical researchers efficiently predict and prevent asthma attacks. The PRISMS-DSCIC is responsible for collecting, integrating, storing, and analyzing real-time environmental, physiological and behavioral data obtained from heterogeneous sensor and traditional data sources. Our architecture is based on the Apache Kafka, Spark and Hadoop frameworks and the PostgreSQL DBMS. A main contribution of this work is extending the Spark framework with a mediation layer, based on logical schema mappings and query rewriting, to facilitate data analysis over a consistent harmonized schema. The system provides both batch and stream analytic capabilities over the massive data generated by wearable and fixed sensors.
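
    The mediation layer's schema-mapping idea can be sketched with PySpark (a hedged illustration of the concept, not the PRISMS-DSCIC implementation; source paths and field names are hypothetical):

        from pyspark.sql import SparkSession
        from pyspark.sql.functions import col

        spark = SparkSession.builder.appName("mediation-sketch").getOrCreate()

        # Harmonized logical schema: (subject_id, timestamp, pm25); one mapping per source.
        SCHEMA_MAPPINGS = {
            "wearable":   {"uid": "subject_id", "ts": "timestamp", "pm_25_ugm3": "pm25"},
            "fixed_site": {"participant": "subject_id", "time": "timestamp", "pm25": "pm25"},
        }

        def harmonize(source_name: str, path: str):
            """Rewrite a source's columns into the harmonized schema."""
            mapping = SCHEMA_MAPPINGS[source_name]
            df = spark.read.json(path)
            return df.select([col(src).alias(dst) for src, dst in mapping.items()])

        # Analyses are written once against the harmonized union of all sources.
        union = harmonize("wearable", "wearable.json").unionByName(
            harmonize("fixed_site", "fixed_site.json"))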

  1. 3rd Annual Earth System Grid Federation and Ultrascale Visualization Climate Data Analysis Tools Face-to-Face Meeting Report, December 2013

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Dean N.

    The climate and weather data science community gathered December 3–5, 2013, at Lawrence Livermore National Laboratory, in Livermore, California, for the third annual Earth System Grid Federation (ESGF) and Ultra-scale Visualization Climate Data Analysis Tools (UV-CDAT) Face-to-Face (F2F) Meeting, which was hosted by the Department of Energy, National Aeronautics and Space Administration, National Oceanic and Atmospheric Administration, the European Infrastructure for the European Network of Earth System Modelling, and the Australian Department of Education. Both ESGF and UV-CDAT are global collaborations designed to develop a new generation of open-source software infrastructure that provides distributed access and analysis to observed and simulated data from the climate and weather communities. The tools and infrastructure developed under these international multi-agency collaborations are critical to understanding extreme weather conditions and long-term climate change, while the F2F meetings help to build a stronger climate and weather data science community and stronger federated software infrastructure. The 2013 F2F meeting determined requirements for existing and impending national and international community projects; enhancements needed for data distribution, analysis, and visualization infrastructure; and standards and resources needed for better collaborations.

  2. European seismological data exchange, access and processing: current status of the Research Infrastructure project NERIES

    NASA Astrophysics Data System (ADS)

    Giardini, D.; van Eck, T.; Bossu, R.; Wiemer, S.

    2009-04-01

    The EC Research Infrastructure project NERIES, an Integrated Infrastructure Initiative in seismology for 2006-2010, has passed its mid-term point. We present a short, concise overview of the current state of the project, the cooperation established with other European and global projects, and the planning for the last year of the project. Earthquake data archiving and access within Europe have improved dramatically during the last two years. This concerns earthquake parameters, digital broadband and acceleration waveforms, and historical data. The Virtual European Broadband Seismic Network (VEBSN) currently consists of more than 300 stations. A new distributed data archive concept, the European Integrated Waveform Data Archive (EIDA), has been implemented in Europe, connecting the larger European seismological waveform data archives. Global standards for earthquake parameter data (QuakeML) and tomography models have been developed and are being established. Web application technology has been and is being developed to make a jump start to the next generation of data services. A NERIES data portal provides a number of services testing the potential capacities of new open-source web technologies. Data application tools such as shakemaps, lossmaps, site response estimation, and tools for data processing and visualisation are currently available, although some of these tools are still in an alpha version. A European tomography reference model will be discussed at a special workshop in June 2009. Shakemaps, coherent with the NEIC application, are implemented in several countries, among them Turkey, Italy, Romania, and Switzerland. The comprehensive site response software is being distributed and used both inside and outside the project. NERIES organises several workshops inviting both consortium and non-consortium participants and covering a wide range of subjects: ‘Seismological observatory operation tools', ‘Tomography', ‘Ocean bottom observatories', 'Site response software training', ‘Historical earthquake catalogues', ‘Distribution of acceleration data', etc. Some of these workshops are coordinated with other organisations/projects, such as ORFEUS, ESONET, and IRIS. NERIES also offers grants to individual researchers or groups to work at facilities such as the Swiss national seismological network (SED/ETHZ, Switzerland), the CEA/DASE facilities in France, the data scanning facilities at INGV (SISMOS), the array facilities of NORSAR (Norway) and the new Conrad Facility in Austria.
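
    As an aside on the QuakeML standard mentioned above, event parameters serialized as QuakeML can be consumed with generic tooling. A minimal sketch, assuming ObsPy is installed and a local QuakeML file exists (the file name is a placeholder, not a NERIES resource):

        # Read a QuakeML catalogue and print basic event parameters.
        from obspy import read_events

        catalog = read_events("events.xml")  # placeholder path to a QuakeML file
        for event in catalog:
            origin = event.preferred_origin() or event.origins[0]
            magnitude = event.preferred_magnitude() or event.magnitudes[0]
            print(origin.time, origin.latitude, origin.longitude, magnitude.mag)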

  3. Architectural Implications of Cloud Computing

    DTIC Science & Technology

    2011-10-24

    Slide excerpts on cloud computing types: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS), a model of software deployment in which a third party hosts the software. The presentation, from the Carnegie Mellon University Software Engineering Institute (2011), relates to the Research, Technology, and System Solutions (RTSS) Program and to work on service-oriented architecture (SOA), cloud computing, and context.

  4. Why You Should Consider Green Stormwater Infrastructure for Your Community

    EPA Pesticide Factsheets

    This page provides an overview of the nation's infrastructure needs and costs and the benefits of integrating green infrastructure into projects that typically use grey infrastructure, such as roadways, sidewalks and parking lots.

  5. Integration of structural health monitoring and asset management.

    DOT National Transportation Integrated Search

    2012-08-01

    This project investigated the feasibility and potential benefits of the integration of infrastructure monitoring systems into enterprise-scale transportation management systems. An infrastructure monitoring system designed for bridges was implemented...

  6. NASA Ames Sustainability Initiatives: Aeronautics, Space Exploration, and Sustainable Futures

    NASA Technical Reports Server (NTRS)

    Grymes, Rosalind A.

    2015-01-01

    In support of the mission-specific challenges of aeronautics and space exploration, NASA Ames produces a wealth of research and technology advancements with significant relevance to larger issues of planetary sustainability. NASA research on NexGen airspace solutions and its development of autonomous and intelligent technologies will revolutionize the nation's air transportation systems and have applicability to the low-altitude flight economy and to both air and ground transportation more generally. NASA's understanding of the Earth as a complex of integrated systems contributes to humanity's perception of the sustainability of our home planet. Research at NASA Ames on closed-environment life support systems produces directly applicable lessons on energy, water, and resource management in ground-based infrastructure. Moreover, every NASA campus is a 'city', comprising an urbanscape and a workplace that includes scientists, human relations specialists, plumbers, engineers, facility managers, construction trades, transportation managers, software developers, leaders, financial planners, technologists, electricians, students, accountants, and even lawyers. NASA is applying the lessons of our mission-related activities to our urbanscapes and infrastructure, and also anticipates a leadership role in developing future environments for living and working in space.

  7. Genomic cloud computing: legal and ethical points to consider

    PubMed Central

    Dove, Edward S; Joly, Yann; Tassé, Anne-Marie; Burton, Paul; Chisholm, Rex; Fortier, Isabel; Goodwin, Pat; Harris, Jennifer; Hveem, Kristian; Kaye, Jane; Kent, Alistair; Knoppers, Bartha Maria; Lindpaintner, Klaus; Little, Julian; Riegman, Peter; Ripatti, Samuli; Stolk, Ronald; Bobrow, Martin; Cambon-Thomsen, Anne; Dressler, Lynn; Joly, Yann; Kato, Kazuto; Knoppers, Bartha Maria; Rodriguez, Laura Lyman; McPherson, Treasa; Nicolás, Pilar; Ouellette, Francis; Romeo-Casabona, Carlos; Sarin, Rajiv; Wallace, Susan; Wiesner, Georgia; Wilson, Julia; Zeps, Nikolajs; Simkevitz, Howard; De Rienzo, Assunta; Knoppers, Bartha M

    2015-01-01

    The biggest challenge in twenty-first century data-intensive genomic science is developing vast computer infrastructure and advanced software tools to perform comprehensive analyses of genomic data sets for biomedical research and clinical practice. Researchers are increasingly turning to cloud computing both as a solution to integrate data from genomics, systems biology and biomedical data mining and as an approach to analyze data to solve biomedical problems. Although cloud computing provides several benefits such as lower costs and greater efficiency, it also raises legal and ethical issues. In this article, we discuss three key ‘points to consider' (data control; data security, confidentiality and transfer; and accountability) based on a preliminary review of several publicly available cloud service providers' Terms of Service. These ‘points to consider' should be borne in mind by genomic research organizations when negotiating legal arrangements to store genomic data on a large commercial cloud service provider's servers. Diligent genomic cloud computing means leveraging security standards and evaluation processes as a means to protect data and entails many of the same good practices that researchers should always consider in securing their local infrastructure. PMID:25248396

  8. Genomic cloud computing: legal and ethical points to consider.

    PubMed

    Dove, Edward S; Joly, Yann; Tassé, Anne-Marie; Knoppers, Bartha M

    2015-10-01

    The biggest challenge in twenty-first century data-intensive genomic science is developing vast computer infrastructure and advanced software tools to perform comprehensive analyses of genomic data sets for biomedical research and clinical practice. Researchers are increasingly turning to cloud computing both as a solution to integrate data from genomics, systems biology and biomedical data mining and as an approach to analyze data to solve biomedical problems. Although cloud computing provides several benefits such as lower costs and greater efficiency, it also raises legal and ethical issues. In this article, we discuss three key 'points to consider' (data control; data security, confidentiality and transfer; and accountability) based on a preliminary review of several publicly available cloud service providers' Terms of Service. These 'points to consider' should be borne in mind by genomic research organizations when negotiating legal arrangements to store genomic data on a large commercial cloud service provider's servers. Diligent genomic cloud computing means leveraging security standards and evaluation processes as a means to protect data and entails many of the same good practices that researchers should always consider in securing their local infrastructure.

  9. Human rights barriers for displaced persons in southern Sudan.

    PubMed

    Pavlish, Carol; Ho, Anita

    2009-01-01

    This community-based research explores community perspectives on human rights barriers that women encounter in a postconflict setting of southern Sudan. An ethnographic design was used to guide data collection in five focus groups with community members and during in-depth interviews with nine key informants. A constant comparison method of data analysis was used. Atlas.ti data management software facilitated the inductive coding and sorting of data. Participants identified three formal structures and one set of informal community structures for human rights. Human rights barriers included shifting legal frameworks, doubt about human rights, weak government infrastructure, and poverty. The evolving government infrastructure cannot currently provide adequate human rights protection, especially for women. Living in poverty without development opportunities is itself bound up with human rights abuses. Good governance, protection, and human development opportunities were emphasized as priority human rights concerns. A human rights framework could serve as a powerful integrator of health and development work with community-based organizations. The results help nurses understand the intersection between health and human rights as well as approaches to advancing rights in a culturally attuned manner.

  10. Performance Characteristic Mems-Based IMUs for UAVs Navigation

    NASA Astrophysics Data System (ADS)

    Mohamed, H. A.; Hansen, J. M.; Elhabiby, M. M.; El-Sheimy, N.; Sesay, A. B.

    2015-08-01

    Accurate 3D reconstruction has become essential for non-traditional mapping applications such as urban planning, the mining industry, environmental monitoring, navigation, surveillance, pipeline inspection, infrastructure monitoring, landslide hazard analysis, indoor localization, and military simulation. The needs of these applications cannot be satisfied by traditional mapping, which is based on dedicated data acquisition systems designed for mapping purposes. Recent advances in hardware and software development have made it possible to conduct accurate 3D mapping without using costly and high-end data acquisition systems. Low-cost digital cameras, laser scanners, and navigation systems can provide accurate mapping if they are properly integrated at the hardware and software levels. Unmanned Aerial Vehicles (UAVs) are emerging as a mobile mapping platform that can provide additional economical and practical advantages. However, such economical and practical requirements need navigation systems that can provide an uninterrupted navigation solution. Hence, testing the performance characteristics of Micro-Electro-Mechanical Systems (MEMS) and other low-cost navigation sensors for various UAV applications is an important research task. This work focuses on studying the performance characteristics under different manoeuvres using inertial measurements integrated with single point positioning, Real-Time-Kinematic (RTK) positioning, and additional navigational aiding sensors. Furthermore, the performance of the inertial sensors is tested during Global Positioning System (GPS) signal outage.
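
    To make the integration concept concrete, the sketch below shows a toy one-dimensional complementary filter that blends high-rate inertial dead reckoning with low-rate GPS position fixes. It illustrates the general technique only, not the estimator used in the paper, and the gain value is an assumption.

        # Toy 1-D complementary filter: IMU propagation, GPS correction.
        def complementary_update(pos, vel, accel, dt, gps_pos=None, alpha=0.98):
            """Propagate position/velocity from the IMU; blend in GPS when available."""
            vel += accel * dt            # integrate measured acceleration
            pos += vel * dt              # integrate velocity to position
            if gps_pos is not None:      # GPS fixes arrive at a much lower rate
                pos = alpha * pos + (1.0 - alpha) * gps_pos
            return pos, vel

        pos, vel = 0.0, 0.0
        pos, vel = complementary_update(pos, vel, accel=0.2, dt=0.01)               # IMU only
        pos, vel = complementary_update(pos, vel, accel=0.2, dt=0.01, gps_pos=0.0)  # with GPS fix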

  11. e!DAL - a framework to store, share and publish research data

    PubMed Central

    2014-01-01

    Background The life-science community faces a major challenge in handling “big data”, highlighting the need for high quality infrastructures capable of sharing and publishing research data. Data preservation, analysis, and publication are the three pillars in the “big data life cycle”. The infrastructures currently available for managing and publishing data are often designed to meet domain-specific or project-specific requirements, resulting in the repeated development of proprietary solutions and lower quality data publication and preservation overall. Results e!DAL is a lightweight software framework for publishing and sharing research data. Its main features are version tracking, metadata management, information retrieval, registration of persistent identifiers (DOI), an embedded HTTP(S) server for public data access, access as a network file system, and a scalable storage backend. e!DAL is available as an API for local non-shared storage and as a remote API featuring distributed applications. It can be deployed “out-of-the-box” as an on-site repository. Conclusions e!DAL was developed based on experiences coming from decades of research data management at the Leibniz Institute of Plant Genetics and Crop Plant Research (IPK). Initially developed as a data publication and documentation infrastructure for the IPK’s role as a data center in the DataCite consortium, e!DAL has grown towards being a general data archiving and publication infrastructure. The e!DAL software has been deployed into the Maven Central Repository. Documentation and Software are also available at: http://edal.ipk-gatersleben.de. PMID:24958009

  12. e!DAL--a framework to store, share and publish research data.

    PubMed

    Arend, Daniel; Lange, Matthias; Chen, Jinbo; Colmsee, Christian; Flemming, Steffen; Hecht, Denny; Scholz, Uwe

    2014-06-24

    The life-science community faces a major challenge in handling "big data", highlighting the need for high quality infrastructures capable of sharing and publishing research data. Data preservation, analysis, and publication are the three pillars in the "big data life cycle". The infrastructures currently available for managing and publishing data are often designed to meet domain-specific or project-specific requirements, resulting in the repeated development of proprietary solutions and lower quality data publication and preservation overall. e!DAL is a lightweight software framework for publishing and sharing research data. Its main features are version tracking, metadata management, information retrieval, registration of persistent identifiers (DOI), an embedded HTTP(S) server for public data access, access as a network file system, and a scalable storage backend. e!DAL is available as an API for local non-shared storage and as a remote API featuring distributed applications. It can be deployed "out-of-the-box" as an on-site repository. e!DAL was developed based on experiences coming from decades of research data management at the Leibniz Institute of Plant Genetics and Crop Plant Research (IPK). Initially developed as a data publication and documentation infrastructure for the IPK's role as a data center in the DataCite consortium, e!DAL has grown towards being a general data archiving and publication infrastructure. The e!DAL software has been deployed into the Maven Central Repository. Documentation and Software are also available at: http://edal.ipk-gatersleben.de.
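
    One feature listed above, registration of persistent identifiers, is typically done against the DataCite service named in the abstract. The sketch below mints a draft DOI through the public DataCite REST API; it is a generic illustration, not e!DAL's internal code, and the credentials, titles and URLs are placeholders.

        # Mint a draft DOI via the DataCite test REST API (placeholder credentials).
        import requests

        payload = {
            "data": {
                "type": "dois",
                "attributes": {
                    "prefix": "10.5072",  # DataCite's test prefix
                    "titles": [{"title": "Example research dataset"}],
                    "url": "https://example.org/dataset/1",
                },
            }
        }
        resp = requests.post(
            "https://api.test.datacite.org/dois",
            json=payload,
            auth=("REPOSITORY_ID", "PASSWORD"),  # placeholder repository account
            headers={"Content-Type": "application/vnd.api+json"},
        )
        resp.raise_for_status()
        print(resp.json()["data"]["id"])  # the newly minted draft DOI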

  13. Security and Policy for Group Collaboration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ian Foster; Carl Kesselman

    2006-07-31

    “Security and Policy for Group Collaboration” was a Collaboratory Middleware research project aimed at providing the fundamental security and policy infrastructure required to support the creation and operation of distributed, computationally enabled collaborations. The project developed infrastructure that exploits innovative new techniques to address challenging issues of scale, dynamics, distribution, and role. To greatly reduce the cost of adding new members to a collaboration, we developed and evaluated new techniques for creating and managing credentials based on public key certificates, including support for online certificate generation, online certificate repositories, and support for multiple certificate authorities. To facilitate the integration of new resources into a collaboration, we significantly improved the integration of local security environments. To make it easy to create and change the roles and associated privileges of both resources and participants of a collaboration, we developed community-wide authorization services that provide distributed, scalable means for specifying policy. These services make possible the delegation of capability from the community to a specific user, class of user, or resource. Finally, we instantiated our research results into a framework that makes them usable by a wide range of collaborative tools. The resulting mechanisms and software have been widely adopted within DOE projects and in many other scientific projects. The widespread adoption of our Globus Toolkit technology has provided, and continues to provide, a natural dissemination and technology transfer vehicle for our results.

  14. Integration of Mobile Satellite and Cellular Systems

    NASA Technical Reports Server (NTRS)

    Drucker, E. H.; Estabrook, P.; Pinck, D.; Ekroot, L.

    1993-01-01

    By integrating the ground based infrastructure component of a mobile satellite system with the infrastructure systems of terrestrial 800 MHz cellular service providers, a seamless network of universal coverage can be established.

  15. Green Infrastructure Management Techniques in Arid and Semi-arid Regions: Software Implementation and Demonstration using the AGWA/KINEROS2 Watershed Model

    EPA Science Inventory

    Increasing urban development in the arid and semi-arid regions of the southwestern United States has led to greater demand for water in a region with limited water resources and has fundamentally altered the hydrologic response of developed watersheds. Green Infrastructure (GI) p...

  16. Representing Green Infrastructure Management Techniques in Arid and Semi-arid Regions: Software Implementation and Demonstration using the AGWA/KINEROS2 Watershed Model

    EPA Science Inventory

    Increasing urban development in the arid and semi-arid regions of the southwestern United States has led to greater demand for water from a region of limited water resources which has fundamentally altered the hydrologic response of developed watersheds. Green Infrastructure (GI)...

  17. PIPER: Performance Insight for Programmers and Exascale Runtimes: Guiding the Development of the Exascale Software Stack

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mellor-Crummey, John

    The PIPER project set out to develop methodologies and software for measurement, analysis, attribution, and presentation of performance data for extreme-scale systems. Goals of the project were to support analysis of massive multi-scale parallelism, heterogeneous architectures, and multi-faceted performance concerns, and to support both post-mortem performance analysis, to identify program features that contribute to problematic performance, and on-line performance analysis, to drive adaptation. This final report summarizes the research and development activity at Rice University as part of the PIPER project. Producing a complete suite of performance tools for exascale platforms during the course of this project was impossible, since both hardware and software for exascale systems are still moving targets. For that reason, the project focused broadly on: developing new techniques for measurement and analysis of performance on modern parallel architectures; enhancing HPCToolkit's software infrastructure to support our research goals and use on sophisticated applications; engaging developers of multithreaded runtimes to explore how support for tools should be integrated into their designs; engaging operating system developers with feature requests for enhanced monitoring support; engaging vendors with requests that they add hardware measurement capabilities and software interfaces needed by tools as they design new components of HPC platforms, including processors, accelerators and networks; and finally, collaborating with partners interested in using HPCToolkit to analyze and tune scalable parallel applications.

  18. EPOS Thematic Core Service Anthropogenic Hazards: Implementation Plan

    NASA Astrophysics Data System (ADS)

    Orlecka-Sikora, Beata; Lasocki, Stanislaw; Grasso, Jean Robert; Schmittbuhl, Jean; Styles, Peter; Kwiatek, Grzegorz; Sterzel, Mariusz; Garcia, Alexander

    2015-04-01

    EPOS Thematic Core Service ANTHROPOGENIC HAZARDS (TCS AH) aims to integrate distributed research infrastructures (RI) to facilitate and stimulate research on anthropogenic hazards (AH), especially those associated with the exploration and exploitation of geo-resources. The innovative element is the uniqueness of the integrated RI, which comprises two main deliverables: (1) exceptional datasets, called "episodes", which comprehensively describe a geophysical process, induced or triggered by human technological activity, that poses a hazard for populations, infrastructure and the environment; (2) problem-oriented, bespoke services uniquely designed for the discrimination and analysis of correlations between technology, geophysical response and resulting hazard. These objectives will be achieved through the Science-Industry Synergy (SIS) built by EPOS WG10, ensuring bi-directional information exchange, including unique and previously unavailable data furnished by industrial partners. The episodes and services to be integrated have been selected using strict criteria during the EPOS PP. The data are related to a wide spectrum of inducing technologies, with seismic/aseismic deformation and production history as a minimum data set requirement, and the quality of the software services is confirmed and referenced in the literature. Implementation of TCS AH is planned for four years and requires five major activities: (1) Strategic Activities and Governance: will define and establish the governance structure to ensure the long-term sustainability of these research infrastructures for data provision through EPOS. (2) Coordination and Interaction with the Community: will establish robust communication channels within the whole TCS AH community while supporting the global EPOS communication strategy. (3) Interoperability with EPOS Integrated Core Service (ICS) and Testing Activities: will coordinate and ensure interoperability between the RIs and the ICS. Within this modality a functional e-research environment with access to High-Performance Computing will be built; a prototype for such an environment is already under construction and will become operational in mid-2015 (is-epos.eu). (4) Integration of AH Episodes: will address at least 20 global episodes related to conventional hydrocarbon extraction, reservoir treatment, underground mining and geothermal energy production, which will be integrated into the e-environment of TCS AH. All the multi-disciplinary heterogeneous data from these episodes will be transformed into unified structures to form integrated data sets articulated with the defined standards of the ICS and other TCSs. (5) Implementation of Services for Analyzing Episodes: will deliver the protocols and methodologies for analysis of the seismic/deformation response to time-varying georesource exploitation technologies on long and short time scales, and the related time- and technology-dependent seismic hazard issues.

  19. Tips for Ensuring Successful Software Implementation

    ERIC Educational Resources Information Center

    Weathers, Robert

    2013-01-01

    Implementing an enterprise-level, mission-critical software system is an infrastructure project akin to other sizable projects, such as building a school. It's costly and complex, takes a year or more to complete, requires the collaboration of many different parties, involves uncertainties, results in a long-lived asset requiring ongoing…

  1. System for critical infrastructure security based on multispectral observation-detection module

    NASA Astrophysics Data System (ADS)

    Trzaskawka, Piotr; Kastek, Mariusz; Życzkowski, Marek; Dulski, Rafał; Szustakowski, Mieczysław; Ciurapiński, Wiesław; Bareła, Jarosław

    2013-10-01

    Recent terrorist attacks, and the possibility of such actions in the future, have driven the development of security systems for critical infrastructures that embrace both sensor technologies and the technical organization of systems. The perimeter protection of stationary objects used until now, based on a ring of two-zone fencing and visible-light cameras with illumination, is being efficiently displaced by multisensor systems consisting of: visible technology - day/night cameras registering the optical contrast of a scene; thermal technology - inexpensive bolometric cameras recording the thermal contrast of a scene; and active ground radars - at microwave and millimetre wavelengths - that detect and record reflected radiation. Merging these three different technologies into one system requires a methodology for selecting the technical conditions of installation and the parameters of the sensors. This procedure enables the construction of a system with correlated range, resolution, field of view and object identification. An important technical problem connected with the multispectral system is its software, which couples the radar with the cameras. This software can be used for automatic focusing of cameras, automatically guiding cameras to an object detected by the radar, tracking the object and localizing it on a digital map, as well as target identification and alerting. Based on a "plug and play" architecture, the system provides unmatched flexibility and simple integration of sensors and devices in TCP/IP networks. Using a graphical user interface it is possible to control sensors and monitor streaming video and other data over the network, visualize the results of the data fusion process, and obtain detailed information about detected intruders over a digital map. The system provides high-level applications and operator workload reduction with features such as sensor-to-sensor cueing from detection devices, automatic e-mail notification and alarm triggering. The paper presents the structure and some elements of a critical infrastructure protection solution based on a modular multisensor security system. The system description focuses mainly on the methodology for selecting sensor parameters. The results of tests in real conditions are also presented.
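
    The radar-to-camera cueing mentioned above reduces, at its core, to converting a radar track in polar form into pan/tilt angles for a camera. The sketch below shows this geometry under simplifying assumptions (co-located sensors sharing a bearing datum, flat terrain); it is illustrative only, not the system's actual software.

        # Convert a radar detection (range, bearing) to camera pan/tilt angles.
        import math

        def radar_to_pan_tilt(range_m, bearing_deg, target_height_m=0.0,
                              camera_height_m=3.0):
            """Return (pan, tilt) in degrees for cueing a camera toward a radar track."""
            pan = bearing_deg % 360.0   # camera shares the radar's bearing datum
            tilt = math.degrees(math.atan2(target_height_m - camera_height_m,
                                           max(range_m, 1e-6)))
            return pan, tilt

        print(radar_to_pan_tilt(250.0, 47.5))  # e.g. intruder at 250 m, bearing 47.5 deg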

  2. Elements of an integrated health monitoring framework

    NASA Astrophysics Data System (ADS)

    Fraser, Michael; Elgamal, Ahmed; Conte, Joel P.; Masri, Sami; Fountain, Tony; Gupta, Amarnath; Trivedi, Mohan; El Zarki, Magda

    2003-07-01

    Internet technologies are increasingly facilitating real-time monitoring of bridges and highways. The advances in wireless communications, for instance, are allowing practical deployments for large extended systems. Sensor data, including video signals, can be used for long-term condition assessment, traffic-load regulation, emergency response, and seismic safety applications. Computer-based automated signal-analysis algorithms routinely process the incoming data and detect anomalies based on pre-defined response thresholds and more involved signal analysis techniques. Upon authentication, appropriate action may be authorized for maintenance, early warning, and/or emergency response. In such a strategy, data from thousands of sensors can be analyzed with near real-time and long-term assessment and decision-making implications. To address the above, a flexible and scalable software architecture/framework (e.g., for an entire highway system, or a portfolio of networked civil infrastructure) is being developed and implemented. This framework will network and integrate real-time heterogeneous sensor data, database and archiving systems, computer vision, data analysis and interpretation, physics-based numerical simulation of complex structural systems, visualization, reliability & risk analysis, and rational statistical decision-making procedures. Thus, within this framework, data is converted into information, information into knowledge, and knowledge into decisions at the end of the pipeline. Such a decision-support system contributes to the vitality of our economy, as rehabilitation, renewal, replacement, and/or maintenance of this infrastructure are estimated to require expenditures in the trillion-dollar range nationwide, including issues of homeland security and natural disaster mitigation. A pilot website (http://bridge.ucsd.edu/compositedeck.html) currently depicts some basic elements of the envisioned integrated health monitoring analysis framework.
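
    A minimal sketch of the threshold-based anomaly screening described above follows; the sample data, field layout and threshold value are illustrative assumptions, not part of the framework.

        # Flag sensor readings whose absolute value exceeds a response threshold.
        def screen(samples, threshold):
            return [(t, x) for t, x in samples if abs(x) > threshold]

        accel = [(0.00, 0.012), (0.01, 0.084), (0.02, 0.031)]  # (time s, accel g)
        alarms = screen(accel, threshold=0.05)
        if alarms:
            print(f"{len(alarms)} exceedance(s); first at t={alarms[0][0]:.2f} s")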

  3. Middleware for Plug and Play Integration of Heterogeneous Sensor Resources into the Sensor Web

    PubMed Central

    Toma, Daniel M.; Jirka, Simon; Del Río, Joaquín

    2017-01-01

    The study of global phenomena requires the combination of a considerable amount of data coming from different sources, acquired by different observation platforms and managed by institutions working in different scientific fields. Merging this data to provide extensive and complete data sets to monitor the long-term, global changes of our oceans is a major challenge. Data acquisition and data archival procedures usually vary significantly depending on the acquisition platform. This lack of standardization ultimately leads to information silos, preventing the data from being effectively shared across different scientific communities. In the past years, important steps have been taken to improve both standardization and interoperability, such as the Open Geospatial Consortium’s Sensor Web Enablement (SWE) framework. Within this framework, standardized models and interfaces to archive, access and visualize the data from heterogeneous sensor resources have been proposed. However, due to the wide variety of software and hardware architectures presented by marine sensors and marine observation platforms, there is still a lack of uniform procedures to integrate sensors into existing SWE-based data infrastructures. In this work, a framework aimed at enabling plug-and-play sensor integration into existing SWE-based data infrastructures is presented. First, the operations required to automatically identify, configure and operate a sensor are analysed. Then, the metadata required for these operations is structured in a standard way. Afterwards, a modular, plug-and-play, SWE-based acquisition chain is proposed. Finally, different use cases for this framework are presented. PMID:29244732
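
    For context, data published through an SWE-based infrastructure is typically retrieved via the OGC Sensor Observation Service (SOS). A minimal sketch of an SOS 2.0 GetObservation request follows; the endpoint, offering and observed property are placeholders, not identifiers from the paper.

        # Query an OGC SOS 2.0 endpoint for observations (placeholder endpoint).
        import requests

        params = {
            "service": "SOS",
            "version": "2.0.0",
            "request": "GetObservation",
            "offering": "urn:example:offering:ctd",       # placeholder offering
            "observedProperty": "sea_water_temperature",  # placeholder property
        }
        resp = requests.get("https://sos.example.org/service", params=params)
        resp.raise_for_status()
        print(resp.text[:200])  # O&M XML document containing the observations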

  4. Looking at CER from the managed care organization perspective.

    PubMed

    Cannon, H Eric

    2012-05-01

    The amount of available comparative effectiveness research (CER) is increasing, giving managed care organizations (MCOs) more information to use in decision making. However, MCOs may not be prepared to integrate this new and voluminous data into their current practices and policies. To describe ways that health care reform will affect MCO populations in the future, to examine examples of how MCOs have utilized CER data in the past, and to identify questions that MCOs will have to address as they integrate CER into future decision making. Unquestionably, health care reform will change the U.S. market. Millions more insured individuals will be making purchasing decisions. In addition, health care reform will mean more CER data will be available, affecting the decisions MCOs must make. In the past, MCOs may not have used CER as effectively as they could in making formulary and other policy decisions. However, there are examples that show how CER can be integrated effectively, such as Intermountain Healthcare's use of CER to create treatment guidelines, which have been shown to lower costs and improve delivery of care. In the future, MCOs will need to assess their own abilities to utilize CER, including their infrastructure of expertise, hardware, software, and protocols and processes. MCOs will also need to understand how pertinent CER is to their own needs, how it may affect benefit design, and how it will affect their customers' needs. Health care reform, and the resultant growth of CER, will have significant impact on MCOs, who will need to invest in better infrastructure and new understandings of a transforming market, changing customer bases, and evolving data.

  5. The co-integration analysis of relationship between urban infrastructure and urbanization - A case of Shanghai

    NASA Astrophysics Data System (ADS)

    Wang, Qianlu

    2017-10-01

    Urban infrastructure and urbanization influence each other, and quantitative analysis of the relationship between them plays a significant role in promoting social development. Based on data on infrastructure and the proportion of urban population in Shanghai from 1988 to 2013, the paper uses the econometric methods of co-integration testing, an error correction model and Granger causality testing to empirically analyze the relationship between Shanghai's infrastructure and urbanization. The results show that: 1) Shanghai's urban infrastructure has a positive effect on the development of urbanization and on narrowing the population gap; 2) when short-term fluctuations deviate from the long-term equilibrium, the system pulls the non-equilibrium state back to equilibrium with an adjustment intensity of 0.342670, and hospital infrastructure is not only an important variable for urban development in the short term but also a leading infrastructure in the process of urbanization in Shanghai; 3) there is Granger causality between road infrastructure and urbanization, while there is no Granger causality between water infrastructure and urbanization, and the hospital and school components of social infrastructure have unidirectional Granger causality with urbanization.
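
    The paper's toolchain (co-integration test, Granger causality test) can be reproduced with standard libraries. A minimal sketch with synthetic data follows; the series below are simulated stand-ins, not the Shanghai dataset.

        # Engle-Granger co-integration and Granger causality on synthetic series.
        import numpy as np
        from statsmodels.tsa.stattools import coint, grangercausalitytests

        rng = np.random.default_rng(0)
        trend = np.cumsum(rng.normal(size=26))   # shared stochastic trend, 1988-2013
        infrastructure = trend + rng.normal(scale=0.3, size=26)
        urbanization = 0.8 * trend + rng.normal(scale=0.3, size=26)

        # Small p-value suggests a long-run equilibrium between the two series.
        t_stat, p_value, _ = coint(infrastructure, urbanization)
        print(f"co-integration p-value: {p_value:.3f}")

        # Does infrastructure help predict urbanization? (prints test statistics)
        data = np.column_stack([urbanization, infrastructure])
        grangercausalitytests(data, maxlag=2)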

  6. Collecting, Integrating, and Disseminating Patient-Reported Outcomes for Research in a Learning Healthcare System

    PubMed Central

    Harle, Christopher A.; Lipori, Gloria; Hurley, Robert W.

    2016-01-01

    Introduction: Advances in health policy, research, and information technology have converged to increase the electronic collection and use of patient-reported outcomes (PROs). Therefore, it is important to share lessons learned in implementing PROs in research information systems. Case Description: The purpose of this case study is to describe a novel information system for electronic PROs and lessons learned in implementing that system to support research in an academic health center. The system incorporates freely available and commercial software and involves clinical and research workflows that support the collection, transformation, and research use of PRO data. The software and processes that comprise the system serve three main functions, (i) collecting electronic PROs in clinical care, (ii) integrating PRO data with non-patient generated clinical data, and (iii) disseminating data to researchers through the institution’s research informatics infrastructure, including the i2b2 (Informatics for Integrating Biology and the Bedside) system. Strategies: Our successful design and implementation was driven by three overarching strategies. First, we selected and implemented multiple interfaced technologies to support PRO collection, management, and research use. Second, we aimed to use standardized approaches to measuring PROs, sending PROs between systems, and disseminating PROs. Finally, we focused on using technologies and processes that aligned with existing clinical research information management strategies within our organization. Conclusion: These experiences and lessons may help future implementers and researchers enhance the scale and sustainable use of systems for research use of PROs. PMID:27563683

  7. Collecting, Integrating, and Disseminating Patient-Reported Outcomes for Research in a Learning Healthcare System.

    PubMed

    Harle, Christopher A; Lipori, Gloria; Hurley, Robert W

    2016-01-01

    Advances in health policy, research, and information technology have converged to increase the electronic collection and use of patient-reported outcomes (PROs). Therefore, it is important to share lessons learned in implementing PROs in research information systems. The purpose of this case study is to describe a novel information system for electronic PROs and lessons learned in implementing that system to support research in an academic health center. The system incorporates freely available and commercial software and involves clinical and research workflows that support the collection, transformation, and research use of PRO data. The software and processes that comprise the system serve three main functions, (i) collecting electronic PROs in clinical care, (ii) integrating PRO data with non-patient generated clinical data, and (iii) disseminating data to researchers through the institution's research informatics infrastructure, including the i2b2 (Informatics for Integrating Biology and the Bedside) system. Our successful design and implementation was driven by three overarching strategies. First, we selected and implemented multiple interfaced technologies to support PRO collection, management, and research use. Second, we aimed to use standardized approaches to measuring PROs, sending PROs between systems, and disseminating PROs. Finally, we focused on using technologies and processes that aligned with existing clinical research information management strategies within our organization. These experiences and lessons may help future implementers and researchers enhance the scale and sustainable use of systems for research use of PROs.

  8. BIM cost analysis of transport infrastructure projects

    NASA Astrophysics Data System (ADS)

    Volkov, Andrey; Chelyshkov, Pavel; Grossman, Y.; Khromenkova, A.

    2017-10-01

    The article describes a method for analysing the energy costs of transport infrastructure objects using BIM software. The paper considers several options for the orientation of a building using the SketchUp and IES VE software programs; these options make it possible to choose the best orientation of the building facades. Particular attention is given to the distribution of the temperature field in a cross-section of the wall, according to a calculation made in the ELCUT software. Issues related to the calculation of solar radiation penetration into a building and the selection of translucent structures are also considered. The article presents data on building codes relating to the transport sector, on the basis of which the calculations were made. The author emphasizes that BIM programs should be implemented and used in order to optimize the thermal behavior of a building and increase its energy efficiency using climatic data.

  9. Proceedings of the International Workshop on the Foundations of Service-Oriented Architecture (FSOA 2007)

    DTIC Science & Technology

    2008-06-01

    Excerpts from the proceedings (CMU/SEI-2008-SR-011): Service-oriented architecture (SOA) underpins service-provision software systems. In this position paper, we investigate an initial classification of challenge areas related to service orientation. Over the past decade we have witnessed a significant growth of software applications that are delivered in the form of services utilizing a network infrastructure.

  10. Software Attribution for Geoscience Applications in the Computational Infrastructure for Geodynamics

    NASA Astrophysics Data System (ADS)

    Hwang, L.; Dumit, J.; Fish, A.; Soito, L.; Kellogg, L. H.; Smith, M.

    2015-12-01

    Scientific software is largely developed by individual scientists and represents a significant intellectual contribution to the field. As the scientific culture and funding agencies move towards an expectation that software be open source, there is a corresponding need for mechanisms to cite software, both to provide credit and recognition to developers, and to aid in the discoverability of software and scientific reproducibility. We assess the geodynamic modeling community's current citation practices by examining more than 300 predominantly self-reported publications utilizing scientific software in the past 5 years that is available through the Computational Infrastructure for Geodynamics (CIG). Preliminary results indicate that authors cite and attribute software by citing (in rank order) peer-reviewed scientific publications, a user's manual, and/or a paper describing the software code. Attributions may be found directly in the text, in acknowledgements, in figure captions, or in footnotes. What is considered citable varies widely. Citations predominantly lack software version numbers or persistent identifiers with which to find the software package. Versioning may be implied through reference to a versioned user manual. Authors sometimes report code features used and whether they have modified the code. As an open-source community, CIG requests that researchers contribute their modifications to the repository. However, such modifications may not be contributed back to a repository code branch, decreasing the chances of discoverability and reproducibility. Survey results from CIG's Software Attribution for Geoscience Applications (SAGA) project suggest that a lack of knowledge, tools, and workflows for citing codes is a barrier to effectively implementing the emerging citation norms. Generated on-demand attributions on software landing pages and a prototype extensible plug-in to automatically generate attributions in codes are the first steps towards reproducibility.

  11. OpenFlyData: an exemplar data web integrating gene expression data on the fruit fly Drosophila melanogaster.

    PubMed

    Miles, Alistair; Zhao, Jun; Klyne, Graham; White-Cooper, Helen; Shotton, David

    2010-10-01

    Integrating heterogeneous data across distributed sources is a major requirement for in silico bioinformatics supporting translational research. For example, genome-scale data on patterns of gene expression in the fruit fly Drosophila melanogaster are widely used in functional genomic studies in many organisms to inform candidate gene selection and validate experimental results. However, current data integration solutions tend to be heavy weight, and require significant initial and ongoing investment of effort. Development of a common Web-based data integration infrastructure (a.k.a. data web), using Semantic Web standards, promises to alleviate these difficulties, but little is known about the feasibility, costs, risks or practical means of migrating to such an infrastructure. We describe the development of OpenFlyData, a proof-of-concept system integrating gene expression data on D. melanogaster, combining Semantic Web standards with light-weight approaches to Web programming based on Web 2.0 design patterns. To support researchers designing and validating functional genomic studies, OpenFlyData includes user-facing search applications providing intuitive access to and comparison of gene expression data from FlyAtlas, the BDGP in situ database, and FlyTED, using data from FlyBase to expand and disambiguate gene names. OpenFlyData's services are also openly accessible, and are available for reuse by other bioinformaticians and application developers. Semi-automated methods and tools were developed to support labour- and knowledge-intensive tasks involved in deploying SPARQL services. These include methods for generating ontologies and relational-to-RDF mappings for relational databases, which we illustrate using the FlyBase Chado database schema; and methods for mapping gene identifiers between databases. The advantages of using Semantic Web standards for biomedical data integration are discussed, as are open issues. In particular, although the performance of open source SPARQL implementations is sufficient to query gene expression data directly from user-facing applications such as Web-based data fusions (a.k.a. mashups), we found open SPARQL endpoints to be vulnerable to denial-of-service-type problems, which must be mitigated to ensure reliability of services based on this standard. These results are relevant to data integration activities in translational bioinformatics. The gene expression search applications and SPARQL endpoints developed for OpenFlyData are deployed at http://openflydata.org. FlyUI, a library of JavaScript widgets providing re-usable user-interface components for Drosophila gene expression data, is available at http://flyui.googlecode.com. Software and ontologies to support transformation of data from FlyBase, FlyAtlas, BDGP and FlyTED to RDF are available at http://openflydata.googlecode.com. SPARQLite, an implementation of the SPARQL protocol, is available at http://sparqlite.googlecode.com. All software is provided under the GPL version 3 open source license.
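
    The openly accessible SPARQL services described above can be queried with a few lines of client code. A minimal sketch using the SPARQLWrapper library follows; the endpoint path and query shape are assumptions for illustration, not the documented OpenFlyData interface.

        # Query a SPARQL endpoint for genes matching a label (placeholder endpoint).
        from SPARQLWrapper import SPARQLWrapper, JSON

        sparql = SPARQLWrapper("http://openflydata.org/sparql")  # placeholder path
        sparql.setQuery("""
            PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
            SELECT ?gene ?label WHERE {
                ?gene rdfs:label ?label .
                FILTER regex(?label, "^wg$", "i")
            } LIMIT 10
        """)
        sparql.setReturnFormat(JSON)
        for row in sparql.query().convert()["results"]["bindings"]:
            print(row["gene"]["value"], row["label"]["value"])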

  12. Power Systems Integration Laboratory | Energy Systems Integration Facility

    Science.gov Websites

    Describes the Power Systems Integration Laboratory at the Energy Systems Integration Facility, used for testing inverters and grid functions such as frequency-watt response and grid anomaly ride-through. Key infrastructure includes a grid simulator, load bank, Opal-RT real-time simulator, battery, inverter mounting racks, house power, a PV simulator, and data access.

  13. Calibration of radio-astronomical data on the cloud. LOFAR, the pathway to SKA

    NASA Astrophysics Data System (ADS)

    Sabater, J.; Sánchez-Expósito, S.; Garrido, J.; Ruiz, J. E.; Best, P. N.; Verdes-Montenegro, L.

    2015-05-01

    The radio interferometer LOFAR (LOw Frequency ARray) is now fully operational. This Square Kilometre Array (SKA) pathfinder allows observation of the sky at frequencies between 10 and 240 MHz, a relatively unexplored region of the spectrum. LOFAR is a software-defined telescope: the data is mainly processed using specialized software running on common computing facilities. That means that the capabilities of the telescope are virtually defined by software and mainly limited by the available computing power. However, the quantity of data produced can quickly reach huge volumes (several petabytes per day). After correlation and pre-processing of the data in a dedicated cluster, the final dataset is handed to the user (typically several terabytes). The calibration of these data requires a powerful computing facility in which the specific state-of-the-art software, under heavy continuous development, can be easily installed and updated. That makes this case a perfect candidate for a cloud infrastructure, which adds the advantages of an on-demand, flexible solution. We present our approach to the calibration of LOFAR data using Ibercloud, the cloud infrastructure provided by Ibergrid. With the calibration workflow adapted to the cloud, we can explore calibration strategies for the SKA and show how private or commercial cloud infrastructures (Ibercloud, Amazon EC2, Google Compute Engine, etc.) can help to solve the problems with big datasets that will be prevalent in the future of astronomy.

  14. A Grid Infrastructure for Supporting Space-based Science Operations

    NASA Technical Reports Server (NTRS)

    Bradford, Robert N.; Redman, Sandra H.; McNair, Ann R. (Technical Monitor)

    2002-01-01

    Emerging technologies for computational grid infrastructures have the potential to revolutionize the way computers are used in all aspects of our lives. Computational grids are currently being implemented to provide large-scale, dynamic, and secure research and engineering environments based on standards and next-generation reusable software, enabling greater science and engineering productivity through shared resources and distributed computing at less cost than traditional architectures. Combined with the emerging technologies of high-performance networks, grids provide researchers, scientists and engineers the first real opportunity for an effective distributed collaborative environment with access to resources such as computational and storage systems, instruments, and software tools and services for the most computationally challenging applications.

  15. NSLS-II HIGH LEVEL APPLICATION INFRASTRUCTURE AND CLIENT API DESIGN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, G.; Yang, L.

    2011-03-28

    The beam commissioning software framework of the NSLS-II project adopts a client/server-based architecture to replace the more traditional monolithic high-level application approach. It is an open-structure platform, and we aim to provide a narrow API set for client applications. With this narrow API, existing applications developed in different languages under different architectures can be ported to our platform with small modifications. This paper describes the system infrastructure design, the client API, system integration, and the latest progress. As a new third-generation synchrotron light source with ultra-low emittance, NSLS-II imposes new requirements and challenges in controlling and manipulating the beam. A use case study and a theoretical analysis have been performed to clarify the requirements and challenges for the high-level application (HLA) software environment. To satisfy those requirements and challenges, an adequate system architecture for the software framework is critical for beam commissioning, study and operation. The existing traditional approaches are self-consistent and monolithic. Some of them have adopted the concept of a middle layer to separate low-level hardware processing from numerical algorithm computing, physics modelling, data manipulation, plotting, and error handling. However, none of the existing approaches can satisfy the requirements. A new design has been proposed by introducing service-oriented architecture technology. The HLA is a combination of tools for accelerator physicists and operators, as in the traditional approach; at NSLS-II, these include monitoring applications and control routines. A scripting environment is very important for the latter part, and both parts are designed based on a common set of APIs. Physicists and operators are the users of these APIs, while control system engineers and a few accelerator physicists are their developers. With our client/server-based approach, we leave how to retrieve information to the developers of the APIs, and how to use them to form a physics application to the users. For example, how the channels are related to a magnet, and what the current real-time setting of a magnet is in physics units, are internals of the APIs; a routine measuring chromaticities is a user of the APIs. All users of the APIs work with magnet and instrument names in physics units, so low-level communications in current or voltage units are minimized. In this paper, we discuss the recent progress of our infrastructure development and the client API.

  16. UAS Integration in the NAS Project: Integrated Test and LVC Infrastructure

    NASA Technical Reports Server (NTRS)

    Murphy, Jim; Hoang, Ty

    2015-01-01

    Overview presentation of the Integrated Test and Evaluation sub-project of the Unmanned Aircraft System (UAS) in the National Airspace System (NAS). The emphasis of the presentation is the Live, Virtual, and Constructive (LVC) system (a broadly used name for classifying modeling and simulation) infrastructure and use of external assets and connection.

  17. SBSI: an extensible distributed software infrastructure for parameter estimation in systems biology

    PubMed Central

    Adams, Richard; Clark, Allan; Yamaguchi, Azusa; Hanlon, Neil; Tsorman, Nikos; Ali, Shakir; Lebedeva, Galina; Goltsov, Alexey; Sorokin, Anatoly; Akman, Ozgur E.; Troein, Carl; Millar, Andrew J.; Goryanin, Igor; Gilmore, Stephen

    2013-01-01

    Summary: Complex computational experiments in Systems Biology, such as fitting model parameters to experimental data, can be challenging to perform. Not only do they frequently require a high level of computational power, but the software needed to run the experiment needs to be usable by scientists with varying levels of computational expertise, and modellers need to be able to obtain up-to-date experimental data resources easily. We have developed a software suite, the Systems Biology Software Infrastructure (SBSI), to facilitate the parameter-fitting process. SBSI is a modular software suite composed of three major components: SBSINumerics, a high-performance library containing parallelized algorithms for performing parameter fitting; SBSIDispatcher, a middleware application to track experiments and submit jobs to back-end servers; and SBSIVisual, an extensible client application used to configure optimization experiments and view results. Furthermore, we have created a plugin infrastructure to enable project-specific modules to be easily installed. Plugin developers can take advantage of the existing user-interface and application framework to customize SBSI for their own uses, facilitated by SBSI’s use of standard data formats. Availability and implementation: All SBSI binaries and source-code are freely available from http://sourceforge.net/projects/sbsi under an Apache 2 open-source license. The server-side SBSINumerics runs on any Unix-based operating system; both SBSIVisual and SBSIDispatcher are written in Java and are platform independent, allowing use on Windows, Linux and Mac OS X. The SBSI project website at http://www.sbsi.ed.ac.uk provides documentation and tutorials. Contact: stg@inf.ed.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23329415
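
    To make the parameter-fitting task concrete, the sketch below fits a single decay-rate parameter of a toy model to noisy data by minimizing a sum of squared errors. It only illustrates the optimization problem that SBSI parallelizes; SBSI's own numerics live in SBSINumerics, and SciPy is used here purely for brevity.

        # Fit the decay rate k of the toy model y = 5*exp(-k t) to noisy data.
        import numpy as np
        from scipy.optimize import minimize_scalar

        t = np.linspace(0, 10, 50)
        observed = 5.0 * np.exp(-0.7 * t) + np.random.default_rng(1).normal(0, 0.1, 50)

        def sse(k):
            """Sum of squared errors between the model and the observations."""
            return float(np.sum((5.0 * np.exp(-k * t) - observed) ** 2))

        fit = minimize_scalar(sse, bounds=(0.0, 5.0), method="bounded")
        print(f"estimated decay rate: {fit.x:.3f}")  # close to the true 0.7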

  18. The optical antenna system design research on earth integrative network laser link in the future

    NASA Astrophysics Data System (ADS)

    Liu, Xianzhu; Fu, Qiang; He, Jingyi

    2014-11-01

    An Earth-integrated information network can acquire, transmit and process spatial information in real time using carriers based on space platforms, such as geostationary or low-orbit satellites, stratospheric balloons, and unmanned or manned aircraft. It is an essential infrastructure for China to construct. Such a network can support not only the highly dynamic, real-time transmission of broadband Earth observation data, but also the reliable transmission, over ultra-long ranges and large delays, required for deep space exploration, and it can provide services for major applications in ocean voyaging, emergency rescue, navigation and positioning, air transportation, aerospace measurement and control, and other fields. The network can thus extend human scientific, cultural and productive activities into space, the oceans and even deep space, making it a global research focus. The laser communication link network is an important component and means of communication in the Earth-integrated information network, and optimizing the structure and designing the optical antenna system is considered one of the key technical difficulties for a space laser communication link network. Therefore, this paper presents an optical antenna system that can be used in such a network. The antenna is composed of multiple mirrors stitched together on a rotational-paraboloid substrate. The optical structure of the stitched multi-mirror system was simulated in the LightTools software. A Cassegrain form is used for the relay optical system, whose structural parameters were optimized and designed with the Zemax optical design software. The results of the design optimization and simulation indicate that the antenna system has good optical performance and a certain reference value for engineering, and that it can provide effective technical support for realizing the interconnection of Earth-integrated laser link information networks in the future.

  19. Teledesic Global Wireless Broadband Network: Space Infrastructure Architecture, Design Features and Technologies

    NASA Technical Reports Server (NTRS)

    Stuart, James R.

    1995-01-01

    The Teledesic satellites are a new class of small satellites which demonstrate the important commercial benefits of using technologies developed for other purposes by U.S. National Laboratories. The Teledesic satellite architecture, subsystem design features, and new technologies are described. The new Teledesic satellite manufacturing, integration, and test approaches which use modern high volume production techniques and result in surprisingly low space segment costs are discussed. The constellation control and management features and attendant software architecture features are addressed. After briefly discussing the economic and technological impact on the USA commercial space industries of the space communications revolution and such large constellation projects, the paper concludes with observations on the trend toward future system architectures using networked groups of much smaller satellites.

  20. FELIX: a PCIe based high-throughput approach for interfacing front-end and trigger electronics in the ATLAS Upgrade framework

    NASA Astrophysics Data System (ADS)

    Anderson, J.; Bauer, K.; Borga, A.; Boterenbrood, H.; Chen, H.; Chen, K.; Drake, G.; Dönszelmann, M.; Francis, D.; Guest, D.; Gorini, B.; Joos, M.; Lanni, F.; Lehmann Miotto, G.; Levinson, L.; Narevicius, J.; Panduro Vazquez, W.; Roich, A.; Ryu, S.; Schreuder, F.; Schumacher, J.; Vandelli, W.; Vermeulen, J.; Whiteson, D.; Wu, W.; Zhang, J.

    2016-12-01

    The ATLAS Phase-I upgrade (2019) requires a Trigger and Data Acquisition (TDAQ) system able to trigger and record data from up to three times the nominal LHC instantaneous luminosity. The Front-End LInk eXchange (FELIX) system provides an infrastructure to achieve this in a scalable, detector agnostic and easily upgradeable way. It is a PC-based gateway, interfacing custom radiation tolerant optical links from front-end electronics, via PCIe Gen3 cards, to a commodity switched Ethernet or InfiniBand network. FELIX enables reducing custom electronics in favour of software running on commercial servers. The FELIX system, the design of the PCIe prototype card and the integration test results are presented in this paper.
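    A quick bandwidth budget shows why a PCIe Gen3 card is a plausible aggregation point for many front-end optical links; the link count and per-link rate below are illustrative assumptions (a 4.8 Gb/s GBT-style link is typical of this class of front-end), not ATLAS design figures.

    ```python
    # Rough throughput budget for a FELIX-like PCIe gateway.
    GT_PER_LANE = 8e9        # PCIe Gen3 raw rate: 8 GT/s per lane
    ENCODING = 128 / 130     # 128b/130b line-encoding efficiency
    LANES = 16

    pcie_payload = GT_PER_LANE * ENCODING * LANES / 8   # bytes/s, ignoring protocol overhead

    n_links = 24             # assumed number of front-end optical links
    link_rate = 4.8e9 / 8    # assumed 4.8 Gb/s per link, in bytes/s
    frontend_total = n_links * link_rate

    print(f"PCIe Gen3 x16 payload ceiling: {pcie_payload / 1e9:.1f} GB/s")
    print(f"Aggregate front-end input:     {frontend_total / 1e9:.1f} GB/s")
    print("fits" if frontend_total < pcie_payload else "exceeds PCIe capacity")
    ```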

  1. NASA Automated Rendezvous and Capture Review. Executive summary

    NASA Technical Reports Server (NTRS)

    1991-01-01

    In support of the Cargo Transfer Vehicle (CTV) Definition Studies in FY-92, the Advanced Program Development division of the Office of Space Flight at NASA Headquarters conducted an evaluation and review of the United States capabilities and state-of-the-art in Automated Rendezvous and Capture (AR&C). This review was held in Williamsburg, Virginia on 19-21 Nov. 1991 and included over 120 attendees from U.S. government organizations, industries, and universities. One hundred abstracts were submitted to the organizing committee for consideration. Forty-two were selected for presentation. The review was structured to include five technical sessions. Forty-two papers addressed topics in the five categories below: (1) hardware systems and components; (2) software systems; (3) integrated systems; (4) operations; and (5) supporting infrastructure.

  2. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Mid-year report FY17 Q2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreland, Kenneth D.; Pugmire, David; Rogers, David

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  3. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Year-end report FY17.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreland, Kenneth D.; Pugmire, David; Rogers, David

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  4. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem. Mid-year report FY16 Q2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreland, Kenneth D.; Sewell, Christopher; Childs, Hank

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  5. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Year-end report FY15 Q4.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreland, Kenneth D.; Sewell, Christopher; Childs, Hank

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  6. Visual exploration and analysis of ionospheric scintillation monitoring data: The ISMR Query Tool

    NASA Astrophysics Data System (ADS)

    Vani, Bruno César; Shimabukuro, Milton Hirokazu; Galera Monico, João Francisco

    2017-07-01

    Ionospheric scintillations are rapid variations in the phase and/or amplitude of a radio signal as it passes through ionospheric plasma irregularities. The ionosphere is a specific layer of the Earth's atmosphere located approximately between 50 km and 1000 km above the Earth's surface. As Global Navigation Satellite Systems (GNSS) - such as GPS, Galileo, BDS and GLONASS - use radio signals, these variations degrade their positioning service quality. Due to its location, Brazil is one of the places most affected by scintillation in the world. For that reason, ionosphere monitoring stations have been deployed over Brazilian territory since 2011 through cooperative projects between several institutions in Europe and Brazil. These monitoring stations compose a network that generates a large amount of monitoring data every day. GNSS receivers deployed at the stations - named Ionospheric Scintillation Monitor Receivers (ISMR) - provide scintillation indices and related signal metrics for the available satellites dedicated to satellite-based navigation and positioning services. With this monitoring infrastructure, more than ten million observation values are generated and stored every day. Extracting the relevant information from this huge amount of data is a hard process that requires the combined expertise of computer scientists and geoscientists. This paper describes the concepts, design, and implementation aspects of the software that has been supporting research on ISMR data - the so-called ISMR Query Tool. Usability and other aspects are also presented via application examples. This web-based software has been designed and developed to enable insights into the huge amount of ISMR data fetched every day, on an integrated platform. The software applies and adapts time-series mining and information visualization techniques to extend the possibilities for exploring and analyzing ISMR data. The software is available to the scientific community through the World Wide Web, therefore constituting an analysis infrastructure that complements the monitoring one, providing support for researching ionospheric scintillation in the GNSS context. Interested researchers can access the functionalities without cost at http://is-cigala-calibra.fct.unesp.br/, upon online request to the Space Geodesy Study Group of UNESP - Univ Estadual Paulista at Presidente Prudente.
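    The style of time-series exploration the tool supports can be pictured with a short pandas sketch; the CSV layout and column names (an S4 amplitude index, a constellation label) are hypothetical stand-ins, not the tool's actual export format.

    ```python
    # Hypothetical exploration of ISMR-style monitoring data with pandas.
    import pandas as pd

    df = pd.read_csv("ismr_station.csv", parse_dates=["timestamp"])

    # Keep only strong-scintillation epochs (S4 amplitude index above 0.5)
    strong = df[df["s4"] > 0.5]

    # Daily count and peak of strong-scintillation samples per satellite system
    summary = (strong
               .assign(day=strong["timestamp"].dt.date)
               .groupby(["day", "constellation"])["s4"]
               .agg(["count", "max"]))
    print(summary.head())
    ```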

  7. Software Defined Radio Architecture Contributions to Next Generation Space Communications

    NASA Technical Reports Server (NTRS)

    Kacpura, Thomas J.; Eddy, Wesley M.; Smith, Carl R.; Liebetreu, John

    2015-01-01

    Space communications architecture concepts, comprising the elements of the system, the interactions among them, and the principles that govern their development, are essential factors in developing National Aeronautics and Space Administration (NASA) future exploration and science missions. Accordingly, vital architectural attributes encompass flexibility, the extensibility to insert future capabilities, and the ability to evolve toward interoperability with other current and future systems. Space communications architectures and technologies for this century must satisfy a growing set of requirements, including those for Earth sensing, collaborative observation missions, robotic scientific missions, human missions for exploration of the Moon and Mars where surface activities require supporting communications, and in-space observatories for observing the Earth, other star systems, and the universe. An advanced, integrated communications infrastructure will enable the reliable, multipoint, high-data-rate capabilities needed on demand to provide continuous, maximum coverage for areas of concentrated activity. Importantly, the cost/value proposition of the future architecture must be an integral part of its design; an affordable and sustainable architecture is indispensable within anticipated future budget environments. Effective architecture design informs decision makers with insight into the capabilities needed to efficiently satisfy the demanding space-communication requirements of future missions and to formulate appropriate requirements. A driving requirement for the architecture is the extensibility to address new requirements and provide low-cost on-ramps for insertion of new capabilities, ensuring graceful growth as new functionality and new technologies are infused into the network infrastructure. In addition to extensibility, another key architectural attribute is the space communication equipment's interoperability with other NASA communications systems, as well as with the communications and navigation systems operated by international space agencies and civilian and government agencies. In this paper, we review the philosophies, technologies, architectural attributes, mission services, and communications capabilities that form the structure of candidate next-generation integrated communication architectures for space communications and navigation. A key area that this paper explores is the development and operation of the software defined radio for the NASA Space Communications and Navigation (SCaN) Testbed currently on the International Space Station (ISS). Lessons learned from development and operation feed back into the communications architecture. The reconfigurability these radios provide changes the way operations are done and must be taken into account. Quantifying the impact on the NASA Space Telecommunications Radio System (STRS) software defined radio architecture provides feedback to keep the standard useful and up to date. NASA is not the only customer of these radios: software defined radios are developed for other applications, and taking advantage of these developments promotes an architecture that is cost effective and sustainable. Developments in areas such as updated operating environments, higher data rates, networking, and security can be leveraged. The ability to sustain an architecture that uses radios for multiple markets can lower costs and keep new technology infused.

  8. Service-oriented infrastructure for scientific data mashups

    NASA Astrophysics Data System (ADS)

    Baru, C.; Krishnan, S.; Lin, K.; Moreland, J. L.; Nadeau, D. R.

    2009-12-01

    An important challenge in informatics is the development of concepts and corresponding architecture and tools to assist scientists with their data integration tasks. A typical Earth Science data integration request may be expressed, for example, as “For a given region (i.e. lat/long extent, plus depth), return a 3D structural model with accompanying physical parameters of density, seismic velocities, geochemistry, and geologic ages, using a cell size of 10km.” Such requests create “mashups” of scientific data. Currently, such integration is hand-crafted and depends heavily upon a scientist’s intimate knowledge of how to process, interpret, and integrate data from individual sources. In most cases, the ultimate “integration” is performed by overlaying output images from individual processing steps using image manipulation software such as, say, Adobe Photoshop—leading to “Photoshop science”, where it is neither easy to repeat the integration steps nor to share the data mashup. As a result, scientists share only the final images and not the mashup itself. A more capable information infrastructure is needed to support the authoring and sharing of scientific data mashups. The infrastructure must include services for data discovery, access, and transformation and should be able to create mashups that are interactive, allowing users to probe and manipulate the data and follow its provenance. We present an architectural framework based on a service-oriented architecture for scientific data mashups in a distributed environment. The framework includes services for Data Access, Data Modeling, and Data Interaction. The Data Access services leverage capabilities for discovery and access to distributed data resources provided by efforts such as GEON and the EarthScope Data Portal, and services for federated metadata catalogs under development by projects like the Geosciences Information Network (GIN). The Data Modeling services provide 2D, 3D, and 4D modeling services based on standards such as WFS, WMS, WCS, and GeoSciML that allow integration of disparate data in a distributed, Web-based environment. Along these lines, we introduce the notion of a Web Volume Service (WVS) for modeling and manipulating 3D data. The Data Interaction Services provide services for rich interactions with the integrated 3D data. To provide efficient interactions with large-scale data in a distributed environment the architecture must include capabilities for caching and reuse of data, use of multi-level indexing, and the ability to orchestrate and coordinate execution of data processing and transformation routines as part of the data access and integration steps. The data mashup infrastructure is based on a service-oriented architecture. A range of alternatives is available for implementing these mashup services in a scalable fashion, using the cloud computing paradigm. We will describe the tradeoffs of each approach and provide an evaluation of which options are best suited to which types of services. We will describe security, privacy, performance, and price/performance issues and considerations in implementing services on dedicated servers versus private as well as public clouds, including systems such as Amazon Web Services.
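    As one concrete ingredient of such a mashup, a standards-based service can be queried programmatically rather than by hand; the sketch below uses the owslib WFS client, with a placeholder service URL and layer name standing in for a real data source.

    ```python
    # Fetching one mashup ingredient from an OGC WFS endpoint with owslib.
    # The URL, layer name, and bounding box are placeholders.
    from owslib.wfs import WebFeatureService

    wfs = WebFeatureService(url="https://example.org/geoserver/wfs", version="1.1.0")

    # Request features for a lat/long extent: the first step of a data mashup
    response = wfs.getfeature(
        typename="geology:structural_model",   # placeholder layer name
        bbox=(-120.0, 34.0, -118.0, 36.0),     # minx, miny, maxx, maxy
        outputFormat="application/json",
    )
    data = response.read()
    print("fetched", len(data), "bytes of GeoJSON")
    ```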

  9. Information technologies in optimization process of monitoring of software and hardware status

    NASA Astrophysics Data System (ADS)

    Nikitin, P. V.; Savinov, A. N.; Bazhenov, R. I.; Ryabov, I. V.

    2018-05-01

    The article describes a model of a hardware and software monitoring system for a large company that provides customers with software as a service (a SaaS solution) using information technology. The main functions of the monitoring system are: provision of up-to-date data for analyzing the state of the IT infrastructure, rapid detection of faults, and their effective elimination. The main risks associated with the provision of these services are described, comparative characteristics of the software are given, and the authors' methods for monitoring the status of software and hardware are proposed.
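    A minimal sketch of the kind of polling health check such a monitoring system automates is shown below; the endpoints, timeout, and status logic are hypothetical and chosen only for illustration.

    ```python
    # Toy health-check poller (hypothetical endpoints and thresholds).
    import time
    import requests

    ENDPOINTS = ["https://app.example.com/health", "https://db.example.com/health"]

    def poll_once(timeout_s=5.0):
        for url in ENDPOINTS:
            try:
                r = requests.get(url, timeout=timeout_s)
                status = "OK" if r.status_code == 200 else f"DEGRADED ({r.status_code})"
            except requests.RequestException as exc:
                # A real system would raise an alert and start fault isolation here
                status = f"FAULT ({exc.__class__.__name__})"
            print(f"{time.strftime('%H:%M:%S')} {url}: {status}")

    poll_once()
    ```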

  10. Integrating the Allen Brain Institute Cell Types Database into Automated Neuroscience Workflow.

    PubMed

    Stockton, David B; Santamaria, Fidel

    2017-10-01

    We developed software tools to download, extract features, and organize the Cell Types Database from the Allen Brain Institute (ABI) in order to integrate its whole cell patch clamp characterization data into the automated modeling/data analysis cycle. To expand the potential user base we employed both Python and MATLAB. The basic set of tools downloads selected raw data and extracts cell, sweep, and spike features, using ABI's feature extraction code. To facilitate data manipulation we added a tool to build a local specialized database of raw data plus extracted features. Finally, to maximize automation, we extended our NeuroManager workflow automation suite to include these tools plus a separate investigation database. The extended suite allows the user to integrate ABI experimental and modeling data into an automated workflow deployed on heterogeneous computer infrastructures, from local servers, to high performance computing environments, to the cloud. Since our approach is focused on workflow procedures our tools can be modified to interact with the increasing number of neuroscience databases being developed to cover all scales and properties of the nervous system.
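    For orientation, here is a short sketch of pulling Cell Types data through the allensdk package, which ABI provides and which tools of this kind build on; treat the exact calls as an approximation of the documented API rather than the authors' own code.

    ```python
    # Downloading ABI Cell Types data via allensdk (approximate usage).
    from allensdk.core.cell_types_cache import CellTypesCache

    ctc = CellTypesCache(manifest_file="cell_types/manifest.json")

    cells = ctc.get_cells()                     # metadata for all characterized cells
    specimen_id = cells[0]["id"]

    data_set = ctc.get_ephys_data(specimen_id)  # fetches the NWB file on first use
    sweep = data_set.get_sweep(data_set.get_sweep_numbers()[0])
    print("sampling rate (Hz):", sweep["sampling_rate"])
    ```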

  11. Linked-List-Based Multibody Dynamics (MBDyn) Engine

    NASA Technical Reports Server (NTRS)

    MacLean, John; Brain, Thomas; Quiocho, Leslie; Huynh, An; Ghosh, Tushar

    2012-01-01

    This new release of MBDyn is a software engine that calculates the dynamics states of kinematic, rigid, or flexible multibody systems. An MBDyn multibody system may consist of multiple groups of articulated chains, trees, or closed-loop topologies. Transient topologies are handled through conservation of energy and momentum. The solution for rigid-body systems is exact, and several configurable levels of nonlinear term fidelity are available for flexible dynamics systems. The algorithms have been optimized for efficiency and can be used for both non-real-time (NRT) and real-time (RT) simulations. Interfaces are currently compatible with NASA's Trick Simulation Environment. This new release represents a significant advance in capability and ease of use. The two most significant new additions are an application programming interface (API) that clarifies and simplifies use of MBDyn, and a linked-list infrastructure that allows a single MBDyn instance to propagate an arbitrary number of interacting groups of multibody topologies. MBDyn calculates state and state derivative vectors for integration using an external integration routine. A Trick-compatible interface is provided for initialization, data logging, integration, and input/output.
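    The "external integrator" pattern described here, where the engine supplies state derivatives and the caller integrates them, can be sketched generically; the DynamicsEngine class below is invented for illustration and is not MBDyn's API.

    ```python
    # Generic illustration of the engine/integrator split (hypothetical interface).
    import numpy as np

    class DynamicsEngine:
        """Stand-in for a multibody engine: dx/dt for a 1-DOF damped oscillator."""
        def derivatives(self, t, x):
            pos, vel = x
            return np.array([vel, -4.0 * pos - 0.2 * vel])

    def rk4_step(engine, t, x, dt):
        # Classic fourth-order Runge-Kutta step using the engine's derivatives
        k1 = engine.derivatives(t, x)
        k2 = engine.derivatives(t + dt / 2, x + dt / 2 * k1)
        k3 = engine.derivatives(t + dt / 2, x + dt / 2 * k2)
        k4 = engine.derivatives(t + dt, x + dt * k3)
        return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    engine, x, dt = DynamicsEngine(), np.array([1.0, 0.0]), 0.01
    for step in range(1000):
        x = rk4_step(engine, step * dt, x, dt)
    print("state after 10 s:", x)
    ```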

  12. Technical note: Efficient online source identification algorithm for integration within a contamination event management system

    NASA Astrophysics Data System (ADS)

    Deuerlein, Jochen; Meyer-Harries, Lea; Guth, Nicolai

    2017-07-01

    Drinking water distribution networks are part of critical infrastructures and are exposed to a number of different risks. One of them is the risk of unintended or deliberate contamination of the drinking water within the pipe network. Over the past decade research has focused on the development of new sensors that are able to detect malicious substances in the network and early warning systems for contamination. In addition to the optimal placement of sensors, the automatic identification of the source of a contamination is an important component of an early warning and event management system for security enhancement of water supply networks. Many publications deal with the algorithmic development; however, only little information exists about the integration within a comprehensive real-time event detection and management system. In the following the analytical solution and the software implementation of a real-time source identification module and its integration within a web-based event management system are described. The development was part of the SAFEWATER project, which was funded under FP 7 of the European Commission.
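    A highly simplified picture of the source-identification idea (not the SAFEWATER algorithm itself) is that candidate sources are nodes whose water travel time to the alarming sensor matches the observed detection delay; the toy network below makes this concrete with networkx.

    ```python
    # Simplified backward source search on a pipe graph (illustrative only).
    import networkx as nx

    G = nx.DiGraph()
    # Edge attribute: water travel time along the pipe, in minutes (toy values)
    G.add_weighted_edges_from(
        [("tank", "a", 10), ("a", "b", 5), ("a", "c", 8),
         ("b", "sensor", 7), ("c", "sensor", 4)],
        weight="travel_time",
    )

    detection_delay = 12   # minutes between assumed injection and sensor alarm
    tolerance = 1.0

    # Travel times from every node *to* the sensor = shortest paths on the reverse graph
    tt = nx.single_source_dijkstra_path_length(G.reverse(), "sensor", weight="travel_time")
    candidates = [n for n, t in tt.items()
                  if abs(t - detection_delay) <= tolerance and n != "sensor"]
    print("candidate injection nodes:", candidates)   # -> ['a']
    ```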

  13. Community-driven computational biology with Debian Linux

    PubMed Central

    2010-01-01

    Background The Open Source movement and its technologies are popular in the bioinformatics community because they provide freely available tools and resources for research. In order to feed the steady demand for updates on software and associated data, a service infrastructure is required for sharing and providing these tools to heterogeneous computing environments. Results The Debian Med initiative provides ready and coherent software packages for medical informatics and bioinformatics. These packages can be used together in Taverna workflows via the UseCase plugin to manage execution on local or remote machines. If such packages are available in cloud computing environments, the underlying hardware and the analysis pipelines can be shared along with the software. Conclusions Debian Med closes the gap between developers and users. It provides a simple method for offering new releases of software and data resources, thus provisioning a local infrastructure for computational biology. For geographically distributed teams it can ensure they are working on the same versions of tools, in the same conditions. This contributes to the world-wide networking of researchers. PMID:21210984

  14. Designing software for operational decision support through coloured Petri nets

    NASA Astrophysics Data System (ADS)

    Maggi, F. M.; Westergaard, M.

    2017-05-01

    Operational support provides, during the execution of a business process, replies to questions such as 'how do I end the execution of the process in the cheapest way?' and 'is my execution compliant with some expected behaviour?' These questions may be asked several times during a single execution and, to answer them, dedicated software components (the so-called operational support providers) need to be invoked. Therefore, an infrastructure is needed to handle multiple providers, maintain data between queries about the same execution, and discard information when it is no longer needed. In this paper, we use coloured Petri nets (CPNs) to model and analyse software implementing such an infrastructure. This analysis is needed to clarify the requirements before implementation and to guarantee that the resulting software is correct. To this end, we present techniques for representing and analysing state spaces with 250 million states on an ordinary PC. We show how the specified requirements have been implemented as a plug-in of the process mining tool ProM and how the operational support in ProM can be used in combination with an existing operational support provider.
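    The infrastructure's responsibilities (dispatching queries to multiple providers, keeping per-execution state between queries, and discarding it afterwards) can be sketched as follows; all class and method names are invented for illustration and do not reflect the ProM plug-in's API.

    ```python
    # Hypothetical sketch of the provider/infrastructure split analysed in the paper.
    class CheapestCompletionProvider:
        def answer(self, session, query):
            # A real provider would search the process model; here we just echo state.
            return f"seen {len(session['events'])} events; recommend cheapest path"

    class OperationalSupport:
        def __init__(self, providers):
            self.providers = providers
            self.sessions = {}            # execution id -> accumulated state

        def notify(self, exec_id, event):
            self.sessions.setdefault(exec_id, {"events": []})["events"].append(event)

        def query(self, exec_id, query):
            session = self.sessions[exec_id]
            return [p.answer(session, query) for p in self.providers]

        def close(self, exec_id):
            del self.sessions[exec_id]    # discard state when no longer needed

    support = OperationalSupport([CheapestCompletionProvider()])
    support.notify("case-1", "register request")
    support.notify("case-1", "check credit")
    print(support.query("case-1", "how do I finish cheaply?"))
    support.close("case-1")
    ```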

  15. 2014 Earth System Grid Federation and Ultrascale Visualization Climate Data Analysis Tools Conference Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Dean N.

    2015-01-27

    The climate and weather data science community met December 9–11, 2014, in Livermore, California, for the fourth annual Earth System Grid Federation (ESGF) and Ultrascale Visualization Climate Data Analysis Tools (UV-CDAT) Face-to-Face (F2F) Conference, hosted by the Department of Energy, National Aeronautics and Space Administration, National Oceanic and Atmospheric Administration, the European Infrastructure for the European Network of Earth System Modelling, and the Australian Department of Education. Both ESGF and UV-CDAT remain global collaborations committed to developing a new generation of open-source software infrastructure that provides distributed access and analysis to simulated and observed data from the climate and weather communities. The tools and infrastructure created under these international multi-agency collaborations are critical to understanding extreme weather conditions and long-term climate change. In addition, the F2F conference fosters a stronger climate and weather data science community and facilitates a stronger federated software infrastructure. The 2014 F2F conference detailed the progress of ESGF, UV-CDAT, and other community efforts over the year and set new priorities and requirements for existing and impending national and international community projects, such as the Coupled Model Intercomparison Project Phase Six. Specifically discussed at the conference were project capabilities and enhancement needs for data distribution, analysis, visualization, hardware and network infrastructure, standards, and resources.

  16. Status report of the SRT radiotelescope control software: the DISCOS project

    NASA Astrophysics Data System (ADS)

    Orlati, A.; Bartolini, M.; Buttu, M.; Fara, A.; Migoni, C.; Poppi, S.; Righini, S.

    2016-08-01

    The Sardinia Radio Telescope (SRT) is a 64-m fully-steerable radio telescope. It is provided with an active surface to correct for gravitational deformations, allowing observations from 300 MHz to 100 GHz. At present, three receivers are available: a coaxial LP-band receiver (305-410 MHz and 1.5-1.8 GHz), a C-band receiver (5.7-7.7 GHz) and a 7-feed K-band receiver (18-26.5 GHz). Several back-ends are also available in order to perform the different data acquisition and analysis procedures requested by scientific projects. The design and development of the SRT control software started in 2004, and now belongs to a wider project called DISCOS (Development of the Italian Single-dish COntrol System), which provides a common infrastructure to the three Italian radio telescopes (Medicina, Noto and SRT dishes). DISCOS is based on the Alma Common Software (ACS) framework, and currently consists of more than 500k lines of code. It is organized in a common core and three specific product lines, one for each telescope. Recent developments, carried out after the conclusion of the technical commissioning of the instrument (October 2013), consisted of the addition of several new features in many parts of the observing pipeline, spanning from motion control to the digital back-ends for data acquisition and data formatting; we briefly describe such improvements. More importantly, in the last two years we have supported the astronomical validation of the SRT radio telescope, leading to the opening of the first public call for proposals in late 2015. During this period, while assisting both the engineering and the scientific staff, we employed the control software extensively and were able to test all of its features: in this process we received our first feedback from the users and could verify how the system performed in a real-life scenario, drawing the first conclusions about overall system stability and performance. We examine how the system behaves in terms of network load and system load, how it reacts to failures and errors, and which components and services seem to be the most critical parts of our architecture, showing how the ACS framework impacts these aspects. Moreover, the exposure to public utilization has highlighted the major flaws in our development and software management process, which had to be tuned and improved in order to achieve faster release cycles in response to user feedback, and safer deployment operations. In this regard we show how the introduction of testing practices, along with continuous integration, helped us meet higher quality standards. Having identified the most critical aspects of our software, we conclude by showing our intentions for the future development of DISCOS, both in terms of software features and software infrastructure.

  17. Ada and software management in NASA: Assessment and recommendations

    NASA Technical Reports Server (NTRS)

    1989-01-01

    Recent NASA missions have required software systems that are larger, more complex, and more critical than NASA software systems of the past. The Ada programming language and the software methods and support environments associated with it are seen as potential breakthroughs in meeting NASA's software requirements. The findings of a study by the Ada and Software Management Assessment Working Group (ASMAWG) are presented. The study was chartered to perform three tasks: (1) assess the agency's ongoing and planned Ada activities; (2) assess the infrastructure (standards, policies, and internal organizations) supporting software management and the Ada activities; and (3) present an Ada implementation and use strategy appropriate for NASA over the next 5 years.

  18. The costs of avoiding environmental impacts from shale-gas surface infrastructure.

    PubMed

    Milt, Austin W; Gagnolet, Tamara D; Armsworth, Paul R

    2016-12-01

    Growing energy demand has increased the need to manage conflicts between energy production and the environment. As an example, shale-gas extraction requires substantial surface infrastructure, which fragments habitats, erodes soils, degrades freshwater systems, and displaces rare species. Strategic planning of shale-gas infrastructure can reduce trade-offs between economic and environmental objectives, but the specific nature of these trade-offs is not known. We estimated the cost of avoiding impacts from land-use change on forests, wetlands, rare species, and streams from shale-energy development within leaseholds. We created software for optimally siting shale-gas surface infrastructure to minimize its environmental impacts at reasonable construction cost. We visually assessed sites before infrastructure optimization to test whether such inspection could be used to predict whether impacts could be avoided at the site. On average, up to 38% of aggregate environmental impacts of infrastructure could be avoided for 20% greater development costs by spatially optimizing infrastructure. However, we found trade-offs between environmental impacts and costs among sites. In visual inspections, we often distinguished between sites that could be developed to avoid impacts at relatively low cost (29%) and those that could not (20%). Reductions in a metric of aggregate environmental impact could be largely attributed to potential displacement of rare species, sedimentation, and forest fragmentation. Planners and regulators can estimate and use heterogeneous trade-offs among development sites to create industry-wide improvements in environmental performance and do so at reasonable costs by, for example, leveraging low-cost avoidance of impacts at some sites to offset others. This could require substantial effort, but the results and software we provide can facilitate the process. © 2016 Society for Conservation Biology.
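    The cost-versus-impact trade-off at the heart of this study can be illustrated with a toy weighted siting choice; the candidate sites, costs, and impact scores below are invented, and the weighted-sum rule is a generic stand-in for the paper's optimization software.

    ```python
    # Toy cost/impact trade-off for siting a well pad (invented data).
    candidates = [
        # (site, construction cost in $M, aggregate environmental impact score)
        ("ridge",     4.0, 9.0),
        ("meadow",    4.6, 5.5),
        ("old_field", 5.2, 3.1),
    ]

    def best_site(weight):
        """weight = $M the planner will pay to remove one unit of impact."""
        return min(candidates, key=lambda s: s[1] + weight * s[2])

    for w in (0.0, 0.2, 0.5):
        # Raising the impact weight shifts the choice toward lower-impact sites
        print(f"impact weight {w}: choose {best_site(w)[0]}")
    ```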

  19. The GEOSS solution for enabling data interoperability and integrative research.

    PubMed

    Nativi, Stefano; Mazzetti, Paolo; Craglia, Max; Pirrone, Nicola

    2014-03-01

    Global sustainability research requires an integrative research effort underpinned by digital infrastructures (systems) able to harness data and heterogeneous information across disciplines. Digital data and information sharing across systems and applications is achieved by implementing interoperability: a property of a product or system to work with other products or systems, present or future. There are at least three main interoperability challenges a digital infrastructure must address: technological, semantic, and organizational. In recent years, important international programs and initiatives have focused on this ambitious objective. This manuscript presents and combines the studies and experiences carried out by three relevant projects focusing on the heavy metal domain: the Global Mercury Observation System, the Global Earth Observation System of Systems (GEOSS), and INSPIRE. This research work identified a valuable interoperability service bus (i.e., a set of standards, models, interfaces, and good practices) proposed to characterize the integrative research cyber-infrastructure of the heavy metal research community. In the paper, the GEOSS common infrastructure is discussed as the implementation of a multidisciplinary and participatory research infrastructure, introducing a possible roadmap for the heavy metal pollution research community to join GEOSS as a new Group on Earth Observations community of practice and develop a research infrastructure for carrying out integrative research in its specific domain.

  20. Proceedings of the Fourth International Workshop on a Research Agenda for Maintenance and Evolution of Service-Oriented Systems (MESOA 2010)

    DTIC Science & Technology

    2011-09-01

    Topics include service-oriented systems, Software-as-a-Service (SaaS), social network infrastructures, Internet marketing, mobile computing, and context awareness. From the proceedings of the Fourth International Workshop on a Research Agenda for Maintenance and Evolution of Service-Oriented Systems (MESOA 2010), organized by members of the Carnegie Mellon Software Engineering Institute (SEI). The Software Engineering Institute (SEI) started developing a service-oriented architecture ...

  1. Deploying the integrated metropolitan intelligent transportation systems (ITS) infrastructure : FY 2003 report

    DOT National Transportation Integrated Search

    2003-01-01

    In January 1996, the Secretary of Transportation set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2005. Using data from surveys administered...

  2. Deploying the integrated metropolitan intelligent transportation systems (ITS) infrastructure : FY 2004 report

    DOT National Transportation Integrated Search

    2005-07-01

    In January 1996, the Secretary of Transportation set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2005. Using data from surveys administered...

  3. Open source system OpenVPN in a function of Virtual Private Network

    NASA Astrophysics Data System (ADS)

    Skendzic, A.; Kovacic, B.

    2017-05-01

    Virtual Private Networks (VPNs) can establish a high level of security in network communication. VPN technology enables secure networking over distributed or public network infrastructure, applying its own security and management rules inside the network. A VPN can be set up over different communication channels, such as the Internet or a separate ISP communication infrastructure, and creates a secure communication channel over the public network between two endpoints (computers). OpenVPN is an open-source software product under the GNU General Public License (GPL) that can be used to establish VPN communication between two computers inside a business local network over public communication infrastructure. It uses dedicated security protocols with 256-bit encryption and is capable of traversing network address translators (NATs) and firewalls. It allows computers to authenticate each other using a pre-shared secret key, certificates, or a username and password. This work reviews VPN technology with a special accent on OpenVPN, and also presents the comparative and financial benefits of using open-source VPN software in a business environment.

  4. Cloudweaver: Adaptive and Data-Driven Workload Manager for Generic Clouds

    NASA Astrophysics Data System (ADS)

    Li, Rui; Chen, Lei; Li, Wen-Syan

    Cloud computing denotes the latest trend in application development for parallel computing on massive data volumes. It relies on clouds of servers to handle tasks that used to be managed by an individual server. With cloud computing, software vendors can provide business intelligence and data analytic services for internet-scale data sets. Many open source projects, such as Hadoop, offer various software components that are essential for building a cloud infrastructure. Current Hadoop (like many other frameworks) requires users to configure the cloud infrastructure via programs and APIs, and such configuration is fixed at runtime. In this chapter, we propose a workload manager (WLM), called CloudWeaver, which provides automated configuration of a cloud infrastructure for runtime execution. The workload management is data-driven and can adapt to the dynamic nature of operator throughput during different execution phases. CloudWeaver works for a single job and for workloads consisting of multiple jobs running concurrently, aiming at maximum throughput using a minimum set of processors.
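    A sketch of the data-driven rebalancing decision such a workload manager makes is shown below; the thresholds and the function interface are invented for illustration and are not CloudWeaver's actual algorithm.

    ```python
    # Hypothetical throughput-driven worker rebalancing for one operator.
    def rebalance(workers, throughput, demand, min_workers=1, max_workers=32):
        """Return the new worker count for one operator.

        throughput: records/s the current pool actually delivers
        demand:     records/s arriving at the operator's input queue
        """
        per_worker = throughput / max(workers, 1)
        if demand > throughput and workers < max_workers:
            # Queue is growing: add enough workers to cover the gap
            deficit = demand - throughput
            workers += max(1, round(deficit / max(per_worker, 1e-9)))
        elif demand < 0.5 * throughput and workers > min_workers:
            workers -= 1   # pool is mostly idle: release a processor
        return min(max(workers, min_workers), max_workers)

    print(rebalance(workers=4, throughput=400.0, demand=900.0))   # scale up -> 9
    print(rebalance(workers=4, throughput=400.0, demand=150.0))   # scale down -> 3
    ```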

  5. Towards usable and interdisciplinary e-infrastructure (Invited)

    NASA Astrophysics Data System (ADS)

    de Roure, D.

    2010-12-01

    e-Science and cyberinfrastructure at their outset tended to focus on ‘big science’ and cross-organisational infrastructures, demonstrating complex engineering with the promise of high returns. It soon became evident that the key to researchers harnessing new technology for everyday use is a user-centric approach which empowers the user - both from a developer and an end-user viewpoint. For example, this philosophy is demonstrated in workflow systems for systematic data processing and in the Web 2.0 approach as exemplified by the myExperiment social web site for sharing workflows, methods and ‘research objects’. Hence the most disruptive aspect of Cloud and virtualisation is perhaps that they make new computational resources and applications usable, creating a flourishing ecosystem for routine processing and innovation alike - and in this we must consider software sustainability. This talk will discuss the changing nature of the e-Science digital ecosystem, focus on e-infrastructure for cross-disciplinary work, and highlight issues in sustainable software development in this context.

  6. Assessing the uptake of persistent identifiers by research infrastructure users

    PubMed Central

    Maull, Keith E.

    2017-01-01

    Significant progress has been made in the past few years in the development of recommendations, policies, and procedures for creating and promoting citations to data sets, software, and other research infrastructures like computing facilities. Open questions remain, however, about the extent to which referencing practices of authors of scholarly publications are changing in ways desired by these initiatives. This paper uses four focused case studies to evaluate whether research infrastructures are being increasingly identified and referenced in the research literature via persistent citable identifiers. The findings of the case studies show that references to such resources are increasing, but that the patterns of these increases are variable. In addition, the study suggests that citation practices for data sets may change more slowly than citation practices for software and research facilities, due to the inertia of existing practices for referencing the use of data. Similarly, existing practices for acknowledging computing support may slow the adoption of formal citations for computing resources. PMID:28394907

  7. Policy Model of Sustainable Infrastructure Development (Case Study : Bandarlampung City, Indonesia)

    NASA Astrophysics Data System (ADS)

    Persada, C.; Sitorus, S. R. P.; Marimin; Djakapermana, R. D.

    2018-03-01

    Infrastructure development affects not only the economic aspect, but also the social and environmental aspects, which are the main dimensions of sustainable development. The many aspects and actors involved in urban infrastructure development require a comprehensive and integrated policy towards sustainability. Therefore, it is necessary to formulate an infrastructure development policy that considers the various dimensions of sustainable development. The main objective of this research is to formulate a policy for sustainable infrastructure development. In this research, urban infrastructure covers transportation, water systems (drinking water, storm water, wastewater), green open spaces and solid waste. The research was conducted in Bandarlampung City. The study uses comprehensive modelling: Multidimensional Scaling (MDS) with the Rapid Appraisal of Infrastructure (Rapinfra), the Analytic Network Process (ANP), and a system dynamics model. The findings of the MDS analysis showed that the infrastructure sustainability status of Bandarlampung City is less sustainable. The ANP analysis produced the 8 main indicators that are most influential in sustainable infrastructure development. The system dynamics model offered 4 scenarios for a sustainable urban infrastructure policy model. The best scenario was implemented through 3 policies: integrated infrastructure management, population control, and local economic development.

  8. European Plate Observing System - Norway (EPOS-N): A National Consortium for the Norwegian Implementation of EPOS

    NASA Astrophysics Data System (ADS)

    Atakan, Kuvvet; Tellefsen, Karen

    2017-04-01

    The European Plate Observing System (EPOS) aims to create a pan-European infrastructure for solid Earth science to support a safe and sustainable society. The main vision of the European Plate Observing System (EPOS) is to address the three basic challenges in Earth Science: (i) unravelling the Earth's deformational processes which are part of the Earth system evolution in time, (ii) understanding geo-hazards and their implications to society, and (iii) contributing to the safe and sustainable use of geo-resources. The mission of EPOS-Norway is therefore in line with the European vision of EPOS, i.e. monitor and understand the dynamic and complex Earth system by relying on new e-science opportunities and integrating diverse and advanced Research Infrastructures for solid Earth science. The EPOS-Norway project started in January 2016 with a national consortium consisting of six institutions. These are: University of Bergen (Coordinator), NORSAR, National Mapping Authority, Geological Survey of Norway, Christian Michelsen Research and University of Oslo. EPOS-N will during the next five years focus on the implementation of three main components. These are: (i) Developing a Norwegian e-Infrastructure to integrate the Norwegian Solid Earth data from the seismological and geodetic networks, as well as the data from the geological and geophysical data repositories, (ii) Improving the monitoring capacity in the Arctic, including Northern Norway and the Arctic islands, and (iii) Establishing a national Solid Earth Science Forum providing a constant feedback mechanism for improved integration of multidisciplinary data, as well as training of young scientists for future utilization of all available solid Earth observational data through a single e-infrastructure. Currently, a list of data, data products, software and services (DDSS) is being prepared. These elements will be integrated in the EPOS-N data/web-portal, which will allow users to browse, select and download relevant data for solid Earth science research. In addition to the standard data and data products such as seismological, geodetic, geomagnetic and geological data, there are a number of non-standard data and data products that will be integrated. In parallel, advanced visualization technologies are being implemented, which will provide a platform for a possible future ICS-D (distributed components of the Integrated Core Services) for EPOS. In order to enhance the monitoring capacity in the Arctic, planning and site selection process for the new instrument installations are well underway, as well as the procurement of the required equipment. In total, 17 new seismological and geodetic stations will be co-located in selected sites in Northern Norway, Jan Mayen and Svalbard. In addition, a seismic array with 9 nodes will be installed on Bear Island. A planned aeromagnetic survey along the Knipovich Ridge is being conducted this year, which will give new insights to the tectonic development of the mid-ocean ridge systems in the North Atlantic.

  9. 20 CFR 619.1 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... inquiries and responses between SWAs. Major IT Modernization Project means conversion, re-engineering..., or upgrading software libraries, protocols, or hardware platform and infrastructure. These are...

  10. Integration in primary community care networks (PCCNs): examination of governance, clinical, marketing, financial, and information infrastructures in a national demonstration project in Taiwan

    PubMed Central

    Lin, Blossom Yen-Ju

    2007-01-01

    Background Taiwan's primary community care network (PCCN) demonstration project, funded by the Bureau of National Health Insurance in March 2003, was established to discourage hospital-shopping behavior and to drive traditionally fragmented health care providers into cooperative care models. Between 2003 and 2005, 268 PCCNs were established. This study profiled the individual members in the PCCNs to examine the nature and extent to which their network infrastructures have been integrated among the members (clinics and hospitals) within individual PCCNs. Methods Thorough questionnaire items, covering the network working infrastructures – governance, clinical, marketing, financial, and information integration in PCCNs – were developed with validity and reliability confirmed. One thousand five hundred and fifty-seven clinics that had belonged to PCCNs for more than one year, based on the 2003–2005 Taiwan Primary Community Care Network List, were surveyed by mail. Nine hundred and twenty-eight clinic members responded to the surveys, giving a 59.6% response rate. Results Overall, the PCCNs' members had higher involvement in the governance infrastructure, which is usually viewed as the most important for establishing core values in PCCNs' organization design and management at the early integration stage. In addition, there was a higher extent of integration of the clinical, marketing, and information infrastructures in the hospital-clinic member relationships than among clinic members within individual PCCNs. The financial infrastructure was shown to be the least integrated relative to the other functional infrastructures at the early stage of PCCN formation. Conclusion There was still room for better-integrated partnerships, as evidenced by the great variety of relationships and differences in extent of integration in this study. In addition to documenting what the network members have done in the early stage of network formation, the detailed survey items, built on concepts proposed by managerial and theoretical professionals, could guide health care providers who wish to turn their practices into multi-organization networks. PMID:17577422

  11. Integration in primary community care networks (PCCNs): examination of governance, clinical, marketing, financial, and information infrastructures in a national demonstration project in Taiwan.

    PubMed

    Lin, Blossom Yen-Ju

    2007-06-19

    Taiwan's primary community care network (PCCN) demonstration project, funded by the Bureau of National Health Insurance in March 2003, was established to discourage hospital-shopping behavior and to drive traditionally fragmented health care providers into cooperative care models. Between 2003 and 2005, 268 PCCNs were established. This study profiled the individual members in the PCCNs to examine the nature and extent to which their network infrastructures have been integrated among the members (clinics and hospitals) within individual PCCNs. Thorough questionnaire items, covering the network working infrastructures--governance, clinical, marketing, financial, and information integration in PCCNs--were developed with validity and reliability confirmed. One thousand five hundred and fifty-seven clinics that had belonged to PCCNs for more than one year, based on the 2003-2005 Taiwan Primary Community Care Network List, were surveyed by mail. Nine hundred and twenty-eight clinic members responded to the surveys, giving a 59.6% response rate. Overall, the PCCNs' members had higher involvement in the governance infrastructure, which is usually viewed as the most important for establishing core values in PCCNs' organization design and management at the early integration stage. In addition, there was a higher extent of integration of the clinical, marketing, and information infrastructures in the hospital-clinic member relationships than among clinic members within individual PCCNs. The financial infrastructure was shown to be the least integrated relative to the other functional infrastructures at the early stage of PCCN formation. There was still room for better-integrated partnerships, as evidenced by the great variety of relationships and differences in extent of integration in this study. In addition to documenting what the network members have done in the early stage of network formation, the detailed survey items, built on concepts proposed by managerial and theoretical professionals, could guide health care providers who wish to turn their practices into multi-organization networks.

  12. Application of the dynamically allocated virtual clustering management system to emulated tactical network experimentation

    NASA Astrophysics Data System (ADS)

    Marcus, Kelvin

    2014-06-01

    The U.S. Army Research Laboratory (ARL) has built a "Network Science Research Lab" to support research that aims to improve their ability to analyze, predict, design, and govern complex systems that interweave the social/cognitive, information, and communication network genres. Researchers at ARL and the Network Science Collaborative Technology Alliance (NS-CTA), a collaborative research alliance funded by ARL, conducted experimentation to determine if automated network monitoring tools and task-aware agents deployed within an emulated tactical wireless network could potentially increase the retrieval of relevant data from heterogeneous distributed information nodes. ARL and NS-CTA required the capability to perform this experimentation over clusters of heterogeneous nodes with emulated wireless tactical networks where each node could contain different operating systems, application sets, and physical hardware attributes. Researchers utilized the Dynamically Allocated Virtual Clustering Management System (DAVC) to address each of the infrastructure support requirements necessary in conducting their experimentation. The DAVC is an experimentation infrastructure that provides the means to dynamically create, deploy, and manage virtual clusters of heterogeneous nodes within a cloud computing environment based upon resource utilization such as CPU load, available RAM and hard disk space. The DAVC uses 802.1Q Virtual LANs (VLANs) to prevent experimentation crosstalk and to allow for complex private networks. Clusters created by the DAVC system can be utilized for software development, experimentation, and integration with existing hardware and software. The goal of this paper is to explore how ARL and the NS-CTA leveraged the DAVC to create, deploy and manage multiple experimentation clusters to support their experimentation goals.
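    The resource-utilization-based placement decision such a manager makes when deploying a virtual cluster node can be pictured with a small sketch; the host data and the placement rule below are invented for illustration, not the DAVC's actual scheduler.

    ```python
    # Hypothetical least-loaded feasible-host placement for one VM.
    hosts = [
        # name, CPU load (0-1), free RAM (GB), free disk (GB)
        {"name": "host-a", "load": 0.82, "ram": 12, "disk": 200},
        {"name": "host-b", "load": 0.35, "ram": 48, "disk": 500},
        {"name": "host-c", "load": 0.55, "ram": 20, "disk": 80},
    ]

    def place(vm_ram, vm_disk):
        """Pick the least-loaded host that can satisfy the VM's requirements."""
        feasible = [h for h in hosts if h["ram"] >= vm_ram and h["disk"] >= vm_disk]
        if not feasible:
            raise RuntimeError("no host can satisfy the request")
        return min(feasible, key=lambda h: h["load"])["name"]

    print(place(vm_ram=16, vm_disk=100))   # -> host-b
    ```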

  13. The PRISM project

    NASA Astrophysics Data System (ADS)

    Guilyardi, E.

    2003-04-01

    The European Union's PRISM infrastructure project (PRogram for Integrated earth System Modelling) aims at designing a flexible environment to easily assemble and run Earth System Models (http://prism.enes.org). Europe's widely distributed modelling expertise is both a strength and a challenge. Recognizing this, the PRISM project aims at developing an efficient shared modelling software infrastructure for climate scientists, providing them with an opportunity for greater focus on scientific issues, including the necessary scientific diversity (models and approaches). The proposed PRISM system includes 1) the use - or definition - and promotion of scientific and technical standards to increase component modularity, 2) an end-to-end software environment (coupler, user interface, diagnostics) to launch, monitor and analyze complex Earth System Models built around the existing and future community models, 3) testing and quality standards to ensure HPC performance on a variety of platforms and 4) community-wide input and requirements capture at all stages of system specification and design through user/developer meetings, workshops and thematic schools. This science-driven project, led by 22 institutes* and started December 1st 2001, benefits from a unique gathering of scientific and technical expertise. More than 30 models (both global and regional) have expressed interest in being part of the PRISM system, and 6 types of components have been identified: atmosphere, atmosphere chemistry, land surface, ocean, sea ice and ocean biochemistry. Progress and the overall architecture design will be presented. * MPI-Met (Coordinator), KNMI (co-coordinator), MPI-M&D, Met Office, University of Reading, IPSL, Meteo-France, CERFACS, DMI, SMHI, NERSC, ETH Zurich, INGV, MPI-BGC, PIK, ECMWF, UCL-ASTR, NEC, FECIT, SGI, SUN, CCRLE

  14. TerraFERMA: The Transparent Finite Element Rapid Model Assembler for multiphysics problems in Earth sciences

    NASA Astrophysics Data System (ADS)

    Wilson, Cian R.; Spiegelman, Marc; van Keken, Peter E.

    2017-02-01

    We introduce and describe a new software infrastructure TerraFERMA, the Transparent Finite Element Rapid Model Assembler, for the rapid and reproducible description and solution of coupled multiphysics problems. The design of TerraFERMA is driven by two computational needs in Earth sciences. The first is the need for increased flexibility in both problem description and solution strategies for coupled problems where small changes in model assumptions can lead to dramatic changes in physical behavior. The second is the need for software and models that are more transparent so that results can be verified, reproduced, and modified in a manner such that the best ideas in computation and Earth science can be more easily shared and reused. TerraFERMA leverages three advanced open-source libraries for scientific computation that provide high-level problem description (FEniCS), composable solvers for coupled multiphysics problems (PETSc), and an options handling system (SPuD) that allows the hierarchical management of all model options. TerraFERMA integrates these libraries into an interface that organizes the scientific and computational choices required in a model into a single options file from which a custom compiled application is generated and run. Because all models share the same infrastructure, models become more reusable and reproducible, while still permitting the individual researcher considerable latitude in model construction. TerraFERMA solves partial differential equations using the finite element method. It is particularly well suited for nonlinear problems with complex coupling between components. TerraFERMA is open-source and available at http://terraferma.github.io, which includes links to documentation and example input files.
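    For a flavor of the high-level problem description TerraFERMA leverages, here is a standard (legacy) FEniCS Poisson problem stated almost directly in weak form; this is plain FEniCS usage, not TerraFERMA's options-file input.

    ```python
    # A Poisson problem in legacy FEniCS (dolfin): the weak form appears
    # nearly verbatim in code, which is the style TerraFERMA builds on.
    from dolfin import (Constant, DirichletBC, Function, FunctionSpace,
                        TestFunction, TrialFunction, UnitSquareMesh, dot, dx,
                        grad, solve)

    mesh = UnitSquareMesh(16, 16)
    V = FunctionSpace(mesh, "P", 1)

    u, v = TrialFunction(V), TestFunction(V)
    a = dot(grad(u), grad(v)) * dx          # bilinear form of -div(grad u) = f
    L = Constant(1.0) * v * dx              # source term f = 1
    bc = DirichletBC(V, Constant(0.0), "on_boundary")

    uh = Function(V)
    solve(a == L, uh, bc)
    print("max u:", uh.vector().max())
    ```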

  15. Brokered virtual hubs for facilitating access and use of geospatial Open Data

    NASA Astrophysics Data System (ADS)

    Mazzetti, Paolo; Latre, Miguel; Kamali, Nargess; Brumana, Raffaella; Braumann, Stefan; Nativi, Stefano

    2016-04-01

    Open Data is a major trend in the current information technology scenario, often publicised as one of the pillars of the information society in the near future. Geospatial Open Data in particular have huge potential for the Earth Sciences, through the enablement of innovative applications and services integrating heterogeneous information. However, open does not mean usable. As was recognized at the very beginning of the Web revolution, many different degrees of openness exist: from simple sharing in a proprietary format to advanced sharing in standard formats including semantic information. Therefore, to fully unleash the potential of geospatial Open Data, advanced infrastructures are needed to increase the degree of data openness and enhance usability. The ENERGIC OD (European NEtwork for Redistributing Geospatial Information to user Communities - Open Data) project, funded by the European Union under the Competitiveness and Innovation framework Programme (CIP), started in October 2014. In response to the EU call, the general objective of the project is to "facilitate the use of open (freely available) geographic data from different sources for the creation of innovative applications and services through the creation of Virtual Hubs". The ENERGIC OD Virtual Hubs aim to facilitate the use of geospatial Open Data by lowering and possibly removing the main barriers that hamper geo-information (GI) usage by end-users and application developers. The heterogeneity of data and services is recognized as one of the major barriers to Open Data (re-)use: it forces end-users and developers to spend considerable effort accessing different infrastructures and harmonizing datasets. Such heterogeneity cannot be completely removed through the adoption of standard specifications for service interfaces, metadata and data models, since different infrastructures adopt different standards to address specific challenges and use-cases. Thus, beyond a certain extent, heterogeneity is irreducible, especially in interdisciplinary contexts. ENERGIC OD Virtual Hubs address heterogeneity through a mediation and brokering approach: dedicated components (brokers) harmonize service interfaces, metadata and data models, enabling seamless discovery of and access to heterogeneous infrastructures and datasets; a sketch of this adapter pattern follows below. As an innovation project, ENERGIC OD integrates several existing technologies to implement Virtual Hubs as single points of access to geospatial datasets provided by new or existing platforms and infrastructures, including INSPIRE-compliant systems and Copernicus services. A first version of the ENERGIC OD brokers has been implemented based on the GI-Suite Brokering Framework developed by CNR-IIA, complemented with other tools under integration and development. It already enables mediated discovery and harmonized access to different geospatial Open Data sources, and is accessible to users as Software-as-a-Service through a browser. Moreover, open APIs and a Javascript library are available for application developers. Six ENERGIC OD Virtual Hubs have been deployed so far: one at regional level (Berlin metropolitan area) and five at national level (in France, Germany, Italy, Poland and Spain). Each Virtual Hub manager decided the deployment strategy (local infrastructure or commercial Infrastructure-as-a-Service cloud) and the list of connected Open Data sources. The ENERGIC OD Virtual Hubs are under test and validation through the development of ten different mobile and Web applications.
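    To make the mediation and brokering idea concrete, here is a minimal Python sketch of the pattern: per-source adapters translate heterogeneous metadata into one common record model, so a client sees a single discovery interface. The source formats, field names and stub data are invented for illustration; this is not the GI-Suite Brokering Framework's actual code.

```python
# Sketch of the broker pattern: adapters harmonize heterogeneous
# metadata into a common model; the broker fans a query out and
# merges whatever the sources return.

from dataclasses import dataclass

@dataclass
class CommonRecord:          # harmonized metadata model
    title: str
    bbox: tuple              # (min_lon, min_lat, max_lon, max_lat)
    source: str

def from_csw(rec):           # adapter for a CSW-like catalogue entry
    return CommonRecord(rec["dc:title"], tuple(rec["ows:BoundingBox"]), "csw")

def from_ckan(rec):          # adapter for a CKAN-like open-data entry
    return CommonRecord(rec["name"], tuple(rec["spatial"]), "ckan")

def discover(query, sources):
    """Broker: fan out a query and harmonize the results."""
    results = []
    for fetch, adapt in sources:
        results += [adapt(r) for r in fetch(query)]
    return results

# Hypothetical usage with stubbed fetch functions:
csw_fetch = lambda q: [{"dc:title": "Land cover", "ows:BoundingBox": [5, 45, 10, 48]}]
ckan_fetch = lambda q: [{"name": "Air quality", "spatial": [2, 48, 3, 49]}]
print(discover("environment", [(csw_fetch, from_csw), (ckan_fetch, from_ckan)]))
```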

  16. Telescience workstation

    NASA Technical Reports Server (NTRS)

    Brown, Robert L.; Doyle, Dee; Haines, Richard F.; Slocum, Michael

    1989-01-01

    As part of the Telescience Testbed Pilot Program, the Universities Space Research Association/Research Institute for Advanced Computer Science (USRA/RIACS) proposed to support remote communication by providing a network of human/machine interfaces, computer resources, and experimental equipment which allows: remote science, collaboration, technical exchange, and multimedia communication. The telescience workstation is intended to provide a local computing environment for telescience. The purposes of the program are as follows: (1) to provide a suitable environment to integrate existing and new software for a telescience workstation; (2) to provide a suitable environment to develop new software in support of telescience activities; (3) to provide an interoperable environment so that a wide variety of workstations may be used in the telescience program; (4) to provide a supportive infrastructure and a common software base; and (5) to advance, apply, and evaluate the telescience technology base. A prototype telescience computing environment designed to bring practicing scientists in domains other than computer science into a modern style of doing their computing was created and deployed. This environment, the Telescience Windowing Environment, Phase 1 (TeleWEn-1), met some, but not all, of the goals stated above. The TeleWEn-1 provided a window-based workstation environment and a set of tools for text editing, document preparation, electronic mail, multimedia mail, raster manipulation, and system management.

  17. An Object-Oriented Network-Centric Software Architecture for Physical Computing

    NASA Astrophysics Data System (ADS)

    Palmer, Richard

    1997-08-01

    Recent developments in object-oriented computer languages and infrastructure such as the Internet, Web browsers, and the like provide an opportunity to define a more productive computational environment for scientific programming that is based more closely on the underlying mathematics describing physics than traditional programming languages such as FORTRAN or C++. In this talk I describe an object-oriented software architecture for representing physical problems that includes classes for such common mathematical objects as geometry, boundary conditions, partial differential and integral equations, discretization and numerical solution methods, etc. In practice, a scientific program written using this architecture looks remarkably like the mathematics used to understand the problem, is typically an order of magnitude smaller than traditional FORTRAN or C++ codes, and hence is easier to understand, debug, describe, etc. All objects in this architecture are ``network-enabled,'' which means that components of a software solution to a physical problem can be transparently loaded from anywhere on the Internet or other global network. The architecture is expressed as an ``API,'' or application programmers interface specification, with reference embeddings in Java, Python, and C++. A C++ class library for an early version of this API has been implemented for machines ranging from PCs to the IBM SP2, meaning that identical codes run on all architectures.
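    The claim that a problem description can "look like the mathematics" is easiest to see in code. The toy Python sketch below shows how classes for geometry, boundary conditions and equations compose into a problem object; the class names and API are invented for illustration and are not the architecture's actual interface.

```python
# Toy sketch: a problem description built from mathematical objects.
# The API is illustrative only.

class Interval:
    """A 1-D domain [a, b]."""
    def __init__(self, a, b):
        self.a, self.b = a, b

class Dirichlet:
    """Boundary condition u(where) = value."""
    def __init__(self, where, value):
        self.where, self.value = where, value

class PoissonEquation:
    """Represents -u'' = f on the domain."""
    def __init__(self, f):
        self.f = f

class Problem:
    """Bundles geometry, equation and boundary conditions."""
    def __init__(self, geometry, equation, bcs):
        self.geometry, self.equation, self.bcs = geometry, equation, bcs

# The description reads much like the mathematics it encodes:
problem = Problem(
    geometry=Interval(0.0, 1.0),
    equation=PoissonEquation(f=lambda x: 1.0),
    bcs=[Dirichlet(where=0.0, value=0.0), Dirichlet(where=1.0, value=0.0)],
)
```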

  18. search GenBank: interactive orchestration and ad-hoc choreography of Web services in the exploration of the biomedical resources of the National Center For Biotechnology Information

    PubMed Central

    2013-01-01

    Background Due to the growing number of biomedical entries in the data repositories of the National Center for Biotechnology Information (NCBI), it is difficult for third-party software developers to collect, manage and process all of these entries in one place without significant investment in hardware and software infrastructure and its maintenance and administration. Web services allow the development of software applications that integrate in one place the functionality and processing logic of distributed software components, without integrating the components themselves and without integrating the resources to which they have access. This is achieved by appropriate orchestration or choreography of available Web services and their shared functions. After the successful application of Web services in the business sector, this technology can now be used to build composite software tools oriented towards biomedical data processing. Results We have developed a new tool for efficient and dynamic data exploration in GenBank and other NCBI databases. The dedicated search GenBank system makes use of NCBI Web services and the package of Entrez Programming Utilities (eUtils) to provide extended searching capabilities over NCBI data repositories. In search GenBank users can follow one of three exploration paths: simple data searching based on a specified query, advanced data searching based on a specified query, and advanced data exploration with the use of macros. search GenBank orchestrates calls to the particular tools available through the NCBI Web service that provide the requested functionality, while users interactively browse selected records in search GenBank and traverse between NCBI databases using the available links. Alternatively, by building macros in the advanced data exploration mode, users create choreographies of eUtils calls, which can lead to the automatic discovery of related data in the specified databases. Conclusions search GenBank extends the standard capabilities of the NCBI Entrez search engine in querying biomedical databases. The possibility of creating and saving macros in search GenBank is a unique feature with great potential, which will only grow as the networks of relationships between data stored in particular databases become denser. search GenBank is available for public use at http://sgb.biotools.pl/. PMID:23452691
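    The orchestration described here chains calls to NCBI's public eUtils endpoints. As a minimal sketch of that chaining (not search GenBank's own code), the following Python fragment uses esearch to find record IDs and efetch to retrieve the records; the query term is an arbitrary example.

```python
# Minimal sketch of orchestrating NCBI eUtils calls:
# esearch finds record IDs, efetch retrieves the records.

import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def esearch(db, term, retmax=3):
    """Return up to retmax record IDs matching the query term."""
    r = requests.get(f"{EUTILS}/esearch.fcgi",
                     params={"db": db, "term": term,
                             "retmax": retmax, "retmode": "json"})
    r.raise_for_status()
    return r.json()["esearchresult"]["idlist"]

def efetch(db, ids, rettype="fasta"):
    """Fetch the records for a list of IDs as plain text."""
    r = requests.get(f"{EUTILS}/efetch.fcgi",
                     params={"db": db, "id": ",".join(ids),
                             "rettype": rettype, "retmode": "text"})
    r.raise_for_status()
    return r.text

if __name__ == "__main__":
    ids = esearch("nucleotide", "BRCA1[gene] AND human[orgn]")
    print(efetch("nucleotide", ids)[:500])
```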

  19. search GenBank: interactive orchestration and ad-hoc choreography of Web services in the exploration of the biomedical resources of the National Center For Biotechnology Information.

    PubMed

    Mrozek, Dariusz; Małysiak-Mrozek, Bożena; Siążnik, Artur

    2013-03-01

    Due to the growing number of biomedical entries in the data repositories of the National Center for Biotechnology Information (NCBI), it is difficult for third-party software developers to collect, manage and process all of these entries in one place without significant investment in hardware and software infrastructure and its maintenance and administration. Web services allow the development of software applications that integrate in one place the functionality and processing logic of distributed software components, without integrating the components themselves and without integrating the resources to which they have access. This is achieved by appropriate orchestration or choreography of available Web services and their shared functions. After the successful application of Web services in the business sector, this technology can now be used to build composite software tools oriented towards biomedical data processing. We have developed a new tool for efficient and dynamic data exploration in GenBank and other NCBI databases. The dedicated search GenBank system makes use of NCBI Web services and the package of Entrez Programming Utilities (eUtils) to provide extended searching capabilities over NCBI data repositories. In search GenBank users can follow one of three exploration paths: simple data searching based on a specified query, advanced data searching based on a specified query, and advanced data exploration with the use of macros. search GenBank orchestrates calls to the particular tools available through the NCBI Web service that provide the requested functionality, while users interactively browse selected records in search GenBank and traverse between NCBI databases using the available links. Alternatively, by building macros in the advanced data exploration mode, users create choreographies of eUtils calls, which can lead to the automatic discovery of related data in the specified databases. search GenBank extends the standard capabilities of the NCBI Entrez search engine in querying biomedical databases. The possibility of creating and saving macros in search GenBank is a unique feature with great potential, which will only grow as the networks of relationships between data stored in particular databases become denser. search GenBank is available for public use at http://sgb.biotools.pl/.

  20. Tracking the deployment of the integrated metropolitan ITS infrastructure in Columbus : FY99 results

    DOT National Transportation Integrated Search

    2000-01-01

    In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...

  1. Tracking the deployment of the integrated metropolitan ITS infrastructure in Fresno : FY99 results

    DOT National Transportation Integrated Search

    2000-01-01

    In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...

  2. Tracking the deployment of the integrated metropolitan ITS infrastructure in Sacramento : FY99 results

    DOT National Transportation Integrated Search

    2000-01-01

    In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...

  3. Tracking the deployment of the integrated metropolitan ITS infrastructure in Bakersfield : FY99 results

    DOT National Transportation Integrated Search

    2000-01-01

    In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...

  4. Tracking the deployment of the integrated metropolitan ITS infrastructure in Albuquerque : FY99 results

    DOT National Transportation Integrated Search

    2000-01-01

    In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...

  5. Tracking the deployment of the integrated metropolitan ITS infrastructure in Springfield : FY99 results

    DOT National Transportation Integrated Search

    2000-01-01

    In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...

  6. Technology Assessment On Stressor Impacts To Green Infrastructure BMP Performance, Monitoring And Integration

    EPA Science Inventory

    This presentation will document, benchmark and evaluate state-of-the-science research and implementation on BMP performance, monitoring, and integration for green infrastructure applications, to manage wet weather flow, storm-water-runoff stressor relief and remedial sustainable w...

  7. Tracking the deployment of the integrated metropolitan ITS infrastructure in Knoxville : FY99 results

    DOT National Transportation Integrated Search

    2000-01-01

    In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...

  8. Tracking the deployment of the integrated metropolitan ITS infrastructure in Wichita : FY99 results

    DOT National Transportation Integrated Search

    2000-01-01

    In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...

  9. Tracking the deployment of the integrated metropolitan ITS infrastructure in Phoenix : FY99 results

    DOT National Transportation Integrated Search

    2000-01-01

    In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...

  10. Tracking the deployment of the integrated metropolitan ITS infrastructure in Baltimore : FY99 results

    DOT National Transportation Integrated Search

    2000-01-01

    In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...

  11. Tracking the deployment of the integrated metropolitan ITS infrastructure in Orlando : FY99 results

    DOT National Transportation Integrated Search

    2000-01-01

    In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...

  12. Tracking the deployment of the integrated metropolitan ITS infrastructure in Austin : FY99 results

    DOT National Transportation Integrated Search

    2000-01-01

    In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...

  13. Tracking the deployment of the integrated metropolitan ITS infrastructure in Toledo : FY99 results

    DOT National Transportation Integrated Search

    2000-01-01

    In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...

  14. Tracking the deployment of the integrated metropolitan ITS infrastructure in Charleston : FY99 results

    DOT National Transportation Integrated Search

    2000-01-01

    In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...

  15. Tracking the deployment of the integrated metropolitan ITS infrastructure in Louisville : FY99 results

    DOT National Transportation Integrated Search

    2000-01-01

    In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...

  16. Tracking the deployment of the integrated metropolitan ITS infrastructure in Nashville : FY99 results

    DOT National Transportation Integrated Search

    2000-01-01

    In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...

  17. Tracking the deployment of the integrated metropolitan ITS infrastructure in Rochester : FY99 results

    DOT National Transportation Integrated Search

    2000-01-01

    In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...

  18. Tracking the deployment of the integrated metropolitan ITS infrastructure in Honolulu : FY99 results

    DOT National Transportation Integrated Search

    2000-01-01

    In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...

  19. Tracking the deployment of the integrated metropolitan ITS infrastructure in Birmingham : FY99 results

    DOT National Transportation Integrated Search

    2000-01-01

    In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...

  20. Tracking the deployment of the integrated metropolitan ITS infrastructure in Memphis : FY99 results

    DOT National Transportation Integrated Search

    2000-01-01

    In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...

  1. Tracking the deployment of the integrated metropolitan ITS infrastructure in Jacksonville : FY99 results

    DOT National Transportation Integrated Search

    2000-01-01

    In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...

  2. Tracking the deployment of the integrated metropolitan ITS infrastructure in Indianapolis : FY99 results

    DOT National Transportation Integrated Search

    2000-01-01

    In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...

  3. Tracking the deployment of the integrated metropolitan ITS infrastructure in Tulsa : FY99 results

    DOT National Transportation Integrated Search

    2000-01-01

    In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...

  4. Tracking the deployment of the integrated metropolitan ITS infrastructure in Atlanta : FY99 results

    DOT National Transportation Integrated Search

    2000-01-01

    In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...

  5. Tracking the deployment of the integrated metropolitan ITS infrastructure in Syracuse : FY99 results

    DOT National Transportation Integrated Search

    2000-01-01

    In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...

  6. Tracking the deployment of the integrated metropolitan ITS infrastructure in Washington : FY99 results

    DOT National Transportation Integrated Search

    2000-01-01

    In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...

  7. Tracking the deployment of the integrated metropolitan ITS infrastructure in Dallas : FY99 results

    DOT National Transportation Integrated Search

    2000-01-01

    In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...

  8. Tracking the deployment of the integrated metropolitan ITS infrastructure in Omaha : FY99 results

    DOT National Transportation Integrated Search

    2000-01-01

    In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...

  9. The role of the ADS in software discovery and citation

    NASA Astrophysics Data System (ADS)

    Accomazzi, Alberto

    2018-01-01

    As the primary index of scholarly content in astronomy and physics, the NASA Astrophysics Data System (ADS) is collaborating with the AAS journals and the Zenodo repository in an effort to promote the preservation of scientific software used in astronomy research and its citation in scholarly publications. In this talk I will discuss how ADS is updating its service infrastructure to allow for the publication, indexing, and citation of software records in scientific articles.

  10. Global Combat Support System-Marine Corps Proof-of-Concept for Dashboard Analytics

    DTIC Science & Technology

    2014-12-01

    The core is modern, commercial-off-the-shelf enterprise resource planning (ERP) software (Oracle 11i e-Business Suite). GCSS-MC's design is focused...factor in the decision to implement this new software. GCSS-MC is the technology centerpiece of the Logistics Modernization (LogMod) Program...GCSS-MC is based on the implementation of Oracle e-Business Suite 11i as the core software package. This is the same infrastructure that Oracle

  11. Connectivity and Resilience: A Multidimensional Analysis of Infrastructure Impacts in the Southwestern Amazon

    ERIC Educational Resources Information Center

    Perz, Stephen G.; Shenkin, Alexander; Barnes, Grenville; Cabrera, Liliana; Carvalho, Lucas A.; Castillo, Jorge

    2012-01-01

    Infrastructure is a worldwide policy priority for national development via regional integration into the global economy. However, economic, ecological and social research draws contrasting conclusions about the consequences of infrastructure. We present a synthetic approach to the study of infrastructure, focusing on a multidimensional treatment…

  12. STAR Online Meta-Data Collection Framework: Integration with the Pre-existing Controls Infrastructure

    NASA Astrophysics Data System (ADS)

    Arkhipkin, D.; Lauret, J.

    2017-10-01

    One integration goal of the STAR experiment's modular Messaging Interface and Reliable Architecture (MIRA) framework is to provide seamless and automatic connections with the existing control systems. After an initial proof of concept and operation of the MIRA system as a parallel data collection system for online use and real-time monitoring, the STAR Software and Computing group is now working on the integration of the Experimental Physics and Industrial Control System (EPICS) with MIRA's interfaces. The goals of this integration are to allow functional interoperability and, later on, to replace the existing legacy Detector Control System components at the service level. In this report, we describe the evolutionary integration process and, as an example, discuss the conversion of the EPICS Alarm Handler. We review the complete upgrade procedure, starting with the propagation of EPICS-originated alarm signals into MIRA, followed by the replacement of the existing operator interface based on the Motif Editor and Display Manager (MEDM) with a modern, portable, web-based Alarm Handler interface. To achieve this aim, we built an EPICS-to-MQTT bridging service and recreated the functionality of the original Alarm Handler using low-latency web messaging technologies. The integration of EPICS alarm handling into our messaging framework allowed STAR to improve the DCS alarm awareness of the existing STAR DAQ and RTS services, which use MIRA as the primary source of experiment control information.
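    As a minimal sketch of what an EPICS-to-MQTT bridge of this kind can look like (not STAR's actual service), the fragment below subscribes to a few process variables and republishes value changes to an MQTT broker. It assumes the pyepics and paho-mqtt packages; the PV names, topic layout and broker address are all illustrative.

```python
# Sketch of an EPICS-to-MQTT bridge: each EPICS value change is
# republished as a retained MQTT message. PV names and broker are
# hypothetical; assumes pyepics and paho-mqtt (1.x-style constructor).

import json
import paho.mqtt.client as mqtt
from epics import PV

client = mqtt.Client()
client.connect("mqtt.example.org", 1883)   # hypothetical broker address
client.loop_start()

def forward(pvname=None, value=None, timestamp=None, **kw):
    """pyepics callback: republish the PV update over MQTT."""
    payload = json.dumps({"pv": pvname, "value": value, "ts": timestamp})
    client.publish(f"dcs/{pvname}", payload, retain=True)

# Subscribe to a few illustrative process variables.
pvs = [PV(name, callback=forward) for name in
       ("HALL:TEMP:1", "MAG:CURRENT:SET", "HV:SECTOR3:STATUS")]

input("Bridging EPICS -> MQTT; press Enter to stop.\n")
```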

  13. Phenotypic and genotypic data integration and exploration through a web-service architecture.

    PubMed

    Nuzzo, Angelo; Riva, Alberto; Bellazzi, Riccardo

    2009-10-15

    Linking genotypic and phenotypic information is one of the greatest challenges of current genetics research. The definition of an Information Technology infrastructure to support such studies, and in particular studies aimed at the analysis of complex traits, which require the definition of multifaceted phenotypes and the integration of genotypic information to discover the most prevalent diseases, is a paradigmatic goal of Biomedical Informatics. This paper describes the use of Information Technology methods and tools to develop a system for the management, inspection and integration of phenotypic and genotypic data. We present the design and architecture of the Phenotype Miner, a software system able to flexibly manage phenotypic information, and its extended functionalities to retrieve genotype information from external repositories and relate it to phenotypic data. For this purpose we developed a module allowing customized data upload by the user and a SOAP-based communications layer to retrieve data from existing biomedical knowledge management tools. We also demonstrate the system's functionality with an example application in which we analyze two related genomic datasets. Finally, we show how a comprehensive, integrated and automated workbench for genotype and phenotype integration can facilitate and improve the hypothesis generation process underlying modern genetic studies.
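    To illustrate the shape of a SOAP-based retrieval layer like the one described, here is a hedged Python sketch using the zeep SOAP library. The WSDL address, the GetGenotypes operation, the response attributes and the stub phenotype data are all hypothetical, since the paper does not publish its service interface.

```python
# Sketch of SOAP-based genotype retrieval joined to local phenotype
# records. Endpoint, operation and fields are hypothetical.

from zeep import Client

client = Client("http://genotype.example.org/service?wsdl")  # hypothetical
# Ask the remote repository for genotypes of a cohort, then relate them
# to locally managed phenotype records keyed by subject ID.
genotypes = client.service.GetGenotypes(cohort="T2D-study",
                                        marker="rs7903146")

phenotypes = {"subj01": "diabetic", "subj02": "control"}     # stub data
merged = [(g.subject_id, g.allele, phenotypes.get(g.subject_id))
          for g in genotypes]
print(merged)
```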

  14. A reliable low cost integrated wireless sensor network for water quality monitoring and level control system in UAE

    NASA Astrophysics Data System (ADS)

    Abou-Elnour, Ali; Khaleeq, Hyder; Abou-Elnour, Ahmad

    2016-04-01

    In the present work, a wireless sensor network and a real-time monitoring and control system are integrated for efficient water quality monitoring in environmental and domestic applications. The proposed system has three main components: (i) the sensor circuits, (ii) the wireless communication system, and (iii) the monitoring and control unit. LabVIEW software has been used in the implementation of the monitoring and control system, while ZigBee and myRIO wireless modules have been used to implement the wireless system. The water quality parameters are accurately measured by the present computer-based monitoring system, and the measurement results are instantaneously transmitted and published with minimum infrastructure costs and maximum flexibility in terms of distance and location. The mobility and durability of the proposed system are further enhanced by powering it fully via a photovoltaic system. The reliability and effectiveness of the system are evaluated under realistic operating conditions.

  15. The Logical Extension

    NASA Technical Reports Server (NTRS)

    2003-01-01

    The same software controlling autonomous and crew-assisted operations for the International Space Station (ISS) is enabling commercial enterprises to integrate and automate manual operations, also known as decision logic, in real time across complex and disparate networked applications, databases, servers, and other devices, all with quantifiable business benefits. Auspice Corporation, of Framingham, Massachusetts, developed the Auspice TLX (The Logical Extension) software platform to effectively mimic the human decision-making process. Auspice TLX automates operations across extended enterprise systems, where any given infrastructure can include thousands of computers, servers, switches, and modems that are connected and, therefore, dependent upon each other. The concept behind the Auspice software grew out of a computer program originally developed in 1981 by Cambridge, Massachusetts-based Draper Laboratory for simulating tasks performed by astronauts aboard the Space Shuttle. At the time, the Space Shuttle Program was dependent upon paper-based procedures for its manned space missions, which typically averaged 2 weeks in duration. As the Shuttle Program progressed, NASA began increasing the length of manned missions in preparation for a more permanent space habitat. Acknowledging the need to relinquish paper-based procedures in favor of an electronic processing format to properly monitor and manage the complexities of these longer missions, NASA realized that Draper's task simulation software could be applied to its vision of year-round space occupancy. In 1992, Draper was awarded a NASA contract to build User Interface Language software to enable autonomous operations of a multitude of functions on Space Station Freedom (the station was redesigned in 1993 and converted into the international venture known today as the ISS).

  16. UAV Inspection of Electrical Transmission Infrastructure with Path Conformance Autonomy and Lidar-Based Geofences NASA Report on UTM Reference Mission Flights at Southern Company Flights November 2016

    NASA Technical Reports Server (NTRS)

    Moore, Andrew J.; Schubert, Matthew; Rymer, Nicholas; Balachandran, Swee; Consiglio, Maria; Munoz, Cesar; Smith, Joshua; Lewis, Dexter; Schneider, Paul

    2017-01-01

    Flights at low altitudes in close proximity to electrical transmission infrastructure present serious navigational challenges: GPS and radio communication quality is variable and yet tight position control is needed to measure defects while avoiding collisions with ground structures. To advance unmanned aerial vehicle (UAV) navigation technology while accomplishing a task with economic and societal benefit, a high voltage electrical infrastructure inspection reference mission was designed. An integrated air-ground platform was developed for this mission and tested in two days of experimental flights to determine whether navigational augmentation was needed to successfully conduct a controlled inspection experiment. The airborne component of the platform was a multirotor UAV built from commercial off-the-shelf hardware and software, and the ground component was a commercial laptop running open source software. A compact ultraviolet sensor mounted on the UAV can locate 'hot spots' (potential failure points in the electric grid), so long as the UAV flight path adequately samples the airspace near the power grid structures. To improve navigation, the platform was supplemented with two navigation technologies: lidar-to-polyhedron preflight processing for obstacle demarcation and inspection distance planning, and trajectory management software to enforce inspection standoff distance. Both navigation technologies were essential to obtaining useful results from the hot spot sensor in this obstacle-rich, low-altitude airspace. Because the electrical grid extends into crowded airspaces, the UAV position was tracked with NASA unmanned aerial system traffic management (UTM) technology. The following results were obtained: (1) Inspection of high-voltage electrical transmission infrastructure to locate 'hot spots' of ultraviolet emission requires navigation methods that are not broadly available and are not needed at higher altitude flights above ground structures. (2) The sensing capability of a novel airborne UV detector was verified with a standard ground-based instrument. Flights with this sensor showed that UAV measurement operations and recording methods are viable. With improved sensor range, UAVs equipped with compact UV sensors could serve as the detection elements in a self-diagnosing power grid. (3) Simplification of rich lidar maps to polyhedral obstacle maps reduces data volume by orders of magnitude, so that computation with the resultant maps in real time is possible. This enables real-time obstacle avoidance autonomy. Stable navigation may be feasible in the GPS-deprived environment near transmission lines by a UAV that senses ground structures and compares them to these simplified maps. (4) A new, formally verified path conformance software system that runs onboard a UAV was demonstrated in flight for the first time. It successfully maneuvered the aircraft after a sudden lateral perturbation that models a gust of wind, and processed lidar-derived polyhedral obstacle maps in real time. (5) Tracking of the UAV in the national airspace using the NASA UTM technology was a key safety component of this reference mission, since the flights were conducted beneath the landing approach to a heavily used runway. Comparison to autopilot tracking showed that UTM tracking accurately records the UAV position throughout the flight path.
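    The lidar-to-polyhedron simplification above makes real-time obstacle checks feasible because a convex polyhedron reduces to a handful of half-space tests. The Python fragment below is a small sketch of that kind of test; the cube obstacle is invented for illustration, whereas the real maps would come from lidar preprocessing.

```python
# Sketch of a point-vs-polyhedron keep-out test. A convex obstacle is
# stored as half-spaces (n . x <= d); a position is inside the keep-out
# volume iff it satisfies every face inequality.

import numpy as np

def inside(point, halfspaces):
    """True if the point lies within the obstacle's keep-out volume."""
    return all(np.dot(n, point) <= d for n, d in halfspaces)

# A unit cube around the origin as six half-spaces (normal, offset):
cube = [(np.array([ 1, 0, 0]), 0.5), (np.array([-1, 0, 0]), 0.5),
        (np.array([0,  1, 0]), 0.5), (np.array([0, -1, 0]), 0.5),
        (np.array([0, 0,  1]), 0.5), (np.array([0, 0, -1]), 0.5)]

print(inside(np.array([0.2, 0.0, 0.3]), cube))   # True  -> too close, avoid
print(inside(np.array([2.0, 0.0, 0.0]), cube))   # False -> clear
```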

  17. NASA JPL Distributed Systems Technology (DST) Object-Oriented Component Approach for Software Inter-Operability and Reuse

    NASA Technical Reports Server (NTRS)

    Hall, Laverne; Hung, Chaw-Kwei; Lin, Imin

    2000-01-01

    The purpose of this paper is to describe the NASA JPL Distributed Systems Technology (DST) Section's object-oriented component approach to open, interoperable systems software development and software reuse. It addresses what is meant by the term object component software, gives an overview of the component-based development approach and how it relates to infrastructure support of software architectures and promotes reuse, enumerates the benefits of this approach, and gives examples of application prototypes demonstrating its usage and advantages. The object-oriented component technology approach to system development and software reuse will apply to several areas within JPL, and possibly across other NASA Centers.

  18. Federated ontology-based queries over cancer data

    PubMed Central

    2012-01-01

    Background Personalised medicine provides patients with treatments that are specific to their genetic profiles. It requires efficient data sharing of disparate data types across a variety of scientific disciplines, such as molecular biology, pathology, radiology and clinical practice. Personalised medicine aims to offer the safest and most effective therapeutic strategy based on the gene variations of each subject. In particular, this is valid in oncology, where knowledge about genetic mutations has already led to new therapies. Current molecular biology techniques (microarrays, proteomics, epigenetic technology and improved DNA sequencing technology) enable better characterisation of cancer tumours. The vast amounts of data, however, coupled with the use of different terms - or semantic heterogeneity - in each discipline make the retrieval and integration of information difficult. Results Existing software infrastructures for data-sharing in the cancer domain, such as caGrid, support access to distributed information. caGrid follows a service-oriented model-driven architecture. Each data source in caGrid is associated with metadata at increasing levels of abstraction, including syntactic, structural, reference and domain metadata. The domain metadata consists of ontology-based annotations associated with the structural information of each data source. However, caGrid's current querying functionality is provided at the structural metadata level, without capitalising on the ontology-based annotations. This paper presents the design of and theoretical foundations for distributed ontology-based queries over cancer research data. Concept-based queries are reformulated to the target query language, where join conditions between multiple data sources are found by exploiting the semantic annotations. The system has been implemented, as a proof of concept, over the caGrid infrastructure. The approach is applicable to other model-driven architectures. A graphical user interface has been developed, supporting ontology-based queries over caGrid data sources. An extensive evaluation of the query reformulation technique is included. Conclusions To support personalised medicine in oncology, it is crucial to retrieve and integrate molecular, pathology, radiology and clinical data in an efficient manner. The semantic heterogeneity of the data makes this a challenging task. Ontologies provide a formal framework to support querying and integration. This paper provides an ontology-based solution for querying distributed databases over service-oriented, model-driven infrastructures. PMID:22373043
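    The key reformulation idea, deriving join conditions from shared semantic annotations, can be sketched in a few lines of Python. The concept IRI, table and column names below are invented for illustration; this is the pattern, not the caGrid implementation.

```python
# Sketch of ontology-driven join discovery: columns in different data
# sources annotated with the same concept supply the join condition.

annotations = {
    # ontology concept -> data sources (table, column) annotated with it
    "NCIT:Patient": [("pathology_cases", "patient_id"),
                     ("radiology_scans", "subject")],
}

def join_clause(concept):
    """Two sources annotated with the same concept give the join key."""
    (t1, c1), (t2, c2) = annotations[concept][:2]
    return f"{t1} JOIN {t2} ON {t1}.{c1} = {t2}.{c2}"

print("SELECT * FROM " + join_clause("NCIT:Patient"))
# SELECT * FROM pathology_cases JOIN radiology_scans
#   ON pathology_cases.patient_id = radiology_scans.subject
```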

  19. Self-service for software development projects and HPC activities

    NASA Astrophysics Data System (ADS)

    Husejko, M.; Høimyr, N.; Gonzalez, A.; Koloventzos, G.; Asbury, D.; Trzcinska, A.; Agtzidis, I.; Botrel, G.; Otto, J.

    2014-05-01

    This contribution describes how CERN has implemented several essential tools for agile software development processes, ranging from version control (Git) to issue tracking (Jira) and documentation (Wikis). Running such services in a large organisation like CERN requires many administrative actions both by users and service providers, such as creating software projects, managing access rights, users and groups, and performing tool-specific customisation. Dealing with these requests manually would be a time-consuming task. Another area of our CERN computing services that has required dedicated manual support has been clusters for specific user communities with special needs. Our aim is to move all our services to a layered approach, with server infrastructure running on the internal cloud computing infrastructure at CERN. This contribution illustrates how we plan to optimise the management of our services by means of an end-user facing platform acting as a portal into all the related services for software projects, inspired by popular portals for open-source developments such as Sourceforge, GitHub and others. Furthermore, the contribution will discuss recent activities with tests and evaluations of High Performance Computing (HPC) applications on different hardware and software stacks, and plans to offer a dynamically scalable HPC service at CERN, based on affordable hardware.

  20. Online molecular image repository and analysis system: A multicenter collaborative open-source infrastructure for molecular imaging research and application.

    PubMed

    Rahman, Mahabubur; Watabe, Hiroshi

    2018-05-01

    Molecular imaging serves as an important tool for researchers and clinicians to visualize and investigate complex biochemical phenomena using specialized instruments; these instruments are either used individually or in combination with targeted imaging agents to obtain images related to specific diseases with high sensitivity, specificity, and signal-to-noise ratios. However, molecular imaging, which is a multidisciplinary research field, faces several challenges, including the integration of imaging informatics with bioinformatics and medical informatics, the requirement of reliable and robust image analysis algorithms, effective quality control of imaging facilities, and those related to individualized disease mapping, data sharing, software architecture, and knowledge management. As a cost-effective and open-source approach to address these challenges related to molecular imaging, we developed a flexible, transparent, and secure infrastructure, named MIRA, which stands for Molecular Imaging Repository and Analysis, primarily using the Python programming language and a MySQL relational database system deployed on a Linux server. MIRA is designed with a centralized image archiving infrastructure and information database so that a multicenter collaborative informatics platform can be built. The capability of dealing with metadata, image file format normalization, and storing and viewing different types of documents and multimedia files makes MIRA considerably flexible. With features like logging, auditing, commenting, sharing, and searching, MIRA is useful as an Electronic Laboratory Notebook for effective knowledge management. In addition, the centralized approach of MIRA facilitates on-the-fly access to all its features remotely through any web browser. Furthermore, the open-source approach provides the opportunity for sustainable continued development. MIRA offers an infrastructure that can be used as a cross-boundary collaborative molecular imaging research platform supporting rapid advances in cancer diagnosis and therapeutics.
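    As a minimal sketch of the Python-plus-MySQL archiving pattern the paper describes (not MIRA's actual schema), the fragment below stores one image record's metadata in a relational table while the file itself stays on disk. It assumes the mysql-connector-python package; the table layout, credentials and paths are hypothetical.

```python
# Sketch of centralized image-metadata archiving with Python + MySQL.
# Table layout and credentials are hypothetical.

import mysql.connector

conn = mysql.connector.connect(host="localhost", user="mira",
                               password="secret", database="mira")
cur = conn.cursor()

# Store one image record: the file stays on disk, the database keeps
# normalized metadata so multicenter searches stay fast.
cur.execute(
    "INSERT INTO images (study_id, modality, path, acquired_at) "
    "VALUES (%s, %s, %s, %s)",
    ("PET-2018-007", "PET", "/archive/pet/2018/007.dcm",
     "2018-03-14 09:30"),
)
conn.commit()
cur.close()
conn.close()
```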
