Sample records for open cloud testbed

  1. HyspIRI Low Latency Concept and Benchmarks

    NASA Technical Reports Server (NTRS)

    Mandl, Dan

    2010-01-01

    Topics include HyspIRI low latency data ops concept, HyspIRI data flow, ongoing efforts, experiment with Web Coverage Processing Service (WCPS) approach to injecting new algorithms into SensorWeb, low fidelity HyspIRI IPM testbed, compute cloud testbed, open cloud testbed environment, Global Lambda Integrated Facility (GLIF) and OCC collaboration with Starlight, delay tolerant network (DTN) protocol benchmarking, and EO-1 configuration for preliminary DTN prototype.

  2. Elastic Cloud Computing Infrastructures in the Open Cirrus Testbed Implemented via Eucalyptus

    NASA Astrophysics Data System (ADS)

    Baun, Christian; Kunze, Marcel

    Cloud computing realizes the advantages and overcomes some restrictions of the grid computing paradigm. Elastic infrastructures can easily be created and managed by cloud users. In order to accelerate the research on data center management and cloud services, the OpenCirrus(TM) research testbed has been started by HP, Intel and Yahoo!. Although commercial cloud offerings are proprietary, Open Source solutions exist in the field of IaaS with Eucalyptus, PaaS with AppScale and at the applications layer with Hadoop MapReduce. This paper examines the I/O performance of cloud computing infrastructures implemented with Eucalyptus in contrast to Amazon S3.
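
    The benchmark concept in this record, timing storage I/O against an S3-compatible service, can be sketched in a few lines. This is a minimal illustration, not the authors' code: the endpoint URL, bucket name, and credentials are placeholders (Eucalyptus exposes an S3-compatible interface called Walrus), and the boto3 client library is an assumption.

    ```python
    # Hedged sketch: mean upload throughput against an S3-compatible endpoint.
    # Endpoint, bucket, and credentials are placeholders, not values from the paper.
    import time
    import boto3

    client = boto3.client(
        "s3",
        endpoint_url="http://walrus.example.org:8773/services/Walrus",  # hypothetical
        aws_access_key_id="...",
        aws_secret_access_key="...",
    )

    payload = b"x" * (8 * 1024 * 1024)  # 8 MiB test object

    def mean_upload_seconds(bucket, key, data, repeats=5):
        """Time several uploads of `data` and return the mean seconds per upload."""
        total = 0.0
        for i in range(repeats):
            start = time.perf_counter()
            client.put_object(Bucket=bucket, Key=f"{key}-{i}", Body=data)
            total += time.perf_counter() - start
        return total / repeats

    secs = mean_upload_seconds("benchmark-bucket", "io-test", payload)
    print(f"mean upload throughput: {len(payload) / secs / 1e6:.1f} MB/s")
    ```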

  3. Design and deployment of an elastic network test-bed in IHEP data center based on SDN

    NASA Astrophysics Data System (ADS)

    Zeng, Shan; Qi, Fazhi; Chen, Gang

    2017-10-01

    High energy physics experiments produce huge amounts of raw data, but because network resources are shared, there is no guaranteed bandwidth for each experiment, which can cause link congestion. Meanwhile, with the development of cloud computing technologies, IHEP has established a cloud platform based on OpenStack that ensures the flexibility of computing and storage resources, and more and more computing applications have been deployed on virtual machines managed by OpenStack. Under the traditional network architecture, however, network capacity cannot be requested elastically, which becomes a bottleneck restricting the flexible application of cloud computing. To solve these problems, we propose an elastic cloud data center network architecture based on SDN, design a high-performance controller cluster based on OpenDaylight, and present our current test results.

  4. A Test-Bed of Secure Mobile Cloud Computing for Military Applications

    DTIC Science & Technology

    2016-09-13

    ...searching databases. This kind of application is a typical example of mobile cloud computing (MCC). MCC has many applications in the military... Final report, 1 Aug 2014 to 31 Jul 2016. Keywords: test-bed, mobile cloud computing, security, military applications.

  5. A satellite observation test bed for cloud parameterization development

    NASA Astrophysics Data System (ADS)

    Lebsock, M. D.; Suselj, K.

    2015-12-01

    We present an observational test-bed of cloud and precipitation properties derived from CloudSat, CALIPSO, and the A-Train. The focus of the test-bed is on marine boundary layer clouds, including stratocumulus and cumulus and the transition between these cloud regimes. Test-bed properties include the cloud cover and three-dimensional cloud fraction, along with the cloud water path, precipitation water content, and associated radiative fluxes. We also include the subgrid-scale distribution of cloud, precipitation, and radiative quantities, which must be diagnosed by a model parameterization. The test-bed further includes meteorological variables from the Modern-Era Retrospective analysis for Research and Applications (MERRA). The MERRA variables provide the initialization and forcing datasets to run a parameterization in Single Column Model (SCM) mode. We show comparisons of an Eddy-Diffusivity/Mass-Flux (EDMF) parameterization, coupled to microphysics and macrophysics packages and run in SCM mode, with observed clouds. Comparisons are performed regionally in areas of climatological subsidence, as well as stratified by dynamical and thermodynamical variables. The comparisons demonstrate the ability of the EDMF model to capture the observed transitions between subtropical stratocumulus and cumulus cloud regimes.

  6. Investigating the Application of Moving Target Defenses to Network Security

    DTIC Science & Technology

    2013-08-01

    ...developing an MTD testbed using OpenStack [14] to show that our MTD design can actually work. Building an MTD system in a cloud infrastructure will be... Information Intelligence Research. New York, USA: ACM, 2013. [14] OpenStack, "OpenStack: The Folsom release," http://www.openstack.org/software

  7. Increasing the value of geospatial informatics with open approaches for Big Data

    NASA Astrophysics Data System (ADS)

    Percivall, G.; Bermudez, L. E.

    2017-12-01

    Open approaches to big data provide geoscientists with new capabilities to address problems of unmatched size and complexity. Consensus approaches for Big Geo Data have been addressed in multiple international workshops and testbeds organized by the Open Geospatial Consortium (OGC) in the past year. Participants came from government (NASA, ESA, USGS, NOAA, DOE); research (ORNL, NCSA, IU, JPL, CRIM, RENCI); industry (ESRI, DigitalGlobe, IBM, rasdaman); standards (JTC 1/NIST); and open source software communities. Results from the workshops and testbeds are documented in Testbed reports and a White Paper published by the OGC. The White Paper identifies the following use cases. Collection and ingest: remotely sensed data processing; data stream processing. Prepare and structure: SQL and NoSQL databases; data linking; feature identification. Analytics and visualization: spatial-temporal analytics; machine learning; data exploration. Modeling and prediction: integrated environmental models; urban 4D models. Open implementations were developed in the Arctic Spatial Data Pilot using Discrete Global Grid Systems (DGGS) and in Testbeds using WPS and ESGF to publish climate predictions. Further development activities to advance open implementations of Big Geo Data include the following. Open cloud computing: avoid vendor lock-in through API interoperability and application portability. Open source extensions: implement geospatial data representations in projects from Apache, LocationTech, and OSGeo; investigate parallelization strategies for N-dimensional spatial data. Geospatial data representations: schemas to improve processing and analysis using geospatial concepts (features, coverages, DGGS); use geospatial encodings like NetCDF and GeoPackage. Big linked geodata: use linked data methods scaled to big geodata. Analysis-ready data: support "download as last resort" and "analytics as a service"; promote elements common to "datacubes."

  8. The OGC Innovation Program Testbeds - Advancing Architectures for Earth and Systems

    NASA Astrophysics Data System (ADS)

    Bermudez, L. E.; Percivall, G.; Simonis, I.; Serich, S.

    2017-12-01

    The OGC Innovation Program provides a collaborative, agile process for solving challenging science problems and advancing new technologies. Since 1999, 100 initiatives have taken place, from multi-million dollar testbeds to small interoperability experiments. During these initiatives, sponsors and technology implementers (including academia and the private sector) come together to solve problems, produce prototypes, develop demonstrations, provide best practices, and advance the future of standards. This presentation will cover the latest system architectures resulting from OGC Testbed 13 that can be used for Earth and space systems, including the following components: an elastic cloud autoscaler for Earth Observations (EO) using a WPS in an ESGF hybrid climate data research platform; accessibility of climate data for scientist and non-scientist users via on-demand models wrapped in WPS; standard descriptions for containerized applications to discover processes on the cloud, including the use of linked data, a WPS extension for hybrid clouds, and links to hybrid big data stores; OpenID and OAuth to secure OGC services with built-in Attribute Based Access Control (ABAC) infrastructures leveraging GeoDRM patterns; publishing and access of vector tiles, including compression and attribute options reusing patterns from WMS, WMTS, and WFS; servers providing 3D Tiles and streaming of data, including Indexed 3D Scene Layers (I3S), CityGML, and the Common DataBase (CDB); and asynchronous services with advanced push-notification strategies, using a filter language instead of simple topic subscriptions, that can be used across OGC services. Testbed 14 will continue advancing topics like Big Data, security, and streaming, as well as making OGC services easier to use (e.g., RESTful APIs). The Call for Participation will be issued in December, and responses are due in mid-January 2018.

  9. A Hybrid Cloud Computing Service for Earth Sciences

    NASA Astrophysics Data System (ADS)

    Yang, C. P.

    2016-12-01

    Cloud computing is becoming the norm for providing computing capabilities for advancing Earth sciences, including big Earth data management, processing, analytics, model simulations, and many other aspects. A hybrid spatiotemporal cloud computing service has been built at the George Mason NSF Spatiotemporal Innovation Center to meet these demands. This paper reports on several aspects of the service: 1) the hardware includes 500 computing servers and close to 2 PB of storage, as well as connections to XSEDE Jetstream and the Caltech experimental cloud computing environment for resource sharing; 2) the cloud service is geographically distributed across the east coast, west coast, and central region; 3) the cloud includes private clouds managed using OpenStack and Eucalyptus, with DC2 used to bridge these and the public AWS cloud for interoperability and for sharing computing resources when demand surges; 4) the cloud service supports the NSF EarthCube program through the ECITE project, and ESIP through the ESIP cloud computing cluster, the semantics testbed cluster, and other clusters; 5) the cloud service is also available to the Earth science communities for conducting geoscience research. A brief introduction on how to use the cloud service is included.

  10. Solar Resource Assessment with Sky Imagery and a Virtual Testbed for Sky Imager Solar Forecasting

    NASA Astrophysics Data System (ADS)

    Kurtz, Benjamin Bernard

    In recent years, ground-based sky imagers have emerged as a promising tool for forecasting solar energy on short time scales (0 to 30 minutes ahead). Following the development of sky imager hardware and algorithms at UC San Diego, we present three new or improved algorithms for sky imager forecasting and forecast evaluation. First, we present an algorithm for measuring irradiance with a sky imager. Sky imager forecasts are often used in conjunction with other instruments for measuring irradiance, so this has the potential to decrease instrumentation costs and logistical complexity. In particular, the forecast algorithm itself often relies on knowledge of the current irradiance which can now be provided directly from the sky images. Irradiance measurements are accurate to within about 10%. Second, we demonstrate a virtual sky imager testbed that can be used for validating and enhancing the forecast algorithm. The testbed uses high-quality (but slow) simulations to produce virtual clouds and sky images. Because virtual cloud locations are known, much more advanced validation procedures are possible with the virtual testbed than with measured data. In this way, we are able to determine that camera geometry and non-uniform evolution of the cloud field are the two largest sources of forecast error. Finally, with the assistance of the virtual sky imager testbed, we develop improvements to the cloud advection model used for forecasting. The new advection schemes are 10-20% better at short time horizons.
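
    The advection step at the core of such forecasts can be sketched compactly: under a frozen-cloud assumption, the current cloud-decision image is shifted by the estimated motion vector and the future shading at the site is read off. This is a minimal illustration with synthetic data, not the UCSD algorithm; the motion vector, grid, and site pixel are assumptions.

    ```python
    # Hedged sketch of a frozen-field cloud advection forecast.
    import numpy as np

    def advect_forecast(cloud_mask, motion_px_per_min, horizon_min):
        """Shift a 2-D boolean cloud mask by motion*horizon (frozen-cloud assumption).
        np.roll wraps at the borders; a real implementation would pad instead."""
        dy, dx = (int(round(v * horizon_min)) for v in motion_px_per_min)
        return np.roll(np.roll(cloud_mask, dy, axis=0), dx, axis=1)

    rng = np.random.default_rng(0)
    mask = rng.random((100, 100)) > 0.6            # synthetic cloud-decision image
    forecast = advect_forecast(mask, (0.0, 2.0), horizon_min=15)
    site = (50, 50)                                # image pixel mapped to the site
    print("site shaded in 15 min:", bool(forecast[site]))
    ```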

  11. The Magellan Final Report on Cloud Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coghlan, Susan; Yelick, Katherine

    The goal of Magellan, a project funded through the U.S. Department of Energy (DOE) Office of Advanced Scientific Computing Research (ASCR), was to investigate the potential role of cloud computing in addressing the computing needs of the DOE Office of Science (SC), particularly related to serving the needs of mid-range computing and future data-intensive computing workloads. A set of research questions was formed to probe various aspects of cloud computing, from performance and usability to cost. To address these questions, a distributed testbed infrastructure was deployed at the Argonne Leadership Computing Facility (ALCF) and the National Energy Research Scientific Computing Center (NERSC). The testbed was designed to be flexible and capable enough to explore a variety of computing models and hardware design points in order to understand the impact for various scientific applications. During the project, the testbed also served as a valuable resource to application scientists. Applications from a diverse set of projects, such as MG-RAST (a metagenomics analysis server), the Joint Genome Institute, the STAR experiment at the Relativistic Heavy Ion Collider, and the Laser Interferometer Gravitational Wave Observatory (LIGO), were used by the Magellan project for benchmarking within the cloud, and the project teams were also able to accomplish important production science utilizing the Magellan cloud resources.

  12. Investigation of tropical diurnal convection biases in a climate model using TWP-ICE observations and convection-permitting simulations

    NASA Astrophysics Data System (ADS)

    Lin, W.; Xie, S.; Jackson, R. C.; Endo, S.; Vogelmann, A. M.; Collis, S. M.; Golaz, J. C.

    2017-12-01

    Climate models are known to have difficulty in simulating tropical diurnal convection, which exhibits distinct characteristics over land and open ocean. While the causes are rooted in deficiencies in convective parameterization in general, the lack of representation of mesoscale dynamics, in terms of land-sea breezes, convective organization, and the propagation of convection-induced gravity waves, also plays a critical role. In this study, the problem is investigated at the process level with the U.S. Department of Energy Accelerated Climate Modeling for Energy (ACME) model in short-term hindcast mode using the Cloud Associated Parameterization Testbed (CAPT) framework. Convective-scale radar retrievals and observation-driven convection-permitting simulations for the Tropical Warm Pool-International Cloud Experiment (TWP-ICE) cases are used to guide the analysis of the underlying processes. The emphasis is on linking deficiencies in the representation of detailed process elements to the model biases in diurnal convective properties and their contrast among inland, coastal, and open ocean conditions.

  13. Cross layer optimization for cloud-based radio over optical fiber networks

    NASA Astrophysics Data System (ADS)

    Shao, Sujie; Guo, Shaoyong; Qiu, Xuesong; Yang, Hui; Meng, Luoming

    2016-07-01

    To support 5G communication, the cloud radio access network is a paradigm introduced by operators that aggregates all base stations' computational resources into a cloud BBU pool. Interactions between RRHs and BBUs, and resource scheduling among BBUs in the cloud, have become more frequent and complex with growing system scale and user requirements, which drives networking demand among RRHs and BBUs and calls for elastic optical fiber switching and networking. In such a network, multiple strata of resources (radio, optical, and BBU processing) are interwoven with each other. In this paper, we propose a novel multiple stratum optimization (MSO) architecture for cloud-based radio over optical fiber networks (C-RoFN) with software-defined networking, and introduce a global evaluation strategy (GES) within the proposed architecture. MSO enhances responsiveness to end-to-end user demands and globally optimizes radio frequency, optical spectrum, and BBU processing resources to maximize radio coverage. The feasibility and efficiency of the proposed architecture with the GES strategy are experimentally verified on an OpenFlow-enabled testbed in terms of resource occupation and path provisioning latency.

  14. Developing the science product algorithm testbed for Chinese next-generation geostationary meteorological satellites: Fengyun-4 series

    NASA Astrophysics Data System (ADS)

    Min, Min; Wu, Chunqiang; Li, Chuan; Liu, Hui; Xu, Na; Wu, Xiao; Chen, Lin; Wang, Fu; Sun, Fenglin; Qin, Danyu; Wang, Xi; Li, Bo; Zheng, Zhaojun; Cao, Guangzhen; Dong, Lixin

    2017-08-01

    Fengyun-4A (FY-4A), the first of the Chinese next-generation geostationary meteorological satellites, launched in 2016, offers several advances over the FY-2 series: more spectral bands, faster imaging, and infrared hyperspectral measurements. To support the major objective of developing prototypes of the FY-4 science algorithms, two science product algorithm testbeds, one for imagers and one for sounders, have been developed by the scientists in the FY-4 Algorithm Working Group (AWG). Both testbeds, written in the FORTRAN and C programming languages for Linux or UNIX systems, have been tested successfully using Intel/g compilers. Some important FY-4 science products, including cloud mask, cloud properties, and temperature profiles, have been retrieved successfully using a proxy imager (Himawari-8/Advanced Himawari Imager, AHI) and sounder data obtained from the Atmospheric InfraRed Sounder, demonstrating the testbeds' robustness. In addition, in early 2016 the FY-4 AWG developed, based on the imager testbed, a near real-time processing system for Himawari-8/AHI data for use by Chinese weather forecasters. Consequently, these robust and flexible science product algorithm testbeds have provided essential and productive tools for popularizing FY-4 data and developing substantial improvements in FY-4 products.

  15. Experimental demonstration of multi-dimensional resources integration for service provisioning in cloud radio over fiber network

    NASA Astrophysics Data System (ADS)

    Yang, Hui; Zhang, Jie; Ji, Yuefeng; He, Yongqi; Lee, Young

    2016-07-01

    Cloud radio access network (C-RAN) has become a promising scenario for accommodating high-performance services with ubiquitous user coverage and real-time cloud computing in the 5G era. However, the radio network, the optical network, and the processing-unit cloud have been decoupled from each other, so their resources are controlled independently. With the growing number of mobile internet users, the traditional architecture cannot implement resource optimization and scheduling for high-level service guarantees because of the communication obstacles among these strata. In this paper, we report a study on multi-dimensional resources integration (MDRI) for service provisioning in a cloud radio over fiber network (C-RoFN). A resources integrated provisioning (RIP) scheme using an auxiliary graph is introduced based on the proposed architecture. MDRI enhances responsiveness to dynamic end-to-end user demands and globally optimizes radio frequency, optical network, and processing resources to maximize radio coverage. The feasibility of the proposed architecture is experimentally verified on an OpenFlow-based enhanced SDN testbed. The performance of the RIP scheme under a heavy-traffic-load scenario is also quantitatively evaluated, in terms of resource utilization, path blocking probability, network cost, and path provisioning latency, to demonstrate the efficiency of the proposed MDRI architecture compared with other provisioning schemes.

  16. Experimental demonstration of multi-dimensional resources integration for service provisioning in cloud radio over fiber network.

    PubMed

    Yang, Hui; Zhang, Jie; Ji, Yuefeng; He, Yongqi; Lee, Young

    2016-07-28

    Cloud radio access network (C-RAN) has become a promising scenario for accommodating high-performance services with ubiquitous user coverage and real-time cloud computing in the 5G era. However, the radio network, the optical network, and the processing-unit cloud have been decoupled from each other, so their resources are controlled independently. With the growing number of mobile internet users, the traditional architecture cannot implement resource optimization and scheduling for high-level service guarantees because of the communication obstacles among these strata. In this paper, we report a study on multi-dimensional resources integration (MDRI) for service provisioning in a cloud radio over fiber network (C-RoFN). A resources integrated provisioning (RIP) scheme using an auxiliary graph is introduced based on the proposed architecture. MDRI enhances responsiveness to dynamic end-to-end user demands and globally optimizes radio frequency, optical network, and processing resources to maximize radio coverage. The feasibility of the proposed architecture is experimentally verified on an OpenFlow-based enhanced SDN testbed. The performance of the RIP scheme under a heavy-traffic-load scenario is also quantitatively evaluated, in terms of resource utilization, path blocking probability, network cost, and path provisioning latency, to demonstrate the efficiency of the proposed MDRI architecture compared with other provisioning schemes.

  17. Performance evaluation of multi-stratum resources optimization with network functions virtualization for cloud-based radio over optical fiber networks.

    PubMed

    Yang, Hui; He, Yongqi; Zhang, Jie; Ji, Yuefeng; Bai, Wei; Lee, Young

    2016-04-18

    Cloud radio access network (C-RAN) has become a promising scenario for accommodating high-performance services with ubiquitous user coverage and real-time cloud computing using cloud BBUs. In our previous work, we implemented cross-stratum optimization of optical-network and application-stratum resources, which allows services to be accommodated in optical networks. This study extends that work to consider multi-dimensional resource optimization of radio, optical, and BBU processing in the 5G era. We propose a novel multi-stratum resources optimization (MSRO) architecture with network functions virtualization for cloud-based radio over optical fiber networks (C-RoFN) using software-defined control. A global evaluation scheme (GES) for MSRO in C-RoFN is introduced based on the proposed architecture. MSRO enhances responsiveness to dynamic end-to-end user demands and globally optimizes radio frequency, optical, and BBU resources to maximize radio coverage. The efficiency and feasibility of the proposed architecture are experimentally demonstrated on an OpenFlow-based enhanced SDN testbed. The performance of GES under a heavy-traffic-load scenario is also quantitatively evaluated based on the MSRO architecture, in terms of resource occupation rate and path provisioning latency, compared with another provisioning scheme.

  18. Experimental demonstration of multi-dimensional resources integration for service provisioning in cloud radio over fiber network

    PubMed Central

    Yang, Hui; Zhang, Jie; Ji, Yuefeng; He, Yongqi; Lee, Young

    2016-01-01

    Cloud radio access network (C-RAN) has become a promising scenario for accommodating high-performance services with ubiquitous user coverage and real-time cloud computing using cloud BBUs. However, the radio network, the optical network, and the processing-unit cloud have been decoupled from each other, so their resources are controlled independently. With the growing number of mobile internet users, the traditional architecture cannot implement resource optimization and scheduling for high-level service guarantees because of the communication obstacles among these strata. In this paper, we report a study on multi-dimensional resources integration (MDRI) for service provisioning in a cloud radio over fiber network (C-RoFN). A resources integrated provisioning (RIP) scheme using an auxiliary graph is introduced based on the proposed architecture. MDRI enhances responsiveness to dynamic end-to-end user demands and globally optimizes radio frequency, optical network, and processing resources to maximize radio coverage. The feasibility of the proposed architecture is experimentally verified on an OpenFlow-based enhanced SDN testbed. The performance of the RIP scheme under a heavy-traffic-load scenario is also quantitatively evaluated, in terms of resource utilization, path blocking probability, network cost, and path provisioning latency, to demonstrate the efficiency of the proposed MDRI architecture compared with other provisioning schemes. PMID:27465296

  19. A price- and time-slot-negotiation mechanism for Cloud service reservations.

    PubMed

    Son, Seokho; Sim, Kwang Mong

    2012-06-01

    When making reservations for Cloud services, consumers and providers need to establish service-level agreements through negotiation. Whereas it is essential for both a consumer and a provider to reach an agreement on the price of a service and when to use the service, to date there is little or no negotiation support for both price and time-slot negotiations (PTNs) for Cloud service reservations. This paper presents a multi-issue negotiation mechanism to facilitate the following: 1) PTNs between Cloud agents and 2) tradeoffs between price and time-slot utilities. Unlike many existing negotiation mechanisms, in which a negotiation agent can only make one proposal at a time, agents in this work are designed to concurrently make multiple proposals in a negotiation round that generate the same aggregated utility, differing only in terms of individual price and time-slot utilities. Another novelty of this work is the formulation of a time-slot utility function that characterizes preferences for different time slots. These ideas are implemented in an agent-based Cloud testbed. Using the testbed, experiments were carried out to compare this work with related approaches. Empirical results show that PTN agents reach faster agreements and achieve higher utilities than other related approaches. A case study was carried out to demonstrate the application of the PTN mechanism for pricing Cloud resources.
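
    The central idea of this record, several proposals per round that share one aggregated utility but split it differently between price and time-slot utility, can be sketched as follows. The linear aggregation, equal weights, and the target utility are illustrative assumptions, not the paper's exact functions.

    ```python
    # Hedged sketch: enumerate iso-utility (price, time-slot) proposals for one round.
    def aggregated_utility(u_price, u_slot, w_price=0.5, w_slot=0.5):
        return w_price * u_price + w_slot * u_slot

    def iso_utility_proposals(target_u, steps=5, w_price=0.5, w_slot=0.5):
        """(u_price, u_slot) pairs whose weighted sum equals target_u."""
        proposals = []
        for k in range(steps + 1):
            u_price = k / steps
            u_slot = (target_u - w_price * u_price) / w_slot
            if 0.0 <= u_slot <= 1.0:
                proposals.append((round(u_price, 3), round(u_slot, 3)))
        return proposals

    # One negotiation round at a conceded target utility of 0.6: every proposal
    # is equally acceptable to the proposer but trades price against time slot.
    for u_p, u_t in iso_utility_proposals(0.6):
        assert abs(aggregated_utility(u_p, u_t) - 0.6) < 1e-9
        print(f"propose price-utility={u_p}, time-slot-utility={u_t}")
    ```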

  1. A Business-to-Business Interoperability Testbed: An Overview

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kulvatunyou, Boonserm; Ivezic, Nenad; Martin, Monica

    In this paper, we describe a business-to-business (B2B) testbed co-sponsored by the Open Applications Group, Inc. (OAGI) and the National Institute of Standards and Technology (NIST) to advance enterprise e-commerce standards. We describe the business and technical objectives and initial activities within the B2B Testbed. We summarize our initial lessons learned, which form the requirements that drive the next-generation testbed development. We also give an overview of a promising testing framework architecture with which to drive the testbed developments, and outline future plans for the testbed development.

  2. Environmental assessment for the Atmospheric Radiation Measurement (ARM) Program: Southern Great Plains Cloud and Radiation Testbed (CART) site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Policastro, A.J.; Pfingston, J.M.; Maloney, D.M.

    The Atmospheric Radiation Measurement (ARM) Program is aimed at supplying improved predictive capability of climate change, particularly the prediction of cloud-climate feedback. The objective will be achieved by measuring the atmospheric radiation and the physical and meteorological quantities that control solar radiation in the earth's atmosphere, and using this information to test global climate and related models. The proposed action is to construct and operate a Cloud and Radiation Testbed (CART) research site in the southern Great Plains as part of the Department of Energy's Atmospheric Radiation Measurement Program, whose objective is to develop an improved predictive capability of global climate change. The purpose of this CART research site in southern Kansas and northern Oklahoma would be to collect meteorological and other scientific information to better characterize the processes controlling radiation transfer on a global scale. Impacts which could result from this facility are described.

  3. Delay Tolerant Networking on NASA's Space Communication and Navigation Testbed

    NASA Technical Reports Server (NTRS)

    Johnson, Sandra; Eddy, Wesley

    2016-01-01

    This presentation covers the status of the implementation of open source software that implements the specifications developed by the CCSDS Working Group. Interplanetary Overlay Network (ION) is open source software implementing specifications developed by two international working groups, through the IETF and CCSDS. ION was implemented on the SCaN Testbed, a testbed located on an external pallet on the ISS, by the GRC team. The presentation covers the architecture of the system, high-level implementation details, and issues encountered in porting ION to VxWorks.

  4. Commissioning the CERN IT Agile Infrastructure with experiment workloads

    NASA Astrophysics Data System (ADS)

    Medrano Llamas, Ramón; Harald Barreiro Megino, Fernando; Kucharczyk, Katarzyna; Kamil Denis, Marek; Cinquilli, Mattia

    2014-06-01

    In order to ease the management of their infrastructure, most of the WLCG sites are adopting cloud-based strategies. In the case of CERN, the Tier 0 of the WLCG, the resource and configuration management of the computing center is being completely restructured under the codename Agile Infrastructure. Its goal is to manage 15,000 virtual machines by means of OpenStack middleware in order to unify all the resources in CERN's two datacenters: the one in Meyrin and the new one in Wigner, Hungary. During the commissioning of this infrastructure, CERN IT is offering an attractive amount of computing resources to the experiments (800 cores for ATLAS and CMS) through a private cloud interface. ATLAS and CMS have joined forces to exploit them by running stress tests and simulation workloads since November 2012. This work describes the experience of the first deployments of current experiment workloads on the CERN private cloud testbed. The paper is organized as follows: the first section explains the integration of the experiment workload management systems (WMS) with the cloud resources; the second section revisits the performance and stress testing performed with HammerCloud in order to evaluate and compare suitability for the experiment workloads; the third section goes deeper into dynamic provisioning techniques, such as the use of cloud APIs directly by the WMS. The paper finishes with a review of the conclusions and the challenges ahead.

  5. Open source IPSEC software in manned and unmanned space missions

    NASA Astrophysics Data System (ADS)

    Edwards, Jacob

    Network security is a major topic of research because cyber attackers pose a threat to national security. Securing ground-space communications for NASA missions is important because attackers could endanger mission success and human lives. This thesis describes how an open source IPsec software package was used to create a secure and reliable channel for ground-space communications. A cost-efficient, reproducible hardware testbed was also created to simulate ground-space communications. The testbed enables simulation of low-bandwidth and high-latency communication links to examine how the open source IPsec software reacts to these network constraints. Test cases were built that allowed for validation of the testbed and the open source IPsec software. The test cases also simulate using an IPsec connection from mission control ground routers to points of interest in outer space. The tested open source IPsec software did not meet all the requirements, and software changes were suggested to meet them.

  6. Managing autonomy levels in the SSM/PMAD testbed. [Space Station Power Management and Distribution

    NASA Technical Reports Server (NTRS)

    Ashworth, Barry R.

    1990-01-01

    It is pointed out that when autonomous operations are mixed with those of a manual nature, concepts concerning the boundary of operations and responsibility become clouded. The space station module power management and distribution (SSM/PMAD) automation testbed has the need for such mixed-mode capabilities. The concept of managing the SSM/PMAD testbed in the presence of changing levels of autonomy is examined. A knowledge-based approach to implementing autonomy management in the distributed SSM/PMAD utilizing a centralized planning system is presented. Its knowledge relations and system-wide interactions are discussed, along with the operational nature of the currently functioning SSM/PMAD knowledge-based systems.

  7. Urban Climate Resilience - Connecting climate models with decision support cyberinfrastructure using open standards

    NASA Astrophysics Data System (ADS)

    Bermudez, L. E.; Percivall, G.; Idol, T. A.

    2015-12-01

    Experts in climate modeling, remote sensing of the Earth, and cyberinfrastructure must work together in order to make climate predictions available to decision makers. Such experts and decision makers worked together in the Open Geospatial Consortium's (OGC) Testbed 11 to address a scenario of population displacement by coastal inundation due to predicted sea level rise. In a Policy Fact Sheet, "Harnessing Climate Data to Boost Ecosystem & Water Resilience", issued by the White House Office of Science and Technology Policy (OSTP) in December 2014, OGC committed to increasing access to climate change information using open standards. In July 2015, the OGC Testbed 11 Urban Climate Resilience activity delivered on that commitment with open-standards-based support for climate-change preparedness. Using open standards such as the OGC Web Coverage Service and Web Processing Service and the NetCDF and GMLJP2 encoding standards, Testbed 11 deployed an interoperable high-resolution flood model to bring climate model outputs together with global change assessment models and other remote sensing data for decision support. Methods to confirm model predictions and to allow "what-if" scenarios included in-situ sensor webs and crowdsourcing. The scenario was set in two locations: the San Francisco Bay Area and Mozambique. The scenarios demonstrated the interoperation and capabilities of open geospatial specifications in supporting data services and processing services. The resulting High Resolution Flood Information System addressed access and control of simulation models and high-resolution data in an open, worldwide, collaborative Web environment. The scenarios examined the feasibility and capability of existing OGC geospatial Web service specifications in supporting the on-demand, dynamic serving of flood information from models with forecasting capacity. Results of this testbed included the identification of standards and best practices that help researchers and cities deal with climate-related issues, and will now be deployed in pilot applications. The testbed also identified areas of additional development needed to identify the scientific investments and cyberinfrastructure approaches required to improve the application of climate science research results to urban climate resilience.
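
    The interaction pattern here, invoking a flood-model run through an OGC Web Processing Service, can be sketched with the owslib client library. The endpoint URL, process identifier, and input names below are hypothetical, not the actual Testbed 11 services.

    ```python
    # Hedged sketch of executing a WPS process and polling for its outputs.
    from owslib.wps import WebProcessingService

    wps = WebProcessingService("https://example.org/wps")   # hypothetical endpoint
    execution = wps.execute(
        "FloodModel:runScenario",                           # hypothetical process id
        inputs=[("seaLevelRiseMeters", "1.5"),              # hypothetical inputs
                ("regionOfInterest", "san-francisco-bay")],
    )
    while not execution.isComplete():
        execution.checkStatus(sleepSecs=10)                 # poll the status document
    print("status:", execution.status)
    for output in execution.processOutputs:
        print(output.identifier, output.reference)
    ```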

  8. A New User Interface for On-Demand Customizable Data Products for Sensors in a SensorWeb

    NASA Technical Reports Server (NTRS)

    Mandl, Daniel; Cappelaere, Pat; Frye, Stuart; Sohlberg, Rob; Ly, Vuong; Chien, Steve; Sullivan, Don

    2011-01-01

    A SensorWeb is a set of sensors, which can consist of ground, airborne, and space-based sensors, interoperating in an automated or autonomous collaborative manner. The NASA SensorWeb toolbox, developed at NASA/GSFC in collaboration with NASA/JPL, NASA/Ames, and other partners, is a set of software and standards that (1) enables users to create virtual private networks of sensors over open networks; (2) provides the capability to orchestrate their actions; (3) provides the capability to customize the output data products; and (4) enables automated delivery of the data products to the user's desktop. A recent addition to the SensorWeb toolbox is a new user interface, together with web services co-resident with the sensors, to enable rapid creation, loading, and execution of new algorithms for processing sensor data. The web service, along with the user interface, follows the Open Geospatial Consortium (OGC) standard called Web Coverage Processing Service (WCPS). This presentation details the prototype that was built and how the WCPS was tested against a HyspIRI flight testbed and an elastic computation cloud on the ground with EO-1 data. HyspIRI is a future NASA decadal mission. The elastic computation cloud stores EO-1 data and runs software similar to Amazon online shopping.
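
    The WCPS pattern amounts to shipping a small query, the algorithm, to a service that runs it next to the data and returns the derived product. A hedged sketch of such a request follows; the endpoint and the query text are illustrative, not the GSFC SensorWeb interface.

    ```python
    # Hedged sketch: submit a WCPS-style processing expression over HTTP.
    import requests

    # A WCPS-style expression: a normalized-difference index over two bands
    # of a hypothetical EO-1 Hyperion coverage.
    query = """
    for c in (EO1_Hyperion_scene)
    return encode(((c.band50 - c.band23) / (c.band50 + c.band23)), "GeoTIFF")
    """

    resp = requests.post("https://sensorweb.example.org/wcps",  # hypothetical endpoint
                         data={"query": query}, timeout=300)
    resp.raise_for_status()
    with open("band_ratio_product.tif", "wb") as f:
        f.write(resp.content)   # the service returns the encoded coverage
    ```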

  9. Thermodynamic and cloud parameter retrieval using infrared spectral data

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Smith, William L., Sr.; Liu, Xu; Larar, Allen M.; Huang, Hung-Lung A.; Li, Jun; McGill, Matthew J.; Mango, Stephen A.

    2005-01-01

    High-resolution infrared radiance spectra obtained from near nadir observations provide atmospheric, surface, and cloud property information. A fast radiative transfer model, including cloud effects, is used for atmospheric profile and cloud parameter retrieval. The retrieval algorithm is presented along with its application to recent field experiment data from the NPOESS Airborne Sounding Testbed - Interferometer (NAST-I). The retrieval accuracy dependence on cloud properties is discussed. It is shown that relatively accurate temperature and moisture retrievals can be achieved below optically thin clouds. For optically thick clouds, accurate temperature and moisture profiles down to cloud top level are obtained. For both optically thin and thick cloud situations, the cloud top height can be retrieved with an accuracy of approximately 1.0 km. Preliminary NAST-I retrieval results from the recent Atlantic-THORPEX Regional Campaign (ATReC) are presented and compared with coincident observations obtained from dropsondes and the nadir-pointing Cloud Physics Lidar (CPL).

  10. Cyber-Physical Multi-Core Optimization for Resource and Cache Effects (C2ORES)

    DTIC Science & Technology

    2014-03-01

    ...the DoD-sponsored ATAACK mobile cloud testbed funded through the DURIP program, which is deployed at Virginia Tech and Vanderbilt University to conduct... Jug was configured to use a filesystem (network file system, NFS) backend for locking and task synchronization... and performance-aware virtual machine placement technique that is realized as cloud infrastructure middleware. The key contributions of iPlace include...

  11. Evaluating cloud processes in large-scale models: Of idealized case studies, parameterization testbeds and single-column modelling on climate time-scales

    NASA Astrophysics Data System (ADS)

    Neggers, Roel

    2016-04-01

    Boundary-layer schemes have always formed an integral part of General Circulation Models (GCMs) used for numerical weather and climate prediction. The spatial and temporal scales associated with boundary-layer processes and clouds are typically much smaller than those at which GCMs are discretized, which makes their representation through parameterization a necessity. The need for generally applicable boundary-layer parameterizations has motivated many scientific studies, which in effect has created its own active research field in the atmospheric sciences. Of particular interest has been the evaluation of boundary-layer schemes at "process-level". This means that parameterized physics are studied in isolated mode from the larger-scale circulation, using prescribed forcings and excluding any upscale interaction. Although feedbacks are thus prevented, the benefit is an enhanced model transparency, which might aid an investigator in identifying model errors and understanding model behavior. The popularity and success of the process-level approach is demonstrated by the many past and ongoing model inter-comparison studies that have been organized by initiatives such as GCSS/GASS. A red line in the results of these studies is that although most schemes somehow manage to capture first-order aspects of boundary layer cloud fields, there certainly remains room for improvement in many areas. Only too often are boundary layer parameterizations still found to be at the heart of problems in large-scale models, negatively affecting forecast skills of NWP models or causing uncertainty in numerical predictions of future climate. How to break this parameterization "deadlock" remains an open problem. This presentation attempts to give an overview of the various existing methods for the process-level evaluation of boundary-layer physics in large-scale models. This includes i) idealized case studies, ii) longer-term evaluation at permanent meteorological sites (the testbed approach), and iii) process-level evaluation at climate time-scales. The advantages and disadvantages of each approach will be identified and discussed, and some thoughts about possible future developments will be given.

  12. Cloud Based Earth Observation Data Exploitation Platforms

    NASA Astrophysics Data System (ADS)

    Romeo, A.; Pinto, S.; Loekken, S.; Marin, A.

    2017-12-01

    In the last few years, the data produced daily by several private and public Earth Observation (EO) satellites has reached the order of tens of terabytes, representing for scientists and commercial application developers both a big opportunity for exploitation and a challenge for management. New IT technologies, such as Big Data and cloud computing, enable the creation of web-accessible data exploitation platforms, which offer scientists and application developers the means to access and use EO data in a quick and cost-effective way. RHEA Group is particularly active in this sector, supporting the European Space Agency (ESA) in the Exploitation Platforms (EP) initiative, developing technology to build multi-cloud platforms for the processing and analysis of Earth Observation data, and collaborating with larger European initiatives such as the European Plate Observing System (EPOS) and the European Open Science Cloud (EOSC). An EP is a virtual workspace providing a user community with access to (i) large volumes of data, (ii) an algorithm development and integration environment, (iii) processing software and services (e.g. toolboxes, visualization routines), (iv) computing resources, and (v) collaboration tools (e.g. forums, wikis, etc.). When an EP is dedicated to a specific theme, it becomes a Thematic Exploitation Platform (TEP). Currently, ESA has seven TEPs in a pre-operational phase, dedicated to geo-hazards monitoring and prevention, coastal zones, forestry areas, hydrology, polar regions, urban areas, and food security. On the technology development side, solutions like the multi-cloud EO data processing platform provide the means to integrate ICT resources and EO data from different vendors in a single platform; in particular, it offers (i) multi-cloud data discovery, (ii) multi-cloud data management and access, and (iii) multi-cloud application deployment. This platform has been demonstrated with the EGI Federated Cloud, the Innovation Platform Testbed Poland, and the Amazon Web Services cloud. This work presents an overview of the TEPs and the multi-cloud EO data processing platform, and discusses their main achievements and their impact in the context of distributed research infrastructures such as EPOS and EOSC.

  13. Development of a Cloud Resolving Model for Heterogeneous Supercomputers

    NASA Astrophysics Data System (ADS)

    Sreepathi, S.; Norman, M. R.; Pal, A.; Hannah, W.; Ponder, C.

    2017-12-01

    A cloud resolving climate model is needed to reduce major systematic errors in climate simulations that stem from structural uncertainty in the numerical treatment of convection, such as convective storm systems. This research describes the porting effort to enable the SAM (System for Atmospheric Modeling) cloud resolving model to run on heterogeneous supercomputers using GPUs (Graphics Processing Units). We have isolated a standalone configuration of SAM that is targeted for integration into the DOE ACME (Accelerated Climate Modeling for Energy) Earth system model, identified key computational kernels from the model, and offloaded them to a GPU using the OpenACC programming model. Furthermore, we are investigating various optimization strategies intended to enhance GPU utilization, including loop fusion/fission, coalesced data access, and loop refactoring to a higher abstraction level. We will present early performance results, lessons learned, and optimization strategies. The computational platform used in this study is the Summitdev system, an early testbed one generation removed from Summit, the next leadership-class supercomputer at Oak Ridge National Laboratory. The system contains 54 nodes, each with 2 IBM POWER8 CPUs and 4 NVIDIA Tesla P100 GPUs. This work is part of the ACME-MMF component of the U.S. Department of Energy (DOE) Exascale Computing Project. The ACME-MMF approach addresses structural uncertainty in cloud processes by replacing traditional parameterizations with cloud resolving "superparameterization" within each grid cell of the global climate model. Superparameterization dramatically increases arithmetic intensity, making the MMF approach an ideal strategy for achieving good performance on emerging exascale computing architectures. The goal of the project is to integrate superparameterization into ACME and explore its full potential to scientifically and computationally advance climate simulation and prediction.

  14. Dynamic federation of grid and cloud storage

    NASA Astrophysics Data System (ADS)

    Furano, Fabrizio; Keeble, Oliver; Field, Laurence

    2016-09-01

    The Dynamic Federations project ("Dynafed") enables the deployment of scalable, distributed storage systems composed of independent storage endpoints. While the Uniform Generic Redirector at the heart of the project is protocol-agnostic, we have focused our effort on HTTP-based protocols, including S3 and WebDAV. The system has been deployed on testbeds covering the majority of the ATLAS and LHCb data, and supports geography-aware replica selection. The work done exploits the federation potential of HTTP to build systems that offer uniform, scalable, catalogue-less access to the storage and metadata ensemble, and the possibility of seamless integration of other compatible resources such as those from cloud providers. Dynafed can exploit the potential of the S3 delegation scheme, effectively federating on the fly any number of S3 buckets from different providers and applying a uniform authorization to them. This feature has been used to deploy in production the BOINC Data Bridge, which uses the Uniform Generic Redirector with S3 buckets to harmonize the BOINC authorization scheme with the Grid/X509 one. The Data Bridge has been deployed in production with good results. We believe that the features of a loosely coupled federation of open-protocol-based storage elements open many possibilities for smoothly evolving the current computing models and for supporting new scientific computing projects that rely on massive distribution of data and that would appreciate systems that can more easily be interfaced with commercial providers and can work natively with Web browsers and clients.

  15. NBodyLab: A Testbed for Undergraduates Utilizing a Web Interface to NEMO and MD-GRAPE2 Hardware

    NASA Astrophysics Data System (ADS)

    Johnson, V. L.; Teuben, P. J.; Penprase, B. E.

    An N-body simulation testbed called NBodyLab was developed at Pomona College as a teaching tool for undergraduates. The testbed runs under Linux and provides a web interface to selected back-end NEMO modeling and analysis tools, and several integration methods which can optionally use an MD-GRAPE2 supercomputer card in the server to accelerate calculation of particle-particle forces. The testbed provides a framework for using and experimenting with the main components of N-body simulations: data models and transformations, numerical integration of the equations of motion, analysis and visualization products, and acceleration techniques (in this case, special purpose hardware). The testbed can be used by students with no knowledge of programming or Unix, freeing such students and their instructor to spend more time on scientific experimentation. The advanced student can extend the testbed software and/or more quickly transition to the use of more advanced Unix-based toolsets such as NEMO, Starlab and model builders such as GalactICS. Cosmology students at Pomona College used the testbed to study collisions of galaxies with different speeds, masses, densities, collision angles, angular momentum, etc., attempting to simulate, for example, the Tadpole Galaxy and the Antenna Galaxies. The testbed framework is available as open-source to assist other researchers and educators. Recommendations are made for testbed enhancements.
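
    The computation such a testbed wraps, direct particle-particle forces plus a time integrator, fits in a short sketch. This is plain NumPy standing in for the MD-GRAPE2-accelerated force loop; the particle count, softening, and step size are illustrative assumptions.

    ```python
    # Hedged sketch: softened direct-sum gravity with leapfrog (kick-drift-kick).
    import numpy as np

    def accelerations(pos, mass, eps=0.05):
        """O(N^2) gravitational accelerations (G=1), softened by eps."""
        diff = pos[None, :, :] - pos[:, None, :]            # r_j - r_i
        dist3 = (np.sum(diff**2, axis=-1) + eps**2) ** 1.5
        np.fill_diagonal(dist3, np.inf)                     # no self-force
        return np.sum(mass[None, :, None] * diff / dist3[:, :, None], axis=1)

    def leapfrog(pos, vel, mass, dt=0.01, steps=100):
        acc = accelerations(pos, mass)
        for _ in range(steps):
            vel += 0.5 * dt * acc                           # kick
            pos += dt * vel                                 # drift
            acc = accelerations(pos, mass)
            vel += 0.5 * dt * acc                           # kick
        return pos, vel

    rng = np.random.default_rng(1)
    pos = rng.normal(size=(64, 3))
    vel = np.zeros((64, 3))
    pos, vel = leapfrog(pos, vel, np.full(64, 1.0 / 64))
    print("center-of-mass drift:", np.linalg.norm(pos.mean(axis=0)))
    ```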

  16. Using the Atmospheric Radiation Measurement (ARM) Datasets to Evaluate Climate Models in Simulating Diurnal and Seasonal Variations of Tropical Clouds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hailong; Burleyson, Casey D.; Ma, Po-Lun

    We use the long-term Atmospheric Radiation Measurement (ARM) datasets collected at the three Tropical Western Pacific (TWP) sites as a tropical testbed to evaluate the ability of the Community Atmosphere Model (CAM5) to simulate the various types of clouds, their seasonal and diurnal variations, and their impact on surface radiation. We conducted a series of CAM5 simulations at various horizontal grid spacings (around 2°, 1°, 0.5°, and 0.25°) with meteorological constraints from reanalysis. Model biases in the seasonal cycle of cloudiness are found to be weakly dependent on model resolution. Positive biases (up to 20%) in the annual mean total cloud fraction appear mostly in stratiform ice clouds. Higher-resolution simulations do reduce the positive bias in the frequency of ice clouds, but they inadvertently increase the negative biases in convective clouds and low-level liquid clouds, leading to a positive bias in annual mean shortwave fluxes at the sites, as high as 65 W m-2 in the 0.25° simulation. Such resolution-dependent biases in clouds can adversely lead to biases in ambient thermodynamic properties and, in turn, feed back on clouds. Both the CAM5 model and ARM observations show distinct diurnal cycles in total, stratiform, and convective cloud fractions; however, they are out of phase by 12 hours, and the biases vary by site. Our results suggest that biases in deep convection affect the vertical distribution and diurnal cycle of stratiform clouds through the transport of vapor and/or the detrainment of liquid and ice. We also found that the modeled grid-mean surface longwave fluxes are systematically larger than site measurements when the grid cell that an ARM site resides in is partially covered by ocean. The modeled longwave fluxes at such sites also lack a discernible diurnal cycle because the ocean part of the grid cell is warmer and less sensitive to radiative heating/cooling than land. Higher spatial resolution is more helpful in this regard. Our testbed approach can be easily adapted for the evaluation of new parameterizations being developed for CAM5, or for other global or regional model simulations at high spatial resolutions.

  17. Efficient On-Demand Operations in Large-Scale Infrastructures

    ERIC Educational Resources Information Center

    Ko, Steven Y.

    2009-01-01

    In large-scale distributed infrastructures such as clouds, Grids, peer-to-peer systems, and wide-area testbeds, users and administrators typically desire to perform "on-demand operations" that deal with the most up-to-date state of the infrastructure. However, the scale and dynamism present in the operating environment make it challenging to…

  18. The GridEcon Platform: A Business Scenario Testbed for Commercial Cloud Services

    NASA Astrophysics Data System (ADS)

    Risch, Marcel; Altmann, Jörn; Guo, Li; Fleming, Alan; Courcoubetis, Costas

    Within this paper, we present the GridEcon Platform, a testbed for designing and evaluating economics-aware services in a commercial Cloud computing setting. The Platform is based on the idea that the exact working of such services is difficult to predict in the context of a market and, therefore, an environment for evaluating its behavior in an emulated market is needed. To identify the components of the GridEcon Platform, a number of economics-aware services and their interactions have been envisioned. The two most important components of the platform are the Marketplace and the Workflow Engine. The Workflow Engine allows the simple composition of a market environment by describing the service interactions between economics-aware services. The Marketplace allows trading goods using different market mechanisms. The capabilities of these components of the GridEcon Platform in conjunction with the economics-aware services are described in this paper in detail. The validation of an implemented market mechanism and a capacity planning service using the GridEcon Platform also demonstrated the usefulness of the GridEcon Platform.

  19. Current Sounding Capability From Satellite Meteorological Observation With Ultraspectral Infrared Instruments

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Liu, Xu; Larar, Allen M.

    2008-01-01

    Ultraspectral-resolution infrared spectral radiances obtained from near-nadir observations provide atmospheric, surface, and cloud property information. The intent of measuring the tropospheric thermodynamic state and trace-gas abundances is the initialization of climate models and the monitoring of air quality. The NPOESS Airborne Sounder Testbed-Interferometer (NAST-I), designed to support the development of future satellite temperature and moisture sounders, has been collecting data aboard high-altitude aircraft throughout many field campaigns. An advanced retrieval algorithm developed with NAST-I is now applied to satellite data collected with the Atmospheric InfraRed Sounder (AIRS) on the Aqua satellite, launched on 4 May 2002, and the Infrared Atmospheric Sounding Interferometer (IASI) on the MetOp satellite, launched on 19 October 2006. These instruments possess ultraspectral resolution; for example, both IASI and NAST-I have 0.25 cm-1 resolution and spectral coverage from 645 to 2760 cm-1. The retrieval algorithm, with a fast radiative transfer model that includes cloud effects, is used for atmospheric profile and cloud parameter retrieval. The physical inversion scheme has been developed to deal with cloudy as well as cloud-free radiances observed with ultraspectral infrared sounders and to simultaneously retrieve surface, atmospheric thermodynamic, and cloud microphysical parameters. A one-dimensional (1-D) variational multi-variable inversion solution is used to improve an iterative background state defined by an eigenvector-regression retrieval, and the solution is iterated in order to account for non-linearity in the 1-D variational solution. It is shown that relatively accurate temperature and moisture retrievals can be achieved below optically thin clouds; for optically thick clouds, accurate temperature and moisture profiles down to cloud-top level are obtained. For both optically thin and thick cloud situations, the cloud-top height can be retrieved with relatively high accuracy (i.e., error less than 1 km). Retrievals of atmospheric soundings, surface properties, and cloud microphysical properties from the AIRS and IASI observations are presented, and are inter-compared with those obtained from the airborne NAST-I FTS system, dedicated dropsondes, radiosondes, and ground-based Raman lidar. The capabilities of satellite ultraspectral sounders such as AIRS and IASI are investigated; these advanced instruments now play an important role in satellite meteorological observation for numerical weather prediction.
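
    The iterated variational step described in this record has a standard optimal-estimation form, and a toy version makes the structure explicit. The linear forward model below stands in for the fast radiative transfer model, and all matrices are illustrative assumptions.

    ```python
    # Hedged sketch of an iterated 1-D variational (Gauss-Newton) retrieval step,
    # minimizing J(x) = (x-xa)^T Sa^-1 (x-xa) + (y-F(x))^T Se^-1 (y-F(x)).
    import numpy as np

    def onedvar_step(x, xa, y, forward, K, Sa_inv, Se_inv):
        resid = y - forward(x)
        lhs = K.T @ Se_inv @ K + Sa_inv
        rhs = K.T @ Se_inv @ resid - Sa_inv @ (x - xa)
        return x + np.linalg.solve(lhs, rhs)

    n, m = 5, 8
    rng = np.random.default_rng(2)
    K = rng.normal(size=(m, n))                    # Jacobian of the forward model
    forward = lambda x: K @ x                      # toy linear "radiative transfer"
    x_true, xa = rng.normal(size=n), np.zeros(n)   # truth and background state
    y = forward(x_true) + 0.01 * rng.normal(size=m)
    Sa_inv, Se_inv = np.eye(n), np.eye(m) / 0.01**2

    x = xa.copy()
    for _ in range(5):                             # iterate to handle non-linearity
        x = onedvar_step(x, xa, y, forward, K, Sa_inv, Se_inv)
    print("retrieval error:", np.linalg.norm(x - x_true))
    ```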

  20. Pattern recognition of satellite cloud imagery for improved weather prediction

    NASA Technical Reports Server (NTRS)

    Gautier, Catherine; Somerville, Richard C. J.; Volfson, Leonid B.

    1986-01-01

    The major accomplishment was the successful development of a method for extracting time derivative information from geostationary meteorological satellite imagery. This research is a proof-of-concept study that demonstrates the feasibility of using pattern recognition techniques and a statistical cloud classification method to estimate the time rate of change of large-scale meteorological fields from remote sensing data. The cloud classification methodology is based on typical shape function analysis of parameter sets characterizing the cloud fields. The three specific technical objectives, all of which were successfully achieved, are as follows: develop and test a cloud classification technique based on pattern recognition methods, suitable for the analysis of visible and infrared geostationary satellite VISSR imagery; develop and test a methodology for intercomparing successive images using the cloud classification technique, so as to obtain estimates of the time rate of change of meteorological fields; and implement this technique in a testbed system incorporating an interactive graphics terminal to determine the feasibility of extracting time derivative information suitable for comparison with numerical weather prediction products.
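
    A minimal sketch of the image-intercomparison idea: classify two successive images into cloud classes and difference the class fractions to estimate a tendency. The nearest-centroid classifier and the two-channel features below are illustrative placeholders, not the typical-shape-function method of the study.

      # Illustrative sketch of estimating a time rate of change from two
      # classified satellite images; features and classes are hypothetical.
      import numpy as np

      def classify(image, centroids):
          """Assign each pixel feature vector to the nearest class centroid."""
          d = np.linalg.norm(image[..., None, :] - centroids[None, None, :, :], axis=-1)
          return d.argmin(axis=-1)

      def class_fraction_tendency(img_t0, img_t1, centroids, dt_hours):
          """Rate of change of each cloud-class areal fraction between images."""
          n = len(centroids)
          f0 = np.bincount(classify(img_t0, centroids).ravel(), minlength=n)
          f1 = np.bincount(classify(img_t1, centroids).ravel(), minlength=n)
          f0 = f0 / f0.sum(); f1 = f1 / f1.sum()
          return (f1 - f0) / dt_hours  # fraction per hour, per class

      rng = np.random.default_rng(0)
      cents = rng.random((4, 2))                     # 4 classes, 2 features (VIS, IR)
      a, b = rng.random((64, 64, 2)), rng.random((64, 64, 2))
      print(class_fraction_tendency(a, b, cents, dt_hours=0.5))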

  1. A simple map-based localization strategy using range measurements

    NASA Astrophysics Data System (ADS)

    Moore, Kevin L.; Kutiyanawala, Aliasgar; Chandrasekharan, Madhumita

    2005-05-01

    In this paper we present a map-based approach to localization. We consider indoor navigation in known environments based on the idea of a "vector cloud" by observing that any point in a building has an associated vector defining its distance to the key structural components (e.g., walls, ceilings, etc.) of the building in any direction. Given a building blueprint we can derive the "ideal" vector cloud at any point in space. Then, given measurements from sensors on the robot we can compare the measured vector cloud to the possible vector clouds cataloged from the blueprint, thus determining location. We present algorithms for implementing this approach to localization, using the Hamming norm, the 1-norm, and the 2-norm. The effectiveness of the approach is verified by experiments on a 2-D testbed using a mobile robot with a 360° laser range-finder and through simulation analysis of robustness.
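
    The norm comparison at the core of the method can be sketched directly; the catalog, the 4-ray scans, and the 0.05 m agreement threshold used for the Hamming case are illustrative assumptions.

      # Minimal sketch of the vector-cloud matching idea: compare a measured
      # range vector against blueprint-derived vectors under different norms.
      import numpy as np

      def localize(measured, catalog, norm="2"):
          """Return the catalog position whose ideal vector cloud best matches."""
          def dist(v):
              e = measured - v
              if norm == "hamming":            # count of rays that disagree
                  return np.count_nonzero(np.abs(e) > 0.05)
              if norm == "1":
                  return np.abs(e).sum()       # 1-norm
              return np.sqrt((e ** 2).sum())   # 2-norm

          return min(catalog, key=lambda k: dist(catalog[k]))

      catalog = {                # ideal range vectors (metres) at known positions
          (1.0, 2.0): np.array([3.2, 1.1, 4.0, 2.5]),
          (4.0, 0.5): np.array([0.9, 2.2, 1.8, 3.7]),
      }
      scan = np.array([3.1, 1.2, 4.1, 2.4])     # noisy 360-degree scan, 4 rays
      print(localize(scan, catalog, norm="1"))  # -> (1.0, 2.0)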

  2. CanOpen on RASTA: The Integration of the CanOpen IP Core in the Avionics Testbed

    NASA Astrophysics Data System (ADS)

    Furano, Gianluca; Guettache, Farid; Magistrati, Giorgio; Tiotto, Gabriele; Ortega, Carlos Urbina; Valverde, Alberto

    2013-08-01

    This paper presents the work done within the ESA Estec Data Systems Division, targeting the integration of the CANopen IP core with the existing Reference Architecture Test-bed for Avionics (RASTA). RASTA is the reference testbed system of the ESA Avionics Lab, designed to integrate the main elements of a typical data handling system. It aims at simulating a scenario where a Mission Control Center communicates with on-board computers and systems through a TM/TC link, thus providing data management through qualified processors and interfaces such as Leon2 core processors, CAN bus controllers, MIL-STD-1553 and SpaceWire. This activity extends RASTA with two boards equipped with the HurriCANe controller, acting as CANopen slaves. CANopen software modules have been ported to the RASTA system I/O boards equipped with the Gaisler GR-CAN controller, which act as master communicating with the CCIPC boards. CANopen serves as the upper application layer for systems based on CAN, is defined within the CAN-in-Automation (CiA) standards, and can be regarded as the definitive standard for the implementation of CAN-based system solutions. The development and integration of CCIPC, performed by SITAEL S.p.A., is the first application aiming to bring the CANopen standard to space applications. The definition of CANopen within the European Cooperation for Space Standardization (ECSS) is under development.
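
    For illustration, the CANopen framing conventions involved (CiA 301) can be sketched as below; the node id and object-dictionary entry are examples, and bus I/O through the GR-CAN or HurriCANe hardware is deliberately omitted.

      # Sketch of CANopen (CiA 301) byte layouts: an NMT start command and an
      # expedited SDO download, built as raw CAN frames (COB-ID, payload).
      import struct

      def nmt_start(node_id: int):
          """NMT 'start remote node': COB-ID 0x000, command 0x01 + node id."""
          return 0x000, bytes([0x01, node_id])

      def sdo_expedited_download(node_id: int, index: int, subindex: int, value: int):
          """Write a 32-bit object dictionary entry via expedited SDO."""
          cob_id = 0x600 + node_id                    # client -> server SDO
          # 0x23 = expedited download, 4 data bytes; index little-endian.
          payload = struct.pack("<BHBI", 0x23, index, subindex, value)
          return cob_id, payload

      print(nmt_start(0x10))
      print(sdo_expedited_download(0x10, 0x1017, 0x00, 1000))  # heartbeat = 1000 ms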

  3. Remote sensing data from CLARET: A prototype CART data set

    NASA Technical Reports Server (NTRS)

    Eberhard, Wynn L.; Uttal, Taneil; Clark, Kurt A.; Cupp, Richard E.; Dutton, Ellsworth G.; Fedor, Leonard S.; Intrieri, Janet M.; Matrosov, Sergey Y.; Snider, Jack B.; Willis, Ron J.

    1992-01-01

    The data set containing radiation, meteorological, and cloud sensor observations is documented. It was prepared for use by the Department of Energy's Atmospheric Radiation Measurement (ARM) Program and other interested scientists. These data are a precursor of the types of data that ARM Cloud And Radiation Testbed (CART) sites will provide. The data are from the Cloud Lidar And Radar Exploratory Test (CLARET), conducted by the Wave Propagation Laboratory during autumn 1989 in the Denver-Boulder area of Colorado, primarily for the purpose of developing new cirrus cloud-sensing techniques. After becoming aware of the experiment, ARM scientists requested archival of subsets of the data to assist the developing ARM program. Five CLARET cases were selected: two with cirrus, one with stratus, one with mixed-phase clouds, and one with clear skies. Satellite data from the stratus case and one cirrus case were analyzed for statistics on cloud cover and top height. The main body of the selected data is available on diskette from the Wave Propagation Laboratory or Los Alamos National Laboratory.

  4. Identifying Meteorological Controls on Open and Closed Mesoscale Cellular Convection Associated with Marine Cold Air Outbreaks

    NASA Astrophysics Data System (ADS)

    McCoy, Isabel L.; Wood, Robert; Fletcher, Jennifer K.

    2017-11-01

    Mesoscale cellular convective (MCC) clouds occur in large-scale patterns over the ocean and have important radiative effects on the climate system. An examination of time-varying meteorological conditions associated with satellite-observed open and closed MCC clouds is conducted to illustrate the influence of the large-scale environment. Marine cold air outbreaks (MCAO) influence the development of open MCC clouds and the transition from closed to open MCC clouds. MCC neural network classifications on Moderate Resolution Imaging Spectroradiometer (MODIS) data for 2008 are collocated with Clouds and the Earth's Radiant Energy System (CERES) data and ERA-Interim reanalysis to determine the radiative effects of MCC clouds and their thermodynamic environments. Closed MCC clouds are found to have much higher albedo on average than open MCC clouds for the same cloud fraction. Three meteorological control metrics are tested: sea-air temperature difference (ΔT), estimated inversion strength (EIS), and a MCAO index (M). These predictive metrics illustrate the importance of atmospheric surface forcing and static stability for open and closed MCC cloud formation. Predictive sigmoidal relations are found between M and MCC cloud frequency globally and regionally: negative for closed MCC clouds and positive for open MCC clouds. The open MCC cloud seasonal cycle is well correlated with M, while the seasonality of closed MCC clouds is well correlated with M in the midlatitudes and EIS in the tropics and subtropics. M is found to best distinguish open and closed MCC clouds on average over shorter time scales. The possibility of a MCC cloud feedback is discussed.
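
    The sigmoidal relations reported can be written in the usual logistic form (symbols generic; the fitted constants are not reproduced here):

      f_{\mathrm{MCC}}(M) = \frac{f_{\max}}{1 + \exp[-(M - M_0)/s]}

    with s > 0 giving the increasing branch found for open MCC clouds and s < 0 the decreasing branch found for closed MCC clouds.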

  5. Large-scale, high-performance and cloud-enabled multi-model analytics experiments in the context of the Earth System Grid Federation

    NASA Astrophysics Data System (ADS)

    Fiore, S.; Płóciennik, M.; Doutriaux, C.; Blanquer, I.; Barbera, R.; Williams, D. N.; Anantharaj, V. G.; Evans, B. J. K.; Salomoni, D.; Aloisio, G.

    2017-12-01

    Increasing model resolution in the development of comprehensive Earth System Models is rapidly leading to very large climate simulation outputs that pose significant scientific data management challenges in terms of data sharing, processing, analysis, visualization, preservation, curation, and archiving. Large-scale global experiments for Climate Model Intercomparison Projects (CMIP) have led to the development of the Earth System Grid Federation (ESGF), a federated data infrastructure which has been serving the CMIP5 experiment, providing access to 2 PB of data for the IPCC Assessment Reports. In such a context, running a multi-model data analysis experiment is very challenging, as it requires the availability of a large amount of data related to multiple climate model simulations and scientific data management tools for large-scale data analytics. To address these challenges, a case study on climate model intercomparison data analysis has been defined and implemented in the context of the EU H2020 INDIGO-DataCloud project. The case study has been tested and validated on CMIP5 datasets, in the context of a large-scale, international testbed involving several ESGF sites (LLNL, ORNL and CMCC), one orchestrator site (PSNC) and one site hosting INDIGO PaaS services (UPV). Additional ESGF sites, such as NCI (Australia) and a couple more in Europe, are also joining the testbed. The added value of the proposed solution is summarized in the following: it implements a server-side paradigm which limits data movement; it relies on a High-Performance Data Analytics (HPDA) stack to address performance; it exploits the INDIGO PaaS layer to support flexible, dynamic and automated deployment of software components; it provides user-friendly web access based on the INDIGO Future Gateway; and finally it integrates, complements and extends the support currently available through ESGF. Overall it provides a new "tool" for climate scientists to run multi-model experiments. At the time this contribution is being written, the proposed testbed represents the first implementation of a distributed large-scale, multi-model experiment in the ESGF/CMIP context, joining together server-side approaches for scientific data analysis, HPDA frameworks, end-to-end workflow management, and cloud computing.
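
    The server-side paradigm can be illustrated with a hedged sketch of a client submitting an analysis task and retrieving only the reduced output; the gateway URL, task fields, response shape, and token below are hypothetical placeholders, not the INDIGO Future Gateway API.

      # Hedged sketch of the server-side paradigm: the client submits an analysis
      # task and later downloads only the reduced result, never the input data.
      import requests

      GATEWAY = "https://futuregateway.example.org/api/v1/tasks"   # placeholder URL
      task = {
          "application": "multi-model-intercomparison",            # placeholder name
          "arguments": ["--experiment", "CMIP5", "--variable", "tas",
                        "--reduce", "annual-mean"],
      }
      headers = {"Authorization": "Bearer TOKEN"}                  # placeholder token
      resp = requests.post(GATEWAY, json=task, headers=headers)
      resp.raise_for_status()
      task_id = resp.json()["id"]          # assumed response field
      # Poll until the task completes, then fetch only the reduced output:
      result = requests.get(f"{GATEWAY}/{task_id}/output", headers=headers)
      open("tas_annual_mean.nc", "wb").write(result.content)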

  6. Optical interferometer testbed

    NASA Technical Reports Server (NTRS)

    Blackwood, Gary H.

    1991-01-01

    Viewgraphs on optical interferometer testbed presented at the MIT Space Research Engineering Center 3rd Annual Symposium are included. Topics covered include: space-based optical interferometer; optical metrology; sensors and actuators; real time control hardware; controlled structures technology (CST) design methodology; identification for MIMO control; FEM/ID correlation for the naked truss; disturbance modeling; disturbance source implementation; structure design: passive damping; low authority control; active isolation of lightweight mirrors on flexible structures; open loop transfer function of mirror; and global/high authority control.

  7. Creating a Rackspace and NASA Nebula compatible cloud using the OpenStack project (Invited)

    NASA Astrophysics Data System (ADS)

    Clark, R.

    2010-12-01

    NASA and Rackspace have both provided technology to the OpenStack project that allows anyone to create a private Infrastructure as a Service (IaaS) cloud using open source software and commodity hardware. OpenStack is designed and developed completely in the open and with an open governance process. NASA donated Nova, which powers the compute portion of the NASA Nebula Cloud Computing Platform, and Rackspace donated Swift, which powers Rackspace Cloud Files. The project is now in continuous development by NASA, Rackspace, and hundreds of other participants. When you create a private cloud using OpenStack, you have the ability to easily interact with your private cloud, a government cloud, and an ecosystem of public cloud providers, using the same API.
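
    A minimal sketch of that "same API" idea using the openstacksdk client library; it assumes a clouds.yaml entry named "private" and an image and flavor already registered in the cloud.

      # Boot a server on a private OpenStack cloud; the cloud name "private"
      # refers to an assumed entry in the user's clouds.yaml.
      import openstack

      conn = openstack.connect(cloud="private")      # credentials from clouds.yaml

      # Pick any available image and flavor, then boot a small server.
      image = next(conn.image.images())
      flavor = next(conn.compute.flavors())
      server = conn.compute.create_server(
          name="demo", image_id=image.id, flavor_id=flavor.id)
      server = conn.compute.wait_for_server(server)
      print(server.status)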

  8. Real-time video streaming in mobile cloud over heterogeneous wireless networks

    NASA Astrophysics Data System (ADS)

    Abdallah-Saleh, Saleh; Wang, Qi; Grecos, Christos

    2012-06-01

    Recently, the concept of Mobile Cloud Computing (MCC) has been proposed to offload the resource requirements in computational capabilities, storage and security from mobile devices into the cloud. Internet video applications such as real-time streaming are expected to be ubiquitously deployed and supported over the cloud for mobile users, who typically encounter a range of wireless networks of diverse radio access technologies during their roaming. However, real-time video streaming for mobile cloud users across heterogeneous wireless networks presents multiple challenges. The network-layer quality of service (QoS) provision to support high-quality mobile video delivery in this demanding scenario remains an open research question, and this in turn affects the application-level visual quality and impedes mobile users' perceived quality of experience (QoE). In this paper, we devise a framework to support real-time video streaming in this new mobile video networking paradigm and evaluate the performance of the proposed framework empirically through a lab-based yet realistic testing platform. One particular issue we focus on is the effect of users' mobility on the QoS of video streaming over the cloud. We design and implement a hybrid platform comprising a test-bed and an emulator, on which our concepts of mobile cloud computing, video streaming and heterogeneous wireless networks are implemented and integrated to allow the testing of our framework. As representative heterogeneous wireless networks, the popular WLAN (Wi-Fi) and MAN (WiMAX) networks are incorporated in order to evaluate the effects of handovers between these different radio access technologies. The H.264/AVC (Advanced Video Coding) standard is employed for real-time video streaming from a server to mobile users (client nodes) in the networks. Mobility support is introduced to enable a continuous streaming experience for a mobile user across the heterogeneous wireless networks. Real-time video stream packets are captured on the mobile user node for analytical purposes. Experimental results are obtained and analysed, and future work is identified towards further improvement of the current design and implementation. With this new mobile video networking concept and paradigm implemented and evaluated, the results and observations obtained from this study form the basis of a more in-depth, comprehensive understanding of the various challenges and opportunities in supporting high-quality real-time video streaming in mobile cloud over heterogeneous wireless networks.
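
    Captured stream packets are typically reduced to transport metrics such as the RFC 3550 interarrival jitter, sketched below on synthetic timestamps:

      # Running interarrival-jitter estimate, J += (|D| - J)/16 (RFC 3550);
      # timestamps are in seconds and the traces here are synthetic.
      def rtp_jitter(send_ts, recv_ts):
          j = 0.0
          for i in range(1, len(send_ts)):
              d = (recv_ts[i] - recv_ts[i - 1]) - (send_ts[i] - send_ts[i - 1])
              j += (abs(d) - j) / 16.0
          return j

      send = [0.00, 0.04, 0.08, 0.12, 0.16]       # 25 fps sender clock
      recv = [0.10, 0.145, 0.18, 0.235, 0.26]     # arrival times after the network
      print(f"jitter = {rtp_jitter(send, recv) * 1000:.2f} ms")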

  9. Advanced data management system architectures testbed

    NASA Technical Reports Server (NTRS)

    Grant, Terry

    1990-01-01

    The objective of the Architecture and Tools Testbed is to provide a working, experimental focus to the evolving automation applications for the Space Station Freedom data management system. Emphasis is on defining and refining real-world applications including the following: the validation of user needs; understanding system requirements and capabilities; and extending capabilities. The approach is to provide an open, distributed system of high performance workstations representing both the standard data processors and networks and advanced RISC-based processors and multiprocessor systems. The system provides a base from which to develop and evaluate new performance and risk management concepts and for sharing the results. Participants are given a common view of requirements and capability via: remote login to the testbed; standard, natural user interfaces to simulations and emulations; special attention to user manuals for all software tools; and E-mail communication. The testbed elements which instantiate the approach are briefly described including the workstations, the software simulation and monitoring tools, and performance and fault tolerance experiments.

  10. ooi: OpenStack OCCI interface

    NASA Astrophysics Data System (ADS)

    López García, Álvaro; Fernández del Castillo, Enol; Orviz Fernández, Pablo

    In this document we present an implementation of the Open Grid Forum's Open Cloud Computing Interface (OCCI) for OpenStack, namely ooi (Openstack occi interface, 2015) [1]. OCCI is an open standard for management tasks over cloud resources, focused on interoperability, portability and integration. ooi aims to implement this open interface for the OpenStack cloud middleware, promoting interoperability with other OCCI-enabled cloud management frameworks and infrastructures. ooi focuses on being non-invasive with a vanilla OpenStack installation, not tied to a particular OpenStack release version.
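
    An OCCI interaction with such an endpoint might look as follows; the endpoint URL and token are placeholders, while the category scheme is the standard OGF infrastructure scheme.

      # Sketch of OCCI calls against an ooi-style endpoint; OCCI renders
      # metadata in Category headers with Content-Type text/occi.
      import requests

      OCCI = "https://cloud.example.org:8787/occi1.1"   # hypothetical endpoint
      headers = {"Content-Type": "text/occi",
                 "X-Auth-Token": "KEYSTONE_TOKEN"}      # placeholder credentials

      # Discover the kinds/mixins the server supports (OCCI query interface).
      caps = requests.get(OCCI + "/-/", headers=headers)
      print(caps.headers.get("Category"))

      # Create a compute resource from the standard 'compute' kind.
      create = {"Category":
                'compute; scheme="http://schemas.ogf.org/occi/infrastructure#"; class="kind"'}
      r = requests.post(OCCI + "/compute/", headers={**headers, **create})
      print(r.status_code, r.headers.get("Location"))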

  11. Retrievals with the Infrared Atmospheric Sounding Interferometer

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Liu, Xu; Larar, Allen M.; Smith, William L.; Taylor, Jonathan P.; Schlussel, Peter; Strow, L. Larrabee; Calbet, Xavier; Mango, Stephen A.

    2007-01-01

    The Infrared Atmospheric Sounding Interferometer (IASI) on the MetOp satellite was launched on October 19, 2006. The Joint Airborne IASI Validation Experiment (JAIVEx) was conducted during April 2007 mainly for validation of the IASI on the MetOp satellite. IASI possesses an ultra-spectral resolution of 0.25 cm-1 and a spectral coverage from 645 to 2760 cm-1. Ultraspectral resolution infrared spectral radiances obtained from near nadir observations provide atmospheric, surface, and cloud property information. An advanced retrieval algorithm with a fast radiative transfer model, including cloud effects, is used for atmospheric profile and cloud parameter retrieval. Preliminary retrievals of atmospheric soundings, surface properties, and cloud optical/microphysical properties with the IASI observations during the JAIVEx are obtained and presented. These retrievals are further inter-compared with those obtained from airborne FTS systems, such as the NPOESS Airborne Sounder Testbed Interferometer (NAST-I), dedicated dropsondes, radiosondes, and ground based Raman Lidar. The capabilities of satellite ultra-spectral sounders such as the IASI are investigated.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Hyunwoo; Timm, Steven

    We present a summary of how X.509 authentication and authorization are used with OpenNebula in FermiCloud. We also describe a history of why the X.509 authentication was needed in FermiCloud, and review X.509 authorization options, both internal and external to OpenNebula. We show how these options can be and have been used to successfully run scientific workflows on federated clouds, which include OpenNebula on FermiCloud and Amazon Web Services as well as other community clouds. We also outline federation options being used by other commercial and open-source clouds and cloud research projects.
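
    The certificate-side checks in such a setup can be sketched with the Python cryptography package; the PEM path is a placeholder, and the mapping from subject DN to a cloud identity is left out.

      # Load a user certificate and check its validity window and subject DN
      # before mapping it to a cloud identity (mapping omitted).
      import datetime
      from cryptography import x509
      from cryptography.hazmat.primitives import hashes

      with open("usercert.pem", "rb") as f:            # placeholder path
          cert = x509.load_pem_x509_certificate(f.read())

      now = datetime.datetime.utcnow()
      assert cert.not_valid_before <= now <= cert.not_valid_after
      print("subject:", cert.subject.rfc4514_string())  # DN used for authorization
      print("issuer :", cert.issuer.rfc4514_string())
      print("sha256 :", cert.fingerprint(hashes.SHA256()).hex())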

  13. Research on OpenStack of open source cloud computing in colleges and universities’ computer room

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Zhang, Dandan

    2017-06-01

    In recent years, cloud computing technology has developed rapidly, especially open source cloud computing. Open source cloud computing has attracted a large number of user groups through the advantages of open source and low cost, and has now been promoted and applied at large scale. In this paper, we first briefly introduce the main functions and architecture of the open source cloud computing tool OpenStack, and then discuss in depth the core problems of computer labs in colleges and universities. Building on this research, we describe the specific application and deployment of OpenStack in university computer rooms. The experimental results show that OpenStack can be used to deploy a university computer-room cloud efficiently and conveniently, with stable performance and good functionality.

  14. Fading testbed for free-space optical communications

    NASA Astrophysics Data System (ADS)

    Shrestha, Amita; Giggenbach, Dirk; Mustafa, Ahmad; Pacheco-Labrador, Jorge; Ramirez, Julio; Rein, Fabian

    2016-10-01

    Free-space optical (FSO) communication is a very attractive technology offering very high throughput without spectral regulation constraints, yet allowing small antennas (telescopes) and tap-proof communication. However, the transmitted signal has to travel through the atmosphere, where it is influenced by atmospheric turbulence, causing scintillation of the received signal. In addition, climatic effects like fog, clouds and rain also affect the signal significantly. Moreover, FSO is a line-of-sight communication technology and therefore requires precise pointing and tracking of the telescopes, which otherwise also causes fading. To achieve error-free transmission, various mitigation techniques like aperture averaging, adaptive optics, transmitter diversity, and sophisticated coding and modulation schemes are being investigated and implemented. Evaluating the performance of such systems under controlled conditions is very difficult in field trials, since the atmospheric situation constantly changes and the target scenario (e.g. on aircraft or satellites) is not easily accessible for test purposes. Therefore, with the motivation to be able to test and verify a system under laboratory conditions, DLR has developed a fading testbed that can emulate realistic channel conditions. The main principle of the fading testbed is to control the input current of a variable optical attenuator such that it attenuates the incoming signal according to a loaded power vector. The sampling frequency and mean power of the vector can optionally be changed according to requirements. This paper provides a brief introduction to the software and hardware development of the fading testbed and measurement results showing its accuracy and application scenarios.
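
    A sketch of the replay principle: convert a received-power vector into per-sample attenuations and drive currents for the variable optical attenuator. The calibration curve below is a made-up linear placeholder, not the device's actual transfer function.

      # Replay a received-power time series through a variable optical
      # attenuator by converting each sample to an attenuation and a current.
      import numpy as np

      def att_to_current(att_db):
          """Placeholder linear calibration: 0-40 dB -> 0-20 mA drive current."""
          return att_db / 40.0 * 20.0

      def power_vector_to_current(p_rx_dbm, p_tx_dbm=0.0, mean_shift_db=0.0):
          """Attenuation (dB) each sample must impose, mapped to current."""
          att_db = (p_tx_dbm - p_rx_dbm) + mean_shift_db
          return att_to_current(np.clip(att_db, 0.0, 40.0))

      fs = 10_000                                       # replay sampling rate, Hz
      vec = -10 + 3 * np.sin(np.linspace(0, 20, fs))    # synthetic fading vector, dBm
      currents = power_vector_to_current(vec)           # stream these to the VOA
      print(currents[:5])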

  15. Automatic Integration Testbeds validation on Open Science Grid

    NASA Astrophysics Data System (ADS)

    Caballero, J.; Thapa, S.; Gardner, R.; Potekhin, M.

    2011-12-01

    A recurring challenge in deploying high quality production middleware is the extent to which realistic testing occurs before release of the software into the production environment. We describe here an automated system for validating releases of the Open Science Grid software stack that leverages the (pilot-based) PanDA job management system developed and used by the ATLAS experiment. The system was motivated by a desire to subject the OSG Integration Testbed to more realistic validation tests; in particular, tests that resemble as closely as possible the actual job workflows used by the experiments, thus exercising job scheduling at the compute element (CE), use of the worker node execution environment, transfer of data to/from the local storage element (SE), etc. The context is that candidate releases of OSG compute and storage elements can be tested by injecting large numbers of synthetic jobs varying in complexity and coverage of services tested. The native capabilities of the PanDA system can thus be used to define jobs, monitor their execution, and archive the resulting run statistics including success and failure modes. A repository of generic workflows and job types to measure various metrics of interest has been created. A command-line toolset has been developed so that testbed managers can quickly submit "VO-like" jobs into the system when newly deployed services are ready for testing. A system for automatic submission has been crafted to send jobs to integration testbed sites, collecting the results in a central service and generating regular reports on performance and reliability.
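
    The validation loop can be caricatured as below; submit_job and poll_job are hypothetical stand-ins for the real pilot-system calls, and the workflow names are invented.

      # Inject synthetic jobs of varying complexity into a testbed site and
      # tally success/failure modes per workflow type.
      import collections, random

      WORKFLOWS = ["cpu-burn", "stage-in-SE", "stage-out-SE", "full-chain"]

      def submit_job(site, workflow):          # hypothetical submission wrapper
          return {"site": site, "workflow": workflow, "id": random.randrange(1 << 30)}

      def poll_job(job):                       # hypothetical status query
          return random.choice(["done", "done", "done", "failed"])

      def validate_site(site, n_jobs=100):
          stats = collections.Counter()
          jobs = [submit_job(site, random.choice(WORKFLOWS)) for _ in range(n_jobs)]
          for job in jobs:
              stats[(job["workflow"], poll_job(job))] += 1
          return stats

      print(validate_site("ITB_SITE_1"))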

  16. The Role of Standards in Cloud-Computing Interoperability

    DTIC Science & Technology

    2012-10-01

    ...services are not shared outside the organization. CloudStack, Eucalyptus, HP, Microsoft, OpenStack, Ubuntu, and VMWare provide tools for building private clouds... data center requirements... developing usage models for cloud vendors... independent IT consortium... OpenStack (http://www.openstack.org): open-source software for running private clouds; currently consists of three core software projects: OpenStack Compute (Nova), OpenStack Object Storage (Swift)...

  17. Storm-based Cloud-to-Ground Lightning Probabilities and Warnings

    NASA Astrophysics Data System (ADS)

    Calhoun, K. M.; Meyer, T.; Kingfield, D.

    2017-12-01

    A new cloud-to-ground (CG) lightning probability algorithm has been developed using machine-learning methods. With storm-based inputs of Earth Networks' in-cloud lightning, Vaisala's CG lightning, multi-radar/multi-sensor (MRMS) radar-derived products including the Maximum Expected Size of Hail (MESH) and Vertically Integrated Liquid (VIL), and near-storm environmental data including lapse rate and CAPE, a random forest algorithm was trained to produce probabilities of CG lightning up to one hour in advance. As part of the Prototype Probabilistic Hazard Information experiment in the Hazardous Weather Testbed in 2016 and 2017, National Weather Service forecasters were asked to use this CG lightning probability guidance to create rapidly updating probability grids and warnings for the threat of CG lightning for 0-60 minutes. The output from forecasters was shared with end-users, including emergency managers and broadcast meteorologists, as part of an integrated warning team.
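
    A sketch of the storm-based training setup with scikit-learn's RandomForestClassifier; the feature list follows the text, but the synthetic data and labels merely stand in for the real storm database.

      # Train a random forest to produce 0-60 min CG lightning probabilities
      # from storm-based predictors; data here are synthetic placeholders.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(1)
      n = 5000
      X = np.column_stack([
          rng.gamma(2.0, 8.0, n),     # MESH (mm)
          rng.gamma(2.0, 10.0, n),    # VIL (kg m^-2)
          rng.normal(-6.5, 1.0, n),   # low-level lapse rate (K km^-1)
          rng.gamma(2.0, 500.0, n),   # CAPE (J kg^-1)
          rng.poisson(3.0, n),        # in-cloud flash rate (min^-1)
      ])
      y = (X[:, 4] + 0.1 * X[:, 0] + rng.normal(0, 2, n)) > 5   # synthetic CG label

      model = RandomForestClassifier(n_estimators=200, min_samples_leaf=20)
      model.fit(X, y)
      prob_cg = model.predict_proba(X[:3])[:, 1]   # CG probability per storm
      print(prob_cg)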

  18. The North Slope of Alaska and Adjacent Arctic Ocean (NSA/AAO) cart site begins operation: Collaboration with SHEBA and FIRE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zak, D. B.; Church, H.; Ivey, M.

    2000-04-04

    Since the 1997 Atmospheric Radiation Measurement (ARM) Science Team Meeting, the North Slope of Alaska and Adjacent Arctic Ocean (NSA/AAO) Cloud and Radiation Testbed (CART) site has come into being. Much has happened even since the 1998 Science Team Meeting at which this paper was presented. To maximize its usefulness, this paper has been updated to include developments through July 1998.

  19. Impact of AIRS Thermodynamic Profiles on Precipitation Forecasts for Atmospheric River Cases Affecting the Western United States

    NASA Technical Reports Server (NTRS)

    Zavodsky, Bradley T.; Jedlovec, Gary J.; Blakenship, Clay B.; Wick, Gary A.; Neiman, Paul J.

    2013-01-01

    This project is a collaborative activity between the NASA Short-term Prediction Research and Transition (SPoRT) Center and the NOAA Hydrometeorology Testbed (HMT) to evaluate a SPoRT enhanced moisture analysis product based on the Atmospheric Infrared Sounder (AIRS; Aumann et al. 2003). We test the impact of assimilating AIRS temperature and humidity profiles above clouds and in partly cloudy regions, using the three-dimensional variational Gridpoint Statistical Interpolation (GSI) data assimilation (DA) system (Developmental Testbed Center 2012) to produce a new analysis. Forecasts of the Weather Research and Forecasting (WRF) model initialized from the new analysis are compared to control forecasts without the additional AIRS data. We focus on cases where atmospheric rivers caused heavy precipitation on the US West Coast. We verify the forecasts by comparison with dropsondes and the Cooperative Institute for Research in the Atmosphere (CIRA) Blended Total Precipitable Water product.

  20. Lessons learned in deploying a cloud-based knowledge platform for the Earth Science Information Partners Federation (ESIP)

    NASA Astrophysics Data System (ADS)

    Pouchard, L. C.; Depriest, A.; Huhns, M.

    2012-12-01

    Ontologies and semantic technologies are an essential infrastructure component of systems supporting knowledge integration in the Earth Sciences. Numerous earth science ontologies exist, but they are hard to discover because they tend to be hosted with the projects that develop them. There are often few quality measures and sparse metadata associated with these ontologies, such as modification dates, versioning, purpose, number of classes, and properties. Projects often develop ontologies for their own needs without considering existing ontology entities or derivations from formal and more basic ontologies. The result is mostly orthogonal ontologies, and ontologies that are not modular enough to be reused in part or adapted for new purposes, in spite of existing standards for ontology representation. Additional obstacles to sharing and reuse include a lack of maintenance once a project is completed. These obstacles prevent the full exploitation of semantic technologies in a context where they could become needed enablers for service discovery and for matching data with services. To start addressing this gap, we have deployed BioPortal, a mature, domain-independent ontology and semantic service system developed by the National Center for Biomedical Ontologies (NCBO), on the ESIP Testbed under the governance of the ESIP Semantic Web cluster. ESIP provides a forum for a broad-based, distributed community of data and information technology practitioners and stakeholders to coordinate their efforts and develop new ideas for interoperability solutions. The Testbed provides an environment where innovations and best practices can be explored and evaluated. One objective of this deployment is to provide a community platform that harnesses the organizational and cyber infrastructure provided by ESIP at minimal cost. Another objective is to host ontology services on a scalable, public cloud and investigate the business case for crowd sourcing of ontology maintenance. We deployed the system on Amazon's Elastic Compute Cloud (EC2), where ESIP maintains an account. Our approach had three phases: 1) set up a private cloud environment at the University of South Carolina to become familiar with the complex architecture of the system and enable some basic customization, 2) coordinate the production of a Virtual Appliance for the system with NCBO and deploy it on the Amazon cloud, and 3) outreach to the ESIP community to solicit participation, populate the repository, and develop new use cases. Phase 2 is nearing completion and Phase 3 is underway. Ontologies were gathered during updates to the ESIP cluster. Discussion points included the criteria for a shareable ontology and how to determine the best size for an ontology to be reusable. Outreach highlighted that the system can start addressing an integration of discovery frameworks via linking data and services in a pull model (data and service casting), a key issue of the Discovery cluster. This work thus presents several contributions: 1) technology injection from another domain into the earth sciences, 2) the deployment of a mature knowledge platform on the EC2 cloud, and 3) the successful engagement of the community through the ESIP clusters and Testbed model.

  1. Climatic Implications of the Observed Temperature Dependence of the Liquid Water Path of Low Clouds

    NASA Technical Reports Server (NTRS)

    DelGenio, Anthony

    1999-01-01

    The uncertainty in the global climate sensitivity to an equilibrium doubling of carbon dioxide is often stated to be 1.5-4.5 K, largely due to uncertainties in cloud feedbacks. The lower end of this range is based on the assumption or prediction in some GCMs that cloud liquid water behaves adiabatically, thus implying that cloud optical thickness will increase in a warming climate if the physical thickness of clouds is invariant. Satellite observations of low-level cloud optical thickness and liquid water path have challenged this assumption, however, at low and middle latitudes. We attempt to explain the satellite results using four years of surface remote sensing data from the Atmospheric Radiation Measurements (ARM) Cloud And Radiation Testbed (CART) site in the Southern Great Plains. We find that low cloud liquid water path is insensitive to temperature in winter but strongly decreases with temperature in summer. The latter occurs because surface relative humidity decreases with warming, causing cloud base to rise and clouds to geometrically thin. Meanwhile, inferred liquid water contents hardly vary with temperature, suggesting entrainment depletion. Physically, the temperature dependence appears to represent a transition from higher probabilities of stratified boundary layers at cold temperatures to a higher incidence of convective boundary layers at warm temperatures. The combination of our results and the earlier satellite findings imply that the minimum climate sensitivity should be revised upward from 1.5 K.
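
    The adiabatic scaling underlying the assumption discussed above is commonly written as:

      \mathrm{LWP}_{\mathrm{ad}} = \tfrac{1}{2}\, \Gamma_{\mathrm{ad}}(T, p)\, H^{2}

    where Γ_ad is the adiabatic rate of increase of liquid water content with height (itself increasing with temperature) and H the cloud physical thickness; warming at fixed H therefore raises LWP, whereas the observed rise of cloud base (smaller H) can reverse the sign, as found here in summer.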

  2. Evaluating open-source cloud computing solutions for geosciences

    NASA Astrophysics Data System (ADS)

    Huang, Qunying; Yang, Chaowei; Liu, Kai; Xia, Jizhe; Xu, Chen; Li, Jing; Gui, Zhipeng; Sun, Min; Li, Zhenglong

    2013-09-01

    Many organizations are starting to adopt cloud computing for better utilization of computing resources by taking advantage of its scalability, cost reduction, and easy-to-access characteristics. Many private or community cloud computing platforms are being built using open-source cloud solutions. However, little has been done to systematically compare and evaluate the features and performance of open-source solutions in supporting Geosciences. This paper provides a comprehensive study of three open-source cloud solutions, including OpenNebula, Eucalyptus, and CloudStack. We compared a variety of features, capabilities, technologies and performances including: (1) general features and supported services for cloud resource creation and management, (2) advanced capabilities for networking and security, and (3) the performance of the cloud solutions in provisioning and operating the cloud resources as well as the performance of virtual machines initiated and managed by the cloud solutions in supporting selected geoscience applications. Our study found that: (1) there are no significant performance differences in central processing unit (CPU), memory and I/O of virtual machines created and managed by different solutions, (2) OpenNebula has the fastest internal network while both Eucalyptus and CloudStack have better virtual machine isolation and security strategies, (3) CloudStack has the fastest operations in handling virtual machines, images, snapshots, volumes and networking, followed by OpenNebula, and (4) the selected cloud computing solutions are capable of supporting concurrent intensive web applications, computing intensive applications, and small-scale model simulations without intensive data communication.

  3. Application of Model-based Prognostics to a Pneumatic Valves Testbed

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Kulkarni, Chetan S.; Gorospe, George

    2014-01-01

    Pneumatic-actuated valves play an important role in many applications, including cryogenic propellant loading for space operations. Model-based prognostics emphasizes the importance of a model that describes the nominal and faulty behavior of a system, and how faulty behavior progresses in time, causing the end of useful life of the system. We describe the construction of a testbed consisting of a pneumatic valve that allows the injection of faulty behavior and controllable fault progression. The valve opens discretely, and is controlled through a solenoid valve. Controllable leaks of pneumatic gas in the testbed are introduced through proportional valves, allowing the testing and validation of prognostics algorithms for pneumatic valves. A new valve prognostics approach is developed that estimates fault progression and predicts remaining life based only on valve timing measurements. Simulation experiments demonstrate and validate the approach.
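
    The timing-based prediction can be sketched as a trend extrapolation; the linear degradation model and the 3 s failure threshold below are illustrative, not the paper's valve model.

      # Regress valve opening time against cycle number and predict when it
      # will cross a failure threshold (remaining useful life, in cycles).
      import numpy as np

      def remaining_useful_life(cycles, open_times_s, limit_s):
          """Extrapolate a linear fault-progression trend to the timing limit."""
          slope, intercept = np.polyfit(cycles, open_times_s, 1)
          if slope <= 0:
              return np.inf                      # no degradation trend observed
          eol_cycle = (limit_s - intercept) / slope
          return max(eol_cycle - cycles[-1], 0.0)

      cycles = np.arange(50)
      open_times = 2.0 + 0.01 * cycles + np.random.default_rng(2).normal(0, 0.02, 50)
      print(remaining_useful_life(cycles, open_times, limit_s=3.0))  # cycles left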

  4. The NASA LeRC regenerative fuel cell system testbed program for government and commercial applications

    NASA Astrophysics Data System (ADS)

    Maloney, Thomas M.; Prokopius, Paul R.; Voecks, Gerald E.

    1995-01-01

    The Electrochemical Technology Branch of the NASA Lewis Research Center (LeRC) has initiated a program to develop a renewable energy system testbed to evaluate, characterize, and demonstrate a fully integrated regenerative fuel cell (RFC) system for space, military, and commercial applications. A multi-agency management team, led by NASA LeRC, is implementing the program through a unique international coalition which encompasses both government and industry participants. This open-ended teaming strategy optimizes the development of space, military, and commercial RFC system technologies. Program activities to date include system design and analysis, and reactant storage sub-system design, with a major emphasis centered upon testbed fabrication and installation and testing of two key RFC system components, namely, the fuel cells and electrolyzers. Construction of the LeRC 25 kW RFC system testbed at the NASA Jet Propulsion Laboratory (JPL) facility at Edwards Air Force Base (EAFB) is nearly complete, and some sub-system components have already been installed. Furthermore, planning for the first commercial RFC system demonstration is underway.

  5. Evaluating Aerosol Process Modules within the Framework of the Aerosol Modeling Testbed

    NASA Astrophysics Data System (ADS)

    Fast, J. D.; Velu, V.; Gustafson, W. I.; Chapman, E.; Easter, R. C.; Shrivastava, M.; Singh, B.

    2012-12-01

    Factors that influence predictions of aerosol direct and indirect forcing, such as aerosol mass, composition, size distribution, hygroscopicity, and optical properties, still contain large uncertainties in both regional and global models. New aerosol treatments are usually implemented into a 3-D atmospheric model and evaluated using a limited number of measurements from a specific case study. Under this modeling paradigm, the performance and computational efficiency of several treatments for a specific aerosol process cannot be adequately quantified because many other factors among various modeling studies (e.g. grid configuration, meteorology, emission rates) differ as well. The scientific community needs to know the advantages and disadvantages of specific aerosol treatments when the meteorology, chemistry, and other aerosol processes are identical in order to reduce the uncertainties associated with aerosol predictions. To address these issues, an Aerosol Modeling Testbed (AMT) has been developed that systematically and objectively evaluates new aerosol treatments for use in regional and global models. The AMT consists of the modular Weather Research and Forecasting (WRF) model, a series of testbed cases for which extensive in situ and remote sensing measurements of meteorological, trace gas, and aerosol properties are available, and a suite of tools to evaluate the performance of meteorological, chemical, and aerosol process modules. WRF contains various parameterizations of meteorological, chemical, and aerosol processes and includes interactive aerosol-cloud-radiation treatments similar to those employed by climate models. In addition, the physics suite from the Community Atmosphere Model version 5 (CAM5) has also been ported to WRF so that it can be tested at various spatial scales and compared directly with field campaign data and other parameterizations commonly used by the mesoscale modeling community. Data from several campaigns, including the 2006 MILAGRO, 2008 ISDAC, 2008 VOCALS, 2010 CARES, and 2010 CalNex campaigns, have been incorporated into the AMT as testbed cases. Data from operational networks (e.g. air quality, meteorology, satellite) are also included in the testbed cases to supplement the field campaign data. The CARES and CalNex testbed cases are used to demonstrate how the AMT can be used to assess the strengths and weaknesses of simple and complex representations of aerosol processes in relation to computational cost. Anticipated enhancements to the AMT and how this type of testbed can be used by the scientific community to foster collaborations and coordinate aerosol modeling research will also be discussed.

  6. Extraction of Profile Information from Cloud Contaminated Radiances. Appendixes 2

    NASA Technical Reports Server (NTRS)

    Smith, W. L.; Zhou, D. K.; Huang, H.-L.; Li, Jun; Liu, X.; Larar, A. M.

    2003-01-01

    Clouds act to reduce the signal level and may produce noise, depending on the complexity of the cloud properties and the manner in which they are treated in the profile retrieval process. There are essentially three ways to extract profile information from cloud contaminated radiances: (1) cloud-clearing using spatially adjacent cloud contaminated radiance measurements, (2) retrieval based upon the assumption of opaque cloud conditions, and (3) retrieval or radiance assimilation using a physically correct cloud radiative transfer model which accounts for the absorption and scattering of the radiance observed. Cloud clearing extracts the radiance arising from the clear air portion of partly clouded fields of view, permitting soundings to the surface or the assimilation of radiances as in the clear field of view case. However, the accuracy of the clear air radiance signal depends upon the uniformity of cloud height and optical properties across the two fields of view used in the cloud clearing process. The assumption of opaque clouds within the field of view permits relatively accurate profiles to be retrieved down to near cloud top levels, the accuracy near the cloud top level being dependent upon the actual microphysical properties of the cloud. The use of a physically correct cloud radiative transfer model enables accurate retrievals down to cloud top levels and below semi-transparent cloud layers (e.g., cirrus). It should also be possible to assimilate cloudy radiances directly into the model given a physically correct cloud radiative transfer model, using geometric and microphysical cloud parameters retrieved from the radiance spectra as initial cloud variables in the radiance assimilation process. This presentation reviews the above three ways to extract profile information from cloud contaminated radiances. NPOESS Airborne Sounder Testbed-Interferometer radiance spectra and Aqua satellite AIRS radiance spectra are used to illustrate how cloudy radiances can be used in the profile retrieval process.
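
    The first approach can be stated compactly for the classic two-field-of-view case: if two adjacent fields of view share the same clear radiance and cloud level and differ only in cloud fractions N_1 and N_2, the cloud-cleared radiance is

      \hat{R}_{\mathrm{clear}} = \frac{R_1 - N^{*} R_2}{1 - N^{*}}, \qquad N^{*} = \frac{N_1}{N_2}

    which makes explicit why the accuracy degrades when the cloud height or optical properties are not uniform across the two fields of view.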

  7. CAUSES: Clouds Above the United States and Errors at the Surface

    NASA Astrophysics Data System (ADS)

    Ma, H. Y.; Klein, S. A.; Xie, S.; Morcrette, C. J.; Van Weverberg, K.; Zhang, Y.; Lo, M. H.

    2015-12-01

    The Clouds Above the United States and Errors at the Surface (CAUSES) is a new joint Global Atmospheric System Studies/Regional and Global Climate model/Atmospheric System Research (GASS/RGCM/ASR) intercomparison project to evaluate the central U.S. summertime surface warm biases seen in many weather and climate models. The main focus is to identify the role of cloud, radiation, and precipitation processes in contributing to surface air temperature biases. In this project, we use a short-term hindcast approach and examine the growth of the error as a function of hindcast lead time. The study period, April 1 to August 31, 2011, covers the entire Midlatitude Continental Convective Clouds Experiment (MC3E) campaign. Preliminary results from several models will be presented. (http://portal.nersc.gov/project/capt/CAUSES/) (This study is funded by the RGCM and ASR programs of the U.S. Department of Energy as part of the Cloud-Associated Parameterizations Testbed. This work is performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-658017)

  8. CAUSES: Clouds Above the United States and Errors at the Surface

    NASA Astrophysics Data System (ADS)

    Ma, H. Y.; Klein, S. A.; Xie, S.; Zhang, Y.; Morcrette, C. J.; Van Weverberg, K.; Petch, J.; Lo, M. H.

    2014-12-01

    The Clouds Above the United States and Errors at the Surface (CAUSES) is a new joint Global Atmospheric System Studies/Regional and Global Climate model/Atmospheric System Research (GASS/RGCM/ASR) intercomparison project to evaluate the central U.S. summertime surface warm biases seen in many weather and climate models. The main focus is to identify the role of cloud, radiation, and precipitation processes in contributing to surface air temperature biases. In this project, we use a short-term hindcast approach and examine the growth of the error as a function of hindcast lead time. The study period, April 1 to August 31, 2011, covers the entire Midlatitude Continental Convective Clouds Experiment (MC3E) campaign. Preliminary results from several models will be presented. (http://portal.nersc.gov/project/capt/CAUSES/) (This study is funded by the RGCM and ASR programs of the U.S. Department of Energy as part of the Cloud-Associated Parameterizations Testbed. This work is performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-658017)

  9. Two-channel microwave radiometer for observations of total column precipitable water vapor and cloud liquid water path

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liljegren, J.C.

    1994-01-01

    The Atmospheric Radiation Measurement (ARM) Program is focused on improving the treatment of radiation transfer in models of the atmospheric general circulation, as well as on improving parameterizations of cloud properties and formation processes in these models (USDOE, 1990). To help achieve these objectives, ARM is deploying several two-channel microwave radiometers at the Cloud and Radiation Testbed (CART) site in Oklahoma for the purpose of obtaining long time series observations of total precipitable water vapor (PWV) and cloud liquid water path (LWP). The performance of the WVR-1100 microwave radiometer deployed by ARM at the Oklahoma CART site central facility to provide time series measurements of precipitable water vapor (PWV) and liquid water path (LWP) is presented. The instrument has proven to be durable and reliable in continuous field operation since June 1992. The accuracy of the PWV has been demonstrated to achieve the limiting accuracy of the statistical retrieval under clear sky conditions, degrading with increasing LWP. Improvements are planned to address moisture accumulation on the Teflon window, as well as to identify the presence of clouds with LWP at or below the retrieval uncertainty.
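
    Two-channel retrievals of this kind conventionally convert the measured brightness temperatures T_b at 23.8 and 31.4 GHz to opacities and apply a linear statistical inversion; the notation below is the standard form rather than a transcription of this instrument's exact retrieval:

      \tau_{\nu} = \ln\frac{T_{\mathrm{mr},\nu} - T_{\mathrm{bg}}}{T_{\mathrm{mr},\nu} - T_{b,\nu}}, \qquad
      \mathrm{PWV} = a_0 + a_1 \tau_{23.8} + a_2 \tau_{31.4}, \qquad
      \mathrm{LWP} = b_0 + b_1 \tau_{23.8} + b_2 \tau_{31.4}

    where T_mr is the mean radiating temperature, T_bg the cosmic background, and the coefficients a_i, b_i are derived from a radiosonde climatology.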

  10. RACORO Continental Boundary Layer Cloud Investigations: 3. Separation of Parameterization Biases in Single-Column Model CAM5 Simulations of Shallow Cumulus

    NASA Technical Reports Server (NTRS)

    Lin, Wuyin; Liu, Yangang; Vogelmann, Andrew M.; Fridlind, Ann; Endo, Satoshi; Song, Hua; Feng, Sha; Toto, Tami; Li, Zhijin; Zhang, Minghua

    2015-01-01

    Climatically important low-level clouds are commonly misrepresented in climate models. The FAst-physics System TEstbed and Research (FASTER) Project has constructed case studies from the Atmospheric Radiation Measurement Climate Research Facility's Southern Great Plains site during the RACORO aircraft campaign to facilitate research on model representation of boundary-layer clouds. This paper focuses on using single-column Community Atmosphere Model version 5 (SCAM5) simulations of a multi-day continental shallow cumulus case to identify specific parameterization causes of low-cloud biases. Consistent model biases among the simulations driven by a set of alternative forcings suggest that uncertainty in the forcing plays only a relatively minor role. In-depth analysis reveals that the model's shallow cumulus convection scheme tends to significantly under-produce clouds during the times when shallow cumuli exist in the observations, while the deep convective and stratiform cloud schemes significantly over-produce low-level clouds throughout the day. The links between model biases and the underlying assumptions of the shallow cumulus scheme are further diagnosed with the aid of large-eddy simulations and aircraft measurements, and by suppressing the triggering of the deep convection scheme. It is found that the weak boundary layer turbulence simulated is directly responsible for the weak cumulus activity and the simulated boundary layer stratiform clouds. Increased vertical and temporal resolutions are shown to lead to stronger boundary layer turbulence and reduction of low-cloud biases.

  11. RACORO continental boundary layer cloud investigations. 3. Separation of parameterization biases in single-column model CAM5 simulations of shallow cumulus

    DOE PAGES

    Lin, Wuyin; Liu, Yangang; Vogelmann, Andrew M.; ...

    2015-06-19

    Climatically important low-level clouds are commonly misrepresented in climate models. The FAst-physics System TEstbed and Research (FASTER) project has constructed case studies from the Atmospheric Radiation Measurement (ARM) Climate Research Facility's Southern Great Plains site during the RACORO aircraft campaign to facilitate research on model representation of boundary-layer clouds. This paper focuses on using single-column Community Atmosphere Model version 5 (SCAM5) simulations of a multi-day continental shallow cumulus case to identify specific parameterization causes of low-cloud biases. Consistent model biases among the simulations driven by a set of alternative forcings suggest that uncertainty in the forcing plays only a relatively minor role. In-depth analysis reveals that the model's shallow cumulus convection scheme tends to significantly under-produce clouds during the times when shallow cumuli exist in the observations, while the deep convective and stratiform cloud schemes significantly over-produce low-level clouds throughout the day. The links between model biases and the underlying assumptions of the shallow cumulus scheme are further diagnosed with the aid of large-eddy simulations and aircraft measurements, and by suppressing the triggering of the deep convection scheme. It is found that the weak boundary layer turbulence simulated is directly responsible for the weak cumulus activity and the simulated boundary layer stratiform clouds. Increased vertical and temporal resolutions are shown to lead to stronger boundary layer turbulence and reduction of low-cloud biases.

  12. OpenID Connect as a security service in cloud-based medical imaging systems.

    PubMed

    Ma, Weina; Sartipi, Kamran; Sharghigoorabi, Hassan; Koff, David; Bak, Peter

    2016-04-01

    The evolution of cloud computing is driving the next generation of medical imaging systems. However, privacy and security concerns have consistently been regarded as the major obstacles to the adoption of cloud computing by healthcare domains. OpenID Connect, combining OpenID and OAuth, is an emerging representational state transfer-based federated identity solution. It is one of the most widely adopted open standards and may become the de facto standard for securing cloud computing and mobile applications; it has been called the "Kerberos of the cloud." We introduce OpenID Connect as an authentication and authorization service in cloud-based diagnostic imaging (DI) systems, and propose enhancements that allow for incorporating this technology within distributed enterprise environments. The objective of this study is to offer solutions for secure sharing of medical images among diagnostic imaging repositories (DI-r) and heterogeneous picture archiving and communication systems (PACS), as well as Web-based and mobile clients, in the cloud ecosystem. The main objective is to use the OpenID Connect open-source single sign-on and authorization service in a user-centric manner, so that deploying DI-r and PACS to private or community clouds provides security levels equivalent to the traditional computing model.
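
    The authorization-code flow at the heart of OpenID Connect can be sketched as follows; the issuer, client credentials, and redirect URI are placeholders, while the discovery path and token-request fields are the standard ones.

      # OpenID Connect authorization-code flow, client side; placeholder
      # issuer and credentials, standard discovery and token endpoints.
      import requests

      ISSUER = "https://openid.example-hospital.org"          # placeholder issuer
      conf = requests.get(ISSUER + "/.well-known/openid-configuration").json()

      # 1. Redirect the user's browser to conf["authorization_endpoint"] with
      #    response_type=code, client_id, redirect_uri and scope="openid".
      # 2. After login, the provider redirects back with an authorization code.
      code = "AUTH_CODE_FROM_REDIRECT"                        # placeholder

      # 3. Exchange the code for tokens at the token endpoint.
      tokens = requests.post(conf["token_endpoint"], data={
          "grant_type": "authorization_code",
          "code": code,
          "redirect_uri": "https://viewer.example.org/cb",    # placeholder
          "client_id": "dicom-viewer",                        # placeholder
          "client_secret": "SECRET",                          # placeholder
      }).json()
      print(tokens["id_token"][:40], "...")   # JWT asserting the user's identity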

  13. A Digital Knowledge Preservation Platform for Environmental Sciences

    NASA Astrophysics Data System (ADS)

    Aguilar Gómez, Fernando; de Lucas, Jesús Marco; Pertinez, Esther; Palacio, Aida; Perez, David

    2017-04-01

    The Digital Knowledge Preservation Platform is the evolution of a pilot project for Open Data supporting the full research data life cycle. It is currently being evolved at IFCA (Instituto de Física de Cantabria) as a combination of different open tools that have been extended: DMPTool (https://dmptool.org/) with pilot semantic features (RDF export, parameter definition), INVENIO (http://invenio-software.org/), a customized version integrating the entire research data life cycle, and Jupyter (http://jupyter.org/) as processing tool and reproducibility environment. This complete platform aims to provide an integrated environment for research data management following the FAIR+R principles: Findable (the Web portal based on Invenio provides a search engine, and all elements carry metadata to make them easily findable); Accessible (both data and software are available online with internal PIDs and DOIs, provided by DataCite); Interoperable (datasets can be combined to perform new analyses, and the OAI-PMH standard is also integrated); Re-usable (different license types and embargo periods can be defined); and +Reproducible (directly integrated with cloud computing resources). The deployment of the entire system over a Cloud framework helps to build a dynamic and scalable solution, not only for managing open datasets but also as a useful tool for the final user, who is able to directly process and analyse the open data. In parallel, the direct use of semantics and metadata is being explored and integrated in the framework. Ontologies, being a knowledge representation, can contribute to defining the elements and relationships of the research data life cycle, including DMPs, datasets, software, etc. The first advantage of developing an ontology of a knowledge domain is that it provides a common vocabulary hierarchy (i.e. a conceptual schema) that can be used and standardized by all the agents interested in the domain (either humans or machines). This way of using ontologies is one of the bases of the Semantic Web, where ontologies are set to play a key role in establishing a common terminology between agents. To develop the ontology we are using Protégé, a graphical ontology-development tool which supports a rich knowledge model and is open-source and freely available. To process and manage the ontology from the web framework we are using Semantic MediaWiki, an extension of MediaWiki that supports semantic search and can export data in RDF and CSV formats. This system is used as a testbed for the potential use of semantics in a more general environment. The Digital Knowledge Preservation Platform is closely related to the INDIGO-DataCloud project (https://www.indigo-datacloud.eu), since the same data life cycle approach (Planning, Collect, Curate, Analyze, Publish, Preserve) is taken into account. INDIGO-DataCloud solutions will be able to support all the different elements of the system, as we showed at the last Research Data Alliance Plenary. This presentation will show the different elements of the system and how they work, as well as the roadmap for their continuous integration.
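
    A tiny sketch of the kind of machine-readable metadata the semantic features imply, using rdflib; the dataset URI and vocabulary choices are illustrative, not the platform's actual export.

      # Describe a dataset in RDF and query it back with SPARQL.
      from rdflib import Graph, Literal, Namespace, URIRef
      from rdflib.namespace import DCTERMS, RDF

      DCAT = Namespace("http://www.w3.org/ns/dcat#")
      g = Graph()
      ds = URIRef("https://data.example.org/dataset/42")     # placeholder PID
      g.add((ds, RDF.type, DCAT.Dataset))
      g.add((ds, DCTERMS.title, Literal("Lake monitoring time series")))
      g.add((ds, DCTERMS.license, Literal("CC-BY-4.0")))

      q = ("SELECT ?t WHERE { ?d a <http://www.w3.org/ns/dcat#Dataset> ; "
           "<http://purl.org/dc/terms/title> ?t }")
      for row in g.query(q):
          print(row.t)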

  14. High Vertically Resolved Atmospheric State Revealed with IASI Single FOV Retrievals under All-weather Conditions

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Liu, Xu; Larar, Allen M.; Smith, William L.; Taylor, Jonathan P.; Schluessel, Peter; Strow, L. Larrabee; Mango, Stephen A.

    2008-01-01

    The Infrared Atmospheric Sounding Interferometer (IASI) on the MetOp satellite was launched on October 19, 2006. The Joint Airborne IASI Validation Experiment (JAIVEx) was conducted during April 2007 mainly for validation of the IASI on the MetOp satellite. IASI possesses an ultra-spectral resolution of 0.25 cm-1 and a spectral coverage from 645 to 2760 cm-1. Ultra-spectral resolution infrared spectral radiances obtained from near nadir observations provide atmospheric, surface, and cloud property information. An advanced retrieval algorithm with a fast radiative transfer model, including cloud effects, is used for atmospheric profile and cloud parameter retrieval. Preliminary retrievals of atmospheric soundings, surface properties, and cloud optical/microphysical properties with the IASI observations are obtained and presented. These retrievals are further inter-compared with those obtained from airborne FTS systems, such as the NPOESS Airborne Sounder Testbed - Interferometer (NAST-I), dedicated dropsondes, radiosondes, and ground based Raman Lidar. The capabilities of satellite ultra-spectral sounders such as the IASI are investigated to benefit future NPOESS operation.

  15. ATP3 Unified Field Study Data

    DOE Data Explorer

    Wolfrum, Ed (ORCID:0000000273618931); Knoshaug, Eric (ORCID:000000025709914X); Laurens, Lieve (ORCID:0000000349303267); Harmon, Valerie; Dempster, Thomas (ORCID:000000029550488X); McGowan, John (ORCID:0000000266920518); Rosov, Theresa; Cardello, David; Arrowsmith, Sarah; Kempkes, Sarah; Bautista, Maria; Lundquist, Tryg; Crowe, Brandon; Murawsky, Garrett; Nicolai, Eric; Rowe, Egan; Knurek, Emily; Javar, Reyna; Saracco Alvarez, Marcela; Schlosser, Steve; Riddle, Mary; Withstandley, Chris; Chen, Yongsheng; Van Ginkel, Steven; Igou, Thomas; Xu, Chunyan; Hu, Zixuan

    2017-10-20

    The Algae Testbed Public-Private Partnership (ATP3) was established with the goal of investigating open pond algae cultivation across different geographic, climatic, seasonal, and operational conditions while setting the benchmark for quality data collection, analysis, and dissemination. Identical algae cultivation systems and data analysis methodologies were established at testbed sites across the continental United States and Hawaii. Within this framework, the Unified Field Studies (UFS) were designed to characterize the cultivation of different algal strains during all four seasons across this testbed network. The dataset presented here is the complete, curated, climatic, cultivation, harvest, and biomass composition data for each season at each site. These data enable others to do in-depth cultivation, harvest, techno-economic, life cycle, resource, and predictive growth modeling analysis, as well as develop crop protection strategies for the nascent algae industry. NREL Sub award Number: DE-AC36-08-GO28308

  16. Single link flexible beam testbed project. Thesis

    NASA Technical Reports Server (NTRS)

    Hughes, Declan

    1992-01-01

    This thesis describes the single-link flexible beam testbed at the CLaMS laboratory in terms of its hardware, software, and linear model, and presents two controllers: both include a hub-angle proportional-derivative (PD) feedback compensator, and the second is augmented by a static-gain full-state feedback loop, based upon a synthesized strictly positive real (SPR) output, that increases the damping ratios of specific flexible-mode poles with respect to the PD-only case and hence reduces unwanted residual oscillation effects. Restricting the full-state feedback gains so as to produce an SPR open-loop transfer function ensures that the associated compensator has an infinite gain margin and a phase margin of (-90, 90) degrees. Both experimental and simulation data are evaluated in order to compare the performance of several observers when applied to the real testbed and to the linear model when uncompensated flexible modes are included.

  17. KSC-00pp0503

    NASA Image and Video Library

    2000-04-14

    Center Director Roy Bridges (left) dons a protective apron, gloves and face shield before the "ribbon-breaking" to open the new Cryogenic Testbed Facility. Part of the normal ceremonial ribbon was replaced with plastic tubing and frozen in liquid nitrogen for the event. Bridges hit the tubing with a small hammer to break it. The Cryogenics Testbed was built to provide cryogenics engineering development and testing services to meet the needs of industry. It will also support commercial, government and academic customers for technology development initiatives in the field of cryogenics. The facility is jointly managed by NASA and Dynacs Engineering Co., KSC's Engineering Development contractor.

  18. KSC-00pp0504

    NASA Image and Video Library

    2000-04-14

    Center Director Roy Bridges (left), wearing a protective apron, gloves and face shield, watches as liquid nitrogen is poured into a container to freeze the plastic tubing for a special "ribbon-breaking" to open the new Cryogenic Testbed Facility. Bridges hit the section of tubing with a small hammer to break it. The Cryogenics Testbed was built to provide cryogenics engineering development and testing services to meet the needs of industry. It will also support commercial, government and academic customers for technology development initiatives in the field of cryogenics. The facility is jointly managed by NASA and Dynacs Engineering Co., KSC's Engineering Development contractor.

  19. KSC-00pp0505

    NASA Image and Video Library

    2000-04-14

    A shower of frozen plastic signifies the successful breaking of the ceremonial "ribbon" at the opening of the new Cryogenic Testbed Facility. Part of the normal ribbon was replaced with plastic tubing and frozen in liquid nitrogen for the event. Center Director Roy Bridges hit the tubing with a small hammer to break it. The Cryogenics Testbed was built to provide cryogenics engineering development and testing services to meet the needs of industry. It will also support commercial, government and academic customers for technology development initiatives in the field of cryogenics. The facility is jointly managed by NASA and Dynacs Engineering Co., KSC's Engineering Development contractor.

  20. KSC-00pp0506

    NASA Image and Video Library

    2000-04-14

    Center Director Roy Bridges (center) is congratulated for the successful breaking of the ceremonial "ribbon" and the opening of the new Cryogenic Testbed Facility. Part of the normal ribbon was replaced with plastic tubing and frozen in liquid nitrogen for the event. Bridges hit the tubing with a small hammer to break it. The Cryogenics Testbed was built to provide cryogenics engineering development and testing services to meet the needs of industry. It will also support commercial, government and academic customers for technology development initiatives in the field of cryogenics. The facility is jointly managed by NASA and Dynacs Engineering Co., KSC's Engineering Development contractor.

  1. OpenID connect as a security service in Cloud-based diagnostic imaging systems

    NASA Astrophysics Data System (ADS)

    Ma, Weina; Sartipi, Kamran; Sharghi, Hassan; Koff, David; Bak, Peter

    2015-03-01

    The evolution of cloud computing is driving the next generation of diagnostic imaging (DI) systems. Cloud-based DI systems are able to deliver better services to patients without being constrained by their own physical facilities. However, privacy and security concerns have been consistently regarded as the major obstacle to the adoption of cloud computing by healthcare domains. Furthermore, the traditional computing models and interfaces employed by DI systems are not ready for accessing diagnostic images through mobile devices. RESTful technology is ideal for provisioning both mobile services and cloud computing. OpenID Connect, combining OpenID and OAuth, is an emerging REST-based federated identity solution. It is one of the most promising open standards to potentially become the de-facto standard for securing cloud computing and mobile applications, and has been regarded as the "Kerberos of the cloud". We introduce OpenID Connect as an identity and authentication service in cloud-based DI systems and propose enhancements that allow for incorporating this technology within a distributed enterprise environment. The objective of this study is to offer solutions for secure radiology image sharing among a DI-r (Diagnostic Imaging Repository) and heterogeneous PACS (Picture Archiving and Communication Systems), as well as mobile clients, in the cloud ecosystem. By using OpenID Connect as an open-source identity and authentication service, deploying a DI-r and PACS to private or community clouds should achieve a security level equivalent to that of the traditional computing model.
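    To make the flow concrete, here is a minimal Python sketch of the OpenID Connect discovery and authorization-code token exchange that a DI-r or PACS client would perform. The issuer URL, client credentials, redirect URI, and code value are hypothetical placeholders; the well-known discovery path and the token-request parameters come from the OpenID Connect and OAuth 2.0 specifications.

      # Sketch of OpenID Connect discovery plus authorization-code exchange
      # (issuer, client credentials, and code are hypothetical placeholders).
      import requests

      ISSUER = "https://idp.example-hospital.org"  # hypothetical OpenID Provider

      # 1. Discovery: the provider publishes its endpoints at a well-known URL.
      conf = requests.get(ISSUER + "/.well-known/openid-configuration").json()

      # 2. The client redirects the user to conf["authorization_endpoint"] and
      #    receives an authorization code; assume it is already in hand here.
      code = "AUTH_CODE_FROM_REDIRECT"  # placeholder

      # 3. Exchange the code for tokens at the token endpoint.
      tokens = requests.post(conf["token_endpoint"], data={
          "grant_type": "authorization_code",
          "code": code,
          "redirect_uri": "https://di-r.example-hospital.org/callback",
          "client_id": "dir-client",
          "client_secret": "dir-secret",
      }).json()

      # The ID token asserts the user's identity; the access token authorizes
      # subsequent REST calls to the imaging repository.
      print(tokens.get("id_token"), tokens.get("access_token"))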

  2. OpenID Connect as a security service in cloud-based medical imaging systems

    PubMed Central

    Ma, Weina; Sartipi, Kamran; Sharghigoorabi, Hassan; Koff, David; Bak, Peter

    2016-01-01

    Abstract. The evolution of cloud computing is driving the next generation of medical imaging systems. However, privacy and security concerns have been consistently regarded as the major obstacles to the adoption of cloud computing by healthcare domains. OpenID Connect, combining OpenID and OAuth, is an emerging representational state transfer-based federated identity solution. It is one of the most adopted open standards and could potentially become the de facto standard for securing cloud computing and mobile applications; it has also been regarded as the "Kerberos of the cloud." We introduce OpenID Connect as an authentication and authorization service in cloud-based diagnostic imaging (DI) systems, and propose enhancements that allow for incorporating this technology within distributed enterprise environments. The objective of this study is to offer solutions for secure sharing of medical images among a diagnostic imaging repository (DI-r) and heterogeneous picture archiving and communication systems (PACS), as well as Web-based and mobile clients, in the cloud ecosystem. The main objective is to use the OpenID Connect open-source single sign-on and authorization service in a user-centric manner, so that deploying a DI-r and PACS to private or community clouds provides security levels equivalent to those of the traditional computing model. PMID:27340682

  3. Physically-Retrieving Cloud and Thermodynamic Parameters from Ultraspectral IR Measurements

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Smith, William L., Sr.; Liu, Xu; Larar, Allen M.; Mango, Stephen A.; Huang, Hung-Lung

    2007-01-01

    A physical inversion scheme has been developed, dealing with cloudy as well as cloud-free radiance observed with ultraspectral infrared sounders, to simultaneously retrieve surface, atmospheric thermodynamic, and cloud microphysical parameters. A fast radiative transfer model, which applies to the clouded atmosphere, is used for atmospheric profile and cloud parameter retrieval. A one-dimensional (1-d) variational multi-variable inversion solution is used to improve an iterative background state defined by an eigenvector-regression-retrieval. The solution is iterated in order to account for non-linearity in the 1-d variational solution. It is shown that relatively accurate temperature and moisture retrievals can be achieved below optically thin clouds. For optically thick clouds, accurate temperature and moisture profiles down to cloud top level are obtained. For both optically thin and thick cloud situations, the cloud top height can be retrieved with relatively high accuracy (i.e., error < 1 km). NPOESS Airborne Sounder Testbed Interferometer (NAST-I) retrievals from the Atlantic-THORPEX Regional Campaign are compared with coincident observations obtained from dropsondes and the nadir-pointing Cloud Physics Lidar (CPL). This work was motivated by the need to obtain solutions for atmospheric soundings from infrared radiances observed for every individual field of view, regardless of cloud cover, from future ultraspectral geostationary satellite sounding instruments, such as the Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) and the Hyperspectral Environmental Suite (HES). However, this retrieval approach can also be applied to the ultraspectral sounding instruments to fly on Polar satellites, such as the Infrared Atmospheric Sounding Interferometer (IASI) on the European MetOp satellite, the Cross-track Infrared Sounder (CrIS) on the NPOESS Preparatory Project and the following NPOESS series of satellites.
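    For reference, the 1-d variational step described here minimizes the standard retrieval cost function, written below in generic notation (not necessarily the authors' exact formulation):

      J(\mathbf{x}) = (\mathbf{x}-\mathbf{x}_b)^{\mathrm{T}}\,\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b) + [\mathbf{y} - F(\mathbf{x})]^{\mathrm{T}}\,\mathbf{R}^{-1}[\mathbf{y} - F(\mathbf{x})]

    where x is the state vector (surface, atmospheric thermodynamic, and cloud parameters), x_b is the background state supplied by the eigenvector-regression retrieval, y is the vector of observed radiances, F is the fast radiative transfer model, and B and R are the background and observation error covariance matrices; iterating the minimization accounts for the non-linearity of F.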

  4. To Which Extent can Aerosols Affect Alpine Mixed-Phase Clouds?

    NASA Astrophysics Data System (ADS)

    Henneberg, O.; Lohmann, U.

    2017-12-01

    Aerosol-cloud interactions constitute a large uncertainty in regional climate and changing weather patterns. These uncertainties are due to the multiple processes that can be triggered by aerosols, especially in mixed-phase clouds. Mixed-phase clouds most likely result in precipitation due to the formation of ice crystals, which can grow to precipitation size. Ice nucleating particles (INPs) determine how fast these clouds glaciate and form precipitation. The potential for INPs to transform supercooled liquid clouds into precipitating clouds depends on the available humidity and supercooled liquid. Those conditions are determined by dynamics. Moderately high updraft velocities result in persistent mixed-phase clouds in the Swiss Alps [1], which provide an ideal testbed to investigate the effect of aerosols on precipitation in mixed-phase clouds. To address the effect of aerosols in orographic winter clouds under different dynamic conditions, we run a number of real-case ensembles with the regional climate model COSMO at a horizontal resolution of 1.1 km. Simulations with different INP concentrations within the range observed at the GAW research station Jungfraujoch in the Swiss Alps are conducted and repeated within the ensemble. Microphysical processes are described with a two-moment scheme. Enhanced INP concentrations increase the precipitation rate of a single precipitation event by up to 20%. Other precipitation events of similar strength are less affected by the INP concentration. The effect of CCN is negligible for precipitation from orographic winter clouds in our case study. There is evidence that INPs change precipitation rate and location more effectively in stronger dynamic regimes, due to the enhanced potential to transform supercooled liquid into ice. Classifying the ensemble members according to their dynamics will quantify the interaction of aerosol effects and dynamics. Reference [1] Lohmann et al., 2016: Persistence of orographic mixed-phase clouds, GRL

  5. Experimental demonstration of bandwidth on demand (BoD) provisioning based on time scheduling in software-defined multi-domain optical networks

    NASA Astrophysics Data System (ADS)

    Zhao, Yongli; Li, Yajie; Wang, Xinbo; Chen, Bowen; Zhang, Jie

    2016-09-01

    A hierarchical software-defined networking (SDN) control architecture is designed for multi-domain optical networks with the OpenDaylight (ODL) controller. The OpenFlow-based Control Virtual Network Interface (CVNI) protocol is deployed between the network orchestrator and the domain controllers. Then, a dynamic bandwidth on demand (BoD) provisioning solution is proposed based on time scheduling in software-defined multi-domain optical networks (SD-MDON). Shared Risk Link Group (SRLG)-disjoint routing schemes are adopted to separate each tenant for reliability. The SD-MDON testbed is built based on the proposed hierarchical control architecture. The proposed time scheduling-based BoD (Ts-BoD) solution is then experimentally demonstrated on the testbed. The performance of the Ts-BoD solution is evaluated with respect to blocking probability, resource utilization, and lightpath setup latency.

  6. Identity federation in OpenStack - an introduction to hybrid clouds

    NASA Astrophysics Data System (ADS)

    Denis, Marek; Castro Leon, Jose; Ormancey, Emmanuel; Tedesco, Paolo

    2015-12-01

    We are evaluating the cloud identity federation available in the OpenStack ecosystem, which allows on-premise bursting into remote clouds using local identities (i.e., domain accounts). Further enhancements to identity federation are a clear path to hybrid cloud architectures - virtualized infrastructures layered across independent private and public clouds.
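    As an illustration of what such a federation setup involves, below is a sketch (written as a Python dict, with assumed attribute and group names) of a Keystone-style mapping that turns an identity asserted by an external provider into a local cloud user; this is a generic example, not the actual CERN configuration.

      # Illustrative Keystone federation mapping (assumed names, not CERN's
      # configuration): it maps an identity asserted by the external provider
      # (e.g. REMOTE_USER set after SAML/OIDC authentication) onto a local
      # Keystone user and group, which in turn carries project roles.
      import json

      mapping = {
          "rules": [
              {
                  "local": [
                      {"user": {"name": "{0}"}},                     # username from the assertion
                      {"group": {"id": "FEDERATED_USERS_GROUP_ID"}}  # placeholder group id
                  ],
                  "remote": [
                      {"type": "REMOTE_USER"}                        # attribute from the IdP
                  ],
              }
          ]
      }
      print(json.dumps(mapping, indent=2))

    In Keystone, a JSON document of this shape would be registered as a mapping and associated with an identity provider and protocol, after which federated users can obtain scoped tokens much like local ones.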

  7. Design and construction of a testbed for the application of real volcanic ash from the Eyjafjallajökull and Grimsvötn eruptions to microgas turbines

    NASA Astrophysics Data System (ADS)

    Weber, Konradin; Fischer, Christian; Lange, Martin; Schulz, Uwe; Naraparaju, Ravisankar; Kramer, Dietmar

    2017-04-01

    It is well known that volcanic ash clouds emitted from erupting volcanoes pose a considerable threat to aviation. Volcanic ash particles can damage the turbine blades and their thermal barrier coatings as well as the bearings of the turbine. For a detailed investigation of this damaging effect, a testbed was designed and constructed which allowed the study of the damaging effects of real volcanic ash on a microgas turbine modified especially for these investigations. The use of this microgas turbine had the advantage of delivering near-reality conditions, using kerosene and operating at temperatures similar to those of large turbines, but at a very cost-effective level. The testbed consisted of a disperser for the real volcanic ash and all the equipment needed to control the microgas turbine. Moreover, in front of and behind the microgas turbine the concentration and distribution of the volcanic ash were measured online by optical particle counters (OPCs). The particle concentration and size distribution of the volcanic ash particles in the intake in front of the microgas turbine were measured by an OPC combined with an isokinetic intake. In the exhaust gas behind the microgas turbine, in addition to measurement with a second OPC, ash particles were captured with an impactor to enable later electron-microscope analysis of their morphology and verification of possible melting processes of the ash particles. This testbed is of high importance as it allows detailed investigations of the impact of volcanic ash on jet turbines and of appropriate countermeasures.

  8. Satellite Cloud and Radiative Property Processing and Distribution System on the NASA Langley ASDC OpenStack and OpenShift Cloud Platform

    NASA Astrophysics Data System (ADS)

    Nguyen, L.; Chee, T.; Palikonda, R.; Smith, W. L., Jr.; Bedka, K. M.; Spangenberg, D.; Vakhnin, A.; Lutz, N. E.; Walter, J.; Kusterer, J.

    2017-12-01

    Cloud Computing offers new opportunities for large-scale scientific data producers to utilize Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) IT resources to process and deliver data products in an operational environment where timely delivery, reliability, and availability are critical. The NASA Langley Research Center Atmospheric Science Data Center (ASDC) is building and testing a private and public-facing cloud for users in the Science Directorate to utilize as an everyday production environment. The NASA SatCORPS (Satellite ClOud and Radiation Property Retrieval System) team processes and derives near real-time (NRT) global cloud products from operational geostationary (GEO) satellite imager datasets. To deliver these products, we will utilize the public-facing cloud and OpenShift to deploy a load-balanced webserver for data storage, access, and dissemination. The OpenStack private cloud will host data ingest and computational capabilities for SatCORPS processing. This paper will discuss the SatCORPS migration towards, and usage of, the ASDC Cloud Services in an operational environment. Detailed lessons learned from the use of prior cloud providers, specifically the Amazon Web Services (AWS) GovCloud and the Government Cloud administered by the Langley Managed Cloud Environment (LMCE), will also be discussed.

  9. The Algae Testbed Public-Private Partnership (ATP 3 ) framework; establishment of a national network of testbed sites to support sustainable algae production

    DOE PAGES

    McGowen, John; Knoshaug, Eric P.; Laurens, Lieve M. L.; ...

    2017-07-01

    Well-controlled experiments that directly compare seasonal algal productivities across geographically distinct locations have not been reported before. To fill this gap, six cultivation testbed facilities were chosen across the United States to evaluate different climatic zones with respect to algal biomass productivity potential. The geographical locations and climates were as follows: Southwest, desert; Western, coastal; Southeast, inland; Southeast, coastal; Pacific, tropical; and Midwest, greenhouse. The testbed facilities were equipped with identical systems for inoculum production and open pond operation, and methods were standardized across all testbeds to ensure accurate measurement of physical and biological variables. The ability of the testbed sites to culture and analyze the same algal species, Nannochloropsis oceanica KA32, using identical pond operational and data collection procedures was evaluated during the same seasonal timeframe. This manuscript describes the results of a first-of-its-kind coordinated testbed validation field study while providing critical details on how geographical variations in temperature, light, and weather variables influenced algal productivity, nitrate consumption, and biomass composition. We found distinct differences in growth characteristics due to the geographic location and the resulting climatic and seasonal conditions across the sites, with the highest productivities observed in the desert Southwest and tropical Pacific regions, followed by the Western coastal region. The lowest productivities were observed at the Southeast inland and Midwest greenhouse locations. These differences in productivity among the sites correlated with the differences in pond water temperature and available solar radiation. In addition, two sites, the tropical Pacific and the Southeast inland, experienced unusual events (spontaneous flocculation, and unusually cold and wet (rainfall) conditions, respectively) that negatively affected outdoor algal growth. Minor variability in productivity was also observed between the different experimental treatments at each site, much smaller than the differences due to geographic location. Finally, the successful demonstration of the coordinated and standardized operation of the testbed sites established a rigorous basis for future validation of algal strains and operational conditions and protocols across a geographically diverse testbed network.

  10. The Algae Testbed Public-Private Partnership (ATP 3 ) framework; establishment of a national network of testbed sites to support sustainable algae production

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGowen, John; Knoshaug, Eric P.; Laurens, Lieve M. L.

    Well-controlled experiments that directly compare seasonal algal productivities across geographically distinct locations have not been reported before. To fill this gap, six cultivation testbed facilities were chosen across the United States to evaluate different climatic zones with respect to algal biomass productivity potential. The geographical locations and climates were as follows: Southwest, desert; Western, coastal; Southeast, inland; Southeast, coastal; Pacific, tropical; and Midwest, greenhouse. The testbed facilities were equipped with identical systems for inoculum production and open pond operation, and methods were standardized across all testbeds to ensure accurate measurement of physical and biological variables. The ability of the testbed sites to culture and analyze the same algal species, Nannochloropsis oceanica KA32, using identical pond operational and data collection procedures was evaluated during the same seasonal timeframe. This manuscript describes the results of a first-of-its-kind coordinated testbed validation field study while providing critical details on how geographical variations in temperature, light, and weather variables influenced algal productivity, nitrate consumption, and biomass composition. We found distinct differences in growth characteristics due to the geographic location and the resulting climatic and seasonal conditions across the sites, with the highest productivities observed in the desert Southwest and tropical Pacific regions, followed by the Western coastal region. The lowest productivities were observed at the Southeast inland and Midwest greenhouse locations. These differences in productivity among the sites correlated with the differences in pond water temperature and available solar radiation. In addition, two sites, the tropical Pacific and the Southeast inland, experienced unusual events (spontaneous flocculation, and unusually cold and wet (rainfall) conditions, respectively) that negatively affected outdoor algal growth. Minor variability in productivity was also observed between the different experimental treatments at each site, much smaller than the differences due to geographic location. Finally, the successful demonstration of the coordinated and standardized operation of the testbed sites established a rigorous basis for future validation of algal strains and operational conditions and protocols across a geographically diverse testbed network.

  11. Testing cloud microphysics parameterizations in NCAR CAM5 with ISDAC and M-PACE observations

    NASA Astrophysics Data System (ADS)

    Liu, Xiaohong; Xie, Shaocheng; Boyle, James; Klein, Stephen A.; Shi, Xiangjun; Wang, Zhien; Lin, Wuyin; Ghan, Steven J.; Earle, Michael; Liu, Peter S. K.; Zelenyuk, Alla

    2011-01-01

    Arctic clouds simulated by the National Center for Atmospheric Research (NCAR) Community Atmospheric Model version 5 (CAM5) are evaluated with observations from the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Indirect and Semi-Direct Aerosol Campaign (ISDAC) and Mixed-Phase Arctic Cloud Experiment (M-PACE), which were conducted at its North Slope of Alaska site in April 2008 and October 2004, respectively. Model forecasts for the Arctic spring and fall seasons performed under the Cloud-Associated Parameterizations Testbed framework generally reproduce the spatial distributions of cloud fraction for single-layer boundary-layer mixed-phase stratocumulus and multilayer or deep frontal clouds. However, for low-level stratocumulus, the model significantly underestimates the observed cloud liquid water content in both seasons. As a result, CAM5 significantly underestimates the surface downward longwave radiative fluxes by 20-40 W m-2. Introducing a new ice nucleation parameterization slightly improves the model performance for low-level mixed-phase clouds by increasing cloud liquid water content through the reduction of the conversion rate from cloud liquid to ice by the Wegener-Bergeron-Findeisen process. The CAM5 single-column model testing shows that changing the instantaneous freezing temperature of rain to form snow from -5°C to -40°C causes a large increase in modeled cloud liquid water content through the slowing down of cloud liquid and rain-related processes (e.g., autoconversion of cloud liquid to rain). The underestimation of aerosol concentrations in CAM5 in the Arctic also plays an important role in the low bias of cloud liquid water in the single-layer mixed-phase clouds. In addition, numerical issues related to the coupling of model physics and time stepping in CAM5 are responsible for the model biases and will be explored in future studies.

  12. High Vertically Resolved Atmospheric and Surface/Cloud Parameters Retrieved with Infrared Atmospheric Sounding Interferometer (IASI)

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Liu, Xu; Larar, Allen M.; Smith, William L.; Taylor, Jonathan P.; Schluessel, Peter; Strow, L. Larrabee; Mango, Stephen A.

    2008-01-01

    The Joint Airborne IASI Validation Experiment (JAIVEx) was conducted during April 2007, mainly for validation of the IASI on the MetOp satellite. IASI possesses an ultra-spectral resolution of 0.25/cm and a spectral coverage from 645 to 2760/cm. Ultra-spectral resolution infrared spectral radiances obtained from near nadir observations provide atmospheric, surface, and cloud property information. An advanced retrieval algorithm with a fast radiative transfer model, including cloud effects, is used for atmospheric profile and cloud parameter retrieval. This physical inversion scheme has been developed, dealing with cloudy as well as cloud-free radiance observed with ultraspectral infrared sounders, to simultaneously retrieve surface, atmospheric thermodynamic, and cloud microphysical parameters. A fast radiative transfer model, which applies to the cloud-free and/or clouded atmosphere, is used for atmospheric profile and cloud parameter retrieval. A one-dimensional (1-d) variational multi-variable inversion solution is used to improve an iterative background state defined by an eigenvector-regression-retrieval. The solution is iterated in order to account for non-linearity in the 1-d variational solution. It is shown that relatively accurate temperature and moisture retrievals are achieved below optically thin clouds. For optically thick clouds, accurate temperature and moisture profiles down to cloud top level are obtained. For both optically thin and thick cloud situations, the cloud top height can be retrieved with relatively high accuracy (i.e., error < 1 km). Preliminary retrievals of atmospheric soundings, surface properties, and cloud optical/microphysical properties with the IASI observations are obtained and presented. These retrievals will be further inter-compared with those obtained from the airborne FTS system, namely the NPOESS Airborne Sounder Testbed - Interferometer (NAST-I), as well as with dedicated dropsondes, radiosondes, and ground-based Raman lidar. The capabilities of satellite ultra-spectral sounders such as IASI are investigated, indicating that a highly vertically resolved structure of the atmosphere can be retrieved.

  13. Wireless Sensor Networks for Environmental Monitoring

    NASA Astrophysics Data System (ADS)

    Liang, X.; Liang, Y.; Navarro, M.; Zhong, X.; Villalba, G.; Li, Y.; Davis, T.; Erratt, N.

    2015-12-01

    Wireless sensor networks (WSNs) have gained increasing interest in a broad range of new scientific research and applications. WSN technologies can provide spatial and temporal data at resolutions not possible before, opening up new opportunities. On the other hand, WSNs, particularly outdoor WSNs in harsh environments, present great challenges for scientists and engineers in terms of network design, deployment, operation, management, and maintenance. Since 2010, we have been working on the deployment of an outdoor multi-hop WSN testbed for hydrological/environmental monitoring in a forested hill-sloped region at the Audubon Society of Western Pennsylvania (ASWP), Pennsylvania, USA. The ASWP WSN testbed has continuously evolved and now comprises more than 80 nodes. To our knowledge, the ASWP WSN testbed represents one of the first long-term multi-hop WSN deployments in an outdoor environment. As simulation and laboratory methods are unable to capture the complexity of outdoor environments (e.g., forests, oceans, mountains, or glaciers), which significantly affect WSN operations and maintenance, experimental deployments are essential to investigate and understand WSN behaviors and performance as well as maintenance characteristics under these harsh conditions. In this talk, based on our empirical studies with the ASWP WSN testbed, we will present our discoveries and investigations on several important aspects, including the WSN energy profile, node reprogramming, the network management system, and testbed maintenance. We will then provide our insight into these critical aspects of outdoor WSN deployments and operations.

  14. Large Eddy Simulations of Continental Boundary Layer Clouds Observed during the RACORO Field Campaign

    NASA Astrophysics Data System (ADS)

    Endo, S.; Fridlind, A. M.; Lin, W.; Vogelmann, A. M.; Toto, T.; Liu, Y.

    2013-12-01

    Three cases of boundary layer clouds are analyzed in the FAst-physics System TEstbed and Research (FASTER) project, based on continental boundary-layer-cloud observations during the RACORO Campaign [Routine Atmospheric Radiation Measurement (ARM) Aerial Facility (AAF) Clouds with Low Optical Water Depths (CLOWD) Optical Radiative Observations] at the ARM Climate Research Facility's Southern Great Plains (SGP) site. The three 60-hour case study periods are selected to capture the temporal evolution of cumulus, stratiform, and drizzling boundary-layer cloud systems under a range of conditions, intentionally including those that are relatively more mixed or transitional in nature versus being of a purely canonical type. Multi-modal and temporally varying aerosol number size distribution profiles are derived from aircraft observations. Large eddy simulations (LESs) are performed for the three case study periods using the GISS Distributed Hydrodynamic Aerosol and Radiative Modeling Application (DHARMA) model and the WRF-FASTER model, which is the Weather Research and Forecasting (WRF) model implemented with forcing ingestion and other functions to constitute a flexible LES. Both LES models capture the significant transitions of cloud-topped boundary layers in the three periods: diurnal evolution of cumulus layers repeating over multiple days, nighttime evolution/daytime diminution of thick stratus, and daytime breakup of stratus and stratocumulus clouds. Simulated transitions of the thermodynamic structures of the cloud-topped boundary layers are evaluated against balloon-borne soundings and ground-based remote sensors. Aircraft observations are then used to statistically evaluate the predicted cloud droplet number size distributions under varying aerosol and cloud conditions. An ensemble approach is used to refine the model configuration for the combined use of observations with parallel LES and single-column model simulations. See the Lin et al. poster for the single-column model investigation.

  15. HiCAT Software Infrastructure: Safe hardware control with object oriented Python

    NASA Astrophysics Data System (ADS)

    Moriarty, Christopher; Brooks, Keira; Soummer, Remi

    2018-01-01

    High contrast imaging for Complex Aperture Telescopes (HiCAT) is a testbed designed to demonstrate coronagraphy and wavefront control for segmented on-axis space telescopes such as envisioned for LUVOIR. To limit air movements in the testbed room, software interfaces for several different hardware components were developed to completely automate operations. When developing software interfaces for many different pieces of hardware, unhandled errors are commonplace and can prevent the software from properly closing a hardware resource. Some fragile components (e.g., deformable mirrors) can be permanently damaged because of this. We present an object-oriented Python-based infrastructure to safely automate hardware control and optical experiments; specifically, it conducts high-contrast imaging experiments while monitoring humidity and power status, with graceful shutdown processes even for unexpected errors. Python contains a construct called a "context manager" that allows you to define code to run when a resource is opened or closed. Context managers ensure that a resource is properly closed, even when unhandled errors occur. Harnessing the context-manager design, we also use Python's multiprocessing library to monitor humidity and power status without interrupting the experiment. Upon detecting a safety problem, the master process sends an event to the child process that triggers the context managers to gracefully close any open resources. This infrastructure allows us to queue up several experiments and safely operate the testbed without a human in the loop.
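    The pattern described above can be sketched as follows; the device class, threshold, and timing are hypothetical stand-ins rather than HiCAT's actual interfaces, and the monitoring direction is simplified (here the child monitor sets a shared event that the experiment loop checks).

      # Minimal sketch of the safety pattern: a context manager guarantees the
      # hardware is closed even on errors, while a child process watches an
      # environmental sensor and signals a graceful shutdown. Names/thresholds
      # are hypothetical.
      import multiprocessing, random, time

      class DeformableMirror:
          """Context manager guaranteeing the device is closed even on errors."""
          def __enter__(self):
              print("DM opened")          # would open the real hardware resource
              return self
          def __exit__(self, exc_type, exc, tb):
              print("DM safely closed")   # runs on normal exit AND on exceptions
              return False                # re-raise any error after cleanup

      def humidity_monitor(stop_event):
          """Child process: watch a sensor, signal on a safety violation."""
          while not stop_event.is_set():
              humidity = random.uniform(0, 100)   # stand-in for a sensor read
              if humidity > 95:                   # assumed safety threshold
                  stop_event.set()                # request a graceful shutdown
              time.sleep(1)

      if __name__ == "__main__":
          stop_event = multiprocessing.Event()
          monitor = multiprocessing.Process(target=humidity_monitor,
                                            args=(stop_event,), daemon=True)
          monitor.start()
          with DeformableMirror():                # __exit__ always runs
              for step in range(100):             # the queued experiment
                  if stop_event.is_set():
                      break                       # graceful shutdown path
                  time.sleep(0.1)                 # take an exposure, move the DM, ...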

  16. Development, Demonstration, and Control of a Testbed for Multiterminal HVDC System

    DOE PAGES

    Li, Yalong; Shi, Xiaojie M.; Liu, Bo; ...

    2016-10-21

    This paper presents the development of a scaled four-terminal high-voltage direct current (HVDC) testbed, including hardware structure, communication architecture, and different control schemes. The developed testbed is capable of emulating typical operation scenarios including system start-up, power variation, line contingency, and converter station failure. Some unique scenarios are also developed and demonstrated, such as online control mode transition and station re-commission. In particular, a dc line current control is proposed, through the regulation of a converter station at one terminal. By controlling a dc line current to zero, the transmission line can be opened by using relatively low-cost HVDC disconnects with low current interrupting capability, instead of the more expensive dc circuit breaker. Utilizing the dc line current control, an automatic line current limiting scheme is developed. As a result, when a dc line is overloaded, the line current control will be automatically activated to regulate current within the allowable maximum value.
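    A toy illustration of the idea behind the proposed dc line current control is sketched below: one converter station adjusts its terminal voltage so the line current is regulated to zero, after which a low-cost disconnect can open the line. The first-order line model and the PI gains are illustrative assumptions, not the testbed's actual parameters.

      # Toy PI regulation of a dc line current to zero (assumed R, L, gains).
      R, L = 0.5, 0.01          # line resistance (ohm) and inductance (H), assumed
      Kp, Ki = 2.0, 50.0        # PI gains, assumed
      dt = 1e-4                 # simulation time step (s)
      v_remote = 400.0          # voltage held by the remote terminal (V)
      i, integ = 10.0, 0.0      # initial line current (A), integrator state

      for _ in range(5000):
          err = 0.0 - i                                 # current reference is zero
          integ += err * dt
          v_local = v_remote + Kp * err + Ki * integ    # controlled terminal voltage
          di = (v_local - v_remote - R * i) / L         # line dynamics: L di/dt = dV - R*i
          i += di * dt

      print(f"line current after control: {i:.4f} A")   # ~0 -> safe to open disconnect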

  17. The Geoengineering Model Intercomparison Project Phase 6 (GeoMIP6): Simulation design and preliminary results

    DOE PAGES

    Kravitz, Benjamin S.; Robock, Alan; Tilmes, S.; ...

    2015-10-27

    We present a suite of new climate model experiment designs for the Geoengineering Model Intercomparison Project (GeoMIP). This set of experiments, named GeoMIP6 (to be consistent with the Coupled Model Intercomparison Project Phase 6), builds on the previous GeoMIP project simulations, and has been expanded to address several further important topics, including key uncertainties in extreme events, the use of geoengineering as part of a portfolio of responses to climate change, and the relatively new idea of cirrus cloud thinning to allow more long wave radiation to escape to space. We discuss experiment designs, as well as the rationale for those designs, showing preliminary results from individual models when available. We also introduce a new feature, called the GeoMIP Testbed, which provides a platform for simulations that will be performed with a few models and subsequently assessed to determine whether the proposed experiment designs will be adopted as core (Tier 1) GeoMIP experiments. In conclusion, this is meant to encourage various stakeholders to propose new targeted experiments that address their key open science questions, with the goal of making GeoMIP more relevant to a broader set of communities.

  18. SensorWeb 3G: Extending On-Orbit Sensor Capabilities to Enable Near Realtime User Configurability

    NASA Technical Reports Server (NTRS)

    Mandl, Daniel; Cappelaere, Pat; Frye, Stuart; Sohlberg, Rob; Ly, Vuong; Chien, Steve; Tran, Daniel; Davies, Ashley; Sullivan, Don; Ames, Troy; et al.

    2010-01-01

    This research effort prototypes an implementation of a standard interface, the Web Coverage Processing Service (WCPS), an Open Geospatial Consortium (OGC) standard, to enable users to define, test, upload, and execute algorithms for on-orbit sensor systems. The user is able to customize the on-orbit data products that result from the raw data streaming from an instrument. This extends the SensorWeb 2.0 concept that was developed under a previous Advanced Information System Technology (AIST) effort, in which web services wrap sensors and a standardized Extensible Markup Language (XML) based scripting workflow language orchestrates processing steps across multiple domains. SensorWeb 3G extends the concept by giving the user control over the flight software modules associated with the on-orbit sensor, and thus provides a degree of flexibility which does not presently exist. The successful demonstrations to date will be presented, which include a realistic HyspIRI decadal mission testbed. Benchmarks that were run will also be presented, along with planned future demonstrations and benchmark tests. Finally, we conclude with implications for the future and how this concept dovetails with efforts to develop "cloud computing" methods and standards.
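    For flavor, a request of the kind a user might submit could look like the following Python sketch. The service URL, coverage name, and band names are hypothetical, while the for/return/encode expression follows the general OGC WCPS query style.

      # Sketch of executing a user-defined algorithm through a WCPS endpoint
      # (hypothetical URL and coverage/band names).
      import requests

      WCPS_URL = "http://example-sensorweb.org/wcps"  # hypothetical service

      # Band-math "algorithm" (an NDVI-style index) expressed as a WCPS query,
      # evaluated on the processing service instead of downlinking raw data.
      query = """
      for c in (HYSPIRI_SCENE)
      return encode(((c.nir - c.red) / (c.nir + c.red)), "image/tiff")
      """

      resp = requests.post(WCPS_URL, data={"query": query})
      with open("ndvi.tif", "wb") as f:
          f.write(resp.content)   # the derived product, not the raw data stream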

  19. From BASE-ASIA Toward 7-SEAS: A Satellite-Surface Perspective of Boreal Spring Biomass-Burning Aerosols and Clouds in Southeast Asia

    NASA Technical Reports Server (NTRS)

    Tsay, Si-Chee; Hsu, N. Christina; Lau, William K.-M.; Li, Can; Gabriel, Philip M.; Ji, Qiang; Holben, Brent N.; Welton, E. Judd; Nguyen, Anh X.; Janjai, Serm; et al.

    2013-01-01

    In this paper, we present recent field studies conducted by NASA's SMART-COMMIT (and ACHIEVE, to be operated in 2013) mobile laboratories, jointly with distributed ground-based networks (e.g., AERONET, http://aeronet.gsfc.nasa.gov/ and MPLNET, http://mplnet.gsfc.nasa.gov/) and other contributing instruments over northern Southeast Asia. These three mobile laboratories, collectively called SMARTLabs (cf. http://smartlabs.gsfc.nasa.gov/, Surface-based Mobile Atmospheric Research & Testbed Laboratories), comprise a suite of surface remote sensing and in-situ instruments that are pivotal in providing high spectral and temporal measurements, complementing the collocated spatial observations from various Earth Observing System (EOS) satellites. A satellite-surface perspective and scientific findings, drawn from the BASE-ASIA (2006) field deployment as well as a series of ongoing 7-SEAS (2010-13) field activities over northern Southeast Asia, are summarized, concerning (i) regional properties of aerosols from satellite and in situ measurements, (ii) cloud properties from remote sensing and surface observations, (iii) vertical distribution of aerosols and clouds, and (iv) regional aerosol radiative effects and impact assessment. The aerosol burden over Southeast Asia in boreal spring, attributed to biomass burning, exhibits highly consistent spatial and temporal distribution patterns, with major variability arising from changes in the magnitude of the aerosol loading mediated by processes ranging from large-scale climate factors to diurnal meteorological events. Downwind from the source regions, the tightly coupled aerosol-cloud system provides a unique, natural laboratory for further exploring the micro- and macro-scale relationships of the complex interactions. The climatic significance is presented through large-scale anti-correlations between aerosol and precipitation anomalies, showing spatial and seasonal variability, but their precise cause-and-effect relationships remain an open-ended question. To facilitate an improved understanding of the regional aerosol radiative effects, which continue to be one of the largest uncertainties in climate forcing, a joint international effort is required and anticipated to commence in springtime 2013 in northern Southeast Asia.

  20. A MATLAB/Simulink based GUI for the CERES Simulator

    NASA Technical Reports Server (NTRS)

    Valencia, Luis H.

    1995-01-01

    The Clouds and the Earth's Radiant Energy System (CERES) simulator will allow flight operational familiarity with the CERES instrument prior to launch. It will provide a CERES instrument simulation facility for NASA Langley Research Center, NASA Goddard Space Flight Center, and TRW. One of the objectives of building this simulator is its use as a testbed for functionality checking of atypical memory uploads and for anomaly investigation. For instance, instrument malfunction due to memory damage requires troubleshooting on a simulator to determine the nature of the problem and to find a solution.

  1. Isolating the Liquid Cloud Response to Recent Arctic Sea Ice Variability Using Spaceborne Lidar Observations

    NASA Astrophysics Data System (ADS)

    Morrison, A. L.; Kay, J. E.; Chepfer, H.; Guzman, R.; Yettella, V.

    2018-01-01

    While the radiative influence of clouds on Arctic sea ice is known, the influence of sea ice cover on Arctic clouds is challenging to detect, separate from atmospheric circulation, and attribute to human activities. Providing observational constraints on the two-way relationship between sea ice cover and Arctic clouds is important for predicting the rate of future sea ice loss. Here we use 8 years of CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations) spaceborne lidar observations from 2008 to 2015 to analyze Arctic cloud profiles over sea ice and over open water. Using a novel surface mask to restrict our analysis to where sea ice concentration varies, we isolate the influence of sea ice cover on Arctic Ocean clouds. The study focuses on clouds containing liquid water because liquid-containing clouds are the most important cloud type for radiative fluxes and therefore for sea ice melt and growth. Summer is the only season with no observed cloud response to sea ice cover variability: liquid cloud profiles are nearly identical over sea ice and over open water. These results suggest that shortwave summer cloud feedbacks do not slow long-term summer sea ice loss. In contrast, more liquid clouds are observed over open water than over sea ice in the winter, spring, and fall in the 8 year mean and in each individual year. Observed fall sea ice loss cannot be explained by natural variability alone, which suggests that observed increases in fall Arctic cloud cover over newly open water are linked to human activities.

  2. Cloud computing geospatial application for water resources based on free and open source software and open standards - a prototype

    NASA Astrophysics Data System (ADS)

    Delipetrev, Blagoj

    2016-04-01

    Presently, most existing software is desktop-based and designed to work on a single computer, which imposes major limitations: limited processing power and storage, and restricted accessibility and availability. The only feasible solution lies in the web and the cloud. This abstract presents research and development of a cloud computing geospatial application for water resources based on free and open-source software and open standards, using a hybrid public-private cloud deployment model running on two separate virtual machines (VMs). The first (VM1) runs on Amazon Web Services (AWS) and the second (VM2) runs on a Xen cloud platform. The presented cloud application is developed using free and open-source software, open standards, and prototype code. It demonstrates a framework for developing specialized cloud geospatial applications that need only a web browser to be used. This cloud application is a far-reaching geospatial collaboration platform, because multiple users across the globe with an internet connection and a browser can jointly model geospatial objects, enter attribute data and information, execute algorithms, and visualize results. The presented cloud application is available all the time and accessible from everywhere; it is scalable, works in a distributed computing environment, and creates a real-time multiuser collaboration platform; its code and components are interoperable; and it is flexible in including additional components. The cloud geospatial application is implemented as a specialized water resources application with three web services: 1) data infrastructure (DI), 2) support for water resources modelling (WRM), and 3) user management. The web services run on the two VMs, which communicate over the internet to provide services to users. The application was tested on the Zletovica river basin case study with multiple concurrent users. The application is a state-of-the-art cloud geospatial collaboration platform. The presented solution is a prototype and can be used as a foundation for developing any specialized cloud geospatial application. Further research will focus on distributing the cloud application across additional VMs and on testing the scalability and availability of the services.

  3. Scaling the CERN OpenStack cloud

    NASA Astrophysics Data System (ADS)

    Bell, T.; Bompastor, B.; Bukowiec, S.; Castro Leon, J.; Denis, M. K.; van Eldik, J.; Fermin Lobo, M.; Fernandez Alvarez, L.; Fernandez Rodriguez, D.; Marino, A.; Moreira, B.; Noel, B.; Oulevey, T.; Takase, W.; Wiebalck, A.; Zilli, S.

    2015-12-01

    CERN has been running a production OpenStack cloud since July 2013 to support physics computing and infrastructure services for the site. In the past year, CERN Cloud Infrastructure has seen a constant increase in nodes, virtual machines, users and projects. This paper will present what has been done in order to make the CERN cloud infrastructure scale out.

  4. Evidence in Magnetic Clouds for Systematic Open Flux Transport on the Sun

    NASA Technical Reports Server (NTRS)

    Crooker, N. U.; Kahler, S. W.; Gosling, J. T.; Lepping, R. P.

    2008-01-01

    Most magnetic clouds encountered by spacecraft at 1 AU display a mix of unidirectional suprathermal electrons signaling open field lines and counterstreaming electrons signaling loops connected to the Sun at both ends. Assuming the open fields were originally loops that underwent interchange reconnection with open fields at the Sun, we determine the sense of connectedness of the open fields found in 72 of 97 magnetic clouds identified by the Wind spacecraft in order to obtain information on the location and sense of the reconnection and resulting flux transport at the Sun. The true polarity of the open fields in each magnetic cloud was determined from the direction of the suprathermal electron flow relative to the magnetic field direction. Results indicate that the polarity of all open fields within a given magnetic cloud is the same 89% of the time, implying that interchange reconnection at the Sun most often occurs in only one leg of a flux rope loop, thus transporting open flux in a single direction, from a coronal hole near that leg to the foot point of the opposite leg. This pattern is consistent with the view that interchange reconnection in coronal mass ejections systematically transports an amount of open flux sufficient to reverse the polarity of the heliospheric field through the course of the solar cycle. Using the same electron data, we also find that the fields encountered in magnetic clouds are only a third as likely to be locally inverted as not. While one might expect inversions to be equally as common as not in flux rope coils, consideration of the geometry of spacecraft trajectories relative to the modeled magnetic cloud axes leads us to conclude that the result is reasonable.

  5. An operational open-end file transfer protocol for mobile satellite communications

    NASA Technical Reports Server (NTRS)

    Wang, Charles; Cheng, Unjeng; Yan, Tsun-Yee

    1988-01-01

    This paper describes an operational open-end file transfer protocol which includes the connecting procedure, data transfer, and relinquishment procedure for mobile satellite communications. The protocol makes use of the frame level and packet level formats of the X.25 standard for the data link layer and network layer, respectively. The structure of a testbed for experimental simulation of this protocol over a mobile fading channel is also introduced.

  6. Continuation: The EOSDIS testbed data system

    NASA Technical Reports Server (NTRS)

    Emery, Bill; Kelley, Timothy D.

    1995-01-01

    The continuation of the EOSDIS testbed ('Testbed') has taken it from a multi-task, X-Windows-driven system to a fully functional, stand-alone data archive and distribution center accessible to all types of users and computers via the World Wide Web. Throughout the past months, the Testbed has evolved into a completely new system. The current system is now accessible through Netscape, Mosaic, and all other clients that can reach the World Wide Web. On October 1, 1995 we will open to the public, and we expect that the statistics on the types of users, where they are located, and what they are looking for will change drastically. The most important change to the Testbed has been the Web interface. This interface will allow more users to access the system and will walk them through the data types with more ease than before. All of the callbacks are written in such a way that icons can be used to move easily around the program's interface. The homepage offers the user more information about each satellite data type, as well as information on free programs. These programs are grouped into categories by the type of computer they are compiled for, along with information on how to FTP the programs back to the end user's computer. The heart of the Testbed is still the acquisition of satellite data. From the Testbed homepage, the user selects the 'access to data system' icon, which takes them to the world map and allows them to select an area they would like coverage of by simply clicking that area of the map. This creates a new map where similar choices can be made to get the latitude and longitude of the region the satellite data will cover. Once a selection has been made, the search parameters page appears and can be filled out. Afterwards, once the search is completed, the browse images can be called up and the images for viewing can be selected. There are several other option pages, but once an order has been selected the Testbed brings up the order list page and the user can then place their order. After the order has been completed, the Testbed mails the user to notify them of the completed order and of how the images can be picked up.

  7. An Adaptive Tradeoff Algorithm for Multi-issue SLA Negotiation

    NASA Astrophysics Data System (ADS)

    Son, Seokho; Sim, Kwang Mong

    Since participants in a Cloud may be independent bodies, mechanisms are necessary for resolving their different preferences in leasing Cloud services. Whereas there are currently mechanisms that support service-level agreement negotiation, there is little or no negotiation support for concurrent price and timeslot negotiation for Cloud service reservations. For concurrent price and timeslot negotiation, a tradeoff algorithm is necessary to generate and evaluate proposals that consist of both a price and a timeslot. The contribution of this work is thus the design of an adaptive tradeoff algorithm for a multi-issue negotiation mechanism. The tradeoff algorithm, referred to as "adaptive burst mode", is especially designed to increase negotiation speed and total utility and to reduce computational load by adaptively generating concurrent sets of proposals. The empirical results obtained from simulations carried out using a testbed suggest that, due to the concurrent price and timeslot negotiation mechanism with the adaptive tradeoff algorithm: 1) both agents achieve the best performance in terms of negotiation speed and utility; and 2) the number of evaluations of each proposal is comparatively lower than in the previous scheme (burst-N).
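    To illustrate the general idea (not the authors' exact algorithm), the toy Python sketch below generates a "burst" of price/timeslot proposals of roughly equal utility at each concession step; the utility weights, price range, and booking horizon are assumed.

      # Toy burst-mode tradeoff step: at a fixed utility (concession) level,
      # generate several price/timeslot proposals of ~equal utility and offer
      # them concurrently, raising the chance of overlap with the opponent's
      # preferences. All weights and ranges below are assumptions.
      def utility(price, slot, w_price=0.6, w_slot=0.4,
                  p_min=10.0, p_max=100.0, slot_pref=0, slot_max=24):
          # Consumer view: cheaper is better; closer to the preferred slot is better.
          u_price = (p_max - price) / (p_max - p_min)
          u_slot = 1.0 - abs(slot - slot_pref) / slot_max
          return w_price * u_price + w_slot * u_slot

      def burst_proposals(target_u, n=4):
          """Generate up to n (price, timeslot) pairs whose utility ~= target_u."""
          proposals = []
          for slot in range(0, 24, 24 // n):        # spread over the booking horizon
              # Invert utility(price, slot) == target_u for price
              # (same constants as in utility() above).
              u_slot = 1.0 - abs(slot - 0) / 24
              u_price = (target_u - 0.4 * u_slot) / 0.6
              price = 100.0 - u_price * (100.0 - 10.0)
              if 10.0 <= price <= 100.0:            # drop infeasible prices
                  proposals.append((round(price, 2), slot))
          return proposals

      # Concede over rounds: lower the demanded utility, re-issue a burst each time.
      for rnd, target in enumerate([0.9, 0.8, 0.7]):
          print(f"round {rnd}: {burst_proposals(target)}")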

  8. Lightning Tracking Tool for Assessment of Total Cloud Lightning within AWIPS II

    NASA Technical Reports Server (NTRS)

    Burks, Jason E.; Stano, Geoffrey T.; Sperow, Ken

    2014-01-01

    Total lightning (intra-cloud and cloud-to-ground) has been widely researched and shown to be a valuable tool to aid real-time warning forecasters in assessing the severe weather potential of convective storms. The trend of total lightning has been related to the strength of a storm's updraft; therefore, a rapid increase in total lightning signifies the strengthening of the parent thunderstorm. The assessment of severe weather potential occurs in a time-limited environment, which constrains the use of total lightning. A tool has been developed at NASA's Short-term Prediction Research and Transition (SPoRT) Center to assist in quickly analyzing the total lightning signatures of multiple storms. The development of this tool comes as a direct result of forecaster feedback from numerous assessments requesting a real-time display of the time series of total lightning. The tool also takes advantage of the new architecture available within the AWIPS II environment. SPoRT's lightning tracking tool has been tested in the Hazardous Weather Testbed (HWT) Spring Program, and significant changes have been made based on the feedback. In addition to the updates in response to the HWT assessment, the lightning tracking tool may also be extended to incorporate other requested displays, such as the intra-cloud to cloud-to-ground ratio, and to incorporate the lightning jump algorithm.

  9. Efficacy of Cloud-Radiative Perturbations in Deep Open- and Closed-Cell Stratocumulus Clouds due to Aerosol Perturbations

    NASA Astrophysics Data System (ADS)

    Possner, A.; Wang, H.; Caldeira, K.; Wood, R.; Ackerman, T. P.

    2017-12-01

    Aerosol-cloud interactions (ACIs) in marine stratocumulus remain a significant source of uncertainty in constraining the cloud-radiative effect in a changing climate. Ship tracks are undoubted manifestations of ACIs embedded within stratocumulus cloud decks and have proven to be a useful framework to study the effect of aerosol perturbations on cloud morphology and on macrophysical, microphysical, and cloud-radiative properties. However, so far most observational (Christensen et al. 2012, Chen et al. 2015) and numerical studies (Wang et al. 2011, Possner et al. 2015, Berner et al. 2015) have concentrated on ship tracks in shallow boundary layers of depths between 300 - 800 m, while most stratocumulus decks form in significantly deeper boundary layers (Muhlbauer et al. 2014). In this study we investigate the efficacy of aerosol perturbations in deep open- and closed-cell stratocumulus. Multi-day idealised cloud-resolving simulations are performed for the RF06 flight of the VOCALS-REx field campaign (Wood et al. 2011). During this flight, pockets of deep open and closed cells were observed in a 1410 m deep boundary layer. The efficacy of aerosol perturbations of varied concentration and spatial gradient in altering the cloud micro- and macrophysical state and the cloud-radiative effect is determined in both cloud regimes. Our simulations show that a continued point-source emission flux of 1.16*10^11 particles m^-2 s^-1 applied within a 300x300 m^2 gridbox induces pronounced cloud cover changes in approximately a third of the simulated 80x80 km^2 domain, a weakening of the diurnal cycle in the open-cell regime, and a resulting increase in domain-mean cloud albedo of 0.2. Furthermore, we contrast the efficacy of equal-strength near-surface and above-cloud aerosol perturbations in altering the cloud state.

  10. Data systems for science integration within the Atmospheric Radiation Measurement Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gracio, D.K.; Hatfield, L.D.; Yates, K.R.

    The Atmospheric Radiation Measurement (ARM) Program was developed by the US Department of Energy to support the goals and mission of the US Global Change Research Program. The purpose of the ARM Program is to improve the predictive capabilities of General Circulation Models (GCMs) in their treatment of clouds and radiative transfer effects. Three experimental testbeds were designed for the deployment of instruments to collect atmospheric data used to drive the GCMs. Each site, known as a Cloud and Radiation Testbed (CART), consists of a highly available, redundant data system for the collection of data from a variety of instrumentation. The first CART site was deployed in April 1992 in the Southern Great Plains (SGP), Lamont, Oklahoma, with the other two sites to follow in early 1996 in the Tropical Western Pacific (TWP) and in 1997 on the North Slope of Alaska (NSA). Approximately 1.5 GB of data are transferred per day via the Internet from the CART sites and external data sources to the ARM Experiment Center (EC) at Pacific Northwest Laboratory in Richland, Washington. The EC is central to the ARM data path and provides for the collection, processing, analysis and delivery of ARM data. Data from CART site instrumentation, observational systems and external data sources are transferred to the EC, which processes these data streams on a continuous basis to provide derived data products to the ARM Science Team in near real time while maintaining a three-month running archive of data.

  11. Scientific investigations planned for the Lidar in-Space Technology Experiment (LITE)

    NASA Technical Reports Server (NTRS)

    Mccormick, M. P.; Winker, D. M.; Browell, E. V.; Coakley, J. A.; Gardner, C. S.; Hoff, R. M.; Kent, G. S.; Melfi, S. H.; Menzies, R. T.; Platt, C. M. R.

    1993-01-01

    The Lidar In-Space Technology Experiment (LITE) is being developed by NASA/Langley Research Center for a series of flights on the space shuttle beginning in 1994. Employing a three-wavelength Nd:YAG laser and a 1-m-diameter telescope, the system is a test-bed for the development of technology required for future operational spaceborne lidars. The system has been designed to observe clouds, tropospheric and stratospheric aerosols, characteristics of the planetary boundary layer, and stratospheric density and temperature perturbations with much greater resolution than is available from current orbiting sensors. In addition to providing unique datasets on these phenomena, the data obtained will be useful in improving retrieval algorithms currently in use. Observations of clouds and the planetary boundary layer will aid in the development of global climate model (GCM) parameterizations. This article briefly describes the LITE program and discusses the types of scientific investigations planned for the first flight.

  12. Software-defined optical network for metro-scale geographically distributed data centers.

    PubMed

    Samadi, Payman; Wen, Ke; Xu, Junjie; Bergman, Keren

    2016-05-30

    The emergence of cloud computing and big data has rapidly increased the deployment of small and mid-sized data centers. Enterprises and cloud providers require an agile network among these data centers to support application reliability and flexible scalability. We present a software-defined inter-data-center network that enables on-demand scale-out of data centers on a metro-scale optical network. The architecture consists of a combined space/wavelength switching platform and a Software-Defined Networking (SDN) control plane equipped with a wavelength and routing assignment module. It enables establishing transparent and bandwidth-selective connections from L2/L3 switches on demand. The architecture is evaluated in a testbed consisting of 3 data centers, 5-25 km apart. We successfully demonstrated end-to-end bulk data transfer and Virtual Machine (VM) migrations across data centers with less than 100 ms connection setup time and close to full link-capacity utilization.
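
    The wavelength and routing assignment module is the algorithmic core of such a control plane. As a rough illustration only (the paper's module is not reproduced here), a first-fit assignment under the wavelength-continuity constraint can be sketched as follows, with a hypothetical three-node topology:

      # First-fit routing-and-wavelength-assignment sketch for a small mesh
      # of data centers; a stand-in for the RWA module described above
      # (topology, names, and wavelength count are hypothetical).
      N_WAVELENGTHS = 4
      links = {("DC1", "DC2"): set(), ("DC2", "DC3"): set(), ("DC1", "DC3"): set()}

      def assign(path):
          """Pick the lowest-indexed wavelength free on every hop of path
          (wavelength continuity), or return None if the request blocks."""
          for w in range(N_WAVELENGTHS):
              if all(w not in links[hop] for hop in path):
                  for hop in path:
                      links[hop].add(w)
                  return w
          return None

      print(assign([("DC1", "DC2"), ("DC2", "DC3")]))  # 0
      print(assign([("DC1", "DC2")]))                  # 1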

  13. Generalized Intelligent Framework for Tutoring (GIFT) Cloud/Virtual Open Campus Quick Start Guide (Revision 1)

    DTIC Science & Technology

    2017-06-01

    for GIFT Cloud, the web-based application version of the Generalized Intelligent Framework for Tutoring (GIFT). GIFT is a modular, open-source... external applications. GIFT is available to users with a GIFT Account at no cost. GIFT Cloud is an implementation of GIFT. This web-based application... section. Requirements for GIFT Cloud: GIFT Cloud is accessed via a web browser

  14. Sensor System Performance Evaluation and Benefits from the NPOESS Airborne Sounder Testbed-Interferometer (NAST-I)

    NASA Technical Reports Server (NTRS)

    Larar, A.; Zhou, D.; Smith, W.

    2009-01-01

    Advanced satellite sensors are tasked with improving global-scale measurements of the Earth's atmosphere, clouds, and surface to enable enhancements in weather prediction, climate monitoring, and environmental change detection. Validation of the entire measurement system is crucial to achieving this goal and thus maximizing research and operational utility of resultant data. Field campaigns employing satellite under-flights with well-calibrated FTS sensors aboard high-altitude aircraft are an essential part of this validation task. The National Polar-orbiting Operational Environmental Satellite System (NPOESS) Airborne Sounder Testbed-Interferometer (NAST-I) has been a fundamental contributor in this area by providing coincident high spectral/spatial resolution observations of infrared spectral radiances along with independently-retrieved geophysical products for comparison with like products from satellite sensors being validated. This paper focuses on some of the challenges associated with validating advanced atmospheric sounders and the benefits obtained from employing airborne interferometers such as the NAST-I. Select results from underflights of the Aqua Atmospheric InfraRed Sounder (AIRS) and the Infrared Atmospheric Sounding Interferometer (IASI) obtained during recent field campaigns will be presented.

  15. Open Source Cloud-Based Technologies for Bim

    NASA Astrophysics Data System (ADS)

    Logothetis, S.; Karachaliou, E.; Valari, E.; Stylianidis, E.

    2018-05-01

    This paper presents a Cloud-based open source system for storing and processing data from a 3D survey approach. More specifically, we provide an online service for viewing, storing and analysing BIM. Cloud technologies were used to develop a web interface as a BIM data centre, which can handle large BIM data using a server. The server can be accessed by many users through various electronic devices anytime and anywhere so they can view online 3D models using browsers. Nowadays, the Cloud computing is engaged progressively in facilitating BIM-based collaboration between the multiple stakeholders and disciplinary groups for complicated Architectural, Engineering and Construction (AEC) projects. Besides, the development of Open Source Software (OSS) has been rapidly growing and their use tends to be united. Although BIM and Cloud technologies are extensively known and used, there is a lack of integrated open source Cloud-based platforms able to support all stages of BIM processes. The present research aims to create an open source Cloud-based BIM system that is able to handle geospatial data. In this effort, only open source tools will be used; from the starting point of creating the 3D model with FreeCAD to its online presentation through BIMserver. Python plug-ins will be developed to link the two software which will be distributed and freely available to a large community of professional for their use. The research work will be completed by benchmarking four Cloud-based BIM systems: Autodesk BIM 360, BIMserver, Graphisoft BIMcloud and Onuma System, which present remarkable results.

  16. Current Developments in DETER Cybersecurity Testbed Technology

    DTIC Science & Technology

    2015-12-08

    management on PlanetLab [12], such as Plush and Nebula [4], PlMan [19], Stork [20], pShell [21], Planetlab Application Manager [22], parallel open... SSH tools [23], plDist [24], Nixes [25], PLDeploy [26] and vxargs [27]. With the exception of Plush and Nebula, these tools are all low-level

  17. Building a Flexible Network Infrastructure for Moving Target Defense

    DTIC Science & Technology

    2017-10-13

    testbed. We have published a paper in the CAN Workshop held in conjunction with the ACM CoNEXT 2016 conference [3]. ML implementations on OpenNetVM... Artificial Intelligence applications in the network Workshop held in conjunction with IEEE ICNP 2017 [4]. [1] Azeem Aqil, Karim Khalil, Ahmed Atya

  18. On the reversibility of transitions between closed and open cellular convection

    DOE PAGES

    Feingold, G.; Koren, I.; Yamaguchi, T.; ...

    2015-07-08

    The two-way transition between closed and open cellular convection is addressed in an idealized cloud-resolving modeling framework. A series of cloud-resolving simulations shows that the transition between closed and open cellular states is asymmetrical and characterized by a rapid ("runaway") transition from the closed- to the open-cell state but slower recovery to the closed-cell state. Given that precipitation initiates the closed–open cell transition and that the recovery requires a suppression of the precipitation, we apply an ad hoc time-varying drop concentration to initiate and suppress precipitation. We show that the asymmetry in the two-way transition occurs even for very rapid drop concentration replenishment. The primary barrier to recovery is the loss in turbulence kinetic energy (TKE) associated with the loss in cloud water (and associated radiative cooling) and the vertical stratification of the boundary layer during the open-cell period. In transitioning from the open to the closed state, the system faces the task of replenishing cloud water fast enough to counter precipitation losses, such that it can generate radiative cooling and TKE. It is hampered by a stable layer below cloud base that has to be overcome before water vapor can be transported more efficiently into the cloud layer. Recovery to the closed-cell state is slower when radiative cooling is inefficient, such as in the presence of free tropospheric clouds or after sunrise, when it is hampered by the absorption of shortwave radiation. Tests suggest that recovery to the closed-cell state is faster when the drizzle is smaller in amount and of shorter duration, i.e., when the precipitation causes less boundary layer stratification. Cloud-resolving model results on recovery rates are supported by simulations with a simple predator–prey dynamical system analogue. It is suggested that the observed closing of open cells by ship effluent likely occurs when aerosol intrusions are large, when contact comes prior to the heaviest drizzle in the early morning hours, and when the free troposphere is cloud free.
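
    The predator-prey analogue mentioned at the end can be conveyed with a generic Lotka-Volterra system in which cloud water plays the prey and rain the predator; this is a textbook sketch under that analogy, not the authors' specific dynamical system.

      def step(c, r, dt=0.01, a=1.0, b=1.0, g=1.0, d=0.5):
          """One Euler step of a generic predator-prey analogue: cloud
          water c (prey) grows and is consumed by rain r (predator),
          which decays when there is no cloud water to feed it."""
          dc = a * c - b * c * r        # growth minus depletion by rain
          dr = g * c * r - d * r        # rain feeds on cloud water, decays
          return c + dt * dc, r + dt * dr

      c, r = 1.0, 0.5
      for _ in range(2000):   # the system orbits rather than settling,
          c, r = step(c, r)   # echoing the slow, asymmetric recovery above
      print(round(c, 3), round(r, 3))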

  19. BEATBOX v1.0: Background Error Analysis Testbed with Box Models

    NASA Astrophysics Data System (ADS)

    Knote, Christoph; Barré, Jérôme; Eckl, Max

    2018-02-01

    The Background Error Analysis Testbed (BEATBOX) is a new data assimilation framework for box models. Based on the BOX Model eXtension (BOXMOX) to the Kinetic Pre-Processor (KPP), this framework allows users to conduct performance evaluations of data assimilation experiments, sensitivity analyses, and detailed chemical scheme diagnostics from an observing system simulation experiment (OSSE) point of view. The BEATBOX framework incorporates an observation simulator and a data assimilation system with the possibility of choosing ensemble, adjoint, or combined sensitivities. A user-friendly, Python-based interface allows the tuning of many parameters for atmospheric chemistry and data assimilation research as well as for educational purposes; examples include observation error, model covariances, ensemble size, and the perturbation distribution in the initial conditions. In this work, the testbed is described and two case studies are presented to illustrate the design of a typical OSSE experiment, data assimilation experiments, a sensitivity analysis, and a method for diagnosing model errors. BEATBOX is released as an open source tool for the atmospheric chemistry and data assimilation communities.
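
    For readers unfamiliar with the workflow BEATBOX automates, the essential OSSE loop (run a "truth" model, synthesize observations with a prescribed error, assimilate them into a perturbed run) fits in a few lines. Everything below is a toy scalar stand-in; it is not the BEATBOX or BOXMOX API.

      import numpy as np

      rng = np.random.default_rng(0)

      def model(x, dt=1.0, k=0.1):
          # hypothetical stand-in for a chemical box model: tracer decay
          return x * np.exp(-k * dt)

      truth = 10.0
      x_b, sig_b = 8.0, 1.5      # background state and its error
      sig_o = 0.5                # prescribed observation error

      for t in range(5):
          truth = model(truth)
          x_b = model(x_b)
          obs = truth + rng.normal(0.0, sig_o)     # simulated observation
          gain = sig_b**2 / (sig_b**2 + sig_o**2)  # scalar Kalman gain
          x_b = x_b + gain * (obs - x_b)           # analysis update
      print(round(x_b, 3), round(truth, 3))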

  20. Design of Bi-Directional Hydrofoils for Tidal Current Turbines

    NASA Astrophysics Data System (ADS)

    Nedyalkov, Ivaylo; Wosnik, Martin

    2015-11-01

    Tidal Current Turbines operate in flows which reverse direction. Bi-directional hydrofoils have rotational symmetry and allow such turbines to operate without the need for pitch or yaw control, decreasing the initial and maintenance costs. A numerical test-bed was developed to automate the simulation of hydrofoils in OpenFOAM and was utilized to simulate the flow over eleven classes of hydrofoils comprising a total of 700 foil shapes at different angles of attack. For promising candidate foil shapes, physical models of 75 mm chord and 150 mm span were fabricated and tested in the University of New Hampshire High-Speed Cavitation Tunnel (HiCaT). The experimental results were compared to the simulations for model validation. The numerical test-bed successfully generated simulations for a wide range of foil shapes, although, as expected, the k-ω SST turbulence model employed here was not adequate for some of the foils and for large angles of attack at which separation occurred. An optimization algorithm is currently being coupled with the numerical test-bed, and additional turbulence models will be implemented in the future.
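
    Automating a sweep of this kind usually reduces to generating one solver case per foil/angle combination and dispatching the runs. A minimal sketch, assuming pre-built OpenFOAM cases and placeholder foil names (the project's actual test-bed scripts are not shown in the abstract):

      import itertools
      import subprocess
      from pathlib import Path

      foils = ["foil_A", "foil_B"]        # placeholder shape names
      angles = [-4, 0, 4, 8, 12]          # angles of attack, degrees

      for foil, aoa in itertools.product(foils, angles):
          case = Path("cases") / f"{foil}_aoa{aoa}"
          # write the angle into a small dictionary the case includes
          (case / "system" / "aoa").write_text(f"angleOfAttack {aoa};\n")
          # run the steady incompressible solver on this case
          subprocess.run(["simpleFoam", "-case", str(case)], check=True)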

  1. Processing Uav and LIDAR Point Clouds in Grass GIS

    NASA Astrophysics Data System (ADS)

    Petras, V.; Petrasova, A.; Jeziorska, J.; Mitasova, H.

    2016-06-01

    Today's methods of acquiring Earth surface data, namely lidar and unmanned aerial vehicle (UAV) imagery, non-selectively collect or generate large amounts of points. Point clouds from different sources vary in their properties such as number of returns, density, or quality. We present a set of tools with applications for different types of point clouds obtained by a lidar scanner, the structure from motion (SfM) technique, and a low-cost 3D scanner. To take advantage of the vertical structure of multiple-return lidar point clouds, we demonstrate tools to process them using 3D raster techniques which allow, for example, the development of custom vegetation classification methods. Dense point clouds obtained from UAV imagery, often containing redundant points, can be decimated using various techniques before further processing. We implemented and compared several decimation techniques with regard to their performance and the final digital surface model (DSM). Finally, we describe the processing of a point cloud from a low-cost 3D scanner, namely Microsoft Kinect, and its application for interaction with physical models. All the presented tools are open source and integrated in GRASS GIS, a multi-purpose open source GIS with remote sensing capabilities. The tools integrate with other open source projects, specifically the Point Data Abstraction Library (PDAL), the Point Cloud Library (PCL), and the OpenKinect libfreenect2 library, to benefit from the open source point cloud ecosystem. The implementation in GRASS GIS ensures long-term maintenance and reproducibility by the scientific community as well as by the original authors themselves.
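
    In GRASS GIS such a workflow is typically scripted through the grass.script Python API. A sketch of import plus count-based decimation follows; the module options are quoted from memory of the GRASS 7 manuals and should be checked against the current documentation.

      import grass.script as gs

      # Import a lidar point cloud, then thin it by keeping every 10th
      # point (v.decimate also offers grid-based decimation methods).
      gs.run_command("v.in.lidar", input="points.las", output="points")
      gs.run_command("v.decimate", input="points", output="points_thin",
                     preserve=10)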

  2. Modeling the surface evapotranspiration over the southern Great Plains

    NASA Technical Reports Server (NTRS)

    Liljegren, J. C.; Doran, J. C.; Hubbe, J. M.; Shaw, W. J.; Zhong, S.; Collatz, G. J.; Cook, D. R.; Hart, R. L.

    1996-01-01

    We have developed a method to apply the Simple Biosphere Model of Sellers et al to calculate the surface fluxes of sensible heat and water vapor at high spatial resolution over the domain of the US DOE's Cloud and Radiation Testbed (CART) in Kansas and Oklahoma. The CART, which is within the GCIP area of interest for the Mississippi River Basin, is an extensively instrumented facility operated as part of the DOE's Atmospheric Radiation Measurement (ARM) program. Flux values calculated with our method will be used to provide lower boundary conditions for numerical models to study the atmosphere over the CART domain.

  3. Climatic Implications of the Observed Temperature Dependence of the Liquid Water Path of Low Clouds in the Southern Great Plains

    NASA Technical Reports Server (NTRS)

    DelGenio, Anthony

    1999-01-01

    Satellite observations of low-level clouds have challenged the assumption that adiabatic liquid water content combined with constant physical thickness will lead to a negative cloud optics feedback in a decadal climate change. We explore the reasons for the satellite results using four years of surface remote sensing data from the Atmospheric Radiation Measurement Program Cloud and Radiation Testbed site in the Southern Great Plains of the United States. We find that low cloud liquid water path is approximately invariant with temperature in winter but decreases strongly with temperature in summer, consistent with the satellite inferences at this latitude. This behavior occurs because liquid water content shows no detectable temperature dependence while cloud physical thickness decreases with warming. Thinning of clouds with warming is observed on seasonal, synoptic, and diurnal time scales; it is most obvious in the warm sectors of baroclinic waves. Although cloud top is observed to slightly descend with warming, the primary cause of thinning is the ascent of cloud base due to the reduction in surface relative humidity and the concomitant increase in the lifting condensation level of surface air. Low cloud liquid water path is not observed to be a continuous function of temperature. Rather, the behavior we observe is best explained as a transition in the frequency of occurrence of different boundary layer types. At cold temperatures, a mixture of stratified and convective boundary layers is observed, leading to a broad distribution of liquid water path values, while at warm temperatures, only convective boundary layers with small liquid water paths, some of them decoupled, are observed. Our results, combined with the earlier satellite inferences, imply that the commonly quoted 1.5 °C lower limit for the equilibrium global climate sensitivity to a doubling of CO2, which is based on models with near-adiabatic liquid water behavior and constant physical thickness, should be revised upward.
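
    The cloud-base mechanism invoked here follows directly from the lifting condensation level: a common rule of thumb places the LCL roughly 125 m above the surface per degree of dewpoint depression (Lawrence 2005), so warmer, drier surface air raises cloud base. A worked illustration:

      def lcl_height_m(t_c, td_c):
          """Approximate lifting condensation level height above the
          surface via the ~125 m per degree of dewpoint depression
          rule of thumb (Lawrence 2005); inputs in degrees Celsius."""
          return 125.0 * (t_c - td_c)

      # Warming with reduced surface relative humidity widens T - Td,
      # lifting cloud base, the thinning mechanism described above.
      print(lcl_height_m(20.0, 15.0))  # 625.0 m
      print(lcl_height_m(28.0, 18.0))  # 1250.0 m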

  4. Climatic Implications of the Observed Temperature Dependence of the Liquid Water Path of Low Clouds in the Southern Great Plains

    NASA Technical Reports Server (NTRS)

    DelGenio, Anthony D.; Wolf, Audrey B.

    1999-01-01

    Satellite observations of low-level clouds have challenged the assumption that adiabatic liquid water content combined with constant physical thickness will lead to a negative cloud optics feedback in a decadal climate change. We explore the reasons for the satellite results using four years of surface remote sensing data from the Atmospheric Radiation Measurement Program Cloud and Radiation Testbed site in the Southern Great Plains of the United States. We find that low cloud liquid water path is approximately invariant with temperature in winter but decreases strongly with temperature in summer, consistent with the satellite inferences at this latitude. This behavior occurs because liquid water content shows no detectable temperature dependence while cloud physical thickness decreases with warming. Thinning of clouds with warming is observed on seasonal, synoptic, and diurnal time scales; it is most obvious in the warm sectors of baroclinic waves. Although cloud top is observed to slightly descend with warming, the primary cause of thinning is the ascent of cloud base due to the reduction in surface relative humidity and the concomitant increase in the lifting condensation level of surface air. Low cloud liquid water path is not observed to be a continuous function of temperature. Rather, the behavior we observe is best explained as a transition in the frequency of occurrence of different boundary layer types: At cold temperatures, a mixture of stratified and convective boundary layers is observed, leading to a broad distribution of liquid water path values, while at warm temperatures, only convective boundary layers with small liquid water paths, some of them decoupled, are observed. Our results, combined with the earlier satellite inferences, imply that the commonly quoted 1.5 °C lower limit for the equilibrium global climate sensitivity to a doubling of CO2, which is based on models with near-adiabatic liquid water behavior and constant physical thickness, should be revised upward.

  5. Cloud regimes as phase transitions

    NASA Astrophysics Data System (ADS)

    Stechmann, Samuel; Hottovy, Scott

    2017-11-01

    Clouds are repeatedly identified as a leading source of uncertainty in future climate predictions. Of particular importance are stratocumulus clouds, which can appear as either (i) closed cells that reflect solar radiation back to space or (ii) open cells that allow solar radiation to reach the Earth's surface. Here we show that these cloud regimes - open versus closed cells - fit the paradigm of a phase transition. In addition, this paradigm characterizes pockets of open cells (POCs) as the interface between the open- and closed-cell regimes, and it identifies shallow cumulus clouds as a regime of higher variability. This behavior can be understood using an idealized model for the dynamics of atmospheric water as a stochastic diffusion process. Similar viewpoints of deep convection and self-organized criticality will also be discussed. With these new conceptual viewpoints, ideas from statistical mechanics could potentially be used for understanding uncertainties related to clouds in the climate system and climate predictions. The research of S.N.S. is partially supported by a Sloan Research Fellowship, ONR Young Investigator Award N00014-12-1-0744, and ONR MURI Grant N00014-12-1-0912.
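
    The stochastic-diffusion picture can be illustrated with the standard bistable (double-well) diffusion, the two wells standing in for the closed- and open-cell states; this is a generic sketch, not the authors' model.

      import numpy as np

      rng = np.random.default_rng(1)

      # Overdamped diffusion in the double-well potential V(q) = (q^2 - 1)^2:
      # minima near q = +1 (closed cells) and q = -1 (open cells); noise
      # occasionally kicks the system across the barrier between regimes.
      def drift(q):
          return -4.0 * q * (q * q - 1.0)   # -dV/dq

      q, dt, sigma = 1.0, 0.01, 0.8
      trace = []
      for _ in range(50_000):
          q += drift(q) * dt + sigma * np.sqrt(dt) * rng.normal()
          trace.append(q)
      print("fraction of time in the open-cell state:",
            (np.array(trace) < 0).mean())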

  6. ProteoCloud: a full-featured open source proteomics cloud computing pipeline.

    PubMed

    Muth, Thilo; Peters, Julian; Blackburn, Jonathan; Rapp, Erdmann; Martens, Lennart

    2013-08-02

    We here present the ProteoCloud pipeline, a freely available, full-featured cloud-based platform to perform computationally intensive, exhaustive searches in a cloud environment using five different peptide identification algorithms. ProteoCloud is entirely open source, and is built around an easy to use and cross-platform software client with a rich graphical user interface. This client allows full control of the number of cloud instances to initiate and of the spectra to assign for identification. It also enables the user to track progress, and to visualize and interpret the results in detail. Source code, binaries and documentation are all available at http://proteocloud.googlecode.com. Copyright © 2012 Elsevier B.V. All rights reserved.

  7. Strike Up the Score: Deriving Searchable and Playable Digital Formats from Sheet Music; Smart Objects and Open Archives; Building the Archives of the Future: Advanced in Preserving Electronic Records at the National Archives and Records Administration; From the Digitized to the Digital Library.

    ERIC Educational Resources Information Center

    Choudhury, G. Sayeed; DiLauro, Tim; Droettboom, Michael; Fujinaga, Ichiro; MacMillan, Karl; Nelson, Michael L.; Maly, Kurt; Thibodeau, Kenneth; Thaller, Manfred

    2001-01-01

    These articles describe the experiences of the Johns Hopkins University library in digitizing their collection of sheet music; motivation for buckets, Smart Object, Dumb Archive (SODA) and the Open Archives Initiative (OAI), and initial experiences using them in digital library (DL) testbeds; requirements for archival institutions, the National…

  8. Open-cell and closed-cell clouds off Peru

    NASA Image and Video Library

    2010-04-27

    2010/107 - 04/17 at 21:05 UTC. Open-cell and closed-cell clouds off Peru, Pacific Ocean. Resembling a frosted window on a cold winter's day, this lacy pattern of marine clouds was captured off the coast of Peru in the Pacific Ocean by the MODIS on the Aqua satellite on April 19, 2010. The image reveals both open- and closed-cell cumulus cloud patterns. These cells, or parcels of air, often occur in roughly hexagonal arrays in a layer of fluid (the atmosphere often behaves like a fluid) that begins to "boil," or convect, due to heating at the base or cooling at the top of the layer. In "closed" cells warm air is rising in the center, and sinking around the edges, so clouds appear in cell centers, but evaporate around cell edges. This produces cloud formations like those that dominate the lower left. The reverse flow can also occur: air can sink in the center of the cell and rise at the edge. This process is called "open cell" convection, and clouds form at cell edges around open centers, which creates a lacy, hollow-looking pattern like the clouds in the upper right. Closed and open cell convection represent two stable atmospheric configurations — two sides of the convection coin. But what determines which path the "boiling" atmosphere will take? Apparently the process is highly chaotic, and there appears to be no way to predict whether convection will result in open or closed cells. Indeed, the atmosphere may sometimes flip between one mode and another in no predictable pattern. Satellite: Aqua NASA/GSFC/Jeff Schmaltz/MODIS Land Rapid Response Team To learn more about MODIS go to: rapidfire.sci.gsfc.nasa.gov/gallery/?latest NASA Goddard Space Flight Center is home to the nation's largest organization of combined scientists, engineers and technologists that build spacecraft, instruments and new technology to study the Earth, the sun, our solar system, and the universe.

  9. Key Technology Research on Open Architecture for The Sharing of Heterogeneous Geographic Analysis Models

    NASA Astrophysics Data System (ADS)

    Yue, S. S.; Wen, Y. N.; Lv, G. N.; Hu, D.

    2013-10-01

    In recent years, the increasing development of cloud computing technologies has laid a critical foundation for efficiently solving complicated geographic issues. However, it is still difficult to realize the cooperative operation of massive heterogeneous geographical models. Traditional cloud architecture is apt to provide centralized solutions to end users, while all the required resources are often offered by large enterprises or special agencies. Thus, it is a closed framework from the perspective of resource utilization. Solving comprehensive geographic issues requires integrating multifarious heterogeneous geographical models and data. In this case, an open computing platform is needed, with which model owners can package and deploy their models into the cloud conveniently, while model users can search, access and utilize those models with cloud facilities. Based on this concept, open cloud service strategies for the sharing of heterogeneous geographic analysis models are studied in this article. The key technologies - a unified cloud interface strategy, a sharing platform based on cloud services, and a computing platform based on cloud services - are discussed in detail, and related experiments are conducted for further verification.

  10. NASA Hybrid Reflectometer Project

    NASA Technical Reports Server (NTRS)

    Lynch, Dana; Mancini, Ron (Technical Monitor)

    2002-01-01

    Time-domain and frequency-domain reflectometry have been used for about forty years to locate opens and shorts in cables. Interpretation of reflectometry data is as much art as science. Is there information in the data that is being missed? Can the reflectometers be improved to allow us to detect and locate defects in cables that are not outright shorts or opens? The Hybrid Reflectometer Project was begun this year at NASA Ames Research Center, initially to model wire physics, simulating time-domain reflectometry (TDR) signals in those models and validating the models against actual TDR data taken on testbed cables. Theoretical models of reflectometry in wires will give us an understanding of the merits and limits of these techniques and will guide the application of a proposed hybrid reflectometer with the aim of enhancing reflectometer sensitivity to the point that wire defects can be detected. We will point out efforts by some other researchers to apply wire physics models to the problem of defect detection in wires and we will describe our own initial efforts to create wire physics models and report on testbed validation of the TDR simulations.
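
    The ranging arithmetic underlying all TDR interpretation is simple: the distance to an impedance discontinuity is half the round-trip delay times the propagation speed in the cable. A worked example with a typical velocity factor:

      C = 299_792_458.0  # speed of light in vacuum, m/s

      def fault_distance_m(round_trip_s, velocity_factor=0.66):
          """Classic TDR ranging; velocity_factor ~0.66 is typical of
          solid-polyethylene coaxial cable."""
          return 0.5 * round_trip_s * velocity_factor * C

      # A reflection arriving 101 ns after the incident edge:
      print(round(fault_distance_m(101e-9), 2), "m")  # ~9.99 m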

  11. The Wisconsin Snow and Cloud-Terra 2000 Experiment (WISC-T2000)

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Atmospheric scientists take to the skies this winter for the Wisconsin Snow and Cloud-Terra 2000 experiment, Feb. 25 through March 13. Scientists in WISC-T2000 will use instruments on board NASA's ER-2, a high-altitude research plane, to validate new science products from NASA's earth-observing satellite Terra, which began its five-year mission on Dec. 18, 1999. Contact Terri Gregory Public Information Coordinator Space Science and Engineering Center University of Wisconsin-Madison (608) 263-3373; fax (608) 262-5974 terri.gregory@ssec.wisc.edu Science Goals: WISC-T2000 is the third in a series of field experiments sponsored by the University of Wisconsin-Madison's Space Science and Engineering Center. The center helped develop one of the five science instruments on Terra, the Moderate-Resolution Imaging Spectroradiometer (MODIS). MODIS will make global measurements of clouds, oceans, land, and atmospheric properties in an effort to monitor and predict global climate change. Infrastructure: The ER-2 will be based at Madison's Truax Field and will fly over the upper Midwest and Oklahoma. ER-2 measurements will be coordinated with observations at the Department of Energy's Cloud and Radiation Testbed site in Oklahoma (http://www.arm.gov/), which will be engaged in a complementary cloud experiment. The center will work closely with NASA's Goddard Space Flight Center, which will collect and distribute MODIS data and science products. Additional information on the WISC-T2000 field campaign is available at the project's Web site http://cimss.ssec.wisc.edu/wisct2000/

  12. Automating NEURON Simulation Deployment in Cloud Resources.

    PubMed

    Stockton, David B; Santamaria, Fidel

    2017-01-01

    Simulations in neuroscience are performed on local servers or High Performance Computing (HPC) facilities. Recently, cloud computing has emerged as a potential computational platform for neuroscience simulation. In this paper we compare and contrast HPC and cloud resources for scientific computation, then report how we deployed NEURON, a widely used simulator of neuronal activity, in three clouds: Chameleon Cloud, a hybrid private academic cloud for cloud technology research based on the OpenStack software; Rackspace, a public commercial cloud, also based on OpenStack; and Amazon Elastic Cloud Computing, based on Amazon's proprietary software. We describe the manual procedures and how to automate cloud operations. We describe extending our simulation automation software called NeuroManager (Stockton and Santamaria, Frontiers in Neuroinformatics, 2015), so that the user is capable of recruiting private cloud, public cloud, HPC, and local servers simultaneously with a simple common interface. We conclude by performing several studies in which we examine speedup, efficiency, total session time, and cost for sets of simulations of a published NEURON model.
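
    The speedup and efficiency figures reported in studies like this follow the standard parallel-performance definitions; for concreteness (the timings below are made up):

      def speedup(t_serial, t_parallel):
          return t_serial / t_parallel

      def efficiency(t_serial, t_parallel, n_workers):
          return speedup(t_serial, t_parallel) / n_workers

      # e.g. a simulation set taking 600 s on one instance and 80 s on 10:
      print(speedup(600.0, 80.0))           # 7.5
      print(efficiency(600.0, 80.0, 10))    # 0.75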

  13. Automating NEURON Simulation Deployment in Cloud Resources

    PubMed Central

    Santamaria, Fidel

    2016-01-01

    Simulations in neuroscience are performed on local servers or High Performance Computing (HPC) facilities. Recently, cloud computing has emerged as a potential computational platform for neuroscience simulation. In this paper we compare and contrast HPC and cloud resources for scientific computation, then report how we deployed NEURON, a widely used simulator of neuronal activity, in three clouds: Chameleon Cloud, a hybrid private academic cloud for cloud technology research based on the OpenStack software; Rackspace, a public commercial cloud, also based on OpenStack; and Amazon Elastic Cloud Computing, based on Amazon's proprietary software. We describe the manual procedures and how to automate cloud operations. We describe extending our simulation automation software called NeuroManager (Stockton and Santamaria, Frontiers in Neuroinformatics, 2015), so that the user is capable of recruiting private cloud, public cloud, HPC, and local servers simultaneously with a simple common interface. We conclude by performing several studies in which we examine speedup, efficiency, total session time, and cost for sets of simulations of a published NEURON model. PMID:27655341

  14. Cloud Response to Arctic Sea Ice Loss and Implications for Feedbacks in the CESM1 Climate Model

    NASA Astrophysics Data System (ADS)

    Morrison, A.; Kay, J. E.; Chepfer, H.; Guzman, R.; Bonazzola, M.

    2017-12-01

    Clouds have the potential to accelerate or slow the rate of Arctic sea ice loss through their radiative influence on the surface. Cloud feedbacks can therefore play into Arctic warming as clouds respond to changes in sea ice cover. As the Arctic moves toward an ice-free state, understanding how cloud - sea ice relationships change in response to sea ice loss is critical for predicting the future climate trajectory. From satellite observations we know the effect of present-day sea ice cover on clouds, but how will clouds respond to sea ice loss as the Arctic transitions to a seasonally open water state? In this study we use a lidar simulator to first evaluate cloud - sea ice relationships in the Community Earth System Model (CESM1) against present-day observations (2006-2015). In the current climate, the cloud response to sea ice is well-represented in CESM1: we see no summer cloud response to changes in sea ice cover, but more fall clouds over open water than over sea ice. Since CESM1 is credible for the current Arctic climate, we next assess if our process-based understanding of Arctic cloud feedbacks related to sea ice loss is relevant for understanding future Arctic clouds. In the future Arctic, summer cloud structure continues to be insensitive to surface conditions. As the Arctic warms in the fall, however, the boundary layer deepens and cloud fraction increases over open ocean during each consecutive decade from 2020 - 2100. This study will also explore seasonal changes in cloud properties such as opacity and liquid water path. Results thus far suggest that a positive fall cloud - sea ice feedback exists in the present-day and future Arctic climate.

  15. 75 FR 13258 - Announcing a Meeting of the Information Security and Privacy Advisory Board

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-19

    .../index.html/. Agenda: --Cloud Computing Implementations --Health IT --OpenID --Pending Cyber Security... will be available for the public and media. --OpenID --Cloud Computing Implementations --Security...

  16. Studying NASA's Transition to Ka-Band Communications for Low Earth Orbit

    NASA Technical Reports Server (NTRS)

    Chelmins, David; Reinhart, Richard; Mortensen, Dale; Welch, Bryan; Downey, Joseph; Evans, Mike

    2014-01-01

    As the S-band spectrum becomes crowded, future space missions will need to consider moving command and telemetry services to Ka-band. NASA's Space Communications and Navigation (SCaN) Testbed provides a software-defined radio (SDR) platform that is capable of supporting investigation of this service transition. The testbed contains two S-band SDRs and one Ka-band SDR. Over the past year, SCaN Testbed has demonstrated Ka-band communications capabilities with NASA's Tracking and Data Relay Satellite System (TDRSS) using both open- and closed-loop antenna tracking profiles. A number of technical areas need to be addressed for a successful transition to Ka-band. The smaller antenna beamwidth at Ka-band increases the criticality of antenna pointing, necessitating closed-loop tracking algorithms and new techniques for received power estimation. Additionally, the antenna pointing routines require enhanced knowledge of spacecraft position and attitude for initial acquisition, versus an S-band antenna. Ka-band provides a number of technical advantages for bulk data transfer. Unlike at S-band, a larger bandwidth may be available for space missions, allowing increased data rates. The potential for high-rate data transfer can also be extended to direct-to-ground links through the use of variable or adaptive coding and modulation. Specific examples of Ka-band research from SCaN Testbed's first year of operation will be cited, such as communications link performance with TDRSS and the effects of truss flexure on antenna pointing.
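
    The pointing criticality follows from antenna beamwidth scaling inversely with frequency; the usual HPBW ~ 70*lambda/D rule of thumb makes the S-band versus Ka-band contrast explicit (the dish size below is hypothetical, not the SCaN Testbed's):

      C = 2.998e8  # speed of light, m/s

      def hpbw_deg(freq_hz, dish_diameter_m):
          """Rule-of-thumb half-power beamwidth of a parabolic antenna,
          HPBW ~ 70 * wavelength / diameter, in degrees."""
          return 70.0 * (C / freq_hz) / dish_diameter_m

      # The same hypothetical 0.5 m dish at S-band versus Ka-band:
      print(round(hpbw_deg(2.2e9, 0.5), 2))    # ~19.08 deg
      print(round(hpbw_deg(25.5e9, 0.5), 2))   # ~1.65 deg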

  17. Studying NASA's Transition to Ka-Band Communications for Low Earth Orbit

    NASA Technical Reports Server (NTRS)

    Chelmins, David T.; Reinhart, Richard C.; Mortensen, Dale; Welch, Bryan; Downey, Joseph; Evans, Michael

    2014-01-01

    As the S-band spectrum becomes crowded, future space missions will need to consider moving command and telemetry services to Ka-band. NASA's Space Communications and Navigation (SCaN) Testbed provides a software-defined radio (SDR) platform that is capable of supporting investigation of this service transition. The testbed contains two S-band SDRs and one Ka-band SDR. Over the past year, SCaN Testbed has demonstrated Ka-band communications capabilities with NASA's Tracking and Data Relay Satellite System (TDRSS) using both open- and closed-loop antenna tracking profiles. A number of technical areas need to be addressed for a successful transition to Ka-band. The smaller antenna beamwidth at Ka-band increases the criticality of antenna pointing, necessitating closed-loop tracking algorithms and new techniques for received power estimation. Additionally, the antenna pointing routines require enhanced knowledge of spacecraft position and attitude for initial acquisition, versus an S-band antenna. Ka-band provides a number of technical advantages for bulk data transfer. Unlike at S-band, a larger bandwidth may be available for space missions, allowing increased data rates. The potential for high-rate data transfer can also be extended to direct-to-ground links through the use of variable or adaptive coding and modulation. Specific examples of Ka-band research from SCaN Testbed's first year of operation will be cited, such as communications link performance with TDRSS and the effects of truss flexure on antenna pointing.

  18. The Research of the Parallel Computing Development from the Angle of Cloud Computing

    NASA Astrophysics Data System (ADS)

    Peng, Zhensheng; Gong, Qingge; Duan, Yanyu; Wang, Yun

    2017-10-01

    Cloud computing is the development of parallel computing, distributed computing and grid computing. The development of cloud computing has brought parallel computing into people's lives. Firstly, this paper expounds the concept of cloud computing and introduces several traditional parallel programming models. Secondly, it analyzes and studies the principles, advantages and disadvantages of OpenMP, MPI and MapReduce respectively. Finally, it compares the MPI and OpenMP models with MapReduce from the angle of cloud computing. The results of this paper are intended to provide a reference for the development of parallel computing.
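
    The map/reduce pattern the paper contrasts with MPI and OpenMP can be conveyed in a few lines of Python, with a process pool for the map stage and a fold for the reduce stage; a toy word-count sketch:

      from collections import Counter
      from functools import reduce
      from multiprocessing import Pool

      def map_phase(chunk):
          return Counter(chunk.split())   # local word counts per chunk

      def reduce_phase(a, b):
          a.update(b)                     # merge partial counts
          return a

      if __name__ == "__main__":
          chunks = ["cloud grid cloud", "grid cloud parallel", "parallel cloud"]
          with Pool(3) as pool:
              partials = pool.map(map_phase, chunks)   # the "map" stage
          total = reduce(reduce_phase, partials)       # the "reduce" stage
          print(total.most_common(2))   # [('cloud', 4), ('grid', 2)]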

  19. WFIRST Coronagraph Technology Development Testbeds: Status and Recent Testbed Results

    NASA Astrophysics Data System (ADS)

    Shi, Fang; An, Xin; Balasubramanian, Kunjithapatham; cady, eric; Gordon, Brian; Greer, Frank; Kasdin, N. Jeremy; Kern, Brian; Lam, Raymond; Marx, David; Moody, Dwight; Patterson, Keith; Poberezhskiy, Ilya; mejia prada, camilo; Gersh-Range, Jessica; Eldorado Riggs, A. J.; Seo, Byoung-Joon; Shields, Joel; Sidick, Erkin; Tang, Hong; Trauger, John Terry; Truong, Tuan; White, Victor; Wilson, Daniel; Zhou, Hanying; JPL WFIRST Testbed Team, Princeton University

    2018-01-01

    As a part of technology development for the WFIRST coronagraph instrument (CGI), dedicated testbeds have been built and commissioned at JPL. The coronagraph technology development testbeds include the Occulting Mask Coronagraph (OMC) testbed, the Shaped Pupil Coronagraph/Integral Field Spectrograph (SPC/IFS) testbed, and the Vacuum Surface Gauge (VSG) testbed. With a configuration similar to the WFIRST flight coronagraph instrument, the OMC testbed consists of two coronagraph modes, Shaped Pupil Coronagraph (SPC) and Hybrid Lyot Coronagraph (HLC); a low-order wavefront sensor (LOWFS); and an optical telescope assembly (OTA) simulator which can generate realistic line-of-sight (LoS) drift and jitter as well as the low-order wavefront error that would be induced by the WFIRST telescope's vibration and thermal changes. The SPC/IFS testbed is dedicated to testing the IFS working with a Shaped Pupil Coronagraph, while the VSG testbed is for measuring and calibrating the deformable mirrors, a key component used for WFIRST CGI's wavefront control. In this poster, we describe the testbed functions and status as well as highlights of the latest results from the OMC, SPC/IFS and VSG testbeds.

  20. Extensions to Traditional Spatial Data Infrastructures: Integration of Social Media, Synchronization of Datasets, and Data on the Go in GeoPackages

    NASA Astrophysics Data System (ADS)

    Simonis, Ingo

    2015-04-01

    Traditional Spatial Data Infrastructures focus on aspects such as description and discovery of geospatial data, integration of these data into processing workflows, and representation of fusion or other data analysis results. Though many interoperability agreements still need to be worked out to achieve a satisfying level of interoperability within large-scale initiatives such as INSPIRE, new technologies, use cases and requirements are constantly emerging from the user community. This paper focuses on three aspects that have come up recently: the integration of social media data into SDIs, synchronization between datasets used by field workers in shared-resource environments, and the generation and maintenance of data for mixed online/offline situations that can be easily packed, delivered, modified, and synchronized with reference data sets. The work described in this paper results from the latest testbed executed by the Open Geospatial Consortium (OGC). The testbed is part of the Interoperability Program (IP), which constitutes a significant part of the OGC standards development process. The IP has a number of instruments to enhance geospatial standards and technologies, such as Testbeds, Pilot Projects, Interoperability Experiments, and Interoperability Expert Services. These activities are designed to encourage rapid development, testing, validation, demonstration and adoption of open, consensus-based standards and best practices. The latest global activity, Testbed-11, aims at exploring new technologies and architectural approaches to enrich and extend traditional spatial data infrastructures with data from social media, improved data synchronization, and the capability to take data to the field in new synchronized data containers called GeoPackages. Social media sources are a valuable supplement to providing up-to-date information in distributed environments. Following an uncoordinated crowdsourcing approach, social media data can be both overwhelming in volume and questionable in accuracy and legitimacy. Testbed-11 explores how best to make use of such sources of information and how to deal with the inherent issues of data from platforms such as OpenStreetMap, Twitter, tumblr, flickr, Snapchat, Facebook, Instagram, YouTube, Vimeo, Panoramio, Pinterest, Picasa or storyful. Further important aspects highlighted here are the synchronization of data and the capability to take complex data sets of any size to the field on mobile devices - and to keep them in sync with reference data stores. In emergency management situations in particular, it is crucial to ensure properly synchronized data sets across different types of data stores and applications. Often data are taken to the field on mobile devices, where they get updated or annotated. Though bandwidth is continually improving, requirements on data quality and complexity grow in parallel, and intermittent connectivity is paired with high security requirements that have to be fulfilled. This paper discusses the latest approaches using synchronization services and synchronized GeoPackages, the new container format for geospatial data.
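
    Part of what makes GeoPackages convenient containers for this online/offline workflow is that each one is a single SQLite file whose mandatory gpkg_contents table enumerates the layers it carries. A minimal inspection sketch (the file name is hypothetical):

      import sqlite3

      def list_layers(path):
          """List the layers a GeoPackage carries via its mandatory
          gpkg_contents table (table_name, data_type, last_change)."""
          with sqlite3.connect(path) as db:
              rows = db.execute(
                  "SELECT table_name, data_type, last_change FROM gpkg_contents"
              )
              return list(rows)

      for name, kind, changed in list_layers("field_data.gpkg"):
          print(name, kind, changed)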

  1. Identifying Meteorological Controls on Open and Closed Mesoscale Cellular Convection as Associated with Marine Cold Air Outbreaks

    NASA Astrophysics Data System (ADS)

    McCoy, Isabel; Wood, Robert; Fletcher, Jennifer

    Marine low clouds are key influencers of the climate and contribute significantly to uncertainty in model climate sensitivity due to their small scales and complex processes. Many low clouds occur in large-scale cellular patterns, known as open and closed mesoscale cellular convection (MCC), which have significantly different radiative and microphysical properties. Investigating MCC development and meteorological controls will improve our understanding of their impacts on the climate. We conducted an examination of time-varying meteorological conditions associated with satellite-determined open and closed MCC. The spatial and temporal patterns of MCC clouds were compared with key meteorological control variables calculated from ERA-Interim reanalysis to highlight dependencies and major differences. This illustrated the influence of environmental stability and surface forcing as well as the role of marine cold air outbreaks (MCAOs, the movement of cold air from polar regions across warmer waters) in MCC cloud formation. Such outbreaks are important to open-MCC development and may also influence the transition from open to closed MCC. Our results may lead to improvements in the parameterization of cloudiness and advance the simulation of marine low clouds. National Science Foundation Graduate Research Fellowship Grant (DGE-1256082).
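
    A common quantitative handle on MCAO conditions in reanalysis-based studies of this kind is the difference between sea-surface and 850 hPa potential temperature; a sketch of that index follows (the paper's exact definition may differ).

      def potential_temperature(t_k, p_hpa, p0_hpa=1000.0):
          """Poisson's equation for potential temperature (kappa ~ R/cp)."""
          return t_k * (p0_hpa / p_hpa) ** 0.286

      def mcao_index(sst_k, t850_k):
          """Sea-surface minus 850 hPa potential temperature; positive
          values indicate cold air moving over warmer water."""
          return (potential_temperature(sst_k, 1000.0)
                  - potential_temperature(t850_k, 850.0))

      # Cold continental air (260 K at 850 hPa) over 278 K water:
      print(round(mcao_index(278.0, 260.0), 1))  # ~5.6, positive => MCAO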

  2. A Single Column Model Ensemble Approach Applied to the TWP-ICE Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davies, Laura; Jakob, Christian; Cheung, K.

    2013-06-27

    Single column models (SCM) are useful testbeds for investigating the parameterisation schemes of numerical weather prediction and climate models. The usefulness of SCM simulations is limited, however, by the accuracy of the best-estimate large-scale data prescribed. One method to address this uncertainty is to perform ensemble simulations of the SCM. This study first derives an ensemble of large-scale data for the Tropical Warm Pool International Cloud Experiment (TWP-ICE) based on an estimate of a possible source of error in the best-estimate product. These data are then used to carry out simulations with 11 SCMs and 2 cloud-resolving models (CRMs). Best-estimate simulations are also performed. All models show that moisture-related variables are close to observations and there are limited differences between the best-estimate and ensemble-mean values. The models, however, show different sensitivities to changes in the forcing, particularly when weakly forced. The ensemble simulations highlight important differences in the moisture budget between the SCMs and CRMs. Systematic differences are also apparent in the ensemble-mean vertical structure of cloud variables. The ensemble is further used to investigate relations between cloud variables and precipitation, identifying large differences between CRMs and SCMs. This study highlights that additional information can be gained by performing ensemble simulations, enhancing the information derived from models beyond the more traditional single best-estimate simulation.

  3. The Fourier-Kelvin Stellar Interferometer (FKSI): A Progress Report and Preliminary Results from Our Laboratory Testbed

    NASA Technical Reports Server (NTRS)

    Berry, Richard; Rajagopa, J.; Danchi, W. C.; Allen, R. J.; Benford, D. J.; Deming, D.; Gezari, D. Y.; Kuchner, M.; Leisawitz, D. T.; Linfield, R.

    2005-01-01

    The Fourier-Kelvin Stellar Interferometer (FKSI) is a mission concept for an imaging and nulling interferometer for the near-infrared to mid-infrared spectral region (3-8 microns). FKSI is conceived as a scientific and technological pathfinder to TPF/DARWIN as well as SPIRIT, SPECS, and SAFIR. It will also be a high angular resolution system complementary to JWST. The scientific emphasis of the mission is on the evolution of protostellar systems, from just after the collapse of the precursor molecular cloud core, through the formation of the disk surrounding the protostar, the formation of planets in the disk, and eventual dispersal of the disk material. FKSI will also search for brown dwarfs and Jupiter mass and smaller planets, and could also play a very powerful role in the investigation of the structure of active galactic nuclei and extra-galactic star formation. We report additional studies of the imaging capabilities of the FKSI with various configurations of two to five telescopes, studies of the capabilities of FKSI assuming an increase in long wavelength response to 10 or 12 microns (depending on availability of detectors), and preliminary results from our nulling testbed.

  4. Cloud Environment Automation: from infrastructure deployment to application monitoring

    NASA Astrophysics Data System (ADS)

    Aiftimiei, C.; Costantini, A.; Bucchi, R.; Italiano, A.; Michelotto, D.; Panella, M.; Pergolesi, M.; Saletta, M.; Traldi, S.; Vistoli, C.; Zizzi, G.; Salomoni, D.

    2017-10-01

    The potential offered by the cloud paradigm is often limited by technical issues, rules and regulations. In particular, the activities related to the design and deployment of the Infrastructure as a Service (IaaS) cloud layer can be difficult to apply and time-consuming for infrastructure maintainers. In this paper the research activity, carried out during the Open City Platform (OCP) research project [1] and aimed at designing and developing an automatic tool for cloud-based IaaS deployment, is presented. Open City Platform is an industrial research project funded by the Italian Ministry of University and Research (MIUR), started in 2014. It intends to research, develop and test new technological solutions that are open, interoperable and usable on demand in the field of Cloud Computing, along with new sustainable organizational models that can be deployed for and adopted by Public Administrations (PA). The presented work and the related outcomes are aimed at simplifying the deployment and maintenance of a complete IaaS cloud-based infrastructure.

  5. Large scale and cloud-based multi-model analytics experiments on climate change data in the Earth System Grid Federation

    NASA Astrophysics Data System (ADS)

    Fiore, Sandro; Płóciennik, Marcin; Doutriaux, Charles; Blanquer, Ignacio; Barbera, Roberto; Donvito, Giacinto; Williams, Dean N.; Anantharaj, Valentine; Salomoni, Davide D.; Aloisio, Giovanni

    2017-04-01

    In many scientific domains such as climate, data is often n-dimensional and requires tools that support specialized data types and primitives to be properly stored, accessed, analysed and visualized. Moreover, new challenges arise in large-scale scenarios and eco-systems where petabytes (PB) of data can be available and data can be distributed and/or replicated, such as the Earth System Grid Federation (ESGF) serving the Coupled Model Intercomparison Project, Phase 5 (CMIP5) experiment, providing access to 2.5 PB of data for the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5). A case study on climate model intercomparison data analysis addressing several classes of multi-model experiments is being implemented in the context of the EU H2020 INDIGO-DataCloud project. Such experiments require the availability of large amounts of data (multi-terabyte order) related to the output of several climate model simulations as well as the exploitation of scientific data management tools for large-scale data analytics. More specifically, the talk discusses in detail a use case on precipitation trend analysis in terms of requirements, architectural design solution, and infrastructural implementation. The experiment has been tested and validated on CMIP5 datasets, in the context of a large-scale distributed testbed across the EU and US involving three ESGF sites (LLNL, ORNL, and CMCC) and one central orchestrator site (PSNC). The general "environment" of the case study concerns multi-model data analysis intercomparison challenges, addressed on CMIP5 data made available through the IS-ENES/ESGF infrastructure. The added value of the solution proposed in the INDIGO-DataCloud project is summarized in the following: (i) it implements a different paradigm (from client- to server-side); (ii) it intrinsically reduces data movement; (iii) it makes the end-user setup lightweight; (iv) it fosters re-usability (of data, final/intermediate products, workflows, sessions, etc.) since everything is managed on the server side; (v) it complements, extends and interoperates with the ESGF stack; (vi) it provides a "tool" for scientists to run multi-model experiments; and finally (vii) it can drastically reduce the time-to-solution for these experiments from weeks to hours. At the time of writing, the proposed testbed represents the first concrete implementation of a distributed multi-model experiment in the ESGF/CMIP context joining server-side and parallel processing, end-to-end workflow management and cloud computing. As opposed to the current scenario based on search & discovery, data download, and client-based data analysis, the INDIGO-DataCloud architectural solution described in this contribution addresses the scientific computing & analytics requirements by providing a paradigm shift based on server-side and high performance big data frameworks jointly with two-level workflow management systems realized at the PaaS level via a cloud infrastructure.
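
    The precipitation trend analysis at the heart of the use case reduces, per grid cell, to a least-squares slope over time; a compact array-oriented sketch of that kernel (synthetic data, not CMIP5):

      import numpy as np

      def trend_per_cell(precip, years):
          """Least-squares trend of a (time, lat, lon) cube at every
          grid cell: slope = sum(t * anomaly) / sum(t^2)."""
          t = years - years.mean()
          anom = precip - precip.mean(axis=0)
          return np.tensordot(t, anom, axes=(0, 0)) / (t * t).sum()

      years = np.arange(1950, 2006, dtype=float)
      precip = np.random.default_rng(2).random((years.size, 90, 180))
      print(trend_per_cell(precip, years).shape)   # (90, 180)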

  6. A programmable laboratory testbed in support of evaluation of functional brain activation and connectivity.

    PubMed

    Barbour, Randall L; Graber, Harry L; Xu, Yong; Pei, Yaling; Schmitz, Christoph H; Pfeil, Douglas S; Tyagi, Anandita; Andronica, Randy; Lee, Daniel C; Barbour, San-Lian S; Nichols, J David; Pflieger, Mark E

    2012-03-01

    An important determinant of the value of quantitative neuroimaging studies is the reliability of the derived information, which is a function of the data collection conditions. Near infrared spectroscopy (NIRS) and electroencephalography are independent sensing domains that are well suited to explore principal elements of the brain's response to neuroactivation, and whose integration supports development of compact, even wearable, systems suitable for use in open environments. In an effort to maximize the translatability and utility of such resources, we have established an experimental laboratory testbed that supports measures and analysis of simulated macroscopic bioelectric and hemodynamic responses of the brain. Principal elements of the testbed include 1) a programmable anthropomorphic head phantom containing a multisignal source array embedded within a matrix that approximates the background optical and bioelectric properties of the brain, 2) integrated translatable headgear that support multimodal studies, and 3) an integrated data analysis environment that supports anatomically based mapping of experiment-derived measures that are directly and not directly observable. Here, we present a description of system components and fabrication, an overview of the analysis environment, and findings from a representative study that document the ability to experimentally validate effective connectivity models based on NIRS tomography.

  7. The NASA LeRC regenerative fuel cell system testbed program for government and commercial applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maloney, T.M.; Prokopius, P.R.; Voecks, G.E.

    1995-01-25

    The Electrochemical Technology Branch of the NASA Lewis Research Center (LeRC) has initiated a program to develop a renewable energy system testbed to evaluate, characterize, and demonstrate a fully integrated regenerative fuel cell (RFC) system for space, military, and commercial applications. A multi-agency management team, led by NASA LeRC, is implementing the program through a unique international coalition which encompasses both government and industry participants. This open-ended teaming strategy optimizes the development of space, military, and commercial RFC system technologies. Program activities to date include system design and analysis, and reactant storage sub-system design, with a major emphasis centered upon testbed fabrication and installation and testing of two key RFC system components, namely, the fuel cells and electrolyzers. Construction of the LeRC 25 kW RFC system testbed at the NASA Jet Propulsion Laboratory (JPL) facility at Edwards Air Force Base (EAFB) is nearly complete and some sub-system components have already been installed. Furthermore, planning for the first commercial RFC system demonstration is underway. © 1995 American Institute of Physics.

  8. Open-cell and closed-cell clouds off Peru [detail

    NASA Image and Video Library

    2017-12-08

    2010/107 - 04/17 at 21:05 UTC. Open-cell and closed-cell clouds off Peru, Pacific Ocean. To view the full frame of this image, go to: www.flickr.com/photos/gsfc/4557497219/ Resembling a frosted window on a cold winter's day, this lacy pattern of marine clouds was captured off the coast of Peru in the Pacific Ocean by MODIS on the Aqua satellite on April 19, 2010. The image reveals both open- and closed-cell cumulus cloud patterns. These cells, or parcels of air, often occur in roughly hexagonal arrays in a layer of fluid (the atmosphere often behaves like a fluid) that begins to "boil," or convect, due to heating at the base or cooling at the top of the layer. In "closed" cells warm air is rising in the center and sinking around the edges, so clouds appear in cell centers but evaporate around cell edges. This produces cloud formations like those that dominate the lower left. The reverse flow can also occur: air can sink in the center of the cell and rise at the edge. This process is called "open cell" convection, and clouds form at cell edges around open centers, which creates a lacy, hollow-looking pattern like the clouds in the upper right. Closed and open cell convection represent two stable atmospheric configurations — two sides of the convection coin. But what determines which path the "boiling" atmosphere will take? Apparently the process is highly chaotic, and there appears to be no way to predict whether convection will result in open or closed cells. Indeed, the atmosphere may sometimes flip between one mode and another in no predictable pattern. Satellite: Aqua NASA/GSFC/Jeff Schmaltz/MODIS Land Rapid Response Team To learn more about MODIS go to: rapidfire.sci.gsfc.nasa.gov/gallery/?latest

  9. Microphysical and macrophysical responses of marine stratocumulus polluted by underlying ships

    NASA Astrophysics Data System (ADS)

    Christensen, Matthew Wells

    Multiple sensors flying in the A-train constellation of satellites were used to determine the extent to which aerosol plumes from ships passing below marine stratocumulus alter the microphysical and macrophysical properties of the clouds. Aerosol plumes generated by ships sometimes influence cloud microphysical properties (effective radius) and, to a largely undetermined extent, cloud macrophysical properties (liquid water path, coverage, depth, precipitation, and longevity). Aerosol indirect effects were brought into focus using observations from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) and the 94-GHz radar onboard CloudSat. To assess local cloud-scale responses to aerosol, the locations of over one thousand ship tracks coinciding with the radar were meticulously logged by hand from Moderate Resolution Imaging Spectroradiometer (MODIS) imagery. MODIS imagery was used to distinguish ship tracks that were embedded in closed, open, and unclassifiable mesoscale cellular cloud structures. The impact of aerosol on the microphysical cloud properties in both the closed and open cell regimes was consistent with the changes predicted by the Twomey hypothesis. For the macrophysical changes, differences in the sign and magnitude of these properties were observed between cloud regimes. The results demonstrate that the spatial extent of rainfall (rain cover fraction) and its intensity decrease in clouds contaminated by the ship plume compared to the ambient pristine clouds. Although reductions of precipitation were common amongst the clouds with detectable rainfall (72% of cases), a substantial fraction of ship tracks (28% of cases) exhibited the opposite response. The sign and strength of the response were tied to the type of stratocumulus (e.g., closed vs open cells), the depth of the boundary layer, and the humidity in the free troposphere. When closed cellular clouds were identified, liquid water path, drizzle rate, and rain cover fraction (an average relative decrease of 61%) were significantly smaller in the ship-contaminated clouds. Differences in drizzle rate resulted primarily from the reductions in rain cover fraction (i.e., fewer pixels were identified with rain in the clouds polluted by the ship). The opposite occurred in the open cell regime. Ship plumes ingested into this regime resulted in significantly deeper and brighter clouds with higher liquid water amounts and rain rates. Enhanced rain rates (average relative increase of 89%) were primarily due to changes in intensity (i.e., rain rates on the 1.1 km pixel scale were higher in the ship-contaminated clouds) and, to a lesser extent, rain cover fraction. One implication of these differences is that the local aerosol indirect radiative forcing was more than five times larger for ship tracks observed in the open cell regime (-59 W m-2) compared to those identified in the closed cell regime (-12 W m-2). The results presented here underline the need to consider the mesoscale structure of stratocumulus when examining the cloud dynamic response to changes in aerosol concentration. In the final part of the dissertation, the focus shifted to the climate scale to examine the impact of shipping on the Earth's radiation budget. Two studies were employed. In the first, changes to the radiative properties of boundary layer clouds (i.e., cloud top heights less than 3 km) were examined in response to the substantial decreases in ship traffic that resulted from the recent world economic recession in 2008. Differences in the annually averaged droplet effective radius and top-of-atmosphere outgoing shortwave radiative flux between 2007 and 2009 did not manifest as a clear response in the climate system, and were probably masked either by competing aerosol-cloud feedbacks or by interannual climate variability. In the second study, a method was developed to estimate the radiative forcing from shipping by convolving lanes of densely populated ships onto the global distributions of closed and open cell stratocumulus clouds. Closed cells were observed more than twice as often as open cells. Despite the smaller abundance of open cells, a significant portion of the radiative forcing from shipping was claimed by this regime. On the whole, the global radiative forcing from ship tracks was small (approximately -0.45 mW m-2) compared to the radiative forcing associated with the atmospheric buildup of anthropogenic CO2.

  10. JINR cloud infrastructure evolution

    NASA Astrophysics Data System (ADS)

    Baranov, A. V.; Balashov, N. A.; Kutovskiy, N. A.; Semenov, R. N.

    2016-09-01

    To fulfil JINR commitments in national and international projects involving modern information technologies such as cloud and grid computing, and to provide a modern tool for the scientific research of JINR users, a cloud infrastructure was deployed at the Laboratory of Information Technologies of the Joint Institute for Nuclear Research. OpenNebula software was chosen as the cloud platform. Initially it was set up in a simple configuration with a single front-end host and a few cloud nodes. Some custom development was done to tune the JINR cloud installation to local needs: a web form in the cloud web interface for resource requests, a menu item with cloud utilization statistics, user authentication via Kerberos, and a custom driver for OpenVZ containers. Because of high demand for the cloud service and over-utilization of its resources, it was re-designed to cover users' increasing needs for capacity, availability and reliability. Recently a new cloud instance has been deployed in a high-availability configuration with a distributed network file system and additional computing power.

  11. In situ observations of Arctic cloud properties across the Beaufort Sea marginal ice zone

    NASA Astrophysics Data System (ADS)

    Corr, C.; Moore, R.; Winstead, E.; Thornhill, K. L., II; Crosbie, E.; Ziemba, L. D.; Beyersdorf, A. J.; Chen, G.; Martin, R.; Shook, M.; Corbett, J.; Smith, W. L., Jr.; Anderson, B. E.

    2016-12-01

    Clouds play an important role in Arctic climate. This is particularly true over the Arctic Ocean, where feedbacks between clouds and sea-ice impact the surface radiation budget through modifications of sea-ice extent, ice thickness, cloud base height, and cloud cover. This work summarizes measurements of Arctic cloud properties made aboard the NASA C-130 aircraft over the Beaufort Sea during ARISE (Arctic Radiation - IceBridge Sea&Ice Experiment) in September 2014. The influence of surface type on cloud properties is also investigated. Specifically, liquid water content (LWC), droplet concentrations, and droplet size distributions are compared for clouds sampled over three distinct regimes in the Beaufort Sea: 1) open water, 2) the marginal ice zone (MIZ), and 3) sea-ice. Regardless of surface type, nearly all clouds intercepted during ARISE were liquid-phase clouds. However, differences in droplet size distributions and concentrations were evident across the surface types; clouds over the MIZ and sea-ice generally had fewer and larger droplets compared to those over open water. The potential implications of these results for understanding cloud-surface albedo climate feedbacks in the Arctic are discussed.

  12. Stewardship and management challenges within a cloud-based open data ecosystem (Invited Paper 211863)

    NASA Astrophysics Data System (ADS)

    Kearns, E. J.

    2017-12-01

    NOAA's Big Data Project is conducting an experiment in the collaborative distribution of open government data to non-governmental cloud-based systems. Through Cooperative Research and Development Agreements signed in 2015 between NOAA and Amazon Web Services, Google Cloud Platform, IBM, Microsoft Azure, and the Open Commons Consortium, NOAA is distributing open government data to a wide community of potential users. There are a number of significant advantages related to the use of open data on commercial cloud platforms, but through this experiment NOAA is also discovering significant challenges for those stewarding and maintaining NOAA's data resources in support of users in the wider open data ecosystem. Among the challenges that will be discussed are: the need to provide effective interpretation of the data content to enable their use by data scientists from other expert communities; effective maintenance of Collaborators' open data stores through coordinated publication of new data and new versions of older data; the provenance and verification of open data as authentic NOAA-sourced data across multiple management boundaries and analytical tools; and keeping pace with the accelerating expectations of users with regard to improved quality control, data latency, availability, and discoverability. Suggested strategies to address these challenges will also be described.

  13. Precipitation-generated oscillations in open cellular cloud fields.

    PubMed

    Feingold, Graham; Koren, Ilan; Wang, Hailong; Xue, Huiwen; Brewer, Wm Alan

    2010-08-12

    Cloud fields adopt many different patterns that can have a profound effect on the amount of sunlight reflected back to space, with important implications for the Earth's climate. These cloud patterns can be observed in satellite images of the Earth and often exhibit distinct cell-like structures associated with organized convection at scales of tens of kilometres. Recent evidence has shown that atmospheric aerosol particles-through their influence on precipitation formation-help to determine whether cloud fields take on closed (more reflective) or open (less reflective) cellular patterns. The physical mechanisms controlling the formation and evolution of these cells, however, are still poorly understood, limiting our ability to simulate realistically the effects of clouds on global reflectance. Here we use satellite imagery and numerical models to show how precipitating clouds produce an open cellular cloud pattern that oscillates between different, weakly stable states. The oscillations are a result of precipitation causing downward motion and outflow from clouds that were previously positively buoyant. The evaporating precipitation drives air down to the Earth's surface, where it diverges and collides with the outflows of neighbouring precipitating cells. These colliding outflows form surface convergence zones that trigger new cloud formation. In turn, the newly formed clouds produce precipitation and new colliding outflow patterns that are displaced from the previous ones. As successive cycles of this kind unfold, convergence zones alternate with divergence zones and new cloud patterns emerge to replace old ones. The result is an oscillating, self-organized system with a characteristic cell size and precipitation frequency.

  14. A Real-Time Linux for Multicore Platforms

    DTIC Science & Technology

    2013-12-20

    This system, called LITMUS-RT (LInux Testbed for MUltiprocessor Scheduling in Real-Time systems), was developed (under ARO support) to obtain a fully-functional OS for supporting real-time workloads on multicore platforms. LITMUS-RT allows different multiprocessor real-time scheduling and synchronization algorithms to be specified as plugin components, and is available as open-source software.

  15. Terrestrial Planet Finder Interferometer Technology Status and Plans

    NASA Technical Reports Server (NTRS)

    Lawson, Peter R.; Ahmed, A.; Gappinger, R. O.; Ksendzov, A.; Lay, O. P.; Martin, S. R.; Peters, R. D.; Scharf, D. P.; Wallace, J. K.; Ware, B.

    2006-01-01

    A viewgraph presentation on the technology status and plans for Terrestrial Planet Finder Interferometer is shown. The topics include: 1) The Navigator Program; 2) TPF-I Project Overview; 3) Project Organization; 4) Technology Plan for TPF-I; 5) TPF-I Testbeds; 6) Nulling Error Budget; 7) Nulling Testbeds; 8) Nulling Requirements; 9) Achromatic Nulling Testbed; 10) Single Mode Spatial Filter Technology; 11) Adaptive Nuller Testbed; 12) TPF-I: Planet Detection Testbed (PDT); 13) Planet Detection Testbed Phase Modulation Experiment; and 14) Formation Control Testbed.

  16. Evaluating the feasibility of global climate models to simulate cloud cover effect controlled by Marine Stratocumulus regime transitions

    NASA Astrophysics Data System (ADS)

    Goren, Tom; Muelmenstaedt, Johannes; Rosenfeld, Daniel; Quaas, Johannes

    2017-04-01

    Marine stratocumulus clouds (MSC) occur in two main cloud regimes, open and closed cells, that differ significantly in their cloud cover. Closed cells gradually get cleansed of high CCN concentrations in a process that involves the initiation of drizzle, which breaks the full cloud cover into open cells. The drizzle creates downdrafts that organize the convection along converging gust fronts, which in turn produce stronger updrafts that can sustain more cloud water, compensating for the depletion of cloud water by rain. In addition, stronger updrafts allow the clouds to grow relatively deep before rain starts to deplete their cloud water. Therefore, lower droplet concentrations and stronger rain would lead to lower cloud fraction, but not necessarily to lower liquid water path (LWP). The fundamental relationships between these key variables derived from global climate model (GCM) simulations are analyzed with respect to observations in order to determine whether the GCM parameterizations can adequately represent the physical mechanisms governing MSC regime transitions. The results are used to evaluate the feasibility of GCMs for estimating the aerosol cloud-mediated radiative forcing associated with MSC regime transitions, which is responsible for the largest aerosol cloud-mediated radiative forcing.

  17. Secure open cloud in data transmission using reference pattern and identity with enhanced remote privacy checking

    NASA Astrophysics Data System (ADS)

    Vijay Singh, Ran; Agilandeeswari, L.

    2017-11-01

    To handle the large amounts of client data in the open cloud, many security issues need to be addressed. A client's private data should not be known to other group members without the data owner's valid permission. Sometimes clients are also prevented from accessing open cloud servers due to various restrictions. To overcome these security issues and restrictions related to storage, data sharing in an inter-domain network, and privacy checking, we propose a model based on identity-based cryptography for data transmission, an intermediate entity that holds the client's reference and identity and controls data transmission in an open cloud environment, and an extended remote privacy checking technique that works on the admin side. On behalf of the data owner's authority, the proposed model offers secure cryptography for data transmission and remote privacy checking, either as private, public, or instructed. The hardness of the Computational Diffie-Hellman assumption underlying the key exchange makes the proposed model more secure than existing models used in public cloud environments.
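
    The security claim rests on the Computational Diffie-Hellman (CDH) assumption; the toy Python sketch below shows only the underlying key agreement (a small Mersenne prime, no identity-based layer), not the paper's full scheme.

        # Textbook Diffie-Hellman key agreement; security rests on CDH hardness.
        # Toy 127-bit Mersenne prime for illustration only -- real deployments
        # use standardized groups of 2048 bits or more.
        import secrets

        p = 2**127 - 1
        g = 3

        a = secrets.randbelow(p - 2) + 1   # data owner's secret exponent
        b = secrets.randbelow(p - 2) + 1   # cloud server's secret exponent
        A = pow(g, a, p)                   # public value sent by the owner
        B = pow(g, b, p)                   # public value sent by the server

        # Both ends derive the same shared key; an eavesdropper who sees only
        # (g, p, A, B) must solve CDH to recover it.
        assert pow(B, a, p) == pow(A, b, p)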

  18. Sohbrit: Autonomous COTS System for Satellite Characterization

    NASA Astrophysics Data System (ADS)

    Blazier, N.; Tarin, S.; Wells, M.; Brown, N.; Nandy, P.; Woodbury, D.

    As technology continues to improve, driving down the cost of commercial astronomical products while increasing their capabilities, manpower to run observations has become the limiting factor in acquiring continuous and repeatable space situational awareness data. Sandia National Laboratories set out to automate a testbed comprised entirely of commercial off-the-shelf (COTS) hardware for space object characterization (SOC) focusing on satellites in geosynchronous orbit. Using an entirely autonomous system allows collection parameters such as target illumination and nightly overlap to be accounted for habitually; this enables repeatable development of target light curves to establish patterns of life in a variety of spectral bands. The system, known as Sohbrit, is responsible for autonomously creating an optimized schedule, checking the weather, opening the observatory dome, aligning and focusing the telescope, executing the schedule by slewing to each target and imaging it in a number of spectral bands (e.g., B, V, R, I, wide-open) via a filter wheel, closing the dome at the end of observations, processing the data, and storing/disseminating the data for exploitation via the web. Sohbrit must handle various situations such as weather outages and focus changes due to temperature shifts and optical seeing variations without human interaction. Sohbrit can collect large volumes of data nightly due to its high level of automation. To store and disseminate these large quantities of data, we utilize a cloud-based big data architecture called Firebird, which exposes the data out to the community for use by developers and analysts. Sohbrit is the first COTS system we are aware of to automate the full process of multispectral geosynchronous characterization from scheduling all the way to processed, disseminated data. In this paper we will discuss design decisions, issues encountered and overcome during implementation, and show results produced by Sohbrit.

  19. Probing Cloud-Driven Variability on Two of the Youngest, Lowest-Mass Brown Dwarfs in the Solar Neighborhood

    NASA Astrophysics Data System (ADS)

    Schneider, Adam; Cushing, Michael; Kirkpatrick, J. Davy

    2016-08-01

    Young, late-type brown dwarfs share many properties with directly imaged giant extrasolar planets. They therefore provide unique testbeds for investigating the physical conditions present in this critical temperature and mass regime. WISEA 1147-2040 and 2MASS 1119-1137, two recently discovered late-type (~L7) brown dwarfs, have both been determined to be members of the ~10 Myr old TW Hya Association (Kellogg et al. 2016, Schneider et al. 2016). Each has an estimated mass of 5-6 MJup, making them two of the youngest and lowest-mass free floating objects yet found in the solar neighborhood. As such, these two planetary mass objects provide unparalleled laboratories for investigating giant planet-like atmospheres far from the contaminating starlight of a host sun. Condensate clouds play a critical role in shaping the emergent spectra of both brown dwarfs and gas giant planets, and can cause photometric variability via their non-uniform spatial distribution. We propose to photometrically monitor WISEA 1147-2040 and 2MASS 1119-1137 in order to search for the presence of cloud-driven variability to 1) investigate the potential trend of low surface gravity with high-amplitude variability in a previously unexplored mass regime and 2) explore the angular momentum evolution of isolated planetary mass objects.

  20. Uniform Atmospheric Retrievals of Ultracool Late-T and Early-Y dwarfs

    NASA Astrophysics Data System (ADS)

    Garland, Ryan; Irwin, Patrick

    2017-10-01

    A significant number of ultracool (<600K) extrasolar objects have been discovered in the past decade thanks to wide-field surveys such as WISE. These objects present a perfect testbed for examining the evolution of atmospheric structure as we transition from typically hot extrasolar temperatures to the temperatures found within our Solar System. By examining these types of objects with a uniform retrieval method, we hope to elucidate any trends and (dis)similarities found in atmospheric parameters, such as chemical abundances, temperature-pressure profile, and cloud structure, for a sample of 7 ultracool brown dwarfs as we transition from hotter (~700K) to colder objects (~450K). We perform atmospheric retrievals on two late-T and five early-Y dwarfs. We use the NEMESIS atmospheric retrieval code coupled to a Nested Sampling algorithm, along with a standard uniform model for all of our retrievals. The uniform model assumes the atmosphere is described by a gray radiative-convective temperature profile, (optionally) a gray cloud, and a number of relevant gases. We first verify our methods by comparing it to a benchmark retrieval for Gliese 570D, which is found to be consistent. Furthermore, we present the retrieved gaseous composition, temperature structure, spectroscopic mass and radius, cloud structure and the trends associated with decreasing temperature found in this small sample of objects.

  1. 77 FR 18793 - Spectrum Sharing Innovation Test-Bed Pilot Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-28

    .... 120322212-2212-01] Spectrum Sharing Innovation Test-Bed Pilot Program AGENCY: National Telecommunications... Innovation Test-Bed pilot program to assess whether devices employing Dynamic Spectrum Access techniques can... Spectrum Sharing Innovation Test-Bed (Test-Bed) pilot program to examine the feasibility of increased...

  2. The Cloud-Aerosol Transport System (CATS): a New Lidar for Aerosol and Cloud Profiling from the International Space Station

    NASA Technical Reports Server (NTRS)

    Welton, Ellsworth J.; McGill, Matthew J.; Yorks, John E.; Hlavka, Dennis L.; Hart, William D.; Palm, Stephen P.; Colarco, Peter R.

    2011-01-01

    Spaceborne lidar profiling of aerosol and cloud layers has been successfully implemented during a number of prior missions, including LITE, ICESat, and CALIPSO. Each successive mission has added increased capability and further expanded the role of these unique measurements in a wide variety of applications ranging from climate, to air quality, to special event monitoring (i.e., volcanic plumes). Many researchers have come to rely on the availability of profile data from CALIPSO, especially data coincident with measurements from other A-Train sensors. The CALIOP lidar on CALIPSO continues to operate well as it enters its fifth year of operations. However, active instruments have more limited lifetimes than their passive counterparts, and we are faced with a potential gap in lidar profiling from space if the CALIOP lidar fails before a new mission is operational. The ATLID lidar on EarthCARE is not expected to launch until 2015 or later, and the lidar component of NASA's proposed Aerosols, Clouds, and Ecosystems (ACE) mission would not be until after 2020. Here we present a new aerosol and cloud lidar that was recently selected to provide profiling data from the International Space Station (ISS) starting in 2013. The Cloud-Aerosol Transport System (CATS) is a three-wavelength (1064, 532, 355 nm) elastic backscatter lidar with HSRL capability at 532 nm. Depolarization measurements will be made at all wavelengths. The primary objective of CATS is to continue the CALIPSO aerosol and cloud profile data record, ideally with overlap between both missions and EarthCARE. In addition, the near-real-time data capability of the ISS will enable CATS to support operational applications such as air quality and special event monitoring. The HSRL channel will provide a demonstration of technology and a data testbed for direct extinction retrievals in support of ACE mission development. An overview of the instrument and mission will be provided, along with a summary of the science objectives and simulated data.

  3. Evaluation of Future Internet Technologies for Processing and Distribution of Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Becedas, J.; Perez, R.; Gonzalez, G.; Alvarez, J.; Garcia, F.; Maldonado, F.; Sucari, A.; Garcia, J.

    2015-04-01

    Satellite imagery data centres are designed to operate a defined number of satellites. For instance, difficulties appear when new satellites have to be incorporated into the system. This occurs because traditional infrastructures are neither flexible nor scalable. With the appearance of Future Internet technologies, new solutions can be provided to manage large and variable amounts of data on demand. These technologies optimize resources and facilitate the appearance of new applications and services in the traditional Earth Observation (EO) market. The use of Future Internet technologies for the EO sector was validated with the GEO-Cloud experiment, part of the Fed4FIRE FP7 European project. This work presents the final results of the project, in which a constellation of satellites records the whole Earth surface on a daily basis. The satellite imagery is downloaded into a distributed network of ground stations and ingested in a cloud infrastructure, where the data is processed, stored, archived and distributed to the end users. The processing and transfer times inside the cloud, the workload of the processors, automatic cataloguing and accessibility through the Internet are evaluated to determine whether Future Internet technologies present advantages over traditional methods. The applicability of these technologies to providing high added-value services is also evaluated. Finally, the advantages of using federated testbeds to carry out large-scale, industry-driven experiments are analysed, evaluating the feasibility of an experiment developed in the European infrastructure Fed4FIRE and its migration to a commercial cloud: SoftLayer, an IBM Company.

  4. The Goes-R Geostationary Lightning Mapper (GLM)

    NASA Technical Reports Server (NTRS)

    Goodman, Steven J.; Blakeslee, Richard J.; Koshak, William J.; Mach, Douglas

    2011-01-01

    The Geostationary Operational Environmental Satellite (GOES-R) is the next series to follow the existing GOES system currently operating over the Western Hemisphere. Superior spacecraft and instrument technology will support expanded detection of environmental phenomena, resulting in more timely and accurate forecasts and warnings. Advancements over current GOES capabilities include a new capability for total lightning detection (cloud and cloud-to-ground flashes) from the Geostationary Lightning Mapper (GLM), and improved storm diagnostic capability with the Advanced Baseline Imager. The GLM will map total lightning activity (in-cloud and cloud-to-ground lightning flashes) continuously day and night with near-uniform spatial resolution of 8 km and a product refresh rate of less than 20 sec over the Americas and adjacent oceanic regions. This will aid in forecasting severe storms and tornado activity, and convective weather impacts on aviation safety and efficiency. In parallel with the instrument development, a GOES-R Risk Reduction Team and Algorithm Working Group Lightning Applications Team have begun to develop the Level 2 algorithms, cal/val performance monitoring tools, and new applications. Proxy total lightning data from the NASA Lightning Imaging Sensor on the Tropical Rainfall Measuring Mission (TRMM) satellite and regional testbeds are being used to develop the pre-launch algorithms and applications, and also to improve our knowledge of thunderstorm initiation and evolution. In this paper we report on new nowcasting and storm warning applications being developed and evaluated at various NOAA testbeds.

  5. Cross stratum resources protection in fog-computing-based radio over fiber networks for 5G services

    NASA Astrophysics Data System (ADS)

    Guo, Shaoyong; Shao, Sujie; Wang, Yao; Yang, Hui

    2017-09-01

    In order to meet the requirements of the internet of things (IoT) and 5G, the cloud radio access network is a paradigm that converges all base station computational resources into a cloud baseband unit (BBU) pool, while the distributed radio frequency signals are collected by remote radio heads (RRH). A precondition for centralized processing in the BBU pool is an interconnecting fronthaul network with high capacity and low delay. However, the interactions between RRHs and BBUs, and resource scheduling among BBUs in the cloud, have become more complex and frequent. A cloud radio over fiber network was already proposed in our previous work. In order to overcome the complexity and latency, in this paper we first present a novel cross stratum resources protection (CSRP) architecture in fog-computing-based radio over fiber networks (F-RoFN) for 5G services. Additionally, a cross stratum protection (CSP) scheme considering network survivability is introduced in the proposed architecture. The CSRP architecture with the CSP scheme can effectively pull remote processing resources local to implement cooperative radio resource management, enhance the responsiveness and resilience to dynamic end-to-end 5G service demands, and globally optimize optical network, wireless and fog resources. The feasibility and efficiency of the proposed architecture with the CSP scheme are verified on our software defined networking testbed in terms of service latency, transmission success rate, resource occupation rate and blocking probability.

  6. Cloud Computing for DoD

    DTIC Science & Technology

    2012-05-01

    NASA Nebula Platform • Cloud computing pilot program at NASA Ames • Integrates open-source components into a seamless, self-… • Mission support • Education and public outreach (NASA Nebula, 2010). NSF Supported Cloud Research • Support for Cloud Computing in… References: Mell, P. & Grance, T. (2011). The NIST Definition of Cloud Computing. NIST Special Publication 800-145. NASA Nebula (2010). Retrieved from …

  7. Distributed computing testbed for a remote experimental environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butner, D.N.; Casper, T.A.; Howard, B.C.

    1995-09-18

    Collaboration is increasing as physics research becomes concentrated on a few large, expensive facilities, particularly in magnetic fusion energy research, with national and international participation. These facilities are designed for steady state operation and interactive, real-time experimentation. We are developing tools to provide for the establishment of geographically distant centers for interactive operations; such centers would allow scientists to participate in experiments from their home institutions. A testbed is being developed for a Remote Experimental Environment (REE), a "Collaboratory." The testbed will be used to evaluate the ability of a remotely located group of scientists to conduct research on the DIII-D Tokamak at General Atomics. The REE will serve as a testing environment for advanced control and collaboration concepts applicable to future experiments. Process-to-process communications over high speed wide area networks provide real-time synchronization and exchange of data among multiple computer networks, while the ability to conduct research is enhanced by adding audio/video communication capabilities. The Open Software Foundation's Distributed Computing Environment is being used to test concepts in distributed control, security, naming, remote procedure calls and distributed file access using the Distributed File Services. We are exploring the technology and sociology of remotely participating in the operation of a large scale experimental facility.

  8. An Integrated Testbed for Cooperative Perception with Heterogeneous Mobile and Static Sensors

    PubMed Central

    Jiménez-González, Adrián; Martínez-De Dios, José Ramiro; Ollero, Aníbal

    2011-01-01

    Cooperation among devices with different sensing, computing and communication capabilities provides interesting possibilities in a growing number of problems and applications including domotics (domestic robotics), environmental monitoring or intelligent cities, among others. Despite the increasing interest in academic and industrial communities, experimental tools for evaluation and comparison of cooperative algorithms for such heterogeneous technologies are still very scarce. This paper presents a remote testbed with mobile robots and Wireless Sensor Networks (WSN) equipped with a set of low-cost off-the-shelf sensors, commonly used in cooperative perception research and applications, that present high degree of heterogeneity in their technology, sensed magnitudes, features, output bandwidth, interfaces and power consumption, among others. Its open and modular architecture allows tight integration and interoperability between mobile robots and WSN through a bidirectional protocol that enables full interaction. Moreover, the integration of standard tools and interfaces increases usability, allowing an easy extension to new hardware and software components and the reuse of code. Different levels of decentralization are considered, supporting from totally distributed to centralized approaches. Developed for the EU-funded Cooperating Objects Network of Excellence (CONET) and currently available at the School of Engineering of Seville (Spain), the testbed provides full remote control through the Internet. Numerous experiments have been performed, some of which are described in the paper. PMID:22247679

  9. Medlay: A Reconfigurable Micro-Power Management to Investigate Self-Powered Systems.

    PubMed

    Kokert, Jan; Beckedahl, Tobias; Reindl, Leonhard M

    2018-01-17

    In self-powered microsystems, power management is essential to extract, transfer and regulate power from energy-harvesting sources to loads such as sensors. The challenge is to consider all of the different structures and components available and build the optimal power management on a microscale. The purpose of this paper is to streamline the design process by creating a novel reconfigurable testbed called Medlay. First, we propose a uniform interface for management functions, e.g., power conversion, energy storage and power routing. This interface results in a clear layout because power and status pins are strictly separated, and inputs and outputs have fixed positions. Medlay is a ready-to-use, open-hardware platform based on this interface. It consists of a base board and small modules incorporating, e.g., dc-dc converters, power switches and supercapacitors. Measurements confirm that Medlay behaves like a system on a single circuit board, as parasitic effects of the interconnections are negligible. A layout graph grammar shows that the testbed supports over 250,000 distinct setup combinations. Lastly, we underline its applicability by recreating three state-of-the-art systems on the testbed. In conclusion, Medlay facilitates building and testing power management in a very compact, clear and extensible fashion.

  10. HTTP as a Data Access Protocol: Trials with XrootD in CMS’s AAA Project

    NASA Astrophysics Data System (ADS)

    Balcas, J.; Bockelman, B. P.; Kcira, D.; Newman, H.; Vlimant, J.; Hendricks, T. W.; CMS Collaboration

    2017-10-01

    The main goal of the project is to demonstrate the ability to use HTTP data federations in a manner analogous to the existing AAA infrastructure of the CMS experiment. An initial testbed at Caltech has been built and changes in the CMS software (CMSSW) are being implemented in order to improve HTTP support. The testbed consists of a set of machines at the Caltech Tier2 that improve the support infrastructure for data federations at CMS. As a first step, we are building systems that produce and ingest network data transfers of up to 80 Gbps. In collaboration with AAA, HTTP support is enabled at the US redirector and the Caltech testbed. A plugin for CMSSW is being developed for HTTP access based on the DaviX software. It will replace the present fork/exec of curl for HTTP access. In addition, extensions to the XRootD HTTP implementation are being developed to add functionality to it, such as client-based monitoring identifiers. In the future, patches will be developed to better integrate HTTP-over-XRootD with the Open Science Grid (OSG) distribution. First results of the transfer tests using HTTP are presented in this paper together with details about the initial setup.
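
    The capability that makes HTTP viable as a random-access data protocol here is the byte-range request; a minimal Python sketch is shown below (hypothetical URL, plain requests instead of DaviX).

        # Read the first kilobyte of a remote file over HTTP, the same
        # primitive a DaviX-based CMSSW plugin relies on for partial reads.
        import requests

        url = "https://redirector.example.org/store/data/file.root"  # hypothetical
        resp = requests.get(url, headers={"Range": "bytes=0-1023"}, timeout=30)

        # 206 Partial Content means the server honoured the range request.
        assert resp.status_code == 206
        first_kb = resp.content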

  11. An integrated testbed for cooperative perception with heterogeneous mobile and static sensors.

    PubMed

    Jiménez-González, Adrián; Martínez-De Dios, José Ramiro; Ollero, Aníbal

    2011-01-01

    Cooperation among devices with different sensing, computing and communication capabilities provides interesting possibilities in a growing number of problems and applications including domotics (domestic robotics), environmental monitoring or intelligent cities, among others. Despite the increasing interest in academic and industrial communities, experimental tools for evaluation and comparison of cooperative algorithms for such heterogeneous technologies are still very scarce. This paper presents a remote testbed with mobile robots and Wireless Sensor Networks (WSN) equipped with a set of low-cost off-the-shelf sensors, commonly used in cooperative perception research and applications, that present high degree of heterogeneity in their technology, sensed magnitudes, features, output bandwidth, interfaces and power consumption, among others. Its open and modular architecture allows tight integration and interoperability between mobile robots and WSN through a bidirectional protocol that enables full interaction. Moreover, the integration of standard tools and interfaces increases usability, allowing an easy extension to new hardware and software components and the reuse of code. Different levels of decentralization are considered, supporting from totally distributed to centralized approaches. Developed for the EU-funded Cooperating Objects Network of Excellence (CONET) and currently available at the School of Engineering of Seville (Spain), the testbed provides full remote control through the Internet. Numerous experiments have been performed, some of which are described in the paper.

  12. Medlay: A Reconfigurable Micro-Power Management to Investigate Self-Powered Systems

    PubMed Central

    Beckedahl, Tobias

    2018-01-01

    In self-powered microsystems, power management is essential to extract, transfer and regulate power from energy-harvesting sources to loads such as sensors. The challenge is to consider all of the different structures and components available and build the optimal power management on a microscale. The purpose of this paper is to streamline the design process by creating a novel reconfigurable testbed called Medlay. First, we propose a uniform interface for management functions, e.g., power conversion, energy storage and power routing. This interface results in a clear layout because power and status pins are strictly separated, and inputs and outputs have fixed positions. Medlay is a ready-to-use, open-hardware platform based on this interface. It consists of a base board and small modules incorporating, e.g., dc-dc converters, power switches and supercapacitors. Measurements confirm that Medlay behaves like a system on a single circuit board, as parasitic effects of the interconnections are negligible. A layout graph grammar shows that the testbed supports over 250,000 distinct setup combinations. Lastly, we underline its applicability by recreating three state-of-the-art systems on the testbed. In conclusion, Medlay facilitates building and testing power management in a very compact, clear and extensible fashion. PMID:29342110

  13. Sensor Web Interoperability Testbed Results Incorporating Earth Observation Satellites

    NASA Technical Reports Server (NTRS)

    Frye, Stuart; Mandl, Daniel J.; Alameh, Nadine; Bambacus, Myra; Cappelaere, Pat; Falke, Stefan; Derezinski, Linda; Zhao, Piesheng

    2007-01-01

    This paper describes an Earth Observation Sensor Web scenario based on the Open Geospatial Consortium's Sensor Web Enablement and Web Services interoperability standards. The scenario demonstrates the application of standards in describing, discovering, accessing and tasking satellites and ground-based sensor installations in a sequence of analysis activities that deliver information required by decision makers in response to national, regional or local emergencies.

  14. 76 FR 62373 - Notice of Public Meeting-Cloud Computing Forum & Workshop IV

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-07

    ...--Cloud Computing Forum & Workshop IV AGENCY: National Institute of Standards and Technology (NIST), Commerce. ACTION: Notice. SUMMARY: NIST announces the Cloud Computing Forum & Workshop IV to be held on... to help develop open standards in interoperability, portability and security in cloud computing. This...

  15. Open Reading Frame Phylogenetic Analysis on the Cloud

    PubMed Central

    2013-01-01

    Phylogenetic analysis has become essential in researching the evolutionary relationships between viruses. These relationships are depicted on phylogenetic trees, in which viruses are grouped based on sequence similarity. Viral evolutionary relationships are identified from open reading frames rather than from complete sequences. Recently, cloud computing has become popular for developing internet-based bioinformatics tools. Biocloud is an efficient, scalable, and robust bioinformatics computing service. In this paper, we propose a cloud-based open reading frame phylogenetic analysis service. The proposed service integrates the Hadoop framework, virtualization technology, and phylogenetic analysis methods to provide a high-availability, large-scale bioservice. In a case study, we analyze the phylogenetic relationships among Norovirus. Evolutionary relationships are elucidated by aligning different open reading frame sequences. The proposed platform correctly identifies the evolutionary relationships between members of Norovirus. PMID:23671843

  16. CSNS computing environment Based on OpenStack

    NASA Astrophysics Data System (ADS)

    Li, Yakang; Qi, Fazhi; Chen, Gang; Wang, Yanming; Hong, Jianshu

    2017-10-01

    Cloud computing allows for more flexible configuration of IT resources and optimized hardware utilization, and can provide computing services according to real need. We are applying this computing mode to the China Spallation Neutron Source (CSNS) computing environment. Firstly, the CSNS experiment and its computing scenarios and requirements are introduced in this paper. Secondly, the design and practice of the cloud computing platform based on OpenStack are demonstrated, covering the cloud computing system framework, network, storage and so on. Thirdly, some improvements we made to OpenStack are discussed. Finally, the current status of the CSNS cloud computing environment is summarized at the end of the paper.
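
    As a hedged illustration of the computing mode described (not the actual CSNS configuration), the sketch below launches a worker VM with the openstacksdk Python client; the cloud, image and flavor names are hypothetical.

        # Launch a VM on an OpenStack cloud; credentials are read from a
        # clouds.yaml entry named "csns" (hypothetical).
        import openstack

        conn = openstack.connect(cloud="csns")

        image = conn.compute.find_image("CentOS-7")    # hypothetical image
        flavor = conn.compute.find_flavor("m1.large")  # hypothetical flavor

        server = conn.compute.create_server(
            name="csns-worker-01",
            image_id=image.id,
            flavor_id=flavor.id,
        )
        conn.compute.wait_for_server(server)  # block until the VM is ACTIVE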

  17. ASDC Advances in the Utilization of Microservices and Hybrid Cloud Environments

    NASA Astrophysics Data System (ADS)

    Baskin, W. E.; Herbert, A.; Mazaika, A.; Walter, J.

    2017-12-01

    The Atmospheric Science Data Center (ASDC) is transitioning many of its software tools and applications to standalone microservices deployable in a hybrid cloud, offering benefits such as scalability and efficient environment management. This presentation features several projects the ASDC staff have implemented leveraging the OpenShift Container Application Platform and the OpenStack hybrid cloud environment, focusing on key tools and techniques applied to: Earth Science data processing; spatial-temporal metadata generation, validation, repair, and curation; and archived data discovery, visualization, and access.

  18. Hybrid cloud: bridging of private and public cloud computing

    NASA Astrophysics Data System (ADS)

    Aryotejo, Guruh; Kristiyanto, Daniel Y.; Mufadhol

    2018-05-01

    Cloud computing has quickly emerged as a promising paradigm in recent years, especially for the business sector. In addition, through cloud service providers, cloud computing is widely used by Information Technology (IT) based startup companies to grow their business. However, the level of awareness of data security issues in most businesses is low, since some Cloud Service Providers (CSP) could decrypt their data. The Hybrid Cloud Deployment Model (HCDM) is open source, a characteristic that makes it one of the more secure cloud computing models, so HCDM may solve data security issues. The objective of this study is to design, deploy and evaluate an HCDM as Infrastructure as a Service (IaaS). In the implementation process, the Metal as a Service (MAAS) engine was used as a base to build an actual server and node, followed by installation of the vsftpd application, which serves as the FTP server. For comparison with HCDM, a public cloud was adopted through a public cloud interface. As a result, the design and deployment of HCDM were conducted successfully; besides having good security, HCDM was able to transfer data significantly faster than the public cloud. To the best of our knowledge, the Hybrid Cloud Deployment Model is one of the more secure cloud computing models due to its open-source character. Furthermore, this study will serve as a base for future studies of the Hybrid Cloud Deployment Model, which may be relevant for solving the major security issues of IT-based startup companies, especially in Indonesia.

  19. Simulation of a severe convective storm using a numerical model with explicitly incorporated aerosols

    NASA Astrophysics Data System (ADS)

    Lompar, Miloš; Ćurić, Mladjen; Romanic, Djordje

    2017-09-01

    Despite the important role aerosols play in all stages of the cloud lifecycle, their representation in numerical weather prediction models is often rather crude. This paper investigates the effects that explicit versus implicit inclusion of aerosols in a microphysics parameterization scheme of the Weather Research and Forecasting (WRF) - Advanced Research WRF (WRF-ARW) model has on cloud dynamics and microphysics. The testbed selected for this study is a severe mesoscale convective system with supercells that struck the west and central parts of Serbia in the afternoon of July 21, 2014. Numerical products of two model runs, one with aerosols explicitly included (WRF-AE) and another with aerosols implicitly assumed (WRF-AI), are compared against precipitation measurements from a surface network of rain gauges, as well as against radar and satellite observations. The WRF-AE model accurately captured the transport of dust from north Africa over the Mediterranean to the Balkan region. On smaller scales, both models displaced the locations of clouds situated above west and central Serbia towards the southeast and under-predicted the maximum values of composite radar reflectivity. As in the satellite images, WRF-AE shows the mesoscale convective system as a merged cluster of cumulonimbus clouds. Both models over-predicted the precipitation amounts; WRF-AE over-predictions are particularly pronounced in zones of light rain, while WRF-AI gave larger outliers. Unlike WRF-AI, the WRF-AE approach enables modelling of the time evolution and influx of aerosols into the cloud, which could be of practical importance in weather forecasting and weather modification. Several likely causes for the discrepancies between models and observations are discussed, and prospects for further research in this field are outlined.

  20. The Integration of CloudStack and OCCI/OpenNebula with DIRAC

    NASA Astrophysics Data System (ADS)

    Méndez Muñoz, Víctor; Fernández Albor, Víctor; Graciani Diaz, Ricardo; Casajús Ramo, Adriàn; Fernández Pena, Tomás; Merino Arévalo, Gonzalo; José Saborido Silva, Juan

    2012-12-01

    The increasing availability of cloud resources is arising as a realistic alternative to the Grid as a paradigm for enabling scientific communities to access large distributed computing resources. The DIRAC framework for distributed computing is an easy way to access resources from both systems efficiently. This paper explains the integration of DIRAC with two open-source cloud managers: OpenNebula (taking advantage of the OCCI standard) and CloudStack. These are computing tools to manage the complexity and heterogeneity of distributed data center infrastructures, allowing the creation of virtual clusters on demand, including public, private and hybrid clouds. This approach has required the development of an extension to the previous DIRAC Virtual Machine engine, which was developed for Amazon EC2, allowing the connection with these new cloud managers. In the OpenNebula case, the development has been based on the CernVM Virtual Software Appliance with appropriate contextualization, while in the case of CloudStack, the infrastructure has been kept more general, which permits other Virtual Machine sources and operating systems to be used. In both cases, the CernVM File System has been used to facilitate software distribution to the computing nodes. With the resulting infrastructure, the cloud resources are transparent to the users through a friendly interface, like the DIRAC Web Portal. The main purpose of this integration is to get a system that can manage cloud and grid resources at the same time. This particular feature pushes DIRAC to a new conceptual denomination as interware, integrating different middleware. Users from different communities do not need to care about the installation of the standard software that is available at the nodes, nor about the operating system of the host machine, which is transparent to the user. This paper presents an analysis of the overhead of the virtual layer, with tests comparing the proposed approach with the existing Grid solution.

  1. Marine CCN Activation: A Battle Between Primary and Secondary Sources

    NASA Astrophysics Data System (ADS)

    Fossum, K. N.; Ovadnevaite, J.; Ceburnis, D.; Preissler, J.; O'Dowd, C. D. D.

    2017-12-01

    Low-altitude marine clouds are cooling components of the Earth's radiative budget, and direct measurement of the properties of the cloud-forming particles, called cloud condensation nuclei (CCN), helps modellers reconstruct aerosol-to-cloud-droplet processes, improving climate predictions. In this study, CCN are directly measured (with a CCN counter (CCNC) commercially available from Droplet Measurement Technologies, Inc.), resolving activation efficiency at varying supersaturation conditions. Previous studies show that sub-micron sea salt particulates activate competitively, reducing the cloud peak supersaturation and inhibiting the activation of sulphate particulates into cloud droplets, making chemical composition an important factor in determining cloud droplet number concentration (CDNC). This effect, and the sea salt numbers needed to induce it, have not previously been studied long-term in the natural environment. As part of this work, data were analysed from a two-month marine research ship campaign during the Antarctic austral summer of 2015. Ambient aerosol in the Scotia Sea region was sampled continuously, and through the use of multiple in-situ aerosol instruments, this study shows that CCN numbers in both open ocean and ice-pack influenced air masses are largely proportionate to secondary aerosol. However, open ocean air masses show a significant primary aerosol influence which changes the aerosol characteristics. Higher sea salt mass concentrations in the open ocean lead to better CCN activation efficiencies. Coupled with high wind speeds and sea surface turbulence, open ocean air masses show a suppression of CDNC compared with the theoretical values expected from the sub-cloud aerosol number concentration. This is not seen in the ice-pack air masses. Work is ongoing on a long-term North Atlantic marine aerosol data set, but it would appear that chemical composition plays a large role in aerosol-to-cloud-droplet processes, and can initially restrict CDNC when sea salt is abundant and updraft velocities are relatively low.

  2. Trusted computing strengthens cloud authentication.

    PubMed

    Ghazizadeh, Eghbal; Zamani, Mazdak; Ab Manan, Jamalul-lail; Alizadeh, Mojtaba

    2014-01-01

    Cloud computing is a new generation of technology designed to provide commercial necessities, solve IT management issues, and run appropriate applications. Another entry on the list of cloud functions that has been handled internally is Identity Access Management (IAM). Companies encounter IAM security challenges as they adopt more technologies. Trust, multi-tenancy, and trusted computing based on a Trusted Platform Module (TPM) are promising technologies for solving the trust and security concerns in the cloud identity environment. Single sign-on (SSO) and OpenID have been released to solve security and privacy problems for cloud identity. This paper proposes the use of trusted computing, Federated Identity Management, and OpenID Web SSO to solve identity theft in the cloud. In addition, the proposed model has been simulated in a .NET environment. Security analysis, simulation, and the BLP confidentiality model are the three ways used to evaluate and analyze the proposed model.

  3. Trusted Computing Strengthens Cloud Authentication

    PubMed Central

    2014-01-01

    Cloud computing is a new generation of technology designed to provide for commercial necessities, solve IT management issues, and run the appropriate applications. Another entry on the list of cloud functions, one traditionally handled in-house, is Identity and Access Management (IAM). Companies encounter IAM security challenges as they adopt more cloud technologies. Trusted multi-tenancy and trusted computing based on a Trusted Platform Module (TPM) are promising technologies for resolving the trust and security concerns of the cloud identity environment. Single sign-on (SSO) and OpenID have been released to solve security and privacy problems for cloud identity. This paper proposes the use of trusted computing, Federated Identity Management, and OpenID Web SSO to counter identity theft in the cloud. The proposed model has been simulated in a .NET environment and is evaluated in three ways: security analysis, simulation, and the BLP confidentiality model. PMID:24701149

  4. Astronomy In The Cloud: Using Mapreduce For Image Coaddition

    NASA Astrophysics Data System (ADS)

    Wiley, Keith; Connolly, A.; Gardner, J.; Krughoff, S.; Balazinska, M.; Howe, B.; Kwon, Y.; Bu, Y.

    2011-01-01

    In the coming decade, astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. The study of these sources will involve computational challenges such as anomaly detection, classification, and moving-object tracking. Since such studies require the highest quality data, methods such as image coaddition, i.e., registration, stacking, and mosaicing, will be critical to scientific investigation. With a requirement that these images be analyzed on a nightly basis to identify moving sources, e.g., asteroids, or transient objects, e.g., supernovae, these data streams present many computational challenges. Given the quantity of data involved, the computational load of these problems can only be addressed by distributing the workload over a large number of nodes. However, the high data throughput demanded by these applications may present scalability challenges for certain storage architectures. One scalable data-processing method that has emerged in recent years is MapReduce, and in this paper we focus on its popular open-source implementation called Hadoop. In the Hadoop framework, the data is partitioned among storage attached directly to worker nodes, and the processing workload is scheduled in parallel on the nodes that contain the required input data. A further motivation for using Hadoop is that it allows us to exploit cloud computing resources, i.e., platforms where Hadoop is offered as a service. We report on our experience implementing a scalable image-processing pipeline for the SDSS imaging database using Hadoop. This multi-terabyte imaging dataset provides a good testbed for algorithm development since its scope and structure approximate future surveys. First, we describe MapReduce and how we adapted image coaddition to the MapReduce framework. Then we describe a number of optimizations to our basic approach and report experimental results comparing their performance. This work is funded by the NSF and by NASA.
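
    Neither record includes the pipeline code. As a rough sketch of how co-addition maps onto the MapReduce model, the following plain-Python stand-in (not the authors' Hadoop implementation) emits per-pixel contributions in a map phase and sums and normalizes them in a reduce phase; astrometric registration is assumed to have happened upstream.

      import numpy as np
      from collections import defaultdict

      def map_phase(image, weight):
          """Map: emit (pixel index, (weighted value, weight)) for one registered image."""
          for idx, value in np.ndenumerate(image):
              if not np.isnan(value):              # skip masked pixels
                  yield idx, (value * weight, weight)

      def reduce_phase(pairs, shape):
          """Reduce: sum contributions per pixel and normalize by total weight."""
          sums = defaultdict(lambda: [0.0, 0.0])
          for idx, (wv, w) in pairs:
              sums[idx][0] += wv
              sums[idx][1] += w
          coadd = np.full(shape, np.nan)
          for idx, (wv, w) in sums.items():
              coadd[idx] = wv / w
          return coadd

      images = [np.random.rand(4, 4) for _ in range(3)]   # toy registered frames
      pairs = (p for im in images for p in map_phase(im, weight=1.0))
      stack = reduce_phase(pairs, (4, 4))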

  5. Astronomy in the Cloud: Using MapReduce for Image Co-Addition

    NASA Astrophysics Data System (ADS)

    Wiley, K.; Connolly, A.; Gardner, J.; Krughoff, S.; Balazinska, M.; Howe, B.; Kwon, Y.; Bu, Y.

    2011-03-01

    In the coming decade, astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. The study of these sources will involve computational challenges such as anomaly detection and classification and moving-object tracking. Since such studies benefit from the highest-quality data, methods such as image co-addition, i.e., astrometric registration followed by per-pixel summation, will be a critical preprocessing step prior to scientific investigation. With a requirement that these images be analyzed on a nightly basis to identify moving sources such as potentially hazardous asteroids or transient objects such as supernovae, these data streams present many computational challenges. Given the quantity of data involved, the computational load of these problems can only be addressed by distributing the workload over a large number of nodes. However, the high data throughput demanded by these applications may present scalability challenges for certain storage architectures. One scalable data-processing method that has emerged in recent years is MapReduce, and in this article we focus on its popular open-source implementation called Hadoop. In the Hadoop framework, the data are partitioned among storage attached directly to worker nodes, and the processing workload is scheduled in parallel on the nodes that contain the required input data. A further motivation for using Hadoop is that it allows us to exploit cloud computing resources: i.e., platforms where Hadoop is offered as a service. We report on our experience of implementing a scalable image-processing pipeline for the SDSS imaging database using Hadoop. This multiterabyte imaging data set provides a good testbed for algorithm development, since its scope and structure approximate future surveys. First, we describe MapReduce and how we adapted image co-addition to the MapReduce framework. Then we describe a number of optimizations to our basic approach and report experimental results comparing their performance.

  6. An open science cloud for scientific research

    NASA Astrophysics Data System (ADS)

    Jones, Bob

    2016-04-01

    The Helix Nebula initiative was presented at EGU 2013 (http://meetingorganizer.copernicus.org/EGU2013/EGU2013-1510-2.pdf) and has continued to expand with more research organisations, providers and services. The hybrid cloud model deployed by Helix Nebula has grown to become a viable approach for provisioning ICT services for research communities from both public and commercial service providers (http://dx.doi.org/10.5281/zenodo.16001). The relevance of this approach for all communities facing societal challenges is explained in a recent EIROforum publication (http://dx.doi.org/10.5281/zenodo.34264). This presentation will describe how this model brings together a range of stakeholders to implement a common platform for data-intensive services that builds upon existing publicly funded e-infrastructures and commercial cloud services to promote open science. It explores the essential characteristics a European Open Science Cloud must have if it is to address the big-data needs of the latest generation of Research Infrastructures. The high-level architecture and key services, as well as the role of standards, are described, together with a governance and financial model, and the roles of the stakeholders, including commercial service providers and downstream business sectors, that will ensure a European Open Science Cloud can innovate, grow and be sustained beyond the current project cycles.

  7. Dynamic Extension of a Virtualized Cluster by using Cloud Resources

    NASA Astrophysics Data System (ADS)

    Oberst, Oliver; Hauth, Thomas; Kernert, David; Riedel, Stephan; Quast, Günter

    2012-12-01

    The specific requirements concerning the software environment within the HEP community constrain the choice of resource providers for the outsourcing of computing infrastructure. The use of virtualization in HPC clusters and in the context of cloud resources is therefore a subject of recent developments in scientific computing. The dynamic virtualization of worker nodes in common batch systems provided by ViBatch serves each user with a dynamically virtualized subset of worker nodes on a local cluster. Now it can be transparently extended by the use of common open source cloud interfaces like OpenNebula or Eucalyptus, launching a subset of the virtual worker nodes within the cloud. This paper demonstrates how a dynamically virtualized computing cluster is combined with cloud resources by attaching remotely started virtual worker nodes to the local batch system.

  8. Algorithms and software for solving finite element equations on serial and parallel architectures

    NASA Technical Reports Server (NTRS)

    Chu, Eleanor; George, Alan

    1988-01-01

    The primary objective was to compare the performance of state-of-the-art techniques for solving sparse systems with those currently available in the Computational Structural Mechanics (CSM) Testbed. One of the first tasks was to become familiar with the structure of the testbed and to install some or all of the SPARSPAK package in it. A brief overview of the CSM Testbed software and its usage is presented. An overview of the sparse matrix methods currently employed in the CSM Testbed is given. An interface, designed and implemented as a research tool for installing and appraising new matrix processors in the CSM Testbed, is described. The results of numerical experiments performed in solving a set of testbed demonstration problems using the processor SPK and other experimental processors are presented.

  9. Optical instruments synergy in determination of optical depth of thin clouds

    NASA Astrophysics Data System (ADS)

    Viviana Vlăduţescu, Daniela; Schwartz, Stephen E.; Huang, Dong

    2018-04-01

    Optically thin clouds have a strong radiative effect and need to be represented accurately in climate models. Cloud optical depth of thin clouds was retrieved using high resolution digital photography, lidar, and a radiative transfer model. The Doppler Lidar was operated at 1.5 μm, minimizing return from Rayleigh scattering, emphasizing return from aerosols and clouds. This approach examined cloud structure on scales 3 to 5 orders of magnitude finer than satellite products, opening new avenues for examination of cloud structure and evolution.

  10. Optical Instruments Synergy in Determination of Optical Depth of Thin Clouds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vladutescu, Daniela V.; Schwartz, Stephen E.

    Optically thin clouds have a strong radiative effect and need to be represented accurately in climate models. Cloud optical depth of thin clouds was retrieved using high resolution digital photography, lidar, and a radiative transfer model. The Doppler Lidar was operated at 1.5 μm, minimizing return from Rayleigh scattering, emphasizing return from aerosols and clouds. This approach examined cloud structure on scales 3 to 5 orders of magnitude finer than satellite products, opening new avenues for examination of cloud structure and evolution.

  11. General Mission Analysis Tool (GMAT)

    NASA Technical Reports Server (NTRS)

    Hughes, Steven P.

    2007-01-01

    The General Mission Analysis Tool (GMAT) is a space trajectory optimization and mission analysis system developed by NASA and private industry in the spirit of the NASA Mission. GMAT contains new technology and is a testbed for future technology development. The goal of the GMAT project is to develop new space trajectory optimization and mission design technology by working inclusively with ordinary people, universities, businesses, and other government organizations, and to share that technology in an open and unhindered way. GMAT is a free and open source software system licensed under the NASA Open Source Agreement: free for anyone to use in development of new mission concepts or to improve current missions, freely available in source code form for enhancement or further technology development.

  12. Experimental demonstration of an OpenFlow based software-defined optical network employing packet, fixed and flexible DWDM grid technologies on an international multi-domain testbed.

    PubMed

    Channegowda, M; Nejabati, R; Rashidi Fard, M; Peng, S; Amaya, N; Zervas, G; Simeonidou, D; Vilalta, R; Casellas, R; Martínez, R; Muñoz, R; Liu, L; Tsuritani, T; Morita, I; Autenrieth, A; Elbers, J P; Kostecki, P; Kaczmarek, P

    2013-03-11

    Software-defined networking (SDN) and flexible-grid optical transport technology are two key technologies that allow network operators to customize their infrastructure based on application requirements, thereby minimizing the extra capital and operational costs required for hosting new applications. In this paper, for the first time, we report on the design, implementation and demonstration of a novel OpenFlow-based SDN unified control plane allowing seamless operation across heterogeneous state-of-the-art optical and packet transport domains. We verify and experimentally evaluate OpenFlow protocol extensions for flexible DWDM grid transport technology, along with their integration with fixed DWDM grid and layer-2 packet switching.
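
    The paper's unified control plane is custom, but the flow-programming primitive it extends is standard OpenFlow. As a generic, minimal illustration (using the open-source Ryu controller, not the authors' code), the app below installs a table-miss rule that forwards unmatched packets to the controller, the usual first step before pushing technology-specific flow entries.

      from ryu.base import app_manager
      from ryu.controller import ofp_event
      from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
      from ryu.ofproto import ofproto_v1_3

      class TableMissInstaller(app_manager.RyuApp):
          OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

          @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
          def on_switch_features(self, ev):
              dp = ev.msg.datapath
              ofp, parser = dp.ofproto, dp.ofproto_parser
              match = parser.OFPMatch()                    # wildcard: match everything
              actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                                ofp.OFPCML_NO_BUFFER)]
              inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
              # lowest priority, so any real flow entry overrides it
              dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                            match=match, instructions=inst))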

  13. Services for domain specific developments in the Cloud

    NASA Astrophysics Data System (ADS)

    Schwichtenberg, Horst; Gemuend, André

    2015-04-01

    We will discuss and demonstrate the possibilities of new Cloud services in which the complete development cycle of code, from programming to testing, takes place in the Cloud. This can also be combined with dedicated research-domain-specific services that hide the burden of accessing the available infrastructures. As an example, we will show a service intended to complement the services of the VERCE project's infrastructure: a service that utilizes Cloud resources to offer simplified execution of data pre- and post-processing scripts. It offers users access to the ObsPy seismological toolbox for processing data with the Python programming language, executed on virtual Cloud resources in a secured sandbox. The solution encompasses a frontend with a modern graphical user interface, a messaging infrastructure, and Python worker nodes for background processing. All components are deployable in the Cloud and have been tested on different environments based on OpenStack and OpenNebula. Deployments on commercial, public Clouds will be tested in the future.
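
    As a sketch of the kind of script such a sandboxed worker could execute (the output filename is a placeholder), the snippet below runs a standard ObsPy pre-processing chain; read() with no argument loads the demo waveform bundled with ObsPy, so it is self-contained.

      from obspy import read

      st = read()                                        # demo seismogram shipped with ObsPy
      st.detrend("demean")                               # remove the mean offset
      st.filter("bandpass", freqmin=1.0, freqmax=10.0)   # keep the 1-10 Hz band
      st.decimate(factor=2)                              # downsample to shrink the data
      st.write("processed.mseed", format="MSEED")        # result returned to the user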

  14. Open-cell cloud formation over the Bahamas

    NASA Technical Reports Server (NTRS)

    2002-01-01

    What atmospheric scientists refer to as open cell cloud formation is a regular occurrence on the back side of a low-pressure system or cyclone in the mid-latitudes. In the Northern Hemisphere, a low-pressure system will draw in surrounding air and spin it counterclockwise. That means that on the back side of the low-pressure center, cold air will be drawn in from the north, and on the front side, warm air will be drawn up from latitudes closer to the equator. This movement of an air mass is called advection, and when cold air advection occurs over warmer waters, open cell cloud formations often result. This MODIS image shows open cell cloud formation over the Atlantic Ocean off the southeast coast of the United States on February 19, 2002. This particular formation is the result of a low-pressure system sitting out in the North Atlantic Ocean a few hundred miles east of Massachusetts. (The low can be seen as the comma-shaped figure in the GOES-8 Infrared image from February 19, 2002.) Cold air is being drawn down from the north on the western side of the low and the open cell cumulus clouds begin to form as the cold air passes over the warmer Caribbean waters. For another look at the scene, check out the MODIS Direct Broadcast Image from the University of Wisconsin. Image courtesy Jacques Descloitres, MODIS Land Rapid Response Team at NASA GSFC

  15. Guaranteeing Isochronous Control of Networked Motion Control Systems Using Phase Offset Adjustment

    PubMed Central

    Kim, Ikhwan; Kim, Taehyoun

    2015-01-01

    Guaranteeing isochronous transfer of control commands is an essential function for networked motion control systems. The adoption of real-time Ethernet (RTE) technologies can be profitable in guaranteeing deterministic transfer of control messages. However, unpredictable behavior of software in the motion controller often results in unexpectedly large deviations in control message transmission intervals, and thus leads to imprecise motion. This paper presents a simple and efficient heuristic to guarantee end-to-end isochronous control with very small jitter. The key idea of our approach is to adjust the phase offset of the control message transmission time in the motion controller by investigating the behavior of the motion control task. In realizing the idea, we performed a pre-runtime analysis to determine a safe and reliable phase offset and applied it to the runtime code of the motion controller by customizing an open-source-based integrated development environment (IDE). We also constructed an EtherCAT-based motion control system testbed and performed extensive experiments on it to verify the effectiveness of our approach. The experimental results show that our heuristic is highly effective even for a low-end embedded controller implemented with open-source software components, under various configurations of control period and number of motor drives. PMID:26076407
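
    The paper's pre-runtime analysis is more involved, but the core idea admits a compact sketch: from traced completion times of the motion control task, choose a transmission phase offset that falls safely after the worst observed completion. The numbers and the margin below are illustrative, not taken from the paper.

      def safe_phase_offset(completion_times_us, period_us, margin_us=50):
          """Pick a transmit offset inside the control period, after the
          worst-case observed completion of the motion control task."""
          offset = max(completion_times_us) + margin_us
          if offset >= period_us:
              raise ValueError("control task leaves no slack in the period")
          return offset

      # completions (microseconds) sampled by pre-runtime tracing, 1 ms cycle
      print(safe_phase_offset([310, 420, 395, 610], period_us=1000))   # -> 660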

  16. Improved Arctic Cloud and Aerosol Research and Model Parameterizations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kenneth Sassen

    2007-03-01

    In this report we summarize our contributions to the Atmospheric Radiation Measurement (ARM) program supported by the Department of Energy. Our involvement commenced in 1990 during the planning stages of the design of the ARM Cloud and Radiation Testbed (CART) sites. We have worked continuously (up to 2006) on our ARM research objectives, building on our earlier findings to advance our knowledge in several areas. Below we summarize our research over this period, with an emphasis on the most recent work. We have participated in several aircraft-supported deployments at the SGP and NSA sites. In addition to deploying the Polarization Diversity Lidar (PDL) system (Sassen 1994; Noel and Sassen 2005) designed and constructed under ARM funding, we have operated other sophisticated instruments, including a W-band polarimetric Doppler radar and a midinfrared radiometer, for intercalibration and student training purposes. We have worked closely with University of North Dakota scientists, twice co-directing the Citation operations through ground-to-air communications, and serving as the CART ground-based mission coordinator with NASA aircraft during the 1996 SUCCESS IOP campaign. We have also taken a leading role in initiating case study research involving a number of ARM co-investigators. Analyses of several case studies from these IOPs have been reported in journal articles, as we show in Table 1. The PDL has also participated in other major field projects, including FIRE II and CRYSTAL-FACE. In general, the published results of our IOP research can be divided into two categories: comprehensive cloud case study analyses to shed light on fundamental cloud processes using the unique CART IOP measurement capabilities, and the analysis of in situ data for the testing of remote sensing cloud retrieval algorithms. One of the goals of the case study approach is to provide sufficiently detailed descriptions of cloud systems from the data-rich CART environment to make them suitable for application by cloud modeling groups, such as the GEWEX Cloud System Study (GCSS) Cirrus Working Groups. In this paper we summarize our IOP-related accomplishments.

  17. Observations and simulations of three-dimensional radiative interactions between Arctic boundary layer clouds and ice floes

    NASA Astrophysics Data System (ADS)

    Schäfer, M.; Bierwirth, E.; Ehrlich, A.; Jäkel, E.; Wendisch, M.

    2015-01-01

    Based on airborne spectral imaging observations, three-dimensional (3-D) radiative effects between Arctic boundary layer clouds and ice floes have been identified and quantified. A method is presented to discriminate sea ice and open water under cloud cover from imaging radiance measurements. This separation simultaneously reveals that, in cloudy cases, the transition of radiance between open water and sea ice is not instantaneous but horizontally smoothed. In general, clouds reduce the nadir radiance above bright surfaces in the vicinity of sea ice/open water boundaries, while the nadir radiance above dark surfaces is enhanced compared to situations with clouds located above horizontally homogeneous surfaces. With the help of the observations and 3-D radiative transfer simulations, this effect was quantified to extend up to 2200 m from the sea ice edge. This affected distance ΔL was found to depend on both cloud and sea ice properties. For a cloud at 0-200 m altitude directly overlying the surface, increasing the cloud optical thickness from τ = 1 to τ = 10 decreases ΔL from 600 to 250 m, while increasing the cloud base altitude or cloud geometrical thickness can increase ΔL; ΔL(τ = 1/10) = 2200 m/1250 m for 500-1000 m cloud altitude. To quantify the effect for different shapes and sizes of ice floes, various albedo fields (infinite straight ice edge, circles, squares, realistic ice floe field) were modelled. Simulations show that ΔL increases with the radius of the ice floe and, for sizes larger than 6 km (500-1000 m cloud altitude), asymptotically reaches maximum values corresponding to an infinite straight ice edge. Furthermore, the impact of these 3-D radiative effects on the retrieval of cloud optical properties was investigated. The enhanced brightness of a dark pixel next to an ice edge results in uncertainties of up to 90% and 30% in retrievals of cloud optical thickness and effective radius reff, respectively. With the ΔL quantified here, an estimate is given of the distance to the ice edge beyond which the retrieval errors are negligible.

  18. Sparse matrix methods research using the CSM testbed software system

    NASA Technical Reports Server (NTRS)

    Chu, Eleanor; George, J. Alan

    1989-01-01

    Research is described on sparse matrix techniques for the Computational Structural Mechanics (CSM) Testbed. The primary objective was to compare the performance of state-of-the-art techniques for solving sparse systems with those that are currently available in the CSM Testbed. Thus, one of the first tasks was to become familiar with the structure of the testbed, and to install some or all of the SPARSPAK package in the testbed. A suite of subroutines to extract from the data base the relevant structural and numerical information about the matrix equations was written, and all the demonstration problems distributed with the testbed were successfully solved. These codes were documented, and performance studies comparing the SPARSPAK technology to the methods currently in the testbed were completed. In addition, some preliminary studies were done comparing some recently developed out-of-core techniques with the performance of the testbed processor INV.

  19. Network testbed creation and validation

    DOEpatents

    Thai, Tan Q.; Urias, Vincent; Van Leeuwen, Brian P.; Watts, Kristopher K.; Sweeney, Andrew John

    2017-03-21

    Embodiments of network testbed creation and validation processes are described herein. A "network testbed" is a replicated environment used to validate a target network or an aspect of its design. Embodiments describe a network testbed that comprises virtual testbed nodes executed via a plurality of physical infrastructure nodes. The virtual testbed nodes utilize these hardware resources as a network "fabric," thereby enabling rapid configuration and reconfiguration of the virtual testbed nodes without requiring reconfiguration of the physical infrastructure nodes. Thus, in contrast to prior art solutions, which require that a tester manually build an emulated environment of physically connected network devices, embodiments receive or derive a target network description and build out a replica of this description using virtual testbed nodes executed via the physical infrastructure nodes. This process allows for the creation of very large (e.g., tens of thousands of network elements) and/or very topologically complex test networks.

  20. Network testbed creation and validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thai, Tan Q.; Urias, Vincent; Van Leeuwen, Brian P.

    Embodiments of network testbed creation and validation processes are described herein. A "network testbed" is a replicated environment used to validate a target network or an aspect of its design. Embodiments describe a network testbed that comprises virtual testbed nodes executed via a plurality of physical infrastructure nodes. The virtual testbed nodes utilize these hardware resources as a network "fabric," thereby enabling rapid configuration and reconfiguration of the virtual testbed nodes without requiring reconfiguration of the physical infrastructure nodes. Thus, in contrast to prior art solutions, which require that a tester manually build an emulated environment of physically connected network devices, embodiments receive or derive a target network description and build out a replica of this description using virtual testbed nodes executed via the physical infrastructure nodes. This process allows for the creation of very large (e.g., tens of thousands of network elements) and/or very topologically complex test networks.

  1. On the Interaction between Marine Boundary Layer Cellular Cloudiness and Surface Heat Fluxes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kazil, J.; Feingold, G.; Wang, Hailong

    2014-01-02

    The interaction between marine boundary layer cellular cloudiness and surface fluxes of sensible and latent heat is investigated. The investigation focuses on the non-precipitating closed-cell state and the precipitating open-cell state at low geostrophic wind speed. The Advanced Research WRF model is used to conduct cloud-system-resolving simulations with interactive surface fluxes of sensible heat, latent heat, and sea salt aerosol, and with a detailed representation of the interaction between aerosol particles and clouds. The mechanisms responsible for the temporal evolution and spatial distribution of the surface heat fluxes in the closed- and open-cell states are investigated and explained. It is found that the horizontal spatial structure of the closed-cell state determines, by entrainment of dry free-tropospheric air, the spatial distribution of surface air temperature and water vapor, and, to a lesser degree, of the surface sensible and latent heat flux. The synchronized dynamics of the open-cell state drives oscillations in surface air temperature, water vapor, and the surface fluxes of sensible and latent heat and of sea salt aerosol. Open-cell cloud formation, cloud optical depth and liquid water path, and cloud and rain water path are identified as good predictors of the spatial distribution of surface air temperature and sensible heat flux, but not of surface water vapor and latent heat flux. It is shown that by enhancing the surface sensible heat flux, the open-cell state creates conditions by which it is maintained. While the open-cell state under consideration is not depleted in aerosol, and is insensitive to variations in sea salt fluxes, it also enhances the sea salt flux relative to the closed-cell state. In aerosol-depleted conditions, this enhancement may replenish the aerosol needed for cloud formation, and hence contribute to the perpetuation of the open-cell state as well. Spatial homogenization of the surface fluxes is found to have only a small effect on cloud properties in the investigated cases. This indicates that sub-grid-scale spatial variability in the surface fluxes of sensible and latent heat and of sea salt aerosol may not be required in large-scale and global models to describe marine boundary layer cellular cloudiness.

  2. Satellite-Surface Perspectives of Air Quality and Aerosol-Cloud Effects on the Environment: An Overview of 7-SEAS BASELInE

    NASA Technical Reports Server (NTRS)

    Tsay, Si-Chee; Maring, Hal B.; Lin, Neng-Huei; Buntoung, Sumaman; Chantara, Somporn; Chuang, Hsiao-Chi; Gabriel, Philip M.; Goodloe, Colby S.; Holben, Brent N.; Hsiao, Ta-Chih; hide

    2016-01-01

    The objectives of the 7-SEAS BASELInE (Seven SouthEast Asian Studies Biomass-burning Aerosols and Stratocumulus Environment: Lifecycles and Interactions Experiment) campaigns in spring 2013-2015 were to synergize measurements from uniquely distributed ground-based networks (e.g., AERONET (AErosol RObotic NETwork) and MPLNET (NASA Micro-Pulse Lidar Network)) and sophisticated platforms (e.g., SMARTLabs (Surface-based Mobile Atmospheric Research and Testbed Laboratories) and regional contributing instruments), along with satellite observations/retrievals and regional atmospheric transport/chemical models, to establish a critically needed database and to advance our understanding of biomass-burning aerosols and trace gases in Southeast Asia (SEA). We present a satellite-surface perspective of 7-SEAS BASELInE and highlight scientific findings concerning: (1) regional meteorology of moisture fields conducive to the production and maintenance of low-level stratiform clouds over land; (2) atmospheric composition in a biomass-burning environment, particularly tracers/markers that serve as important indicators for assessing the state and evolution of atmospheric constituents; (3) applications of remote sensing to air quality and its impact on radiative energetics, examining the effect of diurnal variability of boundary-layer height on aerosol loading; (4) aerosol hygroscopicity and ground-based cloud radar measurements in aerosol-cloud processes studied with advanced cloud ensemble models; and (5) implications of air quality, in terms of the toxicity of nanoparticles and trace gases, for human health. This volume is the third 7-SEAS special issue (after Atmospheric Research, vol. 122, 2013; and Atmospheric Environment, vol. 78, 2013) and includes 27 published papers, with emphasis on air quality and aerosol-cloud effects on the environment. BASELInE observations of stratiform clouds over SEA are unique: such clouds are embedded in a heavy aerosol-laden environment and feature characteristically greater stability over land than over ocean, with minimal radar surface clutter at high vertical spatial resolution. To facilitate an improved understanding of regional aerosol-cloud effects, we envision that future BASELInE-like measurement/modeling needs fall into two categories: (1) efficient yet critical in-situ profiling of the boundary layer for validating remote-sensing retrievals and for initializing regional transport/chemical and cloud ensemble models; and (2) fully utilizing the high observing frequencies of geostationary satellites for resolving the diurnal cycle of the boundary-layer height as it affects the loading of biomass-burning aerosols, air quality, and radiative energetics.

  3. Construction and application of Red5 cluster based on OpenStack

    NASA Astrophysics Data System (ADS)

    Wang, Jiaqing; Song, Jianxin

    2017-08-01

    With the application and development of cloud computing technology in various fields, the resource utilization of data centers has improved markedly, and systems built on cloud computing platforms have gained in scalability and stability. Built in the traditional way, Red5 clusters suffer from low resource utilization and poor stability. This paper uses the efficient resource allocation of cloud computing to build a Red5 server cluster based on OpenStack, to which multimedia applications can be published. The system achieves flexible provisioning of computing resources and also greatly improves the stability of the cluster and the efficiency of its services.

  4. Advanced Wavefront Sensing and Control Testbed (AWCT)

    NASA Technical Reports Server (NTRS)

    Shi, Fang; Basinger, Scott A.; Diaz, Rosemary T.; Gappinger, Robert O.; Tang, Hong; Lam, Raymond K.; Sidick, Erkin; Hein, Randall C.; Rud, Mayer; Troy, Mitchell

    2010-01-01

    The Advanced Wavefront Sensing and Control Testbed (AWCT) is built as a versatile facility for developing and demonstrating, in hardware, the future technologies of wave front sensing and control algorithms for active optical systems. The testbed includes a source projector for a broadband point-source and a suite of extended scene targets, a dispersed fringe sensor, a Shack-Hartmann camera, and an imaging camera capable of phase retrieval wavefront sensing. The testbed also provides two easily accessible conjugated pupil planes which can accommodate the active optical devices such as fast steering mirror, deformable mirror, and segmented mirrors. In this paper, we describe the testbed optical design, testbed configurations and capabilities, as well as the initial results from the testbed hardware integrations and tests.
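
    The record does not say which phase-retrieval algorithm the imaging camera uses; as a generic stand-in, the classic Gerchberg-Saxton iteration below recovers a pupil-plane phase from amplitude constraints in the pupil and focal planes (NumPy only, synthetic data).

      import numpy as np

      def gerchberg_saxton(pupil_amp, focal_amp, n_iter=200):
          """Alternate between planes, imposing the known amplitude in each."""
          phase = np.zeros_like(pupil_amp)
          for _ in range(n_iter):
              focal = np.fft.fft2(pupil_amp * np.exp(1j * phase))
              focal = focal_amp * np.exp(1j * np.angle(focal))  # focal-plane constraint
              phase = np.angle(np.fft.ifft2(focal))             # keep phase, reset amplitude
          return phase

      pupil = np.ones((64, 64))                                 # uniform pupil amplitude
      true_phase = 0.3 * np.random.rand(64, 64)
      target = np.abs(np.fft.fft2(pupil * np.exp(1j * true_phase)))
      estimate = gerchberg_saxton(pupil, target)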

  5. Advanced turboprop testbed systems study. Volume 1: Testbed program objectives and priorities, drive system and aircraft design studies, evaluation and recommendations and wind tunnel test plans

    NASA Technical Reports Server (NTRS)

    Bradley, E. S.; Little, B. H.; Warnock, W.; Jenness, C. M.; Wilson, J. M.; Powell, C. W.; Shoaf, L.

    1982-01-01

    Requirements for establishing propfan technology readiness were determined, and candidate drive systems for propfan application were identified. Candidate testbed aircraft were investigated for suitability, and four aircraft were selected as possible propfan testbed vehicles. An evaluation of the four candidates was performed, and the Boeing KC-135A and the Gulfstream American Gulfstream II were recommended as the most suitable aircraft for test application. Conceptual designs of the two recommended aircraft were produced, and cost and schedule data for the entire testbed program were generated. The program's total cost was estimated, and a wind tunnel program cost and schedule were generated in support of the testbed program.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenfeld, Daniel; Wang, Hailong; Rasch, Philip J.

    Numerical simulations described in previous studies showed that adding cloud condensation nuclei to marine stratocumulus can prevent their breakup from closed into open cells. Additional analyses of the same simulations show that the suppression of rain is well described in terms of cloud drop effective radius (re). Rain is initiated when re near cloud top is around 12-14 μm. Cloud water starts to be depleted when the column-maximum rain intensity (Rmax) exceeds 0.1 mm h-1. This happens when cloud-top re reaches 14 μm. Rmax is mostly less than 0.1 mm h-1 at re < 14 μm, regardless of the cloud water path, but increases rapidly when re exceeds 14 μm. This is in agreement with recent aircraft observations and theoretical considerations for convective clouds, so the mechanism is not limited to marine stratocumulus. These results support the hypothesis that the onset of significant precipitation is determined by the number of nucleated cloud drops and the height (H) above cloud base within the cloud that is required for cloud drops to reach an re of 14 μm. In turn, this can explain the conditions for the initiation of significant drizzle and the opening of closed cells, providing the basis for a simple parameterization for GCMs that unifies the representation of both precipitating and non-precipitating clouds as well as the transition between them. Furthermore, satellite global observations of cloud depth (from base to top) and cloud-top re can be used to derive and validate this parameterization.

  7. Generalized Intelligent Framework for Tutoring (GIFT) Cloud/Virtual Open Campus Quick-Start Guide

    DTIC Science & Technology

    2016-03-01

    This document serves as the quick-start guide for GIFT Cloud, the web-based ... available to users with a GIFT Account at no cost. GIFT Cloud is a new implementation of GIFT. This web-based application allows learners, authors, and ... GIFT Cloud is accessed via a web browser. Officially, GIFT Cloud has been tested to work on ...

  8. NASA Stennis Space Center Integrated System Health Management Test Bed and Development Capabilities

    NASA Technical Reports Server (NTRS)

    Figueroa, Fernando; Holland, Randy; Coote, David

    2006-01-01

    Integrated System Health Management (ISHM) is a capability that focuses on determining the condition (health) of every element in a complex system (detecting anomalies, diagnosing causes, and prognosing future anomalies), and on providing data, information, and knowledge (DIaK), not just data, to control systems for safe and effective operation. This capability is currently provided by large teams of people, primarily on the ground, but needs to be embedded in on-board systems to a higher degree to enable NASA's new Exploration Mission (long-term travel and stay in space), while increasing safety and decreasing life cycle costs of spacecraft (vehicles; platforms; bases or outposts; and ground test, launch, and processing operations). The topics related to this capability include: 1) ISHM Related News Articles; 2) ISHM Vision For Exploration; 3) Layers Representing How ISHM is Currently Performed; 4) ISHM Testbeds & Prototypes at NASA SSC; 5) ISHM Functional Capability Level (FCL); 6) ISHM Functional Capability Level (FCL) and Technology Readiness Level (TRL); 7) Core Elements: Capabilities Needed; 8) Core Elements; 9) Open Systems Architecture for Condition-Based Maintenance (OSA-CBM); 10) Core Elements: Architecture, taxonomy, and ontology (ATO) for DIaK management; 11) Core Elements: ATO for DIaK Management; 12) ISHM Architecture Physical Implementation; 13) Core Elements: Standards; 14) Systematic Implementation; 15) Sketch of Work Phasing; 16) Interrelationship Between Traditional Avionics Systems, Time Critical ISHM and Advanced ISHM; 17) Testbeds and On-Board ISHM; 18) Testbed Requirements: RETS and ISS; 19) Sustainable Development and Validation Process; 20) Development of on-board ISHM; 21) Taxonomy/Ontology of Object Oriented Implementation; 22) ISHM Capability on the E1 Test Stand Hydraulic System; 23) Define Relationships to Embed Intelligence; 24) Intelligent Elements Physical and Virtual; 25) ISHM Testbeds and Prototypes at SSC Current Implementations; 26) Trailer-Mounted RETS; 27) Modeling and Simulation; 28) Summary ISHM Testbed Environments; 29) Data Mining - ARC; 30) Transitioning ISHM to Support NASA Missions; 31) Feature Detection Routines; 32) Sample Features Detected in SSC Test Stand Data; and 33) Health Assessment Database (DIaK Repository).

  9. Remote Sensing of Clouds for Solar Forecasting Applications

    NASA Astrophysics Data System (ADS)

    Mejia, Felipe

    A method for retrieving cloud optical depth (τc) using a UCSD-developed ground-based Sky Imager (USI) is presented. The Radiance Red-Blue Ratio (RRBR) method is motivated by the analysis of simulated images of various τc produced by a Radiative Transfer Model (RTM). From these images, the basic parameters affecting the radiance and red-blue ratio (RBR) of a pixel are identified as the solar zenith angle (SZA), τc, the solar pixel angle/scattering angle (SPA), and the pixel zenith angle/view angle (PZA). The effects of these parameters are described, and the functions for radiance, Iλ(τc, SZA, SPA, PZA), and for the red-blue ratio, RBR(τc, SZA, SPA, PZA), are retrieved from the RTM results. RBR, which is commonly used for cloud detection in sky images, provides non-unique solutions for τc: RBR increases with τc up to about τc = 1 (depending on other parameters) and then decreases. Therefore, the RRBR algorithm uses the measured radiance Iλmeas(SPA, PZA), in addition to RBRmeas(SPA, PZA), to obtain a unique solution for τc. The RRBR method is applied to images of liquid water clouds taken by a USI at the Oklahoma Atmospheric Radiation Measurement (ARM) site over the course of 220 days and compared against measurements from a microwave radiometer (MWR) and output from the Min method [MH96a] for overcast skies. τc values ranged from 0 to 80, with values over 80 capped and registered as 80. A τc RMSE of 2.5 between the Min method [MH96b] and the USI is observed; the MWR and USI have an RMSE of 2.2, which is well within the uncertainty of the MWR. The procedure developed here provides a foundation to test and develop other cloud detection algorithms. Using the RRBR τc estimate as an input, the potential of tomographic techniques for 3-D cloud reconstruction is then explored. The Algebraic Reconstruction Technique (ART) is applied to optical depth maps from sky images to reconstruct 3-D cloud extinction coefficients. Reconstruction accuracy is explored for different products, including surface irradiance, extinction coefficients (k), and liquid water path, as a function of the number of available sky imagers (SIs) and setup distance. Increasing the number of cameras improves the accuracy of the 3-D reconstruction: for surface irradiance, the error decreases significantly up to four imagers, at which point the improvements become marginal, while the k error continues to decrease with more cameras. The ideal distance between imagers was also explored: for a cloud height of 1 km, increasing the distance up to 3 km (the domain length) improved the 3-D reconstruction for surface irradiance, while the k error continued to decrease with increasing distance. An iterative reconstruction technique was also used to improve the results of the ART by minimizing the error between input images and reconstructed simulations. For the best case of a nine-imager deployment, the ART and the iterative method resulted in 53.4% and 33.6% mean average error (MAE) for the extinction coefficients, respectively. The tomographic methods were then tested on real-world cases in the University of California, San Diego (UCSD) solar testbed. Five UCSD Sky Imagers (USIs) were installed across the testbed based on the best-performing distances in simulations. Topographic obstruction is explored as a source of error by analyzing the increased error with obstruction in the field of view of the horizon: if a field of view of at least 70° is available to the camera, the accuracy is within 2% of the full field of view. Errors caused by stray light are also explored by removing the circumsolar region from images and comparing the cloud reconstruction to that from a full image. Removing less than 30% of the circumsolar region, image and GHI errors were within 0.2% of the full image, while errors in k increased by 1%; removing more than 30° around the sun resulted in inaccurate cloud reconstruction. Using four of the five USIs, a 3-D cloud was reconstructed and compared to the fifth camera: the image of the fifth camera (excluded from the reconstruction) was simulated and found to have a 22.9% error compared to the ground truth.
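
    The thesis applies ART to camera rays through a voxel grid; the Kaczmarz-style sketch below shows the same projection update on a tiny generic system Ax = b, where each row of A holds the path lengths of one ray through the voxels and b holds the per-ray optical depths (toy numbers, not the thesis data).

      import numpy as np

      def art(A, b, n_sweeps=50, relax=0.5):
          """Kaczmarz/ART: relax the estimate toward each ray equation in turn."""
          x = np.zeros(A.shape[1])
          for _ in range(n_sweeps):
              for i in range(A.shape[0]):
                  a = A[i]
                  x += relax * (b[i] - a @ x) / (a @ a) * a
              x = np.clip(x, 0.0, None)    # extinction coefficients are non-negative
          return x

      A = np.array([[1.0, 1.0, 0.0],       # ray 1 crosses voxels 0 and 1, etc.
                    [0.0, 1.0, 1.0],
                    [1.0, 0.0, 1.0]])
      b = np.array([3.0, 5.0, 4.0])
      print(art(A, b))                     # converges toward [1, 2, 3]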

  10. Diagnosing turbulence for research aircraft safety using open source toolkits

    NASA Astrophysics Data System (ADS)

    Lang, T. J.; Guy, N.

    Open source software toolkits have been developed and applied to diagnose in-cloud turbulence in the vicinity of Earth science research aircraft, via analysis of ground-based Doppler radar data. Based on multiple retrospective analyses, these toolkits show promise for detecting significant turbulence well prior to cloud penetrations by research aircraft. A pilot study demonstrated the ability to provide mission scientists turbulence estimates in near real time during an actual field campaign, and thus these toolkits are recommended for usage in future cloud-penetrating aircraft field campaigns.
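
    The record does not name the toolkits; assuming a Py-ART-style workflow (a common open-source radar library), one simple in-cloud turbulence proxy is Doppler spectrum width, as in the sketch below. The file name, field name, and 4 m/s threshold are assumptions that vary by radar and campaign.

      import numpy as np
      import pyart

      radar = pyart.io.read("radar_volume.nc")         # hypothetical ground-radar file
      sw = radar.fields["spectrum_width"]["data"]      # masked array, m/s
      turbulent = np.ma.masked_less(sw, 4.0)           # keep gates with width >= 4 m/s
      frac = turbulent.count() / sw.count()
      print(f"{100 * frac:.1f}% of valid gates exceed the turbulence proxy threshold")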

  11. Dynamic VM Provisioning for TORQUE in a Cloud Environment

    NASA Astrophysics Data System (ADS)

    Zhang, S.; Boland, L.; Coddington, P.; Sevior, M.

    2014-06-01

    Cloud computing, also known as an Infrastructure-as-a-Service (IaaS), is attracting more interest from the commercial and educational sectors as a way to provide cost-effective computational infrastructure. It is an ideal platform for researchers who must share common resources but need to be able to scale up to massive computational requirements for specific periods of time. This paper presents the tools and techniques developed to allow the open source TORQUE distributed resource manager and Maui cluster scheduler to dynamically integrate OpenStack cloud resources into existing high throughput computing clusters.
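
    The paper's integration logic lives in TORQUE/Maui hooks, but the provisioning half can be sketched with the openstacksdk client: boot an instance whose cloud-init user data starts a pbs_mom pointed at the head node. The cloud entry, image, flavor, and host names below are placeholders.

      import openstack

      USERDATA = (
          "#!/bin/bash\n"
          "# join the cluster: point pbs_mom at the head node and start it\n"
          "echo '$pbsserver torque-head.example.org'"
          " > /var/spool/torque/mom_priv/config\n"
          "systemctl start pbs_mom\n"
      )

      conn = openstack.connect(cloud="mycloud")        # credentials from clouds.yaml
      server = conn.create_server(
          "torque-worker-01",
          image="centos7-worker",
          flavor="m1.large",
          userdata=USERDATA,
          wait=True,                                   # block until ACTIVE
      )
      print(server.status)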

  12. Space Subdivision in Indoor Mobile Laser Scanning Point Clouds Based on Scanline Analysis.

    PubMed

    Zheng, Yi; Peter, Michael; Zhong, Ruofei; Oude Elberink, Sander; Zhou, Quan

    2018-06-05

    Indoor space subdivision is an important aspect of scene analysis that provides essential information for many applications, such as indoor navigation and evacuation route planning. Until now, most proposed scene understanding algorithms have been based on whole point clouds, which has led to complicated operations, high computational loads and low processing speed. This paper presents novel methods to efficiently extract the location of openings (e.g., doors and windows) and to subdivide space by analyzing scanlines. An opening detection method is demonstrated that analyses the local geometric regularity in scanlines to refine the extracted openings. Moreover, a space subdivision method based on the extracted openings and the scanning system trajectory is described. Finally, the opening detection and space subdivision results are saved as point cloud labels for use in further investigations. The method has been tested on a real dataset collected by ZEB-REVO. The experimental results validate the completeness and correctness of the proposed method for different indoor environments and scanning paths.
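
    The authors' detector analyses local geometric regularity; as a simplified illustration of the underlying scanline idea (not their algorithm), the sketch below flags runs of range readings that see well past the wall plane, the signature of an open door or window along one scanline. The thresholds are arbitrary.

      import numpy as np

      def find_openings(ranges, wall_dist, jump=0.5, min_len=5):
          """Return (start, end) index pairs where the scanline sees past the wall."""
          beyond = ranges > wall_dist + jump
          openings, start = [], None
          for i, flag in enumerate(beyond):
              if flag and start is None:
                  start = i                              # opening begins
              elif not flag and start is not None:
                  if i - start >= min_len:               # ignore single-return noise
                      openings.append((start, i))
                  start = None
          if start is not None and len(beyond) - start >= min_len:
              openings.append((start, len(beyond)))
          return openings

      scan = np.array([2.0] * 20 + [4.5] * 8 + [2.0] * 20)   # synthetic corridor wall
      print(find_openings(scan, wall_dist=2.0))              # -> [(20, 28)]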

  13. Midlatitude Cirrus Clouds Derived from Hurricane Nora: A Case Study with Implications for Ice Crystal Nucleation and Shape.

    NASA Astrophysics Data System (ADS)

    Sassen, Kenneth; Arnott, W. Patrick; O'C. Starr, David; Mace, Gerald G.; Wang, Zhien; Poellot, Michael R.

    2003-04-01

    Hurricane Nora traveled up the Baja Peninsula coast in the unusually warm El Niño waters of September 1997 until rapidly decaying as it approached southern California on 24 September. The anvil cirrus blowoff from the final surge of tropical convection became embedded in subtropical flow that advected the cirrus across the western United States, where it was studied from the Facility for Atmospheric Remote Sensing (FARS) in Salt Lake City, Utah, on 25 September. A day later, the cirrus shield remnants were redirected southward by midlatitude circulations into the southern Great Plains, providing a case study opportunity for the research aircraft and ground-based remote sensors assembled at the Clouds and Radiation Testbed (CART) site in northern Oklahoma. Using these comprehensive resources and new remote sensing cloud retrieval algorithms, the microphysical and radiative cloud properties of this unusual cirrus event are uniquely characterized. Importantly, at both the FARS and CART sites the cirrus generated spectacular halos and arcs, which acted as a tracer for the hurricane cirrus, despite the limited lifetimes of individual ice crystals. Lidar depolarization data indicate widespread regions of uniform ice plate orientations, and in situ particle replicator data show a preponderance of pristine, solid hexagonal plates and columns. It is suggested that these unusual aspects are the result of the mode of cirrus particle nucleation, presumably involving the lofting of sea salt nuclei in strong thunderstorm updrafts into the upper troposphere. This created a reservoir of haze particles that continued to produce halide-salt-contaminated ice crystals during the extended period of cirrus cloud maintenance. The inference that marine microbiota are embedded in the replicas of some ice crystals collected over the CART site points to the longevity of marine effects. Various nucleation scenarios proposed for cirrus clouds based on this and other studies, and the implications for understanding cirrus radiative properties on a global scale, are discussed.

  14. Midlatitude Cirrus Clouds Derived from Hurricane Nora: A Case Study with Implications for Ice Crystal Nucleation and Shape

    NASA Technical Reports Server (NTRS)

    Sassen, Kenneth; Arnott, W. Patrick; OCStarr, David; Mace, Gerald G.; Wang, Zhien; Poellot, Michael R.

    2002-01-01

    Hurricane Nora traveled up the Baja Peninsula coast in the unusually warm El Niño waters of September 1997, until rapidly decaying as it approached Southern California on 24 September. The anvil cirrus blowoff from the final surge of tropical convection became embedded in subtropical flow that advected the cirrus across the western US, where it was studied from the Facility for Atmospheric Remote Sensing (FARS) in Salt Lake City, Utah. A day later, the cirrus shield remnants were redirected southward by midlatitude circulations into the Southern Great Plains, providing a case study opportunity for the research aircraft and ground-based remote sensors assembled at the Clouds and Radiation Testbed (CART) site in northern Oklahoma. Using these comprehensive resources and new remote sensing cloud retrieval algorithms, the microphysical and radiative cloud properties of this unusual cirrus event are uniquely characterized. Importantly, at both the FARS and CART sites the cirrus generated spectacular optical displays, which acted as a tracer for the hurricane cirrus, despite the limited lifetimes of individual ice crystals. Lidar depolarization data indicate widespread regions of uniform ice plate orientations, and in situ particle replicator data show a preponderance of pristine, solid hexagonal plates and columns. It is suggested that these unusual aspects are the result of the mode of cirrus particle nucleation, presumably involving the lofting of sea-salt nuclei in thunderstorm updrafts into the upper troposphere. This created a reservoir of haze particles that continued to produce halide-salt-contaminated ice crystals during the extended period of cirrus cloud maintenance. The inference that marine microbiota are embedded in the replicas of ice crystals collected over the CART site points to the longevity of marine effects. Various nucleation scenarios proposed for cirrus clouds based on this and other studies, and the implications for understanding cirrus radiative properties on a global scale, are discussed.

  15. Embedded Data Processor and Portable Computer Technology testbeds

    NASA Technical Reports Server (NTRS)

    Alena, Richard; Liu, Yuan-Kwei; Goforth, Andre; Fernquist, Alan R.

    1993-01-01

    Attention is given to current activities in the Embedded Data Processor and Portable Computer Technology testbed configurations that are part of the Advanced Data Systems Architectures Testbed at the Information Sciences Division at NASA Ames Research Center. The Embedded Data Processor Testbed evaluates advanced microprocessors for potential use in mission and payload applications within the Space Station Freedom Program. The Portable Computer Technology (PCT) Testbed integrates and demonstrates advanced portable computing devices and data system architectures. The PCT Testbed uses both commercial and custom-developed devices to demonstrate the feasibility of functional expansion and networking for portable computers in flight missions.

  16. Web Solutions Inspire Cloud Computing Software

    NASA Technical Reports Server (NTRS)

    2013-01-01

    An effort at Ames Research Center to standardize NASA websites unexpectedly led to a breakthrough in open source cloud computing technology. With the help of Rackspace Inc. of San Antonio, Texas, the resulting product, OpenStack, has spurred the growth of an entire industry that is already employing hundreds of people and generating hundreds of millions in revenue.

  17. Mapping urban green open space in Bontang city using QGIS and cloud computing

    NASA Astrophysics Data System (ADS)

    Agus, F.; Ramadiani; Silalahi, W.; Armanda, A.; Kusnandar

    2018-04-01

    Digital mapping techniques are now freely and openly available, making map-based application development easier, faster and cheaper. The rapid development of cloud computing Geographic Information Systems means such systems can help meet the community's need for online geospatial information. Urban Green Open Space (GOS) provides great benefits as an oxygen supplier and carbon sink, and contributes to the comfort and beauty of city life. This study proposes a GIS Cloud Computing (CC) platform for mapping the GOS of Bontang City. The GIS-CC platform uses freely available, open-source base maps. The research used a survey method to collect GOS data obtained from the Bontang City Government, while the application was developed with Quantum GIS and cloud computing. The results section describes the existing GOS of Bontang City and the design of the GOS mapping application.

  18. Experimental demonstration of OpenFlow-enabled media ecosystem architecture for high-end applications over metro and core networks.

    PubMed

    Ntofon, Okung-Dike; Channegowda, Mayur P; Efstathiou, Nikolaos; Rashidi Fard, Mehdi; Nejabati, Reza; Hunter, David K; Simeonidou, Dimitra

    2013-02-25

    In this paper, a novel Software-Defined Networking (SDN) architecture is proposed for high-end Ultra High Definition (UHD) media applications. UHD media applications require huge amounts of bandwidth that can only be met with high-capacity optical networks. In addition, there are requirements for control frameworks capable of delivering effective application performance with efficient network utilization. A novel SDN-based Controller that tightly integrates application-awareness with network control and management is proposed for such applications. An OpenFlow-enabled test-bed demonstrator is reported with performance evaluations of advanced online and offline media- and network-aware schedulers.

  19. A network approach to the geometric structure of shallow cloud fields

    NASA Astrophysics Data System (ADS)

    Glassmeier, F.; Feingold, G.

    2017-12-01

    The representation of shallow clouds and their radiative impact is one of the largest challenges for global climate models. While the bulk properties of cloud fields, including effects of organization, are a very active area of research, the potential of the geometric arrangement of cloud fields for the development of new parameterizations has hardly been explored. Self-organized patterns are particularly evident in the cellular structure of stratocumulus (Sc) clouds so readily visible in satellite imagery. Inspired by similar patterns in biology and physics, we approach pattern formation in Sc fields from the perspective of natural cellular networks. Our network analysis is based on large-eddy simulations of open- and closed-cell Sc cases. We find the network structure to be neither random nor characteristic of natural convection. It is independent of macroscopic cloud-field properties like the Sc regime (open vs. closed) and its typical length scale (boundary layer height). The latter is a consequence of entropy maximization (Lewis's Law with parameter 0.16). The cellular pattern is on average hexagonal, with non-six-sided cells occurring according to a neighbor-number distribution with a variance of about 2. Reflecting the continuously renewing dynamics of Sc fields, large (many-sided) cells tend to neighbor small (few-sided) cells (Aboav-Weaire Law with parameter 0.9). These macroscopic network properties emerge independent of the Sc regime because the different processes governing the evolution of closed as compared to open cells correspond to topologically equivalent network dynamics. By developing a heuristic model, we show that open- and closed-cell dynamics can both be mimicked by versions of cell division and cell disappearance, and are biased towards the expansion of smaller cells. This model offers for the first time a fundamental and universal explanation for the geometric pattern of Sc clouds. It may contribute to the development of advanced Sc parameterizations. As an outlook, we discuss how a similar network approach can be applied to describe and quantify the geometric structure of shallow cumulus cloud fields.
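
    The cited statistics are easy to reproduce on synthetic data: build the Voronoi tessellation of cell centres with SciPy and tabulate neighbour counts, whose mean should sit near 6. The point field below is random rather than a simulated Sc field, so the variance lands near the Poisson-Voronoi value (about 1.8) rather than the value of about 2 reported above.

      import numpy as np
      from collections import Counter
      from scipy.spatial import Voronoi

      pts = np.random.rand(2000, 2)                 # stand-in for cloud-cell centres
      vor = Voronoi(pts)

      neighbours = Counter()
      for p, q in vor.ridge_points:                 # each ridge joins two neighbouring cells
          neighbours[p] += 1
          neighbours[q] += 1

      # crude interior filter to avoid boundary artefacts
      interior = [n for i, n in neighbours.items()
                  if 0.1 < pts[i, 0] < 0.9 and 0.1 < pts[i, 1] < 0.9]
      print(np.mean(interior), np.var(interior))    # mean ~6, variance ~1.8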

  20. Preliminary design for Arctic atmospheric radiative transfer experiments

    NASA Technical Reports Server (NTRS)

    Zak, B. D.; Church, H. W.; Stamnes, K.; Shaw, G.; Filyushkin, V.; Jin, Z.; Ellingson, R. G.; Tsay, S. C.

    1995-01-01

    If current plans are realized, within the next few years an extraordinary set of coordinated research efforts focusing on energy flows in the Arctic will be implemented. All are motivated by the prospect of global climate change. SHEBA (Surface Heat Budget of the Arctic Ocean), led by the National Science Foundation (NSF) and the Office of Naval Research (ONR), involves instrumenting an ice camp in the perennial Arctic ice pack and taking data for 12-18 months. The ARM (Atmospheric Radiation Measurement) North Slope of Alaska and Adjacent Arctic Ocean (NSA/AAO) Cloud and Radiation Testbed (CART) focuses on atmospheric radiative transport, especially in the presence of clouds. The NSA/AAO CART involves instrumenting a sizeable area on the North Slope of Alaska and adjacent waters in the vicinity of Barrow, and acquiring data over a period of about 10 years. FIRE (First ISCCP (International Satellite Cloud Climatology Project) Regional Experiment) Phase 3 is a program led by the National Aeronautics and Space Administration (NASA) which focuses on Arctic clouds and is coordinated with SHEBA and ARM. FIRE has historically emphasized data from airborne and satellite platforms. All three programs anticipate initiating Arctic data acquisition during spring 1997. In light of this historic opportunity, the authors discuss a strawman atmospheric radiative transfer experimental plan that identifies which features of the radiative transport models they think should be tested, what experimental data are required for each type of test, the platforms and instrumentation necessary to acquire those data, and, in general terms, how the experiments could be conducted. Aspects of the plan are applicable to all three programs.

  1. A compact high repetition rate CO2 coherent Doppler lidar

    NASA Technical Reports Server (NTRS)

    Alejandro, S.; Frelin, R.; Dix, B.; Mcnicholl, P.

    1992-01-01

    As part of its program to develop coherent heterodyne detection lidar technology for space, airborne, and ground based applications, the Optical Environment Division of the USAF's Phillips Laboratory developed a compact coherent CO2 TEA lidar system. Although originally conceived as a high altitude balloon borne system, the lidar is presently integrated into a trailer for ground based field measurements of aerosols and wind fields. In this role, it will also serve as a testbed for signal acquisition and processing development for planned future airborne and space based solid state lidar systems. The system has also found significance in new areas of interest to the Air Force such as cloud studies and coherent Differential Absorption Lidar (DIAL) systems.

  2. Human Spacecraft Structures Internship

    NASA Technical Reports Server (NTRS)

    Bhakta, Kush

    2017-01-01

    The DSG will be placed in halo orbit around the Moon, serving as a platform for international and commercial partners to explore the lunar surface and as a testbed for technologies needed to explore Mars. A habitat module, launched on EM-3 and sized to fit inside the SLS fairing, will house up to 4 crew members aboard the DSG. Internship tasks on the habitat structure included: 1) re-modeling the entire structure as a finite element model in NX; 2) using beam and shell elements to represent the pressure vessel structure; and 3) creating a point cloud of centers of mass for mass components, so that local moments and inertias can be inspected for thrust ring applications. Additional work covered habitat docking analysis, concepts for how artificial gravity, which may be necessary for astronaut health in deep space, might be incorporated into a spacecraft in the near term, and Orion window radiant heat testing.

  3. US-Korea collaborative research for bridge monitoring test beds

    NASA Astrophysics Data System (ADS)

    Yun, C. B.; Sohn, H.; Lee, J. J.; Park, S.; Wang, M. L.; Zhang, Y. F.; Lynch, J. P.

    2010-04-01

    This paper presents an interim report on an international collaborative research project between the United States and Korea that fundamentally addresses the challenges associated with integrating structural health monitoring (SHM) system components into a comprehensive system for bridges. The objective of the project is to integrate and validate cutting-edge sensors and SHM methods under development for monitoring the long-term performance and structural integrity of highway bridges. A variety of new sensor and monitoring technologies have been selected for integration, including wireless sensors, EM stress sensors, and piezoelectric active sensors. Using these sensors as building blocks, the first phase of the study focuses on the design of a comprehensive SHM system that is deployed upon a series of highway bridges in Korea. With permanently installed SHM systems in place, the second phase of the study provides open access to the bridges and response data continuously collected as an internal test-bed for SHM. Currently, basic facilities including Internet lines have been constructed on the test-beds, and the participants carried out tests on bridges on the test road section owned by the Korea Expressway Corporation (KEC) with their own measurement and monitoring systems in the local area network environment. The participants were able to access and control their measurement systems by using Remote Desktop in Windows XP through the Internet. Researchers interested in this test-bed are encouraged to join in the collaborative research.

  4. Abstracting application deployment on Cloud infrastructures

    NASA Astrophysics Data System (ADS)

    Aiftimiei, D. C.; Fattibene, E.; Gargana, R.; Panella, M.; Salomoni, D.

    2017-10-01

    Deploying a complex application on a Cloud-based infrastructure can be a challenging task. In this contribution we present an approach for Cloud-based deployment of applications and its present or future implementation in the framework of several projects, such as “!CHAOS: a cloud of controls” [1], a project funded by MIUR (Italian Ministry of Research and Education) to create a Cloud-based deployment of a control system and data acquisition framework; “INDIGO-DataCloud” [2], an EC H2020 project targeting, among other things, high-level deployment of applications on hybrid Clouds; and “Open City Platform” [3], an Italian project aiming to provide open Cloud solutions for Italian Public Administrations. We decided to use an orchestration service to hide the complex deployment of the application components, and to build an abstraction layer on top of the orchestration one. Through the Heat [4] orchestration service, we prototyped a dynamic, on-demand, scalable platform of software components, based on OpenStack infrastructures. On top of the orchestration service we developed a prototype of a web interface exploiting the Heat APIs; the user can start an instance of the application without having knowledge of the underlying Cloud infrastructure and services (a minimal sketch of such a call is given below). Moreover, the platform instance can be customized by choosing parameters related to the application, such as the size of a file system or the number of instances of a NoSQL DB cluster. As soon as the desired platform is running, the web interface offers the possibility to scale some infrastructure components. In this contribution we describe the solution design and implementation, based on the application requirements, the details of the development of both the Heat templates and the web interface, together with possible exploitation strategies of this work in Cloud data centers.
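
    As a minimal sketch of what such a web interface does under the hood, the following launches a stack through the OpenStack Orchestration (Heat) REST API; the endpoint, token, and template are placeholders, not the project's actual deployment:

    ```python
    # Launch a Heat stack via the OpenStack Orchestration REST API.
    # Endpoint, token, and template below are hypothetical placeholders.
    import json
    import requests

    HEAT = "https://heat.example.org:8004/v1/TENANT_ID"   # hypothetical endpoint
    TOKEN = "gAAAA..."                                    # Keystone token (placeholder)

    template = {
        "heat_template_version": "2015-04-30",
        "parameters": {"fs_size": {"type": "number", "default": 100}},
        "resources": {
            "data_volume": {
                # One user-tunable resource, e.g. the file-system size
                # mentioned in the abstract.
                "type": "OS::Cinder::Volume",
                "properties": {"size": {"get_param": "fs_size"}},
            }
        },
    }

    resp = requests.post(
        f"{HEAT}/stacks",
        headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"},
        data=json.dumps({"stack_name": "app-instance-1", "template": template}),
    )
    resp.raise_for_status()
    print("stack id:", resp.json()["stack"]["id"])
    ```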

  5. Uniform Atmospheric Retrievals of Ultracool Late-T and Early-Y dwarfs

    NASA Astrophysics Data System (ADS)

    Garland, Ryan; Irwin, Patrick

    2018-01-01

    A significant number of ultracool (<600 K) extrasolar objects have been unearthed in the past decade thanks to wide-field surveys such as WISE. These objects present a perfect testbed for examining the evolution of atmospheric structure as we transition from typically hot extrasolar temperatures to the temperatures found within our Solar System. By examining these types of objects with a uniform retrieval method, we hope to elucidate any trends and (dis)similarities found in atmospheric parameters, such as chemical abundances, temperature-pressure profile, and cloud structure, for a sample of 7 ultracool brown dwarfs as we transition from hotter (~700 K) to colder objects (~450 K). We perform atmospheric retrievals on two late-T and five early-Y dwarfs. We use the NEMESIS atmospheric retrieval code coupled to a Nested Sampling algorithm, along with a standard uniform model for all of our retrievals. The uniform model assumes the atmosphere is described by a gray radiative-convective temperature profile, (optionally) a self-consistent Mie scattering cloud, and a number of relevant gases. We first verify our methods by comparing them to a benchmark retrieval for Gliese 570D, which is found to be consistent. Furthermore, we present the retrieved gaseous composition, temperature structure, spectroscopic mass and radius, cloud structure, and the trends associated with decreasing temperature found in this small sample of objects.
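
    For orientation, a "gray radiative-convective temperature profile" is commonly built from the Eddington gray-atmosphere solution, with a convective adjustment applied where that profile is unstable; a minimal sketch with illustrative values (not the paper's retrieved parameters, and not necessarily NEMESIS's exact parameterization):

    ```python
    import numpy as np

    # Gray (Eddington) radiative-equilibrium profile:
    #   T(tau)^4 = 0.75 * Teff^4 * (tau + 2/3)
    # Full models switch to an adiabat wherever this profile is convectively
    # unstable. Teff and the tau grid are illustrative placeholders.
    Teff = 450.0                       # effective temperature [K]
    tau = np.logspace(-3, 2, 200)      # gray optical depth grid
    T_rad = (0.75 * Teff**4 * (tau + 2.0 / 3.0)) ** 0.25

    print("top-of-atmosphere T [K]:", round(float(T_rad[0]), 1))
    print("deep-atmosphere  T [K]:", round(float(T_rad[-1]), 1))
    ```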

  6. THE SIMULATION OF FINE SCALE NOCTURNAL BOUNDARY LAYER MOTIONS WITH A MESO-SCALE ATMOSPHERIC MODEL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Werth, D.; Kurzeja, R.; Parker, M.

    A field project over the Atmospheric Radiation Measurement-Clouds and Radiation Testbed (ARM-CART) site during a period of several nights in September 2007 was conducted to explore the evolution of the low-level jet (LLJ). Data was collected from a tower and a sodar and analyzed for turbulent behavior. To study the full range of nocturnal boundary layer (NBL) behavior, the Regional Atmospheric Modeling System (RAMS) was used to simulate the ARM-CART NBL field experiment and validated against the data collected from the site. This model was run at high resolution, and is ideal for calculating the interactions among the various motions within the boundary layer and their influence on the surface. The model reproduces adequately the synoptic situation and the formation and dissolution cycles of the low-level jet, although it suffers from insufficient cloud production and excessive nocturnal cooling. The authors suggest that observed heat flux data may further improve the realism of the simulations both in the cloud formation and in the jet characteristics. In a higher resolution simulation, the NBL experiences motion on a range of timescales as revealed by a wavelet analysis, and these are affected by the presence of the LLJ. The model can therefore be used to provide information on activity throughout the depth of the NBL.

  7. Diversionary device

    DOEpatents

    Grubelich, Mark C.

    2001-01-01

    A diversionary device has a housing having at least one opening and containing a non-explosive propellant and a quantity of fine powder packed within the housing, with the powder being located between the propellant and the opening. When the propellant is activated, it has sufficient energy to propel the powder through the opening to produce a cloud of powder outside the housing. An igniter is also provided for igniting the cloud of powder to create a diversionary flash and bang, but at a low enough pressure to avoid injuring nearby people.

  8. Delivering Unidata Technology via the Cloud

    NASA Astrophysics Data System (ADS)

    Fisher, Ward; Oxelson Ganter, Jennifer

    2016-04-01

    Over the last two years, Docker has emerged as the clear leader in open-source containerization. Containerization technology provides a means by which software can be pre-configured and packaged into a single unit, i.e. a container. This container can then be easily deployed either on local or remote systems. Containerization is particularly advantageous when moving software into the cloud, as it simplifies the process. Unidata is adopting containerization as part of our commitment to migrate our technologies to the cloud. We are using a two-pronged approach in this endeavor. In addition to migrating our data-portal services to a cloud environment, we are also exploring new and novel ways to use cloud-specific technology to serve our community. This effort has resulted in several new cloud/Docker-specific projects at Unidata: "CloudStream," "CloudIDV," and "CloudControl." CloudStream is a docker-based technology stack for bringing legacy desktop software to new computing environments, without the need to invest significant engineering/development resources. CloudStream helps make it easier to run existing software in a cloud environment via a technology called "Application Streaming." CloudIDV is a CloudStream-based implementation of the Unidata Integrated Data Viewer (IDV). CloudIDV serves as a practical example of application streaming, and demonstrates how traditional software can be easily accessed and controlled via a web browser. Finally, CloudControl is a web-based dashboard which provides administrative controls for running docker-based technologies in the cloud, as well as providing user management. In this work we will give an overview of these three open-source technologies and the value they offer to our community.
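
    As a hedged illustration of the containerization workflow (not the actual CloudIDV configuration; the image name and port mapping are hypothetical), running a pre-packaged application with the Docker SDK for Python looks like this:

    ```python
    # Rough illustration of deploying a containerized application with the
    # Docker SDK for Python. Image name, ports, and container name are
    # hypothetical, not Unidata's actual configuration.
    import docker

    client = docker.from_env()

    # Run a pre-packaged application container, exposing its web front end so
    # it can be reached from a browser ("application streaming" style).
    container = client.containers.run(
        "example/cloudidv:latest",      # hypothetical image
        detach=True,
        ports={"6080/tcp": 8080},       # container port -> host port
        name="idv-demo",
    )
    print(container.status)
    ```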

  9. New Educational Modules Using a Cyber-Distribution System Testbed

    DOE PAGES

    Xie, Jing; Bedoya, Juan Carlos; Liu, Chen-Ching; ...

    2018-03-30

    At Washington State University (WSU), a modern cyber-physical system testbed has been implemented based on an industry grade distribution management system (DMS) that is integrated with remote terminal units (RTUs), smart meters, and a solar photovoltaic (PV). In addition, the real model from the Avista Utilities distribution system in Pullman, WA, is modeled in DMS. The proposed testbed environment allows students and instructors to utilize these facilities for innovations in learning and teaching. For power engineering education, this testbed helps students understand the interaction between a cyber system and a physical distribution system through industrial level visualization. The testbed provides a distribution system monitoring and control environment for students. Compared with a simulation based approach, the testbed brings the students' learning environment a step closer to the real world. The educational modules allow students to learn the concepts of a cyber-physical system and an electricity market through an integrated testbed. Furthermore, the testbed provides a platform in the study mode for students to practice working on a real distribution system model. This paper describes the new educational modules based on the testbed environment. Three modules are described together with the underlying educational principles and associated projects.

  11. The AppScale Cloud Platform

    PubMed Central

    Krintz, Chandra

    2013-01-01

    AppScale is an open source distributed software system that implements a cloud platform as a service (PaaS). AppScale makes cloud applications easy to deploy and scale over disparate cloud fabrics, implementing a set of APIs and architecture that also makes apps portable across the services they employ. AppScale is API-compatible with Google App Engine (GAE) and thus executes GAE applications on-premise or over other cloud infrastructures, without modification. PMID:23828721
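
    To make the API-compatibility claim concrete: a minimal GAE-era Python application such as the following would, per the abstract, run unmodified under either Google App Engine or AppScale (the handler itself is invented for illustration):

    ```python
    # Minimal GAE-style WSGI app (webapp2, the framework of GAE's Python
    # runtime of that era). Because AppScale is API-compatible with GAE, an
    # app like this should deploy to either platform without code changes.
    # The handler and route are illustrative only.
    import webapp2

    class MainPage(webapp2.RequestHandler):
        def get(self):
            self.response.headers["Content-Type"] = "text/plain"
            self.response.write("Hello from a portable GAE/AppScale app")

    app = webapp2.WSGIApplication([("/", MainPage)], debug=True)
    ```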

  12. Research on private cloud computing based on analysis on typical opensource platform: a case study with Eucalyptus and Wavemaker

    NASA Astrophysics Data System (ADS)

    Yu, Xiaoyuan; Yuan, Jian; Chen, Shi

    2013-03-01

    Cloud computing is one of the most popular topics in the IT industry and is recently being adopted by many companies. It has four deployment models: public cloud, community cloud, hybrid cloud, and private cloud. Among these, a private cloud can be implemented within a private network and delivers some of the benefits of cloud computing without some of its pitfalls. This paper compares typical open source platforms through which a private cloud can be implemented. After this comparison, we choose Eucalyptus and Wavemaker for a case study on the private cloud. We also estimate the performance of cloud platform services and develop prototype software offered as cloud services.

  13. The CAUSES Model Intercomparison Project: Using a hindcast approach to study the U.S. summertime surface warm temperature bias

    NASA Astrophysics Data System (ADS)

    Ma, H. Y.; Klein, S. A.; Xie, S.; Zhang, C.; Morcrette, C. J.; Van Weverberg, K.; Petch, J.

    2016-12-01

    CAUSES (Clouds Above the United States and Errors at the Surface) is a joint GASS/RGCM/ASR model intercomparison project with an observational focus (data from the U.S. DOE ARM SGP site and other observations). The goal of this project is to evaluate the role of clouds, radiation, and precipitation processes in contributing to the surface air temperature bias in the region of the central U.S., which is seen in several weather and climate models. In this project, we use a short-term hindcast approach and examine the error growth due to cloud-associated processes while the large-scale state remains close to observations. The study period is from April 1 to August 31, 2011, which also covers the entire Midlatitude Continental Convective Clouds Experiment (MC3E) campaign, which provides very frequent radiosondes (8 per day) and many extensive cloud and precipitation radar observations. Our preliminary analysis indicates that the warm surface air temperature bias in the mean diurnal cycle of the whole study period is very robust across all the participating models over the ARM SGP site. During the spring season (April-May), the daytime warm bias in most models is mostly due to excessive net surface shortwave flux resulting from insufficient deep convective cloud fraction or optically too-thin clouds. The nighttime warm bias is likely due to excessive downwelling longwave flux resulting from the persisting deep clouds. During the summer season (June-August), the contribution from precipitation bias becomes important. Insufficient seasonally accumulated precipitation from the propagating convective systems originating from the Rockies contributes to lower soil moisture. This condition drives the land surface to a dry state whereby radiative input can only be balanced by sensible heat loss through an increased surface air temperature (see the energy balance sketched below). More information about the CAUSES project can be found through the project webpage (http://portal.nersc.gov/project/capt/CAUSES/). (This study is funded by the RGCM and ASR programs of the U.S. Department of Energy as part of the Cloud-Associated Parameterizations Testbed. This work is performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-688818)
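
    The dry-soil mechanism in the last sentences is just the textbook surface energy balance (schematic, not a CAUSES-specific formula):

    ```latex
    % Surface energy balance: net radiation R_net is split into sensible heat
    % SH, latent heat LH, and ground heat flux G, with SH driven by the
    % surface-air temperature difference via a bulk transfer relation.
    R_{\mathrm{net}} = SH + LH + G, \qquad SH = \rho c_p C_H U \,(T_s - T_a)
    % When the soil dries out, LH -> 0, so balancing R_net requires a larger
    % SH, i.e., a larger T_s - T_a: hence the warm surface air temperature bias.
    ```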

  14. Cloud and surface textural features in polar regions

    NASA Technical Reports Server (NTRS)

    Welch, Ronald M.; Kuo, Kwo-Sen; Sengupta, Sailes K.

    1990-01-01

    The study examines the textural signatures of clouds, ice-covered mountains, solid and broken sea ice and floes, and open water. The textural features are computed from sum and difference histogram and gray-level difference vector statistics defined at various pixel displacement distances derived from Landsat multispectral scanner data. Polar cloudiness, snow-covered mountainous regions, solid sea ice, glaciers, and open water have distinguishable texture features. This suggests that textural measures can be successfully applied to the detection of clouds over snow-covered mountains, an ability of considerable importance for the modeling of snow-melt runoff. However, broken stratocumulus cloud decks and thin cirrus over broken sea ice remain difficult to distinguish texturally. It is concluded that even with high spatial resolution imagery, it may not be possible to distinguish broken stratocumulus and thin clouds from sea ice in the marginal ice zone using the visible channel textural features alone.

  15. Airborne observations and simulations of three-dimensional radiative interactions between Arctic boundary layer clouds and ice floes

    NASA Astrophysics Data System (ADS)

    Schäfer, M.; Bierwirth, E.; Ehrlich, A.; Jäkel, E.; Wendisch, M.

    2015-07-01

    Based on airborne spectral imaging observations, three-dimensional (3-D) radiative effects between Arctic boundary layer clouds and highly variable Arctic surfaces were identified and quantified. A method is presented to discriminate between sea ice and open water under cloudy conditions based on airborne nadir reflectivity γλ measurements in the visible spectral range. In cloudy cases the transition of γλ from open water to sea ice is not instantaneous but horizontally smoothed. In general, clouds reduce γλ above bright surfaces in the vicinity of open water, while γλ above open sea is enhanced. With the help of observations and 3-D radiative transfer simulations, this effect was quantified to extend between 0 and 2200 m from the sea ice edge (for a dark-ocean albedo of αwater = 0.042 and a sea-ice albedo of αice = 0.91 at 645 nm wavelength). The affected distance ΔL was found to depend on both cloud and sea ice properties. For a low-level cloud at 0-200 m altitude, as observed during the Arctic field campaign VERtical Distribution of Ice in Arctic clouds (VERDI) in 2012, an increase in the cloud optical thickness τ from 1 to 10 leads to a decrease in ΔL from 600 to 250 m. An increase in the cloud base altitude or cloud geometrical thickness results in an increase in ΔL; for a cloud at 500-1000 m altitude, ΔL = 2200 m for τ = 1 and 1250 m for τ = 10. To quantify the effect for different shapes and sizes of ice floes, radiative transfer simulations were performed with various albedo fields (infinitely long straight ice edge, circular ice floes, squares, realistic ice floe field). The simulations show that ΔL increases with increasing radius of the ice floe and reaches maximum values for ice floes with radii larger than 6 km (500-1000 m cloud altitude), which matches the results found for an infinitely long, straight ice edge. Furthermore, the influence of these 3-D radiative effects on the retrieved cloud optical properties was investigated. The enhanced brightness of a dark pixel next to an ice edge results in uncertainties of up to 90% and 30% in retrievals of τ and effective radius reff, respectively. With the help of ΔL, an estimate of the distance to the ice edge is given beyond which the retrieval uncertainties due to 3-D radiative effects are negligible.

  16. Marine boundary layer cloud regimes and POC formation in an LES coupled to a bulk aerosol scheme

    NASA Astrophysics Data System (ADS)

    Berner, A. H.; Bretherton, C. S.; Wood, R.; Muhlbauer, A.

    2013-07-01

    A large-eddy simulation (LES) coupled to a new bulk aerosol scheme is used to study long-lived regimes of aerosol-boundary layer cloud-precipitation interaction and the development of pockets of open cells (POCs) in subtropical stratocumulus cloud layers. The aerosol scheme prognoses mass and number concentration of a single log-normal accumulation mode with surface and entrainment sources, evolving subject to processing of activated aerosol and scavenging of dry aerosol by cloud and rain. The LES with the aerosol scheme is applied to a range of steadily-forced simulations idealized from a well-observed POC case. The long-term system evolution is explored with extended two-dimensional simulations of up to 20 days, mostly with diurnally-averaged insolation. One three-dimensional two-day simulation confirms the initial development of the corresponding two-dimensional case. With weak mean subsidence, an initially aerosol-rich mixed layer deepens, the capping stratocumulus cloud slowly thickens and increasingly depletes aerosol via precipitation accretion, then the boundary layer transitions within a few hours into an open-cell regime with scattered precipitating cumuli, in which entrainment is much weaker. The inversion slowly collapses for several days until the cumulus clouds are too shallow to efficiently precipitate. Inversion cloud then reforms and radiatively drives renewed entrainment, allowing the boundary layer to deepen and become more aerosol-rich, until the stratocumulus layer thickens enough to undergo another cycle of open-cell formation. If mean subsidence is stronger, the stratocumulus never thickens enough to initiate drizzle and settles into a steady state. With lower initial aerosol concentrations, this system quickly transitions into open cells, collapses, and redevelops into a different steady state with a shallow, optically thin cloud layer. In these steady states, interstitial scavenging by cloud droplets is the main sink of aerosol number. The system is described in a reduced two-dimensional phase plane with inversion height and boundary-layer average aerosol concentrations as the state variables. Simulations with a full diurnal cycle show similar evolutions, except that open-cell formation is phase-locked into the early morning hours. The same steadily-forced modeling framework is applied to the development and evolution of a POC and the surrounding overcast boundary layer. An initial aerosol perturbation applied to a portion of the model domain leads that portion to transition into open-cell convection, forming a POC. Reduced entrainment in the POC induces a negative feedback between areal fraction covered by the POC and boundary layer depth changes. This stabilizes the system by controlling liquid water path and precipitation sinks of aerosol number in the overcast region, while also preventing boundary-layer collapse within the POC, allowing the POC and overcast to coexist indefinitely in a quasi-steady equilibrium.

  17. The Cloud-Aerosol Transport System (CATS): A New Lidar for Aerosol and Cloud Profiling from the International Space Station

    NASA Technical Reports Server (NTRS)

    Welton, Ellsworth J.; McGill, Mathew J.; Yorks, John E.; Hlavka, Dennis L.; Hart, William D.; Palm, Stephen P.; Colarco, Peter R.

    2012-01-01

    Spaceborne lidar profiling of aerosol and cloud layers has been successfully implemented during a number of prior missions, including LITE, ICESat, and CALIPSO. Each successive mission has added increased capability and further expanded the role of these unique measurements in a wide variety of applications ranging from climate, to air quality, to special event monitoring (i.e., volcanic plumes). Many researchers have come to rely on the availability of profile data from CALIPSO, especially data coincident with measurements from other A-Train sensors. The CALIOP lidar on CALIPSO continues to operate well as it enters its fifth year of operations. However, active instruments have more limited lifetimes than their passive counterparts, and we are faced with a potential gap in lidar profiling from space if the CALIOP lidar fails before a new mission is operational. The ATLID lidar on EarthCARE is not expected to launch until 2015 or later, and the lidar component of NASA's proposed Aerosols, Clouds, and Ecosystems (ACE) mission would not launch until after 2020. Here we present a new aerosol and cloud lidar that was recently selected to provide profiling data from the International Space Station (ISS) starting in 2013. The Cloud-Aerosol Transport System (CATS) is a three-wavelength (1064, 532, 355 nm) elastic backscatter lidar with HSRL capability at 532 nm. Depolarization measurements will be made at all wavelengths. The primary objective of CATS is to continue the CALIPSO aerosol and cloud profile data record, ideally with overlap between both missions and EarthCARE. In addition, the near real time (NRT) data capability of the ISS will enable CATS to support operational applications such as aerosol and air quality forecasting and special event monitoring. The HSRL channel will provide a demonstration of technology and a data testbed for direct extinction retrievals in support of ACE mission development. An overview of the instrument and mission will be provided, along with a summary of the science objectives and simulated data. Input from the ICAP community is desired to help plan our NRT mission goals and interactions with ICAP forecasters.

  18. Actuator with built-in viscous damping for isolation and structural control

    NASA Astrophysics Data System (ADS)

    Hyde, T. Tupper; Anderson, Eric H.

    1994-05-01

    This paper describes the development and experimental application of an actuator with built-in viscous damping. An existing passive damper was modified for use as a novel actuation device for isolation and structural control. The device functions by using the same fluid for viscous damping and as a hydraulic lever for a voice coil actuator. Applications for such an actuator include structural control and active isolation. Lumped parameter models capturing structural and fluid effects are presented. Component tests of free stroke, blocked force, and passive complex stiffness are used to update the assumed model parameters. In a complex testbed structure, the new actuator is shown to match the structural damping effectiveness of a regular D-strut when operated passively, and that of a piezoelectric strut with load cell feedback when operated actively. Open and closed loop results are presented for a force isolation application, showing an 8 dB passive and 20 dB active improvement over an undamped mount. An optimized design for a future experimental testbed is developed.

  19. The NASA/OAST telerobot testbed architecture

    NASA Technical Reports Server (NTRS)

    Matijevic, J. R.; Zimmerman, W. F.; Dolinsky, S.

    1989-01-01

    Through a phased development such as a laboratory-based research testbed, the NASA/OAST Telerobot Testbed provides an environment for system test and demonstration of the technology which will usefully complement, significantly enhance, or even replace manned space activities. By integrating advanced sensing, robotic manipulation and intelligent control under human-interactive supervision, the Testbed will ultimately demonstrate execution of a variety of generic tasks suggestive of space assembly, maintenance, repair, and telescience. The Testbed system features a hierarchical layered control structure compatible with the incorporation of evolving technologies as they become available. The Testbed system is physically implemented in a computing architecture which allows for ease of integration of these technologies while preserving the flexibility for test of a variety of man-machine modes. The development currently in progress on the functional and implementation architectures of the NASA/OAST Testbed and capabilities planned for the coming years are presented.

  20. Modeling and Analysis of the Water Cycle: Seasonal and Event Variability at the Walnut River Research Watershed

    NASA Astrophysics Data System (ADS)

    Miller, M. A.; Miller, N. L.; Sale, M. J.; Springer, E. P.; Wesely, M. L.; Bashford, K. E.; Conrad, M. E.; Costigan, K. R.; Kemball-Cook, S.; King, A. W.; Klazura, G. E.; Lesht, B. M.; Machavaram, M. V.; Sultan, M.; Song, J.; Washington-Allen, R.

    2001-12-01

    A multi-laboratory Department of Energy (DOE) team (Argonne National Laboratory, Brookhaven National Laboratory, Los Alamos National Laboratory, Lawrence Berkeley National Laboratory, Oak Ridge National Laboratory) has begun an investigation of hydrometeorological processes at the Whitewater subbasin of the Walnut River Watershed in Kansas. The Whitewater sub-basin is viewed as a DOE long-term hydrologic research watershed and resides within the well-instrumented Atmospheric Radiation Measurement/Cloud Radiation Atmosphere Testbed (ARM/CART) and the proposed Arkansas-Red River regional hydrologic testbed. The focus of this study is the development and evaluation of coupled regional to watershed scale models that simulate atmospheric, land surface, and hydrologic processes as systems with linkages and feedback mechanisms. This pilot is the precursor to the proposed DOE Water Cycle Dynamics Prediction Program. An important new element is the introduction of water isotope budget equations into mesoscale and hydrologic modeling. Two overarching hypotheses are part of this pilot study: (1) Can the predictability of the regional water balance be improved using high-resolution model simulations that are constrained and validated using new water isotope and hydrospheric water measurements? (2) Can water isotopic tracers be used to segregate different pathways through the water cycle and predict a change in regional climate patterns? Initial results of the pilot will be presented along with a description and copies of the proposed DOE Water Cycle Dynamics Prediction Program.

  1. Structure and organization of Stratocumulus fields: A network approach

    NASA Astrophysics Data System (ADS)

    Glassmeier, Franziska; Feingold, Graham

    2017-04-01

    The representation of Stratocumulus (Sc) clouds and their radiative impact is one of the largest challenges for global climate models. Aerosol-cloud-precipitation interactions greatly contribute to this challenge by influencing the morphology of Sc fields: In the absence of rain, Sc are arranged in a relatively regular pattern of cloudy cells separated by cloud-free rings of downwelling air ('closed cells'). Raining cloud fields, in contrast, exhibit an oscillating pattern of cloudy rings surrounding cloud-free cells of negatively buoyant air caused by sedimentation and evaporation of rain ('open cells'). Surprisingly, these regular structures of open and closed cellular Sc fields and their potential for the development of new parameterizations have hardly been explored. In this contribution, we approach the organization of Sc from the perspective of a 2-dimensional random network. We find that cellular networks derived from LES simulations of open- and closed-cell Sc cases are almost indistinguishable and share the following features: (i) The distributions of nearest neighbors, or cell degree, are centered at six. This corresponds to approximately hexagonal cloud cells and is a direct mathematical consequence (Euler formula) of the triple junctions featured by Sc organization, as sketched below. (ii) The degree of individual cells is found to be proportional to the normalized size of the cells. This means that cell arrangement is independent of the typical cell size. (iii) Reflecting the continuously renewing dynamics of Sc fields, large (high-degree) cells tend to be neighbored by small (low-degree) cells and vice versa. These macroscopic network properties emerge independent of the state of the Sc field because the different processes governing the evolution of closed as compared to open cells correspond to topologically equivalent network dynamics. By developing a heuristic model, we show that open and closed cell dynamics can both be mimicked by versions of cell division and cell disappearance and are biased towards the expansion of smaller cells. As a conclusion of our network analysis, Sc organization can be characterized by a typical length scale and a scale-independent cell arrangement. While the typical length scale emerges from the full complexity of aerosol-cloud-precipitation-radiation interactions, cell arrangement is independent of cloud processes and its evolution could be parameterized based on our heuristic model.
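
    For reference, the Euler-formula argument invoked in point (i) is a standard result for planar cellular networks with triple junctions; a sketch of the derivation (textbook material, not taken from the paper):

    ```latex
    % Planar cellular network with V vertices, E edges, F cells: V - E + F = 2.
    % Triple junctions mean every vertex joins exactly 3 edges, so 2E = 3V,
    % hence E = 3(F - 2). Each edge separates two cells, so the mean number
    % of sides per cell tends to six as the number of cells grows:
    V - E + F = 2, \qquad 2E = 3V
    \;\Longrightarrow\;
    \langle n \rangle = \frac{2E}{F} = \frac{6(F-2)}{F} \xrightarrow{\;F \to \infty\;} 6
    ```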

  2. Multi-Dimensional Optimization for Cloud Based Multi-Tier Applications

    ERIC Educational Resources Information Center

    Jung, Gueyoung

    2010-01-01

    Emerging trends toward cloud computing and virtualization have been opening new avenues to meet enormous demands of space, resource utilization, and energy efficiency in modern data centers. By being allowed to host many multi-tier applications in consolidated environments, cloud infrastructure providers enable resources to be shared among these…

  3. Extending Climate Analytics-as-a-Service to the Earth System Grid Federation

    NASA Astrophysics Data System (ADS)

    Tamkin, G.; Schnase, J. L.; Duffy, D.; McInerney, M.; Nadeau, D.; Li, J.; Strong, S.; Thompson, J. H.

    2015-12-01

    We are building three extensions to prior-funded work on climate analytics-as-a-service that will benefit the Earth System Grid Federation (ESGF) as it addresses the Big Data challenges of future climate research: (1) We are creating a cloud-based, high-performance Virtual Real-Time Analytics Testbed supporting a select set of climate variables from six major reanalysis data sets. This near real-time capability will enable advanced technologies like the Cloudera Impala-based Structured Query Language (SQL) query capabilities and Hadoop-based MapReduce analytics over native NetCDF files while providing a platform for community experimentation with emerging analytic technologies. (2) We are building a full-featured Reanalysis Ensemble Service comprising monthly means data from six reanalysis data sets. The service will provide a basic set of commonly used operations over the reanalysis collections. The operations will be made accessible through NASA's climate data analytics Web services and our client-side Climate Data Services (CDS) API. (3) We are establishing an Open Geospatial Consortium (OGC) WPS-compliant Web service interface to our climate data analytics service that will enable greater interoperability with next-generation ESGF capabilities. The CDS API will be extended to accommodate the new WPS Web service endpoints as well as ESGF's Web service endpoints. These activities address some of the most important technical challenges for server-side analytics and support the research community's requirements for improved interoperability and improved access to reanalysis data.
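
    On the client side, an OGC WPS-compliant interface can be explored with standard tooling; a sketch using OWSLib against a placeholder URL (not the actual service address):

    ```python
    # Discover the processes offered by a hypothetical OGC WPS endpoint using
    # OWSLib, one common Python WPS client (not necessarily the project's).
    from owslib.wps import WebProcessingService

    wps = WebProcessingService("https://example.nasa.gov/wps")  # placeholder URL
    wps.getcapabilities()

    # Each advertised process corresponds to a server-side analytic operation.
    for process in wps.processes:
        print(process.identifier, "-", process.title)
    ```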

  4. Synergy of stereo cloud top height and ORAC optimal estimation cloud retrieval: evaluation and application to AATSR

    NASA Astrophysics Data System (ADS)

    Fisher, Daniel; Poulsen, Caroline A.; Thomas, Gareth E.; Muller, Jan-Peter

    2016-03-01

    In this paper we evaluate the impact on the cloud parameter retrievals of the ORAC (Optimal Retrieval of Aerosol and Cloud) algorithm following the inclusion of stereo-derived cloud top heights as a priori information. This is performed in a mathematically rigorous way using the ORAC optimal estimation retrieval framework, which includes the facility to use such independent a priori information. Key to the use of a priori information is a characterisation of its associated uncertainty (see the cost function sketched below). This paper demonstrates the improvements that are possible using this approach and also considers their impact on the microphysical cloud parameters retrieved. The Along-Track Scanning Radiometer (AATSR) instrument has two views and three thermal channels, so it is well placed to demonstrate the synergy of the two techniques. The stereo retrieval is able to improve the accuracy of the retrieved cloud top height when compared to collocated Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO), particularly in the presence of boundary layer inversions and high clouds. The impact of the stereo a priori information on the microphysical cloud properties of cloud optical thickness (COT) and effective radius (RE) was evaluated and generally found to be very small for single-layer cloud conditions over open water (mean RE differences of 2.2 (±5.9) microns and mean COT differences of 0.5 (±1.8) for single-layer ice clouds over open water at elevations above 9 km, which are most strongly affected by the inclusion of the a priori).
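
    "Optimal estimation" here is the standard Rodgers-type formulation, in which the stereo height enters as one element of the a priori state vector with its characterized uncertainty; schematically (textbook form, not copied from the paper):

    ```latex
    % Optimal-estimation cost function: measurement misfit plus departure from
    % the a priori state x_a, each weighted by its error covariance
    % (S_e: measurement/forward-model errors, S_a: a priori uncertainties).
    J(\mathbf{x}) = \left[\mathbf{y} - F(\mathbf{x})\right]^{T}\mathbf{S}_e^{-1}\left[\mathbf{y} - F(\mathbf{x})\right]
                  + \left(\mathbf{x} - \mathbf{x}_a\right)^{T}\mathbf{S}_a^{-1}\left(\mathbf{x} - \mathbf{x}_a\right)
    % A well-characterised stereo cloud-top height tightens the corresponding
    % diagonal entry of S_a, pulling the retrieved height toward the stereo value.
    ```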

  5. COLUMBUS as Engineering Testbed for Communications and Multimedia Equipment

    NASA Astrophysics Data System (ADS)

    Bank, C.; Anspach von Broecker, G. O.; Kolloge, H.-G.; Richters, M.; Rauer, D.; Urban, G.; Canovai, G.; Oesterle, E.

    2002-01-01

    The paper presents ongoing activities to prepare COLUMBUS for communications and multimedia technology experiments. For this purpose, Astrium SI, Bremen, has studied several options for how to best combine the given system architecture with flexible, state-of-the-art interface avionics and software. These activities have been conducted in coordination with, and partially under contract of, DLR and ESA/ESTEC. Moreover, Astrium SI has realized three testbeds for multimedia software and hardware testing under its own funding. The experimental core avionics unit, about half a double rack, establishes the core of a new multi-user experiment facility for this type of investigation onboard COLUMBUS, which shall be available to all users of COLUMBUS. It allows for the connection of 2nd-generation payload, that is, payload requiring broadband data transfer and near-real-time access by the Principal Investigator on ground, to test highly interactive and near-real-time payload operation. The facility is also foreseen to test new equipment to provide the astronauts onboard the ISS/COLUMBUS with bi-directional hi-fi voice and video connectivity to ground, private voice communications and e-mail, and a multimedia workstation for ops training and recreation. Connection to an appropriate Wide Area Network (WAN) on Earth is possible. The facility will include a broadband data transmission front-end terminal, which is mounted externally on the COLUMBUS module. This equipment provides high flexibility due to its completely transparent transmit and receive chains, its steerable multi-frequency antenna system, and its own thermal and power control and distribution. The equipment is monitored and controlled via the COLUMBUS internal facility. It combines several new hardware items newly developed for the next generation of broadband communication satellites and operates in Ka-band with the experimental ESA data relay satellite ARTEMIS. The equipment is also TDRSS compatible; the open-loop antenna tracking system employing star sensors enables use with any other GEO data relay satellite system. In order to be prepared for the upcoming telecom standards for ground distribution of spacecraft-generated data, the interface avionics allows for testing ATM-based data formatting and routing. Three testbeds accompany these studies and designs: i) a cable-and-connector testbed measures the signal characteristics for data transfer of up to 200 Mbps; ii) an avionics and embedded software testbed prepares for data formatting, routing, and storage in CCSDS and ATM; iii) a software testbed tests newly developed S/W man-machine interfaces and simulates bandwidth limitations. This makes COLUMBUS a true technology testbed for a variety of engineering topics: application of terrestrial standard data formats for broadband, near-real-time applications in space; qualification and test of off-the-shelf multimedia equipment in manned spacecraft; secure data transmission in flexible VPNs; in-orbit demonstration of advanced data transmission technology; elaboration of efficient crew and ground operations and training procedures; and evaluation of personalized displays (S/W HFI) for long-duration space missions.

  6. Analysis, modeling, and simulation (AMS) testbed development and evaluation to support dynamic applications (DMA) and active transportation and demand management (ATDM) programs — leveraging AMS testbed outputs for ATDM analysis – a primer.

    DOT National Transportation Integrated Search

    2017-08-01

    The primary objective of AMS Testbed project is to develop multiple simulation Testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. Throug...

  7. Advanced turboprop testbed systems study

    NASA Technical Reports Server (NTRS)

    Goldsmith, I. M.

    1982-01-01

    The proof of concept, feasibility, and verification of the advanced prop fan and of the integrated advanced prop fan aircraft are established. The use of existing hardware is compatible with having a successfully expedited testbed ready for flight. A prop fan testbed aircraft is definitely feasible and necessary for verification of prop fan/prop fan aircraft integrity. The Allison T701 is most suitable as a propulsor and modification of existing engine and propeller controls are adequate for the testbed. The airframer is considered the logical overall systems integrator of the testbed program.

  8. Enhancing data utilization through adoption of cloud-based data architectures (Invited Paper 211869)

    NASA Astrophysics Data System (ADS)

    Kearns, E. J.

    2017-12-01

    A traditional approach to data distribution and utilization of open government data involves continuously moving those data from a central government location to each potential user, who would then utilize them on their local computer systems. An alternate approach would be to bring those users to the open government data, where users would also have access to computing and analytics capabilities that would support data utilization. NOAA's Big Data Project is exploring such an alternate approach through an experimental collaboration with Amazon Web Services, Google Cloud Platform, IBM, Microsoft Azure, and the Open Commons Consortium. As part of this ongoing experiment, NOAA is providing open data of interest which are freely hosted by the Big Data Project Collaborators, who provide a variety of cloud-based services and capabilities to enable utilization by data users. By the terms of the agreement, the Collaborators may charge for those value-added services and processing capacities to recover their costs to freely host the data and to generate profits if so desired. Initial results have shown sustained increases in data utilization from 2 to over 100 times previously-observed access patterns from traditional approaches. Significantly increased utilization speed as compared to the traditional approach has also been observed by NOAA data users who have volunteered their experiences on these cloud-based systems. The potential for implementing and sustaining the alternate cloud-based approach as part of a change in operational data utilization strategies will be discussed.

  9. The Confluence of GIS, Cloud and Open Source, Enabling Big Raster Data Applications

    NASA Astrophysics Data System (ADS)

    Plesea, L.; Emmart, C. B.; Boller, R. A.; Becker, P.; Baynes, K.

    2016-12-01

    The rapid evolution of available cloud services is profoundly changing the way applications are being developed and used. Massive object stores, service scalability, and continuous integration are some of the most important cloud technology advances that directly influence science applications and GIS. At the same time, more and more scientists are using GIS platforms in their day-to-day research. Yet with new opportunities there are always some challenges. Given the large amount of data commonly required in science applications, usually large raster datasets, connectivity is one of the biggest problems. Connectivity has two aspects: the limited bandwidth and latency of the communication link due to the geographical location of the resources, and the interoperability and intrinsic efficiency of the interface protocol used to connect. NASA and Esri are actively helping each other and collaborating on a few open source projects, aiming to provide some of the core technology components to directly address the GIS-enabled data connectivity problems. Last year Esri contributed LERC, a very fast and efficient compression algorithm, to the GDAL/MRF format, which itself is a NASA/Esri collaboration project. The MRF raster format has some cloud-aware features that make it possible to build high performance web services on cloud platforms, as some of the Esri projects demonstrate. Currently, another NASA open source project, the high performance OnEarth WMTS server, is being refactored and enhanced to better integrate with MRF, GDAL, and Esri software. Taken together, GDAL, MRF, and OnEarth form the core of an open source CloudGIS toolkit that is already showing results. Since it is well integrated with GDAL, which is the most common interoperability component of GIS applications, this approach should improve the connectivity and performance of many science and GIS applications in the cloud.
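
    A hedged sketch of the kind of conversion these components enable, using GDAL's Python bindings to write MRF with LERC compression (the input file is hypothetical):

    ```python
    # Convert a raster to the cloud-oriented MRF format with LERC compression
    # using GDAL's Python bindings. The input path is hypothetical.
    from osgeo import gdal

    gdal.UseExceptions()

    src = "elevation.tif"          # hypothetical input raster
    dst = "elevation.mrf"

    gdal.Translate(
        dst,
        src,
        format="MRF",
        creationOptions=["COMPRESS=LERC", "BLOCKSIZE=512"],
    )

    # Re-open the result and report its dimensions as a sanity check.
    ds = gdal.Open(dst)
    print(ds.RasterXSize, ds.RasterYSize, ds.RasterCount)
    ```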

  10. Genes2WordCloud: a quick way to identify biological themes from gene lists and free text.

    PubMed

    Baroukh, Caroline; Jenkins, Sherry L; Dannenfelser, Ruth; Ma'ayan, Avi

    2011-10-13

    Word-clouds recently emerged on the web as a solution for quickly summarizing text by maximizing the display of most relevant terms about a specific topic in the minimum amount of space. As biologists are faced with the daunting amount of new research data commonly presented in textual formats, word-clouds can be used to summarize and represent biological and/or biomedical content for various applications. Genes2WordCloud is a web application that enables users to quickly identify biological themes from gene lists and research relevant text by constructing and displaying word-clouds. It provides users with several different options and ideas for the sources that can be used to generate a word-cloud. Different options for rendering and coloring the word-clouds give users the flexibility to quickly generate customized word-clouds of their choice. Genes2WordCloud is a word-cloud generator and a word-cloud viewer that is based on WordCram implemented using Java, Processing, AJAX, mySQL, and PHP. Text is fetched from several sources and then processed to extract the most relevant terms with their computed weights based on word frequencies. Genes2WordCloud is freely available for use online; it is open source software and is available for installation on any web-site along with supporting documentation at http://www.maayanlab.net/G2W. Genes2WordCloud provides a useful way to summarize and visualize large amounts of textual biological data or to find biological themes from several different sources. The open source availability of the software enables users to implement customized word-clouds on their own web-sites and desktop applications.
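
    The "relevant terms with computed weights based on word frequencies" step reduces to a few lines; a toy Python version for illustration (Genes2WordCloud itself is implemented with WordCram, Java, Processing, AJAX, mySQL, and PHP, as noted above):

    ```python
    # Toy version of the term-weighting step behind a word-cloud: count word
    # frequencies in free text, drop common stop words, keep the top terms.
    import re
    from collections import Counter

    STOP = {"the", "a", "of", "and", "in", "to", "is", "for", "with", "are"}

    def top_terms(text: str, k: int = 25) -> list[tuple[str, int]]:
        words = re.findall(r"[a-z]+", text.lower())
        counts = Counter(w for w in words if w not in STOP and len(w) > 2)
        return counts.most_common(k)

    # The (word, count) pairs would then set font sizes in the rendered cloud.
    print(top_terms("gene lists and free text about genes, clouds, and genes"))
    ```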

  12. Analysis, Modeling, and Simulation (AMS) Testbed Development and Evaluation to Support Dynamic Mobility Applications (DMA) and Active Transportation and Demand Management (ATDM) Programs - calibration Report for Phoenix Testbed : Final Report. [supporting datasets - Phoenix Testbed

    DOT National Transportation Integrated Search

    2017-07-26

    The datasets in this zip file are in support of FHWA-JPO-16-379, Analysis, Modeling, and Simulation (AMS) Testbed Development and Evaluation to Support Dynamic Mobility Applications (DMA) and Active Transportation and Demand Management (ATDM) Program...

  13. Analysis, modeling, and simulation (AMS) testbed development and evaluation to support dynamic mobility applications (DMA) and active transportation and demand management (ATDM) programs — summary report for the Chicago testbed. [supporting datasets - Chicago Testbed

    DOT National Transportation Integrated Search

    2017-04-01

    The datasets in this zip file are in support of Intelligent Transportation Systems Joint Program Office (ITS JPO) report FHWA-JPO-16-385, "Analysis, Modeling, and Simulation (AMS) Testbed Development and Evaluation to Support Dynamic Mobility Applica...

  14. Analysis, modeling, and simulation (AMS) testbed development and evaluation to support dynamic mobility applications (DMA) and active transportation and demand management (ATDM) programs : Dallas testbed analysis plan. [supporting datasets - Dallas Testbed

    DOT National Transportation Integrated Search

    2017-07-26

    The datasets in this zip file are in support of Intelligent Transportation Systems Joint Program Office (ITS JPO) report FHWA-JPO-16-385, "Analysis, Modeling, and Simulation (AMS) Testbed Development and Evaluation to Support Dynamic Mobility Applica...

  15. Analysis, Modeling, and Simulation (AMS) Testbed Development and Evaluation to Support Dynamic Mobility Applications (DMA) and Active Transportation and Demand Management (ATDM) Programs - San Mateo Testbed Analysis Plan [supporting datasets - San Mateo Testbed

    DOT National Transportation Integrated Search

    2017-06-26

    This zip file contains files of data to support FHWA-JPO-16-370, Analysis, Modeling, and Simulation (AMS) Testbed Development and Evaluation to Support Dynamic Mobility Applications (DMA) and Active Transportation and Demand Management (ATDM) Program...

  16. Development of a Scalable Testbed for Mobile Olfaction Verification.

    PubMed

    Zakaria, Syed Muhammad Mamduh Syed; Visvanathan, Retnam; Kamarudin, Kamarulzaman; Yeon, Ahmad Shakaff Ali; Md Shakaff, Ali Yeon; Zakaria, Ammar; Kamarudin, Latifah Munirah

    2015-12-09

    The lack of information on ground truth gas dispersion and experiment verification has impeded the development of mobile olfaction systems, especially for real-world conditions. In this paper, an integrated testbed for mobile gas sensing experiments is presented. The integrated 3 m × 6 m testbed was built to provide real-time ground truth information for mobile olfaction system development. The testbed consists of a 72-gas-sensor array, namely the Large Gas Sensor Array (LGSA), a localization system based on cameras, and a wireless communication backbone for robot communication and integration into the testbed system. Furthermore, the data collected from the testbed may be streamed into a simulation environment to expedite development. Calibration results using ethanol have shown that using a large number of gas sensors in the LGSA is feasible and can produce coherent signals when exposed to the same concentrations. The results have shown that the testbed was able to capture the time-varying characteristics and the variability of a gas plume in a 2 h experiment, thus providing time-dependent ground truth concentration maps (sketched below). The authors have demonstrated the ability of the mobile olfaction testbed to monitor, verify, and thus provide insight into gas distribution mapping experiments.
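
    A minimal sketch of turning one snapshot of such an array into a ground-truth concentration map; the 6 × 12 layout and the readings below are invented for illustration:

    ```python
    # Grid a snapshot of gas-sensor readings into a coarse ground-truth
    # concentration map, one cell per sensor, as a 72-sensor (6 x 12) array
    # covering a 3 m x 6 m testbed might be laid out. All values are invented.
    import numpy as np

    rows, cols = 6, 12                        # hypothetical sensor layout
    readings = np.random.default_rng(0).uniform(0.0, 50.0, rows * cols)  # ppm

    conc_map = readings.reshape(rows, cols)   # one time slice of the map
    print("peak concentration [ppm]:", round(float(conc_map.max()), 1))
    print("plume peak cell (row, col):",
          np.unravel_index(conc_map.argmax(), conc_map.shape))
    ```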

  17. Development of a Scalable Testbed for Mobile Olfaction Verification

    PubMed Central

    Syed Zakaria, Syed Muhammad Mamduh; Visvanathan, Retnam; Kamarudin, Kamarulzaman; Ali Yeon, Ahmad Shakaff; Md. Shakaff, Ali Yeon; Zakaria, Ammar; Kamarudin, Latifah Munirah

    2015-01-01

    The lack of information on ground truth gas dispersion and experiment verification information has impeded the development of mobile olfaction systems, especially for real-world conditions. In this paper, an integrated testbed for mobile gas sensing experiments is presented. The integrated 3 m × 6 m testbed was built to provide real-time ground truth information for mobile olfaction system development. The testbed consists of a 72-gas-sensor array, namely the Large Gas Sensor Array (LGSA), a camera-based localization system and a wireless communication backbone for robot communication and integration into the testbed system. Furthermore, the data collected from the testbed may be streamed into a simulation environment to expedite development. Calibration results using ethanol have shown that using a large number of gas sensors in the LGSA is feasible and can produce coherent signals when exposed to the same concentrations. The results have shown that the testbed was able to capture the time-varying characteristics and the variability of a gas plume in a 2 h experiment, thus providing time-dependent ground truth concentration maps. The authors have demonstrated the ability of the mobile olfaction testbed to monitor, verify and thus provide insight into gas distribution mapping experiments. PMID:26690175

  18. Trace explosives sensor testbed (TESTbed)

    NASA Astrophysics Data System (ADS)

    Collins, Greg E.; Malito, Michael P.; Tamanaha, Cy R.; Hammond, Mark H.; Giordano, Braden C.; Lubrano, Adam L.; Field, Christopher R.; Rogers, Duane A.; Jeffries, Russell A.; Colton, Richard J.; Rose-Pehrsson, Susan L.

    2017-03-01

    A novel vapor delivery testbed, referred to as the Trace Explosives Sensor Testbed, or TESTbed, is demonstrated that is amenable to both high- and low-volatility explosives vapors including nitromethane, nitroglycerine, ethylene glycol dinitrate, triacetone triperoxide, 2,4,6-trinitrotoluene, pentaerythritol tetranitrate, and hexahydro-1,3,5-trinitro-1,3,5-triazine. The TESTbed incorporates a six-port dual-line manifold system allowing for rapid actuation between a dedicated clean air source and a trace explosives vapor source. Explosives and explosives-related vapors can be sourced through a number of means including gas cylinders, permeation tube ovens, dynamic headspace chambers, and a Pneumatically Modulated Liquid Delivery System coupled to a perfluoroalkoxy total-consumption microflow nebulizer. Key features of the TESTbed include continuous and pulseless control of trace vapor concentrations with wide dynamic range of concentration generation, six sampling ports with reproducible vapor profile outputs, limited low-volatility explosives adsorption to the manifold surface, temperature and humidity control of the vapor stream, and a graphical user interface for system operation and testing protocol implementation.

  19. Exploring the nonlinear cloud and rain equation

    NASA Astrophysics Data System (ADS)

    Koren, Ilan; Tziperman, Eli; Feingold, Graham

    2017-01-01

    Marine stratocumulus cloud decks are regarded as the reflectors of the climate system, returning a significant part of the incoming solar radiation back to space and thus cooling the atmosphere. Such clouds can exist in two stable modes, open and closed cells, for a wide range of environmental conditions. This emergent behavior of the system, and its sensitivity to aerosol and environmental properties, is captured by a set of nonlinear equations. Here, using linear stability analysis, we express the transition from a steady to a limit-cycle state analytically, showing how it depends on the model parameters. We show that the control of the droplet concentration (N), the environmental carrying capacity (H0), and the cloud recovery parameter (τ) can be linked by a single nondimensional parameter, μ = √N/(ατH0), suggesting that for deeper clouds the transition from open (oscillating) to closed (stable fixed point) cells will occur at higher droplet concentration (i.e., higher aerosol loading). The analytical calculations of the possible states, and how they are affected by changes in aerosol and the environmental variables, provide an enhanced understanding of the complex interactions of clouds and rain.
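
    As a reading aid (not part of the record), the nondimensional parameter quoted above can be written out in LaTeX; N, H0 and τ are defined in the abstract, while α is the proportionality constant the abstract leaves implicit:

      % mu combines droplet concentration N, carrying capacity H_0,
      % recovery parameter tau and a rate constant alpha (notation assumed)
      \mu = \frac{\sqrt{N}}{\alpha \tau H_0}

    Read this way, the abstract's claim follows directly: deepening the cloud layer raises H0, so a larger N is needed to reach the same critical value of μ at which open (oscillating) cells give way to closed (stable) ones.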

  20. Tidal disruption of open clusters in their parent molecular clouds

    NASA Technical Reports Server (NTRS)

    Long, Kevin

    1989-01-01

    A simple model of tidal encounters has been applied to the problem of an open cluster in a clumpy molecular cloud. The parameters of the clumps are taken from the Blitz, Stark, and Long (1988) catalog of clumps in the Rosette molecular cloud. Encounters are modeled as impulsive, rectilinear collisions between Plummer spheres, but the tidal approximation is not invoked. Mass and binding energy changes during an encounter are computed by considering the velocity impulses given to individual stars in a random realization of a Plummer sphere. Mean rates of mass and binding energy loss are then computed by integrating over many encounters. Self-similar evolutionary calculations using these rates indicate that the disruption process is most sensitive to the cluster radius and relatively insensitive to cluster mass. The calculations indicate that clusters which are born in a cloud similar to the Rosette with a cluster radius greater than about 2.5 pc will not survive long enough to leave the cloud. The majority of clusters, however, have smaller radii and will survive the passage through their parent cloud.
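
    As a rough numerical sketch of the kind of calculation described above (illustrative only, not the paper's code: the per-star kick Δv = 2GMc/(V d) for a rectilinear point-mass passage and the Plummer inverse-CDF sampling are standard textbook results, and all parameter values below are arbitrary):

      import numpy as np

      G = 4.301e-3  # gravitational constant in pc (km/s)^2 / Msun

      def sample_plummer_positions(n, a, rng):
          """Random realization of star positions in a Plummer sphere of scale radius a (pc)."""
          u = rng.random(n)
          r = a / np.sqrt(u ** (-2.0 / 3.0) - 1.0)  # inverse CDF of the Plummer mass profile
          cos_t = rng.uniform(-1.0, 1.0, n)         # isotropic directions
          phi = rng.uniform(0.0, 2.0 * np.pi, n)
          sin_t = np.sqrt(1.0 - cos_t ** 2)
          return np.column_stack([r * sin_t * np.cos(phi), r * sin_t * np.sin(phi), r * cos_t])

      def impulsive_energy_gain(pos, m_star, m_clump, v_rel, b):
          """Internal-energy gain from one impulsive, rectilinear clump passage
          (clump moves along x; closest approach offset b along y, in pc)."""
          offset = pos[:, 1:] - np.array([b, 0.0])      # star offsets from the clump track
          d2 = np.einsum("ij,ij->i", offset, offset)
          kick = 2.0 * G * m_clump * offset / (v_rel * d2[:, None])  # vector kick, km/s
          kick -= kick.mean(axis=0)   # only differential kicks heat the cluster
          # <v . dv> averages to zero for an isotropic cluster, so <dE> = sum(m |dv|^2 / 2)
          return 0.5 * m_star * np.sum(kick ** 2)

      rng = np.random.default_rng(42)
      stars = sample_plummer_positions(1000, a=1.0, rng=rng)
      dE = impulsive_energy_gain(stars, m_star=0.5, m_clump=3e3, v_rel=5.0, b=5.0)
      print(f"internal-energy gain for this encounter: {dE:.4f} Msun (km/s)^2")

    Integrating such per-encounter gains over the clump population, as the record describes, yields mean rates of mass and binding-energy loss.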

  1. A scalable infrastructure for CMS data analysis based on OpenStack Cloud and Gluster file system

    NASA Astrophysics Data System (ADS)

    Toor, S.; Osmani, L.; Eerola, P.; Kraemer, O.; Lindén, T.; Tarkoma, S.; White, J.

    2014-06-01

    The challenge of providing a resilient and scalable computational and data management solution for massive scale research environments requires continuous exploration of new technologies and techniques. In this project the aim has been to design a scalable and resilient infrastructure for CERN HEP data analysis. The infrastructure is based on OpenStack components for structuring a private Cloud with the Gluster File System. We integrate the state-of-the-art Cloud technologies with the traditional Grid middleware infrastructure. Our test results show that the adopted approach provides a scalable and resilient solution for managing resources without compromising on performance and high availability.
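
    Purely as an illustration of how such an OpenStack private cloud can be driven programmatically (this is not the project's code; the cloud entry, image, flavor and network names below are hypothetical), the openstacksdk Python client supports listing and booting instances:

      import openstack

      # Credentials are read from clouds.yaml; "analysis-cloud" is a hypothetical entry.
      conn = openstack.connect(cloud="analysis-cloud")

      # List running analysis nodes.
      for server in conn.compute.servers():
          print(server.name, server.status)

      # Boot one more worker (image/flavor/network names are hypothetical).
      image = conn.compute.find_image("cms-analysis-node")
      flavor = conn.compute.find_flavor("m1.large")
      network = conn.network.find_network("private")
      server = conn.compute.create_server(
          name="worker-01", image_id=image.id, flavor_id=flavor.id,
          networks=[{"uuid": network.id}],
      )
      conn.compute.wait_for_server(server)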

  2. !CHAOS: A cloud of controls

    NASA Astrophysics Data System (ADS)

    Angius, S.; Bisegni, C.; Ciuffetti, P.; Di Pirro, G.; Foggetta, L. G.; Galletti, F.; Gargana, R.; Gioscio, E.; Maselli, D.; Mazzitelli, G.; Michelotti, A.; Orrù, R.; Pistoni, M.; Spagnoli, F.; Spigone, D.; Stecchi, A.; Tonto, T.; Tota, M. A.; Catani, L.; Di Giulio, C.; Salina, G.; Buzzi, P.; Checcucci, B.; Lubrano, P.; Piccini, M.; Fattibene, E.; Michelotto, M.; Cavallaro, S. R.; Diana, B. F.; Enrico, F.; Pulvirenti, S.

    2016-01-01

    The paper presents the !CHAOS open source project, which aims to develop a prototype of a national private Cloud Computing infrastructure devoted to accelerator control systems and large experiments of High Energy Physics (HEP). The !CHAOS project has been financed by MIUR (Italian Ministry of Research and Education) and aims to develop a new concept of control system and data acquisition framework by providing, with a high level of abstraction, all the services needed for controlling and managing a large scientific, or non-scientific, infrastructure. A beta version of the !CHAOS infrastructure will be released at the end of December 2015 and will run on private Cloud infrastructures based on OpenStack.

  3. Optimizing the resource usage in Cloud based environments: the Synergy approach

    NASA Astrophysics Data System (ADS)

    Zangrando, L.; Llorens, V.; Sgaravatto, M.; Verlato, M.

    2017-10-01

    Managing resource allocation in a cloud-based data centre serving multiple virtual organizations is a challenging issue. In fact, while batch systems are able to allocate resources to different user groups according to specific shares imposed by the data centre administrator, without a static partitioning of such resources, this is not so straightforward in the most common cloud frameworks, e.g. OpenStack. In the current OpenStack implementation, it is only possible to grant fixed quotas to the different user groups, and these quotas cannot be exceeded by one group even if there are unused resources allocated to other groups. Moreover, in the existing OpenStack implementation, when no resources are available, new requests are simply rejected: it is then up to the client to re-issue the request later. The recently started EU-funded INDIGO-DataCloud project is addressing this issue through "Synergy", a new advanced scheduling service targeted for OpenStack. Synergy adopts a fair-share model for resource provisioning which guarantees that resources are distributed among users following the fair-share policies defined by the administrator, also taking into account the past usage of such resources. We present the architecture of Synergy, the status of its implementation, some preliminary results and the foreseen evolution of the service.
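
    To make the fair-share idea concrete, here is a minimal sketch (an illustration under assumed conventions, not Synergy's actual algorithm; the decay factor and the priority formula are assumptions): pending requests are served for the group whose aged historical usage falls furthest below its target share.

      from dataclasses import dataclass

      @dataclass
      class Group:
          name: str
          share: float          # target fraction of the resources, e.g. 0.4
          usage: float = 0.0    # decayed historical usage (core-hours)

      def decay(groups, factor=0.5):
          """Periodically age historical usage so old consumption matters less."""
          for g in groups:
              g.usage *= factor

      def pick_next(groups, pending):
          """Among groups with pending requests, serve the one whose decayed
          usage is furthest below its target share."""
          total = sum(g.usage for g in groups) or 1.0
          candidates = [g for g in groups if pending.get(g.name)]
          return max(candidates, key=lambda g: g.share - g.usage / total)

      groups = [Group("cms", 0.6, usage=100.0), Group("atlas", 0.4, usage=20.0)]
      pending = {"cms": 3, "atlas": 1}
      print(pick_next(groups, pending).name)  # atlas: furthest below its share

    Unlike fixed quotas, nothing here prevents one group from temporarily using idle capacity; the decayed usage simply pushes its later requests down the order.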

  4. The Wide-Field Imaging Interferometry Testbed: Enabling Techniques for High Angular Resolution Astronomy

    NASA Technical Reports Server (NTRS)

    Rinehart, S. A.; Armstrong, T.; Frey, Bradley J.; Jung, J.; Kirk, J.; Leisawitz, David T.; Leviton, Douglas B.; Lyon, R.; Maher, Stephen; Martino, Anthony J.; hide

    2007-01-01

    The Wide-Field Imaging Interferometry Testbed (WIIT) was designed to develop techniques for wide-field-of-view imaging interferometry using "double-Fourier" methods. These techniques will be important for a wide range of future space-based interferometry missions. We have already provided simple demonstrations of the methodology, and continuing development of the testbed will lead to higher data rates, improved data quality, and refined algorithms for image reconstruction. At present, the testbed effort includes five lines of development: automation of the testbed, operation in an improved environment, acquisition of large high-quality datasets, development of image reconstruction algorithms, and analytical modeling of the testbed. We discuss the progress made towards the first four of these goals; the analytical modeling is discussed in a separate paper within this conference.

  5. Sensitivity of a Simulated Derecho Event to Model Initial Conditions

    NASA Astrophysics Data System (ADS)

    Wang, Wei

    2014-05-01

    Since 2003, the MMM division at NCAR has been experimenting with cloud-permitting-scale weather forecasting using the Weather Research and Forecasting (WRF) model. Over the years, we have tested different model physics and tried different initial and boundary conditions. Not surprisingly, we found that the model's forecasts are more sensitive to the initial conditions than to model physics. In the 2012 real-time experiment, WRF-DART (Data Assimilation Research Testbed) at 15 km was employed to produce initial conditions for a twice-a-day forecast at 3 km. On June 29, this forecast system captured one of the most destructive derecho events on record. In this presentation, we will examine forecast sensitivity to different model initial conditions and try to understand the important features that may contribute to the success of the forecast.

  6. plas.io: Open Source, Browser-based WebGL Point Cloud Visualization

    NASA Astrophysics Data System (ADS)

    Butler, H.; Finnegan, D. C.; Gadomski, P. J.; Verma, U. K.

    2014-12-01

    Point cloud data, in the form of Light Detection and Ranging (LiDAR), RADAR, or semi-global matching (SGM) image processing, are rapidly becoming a foundational data type for quantifying and characterizing geospatial processes. Visualization of these data, due to their overall volume and irregular arrangement, is often difficult. Technological advancements in web browsers, in the form of WebGL and HTML5, have made interactivity and visualization capabilities that once existed only in desktop software ubiquitously available. plas.io is an open-source JavaScript application that provides point cloud visualization, exploitation, and compression features in a web-browser platform, reducing reliance on client-based desktop applications. The wide reach of WebGL and browser-based technologies means plas.io's capabilities can be delivered to a diverse list of devices -- from phones and tablets to high-end workstations -- with very little custom software development. These properties make plas.io an ideal open platform for researchers and software developers to communicate visualizations of complex and rich point cloud data to devices to which everyone has easy access.

  7. Moving Towards a Science-Driven Workbench for Earth Science Solutions

    NASA Astrophysics Data System (ADS)

    Graves, S. J.; Djorgovski, S. G.; Law, E.; Yang, C. P.; Keiser, K.

    2017-12-01

    The NSF-funded EarthCube Integration and Test Environment (ECITE) prototype was proposed as a 2015 Integrated Activities project and resulted in the prototyping of an EarthCube federated cloud environment and the Integration and Testing Framework. The ECITE team has worked with EarthCube science and technology governance committees to define the types of integration, testing and evaluation necessary to achieve and demonstrate interoperability and functionality that benefit and support the objectives of the EarthCube cyber-infrastructure. The scope of ECITE also includes reaching beyond NSF and EarthCube to work with the broader Earth science community, such as the Earth Science Information Partners (ESIP), to incorporate lessons learned from other testbed activities and ultimately provide broader community benefits. This presentation will discuss evolving ECITE ideas for a science-driven workbench: starting from documented science use cases, mapping the use cases to solution scenarios that identify the available technology and data resources matching each use case, generating solution workflows and test plans, testing and evaluating the solutions in a cloud environment, and finally documenting identified technology and data gaps to help drive the development of additional EarthCube resources.

  8. A Proof-of-Concept for Semantically Interoperable Federation of IoT Experimentation Facilities.

    PubMed

    Lanza, Jorge; Sanchez, Luis; Gomez, David; Elsaleh, Tarek; Steinke, Ronald; Cirillo, Flavio

    2016-06-29

    The Internet-of-Things (IoT) is unanimously identified as one of the main pillars of future smart scenarios. The potential of IoT technologies and deployments has already been demonstrated in a number of different application areas, including transport, energy, safety and healthcare. However, despite the growing number of IoT deployments, the majority of IoT applications tend to be self-contained, thereby forming application silos. A lightweight, data-centric integration and combination of these silos presents several challenges that still need to be addressed. Indeed, the ability to combine and synthesize data streams and services from diverse IoT platforms and testbeds holds the promise of increasing the potential of smart applications in terms of size, scope and targeted business context. In this article, a proof-of-concept implementation that federates two different IoT experimentation facilities by means of semantic-based technologies is described. The specification and design of the implemented system and information models are described, together with the practical details of the developments carried out and their integration with the existing IoT platforms supporting the aforementioned testbeds. Overall, the system described in this paper demonstrates that it is possible to open new horizons in the development of IoT applications and experiments at a global scale that transcend the (silo) boundaries of individual deployments, based on the semantic interconnection and interoperability of diverse IoT platforms and testbeds.

  9. A Proof-of-Concept for Semantically Interoperable Federation of IoT Experimentation Facilities

    PubMed Central

    Lanza, Jorge; Sanchez, Luis; Gomez, David; Elsaleh, Tarek; Steinke, Ronald; Cirillo, Flavio

    2016-01-01

    The Internet-of-Things (IoT) is unanimously identified as one of the main pillars of future smart scenarios. The potential of IoT technologies and deployments has already been demonstrated in a number of different application areas, including transport, energy, safety and healthcare. However, despite the growing number of IoT deployments, the majority of IoT applications tend to be self-contained, thereby forming application silos. A lightweight, data-centric integration and combination of these silos presents several challenges that still need to be addressed. Indeed, the ability to combine and synthesize data streams and services from diverse IoT platforms and testbeds holds the promise of increasing the potential of smart applications in terms of size, scope and targeted business context. In this article, a proof-of-concept implementation that federates two different IoT experimentation facilities by means of semantic-based technologies is described. The specification and design of the implemented system and information models are described, together with the practical details of the developments carried out and their integration with the existing IoT platforms supporting the aforementioned testbeds. Overall, the system described in this paper demonstrates that it is possible to open new horizons in the development of IoT applications and experiments at a global scale that transcend the (silo) boundaries of individual deployments, based on the semantic interconnection and interoperability of diverse IoT platforms and testbeds. PMID:27367695

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ozgur, Utku; Tonyali, Samet; Akkaya, Kemal

    Although smart meters are being deployed in many countries with the aim of making the power grid more resilient and efficient, the process of collecting data from smart meters in the Smart Grid (SG) still poses challenges related to consumer privacy. The data collected and transmitted through the Advanced Metering Infrastructure (AMI) can leak sensitive information about consumers if it is sent as plaintext. While a number of solutions have been proposed in the past, deploying these solutions in real life was not possible since the actual AMIs were not accessible to researchers. Therefore, many solutions relied on simulations, which may not be able to capture the performance of these solutions. In this paper, we pick a widely used homomorphic aggregation scheme and implement it in a realistic testbed so that it can be compared with a simulation-based solution. Specifically, we develop a system that provides privacy with the Paillier cryptosystem and two-factor authentication with ECDSA and OpenSSL certificates. In order to test the system, an IEEE 802.11s-based SG AMI network testbed is constructed with Beaglebone Black boards that imitate the behavior of smart meters. The same network is also simulated in the widely used ns-3. The results showed that the ns-3 simulation and testbed results are consistent in most cases and that the proposed system can perform effectively. However, there are still many differences that need to be taken into account in deploying real systems.
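
    For readers unfamiliar with the aggregation primitive mentioned above, here is a minimal sketch using the open-source python-paillier (phe) package; this illustrates additively homomorphic aggregation in general, not the paper's implementation, and the key size and readings are arbitrary:

      from functools import reduce
      from operator import add

      from phe import paillier

      # The utility generates a keypair; meters receive the public key only.
      public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

      # Each meter encrypts its reading (e.g. watt-hours in this interval).
      readings = [512, 430, 701, 389]
      ciphertexts = [public_key.encrypt(r) for r in readings]

      # The aggregator adds ciphertexts without ever seeing individual readings.
      encrypted_total = reduce(add, ciphertexts)

      # Only the utility, holding the private key, can decrypt the neighborhood total.
      print(private_key.decrypt(encrypted_total))  # 2032

    The privacy property is that intermediate nodes handle only ciphertexts, while the decrypted value reveals just the aggregate, not any single household's consumption.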

  11. An Interactive Web-Based Analysis Framework for Remote Sensing Cloud Computing

    NASA Astrophysics Data System (ADS)

    Wang, X. Z.; Zhang, H. M.; Zhao, J. H.; Lin, Q. H.; Zhou, Y. C.; Li, J. H.

    2015-07-01

    Spatiotemporal data, especially remote sensing data, are widely used in ecological, geographical, agricultural, and military research and applications. With the development of remote sensing technology, more and more remote sensing data are accumulated and stored in the cloud. Providing an effective way for cloud users to access and analyse these massive spatiotemporal data from web clients has become an urgent issue. In this paper, we propose a new scalable, interactive and web-based cloud computing solution for massive remote sensing data analysis. We build a spatiotemporal analysis platform to provide the end user with a safe and convenient way to access massive remote sensing data stored in the cloud. The lightweight cloud storage system used to store public data and users' private data is constructed based on an open-source distributed file system. In it, massive remote sensing data are stored as public data, while the intermediate and input data are stored as private data. The elastic, scalable, and flexible cloud computing environment is built using Docker, an open-source lightweight container technology for the Linux operating system. In the Docker containers, open-source software such as IPython, NumPy, GDAL, and GRASS GIS is deployed. Users can write scripts in IPython Notebook web pages through the web browser to process data, and the scripts are submitted to the IPython kernel for execution. By comparing the performance of remote sensing data analysis tasks executed in Docker containers, KVM virtual machines, and physical machines, we conclude that the cloud computing environment built with Docker makes the greatest use of the host system resources and can handle more concurrent spatiotemporal computing tasks. Docker provides resource isolation in terms of IO, CPU, and memory, which offers security guarantees when processing remote sensing data in the IPython Notebook. Users can write complex data processing code on the web directly, so they can design their own data processing algorithms.
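
    As a toy illustration of the container-per-task pattern described above (not the platform's code; the image and command are stand-ins, and the Docker SDK for Python is assumed to be installed), a server-side component can launch an isolated, resource-limited processing job like this:

      import docker

      client = docker.from_env()

      # Run a short-lived, resource-limited container for one user script.
      output = client.containers.run(
          image="python:3.11-slim",
          command=["python", "-c", "print('processing done')"],
          mem_limit="512m",        # memory isolation per task
          cpu_quota=50000,         # ~0.5 CPU of a 100000-microsecond period
          remove=True,             # clean up when the job exits
      )
      print(output.decode())

    The per-container memory and CPU limits are what give the IO/CPU/memory isolation the record refers to, so one user's task cannot starve the others.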

  12. Variable Dynamic Testbed Vehicle Dynamics Analysis

    DOT National Transportation Integrated Search

    1996-03-01

    Keywords: anti-roll bar, emulation, four-wheel steering, lateral response characteristics, simulation, Variable Dynamic Testbed Vehicle, Intelligent Vehicle Initiative (IVI). The Variable Dynamic Testbed Vehicle (VDTV) concept has been proposed as a tool...

  13. Integration of Cloud resources in the LHCb Distributed Computing

    NASA Astrophysics Data System (ADS)

    Úbeda García, Mario; Méndez Muñoz, Víctor; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-06-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it has seamlessly integrated Grid resources and computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack); it instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. With this in mind, the pros and cons of a cloud-based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs based on the idea of Cloud Site. We report on operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we describe the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Software as a Service (SaaS) model.
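
    As a minimal sketch of the multi-interface idea (illustrative only, not VMDIRAC code; the credentials, endpoint and region below are placeholders), Apache Libcloud exposes EC2 and OpenStack clouds through one Python API:

      from libcloud.compute.providers import get_driver
      from libcloud.compute.types import Provider

      # One abstraction, many back-ends: the same Node API covers EC2, OpenStack, etc.
      Ec2 = get_driver(Provider.EC2)
      ec2 = Ec2("ACCESS_KEY", "SECRET_KEY", region="us-east-1")

      OpenStackCls = get_driver(Provider.OPENSTACK)
      ost = OpenStackCls("user", "password",
                         ex_force_auth_url="https://keystone.example.org:5000",
                         ex_force_auth_version="3.x_password")

      for driver in (ec2, ost):
          for node in driver.list_nodes():
              print(driver.type, node.name, node.state)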

  14. Combining Fog Computing with Sensor Mote Machine Learning for Industrial IoT.

    PubMed

    Lavassani, Mehrzad; Forsström, Stefan; Jennehag, Ulf; Zhang, Tingting

    2018-05-12

    Digitalization is a global trend becoming ever more important to our connected and sustainable society. This trend also affects industry, where the Industrial Internet of Things is an important part, and there is a need to conserve spectrum as well as energy when communicating data to a fog or cloud back-end system. In this paper we investigate the benefits of fog computing by proposing a novel distributed learning model on the sensor device and simulating the data stream in the fog, instead of transmitting all raw sensor values to the cloud back-end. To save energy and to communicate as few packets as possible, the updated parameters of the learned model at the sensor device are communicated at longer time intervals to a fog computing system. The proposed framework is implemented and tested in a real-world testbed in order to make quantitative measurements and evaluate the system. Our results show that the proposed model can achieve a 98% decrease in the number of packets sent over the wireless link, and the fog node can still simulate the data stream with an acceptable accuracy of 97%. We also observe an end-to-end delay of 180 ms in our proposed three-layer framework. Hence, the framework shows that a combination of fog and cloud computing, with distributed data modeling at the sensor device, can be beneficial for wireless sensor networks in Industrial Internet of Things applications.
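
    To make the packet-saving idea concrete, here is a minimal sketch under an assumed Gaussian model (not the paper's learning model): the sensor node maintains streaming statistics and ships only the model parameters at long intervals, and the fog node re-simulates the stream from them.

      import random

      class SensorModel:
          """Streaming mean/variance (Welford's algorithm); only (mean, var) leave the node."""
          def __init__(self):
              self.n, self.mean, self.m2 = 0, 0.0, 0.0

          def update(self, x):
              self.n += 1
              d = x - self.mean
              self.mean += d / self.n
              self.m2 += d * (x - self.mean)

          def params(self):
              var = self.m2 / (self.n - 1) if self.n > 1 else 0.0
              return self.mean, var

      model = SensorModel()
      for _ in range(1000):                 # 1000 raw samples stay on the node
          model.update(random.gauss(21.0, 0.5))

      mean, var = model.params()            # one tiny packet instead of 1000
      fog_stream = [random.gauss(mean, var ** 0.5) for _ in range(10)]
      print(f"sent 1 packet; fog simulates e.g. {fog_stream[0]:.2f} degC")

    The ratio of raw samples to parameter packets is what drives the large reduction in transmissions that the record reports.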

  15. Combining Fog Computing with Sensor Mote Machine Learning for Industrial IoT

    PubMed Central

    Lavassani, Mehrzad; Jennehag, Ulf; Zhang, Tingting

    2018-01-01

    Digitalization is a global trend becoming ever more important to our connected and sustainable society. This trend also affects industry, where the Industrial Internet of Things is an important part, and there is a need to conserve spectrum as well as energy when communicating data to a fog or cloud back-end system. In this paper we investigate the benefits of fog computing by proposing a novel distributed learning model on the sensor device and simulating the data stream in the fog, instead of transmitting all raw sensor values to the cloud back-end. To save energy and to communicate as few packets as possible, the updated parameters of the learned model at the sensor device are communicated at longer time intervals to a fog computing system. The proposed framework is implemented and tested in a real-world testbed in order to make quantitative measurements and evaluate the system. Our results show that the proposed model can achieve a 98% decrease in the number of packets sent over the wireless link, and the fog node can still simulate the data stream with an acceptable accuracy of 97%. We also observe an end-to-end delay of 180 ms in our proposed three-layer framework. Hence, the framework shows that a combination of fog and cloud computing, with distributed data modeling at the sensor device, can be beneficial for wireless sensor networks in Industrial Internet of Things applications. PMID:29757227

  16. Implementation of a virtual link between power system testbeds at Marshall Spaceflight Center and Lewis Research Center

    NASA Technical Reports Server (NTRS)

    Doreswamy, Rajiv

    1990-01-01

    The Marshall Space Flight Center (MSFC) owns and operates a space station module power management and distribution (SSM-PMAD) testbed. This system, managed by expert systems, is used to analyze and develop power system automation techniques for Space Station Freedom. The Lewis Research Center (LeRC), Cleveland, Ohio, has developed and implemented a space station electrical power system (EPS) testbed. This system and its power management controller are representative of the overall Space Station Freedom power system. A virtual link is being implemented between the testbeds at MSFC and LeRC. This link would enable configuration of SSM-PMAD as a load center for the EPS testbed at LeRC. This connection will add to the versatility of both systems, and provide an environment of enhanced realism for operation of both testbeds.

  17. Validation of Model-Based Prognostics for Pneumatic Valves in a Demonstration Testbed

    DTIC Science & Technology

    2014-10-02

    predict end of life (EOL) and remaining useful life (RUL). The approach still follows the general estimation-prediction framework developed in the...atmosphere, with linearly increasing leak area. kA2leak = Cleak (16) We define valve end of life (EOL) through open/close time limits of the valves, as in...represents end of life (EOL), and ΔkE represents remaining useful life (RUL). For valves, timing requirements are provided that define the maximum

  18. A Modular Approach to Video Designation of Manipulation Targets for Manipulators

    DTIC Science & Technology

    2014-05-12

    side view of a ray going through a point cloud of a water bottle sitting on the ground. The bottom left image shows the same point cloud after it has...the Robot Operating System (ROS), Point Cloud Library (PCL), and OpenRAVE were used to a great extent to help promote reusability of the code developed during this

  19. The computational structural mechanics testbed procedures manual

    NASA Technical Reports Server (NTRS)

    Stewart, Caroline B. (Compiler)

    1991-01-01

    The purpose of this manual is to document the standard high-level command language procedures of the Computational Structural Mechanics (CSM) Testbed software system. A description of each procedure, including its function, commands, data interface, and use, is presented. This manual is designed to assist users in defining and using command procedures to perform structural analysis, and is intended to be used in conjunction with the CSM Testbed User's Manual and the CSM Testbed Data Library Description.

  20. Exploring Infiniband Hardware Virtualization in OpenNebula towards Efficient High-Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pais Pitta de Lacerda Ruivo, Tiago; Bernabeu Altayo, Gerard; Garzoglio, Gabriele

    2014-11-11

    It has been widely accepted that software virtualization has a big negative impact on high-performance computing (HPC) application performance. This work explores the potential use of Infiniband hardware virtualization in an OpenNebula cloud towards the efficient support of MPI-based workloads. We have implemented, deployed, and tested an Infiniband network on the FermiCloud private Infrastructure-as-a-Service (IaaS) cloud. To avoid software virtualization and thus minimize the virtualization overhead, we employed a technique called Single Root Input/Output Virtualization (SRIOV). Our solution spanned modifications to the Linux hypervisor as well as the OpenNebula manager. We evaluated the performance of the hardware virtualization on up to 56 virtual machines connected by up to 8 DDR Infiniband network links, with micro-benchmarks (latency and bandwidth) as well as with an MPI-intensive application (the HPL Linpack benchmark).

  1. New Air-Launched Small Missile (ALSM) Flight Testbed for Hypersonic Systems

    NASA Technical Reports Server (NTRS)

    Bui, Trong T.; Lux, David P.; Stenger, Mike; Munson, Mike; Teate, George

    2006-01-01

    A new testbed for hypersonic flight research is proposed. Known as the Phoenix air-launched small missile (ALSM) flight testbed, it was conceived to help address the lack of quick-turnaround and cost-effective hypersonic flight research capabilities. The Phoenix ALSM testbed results from utilization of two unique and very capable flight assets: the United States Navy Phoenix AIM-54 long-range, guided air-to-air missile and the NASA Dryden F-15B testbed airplane. The U.S. Navy retirement of the Phoenix AIM-54 missiles from fleet operation has presented an excellent opportunity for converting this valuable flight asset into a new flight testbed. This cost-effective new platform will fill an existing gap in the test and evaluation of current and future hypersonic systems for flight Mach numbers ranging from 3 to 5. Preliminary studies indicate that the Phoenix missile is a highly capable platform. When launched from a high-performance airplane, the guided Phoenix missile can boost research payloads to low hypersonic Mach numbers, enabling flight research in the supersonic-to-hypersonic transitional flight envelope. Experience gained from developing and operating the Phoenix ALSM testbed will be valuable for the development and operation of future higher-performance ALSM flight testbeds as well as responsive microsatellite small-payload air-launched space boosters.

  2. Overview on In-Space Internet Node Testbed (ISINT)

    NASA Technical Reports Server (NTRS)

    Richard, Alan M.; Kachmar, Brian A.; Fabian, Theodore; Kerczewski, Robert J.

    2000-01-01

    The Satellite Networks and Architecture Branch has developed the In-Space Internet Node Technology testbed (ISINT) for investigating the use of commercial Internet products for NASA missions. The testbed connects two closed subnets over a tabletop Ka-band transponder by using commercial routers and modems. Since many NASA assets are in low Earth orbits (LEO's), the testbed simulates the varying signal strength, changing propagation delay, and varying connection times that are normally experienced when communicating to the Earth via a geosynchronous orbiting (GEO) communications satellite. Research results from using this testbed will be used to determine which Internet technologies are appropriate for NASA's future communication needs.

  3. A Fault-Oblivious Extreme-Scale Execution Environment (FOX)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Hensbergen, Eric; Speight, William; Xenidis, Jimi

    IBM Research’s contribution to the Fault Oblivious Extreme-scale Execution Environment (FOX) revolved around three core research deliverables: • collaboration with Boston University around the Kittyhawk cloud infrastructure, which both enabled a development and deployment platform for the project team and provided a fault-injection testbed to evaluate prototypes • operating systems research focused on exploring role-based operating system technologies through collaboration with Sandia National Labs on the NIX research operating system and collaboration with the broader IBM Research community around a hybrid operating system model which became known as FusedOS • IBM Research also participated in an advisory capacity with the Boston University SESA project, the core of which was derived from the K42 operating system research project funded in part by DARPA’s HPCS program. Both of these contributions were built on a foundation of previous operating systems research funding by the Department of Energy’s FastOS Program. Through the course of the X-stack funding we were able to develop prototypes, deploy them on production clusters at scale, and make them available to other researchers. As newer hardware, in the form of BlueGene/Q, came online, we were able to port the prototypes to the new hardware and release the source code for the resulting prototypes as open source to the community. In addition to the open source code for the Kittyhawk and NIX prototypes, we were able to bring the BlueGene/Q Linux patches up to a more recent kernel and contribute them for inclusion by the broader Linux community. The lasting impact of the IBM Research work on FOX can be seen in its effect on the shift of IBM’s approach to HPC operating systems from Linux and Compute Node Kernels to role-based approaches as prototyped by the NIX and FusedOS work. This impact can be seen beyond IBM in follow-on ideas being incorporated into the proposals for the Exascale Operating Systems/Runtime program.

  4. Data management and scientific integration within the Atmospheric Radiation Measurement Program

    NASA Technical Reports Server (NTRS)

    Gracio, Deborah K.; Hatfield, Larry D.; Yates, Kenneth R.; Voyles, Jimmy W.; Tichler, Joyce L.; Cederwall, Richard T.; Laufersweiler, Mark J.; Leach, Martin J.; Singley, Paul

    1995-01-01

    The Atmospheric Radiation Measurement (ARM) Program has been developed by the U.S. Department of Energy with the goal of improving the predictive capabilities of General Circulation Models (GCM's) in their treatment of clouds and radiative transfer effects. To achieve this goal, three experimental testbeds were designed for the deployment of instruments that will collect atmospheric data used to drive the GCM's. Each site, known as a Cloud and Radiation Testbed (CART), consists of a highly available, redundant data system for the collection of data from a variety of instrumentation. The first CART site was deployed in April 1992 in the Southern Great Plains (SGP), Lamont, Oklahoma, with the other two sites to follow in September 1995 in the Tropical Western Pacific and in 1997 on the North Slope of Alaska. Approximately 400 MB of data are transferred per day via the Internet from the SGP site to the ARM Experiment Center at Pacific Northwest Laboratory in Richland, Washington. The Experiment Center is central to the ARM data path and provides for the collection, processing, analysis, and delivery of ARM data. Data are received from the CART sites from a variety of instrumentation, observational systems, and external data sources. The Experiment Center processes these data streams on a continuous basis to provide derived data products to the ARM Science Team in near real-time while providing a three-month running archive of data. A primary requirement of the ARM Program is to preserve and protect all data produced or acquired. This function is performed at Oak Ridge National Laboratory where leading edge technology is employed for the long-term storage of ARM data. The ARM Archive provides access to data for participants outside of the ARM Program. The ARM Program involves a collaborative effort by teams from various DOE National Laboratories, providing multi-disciplinary areas of expertise. This paper will discuss the collaborative methods by which the ARM teams translate the scientific goals of the Program into data products. By combining atmospheric scientists, systems engineers, and software engineers, the ARM Program has successfully designed and developed an environment where advances in understanding the parameterizations of GCM's can be made.

  5. The computational structural mechanics testbed generic structural-element processor manual

    NASA Technical Reports Server (NTRS)

    Stanley, Gary M.; Nour-Omid, Shahram

    1990-01-01

    The usage and development of structural finite element processors based on the CSM Testbed's Generic Element Processor (GEP) template is documented. By convention, such processors have names of the form ESi, where i is an integer. This manual is therefore intended for both Testbed users who wish to invoke ES processors during the course of a structural analysis, and Testbed developers who wish to construct new element processors (or modify existing ones).

  6. [Porting Radiotherapy Software of Varian to Cloud Platform].

    PubMed

    Zou, Lian; Zhang, Weisha; Liu, Xiangxiang; Xie, Zhao; Xie, Yaoqin

    2017-09-30

    To develop a low-cost private cloud platform for radiotherapy software. First, a private cloud platform based on OpenStack and virtual GPU hardware was built. Then, on the private cloud platform, all the Varian radiotherapy software modules were installed on virtual machines, and the corresponding function configuration was completed. Finally, the software on the cloud could be accessed through a virtual desktop client. The function test results of the cloud workstation show that a cloud workstation is equivalent to an isolated physical workstation, and any client on the LAN can use the cloud workstation smoothly. The cloud platform transplantation in this study is economical and practical. The project not only improves the utilization of radiotherapy software, but also makes it possible for cloud computing technology to expand its applications into the field of radiation oncology.

  7. Menu-driven cloud computing and resource sharing for R and Bioconductor.

    PubMed

    Bolouri, Hamid; Dulepet, Rajiv; Angerman, Michael

    2011-08-15

    We report CRdata.org, a cloud-based, free, open-source web server for running analyses and sharing data and R scripts with others. In addition to using the free, public service, CRdata users can launch their own private Amazon Elastic Computing Cloud (EC2) nodes and store private data and scripts on Amazon's Simple Storage Service (S3) with user-controlled access rights. All CRdata services are provided via point-and-click menus. CRdata is open-source and free under the permissive MIT License (opensource.org/licenses/mit-license.php). The source code is in Ruby (ruby-lang.org/en/) and available at: github.com/seerdata/crdata. hbolouri@fhcrc.org.

  8. Identification of Program Signatures from Cloud Computing System Telemetry Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nichols, Nicole M.; Greaves, Mark T.; Smith, William P.

    Malicious cloud computing activity can take many forms, including running unauthorized programs in a virtual environment. Detection of these malicious activities while preserving the privacy of the user is an important research challenge. Prior work has shown the potential viability of using cloud service billing metrics as a mechanism for proxy identification of malicious programs. Previously this novel detection method has been evaluated in a synthetic and isolated computational environment. In this paper we demonstrate the ability of billing metrics to identify programs in an active cloud computing environment, including multiple virtual machines running on the same hypervisor. The open-source cloud computing platform OpenStack is used for private cloud management at Pacific Northwest National Laboratory. OpenStack provides a billing tool (Ceilometer) to collect system telemetry measurements. We identify four different programs running on four virtual machines under the same cloud user account. Programs were identified with up to 95% accuracy. This accuracy is dependent on the distinctiveness of telemetry measurements for the specific programs we tested. Future work will examine the scalability of this approach for a larger selection of programs to better understand the uniqueness needed to identify a program. Additionally, future work should address the separation of signatures when multiple programs are running on the same virtual machine.

  9. Single-Column Modeling, GCM Parameterizations and Atmospheric Radiation Measurement Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Somerville, R.C.J.; Iacobellis, S.F.

    2005-03-18

    Our overall goal is identical to that of the Atmospheric Radiation Measurement (ARM) Program: the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global and regional models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have first compared single-column model (SCM) output with ARM observations at the Southern Great Plains (SGP), North Slope of Alaska (NSA) and Tropical Western Pacific (TWP) sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments of cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models. At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress, by incorporating our parameterization schemes in state-of-the-art 3D atmospheric models, and diagnosing and evaluating the results using independent data. Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable. We are currently testing the performance of our ARM-based parameterizations in state-of-the-art global and regional models. One fruitful strategy for evaluating advances in parameterizations has turned out to be using short-range numerical weather prediction as a test-bed within which to implement and improve parameterizations for modeling and predicting climate variability. The global models we have used to date are the CAM atmospheric component of the National Center for Atmospheric Research (NCAR) CCSM climate model as well as the National Centers for Environmental Prediction (NCEP) numerical weather prediction model, thus allowing testing in both climate simulation and numerical weather prediction modes. We present detailed results of these tests, demonstrating the sensitivity of model performance to changes in parameterizations.

  10. OpenNEX, a private-public partnership in support of the national climate assessment

    NASA Astrophysics Data System (ADS)

    Nemani, R. R.; Wang, W.; Michaelis, A.; Votava, P.; Ganguly, S.

    2016-12-01

    The NASA Earth Exchange (NEX) is a collaborative computing platform that has been developed with the objective of bringing scientists together with the software tools, massive global datasets, and supercomputing resources necessary to accelerate research in Earth systems science and global change. NEX is funded as an enabling tool for sustaining the national climate assessment. Over the past five years, researchers have used the NEX platform and produced a number of data sets highly relevant to the National Climate Assessment. These include high-resolution climate projections using different downscaling techniques and trends in historical climate from satellite data. To enable a broader community in exploiting the above datasets, the NEX team partnered with public cloud providers to create the OpenNEX platform. OpenNEX provides ready access to NEX data holdings on a number of public cloud platforms along with pertinent analysis tools and workflows in the form of Machine Images and Docker Containers, lectures and tutorials by experts. We will showcase some of the applications of OpenNEX data and tools by the community on Amazon Web Services, Google Cloud and the NEX Sandbox.
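
    As a quick illustration of the "ready access on public cloud platforms" point (illustrative only, not OpenNEX tooling; the nasanex bucket hosts NEX data holdings on AWS, while the prefix below is an assumed example), boto3 can browse the public data without AWS credentials:

      import boto3
      from botocore import UNSIGNED
      from botocore.config import Config

      # Public bucket: unsigned requests, so no AWS account is needed.
      s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

      # List a few objects under a prefix (the prefix is an assumed example).
      resp = s3.list_objects_v2(Bucket="nasanex", Prefix="NEX-DCP30/", MaxKeys=5)
      for obj in resp.get("Contents", []):
          print(obj["Key"], obj["Size"])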

  11. An overview of the DII-HEP OpenStack based CMS data analysis

    NASA Astrophysics Data System (ADS)

    Osmani, L.; Tarkoma, S.; Eerola, P.; Komu, M.; Kortelainen, M. J.; Kraemer, O.; Lindén, T.; Toor, S.; White, J.

    2015-05-01

    An OpenStack-based private cloud with the Cluster File System has been built and used with both CMS analysis and Monte Carlo simulation jobs in the Datacenter Indirection Infrastructure for Secure High Energy Physics (DII-HEP) project. On the cloud we run the ARC middleware, which allows running CMS applications without changes on the job submission side. Our test results indicate that the adopted approach provides a scalable and resilient solution for managing resources without compromising on performance and high availability. To manage the virtual machines (VMs) dynamically in an elastic fashion, we are testing the EMI authorization service (Argus) and the Execution Environment Service (Argus-EES). An OpenStack plugin has been developed for Argus-EES. The Host Identity Protocol (HIP) was designed for mobile networks, and it provides a secure method for IP multihoming. HIP separates the end-point identifier and locator roles of IP addresses, which increases network availability for the applications. Our solution leverages HIP for traffic management. This presentation gives an update on the status of the work and our lessons learned in creating an OpenStack-based cloud for HEP.

  12. NOAA - Western Regional Center

    Science.gov Websites

    The complete Western Regional Center consists of nine buildings. The page provides campus photographs and information about the 2018 Open House.

  13. Managing competing elastic Grid and Cloud scientific computing applications using OpenNebula

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Berzano, D.; Lusso, S.; Masera, M.; Vallero, S.

    2015-12-01

    Elastic cloud computing applications, i.e. applications that automatically scale according to computing needs, work on the ideal assumption of infinite resources. While large public cloud infrastructures may be a reasonable approximation of this condition, scientific computing centres like WLCG Grid sites usually work in a saturated regime, in which applications compete for scarce resources through queues, priorities and scheduling policies, and keeping a fraction of the computing cores idle to allow for headroom is usually not an option. In our particular environment one of the applications (a WLCG Tier-2 Grid site) is much larger than all the others and cannot autoscale easily. Nevertheless, other smaller applications can benefit from automatic elasticity; the implementation of this property in our infrastructure, based on the OpenNebula cloud stack, will be described and the very first operational experiences with a small number of strategies for timely allocation and release of resources will be discussed.

  14. Design of Control Plane Architecture Based on Cloud Platform and Experimental Network Demonstration for Multi-domain SDON

    NASA Astrophysics Data System (ADS)

    Li, Ming; Yin, Hongxi; Xing, Fangyuan; Wang, Jingchao; Wang, Honghuan

    2016-02-01

    With its features of network virtualization and resource programmability, the Software Defined Optical Network (SDON) is considered the future development trend of optical networks, providing more flexible, efficient and open network functionality and supporting the intraconnection and interconnection of data centers. Meanwhile, cloud platforms can provide powerful computing, storage and management capabilities. In this paper, coordinating SDON with a cloud platform, a multi-domain SDON architecture based on a cloud control plane is proposed, composed of data centers with a database (DB), path computation element (PCE), SDON controllers and an orchestrator. In addition, the structures of the multi-domain SDON orchestrator and of OpenFlow-enabled optical nodes are proposed to realize a management and control platform that combines centralized and distributed approaches. Finally, functional verification and demonstration are performed on our optical experimental network.

  15. The telerobot testbed: An architecture for remote servicing

    NASA Technical Reports Server (NTRS)

    Matijevic, J. R.

    1990-01-01

    The NASA/OAST Telerobot Testbed will reach its next increment in development by the end of FY-89. The testbed will have the capability for: force reflection in teleoperation, shared control, traded control, operator designate and relative update. These five capabilities will be shown in a module release and exchange operation using mockups of Orbital Replacement Units (ORU). This development of the testbed shows examples of the technologies needed for remote servicing, particularly under conditions of delay in transmissions to the servicing site. Here, the following topics are presented: the system architecture of the testbed which incorporates these telerobotic technologies for servicing, the implementation of the five capabilities and the operation of the ORU mockups.

  16. Sensor Networking Testbed with IEEE 1451 Compatibility and Network Performance Monitoring

    NASA Technical Reports Server (NTRS)

    Gurkan, Deniz; Yuan, X.; Benhaddou, D.; Figueroa, F.; Morris, Jonathan

    2007-01-01

    Design and implementation of a testbed for testing and verifying IEEE 1451-compatible sensor systems with network performance monitoring is of significant importance. Measurement of performance parameters, as well as implementation of decision support systems, will enhance the understanding of sensor systems with plug-and-play capabilities. The paper presents the design aspects of such a testbed environment under development at the University of Houston in collaboration with NASA Stennis Space Center - SSST (Smart Sensor System Testbed).

  17. Technology Developments Integrating a Space Network Communications Testbed

    NASA Technical Reports Server (NTRS)

    Kwong, Winston; Jennings, Esther; Clare, Loren; Leang, Dee

    2006-01-01

    As future manned and robotic space explorations missions involve more complex systems, it is essential to verify, validate, and optimize such systems through simulation and emulation in a low cost testbed environment. The goal of such a testbed is to perform detailed testing of advanced space and ground communications networks, technologies, and client applications that are essential for future space exploration missions. We describe the development of new technologies enhancing our Multi-mission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE) that enable its integration in a distributed space communications testbed. MACHETE combines orbital modeling, link analysis, and protocol and service modeling to quantify system performance based on comprehensive considerations of different aspects of space missions. It can simulate entire networks and can interface with external (testbed) systems. The key technology developments enabling the integration of MACHETE into a distributed testbed are the Monitor and Control module and the QualNet IP Network Emulator module. Specifically, the Monitor and Control module establishes a standard interface mechanism to centralize the management of each testbed component. The QualNet IP Network Emulator module allows externally generated network traffic to be passed through MACHETE to experience simulated network behaviors such as propagation delay, data loss, orbital effects and other communications characteristics, including entire network behaviors. We report a successful integration of MACHETE with a space communication testbed modeling a lunar exploration scenario. This document is the viewgraph slides of the presentation.

  18. Cloud fractions estimated from shipboard whole-sky camera and ceilometer observations between East Asia and Antarctica

    NASA Astrophysics Data System (ADS)

    Kuji, M.; Hagiwara, M.; Hori, M.; Shiobara, M.

    2017-12-01

    Shipboard observations of cloud fraction were carried out along a round-trip research cruise between East Asia and Antarctica from November 2015 to April 2016 using a whole-sky camera and a ceilometer onboard Research Vessel (R/V) Shirase. We retrieved cloud fraction from the whole-sky camera based on the brightness and color of the images, while we estimated cloud fraction from the ceilometer as a frequency of cloud occurrence. As a result, the average cloud fractions over the outward open ocean, the sea ice region, and the returning open ocean were approximately 56% (60%), 44% (64%), and 67% (72%), respectively, with the whole-sky camera (ceilometer). Comparing the daily-averaged cloud fractions from the whole-sky camera and the ceilometer, we found a correlation coefficient of 0.73 for the 129 match-up dataset between East Asia and Antarctica, including the sea ice region as well as the open ocean. The results are qualitatively consistent between the two observations as a whole, but there is some underestimation by the whole-sky camera compared to the ceilometer. One possible reason is that the imager is apt to miss optically thinner clouds that can be detected by the ceilometer. In addition, the difference in view angles between the imager and the ceilometer possibly affects the estimation. Therefore, it is necessary to elucidate the cloud properties with detailed match-up analyses in the future. Another future task is to compare the cloud fractions with satellite observations such as MODIS cloud products. Shipboard observations in themselves are very valuable for the validation of products from satellite observation, because we do not have many validation sites over the Southern Ocean and the sea ice region in particular.

  19. Arctic sea ice melt leads to atmospheric new particle formation.

    PubMed

    Dall Osto, M; Beddows, D C S; Tunved, P; Krejci, R; Ström, J; Hansson, H-C; Yoon, Y J; Park, Ki-Tae; Becagli, S; Udisti, R; Onasch, T; O Dowd, C D; Simó, R; Harrison, Roy M

    2017-06-12

    Atmospheric new particle formation (NPF) and growth significantly influence climate by supplying new seeds for cloud condensation and, hence, cloud brightness. Currently, there is a lack of understanding of whether and how marine biota emissions affect aerosol-cloud-climate interactions in the Arctic. Here, the aerosol population was categorised via cluster analysis of aerosol size distributions taken at Mt Zeppelin (Svalbard) during an 11-year record. The daily temporal occurrence of NPF events likely caused by nucleation in the polar marine boundary layer was quantified annually as 18%, with a peak of 51% during summer months. Air mass trajectory analysis and atmospheric nitrogen and sulphur tracers link these frequent nucleation events to biogenic precursors released by open water and melting sea ice regions. The occurrence of such events across a full decade was anti-correlated with sea ice extent. New particles originating from open water and open pack ice increased the cloud condensation nuclei concentration background by at least ca. 20%, supporting a marine biosphere-climate link through sea ice melt and low altitude clouds that may have contributed to accelerating Arctic warming. Our results prompt a better representation of biogenic aerosol sources in Arctic climate models.
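    The study's classification and trend statistics are not reproduced here; the sketch below only mirrors the shape of the reported analysis, turning daily event flags into an annual NPF occurrence frequency and correlating it against sea-ice extent. The daily flags and ice-extent series are random stand-ins, not the Zeppelin data.

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(1)
        days = pd.date_range("2000-01-01", "2010-12-31", freq="D")
        # Hypothetical daily flag: True when an NPF event was classified that day
        npf = pd.Series(rng.random(len(days)) < 0.18, index=days)

        annual_freq = npf.groupby(npf.index.year).mean()
        # Hypothetical annual sea-ice extent (10^6 km^2) for the same years
        ice = pd.Series(rng.uniform(4, 8, len(annual_freq)), index=annual_freq.index)

        r = np.corrcoef(annual_freq.values, ice.values)[0, 1]
        print(f"NPF frequency vs sea-ice extent: r = {r:.2f}")  # negative r = anti-correlation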

  20. Managing a tier-2 computer centre with a private cloud infrastructure

    NASA Astrophysics Data System (ADS)

    Bagnasco, Stefano; Berzano, Dario; Brunetti, Riccardo; Lusso, Stefano; Vallero, Sara

    2014-06-01

    In a typical scientific computing centre, several applications coexist and share a single physical infrastructure. An underlying Private Cloud infrastructure eases the management and maintenance of such heterogeneous applications (such as multipurpose or application-specific batch farms, Grid sites, interactive data analysis facilities and others), allowing dynamic allocation of resources to any application. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques. Such infrastructures are being deployed in some large centres (see e.g. the CERN Agile Infrastructure project), but with several open-source tools reaching maturity this is becoming viable also for smaller sites. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 centre, an Interactive Analysis Facility for the ALICE experiment at the CERN LHC, and several smaller scientific computing applications. The private cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem and the OpenWRT Linux distribution (used for network virtualization); a future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and OCCI.
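    Exposing an EC2-compatible API means standard clients can target such a private cloud simply by switching endpoints. A minimal sketch with boto3, where the endpoint URL, credentials, image ID, and instance type are placeholders rather than INFN-Torino values:

        import boto3

        # Point a standard EC2 client at the private cloud's EC2-compatible API
        # (OpenNebula provides one); everything below is a placeholder.
        ec2 = boto3.client(
            "ec2",
            endpoint_url="https://cloud.example.infn.it:4567",  # hypothetical endpoint
            aws_access_key_id="USER_KEY",
            aws_secret_access_key="USER_SECRET",
            region_name="site-local",
        )

        resp = ec2.run_instances(ImageId="ami-00000001", MinCount=1, MaxCount=1,
                                 InstanceType="m1.small")
        print(resp["Instances"][0]["InstanceId"])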

  1. Exploration Systems Health Management Facilities and Testbed Workshop

    NASA Technical Reports Server (NTRS)

    Wilson, Scott; Waterman, Robert; McCleskey, Carey

    2004-01-01

    Presentation Agenda: (1) Technology Maturation Pipeline (The Plan) (2) Cryogenic testbed (and other KSC Labs) (2a) Component / Subsystem technologies (3) Advanced Technology Development Center (ATDC) (3a) System / Vehicle technologies (4) ELV Flight Experiments (Flight Testbeds).

  2. Spectral Dependence of MODIS Cloud Droplet Effective Radius Retrievals for Marine Boundary Layer Clouds

    NASA Technical Reports Server (NTRS)

    Zhang, Zhibo; Platnick, Steven E.; Ackerman, Andrew S.; Cho, Hyoun-Myoung

    2014-01-01

    Low-level warm marine boundary layer (MBL) clouds cover large regions of Earth's surface. They have a significant role in Earth's radiative energy balance and hydrological cycle. Despite the fundamental role of low-level warm water clouds in climate, our understanding of these clouds is still limited. In particular, connections between their properties (e.g. cloud fraction, cloud water path, and cloud droplet size) and environmental factors such as aerosol loading and meteorological conditions continue to be uncertain or unknown. Modeling these clouds in climate models remains a challenging problem. As a result, the influence of aerosols on these clouds in the past and future, and the potential impacts of these clouds on global warming remain open questions leading to substantial uncertainty in climate projections. To improve our understanding of these clouds, we need continuous observations of cloud properties on both a global scale and over a long enough timescale for climate studies. At present, satellite-based remote sensing is the only means of providing such observations.

  3. Mounted Smartphones as Measurement and Control Platforms for Motor-Based Laboratory Test-Beds †

    PubMed Central

    Frank, Jared A.; Brill, Anthony; Kapila, Vikram

    2016-01-01

    Laboratory education in science and engineering often entails the use of test-beds equipped with costly peripherals for sensing, acquisition, storage, processing, and control of physical behavior. However, costly peripherals are no longer necessary to obtain precise measurements and achieve stable feedback control of test-beds. With smartphones performing diverse sensing and processing tasks, this study examines the feasibility of mounting smartphones directly to test-beds to exploit their embedded hardware and software in the measurement and control of the test-beds. This approach is a first step towards replacing laboratory-grade peripherals with more compact and affordable smartphone-based platforms, whose interactive user interfaces can engender wider participation and engagement from learners. Demonstrative cases are presented in which the sensing, computation, control, and user interaction with three motor-based test-beds are handled by a mounted smartphone. Results of experiments and simulations are used to validate the feasibility of mounted smartphones as measurement and feedback control platforms for motor-based laboratory test-beds, report the measurement precision and closed-loop performance achieved with such platforms, and address challenges in the development of platforms to maintain system stability. PMID:27556464

  4. Mounted Smartphones as Measurement and Control Platforms for Motor-Based Laboratory Test-Beds.

    PubMed

    Frank, Jared A; Brill, Anthony; Kapila, Vikram

    2016-08-20

    Laboratory education in science and engineering often entails the use of test-beds equipped with costly peripherals for sensing, acquisition, storage, processing, and control of physical behavior. However, costly peripherals are no longer necessary to obtain precise measurements and achieve stable feedback control of test-beds. With smartphones performing diverse sensing and processing tasks, this study examines the feasibility of mounting smartphones directly to test-beds to exploit their embedded hardware and software in the measurement and control of the test-beds. This approach is a first step towards replacing laboratory-grade peripherals with more compact and affordable smartphone-based platforms, whose interactive user interfaces can engender wider participation and engagement from learners. Demonstrative cases are presented in which the sensing, computation, control, and user interaction with three motor-based test-beds are handled by a mounted smartphone. Results of experiments and simulations are used to validate the feasibility of mounted smartphones as measurement and feedback control platforms for motor-based laboratory test-beds, report the measurement precision and closed-loop performance achieved with such platforms, and address challenges in the development of platforms to maintain system stability.
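    Neither record includes the controllers themselves; the sketch below stands in for the smartphone's sense-compute-actuate cycle with a discrete PID loop closed around a toy first-order motor model. All gains, time constants, and the plant itself are illustrative assumptions, not values from the papers.

        # Discrete PID loop around a simulated DC motor; on a mounted smartphone
        # the measurement would come from its sensors and the command would go
        # to the motor driver over the phone's I/O interface.
        dt, kp, ki, kd = 0.01, 2.0, 0.5, 0.05       # illustrative gains
        setpoint, angle, velocity = 1.0, 0.0, 0.0
        integral, prev_err = 0.0, 0.0

        for _ in range(500):                         # 5 s of simulated time
            err = setpoint - angle
            integral += err * dt
            u = kp * err + ki * integral + kd * (err - prev_err) / dt
            prev_err = err
            velocity += (u - velocity) * dt / 0.1    # toy motor: tau = 0.1 s
            angle += velocity * dt

        print(f"final angle = {angle:.3f} (target {setpoint})")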

  5. AstroCloud, a Cyber-Infrastructure for Astronomy Research: Cloud Computing Environments

    NASA Astrophysics Data System (ADS)

    Li, C.; Wang, J.; Cui, C.; He, B.; Fan, D.; Yang, Y.; Chen, J.; Zhang, H.; Yu, C.; Xiao, J.; Wang, C.; Cao, Z.; Fan, Y.; Hong, Z.; Li, S.; Mi, L.; Wan, W.; Wang, J.; Yin, S.

    2015-09-01

    AstroCloud is a cyber-infrastructure for astronomy research initiated by the Chinese Virtual Observatory (China-VO) under funding support from the NDRC (National Development and Reform Commission) and CAS (Chinese Academy of Sciences). Based on CloudStack, an open-source platform, we set up the cloud computing environment for the AstroCloud project. It consists of five distributed nodes across mainland China. Users can use and analyze data in this cloud computing environment. Based on GlusterFS, we built a scalable cloud storage system. Each user has a private space, which can be shared among different virtual machines and desktop systems. With this environment, astronomers can easily access astronomical data collected by different telescopes and data centers, and data producers can archive their datasets safely.

  6. Enhancing Security by System-Level Virtualization in Cloud Computing Environments

    NASA Astrophysics Data System (ADS)

    Sun, Dawei; Chang, Guiran; Tan, Chunguang; Wang, Xingwei

    Many trends are opening up the era of cloud computing, which will reshape the IT industry. Virtualization techniques have become an indispensable ingredient of almost all cloud computing systems. Through virtual environments, a cloud provider is able to run the variety of operating systems needed by each cloud user. Virtualization can improve reliability, security, and availability of applications by using consolidation, isolation, and fault tolerance. In addition, it is possible to balance workloads by using live migration techniques. In this paper, the definition of cloud computing is given, and the service and deployment models are introduced. An analysis of security issues and challenges in the implementation of cloud computing is presented. Moreover, a system-level virtualization case is established to enhance the security of cloud computing environments.

  7. Development and Validation of the Air Force Cyber Intruder Alert Testbed (CIAT)

    DTIC Science & Technology

    2016-07-27

    Report excerpt: a new cyber synthetic task environment (STE) focused on network analysts, called the Air Force Cyber Intruder Alert Testbed (CIAT), was developed (contract FA8650-16-C-6722; Gregory Funke, Gregory Dye, Brett Borghetti).

  8. Guidance and control 1992; Proceedings of the 15th Annual AAS Rocky Mountain Conference, Keystone, CO, Feb. 8-12, 1992

    NASA Astrophysics Data System (ADS)

    Culp, Robert D.; Zietz, Richard P.

    The present volume on guidance and control discusses advances in guidance, navigation, and control, guidance and control storyboard displays, space robotic control, spacecraft control and flexible body interaction, and the Mission to Planet Earth. Attention is given to applications of Newton's method to attitude determination, a new family of low-cost momentum/reaction wheels, stellar attitude data handling, and satellite life prediction using propellant quantity measurements. Topics addressed include robust manipulator controller specification and design, implementations and applications of a manipulator control testbed, optimizing transparency in teleoperator architectures, and MIMO system identification using frequency response data. Also discussed are instrument configurations for the restructured Earth Observing System, the HIRIS instrument, clouds and the earth's radiant energy system, and large space-based systems for dealing with global change.

  9. Implementation and use of a highly available and innovative IaaS solution: the Cloud Area Padovana

    NASA Astrophysics Data System (ADS)

    Aiftimiei, C.; Andreetto, P.; Bertocco, S.; Biasotto, M.; Dal Pra, S.; Costa, F.; Crescente, A.; Dorigo, A.; Fantinel, S.; Fanzago, F.; Frizziero, E.; Gulmini, M.; Michelotto, M.; Sgaravatto, M.; Traldi, S.; Venaruzzo, M.; Verlato, M.; Zangrando, L.

    2015-12-01

    While in the business world the cloud paradigm is typically implemented by purchasing resources and services from third-party providers (e.g. Amazon), in the scientific environment there is usually a need for on-premises IaaS infrastructures, which allow efficient usage of the hardware distributed among (and owned by) different scientific administrative domains. In addition, the requirement of open-source adoption has led to the choice of products like OpenStack by many organizations. We describe a use case of the Italian National Institute for Nuclear Physics (INFN) which resulted in the implementation of a unique cloud service, called 'Cloud Area Padovana', which encompasses resources spread over two different sites: the INFN Legnaro National Laboratories and the INFN Padova division. We describe how this IaaS has been implemented, which technologies have been adopted, and how services have been configured in high-availability (HA) mode. We also discuss how identity and authorization management were implemented, adopting a widely accepted standard architecture based on SAML2 and OpenID: by leveraging the versatility of those standards, the integration with authentication federations like IDEM was implemented. We also discuss some other innovative developments, such as a pluggable scheduler, implemented as an extension of the native OpenStack scheduler, which allows the allocation of resources according to a fair-share-based model and which provides a persistent queuing mechanism for handling user requests that cannot be immediately served. The tools, technologies, and procedures used to install, configure, monitor, and operate this cloud service are also discussed. Finally, we present some examples that show how this IaaS infrastructure is being used.
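    The pluggable scheduler itself is not published in this record; the sketch below only illustrates the fair-share ordering idea it describes, ranking queued requests by how far each project's nominal share exceeds its fraction of recent usage. Project names, shares, and usage figures are hypothetical.

        from collections import deque

        # Hypothetical nominal shares and recent usage (core-hours) per project;
        # the real scheduler extends OpenStack's native one and persists its queue.
        shares = {"alice": 0.6, "cms": 0.3, "theory": 0.1}
        usage  = {"alice": 520.0, "cms": 80.0, "theory": 40.0}

        def priority(project):
            """Projects that used less than their fair share come first."""
            total = sum(usage.values()) or 1.0
            return shares[project] - usage[project] / total

        queue = deque(["cms", "alice", "theory", "alice"])   # pending VM requests
        ordered = sorted(queue, key=priority, reverse=True)
        print(ordered)   # requests from under-served projects are served first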

  10. Towards Efficient Scientific Data Management Using Cloud Storage

    NASA Technical Reports Server (NTRS)

    He, Qiming

    2013-01-01

    A software prototype allows users to backup and restore data to/from both public and private cloud storage such as Amazon's S3 and NASA's Nebula. Unlike other off-the-shelf tools, this software ensures user data security in the cloud (through encryption), and minimizes users' operating costs by using space- and bandwidth-efficient compression and incremental backup. Parallel data processing utilities have also been developed by using massively scalable cloud computing in conjunction with cloud storage. One of the innovations in this software is using modified open source components to work with a private cloud like NASA Nebula. Another innovation is porting the complex backup-to-cloud software to embedded Linux, running on home networking devices, in order to benefit more users.
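    The prototype itself is not public; the sketch below illustrates the space-saving pattern the record describes, compressing, encrypting, and uploading only chunks whose content hash has changed. It uses boto3 and the cryptography package; the bucket name and in-memory key handling are illustrative (a real tool would manage keys and the manifest persistently).

        import hashlib, zlib
        import boto3
        from cryptography.fernet import Fernet

        s3 = boto3.client("s3")        # works against S3 or an S3-compatible private cloud
        fernet = Fernet(Fernet.generate_key())   # placeholder key management
        BUCKET = "my-backup-bucket"    # placeholder bucket name

        def backup_chunk(data: bytes, manifest: dict) -> str:
            """Upload a chunk only if its content hash is new (incremental backup)."""
            digest = hashlib.sha256(data).hexdigest()
            if digest in manifest:     # unchanged since the last backup: skip upload
                return digest
            blob = fernet.encrypt(zlib.compress(data))  # space- and bandwidth-efficient
            s3.put_object(Bucket=BUCKET, Key=digest, Body=blob)
            manifest[digest] = len(data)
            return digest

        # manifest = {}
        # backup_chunk(b"file contents v1", manifest)   # uploads
        # backup_chunk(b"file contents v1", manifest)   # skipped: content unchanged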

  11. Closing the contrast gap between testbed and model prediction with WFIRST-CGI shaped pupil coronagraph

    NASA Astrophysics Data System (ADS)

    Zhou, Hanying; Nemati, Bijan; Krist, John; Cady, Eric; Prada, Camilo M.; Kern, Brian; Poberezhskiy, Ilya

    2016-07-01

    JPL has recently passed an important milestone in its technology development for a proposed NASA WFIRST mission coronagraph: demonstration of better than 1 × 10^-8 contrast over a broad bandwidth (10%) on both the shaped pupil coronagraph (SPC) and hybrid Lyot coronagraph (HLC) testbeds with the WFIRST obscuration pattern. Challenges remain, however, in the technology readiness for the proposed mission. One is the discrepancy between the achieved contrasts on the testbeds and their corresponding model predictions. A series of testbed diagnoses and modeling activities were planned and carried out on the SPC testbed in order to close the gap. A very useful tool we developed was a derived "measured" testbed wavefront control Jacobian matrix that could be compared with the model-predicted "control" version that was used to generate the high-contrast dark hole region in the image plane. The difference between these two is an estimate of the error in the control Jacobian. When the control matrix, which includes both amplitude and phase, was modified to reproduce the error, the simulated performance closely matched the SPC testbed behavior in both contrast floor and contrast convergence speed. This is a step closer toward model validation for high-contrast coronagraphs. Further Jacobian analysis and modeling provided clues to the possible sources of the mismatch: deformable mirror (DM) misregistration, testbed optical wavefront error (WFE), and the DM setting used to correct this WFE. These analyses suggest that a high-contrast coronagraph has a tight tolerance on the accuracy of its control Jacobian. Modifications to both the testbed control model and the prediction model are being implemented, and future work is discussed.
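    The record does not include the control code; the sketch below illustrates the generic step in which a control Jacobian is used, solving a regularized least-squares problem for DM commands that cancel the measured focal-plane field (the usual electric-field-conjugation pattern, not JPL's implementation). Array sizes, the random Jacobian, and the regularization are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        n_pix, n_act = 200, 48                  # focal-plane points, DM actuators (toy sizes)
        J = rng.normal(size=(n_pix, n_act))     # control Jacobian: dE/d(actuator)
        E = rng.normal(size=n_pix)              # measured field (stacked real/imag parts)

        # Tikhonov-regularized least squares: command that best cancels E
        alpha = 1e-2
        u = -np.linalg.solve(J.T @ J + alpha * np.eye(n_act), J.T @ E)

        E_next = E + J @ u                      # predicted residual field after the step
        print("contrast proxy:", np.mean(E**2), "->", np.mean(E_next**2))

        # An error in J (e.g. DM misregistration) makes E_next optimistic relative
        # to the hardware, which is the gap the "measured Jacobian" diagnosis targets.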

  12. Laser-based structural sensing and surface damage detection

    NASA Astrophysics Data System (ADS)

    Guldur, Burcu

    Damage due to age or accumulated damage from hazards poses a worldwide problem for existing structures. In order to evaluate the current status of aging, deteriorating and damaged structures, it is vital to accurately assess the present conditions. It is possible to capture the in situ condition of structures by using laser scanners that create dense three-dimensional point clouds. This research investigates the use of high resolution three-dimensional terrestrial laser scanners with image capturing abilities as tools to capture geometric range data of complex scenes for structural engineering applications. Laser scanning technology is continuously improving, with commonly available scanners now capturing over 1,000,000 texture-mapped points per second with an accuracy of ~2 mm. However, automatically extracting meaningful information from point clouds remains a challenge, and the current state-of-the-art requires significant user interaction. The first objective of this research is to use widely accepted point cloud processing steps such as registration, feature extraction, segmentation, surface fitting and object detection to divide laser scanner data into meaningful object clusters and then apply several damage detection methods to these clusters. This required establishing a process for extracting important information from raw laser-scanned data sets, such as the location, orientation and size of objects in a scanned region, and the location of damaged regions on a structure. For this purpose, first a methodology for processing range data to identify objects in a scene is presented and then, once the objects from the model library are correctly detected and fitted into the captured point cloud, these fitted objects are compared with the as-is point cloud of the investigated object to locate defects on the structure. The algorithms are demonstrated on synthetic scenes and validated on range data collected from test specimens and test-bed bridges. The second objective of this research is to combine useful information extracted from laser scanner data with color information, which provides information in the fourth dimension that enables detection of damage types such as cracks, corrosion, and related surface defects that are generally difficult to detect using only laser scanner data; moreover, the color information also helps to track volumetric changes on structures such as spalling. Although using images with varying resolution to detect cracks is an extensively researched topic, damage detection using laser scanners with and without color images is a new research area that holds many opportunities for enhancing the current practice of visual inspections. The aim is to combine the best features of laser scans and images to create an automatic and effective surface damage detection method, which will reduce the need for skilled labor during visual inspections and allow automatic documentation of related information. This work enables developing surface damage detection strategies that integrate existing condition rating criteria for a wide range of damage types that are collected under three main categories: small deformations already existing on the structure (cracks); damage types that induce larger deformations, but where the initial topology of the structure has not changed appreciably (e.g., bent members); and large deformations where localized changes in the topology of the structure have occurred (e.g., rupture, discontinuities and spalling).
The effectiveness of the developed damage detection algorithms is validated by comparing the detection results with the measurements taken from test specimens and test-bed bridges.
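    The dissertation's algorithms are not reproduced in this record; as a sketch of one step in the described pipeline, the snippet below fits a plane to a scanned surface by RANSAC with the Open3D library and flags points that deviate from it as candidate damage. The file name and both distance thresholds are illustrative assumptions.

        import numpy as np
        import open3d as o3d

        # Load a scan (path is a placeholder) and fit the dominant plane by RANSAC,
        # a standard surface-fitting step in point cloud processing pipelines.
        pcd = o3d.io.read_point_cloud("bridge_deck_scan.pcd")
        plane, inliers = pcd.segment_plane(distance_threshold=0.005,  # ~5 mm scanner noise
                                           ransac_n=3, num_iterations=1000)
        a, b, c, d = plane

        # Points far from the fitted surface are candidate defects (e.g. spalling).
        pts = np.asarray(pcd.points)
        dist = np.abs(pts @ np.array([a, b, c]) + d)   # point-to-plane distance (unit normal)
        defects = pts[dist > 0.02]                     # 2 cm deviation threshold (illustrative)
        print(f"{len(defects)} points deviate from the fitted surface")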

  13. An overview of the U.S. Army Research Laboratory's Sensor Information Testbed for Collaborative Research Environment (SITCORE) and Automated Online Data Repository (AODR) capabilities

    NASA Astrophysics Data System (ADS)

    Ward, Dennis W.; Bennett, Kelly W.

    2017-05-01

    The Sensor Information Testbed COllaborative Research Environment (SITCORE) and the Automated Online Data Repository (AODR) are significant enablers of the U.S. Army Research Laboratory (ARL)'s Open Campus Initiative and together create a highly collaborative research laboratory and testbed environment focused on sensor data and information fusion. SITCORE creates a virtual research development environment allowing collaboration from other locations, including DoD, industry, academia, and coalition facilities. SITCORE combined with AODR provides end-to-end algorithm development, experimentation, demonstration, and validation. The AODR enterprise allows ARL, as well as other government organizations, industry, and academia, to store and disseminate multiple-intelligence (Multi-INT) datasets collected at field exercises and demonstrations, and to facilitate research and development (R and D) and the advancement of analytical tools and algorithms supporting the Intelligence, Surveillance, and Reconnaissance (ISR) community. The AODR provides a potential central repository for standards-compliant datasets to serve as the "go-to" location for lessons learned and reference products. Many of the AODR datasets have associated ground truth and other metadata, which provides a rich and robust data suite for researchers to develop, test, and refine their algorithms. Researchers download the test data to their own environments using a sophisticated web interface. The AODR allows researchers to request copies of stored datasets and the government to process the requests and approvals in an automated fashion. Access to the AODR requires two-factor authentication in the form of a Common Access Card (CAC) or External Certificate Authority (ECA) certificate.

  14. High-contrast Imager for Complex Aperture Telescopes (HICAT): II. Design overview and first light results

    NASA Astrophysics Data System (ADS)

    N'Diaye, Mamadou; Choquet, Elodie; Egron, Sylvain; Pueyo, Laurent; Leboulleux, Lucie; Levecq, Olivier; Perrin, Marshall D.; Elliot, Erin; Wallace, J. Kent; Hugot, Emmanuel; Marcos, Michel; Ferrari, Marc; Long, Chris A.; Anderson, Rachel; DiFelice, Audrey; Soummer, Rémi

    2014-08-01

    We present a new high-contrast imaging testbed designed to provide complete solutions in wavefront sensing, control and starlight suppression with complex aperture telescopes. The testbed was designed to enable a wide range of studies of the effects of such telescope geometries, with primary mirror segmentation, central obstruction, and spiders. The associated diffraction features in the point spread function make high-contrast imaging more challenging. In particular the testbed will be compatible with both AFTA-like and ATLAST-like aperture shapes, respectively on-axis monolithic, and on-axis segmented telescopes. The testbed optical design was developed using a novel approach to define the layout and surface error requirements to minimize amplitude-induced errors at the target contrast level performance. In this communication we compare the as-built surface errors for each optic to their specifications based on end-to-end Fresnel modelling of the testbed. We also report on the testbed optical and optomechanical alignment performance, coronagraph design and manufacturing, and preliminary first light results.

  15. Menu-driven cloud computing and resource sharing for R and Bioconductor

    PubMed Central

    Bolouri, Hamid; Angerman, Michael

    2011-01-01

    Summary: We report CRdata.org, a cloud-based, free, open-source web server for running analyses and sharing data and R scripts with others. In addition to using the free, public service, CRdata users can launch their own private Amazon Elastic Computing Cloud (EC2) nodes and store private data and scripts on Amazon's Simple Storage Service (S3) with user-controlled access rights. All CRdata services are provided via point-and-click menus. Availability and Implementation: CRdata is open-source and free under the permissive MIT License (opensource.org/licenses/mit-license.php). The source code is in Ruby (ruby-lang.org/en/) and available at: github.com/seerdata/crdata. Contact: hbolouri@fhcrc.org PMID:21685055

  16. Hybrid Lyot coronagraph for WFIRST: high-contrast broadband testbed demonstration

    NASA Astrophysics Data System (ADS)

    Seo, Byoung-Joon; Cady, Eric; Gordon, Brian; Kern, Brian; Lam, Raymond; Marx, David; Moody, Dwight; Muller, Richard; Patterson, Keith; Poberezhskiy, Ilya; Mejia Prada, Camilo; Sidick, Erkin; Shi, Fang; Trauger, John; Wilson, Daniel

    2017-09-01

    Hybrid Lyot Coronagraph (HLC) is one of the two operating modes of the Wide-Field InfraRed Survey Telescope (WFIRST) coronagraph instrument. Since being selected by the National Aeronautics and Space Administration (NASA) in December 2013, the coronagraph technology is being matured to Technology Readiness Level (TRL) 6 by 2018. To demonstrate starlight suppression in the presence of expected on-orbit input wavefront disturbances, we built a dynamic testbed at the Jet Propulsion Laboratory (JPL) in 2016. This testbed, named the Occulting Mask Coronagraph (OMC) testbed, is designed to be analogous to the WFIRST flight instrument architecture: it has both HLC and Shaped Pupil Coronagraph (SPC) architectures, and also has the Low Order Wavefront Sensing and Control (LOWFS/C) subsystem to sense and correct the dynamic wavefront disturbances. We present up-to-date progress of the HLC mode demonstration in the OMC testbed. SPC results will be reported separately. We inject flight-like Line-of-Sight (LoS) and Wavefront Error (WFE) perturbations into the OMC testbed and demonstrate wavefront control using two deformable mirrors while the LOWFS/C corrects those perturbations in our vacuum testbed. As a result, we obtain repeatable convergence below 5 × 10^-9 mean contrast with 10% broadband light centered at 550 nm in the 360-degree dark hole with working angles between 3 λ/D and 9 λ/D. We present the key hardware and software used in the testbed, the performance results and their comparison to model expectations.

  17. The Goddard Space Flight Center (GSFC) robotics technology testbed

    NASA Technical Reports Server (NTRS)

    Schnurr, Rick; Obrien, Maureen; Cofer, Sue

    1989-01-01

    Much of the technology planned for use in NASA's Flight Telerobotic Servicer (FTS) and the Demonstration Test Flight (DTF) is relatively new and untested. To provide the answers needed to design safe, reliable, and fully functional robotics for flight, NASA/GSFC is developing a robotics technology testbed for research of issues such as zero-g robot control, dual-arm teleoperation, simulations, and hierarchical control using a high-level programming language. The testbed will be used to investigate these high risk technologies required for the FTS and DTF projects. The robotics technology testbed is centered around the dual-arm teleoperation of a pair of 7-degree-of-freedom (DOF) manipulators, each with its own 6-DOF mini-master hand controller. Several levels of safety are implemented using the control processor, a separate watchdog computer, and other low-level features. High-speed input/output ports allow the control processor to interface to a simulation workstation: all or part of the testbed hardware can be used in real-time dynamic simulation of the testbed operations, allowing a quick and safe means for testing new control strategies. The NASA/National Bureau of Standards Standard Reference Model for Telerobot Control System Architecture (NASREM) hierarchical control scheme is being used as the reference standard for system design. All software developed for the testbed, excluding some of the simulation workstation software, is being developed in Ada. The testbed is being developed in phases. The first phase, which is nearing completion, is described, and future developments are highlighted.

  18. Development of a flexible test-bed for robotics, telemanipulation and servicing research

    NASA Technical Reports Server (NTRS)

    Davies, Barry F.

    1989-01-01

    The development of a flexible operation test-bed, based around a commercially available ASEA industrial robot, is described. The test-bed was designed to investigate fundamental human factors issues concerned with the unique problems of robotic manipulation in the hostile environment of space.

  19. Contrasting sea-ice and open-water boundary layers during melt and freeze-up seasons: Some result from the Arctic Clouds in Summer Experiment.

    NASA Astrophysics Data System (ADS)

    Tjernström, Michael; Sotiropoulou, Georgia; Sedlar, Joseph; Achtert, Peggy; Brooks, Barbara; Brooks, Ian; Persson, Ola; Prytherch, John; Salsbury, Dominic; Shupe, Matthew; Johnston, Paul; Wolfe, Dan

    2016-04-01

    With more open water present in the Arctic summer, an understanding of atmospheric processes over open-water and sea-ice surfaces as summer turns into autumn and ice starts forming becomes increasingly important. The Arctic Clouds in Summer Experiment (ACSE) was conducted in a mix of open water and sea ice in the eastern Arctic along the Siberian shelf during late summer and early autumn 2014, providing detailed observations of the seasonal transition, from melt to freeze. Measurements were taken over both ice-free and ice-covered surfaces, offering insight into the role of the surface state in shaping the lower troposphere and the boundary-layer conditions as summer turned into autumn. During summer, strong surface inversions persisted over sea ice, while well-mixed boundary layers capped by elevated inversions were frequent over open water. The former were often associated with advection of warm air from adjacent open-water or land surfaces, whereas the latter were due to a positive buoyancy flux from the warm ocean surface. Fog and stratus clouds often persisted over the ice, whereas low-level liquid-water clouds developed over open water. These differences largely disappeared in autumn, when mixed-phase clouds capped by elevated inversions dominated in both ice-free and ice-covered conditions. Low-level jets occurred ~20-25% of the time in both seasons. The observations indicate that these jets were typically initiated at air-mass boundaries or along the ice edge in autumn, while in summer they appeared to be inertial oscillations initiated by partial frictional decoupling as warm air was advected in over the sea ice. The start of the autumn season was related to an abrupt change in atmospheric conditions, rather than to the gradual change in solar radiation. The autumn onset appeared as a rapid cooling of the whole atmosphere, and the freeze-up followed as the warm surface lost heat to the atmosphere. While the surface type had a pronounced impact on boundary-layer structure in summer, the surface was often warmer than the atmosphere in autumn, regardless of surface type. Hence the autumn boundary-layer structure was more dependent on synoptic-scale meteorology.

  20. Using Cloud Computing infrastructure with CloudBioLinux, CloudMan and Galaxy

    PubMed Central

    Afgan, Enis; Chapman, Brad; Jadan, Margita; Franke, Vedran; Taylor, James

    2012-01-01

    Cloud computing has revolutionized availability and access to computing and storage resources, making it possible to provision a large computational infrastructure with only a few clicks in a web browser. However, those resources are typically provided in the form of low-level infrastructure components that need to be procured and configured before use. In this protocol, we demonstrate how to utilize cloud computing resources to perform open-ended bioinformatics analyses, with fully automated management of the underlying cloud infrastructure. By combining three projects, CloudBioLinux, CloudMan, and Galaxy into a cohesive unit, we have enabled researchers to gain access to more than 100 preconfigured bioinformatics tools and gigabytes of reference genomes on top of the flexible cloud computing infrastructure. The protocol demonstrates how to set up the available infrastructure and how to use the tools via a graphical desktop interface, a parallel command line interface, and the web-based Galaxy interface. PMID:22700313

  1. Using cloud computing infrastructure with CloudBioLinux, CloudMan, and Galaxy.

    PubMed

    Afgan, Enis; Chapman, Brad; Jadan, Margita; Franke, Vedran; Taylor, James

    2012-06-01

    Cloud computing has revolutionized availability and access to computing and storage resources, making it possible to provision a large computational infrastructure with only a few clicks in a Web browser. However, those resources are typically provided in the form of low-level infrastructure components that need to be procured and configured before use. In this unit, we demonstrate how to utilize cloud computing resources to perform open-ended bioinformatic analyses, with fully automated management of the underlying cloud infrastructure. By combining three projects, CloudBioLinux, CloudMan, and Galaxy, into a cohesive unit, we have enabled researchers to gain access to more than 100 preconfigured bioinformatics tools and gigabytes of reference genomes on top of the flexible cloud computing infrastructure. The protocol demonstrates how to set up the available infrastructure and how to use the tools via a graphical desktop interface, a parallel command-line interface, and the Web-based Galaxy interface.

  2. Development of performance specifications for collision avoidance systems for lane change crashes. Task 6, interim report : testbed systems design and associated facilities

    DOT National Transportation Integrated Search

    2001-11-01

    This report documents the design of an on-road testbed vehicle. The purposes of this testbed are twofold: (1) Establish a foundation for estimating lane change collision avoidance effectiveness, and (2) provide information pertinent to setting perfor...

  3. Integrated Network Testbed for Energy Grid Research and Technology

    Science.gov Websites

    Under the Integrated Network Testbed for Energy Grid Research and Technology Experimentation (INTEGRATE) project, NREL and partners completed five successful technology demonstrations at the ESIF. INTEGRATE is a $6.5-million, cost...

  4. Development of a space-systems network testbed

    NASA Technical Reports Server (NTRS)

    Lala, Jaynarayan; Alger, Linda; Adams, Stuart; Burkhardt, Laura; Nagle, Gail; Murray, Nicholas

    1988-01-01

    This paper describes a communications network testbed which has been designed to allow the development of architectures and algorithms that meet the functional requirements of future NASA communication systems. The central hardware components of the Network Testbed are programmable circuit-switching communication nodes which can be adapted by software or firmware changes to customize the testbed to particular architectures and algorithms. Fault detection, isolation, and reconfiguration have been implemented in the Network with a hybrid approach which utilizes features of both centralized and distributed techniques to provide efficient handling of faults within the Network.

  5. New Air-Launched Small Missile (ALSM) Flight Testbed for Hypersonic Systems

    NASA Technical Reports Server (NTRS)

    Bui, Trong T.; Lux, David P.; Stenger, Michael T.; Munson, Michael J.; Teate, George F.

    2007-01-01

    The Phoenix Air-Launched Small Missile (ALSM) flight testbed was conceived and is proposed to help address the lack of quick-turnaround and cost-effective hypersonic flight research capabilities. The Phoenix ALSM testbed results from utilization of the United States Navy Phoenix AIM-54 (Hughes Aircraft Company, now Raytheon Company, Waltham, Massachusetts) long-range, guided air-to-air missile and the National Aeronautics and Space Administration (NASA) Dryden Flight Research Center (Edwards, California) F-15B (McDonnell Douglas, now the Boeing Company, Chicago, Illinois) testbed airplane. The retirement of the Phoenix AIM-54 missiles from fleet operation has presented an opportunity for converting this flight asset into a new flight testbed. This cost-effective new platform will fill the gap in the test and evaluation of hypersonic systems for flight Mach numbers ranging from 3 to 5. Preliminary studies indicate that the Phoenix missile is a highly capable platform; when launched from a high-performance airplane, the guided Phoenix missile can boost research payloads to low hypersonic Mach numbers, enabling flight research in the supersonic-to-hypersonic transitional flight envelope. Experience gained from developing and operating the Phoenix ALSM testbed will assist the development and operation of future higher-performance ALSM flight testbeds as well as responsive microsatellite-small-payload air-launched space boosters.

  6. Demonstration of Supervisory Control and Data Acquisition (SCADA) Virtualization Capability in the US Army Research Laboratory (ARL)/Sustaining Base Network Assurance Branch (SBNAB) US Army Cyber Analytics Laboratory (ACAL) SCADA Hardware Testbed

    DTIC Science & Technology

    2015-05-01

    Report excerpts: the testbed includes human-machine interface (HMI) and programmable logic controller (PLC) components, which were instantiated with software and installed in multiple virtual machines. The simulated PLC software is the open-source ModbusPal Java application, which reports simulated values when queried using the Modbus TCP protocol. The demonstration included creating and capturing HMI-PLC network traffic over a 24-h period in the virtualized network and inspecting the packets for errors.

  7. Analysis and testing of a space crane articulating joint testbed

    NASA Technical Reports Server (NTRS)

    Sutter, Thomas R.; Wu, K. Chauncey

    1992-01-01

    The topics are presented in viewgraph form and include: space crane concept with mobile base; mechanical versus structural articulating joint; articulating joint test bed and reference truss; static and dynamic characterization completed for space crane reference truss configuration; improved linear actuators reduce articulating joint test bed backlash; 1-DOF space crane slew maneuver; boom 2 tip transient response finite element dynamic model; boom 2 tip transient response shear-corrected component modes torque driver profile; peak root member force vs. slew time torque driver profile; and open loop control of space crane motion.

  8. Integration of XRootD into the cloud infrastructure for ALICE data analysis

    NASA Astrophysics Data System (ADS)

    Kompaniets, Mikhail; Shadura, Oksana; Svirin, Pavlo; Yurchenko, Volodymyr; Zarochentsev, Andrey

    2015-12-01

    Cloud technologies allow easy load balancing between different tasks and projects. From the viewpoint of data analysis in the ALICE experiment, the cloud makes it possible to deploy software using the CERN Virtual Machine (CernVM) and the CernVM File System (CVMFS), to run different (including outdated) versions of software for long-term data preservation, and to dynamically allocate resources to different computing activities, e.g. a grid site, the ALICE Analysis Facility (AAF), and possible usage for local projects or other LHC experiments. We present a cloud solution for Tier-3 sites based on OpenStack and Ceph distributed storage with an integrated XRootD-based storage element (SE). A key feature of the solution is that Ceph is used as a backend for the Cinder Block Storage service of OpenStack and, at the same time, as a storage backend for XRootD, with the redundancy and availability of data preserved by the Ceph settings. For faster and easier OpenStack deployment, the Packstack solution, which is based on the Puppet configuration management system, was applied. Ceph installation and configuration operations are structured, converted to Puppet manifests describing node configurations, and integrated into Packstack. This solution can be easily deployed, maintained, and used even by small groups with limited computing resources and by small organizations, which often lack IT support. The proposed infrastructure has been tested on two different clouds (SPbSU & BITP) and integrates successfully with the ALICE data analysis model.

  9. Operational processing and cloud boundary detection from micro pulse lidar data

    NASA Technical Reports Server (NTRS)

    Campbell, James R.; Hlavka, Dennis L.; Spinhirne, James D.; Scott, V. Stanley., III; Turner, David D.

    1998-01-01

    Micro Pulse Lidar (MPL) was developed at NASA Goddard Space Flight Center (GSFC) as the result of research on space-borne lidar techniques. It was designed to provide continuous, unattended observations of all significant atmospheric cloud and aerosol structure with a rugged, compact system design and the benefit of eye safety (Spinhirne 1993). The eye safety feature is achieved by using low pulse energies and high pulse repetition rates compared to standard lidar systems. MPL systems use a diode-pumped 10 μJ, 2500 Hz frequency-doubled Nd:YLF laser. In addition, a solid-state Geiger-mode avalanche photodiode (GAPD) photon-counting detector is used, allowing quantum efficiencies approaching 70%. Other design features have previously been noted by Spinhirne (1995). Though the MPL is a commercially available instrument, with nearly 20 systems operating around the world, the most extensive MPL work has come from the systems operated by the Atmospheric Radiation Measurement (ARM) program (Stokes and Schwartz 1994). The instrument's ability to measure basic cloud macrophysical structure as well as cloud and aerosol radiative properties suits the ARM research philosophy well. MPL data can be used to yield many parameters, including cloud boundary heights to the limit of signal attenuation, cloud scattering cross sections and optical thicknesses, planetary boundary layer heights, and aerosol scattering profiles, including profiles into the stratosphere in nighttime cases (Hlavka et al 1996). System vertical resolution ranges from 30 m to 300 m (i.e., high and low resolution, respectively) depending on system design. The lidar research group at GSFC plays an advisory role in the operation, calibration and maintenance of NASA- and ARM-owned MPL systems. Over the past three years, processing software and system correction techniques have been developed in anticipation of the growing population of systems in the community. Datasets produced by three ARM-owned systems have served as the basis for this development. With two systems operating at the Southern Great Plains Cloud and Radiation Testbed (SGP CART) site since December 1993 and another at the Manus Island Atmospheric Radiation and Cloud Station (TWP ARCS) location in the tropical western Pacific since February 1997, the ARM archive contains over 4 years of observations. In addition, high-resolution systems coming on-line shortly at the North Slope, Alaska CART site, with another scheduled to follow at TWP ARCS-II, will diversify this archive with more extensive observations.
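    The ARM processing code is not shown in the record; the sketch below illustrates a common cloud-boundary approach the text alludes to, scanning a range-corrected backscatter profile for a sharp positive gradient marking the cloud base. The profile is synthetic and the gradient threshold is an illustrative assumption.

        import numpy as np

        # Synthetic range-corrected backscatter: smooth aerosol decay plus a
        # strong cloud return near 2.1 km, on a 30 m vertical grid.
        z = np.arange(0.03, 10.0, 0.03)              # altitude (km)
        profile = np.exp(-z / 2.0)                   # aerosol background
        profile[(z > 2.1) & (z < 2.5)] += 20.0       # cloud layer

        def cloud_base(z, p, grad_threshold=5.0):
            """First altitude where backscatter jumps sharply upward."""
            grad = np.diff(p) / np.diff(z)
            hits = np.where(grad > grad_threshold)[0]
            return z[hits[0] + 1] if hits.size else None

        print(f"cloud base ~ {cloud_base(z, profile):.2f} km")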

  10. Key Lessons in Building "Data Commons": The Open Science Data Cloud Ecosystem

    NASA Astrophysics Data System (ADS)

    Patterson, M.; Grossman, R.; Heath, A.; Murphy, M.; Wells, W.

    2015-12-01

    Cloud computing technology has created a shift around data and data analysis by allowing researchers to push computation to data as opposed to having to pull data to an individual researcher's computer. Subsequently, cloud-based resources can provide unique opportunities to capture computing environments used both to access raw data in its original form and also to create analysis products which may be the source of data for tables and figures presented in research publications. Since 2008, the Open Cloud Consortium (OCC) has operated the Open Science Data Cloud (OSDC), which provides scientific researchers with computational resources for storing, sharing, and analyzing large (terabyte and petabyte-scale) scientific datasets. OSDC has provided compute and storage services to over 750 researchers in a wide variety of data intensive disciplines. Recently, internal users have logged about 2 million core hours each month. The OSDC also serves the research community by colocating these resources with access to nearly a petabyte of public scientific datasets in a variety of fields also accessible for download externally by the public. In our experience operating these resources, researchers are well served by "data commons," meaning cyberinfrastructure that colocates data archives, computing, and storage infrastructure and supports essential tools and services for working with scientific data. In addition to the OSDC public data commons, the OCC operates a data commons in collaboration with NASA and is developing a data commons for NOAA datasets. As cloud-based infrastructures for distributing and computing over data become more pervasive, we ask, "What does it mean to publish data in a data commons?" Here we present the OSDC perspective and discuss several services that are key in architecting data commons, including digital identifier services.

  11. Raman lidar and sun photometer measurements of aerosols and water vapor during the ARM RCS experiment

    NASA Technical Reports Server (NTRS)

    Ferrare, R. A.; Whiteman, D. N.; Melfi, S. H.; Evans, K. D.; Holben, B. N.

    1995-01-01

    The first Atmospheric Radiation Measurement (ARM) Remote Cloud Study (RCS) Intensive Operations Period (IOP) was held during April 1994 at the Southern Great Plains (SGP) Cloud and Radiation Testbed (CART) site near Lamont, Oklahoma. This experiment was conducted to evaluate and calibrate state-of-the-art, ground-based remote sensing instruments and to use the data acquired by these instruments to validate retrieval algorithms developed under the ARM program. These activities are part of an overall plan to assess general circulation model (GCM) parameterization research. Since radiation processes are one of the key areas included in this parameterization research, measurements of water vapor and aerosols are required because of the important roles these atmospheric constituents play in radiative transfer. Two instruments were deployed during this IOP to measure water vapor and aerosols and to study their relationship. The NASA/Goddard Space Flight Center (GSFC) Scanning Raman Lidar (SRL) acquired water vapor and aerosol profile data during 15 nights of operations. The lidar acquired vertical profiles as well as nearly horizontal profiles directed near an instrumented 60-meter tower. Aerosol optical thickness, phase function, size distribution, and integrated water vapor were derived from measurements with a multiband automatic sun- and sky-scanning radiometer deployed at this site.

  12. A modular (almost) automatic set-up for elastic multi-tenants cloud (micro)infrastructures

    NASA Astrophysics Data System (ADS)

    Amoroso, A.; Astorino, F.; Bagnasco, S.; Balashov, N. A.; Bianchi, F.; Destefanis, M.; Lusso, S.; Maggiora, M.; Pellegrino, J.; Yan, L.; Yan, T.; Zhang, X.; Zhao, X.

    2017-10-01

    An auto-installing tool on a USB drive allows quick and easy automatic deployment of OpenNebula-based cloud infrastructures remotely managed by a central VMDIRAC instance. A single team, in the main site of an HEP Collaboration or elsewhere, can manage and run a relatively large network of federated (micro-)cloud infrastructures, making highly dynamic and elastic use of computing resources. Exploiting such an approach can lead to modular systems of cloud-bursting infrastructures addressing complex real-life scenarios.

  13. Cloud-Coffee: implementation of a parallel consistency-based multiple alignment algorithm in the T-Coffee package and its benchmarking on the Amazon Elastic-Cloud.

    PubMed

    Di Tommaso, Paolo; Orobitg, Miquel; Guirado, Fernando; Cores, Fernado; Espinosa, Toni; Notredame, Cedric

    2010-08-01

    We present the first parallel implementation of the T-Coffee consistency-based multiple aligner. We benchmark it on the Amazon Elastic Cloud (EC2) and show that the parallelization procedure is reasonably effective. We also conclude that for a web server with moderate usage (10K hits/month) the cloud provides a cost-effective alternative to in-house deployment. T-Coffee is a freeware open source package available from http://www.tcoffee.org/homepage.html

  14. CERES FM6 First Light Imagery

    Atmospheric Science Data Center

    2018-06-07

    Clouds and the Earth's Radiant Energy System Flight Model 6 (CERES FM6) opened its cover on Jan. 5, 2018 ... radiometer that NASA/NOAA has flown that measures the solar energy reflected by Earth, the heat the planet emits, and the role of clouds in ...

  15. Analysis, Modeling, and Simulation (AMS) Testbed Development and Evaluation to Support Dynamic Mobility Applications (DMA) and Active Transportation and Demand Management (ATDM) Programs - AMS Testbed Detailed Requirements

    DOT National Transportation Integrated Search

    2016-04-20

    The primary objective of this project is to develop multiple simulation Testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. The outputs (...

  16. Analysis, modeling, and simulation (AMS) testbed development and evaluation to support dynamic mobility applications (DMA) and active transportation and demand management (ATDM) programs - Chicago testbed analysis plan.

    DOT National Transportation Integrated Search

    2016-10-01

    The primary objective of this project is to develop multiple simulation Testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. The outputs (...

  17. Analysis, modeling, and simulation (AMS) testbed development and evaluation to support dynamic mobility applications (DMA) and active transportation and demand management (ATDM) programs : summary report for the Chicago testbed.

    DOT National Transportation Integrated Search

    2017-04-01

    The primary objective of this project is to develop multiple simulation testbeds and transportation models to evaluate the impacts of Connected Vehicle Dynamic Mobility Applications (DMA) and Active Transportation and Demand Management (ATDM) strateg...

  18. Analysis, Modeling, and Simulation (AMS) Testbed Development and Evaluation to Support Dynamic Mobility Applications (DMA) and Active Transportation and Demand Management (ATDM) Programs : Evaluation Report for the Chicago Testbed

    DOT National Transportation Integrated Search

    2017-04-01

    The primary objective of this project is to develop multiple simulation testbeds and transportation models to evaluate the impacts of Connected Vehicle Dynamic Mobility Applications (DMA) and Active Transportation and Demand Management (ATDM) strateg...

  19. Development of performance specifications for collision avoidance systems for lane change, merging, and backing. Task 6, Interim report : testbed systems design and associated facilities

    DOT National Transportation Integrated Search

    1997-05-01

    This report documents the design of the testbed. The purposes of the testbed are twofold: 1) establish a foundation for estimating collision avoidance effectiveness, and 2) provide information pertinent to setting performance spec...

  20. Analysis, modeling, and simulation (AMS) testbed development and evaluation to support dynamic mobility applications (DMA) and active transportation and demand management (ATDM) programs - Pasadena testbed analysis plan : final report.

    DOT National Transportation Integrated Search

    2016-06-30

    The primary objective of this project is to develop multiple simulation Testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. The outputs (...

  1. Analysis, modeling, and simulation (AMS) testbed development and evaluation to support dynamic mobility applications (DMA) and active transportation and demand management (ATDM) programs - San Diego testbed analysis plan.

    DOT National Transportation Integrated Search

    2016-10-01

    The primary objective of this project is to develop multiple simulation Testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. The outputs (...

  2. Analysis, Modeling, and Simulation (AMS) Testbed Development and Evaluation to Support Dynamic Mobility Applications (DMA) and Active Transportation and Demand Management (ATDM) Programs - AMS Testbed Selection Criteria

    DOT National Transportation Integrated Search

    2016-06-16

    The primary objective of this project is to develop multiple simulation Testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. The outputs (...

  3. Analysis, modeling, and simulation (AMS) testbed development and evaluation to support dynamic mobility applications (DMA) and active transportation and demand management (ATDM) programs : Dallas testbed analysis plan.

    DOT National Transportation Integrated Search

    2016-06-16

    The primary objective of this project is to develop multiple simulation Testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. The outputs (mo...

  4. Homomorphic encryption experiments on IBM's cloud quantum computing platform

    NASA Astrophysics Data System (ADS)

    Huang, He-Liang; Zhao, You-Wei; Li, Tan; Li, Feng-Guang; Du, Yu-Tao; Fu, Xiang-Qun; Zhang, Shuo; Wang, Xiang; Bao, Wan-Su

    2017-02-01

    Quantum computing has undergone rapid development in recent years. Owing to limitations on scalability, personal quantum computers still seem slightly unrealistic in the near future. The first practical quantum computer for ordinary users is likely to be on the cloud. However, the adoption of cloud computing is possible only if security is ensured. Homomorphic encryption is a cryptographic protocol that allows computation to be performed on encrypted data without decrypting them, so it is well suited to cloud computing. Here, we first applied homomorphic encryption on IBM's cloud quantum computer platform. In our experiments, we successfully implemented a quantum algorithm for linear equations while protecting our privacy. This demonstration opens a feasible path to the next stage of development of cloud quantum information technology.

  5. Response of a 2-story test-bed structure for the seismic evaluation of nonstructural systems

    NASA Astrophysics Data System (ADS)

    Soroushian, Siavash; Maragakis, E. "Manos"; Zaghi, Arash E.; Rahmanishamsi, Esmaeel; Itani, Ahmad M.; Pekcan, Gokhan

    2016-03-01

    A full-scale, two-story, two-by-one bay steel braced-frame was subjected to a number of unidirectional ground motions using three shake tables at the UNR-NEES site. The test-bed frame was designed to study the seismic performance of nonstructural systems including steel-framed gypsum partition walls, suspended ceilings, and fire sprinkler systems. The frame can be configured to perform as an elastic or inelastic system to generate large floor accelerations or large inter-story drifts, respectively. In this study, the dynamic performance of the linear and nonlinear test-beds was comprehensively characterized. The seismic performance of nonstructural systems installed in the linear and nonlinear test-beds was assessed during extreme excitations. In addition, the dynamic interactions of the test-bed and the installed nonstructural systems are investigated.

  6. The Fizeau Interferometer Testbed

    NASA Technical Reports Server (NTRS)

    Zhang, Xiaolei; Carpenter, Kenneth G.; Lyon, Richard G.; Huet, Hubert; Marzouk, Joe; Solyar, Gregory

    2003-01-01

    The Fizeau Interferometer Testbed (FIT) is a collaborative effort between NASA's Goddard Space Flight Center, the Naval Research Laboratory, Sigma Space Corporation, and the University of Maryland. The testbed will be used to explore the principles of and the requirements for the full, as well as the pathfinder, Stellar Imager mission concept. It has a long term goal of demonstrating closed-loop control of a sparse array of numerous articulated mirrors to keep optical beams in phase and optimize interferometric synthesis imaging. In this paper we present the optical and data acquisition system design of the testbed, and discuss the wavefront sensing and control algorithms to be used. Currently we have completed the initial design and hardware procurement for the FIT. The assembly and testing of the Testbed will be underway at Goddard's Instrument Development Lab in the coming months.

  7. Development of Hardware-in-the-loop Microgrid Testbed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiao, Bailu; Prabakar, Kumaraguru; Starke, Michael R

    2015-01-01

    A hardware-in-the-loop (HIL) microgrid testbed for the evaluation and assessment of microgrid operation and control systems is presented in this paper. The HIL testbed is composed of a real-time digital simulator (RTDS) for modeling of the microgrid, multiple NI CompactRIOs for device-level control, a prototype microgrid energy management system (MicroEMS), and a relay protection system. The applied communication-assisted hybrid control system is also discussed. Results of function testing of the HIL controller, communication, and the relay protection system are presented to show the effectiveness of the proposed HIL microgrid testbed.

  8. Wavefront control performance modeling with WFIRST shaped pupil coronagraph testbed

    NASA Astrophysics Data System (ADS)

    Zhou, Hanying; Nemati, Bijian; Krist, John; Cady, Eric; Kern, Brian; Poberezhskiy, Ilya

    2017-09-01

    NASA's WFIRST mission includes a coronagraph instrument (CGI) for direct imaging of exoplanets. Significant improvement in CGI model fidelity has been made recently, alongside a testbed high-contrast demonstration in a simulated dynamic environment at JPL. We present our modeling method and the results of comparisons to the testbed's high-order wavefront correction performance for the shaped pupil coronagraph. Agreement between model prediction and testbed result at better than a factor of 2 has been consistently achieved in raw contrast (contrast floor, chromaticity, and convergence), and with that comes good agreement in contrast sensitivity to wavefront perturbations and mask lateral shear.

  9. STORMSeq: an open-source, user-friendly pipeline for processing personal genomics data in the cloud.

    PubMed

    Karczewski, Konrad J; Fernald, Guy Haskin; Martin, Alicia R; Snyder, Michael; Tatonetti, Nicholas P; Dudley, Joel T

    2014-01-01

    The increasing public availability of personal complete genome sequencing data has ushered in an era of democratized genomics. However, read mapping and variant calling software is constantly improving and individuals with personal genomic data may prefer to customize and update their variant calls. Here, we describe STORMSeq (Scalable Tools for Open-Source Read Mapping), a graphical interface cloud computing solution that does not require a parallel computing environment or extensive technical experience. This customizable and modular system performs read mapping, read cleaning, and variant calling and annotation. At present, STORMSeq costs approximately $2 and 5-10 hours to process a full exome sequence and $30 and 3-8 days to process a whole genome sequence. We provide this open-access and open-source resource as a user-friendly interface in Amazon EC2.

  10. Kite: status of the external metrology testbed for SIM

    NASA Astrophysics Data System (ADS)

    Dekens, Frank G.; Alvarez-Salazar, Oscar S.; Azizi, Alireza; Moser, Steven J.; Nemati, Bijan; Negron, John; Neville, Timothy; Ryan, Daniel

    2004-10-01

    Kite is a system-level testbed for the External Metrology System of the Space Interferometry Mission (SIM). The External Metrology System is used to track the fiducials that are located at the centers of the interferometer's siderostats. The relative changes in their positions need to be tracked to an accuracy of tens of picometers in order to correct for thermal deformations and attitude changes of the spacecraft. Because of the need for such high-precision measurements, the Kite testbed was built to test both the metrology gauges and our ability to optically model the system at these levels. The Kite testbed is a redundant metrology truss in which 6 lengths are measured, but only 5 are needed to define the system. The RMS error between the redundant measurements needs to be less than 140 pm for the SIM Wide-Angle observing scenario and less than 8 pm for the Narrow-Angle observing scenario. With our current testbed layout, we have achieved an RMS of 85 pm in the Wide-Angle case, meeting the goal. For the Narrow-Angle case, we have reached 5.8 pm, but only for on-axis observations. We describe the testbed improvements that have been made since our initial results, and outline the future Kite changes that will add further effects that SIM faces in order to make the testbed more representative of SIM.
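
    The figure of merit here is concrete: the redundant (sixth) gauge reading is compared against the length implied by the other five through the truss geometry, and the RMS of that closure residual is held against the 140 pm and 8 pm requirements. A minimal numpy sketch of that bookkeeping, using a synthetic drift-plus-noise stand-in for the real gauge data (the drift shape and 50 pm noise level are assumptions):

        import numpy as np

        # measured sixth length vs. the value implied by the other five gauges,
        # both in metres; synthetic stand-ins with 50 pm of uncorrelated noise
        rng = np.random.default_rng(0)
        n = 10_000
        truth = 1e-9 * np.sin(np.linspace(0, 20, n))    # slow thermal drift (toy)
        measured  = truth + 50e-12 * rng.standard_normal(n)
        predicted = truth + 50e-12 * rng.standard_normal(n)

        closure = measured - predicted                   # redundancy residual
        rms = np.sqrt(np.mean(closure**2))
        print(f"closure RMS = {rms*1e12:.1f} pm")
        print("wide-angle requirement met:  ", rms < 140e-12)
        print("narrow-angle requirement met:", rms < 8e-12)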

  11. A Novel UAV Electric Propulsion Testbed for Diagnostics and Prognostics

    NASA Technical Reports Server (NTRS)

    Gorospe, George E., Jr.; Kulkarni, Chetan S.

    2017-01-01

    This paper presents a novel hardware-in-the-loop (HIL) testbed for systems-level diagnostics and prognostics of an electric propulsion system used in UAVs (unmanned aerial vehicles). Referencing the all-electric Edge 540T aircraft used in science and research by NASA Langley Flight Research Center, the HIL testbed includes an identical propulsion system, consisting of motors, speed controllers and batteries. Isolated under a controlled laboratory environment, the propulsion system has been instrumented for advanced diagnostics and prognostics. To produce flight-like loading on the system, a slave motor is coupled to the motor under test (MUT), providing variable mechanical resistance and the capability of introducing nondestructive, wear-like frictional loads on the system. This testbed enables the verification of mathematical models of each component of the propulsion system, the repeatable generation of flight-like loads on the system for fault analysis, test-to-failure scenarios, and the development of advanced system-level diagnostics and prognostics methods. The capabilities of the testbed are extended through the integration of a LabVIEW-based client for the Live Virtual Constructive Distributed Environment (LVCDC) Gateway, which enables both the publishing of generated data for remotely located observers and prognosers and the synchronization of the testbed propulsion system with vehicles in the air. The developed HIL testbed gives researchers easy access to a scientifically relevant portion of the aircraft without the overhead and dangers encountered during actual flight.

  12. Inter-comparison of soil moisture sensors from the soil moisture active passive marena Oklahoma in situ sensor testbed (SMAP-MOISST)

    USDA-ARS?s Scientific Manuscript database

    The diversity of in situ soil moisture network protocols and instrumentation led to the development of a testbed for comparing in situ soil moisture sensors. Located in Marena, Oklahoma on the Oklahoma State University Range Research Station, the testbed consists of four base stations. Each station ...

  13. Application developer's tutorial for the CSM testbed architecture

    NASA Technical Reports Server (NTRS)

    Underwood, Phillip; Felippa, Carlos A.

    1988-01-01

    This tutorial serves as an illustration of the use of the programmer interface on the CSM Testbed Architecture (NICE). It presents a complete, but simple, introduction to using both the GAL-DBM (Global Access Library-Database Manager) and CLIP (Command Language Interface Program) to write a NICE processor. Familiarity with the CSM Testbed architecture is required.

  14. Analysis, Modeling, and Simulation (AMS) Testbed Development and Evaluation to Support Dynamic Mobility Applications (DMA) and Active Transportation and Demand Management (ATDM) Programs — Calibration Report for San Mateo Testbed.

    DOT National Transportation Integrated Search

    2016-08-22

    The primary objective of this project is to develop multiple simulation Testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. The outputs (...

  15. Analysis, modeling, and simulation (AMS) testbed development and evaluation to support dynamic mobility applications (DMA) and active transportation and demand management (ATDM) programs - calibration report for Dallas testbed : final report.

    DOT National Transportation Integrated Search

    2016-10-01

    The primary objective of this project is to develop multiple simulation testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. The outputs (...

  16. Analysis, Modeling, and Simulation (AMS) Testbed Development and Evaluation to Support Dynamic Mobility Applications (DMA) and Active Transportation and Demand Management (ATDM) Programs - Evaluation Report for the San Diego Testbed

    DOT National Transportation Integrated Search

    2017-07-01

    The primary objective of this project is to develop multiple simulation testbeds and transportation models to evaluate the impacts of Connected Vehicle Dynamic Mobility Applications (DMA) and Active Transportation and Demand Management (ATDM) strateg...

  17. Analysis, Modeling, and Simulation (AMS) Testbed Development and Evaluation to Support Dynamic Mobility Applications (DMA) and Active Transportation and Demand Management (ATDM) Programs : Evaluation Report for the San Diego Testbed : Draft Report.

    DOT National Transportation Integrated Search

    2017-07-01

    The primary objective of this project is to develop multiple simulation testbeds and transportation models to evaluate the impacts of Connected Vehicle Dynamic Mobility Applications (DMA) and Active Transportation and Demand Management (ATDM) strateg...

  18. Analysis, Modeling, and Simulation (AMS) Testbed Development and Evaluation to Support Dynamic Mobility Applications (DMA) and Active Transportation and Demand Management (ATDM) Programs - calibration Report for Phoenix Testbed : Final Report.

    DOT National Transportation Integrated Search

    2016-10-01

    The primary objective of this project is to develop multiple simulation Testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. The outputs (...

  19. Analysis, modeling, and simulation (AMS) testbed development and evaluation to support dynamic mobility applications (DMA) and active transportation and demand management (ATDM) programs - evaluation summary for the San Diego testbed

    DOT National Transportation Integrated Search

    2017-08-01

    The primary objective of this project is to develop multiple simulation testbeds and transportation models to evaluate the impacts of Connected Vehicle Dynamic Mobility Applications (DMA) and Active Transportation and Demand Management (ATDM) strateg...

  20. Analysis, Modeling, and Simulation (AMS) Testbed Development and Evaluation to Support Dynamic Mobility Applications (DMA) and Active Transportation and Demand Management (ATDM) Programs - San Mateo Testbed Analysis Plan : Final Report.

    DOT National Transportation Integrated Search

    2016-06-29

    The primary objective of this project is to develop multiple simulation testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. The outputs (...

  1. Exploration of cloud computing late start LDRD #149630 : Raincoat. v. 2.1.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Echeverria, Victor T.; Metral, Michael David; Leger, Michelle A.

    This report contains documentation from an interoperability study conducted under the Late Start LDRD 149630, Exploration of Cloud Computing. A small late-start LDRD from last year resulted in a study (Raincoat) on using Virtual Private Networks (VPNs) to enhance security in a hybrid cloud environment. Raincoat initially explored the use of OpenVPN on IPv4 and demonstrated that it is possible to secure the communication channel between two small 'test' clouds (a few nodes each) at New Mexico Tech and Sandia. We extended the Raincoat study to add IPsec support via Vyatta routers, to interface with a public cloud (Amazon Elastic Compute Cloud (EC2)), and to be significantly more scalable than the previous iteration. The study contributed to our understanding of interoperability in a hybrid cloud.

  2. 1D-VAR Retrieval Using Superchannels

    NASA Technical Reports Server (NTRS)

    Liu, Xu; Zhou, Daniel; Larar, Allen; Smith, William L.; Schluessel, Peter; Mango, Stephen; SaintGermain, Karen

    2008-01-01

    Since modern ultra-spectral remote sensors have thousands of channels, it is difficult to include all of them in a 1D-var retrieval system. We will describe a physical inversion algorithm which includes all available channels for the atmospheric temperature, moisture, cloud, and surface parameter retrievals. Both the forward model and the inversion algorithm compress the channel radiances into super channels. These super channels are obtained by projecting the radiance spectra onto a set of pre-calculated eigenvectors. The forward model provides both the super-channel properties and the Jacobian in EOF space directly. For ultra-spectral sensors such as the Infrared Atmospheric Sounding Interferometer (IASI) and the NPOESS Airborne Sounder Testbed Interferometer (NAST), a compression ratio of more than 80 can be achieved, leading to a significant reduction in the computations involved in an inversion process. Results will be shown applying the algorithm to real IASI and NAST data.
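
    The projection step described above is straightforward to sketch: pre-compute eigenvectors (EOFs) from an ensemble of spectra, then obtain super channels as inner products with those eigenvectors. The ensemble below is a synthetic stand-in (a real system would use a forward-model training set), and k = 100 is chosen only to mimic the quoted ~80x ratio for IASI's 8461 channels:

        import numpy as np

        # training set of radiance spectra: n_samples x n_channels (synthetic)
        rng = np.random.default_rng(1)
        n_samples, n_channels = 500, 8461            # IASI has 8461 channels
        spectra = rng.standard_normal((n_samples, n_channels)).cumsum(axis=1)

        # pre-calculate the eigenvectors (EOFs) of the ensemble via SVD
        mean = spectra.mean(axis=0)
        _, s, vt = np.linalg.svd(spectra - mean, full_matrices=False)
        k = 100                                      # ~85x compression here
        eofs = vt[:k]                                # leading eigenvectors

        def to_superchannels(radiance):
            # project a radiance spectrum onto the EOF basis
            return eofs @ (radiance - mean)

        def from_superchannels(coeffs):
            # reconstruct the full spectrum from the k super channels
            return mean + eofs.T @ coeffs

        y = spectra[0]
        c = to_superchannels(y)
        print("compression ratio:", n_channels / k)
        print("reconstruction RMS:", np.sqrt(np.mean((from_superchannels(c) - y)**2)))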

  3. Overview of the United States Department of Energy's ARM (Atmospheric Radiation Measurement) Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stokes, G.M.; Tichler, J.L.

    The Department of Energy (DOE) is initiating a major atmospheric research effort, the Atmospheric Radiation Measurement Program (ARM). The program is a key component of DOE's research strategy to address global climate change and is a direct continuation of DOE's decade-long effort to improve the ability of General Circulation Models (GCMs) to provide reliable simulations of regional and long-term climate change in response to increasing greenhouse gases. The effort is multi-disciplinary and multi-agency, involving universities, private research organizations and more than a dozen government laboratories. The objective of the ARM research is to provide an experimental testbed for the study of important atmospheric effects, particularly cloud and radiative processes, and to test parameterizations of these processes for use in atmospheric models. This effort will support the continued and rapid improvement of GCM predictive capability.

  4. ECITE: A Testbed for Assessment of Technology Interoperability and Integration with Architecture Components

    NASA Astrophysics Data System (ADS)

    Graves, S. J.; Keiser, K.; Law, E.; Yang, C. P.; Djorgovski, S. G.

    2016-12-01

    ECITE (EarthCube Integration and Testing Environment) is providing both cloud-based computational testing resources and an Assessment Framework for Technology Interoperability and Integration. NSF's EarthCube program is funding the development of cyberinfrastructure building block components as technologies to address Earth science research problems. These EarthCube building blocks need to support integration and interoperability objectives to work towards a coherent cyberinfrastructure architecture for the program. ECITE is being developed to provide capabilities to test and assess interoperability and integration across funded EarthCube technology projects. EarthCube-defined criteria for interoperability and integration are applied to use cases coordinating science problems with technology solutions. The Assessment Framework facilitates planning, execution and documentation of the technology assessments for review by the EarthCube community. This presentation will describe the components of ECITE and examine the methodology of crosswalking between science and technology use cases.

  5. Advanced Turbine Technology Applications Project (ATTAP)

    NASA Technical Reports Server (NTRS)

    1989-01-01

    ATTAP activities during the past year were highlighted by an extensive materials assessment, execution of a reference powertrain design, test-bed engine design and development, ceramic component design, materials and component characterization, ceramic component process development and fabrication, component rig design and fabrication, test-bed engine fabrication, and hot gasifier rig and engine testing. Materials assessment activities entailed engine environment evaluation of domestically supplied radial gasifier turbine rotors that were available at the conclusion of the Advanced Gas Turbine (AGT) Technology Development Project as well as an extensive survey of both domestic and foreign ceramic suppliers and Government laboratories performing ceramic materials research applicable to advanced heat engines. A reference powertrain design was executed to reflect the selection of the AGT-5 as the ceramic component test-bed engine for the ATTAP. Test-bed engine development activity focused on upgrading the AGT-5 from a 1038 C (1900 F) metal engine to a durable 1371 C (2500 F) structural ceramic component test-bed engine. Ceramic component design activities included the combustor, gasifier turbine static structure, and gasifier turbine rotor. The materials and component characterization efforts have included the testing and evaluation of several candidate ceramic materials and components being developed for use in the ATTAP. Ceramic component process development and fabrication activities were initiated for the gasifier turbine rotor, gasifier turbine vanes, gasifier turbine scroll, extruded regenerator disks, and thermal insulation. Component rig development activities included combustor, hot gasifier, and regenerator rigs. Test-bed engine fabrication activities consisted of the fabrication of an all-new AGT-5 durability test-bed engine and support of all engine test activities through instrumentation/build/repair. Hot gasifier rig and test-bed engine testing activities were performed.

  6. Preliminary Design of a Galactic Cosmic Ray Shielding Materials Testbed for the International Space Station

    NASA Technical Reports Server (NTRS)

    Gaier, James R.; Berkebile, Stephen; Sechkar, Edward A.; Panko, Scott R.

    2012-01-01

    The preliminary design of a testbed to evaluate the effectiveness of galactic cosmic ray (GCR) shielding materials, the MISSE Radiation Shielding Testbed (MRSMAT) is presented. The intent is to mount the testbed on the Materials International Space Station Experiment-X (MISSE-X) which is to be mounted on the International Space Station (ISS) in 2016. A key feature is the ability to simultaneously test nine samples, including standards, which are 5.25 cm thick. This thickness will enable most samples to have an areal density greater than 5 g/sq cm. It features a novel and compact GCR telescope which will be able to distinguish which cosmic rays have penetrated which shielding material, and will be able to evaluate the dose transmitted through the shield. The testbed could play a pivotal role in the development and qualification of new cosmic ray shielding technologies.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jonathan Gray; Robert Anderson; Julio G. Rodriguez

    Identifying and understanding digital instrumentation and control (I&C) cyber vulnerabilities within nuclear power plants and other nuclear facilities is critical if nation states desire to operate nuclear facilities safely, reliably, and securely. In order to demonstrate objective evidence that cyber vulnerabilities have been adequately identified and mitigated, a testbed representing a facility’s critical nuclear equipment must be replicated. Idaho National Laboratory (INL) has built and operated similar testbeds for common critical infrastructure I&C for over ten years. This experience developing, operating, and maintaining an I&C testbed in support of research identifying cyber vulnerabilities has led the Korea Atomic Energy Research Institute (KAERI) of the Republic of Korea to solicit the experiences of INL to help mitigate problems early in the design, development, operation, and maintenance of a similar testbed. The following information will discuss I&C testbed lessons learned and the impact of these experiences to KAERI.

  8. Performance Evaluation of a Data Validation System

    NASA Technical Reports Server (NTRS)

    Wong, Edmond (Technical Monitor); Sowers, T. Shane; Santi, L. Michael; Bickford, Randall L.

    2005-01-01

    Online data validation is a performance-enhancing component of modern control and health management systems. It is essential that performance of the data validation system be verified prior to its use in a control and health management system. A new Data Qualification and Validation (DQV) Test-bed application was developed to provide a systematic test environment for this performance verification. The DQV Test-bed was used to evaluate a model-based data validation package known as the Data Quality Validation Studio (DQVS). DQVS was employed as the primary data validation component of a rocket engine health management (EHM) system developed under NASA's NGLT (Next Generation Launch Technology) program. In this paper, the DQVS and DQV Test-bed software applications are described, and the DQV Test-bed verification procedure for this EHM system application is presented. Test-bed results are summarized and implications for EHM system performance improvements are discussed.

  9. Space-Based Reconfigurable Software Defined Radio Test Bed Aboard International Space Station

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.; Lux, James P.

    2014-01-01

    The National Aeronautics and Space Administration (NASA) recently launched a new software defined radio research test bed to the International Space Station. The test bed, sponsored by the Space Communications and Navigation (SCaN) Office within NASA, is referred to as the SCaN Testbed. The SCaN Testbed is a highly capable communications system, composed of three software defined radios, integrated into a flight system, and mounted to the truss of the International Space Station. Software defined radios offer the future promise of in-flight reconfigurability, autonomy, and eventually cognitive operation. The adoption of software defined radios offers space missions a new way to develop and operate space transceivers for communications and navigation. Reconfigurable or software defined radios with communications and navigation functions implemented in software or VHDL (VHSIC Hardware Description Language) provide the capability to change the functionality of the radio during development or after launch. The ability to change the operating characteristics of a radio through software once deployed to space offers the flexibility to adapt to new science opportunities, recover from anomalies within the science payload or communication system, and potentially reduce development cost and risk by adapting generic space platforms to meet specific mission requirements. The software defined radios on the SCaN Testbed are each compliant with NASA's Space Telecommunications Radio System (STRS) Architecture. The STRS Architecture is an open, non-proprietary architecture that defines interfaces for the connections between radio components. It provides an operating environment to abstract the communication waveform application from the underlying platform-specific hardware, such as digital-to-analog converters, analog-to-digital converters, oscillators, RF attenuators, automatic gain control circuits, FPGAs, and general-purpose processors, and from the interconnections among different radio components.
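
    The abstraction the record describes can be caricatured in a few lines: waveform code is written against an abstract platform interface rather than against converters and attenuators directly, so the same application can be reloaded onto different radio hardware. This is only an illustrative sketch; the class and method names are invented, and the real STRS interfaces are a published specification, not this toy:

        from abc import ABC, abstractmethod

        class Platform(ABC):
            # stands in for the platform-specific hardware layer (hypothetical API)
            @abstractmethod
            def write_samples(self, samples): ...   # e.g. drive the DAC
            @abstractmethod
            def set_gain(self, db): ...             # e.g. RF attenuator control

        class Waveform:
            """Waveform application code talks only to the abstract Platform."""
            def __init__(self, platform: Platform):
                self.platform = platform
            def transmit(self, samples):
                self.platform.set_gain(10.0)
                self.platform.write_samples(samples)

        class LoopbackPlatform(Platform):
            # trivial concrete platform used here only to make the sketch run
            def write_samples(self, samples): print(f"DAC <- {len(samples)} samples")
            def set_gain(self, db): print(f"gain set to {db} dB")

        Waveform(LoopbackPlatform()).transmit([0.0] * 64)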

  10. Factors Influencing F/OSS Cloud Computing Software Product Success: A Quantitative Study

    ERIC Educational Resources Information Center

    Letort, D. Brian

    2012-01-01

    Cloud Computing introduces a new business operational model that allows an organization to shift information technology consumption from traditional capital expenditure to operational expenditure. This shift introduces challenges from both the adoption and creation vantage. This study evaluates factors that influence Free/Open Source Software…

  11. Private Cloud Communities for Faculty and Students

    ERIC Educational Resources Information Center

    Tomal, Daniel R.; Grant, Cynthia

    2015-01-01

    Massive open online courses (MOOCs) and public and private cloud communities continue to flourish in the field of higher education. However, MOOCs have received criticism in recent years and offer little benefit to students already enrolled at an institution. This article advocates for the collaborative creation and use of institutional, program…

  12. HPC: Rent or Buy

    ERIC Educational Resources Information Center

    Fredette, Michelle

    2012-01-01

    "Rent or buy?" is a question people ask about everything from housing to textbooks. It is also a question universities must consider when it comes to high-performance computing (HPC). With the advent of Amazon's Elastic Compute Cloud (EC2), Microsoft Windows HPC Server, Rackspace's OpenStack, and other cloud-based services, researchers now have…

  13. Students' Understanding of Cloud and Rainbow Formation and Teachers' Awareness of Students' Performance

    ERIC Educational Resources Information Center

    Malleus, Elina; Kikas, Eve; Kruus, Sigrid

    2016-01-01

    This study describes primary school students' knowledge about rainfall, clouds and rainbow formation together with teachers' predictions about students' performance. In our study, primary school students' (N = 177) knowledge about rainfall and rainbow formation was examined using structured interviews with open-ended questions. Primary school…

  14. Cloudweaver: Adaptive and Data-Driven Workload Manager for Generic Clouds

    NASA Astrophysics Data System (ADS)

    Li, Rui; Chen, Lei; Li, Wen-Syan

    Cloud computing denotes the latest trend in application development for parallel computing on massive data volumes. It relies on clouds of servers to handle tasks that used to be managed by an individual server. With cloud computing, software vendors can provide business intelligence and data analytic services for internet-scale data sets. Many open source projects, such as Hadoop, offer various software components that are essential for building a cloud infrastructure. Current Hadoop (and many other frameworks) requires users to configure cloud infrastructures via programs and APIs, and such configuration is fixed during runtime. In this chapter, we propose a workload manager (WLM), called CloudWeaver, which provides automated configuration of a cloud infrastructure for runtime execution. The workload management is data-driven and can adapt to the dynamic nature of operator throughput during different execution phases. CloudWeaver works for a single job and for a workload consisting of multiple jobs running concurrently, aiming at maximum throughput using a minimum set of processors.
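
    The data-driven idea is simple to sketch: measure per-operator throughput at runtime and shift workers toward the bottleneck stage instead of fixing the assignment up front. This is a minimal greedy rebalancer in that spirit, not CloudWeaver's actual algorithm; the operator names and rates are invented:

        def rebalance(per_worker_tput, workers, total_workers):
            # move one worker from the fastest stage to the slowest (bottleneck);
            # effective stage rate = workers * measured per-worker throughput
            assert sum(workers.values()) == total_workers
            rate = {op: workers[op] * per_worker_tput[op] for op in workers}
            slow = min(rate, key=rate.get)
            fast = max(rate, key=rate.get)
            if fast != slow and workers[fast] > 1:
                workers[fast] -= 1
                workers[slow] += 1
            return workers

        workers = {"scan": 4, "join": 2, "aggregate": 2}               # 8 processors
        per_worker = {"scan": 120.0, "join": 35.0, "aggregate": 90.0}  # tuples/s, measured
        for step in range(5):                   # repeat as new measurements arrive
            workers = rebalance(per_worker, workers, total_workers=8)
        print(workers)   # the slow join stage ends up with the most workers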

  15. Local Atmospheric Response to an Open-Ocean Polynya in a High-Resolution Climate Model

    DOE PAGES

    Weijer, Wilbert; Veneziani, Milena; Stössel, Achim; ...

    2017-03-01

    We study the atmospheric response to an open-ocean polynya in the Southern Ocean by analyzing the results from an atmospheric and oceanic synoptic-scale-resolving Community Earth System Model (CESM) simulation. While coarser-resolution versions of CESM generally do not produce open-ocean polynyas in the Southern Ocean, they do emerge and disappear on interannual timescales in the synoptic-scale simulation. This provides an ideal opportunity to study the polynya’s impact on the overlying and surrounding atmosphere. This has been pursued here by investigating the seasonal cycle of differences of surface and air-column variables between polynya and non-polynya years. Our results indicate significant local impacts on turbulent heat fluxes, precipitation, cloud characteristics, and radiative fluxes. In particular, we find that clouds over polynyas are optically thicker and higher than clouds over sea ice during non-polynya years. Although the lower albedo of polynyas significantly increases the net shortwave absorption, the enhanced cloud brightness tempers this increase by almost 50%. Also, in this model, enhanced longwave radiation emitted from the warmer surface of polynyas is balanced by stronger downwelling fluxes from the thicker cloud deck. Impacts are found to be sensitive to the synoptic wind direction. The strongest regional impacts are found when northeasterly winds cross the polynya and interact with katabatic winds. Finally, surface air pressure anomalies over the polynya are only found to be significant when cold, dry air masses pass over the polynya, i.e., in the case of southerly winds.

  16. Local Atmospheric Response to an Open-Ocean Polynya in a High-Resolution Climate Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weijer, Wilbert; Veneziani, Milena; Stössel, Achim

    We study the atmospheric response to an open-ocean polynya in the Southern Ocean by analyzing the results from an atmospheric and oceanic synoptic-scale-resolving Community Earth System Model (CESM) simulation. While coarser-resolution versions of CESM generally do not produce open-ocean polynyas in the Southern Ocean, they do emerge and disappear on interannual timescales in the synoptic-scale simulation. This provides an ideal opportunity to study the polynya’s impact on the overlying and surrounding atmosphere. This has been pursued here by investigating the seasonal cycle of differences of surface and air-column variables between polynya and non-polynya years. Our results indicate significant local impacts on turbulent heat fluxes, precipitation, cloud characteristics, and radiative fluxes. In particular, we find that clouds over polynyas are optically thicker and higher than clouds over sea ice during non-polynya years. Although the lower albedo of polynyas significantly increases the net shortwave absorption, the enhanced cloud brightness tempers this increase by almost 50%. Also, in this model, enhanced longwave radiation emitted from the warmer surface of polynyas is balanced by stronger downwelling fluxes from the thicker cloud deck. Impacts are found to be sensitive to the synoptic wind direction. The strongest regional impacts are found when northeasterly winds cross the polynya and interact with katabatic winds. Finally, surface air pressure anomalies over the polynya are only found to be significant when cold, dry air masses pass over the polynya, i.e., in the case of southerly winds.

  17. Facilitating NASA Earth Science Data Processing Using Nebula Cloud Computing

    NASA Technical Reports Server (NTRS)

    Pham, Long; Chen, Aijun; Kempler, Steven; Lynnes, Christopher; Theobald, Michael; Asghar, Esfandiari; Campino, Jane; Vollmer, Bruce

    2011-01-01

    Cloud Computing has been implemented in several commercial arenas. The NASA Nebula Cloud Computing platform is an Infrastructure as a Service (IaaS) built in 2008 at NASA Ames Research Center and in 2010 at GSFC. Nebula is an open source Cloud platform intended to: a) make NASA realize significant cost savings through efficient resource utilization, reduced energy consumption, and reduced labor costs; b) provide an easier way for NASA scientists and researchers to efficiently explore and share large and complex data sets; and c) allow customers to provision, manage, and decommission computing capabilities on an as-needed basis.

  18. SUCCESS Studies of the Impact of Aircraft on Cirrus Clouds

    NASA Technical Reports Server (NTRS)

    Toon, Owen B.; Condon, Estelle P. (Technical Monitor)

    1996-01-01

    During April of 1996 NASA will sponsor the SUCCESS project to better understand the impact of subsonic aircraft on the Earth's radiation budget. We plan to better determine the radiative properties of cirrus clouds and of contrails so that satellite observations can better determine their impact on Earth's radiation budget. We hope to determine how cirrus clouds form, whether the exhaust from subsonic aircraft presently affects the formation of cirrus clouds, and if the exhaust does affect the clouds whether the changes induced are of climatological significance. We seek to pave the way for future studies by developing and testing several new instruments. We also plan to better determine the characteristics of gaseous and particulate exhaust products from subsonic aircraft and their evolution in the region near the aircraft. In order to achieve our experimental objectives we plan to use the DC-8 aircraft as an in situ sampling platform. It will carry a wide variety of gaseous, particulate, radiative, and meteorological instruments. We will also use a T-39 aircraft primarily to sample the exhaust from other aircraft. It will carry a suite of instruments to measure particles and gases. We will employ an ER-2 aircraft as a remote sensing platform. The ER-2 will act as a surrogate satellite so that remote sensing observations can be related to the in situ parameters measured by the DC-8 and T-39. The mission strategy calls for a 5-week deployment beginning on April 8, 1996, and ending on May 10, 1996. During this time all three aircraft will be based in Salina, Kansas. A series of flights, averaging one every other day during this period, will be made mainly near the Department of Energy's Cloud and Radiation Testbed (CART) site located in Northern Oklahoma and Southern Kansas. During this same time period an extensive set of ground-based measurements will be made by the DOE, which will also be operating several aircraft in the area to better understand the radiative properties of the atmosphere. Additional flights will be made over the Rocky Mountains to investigate wave clouds. Flights will also be made over the Gulf of Mexico to utilize an oceanic background for remote sensing measurements. The results of this mission will be presented in this talk.

  19. The Wide-Field Imaging Interferometry Testbed (WIIT): Recent Progress and Results

    NASA Technical Reports Server (NTRS)

    Rinehart, Stephen A.; Frey, Bradley J.; Leisawitz, David T.; Lyon, Richard G.; Maher, Stephen F.; Martino, Anthony J.

    2008-01-01

    Continued research with the Wide-Field Imaging Interferometry Testbed (WIIT) has achieved several important milestones. We have moved WIIT into the Advanced Interferometry and Metrology (AIM) Laboratory at Goddard, and have characterized the testbed in this well-controlled environment. The system is now completely automated and we are in the process of acquiring large data sets for analysis. In this paper, we discuss these new developments and outline our future research directions. The WIIT testbed, combined with new data analysis techniques and algorithms, provides a demonstration of the technique of wide-field interferometric imaging, a powerful tool for future space-borne interferometers.

  20. Starlight suppression from the starshade testbed at NGAS

    NASA Astrophysics Data System (ADS)

    Samuele, Rocco; Glassman, Tiffany; Johnson, Adam M. J.; Varshneya, Rupal; Shipley, Ann

    2009-08-01

    We report on progress at the Northrop Grumman Aerospace Systems (NGAS) starshade testbed. The starshade testbed is a 42.8 m vacuum chamber designed to replicate the Fresnel number of an equivalent full-scale starshade mission, namely the flagship New Worlds Observer (NWO) configuration. Subscale starshades manufactured by the NGAS foundry have shown 10^-7 starlight suppression at an equivalent full-mission inner working angle of 85 milliarcseconds. In this paper, we present an overview of the experimental setup, scaling relationships to an equivalent full-scale mission, and preliminary results from the testbed. We also discuss potential limitations of the current generation of starshades and improvements for the future.

  1. Development of the CSI phase-3 evolutionary model testbed

    NASA Technical Reports Server (NTRS)

    Gronet, M. J.; Davis, D. A.; Tan, M. K.

    1994-01-01

    This report documents the development effort for the reconfiguration of the Controls-Structures Integration (CSI) Evolutionary Model (CEM) Phase-2 testbed into the CEM Phase-3 configuration. This step responds to the need to develop and test CSI technologies associated with typical planned earth science and remote sensing platforms. The primary objective of the CEM Phase-3 ground testbed is to simulate the overall on-orbit dynamic behavior of the EOS AM-1 spacecraft. Key elements of the objective include approximating the low-frequency appendage dynamic interaction of EOS AM-1, allowing for the changeout of components, and simulating the free-free on-orbit environment using an advanced suspension system. The fundamentals of appendage dynamic interaction are reviewed. A new version of the multiple scaling method is used to design the testbed to have the full-scale geometry and dynamics of the EOS AM-1 spacecraft, but at one-tenth the weight. The testbed design is discussed, along with the testing of the solar array, high gain antenna, and strut components. Analytical performance comparisons show that the CEM Phase-3 testbed simulates the EOS AM-1 spacecraft with good fidelity for the important parameters of interest.

  2. Kite: Status of the External Metrology Testbed for SIM

    NASA Technical Reports Server (NTRS)

    Dekens, Frank G.; Alvarez-Salazar, Oscar; Azizi, Alireza; Moser, Steven; Nemati, Bijan; Negron, John; Neville, Timothy; Ryan, Daniel

    2004-01-01

    Kite is a system-level testbed for the External Metrology System of the Space Interferometry Mission (SIM). The External Metrology System is used to track the fiducials that are located at the centers of the interferometer's siderostats. The relative changes in their positions need to be tracked to tens of picometers in order to correct for thermal deformations and attitude changes of the spacecraft. Because of the need for such high-precision measurements, the Kite testbed was built to test both the metrology gauges and our ability to optically model the system at these levels. The Kite testbed is an over-constrained system in which 6 lengths are measured, but only 5 are needed to determine the system. The agreement among the over-constrained lengths needs to be on the order of 140 pm for the SIM Wide-Angle observing scenario and 8 pm for the Narrow-Angle observing scenario. We demonstrate that we have met the Wide-Angle goal with our current setup. For the Narrow-Angle case, we have only reached the goal for on-axis observations. We describe the testbed improvements that have been made since our initial results, and outline the future Kite changes that will add further effects that SIM faces in order to make the testbed more SIM-like.

  3. Improved Cloud resource allocation: how INDIGO-DataCloud is overcoming the current limitations in Cloud schedulers

    NASA Astrophysics Data System (ADS)

    Lopez Garcia, Alvaro; Zangrando, Lisa; Sgaravatto, Massimo; Llorens, Vincent; Vallero, Sara; Zaccolo, Valentina; Bagnasco, Stefano; Taneja, Sonia; Dal Pra, Stefano; Salomoni, Davide; Donvito, Giacinto

    2017-10-01

    Performing efficient resource provisioning is a fundamental aspect for any resource provider. Local Resource Management Systems (LRMS) have been used in data centers for decades in order to obtain the best usage of the resources, providing their fair usage and partitioning for the users. In contrast, current cloud schedulers are normally based on the immediate allocation of resources on a first-come, first-served basis, meaning that a request will fail if there are no resources (e.g. OpenStack) or it will be trivially queued, ordered by entry time (e.g. OpenNebula). Moreover, these scheduling strategies are based on a static partitioning of the resources, meaning that existing quotas cannot be exceeded, even if there are idle resources allocated to other projects. This is a consequence of the fact that cloud instances are not associated with a maximum execution time, and it leads to a situation where the resources are under-utilized. The INDIGO-DataCloud project has identified these strategies as too simplistic for accommodating scientific workloads efficiently, leading to an underutilization of the resources, an undesirable situation in scientific data centers. In this work, we will present the work done in the scheduling area during the first year of the INDIGO project and the foreseen evolutions.
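
    A minimal sketch of the LRMS-style fair-share rule this record contrasts with first-come first-served: serve the project whose consumed usage is furthest below its configured share, rather than the oldest request. All project names, shares and usage figures below are illustrative, not INDIGO's actual policy engine:

        def next_job(queues, usage, share):
            # pick the non-empty queue with the lowest usage-to-share ratio,
            # i.e. the most under-served project, instead of strict FIFO
            eligible = [p for p in queues if queues[p]]
            p = min(eligible, key=lambda p: usage[p] / share[p])
            return p, queues[p].pop(0)

        queues = {"cms": ["job1", "job2"], "alice": ["job3"], "astro": []}
        usage = {"cms": 900.0, "alice": 100.0, "astro": 0.0}   # CPU-hours consumed
        share = {"cms": 0.5, "alice": 0.3, "astro": 0.2}       # configured fair shares
        print(next_job(queues, usage, share))  # ('alice', 'job3'): most under-served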

  4. Cloud-Scale Numerical Modeling of the Arctic Boundary Layer

    NASA Technical Reports Server (NTRS)

    Krueger, Steven K.

    1998-01-01

    The interactions between sea ice, open ocean, atmospheric radiation, and clouds over the Arctic Ocean exert a strong influence on global climate. Uncertainties in the formulation of interactive air-sea-ice processes in global climate models (GCMs) result in large differences between the Arctic, and global, climates simulated by different models. Arctic stratus clouds are not well-simulated by GCMs, yet exert a strong influence on the surface energy budget of the Arctic. Leads (channels of open water in sea ice) have significant impacts on the large-scale budgets during the Arctic winter, when they contribute about 50 percent of the surface fluxes over the Arctic Ocean, but cover only 1 to 2 percent of its area. Convective plumes generated by wide leads may penetrate the surface inversion and produce condensate that spreads up to 250 km downwind of the lead, and may significantly affect the longwave radiative fluxes at the surface and thereby the sea ice thickness. The effects of leads and boundary layer clouds must be accurately represented in climate models to allow possible feedbacks between them and the sea ice thickness. The FIRE III Arctic boundary layer clouds field program, in conjunction with the SHEBA ice camp and the ARM North Slope of Alaska and Adjacent Arctic Ocean site, will offer an unprecedented opportunity to greatly improve our ability to parameterize the important effects of leads and boundary layer clouds in GCMs.
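
    The quoted lead statistic follows from simple area weighting, as the back-of-envelope check below shows. The 300 and 5 W/m^2 flux magnitudes are assumed round numbers for winter turbulent heat flux over open water and thick ice, not values from the record:

        # area-weighted contribution of leads to the mean surface heat flux
        f_lead = 0.02                        # lead (open water) area fraction
        flux_lead, flux_ice = 300.0, 5.0     # W/m^2, assumed magnitudes
        mean_flux = f_lead * flux_lead + (1 - f_lead) * flux_ice
        lead_share = f_lead * flux_lead / mean_flux
        print(f"area-mean flux = {mean_flux:.1f} W/m^2")
        print(f"share supplied by leads = {lead_share:.0%}")   # roughly half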

  5. Impact of Biomass Burning Aerosols on Cloud Formation in Coastal Regions

    NASA Astrophysics Data System (ADS)

    Nair, U. S.; Wu, Y.; Reid, J. S.

    2017-12-01

    In the tropics, shallow and deep convective cloud structures organize in hierarchy of spatial scales ranging from meso-gamma (2-20 km) to planetary scales (40,000km). At the lower end of the spectrum is shallow convection over the open ocean, whose upscale growth is dependent upon mesoscale convergence triggers. In this context, cloud systems associated with land breezes that propagate long distances into open ocean areas are important. We utilized numerical model simulations to examine the impact of biomass burning on such cloud systems in the maritime continent, specifically along the coastal regions of Sarawak. Numerical model simulations conducted using the Weather Research and Forecasting Chemistry (WRF-Chem) model show spatial patterns of smoke that show good agreement to satellite observations. Analysis of model simulations show that, during daytime the horizontal convective rolls (HCRs) that form over land play an important role in organizing transport of smoke in the coastal regions. Alternating patterns of low and high smoke concentrations that are well correlated to the wavelengths of HCRs are found in both the simulations and satellite observations. During night time, smoke transport is modulated by the land breeze circulation and a band of enhanced smoke concentration is found along the land breeze front. Biomass burning aerosols are ingested by the convective clouds that form along the land breeze and leads to changes in total water path, cloud structure and precipitation formation.

  6. Development of a cloud-based Bioinformatics Training Platform.

    PubMed

    Revote, Jerico; Watson-Haigh, Nathan S; Quenette, Steve; Bethwaite, Blair; McGrath, Annette; Shang, Catherine A

    2017-05-01

    The Bioinformatics Training Platform (BTP) has been developed to provide access to the computational infrastructure required to deliver sophisticated hands-on bioinformatics training courses. The BTP is a cloud-based solution that is in active use for delivering next-generation sequencing training to Australian researchers at geographically dispersed locations. The BTP was built to provide an easy, accessible, consistent and cost-effective approach to delivering workshops at host universities and organizations with a high demand for bioinformatics training but lacking the dedicated bioinformatics training suites required. To support broad uptake of the BTP, the platform has been made compatible with multiple cloud infrastructures. The BTP is an open-source and open-access resource. To date, 20 training workshops have been delivered to over 700 trainees at over 10 venues across Australia using the BTP.

  7. Development of a cloud-based Bioinformatics Training Platform

    PubMed Central

    Revote, Jerico; Watson-Haigh, Nathan S.; Quenette, Steve; Bethwaite, Blair; McGrath, Annette

    2017-01-01

    The Bioinformatics Training Platform (BTP) has been developed to provide access to the computational infrastructure required to deliver sophisticated hands-on bioinformatics training courses. The BTP is a cloud-based solution that is in active use for delivering next-generation sequencing training to Australian researchers at geographically dispersed locations. The BTP was built to provide an easy, accessible, consistent and cost-effective approach to delivering workshops at host universities and organizations with a high demand for bioinformatics training but lacking the dedicated bioinformatics training suites required. To support broad uptake of the BTP, the platform has been made compatible with multiple cloud infrastructures. The BTP is an open-source and open-access resource. To date, 20 training workshops have been delivered to over 700 trainees at over 10 venues across Australia using the BTP. PMID:27084333

  8. First field demonstration of cloud datacenter workflow automation employing dynamic optical transport network resources under OpenStack and OpenFlow orchestration.

    PubMed

    Szyrkowiec, Thomas; Autenrieth, Achim; Gunning, Paul; Wright, Paul; Lord, Andrew; Elbers, Jörg-Peter; Lumb, Alan

    2014-02-10

    For the first time, we demonstrate the orchestration of elastic datacenter and inter-datacenter transport network resources using a combination of OpenStack and OpenFlow. Programmatic control allows a datacenter operator to dynamically request optical lightpaths from a transport network operator to accommodate rapid changes of inter-datacenter workflows.

  9. Experiences with the JPL telerobot testbed: Issues and insights

    NASA Technical Reports Server (NTRS)

    Stone, Henry W.; Balaram, Bob; Beahan, John

    1989-01-01

    The Jet Propulsion Laboratory's (JPL) Telerobot Testbed is an integrated robotic testbed used to develop, implement, and evaluate the performance of advanced concepts in autonomous, tele-autonomous, and tele-operated control of robotic manipulators. Using the Telerobot Testbed, researchers demonstrated several of the capabilities and technological advances in the control and integration of robotic systems which have been under development at JPL for several years. In particular, the Telerobot Testbed was recently employed to perform a nearly completely automated, end-to-end, satellite grapple and repair sequence. The task of integrating existing as well as new concepts in robot control into the Telerobot Testbed has been a very difficult and time-consuming one. Now that researchers have completed the first major milestone (i.e., the end-to-end demonstration), it is important to reflect back upon these experiences and to collect the knowledge that has been gained so that improvements can be made to the existing system. It is also believed that these experiences are of value to others in the robotics community. Therefore, the primary objective here will be to use the Telerobot Testbed as a case study to identify real problems and technological gaps which exist in the areas of robotics and, in particular, systems integration. Such problems have surely hindered the development of what could reasonably be called an intelligent robot. In addition to identifying such problems, researchers briefly discuss what approaches have been taken to resolve them or, in several cases, to circumvent them until better approaches can be developed.

  10. STORMSeq: An Open-Source, User-Friendly Pipeline for Processing Personal Genomics Data in the Cloud

    PubMed Central

    Karczewski, Konrad J.; Fernald, Guy Haskin; Martin, Alicia R.; Snyder, Michael; Tatonetti, Nicholas P.; Dudley, Joel T.

    2014-01-01

    The increasing public availability of personal complete genome sequencing data has ushered in an era of democratized genomics. However, read mapping and variant calling software is constantly improving and individuals with personal genomic data may prefer to customize and update their variant calls. Here, we describe STORMSeq (Scalable Tools for Open-Source Read Mapping), a graphical interface cloud computing solution that does not require a parallel computing environment or extensive technical experience. This customizable and modular system performs read mapping, read cleaning, and variant calling and annotation. At present, STORMSeq costs approximately $2 and 5–10 hours to process a full exome sequence and $30 and 3–8 days to process a whole genome sequence. We provide this open-access and open-source resource as a user-friendly interface in Amazon EC2. PMID:24454756

  11. The impact of radiatively active water-ice clouds on Martian mesoscale atmospheric circulations

    NASA Astrophysics Data System (ADS)

    Spiga, A.; Madeleine, J.-B.; Hinson, D.; Navarro, T.; Forget, F.

    2014-04-01

    Background and Goals: Water ice clouds are a key component of the Martian climate [1]. Understanding the properties of Martian water ice clouds is crucial to constraining the Red Planet's climate and hydrological cycle, both in the present and in the past [2]. In recent years, this statement has become all the more true as it was shown that the radiative effects of water ice clouds are far from being as negligible as hitherto believed; water ice clouds instead play a key role in the large-scale thermal structure and dynamics of the Martian atmosphere [3, 4, 5]. Nevertheless, the radiative effect of water ice clouds at scales lower than the large synoptic scale (the so-called mesoscales) remains to be explored. Here we use for the first time mesoscale modeling with radiatively active water ice clouds to address this open question.

  12. Open-Loop Performance of COBALT Precision Landing Payload on a Commercial Sub-Orbital Rocket

    NASA Technical Reports Server (NTRS)

    Restrepo, Carolina I.; Carson, John M., III; Amzajerdian, Farzin; Seubert, Carl R.; Lovelace, Ronney S.; McCarthy, Megan M.; Tse, Teming; Stelling, Richard; Collins, Steven M.

    2018-01-01

    An open-loop flight test campaign of the NASA COBALT (CoOperative Blending of Autonomous Landing Technologies) platform was conducted onboard the Masten Xodiac suborbital rocket testbed. The COBALT platform integrates NASA Guidance, Navigation and Control (GN&C) sensing technologies for autonomous, precise soft landing, including the Navigation Doppler Lidar (NDL) velocity and range sensor and the Lander Vision System (LVS) Terrain Relative Navigation (TRN) system. A specialized navigation filter running onboard COBALT fuses the NDL and LVS data in real time to produce a navigation solution that is independent of GPS and suitable for future autonomous planetary landing systems. COBALT was a passive payload during the open-loop tests: its sensors were actively taking data and processing it in real time, but the Xodiac rocket flew with its own GPS navigation system as a risk-reduction activity in the maturation of the technologies towards space flight. A future closed-loop test campaign is planned in which the COBALT navigation solution will be used to fly its host vehicle.
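
    The kind of fusion described, a position-like TRN fix combined with a velocity-like Doppler lidar measurement, can be sketched as a textbook Kalman filter in one dimension. Everything below (noise levels, timing, measurement values) is invented for illustration; the actual COBALT filter design is not given in this record:

        import numpy as np

        dt = 0.1
        F = np.array([[1, dt], [0, 1]])      # state: [altitude, vertical rate]
        Q = np.diag([1e-4, 1e-3])            # process noise (assumption)
        x = np.array([1000.0, -20.0])        # initial state estimate
        P = np.diag([25.0, 4.0])             # initial covariance (assumption)

        def update(x, P, z, H, R):
            # standard Kalman measurement update
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z - H @ x)
            P = (np.eye(len(x)) - K @ H) @ P
            return x, P

        # predict one step, then fuse an LVS-like position fix (observes altitude)
        # and an NDL-like velocity measurement (observes vertical rate)
        x, P = F @ x, F @ P @ F.T + Q
        x, P = update(x, P, np.array([997.5]), np.array([[1.0, 0.0]]), np.array([[9.0]]))
        x, P = update(x, P, np.array([-19.7]), np.array([[0.0, 1.0]]), np.array([[0.04]]))
        print(x)   # fused altitude and descent-rate estimate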

  13. Consortium for Robotics & Unmanned Systems Education & Research (CRUSER)

    DTIC Science & Technology

    2012-09-30

    as facilities at Camp Roberts, Calif. and frequent experimentation events, the Many vs. Many (MvM) Autonomous Systems Testbed provides the... and expediently translate theory to practice. The MvM Testbed is designed to integrate technological advances in hardware (inexpensive, expendable... designed to leverage the MvM Autonomous Systems Testbed to explore practical and operationally relevant avenues to counter these “swarm” opponents, and

  14. Analysis, modeling, and simulation (AMS) testbed development and evaluation to support dynamic mobility applications (DMA) and active transportation and demand management (ATDM) programs — evaluation report for ATDM program. [supporting datasets - Pasadena Testbed

    DOT National Transportation Integrated Search

    2017-07-26

    This zip file contains POSTDATA.ATT (.ATT); Print to File (.PRN); Portable Document Format (.PDF); and document (.DOCX) files of data to support FHWA-JPO-16-385, Analysis, modeling, and simulation (AMS) testbed development and evaluation to support d...

  15. Analysis, Modeling, and Simulation (AMS) Testbed Development and Evaluation to Support Dynamic Mobility Applications (DMA) and Active Transportation and Demand Management (ATDM) Programs : Evaluation Report for the San Diego Testbed : Draft Report. [supporting datasets - San Diego

    DOT National Transportation Integrated Search

    2016-06-26

    The datasets in this zip file are in support of Intelligent Transportation Systems Joint Program Office (ITS JPO) report FHWA-JPO-16-385, "Analysis, Modeling, and Simulation (AMS) Testbed Development and Evaluation to Support Dynamic Mobility Applica...

  16. RBioCloud: A Light-Weight Framework for Bioconductor and R-based Jobs on the Cloud.

    PubMed

    Varghese, Blesson; Patel, Ishan; Barker, Adam

    2015-01-01

    Large-scale ad hoc analytics of genomic data is popular using the R programming language, supported by over 700 software packages provided by Bioconductor. More recently, analytical jobs are benefitting from on-demand computing and storage, their scalability and their low maintenance cost, all of which are offered by the cloud. While biologists and bioinformaticists can take an analytical job and execute it on their personal workstations, it remains challenging to seamlessly execute the job on the cloud infrastructure without extensive knowledge of the cloud dashboard. This paper explores how analytical jobs can be executed on the cloud with minimum effort, and how both the resources and the data required by a job can be managed. An open-source, light-weight framework for executing R-scripts using Bioconductor packages, referred to as `RBioCloud', is designed and developed. RBioCloud offers a set of simple command-line tools for managing the cloud resources, the data and the execution of the job. Three biological test cases validate the feasibility of RBioCloud. The framework is available from http://www.rbiocloud.com.

  17. Scalable cloud without dedicated storage

    NASA Astrophysics Data System (ADS)

    Batkovich, D. V.; Kompaniets, M. V.; Zarochentsev, A. K.

    2015-05-01

    We present a prototype of a scalable computing cloud. It is intended to be deployed on a cluster without separate dedicated storage: the dedicated storage is replaced by distributed software storage, and all cluster nodes are used both as computing nodes and as storage nodes. This solution increases utilization of the cluster resources as well as improving fault tolerance and performance of the distributed storage. Another advantage of this solution is high scalability with relatively low initial and maintenance costs. The solution is built from open-source components such as OpenStack, Ceph, etc.

  18. Arctic PBL Cloud Height and Motion Retrievals from MISR and MINX

    NASA Technical Reports Server (NTRS)

    Wu, Dong L.

    2012-01-01

    How Arctic clouds respond and feed back to sea ice loss is key to understanding the rapid climate change seen in the polar region. As more open water becomes available in the Arctic Ocean, cold air outbreaks (i.e., off-ice flow from polar lows) produce a vast sheet of roll clouds in the planetary boundary layer (PBL). The cold air temperature and wind velocity are the critical parameters to determine and understand the PBL structure formed under these roll clouds. It has been challenging for nadir visible/IR sensors to detect Arctic clouds due to lack of contrast between clouds and snowy/icy surfaces. In addition, PBL temperature inversion creates a further problem for IR sensors trying to relate cloud top temperature to cloud top height. Here we explore a new method with the Multiangle Imaging SpectroRadiometer (MISR) instrument to measure cloud height and motion over the Arctic Ocean. Employing a stereoscopic technique, MISR is able to measure cloud top height accurately and distinguish between clouds and snowy/icy surfaces with the measured height. We will use the MISR INteractive eXplorer (MINX) to quantify roll cloud dynamics during cold-air outbreak events and characterize PBL structures over water and over sea ice.
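
    The geometric idea behind the stereoscopic retrieval can be stated compactly. Neglecting along-track cloud motion (which, in practice, MISR's multiple view angles allow to be separated from parallax), a feature at height h above the surface, seen by two cameras at view zenith angles \(\theta_1\) and \(\theta_2\), is displaced along track by the parallax

        \[
          p \;\approx\; h\,(\tan\theta_1 - \tan\theta_2)
          \qquad\Longrightarrow\qquad
          h \;\approx\; \frac{p}{\tan\theta_1 - \tan\theta_2},
        \]

    which is why the retrieved height is purely geometric and does not depend on the cloud-top temperature that confounds IR methods under Arctic inversions.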

  19. Self-Similar Spin Images for Point Cloud Matching

    NASA Astrophysics Data System (ADS)

    Pulido, Daniel

    The rapid growth of Light Detection And Ranging (Lidar) technologies that collect, process, and disseminate 3D point clouds has allowed for increasingly accurate spatial modeling and analysis of the real world. Lidar sensors can generate massive 3D point clouds of a collection area that provide highly detailed spatial and radiometric information. However, a Lidar collection can be expensive and time consuming. Simultaneously, the growth of crowdsourced Web 2.0 data (e.g., Flickr, OpenStreetMap) has provided researchers with a wealth of freely available data sources that cover a variety of geographic areas. Crowdsourced data can be of varying quality and density. In addition, since it is typically not collected as part of a dedicated experiment but rather volunteered, when and where the data is collected is arbitrary. The integration of these two sources of geoinformation can provide researchers the ability to generate products and derive intelligence that mitigate their respective disadvantages and combine their advantages. Therefore, this research addresses the problem of fusing two point clouds from potentially different sources. Specifically, we consider two problems: scale matching and feature matching. Scale matching consists of computing feature metrics of each point cloud and analyzing their distributions to determine scale differences. Feature matching consists of defining local descriptors that are invariant to common dataset distortions (e.g., rotation and translation). Additionally, after matching, the point clouds can be registered and processed further (e.g., for change detection). The objective of this research is to develop novel methods to fuse and enhance two point clouds from potentially disparate sources (e.g., Lidar and crowdsourced Web 2.0 datasets). The scope of this research is to investigate both scale and feature matching between two point clouds. The specific focus is on developing a novel local descriptor based on the concept of self-similarity to aid in the scale and feature matching steps. An open problem in fusion is how best to extract features from two point clouds and then perform feature-based matching. The proposed approach for this matching step is the use of local self-similarity as an invariant measure to match features. In particular, the proposed approach combines the concept of local self-similarity with a well-known feature descriptor, Spin Images, and thereby defines "Self-Similar Spin Images". This approach is then extended to the case of matching two point clouds in very different coordinate systems (e.g., a geo-referenced Lidar point cloud and a stereo-image-derived point cloud without geo-referencing). The use of Self-Similar Spin Images is again applied to address this problem by introducing a "Self-Similar Keyscale" that matches the spatial scales of two point clouds. Another open problem is how best to detect changes in content between two point clouds. A method is proposed to find changes between two point clouds by analyzing the order statistics of the nearest neighbors between the two clouds, thereby defining the "Nearest Neighbor Order Statistic" method. Note that the well-known Hausdorff distance is a special case, being just the maximum order statistic; by studying the entire histogram of these nearest-neighbor distances, a more robust method is expected for detecting points that are present in one cloud but not the other. This approach is applied at multiple resolutions, so that changes detected at the coarsest level yield large missing targets and those at finer levels yield smaller targets.
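
    The "Nearest Neighbor Order Statistic" idea lends itself to a compact illustration. The Python sketch below (an illustrative reading of the abstract, not the author's code) computes the sorted nearest-neighbor distances from one cloud into another; the largest entry reproduces the directed Hausdorff distance, while the full sorted vector is the order-statistic profile to be analyzed.

        import numpy as np
        from scipy.spatial import cKDTree

        def nn_order_statistics(cloud_a, cloud_b):
            """Distances from each point of cloud_a to its nearest neighbor in
            cloud_b, sorted ascending. The maximum entry is the directed
            Hausdorff distance; the whole vector is the order-statistic profile."""
            tree = cKDTree(cloud_b)
            dists, _ = tree.query(cloud_a, k=1)
            return np.sort(dists)

        # Toy example: two samplings of the same region, one with an extra blob.
        rng = np.random.default_rng(0)
        a = rng.uniform(0, 10, size=(2000, 3))
        b = np.vstack([rng.uniform(0, 10, size=(2000, 3)),
                       rng.normal([20, 20, 20], 0.5, size=(50, 3))])  # change present only in b
        stats = nn_order_statistics(b, a)
        print("median NN distance:", np.median(stats))
        print("directed Hausdorff (max order statistic):", stats[-1])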

  20. Multi-Level Secure Information Sharing Between Smart Cloud Systems of Systems

    DTIC Science & Technology

    2014-03-01

    implementation of virtual hardware (VMWare), along with a commercial implementation of virtual networking (VPN), such as OpenVPN. 1. VMWare Virtualization...en.wikipedia.org/wiki/MongoDB. Wikipedia. 2014b. Accessed February 26. s.v. “OpenVPN,” http://en.wikipedia.org/wiki/OpenVPN. Wikipedia. 2014c. Accessed

  1. First Lessons From The Biarritz Trial Network

    NASA Astrophysics Data System (ADS)

    Touyarot, P.; Marc, B.; de Panafieu, A.

    1986-07-01

    Opened for commercial operation in 1984, the trial optical fiber network at Biarritz in south-west France gives 1,500 subscribers access to a whole range of broadband services - videophony, audiovisual databases, TV and stereo sound program distribution, and an on-line TV program library - in addition to conventional narrow-band services like telephony and videotex. The Biarritz network is an outstanding technology and engineering testbed. It is also a sociological testing ground for new services, unique in the world, with results of particular relevance to the interactive cable TV and visual communications networks of the future.

  2. Computer Education and Instructional Technology Teacher Trainees' Opinions about Cloud Computing Technology

    ERIC Educational Resources Information Center

    Karamete, Aysen

    2015-01-01

    This study aims to show the present state of cloud computing usage in the department of Computer Education and Instructional Technology (CEIT) among teacher trainees in the School of Necatibey Education, Balikesir University, Turkey. In this study, a questionnaire with open-ended questions was used. 17 CEIT teacher trainees…

  3. Secure and Resilient Cloud Computing for the Department of Defense

    DTIC Science & Technology

    2015-11-16

    platform as a service (PaaS), and software as a service (SaaS)—that target system administrators, developers, and end-users respectively (see Table 2...interfaces (API) and services Medium Amazon Elastic MapReduce, MathWorks Cloud, Red Hat OpenShift SaaS Full-fledged applications Low Google gMail

  4. Metabolizing Data in the Cloud.

    PubMed

    Warth, Benedikt; Levin, Nadine; Rinehart, Duane; Teijaro, John; Benton, H Paul; Siuzdak, Gary

    2017-06-01

    Cloud-based bioinformatic platforms address the fundamental demands of creating a flexible scientific environment, facilitating data processing and general accessibility independent of a country's affluence. These platforms have a multitude of advantages, as demonstrated by omics technologies, helping to support both government and scientific mandates of a more open environment. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Definition study for variable cycle engine testbed engine and associated test program

    NASA Technical Reports Server (NTRS)

    Vdoviak, J. W.

    1978-01-01

    The product/study double bypass variable cycle engine (VCE) was updated to incorporate recent improvements. The effect of these improvements on mission range and noise levels was determined. This engine design was then compared with current existing high-technology core engines in order to define a subscale testbed configuration that simulated many of the critical technology features of the product/study VCE. Detailed preliminary program plans were then developed for the design, fabrication, and static test of the selected testbed engine configuration. These plans included estimated costs and schedules for the detail design, fabrication and test of the testbed engine and the definition of a test program, test plan, schedule, instrumentation, and test stand requirements.

  6. Workstation-Based Avionics Simulator to Support Mars Science Laboratory Flight Software Development

    NASA Technical Reports Server (NTRS)

    Henriquez, David; Canham, Timothy; Chang, Johnny T.; McMahon, Elihu

    2008-01-01

    The Mars Science Laboratory developed the WorkStation TestSet (WSTS) to support flight software development. The WSTS is the non-real-time flight avionics simulator that is designed to be completely software-based and run on a workstation class Linux PC. This provides flight software developers with their own virtual avionics testbed and allows device-level and functional software testing when hardware testbeds are either not yet available or have limited availability. The WSTS has successfully off-loaded many flight software development activities from the project testbeds. At the writing of this paper, the WSTS has averaged an order of magnitude more usage than the project's hardware testbeds.

  7. Technology developments integrating a space network communications testbed

    NASA Technical Reports Server (NTRS)

    Kwong, Winston; Jennings, Esther; Clare, Loren; Leang, Dee

    2006-01-01

    As future manned and robotic space exploration missions involve more complex systems, it is essential to verify, validate, and optimize such systems through simulation and emulation in a low-cost testbed environment. The goal of such a testbed is to perform detailed testing of advanced space and ground communications networks, technologies, and client applications that are essential for future space exploration missions. We describe the development of new technologies enhancing our Multi-mission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE) that enable its integration in a distributed space communications testbed. MACHETE combines orbital modeling, link analysis, and protocol and service modeling to quantify system performance based on comprehensive consideration of different aspects of space missions.

  8. Review of science issues, deployment strategy, and status for the ARM north slope of Alaska-Adjacent Arctic Ocean climate research site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stamnes, K.; Ellingson, R.G.; Curry, J.A.

    1999-01-01

    Recent climate modeling results point to the Arctic as a region that is particularly sensitive to global climate change. The Arctic warming predicted by the models to result from the expected doubling of atmospheric carbon dioxide is two to three times the predicted mean global warming, and considerably greater than the warming predicted for the Antarctic. The North Slope of Alaska-Adjacent Arctic Ocean (NSA-AAO) Cloud and Radiation Testbed (CART) site of the Atmospheric Radiation Measurement (ARM) Program is designed to collect data on temperature-ice-albedo and water vapor-cloud-radiation feedbacks, which are believed to be important to the predicted enhanced warming in the Arctic. The most important scientific issues of Arctic, as well as global, significance to be addressed at the NSA-AAO CART site are discussed, and a brief overview of the current approach toward, and status of, site development is provided. ARM radiometric and remote sensing instrumentation is already deployed and taking data in the perennial Arctic ice pack as part of the SHEBA (Surface Heat Budget of the Arctic Ocean) experiment. In parallel with ARM's participation in SHEBA, the NSA-AAO facility near Barrow was formally dedicated on 1 July 1997 and began routine data collection early in 1998. This schedule permits the US Department of Energy's ARM Program, NASA's Arctic Cloud program, and the SHEBA program (funded primarily by the National Science Foundation and the Office of Naval Research) to be mutually supportive. In addition, location of the NSA-AAO Barrow facility on National Oceanic and Atmospheric Administration land immediately adjacent to its Climate Monitoring and Diagnostics Laboratory Barrow Observatory includes NOAA in this major interagency Arctic collaboration.

  9. Cloud Infrastructures for In Silico Drug Discovery: Economic and Practical Aspects

    PubMed Central

    Clematis, Andrea; Quarati, Alfonso; Cesini, Daniele; Milanesi, Luciano; Merelli, Ivan

    2013-01-01

    Cloud computing opens new perspectives for small-medium biotechnology laboratories that need to perform bioinformatics analysis in a flexible and effective way. This seems particularly true for hybrid clouds that couple the scalability offered by general-purpose public clouds with the greater control and ad hoc customizations supplied by the private ones. A hybrid cloud broker, acting as an intermediary between users and public providers, can support customers in the selection of the most suitable offers, optionally adding the provisioning of dedicated services with higher levels of quality. This paper analyses some economic and practical aspects of exploiting cloud computing in a real research scenario for the in silico drug discovery in terms of requirements, costs, and computational load based on the number of expected users. In particular, our work is aimed at supporting both the researchers and the cloud broker delivering an IaaS cloud infrastructure for biotechnology laboratories exposing different levels of nonfunctional requirements. PMID:24106693

  10. Space Station power system autonomy demonstration

    NASA Technical Reports Server (NTRS)

    Kish, James A.; Dolce, James L.; Weeks, David J.

    1988-01-01

    The Systems Autonomy Demonstration Program (SADP) represents NASA's major effort to demonstrate, through a series of complex ground experiments, the application and benefits of applying advanced automation technologies to the Space Station project. Lewis Research Center (LeRC) and Marshall Space Flight Center (MSFC) will first jointly develop an autonomous power system using existing Space Station testbed facilities at each center. The subsequent 1990 power-thermal demonstration will then involve the cooperative operation of the LeRC/MSFC power system with the Johnson Space Center (JSC's) thermal control and DMS/OMS testbed facilities. The testbeds and expert systems at each of the NASA centers will be interconnected via communication links. The appropriate knowledge-based technology will be developed for each testbed and applied to problems requiring intersystem cooperation. Primary emphasis will be focused on failure detection and classification, system reconfiguration, planning and scheduling of electrical power resources, and integration of knowledge-based and conventional control system software into the design and operation of Space Station testbeds.

  11. Description of the control system design for the SSF PMAD DC testbed

    NASA Technical Reports Server (NTRS)

    Baez, Anastacio N.; Kimnach, Greg L.

    1991-01-01

    The Power Management and Distribution (PMAD) DC Testbed Control System for Space Station Freedom was developed using a top down approach based on classical control system and conventional terrestrial power utilities design techniques. The design methodology includes the development of a testbed operating concept. This operating concept describes the operation of the testbed under all possible scenarios. A unique set of operating states was identified and a description of each state, along with state transitions, was generated. Each state is represented by a unique set of attributes and constraints, and its description reflects the degree of system security within which the power system is operating. Using the testbed operating states description, a functional design for the control system was developed. This functional design consists of a functional outline, a text description, and a logical flowchart for all the major control system functions. Described here are the control system design techniques, various control system functions, and the status of the design and implementation.

  12. Space Software Defined Radio Characterization to Enable Reuse

    NASA Technical Reports Server (NTRS)

    Mortensen, Dale J.; Bishop, Daniel W.; Chelmins, David

    2012-01-01

    NASA's Space Communication and Navigation Testbed is beginning operations on the International Space Station this year. The objective is to promote new software defined radio technologies and associated software application reuse, enabled by this first flight of NASA's Space Telecommunications Radio System architecture standard. The Space Station payload has three software defined radios onboard that allow for a wide variety of communications applications; however, each radio was only launched with one waveform application. By design the testbed allows new waveform applications to be uploaded and tested by experimenters in and outside of NASA. During the system integration phase of the testbed special waveform test modes and stand-alone test waveforms were used to characterize the SDR platforms for the future experiments. Characterization of the Testbed's JPL SDR using test waveforms and specialized ground test modes is discussed in this paper. One of the test waveforms, a record and playback application, can be utilized in a variety of ways, including new satellite on-orbit checkout as well as independent on-board testbed experiments.

  13. Development and validation of a low-cost mobile robotics testbed

    NASA Astrophysics Data System (ADS)

    Johnson, Michael; Hayes, Martin J.

    2012-03-01

    This paper considers the design, construction and validation of a low-cost experimental robotic testbed, which allows for the localisation and tracking of multiple robotic agents in real time. The testbed system is suitable for research and education in a range of different mobile robotic applications, for validating theoretical as well as practical research work in the field of digital control, mobile robotics, graphical programming and video tracking systems. It provides a reconfigurable floor space for mobile robotic agents to operate within, while tracking the position of multiple agents in real time using the overhead vision system. The overall system provides a highly cost-effective solution to the topical problem of providing students with practical robotics experience within severe budget constraints. Several problems encountered in the design and development of the mobile robotic testbed and associated tracking system, such as radial lens distortion and the selection of robot identifier templates, are clearly addressed. The testbed performance is quantified and several experiments involving LEGO Mindstorms NXT and Merlin System MiaBot robots are discussed.
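
    For reference, radial lens distortion of the kind addressed above is commonly corrected with the even-order polynomial (Brown) model; the abstract does not state which correction the testbed uses, so the following is the standard textbook form rather than the authors' exact method:

        \[
          x_d = x_u\,(1 + k_1 r^2 + k_2 r^4), \qquad
          y_d = y_u\,(1 + k_1 r^2 + k_2 r^4), \qquad
          r^2 = x_u^2 + y_u^2,
        \]

    where \((x_u, y_u)\) are undistorted image coordinates relative to the principal point, \((x_d, y_d)\) their distorted counterparts, and \(k_1, k_2\) camera-specific coefficients estimated during calibration and inverted to rectify the overhead imagery.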

  14. GATECloud.net: a platform for large-scale, open-source text processing on the cloud.

    PubMed

    Tablan, Valentin; Roberts, Ian; Cunningham, Hamish; Bontcheva, Kalina

    2013-01-28

    Cloud computing is increasingly being regarded as a key enabler of the 'democratization of science', because on-demand, highly scalable cloud computing facilities enable researchers anywhere to carry out data-intensive experiments. In the context of natural language processing (NLP), algorithms tend to be complex, which makes their parallelization and deployment on cloud platforms a non-trivial task. This study presents a new, unique, cloud-based platform for large-scale NLP research: GATECloud.net. It enables researchers to carry out data-intensive NLP experiments by harnessing the vast, on-demand compute power of the Amazon cloud. Important infrastructural issues are dealt with by the platform, completely transparently for the researcher: load balancing, efficient data upload and storage, deployment on the virtual machines, security and fault tolerance. We also include a cost-benefit analysis and usage evaluation.

  15. A model to determine open or closed cellular convection

    NASA Technical Reports Server (NTRS)

    Helfand, H. M.; Kalnay, E.

    1981-01-01

    A simple mechanism is proposed to explain the observed presence in the atmosphere of open or closed cellular convection. If convection is produced by cooling concentrated near the top of the cloud layer, as in radiative cooling of stratus clouds, it develops strong descending currents which are compensated by weak ascent over most of the horizontal area, and closed cells result. Conversely, heating concentrated near the bottom of a layer, as when an air mass is heated by warm water, results in strong ascending currents compensated by weak descent over most of the area, or open cells. This mechanism is similar to the one suggested by Stommel (1962) to explain the smallness of the oceans' sinking regions. The mechanism is studied numerically by means of a two-dimensional, nonlinear Boussinesq model.
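
    For context, the two-dimensional, nonlinear Boussinesq model mentioned above is typically solved in vorticity-streamfunction form. The following is a plausible sketch of such a system (signs and exact terms depend on convention and may differ from the paper's formulation), with an imposed heating field Q(x,z) whose vertical placement selects the convective regime:

        \[
          \frac{\partial \zeta}{\partial t} + J(\psi,\zeta)
            = \frac{g}{\theta_0}\,\frac{\partial \theta'}{\partial x} + \nu\,\nabla^2 \zeta,
          \qquad \nabla^2 \psi = \zeta,
        \]
        \[
          \frac{\partial \theta'}{\partial t} + J(\psi,\theta')
            = \kappa\,\nabla^2 \theta' + Q(x,z),
        \]

    where \(\psi\) is the streamfunction, \(\zeta\) the vorticity, \(\theta'\) the potential temperature perturbation, and \(J(a,b)=\partial_x a\,\partial_z b - \partial_z a\,\partial_x b\) the Jacobian. Cooling concentrated near the top of the layer (Q < 0 aloft) then favors closed cells, while heating concentrated near the bottom favors open cells, as described above.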

  16. A Cooperative IDS Approach Against MPTCP Attacks

    DTIC Science & Technology

    2017-06-01

    physical testbeds in order to present a methodology that allows distributed IDSs (DIDS) to cooperate in a manner that permits effective detection of...reconstruct MPTCP subflows and detect malicious content. Next, we build physical testbeds in order to present a methodology that allows distributed IDSs...hypotheses on a more realistic testbed environment. • Developing a methodology to incorporate multiple IDSs, real and virtual, to be able to detect cross

  17. Acquisition and Development of a Cognitive Radio Based Wireless Monitoring and Surveillance Testbed for Future Battlefield Communications Research

    DTIC Science & Technology

    2015-03-01

    Final Report: Acquisition and Development of A Cognitive Radio based Wireless Monitoring and Surveillance...journals: Final Report: Acquisition and Development of A Cognitive Radio based Wireless Monitoring and Surveillance Testbed for Future Battlefield...Opeyemi Oduola, Nan Zou, Xiangfang Li, Husheng Li, Lijun Qian. Distributed Spectrum Monitoring and Surveillance using a Cognitive Radio based Testbed

  18. An architecture for integrating distributed and cooperating knowledge-based Air Force decision aids

    NASA Technical Reports Server (NTRS)

    Nugent, Richard O.; Tucker, Richard W.

    1988-01-01

    MITRE has been developing a Knowledge-Based Battle Management Testbed for evaluating the viability of integrating independently-developed knowledge-based decision aids in the Air Force tactical domain. The primary goal for the testbed architecture is to permit a new system to be added to the testbed with little change to the system's software. Each system that connects to the testbed network declares that it can provide a number of services to other systems. When a system wants to use another system's service, it does not address the server system by name, but instead transmits a request to the testbed network asking for a particular service to be performed. A key component of the testbed architecture is a common database which uses a relational database management system (RDBMS). The RDBMS provides a database update notification service to requesting systems. Normally, each system is expected to monitor data relations of interest to it. Alternatively, a system may broadcast an announcement message to inform other systems that an event of potential interest has occurred. Current research is aimed at issues arising from these integration efforts, such as potential mismatches in each system's assumptions about the common database, decentralizing network control, and coordinating multiple agents.

  19. Developmental Cryogenic Active Telescope Testbed, a Wavefront Sensing and Control Testbed for the Next Generation Space Telescope

    NASA Technical Reports Server (NTRS)

    Leboeuf, Claudia M.; Davila, Pamela S.; Redding, David C.; Morell, Armando; Lowman, Andrew E.; Wilson, Mark E.; Young, Eric W.; Pacini, Linda K.; Coulter, Dan R.

    1998-01-01

    As part of the technology validation strategy of the next generation space telescope (NGST), a system testbed is being developed at GSFC, in partnership with JPL and Marshall Space Flight Center (MSFC), which will include all of the component functions envisioned in an NGST active optical system. The system will include an actively controlled, segmented primary mirror; actively controlled secondary, deformable, and fast steering mirrors; wavefront sensing optics; wavefront control algorithms; a telescope simulator module; and an interferometric wavefront sensor for use in comparing final obtained wavefronts from different tests. The developmental cryogenic active telescope testbed (DCATT) will be implemented in three phases. Phase 1 will focus on operating the testbed at ambient temperature. During Phase 2, a cryo-capable segmented telescope will be developed and cooled to cryogenic temperature to investigate the impact on the ability to correct the wavefront and stabilize the image. In Phase 3, it is planned to incorporate industry-developed flight-like components, such as figure-controlled mirror segments, cryogenic low-hold-power actuators, or different wavefront sensing and control hardware or software. A very important element of the program is the development and subsequent validation of the integrated multidisciplinary models. The Phase 1 testbed objectives, plans, configuration, and design will be discussed.

  20. Integration of advanced teleoperation technologies for control of space robots

    NASA Technical Reports Server (NTRS)

    Stagnaro, Michael J.

    1993-01-01

    Teleoperated robots require one or more humans to control actuators, mechanisms, and other robot equipment given feedback from onboard sensors. To accomplish this task, the human or humans require some form of control station. Desirable features of such a control station include operation by a single human, comfort, and natural human interfaces (visual, audio, motion, tactile, etc.). These interfaces should work to maximize performance of the human/robot system by streamlining the link between human brain and robot equipment. This paper describes development of a control station testbed with the characteristics described above. Initially, this testbed will be used to control two teleoperated robots. Features of the robots include anthropomorphic mechanisms, slaving to the testbed, and delivery of sensory feedback to the testbed. The testbed will make use of technologies such as helmet-mounted displays, voice recognition, and exoskeleton masters. It will allow for integration and testing of emerging telepresence technologies along with techniques for coping with control-link time delays. Systems developed from this testbed could be applied to ground control of space-based robots. During man-tended operations, the Space Station Freedom may benefit from ground control of IVA or EVA robots for science or maintenance tasks. Planetary exploration may also find advanced teleoperation systems to be very useful.

  1. Development of a Rotor-Body Coupled Analysis for an Active Mount Aeroelastic Rotor Testbed. Degree awarded by George Washington Univ., May 1996

    NASA Technical Reports Server (NTRS)

    Wilbur, Matthew L.

    1998-01-01

    At the Langley Research Center an active mount rotorcraft testbed is being developed for use in the Langley Transonic Dynamics Tunnel. This testbed, the second generation version of the Aeroelastic Rotor Experimental System (ARES-II), can impose rotor hub motions and measure the response so that rotor-body coupling phenomena may be investigated. An analytical method for coupling an aeroelastically scaled model rotor system to the ARES-II is developed in the current study. Models of the testbed and the rotor system are developed in independent analyses, and an impedance-matching approach is used to couple the rotor system to the testbed. The development of the analytical models and the coupling method is examined, and individual and coupled results are presented for the testbed and rotor system. Coupled results are presented with and without applied hub motion, and system loads and displacements are examined. The results show that a closed-loop control system is necessary to achieve desired hub motions, that proper modeling requires including the loads at the rotor hub and rotor control system, and that the strain-gauge balance placed in the rotating system of the ARES-II provided the best loads results.

  2. A Testbed for Evaluating Lunar Habitat Autonomy Architectures

    NASA Technical Reports Server (NTRS)

    Lawler, Dennis G.

    2008-01-01

    A lunar outpost will involve a habitat with an integrated set of hardware and software that will maintain a safe environment for human activities. There is a desire for a paradigm shift whereby crew will be the primary mission operators, not ground controllers. There will also be significant periods when the outpost is uncrewed. This will require that significant automation software be resident in the habitat to maintain all system functions and respond to faults. JSC is developing a testbed to allow for early testing and evaluation of different autonomy architectures. This will allow evaluation of different software configurations in order to: 1) understand different operational concepts; 2) assess the impact of failures and perturbations on the system; and 3) mitigate software and hardware integration risks. The testbed will provide an environment in which habitat hardware simulations can interact with autonomous control software. Faults can be injected into the simulations and different mission scenarios can be scripted. The testbed allows for logging, replaying and re-initializing mission scenarios. An initial testbed configuration has been developed by combining an existing life support simulation and an existing simulation of the space station power distribution system. Results from this initial configuration will be presented along with suggested requirements and designs for the incremental development of a more sophisticated lunar habitat testbed.

  3. MIT's interferometer CST testbed

    NASA Technical Reports Server (NTRS)

    Hyde, Tupper; Kim, ED; Anderson, Eric; Blackwood, Gary; Lublin, Leonard

    1990-01-01

    The MIT Space Engineering Research Center (SERC) has developed a controlled structures technology (CST) testbed based on one design for a space-based optical interferometer. The role of the testbed is to provide a versatile platform for experimental investigation and discovery of CST approaches. In particular, it will serve as the focus for experimental verification of CSI methodologies and control strategies at SERC. The testbed program has an emphasis on experimental CST--incorporating a broad suite of actuators and sensors, active struts, system identification, passive damping, active mirror mounts, and precision component characterization. The SERC testbed represents a one-tenth scaled version of an optical interferometer concept based on an inherently rigid tetrahedral configuration with collecting apertures on one face. The testbed consists of six 3.5 meter long truss legs joined at four vertices and is suspended with attachment points at three vertices. Each aluminum leg has a 0.2 m by 0.2 m by 0.25 m triangular cross-section. The structure has a first flexible mode at 31 Hz and has over 50 global modes below 200 Hz. The stiff tetrahedral design differs from similar testbeds (such as the JPL Phase B) in that the structural topology is closed. The tetrahedral design minimizes structural deflections at the vertices (site of optical components for maximum baseline) resulting in reduced stroke requirements for isolation and pointing of optics. Typical total light path length stability goals are on the order of lambda/20, with a wavelength of light, lambda, of roughly 500 nanometers. It is expected that active structural control will be necessary to achieve this goal in the presence of disturbances.
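
    For scale, the stated stability goal works out to

        \[
          \frac{\lambda}{20} \;=\; \frac{500\,\mathrm{nm}}{20} \;=\; 25\,\mathrm{nm},
        \]

    i.e., the total light path must be held stable to a few tens of nanometers across a structure whose legs are 3.5 m long, which is why active structural control is expected to be necessary in the presence of disturbances.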

  4. The Cloud Area Padovana: from pilot to production

    NASA Astrophysics Data System (ADS)

    Andreetto, P.; Costa, F.; Crescente, A.; Dorigo, A.; Fantinel, S.; Fanzago, F.; Sgaravatto, M.; Traldi, S.; Verlato, M.; Zangrando, L.

    2017-10-01

    The Cloud Area Padovana has been running for almost two years. This is an OpenStack-based scientific cloud, spread across two different sites: the INFN Padova Unit and the INFN Legnaro National Labs. The hardware resources have been scaled horizontally and vertically, by upgrading some hypervisors and by adding new ones: currently it provides about 1100 cores. Some in-house developments were also integrated into the OpenStack dashboard, such as a tool for user and project registrations with direct support for the INFN-AAI Identity Provider as a new option for user authentication. In collaboration with the EU-funded INDIGO-DataCloud project, integration with Docker-based containers has been experimented with and will be available in production soon. This computing facility now satisfies the computational and storage demands of more than 70 users affiliated with about 20 research projects. We present here the architecture of this cloud infrastructure and the tools and procedures used to operate it. We also focus on the lessons learnt in these two years, describing the problems that were found and the corrective actions that had to be applied. We discuss the chosen strategy for upgrades, which combines the need to promptly integrate new OpenStack developments, the demand to reduce downtimes of the infrastructure, and the need to limit the effort required for such updates. We also discuss how this cloud infrastructure is being used, focusing in particular on two large physics experiments which are intensively exploiting this computing facility: CMS and SPES. CMS deployed on the cloud a complex computational infrastructure, composed of several user interfaces for job submission in the Grid environment/local batch queues or for interactive processes; this is fully integrated with the local Tier-2 facility. To avoid a static allocation of the resources, an elastic cluster, based on CernVM, has been configured: it automatically creates and deletes virtual machines according to user needs. SPES, using a client-server system called TraceWin, exploits INFN's virtual resources, performing a very large number of simulations on about a thousand elastically managed nodes.

  5. AstroCloud, a Cyber-Infrastructure for Astronomy Research: Overview

    NASA Astrophysics Data System (ADS)

    Cui, C.; Yu, C.; Xiao, J.; He, B.; Li, C.; Fan, D.; Wang, C.; Hong, Z.; Li, S.; Mi, L.; Wan, W.; Cao, Z.; Wang, J.; Yin, S.; Fan, Y.; Wang, J.

    2015-09-01

    AstroCloud is a cyber-infrastructure for astronomy research initiated by the Chinese Virtual Observatory (China-VO) under funding support from the NDRC (National Development and Reform Commission) and CAS (Chinese Academy of Sciences). Tasks such as proposal submission, proposal peer review, data archiving, data quality control, data release and open access, and cloud-based data processing and analysis will all be supported on the platform. It will act as a full-lifecycle management system for astronomical data and telescopes. Achievements from international Virtual Observatory and cloud computing efforts are adopted heavily. In this paper, the background of the project, key features of the system, and the latest progress are introduced.

  6. Integrating Cloud-Computing-Specific Model into Aircraft Design

    NASA Astrophysics Data System (ADS)

    Zhimin, Tian; Qi, Lin; Guangwen, Yang

    Cloud computing is becoming increasingly relevant, as it will enable companies involved in spreading this technology to open the door to Web 3.0. The new categories of services being introduced will slowly replace many types of computational resources currently used. In this perspective, grid computing, the basic element for the large-scale supply of cloud services, will play a fundamental role in defining how those services will be provided. This paper tries to integrate a cloud-computing-specific model into aircraft design. The work has achieved good results in sharing licenses for large-scale and expensive software, such as CFD (Computational Fluid Dynamics), UG, CATIA, and so on.

  7. Description of the SSF PMAD DC testbed control system data acquisition function

    NASA Technical Reports Server (NTRS)

    Baez, Anastacio N.; Mackin, Michael; Wright, Theodore

    1992-01-01

    The NASA LeRC in Cleveland, Ohio has completed the development and integration of a Power Management and Distribution (PMAD) DC Testbed. This testbed is a reduced-scale representation of the end-to-end, sources-to-loads Space Station Freedom Electrical Power System (SSF EPS). This unique facility is being used to demonstrate DC power generation and distribution, power management and control, and system operation techniques considered to be prime candidates for the Space Station Freedom. A key capability of the testbed is its ability to be configured to address system-level issues in support of critical SSF program design milestones. Electrical power system control and operation issues like source control, source regulation, system fault protection, end-to-end system stability, health monitoring, resource allocation, and resource management are being evaluated in the testbed. The SSF EPS control functional allocation between on-board computers and ground-based systems is evolving. Initially, ground-based systems will perform the bulk of power system control and operation. The EPS control system is required to continuously monitor and determine the current state of the power system. The DC Testbed Control System consists of standard controllers arranged in a hierarchical and distributed architecture. These controllers provide all the monitoring and control functions for the DC Testbed Electrical Power System. Higher-level controllers include the Power Management Controller, Load Management Controller, Operator Interface System, and a network of computer systems that perform some of the SSF ground-based Control Center operations. The lower-level controllers include Main Bus Switch Controllers and Photovoltaic Controllers. Power system status information is periodically provided to the higher-level controllers to perform system control and operation. The data acquisition function of the control system is distributed among the various levels of the hierarchy. Data requirements are dictated by the control system algorithms being implemented at each level. A functional description of the various levels of the testbed control system architecture, the data acquisition function, and the status of its implementation is presented.
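
    To illustrate the shape of the hierarchical, distributed data acquisition described above, the Python sketch below has lower-level switch controllers sample local telemetry while a higher-level power management controller periodically aggregates it. This is purely an illustration of the architecture, assuming invented class names, telemetry fields, and values; it does not represent the testbed's actual software.

        from dataclasses import dataclass, field

        @dataclass
        class SwitchController:
            """Lower-level controller (e.g., a Main Bus Switch Controller)."""
            name: str

            def acquire(self) -> dict:
                # In the real testbed this would read voltages/currents from hardware.
                return {"controller": self.name, "bus_voltage_V": 120.0, "current_A": 8.5}

        @dataclass
        class PowerManagementController:
            """Higher-level controller that aggregates subordinate telemetry."""
            subordinates: list = field(default_factory=list)

            def poll(self) -> list:
                # One periodic data-acquisition pass over the controller hierarchy.
                return [s.acquire() for s in self.subordinates]

        pmc = PowerManagementController([SwitchController("MBSC-1"), SwitchController("PVC-1")])
        for record in pmc.poll():
            print(record)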

  8. The EPOS Vision for the Open Science Cloud

    NASA Astrophysics Data System (ADS)

    Jeffery, Keith; Harrison, Matt; Cocco, Massimo

    2016-04-01

    Cloud computing offers dynamic elastic scalability for data processing on demand. For much research activity, demand for computing is uneven over time, so cloud computing offers both cost-effectiveness and capacity advantages. However, as reported repeatedly by the EC Cloud Expert Group, there are barriers to the uptake of cloud computing: (1) security and privacy; (2) interoperability (avoidance of lock-in); (3) lack of appropriate systems development environments for application programmers to characterise their applications to allow cloud middleware to optimize their deployment and execution. From CERN, the Helix Nebula group has proposed the architecture for the European Open Science Cloud. They are discussing with other e-Infrastructure groups such as EGI (GRIDs), EUDAT (data curation), AARC (network authentication and authorisation) and also with the EIROFORUM group of 'international treaty' RIs (Research Infrastructures) and the ESFRI (European Strategy Forum on Research Infrastructures) RIs, including EPOS. Many of these RIs are either e-RIs (electronic RIs) or have an e-RI interface for access and use. The EPOS architecture is centred on a portal: ICS (Integrated Core Services). The architectural design already allows for access to e-RIs (which may include any or all of data, software, users and resources such as computers or instruments). Those within any one domain (subject area) of EPOS are considered within the TCS (Thematic Core Services). Those outside, or available across multiple domains of EPOS, are ICS-d (Integrated Core Services-Distributed), since the intention is that they will be used by any or all of the TCS via the ICS. Another such service type is CES (Computational Earth Science); effectively an ICS-d specializing in high-performance computation, analytics, simulation or visualization offered by a TCS for others to use. Discussions are already underway between EPOS and EGI, EUDAT, AARC and Helix Nebula for those offerings to be considered as ICS-d by EPOS. Provision of access to ICS-d from ICS-C concerns several aspects: (a) technical: it may be more or less difficult to connect and pass from ICS-C to the ICS-d/CES the 'package' (probably a virtual machine) of data and software; (b) security/privacy: including passing personal information, e.g. related to AAAI (Authentication, Authorization, Accounting Infrastructure); (c) financial and legal: such as payment and licence conditions. Appropriate interfaces from ICS-C to ICS-d are being designed to accommodate these aspects. The Open Science Cloud is timely because it provides a framework to discuss governance and sustainability for computational resource provision as well as an effective interpretation of a federated approach to HPC (High Performance Computing) and HTC (High Throughput Computing). It will be a unique opportunity to share and adopt procurement policies to provide access to computational resources for RIs. The current state of discussions and the expected roadmap for the EPOS-Open Science Cloud relationship are presented.

  9. "helix Nebula - the Science Cloud", a European Science Driven Cross-Domain Initiative Implemented in via AN Active Ppp Set-Up

    NASA Astrophysics Data System (ADS)

    Lengert, W.; Mondon, E.; Bégin, M. E.; Ferrer, M.; Vallois, F.; DelaMar, J.

    2015-12-01

    Helix Nebula, a European cross-domain science initiative building on an active PPP, aims to implement the concept of an open science commons [1] using a hybrid cloud model [2] as the proposed implementation solution. This approach allows leveraging and merging of complementary data-intensive Earth Science disciplines (e.g. instrumentation [3] and modeling) without introducing significant changes in the contributors' operational set-up. Considering the seamless integration with life science (e.g. EMBL), scientific exploitation of meteorological, climate, and Earth Observation data and models opens an enormous potential for new big data science. The work of Helix Nebula has shown that it is feasible to interoperate publicly funded infrastructures, such as EGI [5] and GEANT [6], with commercial cloud services. Such hybrid systems are in the interest of the existing users of publicly funded infrastructures and funding agencies because they will provide "freedom and choice" over the type of computing resources to be consumed and the manner in which they can be obtained. But to offer such freedom and choice across a spectrum of suppliers, various issues such as intellectual property, legal responsibility, service quality agreements and related issues need to be addressed. Finding solutions to these issues is one of the goals of the Helix Nebula initiative. [1] http://www.egi.eu/news-and-media/publications/OpenScienceCommons_v3.pdf [2] http://www.helix-nebula.eu/events/towards-the-european-open-science-cloud [3] e.g. https://sentinel.esa.int/web/sentinel/sentinel-data-access [5] http://www.egi.eu/ [6] http://www.geant.net/

  10. Cloud-based Jupyter Notebooks for Water Data Analysis

    NASA Astrophysics Data System (ADS)

    Castronova, A. M.; Brazil, L.; Seul, M.

    2017-12-01

    The development and adoption of technologies by the water science community to improve our ability to openly collaborate and share workflows will have a transformative impact on how we address the challenges associated with collaborative and reproducible scientific research. Jupyter notebooks offer one solution by providing an open-source platform for creating metadata-rich toolchains for modeling and data analysis applications. Adoption of this technology within the water sciences, coupled with publicly available datasets from agencies such as USGS, NASA, and EPA enables researchers to easily prototype and execute data intensive toolchains. Moreover, implementing this software stack in a cloud-based environment extends its native functionality to provide researchers a mechanism to build and execute toolchains that are too large or computationally demanding for typical desktop computers. Additionally, this cloud-based solution enables scientists to disseminate data processing routines alongside journal publications in an effort to support reproducibility. For example, these data collection and analysis toolchains can be shared, archived, and published using the HydroShare platform or downloaded and executed locally to reproduce scientific analysis. This work presents the design and implementation of a cloud-based Jupyter environment and its application for collecting, aggregating, and munging various datasets in a transparent, sharable, and self-documented manner. The goals of this work are to establish a free and open source platform for domain scientists to (1) conduct data intensive and computationally intensive collaborative research, (2) utilize high performance libraries, models, and routines within a pre-configured cloud environment, and (3) enable dissemination of research products. This presentation will discuss recent efforts towards achieving these goals, and describe the architectural design of the notebook server in an effort to support collaborative and reproducible science.
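
    As a small, concrete example of the kind of data-collection step such a notebook might contain, the Python sketch below pulls recent streamflow observations from the USGS NWIS instantaneous-values web service. The gauge number and parameter code are illustrative choices (00060 is discharge), the JSON unpacking follows the service's WaterML-JSON layout, and the whole snippet should be read as a sketch rather than a vetted toolchain.

        import requests

        # Query the USGS NWIS instantaneous-values service for one gauge.
        resp = requests.get(
            "https://waterservices.usgs.gov/nwis/iv/",
            params={
                "format": "json",
                "sites": "01646500",     # example gauge: Potomac River near Washington, DC
                "parameterCd": "00060",  # discharge, cubic feet per second
                "period": "P7D",         # last 7 days
            },
            timeout=30,
        )
        resp.raise_for_status()

        # Unpack the WaterML-JSON layout: timeSeries -> values -> value list.
        points = resp.json()["value"]["timeSeries"][0]["values"][0]["value"]
        print(f"{len(points)} observations; latest = {points[-1]['value']} cfs "
              f"at {points[-1]['dateTime']}")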

  11. MM-Wave Radiometric Measurements of Low Amounts of Precipitable Water Vapor

    NASA Technical Reports Server (NTRS)

    Racette, P.; Westwater, Ed; Han, Yong; Manning, Will; Jones, David; Gasiewski, Al

    2000-01-01

    An experiment was conducted during March 1999 to study ways to improve techniques for measuring low amounts of total-column precipitable water vapor (PWV). The experiment was conducted at the DOE ARM program's North Slope of Alaska/Adjacent Arctic Ocean Cloud and Radiation Testbed (NSA/AAO CART) site located just outside Barrow, Alaska. NASA and NOAA deployed a suite of radiometers covering 25 channels in the frequency range of 20 GHz up to 340 GHz, including 8 channels around the 183 GHz water vapor absorption line. In addition to the usual CART site instrumentation, the NOAA Depolarization and Backscatter Unattended Lidar (DABUL), the SUNY Rotating Shadowband Spectroradiometer (RSS) and other surface-based meteorological instrumentation were deployed during the intensive observation period. Vaisala RS80 radiosondes were launched daily, as well as nearby National Weather Service VIZ sondes. Atmospheric conditions ranged from clear calm skies to blowing snow and heavy multi-layer cloud coverage. Measurements made by the radiosondes indicate the PWV varied from approx. 1 to approx. 5 mm during the experiment. The near-surface temperature varied between about -40 C and -15 C. In this presentation, an overview of the experiment with examples of the data collected will be presented. Application of the data for assessing the potential and limitations of millimeter-wave radiometry for retrieving very low amounts of PWV will be discussed.

  12. Demonstration of application-driven network slicing and orchestration in optical/packet domains: on-demand vDC expansion for Hadoop MapReduce optimization.

    PubMed

    Kong, Bingxin; Liu, Siqi; Yin, Jie; Li, Shengru; Zhu, Zuqing

    2018-05-28

    Nowadays, it is common for service providers (SPs) to leverage hybrid clouds to improve the quality-of-service (QoS) of their Big Data applications. However, for achieving guaranteed latency and/or bandwidth in its hybrid cloud, an SP might desire to have a virtual datacenter (vDC) network, in which it can manage and manipulate the network connections freely. To address this requirement, we design and implement a network slicing and orchestration (NSO) system that can create and expand vDCs across optical/packet domains on-demand. Considering Hadoop MapReduce (M/R) as the use-case, we describe the proposed architectures of the system's data, control and management planes, and present the operation procedures for creating, expanding, monitoring and managing a vDC for M/R optimization. The proposed NSO system is then realized in a small-scale network testbed that includes four optical/packet domains, and we conduct experiments in it to demonstrate the whole operations of the data, control and management planes. Our experimental results verify that application-driven on-demand vDC expansion across optical/packet domains can be achieved for M/R optimization, and after being provisioned with a vDC, the SP using the NSO system can fully control the vDC network and further optimize the M/R jobs in it with network orchestration.

  13. The StratusLab cloud distribution: Use-cases and support for scientific applications

    NASA Astrophysics Data System (ADS)

    Floros, E.

    2012-04-01

    The StratusLab project is integrating an open cloud software distribution that enables organizations to set up and provide their own private or public IaaS (Infrastructure as a Service) computing clouds. The StratusLab distribution capitalizes on popular infrastructure virtualization solutions like KVM, the OpenNebula virtual machine manager, the Claudia service manager and the SlipStream deployment platform, which are further enhanced and expanded with additional components developed within the project. The StratusLab distribution covers the core aspects of a cloud IaaS architecture, namely computing (life-cycle management of virtual machines), storage, appliance management and networking. The resulting software stack provides a packaged turn-key solution for deploying cloud computing services. The cloud computing infrastructures deployed using StratusLab can support a wide range of scientific and business use cases. Grid computing has been the primary use case pursued by the project, and for this reason the initial priority has been support for the deployment and operation of fully virtualized production-level grid sites; a goal that has already been achieved by operating such a site as part of EGI's (European Grid Initiative) pan-European grid infrastructure. In this area the project is currently working to provide non-trivial capabilities like elastic and autonomic management of grid site resources. Although grid computing has been the motivating paradigm, StratusLab's cloud distribution can support a wider range of use cases. Towards this direction, we have developed and currently provide support for setting up general-purpose computing solutions like Hadoop, MPI and Torque clusters. Regarding scientific applications, the project is collaborating closely with the bioinformatics community to prepare VM appliances and deploy optimized services for bioinformatics applications. In a similar manner, additional scientific disciplines like Earth Science can take advantage of StratusLab cloud solutions. Interested users are welcome to join StratusLab's user community by getting access to the reference cloud services deployed by the project and offered to the public.

  14. Crew-integration and Automation Testbed (CAT)Program Overview and RUX06 Introduction

    DTIC Science & Technology

    2006-09-20

    Crew-integration and Automation Testbed (CAT) Program Overview and RUX06 Introduction, 26-27 July 2006. Patrick Nunez, Terry Tierney, Brian Novak...Experiment • Capstone CAT experiment – Evaluate effectiveness of CAT program in improving the performance and/or reducing the workload for a mounted

  15. Testbed for Satellite and Terrestrial Interoperability (TSTI)

    NASA Technical Reports Server (NTRS)

    Gary, J. Patrick

    1998-01-01

    Various issues associated with the "Testbed for Satellite and Terrestrial Interoperability (TSTI)" are presented in viewgraph form. Specific topics include: 1) General and specific scientific technical objectives; 2) ACTS experiment No. 118: 622 Mbps network tests between ATDNet and MAGIC via ACTS; 3) ATDNet SONET/ATM gigabit network; 4) Testbed infrastructure, collaborations and end sites in TSTI based evaluations; 5) the Trans-Pacific digital library experiment; and 6) ESDCD on-going network projects.

  16. Cooperative Search with Autonomous Vehicles in a 3D Aquatic Testbed

    DTIC Science & Technology

    2012-01-01

    Cooperative Search with Autonomous Vehicles in a 3D Aquatic Testbed. Matthew Keeter, Daniel Moore, Ryan Muller, Eric Nieters, Jennifer...Many applications for autonomous vehicles involve three-dimensional domains, notably aerial and aquatic environments. Such applications include mon...

  17. Eye/Brain/Task Testbed And Software

    NASA Technical Reports Server (NTRS)

    Janiszewski, Thomas; Mainland, Nora; Roden, Joseph C.; Rothenheber, Edward H.; Ryan, Arthur M.; Stokes, James M.

    1994-01-01

    The eye/brain/task (EBT) testbed records electroencephalograms, movements of the eyes, and the structures of tasks to provide comprehensive data on neurophysiological experiments. It is intended to serve a continuing effort to develop means for interaction between human brain waves and computers. The software library associated with the testbed provides capabilities to recall collected data, to process data on movements of the eyes, to correlate eye-movement data with electroencephalographic data, and to present data graphically. Cognitive processes can thus be investigated in ways not previously possible.

  18. A Reconfigurable Testbed Environment for Spacecraft Autonomy

    NASA Technical Reports Server (NTRS)

    Biesiadecki, Jeffrey; Jain, Abhinandan

    1996-01-01

    A key goal of NASA's New Millennium Program is the development of technology for increased spacecraft on-board autonomy. Achievement of this objective requires the development of a new class of ground-based autonomy testbeds that can enable the low-cost and rapid design, test, and integration of the spacecraft autonomy software. This paper describes the development of an Autonomy Testbed Environment (ATBE) for the NMP Deep Space 1 comet/asteroid rendezvous mission.

  19. Comparison of two matrix data structures for advanced CSM testbed applications

    NASA Technical Reports Server (NTRS)

    Regelbrugge, M. E.; Brogan, F. A.; Nour-Omid, B.; Rankin, C. C.; Wright, M. A.

    1989-01-01

    The first section describes data storage schemes presently used by the Computational Structural Mechanics (CSM) testbed sparse matrix facilities and similar skyline (profile) matrix facilities. The second section contains a discussion of certain features required for the implementation of particular advanced CSM algorithms, and how these features might be incorporated into the data storage schemes described previously. The third section presents recommendations, based on the discussions of the prior sections, for directing future CSM testbed development to provide necessary matrix facilities for advanced algorithm implementation and use. The objective is to lend insight into the matrix structures discussed and to help explain the process of evaluating alternative matrix data structures and utilities for subsequent use in the CSM testbed.

  20. Adaptive controller for a strength testbed for aircraft structures

    NASA Astrophysics Data System (ADS)

    Laperdin, A. I.; Yurkevich, V. D.

    2017-07-01

    The problem of control system design for a strength testbed for aircraft structures is considered. A method is presented for calculating the parameters of a proportional-integral controller (control algorithm) for the testbed using the time-scale separation method, taking into account the dead-time effect in the control loop. An adaptive control algorithm structure is proposed that limits the amplitude of high-frequency oscillations in the control system when the direction of motion of the hydraulic cylinder rods changes, and that provides the desired accuracy and quality of transients at all stages of the structural loading history. The results of tests of the developed control system with the adaptive control algorithm on an experimental strength testbed for aircraft structures are given.

  1. An Experimental Testbed for Evaluation of Trust and Reputation Systems

    NASA Astrophysics Data System (ADS)

    Kerr, Reid; Cohen, Robin

    To date, trust and reputation systems have often been evaluated using methods of their designers’ own devising. Recently, we demonstrated that a number of noteworthy trust and reputation systems could be readily defeated, revealing limitations in their original evaluations. Efforts in the trust and reputation community to develop a testbed have yielded a successful competition platform, ART. This testbed, however, is less suited to general experimentation and evaluation of individual trust and reputation technologies. In this paper, we propose an experimentation and evaluation testbed based directly on that used in our investigations into security vulnerabilities in trust and reputation systems for marketplaces. We demonstrate the advantages of this design, towards the development of more thorough, objective evaluations of trust and reputation systems.

  2. Recent Experiments Conducted with the Wide-Field Imaging Interferometry Testbed (WIIT)

    NASA Technical Reports Server (NTRS)

    Leisawitz, David T.; Juanola-Parramon, Roser; Bolcar, Matthew; Iacchetta, Alexander S.; Maher, Stephen F.; Rinehart, Stephen A.

    2016-01-01

    The Wide-field Imaging Interferometry Testbed (WIIT) was developed at NASA's Goddard Space Flight Center to demonstrate and explore the practical limitations inherent in wide field-of-view double Fourier (spatio-spectral) interferometry. The testbed delivers high-quality interferometric data and is capable of observing spatially and spectrally complex hyperspectral test scenes. Although WIIT operates at visible wavelengths, by design the data are representative of those from a space-based far-infrared observatory. We used WIIT to observe a calibrated, independently characterized test scene of modest spatial and spectral complexity, and an astronomically realistic test scene of much greater spatial and spectral complexity. This paper describes the experimental setup, summarizes the performance of the testbed, and presents representative data.

  3. MASYS: The AKARI Spectroscopic Survey of Symbiotic Stars in the Magellanic Clouds

    NASA Astrophysics Data System (ADS)

    Angeloni, R.; Ciroi, S.; Marigo, P.; Contini, M.; Di Mille, F.; Rafanelli, P.

    2009-12-01

    MASYS is the AKARI spectroscopic survey of Symbiotic Stars in the Magellanic Clouds, and one of the European Open Time Observing Programmes approved for the AKARI (Post-Helium) Phase-3. It is providing the first ever near-IR spectra of extragalactic symbiotic stars. The observations are scheduled to be completed in July 2009.

  4. WHOI and SIO (I): Next Steps toward Multi-Institution Archiving of Shipboard and Deep Submergence Vehicle Data

    NASA Astrophysics Data System (ADS)

    Detrick, R. S.; Clark, D.; Gaylord, A.; Goldsmith, R.; Helly, J.; Lemmond, P.; Lerner, S.; Maffei, A.; Miller, S. P.; Norton, C.; Walden, B.

    2005-12-01

    The Scripps Institution of Oceanography (SIO) and the Woods Hole Oceanographic Institution (WHOI) have joined forces with the San Diego Supercomputer Center to build a testbed for multi-institutional archiving of shipboard and deep submergence vehicle data. Support has been provided by the Digital Archiving and Preservation program funded by NSF/CISE and the Library of Congress. In addition to the more than 92,000 objects stored in the SIOExplorer Digital Library, the testbed will provide access to data, photographs, video images and documents from WHOI ships, Alvin submersible and Jason ROV dives, and deep-towed vehicle surveys. An interactive digital library interface will allow combinations of distributed collections to be browsed, metadata inspected, and objects displayed or selected for download. The digital library architecture, and the search and display tools of the SIOExplorer project, are being combined with WHOI tools, such as the Alvin Framegrabber and the Jason Virtual Control Van, that have been designed using WHOI's GeoBrowser to handle the vast volumes of digital video and camera data generated by Alvin, Jason and other deep submergence vehicles. Notions of scalability will be tested, as data volumes range from 3 CDs per cruise to 200 DVDs per cruise. Much of the scalability of this proposal comes from an ability to attach digital library data and metadata acquisition processes to diverse sensor systems. We are able to run an entire digital library from a laptop computer as well as from supercomputer-center-size resources. It can be used, in the field, laboratory or classroom, covering data from acquisition-to-archive using a single coherent methodology. The design is an open architecture, supporting applications through well-defined external interfaces maintained as an open-source effort for community inclusion and enhancement.

  5. Leveraging Open Standard Interfaces in Accessing and Processing NASA Data Model Outputs

    NASA Astrophysics Data System (ADS)

    Falke, S. R.; Alameh, N. S.; Hoijarvi, K.; de La Beaujardiere, J.; Bambacus, M. J.

    2006-12-01

    An objective of NASA's Earth Science Division is to develop advanced information technologies for processing, archiving, accessing, visualizing, and communicating Earth Science data. To this end, NASA and other federal agencies have collaborated with the Open Geospatial Consortium (OGC) to research, develop, and test interoperability specifications within projects and testbeds benefiting the government, industry, and the public. This paper summarizes the results of a recent effort under the auspices of the OGC Web Services testbed phase 4 (OWS-4) to explore standardization approaches for accessing and processing the outputs of NASA models of physical phenomena. Within the OWS-4 context, experiments were designed to leverage the emerging OGC Web Processing Service (WPS) and Web Coverage Service (WCS) specifications to access, filter and manipulate the outputs of the NASA Goddard Earth Observing System (GEOS) and Goddard Chemistry Aerosol Radiation and Transport (GOCART) forecast models. In OWS-4, the intent is to give users more control over the subsets of data they can extract from the model results, as well as over the final portrayal of that data. To meet that goal, experiments were designed to test the suitability of OGC's Web Processing Service (WPS) and Web Coverage Service (WCS) for filtering, processing and portraying the model results (including slices by height or by time), and to identify any enhancements to the specifications needed to meet the desired objectives. This paper summarizes the findings of the experiments, highlighting the value of the Web Processing Service in providing standard interfaces for accessing and manipulating model data within spatial and temporal frameworks. The paper also points out key shortcomings of the WPS, especially in comparison with a SOAP/WSDL approach to solving the same problem.

  6. Reduced-Order Biogeochemical Flux Model for High-Resolution Multi-Scale Biophysical Simulations

    NASA Astrophysics Data System (ADS)

    Smith, Katherine; Hamlington, Peter; Pinardi, Nadia; Zavatarelli, Marco

    2017-04-01

    Biogeochemical tracers and their interactions with upper ocean physical processes such as submesoscale circulations and small-scale turbulence are critical for understanding the role of the ocean in the global carbon cycle. These interactions can cause small-scale spatial and temporal heterogeneity in tracer distributions that can, in turn, greatly affect carbon exchange rates between the atmosphere and interior ocean. For this reason, it is important to take into account small-scale biophysical interactions when modeling the global carbon cycle. However, explicitly resolving these interactions in an earth system model (ESM) is currently infeasible due to the enormous associated computational cost. As a result, understanding and subsequently parameterizing how these small-scale heterogeneous distributions develop and how they relate to larger resolved scales is critical for obtaining improved predictions of carbon exchange rates in ESMs. In order to address this need, we have developed the reduced-order, 17 state variable Biogeochemical Flux Model (BFM-17) that follows the chemical functional group approach, which allows for non-Redfield stoichiometric ratios and the exchange of matter through units of carbon, nitrate, and phosphate. This model captures the behavior of open-ocean biogeochemical systems without substantially increasing computational cost, thus allowing the model to be combined with computationally-intensive, fully three-dimensional, non-hydrostatic large eddy simulations (LES). In this talk, we couple BFM-17 with the Princeton Ocean Model and show good agreement between predicted monthly-averaged results and Bermuda testbed area field data (including the Bermuda-Atlantic Time-series Study and Bermuda Testbed Mooring). Through these tests, we demonstrate the capability of BFM-17 to accurately model open-ocean biochemistry. Additionally, we discuss the use of BFM-17 within a multi-scale LES framework and outline how this will further our understanding of turbulent biophysical interactions in the upper ocean.

  7. Reduced-Order Biogeochemical Flux Model for High-Resolution Multi-Scale Biophysical Simulations

    NASA Astrophysics Data System (ADS)

    Smith, K.; Hamlington, P.; Pinardi, N.; Zavatarelli, M.; Milliff, R. F.

    2016-12-01

    Biogeochemical tracers and their interactions with upper ocean physical processes such as submesoscale circulations and small-scale turbulence are critical for understanding the role of the ocean in the global carbon cycle. These interactions can cause small-scale spatial and temporal heterogeneity in tracer distributions which can, in turn, greatly affect carbon exchange rates between the atmosphere and interior ocean. For this reason, it is important to take into account small-scale biophysical interactions when modeling the global carbon cycle. However, explicitly resolving these interactions in an earth system model (ESM) is currently infeasible due to the enormous associated computational cost. As a result, understanding and subsequently parametrizing how these small-scale heterogeneous distributions develop and how they relate to larger resolved scales is critical for obtaining improved predictions of carbon exchange rates in ESMs. In order to address this need, we have developed the reduced-order, 17 state variable Biogeochemical Flux Model (BFM-17). This model captures the behavior of open-ocean biogeochemical systems without substantially increasing computational cost, thus allowing the model to be combined with computationally-intensive, fully three-dimensional, non-hydrostatic large eddy simulations (LES). In this talk, we couple BFM-17 with the Princeton Ocean Model and show good agreement between predicted monthly-averaged results and Bermuda testbed area field data (including the Bermuda-Atlantic Time Series and Bermuda Testbed Mooring). Through these tests, we demonstrate the capability of BFM-17 to accurately model open-ocean biochemistry. Additionally, we discuss the use of BFM-17 within a multi-scale LES framework and outline how this will further our understanding of turbulent biophysical interactions in the upper ocean.

  8. Hybrid Cloud Computing Environment for EarthCube and Geoscience Community

    NASA Astrophysics Data System (ADS)

    Yang, C. P.; Qin, H.

    2016-12-01

    The NSF EarthCube Integration and Test Environment (ECITE) has built a hybrid cloud computing environment that provides cloud resources from private cloud environments using the cloud system software OpenStack and Eucalyptus, and also manages a public cloud, Amazon Web Services, allowing resource synchronization and bursting between private and public clouds. On the ECITE hybrid cloud platform, the EarthCube and geoscience communities can deploy and manage applications using base or customized virtual machine images, analyze big datasets using virtual clusters, and monitor virtual resource usage on the cloud in real time. Currently, a number of EarthCube projects have deployed or started migrating their projects to this platform, such as CHORDS, BCube, CINERGI, OntoSoft, and other EarthCube building blocks. To accomplish a deployment or migration, the administrator of the ECITE hybrid cloud platform prepares for the specific needs of each project (e.g. images, port numbers, usable cloud capacity) in advance, based on communications between ECITE and the participant projects; the scientists or IT technicians in those projects then launch one or more virtual machines, access the virtual machine(s) to set up the computing environment if need be, and migrate their code, documents, or data without having to deal with the heterogeneity in structure and operations among different cloud platforms.
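
    As a hedged sketch of the self-service use described above, the following launches a virtual machine on an OpenStack cloud with the openstacksdk library; the cloud name, image, flavor, and network are hypothetical values, not actual ECITE resources:

        # Minimal sketch: launching a VM on an OpenStack private cloud with
        # openstacksdk (pip install openstacksdk). All names are placeholders;
        # the actual ECITE images, flavors, and networks are not public.
        import openstack

        # Reads credentials for the named cloud from clouds.yaml
        conn = openstack.connect(cloud="ecite-private")

        image = conn.compute.find_image("earthcube-base")   # hypothetical image
        flavor = conn.compute.find_flavor("m1.large")       # hypothetical flavor
        network = conn.network.find_network("project-net")  # hypothetical network

        server = conn.compute.create_server(
            name="chords-worker",
            image_id=image.id,
            flavor_id=flavor.id,
            networks=[{"uuid": network.id}],
        )
        server = conn.compute.wait_for_server(server)
        print(server.status, server.access_ipv4)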

  9. Evolution of System Architectures: Where Do We Need to Fail Next?

    NASA Astrophysics Data System (ADS)

    Bermudez, Luis; Alameh, Nadine; Percivall, George

    2013-04-01

    Innovation requires testing and failing. Thomas Edison was right when he said "I have not failed. I've just found 10,000 ways that won't work". For innovation and improvement of standards to happen, service architectures have to be tested and retested. Within the Open Geospatial Consortium (OGC), testing of service architectures has been under way for the last 15 years. This talk will present the evolution of these service architectures and a possible future path. OGC is a global forum for the collaboration of developers and users of spatial data products and services, and for the advancement and development of international standards for geospatial interoperability. The OGC Interoperability Program is a series of hands-on, fast-paced engineering initiatives to accelerate the development and acceptance of OGC standards. Each initiative is organized in threads that provide focus under a particular theme. The first testbed, OGC Web Services phase 1, completed in 2003, had four threads: Common Architecture, Web Mapping, Sensor Web and Web Imagery Enablement. The Common Architecture was a cross-thread theme, ensuring that the Web Mapping and Sensor Web experiments built on a common base architecture. The architecture was based on the three main SOA components: broker, requestor and provider. It proposed a general service model defining service interactions and dependencies; categorization of service types; registries to allow discovery and access of services; data models and encodings; and common services (WMS, WFS, WCS). For the latter, there was a clear distinction between the different services: data services (e.g. WMS), application services (e.g. coordinate transformation) and server-side client applications (e.g. image exploitation). The latest testbed, OGC Web Services phase 9, completed in 2012, had 5 threads: Aviation, Cross-Community Interoperability (CCI), Security and Services Interoperability (SSI), OWS Innovations, and Compliance & Interoperability Testing & Evaluation (CITE). Compared to the first testbed, OWS-9 did not have a separate common architecture thread. Instead the emphasis was on brokering information models, securing them, and making data available efficiently on mobile devices. The outcome is an architecture based on usability and non-intrusiveness while leveraging mediation of information models from different communities. This talk will use lessons learned from the evolution of OGC Testbed phase 1 to phase 9 to better understand how global and complex infrastructures evolve to support many communities, including the Earth System Science community.

  10. An Approach of Web-based Point Cloud Visualization without Plug-in

    NASA Astrophysics Data System (ADS)

    Ye, Mengxuan; Wei, Shuangfeng; Zhang, Dongmei

    2016-11-01

    With the advances in three-dimensional laser scanning technology, the demand for visualization of massive point clouds is increasingly urgent. Until the introduction of WebGL, point cloud visualization was limited to desktop-based solutions; several web renderers are now available. This paper addresses the current issues in web-based point cloud visualization and proposes a method that requires no browser plug-in. The method combines ASP.NET and WebGL technologies, using the spatial database PostgreSQL to store the data and the open web technologies HTML5 and CSS3 to implement the user interface; an online visualization system for 3D point clouds, with web interactions implemented in JavaScript, was developed. Finally, the method is applied to a real case. Experiments show that the new approach is of great practical value and avoids the shortcomings of existing WebGIS solutions.
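
    For illustration, the server side of such a system can expose stored points as JSON for the browser's WebGL renderer. The sketch below is a minimal Python analogue of that data path, assuming a hypothetical points(x, y, z) table; the paper's actual ASP.NET implementation and schema are not reproduced:

        # Minimal sketch: serving point cloud data from PostgreSQL as JSON
        # for a browser-side WebGL renderer. The table and column names are
        # assumptions; the paper does not publish its schema.
        import json
        import psycopg2

        conn = psycopg2.connect(dbname="pointclouds", user="viewer",
                                password="secret", host="localhost")

        def fetch_points(limit=100000):
            """Return a flat [x0, y0, z0, x1, y1, z1, ...] list, the layout
            a WebGL vertex buffer expects."""
            with conn.cursor() as cur:
                cur.execute("SELECT x, y, z FROM points LIMIT %s", (limit,))
                flat = []
                for x, y, z in cur:
                    flat.extend((x, y, z))
            return flat

        if __name__ == "__main__":
            payload = json.dumps({"positions": fetch_points()})
            print(f"{len(payload)} bytes of JSON ready for the client")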

  11. High-performance scientific computing in the cloud

    NASA Astrophysics Data System (ADS)

    Jorissen, Kevin; Vila, Fernando; Rehr, John

    2011-03-01

    Cloud computing has the potential to open up high-performance computational science to a much broader class of researchers, owing to its ability to provide on-demand, virtualized computational resources. However, before such approaches can become commonplace, user-friendly tools must be developed that hide the unfamiliar cloud environment and streamline the management of cloud resources for many scientific applications. We have recently shown that high-performance cloud computing is feasible for parallelized x-ray spectroscopy calculations. We now present benchmark results for a wider selection of scientific applications focusing on electronic structure and spectroscopic simulation software in condensed matter physics. These applications are driven by an improved portable interface that can manage virtual clusters and run various applications in the cloud. We also describe a next generation of cluster tools, aimed at improved performance and a more robust cluster deployment. Supported by NSF grant OCI-1048052.

  12. Design and Development of ChemInfoCloud: An Integrated Cloud Enabled Platform for Virtual Screening.

    PubMed

    Karthikeyan, Muthukumarasamy; Pandit, Deepak; Bhavasar, Arvind; Vyas, Renu

    2015-01-01

    The power of cloud computing and distributed computing has been harnessed to handle the vast and heterogeneous data that must be processed in any virtual screening protocol. A cloud computing platform, ChemInfoCloud, was built and integrated with several chemoinformatics and bioinformatics tools. The robust engine performs the core chemoinformatics tasks of lead generation, lead optimisation and property prediction in a fast and efficient manner. It also provides bioinformatics functionalities including sequence alignment, active site pose prediction and protein-ligand docking. Text mining, NMR chemical shift (1H, 13C) prediction and reaction fingerprint generation modules for efficient lead discovery are also implemented in this platform. We have developed an integrated problem-solving cloud environment for virtual screening studies that also provides workflow management, better usability and interaction with end users, using container-based virtualization (OpenVZ).

  13. Sector and Sphere: the design and implementation of a high-performance data cloud

    PubMed Central

    Gu, Yunhong; Grossman, Robert L.

    2009-01-01

    Cloud computing has demonstrated that processing very large datasets over commodity clusters can be done simply, given the right programming model and infrastructure. In this paper, we describe the design and implementation of the Sector storage cloud and the Sphere compute cloud. By contrast with the existing storage and compute clouds, Sector can manage data not only within a data centre, but also across geographically distributed data centres. Similarly, the Sphere compute cloud supports user-defined functions (UDFs) over data both within and across data centres. As a special case, MapReduce-style programming can be implemented in Sphere by using a Map UDF followed by a Reduce UDF. We describe some experimental studies comparing Sector/Sphere and Hadoop using the Terasort benchmark. In these studies, Sector is approximately twice as fast as Hadoop. Sector/Sphere is open source. PMID:19451100
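
    As a concrete rendering of the MapReduce special case mentioned above, the sketch below expresses a word count as a Map UDF followed by a Reduce UDF; this is plain Python for illustration, not Sphere's actual C++ UDF interface:

        # Minimal sketch of MapReduce-style programming as two user-defined
        # functions: a Map UDF followed by a Reduce UDF. Illustrative Python,
        # not Sphere's actual UDF interface.
        from collections import defaultdict

        def map_udf(record):
            """Map UDF: emit (key, value) pairs from one input record."""
            for word in record.split():
                yield word.lower(), 1

        def reduce_udf(key, values):
            """Reduce UDF: fold all values sharing a key into one result."""
            return key, sum(values)

        def run(records):
            # The framework would shuffle the emitted pairs by key between
            # the two UDFs; here a dict stands in for the shuffle.
            groups = defaultdict(list)
            for record in records:
                for key, value in map_udf(record):
                    groups[key].append(value)
            return dict(reduce_udf(k, vs) for k, vs in groups.items())

        print(run(["the quick brown fox", "the lazy dog"]))
        # -> {'the': 2, 'quick': 1, 'brown': 1, 'fox': 1, 'lazy': 1, 'dog': 1}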

  14. Study of Molecular Clouds, Variable Stars and Related Topics at NUU and UBAI

    NASA Astrophysics Data System (ADS)

    Hojaev, A. S.

    2017-07-01

    The search for young PMS stars conducted by our team at the Maidanak, Lulin and Beijing observatories, especially in the NGC 6820/23 area, as well as the monitoring of a sample of open clusters, will be described and results presented. We consider the physical conditions in different star-forming regions, particularly in TDC and around Vul OB1, and estimate the SFE and SFR, the energy balance, and instability processes in these regions. We have also reviewed all available data on molecular clouds in the Galaxy and in other galaxies in order to prepare a general catalog of molecular clouds, to study their physical conditions, unsteadiness and possible star formation, to follow the formation and evolution of molecular cloud systems, and to analyze their role in the formation of different types of galaxies and structural features therein.

  15. Implementation of Grid Tier 2 and Tier 3 facilities on a Distributed OpenStack Cloud

    NASA Astrophysics Data System (ADS)

    Limosani, Antonio; Boland, Lucien; Coddington, Paul; Crosby, Sean; Huang, Joanna; Sevior, Martin; Wilson, Ross; Zhang, Shunde

    2014-06-01

    The Australian Government is making a AUD 100 million investment in Compute and Storage for the academic community. The Compute facilities are provided in the form of 30,000 CPU cores located at 8 nodes around Australia in a distributed virtualized Infrastructure as a Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All will be linked via a 100 Gb/s network. This proceeding describes the development of a fully connected WLCG Tier-2 grid site as well as a general purpose Tier-3 computing cluster based on this architecture. The facility employs an extension to Torque to enable dynamic allocations of virtual machine instances. A base Scientific Linux virtual machine (VM) image is deployed in the OpenStack cloud and automatically configured as required using Puppet. Custom scripts are used to launch multiple VMs, integrate them into the dynamic Torque cluster and to mount remote file systems. We report on our experience in developing this nation-wide ATLAS and Belle II Tier 2 and Tier 3 computing infrastructure using the national Research Cloud and storage facilities.
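
    For flavor, the dynamic-allocation step can be pictured as registering a freshly booted, Puppet-configured VM with the Torque server at runtime. The sketch below uses standard qmgr commands with placeholder hostnames; the actual extension scripts developed for this facility are not reproduced here:

        # Minimal sketch: registering a freshly booted cloud VM as a Torque
        # compute node at runtime via qmgr. Hostnames and properties are
        # placeholders; the site's actual Torque extension scripts are not
        # public.
        import subprocess

        def add_torque_node(hostname, np=8, properties="cloud"):
            """Register a new node with the Torque server."""
            for cmd in (
                f"create node {hostname}",
                f"set node {hostname} np = {np}",
                f"set node {hostname} properties = {properties}",
            ):
                subprocess.run(["qmgr", "-c", cmd], check=True)

        def remove_torque_node(hostname):
            """Deregister a node before its VM is terminated."""
            subprocess.run(["qmgr", "-c", f"delete node {hostname}"],
                           check=True)

        add_torque_node("vm-worker-042.cloud.example.org")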

  16. GC31G-1182: Opennex, a Private-Public Partnership in Support of the National Climate Assessment

    NASA Technical Reports Server (NTRS)

    Nemani, Ramakrishna R.; Wang, Weile; Michaelis, Andrew; Votava, Petr; Ganguly, Sangram

    2016-01-01

    The NASA Earth Exchange (NEX) is a collaborative computing platform that has been developed with the objective of bringing scientists together with the software tools, massive global datasets, and supercomputing resources necessary to accelerate research in Earth systems science and global change. NEX is funded as an enabling tool for sustaining the national climate assessment. Over the past five years, researchers have used the NEX platform and produced a number of data sets highly relevant to the National Climate Assessment. These include high-resolution climate projections using different downscaling techniques and trends in historical climate from satellite data. To enable a broader community to exploit the above datasets, the NEX team partnered with public cloud providers to create the OpenNEX platform. OpenNEX provides ready access to NEX data holdings on a number of public cloud platforms, along with pertinent analysis tools and workflows in the form of Machine Images and Docker Containers, and lectures and tutorials by experts. We will showcase some of the applications of OpenNEX data and tools by the community on Amazon Web Services, Google Cloud and the NEX Sandbox.

  17. A study on strategic provisioning of cloud computing services.

    PubMed

    Whaiduzzaman, Md; Haque, Mohammad Nazmul; Rejaul Karim Chowdhury, Md; Gani, Abdullah

    2014-01-01

    Cloud computing is currently emerging as an ever-changing, growing paradigm that models "everything-as-a-service." Virtualised physical resources, infrastructure, and applications are supplied by service provisioning in the cloud. The evolution in the adoption of cloud computing is driven by clear and distinct promising features for both cloud users and cloud providers. However, the increasing number of cloud providers and the variety of service offerings have made it difficult for customers to choose the best services. Successful service provisioning can guarantee the essential services required by customers, such as agility and availability, pricing, security and trust, and user metrics. Hence, continuous service provisioning that satisfies user requirements is a mandatory feature for the cloud user and vitally important in cloud computing service offerings. We therefore review the state-of-the-art service provisioning objectives, essential services, topologies, user requirements, necessary metrics, and pricing mechanisms. We synthesize and summarize different provisioning techniques, approaches, and models through a comprehensive literature review. A thematic taxonomy of cloud service provisioning is presented after the systematic review. Finally, future research directions and open research issues are identified.

  18. A Study on Strategic Provisioning of Cloud Computing Services

    PubMed Central

    Rejaul Karim Chowdhury, Md

    2014-01-01

    Cloud computing is currently emerging as an ever-changing, growing paradigm that models “everything-as-a-service.” Virtualised physical resources, infrastructure, and applications are supplied by service provisioning in the cloud. The evolution in the adoption of cloud computing is driven by clear and distinct promising features for both cloud users and cloud providers. However, the increasing number of cloud providers and the variety of service offerings have made it difficult for customers to choose the best services. Successful service provisioning can guarantee the essential services required by customers, such as agility and availability, pricing, security and trust, and user metrics. Hence, continuous service provisioning that satisfies user requirements is a mandatory feature for the cloud user and vitally important in cloud computing service offerings. We therefore review the state-of-the-art service provisioning objectives, essential services, topologies, user requirements, necessary metrics, and pricing mechanisms. We synthesize and summarize different provisioning techniques, approaches, and models through a comprehensive literature review. A thematic taxonomy of cloud service provisioning is presented after the systematic review. Finally, future research directions and open research issues are identified. PMID:25032243

  19. Open NASA Earth Exchange (OpenNEX): Strategies for enabling cross organization collaboration in the earth sciences

    NASA Astrophysics Data System (ADS)

    Michaelis, A.; Ganguly, S.; Nemani, R. R.; Votava, P.; Wang, W.; Lee, T. J.; Dungan, J. L.

    2014-12-01

    Sharing community-valued codes, intermediary datasets and results from individual efforts with others who are not part of a directly funded collaboration can be a challenge. Cross-organization collaboration is often impeded by infrastructure security constraints, rigid financial controls, bureaucracy, workforce nationalities, etc., which can force groups to work in a segmented fashion and/or through awkward and suboptimal web services. We show how a focused community may come together and share modeling and analysis codes, computing configurations, scientific results, knowledge and expertise on a public cloud platform, with diverse groups of researchers working together at "arm's length". Through the OpenNEX experimental workshop, users can view short technical "how-to" videos and explore encapsulated working environments. Workshop participants can easily instantiate Amazon Machine Images (AMIs) or launch full cluster and data processing configurations within minutes. Enabling users to instantiate computing environments from configuration templates on large public cloud infrastructures, such as Amazon Web Services, may provide a mechanism for groups to easily use each other's work and collaborate indirectly. Moreover, using the public cloud for this workshop allowed a single group to host a large read-only data archive, making datasets of interest widely available on the public cloud, enabling other groups to connect directly to the data, and reducing the costs of the collaborative work by freeing individual groups from redundantly retrieving, integrating or financing the storage of the datasets of interest.
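
    As a minimal sketch of what "instantiate Amazon Machine Images within minutes" looks like programmatically, the following uses boto3 to launch an instance from an AMI; the image id, instance type, and key pair name are placeholders rather than real OpenNEX identifiers:

        # Minimal sketch: instantiating a machine image on Amazon Web
        # Services with boto3, as a workshop participant would with an
        # OpenNEX AMI. All identifiers below are placeholders.
        import boto3

        ec2 = boto3.resource("ec2", region_name="us-west-2")

        instances = ec2.create_instances(
            ImageId="ami-0123456789abcdef0",  # placeholder AMI id
            InstanceType="m5.xlarge",         # placeholder instance type
            KeyName="my-keypair",             # placeholder SSH key pair
            MinCount=1,
            MaxCount=1,
        )
        instance = instances[0]
        instance.wait_until_running()
        instance.reload()
        print(f"{instance.id} running at {instance.public_ip_address}")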

  20. Formation history of open clusters constrained by detailed asteroseismology of red giant stars observed by Kepler

    NASA Astrophysics Data System (ADS)

    Corsaro, Enrico; Lee, Yueh-Ning; García, Rafael A.; Hennebelle, Patrick; Mathur, Savita; Beck, Paul G.; Mathis, Stephane; Stello, Dennis; Bouvier, Jérôme

    2017-10-01

    Stars originate by the gravitational collapse of a turbulent molecular cloud of a diffuse medium, and are often observed to form clusters. Stellar clusters therefore play an important role in our understanding of star formation and of the dynamical processes at play. However, investigating cluster formation is difficult because the density of the molecular cloud undergoes a change of many orders of magnitude. Hierarchical-step approaches that decompose the problem into different stages are therefore required, as well as reliable assumptions on the initial conditions in the clouds. We report for the first time the use of the full potential of NASA Kepler asteroseismic observations, coupled with 3D numerical simulations, to put strong constraints on the early formation stages of open clusters. Thanks to a Bayesian peak bagging analysis of about 50 red giant members of NGC 6791 and NGC 6819, the two most populated open clusters observed in the nominal Kepler mission, we derive a complete set of detailed oscillation mode properties for each star, with thousands of oscillation modes characterized. We then show how these asteroseismic properties lead us to a discovery about the rotation history of stellar clusters. Finally, our observational findings will be compared with hydrodynamical simulations of stellar cluster formation to constrain the physical processes of turbulence, rotation, and magnetic fields that are in action during the collapse of the progenitor cloud into a proto-cluster.

  1. Passive Thermal Design Approach for the Space Communications and Navigation (SCaN) Testbed Experiment on the International Space Station (ISS)

    NASA Technical Reports Server (NTRS)

    Siamidis, John; Yuko, Jim

    2014-01-01

    The Space Communications and Navigation (SCaN) Program Office at NASA Headquarters oversees all of NASA's space communications activities. SCaN manages and directs the ground-based facilities and services provided by the Deep Space Network (DSN), Near Earth Network (NEN), and the Space Network (SN). Through the SCaN Program Office, NASA GRC developed a Software Defined Radio (SDR) testbed experiment (SCaN testbed experiment) for use on the International Space Station (ISS). It comprises three different SDR radios: the Jet Propulsion Laboratory (JPL) radio, the Harris Corporation radio, and the General Dynamics Corporation radio. The SCaN testbed experiment provides an on-orbit, adaptable, SDR/Space Telecommunications Radio System (STRS)-based facility to conduct a suite of experiments to advance Software Defined Radio and Space Telecommunications Radio System (STRS) standards, reduce risk (Technology Readiness Level (TRL) advancement) for candidate future Constellation space flight hardware/software, and demonstrate space communication links critical to future NASA exploration missions. The SCaN testbed project provides NASA, industry, other Government agencies, and academic partners the opportunity to develop and field communications, navigation, and networking technologies in the laboratory and space environment based on reconfigurable, software defined radio platforms and the STRS Architecture. The SCaN testbed is resident on the P3 Express Logistics Carrier (ELC) on the exterior truss of the International Space Station (ISS). The SCaN testbed payload launched on the Japanese Aerospace Exploration Agency (JAXA) H-II Transfer Vehicle (HTV) and was installed on the ISS P3 ELC located on the inboard RAM P3 site. The daily operations and testing are managed out of NASA GRC in the Telescience Support Center (TSC).

  2. Research in Wireless Networks and Communications

    DTIC Science & Technology

    2008-05-01

    As a proof of concept, a multi-hop testbed network was assembled from nodes based on 400 MHz AMD Geode single-board computers made by Thecus Inc. Ground nodes were placed in a line, with about 3 feet of separation between adjacent nodes.

  3. Phoenix Missile Hypersonic Testbed (PMHT): System Concept Overview

    NASA Technical Reports Server (NTRS)

    Jones, Thomas P.

    2007-01-01

    A viewgraph presentation of the Phoenix Missile Hypersonic Testbed (PMHT) is shown. The contents include: 1) Need and Goals; 2) Phoenix Missile Hypersonic Testbed; 3) PMHT Concept; 4) Development Objectives; 5) Possible Research Payloads; 6) Possible Research Program Participants; 7) PMHT Configuration; 8) AIM-54 Internal Hardware Schematic; 9) PMHT Configuration; 10) New Guidance and Armament Section Profiles; 11) Nomenclature; 12) PMHT Stack; 13) Systems Concept; 14) PMHT Preflight Activities; 15) Notional Ground Path; and 16) Sample Theoretical Trajectories.

  4. Experimental Studies in a Reconfigurable C4 Test-bed for Network Enabled Capability

    DTIC Science & Technology

    2006-06-01

    Authors including Dr R. Houghton and Mr R. McMaster of the Defence Technology Centre for Human Factors Integration (DTC HFI), BITlab, School of Engineering and Design, describe experimental studies into Network Enabled Capability conducted by the HFI-DTC, including the development of a reconfigurable C4 test-bed.

  5. Versatile simulation testbed for rotorcraft speech I/O system design

    NASA Technical Reports Server (NTRS)

    Simpson, Carol A.

    1986-01-01

    A versatile simulation testbed for the design of a rotorcraft speech I/O system is described in detail. The testbed will be used to evaluate alternative implementations of synthesized speech displays and speech recognition controls for the next generation of Army helicopters including the LHX. The message delivery logic is discussed as well as the message structure, the speech recognizer command structure and features, feedback from the recognizer, and random access to controls via speech command.

  6. In-Space Networking on NASA's SCAN Testbed

    NASA Technical Reports Server (NTRS)

    Brooks, David E.; Eddy, Wesley M.; Clark, Gilbert J.; Johnson, Sandra K.

    2016-01-01

    The NASA Space Communications and Navigation (SCaN) Testbed, an external payload onboard the International Space Station, is equipped with three software defined radios and a flight computer for supporting in-space communication research. New technologies being studied using the SCaN Testbed include advanced networking, coding, and modulation protocols designed to support the transition of NASA's mission systems from primarily point-to-point data links and preplanned routes towards the adaptive, autonomous internetworked operations needed to meet future mission objectives. Networking protocols implemented on the SCaN Testbed include the Advanced Orbiting Systems (AOS) link-layer protocol, Consultative Committee for Space Data Systems (CCSDS) Encapsulation Packets, Internet Protocol (IP), Space Link Extension (SLE), CCSDS File Delivery Protocol (CFDP), and Delay-Tolerant Networking (DTN) protocols including the Bundle Protocol (BP) and Licklider Transmission Protocol (LTP). The SCaN Testbed end-to-end system provides three S-band data links and one Ka-band data link to exchange space and ground data through NASA's Tracking Data Relay Satellite System or a direct-to-ground link to ground stations. The multiple data links and nodes provide several upgradable elements on both the space and ground systems. This paper will provide a general description of the testbed's system design and capabilities, discuss in detail the design and lessons learned in the implementation of the network protocols, and describe future plans for continuing research to meet the communication needs of evolving global space systems.

  7. Graphical interface between the CIRSSE testbed and CimStation software with MCS/CTOS

    NASA Technical Reports Server (NTRS)

    Hron, Anna B.

    1992-01-01

    This research is concerned with developing a graphical simulation of the testbed at the Center for Intelligent Robotic Systems for Space Exploration (CIRSSE) and the interface that allows for communication between the two. Such an interface is useful in telerobotic operations and as a functional interaction tool for testbed users. Creating a simulated model of a real-world system generates inevitable calibration discrepancies between the two. This thesis gives a brief overview of the work done to date in the area of workcell representation and communication, describes the development of the CIRSSE interface, and gives a direction for future work in the area of system calibration. The CimStation software used for development of this interface is a highly versatile robotic workcell simulation package, which has been programmed for this application with a scale graphical model of the testbed and supporting interface menu code. A need for this tool has been identified for path previewing, as a window on teleoperation, and for calibration of simulated vs. real-world models. The interface allows information (i.e., joint angles) generated by CimStation to be sent as motion goal positions to the testbed robots. An option of the interface has been established such that joint angle information generated by supporting testbed algorithms (i.e., TG, collision avoidance) can be piped through CimStation as a visual preview of the path.

  8. Big Data Clustering via Community Detection and Hyperbolic Network Embedding in IoT Applications.

    PubMed

    Karyotis, Vasileios; Tsitseklis, Konstantinos; Sotiropoulos, Konstantinos; Papavassiliou, Symeon

    2018-04-15

    In this paper, we present a novel data clustering framework for big sensory data produced by IoT applications. Based on a network representation of the relations among multi-dimensional data, data clustering is mapped to node clustering over the produced data graphs. To address the potential very large scale of such datasets/graphs that test the limits of state-of-the-art approaches, we map the problem of data clustering to a community detection one over the corresponding data graphs. Specifically, we propose a novel computational approach for enhancing the traditional Girvan-Newman (GN) community detection algorithm via hyperbolic network embedding. The data dependency graph is embedded in the hyperbolic space via Rigel embedding, allowing more efficient computation of edge-betweenness centrality needed in the GN algorithm. This allows for more efficient clustering of the nodes of the data graph in terms of modularity, without sacrificing considerable accuracy. In order to study the operation of our approach with respect to enhancing GN community detection, we employ various representative types of artificial complex networks, such as scale-free, small-world and random geometric topologies, and frequently-employed benchmark datasets for demonstrating its efficacy in terms of data clustering via community detection. Furthermore, we provide a proof-of-concept evaluation by applying the proposed framework over multi-dimensional datasets obtained from an operational smart-city/building IoT infrastructure provided by the Federated Interoperable Semantic IoT/cloud Testbeds and Applications (FIESTA-IoT) testbed federation. It is shown that the proposed framework can be indeed used for community detection/data clustering and exploited in various other IoT applications, such as performing more energy-efficient smart-city/building sensing.
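
    For orientation, the baseline Girvan-Newman algorithm that the framework accelerates is available in NetworkX; the sketch below runs it on a synthetic small-world graph and keeps the partition with the best modularity. The paper's actual speed-up, approximating edge betweenness via hyperbolic Rigel embedding, is not part of this library call:

        # Minimal sketch: baseline Girvan-Newman community detection with
        # NetworkX on a synthetic small-world graph. The paper's speed-up,
        # approximating edge betweenness through hyperbolic (Rigel)
        # embedding, is NOT reproduced here.
        import networkx as nx
        from networkx.algorithms.community import girvan_newman, modularity

        G = nx.watts_strogatz_graph(n=60, k=4, p=0.05, seed=42)

        # girvan_newman yields successively finer partitions; keep refining
        # while modularity improves.
        best_partition, best_q = None, float("-inf")
        for partition in girvan_newman(G):
            q = modularity(G, partition)
            if q <= best_q:
                break
            best_partition, best_q = partition, q

        print(f"{len(best_partition)} communities, modularity {best_q:.3f}")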

  9. Big Data Clustering via Community Detection and Hyperbolic Network Embedding in IoT Applications

    PubMed Central

    Sotiropoulos, Konstantinos

    2018-01-01

    In this paper, we present a novel data clustering framework for big sensory data produced by IoT applications. Based on a network representation of the relations among multi-dimensional data, data clustering is mapped to node clustering over the produced data graphs. To address the potential very large scale of such datasets/graphs that test the limits of state-of-the-art approaches, we map the problem of data clustering to a community detection one over the corresponding data graphs. Specifically, we propose a novel computational approach for enhancing the traditional Girvan–Newman (GN) community detection algorithm via hyperbolic network embedding. The data dependency graph is embedded in the hyperbolic space via Rigel embedding, allowing more efficient computation of edge-betweenness centrality needed in the GN algorithm. This allows for more efficient clustering of the nodes of the data graph in terms of modularity, without sacrificing considerable accuracy. In order to study the operation of our approach with respect to enhancing GN community detection, we employ various representative types of artificial complex networks, such as scale-free, small-world and random geometric topologies, and frequently-employed benchmark datasets for demonstrating its efficacy in terms of data clustering via community detection. Furthermore, we provide a proof-of-concept evaluation by applying the proposed framework over multi-dimensional datasets obtained from an operational smart-city/building IoT infrastructure provided by the Federated Interoperable Semantic IoT/cloud Testbeds and Applications (FIESTA-IoT) testbed federation. It is shown that the proposed framework can be indeed used for community detection/data clustering and exploited in various other IoT applications, such as performing more energy-efficient smart-city/building sensing. PMID:29662043

  10. Optical Design of the Developmental Cryogenic Active Telescope Testbed (DCATT)

    NASA Technical Reports Server (NTRS)

    Davila, Pam; Wilson, Mark; Young, Eric W.; Lowman, Andrew E.; Redding, David C.

    1997-01-01

    In the summer of 1996, three study teams developed conceptual designs and mission architectures for the Next Generation Space Telescope (NGST). Each group highlighted areas of technology development that need to be further advanced to meet the goals of the NGST mission. The most important areas for future study included: deployable structures, lightweight optics, cryogenic optics and mechanisms, passive cooling, and on-orbit closed-loop wavefront sensing and control. NASA and industry are currently planning to develop a series of ground testbeds and validation flights to demonstrate many of these technologies. The Developmental Cryogenic Active Telescope Testbed (DCATT) is a system-level testbed to be developed at Goddard Space Flight Center in three phases over an extended period of time. This testbed will combine an actively controlled telescope with the hardware and software elements of a closed-loop wavefront sensing and control system to achieve diffraction-limited imaging at 2 microns. We present an overview of the system-level requirements, a discussion of the optical design, and results of performance analyses for the Phase 1 ambient concept for DCATT.

  11. In-Space Networking On NASA's SCaN Testbed

    NASA Technical Reports Server (NTRS)

    Brooks, David; Eddy, Wesley M.; Clark, Gilbert J., III; Johnson, Sandra K.

    2016-01-01

    The NASA Space Communications and Navigation (SCaN) Testbed, an external payload onboard the International Space Station, is equipped with three software defined radios (SDRs) and a programmable flight computer. The purpose of the Testbed is to conduct in-space research in the areas of communication, navigation, and networking in support of NASA missions and communication infrastructure. Multiple reprogrammable elements in the end-to-end system, along with several communication paths and a semi-operational environment, provide a unique opportunity to explore networking concepts and protocols envisioned for the future Solar System Internet (SSI). This paper will provide a general description of the system's design and the networking protocols implemented and characterized on the testbed, including Encapsulation, IP over CCSDS, and Delay-Tolerant Networking (DTN). Due to the research nature of the implementation, flexibility and robustness are considered in the design to enable expansion for future adaptive and cognitive techniques. Following a detailed design discussion, lessons learned and suggestions for future missions and communication infrastructure elements will be provided. Plans for the evolving research on the SCaN Testbed as it moves towards a more adaptive, autonomous system will be discussed.

  12. Data distribution service-based interoperability framework for smart grid testbed infrastructure

    DOE PAGES

    Youssef, Tarek A.; Elsayed, Ahmed T.; Mohammed, Osama A.

    2016-03-02

    This study presents the design and implementation of a communication and control infrastructure for smart grid operation. The proposed infrastructure enhances the reliability of the measurement and control network. The advantages of utilizing the data-centric over the message-centric communication approach are discussed in the context of smart grid applications. The data distribution service (DDS) is used to implement a data-centric common data bus for the smart grid. This common data bus improves communication reliability, enabling distributed control and smart load management. These enhancements are achieved by avoiding a single point of failure while enabling peer-to-peer communication and an automatic discovery feature for dynamically participating nodes. The infrastructure and ideas presented in this paper were implemented and tested on the smart grid testbed. A toolbox and application programming interface for the testbed infrastructure were developed in order to facilitate interoperability and remote access to the testbed. This interface allows experiments to be controlled, monitored, and performed remotely. Furthermore, it could be used to integrate multidisciplinary testbeds to study complex cyber-physical systems (CPS).
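
    As a hedged sketch of the data-centric pattern described above, the following publishes and reads a sample over a DDS topic using the Eclipse Cyclone DDS Python binding; the paper does not name its DDS implementation, and the topic and type here are illustrative only:

        # Minimal sketch: a data-centric publish/subscribe bus with the
        # Eclipse Cyclone DDS Python binding (pip install cyclonedds). The
        # paper does not name its DDS implementation; topic and type names
        # are illustrative, and reader/writer API details vary by binding.
        from dataclasses import dataclass

        from cyclonedds.domain import DomainParticipant
        from cyclonedds.topic import Topic
        from cyclonedds.pub import DataWriter
        from cyclonedds.sub import DataReader
        from cyclonedds.idl import IdlStruct

        @dataclass
        class Measurement(IdlStruct, typename="Measurement"):
            node_id: str
            voltage: float
            frequency: float

        # Peers in the same DDS domain discover each other automatically:
        # there is no broker, hence no single point of failure.
        participant = DomainParticipant()
        topic = Topic(participant, "GridMeasurements", Measurement)
        writer = DataWriter(participant, topic)
        reader = DataReader(participant, topic)

        writer.write(Measurement(node_id="feeder-3", voltage=239.7,
                                 frequency=60.01))
        for sample in reader.take(N=10):
            print(sample.node_id, sample.voltage, sample.frequency)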

  13. CoNNeCT Antenna Positioning System Dynamic Simulator Modal Model Correlation

    NASA Technical Reports Server (NTRS)

    Jones, Trevor M.; McNelis, Mark E.; Staab, Lucas D.; Akers, James C.; Suarez, Vicente

    2012-01-01

    The National Aeronautics and Space Administration (NASA) developed an on-orbit, adaptable, Software Defined Radios (SDR)/Space Telecommunications Radio System (STRS)-based testbed facility to conduct a suite of experiments to advance technologies, reduce risk, and enable future mission capabilities on the International Space Station (ISS). The Communications, Navigation, and Networking reConfigurable Testbed (CoNNeCT) Project will provide NASA, industry, other Government agencies, and academic partners the opportunity to develop and field communications, navigation, and networking technologies in both the laboratory and space environment based on reconfigurable, software-defined radio platforms and the STRS Architecture. The CoNNeCT Payload Operations Nomenclature is "SCAN Testbed," and this nomenclature will be used in all ISS integration, safety, verification, and operations documentation. The SCAN Testbed (payload) is a Flight Releasable Attachment Mechanism (FRAM) based payload that will launch aboard the Japanese H-II Transfer Vehicle (HTV) Multipurpose Exposed Pallet (EP-MP) to the International Space Station (ISS), and will be transferred to the Express Logistics Carrier 3 (ELC3) via Extravehicular Robotics (EVR). The SCAN Testbed will operate on-orbit for a minimum of two years.

  14. CoNNeCT Antenna Positioning System Dynamic Simulator Modal Model Correlation

    NASA Technical Reports Server (NTRS)

    Jones, Trevor M.; McNelis, Mark E.; Staab, Lucas D.; Akers, James C.; Suarez, Vicente J.

    2012-01-01

    The National Aeronautics and Space Administration (NASA) developed an on-orbit, adaptable, Software Defined Radios (SDR)/Space Telecommunications Radio System (STRS)-based testbed facility to conduct a suite of experiments to advance technologies, reduce risk, and enable future mission capabilities on the International Space Station (ISS). The Communications, Navigation, and Networking reConfigurable Testbed (CoNNeCT) Project will provide NASA, industry, other Government agencies, and academic partners the opportunity to develop and field communications, navigation, and networking technologies in both the laboratory and space environment based on reconfigurable, software-defined radio platforms and the STRS Architecture. The CoNNeCT Payload Operations Nomenclature is SCAN Testbed, and this nomenclature will be used in all ISS integration, safety, verification, and operations documentation. The SCAN Testbed (payload) is a Flight Releasable Attachment Mechanism (FRAM) based payload that will launch aboard the Japanese H-II Transfer Vehicle (HTV) Multipurpose Exposed Pallet (EP-MP) to the International Space Station (ISS), and will be transferred to the Express Logistics Carrier 3 (ELC3) via Extravehicular Robotics (EVR). The SCAN Testbed will operate on-orbit for a minimum of two years.

  15. Description of New Inflatable/Rigidizable Hexapod Structure Testbed for Shape and Vibration Control

    NASA Technical Reports Server (NTRS)

    Adetona, O.; Keel, L. H.; Horta, L. G.; Cadogan, D. P.; Sapna, G. H.; Scarborough, S. E.

    2002-01-01

    Larger and more powerful space based instruments are needed to meet increasingly sophisticated scientific demand. To support this need, concepts for telescopes with apertures of 100 meters are being investigated, but the required technologies are not in hand today. Due to the capacity limits of launch vehicles, the idea of deploying, erecting, or inflating large structures in space is being considered. Recently, rigidization concepts of large inflatable structures have demonstrated the capability of weight reductions of up to 50% from current concepts with packaging efficiencies near 80%. One of the important aspects of inflatable structures is vibration mitigation and line-of-sight control. Such control tasks are possible only after actuators/sensors are properly integrated into a rigidizable concept. To study these issues, we have developed an inflatable/rigidizable hexapod structure testbed. The testbed integrates state of the art piezo-electric self-sensing actuators into an inflatable/rigidizable structure and a flat membrane reflector. Using this testbed, we plan to experimentally demonstrate achievable vibration and line-of-sight control. This paper contains a description of the testbed and an outline of the test plan.

  16. A computational- And storage-cloud for integration of biodiversity collections

    USGS Publications Warehouse

    Matsunaga, A.; Thompson, A.; Figueiredo, R. J.; Germain-Aubrey, C.C; Collins, M.; Beeman, R.S; Macfadden, B.J.; Riccardi, G.; Soltis, P.S; Page, L. M.; Fortes, J.A.B

    2013-01-01

    A core mission of the Integrated Digitized Biocollections (iDigBio) project is the building and deployment of a cloud computing environment customized to support the digitization workflow and integration of data from all U.S. nonfederal biocollections. iDigBio chose to use cloud computing technologies to deliver a cyberinfrastructure that is flexible, agile, resilient, and scalable to meet the needs of the biodiversity community. In this context, this paper describes the integration of open source cloud middleware, applications, and third party services using standard formats, protocols, and services. In addition, this paper demonstrates the value of the digitized information from collections in a broader scenario involving multiple disciplines.

  17. Matsu: An Elastic Cloud Connected to a SensorWeb for Disaster Response

    NASA Technical Reports Server (NTRS)

    Mandl, Daniel

    2011-01-01

    This slide presentation reviews the use of cloud computing combined with the SensorWeb to aid disaster response planning. Included are an overview of the architecture of the SensorWeb and an overview of phase 1 of the EO-1 system and the steps to transform it into an on-demand product cloud as part of the Open Cloud Consortium (OCC). The effectiveness of this system was demonstrated by the SensorWeb during the 2010 Namibia flood: using information blended from MODIS, TRMM, and river gauge data on a Google Earth view of Namibia, the system enabled river-surge predictions and could support planning for future disaster responses.

  18. Terahertz standoff imaging testbed design and performance for concealed weapon and device identification model development

    NASA Astrophysics Data System (ADS)

    Franck, Charmaine C.; Lee, Dave; Espinola, Richard L.; Murrill, Steven R.; Jacobs, Eddie L.; Griffin, Steve T.; Petkie, Douglas T.; Reynolds, Joe

    2007-04-01

    This paper describes the design and performance of the U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate's (NVESD) active 0.640-THz imaging testbed, developed in support of the Defense Advanced Research Projects Agency's (DARPA) Terahertz Imaging Focal-Plane Technology (TIFT) program. The laboratory measurements and standoff images were acquired during the development of an NVESD and Army Research Laboratory terahertz imaging performance model. The imaging testbed is based on a 12-inch-diameter Off-Axis Elliptical (OAE) mirror designed with one focal length at 1 m and the other at 10 m. This paper will describe the design considerations of the OAE-mirror, dual-capability, active imaging testbed, as well as measurement/imaging results used to further develop the model.

  19. Open Source Surrogate Safety Assessment Model, 2017 Enhancement and Update: SSAM Version 3.0 [Tech Brief

    DOT National Transportation Integrated Search

    2016-11-17

    The ETFOMM (Enhanced Transportation Flow Open Source Microscopic Model) Cloud Service (ECS) is a software product sponsored by the U.S. Department of Transportation in conjunction with the Microscopic Traffic Simulation Models and SoftwareAn Op...

  20. Using Cloud-based Storage Technologies for Earth Science Data

    NASA Astrophysics Data System (ADS)

    Michaelis, A.; Readey, J.; Votava, P.

    2016-12-01

    Cloud-based infrastructure may offer several key benefits, including scalability, built-in redundancy, and reduced total cost of ownership as compared with a traditional data center approach. However, most of the tools and software systems developed for NASA data repositories were not developed with a cloud-based infrastructure in mind and do not fully take advantage of commonly available cloud-based technologies. Object storage services are provided by all the leading public clouds (Amazon Web Services, Microsoft Azure, Google Cloud, etc.) and private clouds (OpenStack), and may provide a more cost-effective means of storing large data collections online. We describe a system that utilizes object storage rather than traditional file-system-based storage to serve Earth science data. The system described is not only cost-effective but also shows superior performance for running many different analytics tasks in the cloud. To enable compatibility with existing tools and applications, we outline client libraries that are API-compatible with existing libraries for HDF5 and NetCDF4. Performance of the system is demonstrated using cloud services running on Amazon Web Services.
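
    As an illustration of the object-storage access pattern described above, the following minimal Python sketch reads a byte range from an object in Amazon S3 using the boto3 client. The bucket and object names are hypothetical, and the HDF5/NetCDF4-compatible client libraries mentioned in the abstract are not shown.

      import boto3  # AWS SDK for Python

      s3 = boto3.client("s3")

      # Ranged GET: fetch only the first kilobyte of a (hypothetical) granule.
      # Partial reads like this are one reason object storage suits large
      # scientific files better than whole-file downloads.
      resp = s3.get_object(
          Bucket="example-earth-science",  # hypothetical bucket
          Key="granules/sample.nc",        # hypothetical NetCDF4 object
          Range="bytes=0-1023",
      )
      header_bytes = resp["Body"].read()
      print(len(header_bytes), "bytes read")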

  1. Possible external sources of terrestrial cloud cover variability: the solar wind

    NASA Astrophysics Data System (ADS)

    Voiculescu, Mirela; Usoskin, Ilya; Condurache-Bota, Simona

    2014-05-01

    Cloud cover plays an important role in the terrestrial radiation budget. The possible influence of solar activity on cloud cover is still an open question with contradictory answers. An extraterrestrial factor potentially affecting cloud cover is related to fields associated with the solar wind. We focus here on a derived quantity, the interplanetary electric field (IEF), defined as the product of the solar wind speed and the meridional component, Bz, of the interplanetary magnetic field (IMF) in the Geocentric Solar Magnetospheric (GSM) system. We show that cloud cover at mid-to-high latitudes systematically correlates with positive IEF, which has a clear energetic input into the atmosphere, but not with negative IEF, in general agreement with predictions of the mechanism related to the global electric circuit (GEC). Since the IEF responds differently to solar activity than, for instance, cosmic ray flux or solar irradiance, we also show that such a study allows one solar-driven mechanism of cloud evolution, via the GEC, to be distinguished from others. We also present results showing that the link between cloud cover and the IMF varies with the composition and altitude of clouds.
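
    For reference, the derived quantity can be written compactly. This is a minimal formula following the abstract's definition; sign and coordinate conventions for the IEF vary in the literature, so the form below is stated as an assumption:

      \mathrm{IEF} = v_{\mathrm{sw}} \, B_z

    where $v_{\mathrm{sw}}$ is the solar wind speed and $B_z$ is the north-south IMF component in GSM coordinates.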

  2. Evaluation of NCMRWF unified model vertical cloud structure with CloudSat over the Indian summer monsoon region

    NASA Astrophysics Data System (ADS)

    Jayakumar, A.; Mamgain, Ashu; Jisesh, A. S.; Mohandas, Saji; Rakhi, R.; Rajagopal, E. N.

    2016-05-01

    Representation of rainfall distribution and monsoon circulation in the high-resolution versions of the NCMRWF Unified Model (NCUM-REG) for short-range forecasting of extreme rainfall events depends strongly on key factors such as the vertical cloud distribution, convection, and the convection/cloud relationship in the model. It is therefore highly relevant to evaluate the model's vertical structure of cloud and precipitation in the monsoon environment. To this end, we exploited the long observational record of CloudSat by conditioning it on the synoptic situation of the model simulation period. Simulations were run at 4-km grid length with the convective parameterization effectively switched off and on. Since the sample of CloudSat overpasses through the monsoon domain is small, this methodology provides a qualitative evaluation of the vertical cloud structure for the model simulation period. We envisage that the present study will open up possibilities for further improvement of the high-resolution version of NCUM in the tropics for rainfall events associated with the Indian summer monsoon.

  3. A Cloud Microphysics Model for the Gas Giant Planets

    NASA Astrophysics Data System (ADS)

    Palotai, Csaba J.; Le Beau, Raymond P.; Shankar, Ramanakumar; Flom, Abigail; Lashley, Jacob; McCabe, Tyler

    2016-10-01

    Recent studies have significantly increased the quality and the number of observed meteorological features on the jovian planets, revealing banded cloud structures and discrete features. Our current understanding of the formation and decay of those clouds also defines the conceptual models of the underlying atmospheric dynamics. The full interpretation of the new observational data set and the related theories requires modeling these features in a general circulation model (GCM). Here, we present details of our bulk cloud microphysics model, designed to simulate clouds in the Explicit Planetary Hybrid-Isentropic Coordinate (EPIC) GCM for the jovian planets. The cloud module includes a hydrological cycle for each condensable species, consisting of interactive vapor, cloud, and precipitation phases, and it accounts for latent heating and cooling throughout the transfer processes (Palotai and Dowling, 2008. Icarus, 194, 303-326). Previously, the self-organizing clouds in our simulations successfully reproduced the vertical and horizontal ammonia cloud structure in the vicinity of Jupiter's Great Red Spot and Oval BA (Palotai et al. 2014, Icarus, 232, 141-156). In recent work, we extended this model to include water clouds on Jupiter and Saturn, ammonia clouds on Saturn, and methane clouds on Uranus and Neptune. Details of our cloud parameterization scheme, our initial results, and their comparison with observations will be shown. The latest version of the EPIC model is available as open source software from NASA's PDS Atmospheres Node.
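
    As a toy illustration of the vapor-to-cloud transfer with latent heating that such a hydrological cycle involves, here is a generic saturation-adjustment sketch in Python. It is not the EPIC scheme, and the constants are terrestrial values chosen only for illustration.

      # Generic saturation adjustment (illustrative, not the EPIC scheme):
      # condense any vapor above saturation into cloud and apply latent heating.
      L_V = 2.5e6   # latent heat of vaporization, J/kg (terrestrial value)
      C_P = 1004.0  # specific heat at constant pressure, J/(kg K)

      def saturation_adjust(q_vapor, q_cloud, T, q_sat):
          excess = max(q_vapor - q_sat, 0.0)  # vapor beyond saturation condenses
          q_vapor -= excess
          q_cloud += excess
          T += (L_V / C_P) * excess           # latent heat warms the parcel
          return q_vapor, q_cloud, T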

  4. A Kenyan Cloud School. Massive Open Online & Ongoing Courses for Blended and Lifelong Learning

    ERIC Educational Resources Information Center

    Jobe, William

    2013-01-01

    This research describes the predicted outcomes of a Kenyan Cloud School (KCS), which is a MOOC that contains all courses taught at the secondary school level in Kenya. This MOOC will consist of online, ongoing subjects in both English and Kiswahili. The KCS subjects offer self-testing and peer assessment to maximize scalability, and digital badges…

  5. Enabling BOINC in infrastructure as a service cloud system

    NASA Astrophysics Data System (ADS)

    Montes, Diego; Añel, Juan A.; Pena, Tomás F.; Uhe, Peter; Wallom, David C. H.

    2017-02-01

    Volunteer or crowd computing is becoming increasingly popular for solving complex research problems from an increasingly diverse range of areas. The majority of these projects have been built using the Berkeley Open Infrastructure for Network Computing (BOINC) platform, which provides a range of different services to manage all computation aspects of a project. The BOINC system is ideal in those cases where the research community involved not only needs low-cost access to massive computing resources but where there is also significant public interest in the research being done. We discuss the way in which cloud services can help BOINC-based projects deliver results in a fast, on-demand manner. This is difficult to achieve using volunteers alone, and at the same time, using scalable cloud resources for short, on-demand projects can optimize the use of the available resources. We show how this design can be used as an efficient distributed computing platform within the cloud, and outline new approaches that could open up new possibilities in this field, using Climateprediction.net (http://www.climateprediction.net/) as a case study.

  6. How NASA is Building a Petabyte Scale Geospatial Archive in the Cloud

    NASA Technical Reports Server (NTRS)

    Pilone, Dan; Quinn, Patrick; Jazayeri, Alireza; Baynes, Kathleen; Murphy, Kevin J.

    2018-01-01

    NASA's Earth Observing System Data and Information System (EOSDIS) is working towards a vision of a cloud-based, highly flexible ingest, archive, management, and distribution system for its ever-growing and evolving data holdings. This free and open source system, Cumulus, is emerging from its prototyping stages and is poised to make a huge impact on how NASA manages and disseminates its Earth science data. This talk outlines the motivation for this work, presents the achievements and hurdles of the past 18 months, and charts a course for the future expansion of Cumulus. We explore not just the technical but also the socio-technical challenges that we face in evolving a system of this magnitude into the cloud. The NASA EOSDIS archive currently stands at nearly 30 PB and will grow to over 300 PB in the coming years. We presented progress on this effort at AWS re:Invent and the American Geophysical Union (AGU) Fall Meeting in 2017, and we hope to have the opportunity to share with FOSS4G attendees information on the availability of the open-sourced software and how NASA intends to make its Earth observing geospatial data available for free to the public in the cloud.

  7. Hybrid cloud and cluster computing paradigms for life science applications

    PubMed Central

    2010-01-01

    Background: Clouds and MapReduce have shown themselves to be a broadly useful approach to scientific computing, especially for parallel data-intensive applications. However, they have limited applicability in some areas, such as data mining, because MapReduce performs poorly on problems with the iterative structure present in the linear algebra that underlies much data analysis. Such problems can be run efficiently on clusters using MPI, leading to a hybrid cloud and cluster environment. This motivates the design and implementation of an open source Iterative MapReduce system, Twister. Results: Comparisons of Amazon, Azure, and traditional Linux and Windows environments on common applications have shown encouraging performance and usability in several important non-iterative cases. These are linked to MPI applications for the final stages of the data analysis. Further, we have released the open source Twister Iterative MapReduce and benchmarked it against basic MapReduce (Hadoop) and MPI in information retrieval and life sciences applications. Conclusions: The hybrid cloud (MapReduce) and cluster (MPI) approach offers an attractive production environment, while Twister promises a uniform programming environment for many life sciences applications. Methods: We used the commercial clouds Amazon and Azure and the NSF resource FutureGrid to perform detailed comparisons and evaluations of different approaches to data-intensive computing. Several applications were developed in MPI, MapReduce, and Twister in these different environments. PMID:21210982

  8. Hybrid cloud and cluster computing paradigms for life science applications.

    PubMed

    Qiu, Judy; Ekanayake, Jaliya; Gunarathne, Thilina; Choi, Jong Youl; Bae, Seung-Hee; Li, Hui; Zhang, Bingjing; Wu, Tak-Lon; Ruan, Yang; Ekanayake, Saliya; Hughes, Adam; Fox, Geoffrey

    2010-12-21

    Clouds and MapReduce have shown themselves to be a broadly useful approach to scientific computing, especially for parallel data-intensive applications. However, they have limited applicability in some areas, such as data mining, because MapReduce performs poorly on problems with the iterative structure present in the linear algebra that underlies much data analysis. Such problems can be run efficiently on clusters using MPI, leading to a hybrid cloud and cluster environment. This motivates the design and implementation of an open source Iterative MapReduce system, Twister. Comparisons of Amazon, Azure, and traditional Linux and Windows environments on common applications have shown encouraging performance and usability in several important non-iterative cases. These are linked to MPI applications for the final stages of the data analysis. Further, we have released the open source Twister Iterative MapReduce and benchmarked it against basic MapReduce (Hadoop) and MPI in information retrieval and life sciences applications. The hybrid cloud (MapReduce) and cluster (MPI) approach offers an attractive production environment, while Twister promises a uniform programming environment for many life sciences applications. We used the commercial clouds Amazon and Azure and the NSF resource FutureGrid to perform detailed comparisons and evaluations of different approaches to data-intensive computing. Several applications were developed in MPI, MapReduce, and Twister in these different environments.
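
    To make the iteration bottleneck described in the two records above concrete, the following minimal pure-Python sketch runs a k-means-style computation as repeated map and reduce stages. It is not Twister code; it only illustrates the pattern in which every iteration is a full map stage plus reduce stage, which plain MapReduce must re-launch as separate jobs while an iterative runtime such as Twister keeps the stages resident.

      # Iterative map-reduce sketch: k-means on 1-D points (illustrative only).
      points = [0.1, 0.4, 0.35, 9.2, 9.9, 10.1]
      centroids = [0.0, 5.0]

      for _ in range(20):
          # Map stage: emit (index of nearest centroid, point).
          pairs = [(min(range(len(centroids)),
                        key=lambda i: abs(p - centroids[i])), p) for p in points]
          # Reduce stage: average the points assigned to each centroid.
          new = [sum(p for i, p in pairs if i == k) /
                 max(1, sum(1 for i, _ in pairs if i == k))
                 for k in range(len(centroids))]
          if new == centroids:  # fixed point reached: converged
              break
          centroids = new

      print(centroids)  # -> roughly [0.28, 9.73]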

  9. Integrating multiple scientific computing needs via a Private Cloud infrastructure

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Berzano, D.; Brunetti, R.; Lusso, S.; Vallero, S.

    2014-06-01

    In a typical scientific computing centre, diverse applications coexist and share a single physical infrastructure. An underlying Private Cloud facility eases the management and maintenance of heterogeneous use cases such as multipurpose or application-specific batch farms, Grid sites catering to different communities, parallel interactive data analysis facilities, and others. It makes it possible to dynamically and efficiently allocate resources to any application and to tailor the virtual machines to the applications' requirements. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques; for example, rolling updates can be performed easily and with minimal downtime. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 site, a dynamically expandable PROOF-based Interactive Analysis Facility for the ALICE experiment at the CERN LHC, and several smaller scientific computing applications. The Private Cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem (used in two different configurations for worker- and service-class hypervisors), and the OpenWRT Linux distribution (used for network virtualization). A future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and by using mainstream contextualization tools like CloudInit.
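
    As a hedged sketch of what exposing an EC2-compatible API enables, the Python snippet below points the boto3 EC2 client at a hypothetical private-cloud endpoint and passes a cloud-init user-data document for contextualization. The endpoint, credentials, image ID, and instance type are all illustrative, not the INFN-Torino configuration.

      import boto3

      # EC2-compatible endpoint exposed by the private cloud (hypothetical).
      ec2 = boto3.client(
          "ec2",
          endpoint_url="https://cloud.example.org:4567",
          aws_access_key_id="ACCESS_KEY",      # placeholder credentials
          aws_secret_access_key="SECRET_KEY",
          region_name="site-local",
      )

      # Cloud-init user data: contextualize the VM at first boot.
      user_data = "#cloud-config\nruncmd:\n  - [ sh, -c, 'echo contextualized' ]\n"

      ec2.run_instances(
          ImageId="ami-00000001",  # illustrative image ID
          InstanceType="m1.medium",
          MinCount=1,
          MaxCount=1,
          UserData=user_data,
      )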

  10. Large-scale virtual screening on public cloud resources with Apache Spark.

    PubMed

    Capuccini, Marco; Ahmed, Laeeq; Schaal, Wesley; Laure, Erwin; Spjuth, Ola

    2017-01-01

    Structure-based virtual screening is an in-silico method to screen a target receptor against a virtual molecular library. Applying docking-based screening to large molecular libraries can be computationally expensive; however, it constitutes a trivially parallelizable task. Most of the available parallel implementations are based on the Message Passing Interface (MPI), relying on low-failure-rate hardware and fast network connections. Google's MapReduce revolutionized large-scale analysis, enabling the processing of massive datasets on commodity hardware and cloud resources, providing transparent scalability and fault tolerance at the software level. Open source implementations of MapReduce include Apache Hadoop and the more recent Apache Spark. We developed a method to run existing docking-based screening software on distributed cloud resources, utilizing the MapReduce approach. We benchmarked our method, which is implemented in Apache Spark, by docking a publicly available target receptor against ~2.2 M compounds. The performance experiments show a good parallel efficiency (87%) when running in a public cloud environment. Our method enables parallel structure-based virtual screening on public cloud resources or commodity computer clusters. The degree of scalability that we achieve allows our method to be tried out on relatively small libraries first and then scaled to larger libraries. Our implementation is named Spark-VS and it is freely available as open source from GitHub (https://github.com/mcapuccini/spark-vs).
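
    The general pattern (though not the Spark-VS code itself) can be sketched in a few lines of PySpark: distribute the compound library as an RDD, apply a docking function in the map stage, and collect the best scores. The docking call below is a stub, and the input path is hypothetical.

      from pyspark.sql import SparkSession

      def dock_score(smiles):
          """Stand-in for invoking an external docking program on one
          compound; returns (compound, score) with a placeholder score."""
          return (smiles, float(len(smiles)))

      spark = SparkSession.builder.appName("vs-sketch").getOrCreate()

      # One SMILES string per line; the path is hypothetical.
      library = spark.sparkContext.textFile("compounds.smi")

      # Map: score every compound in parallel across the cluster.
      scores = library.map(dock_score)

      # Collect the ten best-scoring compounds to the driver.
      print(scores.takeOrdered(10, key=lambda kv: -kv[1]))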

  11. Large-scale structural analysis: The structural analyst, the CSM Testbed and the NAS System

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.; Mccleary, Susan L.; Macy, Steven C.; Aminpour, Mohammad A.

    1989-01-01

    The Computational Structural Mechanics (CSM) activity is developing advanced structural analysis and computational methods that exploit high-performance computers. Methods are developed in the framework of the CSM testbed software system and applied to representative complex structural analysis problems from the aerospace industry. An overview of the CSM testbed methods development environment is presented and some numerical methods developed on a CRAY-2 are described. Selected application studies performed on the NAS CRAY-2 are also summarized.

  12. The Segmented Aperture Interferometric Nulling Testbed (SAINT) I: overview and air-side system description

    NASA Astrophysics Data System (ADS)

    Hicks, Brian A.; Lyon, Richard G.; Petrone, Peter; Ballard, Marlin; Bolcar, Matthew R.; Bolognese, Jeff; Clampin, Mark; Dogoda, Peter; Dworzanski, Daniel; Helmbrecht, Michael A.; Koca, Corina; Shiri, Ron

    2016-07-01

    This work presents an overview of the Segmented Aperture Interferometric Nulling Testbed (SAINT), a project that will pair an actively-controlled macro-scale segmented mirror with the Visible Nulling Coronagraph (VNC). SAINT will incorporate the VNC's demonstrated wavefront sensing and control system to refine and quantify end-to-end high-contrast starlight suppression performance. This pathfinder testbed will be used as a tool to study and refine approaches to mitigating instabilities and complex diffraction expected from future large segmented aperture telescopes.

  13. On-wire lithography-generated molecule-based transport junctions: a new testbed for molecular electronics.

    PubMed

    Chen, Xiaodong; Jeon, You-Moon; Jang, Jae-Won; Qin, Lidong; Huo, Fengwei; Wei, Wei; Mirkin, Chad A

    2008-07-02

    On-wire lithography (OWL)-fabricated nanogaps are used as a new testbed to construct molecular transport junctions (MTJs) through the assembly of thiolated molecular wires across a nanogap formed between two Au electrodes. In addition, we show that one can use OWL to rapidly characterize an MTJ and optimize gap size for two molecular wires of different dimensions. Finally, we have used this new testbed to identify unusual temperature-dependent transport mechanisms for alpha,omega-dithiol-terminated oligo(phenylene ethynylene).

  14. A Reference Software Architecture to Support Unmanned Aircraft Integration in the National Airspace System

    DTIC Science & Technology

    2012-07-01

    This paper describes a reference software architecture that provides data and software services to enable a set of Unmanned Aircraft (UA) platforms to operate in a wide range of air domains. The architecture has been implemented by MIT Lincoln Laboratory in the form of a Sense and Avoid (SAA) testbed that provides some of the core services. This paper describes the general architecture and an SAA testbed implementation.

  15. The ac power system testbed

    NASA Technical Reports Server (NTRS)

    Mildice, J.; Sundberg, R.

    1987-01-01

    The objective of this program was to design, build, test, and deliver a high-frequency (20 kHz) Power System Testbed which would electrically approximate a single, separable power channel of an IOC Space Station. That program is described, including the technical background, and the results are discussed, showing that the major assumptions about the characteristics of this class of hardware (size, mass, efficiency, control, etc.) were substantially correct. This testbed equipment was completed and delivered and is being operated as part of the Space Station Power System Test Facility.

  16. Advanced Artificial Intelligence Technology Testbed

    NASA Technical Reports Server (NTRS)

    Anken, Craig S.

    1993-01-01

    The Advanced Artificial Intelligence Technology Testbed (AAITT) is a laboratory testbed for the design, analysis, integration, evaluation, and exercising of large-scale, complex, software systems, composed of both knowledge-based and conventional components. The AAITT assists its users in the following ways: configuring various problem-solving application suites; observing and measuring the behavior of these applications and the interactions between their constituent modules; gathering and analyzing statistics about the occurrence of key events; and flexibly and quickly altering the interaction of modules within the applications for further study.

  17. ARTEMIS: a collaborative framework for health care.

    PubMed

    Reddy, R; Jagannathan, V; Srinivas, K; Karinthi, R; Reddy, S M; Gollapudy, C; Friedman, S

    1993-01-01

    Patient-centered healthcare delivery is an inherently collaborative process. It involves a wide range of individuals and organizations with diverse perspectives: primary care physicians, hospital administrators, labs, clinics, and insurers. The key to cost reduction and quality improvement in health care is effective management of this collaborative process. The use of multi-media collaboration technology can facilitate timely delivery of patient care and reduce cost at the same time. During the last five years, the Concurrent Engineering Research Center (CERC), under the sponsorship of DARPA (Defense Advanced Research Projects Agency, recently renamed ARPA), developed a number of generic key subsystems of a comprehensive collaboration environment. These subsystems are intended to overcome the barriers that inhibit the collaborative process. Three subsystems developed under this program are MONET (Meeting On the Net), which provides consultation over a computer network; ISS (Information Sharing Server), which provides access to multi-media information; and PCB (Project Coordination Board), which supports better coordination of focused activities. These systems have been integrated into an open environment to enable collaborative processes. This environment is being used to create a wide-area (geographically distributed) research testbed under DARPA sponsorship, ARTEMIS (Advanced Research Testbed for Medical Informatics), to explore collaborative health care processes. We believe this technology will play a key role in the current national thrust to reengineer the present health-care delivery system.

  18. Prototyping Control and Data Acquisition for the ITER Neutral Beam Test Facility

    NASA Astrophysics Data System (ADS)

    Luchetta, Adriano; Manduchi, Gabriele; Taliercio, Cesare; Soppelsa, Anton; Paolucci, Francesco; Sartori, Filippo; Barbato, Paolo; Breda, Mauro; Capobianco, Roberto; Molon, Federico; Moressa, Modesto; Polato, Sandro; Simionato, Paola; Zampiva, Enrico

    2013-10-01

    The ITER Neutral Beam Test Facility will be the project's R&D facility for heating neutral beam injectors (HNB) for fusion research operating with H/D negative ions. Its mission is to develop the technology to build the HNB prototype injector meeting the stringent HNB requirements (16.5 MW injection power, -1 MeV acceleration energy, 40 A ion current, and one-hour continuous operation). Two test-beds will be built in sequence in the facility: first SPIDER, the ion source test-bed, to optimize the negative ion source performance; second MITICA, the actual prototype injector, to optimize ion beam acceleration and neutralization. The SPIDER control and data acquisition system is under design. To validate the main architectural choices, a system prototype has been assembled and performance tests have been executed to assess the prototype's capability to meet the control and data acquisition requirements. The prototype is based on open-source software frameworks running under Linux: EPICS is the slow control engine, MDSplus is the data handler, and MARTe is the fast control manager. The prototype addresses low- and high-frequency data acquisition (10 kS/s and 10 MS/s, respectively), camera image acquisition, data archiving, data streaming, data retrieval and visualization, and real-time fast control with a 100 μs control cycle.
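
    As a hedged illustration of the slow-control layer, the sketch below uses pyepics, a widely used Python client for EPICS Channel Access (the abstract names EPICS but not this client, so treat the choice as an assumption). The process-variable names are hypothetical, not taken from the SPIDER design.

      from epics import caget, caput  # pyepics Channel Access client

      # Read a (hypothetical) ion-source pressure process variable.
      pressure = caget("NBTF:SRC:PRESSURE")

      # Write a setpoint and block until the put completes.
      caput("NBTF:SRC:PRESSURE_SP", 0.35, wait=True)

      print("source pressure:", pressure)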

  19. OpenMP Parallelization and Optimization of Graph-Based Machine Learning Algorithms

    DOE PAGES

    Meng, Zhaoyi; Koniges, Alice; He, Yun Helen; ...

    2016-09-21

    In this paper, we investigate the OpenMP parallelization and optimization of two novel data classification algorithms. The new algorithms are based on graph and PDE solution techniques and provide significant accuracy and performance advantages over traditional data classification algorithms in serial mode. The methods leverage the Nystrom extension to calculate eigenvalues/eigenvectors of the graph Laplacian; this is a self-contained module that can be used in conjunction with other graph-Laplacian-based methods such as spectral clustering. We use performance tools to collect the hotspots and memory access patterns of the serial codes and use OpenMP as the parallelization language to parallelize the most time-consuming parts. Where possible, we also use library routines. We then optimize the OpenMP implementations and detail the performance on traditional supercomputer nodes (in our case a Cray XC30), and test the optimization steps on emerging testbed systems based on Intel's Knights Corner and Landing processors. We show both performance improvement and strong scaling behavior. Finally, a large number of optimization techniques and analyses are necessary before the algorithm reaches almost ideal scaling.
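
    Since the abstract singles out the Nystrom extension as a self-contained module, here is a hedged NumPy sketch of the basic idea: approximating eigenvectors of a large Gaussian affinity matrix from a small sampled block. It is illustrative only, written in Python rather than the paper's OpenMP setting, and omits the normalization details of the full method.

      import numpy as np

      def nystrom_eigvecs(X, m, sigma=1.0, seed=0):
          """Approximate eigenvectors of the n x n Gaussian affinity matrix
          of the rows of X using only an n x m block against m landmarks."""
          rng = np.random.default_rng(seed)
          n = X.shape[0]
          idx = rng.choice(n, size=m, replace=False)
          # n x m affinity block between all points and the landmarks.
          d2 = ((X[:, None, :] - X[None, idx, :]) ** 2).sum(axis=-1)
          C = np.exp(-d2 / (2.0 * sigma ** 2))
          W = C[idx]                    # m x m landmark-landmark block
          evals, U = np.linalg.eigh(W)  # small eigenproblem on the block
          keep = evals > 1e-10          # avoid dividing by near-zero values
          # Nystrom extension: lift landmark eigenvectors to all n points.
          return C @ U[:, keep] / evals[keep]

      X = np.random.default_rng(1).normal(size=(500, 2))
      print(nystrom_eigvecs(X, m=50).shape)  # (500, number of kept vectors)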

  20. CSI computer system/remote interface unit acceptance test results

    NASA Technical Reports Server (NTRS)

    Sparks, Dean W., Jr.

    1992-01-01

    The validation tests conducted on the Control/Structures Interaction (CSI) Computer System (CCS)/Remote Interface Unit (RIU) are discussed. The CCS/RIU consists of a commercially available, Langley Research Center (LaRC)-programmed, space-flight-qualified computer and a flight data acquisition and filtering computer developed at LaRC. The tests were performed in the Space Structures Research Laboratory (SSRL) and included open-loop excitation, closed-loop control, safing, RIU digital filtering, and RIU stand-alone testing with the CSI Evolutionary Model (CEM) Phase-0 testbed. The test results indicated that the CCS/RIU system is comparable to ground-based systems in performing real-time control-structure experiments.
