Sample records for heterogeneous cloud workloads

  1. Optimization of over-provisioned clouds

    NASA Astrophysics Data System (ADS)

    Balashov, N.; Baranov, A.; Korenkov, V.

    2016-09-01

    The functioning of modern cloud centers is characterized by a huge variety of generated computational workloads. This variety causes uneven workload distribution and, as a result, leads to ineffective utilization of cloud centers' hardware. This article addresses possible ways to solve the issue and demonstrates that optimizing the hardware utilization of cloud centers is a matter of necessity. As one possible way to solve the problem of inefficient resource utilization in heterogeneous cloud environments, an algorithm for dynamic re-allocation of virtual resources is suggested.
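
    The abstract does not spell out the algorithm itself; as a rough illustration of what dynamic re-allocation of virtual resources can look like, the following Python sketch (the threshold, data structures, and migration rule are all assumptions, not the authors' method) drains overloaded hosts onto the least-loaded host:

      # Hypothetical sketch of threshold-based VM re-allocation (not the
      # authors' algorithm): move VMs off overloaded hosts onto the
      # least-loaded host until every host is below the threshold.

      OVERLOAD = 0.80  # assumed CPU-utilization threshold

      def rebalance(hosts):
          """hosts: dict host -> list of per-VM CPU demands (fractions)."""
          moved = []
          for host, vms in hosts.items():
              while sum(vms) > OVERLOAD and vms:
                  vm = min(vms)                     # cheapest VM to migrate
                  target = min(hosts, key=lambda h: sum(hosts[h]))
                  if target == host:
                      break                         # nowhere better to go
                  vms.remove(vm)
                  hosts[target].append(vm)
                  moved.append((vm, host, target))
          return moved

      cluster = {"h1": [0.5, 0.4, 0.2], "h2": [0.1], "h3": [0.3]}
      print(rebalance(cluster))  # VMs moved off the overloaded h1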

  2. A green strategy for federated and heterogeneous clouds with communicating workloads.

    PubMed

    Mateo, Jordi; Vilaplana, Jordi; Plà, Lluis M; Lérida, Josep Ll; Solsona, Francesc

    2014-01-01

    Providers of cloud environments must tackle the challenge of configuring their system to provide maximal performance while minimizing the cost of resources used. However, at the same time, they must guarantee an SLA (service-level agreement) to the users. The SLA is usually associated with a certain level of QoS (quality of service). As response time is perhaps the most widely used QoS metric, it was also the one chosen in this work. This paper presents a green strategy (GS) model for heterogeneous cloud systems. We provide a solution for heterogeneous job-communicating tasks and heterogeneous VMs that make up the nodes of the cloud. In addition to guaranteeing the SLA, the main goal is to optimize energy savings. The solution results in an equation that must be solved by a solver with nonlinear capabilities. The results obtained from modelling the policies to be executed by a solver demonstrate the applicability of our proposal for saving energy and guaranteeing the SLA.
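
    As a toy illustration of the nonlinear SLA trade-off described above (a generic M/M/1 approximation with hypothetical rates, not the paper's GS equations), one can search for the smallest number of active VMs, and hence the lowest energy, that still meets a response-time SLA:

      # Toy energy/SLA trade-off under an M/M/1 approximation (illustrative
      # only; the paper's GS model is more elaborate). Each active VM
      # serves an equal share of the arrival rate.

      def response_time(n, lam, mu):
          """Mean response time with arrivals split across n M/M/1 servers."""
          per_server = lam / n
          if per_server >= mu:
              return float("inf")          # unstable queue
          return 1.0 / (mu - per_server)

      def min_vms_for_sla(lam, mu, sla, n_max=1024):
          """Smallest VM count meeting the SLA; fewer VMs = less energy."""
          for n in range(1, n_max + 1):
              if response_time(n, lam, mu) <= sla:
                  return n
          raise ValueError("SLA unreachable with n_max VMs")

      print(min_vms_for_sla(lam=90.0, mu=10.0, sla=0.5))  # -> 12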

  3. A Green Strategy for Federated and Heterogeneous Clouds with Communicating Workloads

    PubMed Central

    Plà, Lluis M.; Lérida, Josep Ll.

    2014-01-01

    Providers of cloud environments must tackle the challenge of configuring their system to provide maximal performance while minimizing the cost of resources used. However, at the same time, they must guarantee an SLA (service-level agreement) to the users. The SLA is usually associated with a certain level of QoS (quality of service). As response time is perhaps the most widely used QoS metric, it was also the one chosen in this work. This paper presents a green strategy (GS) model for heterogeneous cloud systems. We provide a solution for heterogeneous job-communicating tasks and heterogeneous VMs that make up the nodes of the cloud. In addition to guaranteeing the SLA, the main goal is to optimize energy savings. The solution results in an equation that must be solved by a solver with nonlinear capabilities. The results obtained from modelling the policies to be executed by a solver demonstrate the applicability of our proposal for saving energy and guaranteeing the SLA. PMID:25478589

  4. Cloudweaver: Adaptive and Data-Driven Workload Manager for Generic Clouds

    NASA Astrophysics Data System (ADS)

    Li, Rui; Chen, Lei; Li, Wen-Syan

    Cloud computing denotes the latest trend in application development for parallel computing on massive data volumes. It relies on clouds of servers to handle tasks that used to be managed by an individual server. With cloud computing, software vendors can provide business intelligence and data analytic services for internet-scale data sets. Many open source projects, such as Hadoop, offer various software components that are essential for building a cloud infrastructure. Current Hadoop (and many other frameworks) requires users to configure the cloud infrastructure via programs and APIs, and such configuration is fixed during runtime. In this chapter, we propose a workload manager (WLM), called CloudWeaver, which provides automated configuration of a cloud infrastructure for runtime execution. The workload management is data-driven and can adapt to the dynamic nature of operator throughput during different execution phases. CloudWeaver works for a single job and for a workload consisting of multiple jobs running concurrently, and aims at maximum throughput using a minimum set of processors.
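
    CloudWeaver's actual policy is not given in the abstract; the sketch below is a hypothetical flavor of data-driven allocation, granting each operator a share of the processor pool in proportion to its measured backlog-to-throughput ratio:

      # Illustrative data-driven processor allocation (not CloudWeaver's
      # actual policy): operators draining slowly relative to their
      # backlog receive more processors.

      def allocate(pool_size, backlog, throughput):
          """backlog/throughput: dicts operator -> runtime measurements."""
          need = {op: backlog[op] / max(throughput[op], 1e-9) for op in backlog}
          total = sum(need.values())
          return {op: max(1, round(pool_size * w / total))
                  for op, w in need.items()}

      print(allocate(16, backlog={"scan": 100, "join": 400, "agg": 50},
                     throughput={"scan": 50.0, "join": 20.0, "agg": 25.0}))
      # the slow 'join' operator gets most of the 16 processors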

  5. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    NASA Astrophysics Data System (ADS)

    Klimentov, A.; Buncic, P.; De, K.; Jha, S.; Maeno, T.; Mount, R.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Porter, R. J.; Read, K. F.; Vaniachine, A.; Wells, J. C.; Wenaus, T.

    2015-05-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10^2) sites, O(10^5) cores, O(10^8) jobs per year, and O(10^3) users, and the ATLAS data volume is O(10^17) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled ‘Next Generation Workload Management and Analysis System for Big Data' (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system. We will present our current accomplishments with running the PanDA WMS at OLCF and other supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications.

  6. Evolutionary Multiobjective Query Workload Optimization of Cloud Data Warehouses

    PubMed Central

    Dokeroglu, Tansel; Sert, Seyyit Alper; Cinar, Muhammet Serkan

    2014-01-01

    With the advent of Cloud databases, query optimizers need to find Pareto-optimal solutions in terms of response time and monetary cost. Our novel approach minimizes both objectives by deploying alternative virtual resources and query plans, making use of the virtual resource elasticity of the Cloud. We propose an exact multiobjective branch-and-bound and a robust multiobjective genetic algorithm for the optimization of distributed data warehouse query workloads on the Cloud. In order to investigate the effectiveness of our approach, we incorporate the devised algorithms into a prototype system. Finally, through several experiments conducted with different workloads and virtual resource configurations, we draw notable conclusions about alternative deployments as well as the advantages and disadvantages of the multiobjective algorithms we propose. PMID:24892048
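
    Both proposed algorithms search for Pareto-optimal plans over response time and monetary cost; a minimal sketch of the underlying dominance test and front filter (the plan data is invented for illustration):

      # Minimal Pareto-front filter over (response_time, monetary_cost)
      # pairs, the dominance notion underlying both the branch-and-bound
      # and the genetic algorithm described in the abstract.

      def dominates(a, b):
          """a dominates b: no worse in both objectives, better in one."""
          return all(x <= y for x, y in zip(a, b)) and a != b

      def pareto_front(points):
          return [p for p in points
                  if not any(dominates(q, p) for q in points)]

      plans = [(2.0, 10.0), (1.5, 14.0), (3.0, 9.0), (2.5, 12.0)]
      print(pareto_front(plans))  # (2.5, 12.0) is dominated by (2.0, 10.0)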

  7. Commissioning the CERN IT Agile Infrastructure with experiment workloads

    NASA Astrophysics Data System (ADS)

    Medrano Llamas, Ramón; Harald Barreiro Megino, Fernando; Kucharczyk, Katarzyna; Kamil Denis, Marek; Cinquilli, Mattia

    2014-06-01

    In order to ease the management of their infrastructure, most of the WLCG sites are adopting cloud-based strategies. In the case of CERN, the Tier 0 of the WLCG is completely restructuring the resource and configuration management of its computing centre under the codename Agile Infrastructure. Its goal is to manage 15,000 Virtual Machines by means of an OpenStack middleware in order to unify all the resources in CERN's two data centres: the one in Meyrin and the new one in Wigner, Hungary. During the commissioning of this infrastructure, CERN IT is offering an attractive amount of computing resources to the experiments (800 cores for ATLAS and CMS) through a private cloud interface. ATLAS and CMS have joined forces to exploit them by running stress tests and simulation workloads since November 2012. This work will describe the experience of the first deployments of the current experiment workloads on the CERN private cloud testbed. The paper is organized as follows: the first section will explain the integration of the experiment workload management systems (WMS) with the cloud resources. The second section will revisit the performance and stress testing performed with HammerCloud in order to evaluate and compare the suitability for the experiment workloads. The third section will go deeper into the dynamic provisioning techniques, such as the use of the cloud APIs directly by the WMS. The paper finishes with a review of the conclusions and the challenges ahead.

  8. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    DOE PAGES

    Klimentov, A.; Buncic, P.; De, K.; ...

    2015-05-22

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10^2) sites, O(10^5) cores, O(10^8) jobs per year, and O(10^3) users, and the ATLAS data volume is O(10^17) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled 'Next Generation Workload Management and Analysis System for Big Data' (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system. Finally, we will present our current accomplishments with running the PanDA WMS at OLCF and other supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications.

  9. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klimentov, A.; Buncic, P.; De, K.

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10^2) sites, O(10^5) cores, O(10^8) jobs per year, and O(10^3) users, and the ATLAS data volume is O(10^17) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled 'Next Generation Workload Management and Analysis System for Big Data' (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system. Finally, we will present our current accomplishments with running the PanDA WMS at OLCF and other supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications.

  10. Have the 'black clouds' cleared with new residency programme regulations?

    PubMed

    Schissler, A J; Einstein, A J

    2016-06-01

    For decades, residents believed to work harder than their peers have been referred to as having a 'black cloud'. Residency training programmes recently instituted changes to improve physician wellness and achieve comparable clinical workloads. All Internal Medicine residents in the internship class of 2014 at Columbia were surveyed to assess the ongoing presence of 'black cloud' trainees. While some residents are still thought to have this designation, they did not have a greater workload than their peers. © 2016 Royal Australasian College of Physicians.

  11. On the Modeling and Management of Cloud Data Analytics

    NASA Astrophysics Data System (ADS)

    Castillo, Claris; Tantawi, Asser; Steinder, Malgorzata; Pacifici, Giovanni

    A new era is dawning where vast amounts of data are subjected to intensive analysis in a cloud computing environment. Over the years, data about a myriad of things, ranging from user clicks to galaxies, have been accumulated, and continue to be collected, on storage media. The increasing availability of such data, along with the abundant supply of compute power and the urge to create useful knowledge, gave rise to a new data analytics paradigm in which data is subjected to intensive analysis, and additional data is created in the process. Meanwhile, a new cloud computing environment has emerged where seemingly limitless compute and storage resources are provided to host computation and data for multiple users through virtualization technologies. Such a cloud environment is becoming the home for data analytics. Consequently, providing good run-time performance to data analytics workloads is an important issue for cloud management. In this paper, we provide an overview of the data analytics and cloud environment landscapes, and investigate the performance management issues related to running data analytics in the cloud. In particular, we focus on topics such as workload characterization, profiling analytics applications and their patterns of data usage, cloud resource allocation, placement of computation and data and their dynamic migration in the cloud, and performance prediction. In solving such management problems one relies on various run-time analytic models. We discuss approaches for modeling and optimizing the dynamic data analytics workload in the cloud environment. Throughout, we use the Map-Reduce paradigm as an illustration of data analytics.
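
    Since the paper uses Map-Reduce as its running illustration, a minimal in-memory sketch of the paradigm (not the paper's code) may help fix ideas:

      # Minimal in-memory Map-Reduce word count, illustrating the
      # paradigm the paper uses as its running example.
      from collections import defaultdict
      from itertools import chain

      def map_phase(doc):
          return [(word, 1) for word in doc.split()]

      def reduce_phase(pairs):
          grouped = defaultdict(int)
          for key, value in pairs:
              grouped[key] += value      # shuffle + reduce in one step
          return dict(grouped)

      docs = ["cloud data analytics", "cloud workload analytics"]
      print(reduce_phase(chain.from_iterable(map_phase(d) for d in docs)))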

  12. W-MAC: A Workload-Aware MAC Protocol for Heterogeneous Convergecast in Wireless Sensor Networks

    PubMed Central

    Xia, Ming; Dong, Yabo; Lu, Dongming

    2011-01-01

    The power consumption and latency of existing MAC protocols for wireless sensor networks (WSNs) are high in heterogeneous convergecast, where each sensor node generates different amounts of data in one convergecast operation. To solve this problem, we present W-MAC, a workload-aware MAC protocol for heterogeneous convergecast in WSNs. A subtree-based iterative cascading scheduling mechanism and a workload-aware time slice allocation mechanism are proposed to minimize the power consumption of nodes, while offering a low data latency. In addition, an efficient schedule adjustment mechanism is provided for adapting to data traffic variation and network topology change. Analytical and simulation results show that the proposed protocol provides a significant energy saving and latency reduction in heterogeneous convergecast, and can effectively support data aggregation to further improve the performance. PMID:22163753
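
    W-MAC's exact mechanism cannot be reproduced from the abstract; the following sketch merely illustrates the idea of workload-aware time-slice allocation, giving each node a share of the frame proportional to the data it reports:

      # Illustrative workload-aware time-slice allocation (a guess at
      # the flavor of W-MAC's mechanism, not the protocol itself):
      # each node's slot within a frame is proportional to its queued data.

      def allocate_slices(frame_ms, workloads):
          """workloads: dict node -> bytes queued for this round."""
          total = sum(workloads.values())
          return {node: frame_ms * load / total
                  for node, load in workloads.items()}

      print(allocate_slices(100.0, {"n1": 800, "n2": 200, "n3": 1000}))
      # n1: 40 ms, n2: 10 ms, n3: 50 ms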

  13. Hybrid Symbiotic Organisms Search Optimization Algorithm for Scheduling of Tasks on Cloud Computing Environment.

    PubMed

    Abdullahi, Mohammed; Ngadi, Md Asri

    2016-01-01

    Cloud computing has attracted significant attention from the research community because of the rapid migration rate of Information Technology services to its domain. Advances in virtualization technology have made cloud computing very popular as a result of easier deployment of application services. Tasks are submitted to cloud datacenters to be processed in a pay-as-you-go fashion. Task scheduling is one of the significant research challenges in cloud computing environments. The current formulation of the task scheduling problem has been shown to be NP-complete, hence finding the exact solution, especially for large problem sizes, is intractable. The heterogeneous and dynamic features of cloud resources make optimum task scheduling non-trivial. Therefore, efficient task scheduling algorithms are required for optimum resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in the cloud computing environment based on a proposed Simulated Annealing (SA) based SOS (SASOS) in order to improve the convergence rate and solution quality of SOS. The SOS algorithm has a strong global exploration capability and uses fewer parameters. The systematic reasoning ability of SA is employed to find better solutions in local solution regions, hence adding exploration ability to SOS. Also, a fitness function is proposed that takes into account the utilization level of virtual machines (VMs), which reduces makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Simulation results showed that the hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan.
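
    The fitness function itself is not given in the abstract; below is a small sketch of how makespan and degree of imbalance are conventionally computed from a task-to-VM assignment (the formulas follow common CloudSim usage, not necessarily the paper's exact definitions):

      # Makespan and degree of imbalance for a task-to-VM schedule, the
      # two quantities the proposed fitness function targets.

      def completion_times(schedule, vm_mips):
          """schedule: dict vm -> task lengths (MI); vm_mips: dict vm -> MIPS."""
          return {vm: sum(tasks) / vm_mips[vm] for vm, tasks in schedule.items()}

      def makespan(schedule, vm_mips):
          return max(completion_times(schedule, vm_mips).values())

      def degree_of_imbalance(schedule, vm_mips):
          t = list(completion_times(schedule, vm_mips).values())
          return (max(t) - min(t)) / (sum(t) / len(t))

      sched = {"vm1": [4000, 2000], "vm2": [3000]}
      mips = {"vm1": 1000, "vm2": 500}
      print(makespan(sched, mips), degree_of_imbalance(sched, mips))  # 6.0 0.0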

  14. Hybrid Symbiotic Organisms Search Optimization Algorithm for Scheduling of Tasks on Cloud Computing Environment

    PubMed Central

    Abdullahi, Mohammed; Ngadi, Md Asri

    2016-01-01

    Cloud computing has attracted significant attention from the research community because of the rapid migration rate of Information Technology services to its domain. Advances in virtualization technology have made cloud computing very popular as a result of easier deployment of application services. Tasks are submitted to cloud datacenters to be processed in a pay-as-you-go fashion. Task scheduling is one of the significant research challenges in cloud computing environments. The current formulation of the task scheduling problem has been shown to be NP-complete, hence finding the exact solution, especially for large problem sizes, is intractable. The heterogeneous and dynamic features of cloud resources make optimum task scheduling non-trivial. Therefore, efficient task scheduling algorithms are required for optimum resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in the cloud computing environment based on a proposed Simulated Annealing (SA) based SOS (SASOS) in order to improve the convergence rate and solution quality of SOS. The SOS algorithm has a strong global exploration capability and uses fewer parameters. The systematic reasoning ability of SA is employed to find better solutions in local solution regions, hence adding exploration ability to SOS. Also, a fitness function is proposed that takes into account the utilization level of virtual machines (VMs), which reduces makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Simulation results showed that the hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan. PMID:27348127

  15. A survey and taxonomy on energy efficient resource allocation techniques for cloud computing systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hameed, Abdul; Khoshkbarforoushha, Alireza; Ranjan, Rajiv

    In a cloud computing paradigm, energy efficient allocation of different virtualized ICT resources (servers, storage disks, and networks, and the like) is a complex problem due to the presence of heterogeneous application (e.g., content delivery networks, MapReduce, web applications, and the like) workloads having contentious allocation requirements in terms of ICT resource capacities (e.g., network bandwidth, processing speed, response time, etc.). Several recent papers have tried to address the issue of improving energy efficiency in allocating cloud resources to applications with varying degrees of success. However, to the best of our knowledge there is no published literature on this subject that clearly articulates the research problem and provides a research taxonomy for succinct classification of existing techniques. Hence, the main aim of this paper is to identify open challenges associated with energy efficient resource allocation. In this regard, the study first outlines the problem and existing hardware and software-based techniques available for this purpose. Furthermore, available techniques already presented in the literature are summarized based on the energy-efficient research dimension taxonomy. The advantages and disadvantages of the existing techniques are comprehensively analyzed against the proposed research dimension taxonomy, namely: resource adaption policy, objective function, allocation method, allocation operation, and interoperability.

  16. Cost-aware request routing in multi-geography cloud data centres using software-defined networking

    NASA Astrophysics Data System (ADS)

    Yuan, Haitao; Bi, Jing; Li, Bo Hu; Tan, Wei

    2017-03-01

    Current geographically distributed cloud data centres (CDCs) require gigantic energy and bandwidth costs to provide multiple cloud applications to users around the world. Previous studies focus only on energy cost minimisation in distributed CDCs. However, a CDC provider needs to deliver gigantic data between users and distributed CDCs through internet service providers (ISPs). Geographical diversity of bandwidth and energy costs brings a highly challenging problem of how to minimise the total cost of a CDC provider. With the recently emerging software-defined networking, we study the total cost minimisation problem for a CDC provider by exploiting geographical diversity of energy and bandwidth costs. We formulate the total cost minimisation problem as a mixed integer non-linear programming (MINLP) problem. Then, we develop heuristic algorithms to solve the problem and to provide a cost-aware request routing for joint optimisation of the selection of ISPs and the number of servers in distributed CDCs. In addition, to tackle the dynamic workload in distributed CDCs, this article proposes a regression-based workload prediction method to obtain future incoming workload. Finally, this work evaluates the cost-aware request routing by trace-driven simulation and compares it with existing approaches to demonstrate its effectiveness.
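
    The regression-based prediction step can be sketched as an ordinary least-squares fit over a sliding window of past load (the window size and features here are assumptions, not the article's configuration):

      # Minimal regression-based workload predictor: fit requests-per-
      # interval against the last k observations with ordinary least
      # squares. Window size and features are illustrative.
      import numpy as np

      def fit_predictor(history, k=3):
          X = np.array([history[i:i + k] for i in range(len(history) - k)])
          y = np.array(history[k:])
          X = np.hstack([X, np.ones((len(X), 1))])   # intercept term
          coef, *_ = np.linalg.lstsq(X, y, rcond=None)
          return coef

      def predict_next(history, coef, k=3):
          x = np.append(history[-k:], 1.0)
          return float(x @ coef)

      load = [100, 120, 140, 160, 180, 200, 220]
      coef = fit_predictor(load)
      print(round(predict_next(load, coef)))  # ~240 for this linear trend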

  17. Performance implications from sizing a VM on multi-core systems: A data analytic application's view

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Seung-Hwan; Horey, James L; Begoli, Edmon

    In this paper, we present a quantitative performance analysis of data analytics applications running on multi-core virtual machines. Such environments form the core of cloud computing. In addition, data analytics applications, such as Cassandra and Hadoop, are becoming increasingly popular on cloud computing platforms. This convergence necessitates a better understanding of the performance and cost implications of such hybrid systems. For example, the very first step in hosting applications in virtualized environments requires the user to configure the number of virtual processors and the size of memory. To understand the performance implications of this step, we benchmarked three Yahoo Cloud Serving Benchmark (YCSB) workloads in a virtualized multi-core environment. Our measurements indicate that the performance of Cassandra for YCSB workloads does not heavily depend on the processing capacity of a system, while the size of the data set is critical to performance relative to allocated memory. We also identified a strong relationship between the running time of workloads and various hardware events (last level cache loads, misses, and CPU migrations). From this analysis, we provide several suggestions to improve the performance of data analytics applications running on cloud computing environments.
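
    The 'strong relationship' between running time and hardware events is the kind of result a simple correlation check surfaces; a sketch with fabricated numbers (not the paper's measurements):

      # Sketch of the kind of correlation check described: running time
      # of workloads vs. counts of a hardware event. Data is made up.
      import numpy as np

      runtime_s  = np.array([120, 150, 180, 240, 300])
      llc_misses = np.array([1.1e8, 1.4e8, 1.9e8, 2.6e8, 3.2e8])

      r = np.corrcoef(runtime_s, llc_misses)[0, 1]
      print(f"Pearson r = {r:.3f}")  # close to 1 for this fabricated data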

  18. Context-aware distributed cloud computing using CloudScheduler

    NASA Astrophysics Data System (ADS)

    Seuster, R.; Leavett-Brown, CR; Casteels, K.; Driemel, C.; Paterson, M.; Ring, D.; Sobie, RJ; Taylor, RP; Weldon, J.

    2017-10-01

    The distributed cloud using the CloudScheduler VM provisioning service is one of the longest running systems for HEP workloads. It has run millions of jobs for ATLAS and Belle II over the past few years using private and commercial clouds around the world. Our goal is to scale the distributed cloud to the 10,000-core level, with the ability to run any type of application (low I/O, high I/O and high memory) on any cloud. To achieve this goal, we have been implementing changes that utilize context-aware computing designs that are currently employed in the mobile communication industry. Context-awareness makes use of real-time and archived data to respond to user or system requirements. In our distributed cloud, we have many opportunistic clouds with no local HEP services, software or storage repositories. A context-aware design significantly improves the reliability and performance of our system by locating the nearest instance of the required services. We describe how we are collecting and managing contextual information from our workload management systems, the clouds, the virtual machines and our services. This information is used not only to monitor the system but also to carry out automated corrective actions. We are incrementally adding new alerting and response services to our distributed cloud. This will enable us to scale the number of clouds and virtual machines. Further, a context-aware design will enable us to run analysis or high I/O applications on opportunistic clouds. We envisage an open-source HTTP data federation (for example, the DynaFed system at CERN) as a service that would provide us access to existing storage elements used by the HEP experiments.
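
    A minimal sketch of the context-aware idea of locating the nearest instance of a required service (the replica names and probe data are hypothetical, not part of the CloudScheduler system):

      # Hypothetical sketch of context-aware service selection: pick the
      # replica of a required service (e.g. a software repository) with
      # the lowest recently measured latency from this cloud.

      def nearest_service(replicas, latency_ms):
          """replicas: list of URLs; latency_ms: dict URL -> recent probe."""
          return min(replicas, key=lambda r: latency_ms.get(r, float("inf")))

      probes = {"http://repo-eu.example.org": 180.0,
                "http://repo-na.example.org": 35.0}
      print(nearest_service(list(probes), probes))  # the nearer NA replica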

  19. Helix Nebula and CERN: A Symbiotic approach to exploiting commercial clouds

    NASA Astrophysics Data System (ADS)

    Barreiro Megino, Fernando H.; Jones, Robert; Kucharczyk, Katarzyna; Medrano Llamas, Ramón; van der Ster, Daniel

    2014-06-01

    The recent paradigm shift toward cloud computing in IT, and general interest in "Big Data" in particular, have demonstrated that the computing requirements of HEP are no longer globally unique. Indeed, the CERN IT department and LHC experiments have already made significant R&D investments in delivering and exploiting cloud computing resources. While a number of technical evaluations of interesting commercial offerings from global IT enterprises have been performed by various physics labs, further technical, security, sociological, and legal issues need to be addressed before their large-scale adoption by the research community can be envisaged. Helix Nebula - the Science Cloud is an initiative that explores these questions by joining the forces of three European research institutes (CERN, ESA and EMBL) with leading European commercial IT enterprises. The goals of Helix Nebula are to establish a cloud platform federating multiple commercial cloud providers, along with new business models, which can sustain the cloud marketplace for years to come. This contribution will summarize the participation of CERN in Helix Nebula. We will explain CERN's flagship use-case and the model used to integrate several cloud providers with an LHC experiment's workload management system. During the first proof of concept, this project contributed over 40,000 CPU-days of Monte Carlo production throughput to the ATLAS experiment with marginal manpower required. CERN's experience, together with that of ESA and EMBL, is providing great insight into the cloud computing industry and has highlighted several challenges that are being tackled in order to ease the export of scientific workloads to cloud environments.

  20. A survey and taxonomy on energy efficient resource allocation techniques for cloud computing systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hameed, Abdul; Khoshkbarforoushha, Alireza; Ranjan, Rajiv

    In a cloud computing paradigm, energy efficient allocation of different virtualized ICT resources (servers, storage disks, and networks, and the like) is a complex problem due to the presence of heterogeneous application (e.g., content delivery networks, MapReduce, web applications, and the like) workloads having contentious allocation requirements in terms of ICT resource capacities (e.g., network bandwidth, processing speed, response time, etc.). Several recent papers have tried to address the issue of improving energy efficiency in allocating cloud resources to applications with varying degrees of success. However, to the best of our knowledge there is no published literature on this subject that clearly articulates the research problem and provides a research taxonomy for succinct classification of existing techniques. Hence, the main aim of this paper is to identify open challenges associated with energy efficient resource allocation. In this regard, the study first outlines the problem and existing hardware and software-based techniques available for this purpose. Furthermore, available techniques already presented in the literature are summarized based on the energy-efficient research dimension taxonomy. The advantages and disadvantages of the existing techniques are comprehensively analyzed against the proposed research dimension taxonomy, namely: resource adaption policy, objective function, allocation method, allocation operation, and interoperability.

  1. Heterogeneous Clustering: Operational and User Impacts

    NASA Technical Reports Server (NTRS)

    Salm, Saita Wood

    1999-01-01

    Heterogeneous clustering can improve overall utilization of multiple hosts and can provide better turnaround to users by balancing workloads across hosts. Building a cluster requires both operational changes and revisions in user scripts.
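
    A minimal sketch of workload balancing across heterogeneous hosts (host names and capacities are hypothetical): each incoming job goes to the host with the lowest utilization relative to its capacity.

      # Illustrative load balancing across heterogeneous hosts: submit
      # each job to the host with the lowest relative utilization.

      def pick_host(load, capacity):
          """load/capacity: dicts host -> running jobs, host -> max slots."""
          return min(load, key=lambda h: load[h] / capacity[h])

      load = {"o2k": 16, "origin": 3, "linux1": 5}
      capacity = {"o2k": 64, "origin": 16, "linux1": 8}
      print(pick_host(load, capacity))  # 'origin' at 3/16 utilization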

  2. Heterogeneous Chemistry Involving Methanol in Tropospheric Clouds

    NASA Technical Reports Server (NTRS)

    Tabazadeh, A.; Yokelson, R. J.; Singh, H. B.; Hobbs, P. V.; Crawford, J. H.; Iraci, L. T.

    2004-01-01

    In this report we analyze airborne measurements to suggest that methanol in biomass burning smoke is lost heterogeneously in clouds. When a smoke plume intersected a cumulus cloud during the SAFARI 2000 field project, the observed methanol gas phase concentration rapidly declined. Current understanding of gas and aqueous phase chemistry cannot explain the loss of methanol documented by these measurements. Two plausible heterogeneous reactions are proposed to explain the observed simultaneous loss and production of methanol and formaldehyde, respectively. If the rapid heterogeneous processing of methanol, seen in a cloud impacted by smoke, occurs in more pristine clouds, it could affect the oxidizing capacity of the troposphere on a global scale.

  3. Enhancing Security by System-Level Virtualization in Cloud Computing Environments

    NASA Astrophysics Data System (ADS)

    Sun, Dawei; Chang, Guiran; Tan, Chunguang; Wang, Xingwei

    Many trends are opening up the era of cloud computing, which will reshape the IT industry. Virtualization techniques have become an indispensable ingredient of almost all cloud computing systems. Through virtual environments, a cloud provider is able to run the variety of operating systems needed by each cloud user. Virtualization can improve the reliability, security, and availability of applications by using consolidation, isolation, and fault tolerance. In addition, it is possible to balance workloads by using live migration techniques. In this paper, the definition of cloud computing is given, and then the service and deployment models are introduced. Security issues and challenges in the implementation of cloud computing are analyzed. Moreover, a system-level virtualization case is established to enhance the security of cloud computing environments.

  4. Key Technology Research on Open Architecture for The Sharing of Heterogeneous Geographic Analysis Models

    NASA Astrophysics Data System (ADS)

    Yue, S. S.; Wen, Y. N.; Lv, G. N.; Hu, D.

    2013-10-01

    In recent years, the increasing development of cloud computing technologies has laid a critical foundation for efficiently solving complicated geographic issues. However, it is still difficult to realize the cooperative operation of massive heterogeneous geographical models. Traditional cloud architecture is apt to provide a centralized solution to end users, with all the required resources offered by large enterprises or special agencies, so it is a closed framework from the perspective of resource utilization. Solving comprehensive geographic issues requires integrating multifarious heterogeneous geographical models and data. In this case, an open computing platform is needed, with which model owners can package and deploy their models into the cloud conveniently, while model users can search, access and utilize those models with cloud facilities. Based on this concept, open cloud service strategies for the sharing of heterogeneous geographic analysis models are studied in this article. The key technologies (a unified cloud interface strategy, a sharing platform based on cloud services, and a computing platform based on cloud services) are discussed in detail, and related experiments are conducted for further verification.

  5. CERN Computing in Commercial Clouds

    NASA Astrophysics Data System (ADS)

    Cordeiro, C.; Field, L.; Garrido Bear, B.; Giordano, D.; Jones, B.; Keeble, O.; Manzi, A.; Martelli, E.; McCance, G.; Moreno-García, D.; Traylen, S.

    2017-10-01

    By the end of 2016, more than 10 million core-hours of computing resources had been delivered by several commercial cloud providers to the four LHC experiments to run their production workloads, from simulation to full chain processing. In this paper we describe the experience gained at CERN in procuring and exploiting commercial cloud resources for the computing needs of the LHC experiments. The mechanisms used for provisioning, monitoring, accounting, alarming and benchmarking will be discussed, as well as the involvement of the LHC collaborations in terms of managing the workflows of the experiments within a multicloud environment.

  6. Storm Clouds on the Digital Education Horizon.

    ERIC Educational Resources Information Center

    Reeves, Thomas C.

    2003-01-01

    Focuses on five unresolved challenges of digital education in higher education: faculty workload, continued dominance of traditional pedagogy, the state of assessment, flaws in the accreditation process, and the state of educational research in the area. (EV)

  7. Eleven quick tips for architecting biomedical informatics workflows with cloud computing.

    PubMed

    Cole, Brian S; Moore, Jason H

    2018-03-01

    Cloud computing has revolutionized the development and operations of hardware and software across diverse technological arenas, yet academic biomedical research has lagged behind despite the numerous and weighty advantages that cloud computing offers. Biomedical researchers who embrace cloud computing can reap rewards in cost reduction, decreased development and maintenance workload, increased reproducibility, ease of sharing data and software, enhanced security, horizontal and vertical scalability, high availability, a thriving technology partner ecosystem, and much more. Despite these advantages that cloud-based workflows offer, the majority of scientific software developed in academia does not utilize cloud computing and must be migrated to the cloud by the user. In this article, we present 11 quick tips for architecting biomedical informatics workflows on compute clouds, distilling knowledge gained from experience developing, operating, maintaining, and distributing software and virtualized appliances on the world's largest cloud. Researchers who follow these tips stand to benefit immediately by migrating their workflows to cloud computing and embracing the paradigm of abstraction.

  8. Eleven quick tips for architecting biomedical informatics workflows with cloud computing

    PubMed Central

    Moore, Jason H.

    2018-01-01

    Cloud computing has revolutionized the development and operations of hardware and software across diverse technological arenas, yet academic biomedical research has lagged behind despite the numerous and weighty advantages that cloud computing offers. Biomedical researchers who embrace cloud computing can reap rewards in cost reduction, decreased development and maintenance workload, increased reproducibility, ease of sharing data and software, enhanced security, horizontal and vertical scalability, high availability, a thriving technology partner ecosystem, and much more. Despite these advantages that cloud-based workflows offer, the majority of scientific software developed in academia does not utilize cloud computing and must be migrated to the cloud by the user. In this article, we present 11 quick tips for architecting biomedical informatics workflows on compute clouds, distilling knowledge gained from experience developing, operating, maintaining, and distributing software and virtualized appliances on the world’s largest cloud. Researchers who follow these tips stand to benefit immediately by migrating their workflows to cloud computing and embracing the paradigm of abstraction. PMID:29596416

  9. MONET: multidimensional radiative cloud scene model

    NASA Astrophysics Data System (ADS)

    Chervet, Patrick

    1999-12-01

    All cloud fields exhibit variable structures (bulges) and heterogeneities in water distributions. With the development of multidimensional radiative models by the atmospheric community, it is now possible to describe horizontal heterogeneities of the cloud medium and to study their influence on radiative quantities. We have developed a complete radiative cloud scene generator, called MONET (French acronym for MOdelisation des Nuages En Tridim.), to compute radiative cloud scenes from visible to infrared wavelengths for various viewing and solar conditions, different spatial scales, and various locations on the Earth. MONET is composed of two parts: a cloud medium generator (CSSM -- Cloud Scene Simulation Model) developed by the Air Force Research Laboratory, and a multidimensional radiative code (SHDOM -- Spherical Harmonic Discrete Ordinate Method) developed at the University of Colorado by Evans. MONET computes images for several scenarios defined by user inputs: date, location, viewing angles, wavelength, spatial resolution, meteorological conditions (atmospheric profiles, cloud types), and so on. For the same cloud scene, we can output different viewing conditions and/or various wavelengths. Shadowing effects on clouds or the ground are taken into account. This code is useful for studying heterogeneity effects on satellite data for various cloud types and spatial resolutions, and for determining specifications of new imaging sensors.

  10. The impact of horizontal heterogeneities, cloud fraction, and cloud dynamics on warm cloud effective radii and liquid water path from CERES-like Aqua MODIS retrievals

    NASA Astrophysics Data System (ADS)

    Painemal, D.; Minnis, P.; Sun-Mack, S.

    2013-05-01

    The impact of horizontal heterogeneities, liquid water path (LWP from AMSR-E), and cloud fraction (CF) on MODIS cloud effective radius (re), retrieved from the 2.1 μm (re2.1) and 3.8 μm (re3.8) channels, is investigated for warm clouds over the southeast Pacific. Values of re retrieved using the CERES Edition 4 algorithms are averaged at the CERES footprint resolution (~ 20 km), while heterogeneities (Hσ) are calculated as the ratio between the standard deviation and mean 0.64 μm reflectance. The value of re2.1 strongly depends on CF, with magnitudes up to 5 μm larger than those for overcast scenes, whereas re3.8 remains insensitive to CF. For cloudy scenes, both re2.1 and re3.8 increase with Hσ for any given AMSR-E LWP, but re2.1 changes more than for re3.8. Additionally, re3.8 - re2.1 differences are positive (< 1 μm) for homogeneous scenes (Hσ < 0.2) and LWP > 50 g m-2, and negative (up to -4 μm) for larger Hσ. Thus, re3.8 - re2.1 differences are more likely to reflect biases associated with cloud heterogeneities rather than information about the cloud vertical structure. The consequences for MODIS LWP are also discussed.

  11. Are Cloud Environments Ready for Scientific Applications?

    NASA Astrophysics Data System (ADS)

    Mehrotra, P.; Shackleford, K.

    2011-12-01

    Cloud computing environments are becoming widely available both in the commercial and government sectors. They provide flexibility to rapidly provision resources in order to meet dynamic and changing computational needs without the customers incurring capital expenses and/or requiring technical expertise. Clouds also provide reliable access to resources even though the end-user may not have in-house expertise for acquiring or operating such resources. Consolidation and pooling in a cloud environment allow organizations to achieve economies of scale in provisioning or procuring computing resources and services. Because of these and other benefits, many businesses and organizations are migrating their business applications (e.g., websites, social media, and business processes) to cloud environments-evidenced by the commercial success of offerings such as the Amazon EC2. In this paper, we focus on the feasibility of utilizing cloud environments for scientific workloads and workflows particularly of interest to NASA scientists and engineers. There is a wide spectrum of such technical computations. These applications range from small workstation-level computations to mid-range computing requiring small clusters to high-performance simulations requiring supercomputing systems with high bandwidth/low latency interconnects. Data-centric applications manage and manipulate large data sets such as satellite observational data and/or data previously produced by high-fidelity modeling and simulation computations. Most of the applications are run in batch mode with static resource requirements. However, there do exist situations that have dynamic demands, particularly ones with public-facing interfaces providing information to the general public, collaborators and partners, as well as to internal NASA users. In the last few months we have been studying the suitability of cloud environments for NASA's technical and scientific workloads. We have ported several applications to multiple cloud environments including NASA's Nebula environment, Amazon's EC2, Magellan at NERSC, and SGI's Cyclone system. We critically examined the performance of the applications on these systems. We also collected information on the usability of these cloud environments. In this talk we will present the results of our study focusing on the efficacy of using clouds for NASA's scientific applications.

  12. Cirrus Heterogeneity Effects on Cloud Optical Properties Retrieved with an Optimal Estimation Method from MODIS VIS to TIR Channels.

    NASA Technical Reports Server (NTRS)

    Fauchez, T.; Platnick, S.; Meyer, K.; Sourdeval, O.; Cornet, C.; Zhang, Z.; Szczap, F.

    2016-01-01

    This study presents preliminary results on the effect of cirrus heterogeneities on top-of-atmosphere (TOA) simulated radiances or reflectances for MODIS channels centered at 0.86, 2.21, 8.56, 11.01 and 12.03 micrometers, and on cloud optical properties retrieved with a research-level optimal estimation method (OEM). Synthetic cirrus cloud fields are generated using a 3D cloud generator (3DCLOUD) and radiances/reflectances are simulated using a 3D radiative transfer code (3DMCPOL). We find significant differences between the heterogeneity effects on visible and near-infrared (VNIR) radiances and on thermal infrared (TIR) radiances. However, when both wavelength ranges are combined, heterogeneity effects are dominated by the VNIR horizontal radiative transport effect. As a result, small optical thicknesses are overestimated and large ones are underestimated. Retrieved effective diameters are found to be only slightly affected, in contrast to retrievals using TIR channels only.

  13. Boundary-layer cumulus over heterogeneous landscapes: A subgrid GCM parameterization. Final report, December 1991--November 1995

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stull, R.B.; Tripoli, G.

    1996-01-08

    The authors developed single-column parameterizations for subgrid boundary-layer cumulus clouds. These give cloud onset time, cloud coverage, and ensemble distributions of cloud-base altitudes, cloud-top altitudes, cloud thickness, and the characteristics of cloudy and clear updrafts. They tested and refined the parameterizations against archived data from Spring and Summer 1994 and 1995 intensive operation periods (IOPs) at the Southern Great Plains (SGP) ARM CART site near Lamont, Oklahoma. The authors also found that: cloud-base altitudes are not uniform over a heterogeneous surface; tops of some cumulus clouds can be below the base-altitudes of other cumulus clouds; there is an overlap region near cloud base where clear and cloudy updrafts exist simultaneously; and the lognormal distribution of cloud sizes scales to the JFD of surface layer air and to the shape of the temperature profile above the boundary layer.

  14. Evaluating the Influence of the Client Behavior in Cloud Computing.

    PubMed

    Souza Pardo, Mário Henrique; Centurion, Adriana Molina; Franco Eustáquio, Paulo Sérgio; Carlucci Santana, Regina Helena; Bruschi, Sarita Mazzini; Santana, Marcos José

    2016-01-01

    This paper proposes a novel approach for the implementation of simulation scenarios, providing a client entity for cloud computing systems. The client entity allows the creation of scenarios in which the client behavior has an influence on the simulation, making the results more realistic. The proposed client entity is based on several characteristics that affect the performance of a cloud computing system, including different modes of submission and their behavior when the waiting time between requests (think time) is considered. The proposed characterization of the client enables the sending of either individual requests or group of Web services to scenarios where the workload takes the form of bursts. The client entity is included in the CloudSim, a framework for modelling and simulation of cloud computing. Experimental results show the influence of the client behavior on the performance of the services executed in a cloud computing system.
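
    A minimal sketch of a client entity with think time and burst submission, in the spirit of (but not taken from) the proposed CloudSim extension:

      # Illustrative client entity: requests are issued singly or in
      # bursts, separated by an exponentially distributed think time.
      # Parameters are assumptions, not the paper's configuration.
      import random

      def client_arrivals(n_requests, mean_think_s=2.0, burst_size=1):
          t, events = 0.0, []
          while len(events) < n_requests:
              t += random.expovariate(1.0 / mean_think_s)  # think time
              for _ in range(burst_size):
                  events.append(t)                         # burst at time t
          return events[:n_requests]

      random.seed(7)
      print(["%.2f" % t for t in client_arrivals(5, burst_size=2)])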

  15. Evaluating the Influence of the Client Behavior in Cloud Computing

    PubMed Central

    Centurion, Adriana Molina; Franco Eustáquio, Paulo Sérgio; Carlucci Santana, Regina Helena; Bruschi, Sarita Mazzini; Santana, Marcos José

    2016-01-01

    This paper proposes a novel approach for the implementation of simulation scenarios, providing a client entity for cloud computing systems. The client entity allows the creation of scenarios in which the client behavior has an influence on the simulation, making the results more realistic. The proposed client entity is based on several characteristics that affect the performance of a cloud computing system, including different modes of submission and their behavior when the waiting time between requests (think time) is considered. The proposed characterization of the client enables the sending of either individual requests or group of Web services to scenarios where the workload takes the form of bursts. The client entity is included in the CloudSim, a framework for modelling and simulation of cloud computing. Experimental results show the influence of the client behavior on the performance of the services executed in a cloud computing system. PMID:27441559

  16. Intelligent Agent Transparency in Human-Agent Teaming for Multi-UxV Management.

    PubMed

    Mercado, Joseph E; Rupp, Michael A; Chen, Jessie Y C; Barnes, Michael J; Barber, Daniel; Procci, Katelyn

    2016-05-01

    We investigated the effects of level of agent transparency on operator performance, trust, and workload in a context of human-agent teaming for multirobot management. Participants played the role of a heterogeneous unmanned vehicle (UxV) operator and were instructed to complete various missions by giving orders to UxVs through a computer interface. An intelligent agent (IA) assisted the participant by recommending two plans, a top recommendation and a secondary recommendation, for every mission. A within-subjects design with three levels of agent transparency was employed in the present experiment. There were eight missions in each of three experimental blocks, grouped by level of transparency. During each experimental block, the IA was incorrect three out of eight times due to external information (e.g., commander's intent and intelligence). Operator performance, trust, workload, and usability data were collected. Results indicate that operator performance, trust, and perceived usability increased as a function of transparency level. Subjective and objective workload data indicate that participants' workload did not increase as a function of transparency. Furthermore, response time did not increase as a function of transparency. Unlike previous research, which showed that increased transparency resulted in increased performance and trust calibration at the cost of greater workload and longer response time, our results support the benefits of transparency for performance effectiveness without additional costs. The current results will facilitate the implementation of IAs in military settings and will provide useful data to the design of heterogeneous UxV teams. © 2016, Human Factors and Ergonomics Society.

  17. Modelling heterogeneous ice nucleation on mineral dust and soot with parameterizations based on laboratory experiments

    NASA Astrophysics Data System (ADS)

    Hoose, C.; Hande, L. B.; Mohler, O.; Niemand, M.; Paukert, M.; Reichardt, I.; Ullrich, R.

    2016-12-01

    Between 0 and -37°C, ice formation in clouds is triggered by aerosol particles acting as heterogeneous ice nuclei. At lower temperatures, heterogeneous ice nucleation on aerosols can occur at lower supersaturations than homogeneous freezing of solutes. In laboratory experiments, the ability of different aerosol species (e.g. desert dusts, soot, biological particles) to nucleate ice has been studied in detail and quantified via various theoretical or empirical parameterization approaches. For experiments in the AIDA cloud chamber, we have quantified the ice nucleation efficiency via a temperature- and supersaturation-dependent ice nucleation active site density. Here we present a new empirical parameterization scheme for immersion and deposition ice nucleation on desert dust and soot based on these experimental data. The application of this parameterization to the simulation of cirrus clouds, deep convective clouds and orographic clouds will be shown, including the extension of the scheme to the treatment of freezing of rain drops. The results are compared to other heterogeneous ice nucleation schemes. Furthermore, an aerosol-dependent parameterization of contact ice nucleation is presented.
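
    The ice nucleation active site (INAS) density approach is commonly written as a temperature-dependent density n_s(T), with the frozen fraction following from the particle surface area; the sketch below uses illustrative coefficients, not the AIDA-derived fits:

      # INAS-density sketch: n_s(T) = exp(a*(T0 - T) + b) [m^-2], and
      # frozen fraction f = 1 - exp(-n_s * A) for particles of surface
      # area A. Coefficients a, b and the surface area are illustrative,
      # NOT the fits derived from the AIDA experiments.
      import math

      A_PARTICLE = 1e-12   # m^2, assumed dust surface area per particle
      T0 = 273.15

      def n_s(T_kelvin, a=0.5, b=8.0):
          return math.exp(a * (T0 - T_kelvin) + b)

      def frozen_fraction(T_kelvin):
          return 1.0 - math.exp(-n_s(T_kelvin) * A_PARTICLE)

      for T in (258.15, 253.15, 248.15):
          print(f"{T - T0:+.0f} C: frozen fraction {frozen_fraction(T):.3g}")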

  18. The vacuum platform

    NASA Astrophysics Data System (ADS)

    McNab, A.

    2017-10-01

    This paper describes GridPP’s Vacuum Platform for managing virtual machines (VMs), which has been used to run production workloads for WLCG and other HEP experiments. The platform provides a uniform interface between VMs and the sites they run at, whether the site is organised as an Infrastructure-as-a-Service cloud system such as OpenStack, or an Infrastructure-as-a-Client system such as Vac. The paper describes our experience in using this platform, in developing and operating VM lifecycle managers Vac and Vcycle, and in interacting with VMs provided by LHCb, ATLAS, ALICE, CMS, and the GridPP DIRAC service to run production workloads.

  19. Trust Model to Enhance Security and Interoperability of Cloud Environment

    NASA Astrophysics Data System (ADS)

    Li, Wenjuan; Ping, Lingdi

    Trust is one of the most important means to improve security and enable interoperability of current heterogeneous independent cloud platforms. This paper first analyzes several trust models used in large distributed environments and then introduces a novel cloud trust model to solve security issues in cross-cloud environments, in which cloud customers can choose different providers' services and resources in heterogeneous domains can cooperate. The model is domain-based: it divides one cloud provider's resource nodes into the same domain and sets a trust agent. It distinguishes two different roles, cloud customer and cloud server, and designs different strategies for them. In our model, trust recommendation is treated as one type of cloud service, just like computation or storage. The model achieves both identity authentication and behavior authentication. The results of emulation experiments show that the proposed model can efficiently and safely construct trust relationships in cross-cloud environments.
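
    The paper's exact formulas are not given in the abstract; below is a generic sketch of the trust arithmetic such domain-based models typically use, blending direct experience with recommendations relayed by domain trust agents (the weights are hypothetical):

      # Generic domain-based trust aggregation (illustrative, not the
      # paper's formula): overall trust blends direct interaction
      # history with recommendations relayed by each domain's trust agent.

      def update_direct(old, outcome, alpha=0.3):
          """Exponential update from a transaction outcome in [0, 1]."""
          return (1 - alpha) * old + alpha * outcome

      def overall_trust(direct, recommendations, w_direct=0.7):
          rec = sum(recommendations) / len(recommendations)
          return w_direct * direct + (1 - w_direct) * rec

      d = update_direct(0.6, outcome=1.0)        # a successful interaction
      print(overall_trust(d, [0.8, 0.5, 0.9]))   # ~0.72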

  20. The impact on UT/LS cirrus clouds in the CAM/CARMA model using a new interactive aerosol parameterization.

    NASA Astrophysics Data System (ADS)

    Maloney, C.; Toon, B.; Bardeen, C.

    2017-12-01

    Recent studies indicate that heterogeneous nucleation may play a large role in cirrus cloud formation in the UT/LS, a region previously thought to be dominated primarily by homogeneous nucleation. As a result, it is beneficial to ensure that general circulation models properly represent heterogeneous nucleation in ice cloud simulations. Our work strives towards addressing this issue in the NSF/DOE Community Earth System Model's atmospheric model, CAM. More specifically, we address the role of heterogeneous nucleation in the coupled sectional microphysics cloud model, CARMA. Currently, our CAM/CARMA cirrus model only performs homogeneous ice nucleation while ignoring heterogeneous nucleation. In our work, we couple the CAM/CARMA cirrus model with the Modal Aerosol Model (MAM). By combining the aerosol model with CAM/CARMA, we can both account for heterogeneous nucleation and directly link the sulfates used for homogeneous nucleation to computed fields instead of the static field currently utilized. Here we present our initial results and compare our findings to observations from the long-running CALIPSO and MODIS satellite missions.

  1. Allowing for Horizontally Heterogeneous Clouds and Generalized Overlap in an Atmospheric GCM

    NASA Technical Reports Server (NTRS)

    Lee, D.; Oreopoulos, L.; Suarez, M.

    2011-01-01

    While fully accounting for 3D effects in Global Climate Models (GCMs) appears unrealistic at the present time for a variety of reasons, such as computational cost and the unavailability of 3D cloud structure in the models, the incorporation in radiation schemes of subgrid cloud variability described by one-point statistics is now considered feasible and is being actively pursued. This development gained momentum once it was demonstrated that CPU-intensive, spectrally explicit Independent Column Approximation (ICA) calculations can be substituted by stochastic Monte Carlo ICA (McICA) calculations in which spectral integration is accomplished in a manner that produces relatively benign random noise. The McICA approach has been implemented in Goddard's GEOS-5 atmospheric GCM as part of the implementation of the RRTMG radiation package. GEOS-5 with McICA and RRTMG can handle horizontally variable clouds whose overlap can be set via a cloud generator anywhere within the full spectrum between maximum and random, both in terms of cloud fraction and layer condensate distributions. In our presentation we will show radiative and other impacts of the combined horizontal and vertical cloud variability on multi-year simulations of an otherwise untuned GEOS-5 with fixed SSTs. Introducing cloud horizontal heterogeneity without changing the mean amounts of condensate reduces reflected solar and increases thermal radiation to space, but disproportionate changes may increase the radiative imbalance at TOA. The net radiation at TOA can be modulated by allowing the parameters of the generalized overlap and heterogeneity scheme to vary, a dependence whose behavior we will discuss. The sensitivity of the cloud radiative forcing to the parameters of cloud horizontal heterogeneity, and comparisons with CERES-derived forcing, will be shown.

  2. Federated data storage and management infrastructure

    NASA Astrophysics Data System (ADS)

    Zarochentsev, A.; Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Hristov, P.

    2016-10-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. Computing models for the High Luminosity LHC era anticipate storage needs growing by at least an order of magnitude; this will require new approaches to data storage organization and data handling. In our project we address the fundamental problem of designing an architecture to integrate distributed heterogeneous disk resources for LHC experiments and other data-intensive science applications, and to provide access to data from heterogeneous computing facilities. We have prototyped a federated storage for Russian T1 and T2 centers located in Moscow, St. Petersburg and Gatchina, as well as a Russian/CERN federation. We have conducted extensive tests of the underlying network infrastructure and storage endpoints with synthetic performance measurement tools as well as with HENP-specific workloads, including ones running on supercomputing platforms, cloud computing and the Grid for the ALICE and ATLAS experiments. We will present our current accomplishments with running LHC data analysis remotely and locally to demonstrate our ability to efficiently use federated data storage experiment-wide within national academic facilities for High Energy and Nuclear Physics, as well as for other data-intensive science applications, such as bioinformatics.

  3. Heterogeneous neuromuscular activation within human rectus femoris muscle during pedaling.

    PubMed

    Watanabe, Kohei; Kouzaki, Motoki; Moritani, Toshio

    2015-09-01

    We investigated the effect of workload and the use of pedal straps on the spatial distribution of neuromuscular activation within the rectus femoris (RF) muscle during pedaling movements. Eleven healthy men performed submaximal pedaling exercises on an electrically braked ergometer at different workloads and with or without pedal straps. During these tasks, surface electromyograms (SEMGs) were recorded from the RF using 36 electrode pairs, and central locus activation (CLA) was calculated along the longitudinal line of the muscle. CLA moved markedly, indicating changes in spatial distribution of SEMG within the muscle, during a crank cycle under all conditions (P < 0.05). There were significant differences in CLA among different workloads and between those with and without pedal straps (P < 0.05). These results suggest that neuromuscular activation within the RF is regulated regionally by changes in workload and the use of pedal straps during pedaling. © 2014 Wiley Periodicals, Inc.
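    Central locus activation is commonly defined as the amplitude-weighted mean electrode position along the muscle's longitudinal line; a minimal sketch under that assumption (the electrode positions and amplitudes below are hypothetical, not the study's data):

    ```python
    import numpy as np

    def central_locus_activation(positions_mm, rms_amplitudes):
        """Amplitude-weighted mean electrode position (one common CLA
        definition, assumed here rather than taken from the paper)."""
        p = np.asarray(positions_mm, dtype=float)
        a = np.asarray(rms_amplitudes, dtype=float)
        return np.sum(p * a) / np.sum(a)

    # Hypothetical 6-electrode column with activation shifted distally
    positions = [0, 10, 20, 30, 40, 50]           # electrode positions (mm)
    amplitudes = [0.1, 0.2, 0.3, 0.6, 0.9, 0.7]   # SEMG RMS per channel (mV)
    print(central_locus_activation(positions, amplitudes))  # ~34.6 mm
    ```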

  4. Dynamic Voltage-Frequency and Workload Joint Scaling Power Management for Energy Harvesting Multi-Core WSN Node SoC

    PubMed Central

    Li, Xiangyu; Xie, Nijie; Tian, Xinyue

    2017-01-01

    This paper proposes a scheduling and power management solution for an energy harvesting heterogeneous multi-core WSN node SoC such that the system continues to operate perennially and uses the harvested energy efficiently. The solution consists of a task scheduling algorithm oriented to heterogeneous multi-core systems and a low-complexity dynamic workload scaling and configuration optimization algorithm suitable for light-weight platforms. Moreover, since the power consumption of most WSN applications is data-dependent, we introduce a branch-handling mechanism into the solution as well. The experimental results show that the proposed algorithm can operate in real-time on a lightweight embedded processor (MSP430), and that it can make a system do more valuable work and use more than 99.9% of the power budget. PMID:28208730
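    As a loose illustration of joint voltage-frequency and workload scaling under a harvested-energy budget, the sketch below picks the lowest frequency level that meets the deadline and verifies the implied energy cost; the operating points, power figures, and fallback policy are assumptions, not the paper's algorithm.

    ```python
    # Hypothetical (frequency MHz, active power mW) operating points; with
    # these numbers, energy per cycle grows with frequency, so the lowest
    # deadline-feasible level is also the cheapest in energy.
    LEVELS = [(8, 3.0), (16, 7.5), (25, 14.0)]

    def pick_config(cycles_due, deadline_s, energy_budget_mj):
        """Return (freq, runtime, energy) or None if the workload must be
        scaled down (e.g., optional tasks dropped) before rescheduling."""
        for freq_mhz, power_mw in LEVELS:
            runtime = cycles_due / (freq_mhz * 1e6)   # seconds
            if runtime <= deadline_s:
                energy = power_mw * runtime           # mW * s = mJ
                if energy <= energy_budget_mj:
                    return freq_mhz, runtime, energy
                break   # even the cheapest feasible level busts the budget
        return None     # caller sheds optional workload and retries

    print(pick_config(cycles_due=2e6, deadline_s=0.5, energy_budget_mj=2.0))
    ```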

  5. Dynamic Voltage-Frequency and Workload Joint Scaling Power Management for Energy Harvesting Multi-Core WSN Node SoC.

    PubMed

    Li, Xiangyu; Xie, Nijie; Tian, Xinyue

    2017-02-08

    This paper proposes a scheduling and power management solution for an energy harvesting heterogeneous multi-core WSN node SoC such that the system continues to operate perennially and uses the harvested energy efficiently. The solution consists of a task scheduling algorithm oriented to heterogeneous multi-core systems and a low-complexity dynamic workload scaling and configuration optimization algorithm suitable for light-weight platforms. Moreover, since the power consumption of most WSN applications is data-dependent, we introduce a branch-handling mechanism into the solution as well. The experimental results show that the proposed algorithm can operate in real-time on a lightweight embedded processor (MSP430), and that it can make a system do more valuable work and use more than 99.9% of the power budget.

  6. Research on distributed heterogeneous data PCA algorithm based on cloud platform

    NASA Astrophysics Data System (ADS)

    Zhang, Jin; Huang, Gang

    2018-05-01

    Principal component analysis (PCA) of distributed heterogeneous data sets can overcome the limited scalability of centralized data processing. In order to reduce the intermediate data generated and the error components of distributed heterogeneous data sets, a principal component analysis algorithm for heterogeneous data sets on a cloud platform is proposed. The algorithm computes eigenvalues using Householder tridiagonalization and QR factorization, and calculates the error components of the heterogeneous databases associated with the public key to obtain the intermediate data set and the lost information. Experiments on distributed DBM heterogeneous data sets show that the method is feasible and reliable in terms of execution time and accuracy.
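    A minimal sketch of the distributed idea: each node ships sufficient statistics, and a coordinator assembles the global covariance and eigendecomposes it. The partitioning and names are illustrative, not the paper's algorithm; note that LAPACK's symmetric eigensolvers (behind np.linalg.eigh) begin with the same Householder tridiagonalization named above.

    ```python
    import numpy as np

    def local_summary(X):
        """Per-node sufficient statistics for the global covariance."""
        return X.shape[0], X.sum(axis=0), X.T @ X

    def global_pca(summaries, k=2):
        """Aggregate node summaries, then extract the top-k components."""
        n = sum(s[0] for s in summaries)
        total = sum(s[1] for s in summaries)
        gram = sum(s[2] for s in summaries)
        mean = total / n
        cov = (gram - n * np.outer(mean, mean)) / (n - 1)
        vals, vecs = np.linalg.eigh(cov)          # ascending eigenvalues
        return vals[::-1][:k], vecs[:, ::-1][:, :k]

    # Two hypothetical nodes holding horizontally partitioned data
    rng = np.random.default_rng(0)
    nodes = [rng.normal(size=(100, 5)), rng.normal(size=(80, 5))]
    vals, vecs = global_pca([local_summary(X) for X in nodes])
    ```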

  7. Radiative Impacts of Cloud Heterogeneity and Overlap in an Atmospheric General Circulation Model

    NASA Technical Reports Server (NTRS)

    Oreopoulos, L.; Lee, D.; Sud, Y. C.; Suarez, M. J.

    2012-01-01

    The radiative impacts of introducing horizontal heterogeneity of layer cloud condensate, and vertical overlap of condensate and cloud fraction, are examined with the aid of a new radiation package operating in the GEOS-5 Atmospheric General Circulation Model. The impacts are examined in terms of diagnostic top-of-the-atmosphere shortwave (SW) and longwave (LW) cloud radiative effect (CRE) calculations for a range of assumptions and parameter specifications about the overlap. The investigation is conducted for two distinct cloud schemes: the one that comes with the standard GEOS-5 distribution, and another which has recently been used experimentally for its enhanced cloud microphysical capabilities; both are coupled to a cloud generator allowing arbitrary cloud overlap specification. We find that cloud overlap radiative impacts are significantly stronger for the operational cloud scheme, for which a change of cloud fraction overlap from maximum-random to generalized results in global changes of SW and LW CRE of approximately 4 Watts per square meter, and zonal changes of up to approximately 10 Watts per square meter. This is because of fewer occurrences, compared to the other scheme, of large layer cloud fractions and of multi-layer situations with large numbers of atmospheric layers being simultaneously cloudy, conditions that make overlap details more important. The impact on CRE of the details of condensate distribution overlap is much weaker. Once generalized overlap is adopted, both cloud schemes are only modestly sensitive to the exact values of the overlap parameters. We also find that if one of the CRE components is overestimated and the other underestimated, both cannot be driven towards observed values by adjustments to cloud condensate heterogeneity and overlap alone.

  8. Prediction based proactive thermal virtual machine scheduling in green clouds.

    PubMed

    Kinger, Supriya; Kumar, Rajesh; Sharma, Anju

    2014-01-01

    Cloud computing has rapidly emerged as a widely accepted computing paradigm, but research on Cloud computing is still at an early stage. Cloud computing provides many advanced features, but it still has some shortcomings, such as relatively high operating costs and environmental hazards like increasing carbon footprints. These hazards can be reduced to some extent by efficient scheduling of Cloud resources. The working temperature at which a machine is currently running can be taken as a criterion for Virtual Machine (VM) scheduling. This paper proposes a new proactive technique that considers the current and maximum threshold temperatures of Server Machines (SMs) before making scheduling decisions, with the help of a temperature predictor, so that the maximum temperature is never reached. Different workload scenarios have been taken into consideration. The results obtained show that the proposed system is better than existing VM scheduling systems, which do not consider the current temperature of nodes before making scheduling decisions. Thus, a reduction in the need for cooling systems in a Cloud environment has been obtained and validated.
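    The abstract does not spell out the predictor or the placement rule, so the sketch below is only a schematic of proactive thermal placement: predict each server's temperature with the candidate VM added, and pick the machine with the most thermal headroom. The linear predictor and all numbers are hypothetical.

    ```python
    def predict_temp(current_c, added_load, alpha=0.8):
        """Toy predictor: temperature rise proportional to the load added."""
        return current_c + alpha * added_load

    def place_vm(servers, vm_load):
        """servers: dicts with 'name', 'temp_c', 'threshold_c'. Returns the
        server with the largest predicted headroom, or None to defer the VM
        rather than let any machine reach its maximum temperature."""
        candidates = []
        for s in servers:
            predicted = predict_temp(s["temp_c"], vm_load)
            if predicted < s["threshold_c"]:
                candidates.append((s["threshold_c"] - predicted, s["name"]))
        return max(candidates)[1] if candidates else None

    servers = [{"name": "sm1", "temp_c": 62.0, "threshold_c": 75.0},
               {"name": "sm2", "temp_c": 55.0, "threshold_c": 75.0}]
    print(place_vm(servers, vm_load=10.0))   # -> "sm2"
    ```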

  9. An Efficient Interactive Model for On-Demand Sensing-As-A-Services of Sensor-Cloud

    PubMed Central

    Dinh, Thanh; Kim, Younghan

    2016-01-01

    This paper proposes an efficient interactive model that enables the sensor-cloud to provide on-demand sensing services for multiple applications with different requirements at the same time. The interactive model is designed for both the cloud and sensor nodes to optimize the resource consumption of physical sensors, as well as the bandwidth consumption of sensing traffic. In the model, the sensor-cloud plays a key role in aggregating application requests to minimize the workloads required of constrained physical nodes while guaranteeing that the requirements of all applications are satisfied. Physical sensor nodes perform their sensing under the guidance of the sensor-cloud. Based on the interactions with the sensor-cloud, physical sensor nodes adapt their scheduling accordingly to minimize their energy consumption. Comprehensive experimental results show that our proposed system achieves a significant improvement in terms of the energy consumption of physical sensors, the bandwidth consumption from the sink node to the sensor-cloud, packet delivery latency, reliability, and scalability, compared to current approaches. Based on the obtained results, we discuss the economic benefits and how the proposed system enables a win-win model in the sensor-cloud. PMID:27367689
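    One simple way to aggregate application requests, sketched below, is to merge the per-application sampling periods into a single physical schedule whose period divides them all, so the constrained sensor samples once and the cloud fans the data out. The GCD-based merging is an illustrative assumption, not the paper's interactive model.

    ```python
    from functools import reduce
    from math import gcd

    def aggregate_periods(requested_ms):
        """One physical sampling period (ms) that satisfies every request:
        each requested period is a multiple of the GCD, so every
        application can be served from the same stream."""
        return reduce(gcd, requested_ms)

    apps = {"app-a": 400, "app-b": 600, "app-c": 1000}   # periods in ms
    print(aggregate_periods(list(apps.values())))        # -> 200
    ```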

  10. An Efficient Interactive Model for On-Demand Sensing-As-A-Services of Sensor-Cloud.

    PubMed

    Dinh, Thanh; Kim, Younghan

    2016-06-28

    This paper proposes an efficient interactive model that enables the sensor-cloud to provide on-demand sensing services for multiple applications with different requirements at the same time. The interactive model is designed for both the cloud and sensor nodes to optimize the resource consumption of physical sensors, as well as the bandwidth consumption of sensing traffic. In the model, the sensor-cloud plays a key role in aggregating application requests to minimize the workloads required of constrained physical nodes while guaranteeing that the requirements of all applications are satisfied. Physical sensor nodes perform their sensing under the guidance of the sensor-cloud. Based on the interactions with the sensor-cloud, physical sensor nodes adapt their scheduling accordingly to minimize their energy consumption. Comprehensive experimental results show that our proposed system achieves a significant improvement in terms of the energy consumption of physical sensors, the bandwidth consumption from the sink node to the sensor-cloud, packet delivery latency, reliability, and scalability, compared to current approaches. Based on the obtained results, we discuss the economic benefits and how the proposed system enables a win-win model in the sensor-cloud.

  11. The impact of horizontal heterogeneities, cloud fraction, and liquid water path on warm cloud effective radii from CERES-like Aqua MODIS retrievals

    NASA Astrophysics Data System (ADS)

    Painemal, D.; Minnis, P.; Sun-Mack, S.

    2013-10-01

    The impact of horizontal heterogeneities, liquid water path (LWP from AMSR-E), and cloud fraction (CF) on MODIS cloud effective radius (re), retrieved from the 2.1 μm (re2.1) and 3.8 μm (re3.8) channels, is investigated for warm clouds over the southeast Pacific. Values of re retrieved using the CERES algorithms are averaged at the CERES footprint resolution (∼20 km), while heterogeneities (Hσ) are calculated as the ratio between the standard deviation and mean 0.64 μm reflectance. The value of re2.1 strongly depends on CF, with magnitudes up to 5 μm larger than those for overcast scenes, whereas re3.8 remains insensitive to CF. For cloudy scenes, both re2.1 and re3.8 increase with Hσ for any given AMSR-E LWP, but re2.1 changes more than for re3.8. Additionally, re3.8-re2.1 differences are positive (<1 μm) for homogeneous scenes (Hσ < 0.2) and LWP > 45 gm-2, and negative (up to -4 μm) for larger Hσ. While re3.8-re2.1 differences in homogeneous scenes are qualitatively consistent with in situ microphysical observations over the region of study, negative differences - particularly evinced in mean regional maps - are more likely to reflect the dominant bias associated with cloud heterogeneities rather than information about the cloud vertical structure. The consequences for MODIS LWP are also discussed.
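    The heterogeneity index is defined directly in the record as the ratio of standard deviation to mean 0.64 μm reflectance within a footprint; a direct transcription (the pixel values are hypothetical):

    ```python
    import numpy as np

    def h_sigma(reflectance_064um):
        """H_sigma = std/mean of the 0.64-um reflectances inside one
        ~20 km CERES footprint, as defined in the abstract."""
        r = np.asarray(reflectance_064um, dtype=float)
        return r.std() / r.mean()

    footprint = [0.42, 0.55, 0.37, 0.61, 0.48, 0.29]  # MODIS pixel samples
    print(h_sigma(footprint))   # ~0.24, i.e. a fairly heterogeneous scene
    ```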

  12. High-resolution Monte Carlo simulation of flow and conservative transport in heterogeneous porous media: 2. Transport results

    USGS Publications Warehouse

    Naff, R.L.; Haley, D.F.; Sudicky, E.A.

    1998-01-01

    In this, the second of two papers concerned with the use of numerical simulation to examine flow and transport parameters in heterogeneous porous media via Monte Carlo methods, results from the transport aspect of these simulations are reported on. Transport simulations contained herein assume a finite pulse input of conservative tracer, and the numerical technique endeavors to realistically simulate tracer spreading as the cloud moves through a heterogeneous medium. Medium heterogeneity is limited to the hydraulic conductivity field, and generation of this field assumes that the hydraulic-conductivity process is second-order stationary. Methods of estimating cloud moments, and the interpretation of these moments, are discussed. Techniques for estimation of large-time macrodispersivities from cloud second-moment data, and for the approximation of the standard errors associated with these macrodispersivities, are also presented. These moment and macrodispersivity estimation techniques were applied to tracer clouds resulting from transport scenarios generated by specific Monte Carlo simulations. Where feasible, moments and macrodispersivities resulting from the Monte Carlo simulations are compared with first- and second-order perturbation analyses. Some limited results concerning the possible ergodic nature of these simulations, and the presence of non-Gaussian behavior of the mean cloud, are reported on as well.
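    A minimal sketch of moment-based estimation: compute the mean and variance of a tracer snapshot, then estimate the longitudinal macrodispersivity as half the growth rate of the variance with mean displacement. The finite-difference estimator and the snapshot values are stand-ins for the paper's more careful procedure and error analysis.

    ```python
    import numpy as np

    def cloud_moments(x, mass):
        """First and second spatial moments of one tracer-cloud snapshot."""
        x = np.asarray(x, dtype=float)
        m = np.asarray(mass, dtype=float)
        mean = np.sum(x * m) / np.sum(m)
        var = np.sum((x - mean) ** 2 * m) / np.sum(m)
        return mean, var

    def macrodispersivity(mean_displacements, variances):
        """A_L ~ 0.5 * d(sigma^2)/d(mean displacement), by differences."""
        return 0.5 * np.gradient(variances, mean_displacements)

    # Hypothetical snapshots of a conservative tracer cloud
    means = np.array([10.0, 20.0, 30.0, 40.0])      # mean displacement (m)
    variances = np.array([2.0, 5.5, 9.4, 13.5])     # second moment (m^2)
    print(macrodispersivity(means, variances))      # ~0.17-0.21 m
    ```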

  13. The effect of ice nuclei on a deep convective cloud in South China

    NASA Astrophysics Data System (ADS)

    Deng, Xin; Xue, Huiwen; Meng, Zhiyong

    2018-07-01

    This study uses the Weather Research and Forecasting Model to simulate a deep convective cloud under a relatively polluted condition in South China. Ice nuclei (IN) aerosols near the surface are effectively transported upwards to above the 0 °C level by the strong updrafts in the convective cloud. Four cases with initial surface IN aerosol concentrations of 1, 10, 100, and 1000 L-1 are simulated. All simulations reproduce the major characteristics of the deep convective cloud well in terms of evolution, spatial distribution, and track. IN aerosols have little effect on these macrophysical characteristics but can significantly affect ice formation. When IN concentration is increased, all heterogeneous nucleation modes are significantly enhanced, whereas the homogeneous freezing of cloud droplets is unchanged or weakened depending on the IN concentration and the development stages of the deep convective cloud. The homogeneous freezing of haze particles is generally not affected by increased IN but is slightly weakened in the extremely high IN case. As IN concentration is increased by 10 and 100 times, the enhanced heterogeneous nucleation is still not strong enough to compete with homogeneous freezing. Ice formation is hence still dominated by the homogeneous freezing of cloud droplets and haze particles in the layer of 9-14 km, where most of the ice crystals are produced. The microphysical properties are generally unaffected in all the stages of cloud evolution. As IN concentration is increased by 1000 times and heterogeneous nucleation is further enhanced, the homogeneous freezing of cloud droplets and haze particles dominates only in the mature and dissipating stages, leading to unaffected ice number mixing ratio in the anvil region (approximately above 9 km) for these two stages. However, in the developing stage, when the supply of cloud droplets is limited, the homogeneous freezing of cloud droplets is weakened or even suppressed due to the very strong competition for liquid water with heterogeneous nucleation, leading to significantly lower ice number mixing ratio in the anvil regions. In addition, the microphysical properties in the convective core regions below the cloud anvil (approximately below 9 km) are also affected in the case of 1000 L-1. The enhanced heterogeneous nucleation produces more ice crystals below 9 km, leading to a stronger conversion from ice crystals to snow particles, and hence higher number and mass mixing ratios of snow. The IN effect on the spatial distributions and temporal evolutions of the surface precipitation and updraft velocity is generally insignificant.

  14. Sensitivity Studies of Dust Ice Nuclei Effect on Cirrus Clouds with the Community Atmosphere Model CAM5

    NASA Technical Reports Server (NTRS)

    Liu, Xiaohong; Zhang, Kai; Jensen, Eric J.; Gettelman, Andrew; Barahona, Donifan; Nenes, Athanasios; Lawson, Paul

    2012-01-01

    In this study the effect of dust aerosol on upper tropospheric cirrus clouds through heterogeneous ice nucleation is investigated in the Community Atmospheric Model version 5 (CAM5) with two ice nucleation parameterizations. Both parameterizations consider homogeneous and heterogeneous nucleation and the competition between the two mechanisms in cirrus clouds, but differ significantly in the number concentration of heterogeneous ice nuclei (IN) from dust. Heterogeneous nucleation on dust aerosol reduces the occurrence frequency of homogeneous nucleation and thus the ice crystal number concentration in the Northern Hemisphere (NH) cirrus clouds compared to simulations with pure homogeneous nucleation. Global and annual mean shortwave and longwave cloud forcing are reduced by up to 2.0 ± 0.1 W m-2 (1σ uncertainty) and 2.4 ± 0.1 W m-2, respectively, due to the presence of dust IN, with a net cloud forcing change of -0.40 ± 0.20 W m-2. Comparison of model simulations with in situ aircraft data obtained in NH mid-latitudes suggests that homogeneous ice nucleation may play an important role in ice nucleation in these regions at temperatures of 205-230 K. However, simulations overestimate observed ice crystal number concentrations in the tropical tropopause regions at temperatures of 190-205 K, overestimate the frequency of occurrence of high ice crystal number concentrations (greater than 200 L-1), and underestimate the frequency of low ice crystal number concentrations (less than 30 L-1) at NH mid-latitudes. These results highlight the importance of quantifying the number concentrations and properties of heterogeneous IN (including dust aerosol) in the upper troposphere from the global perspective.

  15. ATLAS Cloud R&D

    NASA Astrophysics Data System (ADS)

    Panitkin, Sergey; Barreiro Megino, Fernando; Caballero Bejar, Jose; Benjamin, Doug; Di Girolamo, Alessandro; Gable, Ian; Hendrix, Val; Hover, John; Kucharczyk, Katarzyna; Medrano Llamas, Ramon; Love, Peter; Ohman, Henrik; Paterson, Michael; Sobie, Randall; Taylor, Ryan; Walker, Rodney; Zaytsev, Alexander; Atlas Collaboration

    2014-06-01

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R&D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R&D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate transparently various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R&D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R&D group has gained a significant insight into the cloud computing landscape and has identified points that still need to be addressed in order to fully utilize this technology. This contribution will explain the cloud integration models that are being evaluated and will discuss ATLAS' learning during the collaboration with leading commercial and academic cloud providers.

  16. Beating the tyranny of scale with a private cloud configured for Big Data

    NASA Astrophysics Data System (ADS)

    Lawrence, Bryan; Bennett, Victoria; Churchill, Jonathan; Juckes, Martin; Kershaw, Philip; Pepler, Sam; Pritchard, Matt; Stephens, Ag

    2015-04-01

    The Joint Analysis System, JASMIN, consists of five significant hardware components: a batch computing cluster, a hypervisor cluster, bulk disk storage, high performance disk storage, and access to a tape robot. Each of the computing clusters consists of a heterogeneous set of servers, supporting a range of possible data analysis tasks - and a unique network environment makes it relatively trivial to migrate servers between the two clusters. The high performance disk storage will include the world's largest (publicly visible) deployment of the Panasas parallel disk system. Initially deployed in April 2012, JASMIN has already undergone two major upgrades, culminating in a system which, by April 2015, will have in excess of 16 PB of disk and 4000 cores. Layered on the basic hardware are a range of services, ranging from managed services, such as the curated archives of the Centre for Environmental Data Archival or the data analysis environment for the National Centres for Atmospheric Science and Earth Observation, to a generic Infrastructure as a Service (IaaS) offering for the UK environmental science community. Here we present examples of some of the big data workloads being supported in this environment - ranging from data management tasks, such as checksumming 3 PB of data held in over one hundred million files, to science tasks, such as re-processing satellite observations with new algorithms, or calculating new diagnostics on petascale climate simulation outputs. We will demonstrate how the provision of a cloud environment closely coupled to a batch computing environment, all sharing the same high performance disk system, allows massively parallel processing without the necessity to shuffle data excessively - even as it supports many different virtual communities, each with guaranteed performance. We will discuss the advantages of having a heterogeneous range of servers with available memory from tens of GB at the low end to (currently) two TB at the high end. There are some limitations of the JASMIN environment: the high performance disk system is not fully available in the IaaS environment, and a planned ability to burst compute-heavy jobs into the public cloud is not yet fully operational. There are load balancing and performance issues that need to be understood. We will conclude with projections for future usage, and our plans to meet those requirements.

  17. Real-time video streaming in mobile cloud over heterogeneous wireless networks

    NASA Astrophysics Data System (ADS)

    Abdallah-Saleh, Saleh; Wang, Qi; Grecos, Christos

    2012-06-01

    Recently, the concept of Mobile Cloud Computing (MCC) has been proposed to offload the resource requirements in computational capabilities, storage and security from mobile devices into the cloud. Internet video applications such as real-time streaming are expected to be ubiquitously deployed and supported over the cloud for mobile users, who typically encounter a range of wireless networks of diverse radio access technologies during their roaming. However, real-time video streaming for mobile cloud users across heterogeneous wireless networks presents multiple challenges. The network-layer quality of service (QoS) provision to support high-quality mobile video delivery in this demanding scenario remains an open research question, and this in turn affects the application-level visual quality and impedes mobile users' perceived quality of experience (QoE). In this paper, we devise a framework to support real-time video streaming in this new mobile video networking paradigm and evaluate the performance of the proposed framework empirically through a lab-based yet realistic testing platform. One particular issue we focus on is the effect of users' mobility on the QoS of video streaming over the cloud. We design and implement a hybrid platform comprising a test-bed and an emulator, on which our concepts of mobile cloud computing, video streaming and heterogeneous wireless networks are implemented and integrated to allow the testing of our framework. As representative heterogeneous wireless networks, the popular WLAN (Wi-Fi) and MAN (WiMAX) networks are incorporated in order to evaluate the effects of handovers between these different radio access technologies. The H.264/AVC (Advanced Video Coding) standard is employed for real-time video streaming from a server to mobile users (client nodes) in the networks. Mobility support is introduced to enable a continuous streaming experience for a mobile user across the heterogeneous wireless network. Real-time video stream packets are captured for analytical purposes on the mobile user node. Experimental results are obtained and analysed. Future work is identified towards further improvement of the current design and implementation. With this new mobile video networking concept and paradigm implemented and evaluated, the results and observations obtained from this study would form the basis of a more in-depth, comprehensive understanding of the various challenges and opportunities in supporting high-quality real-time video streaming in mobile cloud over heterogeneous wireless networks.

  18. Measuring the effects of heterogeneity on distributed systems

    NASA Technical Reports Server (NTRS)

    El-Toweissy, Mohamed; Zeineldine, Osman; Mukkamala, Ravi

    1991-01-01

    Distributed computer systems in daily use are becoming more and more heterogeneous. Currently, many design and analysis studies of such systems assume homogeneity. This assumption of homogeneity has been mainly driven by the resulting simplicity in modeling and analysis. A simulation study is presented that investigated the effects of heterogeneity on scheduling algorithms for hard real-time distributed systems. In contrast to previous results indicating that random scheduling may be as good as a more complex scheduler, the scheduling algorithm studied here is shown to be consistently better than a random scheduler. This conclusion is more prevalent at high workloads as well as at high levels of heterogeneity.

  19. Study on Huizhou architecture of point cloud registration based on optimized ICP algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Runmei; Wu, Yulu; Zhang, Guangbin; Zhou, Wei; Tao, Yuqian

    2018-03-01

    Current point cloud registration software has high hardware requirements and a heavy workload and relies on multiple interactive definitions, while the source code of software with better processing results is not open. This paper therefore proposes a two-step registration method based on a normal vector distribution feature and a coarse-feature-based iterative closest point (ICP) algorithm. The method combines the fast point feature histogram (FPFH) algorithm, defines the adjacency region of the point cloud and the calculation model of the normal vector distribution, sets up a local coordinate system for each key point, and obtains the transformation matrix to complete rough registration; the rough registration results of the two stations are then accurately registered using the ICP algorithm. Experimental results show that, compared with the traditional ICP algorithm, the method used in this paper has obvious time and precision advantages for large point clouds.
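    For the fine-registration step, a compact point-to-point ICP is sketched below in NumPy/SciPy; the FPFH-based coarse alignment that seeds it is assumed already done, and this is a generic textbook ICP rather than the paper's optimized variant.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def icp_point_to_point(source, target, iters=50, tol=1e-8):
        """Refine a coarse alignment: returns rotation R and translation t
        such that R @ p + t maps source points onto the target cloud."""
        tree = cKDTree(target)
        src = source.copy()
        R_total, t_total = np.eye(3), np.zeros(3)
        prev_err = np.inf
        for _ in range(iters):
            _, idx = tree.query(src)               # nearest-neighbour matches
            matched = target[idx]
            mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
            H = (src - mu_s).T @ (matched - mu_t)  # cross-covariance
            U, _, Vt = np.linalg.svd(H)
            if np.linalg.det(Vt.T @ U.T) < 0:      # guard against reflection
                Vt[-1] *= -1
            R = Vt.T @ U.T                         # Kabsch rotation
            t = mu_t - R @ mu_s
            src = src @ R.T + t
            R_total, t_total = R @ R_total, R @ t_total + t
            err = np.mean(np.linalg.norm(src - matched, axis=1))
            if abs(prev_err - err) < tol:
                break
            prev_err = err
        return R_total, t_total
    ```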

  20. The implications of dust ice nuclei effect on cloud top temperature in a complex mesoscale convective system.

    PubMed

    Li, Rui; Dong, Xue; Guo, Jingchao; Fu, Yunfei; Zhao, Chun; Wang, Yu; Min, Qilong

    2017-10-23

    Mineral dust is the most important natural source of atmospheric ice nuclei (IN), which may significantly mediate the properties of ice clouds through heterogeneous nucleation and have crucial impacts on the hydrological and energy cycles. The potential dust IN effect on cloud top temperature (CTT) in a well-developed mesoscale convective system (MCS) was studied using both satellite observations and cloud resolving model (CRM) simulations. We combined satellite observations from a passive spectrometer, active cloud radar, and lidar with wind field simulations from the CRM to identify the places where ice cloud mixed with dust particles. For a given ice water path, the CTT of dust-mixed cloud is warmer than that of relatively pristine cloud. The probability distribution function (PDF) of CTT for dust-mixed clouds shifted to the warmer end and showed two peaks, at about -45 °C and -25 °C. The PDF for relatively pristine clouds showed only one peak, at -55 °C. Cloud simulations with different microphysical schemes agreed well with each other and showed better agreement with satellite observations in pristine clouds, but they showed large discrepancies in dust-mixed clouds. Some microphysical schemes failed to predict the warm peak of CTT related to heterogeneous ice formation.

  1. Understanding Ice Supersaturation, Particle Growth, and Number Concentration in Cirrus Clouds

    NASA Technical Reports Server (NTRS)

    Comstock, Jennifer M.; Lin, Ruei-Fong; Starr, David O'C.; Yang, Ping

    2008-01-01

    Many factors control the ice supersaturation and microphysical properties in cirrus clouds. We explore the effects of dynamic forcing, ice nucleation mechanisms, and ice crystal growth rate on the evolution and distribution of water vapor and cloud properties in nighttime cirrus clouds using a one-dimensional cloud model with bin microphysics and remote sensing measurements obtained at the Department of Energy's Atmospheric Radiation Measurement (ARM) Climate Research Facility located near Lamont, OK. We forced the model using both large-scale vertical ascent and, for the first time, mean mesoscale velocity derived from radar Doppler velocity measurements. Both heterogeneous and homogeneous nucleation processes are explored, where a classical theory heterogeneous scheme is compared with empirical representations. We evaluated model simulations by examining both bulk cloud properties and distributions of measured radar reflectivity, lidar extinction, and water vapor profiles, as well as retrieved cloud microphysical properties. Our results suggest that mesoscale variability is the primary mechanism needed to reproduce observed quantities. Model sensitivity to the ice growth rate is also investigated. The most realistic simulations as compared with observations are forced using mesoscale waves, include fast ice crystal growth, and initiate ice by either homogeneous or heterogeneous nucleation. Simulated ice crystal number concentrations (tens to hundreds of particles per liter) are typically two orders of magnitude smaller than previously published results based on aircraft measurements in cirrus clouds, although higher concentrations are possible in isolated pockets within the nucleation zone.

  2. Prediction Based Proactive Thermal Virtual Machine Scheduling in Green Clouds

    PubMed Central

    Kinger, Supriya; Kumar, Rajesh; Sharma, Anju

    2014-01-01

    Cloud computing has rapidly emerged as a widely accepted computing paradigm, but research on Cloud computing is still at an early stage. Cloud computing provides many advanced features, but it still has some shortcomings, such as relatively high operating costs and environmental hazards like increasing carbon footprints. These hazards can be reduced to some extent by efficient scheduling of Cloud resources. The working temperature at which a machine is currently running can be taken as a criterion for Virtual Machine (VM) scheduling. This paper proposes a new proactive technique that considers the current and maximum threshold temperatures of Server Machines (SMs) before making scheduling decisions, with the help of a temperature predictor, so that the maximum temperature is never reached. Different workload scenarios have been taken into consideration. The results obtained show that the proposed system is better than existing VM scheduling systems, which do not consider the current temperature of nodes before making scheduling decisions. Thus, a reduction in the need for cooling systems in a Cloud environment has been obtained and validated. PMID:24737962

  3. Competition for water vapour results in suppression of ice formation in mixed-phase clouds

    NASA Astrophysics Data System (ADS)

    Simpson, Emma L.; Connolly, Paul J.; McFiggans, Gordon

    2018-05-01

    The formation of ice in clouds can initiate precipitation and influence a cloud's reflectivity and lifetime, affecting climate to a highly uncertain degree. Nucleation of ice at elevated temperatures requires an ice nucleating particle (INP), which results in so-called heterogeneous freezing. Previously reported measurements of a particle's ability to nucleate ice have been made in the absence of other aerosols, which act as cloud condensation nuclei (CCN) and are ubiquitous in the atmosphere. Here we show that CCN can outcompete INPs for available water vapour, thus suppressing ice formation, which has the potential to significantly affect the Earth's radiation budget. The magnitude of this suppression is shown to be dependent on the mass of condensed water required for freezing. We also show that ice formation in a state-of-the-art cloud parcel model is strongly dependent on which of the previously hypothesised criteria for heterogeneous freezing is selected. We have developed an alternative criterion which agrees well with observations from cloud chamber experiments. This study demonstrates the dominant role that competition for water vapour can play in ice formation, highlighting both a need for clarity in the requirements for heterogeneous freezing and for measurements under atmospherically appropriate conditions.

  4. An Efficient Virtual Machine Consolidation Scheme for Multimedia Cloud Computing.

    PubMed

    Han, Guangjie; Que, Wenhui; Jia, Gangyong; Shu, Lei

    2016-02-18

    Cloud computing has innovated the IT industry in recent years, as it can deliver subscription-based services to users in the pay-as-you-go model. Meanwhile, multimedia cloud computing is emerging based on cloud computing to provide a variety of media services on the Internet. However, with the growing popularity of multimedia cloud computing, its large energy consumption not only contributes to greenhouse gas emissions but also raises cloud users' costs. Therefore, multimedia cloud providers should try to minimize their energy consumption as much as possible while satisfying the consumers' resource requirements and guaranteeing quality of service (QoS). In this paper, we propose a remaining utilization-aware (RUA) algorithm for virtual machine (VM) placement, and a power-aware algorithm (PA) to find proper hosts to shut down for energy saving. These two algorithms have been combined and applied to cloud data centers to complete the process of VM consolidation. Simulation results show that there exists a trade-off between the cloud data center's energy consumption and service-level agreement (SLA) violations. Besides, the RUA algorithm is able to deal with variable workloads to prevent hosts from overloading after VM placement and to reduce SLA violations dramatically.
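    A schematic of the two steps named above, with hypothetical thresholds and scoring (the actual RUA and PA algorithms are more elaborate): place each VM on the host that keeps the most utilization in reserve, then flag nearly idle hosts for shutdown.

    ```python
    def rua_place(hosts, vm_demand, headroom=0.1):
        """Remaining-utilization-aware placement: pick the host with the
        most capacity left after hosting the VM, keeping a reserve so the
        host is unlikely to overload when the workload varies."""
        best, best_left = None, -1.0
        for h in hosts:
            left = h["capacity"] - h["used"] - vm_demand
            if left >= headroom * h["capacity"] and left > best_left:
                best, best_left = h, left
        if best is not None:
            best["used"] += vm_demand
        return best

    def pa_shutdown(hosts, idle_frac=0.05):
        """Power-aware step: nearly idle hosts become shutdown candidates."""
        return [h for h in hosts if h["used"] <= idle_frac * h["capacity"]]

    hosts = [{"name": "h1", "capacity": 1.0, "used": 0.6},
             {"name": "h2", "capacity": 1.0, "used": 0.2}]
    print(rua_place(hosts, 0.3)["name"])   # -> "h2" (more remaining room)
    ```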

  5. An Efficient Virtual Machine Consolidation Scheme for Multimedia Cloud Computing

    PubMed Central

    Han, Guangjie; Que, Wenhui; Jia, Gangyong; Shu, Lei

    2016-01-01

    Cloud computing has innovated the IT industry in recent years, as it can deliver subscription-based services to users in the pay-as-you-go model. Meanwhile, multimedia cloud computing is emerging based on cloud computing to provide a variety of media services on the Internet. However, with the growing popularity of multimedia cloud computing, its large energy consumption not only contributes to greenhouse gas emissions but also raises cloud users' costs. Therefore, multimedia cloud providers should try to minimize their energy consumption as much as possible while satisfying the consumers' resource requirements and guaranteeing quality of service (QoS). In this paper, we propose a remaining utilization-aware (RUA) algorithm for virtual machine (VM) placement, and a power-aware algorithm (PA) to find proper hosts to shut down for energy saving. These two algorithms have been combined and applied to cloud data centers to complete the process of VM consolidation. Simulation results show that there exists a trade-off between the cloud data center's energy consumption and service-level agreement (SLA) violations. Besides, the RUA algorithm is able to deal with variable workloads to prevent hosts from overloading after VM placement and to reduce SLA violations dramatically. PMID:26901201

  6. Workflow Management Systems for Molecular Dynamics on Leadership Computers

    NASA Astrophysics Data System (ADS)

    Wells, Jack; Panitkin, Sergey; Oleynik, Danila; Jha, Shantenu

    Molecular Dynamics (MD) simulations play an important role in a range of disciplines from Material Science to Biophysical systems and account for a large fraction of cycles consumed on computing resources. Increasingly, science problems require the successful execution of "many" MD simulations as opposed to a single MD simulation. There is a need to provide scalable and flexible approaches to the execution of the workload. We present preliminary results on the Titan computer at the Oak Ridge Leadership Computing Facility that demonstrate a general capability to manage workload execution agnostic of a specific MD simulation kernel or execution pattern, and in a manner that integrates disparate grid-based and supercomputing resources. Our results build upon our extensive experience of distributed workload management in the high-energy physics ATLAS project using PanDA (Production and Distributed Analysis System), coupled with recent conceptual advances in our understanding of workload management on heterogeneous resources. We will discuss how we will generalize these initial capabilities towards a more production-level service on DOE leadership resources. This research is sponsored by US DOE/ASCR and used resources of the OLCF computing facility.

  7. Dynamical States of Low Temperature Cirrus

    NASA Technical Reports Server (NTRS)

    Barahona, D.; Nenes, A.

    2011-01-01

    Low ice crystal concentration and sustained in-cloud supersaturation, commonly found in cloud observations at low temperature, challenge our understanding of cirrus formation. Heterogeneous freezing from effloresced ammonium sulfate, glassy aerosol, dust and black carbon are proposed to cause these phenomena; this requires low updrafts for cirrus characteristics to agree with observations and is at odds with the gravity wave spectrum in the upper troposphere. Background temperature fluctuations however can establish a dynamical equilibrium between ice production and sedimentation loss (as opposed to ice crystal formation during the first stages of cloud evolution and subsequent slow cloud decay) that explains low temperature cirrus properties. This newly-discovered state is favored at low temperatures and does not require heterogeneous nucleation to occur (the presence of ice nuclei can however facilitate its onset). Our understanding of cirrus clouds and their role in anthropogenic climate change is reshaped, as the type of dynamical forcing will set these clouds in one of two preferred microphysical regimes with very different susceptibility to aerosol.

  8. Impact of spatial resolution on cirrus infrared satellite retrievals in the presence of cloud heterogeneity

    NASA Astrophysics Data System (ADS)

    Fauchez, T.; Platnick, S. E.; Meyer, K.; Zhang, Z.; Cornet, C.; Szczap, F.; Dubuisson, P.

    2015-12-01

    Cirrus clouds are an important part of the Earth's radiation budget, but an accurate assessment of their role remains highly uncertain. Cirrus optical properties such as Cloud Optical Thickness (COT) and ice crystal effective particle size are often retrieved with a combination of Visible/Near InfraRed (VNIR) and ShortWave-InfraRed (SWIR) reflectance channels. Alternatively, Thermal InfraRed (TIR) techniques, such as the Split Window Technique (SWT), have demonstrated better accuracy for effective radius retrievals in thin cirrus with small effective radii. However, current global operational algorithms for both retrieval methods assume that cloudy pixels are horizontally homogeneous (Plane Parallel Approximation (PPA)) and independent (Independent Pixel Approximation (IPA)). The impact of these approximations on ice cloud retrievals needs to be understood and, as far as possible, corrected. Horizontal heterogeneity effects in the TIR spectrum are dominated by the PPA bias, which primarily depends on the COT subpixel heterogeneity; for solar reflectance channels, in addition to the PPA bias, the IPA can lead to significant retrieval errors due to significant horizontal photon transport between cloudy columns, as well as brightening and shadowing effects that are more difficult to quantify. The TIR range is thus particularly relevant for characterizing thin cirrus clouds as accurately as possible. Heterogeneity effects in the TIR are evaluated as a function of spatial resolution in order to estimate the optimal spatial resolution for TIR retrieval applications. These investigations are performed using a cirrus 3D cloud generator (3DCloud), a 3D radiative transfer code (3DMCPOL), and two retrieval algorithms, namely the operational MODIS retrieval algorithm (MOD06) and a research-level SWT algorithm.

  9. Evolution of the ATLAS PanDA workload management system for exascale computational science

    NASA Astrophysics Data System (ADS)

    Maeno, T.; De, K.; Klimentov, A.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.; Yu, D.; Atlas Collaboration

    2014-06-01

    An important foundation underlying the impressive success of data processing and analysis in the ATLAS experiment [1] at the LHC [2] is the Production and Distributed Analysis (PanDA) workload management system [3]. PanDA was designed specifically for ATLAS and proved to be highly successful in meeting all the distributed computing needs of the experiment. However, the core design of PanDA is not experiment specific. The PanDA workload management system is capable of meeting the needs of other data intensive scientific applications. Alpha-Magnetic Spectrometer [4], an astro-particle experiment on the International Space Station, and the Compact Muon Solenoid [5], an LHC experiment, have successfully evaluated PanDA and are pursuing its adoption. In this paper, a description of the new program of work to develop a generic version of PanDA will be given, as well as the progress in extending PanDA's capabilities to support supercomputers and clouds and to leverage intelligent networking. PanDA has demonstrated at a very large scale the value of automated dynamic brokering of diverse workloads across distributed computing resources. The next generation of PanDA will allow other data-intensive sciences and a wider exascale community employing a variety of computing platforms to benefit from ATLAS' experience and proven tools.

  10. Flavivirus structural heterogeneity: implications for cell entry.

    PubMed

    Rey, Félix A; Stiasny, Karin; Heinz, Franz X

    2017-06-01

    The explosive spread of Zika virus is the most recent example of the threat posed to human health by flaviviruses. High-resolution structures are available for several of these arthropod-borne viruses, revealing alternative icosahedral organizations of immature and mature virions. Incomplete proteolytic maturation, however, results in a cloud of highly heterogeneous mosaic particles. This heterogeneity is further expanded by the dynamic behavior of the viral envelope glycoproteins. The ensemble of heterogeneous and dynamic infectious particles circulating in infected hosts offers a range of alternative possible receptor interaction sites at their surfaces, potentially contributing to the broad flavivirus host range and variation in tissue tropism. The potential synergy between heterogeneous particles in the circulating cloud thus provides an additional dimension to understanding the unanticipated properties of Zika virus in its recent outbreaks. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Accounting for Heterogeneous-Phase Chemistry in Air Quality Models - Research Needs and Applications

    EPA Science Inventory

    Understanding the extent to which heterogeneous chemical reactions affect the burden and distribution of atmospheric pollutants is important because heterogeneous surfaces are ubiquitous throughout our environment. They include materials such as aerosol particles, clouds and fog,...

  12. Model simulations with COSMO-SPECS: impact of heterogeneous freezing modes and ice nucleating particle types on ice formation and precipitation in a deep convective cloud

    NASA Astrophysics Data System (ADS)

    Diehl, Karoline; Grützun, Verena

    2018-03-01

    In deep convective clouds, heavy rain is often formed involving the ice phase. Simulations were performed using the 3-D cloud resolving model COSMO-SPECS with detailed spectral microphysics including parameterizations of homogeneous and three heterogeneous freezing modes. The initial conditions were selected to result in a deep convective cloud reaching 14 km of altitude with strong updrafts up to 40 m s-1. At such altitudes, with corresponding temperatures below -40 °C, the major fraction of liquid drops freezes homogeneously. The goal of the present model simulations was to investigate how additional heterogeneous freezing will affect ice formation and precipitation, although its contribution to total ice formation may be rather low. In such a situation, small perturbations that do not show significant effects at first sight may trigger cloud microphysical responses. Effects of the following small perturbations were studied: (1) additional ice formation via immersion, contact, and deposition modes in comparison to solely homogeneous freezing, (2) contact and deposition freezing in comparison to immersion freezing, and (3) small fractions of biological ice nucleating particles (INPs) in comparison to higher fractions of mineral dust INP. The results indicate that the modification of precipitation proceeds via the formation of larger ice particles, which may be supported by direct freezing of larger drops, the growth of pristine ice particles by riming, and by nucleation of larger drops by collisions with pristine ice particles. In comparison to the reference case with homogeneous freezing only, such small perturbations due to additional heterogeneous freezing hardly affect the total precipitation amount. It is more likely that the temporal development and the local distribution of precipitation are affected by such perturbations. This results in a gradual increase in precipitation at early cloud stages instead of a strong increase at later cloud stages, coupled with approximately 50 % more precipitation in the cloud center. The modifications depend on the active freezing modes, the fractions of active INP, and the composition of the internal mixtures in the drops.

  13. Evaluating the Efficacy of the Cloud for Cluster Computation

    NASA Technical Reports Server (NTRS)

    Knight, David; Shams, Khawaja; Chang, George; Soderstrom, Tom

    2012-01-01

    Computing requirements vary by industry, and it follows that NASA and other research organizations have computing demands that fall outside the mainstream. While cloud computing made rapid inroads for tasks such as powering web applications, performance issues on highly distributed tasks hindered early adoption for scientific computation. One venture to address this problem is Nebula, NASA's homegrown cloud project tasked with delivering science-quality cloud computing resources. However, another industry development is Amazon's high-performance computing (HPC) instances on Elastic Compute Cloud (EC2), which promise improved performance for cluster computation. This paper presents results from a series of benchmarks run on Amazon EC2 and discusses the efficacy of current commercial cloud technology for running scientific applications across a cluster. In particular, a 240-core cluster of cloud instances achieved 2 TFLOPS on High-Performance Linpack (HPL) at 70% of theoretical computational performance. The cluster's local network also demonstrated sub-100 μs inter-process latency with sustained inter-node throughput in excess of 8 Gbps. Beyond HPL, a real-world Hadoop image processing task from NASA's Lunar Mapping and Modeling Project (LMMP) was run on a 29-instance cluster to process lunar and Martian surface images with sizes on the order of tens of gigapixels. These results demonstrate that while not a rival of dedicated supercomputing clusters, commercial cloud technology is now a feasible option for moderately demanding scientific workloads.

  14. VM Capacity-Aware Scheduling within Budget Constraints in IaaS Clouds

    PubMed Central

    Thanasias, Vasileios; Lee, Choonhwa; Hanif, Muhammad; Kim, Eunsam; Helal, Sumi

    2016-01-01

    Recently, cloud computing has drawn significant attention from both industry and academia, bringing unprecedented changes to computing and information technology. The Infrastructure-as-a-Service (IaaS) model offers new abilities such as the elastic provisioning and relinquishing of computing resources in response to workload fluctuations. However, because the demand for resources dynamically changes over time, provisioning resources in a way that a given budget is efficiently utilized while maintaining sufficient performance remains a key challenge. This paper addresses the problem of task scheduling and resource provisioning for a set of tasks running on IaaS clouds; it presents novel provisioning and scheduling algorithms capable of executing tasks within a given budget, while minimizing the slowdown due to the budget constraint. Our simulation study demonstrates a substantial reduction of up to 70% in the overall task slowdown rate by the proposed algorithms. PMID:27501046
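    A toy version of budget-constrained provisioning: every task starts on the cheapest VM type, and the slowest task is greedily upgraded to a faster tier while the budget holds. VM prices, speeds, and the greedy rule are assumptions for illustration, not the paper's algorithms.

    ```python
    # (name, $/hour, relative speed) -- hypothetical VM catalogue
    VM_TYPES = [("small", 0.05, 1.0), ("medium", 0.10, 1.9), ("large", 0.20, 3.5)]

    def provision(task_hours, budget):
        """Minimize the slowdown of the longest task under a cost cap."""
        plan = [{"hours": h, "tier": 0} for h in task_hours]

        def cost():
            return sum(t["hours"] / VM_TYPES[t["tier"]][2]
                       * VM_TYPES[t["tier"]][1] for t in plan)

        while True:
            upgradable = [t for t in plan if t["tier"] < len(VM_TYPES) - 1]
            if not upgradable:
                break
            slowest = max(upgradable,
                          key=lambda t: t["hours"] / VM_TYPES[t["tier"]][2])
            slowest["tier"] += 1
            if cost() > budget:
                slowest["tier"] -= 1   # undo: this upgrade busts the budget
                break
        return plan

    print(provision([8.0, 3.0, 5.0], budget=1.0))
    ```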

  15. Exploring Infiniband Hardware Virtualization in OpenNebula towards Efficient High-Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pais Pitta de Lacerda Ruivo, Tiago; Bernabeu Altayo, Gerard; Garzoglio, Gabriele

    2014-11-11

    It has been widely accepted that software virtualization has a large negative impact on high-performance computing (HPC) application performance. This work explores the potential use of Infiniband hardware virtualization in an OpenNebula cloud towards the efficient support of MPI-based workloads. We have implemented, deployed, and tested an Infiniband network on the FermiCloud private Infrastructure-as-a-Service (IaaS) cloud. To avoid software virtualization and minimize the virtualization overhead, we employed a technique called Single Root Input/Output Virtualization (SRIOV). Our solution spanned modifications to the Linux hypervisor as well as the OpenNebula manager. We evaluated the performance of the hardware virtualization on up to 56 virtual machines connected by up to 8 DDR Infiniband network links, with micro-benchmarks (latency and bandwidth) as well as with an MPI-intensive application (the HPL Linpack benchmark).

  16. VM Capacity-Aware Scheduling within Budget Constraints in IaaS Clouds.

    PubMed

    Thanasias, Vasileios; Lee, Choonhwa; Hanif, Muhammad; Kim, Eunsam; Helal, Sumi

    2016-01-01

    Recently, cloud computing has drawn significant attention from both industry and academia, bringing unprecedented changes to computing and information technology. The Infrastructure-as-a-Service (IaaS) model offers new abilities such as the elastic provisioning and relinquishing of computing resources in response to workload fluctuations. However, because the demand for resources changes dynamically over time, provisioning resources in a way that efficiently utilizes a given budget while maintaining sufficient performance remains a key challenge. This paper addresses the problem of task scheduling and resource provisioning for a set of tasks running on IaaS clouds; it presents novel provisioning and scheduling algorithms capable of executing tasks within a given budget while minimizing the slowdown due to the budget constraint. Our simulation study demonstrates a substantial reduction, of up to 70%, in the overall task slowdown rate by the proposed algorithms.

  17. Jungle Computing: Distributed Supercomputing Beyond Clusters, Grids, and Clouds

    NASA Astrophysics Data System (ADS)

    Seinstra, Frank J.; Maassen, Jason; van Nieuwpoort, Rob V.; Drost, Niels; van Kessel, Timo; van Werkhoven, Ben; Urbani, Jacopo; Jacobs, Ceriel; Kielmann, Thilo; Bal, Henri E.

    In recent years, the application of high-performance and distributed computing in scientific practice has become increasingly widespread. Among the platforms most widely available to scientists are clusters, grids, and cloud systems. Such infrastructures are currently undergoing revolutionary change due to the integration of many-core technologies, providing orders-of-magnitude speed improvements for selected compute kernels. With high-performance and distributed computing systems thus becoming more heterogeneous and hierarchical, programming complexity is vastly increased. Further complexities arise because the urgent desire for scalability, together with issues including data distribution, software heterogeneity, and ad hoc hardware availability, commonly forces scientists into the simultaneous use of multiple platforms (e.g., clusters, grids, and clouds used concurrently). A true computing jungle.

  18. Efficient workload management in geographically distributed data centers leveraging autoregressive models

    NASA Astrophysics Data System (ADS)

    Altomare, Albino; Cesario, Eugenio; Mastroianni, Carlo

    2016-10-01

    The opportunity of using Cloud resources on a pay-as-you-go basis and the availability of powerful data centers and high-bandwidth connections are speeding up the success and popularity of Cloud systems, making on-demand computing a common practice for enterprises and scientific communities. The reasons for this success include natural business distribution, the need for high availability and disaster tolerance, the sheer size of computational infrastructures, and the desire to provide uniform access times from widely distributed client sites. Nevertheless, the expansion of large data centers is resulting in a huge rise in the electrical power consumed by hardware facilities and cooling systems. The geographical distribution of data centers is becoming an opportunity: the variability of electricity prices, environmental conditions and client requests, both from site to site and over time, makes it possible to intelligently and dynamically (re)distribute the computational workload and achieve business goals as diverse as the reduction of costs, energy consumption and carbon emissions, the satisfaction of performance constraints, and adherence to the Service Level Agreements established with users. This paper proposes an approach that helps to achieve the business goals established by the data center administrators. The workload distribution is driven by a fitness function, evaluated for each data center, which weighs key parameters related to business objectives, among them the price of electricity, the carbon emission rate, and the balance of load among the data centers. For example, energy costs can be reduced by using a "follow the moon" approach, i.e., by migrating the workload to data centers where the price of electricity is lower at the time. Our approach uses data about the historical usage of the data centers and about environmental conditions to predict, with the help of autoregressive models, the values of the parameters of the fitness function, and then to appropriately tune the weights assigned to the parameters in accordance with the business goals. Preliminary experimental results, presented in this paper, show encouraging benefits.
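
    The abstract does not give the fitness function itself; the sketch below illustrates the general shape such a function could take, with invented parameter names and weights. The paper obtains the parameter values from autoregressive forecasts rather than the static numbers used here.

```python
# Illustrative sketch of a per-data-center fitness function of the kind
# described above. Parameter names, values, and weights are hypothetical;
# the paper derives the inputs from autoregressive forecasts of historical
# usage and environmental data.
def fitness(dc, weights):
    """Higher is better: cheap power, low carbon, spare capacity.
    All inputs are assumed to be normalized to [0, 1]."""
    return (weights["price"]  * (1.0 - dc["electricity_price"])
          + weights["carbon"] * (1.0 - dc["carbon_rate"])
          + weights["load"]   * (1.0 - dc["load"]))

data_centers = {
    "eu-night": {"electricity_price": 0.30, "carbon_rate": 0.40, "load": 0.70},
    "us-night": {"electricity_price": 0.15, "carbon_rate": 0.55, "load": 0.35},
    "asia-day": {"electricity_price": 0.60, "carbon_rate": 0.25, "load": 0.50},
}
weights = {"price": 0.5, "carbon": 0.3, "load": 0.2}  # tuned to business goals

# A "follow the moon" placement simply routes new workload to the fittest site.
best = max(data_centers, key=lambda name: fitness(data_centers[name], weights))
print("route new workload to:", best)
```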

  19. Shallow to Deep Convection Transition over a Heterogeneous Land Surface Using the Land Model Coupled Large-Eddy Simulation

    NASA Astrophysics Data System (ADS)

    Lee, J.; Zhang, Y.; Klein, S. A.

    2017-12-01

    The triggering of the land breeze, and hence the development of deep convection over heterogeneous land, should be understood as a consequence of complex processes involving various factors from the land surface and the atmosphere simultaneously. This is a sub-grid-scale process that many large-scale models have difficulty incorporating into their parameterization schemes, partly due to our limited understanding. Thus, it is imperative that we approach the problem using a high-resolution modeling framework. In this study, we use SAM-SLM (Lee and Khairoutdinov, 2015), a large-eddy simulation model coupled to a land model, to explore cloud effects such as the cold pool, cloud shading and soil moisture memory on the land breeze structure and the further development of cloud and precipitation over a heterogeneous land surface. The atmospheric large-scale forcing and the initial sounding are taken from the new composite case study of fair-weather, non-precipitating shallow cumuli at ARM SGP (Zhang et al., 2017). We model the land surface as a chessboard pattern with alternating leaf area index (LAI). The patch contrast of the LAI is adjusted to span weak to strong heterogeneity amplitudes. The surface sensible and latent heat fluxes are computed according to the given LAI, representing the differential surface heating over a heterogeneous land surface. Separate from the surface forcing imposed by the originally modeled surface, the cases that transition into moist convection can induce another layer of surface heterogeneity from 1) radiation shading by clouds, 2) the soil moisture pattern adjusted by the rain, and 3) the spreading cold pool. First, we assess and quantify the individual cloud effects on the land breeze and the moist convection under weak wind to simplify the feedback processes. Then, the same set of experiments is repeated under sheared background wind with a low-level jet, a typical summertime wind pattern at the ARM SGP site, to account for more realistic situations. Our goal is to assist in answering the question: "Does sub-grid-scale land surface heterogeneity matter for weather and climate modeling?" This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-736011.

  20. Role of the ATLAS Grid Information System (AGIS) in Distributed Data Analysis and Simulation

    NASA Astrophysics Data System (ADS)

    Anisenkov, A. V.

    2018-03-01

    In modern high-energy physics experiments, particular attention is paid to the global integration of information and computing resources into a unified system for the efficient storage and processing of experimental data. Annually, the ATLAS experiment performed at the Large Hadron Collider at the European Organization for Nuclear Research (CERN) produces tens of petabytes of raw data from the recording electronics and several petabytes of data from the simulation system. For the processing and storage of such super-large volumes of data, the computing model of the ATLAS experiment is based on a heterogeneous, geographically distributed computing environment, which includes the worldwide LHC computing grid (WLCG) infrastructure and is able to meet the requirements of the experiment for processing huge data sets and to provide a high degree of accessibility (hundreds of petabytes). The paper considers the ATLAS grid information system (AGIS), used by the ATLAS collaboration to describe the topology and resources of the computing infrastructure, to configure and connect the high-level software systems of computer centers, and to describe and store all possible parameters, control, configuration, and other auxiliary information required for the effective operation of the ATLAS distributed computing applications and services. The role of the AGIS system in developing a unified description of the computing resources provided by grid sites, supercomputer centers, and cloud computing into a consistent information model for the ATLAS experiment is outlined. This approach has allowed the collaboration to extend the computing capabilities of the WLCG project and to integrate supercomputers and cloud computing platforms into the software components of the production and distributed analysis workload management system (PanDA, ATLAS).

  1. Cloud Computing Trace Characterization and Synthetic Workload Generation

    DTIC Science & Technology

    2013-03-01

    measurements [44]. Olio is primarily for learning Web 2.0 technologies, evaluating the three implementations (PHP, Java EE, and RubyOnRails (ROR... Add Event 17 Olio is well documented, but assumes prerequisite knowledge of the setup and operation of Apache web servers and MySQL databases. Olio... Faban supports numerous servers such as Apache httpd, Sun Java System Web, Portal and Mail Servers, Oracle RDBMS, memcached, and others [18]. Perhaps

  2. Challenges of Designing Interdisciplinary Postgraduate Curricula: Case Studies of Interdisciplinary Master's Programmes at a Research-Intensive UK University

    ERIC Educational Resources Information Center

    Gantogtokh, Orkhon; Quinlan, Kathleen M.

    2017-01-01

    This study, based on case study analyses of two interdisciplinary programmes in a research-intensive university in the UK, focuses on the challenges involved in designing, coordinating, and leading interdisciplinary postgraduate curricula, including workload, student heterogeneity, and difficulties in achieving coherence. Solutions and approaches…

  3. A survey of CPU-GPU heterogeneous computing techniques

    DOE PAGES

    Mittal, Sparsh; Vetter, Jeffrey S.

    2015-07-04

    As both CPUs and GPUs come to be employed in a wide range of applications, it has been acknowledged that both of these processing units (PUs) have their unique features and strengths, and hence CPU-GPU collaboration is inevitable to achieve high-performance computing. This has motivated a significant amount of research on heterogeneous computing techniques, along with the design of CPU-GPU fused chips and petascale heterogeneous supercomputers. In this paper, we survey heterogeneous computing techniques (HCTs), such as workload partitioning, that enable the use of both CPU and GPU to improve performance and/or energy efficiency. We review heterogeneous computing approaches at the runtime, algorithm, programming, compiler and application levels. Further, we review both discrete and fused CPU-GPU systems, and discuss benchmark suites designed for evaluating heterogeneous computing systems (HCSs). We believe that this paper will provide insights into the working and scope of applications of HCTs to researchers and motivate them to further harness the computational power of CPUs and GPUs to achieve the goal of exascale performance.

  4. A survey of CPU-GPU heterogeneous computing techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Sparsh; Vetter, Jeffrey S.

    As both CPUs and GPUs come to be employed in a wide range of applications, it has been acknowledged that both of these processing units (PUs) have their unique features and strengths, and hence CPU-GPU collaboration is inevitable to achieve high-performance computing. This has motivated a significant amount of research on heterogeneous computing techniques, along with the design of CPU-GPU fused chips and petascale heterogeneous supercomputers. In this paper, we survey heterogeneous computing techniques (HCTs), such as workload partitioning, that enable the use of both CPU and GPU to improve performance and/or energy efficiency. We review heterogeneous computing approaches at the runtime, algorithm, programming, compiler and application levels. Further, we review both discrete and fused CPU-GPU systems, and discuss benchmark suites designed for evaluating heterogeneous computing systems (HCSs). We believe that this paper will provide insights into the working and scope of applications of HCTs to researchers and motivate them to further harness the computational power of CPUs and GPUs to achieve the goal of exascale performance.

  5. Double-moment Cloud Microphysics Scheme for the Deep Convection Parameterization in the GFDL AM3

    NASA Astrophysics Data System (ADS)

    Belochitski, A.; Donner, L.

    2013-12-01

    A double-moment cloud microphysical scheme, originally developed by Morrison and Gettelman (2008) for stratiform clouds and later adapted for deep convection by Song and Zhang (2011), is being implemented into the deep convection parameterization of the Geophysical Fluid Dynamics Laboratory's atmospheric general circulation model AM3. The scheme treats cloud drop, cloud ice, rain, and snow number concentrations and mixing ratios as diagnostic variables and incorporates the processes of autoconversion, self-collection, collection between hydrometeor species, sedimentation, ice nucleation, drop activation, homogeneous and heterogeneous freezing, and the Bergeron-Findeisen process. The detailed representation of microphysical processes makes the scheme suitable for studying the interactions between aerosols and convection, as well as aerosols' indirect effects on clouds and the roles of these effects in climate change. The scheme is implemented into the single-column version of the GFDL AM3 and evaluated using large-scale forcing data obtained at the U.S. Department of Energy Atmospheric Radiation Measurement project's Southern Great Plains and Tropical West Pacific sites. The sensitivity of the scheme to formulations for the autoconversion of cloud water and its accretion by rain, the self-collection of rain and of snow, and heterogeneous ice nucleation is investigated. In the future, tests with the full atmospheric GCM will be conducted.

  6. Liberating Virtual Machines from Physical Boundaries through Execution Knowledge

    DTIC Science & Technology

    2015-12-01

    trivial infrastructures such as VM distribution networks, clients need to wait for an extended period of time before launching a VM. In cloud settings... hardware support. MobiDesk [28] efficiently supports virtual desktops in mobile environments by decoupling the user's workload from host systems and... experiment set-up. VMs are migrated between a pair of source and destination hosts, which are connected through a backend 10 Gbps network for

  7. The ATLAS Production System Evolution: New Data Processing and Analysis Paradigm for the LHC Run2 and High-Luminosity

    NASA Astrophysics Data System (ADS)

    Barreiro, F. H.; Borodin, M.; De, K.; Golubkov, D.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Padolski, S.; Wenaus, T.; ATLAS Collaboration

    2017-10-01

    The second generation of the ATLAS Production System, called ProdSys2, is a distributed workload manager that daily runs hundreds of thousands of jobs, from dozens of different ATLAS-specific workflows, across more than a hundred heterogeneous sites. It achieves high utilization by combining dynamic job definition based on many criteria, such as input and output size, memory requirements and CPU consumption, with manageable scheduling policies, and by supporting different kinds of computational resources, such as the Grid, clouds, supercomputers and volunteer computers. The system dynamically assigns a group of jobs (a task) to a group of geographically distributed computing resources. Dynamic assignment and resource utilization is one of the major features of the system; it did not exist in the earliest versions of the production system, where the Grid resource topology was predefined along national and/or geographical patterns. The Production System has a sophisticated job fault-recovery mechanism, which allows multi-terabyte tasks to run efficiently without human intervention. We have implemented a "train" model and open-ended production, which allow tasks to be submitted automatically as soon as a new set of data is available and physics-group data processing and analysis to be chained with the experiment's central production. We present an overview of the ATLAS Production System and the features and architecture of its major components: task definition, the web user interface, and monitoring. We describe the important design decisions and lessons learned from operational experience during the first year of LHC Run 2. We also report the performance of the designed system and how various workflows, such as data (re)processing, Monte Carlo and physics-group production, and user analysis, are scheduled and executed within one production system on heterogeneous computing resources.

  8. An overview of the Ice Nuclei Research Unit Jungfraujoch/Cloud and Aerosol Characterization Experiment 2013 (INUIT-JFJ/CLACE-2013)

    NASA Astrophysics Data System (ADS)

    Schneider, Johannes

    2014-05-01

    Ice formation in mixed-phase tropospheric clouds is an essential prerequisite for the formation of precipitation at mid-latitudes. Ice formation at temperatures warmer than -35°C is only possible via heterogeneous ice nucleation, but up to now the exact pathways of heterogeneous ice formation have not been sufficiently well understood. The research unit INUIT (Ice NUcleation research unIT), funded by the Deutsche Forschungsgemeinschaft (DFG FOR 1525), was established in 2012 with the objective of investigating heterogeneous ice nucleation by a combination of laboratory studies, model calculations and field experiments. The main field campaign of the INUIT project (INUIT-JFJ) was conducted at the High Alpine Research Station Jungfraujoch (Swiss Alps, 3580 m asl) during January and February 2013, in collaboration with several international partners in the framework of CLACE2013. The instrumentation included a large set of aerosol chemical and physical analysis instruments (particle counters, particle sizers, particle mass spectrometers, cloud condensation nuclei counters, ice nucleus counters, etc.) that were operated inside the Sphinx laboratory and sampled mixed-phase clouds through two ice-selective inlets (Ice-CVI, ISI) as well as through a total aerosol inlet that was used for out-of-cloud aerosol measurements. Besides the on-line measurements, samples for off-line analysis (ESEM, STXM) were also taken in and out of clouds. Furthermore, several cloud microphysics instruments were operated outside the Sphinx laboratory. First results indicate that a large fraction of the ice residues sampled from mixed-phase clouds contain organic material, but also mineral dust. Soot and lead were not found to be enriched in the ice residues. The concentration of heterogeneous ice nuclei was found to be variable (ranging between < 1 and > 100 per liter) and to depend strongly on the operating conditions of the respective IN counter. The number size distribution of the ice residues appears to be bimodal, with a smaller mode having a modal diameter around 200 nm and a coarse mode at around 2 µm. During the cloud events evaluated so far, agreement was achieved between the number concentration of ice residues sampled through the Ice-CVI and the concentration of small ice crystals measured outside the laboratory. The shape of the small ice crystals was found to be mainly irregular. We acknowledge the International Foundation High Altitude Research Stations Jungfraujoch and Gornergrat (HFSJG), the help of the custodians at the Jungfraujoch station, and the funding by DFG (FOR 1525) and the federal state of Hessen ("LOEWE-Schwerpunkt AmbiProbe").

  9. Impacts of Subgrid Heterogeneous Mixing between Cloud Liquid and Ice on the Wegener-Bergeron-Findeisen Process and Mixed-phase Clouds in NCAR CAM5

    NASA Astrophysics Data System (ADS)

    Liu, X.; Zhang, M.; Zhang, D.; Wang, Z.; Wang, Y.

    2017-12-01

    Mixed-phase clouds are persistently observed over the Arctic, and the phase partitioning between cloud liquid and ice hydrometeors in mixed-phase clouds has important impacts on the surface energy budget and Arctic climate. In this study, we test the NCAR Community Atmosphere Model Version 5 (CAM5) in single-column and weather-forecast configurations and evaluate the model performance against observational data from the DOE Atmospheric Radiation Measurement (ARM) Program's M-PACE field campaign in October 2004 and long-term ground-based multi-sensor remote sensing measurements. Like most global climate models, CAM5 poorly simulates the phase partitioning in mixed-phase clouds, significantly underestimating the cloud liquid water content. Assuming pocket structures in the distribution of cloud liquid and ice in mixed-phase clouds, as suggested by in situ observations, provides a plausible way to improve the model performance by reducing the Wegener-Bergeron-Findeisen (WBF) process rate. In this study, the modification of the WBF process in the CAM5 model has been achieved by applying a stochastic perturbation to the time scale of the WBF process for both ice and snow, to account for the heterogeneous mixture of cloud liquid and ice. Our results show that this modification of the WBF process improves the modeled phase partitioning in mixed-phase clouds. The seasonal variation of mixed-phase cloud properties is also better reproduced in the model in comparison with the long-term ground-based remote sensing observations. Furthermore, the phase partitioning is insensitive to the reassignment time step of the perturbations.

  10. An improved ice cloud formation parameterization in the EMAC model

    NASA Astrophysics Data System (ADS)

    Bacer, Sara; Pozzer, Andrea; Karydis, Vlassis; Tsimpidi, Alexandra; Tost, Holger; Sullivan, Sylvia; Nenes, Athanasios; Barahona, Donifan; Lelieveld, Jos

    2017-04-01

    Cirrus clouds cover about 30% of the Earth's surface and are an important modulator of the radiative energy budget of the atmosphere. Despite their importance in the global climate system, there are still large uncertainties in understanding the microphysical properties and interactions with aerosols. Ice crystal formation is quite complex and a variety of mechanisms exists for ice nucleation, depending on aerosol characteristics and environmental conditions. Ice crystals can be formed via homogeneous nucleation or heterogeneous nucleation of ice-nucleating particles in different ways (contact, immersion, condensation, deposition). We have implemented the computationally efficient cirrus cloud formation parameterization by Barahona and Nenes (2009) into the EMAC (ECHAM5/MESSy Atmospheric Chemistry) model in order to improve the representation of ice clouds and aerosol-cloud interactions. The parameterization computes the ice crystal number concentration from precursor aerosols and ice-nucleating particles accounting for the competition between homogeneous and heterogeneous nucleation and among different freezing modes. Our work shows the differences and the improvements obtained after the implementation with respect to the previous version of EMAC.

  11. HETEROGENEOUS PHOTOREACTION OF FORMALDEHYDE WITH HYDROXYL RADICALS

    EPA Science Inventory

    Atmospheric heterogeneous photoreactions occur between formaldehyde and hydroxyl radicals to produce formic acid. These photoreactions not only occur in clouds, but also in other tropospheric hydrometeors such as precipitation and dew droplets. Experiments were performed by irradia...

  12. Dynamic Transfers Of Tasks Among Computers

    NASA Technical Reports Server (NTRS)

    Liu, Howard T.; Silvester, John A.

    1989-01-01

    An allocation scheme gives jobs to idle computers. An ideal resource-sharing algorithm should have the following characteristics: dynamic, decentralized, and heterogeneous. The proposed enhanced receiver-initiated dynamic algorithm (ERIDA) for resource sharing fulfills all of the above criteria. It provides a method for balancing the workload among hosts, resulting in improved response time and throughput performance for the total system, and it adjusts dynamically to the traffic load of each station.
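
    The record is telegraphic about how ERIDA works. As an illustration only, a single receiver-initiated load-sharing step — idle hosts probe peers and pull work from the busiest one — could be sketched as follows; the threshold and probe count are assumptions, not details from the paper.

```python
# Toy sketch of one receiver-initiated load-sharing step in the spirit of
# ERIDA: an idle host polls a few peers and steals a job from the most
# loaded one. The threshold and probe count are invented for illustration.
import random

def receiver_initiated_step(hosts, idle_threshold=1, probes=3):
    """`hosts` maps host name -> job queue. Under-loaded hosts probe random
    peers and pull one job from the busiest peer found."""
    for name, queue in hosts.items():
        if len(queue) > idle_threshold:
            continue  # busy hosts never initiate transfers
        peers = [p for p in hosts if p != name]
        candidates = random.sample(peers, min(probes, len(peers)))
        donor = max(candidates, key=lambda p: len(hosts[p]))
        if len(hosts[donor]) > idle_threshold + 1:
            queue.append(hosts[donor].pop())  # migrate one job

hosts = {"A": ["j1", "j2", "j3", "j4"], "B": [], "C": ["j5"]}
receiver_initiated_step(hosts)
print(hosts)  # work has flowed from the loaded host toward the idle ones
```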

  13. Job Superscheduler Architecture and Performance in Computational Grid Environments

    NASA Technical Reports Server (NTRS)

    Shan, Hongzhang; Oliker, Leonid; Biswas, Rupak

    2003-01-01

    Computational grids hold great promise in utilizing geographically separated heterogeneous resources to solve large-scale complex scientific problems. However, a number of major technical hurdles, including distributed resource management and effective job scheduling, stand in the way of realizing these gains. In this paper, we propose a novel grid superscheduler architecture and three distributed job migration algorithms. We also model the critical interaction between the superscheduler and autonomous local schedulers. Extensive performance comparisons with ideal, central, and local schemes using real workloads from leading computational centers are conducted in a simulation environment. Additionally, synthetic workloads are used to perform a detailed sensitivity analysis of our superscheduler. Several key metrics demonstrate that substantial performance gains can be achieved via smart superscheduling in distributed computational grids.

  14. Cirrus clouds as seen by the CALIPSO satellite and ECHAM-HAM global climate model

    NASA Astrophysics Data System (ADS)

    Gasparini, Blaz; Meyer, Angela; Neubauer, David; Münch, Steffen; Lohmann, Ulrike

    2017-04-01

    Ice clouds impact the planetary energy balance and upper-tropospheric water vapour transport and are therefore relevant for climate. In this study, ice clouds at temperatures below -40°C simulated by the ECHAM-HAM global climate model are compared to CALIPSO/CALIOP satellite data. The model reproduces the mean occurrence of ice clouds well, while the ice water path, ice crystal radius, cloud optical depth and extinction are overestimated in terms of annual means and temperature-dependent frequency histograms. Two distinct types of cirrus clouds are found: in-situ-formed cirrus, dominating at temperatures below -60°C, and liquid-origin cirrus, dominating at temperatures warmer than -55°C. The latter form in the anvils of deep convective clouds or by glaciation of mixed-phase clouds. They are associated with ice water contents of up to 0.1 g m-3 and extinctions of up to 0.1 km-1, while the in-situ-formed cirrus are optically thinner and contain at least an order of magnitude less ice. The ice cloud properties do not differ significantly between the southern and the northern hemisphere. In-situ-formed ice clouds are further divided into homogeneously and heterogeneously nucleated ones. The simulated liquid-origin ice crystals mainly form in convective outflow in large number concentrations, similar to in-situ homogeneously nucleated ice crystals. In contrast, heterogeneously nucleated ice crystals are associated with smaller number concentrations. However, ice crystal aggregation and depositional growth smooth out the differences between the formation mechanisms, making attribution to a specific ice nucleation mechanism challenging.

  15. The future of PanDA in ATLAS distributed computing

    NASA Astrophysics Data System (ADS)

    De, K.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

    Experiments at the Large Hadron Collider (LHC) face unprecedented computing challenges. Heterogeneous resources are distributed worldwide at hundreds of sites, thousands of physicists analyse the data remotely, the volume of processed data is beyond the exabyte scale, while data processing requires more than a few billion hours of computing usage per year. The PanDA (Production and Distributed Analysis) system was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. In the process, the old batch job paradigm of locally managed computing in HEP was discarded in favour of a far more automated, flexible and scalable model. The success of PanDA in ATLAS is leading to widespread adoption and testing by other experiments. PanDA is the first exascale workload management system in HEP, already operating at more than a million computing jobs per day, and processing over an exabyte of data in 2013. There are many new challenges that PanDA will face in the near future, in addition to new challenges of scale, heterogeneity and increasing user base. PanDA will need to handle rapidly changing computing infrastructure, will require factorization of code for easier deployment, will need to incorporate additional information sources including network metrics in decision making, be able to control network circuits, handle dynamically sized workload processing, provide improved visualization, and face many other challenges. In this talk we will focus on the new features, planned or recently implemented, that are relevant to the next decade of distributed computing workload management using PanDA.

  16. Absorption of Solar Radiation by Clouds: Observations Versus Models

    NASA Technical Reports Server (NTRS)

    Cess, R. D.; Zhang, M. H.; Minnis, P.; Corsetti, L.; Dutton, E. G.; Forgan, B. W.; Garber, D. P.; Gates, W. L.; Hack, J. J.; Harrison, E. F.; hide

    1995-01-01

    There has been a long history of unexplained anomalous absorption of solar radiation by clouds. Collocated satellite and surface measurements of solar radiation at five geographically diverse locations showed significant solar absorption by clouds, resulting in about 25 watts per square meter more global-mean absorption by the cloudy atmosphere than predicted by theoretical models. It has often been suggested that tropospheric aerosols could increase cloud absorption. But these aerosols are temporally and spatially heterogeneous, whereas the observed cloud absorption is remarkably invariant with respect to season and location. Although its physical cause is unknown, enhanced cloud absorption substantially alters our understanding of the atmosphere's energy budget.

  17. Workload Model Based Dynamic Adaptation of Social Internet of Vehicles

    PubMed Central

    Alam, Kazi Masudul; Saini, Mukesh; El Saddik, Abdulmotaleb

    2015-01-01

    Social Internet of Things (SIoT) has gained much interest among different research groups in recent times. As a key member of a smart city, the vehicular domain of SIoT (SIoV) is also undergoing steep development. In the SIoV, vehicles work as sensor hubs that capture surrounding information using in-vehicle and smartphone sensors and later publish it for consumers. A cloud-centric cyber-physical system best describes the SIoV model, where the physical sensing-actuation process affects cloud-based service sharing or computation in a feedback loop, and vice versa. The cyber-based social relationship abstraction enables distributed, easily navigable and scalable peer-to-peer communication among the SIoV subsystems. These cyber-physical interactions involve a huge amount of data, and it is difficult to build a real instance of the system to test the feasibility of SIoV applications. In this paper, we propose an analytical model to measure the workloads of the various subsystems involved in the SIoV process. We present the basic model, which is further extended to incorporate complex scenarios. We provide extensive simulation results for different parameter settings of the SIoV system. The findings of the analyses are further used to design example adaptation strategies for the SIoV subsystems, which would foster the deployment of intelligent transport systems. PMID:26389905

  18. Workload Model Based Dynamic Adaptation of Social Internet of Vehicles.

    PubMed

    Alam, Kazi Masudul; Saini, Mukesh; El Saddik, Abdulmotaleb

    2015-09-15

    Social Internet of Things (SIoT) has gained much interest among different research groups in recent times. As a key member of a smart city, the vehicular domain of SIoT (SIoV) is also undergoing steep development. In the SIoV, vehicles work as sensor hubs that capture surrounding information using in-vehicle and smartphone sensors and later publish it for consumers. A cloud-centric cyber-physical system best describes the SIoV model, where the physical sensing-actuation process affects cloud-based service sharing or computation in a feedback loop, and vice versa. The cyber-based social relationship abstraction enables distributed, easily navigable and scalable peer-to-peer communication among the SIoV subsystems. These cyber-physical interactions involve a huge amount of data, and it is difficult to build a real instance of the system to test the feasibility of SIoV applications. In this paper, we propose an analytical model to measure the workloads of the various subsystems involved in the SIoV process. We present the basic model, which is further extended to incorporate complex scenarios. We provide extensive simulation results for different parameter settings of the SIoV system. The findings of the analyses are further used to design example adaptation strategies for the SIoV subsystems, which would foster the deployment of intelligent transport systems.

  19. Black Clouds vs Random Variation in Hospital Admissions.

    PubMed

    Ong, Luei Wern; Dawson, Jeffrey D; Ely, John W

    2018-06-01

    Physicians often accuse their peers of being "black clouds" if they repeatedly have more than the average number of hospital admissions while on call. Our purpose was to determine whether the black-cloud phenomenon is real or explainable by random variation. We analyzed hospital admissions to the University of Iowa family medicine service from July 1, 2010 to June 30, 2015. Analyses were stratified by peer group (eg, night shift attending physicians, day shift senior residents). We analyzed admission numbers to find evidence of black-cloud physicians (those with significantly more admissions than their peers) and white-cloud physicians (those with significantly fewer admissions). The statistical significance of whether there were actual differences across physicians was tested with mixed-effects negative binomial regression. The 5-year study included 96 physicians and 6,194 admissions. The number of daytime admissions ranged from 0 to 10 (mean 2.17, SD 1.63). Night admissions ranged from 0 to 11 (mean 1.23, SD 1.22). Admissions increased from 1,016 in the first year to 1,523 in the fifth year. We found 18 white-cloud and 16 black-cloud physicians in simple regression models that did not control for this upward trend. After including study year and other potential confounding variables in the regression models, there were no significant associations between physicians and admission numbers and therefore no true black or white clouds. In this study, apparent black-cloud and white-cloud physicians could be explained by random variation in hospital admissions. However, this randomness incorporated a wide range in workload among physicians, with potential impact on resident education at the low end and patient safety at the high end.
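
    As a hedged illustration of the paper's statistical argument, the sketch below simulates admissions that depend only on a secular upward trend (no true black clouds) and fits a negative binomial regression with physician indicators; the study itself used mixed-effects models on real admission data, so this is the general idea rather than the authors' exact analysis.

```python
# Sketch: do per-physician differences in admission counts survive once a
# secular trend is controlled for? Here the data are simulated with pure
# random variation (no true "black clouds"), so few physician indicators
# should reach significance. The paper used mixed-effects negative binomial
# models on real data; this plain GLM is a simplified stand-in.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_physicians, shifts_each = 20, 250
physician = np.repeat(np.arange(n_physicians), shifts_each)
year = rng.integers(0, 5, size=physician.size)

# Admissions rise over the years but do NOT depend on the physician.
admissions = rng.poisson(2.0 * (1.0 + 0.1 * year))

X = np.column_stack([year] + [(physician == p).astype(float)
                              for p in range(1, n_physicians)])
X = sm.add_constant(X)
fit = sm.GLM(admissions, X, family=sm.families.NegativeBinomial()).fit()

phys_pvalues = fit.pvalues[2:]   # skip the intercept and the year trend
print("physicians with p < 0.05:",
      int((phys_pvalues < 0.05).sum()), "of", n_physicians - 1)
```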

  20. Improved Cloud resource allocation: how INDIGO-DataCloud is overcoming the current limitations in Cloud schedulers

    NASA Astrophysics Data System (ADS)

    Lopez Garcia, Alvaro; Zangrando, Lisa; Sgaravatto, Massimo; Llorens, Vincent; Vallero, Sara; Zaccolo, Valentina; Bagnasco, Stefano; Taneja, Sonia; Dal Pra, Stefano; Salomoni, Davide; Donvito, Giacinto

    2017-10-01

    Performing efficient resource provisioning is a fundamental aspect for any resource provider. Local Resource Management Systems (LRMS) have been used in data centers for decades in order to obtain the best usage of the resources, providing fair usage and partitioning for the users. In contrast, current cloud schedulers are normally based on the immediate allocation of resources on a first-come, first-served basis, meaning that a request will fail if there are no resources (e.g. OpenStack) or it will be trivially queued, ordered by entry time (e.g. OpenNebula). Moreover, these scheduling strategies are based on a static partitioning of the resources, meaning that existing quotas cannot be exceeded even if there are idle resources allocated to other projects. This is a consequence of the fact that cloud instances are not associated with a maximum execution time, and it leads to a situation where the resources are under-utilized. The INDIGO-DataCloud project has identified these strategies as too simplistic for accommodating scientific workloads efficiently, leading to an underutilization of the resources, an undesirable situation in scientific data centers. In this work, we present the work done in the scheduling area during the first year of the INDIGO project and the foreseen evolutions.
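
    To make the contrast concrete, here is a minimal, hypothetical sketch of the fair-share ordering that LRMS-style schedulers apply and that this line of work advocates over first-come, first-served allocation; the project names, shares, and usage figures are invented.

```python
# Minimal sketch contrasting FIFO allocation with fair-share ordering:
# pending requests are served in order of each project's usage-to-share
# ratio, so under-served projects go first. All numbers are hypothetical.
def fair_share_order(pending, usage, share):
    """Sort (project, request) pairs so projects that have consumed the
    least relative to their agreed share are scheduled first."""
    return sorted(pending, key=lambda req: usage[req[0]] / share[req[0]])

pending = [("atlas", "vm-1"), ("cms", "vm-2"), ("bio", "vm-3")]
usage = {"atlas": 900.0, "cms": 400.0, "bio": 50.0}   # core-hours consumed
share = {"atlas": 0.5, "cms": 0.3, "bio": 0.2}        # agreed fractions

for project, request in fair_share_order(pending, usage, share):
    print(f"start {request} for {project}")
# bio (50/0.2 = 250) runs before cms (~1333) and atlas (1800), even though
# its request arrived last under a FIFO policy.
```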

  1. Now and Next-Generation Sequencing Techniques: Future of Sequence Analysis Using Cloud Computing

    PubMed Central

    Thakur, Radhe Shyam; Bandopadhyay, Rajib; Chaudhary, Bratati; Chatterjee, Sourav

    2012-01-01

    Advances in the field of sequencing techniques have resulted in the greatly accelerated production of huge sequence datasets. This presents immediate challenges in database maintenance at datacenters. It provides additional computational challenges in data mining and sequence analysis. Together these represent a significant overburden on traditional stand-alone computer resources, and to reach effective conclusions quickly and efficiently, the virtualization of the resources and computation on a pay-as-you-go concept (together termed “cloud computing”) has recently appeared. The collective resources of the datacenter, including both hardware and software, can be available publicly, being then termed a public cloud, the resources being provided in a virtual mode to the clients who pay according to the resources they employ. Examples of public companies providing these resources include Amazon, Google, and Joyent. The computational workload is shifted to the provider, which also implements required hardware and software upgrades over time. A virtual environment is created in the cloud corresponding to the computational and data storage needs of the user via the internet. The task is then performed, the results transmitted to the user, and the environment finally deleted after all tasks are completed. In this discussion, we focus on the basics of cloud computing, and go on to analyze the prerequisites and overall working of clouds. Finally, the applications of cloud computing in biological systems, particularly in comparative genomics, genome informatics, and SNP detection are discussed with reference to traditional workflows. PMID:23248640

  2. Now and next-generation sequencing techniques: future of sequence analysis using cloud computing.

    PubMed

    Thakur, Radhe Shyam; Bandopadhyay, Rajib; Chaudhary, Bratati; Chatterjee, Sourav

    2012-01-01

    Advances in the field of sequencing techniques have resulted in the greatly accelerated production of huge sequence datasets. This presents immediate challenges in database maintenance at datacenters. It provides additional computational challenges in data mining and sequence analysis. Together these represent a significant overburden on traditional stand-alone computer resources, and to reach effective conclusions quickly and efficiently, the virtualization of the resources and computation on a pay-as-you-go concept (together termed "cloud computing") has recently appeared. The collective resources of the datacenter, including both hardware and software, can be available publicly, being then termed a public cloud, the resources being provided in a virtual mode to the clients who pay according to the resources they employ. Examples of public companies providing these resources include Amazon, Google, and Joyent. The computational workload is shifted to the provider, which also implements required hardware and software upgrades over time. A virtual environment is created in the cloud corresponding to the computational and data storage needs of the user via the internet. The task is then performed, the results transmitted to the user, and the environment finally deleted after all tasks are completed. In this discussion, we focus on the basics of cloud computing, and go on to analyze the prerequisites and overall working of clouds. Finally, the applications of cloud computing in biological systems, particularly in comparative genomics, genome informatics, and SNP detection are discussed with reference to traditional workflows.

  3. The Magellan Final Report on Cloud Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    ,; Coghlan, Susan; Yelick, Katherine

    The goal of Magellan, a project funded through the U.S. Department of Energy (DOE) Office of Advanced Scientific Computing Research (ASCR), was to investigate the potential role of cloud computing in addressing the computing needs of the DOE Office of Science (SC), particularly related to serving the needs of mid-range computing and future data-intensive computing workloads. A set of research questions was formed to probe various aspects of cloud computing, from performance and usability to cost. To address these questions, a distributed testbed infrastructure was deployed at the Argonne Leadership Computing Facility (ALCF) and the National Energy Research Scientific Computing Center (NERSC). The testbed was designed to be flexible and capable enough to explore a variety of computing models and hardware design points in order to understand the impact for various scientific applications. During the project, the testbed also served as a valuable resource to application scientists. Applications from a diverse set of projects, such as MG-RAST (a metagenomics analysis server), the Joint Genome Institute, the STAR experiment at the Relativistic Heavy Ion Collider, and the Laser Interferometer Gravitational Wave Observatory (LIGO), were used by the Magellan project for benchmarking within the cloud, but the project teams were also able to accomplish important production science utilizing the Magellan cloud resources.

  4. A Cloud-Based Internet of Things Platform for Ambient Assisted Living

    PubMed Central

    Cubo, Javier; Nieto, Adrián; Pimentel, Ernesto

    2014-01-01

    A common feature of ambient intelligence is that many objects are inter-connected and act in unison, which is also a challenge in the Internet of Things. There has been a shift in research towards integrating both concepts, considering the Internet of Things as representing the future of computing and communications. However, the efficient combination and management of heterogeneous things or devices in the ambient intelligence domain is still a tedious task, and it presents crucial challenges. Therefore, to appropriately manage the inter-connection of diverse devices in these systems requires: (1) specifying and efficiently implementing the devices (e.g., as services); (2) handling and verifying their heterogeneity and composition; and (3) standardizing and managing their data, so as to tackle large numbers of systems together, avoiding standalone applications on local servers. To overcome these challenges, this paper proposes a platform to manage the integration and behavior-aware orchestration of heterogeneous devices as services, stored and accessed via the cloud, with the following contributions: (i) we describe a lightweight model to specify the behavior of devices, to determine the order of the sequence of exchanged messages during the composition of devices; (ii) we define a common architecture using a service-oriented standard environment, to integrate heterogeneous devices by means of their interfaces, via a gateway, and to orchestrate them according to their behavior; (iii) we design a framework based on cloud computing technology, connecting the gateway in charge of acquiring the data from the devices with a cloud platform, to remotely access and monitor the data at run-time and react to emergency situations; and (iv) we implement and generate a novel cloud-based IoT platform of behavior-aware devices as services for ambient intelligence systems, validating the whole approach in real scenarios related to a specific ambient assisted living application. PMID:25093343

  5. A cloud-based Internet of Things platform for ambient assisted living.

    PubMed

    Cubo, Javier; Nieto, Adrián; Pimentel, Ernesto

    2014-08-04

    A common feature of ambient intelligence is that many objects are inter-connected and act in unison, which is also a challenge in the Internet of Things. There has been a shift in research towards integrating both concepts, considering the Internet of Things as representing the future of computing and communications. However, the efficient combination and management of heterogeneous things or devices in the ambient intelligence domain is still a tedious task, and it presents crucial challenges. Therefore, to appropriately manage the inter-connection of diverse devices in these systems requires: (1) specifying and efficiently implementing the devices (e.g., as services); (2) handling and verifying their heterogeneity and composition; and (3) standardizing and managing their data, so as to tackle large numbers of systems together, avoiding standalone applications on local servers. To overcome these challenges, this paper proposes a platform to manage the integration and behavior-aware orchestration of heterogeneous devices as services, stored and accessed via the cloud, with the following contributions: (i) we describe a lightweight model to specify the behavior of devices, to determine the order of the sequence of exchanged messages during the composition of devices; (ii) we define a common architecture using a service-oriented standard environment, to integrate heterogeneous devices by means of their interfaces, via a gateway, and to orchestrate them according to their behavior; (iii) we design a framework based on cloud computing technology, connecting the gateway in charge of acquiring the data from the devices with a cloud platform, to remotely access and monitor the data at run-time and react to emergency situations; and (iv) we implement and generate a novel cloud-based IoT platform of behavior-aware devices as services for ambient intelligence systems, validating the whole approach in real scenarios related to a specific ambient assisted living application.

  6. Nursing Activities Score: Cloud Computerized Structure.

    PubMed

    Moraes, Kátia Bottega; Martins, Fabiana Zerbieri; de Camargo, Maximiliano Dutra; Vieira, Débora Feijó; Magalhães, Ana Maria Muller; Silveira, Denise Tolfo

    2016-01-01

    This study aims to describe the implementation process of a cloud-based Nursing Activities Score (NAS) in the Intensive Care Unit of the Post-Anesthesia Recovery Room. It is a case study. The tools used were high-productivity Google applications, bringing together the domain knowledge of nursing professionals and of information technology professionals. As partial results, the average nursing staff workload in the ICU/PARR during the first 24 hours, according to the score on the scale, was 91.75 ± 18.2. Each NAS point converts to 14.4 minutes, which is equivalent to an average of 22 working hours. The instrument is currently implemented in the institution, reinforcing the need to update it and to raise awareness of the need to maintain the new routine.
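
    The conversion quoted above can be checked directly: a NAS score of 100 represents the full time of one nurse over 24 hours, so each point corresponds to 1440/100 = 14.4 minutes.

```python
# Worked check of the NAS conversion quoted above: a score of 100 equals one
# nurse's full 24 h (1440 min), so one point = 1440 / 100 = 14.4 minutes.
mean_nas = 91.75
minutes_per_point = 1440 / 100          # 14.4 min per NAS point
workload_minutes = mean_nas * minutes_per_point
print(f"{workload_minutes:.1f} min = {workload_minutes / 60:.1f} h of care")
# -> 1321.2 min, i.e. roughly the 22 working hours reported in the study.
```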

  7. Mitigating clogging and arrest in confined self-propelled systems

    NASA Astrophysics Data System (ADS)

    Savoie, William; Aguilar, Jeffrey; Monaenkova, Daria; Linevich, Vadim; Goldman, Daniel

    Ensembles of self-propelling elements, like colloidal surfers, bacterial biofilms, and robot swarms, can spontaneously form density heterogeneities. To understand how to prevent potentially catastrophic clogs in task-oriented active matter systems (like soil-excavating robots), we present a robophysical study of the excavation of granular media in a confined environment. We probe the efficacy of two social strategies observed in our studies of fire ants (S. invicta). The first behavior (denoted "unequal workload") prescribes to each excavator a different probability of entering the digging area. The second behavior (denoted "reversal") is characterized by a probability of forfeiting excavation when progress is sufficiently obstructed. For an equal workload distribution and no reversal behavior, clogs at the digging site prevent excavation for sufficient numbers of robots. Measurements of aggregation relaxation times reveal how the strategies mitigate clogs. The unequal workload behavior reduces the tunnel density, decreasing the probability of clog formation. Reversal behavior, while allowing clogs to form, reduces the aggregation relaxation time. We posit that the application of social behaviors can be useful for swarm robot systems where global control and organization may not be possible.
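
    A toy Monte Carlo, with invented probabilities and a single-slot tunnel model, can illustrate why the two strategies help; this is purely a sketch of the qualitative mechanism, not the robophysical experiment itself.

```python
# Toy Monte Carlo of the two social strategies described above. The tunnel
# is modeled as clogging whenever too many robots try to enter at once;
# probabilities, capacities, and sizes are invented for illustration only.
import random

def run(n_robots, steps, entry_probs, reversal_prob=0.0, capacity=2, seed=1):
    """Return the fraction of steps on which digging actually progressed."""
    rng = random.Random(seed)
    progressed = 0
    for _ in range(steps):
        entrants = sum(rng.random() < p for p in entry_probs[:n_robots])
        if entrants == 0:
            continue
        if entrants <= capacity:
            progressed += 1              # free flow: work gets done
        elif rng.random() < reversal_prob:
            progressed += 1              # some robots give up; clog clears
    return progressed / steps

equal   = [0.5] * 8                      # equal workload: everyone digs
unequal = [0.9, 0.9, 0.6, 0.3, 0.1, 0.05, 0.05, 0.05]  # a few do most digging

print("equal workload:   ", run(8, 10_000, equal))
print("unequal workload: ", run(8, 10_000, unequal))
print("equal + reversal: ", run(8, 10_000, equal, reversal_prob=0.5))
```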

  8. Contributions of Heterogeneous Ice Nucleation, Large-Scale Circulation, and Shallow Cumulus Detrainment to Cloud Phase Transition in Mixed-Phase Clouds with NCAR CAM5

    NASA Astrophysics Data System (ADS)

    Liu, X.; Wang, Y.; Zhang, D.; Wang, Z.

    2016-12-01

    Mixed-phase clouds consisting of both liquid and ice water occur frequently at high-latitudes and in mid-latitude storm track regions. This type of clouds has been shown to play a critical role in the surface energy balance, surface air temperature, and sea ice melting in the Arctic. Cloud phase partitioning between liquid and ice water determines the cloud optical depth of mixed-phase clouds because of distinct optical properties of liquid and ice hydrometeors. The representation and simulation of cloud phase partitioning in state-of-the-art global climate models (GCMs) are associated with large biases. In this study, the cloud phase partition in mixed-phase clouds simulated from the NCAR Community Atmosphere Model version 5 (CAM5) is evaluated against satellite observations. Observation-based supercooled liquid fraction (SLF) is calculated from CloudSat, MODIS and CPR radar detected liquid and ice water paths for clouds with cloud-top temperatures between -40 and 0°C. Sensitivity tests with CAM5 are conducted for different heterogeneous ice nucleation parameterizations with respect to aerosol influence (Wang et al., 2014), different phase transition temperatures for detrained cloud water from shallow convection (Kay et al., 2016), and different CAM5 model configurations (free-run versus nudged winds and temperature, Zhang et al., 2015). A classical nucleation theory-based ice nucleation parameterization in mixed-phase clouds increases the SLF especially at temperatures colder than -20°C, and significantly improves the model agreement with observations in the Arctic. The change of transition temperature for detrained cloud water increases the SLF at higher temperatures and improves the SLF mostly over the Southern Ocean. Even with the improved SLF from the ice nucleation and shallow cumulus detrainment, the low SLF biases in some regions can only be improved through the improved circulation with the nudging technique. Our study highlights the challenges of representations of large-scale moisture transport, cloud microphysics, ice nucleation, and cumulus detrainment in order to improve the mixed-phase transition in GCMs.

  9. A Parameterization for Land-Atmosphere-Cloud Exchange (PLACE): Documentation and Testing of a Detailed Process Model of the Partly Cloudy Boundary Layer over Heterogeneous Land.

    NASA Astrophysics Data System (ADS)

    Wetzel, Peter J.; Boone, Aaron

    1995-07-01

    This paper presents a general description of, and demonstrates the capabilities of, the Parameterization for Land-Atmosphere-Cloud Exchange (PLACE). The PLACE model is a detailed process model of the partly cloudy atmospheric boundary layer and underlying heterogeneous land surfaces. In its development, particular attention has been given to three of the model's subprocesses: the prediction of boundary layer cloud amount, the treatment of surface and soil subgrid heterogeneity, and the liquid water budget. The model includes a three-parameter nonprecipitating cumulus model that feeds back to the surface and boundary layer through radiative effects. Surface heterogeneity in the PLACE model is treated both statistically and by resolving explicit subgrid patches. The model maintains a vertical column of liquid water that is divided into seven reservoirs, from the surface interception store down to bedrock. Five single-day demonstration cases are presented, in which the PLACE model was initialized, run, and compared to field observations from four diverse sites. The model is shown to predict cloud amount well in these cases, while predicting the surface fluxes with similar accuracy. A slight tendency to underpredict boundary layer depth is noted in all cases. Sensitivity tests were also run using anemometer-level forcing provided by the Project for Intercomparison of Land-surface Parameterization Schemes (PILPS). The purpose is to demonstrate the relative impact of the heterogeneity of surface parameters on the predicted annual mean surface fluxes. Significant sensitivity to the subgrid variability of certain parameters is demonstrated, particularly parameters related to soil moisture. A major result is that the PLACE-computed impact of total (homogeneous) deforestation of a rain forest is comparable in magnitude to the effect of imposing heterogeneity in certain surface variables, and is similarly comparable to the overall variance among the other PILPS participant models. Were this result to be borne out by further analysis, it would suggest that today's average land surface parameterization has little credibility when applied to discriminating the local impacts of any plausible future climate change.

  10. CHEMICAL HETEROGENEITY AMONG CLOUD DROP POPULATIONS AND ITS INFLUENCE ON AEROSOL PROCESSING IN WINTER CLOUDS. (R823979)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  11. THE INFLUENCE OF CHEMICAL HETEROGENEITY AMONG CLOUD DROP POPULATIONS ON AEROSOL PROCESSING IN WINTER CLOUDS. (R823979)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  12. THE INFLUENCE OF CHEMICAL HETEROGENEITY AMONG CLOUD DROP POPULATIONS ON AEROSOL PROCESSING IN WINTER CLOUDS. (U915364)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  13. Graph Partitioning for Parallel Applications in Heterogeneous Grid Environments

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Kumar, Shailendra; Das, Sajal K.; Biegel, Bryan (Technical Monitor)

    2002-01-01

    The problem of partitioning irregular graphs and meshes for parallel computations on homogeneous systems has been extensively studied. However, these partitioning schemes fail when the target system architecture exhibits heterogeneity in resource characteristics. With the emergence of technologies such as the Grid, it is imperative to study the partitioning problem taking into consideration the differing capabilities of such distributed heterogeneous systems. In our model, the heterogeneous system consists of processors with varying processing power and an underlying non-uniform communication network. We present in this paper a novel multilevel partitioning scheme for irregular graphs and meshes that takes into account issues pertinent to Grid computing environments. Our partitioning algorithm, called MiniMax, generates and maps partitions onto a heterogeneous system with the objective of minimizing the maximum execution time of the parallel distributed application. For an experimental performance study, we have considered both a realistic mesh problem from NASA and synthetic workloads. Simulation results demonstrate that MiniMax generates high-quality partitions for various classes of applications targeted for parallel execution in a distributed heterogeneous environment.
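
    The MiniMax objective — minimize the maximum completion time over heterogeneous processors — can be illustrated with a greedy toy mapping that ignores the communication network the real multilevel partitioner also models.

```python
# Greedy toy illustration of the min-max objective described above: map
# work items onto processors of different speeds so that the slowest
# processor's finish time (the makespan) stays small. The actual MiniMax
# algorithm is a multilevel graph partitioner that also accounts for
# non-uniform communication links; this sketch ignores communication.
import heapq

def minimax_map(partition_work, processor_speeds):
    """Assign work items (largest first) to the processor with the earliest
    current finish time, tracking each processor's accumulated load."""
    heap = [(0.0, i) for i in range(len(processor_speeds))]  # (finish, proc)
    heapq.heapify(heap)
    assignment = {}
    for k, work in sorted(enumerate(partition_work), key=lambda kv: -kv[1]):
        finish, proc = heapq.heappop(heap)
        finish += work / processor_speeds[proc]
        assignment[k] = proc
        heapq.heappush(heap, (finish, proc))
    return assignment, max(t for t, _ in heap)

assignment, makespan = minimax_map([8, 6, 5, 4, 2], processor_speeds=[2.0, 1.0, 1.0])
print(assignment, f"makespan={makespan:.2f}")
```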

  14. ATLAS WORLD-cloud and networking in PanDA

    NASA Astrophysics Data System (ADS)

    Barreiro Megino, F.; De, K.; Di Girolamo, A.; Maeno, T.; Walker, R.; ATLAS Collaboration

    2017-10-01

    The ATLAS computing model was originally designed as static clouds (usually national or geographical groupings of sites) around the Tier 1 centres, which confined tasks and most of the data traffic. Since those early days, the sites’ network bandwidth has increased by O(1000) and the difference in functionality between Tier 1s and Tier 2s has narrowed. After years of manual, intermediate solutions, we have now ramped up to full usage of World-cloud, the latest step in the PanDA Workload Management System to increase resource utilization on the ATLAS Grid, for all workflows (MC production, data (re)processing, etc.). We have based the development on two new site concepts. Nuclei sites are the Tier 1s and large Tier 2s, where tasks will be assigned and the output aggregated, and satellites are the sites that will execute the jobs and send the output to their nucleus. PanDA dynamically pairs nuclei and satellite sites for each task based on the input data availability, capability matching, site load and network connectivity. This contribution will introduce the conceptual changes for World-cloud, the development necessary in PanDA, an insight into the network model and the first half-year of operational experience.
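    As a sketch of the kind of multi-criteria ranking such a brokerage performs, the snippet below scores nucleus/satellite pairs by input data availability, satellite load and network connectivity; the field names and weights are hypothetical and are not taken from the PanDA code base.

        # Hypothetical nucleus/satellite pairing score in the spirit of the
        # PanDA brokerage described above; weights and fields are illustrative.
        def rank_pairs(task, nuclei, satellites):
            pairs = []
            for n in nuclei:
                for s in satellites:
                    score = (2.0 * task["input_fraction_at"].get(n["name"], 0.0)
                             - 1.0 * s["load"]  # fraction of slots already busy
                             + 0.5 * s["bandwidth_to"].get(n["name"], 0.0))
                    pairs.append((score, n["name"], s["name"]))
            return sorted(pairs, reverse=True)

        task = {"input_fraction_at": {"CERN": 0.8}}
        nuclei = [{"name": "CERN"}, {"name": "BNL"}]
        satellites = [
            {"name": "DESY", "load": 0.4, "bandwidth_to": {"CERN": 0.9}},
            {"name": "LRZ", "load": 0.9, "bandwidth_to": {"BNL": 0.7}},
        ]
        print(rank_pairs(task, nuclei, satellites)[0])  # best-scoring pair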

  15. An Application-Based Performance Evaluation of NASAs Nebula Cloud Computing Platform

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Heistand, Steve; Jin, Haoqiang; Chang, Johnny; Hood, Robert T.; Mehrotra, Piyush; Biswas, Rupak

    2012-01-01

    The high performance computing (HPC) community has shown tremendous interest in exploring cloud computing and the potential benefits it promises. In this paper, we examine the feasibility, performance, and scalability of production-quality scientific and engineering applications of interest to NASA on NASA's cloud computing platform, called Nebula, hosted at Ames Research Center. This work presents a comprehensive evaluation of Nebula using NUTTCP, HPCC, NPB, I/O, and MPI function benchmarks as well as four applications representative of the NASA HPC workload. Specifically, we compare Nebula performance on some of these benchmarks and applications to that of NASA's Pleiades supercomputer, a traditional HPC system. We also investigate the impact of virtIO and jumbo frames on interconnect performance. Overall results indicate that on Nebula (i) virtIO and jumbo frames improve network bandwidth by a factor of 5x, (ii) there is a significant virtualization layer overhead of about 10% to 25%, (iii) write performance is lower by a factor of 25x, (iv) latency for short MPI messages is very high, and (v) overall performance is 15% to 48% lower than that on Pleiades for NASA HPC applications. We also comment on the usability of the cloud platform.

  16. Performance Evaluation of Resource Management in Cloud Computing Environments.

    PubMed

    Batista, Bruno Guazzelli; Estrella, Julio Cezar; Ferreira, Carlos Henrique Gomes; Filho, Dionisio Machado Leite; Nakamura, Luis Hideo Vasconcelos; Reiff-Marganiec, Stephan; Santana, Marcos José; Santana, Regina Helena Carlucci

    2015-01-01

    Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price.
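    A minimal sketch of the on-the-fly scaling decision such a module faces is given below, assuming a single scalable resource dimension and a response-time SLA; the thresholds and names are illustrative and are not the paper's module.

        # Toy SLA-driven scaling rule: grow when the SLA is breached, shrink
        # when there is comfortable headroom, otherwise hold (and hold price).
        def scaling_decision(observed_rt, sla_rt, current_units,
                             max_units=16, headroom=0.6):
            if observed_rt > sla_rt and current_units < max_units:
                return current_units + 1  # SLA violated: add capacity
            if observed_rt < headroom * sla_rt and current_units > 1:
                return current_units - 1  # well under SLA: release capacity
            return current_units

        print(scaling_decision(observed_rt=2.4, sla_rt=2.0, current_units=4))  # -> 5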

  17. Performance Evaluation of Resource Management in Cloud Computing Environments

    PubMed Central

    Batista, Bruno Guazzelli; Estrella, Julio Cezar; Ferreira, Carlos Henrique Gomes; Filho, Dionisio Machado Leite; Nakamura, Luis Hideo Vasconcelos; Reiff-Marganiec, Stephan; Santana, Marcos José; Santana, Regina Helena Carlucci

    2015-01-01

    Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price. PMID:26555730

  18. Classification of Patient Care Complexity: Cloud Technology.

    PubMed

    de Oliveira Riboldi, Caren; Macedo, Andrea Barcellos Teixeira; Mergen, Thiane; Dias, Vera Lúcia Mendes; da Costa, Diovane Ghignatti; Malvezzi, Maria Luiza Falsarella; Magalhães, Ana Maria Muller; Silveira, Denise Tolfo

    2016-01-01

    Presentation of the computerized structure used to implement, in a university hospital in the south of Brazil, the Patient Classification System of Perroca, which categorizes patients according to care complexity. This solution also aims to corroborate a recent study at the hospital, which showed that the increasing workload is directly related to the institutional quality indicators. The tools used were high-productivity Google applications, combining the domain knowledge of nursing professionals with that of information technology professionals.

  19. Laboratory measurements of heterogeneous CO2 ice nucleation on nanoparticles under conditions relevant to the Martian mesosphere

    NASA Astrophysics Data System (ADS)

    Nachbar, Mario; Duft, Denis; Mangan, Thomas Peter; Martin, Juan Carlos Gomez; Plane, John M. C.; Leisner, Thomas

    2016-05-01

    Clouds of CO2 ice particles have been observed in the Martian mesosphere. These clouds are believed to be formed through heterogeneous nucleation of CO2 on nanometer-sized meteoric smoke particles (MSPs) or upward propagated Martian dust particles (MDPs). Large uncertainties still exist in parameterizing the microphysical formation process of these clouds as key physicochemical parameters are not well known. We present measurements on the nucleation and growth of CO2 ice on sub-4 nm radius iron oxide and silica particles representing MSPs at conditions close to the mesosphere of Mars. For both particle materials we determine the desorption energy of CO2 to be ΔFdes = (18.5 ± 0.2) kJ mol⁻¹, corresponding to ΔFdes = (0.192 ± 0.002) eV, and obtain m = 0.78 ± 0.02 for the contact parameter that governs heterogeneous nucleation by analyzing the measurements using classical heterogeneous nucleation theory. We did not find any temperature dependence for the contact parameter in the temperature range examined (64 K to 73 K). By applying these values for MSPs in the Martian mesosphere, we derive characteristic temperatures for the onset of CO2 ice nucleation, which are 8-18 K below the CO2 frost point temperature, depending on particle size. This is in line with the occurrence of highly supersaturated conditions extending to 20 K below the frost point temperature without the observation of clouds. Moreover, the sticking coefficient of CO2 on solid CO2 was determined to be near unity. We further argue that the same parameters can be applied to CO2 nucleation on upward propagated MDPs.
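    For context, in classical heterogeneous nucleation theory the contact parameter m enters through a geometric factor that reduces the homogeneous nucleation barrier. A standard statement of this relation in the flat-substrate limit (the full expression for nanometre-sized particles also depends on the ratio of particle radius to critical germ radius) is:

        \Delta G_{\mathrm{het}} = f(m)\, \Delta G_{\mathrm{hom}},
        \qquad
        f(m) = \frac{(2 + m)(1 - m)^2}{4},
        \qquad
        J_{\mathrm{het}} \propto \exp\!\left( -\frac{\Delta G_{\mathrm{het}}}{k_B T} \right).

    For the reported m = 0.78, f(m) ≈ 0.03, i.e., the particle surface lowers the free-energy barrier to roughly 3% of its homogeneous value.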

  20. The role of human-automation consensus in multiple unmanned vehicle scheduling.

    PubMed

    Cummings, M L; Clare, Andrew; Hart, Christin

    2010-02-01

    This study examined the impact of increasing automation replanning rates on operator performance and workload when supervising a decentralized network of heterogeneous unmanned vehicles. Futuristic unmanned vehicle systems will invert the operator-to-vehicle ratio so that one operator can control multiple dissimilar vehicles connected through a decentralized network. Significant human-automation collaboration will be needed because of automation brittleness, but such collaboration could cause high workload. Three increasing levels of replanning were tested on an existing multiple unmanned vehicle simulation environment that leverages decentralized algorithms for vehicle routing and task allocation in conjunction with human supervision. Rapid replanning can cause high operator workload, ultimately resulting in poorer overall system performance. Poor performance was associated with a lack of operator consensus for when to accept the automation's suggested prompts for new plan consideration, as well as with negative attitudes toward unmanned aerial vehicles in general. Participants with video game experience tended to collaborate more with the automation, which resulted in better performance. In decentralized unmanned vehicle networks, operators who ignore the automation's requests for new plan consideration and impose rapid replans both increase their own workload and reduce the ability of the vehicle network to operate at its maximum capacity. These findings have implications for personnel selection and training for futuristic systems involving human collaboration with decentralized algorithms embedded in networks of autonomous systems.

  1. CloudDOE: a user-friendly tool for deploying Hadoop clouds and analyzing high-throughput sequencing data with MapReduce.

    PubMed

    Chung, Wei-Chun; Chen, Chien-Chih; Ho, Jan-Ming; Lin, Chung-Yen; Hsu, Wen-Lian; Wang, Yu-Chun; Lee, D T; Lai, Feipei; Huang, Chih-Wei; Chang, Yu-Jung

    2014-01-01

    Explosive growth of next-generation sequencing data has resulted in ultra-large-scale data sets and ensuing computational problems. Cloud computing provides an on-demand and scalable environment for large-scale data analysis. Using a MapReduce framework, data and workload can be distributed via a network to computers in the cloud to substantially reduce computational latency. Hadoop/MapReduce has been successfully adopted in bioinformatics for genome assembly, mapping reads to genomes, and finding single nucleotide polymorphisms. Major cloud providers offer Hadoop cloud services to their users. However, it remains technically challenging to deploy a Hadoop cloud for those who prefer to run MapReduce programs in a cluster without built-in Hadoop/MapReduce. We present CloudDOE, a platform-independent software package implemented in Java. CloudDOE encapsulates technical details behind a user-friendly graphical interface, thus liberating scientists from having to perform complicated operational procedures. Users are guided through the user interface to deploy a Hadoop cloud within in-house computing environments and to run applications specifically targeted for bioinformatics, including CloudBurst, CloudBrush, and CloudRS. One may also use CloudDOE on top of a public cloud. CloudDOE consists of three wizards, i.e., Deploy, Operate, and Extend wizards. Deploy wizard is designed to aid the system administrator to deploy a Hadoop cloud. It installs Java runtime environment version 1.6 and Hadoop version 0.20.203, and initiates the service automatically. Operate wizard allows the user to run a MapReduce application on the dashboard list. To extend the dashboard list, the administrator may install a new MapReduce application using Extend wizard. CloudDOE is a user-friendly tool for deploying a Hadoop cloud. Its smart wizards substantially reduce the complexity and costs of deployment, execution, enhancement, and management. Interested users may collaborate to improve the source code of CloudDOE to further incorporate more MapReduce bioinformatics tools into CloudDOE and support next-generation big data open source tools, e.g., Hadoop BigTop and Spark. CloudDOE is distributed under Apache License 2.0 and is freely available at http://clouddoe.iis.sinica.edu.tw/.
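    For readers unfamiliar with the MapReduce programs such a deployed Hadoop cloud runs, a generic Hadoop Streaming word-count pair is sketched below; it is our illustration, not part of CloudDOE or its bundled applications.

        # mapper.py -- emits one (word, 1) pair per token on stdin
        import sys
        for line in sys.stdin:
            for word in line.split():
                print("%s\t1" % word)

        # reducer.py -- input arrives grouped and sorted by key after the shuffle
        import sys
        current, count = None, 0
        for line in sys.stdin:
            word, n = line.rsplit("\t", 1)
            if word != current:
                if current is not None:
                    print("%s\t%d" % (current, count))
                current, count = word, 0
            count += int(n)
        if current is not None:
            print("%s\t%d" % (current, count))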

  2. CloudDOE: A User-Friendly Tool for Deploying Hadoop Clouds and Analyzing High-Throughput Sequencing Data with MapReduce

    PubMed Central

    Chung, Wei-Chun; Chen, Chien-Chih; Ho, Jan-Ming; Lin, Chung-Yen; Hsu, Wen-Lian; Wang, Yu-Chun; Lee, D. T.; Lai, Feipei; Huang, Chih-Wei; Chang, Yu-Jung

    2014-01-01

    Background Explosive growth of next-generation sequencing data has resulted in ultra-large-scale data sets and ensuing computational problems. Cloud computing provides an on-demand and scalable environment for large-scale data analysis. Using a MapReduce framework, data and workload can be distributed via a network to computers in the cloud to substantially reduce computational latency. Hadoop/MapReduce has been successfully adopted in bioinformatics for genome assembly, mapping reads to genomes, and finding single nucleotide polymorphisms. Major cloud providers offer Hadoop cloud services to their users. However, it remains technically challenging to deploy a Hadoop cloud for those who prefer to run MapReduce programs in a cluster without built-in Hadoop/MapReduce. Results We present CloudDOE, a platform-independent software package implemented in Java. CloudDOE encapsulates technical details behind a user-friendly graphical interface, thus liberating scientists from having to perform complicated operational procedures. Users are guided through the user interface to deploy a Hadoop cloud within in-house computing environments and to run applications specifically targeted for bioinformatics, including CloudBurst, CloudBrush, and CloudRS. One may also use CloudDOE on top of a public cloud. CloudDOE consists of three wizards, i.e., Deploy, Operate, and Extend wizards. Deploy wizard is designed to aid the system administrator to deploy a Hadoop cloud. It installs Java runtime environment version 1.6 and Hadoop version 0.20.203, and initiates the service automatically. Operate wizard allows the user to run a MapReduce application on the dashboard list. To extend the dashboard list, the administrator may install a new MapReduce application using Extend wizard. Conclusions CloudDOE is a user-friendly tool for deploying a Hadoop cloud. Its smart wizards substantially reduce the complexity and costs of deployment, execution, enhancement, and management. Interested users may collaborate to improve the source code of CloudDOE to further incorporate more MapReduce bioinformatics tools into CloudDOE and support next-generation big data open source tools, e.g., Hadoop BigTop and Spark. Availability: CloudDOE is distributed under Apache License 2.0 and is freely available at http://clouddoe.iis.sinica.edu.tw/. PMID:24897343

  3. Heterogeneous ice nucleation of α-pinene SOA particles before and after ice cloud processing

    NASA Astrophysics Data System (ADS)

    Wagner, Robert; Höhler, Kristina; Huang, Wei; Kiselev, Alexei; Möhler, Ottmar; Mohr, Claudia; Pajunoja, Aki; Saathoff, Harald; Schiebel, Thea; Shen, Xiaoli; Virtanen, Annele

    2017-05-01

    The ice nucleation ability of α-pinene secondary organic aerosol (SOA) particles was investigated at temperatures between 253 and 205 K in the Aerosol Interaction and Dynamics in the Atmosphere cloud simulation chamber. Pristine SOA particles were nucleated and grown from pure gas precursors and then subjected to repeated expansion cooling cycles to compare their intrinsic ice nucleation ability during the first nucleation event with that observed after ice cloud processing. The unprocessed α-pinene SOA particles were found to be inefficient ice-nucleating particles at cirrus temperatures, with nucleation onsets (for an activated fraction of 0.1%) as high as for the homogeneous freezing of aqueous solution droplets. Ice cloud processing at temperatures below 235 K only marginally improved the particles' ice nucleation ability and did not significantly alter their morphology. In contrast, the particles' morphology and ice nucleation ability were substantially modified upon ice cloud processing in a simulated convective cloud system, where the α-pinene SOA particles were first activated to supercooled cloud droplets and then froze homogeneously at about 235 K. As evidenced by electron microscopy, the α-pinene SOA particles adopted a highly porous morphology during such a freeze-drying cycle. When probing the freeze-dried particles in succeeding expansion cooling runs in the mixed-phase cloud regime up to 253 K, the increase in relative humidity led to a collapse of the porous structure. Heterogeneous ice formation was observed after the droplet activation of the collapsed, freeze-dried SOA particles, presumably caused by ice remnants in the highly viscous material or the larger surface area of the particles.

  4. Heterogeneous Ice Nucleation Ability of NaCl and Sea Salt Aerosol Particles at Cirrus Temperatures

    NASA Astrophysics Data System (ADS)

    Wagner, Robert; Kaufmann, Julia; Möhler, Ottmar; Saathoff, Harald; Schnaiter, Martin; Ullrich, Romy; Leisner, Thomas

    2018-03-01

    In situ measurements of the composition of heterogeneous cirrus ice cloud residuals have indicated a substantial contribution of sea salt in sampling regions above the ocean. We have investigated the heterogeneous ice nucleation ability of sodium chloride (NaCl) and sea salt aerosol (SSA) particles at cirrus cloud temperatures between 235 and 200 K in the Aerosol Interaction and Dynamics in the Atmosphere aerosol and cloud chamber. Effloresced NaCl particles were found to act as ice nucleating particles in the deposition nucleation mode at temperatures below about 225 K, with freezing onsets in terms of the ice saturation ratio, Sice, between 1.28 and 1.40. Above 225 K, the crystalline NaCl particles deliquesced and nucleated ice homogeneously. The heterogeneous ice nucleation efficiency was rather similar for the two crystalline forms of NaCl (anhydrous NaCl and NaCl dihydrate). Mixed-phase (solid/liquid) SSA particles were found to act as ice nucleating particles in the immersion freezing mode at temperatures below about 220 K, with freezing onsets in terms of Sice between 1.24 and 1.42. Above 220 K, the SSA particles fully deliquesced and nucleated ice homogeneously. Ice nucleation active surface site densities of the SSA particles were found to be in the range between 1.0 · 10¹⁰ and 1.0 · 10¹¹ m⁻² at T < 220 K. These values are of the same order of magnitude as ice nucleation active surface site densities recently determined for desert dust, suggesting a potential contribution of SSA particles to low-temperature heterogeneous ice nucleation in the atmosphere.
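    The ice nucleation active surface site (INAS) densities quoted above are conventionally derived from the ice-activated particle fraction and the surface area per particle; a common form of the definition (our summary of the standard relation, not necessarily the exact reduction used in this study) is:

        n_s(T, S_{\mathrm{ice}}) = -\frac{\ln\left( 1 - f_{\mathrm{ice}} \right)}{A_p}
        \;\approx\; \frac{f_{\mathrm{ice}}}{A_p}
        \quad \text{for } f_{\mathrm{ice}} \ll 1,

    where f_ice is the fraction of particles that nucleate ice and A_p is the surface area per particle.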

  5. Application of an online-coupled regional climate model, WRF-CAM5, over East Asia for examination of ice nucleation schemes. Part II. Sensitivity to heterogeneous ice nucleation parameterizations and dust emissions

    DOE PAGES

    Zhang, Yang; Chen, Ying; Fan, Jiwen; ...

    2015-09-14

    Aerosol particles can affect cloud microphysical properties by serving as ice nuclei (IN). Large uncertainties exist in the ice nucleation parameterizations (INPs) used in current climate models. In this Part II paper, to examine the sensitivity of the model predictions to different heterogeneous INPs, a WRF-CAM5 simulation using the INP of Niemand et al. (N12) [1] is conducted over East Asia for two full years, 2006 and 2011, and compared with a simulation using the INP of Meyers et al. (M92) [2], which is the original INP used in CAM5. M92 calculates the nucleated ice particle concentration as a function of ice supersaturation, while N12 represents the nucleated ice particle concentration as a function of temperature and the number concentrations and surface areas of dust particles. Compared to M92, the WRF-CAM5 simulation with N12 produces significantly higher nucleated ice crystal number concentrations (ICNCs) in the northern domain where dust sources are located, leading to significantly higher cloud ice number and mass concentrations and ice water path, but the opposite is true in the southern domain where temperature and moisture play a more important role in ice formation. Overall, the simulation with N12 gives lower downward shortwave radiation but higher downward longwave radiation, cloud liquid water path, cloud droplet number concentrations, and cloud optical depth. The increase in cloud optical depth and the decrease in downward solar flux result in stronger shortwave and longwave cloud forcing and decrease 2-m temperature and precipitation. Changes in temperature and radiation lower surface concentrations of OH, O₃, SO₄²⁻, and PM2.5, but increase surface concentrations of CO, NO₂, and SO₂ over most of the domain. By acting as cloud condensation nuclei (CCN) and IN, dust particles have different impacts on cloud water and ice number concentrations, radiation, 2-m temperature and precipitation, depending on whether the dominant role of dust is CCN or IN. These results indicate the importance of the heterogeneous ice nucleation treatments and dust emissions in accurately simulating regional climate and air quality.
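    A compact sketch of the structural difference between the two parameterizations follows; the coefficients reproduce the published fits as they are commonly cited, but the snippet is our illustration and is not extracted from WRF-CAM5.

        # M92 depends only on ice supersaturation; N12 depends on temperature
        # and on the available dust surface area. Coefficients as commonly
        # cited for Meyers et al. (1992) and Niemand et al. (2012).
        import math

        def m92_in_per_liter(si_percent):
            # Deposition/condensation-freezing IN [L^-1] vs. ice
            # supersaturation in percent.
            return math.exp(-0.639 + 0.1296 * si_percent)

        def n12_icnc_per_m3(temp_k, dust_surface_m2_per_m3):
            # Ice-active surface site density n_s(T) [m^-2] (fit reported for
            # roughly -36 C to -12 C) times the total dust surface area.
            n_s = math.exp(-0.517 * (temp_k - 273.15) + 8.934)
            return n_s * dust_surface_m2_per_m3

        print(m92_in_per_liter(15.0))          # IN per liter at 15% supersaturation
        print(n12_icnc_per_m3(253.15, 2e-5))   # dusty case at -20 C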

  6. Application of an online-coupled regional climate model, WRF-CAM5, over East Asia for examination of ice nucleation schemes. Part II. Sensitivity to heterogeneous ice nucleation parameterizations and dust emissions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yang; Chen, Ying; Fan, Jiwen

    Aerosol particles can affect cloud microphysical properties by serving as ice nuclei (IN). Large uncertainties exist in the ice nucleation parameterizations (INPs) used in current climate models. In this Part II paper, to examine the sensitivity of the model predictions to different heterogeneous INPs, a WRF-CAM5 simulation using the INP of Niemand et al. (N12) [1] is conducted over East Asia for two full years, 2006 and 2011, and compared with a simulation using the INP of Meyers et al. (M92) [2], which is the original INP used in CAM5. M92 calculates the nucleated ice particle concentration as a function of ice supersaturation, while N12 represents the nucleated ice particle concentration as a function of temperature and the number concentrations and surface areas of dust particles. Compared to M92, the WRF-CAM5 simulation with N12 produces significantly higher nucleated ice crystal number concentrations (ICNCs) in the northern domain where dust sources are located, leading to significantly higher cloud ice number and mass concentrations and ice water path, but the opposite is true in the southern domain where temperature and moisture play a more important role in ice formation. Overall, the simulation with N12 gives lower downward shortwave radiation but higher downward longwave radiation, cloud liquid water path, cloud droplet number concentrations, and cloud optical depth. The increase in cloud optical depth and the decrease in downward solar flux result in stronger shortwave and longwave cloud forcing and decrease 2-m temperature and precipitation. Changes in temperature and radiation lower surface concentrations of OH, O₃, SO₄²⁻, and PM2.5, but increase surface concentrations of CO, NO₂, and SO₂ over most of the domain. By acting as cloud condensation nuclei (CCN) and IN, dust particles have different impacts on cloud water and ice number concentrations, radiation, 2-m temperature and precipitation, depending on whether the dominant role of dust is CCN or IN. These results indicate the importance of the heterogeneous ice nucleation treatments and dust emissions in accurately simulating regional climate and air quality.

  7. Application of an Online-Coupled Regional Climate Model, WRF-CAM5, over East Asia for Examination of Ice Nucleation Schemes: Part II. Sensitivity to Heterogeneous Ice Nucleation Parameterizations and Dust Emissions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yang; Chen, Ying; Fan, Jiwen

    Aerosol particles can affect cloud microphysical properties by serving as ice nuclei (IN). Large uncertainties exist in the ice nucleation parameterizations (INPs) used in current climate models. In this Part II paper, to examine the sensitivity of the model predictions to different heterogeneous INPs, a WRF-CAM5 simulation using the INP of Niemand et al. (N12) [1] is conducted over East Asia for two full years, 2006 and 2011, and compared with a simulation using the INP of Meyers et al. (M92) [2], which is the original INP used in CAM5. M92 calculates the nucleated ice particle concentration as a function of ice supersaturation, while N12 represents the nucleated ice particle concentration as a function of temperature and the number concentrations and surface areas of dust particles. Compared to M92, the WRF-CAM5 simulation with N12 produces significantly higher nucleated ice crystal number concentrations (ICNCs) in the northern domain where dust sources are located, leading to significantly higher cloud ice number and mass concentrations and ice water path, but the opposite is true in the southern domain where temperature and moisture play a more important role in ice formation. Overall, the simulation with N12 gives lower downward shortwave radiation but higher downward longwave radiation, cloud liquid water path, cloud droplet number concentrations, and cloud optical depth. The increase in cloud optical depth and the decrease in downward solar flux result in stronger shortwave and longwave cloud forcing and decrease 2-m temperature and precipitation. Changes in temperature and radiation lower surface concentrations of OH, O₃, SO₄²⁻, and PM2.5, but increase surface concentrations of CO, NO₂, and SO₂ over most of the domain. By acting as cloud condensation nuclei (CCN) and IN, dust particles have different impacts on cloud water and ice number concentrations, radiation, 2-m temperature and precipitation, depending on whether the dominant role of dust is CCN or IN. These results indicate the importance of the heterogeneous ice nucleation treatments and dust emissions in accurately simulating regional climate and air quality.

  8. Cloud-based mobility management in heterogeneous wireless networks

    NASA Astrophysics Data System (ADS)

    Kravchuk, Serhii; Minochkin, Dmytro; Omiotek, Zbigniew; Bainazarov, Ulan; Weryńska-Bieniasz, RóŻa; Iskakova, Aigul

    2017-08-01

    Mobility management is the key feature that supports the roaming of users between different systems. Handover is the essential aspect in the development of solutions supporting mobility scenarios. The handover process becomes more complex in a heterogeneous environment than in a homogeneous one. Seamlessness and low delay in servicing handover calls, which can reduce the handover dropping probability, also require complex algorithms to provide the desired QoS for mobile users. The challenging problem of increasing the scalability and availability of handover decision mechanisms is discussed. The aim of the paper is to propose a cloud-based handover-as-a-service concept to cope with the challenges that arise.

  9. Experimental study on the minimum ignition temperature of coal dust clouds in oxy-fuel combustion atmospheres.

    PubMed

    Wu, Dejian; Norman, Frederik; Verplaetsen, Filip; Van den Bulck, Eric

    2016-04-15

    BAM furnace apparatus tests were conducted to investigate the minimum ignition temperature of coal dust clouds (MITC) in O2/CO2 atmospheres with an O2 mole fraction from 20 to 50%. Three coal dusts were tested: Indonesian Sebuku coal, Pittsburgh No. 8 coal and South African coal. Experimental results showed that the dust explosion risk increases significantly with increasing O2 mole fraction, which dramatically reduces the minimum ignition temperature of the three tested coal dust clouds (by up to 100°C). Compared with conventional combustion, the inhibiting effect of CO2 was found to be comparatively large in dust clouds, particularly for the coal dusts with high volatile content. The retardation effect of the moisture content on the ignition of dust clouds was also found to be pronounced. In addition, a modified steady-state mathematical model based on heterogeneous reaction was proposed to interpret the observed experimental phenomena and to estimate the ignition mechanism of coal dust clouds under minimum ignition temperature conditions. The analysis revealed that heterogeneous ignition dominates the ignition mechanism for sub-/bituminous coal dusts under minimum ignition temperature conditions, but the decrease of coal maturity facilitates homogeneous ignition. These results improve our understanding of the ignition behaviour and the explosion risk of coal dust clouds in oxy-fuel combustion atmospheres.

  10. TOSCA-based orchestration of complex clusters at the IaaS level

    NASA Astrophysics Data System (ADS)

    Caballer, M.; Donvito, G.; Moltó, G.; Rocha, R.; Velten, M.

    2017-10-01

    This paper describes the adoption and extension of the TOSCA standard by the INDIGO-DataCloud project for the definition and deployment of complex computing clusters, together with the required support in both OpenStack and OpenNebula, carried out in close collaboration with industry partners such as IBM. Two examples of these clusters are described in this paper: the definition of an elastic computing cluster to support the Galaxy bioinformatics application, where nodes are dynamically added to and removed from the cluster to adapt to the workload, and the definition of a scalable Apache Mesos cluster for the execution of batch jobs and support for long-running services. The coupling of TOSCA with Ansible Roles to perform automated installation has resulted in the definition of high-level, deterministic templates to provision complex computing clusters across different Cloud sites.

  11. A FIRE-ACE/SHEBA Case Study of Mixed-Phase Arctic Boundary Layer Clouds: Entrainment Rate Limitations on Rapid Primary Ice Nucleation Processes

    NASA Technical Reports Server (NTRS)

    Fridlind, Ann; van Diedenhoven, Bastiaan; Ackerman, Andrew S.; Avramov, Alexander; Mrowiec, Agnieszka; Morrison, Hugh; Zuidema, Paquita; Shupe, Matthew D.

    2012-01-01

    Observations of long-lived mixed-phase Arctic boundary layer clouds on 7 May 1998 during the First International Satellite Cloud Climatology Project (ISCCP) Regional Experiment (FIRE) Arctic Cloud Experiment (ACE)/Surface Heat Budget of the Arctic Ocean (SHEBA) campaign provide a unique opportunity to test understanding of cloud ice formation. Under the microphysically simple conditions observed (apparently negligible ice aggregation, sublimation, and multiplication), the only expected source of new ice crystals is activation of heterogeneous ice nuclei (IN) and the only sink is sedimentation. Large-eddy simulations with size-resolved microphysics are initialized with the IN number concentration N(sub IN) measured above cloud top, but details of IN activation behavior are unknown. If activated rapidly (in deposition, condensation, or immersion modes), as commonly assumed, IN are depleted from the well-mixed boundary layer within minutes. The quasi-equilibrium ice number concentration N(sub i) is then limited to a small fraction of the overlying N(sub IN) that is determined by the cloud-top entrainment rate w(sub e) divided by the number-weighted ice fall speed at the surface v(sub f). Because w(sub e) < 1 cm/s and v(sub f) > 10 cm/s, N(sub i)/N(sub IN) << 1. Such conditions may be common for this cloud type, which has implications for modeling IN diagnostically, interpreting measurements, and quantifying sensitivity to increasing N(sub IN) (when w(sub e)/v(sub f) < 1, entrainment rate limitations serve to buffer cloud system response). To reproduce observed ice crystal size distributions and cloud radar reflectivities with rapidly consumed IN in this case, the measured above-cloud N(sub IN) must be multiplied by approximately 30. However, results are sensitive to assumed ice crystal properties not constrained by measurements. In addition, the simulations do not reproduce the pronounced mesoscale heterogeneity in radar reflectivity that is observed.
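    The budget argument above can be written compactly: with IN activated rapidly upon entrainment into the layer and ice removed by sedimentation, a steady state requires the entrainment source to balance the fall-speed sink,

        w_e\, N_{IN} = v_f\, N_i
        \quad\Longrightarrow\quad
        \frac{N_i}{N_{IN}} = \frac{w_e}{v_f}
        < \frac{1\ \mathrm{cm\, s^{-1}}}{10\ \mathrm{cm\, s^{-1}}} = 0.1,

    consistent with the N(sub i)/N(sub IN) << 1 regime described in the abstract.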

  12. Influence of Saharan dust on cloud glaciation in southern Morocco during the Saharan Mineral Dust Experiment

    NASA Astrophysics Data System (ADS)

    Ansmann, A.; Tesche, M.; Althausen, D.; Müller, D.; Seifert, P.; Freudenthaler, V.; Heese, B.; Wiegner, M.; Pisani, G.; Knippertz, P.; Dubovik, O.

    2008-02-01

    Multiwavelength lidar, Sun photometer, and radiosonde observations were conducted at Ouarzazate (30.9°N, 6.9°W, 1133 m above sea level, asl), Morocco, in the framework of the Saharan Mineral Dust Experiment (SAMUM) in May-June 2006. The field site is close to the Saharan desert. Information on the depolarization ratio, backscatter and extinction coefficients, and lidar ratio of the dust particles, estimates of the available concentration of atmospheric ice nuclei at cloud level, profiles of temperature, humidity, and the horizontal wind vector as well as backward trajectory analysis are used to study cases of cloud formation in the dust with focus on heterogeneous ice formation. Surprisingly, most of the altocumulus clouds that form at the top of the Saharan dust layer, which reaches into heights of 4-7 km asl and has layer top temperatures of -8°C to -18°C, do not show any ice formation. According to the lidar observations the presence of a high number of ice nuclei (1-20 cm-3) does not automatically result in the obvious generation of ice particles, but the observations indicate that cloud top temperatures must typically reach values as low as -20°C before significant ice production starts. Another main finding is that liquid clouds are obviously required before ice crystals form via heterogeneous freezing mechanisms, and, as a consequence, that deposition freezing is not an important ice nucleation process. An interesting case with cloud seeding in the free troposphere above the dust layer is presented in addition. Small water clouds formed at about -30°C and produced ice virga. These virga reached water cloud layers several kilometers below the initiating cloud cells and caused strong ice production in these clouds at temperatures as high as -12°C to -15°C.

  13. The Dependence of Homo- and Heterogeneously Formed Cirrus Clouds on Latitude, Season and Surface-type based on a New CALIPSO Remote Sensing Method

    NASA Astrophysics Data System (ADS)

    Mitchell, D. L.; Garnier, A.; Mejia, J.; Avery, M. A.; Erfani, E.

    2016-12-01

    A new CALIPSO infrared retrieval method sensitive to small ice crystals has been developed to measure the temperature dependence of the layer-average number concentration N, effective diameter De and ice water content in single-layer cirrus clouds (one cloud layer in the atmospheric column) that have optical depths between 0.3 and 3.0 and cloud base temperature T < 235 K. While retrievals of low N are not accurate, mid-to-high N can be retrieved with much lower uncertainty. This enables the retrieval to estimate the dominant ice nucleation mechanism (homo- or heterogeneous, henceforth hom and het) through which the cirrus formed. Based on N, hom or het cirrus can be estimated as a function of temperature, season, latitude and surface type. The retrieved properties noted above compare favorably with spatially and temporally coincident in situ cirrus cloud measurements from SPARTICUS case studies, as well as with the extensive in situ cirrus data set of Krämer et al. (2009, ACP). For our cirrus cloud selection, these retrievals show a pronounced seasonal cycle in the Northern Hemisphere over land north of 30°N latitude in terms of both cloud amount and microphysics, with greater cloud cover, higher N and smaller De during the winter season. We postulate that this is partially due to the seasonal cycle of deep convection that replenishes the supply of ice nuclei (IN) at cirrus levels, with hom more likely when deep convection is absent. Over oceans, heterogeneous ice nucleation appears to prevail based on the lower N and higher De observed. Due to the relatively smooth ocean surface, lower-amplitude atmospheric waves at cirrus cloud levels are expected. Over land outside the tropics during winter, hom cirrus tend to occur over mountainous terrain, possibly due to lower IN concentrations and stronger, more sustained updrafts in mountain-induced waves. Over pristine Antarctica, IN concentrations are minimal and the terrain near the coast is often high and rugged, allowing hom to dominate. Accordingly, over Antarctica cirrus clouds exhibit relatively high N and small De throughout the year. These retrievals allow us to parameterize De and the ice fall speed in CAM5 as a function of T, season, latitude and surface type. Our goal is to estimate the radiative impact of hom cirrus north of 30°N latitude in winter, relative to het cirrus, before the AGU Fall Meeting.

  14. An Energy-Efficient Approach to Enhance Virtual Sensors Provisioning in Sensor Clouds Environments

    PubMed Central

    Filho, Raimir Holanda; Rabêlo, Ricardo de Andrade L.; de Carvalho, Carlos Giovanni N.; Mendes, Douglas Lopes de S.; Costa, Valney da Gama

    2018-01-01

    Virtual sensors provisioning is a central issue for sensor cloud middleware, since it is responsible for selecting physical nodes, usually from Wireless Sensor Networks (WSN) of different owners, to handle users' queries or applications. Recent works perform provisioning by clustering sensor nodes based on correlated measurements and then selecting as few nodes as possible to preserve WSN energy. However, such works consider only homogeneous nodes (same set of sensors). Therefore, those works are not entirely appropriate for sensor clouds, which in most cases comprise heterogeneous sensor nodes. In this paper, we propose ACxSIMv2, an approach to enhance the provisioning task by considering heterogeneous environments. Two main algorithms form ACxSIMv2. The first one, ACASIMv1, creates multi-dimensional clusters of sensor nodes, taking into account the correlations between measurements rather than the physical distance between nodes, as most works in the literature do. Then, the second algorithm, ACOSIMv2, based on an Ant Colony Optimization system, selects an optimal set of sensor nodes to respond to users' queries while satisfying all query parameters and preserving the overall energy consumption. Results from initial experiments show that the approach significantly reduces the sensor cloud's energy consumption compared to traditional works, providing a solution to be considered in sensor cloud scenarios. PMID:29495406
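    A rough stand-in for the correlation-based grouping performed by ACASIMv1 is sketched below: nodes whose measurement series correlate above a threshold join the same cluster, so that one representative can answer for the group. This is a simplified illustration, not the published algorithm.

        # Simplified correlation clustering in the spirit of ACASIMv1.
        import numpy as np

        def correlation_clusters(series, threshold=0.9):
            # series: array of shape (n_nodes, n_samples); returns index lists.
            corr = np.corrcoef(series)
            unassigned = set(range(len(series)))
            clusters = []
            while unassigned:
                seed = unassigned.pop()
                members = [seed]
                for j in list(unassigned):
                    if corr[seed, j] >= threshold:
                        members.append(j)
                        unassigned.remove(j)
                clusters.append(members)
            return clusters

        rng = np.random.default_rng(0)
        base = rng.normal(size=100)
        series = np.stack([base + rng.normal(scale=0.05, size=100)
                           for _ in range(3)] + [rng.normal(size=100)])
        print(correlation_clusters(series))  # three correlated nodes group together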

  15. An Energy-Efficient Approach to Enhance Virtual Sensors Provisioning in Sensor Clouds Environments.

    PubMed

    Lemos, Marcus Vinícius de S; Filho, Raimir Holanda; Rabêlo, Ricardo de Andrade L; de Carvalho, Carlos Giovanni N; Mendes, Douglas Lopes de S; Costa, Valney da Gama

    2018-02-26

    Virtual sensors provisioning is a central issue for sensor cloud middleware, since it is responsible for selecting physical nodes, usually from Wireless Sensor Networks (WSN) of different owners, to handle users' queries or applications. Recent works perform provisioning by clustering sensor nodes based on correlated measurements and then selecting as few nodes as possible to preserve WSN energy. However, such works consider only homogeneous nodes (same set of sensors). Therefore, those works are not entirely appropriate for sensor clouds, which in most cases comprise heterogeneous sensor nodes. In this paper, we propose ACxSIMv2, an approach to enhance the provisioning task by considering heterogeneous environments. Two main algorithms form ACxSIMv2. The first one, ACASIMv1, creates multi-dimensional clusters of sensor nodes, taking into account the correlations between measurements rather than the physical distance between nodes, as most works in the literature do. Then, the second algorithm, ACOSIMv2, based on an Ant Colony Optimization system, selects an optimal set of sensor nodes to respond to users' queries while satisfying all query parameters and preserving the overall energy consumption. Results from initial experiments show that the approach significantly reduces the sensor cloud's energy consumption compared to traditional works, providing a solution to be considered in sensor cloud scenarios.

  16. How Heterogeneity Affects the Design of Hadoop MapReduce Schedulers: A State-of-the-Art Survey and Challenges.

    PubMed

    Pandey, Vaibhav; Saini, Poonam

    2018-06-01

    The MapReduce (MR) computing paradigm and its open source implementation Hadoop have become a de facto standard for processing big data in a distributed environment. Initially, the Hadoop system was homogeneous in three significant aspects, namely, user, workload, and cluster (hardware). However, with the growing variety of MR jobs and the inclusion of differently configured nodes in existing clusters, heterogeneity has become an essential part of Hadoop systems. The heterogeneity factors adversely affect the performance of a Hadoop scheduler and limit the overall throughput of the system. To overcome this problem, various heterogeneous Hadoop schedulers have been proposed in the literature. Existing survey works in this area mostly cover homogeneous schedulers and classify them on the basis of the quality-of-service parameters they optimize. Hence, there is a need to study heterogeneous Hadoop schedulers on the basis of the various heterogeneity factors they consider. In this survey article, we first discuss the different heterogeneity factors that typically exist in a Hadoop system and then explore various challenges that arise while designing schedulers in the presence of such heterogeneity. Afterward, we present a comparative study of the heterogeneous scheduling algorithms available in the literature and classify them by the aforementioned heterogeneity factors. Lastly, we investigate the different methods and environments used to evaluate the discussed Hadoop schedulers.

  17. Parameterization of GCM subgrid nonprecipitating cumulus and stratocumulus clouds using stochastic/phenomenological methods. Annual technical progress report, 1 December 1992--30 November 1993

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stull, R.B.

    1993-08-27

    This document is a progress report to the USDOE Atmospheric Radiation Measurement Program (ARM). The overall project goal is to relate subgrid-cumulus-cloud formation, coverage, and population characteristics to the statistical properties of surface-layer air, which in turn are modulated by heterogeneous land usage within GCM-grid-box-size regions. The motivation is to improve the understanding and prediction of climate change by more accurately describing radiative and cloud processes.

  18. Rethinking key–value store for parallel I/O optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kougkas, Anthony; Eslami, Hassan; Sun, Xian-He

    2015-01-26

    Key-value stores are being widely used as the storage system for large-scale internet services and cloud storage systems. However, they are rarely used in HPC systems, where parallel file systems are the dominant storage solution. In this study, we examine the architecture differences and performance characteristics of parallel file systems and key-value stores. We propose using key-value stores to optimize overall Input/Output (I/O) performance, especially for workloads that parallel file systems cannot handle well, such as the cases with intense data synchronization or heavy metadata operations. We conducted experiments with several synthetic benchmarks, an I/O benchmark, and a real application. We modeled the performance of these two systems using collected data from our experiments, and we provide a predictive method to identify which system offers better I/O performance given a specific workload. The results show that we can optimize the I/O performance in HPC systems by utilizing key-value stores.
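    A toy decision rule in the spirit of the paper's predictive method is sketched below: estimate the per-workload cost under each backend from a few measured parameters and pick the cheaper one. The parameter names and the linear cost model are our assumptions, not the authors' fitted model.

        # Compare a metadata-cheap key-value store against a high-bandwidth
        # parallel file system under a simple linear cost model.
        def pick_backend(n_ops, bytes_per_op, metadata_fraction,
                         pfs={"meta_cost": 2.0e-3, "bw": 5.0e9},
                         kv={"meta_cost": 1.0e-4, "bw": 1.0e9}):
            def cost(sys):
                meta = n_ops * metadata_fraction * sys["meta_cost"]
                data = n_ops * bytes_per_op / sys["bw"]
                return meta + data
            return "key-value store" if cost(kv) < cost(pfs) else "parallel file system"

        # Metadata-heavy, small-record workload favors the key-value store:
        print(pick_backend(1_000_000, 512, 0.5))
        # Large streaming writes favor the parallel file system:
        print(pick_backend(10_000, 64_000_000, 0.01))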

  19. Cirrus Horizontal Heterogeneity Effects on Cloud Optical Properties Retrieved from MODIS VNIR to TIR Channels as a Function of the Spatial Resolution

    NASA Astrophysics Data System (ADS)

    Fauchez, T.; Platnick, S. E.; Sourdeval, O.; Wang, C.; Meyer, K.; Cornet, C.; Szczap, F.

    2017-12-01

    Cirrus are an important part of the Earth radiation budget, but an assessment of their role remains highly uncertain. Cirrus optical properties such as Cloud Optical Thickness (COT) and ice crystal effective particle size (Re) are often retrieved with a combination of Visible/Near-InfraRed (VNIR) and ShortWave-InfraRed (SWIR) reflectance channels. Alternatively, Thermal InfraRed (TIR) techniques, such as the Split Window Technique (SWT), have demonstrated better sensitivity to thin cirrus. However, current satellite operational products for both retrieval methods assume that cloudy pixels are horizontally homogeneous (Plane Parallel and Homogeneous Approximation (PPHA)) and independent (Independent Pixel Approximation (IPA)). The impact of these approximations on cirrus retrievals needs to be understood and, as far as possible, corrected. Horizontal heterogeneity effects can be more easily estimated and corrected in the TIR range because they are dominated mainly by the PPA bias, which primarily depends on the subpixel heterogeneity of the COT. For solar reflectance channels, in addition to the PPHA bias, the IPA can lead to significant retrieval errors if there is large photon transport between cloudy columns, in addition to brightening and shadowing effects that are more difficult to quantify. The effects of cirrus horizontal heterogeneity are studied here for COT and Re retrievals obtained using simulated MODIS reflectances at 0.86 and 2.11 μm and radiances at 8.5, 11.0 and 12.0 μm, for spatial resolutions ranging from 50 m to 10 km. For each spatial resolution, simulated TOA reflectances and radiances are combined for cloud optical property retrievals with a research-level optimal estimation retrieval method (OEM). The impact of horizontal heterogeneity on the retrieved products is assessed for different solar geometries and various combinations of the five channels.

  20. Clouds of different colors: A prospective look at head and neck surgical resident call experience.

    PubMed

    Melzer, Jonathan

    2017-12-01

    Graduate medical education programs typically set up call under the assumption that residents will have similar experiences. The terms black cloud and white cloud have frequently been used to describe residents with more difficult (black) or less difficult (white) call experiences. This study followed residents in the department of head and neck surgery during call to determine whether certain residents have a significantly different call experience than the norm. It is a prospective observational study conducted over 16 months in a tertiary care center with a resident training program in otolaryngology. Resident call data on total pages, consults, and operative interventions were examined, as well as subjective survey data about sleep and perceived difficulty of resident call. Analysis showed no significant difference in call activity (pages, consults, operative interventions) among residents. However, data from the resident call surveys revealed perceived disparities in call difficulty that were significant. Two residents were clearly labeled as black clouds compared to the rest. These residents did not have the highest average number of pages, consults, or operative interventions. This study suggests that factors affecting call perception are outside the objective, absolute workload. These results may be used to improve resident education on sleep training and nighttime patient management in the field of otolaryngology and may influence otolaryngology residency programs.

  1. Towards an Approach of Semantic Access Control for Cloud Computing

    NASA Astrophysics Data System (ADS)

    Hu, Luokai; Ying, Shi; Jia, Xiangyang; Zhao, Kai

    With the development of cloud computing, mutual understandability among distributed Access Control Policies (ACPs) has become an important issue in the security field of cloud computing. Semantic Web technology provides the solution to semantic interoperability of heterogeneous applications. In this paper, we analyze existing access control methods and present a new Semantic Access Control Policy Language (SACPL) for describing ACPs in cloud computing environments. An Access Control Oriented Ontology System (ACOOS) is designed as the semantic basis of SACPL. The ontology-based SACPL language can effectively solve the interoperability issue of distributed ACPs. This study enriches research on applying Semantic Web technology in the security field, and provides a new way of thinking about access control in cloud computing.
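    A minimal sketch of ontology-backed access control in the spirit of SACPL/ACOOS follows: a toy subclass hierarchy stands in for the ontology, letting a policy written against a general concept match a request that uses a more specific one. The names are illustrative; the actual SACPL syntax is defined in the paper.

        # Toy subsumption-based policy check (illustrative stand-in for ACOOS).
        SUBCLASS_OF = {"Physician": "ClinicalStaff", "Nurse": "ClinicalStaff",
                       "ClinicalStaff": "Employee"}

        def is_a(role, concept):
            while role is not None:
                if role == concept:
                    return True
                role = SUBCLASS_OF.get(role)
            return False

        POLICIES = [{"subject": "ClinicalStaff", "resource": "PatientRecord",
                     "action": "read", "effect": "permit"}]

        def decide(subject_role, resource, action):
            for p in POLICIES:
                if (is_a(subject_role, p["subject"])
                        and resource == p["resource"] and action == p["action"]):
                    return p["effect"]
            return "deny"  # default-deny when no policy matches

        print(decide("Nurse", "PatientRecord", "read"))  # permit via subsumption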

  2. Cloud Bursting with GlideinWMS: Means to satisfy ever increasing computing needs for Scientific Workflows

    NASA Astrophysics Data System (ADS)

    Mhashilkar, Parag; Tiradani, Anthony; Holzman, Burt; Larson, Krista; Sfiligoi, Igor; Rynge, Mats

    2014-06-01

    Scientific communities have been at the forefront of adopting new computing technologies and methodologies. Scientific computing has influenced how science is done today, achieving breakthroughs that were impossible several decades ago. For the past decade, several such communities in the Open Science Grid (OSG) and the European Grid Infrastructure (EGI) have been using GlideinWMS to run complex application workflows and effectively share computational resources over the grid. GlideinWMS is a pilot-based workload management system (WMS) that creates, on demand, a dynamically sized overlay HTCondor batch system on grid resources. At present, the computational resources shared over the grid are just adequate to sustain the computing needs. We envision that the complexity of the science driven by "Big Data" will further push the need for computational resources. To fulfill their increasing demands and/or to run specialized workflows, some of the big communities like CMS are investigating the use of cloud computing as Infrastructure-as-a-Service (IaaS) with GlideinWMS as a potential alternative to fill the void. Similarly, communities with no previous access to computing resources can use GlideinWMS to set up a batch system on cloud infrastructure. To enable this, the architecture of GlideinWMS has been extended to support interfacing GlideinWMS with different scientific and commercial cloud providers like HLT, FutureGrid, FermiCloud and Amazon EC2. In this paper, we describe a solution for cloud bursting with GlideinWMS. The paper describes the approach, architectural changes and lessons learned while enabling support for cloud infrastructures in GlideinWMS.
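    The elasticity decision behind such cloud bursting can be sketched in a few lines: when queued demand exceeds what the grid pilots can absorb, request cloud VMs to host additional pilots. The thresholds and names below are illustrative, not GlideinWMS internals.

        # Toy burst sizing: how many cloud VMs (one pilot each) to request.
        def pilots_to_burst(idle_jobs, grid_pilots_available,
                            jobs_per_pilot=1, max_cloud_vms=100):
            shortfall = idle_jobs - grid_pilots_available * jobs_per_pilot
            if shortfall <= 0:
                return 0  # the grid alone can drain the queue
            return min(max_cloud_vms, -(-shortfall // jobs_per_pilot))  # ceil

        print(pilots_to_burst(idle_jobs=250, grid_pilots_available=180))  # -> 70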

  3. Cloud Bursting with GlideinWMS: Means to satisfy ever increasing computing needs for Scientific Workflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mhashilkar, Parag; Tiradani, Anthony; Holzman, Burt

    Scientific communities have been at the forefront of adopting new computing technologies and methodologies. Scientific computing has influenced how science is done today, achieving breakthroughs that were impossible several decades ago. For the past decade, several such communities in the Open Science Grid (OSG) and the European Grid Infrastructure (EGI) have been using GlideinWMS to run complex application workflows and effectively share computational resources over the grid. GlideinWMS is a pilot-based workload management system (WMS) that creates, on demand, a dynamically sized overlay HTCondor batch system on grid resources. At present, the computational resources shared over the grid are just adequate to sustain the computing needs. We envision that the complexity of the science driven by 'Big Data' will further push the need for computational resources. To fulfill their increasing demands and/or to run specialized workflows, some of the big communities like CMS are investigating the use of cloud computing as Infrastructure-as-a-Service (IaaS) with GlideinWMS as a potential alternative to fill the void. Similarly, communities with no previous access to computing resources can use GlideinWMS to set up a batch system on cloud infrastructure. To enable this, the architecture of GlideinWMS has been extended to support interfacing GlideinWMS with different scientific and commercial cloud providers like HLT, FutureGrid, FermiCloud and Amazon EC2. In this paper, we describe a solution for cloud bursting with GlideinWMS. The paper describes the approach, architectural changes and lessons learned while enabling support for cloud infrastructures in GlideinWMS.

  4. Spectroscopic Evidence Against Nitric Acid Trihydrate in Polar Stratospheric Clouds

    NASA Technical Reports Server (NTRS)

    Toon, Owen B.; Tolbert, Margaret A.

    1995-01-01

    Heterogeneous reactions on polar stratospheric clouds (PSCs) play a key role in the photochemical mechanism thought to be responsible for ozone depletion in the Antarctic and Arctic. Reactions of PSC particles activate chlorine to forms that are capable of photochemical ozone destruction, and sequester nitrogen oxides (NOx) that would otherwise deactivate the chlorine. Although the heterogeneous chemistry is now well established, the composition of the clouds themselves is uncertain. It is commonly thought that they are composed of nitric acid trihydrate, although observations have left this question unresolved. Here we reanalyse infrared spectra of type 1 PSCs obtained in Antarctica in September 1987, using recently measured optical constants of the various compounds that might be present in PSCs. We find these PSCs were not composed of nitric acid trihydrate but instead had a more complex composition, perhaps that of a ternary solution. Because cloud formation is sensitive to their composition, this finding will alter our understanding of the locations and conditions in which PSCs form. In addition, the extent of ozone loss depends on the ability of the PSCs to remove NOx permanently through sedimentation. The sedimentation rates depend on PSC particle size, which in turn is controlled by the composition and formation mechanism.

  5. Evidence for nucleosynthetic enrichment of the protosolar molecular cloud core by multiple supernova events.

    PubMed

    Schiller, Martin; Paton, Chad; Bizzarro, Martin

    2015-01-15

    The presence of isotope heterogeneity of nucleosynthetic origin amongst meteorites and their components provides a record of the diverse stars that contributed matter to the protosolar molecular cloud core. Understanding how and when the solar system's nucleosynthetic heterogeneity was established and preserved within the solar protoplanetary disk is critical for unraveling the earliest formative stages of the solar system. Here, we report calcium and magnesium isotope measurements of primitive and differentiated meteorites as well as various types of refractory inclusions, including calcium-aluminum-rich inclusions (CAIs) formed with the canonical ²⁶Al/²⁷Al of ~5 × 10⁻⁵ (²⁶Al decays to ²⁶Mg with a half-life of ~0.73 Ma) and CAIs that show fractionated and unidentified nuclear effects (FUN-CAIs), to understand the origin of the solar system's nucleosynthetic heterogeneity. Bulk analyses of primitive and differentiated meteorites along with canonical and FUN-CAIs define correlated, mass-independent variations in ⁴³Ca, ⁴⁶Ca and ⁴⁸Ca. Moreover, sequential dissolution experiments of the Ivuna carbonaceous chondrite, aimed at identifying the nature and number of presolar carriers of isotope anomalies within primitive meteorites, have detected the presence of multiple carriers of the short-lived ²⁶Al nuclide as well as carriers of anomalous and uncorrelated ⁴³Ca, ⁴⁶Ca and ⁴⁸Ca compositions, which requires input from multiple and recent supernova sources. We infer that the solar system's correlated nucleosynthetic variability reflects unmixing of old, galactically-inherited homogeneous dust from a new, supernova-derived dust component formed shortly prior to or during the evolution of the giant molecular cloud parental to the protosolar molecular cloud core. This implies that, similarly to ⁴³Ca, ⁴⁶Ca and ⁴⁸Ca, the short-lived ²⁶Al nuclide was heterogeneously distributed in the inner solar system at the time of CAI formation.

  6. Stable water isotopologue ratios in fog and cloud droplets of liquid clouds are not size-dependent

    USGS Publications Warehouse

    Spiegel, J.K.; Aemisegger, F.; Scholl, M.; Wienhold, F.G.; Collett, J.L.; Lee, T.; van Pinxteren, D.; Mertes, S.; Tilgner, A.; Herrmann, H.; Werner, Roland A.; Buchmann, N.; Eugster, W.

    2012-01-01

    In this work, we present the first observations of stable water isotopologue ratios in cloud droplets of different sizes collected simultaneously. We address the question whether the isotope ratio of droplets in a liquid cloud varies as a function of droplet size. Samples were collected from a ground intercepted cloud (= fog) during the Hill Cap Cloud Thuringia 2010 campaign (HCCT-2010) using a three-stage Caltech Active Strand Cloud water Collector (CASCC). An instrument test revealed that no artificial isotopic fractionation occurs during sample collection with the CASCC. Furthermore, we could experimentally confirm the hypothesis that the δ values of cloud droplets of the relevant droplet sizes (μm-range) were not significantly different and thus can be assumed to reach isotopic equilibrium with the surrounding water vapor immediately. However, during the dissolution period of the cloud, when the supersaturation inside the cloud decreased and the cloud began to clear, differences in isotope ratios of the different droplet sizes tended to be larger. This is likely to result from the cloud's heterogeneity, implying that larger and smaller cloud droplets were collected at different moments in time, delivering isotope ratios from different collection times.

  7. Climate Impacts of Ice Nucleation

    NASA Technical Reports Server (NTRS)

    Gettelman, Andrew; Liu, Xiaohong; Barahona, Donifan; Lohmann, Ulrike; Chen, Celia

    2012-01-01

    Several different ice nucleation parameterizations in two different General Circulation Models (GCMs) are used to understand the effects of ice nucleation on the mean climate state, and the Aerosol Indirect Effects (AIE) of cirrus clouds on climate. Simulations have a range of ice microphysical states that are consistent with the spread of observations, but many simulations have higher present-day ice crystal number concentrations than in-situ observations. These different states result from different parameterizations of ice cloud nucleation processes, and feature different balances of homogeneous and heterogeneous nucleation. Black carbon aerosols have a small (0.06 W m^-2) and not statistically significant AIE when included as ice nuclei, for nucleation efficiencies within the range of laboratory measurements. Indirect effects of anthropogenic aerosols on cirrus clouds occur as a consequence of increasing anthropogenic sulfur emissions, with different mechanisms important in different models. In one model this is due to increases in the homogeneous nucleation fraction, and in the other due to increases in heterogeneous nucleation with coated dust. The magnitude of the effect is the same, however. The resulting ice AIE does not seem strongly dependent on the balance between homogeneous and heterogeneous ice nucleation. Regional effects can reach several W m^-2. Indirect effects are slightly larger for those states with less homogeneous nucleation and lower ice number concentration in the base state. The total ice AIE is estimated at 0.27 +/- 0.10 W m^-2 (1 sigma uncertainty). This represents a 20% offset of the simulated total shortwave AIE for ice and liquid clouds of 1.6 W m^-2.

  8. Heterogeneous Formation of Polar Stratospheric Clouds- Part 1: Nucleation of Nitric Acid Trihydrate (NAT)

    NASA Technical Reports Server (NTRS)

    Hoyle, C. R.; Engel, I.; Luo, B. P.; Pitts, M. C.; Poole, L. R.; Grooss, J.-U.; Peter, T.

    2013-01-01

    Satellite-based observations during the Arctic winter of 2009/2010 provide firm evidence that, in contrast to the current understanding, the nucleation of nitric acid trihydrate (NAT) in the polar stratosphere does not only occur on preexisting ice particles. In order to explain the NAT clouds observed over the Arctic in mid-December 2009, a heterogeneous nucleation mechanism is required, occurring via immersion freezing on the surface of solid particles, likely of meteoritic origin. For the first time, a detailed microphysical modelling of this NAT formation pathway has been carried out. Heterogeneous NAT formation was calculated along more than sixty thousand trajectories, ending at Cloud Aerosol Lidar with Orthogonal Polarization (CALIOP) observation points. Comparing the optical properties of the modelled NAT with these observations enabled a thorough validation of a newly developed NAT nucleation parameterisation, which has been built into the Zurich Optical and Microphysical box Model (ZOMM). The parameterisation is based on active site theory, is simple to implement in models and provides substantial advantages over previous approaches which involved a constant rate of NAT nucleation in a given volume of air. It is shown that the new method is capable of reproducing observed polar stratospheric clouds (PSCs) very well, despite the varied conditions experienced by air parcels travelling along the different trajectories. In a companion paper, ZOMM is applied to a later period of the winter, when ice PSCs are also present, and it is shown that the observed PSCs are also represented extremely well under these conditions.

  9. The Registration and Segmentation of Heterogeneous Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Al-Durgham, Mohannad M.

    Light Detection And Ranging (LiDAR) mapping has been emerging over the past few years as a mainstream tool for the dense acquisition of three-dimensional point data. Besides conventional mapping missions, LiDAR systems have proven to be very useful for a wide spectrum of applications such as forestry, structural deformation analysis, urban mapping, and reverse engineering. The wide application scope of LiDAR led to the development of many laser scanning technologies that are mountable on multiple platforms (i.e., airborne, mobile terrestrial, and tripod mounted), which caused variations in the characteristics and quality of the generated point clouds. As a result of the increased popularity and diversity of laser scanners, one should address the heterogeneous LiDAR data post-processing (i.e., registration and segmentation) problems adequately. Current LiDAR integration techniques do not take into account the varying nature of laser scans originating from various platforms. In this dissertation, the author proposes a methodology designed particularly for the registration and segmentation of heterogeneous LiDAR data. A data characterization and filtering step is proposed to populate the points' attributes and remove non-planar LiDAR points. Then, a modified version of the Iterative Closest Point (ICP) algorithm, denoted the Iterative Closest Projected Point (ICPP), is designed for the registration of heterogeneous scans to remove any misalignments between overlapping strips. Next, a region-growing-based heterogeneous segmentation algorithm is developed to ensure the proper extraction of planar segments from the point clouds. Validation experiments show that the proposed heterogeneous registration can successfully align airborne and terrestrial datasets despite the great differences in their point density and noise level. In addition, similar tests have been conducted to examine the heterogeneous segmentation, and it is shown that one is able to identify common planar features in airborne and terrestrial data without resampling or manipulating the data in any way. The work presented in this dissertation provides a framework for the registration and segmentation of airborne and terrestrial laser scans, which has a positive impact on the completeness of the scanned features. Therefore, the derived products from these point clouds have higher accuracy, as seen in the full manuscript.
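
    To make the registration step concrete, the following is a minimal sketch of classic point-to-point ICP in Python. It is a deliberate simplification, not the dissertation's ICPP variant (which additionally projects points onto matched planar patches); the iteration cap and convergence tolerance are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def best_fit_transform(src, dst):
        """Least-squares rigid transform (R, t) mapping src onto dst (SVD/Kabsch)."""
        cs, cd = src.mean(axis=0), dst.mean(axis=0)
        H = (src - cs).T @ (dst - cd)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:      # guard against a reflection solution
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        return R, cd - R @ cs

    def icp(src, dst, max_iters=50, tol=1e-8):
        """Align src (N x 3) to dst (M x 3); assumes a rough initial alignment."""
        tree = cKDTree(dst)
        cur, prev_err = src.copy(), np.inf
        for _ in range(max_iters):
            dists, idx = tree.query(cur)              # closest-point matches
            R, t = best_fit_transform(cur, dst[idx])  # best rigid fit to matches
            cur = cur @ R.T + t
            if abs(prev_err - dists.mean()) < tol:    # stop when error stagnates
                break
            prev_err = dists.mean()
        return cur
    ```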

  10. Statistics Analysis of the Uncertainties in Cloud Optical Depth Retrievals Caused by Three-Dimensional Radiative Effects

    NASA Technical Reports Server (NTRS)

    Varnai, Tamas; Marshak, Alexander

    2000-01-01

    This paper presents a simple approach to estimate the uncertainties that arise in satellite retrievals of cloud optical depth when the retrievals use one-dimensional radiative transfer theory for heterogeneous clouds that have variations in all three dimensions. For the first time, preliminary error bounds are set to estimate the uncertainty of cloud optical depth retrievals. These estimates can help us better understand the nature of uncertainties that three-dimensional effects can introduce into retrievals of this important product of the MODIS instrument. The probability distribution of resulting retrieval errors is examined through theoretical simulations of shortwave cloud reflection for a wide variety of cloud fields. The results are used to illustrate how retrieval uncertainties change with observable and known parameters, such as solar elevation or cloud brightness. Furthermore, the results indicate that a tendency observed in an earlier study, clouds appearing thicker for oblique sun, is indeed caused by three-dimensional radiative effects.

  11. Community Cloud Computing

    NASA Astrophysics Data System (ADS)

    Marinos, Alexandros; Briscoe, Gerard

    Cloud Computing is rising fast, with its data centres growing at an unprecedented rate. However, this has come with concerns over privacy, efficiency at the expense of resilience, and environmental sustainability, because of the dependence on Cloud vendors such as Google, Amazon and Microsoft. Our response is an alternative model for the Cloud conceptualisation, providing a paradigm for Clouds in the community, utilising networked personal computers for liberation from the centralised vendor model. Community Cloud Computing (C3) offers an alternative architecture, created by combining the Cloud with paradigms from Grid Computing, principles from Digital Ecosystems, and sustainability from Green Computing, while remaining true to the original vision of the Internet. It is more technically challenging than Cloud Computing, having to deal with distributed computing issues, including heterogeneous nodes, varying quality of service, and additional security constraints. However, these are not insurmountable challenges, and with the need to retain control over our digital lives and the potential environmental consequences, it is a challenge we must pursue.

  12. Dynamic Voltage Frequency Scaling Simulator for Real Workflows Energy-Aware Management in Green Cloud Computing

    PubMed Central

    Cotes-Ruiz, Iván Tomás; Prado, Rocío P.; García-Galán, Sebastián; Muñoz-Expósito, José Enrique; Ruiz-Reyes, Nicolás

    2017-01-01

    Nowadays, the growing computational capabilities of Cloud systems rely on the reduction of the consumed power of their data centers to make them sustainable and economically profitable. The efficient management of computing resources is at the heart of any energy-aware data center, and of special relevance is the adaptation of its performance to workload. Intensive computing applications in diverse areas of science generate complex workloads called workflows, whose successful management in terms of energy saving is still at its beginning. WorkflowSim is currently one of the most advanced simulators for research on workflow processing, offering advanced features such as task clustering and failure policies. In this work, an expected power-aware extension of WorkflowSim is presented. This new tool integrates a power model based on a computing-plus-communication design to allow the optimization of new energy-saving management strategies considering computing, reconfiguration and network costs as well as quality of service, and it incorporates the preeminent strategy for on-host energy saving: Dynamic Voltage Frequency Scaling (DVFS). The simulator is designed to be consistent in different real scenarios and to include a wide repertoire of DVFS governors. Results showing the validity of the simulator in terms of resource utilization, frequency and voltage scaling, power, energy and time saving are presented. Also, results achieved by the intra-host DVFS strategy with different governors are compared to those of the data center using a recent and successful DVFS-based inter-host scheduling strategy as a mechanism overlapped with the intra-host DVFS technique. PMID:28085932

  13. Dynamic Voltage Frequency Scaling Simulator for Real Workflows Energy-Aware Management in Green Cloud Computing.

    PubMed

    Cotes-Ruiz, Iván Tomás; Prado, Rocío P; García-Galán, Sebastián; Muñoz-Expósito, José Enrique; Ruiz-Reyes, Nicolás

    2017-01-01

    Nowadays, the growing computational capabilities of Cloud systems rely on the reduction of the consumed power of their data centers to make them sustainable and economically profitable. The efficient management of computing resources is at the heart of any energy-aware data center, and of special relevance is the adaptation of its performance to workload. Intensive computing applications in diverse areas of science generate complex workloads called workflows, whose successful management in terms of energy saving is still at its beginning. WorkflowSim is currently one of the most advanced simulators for research on workflow processing, offering advanced features such as task clustering and failure policies. In this work, an expected power-aware extension of WorkflowSim is presented. This new tool integrates a power model based on a computing-plus-communication design to allow the optimization of new energy-saving management strategies considering computing, reconfiguration and network costs as well as quality of service, and it incorporates the preeminent strategy for on-host energy saving: Dynamic Voltage Frequency Scaling (DVFS). The simulator is designed to be consistent in different real scenarios and to include a wide repertoire of DVFS governors. Results showing the validity of the simulator in terms of resource utilization, frequency and voltage scaling, power, energy and time saving are presented. Also, results achieved by the intra-host DVFS strategy with different governors are compared to those of the data center using a recent and successful DVFS-based inter-host scheduling strategy as a mechanism overlapped with the intra-host DVFS technique.
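
    As background for the trade-off that DVFS governors navigate, the sketch below encodes the textbook relation that dynamic CMOS power scales as P = C·V²·f while a CPU-bound task's runtime scales roughly as 1/f. The capacitance, voltages, frequencies and static power are illustrative assumptions, not values taken from the paper or from WorkflowSim.

    ```python
    C_EFF = 1.0e-9  # effective switched capacitance (F); illustrative assumption

    def task_energy(cycles, volts, freq_hz, p_static=5.0):
        """Energy (J) to retire `cycles` at one DVFS operating point."""
        runtime = cycles / freq_hz                 # CPU-bound: time ~ 1/f
        p_dynamic = C_EFF * volts**2 * freq_hz     # P = C * V^2 * f
        return (p_dynamic + p_static) * runtime

    work = 3e12  # cycles for a hypothetical workflow task
    # "performance"-like governor: high V/f, short runtime
    print(task_energy(work, volts=1.2, freq_hz=2.6e9))
    # "powersave"-like governor: low V/f, long runtime; the static power drawn
    # for longer can erase the dynamic saving, which is why governor choice
    # (and simulating it) is non-trivial
    print(task_energy(work, volts=0.9, freq_hz=1.4e9))
    ```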

  14. CoreTSAR: Core Task-Size Adapting Runtime

    DOE PAGES

    Scogland, Thomas R. W.; Feng, Wu-chun; Rountree, Barry; ...

    2014-10-27

    Heterogeneity continues to increase at all levels of computing, with the rise of accelerators such as GPUs, FPGAs, and other co-processors into everything from desktops to supercomputers. As a consequence, efficiently managing such disparate resources has become increasingly complex. CoreTSAR seeks to reduce this complexity by adaptively worksharing parallel-loop regions across compute resources without requiring any transformation of the code within the loop. Our results show performance improvements of up to three-fold over a current state-of-the-art heterogeneous task scheduler, as well as linear performance scaling from a single GPU to four GPUs for many codes. In addition, CoreTSAR demonstrates a robust ability to adapt to both a variety of workloads and underlying system configurations.
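
    The general idea behind adaptive worksharing can be illustrated with a throughput-proportional partitioner: each device receives a share of the loop's iterations proportional to the rate it achieved on a previous pass. This is a hedged sketch of the concept only, not CoreTSAR's actual scheduler; the device names and rates are invented.

    ```python
    def split_iterations(n_iters, rates):
        """Split a loop's iterations proportionally to each device's throughput."""
        total = sum(rates.values())
        shares = {dev: int(n_iters * r / total) for dev, r in rates.items()}
        # hand the integer-rounding remainder to the fastest device
        fastest = max(rates, key=rates.get)
        shares[fastest] += n_iters - sum(shares.values())
        return shares

    # rates (iterations/s) as measured on a previous pass of the same loop:
    print(split_iterations(1_000_000, {"cpu": 1.2e5, "gpu0": 9.5e5, "gpu1": 9.1e5}))
    ```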

  15. What Controls the Low Ice Number Concentration in the Upper Tropical Troposphere?

    NASA Astrophysics Data System (ADS)

    Penner, J.; Zhou, C.; Lin, G.; Liu, X.; Wang, M.

    2015-12-01

    Cirrus clouds in the tropical tropopause layer play a key role in regulating the moisture entering the stratosphere through their dehydrating effect. Low ice number concentrations and high supersaturations were frequently observed in these clouds. However, low ice number concentrations are inconsistent with cirrus cloud formation based on homogeneous freezing. Different mechanisms have been proposed to explain this discrepancy, including the inhibition of homogeneous freezing by pre-existing ice crystals and/or glassy organic aerosol heterogeneous ice nuclei (IN), and limits on ice number formation from high-frequency gravity waves. In this study, we examined the effect of three different parameterizations of in-cloud updraft velocities, the effect of pre-existing ice crystals, the effect of different water vapor deposition coefficients (α=0.1 or 1), and the effect of 0.1% of secondary organic aerosol (SOA) acting as glassy heterogeneous ice nuclei (IN) in CAM5. Model-simulated ice crystal numbers are compared against an aircraft observational dataset. Using the grid-resolved large-scale updraft velocity in the ice nucleation parameterization generates ice number concentrations in better agreement with observations for temperatures below 205 K, while using updraft velocities based on the model-generated turbulence kinetic energy generates ice number concentrations in better agreement with observations for temperatures above 205 K. A larger water vapor deposition coefficient (α=1) can efficiently reduce the ice number at temperatures below 205 K but less so at higher temperatures. Glassy SOA IN are most effective at reducing the ice number concentrations when the effective in-cloud updraft velocities are moderate (~0.05-0.2 m s^-1). Including the removal of water vapor on pre-existing ice can also effectively reduce the ice number and diminish the effects of the additional glassy SOA heterogeneous IN. We also re-evaluate whether IN seeding in cirrus clouds is a viable mechanism for cooling. A significant amount of negative climate forcing can only be achieved if we restrict the updraft velocity in regions of background cirrus formation to moderate values (~0.05-0.2 m s^-1).

  16. The enhancement and suppression of immersion mode heterogeneous ice-nucleation by solutes.

    PubMed

    Whale, Thomas F; Holden, Mark A; Wilson, Theodore W; O'Sullivan, Daniel; Murray, Benjamin J

    2018-05-07

    Heterogeneous nucleation of ice from aqueous solutions is an important yet poorly understood process in multiple fields, not least the atmospheric sciences where it impacts the formation and properties of clouds. In the atmosphere ice-nucleating particles are usually, if not always, mixed with soluble material. However, the impact of this soluble material on ice nucleation is poorly understood. In the atmospheric community the current paradigm for freezing under mixed-phase cloud conditions is that dilute solutions will not influence heterogeneous freezing. By testing combinations of nucleators and solute molecules we have demonstrated that 0.015 M solutions (predicted melting point depression <0.1 °C) of several ammonium salts can cause suspended particles of feldspars and quartz to nucleate ice up to around 3 °C warmer than they do in pure water. In contrast, dilute solutions of certain alkali metal halides can dramatically depress freezing points for the same nucleators. At 0.015 M, solutes can enhance or deactivate the ice-nucleating ability of a microcline feldspar across a range of more than 10 °C, which corresponds to a change in active site density of more than a factor of 10^5. This concentration was chosen for a survey across multiple solute-nucleant combinations since it had a minimal colligative impact on freezing and is relevant for activating cloud droplets. Other nucleators, for instance a silica gel, are unaffected by these 'solute effects', to within experimental uncertainty. This split in response to the presence of solutes indicates that different mechanisms of ice nucleation occur on the different nucleators, or that surface modification of relevance to ice nucleation proceeds in different ways for different nucleators. These solute effects on immersion mode ice nucleation may be of importance in the atmosphere, as sea salt and ammonium sulphate are common cloud condensation nuclei (CCN) for cloud droplets and are internally mixed with ice-nucleating particles in mixed-phase clouds. In addition, we propose a pathway dependence where activation of CCN at low temperatures might lead to enhanced ice formation relative to pathways where CCN activation occurs at higher temperatures prior to cooling to the nucleation temperature.

  17. The enhancement and suppression of immersion mode heterogeneous ice-nucleation by solutes

    PubMed Central

    Holden, Mark A.; Wilson, Theodore W.; O'Sullivan, Daniel; Murray, Benjamin J.

    2018-01-01

    Heterogeneous nucleation of ice from aqueous solutions is an important yet poorly understood process in multiple fields, not least the atmospheric sciences where it impacts the formation and properties of clouds. In the atmosphere ice-nucleating particles are usually, if not always, mixed with soluble material. However, the impact of this soluble material on ice nucleation is poorly understood. In the atmospheric community the current paradigm for freezing under mixed-phase cloud conditions is that dilute solutions will not influence heterogeneous freezing. By testing combinations of nucleators and solute molecules we have demonstrated that 0.015 M solutions (predicted melting point depression <0.1 °C) of several ammonium salts can cause suspended particles of feldspars and quartz to nucleate ice up to around 3 °C warmer than they do in pure water. In contrast, dilute solutions of certain alkali metal halides can dramatically depress freezing points for the same nucleators. At 0.015 M, solutes can enhance or deactivate the ice-nucleating ability of a microcline feldspar across a range of more than 10 °C, which corresponds to a change in active site density of more than a factor of 10^5. This concentration was chosen for a survey across multiple solute–nucleant combinations since it had a minimal colligative impact on freezing and is relevant for activating cloud droplets. Other nucleators, for instance a silica gel, are unaffected by these ‘solute effects’, to within experimental uncertainty. This split in response to the presence of solutes indicates that different mechanisms of ice nucleation occur on the different nucleators, or that surface modification of relevance to ice nucleation proceeds in different ways for different nucleators. These solute effects on immersion mode ice nucleation may be of importance in the atmosphere, as sea salt and ammonium sulphate are common cloud condensation nuclei (CCN) for cloud droplets and are internally mixed with ice-nucleating particles in mixed-phase clouds. In addition, we propose a pathway dependence where activation of CCN at low temperatures might lead to enhanced ice formation relative to pathways where CCN activation occurs at higher temperatures prior to cooling to the nucleation temperature. PMID:29780544
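
    The "active site density" quoted above is conventionally defined through the singular description of immersion freezing. The relation below is a minimal sketch of that standard framework, not necessarily the exact analysis used in this paper:

    ```latex
    % Fraction of droplets frozen at temperature T, for nucleator surface area A
    % per droplet, under the singular (time-independent) description:
    f_{\mathrm{ice}}(T) = 1 - \exp\!\left(-n_s(T)\, A\right)
    % so the ice-active site density is recovered from measured frozen fractions:
    n_s(T) = -\frac{\ln\!\left(1 - f_{\mathrm{ice}}(T)\right)}{A}
    ```

    Because n_s(T) typically rises steeply with supercooling, shifting a freezing spectrum by several degrees along the temperature axis translates into orders of magnitude in n_s, which is how a >10 °C range maps onto the quoted factor of 10^5.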

  18. Cyanide and isocyanide abundances in the cold, dark cloud TMC-1

    NASA Technical Reports Server (NTRS)

    Irvine, W. M.; Schloerb, F. P.

    1984-01-01

    Cold, dark molecular clouds are particularly useful for the study of interstellar chemistry because their physical parameters are better understood than those of heterogeneous, complex giant molecular clouds. Another advantage is their relatively small distance from the solar system. The present investigation aims to provide accurate abundance ratios for several cyanides and isocyanides in order to constrain models of dark cloud chemistry. The relative abundances of such related species can be particularly useful for the study of chemical processes. The cloud TMC-1 considered in the current study has a remarkably high abundance of acetylene and polyacetylene derivatives. Data at 3 mm, corresponding to the J = 1 to 0 transitions of HCN, H13CN, HN13C, HC15N, and H15NC, were obtained.

  19. The role of 1-D and 3-D radiative heating in the organization of shallow cumulus convection and the formation of cloud streets

    NASA Astrophysics Data System (ADS)

    Jakub, Fabian; Mayer, Bernhard

    2017-11-01

    The formation of shallow cumulus cloud streets was historically attributed primarily to dynamics. Here, we focus on the interaction between radiatively induced surface heterogeneities and the resulting patterns in the flow. Our results suggest that solar radiative heating has the potential to organize clouds perpendicular to the sun's incidence angle. To quantify the extent of organization, we performed a high-resolution large-eddy simulation (LES) parameter study. We varied the horizontal wind speed, the surface heat capacity, the solar zenith and azimuth angles, and radiative transfer parameterizations (1-D and 3-D). As a quantitative measure we introduce a simple algorithm that provides a scalar quantity for the degree of organization and the alignment. We find that, even in the absence of a horizontal wind, 3-D radiative transfer produces cloud streets perpendicular to the sun's incident direction, whereas the 1-D approximation or constant surface fluxes produce randomly positioned circular clouds. Our reasoning for the enhancement or reduction of organization is the geometric position of the cloud's shadow and its corresponding surface fluxes. Furthermore, when increasing horizontal wind speeds to 5 or 10 m s^-1, we observe the development of dynamically induced cloud streets. If, in addition, solar radiation illuminates the surface beneath the cloud, i.e., when the sun is positioned orthogonally to the mean wind field and the solar zenith angle is larger than 20°, the cloud-radiative feedback has the potential to significantly enhance the tendency to organize in cloud streets. In contrast, in the case of the 1-D approximation (or overhead sun), the tendency to organize is weaker or even prohibited because the shadow is cast directly beneath the cloud. In a land-surface-type situation, we find the organization of convection happening on a timescale of half an hour. The radiative feedback, which creates surface heterogeneities, is generally diminished for large surface heat capacities. We therefore expect radiative feedbacks to be strongest over land surfaces and weaker over the ocean. Given the results of this study we expect that simulations including shallow cumulus convection will have difficulties producing cloud streets if they employ 1-D radiative transfer solvers or may need unrealistically high wind speeds to excite cloud street organization.

  20. PanDA: Exascale Federation of Resources for the ATLAS Experiment at the LHC

    NASA Astrophysics Data System (ADS)

    Barreiro Megino, Fernando; Caballero Bejar, Jose; De, Kaushik; Hover, John; Klimentov, Alexei; Maeno, Tadashi; Nilsson, Paul; Oleynik, Danila; Padolski, Siarhei; Panitkin, Sergey; Petrosyan, Artem; Wenaus, Torre

    2016-02-01

    After a scheduled maintenance and upgrade period, the world's largest and most powerful machine - the Large Hadron Collider (LHC) - is about to enter its second run at unprecedented energies. In order to exploit the scientific potential of the machine, the experiments at the LHC face computational challenges with enormous data volumes that need to be analysed by thousands of physics users and compared to simulated data. Given diverse funding constraints, the computational resources for the LHC have been deployed in a worldwide mesh of data centres, connected to each other through Grid technologies. The PanDA (Production and Distributed Analysis) system was developed in 2005 for the ATLAS experiment on top of this heterogeneous infrastructure to seamlessly integrate the computational resources and give the users the feeling of a unique system. Since its origins, PanDA has evolved together with upcoming computing paradigms in and outside HEP, such as changes in the networking model, Cloud Computing and HPC. It is currently running steadily on up to 200 thousand simultaneous cores (limited by the available resources for ATLAS), with up to two million aggregated jobs per day, and processes over an exabyte of data per year. The success of PanDA in ATLAS is triggering widespread adoption and testing by other experiments. In this contribution we will give an overview of the PanDA components and focus on the new features and upcoming challenges that are relevant to the next decade of distributed computing workload management using PanDA.

  1. Design and Development of ChemInfoCloud: An Integrated Cloud Enabled Platform for Virtual Screening.

    PubMed

    Karthikeyan, Muthukumarasamy; Pandit, Deepak; Bhavasar, Arvind; Vyas, Renu

    2015-01-01

    The power of cloud computing and distributed computing has been harnessed to handle the vast and heterogeneous data required to be processed in any virtual screening protocol. A cloud computing platform, ChemInfoCloud, was built and integrated with several chemoinformatics and bioinformatics tools. The robust engine performs the core chemoinformatics tasks of lead generation, lead optimisation and property prediction in a fast and efficient manner. It has also been provided with some bioinformatics functionalities, including sequence alignment, active site pose prediction and protein-ligand docking. Text mining, NMR chemical shift (1H, 13C) prediction and reaction fingerprint generation modules for efficient lead discovery are also implemented in this platform. We have developed an integrated problem-solving cloud environment for virtual screening studies that also provides workflow management, better usability and interaction with end users using container-based virtualization (OpenVz).

  2. Global Simulations of Ice nucleation and Ice Supersaturation with an Improved Cloud Scheme in the Community Atmosphere Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gettelman, A.; Liu, Xiaohong; Ghan, Steven J.

    2010-09-28

    A process-based treatment of ice supersaturation and ice nucleation is implemented in the National Center for Atmospheric Research (NCAR) Community Atmosphere Model (CAM). The new scheme is designed to allow (1) supersaturation with respect to ice, (2) ice nucleation by aerosol particles and (3) ice cloud cover consistent with ice microphysics. The scheme is implemented with a 4-class, 2-moment microphysics code and is used to evaluate ice cloud nucleation mechanisms and supersaturation in CAM. The new model is able to reproduce field observations of ice mass and mixed-phase cloud occurrence better than previous versions of the model. Simulations indicate heterogeneous freezing and contact nucleation on dust are both potentially important over remote areas of the Arctic. Cloud forcing, and hence climate, is sensitive to different formulations of the ice microphysics. Arctic radiative fluxes are sensitive to the parameterization of ice clouds. These results indicate that ice clouds are potentially an important part of understanding cloud forcing and potential cloud feedbacks, particularly in the Arctic.

  3. Heterogeneous chemistry on Antarctic polar stratospheric clouds - A microphysical estimate of the extent of chemical processing

    NASA Technical Reports Server (NTRS)

    Drdla, K.; Turco, R. P.; Elliott, S.

    1993-01-01

    A detailed model of polar stratospheric clouds (PSCs), which includes nucleation, condensational growth, and sedimentation processes, has been applied to the study of heterogeneous chemical reactions. For the first time, the extent of chemical processing during a polar winter has been estimated for an idealized air parcel in the Antarctic vortex by calculating in detail the rates of heterogeneous reactions on PSC particles. The resulting active chlorine and NO(x) concentrations at first sunrise are analyzed with respect to their influence upon the Antarctic ozone hole using a photochemical model. It is found that the species present at sunrise are primarily influenced by the relative values of the heterogeneous reaction rate constants and the initial gas concentrations. However, the extent of chlorine activation is also influenced by whether N2O5 is removed by reaction with HCl or H2O. The reaction of N2O5 with HCl, which occurs rapidly on type 1 PSCs, activates the chlorine contained in the reservoir species HCl. Hence the presence and surface area of type 1 PSCs early in the winter are crucial in determining ozone depletion.

  4. Spatiotemporal Domain Decomposition for Massive Parallel Computation of Space-Time Kernel Density

    NASA Astrophysics Data System (ADS)

    Hohl, A.; Delmelle, E. M.; Tang, W.

    2015-07-01

    Accelerated processing capabilities are deemed critical when conducting analysis on spatiotemporal datasets of increasing size, diversity and availability. High-performance parallel computing offers the capacity to solve computationally demanding problems in a limited timeframe, but likewise poses the challenge of preventing processing inefficiency due to workload imbalance between computing resources. Therefore, when designing new algorithms capable of implementing parallel strategies, careful spatiotemporal domain decomposition is necessary to account for heterogeneity in the data. In this study, we perform octree-based adaptive decomposition of the spatiotemporal domain for parallel computation of space-time kernel density. In order to avoid edge effects near subdomain boundaries, we establish spatiotemporal buffers to include adjacent data points that are within the spatial and temporal kernel bandwidths. Then, we quantify the computational intensity of each subdomain to balance workloads among processors. We illustrate the benefits of our methodology using a space-time epidemiological dataset of Dengue fever, an infectious vector-borne disease that poses a severe threat to communities in tropical climates. Our parallel implementation of kernel density reaches substantial speedup compared to sequential processing, and achieves high levels of workload balance among processors due to great accuracy in quantifying computational intensity. Our approach is portable to other space-time analytical tests.
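
    A hedged sketch of the load-balancing idea: approximate each subdomain's computational intensity by the number of point-pair interactions inside its buffered neighbourhood, then assign subdomains to processors greedily. Only a spatial bandwidth is shown for brevity (the study uses separate spatial and temporal bandwidths), and all names and values are illustrative, not the paper's implementation.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def subdomain_cost(points, member_idx, bandwidth, tree):
        """Cost proxy: point-pair interactions within the buffered neighbourhood."""
        neighbours = tree.query_ball_point(points[member_idx], r=bandwidth)
        return sum(len(nb) for nb in neighbours)

    def balance(costs, n_procs):
        """Greedy longest-processing-time assignment of subdomains to processors."""
        load, plan = [0.0] * n_procs, [[] for _ in range(n_procs)]
        for sub in sorted(range(len(costs)), key=lambda s: -costs[s]):
            p = min(range(n_procs), key=load.__getitem__)  # least-loaded processor
            load[p] += costs[sub]
            plan[p].append(sub)
        return plan

    rng = np.random.default_rng(0)
    pts = rng.random((2000, 2))                  # toy spatial point set
    tree = cKDTree(pts)
    left = np.flatnonzero(pts[:, 0] < 0.5)       # two trivial subdomains
    right = np.flatnonzero(pts[:, 0] >= 0.5)
    costs = [subdomain_cost(pts, idx, 0.05, tree) for idx in (left, right)]
    print(balance(costs, 2))
    ```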

  5. Cloud radiative properties and aerosol - cloud interaction

    NASA Astrophysics Data System (ADS)

    Viviana Vladutescu, Daniela; Gross, Barry; Li, Clement; Han, Zaw

    2015-04-01

    The presented research discusses different techniques for improving cloud property measurements and analysis. The need for these measurements and analyses arises from the large errors observed in existing methods currently used to retrieve cloud properties and, implicitly, cloud radiative forcing. The properties investigated are cloud fraction (cf) and cloud optical thickness (COT), measured with a suite of collocated remote sensing instruments. The novel approach makes use of a ground-based "poor man's camera" to detect cloud and sky radiation in red, green, and blue with a high spatial resolution of 30 mm at 1 km. The surface-based high-resolution photography provides a new and interesting view of clouds. The cloud fraction cannot be uniquely defined or measured, as it depends on threshold and resolution. As resolution decreases, cloud fraction tends to increase if the threshold is below the mean, and vice versa. Additionally, the cloud fractal dimension also depends on the threshold. These findings therefore raise concerns over the ability to characterize clouds by cloud fraction or fractal dimension. Our analysis indicates that Principal Component Analysis may lead to a robust means of quantifying the cloud contribution to radiance. The cloud images are analyzed in conjunction with a collocated CIMEL sky radiometer, microwave radiometer and lidar to determine homogeneity and heterogeneity. Additionally, MFRSR measurements are used to determine the cloud radiative properties, as a validation tool for the results obtained from the other instruments and methods. The cloud properties to be further studied are aerosol-cloud interaction, cloud particle radii, and vertical homogeneity.
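
    To see why a retrieved cloud fraction depends on both threshold and resolution, consider a common sky-camera classifier based on the red/blue pixel ratio. The sketch below is an illustration of that generic technique, not the authors' processing chain; the threshold, block sizes and toy image are assumptions.

    ```python
    import numpy as np

    def cloud_fraction(rgb, threshold=0.6, block=1):
        """Fraction of (optionally block-averaged) pixels classified as cloud."""
        r = rgb[..., 0].astype(float)
        b = rgb[..., 2].astype(float)
        ratio = r / np.maximum(b, 1.0)   # clouds are whiter, so r/b approaches 1
        if block > 1:                    # coarsen the image to mimic lower resolution
            h = (ratio.shape[0] // block) * block
            w = (ratio.shape[1] // block) * block
            ratio = ratio[:h, :w].reshape(h // block, block,
                                          w // block, block).mean(axis=(1, 3))
        return float((ratio > threshold).mean())

    # the same toy scene classified at several resolutions: the answer changes
    img = (np.random.default_rng(1).random((512, 512, 3)) * 255).astype(np.uint8)
    for block in (1, 8, 64):
        print(block, cloud_fraction(img, threshold=0.95, block=block))
    ```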

  6. Remotely Sensed High-Resolution Global Cloud Dynamics for Predicting Ecosystem and Biodiversity Distributions.

    PubMed

    Wilson, Adam M; Jetz, Walter

    2016-03-01

    Cloud cover can influence numerous important ecological processes, including reproduction, growth, survival, and behavior, yet our assessment of its importance at the appropriate spatial scales has remained remarkably limited. If captured over a large extent yet at sufficiently fine spatial grain, cloud cover dynamics may provide key information for delineating a variety of habitat types and predicting species distributions. Here, we develop new near-global, fine-grain (≈1 km) monthly cloud frequencies from 15 y of twice-daily Moderate Resolution Imaging Spectroradiometer (MODIS) satellite images that expose spatiotemporal cloud cover dynamics of previously undocumented global complexity. We demonstrate that cloud cover varies strongly in its geographic heterogeneity and that the direct, observation-based nature of cloud-derived metrics can improve predictions of habitats, ecosystem, and species distributions with reduced spatial autocorrelation compared to commonly used interpolated climate data. These findings support the fundamental role of remote sensing as an effective lens through which to understand and globally monitor the fine-grain spatial variability of key biodiversity and ecosystem properties.

  7. Toward ubiquitous healthcare services with a novel efficient cloud platform.

    PubMed

    He, Chenguang; Fan, Xiaomao; Li, Ye

    2013-01-01

    Ubiquitous healthcare services are becoming more and more popular, especially under the urgent demand of the global aging issue. Cloud computing has a pervasive, on-demand, service-oriented nature, which fits the characteristics of healthcare services very well. However, the ability to deal with multimodal, heterogeneous, and nonstationary physiological signals to provide persistent personalized services, while sustaining highly concurrent online analysis for the public, is a challenge for the general cloud. In this paper, we propose a private cloud platform architecture which includes six layers according to the specific requirements. This platform utilizes a message queue as a cloud engine, and each layer thereby achieves relative independence through this loosely coupled means of communication with a publish/subscribe mechanism. Furthermore, a plug-in algorithm framework is also presented, and massive semistructured or unstructured medical data are accessed adaptively by this cloud architecture. As the testing results show, this proposed cloud platform, with robust, stable, and efficient features, can satisfy highly concurrent requests from ubiquitous healthcare services.
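
    A minimal sketch of the loose coupling the paper attributes to its message-queue "cloud engine": layers never call each other directly, they only publish and subscribe to topics. This in-memory broker is a stand-in for a real queue, and the topic and field names are illustrative.

    ```python
    from collections import defaultdict

    class Broker:
        """Toy publish/subscribe broker decoupling the platform's layers."""
        def __init__(self):
            self.subscribers = defaultdict(list)

        def subscribe(self, topic, handler):
            self.subscribers[topic].append(handler)

        def publish(self, topic, message):
            for handler in self.subscribers[topic]:
                handler(message)

    broker = Broker()
    # an analysis layer reacts to raw signals without knowing who produced them
    broker.subscribe("ecg/raw", lambda m: print("analyzing", m["patient"]))
    # an acquisition layer publishes without knowing who consumes
    broker.publish("ecg/raw", {"patient": "p-001", "samples": [0.1, 0.4, 0.2]})
    ```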

  8. Estimating the Influence of Biological Ice Nuclei on Clouds with Regional Scale Simulations

    NASA Astrophysics Data System (ADS)

    Hummel, Matthias; Hoose, Corinna; Schaupp, Caroline; Möhler, Ottmar

    2014-05-01

    Cloud properties are largely influenced by the atmospheric formation of ice particles. Some primary biological aerosol particles (PBAP), e.g. certain bacteria, fungal spores or pollen, have been identified as effective ice nuclei (IN). The work presented here quantifies the IN concentrations originating from PBAP in order to estimate their influence on clouds with the regional-scale atmospheric model COSMO-ART in a six-day case study for Western Europe. The atmospheric particle distribution is calculated for three different PBAP (bacteria, fungal spores and birch pollen). The parameterizations for heterogeneous ice nucleation of PBAP are derived from AIDA cloud chamber experiments with Pseudomonas syringae bacteria and birch pollen (Schaupp, 2013) and from published data on Cladosporium spores (Iannone et al., 2011). A constant fraction of ice-active bacteria and fungal spores relative to the total bacteria and spore concentration had to be assumed. At cloud altitude, average simulated PBAP number concentrations are ~17 L^-1 for bacteria and fungal spores and ~0.03 L^-1 for birch pollen, including large temporal and spatial variations of more than one order of magnitude. Thus, the average, 'diagnostic' in-cloud PBAP IN concentrations, which only depend on the PBAP concentrations and temperature, without applying dynamics and cloud microphysics, lie at the lower end of the range of typically observed atmospheric IN concentrations. Average PBAP IN concentrations are between 10^-6 L^-1 and 10^-4 L^-1. Locally, but not very frequently, PBAP IN concentrations can be as high as 0.2 L^-1 at -10 °C. Two simulations are compared to estimate the cloud impact of PBAP IN, both including mineral dust as an additional background IN with a constant concentration of 100 L^-1. One of the simulations includes additional PBAP IN, which can alter the cloud properties compared to the reference simulation without PBAP IN. The difference in ice particle and cloud droplet concentration between the two simulations is a result of the heterogeneous ice nucleation of PBAP. In the chosen case setup, two effects can be identified, occurring at different altitudes. Additional PBAP IN directly enhance the ice crystal concentration in lower parts of a mixed-phase cloud. This increase comes with a decrease in liquid droplet concentration in this part of the cloud. Therefore, a second effect takes place, where fewer ice crystals are formed by dust-driven heterogeneous as well as homogeneous ice nucleation in upper parts of the cloud, probably due to a lack of liquid water reaching these altitudes. Overall, diagnostic PBAP IN concentrations are very low compared to typical IN concentrations, but reach maxima at temperatures where typical IN are not very ice-active. PBAP IN can therefore influence clouds to some extent. Iannone, R., Chernoff, D. I., Pringle, A., Martin, S. T., and Bertram, A. K.: The ice nucleation ability of one of the most abundant types of fungal spores found in the atmosphere, Atmos. Chem. Phys., 11, 1191-1201, 10.5194/acp-11-1191-2011, 2011. Schaupp, C.: Untersuchungen zur Rolle von Bakterien und Pollen als Wolkenkondensations- und Eiskeime in troposphärischen Wolken, Ph.D. thesis, Institute of Environmental Physics, Heidelberg University, Heidelberg, Germany, 2013.

  9. Effect of Heterogeneous Chemical Reactions on the Köhler Activation of Aqueous Organic Aerosols.

    PubMed

    Djikaev, Yuri S; Ruckenstein, Eli

    2018-05-03

    We study some thermodynamic aspects of the activation of aqueous organic aerosols into cloud droplets considering the aerosols to consist of liquid solution of water and hydrophilic and hydrophobic organic compounds, taking into account the presence of reactive species in the air. The hydrophobic (surfactant) organic molecules on the surface of such an aerosol can be processed by chemical reactions with some atmospheric species; this affects the hygroscopicity of the aerosol and hence its ability to become a cloud droplet either via nucleation or via Köhler activation. The most probable pathway of such processing involves atmospheric hydroxyl radicals that abstract hydrogen atoms from hydrophobic organic molecules located on the aerosol surface (first step), the resulting radicals being quickly oxidized by ubiquitous atmospheric oxygen molecules to produce surface-bound peroxyl radicals (second step). These two reactions play a crucial role in the enhancement of the Köhler activation of the aerosol and its evolution into a cloud droplet. Taking them and a third reaction (next in the multistep chain of relevant heterogeneous reactions) into account, one can derive an explicit expression for the free energy of formation of a four-component aqueous droplet on a ternary aqueous organic aerosol as a function of four independent variables of state of a droplet. The results of numerical calculations suggest that the formation of cloud droplets on such (aqueous hydrophilic/hydrophobic organic) aerosols is most likely to occur as a Köhler activation-like process rather than via nucleation. The model allows one to determine the threshold parameters of the system necessary for the Köhler activation of such aerosols, which are predicted to be very sensitive to the equilibrium constant of the chain of three heterogeneous reactions involved in the chemical aging of aerosols.

  10. Classifying stages of cirrus life-cycle evolution

    NASA Astrophysics Data System (ADS)

    Urbanek, Benedikt; Groß, Silke; Schäfler, Andreas; Wirth, Martin

    2018-04-01

    Airborne lidar backscatter data is used to determine in- and out-of-cloud regions. Lidar measurements of water vapor together with model temperature fields are used to calculate relative humidity over ice (RHi). Based on temperature and RHi we identify different stages of cirrus evolution: homogeneous and heterogeneous freezing, depositional growth, ice sublimation and sedimentation. We will present our classification scheme and first applications on mid-latitude cirrus clouds.
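
    The RHi calculation at the core of this classification is the partial pressure of water vapor divided by the saturation pressure over ice. The sketch below uses the Murphy & Koop (2005) saturation fit, a standard choice; the example mixing ratio, pressure and temperature are illustrative, and the classification thresholds of the paper are not reproduced here.

    ```python
    import math

    def e_sat_ice(T):
        """Saturation vapor pressure over ice (Pa); Murphy & Koop (2005) fit."""
        return math.exp(9.550426 - 5723.265 / T
                        + 3.53068 * math.log(T) - 0.00728332 * T)

    def rh_ice(vmr, p, T):
        """RHi (%) from H2O volume mixing ratio, pressure p (Pa), temperature T (K)."""
        return 100.0 * vmr * p / e_sat_ice(T)

    # e.g. 120 ppmv of water vapor at 250 hPa and 220 K:
    print(round(rh_ice(120e-6, 25_000.0, 220.0), 1))  # ~113%, ice-supersaturated
    ```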

  11. Spectroscopic Evidence Against Nitric Acid Trihydrate in Polar Stratospheric Clouds

    NASA Technical Reports Server (NTRS)

    Toon, Owen B.; Tolbert, Margaret A.

    1995-01-01

    Heterogeneous reactions on polar stratospheric clouds (PSCs) play a key role in the photochemical mechanisms thought to be responsible for ozone depletion in the Antarctic and the Arctic. Reactions on PSC particles activate chlorine to forms that are capable of photochemical ozone destruction, and sequester nitrogen oxides (NOx) that would otherwise deactivate the chlorine. Although the heterogeneous chemistry is now well established, the composition of the clouds themselves is uncertain. It is commonly thought that they are composed of nitric acid trihydrate, although observations have left this question unresolved. Here we reanalyse infrared spectra of type 1 PSCs obtained in Antarctica in September 1987, using recently measured optical constants of the various compounds that might be present in PSCs. We find that these PSCs were not composed of nitric acid trihydrate but instead had a more complex composition, perhaps that of a ternary solution. Because cloud formation is sensitive to their composition, this finding will alter our understanding of the locations and conditions in which PSCs form. In addition, the extent of ozone loss depends on the ability of the PSCs to remove NOx permanently through sedimentation. The sedimentation rates depend on PSC particle size, which in turn is controlled by the composition and formation mechanism.

  12. Applying a cloud computing approach to storage architectures for spacecraft

    NASA Astrophysics Data System (ADS)

    Baldor, Sue A.; Quiroz, Carlos; Wood, Paul

    As sensor technologies, processor speeds, and memory densities increase, spacecraft command, control, processing, and data storage systems have grown in complexity to take advantage of these improvements and expand the possible missions of spacecraft. Spacecraft systems engineers are increasingly looking for novel ways to address this growth in complexity and mitigate associated risks. Looking to conventional computing, many solutions have been executed to solve both the problem of complexity and heterogeneity in systems. In particular, the cloud-based paradigm provides a solution for distributing applications and storage capabilities across multiple platforms. In this paper, we propose utilizing a cloud-like architecture to provide a scalable mechanism for providing mass storage in spacecraft networks that can be reused on multiple spacecraft systems. By presenting a consistent interface to applications and devices that request data to be stored, complex systems designed by multiple organizations may be more readily integrated. Behind the abstraction, the cloud storage capability would manage wear-leveling, power consumption, and other attributes related to the physical memory devices, critical components in any mass storage solution for spacecraft. Our approach employs SpaceWire networks and SpaceWire-capable devices, although the concept could easily be extended to non-heterogeneous networks consisting of multiple spacecraft and potentially the ground segment.

  13. Performance of Goddard Earth Observing System GCM Column Radiation Models under Heterogeneous Cloud Conditions

    NASA Technical Reports Server (NTRS)

    Oreopoulos, L.; Chou, M.-D.; Khairoutdinov, M.; Barker, H. W.; Cahalan, R. F.

    2003-01-01

    We test the performance of the shortwave (SW) and longwave (LW) Column Radiation Models (CORAMs) of Chou and collaborators with heterogeneous cloud fields from a global single-day dataset produced by NCAR's Community Atmospheric Model with a 2-D CRM (cloud-resolving model) installed in each gridbox. The original SW version of the CORAM performs quite well compared to reference Independent Column Approximation (ICA) calculations for boundary fluxes, largely due to the success of a combined overlap and cloud scaling parameterization scheme. The absolute magnitudes of the errors relative to ICA are even smaller for the LW CORAM, which applies similar overlap. The vertical distribution of heating and cooling within the atmosphere is also simulated quite well, with daily-averaged zonal errors always below 0.3 K/d for SW heating rates and 0.6 K/d for LW cooling rates. The SW CORAM's performance improves by introducing a scheme that accounts for cloud inhomogeneity. These results suggest that previous studies demonstrating the inaccuracy of plane-parallel models may have unfairly focused on worst-case scenarios, and that current radiative transfer algorithms of General Circulation Models (GCMs) may be more capable than previously thought in estimating realistic spatial and temporal averages of radiative fluxes, as long as they are provided with correct mean cloud profiles. However, even if the errors of the particular CORAMs are small, they seem to be systematic, and the impact of the biases can be fully assessed only with GCM climate simulations.

  14. Dynamic electronic institutions in agent oriented cloud robotic systems.

    PubMed

    Nagrath, Vineet; Morel, Olivier; Malik, Aamir; Saad, Naufal; Meriaudeau, Fabrice

    2015-01-01

    The dot-com bubble burst in the year 2000, followed by a swift movement towards resource virtualization and the cloud computing business model. Cloud computing emerged not as a new form of computing or network technology but as a mere remoulding of existing technologies to suit a new business model. Cloud robotics is understood as the adaptation of cloud computing ideas for robotic applications. Current efforts in cloud robotics stress developing robots that utilize the computing and service infrastructure of the cloud, without debating the underlying business model. HTM5 is an OMG MDA-based meta-model for agent-oriented development of cloud robotic systems. The trade-view of HTM5 promotes peer-to-peer trade amongst software agents. HTM5 agents represent various cloud entities and implement their business logic on cloud interactions. Trade in a peer-to-peer cloud robotic system is based on relationships and contracts amongst several agent subsets. Electronic Institutions are associations of heterogeneous intelligent agents which interact with each other following predefined norms. In Dynamic Electronic Institutions, the process of formation, reformation and dissolution of institutions is automated, leading to run-time adaptations in groups of agents. DEIs in agent-oriented cloud robotic ecosystems bring order and group intellect. This article presents DEI implementations through the HTM5 methodology.

  15. Hybrid Cloud Computing Environment for EarthCube and Geoscience Community

    NASA Astrophysics Data System (ADS)

    Yang, C. P.; Qin, H.

    2016-12-01

    The NSF EarthCube Integration and Test Environment (ECITE) has built a hybrid cloud computing environment that provides cloud resources from private cloud environments, using the cloud system software OpenStack and Eucalyptus, and also manages a public cloud, Amazon Web Services, allowing resource synchronization and bursting between the private and public clouds. On the ECITE hybrid cloud platform, the EarthCube and geoscience community can deploy and manage applications by using base virtual machine images or customized virtual machines, analyze big datasets by using virtual clusters, and monitor virtual resource usage on the cloud in real time. Currently, a number of EarthCube projects have deployed or started migrating their projects to this platform, such as CHORDS, BCube, CINERGI, OntoSoft, and some other EarthCube building blocks. To accomplish the deployment or migration, the administrator of the ECITE hybrid cloud platform prepares the specific needs (e.g. images, port numbers, usable cloud capacity, etc.) of each project in advance, based on communications between ECITE and the participant projects, and then the scientists or IT technicians in those projects launch one or multiple virtual machines, access the virtual machine(s) to set up the computing environment if need be, and migrate their code, documents or data without caring about the heterogeneity in structure and operations among the different cloud platforms.

  16. An Architecture for Cross-Cloud System Management

    NASA Astrophysics Data System (ADS)

    Dodda, Ravi Teja; Smith, Chris; van Moorsel, Aad

    The emergence of the cloud computing paradigm promises flexibility and adaptability through on-demand provisioning of compute resources. As the utilization of cloud resources extends beyond a single provider, for business as well as technical reasons, the issue of effectively managing such resources comes to the fore. Different providers expose different interfaces to their compute resources utilizing varied architectures and implementation technologies. This heterogeneity poses a significant system management problem, and can limit the extent to which the benefits of cross-cloud resource utilization can be realized. We address this problem through the definition of an architecture to facilitate the management of compute resources from different cloud providers in a homogeneous manner. This preserves the flexibility and adaptability promised by the cloud computing paradigm, whilst enabling the benefits of cross-cloud resource utilization to be realized. The practical efficacy of the architecture is demonstrated through an implementation utilizing compute resources managed through different interfaces on the Amazon Elastic Compute Cloud (EC2) service. Additionally, we provide empirical results highlighting the performance differential of these different interfaces, and discuss the impact of this performance differential on efficiency and profitability.
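
    The homogenizing layer such an architecture describes is, in essence, the adapter pattern: one abstract management interface, with provider-specific adapters behind it. The sketch below illustrates that pattern only; the class and method names are invented, not drawn from the paper, and the provider calls are stubbed.

    ```python
    from abc import ABC, abstractmethod

    class ComputeProvider(ABC):
        """Homogeneous management interface; names here are illustrative only."""
        @abstractmethod
        def launch(self, image_id: str, count: int) -> list[str]: ...
        @abstractmethod
        def terminate(self, instance_ids: list[str]) -> None: ...

    class EC2QueryAdapter(ComputeProvider):
        """Adapter wrapping one provider-specific interface (stubbed here)."""
        def launch(self, image_id, count):
            return [f"i-{n:04d}" for n in range(count)]  # real EC2 call goes here
        def terminate(self, instance_ids):
            pass                                         # real EC2 call goes here

    def scale_out(provider: ComputeProvider, image_id: str, n: int) -> list[str]:
        """Management logic written once, against the homogeneous interface."""
        return provider.launch(image_id, n)

    print(scale_out(EC2QueryAdapter(), "ami-example", 3))
    ```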

  17. Nucleation in Synoptically Forced Cirrostratus

    NASA Technical Reports Server (NTRS)

    Lin, R.-F.; Starr, D. O'C.; Reichardt, J.; DeMott, P. J.

    2004-01-01

    Formation and evolution of cirrostratus in response to weak, uniform and constant synoptic forcing is simulated using a one-dimensional numerical model with explicit microphysics, in which the particle size distribution in each grid box is fully resolved. A series of tests of the model response to nucleation modes (homogeneous-freezing-only/heterogeneous nucleation) and heterogeneous nucleation parameters is performed. In the case studied here, nucleation is first activated in the prescribed moist layer. A continuous cloud-top nucleation zone, with a depth depending on the vertical humidity gradient and one of the nucleation parameters, develops afterward. For the heterogeneous nucleation cases, intermittent nucleation zones form in the mid-upper portion of the cloud where the relative humidity is on the rise, because existing ice crystals do not take up excess water vapor efficiently, and ice nuclei (IN) are available. Vertical resolution as fine as 1 m is required for realistic simulation of the homogeneous-freezing-only scenario, while the model resolution requirement is more relaxed in the cases where heterogeneous nucleation dominates. Bulk microphysical and optical properties are evaluated and compared. Ice particle number flux divergence, which is due to the vertical gradient of the gravity-induced particle sedimentation, is constantly and rapidly changing the local ice number concentration, even in the nucleation zone. When the depth of the nucleation zone is shallow, particle number concentration decreases rapidly as ice particles grow and sediment away from the nucleation zone. When the depth of the nucleation zone is large, a region of high ice number concentration can be sustained. The depth of the nucleation zone is an important parameter to be considered in parametric treatments of ice cloud generation.

  18. Reconciling biases and uncertainties of AIRS and MODIS ice cloud properties

    NASA Astrophysics Data System (ADS)

    Kahn, B. H.; Gettelman, A.

    2015-12-01

    We will discuss comparisons of collocated Atmospheric Infrared Sounder (AIRS) and Moderate Resolution Imaging Spectroradiometer (MODIS) ice cloud optical thickness (COT), effective radius (CER), and cloud thermodynamic phase retrievals. The ice cloud comparisons are stratified by retrieval uncertainty estimates, horizontal inhomogeneity at the pixel-scale, vertical cloud structure, and other key parameters. Although an estimated 27% of all AIRS pixels globally contain ice cloud, only 7% of them are spatially uniform ice according to MODIS. We find that the correlations of COT and CER between the two instruments are strong functions of horizontal cloud heterogeneity and vertical cloud structure. The best correlations are found in single-layer, horizontally homogeneous clouds over the low-latitude tropical oceans, with biases and scatter that increase with scene complexity. While the COT comparisons are unbiased in homogeneous ice clouds, a bias of 5-10 microns remains in CER within the most homogeneous scenes identified. This behavior is entirely consistent with known sensitivity differences between the visible and infrared bands. We will use AIRS and MODIS ice cloud properties to evaluate ice hydrometeor output from climate models, such as CAM5, with comparisons sorted into different dynamical regimes. The results of the regime-dependent comparisons will be described, and implications for model evaluation and future satellite observational needs will be discussed.

  19. A Review of Spatial and Seasonal Changes in Condensation Clouds Observed During Aerobraking by MGS TES

    NASA Technical Reports Server (NTRS)

    Pearl, J. C.; Smith, M. D.; Conrath, B. J.; Bandfield, J. L.; Christensen, P. R.

    1999-01-01

    Successful operation of the Mars Global Surveyor spacecraft, beginning in September 1997, has permitted extensive infrared observations of condensation clouds during the martian southern summer and fall seasons (184 deg less than L(sub s) less than 28 deg).

  20. A Method to Analyze How Various Parts of Clouds Influence Each Other's Brightness

    NASA Technical Reports Server (NTRS)

    Varnai, Tamas; Marshak, Alexander; Lau, William (Technical Monitor)

    2001-01-01

    This paper proposes a method for obtaining new information on 3D radiative effects that arise from horizontal radiative interactions in heterogeneous clouds. Unlike current radiative transfer models, it can not only calculate how 3D effects change radiative quantities at any given point, but can also determine which areas contribute to these 3D effects, to what degree, and through what mechanisms. After describing the proposed method, the paper illustrates its new capabilities both for detailed case studies and for the statistical processing of large datasets. Because the proposed method makes it possible, for the first time, to link a particular change in cloud properties to the resulting 3D effect, in future studies it can be used to develop new radiative transfer parameterizations that would consider 3D effects in practical applications currently limited to 1D theory, such as remote sensing of cloud properties and dynamical cloud modeling.

  1. Temperature Dependence in Homogeneous and Heterogeneous Nucleation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGraw R. L.; Winkler, P. M.; Wagner, P. E.

    2017-08-01

    Heterogeneous nucleation on stable (sub-2 nm) nuclei aids the formation of atmospheric cloud condensation nuclei (CCN) by circumventing or reducing vapor pressure barriers that would otherwise limit condensation and new particle growth. Aerosol and cloud formation depend largely on the interaction between a condensing liquid and the nucleating site. A new paper published this year reports the first direct experimental determination of contact angles as well as contact line curvature and other geometric properties of a spherical cap nucleus at nanometer scale using measurements from the Vienna Size Analyzing Nucleus Counter (SANC) (Winkler et al., 2016). For water nucleating heterogeneously on silver oxide nanoparticles we find contact angles around 15 degrees, compared to around 90 degrees for the macroscopically measured equilibrium angle for water on bulk silver. The small microscopic contact angles can be attributed, via the generalized Young equation, to a negative line tension that becomes increasingly dominant with increasing curvature of the contact line. These results enable a consistent theoretical description of heterogeneous nucleation and provide firm insight into the wetting of nanosized objects.
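
    The line-tension correction invoked above is usually expressed through the generalized Young equation; one common form (standard notation, not quoted from the paper) relates the microscopic contact angle \theta to the macroscopic angle \theta_\infty via the line tension \tau, the liquid-vapor surface tension \sigma_{lv}, and the contact-line radius r_c:

      \cos\theta = \cos\theta_\infty - \frac{\tau}{\sigma_{lv}\, r_c}

    A negative \tau makes the correction term positive, so \theta decreases as r_c shrinks, consistent with the roughly 15 degree nanoscale angles versus roughly 90 degrees on bulk silver reported above.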

  2. A theoretical framework for modeling dilution enhancement of non-reactive solutes in heterogeneous porous media.

    PubMed

    de Barros, F P J; Fiori, A; Boso, F; Bellin, A

    2015-01-01

    Spatial heterogeneity of the hydraulic properties of geological porous formations leads to erratically shaped solute clouds, thus increasing the edge area of the solute body and augmenting the dilution rate. In this study, we provide a theoretical framework to quantify dilution of a non-reactive solute within a steady state flow as affected by the spatial variability of the hydraulic conductivity. Embracing the Lagrangian concentration framework, we obtain explicit semi-analytical expressions for the dilution index as a function of the structural parameters of the random hydraulic conductivity field, under the assumptions of uniform-in-the-average flow, small injection source and weak-to-mild heterogeneity. Results show how the dilution enhancement of the solute cloud is strongly dependent on both the statistical anisotropy ratio and the heterogeneity level of the porous medium. The explicit semi-analytical solution also captures the temporal evolution of the dilution rate; for the early- and late-time limits, the proposed solution recovers previous results from the literature, while at intermediate times it reflects the increasing interplay between large-scale advection and local-scale dispersion. The performance of the theoretical framework is verified with high resolution numerical results and successfully tested against the Cape Cod field data.
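
    For readers unfamiliar with the metric, the dilution index referenced above presumably follows the entropy-based definition of Kitanidis (1994); in standard notation (an assumption, not quoted from the paper), for concentration c over the domain V:

      E(t) = \exp\!\left[-\int_V p(\mathbf{x},t)\,\ln p(\mathbf{x},t)\,\mathrm{d}\mathbf{x}\right],
      \qquad p(\mathbf{x},t) = \frac{c(\mathbf{x},t)}{\int_V c(\mathbf{x},t)\,\mathrm{d}\mathbf{x}}

    E(t) grows as the solute cloud occupies more volume, which is why erratically shaped plumes with larger edge area dilute faster.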

  3. Smart Point Cloud: Definition and Remaining Challenges

    NASA Astrophysics Data System (ADS)

    Poux, F.; Hallot, P.; Neuville, R.; Billen, R.

    2016-10-01

    Dealing with coloured point clouds acquired from terrestrial laser scanners, this paper identifies remaining challenges for a new data structure: the smart point cloud. This concept arises from the observation that massive and discretized spatial information from active remote sensing technology is often underused due to data mining limitations. The generalisation of point cloud data, associated with the heterogeneity and temporality of such datasets, is the main issue regarding structure, segmentation, classification, and interaction for an immediate understanding. We propose to use both point cloud properties and human knowledge through machine learning to rapidly extract pertinent information, using user-centered information (smart data) rather than raw data. Feature detection, machine learning frameworks, and database systems indexed both for mining queries and data visualisation are reviewed. Based on existing approaches, we propose a new flexible framework built around three blocks: device expertise, analytic expertise, and domain-based reflection. This contribution serves as the first step towards the realisation of a comprehensive smart point cloud data structure.

  4. Using virtual machine monitors to overcome the challenges of monitoring and managing virtualized cloud infrastructures

    NASA Astrophysics Data System (ADS)

    Bamiah, Mervat Adib; Brohi, Sarfraz Nawaz; Chuprat, Suriayati

    2012-01-01

    Virtualization is one of the hottest research topics nowadays. Several academic researchers and developers from the IT industry are designing approaches for solving the security and manageability issues of Virtual Machines (VMs) residing on virtualized cloud infrastructures. Moving an application from a physical to a virtual platform increases efficiency and flexibility and reduces management cost as well as effort. Cloud computing is adopting the paradigm of virtualization; using this technique, memory, CPU and computational power are provided to clients' VMs by utilizing the underlying physical hardware. Besides these advantages, there are a few challenges faced in adopting virtualization, such as management of VMs and network traffic, unexpected additional cost, and resource allocation. The Virtual Machine Monitor (VMM), or hypervisor, is the tool used by cloud providers to manage VMs on the cloud. There are several heterogeneous hypervisors provided by various vendors, including VMware, Hyper-V, Xen and Kernel Virtual Machine (KVM). Considering the challenge of VM management, this paper describes several techniques to monitor and manage virtualized cloud infrastructures.

  5. Estimation of Cloud Fraction Profile in Shallow Convection Using a Scanning Cloud Radar

    DOE PAGES

    Oue, Mariko; Kollias, Pavlos; North, Kirk W.; ...

    2016-10-18

    Large spatial heterogeneities in shallow convection result in uncertainties in estimations of domain-averaged cloud fraction profiles (CFP). This issue is addressed using large eddy simulations of shallow convection over land coupled with a radar simulator. Results indicate that zenith profiling observations are inadequate to provide reliable CFP estimates. Use of Scanning Cloud Radar (SCR), performing a sequence of cross-wind horizon-to-horizon scans, is not straightforward due to the strong dependence of radar sensitivity to target distance. An objective method for estimating domain-averaged CFP is proposed that uses observed statistics of SCR hydrometeor detection with height to estimate optimum sampling regions. This method shows good agreement with the model CFP. Results indicate that CFP estimates require more than 35 min of SCR scans to converge on the model domain average. Lastly, the proposed technique is expected to improve our ability to compare model output with cloud radar observations in shallow cumulus cloud conditions.
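
    A schematic of the sampling idea (not the authors' algorithm; the array shapes and the per-height range limit are assumptions): restrict each height bin to ranges close enough that sensitivity loss is acceptable, then compute the hydrometeor fraction only inside that optimum region.

      import numpy as np

      def estimate_cfp(detect, dist, r_max):
          """detect, dist: (n_obs, n_height) hydrometeor mask and distance to
          the radar; r_max: (n_height,) optimum sampling limit per height."""
          valid = dist <= r_max                    # usable part of each scan
          n_valid = valid.sum(axis=0)
          return np.divide((detect & valid).sum(axis=0), n_valid,
                           out=np.full(n_valid.shape, np.nan),
                           where=n_valid > 0)

      rng = np.random.default_rng(0)
      dist = rng.uniform(0, 20e3, (1000, 30))      # toy scan geometry [m]
      detect = rng.random((1000, 30)) < 0.25       # toy detection mask
      print(estimate_cfp(detect, dist, np.linspace(15e3, 5e3, 30)))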

  6. Evidence of Mineral Dust Altering Cloud Microphysics and Precipitation

    NASA Technical Reports Server (NTRS)

    Min, Qilong; Li, Rui; Lin, Bing; Joseph, Everette; Wang, Shuyu; Hu, Yongxiang; Morris, Vernon; Chang, F.

    2008-01-01

    Multi-platform and multi-sensor observations are employed to investigate the impact of mineral dust on cloud microphysical and precipitation processes in mesoscale convective systems. It is clearly evident that for a given convection strength, small hydrometeors were more prevalent in the stratiform rain regions with dust than in those regions that were dust free. Evidence of abundant cloud ice particles in the dust sector, particularly at altitudes where the heterogeneous nucleation process of mineral dust prevails, further supports the observed changes of precipitation. The consequences of the microphysical effects of the dust aerosols were to shift the precipitation size spectrum from heavy precipitation to light precipitation and ultimately to suppress precipitation.

  7. A cloud-based X73 ubiquitous mobile healthcare system: design and implementation.

    PubMed

    Ji, Zhanlin; Ganchev, Ivan; O'Droma, Máirtín; Zhang, Xin; Zhang, Xueji

    2014-01-01

    Based on the user-centric paradigm for next generation networks, this paper describes a ubiquitous mobile healthcare (uHealth) system based on the ISO/IEEE 11073 personal health data (PHD) standards (X73) and cloud computing techniques. A number of design issues associated with the system implementation are outlined. The system includes a middleware on the user side, providing a plug-and-play environment for heterogeneous wireless sensors and mobile terminals utilizing different communication protocols, and a distributed "big data" processing subsystem in the cloud. The design and implementation of this system are envisaged as an efficient solution for the next generation of uHealth systems.

  8. A cloud-based data network approach for translational cancer research.

    PubMed

    Xing, Wei; Tsoumakos, Dimitrios; Ghanem, Moustafa

    2015-01-01

    We develop a new model and associated technology for constructing and managing self-organizing data to support translational cancer research studies. We employ a semantic content network approach to address the challenges of managing cancer research data. Such data is heterogeneous, large, decentralized, growing and continually being updated. Moreover, the data originates from different information sources that may be partially overlapping, creating redundancies as well as contradictions and inconsistencies. Building on the advantages of elasticity of cloud computing, we deploy the cancer data networks on top of the CELAR Cloud platform to enable more effective processing and analysis of Big cancer data.

  9. Large-scale simulations and in-situ observations of mid-latitude and Arctic cirrus clouds

    NASA Astrophysics Data System (ADS)

    Rolf, Christian; Grooß, Jens-Uwe; Spichtinger, Peter; Costa, Anja; Krämer, Martina

    2017-04-01

    Cirrus clouds play an important role by influencing the Earth's radiation budget and the global climate (Heintzenberg and Charlson, 2009). The formation and further evolution of cirrus clouds is determined by the interplay of temperature, ice nuclei (IN) properties, relative humidity, cooling rates and ice crystal sedimentation. Thus, for a realistic simulation of cirrus clouds, a Lagrangian approach using meteorological wind fields is the best way to represent complete cirrus systems such as frontal cirrus. To this end, we coupled the two-moment microphysical ice model of Spichtinger and Gierens (2009) with the 3D Lagrangian model CLaMS (McKenna et al., 2002). The new CLaMS-Ice module simulates cirrus formation by including heterogeneous and homogeneous freezing as well as ice crystal sedimentation. The box model is operated along CLaMS trajectories and individually initialized with the ECMWF meteorological fields. From the CLaMS-Ice three-dimensional large-scale cirrus simulations, we are able to assign the formation mechanism - either heterogeneous or homogeneous freezing - to specific combinations of temperatures and ice water contents. First, we compare a large mid-latitude dataset of in-situ measured cirrus microphysical properties, compiled from the ML-Cirrus aircraft campaign in 2014, to CLaMS-Ice model simulations. We investigate the number of ice crystals and the ice water content with respect to temperature in a climatological way and find good and consistent agreement between measurements and simulations. We also find that most (67%) of the cirrus cloud cover in mid-latitudes is dominated by heterogeneously formed ice crystals. Second, CLaMS-Ice model simulations in the Arctic/Polar region were performed during the POLSTRACC aircraft campaign in 2016. Higher ice crystal number concentrations are found more frequently in the Arctic region in comparison to the mid-latitude dataset. This is caused by enhanced gravity wave activity over the mountainous terrain. References: Heintzenberg, J. and Charlson, R. J.: Clouds in the perturbed climate system - Their relationship to energy balance, atmospheric dynamics, and precipitation, MIT Press, Cambridge, UK, 58-72, 2009. McKenna, D. S., Konopka, P., Grooss, J. U., Günther, G., Müller, R., Spang, R., Offermann, D., and Orsolini, Y.: A new Chemical Lagrangian Model of the Stratosphere (CLaMS) - 1. Formulation of advection and mixing, J. Geophys. Res., 107, 4309, doi:10.1029/2000JD000114, 2002. Spichtinger, P. and Gierens, K. M.: Modelling of cirrus clouds - Part 1a: Model description and validation, Atmospheric Chemistry and Physics, 9, 685-706, 2009.

  10. A comprehensive parameterization of heterogeneous ice nucleation of dust surrogate: laboratory study with hematite particles and its application to atmospheric models

    NASA Astrophysics Data System (ADS)

    Hiranuma, N.; Paukert, M.; Steinke, I.; Zhang, K.; Kulkarni, G.; Hoose, C.; Schnaiter, M.; Saathoff, H.; Möhler, O.

    2014-06-01

    A new heterogeneous ice nucleation parameterization that covers a wide temperature range (-36 to -78 °C) is presented. Developing and testing such an ice nucleation parameterization, which is constrained through identical experimental conditions, is critical in order to accurately simulate the ice nucleation processes in cirrus clouds. The surface-scaled ice nucleation efficiencies of hematite particles, inferred by ns, were derived from AIDA (Aerosol Interaction and Dynamics in the Atmosphere) cloud chamber measurements under water subsaturated conditions that were realized by continuously changing temperature (T) and relative humidity with respect to ice (RHice) in the chamber. Our measurements showed several different pathways to nucleate ice depending on T and RHice conditions. For instance, almost T-independent freezing was observed at -60 °C < T < -50 °C, where RHice explicitly controlled ice nucleation efficiency, while both T and RHice played roles in the other two T regimes: -78 °C < T < -60 °C and -50 °C < T < -36 °C. More specifically, observations at T colder than -60 °C revealed that higher RHice was necessary to maintain constant ns, whereas T may have played a significant role in ice nucleation at T warmer than -50 °C. We implemented the new ns parameterizations into two cloud models to investigate their sensitivity and to compare with existing ice nucleation schemes in simulating cirrus cloud properties. Our results show that the new AIDA-based parameterizations lead to an order of magnitude higher ice crystal concentrations and to an inhibition of homogeneous nucleation in colder temperature regions. Our cloud simulation results suggest that atmospheric dust particles that form ice nuclei at lower temperatures, below -36 °C, can potentially have a stronger influence on cloud properties, such as cloud longevity and initiation, compared to previous parameterizations.
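
    For context, the ice nucleation active surface-site density invoked here is commonly defined (a standard form used in AIDA-type studies, not quoted from this abstract) from the frozen fraction f_ice of particles with mean surface area A_p per particle:

      n_s(T, \mathrm{RH_{ice}}) = -\frac{\ln\left(1 - f_{\mathrm{ice}}\right)}{A_p}

    so constant n_s along a trajectory in (T, RH_ice) space implies constant ice nucleation efficiency per unit particle surface.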

  11. A comprehensive parameterization of heterogeneous ice nucleation of dust surrogate: laboratory study with hematite particles and its application to atmospheric models

    NASA Astrophysics Data System (ADS)

    Hiranuma, N.; Paukert, M.; Steinke, I.; Zhang, K.; Kulkarni, G.; Hoose, C.; Schnaiter, M.; Saathoff, H.; Möhler, O.

    2014-12-01

    A new heterogeneous ice nucleation parameterization that covers a wide temperature range (-36 to -78 °C) is presented. Developing and testing such an ice nucleation parameterization, which is constrained through identical experimental conditions, is important to accurately simulate the ice nucleation processes in cirrus clouds. The ice nucleation active surface-site density (ns) of hematite particles, used as a proxy for atmospheric dust particles, was derived from AIDA (Aerosol Interaction and Dynamics in the Atmosphere) cloud chamber measurements under water subsaturated conditions. These conditions were achieved by continuously changing the temperature (T) and relative humidity with respect to ice (RHice) in the chamber. Our measurements showed several different pathways to nucleate ice depending on T and RHice conditions. For instance, almost T-independent freezing was observed at -60 °C < T < -50 °C, where RHice explicitly controlled ice nucleation efficiency, while both T and RHice played roles in the other two T regimes: -78 °C < T < -60 °C and -50 °C < T < -36 °C. More specifically, observations at T lower than -60 °C revealed that higher RHice was necessary to maintain a constant ns, whereas T may have played a significant role in ice nucleation at T higher than -50 °C. We implemented the new hematite-derived ns parameterization, which agrees well with previous AIDA measurements of desert dust, into two conceptual cloud models to investigate their sensitivity to the new parameterization in comparison to existing ice nucleation schemes for simulating cirrus cloud properties. Our results show that the new AIDA-based parameterization leads to an order of magnitude higher ice crystal concentrations and to an inhibition of homogeneous nucleation in lower-temperature regions. Our cloud simulation results suggest that atmospheric dust particles that form ice nuclei at lower temperatures, below -36 °C, can potentially have a stronger influence on cloud properties, such as cloud longevity and initiation, compared to previous parameterizations.

  12. A Comprehensive Parameterization of Heterogeneous Ice Nucleation of Dust Surrogate: Laboratory Study with Hematite Particles and Its Application to Atmospheric Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hiranuma, Naruki; Paukert, Marco; Steinke, Isabelle

    2014-12-10

    A new heterogeneous ice nucleation parameterization that covers a wide temperature range (-36 °C to -78 °C) is presented. Developing and testing such an ice nucleation parameterization, which is constrained through identical experimental conditions, is critical in order to accurately simulate the ice nucleation processes in cirrus clouds. The surface-scaled ice nucleation efficiencies of hematite particles, inferred by ns, were derived from AIDA (Aerosol Interaction and Dynamics in the Atmosphere) cloud chamber measurements under water subsaturated conditions that were realized by continuously changing temperature (T) and relative humidity with respect to ice (RHice) in the chamber. Our measurements showed several different pathways to nucleate ice depending on T and RHice conditions. For instance, almost T-independent freezing was observed at -60 °C < T < -50 °C, where RHice explicitly controlled ice nucleation efficiency, while both T and RHice played roles in the other two T regimes: -78 °C < T < -60 °C and -50 °C < T < -36 °C. More specifically, observations at T colder than -60 °C revealed that higher RHice was necessary to maintain constant ns, whereas T may have played a significant role in ice nucleation at T warmer than -50 °C. We implemented the new ns parameterizations into two cloud models to investigate their sensitivity and to compare with existing ice nucleation schemes in simulating cirrus cloud properties. Our results show that the new AIDA-based parameterizations lead to an order of magnitude higher ice crystal concentrations and to an inhibition of homogeneous nucleation in colder temperature regions. Our cloud simulation results suggest that atmospheric dust particles that form ice nuclei at lower temperatures, below -36 °C, can potentially have a stronger influence on cloud properties, such as cloud longevity and initiation, compared to previous parameterizations.

  13. On the usage of classical nucleation theory in predicting the impact of bacteria on weather and climate

    NASA Astrophysics Data System (ADS)

    Sahyoun, Maher; Woetmann Nielsen, Niels; Havskov Sørensen, Jens; Finster, Kai; Bay Gosewinkel Karlson, Ulrich; Šantl-Temkiv, Tina; Smith Korsholm, Ulrik

    2014-05-01

    Bacteria, e.g. Pseudomonas syringae, have previously been found efficient in nucleating ice heterogeneously at temperatures close to -2°C in laboratory tests. Therefore, ice nucleation active (INA) bacteria may be involved in the formation of precipitation in mixed-phase clouds, and could potentially influence weather and climate. Investigations into the impact of INA bacteria on climate have shown that emissions were too low to significantly impact the climate (Hoose et al., 2010). The goal of this study is to clarify the reason for the marginal impact found when INA bacteria were considered, by investigating the usability of ice nucleation rate parameterizations based on classical nucleation theory (CNT). For this purpose, two parameterizations of heterogeneous ice nucleation were compared. Both parameterizations were implemented and tested in a 1-d version of the operational weather model HIRLAM (Lynch et al., 2000; Unden et al., 2002) in two different meteorological cases. The first parameterization, denoted CH08, is based on CNT (Chen et al., 2008); it is a function of temperature and the size of the IN. The second parameterization, denoted HAR13, was derived from nucleation measurements of Snomax(TM) (Hartmann et al., 2013); it is a function of temperature and the number of protein complexes on the outer membranes of the cell. The fractions of cloud droplets containing each type of IN, expressed as percentages of the cloud droplet population, were used, and the sensitivity of cloud ice production under each parameterization was compared. In this study, HAR13 produces more cloud ice and precipitation than CH08 when the bacteria fraction increases. In CH08, increasing the bacteria fraction leads to a decrease in the cloud ice mixing ratio. Ice production using HAR13 was found to be much more sensitive to changes in the bacterial fraction than ice production using CH08. As a result, this may explain the marginal impact of IN bacteria in climate models when CH08 was used. The number of cell fragments containing proteins appears to be a more important parameter to consider than the size of the cell when parameterizing the heterogeneous freezing of bacteria.
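
    In CNT-based schemes of the CH08 type, the heterogeneous nucleation rate is typically written with a contact-angle factor reducing the homogeneous energy barrier; in standard notation (a generic CNT form, not the exact CH08 expression):

      J_{\mathrm{het}} = A\,\exp\!\left(-\frac{f(\theta)\,\Delta G^{*}_{\mathrm{hom}}}{k_B T}\right),
      \qquad f(\theta) = \frac{(2+\cos\theta)(1-\cos\theta)^2}{4}

    which makes the predicted rate depend on the geometric size and wettability of the IN, whereas the empirical HAR13 fit scales with the number of INA protein complexes per cell, consistent with the differing sensitivities reported above.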

  14. Remotely Sensed High-Resolution Global Cloud Dynamics for Predicting Ecosystem and Biodiversity Distributions

    PubMed Central

    Wilson, Adam M.; Jetz, Walter

    2016-01-01

    Cloud cover can influence numerous important ecological processes, including reproduction, growth, survival, and behavior, yet our assessment of its importance at the appropriate spatial scales has remained remarkably limited. If captured over a large extent yet at sufficiently fine spatial grain, cloud cover dynamics may provide key information for delineating a variety of habitat types and predicting species distributions. Here, we develop new near-global, fine-grain (≈1 km) monthly cloud frequencies from 15 y of twice-daily Moderate Resolution Imaging Spectroradiometer (MODIS) satellite images that expose spatiotemporal cloud cover dynamics of previously undocumented global complexity. We demonstrate that cloud cover varies strongly in its geographic heterogeneity and that the direct, observation-based nature of cloud-derived metrics can improve predictions of habitats, ecosystem, and species distributions with reduced spatial autocorrelation compared to commonly used interpolated climate data. These findings support the fundamental role of remote sensing as an effective lens through which to understand and globally monitor the fine-grain spatial variability of key biodiversity and ecosystem properties. PMID:27031693

  15. Aerosol processing in mixed-phase clouds in ECHAM5-HAM: Model description and comparison to observations

    NASA Astrophysics Data System (ADS)

    Hoose, C.; Lohmann, U.; Stier, P.; Verheggen, B.; Weingartner, E.

    2008-04-01

    The global aerosol-climate model ECHAM5-HAM has been extended by an explicit treatment of cloud-borne particles. Two additional modes for in-droplet and in-crystal particles are introduced, which are coupled to the number of cloud droplet and ice crystal concentrations simulated by the ECHAM5 double-moment cloud microphysics scheme. Transfer, production, and removal of cloud-borne aerosol number and mass by cloud droplet activation, collision scavenging, aqueous-phase sulfate production, freezing, melting, evaporation, sublimation, and precipitation formation are taken into account. The model performance is demonstrated and validated with observations of the evolution of total and interstitial aerosol concentrations and size distributions during three different mixed-phase cloud events at the alpine high-altitude research station Jungfraujoch (Switzerland). Although the single-column simulations cannot be compared one-to-one with the observations, the governing processes in the evolution of the cloud and aerosol parameters are captured qualitatively well. High scavenged fractions are found during the presence of liquid water, while the release of particles during the Bergeron-Findeisen process results in low scavenged fractions after cloud glaciation. The observed coexistence of liquid and ice, which might be related to cloud heterogeneity at subgrid scales, can only be simulated in the model when assuming nonequilibrium conditions.

  16. Flexible services for the support of research.

    PubMed

    Turilli, Matteo; Wallom, David; Williams, Chris; Gough, Steve; Curran, Neal; Tarrant, Richard; Bretherton, Dan; Powell, Andy; Johnson, Matt; Harmer, Terry; Wright, Peter; Gordon, John

    2013-01-28

    Cloud computing has been increasingly adopted by users and providers to promote a flexible, scalable and tailored access to computing resources. Nonetheless, the consolidation of this paradigm has uncovered some of its limitations. Initially devised by corporations with direct control over large amounts of computational resources, cloud computing is now being endorsed by organizations with limited resources or with a more articulated, less direct control over these resources. The challenge for these organizations is to leverage the benefits of cloud computing while dealing with limited and often widely distributed computing resources. This study focuses on the adoption of cloud computing by higher education institutions and addresses two main issues: flexible and on-demand access to a large amount of storage resources, and scalability across a heterogeneous set of cloud infrastructures. The proposed solutions leverage a federated approach to cloud resources in which users access multiple and largely independent cloud infrastructures through a highly customizable broker layer. This approach allows for a uniform authentication and authorization infrastructure, a fine-grained policy specification and the aggregation of accounting and monitoring. Within a loosely coupled federation of cloud infrastructures, users can access vast amounts of data without copying them across cloud infrastructures and can scale their resource provisions when the local cloud resources become insufficient.

  17. A service brokering and recommendation mechanism for better selecting cloud services.

    PubMed

    Gui, Zhipeng; Yang, Chaowei; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Yu, Manzhu; Sun, Min; Zhou, Nanyin; Jin, Baoxuan

    2014-01-01

    Cloud computing is becoming the new generation computing infrastructure, and many cloud vendors provide different types of cloud services. How to choose the best cloud services for specific applications is very challenging. Addressing this challenge requires balancing multiple factors, such as business demands, technologies, policies and preferences in addition to the computing requirements. This paper recommends a mechanism for selecting the best public cloud service at the levels of Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). A systematic framework and associated workflow include cloud service filtration, solution generation, evaluation, and selection of public cloud services. Specifically, we propose the following: a hierarchical information model for integrating heterogeneous cloud information from different providers and a corresponding cloud information collecting mechanism; a cloud service classification model for categorizing and filtering cloud services and an application requirement schema for providing rules for creating application-specific configuration solutions; and a preference-aware solution evaluation model for evaluating and recommending solutions according to the preferences of application providers. To test the proposed framework and methodologies, a cloud service advisory tool prototype was developed after which relevant experiments were conducted. The results show that the proposed system collects/updates/records the cloud information from multiple mainstream public cloud services in real-time, generates feasible cloud configuration solutions according to user specifications and acceptable cost prediction, assesses solutions from multiple aspects (e.g., computing capability, potential cost and Service Level Agreement, SLA) and offers rational recommendations based on user preferences and practical cloud provisioning; and visually presents and compares solutions through an interactive web Graphical User Interface (GUI).
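
    As a hedged sketch of the preference-aware evaluation step (the criteria names and weights below are hypothetical; the paper's actual evaluation model may differ), candidate configuration solutions can be min-max normalized per criterion and ranked by a preference-weighted sum:

      def rank_solutions(solutions, prefs, lower_is_better=("cost",)):
          """solutions: {name: {criterion: value}}; prefs: {criterion: weight}."""
          crits = list(prefs)
          lo = {c: min(s[c] for s in solutions.values()) for c in crits}
          hi = {c: max(s[c] for s in solutions.values()) for c in crits}

          def norm(c, v):
              span = hi[c] - lo[c]
              x = 0.5 if span == 0 else (v - lo[c]) / span
              return 1 - x if c in lower_is_better else x   # invert cost-like criteria

          scores = {name: sum(prefs[c] * norm(c, s[c]) for c in crits)
                    for name, s in solutions.items()}
          return sorted(scores.items(), key=lambda kv: -kv[1])

      candidates = {"providerA": {"cost": 120, "compute": 0.8, "sla": 0.99},
                    "providerB": {"cost": 90, "compute": 0.6, "sla": 0.95}}
      print(rank_solutions(candidates, {"cost": 0.5, "compute": 0.3, "sla": 0.2}))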

  18. Heterogeneous VM Replication: A New Approach to Intrusion Detection, Active Response and Recovery in Cloud Data Centers

    DTIC Science & Technology

    2015-08-17

    from the same execution history, and cost-effective active response by proactively setting up standby VM replicas: migration from a compromised VM... the guest OSes' system call code to be reused inside a "shadowed" portion of the context of the out-of-guest inspection program. Besides... by the rootkits in cloud environments. RootkitDet detects rootkits by identifying suspicious code regions in the kernel space of guest OSes through

  19. Immersion freezing by natural dust based on a soccer ball model with the Community Atmospheric Model version 5: climate effects

    NASA Astrophysics Data System (ADS)

    Wang, Yong; Liu, Xiaohong

    2014-12-01

    We introduce a simplified version of the soccer ball model (SBM) developed by Niedermeier et al. (2014, Geophys. Res. Lett., 41, 736-741) into the Community Atmospheric Model version 5 (CAM5). This is the first time that the SBM has been used in an atmospheric model to parameterize heterogeneous ice nucleation. The SBM, simplified here for suitable application in atmospheric models, uses classical nucleation theory to describe immersion/condensation freezing by dust in the mixed-phase cloud regime. Uncertain parameters (mean contact angle, standard deviation of the contact angle probability distribution, and number of surface sites) in the SBM are constrained by fitting them to recent natural (Saharan) dust datasets. With the SBM in CAM5, we investigate the sensitivity of modeled cloud properties to the SBM parameters, and find significant seasonal and regional differences in the sensitivity among the three SBM parameters. Changes of mean contact angle and the number of surface sites lead to changes of cloud properties in the Arctic in spring, which could be attributed to the transport of dust ice nuclei to this region. In winter, significant changes of cloud properties induced by these two parameters mainly occur in northern hemispheric mid-latitudes (e.g., East Asia). In comparison, no obvious changes of cloud properties caused by changes of the standard deviation can be found in any season. These results are valuable for understanding heterogeneous ice nucleation behavior, and useful for guiding future model development.

  20. Advanced Understanding of Convection Initiation and Optimizing Cloud Seeding by Advanced Remote Sensing and Land Cover Modification over the United Arab Emirates

    NASA Astrophysics Data System (ADS)

    Wulfmeyer, V.; Behrendt, A.; Branch, O.; Schwitalla, T.

    2016-12-01

    A prerequisite for significant precipitation amounts is the presence of convergence zones. These are due to land surface heterogeneity, orography, and mesoscale and synoptic-scale circulations. Only if these convergence zones are strong enough and interact with an upper-level instability can deep convection be initiated. For the understanding of convection initiation (CI) and optimal cloud seeding deployment, it is essential that these convergence zones are detected before clouds develop, in order to preempt the decisive microphysical processes for liquid water and ice formation. In this presentation, a new project on Optimizing Cloud Seeding by Advanced Remote Sensing and Land Cover Modification (OCAL) is introduced, which is funded by the United Arab Emirates Rain Enhancement Program (UAEREP). This project has two research components. The first component focuses on improved detection and forecasting of convergence zones and CI by a) operation of scanning Doppler lidar and cloud radar systems during two seasonal field campaigns in orographic terrain and over the desert in the UAE, and b) advanced forecasting of convergence zones and CI with the WRF-NOAHMP model system. Nowcasting to short-range forecasting of convection will be improved by the assimilation of Doppler lidar and UAE radar network data. For the latter, we will apply a new model forward operator developed at our institute. Forecast uncertainties will be assessed by ensemble simulations driven by ECMWF boundaries. The second research component of OCAL will study whether artificial modifications of land surface heterogeneity are possible through plantations or changes of terrain, leading to an amplification of convergence zones. This is based on our pioneering work on high-resolution modeling of the impact of plantations on weather and climate in arid regions. A specific design of the shape and location of plantations can lead to the formation of convergence zones, which can strengthen convergent flows already existing in the region of interest, thus amplifying convection and precipitation. We expect that this method can be successfully applied in regions with pre-existing land-surface heterogeneity and orography, such as coastal areas with land-sea breezes and the Al Hajar Mountain range.

  1. Evaluation of Future Internet Technologies for Processing and Distribution of Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Becedas, J.; Perez, R.; Gonzalez, G.; Alvarez, J.; Garcia, F.; Maldonado, F.; Sucari, A.; Garcia, J.

    2015-04-01

    Satellite imagery data centres are designed to operate a defined number of satellites. Difficulties appear, for instance, when new satellites have to be incorporated into the system. This occurs because traditional infrastructures are neither flexible nor scalable. With the appearance of Future Internet technologies, new solutions can be provided to manage large and variable amounts of data on demand. These technologies optimize resources and facilitate the appearance of new applications and services in the traditional Earth Observation (EO) market. The use of Future Internet technologies for the EO sector was validated with the GEO-Cloud experiment, part of the Fed4FIRE FP7 European project. This work presents the final results of the project, in which a constellation of satellites records the whole Earth surface on a daily basis. The satellite imagery is downloaded into a distributed network of ground stations and ingested in a cloud infrastructure, where the data is processed, stored, archived and distributed to the end users. The processing and transfer times inside the cloud, workload of the processors, automatic cataloguing and accessibility through the Internet are evaluated to validate whether Future Internet technologies present advantages over traditional methods. The applicability of these technologies to provide high added value services is also evaluated. Finally, the advantages of using federated testbeds to carry out large-scale, industry-driven experiments are analysed, evaluating the feasibility of an experiment developed in the European infrastructure Fed4FIRE and its migration to a commercial cloud: SoftLayer, an IBM Company.

  2. Analytical modeling and feasibility study of a multi-GPU cloud-based server (MGCS) framework for non-voxel-based dose calculations.

    PubMed

    Neylon, J; Min, Y; Kupelian, P; Low, D A; Santhanam, A

    2017-04-01

    In this paper, a multi-GPU cloud-based server (MGCS) framework is presented for dose calculations, exploring the feasibility of remote computing power for parallelization and acceleration of computationally and time intensive radiotherapy tasks in moving toward online adaptive therapies. An analytical model was developed to estimate theoretical MGCS performance acceleration and intelligently determine workload distribution. Numerical studies were performed with a computing setup of 14 GPUs distributed over 4 servers interconnected by a 1 Gigabit per second (Gbps) network. Inter-process communication methods were optimized to facilitate resource distribution and minimize data transfers over the server interconnect. The analytically predicted computation times matched experimental observations within 1-5%. MGCS performance approached a theoretical limit of acceleration proportional to the number of GPUs utilized when computational tasks far outweighed memory operations. The MGCS implementation reproduced ground-truth dose computations with negligible differences, by distributing the work among several processes and implementing optimization strategies. The results showed that a cloud-based computation engine is a feasible solution for enabling clinics to make use of fast dose calculations for advanced treatment planning and adaptive radiotherapy. The cloud-based system was able to exceed the performance of a local machine even for optimized calculations, and provided significant acceleration for computationally intensive tasks. Such a framework can provide access to advanced technology and computational methods to many clinics, providing an avenue for standardization across institutions without the requirements of purchasing, maintaining, and continually updating hardware.
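
    The analytical acceleration model is not reproduced in the abstract; a generic form consistent with the behavior described (an assumption, not the paper's equation) splits the runtime into a fixed overhead, a compute term that scales with GPU count N, and a communication term over the interconnect:

      T(N) \approx t_{\mathrm{fixed}} + \frac{t_{\mathrm{compute}}}{N} + t_{\mathrm{comm}}(N),
      \qquad S(N) = \frac{T(1)}{T(N)}

    With S(N) approaching N when the compute term dominates, this matches the reported limit of acceleration proportional to the number of GPUs when computational tasks far outweigh memory and transfer operations.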

  3. Delay-based virtual congestion control in multi-tenant datacenters

    NASA Astrophysics Data System (ADS)

    Liu, Yuxin; Zhu, Danhong; Zhang, Dong

    2018-03-01

    With the evolution of cloud computing and virtualization, congestion control for virtual datacenters has become a basic issue for multi-tenant datacenter transmission. To address the fairness conflict among the heterogeneous congestion control algorithms of multiple tenants, this paper proposes a delay-based virtual congestion control scheme, which uniformly translates the tenants' heterogeneous congestion feedback into delay-based feedback by introducing a hypervisor translation layer, modifying the three-way handshake, translating explicit feedback and packet-loss feedback, and throttling the receive window. The simulation results show that delay-based virtual congestion control can effectively resolve the unfairness among heterogeneous feedback-based congestion control algorithms.
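
    A minimal sketch of the translation-layer idea (the constants, thresholds and method names are illustrative assumptions, not the paper's design): a per-flow shim in the hypervisor measures RTT and throttles the receive window it advertises to the guest, so stacks with heterogeneous feedback all end up reacting to queueing delay.

      class DelayTranslator:
          """Per-flow shim in a hypothetical hypervisor translation layer."""
          MSS = 1460

          def __init__(self, base_rtt_ms: float, rwnd_bytes: int = 64 * 1024):
              self.base_rtt = base_rtt_ms
              self.rwnd = rwnd_bytes

          def on_ack(self, rtt_ms: float) -> int:
              """Return the receive window to advertise to the guest VM."""
              if rtt_ms > 1.5 * self.base_rtt:       # queueing delay building up
                  self.rwnd = max(2 * self.MSS, int(self.rwnd * 0.8))
              else:                                   # path looks uncongested
                  self.rwnd += self.MSS
              return self.rwnd

      shim = DelayTranslator(base_rtt_ms=0.5)
      for rtt in (0.5, 0.6, 1.2, 0.9, 0.4):
          print(shim.on_ack(rtt))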

  4. OpenID Connect as a security service in cloud-based medical imaging systems.

    PubMed

    Ma, Weina; Sartipi, Kamran; Sharghigoorabi, Hassan; Koff, David; Bak, Peter

    2016-04-01

    The evolution of cloud computing is driving the next generation of medical imaging systems. However, privacy and security concerns have consistently been regarded as the major obstacles to the adoption of cloud computing by healthcare domains. OpenID Connect, combining OpenID and OAuth, is an emerging representational state transfer-based federated identity solution. It is one of the most adopted open standards and could potentially become the de facto standard for securing cloud computing and mobile applications, sometimes described as the "Kerberos of the cloud." We introduce OpenID Connect as an authentication and authorization service in cloud-based diagnostic imaging (DI) systems, and propose enhancements that allow for incorporating this technology within distributed enterprise environments. The objective of this study is to offer solutions for secure sharing of medical images among diagnostic imaging repositories (DI-r) and heterogeneous picture archiving and communication systems (PACS), as well as Web-based and mobile clients, in the cloud ecosystem. The main objective is to use the open-source OpenID Connect single sign-on and authorization service in a user-centric manner, so that deploying DI-r and PACS to private or community clouds provides security levels equivalent to the traditional computing model.
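
    A brief sketch of the standard OpenID Connect authorization-code exchange that such a DI-r/PACS integration would build on (the endpoints, client credentials and authorization code below are placeholders; token signature verification is omitted):

      import requests

      ISSUER = "https://idp.example.org"              # hypothetical identity provider
      CLIENT_ID, CLIENT_SECRET = "di-r-client", "secret"
      AUTH_CODE = "code-from-authorization-endpoint"  # obtained after user login

      # 1. Discover endpoints from the well-known configuration document.
      conf = requests.get(f"{ISSUER}/.well-known/openid-configuration").json()

      # 2. Exchange the authorization code for tokens at the token endpoint.
      tokens = requests.post(conf["token_endpoint"], data={
          "grant_type": "authorization_code",
          "code": AUTH_CODE,
          "redirect_uri": "https://pacs.example.org/callback",
          "client_id": CLIENT_ID,
          "client_secret": CLIENT_SECRET,
      }).json()

      # 3. The ID token (a JWT) identifies the user across DI-r, PACS and
      #    mobile clients; verify its signature before trusting any claims.
      id_token = tokens["id_token"]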

  5. New insights about cloud vertical structure from CloudSat and CALIPSO observations

    NASA Astrophysics Data System (ADS)

    Oreopoulos, Lazaros; Cho, Nayeong; Lee, Dongmin

    2017-09-01

    Active cloud observations from A-Train's CloudSat and CALIPSO satellites offer new opportunities to examine the vertical structure of hydrometeor layers. We use the 2B-CLDCLASS-LIDAR merged CloudSat-CALIPSO product to examine global aspects of hydrometeor vertical stratification. We group the data into major cloud vertical structure (CVS) classes based on our interpretation of how clouds in three standard atmospheric layers overlap and provide their global frequency of occurrence. The two most frequent CVS classes are single-layer (per our definition) low and high clouds that represent 53% of cloudy skies, followed by high clouds overlying low clouds, and vertically extensive clouds that occupy near-contiguously a large portion of the troposphere. The prevalence of these configurations changes seasonally and geographically, between daytime and nighttime, and between continents and oceans. The radiative effects of the CVS classes reveal the major radiative warmers and coolers from the perspective of the planet as a whole, the surface, and the atmosphere. Single-layer low clouds dominate planetary and atmospheric cooling and thermal infrared surface warming. We also investigate the consistency between passive and active views of clouds by providing the CVS breakdowns of Moderate Resolution Imaging Spectroradiometer cloud regimes for spatiotemporally coincident MODIS-Aqua (also on the A-Train) and CloudSat-CALIPSO daytime observations. When the analysis is expanded for a more in-depth look at the most heterogeneous of the MODIS cloud regimes, it ultimately confirms previous interpretations of their makeup that did not have the benefit of collocated active observations.

  6. Integrating Containers in the CERN Private Cloud

    NASA Astrophysics Data System (ADS)

    Noel, Bertrand; Michelino, Davide; Velten, Mathieu; Rocha, Ricardo; Trigazis, Spyridon

    2017-10-01

    Containers remain a hot topic in computing, with new use cases and tools appearing every day. Basic functionality such as spawning containers seems to have settled, but topics like volume support or networking are still evolving. Solutions like Docker Swarm, Kubernetes or Mesos provide similar functionality but target different use cases, exposing distinct interfaces and APIs. The CERN private cloud is made of thousands of nodes and users, with many different use cases. A single solution for container deployment would not cover every one of them, and supporting multiple solutions involves repeating the same process multiple times for integration with authentication services, storage services or networking. In this paper we describe OpenStack Magnum as the solution to offer container management in the CERN cloud. We will cover its main functionality and some advanced use cases using Docker Swarm and Kubernetes, highlighting some relevant differences between the two. We will describe the most common use cases in HEP and how we integrated popular services like CVMFS or AFS in the most transparent way possible, along with some limitations found. Finally we will look into ongoing work on advanced scheduling for both Swarm and Kubernetes, support for running batch like workloads and integration of container networking technologies with the CERN infrastructure.

  7. Factors Controlling the Properties of Multi-Phase Arctic Stratocumulus Clouds

    NASA Technical Reports Server (NTRS)

    Fridlind, Ann; Ackerman, Andrew; Menon, Surabi

    2005-01-01

    The 2004 Multi-Phase Arctic Cloud Experiment (M-PACE) IOP at the ARM NSA site focused on measuring the properties of autumn transition-season arctic stratus and the environmental conditions controlling them, including concentrations of heterogeneous ice nuclei. Our work aims to use a large-eddy simulation (LES) code with embedded size-resolved aerosol and cloud microphysics to identify factors controlling multi-phase arctic stratus. Our preliminary simulations of autumn transition-season clouds observed during the 1994 Beaufort and Arctic Seas Experiment (BASE) indicated that low concentrations of ice nuclei, which were not measured, may have significantly lowered liquid water content and thereby stabilized cloud evolution. However, cloud drop concentrations appeared to be virtually immune to changes in liquid water content, indicating an active Bergeron process with little effect of collection on drop number concentration. We will compare these results with preliminary simulations from October 8-13 during M-PACE. The sensitivity of cloud properties to uncertainty in other factors, such as large-scale forcings and aerosol profiles, will also be investigated. Based on the LES simulations with M-PACE data, preliminary results from the NASA GISS single-column model (SCM) will be used to examine the sensitivity of predicted cloud properties to changing cloud drop number concentrations for multi-phase arctic clouds. Present parameterizations assume fixed cloud droplet number concentrations; these will be modified using M-PACE data.

  8. A Review of Spatial and Seasonal Changes in Condensation Clouds Observed During Aerobraking by MGS TES

    NASA Technical Reports Server (NTRS)

    Pearl, J. C.; Smith, M. D.; Conrath, B. J.; Bandfield, J. L.; Christensen, P. R.

    1999-01-01

    Successful operation of the Mars Global Surveyor spacecraft, beginning in September 1997, has permitted extensive infrared observations of condensation clouds during the martian southern summer and fall seasons (184 deg less than L(sub s) less than 28 deg). Initially, thin (normal optical depth less than 0.06 at 825/cm) ice clouds and hazes were widespread, showing a latitudinal gradient. With the onset of a regional dust storm at L(sub s) = 224 deg, ice clouds essentially vanished in the southern hemisphere, to reappear gradually after the decay of the storm. The thickest clouds (optical depth approx. 0.6) were associated with major volcanic features. At L(sub s) = 318 deg, the cloud at Ascraeus Mons was observed to disappear between 21:30 and 09:30, consistent with historically recorded diurnal behavior for clouds of this type. Limb observations showed extended optically thin (depth less than 0.04) stratiform clouds at altitudes up to 55 km. A water ice haze was present in the north polar night at altitudes up to 40 km; this probably provided heterogeneous nucleation sites for the formation of CO2 clouds at altitudes below the 1 mbar pressure level, where atmospheric temperatures dropped to the condensation point of CO2.

  9. Production experience with the ATLAS Event Service

    NASA Astrophysics Data System (ADS)

    Benjamin, D.; Calafiura, P.; Childers, T.; De, K.; Guan, W.; Maeno, T.; Nilsson, P.; Tsulaia, V.; Van Gemmeren, P.; Wenaus, T.; ATLAS Collaboration

    2017-10-01

    The ATLAS Event Service (AES) has been designed and implemented for efficient running of ATLAS production workflows on a variety of computing platforms, ranging from conventional Grid sites to opportunistic, often short-lived resources, such as spot market commercial clouds, supercomputers and volunteer computing. The Event Service architecture allows real time delivery of fine grained workloads to running payload applications which process dispatched events or event ranges and immediately stream the outputs to highly scalable Object Stores. Thanks to its agile and flexible architecture, the AES is currently being used by grid sites for assigning low-priority workloads to otherwise idle computing resources; similarly, for harvesting HPC resources in an efficient back-fill mode; and for massively scaling out to the 50-100k concurrent core level on the Amazon spot market to efficiently utilize those transient resources for peak production needs. Platform ports in development include ATLAS@Home (BOINC) and the Google Compute Engine, and a growing number of HPC platforms. After briefly reviewing the concept and the architecture of the Event Service, we will report the status and experience gained in AES commissioning and production operations on supercomputers, and our plans for extending ES application beyond Geant4 simulation to other workflows, such as reconstruction and data analysis.
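
    A schematic of fine-grained event-range dispatch (a toy illustration, not the actual AES/PanDA protocol or APIs): a dispatcher hands out small event ranges to workers, which process them and immediately stream each output to an object store, so the work lost when a transient resource disappears is bounded by one range.

      from queue import Queue, Empty

      def make_ranges(n_events: int, chunk: int):
          return [(i, min(i + chunk, n_events)) for i in range(0, n_events, chunk)]

      def worker(ranges: Queue, object_store: list):
          while True:
              try:
                  first, last = ranges.get_nowait()
              except Empty:
                  return
              payload = f"processed events [{first}, {last})"  # stand-in for simulation
              object_store.append(payload)                     # immediate fine-grained upload

      ranges = Queue()
      for r in make_ranges(1000, 50):
          ranges.put(r)
      store: list = []
      worker(ranges, store)          # a real deployment runs many workers in parallel
      print(len(store), "objects uploaded")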

  10. A Cloud-Based X73 Ubiquitous Mobile Healthcare System: Design and Implementation

    PubMed Central

    Ji, Zhanlin; O'Droma, Máirtín; Zhang, Xin; Zhang, Xueji

    2014-01-01

    Based on the user-centric paradigm for next generation networks, this paper describes a ubiquitous mobile healthcare (uHealth) system based on the ISO/IEEE 11073 personal health data (PHD) standards (X73) and cloud computing techniques. A number of design issues associated with the system implementation are outlined. The system includes a middleware on the user side, providing a plug-and-play environment for heterogeneous wireless sensors and mobile terminals utilizing different communication protocols, and a distributed “big data” processing subsystem in the cloud. The design and implementation of this system are envisaged as an efficient solution for the next generation of uHealth systems. PMID:24737958

  11. Application of physical adsorption thermodynamics to heterogeneous chemistry on polar stratospheric clouds

    NASA Technical Reports Server (NTRS)

    Elliott, Scott; Turco, Richard P.; Toon, Owen B.; Hamill, Patrick

    1991-01-01

    Laboratory isotherms for the binding of several nonheterogeneously active atmospheric gases and for HCl to water ice are translated into adsorptive equilibrium constants and surface enthalpies. Extrapolation to polar conditions through the Clausius Clapeyron relation yields coverage estimates below the percent level for N2, Ar, CO2, and CO, suggesting that the crystal faces of type II stratospheric cloud particles may be regarded as clean with respect to these species. For HCl, and perhaps HF and HNO3, estimates rise to several percent, and the adsorbed layer may offer acid or proton sources alternate to the bulk solid for heterogeneous reactions with stratospheric nitrates. Measurements are lacking for many key atmospheric molecules on water ice, and almost entirely for nitric acid trihydrate as substrate. Adsorptive equilibria enter into gas to particle mass flux descriptions, and the binding energy determines rates for desorption of, and encounter between, potential surface reactants.
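
    The extrapolation step described above follows directly from the Clausius-Clapeyron (van 't Hoff) form; in standard notation (illustrative, not quoted from the paper), with adsorption equilibrium constant K and adsorption enthalpy \Delta H_{ads}:

      \frac{\mathrm{d}\ln K}{\mathrm{d}(1/T)} = -\frac{\Delta H_{\mathrm{ads}}}{R}
      \quad\Longrightarrow\quad
      K(T) = K(T_0)\,\exp\!\left[-\frac{\Delta H_{\mathrm{ads}}}{R}\left(\frac{1}{T}-\frac{1}{T_0}\right)\right]

    Coverage estimates then follow from an isotherm, e.g. the Langmuir form \theta = K p / (1 + K p), which is how laboratory values of K(T_0) translate into the percent-level coverages quoted for polar stratospheric conditions.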

  12. A Service Brokering and Recommendation Mechanism for Better Selecting Cloud Services

    PubMed Central

    Gui, Zhipeng; Yang, Chaowei; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Yu, Manzhu; Sun, Min; Zhou, Nanyin; Jin, Baoxuan

    2014-01-01

    Cloud computing is becoming the new generation computing infrastructure, and many cloud vendors provide different types of cloud services. How to choose the best cloud services for specific applications is very challenging. Addressing this challenge requires balancing multiple factors, such as business demands, technologies, policies and preferences in addition to the computing requirements. This paper recommends a mechanism for selecting the best public cloud service at the levels of Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). A systematic framework and associated workflow include cloud service filtration, solution generation, evaluation, and selection of public cloud services. Specifically, we propose the following: a hierarchical information model for integrating heterogeneous cloud information from different providers and a corresponding cloud information collecting mechanism; a cloud service classification model for categorizing and filtering cloud services and an application requirement schema for providing rules for creating application-specific configuration solutions; and a preference-aware solution evaluation model for evaluating and recommending solutions according to the preferences of application providers. To test the proposed framework and methodologies, a cloud service advisory tool prototype was developed after which relevant experiments were conducted. The results show that the proposed system collects/updates/records the cloud information from multiple mainstream public cloud services in real-time, generates feasible cloud configuration solutions according to user specifications and acceptable cost prediction, assesses solutions from multiple aspects (e.g., computing capability, potential cost and Service Level Agreement, SLA) and offers rational recommendations based on user preferences and practical cloud provisioning; and visually presents and compares solutions through an interactive web Graphical User Interface (GUI). PMID:25170937

  13. Sensitivities of simulated satellite views of clouds to subgrid-scale overlap and condensate heterogeneity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hillman, Benjamin R.; Marchand, Roger T.; Ackerman, Thomas P.

    Satellite simulators are often used to account for limitations in satellite retrievals of cloud properties in comparisons between models and satellite observations. The purpose of the simulator framework is to enable more robust evaluation of model cloud properties, so that differences between models and observations can more confidently be attributed to model errors. However, these simulators are subject to uncertainties themselves. A fundamental uncertainty exists in connecting the spatial scales at which cloud properties are retrieved with those at which clouds are simulated in global models. In this study, we create a series of sensitivity tests using 4 km global model output from the Multiscale Modeling Framework to evaluate the sensitivity of simulated satellite retrievals when applied to climate models whose grid spacing is many tens to hundreds of kilometers. In particular, we examine the impact of cloud and precipitation overlap and of condensate spatial variability. We find the simulated retrievals are sensitive to these assumptions. Specifically, using maximum-random overlap with homogeneous cloud and precipitation condensate, which is often used in global climate models, leads to large errors in MISR and ISCCP-simulated cloud cover and in CloudSat-simulated radar reflectivity. To correct for these errors, an improved treatment of unresolved clouds and precipitation is implemented for use with the simulator framework and is shown to substantially reduce the identified errors.
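
    To make the overlap assumption concrete, the following sketch computes total cloud cover under the maximum-random overlap rule commonly used in global climate models (the Geleyn-Hollingsworth form; the layer fractions are illustrative):

```python
# Minimal sketch of maximum-random overlap: adjacent cloudy layers overlap
# maximally, while layers separated by clear air overlap randomly.

def total_cloud_cover_max_ran(fractions, eps=1e-12):
    """fractions: per-layer cloud fractions ordered top to bottom."""
    clear = 1.0     # probability a column is clear through all layers so far
    prev = 0.0
    for c in fractions:
        clear *= (1.0 - max(c, prev)) / max(1.0 - prev, eps)
        prev = c
    return 1.0 - clear

layers = [0.2, 0.3, 0.0, 0.4]                # two contiguous cloudy blocks
print(total_cloud_cover_max_ran(layers))     # 0.58 = 1 - (1-0.3)*(1-0.4)
```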

  14. Mid-Level Mixed-Phase Cloud Properties Derived From Polarization Lidar Measurements and Model Simulations

    NASA Astrophysics Data System (ADS)

    Sassen, K.; Canonica, L.; James, C.; Khvorostyanov, V.

    2005-12-01

    Water-dominated altocumulus clouds are distributed world-wide in the middle troposphere, and so are generally supercooled clouds with variable amounts of ice production via the heterogeneous droplet freezing process, which depends on temperature and the availability of ice nuclei. Although they tend to be relatively optically thin (i.e., for water clouds) and may often act similarly to cirrus clouds, altocumulus are globally widespread and probably play a significant role in maintaining the radiation balance of the Earth/atmosphere system. We will review recent cloud microphysical/radiative model findings describing their impact on radiation transfer, and how increasing ice content (leading to cloud glaciation) affects their radiative impact. These simulations are based on the results of a polarization lidar climatology of the macrophysical properties of midlatitude altocumulus clouds, which variably produced ice virga. A new, more advanced polarization lidar algorithm for characterizing mixed-phase cloud properties is currently being developed. Relative ice content is shown to have a large effect on atmospheric heating rates. We will also present lidar data examples, from Florida to Alaska, that indicate how desert dust and forest fire smoke aerosols can affect supercooled cloud phase. Since such aerosols may be becoming increasingly prevalent due to various human activities or climate change itself, it is important to assess the potential effects of increasing ice nuclei on climate change.

  15. Ice particle morphology and microphysical properties of cirrus clouds inferred from combined CALIOP-IIR measurements

    NASA Astrophysics Data System (ADS)

    Saito, Masanori; Iwabuchi, Hironobu; Yang, Ping; Tang, Guanglin; King, Michael D.; Sekiguchi, Miho

    2017-04-01

    Ice particle morphology and microphysical properties of cirrus clouds are essential for assessing radiative forcing associated with these clouds. We develop an optimal estimation-based algorithm to infer cirrus cloud optical thickness (COT), cloud effective radius (CER), plate fraction including quasi-horizontally oriented plates (HOPs), and the degree of surface roughness from the Cloud Aerosol Lidar with Orthogonal Polarization (CALIOP) and the Infrared Imaging Radiometer (IIR) on the Cloud Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) platform. A simple but realistic ice particle model is used, and the relevant bulk optical properties are computed using state-of-the-art light-scattering computational capabilities. Rigorous estimation of uncertainties related to surface properties, atmospheric gases, and cloud heterogeneity is performed. The results based on the present method show that COTs are quite consistent with other satellite products and CERs essentially agree with their counterparts. A 1-month global analysis for April 2007, in which the CALIPSO off-nadir angle is 0.3°, shows that the HOP fraction has a significant temperature dependence and is critical to the lidar ratio when cloud temperature is warmer than -40°C. The lidar ratio is calculated from the bulk optical properties based on the inferred parameters, showing robust temperature dependence. The median lidar ratio of cirrus clouds is 27-31 sr over the globe.
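
    The retrieval machinery referenced above is, in generic form, a Gauss-Newton iteration of Rodgers-style optimal estimation; the sketch below shows one such update step (generic notation and a stand-in state vector, not the authors' implementation):

```python
import numpy as np

# One Gauss-Newton iteration of a Rodgers-style optimal estimation retrieval.
# x: current state (e.g., COT, CER, plate fraction, roughness); x_a: prior;
# y: measurement vector; F: forward-model output F(x); K: Jacobian dF/dx;
# S_a, S_e: prior and measurement-error covariance matrices.

def oe_step(x, x_a, y, F, K, S_a, S_e):
    S_a_inv = np.linalg.inv(S_a)
    S_e_inv = np.linalg.inv(S_e)
    hessian = K.T @ S_e_inv @ K + S_a_inv           # curvature of the cost
    rhs = K.T @ S_e_inv @ (y - F + K @ (x - x_a))   # gradient-side term
    x_new = x_a + np.linalg.solve(hessian, rhs)     # updated state estimate
    s_hat = np.linalg.inv(hessian)                  # posterior covariance
    return x_new, s_hat
```

    Iterating this step until convergence yields both the retrieved state and its posterior covariance, which is how the uncertainty estimates described above are obtained in this framework.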

  16. Developing Verification Systems for Building Information Models of Heritage Buildings with Heterogeneous Datasets

    NASA Astrophysics Data System (ADS)

    Chow, L.; Fai, S.

    2017-08-01

    The digitization and abstraction of existing buildings into building information models requires the translation of heterogeneous datasets that may include CAD, technical reports, historic texts, archival drawings, terrestrial laser scanning, and photogrammetry into model elements. In this paper, we discuss a project undertaken by the Carleton Immersive Media Studio (CIMS) that explored the synthesis of heterogeneous datasets for the development of a building information model (BIM) for one of Canada's most significant heritage assets - the Centre Block of the Parliament Hill National Historic Site. The scope of the project included the development of an as-found model of the century-old, six-story building in anticipation of specific model uses for an extensive rehabilitation program. The as-found Centre Block model was developed in Revit using primarily point cloud data from terrestrial laser scanning. The data was captured by CIMS in partnership with Heritage Conservation Services (HCS), Public Services and Procurement Canada (PSPC), using a Leica C10 and P40 (exterior and large interior spaces) and a Faro Focus (small to mid-sized interior spaces). Secondary sources such as archival drawings, photographs, and technical reports were referenced in cases where point cloud data was not available. As a result of working with heterogeneous data sets, a verification system was introduced in order to communicate to model users/viewers the source of information for each building element within the model.

  17. Aerosol-cloud interactions in Arctic mixed-phase stratocumulus

    NASA Astrophysics Data System (ADS)

    Solomon, A.

    2017-12-01

    Reliable climate projections require realistic simulations of Arctic cloud feedbacks. Of particular importance is accurately simulating Arctic mixed-phase stratocumuli (AMPS), which are ubiquitous and play an important role in regional climate due to their impact on the surface energy budget and atmospheric boundary layer structure through cloud-driven turbulence, radiative forcing, and precipitation. AMPS are challenging to model due to uncertainties in ice microphysical processes that determine phase partitioning between ice and radiatively important cloud liquid water. Since temperatures in AMPS are too warm for homogeneous ice nucleation, ice must form through heterogeneous nucleation. In this presentation we discuss a relatively unexplored source of ice production: the recycling of ice nuclei (IN) in regions of ice subsaturation. AMPS frequently have ice-subsaturated air near the cloud-driven mixed-layer base, where falling ice crystals can sublimate, leaving behind IN. This study provides an idealized framework to understand feedbacks between dynamics and microphysics that maintain phase partitioning in AMPS. In addition, the results of this study provide insight into the mechanisms and feedbacks that may maintain cloud ice in AMPS even when entrainment of IN at the mixed-layer boundaries is weak.

  18. Study of In-Trap Ion Clouds by Ion Trajectory Simulations.

    PubMed

    Zhou, Xiaoyu; Liu, Xinwei; Cao, Wenbo; Wang, Xiao; Li, Ming; Qiao, Haoxue; Ouyang, Zheng

    2018-02-01

    The Gaussian distribution has been utilized to describe the global number density distribution of the ion cloud in a Paul trap, which is known as the thermal equilibrium theory and is widely used in theoretical modeling of ion clouds in ion traps. Using ion trajectory simulations, however, the ion cloud can now also be treated as a dynamic ion flow field, and location-dependent features can be characterized. This study was carried out to better understand in-trap ion cloud properties, such as the local particle velocity and temperature. The local ion number densities were found to be heterogeneously distributed in terms of mean and distribution width; the velocity and temperature of the ion flow varied with pressure depending on the flow type of the neutral molecules; and the "quasi-static" equilibrium status can only be achieved after a certain number of collisions, for which the time period is pressure-dependent. This work provides new insights into ion clouds that are globally stable but subject to local rf heating and collisional cooling.

  19. Application-oriented offloading in heterogeneous networks for mobile cloud computing

    NASA Astrophysics Data System (ADS)

    Tseng, Fan-Hsun; Cho, Hsin-Hung; Chang, Kai-Di; Li, Jheng-Cong; Shih, Timothy K.

    2018-04-01

    Nowadays, Internet applications have become so complicated that a mobile device needs more computing resources for shorter execution time, but it is restricted by limited battery capacity. Mobile cloud computing (MCC) has emerged to tackle the finite resource problem of mobile devices. MCC offloads the tasks and jobs of mobile devices to cloud and fog environments by using an offloading scheme. It is vital to MCC to decide which tasks should be offloaded and how to offload them efficiently. In this paper, we formulate the offloading problem between mobile device and cloud data center and propose two application-oriented algorithms for minimum execution time, i.e. the Minimum Offloading Time for Mobile device (MOTM) algorithm and the Minimum Execution Time for Cloud data center (METC) algorithm. The MOTM algorithm minimizes offloading time by selecting appropriate offloading links based on application categories. The METC algorithm minimizes execution time in the cloud data center by selecting virtual and physical machines with the resource requirements corresponding to each application. Simulation results show that the proposed mechanism not only minimizes total execution time for mobile devices but also decreases their energy consumption.
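
    The underlying trade-off can be sketched as follows (illustrative only, not the MOTM/METC algorithms themselves; all names and numbers are made up): a task is offloaded only when transfer time over a link plus remote execution time beats local execution:

```python
# Toy offloading decision: compare local execution against every combination
# of offloading link and cloud machine, and pick the fastest placement.

def best_placement(task, links, machines, local_speed):
    local_time = task["cycles"] / local_speed
    best = ("local", local_time)
    for link in links:                       # candidate offloading links
        transfer = task["data_bits"] / link["bandwidth_bps"]
        for m in machines:                   # candidate cloud VMs/PMs
            remote = transfer + task["cycles"] / m["speed"]
            if remote < best[1]:
                best = (f"{link['name']}->{m['name']}", remote)
    return best

task = {"cycles": 4e9, "data_bits": 8e6}
links = [{"name": "wifi", "bandwidth_bps": 50e6},
         {"name": "lte", "bandwidth_bps": 20e6}]
machines = [{"name": "vm1", "speed": 16e9}, {"name": "vm2", "speed": 8e9}]
print(best_placement(task, links, machines, local_speed=2e9))
# local: 2.0 s; wifi->vm1: 0.16 + 0.25 = 0.41 s, so offloading wins here
```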

  20. Development of a Cloud Resolving Model for Heterogeneous Supercomputers

    NASA Astrophysics Data System (ADS)

    Sreepathi, S.; Norman, M. R.; Pal, A.; Hannah, W.; Ponder, C.

    2017-12-01

    A cloud resolving climate model is needed to reduce major systematic errors in climate simulations due to structural uncertainty in numerical treatments of convection - such as convective storm systems. This research describes the porting effort to enable the SAM (System for Atmosphere Modeling) cloud resolving model to run on heterogeneous supercomputers using GPUs (Graphical Processing Units). We have isolated a standalone configuration of SAM that is targeted to be integrated into the DOE ACME (Accelerated Climate Modeling for Energy) Earth System model. We have identified key computational kernels from the model and offloaded them to a GPU using the OpenACC programming model. Furthermore, we are investigating various optimization strategies intended to enhance GPU utilization, including loop fusion/fission, coalesced data access and loop refactoring to a higher abstraction level. We will present early performance results, lessons learned, and optimization strategies. The computational platform used in this study is the Summitdev system, an early testbed that is one generation removed from Summit, the next leadership-class supercomputer at Oak Ridge National Laboratory. The system contains 54 nodes, wherein each node has 2 IBM POWER8 CPUs and 4 NVIDIA Tesla P100 GPUs. This work is part of a larger project, the ACME-MMF component of the U.S. Department of Energy (DOE) Exascale Computing Project. The ACME-MMF approach addresses structural uncertainty in cloud processes by replacing traditional parameterizations with cloud resolving "superparameterization" within each grid cell of the global climate model. Superparameterization dramatically increases arithmetic intensity, making the MMF approach an ideal strategy to achieve good performance on emerging exascale computing architectures. The goal of the project is to integrate superparameterization into ACME, and explore its full potential to scientifically and computationally advance climate simulation and prediction.

  1. The free radical chemistry of cloud droplets and its impact upon the composition of rain

    NASA Technical Reports Server (NTRS)

    Chameides, W. L.; Davis, D. D.

    1982-01-01

    Calculations are presented that simulate the free radical chemistries of the gas phase and aqueous phase within a warm cloud during midday. It is demonstrated that in the presence of midday solar fluxes, the heterogeneous scavenging of OH and HO2 from the gas phase by cloud droplets can represent a major source of free radicals to cloud water, provided the accommodation or sticking coefficient for these species impinging upon water droplets is not less than 0.0001. The scavenged HO2 radicals are found to be converted to H2O2 by aqueous-phase chemical reactions at a rate that suggests that this mechanism could produce a significant fraction of the H2O2 found in cloud droplets. The rapid oxidation of sulfur species dissolved in cloudwater by this free-radical-produced H2O2, as well as by aqueous-phase OH radicals, could conceivably have a significant impact upon the chemical composition of rain.
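
    The scavenging-rate argument above rests on the kinetic (free-molecular) flux of gas molecules to the droplet surface, scaled by the accommodation coefficient; a standard form (not taken from the paper itself) is:

```latex
% Flux of gas molecules to a droplet surface, scaled by the accommodation
% (sticking) coefficient \alpha; n_g is the gas-phase number density and
% \bar{v} the mean molecular speed of a molecule of mass m.
\Phi = \frac{\alpha\,\bar{v}}{4}\, n_g,
\qquad
\bar{v} = \sqrt{\frac{8 k_B T}{\pi m}}
% The abstract's condition corresponds to \alpha \geq 10^{-4}.
```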

  2. The Impact of the Aerosol Direct Radiative Forcing on Deep Convection and Air Quality in the Pearl River Delta Region

    NASA Astrophysics Data System (ADS)

    Liu, Z.; Yim, Steve H. L.; Wang, C.; Lau, N. C.

    2018-05-01

    The literature has reported a remarkable impact of aerosol direct radiative forcing (DRF) on low-level cloud. Impacts on middle-upper troposphere cloud are not yet fully understood, even though this knowledge is important for regions with a large spatial heterogeneity of emissions and aerosol concentration. We assess the aerosol DRF and its cloud response in June (with strong convection) in the Pearl River Delta region for 2008-2012 at cloud-resolving scale using an air quality-climate coupled model. Aerosols suppress deep convection by increasing atmospheric stability, leading to less evaporation from the ground. The relative humidity is reduced in the middle-upper troposphere due to the induced reduction in both evaporation from the ground and upward motion. The cloud reduction offsets 20% of the aerosol DRF. The weaker vertical mixing further increases surface aerosol concentration by up to 2.90 μg/m3. These findings indicate that the aerosol DRF impacts deep convection and, in turn, regional air quality.

  3. Scheduling Multilevel Deadline-Constrained Scientific Workflows on Clouds Based on Cost Optimization

    DOE PAGES

    Malawski, Maciej; Figiela, Kamil; Bubak, Marian; ...

    2015-01-01

    This paper presents a cost optimization model for scheduling scientific workflows on IaaS clouds such as Amazon EC2 or RackSpace. We assume multiple IaaS clouds with heterogeneous virtual machine instances, with a limited number of instances per cloud and hourly billing. Input and output data are stored on a cloud object store such as Amazon S3. Applications are scientific workflows modeled as DAGs, as in the Pegasus Workflow Management System. We assume that tasks in the workflows are grouped into levels of identical tasks. Our model is specified using mathematical programming languages (AMPL and CMPL) and allows us to minimize the cost of workflow execution under deadline constraints. We present results obtained using our model and the benchmark workflows representing real scientific applications in a variety of domains. The data used for evaluation come from synthetic workflows and from general purpose cloud benchmarks, as well as from data measured in our own experiments with Montage, an astronomical application, executed on the Amazon EC2 cloud. We indicate how this model can be used for scenarios that require resource planning for scientific workflows and their ensembles.
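
    A toy instance of this kind of deadline-constrained cost minimization can be written with PuLP standing in for AMPL/CMPL (one level of identical tasks; the instance types, prices, throughputs, and deadline are invented for illustration):

```python
import pulp

# Toy model: pick how many VMs of each type to rent, billed hourly for the
# whole deadline window, so that all tasks finish on time at minimum cost.
tasks = 100
deadline_h = 4
vm_types = {"small": {"price": 0.1, "tasks_per_h": 5},
            "large": {"price": 0.4, "tasks_per_h": 22}}

prob = pulp.LpProblem("workflow_cost", pulp.LpMinimize)
n = {v: pulp.LpVariable(f"n_{v}", lowBound=0, cat="Integer") for v in vm_types}

# Objective: total rental cost over the deadline window (hourly billing).
prob += pulp.lpSum(n[v] * vm_types[v]["price"] * deadline_h for v in vm_types)
# Constraint: provisioned capacity must process all tasks before the deadline.
prob += pulp.lpSum(n[v] * vm_types[v]["tasks_per_h"] * deadline_h
                   for v in vm_types) >= tasks

prob.solve()
print({v: int(n[v].value()) for v in vm_types},
      "cost:", pulp.value(prob.objective))
```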

  4. Utility functions and resource management in an oversubscribed heterogeneous computing environment

    DOE PAGES

    Khemka, Bhavesh; Friese, Ryan; Briceno, Luis Diego; ...

    2014-09-26

    We model an oversubscribed heterogeneous computing system where tasks arrive dynamically and a scheduler maps the tasks to machines for execution. The environment and workloads are based on those being investigated by the Extreme Scale Systems Center at Oak Ridge National Laboratory. Utility functions that are designed based on specifications from the system owner and users are used to create a metric for the performance of resource allocation heuristics. Each task has a time-varying utility (importance) that the enterprise will earn based on when the task successfully completes execution. We design multiple heuristics, which include a technique to drop low utility-earning tasks, to maximize the total utility that can be earned by completing tasks. The heuristics are evaluated using simulation experiments with two levels of oversubscription. The results show the benefit of having fast heuristics that account for the importance of a task and the heterogeneity of the environment when making allocation decisions in an oversubscribed environment. Furthermore, the ability to drop low utility-earning tasks allows the heuristics to tolerate the high oversubscription as well as earn significant utility.
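
    A minimal sketch of a utility-aware dispatch heuristic with dropping (illustrative only, not the paper's heuristics; utilities decay linearly here, and per-machine runtimes capture the heterogeneity) might look like:

```python
import heapq

# Each task's utility decays with time at its own rate; tasks whose expected
# utility at completion falls below a threshold are dropped rather than run.

def utility_at(task, t):
    return max(0.0, task["u0"] - task["decay"] * t)

def dispatch(tasks, machine_free_at, drop_threshold=1.0):
    earned, dropped = 0.0, []
    for task in sorted(tasks, key=lambda k: -k["u0"]):   # high utility first
        free_t, m = heapq.heappop(machine_free_at)       # earliest-free machine
        finish = max(free_t, task["arrival"]) + task["runtime"][m]
        u = utility_at(task, finish)
        if u < drop_threshold:               # not worth the machine time
            dropped.append(task["id"])
            heapq.heappush(machine_free_at, (free_t, m))
        else:
            earned += u
            heapq.heappush(machine_free_at, (finish, m))
    return earned, dropped

tasks = [{"id": 1, "u0": 10, "decay": 0.5, "arrival": 0, "runtime": {0: 4, 1: 8}},
         {"id": 2, "u0": 3, "decay": 1.0, "arrival": 0, "runtime": {0: 6, 1: 2}},
         {"id": 3, "u0": 2, "decay": 2.0, "arrival": 0, "runtime": {0: 9, 1: 9}}]
machines = [(0.0, 0), (0.0, 1)]   # (time machine becomes free, machine id)
heapq.heapify(machines)
print(dispatch(tasks, machines))  # task 3 is dropped; tasks 1 and 2 earn utility
```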

  5. Combined virtual and real robotic test-bed for single operator control of multiple robots

    NASA Astrophysics Data System (ADS)

    Lee, Sam Y.-S.; Hunt, Shawn; Cao, Alex; Pandya, Abhilash

    2010-04-01

    Teams of heterogeneous robots with different dynamics or capabilities could perform a variety of tasks such as multipoint surveillance, cooperative transport and exploration in hazardous environments. In this study, we work with a heterogeneous team of semi-autonomous ground and aerial robots for contaminant localization. We developed a human interface system which linked every real robot to its virtual counterpart. A novel virtual interface has been integrated with Augmented Reality that can monitor the position and sensory information from the video feeds of ground and aerial robots in the 3D virtual environment, and improve user situational awareness. An operator can efficiently control the real multi-robots using the Drag-to-Move method on the virtual multi-robots. This enables an operator to control groups of heterogeneous robots in a collaborative way, allowing more contaminant sources to be pursued simultaneously. An advanced feature of the virtual interface system is guarded teleoperation, which can be used to prevent operators from accidentally driving multiple robots into walls and other objects. Moreover, the image guidance and tracking feature reduces operator workload.

  6. Investigating ice nucleation in cirrus clouds with an aerosol-enabled Multiscale Modeling Framework

    DOE PAGES

    Zhang, Chengzhu; Wang, Minghuai; Morrison, H.; ...

    2014-11-06

    In this study, an aerosol-dependent ice nucleation scheme [Liu and Penner, 2005] has been implemented in an aerosol-enabled multi-scale modeling framework (PNNL MMF) to study ice formation in upper troposphere cirrus clouds through both homogeneous and heterogeneous nucleation. The MMF model represents cloud scale processes by embedding a cloud-resolving model (CRM) within each vertical column of a GCM grid. By explicitly linking ice nucleation to aerosol number concentration, CRM-scale temperature, relative humidity and vertical velocity, the new MMF model simulates the persistent high ice supersaturation and low ice number concentration (10 to 100/L) at cirrus temperatures. The low ice number is attributed to the dominance of heterogeneous nucleation in ice formation. The new model simulates the observed shift of the ice supersaturation PDF towards higher values at low temperatures following the homogeneous nucleation threshold. The MMF models predict a higher frequency of midlatitude supersaturation in the Southern hemisphere and winter hemisphere, which is consistent with previous satellite and in-situ observations. It is shown that compared to a conventional GCM, the MMF is a more powerful model to emulate parameters that evolve over short time scales such as supersaturation. Sensitivity tests suggest that the simulated global distribution of ice clouds is sensitive to the ice nucleation schemes and the distribution of sulfate and dust aerosols. Simulations are also performed to test empirical parameters related to auto-conversion of ice crystals to snow. Results show that with a value of 250 μm for the critical diameter, Dcs, that distinguishes ice crystals from snow, the model can produce good agreement with the satellite retrieved products in terms of cloud ice water path and ice water content, while the total ice water is not sensitive to the specification of the Dcs value.

  7. Sedimentation Efficiency of Condensation Clouds in Substellar Atmospheres

    NASA Astrophysics Data System (ADS)

    Gao, Peter; Marley, Mark S.; Ackerman, Andrew S.

    2018-03-01

    Condensation clouds in substellar atmospheres have been widely inferred from spectra and photometric variability. Up until now, their horizontally averaged vertical distribution and mean particle size have been largely characterized using models, one of which is the eddy diffusion-sedimentation model from Ackerman and Marley that relies on a sedimentation efficiency parameter, f_sed, to determine the vertical extent of clouds in the atmosphere. However, the physical processes controlling the vertical structure of clouds in substellar atmospheres are not well understood. In this work, we derive trends in f_sed across a large range of eddy diffusivities (K_zz), gravities, material properties, and cloud formation pathways by fitting cloud distributions calculated by a more detailed cloud microphysics model. We find that f_sed is dependent on K_zz, but not gravity, when K_zz is held constant. f_sed is most sensitive to the nucleation rate of cloud particles, as determined by material properties like surface energy and molecular weight. High surface energy materials form fewer, larger cloud particles, leading to large f_sed (>1), and vice versa for materials with low surface energy. For cloud formation via heterogeneous nucleation, f_sed is sensitive to the condensation nuclei flux and radius, connecting cloud formation in substellar atmospheres to the objects' formation environments and other atmospheric aerosols. These insights could lead to improved cloud models that help us better understand substellar atmospheres. For example, we demonstrate that f_sed could increase with increasing cloud base depth in an atmosphere, shedding light on the nature of the brown dwarf L/T transition.
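
    For context, the Ackerman and Marley balance referred to above is commonly written as a competition between upward eddy mixing and sedimentation (a standard form with generic symbols; not a reproduction of this paper's equations):

```latex
% Eddy diffusion-sedimentation balance for the total (vapor + condensate)
% mixing ratio q_t and condensate mixing ratio q_c; K_zz is the eddy
% diffusivity and w_* a convective velocity scale.
-K_{zz}\frac{\partial q_t}{\partial z} - f_{\mathrm{sed}}\, w_*\, q_c = 0,
\qquad
w_* \equiv \frac{K_{zz}}{L}
% with L a convective mixing length (often comparable to the scale height);
% larger f_sed means faster settling and thinner, more compact cloud decks.
```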

  8. Managing Power Heterogeneity

    NASA Astrophysics Data System (ADS)

    Pruhs, Kirk

    A particularly important emergent technology is heterogeneous processors (or cores), which many computer architects believe will be the dominant architectural design in the future. The main advantage of a heterogeneous architecture, relative to an architecture of identical processors, is that it allows for the inclusion of processors whose design is specialized for particular types of jobs, and for jobs to be assigned to a processor best suited for that job. Most notably, it is envisioned that these heterogeneous architectures will consist of a small number of high-power high-performance processors for critical jobs, and a larger number of lower-power lower-performance processors for less critical jobs. Naturally, the lower-power processors would be more energy efficient in terms of the computation performed per unit of energy expended, and would generate less heat per unit of computation. For a given area and power budget, heterogeneous designs can give significantly better performance for standard workloads. Moreover, even processors that were designed to be homogeneous are increasingly likely to be heterogeneous at run time: the dominant underlying cause is the increasing variability in the fabrication process as the feature size is scaled down (although run-time faults will also play a role). Since manufacturing yields would be unacceptably low if every processor/core were required to be perfect, and since there would be significant performance loss from derating the entire chip to the functioning of the least functional processor (which is what would be required in order to attain processor homogeneity), some processor heterogeneity seems inevitable in chips with many processors/cores.

  9. Partitioning of ice nucleating particles: Which modes matter?

    NASA Astrophysics Data System (ADS)

    Hande, Luke; Hoose, Corinna

    2017-04-01

    Ice particles in clouds have a large impact on cloud lifetime, precipitation amount, and cloud radiative properties through the indirect aerosol effect. Thus, correctly modelling ice formation processes is important for simulations performed on all spatial and temporal scales. Ice forms on aerosol particles through several different mechanisms, namely deposition nucleation, immersion freezing, and contact freezing. However, there is conflicting evidence as to which mode dominates, and the relative importance of the three heterogeneous ice nucleation mechanisms, as well as homogeneous nucleation, remains an open question. The environmental conditions, and hence the cloud type, have a large impact on determining which nucleation mode dominates. In order to understand this, simulations were performed with the COSMO-LES model, utilising state-of-the-art parameterisations to describe the different nucleation mechanisms for several semi-idealised cloud types commonly occurring over central Europe. The cloud types investigated include a semi-idealised and an idealised convective cloud, an orographic cloud, and a stratiform cloud. Results show that immersion and contact freezing dominate at warmer temperatures and, under most conditions, deposition nucleation plays only a minor role. In clouds where sufficiently high levels of water vapour are present at colder temperatures, deposition nucleation can play a role; however, in general homogeneous nucleation dominates at colder temperatures. Since contact nucleation depends on the environmental relative humidity, enhancements in this nucleation mode can be seen in areas of dry air entrainment. The results indicate that ice microphysical processes are somewhat sensitive to the environmental conditions and therefore the cloud type.

  10. Porting AMG2013 to Heterogeneous CPU+GPU Nodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samfass, Philipp

    LLNL's future advanced technology system SIERRA will feature heterogeneous compute nodes that consist of IBM POWER9 CPUs and NVIDIA Volta GPUs. Conceptually, the motivation for such an architecture is quite straightforward: While GPUs are optimized for throughput on massively parallel workloads, CPUs strive to minimize latency for rather sequential operations. Yet, making optimal use of heterogeneous architectures raises new challenges for the development of scalable parallel software, e.g., with respect to work distribution. Porting LLNL's parallel numerical libraries to upcoming heterogeneous CPU+GPU architectures is therefore a critical factor for ensuring LLNL's future success in fulfilling its national mission. One of these libraries, called HYPRE, provides parallel solvers and preconditioners for large, sparse linear systems of equations. In the context of this internship project, I consider AMG2013, which is a proxy application for major parts of HYPRE that implements a benchmark for setting up and solving different systems of linear equations. In the following, I describe in detail how I ported multiple parts of AMG2013 to the GPU (Section 2) and present results for different experiments that demonstrate a successful parallel implementation on the heterogeneous machines surface and ray (Section 3). In Section 4, I give guidelines on how my code should be used. Finally, I conclude and give an outlook for future work (Section 5).

  11. OpenID Connect as a security service in cloud-based medical imaging systems

    PubMed Central

    Ma, Weina; Sartipi, Kamran; Sharghigoorabi, Hassan; Koff, David; Bak, Peter

    2016-01-01

    The evolution of cloud computing is driving the next generation of medical imaging systems. However, privacy and security concerns have been consistently regarded as the major obstacles to the adoption of cloud computing by healthcare domains. OpenID Connect, combining OpenID and OAuth together, is an emerging representational state transfer-based federated identity solution. It is one of the most adopted open standards and could potentially become the de facto standard for securing cloud computing and mobile applications, also regarded as the "Kerberos of the cloud." We introduce OpenID Connect as an authentication and authorization service in cloud-based diagnostic imaging (DI) systems, and propose enhancements that allow for incorporating this technology within distributed enterprise environments. The objective of this study is to offer solutions for secure sharing of medical images among diagnostic imaging repository (DI-r) and heterogeneous picture archiving and communication systems (PACS), as well as Web-based and mobile clients in the cloud ecosystem. The main objective is to use the OpenID Connect open-source single sign-on and authorization service in a user-centric manner, while deploying DI-r and PACS to private or community clouds should provide security levels equivalent to the traditional computing model. PMID:27340682
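
    For a sense of the mechanics, the sketch below shows the authorization-code exchange step of OpenID Connect (the endpoint, client identifiers, and redirect URI are placeholders; a production DI-r/PACS deployment would also validate the ID token's signature and claims):

```python
import requests

# Minimal sketch of the OpenID Connect authorization-code exchange.
# All endpoints and credentials below are hypothetical placeholders.
TOKEN_ENDPOINT = "https://idp.example.org/connect/token"

def exchange_code_for_tokens(code):
    resp = requests.post(TOKEN_ENDPOINT, data={
        "grant_type": "authorization_code",
        "code": code,                                   # code from the IdP redirect
        "redirect_uri": "https://pacs.example.org/callback",
        "client_id": "pacs-viewer",
        "client_secret": "placeholder-secret",          # placeholder only
    })
    resp.raise_for_status()
    tokens = resp.json()
    # The access token authorizes image requests against the DI-r/PACS APIs;
    # the ID token (a JWT) identifies the authenticated user.
    return tokens["access_token"], tokens["id_token"]
```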

  12. Insights on Chemistry of Mercury Species in Clouds over Northern China: Complexation and Adsorption.

    PubMed

    Li, Tao; Wang, Yan; Mao, Huiting; Wang, Shuxiao; Talbot, Robert W; Zhou, Ying; Wang, Zhe; Nie, Xiaoling; Qie, Guanghao

    2018-05-01

    Cloud effects on heterogeneous reactions of atmospheric mercury (Hg) are poorly understood due to limited knowledge of cloudwater Hg chemistry. Here we quantified Hg species in cloudwater at the summit of Mt. Tai in northern China. Total mercury (THg) and methylmercury (MeHg) in cloudwater were on average 70.5 and 0.15 ng L⁻¹, respectively, and particulate Hg (PHg) contributed two-thirds of THg. Chemical equilibrium modeling simulations suggested that Hg complexes with dissolved organic matter (DOM) dominated dissolved Hg (DHg) speciation, which was highly pH dependent. Hg concentrations and speciation were altered by cloud processing, during which significant positive correlations of PHg and MeHg with cloud droplet number concentration (N_d) were observed. Unlike the direct contribution to PHg from cloud scavenging of aerosol particles, abiotic DHg methylation was the most likely source of MeHg. Hg adsorption coefficients K_ad (5.9-362.7 L g⁻¹) exhibited an inverse-power relationship with cloud residue content. Morphology analyses indicated that, compared to mineral particles, fly ash particles could enhance Hg adsorption due to more abundant carbon binding sites on the surface. Severe particulate air pollution in northern China may bring substantial Hg into cloud droplets and impact atmospheric Hg geochemical cycling through aerosol-cloud interactions.

  13. Relationship between macroscopic and microphysical properties for mixed-phase and ice clouds over the Southern Ocean in ORCAS campaign

    NASA Astrophysics Data System (ADS)

    Diao, M.; Jensen, J. B.

    2017-12-01

    Mixed-phase and ice clouds play very important roles in regulating atmospheric radiation over the Southern Ocean. Previously, in-situ observations over this remote region were limited, and the few available observation-based analyses mainly focused on cloud microphysical properties. The relationship between macroscopic and microphysical properties for both mixed-phase and ice clouds has not been thoroughly investigated based on in-situ observations. In this work, the aircraft-based observations from the NSF O2/N2 Ratio and CO2 Airborne Southern Ocean (ORCAS) field campaign (Jan - Feb 2016) will be used to analyze cloud macroscopic properties on the microscale to mesoscale, including the distributions of cloud chord length, the patchiness of clouds, and the spatial ratios of adjacent cloud segments in mixed phase and pure ice phase. In addition, these macroscopic properties will be analyzed in relation to the relative humidity (RH) background, such as the average and maximum RH inside clouds, as well as the probability density function (PDF) of in-cloud RH. We found that clouds with larger horizontal scales are often associated with larger magnitudes of average and maximum in-cloud RH values. In addition, when decomposing the contributions from the spatial variabilities of water vapor and temperature to the variability of RH, the water vapor heterogeneities are found to have the most dominant impact on RH variability. Sensitivities of the cloud macroscopic and microphysical properties to the horizontal resolutions of the observations will be shown, including the impacts on the patchiness of clouds, cloud fraction, frequencies of ice supersaturation, and the PDFs of RH. These sensitivity analyses will provide useful information on the comparisons among multi-scale observations and simulations.
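
    The decomposition mentioned above follows from RH = e/e_s(T) together with the Clausius-Clapeyron relation (a standard linearization, not the authors' exact formulation):

```latex
% Relative humidity RH = e / e_s(T); since d\ln e_s/dT = L_v/(R_v T^2),
% linearizing separates the vapor and temperature contributions:
\frac{\delta \mathrm{RH}}{\mathrm{RH}} \approx \frac{\delta e}{e}
    - \frac{L_v}{R_v T^2}\,\delta T
% so vapor-pressure heterogeneity (\delta e) and temperature heterogeneity
% (\delta T) can be compared directly in their contributions to RH variability.
```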

  14. Physical and chemical properties of ice residuals during the 2013 and 2014 CLACE campaigns

    NASA Astrophysics Data System (ADS)

    Kupiszewski, Piotr; Weingartner, Ernest; Vochezer, Paul; Hammer, Emanuel; Gysel, Martin; Färber, Raphael; Fuchs, Claudia; Schnaiter, Martin; Baltensperger, Urs; Schmidt, Susan; Schneider, Johannes; Bigi, Alessandro; Toprak, Emre; Linke, Claudia; Klimach, Thomas

    2014-05-01

    The shortcomings in our understanding and, thus, representation of aerosol-cloud interactions are one of the major sources of uncertainty in climate model projections. Among the poorly understood processes is mixed-phase cloud formation via heterogeneous nucleation, and the subsequent spatial and temporal evolution of such clouds. Cloud glaciation augments precipitation formation, resulting in decreased cloud cover and lifetime, and affects cloud radiative properties. Meanwhile, the physical and chemical properties of atmospherically relevant ice nuclei (IN), the sub-population of aerosol particles which enable heterogeneous nucleation, are not well known. Extraction of ice residuals (IR) in mixed-phase clouds is a difficult task, requiring separation of the few small, freshly formed ice crystals (the IR within such crystals can be deemed representative of the original IN) not only from interstitial particles, but also from the numerous supercooled droplets which have aerodynamic diameters similar to those of the ice crystals. In order to address the difficulties with ice crystal sampling and IR extraction in mixed-phase clouds, the new Ice Selective Inlet (ISI) has been designed and deployed at the Jungfraujoch field site. Small ice crystals are selectively sampled via the inlet with simultaneous counting, sizing and imaging of hydrometeors contained in the cloud by a set of optical particle spectrometers, namely Welas optical particle counters (OPC) and a Particle Phase Discriminator (PPD). The heart of the ISI is a droplet evaporation unit with ice-covered inner walls, resulting in removal of droplets using the Wegener-Bergeron-Findeisen process, while transmitting a relatively high fraction of small ice crystals. The ISI was deployed in the winters of 2013 and 2014 at the high alpine Jungfraujoch site (3580 m.a.s.l) during the intensive CLACE field campaigns. The measurements focused on analysis of the physical and chemical characteristics of IR and the microphysical properties of mixed-phase clouds. A host of aerosol instrumentation was deployed downstream of the ISI, including a Grimm OPC and a scanning mobility particle sizer (SMPS) for number size distribution measurements, as well as a single particle mass spectrometer (ALABAMA; 2013 only), single particle soot photometers (SP2) and a Wideband Integrated Bioaerosol Sensor (WIBS-4) for analysis of the chemical composition, with particular focus on the content of black carbon (BC) and biological particles in IR. Corresponding instrumentation sampled through a total aerosol inlet. By comparing observations from the ISI with those from the total inlet the characteristics of ice residuals relative to the total aerosol could be established. First results from these analyses will be presented.

  15. Job Scheduling in a Heterogeneous Grid Environment

    NASA Technical Reports Server (NTRS)

    Shan, Hong-Zhang; Smith, Warren; Oliker, Leonid; Biswas, Rupak

    2004-01-01

    Computational grids have the potential for solving large-scale scientific problems using heterogeneous and geographically distributed resources. However, a number of major technical hurdles must be overcome before this potential can be realized. One problem that is critical to effective utilization of computational grids is the efficient scheduling of jobs. This work addresses this problem by describing and evaluating a grid scheduling architecture and three job migration algorithms. The architecture is scalable and does not assume control of local site resources. The job migration policies use the availability and performance of computer systems, the network bandwidth available between systems, and the volume of input and output data associated with each job. An extensive performance comparison is presented using real workloads from leading computational centers. The results, based on several key metrics, demonstrate that the performance of our distributed migration algorithms is significantly greater than that of a local scheduling framework and comparable to a non-scalable global scheduling approach.
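
    The migration policies described above can be caricatured as a per-site completion-time estimate that folds in queue wait, compute speed, and data movement over the inter-site link (a minimal sketch with invented numbers, not the paper's algorithms):

```python
# Toy job-migration decision: estimate completion time at each site and pick
# the minimum; moving a job pays a data-transfer penalty over the WAN link.

def estimated_completion(job, site, origin):
    transfer_s = 0.0
    if site["name"] != origin:
        gbits = (job["input_gb"] + job["output_gb"]) * 8
        transfer_s = gbits / site["bw_gbps_from"][origin]
    return site["queue_wait_s"] + transfer_s + job["work"] / site["speed"]

def pick_site(job, sites, origin):
    return min(sites, key=lambda s: estimated_completion(job, s, origin))["name"]

job = {"input_gb": 50, "output_gb": 10, "work": 3.6e13}   # work in FLOPs
sites = [
    {"name": "A", "queue_wait_s": 7200, "speed": 1e10, "bw_gbps_from": {}},
    {"name": "B", "queue_wait_s": 600, "speed": 2e10,
     "bw_gbps_from": {"A": 1.0}},
]
print(pick_site(job, sites, origin="A"))   # migrating to B wins despite transfer
```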

  16. Performance comparison of heuristic algorithms for task scheduling in IaaS cloud computing environment.

    PubMed

    Madni, Syed Hamid Hussain; Abd Latiff, Muhammad Shafie; Abdullahi, Mohammed; Abdulhamid, Shafi'i Muhammad; Usman, Mohammed Joda

    2017-01-01

    Cloud computing infrastructure is suitable for meeting the computational needs of large task sizes. Optimal scheduling of tasks in a cloud computing environment has been proved to be an NP-complete problem, hence the need for the application of heuristic methods. Several heuristic algorithms have been developed and used in addressing this problem, but choosing the appropriate algorithm for solving a task assignment problem of a particular nature is difficult since the methods are developed under different assumptions. Therefore, six rule-based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments with the aim of comparing their performance in terms of cost, degree of imbalance, makespan and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing.
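
    As one concrete example from the list above, the classic Min-min heuristic repeatedly assigns the task whose best-case completion time is smallest (a minimal sketch; the expected-time-to-compute matrix is illustrative):

```python
# Min-min: among unassigned tasks, pick the one whose minimum completion time
# over all machines is smallest, assign it there, and repeat.

def min_min(etc):
    """etc[t][m] = estimated execution time of task t on machine m."""
    ready = [0.0] * len(etc[0])                  # machine-ready times
    unassigned = set(range(len(etc)))
    schedule = []
    while unassigned:
        # best (completion time, machine) for each unassigned task
        best = {t: min((ready[m] + etc[t][m], m) for m in range(len(ready)))
                for t in unassigned}
        t, (ct, m) = min(best.items(), key=lambda kv: kv[1][0])
        ready[m] = ct
        schedule.append((t, m, ct))
        unassigned.remove(t)
    return schedule, max(ready)                  # schedule and makespan

etc = [[4, 8], [5, 3], [12, 6]]                  # 3 tasks x 2 machines
print(min_min(etc))                              # makespan 9 on this instance
```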

  17. Performance comparison of heuristic algorithms for task scheduling in IaaS cloud computing environment

    PubMed Central

    Madni, Syed Hamid Hussain; Abd Latiff, Muhammad Shafie; Abdullahi, Mohammed; Usman, Mohammed Joda

    2017-01-01

    Cloud computing infrastructure is suitable for meeting the computational needs of large task sizes. Optimal scheduling of tasks in a cloud computing environment has been proved to be an NP-complete problem, hence the need for the application of heuristic methods. Several heuristic algorithms have been developed and used in addressing this problem, but choosing the appropriate algorithm for solving a task assignment problem of a particular nature is difficult since the methods are developed under different assumptions. Therefore, six rule-based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments with the aim of comparing their performance in terms of cost, degree of imbalance, makespan and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing. PMID:28467505

  18. Federated and Cloud Enabled Resources for Data Management and Utilization

    NASA Astrophysics Data System (ADS)

    Rankin, R.; Gordon, M.; Potter, R. G.; Satchwill, B.

    2011-12-01

    The emergence of cloud computing over the past three years has led to a paradigm shift in how data can be managed, processed and made accessible. Building on the federated data management system offered through the Canadian Space Science Data Portal (www.cssdp.ca), we demonstrate how heterogeneous and geographically distributed data sets and modeling tools have been integrated to form a virtual data center and computational modeling platform that has services for data processing and visualization embedded within it. We also discuss positive and negative experiences in utilizing Eucalyptus and OpenStack cloud applications, and job scheduling facilitated by Condor and Star Cluster. We summarize our findings by demonstrating use of these technologies in the Cloud Enabled Space Weather Data Assimilation and Modeling Platform CESWP (www.ceswp.ca), which is funded through Canarie's (canarie.ca) Network Enabled Platforms program in Canada.

  19. Effects of pre-existing ice crystals on cirrus clouds and comparison between different ice nucleation parameterizations with the Community Atmosphere Model (CAM5)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Xiangjun; Liu, Xiaohong; Zhang, Kai

    In order to improve the treatment of ice nucleation in a more realistic manner in the Community Atmosphere Model version 5.3 (CAM5.3), the effects of pre-existing ice crystals on ice nucleation in cirrus clouds are considered. In addition, by considering the in-cloud variability in ice saturation ratio, homogeneous nucleation takes place spatially only in a portion of the cirrus cloud rather than in the whole area of the cirrus cloud. Compared to observations, the ice number concentrations and the probability distributions of ice number concentration are both improved with the updated treatment. The pre-existing ice crystals significantly reduce ice number concentrations in cirrus clouds, especially at mid- to high latitudes in the upper troposphere (by a factor of ~10). Furthermore, the contribution of heterogeneous ice nucleation to cirrus ice crystal number increases considerably. Besides the default ice nucleation parameterization of Liu and Penner (2005, hereafter LP) in CAM5.3, two other ice nucleation parameterizations of Barahona and Nenes (2009, hereafter BN) and Kärcher et al. (2006, hereafter KL) are implemented in CAM5.3 for the comparison. In-cloud ice crystal number concentration, percentage contribution from heterogeneous ice nucleation to total ice crystal number, and pre-existing ice effects simulated by the three ice nucleation parameterizations have similar patterns in the simulations with present-day aerosol emissions. However, the change (present-day minus pre-industrial times) in global annual mean column ice number concentration from the KL parameterization (3.24 × 10⁶ m⁻²) is less than that from the LP (8.46 × 10⁶ m⁻²) and BN (5.62 × 10⁶ m⁻²) parameterizations. As a result, the experiment using the KL parameterization predicts a much smaller anthropogenic aerosol long-wave indirect forcing (0.24 W m⁻²) than that using the LP (0.46 W m⁻²) and BN (0.39 W m⁻²) parameterizations.

  20. Effects of pre-existing ice crystals on cirrus clouds and comparison between different ice nucleation parameterizations with the Community Atmosphere Model (CAM5)

    DOE PAGES

    Shi, Xiangjun; Liu, Xiaohong; Zhang, Kai

    2015-02-11

    In order to improve the treatment of ice nucleation in a more realistic manner in the Community Atmosphere Model version 5.3 (CAM5.3), the effects of pre-existing ice crystals on ice nucleation in cirrus clouds are considered. In addition, by considering the in-cloud variability in ice saturation ratio, homogeneous nucleation takes place spatially only in a portion of the cirrus cloud rather than in the whole area of the cirrus cloud. Compared to observations, the ice number concentrations and the probability distributions of ice number concentration are both improved with the updated treatment. The pre-existing ice crystals significantly reduce ice number concentrations in cirrus clouds, especially at mid- to high latitudes in the upper troposphere (by a factor of ~10). Furthermore, the contribution of heterogeneous ice nucleation to cirrus ice crystal number increases considerably. Besides the default ice nucleation parameterization of Liu and Penner (2005, hereafter LP) in CAM5.3, two other ice nucleation parameterizations of Barahona and Nenes (2009, hereafter BN) and Kärcher et al. (2006, hereafter KL) are implemented in CAM5.3 for the comparison. In-cloud ice crystal number concentration, percentage contribution from heterogeneous ice nucleation to total ice crystal number, and pre-existing ice effects simulated by the three ice nucleation parameterizations have similar patterns in the simulations with present-day aerosol emissions. However, the change (present-day minus pre-industrial times) in global annual mean column ice number concentration from the KL parameterization (3.24 × 10⁶ m⁻²) is less than that from the LP (8.46 × 10⁶ m⁻²) and BN (5.62 × 10⁶ m⁻²) parameterizations. As a result, the experiment using the KL parameterization predicts a much smaller anthropogenic aerosol long-wave indirect forcing (0.24 W m⁻²) than that using the LP (0.46 W m⁻²) and BN (0.39 W m⁻²) parameterizations.

  1. Heterogeneous nucleation and its relationship to precipitation type. Technical memo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, G.

    1995-04-01

    The purpose of this study is to present important elements of cloud microphysics that will be useful to the operational meteorologist in determining precipitation type. Synoptic-scale environments and vertical atmospheric structures of cases where freezing precipitation occurred will be examined. Furthermore, only cases in which the entire depth of the troposphere was below freezing are studied. The absence of lower-tropospheric warm layers (above freezing) suggests that the primary atmospheric process that influenced precipitation type was heterogeneous nucleation rather than melting.

  2. Radiative-dynamical and microphysical processes of thin cirrus clouds controlling humidity of air entering the stratosphere

    NASA Astrophysics Data System (ADS)

    Dinh, Tra; Fueglistaler, Stephan

    2016-04-01

    Thin cirrus clouds in the tropical tropopause layer (TTL) are of great interest due to their role in the control of water vapor and temperature in the TTL. Previous research on TTL cirrus clouds has focussed mainly on microphysical processes, specifically the ice nucleation mechanism and dehydration efficiency. Here, we use a cloud resolving model to analyse the sensitivity of TTL cirrus characteristics and impacts with respect to microphysical and radiative processes. A steady-state TTL cirrus cloud field is obtained in the model forced with dynamical conditions typical for the TTL (2-dimensional setup with a Kelvin-wave temperature perturbation). Our model results show that the dehydration efficiency (as given by the domain average relative humidity in the layer of cloud occurrence) is relatively insensitive to the ice nucleation mechanism, i.e. homogeneous versus heterogeneous nucleation. Rather, TTL cirrus affect the water vapor entering the stratosphere via an indirect effect associated with the cloud radiative heating and dynamics. Resolving the cloud radiative heating and the radiatively induced circulations approximately doubles the domain average ice mass. The cloud radiative heating is proportional to the domain average ice mass, and the observed increase in domain average ice mass induces a domain average temperature increase of a few Kelvin. The corresponding increase in water vapor entering the stratosphere is estimated to be about 30 to 40%.

  3. Clarifying the dominant sources and mechanisms of cirrus cloud formation.

    PubMed

    Cziczo, Daniel J; Froyd, Karl D; Hoose, Corinna; Jensen, Eric J; Diao, Minghui; Zondlo, Mark A; Smith, Jessica B; Twohy, Cynthia H; Murphy, Daniel M

    2013-06-14

    Formation of cirrus clouds depends on the availability of ice nuclei to begin condensation of atmospheric water vapor. Although it is known that only a small fraction of atmospheric aerosols are efficient ice nuclei, the critical ingredients that make those aerosols so effective have not been established. We have determined in situ the composition of the residual particles within cirrus crystals after the ice was sublimated. Our results demonstrate that mineral dust and metallic particles are the dominant source of residual particles, whereas sulfate and organic particles are underrepresented, and elemental carbon and biological materials are essentially absent. Further, composition analysis combined with relative humidity measurements suggests that heterogeneous freezing was the dominant formation mechanism of these clouds.

  4. A cloud-ozone data product from Aura OMI and MLS satellite measurements

    NASA Astrophysics Data System (ADS)

    Ziemke, Jerald R.; Strode, Sarah A.; Douglass, Anne R.; Joiner, Joanna; Vasilkov, Alexander; Oman, Luke D.; Liu, Junhua; Strahan, Susan E.; Bhartia, Pawan K.; Haffner, David P.

    2017-11-01

    Ozone within deep convective clouds is controlled by several factors involving photochemical reactions and transport. Gas-phase photochemical reactions and heterogeneous surface chemical reactions involving ice, water particles, and aerosols inside the clouds all contribute to the distribution and net production and loss of ozone. Ozone in clouds is also dependent on convective transport that carries low-troposphere/boundary-layer ozone and ozone precursors upward into the clouds. Characterizing ozone in thick clouds is an important step for quantifying relationships of ozone with tropospheric H2O, OH production, and cloud microphysics/transport properties. Although measuring ozone in deep convective clouds from either aircraft or balloon ozonesondes is largely impossible due to extreme meteorological conditions associated with these clouds, it is possible to estimate ozone in thick clouds using backscattered solar UV radiation measured by satellite instruments. Our study combines Aura Ozone Monitoring Instrument (OMI) and Microwave Limb Sounder (MLS) satellite measurements to generate a new research product of monthly-mean ozone concentrations in deep convective clouds between 30° S and 30° N for October 2004-April 2016. These measurements represent mean ozone concentration primarily in the upper levels of thick clouds and reveal key features of cloud ozone including: persistent low ozone concentrations in the tropical Pacific of ~10 ppbv or less; concentrations of up to 60 ppbv or greater over landmass regions of South America, southern Africa, Australia, and India/east Asia; connections with tropical ENSO events; and intraseasonal/Madden-Julian oscillation variability. Analysis of OMI aerosol measurements suggests a cause and effect relation between boundary-layer pollution and elevated ozone inside thick clouds over landmass regions including southern Africa and India/east Asia.

  5. A Cloud-Ozone Data Product from Aura OMI and MLS Satellite Measurements.

    PubMed

    Ziemke, Jerald R; Strode, Sarah A; Douglass, Anne R; Joiner, Joanna; Vasilkov, Alexander; Oman, Luke D; Liu, Junhua; Strahan, Susan E; Bhartia, Pawan K; Haffner, David P

    2017-01-01

    Ozone within deep convective clouds is controlled by several factors involving photochemical reactions and transport. Gas-phase photochemical reactions and heterogeneous surface chemical reactions involving ice, water particles, and aerosols inside the clouds all contribute to the distribution and net production and loss of ozone. Ozone in clouds is also dependent on convective transport that carries low troposphere/boundary layer ozone and ozone precursors upward into the clouds. Characterizing ozone in thick clouds is an important step for quantifying relationships of ozone with tropospheric H2O, OH production, and cloud microphysics/transport properties. Although measuring ozone in deep convective clouds from either aircraft or balloon ozonesondes is largely impossible due to extreme meteorological conditions associated with these clouds, it is possible to estimate ozone in thick clouds using backscattered solar UV radiation measured by satellite instruments. Our study combines Aura Ozone Monitoring Instrument (OMI) and Microwave Limb Sounder (MLS) satellite measurements to generate a new research product of monthly-mean ozone concentrations in deep convective clouds between 30°S and 30°N for October 2004 - April 2016. These measurements represent mean ozone concentration primarily in the upper levels of thick clouds and reveal key features of cloud ozone including: persistent low ozone concentrations in the tropical Pacific of ~10 ppbv or less; concentrations of up to 60 ppbv or greater over landmass regions of South America, southern Africa, Australia, and India/east Asia; connections with tropical ENSO events; and intra-seasonal/Madden-Julian Oscillation variability. Analysis of OMI aerosol measurements suggests a cause and effect relation between boundary layer pollution and elevated ozone inside thick clouds over land-mass regions including southern Africa and India/east Asia.

  6. Global model comparison of heterogeneous ice nucleation parameterizations in mixed phase clouds

    NASA Astrophysics Data System (ADS)

    Yun, Yuxing; Penner, Joyce E.

    2012-04-01

    A new aerosol-dependent mixed phase cloud parameterization for deposition/condensation/immersion (DCI) ice nucleation and one for contact freezing are compared to the original formulations in a coupled general circulation model and aerosol transport model. The present-day cloud liquid and ice water fields and cloud radiative forcing are analyzed and compared to observations. The new DCI freezing parameterization changes the spatial distribution of the cloud water field. Significant changes are found in the cloud ice water fraction and in the middle cloud fractions. The new DCI freezing parameterization predicts less ice water path (IWP) than the original formulation, especially in the Southern Hemisphere. The smaller IWP leads to a less efficient Bergeron-Findeisen process resulting in a larger liquid water path, shortwave cloud forcing, and longwave cloud forcing. It is found that contact freezing parameterizations have a greater impact on the cloud water field and radiative forcing than the two DCI freezing parameterizations that we compared. The net solar flux at top of atmosphere and net longwave flux at the top of the atmosphere change by up to 8.73 and 3.52 W m⁻², respectively, due to the use of different DCI and contact freezing parameterizations in mixed phase clouds. The total climate forcing from anthropogenic black carbon/organic matter in mixed phase clouds is estimated to be 0.16-0.93 W m⁻² using the aerosol-dependent parameterizations. A sensitivity test with contact ice nuclei concentration in the original parameterization fit to that recommended by Young (1974) gives results that are closer to the new contact freezing parameterization.

  7. Temporal behavior of a solute cloud in a fractal heterogeneous porous medium at different scales

    NASA Astrophysics Data System (ADS)

    Ross, Katharina; Attinger, Sabine

    2010-05-01

    Water pollution remains a very real problem, and the need for efficient models of flow and solute transport in heterogeneous porous or fractured media is evident. In our study we focus on solute transport in heterogeneous fractured media, where the shape of the pores and fractures in the subsurface may be modeled as a fractal network or as a heterogeneous structure with infinite correlation length. The aim of this work is to derive explicit results for larger-scale, or effective, transport parameters in such structures. To describe flow and transport we investigate the temporal behavior of the transport coefficients of solute movement through a spatially heterogeneous medium. It is necessary to distinguish between two fundamentally different quantities characterizing solute dispersion: the effective dispersion coefficient Deff(t), which represents the physical (observable) dispersion in one given realization of the medium, and the mathematically simpler ensemble dispersion coefficient Dens(t), which characterizes the (abstract) dispersion with respect to the set of all possible realizations of the medium. In the framework of a stochastic approach, Dentz et al. (2000 I [2] & II [3]) derive explicit expressions for the temporal behavior of the center-of-mass velocity and the dispersion of the concentration distribution, using a second-order perturbation expansion. In their model the authors assume a finite correlation length of the heterogeneities and use a Gaussian correlation function. In a first step, we model the fractured medium as a heterogeneous porous medium with infinite correlation length and neglect single fractures. Zhan & Wheatcraft (1996) [4] analyze the macrodispersivity tensor in fractal porous media using a non-integer exponent composed of the Hurst coefficient and the fractal dimension D. To avoid this non-integer exponent for numerical reasons, we extend the study of Dentz et al. (2000 I [2] & II [3]) and derive explicit expressions for the center-of-mass velocity and the longitudinal dispersion coefficient for isotropic and anisotropic media, as well as for point-like injections (where the extent of the source distribution is small compared to the correlation lengths of the heterogeneities) and spatially extended injections. Our results clearly show that the difference between Deff and Dens persists for all times. In other words, ensemble mixing and effective mixing coefficients do not approach the same asymptotic limit: the center-of-mass fluctuations between different flow paths for a plume traveling through the medium never become irrelevant, and ergodicity breaks down in such media. Our ongoing work concerns the investigation of the transversal dispersion coefficient and the extension of the upscaling method of coarse graining [1] to heterogeneous fractal porous media with embedded single fractures. References: [1] Attinger, S. (2003): Generalized coarse graining procedures for flow in porous media, Computational Geosciences, 7 (4), pp. 253-273. [2] Dentz, M., Kinzelbach, H., Attinger, S. and W. Kinzelbach (2000): Temporal behavior of a solute cloud in a heterogeneous porous medium: 1. Point-like injection, Water Resources Research, 36 (12), pp. 3591-3604. [3] Dentz, M., Kinzelbach, H., Attinger, S. and W. Kinzelbach (2000): Temporal behavior of a solute cloud in a heterogeneous porous medium: 2. Spatially extended injection, Water Resources Research, 36 (12), pp. 3605-3614. [4] Zhan, H. and S. W. Wheatcraft (1996): Macrodispersivity tensor for nonreactive solute transport in isotropic and anisotropic fractal porous media: Analytical solutions, Water Resources Research, 32 (12), pp. 3461-3474.
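
    The distinction between the two coefficients has a compact form. As a sketch following the definitions of Dentz et al. (2000) [2, 3], with m_i(t) the center of mass and m_ij(t) the second moment of the concentration distribution in a single realization, and an overbar denoting the ensemble average:

    ```latex
    D_{ij}^{\mathrm{eff}}(t) = \frac{1}{2}\,\frac{\mathrm{d}}{\mathrm{d}t}\,
        \overline{m_{ij}(t) - m_i(t)\,m_j(t)}
    \qquad
    D_{ij}^{\mathrm{ens}}(t) = \frac{1}{2}\,\frac{\mathrm{d}}{\mathrm{d}t}
        \left( \overline{m_{ij}(t)} - \overline{m_i(t)}\,\overline{m_j(t)} \right)
    ```

    The gap between the two is carried entirely by the realization-to-realization fluctuations of the center of mass, which is why its persistence at all times implies the breakdown of ergodicity reported above.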

  8. Distributed data analysis in ATLAS

    NASA Astrophysics Data System (ADS)

    Nilsson, Paul; Atlas Collaboration

    2012-12-01

    Data analysis using grid resources is one of the fundamental challenges to be addressed before the start of LHC data taking. The ATLAS detector will produce petabytes of data per year, and roughly one thousand users will need to run physics analyses on these data. Appropriate user interfaces and helper applications have been made available to ensure that the grid resources can be used without requiring expertise in grid technology. These tools enlarge the number of grid users from a few production administrators to potentially all participating physicists. ATLAS makes use of three grid infrastructures for distributed analysis: the EGEE sites, the Open Science Grid, and NorduGrid. These grids are managed by the gLite workload management system, the PanDA workload management system, and ARC middleware; many sites can be accessed via both the gLite WMS and PanDA. Users can choose between two front-end tools to access the distributed resources. Ganga is a tool co-developed with LHCb to provide a common interface to the multitude of execution backends (local, batch, and grid). The PanDA workload management system provides a set of utilities called PanDA Client; with these tools users can easily submit Athena analysis jobs to the PanDA-managed resources. Distributed data are managed by Don Quijote 2 (DQ2), a system developed by ATLAS; DQ2 is used to replicate datasets according to the data distribution policies and maintains a central catalog of file locations. The operation of the grid resources is continually monitored by the GangaRobot functional testing system, and infrequent site stress tests are performed using the HammerCloud system. In addition, the DAST shift team is a group of power users who take shifts to provide distributed analysis user support; this team has effectively relieved the developers of the support burden.

  9. OpenID connect as a security service in Cloud-based diagnostic imaging systems

    NASA Astrophysics Data System (ADS)

    Ma, Weina; Sartipi, Kamran; Sharghi, Hassan; Koff, David; Bak, Peter

    2015-03-01

    The evolution of cloud computing is driving the next generation of diagnostic imaging (DI) systems. Cloud-based DI systems are able to deliver better services to patients without being constrained by their own physical facilities. However, privacy and security concerns have consistently been regarded as the major obstacle to the adoption of cloud computing in healthcare domains. Furthermore, the traditional computing models and interfaces employed by DI systems are not ready for accessing diagnostic images through mobile devices. RESTful web services are an ideal technology for provisioning both mobile services and cloud computing. OpenID Connect, which combines OpenID and OAuth, is an emerging REST-based federated identity solution. It is one of the most promising open standards to potentially become the de facto standard for securing cloud computing and mobile applications, and has even been described as the "Kerberos of the Cloud". We introduce OpenID Connect as an identity and authentication service in cloud-based DI systems and propose enhancements that allow this technology to be incorporated within a distributed enterprise environment. The objective of this study is to offer solutions for secure radiology image sharing among a DI-r (Diagnostic Imaging Repository), heterogeneous PACS (Picture Archiving and Communication Systems), and mobile clients in the cloud ecosystem. By using OpenID Connect as an open-source identity and authentication service, DI-r and PACS deployments on private or community clouds can attain a security level equivalent to that of the traditional computing model.
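
    For readers unfamiliar with the protocol, the sketch below shows the token-exchange step of the standard OpenID Connect authorization-code flow as a DI-r or PACS client might perform it. The issuer URL and client identifiers are hypothetical placeholders, and the parameter names follow the OIDC core specification rather than anything specific to this paper.

    ```python
    # Minimal sketch of the OpenID Connect authorization-code token exchange.
    # The identity-provider URL and client credentials are hypothetical.
    import requests

    ISSUER = "https://idp.example-hospital.org"  # hypothetical identity provider
    TOKEN_ENDPOINT = ISSUER + "/token"

    def exchange_code_for_tokens(auth_code: str, redirect_uri: str) -> dict:
        """Swap the one-time authorization code for an ID token and access token."""
        resp = requests.post(TOKEN_ENDPOINT, data={
            "grant_type": "authorization_code",
            "code": auth_code,
            "redirect_uri": redirect_uri,
            "client_id": "pacs-mobile-client",      # hypothetical client id
            "client_secret": "<client-secret>",     # kept server-side in practice
        })
        resp.raise_for_status()
        tokens = resp.json()
        # tokens["id_token"] is a signed JWT identifying the user;
        # tokens["access_token"] authorizes REST calls to the imaging services.
        return tokens
    ```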

  10. Laboratory and Cloud Chamber Studies of Formation Processes and Properties of Atmospheric Ice Particles

    NASA Astrophysics Data System (ADS)

    Leisner, T.; Abdelmonem, A.; Benz, S.; Brinkmann, M.; Möhler, O.; Rzesanke, D.; Saathoff, H.; Schnaiter, M.; Wagner, R.

    2009-04-01

    The formation of ice in tropospheric clouds controls the evolution of precipitation and thereby influences climate and weather via a complex network of dynamical and microphysical processes. At higher altitudes, ice particles in cirrus clouds or contrails modify the radiative energy budget by direct interaction with shortwave and longwave radiation. In order to improve the parameterisation of the complex microphysical and dynamical processes leading to and controlling the evolution of tropospheric ice, laboratory experiments are performed at the IMK Karlsruhe both on a single-particle level and in the aerosol and cloud chamber AIDA. Single-particle experiments in electrodynamic levitation lend themselves to the study of the interaction between cloud droplets and aerosol particles under extremely well characterized and static conditions, in order to obtain microphysical parameters such as freezing nucleation rates for homogeneous and heterogeneous ice formation. They also allow the observation of freezing dynamics and of secondary ice formation and multiplication processes under controlled conditions and with very high spatial and temporal resolution. The inherent droplet charge in these experiments can be varied over a wide range in order to assess the influence of the electrical state of the cloud on its microphysics. In the AIDA chamber, on the other hand, these processes are observable under the realistic dynamic conditions of an expanding and cooling cloud parcel with interacting particles, and are probed simultaneously by a comprehensive set of analytical instruments. By this means, microphysical processes can be studied in their complex interplay with dynamical processes such as coagulation or particle evaporation and growth via the Bergeron-Findeisen process. Shortwave scattering and longwave absorption properties of the nucleating and growing ice crystals are probed by in situ polarised laser light scattering measurements and infrared extinction spectroscopy. In conjunction with ex situ single-particle imaging and light scattering measurements, the relation between the overall extinction and depolarization properties of the ice clouds and the morphological details of the constituent ice crystals is investigated. In our contribution we will concentrate on the parameterization of homogeneous and heterogeneous ice formation processes under various atmospheric conditions and on the optical properties of the ice crystals produced under these conditions. First attempts to parameterize the observations will be presented.

  11. Patients' online access to their electronic health records and linked online services: a systematic interpretative review.

    PubMed

    de Lusignan, Simon; Mold, Freda; Sheikh, Aziz; Majeed, Azeem; Wyatt, Jeremy C; Quinn, Tom; Cavill, Mary; Gronlund, Toto Anne; Franco, Christina; Chauhan, Umesh; Blakey, Hannah; Kataria, Neha; Barker, Fiona; Ellis, Beverley; Koczan, Phil; Arvanitis, Theodoros N; McCarthy, Mary; Jones, Simon; Rafi, Imran

    2014-09-08

    To investigate the effect of providing patients online access to their electronic health record (EHR) and linked transactional services on the provision, quality and safety of healthcare. The objectives are also to identify and understand: barriers and facilitators for providing online access to their records and services for primary care workers; and their association with organisational/IT system issues. Primary care. A total of 143 studies were included. 17 were experimental in design and subject to risk of bias assessment, which is reported in a separate paper. Detailed inclusion and exclusion criteria have also been published elsewhere in the protocol. Our primary outcome measure was change in quality or safety as a result of implementation or utilisation of online records/transactional services. No studies reported changes in health outcomes, though eight detected medication errors and seven reported improved uptake of preventative care. Professional concerns over privacy were reported in 14 studies. 18 studies reported concern over potentially increased workload, with some showing an increased workload for email or online messaging, while telephone contact remained unchanged and face-to-face contact stayed the same or fell. Owing to heterogeneity in reporting, overall workload change was hard to predict. 10 studies reported how online access offered convenience, primarily for more advantaged patients, who were largely highly satisfied with the process when clinician responses were prompt. Patient online access and services offer increased convenience and satisfaction. However, professionals were concerned about the impact on workload and the risk to privacy. Studies correcting medication errors may improve patient safety. There may need to be a redesign of the business process to engage health professionals in online access, and of the EHR to make it friendlier and provide equity of access to a wider group of patients. SYSTEMATIC REVIEW REGISTRATION NUMBER: PROSPERO CRD42012003091.

  12. Patients’ online access to their electronic health records and linked online services: a systematic interpretative review

    PubMed Central

    de Lusignan, Simon; Mold, Freda; Sheikh, Aziz; Majeed, Azeem; Wyatt, Jeremy C; Quinn, Tom; Cavill, Mary; Gronlund, Toto Anne; Franco, Christina; Chauhan, Umesh; Blakey, Hannah; Kataria, Neha; Barker, Fiona; Ellis, Beverley; Koczan, Phil; Arvanitis, Theodoros N; McCarthy, Mary; Jones, Simon; Rafi, Imran

    2014-01-01

    Objectives To investigate the effect of providing patients online access to their electronic health record (EHR) and linked transactional services on the provision, quality and safety of healthcare. The objectives are also to identify and understand: barriers and facilitators for providing online access to their records and services for primary care workers; and their association with organisational/IT system issues. Setting Primary care. Participants A total of 143 studies were included. 17 were experimental in design and subject to risk of bias assessment, which is reported in a separate paper. Detailed inclusion and exclusion criteria have also been published elsewhere in the protocol. Primary and secondary outcome measures Our primary outcome measure was change in quality or safety as a result of implementation or utilisation of online records/transactional services. Results No studies reported changes in health outcomes, though eight detected medication errors and seven reported improved uptake of preventative care. Professional concerns over privacy were reported in 14 studies. 18 studies reported concern over potentially increased workload, with some showing an increased workload for email or online messaging, while telephone contact remained unchanged and face-to-face contact stayed the same or fell. Owing to heterogeneity in reporting, overall workload change was hard to predict. 10 studies reported how online access offered convenience, primarily for more advantaged patients, who were largely highly satisfied with the process when clinician responses were prompt. Conclusions Patient online access and services offer increased convenience and satisfaction. However, professionals were concerned about the impact on workload and the risk to privacy. Studies correcting medication errors may improve patient safety. There may need to be a redesign of the business process to engage health professionals in online access, and of the EHR to make it friendlier and provide equity of access to a wider group of patients. Systematic review registration number PROSPERO CRD42012003091. PMID:25200561

  13. Dose-response relationship between cumulative physical workload and osteoarthritis of the hip - a meta-analysis applying an external reference population for exposure assignment.

    PubMed

    Seidler, Andreas; Lüben, Laura; Hegewald, Janice; Bolm-Audorff, Ulrich; Bergmann, Annekatrin; Liebers, Falk; Ramdohr, Christina; Romero Starke, Karla; Freiberg, Alice; Unverzagt, Susanne

    2018-06-01

    There is consistent evidence from observational studies of an association between occupational lifting and carrying of heavy loads and the diagnosis of hip osteoarthritis. However, due to the heterogeneity of the exposure estimates considered in single studies, a dose-response relationship between cumulative physical workload and hip osteoarthritis could not be determined so far. This study aimed to analyze the dose-response relationship between cumulative physical workload and hip osteoarthritis by replacing the exposure categories of the included studies with cumulative exposure values of an external reference population. Our meta-regression analysis was based on a recently conducted systematic review (Bergmann A, Bolm-Audorff U, Krone D, Seidler A, Liebers F, Haerting J, Freiberg A, Unverzagt S, Dtsch Arztebl Int 114:581-8, 2017). The main analysis of our meta-regression comprised six case-control studies for men and five for women. The population control subjects of a German multicentre case-control study (Seidler A, Bergmann A, Jäger M, Ellegast R, Ditchen D, Elsner G, Grifka J, Haerting J, Hofmann F, Linhardt O, Luttmann A, Michaelis M, Petereit-Haack G, Schumann B, Bolm-Audorff U, BMC Musculoskelet Disord 10:48, 2009) served as the reference population. Based on the sex-specific cumulative exposure percentiles of the reference population, we assigned exposure values to each category of the included studies using three different cumulative exposure parameters. To estimate the doubling dose (the amount of physical workload needed to double the risk of hip osteoarthritis) on the basis of all available case-control studies, meta-regression analyses were conducted based on the linear association between the exposure values of the reference population and the logarithm of the reported odds ratios (ORs) from the included studies. In men, the risk of developing hip osteoarthritis increased by an OR of 1.98 (95% CI 1.20-3.29) per 10,000 tons of weights ≥20 kg handled, 2.08 (95% CI 1.22-3.53) per 10,000 tons handled >10 times per day, and 8.64 (95% CI 1.87-39.91) per 10^6 operations. These estimations result in doubling doses of 10,100 tons of weights ≥20 kg handled, 9500 tons of weights ≥20 kg handled >10 times per day, and 321,400 operations with weights ≥20 kg. There was no linear association between the manual handling of weights at work and the risk of developing hip osteoarthritis in women. Under specific conditions, the application of an external reference population allows the derivation of a dose-response relationship despite high exposure heterogeneity in the pooled studies.
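
    The doubling dose follows directly from the fitted log-linear model: with ln OR(x) = beta * x, the dose at which the odds ratio reaches 2 is ln(2)/beta. A quick check against the figures reported above, using only values from the abstract:

    ```python
    import math

    # Log-linear meta-regression model: ln OR(x) = beta * x, so the doubling
    # dose (the exposure at which OR = 2) is ln(2) / beta.
    or_per_10k_tons = 1.98                       # reported OR per 10,000 tons (weights >= 20 kg)
    beta = math.log(or_per_10k_tons) / 10_000    # per ton
    doubling_dose = math.log(2) / beta
    print(round(doubling_dose))                  # -> 10147 tons, close to the reported ~10,100
    ```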

  14. The JASMIN Cloud: specialised and hybrid to meet the needs of the Environmental Sciences Community

    NASA Astrophysics Data System (ADS)

    Kershaw, Philip; Lawrence, Bryan; Churchill, Jonathan; Pritchard, Matt

    2014-05-01

    Cloud computing provides enormous opportunities for the research community. The large public cloud providers offer near-limitless scaling capability. However, adapting the cloud to scientific workloads is not without its problems. The commodity nature of public cloud infrastructure can be at odds with the specialist requirements of the research community. Issues such as trust, ownership of data, WAN bandwidth and costing models pose additional barriers to more widespread adoption. Alongside the application of the public cloud for scientific applications, a number of private cloud initiatives are underway in the research community, of which the JASMIN Cloud is one example. Here, cloud service models are effectively superimposed over more established services such as data centres, compute cluster facilities and Grids. These have the potential to deliver the specialist infrastructure needed by the science community coupled with the benefits of a cloud service model. The JASMIN facility, based at the Rutherford Appleton Laboratory, was established in 2012 to support the data analysis requirements of the climate and Earth Observation community. In its first year of operation, the 5 PB of available storage capacity was filled and the hosted compute capability used extensively. JASMIN has demonstrated the concept of a centralised large-volume data analysis facility. Key characteristics have enabled this success: peta-scale fast disk connected via low-latency networks to compute resources, and the use of virtualisation for effective management of the resources for a range of users. A second phase is now underway, funded through NERC's (Natural Environment Research Council) Big Data initiative. This will see a significant expansion of the available resources, with a doubling of disk-based storage to 12 PB and an increase of compute capacity by a factor of ten to over 3000 processing cores. This expansion is accompanied by a broadening of JASMIN's scope, as a service available to the entire UK environmental science community. Experience with the first phase demonstrated the range of user needs. A trade-off is needed between access privileges to resources, flexibility of use and security. This has influenced the form and types of service under development for the new phase. JASMIN will deploy a specialised private cloud organised into "Managed" and "Unmanaged" components. In the Managed Cloud, users have direct access to the storage and compute resources for optimal performance but, for reasons of security, via a more restrictive PaaS (Platform-as-a-Service) interface. The Unmanaged Cloud is deployed in an isolated part of the network but co-located with the rest of the infrastructure. This grants greater liberty to tenants - full IaaS (Infrastructure-as-a-Service) capability to provision customised infrastructure - whilst at the same time protecting more sensitive parts of the system from direct access using these elevated privileges. The private cloud will be augmented with cloud-bursting capability so that it can exploit the resources available from public clouds, making it effectively a hybrid solution. A single interface will overlay the functionality of both the private cloud and the external interfaces to public cloud providers, giving users the flexibility to migrate resources between infrastructures as requirements dictate.

  15. Modeling the Relationships Between Aerosol Properties and the Direct and Indirect Effects of Aerosols on Climate

    NASA Technical Reports Server (NTRS)

    Toon, Owen B.

    1994-01-01

    Aerosols may affect climate directly by scattering and absorbing visible and infrared energy. They may also affect climate indirectly by modifying the properties of clouds through microphysical processes, and by altering the abundances of radiatively important gases through heterogeneous chemistry. Researchers understand which aerosol properties control the direct effect of aerosols on the radiation budget. Unfortunately, despite an abundance of data on certain types of aerosols, much work remains to be done to determine the values of these properties. For instance, we have little idea about the global distribution, seasonal variation, or interannual variability of the aerosol optical depth. Nor do we know the visible light absorption properties of tropical aerosols, which may contain much debris from slash-and-burn agriculture. A positive correlation between aerosol concentrations and the albedos of marine stratus clouds is observed, and the causative microphysics is understood. However, models suggest that it is difficult to produce new particles in the marine boundary layer. Some modelers have suggested that the particles in the marine boundary layer may originate in the free troposphere and be transported into the boundary layer; others argue that the aerosols are created in the marine boundary layer itself. There are no data linking aerosol concentration and cirrus cloud albedo, and models suggest cirrus properties may not be very sensitive to aerosol abundance. There is clear evidence of a radiatively significant change in the global lower-stratospheric ozone abundance during the past few decades. These changes are caused by heterogeneous chemical reactions occurring on the surfaces of particles, and the rates of these reactions depend upon the chemical composition of the particles. Although rapid advances in understanding heterogeneous chemistry have been made, much remains to be done.

  16. Sensitivity of Cirrus and Mixed-phase Clouds to the Ice Nuclei Spectra in McRAS-AC: Single Column Model Simulations

    NASA Technical Reports Server (NTRS)

    Betancourt, R. Morales; Lee, D.; Oreopoulos, L.; Sud, Y. C.; Barahona, D.; Nenes, A.

    2012-01-01

    The salient features of mixed-phase and ice clouds in a GCM cloud scheme are examined using the ice formation parameterizations of Liu and Penner (LP) and Barahona and Nenes (BN). The performance of the LP and BN ice nucleation parameterizations was assessed in the GEOS-5 AGCM using the McRAS-AC cloud microphysics framework in single-column mode. Four-dimensional assimilated data from the intensive observation period of the ARM TWP-ICE campaign were used to drive the fluxes and lateral forcing. Simulation experiments were established to test the impact of each parameterization on the resulting cloud fields. Three commonly used IN spectra were utilized in the BN parameterization to describe the availability of IN for heterogeneous ice nucleation. The results show large similarities in the cirrus cloud regime among all the schemes tested, in which ice crystal concentrations were within a factor of 10 regardless of the parameterization used. In mixed-phase clouds there are some persistent differences in cloud particle number concentration and size, as well as in cloud fraction, ice water mixing ratio, and ice water path. Contact freezing in the simulated mixed-phase clouds contributed to transferring liquid to ice efficiently, so that on average the clouds were fully glaciated at T of approximately 260 K, irrespective of the ice nucleation parameterization used. Comparisons of the simulated ice water path to available satellite-derived observations were also performed; all the schemes tested with the BN parameterization predicted average values of IWP within plus or minus 15% of the observations.

  17. An Assessment of Differences Between Cloud Effective Particle Radius Retrievals for Marine Water Clouds from Three MODIS Spectral Bands

    NASA Technical Reports Server (NTRS)

    Platnick, Steven; Zhang, Zhibo

    2011-01-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) cloud product provides three separate 1 km resolution retrievals of cloud particle effective radius (r_e), derived from the 1.6, 2.1 and 3.7 micron band observations. In this study, differences among the three size retrievals for maritime water clouds (designated r_e,1.6, r_e,2.1 and r_e,3.7) were systematically investigated through a series of case studies and global analyses. Substantial differences are found between the r_e,3.7 and r_e,2.1 retrievals (delta r_e,3.7-2.1), with a strong dependence on cloud regime. The differences are typically small, within +/- 2 micron, over relatively spatially homogeneous coastal stratocumulus cloud regions. However, for trade wind cumulus regimes, r_e,3.7 was found to be substantially smaller than r_e,2.1, sometimes by more than 10 micron. The correlation of delta r_e,3.7-2.1 with key cloud parameters, including the cloud optical thickness (tau), r_e, and a cloud horizontal heterogeneity index (H_sigma) derived from 250 m resolution MODIS 0.86 micron band observations, was investigated using one month of MODIS Terra data. It was found that differences among the three r_e retrievals for optically thin clouds (tau < 5) are highly variable, ranging from -15 micron to 10 micron, likely due to the large MODIS retrieval uncertainties when the cloud is thin. The delta r_e,3.7-2.1 exhibited a threshold-like dependence on both r_e,2.1 and H_sigma. The r_e,3.7 is found to agree reasonably well with r_e,2.1 when r_e,2.1 is smaller than about 15 micron, but becomes increasingly smaller than r_e,2.1 once r_e,2.1 exceeds this size. All three r_e retrievals showed little dependence on H_sigma when H_sigma < 0.3 (H_sigma is defined as the standard deviation divided by the mean of the 250 m pixels within a 1 km pixel retrieval). However, for H_sigma > 0.3, both r_e,1.6 and r_e,2.1 were seen to increase quickly with H_sigma. On the other hand, r_e,3.7 statistics showed little dependence on H_sigma and remained relatively stable over the whole range of H_sigma values. Potential contributing causes to the substantial r_e,3.7 and r_e,2.1 differences are discussed. In particular, based on both 1-D and 3-D radiative transfer simulations, we have elucidated mechanisms by which cloud heterogeneity and 3-D radiative effects can cause large differences between the r_e,3.7 and r_e,2.1 retrievals for highly inhomogeneous clouds. Our results suggest that the contrast in observed delta r_e,3.7-2.1 between cloud regimes is correlated with increases in both cloud r_e and H_sigma. We also speculate that in some highly inhomogeneous drizzling clouds, vertical structure induced by drizzle and 3-D radiative effects might operate together to cause dramatic differences between the r_e,3.7 and r_e,2.1 retrievals.
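
    The heterogeneity index has a one-line definition, so a short sketch suffices; the 4x4 block of 0.86 micron reflectances below is an invented example, not MODIS data.

    ```python
    import numpy as np

    def heterogeneity_index(refl_250m: np.ndarray) -> float:
        """H_sigma for one 1 km retrieval pixel: standard deviation divided by the
        mean of the 0.86 micron reflectances of its 250 m sub-pixels (a 4x4 block)."""
        return float(np.std(refl_250m) / np.mean(refl_250m))

    block = np.array([[0.31, 0.29, 0.35, 0.33],
                      [0.28, 0.30, 0.36, 0.40],
                      [0.25, 0.27, 0.33, 0.37],
                      [0.22, 0.26, 0.31, 0.34]])
    print(heterogeneity_index(block))  # ~0.15, well below the 0.3 threshold discussed above
    ```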

  18. Astronomy In The Cloud: Using Mapreduce For Image Coaddition

    NASA Astrophysics Data System (ADS)

    Wiley, Keith; Connolly, A.; Gardner, J.; Krughoff, S.; Balazinska, M.; Howe, B.; Kwon, Y.; Bu, Y.

    2011-01-01

    In the coming decade, astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. The study of these sources will involve computational challenges such as anomaly detection, classification, and moving object tracking. Since such studies require the highest quality data, methods such as image coaddition, i.e., registration, stacking, and mosaicing, will be critical to scientific investigation. With a requirement that these images be analyzed on a nightly basis to identify moving sources, e.g., asteroids, or transient objects, e.g., supernovae, these data streams present many computational challenges. Given the quantity of data involved, the computational load of these problems can only be addressed by distributing the workload over a large number of nodes. However, the high data throughput demanded by these applications may present scalability challenges for certain storage architectures. One scalable data-processing method that has emerged in recent years is MapReduce, and in this paper we focus on its popular open-source implementation called Hadoop. In the Hadoop framework, the data is partitioned among storage attached directly to worker nodes, and the processing workload is scheduled in parallel on the nodes that contain the required input data. A further motivation for using Hadoop is that it allows us to exploit cloud computing resources, i.e., platforms where Hadoop is offered as a service. We report on our experience implementing a scalable image-processing pipeline for the SDSS imaging database using Hadoop. This multi-terabyte imaging dataset provides a good testbed for algorithm development since its scope and structure approximate future surveys. First, we describe MapReduce and how we adapted image coaddition to the MapReduce framework. Then we describe a number of optimizations to our basic approach and report experimental results comparing their performance. This work is funded by the NSF and by NASA.
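
    Image coaddition maps naturally onto MapReduce, and a plain-Python sketch makes the decomposition concrete. This is a conceptual stand-in, not the authors' Hadoop pipeline: the image structure and the precomputed registration lookup are assumptions.

    ```python
    # Conceptual MapReduce-style co-addition: map emits per-sky-pixel contributions
    # after astrometric registration; reduce accumulates a weighted average.
    from collections import defaultdict

    def map_image(image):
        """Emit (sky_pixel, (flux, weight)) pairs for every pixel of one input image."""
        for detector_pixel, flux in image["pixels"].items():
            sky_pixel = image["wcs"][detector_pixel]  # registration lookup (assumed given)
            yield sky_pixel, (flux, image["weight"])

    def reduce_coadd(pairs):
        """Accumulate weighted sums per sky pixel, then normalize -> co-added mosaic."""
        sums = defaultdict(lambda: [0.0, 0.0])
        for sky_pixel, (flux, weight) in pairs:
            sums[sky_pixel][0] += weight * flux
            sums[sky_pixel][1] += weight
        return {p: wf / w for p, (wf, w) in sums.items()}

    images = [{"pixels": {(0, 0): 1.2}, "wcs": {(0, 0): (10, 10)}, "weight": 2.0},
              {"pixels": {(5, 5): 0.8}, "wcs": {(5, 5): (10, 10)}, "weight": 1.0}]
    pairs = (pair for img in images for pair in map_image(img))
    print(reduce_coadd(pairs))  # {(10, 10): 1.0666...}
    ```

    In Hadoop proper, the shuffle phase groups the emitted keys so that each reducer sees all contributions to its sky tiles, which is what lets the summation run in parallel on the nodes holding the data.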

  19. Astronomy in the Cloud: Using MapReduce for Image Co-Addition

    NASA Astrophysics Data System (ADS)

    Wiley, K.; Connolly, A.; Gardner, J.; Krughoff, S.; Balazinska, M.; Howe, B.; Kwon, Y.; Bu, Y.

    2011-03-01

    In the coming decade, astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. The study of these sources will involve computation challenges such as anomaly detection and classification and moving-object tracking. Since such studies benefit from the highest-quality data, methods such as image co-addition, i.e., astrometric registration followed by per-pixel summation, will be a critical preprocessing step prior to scientific investigation. With a requirement that these images be analyzed on a nightly basis to identify moving sources such as potentially hazardous asteroids or transient objects such as supernovae, these data streams present many computational challenges. Given the quantity of data involved, the computational load of these problems can only be addressed by distributing the workload over a large number of nodes. However, the high data throughput demanded by these applications may present scalability challenges for certain storage architectures. One scalable data-processing method that has emerged in recent years is MapReduce, and in this article we focus on its popular open-source implementation called Hadoop. In the Hadoop framework, the data are partitioned among storage attached directly to worker nodes, and the processing workload is scheduled in parallel on the nodes that contain the required input data. A further motivation for using Hadoop is that it allows us to exploit cloud computing resources: i.e., platforms where Hadoop is offered as a service. We report on our experience of implementing a scalable image-processing pipeline for the SDSS imaging database using Hadoop. This multiterabyte imaging data set provides a good testbed for algorithm development, since its scope and structure approximate future surveys. First, we describe MapReduce and how we adapted image co-addition to the MapReduce framework. Then we describe a number of optimizations to our basic approach and report experimental results comparing their performance.

  20. Double-moment cloud microphysics scheme for the deep convection parameterization in the GFDL AM3

    NASA Astrophysics Data System (ADS)

    Belochitski, A.; Donner, L.

    2014-12-01

    A double-moment cloud microphysical scheme, originally developed by Morrison and Gettelman (2008) for stratiform clouds and later adapted for deep convection by Song and Zhang (2011), has been implemented into the Geophysical Fluid Dynamics Laboratory's atmospheric general circulation model AM3. The scheme treats cloud drop, cloud ice, rain, and snow number concentrations and mixing ratios as diagnostic variables and incorporates the processes of autoconversion, self-collection, collection between hydrometeor species, sedimentation, ice nucleation, drop activation, homogeneous and heterogeneous freezing, and the Bergeron-Findeisen process. Such a detailed representation of microphysical processes makes the scheme suitable for studying the interactions between aerosols and convection, as well as aerosols' indirect effects on clouds and their roles in climate change. The scheme is first tested in the single-column version of the GFDL AM3 using forcing data obtained at the U.S. Department of Energy Atmospheric Radiation Measurement project's Southern Great Plains site. The scheme's impact on SCM simulations is discussed. As the next step, runs of the full atmospheric GCM incorporating the new parameterization are compared to the unmodified version of GFDL AM3. Global climatological fields and their variability are contrasted with those of the original version of the GCM. The impact on cloud radiative forcing and climate sensitivity is investigated.
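
    To make the double-moment treatment concrete, the sketch below shows the Khairoutdinov and Kogan (2000) autoconversion rate, on which the Morrison-Gettelman scheme bases its warm-rain conversion; whether AM3 retains this exact form is not stated in the abstract.

    ```python
    def kk2000_autoconversion(qc: float, nc: float) -> float:
        """Khairoutdinov & Kogan (2000) autoconversion rate dq_r/dt (kg kg^-1 s^-1)
        from cloud water mixing ratio qc (kg kg^-1) and droplet number nc (cm^-3).
        A double-moment scheme carries both qc and nc, so the rate falls when the
        same water is spread over more, smaller droplets."""
        return 1350.0 * qc**2.47 * nc**-1.79

    print(kk2000_autoconversion(5e-4, 100.0))  # polluted cloud: slow conversion
    print(kk2000_autoconversion(5e-4, 50.0))   # cleaner cloud, same qc: ~3.5x faster
    ```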

  1. Effects of drop freezing on microphysics of an ascending cloud parcel under biomass burning conditions

    NASA Astrophysics Data System (ADS)

    Diehl, K.; Simmel, M.; Wurzler, S.

    There is some evidence that the initiation of warm rain is suppressed in clouds over regions with vegetation fires. Thus, the ice phase becomes important as another pathway to initiate precipitation. Numerical simulations were performed to investigate heterogeneous drop freezing in a biomass-burning situation. An air parcel model with a sectional two-dimensional description of the cloud microphysics was employed, with parameterizations for immersion and contact freezing that consider the different ice-nucleating efficiencies of various ice nuclei. Three scenarios were simulated, resulting in mixed-phase or completely glaciated clouds. Owing to the high insoluble fraction of the biomass-burning particles, drop freezing via the immersion and contact modes was very efficient. The preferential freezing of large drops, followed by riming (i.e. the deposition of liquid drops on ice particles) and the evaporation of liquid drops (the Bergeron-Findeisen process), caused a further decrease of the liquid drops' effective radius at higher altitudes. In turn, ice particle sizes increased so that they could serve as germs for graupel or hailstone formation. The effects of ice initiation on the vertical cloud dynamics were fairly significant, leading the cloud to develop to much higher altitudes than a warm cloud without ice formation.

  2. Secure Scientific Applications Scheduling Technique for Cloud Computing Environment Using Global League Championship Algorithm

    PubMed Central

    Abdulhamid, Shafi’i Muhammad; Abd Latiff, Muhammad Shafie; Abdul-Salaam, Gaddafi; Hussain Madni, Syed Hamid

    2016-01-01

    A cloud computing system is a huge cluster of interconnected servers residing in a datacenter and dynamically provisioned to clients on demand via a front-end interface. Scientific application scheduling in the cloud computing environment is identified as an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of application scheduling in the cloud system, without much emphasis on the issue of secure global scheduling. In this paper, a scientific application scheduling technique using the Global League Championship Algorithm (GBLCA) optimization technique is first presented for global task scheduling in the cloud environment. The experiment is carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produced a remarkable improvement in makespan, ranging from 14.44% to 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, as parametrically measured in terms of the response time. In view of the experimental results, the proposed technique provides a better-quality scheduling solution, suitable for scientific application task execution in the cloud computing environment, than the MinMin, MaxMin, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) scheduling techniques. PMID:27384239

  3. Secure Scientific Applications Scheduling Technique for Cloud Computing Environment Using Global League Championship Algorithm.

    PubMed

    Abdulhamid, Shafi'i Muhammad; Abd Latiff, Muhammad Shafie; Abdul-Salaam, Gaddafi; Hussain Madni, Syed Hamid

    2016-01-01

    A cloud computing system is a huge cluster of interconnected servers residing in a datacenter and dynamically provisioned to clients on demand via a front-end interface. Scientific application scheduling in the cloud computing environment is identified as an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of application scheduling in the cloud system, without much emphasis on the issue of secure global scheduling. In this paper, a scientific application scheduling technique using the Global League Championship Algorithm (GBLCA) optimization technique is first presented for global task scheduling in the cloud environment. The experiment is carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produced a remarkable improvement in makespan, ranging from 14.44% to 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, as parametrically measured in terms of the response time. In view of the experimental results, the proposed technique provides a better-quality scheduling solution, suitable for scientific application task execution in the cloud computing environment, than the MinMin, MaxMin, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) scheduling techniques.
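
    Makespan, the headline metric in these experiments, is simply the completion time of the most loaded VM. A minimal sketch of how a CloudSim-style experiment might score a candidate schedule follows; the task lengths, VM speeds, and assignment are made-up values, not figures from the paper.

    ```python
    def makespan(schedule, task_length, vm_mips):
        """Completion time of the busiest VM for a task->VM assignment.
        schedule[i] is the VM index of task i; task_length in million instructions
        (MI); vm_mips is each VM's processing speed in MIPS."""
        finish = [0.0] * len(vm_mips)
        for task, vm in enumerate(schedule):
            finish[vm] += task_length[task] / vm_mips[vm]
        return max(finish)

    # Hypothetical example: 5 tasks on 2 heterogeneous VMs
    print(makespan([0, 1, 0, 1, 1], [4000, 8000, 2000, 1000, 3000], [1000, 2000]))  # 6.0 s
    ```

    A metaheuristic such as GBLCA then searches over assignment vectors like `schedule` to minimize this value, subject to whatever security constraints the scheduler enforces.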

  4. Partitioning the primary ice formation modes in large eddy simulations of mixed-phase clouds

    NASA Astrophysics Data System (ADS)

    Hande, Luke B.; Hoose, Corinna

    2017-11-01

    State-of-the-art aerosol-dependent parameterisations describing each heterogeneous ice nucleation mode (contact, immersion, and deposition ice nucleation), as well as homogeneous nucleation, were incorporated into a large eddy simulation model. Several cases representing commonly occurring cloud types were simulated in an effort to understand which ice nucleation modes contribute the most to total concentrations of ice crystals. The cases include a completely idealised warm bubble, semi-idealised deep convection, an orographic cloud, and a stratiform case. Despite clear differences in thermodynamic conditions between the cases, the results are remarkably consistent between the different cloud types. In all the investigated cloud types and under normal aerosol conditions, immersion freezing dominates and contact freezing also contributes significantly. At colder temperatures, deposition nucleation plays only a small role, and homogeneous freezing is important. To some extent, the temporal evolution of the cloud determines the dominant freezing mechanism and hence the subsequent microphysical processes. Precipitation is not correlated with any one ice nucleation mode, instead occurring simultaneously when several nucleation modes are active. Furthermore, large variations in the aerosol concentration do affect the dominant ice nucleation mode; however, they have only a minor influence on the precipitation amount.

  5. Climate impact of anthropogenic aerosols on cirrus clouds

    NASA Astrophysics Data System (ADS)

    Penner, J.; Zhou, C.

    2017-12-01

    Cirrus clouds have a net warming effect on the atmosphere and cover about 30% of the Earth's area. Aerosol particles initiate ice formation in the upper troposphere through modes of action that include homogeneous freezing of solution droplets, heterogeneous nucleation on solid particles immersed in a solution, and deposition nucleation of vapor onto solid particles. However, the efficacy with which particles act to form cirrus particles in a model depends on the representation of updrafts. Here, we use a representation of updrafts based on observations of gravity waves, and follow ice formation/evaporation during both updrafts and downdrafts. We examine the possible change in ice number concentration from anthropogenic soot originating from surface sources of fossil fuel and biomass burning, and from aircraft particles that have previously formed ice in contrails. Results show that fossil fuel and biomass burning soot aerosols in this model version exert a radiative forcing of -0.15±0.02 W m-2, while aircraft aerosols that have been pre-activated within contrails exert a forcing of -0.20±0.06 W m-2; these estimates of forcing would decrease if a larger fraction of dust particles acted as heterogeneous ice nuclei. In addition, aircraft aerosols may warm the climate if a large fraction of these particles act as ice nuclei. The magnitude of the forcing in cirrus clouds can be comparable to the forcing exerted by anthropogenic aerosols on warm clouds. This assessment could therefore support climate models with high sensitivity to greenhouse gas forcing, while still allowing the models to fit the overall historical temperature change.

  6. POLECAT: Preparatory and modelling studies

    NASA Astrophysics Data System (ADS)

    Peter, T.; Müller, R.; Pawson, S.; Volkert, H.

    1995-02-01

    “POLECAT” is the acronym for a mission to polar stratospheric clouds, lee waves, chemistry, aerosols and transport. It constitutes a lead project of the German ozone research program sponsored by the Federal Ministry of Education and Research (BMBF). It focusses on the investigation of polar stratospheric clouds (PSCs) in the northern hemisphere with special emphasis on mesoscale effects, in particular lee waves, and their effects on polar stratospheric chemistry. The project comprises two phases. Phase 1 will support laboratory studies on PSC microphysics and heterogeneous chemistry, modelling studies on all scales, and selected field experiments concerning particle measurements as well as characterization of the direct chemical products of heterogeneous reactions. Phase 2 will cover a mission of the high-altitude aircraft Strato-2C, used for flights along streamlines across orographically perturbed regions for direct investigation of PSC effects. This paper presents some preparatory work for the upcoming project and, hence, concentrates on modelling studies including the planning strategies for the future aircraft missions.

  7. Heterogeneous nucleation of nitric acid trihydrate on clay minerals: relevance to type ia polar stratospheric clouds.

    PubMed

    Hatch, Courtney D; Gough, Raina V; Toon, Owen B; Tolbert, Margaret A

    2008-01-17

    Although critical to atmospheric modeling of stratospheric ozone depletion, selective heterogeneous nuclei that promote the formation of Type Ia polar stratospheric clouds (PSCs) are largely unknown. While mineral particles are known to be good ice nuclei, it is currently not clear whether they are also good nuclei for PSCs. In the present study, a high-vacuum chamber equipped with transmission Fourier transform infrared spectroscopy and a quadrupole mass spectrometer was used to study heterogeneous nucleation of nitric acid trihydrate (NAT) on two clay minerals, Na-montmorillonite and kaolinite, as analogs of atmospheric terrestrial and extraterrestrial minerals. The minerals are first coated with a 3:1 supercooled H2O/HNO3 solution prior to the observed nucleation of crystalline NAT. At 220 K, NAT formation was observed at low saturation ratios S_NAT of 12 and 7 on kaolinite and montmorillonite clays, respectively. These are the lowest S_NAT values reported in the literature on any substrate. However, NAT nucleation exhibited a significant temperature dependence: at lower temperatures, representative of typical polar stratospheric conditions, much higher supersaturations were required before nucleation was observed. Our results suggest that mineral particles not previously treated with sulfuric acid may not be an important nucleation platform for Type Ia PSCs under normal polar stratospheric conditions.

  8. Aerosol Processing in Mixed-Phase Clouds in ECHAM5-HAM: Comparison of Single-Column Model Simulations to Observations

    NASA Astrophysics Data System (ADS)

    Hoose, C.; Lohmann, U.; Stier, P.; Verheggen, B.; Weingartner, E.; Herich, H.

    2007-12-01

    The global aerosol-climate model ECHAM5-HAM (Stier et al., 2005) has been extended by an explicit treatment of cloud-borne particles. Two additional modes for in-droplet and in-crystal particles are introduced, which are coupled to the cloud droplet and ice crystal number concentrations simulated by the ECHAM5 double-moment cloud microphysics scheme (Lohmann et al., 2007). The transfer, production and removal of cloud-borne aerosol number and mass by cloud droplet activation, collision scavenging, aqueous-phase sulfate production, freezing, melting, evaporation, sublimation and precipitation formation are taken into account. The model performance is demonstrated and validated with observations of the evolution of total and interstitial aerosol concentrations and size distributions during three different mixed-phase cloud events at the alpine high-altitude research station Jungfraujoch (Switzerland) (Verheggen et al., 2007). Although the single-column simulations cannot be compared one-to-one with the observations, the governing processes in the evolution of the cloud and aerosol parameters are captured qualitatively well. High scavenged fractions are found during the presence of liquid water, while the release of particles during the Bergeron-Findeisen process results in low scavenged fractions after cloud glaciation. The observed coexistence of liquid and ice, which might be related to cloud heterogeneity at subgrid scales, can only be simulated in the model when non-equilibrium conditions are forced. References: U. Lohmann et al., Cloud microphysics and aerosol indirect effects in the global climate model ECHAM5-HAM, Atmos. Chem. Phys. 7, 3425-3446 (2007); P. Stier et al., The aerosol-climate model ECHAM5-HAM, Atmos. Chem. Phys. 5, 1125-1156 (2005); B. Verheggen et al., Aerosol partitioning between the interstitial and the condensed phase in mixed-phase clouds, accepted for publication in J. Geophys. Res. (2007).

  9. Homogeneous and heterogeneous chemistry along air parcel trajectories

    NASA Technical Reports Server (NTRS)

    Jones, R. L.; Mckenna, D. L.; Poole, L. R.; Solomon, S.

    1990-01-01

    The study of coupled heterogeneous and homogeneous chemistry due to polar stratospheric clouds (PSCs), using Lagrangian parcel trajectories for the interpretation of the Airborne Arctic Stratosphere Experiment (AASE), is discussed. This approach represents an attempt to quantitatively model the physical and chemical perturbation of stratospheric composition due to the formation of PSCs, using the fullest possible representation of the relevant processes. Further, the meteorological fields from the United Kingdom Meteorological Office global model were used to deduce potential vorticity and infer regions of PSCs as an input to flight planning during AASE.

  10. Airborne observations of the microphysical structure of two contrasting cirrus clouds

    NASA Astrophysics Data System (ADS)

    O'Shea, S. J.; Choularton, T. W.; Lloyd, G.; Crosier, J.; Bower, K. N.; Gallagher, M.; Abel, S. J.; Cotton, R. J.; Brown, P. R. A.; Fugal, J. P.; Schlenczek, O.; Borrmann, S.; Pickering, J. C.

    2016-11-01

    We present detailed airborne in situ measurements of cloud microphysics in two midlatitude cirrus clouds, collected as part of the Cirrus Coupled Cloud-Radiation Experiment. A new habit recognition algorithm for sorting cloud particle images using a neural network is introduced. Both flights observed clouds that were related to frontal systems, but one was actively developing while the other dissipated as it was sampled. The two clouds showed distinct differences in particle number, habit, and size. However, a number of common features were observed in the 2-D stereo data set, including a distinct bimodal size distribution within the higher-temperature regions of the clouds. This may result from a combination of local heterogeneous nucleation and large particles sedimenting from aloft. Both clouds had small ice crystals (<100 µm) present at all levels. However, this small-ice mode is not present in observations from a holographic probe. This raises the possibility that the small ice observed by optical array probes may, at least in part, be an instrument artifact due to the counting of out-of-focus large particles as small ice. The concentrations of ice crystals were a factor of 10 higher in the actively growing cloud with the stronger updrafts, with a mean concentration of 261 L-1 compared to 29 L-1 in the decaying case. Particles larger than 700 µm were largely absent from the decaying cirrus case. A comparison with ice-nucleating particle parameterizations suggests that, for the developing case, the ice concentrations at the lowest temperatures are best explained by homogeneous nucleation.

  11. "Tactic": Traffic Aware Cloud for Tiered Infrastructure Consolidation

    ERIC Educational Resources Information Center

    Sangpetch, Akkarit

    2013-01-01

    Large-scale enterprise applications are deployed as distributed applications. These applications consist of many inter-connected components with heterogeneous roles and complex dependencies. Each component typically consumes 5-15% of the server capacity. Deploying each component as a separate virtual machine (VM) allows us to consolidate the…

  12. Cirrus Susceptibility to Changes in Ice Nuclei: Physical Processes, Model Uncertainties, and Measurement Needs

    NASA Technical Reports Server (NTRS)

    Jensen, Eric

    2017-01-01

    In this talk, I will begin by discussing the physical processes that govern the competition between heterogeneous and homogeneous ice nucleation in upper tropospheric cirrus clouds. Next, I will review the current knowledge of low-temperature ice nucleation from laboratory experiments and field measurements. I will then discuss the uncertainties and deficiencies in representations of cirrus processes in global models used to estimate the climate impacts of changes in cirrus clouds. Lastly, I will review the critical field measurements needed to advance our understanding of cirrus and their susceptibility to changes in aerosol properties.

  13. Data Center Consolidation: A Step towards Infrastructure Clouds

    NASA Astrophysics Data System (ADS)

    Winter, Markus

    Application service providers face enormous challenges and rising costs in managing and operating a growing number of heterogeneous system and computing landscapes. Limitations of traditional computing environments force IT decision-makers to reorganize computing resources within the data center, as continuous growth leads to an inefficient utilization of the underlying hardware infrastructure. This paper discusses a way for infrastructure providers to improve data center operations based on the findings of a case study on resource utilization of very large business applications and presents an outlook beyond server consolidation endeavors, transforming corporate data centers into compute clouds.

  14. Cloud chamber experiments on the origin of ice crystal complexity in cirrus clouds

    NASA Astrophysics Data System (ADS)

    Schnaiter, Martin; Järvinen, Emma; Vochezer, Paul; Abdelmonem, Ahmed; Wagner, Robert; Jourdan, Olivier; Mioche, Guillaume; Shcherbakov, Valery N.; Schmitt, Carl G.; Tricoli, Ugo; Ulanowski, Zbigniew; Heymsfield, Andrew J.

    2016-04-01

    This study reports on the origin of small-scale ice crystal complexity and its influence on the angular light scattering properties of cirrus clouds. Cloud simulation experiments were conducted at the AIDA (Aerosol Interactions and Dynamics in the Atmosphere) cloud chamber of the Karlsruhe Institute of Technology (KIT). A new experimental procedure was applied to grow and sublimate ice particles at defined super- and subsaturated ice conditions and for temperatures in the -40 to -60 °C range. The experiments were performed for ice clouds generated via homogeneous and heterogeneous initial nucleation. Small-scale ice crystal complexity was deduced from measurements of spatially resolved single-particle light scattering patterns by the latest version of the Small Ice Detector (SID-3). It was found that a high crystal complexity dominates the microphysics of the simulated clouds, and the degree of this complexity depends on the water vapor available during crystal growth. Indications were found that the small-scale crystal complexity is influenced by unfrozen H2SO4/H2O residuals in the case of homogeneous initial ice nucleation. Angular light scattering functions of the simulated ice clouds were measured by the two currently available airborne polar nephelometers: the polar nephelometer (PN) probe of the Laboratoire de Météorologie Physique (LaMP) and the Particle Habit Imaging and Polar Scattering (PHIPS-HALO) probe of KIT. The measured scattering functions are featureless and flat in the side and backward scattering directions. It was found that these functions have a rather low sensitivity to the small-scale crystal complexity for ice clouds that were grown under typical atmospheric conditions. These results have implications for the microphysical properties of cirrus clouds and for the radiative transfer through these clouds.

  15. Multi-Objective Approach for Energy-Aware Workflow Scheduling in Cloud Computing Environments

    PubMed Central

    Yassa, Sonia; Chelouah, Rachid; Kadima, Hubert; Granado, Bertrand

    2013-01-01

    We address the problem of scheduling workflow applications on heterogeneous computing systems like cloud computing infrastructures. In general, cloud workflow scheduling is a complex optimization problem that requires considering multiple criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on optimization constrained by time or cost, without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds and to present a hybrid PSO (particle swarm optimization) algorithm to optimize scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate at different supply voltage levels at the cost of lower clock frequencies; running at multiple voltages thus involves a compromise between schedule quality and energy. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach. PMID:24319361

  16. Multi-objective approach for energy-aware workflow scheduling in cloud computing environments.

    PubMed

    Yassa, Sonia; Chelouah, Rachid; Kadima, Hubert; Granado, Bertrand

    2013-01-01

    We address the problem of scheduling workflow applications on heterogeneous computing systems like cloud computing infrastructures. In general, cloud workflow scheduling is a complex optimization problem that requires considering multiple criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on optimization constrained by time or cost, without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds and to present a hybrid PSO (particle swarm optimization) algorithm to optimize scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate at different supply voltage levels at the cost of lower clock frequencies; running at multiple voltages thus involves a compromise between schedule quality and energy. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach.
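
    As a concrete illustration of the DVFS trade-off described in the two records above, the sketch below evaluates a hypothetical task at three made-up voltage/frequency operating points, assuming the conventional CMOS dynamic-power model P ≈ C·V²·f; the cycle count, capacitance, and operating points are illustrative placeholders, not values from the paper.

      # Minimal sketch of the DVFS energy/time trade-off (assumptions above).
      CYCLES = 2e9          # hypothetical task length in CPU cycles
      CAPACITANCE = 1e-9    # hypothetical effective switched capacitance (farads)

      # Illustrative (voltage V, frequency Hz) operating points, high to low.
      DVFS_LEVELS = [(1.2, 2.0e9), (1.0, 1.6e9), (0.8, 1.2e9)]

      for volts, freq in DVFS_LEVELS:
          runtime = CYCLES / freq                  # seconds: slower clock, longer run
          power = CAPACITANCE * volts**2 * freq    # watts, dynamic power only
          energy = power * runtime                 # joules consumed by the task
          print(f"V={volts:.1f} V  f={freq/1e9:.1f} GHz  t={runtime:.2f} s  E={energy:.3f} J")

    Lowering the supply level stretches the runtime but cuts energy with V², which is exactly the schedule-quality-versus-energy compromise the multi-objective scheduler must navigate.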

  17. A Survey on Personal Data Cloud

    PubMed Central

    Wang, Jiaqiu; Wang, Zhongjie

    2014-01-01

    Personal data represent the e-history of a person and are of great significance to that person, but they are essentially produced and governed by various distributed services, and a global, centralized view is lacking. In recent years, researchers have paid attention to the Personal Data Cloud (PDC), which aggregates the heterogeneous personal data scattered across different clouds into one cloud, so that a person can effectively store, acquire, and share their data. This paper presents a short survey of PDC research by summarizing related papers published in recent years. The concept, classification, and significance of personal data are elaborated, and then the semantic correlation and semantic representation of personal data are discussed. A multilayer reference architecture of PDC, including its core components and a real-world operational scenario showing how the reference architecture works, is introduced in detail. Existing commercial PDC products/prototypes are listed and compared from several perspectives. Five open issues addressing the shortcomings of current PDC research are put forward. PMID:25165753

  18. Machine Learning for Knowledge Extraction from PHR Big Data.

    PubMed

    Poulymenopoulou, Michaela; Malamateniou, Flora; Vassilacopoulos, George

    2014-01-01

    Cloud computing, Internet of Things (IoT) and NoSQL database technologies can support a new generation of cloud-based PHR services that contain heterogeneous (unstructured, semi-structured and structured) patient data (health, social and lifestyle) from various sources, including automatically transmitted data from Internet-connected devices in the patient's living space (e.g. medical devices connected to patients at home care). The patient data stored in such PHR systems constitute big data, whose analysis with appropriate machine learning algorithms is expected to improve diagnosis and treatment accuracy, to cut healthcare costs and, hence, to improve the overall quality and efficiency of healthcare provided. This paper describes a health data analytics engine that uses machine learning algorithms for analyzing cloud-based PHR big data towards knowledge extraction, in support of better healthcare delivery as regards disease diagnosis and prognosis. The engine comprises data preparation, model generation and data analysis modules, and runs on the cloud, taking advantage of the map/reduce paradigm provided by Apache Hadoop.
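
    To make the map/reduce dependence concrete, here is a minimal, self-contained sketch of the pattern such an engine builds on; the record layout (patient_id, tab, diagnosis_code) is hypothetical, since the paper does not specify its PHR schema, and a real deployment would run the mapper and reducer as separate Hadoop tasks rather than in one process.

      from collections import defaultdict

      # Hypothetical record layout: "patient_id<TAB>diagnosis_code".
      def mapper(record):
          patient_id, diagnosis = record.split("\t")
          yield diagnosis, 1                   # emit (key, value) pairs

      def reducer(key, values):
          return key, sum(values)              # aggregate counts per diagnosis

      def run(records):
          shuffled = defaultdict(list)         # stand-in for Hadoop's shuffle/sort
          for rec in records:
              for k, v in mapper(rec):
                  shuffled[k].append(v)
          return dict(reducer(k, vs) for k, vs in shuffled.items())

      print(run(["p1\tE11.9", "p2\tI10", "p3\tE11.9"]))   # {'E11.9': 2, 'I10': 1}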

  19. A water activity based model of heterogeneous ice nucleation kinetics for freezing of water and aqueous solution droplets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knopf, Daniel A.; Alpert, Peter A.

    Immersion freezing of water and aqueous solutions by particles acting as ice nuclei (IN) is a common process of heterogeneous ice nucleation which occurs in many environments, especially in the atmosphere where it results in the glaciation of clouds. Here we experimentally show, using a variety of IN types suspended in various aqueous solutions, that immersion freezing temperatures and kinetics can be described solely by temperature, T, and solution water activity, aw, which is the ratio of the vapour pressure of the solution and the saturation water vapour pressure under the same conditions and, in equilibrium, equivalent to relative humidity (RH). This allows the freezing point and corresponding heterogeneous ice nucleation rate coefficient, Jhet, to be uniquely expressed by T and aw, a result we term the aw based immersion freezing model (ABIFM). This method is independent of the nature of the solute and accounts for several varying parameters, including cooling rate and IN surface area, while providing a holistic description of immersion freezing and allowing prediction of freezing temperatures, Jhet, frozen fractions, ice particle production rates and numbers. Our findings are based on experimental freezing data collected for various IN surface areas, A, and cooling rates, r, of droplets variously containing marine biogenic material, two soil humic acids, four mineral dusts, and one organic monolayer acting as IN. For all investigated IN types we demonstrate that droplet freezing temperatures increase as A increases. Similarly, droplet freezing temperatures increase as the cooling rate decreases. The log10(Jhet) values for the various IN types, derived exclusively from T and aw, provide a complete description of the heterogeneous ice nucleation kinetics. Thus, the ABIFM can be applied over the entire range of T, RH, total particulate surface area, and cloud activation timescales typical of atmospheric conditions. Finally, we demonstrate that ABIFM can be used to derive frozen fractions of droplets and ice particle production for atmospheric models of cirrus and mixed phase cloud conditions.
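
    A minimal sketch of how an ABIFM-style rate coefficient is typically applied, assuming the water-activity-based form in which log10(Jhet) depends linearly on the water-activity offset delta_aw = aw - aw_ice(T); the slope and intercept below are illustrative placeholders, since the real fit constants are IN-type specific, and the Poisson frozen-fraction step assumes a uniform IN surface area per droplet.

      import numpy as np

      # Illustrative ABIFM-style parameterization; m and c are placeholders,
      # not fitted values from the paper (they depend on the IN type).
      def j_het(delta_aw, m=54.0, c=-10.0):
          """Heterogeneous nucleation rate coefficient (cm^-2 s^-1)."""
          return 10.0 ** (m * delta_aw + c)

      def frozen_fraction(delta_aw, area_cm2, dwell_s):
          """Poisson frozen fraction for droplets carrying IN surface area A after time t."""
          return 1.0 - np.exp(-j_het(delta_aw) * area_cm2 * dwell_s)

      # Example: droplets with 1e-5 cm^2 of IN surface, delta_aw = 0.25, observed for 10 s.
      print(frozen_fraction(0.25, 1e-5, 10.0))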

  20. A water activity based model of heterogeneous ice nucleation kinetics for freezing of water and aqueous solution droplets.

    PubMed

    Knopf, Daniel A; Alpert, Peter A

    2013-01-01

    Immersion freezing of water and aqueous solutions by particles acting as ice nuclei (IN) is a common process of heterogeneous ice nucleation which occurs in many environments, especially in the atmosphere where it results in the glaciation of clouds. Here we experimentally show, using a variety of IN types suspended in various aqueous solutions, that immersion freezing temperatures and kinetics can be described solely by temperature, T, and solution water activity, a(w), which is the ratio of the vapour pressure of the solution and the saturation water vapour pressure under the same conditions and, in equilibrium, equivalent to relative humidity (RH). This allows the freezing point and corresponding heterogeneous ice nucleation rate coefficient, J(het), to be uniquely expressed by T and a(w), a result we term the a(w) based immersion freezing model (ABIFM). This method is independent of the nature of the solute and accounts for several varying parameters, including cooling rate and IN surface area, while providing a holistic description of immersion freezing and allowing prediction of freezing temperatures, J(het), frozen fractions, ice particle production rates and numbers. Our findings are based on experimental freezing data collected for various IN surface areas, A, and cooling rates, r, of droplets variously containing marine biogenic material, two soil humic acids, four mineral dusts, and one organic monolayer acting as IN. For all investigated IN types we demonstrate that droplet freezing temperatures increase as A increases. Similarly, droplet freezing temperatures increase as the cooling rate decreases. The log10(J(het)) values for the various IN types, derived exclusively from T and a(w), provide a complete description of the heterogeneous ice nucleation kinetics. Thus, the ABIFM can be applied over the entire range of T, RH, total particulate surface area, and cloud activation timescales typical of atmospheric conditions. Lastly, we demonstrate that ABIFM can be used to derive frozen fractions of droplets and ice particle production for atmospheric models of cirrus and mixed phase cloud conditions.

  1. A water activity based model of heterogeneous ice nucleation kinetics for freezing of water and aqueous solution droplets

    DOE PAGES

    Knopf, Daniel A.; Alpert, Peter A.

    2013-04-24

    Immersion freezing of water and aqueous solutions by particles acting as ice nuclei (IN) is a common process of heterogeneous ice nucleation which occurs in many environments, especially in the atmosphere where it results in the glaciation of clouds. Here we experimentally show, using a variety of IN types suspended in various aqueous solutions, that immersion freezing temperatures and kinetics can be described solely by temperature, T, and solution water activity, aw, which is the ratio of the vapour pressure of the solution and the saturation water vapour pressure under the same conditions and, in equilibrium, equivalent to relative humidity (RH). This allows the freezing point and corresponding heterogeneous ice nucleation rate coefficient, Jhet, to be uniquely expressed by T and aw, a result we term the aw based immersion freezing model (ABIFM). This method is independent of the nature of the solute and accounts for several varying parameters, including cooling rate and IN surface area, while providing a holistic description of immersion freezing and allowing prediction of freezing temperatures, Jhet, frozen fractions, ice particle production rates and numbers. Our findings are based on experimental freezing data collected for various IN surface areas, A, and cooling rates, r, of droplets variously containing marine biogenic material, two soil humic acids, four mineral dusts, and one organic monolayer acting as IN. For all investigated IN types we demonstrate that droplet freezing temperatures increase as A increases. Similarly, droplet freezing temperatures increase as the cooling rate decreases. The log10(Jhet) values for the various IN types, derived exclusively from T and aw, provide a complete description of the heterogeneous ice nucleation kinetics. Thus, the ABIFM can be applied over the entire range of T, RH, total particulate surface area, and cloud activation timescales typical of atmospheric conditions. Finally, we demonstrate that ABIFM can be used to derive frozen fractions of droplets and ice particle production for atmospheric models of cirrus and mixed phase cloud conditions.

  2. The Impacts of an Observationally-Based Cloud Fraction and Condensate Overlap Parameterization on a GCM's Cloud Radiative Effect

    NASA Technical Reports Server (NTRS)

    Oreopoulos, Lazaros; Lee, Dongmin; Norris, Peter; Yuan, Tianle

    2011-01-01

    It has been shown that the details of how cloud fraction overlap is treated in GCMs have a substantial impact on shortwave and longwave fluxes. Because cloud condensate is also horizontally heterogeneous at GCM grid scales, another aspect of cloud overlap should in principle also be assessed, namely the vertical overlap of hydrometeor distributions. This type of overlap is usually examined in terms of rank correlations, i.e., linear correlations between hydrometeor amount ranks of the overlapping parts of cloud layers at specific separation distances. The cloud fraction overlap parameter and the rank correlation of hydrometeor amounts can both be expressed as inverse exponential functions of separation distance, characterized by their respective decorrelation lengths (e-folding distances). Larger decorrelation lengths mean that hydrometeor fractions and probability distribution functions have high levels of vertical alignment. An analysis of CloudSat and CALIPSO data reveals that the two aspects of cloud overlap are related and that their respective decorrelation lengths have a distinct dependence on latitude that can be parameterized and included in a GCM. In our presentation we will contrast the Cloud Radiative Effect (CRE) of the GEOS-5 atmospheric GCM (AGCM) when the observationally-based parameterization of decorrelation lengths is used to represent overlap versus the simpler cases of maximum-random overlap and globally constant decorrelation lengths. The effects of specific overlap representations will be examined for both diagnostic and interactive radiation runs in GEOS-5, and comparisons will be made with observed CREs from CERES and CloudSat (2B-FLXHR product). Since the radiative effects of overlap depend on the cloud property distributions of the AGCM, the availability of two different cloud schemes in GEOS-5 will give us the opportunity to assess a wide range of potential cloud overlap consequences on the model's climate.
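
    As a worked example of the exponential-decay description in this abstract, the sketch below blends maximum and random overlap of two cloud layers with an overlap parameter alpha(dz) = exp(-dz/L); the exponential form follows the abstract, the maximum/random blending is the standard generalized-overlap construction (not necessarily the paper's exact scheme), and the decorrelation length L is a placeholder rather than a GEOS-5 value.

      import numpy as np

      def overlap_param(dz_km, decorr_km=2.0):
          """Overlap parameter decaying exponentially with layer separation."""
          return np.exp(-dz_km / decorr_km)

      def combined_fraction(c1, c2, dz_km, decorr_km=2.0):
          """Total cloud fraction of two layers under generalized overlap."""
          c_max = max(c1, c2)             # maximum overlap limit
          c_rand = c1 + c2 - c1 * c2      # random overlap limit
          a = overlap_param(dz_km, decorr_km)
          return a * c_max + (1.0 - a) * c_rand

      # Two layers, 30% and 40% cloud fraction, separated by 1 km:
      print(combined_fraction(0.3, 0.4, dz_km=1.0))   # between 0.40 (max) and 0.58 (random)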

  3. The 1991 Eruption of Mt. Pinatubo: Changes in Climate and Atmospheric Chemistry- Lesson Learned and Questions Left Unanswered

    NASA Astrophysics Data System (ADS)

    Toon, O. B.

    2016-12-01

    Mt. Pinatubo injected the largest amount of SO2 into the stratosphere of any 20th Century eruption. I will survey what we learned, and point out issues that require more data, or further analysis. Beautiful purple twilight glows, hazy gray skies, and sunsets so bright they caused traffic accidents made the volcanic cloud evident to even casual observers for more than a year. High altitude aircraft, balloons, satellites and ground-based instruments measured many properties of the cloud and its impact on the Earth. Scattering of sunlight to space created a temporary negative radiative forcing, larger than the positive forcing from greenhouse gases in the previous century. As a result the surface cooled, but the cloud faded before the cooling reached its full potential. Absorption of near infrared sunlight, and of upwelling mid-infrared radiation heated the stratosphere. The heating was unequal, which may have induced local dynamical changes that sped the movement of the cloud into the Southern Hemisphere. The ascending motion in the tropical branch of the Brewer-Dobson circulation sped up, leading to tropical ozone reductions. Mid-latitude declines in ozone were caused by heterogeneous chemical reactions on the volcanic cloud. Polar ozone loss was enhanced by increased heterogeneous reactions due to the increased surface area provided by the volcanic particles. A number of important microphysical insights were gained that are not always recognized. Stratospheric particle sizes increase with the addition of SO2, and even Pinatubo particles did not have a constant or uniform particle size distribution. The optical depth was not uniform over the globe, or even one hemisphere. In fact, the maximum optical depth moved from the tropics to high northern latitudes over the first year. Many questions have been left unanswered. Theory suggests the optical depth of volcanic clouds increases less than linearly with the mass of SO2 injected, is this correct? Observations did not show injections of water or halogens, will other eruptions have significant injections? Do large eruptions have an effect on El Nino or winter warming in the Northern Hemisphere, and if so what is the mechanism? Other large eruptions are inevitable, but rare. How can we prepare to measure the properties and effects of their clouds?

  4. Scale Dependence of Cirrus Horizontal Heterogeneity Effects on TOA Measurements. Part I; MODIS Brightness Temperatures in the Thermal Infrared

    NASA Technical Reports Server (NTRS)

    Fauchez, Thomas; Platnick, Steven; Meyer, Kerry; Cornet, Celine; Szczap, Frederic; Varnai, Tamas

    2017-01-01

    This paper presents a study on the impact of cirrus cloud heterogeneities on MODIS simulated thermal infrared (TIR) brightness temperatures (BTs) at the top of the atmosphere (TOA) as a function of spatial resolution from 50 meters to 10 kilometers. A realistic 3-D (three-dimensional) cirrus field is generated by the 3DCLOUD model (average optical thickness of 1.4, cloud-base and cloud-top altitudes at 10 and 12 kilometers, respectively, consisting of aggregate column crystals with an effective diameter Deff of 20 microns), and 3-D thermal infrared radiative transfer (RT) is simulated with the 3DMCPOL (3-D Monte Carlo Polarized) code. According to previous studies, differences between 3-D BT computed from a heterogeneous pixel and 1-D (one-dimensional) RT computed from a homogeneous pixel are considered dependent at nadir on two effects: (i) the optical thickness horizontal heterogeneity, leading to the plane-parallel homogeneous bias (PPHB); and (ii) horizontal radiative transport (HRT), leading to the independent pixel approximation error (IPAE). A single but realistic cirrus case is simulated and, as expected, the PPHB mainly impacts the low-spatial-resolution results (above approximately 250 meters), with averaged values of up to 5-7 K, while the IPAE mainly impacts the high-spatial-resolution results (below approximately 250 meters), with average values of up to 1-2 K. A sensitivity study has been performed in order to extend these results to various cirrus optical thicknesses and heterogeneities by sampling the cirrus in several ranges of parameters. For four optical thickness classes and four optical heterogeneity classes, we have found that, for nadir observations, the spatial resolution at which the combination of PPHB and HRT effects is the smallest falls between 100 and 250 meters. These spatial resolutions thus appear to be the best choice to retrieve cirrus optical properties with the smallest cloud-heterogeneity-related total bias in the thermal infrared. For off-nadir observations, the average total effect is increased and the minimum is shifted to coarser spatial resolutions.
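
    The PPHB part of this decomposition lends itself to a toy demonstration: because brightness temperature is a nonlinear (here convex) function of optical thickness, the BT of the pixel-mean optical thickness differs from the mean of the sub-pixel BTs. The emissivity relation below is a deliberately simplified stand-in, not the paper's radiative transfer, and the sub-pixel optical thicknesses are invented.

      import numpy as np

      def bt_toy(tau, t_surf=290.0, t_cloud=220.0):
          """Toy brightness temperature: cloud emissivity 1 - exp(-tau)."""
          emissivity = 1.0 - np.exp(-tau)
          return emissivity * t_cloud + (1.0 - emissivity) * t_surf

      taus = np.array([0.2, 0.5, 1.0, 4.0])          # heterogeneous sub-pixels
      pphb = bt_toy(taus.mean()) - bt_toy(taus).mean()
      print(f"toy plane-parallel homogeneous bias: {pphb:.1f} K")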

  5. Economic Perspective on Cloud Computing: Three Essays

    ERIC Educational Resources Information Center

    Dutt, Abhijit

    2013-01-01

    Improvements in Information Technology (IT) infrastructure and standardization of interoperability standards among heterogeneous Information System (IS) applications have brought a paradigm shift in the way an IS application could be used and delivered. Not only an IS application can be built using standardized component but also parts of it can…

  6. Nursing workload in the acute-care setting: A concept analysis of nursing workload.

    PubMed

    Swiger, Pauline A; Vance, David E; Patrician, Patricia A

    2016-01-01

    A pressing need in the field of nursing is the identification of optimal staffing levels to ensure patient safety. Effective staffing requires comprehensive measurement of nursing workload to determine staffing needs. Issues surrounding nursing workload are complex, and the volume of workload is growing; however, many workload systems do not consider the numerous workload factors that impact nursing today. The purpose of this concept analysis was to better understand and define nursing workload as it relates to the acute-care setting. Rogers' evolutionary method was used for this literature-based concept analysis. Nursing workload is influenced by more than patient care. The proposed definition of nursing workload may help leaders identify workload that is unnoticed and unmeasured. These findings could help leaders consider and identify workload that is unnecessary, redundant, or more appropriate for assignment to other members of the health care team. Published by Elsevier Inc.

  7. Homogeneous ice nucleation and supercooled liquid water in orographic wave clouds

    NASA Technical Reports Server (NTRS)

    Heymsfield, Andrew J.; Miloshevich, Larry M.

    1993-01-01

    This study investigates ice nucleation mechanisms in cold lenticular wave clouds, a cloud type characterized by quasi-steady-state air motions and microphysical properties. It is concluded that homogeneous ice nucleation is responsible for the ice production in these clouds at temperatures below about -33 C. The lack of ice nucleation observed above -33 C indicates a dearth of ice-forming nuclei, and hence heterogeneous ice nucleation, in these clouds. Aircraft measurements in the temperature range -31 to -41 C show the following complement of simultaneous and abrupt changes in cloud properties that indicate a transition from the liquid phase to ice: disappearance of liquid water; decrease in relative humidity from near water saturation to ice saturation; increase in mean particle size; change in particle concentration; and change in temperature due to the release of latent heat. A numerical model of cloud particle growth and homogeneous ice nucleation is used to aid in interpretation of our in situ measurements. The abrupt changes in observed cloud properties compare favorably, both qualitatively and quantitatively, with results from the homogeneous ice nucleation model. It is shown that the homogeneous ice nucleation rates from the measurements are consistent with the temperature-dependent rates employed by the model (within a factor of 100, corresponding to about 1 C in temperature) in the temperature range -35 to -38 C. Given the theoretical basis of the modeled rates, it may be reasonable to apply them throughout the -30 to -50 C temperature range considered by the theory.

  8. Changes in Stratiform Clouds of Mesoscale Convective Complex Introduced by Dust Aerosols

    NASA Technical Reports Server (NTRS)

    Lin, B.; Min, Q.-L.; Li, R.

    2010-01-01

    Aerosols influence the Earth's climate through direct, indirect, and semi-direct effects. There are large uncertainties in quantifying these effects due to limited measurements and observations of aerosol-cloud-precipitation interactions. As a major terrestrial source of atmospheric aerosols, dust may serve as a significant forcing for the changing climate because of its effect on solar and thermal radiation as well as on cloud and precipitation processes. The latest satellite measurements enable us to determine dust aerosol loadings and cloud distributions and can potentially be used to reduce the uncertainties in the estimates of aerosol effects on climate. This study uses sensors on various satellites to investigate the impact of mineral dust on cloud microphysical and precipitation processes in a mesoscale convective complex (MCC). A trans-Atlantic dust outbreak of Saharan origin occurring in early March 2004 is considered. For the observed MCCs under a given convective strength, small hydrometeors were found to be more prevalent in the dusty stratiform regions than in those regions that were dust free. Evidence of abundant cloud ice particles in the dust regions, particularly at altitudes where heterogeneous nucleation on mineral dust prevails, further supports the observed changes in clouds and precipitation. The consequences of the microphysical effects of the dust aerosols were to shift the size spectrum of precipitation-sized hydrometeors from heavy precipitation to light precipitation and ultimately to suppress precipitation and prolong the lifecycle of cloud systems, especially over stratiform areas.

  9. An Investigation of Topography Modulated Low Level Moisture Convergence Patterns in the Southern Appalachians Using WRF

    NASA Astrophysics Data System (ADS)

    Wilson, A. M.; Duan, Y.; Barros, A.

    2015-12-01

    The Southern Appalachian Mountains (SAM) region is a biodiversity hot-spot that is vulnerable to land use/land cover changes due to its proximity to the rapidly growing population in the Southeast U.S. Persistent near surface moisture and associated microclimates observed in this region have been documented since the colonization of the area. The landform in this area, in particular in the inner mountain region, is highly complex with nested valleys and ridges. The geometry of the terrain causes distinct diurnal and seasonal local flow patterns that result in highly complex interactions of this low level moisture with meso- and synoptic-scale cyclones passing through the region. The Weather Research and Forecasting model (WRF) was used to conduct high resolution simulations of several case studies of warm season precipitation in the SAM with different synoptic-scale conditions to investigate this interaction between local and larger-scale flow patterns. The aim is to elucidate the microphysical interactions among these shallow orographic clouds and preexisting precipitating cloud systems and identify uncertainties in the model microphysics using in situ measurements. Findings show that ridge-valley precipitation gradients, in particular the "reverse" to the classical orographic effect observed in inner mountain valleys, is linked to horizontal heterogeneity in the vertical structure of low level cloud and precipitation promoted through landform controls on local flow. Moisture convergence patterns follow the peaks and valleys as represented by WRF terrain, and the topography effectively controls their timing and spatial structure. The simulations support the hypothesis that ridge-valley precipitation gradients, and in particular the reverse orographic enhancement effect in inner mountain valleys, is linked to horizontal heterogeneity in the vertical structure of low level clouds and precipitation promoted through landform controls on moisture convergence.

  10. Improving Pixel Level Cloud Optical Property Retrieval using Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Oreopoulos, Lazaros; Marshak, Alexander; Cahalan, Robert F.

    1999-01-01

    The accurate pixel-by-pixel retrieval of cloud optical properties from space is influenced by radiative smoothing due to high-order photon scattering and radiative roughening due to low-order scattering events. Both are caused by cloud heterogeneity and the three-dimensional nature of radiative transfer, and can be studied with the aid of computer simulations. We use Monte Carlo simulations on variable 1-D and 2-D model cloud fields to search for dependencies of smoothing and roughening phenomena on single scattering albedo, solar zenith angle, and cloud characteristics. The results are discussed in the context of high-resolution satellite (such as Landsat) retrieval applications. The current work extends the investigation of the inverse NIPA (Non-local Independent Pixel Approximation) as a tool for removing smoothing and improving retrievals of cloud optical depth. This is accomplished by: (1) delineating the limits of NIPA applicability; (2) exploring NIPA parameter dependencies on cloud macrostructural features, such as mean cloud optical depth and geometrical thickness, and the degree of extinction and cloud top height variability. We also compare parameter values from empirical and theoretical considerations; (3) examining the differences between applying NIPA to radiation quantities vs. applying it directly to optical properties; (4) studying the radiation budget importance of the NIPA corrections as a function of scale. Finally, we discuss fundamental adjustments that need to be considered for successful radiance inversion at non-conservative wavelengths and oblique Sun angles. These adjustments are necessary to remove roughening signatures, which become more prominent with increasing absorption and solar zenith angle.
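
    The forward half of the smoothing picture is easy to demonstrate: a retrieved optical-depth field resembles the true field convolved with a smoothing kernel, and the inverse NIPA amounts to undoing that convolution. The sketch below applies only the forward smoothing, with an exponential kernel of made-up width standing in for the true radiative kernel.

      import numpy as np

      def smooth(field, width=3.0):
          """Circular convolution with a normalized exponential kernel (toy stand-in)."""
          n = field.size
          x = np.arange(n) - n // 2
          kernel = np.exp(-np.abs(x) / width)
          kernel /= kernel.sum()
          return np.real(np.fft.ifft(np.fft.fft(field) *
                                     np.fft.fft(np.fft.ifftshift(kernel))))

      rng = np.random.default_rng(0)
      tau_true = rng.lognormal(mean=0.0, sigma=0.5, size=128)
      tau_retrieved = smooth(tau_true)
      print(tau_true.std(), tau_retrieved.std())   # smoothing damps small-scale variability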

  11. Relating large-scale subsidence to convection development in Arctic mixed-phase marine stratocumulus

    NASA Astrophysics Data System (ADS)

    Young, Gillian; Connolly, Paul J.; Dearden, Christopher; Choularton, Thomas W.

    2018-02-01

    Large-scale subsidence, associated with high-pressure systems, is often imposed in large-eddy simulation (LES) models to maintain the height of boundary layer (BL) clouds. Previous studies have considered the influence of subsidence on warm liquid clouds in subtropical regions; however, the relationship between subsidence and mixed-phase cloud microphysics has not specifically been studied. For the first time, we investigate how widespread subsidence associated with synoptic-scale meteorological features can affect the microphysics of Arctic mixed-phase marine stratocumulus (Sc) clouds. Modelled with LES, four idealised scenarios - a stable Sc, varied droplet (Ndrop) or ice (Nice) number concentrations, and a warming surface (representing motion southwards) - were subjected to different levels of subsidence to investigate the cloud microphysical response. We find strong sensitivities to large-scale subsidence, indicating that high-pressure systems in the ocean-exposed Arctic regions have the potential to generate turbulence and changes in cloud microphysics in any resident BL mixed-phase clouds. Increased cloud convection is modelled with increased subsidence, driven by longwave radiative cooling at cloud top and rain evaporative cooling and latent heating from snow growth below cloud. Subsidence strengthens the BL temperature inversion, thus reducing entrainment and allowing the liquid- and ice-water paths (LWPs, IWPs) to increase. Through increased cloud-top radiative cooling and subsequent convective overturning, precipitation production is enhanced: rain particle number concentrations (Nrain), in-cloud rain mass production rates, and below-cloud evaporation rates increase with increased subsidence. Ice number concentrations (Nice) play an important role, as greater concentrations suppress the liquid phase; therefore, Nice acts to mediate the strength of turbulent overturning promoted by increased subsidence. With a warming surface, a lack of - or low - subsidence allows for rapid BL turbulent kinetic energy (TKE) coupling, leading to a heterogeneous cloud layer, cloud-top ascent, and cumuli formation below the Sc cloud. In these scenarios, higher levels of subsidence act to stabilise the Sc layer, where the combination of these two forcings counteract one another to produce a stable, yet dynamic, cloud layer.

  12. A comparison of policies on nurse faculty workload in the United States.

    PubMed

    Ellis, Peggy A

    2013-01-01

    This article describes nurse faculty workload policies from across the nation in order to assess current practice. There is a well-documented shortage of nursing faculty leading to an increase in workload demands. Increases in faculty workload results in difficulties with work-life balance and dissatisfaction threatening to make nursing education less attractive to young faculty. In order to begin an examination of faculty workload in nursing, existing workloads must be known. Faculty workload data were solicited from nursing programs nationwide and analyzed to determine the current workloads. The most common faculty teaching workload reported overall for nursing is 12 credit hours per semester; however, some variations exist. Consideration should be given to the multiple components of the faculty workload. Research is needed to address the most effective and efficient workload allocation for nursing faculty.

  13. Impact of distributions on the archetypes and prototypes in heterogeneous nanoparticle ensembles.

    PubMed

    Fernandez, Michael; Wilson, Hugh F; Barnard, Amanda S

    2017-01-05

    The magnitude and complexity of the structural and functional data available on nanomaterials requires data analytics, statistical analysis and information technology to drive discovery. We demonstrate that multivariate statistical analysis can recognise the sets of truly significant nanostructures and their most relevant properties in heterogeneous ensembles with different probability distributions. The prototypical and archetypal nanostructures of five virtual ensembles of Si quantum dots (SiQDs) with Boltzmann, frequency, normal, Poisson and random distributions are identified using clustering and archetypal analysis, where we find that their diversity is defined by size and shape, regardless of the type of distribution. At the convex hull of the SiQD ensembles, simple configuration archetypes can efficiently describe a large number of SiQDs, whereas more complex shapes are needed to represent the average ordering of the ensembles. This approach provides a route towards the characterisation of computationally intractable virtual nanomaterial spaces, which can convert big data into smart data, and significantly reduce the workload to simulate experimentally relevant virtual samples.
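
    For the "prototype" half of the analysis, a minimal clustering sketch is shown below: prototypes are taken as the ensemble members nearest each cluster centroid. The archetypal (convex-hull) counterpart is not shown, and the two-column feature matrix is random stand-in data rather than the SiQD ensemble.

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      features = rng.normal(size=(500, 2))      # stand-in for (size, shape) descriptors

      km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(features)
      # Distance of each member to its own cluster centroid:
      dists = np.linalg.norm(features - km.cluster_centers_[km.labels_], axis=1)
      # Prototype of each cluster = member closest to the centroid:
      prototypes = [np.where(km.labels_ == k)[0][np.argmin(dists[km.labels_ == k])]
                    for k in range(km.n_clusters)]
      print("prototype indices:", prototypes)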

  14. Using the C3M Satellite Data Product to Evaluate and Constrain the Cloud Fields in the HadGEM3-UKCA Model with an Aim to Enhance Understanding of the Effects of Clouds on Atmospheric Composition via Photolysis

    NASA Astrophysics Data System (ADS)

    Varma, S.; Voulgarakis, A.; Liu, H.; Crawford, J. H.; Zhang, B.

    2017-12-01

    What drives the variability of trace gases in the troposphere is not well understood, nor is the role of clouds in modulating this variability via the radiative, transport, deposition, heterogeneous chemistry, and lightning effects associated with them. Accurately simulating tropospheric composition and its variability is of utmost importance, as both could have a significant effect on a region's temperature and circulation, as well as on surface climate and the amount of UV radiation in the troposphere. In this presentation, we will examine how clouds can influence tropospheric and lower stratospheric composition by modifying solar radiation, leading to changes in the local actinic flux and subsequently to photolysis, a key driver of chemistry. We will utilize C3M, a unique 3-D cloud data product merged from multiple A-Train satellites (CERES, CloudSat, CALIPSO, and MODIS) and developed at the NASA Langley Research Center, to evaluate the cloud fields and their vertical distribution in the HadGEM3-UKCA model developed by the Natural Environment Research Council (NERC, UK) and the Met Office. This evaluation will involve 1) comparing the effective cloud optical depth (ECOD) as calculated from C3M and the model using the approximate random overlap method, 2) applying 3-D scaling factors from C3M to the model's ECOD and analyzing the changes this makes to the model's cloud fields, and 3) running the scaled model and analyzing the impacts of this cloud field adjustment on the model's estimates of photolysis rates and key tropospheric oxidants such as ozone and OH.

  15. Improvements for retrieval of cloud droplet size by the POLDER instrument

    NASA Astrophysics Data System (ADS)

    Shang, H.; Husi, L.; Bréon, F. M.; Ma, R.; Chen, L.; Wang, Z.

    2017-12-01

    The principles of cloud droplet size retrieval via Polarization and Directionality of the Earth's Reflectance (POLDER) require that clouds be horizontally homogeneous. The retrieval is performed by combining all measurements from an area of 150 km × 150 km to compensate for POLDER's insufficient directional sampling. Using POLDER-like data simulated with the RT3 model, we investigate the impact of cloud horizontal inhomogeneity and directional sampling on the retrieval and analyze which spatial resolution is potentially accessible from the measurements. Case studies show that the sub-grid-scale variability in cloud droplet effective radius (CDR) can significantly reduce the number of valid retrievals and introduce small biases (about 1.5 µm) into the CDR and effective variance (EV) estimates. Nevertheless, sub-grid-scale variations in EV and cloud optical thickness (COT) only influence the EV retrievals, not the CDR estimate. In the directional sampling cases studied, the retrieval using limited observations is accurate and largely free of random noise. Several improvements have been made to the original POLDER droplet size retrieval. For example, measurements in the primary rainbow region (137-145°) are used to ensure retrievals of large droplets (>15 µm) and to reduce the uncertainties caused by cloud heterogeneity. An optimal resolution of 0.8° is determined by weighing successful retrievals against cloud horizontal homogeneity. The improved algorithm is applied to POLDER measurements from 2008, and the retrievals are compared with cloud effective radius estimates from the Moderate Resolution Imaging Spectroradiometer (MODIS). The results indicate that, on the global scale, cloud effective radii and effective variances are larger over the central oceans than over inland and coastal areas. Over heavily polluted regions, cloud droplets have small effective radii and narrow size distributions due to the influence of aerosol particles.

  16. Development of an atmospheric infrared radiation model with high clouds for target detection

    NASA Astrophysics Data System (ADS)

    Bellisario, Christophe; Malherbe, Claire; Schweitzer, Caroline; Stein, Karin

    2016-10-01

    In the field of target detection, simulating the background of the camera FOV (field of view) is a significant issue, as the presence of heterogeneous clouds can strongly affect a target detection algorithm. To address this issue, we present the construction of the CERAMIC package (Cloudy Environment for RAdiance and MIcrophysics Computation), which combines cloud microphysical computation and 3D radiance computation to produce 3D atmospheric infrared radiances in the presence of clouds. The input to CERAMIC is an observer with a spatial position and a defined FOV (specified by a zenith angle and an azimuth angle). We introduce a 3D cloud generator, provided by the French LaMP laboratory, for a statistical, simplified-physics approach; the generator is driven by atmospheric profiles that include a heterogeneity factor for 3D fluctuations. CERAMIC also includes a cloud database from the French CNRM for a physical approach, and we present statistics on the spatial and temporal evolution of these clouds. Molecular optical properties are provided by the MATISSE model (Modélisation Avancée de la Terre pour l'Imagerie et la Simulation des Scènes et de leur Environnement). The 3D radiance is computed with the LUCI model (LUminance de CIrrus), which takes into account 3D microphysics with a resolution of 5 cm-1 over a SWIR bandwidth. To keep computation times low, most of the radiance contributions are calculated with analytical expressions. The multiple scattering phenomena are more difficult to model; here, a discrete ordinate method with correlated-k precision is used to compute the average radiance, supplemented by a 3D fluctuation model (based on a behavioral model) that accounts for microphysical variations. Finally, the following quantities are calculated: transmission, thermal radiance, single scattering radiance, radiance observed through the cloud, and multiple scattering radiance. Spatial images are produced with a dimension of 10 km x 10 km and a resolution of 0.1 km, with each radiance contribution separated. We present first results for typical scenarios. To validate the outputs, a 1D comparison is made against the MATISSE model, with each calculated radiance component treated separately. The 3D performance of the code is shown by comparing LUCI with SHDOM, a reference code that uses the Spherical Harmonic Discrete Ordinate Method for 3D atmospheric radiative transfer. The results obtained by the different codes agree closely, and the sources of the small differences are discussed. A substantial gain in computation time is observed for LUCI relative to SHDOM. We conclude with various scenarios for case analysis.

  17. The chemical composition of cirrus forming aerosol: Lessons from the MACPEX field study

    NASA Astrophysics Data System (ADS)

    Cziczo, D. J.; Froyd, K. D.; Murphy, D. M.

    2012-12-01

    Cirrus clouds are an important factor in the Earth's climate system. These clouds exert a large radiative forcing due to their extensive global coverage and high altitude despite minimal physical and optical thickness. During the Mid-latitude Aerosol and Cloud Properties EXperiment (MACPEX) we measured chemical and physical properties of the aerosols on which cirrus ice crystals formed in situ and in real time using a laser ablation single particle mass spectrometry technique deployed aboard the NASA WB-57 research aircraft. Ice residual particles were also collected for off-line laboratory investigation including electron microscopy. Flights spanned from the Gulf of Mexico to the mid-latitudes over the United States. In most cases heterogeneous freezing was the inferred mechanism of cloud formation and aerosol composition had a significant impact on the nucleation of the ice phase. Mineral dust and some metallic particles were highly enhanced in the ice phase when compared to their abundance outside of cloud. Particles such as soot and biological material, previously suggested as ice nuclei, were not found either due to an inability to nucleate ice or low abundance. Atmospheric implications of these measurements and more advanced future analyses will be discussed.

  18. New Perspectives of Point Clouds Color Management - the Development of Tool in Matlab for Applications in Cultural Heritage

    NASA Astrophysics Data System (ADS)

    Pepe, M.; Ackermann, S.; Fregonese, L.; Achille, C.

    2017-02-01

    The paper describes a method for managing and integrating the colors of point clouds obtained from Terrestrial Laser Scanner (TLS) and Image-Based (IB) survey techniques. Especially in the Cultural Heritage (CH) environment, methods and techniques that improve the color quality of point clouds play a key role, because a homogeneous texture leads to a more accurate reconstruction of the investigated object and to a more pleasant perception of its color. A color management method for point clouds can be useful for a single dataset acquired by a TLS or IB technique, as well as in cases of chromatic heterogeneity resulting from merging different datasets. The latter condition can occur when scans are acquired at different times of the same day, or when scans of the same object are performed over a period of weeks or months and consequently under different environment/lighting conditions. In this paper, a procedure to balance point cloud color in order to make different datasets uniform, improve their chromatic quality, and highlight further details is presented and discussed.
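
    One generic way to reduce the chromatic heterogeneity described above is per-channel moment matching, which rescales a source scan's colours to the mean and standard deviation of a reference scan; this is offered purely as an illustration of the balancing idea, not as the paper's algorithm.

      import numpy as np

      def match_color(colors_ref, colors_src):
          """Match per-channel mean/std of src RGB (arrays of shape (N, 3) in [0, 1]) to ref."""
          out = np.empty_like(colors_src)
          for ch in range(3):
              mu_r, sd_r = colors_ref[:, ch].mean(), colors_ref[:, ch].std()
              mu_s, sd_s = colors_src[:, ch].mean(), colors_src[:, ch].std()
              out[:, ch] = (colors_src[:, ch] - mu_s) * (sd_r / (sd_s + 1e-12)) + mu_r
          return np.clip(out, 0.0, 1.0)

      rng = np.random.default_rng(1)
      ref = rng.random((1000, 3)) * 0.8           # reference scan colours
      src = rng.random((1000, 3)) * 0.5 + 0.3     # scan taken under different lighting
      print(match_color(ref, src).mean(axis=0), ref.mean(axis=0))   # channel means now agree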

  19. Impact of stratospheric aircraft on calculations of nitric acid trihydrate cloud surface area densities using NMC temperatures and 2D model constituent distributions

    NASA Technical Reports Server (NTRS)

    Considine, David B.; Douglass, Anne R.

    1994-01-01

    A parameterization of NAT (nitric acid trihydrate) clouds is developed for use in 2D models of the stratosphere. The parameterization uses model distributions of HNO3 and H2O to determine critical temperatures for NAT formation as a function of latitude and pressure. National Meteorological Center temperature fields are then used to determine monthly temperature frequency distributions, also as a function of latitude and pressure. The fractions of these distributions which fall below the critical temperatures for NAT formation are then used to determine the NAT cloud surface area density for each location in the model grid. By specifying heterogeneous reaction rates as functions of the surface area density, it is then possible to assess the effects of the NAT clouds on model constituent distributions. We also consider the increase in the NAT cloud formation in the presence of a fleet of stratospheric aircraft. The stratospheric aircraft NO(x) and H2O perturbations result in increased HNO3 as well as H2O. This increases the probability of NAT formation substantially, especially if it is assumed that the aircraft perturbations are confined to a corridor region.
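
    The central step of this parameterization reduces to a small computation: at each (latitude, pressure) grid point, the fraction of the monthly temperature distribution falling below the NAT critical temperature scales the assumed cloud surface area density. The sketch below uses a synthetic temperature sample and placeholder values for the critical temperature and reference density, neither of which comes from the paper.

      import numpy as np

      def nat_surface_area_density(temps_k, t_crit_k, sad_ref=1.0e-8):
          """Below-threshold temperature fraction times an assumed reference
          surface area density (cm^2 cm^-3); both constants are placeholders."""
          frac_below = np.mean(np.asarray(temps_k) < t_crit_k)
          return frac_below * sad_ref

      rng = np.random.default_rng(0)
      temps = rng.normal(loc=195.0, scale=3.0, size=1000)   # synthetic NMC-like sample
      print(nat_surface_area_density(temps, t_crit_k=193.0))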

  20. The 1980 eruptions of Mount St. Helens - Physical and chemical processes in the stratospheric clouds

    NASA Technical Reports Server (NTRS)

    Turco, R. P.; Toon, O. B.; Whitten, R. C.; Hamill, P.; Keesee, R. G.

    1983-01-01

    The large and diverse set of observational data collected in the high-altitude plumes of the May 18, May 25, and June 13, 1980 eruptions is organized and analyzed with a view to discerning the processes at work. The data serve to guide and constrain detailed model simulations of the volcanic clouds. For this purpose, use is made of a comprehensive one-dimensional model of stratospheric sulfate aerosols, sulfur precursor gases, and volcanic ash and dust. The model takes into account gas-phase and condensed-phase (heterogeneous) chemistry in the clouds, aerosol nucleation and growth, and cloud expansion. Computational results are presented for the time histories of the gaseous species concentrations, aerosol size distributions, and ash burdens of the eruption clouds. Also investigated are the long-term buildup of stratospheric aerosols in the Northern Hemisphere and the persistent effects of injected chlorine and water vapor on stratospheric ozone. It is concluded that SO2, water vapor, and ash were probably the most important substances injected into the stratosphere by the Mount St. Helens volcano, both with respect to their widespread effects on composition and their effect on climate.

  1. Study of Aerosol - Cloud Interaction over Indo - Gangetic Basin During Normal Monsoon and Drought Years

    NASA Astrophysics Data System (ADS)

    Tiwari, S.; Ramachandran, S.

    2017-12-01

    Clouds are one of the major factors that influence the Earth's radiation budget and also change precipitation patterns. Atmospheric aerosols play a crucial role in modifying cloud properties by acting as cloud condensation nuclei (CCN): they can change the cloud droplet number concentration, cloud droplet size and hence cloud albedo. Therefore, the effect of aerosols on cloud parameters is one of the most important topics in climate change research. In the present study, we investigate the spatial variability of aerosol-cloud interactions during normal monsoon years and drought years over the entire Indo-Gangetic Basin (IGB), one of the most polluted regions of the world. Based on aerosol loading and major emission sources, we divided the entire IGB into six major sub-regions (R1: 66 - 71 E, 24 - 29 N; R2: 71 - 76 E, 29 - 34 N; R3: 76 - 81 E, 26 - 31 N; R4: 81 - 86 E, 23 - 28 N; R5: 86 - 91 E, 22 - 27 N and R6: 91 - 96 E, 23 - 28 N). With this objective, fifteen years (2001 - 2015) of daily mean aerosol optical depth, cloud parameters and rainfall data obtained from the MODerate resolution Imaging Spectroradiometer (MODIS) on board the Terra satellite and the Tropical Rainfall Measuring Mission (TRMM) are analyzed over each sub-region of the IGB for the monsoon season (JJAS: June, July, August and September). Preliminary results suggest that a slight change in aerosol optical depth can significantly affect cloud fraction and other cloud properties, which also show large spatial heterogeneity. During drought years, the occurrence of higher cloud effective radii (CER > 20 µm) decreases from the western to the eastern IGB, suggesting an enhancement in cloud albedo. A relatively weak correlation between cloud optical thickness and rainfall is found during drought years compared with normal monsoon years over the western IGB. The results from the present study will be helpful in reducing uncertainty in the understanding of aerosol-cloud interaction over the IGB. Further details will be presented during the conference.

  2. Chemical Processing of Organics within Clouds: Pilot Study at Whiteface Mountain in Upstate NY

    NASA Astrophysics Data System (ADS)

    Lance, S.; Carlton, A. G.; Barth, M. C.; Schwab, J. J.; Minder, J. R.; Freedman, J. M.; Zhang, J.; Brandt, R. E.; Casson, P.; Brewer, M.; Orlowski, D.; Christiansen, A.

    2017-12-01

    Aqueous chemical processing within cloud and fog water has been identified as a key process in the formation of secondary organic aerosol (SOA) mass, which is found abundantly throughout the troposphere. Yet, significant uncertainty remains regarding the organic chemical reactions taking place within clouds and the conditions under which those reactions occur. Routine long-term measurements from the Whiteface Mountain (WFM) Research Observatory in upstate NY provide a unique and broad view of regional air quality relevant to the formation of particulate matter within clouds, largely due to the fact that the summit of WFM is within non-precipitating clouds 30-50% of the time in summer and the site is undisturbed by local sources. An NSF-funded Cloud Chemistry Workshop in Sept 2016 brought together key researchers at WFM to lay out the most pertinent scientific questions relevant to heterogeneous chemistry occurring within fogs and clouds and to discuss preliminary model intercomparisons. The workshop culminated in a plan to coordinate chemical analyses of cloud water samples focused on chemical constituents thought to be most relevant for SOA formation. Workshop participants also recommended that a pilot study be conducted at WFM to better characterize the meteorological conditions, airflow patterns and clouds intercepting the site, in preparation for future intensive field operations focused on the chemical processing of organics within clouds. This presentation will highlight the experimental design and preliminary observations from the pilot study taking place at WFM in August 2017. Upwind below-cloud measurements of aerosol CCN activation efficiency, size distribution and chemical composition will be compared with similar measurements made at the summit. Under certain conditions, we anticipate that aerosols measured at the summit between cloud events will be representative of cloud droplet residuals recently detrained from the frequent shallow cumulus intercepting the summit. Wind LIDAR and radiosonde observations will be used to link the below-cloud and summit observations. These pre- and post-"cloud processed" aerosols will also be compared with the chemical composition of cloud water samples to evaluate changes to the organic partitioning in the aqueous and aerosol phases.

  3. Heterogeneous ice nucleation of viscous secondary organic aerosol produced from ozonolysis of α-pinene

    NASA Astrophysics Data System (ADS)

    Ignatius, Karoliina; Kristensen, Thomas B.; Järvinen, Emma; Nichman, Leonid; Fuchs, Claudia; Gordon, Hamish; Herenz, Paul; Hoyle, Christopher R.; Duplissy, Jonathan; Garimella, Sarvesh; Dias, Antonio; Frege, Carla; Höppel, Niko; Tröstl, Jasmin; Wagner, Robert; Yan, Chao; Amorim, Antonio; Baltensperger, Urs; Curtius, Joachim; Donahue, Neil M.; Gallagher, Martin W.; Kirkby, Jasper; Kulmala, Markku; Möhler, Ottmar; Saathoff, Harald; Schnaiter, Martin; Tomé, Antonio; Virtanen, Annele; Worsnop, Douglas; Stratmann, Frank

    2016-05-01

    There are strong indications that particles containing secondary organic aerosol (SOA) exhibit amorphous solid or semi-solid phase states in the atmosphere. This may facilitate heterogeneous ice nucleation and thus influence cloud properties. However, experimental ice nucleation studies of biogenic SOA are scarce. Here, we investigated the ice nucleation ability of viscous SOA particles. The SOA particles were produced from the ozone initiated oxidation of α-pinene in an aerosol chamber at temperatures in the range from -38 to -10 °C at 5-15 % relative humidity with respect to water to ensure their formation in a highly viscous phase state, i.e. semi-solid or glassy. The ice nucleation ability of SOA particles with different sizes was investigated with a new continuous flow diffusion chamber. For the first time, we observed heterogeneous ice nucleation of viscous α-pinene SOA for ice saturation ratios between 1.3 and 1.4 significantly below the homogeneous freezing limit. The maximum frozen fractions found at temperatures between -39.0 and -37.2 °C ranged from 6 to 20 % and did not depend on the particle surface area. Global modelling of monoterpene SOA particles suggests that viscous biogenic SOA particles are indeed present in regions where cirrus cloud formation takes place. Hence, they could make up an important contribution to the global ice nucleating particle budget.

  4. Workload and Stress in New Zealand Universities.

    ERIC Educational Resources Information Center

    Boyd, Sally; Wylie, Cathy

    This study examined the workloads of academic, general, support, library, and technical staff of New Zealand universities. It focused on current levels of workload, changes in workload levels and content, connections between workload and stress, and staff attitudes towards the effects of workload changes and educational reforms on the quality of…

  5. Maximum Movement Workloads and High-Intensity Workload Demands by Position in NCAA Division I Collegiate Football.

    PubMed

    Sanders, Gabriel J; Roll, Brad; Peacock, Corey A; Kollock, Roger O

    2018-05-02

    Sanders, GJ, Roll, B, Peacock, CA, and Kollock, RO. Maximum movement workloads and high-intensity workload demands by position in NCAA division I collegiate football. J Strength Cond Res XX(X): 000-000, 2018-The purpose of the study was to quantify the average and maximum (i.e., peak) movement workloads, and the percent of those workloads performed at high intensity, by NCAA division I football athletes during competitive games. Using global positioning system devices (Catapult Sports), low, moderate, high, and total multidirectional movement workloads were quantified for each position. Strategically achieving maximal workloads may improve both conditioning and rehabilitation protocols for athletes as they prepare for competition or return to play after an injury. A total of 40 football athletes were included in the analysis. For the data to be included, athletes were required to participate in ≥75% of the offensive or defensive snaps for any given game. There was a total of 286 data downloads from 13 different games for 8 different football positions. Data were calculated and compared by offensive and defensive position to establish the mean, SD, and maximum workloads during competitive games. The percent high-intensity workload profile was established to assess the total number and percent of high-intensity movement workloads by position; it was calculated by dividing a position's maximal high-intensity movement workload by the total (i.e., the sum of the maximal low-, moderate-, and high-intensity) movement workload. One-way analyses of variance revealed a main effect of football position on total movement workloads and on the percent of workloads performed at high intensity (p ≤ 0.025 for all). Maximal high-intensity workloads were 1.6-4.3 times greater than average high-intensity workloads, and the percent of total workloads performed at high intensity varied greatly by position. Strategically training for and using maximal movement workloads can help ensure that athletes achieve workloads similar to the greatest demands of a competitive game.
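
    The percent high-intensity workload profile defined in this abstract is simple arithmetic, sketched below with invented numbers: a position's maximal high-intensity workload divided by the sum of its maximal low-, moderate-, and high-intensity workloads.

      def high_intensity_percent(max_low, max_moderate, max_high):
          """Percent of a position's total maximal movement workload at high intensity."""
          return 100.0 * max_high / (max_low + max_moderate + max_high)

      # Invented maximal workloads for one position:
      print(f"{high_intensity_percent(420.0, 260.0, 95.0):.1f}% high-intensity")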

  6. The interannual variability of polar stratospheric clouds and related parameters in Antarctica during September and October

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poole, L.R.; McCormick, M.P.; Solomon, S.

    1989-10-01

    Antarctic polar stratospheric cloud (PSC) sightings by the orbiting SAM II sensor during September and October show a pronounced Quasi-Biennial Oscillation (QBO) signal, and October sightings have increased markedly over the past 10 years in years of westerly QBO phase. The QBO in PSC frequency is likely to affect the rate of Antarctic heterogeneous chemical processes and, hence, ozone depletion. Studies of the observed long-term temperature trend suggest that the decadal PSC trend probably results from the ozone decline through its effect on stratospheric heating rates. A more detailed analysis of data from 1986 and 1987 shows that there were more PSCs in 1987 and that they persisted much later into the spring season as compared to 1986. Qualitatively similar behavior was found for the OClO column abundances and 18-km ozone depletion observed at McMurdo Station during these 2 years. These observations suggest that both the intensity and duration of heterogeneous chemical processes are likely greater during colder, QBO-westerly phase years.

  7. Application-Level Interoperability Across Grids and Clouds

    NASA Astrophysics Data System (ADS)

    Jha, Shantenu; Luckow, Andre; Merzky, Andre; Erdely, Miklos; Sehgal, Saurabh

    Application-level interoperability is defined as the ability of an application to utilize multiple distributed heterogeneous resources. Such interoperability is becoming increasingly important with increasing volumes of data, multiple sources of data, as well as resource types. The primary aim of this chapter is to understand different ways in which application-level interoperability can be provided across distributed infrastructure. We achieve this by (i) using the canonical wordcount application, based on an enhanced version of MapReduce that scales out across clusters, clouds, and HPC resources, (ii) establishing how SAGA enables the execution of the wordcount application using MapReduce and other programming models such as Sphere concurrently, and (iii) demonstrating the scale-out of ensemble-based biomolecular simulations across multiple resources. We show user-level control of the relative placement of compute and data, and also provide simple performance measures and analysis of SAGA-MapReduce when using multiple, different, heterogeneous infrastructures concurrently for the same problem instance. Finally, we discuss Azure and some of the system-level abstractions that it provides, and show how it is used to support ensemble-based biomolecular simulations.
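
    As a point of reference for the programming model discussed above, here is a minimal single-process Python sketch of the canonical wordcount pattern: a map phase emitting (word, 1) pairs and a reduce phase summing them. SAGA-MapReduce distributes these phases across clusters, clouds, and HPC resources; this sketch illustrates only the pattern, not SAGA itself.

    ```python
    from collections import defaultdict

    def map_phase(document: str):
        # Emit a (word, 1) pair for every token in the document.
        for word in document.split():
            yield word.lower(), 1

    def reduce_phase(pairs):
        # Sum the counts for each distinct word.
        counts = defaultdict(int)
        for word, n in pairs:
            counts[word] += n
        return dict(counts)

    if __name__ == "__main__":
        text = "the quick brown fox jumps over the lazy dog the end"
        print(reduce_phase(map_phase(text)))
    ```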

  8. Microphysical modeling of cirrus. 2: Sensitivity studies

    NASA Technical Reports Server (NTRS)

    Jensen, Eric J.; Toon, Owen B.; Westphal, Douglas L.; Kinne, Stefan; Heymsfield, Andrew J.

    1994-01-01

    The one-dimensional cirrus model described in part 1 of this issue has been used to study the sensitivity of simulated cirrus microphysical and radiative properties to poorly known model parameters, poorly understood physical processes, and environmental conditions. Model parameters and physical processes investigated include nucleation rate, mode of nucleation (e.g., homogeneous freezing of aerosols and liquid droplets or heterogeneous deposition), ice crystal shape, and coagulation. These studies suggest that the leading sources of uncertainty in the model are the phase change (liquid-solid) energy barrier and the ice-water surface energy, which dominate the homogeneous freezing nucleation rate, and the coagulation sticking efficiency at low temperatures, which controls the production of large ice crystals (radii greater than 100 microns). Environmental conditions considered in sensitivity tests were CN size distribution, vertical wind speed, and cloud height. We found that (unlike stratus clouds) variations in the total number of condensation nuclei (CN) have little effect on cirrus microphysical and radiative properties, since nucleation occurs only on the largest CN at the tail of the size distribution. The total number of ice crystals which nucleate has little or no relationship to the number of CN present and depends primarily on the temperature and the cooling rate. Stronger updrafts (more rapid cooling) generate higher ice number densities, ice water content, cloud optical depth, and net radiative forcing. Increasing the height of the clouds in the model leads to an increase in ice number density, a decrease in effective radius, and a decrease in ice water content. The most prominent effect of increasing cloud height was a rapid increase in the net cloud radiative forcing, which can be attributed to the change in cloud temperature as well as the change in cloud ice size distributions. It has long been recognized that changes in cloud height or cloud area have the greatest potential for causing feedbacks on climate change. Our results suggest that variations in vertical velocity or cloud microphysical changes associated with cloud height changes may also be important.

  9. Observations of Martian ice clouds by the Mars Global Surveyor Thermal Emission Spectrometer: The first Martian year

    NASA Astrophysics Data System (ADS)

    Pearl, John C.; Smith, Michael D.; Conrath, Barney J.; Bandfield, Joshua L.; Christensen, Philip R.

    2001-06-01

    Successful operation of the Mars Global Surveyor spacecraft, beginning in September 1997 (Ls=184°), has permitted extensive observations over more than a Martian year. Initially, thin (normal optical depth <0.06 at 825 cm-1) ice clouds and hazes were widespread, showing a distinct latitudinal gradient. With the onset of a regional dust storm at Ls=224°, ice clouds vanished in the southern hemisphere, to reappear gradually after the decay of the storm. The zonally averaged cloud opacities show little difference between the beginning and end of the first Martian year. A broad low-latitude cloud belt with considerable longitudinal structure was present in early northern summer. Apparently characteristic of the northern summer season, it vanished between Ls=140° and 150°. The latitudinal extent of this feature is apparently controlled by the ascending branch of the Hadley circulation. The most opaque clouds (optical depth ~0.6) were found above the summits of major volcanic features; these showed spatial structure possibly associated with wave activity. Variety among low-lying late-morning clouds suggests localized differences in circulation and microclimates. Limb observations showed extensive optically thin (optical depth <0.04) stratiform clouds at altitudes up to 55 km. Considerable latitude and altitude variations were evident in ice clouds in early northern spring (Ls=25°); near 30 km, thin clouds extended from just north of the equator to ~45°N, nearly to the north polar vortex. A water ice haze was present in the north polar night (Ls=30°) at altitudes up to 40 km. Because little dust was present, this haze probably provided heterogeneous nucleation sites for the formation of CO2 clouds and snowfall at altitudes below ~20 km, where atmospheric temperatures dropped to the CO2 condensation point. The relatively invariant spectral shape of the water ice cloud feature over space and time indicates that ice particle radii are generally between 1 and 4 μm.

  10. EDITORIAL: Aerosol cloud interactions—a challenge for measurements and modeling at the cutting edge of cloud climate interactions

    NASA Astrophysics Data System (ADS)

    Spichtinger, Peter; Cziczo, Daniel J.

    2008-04-01

    Research on aerosol properties and research on cloud characteristics have historically been considered two separate disciplines within the field of atmospheric science. As such, it has been uncommon for a single researcher, or even research group, to have considerable expertise in both subject areas. The recent attention paid to global climate change has shown that clouds can have a considerable effect on the Earth's climate and that one of the most uncertain aspects in their formation, persistence, and ultimate dissipation is the role played by aerosols. This highlights the need for researchers in both disciplines to interact more closely than they have in the past. This is the vision behind this focus issue of Environmental Research Letters. Certain interactions between aerosols and clouds are relatively well studied and understood. For example, it is known that an increase in the aerosol concentration will increase the number of droplets in warm clouds, decrease their average size, reduce the rate of precipitation, and extend cloud lifetime. Other effects are not as well known. For example, persistent ice-supersaturated conditions are observed in the upper troposphere that appear to exceed our understanding of the conditions required for cirrus cloud formation. Further, the interplay of dynamics versus effects purely attributed to aerosols remains highly uncertain. The purpose of this focus issue is to consider the current state of knowledge of aerosol/cloud interactions, to define the contemporary uncertainties, and to outline research foci as we strive to better understand the Earth's climate system. This focus issue brings together laboratory experiments, field data, and model studies. The authors address issues associated with warm liquid water, cold ice, and intermediate-temperature mixed-phase clouds. The topics include the uncertain effects of black carbon and organics, aerosol types of anthropogenic interest, on droplet and ice formation. Phases of water which have not yet been fully defined, for example cubic ice, are considered. The impact of natural aerosols on clouds, for example mineral dust, is also discussed, as well as other natural but highly sensitive effects such as the Wegener-Bergeron-Findeisen process. It is our belief that this focus issue represents a leap forward not only in reducing the uncertainty associated with the interaction of aerosols and clouds but also in forging a new link between groups that must work together to continue progress in this important area of climate science. Focus on Aerosol Cloud Interactions Contents The articles below represent the first accepted contributions; further additions will appear in the near future.
    The global influence of dust mineralogical composition on heterogeneous ice nucleation in mixed-phase clouds (C Hoose, U Lohmann, R Erdin and I Tegen)
    Ice formation via deposition nucleation on mineral dust and organics: dependence of onset relative humidity on total particulate surface area (Zamin A Kanji, Octavian Florea and Jonathan P D Abbatt)
    The Explicit-Cloud Parameterized-Pollutant hybrid approach for aerosol cloud interactions in multiscale modeling framework models: tracer transport results (William I Gustafson Jr, Larry K Berg, Richard C Easter and Steven J Ghan)
    Cloud effects from boreal forest fire smoke: evidence for ice nucleation from polarization lidar data and cloud model simulations (Kenneth Sassen and Vitaly I Khvorostyanov)
    The effect of organic coating on the heterogeneous ice nucleation efficiency of mineral dust aerosols (O Möhler, S Benz, H Saathoff, M Schnaiter, R Wagner, J Schneider, S Walter, V Ebert and S Wagner)
    Enhanced formation of cubic ice in aqueous organic acid droplets (Benjamin J Murray)
    Quantification of water uptake by soot particles (O B Popovicheva, N M Persiantseva, V Tishkova, N K Shonija and N A Zubareva)
    Meridional gradients of light absorbing carbon over northern Europe (D Baumgardner, G Kok, M Krämer and F Weidle)

  11. Assessment of 3D cloud radiative transfer effects applied to collocated A-Train data

    NASA Astrophysics Data System (ADS)

    Okata, M.; Nakajima, T.; Suzuki, K.; Toshiro, I.; Nakajima, T. Y.; Okamoto, H.

    2017-12-01

    This study investigates broadband radiative fluxes in 3D cloud-laden atmospheres using a 3D radiative transfer (RT) model, MCstar, and collocated A-Train cloud data. The 3D extinction coefficients are constructed by a newly devised Minimum cloud Information Deviation Profiling Method (MIDPM) that extrapolates CPR radar profiles at nadir into off-nadir regions within the MODIS swath, based on collocated information of MODIS-derived cloud properties and radar reflectivity profiles. The method is applied to low-level maritime water clouds, for which the 3D-RT simulations are performed. The radiative fluxes thus simulated are compared to those obtained from CERES as a way to validate the MIDPM-constructed clouds and our 3D-RT simulations. The results show that the simulated SW flux agrees with CERES values within 8-50 W m-2. One of the larger biases arose from the cyclic boundary condition that had to be imposed on our computational domain, which was limited to 20 km by 20 km at 1 km resolution. Another source of bias arises from the 1D assumption in cloud property retrievals, particularly for thin clouds, which tend to be affected by spatial heterogeneity, leading to an overestimate of cloud optical thickness. These 3D-RT simulations also serve to address another objective of this study, i.e., to characterize the "observed" 3D-RT effects specific to cloud morphology. We extend the computational domain to 100 km by 100 km for this purpose. The 3D-RT effects are characterized by the errors of existing 1D approximations to the 3D radiation field. The errors are investigated in terms of their dependence on solar zenith angle (SZA) for the satellite-constructed real cloud cases, and we define two indices from the error tendencies. According to the indices, the 3D-RT effects are classified into three types corresponding to three simple cloud morphologies: isolated cloud, upper cloud-roughened, and lower cloud-roughened. These 3D-RT effects linked to cloud morphologies are also visualized in the form of RGB composite maps constructed from three MODIS/Aqua channels, which show cloud optical thickness and cloud height information. Such a classification offers a novel insight into 3D-RT effects in a manner that directly relates to cloud morphology.
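
    The core of the MIDPM, as described above, is a minimum-deviation match: each off-nadir MODIS pixel receives the nadir CPR extinction profile whose collocated MODIS-derived cloud properties deviate least from the pixel's own. A hedged numpy sketch of that matching step follows; the choice and normalization of the properties are assumptions made for illustration, not the paper's exact formulation.

    ```python
    import numpy as np

    def midpm_match(offnadir_props, nadir_props, nadir_profiles):
        """offnadir_props: (M, P) MODIS-derived properties per off-nadir pixel.
        nadir_props:    (N, P) collocated properties at nadir.
        nadir_profiles: (N, Z) CPR-derived extinction profiles at nadir.
        Returns an (M, Z) array of extrapolated profiles."""
        scale = nadir_props.std(axis=0) + 1e-12       # scale-free deviations
        dev = np.abs(offnadir_props[:, None, :] - nadir_props[None, :, :]) / scale
        best = dev.sum(axis=2).argmin(axis=1)         # minimum-deviation nadir pixel
        return nadir_profiles[best]

    # Tiny synthetic example: 3 off-nadir pixels, 4 nadir pixels, 5 vertical levels.
    rng = np.random.default_rng(0)
    profiles = midpm_match(rng.random((3, 2)), rng.random((4, 2)), rng.random((4, 5)))
    print(profiles.shape)  # (3, 5)
    ```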

  12. Predicting operator workload during system design

    NASA Technical Reports Server (NTRS)

    Aldrich, Theodore B.; Szabo, Sandra M.

    1988-01-01

    A workload prediction methodology was developed in response to the need to measure workloads associated with operation of advanced aircraft. The application of the methodology will involve: (1) conducting mission/task analyses of critical mission segments and assigning estimates of workload for the sensory, cognitive, and psychomotor workload components of each task identified; (2) developing computer-based workload prediction models using the task analysis data; and (3) exercising the computer models to produce predictions of crew workload under varying automation and/or crew configurations. Critical issues include reliability and validity of workload predictors and selection of appropriate criterion measures.
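
    To make steps (1) and (3) concrete, here is a minimal sketch of a task-analysis table with sensory, cognitive, and psychomotor workload estimates per task, summed over concurrently active tasks along a mission timeline to flag overload. The tasks, ratings, and per-channel threshold are hypothetical illustrations, not values from the methodology itself.

    ```python
    # task: (start_s, end_s, sensory, cognitive, psychomotor) on a 1-7 scale
    TASKS = {
        "monitor terrain": (0, 60, 5.0, 3.0, 1.0),
        "radio call":      (20, 40, 3.0, 4.0, 2.0),
        "manual hover":    (30, 60, 4.0, 4.5, 6.0),
    }
    THRESHOLD = 7.0  # hypothetical per-channel overload limit
    CHANNELS = ("sensory", "cognitive", "psychomotor")

    for t in range(0, 61, 10):
        active = [v for (s, e, *v) in TASKS.values() if s <= t < e]
        totals = [sum(ch) for ch in zip(*active)] if active else [0.0] * 3
        overload = [n for n, x in zip(CHANNELS, totals) if x > THRESHOLD]
        print(f"t={t:3d}s  totals={totals}  overload={overload or 'none'}")
    ```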

  13. General surgery workloads and practice patterns in the United States, 2007 to 2009: a 10-year update from the American Board of Surgery.

    PubMed

    Valentine, R James; Jones, Andrew; Biester, Thomas W; Cogbill, Thomas H; Borman, Karen R; Rhodes, Robert S

    2011-09-01

    To assess changes in general surgery workloads and practice patterns in the past decade. Nearly 80% of graduating general surgery residents pursue additional training in a surgical subspecialty. This has resulted in a shortage of general surgeons, especially in rural areas. The purpose of this study is to characterize the workloads and practice patterns of general surgeons versus certified surgical subspecialists and to compare these data with those from a previous decade. The surgical operative logs of 4968 individuals recertifying in surgery 2007 to 2009 were reviewed. Data from 3362 (68%) certified only in Surgery (GS) were compared with 1606 (32%) with additional American Board of Medical Specialties certificates (GS+). Data from GS surgeons were also compared with data from GS surgeons recertifying 1995 to 1997. Independent variables were compared using factorial ANOVA. GS surgeons performed a mean of 533 ± 365 procedures annually. Women GS performed far more breast operations and fewer abdomen, alimentary tract and laparoscopic procedures compared to men GS (P < 0.001). GS surgeons recertifying at 10 years performed more abdominal, alimentary tract and laparoscopic procedures compared to those recertifying at 20 or 30 years (P < 0.001). Rural GS surgeons performed far more endoscopic procedures and fewer abdominal, alimentary tract, and laparoscopic procedures than urban counterparts (P < 0.001). The United States medical school graduates had similar workloads and distribution of operations to international medical graduates. Compared to 1995 to 1997, GS surgeons from 2007 to 2009 performed more procedures, especially endoscopic and laparoscopic. GS+ surgeons performed 15% to 33% of all general surgery procedures. GS practice patterns are heterogeneous; gender, age, and practice setting significantly affect operative caseloads. A substantial portion of general surgery procedures currently are performed by GS+ surgeons, whereas GS surgeons continue to perform considerable numbers of specialty operations. Reduced general surgery operative experience in GS+ residencies may negatively impact access to general surgical care. Similarly, narrowing GS residency operative experience may impair specialty operation access.

  14. Integration of Panda Workload Management System with supercomputers

    NASA Astrophysics Data System (ADS)

    De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Novikov, A.; Oleynik, D.; Panitkin, S.; Poyda, A.; Read, K. F.; Ryabinkin, E.; Teslyuk, A.; Velikhov, V.; Wells, J. C.; Wenaus, T.

    2016-09-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3+ petaFLOPS, the next LHC data-taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of the PanDA WMS with supercomputers in the United States, Europe, and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and for local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms. We will present our current accomplishments in running the PanDA WMS at supercomputers and demonstrate our ability to use PanDA as a portal, independent of the computing facility's infrastructure, for High Energy and Nuclear Physics as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.
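
    The "light-weight MPI wrapper" idea described above, launching one independent single-threaded payload per MPI rank so that many serial jobs fill a single batch allocation, can be sketched with mpi4py as below. The payload executable and input naming are hypothetical; this illustrates the pattern, not the PanDA pilot code itself.

    ```python
    # Run with, e.g.:  mpirun -n 16 python wrapper.py
    import subprocess
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Each rank runs a serial payload on its own input file (hypothetical names).
    cmd = ["./run_payload", f"input_{rank:04d}.dat"]
    result = subprocess.run(cmd, capture_output=True, text=True)

    # Collect exit codes at rank 0 so the wrapper can report success or failure.
    codes = comm.gather(result.returncode, root=0)
    if rank == 0:
        failed = [i for i, c in enumerate(codes) if c != 0]
        print(f"{len(codes)} payloads finished; failed ranks: {failed or 'none'}")
    ```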

  15. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    NASA Astrophysics Data System (ADS)

    Klimentov, A.; De, K.; Jha, S.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Wells, J.; Wenaus, T.

    2016-10-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data-taking runs require more resources than the Grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of the PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and for local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments, and it has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal, independent of the computing facility's infrastructure, for High Energy and Nuclear Physics as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.

  16. PanDA for ATLAS distributed computing in the next decade

    NASA Astrophysics Data System (ADS)

    Barreiro Megino, F. H.; De, K.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Padolski, S.; Panitkin, S.; Wenaus, T.; ATLAS Collaboration

    2017-10-01

    The Production and Distributed Analysis (PanDA) system has been developed to meet ATLAS production and analysis requirements for a data-driven workload management system capable of operating at the Large Hadron Collider (LHC) data processing scale. Heterogeneous resources used by the ATLAS experiment are distributed worldwide at hundreds of sites, thousands of physicists analyse the data remotely, the volume of processed data is beyond the exabyte scale, dozens of scientific applications are supported, and data processing requires more than a few billion hours of computing usage per year. PanDA performed very well over the last decade, including the LHC Run 1 data-taking period. However, it was decided to upgrade the whole system concurrently with the LHC's first long shutdown in order to cope with rapidly changing computing infrastructure. After two years of reengineering efforts, PanDA has embedded capabilities for fully dynamic and flexible workload management. The static batch job paradigm was discarded in favor of a more automated and scalable model. Workloads are dynamically tailored for optimal usage of resources, with the brokerage taking network traffic and forecasts into account. Computing resources are partitioned based on dynamic knowledge of their status and characteristics. The pilot has been re-factored around a plugin structure for easier development and deployment. Bookkeeping is handled with both coarse and fine granularities for efficient utilization of pledged or opportunistic resources. An in-house security mechanism authenticates the pilot and data management services in off-grid environments such as volunteer computing and private local clusters. The PanDA monitor has been extensively optimized for performance and extended with analytics to provide aggregated summaries of the system as well as drill-down to operational details. Many other features are planned or have recently been implemented, and the system has been adopted by non-LHC experiments, such as bioinformatics groups successfully running the Paleomix (microbial genome and metagenome) payload on supercomputers. In this paper we will focus on the new and planned features that are most important to the next decade of distributed computing workload management.

  17. Cirrus cloud mimic surfaces in the laboratory: organic acids, bases and NOx heterogeneous reactions

    NASA Astrophysics Data System (ADS)

    Sodeau, J.; Oriordan, B.

    2003-04-01

    There are a variety of biogenic and anthropogenic sources for the simple carboxylic acids to be found in the troposphere, giving rise to levels as high as 45 ppb in certain urban areas. In this regard it is of note that ants of genus Formica produce some 10 Tg of formic acid each year, some ten times that produced by industry. The expected sinks are those generally associated with tropospheric chemistry, the major routes studied to date being wet and dry deposition. No studies have been carried out hitherto on the role of water-ice surfaces in the atmospheric chemistry of carboxylic acids, and the purpose of this paper is to indicate their potential function in the heterogeneous release of atmospheric species such as HONO. The deposition of formic acid on a water-ice surface was studied using FT-RAIR spectroscopy over a range of temperatures between 100 and 165 K. In all cases ionization to the formate (and oxonium) ions was observed. The results were confirmed by TPD (Temperature Programmed Desorption) measurements, which indicated that two distinct surface species adsorb to the ice. Potential reactions between the formic acid/formate ion surface and nitrogen dioxide were subsequently investigated by FT-RAIRS. Co-deposition experiments showed that N2O3 and the NO+ ion (associated with water) were formed as products. A mechanism is proposed to explain these results, which involves direct reaction between the organic acid and nitrogen dioxide. Similar experiments involving acetic acid also indicate ionization on a water-ice surface. The results are put into the context of atmospheric chemistry potentially occurring on cirrus cloud surfaces.

  18. Modeling of Electromagnetic Scattering by Discrete and Discretely Heterogeneous Random Media by Using Numerically Exact Solutions of the Maxwell Equations

    NASA Technical Reports Server (NTRS)

    Dlugach, Janna M.; Mishchenko, Michael I.

    2017-01-01

    In this paper, we discuss some aspects of numerical modeling of electromagnetic scattering by discrete random media by using numerically exact solutions of the macroscopic Maxwell equations. Typical examples of such media are clouds of interstellar dust, clouds of interplanetary dust in the Solar system, dusty atmospheres of comets, particulate planetary rings, clouds in planetary atmospheres, aerosol particles with numerous inclusions, and so on. Our study is based on the results of extensive computations of different characteristics of electromagnetic scattering obtained by using the superposition T-matrix method, which represents a direct computer solver of the macroscopic Maxwell equations for an arbitrary multisphere configuration. As a result, in particular, we clarify the range of applicability of the low-density theories of radiative transfer and coherent backscattering, as well as of widely used effective-medium approximations.

  19. Nucleation of nitric acid hydrates in polar stratospheric clouds by meteoric material

    NASA Astrophysics Data System (ADS)

    James, Alexander D.; Brooke, James S. A.; Mangan, Thomas P.; Whale, Thomas F.; Plane, John M. C.; Murray, Benjamin J.

    2018-04-01

    Heterogeneous nucleation of crystalline nitric acid hydrates in polar stratospheric clouds (PSCs) enhances ozone depletion. However, the identity and mode of action of the particles responsible for nucleation remains unknown. It has been suggested that meteoric material may trigger nucleation of nitric acid trihydrate (NAT, or other nitric acid phases), but this has never been quantitatively demonstrated in the laboratory. Meteoric material is present in two forms in the stratosphere: smoke that results from the ablation and re-condensation of vapours, and fragments that result from the break-up of meteoroids entering the atmosphere. Here we show that analogues of both materials have a capacity to nucleate nitric acid hydrates. In combination with estimates from a global model of the amount of meteoric smoke and fragments in the polar stratosphere we show that meteoric material probably accounts for NAT observations in early season polar stratospheric clouds in the absence of water ice.

  20. Spikes in acute workload are associated with increased injury risk in elite cricket fast bowlers.

    PubMed

    Hulin, Billy T; Gabbett, Tim J; Blanch, Peter; Chapman, Paul; Bailey, David; Orchard, John W

    2014-04-01

    To determine if the comparison of acute and chronic workload is associated with increased injury risk in elite cricket fast bowlers. Data were collected from 28 fast bowlers who completed a total of 43 individual seasons over a 6-year period. Workloads were estimated by summarising the total number of balls bowled per week (external workload), and by multiplying the session rating of perceived exertion by the session duration (internal workload). One-week data (acute workload), together with 4-week rolling average data (chronic workload), were calculated for external and internal workloads. The size of the acute workload in relation to the chronic workload provided either a negative or positive training-stress balance. A negative training-stress balance was associated with an increased risk of injury in the week after exposure, for internal workload (relative risk (RR)=2.2 (CI 1.91 to 2.53), p=0.009), and external workload (RR=2.1 (CI 1.81 to 2.44), p=0.01). Fast bowlers with an internal workload training-stress balance of greater than 200% had a RR of injury of 4.5 (CI 3.43 to 5.90, p=0.009) compared with those with a training-stress balance between 50% and 99%. Fast bowlers with an external workload training-stress balance of more than 200% had a RR of injury of 3.3 (CI 1.50 to 7.25, p=0.033) in comparison to fast bowlers with an external workload training-stress balance between 50% and 99%. These findings demonstrate that large increases in acute workload are associated with increased injury risk in elite cricket fast bowlers.
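
    The acute:chronic comparison above reduces to a simple ratio: the latest week's workload divided by the 4-week rolling average. A minimal sketch follows, with hypothetical weekly ball counts; a balance well above 100% is the kind of spike the study associates with elevated injury risk.

    ```python
    def training_stress_balance(weekly_loads, window=4):
        """Return (acute, chronic, balance_percent) for the most recent week."""
        acute = weekly_loads[-1]         # 1-week acute workload
        recent = weekly_loads[-window:]  # 4-week chronic window
        chronic = sum(recent) / len(recent)
        return acute, chronic, 100.0 * acute / chronic

    balls_per_week = [120, 130, 110, 125, 260]  # hypothetical final-week spike
    acute, chronic, balance = training_stress_balance(balls_per_week)
    print(f"acute={acute}, chronic={chronic:.1f}, balance={balance:.0f}%")
    ```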

  1. Saharan dust, convective lofting, aerosol enhancement zones, and potential impacts on ice nucleation in the tropical upper troposphere

    NASA Astrophysics Data System (ADS)

    Twohy, C. H.; Anderson, B. E.; Ferrare, R. A.; Sauter, K. E.; L'Ecuyer, T. S.; van den Heever, S. C.; Heymsfield, A. J.; Ismail, S.; Diskin, G. S.

    2017-08-01

    Dry aerosol size distributions and scattering coefficients were measured on 10 flights in 32 clear-air regions adjacent to tropical storm anvils over the eastern Atlantic Ocean. Aerosol properties in these regions were compared with those from background air in the upper troposphere at least 40 km from clouds. Median values for aerosol scattering coefficient and particle number concentration >0.3 μm diameter were higher at the anvil edges than in background air, showing that convective clouds loft particles from the lower troposphere to the upper troposphere. These differences are statistically significant. The aerosol enhancement zones extended 10-15 km horizontally and 0.25 km vertically below anvil cloud edges but were not due to hygroscopic growth since particles were measured under dry conditions. Number concentrations of particles >0.3 μm diameter were enhanced more for the cases where Saharan dust layers were identified below the clouds with airborne lidar. Median number concentrations in this size range increased from 100 l-1 in background air to 400 l-1 adjacent to cloud edges with dust below, with larger enhancements for stronger storm systems. Integration with satellite cloud frequency data indicates that this transfer of large particles from low to high altitudes by convection has little impact on dust concentrations within the Saharan Air Layer itself. However, it can lead to substantial enhancement in large dust particles and, therefore, heterogeneous ice nuclei in the upper troposphere over the Atlantic. This may induce a cloud/aerosol feedback effect that could impact cloud properties in the region and downwind.

  2. Simulation of the Upper Clouds and Hazes of Venus Using a Microphysical Cloud Model

    NASA Astrophysics Data System (ADS)

    McGouldrick, K.

    2012-12-01

    Several different microphysical and chemical models of the clouds of Venus have been developed in attempts to reproduce the in situ observations of the Venus clouds made by Pioneer Venus, Venera, and Vega descent probes (Turco et al., 1983 (Icarus 53:18-25), James et al., 1997 (Icarus 129:147-171), Imamura and Hashimoto, 2001 (J. Atm. Sci. 58:3597-3612), and McGouldrick and Toon, 2007 (Icarus 191:1-24)). The model of McGouldrick and Toon has successfully reproduced observations within the condensational middle and lower cloud decks of Venus (between about 48 and 57 km altitude, experiencing conditions similar to Earth's troposphere) and is now being extended to also simulate the microphysics occurring in the upper cloud deck (between altitudes of about 57 km and 70 km, experiencing conditions similar to Earth's stratosphere). In the upper clouds, aerosols composed of a solution of sulfuric acid in water are generated from the reservoir of available water vapor and photochemically produced sulfuric acid vapor. The manner of particle creation (e.g., activation of cloud condensation nuclei, or homogeneous or heterogeneous nucleation) is still incompletely understood, and the atmospheric environment has been measured to be not inconsistent with frozen aerosol particles (either sulfuric acid monohydrate or water ice). The material phase, viscosity, and surface tension of the aerosols (which are strongly dependent upon the local temperature and water vapor concentration) can affect the coagulation efficiencies of the aerosol, leading to variations in the size distributions and other microphysical and radiative properties. Here, I present recent work exploring the effects of nucleation rates and coalescence efficiencies on the simulated Venus upper clouds.

  3. An All Sky Instantaneous Shortwave Solar Radiation Model for Mountainous Terrain

    NASA Astrophysics Data System (ADS)

    Zhang, S.; Li, X.; She, J.

    2017-12-01

    In mountainous terrain, solar radiation shows high heterogeneity in space and time because of strong terrain shading effects and significant variability of cloud cover. While existing GIS-based solar radiation models simulate terrain shading effects with relatively high accuracy, and models based on satellite datasets consider fine-scale cloud attenuation processes, none of these models has considered the geometrical relationships between sun, cloud, and terrain, which are important over mountainous terrain. In this research we propose sky cloud maps to represent cloud distribution in a hemispherical sky using MODIS cloud products. By overlaying the skyshed (the visible area in the hemispherical sky derived from a DEM), sky map, and sky cloud maps, we are able to consider both terrain shading effects and anisotropic cloud attenuation in modeling instantaneous direct and diffuse solar radiation in mountainous terrain. The model is evaluated with field observations from three automatic weather stations in the Tizinafu watershed in the Kunlun Mountains of northwestern China. Overall, under all-sky conditions, the model overestimates instantaneous global solar radiation with a mean absolute relative difference (MARD) of 22%. The model is also evaluated under clear-sky (clearness index of more than 0.75) and partly cloudy sky (clearness index between 0.35 and 0.75) conditions, with MARDs of 5.98% and 23.65%, respectively. The MARD for very cloudy skies (clearness index less than 0.35) is relatively high, but such days occur less than 1% of the time. The model is sensitive to DEM data errors, to the algorithms used in delineating the skyshed, and to errors in MODIS atmosphere and cloud products. Our model provides a novel approach for solar radiation modeling in mountainous areas.
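
    The evaluation metric above, the mean absolute relative difference (MARD) between modelled and observed radiation, is straightforward to compute. A small sketch with hypothetical station values in W m-2:

    ```python
    import numpy as np

    def mard(modelled, observed):
        """Mean absolute relative difference, in percent."""
        modelled = np.asarray(modelled, dtype=float)
        observed = np.asarray(observed, dtype=float)
        return 100.0 * np.mean(np.abs(modelled - observed) / observed)

    obs = [820.0, 460.0, 615.0]  # hypothetical observed global radiation (W m-2)
    mod = [905.0, 540.0, 660.0]  # hypothetical modelled values (W m-2)
    print(f"MARD = {mard(mod, obs):.2f}%")
    ```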

  4. A better understanding of POLDER's cloud droplet size retrieval: impact of cloud horizontal inhomogeneity and directional sampling

    NASA Astrophysics Data System (ADS)

    Shang, H.; Chen, L.; Bréon, F.-M.; Letu, H.; Li, S.; Wang, Z.; Su, L.

    2015-07-01

    The principles of the Polarization and Directionality of the Earth's Reflectance (POLDER) cloud droplet size retrieval require that clouds be horizontally homogeneous. Nevertheless, the retrieval is applied by combining all measurements from an area of 150 km × 150 km to compensate for POLDER's insufficient directional sampling. Using POLDER-like data simulated with the RT3 model, we investigate the impact of cloud horizontal inhomogeneity and directional sampling on the retrieval, and then analyze which spatial resolution is potentially accessible from the measurements. Case studies show that the sub-scale variability in droplet effective radius (CDR) can mislead both the CDR and effective variance (EV) retrievals. Nevertheless, the sub-scale variations in EV and cloud optical thickness (COT) only influence the EV retrievals and not the CDR estimate. In the directional sampling cases studied, the retrieval is accurate using limited observations and is largely independent of random noise. Several improvements have been made to the original POLDER droplet size retrieval. For example, the measurements in the primary rainbow region (137-145°) are used to ensure accurate large-droplet (> 15 μm) retrievals and reduce the uncertainties caused by cloud heterogeneity. We apply the improved method to the POLDER global L1B data for June 2008, and the new CDR results are compared with the operational CDRs. The comparison shows that the operational CDRs tend to be underestimated for large droplets. The reason is that the cloudbow oscillations in the scattering angle region of 145-165° are weak for cloud fields with CDR > 15 μm. Lastly, a sub-scale retrieval case is analyzed, illustrating that a higher resolution, e.g., 42 km × 42 km, can be used when inverting cloud droplet size parameters from POLDER measurements.

  5. The Mars Dust Cycle: Investigating the Effects of Radiatively Active Water Ice Clouds on Surface Stresses and Dust Lifting Potential with the NASA Ames Mars General Circulation Model

    NASA Technical Reports Server (NTRS)

    Kahre, Melinda A.; Hollingsworth, Jeffery

    2012-01-01

    The dust cycle is a critically important component of Mars' current climate system. Dust is present in the atmosphere of Mars year-round, but the dust loading varies with season in a generally repeatable manner. Dust has a significant influence on the thermal structure of the atmosphere and thus greatly affects atmospheric circulation. The dust cycle is the most difficult of the three climate cycles (CO2, water, and dust) to model realistically with general circulation models. Until recently, numerical modeling investigations of the dust cycle have typically not included the effects of couplings to the water cycle through cloud formation. In the Martian atmosphere, dust particles likely provide the seed nuclei for heterogeneous nucleation of water ice clouds. As ice coats atmospheric dust grains, the newly formed cloud particles exhibit different physical and radiative characteristics. Thus, the coupling between the dust and water cycles likely affects the distributions of dust, water vapor, and water ice, and thus atmospheric heating and cooling and the resulting circulations. We use the NASA Ames Mars GCM to investigate the effects of radiatively active water ice clouds on surface stress and the potential for dust lifting. The model includes a state-of-the-art water ice cloud microphysics package and a radiative transfer scheme that accounts for the radiative effects of CO2 gas, dust, and water ice clouds. We focus on simulations that are radiatively forced by a prescribed dust map, and we compare simulations that do and do not include radiatively active clouds. Preliminary results suggest that the magnitude and spatial patterns of surface stress (and thus dust lifting potential) are substantially influenced by the radiative effects of water ice clouds.

  6. Heterogeneous ice nucleation and phase transition of viscous α-pinene secondary organic aerosol

    NASA Astrophysics Data System (ADS)

    Ignatius, Karoliina; Kristensen, Thomas B.; Järvinen, Emma; Nichman, Leonid; Fuchs, Claudia; Gordon, Hamish; Herenz, Paul; Hoyle, Christopher R.; Duplissy, Jonathan; Baltensperger, Urs; Curtius, Joachim; Donahue, Neil M.; Gallagher, Martin W.; Kirkby, Jasper; Kulmala, Markku; Möhler, Ottmar; Saathoff, Harald; Schnaiter, Martin; Virtanen, Annele; Stratmann, Frank

    2016-04-01

    There are strong indications that particles containing secondary organic aerosol (SOA) exhibit amorphous solid or semi-solid phase states in the atmosphere. This may facilitate deposition ice nucleation and thus influence cirrus cloud properties. Global model simulations of monoterpene SOA particles suggest that viscous biogenic SOA are indeed present in regions where cirrus cloud formation takes place. Hence, they could make up an important contribution to the global ice nucleating particle (INP) budget. However, experimental ice nucleation studies of biogenic SOA are scarce. Here, we investigated the ice nucleation ability of viscous SOA particles at the CLOUD (Cosmics Leaving OUtdoor Droplets) experiment at CERN (Ignatius et al., 2015, Järvinen et al., 2015). In the CLOUD chamber, the SOA particles were produced from the ozone initiated oxidation of α-pinene at temperatures in the range from -38 to -10° C at 5-15 % relative humidity with respect to water (RHw) to ensure their formation in a highly viscous phase state, i.e. semi-solid or glassy. We found that particles formed and grown in the chamber developed an asymmetric shape through coagulation. As the RHw was increased to between 35 % at -10° C and 80 % at -38° C, a transition to spherical shape was observed with a new in-situ optical method. This transition confirms previous modelling of the viscosity transition conditions. The ice nucleation ability of SOA particles was investigated with a new continuous flow diffusion chamber SPIN (Spectrometer for Ice Nuclei) for different SOA particle sizes. For the first time, we observed heterogeneous ice nucleation of viscous α-pinene SOA in the deposition mode for ice saturation ratios between 1.3 and 1.4, significantly below the homogeneous freezing limit. The maximum frozen fractions found at temperatures between -36.5 and -38.3° C ranged from 6 to 20 % and did not depend on the particle surface area. References Ignatius, K. et al., Heterogeneous ice nucleation of secondary organic aerosol produced from ozonolysis of α-pinene, Atmos. Chem. Phys. Discuss., 15, 35719-35752, doi:10.5194/acpd-15-35719-2015, 2015. Järvinen, E. et al., Observation of viscosity transition in α-pinene secondary organic aerosol, Atmos. Chem. Phys. Discuss., 15, 28575-28617, doi:10.5194/acpd-15-28575-2015, 2015.

  7. Evaluation of the Display of Cognitive State Feedback to Drive Adaptive Task Sharing

    PubMed Central

    Dorneich, Michael C.; Passinger, Břetislav; Hamblin, Christopher; Keinrath, Claudia; Vašek, Jiři; Whitlow, Stephen D.; Beekhuyzen, Martijn

    2017-01-01

    This paper presents an adaptive system intended to address workload imbalances between pilots in future flight decks. Team performance can be maximized when task demands are balanced within crew capabilities and resources. Good communication skills enable teams to adapt to changes in workload, including the balancing of workload between team members. This work addresses human factors priorities in the aviation domain with the goal to develop concepts that balance operator workload, support future operator roles and responsibilities, and support new task requirements, while allowing operators to focus on the most safety-critical tasks. A traditional closed-loop adaptive system includes the decision logic to turn automated adaptations on and off. This work takes a novel approach of replacing the decision logic, normally performed by the automation, with human decisions. The Crew Workload Manager (CWLM) was developed to objectively display the workload between pilots and recommend task sharing; it is then the pilots who “close the loop” by deciding how to best mitigate unbalanced workload. The workload was manipulated by the Shared Aviation Task Battery (SAT-B), which was developed to provide opportunities for pilots to mitigate imbalances in workload between crew members. Participants were put in situations of high and low workload (i.e., workload was manipulated as opposed to being measured), the workload was then displayed to the pilots, and the pilots were allowed to decide how to mitigate the situation. An evaluation was performed that utilized the SAT-B to manipulate workload and create workload imbalances. Overall, the CWLM reduced the time spent in unbalanced workload and improved crew coordination in task sharing while not negatively impacting concurrent task performance. Balancing workload has the potential to improve crew resource management and task performance over time, and to reduce errors and fatigue. Paired with a real-time workload measurement system, the CWLM could help teams manage their own task load distribution. PMID:28400716

  8. Evaluation of the Display of Cognitive State Feedback to Drive Adaptive Task Sharing.

    PubMed

    Dorneich, Michael C; Passinger, Břetislav; Hamblin, Christopher; Keinrath, Claudia; Vašek, Jiři; Whitlow, Stephen D; Beekhuyzen, Martijn

    2017-01-01

    This paper presents an adaptive system intended to address workload imbalances between pilots in future flight decks. Team performance can be maximized when task demands are balanced within crew capabilities and resources. Good communication skills enable teams to adapt to changes in workload, including the balancing of workload between team members. This work addresses human factors priorities in the aviation domain with the goal to develop concepts that balance operator workload, support future operator roles and responsibilities, and support new task requirements, while allowing operators to focus on the most safety-critical tasks. A traditional closed-loop adaptive system includes the decision logic to turn automated adaptations on and off. This work takes a novel approach of replacing the decision logic, normally performed by the automation, with human decisions. The Crew Workload Manager (CWLM) was developed to objectively display the workload between pilots and recommend task sharing; it is then the pilots who "close the loop" by deciding how to best mitigate unbalanced workload. The workload was manipulated by the Shared Aviation Task Battery (SAT-B), which was developed to provide opportunities for pilots to mitigate imbalances in workload between crew members. Participants were put in situations of high and low workload (i.e., workload was manipulated as opposed to being measured), the workload was then displayed to the pilots, and the pilots were allowed to decide how to mitigate the situation. An evaluation was performed that utilized the SAT-B to manipulate workload and create workload imbalances. Overall, the CWLM reduced the time spent in unbalanced workload and improved crew coordination in task sharing while not negatively impacting concurrent task performance. Balancing workload has the potential to improve crew resource management and task performance over time, and to reduce errors and fatigue. Paired with a real-time workload measurement system, the CWLM could help teams manage their own task load distribution.

  9. Load Balancing in Cloud Computing Environment Using Improved Weighted Round Robin Algorithm for Nonpreemptive Dependent Tasks.

    PubMed

    Devi, D Chitra; Uthariaraj, V Rhymend

    2016-01-01

    Cloud computing uses the concepts of scheduling and load balancing to migrate tasks to underutilized VMs for effectively sharing the resources. The scheduling of nonpreemptive tasks in the cloud computing environment is an irrecoverable restraint, and hence they have to be assigned to the most appropriate VMs at the initial placement itself. Practically, the arriving jobs consist of multiple interdependent tasks, and they may execute the independent tasks in multiple VMs or in the same VM's multiple cores. Also, the jobs arrive during the run time of the server in varying random intervals under various load conditions. The participating heterogeneous resources are managed by allocating the tasks to appropriate resources by static or dynamic scheduling to make cloud computing more efficient, which thus improves user satisfaction. The objective of this work is to introduce and evaluate the proposed scheduling and load balancing algorithm by considering the capabilities of each virtual machine (VM), the task length of each requested job, and the interdependency of multiple tasks. The performance of the proposed algorithm is studied by comparing it with the existing methods.
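
    For intuition, here is a generic weighted round-robin sketch: tasks are dealt to VMs in proportion to a capability weight such as core count. The authors' improved algorithm additionally accounts for task length and task interdependency, which this minimal version deliberately omits; the VM names and weights are hypothetical.

    ```python
    from itertools import cycle

    def weighted_round_robin(tasks, vm_weights):
        """vm_weights: {vm_name: integer weight}. Returns {vm_name: [tasks]}."""
        # Expand each VM into 'weight' slots, then deal tasks over the cycle.
        slots = [vm for vm, w in vm_weights.items() for _ in range(w)]
        assignment = {vm: [] for vm in vm_weights}
        for task, vm in zip(tasks, cycle(slots)):
            assignment[vm].append(task)
        return assignment

    tasks = [f"task{i}" for i in range(10)]
    print(weighted_round_robin(tasks, {"vm_small": 1, "vm_large": 3}))
    ```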

  10. Load Balancing in Cloud Computing Environment Using Improved Weighted Round Robin Algorithm for Nonpreemptive Dependent Tasks

    PubMed Central

    Devi, D. Chitra; Uthariaraj, V. Rhymend

    2016-01-01

    Cloud computing uses the concepts of scheduling and load balancing to migrate tasks to underutilized VMs for effectively sharing the resources. The scheduling of nonpreemptive tasks in the cloud computing environment is an irrecoverable restraint, and hence they have to be assigned to the most appropriate VMs at the initial placement itself. Practically, the arriving jobs consist of multiple interdependent tasks, and they may execute the independent tasks in multiple VMs or in the same VM's multiple cores. Also, the jobs arrive during the run time of the server in varying random intervals under various load conditions. The participating heterogeneous resources are managed by allocating the tasks to appropriate resources by static or dynamic scheduling to make cloud computing more efficient, which thus improves user satisfaction. The objective of this work is to introduce and evaluate the proposed scheduling and load balancing algorithm by considering the capabilities of each virtual machine (VM), the task length of each requested job, and the interdependency of multiple tasks. The performance of the proposed algorithm is studied by comparing it with the existing methods. PMID:26955656

  11. [Effects of mental workload on work ability in primary and secondary school teachers].

    PubMed

    Xiao, Yuanmei; Li, Weijuan; Ren, Qingfeng; Ren, Xiaohui; Wang, Zhiming; Wang, Mianzhen; Lan, Yajia

    2015-02-01

    To investigate the change pattern of primary and secondary school teachers' work ability with changes in their mental workload. A total of 901 primary and secondary school teachers were selected by random cluster sampling, and their mental workload and work ability were then assessed by the National Aeronautics and Space Administration-Task Load Index (NASA-TLX) and Work Ability Index (WAI) questionnaires, whose reliability and validity had been tested. The effects of their mental workload on work ability were analyzed. Primary and secondary school teachers' work ability reached the highest level at a certain level of mental workload (55.73 < mental workload ≤ 64.10). When their mental workload was lower than this level, their work ability had a positive correlation with the mental workload. Their work ability increased or remained stable with increasing mental workload. Moreover, the percentage of teachers with good work ability increased, while that of teachers with moderate work ability decreased. But when their mental workload was higher than this level, their work ability had a negative correlation with the mental workload. Their work ability significantly decreased with increasing mental workload (P < 0.01). Furthermore, the percentage of teachers with good work ability decreased, while that of teachers with moderate work ability increased (P < 0.001). Too high or too low a mental workload will result in a decline in primary and secondary school teachers' work ability. A moderate mental workload (55.73∼64.10) will benefit the maintenance and stabilization of their work ability.

  12. Grid heterogeneity in in-silico experiments: an exploration of drug screening using DOCK on cloud environments.

    PubMed

    Yim, Wen-Wai; Chien, Shu; Kusumoto, Yasuyuki; Date, Susumu; Haga, Jason

    2010-01-01

    Large-scale in-silico screening is a necessary part of drug discovery and Grid computing is one answer to this demand. A disadvantage of using Grid computing is the heterogeneous computational environments characteristic of a Grid. In our study, we have found that for the molecular docking simulation program DOCK, different clusters within a Grid organization can yield inconsistent results. Because DOCK in-silico virtual screening (VS) is currently used to help select chemical compounds to test with in-vitro experiments, such differences have little effect on the validity of using virtual screening before subsequent steps in the drug discovery process. However, it is difficult to predict whether the accumulation of these discrepancies over sequentially repeated VS experiments will significantly alter the results if VS is used as the primary means for identifying potential drugs. Moreover, such discrepancies may be unacceptable for other applications requiring more stringent thresholds. This highlights the need for establishing a more complete solution to provide the best scientific accuracy when executing an application across Grids. One possible solution to platform heterogeneity in DOCK performance explored in our study involved the use of virtual machines as a layer of abstraction. This study investigated the feasibility and practicality of using virtual machine and recent cloud computing technologies in a biological research application. We examined the differences and variations of DOCK VS variables, across a Grid environment composed of different clusters, with and without virtualization. The uniform computer environment provided by virtual machines eliminated inconsistent DOCK VS results caused by heterogeneous clusters, however, the execution time for the DOCK VS increased. In our particular experiments, overhead costs were found to be an average of 41% and 2% in execution time for two different clusters, while the actual magnitudes of the execution time costs were minimal. Despite the increase in overhead, virtual clusters are an ideal solution for Grid heterogeneity. With greater development of virtual cluster technology in Grid environments, the problem of platform heterogeneity may be eliminated through virtualization, allowing greater usage of VS, and will benefit all Grid applications in general.

  13. Heavy vehicle driver workload assessment. Task 4, review of workload and related research

    DOT National Transportation Integrated Search

    This report reviews literature on workload measures and related research. It depicts the preliminary development of a theoretical basis for relating driving workload to highway safety and a selective review of driver performance evaluation, workload ...

  14. Impact of the Bergeron-Findeisen process on the release of aerosol particles during the evolution of cloud ice

    NASA Astrophysics Data System (ADS)

    Schwarzenböck, A.; Mertes, S.; Heintzenberg, J.; Wobrock, W.; Laj, P.

    The paper focuses on the redistribution of aerosol particles (APs) during the artificial nucleation and subsequent growth of ice crystals in a supercooled cloud. A significant number of the supercooled cloud droplets during icing periods (seeding agents: C3H8, CO2) did not freeze as was presumed prior to the experiment but instead evaporated. The net mass flux of water vapour from the evaporating droplets to the nucleating ice crystals (Bergeron-Findeisen mechanism) led to the release of residual particles that simultaneously appeared in the interstitial phase. The strong decrease of the droplet residuals confirms the nucleation of ice particles on seeding germs without natural aerosol particles serving as ice nuclei. As the number of residual particles during the seedings did not drop to zero, other processes such as heterogeneous ice nucleation, spontaneous freezing, entrainment of supercooled droplets and diffusion to the created particle-free ice germs must have contributed to the experimental findings. During the icing periods, residual mass concentrations in the condensed phase dropped by a factor of 1.1-6.7, as compared to the unperturbed supercooled cloud. As the Bergeron-Findeisen process also occurs without artificial seeding in the atmosphere, this study demonstrated that the hydrometeors in mixed-phase clouds might be much cleaner than anticipated for the simple freezing process of supercooled droplets in tropospheric mid-latitude clouds.

  15. Transport pilot workload - A comparison of two subjective techniques

    NASA Technical Reports Server (NTRS)

    Battiste, Vernol; Bortolussi, Michael

    1988-01-01

    Although SWAT and NASA-TLX workload scales have been compared on numerous occasions, they have not been compared in the context of transport operations. Transport pilot workload has traditionally been characterized as long periods of low workload with occasional spikes of high workload. Thus, the relative sensitivity of the scales to variations in workload at the low end of the scale was evaluated. This study was part of a larger study which investigated workload measures for aircraft certification, conducted in a Phase II certified Link/Boeing 727 simulator. No significant main effects were found for any performance-based measures of workload. However, both SWAT and NASA-TLX were sensitive to differences between high and low workload flights and to differences among flight segments. NASA-TLX (but not SWAT) was sensitive to the increase in workload during the cruise segment of the high workload flight. Between-subject variability was high for SWAT. NASA-TLX was found to be stable when compared in the test/retest paradigm. A test/retest by segment interaction suggested that this was not the case for SWAT ratings.

  16. Aerosol Indirect Effects on Cirrus Clouds in Global Aerosol-Climate Models

    NASA Astrophysics Data System (ADS)

    Liu, X.; Zhang, K.; Wang, Y.; Neubauer, D.; Lohmann, U.; Ferrachat, S.; Zhou, C.; Penner, J.; Barahona, D.; Shi, X.

    2015-12-01

    Cirrus clouds play an important role in regulating the Earth's radiative budget and water vapor distribution in the upper troposphere. Aerosols can act as solution droplets or ice nuclei that promote ice nucleation in cirrus clouds. Anthropogenic emissions from fossil fuel and biomass burning activities have substantially perturbed and enhanced concentrations of aerosol particles in the atmosphere. Global aerosol-climate models (GCMs) have now been used to quantify the radiative forcing and effects of aerosols on cirrus clouds (IPCC AR5). However, the estimate uncertainty is very large due to the different representations of ice cloud formation and evolution processes in GCMs. In addition, large discrepancies have been found between model simulations in terms of the spatial distribution of ice-nucleating aerosols, relative humidity, and temperature fluctuations, which contribute to different estimates of the aerosol indirect effect through cirrus clouds. In this presentation, four GCMs with state-of-the-art representations of cloud microphysics and aerosol-cloud interactions are used to estimate the aerosol indirect effects on cirrus clouds and to identify the causes of the discrepancies. The estimated global and annual mean anthropogenic aerosol indirect effect through cirrus clouds ranges from 0.1 W m-2 to 0.3 W m-2 in terms of the top-of-the-atmosphere (TOA) net radiation flux, and 0.5-0.6 W m-2 for the TOA longwave flux. Despite the good agreement on the global mean, large discrepancies are found at the regional scale, and the physics behind the simulated aerosol indirect effect differs dramatically among the models. Our analysis suggests that the burden of ice-nucleating aerosols in the upper troposphere, ice nucleation frequency, and the relative role of ice formation processes (i.e., homogeneous versus heterogeneous nucleation) play key roles in determining the characteristics of the simulated aerosol indirect effects. In addition to the indirect effect estimates, we also use field campaign measurements and satellite retrievals to evaluate the simulated micro- and macrophysical properties of ice clouds in the four GCMs.

  17. Reducing Errors in Satellite Simulated Views of Clouds with an Improved Parameterization of Unresolved Scales

    NASA Astrophysics Data System (ADS)

    Hillman, B. R.; Marchand, R.; Ackerman, T. P.

    2016-12-01

    Satellite instrument simulators have emerged as a means to reduce errors in model evaluation by producing simulated or pseudo-retrievals from model fields, which account for limitations in the satellite retrieval process. Because of the mismatch in resolved scales between satellite retrievals and large-scale models, model cloud fields must first be downscaled to scales consistent with satellite retrievals. This downscaling is analogous to that required for model radiative transfer calculations. The assumption is often made in both model radiative transfer codes and satellite simulators that the unresolved clouds follow maximum-random overlap with horizontally homogeneous cloud condensate amounts. We examine errors in simulated MISR and CloudSat retrievals that arise due to these assumptions by applying the MISR and CloudSat simulators to cloud resolving model (CRM) output generated by the Super-parameterized Community Atmosphere Model (SP-CAM). Errors are quantified by comparing simulated retrievals performed directly on the CRM fields with those simulated by first averaging the CRM fields to approximately 2-degree resolution, applying a "subcolumn generator" to regenerate pseudo-resolved cloud and precipitation condensate fields, and then applying the MISR and CloudSat simulators on the regenerated condensate fields. We show that errors due to both assumptions of maximum-random overlap and homogeneous condensate are significant (relative to uncertainties in the observations and other simulator limitations). The treatment of precipitation is particularly problematic for CloudSat-simulated radar reflectivity. We introduce an improved subcolumn generator for use with the simulators, and show that these errors can be greatly reduced by replacing the maximum-random overlap assumption with the more realistic generalized overlap and incorporating a simple parameterization of subgrid-scale cloud and precipitation condensate heterogeneity.
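
    To make the overlap idea concrete, the following is a minimal sketch of a subcolumn generator under the simplest possible assumption, random overlap between levels; it is not the paper's improved generator, which uses generalized overlap and adds subgrid condensate heterogeneity, and all names and numbers here are illustrative:

      import numpy as np

      def random_overlap_subcolumns(cloud_frac, n_subcol, seed=None):
          """Generate binary cloud masks (n_subcol x n_levels) from a profile
          of grid-mean cloud fractions, assuming random overlap: each level is
          sampled independently of the levels above and below it."""
          rng = np.random.default_rng(seed)
          cloud_frac = np.asarray(cloud_frac, dtype=float)
          u = rng.random((n_subcol, cloud_frac.size))
          return u < cloud_frac[np.newaxis, :]

      # Example: 1000 subcolumns for a 3-level grid-mean cloud fraction profile.
      mask = random_overlap_subcolumns([0.2, 0.5, 0.1], n_subcol=1000, seed=0)
      print(mask.mean(axis=0))  # recovers ~[0.2, 0.5, 0.1] per level

    Maximum-random or generalized overlap schemes replace the independent per-level draw with draws that are correlated between adjacent levels.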

  18. The Radiative Consistency of Atmospheric Infrared Sounder and Moderate Resolution Imaging Spectroradiometer Cloud Retrievals

    NASA Technical Reports Server (NTRS)

    Kahn, Brian H.; Fishbein, Evan; Nasiri, Shaima L.; Eldering, Annmarie; Fetzer, Eric J.; Garay, Michael J.; Lee, Sung-Yung

    2007-01-01

    The consistency of cloud top temperature (Tc) and effective cloud fraction (f) retrieved by the Atmospheric Infrared Sounder (AIRS)/Advanced Microwave Sounding Unit (AMSU) observation suite and the Moderate Resolution Imaging Spectroradiometer (MODIS) on the EOS-Aqua platform is investigated. Collocated AIRS and MODIS Tc and f are compared via an 'effective scene brightness temperature' (Tb,e). Tb,e is calculated with partial field of view (FOV) contributions from Tc and surface temperature (Ts), weighted by f and 1-f, respectively. AIRS reports up to two cloud layers while MODIS reports up to one. However, MODIS reports Tc, Ts, and f at a higher spatial resolution than AIRS. As a result, pixel-scale comparisons of Tc and f are difficult to interpret, demonstrating the need for alternatives such as Tb,e. AIRS-MODIS Tb,e differences (ΔTb,e) for identical observing scenes are useful as a diagnostic for cloud quantity comparisons. The smallest values of ΔTb,e are for high and opaque clouds, with increasing scatter in ΔTb,e for clouds of smaller opacity and lower altitude. A persistent positive bias in ΔTb,e is observed in warmer and low-latitude scenes, characterized by a mixture of MODIS CO2 slicing and 11-μm window retrievals. These scenes contain heterogeneous cloud cover, including mixtures of multilayered cloudiness and misplaced MODIS cloud top pressure. The spatial patterns of ΔTb,e are systematic and do not correlate well with collocated AIRS-MODIS radiance differences, which are more random in nature and smaller in magnitude than ΔTb,e. This suggests that the observed inconsistencies in AIRS and MODIS cloud fields are dominated by retrieval algorithm differences, instead of differences in the observed radiances. The results presented here have implications for the validation of cloudy satellite retrieval algorithms, and the use of cloud products in quantitative analyses.
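
    Read literally, the weighting described above makes Tb,e a linear mixture of cloud and surface contributions. A minimal sketch of that reading (the operational products may apply the weighting in radiance space via the Planck function; this is an illustration only, with hypothetical scene values):

      def effective_scene_brightness_temperature(t_cloud, t_surface, f):
          """Tb,e as a partial-FOV mixture: cloud top temperature weighted by
          the effective cloud fraction f, surface temperature by 1 - f."""
          return f * t_cloud + (1.0 - f) * t_surface

      # Hypothetical scene: f = 0.6, 220 K cloud top, 290 K surface.
      print(effective_scene_brightness_temperature(220.0, 290.0, 0.6))  # 248.0 K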

  19. Heterogeneous nucleation of ice on carbon surfaces.

    PubMed

    Lupi, Laura; Hudait, Arpa; Molinero, Valeria

    2014-02-26

    Atmospheric aerosols can promote the heterogeneous nucleation of ice, impacting the radiative properties of clouds and Earth's climate. The experimental investigation of heterogeneous freezing of water droplets by carbonaceous particles reveals widespread ice freezing temperatures. It is not known which structural and chemical characteristics of soot account for the variability in ice nucleation efficiency. Here we use molecular dynamics simulations to investigate the nucleation of ice from liquid water in contact with graphitic surfaces. We find that atomically flat carbon surfaces promote heterogeneous nucleation of ice, while molecularly rough surfaces with the same hydrophobicity do not. Graphitic surfaces and other surfaces that promote ice nucleation induce layering in the interfacial water, suggesting that the order imposed by the surface on liquid water may play an important role in the heterogeneous nucleation mechanism. We investigate a large set of graphitic surfaces of various dimensions and radii of curvature and find that variations in nanostructures alone could account for the spread in the freezing temperatures of ice on soot in experiments. We conclude that a characterization of the nanostructure of soot is needed to predict its ice nucleation efficiency.

  20. Reconsidering the conceptualization of nursing workload: literature review.

    PubMed

    Morris, Roisin; MacNeela, Padraig; Scott, Anne; Treacy, Pearl; Hyde, Abbey

    2007-03-01

    This paper reports a literature review that aimed to analyse the way in which nursing intensity and patient dependency have been considered to be conceptually similar to nursing workload, and to propose a model to show how these concepts actually differ in both theoretical and practical terms. The literature on nursing workload considers the concepts of patient 'dependency' and nursing 'intensity' in the realm of nursing workload. These concepts differ by definition but are used to measure the same phenomenon, i.e. nursing workload. The literature search was undertaken in 2004 using electronic databases, reference lists and other available literature. Papers were sourced from the Medline, Psychlit, CINAHL and Cochrane databases and through the general search engine Google. The keywords focussed on nursing workload, nursing intensity and patient dependency. Nursing work and workload concepts and labels are defined and measured in different and often contradictory ways. It is vitally important to understand these differences when using such conceptualizations to measure nursing workload. A preliminary model is put forward to clarify the relationships between nursing workload concepts. In presenting a preliminary model of nursing workload, it is hoped that nursing workload might be better understood so that it becomes more visible and recognizable. Increasing the visibility of nursing workload should have a positive impact on nursing workload management and on the provision of patient care.

  1. Classification Systems for Individual Differences in Multiple-task Performance and Subjective Estimates of Workload

    NASA Technical Reports Server (NTRS)

    Damos, D. L.

    1984-01-01

    Human factors practitioners often are concerned with mental workload in multiple-task situations. Investigations of these situations have demonstrated repeatedly that individuals differ in their subjective estimates of workload. These differences may be attributed in part to individual differences in definitions of workload. However, after allowing for differences in the definition of workload, there are still unexplained individual differences in workload ratings. The relation between individual differences in multiple-task performance, subjective estimates of workload, information processing abilities, and the Type A personality trait were examined.

  2. Interhemispheric Differences in Denitrification and Related Processes Affecting Polar Ozone

    NASA Technical Reports Server (NTRS)

    Santee, M. L.; Read, W. G.; Waters, J. W.; Froidevaux, L.; Manney, G. L.; Flower, D. A.; Jarnot, R. F.; Harwood, R. S.; Peckham, G. E.

    1994-01-01

    The severe depletion of stratospheric ozone over Antarctica in late winter and early spring is caused by enhanced ClO abundances arising from heterogeneous reactions on polar stratospheric clouds (PSCs). ClO abundances comparable to those over Antarctica have also been observed throughout the Arctic vortex, but the accompanying loss of Arctic ozone has been much less severe.

  3. Cirrus Parcel Model Comparison Project. Phase 1; The Critical Components to Simulate Cirrus Initiation Explicitly

    NASA Technical Reports Server (NTRS)

    Lin, Ruei-Fong; Starr, David OC; DeMott, Paul J.; Cotton, Richard; Sassen, Kenneth; Jensen, Eric; Einaudi, Franco (Technical Monitor)

    2001-01-01

    The Cirrus Parcel Model Comparison Project, a project of the GCSS (GEWEX Cloud System Studies) Working Group on Cirrus Cloud Systems, involves the systematic comparison of current models of ice crystal nucleation and growth for specified, typical, cirrus cloud environments. In Phase I of the project reported here, simulated cirrus cloud microphysical properties are compared for situations of "warm" (-40 C) and "cold" (-60 C) cirrus, both subject to updrafts of 4, 20 and 100 centimeters per second. Five models participated. The various models employ explicit microphysical schemes wherein the size distribution of each class of particles (aerosols and ice crystals) is resolved into bins or treated separately. Simulations are made including both the homogeneous and heterogeneous ice nucleation mechanisms. A single initial aerosol population of sulfuric acid particles is prescribed for all simulations. To isolate the treatment of the homogeneous freezing (of haze droplets) nucleation process, the heterogeneous nucleation mechanism is disabled for a second parallel set of simulations. Qualitative agreement is found for the homogeneous-nucleation-only simulations, e.g., the number density of nucleated ice crystals increases with the strength of the prescribed updraft. However, significant quantitative differences are found. Detailed analysis reveals that the homogeneous nucleation rate, haze particle solution concentration, and water vapor uptake rate by ice crystal growth (particularly as controlled by the deposition coefficient) are critical components that lead to differences in predicted microphysics. Systematic bias exists between results based on a modified classical theory approach and models using an effective freezing temperature approach to the treatment of nucleation. Each approach is constrained by critical freezing data from laboratory studies, but each includes assumptions that can only be justified by further laboratory research. Consequently, it is not yet clear if the two approaches can be made consistent. Large haze particles may deviate considerably from equilibrium size in moderate to strong updrafts (20-100 centimeters per second) at -60 C when the commonly invoked equilibrium assumption is lifted. The resulting difference in particle-size-dependent solution concentration of haze particles may significantly affect the ice particle formation rate during the initial nucleation interval. The uptake rate for water vapor excess by ice crystals is another key component regulating the total number of nucleated ice crystals. This rate, the product of particle number concentration and ice crystal diffusional growth rate, which is particularly sensitive to the deposition coefficient when ice particles are small, modulates the peak particle formation rate achieved in an air parcel and the duration of the active nucleation time period. The effects of heterogeneous nucleation are most pronounced in weak updraft situations. Vapor competition by the heterogeneously nucleated ice crystals may limit the achieved ice supersaturation and thus suppresses the contribution of homogeneous nucleation. Correspondingly, ice crystal number density is markedly reduced. Definitive laboratory and atmospheric benchmark data are needed for the heterogeneous nucleation process. Inter-model differences are correspondingly greater than in the case of the homogeneous nucleation process acting alone.

  4. Interactive Query Processing in Big Data Systems: A Cross Industry Study of MapReduce Workloads

    DTIC Science & Technology

    2012-04-02

    invite cluster operators and the broader data management community to share additional knowledge about their MapReduce workloads. 9. ACKNOWLEDGMENTS...against real-life production MapReduce workloads. Knowledge of such workloads is currently limited to a handful of technology companies [19, 8, 48, 41...database management insights would benefit from checking workload assumptions against empirical measurements. The broad spectrum of workloads analyzed allows

  5. Flight Crew Workload Evaluation Based on the Workload Function Distribution Method.

    PubMed

    Zheng, Yiyuan; Lu, Yanyu; Jie, Yuwen; Fu, Shan

    2017-05-01

    The minimum flight crew on the flight deck should be established according to the workload for individual crewmembers. Typical workload measures consist of three types: subjective rating scales, task performance, and psychophysiological measures. However, all these measures have their own limitations. To reflect flight crew workload more specifically and comprehensively within the flight environment, and to comply more directly with airworthiness regulations, the Workload Function Distribution Method, which combines six basic workload functions, was proposed. The analysis was based on conditions differing in the number of workload functions. Each condition was analyzed from two aspects: overall proportion and effective proportion. Three types of approach tasks were used in this study and the NASA-TLX scale was implemented for comparison. Neither the one-function condition nor the two-function condition produced the same results as NASA-TLX. However, both the three-function and the four- to six-function conditions were consistent with NASA-TLX. Further, within the four- to six-function conditions the two measures diverged: the overall proportions showed no significant differences, while the effective proportions did. The results show that conditions with one or two functions seemed to have no influence on workload, while executing three functions or four to six functions had an impact on workload. Moreover, effective proportions of workload functions indicated workload more precisely than overall proportions, especially in conditions with multiple functions. Zheng Y, Lu Y, Jie Y, Fu S. Flight crew workload evaluation based on the workload function distribution method. Aerosp Med Hum Perform. 2017; 88(5):481-486.

  6. Supporting Academic Workloads in Online Learning

    ERIC Educational Resources Information Center

    Haggerty, Carmel E.

    2015-01-01

    Academic workloads in online learning are influenced by many variables, the complexity of which makes it difficult to measure academic workloads in isolation. While researching issues associated with academic workloads, professional development stood out as having a substantive impact on academic workloads. Many academics in applied health degrees…

  7. Ozone destruction through heterogeneous chemistry following the eruption of El Chichon

    NASA Technical Reports Server (NTRS)

    Hofmann, David J.; Solomon, Susan

    1989-01-01

    The results of ozone observations at northern midlatitudes in late 1982 through 1983, following the eruption of El Chichon, are discussed, together with observations of other trace gases which may be linked to possible variations in ozone chemistry. These results are related to the in situ aerosol observations following the El Chichon eruption, with particular attention given to data relevant to heterogeneous reactions, such as the aerosol surface area and weight percent H2SO4. It is shown that, at midlatitudes, the observed volcanic-particle surface area reached a maximum of about 50 μm²/cm³ (above a typical background value of about 0.75 μm²/cm³) at an altitude of 18-20 km in early 1983; this enhancement of surface area is about the same as that encountered in stratospheric clouds in the Antarctic, suggesting a possible basis for ozone depletion through heterogeneous chemistry. The fraction of ozone reduction that may have occurred as a result of heterogeneous chemical effects is estimated.

  8. 3D Cloud Radiative Effects on Polarized Reflectances

    NASA Astrophysics Data System (ADS)

    Cornet, C.; Matar, C.; C-Labonnote, L.; Szczap, F.; Waquet, F.; Parol, F.; Riedi, J.

    2017-12-01

    As recognized in the last IPCC report, clouds have a major importance in the climate budget and need to be better characterized. Remote sensing observations are a way to obtain either global observations of clouds from satellites or a very fine description of clouds from airborne measurements. An increasing number of radiometers plan to measure polarized reflectances in addition to total reflectances, since this information is very helpful for obtaining aerosol or cloud properties. In the near future, for example, the Multi-viewing, Multi-channel, Multi-polarization Imager (3MI) will be part of the EPS-SG Eumetsat-ESA mission. It will achieve multi-angular polarimetric measurements from visible to shortwave infrared wavelengths. An airborne prototype, OSIRIS (Observing System Including Polarization in the Solar Infrared Spectrum), is also presently developed at the Laboratoire d'Optique Atmospherique and has already participated in several measurement campaigns. In order to analyze the measured signal suitably, it is necessary to have realistic and accurate models able to simulate polarized reflectances. The 3DCLOUD model (Szczap et al., 2014) was used to generate three-dimensional synthetic clouds and the 3D radiative transfer model 3DMCPOL (Cornet et al., 2010) to compute realistic polarized reflectances. From these simulations, we investigate the effects of 3D cloud structures and heterogeneity on the polarized angular signature often used to retrieve cloud or aerosol properties. We show that 3D effects are weak for flat clouds but become quite significant for fractional clouds above the ocean. The 3D effects are quite different according to the observation scale. At the airborne scale (a few tens of meters), solar illumination effects can lead to polarized cloud reflectance values higher than the saturation limit predicted by the homogeneous cloud assumption. In the cloud gaps, corresponding to shadowed areas of the total reflectances, the polarized signal can also be enhanced by the molecular signal at the shortest wavelengths. At the satellite scale (a few kilometers), depending on the wavelength and the molecular contribution, the absolute polarized signal may be increased or decreased in the forward scattering direction, and is decreased in the cloudbow directions because of plane-parallel biases.

  9. Development of Two-Moment Cloud Microphysics for Liquid and Ice Within the NASA Goddard Earth Observing System Model (GEOS-5)

    NASA Technical Reports Server (NTRS)

    Barahona, Donifan; Molod, Andrea M.; Bacmeister, Julio; Nenes, Athanasios; Gettelman, Andrew; Morrison, Hugh; Phillips, Vaughan; Eichmann, Andrew F.

    2013-01-01

    This work presents the development of a two-moment cloud microphysics scheme within version 5 of the NASA Goddard Earth Observing System (GEOS-5). The scheme includes the implementation of a comprehensive stratiform microphysics module, a new cloud coverage scheme that allows ice supersaturation, and a new microphysics module embedded within the moist convection parameterization of GEOS-5. Comprehensive physically based descriptions of ice nucleation, including homogeneous and heterogeneous freezing, and liquid droplet activation are implemented to describe the formation of cloud particles in stratiform clouds and convective cumulus. The effect of preexisting ice crystals on the formation of cirrus clouds is also accounted for. A new parameterization of the subgrid-scale vertical velocity distribution, accounting for turbulence and gravity wave motion, is developed. The implementation of the new microphysics significantly improves the representation of liquid water and ice in GEOS-5. Evaluation of the model shows agreement of the simulated droplet and ice crystal effective and volumetric radii with satellite retrievals and in situ observations. The simulated global distribution of supersaturation is also in agreement with observations. It was found that when using the new microphysics, the fraction of condensate that remains as liquid follows a sigmoidal increase with temperature, which differs from the linear increase assumed in most models and is in better agreement with available observations. The performance of the new microphysics in reproducing the observed total cloud fraction, longwave and shortwave cloud forcing, and total precipitation is similar to the operational version of GEOS-5 and in agreement with satellite retrievals. However, the new microphysics tends to underestimate the coverage of persistent low-level stratocumulus. Sensitivity studies showed that the simulated cloud properties are robust to moderate variation in cloud microphysical parameters. However, significant sensitivity in ice cloud properties was found to variation in the dispersion of the ice crystal size distribution and the critical size for ice autoconversion. The implementation of the new microphysics leads to a more realistic representation of cloud processes in GEOS-5 and allows the linkage of cloud properties to aerosol emissions.
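
    The contrast drawn above between a linear and a sigmoidal liquid fraction can be illustrated as follows; the functional forms, midpoints, and widths are made-up parameters for illustration, not the GEOS-5 formulation:

      import numpy as np

      def liquid_fraction_linear(t_celsius, t_ice=-40.0, t_liq=0.0):
          """Linear ramp between all-ice and all-liquid thresholds, as assumed
          in many models (threshold values here are illustrative)."""
          return np.clip((np.asarray(t_celsius) - t_ice) / (t_liq - t_ice), 0.0, 1.0)

      def liquid_fraction_sigmoidal(t_celsius, t_mid=-20.0, width=5.0):
          """Sigmoidal increase with temperature, qualitatively matching the
          behaviour reported for the new microphysics."""
          return 1.0 / (1.0 + np.exp(-(np.asarray(t_celsius) - t_mid) / width))

      t = np.linspace(-40.0, 0.0, 9)
      print(liquid_fraction_linear(t))
      print(liquid_fraction_sigmoidal(t))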

  10. Cloud processing of organic compounds: Secondary organic aerosol and nitrosamine formation

    NASA Astrophysics Data System (ADS)

    Hutchings, James W., III

    Cloud processing of atmospheric organic compounds has been investigated through field studies, laboratory experiments, and numerical modeling. Observational cloud chemistry studies were performed in northern Arizona and fog studies in central Pennsylvania. At both locations, the clouds and fogs showed low acidity due to neutralization by soil dust components (Arizona) and ammonia (Pennsylvania). The field observations showed substantial concentrations (20-5500 ng/L) of volatile organic compounds (VOC) in the cloud droplets. The potential generation of secondary organic aerosol mass through the processing of these anthropogenic VOCs was investigated through laboratory and modeling studies. Under simulated atmospheric conditions, in idealized solutions, benzene, toluene, ethylbenzene, and xylene (BTEX) degraded quickly in the aqueous phase, with half-lives of approximately three hours. The degradation process yielded less volatile products which would contribute to new aerosol mass upon cloud evaporation. However, when realistic cloud solutions containing natural organic matter were used in the experiments, the reaction kinetics slowed with increasing organic carbon content, resulting in half-lives of approximately 7 hours. The secondary organic aerosol (SOA) mass formation potential of cloud processing of BTEX was evaluated. SOA mass formation by cloud processing of BTEX, while strongly dependent on the atmospheric conditions, could contribute up to 9% of the ambient atmospheric aerosol mass, although typically ~1% appears realistic. Field observations also showed the occurrence of N-nitrosodimethylamine (NDMA), a potent carcinogen, in fogs and clouds (100-340 ng/L). Laboratory studies were conducted to investigate the formation of NDMA from nitrous acid and dimethylamine in the homogeneous aqueous phase within cloud droplets. While NDMA was produced in the cloud droplets, the low yields (<1%) observed could not explain the observed concentrations. Therefore heterogeneous or gaseous formation of NDMA, with partitioning to the droplets, must be the source of aqueous NDMA. Box-model calculations pointed to a predominance of a gas-phase formation mechanism followed by partitioning into the cloud droplets. The calculations were consistent with field measurements of gaseous and aqueous NDMA concentrations. Measurements and model calculations showed that while NDMA is eventually photolyzed, it might persist in the atmosphere for hours.
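
    The half-lives quoted above imply first-order aqueous-phase kinetics. Under that assumption, the rate constant and the fraction of a VOC surviving a given cloud-processing time follow directly (the two-hour lifetime below is a hypothetical example):

      import math

      def first_order_rate_constant(half_life_hours):
          """k = ln(2) / t_half, assuming first-order decay."""
          return math.log(2) / half_life_hours

      def remaining_fraction(half_life_hours, t_hours):
          """Fraction of the VOC remaining after t_hours of processing."""
          return math.exp(-first_order_rate_constant(half_life_hours) * t_hours)

      # Idealized solutions (t_half ~ 3 h) versus realistic cloud water
      # containing organic matter (t_half ~ 7 h), over a 2 h cloud lifetime.
      print(remaining_fraction(3.0, 2.0))  # ~0.63
      print(remaining_fraction(7.0, 2.0))  # ~0.82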

  11. Workload - An examination of the concept

    NASA Technical Reports Server (NTRS)

    Gopher, Daniel; Donchin, Emanuel

    1986-01-01

    The relations between task difficulty and workload and workload and performance are examined. The architecture and limitations of the central processor are discussed. Various procedures for measuring workload are described and evaluated. Consideration is given to normative and descriptive approaches; subjective, performance, and arousal measures; performance operating characteristics; and psychophysiological measures of workload.

  12. Pilot Workload and Speech Analysis: A Preliminary Investigation

    NASA Technical Reports Server (NTRS)

    Bittner, Rachel M.; Begault, Durand R.; Christopher, Bonny R.

    2013-01-01

    Prior research has questioned the effectiveness of speech analysis to measure the stress, workload, truthfulness, or emotional state of a talker. The question remains regarding the utility of speech analysis for restricted vocabularies such as those used in aviation communications. A part-task experiment was conducted in which participants performed Air Traffic Control read-backs in different workload environments. Participants' subjective workload and the speech qualities of fundamental frequency (F0) and articulation rate were evaluated. A significant increase in subjective workload rating was found for high workload segments. F0 was found to be significantly higher during high workload, while articulation rates were found to be significantly slower. No correlation was found between subjective workload and F0 or articulation rate.

  13. A human factors engineering conceptual framework of nursing workload and patient safety in intensive care units.

    PubMed

    Carayon, Pascale; Gürses, Ayşe P

    2005-10-01

    In this paper, we review the literature on nursing workload in intensive care units (ICUs) and its impact on patient safety and quality of working life of nurses. We then propose a conceptual framework of ICU nursing workload that defines causes, consequences and outcomes of workload. We identified four levels of nursing workload (ICU/unit level, job level, patient level, and situation level), and discuss measures associated with each of the four levels. A micro-level approach to ICU nursing workload at the situation level is proposed and recommended in order to reduce workload and mitigate its negative impact. Performance obstacles are conceptualized as causes of ICU nursing workload at the situation level.

  14. The Workload Curve: Subjective Mental Workload.

    PubMed

    Estes, Steven

    2015-11-01

    In this paper I begin looking for evidence of a subjective workload curve. Results from subjective mental workload assessments are often interpreted linearly. However, I hypothesized that ratings of subjective mental workload increase nonlinearly with unitary increases in working memory load. Two studies were conducted. In the first, the participant provided ratings of the mental difficulty of a series of digit span recall tasks. In the second study, participants provided ratings of mental difficulty associated with recall of visual patterns. The results of the second study were then examined using a mathematical model of working memory. An S curve, predicted a priori, was found in the results of both the digit span and visual pattern studies. A mathematical model showed a tight fit between workload ratings and levels of working memory activation. This effort provides good initial evidence for the existence of a workload curve. The results support further study in applied settings and other facets of workload (e.g., temporal workload). Measures of subjective workload are used across a wide variety of domains and applications. These results bear on their interpretation, particularly as they relate to workload thresholds. © 2015, Human Factors and Ergonomics Society.

  15. Personality Traits Moderate the Effect of Workload Sources on Perceived Workload in Flying Column Police Officers

    PubMed Central

    Chiorri, Carlo; Garbarino, Sergio; Bracco, Fabrizio; Magnavita, Nicola

    2015-01-01

    Previous research has suggested that personality traits of the Five Factor Model play a role in worker's response to workload. The aim of this study was to investigate the association of personality traits of first responders with their perceived workload in real-life tasks. A flying column of 269 police officers completed a measure of subjective workload (NASA-Task Load Index) after intervention tasks in a major public event. Officers' scores on a measure of Five Factor Model personality traits were obtained from archival data. Linear Mixed Modeling was used to test the direct and interaction effects of personality traits on workload scores once controlling for background variables, task type and workload source (mental, temporal and physical demand of the task, perceived effort, dissatisfaction for the performance and frustration due to the task). All personality traits except extraversion significantly interacted at least with one workload source. Perceived workload in flying column police officers appears to be the result of their personality characteristics interacting with the workload source. The implications of these results for the development of support measures aimed at reducing the impact of workload in this category of workers are discussed. PMID:26640456

  16. Assessing Continuous Operator Workload With a Hybrid Scaffolded Neuroergonomic Modeling Approach.

    PubMed

    Borghetti, Brett J; Giametta, Joseph J; Rusnock, Christina F

    2017-02-01

    We aimed to predict operator workload from neurological data using statistical learning methods to fit neurological-to-state-assessment models. Adaptive systems require real-time mental workload assessment to perform dynamic task allocations or operator augmentation as workload issues arise. Neuroergonomic measures have great potential for informing adaptive systems, and we combine these measures with models of task demand as well as information about critical events and performance to clarify the inherent ambiguity of interpretation. We use machine learning algorithms on electroencephalogram (EEG) input to infer operator workload based upon Improved Performance Research Integration Tool workload model estimates. Cross-participant models predict workload of other participants, statistically distinguishing between 62% of the workload changes. Machine learning models trained from Monte Carlo resampled workload profiles can be used in place of deterministic workload profiles for cross-participant modeling without incurring a significant decrease in machine learning model performance, suggesting that stochastic models can be used when limited training data are available. We employed a novel temporary scaffold of simulation-generated workload profile truth data during the model-fitting process. A continuous workload profile serves as the target to train our statistical machine learning models. Once trained, the workload profile scaffolding is removed and the trained model is used directly on neurophysiological data in future operator state assessments. These modeling techniques demonstrate how to use neuroergonomic methods to develop operator state assessments, which can be employed in adaptive systems.
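
    A minimal sketch of the cross-participant evaluation idea described above, using leave-one-participant-out cross-validation on placeholder EEG band-power features; the feature extraction, the IMPRINT-derived workload labels, and the authors' actual learner are not reproduced here:

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(600, 32))         # placeholder EEG band-power features
      y = rng.integers(0, 2, size=600)       # placeholder low/high workload labels
      groups = np.repeat(np.arange(6), 100)  # participant ID for each sample

      # Training on five participants and testing on the held-out sixth mirrors
      # the cross-participant modeling described in the abstract.
      clf = RandomForestClassifier(n_estimators=200, random_state=0)
      scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
      print(scores.mean())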

  17. Higher mental workload is associated with poorer laparoscopic performance as measured by the NASA-TLX tool.

    PubMed

    Yurko, Yuliya Y; Scerbo, Mark W; Prabhu, Ajita S; Acker, Christina E; Stefanidis, Dimitrios

    2010-10-01

    Increased workload during task performance may increase fatigue and facilitate errors. The National Aeronautics and Space Administration-Task Load Index (NASA-TLX) is a previously validated tool for workload self-assessment. We assessed the relationship of workload and performance during simulator training on a complex laparoscopic task. NASA-TLX workload data from three separate trials were analyzed. All participants were novices (n = 28), followed the same curriculum on the Fundamentals of Laparoscopic Surgery suturing model, and were tested in the animal operating room (OR) on a Nissen fundoplication model after training. Performance and workload scores were recorded at baseline, after proficiency achievement, and during the test. Performance, NASA-TLX scores, and inadvertent injuries during the test were analyzed and compared. Workload scores declined during training and mirrored performance changes. NASA-TLX scores correlated significantly with performance scores (r = -0.5, P < 0.001). Participants with higher workload scores caused more inadvertent injuries to adjacent structures in the OR (r = 0.38, P < 0.05). Increased mental and physical workload scores at baseline correlated with higher workload scores in the OR (r = 0.52-0.82; P < 0.05) and more inadvertent injuries (r = 0.52, P < 0.01). Increased workload is associated with inferior task performance and a higher likelihood of errors. The NASA-TLX questionnaire accurately reflects workload changes during simulator training and may identify individuals more likely to experience high workload and more prone to errors during skill transfer to the clinical environment.

  18. Laboratory Studies of the Cloud Droplet Activation Properties and Corresponding Chemistry of Saline Playa Dust.

    PubMed

    Gaston, Cassandra J; Pratt, Kerri A; Suski, Kaitlyn J; May, Nathaniel W; Gill, Thomas E; Prather, Kimberly A

    2017-02-07

    Playas emit large quantities of dust that can facilitate the activation of cloud droplets. Despite the potential importance of playa dusts for cloud formation, most climate models assume that all dust is nonhygroscopic; however, measurements are needed to clarify the role of dusts in aerosol-cloud interactions. Here, we report measurements of CCN activation from playa dusts and parameterize these results in terms of both κ-Köhler theory and adsorption activation theory for inclusion in atmospheric models. κ ranged from 0.002 ± 0.001 to 0.818 ± 0.094, whereas Frenkel-Halsey-Hill (FHH) adsorption parameters of A_FHH = 2.20 ± 0.60 and B_FHH = 1.24 ± 0.14 described the water uptake properties of the dusts. Measurements made using aerosol time-of-flight mass spectrometry (ATOFMS) revealed the presence of halite, sodium sulfates, and sodium carbonates that were strongly correlated with κ, underscoring the role that mineralogy, including salts, plays in water uptake by dust. Predictions of κ made using bulk chemical techniques generally showed good agreement with measured values. However, several samples were poorly predicted, suggesting that chemical heterogeneities as a function of size or chemically distinct particle surfaces can determine the hygroscopicity of playa dusts. Our results further demonstrate the importance of dust in aerosol-cloud interactions.
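
    For orientation, the two parameterization frameworks named above have the following standard forms in the literature (Petters and Kreidenweis 2007 for κ-Köhler; Kumar et al. 2009 for FHH adsorption activation); these equations are quoted from that general literature, not transcribed from the paper:

      % kappa-Koehler: equilibrium saturation ratio S over a droplet of
      % diameter D grown on a dry particle of diameter D_d:
      S(D) = \frac{D^{3} - D_{d}^{3}}{D^{3} - D_{d}^{3}(1 - \kappa)}
             \exp\!\left(\frac{4 \sigma_{w} M_{w}}{R T \rho_{w} D}\right)
      % FHH adsorption activation: water activity set by multilayer
      % adsorption, with surface coverage \Theta in monolayers:
      a_{w} = \exp\!\left(-A_{\mathrm{FHH}}\, \Theta^{-B_{\mathrm{FHH}}}\right)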

  19. Design of laboratory experiments to study radiation-driven implosions

    DOE PAGES

    Keiter, P. A.; Trantham, M.; Malamud, G.; ...

    2017-02-03

    The interstellar medium is heterogeneous, with dense clouds amid an ambient medium. Radiation from young OB stars asymmetrically irradiates the dense clouds. Bertoldi (1989) developed analytic formulae to describe possible outcomes of these clouds when irradiated by hot, young stars. One of the critical parameters that determines the cloud's fate is the number of photon mean free paths in the cloud. For the extreme cases where the cloud size is either much greater than or much less than one mean free path, the radiation transport should be well understood. However, as one transitions between these limits, the radiation transport is much more complex and is a challenge to solve with many of the current radiation transport models implemented in codes. In this paper, we present the design of laboratory experiments that use a thermal source of x-rays to asymmetrically irradiate a low-density plastic foam sphere. The experiment will vary the density, and hence the number of mean free paths, of the sphere to study the radiation transport in different regimes. Finally, we have developed dimensionless parameters to relate the laboratory experiment to the astrophysical system, and we show that we can perform the experiment in the same transport regime.

  20. Theoretical Investigations of Clouds and Aerosols in the Stratosphere and Upper Troposphere

    NASA Technical Reports Server (NTRS)

    Toon, Owen B.

    2005-01-01

    This research was performed in support of the Atmospheric Chemistry Modeling and Data Analysis Program. We investigated a wide variety of issues involving ambient stratospheric aerosols, polar stratospheric clouds and heterogeneous chemistry, analysis of laboratory data, and particles in the upper troposphere. The papers resulting from these studies are listed below. In addition, I participated in the 1999-2000 SOLVE mission and the 2002 CRYSTAL field mission as one of the project scientists. Several CU graduate students and research associates also participated in these missions, under support from the ACMAP program, and worked to interpret data. During the past few years my group has completed a number of projects under the

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, David L.

    It is well known that cirrus clouds play a major role in regulating the earth's climate, but the details of how this works are just beginning to be understood. This project targeted the main property of cirrus clouds that influences climate processes: the ice fall speed. That is, this project improves the representation of the mass-weighted ice particle fall velocity, Vm, in climate models used to predict future climate on global and regional scales. Prior to 2007, the dominant sizes of ice particles in cirrus clouds were poorly understood, making it virtually impossible to predict how cirrus clouds interact with sunlight and thermal radiation. Due to several studies investigating the performance of optical probes used to measure the ice particle size distribution (PSD), as well as the remote sensing results from our last ARM project, it is now well established that the anomalously high concentrations of small ice crystals often reported prior to 2007 were measurement artifacts. Advances in the design and data processing of optical probes have greatly reduced these ice artifacts, which resulted from the shattering of ice particles on the probe tips and/or inlet tube, and PSD measurements from one of these improved probes (the 2-dimensional Stereo or 2D-S probe) are utilized in this project to parameterize Vm for climate models. Our original plan in the proposal was to parameterize the ice PSD (in terms of temperature and ice water content) and ice particle mass and projected area (in terms of mass- and area-dimensional power laws or m-D/A-D expressions), since these are the microphysical properties that determine Vm, and then proceed to calculate Vm from these parameterized properties. But the 2D-S probe directly measures ice particle projected area and indirectly estimates ice particle mass for each size bin. It soon became apparent that the original plan would introduce more uncertainty in the Vm calculations than simply using the 2D-S measurements to directly calculate Vm. By calculating Vm directly from the measured PSD, ice particle projected area and estimated mass, more accurate estimates of Vm are obtained. These Vm values were then parameterized for climate models by relating them to (1) sampling temperature and ice water content (IWC) and (2) the effective diameter (De) of the ice PSD. Parameterization (1) is appropriate for climate models having single-moment microphysical schemes, whereas (2) is appropriate for double-moment microphysical schemes and yields more accurate Vm estimates. These parameterizations were developed for tropical cirrus clouds, Arctic cirrus, mid-latitude synoptic cirrus and mid-latitude anvil cirrus clouds, based on field campaigns in these regions. An important but unexpected result of this research was the discovery of microphysical evidence indicating the mechanisms by which ice crystals are produced in cirrus clouds. This evidence, derived from PSD measurements, indicates that homogeneous freezing ice nucleation dominates in mid-latitude synoptic cirrus clouds, whereas heterogeneous ice nucleation processes dominate in mid-latitude anvil cirrus. Based on these findings, De was parameterized in terms of temperature (T) for conditions dominated by (1) homogeneous and (2) heterogeneous ice nucleation. From this, an experiment was designed for global climate models (GCMs).
    The net radiative forcing from cirrus clouds may be affected by the means by which ice is produced (homo- or heterogeneously), and this net forcing contributes to climate sensitivity (i.e., the change in mean global surface temperature resulting from a doubling of CO2). The objective of this GCM experiment was to determine how a change in ice nucleation mode affects the predicted global radiation balance. In the first simulation (Run 1), the De-T relationship for homogeneous nucleation is used at all latitudes, while in the second simulation (Run 2), the De-T relationship for heterogeneous nucleation is used at all latitudes. For both runs, Vm is calculated from De. Two GCMs were used: the Community Atmosphere Model version 5 (CAM5) and a European GCM known as ECHAM5 (thanks to our European colleagues who collaborated with us). Similar results were obtained from both GCMs in the Northern Hemisphere mid-latitudes, with a net cooling of ~1.0 W m-2 due to heterogeneous nucleation, relative to Run 1. The mean global net cooling was 2.4 W m-2 for the ECHAM5 GCM, while CAM5 produced a mean global net cooling of about 0.8 W m-2. This dependence of the radiation balance on nucleation mode is substantial when one considers that the direct radiative forcing from a CO2 doubling is 4 W m-2. The differences between GCMs in mean global net cooling estimates may demonstrate a need for improving the representation of cirrus clouds in GCMs, including the coupling between microphysical and radiative properties. Unfortunately, after completing this GCM experiment, we learned from the company that provided the 2D-S microphysical data that the data were corrupted due to a computer program coding problem. Therefore the microphysical data had to be reprocessed and reanalyzed, and the GCM experiments were redone under our current ASR project using an improved experimental design.
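
    For reference, the quantity at the center of this project, the mass-weighted fall speed, is the ice-mass-weighted average of the per-bin fall speeds. A minimal sketch over a binned PSD (all numbers hypothetical, not campaign data):

      import numpy as np

      def mass_weighted_fall_speed(number_conc, mass, fall_speed):
          """Vm = sum(N_i * m_i * v_i) / sum(N_i * m_i) over PSD bins; with
          2D-S data, per-bin mass and projected area are available, so Vm
          can be computed directly as described above."""
          number_conc, mass, fall_speed = map(np.asarray, (number_conc, mass, fall_speed))
          iwc_per_bin = number_conc * mass
          return float((iwc_per_bin * fall_speed).sum() / iwc_per_bin.sum())

      # Hypothetical 3-bin PSD: number concentration (L-1), mass (g), v (m s-1).
      print(mass_weighted_fall_speed([100.0, 10.0, 1.0],
                                     [1e-9, 1e-7, 1e-5],
                                     [0.05, 0.3, 1.0]))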

  2. Probing Individual Ice Nucleation Events with Environmental Scanning Electron Microscopy

    NASA Astrophysics Data System (ADS)

    Wang, Bingbing; China, Swarup; Knopf, Daniel; Gilles, Mary; Laskin, Alexander

    2016-04-01

    Heterogeneous ice nucleation is a process of critical relevance to a range of topics in fundamental and applied science and technology. Heterogeneous ice nucleation initiated by particles proceeds on particle surfaces, whose microscopic properties essentially control the nucleation mechanism. Ice nucleation on particles in the atmosphere governs the formation of ice and mixed-phase clouds, which in turn influence the Earth's radiative budget and climate. Heterogeneous ice nucleation is still insufficiently understood and poses significant challenges for a predictive understanding of climate change. We present a novel microscopy platform allowing observation of individual ice nucleation events over the temperature range 193-273 K and at relative humidities relevant for ice formation in atmospheric clouds. The approach utilizes a novel home-built ice nucleation cell interfaced with an Environmental Scanning Electron Microscope (IN-ESEM system). The IN-ESEM system is applied for direct observation of individual ice formation events, determining ice nucleation mechanisms, freezing temperatures, and relative humidity onsets. Reported microanalysis of the ice nucleating particles (INP) includes elemental composition detected by energy-dispersive X-ray analysis (EDX), and advanced speciation of the organic content in particles using scanning transmission X-ray microscopy with near-edge X-ray absorption fine structure spectroscopy (STXM/NEXAFS). The performance of the IN-ESEM system is validated through a set of experiments with kaolinite particles of known ice nucleation propensity. We demonstrate an application of the IN-ESEM system to identify and characterize individual INP within a complex mixture of ambient particles.

  3. Exposure to Workplace Bullying: The Role of Coping Strategies in Dealing with Work Stressors

    PubMed Central

    Baillien, Elfi; Vander Elst, Tinne; De Witte, Hans; Van den Broeck, Anja

    2017-01-01

    Studies investigating both work- and individual-related antecedents of workplace bullying are scarce. In reply, this study investigated the interaction between workload, job insecurity, role conflict, and role ambiguity (i.e., work-related antecedents), and problem- and emotion-focused coping strategies (i.e., individual-related antecedents) in association with exposure to workplace bullying. Problem-focused coping strategies were hypothesised to decrease (i.e., buffer) the associations between workload, job insecurity, role conflict, and role ambiguity and exposure to bullying, while emotion-focused coping strategies were hypothesised to increase (i.e., amplify) these associations. Results for a heterogeneous sample (N = 3,105) did not provide evidence for problem-focused coping strategies as moderators. As expected, some emotion-focused coping strategies amplified the associations between work-related antecedents and bullying: employees using “focus on and venting of emotions” or “behavioural disengagement” in dealing with job insecurity, role conflict, or role ambiguity were more likely to be exposed to bullying. Similarly, “seeking social support for emotional reasons” amplified the association of role ambiguity with bullying, and “mental disengagement” amplified the associations of both role conflict and role ambiguity. To prevent bullying, organisations may train employees in tempering emotion-focused coping strategies, especially when experiencing job insecurity, role conflict, or role ambiguity. PMID:29270424

  4. An analytics approach to designing patient centered medical homes.

    PubMed

    Ajorlou, Saeede; Shams, Issac; Yang, Kai

    2015-03-01

    Recently the patient centered medical home (PCMH) model has become a popular team-based approach focused on delivering more streamlined care to patients. In current practices of medical homes, a clinically based prediction framework is recommended because it can help match the portfolio capacity of PCMH teams with the actual load generated by a set of patients. Without such balance in clinical supply and demand, issues such as excessive under- and over-utilization of physicians, long waiting times for receiving the appropriate treatment, and non-continuity of care will eliminate many advantages of the medical home strategy. In this paper, by using a hierarchical generalized linear model with multivariate responses, we develop a clinical workload prediction model for care portfolio demands in a Bayesian framework. The model allows for heterogeneous variances and unstructured covariance matrices for nested random effects that arise through complex hierarchical care systems. We show that using a multivariate approach substantially enhances the precision of workload predictions at both primary and non-primary care levels. We also demonstrate that care demands depend not only on patient demographics but also on other utilization factors, such as length of stay. Our analyses of recent data from the Veterans Health Administration further indicate that risk adjustment for patient health conditions can considerably improve the predictive power of the model.
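
    A greatly simplified sketch of the hierarchical idea: a single-response mixed-effects model with a random intercept per care team. The paper's model is richer (Bayesian, multivariate responses, heterogeneous variances, unstructured covariances), and the variable names and synthetic data below are hypothetical:

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      # Synthetic stand-in data; in practice these would be patient records.
      rng = np.random.default_rng(1)
      df = pd.DataFrame({
          "workload": rng.normal(10.0, 2.0, 300),
          "age": rng.integers(20, 90, 300),
          "length_of_stay": rng.integers(1, 15, 300),
          "pcmh_team": rng.integers(0, 10, 300),
      })

      # Random intercept per PCMH team captures the nested care structure.
      model = smf.mixedlm("workload ~ age + length_of_stay", data=df,
                          groups=df["pcmh_team"])
      print(model.fit().summary())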

  5. [Distribution and main influential factors of mental workload of middle school teachers in Nanchang City].

    PubMed

    Xiao, Yuanmei; Li, Weijuan; Ren, Qingfeng; Ren, Xiaohui; Wang, Zhiming; Wang, Mianzhen; Lan, Yajia

    2015-01-01

    To investigate the distribution and main influential factors of the mental workload of middle school teachers in Nanchang City. A total of 504 middle school teachers were sampled by random cluster sampling from middle schools in Nanchang City, and their mental workload was assessed with the National Aeronautics and Space Administration Task Load Index (NASA-TLX), whose reliability and validity had been verified. The mental workload scores of the middle school teachers in Nanchang were approximately normally distributed. The mental workload of teachers aged 31-35 years was the highest. For those no more than 35 years old, there was a positive correlation between mental workload and age (r = 0.146, P < 0.05). For those more than 35 years old, mental workload levels showed no statistically significant differences. There was a negative correlation between mental workload and educational level (r = -0.172, P < 0.05); middle school teachers with a lower educational level tended to have a higher mental workload (P < 0.01). The longer a middle school teacher worked per day, the higher the mental workload. Working hours per day was the most influential factor on mental workload among all factors examined (P < 0.001). Mental workload of middle school teachers was closely related to age, educational level and working hours per day. Working hours per day was an important risk factor for mental workload. Reducing working hours per day, especially to no more than 8 hours per day, may be a significant and useful approach to alleviating the mental workload of middle school teachers in Nanchang City.

  6. Impact of Conflict Avoidance Responsibility Allocation on Pilot Workload in a Distributed Air Traffic Management System

    NASA Technical Reports Server (NTRS)

    Ligda, Sarah V.; Dao, Arik-Quang V.; Vu, Kim-Phuong; Strybel, Thomas Z.; Battiste, Vernol; Johnson, Walter W.

    2010-01-01

    Pilot workload was examined during simulated flights requiring flight deck-based merging and spacing while avoiding weather. Pilots used flight deck tools to avoid convective weather and space behind a lead aircraft during an arrival into Louisville International airport. Three conflict avoidance management concepts were studied: pilot, controller or automation primarily responsible. A modified Air Traffic Workload Input Technique (ATWIT) metric showed highest workload during the approach phase of flight and lowest during the en-route phase of flight (before deviating for weather). In general, the modified ATWIT was shown to be a valid and reliable workload measure, providing more detailed information than post-run subjective workload metrics. The trend across multiple workload metrics revealed lowest workload when pilots had both conflict alerting and responsibility of the three concepts, while all objective and subjective measures showed highest workload when pilots had no conflict alerting or responsibility. This suggests that pilot workload was not tied primarily to responsibility for resolving conflicts, but to gaining and/or maintaining situation awareness when conflict alerting is unavailable.

  7. A demonstration of adjoint methods for multi-dimensional remote sensing of the atmosphere and surface

    NASA Astrophysics Data System (ADS)

    Martin, William G. K.; Hasekamp, Otto P.

    2018-01-01

    In previous work, we derived the adjoint method as a computationally efficient path to three-dimensional (3D) retrievals of clouds and aerosols. In this paper we will demonstrate the use of adjoint methods for retrieving two-dimensional (2D) fields of cloud extinction. The demonstration uses a new 2D radiative transfer solver (FSDOM). This radiation code was augmented with adjoint methods to allow efficient derivative calculations needed to retrieve cloud and surface properties from multi-angle reflectance measurements. The code was then used in three synthetic retrieval studies. Our retrieval algorithm adjusts the cloud extinction field and surface albedo to minimize the measurement misfit function with a gradient-based, quasi-Newton approach. At each step we compute the value of the misfit function and its gradient with two calls to the solver FSDOM. First we solve the forward radiative transfer equation to compute the residual misfit with measurements, and second we solve the adjoint radiative transfer equation to compute the gradient of the misfit function with respect to all unknowns. The synthetic retrieval studies verify that adjoint methods are scalable to retrieval problems with many measurements and unknowns. We can retrieve the vertically-integrated optical depth of moderately thick clouds as a function of the horizontal coordinate. It is also possible to retrieve the vertical profile of clouds that are separated by clear regions. The vertical profile retrievals improve for smaller cloud fractions. This leads to the conclusion that cloud edges actually increase the amount of information that is available for retrieving the vertical profile of clouds. However, to exploit this information one must retrieve the horizontally heterogeneous cloud properties with a 2D (or 3D) model. This prototype shows that adjoint methods can efficiently compute the gradient of the misfit function. This work paves the way for the application of similar methods to 3D remote sensing problems.
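
    The forward/adjoint pairing described above maps naturally onto an off-the-shelf quasi-Newton optimizer. In the toy sketch below, a linear model stands in for FSDOM (whose real interface is not reproduced here), so the adjoint of the forward operator is simply its transpose:

      import numpy as np
      from scipy.optimize import minimize

      # Toy linear "solver" standing in for FSDOM: forward model y = A x.
      rng = np.random.default_rng(0)
      A = rng.normal(size=(50, 20))
      measurements = A @ rng.normal(size=20)

      def misfit_and_gradient(x, y_obs):
          residual = A @ x - y_obs     # one forward solve per iteration
          cost = 0.5 * residual @ residual
          grad = A.T @ residual        # one adjoint solve per iteration
          return cost, grad

      # x would stack the unknowns: cloud extinction field plus surface albedo.
      result = minimize(misfit_and_gradient, np.zeros(20), args=(measurements,),
                        jac=True, method="L-BFGS-B")
      print(result.fun)  # ~0 at convergence for this noise-free toy problem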

  8. Relationship between workload and mind-wandering in simulated driving

    PubMed Central

    2017-01-01

    Mental workload and mind-wandering are highly related to driving safety. This study investigated the relationship between mental workload and mind-wandering while driving. Participants (N = 40) were asked to perform a car following task in a driving simulator, and to report whether they had experienced mind-wandering upon hearing a tone. After driving, participants reported their workload using the NASA-Task Load Index (TLX). Results revealed an interaction between workload and mind-wandering from two different perspectives. First, there was a negative correlation between workload and mind-wandering (r = -0.459, p < 0.01) across individuals. Second, from a temporal perspective, workload and mind-wandering frequency increased significantly over task time and were positively correlated. Together, these findings contribute to understanding the roles of workload and mind-wandering in driving. PMID:28467513

  9. Defining the subjective experience of workload

    NASA Technical Reports Server (NTRS)

    Hart, S. G.; Childress, M. E.; Bortolussi, M.

    1981-01-01

    Flight scenarios that represent different types and levels of pilot workload are needed in order to conduct research about, and develop measures of, pilot workload. In order to be useful, however, the workload associated with such scenarios and the component tasks must be determined independently. An initial study designed to provide such information was conducted by asking a panel of general aviation pilots to evaluate flight-related tasks for the overall, perceptual, physical, and cognitive workload they impose. These ratings will provide the nucleus for a data base of flight-related primary tasks that have been independently rated for workload to use in workload assessment research.

  10. The workload analysis in welding workshop

    NASA Astrophysics Data System (ADS)

    Wahyuni, D.; Budiman, I.; Tryana Sembiring, M.; Sitorus, E.; Nasution, H.

    2018-03-01

    This research was conducted in a welding workshop which produces doors, fences, canopies, etc., according to customers' orders. Symptoms of excessive workload were evident from employee complaints, requests for additional employees, and late completion times (11 of 28 orders were late, and 7 customers complained). The top management of the workshop assumed that the employees' workload was still within a tolerable limit. A workload analysis was therefore required to determine the number of employees needed. Workload was measured using a physiological method and workload analysis. The results of this research can be used by the workshop for better workload management.

  11. Effects of workload preview on task scheduling during simulated instrument flight.

    PubMed

    Andre, A D; Heers, S T; Cashion, P A

    1995-01-01

    Our study examined pilot scheduling behavior in the context of simulated instrument flight. Over the course of the flight, pilots flew along specified routes while scheduling and performing several flight-related secondary tasks. The first phase of flight was flown under low-workload conditions, whereas the second phase of flight was flown under high-workload conditions in the form of increased turbulence and a disorganized instrument layout. Six pilots were randomly assigned to each of three workload preview groups. Subjects in the no-preview group were not given preview of the increased-workload conditions. Subjects in the declarative preview group were verbally informed of the nature of the flight workload manipulation but did not receive any practice under the high-workload conditions. Subjects in the procedural preview group received the same instructions as the declarative preview group but also flew half of the practice flight under the high-workload conditions. The results show that workload preview fostered efficient scheduling strategies. Specifically, those pilots with either declarative or procedural preview of future workload demands adopted an efficient strategy of scheduling more of the difficult secondary tasks during the low-workload phase of flight. However, those pilots given a procedural preview showed the greatest benefits in overall flight performance.

  12. Comparing airborne and satellite retrievals of cloud optical thickness and particle effective radius using a spectral radiance ratio technique: two case studies for cirrus and deep convective clouds

    NASA Astrophysics Data System (ADS)

    Krisna, Trismono C.; Wendisch, Manfred; Ehrlich, André; Jäkel, Evelyn; Werner, Frank; Weigel, Ralf; Borrmann, Stephan; Mahnke, Christoph; Pöschl, Ulrich; Andreae, Meinrat O.; Voigt, Christiane; Machado, Luiz A. T.

    2018-04-01

    Solar radiation reflected by cirrus and deep convective clouds (DCCs) was measured by the Spectral Modular Airborne Radiation Measurement System (SMART) installed on the German High Altitude and Long Range Research Aircraft (HALO) during the Mid-Latitude Cirrus (ML-CIRRUS) and the Aerosol, Cloud, Precipitation, and Radiation Interaction and Dynamic of Convective Clouds System - Cloud Processes of the Main Precipitation Systems in Brazil: A Contribution to Cloud Resolving Modelling and to the Global Precipitation Measurement (ACRIDICON-CHUVA) campaigns. On particular flights, HALO performed measurements closely collocated with overpasses of the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the Aqua satellite. A cirrus cloud located above liquid water clouds and a DCC topped by an anvil cirrus are analyzed in this paper. Based on the nadir spectral upward radiance measured above the two clouds, the optical thickness τ and particle effective radius reff of the cirrus and DCC are retrieved using a radiance ratio technique, which considers the cloud thermodynamic phase, the vertical profile of cloud microphysical properties, the presence of multilayer clouds, and the heterogeneity of the surface albedo. For the cirrus case, the comparison of τ and reff retrieved on the basis of SMART and MODIS measurements yields a normalized mean absolute deviation of up to 1.2 % for τ and 2.1 % for reff. For the DCC case, deviations of up to 3.6 % for τ and 6.2 % for reff are obtained. The larger deviations in the DCC case are mainly attributed to the fast cloud evolution and three-dimensional (3-D) radiative effects. Measurements of spectral upward radiance at near-infrared wavelengths are employed to investigate the vertical profile of reff in the cirrus. The retrieved values of reff are compared with corresponding in situ measurements using a vertical weighting method. Compared to the MODIS observations, measurements of SMART provide more information on the vertical distribution of particle sizes, which allow reconstructing the profile of reff close to the cloud top. The comparison between retrieved and in situ reff yields a normalized mean absolute deviation, which ranges between 1.5 and 10.3 %, and a robust correlation coefficient of 0.82.
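
    For reference, the comparison metric quoted above can be computed as below; the exact normalization convention (here, the mean of the reference values) is an assumption, since this record does not spell it out:

      import numpy as np

      def normalized_mean_absolute_deviation(retrieved, reference):
          """NMAD between two retrievals, returned as a percentage."""
          retrieved = np.asarray(retrieved, dtype=float)
          reference = np.asarray(reference, dtype=float)
          return 100.0 * np.mean(np.abs(retrieved - reference)) / np.mean(reference)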

  13. Smartphone Analytics: Mobilizing the Lab into the Cloud for Omic-Scale Analyses.

    PubMed

    Montenegro-Burke, J Rafael; Phommavongsay, Thiery; Aisporna, Aries E; Huan, Tao; Rinehart, Duane; Forsberg, Erica; Poole, Farris L; Thorgersen, Michael P; Adams, Michael W W; Krantz, Gregory; Fields, Matthew W; Northen, Trent R; Robbins, Paul D; Niedernhofer, Laura J; Lairson, Luke; Benton, H Paul; Siuzdak, Gary

    2016-10-04

    Active data screening is an integral part of many scientific activities, and mobile technologies have greatly facilitated this process by minimizing the reliance on large hardware instrumentation. To meet the needs of the rapidly growing field of metabolomics and its heavy data-processing workload, we designed the first remote metabolomic data screening platform for mobile devices. Two mobile applications (apps), XCMS Mobile and METLIN Mobile, facilitate access to XCMS and METLIN, which are the most important components in the computer-based XCMS Online platforms. These mobile apps allow for the visualization and analysis of metabolic data throughout the entire analytical process. Specifically, XCMS Mobile and METLIN Mobile provide the capabilities for remote monitoring of data processing, real time notifications for the data processing, visualization and interactive analysis of processed data (e.g., cloud plots, principal component analysis, box-plots, extracted ion chromatograms, and hierarchical cluster analysis), and database searching for metabolite identification. These apps, available on Apple iOS and Google Android operating systems, allow for the migration of metabolomic research onto mobile devices for better accessibility beyond direct instrument operation. The utility of XCMS Mobile and METLIN Mobile is demonstrated here through the metabolomic LC-MS analyses of stem cells, colon cancer, aging, and bacterial metabolism.

  14. Analyzing large scale genomic data on the cloud with Sparkhit

    PubMed Central

    Huang, Liren; Krüger, Jan

    2018-01-01

    Abstract Motivation The increasing amount of next-generation sequencing data poses a fundamental challenge for large scale genomic analytics. Existing tools use different distributed computational platforms to scale-out bioinformatics workloads. However, these tools do not scale efficiently. Moreover, they incur heavy runtime overheads when pre-processing large amounts of data. To address these limitations, we have developed Sparkhit: a distributed bioinformatics framework built on top of the Apache Spark platform. Results Sparkhit integrates a variety of analytical methods. It is implemented in the Spark extended MapReduce model. It runs 92–157 times faster than MetaSpark on metagenomic fragment recruitment and 18–32 times faster than Crossbow on data pre-processing. We analyzed 100 terabytes of data across four genomic projects in the cloud in 21 h, which includes the run times of cluster deployment and data downloading. Furthermore, our application on the entire Human Microbiome Project shotgun sequencing data was completed in 2 h, presenting an approach to easily associate large amounts of public datasets with reference data. Availability and implementation Sparkhit is freely available at: https://rhinempi.github.io/sparkhit/. Contact asczyrba@cebitec.uni-bielefeld.de Supplementary information Supplementary data are available at Bioinformatics online. PMID:29253074
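
    The scale-out pattern behind tools like Sparkhit can be illustrated with a minimal PySpark sketch; this is not Sparkhit's actual code, and load_reference_index and its align method are hypothetical helpers standing in for a real aligner:

      from pyspark import SparkContext

      sc = SparkContext(appName="fragment-recruitment-sketch")

      def align_partition(reads):
          # Load the reference index once per partition, then stream reads
          # through it; this amortizes the start-up overhead the paper targets.
          index = load_reference_index("/data/reference.idx")   # hypothetical helper
          for read in reads:
              hit = index.align(read)                           # hypothetical method
              if hit is not None:
                  yield hit

      (sc.textFile("s3://bucket/reads/*.fastq")   # reads partitioned across the cluster
         .mapPartitions(align_partition)
         .saveAsTextFile("s3://bucket/hits/"))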

  15. Smartphone Analytics: Mobilizing the Lab into the Cloud for Omic-Scale Analyses

    PubMed Central

    2016-01-01

    Active data screening is an integral part of many scientific activities, and mobile technologies have greatly facilitated this process by minimizing the reliance on large hardware instrumentation. To meet the needs of the rapidly growing field of metabolomics and its heavy data-processing workload, we designed the first remote metabolomic data screening platform for mobile devices. Two mobile applications (apps), XCMS Mobile and METLIN Mobile, facilitate access to XCMS and METLIN, which are the most important components in the computer-based XCMS Online platforms. These mobile apps allow for the visualization and analysis of metabolic data throughout the entire analytical process. Specifically, XCMS Mobile and METLIN Mobile provide the capabilities for remote monitoring of data processing, real time notifications for the data processing, visualization and interactive analysis of processed data (e.g., cloud plots, principal component analysis, box-plots, extracted ion chromatograms, and hierarchical cluster analysis), and database searching for metabolite identification. These apps, available on Apple iOS and Google Android operating systems, allow for the migration of metabolomic research onto mobile devices for better accessibility beyond direct instrument operation. The utility of XCMS Mobile and METLIN Mobile is demonstrated here through the metabolomic LC-MS analyses of stem cells, colon cancer, aging, and bacterial metabolism. PMID:27560777

  16. Smartphone Analytics: Mobilizing the Lab into the Cloud for Omic-Scale Analyses

    DOE PAGES

    Montenegro-Burke, J. Rafael; Phommavongsay, Thiery; Aisporna, Aries E.; ...

    2016-08-25

    Active data screening is an integral part of many scientific activities, and mobile technologies have greatly facilitated this process by minimizing the reliance on large hardware instrumentation. To meet the needs of the rapidly growing field of metabolomics and its heavy data-processing workload, we designed the first remote metabolomic data screening platform for mobile devices. Two mobile applications (apps), XCMS Mobile and METLIN Mobile, facilitate access to XCMS and METLIN, which are the most important components in the computer-based XCMS Online platforms. These mobile apps allow for the visualization and analysis of metabolic data throughout the entire analytical process. Specifically, XCMS Mobile and METLIN Mobile provide the capabilities for remote monitoring of data processing, real time notifications for the data processing, visualization and interactive analysis of processed data (e.g., cloud plots, principal component analysis, box-plots, extracted ion chromatograms, and hierarchical cluster analysis), and database searching for metabolite identification. These apps, available on Apple iOS and Google Android operating systems, allow for the migration of metabolomic research onto mobile devices for better accessibility beyond direct instrument operation. The utility of XCMS Mobile and METLIN Mobile is demonstrated here through the metabolomic LC-MS analyses of stem cells, colon cancer, aging, and bacterial metabolism.

  17. Smartphone Analytics: Mobilizing the Lab into the Cloud for Omic-Scale Analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Montenegro-Burke, J. Rafael; Phommavongsay, Thiery; Aisporna, Aries E.

    Active data screening is an integral part of many scientific activities, and mobile technologies have greatly facilitated this process by minimizing the reliance on large hardware instrumentation. To meet the needs of the rapidly growing field of metabolomics and its heavy data-processing workload, we designed the first remote metabolomic data screening platform for mobile devices. Two mobile applications (apps), XCMS Mobile and METLIN Mobile, facilitate access to XCMS and METLIN, which are the most important components in the computer-based XCMS Online platforms. These mobile apps allow for the visualization and analysis of metabolic data throughout the entire analytical process. Specifically, XCMS Mobile and METLIN Mobile provide the capabilities for remote monitoring of data processing, real time notifications for the data processing, visualization and interactive analysis of processed data (e.g., cloud plots, principal component analysis, box-plots, extracted ion chromatograms, and hierarchical cluster analysis), and database searching for metabolite identification. These apps, available on Apple iOS and Google Android operating systems, allow for the migration of metabolomic research onto mobile devices for better accessibility beyond direct instrument operation. The utility of XCMS Mobile and METLIN Mobile is demonstrated here through the metabolomic LC-MS analyses of stem cells, colon cancer, aging, and bacterial metabolism.

  18. Workload of Team Leaders and Team Members During a Simulated Sepsis Scenario.

    PubMed

    Tofil, Nancy M; Lin, Yiqun; Zhong, John; Peterson, Dawn Taylor; White, Marjorie Lee; Grant, Vincent; Grant, David J; Gottesman, Ronald; Sudikoff, Stephanie N; Adler, Mark; Marohn, Kimberly; Davidson, Jennifer; Cheng, Adam

    2017-09-01

    Crisis resource management principles dictate appropriate distribution of mental and/or physical workload so as not to overwhelm any one team member. Workload during pediatric emergencies is not well studied. The National Aeronautics and Space Administration-Task Load Index is a multidimensional tool designed to assess workload that has been validated in multiple settings. Low workload is defined as less than 40, moderate as 40-60, and high as greater than 60. Our hypothesis was that workload among both team leaders and team members is moderate to high during a simulated pediatric sepsis scenario and that team leaders would have a higher workload than team members. Multicenter observational study. Nine pediatric simulation centers (five United States, three Canada, and one United Kingdom). Team leaders and team members during a 12-minute pediatric sepsis scenario. National Aeronautics and Space Administration-Task Load Index. One hundred twenty-seven teams were recruited from nine sites. One hundred twenty-seven team leaders and 253 team members completed the National Aeronautics and Space Administration-Task Load Index. Team leaders had significantly higher overall workload than team members (51 ± 11 vs 44 ± 13; p < 0.01). Team leaders had higher workloads in all subcategories except performance, where the values were equal, and physical demand, where team members were higher than team leaders (29 ± 22 vs 18 ± 16; p < 0.01). The highest category for each group was mental demand: 73 ± 13 for team leaders and 60 ± 20 for team members. For team leaders, two categories, mental demand (73 ± 17) and effort (66 ± 16), reached high workload; for team members, most domains were at moderate workload levels. Team leaders and team members are under moderate workloads during a pediatric sepsis scenario, with team leaders under high workloads (> 60) in the mental demand and effort subscales. Team leaders averaged significantly higher workloads. Consideration of decreasing team leader responsibilities may improve team workload distribution.

  19. Multidimensional Environmental Data Resource Brokering on Computational Grids and Scientific Clouds

    NASA Astrophysics Data System (ADS)

    Montella, Raffaele; Giunta, Giulio; Laccetti, Giuliano

    Grid computing has evolved widely over the past years, and its capabilities have found their way even into business products and are no longer relegated to scientific applications. Today, grid computing technology is not restricted to a set of specific grid open source or industrial products; rather, it comprises a set of capabilities virtually within any kind of software, used to create shared and highly collaborative production environments. These environments are focused on computational (workload) capabilities and the integration of information (data) into those computational capabilities. An active application field of grid computing is the full virtualization of scientific instruments, which increases their availability and decreases operation and maintenance costs. Computational and information grids allow real-world objects to be managed in a service-oriented way using widespread industrial standards.

  20. Mental workload measurement in operator control room using NASA-TLX

    NASA Astrophysics Data System (ADS)

    Sugarindra, M.; Suryoputro, M. R.; Permana, A. I.

    2017-12-01

    Workload, encountered as a combination of physical and mental workload, is a consequence of work activities. The central control room is one department in an oil processing company; its employees are tasked with monitoring the processing unit 24 hours nonstop in a combination of 3 shifts of 8 hours. NASA-TLX (NASA Task Load Index) is a subjective mental workload measure using six factors, namely Mental demand (MD), Physical demand (PD), Temporal demand (TD), Performance (OP), Effort (EF), and Frustration level (FR). Subjective mental workload measurement is the most widely used approach because it has a high degree of validity. Based on the calculation of mental workload, five units (DTU, NPU, HTU, DIST and OPS) in the control room scored 94, 83.33, 94.67, 81.33 and 94.67 respectively, which categorizes them as very high mental workload. The high level of mental workload on the operators in the central control room stems from the requirement for high accuracy, alertness, and quick decision-making.
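
    For context, the standard NASA-TLX overall score is a weighted mean of the six subscale ratings, with weights taken from 15 pairwise comparisons; a minimal sketch of that conventional procedure (the example numbers are illustrative, not taken from this study):

      SCALES = ("MD", "PD", "TD", "OP", "EF", "FR")   # the six NASA-TLX subscales

      def tlx_weighted(ratings, weights):
          """ratings: dict scale -> 0..100; weights: dict scale -> number of
          times the scale was chosen in the 15 pairwise comparisons."""
          assert sum(weights[s] for s in SCALES) == 15
          return sum(ratings[s] * weights[s] for s in SCALES) / 15.0

      ratings = {"MD": 95, "PD": 70, "TD": 90, "OP": 85, "EF": 92, "FR": 60}
      weights = {"MD": 5, "PD": 1, "TD": 3, "OP": 2, "EF": 3, "FR": 1}
      print(round(tlx_weighted(ratings, weights), 2))   # -> 88.07, in the "very high" band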

  1. Planning and management of cloud computing networks

    NASA Astrophysics Data System (ADS)

    Larumbe, Federico

    The evolution of the Internet has a great impact on a big part of the population. People use it to communicate, query information, receive news, work, and for entertainment. Its extraordinary usefulness as a communication medium made the number of applications and technological resources explode. However, that network expansion comes at the cost of significant power consumption. If the power consumption of telecommunication networks and data centers were considered as that of a country, it would rank 5th in the world. Furthermore, the number of servers in the world is expected to grow by a factor of 10 between 2013 and 2020. This context motivates us to study techniques and methods to allocate cloud computing resources in an optimal way with respect to cost, quality of service (QoS), power consumption, and environmental impact. The results we obtained from our test cases show that besides minimizing capital expenditures (CAPEX) and operational expenditures (OPEX), the response time can be reduced up to 6 times, power consumption by 30%, and CO2 emissions by a factor of 60. Cloud computing provides dynamic access to IT resources as a service. In this paradigm, programs are executed in servers connected to the Internet that users access from their computers and mobile devices. The first advantage of this architecture is to reduce the time of application deployment and interoperability, because a new user only needs a web browser and does not need to install software on local computers with specific operating systems. Second, applications and information are available from everywhere and with any device with an Internet access. Also, servers and IT resources can be dynamically allocated depending on the number of users and workload, a feature called elasticity. This thesis studies the resource management of cloud computing networks and is divided into three main stages. We start by analyzing the planning of cloud computing networks to get a comprehensive vision. The first question to be solved is what the optimal data center locations are. We found that the location of each data center has a big impact on cost, QoS, power consumption, and greenhouse gas emissions. An optimization problem with a multi-criteria objective function is proposed to decide jointly the optimal location of data centers and software components, link capacities, and information routing. Once the network planning has been analyzed, the problem of dynamic resource provisioning in real time is addressed. In this context, virtualization is a key technique in cloud computing because each server can be shared by multiple Virtual Machines (VMs) and the total power consumption can be reduced. In the same line of location problems, we propose a Green Cloud Broker that optimizes VM placement across multiple data centers. In fact, when multiple data centers are considered, response time can be reduced by placing VMs close to users, cost can be minimized, power consumption can be optimized by using energy efficient data centers, and CO2 emissions can be decreased by choosing data centers provided with renewable energy sources. The third stage of the analysis is the short-term management of a cloud data center. In particular, a method is proposed to assign VMs to servers by considering communication traffic among VMs. Cloud data centers receive new applications over time and these applications need on-demand resource provisioning. 
Each application is composed of multiple types of VMs that interact among themselves. A program called the scheduler must place each new VM in a server, and that placement impacts the QoS and power consumption. Our method places VMs that communicate among themselves in servers that are close to each other in the network topology, thus reducing communication delay and increasing the throughput available among VMs. Furthermore, the power consumption of each type of server is considered and the most efficient ones are chosen to place the VMs. The number of VMs of each application can be dynamically changed to match the workload, and servers not needed in a particular period can be suspended to save energy. The methodology developed is based on Mixed Integer Programming (MIP) models to formalize the problems, which are then solved with state-of-the-art optimization solvers. Then, heuristics are developed to solve cases with more than 1,000 potential data center locations for the planning problem, 1,000 nodes for the cloud broker, and 128,000 servers for the VM placement problem. Solutions with very short optimality gaps, between 0% and 1.95%, are obtained, with execution times on the order of minutes for the planning problem and less than a second for the real-time cases. We consider that this thesis on resource provisioning of cloud computing networks makes important contributions to this research area, and innovative commercial applications based on the proposed methods have a promising future.
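
    The communication-aware placement idea in the third stage can be sketched with a small greedy heuristic; this illustrates the principle (place heavily communicating VMs on nearby servers), not the thesis's actual MIP formulation or heuristics, and all names here are invented for illustration:

      def place_vms(traffic, free_slots, distance):
          """traffic: dict (vm_a, vm_b) -> traffic volume between the two VMs;
          free_slots: dict server -> remaining VM slots;
          distance: dict (server_s, server_t) -> hops in the network topology,
          with distance[(s, s)] == 0. Assumes capacity suffices for all VMs."""
          placement = {}
          # Consider VM pairs in decreasing order of mutual traffic, so the
          # heaviest talkers are placed close together first (a MIP would
          # optimize all pairs jointly instead).
          for (a, b), _ in sorted(traffic.items(), key=lambda kv: -kv[1]):
              for vm, peer in ((a, b), (b, a)):
                  if vm in placement:
                      continue
                  candidates = [s for s, free in free_slots.items() if free > 0]
                  if peer in placement:   # pull vm toward its already-placed peer
                      candidates.sort(key=lambda s: distance[(s, placement[peer])])
                  placement[vm] = candidates[0]
                  free_slots[candidates[0]] -= 1
          return placement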

  2. Dynamic resource allocation scheme for distributed heterogeneous computer systems

    NASA Technical Reports Server (NTRS)

    Liu, Howard T. (Inventor); Silvester, John A. (Inventor)

    1991-01-01

    This invention relates to resource allocation in computer systems, and more particularly, to a method and associated apparatus for shortening response time and improving efficiency of a heterogeneous distributed networked computer system by reallocating the jobs queued up for busy nodes to idle, or less-busy, nodes. In accordance with the algorithm (SIDA for short), the load-sharing is initiated by the server device in a manner such that extra overhead is not imposed on the system during heavily-loaded conditions. The algorithm employed in the present invention uses a dual-mode, server-initiated approach. Jobs are transferred from heavily burdened nodes (i.e., over a high threshold limit) to lightly burdened nodes at the initiation of the receiving node when: (1) a job finishes at a node which is burdened below a pre-established threshold level, or (2) a node is idle for a period of time as established by a wakeup timer at the node. The invention uses a combination of the local queue length and the local service rate ratio at each node as the workload indicator.
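
    A toy sketch of the dual-mode trigger described above; the threshold, timer value and data structures are invented for illustration and are not from the patent:

      import time

      THRESHOLD = 3.0        # workload-indicator level separating busy from light
      WAKEUP_SECONDS = 5.0   # idle period after which a node asks for work

      class Node:
          def __init__(self, name, service_rate):
              self.name = name
              self.queue = []                  # jobs waiting at this node
              self.service_rate = service_rate
              self.last_job_end = time.monotonic()

          def workload(self, mean_rate):
              # Indicator combining local queue length with the local service
              # rate relative to the system mean, as in the abstract.
              return len(self.queue) * (mean_rate / self.service_rate)

          def maybe_pull_work(self, cluster, mean_rate):
              # Mode 1: a job just finished and this node is below threshold.
              # Mode 2: the wakeup timer fired while the node sat idle.
              idle = time.monotonic() - self.last_job_end > WAKEUP_SECONDS
              if self.workload(mean_rate) < THRESHOLD or idle:
                  donor = max(cluster, key=lambda n: n.workload(mean_rate))
                  if donor is not self and donor.workload(mean_rate) > THRESHOLD:
                      self.queue.append(donor.queue.pop())   # transfer one queued job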

  3. Heterogeneous compute in computer vision: OpenCL in OpenCV

    NASA Astrophysics Data System (ADS)

    Gasparakis, Harris

    2014-02-01

    We explore the relevance of Heterogeneous System Architecture (HSA) in Computer Vision, both as a long term vision and as a near term emerging reality via the recently ratified OpenCL 2.0 Khronos standard. After a brief review of OpenCL 1.2 and 2.0, including HSA features such as Shared Virtual Memory (SVM) and platform atomics, we identify what genres of Computer Vision workloads stand to benefit by leveraging those features, and we suggest a new mental framework that replaces GPU compute with hybrid HSA APU compute. As a case in point, we discuss, in some detail, popular object recognition algorithms (part-based models), emphasizing the interplay and concurrent collaboration between the GPU and CPU. We conclude by describing how OpenCL has been incorporated in OpenCV, a popular open source computer vision library, emphasizing recent work on the Transparent API, to appear in OpenCV 3.0, which unifies the native CPU and OpenCL execution paths under a single API, allowing the same code to execute either on the CPU or on an OpenCL-enabled device, without even recompiling.

  4. Can Clouds Dance? Neural Correlates of Passive Conceptual Expansion Using a Metaphor Processing Task: Implications for Creative Cognition

    ERIC Educational Resources Information Center

    Rutter, Barbara; Kroger, Soren; Stark, Rudolf; Schweckendiek, Jan; Windmann, Sabine; Hermann, Christiane; Abraham, Anna

    2012-01-01

    Creativity has emerged in the focus of neurocognitive research in the past decade. However, a heterogeneous pattern of brain areas has been implicated as underpinning the neural correlates of creativity. One explanation for these divergent findings lies in the fact that creativity is not usually investigated in terms of its many underlying…

  5. Session on coupled atmospheric/chemistry coupled models

    NASA Technical Reports Server (NTRS)

    Thompson, Anne

    1993-01-01

    The session on coupled atmospheric/chemistry models is reviewed. Current model limitations, current issues and critical unknowns, and modeling activity are addressed. Specific recommendations and experimental strategies are given on the following: multiscale surface layer - planetary boundary layer - chemical flux measurements; an Eulerian budget study; and a Lagrangian experiment. Nonprecipitating cloud studies, organized convective systems, and aerosols - heterogeneous chemistry are also discussed.

  6. [Study on mental workload of teachers in primary schools].

    PubMed

    Xiao, Yuan-mei; Wang, Zhi-ming; Wang, Mian-zhen; Lan, Ya-jia; Fan, Guang-qin; Feng, Chang

    2011-12-01

    To investigate the distribution characteristics and influencing factors of mental workload among teachers in primary schools. The National Aeronautics and Space Administration-Task Load Index (NASA-TLX) was used to assess the mental workload levels of 397 primary school teachers in a city. The mental workload (64.34±10.56) of female teachers was significantly higher than that (61.73±9.77) of male teachers (P<0.05). The mental workload (65.66±10.42) of the "-35" years old group was the highest. When teachers were younger than 35 years old, there was a positive correlation between mental workload and age (r=0.146, P<0.05). When teachers were older than 35 years old, there was a negative correlation between mental workload and age (r=-0.190, P<0.05). Teachers with a higher education level felt a higher mental workload (unstandardized coefficient B=1.524, standardized coefficient β=0.111, P<0.05). There was a positive correlation between mental workload and working hours per day (unstandardized coefficient B=4.659, standardized coefficient β=0.223, P<0.001). The mental workload of primary school teachers is closely related to age, educational level and working hours per day. Working hours per day is an important risk factor for mental workload. Reducing daily working hours (to 8 hours) is an effective measure for alleviating the mental workload of teachers in primary schools.

  7. Elastic Cloud Computing Architecture and System for Heterogeneous Spatiotemporal Computing

    NASA Astrophysics Data System (ADS)

    Shi, X.

    2017-10-01

    Spatiotemporal computation implements a variety of different algorithms. When big data are involved, a desktop computer or standalone application may not be able to complete the computation task due to limited memory and computing power. Now that a variety of hardware accelerators and computing platforms are available to improve the performance of geocomputation, different algorithms may behave differently on different computing infrastructures and platforms. Some are perfect for implementation on a cluster of graphics processing units (GPUs), while GPUs may not be useful for certain kinds of spatiotemporal computation. The same situation arises in utilizing a cluster of Intel's many-integrated-core (MIC) or Xeon Phi processors, as well as Hadoop or Spark platforms, to handle big spatiotemporal data. Furthermore, considering the energy efficiency requirement in general computation, a Field Programmable Gate Array (FPGA) may be a better solution for energy efficiency when its computational performance is similar to or better than that of GPUs and MICs. It is expected that an elastic cloud computing architecture and system that integrates all of GPUs, MICs, and FPGAs could be developed and deployed to support spatiotemporal computing over heterogeneous data types and computational problems.

  8. Ice Nucleation in the Tropical Tropopause Layer: Implications for Cirrus Occurrence, Cirrus Microphysical Properties, and Dehydration of Air Entering the Stratosphere

    NASA Technical Reports Server (NTRS)

    Jensen, Eric; Kaercher, Bernd; Ueyama, Rei; Pfister, Leonhard

    2017-01-01

    Recent laboratory experiments have advanced our understanding of the physical properties and ice nucleating abilities of aerosol particles at low temperatures. In particular, aerosols containing organics will transition to a glassy state at low temperatures, and these glassy aerosols are moderately effective as ice nuclei. These results have implications for ice nucleation in the cold Tropical Tropopause Layer (TTL; 13-19 km). We have developed a detailed cloud microphysical model that includes heterogeneous nucleation on a variety of aerosol types and homogeneous freezing of aqueous aerosols. This model has been incorporated into one-dimensional simulations of cirrus and water vapor driven by meteorological analysis temperature and wind fields. The model includes scavenging of ice nuclei by sedimenting ice crystals. The model is evaluated by comparing the simulated cloud properties and water vapor concentrations with aircraft and satellite measurements. In this presentation, I will discuss the relative importance of homogeneous and heterogeneous ice nucleation, the impact of ice nuclei scavenging as air slowly ascends through the TTL, and the implications for the final dehydration of air parcels crossing the tropical cold-point tropopause and entering the tropical stratosphere.

  9. Using the NASA Task Load Index to Assess Workload in Electronic Medical Records.

    PubMed

    Hudson, Darren; Kushniruk, Andre W; Borycki, Elizabeth M

    2015-01-01

    Electronic medical records (EMRs) have been expected to decrease health professional workload. The NASA Task Load Index has become an important tool for assessing workload in many domains. However, its application in assessing the impact of an EMR on nurses' workload has remained to be explored. In this paper we report the results of a study of workload, and we explore the utility of applying the NASA Task Load Index to assess the impact of an EMR, at the end of its lifecycle, on nurses' workload. It was found that mental and temporal demands were most responsible for the workload. Further work along these lines is recommended.

  10. Mental workload prediction based on attentional resource allocation and information processing.

    PubMed

    Xiao, Xu; Wanyan, Xiaoru; Zhuang, Damin

    2015-01-01

    Mental workload is an important component in complex human-machine systems. The limited applicability of empirical workload measures produces the need for workload modeling and prediction methods. In the present study, a mental workload prediction model is built on the basis of attentional resource allocation and information processing to ensure pilots' accuracy and speed in understanding large amounts of flight information on the cockpit display interface. Validation with an empirical study of an abnormal attitude recovery task showed that this model's prediction of mental workload highly correlated with experimental results. This mental workload prediction model provides a new tool for optimizing human factors interface design and reducing human errors.

  11. Measuring workload in collaborative contexts: trait versus state perspectives.

    PubMed

    Helton, William S; Funke, Gregory J; Knott, Benjamin A

    2014-03-01

    In the present study, we explored the state versus trait aspects of measures of task and team workload in a disaster simulation. There is often a need to assess workload in both individual and collaborative settings. Researchers in this field often use the NASA Task Load Index (NASA-TLX) as a global measure of workload by aggregating the NASA-TLX's component items. Using this practice, one may overlook the distinction between traits and states. Fifteen dyadic teams (11 inexperienced, 4 experienced) completed five sessions of a tsunami disaster simulator. After every session, individuals completed a modified version of the NASA-TLX that included team workload measures. We then examined the workload items from both a between-subjects and a within-subjects perspective. Between-subjects and within-subjects correlations among the items indicated the workload items are more independent within subjects (as states) than between subjects (as traits). Correlations between the workload items and simulation performance were also different at the trait and state levels. Workload may behave differently at trait (between-subjects) and state (within-subjects) levels. Researchers interested in workload measurement as a state should take a within-subjects perspective in their analyses.
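
    The trait/state distinction above corresponds to two different correlations over the same long-format data, as in this sketch (the column names are assumptions, not from the study):

      import pandas as pd

      def trait_and_state_correlations(df, x="mental_demand", y="effort"):
          """df: one row per (subject, session). Returns (between-subjects r,
          within-subjects r) for workload items x and y."""
          # Trait level: correlate subject means (one point per person).
          means = df.groupby("subject")[[x, y]].mean()
          r_between = means[x].corr(means[y])
          # State level: remove each subject's own mean, then correlate the
          # session-to-session deviations that remain.
          deviations = df[[x, y]] - df.groupby("subject")[[x, y]].transform("mean")
          r_within = deviations[x].corr(deviations[y])
          return r_between, r_within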

  12. Novel method of measuring the mental workload of anaesthetists during clinical practice.

    PubMed

    Byrne, A J; Oliver, M; Bodger, O; Barnett, W A; Williams, D; Jones, H; Murphy, A

    2010-12-01

    Cognitive overload has been recognized as a significant cause of error in industries such as aviation and measuring mental workload has become a key method of improving safety. The aim of this study was to pilot the use of a new method of measuring mental workload in the operating theatre using a previously published methodology. The mental workload of the anaesthetists was assessed by measuring their response times to a wireless vibrotactile device and the NASA TLX subjective workload score during routine surgical procedures. Primary task workload was inferred from the phase of anaesthesia. Significantly increased response time was associated with the induction phase of anaesthesia compared with maintenance/emergence, non-consultant grade, and during more complex cases. Increased response was also associated with self-reported mental load, physical load, and frustration. These findings are consistent with periods of increased mental workload and with the findings of other studies using similar techniques. These findings confirm the importance of mental workload to the performance of anaesthetists and suggest that increased mental workload is likely to be a common problem in clinical practice. Although further studies are required, the method described may be useful for the measurement of the mental workload of anaesthetists.

  13. Workload and non-contact injury incidence in elite football players competing in European leagues.

    PubMed

    Delecroix, Barthelemy; McCall, Alan; Dawson, Brian; Berthoin, Serge; Dupont, Gregory

    2018-06-02

    The aim of this study was to analyse the relationship between absolute and acute:chronic workload ratios and non-contact injury incidence in professional football players, and to assess their predictive ability. Elite football players (n = 130) from five teams competing in European domestic and confederation level competitions were followed during one full competitive season. Non-contact injuries were recorded, and using session rating of perceived exertion (s-RPE), internal absolute workloads and acute:chronic (A:C) workload ratios (4-week, 3-week, 2-week and week-to-week) were calculated using a rolling days method. The relative risk (RR) of non-contact injury was increased (RR = 1.59, CI95%: 1.18-2.15) when the cumulative 4-week absolute workload was greater than 10629 arbitrary units (AU), in comparison with a workload between 3745 and 10628 AU. When the 3-week absolute workload was more than 8319 AU versus between 2822 and 8318 AU, injury risk was also increased (RR = 1.46, CI95%: 1.08-1.98). Injury incidence was higher when the 4-week A:C ratio was <0.85 versus >0.85 (RR = 1.31, CI95%: 1.02-1.70) and with a 3-week A:C ratio >1.30 versus <1.30 (RR = 1.37, CI95%: 1.05-1.77). Importantly, none of the A:C workload combinations showed high sensitivity or specificity. In elite European footballers, internal workload (s-RPE) data revealed that cumulative workloads over 3 and 4 weeks were associated with injury incidence. Additionally, A:C workloads, using combinations of 2, 3 and 4 weeks as the chronic workload, were also associated with increased injury risk. No A:C workload combination was appropriate for predicting injury.
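
    The rolling-days calculation of acute and chronic workloads can be sketched as follows, assuming a pandas Series of daily s-RPE values indexed by date; the 0.85 and 1.30 cut-offs echo the ratios discussed above but are illustrative flags, not validated predictors:

      import pandas as pd

      def acute_chronic_ratio(daily_srpe, chronic_weeks=4):
          """daily_srpe: pd.Series of daily s-RPE workload (AU) with a
          DatetimeIndex. Acute = rolling 7-day total; chronic = average
          weekly workload over the last `chronic_weeks` weeks. Early days
          use partial windows."""
          acute = daily_srpe.rolling("7D").sum()
          chronic = daily_srpe.rolling(f"{7 * chronic_weeks}D").sum() / chronic_weeks
          return acute / chronic

      # Example flags mirroring the associations reported above:
      # ratio4 = acute_chronic_ratio(load)          # 4-week chronic window
      # ratio3 = acute_chronic_ratio(load, 3)       # 3-week chronic window
      # higher_risk = (ratio4 < 0.85) | (ratio3 > 1.30)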

  14. Combining Quick-Turnaround and Batch Workloads at Scale

    NASA Technical Reports Server (NTRS)

    Matthews, Gregory A.

    2012-01-01

    NAS uses PBS Professional to schedule and manage the workload on Pleiades, an 11,000+ node InfiniBand cluster. At this scale the user experience for quick-turnaround jobs can degrade, which led NAS initially to set up two separate PBS servers, each dedicated to a particular workload. Recently we have employed PBS hooks and scheduler modifications to merge these workloads together under one PBS server, delivering sub-1-minute start times for the quick-turnaround workload, and enabling dynamic management of the resources set aside for that workload.

  15. Cirrus Parcel Model Comparison Project. Phase 1: The Critical Components to Simulate Cirrus Initiation Explicitly.

    NASA Astrophysics Data System (ADS)

    Lin, Ruei-Fong; O'C. Starr, David; Demott, Paul J.; Cotton, Richard; Sassen, Kenneth; Jensen, Eric; Kärcher, Bernd; Liu, Xiaohong

    2002-08-01

    The Cirrus Parcel Model Comparison Project, a project of the GCSS [Global Energy and Water Cycle Experiment (GEWEX) Cloud System Studies] Working Group on Cirrus Cloud Systems, involves the systematic comparison of current models of ice crystal nucleation and growth for specified, typical, cirrus cloud environments. In Phase 1 of the project reported here, simulated cirrus cloud microphysical properties from seven models are compared for `warm' (-40°C) and `cold' (-60°C) cirrus, each subject to updrafts of 0.04, 0.2, and 1 m s⁻¹. The models employ explicit microphysical schemes wherein the size distribution of each class of particles (aerosols and ice crystals) is resolved into bins or the evolution of each individual particle is traced. Simulations are made including both homogeneous and heterogeneous ice nucleation mechanisms (all-mode simulations). A single initial aerosol population of sulfuric acid particles is prescribed for all simulations. Heterogeneous nucleation is disabled for a second parallel set of simulations in order to isolate the treatment of the homogeneous freezing (of haze droplets) nucleation process. Analysis of these latter simulations is the primary focus of this paper. Qualitative agreement is found for the homogeneous-nucleation-only simulations; for example, the number density of nucleated ice crystals increases with the strength of the prescribed updraft. However, significant quantitative differences are found. Detailed analysis reveals that the homogeneous nucleation rate, haze particle solution concentration, and water vapor uptake rate by ice crystal growth (particularly as controlled by the deposition coefficient) are critical components that lead to differences in the predicted microphysics. Systematic differences exist between results based on a modified classical theory approach and models using an effective freezing temperature approach to the treatment of nucleation. Each method is constrained by critical freezing data from laboratory studies, but each includes assumptions that can only be justified by further laboratory research. Consequently, it is not yet clear if the two approaches can be made consistent. Large haze particles may deviate considerably from equilibrium size in moderate to strong updrafts (0.2-1 m s⁻¹) at -60°C. The equilibrium assumption is commonly invoked in cirrus parcel models. The resulting difference in particle-size-dependent solution concentration of haze particles may significantly affect the ice particle formation rate during the initial nucleation interval. The uptake rate for water vapor excess by ice crystals is another key component regulating the total number of nucleated ice crystals. This rate, the product of particle number concentration and ice crystal diffusional growth rate, which is particularly sensitive to the deposition coefficient when ice particles are small, modulates the peak particle formation rate achieved in an air parcel and the duration of the active nucleation time period. The consequent differences in cloud microphysical properties, and thus cloud optical properties, between state-of-the-art models of ice crystal initiation are significant. Intermodel differences in the case of all-mode simulations are correspondingly greater than in the case of homogeneous nucleation acting alone. Definitive laboratory and atmospheric benchmark data are needed to improve the treatment of heterogeneous nucleation processes.

  16. Preliminary Investigation of Workload on Intrastate Bus Traffic Controllers

    NASA Astrophysics Data System (ADS)

    Yen Bin, Teo; Azlis-Sani, Jalil; Nur Annuar Mohd Yunos, Muhammad; Ismail, S. M. Sabri S. M.; Tajedi, Noor Aqilah Ahmad

    2016-11-01

    The daily routine of a bus traffic controller, which involves high mental processes, has a direct impact on the level of workload. To date, the level of workload on bus traffic controllers in Malaysia is relatively unknown. Excessive workload on bus traffic controllers would affect the control and efficiency of the system. This paper serves to study the workload on bus traffic controllers and to justify the need for further detailed research in this field. The objective of this research is to identify the level of workload on intrastate bus traffic controllers. Based on the results, recommendations are proposed for improvements and future studies. The level of workload for the bus traffic controllers was quantified using a questionnaire adapted from NASA-TLX, and interview sessions were conducted for validation of the workload. Sixteen respondents were involved, and it was found that the average level of workload based on NASA-TLX was 6.91. It was found that workload is not affected by gender or marital status. This study also showed that the level of workload and the working experience of bus traffic controllers have a strong positive linear relationship. This study can serve as guidance and a reference for work in this field. Since it is a preliminary investigation, further detailed studies could be conducted to obtain a better understanding of bus traffic controllers' workload.

  17. FMP study of pilot workload. Qualification of workload via instrument scan

    NASA Technical Reports Server (NTRS)

    Tole, J. R.; Vivaudou, M.; Harris, R. L., Sr.; Ephrath, A.

    1982-01-01

    Various methods of measuring a pilot's mental workload are discussed. Particular attention is given to measuring workload from pilots' scanning of the flight instruments, combined with secondary verbal tasks, during instrument landings.

  18. Mental workload measurement: Event-related potentials and ratings of workload and fatigue

    NASA Technical Reports Server (NTRS)

    Biferno, M. A.

    1985-01-01

    Event-related potentials were elicited when a digitized word representing a pilot's call-sign was presented. This auditory probe was presented during 27 workload conditions in a 3x3x3 design where the following variables were manipulated: short-term load, tracking task difficulty, and time-on-task. Ratings of workload and fatigue were obtained between each trial of a 2.5-hour test. The data of each subject were analyzed individually to determine whether significant correlations existed between subjective ratings and ERP component measures. Results indicated that a significant number of subjects had positive correlations between: (1) ratings of workload and P300 amplitude, (2) ratings of workload and N400 amplitude, and (3) ratings of fatigue and P300 amplitude. These data are the first to show correlations between ratings of workload or fatigue and ERP components thereby reinforcing their validity as measures of mental workload and fatigue.

  19. Voice measures of workload in the advanced flight deck: Additional studies

    NASA Technical Reports Server (NTRS)

    Schneider, Sid J.; Alpert, Murray

    1989-01-01

    These studies investigated acoustical analysis of the voice as a measure of workload in individual operators. In the first study, voice samples were recorded from a single operator during high, medium, and low workload conditions. Mean amplitude, frequency, syllable duration, and emphasis all tended to increase as workload increased. In the second study, NASA test pilots performed a laboratory task, and used a flight simulator under differing work conditions. For two of the pilots, high workload in the simulator brought about greater amplitude, peak duration, and stress. In both the laboratory and simulator tasks, high workload tended to be associated with more statistically significant drop-offs in the acoustical measures than were lower workload levels. There was a great deal of intra-subject variability in the acoustical measures. The results suggested that in individual operators, increased workload might be revealed by high initial amplitude and frequency, followed by rapid drop-offs over time.

  20. A computationally efficient description of heterogeneous freezing: A simplified version of the Soccer ball model

    NASA Astrophysics Data System (ADS)

    Niedermeier, Dennis; Ervens, Barbara; Clauss, Tina; Voigtländer, Jens; Wex, Heike; Hartmann, Susan; Stratmann, Frank

    2014-01-01

    In a recent study, the Soccer ball model (SBM) was introduced for modeling and/or parameterizing heterogeneous ice nucleation processes. The model applies classical nucleation theory. It allows for a consistent description of both apparently singular and stochastic ice nucleation behavior, by distributing contact angles over the nucleation sites of a particle population assuming a Gaussian probability density function. The original SBM utilizes the Monte Carlo technique, which hampers its usage in atmospheric models, as fairly time-consuming calculations must be performed to obtain statistically significant results. Thus, we have developed a simplified and computationally more efficient version of the SBM. We successfully used the new SBM to parameterize experimental nucleation data of, e.g., bacterial ice nucleation. Both SBMs give identical results; however, the new model is computationally less expensive as confirmed by cloud parcel simulations. Therefore, it is a suitable tool for describing heterogeneous ice nucleation processes in atmospheric models.
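
    The simplification can be sketched as replacing Monte Carlo sampling with a direct quadrature over the contact-angle distribution; j_het is a placeholder for a CNT-based nucleation rate the caller must supply (vectorized over theta), and the closed form below follows from assuming independent, identically distributed nucleation sites, as in the SBM:

      import numpy as np

      def frozen_fraction(t, temp, n_sites, theta_mean, theta_sigma, j_het, n_quad=201):
          """Fraction of particles frozen after time t (s), each particle
          carrying n_sites nucleation sites with Gaussian-distributed
          contact angles. j_het(theta, temp) -> per-site rate (1/s)."""
          theta = np.linspace(0.0, np.pi, n_quad)
          dtheta = theta[1] - theta[0]
          pdf = np.exp(-0.5 * ((theta - theta_mean) / theta_sigma) ** 2)
          pdf /= (pdf * dtheta).sum()                 # normalize on [0, pi]
          survival = np.exp(-j_het(theta, temp) * t)  # site with angle theta unfrozen
          # Averaging site survival over the angle distribution and raising it
          # to the number of sites replaces the original Monte Carlo sampling.
          mean_survival = (pdf * survival * dtheta).sum()
          return 1.0 - mean_survival ** n_sites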

  1. Distributed MRI reconstruction using Gadgetron-based cloud computing.

    PubMed

    Xue, Hui; Inati, Souheil; Sørensen, Thomas Sangild; Kellman, Peter; Hansen, Michael S

    2015-03-01

    To expand the open source Gadgetron reconstruction framework to support distributed computing and to demonstrate that a multinode version of the Gadgetron can be used to provide nonlinear reconstruction with clinically acceptable latency. The Gadgetron framework was extended with new software components that enable an arbitrary number of Gadgetron instances to collaborate on a reconstruction task. This cloud-enabled version of the Gadgetron was deployed on three different distributed computing platforms ranging from a heterogeneous collection of commodity computers to the commercial Amazon Elastic Compute Cloud. The Gadgetron cloud was used to provide nonlinear, compressed sensing reconstruction on a clinical scanner with low reconstruction latency (e.g., cardiac and neuroimaging applications). The proposed setup was able to handle acquisition and l1-SPIRiT reconstruction of nine high temporal resolution, real-time cardiac short-axis cine acquisitions, covering the ventricles for functional evaluation, in under 1 min. A three-dimensional high-resolution brain acquisition with 1 mm³ isotropic pixel size was acquired and reconstructed with nonlinear reconstruction in less than 5 min. A distributed computing enabled Gadgetron provides a scalable way to improve reconstruction performance using commodity cluster computing. Nonlinear, compressed sensing reconstruction can be deployed clinically with low image reconstruction latency. © 2014 Wiley Periodicals, Inc.
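
    At its simplest, the fan-out pattern is the one sketched below; this is not the Gadgetron's actual networking code (which chains "gadget" pipelines across nodes), and recon_chunk is a placeholder for a per-chunk nonlinear reconstruction:

      from concurrent.futures import ProcessPoolExecutor

      def reconstruct(chunks, recon_chunk, workers=8):
          """Distribute independent reconstruction chunks across workers;
          a cloud deployment would replace local processes with remote nodes."""
          with ProcessPoolExecutor(max_workers=workers) as pool:
              return list(pool.map(recon_chunk, chunks))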

  2. Volume I: fluidized-bed code documentation, for the period February 28, 1983-March 18, 1983

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piperopoulou, H.; Finson, M.; Bloomfield, D.

    1983-03-01

    This documentation supersedes the previous documentation of the Fluidized-Bed Gasifier code. Volume I documents a simulation program of a Fluidized-Bed Gasifier (FBG), and Volume II documents a systems model of the FBG. The FBG simulation program is an updated version of the PSI/FLUBED code which is capable of modeling slugging beds and variable bed diameter. In its present form the code is set up to model a Westinghouse commercial scale gasifier. The fluidized bed gasifier model combines the classical bubbling bed description for the transport and mixing processes with PSI-generated models for coal chemistry. At the distributor plate, the bubble composition is that of the inlet gas and the initial bubble size is set by the details of the distributor plate. Bubbles grow by coalescence as they rise. The bubble composition and temperature change with height due to transport to and from the cloud as well as homogeneous reactions within the bubble. The cloud composition also varies with height due to cloud/bubble exchange, cloud/emulsion exchange, and heterogeneous coal char reactions. The emulsion phase is considered to be well mixed.

  3. Modification of cirrus clouds to reduce global warming

    NASA Astrophysics Data System (ADS)

    Mitchell, David L.; Finnegan, William

    2009-10-01

    Greenhouse gases and cirrus clouds regulate outgoing longwave radiation (OLR), and cirrus cloud coverage is predicted to be sensitive to the ice fall speed, which depends on ice crystal size. The higher the cirrus, the greater their impact on OLR. Thus by changing ice crystal size in the coldest cirrus, OLR and climate might be modified. Fortunately the coldest cirrus have the highest ice supersaturation due to the dominance of homogeneous freezing nucleation. Seeding such cirrus with very efficient heterogeneous ice nuclei should produce larger ice crystals due to vapor competition effects, thus increasing OLR and surface cooling. Preliminary estimates of this global net cloud forcing are more negative than -2.8 W m⁻² and could neutralize the radiative forcing due to a CO2 doubling (3.7 W m⁻²). A potential delivery mechanism for the seeding material is already in place: the airline industry. Since seeding aerosol residence times in the troposphere are relatively short, the climate might return to its normal state within months after stopping the geoengineering experiment. The main known drawback to this approach is that it would not stop ocean acidification. It does not have many of the drawbacks that stratospheric injection of sulfur species has.

  4. Chlorine-containing salts as water ice nucleating particles on Mars

    NASA Astrophysics Data System (ADS)

    Santiago-Materese, D. L.; Iraci, L. T.; Clapham, M. E.; Chuang, P. Y.

    2018-03-01

    Water ice cloud formation on Mars is largely expected to occur on the most efficient ice nucleating particle available. Salts have been observed on the Martian surface and are known to facilitate water cloud formation on Earth. We examined heterogeneous ice nucleation onto sodium chloride and sodium perchlorate substrates under Martian atmospheric conditions, in the range of 150 to 180 K and 10⁻⁷ to 10⁻⁵ Torr water partial pressure. Sub-155 K data for the critical saturation ratio (Scrit) suggest that an exponential model best describes the temperature dependence of the water ice nucleation onset for all substrates tested. While sodium chloride does not facilitate water ice nucleation more easily than bare silicon, sodium perchlorate does support depositional nucleation at lower saturation levels than the other substrates shown and is comparable to smectite-rich clay in its ability to support cloud initiation. Perchlorates could nucleate water ice at partial pressures up to 40% lower than other substrates examined to date under Martian atmospheric conditions. These findings suggest that air masses on Mars containing uplifted salts such as perchlorates could form water ice clouds at lower saturation ratios than air masses lacking similar particles.
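
    A sketch of fitting the exponential temperature dependence reported for the nucleation onset, via linear least squares on ln(Scrit); the data points below are invented placeholders, not values from the paper:

      import numpy as np

      def fit_exponential_scrit(temps_K, s_crit):
          """Fit Scrit(T) = a * exp(b * T) by regressing ln(Scrit) on T."""
          b, ln_a = np.polyfit(temps_K, np.log(s_crit), 1)
          return np.exp(ln_a), b

      T = np.array([150.0, 155.0, 160.0, 165.0, 170.0, 175.0, 180.0])  # placeholder
      S = np.array([4.8, 3.9, 3.2, 2.7, 2.3, 2.0, 1.8])                # placeholder
      a, b = fit_exponential_scrit(T, S)
      print(f"Scrit(T) ≈ {a:.3g} * exp({b:.4f} * T)")   # b < 0: onset eases with warming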

  5. Managing a tier-2 computer centre with a private cloud infrastructure

    NASA Astrophysics Data System (ADS)

    Bagnasco, Stefano; Berzano, Dario; Brunetti, Riccardo; Lusso, Stefano; Vallero, Sara

    2014-06-01

    In a typical scientific computing centre, several applications coexist and share a single physical infrastructure. An underlying Private Cloud infrastructure eases the management and maintenance of such heterogeneous applications (such as multipurpose or application-specific batch farms, Grid sites, interactive data analysis facilities and others), allowing dynamic allocation of resources to any application. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques. Such infrastructures are being deployed in some large centres (see e.g. the CERN Agile Infrastructure project), but with several open-source tools reaching maturity this is becoming viable also for smaller sites. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 centre, an Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The private cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem and the OpenWRT Linux distribution (used for network virtualization); a future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and OCCI.

  6. Entrainment vs. Dilution in Tropical Deep Convection

    NASA Astrophysics Data System (ADS)

    Hannah, W.

    2017-12-01

    The distinction between entrainment and dilution is investigated with cloud-resolving simulations of deep convection in a tropical environment. A method for estimating the rate of dilution by entrainment and detrainment is applied to a series of bubble simulations with a range of initial radii. Entrainment generally corresponds to dilution of convection, but the two quantities are not well correlated. Core dilution by entrainment is significantly reduced by the presence of a shell of moist air around the core. Entrainment contributes significantly to the total net dilution, but detrainment and the various source/sink terms play large roles depending on the variable in question. Detrainment has a concentrating effect on average that balances out the dilution by entrainment. The experiments are also used to examine whether entrainment or dilution scales with cloud radius. The results support a weak negative relationship for dilution, but not for entrainment. The sensitivity to resolution is briefly discussed. A toy Lagrangian thermal model is used to demonstrate the importance of the cloud shell as a thermodynamic buffer that reduces the dilution of the core by entrainment. The results suggest that explicit cloud heterogeneity may be a useful consideration for future convective parameterization development.
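
    For reference, the bulk-plume relation that motivates distinguishing entrainment from dilution (a standard textbook form, not the diagnostic method of this study) expresses the dilution of an in-cloud scalar \phi by entrainment at fractional rate \varepsilon:

      \frac{d\phi}{dz} = -\varepsilon\,\bigl(\phi - \phi_{\mathrm{env}}\bigr)

    where \phi_{\mathrm{env}} is the environmental value; the study's point is that the effective dilution departs from this form when a moist shell buffers the core against the environment.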

  7. 9+ Years of CALIOP PSC Data: An Evolving Climatology

    NASA Technical Reports Server (NTRS)

    Pitts, Michael C.; Poole, Lamont R.

    2015-01-01

    Polar stratospheric clouds (PSCs) play key roles in the springtime chemical depletion of ozone at high latitudes. PSC particles provide sites for heterogeneous chemical reactions that transform stable chlorine and bromine reservoir species into highly reactive ozone-destructive forms. Furthermore, large nitric acid trihydrate (NAT) PSC particles can irreversibly redistribute odd nitrogen through gravitational sedimentation, which prolongs the ozone depletion process by slowing the reformation of the stable chlorine reservoirs. However, there are still significant gaps in our understanding of PSC processes, particularly concerning the details of NAT particle formation. Spaceborne observations from the CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization) lidar on the CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations) satellite are providing a rich new dataset for studying PSCs on unprecedented vortex-wide scales. In this paper, we examine the vertical and spatial distribution of PSCs in the Antarctic and Arctic on vortex-wide scales for entire PSC seasons over the more than nine-year data record.

  8. The effects of control order, feedback, practice, and input device on tracking performance and perceived workload

    NASA Technical Reports Server (NTRS)

    Hancock, P. A.; Robinson, M. A.

    1989-01-01

    The present experiment examined the influence of several task-related factors on tracking performance and concomitant workload. The manipulated factors included tracking order, the presence or absence of knowledge of performance, and the control device. Summed root mean square error (rmse) and perceived workload were measured at the termination of each trial. Perceived workload was measured using the NASA Task Load Index (TLX) and the Subjective Workload Assessment Technique (SWAT). Results indicated a large and expected effect of track order on both performance and the perception of load. In general, trackball input was more accurate and judged to impose lower load than mouse input. The presence or absence of knowledge of performance had little effect on either performance or workload. There were a number of interactions between factors shown in performance that were mirrored by perceived workload scores. Results from each workload scale were equivalent in terms of sensitivity to task manipulations. The pattern of results affirms the utility of these workload measures in assessing the imposed load of multiple task-related variables.

  9. A workload model and measures for computer performance evaluation

    NASA Technical Reports Server (NTRS)

    Kerner, H.; Kuemmerle, K.

    1972-01-01

    A generalized workload definition is presented which constructs measurable workloads of unit size from workload elements, called elementary processes. An elementary process makes almost exclusive use of one of the processors, CPU, I/O processor, etc., and is measured by the cost of its execution. Various kinds of user programs can be simulated by quantitative composition of elementary processes into a type. The character of the type is defined by the weights of its elementary processes and its structure by the amount and sequence of transitions between its elementary processes. A set of types is batched into a mix. Mixes of identical cost are considered equivalent amounts of workload. These formalized descriptions of workloads allow investigators to compare the results of different studies quantitatively. Since workloads of different composition are assigned a unit of cost, these descriptions enable determination of the cost effectiveness of different workloads on a machine. Subsequently, performance parameters such as throughput rate, gain factor, and internal and external delay factors are defined and used to demonstrate the effects of various workload attributes on the performance of a selected large-scale computer system.
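
    A minimal sketch of this workload model in Python may make the construction concrete; the processor names, costs and weights below are hypothetical illustrations, not values from the paper:

      # Sketch of the elementary-process workload model: weighted elementary
      # processes form a "type", and a mix of types is scaled to unit cost.
      from dataclasses import dataclass

      @dataclass
      class ElementaryProcess:
          processor: str   # e.g. "CPU" or "IO" (hypothetical labels)
          cost: float      # cost of one execution

      def type_cost(weights, processes):
          """Cost of one workload type = weighted sum of elementary processes."""
          return sum(w * p.cost for w, p in zip(weights, processes))

      def normalize_mix(type_costs, target_cost=1.0):
          """Scale a mix of types so the whole mix carries unit cost, making
          differently composed workloads comparable."""
          total = sum(type_costs)
          return [target_cost * c / total for c in type_costs]

      cpu = ElementaryProcess("CPU", cost=3.0)
      io = ElementaryProcess("IO", cost=1.0)
      compute_bound = type_cost([0.9, 0.1], [cpu, io])
      io_bound = type_cost([0.2, 0.8], [cpu, io])
      print(normalize_mix([compute_bound, io_bound]))  # mix fractions at unit cost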

  10. A model for developing job rotation schedules that eliminate sequential high workloads and minimize between-worker variability in cumulative daily workloads: Application to automotive assembly lines.

    PubMed

    Yoon, Sang-Young; Ko, Jeonghan; Jung, Myung-Chul

    2016-07-01

    The aim of this study is to suggest a job rotation schedule by developing a mathematical model that reduces the cumulative workload arising from successive use of the same body region. Workload assessment using rapid entire body assessment (REBA) was performed for the model in three automotive assembly lines (chassis, trim, and finishing) to identify which body parts were exposed to relatively high workloads at each workstation. The workloads were incorporated into the model to develop a job rotation schedule. The proposed schedules prevent successive exposure of the same body region to high workloads and minimize between-worker variance in cumulative daily workload, whereas under no job rotation or serial job rotation some workers were successively assigned to high-workload workstations. This model would help to reduce the potential for work-related musculoskeletal disorders (WMSDs) without additional cost for engineering work, although it may require more computational time and relatively complex job rotation sequences. Copyright © 2016 Elsevier Ltd and The Ergonomics Society. All rights reserved.
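
    A brute-force Python sketch of the two scheduling criteria described above (forbid back-to-back high workloads on the same body region; minimize between-worker variance in cumulative daily workload); the REBA-style scores, stations and rotation structure are hypothetical, and the paper's actual model is an optimization formulation rather than exhaustive search:

      # Each worker w follows the cyclic order shifted by w for PERIODS slots.
      from itertools import permutations
      from statistics import pvariance

      LOAD = {"A": {"arm": 9, "back": 3}, "B": {"arm": 2, "back": 8},
              "C": {"arm": 4, "back": 4}, "D": {"arm": 8, "back": 2}}
      HIGH, PERIODS = 7, 3  # "high" workload threshold; rotation slots per day

      def worker_sequences(order):
          n = len(order)
          return [[order[(w + t) % n] for t in range(PERIODS)] for w in range(n)]

      def feasible(seq):  # no successive high loads on the same body region
          return all(LOAD[a][r] < HIGH or LOAD[b][r] < HIGH
                     for a, b in zip(seq, seq[1:]) for r in ("arm", "back"))

      def between_worker_variance(order):
          daily = [sum(sum(LOAD[s].values()) for s in seq)
                   for seq in worker_sequences(order)]
          return pvariance(daily)

      candidates = [p for p in permutations(LOAD)
                    if all(feasible(s) for s in worker_sequences(p))]
      best = min(candidates, key=between_worker_variance)
      print(best, between_worker_variance(best))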

  11. Sustainable IT and IT for Sustainability

    NASA Astrophysics Data System (ADS)

    Liu, Zhenhua

    Energy and sustainability have become among the most critical issues of our generation. While the abundant potential of renewable energy such as solar and wind provides a real opportunity for sustainability, their intermittency and uncertainty present a daunting operating challenge. This thesis aims to develop analytical models, deployable algorithms, and real systems to enable efficient integration of renewable energy into complex distributed systems with limited information. The first thrust of the thesis is to make IT systems more sustainable by facilitating the integration of renewable energy into these systems. IT represents one of the fastest growing sectors in energy usage and greenhouse gas pollution. Over the last decade there have been dramatic improvements in the energy efficiency of IT systems, but the efficiency improvements do not necessarily lead to reductions in energy consumption because more servers are demanded. Further, little effort has been put into making IT more sustainable, and most of the improvements come from improved "engineering" rather than improved "algorithms". In contrast, my work focuses on developing algorithms with rigorous theoretical analysis that improve the sustainability of IT. In particular, this thesis seeks to exploit the flexibilities of cloud workloads both (i) in time, by scheduling delay-tolerant workloads, and (ii) in space, by routing requests to geographically diverse data centers. These opportunities allow data centers to adaptively respond to renewable availability, varying cooling efficiency, and fluctuating energy prices, while still meeting performance requirements. The design of the enabling algorithms is, however, very challenging because of limited information, non-smooth objective functions and the need for distributed control. Novel distributed algorithms are developed with theoretically provable guarantees to enable "follow the renewables" routing. Moving from theory to practice, I helped HP design and implement industry's first Net-zero Energy Data Center. The second thrust of this thesis is to use IT systems to improve the sustainability and efficiency of our energy infrastructure through data center demand response. The main challenges as we integrate more renewable sources into the existing power grid come from the fluctuation and unpredictability of renewable generation. Although energy storage and reserves can potentially solve these issues, they are very costly. One promising alternative is to make cloud data centers demand responsive. The potential of such an approach is huge. To realize this potential, we need adaptive and distributed control of cloud data centers and new electricity market designs for distributed electricity resources. My work progresses in both directions. In particular, I have designed online algorithms with theoretically guaranteed performance for data center operators to deal with uncertainties under popular demand response programs. Based on local control rules of customers, I have further designed new pricing schemes for demand response to align the interests of customers, utility companies, and society to improve social welfare.
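
    The flavor of "follow the renewables" routing can be sketched in a few lines of Python: greedily send each increment of load to the data center with the cheapest marginal non-renewable energy, subject to capacity. All numbers are hypothetical, and this simple greedy loop only stands in for the thesis's distributed online algorithms with provable guarantees:

      def route(load, centers, steps=100):
          """centers: dicts with name, capacity, renewable supply, brown price."""
          alloc = {c["name"]: 0.0 for c in centers}
          step = load / steps
          for _ in range(steps):
              def marginal_cost(c):
                  used = alloc[c["name"]]
                  if used >= c["capacity"]:
                      return float("inf")       # full: never pick
                  # renewable energy is free until exhausted, then brown price
                  return 0.0 if used < c["renewable"] else c["price"]
              alloc[min(centers, key=marginal_cost)["name"]] += step
          return alloc

      centers = [{"name": "west", "capacity": 50, "renewable": 30, "price": 0.12},
                 {"name": "east", "capacity": 80, "renewable": 10, "price": 0.08}]
      print(route(60.0, centers))  # fills renewables first, then cheap brown power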

  12. Classification of driving workload affected by highway alignment conditions based on classification and regression tree algorithm.

    PubMed

    Hu, Jiangbi; Wang, Ronghua

    2018-02-17

    Guaranteeing a safe and comfortable driving workload can contribute to reducing traffic injuries. In order to provide safe and comfortable threshold values, this study attempted to classify driving workload from the aspects of human factors mainly affected by highway geometric conditions and to determine the thresholds of different workload classifications. This article states the hypothesis that driver workload values vary within a certain range. Driving workload scales were defined based on a comprehensive literature review. Through comparative analysis of different psychophysiological measures, heart rate variability (HRV) was chosen as the representative measure for quantifying driving workload in field experiments. Seventy-two participants (36 car drivers and 36 large truck drivers) and 6 highways with different geometric designs were selected for the field experiments. A wearable wireless dynamic multiparameter physiological detector (KF-2) was employed to detect physiological data that were simultaneously correlated to the speed changes recorded by a Global Positioning System (GPS) (testing time, driving speeds, running track, and distance). Based on statistical analyses, including the distribution of HRV during flat, straight segments and P-P plots of modified HRV, a driving workload calculation model was proposed. Integrating the driving workload scales with these values, the threshold of each scale of driving workload was determined by classification and regression tree (CART) algorithms. The driving workload calculation model is suitable for driving speeds in the range of 40 to 120 km/h. The experimental data of the 72 participants revealed that driving workload had a significant effect on modified HRV, reflecting changes in driving speed. When the driving speed was between 100 and 120 km/h, drivers showed an apparent increase in the corresponding modified HRV. The threshold value of the normal driving workload K was between -0.0011 and 0.056 for a car driver and between -0.00086 and 0.067 for a truck driver. Heart rate variability was a direct and effective index for measuring driving workload despite being affected by multiple highway alignment elements. The driving workload model and the thresholds of the driving workload classifications can be used to evaluate the quality of highway geometric design. A higher quality of highway geometric design could keep driving workload within a safer and more comfortable range. This study provides insight into reducing traffic injuries from the perspective of the disciplinary integration of highway engineering and human factors engineering.
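
    How CART yields class thresholds from a continuous workload index can be sketched with scikit-learn; the data and class boundaries below are synthetic stand-ins for the modified-HRV measure, not the study's data (only the 0.056 upper bound of normal car-driver workload is taken from the abstract):

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(0)
      K = rng.uniform(-0.05, 0.25, size=(500, 1))     # synthetic workload index
      labels = np.digitize(K.ravel(), [0.056, 0.15])  # 0=normal, 1=raised, 2=high

      tree = DecisionTreeClassifier(max_depth=2).fit(K, labels)
      # the learned split points approximate the classification thresholds
      print(sorted(t for t in tree.tree_.threshold if t != -2))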

  13. The Integration of CloudStack and OCCI/OpenNebula with DIRAC

    NASA Astrophysics Data System (ADS)

    Méndez Muñoz, Víctor; Fernández Albor, Víctor; Graciani Diaz, Ricardo; Casajús Ramo, Adriàn; Fernández Pena, Tomás; Merino Arévalo, Gonzalo; José Saborido Silva, Juan

    2012-12-01

    The increasing availability of Cloud resources is arising as a realistic alternative to the Grid as a paradigm for enabling scientific communities to access large distributed computing resources. The DIRAC framework for distributed computing provides an easy way to access resources from both systems efficiently. This paper explains the integration of DIRAC with two open-source Cloud Managers: OpenNebula (taking advantage of the OCCI standard) and CloudStack. These are computing tools to manage the complexity and heterogeneity of distributed data center infrastructures, allowing virtual clusters to be created on demand, including public, private and hybrid clouds. This approach required developing an extension to the previous DIRAC Virtual Machine engine, originally written for Amazon EC2, to allow the connection with these new cloud managers. In the OpenNebula case, the development has been based on the CernVM Virtual Software Appliance with appropriate contextualization, while in the case of CloudStack the infrastructure has been kept more general, which permits other Virtual Machine sources and operating systems to be used. In both cases, the CernVM File System has been used to facilitate software distribution to the computing nodes. With the resulting infrastructure, the cloud resources are transparent to the users through a friendly interface, like the DIRAC Web Portal. The main purpose of this integration is to obtain a system that can manage cloud and grid resources at the same time. This particular feature pushes DIRAC to a new conceptual denomination as interware, integrating different middleware. Users from different communities do not need to care about the installation of the standard software that is available at the nodes, nor about the operating system of the host machine, which is transparent to the user. This paper presents an analysis of the overhead of the virtual layer, with tests comparing the proposed approach with the existing Grid solution. License Notice: Published under licence in Journal of Physics: Conference Series by IOP Publishing Ltd.
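
    Because both cloud managers can expose an EC2-compatible API, the provisioning step can be sketched generically with boto3 pointed at a site endpoint; the endpoint URL, credentials, image id and user-data script are placeholders, and this is not the actual DIRAC Virtual Machine engine code:

      import boto3

      ec2 = boto3.client(
          "ec2",
          endpoint_url="https://cloud.example.org:4567",  # hypothetical site endpoint
          aws_access_key_id="ACCESS",
          aws_secret_access_key="SECRET",
          region_name="site",
      )

      resp = ec2.run_instances(
          ImageId="ami-cernvm",     # e.g. a CernVM-based appliance image (placeholder)
          MinCount=1, MaxCount=1,
          InstanceType="m1.small",
          UserData="#!/bin/sh\n/usr/sbin/start-pilot\n",  # contextualization hook
      )
      print(resp["Instances"][0]["InstanceId"])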

  14. Particle backscatter and relative humidity measured across cirrus clouds and comparison with microphysical cirrus modelling

    NASA Astrophysics Data System (ADS)

    Brabec, M.; Wienhold, F. G.; Luo, B. P.; Vömel, H.; Immler, F.; Steiner, P.; Hausammann, E.; Weers, U.; Peter, T.

    2012-10-01

    Advanced measurement and modelling techniques are employed to estimate the partitioning of atmospheric water between the gas phase and the condensed phase in and around cirrus clouds, and thus to identify in-cloud and out-of-cloud supersaturations with respect to ice. In November 2008 the newly developed balloon-borne backscatter sonde COBALD (Compact Optical Backscatter and AerosoL Detector) was flown 14 times together with a CFH (Cryogenic Frost point Hygrometer) from Lindenberg, Germany (52° N, 14° E). The case discussed here in detail shows two cirrus layers with in-cloud relative humidities with respect to ice between 50% and 130%. Global operational analysis data of ECMWF (roughly 1° × 1° horizontal and 1 km vertical resolution, 6-hourly stored fields) fail to represent the ice water contents and relative humidities. Conversely, regional COSMO-7 forecasts (6.6 km × 6.6 km, 5-min stored fields) capture the measured humidities and cloud positions remarkably well. The main difference between ECMWF and COSMO data is the resolution of small-scale vertical features responsible for cirrus formation. Nevertheless, ice water contents in COSMO-7 are still off by factors of 2-10, likely reflecting limitations in COSMO's ice-phase bulk scheme. Significant improvements can be achieved by comprehensive size-resolved microphysical and optical modelling along backward trajectories based on COSMO-7 wind and temperature fields, which allows accurate computation of humidities, homogeneous ice nucleation, the resulting ice particle size distributions and backscatter ratios at the COBALD wavelengths. However, only by superimposing small-scale temperature fluctuations, which remain unresolved by the numerical weather prediction models, can we obtain a satisfying agreement with the observations and reconcile the measured in-cloud non-equilibrium humidities with conventional ice cloud microphysics. Conversely, the model-data comparison provides no evidence that additional changes to ice-cloud microphysics - such as heterogeneous nucleation or changing the water vapour accommodation coefficient on ice - are required.

  15. Impact of cloud horizontal inhomogeneity and directional sampling on the retrieval of cloud droplet size by the POLDER instrument

    NASA Astrophysics Data System (ADS)

    Shang, H.; Chen, L.; Bréon, F. M.; Letu, H.; Li, S.; Wang, Z.; Su, L.

    2015-11-01

    The principles of cloud droplet size retrieval via Polarization and Directionality of the Earth's Reflectance (POLDER) require that clouds be horizontally homogeneous. The retrieval is performed by combining all measurements from an area of 150 km × 150 km to compensate for POLDER's insufficient directional sampling. Using POLDER-like data simulated with the RT3 model, we investigate the impact of cloud horizontal inhomogeneity and directional sampling on the retrieval and analyze which spatial resolution is potentially accessible from the measurements. Case studies show that the sub-grid-scale variability in cloud droplet effective radius (CDR) can significantly reduce the number of valid retrievals and introduce small biases into the CDR (~ 1.5 μm) and effective variance (EV) estimates. Nevertheless, the sub-grid-scale variations in EV and cloud optical thickness (COT) only influence the EV retrievals and not the CDR estimate. In the directional sampling cases studied, the retrieval using limited observations is accurate and largely free of random noise. Several improvements have been made to the original POLDER droplet size retrieval. For example, measurements in the primary rainbow region (137-145°) are used to ensure retrievals of large droplets (> 15 μm) and to reduce the uncertainties caused by cloud heterogeneity. We apply the improved method to the POLDER global L1B data from June 2008, and the new CDR results are compared with the operational CDRs. The comparison shows that the operational CDRs tend to be underestimated for large droplets because the cloudbow oscillations in the scattering angle region of 145-165° are weak for cloud fields with CDR > 15 μm. Finally, a sub-grid-scale retrieval case demonstrates that a higher resolution, e.g., 42 km × 42 km, can be used when inverting cloud droplet size distribution parameters from POLDER measurements.

  16. Heavy vehicle driver workload assessment. Task 5, workload assessment protocol

    DOT National Transportation Integrated Search

    This report presents a description of a prescriptive workload assessment protocol for use in evaluating in-cab devices in heavy vehicles. The primary objective of this heavy vehicle driver workload assessment protocol is to identify the components an...

  17. Timesharing performance as an indicator of pilot mental workload

    NASA Technical Reports Server (NTRS)

    Casper, Patricia A.; Kantowitz, Barry H.; Sorkin, Robert D.

    1988-01-01

    Attentional deficits (workloads) were evaluated in a timesharing task. The results from this and other experiments were incorporated into an expert system designed to provide workload metric selection advice to non-experts in the field interested in operator workload.

  18. A user-oriented synthetic workload generator

    NASA Technical Reports Server (NTRS)

    Kao, Wei-Lun

    1991-01-01

    A user-oriented synthetic workload generator that simulates users' file access behavior based on real workload characterization is described. The model for this workload generator is user-oriented and job-specific, represents file I/O operations at the system call level, allows general distributions for the usage measures, and assumes independence in the file I/O operation stream. The workload generator consists of three parts, which handle specification of distributions, creation of an initial file system, and selection and execution of file I/O operations. Experiments on SUN NFS are shown to demonstrate the usage of the workload generator.
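
    The three-part structure lends itself to a compact Python sketch; the operation mix, file counts and sizes below are illustrative defaults, not the distributions measured in the paper:

      import os, random, tempfile

      SPEC = {"ops": {"read": 0.6, "write": 0.3, "create": 0.1},   # part 1: op mix
              "n_files": 20, "size": lambda: random.randint(512, 4096)}

      def build_fs(root):                      # part 2: initial file system
          paths = []
          for i in range(SPEC["n_files"]):
              p = os.path.join(root, f"f{i}.dat")
              with open(p, "wb") as f:
                  f.write(os.urandom(SPEC["size"]()))
              paths.append(p)
          return paths

      def run(root, n_ops=100):                # part 3: independent op stream
          paths = build_fs(root)
          ops, weights = zip(*SPEC["ops"].items())
          for _ in range(n_ops):
              op, p = random.choices(ops, weights)[0], random.choice(paths)
              if op == "read":
                  open(p, "rb").read()
              elif op == "write":
                  open(p, "ab").write(os.urandom(256))
              else:                            # create a fresh file
                  paths.append(p + ".new")
                  open(paths[-1], "wb").close()

      run(tempfile.mkdtemp())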

  19. Crew workload-management strategies - A critical factor in system performance

    NASA Technical Reports Server (NTRS)

    Hart, Sandra G.

    1989-01-01

    This paper reviews the philosophy and goals of the NASA/USAF Strategic Behavior/Workload Management Program. The philosophical foundation of the program is based on the assumption that an improved understanding of pilot strategies will clarify the complex and inconsistent relationships observed among objective task demands and measures of system performance and pilot workload. The goals are to: (1) develop operationally relevant figures of merit for performance, (2) quantify the effects of strategic behaviors on system performance and pilot workload, (3) identify evaluation criteria for workload measures, and (4) develop methods of improving pilots' abilities to manage workload extremes.

  20. Reconciling the aerosol-liquid water path relationship in the ECHAM6-HAM GCM and the Aerosol_cci/Cloud_cci (A)ATSR dataset by minimizing the effect of aerosol swelling

    NASA Astrophysics Data System (ADS)

    Neubauer, D.; Christensen, M.; Lohmann, U.; Poulsen, C. A.

    2016-12-01

    Studies using present-day variability to assess statistical relationships between aerosol and cloud properties find that the strengths of these relationships differ between satellite data and general circulation model (GCM) data. This discrepancy can be explained by structural uncertainties due to differences between the analysis/observational scale and the process scale, or by spurious relationships between aerosol and cloud properties. Such spurious relationships include the growth of aerosol particles in the humid environment surrounding clouds, misclassification of partly cloudy satellite pixels as cloud-free pixels, brightening of aerosol particles by sunlight reflected at cloud edges, and effects of clouds on aerosol, such as processing of aerosol particles in clouds by nucleation or impact scavenging with subsequent growth by heterogeneous chemistry and release by cloud droplet evaporation, or wet scavenging of aerosol particles. To minimize the effects of spatial aggregation and spurious relationships we apply a new nearest-neighbour approach to high-resolution (A)ATSR datasets from the Aerosol_cci and Cloud_cci projects of the Climate Change Initiative (CCI) programme of ESA. For the ECHAM6-HAM GCM we quantify the impact of using dry aerosol (without aerosol water) in the analysis to mimic the effect of the nearest-neighbour approach. The aerosol-liquid water path relationship in ECHAM6-HAM is systematically stronger than in (A)ATSR data; this cannot be explained by an overestimation of autoconversion when using diagnostic precipitation, but rather by aerosol swelling in regions where humidity is high and clouds are present. When aerosol water is removed from the analysis in ECHAM6-HAM, the strength of the aerosol-liquid water path relationship agrees much better with those of (A)ATSR or MODIS. We further find that, while the observed relationships of different satellite sensors ((A)ATSR vs. MODIS) are not always consistent for the tested environmental conditions, the relationships in ECHAM6-HAM lack a strong dependence on environmental conditions, which is critical for bridging the gap between satellite and model estimates of aerosol indirect forcing.

  1. Modelling ice microphysics of mixed-phase clouds

    NASA Astrophysics Data System (ADS)

    Ahola, J.; Raatikainen, T.; Tonttila, J.; Romakkaniemi, S.; Kokkola, H.; Korhonen, H.

    2017-12-01

    Low-level Arctic mixed-phase clouds play a significant role in the Arctic climate due to their ability to absorb and reflect radiation. Since climate change is amplified in polar areas, it is vital to understand mixed-phase cloud processes. From a modelling point of view, this requires a high spatiotemporal resolution to capture turbulence and the relevant microphysical processes, which has proven difficult. To solve this problem of modelling mixed-phase clouds, a new ice microphysics description has been developed. The recently published large-eddy simulation cloud model UCLALES-SALSA offers a good base for a feasible solution (Tonttila et al., Geosci. Mod. Dev., 10:169-188, 2017). The model includes aerosol-cloud interactions described with the sectional SALSA module (Kokkola et al., Atmos. Chem. Phys., 8, 2469-2483, 2008), which represents a good compromise between detail and computational expense. The SALSA module has recently been upgraded to include ice microphysics as well. The dynamical part of the model is based on the well-known UCLA-LES model (Stevens et al., J. Atmos. Sci., 56, 3963-3984, 1999), which can be used to study cloud dynamics on a fine grid. The microphysical description of ice is sectional, and the included processes consist of the formation, growth and removal of ice and snow particles. Ice cloud particles are formed by parameterized homogeneous or heterogeneous nucleation. The growth mechanisms of ice particles and snow include coagulation and condensation of water vapor. Autoconversion from cloud ice particles to snow is parameterized. The removal of ice particles and snow happens by sedimentation and melting. The implementation of ice microphysics is tested by initializing the cloud simulation with atmospheric observations from the Indirect and Semi-Direct Aerosol Campaign (ISDAC). The results are compared to the model results shown in the paper of Ovchinnikov et al. (J. Adv. Model. Earth Syst., 6, 223-248, 2014) and show a good match. One of the advantages of UCLALES-SALSA is that it can be used to quantify the effect of aerosol scavenging on cloud properties in a precise way.

  2. MEASURING WORKLOAD OF ICU NURSES WITH A QUESTIONNAIRE SURVEY: THE NASA TASK LOAD INDEX (TLX).

    PubMed

    Hoonakker, Peter; Carayon, Pascale; Gurses, Ayse; Brown, Roger; McGuire, Kerry; Khunlertkit, Adjhaporn; Walker, James M

    2011-01-01

    High workload of nurses in Intensive Care Units (ICUs) has been identified as a major patient safety and worker stress problem. However, relatively little attention has been dedicated to the measurement of workload in healthcare. The objectives of this study are to describe and examine several methods to measure the workload of ICU nurses. We then focus on the measurement of ICU nurses' workload using a subjective rating instrument: the NASA TLX. We conducted secondary data analysis on data from two multi-site, cross-sectional questionnaire studies to examine several instruments to measure ICU nurses' workload. The combined database contains the data from 757 ICU nurses in 8 hospitals and 21 ICUs. Results show that the different methods to measure workload of ICU nurses, such as patient-based and operator-based workload, are only moderately correlated, or not correlated at all. Results show further that among the operator-based instruments, the NASA TLX is the most reliable and valid questionnaire to measure workload and that the NASA TLX can be used in a healthcare setting. Managers of hospitals and ICUs can benefit from the results of this research as it provides benchmark data on workload experienced by nurses in a variety of ICUs.
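
    For readers unfamiliar with the instrument, the standard NASA TLX score combines six subscale ratings (0-100) with weights from 15 pairwise comparisons; a minimal Python sketch with made-up ratings and tallies (illustrative only, not data from this study):

      SCALES = ["mental", "physical", "temporal",
                "performance", "effort", "frustration"]

      def tlx_score(ratings, pairwise_wins):
          assert sum(pairwise_wins.values()) == 15  # 15 comparisons among 6 scales
          return sum(ratings[s] * pairwise_wins[s] for s in SCALES) / 15.0

      ratings = {"mental": 70, "physical": 20, "temporal": 55,
                 "performance": 40, "effort": 65, "frustration": 30}
      wins = {"mental": 5, "physical": 0, "temporal": 3,
              "performance": 2, "effort": 4, "frustration": 1}
      print(tlx_score(ratings, wins))  # overall weighted workload, 0-100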

  3. MEASURING WORKLOAD OF ICU NURSES WITH A QUESTIONNAIRE SURVEY: THE NASA TASK LOAD INDEX (TLX)

    PubMed Central

    Hoonakker, Peter; Carayon, Pascale; Gurses, Ayse; Brown, Roger; McGuire, Kerry; Khunlertkit, Adjhaporn; Walker, James M.

    2012-01-01

    High workload of nurses in Intensive Care Units (ICUs) has been identified as a major patient safety and worker stress problem. However, relatively little attention has been dedicated to the measurement of workload in healthcare. The objectives of this study are to describe and examine several methods to measure the workload of ICU nurses. We then focus on the measurement of ICU nurses' workload using a subjective rating instrument: the NASA TLX. We conducted secondary data analysis on data from two multi-site, cross-sectional questionnaire studies to examine several instruments to measure ICU nurses' workload. The combined database contains the data from 757 ICU nurses in 8 hospitals and 21 ICUs. Results show that the different methods to measure workload of ICU nurses, such as patient-based and operator-based workload, are only moderately correlated, or not correlated at all. Results show further that among the operator-based instruments, the NASA TLX is the most reliable and valid questionnaire to measure workload and that the NASA TLX can be used in a healthcare setting. Managers of hospitals and ICUs can benefit from the results of this research as it provides benchmark data on workload experienced by nurses in a variety of ICUs. PMID:22773941

  4. Importance of the mixing state for ice nucleating capabilities of individual aerosol particles

    NASA Astrophysics Data System (ADS)

    Ebert, Martin; Worringen, Annette; Benker, Nathalie; Weinbruch, Stephan

    2010-05-01

    The effects of aerosol particles on heterogeneous ice formation are currently insufficiently understood. Modelling studies have shown that the type and quantity of atmospheric aerosol particles acting as ice nuclei (IN) can influence ice cloud microphysical and radiative properties as well as their precipitation efficiency. Therefore, the physicochemical identification of IN and a quantitative description of the ice nucleation processes are crucial for a better understanding of the formation, life cycles, and optical properties of clouds as well as for numerical precipitation forecasts. During the CLACE 5 campaign in 2006 at the high alpine research station Jungfraujoch (3580 m asl), Switzerland, the physicochemical parameters of IN within mixed-phase clouds were studied. Using a special Ice-Counterflow Virtual Impactor, residual particles of small ice nuclei (IN) and the interstitial aerosol fraction were sampled separately within mixed-phase clouds. The size, morphology, elemental composition and mixing state of more than 7000 particles from selected IN and interstitial samples were analyzed by scanning electron microscopy (SEM) combined with energy-dispersive X-ray analysis (EDX). For selected particles, the mineralogical phase composition was determined by transmission electron microscopy. To obtain detailed information about the mixing state (coatings, agglomerates, heterogeneous inclusions) of the IN and interstitial samples, the complete individual particle analysis was performed under operator control. Four different particle types were identified to act as IN: 1) carbonaceous particles, which were identified as a complex mixture of soot (main component), sulfate and nitrate; 2) complex mixtures of two or more diverse particle groups, in almost 75% of which silicates or metal oxides are the main component; 3) aluminium oxide particles internally mixed with calcium- and sulphate-rich material; and 4) Pb-bearing particles. The high abundance of Pb-bearing particles in the IN samples (up to 24% by number) was an unexpected finding. Apart from a smaller content of larger PbO and PbCl2 particles, the particles of this type consist predominantly of sea salt, soot or silicates, with Pb present only as small (50-500 nm) heterogeneous Pb or PbS inclusions. In all 4 particle types identified as IN, the mixing state seems to play an essential role. It can therefore be concluded that determining the main component of a particle is not sufficient for predicting its IN capability.

  5. Accumulation of planets into the proto-planetary cloud as a process of occurring an amount of characteristic scales into the nonlinear self organized dynamical systems

    NASA Astrophysics Data System (ADS)

    Professor Khachay, Yurie

    2015-04-01

    Two characteristic times are significant for the evolution of the interior of a homogeneous proto-planetary cloud: the time of free fall of bodies toward the cloud's mass center and the time for sound to propagate through the cloud. With the onset of proto-planetary disk fragmentation and the accumulation of proto-planets from bodies and particles, heterogeneities of finite dimension form in matter content, temperature, density and the values of the kinetic coefficients. The system becomes more and more complicated, with growing interior interconnections. As the bodies grow, the differences between the characteristic times and dimensions become larger. The dynamical evolution of the system can be followed using numerical modelling of the Earth and Moon formation in a 3-D model [1,2]. The fact that the linear dimensions of the objects change during accumulation from centimeter and meter scales to some thousands of kilometers significantly complicates the mathematical description of these processes. The corresponding values of the dimensionless similarity criteria entering the systems of differential equations that describe proto-planetary growth, the conditions for entropy and mass on the growing surface, and the balance equations for impulse, energy and mass in the interior of the planet change by orders of magnitude. We therefore used very detailed space and time grids to solve the problem by the method of finite differences. Additional complications arise from the need to take into account the nonlinear dependence of matter viscosity on temperature, pressure and chemical composition. Finally, we took into account the essentially random distribution of heterogeneities produced by the falling bodies and particles. Only progress in this direction, together with the construction of corresponding systems of observation and interpretation, allows us to hope for increasingly realistic models of self-organizing structures and for understanding the laws of their reconstruction during the complicated process of planetary accumulation. The work was partly supported by RFBR (grant N13-05-00138). References: 1. Y. Khachay, V. Anfilogov, and A. Antipin (2014) Numerical Results of 3-D Modeling of Moon Accumulation // Geophysical Research Abstracts, Vol. 16, EGU2014-1011. 2. Y. Khachay, A. Antipin and V. Anfilogov (2014) Numerical modeling of temperature distribution on the stage of Earth's accumulation in a frame of 3-D model and peculiarities of its initial minerageny. Ural Geophysical Bulletin 1: 81-85.

  6. Laboratory Studies of the Cloud Droplet Activation Properties and Corresponding Chemistry of Saline Playa Dust

    NASA Astrophysics Data System (ADS)

    Gaston, C.; Pratt, K.; Suski, K. J.; May, N.; Gill, T. E.; Prather, K. A.

    2016-12-01

    Saline playas (dried lake beds) emit large quantities of dust that can facilitate the activation of cloud droplets. Despite the potential importance of playa dust for cloud formation, several models assume that dust is non-hygroscopic, highlighting the need for measurements to clarify the role of dust from multiple sources in aerosol-cloud-climate interactions. Here we present measurements of water uptake onto playa dust, represented by the hygroscopicity parameter κ, which ranged from 0.002 ± 0.001 to 0.818 ± 0.094. Single-particle measurements made using an aircraft-aerosol time-of-flight mass spectrometer (A-ATOFMS) revealed the presence of halite, sodium sulfates, and sodium carbonates that were strongly correlated with κ, underscoring the role that dust composition plays in water uptake. Predictions of κ made using bulk chemical techniques generally showed good agreement with measured values; however, several samples were poorly predicted using bulk particle composition. The lack of measurement/model agreement using this method and the strong correlations between κ and single-particle data are suggestive of chemical heterogeneities as a function of particle size and/or chemically distinct particle surfaces that dictate the water uptake properties of playa dust particles. Overall, our results highlight the ability of playa dust particles to act as cloud condensation nuclei, which should be accounted for in models.
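
    The hygroscopicity parameter κ quoted above is defined through κ-Köhler theory (Petters and Kreidenweis, 2007); the standard form of the saturation ratio S over a droplet of diameter D grown on a dry particle of diameter D_d is

      S(D) = \frac{D^{3} - D_{d}^{3}}{D^{3} - D_{d}^{3}\,(1-\kappa)} \exp\!\left(\frac{4\,\sigma_{s/a}\, M_{w}}{R\, T\, \rho_{w}\, D}\right)

    where \sigma_{s/a} is the surface tension of the solution/air interface, M_w and \rho_w are the molar mass and density of water, R is the gas constant and T the temperature; κ = 0 recovers a fully non-hygroscopic particle.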

  7. Integrating multiple scientific computing needs via a Private Cloud infrastructure

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Berzano, D.; Brunetti, R.; Lusso, S.; Vallero, S.

    2014-06-01

    In a typical scientific computing centre, diverse applications coexist and share a single physical infrastructure. An underlying Private Cloud facility eases the management and maintenance of heterogeneous use cases such as multipurpose or application-specific batch farms, Grid sites catering to different communities, parallel interactive data analysis facilities and others. It allows resources to be dynamically and efficiently allocated to any application and virtual machines to be tailored to the applications' requirements. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques; for example, rolling updates can be performed easily while minimizing the downtime. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 site, a dynamically expandable PROOF-based Interactive Analysis Facility for the ALICE experiment at the CERN LHC, and several smaller scientific computing applications. The Private Cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem (used in two different configurations for worker- and service-class hypervisors) and the OpenWRT Linux distribution (used for network virtualization). A future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and by using mainstream contextualization tools like CloudInit.

  8. Effect of time span and task load on pilot mental workload

    NASA Technical Reports Server (NTRS)

    Berg, S. L.; Sheridan, T. B.

    1986-01-01

    Two sets of simulations designed to examine how a pilot's mental workload is affected by continuous manual-control activity versus discrete mental tasks, including the length of time between receiving an assignment and executing it, are described. The first experiment evaluated two types of measures: objective performance indicators and subjective ratings. Subjective ratings for the two missions were different, but the objective performance measures were similar. In the second experiment, workload levels were increased and a second performance measure was taken. Mental workload had no influence on either performance-based workload measure. Subjective ratings discriminated among the scenarios and correlated with performance measures for high-workload flights. The number of mental tasks performed did not influence error rates, although high manual workloads did increase errors.

  9. Workload Characterization of CFD Applications Using Partial Differential Equation Solvers

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    Workload characterization is used for modeling and evaluating computing systems at different levels of detail. We present workload characterization for a class of Computational Fluid Dynamics (CFD) applications that solve Partial Differential Equations (PDEs). This workload characterization focuses on three high performance computing platforms: SGI Origin2000, IBM SP-2, and a cluster of Intel Pentium Pro based PCs. We execute extensive measurement-based experiments on these platforms to gather statistics of system resource usage, which result in the workload characterization. Our workload characterization approach yields a coarse-grain resource utilization behavior that is being applied for performance modeling and evaluation of distributed high performance metacomputing systems. In addition, this study enhances our understanding of interactions between PDE solver workloads and high performance computing platforms and is useful for tuning these applications.
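
    In the same spirit as the measurement-based experiments described above, a generic resource-usage sampler can be sketched with psutil (a modern convenience library standing in for the paper's own platform instrumentation on the Origin2000, SP-2 and PC cluster):

      import psutil

      def sample(duration_s=10, interval_s=1.0):
          """Collect a coarse-grain utilization trace for characterization."""
          trace = []
          for _ in range(int(duration_s / interval_s)):
              trace.append({
                  "cpu_pct": psutil.cpu_percent(interval=interval_s),
                  "mem_pct": psutil.virtual_memory().percent,
                  "disk_read_bytes": psutil.disk_io_counters().read_bytes,
              })
          return trace

      print(sample(duration_s=3))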

  10. High variability of the heterogeneous ice nucleation potential of oxalic acid dihydrate and sodium oxalate

    NASA Astrophysics Data System (ADS)

    Wagner, R.; Möhler, O.; Saathoff, H.; Schnaiter, M.; Leisner, T.

    2010-04-01

    The heterogeneous ice nucleation potential of airborne oxalic acid dihydrate and sodium oxalate particles in the deposition and condensation mode has been investigated by controlled expansion cooling cycles in the AIDA aerosol and cloud chamber of the Karlsruhe Institute of Technology at temperatures between 244 and 228 K. Previous laboratory studies have highlighted the particular role of oxalic acid dihydrate as the only species amongst a variety of other investigated dicarboxylic acids to be capable of acting as a heterogeneous ice nucleus in both the deposition and immersion mode. We could confirm a high deposition mode ice activity for 0.03 to 0.8 μm sized oxalic acid dihydrate particles that were either formed by nucleation from a gaseous oxalic acid/air mixture or by rapid crystallisation of highly supersaturated aqueous oxalic acid solution droplets. The critical saturation ratio with respect to ice required for deposition nucleation was found to be less than 1.1 and the size-dependent ice-active fraction of the aerosol population was in the range from 0.1 to 22%. In contrast, oxalic acid dihydrate particles that had crystallised from less supersaturated solution droplets and had been allowed to slowly grow in a supersaturated environment from still unfrozen oxalic acid solution droplets over a time period of several hours were found to be much poorer heterogeneous ice nuclei. We speculate that under these conditions a crystal surface structure with less-active sites for the initiation of ice nucleation was generated. Such particles partially proved to be almost ice-inactive in both the deposition and condensation mode. At times, the heterogeneous ice nucleation ability of oxalic acid dihydrate significantly changed when the particles had been processed in preceding cloud droplet activation steps. Such behaviour was also observed for the second investigated species, namely sodium oxalate. Our experiments address the atmospheric scenario that coating layers of oxalic acid or its salts may be formed by physical and chemical processing on pre-existing particulates such as mineral dust and soot. Given the broad diversity of the observed heterogeneous ice nucleability of the oxalate species, it is not straightforward to predict whether an oxalate coating layer will improve or reduce the ice nucleation ability of the seed aerosol particles.

  11. High variability of the heterogeneous ice nucleation potential of oxalic acid dihydrate and sodium oxalate

    NASA Astrophysics Data System (ADS)

    Wagner, R.; Möhler, O.; Saathoff, H.; Schnaiter, M.; Leisner, T.

    2010-08-01

    The heterogeneous ice nucleation potential of airborne oxalic acid dihydrate and sodium oxalate particles in the deposition and condensation mode has been investigated by controlled expansion cooling cycles in the AIDA aerosol and cloud chamber of the Karlsruhe Institute of Technology at temperatures between 244 and 228 K. Previous laboratory studies have highlighted the particular role of oxalic acid dihydrate as the only species amongst a variety of other investigated dicarboxylic acids to be capable of acting as a heterogeneous ice nucleus in both the deposition and immersion mode. We could confirm a high deposition mode ice activity for 0.03 to 0.8 μm sized oxalic acid dihydrate particles that were either formed by nucleation from a gaseous oxalic acid/air mixture or by rapid crystallisation of highly supersaturated aqueous oxalic acid solution droplets. The critical saturation ratio with respect to ice required for deposition nucleation was found to be less than 1.1 and the size-dependent ice-active fraction of the aerosol population was in the range from 0.1 to 22%. In contrast, oxalic acid dihydrate particles that had crystallised from less supersaturated solution droplets and had been allowed to slowly grow in a supersaturated environment from still unfrozen oxalic acid solution droplets over a time period of several hours were found to be much poorer heterogeneous ice nuclei. We speculate that under these conditions a crystal surface structure with less-active sites for the initiation of ice nucleation was generated. Such particles partially proved to be almost ice-inactive in both the deposition and condensation mode. At times, the heterogeneous ice nucleation ability of oxalic acid dihydrate significantly changed when the particles had been processed in preceding cloud droplet activation steps. Such behaviour was also observed for the second investigated species, namely sodium oxalate. Our experiments address the atmospheric scenario that coating layers of oxalic acid or its salts may be formed by physical and chemical processing on pre-existing particulates such as mineral dust and soot. Given the broad diversity of the observed heterogeneous ice nucleability of the oxalate species, it is not straightforward to predict whether an oxalate coating layer will improve or reduce the ice nucleation ability of the seed aerosol particles.

  12. Assessment of Changes in Cloud Microphysical Properties and Rainfall in the Southeast Atlantic During the ORACLES 2016 Deployment

    NASA Astrophysics Data System (ADS)

    Diamond, M. S.; Dzambo, A.; L'Ecuyer, T.; Wood, R.; Durden, S. L.; Sy, O. O.; Tanelli, S.; Freitag, S.; Howell, S. G.; Smirnow, N.; Small Griswold, J. D.; Heikkila, A.

    2017-12-01

    Complex interactions between aerosol particles and clouds are the largest source of uncertainty in present-day radiative forcing and future projections of anthropogenic climate change. Unlike that of well-mixed greenhouse gases, the pattern of forcing for aerosol-cloud interactions (ACI) is regionally heterogeneous; one region of particular interest is the southeast Atlantic Ocean (SEA) off the western coast of Africa. During the southern African biomass burning (BB) season from July to October, a persistent layer of BB aerosol has been observed overlying one of the world's three semi-permanent stratocumulus (Sc) cloud decks. The vertical distribution of smoke over the SEA region remains poorly understood, particularly how much BB aerosol mixes into the Sc deck, which alters the clouds' microphysical properties. To investigate the effects of BB aerosols over the SEA Sc deck, we utilize data from the Airborne Third Generation Precipitation Radar (APR-3), an assortment of cloud probes, the Hawaii Group for Environmental Aerosol Research (HIGEAR) nephelometer, and other in-situ instruments on the P-3 aircraft during NASA's ObseRvations of Aerosols above CLouds and their intEractionS (ORACLES) 2016 campaign. Nearly all clouds observed in this experiment have a cloud top altitude of 1.5 km or less, with cloud top reflectivities rarely exceeding -15 dBZ. Two representative flights, the Aug. 31 and Sept. 6 missions, have cloud droplet number concentration (CDNC) values approximately between 250 and 350 per cubic centimeter (cc), with values exceeding 400/cc near the coast. Retrieved rainfall estimates suggest intermittent drizzle production occurs but rarely exceeds 0.1 mm h-1 further into the BB layer, and any drizzle production corresponds to CDNC values of approximately 300/cc or less. These two particular flights show that, when CDNC exceeds 400/cc, clouds drizzle less than 1% of the time. The distance between the Sc deck and BB layer is computed. Although a majority of cases show the Sc deck and BB layer are in contact, CDNC is not primarily controlled by this "gap" distance, suggesting that BB layer-Sc deck contact is not sufficient to explain cloud microphysical variability in the SEA region. Trajectory analyses of air masses are also presented to highlight underlying meteorological controls.

  13. The relationship between workload and training - An introduction

    NASA Technical Reports Server (NTRS)

    Hart, Sandra G.

    1986-01-01

    This paper reviews the relationships among workload, performance, and training. Its goal is to introduce the concepts of workload and training and to suggest how they may be related. It suggests some of the practical and theoretical benefits to be derived from their joint consideration. Training effectiveness can be improved by monitoring trainee workload and the reliability of workload predictions, and measures can be improved by identifying and controlling the training levels of experimental subjects.

  14. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De, K; Jha, S; Klimentov, A

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the MIRA supercomputer at the Argonne Leadership Computing Facility (ALCF), the supercomputer at the National Research Center Kurchatov Institute, IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for the ATLAS experiment since September 2015. We will present our current accomplishments with running PanDA WMS at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities' infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.
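
    The "light-weight MPI wrapper" idea - one MPI rank per single-threaded payload, so a single batch-queue slot fills a whole multi-core node with independent jobs - can be sketched with mpi4py; the job list and shell command are placeholders, not the actual PanDA pilot code:

      import subprocess, sys
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      jobs = [f"job_{i:04d}.sh" for i in range(256)]   # hypothetical payload scripts
      for job in jobs[rank::size]:                     # static round-robin dispatch
          rc = subprocess.call(["/bin/sh", job])       # each payload is single-threaded
          if rc != 0:
              print(f"rank {rank}: {job} failed ({rc})", file=sys.stderr)

      comm.Barrier()  # wait for all ranks before the batch slot ends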

  15. INTEGRATION OF PANDA WORKLOAD MANAGEMENT SYSTEM WITH SUPERCOMPUTERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De, K; Jha, S; Maeno, T

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3+ petaFLOPS, the next LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center Kurchatov Institute, IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms. We will present our current accomplishments in running PanDA WMS at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facility's infrastructure for High Energy and Nuclear Physics, as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.

  16. Technology-assisted title and abstract screening for systematic reviews: a retrospective evaluation of the Abstrackr machine learning tool.

    PubMed

    Gates, Allison; Johnson, Cydney; Hartling, Lisa

    2018-03-12

    Machine learning tools can expedite systematic review (SR) processes by semi-automating citation screening. Abstrackr semi-automates citation screening by predicting relevant records. We evaluated its performance for four screening projects. We used a convenience sample of screening projects completed at the Alberta Research Centre for Health Evidence, Edmonton, Canada: three SRs and one descriptive analysis for which we had used SR screening methods. The projects were heterogeneous with respect to search yield (median 9328; range 5243 to 47,385 records; interquartile range (IQR) 15,688 records), topic (Antipsychotics, Bronchiolitis, Diabetes, Child Health SRs), and screening complexity. We uploaded the records to Abstrackr and screened until it made predictions about the relevance of the remaining records. Across three trials for each project, we compared the predictions to human reviewer decisions and calculated the sensitivity, specificity, precision, false negative rate, proportion missed, and workload savings. Abstrackr's sensitivity was > 0.75 for all projects and the mean specificity ranged from 0.69 to 0.90 with the exception of Child Health SRs, for which it was 0.19. The precision (proportion of records correctly predicted as relevant) varied by screening task (median 26.6%; range 14.8 to 64.7%; IQR 29.7%). The median false negative rate (proportion of records incorrectly predicted as irrelevant) was 12.6% (range 3.5 to 21.2%; IQR 12.3%). The workload savings were often large (median 67.2%, range 9.5 to 88.4%; IQR 23.9%). The proportion missed (proportion of records predicted as irrelevant that were included in the final report, out of the total number predicted as irrelevant) was 0.1% for all SRs and 6.4% for the descriptive analysis. This equated to 4.2% (range 0 to 12.2%; IQR 7.8%) of the records in the final reports. Abstrackr's reliability and the workload savings varied by screening task. Workload savings came at the expense of potentially missing relevant records. How this might affect the results and conclusions of SRs needs to be evaluated. Studies evaluating Abstrackr as the second reviewer in a pair would be of interest to determine if concerns for reliability would diminish. Further evaluations of Abstrackr's performance and usability will inform its refinement and practical utility.
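
    The reported metrics follow from the confusion matrix of Abstrackr's predictions against the human decisions. Below is a minimal sketch, assuming binary relevance labels and one common operationalization of each definition (the paper's exact denominators may differ).

      # Screening-performance metrics from paired labels, coded
      # 1 = relevant, 0 = irrelevant; truth = human reviewer decisions.
      def screening_metrics(truth, predicted):
          tp = sum(t == 1 and p == 1 for t, p in zip(truth, predicted))
          tn = sum(t == 0 and p == 0 for t, p in zip(truth, predicted))
          fp = sum(t == 0 and p == 1 for t, p in zip(truth, predicted))
          fn = sum(t == 1 and p == 0 for t, p in zip(truth, predicted))
          n = tp + tn + fp + fn
          return {
              "sensitivity": tp / (tp + fn),          # relevant records caught
              "specificity": tn / (tn + fp),          # irrelevant records excluded
              "precision": tp / (tp + fp),            # predicted-relevant that truly are
              "false_negative_rate": fn / (fn + tp),  # relevant records missed
              # Records predicted irrelevant need no manual screening, so
              # their share of the yield approximates the workload savings.
              "workload_savings": (tn + fn) / n,
          }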

  17. The immersion freezing behavior of size-segregated soot and kaolinite particles

    NASA Astrophysics Data System (ADS)

    Hartmann, S.; Augustin, S.; Clauss, T.; Niedermeier, D.; Raddatz, M.; Wex, H.; Shaw, R. A.; Stratmann, F.

    2011-12-01

    Heterogeneous ice nucleation plays a crucial role in ice formation in mixed-phase and cirrus clouds and has an important impact on precipitation formation, global radiation balances, and therefore Earth's climate (Cantrell and Heymsfield, 2005). Mineral dust and soot particles are found to be a major component of ice crystal residues (e.g., Pratt et al., 2009), so these substances are potential sources of atmospheric ice nuclei (IN). Experimental studies investigating the immersion freezing behavior of size-segregated soot and kaolinite particles, conducted at the Leipzig Aerosol Cloud Interaction Simulator (LACIS), are presented. In our measurements, a single aerosol particle is immersed in an air-suspended water droplet, where it can trigger ice nucleation. The method facilitates very precise examinations with respect to temperature, ice nucleation time, and ice nucleus size. Across laboratory studies, the picture of the IN ability of soot particles is quite heterogeneous. Our studies show that submicron flame and spark soot particles, whether uncoated or coated with sulfuric acid to simulate chemical aging, do not act as IN at temperatures above those at which homogeneous freezing takes place. Soot particles might therefore not be an important source of IN for immersion freezing in the atmosphere. In contrast, kaolinite, representative of natural mineral dust and of well-known composition and structure, is found to be very active in forming ice in all freezing modes (e.g., Mason and Maybank, 1958). Analyzing the immersion freezing behavior of different-sized kaolinite particles (300, 500 and 700 nm in diameter), a clear size effect was observed: the ice fraction (the number of frozen droplets divided by the total number of droplets) scales with particle surface area, i.e. the larger the ice nucleus surface, the higher the ice fraction. The slope of the logarithm of the ice fraction as a function of temperature is similar for all particle sizes investigated and fits very well with the results of Lüönd et al. (2010) and Murray et al. (2011). Heterogeneous ice nucleation rate coefficients are derived, which can be used to describe the immersion freezing process in a size-resolved manner in cloud microphysical models.
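
    One standard way to derive such rate coefficients assumes a single immersed particle per droplet and freezing as a Poisson process, f_ice = 1 - exp(-J_het * s * dt); the sketch below inverts this relation (the numbers are illustrative, not LACIS data).

      import math

      def j_het(ice_fraction, diameter_m, time_s):
          # Surface area of a spherical ice nucleus of the given diameter.
          s = math.pi * diameter_m ** 2
          # Invert f_ice = 1 - exp(-J_het * s * t) for J_het (m^-2 s^-1).
          return -math.log(1.0 - ice_fraction) / (s * time_s)

      # e.g., 10% of droplets frozen after 1.6 s with 700 nm particles:
      print(j_het(0.10, 700e-9, 1.6))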

  18. Ice Nucleation of Soot Particles in the Cirrus Regime: Is Pore Condensation and Freezing Relevant for Soot?

    NASA Astrophysics Data System (ADS)

    Kanji, Z. A.; Mahrt, F.; David, R.; Marcolli, C.; Lohmann, U.; Fahrni, J.; Brühwiler, D.

    2017-12-01

    Heterogeneous ice nucleation (HIN) onto soot particles has produced inconsistent results across previous studies, with the temperature and relative humidity conditions required for freezing depending on the source of the soot particles investigated. The ability of soot to trigger HIN depended on the type of soot and the size of the particles. Often, homogeneous freezing conditions or water saturation conditions were required to freeze soot particles, rendering HIN irrelevant. Using synthesised mesoporous silica particles, we show that pore condensation and freezing works, with experiments performed in the Zurich Ice Nucleation Chamber (ZINC). By testing a variety of soot particles in parallel in the Horizontal Ice Nucleation Chamber (HINC), we suggest that previously observed HIN on soot particles is not the responsible mechanism for ice formation. Laboratory-generated CAST brown and black soot, commercially available soot, and acid-treated soot were investigated for their ice nucleation abilities in the mixed-phase and cirrus cloud temperature regimes. No heterogeneous ice nucleation activity is inferred at T > -38 °C (mixed-phase cloud regime); however, depending on particle size and soot type, HIN was observed for T < -38 °C (cirrus cloud regime). Nevertheless, we question whether this is caused by a heterogeneous phase change due to the presence of so-called active sites, or by pore condensation of water, as predicted by the inverse Kelvin effect, followed by homogeneous nucleation of ice in the pores or cavities that are ubiquitous in soot particles between the primary spherules. The ability of some particles to freeze at lower relative humidity than others demonstrates that hydrophobicity plays a role in ice nucleation, i.e. it controls the conditions at which these cavities fill with water. Thus, for more hydrophobic particles, pore filling, and therefore freezing of pore water and ice crystal growth, occurs only at higher relative humidity. Future work focusses on testing the cloud processing ability of soot particles and on water adsorption isotherms of the different soot samples to support the hydrophobicity inferences from the ice nucleation results.

  19. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure.

    PubMed

    Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei

    2011-09-07

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach to high-performance computing, and we implement it here to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results are aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of the cloud-based MC simulation is identical to that produced by the single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes, and the parallelization overhead is negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed-up, cloud computing builds a layer of abstraction for high-performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed.
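
    The master/worker pattern described here is straightforward to sketch. The example below is illustrative only, not the authors' code: run_egs5_batch is a hypothetical stand-in for invoking the deployed EGS5 binaries, and the dose tally it returns is fake.

      import numpy as np
      from mpi4py import MPI

      def run_egs5_batch(n_histories, seed):
          # Placeholder for the EGS5 user code; returns a fake 64-voxel
          # dose tally scaled by the number of histories simulated.
          rng = np.random.default_rng(seed)
          return rng.random(64) * n_histories

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      # Split the histories evenly across ranks (remainder ignored here).
      total = 1_000_000
      local = run_egs5_batch(total // size, seed=rank)

      # Reduce the partial tallies onto the master; simulation time scales
      # inversely with the number of parallel nodes, as reported above.
      dose = comm.reduce(local, op=MPI.SUM, root=0)
      if rank == 0:
          print("aggregated dose tally:", dose[:4], "...")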

  20. Response of Moist Convection to Multi-scale Surface Flux Heterogeneity

    NASA Astrophysics Data System (ADS)

    Kang, S. L.; Ryu, J. H.

    2015-12-01

    We investigate the response of moist convection to the multi-scale character of the spatial variation of surface sensible heat flux (SHF) during the afternoon evolution of the convective boundary layer (CBL), utilizing a mesoscale-domain large eddy simulation (LES) model. The multi-scale surface heterogeneity is created analytically as a function of the spectral slope, over the wavelength range from a few tens of kilometers to a few hundreds of meters, of the surface SHF spectrum on a log-log scale. The response of moist convection to a κ⁻³-slope (where κ is the wavenumber) surface SHF field is compared with that to a κ⁻²-slope surface, which has a relatively weak mesoscale feature, and to a homogeneous κ⁰-slope surface. Given the surface energy balance with spatially uniform available energy, the prescribed SHF has a 180° phase lag with the latent heat flux (LHF) over a horizontal domain of (several tens of km)². Thus, the warmer (cooler) surface is relatively dry (moist). For all cases, the same observation-based sounding is prescribed as the initial condition. For all the κ⁻³-slope surface heterogeneity cases, early non-precipitating shallow clouds further develop into precipitating deep thunderstorms, but for all the κ⁻²-slope cases, only shallow clouds develop. We compare the vertical profiles of domain-averaged fluxes and variances, and the mesoscale and turbulence contributions to them, between the κ⁻³- and κ⁻²-slope cases. The cross-scale processes are also investigated.
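
    A surface field with a prescribed spectral slope can be synthesized by shaping the Fourier amplitudes of random noise. The sketch below is one plausible 1-D construction (the domain size, mean and variance are assumptions, not the study's configuration).

      import numpy as np

      def shf_field(n, dx_m, slope, mean=100.0, std=30.0, seed=0):
          # Random phases for the positive, non-Nyquist wavenumbers.
          rng = np.random.default_rng(seed)
          phases = np.exp(2j * np.pi * rng.random(n // 2 - 1))
          kappa = np.fft.rfftfreq(n, d=dx_m)[1:n // 2]
          # Amplitudes ~ kappa^(slope/2) give a power spectrum ~ kappa^slope.
          amps = kappa ** (slope / 2.0)
          spec = np.concatenate(([0.0], amps * phases, [0.0]))
          field = np.fft.irfft(spec, n)
          field = (field - field.mean()) / field.std()
          return mean + std * field  # surface SHF in W m^-2

      shf = shf_field(n=4096, dx_m=50.0, slope=-3)  # ~200 km domain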

  1. Driver’s Cognitive Workload and Driving Performance under Traffic Sign Information Exposure in Complex Environments: A Case Study of the Highways in China

    PubMed Central

    Lyu, Nengchao; Xie, Lian; Wu, Chaozhong; Fu, Qiang; Deng, Chao

    2017-01-01

    Complex traffic situations and high driving workload are leading contributing factors to traffic crashes. There is a strong correlation between driving performance and driving workload, such as the visual workload from traffic signs on highway off-ramps. This study aimed to evaluate traffic safety by analyzing drivers' behavior and performance under cognitive workload in complex environments. First, the driving workload of drivers was tested based on traffic signs carrying different quantities of information. Forty-four drivers were recruited for a traffic sign cognition experiment under static, controlled environment conditions. Traffic signs of varying complexity were used to apply the cognitive workload. The static experiment results reveal that workload is highly related to the amount of information on traffic signs and that reaction time increases with the information grade, while the effects of driving experience and gender are not significant. This shows that the cognitive workload in the subsequent driving experiments can be controlled through the amount of information on traffic signs. Second, driving characteristics and driving performance were analyzed under different secondary-task driving workload levels using a driving simulator. Drivers were required to drive at the required speed on a designed highway off-ramp scene. The cognitive workload was controlled by reading traffic signs with different information, divided into four levels. Drivers had to make choices by pushing buttons after reading the traffic signs. Meanwhile, driving performance information was recorded. Questionnaires on workload were collected right after each driving task. The results show that speed maintenance and lane deviations differ significantly under different levels of cognitive workload, and the effects of driving experience and gender are significant. These results can be used to analyze traffic safety in highway environments while taking drivers' cognition and driving performance into account. PMID:28218696

  2. An Approach to Quantify Workload in a System of Agents

    NASA Technical Reports Server (NTRS)

    Stocker, Richard; Rungta, Neha; Mercer, Eric; Raimondi, Franco; Holbrook, Jon; Cardoza, Colleen; Goodrich, Michael

    2015-01-01

    The role of humans in aviation and other domains continues to shift from manual control to automation monitoring. Studies have found that humans are often poorly suited for monitoring roles, and workload can easily spike in off-nominal situations. Current workload measurement tools, like NASA TLX, use human operators to assess their own workload after using a prototype system. Such measures are used late in the design process and can result in expensive alterations when problems are discovered. Our goal in this work is to provide a quantitative workload measure for use early in the design process. We leverage research in human cognition to define metrics that can measure workload on belief-desire-intentions based multi-agent systems. These measures can alert designers to potential workload issues early in design. We demonstrate the utility of our approach by characterizing quantitative differences in the workload for a single pilot operations model compared to a traditional two pilot model.

  3. Elastic Extension of a CMS Computing Centre Resources on External Clouds

    NASA Astrophysics Data System (ADS)

    Codispoti, G.; Di Maria, R.; Aiftimiei, C.; Bonacorsi, D.; Calligola, P.; Ciaschini, V.; Costantini, A.; Dal Pra, S.; DeGirolamo, D.; Grandi, C.; Michelotto, D.; Panella, M.; Peco, G.; Sapunenko, V.; Sgaravatto, M.; Taneja, S.; Zizzi, G.

    2016-10-01

    After the successful LHC data taking in Run-I and in view of the future runs, the LHC experiments are facing new challenges in the design and operation of the computing facilities. The computing infrastructure for Run-II is dimensioned to cope at most with the average amount of data recorded. The usage peaks, as already observed in Run-I, may however originate large backlogs, thus delaying the completion of the data reconstruction and ultimately the data availability for physics analysis. In order to cope with the production peaks, CMS - along the lines followed by other LHC experiments - is exploring the opportunity to access Cloud resources provided by external partners or commercial providers. Specific use cases have already been explored and successfully exploited during Long Shutdown 1 (LS1) and the first part of Run 2. In this work we present the proof of concept of the elastic extension of a CMS site, specifically the Bologna Tier-3, on an external OpenStack infrastructure. We focus on the "Cloud Bursting" of a CMS Grid site using a newly designed LSF configuration that allows the dynamic registration of new worker nodes to LSF. In this approach, the dynamically added worker nodes instantiated on the OpenStack infrastructure are transparently accessed by the LHC Grid tools and at the same time serve as an extension of the farm for local usage. The amount of resources allocated can thus be elastically adjusted to meet the needs of the CMS experiment and local users. Moreover, direct access to and integration of OpenStack resources into the CMS workload management system is explored. In this paper we present this approach, report on the performance of the on-demand allocated resources, and discuss the lessons learned and the next steps.
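
    The elasticity logic amounts to a control loop over the batch backlog; see the schematic sketch below, which is not the production CMS/LSF configuration: boot_worker and delete_worker are hypothetical wrappers around the OpenStack compute API, and pending_jobs stands in for an LSF queue query.

      # Schematic "cloud bursting" controller: keep roughly one cloud
      # worker per TARGET_JOBS_PER_WORKER pending jobs, within a cap.
      TARGET_JOBS_PER_WORKER = 8
      MAX_CLOUD_WORKERS = 50

      def scale(pending_jobs, cloud_workers, boot_worker, delete_worker):
          wanted = min(MAX_CLOUD_WORKERS,
                       pending_jobs // TARGET_JOBS_PER_WORKER)
          while len(cloud_workers) < wanted:
              # A freshly booted node registers itself with LSF on startup.
              cloud_workers.append(boot_worker())
          while len(cloud_workers) > wanted:
              # A drained node is deregistered and its instance deleted.
              delete_worker(cloud_workers.pop())
          return cloud_workers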

  4. Measuring perceived mental workload in children.

    PubMed

    Laurie-Rose, Cynthia; Frey, Meredith; Ennis, Aristi; Zamary, Amanda

    2014-01-01

    Little is known about the mental workload, or psychological costs, associated with information processing tasks in children. We adapted the highly regarded NASA Task Load Index (NASA-TLX) multidimensional workload scale (Hart & Staveland, 1988) to test its efficacy for use with elementary school children. We developed 2 types of tasks, each with 2 levels of demand, to draw differentially on resources from the separate subscales of workload. In Experiment 1, our participants were both typical and school-labeled gifted children recruited from 4th and 5th grades. Results revealed that task type elicited different workload profiles, and task demand directly affected the children's experience of workload. In general, gifted children experienced less workload than typical children. Objective response time and accuracy measures provide evidence for the criterion validity of the workload ratings. In Experiment 2, we applied the same method with 1st- and 2nd-grade children. Findings from Experiment 2 paralleled those of Experiment 1 and support the use of NASA-TLX with even the youngest elementary school children. These findings contribute to the fledgling field of educational ergonomics and attest to the innovative application of workload research. Such research may optimize instructional techniques and identify children at risk for experiencing overload.
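
    For reference, the standard adult NASA-TLX computation that the child-adapted scale builds on (Hart & Staveland's weighted average of six subscale ratings; this is the textbook procedure, not the authors' adaptation):

      DIMENSIONS = ("mental", "physical", "temporal",
                    "performance", "effort", "frustration")

      def tlx_score(ratings, pairwise_wins):
          # ratings: dimension -> 0..100; pairwise_wins: dimension -> how
          # often it was picked across the 15 pairwise comparisons.
          assert sum(pairwise_wins.values()) == 15
          return sum(ratings[d] * pairwise_wins[d] for d in DIMENSIONS) / 15.0

      score = tlx_score(
          {"mental": 70, "physical": 10, "temporal": 55,
           "performance": 40, "effort": 65, "frustration": 30},
          {"mental": 5, "physical": 0, "temporal": 3,
           "performance": 2, "effort": 4, "frustration": 1},
      )
      print(score)  # 59.0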

  5. Panel workload assessment in US primary care: accounting for non-face-to-face panel management activities.

    PubMed

    Arndt, Brian; Tuan, Wen-Jan; White, Jennifer; Schumacher, Jessica

    2014-01-01

    An understanding of primary care provider (PCP) workload is an important consideration in establishing optimal PCP panel size. However, no widely acceptable measure of PCP workload exists that incorporates the effort involved with both non-face-to-face patient care activities and face-to-face encounters. Accounting for this gap is critical given the increase in non-face-to-face PCP activities that has accompanied electronic health records (EHRs) (eg, electronic messaging). Our goal was to provide a comprehensive assessment of perceived PCP workload, accounting for aspects of both face-to-face and non-face-to-face encounters. Internal medicine, family medicine, and pediatric PCPs completed a self-administered survey about the perceived workload involved with face-to-face and non-face-to-face panel management activities as well as the perceived challenge associated with caring for patients with particular biomedical, demographic, and psychosocial characteristics (n = 185). Survey results were combined with EHR data at the individual patient and PCP service levels to assess PCP panel workload, accounting for face-to-face and non-face-to-face utilization. Of the multiple face-to-face and non-face-to-face activities associated with routine primary care, PCPs considered hospital admissions, obstetric care, hospital discharges, and new patient preventive health visits to involve a greater workload than non-face-to-face activities such as telephone calls, electronic communication, generating letters, and medication refills. Total workload within PCP panels at the individual patient level varied by overall health status, and the total workload of non-face-to-face panel management activities associated with routine primary care was greater than the total workload associated with face-to-face encounters regardless of health status. We used PCP survey results coupled with EHR data to assess PCP workload associated with both face-to-face and non-face-to-face panel management activities in primary care. The non-face-to-face workload was an important contributor to overall PCP workload for all patients regardless of overall health status. This is an important consideration for PCP workload assessment given the changing nature of primary care, which requires more non-face-to-face effort and results in an overall increase in PCP workload.

  6. Workload Measurement in Human Autonomy Teaming: How and Why?

    NASA Technical Reports Server (NTRS)

    Shively, Jay

    2016-01-01

    This is an invited talk on autonomy and workload for an AFRL Blue Sky workshop sponsored by the Florida Institute for Human Machine Studies. The presentation reviews various metrics of workload and how to move forward with measuring workload in a human-autonomy teaming environment.

  7. Timesharing performance as an indicator of pilot mental workload

    NASA Technical Reports Server (NTRS)

    Casper, Patricia A.

    1988-01-01

    The research was performed in two simultaneous phases, each intended to identify and manipulate factors related to operator mental workload. The first phase concerned evaluation of attentional deficits (workloads) in a timesharing task. Work in the second phase involved incorporating the results from these and other experiments into an expert system designed to provide workload metric selection advice to nonexperts in the field interested in operator workload. The results of the experiments conducted are summarized.

  8. A human factors framework and study of the effect of nursing workload on patient safety and employee quality of working life

    PubMed Central

    Holden, Richard J.; Scanlon, Matthew C.; Patel, Neal R.; Kaushal, Rainu; Escoto, Kamisha Hamilton; Brown, Roger L.; Alper, Samuel J.; Arnold, Judi M.; Shalaby, Theresa M.; Murkowski, Kathleen; Karsh, Ben-Tzion

    2010-01-01

    Background Nursing workload is increasingly thought to contribute to both nurses’ quality of working life and quality/safety of care. Prior studies lack a coherent model for conceptualizing and measuring the effects of workload in health care. In contrast, we conceptualized a human factors model for workload specifying workload at three distinct levels of analysis and having multiple nurse and patient outcomes. Methods To test this model, we analyzed results from a cross-sectional survey of a volunteer sample of nurses in six units of two academic tertiary care pediatric hospitals. Results Workload measures were generally correlated with outcomes of interest. A multivariate structural model revealed that: the unit-level measure of staffing adequacy was significantly related to job dissatisfaction (path loading = .31) and burnout (path loading = .45); the task-level measure of mental workload related to interruptions, divided attention, and being rushed was associated with burnout (path loading = .25) and medication error likelihood (path loading = 1.04). Job-level workload was not uniquely and significantly associated with any outcomes. Discussion The human factors engineering model of nursing workload was supported by data from two pediatric hospitals. The findings provided a novel insight into specific ways that different types of workload could affect nurse and patient outcomes. These findings suggest further research and yield a number of human factors design suggestions. PMID:21228071

  9. A human factors framework and study of the effect of nursing workload on patient safety and employee quality of working life.

    PubMed

    Holden, Richard J; Scanlon, Matthew C; Patel, Neal R; Kaushal, Rainu; Escoto, Kamisha Hamilton; Brown, Roger L; Alper, Samuel J; Arnold, Judi M; Shalaby, Theresa M; Murkowski, Kathleen; Karsh, Ben-Tzion

    2011-01-01

    Nursing workload is increasingly thought to contribute to both nurses' quality of working life and quality/safety of care. Prior studies lack a coherent model for conceptualising and measuring the effects of workload in healthcare. In contrast, we conceptualised a human factors model for workload specifying workload at three distinct levels of analysis and having multiple nurse and patient outcomes. To test this model, we analysed results from a cross-sectional survey of a volunteer sample of nurses in six units of two academic tertiary care paediatric hospitals. Workload measures were generally correlated with outcomes of interest. A multivariate structural model revealed that: the unit-level measure of staffing adequacy was significantly related to job dissatisfaction (path loading=0.31) and burnout (path loading=0.45); the task-level measure of mental workload related to interruptions, divided attention, and being rushed was associated with burnout (path loading=0.25) and medication error likelihood (path loading=1.04). Job-level workload was not uniquely and significantly associated with any outcomes. The human factors engineering model of nursing workload was supported by data from two paediatric hospitals. The findings provided a novel insight into specific ways that different types of workload could affect nurse and patient outcomes. These findings suggest further research and yield a number of human factors design suggestions.

  10. Accumulated workloads and the acute:chronic workload ratio relate to injury risk in elite youth football players

    PubMed Central

    Bowen, Laura; Gross, Aleksander Stefan; Gimpel, Mo; Li, François-Xavier

    2017-01-01

    Aim The purpose of this study was to investigate the relationship between physical workload and injury risk in elite youth football players. Methods The workload data and injury incidence of 32 players were monitored throughout 2 seasons. Multiple regression was used to compare cumulative (1, 2, 3 and 4-weekly) loads and acute:chronic (A:C) workload ratios (acute workload divided by chronic workload) between injured and non-injured players for specific GPS and accelerometer-derived variables: total distance (TD), high-speed distance (HSD), accelerations (ACC) and total load. Workloads were classified into discrete ranges by z-scores and the relative risk was determined. Results A very high number of ACC (≥9254) over 3 weeks was associated with the highest significant overall (relative risk (RR)=3.84) and non-contact injury risk (RR=5.11). Non-contact injury risk was significantly increased when a high acute HSD was combined with low chronic HSD (RR=2.55), but not with high chronic HSD (RR=0.47). Contact injury risk was greatest when A:C TD and ACC ratios were very high (1.76 and 1.77, respectively) (RR=4.98). Conclusions In general, higher accumulated and acute workloads were associated with a greater injury risk. However, progressive increases in chronic workload may develop the players' physical tolerance to higher acute loads and resilience to injury risk. PMID:27450360
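
    The A:C ratio itself is a one-line computation once weekly totals are available; a minimal sketch, assuming a 1-week acute window and a 4-week chronic window as in the study:

      def acute_chronic_ratio(weekly_loads):
          # weekly_loads: chronological weekly totals (e.g., GPS total
          # distance), oldest first; needs at least 4 weeks of data.
          if len(weekly_loads) < 4:
              raise ValueError("need at least 4 weeks of data")
          acute = weekly_loads[-1]                # most recent week
          chronic = sum(weekly_loads[-4:]) / 4.0  # trailing 4-week mean
          return acute / chronic

      print(acute_chronic_ratio([20_000, 22_000, 21_000, 30_000]))  # ~1.29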

  11. Monitoring Workload in Throwing-Dominant Sports: A Systematic Review.

    PubMed

    Black, Georgia M; Gabbett, Tim J; Cole, Michael H; Naughton, Geraldine

    2016-10-01

    The ability to monitor training load accurately in professional sports is proving vital for athlete preparedness and injury prevention. While numerous monitoring techniques have been developed to assess the running demands of many team sports, these methods are not well suited to throwing-dominant sports that are infrequently linked to high running volumes. Therefore, other techniques are required to monitor the differing demands of these sports to ensure athletes are adequately prepared for competition. To investigate the different methodologies used to quantitatively monitor training load in throwing-dominant sports. A systematic review of the methods used to monitor training load in throwing-dominant sports was conducted using variations of terms that described different load-monitoring techniques and different sports. Studies included in this review were published prior to June 2015 and were identified through a systematic search of four electronic databases including Academic Search Complete, CINAHL, Medline and SPORTDiscus. Only full-length peer-reviewed articles investigating workload monitoring in throwing-dominant sports were selected for review. A total of 8098 studies were initially retrieved from the four databases and 7334 results were removed as they were either duplicates, review articles, non-peer-reviewed articles, conference abstracts or articles written in languages other than English. After screening the titles and abstracts of the remaining papers, 28 full-text papers were reviewed, resulting in the identification of 20 articles meeting the inclusion criteria for monitoring workloads in throwing-dominant sports. Reference lists of selected articles were then scanned to identify other potential articles, which yielded one additional article. Ten articles investigated workload monitoring in cricket, while baseball provided eight results, and handball, softball and water polo each contributed one article. Results demonstrated varying techniques used to monitor workload and purposes for monitoring workload, encompassing the relationship between workload and injury, individual responses to workloads, the effect of workload on subsequent performance and the future directions of workload-monitoring techniques. This systematic review highlighted a number of simple and effective workload-monitoring techniques implemented across a variety of throwing-dominant sports. The current literature placed an emphasis on the relationship between workload and injury. However, due to differences in chronological and training age, inconsistent injury definitions and time frames used for monitoring, injury thresholds remain unclear in throwing-dominant sports. Furthermore, although research has examined total workload, the intensity of workload is often neglected. Additional research on the reliability of self-reported workload data is also required to validate existing relationships between workload and injury. Considering the existing disparity within the literature, it is likely that throwing-dominant sports would benefit from the development of an automated monitoring tool to objectively assess throwing-related workloads in conjunction with well-established internal measures of load in athletes.

  12. The effects of practice on tracking and subjective workload

    NASA Technical Reports Server (NTRS)

    Hancock, P. A.; Robinson, M. A.; Chu, A. L.; Hansen, D. R.; Vercruyssen, M.

    1989-01-01

    Six college-age male subjects performed one hundred, two-minute trials on a second-order tracking task. After each trial, subjects estimated perceived workload using both the NASA TLX and SWAT workload assessment procedures. Results confirmed an expected performance improvement on the tracking task which followed traditional learning curves within the performance of each individual. Perceived workload also decreased for both scales across trials. While performance variability significantly decreased across trials, workload variability remained constant. One month later, the same subjects returned to complete the second experiment in the sequence which was a retention replication of the first experiment. Results replicated those for the first experiment except that both performance error and workload were at reduced overall levels. Results in general affirm a parallel workload reduction with performance improvement, an observation consistent with a resource-based view of automaticity.

  13. Cognitive and affective components of mental workload: Understanding the effects of each on human decision making behavior

    NASA Technical Reports Server (NTRS)

    Nygren, Thomas E.

    1992-01-01

    Human factors and ergonomics researchers have recognized for some time the increasing importance of understanding the role of the construct of mental workload in flight research. Current models of mental workload suggest that it is a multidimensional and complex construct, but one that has proved difficult to measure. Because of this difficulty, emphasis has usually been placed on using direct reports through subjective measures such as rating scales to assess levels of mental workload. The NASA Task Load Index (NASA/TLX, Hart and Staveland) has been shown to be a highly reliable and sensitive measure of perceived mental workload. But a problem with measures like TLX is that there is still considerable disagreement as to what it is about mental workload that these subjective measures are actually measuring. The empirical use of subjective workload measures has largely been to provide estimates of the cognitive components of the actual mental workload required for a task. However, my research suggests that these measures may, in fact have greater potential in accurately assessing the affective components of workload. That is, for example, TLX may be more likely to assess the positive and negative feelings associated with varying workload levels, which in turn may potentially influence the decision making behavior that directly bears on performance and safety issues. Pilots, for example, are often called upon to complete many complex tasks that are high in mental workload, stress, and frustration, and that have significant dynamic decision making components -- often ones that involve risk as well.

  14. Managing Teacher Workload: Work-Life Balance and Wellbeing

    ERIC Educational Resources Information Center

    Bubb, Sara; Earley, Peter

    2004-01-01

    This book is divided into three sections. In the First Section, entitled "Wellbeing and Workload", the authors examine teacher workload and how teachers spend their time. Chapter 1 focuses on what the causes and effects of excessive workload are, especially in relation to wellbeing, stress and, crucially, recruitment and retention?…

  15. "Time Is Not Enough." Workload in Higher Education: A Student Perspective

    ERIC Educational Resources Information Center

    Kyndt, Eva; Berghmans, Inneke; Dochy, Filip; Bulckens, Lydwin

    2014-01-01

    Students' workload has been recognised as a major factor in the teaching and learning environment. This paper starts by structuring the different conceptualisations of workload described in the scientific literature. Besides the traditional distinction between objective and subjective or perceived workload, a distinction between conceptualisations…

  16. Advances in heterogeneous ice nucleation research: Theoretical modeling and measurements

    NASA Astrophysics Data System (ADS)

    Beydoun, Hassan

    In the atmosphere, cloud droplets can remain in a supercooled liquid phase at temperatures as low as -40 °C. Above this temperature, cloud droplets freeze via heterogeneous ice nucleation, whereby a rare and poorly understood subset of atmospheric particles catalyzes the ice phase transition. As the phase state of clouds is critical in determining their radiative properties and lifetime, deficiencies in our understanding of heterogeneous ice nucleation place a large uncertainty on our efforts to predict human-induced global climate change. Experimental challenges in properly simulating particle-induced freezing processes under atmospherically relevant conditions have largely contributed to the absence of a well-established model and of parameterizations that accurately predict heterogeneous ice nucleation. At the same time, the sparse reliable measurements available are difficult to interpret within a single consistent theoretical or empirical framework, which adds layers of uncertainty when attempting to extract useful information about ice nucleation for use in atmospheric cloud models. In this dissertation a new framework for describing heterogeneous ice nucleation is developed. Starting from classical nucleation theory, the surface of an ice nucleating particle is treated as a continuum of heterogeneous ice nucleating activity, and a particle-specific distribution g of this activity is derived. It is hypothesized that an individual particle species exhibits a critical surface area: above this critical area the ice nucleating activity of the particle species can be described by a single distribution g, while below it the variability in g expresses itself externally, resulting in particle-to-particle variability in ice nucleating activity. The framework is supported by cold-plate droplet freezing measurements for dust and biological particles in which the total surface area of particle material available is varied. Freezing spectra above a certain surface area are shown to be successfully fitted with g, while a process of random sampling from g can predict the freezing behavior below the identified critical surface area threshold. The framework is then extended to account for droplets composed of multiple particle species and successfully applied to predict the freezing spectra of a mixed proxy for an atmospheric dust-biological particle system. The contact freezing mode of ice nucleation, whereby a particle induces freezing upon collision with a droplet, is thought to be more efficient than particle-initiated immersion freezing from within the droplet bulk. However, it has been a decades-long challenge to accurately measure this ice nucleation mode, since it necessitates reliably measuring the rate at which particles hit a droplet surface combined with direct determination of freezing onset. In an effort to remedy this longstanding deficiency, a temperature-controlled chilled aerosol optical tweezers system capable of stably isolating water droplets in air at subzero temperatures has been designed and implemented. The new temperature-controlled system retains the powerful capabilities of traditional aerosol optical tweezers: retrieval of a cavity-enhanced Raman spectrum that can be used to accurately determine the size and refractive index of a trapped droplet. With these capabilities, it is estimated that the design can achieve ice supersaturation conditions at the droplet surface.
It was also found that a KCl aqueous droplet simultaneously cooling and evaporating exhibited a significantly higher measured refractive index at its surface than when it was held at a steady-state temperature. This implies the potential of a "salting out" process. Sensitivity of the cavity-enhanced Raman spectrum, as well as of the visual image of a trapped droplet, to dust particle collisions is shown, an important step in measuring collision frequencies of dust particles with a trapped droplet. These results may pave the way for future experiments on the exceptionally poorly understood contact freezing mode of ice nucleation.
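
    The particle-to-particle variability below the critical area can be illustrated with a toy Monte-Carlo in which each particle draws its ice-nucleating ability from an assumed g. The sketch below uses a simplified characteristic-freezing-temperature draw with placeholder parameters; in the dissertation, g feeds classical-nucleation-theory rates rather than fixed freezing temperatures.

      import numpy as np

      rng = np.random.default_rng(7)

      def frozen_fraction(temps_c, n_droplets=1000):
          # One particle per droplet; each particle draws a characteristic
          # freezing temperature from an assumed normal g (placeholders).
          t_char = rng.normal(loc=-27.0, scale=3.0, size=n_droplets)
          # A droplet counts as frozen once the temperature has fallen to
          # its particle's value, giving the population freezing spectrum.
          return [(t <= t_char).mean() for t in temps_c]

      print(frozen_fraction([-20.0, -25.0, -30.0, -35.0]))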

  17. Aqueous Chemistry in the Clouds of Venus: A Possible Source for the UV Absorber

    NASA Astrophysics Data System (ADS)

    Baines, Kevin H.; Delitsky, M. L.

    2013-10-01

    The identity and cause of the UV absorber near the Venusian cloudtops (62-70 km altitude) has been an enduring mystery. Given the role of sulfur in Venus’s atmosphere, where, somewhat analogous to water on Earth, it cycles through gas, liquid, and (possibly) solid phases, it has been a prime suspect as at least a key component, perhaps as long-lived solid poly-sulfur aerosols, Sn, where n > 4. However, the narrow range of altitudes inhabited by the UV absorber (thought to form and reside primarily above 62 km altitude) seems incompatible with Sn, which should disperse vertically after formation. Here, we point to another process that could lead to somewhat more exotic chemistries that favor formation and sequestration at high altitudes: aqueous chemistry within H2SO4-nH2O cloud particles. Due to (1) the decrease of temperature and (2) the increase in the fraction of water (“n” in the previous formula) of each cloud droplet with altitude, high-altitude particles near the cloudtops are - via the “heterogeneous uptake” process - significantly more capable of capturing and concentrating trace gases, in particular HCl. For example, the heterogeneous uptake of HCl in H2SO4 droplets near the 65-km cloudtops is at least three times greater than that found in the middle of the clouds near 55 km altitude. Other factors, such as local mixing ratios and the concentration of other solvents in the droplet, also modify the uptake. Within the cloud droplets, solution chemistry between HCl and H2SO4 may lead to the formation of chlorosulfonic acid, ClSO3H, a weak acid that readily breaks down into other species such as SO2Cl2 (sulfuryl chloride) and SOCl2 (thionyl chloride). Together, these three materials have UV-blue absorptions at 0.21, 0.29, 0.39 and 0.47 micron. Thus, H2SO4 aerosols at high altitudes may take on lasting UV absorption characteristics, dependent on temperature (altitude) and other conditions. Balloons floating at benign, Earth-surface-like temperature/pressure conditions near 56-km altitude may be able to sample such aerosols and their complex contents as measured in periodic downdrafts of material from higher altitudes.

  18. On water in volcanic clouds

    NASA Astrophysics Data System (ADS)

    Durant, Adam J.

    2007-12-01

    Volcanic clouds and tephra fallout present a hazard to aviation, human and animal health (direct inhalation or ingestion, contamination of water supplies), and infrastructure (building collapse, burial of roads and railways, agriculture, abrasive and chemical effects on machinery). Understanding sedimentation processes is a fundamental component in the prediction of volcanic cloud lifetime and fallout at the ground, essential in the mitigation of these hazards. The majority of classical volcanic ash transport and dispersion models (VATDM) are based solely on fluid dynamics. The non-agreement between VATDM and observed regional-scale tephra deposit characteristics is especially obvious at large distances from the source volcano. In meteorology, the processes of hydrometeor nucleation, growth and collection have been long-established as playing a central role in sedimentation and precipitation. Taking this as motivation, the hypothesis that hydrometeor formation drives sedimentation from volcanic clouds was tested. The research objectives of this dissertation are: (1) To determine the effectiveness of tephra particles in the catalysis of the liquid water to ice phase transformation, with application to ice hydrometeor formation in volcanic clouds. (2) To determine the sedimentological characteristics of distal (100s km) tephra fallout from recent volcanic clouds. (3) To assess particle fallout rates from recent volcanic clouds in the context of observed deposit characteristics. (4) To assess the implications of hydrometeor formation on the enhancement of volcanic sedimentation and the potential for cloud destabilization from volcanic hydrometeor sublimation. Dissertation Overview. The following chapters present the analysis, results and conclusions of heterogeneous ice nucleation experiments and sedimentological characterization of several recent tephra deposits. The dissertation is organized in three chapters, each prepared in journal article format. In Chapter 1, single ash particle freezing experiments were carried out to investigate the effect of ash particle composition and surface area on water drop freezing temperature. In Chapter 2, the tephra deposit from the 18 May 1980 eruption of Mount St. Helens, USA, was reanalyzed using laser diffraction particle size analysis and hydrometeor-induced sedimentation mechanisms are considered. In Chapter 3, fallout from the 18 August 1992 and 16--17 September 1992 eruptions of Mount Spurr, USA, was analyzed and particle sedimentation and cloud microphysics were modeled to assess the potential for cloud destabilization from hydrometeor sublimation.

  19. Comparison of workload measures on computer-generated primary flight displays

    NASA Technical Reports Server (NTRS)

    Nataupsky, Mark; Abbott, Terence S.

    1987-01-01

    Four Air Force pilots were used as subjects to assess a battery of subjective and physiological workload measures in a flight simulation environment in which two computer-generated primary flight display configurations were evaluated. A high- and low-workload task was created by manipulating flight path complexity. Both SWAT and the NASA-TLX were shown to be effective in differentiating the high and low workload path conditions. Physiological measures were inconclusive. A battery of workload measures continues to be necessary for an understanding of the data. Based on workload, opinion, and performance data, it is fruitful to pursue research with a primary flight display and a horizontal situation display integrated into a single display.

  20. A self-analysis of the NASA-TLX workload measure.

    PubMed

    Noyes, Jan M; Bruneau, Daniel P J

    2007-04-01

    Computer use and, more specifically, the administration of tests and materials online continue to proliferate. A number of subjective, self-report workload measures exist, but the National Aeronautics and Space Administration-Task Load Index (NASA-TLX) is probably the most well known and used. The aim of this paper is to consider the workload costs associated with the computer-based and paper versions of the NASA-TLX measure. It was found that there is a significant difference between the workload scores for the two media, with the computer version of the NASA-TLX incurring more workload. This has implications for the practical use of the NASA-TLX as well as for other computer-based workload measures.

  1. Crew procedures and workload of retrofit concepts for microwave landing system

    NASA Technical Reports Server (NTRS)

    Summers, Leland G.; Jonsson, Jon E.

    1989-01-01

    Crew procedures and workload for Microwave Landing Systems (MLS) that could be retrofitted into existing transport aircraft were evaluated. Two MLS receiver concepts were developed. One is capable of capturing a runway centerline and the other is capable of capturing a segmented approach path. Crew procedures were identified and crew task analyses were performed using each concept. Crew workload comparisons were made between the MLS concepts and an ILS baseline using a task-timeline workload model. Workload indexes were obtained for each scenario. The results showed that workload was comparable to the ILS baseline for the MLS centerline capture concept, but significantly higher for the segmented path capture concept.

  2. Cognitive workload modulation through degraded visual stimuli: a single-trial EEG study

    NASA Astrophysics Data System (ADS)

    Yu, K.; Prasad, I.; Mir, H.; Thakor, N.; Al-Nashash, H.

    2015-08-01

    Objective. Our experiments explored the effect of visual stimuli degradation on cognitive workload. Approach. We investigated the subjective assessment, event-related potentials (ERPs) as well as electroencephalogram (EEG) as measures of cognitive workload. Main results. These experiments confirm that degradation of visual stimuli increases cognitive workload as assessed by subjective NASA task load index and confirmed by the observed P300 amplitude attenuation. Furthermore, the single-trial multi-level classification using features extracted from ERPs and EEG is found to be promising. Specifically, the adopted single-trial oscillatory EEG/ERP detection method achieved an average accuracy of 85% for discriminating 4 workload levels. Additionally, we found from the spatial patterns obtained from EEG signals that the frontal parts carry information that can be used for differentiating workload levels. Significance. Our results show that visual stimuli can modulate cognitive workload, and the modulation can be measured by the single trial EEG/ERP detection method.

  3. Exploring the Utility of Workload Models in Academe: A Pilot Study

    ERIC Educational Resources Information Center

    Boyd, Leanne

    2014-01-01

    The workload of academics in Australia is increasing. Among the potential ramifications of this are work-related stress and burnout. Unions have negotiated workload models in employment agreements as a means of distributing workload in a fair and transparent manner. This qualitative pilot study aimed to explore how academics perceive their current…

  4. Nursing Workload and the Changing Health Care Environment: A Review of the Literature

    ERIC Educational Resources Information Center

    Neill, Denise

    2011-01-01

    Changes in the health care environment have impacted nursing workload, quality of care, and patient safety. Traditional nursing workload measures do not guarantee efficiency, nor do they adequately capture the complexity of nursing workload. Review of the literature indicates nurses perceive the quality of their work has diminished. Research has…

  5. Student Burnout as a Function of Personality, Social Support, and Workload.

    ERIC Educational Resources Information Center

    Jacobs, Sheri R.; Dodd, David K.

    2003-01-01

    Measures of social support, personality, and workload were related to psychological burnout among 149 college students. High levels of burnout were predicted by negative temperament and subjective workload, but actual workload (academic and vocational) had little to do with burnout. Low levels of burnout were predicted by positive temperament,…

  6. Effects of mental workload on physiological and subjective responses during traffic density monitoring: A field study.

    PubMed

    Fallahi, Majid; Motamedzade, Majid; Heidarimoghadam, Rashid; Soltanian, Ali Reza; Miyake, Shinji

    2016-01-01

    This study evaluated operators' mental workload while monitoring traffic density in a city traffic control center. To determine the mental workload, physiological signals (ECG, EMG) were recorded and the NASA-Task Load Index (TLX) was administered to 16 operators. The results showed that the operators experienced a higher mental workload during high traffic density (HTD) than during low traffic density (LTD), and the average workload score was significantly higher in HTD conditions than in LTD conditions. The traffic control center stressors caused changes in heart rate variability features and EMG amplitude. The findings indicated that increasing traffic congestion had a significant effect on HR, RMSSD, SDNN, the LF/HF ratio, and EMG amplitude. The results suggested that when operators' workload increases, their mental fatigue and stress levels increase and their mental health deteriorates. Therefore, it may be necessary to implement an ergonomic program to manage mental health. Furthermore, by evaluating mental workload, the traffic control center director can organize the center's operators to sustain an appropriate mental workload and improve traffic control management.
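
    The time-domain HRV features named here (SDNN, RMSSD) have standard definitions over the RR-interval series; a minimal sketch follows (LF/HF additionally needs a spectral estimate of the interpolated RR series and is omitted for brevity).

      import math

      def hrv_features(rr_ms):
          # rr_ms: successive RR intervals in milliseconds.
          diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
          mean_rr = sum(rr_ms) / len(rr_ms)
          sdnn = math.sqrt(sum((x - mean_rr) ** 2 for x in rr_ms)
                           / (len(rr_ms) - 1))
          rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
          return {"HR_bpm": 60_000.0 / mean_rr,
                  "SDNN_ms": sdnn, "RMSSD_ms": rmssd}

      print(hrv_features([812, 845, 790, 825, 805, 840]))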

  7. Online EEG-Based Workload Adaptation of an Arithmetic Learning Environment.

    PubMed

    Walter, Carina; Rosenstiel, Wolfgang; Bogdan, Martin; Gerjets, Peter; Spüler, Martin

    2017-01-01

    In this paper, we demonstrate a closed-loop EEG-based learning environment that adapts instructional learning material online to improve learning success in students during arithmetic learning. The amount of cognitive workload during learning is crucial for successful learning and should be held in the optimal range for each learner. Based on EEG data from 10 subjects, we created a prediction model that estimates the learner's workload to obtain an unobtrusive workload measure. Furthermore, we developed an interactive learning environment that uses the prediction model to estimate the learner's workload online from the EEG data and adapts the difficulty of the learning material to keep the learner's workload in an optimal range. The EEG-based learning environment was used by 13 subjects to learn arithmetic addition in the octal number system, leading to a significant learning effect. The results suggest that it is feasible to use EEG as an unobtrusive measure of cognitive workload to adapt the learning content. Furthermore, they demonstrate that prompt workload prediction is possible using a generalized prediction model, without the need for user-specific calibration.
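
    The closed-loop adaptation reduces to keeping the workload estimate inside a target band. Below is a schematic sketch, assuming a normalized workload prediction per EEG window and a discrete difficulty scale; the band and step size are assumptions, not the paper's values.

      OPTIMAL = (0.4, 0.6)  # assumed target range for normalized workload

      def adapt_difficulty(level, workload, step=1, bounds=(1, 10)):
          if workload > OPTIMAL[1]:    # overload: present easier items
              level -= step
          elif workload < OPTIMAL[0]:  # underload: present harder items
              level += step
          return max(bounds[0], min(bounds[1], level))

      # Called once per task block with the EEG-based workload estimate:
      level = 5
      for w in (0.72, 0.65, 0.55, 0.35):
          level = adapt_difficulty(level, w)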

  8. Patient Safety Incidents and Nursing Workload 1

    PubMed Central

    Carlesi, Katya Cuadros; Padilha, Kátia Grillo; Toffoletto, Maria Cecília; Henriquez-Roldán, Carlos; Juan, Monica Andrea Canales

    2017-01-01

    Objective: to identify the relationship between the workload of the nursing team and the occurrence of patient safety incidents linked to nursing care in a public hospital in Chile. Method: quantitative, analytical, cross-sectional research through review of medical records. The estimation of workload in Intensive Care Units (ICUs) was performed using the Therapeutic Interventions Scoring System (TISS-28) and, for the other services, the nurse/patient and nursing assistant/patient ratios. Descriptive univariate and multivariate analyses were performed. For the multivariate analysis we used principal component analysis and Pearson correlation. Results: 879 post-discharge clinical records and the workload of 85 nurses and 157 nursing assistants were analyzed. The overall incident rate was 71.1%. A high positive correlation was found among the workload variables (r = 0.9611 to r = 0.9919) and between workload and the rate of falls (r = 0.8770). The rates of medication errors, mechanical restraint incidents and self-removal of invasive devices were not correlated with the workload. Conclusions: the workload was high in all units except the intermediate care unit. Only the rate of falls was associated with the workload. PMID:28403334

  9. Analysis of Different Cost Functions in the Geosect Airspace Partitioning Tool

    NASA Technical Reports Server (NTRS)

    Wong, Gregory L.

    2010-01-01

    A new cost function representing air traffic controller workload is implemented in the Geosect airspace partitioning tool. Geosect currently uses a combination of aircraft count and dwell time to select optimal airspace partitions that balance controller workload. This is referred to as the aircraft count/dwell time hybrid cost function. The new cost function is based on Simplified Dynamic Density, a measure of different aspects of air traffic controller workload. Three sectorizations are compared. These are the current sectorization, Geosect's sectorization based on the aircraft count/dwell time hybrid cost function, and Geosect's sectorization based on the Simplified Dynamic Density cost function. Each sectorization is evaluated for maximum and average workload along with workload balance, using the Simplified Dynamic Density as the workload measure. In addition, the Airspace Concept Evaluation System, a nationwide air traffic simulator, is used to determine the capacity and delay incurred by each sectorization. The sectorization resulting from the Simplified Dynamic Density cost function had a lower maximum workload measure than the other sectorizations, and the sectorization based on the combination of aircraft count and dwell time did a better job of balancing workload and balancing capacity. However, the current sectorization had the lowest average workload, highest sector capacity, and the least system delay.

  10. Psychophysiological response to cognitive workload during symmetrical, asymmetrical and dual-task walking.

    PubMed

    Knaepen, Kristel; Marusic, Uros; Crea, Simona; Rodríguez Guerrero, Carlos D; Vitiello, Nicola; Pattyn, Nathalie; Mairesse, Olivier; Lefeber, Dirk; Meeusen, Romain

    2015-04-01

    Walking with a lower limb prosthesis imposes a high cognitive workload on amputees, possibly affecting their mobility, safety and independence. A biocooperative prosthesis able to reduce the cognitive workload of walking could offer a solution. We therefore investigated whether different levels of cognitive workload can be assessed during symmetrical, asymmetrical and dual-task walking, and identified which parameters are the most sensitive. Twenty-four healthy subjects participated in this study. Cognitive workload was assessed through psychophysiological responses, physical and cognitive performance, and subjective ratings. The results showed that breathing frequency and heart rate significantly increased, and heart rate variability significantly decreased, with increasing cognitive workload during walking (p<.05). Performance measures (e.g., cadence) only changed under high cognitive workload. Psychophysiological measures are therefore the most sensitive indicators of changes in cognitive workload during walking. These parameters reflect the cognitive effort necessary to maintain performance during complex walking and can easily be assessed regardless of the task. This makes them excellent candidates to feed into the control loop of a biocooperative prosthesis in order to detect cognitive workload. This information can then be used to adapt the robotic assistance to the patient's cognitive abilities.

  11. Electronic Health Record Alert-Related Workload as a Predictor of Burnout in Primary Care Providers.

    PubMed

    Gregory, Megan E; Russo, Elise; Singh, Hardeep

    2017-07-05

    Electronic health records (EHRs) have been shown to increase physician workload. One EHR feature that contributes to increased workload is asynchronous alerts (also known as inbox notifications) related to test results, referral responses, medication refill requests, and messages from physicians and other health care professionals. This alert-related workload results in negative cognitive outcomes, but its effect on affective outcomes, such as burnout, has been understudied. To examine EHR alert-related workload (both objective and subjective) as a predictor of burnout in primary care providers (PCPs), in order to ultimately inform interventions aimed at reducing burnout due to alert workload. A cross-sectional questionnaire and focus group of 16 PCPs at a large medical center in the southern United States. Subjective, but not objective, alert workload was related to two of the three dimensions of burnout, including physical fatigue (p = 0.02) and cognitive weariness (p = 0.04), when controlling for organizational tenure. To reduce alert workload and subsequent burnout, participants indicated a desire to have protected time for alert management, fewer unnecessary alerts, and improvements to the EHR system. Burnout associated with alert workload may be in part due to subjective differences at an individual level, and not solely a function of the objective work environment. This suggests the need for both individual and organizational-level interventions to improve alert workload and subsequent burnout. Additional research should confirm these findings in larger, more representative samples.

  12. [The measurement of nursing workload in a sub-intensive unit with the Nine Equivalents of Nursing Manpower Scale].

    PubMed

    D'Orazio, Alessia; Dragonetti, Antonella; Finiguerra, Ivana; Simone, Paola

    2015-01-01

    The need to match nursing manpower to patient complexity requires a careful assessment of the nursing workload. The aims were to measure the nursing workload in a sub-intensive care unit and to assess the impact on it of patients isolated for multidrug-resistant (MDR) microorganisms and of patients with delirium. From December 1, 2014 to March 31, 2015 the nursing workload of patients admitted to a sub-intensive unit of a Turin hospital was measured with the Nine Equivalents of Nursing Manpower Score (NEMS), both in its original form and in a modified form adding 1 point for isolated patients and for patients with delirium (assessed with the Richmond Agitation Sedation Scale). Admission and discharge times, and the activities performed in and out of the unit, were registered. Two hundred thirty patients were assessed daily, and no differences were observed between mean NEMS scores on the original and modified scales (December 17.3 vs 18.5; January 19.4 vs 20.2; February 19.9 vs 20.6; March 19.5 vs 20.1). Mean scores did not change across shifts, although on average 8 days a month the scores exceeded 21, identifying an excess workload and the need for a 2:1 patient/nurse ratio. The maximum workload was concentrated between 12:00 and 18:00. The NEMS scale allows the nursing workload to be measured. Isolated patients and patients with delirium did not appear to significantly increase the nursing workload.
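
    A minimal sketch of the modified scoring rule and the 21-point threshold described above; the patient data are hypothetical:

    ```python
    def modified_nems(nems_score, isolated=False, delirium=False):
        # Original NEMS plus 1 point each for isolation and delirium,
        # the modification described in the study.
        return nems_score + (1 if isolated else 0) + (1 if delirium else 0)

    def needs_two_to_one(score, threshold=21):
        # Scores above ~21 identified an excess workload and the need
        # for a 2:1 patient/nurse ratio in the study.
        return score > threshold

    # Hypothetical patients: (original NEMS, isolated, delirium)
    patients = [(17, False, False), (20, True, True), (22, False, True)]
    for nems, iso, deli in patients:
        score = modified_nems(nems, iso, deli)
        flag = "2:1 ratio needed" if needs_two_to_one(score) else "standard ratio"
        print(score, "->", flag)
    ```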

  13. Long-term Evolution of the Aerosol Debris Cloud Produced by the 2009 Impact of an Object with Jupiter

    NASA Astrophysics Data System (ADS)

    Sanchez-Lavega, Agustin; Orton, G. S.; Hueso, R.; Pérez-Hoyos, S.; Fletcher, L. N.; Garcia-Melendo, E.; Gomez, J. M.; de Pater, I.; Wong, M.; Hammel, H. B.; Yanamandra-Fisher, P.; Simon-Miller, M.; Barrado-Izagirre, N.; Marchis, F.; Mousis, O.; Ortiz, J. L.; Garcia, J.; Cecconi, M.; Clarke, J. T.; Noll, K.; Pedraz, S.; Wesley, A.; McConnel, N.; Kalas, P.; Graham, J.; McKenzie, L.; Reddy, V.; Golisch, W.; Griep, D.; Sears, P.; International Outer Planet Watch (IOPW)

    2010-10-01

    We report the evolution of the cloud of aerosols produced in the atmosphere of Jupiter by the impact of an object on 19 July 2009 (Sánchez-Lavega et al., Astrophys. J. Lett. 715, L155, 2010). This study is based on images obtained with a battery of ground-based telescopes and the Hubble Space Telescope, in the visible and in the deep near-infrared absorption bands at 2.1-2.3 microns, from the impact date to 31 December 2009. The impact cloud expanded zonally from 5000 km (July 19) to 225,000 km (about 180 deg in longitude by 29 October) and remained meridionally localized within a latitude band from -53.5 deg to -61.5 deg. During the first two months it showed a heterogeneous structure with embedded spots 500 - 1000 km in size. The cloud was dispersed in longitude mainly by the dominant zonal winds and their meridional shear and, during the initial stages, by local motions perhaps originating in the thermal perturbation produced at the impact site. The tracking of individual spots within the impact cloud showed that the winds increase their eastward velocity with altitude above the tropopause by 5-10 m/s. We found evidence of discrete localized meridional motions in the equatorward direction with speeds of 1 - 2 m/s. Measurements of the cloud reflectivity over the whole period showed an exponential decrease with a characteristic time of 15 days, shorter than the 45 - 200 day sedimentation time for small aerosol particles in the stratosphere. A radiative transfer model of the cloud optical depth, coupled to an advection model of the cloud dispersion by the wind shears, reproduces this behavior. Acknowledgements: ASL, RH, SPH, NBI are supported by the Spanish MICIIN AYA2009-10701 with FEDER and Grupos Gobierno Vasco IT-464-07.
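
    The reported reflectivity decay corresponds to the law I(t) = I0 exp(-t/τ) with τ ≈ 15 days. A sketch that recovers this characteristic time by curve fitting; the data points are synthetic, generated for illustration only:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def decay(t, i0, tau):
        # Exponential reflectivity decrease with characteristic time tau (days).
        return i0 * np.exp(-t / tau)

    # Synthetic reflectivity samples consistent with a ~15-day decay.
    t_days = np.array([0, 10, 20, 40, 60, 90], dtype=float)
    reflectivity = 1.0 * np.exp(-t_days / 15.0) + np.random.normal(0, 0.005, t_days.size)

    (i0_fit, tau_fit), _ = curve_fit(decay, t_days, reflectivity, p0=(1.0, 20.0))
    print(f"fitted characteristic time: {tau_fit:.1f} days")  # ~15 days
    ```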

  14. Cognitive Workload and Sleep Restriction Interact to Influence Sleep Homeostatic Responses

    PubMed Central

    Goel, Namni; Abe, Takashi; Braun, Marcia E.; Dinges, David F.

    2014-01-01

    Study Objectives: Determine the effects of high versus moderate workload on sleep physiology and neurobehavioral measures, during sleep restriction (SR) and no sleep restriction (NSR) conditions. Design: Ten-night experiment involving cognitive workload and SR manipulations. Setting: Controlled laboratory environment. Participants: Sixty-three healthy adults (mean ± standard deviation: 33.2 ± 8.7 y; 29 females), age 22–50 y. Interventions: Following three baseline 8 h time in bed (TIB) nights, subjects were randomized to one of four conditions: high cognitive workload (HW) + SR; moderate cognitive workload (MW) + SR; HW + NSR; or MW + NSR. SR entailed 5 consecutive nights at 4 h TIB; NSR entailed 5 consecutive nights at 8 h TIB. Subjects received three workload test sessions/day consisting of 15-min preworkload assessments, followed by a 60-min (MW) or 120-min (HW) workload manipulation comprised of visually based cognitive tasks, and concluding with 15-min of postworkload assessments. Experimental nights were followed by two 8-h TIB recovery sleep nights. Polysomnography was collected on baseline night 3, experimental nights 1, 4, and 5, and recovery night 1 using three channels (central, frontal, occipital [C3, Fz, O2]). Measurements and Results: High workload, regardless of sleep duration, increased subjective fatigue and sleepiness (all P < 0.05). In contrast, sleep restriction produced cumulative increases in Psychomotor Vigilance Test (PVT) lapses, fatigue, and sleepiness and decreases in PVT response speed and Maintenance of Wakefulness Test (MWT) sleep onset latencies (all P < 0.05). High workload produced longer sleep onset latencies (P < 0.05, d = 0.63) and less wake after sleep onset (P < 0.05, d = 0.64) than moderate workload. Slow-wave energy—the putative marker of sleep homeostasis—was higher at O2 than C3 only in the HW + SR condition (P < 0.05). Conclusions: High cognitive workload delayed sleep onset, but it also promoted sleep homeostatic responses by increasing subjective fatigue and sleepiness, and producing a global sleep homeostatic response by reducing wake after sleep onset. When combined with sleep restriction, high workload increased local (occipital) sleep homeostasis, suggesting a use-dependent sleep response to visual work. We conclude that sleep restriction and cognitive workload interact to influence sleep homeostasis. Citation: Goel N, Abe T, Braun ME, Dinges DF. Cognitive workload and sleep restriction interact to influence sleep homeostatic responses. SLEEP 2014;37(11):1745-1756. PMID:25364070

  15. High-resolution Monte Carlo simulation of flow and conservative transport in heterogeneous porous media: 1. Methodology and flow results

    USGS Publications Warehouse

    Naff, R.L.; Haley, D.F.; Sudicky, E.A.

    1998-01-01

    In this, the first of two papers concerned with the use of numerical simulation to examine flow and transport parameters in heterogeneous porous media via Monte Carlo methods, various aspects of the modelling effort are examined. In particular, the need to save on core memory causes one to use only specific realizations that have certain initial characteristics; in effect, these transport simulations are conditioned by these characteristics. Also, the need to independently estimate length scales for the generated fields is discussed. The statistical uniformity of the flow field is investigated by plotting the variance of the seepage velocity for vector components in the x, y, and z directions. Finally, specific features of the velocity field itself are illuminated in this first paper. In particular, these data give one the opportunity to investigate the effective hydraulic conductivity in a flow field which is approximately statistically uniform; comparisons are made with first- and second-order perturbation analyses. The mean cloud velocity is examined to ascertain whether it is identical to the mean seepage velocity of the model. Finally, the variance in the cloud centroid velocity is examined for the effect of source size and differing strengths of local transverse dispersion.

  16. NASA TLA workload analysis support. Volume 3: FFD autopilot scenario validation data

    NASA Technical Reports Server (NTRS)

    Sundstrom, J. L.

    1980-01-01

    The data used to validate seven timeline analyses of the forward flight deck (FFD) autopilot mode for the pilot and copilot of the NASA B737 terminal configured vehicle are presented. Demand workloads are given in two forms: workload histograms and workload summaries (bar graphs). A report showing task length and task interaction is also presented.

  17. Student Workload and Assessment: Strategies to Manage Expectations and Inform Curriculum Development

    ERIC Educational Resources Information Center

    Scully, Glennda; Kerr, Rosemary

    2014-01-01

    This study reports the results of a survey of student study times and perceptions of workload in undergraduate and graduate accounting courses at a large Australian public university. The study was in response to student feedback expressing concerns about workload in courses. The presage factors of student workload and assessment in Biggs' 3P…

  18. Individual differences and subjective workload assessment - Comparing pilots to nonpilots

    NASA Technical Reports Server (NTRS)

    Vidulich, Michael A.; Pandit, Parimal

    1987-01-01

    Results from two groups of subjects, pilots and nonpilots, on two subjective workload assessment techniques (SWAT and NASA-TLX) intended to evaluate individual differences in the perception and reporting of subjective workload are compared with results obtained on several traditional personality tests. The personality tests were found to discriminate between the groups, while the workload tests did not. It is concluded that although the workload tests may provide useful information with respect to the interaction between tasks and personality, they are not effective as pure tests of individual differences.
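
    For reference, the conventional NASA-TLX overall score is a weighted average of six subscale ratings, with weights taken from 15 pairwise comparisons. A sketch with invented ratings and weights:

    ```python
    # Six NASA-TLX subscales, each rated 0-100 (values invented).
    ratings = {
        "mental": 70, "physical": 30, "temporal": 60,
        "performance": 40, "effort": 65, "frustration": 35,
    }
    # Weights: how many of the 15 pairwise comparisons each subscale won.
    weights = {
        "mental": 4, "physical": 1, "temporal": 3,
        "performance": 2, "effort": 4, "frustration": 1,
    }
    assert sum(weights.values()) == 15  # 15 pairwise comparisons in total

    # Overall weighted workload score (0-100).
    tlx = sum(ratings[k] * weights[k] for k in ratings) / 15
    print(f"NASA-TLX weighted workload: {tlx:.1f}")
    ```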

  19. The workload book: Assessment of operator workload to engineering systems

    NASA Technical Reports Server (NTRS)

    Gopher, D.

    1983-01-01

    The structure and initial work performed toward the creation of a handbook for workload analysis, directed at the operational community of engineers and human factors psychologists, are described. When complete, the handbook will make accessible to such individuals the results of theoretically based research that are of practical interest and utility in the analysis and prediction of operator workload in advanced and existing systems. In addition, the results of a laboratory study focused on the development of a subjective workload rating technique based on psychophysical scaling are described.

  20. Neutron beam irradiation study of workload dependence of SER in a microprocessor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michalak, Sarah E; Graves, Todd L; Hong, Ted

    It is known that workloads are an important factor in soft error rates (SER), but it is proving difficult to find differentiating workloads for microprocessors. We have performed neutron beam irradiation studies of a commercial microprocessor under a wide variety of workload conditions, from idle (performing no operations) to very busy workloads resembling real HPC, graphics, and business applications. There is evidence that the mean times to first indication of failure (MTFIF, defined in Section II) may differ for some of the applications.
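
    MTFIF is defined in the paper's Section II, which is not reproduced here. Purely as an illustration, a sketch that estimates a mean time to first indication of failure per workload from hypothetical beam-test observations:

    ```python
    import numpy as np

    # Hypothetical times (hours of accelerated beam exposure) to the first
    # indication of failure, for two workloads; all values are invented.
    idle = np.array([12.1, 9.8, 15.3, 11.0, 13.7])
    busy = np.array([6.2, 7.5, 5.1, 8.0, 6.6])

    for name, t in (("idle", idle), ("busy", busy)):
        mean = t.mean()
        sem = t.std(ddof=1) / np.sqrt(t.size)  # standard error of the mean
        print(f"{name}: MTFIF ~ {mean:.1f} h (+/- {1.96 * sem:.1f} h, ~95% CI)")
    ```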

  1. Evolution of the ATLAS PanDA Production and Distributed Analysis System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maeno, T.; De, K.; Wenaus, T.

    2012-12-13

    The PanDA (Production and Distributed Analysis) system has been developed to meet ATLAS production and analysis requirements for a data-driven workload management system capable of operating at LHC data processing scale. PanDA has performed well with high reliability and robustness during the two years of LHC data-taking, while being actively evolved to meet the rapidly changing requirements for analysis use cases. We will present an overview of system evolution including automatic rebrokerage and reattempt for analysis jobs, adaptation for the CernVM File System, support for the multi-cloud model through which Tier-2 sites act as members of multiple clouds, pledged resource management and preferential brokerage, and monitoring improvements. We will also describe results from the analysis of two years of PanDA usage statistics, current issues, and plans for the future.

  2. Running Neuroimaging Applications on Amazon Web Services: How, When, and at What Cost?

    PubMed

    Madhyastha, Tara M; Koh, Natalie; Day, Trevor K M; Hernández-Fernández, Moises; Kelley, Austin; Peterson, Daniel J; Rajan, Sabreena; Woelfer, Karl A; Wolf, Jonathan; Grabowski, Thomas J

    2017-01-01

    The contribution of this paper is to identify and describe current best practices for using Amazon Web Services (AWS) to execute neuroimaging workflows "in the cloud." Neuroimaging offers a vast set of techniques by which to interrogate the structure and function of the living brain. However, many of the scientists for whom neuroimaging is an extremely important tool have limited training in parallel computation. At the same time, the field is experiencing a surge in computational demands, driven by a combination of data-sharing efforts, improvements in scanner technology that allow acquisition of images with higher image resolution, and by the desire to use statistical techniques that stress processing requirements. Most neuroimaging workflows can be executed as independent parallel jobs and are therefore excellent candidates for running on AWS, but the overhead of learning to do so and determining whether it is worth the cost can be prohibitive. In this paper we describe how to identify neuroimaging workloads that are appropriate for running on AWS, how to benchmark execution time, and how to estimate cost of running on AWS. By benchmarking common neuroimaging applications, we show that cloud computing can be a viable alternative to on-premises hardware. We present guidelines that neuroimaging labs can use to provide a cluster-on-demand type of service that should be familiar to users, and scripts to estimate cost and create such a cluster.
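
    The cost-estimation step the authors describe reduces to simple arithmetic once one job has been benchmarked. A sketch with invented prices and run times; the instance price and job counts below are assumptions, not the paper's figures:

    ```python
    # Back-of-envelope cloud cost estimate for an embarrassingly parallel
    # neuroimaging workload: independent jobs, each benchmarked once.
    def estimate_cost(n_jobs, hours_per_job, price_per_instance_hour,
                      jobs_per_instance=1):
        instance_hours = n_jobs * hours_per_job / jobs_per_instance
        return instance_hours * price_per_instance_hour

    # Example: 500 subjects, 3 h of preprocessing each, on a hypothetical
    # instance type priced at $0.40/h running one job at a time.
    print(f"${estimate_cost(500, 3.0, 0.40):.2f}")  # -> $600.00
    ```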

  3. Comparative evaluation of workload estimation techniques in piloting tasks

    NASA Technical Reports Server (NTRS)

    Wierwille, W. W.

    1983-01-01

    Techniques to measure operator workload in a wide range of situations and tasks were examined. The sensitivity and intrusion of a wide variety of workload assessment techniques in simulated piloting tasks were investigated. Four different piloting tasks, covering the psychomotor, perceptual, mediational, and communication aspects of piloting behavior, were selected, and techniques to determine relative sensitivity and intrusion were applied. Sensitivity is the relative ability of a workload estimation technique to discriminate statistically significant differences in operator loading; high sensitivity requires discriminable changes in score means as a function of load level and low variation of the scores about the means. Intrusion is an undesirable change in the task for which workload is measured, resulting from the introduction of the workload estimation technique or apparatus.

  4. Entrainment versus Dilution in Tropical Deep Convection

    DOE PAGES

    Hannah, Walter M.

    2017-11-01

    In this paper, the distinction between entrainment and dilution is investigated with cloud-resolving simulations of deep convection in a tropical environment. A method for estimating the rate of dilution by entrainment and detrainment is presented and calculated for a series of bubble simulations with a range of initial radii. Entrainment generally corresponds to dilution of convection, but the two quantities are not well correlated. Core dilution by entrainment is significantly reduced by the presence of a shell of moist air around the core. Dilution by entrainment also increases with increasing updraft velocity but only for sufficiently strong updrafts. Entrainment contributes significantly to the total net dilution, but detrainment and the various source/sink terms play large roles depending on the variable in question. Detrainment has a concentrating effect on average that balances out the dilution by entrainment. The experiments are also used to examine whether entrainment or dilution scale with cloud radius. The results support a weak negative relationship for dilution but not for entrainment. The sensitivity to resolution is briefly discussed. A toy Lagrangian thermal model is used to demonstrate the importance of the cloud shell as a thermodynamic buffer to reduce the dilution of the core by entrainment. Finally, the results suggest that explicit cloud heterogeneity may be a useful consideration for future convective parameterization development.

  5. Toward a Proof of Concept Cloud Framework for Physics Applications on Blue Gene Supercomputers

    NASA Astrophysics Data System (ADS)

    Dreher, Patrick; Scullin, William; Vouk, Mladen

    2015-09-01

    Traditional high performance supercomputers are capable of delivering large sustained state-of-the-art computational resources to physics applications over extended periods of time using batch processing mode operating environments. However, today there is an increasing demand for more complex workflows that involve large fluctuations in the levels of HPC physics computational requirements during the simulations. Some of the workflow components may also require a richer set of operating system features and schedulers than normally found in a batch oriented HPC environment. This paper reports on progress toward a proof of concept design that implements a cloud framework onto BG/P and BG/Q platforms at the Argonne Leadership Computing Facility. The BG/P implementation utilizes the Kittyhawk utility and the BG/Q platform uses an experimental heterogeneous FusedOS operating system environment. Both platforms use the Virtual Computing Laboratory as the cloud computing system embedded within the supercomputer. This proof of concept design allows a cloud to be configured so that it can capitalize on the specialized infrastructure capabilities of a supercomputer and the flexible cloud configurations without resorting to virtualization. Initial testing of the proof of concept system is done using the lattice QCD MILC code. These types of user reconfigurable environments have the potential to deliver experimental schedulers and operating systems within a working HPC environment for physics computations that may be different from the native OS and schedulers on production HPC supercomputers.

  6. Mental workload during n-back task-quantified in the prefrontal cortex using fNIRS.

    PubMed

    Herff, Christian; Heger, Dominic; Fortmann, Ole; Hennrich, Johannes; Putze, Felix; Schultz, Tanja

    2013-01-01

    When interacting with technical systems, users experience mental workload. Particularly in multitasking scenarios (e.g., interacting with the car navigation system while driving) it is desired to not distract the users from their primary task. For such purposes, human-machine interfaces (HCIs) are desirable which continuously monitor the users' workload and dynamically adapt the behavior of the interface to the measured workload. While memory tasks have been shown to elicit hemodynamic responses in the brain when averaging over multiple trials, a robust single trial classification is a crucial prerequisite for the purpose of dynamically adapting HCIs to the workload of its user. The prefrontal cortex (PFC) plays an important role in the processing of memory and the associated workload. In this study of 10 subjects, we used functional Near-Infrared Spectroscopy (fNIRS), a non-invasive imaging modality, to sample workload activity in the PFC. The results show up to 78% accuracy for single-trial discrimination of three levels of workload from each other. We use an n-back task (n ∈ {1, 2, 3}) to induce different levels of workload, forcing subjects to continuously remember the last one, two, or three of rapidly changing items. Our experimental results show that measuring hemodynamic responses in the PFC with fNIRS, can be used to robustly quantify and classify mental workload. Single trial analysis is still a young field that suffers from a general lack of standards. To increase comparability of fNIRS methods and results, the data corpus for this study is made available online.
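
    A generic sketch of the kind of single-trial, three-level workload classification reported above, on synthetic stand-in features; the study's actual features and classifier may differ:

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Hypothetical single-trial features (e.g., mean hemodynamic changes
    # over PFC channels) labeled 1/2/3 for n-back level; data are synthetic.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(loc=mu, size=(60, 16)) for mu in (0.0, 0.3, 0.6)])
    y = np.repeat([1, 2, 3], 60)

    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"cross-validated accuracy: {scores.mean():.2f}")
    ```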

  7. Mental workload during n-back task—quantified in the prefrontal cortex using fNIRS

    PubMed Central

    Herff, Christian; Heger, Dominic; Fortmann, Ole; Hennrich, Johannes; Putze, Felix; Schultz, Tanja

    2014-01-01

    When interacting with technical systems, users experience mental workload. Particularly in multitasking scenarios (e.g., interacting with the car navigation system while driving) it is desired to not distract the users from their primary task. For such purposes, human-machine interfaces (HCIs) are desirable which continuously monitor the users' workload and dynamically adapt the behavior of the interface to the measured workload. While memory tasks have been shown to elicit hemodynamic responses in the brain when averaging over multiple trials, a robust single trial classification is a crucial prerequisite for the purpose of dynamically adapting HCIs to the workload of its user. The prefrontal cortex (PFC) plays an important role in the processing of memory and the associated workload. In this study of 10 subjects, we used functional Near-Infrared Spectroscopy (fNIRS), a non-invasive imaging modality, to sample workload activity in the PFC. The results show up to 78% accuracy for single-trial discrimination of three levels of workload from each other. We use an n-back task (n ∈ {1, 2, 3}) to induce different levels of workload, forcing subjects to continuously remember the last one, two, or three of rapidly changing items. Our experimental results show that measuring hemodynamic responses in the PFC with fNIRS, can be used to robustly quantify and classify mental workload. Single trial analysis is still a young field that suffers from a general lack of standards. To increase comparability of fNIRS methods and results, the data corpus for this study is made available online. PMID:24474913

  8. Quantitative EEG patterns of differential in-flight workload

    NASA Technical Reports Server (NTRS)

    Sterman, M. B.; Mann, C. A.; Kaiser, D. A.

    1993-01-01

    Four test pilots were instrumented for in-flight EEG recordings using a custom portable recording system. Each flew six, two minute tracking tasks in the Calspan NT-33 experimental trainer at Edwards AFB. With the canopy blacked out, pilots used a HUD display to chase a simulated aircraft through a random flight course. Three configurations of flight controls altered the flight characteristics to achieve low, moderate, and high workload, as determined by normative Cooper-Harper ratings. The test protocol was administered by a command pilot in the back seat. Corresponding EEG and tracking data were compared off-line. Tracking performance was measured as deviation from the target aircraft and combined with control difficulty to achieve an estimate of 'cognitive workload'. Trended patterns of parietal EEG activity at 8-12 Hz were sorted according to this classification. In all cases, high workload produced a significantly greater suppression of 8-12 Hz activity than low workload. Further, a clear differentiation of EEG trend patterns was obtained in 80 percent of the cases. High workload produced a sustained suppression of 8-12 Hz activity, while moderate workload resulted in an initial suppression followed by a gradual increment. Low workload was associated with a modulated pattern lacking any periods of marked or sustained suppression. These findings suggest that quantitative analysis of appropriate EEG measures may provide an objective and reliable in-flight index of cognitive effort that could facilitate workload assessment.
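
    One conventional way to quantify the 8-12 Hz suppression tracked in this study is band power from a Welch spectrum. A sketch on a synthetic parietal-like signal; the sampling rate and band limits are common choices assumed here, not taken from the paper:

    ```python
    import numpy as np
    from scipy.signal import welch

    def alpha_band_power(eeg, fs):
        # Mean power in the 8-12 Hz band, the measure whose suppression
        # tracked workload in the study; eeg is a 1-D signal, fs in Hz.
        freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
        band = (freqs >= 8) & (freqs <= 12)
        return psd[band].mean()

    # Synthetic parietal-like signal: 10 Hz rhythm plus noise, 256 Hz sampling.
    fs = 256
    t = np.arange(0, 120, 1 / fs)
    eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
    print(alpha_band_power(eeg, fs))
    ```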

  9. Nurse-patient assignment models considering patient acuity metrics and nurses' perceived workload.

    PubMed

    Sir, Mustafa Y; Dundar, Bayram; Barker Steege, Linsey M; Pasupathy, Kalyan S

    2015-06-01

    Patient classification systems (PCSs) are commonly used in nursing units to assess how many nursing care hours are needed to care for patients. These systems then provide staffing and nurse-patient assignment recommendations for a given patient census based on these acuity scores. Our hypothesis is that such systems do not accurately capture workload, and we conducted an experiment to test this hypothesis. Specifically, we conducted a survey study to capture nurses' perception of workload in an inpatient unit. Forty-five nurses from oncology and surgery units completed the survey and rated the impact of patient acuity indicators on their perceived workload using a six-point Likert scale. These ratings were used to calculate a workload score for an individual nurse given a set of patient acuity indicators. The approach offers optimization models (prescriptive analytics), which use patient acuity indicators from a commercial PCS as well as a survey-based nurse workload score. The models assign patients to nurses in a balanced manner by distributing acuity scores from the PCS and survey-based perceived workload. Numerical results suggest that the proposed nurse-patient assignment models achieve a balanced assignment and lower overall survey-based perceived workload compared to the assignment based solely on acuity scores from the PCS. This yields an improvement in perceived workload of upwards of five percent. Copyright © 2015 Elsevier Inc. All rights reserved.
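
    The paper formulates exact optimization models; as a lightweight stand-in that shows the balancing idea, a greedy longest-processing-time heuristic over invented acuity/perceived-workload scores:

    ```python
    import heapq

    def balanced_assignment(patient_scores, n_nurses):
        # Always give the next-heaviest patient to the nurse with the
        # lowest accumulated score; scores could be PCS acuity points or
        # survey-based perceived workload (all numbers here are invented).
        heap = [(0.0, nurse) for nurse in range(n_nurses)]
        heapq.heapify(heap)
        assignment = {nurse: [] for nurse in range(n_nurses)}
        for patient, score in sorted(enumerate(patient_scores),
                                     key=lambda x: -x[1]):
            load, nurse = heapq.heappop(heap)
            assignment[nurse].append(patient)
            heapq.heappush(heap, (load + score, nurse))
        return assignment

    print(balanced_assignment([8, 5, 7, 3, 6, 4, 9, 2], n_nurses=3))
    ```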

  10. School Nurse Workload: A Scoping Review of Acute Care, Community Health, and Mental Health Nursing Workload Literature

    ERIC Educational Resources Information Center

    Endsley, Patricia

    2017-01-01

    The purpose of this scoping review was to survey the most recent (5 years) acute care, community health, and mental health nursing workload literature to understand themes and research avenues that may be applicable to school nursing workload research. The search for empirical and nonempirical literature was conducted using search engines such as…

  11. ANALYSIS OF INPATIENT HOSPITAL STAFF MENTAL WORKLOAD BY MEANS OF DISCRETE-EVENT SIMULATION

    DTIC Science & Technology

    2016-03-24

    ANALYSIS OF INPATIENT HOSPITAL STAFF MENTAL WORKLOAD BY MEANS OF DISCRETE-EVENT SIMULATION (AFIT-ENV-MS-16-M-166). Distribution unlimited.

  12. Role of Academic Managers in Workload and Performance Management of Academic Staff: A Case Study

    ERIC Educational Resources Information Center

    Graham, Andrew T.

    2016-01-01

    This small-scale case study focused on academic managers to explore the ways in which they control the workload of academic staff and the extent to which they use the workload model in performance management of academic staff. The links that exist between the workload and performance management were explored to confirm or refute the conceptual…

  13. A Scheduling Algorithm for Computational Grids that Minimizes Centralized Processing in Genome Assembly of Next-Generation Sequencing Data

    PubMed Central

    Lima, Jakelyne; Cerdeira, Louise Teixeira; Bol, Erick; Schneider, Maria Paula Cruz; Silva, Artur; Azevedo, Vasco; Abelém, Antônio Jorge Gomes

    2012-01-01

    Improvements in genome sequencing techniques have resulted in the generation of huge volumes of data. As a consequence of this progress, the genome assembly stage demands even more computational power, since the incoming sequence files contain large amounts of data. To speed up the process, it is often necessary to distribute the workload among a group of machines; however, this requires hardware and software solutions specially configured for this purpose. Grid computing tries to simplify this process of aggregating resources, but does not always offer the best possible performance, due to the heterogeneity and decentralized management of its resources. Thus, it is necessary to develop software that takes these peculiarities into account. To achieve this purpose, we developed an algorithm that optimizes the operation of the de novo assembly software ABySS in grids. We ran ABySS with and without our algorithm in the grid simulator SimGrid. Tests showed that the algorithm is viable, flexible, and scalable even in a heterogeneous environment, and that it improved genome assembly time in computational grids without changing the quality of the assembly. PMID:22461785

  14. Iodophenylpentadecanoic acid-myocardial blood flow relationship during maximal exercise with coronary occlusion.

    PubMed

    Caldwell, J H; Martin, G V; Link, J M; Krohn, K A; Bassingthwaighte, J B

    1990-01-01

    Imaging 123I-labeled iodophenylpentadecanoic acid (IPPA) uptake and clearance from the myocardium following exercise has been advocated as a means of detecting myocardial ischemia because fatty acid deposition is enhanced and clearance prolonged in regions of low flow. However, normal regional myocardial blood flows are markedly heterogeneous, and it is not known how this heterogeneity affects regional metabolism or substrate uptake and thus image interpretation. In five instrumented dogs running at near maximal workload on a treadmill, 131I-labeled IPPA and 15-micron 46Sc microspheres were injected into the left atrium after 30 sec of circumflex coronary artery occlusion. Microsphere and IPPA activity were determined in 250 mapped pieces of myocardium of approximately 400 mg. Myocardial blood flows (from microspheres) ranged from 0.05 to 7.6 ml/min/g. Deposition of IPPA was proportional to regional flows (r = 0.83) with an average retention of 25%. The mean endocardial-epicardial ratio for IPPA (0.90 +/- 0.43) was similar to that for microspheres (0.94 +/- 0.47; p = 0.08). Thus, initial IPPA deposition during treadmill exercise increases in proportion to regional myocardial blood flow over a range of flows from very low to five times normal.

  15. Activity-based differentiation of pathologists' workload in surgical pathology.

    PubMed

    Meijer, G A; Oudejans, J J; Koevoets, J J M; Meijer, C J L M

    2009-06-01

    Adequate budget control in pathology practice requires accurate allocation of resources. Any changes in types and numbers of specimens handled or protocols used will directly affect the pathologists' workload and consequently the allocation of resources. The aim of the present study was to develop a model for measuring the pathologists' workload that can take into account the changes mentioned above. The diagnostic process was analyzed and broken up into separate activities, and the time needed to perform these activities was measured. Based on linear regression analysis, for each activity, the time needed was calculated as a function of the number of slides or blocks involved. The total pathologists' time required for a range of specimens was calculated based on standard protocols and validated by comparison with the actually measured workload. Cutting up, microscopic procedures and dictating turned out to be highly correlated with the number of blocks and/or slides per specimen. Calculated workload per type of specimen was significantly correlated with the actually measured workload. Modeling pathologists' workload with formulas that express workload per type of specimen as a function of the number of blocks and slides provides a basis for a comprehensive, yet flexible, activity-based costing system for pathology.
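
    A sketch of the per-activity regression underlying the model: fit time as a linear function of slide count, then predict the workload of a standard protocol. All measurements below are invented:

    ```python
    import numpy as np

    # Hypothetical measurements: minutes spent on the microscopic
    # procedure versus number of slides, per specimen.
    slides = np.array([1, 2, 3, 4, 6, 8, 10])
    minutes = np.array([3.1, 5.2, 7.4, 9.0, 13.5, 17.8, 22.1])

    # Linear regression: time = a * slides + b, as in the study's
    # per-activity workload formulas.
    a, b = np.polyfit(slides, minutes, 1)
    print(f"time ~ {a:.2f} * slides + {b:.2f} min")

    # Workload for a specimen type handled with a standard 5-slide protocol.
    print(f"predicted: {a * 5 + b:.1f} min")
    ```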

  16. The dissociation of subjective measures of mental workload and performance

    NASA Technical Reports Server (NTRS)

    Yeh, Y. H.; Wickens, C. D.

    1984-01-01

    Dissociation between performance and subjective workload measures was investigated in the theoretical framework of the multiple resources model. Subjective measures do not preserve the vector characteristics in the multidimensional space described by the model. A theory of dissociation was proposed to locate the sources that may produce dissociation between the two workload measures. According to the theory, performance is affected by every aspect of processing whereas subjective workload is sensitive to the amount of aggregate resource investment and is dominated by the demands on the perceptual/central resources. The proposed theory was tested in three experiments. Results showed that performance improved but subjective workload was elevated with an increasing amount of resource investment. Furthermore, subjective workload was not as sensitive as was performance to differences in the amount of resource competition between two tasks. The demand on perceptual/central resources was found to be the most salient component of subjective workload. Dissociation occurred when the demand on this component was increased by the number of concurrent tasks or by the number of display elements. However, demands on response resources were weighted in subjective introspection as much as demands on perceptual/central resources. The implications of these results for workload practitioners are described.

  17. Modified Petri net model sensitivity to workload manipulations

    NASA Technical Reports Server (NTRS)

    White, S. A.; Mackinnon, D. P.; Lyman, J.

    1986-01-01

    Modified Petri Nets (MPNs) are investigated as a workload modeling tool. The results of an exploratory study of the sensitivity of MPNs to workload manipulations in a dual task are described. Petri nets have been used to represent systems with asynchronous, concurrent and parallel activities (Peterson, 1981). These characteristics led some researchers to suggest the use of Petri nets in workload modeling, where concurrent and parallel activities are common. Petri nets are represented by places and transitions; in the workload application, places represent operator activities and transitions represent events. MPNs have been used to formally represent the task events and activities of a human operator in a man-machine system, and some descriptive applications demonstrate the usefulness of MPNs in the formal representation of systems. It is the general hypothesis herein that, in addition to descriptive applications, MPNs may be useful for workload estimation and prediction. The results are reported of the first of a series of experiments designed to develop and test an MPN system of workload estimation and prediction. This first experiment is a screening test of the MPN model's general sensitivity to changes in workload. Positive results from this experiment will justify the more complicated analyses and techniques necessary for developing a workload prediction system.
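
    For readers unfamiliar with the formalism, a minimal Petri-net firing sketch in the MPN reading (places as operator activities, transitions as events); the net structure and names are invented:

    ```python
    def enabled(marking, transition):
        # A transition may fire only when every input place holds a token.
        return all(marking[p] > 0 for p in transition["inputs"])

    def fire(marking, transition):
        # Firing consumes one token from each input place and produces
        # one token in each output place.
        m = dict(marking)
        for p in transition["inputs"]:
            m[p] -= 1
        for p in transition["outputs"]:
            m[p] += 1
        return m

    marking = {"monitoring": 1, "alert_pending": 1, "responding": 0}
    t_respond = {"inputs": ["monitoring", "alert_pending"],
                 "outputs": ["responding"]}
    if enabled(marking, t_respond):
        marking = fire(marking, t_respond)
    print(marking)  # {'monitoring': 0, 'alert_pending': 0, 'responding': 1}
    ```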

  18. Assessment of a neonatal unit nursing staff: application of the Nursing Activities Score.

    PubMed

    Nunes, Bruna Kosar; Toma, Edi

    2013-02-01

    The study analyzes the nursing staff workload in the sectors of a neonatal unit by means of the Nursing Activities Score (NAS) and calculates the ideal team size, comparing it with the current workload. The NAS tool was applied to all newborns hospitalized for at least 24 hours; the sum of the NAS points provided the unit workload, which was used to calculate the required team size by means of a mathematical equation. The Low Risk sector presented a workload of 267 NAS points and a daily imbalance of 8.8 professionals; the Medium Risk sector a workload of 446.7 and an imbalance of 22.3; the High Risk sector a workload of 359 and a deficit of 17.9; the Isolation sector a demand of 609 and an imbalance of 18.2; and the NICU a workload of 568.6 with a deficit of 16.1 professionals. The study disclosed an important shortfall of professionals in relation to the high work demand to which the teams are subjected daily. The application of the Nursing Activities Score in neonatal units contributes to the evaluation of the workload and to the sizing of the nursing team.
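
    A sketch of the staffing calculation, under the common convention that each NAS point represents about 1% of one professional's working time; this convention is an assumption here, and the study's exact equation is not reproduced. All numbers are hypothetical:

    ```python
    def required_staff(nas_total):
        # Assumption: each NAS point ~ 1% of one professional's time,
        # so required staff ~= total NAS / 100.
        return nas_total / 100.0

    # Hypothetical unit: per-patient NAS scores summed over a day.
    patient_scores = [68.2, 71.5, 90.3, 55.0, 62.8, 84.1]
    needed = required_staff(sum(patient_scores))
    current = 3
    print(f"needed {needed:.1f}, available {current}, "
          f"imbalance {needed - current:.1f}")
    ```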

  19. Perceived vs. measured effects of advanced cockpit systems on pilot workload and error: are pilots' beliefs misaligned with reality?

    PubMed

    Casner, Stephen M

    2009-05-01

    Four types of advanced cockpit systems were tested in an in-flight experiment for their effect on pilot workload and error. Twelve experienced pilots flew conventional cockpit and advanced cockpit versions of the same make and model airplane. In both airplanes, the experimenter dictated selected combinations of cockpit systems for each pilot to use while soliciting subjective workload measures and recording any errors that pilots made. The results indicate that the use of a GPS navigation computer helped reduce workload and errors during some phases of flight but raised them in others. Autopilots helped reduce some aspects of workload in the advanced cockpit airplane but did not appear to reduce workload in the conventional cockpit. Electronic flight and navigation instruments appeared to have no effect on workload or error. Despite this modest showing for advanced cockpit systems, pilots stated an overwhelming preference for using them during all phases of flight.

  20. Approximate entropy: a new evaluation approach of mental workload under multitask conditions

    NASA Astrophysics Data System (ADS)

    Yao, Lei; Li, Xiaoling; Wang, Wei; Dong, Yuanzhe; Jiang, Ying

    2014-04-01

    There are numerous instruments and an abundance of complex information in the traditional cockpit display-control system, and pilots require a long time to familiarize themselves with the cockpit interface. This can cause accidents when they cope with emergency events, suggesting that it is necessary to evaluate pilot cognitive workload. In order to establish a simplified method to evaluate cognitive workload under multitask conditions, we designed a series of experiments involving different instrument panels and collected electroencephalograms (EEG) from 10 healthy volunteers. The data were classified and analyzed with approximate entropy (ApEn) signal processing. ApEn increased with increasing experiment difficulty, suggesting that it can be used to evaluate cognitive workload. Our results demonstrate that ApEn can be used as an evaluation criterion of cognitive workload with good specificity and sensitivity. Moreover, we determined an empirical formula to assess the cognitive workload interval, which can simplify cognitive workload evaluation under multitask conditions.
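
    ApEn itself is a standard algorithm (Pincus' approximate entropy). A compact NumPy sketch of the quantity computed over signal segments, using the usual defaults m = 2 and r = 0.2·SD, which are not necessarily the paper's parameters:

    ```python
    import numpy as np

    def apen(x, m=2, r=None):
        # Approximate entropy of a 1-D signal: phi(m) - phi(m+1), where
        # phi counts near-matching length-m templates (Chebyshev distance).
        x = np.asarray(x, dtype=float)
        if r is None:
            r = 0.2 * x.std()
        def phi(m):
            n = len(x) - m + 1
            emb = np.array([x[i:i + m] for i in range(n)])
            d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
            c = (d <= r).sum(axis=1) / n  # self-matches included, so c > 0
            return np.log(c).mean()
        return phi(m) - phi(m + 1)

    rng = np.random.default_rng(0)
    print(apen(np.sin(np.linspace(0, 20, 400))))  # low: regular signal
    print(apen(rng.standard_normal(400)))         # higher: irregular signal
    ```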
