Sample records for computing capacity resource

  1. NASA Center for Computational Sciences: History and Resources

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The NASA Center for Computational Sciences (NCCS) has been a leading capacity computing facility, providing a production environment and support resources to address the challenges facing the Earth and space sciences research community.

  2. Polyphony: A Workflow Orchestration Framework for Cloud Computing

    NASA Technical Reports Server (NTRS)

    Shams, Khawaja S.; Powell, Mark W.; Crockett, Tom M.; Norris, Jeffrey S.; Rossi, Ryan; Soderstrom, Tom

    2010-01-01

    Cloud Computing has delivered unprecedented compute capacity to NASA missions at affordable rates. Missions like the Mars Exploration Rovers (MER) and Mars Science Laboratory (MSL) are enjoying the elasticity that enables them to leverage hundreds, if not thousands, of machines for short durations without making any hardware procurements. In this paper, we describe Polyphony, a resilient, scalable, and modular framework that efficiently leverages a large set of computing resources to perform parallel computations. Polyphony can employ resources on the cloud, excess capacity on local machines, as well as spare resources at a supercomputing center, and it enables these resources to work in concert to accomplish a common goal. Polyphony is resilient to node failures, even if they occur in the middle of a transaction. We conclude with an evaluation of a production-ready application built on top of Polyphony to perform image-processing operations on images from around the solar system, including Mars, Saturn, and Titan.
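
    The abstract describes the general pattern of pooling heterogeneous workers behind a shared task queue and re-queuing tasks whose worker disappears mid-transaction. The sketch below illustrates that pattern only; it is not Polyphony's implementation, and all names, failure rates, and retry limits are made-up examples.

      # Toy sketch of the fault-tolerant work-queue pattern described above (not Polyphony itself).
      # Workers pull tasks from a shared queue; a task whose worker "fails" is re-queued and retried.
      import queue
      import random
      import threading

      task_queue = queue.Queue()          # shared pool of (task_id, attempts) tuples
      results = {}
      results_lock = threading.Lock()

      def process_image(task_id):
          """Stand-in for an image-processing step; sometimes 'fails' like a lost node."""
          if random.random() < 0.2:
              raise RuntimeError(f"worker lost while processing task {task_id}")
          return task_id * 0.5            # pretend result

      def worker():
          while True:
              try:
                  task_id, attempts = task_queue.get(timeout=1)
              except queue.Empty:
                  return                  # no more work
              try:
                  value = process_image(task_id)
                  with results_lock:
                      results[task_id] = value
              except RuntimeError:
                  if attempts < 3:        # node failure mid-transaction: hand the task to another worker
                      task_queue.put((task_id, attempts + 1))

      if __name__ == "__main__":
          for tid in range(20):
              task_queue.put((tid, 0))
          threads = [threading.Thread(target=worker) for _ in range(4)]
          for t in threads:
              t.start()
          for t in threads:
              t.join()
          print(f"completed {len(results)} of 20 tasks")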

  3. Opportunistic Capacity-Based Resource Allocation for Chunk-Based Multi-Carrier Cognitive Radio Sensor Networks

    PubMed Central

    Huang, Jie; Zeng, Xiaoping; Jian, Xin; Tan, Xiaoheng; Zhang, Qi

    2017-01-01

    The spectrum allocation for cognitive radio sensor networks (CRSNs) has received considerable research attention under the assumption that the spectrum environment is static. In practice, however, the spectrum environment varies over time due to primary user/secondary user (PU/SU) activity and mobility, resulting in time-varying spectrum resources. This paper studies resource allocation for chunk-based multi-carrier CRSNs with time-varying spectrum resources. We present a novel opportunistic capacity model based on a continuous-time semi-Markov chain (CTSMC) to describe the time-varying spectrum resources of chunks and, building on it, propose a joint power and chunk allocation model that accounts for the opportunistically available capacity of chunks. To reduce the computational complexity, we split this model into two sub-problems and solve them via the Lagrangian dual method. Simulation results illustrate that the proposed opportunistic capacity-based resource allocation algorithm achieves better performance than traditional algorithms when the spectrum environment is time-varying. PMID:28106803

  4. Networked Microcomputers--The Next Generation in College Computing.

    ERIC Educational Resources Information Center

    Harris, Albert L.

    The evolution of computer hardware for college computing has mirrored the industry's growth. When computers were introduced into the educational environment, they had limited capacity and served one user at a time. Then came large mainframes with many terminals sharing the resource. Next, the use of computers in office automation emerged. As…

  5. The Tractable Cognition Thesis

    ERIC Educational Resources Information Center

    van Rooij, Iris

    2008-01-01

    The recognition that human minds/brains are finite systems with limited resources for computation has led some researchers to advance the "Tractable Cognition thesis": Human cognitive capacities are constrained by computational tractability. This thesis, if true, serves cognitive psychology by constraining the space of computational-level theories…

  6. Framework Resources Multiply Computing Power

    NASA Technical Reports Server (NTRS)

    2010-01-01

    As an early proponent of grid computing, Ames Research Center awarded Small Business Innovation Research (SBIR) funding to 3DGeo Development Inc., of Santa Clara, California, (now FusionGeo Inc., of The Woodlands, Texas) to demonstrate a virtual computer environment that linked geographically dispersed computer systems over the Internet to help solve large computational problems. By adding to an existing product, FusionGeo enabled access to resources for calculation- or data-intensive applications whenever and wherever they were needed. Commercially available as Accelerated Imaging and Modeling, the product is used by oil companies and seismic service companies, which require large processing and data storage capacities.

  7. 77 FR 66729 - National Oil and Hazardous Substances Pollution Contingency Plan; Revision To Increase Public...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-07

    ... technology, to include computer telecommunications or other electronic means, that the lead agency is... assess the capacity and resources of the public to utilize and maintain an electronic- or computer... the technology, to include computer telecommunications or other electronic means, that the lead agency...

  8. HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation

    DOE PAGES

    Holzman, Burt; Bauerdick, Lothar A. T.; Bockelman, Brian; ...

    2017-09-29

    Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. Additionally, we will discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.

  9. HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holzman, Burt; Bauerdick, Lothar A. T.; Bockelman, Brian

    Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. Additionally, we will discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.

  10. Future Approach to tier-0 extension

    NASA Astrophysics Data System (ADS)

    Jones, B.; McCance, G.; Cordeiro, C.; Giordano, D.; Traylen, S.; Moreno García, D.

    2017-10-01

    The current tier-0 processing at CERN is done on two managed sites, the CERN computer centre and the Wigner computer centre. With the proliferation of public cloud resources at increasingly competitive prices, we have been investigating how to transparently increase our compute capacity to include these providers. The approach taken has been to integrate these resources using our existing deployment and computer management tools and to provide them in a way that exposes them to users as part of the same site. The paper will describe the architecture, the toolset and the current production experiences of this model.

  11. Infrastructure Systems for Advanced Computing in E-science applications

    NASA Astrophysics Data System (ADS)

    Terzo, Olivier

    2013-04-01

    In the e-science field there is a growing need for computing infrastructure that is more dynamic and customizable, with an on-demand model of use that matches the exact request in terms of resources and storage capacity. Integrating grid and cloud infrastructure solutions allows us to offer services whose availability can adapt by scaling resources up and down. The main challenge for e-science domains is to implement infrastructure solutions for scientific computing that adapt dynamically to the demand for computing resources, with a strong emphasis on optimizing resource use to reduce investment costs. Instrumentation, data volumes, algorithms, and analysis all add complexity for applications that require high processing power and storage for a limited time and that often exceed the computational resources available to most laboratories or research units within an organization. Very often it is necessary to adapt, or even rethink, tools and algorithms and to consolidate existing applications through a phase of reverse engineering in order to deploy them on a cloud infrastructure. For example, in areas such as rainfall monitoring, meteorological analysis, hydrometeorology, climatology, bioinformatics (next-generation sequencing), computational electromagnetics, and radio occultation, the complexity of the analysis raises several issues: processing time, scheduling of processing tasks, storage of results, and a multi-user environment. For these reasons, it is necessary to rethink how e-science applications are written so that they are ready to exploit cloud computing services through the IaaS, PaaS, and SaaS layers. Another important focus is on creating and using hybrid infrastructures, typically a federation between private and public clouds: when all resources owned by the organization are in use, a federated cloud infrastructure makes it easy to add resources from the public cloud to meet computational and storage needs and to release them when processing is finished. In the hybrid model, the scheduling approach is important for managing both cloud types. With this infrastructure model, additional IT capacity is available on demand for a limited time, without having to purchase additional servers.

  12. 78 FR 77161 - Grant Program To Build Tribal Energy Development Capacity

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-20

    ... project equipment such as computers, vehicles, field gear, etc; Legal fees; Contract negotiation fees; and... tribes for projects to build tribal capacity for energy resource development under the Department of the... Information section of this notice to select projects for funding awards. DATES: Submit grant proposals by...

  13. A multipurpose computing center with distributed resources

    NASA Astrophysics Data System (ADS)

    Chudoba, J.; Adam, M.; Adamová, D.; Kouba, T.; Mikula, A.; Říkal, V.; Švec, J.; Uhlířová, J.; Vokáč, P.; Svatoš, M.

    2017-10-01

    The Computing Center of the Institute of Physics (CC IoP) of the Czech Academy of Sciences serves a broad spectrum of users with various computing needs. It runs a WLCG Tier-2 center for the ALICE and ATLAS experiments; the same group of services is used by the astroparticle physics projects the Pierre Auger Observatory (PAO) and the Cherenkov Telescope Array (CTA). The OSG stack is installed for the NOvA experiment. Other groups of users use the local batch system directly. Storage capacity is distributed over several locations. The DPM servers used by ATLAS and the PAO are all in the same server room, but several xrootd servers for the ALICE experiment are operated at the Nuclear Physics Institute in Řež, about 10 km away. The storage capacity for ATLAS and the PAO is extended by resources of CESNET, the Czech National Grid Initiative representative. Those resources are located in Plzeň and Jihlava, more than 100 km away from the CC IoP. Both distant sites use a hierarchical storage solution based on disks and tapes. They installed one common dCache instance, which is published in the CC IoP BDII. ATLAS users can use these resources with the standard ATLAS tools in the same way as the local storage, without noticing the geographical distribution. The computing clusters LUNA and EXMAG, dedicated to users mostly from the solid state physics departments, offer resources for parallel computing. They are part of the Czech NGI infrastructure MetaCentrum, with a distributed batch system based on Torque with a custom scheduler. The clusters are installed remotely by the MetaCentrum team, and a local contact helps only when needed. Users from the IoP have exclusive access to only a part of these two clusters and benefit from higher priorities on the rest (1500 cores in total), which can also be used by any user of MetaCentrum. IoP researchers can also use distant resources located in several towns of the Czech Republic, with a capacity of more than 12000 cores in total.

  14. Exploiting opportunistic resources for ATLAS with ARC CE and the Event Service

    NASA Astrophysics Data System (ADS)

    Cameron, D.; Filipčič, A.; Guan, W.; Tsulaia, V.; Walker, R.; Wenaus, T.; ATLAS Collaboration

    2017-10-01

    With ever-greater computing needs and fixed budgets, big scientific experiments are turning to opportunistic resources as a means to add much-needed extra computing power. These resources can be very different in design from those that comprise the Grid computing of most experiments, therefore exploiting them requires a change in strategy for the experiment. They may be highly restrictive in what can be run or in connections to the outside world, or tolerate opportunistic usage only on condition that tasks may be terminated without warning. The Advanced Resource Connector Computing Element (ARC CE) with its nonintrusive architecture is designed to integrate resources such as High Performance Computing (HPC) systems into a computing Grid. The ATLAS experiment developed the ATLAS Event Service (AES) primarily to address the issue of jobs that can be terminated at any point when opportunistic computing capacity is needed by someone else. This paper describes the integration of these two systems in order to exploit opportunistic resources for ATLAS in a restrictive environment. In addition to the technical details, results from deployment of this solution in the SuperMUC HPC centre in Munich are shown.

  15. The OSG Open Facility: an on-ramp for opportunistic scientific computing

    NASA Astrophysics Data System (ADS)

    Jayatilaka, B.; Levshina, T.; Sehgal, C.; Gardner, R.; Rynge, M.; Würthwein, F.

    2017-10-01

    The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community’s computational needs, the OSG continues to be one of the largest providers of distributed high-throughput computing (DHTC) to researchers from a wide variety of disciplines via the OSG Open Facility. The Open Facility consists of OSG resources that are available opportunistically to users other than resource owners and their collaborators. In the past two years, the Open Facility has doubled its annual throughput to over 200 million wall hours. More than half of these resources are used by over 100 individual researchers from over 60 institutions in fields such as biology, medicine, math, economics, and many others. Over 10% of these individual users utilized in excess of 1 million computational hours each in the past year. The largest source of these cycles is temporary unused capacity at institutions affiliated with US LHC computational sites. An increasing fraction, however, comes from university HPC clusters and large national infrastructure supercomputers offering unused capacity. Such expansions have allowed the OSG to provide ample computational resources to both individual researchers and small groups as well as sizable international science collaborations such as LIGO, AMS, IceCube, and sPHENIX. Opening up access to the Fermilab FabrIc for Frontier Experiments (FIFE) project has also allowed experiments such as mu2e and NOvA to make substantial use of Open Facility resources, the former with over 40 million wall hours in a year. We present how this expansion was accomplished as well as future plans for keeping the OSG Open Facility at the forefront of enabling scientific research by way of DHTC.

  16. The OSG Open Facility: An On-Ramp for Opportunistic Scientific Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jayatilaka, B.; Levshina, T.; Sehgal, C.

    The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community’s computational needs, the OSG continues to be one of the largest providers of distributed high-throughput computing (DHTC) to researchers from a wide variety of disciplines via the OSG Open Facility. The Open Facility consists of OSG resources that are available opportunistically to users other than resource owners and their collaborators. In the past two years, the Open Facility has doubled its annual throughput to over 200 million wall hours. More than half of these resources are used by over 100 individual researchers from over 60 institutions in fields such as biology, medicine, math, economics, and many others. Over 10% of these individual users utilized in excess of 1 million computational hours each in the past year. The largest source of these cycles is temporary unused capacity at institutions affiliated with US LHC computational sites. An increasing fraction, however, comes from university HPC clusters and large national infrastructure supercomputers offering unused capacity. Such expansions have allowed the OSG to provide ample computational resources to both individual researchers and small groups as well as sizable international science collaborations such as LIGO, AMS, IceCube, and sPHENIX. Opening up access to the Fermilab FabrIc for Frontier Experiments (FIFE) project has also allowed experiments such as mu2e and NOvA to make substantial use of Open Facility resources, the former with over 40 million wall hours in a year. We present how this expansion was accomplished as well as future plans for keeping the OSG Open Facility at the forefront of enabling scientific research by way of DHTC.

  17. Tri-Laboratory Linux Capacity Cluster 2007 SOW

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seager, M

    2007-03-22

    The Advanced Simulation and Computing (ASC) Program (formerly known as the Accelerated Strategic Computing Initiative, ASCI) has led the world in capability computing for the last ten years. Capability computing is defined as a world-class platform (in the Top10 of the Top500.org list) with scientific simulations running at scale on the platform. Example systems are ASCI Red, Blue-Pacific, Blue-Mountain, White, Q, RedStorm, and Purple. ASC applications have scaled to multiple thousands of CPUs and accomplished a long list of mission milestones on these ASC capability platforms. However, the computing demands of the ASC and Stockpile Stewardship programs also include a vast number of smaller scale runs for day-to-day simulations. Indeed, every 'hero' capability run requires many hundreds to thousands of much smaller runs in preparation and post-processing activities. In addition, there are many aspects of the Stockpile Stewardship Program (SSP) that can be directly accomplished with these so-called 'capacity' calculations. The need for capacity is now so great within the program that it is increasingly difficult to allocate the computer resources required by the larger capability runs. To rectify the current 'capacity' computing resource shortfall, the ASC program has allocated a large portion of the overall ASC platforms budget to 'capacity' systems. In addition, within the next five to ten years the Life Extension Programs (LEPs) for major nuclear weapons systems must be accomplished. These LEPs and other SSP programmatic elements will further drive the need for capacity calculations and hence 'capacity' systems, as well as future ASC capability calculations on 'capability' systems. To respond to this new workload analysis, the ASC program will be making a large sustained strategic investment in these capacity systems over the next ten years, starting with the United States Government Fiscal Year 2007 (GFY07). However, given the growing need for 'capability' systems as well, the budget demands are extreme, and new, more cost-effective ways of fielding these systems must be developed. This Tri-Laboratory Linux Capacity Cluster (TLCC) procurement represents the ASC program's first investment vehicle in these capacity systems. It also represents a new strategy for quickly building, fielding, and integrating many Linux clusters of various sizes into classified and unclassified production service through a concept of Scalable Units (SU). The programmatic objective is to dramatically reduce the overall Total Cost of Ownership (TCO) of these 'capacity' systems relative to the best practices in Linux cluster deployments today. This objective only makes sense in the context of these systems quickly becoming very robust and useful production clusters under the crushing load that will be inflicted on them by the ASC and SSP scientific simulation capacity workload.

  18. 8760-Based Method for Representing Variable Generation Capacity Value in Capacity Expansion Models: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frew, Bethany A; Cole, Wesley J; Sun, Yinong

    Capacity expansion models (CEMs) are widely used to evaluate the least-cost portfolio of electricity generators, transmission, and storage needed to reliably serve demand over the evolution of many years or decades. Various CEM formulations are used to evaluate systems ranging in scale from states or utility service territories to national or multi-national systems. CEMs can be computationally complex, and to achieve acceptable solve times, key parameters are often estimated using simplified methods. In this paper, we focus on two of these key parameters associated with the integration of variable generation (VG) resources: capacity value and curtailment. We first discuss common modeling simplifications used in CEMs to estimate capacity value and curtailment, many of which are based on a representative subset of hours that can miss important tail events or which require assumptions about the load and resource distributions that may not match actual distributions. We then present an alternate approach that captures key elements of chronological operation over all hours of the year without the computationally intensive economic dispatch optimization typically employed within more detailed operational models. The updated methodology characterizes (1) the contribution of VG to system capacity during high-load and net-load hours, (2) the curtailment level of VG, and (3) the potential reductions in curtailment enabled through deployment of storage and more flexible operation of select thermal generators. We apply this alternate methodology to an existing CEM, the Regional Energy Deployment System (ReEDS). Results demonstrate that this alternate approach provides more accurate estimates of capacity value and curtailments by explicitly capturing system interactions across all hours of the year. This approach could be applied more broadly to CEMs at many different scales where hourly resource and load data are available, greatly improving the representation of challenges associated with the integration of variable generation resources.
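
    As an illustration of the hourly flavor of this calculation, the sketch below credits variable generation with its average output during the highest net-load hours of an 8760-hour year. It is a toy example with synthetic data, not the ReEDS methodology; the load shape, installed capacity, and the choice of the top 100 hours are all assumptions.

      # Toy 8760-hour capacity-credit estimate: value VG by its average output
      # during the top net-load hours. Synthetic data; illustrative only.
      import numpy as np

      rng = np.random.default_rng(0)
      hours = 8760
      load = 50_000 + 10_000 * np.sin(np.arange(hours) * 2 * np.pi / 24) + rng.normal(0, 2_000, hours)  # MW
      vg_capacity = 5_000.0                                   # MW installed (assumed)
      vg_output = vg_capacity * rng.uniform(0.0, 1.0, hours)  # MW generated each hour (assumed)

      def capacity_credit(load_mw, vg_mw, installed_mw, top_n=100):
          """Fraction of installed VG counted as firm capacity, from the top net-load hours."""
          net_load = load_mw - vg_mw
          top_hours = np.argsort(net_load)[-top_n:]           # hours with the highest net load
          return float(vg_mw[top_hours].mean() / installed_mw)

      print(f"capacity credit over the top 100 net-load hours: {capacity_credit(load, vg_output, vg_capacity):.2%}")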

  19. National Software Capacity: Near-Term Study

    DTIC Science & Technology

    1990-05-01

    Excerpts from the report's table of contents: Productivity Gains; 2. Labor Markets and Human Resource Impacts on Capacity; 2.1. Career Ladders (2.1.1. Industry; 2.1.2. Civil Service; 2.1.3. …); … Inflows to and Outflows from Computer-Related Jobs; 2.3.2. Inflows to and Outflows from the DoD Industrial Contractors; 3. Major Impacts of Other Factors on Capacity: A Systems View; 3.1. Organizational Impacts on Capacity; 3.1.1. Requirements Specification and Changes; 3.1.2. The Contracting …

  20. Institutional Computing Executive Group Review of Multi-programmatic & Institutional Computing, Fiscal Year 2005 and 2006

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langer, S; Rotman, D; Schwegler, E

    The Institutional Computing Executive Group (ICEG) review of FY05-06 Multiprogrammatic and Institutional Computing (M and IC) activities is presented in the attached report. In summary, we find that the M and IC staff does an outstanding job of acquiring and supporting a wide range of institutional computing resources to meet the programmatic and scientific goals of LLNL. The responsiveness and high quality of support given to users and the programs investing in M and IC reflects the dedication and skill of the M and IC staff. M and IC has successfully managed serial capacity, parallel capacity, and capability computing resources. Serial capacity computing supports a wide range of scientific projects which require access to a few high performance processors within a shared memory computer. Parallel capacity computing supports scientific projects that require a moderate number of processors (up to roughly 1000) on a parallel computer. Capability computing supports parallel jobs that push the limits of simulation science. M and IC has worked closely with Stockpile Stewardship, and together they have made LLNL a premier institution for computational and simulation science. Such a standing is vital to the continued success of laboratory science programs and to the recruitment and retention of top scientists. This report provides recommendations to build on M and IC's accomplishments and improve simulation capabilities at LLNL. We recommend that the institution fully fund (1) operation of the atlas cluster purchased in FY06 to support a few large projects; (2) operation of the thunder and zeus clusters to enable 'mid-range' parallel capacity simulations during normal operation and a limited number of large simulations during dedicated application time; (3) operation of the new yana cluster to support a wide range of serial capacity simulations; (4) improvements to the reliability and performance of the Lustre parallel file system; (5) support for the new GDO petabyte-class storage facility on the green network for use in data-intensive external collaborations; and (6) continued support for visualization and other methods for analyzing large simulations. We also recommend that M and IC begin planning in FY07 for the next upgrade of its parallel clusters. LLNL investments in M and IC have resulted in a world-class simulation capability leading to innovative science. We thank the LLNL management for its continued support and thank the M and IC staff for its vision and dedicated efforts to make it all happen.

  1. The Development of Educational and/or Training Computer Games for Students with Disabilities

    ERIC Educational Resources Information Center

    Kwon, Jungmin

    2012-01-01

    Computer and video games have much in common with the strategies used in special education. Free resources for game development are becoming more widely available, so lay computer users, such as teachers and other practitioners, now have the capacity to develop games using a low budget and a little self-teaching. This article provides a guideline…

  2. Infrastructures for Distributed Computing: the case of BESIII

    NASA Astrophysics Data System (ADS)

    Pellegrino, J.

    2018-05-01

    BESIII is an electron-positron collision experiment hosted at BEPCII in Beijing and aimed at investigating tau-charm physics. BESIII has now been running for several years and has gathered more than 1 PB of raw data. In order to analyze these data and perform massive Monte Carlo simulations, a large amount of computing and storage resources is needed. The distributed computing system is based upon DIRAC and has been in production since 2012. It integrates computing and storage resources from different institutes and a variety of resource types such as cluster, grid, cloud, or volunteer computing. About 15 sites from the BESIII Collaboration all over the world have joined this distributed computing infrastructure, giving a significant contribution to the IHEP computing facility. Nowadays cloud computing is playing a key role in the HEP computing field, due to its scalability and elasticity. Cloud infrastructures take advantage of several tools, such as VMDirac, to manage virtual machines through cloud managers according to the job requirements. With the virtually unlimited resources of commercial clouds, the computing capacity can scale accordingly in order to deal with any burst demand. General computing models are discussed and addressed herein, with particular focus on the BESIII infrastructure; new computing tools and upcoming infrastructures are also addressed.

  3. Experience in using commercial clouds in CMS

    NASA Astrophysics Data System (ADS)

    Bauerdick, L.; Bockelman, B.; Dykstra, D.; Fuess, S.; Garzoglio, G.; Girone, M.; Gutsche, O.; Holzman, B.; Hufnagel, D.; Kim, H.; Kennedy, R.; Mason, D.; Spentzouris, P.; Timm, S.; Tiradani, A.; Vaandering, E.; CMS Collaboration

    2017-10-01

    Historically, high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single-site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of the WLCG resources are used for LHC computing, and the resources are scheduled to be continuously used throughout the year. In the last several years there has been an explosion in the capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest amongst the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this presentation we will discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We will discuss the planning and technical challenges involved in organizing the most IO-intensive CMS workflows on a large-scale set of virtualized resources provisioned by the Fermilab HEPCloud. We will describe the data handling and data management challenges. Also, we will discuss the economic issues and the cost and operational efficiency comparison to our dedicated resources. At the end we will consider the changes in the working model of HEP computing in a domain with the availability of large-scale resources scheduled at peak times.

  4. Experience in using commercial clouds in CMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauerdick, L.; Bockelman, B.; Dykstra, D.

    Historically, high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single-site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of the WLCG resources are used for LHC computing, and the resources are scheduled to be continuously used throughout the year. In the last several years there has been an explosion in the capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest amongst the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this presentation we will discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We will discuss the planning and technical challenges involved in organizing the most IO-intensive CMS workflows on a large-scale set of virtualized resources provisioned by the Fermilab HEPCloud. We will describe the data handling and data management challenges. Also, we will discuss the economic issues and the cost and operational efficiency comparison to our dedicated resources. At the end we will consider the changes in the working model of HEP computing in a domain with the availability of large-scale resources scheduled at peak times.

  5. The economics of time shared computing: Congestion, user costs and capacity

    NASA Technical Reports Server (NTRS)

    Agnew, C. E.

    1982-01-01

    Time shared systems permit the fixed costs of computing resources to be spread over large numbers of users. However, bottleneck results in the theory of closed queueing networks can be used to show that this economy of scale will be offset by the increased congestion that results as more users are added to the system. If one considers the total costs, including the congestion cost, there is an optimal number of users for a system which equals the saturation value usually used to define system capacity.
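
    The trade-off described here is the standard one given by the asymptotic bounds for closed queueing networks. As a sketch, in the usual operational-analysis notation (N users, think time Z, per-visit service demands D_i at the system's resources), throughput and the saturation point are bounded by:

      % Asymptotic bounds for a closed queueing network with N interactive users
      % (standard operational analysis; a sketch of the relation the abstract invokes,
      % not an equation copied from the paper).
      \[
        X(N) \;\le\; \min\!\left(\frac{N}{Z + \sum_i D_i},\; \frac{1}{D_{\max}}\right),
        \qquad
        N^{*} \;=\; \frac{Z + \sum_i D_i}{D_{\max}} .
      \]

    Below N* the fixed costs are spread over more users with little added delay; beyond N* the bottleneck resource saturates and each extra user mainly adds congestion delay, which is why the cost-optimal user population sits near this saturation value.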

  6. Consolidation of cloud computing in ATLAS

    NASA Astrophysics Data System (ADS)

    Taylor, Ryan P.; Domingues Cordeiro, Cristovao Jose; Giordano, Domenico; Hover, John; Kouba, Tomas; Love, Peter; McNab, Andrew; Schovancova, Jaroslava; Sobie, Randall; ATLAS Collaboration

    2017-10-01

    Throughout the first half of LHC Run 2, ATLAS cloud computing has undergone a period of consolidation, characterized by building upon previously established systems, with the aim of reducing operational effort, improving robustness, and reaching higher scale. This paper describes the current state of ATLAS cloud computing. Cloud activities are converging on a common contextualization approach for virtual machines, and cloud resources are sharing monitoring and service discovery components. We describe the integration of Vacuum resources, streamlined usage of the Simulation at Point 1 cloud for offline processing, extreme scaling on Amazon compute resources, and procurement of commercial cloud capacity in Europe. Finally, building on the previously established monitoring infrastructure, we have deployed a real-time monitoring and alerting platform which coalesces data from multiple sources, provides flexible visualization via customizable dashboards, and issues alerts and carries out corrective actions in response to problems.

  7. Decomposition method for zonal resource allocation problems in telecommunication networks

    NASA Astrophysics Data System (ADS)

    Konnov, I. V.; Kashuba, A. Yu

    2016-11-01

    We consider problems of optimal resource allocation in telecommunication networks. We first give an optimization formulation for the case where the network manager aims to distribute some homogeneous resource (bandwidth) among users of one region with quadratic charge and fee functions, and we present simple and efficient solution methods. Next, we consider a more general problem for a provider of a wireless communication network divided into zones (clusters) with common capacity constraints. We obtain a convex quadratic optimization problem involving capacity and balance constraints. By using the dual Lagrangian method with respect to the capacity constraint, we suggest reducing the initial problem to a single-dimensional optimization problem, where each evaluation of the dual cost function requires the independent solution of zonal problems, which coincide with the single-region problem above. Some results of computational experiments confirm the applicability of the new methods.
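
    The decomposition pattern mentioned here can be sketched generically (this is the standard dual-decomposition template under assumed notation, not the paper's exact model): with convex quadratic zonal costs f_z and a shared capacity C,

      % Generic dual decomposition of a shared capacity constraint (illustrative notation).
      \[
        \min_{x_z \ge 0} \; \sum_{z} f_z(x_z)
        \quad \text{s.t.} \quad \sum_{z} \mathbf{1}^{\top} x_z \le C ,
      \]
      \[
        q(\lambda) \;=\; \sum_{z} \min_{x_z \ge 0} \bigl( f_z(x_z) + \lambda\, \mathbf{1}^{\top} x_z \bigr) \;-\; \lambda C ,
        \qquad \lambda \ge 0 .
      \]

    Evaluating q(λ) splits into independent zonal subproblems, and the outer task of maximizing the concave dual q over the single scalar λ is the one-dimensional optimization the abstract refers to.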

  8. The Relation between Acquisition of a Theory of Mind and the Capacity to Hold in Mind.

    ERIC Educational Resources Information Center

    Gordon, Anne C. L.; Olson, David R.

    1998-01-01

    Tested hypothesized relationship between development of a theory of mind and increasing computational resources in 3- to 5-year olds. Found that the correlations between performance on theory of mind tasks and dual processing tasks were as high as r=.64, suggesting that changes in working memory capacity allow the expression of, and arguably the…

  9. VM Capacity-Aware Scheduling within Budget Constraints in IaaS Clouds

    PubMed Central

    Thanasias, Vasileios; Lee, Choonhwa; Hanif, Muhammad; Kim, Eunsam; Helal, Sumi

    2016-01-01

    Recently, cloud computing has drawn significant attention from both industry and academia, bringing unprecedented changes to computing and information technology. The Infrastructure-as-a-Service (IaaS) model offers new abilities such as the elastic provisioning and relinquishing of computing resources in response to workload fluctuations. However, because the demand for resources dynamically changes over time, provisioning resources so that a given budget is used efficiently while maintaining sufficient performance remains a key challenge. This paper addresses the problem of task scheduling and resource provisioning for a set of tasks running on IaaS clouds; it presents novel provisioning and scheduling algorithms capable of executing tasks within a given budget, while minimizing the slowdown due to the budget constraint. Our simulation study demonstrates a substantial reduction of up to 70% in the overall task slowdown rate by the proposed algorithms. PMID:27501046

  10. VM Capacity-Aware Scheduling within Budget Constraints in IaaS Clouds.

    PubMed

    Thanasias, Vasileios; Lee, Choonhwa; Hanif, Muhammad; Kim, Eunsam; Helal, Sumi

    2016-01-01

    Recently, cloud computing has drawn significant attention from both industry and academia, bringing unprecedented changes to computing and information technology. The Infrastructure-as-a-Service (IaaS) model offers new abilities such as the elastic provisioning and relinquishing of computing resources in response to workload fluctuations. However, because the demand for resources dynamically changes over time, provisioning resources so that a given budget is used efficiently while maintaining sufficient performance remains a key challenge. This paper addresses the problem of task scheduling and resource provisioning for a set of tasks running on IaaS clouds; it presents novel provisioning and scheduling algorithms capable of executing tasks within a given budget, while minimizing the slowdown due to the budget constraint. Our simulation study demonstrates a substantial reduction of up to 70% in the overall task slowdown rate by the proposed algorithms.
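
    To make the budget/slowdown trade-off concrete, the sketch below picks a VM count for a bag of tasks so that the estimated rental cost stays within a budget while the makespan is minimized. It is an illustrative toy, not the algorithm from this paper; the prices, runtimes, and per-whole-hour billing assumption are invented.

      # Toy budget-constrained provisioning choice (not the paper's algorithm).
      import math

      def choose_vm_count(task_hours, price_per_vm_hour, budget, max_vms=64):
          """Pick the VM count that minimizes makespan while the estimated cost fits the budget."""
          total_work = sum(task_hours)
          best_n, best_makespan = 1, float("inf")
          for n in range(1, max_vms + 1):
              # crude makespan estimate: perfect load balance, billed per whole VM-hour
              makespan = max(max(task_hours), total_work / n)
              cost = n * math.ceil(makespan) * price_per_vm_hour
              if cost <= budget and makespan < best_makespan:
                  best_n, best_makespan = n, makespan
          return best_n

      tasks = [0.5, 1.0, 0.25, 2.0, 1.5, 0.75, 1.0, 0.5]   # estimated hours per task (made up)
      print(choose_vm_count(tasks, price_per_vm_hour=0.10, budget=2.00))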

  11. Time Division Multiplexing of Semiconductor Qubits

    NASA Astrophysics Data System (ADS)

    Jarratt, Marie Claire; Hornibrook, John; Croot, Xanthe; Watson, John; Gardner, Geoff; Fallahi, Saeed; Manfra, Michael; Reilly, David

    Readout chains, comprising resonators, amplifiers, and demodulators, are likely to be precious resources in quantum computing architectures. The potential to share readout resources is contingent on realising efficient time-division multiplexing (TDM) schemes that are compatible with quantum computing. Here, we demonstrate TDM using a GaAs quantum dot device with multiple charge sensors. Our device incorporates chip-level switches that do not load the impedance matching network. When used in conjunction with frequency multiplexing, each frequency tone addresses multiple time-multiplexed qubits, vastly increasing the capacity of a single readout line.

  12. Integration of Russian Tier-1 Grid Center with High Performance Computers at NRC-KI for LHC experiments and beyond HENP

    NASA Astrophysics Data System (ADS)

    Belyaev, A.; Berezhnaya, A.; Betev, L.; Buncic, P.; De, K.; Drizhuk, D.; Klimentov, A.; Lazin, Y.; Lyalin, I.; Mashinistov, R.; Novikov, A.; Oleynik, D.; Polyakov, A.; Poyda, A.; Ryabinkin, E.; Teslyuk, A.; Tkachenko, I.; Yasnopolskiy, L.

    2015-12-01

    The LHC experiments are preparing for the precision measurements and further discoveries that will be made possible by higher LHC energies from April 2015 (LHC Run2). The need for simulation, data processing, and analysis would overwhelm the expected capacity of the grid infrastructure computing facilities deployed by the Worldwide LHC Computing Grid (WLCG). To meet this challenge, the integration of opportunistic resources into the LHC computing model is highly important. The Tier-1 facility at the Kurchatov Institute (NRC-KI) in Moscow is a part of the WLCG and will process, simulate, and store up to 10% of the total data obtained from the ALICE, ATLAS, and LHCb experiments. In addition, the Kurchatov Institute has supercomputers with a peak performance of 0.12 PFLOPS. Delegating even a fraction of these supercomputing resources to LHC computing will notably increase the total capacity. In 2014, development of a portal combining the Tier-1 and a supercomputer at the Kurchatov Institute was started to provide common interfaces and storage. The portal will be used not only for HENP experiments, but also by other data- and compute-intensive sciences such as biology (genome sequencing analysis) and astrophysics (cosmic-ray analysis, antimatter and dark matter searches, etc.).

  13. A lightweight distributed framework for computational offloading in mobile cloud computing.

    PubMed

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on modern Smartphones. However, such applications are still constrained by limitations in the processing potential, storage capacity, and battery lifetime of Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds to mitigate resource limitations in SMDs. Currently, a number of computational offloading frameworks have been proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time-consuming and resource-intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs the centralized monitoring, high availability, and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading with the proposed framework and with the latest existing frameworks. The analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81%, and the turnaround time of the application is decreased by 83.5% compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC.

  14. A Lightweight Distributed Framework for Computational Offloading in Mobile Cloud Computing

    PubMed Central

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on modern Smartphones. However, such applications are still constrained by limitations in the processing potential, storage capacity, and battery lifetime of Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds to mitigate resource limitations in SMDs. Currently, a number of computational offloading frameworks have been proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time-consuming and resource-intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs the centralized monitoring, high availability, and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading with the proposed framework and with the latest existing frameworks. The analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81%, and the turnaround time of the application is decreased by 83.5% compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC. PMID:25127245

  15. Consolidating WLCG topology and configuration in the Computing Resource Information Catalogue

    DOE PAGES

    Alandes, Maria; Andreeva, Julia; Anisenkov, Alexey; ...

    2017-10-01

    Here, the Worldwide LHC Computing Grid infrastructure links about 200 participating computing centres affiliated with several partner projects. It is built by integrating heterogeneous computer and storage resources in diverse data centres all over the world and provides CPU and storage capacity to the LHC experiments to perform data processing and physics analysis. In order to be used by the experiments, these distributed resources should be well described, which implies easy service discovery and a detailed description of service configuration. Currently this information is scattered over multiple generic information sources like GOCDB, OIM, BDII and experiment-specific information systems. Such a model does not allow topology and configuration information to be validated easily. Moreover, information in various sources is not always consistent. Finally, the evolution of computing technologies introduces new challenges. Experiments are relying more and more on opportunistic resources, which by their nature are more dynamic and should also be well described in the WLCG information system. This contribution describes the new WLCG configuration service CRIC (Computing Resource Information Catalogue), which collects information from various information providers, performs validation and provides a consistent set of UIs and APIs to the LHC VOs for service discovery and usage configuration. The main requirements for CRIC are simplicity, agility and robustness. CRIC should be able to be quickly adapted to new types of computing resources and new information sources, and allow for new data structures to be implemented easily following the evolution of the computing models and operations of the experiments.

  16. Consolidating WLCG topology and configuration in the Computing Resource Information Catalogue

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alandes, Maria; Andreeva, Julia; Anisenkov, Alexey

    Here, the Worldwide LHC Computing Grid infrastructure links about 200 participating computing centres affiliated with several partner projects. It is built by integrating heterogeneous computer and storage resources in diverse data centres all over the world and provides CPU and storage capacity to the LHC experiments to perform data processing and physics analysis. In order to be used by the experiments, these distributed resources should be well described, which implies easy service discovery and a detailed description of service configuration. Currently this information is scattered over multiple generic information sources like GOCDB, OIM, BDII and experiment-specific information systems. Such a model does not allow topology and configuration information to be validated easily. Moreover, information in various sources is not always consistent. Finally, the evolution of computing technologies introduces new challenges. Experiments are relying more and more on opportunistic resources, which by their nature are more dynamic and should also be well described in the WLCG information system. This contribution describes the new WLCG configuration service CRIC (Computing Resource Information Catalogue), which collects information from various information providers, performs validation and provides a consistent set of UIs and APIs to the LHC VOs for service discovery and usage configuration. The main requirements for CRIC are simplicity, agility and robustness. CRIC should be able to be quickly adapted to new types of computing resources and new information sources, and allow for new data structures to be implemented easily following the evolution of the computing models and operations of the experiments.

  17. Consolidating WLCG topology and configuration in the Computing Resource Information Catalogue

    NASA Astrophysics Data System (ADS)

    Alandes, Maria; Andreeva, Julia; Anisenkov, Alexey; Bagliesi, Giuseppe; Belforte, Stephano; Campana, Simone; Dimou, Maria; Flix, Jose; Forti, Alessandra; di Girolamo, A.; Karavakis, Edward; Lammel, Stephan; Litmaath, Maarten; Sciaba, Andrea; Valassi, Andrea

    2017-10-01

    The Worldwide LHC Computing Grid infrastructure links about 200 participating computing centres affiliated with several partner projects. It is built by integrating heterogeneous computer and storage resources in diverse data centres all over the world and provides CPU and storage capacity to the LHC experiments to perform data processing and physics analysis. In order to be used by the experiments, these distributed resources should be well described, which implies easy service discovery and a detailed description of service configuration. Currently this information is scattered over multiple generic information sources like GOCDB, OIM, BDII and experiment-specific information systems. Such a model does not allow topology and configuration information to be validated easily. Moreover, information in various sources is not always consistent. Finally, the evolution of computing technologies introduces new challenges. Experiments are relying more and more on opportunistic resources, which by their nature are more dynamic and should also be well described in the WLCG information system. This contribution describes the new WLCG configuration service CRIC (Computing Resource Information Catalogue), which collects information from various information providers, performs validation and provides a consistent set of UIs and APIs to the LHC VOs for service discovery and usage configuration. The main requirements for CRIC are simplicity, agility and robustness. CRIC should be able to be quickly adapted to new types of computing resources and new information sources, and allow for new data structures to be implemented easily following the evolution of the computing models and operations of the experiments.

  18. On-demand provisioning of HEP compute resources on cloud sites and shared HPC centers

    NASA Astrophysics Data System (ADS)

    Erli, G.; Fischer, F.; Fleig, G.; Giffels, M.; Hauth, T.; Quast, G.; Schnepf, M.; Heese, J.; Leppert, K.; Arnaez de Pedro, J.; Sträter, R.

    2017-10-01

    This contribution reports on solutions, experiences, and recent developments with the dynamic, on-demand provisioning of remote computing resources for analysis and simulation workflows. Local resources of a physics institute are extended by private and commercial cloud sites, ranging from the inclusion of desktop clusters over institute clusters to HPC centers. Rather than relying on dedicated HEP computing centers, it is nowadays more reasonable and flexible to utilize remote computing capacity via virtualization techniques or container concepts. We report on recent experience from incorporating a remote HPC center (NEMO Cluster, Freiburg University) and resources dynamically requested from the commercial provider 1&1 Internet SE into our institute's computing infrastructure. The Freiburg HPC resources are requested via the standard batch system, allowing HPC and HEP applications to be executed simultaneously, such that regular batch jobs run side by side with virtual machines managed via OpenStack [1]. For the inclusion of the 1&1 commercial resources, a Python API and SDK as well as the possibility to upload images were available. Large-scale tests prove the capability to serve the scientific use case in the European 1&1 datacenters. The described environment at the Institute of Experimental Nuclear Physics (IEKP) at KIT serves the needs of researchers participating in the CMS and Belle II experiments. In total, resources exceeding half a million CPU hours have been provided by remote sites.

  19. Job Management and Task Bundling

    NASA Astrophysics Data System (ADS)

    Berkowitz, Evan; Jansen, Gustav R.; McElvain, Kenneth; Walker-Loud, André

    2018-03-01

    High Performance Computing is often performed on scarce and shared computing resources. To ensure computers are used to their full capacity, administrators often incentivize large workloads that are not possible on smaller systems. Measurements in Lattice QCD frequently do not scale to machine-size workloads. By bundling tasks together we can create large jobs suitable for gigantic partitions. We discuss METAQ and mpi_jm, software developed to dynamically group computational tasks together, which can intelligently backfill to consume idle time without substantial changes to users' current workflows or executables.
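
    The bundling idea can be sketched as packing many small tasks into one large allocation so the whole partition stays busy. The toy below greedily selects tasks whose combined node-hours fit inside an allocation; it is not METAQ or mpi_jm, and the node counts and walltimes are made-up examples.

      # Toy greedy bundling of small tasks into one large allocation (not METAQ/mpi_jm).
      from dataclasses import dataclass

      @dataclass
      class Task:
          name: str
          nodes: int
          hours: float

      def bundle(tasks, alloc_nodes, alloc_hours):
          """Greedily pick tasks whose combined node-hours fit inside the allocation."""
          chosen, used_node_hours = [], 0.0
          budget = alloc_nodes * alloc_hours
          for task in sorted(tasks, key=lambda t: t.nodes * t.hours, reverse=True):
              cost = task.nodes * task.hours
              if task.nodes <= alloc_nodes and used_node_hours + cost <= budget:
                  chosen.append(task)
                  used_node_hours += cost
          return chosen

      jobs = [Task("measure_a", 16, 2.0), Task("measure_b", 8, 1.0), Task("measure_c", 32, 0.5)]
      for t in bundle(jobs, alloc_nodes=48, alloc_hours=2.0):
          print(t.name, t.nodes, t.hours)

    A real bundler must also schedule the chosen tasks inside the allocation (respecting concurrent node limits and walltime), which the node-hour budget above only approximates.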

  20. Application of microarray analysis on computer cluster and cloud platforms.

    PubMed

    Bernau, C; Boulesteix, A-L; Knaus, J

    2013-01-01

    Analysis of recent high-dimensional biological data tends to be computationally intensive as many common approaches such as resampling or permutation tests require the basic statistical analysis to be repeated many times. A crucial advantage of these methods is that they can be easily parallelized due to the computational independence of the resampling or permutation iterations, which has induced many statistics departments to establish their own computer clusters. An alternative is to rent computing resources in the cloud, e.g. at Amazon Web Services. In this article we analyze whether a selection of statistical projects, recently implemented at our department, can be efficiently realized on these cloud resources. Moreover, we illustrate an opportunity to combine computer cluster and cloud resources. In order to compare the efficiency of computer cluster and cloud implementations and their respective parallelizations we use microarray analysis procedures and compare their runtimes on the different platforms. Amazon Web Services provide various instance types which meet the particular needs of the different statistical projects we analyzed in this paper. Moreover, the network capacity is sufficient and the parallelization is comparable in efficiency to standard computer cluster implementations. Our results suggest that many statistical projects can be efficiently realized on cloud resources. It is important to mention, however, that workflows can change substantially as a result of a shift from computer cluster to cloud computing.
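
    The key point, that resampling or permutation iterations are computationally independent, is what makes them easy to spread across a cluster or cloud instances. A minimal illustration with local processes is sketched below (illustrative only, not the analyses from this article); the same map-over-seeds structure transfers to batch or cloud workers.

      # Minimal illustration of embarrassingly parallel resampling: each bootstrap
      # iteration is independent, so iterations can be mapped over local cores,
      # cluster jobs, or cloud instances. Illustrative only.
      from multiprocessing import Pool
      import random
      import statistics

      random.seed(12345)                                   # same data in every spawned worker
      DATA = [random.gauss(0.0, 1.0) for _ in range(500)]

      def one_bootstrap(seed):
          rng = random.Random(seed)
          resample = [rng.choice(DATA) for _ in range(len(DATA))]
          return statistics.mean(resample)

      if __name__ == "__main__":
          with Pool() as pool:
              estimates = pool.map(one_bootstrap, range(1000))
          print("bootstrap SE of the mean:", statistics.stdev(estimates))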

  1. Model documentation renewable fuels module of the National Energy Modeling System

    NASA Astrophysics Data System (ADS)

    1995-06-01

    This report documents the objectives, analytical approach, and design of the National Energy Modeling System (NEMS) Renewable Fuels Module (RFM) as it relates to the production of the 1995 Annual Energy Outlook (AEO95) forecasts. The report catalogs and describes modeling assumptions, computational methodologies, data inputs, and parameter estimation techniques. A number of offline analyses used in lieu of RFM modeling components are also described. The RFM consists of six analytical submodules that represent each of the major renewable energy resources -- wood, municipal solid waste (MSW), solar energy, wind energy, geothermal energy, and alcohol fuels. The RFM also reads in hydroelectric facility capacities and capacity factors from a data file for use by the NEMS Electricity Market Module (EMM). The purpose of the RFM is to define the technological, cost, and resource size characteristics of renewable energy technologies. These characteristics are used to compute a levelized cost to be competed against other similarly derived costs from other energy sources and technologies. The competition of these energy sources over the NEMS time horizon determines the market penetration of these renewable energy technologies. The characteristics include available energy capacity, capital costs, fixed operating costs, variable operating costs, capacity factor, heat rate, construction lead time, and fuel product price.
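
    As a hedged illustration of the levelized-cost computation mentioned above (the RFM's actual algorithms and parameter values are documented in the report itself), a generic levelized cost can be assembled from the listed characteristics: capital cost, fixed and variable operating costs, capacity factor, heat rate, and fuel price. All numbers below are placeholders.

      # Generic levelized-cost sketch; every input value is an illustrative placeholder.
      capital_cost = 1500.0      # $/kW installed
      fixed_om = 30.0            # $/kW-yr
      variable_om = 0.005        # $/kWh
      fuel_price = 0.0           # $/MMBtu (zero for wind or solar)
      heat_rate = 0.0            # Btu/kWh
      capacity_factor = 0.35
      discount_rate = 0.07
      lifetime_years = 30

      # Capital recovery factor spreads the overnight cost over the plant lifetime.
      crf = (discount_rate * (1 + discount_rate) ** lifetime_years
             / ((1 + discount_rate) ** lifetime_years - 1))

      annual_kwh_per_kw = 8760 * capacity_factor
      lcoe = ((capital_cost * crf + fixed_om) / annual_kwh_per_kw
              + variable_om
              + fuel_price * heat_rate / 1.0e6)
      print(f"levelized cost: {lcoe:.3f} $/kWh")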

  2. A world-wide databridge supported by a commercial cloud provider

    NASA Astrophysics Data System (ADS)

    Tat Cheung, Kwong; Field, Laurence; Furano, Fabrizio

    2017-10-01

    Volunteer computing has the potential to provide significant additional computing capacity for the LHC experiments. One of the challenges with exploiting volunteer computing is to support a global community of volunteers that provides heterogeneous resources. However, high energy physics applications require more data input and output than the CPU-intensive applications that are typically used by other volunteer computing projects. While the so-called databridge has already been successfully proposed as a method to span the untrusted and trusted domains of volunteer computing and Grid computing respectively, globally transferring data between potentially poor-performing residential networks and CERN could be unreliable, leading to wasted resource usage. The expectation is that by placing a storage endpoint that is part of a wider, flexible geographical databridge deployment closer to the volunteers, the transfer success rate and the overall performance can be improved. This contribution investigates the provision of a globally distributed databridge implemented upon a commercial cloud provider.

  3. Grid site availability evaluation and monitoring at CMS

    DOE PAGES

    Lyons, Gaston; Maciulaitis, Rokas; Bagliesi, Giuseppe; ...

    2017-10-01

    The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) uses distributed grid computing to store, process, and analyse the vast quantity of scientific data recorded every year. The computing resources are grouped into sites and organized in a tiered structure. Each site provides computing and storage to the CMS computing grid. Over a hundred sites worldwide contribute resources ranging from a hundred to well over ten thousand computing cores and storage from tens of TBytes to tens of PBytes. In such a large computing setup scheduled and unscheduled outages occur continually and are not allowed to significantly impact data handling, processing, and analysis. Unscheduled capacity and performance reductions need to be detected promptly and corrected. CMS developed a sophisticated site evaluation and monitoring system for Run 1 of the LHC based on tools of the Worldwide LHC Computing Grid. For Run 2 of the LHC the site evaluation and monitoring system is being overhauled to enable faster detection/reaction to failures and a more dynamic handling of computing resources. Furthermore, enhancements to better distinguish site from central service issues and to make evaluations more transparent and informative to site support staff are planned.

  4. Grid site availability evaluation and monitoring at CMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyons, Gaston; Maciulaitis, Rokas; Bagliesi, Giuseppe

    The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) uses distributed grid computing to store, process, and analyse the vast quantity of scientific data recorded every year. The computing resources are grouped into sites and organized in a tiered structure. Each site provides computing and storage to the CMS computing grid. Over a hundred sites worldwide contribute resources ranging from a hundred to well over ten thousand computing cores and storage from tens of TBytes to tens of PBytes. In such a large computing setup scheduled and unscheduled outages occur continually and are not allowed to significantly impact data handling, processing, and analysis. Unscheduled capacity and performance reductions need to be detected promptly and corrected. CMS developed a sophisticated site evaluation and monitoring system for Run 1 of the LHC based on tools of the Worldwide LHC Computing Grid. For Run 2 of the LHC the site evaluation and monitoring system is being overhauled to enable faster detection/reaction to failures and a more dynamic handling of computing resources. Furthermore, enhancements to better distinguish site from central service issues and to make evaluations more transparent and informative to site support staff are planned.

  5. Grid site availability evaluation and monitoring at CMS

    NASA Astrophysics Data System (ADS)

    Lyons, Gaston; Maciulaitis, Rokas; Bagliesi, Giuseppe; Lammel, Stephan; Sciabà, Andrea

    2017-10-01

    The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) uses distributed grid computing to store, process, and analyse the vast quantity of scientific data recorded every year. The computing resources are grouped into sites and organized in a tiered structure. Each site provides computing and storage to the CMS computing grid. Over a hundred sites worldwide contribute resources ranging from a hundred to well over ten thousand computing cores and storage from tens of TBytes to tens of PBytes. In such a large computing setup scheduled and unscheduled outages occur continually and are not allowed to significantly impact data handling, processing, and analysis. Unscheduled capacity and performance reductions need to be detected promptly and corrected. CMS developed a sophisticated site evaluation and monitoring system for Run 1 of the LHC based on tools of the Worldwide LHC Computing Grid. For Run 2 of the LHC the site evaluation and monitoring system is being overhauled to enable faster detection/reaction to failures and a more dynamic handling of computing resources. Enhancements to better distinguish site from central service issues and to make evaluations more transparent and informative to site support staff are planned.

  6. Quantum computing with incoherent resources and quantum jumps.

    PubMed

    Santos, M F; Cunha, M Terra; Chaves, R; Carvalho, A R R

    2012-04-27

    Spontaneous emission and the inelastic scattering of photons are two natural processes usually associated with decoherence and the reduction in the capacity to process quantum information. Here we show that, when suitably detected, these photons are sufficient to build all the fundamental blocks needed to perform quantum computation in the emitting qubits while protecting them from deleterious dissipative effects. We exemplify this by showing how to efficiently prepare graph states for the implementation of measurement-based quantum computation.

  7. Tactical resource allocation and elective patient admission planning in care processes.

    PubMed

    Hulshof, Peter J H; Boucherie, Richard J; Hans, Erwin W; Hurink, Johann L

    2013-06-01

    Tactical planning of resources in hospitals concerns elective patient admission planning and the intermediate term allocation of resource capacities. Its main objectives are to achieve equitable access for patients, to meet production targets/to serve the strategically agreed number of patients, and to use resources efficiently. This paper proposes a method to develop a tactical resource allocation and elective patient admission plan. These tactical plans allocate available resources to various care processes and determine the selection of patients to be served that are at a particular stage of their care process. Our method is developed in a Mixed Integer Linear Programming (MILP) framework and copes with multiple resources, multiple time periods and multiple patient groups with various uncertain treatment paths through the hospital, thereby integrating decision making for a chain of hospital resources. Computational results indicate that our method leads to a more equitable distribution of resources and provides control of patient access times, the number of patients served and the fraction of allocated resource capacity. Our approach is generic, as the base MILP and the solution approach allow for including various extensions to both the objective criteria and the constraints. Consequently, the proposed method is applicable in various settings of tactical hospital management.
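
    The paper's full MILP covers multiple resources, time periods, and uncertain care paths; the fragment below is only a single-period sketch in the same spirit, written with the open-source PuLP library and using made-up patient groups, capacities, and weights.

      import pulp

      # Hypothetical patient groups waiting at some stage of their care process.
      groups = ["hip", "cataract", "cardio"]
      wait_weight = {"hip": 3.0, "cataract": 1.0, "cardio": 2.0}   # serving priority
      or_time = {"hip": 2.0, "cataract": 0.5, "cardio": 3.0}       # OR hours per admission
      bed_days = {"hip": 4.0, "cataract": 1.0, "cardio": 6.0}      # ward bed-days per admission
      or_capacity, bed_capacity = 120.0, 400.0                     # capacity in one period

      prob = pulp.LpProblem("tactical_admission_plan", pulp.LpMaximize)
      admit = {g: pulp.LpVariable(f"admit_{g}", lowBound=0, cat="Integer") for g in groups}

      # Objective: weighted number of admitted patients (a proxy for targets and equity).
      prob += pulp.lpSum(wait_weight[g] * admit[g] for g in groups)

      # Shared resource capacities couple the patient groups.
      prob += pulp.lpSum(or_time[g] * admit[g] for g in groups) <= or_capacity
      prob += pulp.lpSum(bed_days[g] * admit[g] for g in groups) <= bed_capacity

      prob.solve(pulp.PULP_CBC_CMD(msg=False))
      for g in groups:
          print(g, int(admit[g].value()))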

  8. Mouse Genome Informatics (MGI): Resources for Mining Mouse Genetic, Genomic, and Biological Data in Support of Primary and Translational Research.

    PubMed

    Eppig, Janan T; Smith, Cynthia L; Blake, Judith A; Ringwald, Martin; Kadin, James A; Richardson, Joel E; Bult, Carol J

    2017-01-01

    The Mouse Genome Informatics (MGI) resource (www.informatics.jax.org) has existed for over 25 years, and over this time its data content, informatics infrastructure, and user interfaces and tools have undergone dramatic changes (Eppig et al., Mamm Genome 26:272-284, 2015). Change has been driven by scientific methodological advances, rapid improvements in computational software, growth in computer hardware capacity, and the ongoing collaborative nature of the mouse genomics community in building resources and sharing data. Here we present an overview of the current data content of MGI, describe its general organization, and provide examples using simple and complex searches, and tools for mining and retrieving sets of data.

  9. A Cloud-Based Simulation Architecture for Pandemic Influenza Simulation

    PubMed Central

    Eriksson, Henrik; Raciti, Massimiliano; Basile, Maurizio; Cunsolo, Alessandro; Fröberg, Anders; Leifler, Ola; Ekberg, Joakim; Timpka, Toomas

    2011-01-01

    High-fidelity simulations of pandemic outbreaks are resource consuming. Cluster-based solutions have been suggested for executing such complex computations. We present a cloud-based simulation architecture that utilizes computing resources both locally available and dynamically rented online. The approach uses the Condor framework for job distribution and management of the Amazon Elastic Computing Cloud (EC2) as well as local resources. The architecture has a web-based user interface that allows users to monitor and control simulation execution. In a benchmark test, the best cost-adjusted performance was recorded for the EC2 H-CPU Medium instance, while a field trial showed that the job configuration had significant influence on the execution time and that the network capacity of the master node could become a bottleneck. We conclude that it is possible to develop a scalable simulation environment that uses cloud-based solutions, while providing an easy-to-use graphical user interface. PMID:22195089

  10. Probabilistic resource allocation system with self-adaptive capability

    NASA Technical Reports Server (NTRS)

    Yufik, Yan M. (Inventor)

    1996-01-01

    A probabilistic resource allocation system is disclosed containing a low capacity computational module (Short Term Memory or STM) and a self-organizing associative network (Long Term Memory or LTM) where nodes represent elementary resources, terminal end nodes represent goals, and directed links represent the order of resource association in different allocation episodes. Goals and their priorities are indicated by the user, and allocation decisions are made in the STM, while candidate associations of resources are supplied by the LTM based on the association strength (reliability). Reliability values are automatically assigned to the network links based on the frequency and relative success of exercising those links in the previous allocation decisions. Accumulation of allocation history in the form of an associative network in the LTM reduces computational demands on subsequent allocations. For this purpose, the network automatically partitions itself into strongly associated high reliability packets, allowing fast approximate computation and display of allocation solutions satisfying the overall reliability and other user-imposed constraints. System performance improves in time due to modification of network parameters and partitioning criteria based on the performance feedback.

  11. Probabilistic resource allocation system with self-adaptive capability

    NASA Technical Reports Server (NTRS)

    Yufik, Yan M. (Inventor)

    1998-01-01

    A probabilistic resource allocation system is disclosed containing a low capacity computational module (Short Term Memory or STM) and a self-organizing associative network (Long Term Memory or LTM) where nodes represent elementary resources, terminal end nodes represent goals, and weighted links represent the order of resource association in different allocation episodes. Goals and their priorities are indicated by the user, and allocation decisions are made in the STM, while candidate associations of resources are supplied by the LTM based on the association strength (reliability). Weights are automatically assigned to the network links based on the frequency and relative success of exercising those links in the previous allocation decisions. Accumulation of allocation history in the form of an associative network in the LTM reduces computational demands on subsequent allocations. For this purpose, the network automatically partitions itself into strongly associated high reliability packets, allowing fast approximate computation and display of allocation solutions satisfying the overall reliability and other user-imposed constraints. System performance improves in time due to modification of network parameters and partitioning criteria based on the performance feedback.

  12. Interoperating Cloud-based Virtual Farms

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Colamaria, F.; Colella, D.; Casula, E.; Elia, D.; Franco, A.; Lusso, S.; Luparello, G.; Masera, M.; Miniello, G.; Mura, D.; Piano, S.; Vallero, S.; Venaruzzo, M.; Vino, G.

    2015-12-01

    The present work aims at optimizing the use of computing resources available at the grid Italian Tier-2 sites of the ALICE experiment at CERN LHC by making them accessible to interactive distributed analysis, thanks to modern solutions based on cloud computing. The scalability and elasticity of the computing resources via dynamic (“on-demand”) provisioning is essentially limited by the size of the computing site, reaching the theoretical optimum only in the asymptotic case of infinite resources. The main challenge of the project is to overcome this limitation by federating different sites through a distributed cloud facility. Storage capacities of the participating sites are seen as a single federated storage area, avoiding the need to mirror data across them: high data access efficiency is guaranteed by location-aware analysis software and storage interfaces, in a transparent way from an end-user perspective. Moreover, the interactive analysis on the federated cloud reduces the execution time with respect to grid batch jobs. The tests of the investigated solutions for both cloud computing and distributed storage on wide area networks will be presented.

  13. Capacity utilization study for aviation security cargo inspection queuing system

    NASA Astrophysics Data System (ADS)

    Allgood, Glenn O.; Olama, Mohammed M.; Lake, Joe E.; Brumback, Daryl

    2010-04-01

    In this paper, we conduct a performance evaluation study for an aviation security cargo inspection queuing system for material flow and accountability. The queuing model employed in our study is based on discrete-event simulation and processes various types of cargo simultaneously. Onsite measurements are collected in an airport facility to validate the queuing model. The overall performance of the aviation security cargo inspection system is computed, analyzed, and optimized for the different system dynamics. Various performance measures are considered such as system capacity, residual capacity, throughput, capacity utilization, subscribed capacity utilization, resources capacity utilization, subscribed resources capacity utilization, and number of cargo pieces (or pallets) in the different queues. These metrics are performance indicators of the system's ability to service current needs and response capacity to additional requests. We studied and analyzed different scenarios by changing various model parameters such as number of pieces per pallet, number of TSA inspectors and ATS personnel, number of forklifts, number of explosives trace detection (ETD) and explosives detection system (EDS) inspection machines, inspection modality distribution, alarm rate, and cargo closeout time. The increased physical understanding resulting from execution of the queuing model utilizing these vetted performance measures should reduce the overall cost and shipping delays associated with new inspection requirements.
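
    The cargo inspection model itself is not reproduced here; the snippet below is only a toy discrete-event queue in the same style, written with the SimPy library, where the arrival and inspection rates and the number of machines are invented placeholders.

      import random
      import simpy

      ARRIVAL_MEAN, INSPECT_MEAN = 4.0, 3.0   # minutes per pallet (placeholder rates)
      waits = []

      def pallet(env, station):
          arrive = env.now
          with station.request() as slot:      # queue for an inspection machine
              yield slot
              waits.append(env.now - arrive)
              yield env.timeout(random.expovariate(1.0 / INSPECT_MEAN))

      def source(env, station):
          while True:
              yield env.timeout(random.expovariate(1.0 / ARRIVAL_MEAN))
              env.process(pallet(env, station))

      env = simpy.Environment()
      station = simpy.Resource(env, capacity=2)   # two inspection machines
      env.process(source(env, station))
      env.run(until=8 * 60)                       # one 8-hour shift

      print(f"pallets inspected: {len(waits)}, mean wait: {sum(waits) / len(waits):.1f} min")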

  14. Capacity Utilization Study for Aviation Security Cargo Inspection Queuing System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allgood, Glenn O; Olama, Mohammed M; Lake, Joe E

    In this paper, we conduct a performance evaluation study for an aviation security cargo inspection queuing system for material flow and accountability. The queuing model employed in our study is based on discrete-event simulation and processes various types of cargo simultaneously. Onsite measurements are collected in an airport facility to validate the queuing model. The overall performance of the aviation security cargo inspection system is computed, analyzed, and optimized for the different system dynamics. Various performance measures are considered such as system capacity, residual capacity, throughput, capacity utilization, subscribed capacity utilization, resources capacity utilization, subscribed resources capacity utilization, and number of cargo pieces (or pallets) in the different queues. These metrics are performance indicators of the system's ability to service current needs and response capacity to additional requests. We studied and analyzed different scenarios by changing various model parameters such as number of pieces per pallet, number of TSA inspectors and ATS personnel, number of forklifts, number of explosives trace detection (ETD) and explosives detection system (EDS) inspection machines, inspection modality distribution, alarm rate, and cargo closeout time. The increased physical understanding resulting from execution of the queuing model utilizing these vetted performance measures should reduce the overall cost and shipping delays associated with new inspection requirements.

  15. Data mining to support simulation modeling of patient flow in hospitals.

    PubMed

    Isken, Mark W; Rajagopalan, Balaji

    2002-04-01

    Spiraling health care costs in the United States are driving institutions to continually address the challenge of optimizing the use of scarce resources. One of the first steps towards optimizing resources is to utilize capacity effectively. For hospital capacity planning problems such as allocation of inpatient beds, computer simulation is often the method of choice. One of the more difficult aspects of using simulation models for such studies is the creation of a manageable set of patient types to include in the model. The objective of this paper is to demonstrate the potential of using data mining techniques, specifically clustering techniques such as K-means, to help guide the development of patient type definitions for purposes of building computer simulation or analytical models of patient flow in hospitals. Using data from a hospital in the Midwest this study brings forth several important issues that researchers need to address when applying clustering techniques in general and specifically to hospital data.
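
    As a hedged sketch of the clustering step described above, the snippet below groups synthetic patient records by length of stay and daily care intensity using scikit-learn's K-means; the features, values, and cluster count are placeholders rather than the hospital data used in the study.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(42)
      # Synthetic stand-in for patient records: [length of stay (days), daily care intensity].
      records = np.vstack([
          rng.normal([2, 1], 0.5, size=(200, 2)),    # short-stay, low-intensity patients
          rng.normal([7, 3], 1.0, size=(150, 2)),    # medium-stay patients
          rng.normal([14, 5], 2.0, size=(50, 2)),    # long-stay, high-intensity patients
      ])

      # Scale the features, then derive candidate patient types for the simulation model.
      X = StandardScaler().fit_transform(records)
      labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

      for k in range(3):
          los, intensity = records[labels == k].mean(axis=0)
          print(f"patient type {k}: mean LOS {los:.1f} d, mean intensity {intensity:.1f}")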

  16. Self-Directed Cooperative Planetary Rovers

    NASA Technical Reports Server (NTRS)

    Zilberstein, Shlomo; Morris, Robert (Technical Monitor)

    2003-01-01

    The project is concerned with the development of decision-theoretic techniques to optimize the scientific return of planetary rovers. Planetary rovers are small unmanned vehicles equipped with cameras and a variety of sensors used for scientific experiments. They must operate under tight constraints over such resources as operation time, power, storage capacity, and communication bandwidth. Moreover, the limited computational resources of the rover limit the complexity of on-line planning and scheduling. We have developed a comprehensive solution to this problem that involves high-level tools to describe a mission; a compiler that maps a mission description and additional probabilistic models of the components of the rover into a Markov decision problem; and algorithms for solving the rover control problem that are sensitive to the limited computational resources and high-level of uncertainty in this domain.
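
    The mission-description compiler and rover models are not reproduced here; the sketch below only shows the generic value-iteration step for the kind of small Markov decision problem the rover control problem is compiled into, using an invented two-state, two-action example.

      import numpy as np

      # Toy MDP: states = {charged, low_power}, actions = {0: do_science, 1: recharge}.
      # P[a][s, s'] is the transition probability, R[a][s] the expected reward.
      P = {
          0: np.array([[0.7, 0.3], [0.2, 0.8]]),   # doing science drains the battery
          1: np.array([[1.0, 0.0], [0.9, 0.1]]),   # recharging restores the charged state
      }
      R = {0: np.array([5.0, 1.0]), 1: np.array([0.0, 0.0])}
      gamma = 0.95

      V = np.zeros(2)
      for _ in range(1000):
          # Bellman backup: value of the best action in every state.
          Q = np.array([R[a] + gamma * P[a] @ V for a in sorted(P)])
          V_new = Q.max(axis=0)
          if np.max(np.abs(V_new - V)) < 1e-8:
              break
          V = V_new

      policy = Q.argmax(axis=0)
      print("value function:", V, "policy (0=do_science, 1=recharge):", policy)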

  17. Changing from computing grid to knowledge grid in life-science grid.

    PubMed

    Talukdar, Veera; Konar, Amit; Datta, Ayan; Choudhury, Anamika Roy

    2009-09-01

    Grid computing has a great potential to become a standard cyber infrastructure for life sciences that often require high-performance computing and large data handling, which exceeds the computing capacity of a single institution. Grid computing applies the resources of many computers in a network to a single problem at the same time. It is useful for scientific problems that require a great number of computer processing cycles or access to a large amount of data. As biologists, we are constantly discovering millions of genes and genome features, which are assembled in a library and distributed on computers around the world. This means that new, innovative methods must be developed that exploit the resources available for extensive calculations - for example grid computing. This survey reviews the latest grid technologies from the viewpoints of computing grid, data grid and knowledge grid. Computing grid technologies have matured enough to solve high-throughput real-world life scientific problems. Data grid technologies are strong candidates for realizing a "resourceome" for bioinformatics. Knowledge grids should be designed not only for sharing explicit knowledge on computers but also around community formation for sharing tacit knowledge within a community. By extending the concept of grid from computing grid to knowledge grid, it is possible to make use of a grid as not only sharable computing resources, but also as time and place in which people work together, create knowledge, and share knowledge and experiences in a community.

  18. In silico discovery of metal-organic frameworks for precombustion CO2 capture using a genetic algorithm

    PubMed Central

    Chung, Yongchul G.; Gómez-Gualdrón, Diego A.; Li, Peng; Leperi, Karson T.; Deria, Pravas; Zhang, Hongda; Vermeulen, Nicolaas A.; Stoddart, J. Fraser; You, Fengqi; Hupp, Joseph T.; Farha, Omar K.; Snurr, Randall Q.

    2016-01-01

    Discovery of new adsorbent materials with a high CO2 working capacity could help reduce CO2 emissions from newly commissioned power plants using precombustion carbon capture. High-throughput computational screening efforts can accelerate the discovery of new adsorbents but sometimes require significant computational resources to explore the large space of possible materials. We report the in silico discovery of high-performing adsorbents for precombustion CO2 capture by applying a genetic algorithm to efficiently search a large database of metal-organic frameworks (MOFs) for top candidates. High-performing MOFs identified from the in silico search were synthesized and activated and show a high CO2 working capacity and a high CO2/H2 selectivity. One of the synthesized MOFs shows a higher CO2 working capacity than any MOF reported in the literature under the operating conditions investigated here. PMID:27757420
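
    The actual screening couples the genetic algorithm to molecular simulations of CO2 and H2 adsorption in candidate MOFs; the sketch below shows only the bare GA loop (selection, crossover, mutation) over a toy fitness function standing in for the simulated working capacity.

      import random

      random.seed(1)
      GENOME_LEN, POP_SIZE, GENERATIONS = 12, 40, 50

      def fitness(genome):
          # Toy surrogate for a simulated CO2 working capacity; a real screen would run
          # a molecular simulation of the material encoded by the genome here.
          return sum(gene * weight for gene, weight in zip(genome, range(1, GENOME_LEN + 1)))

      def mutate(genome, rate=0.05):
          return [1 - g if random.random() < rate else g for g in genome]

      def crossover(a, b):
          cut = random.randrange(1, GENOME_LEN)
          return a[:cut] + b[cut:]

      def select(population):
          # Tournament selection keeps the better of two random candidates.
          a, b = random.sample(population, 2)
          return a if fitness(a) >= fitness(b) else b

      population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
      for _ in range(GENERATIONS):
          population = [mutate(crossover(select(population), select(population)))
                        for _ in range(POP_SIZE)]

      best = max(population, key=fitness)
      print("best genome:", best, "fitness:", fitness(best))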

  19. In vitro data and in silico models for computational toxicology (Teratology Society ILSI HESI workshop)

    EPA Science Inventory

    The challenge of assessing the potential developmental health risks for the tens of thousands of environmental chemicals is beyond the capacity for resource-intensive animal protocols. Large data streams coming from high-throughput (HTS) and high-content (HCS) profiling of biolog...

  20. The DoD's High Performance Computing Modernization Program - Ensuring the National Earth Systems Prediction Capability Becomes Operational

    NASA Astrophysics Data System (ADS)

    Burnett, W.

    2016-12-01

    The Department of Defense's (DoD) High Performance Computing Modernization Program (HPCMP) provides high performance computing to address the most significant challenges in computational resources, software application support and nationwide research and engineering networks. Today, the HPCMP has a critical role in ensuring the National Earth System Prediction Capability (N-ESPC) achieves initial operational status in 2019. A 2015 study commissioned by the HPCMP found that N-ESPC computational requirements will exceed interconnect bandwidth capacity due to the additional load from data assimilation and passing connecting data between ensemble codes. Memory bandwidth and I/O bandwidth will continue to be significant bottlenecks for the Navy's Hybrid Coordinate Ocean Model (HYCOM) scalability - by far the major driver of computing resource requirements in the N-ESPC. The study also found that few of the N-ESPC model developers have detailed plans to ensure their respective codes scale through 2024. Three HPCMP initiatives are designed to directly address and support these issues: Productivity Enhancement, Technology, Transfer and Training (PETTT), the HPCMP Applications Software Initiative (HASI), and Frontier Projects. PETTT supports code conversion by providing assistance, expertise and training in scalable and high-end computing architectures. HASI addresses the continuing need for modern application software that executes effectively and efficiently on next-generation high-performance computers. Frontier Projects enable research and development that could not be achieved using typical HPCMP resources by providing multi-disciplinary teams access to exceptional amounts of high performance computing resources. Finally, the Navy's DoD Supercomputing Resource Center (DSRC) currently operates a 6 Petabyte system, of which Naval Oceanography receives 15% of operational computational system use, or approximately 1 Petabyte of the processing capability. The DSRC will provide the DoD with future computing assets to initially operate the N-ESPC in 2019. This talk will further describe how DoD's HPCMP will ensure N-ESPC becomes operational, efficiently and effectively, using next-generation high performance computing.

  1. Efficient Redundancy Techniques in Cloud and Desktop Grid Systems using MAP/G/c-type Queues

    NASA Astrophysics Data System (ADS)

    Chakravarthy, Srinivas R.; Rumyantsev, Alexander

    2018-03-01

    Cloud computing is continuing to prove its flexibility and versatility in helping industries and businesses as well as academia as a way of providing needed computing capacity. As an important alternative to cloud computing, desktop grids allow the idle computer resources of an enterprise/community to be utilized by means of a distributed computing system, providing a more secure and controllable environment with lower operational expenses. Further, both cloud computing and desktop grids are meant to optimize limited resources and at the same time to decrease the expected latency for users. The crucial parameter for optimization both in cloud computing and in desktop grids is the level of redundancy (replication) for service requests/workunits. In this paper we study the optimal replication policies by considering three variations of Fork-Join systems in the context of a multi-server queueing system with a versatile point process for the arrivals. For services we consider phase type distributions as well as shifted exponential and Weibull. We use both analytical and simulation approaches in our analysis and report some interesting qualitative results.
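
    The exact MAP/G/c analysis is not reproduced here; the Monte Carlo sketch below only illustrates the redundancy trade-off the paper studies: replicating a workunit on r servers, taking the earliest completion, and measuring latency against the extra capacity consumed. The shifted-exponential service times and their parameters are assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(7)
      N_JOBS, SHIFT, MEAN_EXP = 100_000, 1.0, 4.0   # shifted-exponential service (placeholders)

      def simulate(replicas):
          # Each job is sent to `replicas` servers; the job finishes when the first copy does.
          samples = SHIFT + rng.exponential(MEAN_EXP, size=(N_JOBS, replicas))
          latency = samples.min(axis=1)
          wasted = samples.sum(axis=1) - latency    # capacity burned by the slower copies
          return latency.mean(), wasted.mean()

      for r in (1, 2, 3, 4):
          latency, waste = simulate(r)
          print(f"replicas={r}: mean latency {latency:.2f}, mean wasted work {waste:.2f}")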

  2. Climate simulations and services on HPC, Cloud and Grid infrastructures

    NASA Astrophysics Data System (ADS)

    Cofino, Antonio S.; Blanco, Carlos; Minondo Tshuma, Antonio

    2017-04-01

    Cloud, Grid and High Performance Computing have changed the accessibility and availability of computing resources for Earth Science research communities, especially for the Climate community. These paradigms are modifying the way climate applications are executed. By using these technologies the number, variety and complexity of experiments and resources are increasing substantially. But, although computational capacity is increasing, traditional applications and tools used by the community are not good enough to manage this large volume and variety of experiments and computing resources. In this contribution, we evaluate the challenges of running climate simulations and services on Grid, Cloud and HPC infrastructures and how to tackle them. The Grid and Cloud infrastructures provided by EGI's VOs (esr, earth.vo.ibergrid and fedcloud.egi.eu) will be evaluated, as well as HPC resources from the PRACE infrastructure and institutional clusters. To solve those challenges, solutions using the DRM4G framework will be shown. DRM4G provides a good framework to manage a large volume and variety of computing resources for climate experiments. This work has been supported by the Spanish National R&D Plan under projects WRF4G (CGL2011-28864), INSIGNIA (CGL2016-79210-R) and MULTI-SDM (CGL2015-66583-R); the IS-ENES2 project from the 7FP of the European Commission (grant agreement no. 312979); the European Regional Development Fund—ERDF and the Programa de Personal Investigador en Formación Predoctoral from Universidad de Cantabria and Government of Cantabria.

  3. System capacity and economic modeling computer tool for satellite mobile communications systems

    NASA Technical Reports Server (NTRS)

    Wiedeman, Robert A.; Wen, Doong; Mccracken, Albert G.

    1988-01-01

    A unique computer modeling tool that combines an engineering tool with a financial analysis program is described. The resulting combination yields a flexible economic model that can predict the cost effectiveness of various mobile systems. Cost modeling is necessary in order to ascertain if a given system with a finite satellite resource is capable of supporting itself financially and to determine what services can be supported. Personal computer techniques using Lotus 123 are used for the model in order to provide as universal an application as possible such that the model can be used and modified to fit many situations and conditions. The output of the engineering portion of the model consists of a channel capacity analysis and link calculations for several qualities of service using up to 16 types of earth terminal configurations. The outputs of the financial model are a revenue analysis, an income statement, and a cost model validation section.

  4. Cognitive performance modeling based on general systems performance theory.

    PubMed

    Kondraske, George V

    2010-01-01

    General Systems Performance Theory (GSPT) was initially motivated by problems associated with quantifying different aspects of human performance. It has proved to be invaluable for measurement development and understanding quantitative relationships between human subsystem capacities and performance in complex tasks. It is now desired to bring focus to the application of GSPT to modeling of cognitive system performance. Previous studies involving two complex tasks (i.e., driving and performing laparoscopic surgery) and incorporating measures that are clearly related to cognitive performance (information processing speed and short-term memory capacity) were revisited. A GSPT-derived method of task analysis and performance prediction termed Nonlinear Causal Resource Analysis (NCRA) was employed to determine the demand on basic cognitive performance resources required to support different levels of complex task performance. This approach is presented as a means to determine a cognitive workload profile and the subsequent computation of a single number measure of cognitive workload (CW). Computation of CW may be a viable alternative to measuring it. Various possible "more basic" performance resources that contribute to cognitive system performance are discussed. It is concluded from this preliminary exploration that a GSPT-based approach can contribute to defining cognitive performance models that are useful for both individual subjects and specific groups (e.g., military pilots).

  5. The Czech National Grid Infrastructure

    NASA Astrophysics Data System (ADS)

    Chudoba, J.; Křenková, I.; Mulač, M.; Ruda, M.; Sitera, J.

    2017-10-01

    The Czech National Grid Infrastructure is operated by MetaCentrum, a CESNET department responsible for coordinating and managing activities related to distributed computing. CESNET, as the Czech National Research and Education Network (NREN), provides many e-infrastructure services, which are used by 94% of the scientific and research community in the Czech Republic. Computing and storage resources owned by different organizations are connected by a sufficiently fast network to provide transparent access to all resources. We describe in more detail the computing infrastructure, which is based on several different technologies and covers grid, cloud and map-reduce environments. While the largest part of the CPUs is still accessible via distributed TORQUE servers, providing an environment for long batch jobs, part of the infrastructure is available via standard EGI tools, a subset of NGI resources is provided to the EGI FedCloud environment with a cloud interface, and there is also a Hadoop cluster provided by the same e-infrastructure. A broad spectrum of computing servers is offered; users can choose from standard 2-CPU servers to large SMP machines with up to 6 TB of RAM or servers with GPU cards. Different groups have different priorities on various resources, and resource owners can even have exclusive access. The software is distributed via AFS. Storage servers offering up to tens of terabytes of disk space to individual users are connected via NFS4 on top of GPFS, and access to long-term HSM storage with petabyte capacity is also provided. An overview of available resources and recent usage statistics will be given.

  6. Modeling individual differences in working memory performance: a source activation account

    PubMed Central

    Daily, Larry Z.; Lovett, Marsha C.; Reder, Lynne M.

    2008-01-01

    Working memory resources are needed for processing and maintenance of information during cognitive tasks. Many models have been developed to capture the effects of limited working memory resources on performance. However, most of these models do not account for the finding that different individuals show different sensitivities to working memory demands, and none of the models predicts individual subjects' patterns of performance. We propose a computational model that accounts for differences in working memory capacity in terms of a quantity called source activation, which is used to maintain goal-relevant information in an available state. We apply this model to capture the working memory effects of individual subjects at a fine level of detail across two experiments. This, we argue, strengthens the interpretation of source activation as working memory capacity. PMID:19079561

  7. Perspectives on the Future of CFD

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan

    2000-01-01

    This viewgraph presentation gives an overview of the future of computational fluid dynamics (CFD), which in the past has pioneered the field of flow simulation. Over time CFD has progressed along with computing power, and numerical methods have advanced as CPU and memory capacity have increased. Complex configurations are routinely computed now, and direct numerical simulations (DNS) and large eddy simulations (LES) are used to study turbulence. As the computing resources changed to parallel and distributed platforms, computer science aspects such as scalability (algorithmic and implementation), portability and transparent coding have advanced. Examples of potential future (or current) challenges include risk assessment, limitations of the heuristic model, and the development of CFD and information technology (IT) tools.

  8. Cross stratum resources protection in fog-computing-based radio over fiber networks for 5G services

    NASA Astrophysics Data System (ADS)

    Guo, Shaoyong; Shao, Sujie; Wang, Yao; Yang, Hui

    2017-09-01

    In order to meet the requirements of the internet of things (IoT) and 5G, the cloud radio access network is a paradigm that converges all base stations' computational resources into a cloud baseband unit (BBU) pool, while the distributed radio frequency signals are collected by remote radio heads (RRH). A precondition for centralized processing in the BBU pool is an interconnection fronthaul network with high capacity and low delay. However, the interaction between RRH and BBU and the resource scheduling among BBUs in the cloud have become more complex and frequent. A cloud radio over fiber network was proposed in our previous work. In order to overcome the complexity and latency, in this paper we first present a novel cross stratum resources protection (CSRP) architecture in fog-computing-based radio over fiber networks (F-RoFN) for 5G services. Additionally, a cross stratum protection (CSP) scheme considering the network survivability is introduced in the proposed architecture. The CSRP with the CSP scheme can effectively pull remote processing resources locally to implement cooperative radio resource management, enhance the responsiveness and resilience to dynamic end-to-end 5G service demands, and globally optimize optical network, wireless and fog resources. The feasibility and efficiency of the proposed architecture with the CSP scheme are verified on our software defined networking testbed in terms of service latency, transmission success rate, resource occupation rate and blocking probability.

  9. Preliminary research on quantitative methods of water resources carrying capacity based on water resources balance sheet

    NASA Astrophysics Data System (ADS)

    Wang, Yanqiu; Huang, Xiaorong; Gao, Linyun; Guo, Biying; Ma, Kai

    2018-06-01

    Water resources are not only basic natural resources, but also strategic economic resources and ecological control factors. Water resources carrying capacity constrains the sustainable development of regional economy and society. Studies of water resources carrying capacity can provide helpful information about how the socioeconomic system is both supported and restrained by the water resources system. Based on the research of different scholars, major problems in the study of water resources carrying capacity were summarized as follows: the definition of water resources carrying capacity is not yet unified; the methods of carrying capacity quantification based on the definition of inconsistency are poor in operability; the current quantitative research methods of water resources carrying capacity did not fully reflect the principles of sustainable development; it is difficult to quantify the relationship among the water resources, economic society and ecological environment. Therefore, it is necessary to develop a better quantitative evaluation method to determine the regional water resources carrying capacity. This paper proposes a new approach to quantifying water resources carrying capacity (that is, through the compilation of the water resources balance sheet) to get a grasp of the regional water resources depletion and water environmental degradation (as well as regional water resources stock assets and liabilities), figure out the squeeze of socioeconomic activities on the environment, and discuss the quantitative calculation methods and technical route of water resources carrying capacity which are able to embody the substance of sustainable development.

  10. Computer software tool REALM for sustainable water allocation and management.

    PubMed

    Perera, B J C; James, B; Kularathna, M D U

    2005-12-01

    REALM (REsource ALlocation Model) is a generalised computer simulation package that models harvesting and bulk distribution of water resources within a water supply system. It is a modeling tool, which can be applied to develop specific water allocation models. Like other water resource simulation software tools, REALM uses mass-balance accounting at nodes, while the movement of water within carriers is subject to capacity constraints. It uses a fast network linear programming algorithm to optimise the water allocation within the network during each simulation time step, in accordance with user-defined operating rules. This paper describes the main features of REALM and provides potential users with an appreciation of its capabilities. In particular, it describes two case studies covering major urban and rural water supply systems. These case studies illustrate REALM's capabilities in the use of stochastically generated data in water supply planning and management, modelling of environmental flows, and assessing security of supply issues.

  11. Cloud-Based Numerical Weather Prediction for Near Real-Time Forecasting and Disaster Response

    NASA Technical Reports Server (NTRS)

    Molthan, Andrew; Case, Jonathan; Venners, Jason; Schroeder, Richard; Checchi, Milton; Zavodsky, Bradley; Limaye, Ashutosh; O'Brien, Raymond

    2015-01-01

    The use of cloud computing resources continues to grow within the public and private sector components of the weather enterprise as users become more familiar with cloud-computing concepts, and competition among service providers continues to reduce costs and other barriers to entry. Cloud resources can also provide capabilities similar to high-performance computing environments, supporting multi-node systems required for near real-time, regional weather predictions. Referred to as "Infrastructure as a Service", or IaaS, the use of cloud-based computing hardware in an on-demand payment system allows for rapid deployment of a modeling system in environments lacking access to a large, supercomputing infrastructure. Use of IaaS capabilities to support regional weather prediction may be of particular interest to developing countries that have not yet established large supercomputing resources, but would otherwise benefit from a regional weather forecasting capability. Recently, collaborators from NASA Marshall Space Flight Center and Ames Research Center have developed a scripted, on-demand capability for launching the NOAA/NWS Science and Training Resource Center (STRC) Environmental Modeling System (EMS), which includes pre-compiled binaries of the latest version of the Weather Research and Forecasting (WRF) model. The WRF-EMS provides scripting for downloading appropriate initial and boundary conditions from global models, along with higher-resolution vegetation, land surface, and sea surface temperature data sets provided by the NASA Short-term Prediction Research and Transition (SPoRT) Center. This presentation will provide an overview of the modeling system capabilities and benchmarks performed on the Amazon Elastic Compute Cloud (EC2) environment. In addition, the presentation will discuss future opportunities to deploy the system in support of weather prediction in developing countries supported by NASA's SERVIR Project, which provides capacity building activities in environmental monitoring and prediction across a growing number of regional hubs throughout the world. Capacity-building applications that extend numerical weather prediction to developing countries are intended to provide near real-time applications to benefit public health, safety, and economic interests, but may have a greater impact during disaster events by providing a source for local predictions of weather-related hazards, or impacts that local weather events may have during the recovery phase.
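
    A minimal sketch of the IaaS step described above, using the boto3 library to request a single compute node on EC2; the AMI, instance type, key name, and region are placeholders rather than the actual SPoRT or WRF-EMS configuration.

      import boto3

      ec2 = boto3.client("ec2", region_name="us-east-1")

      # Launch one on-demand node for a model run (all identifiers are hypothetical).
      response = ec2.run_instances(
          ImageId="ami-0123456789abcdef0",   # image assumed to be pre-loaded with the modeling system
          InstanceType="c5.4xlarge",
          MinCount=1,
          MaxCount=1,
          KeyName="forecast-key",
      )
      instance_id = response["Instances"][0]["InstanceId"]

      # Wait until the instance is running before starting the forecast workflow.
      ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
      print("instance ready:", instance_id)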

  12. The OSG open facility: A sharing ecosystem

    DOE PAGES

    Jayatilaka, B.; Levshina, T.; Rynge, M.; ...

    2015-12-23

    The Open Science Grid (OSG) ties together individual experiments’ computing power, connecting their resources to create a large, robust computing grid. This computing infrastructure started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero. In the years since, the OSG has broadened its focus to also address the needs of other US researchers and increased delivery of Distributed High Throughput Computing (DHTC) to users from a wide variety of disciplines via the OSG Open Facility. Presently, the Open Facility delivers about 100 million computing wall hours per year to researchers who are not already associated with the owners of the computing sites. This is primarily accomplished by harvesting and organizing the temporarily unused capacity (i.e. opportunistic cycles) from the sites in the OSG. Using these methods, OSG resource providers and scientists share computing hours with researchers in many other fields to enable their science, striving to make sure that this computing power is used with maximal efficiency. Furthermore, we believe that expanded access to DHTC is an essential tool for scientific innovation and work continues in expanding this service.

  13. Achieving production-level use of HEP software at the Argonne Leadership Computing Facility

    NASA Astrophysics Data System (ADS)

    Uram, T. D.; Childers, J. T.; LeCompte, T. J.; Papka, M. E.; Benjamin, D.

    2015-12-01

    HEP's demand for computing resources has grown beyond the capacity of the Grid, and these demands will accelerate with the higher energy and luminosity planned for Run II. Mira, the ten petaFLOPs supercomputer at the Argonne Leadership Computing Facility, is a potentially significant compute resource for HEP research. Through an award of fifty million hours on Mira, we have delivered millions of events to LHC experiments by establishing the means of marshaling jobs through serial stages on local clusters, and parallel stages on Mira. We are running several HEP applications, including Alpgen, Pythia, Sherpa, and Geant4. Event generators, such as Sherpa, typically have a split workload: a small scale integration phase, and a second, more scalable, event-generation phase. To accommodate this workload on Mira we have developed two Python-based Django applications, Balsam and ARGO. Balsam is a generalized scheduler interface which uses a plugin system for interacting with scheduler software such as HTCondor, Cobalt, and TORQUE. ARGO is a workflow manager that submits jobs to instances of Balsam. Through these mechanisms, the serial and parallel tasks within jobs are executed on the appropriate resources. This approach and its integration with the PanDA production system will be discussed.

  14. Data management and its role in delivering science at DOE BES user facilities - Past, Present, and Future

    NASA Astrophysics Data System (ADS)

    Miller, Stephen D.; Herwig, Kenneth W.; Ren, Shelly; Vazhkudai, Sudharshan S.; Jemian, Pete R.; Luitz, Steffen; Salnikov, Andrei A.; Gaponenko, Igor; Proffen, Thomas; Lewis, Paul; Green, Mark L.

    2009-07-01

    The primary mission of user facilities operated by Basic Energy Sciences under the Department of Energy is to produce data for users in support of open science and basic research [1]. We trace back almost 30 years of history across selected user facilities illustrating the evolution of facility data management practices and how these practices have related to performing scientific research. The facilities cover multiple techniques such as X-ray and neutron scattering, imaging and tomography sciences. Over time, detector and data acquisition technologies have dramatically increased the ability to produce prolific volumes of data challenging the traditional paradigm of users taking data home upon completion of their experiments to process and publish their results. During this time, computing capacity has also increased dramatically, though the size of the data has grown significantly faster than the capacity of one's laptop to manage and process this new facility produced data. Trends indicate that this will continue to be the case for yet some time. Thus users face a quandary for how to manage today's data complexity and size as these may exceed the computing resources users have available to themselves. This same quandary can also stifle collaboration and sharing. Realizing this, some facilities are already providing web portal access to data and computing thereby providing users access to resources they need [2]. Portal based computing is now driving researchers to think about how to use the data collected at multiple facilities in an integrated way to perform their research, and also how to collaborate and share data. In the future, inter-facility data management systems will enable next tier cross-instrument-cross facility scientific research fuelled by smart applications residing upon user computer resources. We can learn from the medical imaging community that has been working since the early 1990's to integrate data from across multiple modalities to achieve better diagnoses [3] - similarly, data fusion across BES facilities will lead to new scientific discoveries.

  15. Modeling "Throughput Capacity": Using Computational Thinking to Envision More Graduates without Investing More Resources

    ERIC Educational Resources Information Center

    Wick, Michael R.; Kleine, Patricia A.; Nelson, Andrew J.

    2011-01-01

    This article presents the development, testing, and application of an enrollment model. The model incorporates incoming freshman enrollment class size and historical persistence, transfer, and graduation rates to predict a six-year enrollment window and associated annual graduate production. The model predicts six-year enrollment to within 0.67…
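
    The ERIC abstract above truncates the model details; the snippet below is only a generic cohort-flow sketch of the idea it describes, stepping an incoming freshman class through year-to-year persistence and graduation rates that are invented for illustration.

      # Illustrative year-to-year rates (placeholders); losses include both attrition and graduation.
      persist = [0.80, 0.85, 0.90, 0.92, 0.95]          # fraction continuing to the next year
      graduate = [0.00, 0.00, 0.05, 0.45, 0.35, 0.15]   # fraction of the cohort graduating in years 1..6

      def project(freshman_class, years=6):
          cohort, enrolled, grads = float(freshman_class), [], 0.0
          for year in range(years):
              enrolled.append(cohort)
              grads += cohort * graduate[year]
              cohort *= persist[year] if year < len(persist) else 0.0
          return enrolled, grads

      enrollment, graduates = project(2000)
      print("six-year enrollment window:", [round(e) for e in enrollment])
      print("graduates produced by the cohort:", round(graduates))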

  16. Simulation of LHC events on a million threads

    NASA Astrophysics Data System (ADS)

    Childers, J. T.; Uram, T. D.; LeCompte, T. J.; Papka, M. E.; Benjamin, D. P.

    2015-12-01

    Demand for Grid resources is expected to double during LHC Run II as compared to Run I; the capacity of the Grid, however, will not double. The HEP community must consider how to bridge this computing gap by targeting larger compute resources and using the available compute resources as efficiently as possible. Argonne's Mira, the fifth fastest supercomputer in the world, can run roughly five times the number of parallel processes that the ATLAS experiment typically uses on the Grid. We ported Alpgen, a serial x86 code, to run as a parallel application under MPI on the Blue Gene/Q architecture. By analysis of the Alpgen code, we reduced the memory footprint to allow running 64 threads per node, utilizing the four hardware threads available per core on the PowerPC A2 processor. Event generation and unweighting, typically run as independent serial phases, are coupled together in a single job in this scenario, reducing intermediate writes to the filesystem. By these optimizations, we have successfully run LHC proton-proton physics event generation at the scale of a million threads, filling two-thirds of Mira.

  17. Application-oriented offloading in heterogeneous networks for mobile cloud computing

    NASA Astrophysics Data System (ADS)

    Tseng, Fan-Hsun; Cho, Hsin-Hung; Chang, Kai-Di; Li, Jheng-Cong; Shih, Timothy K.

    2018-04-01

    Nowadays, Internet applications have become so complicated that mobile devices need more computing resources to achieve shorter execution times, but they are restricted by limited battery capacity. Mobile cloud computing (MCC) has emerged to tackle the finite resource problem of mobile devices. MCC offloads the tasks and jobs of mobile devices to cloud and fog environments by using an offloading scheme. It is vital to MCC to decide which tasks should be offloaded and how to offload them efficiently. In the paper, we formulate the offloading problem between mobile device and cloud data center and propose two application-oriented algorithms for minimum execution time, i.e. the Minimum Offloading Time for Mobile device (MOTM) algorithm and the Minimum Execution Time for Cloud data center (METC) algorithm. The MOTM algorithm minimizes offloading time by selecting appropriate offloading links based on application categories. The METC algorithm minimizes execution time in the cloud data center by selecting virtual and physical machines with corresponding resource requirements of applications. Simulation results show that the proposed mechanism not only minimizes total execution time for mobile devices but also decreases their energy consumption.
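
    The MOTM and METC algorithms themselves are not spelled out in the abstract; the sketch below only captures the basic offloading trade-off they build on, comparing local execution time against transfer-plus-remote execution time over candidate links. All bandwidths, cycle counts, and clock rates are invented.

      # Candidate offloading links and their uplink bandwidths in bits per second (illustrative).
      links = {"WiFi": 20e6, "LTE": 8e6, "5G-NR": 80e6}

      def best_offload(task_bits, cycles, local_hz, cloud_hz, cloud_queue_s=0.05):
          """Pick local execution or the link that minimizes total completion time."""
          best = ("local", cycles / local_hz)
          for name, bandwidth in links.items():
              remote_time = task_bits / bandwidth + cloud_queue_s + cycles / cloud_hz
              if remote_time < best[1]:
                  best = (name, remote_time)
          return best

      # Example: a 4 MB task needing 2e9 cycles, 1.5 GHz device versus a 16 GHz-equivalent VM.
      choice, seconds = best_offload(task_bits=4 * 8e6, cycles=2e9,
                                     local_hz=1.5e9, cloud_hz=16e9)
      print(f"execute via {choice}, estimated completion {seconds:.2f} s")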

  18. 2005 White Paper on Institutional Capability Computing Requirements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carnes, B; McCoy, M; Seager, M

    This paper documents the need for a significant increase in the computing infrastructure provided to scientists working in the unclassified domains at Lawrence Livermore National Laboratory (LLNL). This need could be viewed as the next step in a broad strategy outlined in the January 2002 White Paper (UCRL-ID-147449) that bears essentially the same name as this document. Therein we wrote: 'This proposed increase could be viewed as a step in a broader strategy linking hardware evolution to applications development that would take LLNL unclassified computational science to a position of distinction if not preeminence by 2006.' This position of distinction has certainly been achieved. This paper provides a strategy for sustaining this success but will diverge from its 2002 predecessor in that it will: (1) Amplify the scientific and external success LLNL has enjoyed because of the investments made in 2002 (MCR, 11 TF) and 2004 (Thunder, 23 TF). (2) Describe in detail the nature of additional investments that are important to meet both the institutional objectives of advanced capability for breakthrough science and the scientists' clearly stated request for adequate capacity and more rapid access to moderate-sized resources. (3) Put these requirements in the context of an overall strategy for simulation science and external collaboration. While our strategy for Multiprogrammatic and Institutional Computing (M&IC) has worked well, three challenges must be addressed to assure and enhance our position. The first is that while we now have over 50 important classified and unclassified simulation codes available for use by our computational scientists, we find ourselves coping with high demand for access and long queue wait times. This point was driven home in the 2005 Institutional Computing Executive Group (ICEG) 'Report Card' to the Deputy Director for Science and Technology (DDST) Office and Computation Directorate management. The second challenge is related to the balance that should be maintained in the simulation environment. With the advent of Thunder, the institution directed a change in course from past practice. Instead of making Thunder available to the large body of scientists, as was MCR, and effectively using it as a capacity system, the intent was to make it available to perhaps ten projects so that these teams could run very aggressive problems for breakthrough science. This usage model established Thunder as a capability system. The challenge this strategy raises is that the majority of scientists have not seen an improvement in capacity computing resources since MCR, thus creating significant tension in the system. The question then is: 'How do we address the institution's desire to maintain the potential for breakthrough science and also meet the legitimate requests from the ICEG to achieve balance?' Both the capability and the capacity environments must be addressed through this one procurement. The third challenge is to reach out more aggressively to the national science community to encourage access to LLNL resources as part of a strategy for sharpening our science through collaboration. Related to this, LLNL has been unable in the past to provide access for sensitive foreign nationals (SFNs) to the Livermore Computing (LC) unclassified 'yellow' network.
Identifying some mechanism for data sharing between LLNL computational scientists and SFNs would be a first practical step in fostering cooperative, collaborative relationships with an important and growing sector of the American science community.

  19. The DYNES Instrument: A Description and Overview

    NASA Astrophysics Data System (ADS)

    Zurawski, Jason; Ball, Robert; Barczyk, Artur; Binkley, Mathew; Boote, Jeff; Boyd, Eric; Brown, Aaron; Brown, Robert; Lehman, Tom; McKee, Shawn; Meekhof, Benjeman; Mughal, Azher; Newman, Harvey; Rozsa, Sandor; Sheldon, Paul; Tackett, Alan; Voicu, Ramiro; Wolff, Stephen; Yang, Xi

    2012-12-01

    Scientific innovation continues to increase requirements for the computing and networking infrastructures of the world. Collaborative partners, instrumentation, storage, and processing facilities are often geographically and topologically separated, as is the case with LHC virtual organizations. These separations challenge the technology used to interconnect available resources, often delivered by Research and Education (R&E) networking providers, and lead to complications in the overall process of end-to-end data management. Capacity and traffic management are key concerns of R&E network operators; a delicate balance is required to serve both long-lived, high capacity network flows, as well as more traditional end-user activities. The advent of dynamic circuit services, a technology that enables the creation of variable duration, guaranteed bandwidth networking channels, allows for the efficient use of common network infrastructures. These gains are seen particularly in locations where overall capacity is scarce compared to the (sustained peak) needs of user communities. Related efforts, including those of the LHCOPN [3] operations group and the emerging LHCONE [4] project, may take advantage of available resources by designating specific network activities as a “high priority”, allowing reservation of dedicated bandwidth or optimizing for deadline scheduling and predictable delivery patterns. This paper presents the DYNES instrument, an NSF funded cyberinfrastructure project designed to facilitate end-to-end dynamic circuit services [2]. This combination of hardware and software innovation is being deployed across R&E networks in the United States at selected end-sites located on University Campuses. DYNES is peering with international efforts in other countries using similar solutions, and is increasing the reach of this emerging technology. This global data movement solution could be integrated into computing paradigms such as cloud and grid computing platforms, and through the use of APIs can be integrated into existing data movement software.

  20. Construction of an evaluation index system of water resources bearing capacity: An empirical study in Xi’an, China

    NASA Astrophysics Data System (ADS)

    Qu, X. E.; Zhang, L. L.

    2017-08-01

    In this paper, a comprehensive evaluation of the water resources bearing capacity of Xi’an is performed. By constructing a comprehensive evaluation index system of the water resources bearing capacity that included water resources, economy, society, and ecological environment, we empirically studied the dynamic change and regional differences of the water resources bearing capacities of Xi’an districts through the TOPSIS method (Technique for Order Preference by Similarity to an Ideal Solution). Results show that the water resources bearing capacity of Xi’an significantly increased over time, and the contributions of the subsystems from high to low are as follows: water resources subsystem, social subsystem, ecological subsystem, and economic subsystem. Furthermore, there are large differences between the water resources bearing capacities of the different districts in Xi’an. The water resources bearing capacities from high to low are urban areas, Huxian, Zhouzhi, Gaoling, and Lantian. Overall, the water resources bearing capacity of Xi’an is still at a relatively low level, which is highly related to the scarcity of water resources, population pressure, insufficient water saving consciousness, irrational industrial structure, low water-use efficiency, and so on.
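
    TOPSIS itself is a standard multi-criteria ranking method; the abstract does not provide the study's indicator data or weights. The sketch below is a generic TOPSIS implementation with made-up district scores, intended only to illustrate the ranking step, not to reproduce the paper's evaluation.

      import numpy as np

      def topsis(matrix, weights, benefit):
          """Generic TOPSIS ranking.
          matrix: alternatives x criteria; weights: criterion weights;
          benefit: True where larger values are better for that criterion."""
          m = matrix / np.linalg.norm(matrix, axis=0)      # vector normalization per criterion
          v = m * weights                                  # weighted normalized matrix
          ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
          anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
          d_plus = np.linalg.norm(v - ideal, axis=1)       # distance to ideal solution
          d_minus = np.linalg.norm(v - anti, axis=1)       # distance to anti-ideal solution
          return d_minus / (d_plus + d_minus)              # closeness coefficient (higher = better)

      # Made-up indicator values for illustration (rows: districts, cols: indicators).
      data = np.array([[0.8, 0.6, 0.7, 0.5],
                       [0.4, 0.7, 0.5, 0.6],
                       [0.6, 0.5, 0.4, 0.8]])
      weights = np.array([0.3, 0.3, 0.2, 0.2])
      benefit = np.array([True, True, True, False])        # e.g., last indicator is a "cost" criterion
      print(topsis(data, weights, benefit))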

  1. Utilization of Cloud Computing in Education and Research to the Attainment of Millennium Development Goals and Vision 2030 in Kenya

    ERIC Educational Resources Information Center

    Waga, Duncan; Makori, Esther; Rabah, Kefa

    2014-01-01

    Kenya's educational and research fraternity has a highly qualified human resource capacity with globally gained experience. However, each research entity works in isolation due to the absence of a common digital platform, while educational units do not even have basic infrastructure. For sustainability of education and research progression,…

  2. Trends in life science grid: from computing grid to knowledge grid.

    PubMed

    Konagaya, Akihiko

    2006-12-18

    Grid computing has great potential to become a standard cyberinfrastructure for the life sciences, which often require high-performance computing and large-scale data handling that exceed the computing capacity of a single institution. This survey reviews the latest grid technologies from the viewpoints of computing grid, data grid and knowledge grid. Computing grid technologies have matured enough to solve high-throughput real-world life scientific problems. Data grid technologies are strong candidates for realizing a "resourceome" for bioinformatics. Knowledge grids should be designed not only for sharing explicit knowledge on computers but also for forming communities in which tacit knowledge can be shared. By extending the concept of the grid from computing grid to knowledge grid, it is possible to use a grid not only as sharable computing resources, but also as a time and place in which people work together, create knowledge, and share knowledge and experiences in a community.

  3. Trends in life science grid: from computing grid to knowledge grid

    PubMed Central

    Konagaya, Akihiko

    2006-01-01

    Background Grid computing has great potential to become a standard cyberinfrastructure for the life sciences, which often require high-performance computing and large-scale data handling that exceed the computing capacity of a single institution. Results This survey reviews the latest grid technologies from the viewpoints of computing grid, data grid and knowledge grid. Computing grid technologies have matured enough to solve high-throughput real-world life scientific problems. Data grid technologies are strong candidates for realizing a "resourceome" for bioinformatics. Knowledge grids should be designed not only for sharing explicit knowledge on computers but also for forming communities in which tacit knowledge can be shared. Conclusion By extending the concept of the grid from computing grid to knowledge grid, it is possible to use a grid not only as sharable computing resources, but also as a time and place in which people work together, create knowledge, and share knowledge and experiences in a community. PMID:17254294

  4. Orchestrating Distributed Resource Ensembles for Petascale Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baldin, Ilya; Mandal, Anirban; Ruth, Paul

    2014-04-24

    Distributed, data-intensive computational science applications of interest to DOE scientific communities move large amounts of data for experiment data management, distributed analysis steps, remote visualization, and accessing scientific instruments. These applications need to orchestrate ensembles of resources from multiple resource pools and interconnect them with high-capacity multi-layered networks across multiple domains. It is highly desirable that mechanisms are designed that provide this type of resource provisioning capability to a broad class of applications. It is also important to have coherent monitoring capabilities for such complex distributed environments. In this project, we addressed these problems by designing an abstract API, enabled by novel semantic resource descriptions, for provisioning complex and heterogeneous resources from multiple providers using their native provisioning mechanisms and control planes: computational, storage, and multi-layered high-speed network domains. We used an extensible resource representation based on semantic web technologies to afford maximum flexibility to applications in specifying their needs. We evaluated the effectiveness of provisioning using representative data-intensive applications. We also developed mechanisms for providing feedback about resource performance to the application, to enable closed-loop feedback control and dynamic adjustments to resource allocations (elasticity). This was enabled through development of a novel persistent query framework that consumes disparate sources of monitoring data, including perfSONAR, and provides scalable distribution of asynchronous notifications.

  5. Humanity's unsustainable environmental footprint.

    PubMed

    Hoekstra, Arjen Y; Wiedmann, Thomas O

    2014-06-06

    Within the context of Earth's limited natural resources and assimilation capacity, the current environmental footprint of humankind is not sustainable. Assessing land, water, energy, material, and other footprints along supply chains is paramount in understanding the sustainability, efficiency, and equity of resource use from the perspective of producers, consumers, and government. We review current footprints and relate those to maximum sustainable levels, highlighting the need for future work on combining footprints, assessing trade-offs between them, improving computational techniques, estimating maximum sustainable footprint levels, and benchmarking efficiency of resource use. Ultimately, major transformative changes in the global economy are necessary to reduce humanity's environmental footprint to sustainable levels. Copyright © 2014, American Association for the Advancement of Science.

  6. Towards optimizing server performance in an educational MMORPG for teaching computer programming

    NASA Astrophysics Data System (ADS)

    Malliarakis, Christos; Satratzemi, Maya; Xinogalos, Stelios

    2013-10-01

    Web-based games have become significantly popular during the last few years. This is due to the gradual increase of Internet speed, which has led to ongoing multiplayer game development and, more importantly, the emergence of the Massive Multiplayer Online Role Playing Game (MMORPG) field. In parallel, similar technologies called educational games have started to be developed and put into practice in various educational contexts, resulting in the field of Game-Based Learning. However, these technologies require significant amounts of resources, such as bandwidth, RAM, and CPU capacity. These amounts may be even larger in an educational MMORPG that supports computer programming education, due to the usual inclusion of a compiler and the constant client/server data transmissions that occur during program coding, possibly leading to technical issues that could cause malfunctions during learning. Thus, determining the elements that affect the overall resource load of such games is essential, so that server administrators can configure them and ensure the games' proper operation during computer programming education. In this paper, we propose a new methodology for monitoring and optimizing load balancing, so that the resources essential for the creation and proper execution of an educational MMORPG for computer programming can be foreseen and provisioned without overloading the system.

  7. The LHCb software and computing upgrade for Run 3: opportunities and challenges

    NASA Astrophysics Data System (ADS)

    Bozzi, C.; Roiser, S.; LHCb Collaboration

    2017-10-01

    The LHCb detector will be upgraded for the LHC Run 3 and will be read out at 30 MHz, corresponding to the full inelastic collision rate, with major implications for the full software trigger and offline computing. If the current computing model and software framework are kept, the data storage capacity and computing power required to process data at this rate, and to generate and reconstruct equivalent samples of simulated events, will exceed the current capacity by at least one order of magnitude. A redesign of the software framework, including scheduling, the event model, the detector description and the conditions database, is needed to fully exploit the computing power of multi-, many-core architectures, and coprocessors. Data processing and the analysis model will also change towards an early streaming of different data types, in order to limit storage resources, with further implications for the data analysis workflows. Fast simulation options will make it possible to obtain a reasonable parameterization of the detector response in considerably less computing time. Finally, the upgrade of LHCb will be a good opportunity to review and implement changes in the domains of software design, test and review, and analysis workflow and preservation. In this contribution, activities and recent results in all the above areas are presented.

  8. Cloudbursting - Solving the 3-body problem

    NASA Astrophysics Data System (ADS)

    Chang, G.; Heistand, S.; Vakhnin, A.; Huang, T.; Zimdars, P.; Hua, H.; Hood, R.; Koenig, J.; Mehrotra, P.; Little, M. M.; Law, E.

    2014-12-01

    Many science projects in the future will be accomplished through collaboration among 2 or more NASA centers along with, potentially, external scientists. Science teams will be composed of more geographically dispersed individuals and groups. However, the current computing environment does not make this easy and seamless. By being able to share computing resources among members of a multi-center team working on a science/engineering project, limited pre-competition funds could be more efficiently applied and technical work could be conducted more effectively with less time spent moving data or waiting for computing resources to free up. Based on the work from a NASA CIO IT Labs task, this presentation will highlight our prototype work in assessing the feasibility and identifying the obstacles, both technical and managerial, of performing "Cloudbursting" among private clouds located at three different centers. We will demonstrate the use of private cloud computing infrastructure at the Jet Propulsion Laboratory, Langley Research Center, and Ames Research Center to provide elastic computation to each other to perform parallel Earth Science data imaging. We leverage elastic load balancing and auto-scaling features at each data center so that each location can independently define how many resources to allocate to a particular job that was "bursted" from another data center and demonstrate that compute capacity scales up and down with the job. We will also discuss future work in the area, which could include the use of cloud infrastructure from different cloud framework providers as well as other cloud service providers.

  9. A survey and taxonomy on energy efficient resource allocation techniques for cloud computing systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hameed, Abdul; Khoshkbarforoushha, Alireza; Ranjan, Rajiv

    In a cloud computing paradigm, energy efficient allocation of different virtualized ICT resources (servers, storage disks, networks, and the like) is a complex problem due to the presence of heterogeneous application workloads (e.g., content delivery networks, MapReduce, web applications, and the like) having contentious allocation requirements in terms of ICT resource capacities (e.g., network bandwidth, processing speed, response time, etc.). Several recent papers have tried to address the issue of improving energy efficiency in allocating cloud resources to applications with varying degrees of success. However, to the best of our knowledge there is no published literature on this subject that clearly articulates the research problem and provides a research taxonomy for succinct classification of existing techniques. Hence, the main aim of this paper is to identify open challenges associated with energy efficient resource allocation. In this regard, the study first outlines the problem and existing hardware and software-based techniques available for this purpose. Furthermore, available techniques already presented in the literature are summarized based on the energy-efficient research dimension taxonomy. The advantages and disadvantages of the existing techniques are comprehensively analyzed against the proposed research dimension taxonomy namely: resource adaption policy, objective function, allocation method, allocation operation, and interoperability.

  10. Computing Bounds on Resource Levels for Flexible Plans

    NASA Technical Reports Server (NTRS)

    Muscvettola, Nicola; Rijsman, David

    2009-01-01

    A new algorithm efficiently computes the tightest exact bound on the levels of resources induced by a flexible activity plan (see figure). Tightness of bounds is extremely important for computations involved in planning because tight bounds can save potentially exponential amounts of search (through early backtracking and detection of solutions), relative to looser bounds. The bound computed by the new algorithm, denoted the resource-level envelope, constitutes the measure of maximum and minimum consumption of resources at any time for all fixed-time schedules in the flexible plan. At each time, the envelope guarantees that there are two fixed-time instantiations: one that produces the minimum level and one that produces the maximum level. Therefore, the resource-level envelope is the tightest possible resource-level bound for a flexible plan because any tighter bound would exclude the contribution of at least one fixed-time schedule. If the resource-level envelope can be computed efficiently, one could substitute looser bounds that are currently used in the inner cores of constraint-posting scheduling algorithms, with the potential for great improvements in performance. What is needed to reduce the cost of computation is an algorithm, the measure of complexity of which is no greater than a low-degree polynomial in N (where N is the number of activities). The new algorithm satisfies this need. In this algorithm, the computation of resource-level envelopes is based on a novel combination of (1) the theory of shortest paths in the temporal-constraint network for the flexible plan and (2) the theory of maximum flows for a flow network derived from the temporal and resource constraints. The measure of asymptotic complexity of the algorithm is O(N O(maxflow(N))), where O(x) denotes an amount of computing time or a number of arithmetic operations proportional to a number of the order of x and O(maxflow(N)) is the measure of complexity (and thus of cost) of a maximum-flow algorithm applied to an auxiliary flow network of 2N nodes. The algorithm is believed to be efficient in practice; experimental analysis shows the practical cost of maxflow to be as low as O(N^1.5). The algorithm could be enhanced following at least two approaches. In the first approach, incremental subalgorithms for the computation of the envelope could be developed. By use of temporal scanning of the events in the temporal network, it may be possible to significantly reduce the size of the networks on which it is necessary to run the maximum-flow subalgorithm, thereby significantly reducing the time required for envelope calculation. In the second approach, the practical effectiveness of resource envelopes in the inner loops of search algorithms could be tested for multi-capacity resource scheduling. This testing would include inner-loop backtracking and termination tests and variable and value-ordering heuristics that exploit the properties of resource envelopes more directly.
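
    The abstract describes the envelope computation only at a high level; as a minimal, hypothetical illustration of the maximum-flow subroutine it relies on, the Python sketch below runs networkx's max-flow solver on a tiny made-up flow network. The construction of the actual auxiliary network from the temporal and resource constraints is not reproduced here.

      import networkx as nx

      # Toy flow network (made up); in the envelope algorithm an auxiliary network
      # of about 2N nodes is derived from the temporal and resource constraints.
      G = nx.DiGraph()
      G.add_edge("source", "a", capacity=3)   # e.g., arcs for resource-producing events (hypothetical)
      G.add_edge("source", "b", capacity=2)
      G.add_edge("a", "c", capacity=2)        # precedence-induced arcs (hypothetical)
      G.add_edge("b", "c", capacity=2)
      G.add_edge("c", "sink", capacity=4)     # e.g., arcs for resource-consuming events (hypothetical)

      flow_value, flow_dict = nx.maximum_flow(G, "source", "sink")
      print(flow_value)   # one max-flow evaluation of the kind used per envelope point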

  11. Optimal growth trajectories with finite carrying capacity.

    PubMed

    Caravelli, F; Sindoni, L; Caccioli, F; Ududec, C

    2016-08-01

    We consider the problem of finding optimal strategies that maximize the average growth rate of multiplicative stochastic processes. For a geometric Brownian motion, the problem is solved through the so-called Kelly criterion, according to which the optimal growth rate is achieved by investing a constant given fraction of resources at any step of the dynamics. We generalize these findings to the case of dynamical equations with finite carrying capacity, which can find applications in biology, mathematical ecology, and finance. We formulate the problem in terms of a stochastic process with multiplicative noise and a nonlinear drift term that is determined by the specific functional form of carrying capacity. We solve the stochastic equation for two classes of carrying capacity functions (power laws and logarithmic), and in both cases we compute the optimal trajectories of the control parameter. We further test the validity of our analytical results using numerical simulations.
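
    As a purely numerical illustration of the classical Kelly setting mentioned above (not of the paper's finite-carrying-capacity generalization), the sketch below estimates the long-run growth rate of a repeated favorable bet as a function of the constant fraction invested; the win probability and payoff are made-up numbers.

      import numpy as np

      rng = np.random.default_rng(0)

      def growth_rate(fraction, p=0.6, steps=10_000):
          """Average log-growth of wealth when betting a constant fraction
          on an even-money bet won with probability p (illustrative numbers)."""
          wins = rng.random(steps) < p
          factors = np.where(wins, 1 + fraction, 1 - fraction)
          return np.mean(np.log(factors))

      fractions = np.linspace(0.0, 0.9, 10)
      rates = [growth_rate(f) for f in fractions]
      best = fractions[int(np.argmax(rates))]
      # For an even-money bet the Kelly fraction is 2p - 1 = 0.20 with these numbers.
      print(f"empirical optimum near f = {best:.2f}")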

  12. Estimation of reservoir storage capacity using multibeam sonar and terrestrial lidar, Randy Poynter Lake, Rockdale County, Georgia, 2012

    USGS Publications Warehouse

    Lee, K.G.

    2013-01-01

    The U.S. Geological Survey, in cooperation with the Rockdale County Department of Water Resources, conducted a bathymetric and topographic survey of Randy Poynter Lake in northern Georgia in 2012. The Randy Poynter Lake watershed drains surface area from Rockdale, Gwinnett, and Walton Counties. The reservoir serves as the water supply for the Conyers-Rockdale Big Haynes Impoundment Authority. The Randy Poynter reservoir was surveyed to prepare a current bathymetric map and determine storage capacities at specified water-surface elevations. Topographic and bathymetric data were collected using a marine-based mobile mapping unit to estimate storage capacity. The marine-based mobile mapping unit operates with several components: multibeam echosounder, singlebeam echosounder, light detection and ranging system, navigation and motion-sensing system, and data acquisition computer. All data were processed and combined to develop a triangulated irregular network, a reservoir capacity table, and a bathymetric contour map.

  13. Optimal growth trajectories with finite carrying capacity

    NASA Astrophysics Data System (ADS)

    Caravelli, F.; Sindoni, L.; Caccioli, F.; Ududec, C.

    2016-08-01

    We consider the problem of finding optimal strategies that maximize the average growth rate of multiplicative stochastic processes. For a geometric Brownian motion, the problem is solved through the so-called Kelly criterion, according to which the optimal growth rate is achieved by investing a constant given fraction of resources at any step of the dynamics. We generalize these findings to the case of dynamical equations with finite carrying capacity, which can find applications in biology, mathematical ecology, and finance. We formulate the problem in terms of a stochastic process with multiplicative noise and a nonlinear drift term that is determined by the specific functional form of carrying capacity. We solve the stochastic equation for two classes of carrying capacity functions (power laws and logarithmic), and in both cases we compute the optimal trajectories of the control parameter. We further test the validity of our analytical results using numerical simulations.

  14. The service telemetry and control device for space experiment “GRIS”

    NASA Astrophysics Data System (ADS)

    Glyanenko, A. S.

    2016-02-01

    Controlling scientific instruments (for example, fine control of measurement paths), collecting auxiliary service information (instrument health, experimental conditions, etc.), and performing preliminary data processing are relevant tasks for any space instrument. Modern space research instruments are impossible to imagine without digital data processing methods, specialized or standard interfaces, and on-board computing facilities. To realize these functions in the “GRIS” experiment on board the ISS while minimizing size and power consumption, a “system-on-chip” concept was chosen and implemented. The computing kernel and all necessary peripherals are created in a programmable logic device by Microsemi from the ProASIC3 family, with a maximum capacity of up to 3M system gates. In this paper we discuss the structure, capabilities, and resources of the service telemetry and control device for the “GRIS” space experiment.

  15. Implementing controlled-unitary operations over the butterfly network

    NASA Astrophysics Data System (ADS)

    Soeda, Akihito; Kinjo, Yoshiyuki; Turner, Peter S.; Murao, Mio

    2014-12-01

    We introduce a multiparty quantum computation task over a network in a situation where the capacities of both the quantum and classical communication channels of the network are limited and a bottleneck occurs. Using a resource setting introduced by Hayashi [1], we present an efficient protocol for performing controlled-unitary operations between two input nodes and two output nodes over the butterfly network, one of the most fundamental networks exhibiting the bottleneck problem. This result opens the possibility of developing a theory of quantum network coding for multiparty quantum computation, whereas the conventional network coding only treats multiparty quantum communication.

  16. Implementing controlled-unitary operations over the butterfly network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soeda, Akihito; Kinjo, Yoshiyuki; Turner, Peter S.

    2014-12-04

    We introduce a multiparty quantum computation task over a network in a situation where the capacities of both the quantum and classical communication channels of the network are limited and a bottleneck occurs. Using a resource setting introduced by Hayashi [1], we present an efficient protocol for performing controlled-unitary operations between two input nodes and two output nodes over the butterfly network, one of the most fundamental networks exhibiting the bottleneck problem. This result opens the possibility of developing a theory of quantum network coding for multiparty quantum computation, whereas the conventional network coding only treats multiparty quantum communication.

  17. Multi-model approach to petroleum resource appraisal using analytic methodologies for probabilistic systems

    USGS Publications Warehouse

    Crovelli, R.A.

    1988-01-01

    The geologic appraisal model that is selected for a petroleum resource assessment depends upon purpose of the assessment, basic geologic assumptions of the area, type of available data, time available before deadlines, available human and financial resources, available computer facilities, and, most importantly, the available quantitative methodology with corresponding computer software and any new quantitative methodology that would have to be developed. Therefore, different resource assessment projects usually require different geologic models. Also, more than one geologic model might be needed in a single project for assessing different regions of the study or for cross-checking resource estimates of the area. Some geologic analyses used in the past for petroleum resource appraisal involved play analysis. The corresponding quantitative methodologies of these analyses usually consisted of Monte Carlo simulation techniques. A probabilistic system of petroleum resource appraisal for play analysis has been designed to meet the following requirements: (1) includes a variety of geologic models, (2) uses an analytic methodology instead of Monte Carlo simulation, (3) possesses the capacity to aggregate estimates from many areas that have been assessed by different geologic models, and (4) runs quickly on a microcomputer. Geologic models consist of four basic types: reservoir engineering, volumetric yield, field size, and direct assessment. Several case histories and present studies by the U.S. Geological Survey are discussed. © 1988 International Association for Mathematical Geology.

  18. The JASMIN Cloud: specialised and hybrid to meet the needs of the Environmental Sciences Community

    NASA Astrophysics Data System (ADS)

    Kershaw, Philip; Lawrence, Bryan; Churchill, Jonathan; Pritchard, Matt

    2014-05-01

    Cloud computing provides enormous opportunities for the research community. The large public cloud providers provide near-limitless scaling capability. However, adapting Cloud to scientific workloads is not without its problems. The commodity nature of the public cloud infrastructure can be at odds with the specialist requirements of the research community. Issues such as trust, ownership of data, WAN bandwidth and costing models make additional barriers to more widespread adoption. Alongside the application of public cloud for scientific applications, a number of private cloud initiatives are underway in the research community of which the JASMIN Cloud is one example. Here, cloud service models are being effectively super-imposed over more established services such as data centres, compute cluster facilities and Grids. These have the potential to deliver the specialist infrastructure needed for the science community coupled with the benefits of a Cloud service model. The JASMIN facility based at the Rutherford Appleton Laboratory was established in 2012 to support the data analysis requirements of the climate and Earth Observation community. In its first year of operation, the 5PB of available storage capacity was filled and the hosted compute capability used extensively. JASMIN has modelled the concept of a centralised large-volume data analysis facility. Key characteristics have enabled success: peta-scale fast disk connected via low latency networks to compute resources and the use of virtualisation for effective management of the resources for a range of users. A second phase is now underway funded through NERC's (Natural Environment Research Council) Big Data initiative. This will see significant expansion to the resources available with a doubling of disk-based storage to 12PB and an increase of compute capacity by a factor of ten to over 3000 processing cores. This expansion is accompanied by a broadening in the scope for JASMIN, as a service available to the entire UK environmental science community. Experience with the first phase demonstrated the range of user needs. A trade-off is needed between access privileges to resources, flexibility of use and security. This has influenced the form and types of service under development for the new phase. JASMIN will deploy a specialised private cloud organised into "Managed" and "Unmanaged" components. In the Managed Cloud, users have direct access to the storage and compute resources for optimal performance but for reasons of security, via a more restrictive PaaS (Platform-as-a-Service) interface. The Unmanaged Cloud is deployed in an isolated part of the network but co-located with the rest of the infrastructure. This enables greater liberty to tenants - full IaaS (Infrastructure-as-a-Service) capability to provision customised infrastructure - whilst at the same time protecting more sensitive parts of the system from direct access using these elevated privileges. The private cloud will be augmented with cloud-bursting capability so that it can exploit the resources available from public clouds, making it effectively a hybrid solution. A single interface will overlay the functionality of both the private cloud and external interfaces to public cloud providers giving users the flexibility to migrate resources between infrastructures as requirements dictate.

  19. Netbook - A Toolset in Support of a Collaborative and Cooperative Learning Environment.

    DTIC Science & Technology

    1996-04-26

    Netbook is a software development/research project being conducted for the DARPA computer aided training initiative (CEATI). As a part of the SNAIR...division of CEATI, Netbook concerns itself with the management of Internet resources. More specifically, Netbook is a toolset that allows students...a meaningful way. In addition Netbook provides the capacity for communication with peers and teachers, enabling students to collaborate while engaged

  20. Environmental sustainability control by water resources carrying capacity concept: application significance in Indonesia

    NASA Astrophysics Data System (ADS)

    Djuwansyah, M. R.

    2018-02-01

    This paper reviews the use of the water resources carrying capacity concept to control environmental sustainability, with particular attention to the case of Indonesia. Carrying capacity is a measure of the capability of an environment or area to support human and other life, as well as their activities, in a sustainable manner. Recurrent water-related hazards and environmental problems indicate that environments are being exploited beyond their carrying capacity. Environmental carrying capacity (ECC) assessment includes land and water carrying capacity analysis of an area and should always refer to the dimension of the related watershed as an integrated hydrologic unit when estimating resource availability. Many countries use this measure to forecast the future sustainability of regional development based on water availability. Direct water resource carrying capacity (WRCC) assessment involves determining the population, together with its activities, that could be supported by the available water, whereas indirect WRCC assessment comprises analysis of the supply-demand balance status of water. Water resources, rather than land resources, are usually the primary limit on environmental carrying capacity, since land capability constraints are easier to overcome. WRCC is a crucial factor for controlling land and water resource utilization, particularly in growing, densely populated areas. Even though the capability of water resources is relatively constant, the utilization pattern of these resources may change with the socio-economic, cultural, and technological level of the users, which is why WRCC should be evaluated periodically to maintain sustainable use of water resources and the environment.

  1. Building surgical capacity in low-resource countries: a qualitative analysis of task shifting from surgeon volunteers' perspectives.

    PubMed

    Aliu, Oluseyi; Corlew, Scott D; Heisler, Michele E; Pannucci, Christopher J; Chung, Kevin C

    2014-01-01

    Surgical volunteer organizations (SVOs) focus considerable resources on addressing the backlog of cases in low-resource countries. This model of service may perpetuate dependency. Efforts should focus on models that establish independence in providing surgical care. Independence could be achieved through surgical capacity building. However, there has been scant discussion in the literature on SVO involvement in surgical capacity building. Using qualitative methods, we evaluated the perspectives of surgeons with extensive volunteer experience in low-resource countries. We collected data through in-depth interviews that centered on SVOs using task shifting as a tool for surgical capacity building. Some of the key themes from our analysis include the ethical ramifications of task shifting, the challenges of addressing technical and clinical education in capacity building for low-resource settings, and the allocation of limited volunteer resources toward surgical capacity building. These themes will be the foundation of subsequent studies that will focus on other stakeholders in surgical capacity building, including host communities and SVO administrators.

  2. Cloud-based opportunities in scientific computing: insights from processing Suomi National Polar-Orbiting Partnership (S-NPP) Direct Broadcast data

    NASA Astrophysics Data System (ADS)

    Evans, J. D.; Hao, W.; Chettri, S.

    2013-12-01

    The cloud is proving to be a uniquely promising platform for scientific computing. Our experience with processing satellite data using Amazon Web Services highlights several opportunities for enhanced performance, flexibility, and cost effectiveness in the cloud relative to traditional computing -- for example: - Direct readout from a polar-orbiting satellite such as the Suomi National Polar-Orbiting Partnership (S-NPP) requires bursts of processing a few times a day, separated by quiet periods when the satellite is out of receiving range. In the cloud, by starting and stopping virtual machines in minutes, we can marshal significant computing resources quickly when needed, but not pay for them when not needed. To take advantage of this capability, we are automating a data-driven approach to the management of cloud computing resources, in which new data availability triggers the creation of new virtual machines (of variable size and processing power) which last only until the processing workflow is complete. - 'Spot instances' are virtual machines that run as long as one's asking price is higher than the provider's variable spot price. Spot instances can greatly reduce the cost of computing -- for software systems that are engineered to withstand unpredictable interruptions in service (as occurs when a spot price exceeds the asking price). We are implementing an approach to workflow management that allows data processing workflows to resume with minimal delays after temporary spot price spikes. This will allow systems to take full advantage of variably-priced 'utility computing.' - Thanks to virtual machine images, we can easily launch multiple, identical machines differentiated only by 'user data' containing individualized instructions (e.g., to fetch particular datasets or to perform certain workflows or algorithms). This is particularly useful when (as is the case with S-NPP data) we need to launch many very similar machines to process an unpredictable number of data files concurrently. Our experience shows the viability and flexibility of this approach to workflow management for scientific data processing. - Finally, cloud computing is a promising platform for distributed volunteer ('interstitial') computing, via mechanisms such as the Berkeley Open Infrastructure for Network Computing (BOINC) popularized with the SETI@Home project and others such as ClimatePrediction.net and NASA's Climate@Home. Interstitial computing faces significant challenges as commodity computing shifts from (always on) desktop computers towards smartphones and tablets (untethered and running on scarce battery power); but cloud computing offers significant slack capacity. This capacity includes virtual machines with unused RAM or underused CPUs; virtual storage volumes allocated (& paid for) but not full; and virtual machines that are paid up for the current hour but whose work is complete. We are devising ways to facilitate the reuse of these resources (i.e., cloud-based interstitial computing) for satellite data processing and related analyses. We will present our findings and research directions on these and related topics.
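
    As an illustration of the data-driven, variably-priced provisioning described above, the sketch below uses boto3 to request a single interruptible (spot) worker when a new data granule arrives. The AMI ID, instance type, price ceiling, region, and user-data script are placeholders, and the exact options may differ from what the authors actually used; this is a sketch of the general pattern, not their system.

      import boto3

      def launch_spot_worker(ami_id, granule_url, max_price="0.05"):
          """Start one interruptible worker to process a newly received data granule.
          All identifiers and the price ceiling are illustrative placeholders."""
          ec2 = boto3.client("ec2", region_name="us-east-1")
          # Individualized instructions passed as 'user data' (hypothetical commands).
          user_data = f"#!/bin/bash\nprocess-granule {granule_url}\nshutdown -h now\n"
          resp = ec2.run_instances(
              ImageId=ami_id,                      # pre-built processing image (placeholder)
              InstanceType="c5.xlarge",
              MinCount=1,
              MaxCount=1,
              UserData=user_data,
              InstanceMarketOptions={
                  "MarketType": "spot",
                  "SpotOptions": {"MaxPrice": max_price, "SpotInstanceType": "one-time"},
              },
          )
          return resp["Instances"][0]["InstanceId"]

      # Hypothetical trigger: a new granule becomes available for processing.
      # launch_spot_worker("ami-0123456789abcdef0", "s3://example-bucket/granule.h5")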

  3. An Applied Method for Predicting the Load-Carrying Capacity in Compression of Thin-Wall Composite Structures with Impact Damage

    NASA Astrophysics Data System (ADS)

    Mitrofanov, O.; Pavelko, I.; Varickis, S.; Vagele, A.

    2018-03-01

    The necessity for considering both strength criteria and postbuckling effects in calculating the load-carrying capacity in compression of thin-wall composite structures with impact damage is substantiated. An original applied method ensuring solution of these problems with an accuracy sufficient for practical design tasks is developed. The main advantage of the method is its applicability in terms of computing resources and the set of initial data required. The results of application of the method to solution of the problem of compression of fragments of thin-wall honeycomb panel damaged by impacts of various energies are presented. After a comparison of calculation results with experimental data, a working algorithm for calculating the reduction in the load-carrying capacity of a composite object with impact damage is adopted.

  4. The tractable cognition thesis.

    PubMed

    Van Rooij, Iris

    2008-09-01

    The recognition that human minds/brains are finite systems with limited resources for computation has led some researchers to advance the Tractable Cognition thesis: Human cognitive capacities are constrained by computational tractability. This thesis, if true, serves cognitive psychology by constraining the space of computational-level theories of cognition. To utilize this constraint, a precise and workable definition of "computational tractability" is needed. Following computer science tradition, many cognitive scientists and psychologists define computational tractability as polynomial-time computability, leading to the P-Cognition thesis. This article explains how and why the P-Cognition thesis may be overly restrictive, risking the exclusion of veridical computational-level theories from scientific investigation. An argument is made to replace the P-Cognition thesis by the FPT-Cognition thesis as an alternative formalization of the Tractable Cognition thesis (here, FPT stands for fixed-parameter tractable). Possible objections to the Tractable Cognition thesis, and its proposed formalization, are discussed, and existing misconceptions are clarified. 2008 Cognitive Science Society, Inc.
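
    For orientation, the formal distinction between the two theses rests on standard textbook definitions (stated here for convenience; they are not quoted from the article). The P-Cognition thesis requires the time to compute a cognitive capacity to be bounded by a polynomial in the input size n, whereas the FPT-Cognition thesis allows any super-polynomial cost to be confined to an input parameter \kappa:

        T(n) \le c \cdot n^{k}                      % polynomial time (P-Cognition); c, k constants
        T(n, \kappa) \le f(\kappa) \cdot n^{k}      % fixed-parameter tractable (FPT-Cognition); f any computable function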

  5. UNIX-based operating systems robustness evaluation

    NASA Technical Reports Server (NTRS)

    Chang, Yu-Ming

    1996-01-01

    Robust operating systems are required for reliable computing. Techniques for robustness evaluation of operating systems not only enhance the understanding of the reliability of computer systems, but also provide valuable feedback to system designers. This thesis presents results from robustness evaluation experiments on five UNIX-based operating systems, which include Digital Equipment's OSF/1, Hewlett Packard's HP-UX, Sun Microsystems' Solaris and SunOS, and Silicon Graphics' IRIX. Three sets of experiments were performed. The methodology for evaluation tested (1) the exception handling mechanism, (2) system resource management, and (3) system capacity under high workload stress. An exception generator was used to evaluate the exception handling mechanism of the operating systems. Results included exit status of the exception generator and the system state. Resource management techniques used by individual operating systems were tested using programs designed to usurp system resources such as physical memory and process slots. Finally, the workload stress testing evaluated the effect of the workload on system performance by running a synthetic workload and recording the response time of local and remote user requests. Moderate to severe performance degradations were observed on the systems under stress.

  6. Analysis of Department of Defense Organic Depot Maintenance Capacity Management and Facility Utilization Factors

    DTIC Science & Technology

    1991-09-01

    System ( CAPMS ) in lieu of using DODI 4151.15H. Facility utilization rate computation is not explicitly defined; it is merely identified as a ratio of...front of a bottleneck buffers the critical resource and protects against disruption of the system. This approach optimizes facility utilization by...run titled BUFFERED BASELINE. Three different levels of inventory were used to evaluate the effect of increasing the inventory level on critical

  7. From transistor to trapped-ion computers for quantum chemistry.

    PubMed

    Yung, M-H; Casanova, J; Mezzacapo, A; McClean, J; Lamata, L; Aspuru-Guzik, A; Solano, E

    2014-01-07

    Over the last few decades, quantum chemistry has progressed through the development of computational methods based on modern digital computers. However, these methods can hardly fulfill the exponentially-growing resource requirements when applied to large quantum systems. As pointed out by Feynman, this restriction is intrinsic to all computational models based on classical physics. Recently, the rapid advancement of trapped-ion technologies has opened new possibilities for quantum control and quantum simulations. Here, we present an efficient toolkit that exploits both the internal and motional degrees of freedom of trapped ions for solving problems in quantum chemistry, including molecular electronic structure, molecular dynamics, and vibronic coupling. We focus on applications that go beyond the capacity of classical computers, but may be realizable on state-of-the-art trapped-ion systems. These results allow us to envision a new paradigm of quantum chemistry that shifts from the current transistor to a near-future trapped-ion-based technology.

  8. From transistor to trapped-ion computers for quantum chemistry

    PubMed Central

    Yung, M.-H.; Casanova, J.; Mezzacapo, A.; McClean, J.; Lamata, L.; Aspuru-Guzik, A.; Solano, E.

    2014-01-01

    Over the last few decades, quantum chemistry has progressed through the development of computational methods based on modern digital computers. However, these methods can hardly fulfill the exponentially-growing resource requirements when applied to large quantum systems. As pointed out by Feynman, this restriction is intrinsic to all computational models based on classical physics. Recently, the rapid advancement of trapped-ion technologies has opened new possibilities for quantum control and quantum simulations. Here, we present an efficient toolkit that exploits both the internal and motional degrees of freedom of trapped ions for solving problems in quantum chemistry, including molecular electronic structure, molecular dynamics, and vibronic coupling. We focus on applications that go beyond the capacity of classical computers, but may be realizable on state-of-the-art trapped-ion systems. These results allow us to envision a new paradigm of quantum chemistry that shifts from the current transistor to a near-future trapped-ion-based technology. PMID:24395054

  9. [Evaluation of comprehensive capacity of resources and environments in Poyang Lake Eco-economic Zone].

    PubMed

    Song, Yan-Chun; Yu, Dan

    2014-10-01

    With the development of society and the economy, the contradictions among population, resources, and environment are becoming increasingly severe. As a result, the capacity of resources and the environment has become a focal issue for many countries and regions. After investigating and analyzing the present situation and existing problems of resources and the environment in the Poyang Lake Eco-economic Zone, seven factors were chosen as the evaluation criterion layer, namely land resources, water resources, biological resources, mineral resources, ecological-geological environment, water environment, and atmospheric environment. Based on the single-factor evaluation results and with the county as the evaluation unit, the comprehensive capacity of resources and environment in the Poyang Lake Eco-economic Zone was evaluated using the state space method. The results showed that the zone boasts abundant biological resources, a high-quality atmosphere and water environment, and a relatively stable geological environment, while being restricted by land, water, and mineral resources. Although the comprehensive capacity of resources and environment in the Poyang Lake Eco-economic Zone is currently not overloaded as a whole, it is overloaded in some counties/districts. The state space model, with its clear indications and high accuracy, could serve as another approach to evaluating the comprehensive capacity of regional resources and environment.

  10. Hybrid Cloud Computing Environment for EarthCube and Geoscience Community

    NASA Astrophysics Data System (ADS)

    Yang, C. P.; Qin, H.

    2016-12-01

    The NSF EarthCube Integration and Test Environment (ECITE) has built a hybrid cloud computing environment that provides cloud resources from private cloud environments using the cloud system software OpenStack and Eucalyptus, and also manages a public cloud, Amazon Web Services, allowing resources to be synchronized and burst between the private and public clouds. On the ECITE hybrid cloud platform, the EarthCube and geoscience communities can deploy and manage applications using base or customized virtual machine images, analyze big datasets using virtual clusters, and monitor virtual resource usage on the cloud in real time. Currently, a number of EarthCube projects, such as CHORDS, BCube, CINERGI, OntoSoft, and other EarthCube building blocks, have deployed or started migrating their projects to this platform. To accomplish a deployment or migration, the ECITE hybrid cloud platform administrator prepares the specific needs of each project (e.g., images, port numbers, usable cloud capacity) in advance, based on communication between ECITE and the participating projects; the scientists or IT technicians in those projects then launch one or more virtual machines, access them to set up the computing environment if need be, and migrate their code, documents, or data without having to deal with the heterogeneity in structure and operation among different cloud platforms.

  11. Why Are We Talking About Capacity Markets?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frew, Bethany

    Revenue sufficiency or 'missing money' concerns in wholesale electricity markets are important because they could lead to resource (or capacity) adequacy shortfalls. Capacity markets or other capacity-based payments are among the proposed solutions to remedy these challenges. This presentation provides a high-level overview of the importance of and process for ensuring resource adequacy, and then discusses considerations for capacity markets under futures with high penetrations of variable resources such as wind and solar.

  12. Processing Solutions for Big Data in Astronomy

    NASA Astrophysics Data System (ADS)

    Fillatre, L.; Lepiller, D.

    2016-09-01

    This paper gives a simple introduction to processing solutions applied to massive amounts of data. It proposes a general presentation of the Big Data paradigm. The Hadoop framework, which is considered as the pioneering processing solution for Big Data, is described together with YARN, the integrated Hadoop tool for resource allocation. This paper also presents the main tools for the management of both the storage (NoSQL solutions) and computing capacities (MapReduce parallel processing schema) of a cluster of machines. Finally, more recent processing solutions like Spark are discussed. Big Data frameworks are now able to run complex applications while keeping the programming simple and greatly improving the computing speed.
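
    As a minimal illustration of the MapReduce processing schema mentioned above, the sketch below counts words with PySpark (Spark's Python API); the input path is a placeholder and a local Spark installation is assumed.

      from pyspark import SparkContext

      # Minimal MapReduce-style word count (illustrative; the path is a placeholder).
      sc = SparkContext(appName="wordcount-sketch")
      counts = (
          sc.textFile("hdfs:///data/catalogue.txt")       # distributed input (placeholder path)
          .flatMap(lambda line: line.split())             # map: emit one record per word
          .map(lambda word: (word, 1))                    # map: (key, 1) pairs
          .reduceByKey(lambda a, b: a + b)                # reduce: sum counts per key
      )
      print(counts.take(10))
      sc.stop()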

  13. Computationally Efficient Power Allocation Algorithm in Multicarrier-Based Cognitive Radio Networks: OFDM and FBMC Systems

    NASA Astrophysics Data System (ADS)

    Shaat, Musbah; Bader, Faouzi

    2010-12-01

    Cognitive Radio (CR) systems have been proposed to increase spectrum utilization by opportunistically accessing unused spectrum. Multicarrier communication systems are promising candidates for CR systems. Due to its high spectral efficiency, filter bank multicarrier (FBMC) can be considered an alternative to conventional orthogonal frequency division multiplexing (OFDM) for transmission over CR networks. This paper addresses the problem of resource allocation in multicarrier-based CR networks. The objective is to maximize the downlink capacity of the network under constraints on both the total power and the interference introduced to the primary users (PUs). The optimal solution has high computational complexity, which makes it unsuitable for practical applications; hence, a low-complexity suboptimal solution is proposed. The proposed algorithm utilizes the spectrum holes in the PU bands as well as the active PU bands. The performance of the proposed algorithm is investigated for OFDM- and FBMC-based CR systems. Simulation results illustrate that the proposed low-complexity resource allocation algorithm achieves near-optimal performance and demonstrates the efficiency of using FBMC in the CR context.
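
    The paper's algorithm itself is not given in the abstract; as a simplified illustration of the kind of capacity-maximizing power allocation involved, the sketch below implements classical water-filling across subcarriers under a total power budget only, with made-up channel gains, and omits the per-PU interference constraints that the paper additionally enforces.

      import numpy as np

      def waterfill(gains, total_power, noise=1.0):
          """Classical water-filling over subcarriers under a total power budget only
          (the interference constraints toward primary users are omitted here)."""
          inv = noise / np.asarray(gains, dtype=float)     # noise-to-gain ratio per subcarrier
          order = np.argsort(inv)
          inv_sorted = inv[order]
          power = np.zeros_like(inv)
          for k in range(len(inv), 0, -1):                 # try using the k best subcarriers
              level = (total_power + inv_sorted[:k].sum()) / k
              if level > inv_sorted[k - 1]:                # all k subcarriers stay "above water"
                  power[order[:k]] = level - inv_sorted[:k]
                  break
          return power

      gains = [0.9, 0.5, 0.2, 0.05]                        # made-up channel gains
      p = waterfill(gains, total_power=4.0)
      print(p, "capacity ~", np.sum(np.log2(1 + np.array(gains) * p)))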

  14. Capturing the Impact of Storage and Other Flexible Technologies on Electric System Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hale, Elaine; Stoll, Brady; Mai, Trieu

    Power systems of the future are likely to require additional flexibility. This has been well studied from an operational perspective, but has been more difficult to incorporate into capacity expansion models (CEMs) that study investment decisions on the decadal scale. There are two primary reasons for this. First, the necessary input data, including cost and resource projections, for flexibility options like demand response and storage are significantly uncertain. Second, it is computationally difficult to represent both investment and operational decisions in detail, the latter being necessary to properly value system flexibility, in CEMs for realistically sized systems. In this work, we extend a particular CEM, NREL's Resource Planning Model (RPM), to address the latter issue by better representing variable generation impacts on operations, and then adding two flexible technologies to RPM's suite of investment decisions: interruptible load and utility-scale storage. This work does not develop full suites of input data for these technologies, but is rather methodological and exploratory in nature. We thus exercise these new investment decisions in the context of exploring price points and value streams needed for significant deployment in the Western Interconnection by 2030. Our study of interruptible load finds significant variation by location, year, and overall system conditions. Some locations find no system need for interruptible load even with low costs, while others build the most expensive resources offered. System needs can include planning reserve capacity needs to ensure resource adequacy, but there are also particular cases in which spinning reserve requirements drive deployment. Utility-scale storage is found to require deep cost reductions to achieve wide deployment and is found to be more valuable in some locations with greater renewable deployment. Differences between more solar- and wind-reliant regions are also found: Storage technologies with lower energy capacities are deployed to support solar deployment, and higher energy capacity technologies support wind. Finally, we identify potential future research and areas of improvement to build on this initial analysis.

  15. A Multi-Tiered Approach for Building Capacity in Hydrologic Modeling for Water Resource Management in Developing Regions

    NASA Astrophysics Data System (ADS)

    Markert, K. N.; Limaye, A. S.; Rushi, B. R.; Adams, E. C.; Anderson, E.; Ellenburg, W. L.; Mithieu, F.; Griffin, R.

    2017-12-01

    Water resource management is the process by which governments, businesses and/or individuals reach and implement decisions that are intended to address the future quantity and/or quality of water for societal benefit. The implementation of water resource management typically requires the understanding of the quantity and/or timing of a variety of hydrologic variables (e.g. discharge, soil moisture and evapotranspiration). Often times these variables for management are simulated using hydrologic models particularly in data sparse regions. However, there are several large barriers to entry in learning how to use models, applying best practices during the modeling process, and selecting and understanding the most appropriate model for diverse applications. This presentation focuses on a multi-tiered approach to bring the state-of-the-art hydrologic modeling capabilities and methods to developing regions through the SERVIR program, a joint NASA and USAID initiative that builds capacity of regional partners and their end users on the use of Earth observations for environmental decision making. The first tier is a series of trainings on the use of multiple hydrologic models, including the Variable Infiltration Capacity (VIC) and Ensemble Framework For Flash Flood Forecasting (EF5), which focus on model concepts and steps to successfully implement the models. We present a case study for this in a pilot area, the Nyando Basin in Kenya. The second tier is focused on building a community of practice on applied hydrology modeling aimed at creating a support network for hydrologists in SERVIR regions and promoting best practices. The third tier is a hydrologic inter-comparison project under development in the SERVIR regions. The objective of this step is to understand model performance under specific decision-making scenarios, and to share knowledge among hydrologists in SERVIR regions. The results of these efforts include computer programs, training materials, and new scientific understanding, all of which are shared in an open and collaborative environment for transparency and subsequent capacity building in SERVIR regions and beyond. The outcome of this work is increased awareness and capacity on the use of hydrologic models in developing regions to support water resource management and water security.

  16. Linear optical quantum computing in a single spatial mode.

    PubMed

    Humphreys, Peter C; Metcalf, Benjamin J; Spring, Justin B; Moore, Merritt; Jin, Xian-Min; Barbieri, Marco; Kolthammer, W Steven; Walmsley, Ian A

    2013-10-11

    We present a scheme for linear optical quantum computing using time-bin-encoded qubits in a single spatial mode. We show methods for single-qubit operations and heralded controlled-phase (cphase) gates, providing a sufficient set of operations for universal quantum computing with the Knill-Laflamme-Milburn [Nature (London) 409, 46 (2001)] scheme. Our protocol is suited to currently available photonic devices and ideally allows arbitrary numbers of qubits to be encoded in the same spatial mode, demonstrating the potential for time-frequency modes to dramatically increase the quantum information capacity of fixed spatial resources. As a test of our scheme, we demonstrate the first entirely single spatial mode implementation of a two-qubit quantum gate and show its operation with an average fidelity of 0.84±0.07.

  17. Decision support for hospital bed management using adaptable individual length of stay estimations and shared resources

    PubMed Central

    2013-01-01

    Background: Elective patient admission and assignment planning is an important task of the strategic and operational management of a hospital and early on became a central topic of clinical operations research. The management of hospital beds is an important subtask. Various approaches have been proposed, involving the computation of efficient assignments with regard to the patients’ condition, the necessity of the treatment, and the patients’ preferences. However, these approaches are mostly based on static, unadaptable estimates of the length of stay and, thus, do not take into account the uncertainty of the patient’s recovery. Furthermore, the effect of aggregated bed capacities has not been investigated in this context. Computer supported bed management, combining an adaptable length of stay estimation with the treatment of shared resources (aggregated bed capacities) has not yet been sufficiently investigated. The aim of our work is: 1) to define a cost function for patient admission taking into account adaptable length of stay estimations and aggregated resources, 2) to define a mathematical program formally modeling the assignment problem and an architecture for decision support, 3) to investigate four algorithmic methodologies addressing the assignment problem and one baseline approach, and 4) to evaluate these methodologies w.r.t. cost outcome, performance, and dismissal ratio. Methods: The expected free ward capacity is calculated based on individual length of stay estimates, introducing Bernoulli distributed random variables for the ward occupation states and approximating the probability densities. The assignment problem is represented as a binary integer program. Four strategies for solving the problem are applied and compared: an exact approach, using the mixed integer programming solver SCIP; and three heuristic strategies, namely the longest expected processing time, the shortest expected processing time, and random choice. A baseline approach serves to compare these optimization strategies with a simple model of the status quo. All the approaches are evaluated by a realistic discrete event simulation: the outcomes are the ratio of successful assignments and dismissals, the computation time, and the model’s cost factors. Results: A discrete event simulation of 226,000 cases shows a reduction of the dismissal rate compared to the baseline by more than 30 percentage points (from a mean dismissal ratio of 74.7% to 40.06% comparing the status quo with the optimization strategies). Each of the optimization strategies leads to an improved assignment. The exact approach has only a marginal advantage over the heuristic strategies in the model’s cost factors (≤3%). Moreover, this marginal advantage was only achieved at the price of a computational time fifty times that of the heuristic models (an average computing time of 141 s using the exact method, vs. 2.6 s for the heuristic strategy). Conclusions: In terms of its performance and the quality of its solution, the heuristic strategy RAND is the preferred method for bed assignment in the case of shared resources. Future research is needed to investigate whether an equally marked improvement can be achieved in a large scale clinical application study, ideally one comprising all the departments involved in admission and assignment planning. PMID:23289448
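
    A minimal sketch of the expected-free-capacity idea described above, under assumed inputs: each current patient's continued occupancy is treated as a Bernoulli variable derived from a hypothetical exponential remaining-stay model, and the occupied-bed count is approximated by a normal distribution. The paper's actual estimators and density approximations may differ.

    ```python
    # Sketch: expected free ward capacity from per-patient Bernoulli occupancy
    # probabilities, with a normal approximation to the occupied-bed count.
    from math import erf, exp, sqrt

    def p_still_present(expected_remaining_days, horizon_days):
        # Hypothetical survival model: exponentially distributed remaining stay.
        return exp(-horizon_days / expected_remaining_days)

    def free_capacity(total_beds, remaining_los, horizon_days, needed_free):
        probs = [p_still_present(r, horizon_days) for r in remaining_los]
        mean_occupied = sum(probs)
        var_occupied = sum(p * (1 - p) for p in probs)
        expected_free = total_beds - mean_occupied
        # P(free beds >= needed_free) = P(occupied <= total_beds - needed_free)
        z = (total_beds - needed_free - mean_occupied) / sqrt(var_occupied + 1e-9)
        p_enough = 0.5 * (1 + erf(z / sqrt(2)))
        return expected_free, p_enough

    print(free_capacity(total_beds=20,
                        remaining_los=[1.5, 2.0, 4.0, 0.5, 3.0, 6.0, 1.0],
                        horizon_days=2.0, needed_free=14))
    ```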

  18. Decision support for hospital bed management using adaptable individual length of stay estimations and shared resources.

    PubMed

    Schmidt, Robert; Geisler, Sandra; Spreckelsen, Cord

    2013-01-07

    Elective patient admission and assignment planning is an important task of the strategic and operational management of a hospital and early on became a central topic of clinical operations research. The management of hospital beds is an important subtask. Various approaches have been proposed, involving the computation of efficient assignments with regard to the patients' condition, the necessity of the treatment, and the patients' preferences. However, these approaches are mostly based on static, unadaptable estimates of the length of stay and, thus, do not take into account the uncertainty of the patient's recovery. Furthermore, the effect of aggregated bed capacities has not been investigated in this context. Computer supported bed management, combining an adaptable length of stay estimation with the treatment of shared resources (aggregated bed capacities) has not yet been sufficiently investigated. The aim of our work is: 1) to define a cost function for patient admission taking into account adaptable length of stay estimations and aggregated resources, 2) to define a mathematical program formally modeling the assignment problem and an architecture for decision support, 3) to investigate four algorithmic methodologies addressing the assignment problem and one baseline approach, and 4) to evaluate these methodologies w.r.t. cost outcome, performance, and dismissal ratio. The expected free ward capacity is calculated based on individual length of stay estimates, introducing Bernoulli distributed random variables for the ward occupation states and approximating the probability densities. The assignment problem is represented as a binary integer program. Four strategies for solving the problem are applied and compared: an exact approach, using the mixed integer programming solver SCIP; and three heuristic strategies, namely the longest expected processing time, the shortest expected processing time, and random choice. A baseline approach serves to compare these optimization strategies with a simple model of the status quo. All the approaches are evaluated by a realistic discrete event simulation: the outcomes are the ratio of successful assignments and dismissals, the computation time, and the model's cost factors. A discrete event simulation of 226,000 cases shows a reduction of the dismissal rate compared to the baseline by more than 30 percentage points (from a mean dismissal ratio of 74.7% to 40.06% comparing the status quo with the optimization strategies). Each of the optimization strategies leads to an improved assignment. The exact approach has only a marginal advantage over the heuristic strategies in the model's cost factors (≤3%). Moreover, this marginal advantage was only achieved at the price of a computational time fifty times that of the heuristic models (an average computing time of 141 s using the exact method, vs. 2.6 s for the heuristic strategy). In terms of its performance and the quality of its solution, the heuristic strategy RAND is the preferred method for bed assignment in the case of shared resources. Future research is needed to investigate whether an equally marked improvement can be achieved in a large scale clinical application study, ideally one comprising all the departments involved in admission and assignment planning.

  19. Computational biology in the cloud: methods and new insights from computing at scale.

    PubMed

    Kasson, Peter M

    2013-01-01

    The past few years have seen both explosions in the size of biological data sets and the proliferation of new, highly flexible on-demand computing capabilities. The sheer amount of information available from genomic and metagenomic sequencing, high-throughput proteomics, experimental and simulation datasets on molecular structure and dynamics affords an opportunity for greatly expanded insight, but it creates new challenges of scale for computation, storage, and interpretation of petascale data. Cloud computing resources have the potential to help solve these problems by offering a utility model of computing and storage: near-unlimited capacity, the ability to burst usage, and cheap and flexible payment models. Effective use of cloud computing on large biological datasets requires dealing with non-trivial problems of scale and robustness, since performance-limiting factors can change substantially when a dataset grows by a factor of 10,000 or more. New computing paradigms are thus often needed. The use of cloud platforms also creates new opportunities to share data, reduce duplication, and to provide easy reproducibility by making the datasets and computational methods easily available.

  20. Integration of a neuroimaging processing pipeline into a pan-canadian computing grid

    NASA Astrophysics Data System (ADS)

    Lavoie-Courchesne, S.; Rioux, P.; Chouinard-Decorte, F.; Sherif, T.; Rousseau, M.-E.; Das, S.; Adalat, R.; Doyon, J.; Craddock, C.; Margulies, D.; Chu, C.; Lyttelton, O.; Evans, A. C.; Bellec, P.

    2012-02-01

    The ethos of the neuroimaging field is quickly moving towards the open sharing of resources, including both imaging databases and processing tools. As a neuroimaging database represents a large volume of datasets and as neuroimaging processing pipelines are composed of heterogeneous, computationally intensive tools, such open sharing raises specific computational challenges. This motivates the design of novel dedicated computing infrastructures. This paper describes an interface between PSOM, a code-oriented pipeline development framework, and CBRAIN, a web-oriented platform for grid computing. This interface was used to integrate a PSOM-compliant pipeline for preprocessing of structural and functional magnetic resonance imaging into CBRAIN. We further tested the capacity of our infrastructure to handle a real large-scale project. A neuroimaging database including close to 1000 subjects was preprocessed using our interface and publicly released to help the participants of the ADHD-200 international competition. This successful experiment demonstrated that our integrated grid-computing platform is a powerful solution for high-throughput pipeline analysis in the field of neuroimaging.

  1. The Cloud2SM Project

    NASA Astrophysics Data System (ADS)

    Crinière, Antoine; Dumoulin, Jean; Mevel, Laurent; Andrade-Barosso, Guillermo; Simonin, Matthieu

    2015-04-01

    Over the past decades the monitoring of civil engineering structures has become a major field of research and development in the domains of modelling and integrated instrumentation. This increase in interest can be attributed in part to the need to control the aging of such structures and, on the other hand, to the need to optimize maintenance costs. From this standpoint the project Cloud2SM (Cloud architecture design for Structural Monitoring with in-line Sensors and Models tasking) has been launched to develop a robust information system able to support the long term monitoring of civil engineering structures as well as to interface various sensors and data. The specificity of such an architecture is that it is based on the notion of data processing through physical or statistical models. Thus the data processing, whether material or mathematical, can be seen here as a resource of the main architecture. The project can be divided into various items: - The sensors and their measurement process: these items provide data to the main architecture and can embed storage or computational resources. Depending on onboard capacity and the amount of data generated, heavy and light sensors can be distinguished. - The storage resources: based on the cloud concept, this resource can store at least two types of data, raw data and processed data. - The computational resources: this item includes embedded "pseudo real time" resources such as the dedicated computer cluster or other computational resources. - The models: used for the conversion of raw data into meaningful data. These types of resources inform the system of their needs and can be seen as independent blocks of the system. - The user interface: this item can be divided into various HMIs to support maintenance operations on the sensors or to push information to the user. - The demonstrators: the structures themselves. This project follows previous research work initiated in the European project ISTIMES [1]. It includes the infrared thermal monitoring of civil engineering structures [2-3] and/or the vibration monitoring of such structures [4-5]. The chosen architecture is based on the OGC standard in order to ensure interoperability between the various measurement systems. This concept is extended to the notion of physical models. Last but not least, a main objective of this project is to explore the feasibility and reliability of deploying mathematical models and processing a large amount of data using the GPGPU capacity of a dedicated computational cluster, while studying OGC standardization of these technical concepts. References [1] M. Proto et al., "Transport Infrastructure surveillance and Monitoring by Electromagnetic Sensing: the ISTIMES project", Sensors 2010, 10(12), 10620-10639; doi:10.3390/s101210620, December 2010. [2] J. Dumoulin, A. Crinière, R. Averty, "Detection and thermal characterization of the inner structure of the "Musmeci" bridge deck by infrared thermography monitoring", Journal of Geophysics and Engineering, Volume 10, Number 2, 17 pages, November 2013, IOP Science, doi:10.1088/1742-2132/10/6/064003. [3] J. Dumoulin and V. Boucher, "Infrared thermography system for transport infrastructures survey with inline local atmospheric parameter measurements and offline model for radiation attenuation evaluations", J. Appl. Remote Sens., 8(1), 084978 (2014), doi:10.1117/1.JRS.8.084978. [4] V. Le Cam, M. Doehler, M. Le Pen, L. Mevel, "Embedded modal analysis algorithms on the smart wireless sensor platform PEGASE", In Proc. 9th International Workshop on Structural Health Monitoring, Stanford, CA, USA, 2013. [5] M. Zghal, L. Mevel, P. Del Moral, "Modal parameter estimation using interacting Kalman filter", Mechanical Systems and Signal Processing, 2014.

  2. Results of a Nationwide Capacity Survey of Hospitals Providing Trauma Care in War-Affected Syria.

    PubMed

    Mowafi, Hani; Hariri, Mahmoud; Alnahhas, Houssam; Ludwig, Elizabeth; Allodami, Tammam; Mahameed, Bahaa; Koly, Jamal Kaby; Aldbis, Ahmed; Saqqur, Maher; Zhang, Baobao; Al-Kassem, Anas

    2016-09-01

    The Syrian civil war has resulted in large-scale devastation of Syria's health infrastructure along with widespread injuries and death from trauma. The capacity of Syrian trauma hospitals is not well characterized. Data are needed to allocate resources for trauma care to the population remaining in Syria. To identify the number of trauma hospitals operating in Syria and to delineate their capacities. From February 1 to March 31, 2015, a nationwide survey of 94 trauma hospitals was conducted inside Syria, representing a coverage rate of 69% to 93% of reported hospitals in nongovernment controlled areas. Identification and geocoding of trauma and essential surgical services in Syria. Although 86 hospitals (91%) reported capacity to perform emergency surgery, 1 in 6 hospitals (16%) reported having no inpatient ward for patients after surgery. Sixty-three hospitals (70%) could transfuse whole blood but only 7 (7.4%) could separate and bank blood products. Seventy-one hospitals (76%) had any pharmacy services. Only 10 (11%) could provide renal replacement therapy, and only 18 (20%) provided any form of rehabilitative services. Syrian hospitals are isolated, with 24 (26%) relying on smuggling routes to refer patients to other hospitals and 47 hospitals (50%) reporting domestic supply lines that were never open or open less than daily. There were 538 surgeons, 378 physicians, and 1444 nurses identified in this survey, yielding a nurse to physician ratio of 1.8:1. Only 74 hospitals (79%) reported any salary support for staff, and 84 (89%) reported material support. There is an unmet need for biomedical engineering support in Syrian trauma hospitals, with 12 fixed x-ray machines (23%), 11 portable x-ray machines (13%), 13 computed tomographic scanners (22%), 21 adult (21%) and 5 pediatric (19%) ventilators, 14 anesthesia machines (10%), and 116 oxygen cylinders (15%) not functional. No functioning computed tomographic scanners remain in Aleppo, and 95 oxygen cylinders (42%) in rural Damascus are not functioning despite the high density of hospitals and patients in both provinces. Syrian trauma hospitals operate in the Syrian civil war under severe material and human resource constraints. Attention must be paid to providing biomedical engineering support and to directing resources to currently unsupported and geographically isolated critical access surgical hospitals.

  3. Surgical resource utilization in urban terrorist bombing: a computer simulation.

    PubMed

    Hirshberg, A; Stein, M; Walden, R

    1999-09-01

    The objective of this study was to analyze the utilization of surgical staff and facilities during an urban terrorist bombing incident. A discrete-event computer model of the emergency room and related hospital facilities was constructed and implemented, based on cumulated data from 12 urban terrorist bombing incidents in Israel. The simulation predicts that the admitting capacity of the hospital depends primarily on the number of available surgeons and defines an optimal staff profile for surgeons, residents, and trauma nurses. The major bottlenecks in the flow of critical casualties are the shock rooms and the computed tomographic scanner but not the operating rooms. The simulation also defines the number of reinforcement staff needed to treat noncritical casualties and shows that radiology is the major obstacle to the flow of these patients. Computer simulation is an important new tool for the optimization of surgical service elements for a multiple-casualty situation.
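
    The staffing trade-off can be illustrated with a toy discrete-event sketch (not the authors' validated model; the arrival and treatment-time distributions are assumptions): surgeons are a finite resource and the mean casualty waiting time is tracked as their number varies.

    ```python
    # Toy discrete-event sketch of casualty flow: surgeons are a finite resource,
    # and admitting capacity emerges from how long casualties wait for one.
    import heapq, random

    def simulate(n_surgeons, n_casualties, mean_gap_min=4.0, mean_treat_min=25.0, seed=1):
        random.seed(seed)
        t, arrivals = 0.0, []
        for _ in range(n_casualties):
            t += random.expovariate(1.0 / mean_gap_min)   # assumed Poisson arrivals
            arrivals.append(t)
        free_at = [0.0] * n_surgeons                      # min-heap of surgeon free times
        heapq.heapify(free_at)
        waits = []
        for arrive in arrivals:
            surgeon_free = heapq.heappop(free_at)
            start = max(arrive, surgeon_free)
            waits.append(start - arrive)
            heapq.heappush(free_at, start + random.expovariate(1.0 / mean_treat_min))
        return sum(waits) / len(waits)

    for s in (2, 4, 6, 8):
        print(f"{s} surgeons -> mean wait {simulate(s, 60):5.1f} min")
    ```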

  4. Visual Short-Term Memory Compared in Rhesus Monkeys and Humans

    PubMed Central

    Elmore, L. Caitlin; Ma, Wei Ji; Magnotti, John F.; Leising, Kenneth J.; Passaro, Antony D.; Katz, Jeffrey S.; Wright, Anthony A.

    2011-01-01

    Summary Change detection is a popular task to study visual short-term memory (STM) in humans [1–4]. Much of this work suggests that STM has a fixed capacity of 4 ± 1 items [1–6]. Here we report the first comparison of change detection memory between humans and a species closely related to humans, the rhesus monkey. Monkeys and humans were tested in nearly identical procedures with overlapping display sizes. Although the monkeys’ STM was well fit by a 1-item fixed-capacity memory model, other monkey memory tests with 4-item lists have shown performance impossible to obtain with a 1-item capacity [7]. We suggest that this contradiction can be resolved using a continuous-resource approach more closely tied to the neural basis of memory [8,9]. In this view, items have a noisy memory representation whose noise level depends on display size due to distributed allocation of a continuous resource. In accord with this theory, we show that performance depends on the perceptual distance between items before and after the change, and d′ depends on display size in an approximately power law fashion. Our results open the door to combining the power of psychophysics, computation, and physiology to better understand the neural basis of STM. PMID:21596568
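
    A toy illustration of the continuous-resource account, under the simplifying assumption that a fixed resource is split equally across display items (the paper fits a more detailed model): encoding noise grows as the per-item resource shrinks, so predicted d′ falls with display size roughly as a power law.

    ```python
    # Toy continuous-resource sketch (assumed parameters, not the fitted model):
    # a fixed resource J_TOTAL is split evenly across N items, encoding noise
    # scales as 1/sqrt(resource per item), so d' falls off as a power of N.
    J_TOTAL = 8.0          # arbitrary resource units
    CHANGE_SIZE = 1.0      # perceptual distance between pre- and post-change item

    for n in (1, 2, 4, 8):
        j_per_item = J_TOTAL / n
        sigma = (1.0 / j_per_item) ** 0.5      # noise grows as resource thins out
        d_prime = CHANGE_SIZE / sigma          # ~ N ** -0.5 under equal allocation
        print(f"display size {n}: predicted d' = {d_prime:.2f}")
    ```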

  5. A configurable distributed high-performance computing framework for satellite's TDI-CCD imaging simulation

    NASA Astrophysics Data System (ADS)

    Xue, Bo; Mao, Bingjing; Chen, Xiaomei; Ni, Guoqiang

    2010-11-01

    This paper presents a configurable distributed high performance computing (HPC) framework for TDI-CCD imaging simulation. It uses the strategy pattern to adapt multiple algorithms, so the framework helps to decrease simulation time at low expense. Imaging simulation for a TDI-CCD mounted on a satellite contains four processes: 1) degradation caused by the atmosphere, 2) degradation caused by the optical system, 3) degradation caused by the TDI-CCD electronics and the re-sampling process, 4) data integration. Processes 1) to 3) use data-intensive algorithms such as FFT, convolution and Lagrange interpolation, which require powerful CPUs. Even using an Intel Xeon X5550 processor, the regular serial processing method takes more than 30 hours for a simulation whose result image size is 1500 * 1462. A literature study found no mature distributed HPC framework in this field. We therefore developed a distributed computing framework for TDI-CCD imaging simulation, based on WCF[1], which uses a client/server (C/S) layer and invokes the free CPU resources in the LAN. The server pushes the tasks of processes 1) to 3) to that free computing capacity. Ultimately we obtained HPC at low cost. In a computing experiment with 4 symmetric nodes and 1 server, this framework reduced simulation time by about 74%. Adding more asymmetric nodes to the computing network decreased the time accordingly. In conclusion, this framework can provide unlimited computation capacity on the condition that the network and task management server are affordable, and it is a brand new HPC solution for TDI-CCD imaging simulation and similar applications.
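
    The push-tasks-to-free-capacity idea can be sketched with a simple work queue. The original framework is WCF-based over a LAN, so the multiprocessing pool and the toy FFT "degradation" step below are stand-in assumptions rather than the authors' implementation.

    ```python
    # Minimal work-queue sketch of the client/server idea: a pool of worker
    # processes stands in for the free computing capacity on the LAN.
    from multiprocessing import Pool
    import numpy as np

    def degrade_tile(args):
        """Stand-in for one data-intensive step (e.g. an FFT-based blur on a tile)."""
        tile_id, tile = args
        spectrum = np.fft.fft2(tile)
        spectrum *= np.exp(-0.001 * np.arange(tile.shape[0])[:, None] ** 2)  # toy MTF
        return tile_id, np.real(np.fft.ifft2(spectrum))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        tiles = [(i, rng.random((256, 256))) for i in range(16)]
        with Pool() as pool:                  # the "server" pushes tiles to free workers
            results = dict(pool.map(degrade_tile, tiles))
        print("processed tiles:", sorted(results))
    ```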

  6. Acquisition of ICU data: concepts and demands.

    PubMed

    Imhoff, M

    1992-12-01

    As the issue of data overload is a problem in critical care today, it is of utmost importance to improve acquisition, storage, integration, and presentation of medical data, which appears only feasible with the help of bedside computers. The data originates from four major sources: (1) the bedside medical devices, (2) the local area network (LAN) of the ICU, (3) the hospital information system (HIS) and (4) manual input. All sources differ markedly in quality and quantity of data and in the demands of the interfaces between source of data and patient database. The demands for data acquisition from bedside medical devices, ICU-LAN and HIS concentrate on technical problems, such as computational power, storage capacity, real-time processing, interfacing with different devices and networks and the unmistakable assignment of data to the individual patient. The main problem of manual data acquisition is the definition and configuration of the user interface that must allow the inexperienced user to interact with the computer intuitively. Emphasis must be put on the construction of a pleasant, logical and easy-to-handle graphical user interface (GUI). Short response times will require high graphical processing capacity. Moreover, high computational resources are necessary in the future for additional interfacing devices such as speech recognition and 3D-GUI. Therefore, in an ICU environment the demands for computational power are enormous. These problems are complicated by the urgent need for friendly and easy-to-handle user interfaces. Both facts place ICU bedside computing at the vanguard of present and future workstation development leaving no room for solutions based on traditional concepts of personal computers.(ABSTRACT TRUNCATED AT 250 WORDS)

  7. Report to the Institutional Computing Executive Group (ICEG) August 14, 2006

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carnes, B

    We have delayed this report from its normal distribution schedule for two reasons. First, due to the coverage provided in the White Paper on Institutional Capability Computing Requirements distributed in August 2005, we felt a separate 2005 ICEG report would not be value added. Second, we wished to provide some specific information about the Peloton procurement and we have just now reached a point in the process where we can make some definitive statements. The Peloton procurement will result in an almost complete replacement of current M&IC systems. We have plans to retire MCR, iLX, and GPS. We will replace them with new parallel and serial capacity systems based on the same node architecture in the new Peloton capability system named ATLAS. We are currently adding the first users to the Green Data Oasis, a large file system on the open network that will provide the institution with external collaboration data sharing. Only Thunder will remain from the current M&IC system list and it will be converted from Capability to Capacity. We are confident that we are entering a challenging yet rewarding new phase for the M&IC program. Institutional computing has been an essential component of our S&T investment strategy and has helped us achieve recognition in many scientific and technical forums. Through consistent institutional investments, M&IC has grown into a powerful unclassified computing resource that is being used across the Lab to push the limits of computing and its application to simulation science. With the addition of Peloton, the Laboratory will significantly increase the broad-based computing resources available to meet the ever-increasing demand for the large scale simulations indispensable to advancing all scientific disciplines. All Lab research efforts are bolstered through the long term development of mission driven scalable applications and platforms. The new systems will soon be fully utilized and will position Livermore to extend the outstanding science and technology breakthroughs the M&IC program has enabled to date.

  8. Dawn Usage, Scheduling, and Governance Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Louis, S

    2009-11-02

    This document describes Dawn use, scheduling, and governance concerns. Users started running full-machine science runs in early April 2009 during the initial open shakedown period. Scheduling Dawn while in the Open Computing Facility (OCF) was controlled and coordinated via phone calls, emails, and a small number of controlled banks. With Dawn moving to the Secure Computing Facility (SCF) in fall of 2009, a more detailed scheduling and governance model is required. The three major objectives are: (1) Ensure Dawn resources are allocated on a program priority-driven basis; (2) Utilize Dawn resources on the job mixes for which they were intended; and (3) Minimize idle cycles through use of partitions, banks and proper job mix. The SCF workload for Dawn will be inherently different than Purple or BG/L, and therefore needs a different approach. Dawn's primary function is to permit adequate access for tri-lab code development in preparation for Sequoia, and in particular for weapons multi-physics codes in support of UQ. A second purpose is to provide time allocations for large-scale science runs and for UQ suite calculations to advance SSP program priorities. This proposed governance model will be the basis for initial time allocation of Dawn computing resources for the science and UQ workloads that merit priority on this class of resource, either because they cannot be reasonably attempted on any other resources due to size of problem, or because of the unavailability of sizable allocations on other ASC capability or capacity platforms. This proposed model intends to make the most effective use of Dawn as possible, but without being overly constrained by more formal proposal processes such as those now used for Purple CCCs.

  9. JINR cloud infrastructure evolution

    NASA Astrophysics Data System (ADS)

    Baranov, A. V.; Balashov, N. A.; Kutovskiy, N. A.; Semenov, R. N.

    2016-09-01

    To fulfil JINR commitments in different national and international projects related to the use of modern information technologies, such as cloud and grid computing, and to provide a modern tool for JINR users in their scientific research, a cloud infrastructure was deployed at the Laboratory of Information Technologies of the Joint Institute for Nuclear Research. OpenNebula software was chosen as the cloud platform. Initially it was set up in a simple configuration with a single front-end host and a few cloud nodes. Some custom development was done to tune the JINR cloud installation to local needs: a web form in the cloud web interface for resource requests, a menu item with cloud utilization statistics, user authentication via Kerberos, and a custom driver for OpenVZ containers. Because of high demand for the cloud service and over-utilization of its resources, it was re-designed to cover users' increasing needs for capacity, availability and reliability. Recently a new cloud instance has been deployed in a high-availability configuration with a distributed network file system and additional computing power.

  10. Literature review on land carrying capacity of the coordinated development of population, resources, environment and economy

    NASA Astrophysics Data System (ADS)

    Ma, Biao

    2017-10-01

    Land carrying capacity is an important index for the evaluation of land resources, and it is also very important for guiding regional plans and promoting sustainable development of the regional economy. It is therefore important to clarify the concept of land carrying capacity so that decision makers can understand and grasp it more clearly and make sound judgments and decisions. Based on the theory of population, resources, environment and economy, this paper uses a literature review to summarize the theory of land carrying capacity, the methods used to study it, and the problems existing in current research on land carrying capacity.

  11. Mobile-Cloud Assisted Video Summarization Framework for Efficient Management of Remote Sensing Data Generated by Wireless Capsule Sensors

    PubMed Central

    Mehmood, Irfan; Sajjad, Muhammad; Baik, Sung Wook

    2014-01-01

    Wireless capsule endoscopy (WCE) has great advantages over traditional endoscopy because it is portable and easy to use, especially in remote health-monitoring services. However, during the WCE process, the large amount of captured video data demands a significant amount of computation to analyze and retrieve informative video frames. In order to facilitate efficient WCE data collection and browsing tasks, we present a resource- and bandwidth-aware WCE video summarization framework that extracts the representative keyframes of the WCE video contents by removing redundant and non-informative frames. For redundancy elimination, we use Jeffrey-divergence between color histograms and inter-frame Boolean series-based correlation of color channels. To remove non-informative frames, multi-fractal texture features are extracted to assist the classification using an ensemble-based classifier. Owing to the limited WCE resources, it is impossible for the WCE system to perform computationally intensive video summarization tasks. To resolve computational challenges, a mobile-cloud architecture is incorporated, which provides resizable computing capacities by adaptively offloading video summarization tasks between the client and the cloud server. The qualitative and quantitative results are encouraging and show that the proposed framework saves information transmission cost and bandwidth, as well as the valuable time of data analysts in browsing remote sensing data. PMID:25225874
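
    A minimal sketch of the redundancy check under stated assumptions: consecutive frames' color histograms are compared with one common form of the Jeffrey divergence (the symmetric, numerically stable KL variant used in image retrieval), and frames below an assumed threshold are treated as redundant. The paper's exact histogram binning and threshold are not reproduced here.

    ```python
    # Sketch: Jeffrey divergence between per-channel color histograms as a
    # redundancy test between consecutive frames. Threshold is an assumption.
    import numpy as np

    def color_histogram(frame, bins=16):
        """Per-channel histogram of an RGB frame, concatenated and normalized."""
        hists = [np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
                 for c in range(3)]
        h = np.concatenate(hists).astype(float)
        return h / h.sum()

    def jeffrey_divergence(p, q, eps=1e-12):
        m = 0.5 * (p + q)
        return float(np.sum(p * np.log((p + eps) / (m + eps)) +
                            q * np.log((q + eps) / (m + eps))))

    def is_redundant(frame_a, frame_b, threshold=0.05):
        return jeffrey_divergence(color_histogram(frame_a),
                                  color_histogram(frame_b)) < threshold

    rng = np.random.default_rng(0)
    a = rng.integers(0, 256, (128, 128, 3))
    b = rng.integers(128, 256, (128, 128, 3))       # distinctly brighter frame
    print(is_redundant(a, a), is_redundant(a, b))   # True False
    ```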

  12. SCinet Architecture: Featured at the International Conference for High Performance Computing, Networking, Storage and Analysis 2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyonnais, Marc; Smith, Matt; Mace, Kate P.

    SCinet is the purpose-built network that operates during the International Conference for High Performance Computing, Networking, Storage and Analysis (Super Computing or SC). Created each year for the conference, SCinet brings to life a high-capacity network that supports applications and experiments that are a hallmark of the SC conference. The network links the convention center to research and commercial networks around the world. This resource serves as a platform for exhibitors to demonstrate the advanced computing resources of their home institutions and elsewhere by supporting a wide variety of applications. Volunteers from academia, government and industry work together to design and deliver the SCinet infrastructure. Industry vendors and carriers donate millions of dollars in equipment and services needed to build and support the local and wide area networks. Planning begins more than a year in advance of each SC conference and culminates in a high intensity installation in the days leading up to the conference. The SCinet architecture for SC16 illustrates a dramatic increase in participation from the vendor community, particularly those that focus on network equipment. Software-Defined Networking (SDN) and Data Center Networking (DCN) are present in nearly all aspects of the design.

  13. Design and Analysis of Self-Adapted Task Scheduling Strategies in Wireless Sensor Networks

    PubMed Central

    Guo, Wenzhong; Xiong, Naixue; Chao, Han-Chieh; Hussain, Sajid; Chen, Guolong

    2011-01-01

    In a wireless sensor network (WSN), the usage of resources is usually highly related to the execution of tasks which consume a certain amount of computing and communication bandwidth. Parallel processing among sensors is a promising solution to provide the demanded computation capacity in WSNs. Task allocation and scheduling is a typical problem in the area of high performance computing. Although task allocation and scheduling in wired processor networks has been well studied in the past, their counterparts for WSNs remain largely unexplored. Existing traditional high performance computing solutions cannot be directly implemented in WSNs due to the limitations of WSNs such as limited resource availability and the shared communication medium. In this paper, a self-adapted task scheduling strategy for WSNs is presented. First, a multi-agent-based architecture for WSNs is proposed and a mathematical model of dynamic alliance is constructed for the task allocation problem. Then an effective discrete particle swarm optimization (PSO) algorithm for the dynamic alliance (DPSO-DA) with a well-designed particle position code and fitness function is proposed. A mutation operator which can effectively improve the algorithm’s global search ability and population diversity is also introduced in this algorithm. Finally, the simulation results show that the proposed solution can achieve significantly better performance than other algorithms. PMID:22163971
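
    A compact binary-PSO sketch in the spirit of DPSO-DA, with an invented fitness function (energy use plus a load-imbalance penalty, and an assumed requirement that each task's alliance contains at least two sensors) and without the paper's mutation operator; the particle coding below is an assumption, not the authors' exact scheme.

    ```python
    # Binary PSO sketch: each particle is a task-by-sensor 0/1 matrix deciding
    # which sensors join each task's alliance; velocities pass through a sigmoid.
    import random, math

    N_TASKS, N_SENSORS, N_PARTICLES, ITERS = 4, 8, 20, 100
    ENERGY = [random.uniform(1, 3) for _ in range(N_SENSORS)]   # assumed per-sensor cost

    def fitness(x):
        energy = sum(ENERGY[s] for t in range(N_TASKS) for s in range(N_SENSORS) if x[t][s])
        loads = [sum(x[t][s] for t in range(N_TASKS)) for s in range(N_SENSORS)]
        unserved = sum(1 for t in range(N_TASKS) if sum(x[t]) < 2)  # need >= 2 sensors (assumed)
        return energy + 2.0 * max(loads) + 50.0 * unserved

    def sigmoid(v):
        return 1.0 / (1.0 + math.exp(-v))

    swarm = [[[random.randint(0, 1) for _ in range(N_SENSORS)] for _ in range(N_TASKS)]
             for _ in range(N_PARTICLES)]
    vel = [[[0.0] * N_SENSORS for _ in range(N_TASKS)] for _ in range(N_PARTICLES)]
    pbest = [[row[:] for row in x] for x in swarm]
    pbest_f = [fitness(x) for x in swarm]
    g = pbest[pbest_f.index(min(pbest_f))]

    for _ in range(ITERS):
        for i, x in enumerate(swarm):
            for t in range(N_TASKS):
                for s in range(N_SENSORS):
                    vel[i][t][s] = (0.7 * vel[i][t][s]
                                    + 1.5 * random.random() * (pbest[i][t][s] - x[t][s])
                                    + 1.5 * random.random() * (g[t][s] - x[t][s]))
                    x[t][s] = 1 if random.random() < sigmoid(vel[i][t][s]) else 0
            f = fitness(x)
            if f < pbest_f[i]:
                pbest_f[i], pbest[i] = f, [row[:] for row in x]
        g = pbest[pbest_f.index(min(pbest_f))]

    print("best fitness:", min(pbest_f))
    ```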

  14. Mobile-cloud assisted video summarization framework for efficient management of remote sensing data generated by wireless capsule sensors.

    PubMed

    Mehmood, Irfan; Sajjad, Muhammad; Baik, Sung Wook

    2014-09-15

    Wireless capsule endoscopy (WCE) has great advantages over traditional endoscopy because it is portable and easy to use, especially in remote health-monitoring services. However, during the WCE process, the large amount of captured video data demands a significant amount of computation to analyze and retrieve informative video frames. In order to facilitate efficient WCE data collection and browsing tasks, we present a resource- and bandwidth-aware WCE video summarization framework that extracts the representative keyframes of the WCE video contents by removing redundant and non-informative frames. For redundancy elimination, we use Jeffrey-divergence between color histograms and inter-frame Boolean series-based correlation of color channels. To remove non-informative frames, multi-fractal texture features are extracted to assist the classification using an ensemble-based classifier. Owing to the limited WCE resources, it is impossible for the WCE system to perform computationally intensive video summarization tasks. To resolve computational challenges, a mobile-cloud architecture is incorporated, which provides resizable computing capacities by adaptively offloading video summarization tasks between the client and the cloud server. The qualitative and quantitative results are encouraging and show that the proposed framework saves information transmission cost and bandwidth, as well as the valuable time of data analysts in browsing remote sensing data.

  15. Developing a Coalition Battle Management Language to Facilitate Interoperability Between Operation CIS, and Simulations in Support of Training and Mission Rehearsal

    DTIC Science & Technology

    2005-06-01

    virtualisation of distributed computing and data resources such as processing, network bandwidth, and storage capacity, to create a single system...and Simulation (M&S) will be integrated into this heterogeneous SOA. M&S functionality will be available in the form of operational M&S services. One...documents defining net centric warfare, the use of M&S functionality is a common theme. Alberts and Hayes give a good overview on net centric operations

  16. Computational Study of Scenarios Regarding Explosion Risk Mitigation

    NASA Astrophysics Data System (ADS)

    Vlasin, Nicolae-Ioan; Mihai Pasculescu, Vlad; Florea, Gheorghe-Daniel; Cornel Suvar, Marius

    2016-10-01

    Exploration to discover new deposits of natural gas, upgraded techniques to exploit these resources, and new ways to convert the heat capacity of these gases into industrially usable energy are research areas of great interest around the globe. But all activities involving the handling of natural gas (exploitation, transport, combustion) are subject to the same type of risk: the risk of explosion. Experiments carried out as physical scenarios to determine ways to reduce this risk can be extremely costly, requiring suitable premises, equipment and apparatus, manpower and time and, not least, presenting a risk of personnel injury. Taking the above into account, the present paper deals with the possibility of studying scenarios of gas explosion type events in the virtual domain, exemplified by performing a computer simulation of a stoichiometric air - methane explosion (methane is the main component of natural gas). The advantages of the computer-assisted approach include the possibility of using complex virtual geometries of any form as the area in which the phenomenon unfolds, the use of the same geometry for an infinite number of settings of the initial input parameters, total elimination of the risk of personnel injury, and decreased execution time. Although computer simulations consume considerable hardware resources and require specialized personnel to use CFD (Computational Fluid Dynamics) techniques, the costs and risks associated with these methods are greatly diminished, while presenting, at the same time, a major benefit in terms of execution time.
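
    For context, a short worked check of what "stoichiometric air-methane" means, using textbook values rather than anything taken from the paper:

    ```python
    # Worked check (textbook values, not from the paper): CH4 + 2 O2 -> CO2 + 2 H2O,
    # with air ~20.9% O2 by volume, so one volume of methane needs ~9.57 volumes of air.
    O2_FRACTION_IN_AIR = 0.209
    air_per_methane = 2.0 / O2_FRACTION_IN_AIR                 # ~9.57
    methane_volume_fraction = 1.0 / (1.0 + air_per_methane)    # ~0.095
    print(f"stoichiometric methane concentration ≈ {methane_volume_fraction:.1%}")
    ```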

  17. An improved approximate network blocking probability model for all-optical WDM Networks with heterogeneous link capacities

    NASA Astrophysics Data System (ADS)

    Khan, Akhtar Nawaz

    2017-11-01

    Currently, analytical models are used to compute approximate blocking probabilities in opaque and all-optical WDM networks with homogeneous link capacities. Existing analytical models can also be extended to opaque WDM networking with heterogeneous link capacities due to the wavelength conversion at each switch node. However, existing analytical models cannot be utilized for all-optical WDM networking with a heterogeneous structure of link capacities due to the wavelength continuity constraint and unequal numbers of wavelength channels on different links. In this work, a mathematical model is extended for computing approximate network blocking probabilities in heterogeneous all-optical WDM networks in which the path blocking is dominated by the link along the path with the fewest wavelength channels. A wavelength assignment scheme is also proposed for dynamic traffic, termed last-fit-first wavelength assignment, in which the wavelength channel with the maximum index is assigned first to a lightpath request. Due to the heterogeneous structure of link capacities and the wavelength continuity constraint, the wavelength channels with maximum indexes are utilized for minimum hop routes. Similarly, the wavelength channels with minimum indexes are utilized for multi-hop routes between source and destination pairs. The proposed scheme has lower blocking probability values compared to the existing heuristic for wavelength assignments. Finally, numerical results are computed in different network scenarios and are approximately equal to the values obtained from simulations.
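
    A minimal sketch of the last-fit-first idea under the wavelength-continuity constraint, with assumed data structures (a capacity table and per-link sets of busy wavelengths): only wavelength indices that exist on every link of the route are usable, so the smallest link capacity dominates, and the highest free common index is assigned first.

    ```python
    # Sketch: last-fit-first wavelength assignment with heterogeneous link capacities.
    def last_fit_first(route_links, in_use):
        """route_links: list of link ids; in_use[link] = set of busy wavelength indices."""
        usable = min(CAPACITY[l] for l in route_links)        # continuity constraint
        for w in range(usable - 1, -1, -1):                   # highest index first
            if all(w not in in_use[l] for l in route_links):
                for l in route_links:
                    in_use[l].add(w)
                return w
        return None                                           # request blocked

    CAPACITY = {"A-B": 8, "B-C": 4, "C-D": 8}                 # channels per link (assumed)
    busy = {l: set() for l in CAPACITY}
    print(last_fit_first(["A-B", "B-C"], busy))               # -> 3 (limited by B-C's 4 channels)
    print(last_fit_first(["A-B", "B-C", "C-D"], busy))        # -> 2
    ```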

  18. Carrying capacity: maintaining outdoor recreation quality

    Treesearch

    David W. Lime; George H. Stankey

    1971-01-01

    A discussion of (1) what is meant by the concept of recreational carrying capacity; (2) what is known about capacities in terms of both how resources and experience of visitors are affected by recreational use; and (3) what alternative procedures the administrator can use to manage both resources and visitors for capacity.

  19. Concept and Connotation of Water Resources Carrying Capacity in Water Ecological Civilization Construction

    NASA Astrophysics Data System (ADS)

    Chao, Zhilong; Song, Xiaoyu; Feng, Xianghua

    2018-01-01

    Water ecological civilization construction is based on the water resources carrying capacity, guided by the sustainable development concept, adhered to the human-water harmony thoughts. This paper has comprehensive analyzed the concept and characteristics of the carrying capacity of water resources in the water ecological civilization construction, and discussed the research methods and evaluation index system of water carrying capacity in the water ecological civilization construction, finally pointed out that the problems and solutions of water carrying capacity in the water ecological civilization construction and put forward the future research prospect.

  20. Alumni's perception of public health informatics competencies: lessons from the Graduate Program of Public Health, Faculty of Medicine, Universitas Gadjah Mada, Indonesia.

    PubMed

    Fuad, Anis; Sanjaya, Guardian Yoki; Lazuardi, Lutfan; Rahmanti, Annisa Ristya; Hsu, Chien-Yeh

    2013-01-01

    Public health informatics has been defined as the systematic application of information and computer science and technology to public health practice, research, and learning [1]. Unfortunately, limited reports exist concerning capacity building strategies to improve the public health informatics workforce in limited-resource settings. In Indonesia, only three universities, including Universitas Gadjah Mada (UGM), offer a master degree program in a related public health informatics discipline. UGM started a new dedicated master program on Health Management Information Systems in 2005, under the auspices of the Graduate Program of Public Health at the Faculty of Medicine. This is the first tracer study of the alumni aiming to a) identify the gaps between the curriculum and their current jobs and b) describe their perception of public health informatics competencies. We distributed questionnaires to 114 alumni with a 36.84% response rate. Despite the low response rate, this study provides valuable input for setting up appropriate competencies, curricula and capacity building strategies for the public health informatics workforce in Indonesia.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaustad, K.L.; De Steese, J.G.

    A computer program was developed to analyze the viability of integrating superconducting magnetic energy storage (SMES) with proposed wind farm scenarios at a site near Browning, Montana. The program simulated an hour-by-hour account of the charge/discharge history of a SMES unit for a representative wind-speed year. Effects of power output, storage capacity, and power conditioning capability on SMES performance characteristics were analyzed on a seasonal, diurnal, and hourly basis. The SMES unit was assumed to be charged during periods when power output of the wind resource exceeded its average value. Energy was discharged from the SMES unit into the grid during periods of low wind speed to compensate for below-average output of the wind resource. The option of using SMES to provide power continuity for a wind farm supplemented by combustion turbines was also investigated. Levelizing the annual output of large wind energy systems operating in the Blackfeet area of Montana was found to require a storage capacity too large to be economically viable. However, it appears that intermediate-sized SMES economically levelize the wind energy output on a seasonal basis.
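
    The charge/discharge rule described above reduces to a simple hour-by-hour loop; the sketch below uses invented ratings and a short made-up wind trace rather than the Browning-site data or the original program's logic.

    ```python
    # Hour-by-hour sketch: charge storage when wind output exceeds its average,
    # discharge toward the grid when output falls below average.
    WIND_MW = [12, 18, 25, 30, 22, 10, 5, 8, 15, 28, 33, 14]   # assumed hourly wind output
    PCS_MW = 10.0        # power-conditioning (charge/discharge) rating, assumed
    E_MAX_MWH = 40.0     # storage energy capacity, assumed

    avg = sum(WIND_MW) / len(WIND_MW)
    energy, delivered = 0.0, []
    for wind in WIND_MW:
        if wind > avg:                                   # absorb above-average output
            charge = min(wind - avg, PCS_MW, E_MAX_MWH - energy)
            energy += charge
            delivered.append(wind - charge)
        else:                                            # fill in below-average output
            discharge = min(avg - wind, PCS_MW, energy)
            energy -= discharge
            delivered.append(wind + discharge)
    print("average:", round(avg, 1), "delivered:", [round(x, 1) for x in delivered])
    ```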

  2. Classification of CO2 Geologic Storage: Resource and Capacity

    USGS Publications Warehouse

    Frailey, S.M.; Finley, R.J.

    2009-01-01

    The use of the term capacity to describe possible geologic storage implies a realistic or likely volume of CO2 to be sequestered. Poor data quantity and quality may lead to very high uncertainty in the storage estimate. Use of the term "storage resource" alleviates the implied certainty of the term "storage capacity". This is especially important to non-scientists (e.g. policy makers) because "capacity" is commonly used to describe the very specific and more certain quantities such as volume of a gas tank or a hotel's overnight guest limit. Resource is a term used in the classification of oil and gas accumulations to infer lesser certainty in the commercial production of oil and gas. Likewise for CO2 sequestration, a suspected porous and permeable zone can be classified as a resource, but capacity can only be estimated after a well is drilled into the formation and a relatively higher degree of economic and regulatory certainty is established. Storage capacity estimates are lower risk or higher certainty compared to storage resource estimates. In the oil and gas industry, prospective resource and contingent resource are used for estimates with less data and certainty. Oil and gas reserves are classified as Proved and Unproved, and by analogy, capacity can be classified similarly. The highest degree of certainty for an oil or gas accumulation is Proved, Developed Producing (PDP) Reserves. For CO2 sequestration this could be Proved Developed Injecting (PDI) Capacity. A geologic sequestration storage classification system is developed by analogy to that used by the oil and gas industry. When a CO2 sequestration industry emerges, storage resource and capacity estimates will be considered a company asset and consequently regulated by the Securities and Exchange Commission. Additionally, storage accounting and auditing protocols will be required to confirm projected storage estimates and assignment of credits from actual injection. An example illustrates the use of these terms and how storage classification changes as new data become available. © 2009 Elsevier Ltd. All rights reserved.

  3. Discrete Resource Allocation in Visual Working Memory

    ERIC Educational Resources Information Center

    Barton, Brian; Ester, Edward F.; Awh, Edward

    2009-01-01

    Are resources in visual working memory allocated in a continuous or a discrete fashion? On one hand, flexible resource models suggest that capacity is determined by a central resource pool that can be flexibly divided such that items of greater complexity receive a larger share of resources. On the other hand, if capacity in working memory is…

  4. Bits and bytes: the future of radiology lies in informatics and information technology.

    PubMed

    Brink, James A; Arenson, Ronald L; Grist, Thomas M; Lewin, Jonathan S; Enzmann, Dieter

    2017-09-01

    Advances in informatics and information technology are sure to alter the practice of medical imaging and image-guided therapies substantially over the next decade. Each element of the imaging continuum will be affected by substantial increases in computing capacity coincident with the seamless integration of digital technology into our society at large. This article focuses primarily on areas where this IT transformation is likely to have a profound effect on the practice of radiology. • Clinical decision support ensures consistent and appropriate resource utilization. • Big data enables correlation of health information across multiple domains. • Data mining advances the quality of medical decision-making. • Business analytics allow radiologists to maximize the benefits of imaging resources.

  5. Decision-Theoretic Control of Planetary Rovers

    NASA Technical Reports Server (NTRS)

    Zilberstein, Shlomo; Washington, Richard; Bernstein, Daniel S.; Mouaddib, Abdel-Illah; Morris, Robert (Technical Monitor)

    2003-01-01

    Planetary rovers are small unmanned vehicles equipped with cameras and a variety of sensors used for scientific experiments. They must operate under tight constraints over such resources as operation time, power, storage capacity, and communication bandwidth. Moreover, the limited computational resources of the rover limit the complexity of on-line planning and scheduling. We describe two decision-theoretic approaches to maximize the productivity of planetary rovers: one based on adaptive planning and the other on hierarchical reinforcement learning. Both approaches map the problem into a Markov decision problem and attempt to solve a large part of the problem off-line, exploiting the structure of the plan and independence between plan components. We examine the advantages and limitations of these techniques and their scalability.
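
    A tiny value-iteration sketch of the Markov-decision framing (illustrative battery states, actions, and rewards; not the adaptive-planning or hierarchical reinforcement-learning methods the paper develops):

    ```python
    # Value iteration on a toy rover MDP: states track remaining battery units,
    # actions trade science reward against energy cost (all values assumed).
    STATES = range(5)                    # remaining battery units 0..4
    ACTIONS = {"idle": (0, 0.0), "drive": (1, 1.0), "experiment": (2, 5.0)}
    GAMMA = 0.95

    V = [0.0] * len(STATES)
    for _ in range(200):                 # iterate to (near) convergence
        V = [max((0.0 if s - cost < 0 else reward + GAMMA * V[s - cost])
                 for cost, reward in ACTIONS.values())
             for s in STATES]

    policy = [max(ACTIONS, key=lambda a: (0.0 if s - ACTIONS[a][0] < 0
                                          else ACTIONS[a][1] + GAMMA * V[s - ACTIONS[a][0]]))
              for s in STATES]
    print(dict(zip(STATES, policy)))     # greedy action per battery level
    ```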

  6. Technical, Managerial and Financial (TMF) Capacity Resources for Small Drinking Water Systems

    EPA Pesticide Factsheets

    Resources are available to help public water systems build the technical, managerial and financial (TMF) capacity. TMF capacity is necessary to achieve and maintain long-term sustainability and compliance with national safe drinking water regulations.

  7. Plant intelligence.

    PubMed

    Trewavas, Anthony

    2005-09-01

    Intelligent behavior is a complex adaptive phenomenon that has evolved to enable organisms to deal with variable environmental circumstances. Maximizing fitness requires skill in foraging for necessary resources (food) in competitive circumstances and is probably the activity in which intelligent behavior is most easily seen. Biologists suggest that intelligence encompasses the characteristics of detailed sensory perception, information processing, learning, memory, choice, optimisation of resource sequestration with minimal outlay, self-recognition, and foresight by predictive modeling. All these properties are concerned with a capacity for problem solving in recurrent and novel situations. Here I review the evidence that individual plant species exhibit all of these intelligent behavioral capabilities but do so through phenotypic plasticity, not movement. Furthermore it is in the competitive foraging for resources that most of these intelligent attributes have been detected. Plants should therefore be regarded as prototypical intelligent organisms, a concept that has considerable consequences for investigations of whole plant communication, computation and signal transduction.

  8. Long live the Data Scientist, but can he/she persist?

    NASA Astrophysics Data System (ADS)

    Wyborn, L. A.

    2011-12-01

    In recent years the fourth paradigm of data intensive science has slowly taken hold as the increased capacity of instruments and an increasing number of instruments (in particular sensor networks) have changed how fundamental research is undertaken. Most modern scientific research is about digital capture of data direct from instruments, processing it by computers, storing the results on computers and only publishing a small fraction of data in hard copy publications. At the same time, the rapid increase in capacity of supercomputers, particularly at petascale, means that far larger data sets can be analysed, at greater resolution than previously possible. The new cloud computing paradigm, which allows distributed data, software and compute resources to be linked by seamless workflows, is creating new opportunities in processing of high volumes of data for an increasingly larger number of researchers. However, to take full advantage of these compute resources, data sets for analysis have to be aggregated from multiple sources to create high performance data sets. These new technology developments require that scientists become more skilled in data management and/or have a higher degree of computer literacy. In almost every science discipline there is now an X-informatics branch and a computational X branch (e.g., Geoinformatics and Computational Geoscience): both require a new breed of researcher that has skills in both the science fundamentals and also knowledge of some ICT aspects (computer programming, database design and development, data curation, software engineering). People who can operate in both science and ICT are increasingly known as 'data scientists'. Data scientists are a critical element of many large scale earth and space science informatics projects, particularly those that are tackling current grand challenges at an international level on issues such as climate change, hazard prediction and sustainable development of our natural resources. These projects by their very nature require the integration of multiple digital data sets from multiple sources. Often the preparation of the data for computational analysis can take months and requires painstaking attention to detail to ensure that anomalies identified are real and are not just artefacts of the data preparation and/or the computational analysis. Although data scientists are increasingly vital to successful data intensive earth and space science projects, unless they are recognised for their capabilities in both the science and the computational domains, they are likely to migrate to either a science role or an ICT role as their career advances. Most reward and recognition systems do not recognise those with skills in both; hence, getting trained data scientists to persist beyond one or two projects can be a challenge. Those data scientists that persist in the profession are characteristically committed and enthusiastic people who have the support of their organisations to take on this role. They also tend to be people who share developments and are critical to the success of the open source software movement. However, the fact remains that survival of the data scientist as a species is being threatened unless something is done to recognise their invaluable contributions to the new fourth paradigm of science.

  9. Algorithmic complexity of quantum capacity

    NASA Astrophysics Data System (ADS)

    Oskouei, Samad Khabbazi; Mancini, Stefano

    2018-04-01

    We analyze the notion of quantum capacity from the perspective of algorithmic (descriptive) complexity. To this end, we resort to the concept of semi-computability in order to describe quantum states and quantum channel maps. We introduce algorithmic entropies (like algorithmic quantum coherent information) and derive relevant properties for them. Then we show that quantum capacity based on semi-computable concept equals the entropy rate of algorithmic coherent information, which in turn equals the standard quantum capacity. Thanks to this, we finally prove that the quantum capacity, for a given semi-computable channel, is limit computable.
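
    For reference, the standard quantum capacity that the algorithmic entropy rate is shown to equal can be written via the coherent information; this is the usual LSD-theorem statement in conventional notation, not necessarily the notation used in this paper.

    ```latex
    % Coherent information of a state rho through channel N (|psi_rho> purifies rho),
    % and the standard regularized (LSD) quantum capacity -- stated for reference.
    I_c(\rho, \mathcal{N}) = S\big(\mathcal{N}(\rho)\big)
      - S\big((\mathrm{id}\otimes\mathcal{N})(|\psi_\rho\rangle\langle\psi_\rho|)\big),
    \qquad
    Q(\mathcal{N}) = \lim_{n\to\infty} \frac{1}{n}\,\max_{\rho}\, I_c\big(\rho, \mathcal{N}^{\otimes n}\big).
    ```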

  10. Understanding the allocation of attention when faced with varying perceptual load in partial report: a computational approach.

    PubMed

    Kyllingsbæk, Søren; Sy, Jocelyn L; Giesbrecht, Barry

    2011-05-01

    The allocation of visual processing capacity is a key topic in studies and theories of visual attention. The load theory of Lavie (1995) proposes that allocation happens in two steps: processing resources are first allocated to task-relevant stimuli, and the remaining capacity then 'spills over' to task-irrelevant distractors. In contrast, the Theory of Visual Attention (TVA) proposed by Bundesen (1990) assumes that allocation happens in a single step in which processing capacity is allocated to all stimuli, both task-relevant and task-irrelevant, in proportion to their relative attentional weight. Here we present data from two partial report experiments in which we varied the number and discriminability of the task-irrelevant stimuli (Experiment 1) and perceptual load (Experiment 2). TVA fitted the data of the two experiments well, thus favoring the simpler explanation with a single step of capacity allocation. We also show that the effects of varying perceptual load can only be explained by a combined effect of allocation of processing capacity and limits in visual working memory. Finally, we link the results to processing capacity understood at the neural level, based on the neural theory of visual attention by Bundesen et al. (2005). Copyright © 2010 Elsevier Ltd. All rights reserved.
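
    To make the single-step allocation rule concrete, the minimal sketch below distributes a total processing capacity across stimuli in proportion to their relative attentional weights; the capacity value and weights are hypothetical illustrative numbers, not parameters estimated in the study.

      # Minimal sketch of TVA-style single-step capacity allocation.
      # Total capacity and attentional weights are hypothetical values.
      def allocate_capacity(total_capacity, weights):
          """Distribute processing capacity in proportion to attentional weights."""
          weight_sum = sum(weights.values())
          return {stim: total_capacity * w / weight_sum for stim, w in weights.items()}

      # Two task-relevant targets and one low-weight distractor (assumed numbers).
      weights = {"target_1": 1.0, "target_2": 1.0, "distractor": 0.3}
      print(allocate_capacity(30.0, weights))  # roughly 13.0, 13.0 and 3.9 items/s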

  11. Mentorship and competencies for applied chronic disease epidemiology.

    PubMed

    Lengerich, Eugene J; Siedlecki, Jennifer C; Brownson, Ross; Aldrich, Tim E; Hedberg, Katrina; Remington, Patrick; Siegel, Paul Z

    2003-01-01

    To understand the potential and establish a framework for mentoring as a method to develop professional competencies of state-level applied chronic disease epidemiologists, model mentorship programs were reviewed, specific competencies were identified, and competencies were then matched to essential public health services. Although few existing mentorship programs in public health were identified, common themes in other professional mentorship programs support the potential of mentoring as an effective means to develop capacity for applied chronic disease epidemiology. Proposed competencies for chronic disease epidemiologists in a mentorship program include planning, analysis, communication, basic public health, informatics and computer knowledge, and cultural diversity. Mentoring may constitute a viable strategy to build chronic disease epidemiology capacity, especially in public health agencies where resource and personnel system constraints limit opportunities to recruit and hire new staff.

  12. Relationship between human resource ability and market access capacity on business performance. (case study of wood craft micro- and small-scale industries in Gianyar Regency, Bali)

    NASA Astrophysics Data System (ADS)

    Sukartini, N. W.; Sudarmini, N. M.; Lasmini, N. K.

    2018-01-01

    The aims of this research are to: (1) analyze the influence of human resource ability on market access capacity in the wood craft micro- and small-scale industry; (2) analyze the effect of market access capacity on business performance; and (3) analyze the influence of human resource ability on business performance. Data were collected using questionnaires, interviews, observations, and literature studies. The resulting data were analyzed using Structural Equation Modeling (SEM). The results of the analysis show that (1) there is a positive and significant influence of human resource ability on market access capacity in wood craft micro- and small-scale industries in Gianyar; (2) there is a positive and significant influence of market access capacity on business performance; and (3) there is a positive and significant influence of human resource ability on business performance. To improve market access capacity and business performance, it is recommended that human resource ability be improved through training; government and higher education institutions are expected to play a role in improving the ability of human resources (craftsmen) through the provision of training programs.

  13. An Analysis of Cloud Computing with Amazon Web Services for the Atmospheric Science Data Center

    NASA Astrophysics Data System (ADS)

    Gleason, J. L.; Little, M. M.

    2013-12-01

    NASA science and engineering efforts rely heavily on compute and data handling systems. The nature of NASA science data is such that it is not restricted to NASA users; instead, it is widely shared across a globally distributed user community including scientists, educators, policy decision makers, and the public. Therefore NASA science computing is a candidate use case for cloud computing, where compute resources are outsourced to an external vendor. Amazon Web Services (AWS) is a commercial cloud computing service developed to use excess computing capacity at Amazon, and it potentially provides an alternative to costly and potentially underutilized dedicated acquisitions whenever NASA scientists or engineers require additional data processing. AWS aims to provide a simplified avenue for NASA scientists and researchers to share large, complex data sets with external partners and the public. AWS has been used extensively by JPL for a wide range of computing needs and was previously tested on a NASA Agency basis during the Nebula testing program. Its ability to support the needs of the Langley Science Directorate still has to be evaluated by integrating it with real-world operational needs across NASA, along with the associated maturity that would come with that. The strengths and weaknesses of this architecture and its ability to support general science and engineering applications have been demonstrated during the previous testing. The Langley Office of the Chief Information Officer, in partnership with the Atmospheric Science Data Center (ASDC), has established a pilot business interface to utilize AWS cloud computing resources on an organization- and project-level pay-per-use model. This poster discusses an effort to evaluate the feasibility of the pilot business interface from a project-level perspective by specifically using a processing scenario involving the Clouds and Earth's Radiant Energy System (CERES) project.

  14. Climate Modeling Computing Needs Assessment

    NASA Astrophysics Data System (ADS)

    Petraska, K. E.; McCabe, J. D.

    2011-12-01

    This paper discusses early findings of an assessment of computing needs for NASA science, engineering and flight communities. The purpose of this assessment is to document a comprehensive set of computing needs that will allow us to better evaluate whether our computing assets are adequately structured to meet evolving demand. The early results are interesting, already pointing out improvements we can make today to get more out of the computing capacity we have, as well as potential game-changing innovations for the future in how we apply information technology to science computing. Our objective is to learn how to leverage our resources in the best way possible to do more science for less money. Our approach in this assessment is threefold: developing use case studies for science workflows; creating a taxonomy and structure for describing science computing requirements; and characterizing agency computing, analysis, and visualization resources. As projects evolve, science data sets increase in a number of ways: in size, scope, timelines, complexity, and fidelity. Generating, processing, moving, and analyzing these data sets places distinct and discernible requirements on underlying computing, analysis, storage, and visualization systems. The initial focus group for this assessment is the Earth Science modeling community within NASA's Science Mission Directorate (SMD). As the assessment evolves, this focus will expand to other science communities across the agency. We will discuss our use cases, our framework for requirements and our characterizations, as well as our interview process, what we learned, and how we plan to improve our materials after using them in the first round of interviews in the Earth Science modeling community. We will describe our plans for expanding this assessment, first into the Earth Science data analysis and remote sensing communities, and then throughout the full community of science, engineering and flight at NASA.

  15. Cognitive Modeling of Individual Variation in Reference Production and Comprehension

    PubMed Central

    Hendriks, Petra

    2016-01-01

    A challenge for most theoretical and computational accounts of linguistic reference is the observation that language users vary considerably in their referential choices. Part of the variation observed among and within language users and across tasks may be explained from variation in the cognitive resources available to speakers and listeners. This paper presents a computational model of reference production and comprehension developed within the cognitive architecture ACT-R. Through simulations with this ACT-R model, it is investigated how cognitive constraints interact with linguistic constraints and features of the linguistic discourse in speakers’ production and listeners’ comprehension of referring expressions in specific tasks, and how this interaction may give rise to variation in referential choice. The ACT-R model of reference explains and predicts variation among language users in their referential choices as a result of individual and task-related differences in processing speed and working memory capacity. Because of limitations in their cognitive capacities, speakers sometimes underspecify or overspecify their referring expressions, and listeners sometimes choose incorrect referents or are overly liberal in their interpretation of referring expressions. PMID:27092101

  16. How much spare capacity is necessary for the security of resource networks?

    NASA Astrophysics Data System (ADS)

    Zhao, Qian-Chuan; Jia, Qing-Shan; Cao, Yang

    2007-01-01

    The balance between the supply and demand of some kind of resource is critical for the functionality and security of many complex networks. Local contingencies that break this balance can cause a global collapse. These contingencies are usually dealt with by spare capacity, which is costly especially when the network capacity (the total amount of the resource generated/consumed in the network) grows. This paper studies the relationship between the spare capacity and the collapse probability under separation contingencies when the network capacity grows. Our results are obtained based on the analysis of the existence probability of balanced partitions, which is a measure of network security when network splitting is unavoidable. We find that a network with growing capacity will inevitably collapse after a separation contingency if the spare capacity in each island increases slower than a linear function of the network capacity and there is no suitable global coordinator.

  17. The state of human dimensions capacity for natural resource management: needs, knowledge, and resources

    USGS Publications Warehouse

    Sexton, Natalie R.; Leong, Kirsten M.; Milley, Brad J.; Clarke, Melinda M.; Teel, Tara L.; Chase, Mark A.; Dietsch, Alia M.

    2013-01-01

    The social sciences have become increasingly important in understanding natural resource management contexts and audiences, and are essential in the design and delivery of effective and durable management strategies. Yet many agencies and organizations do not have the necessary capacity to address the human dimensions (HD) of natural resource management. We draw on the textbook definition of HD: how and why people value natural resources, what benefits people seek and derive from those resources, and how people affect and are affected by those resources and their management (Decker, Brown, and Seimer 2001). Clearly articulating how HD information can be used and integrated into natural resource management planning and decision-making is an important challenge faced by the HD field. To address this challenge, we formed a collaborative team to explore the issue of HD capacity-building for natural resource organizations and to advance the HD field. We define HD capacity as activities, efforts, and resources that enhance the ability of HD researchers and practitioners and natural resource managers and decision-makers to understand and address the social aspects of conservation. Specifically, we sought to examine current barriers to integration of HD into natural resource management, knowledge needed to improve HD capacity, and existing HD tools, resources, and training opportunities. We conducted a needs assessment of HD experts and practitioners, developed a framework for considering HD activities that can contribute both directly and indirectly throughout any phase of an adaptive management cycle, and held a workshop to review preliminary findings and gather additional input through breakout group discussions. This paper provides highlights from our collaborative initiative to help frame and inform future HD capacity-building efforts for natural resource organizations, and also provides a list of existing human dimensions tools and resources.

  18. Dividing Attention within and between Hemispheres: Testing a Multiple Resources Approach to Limited-Capacity Information Processing.

    ERIC Educational Resources Information Center

    Friedman, Alinda; And Others

    1982-01-01

    Two experiments tested the limiting case of a multiple resources approach to resource allocation in information processing. Results contradict a single-capacity model, supporting the idea that the hemispheres' resource supplies are independent and have implications for both cerebral specialization and divided attention issues. (Author/PN)

  19. Survey on Security Issues in Cloud Computing and Associated Mitigation Techniques

    NASA Astrophysics Data System (ADS)

    Bhadauria, Rohit; Sanyal, Sugata

    2012-06-01

    Cloud computing holds the potential to eliminate the requirement for setting up high-cost computing infrastructure for the IT-based solutions and services that the industry uses. It promises to provide a flexible IT architecture, accessible through the internet from lightweight portable devices. This would allow a multi-fold increase in the capacity and capabilities of existing and new software. In a cloud computing environment, all data reside on a set of networked resources, enabling the data to be accessed through virtual machines. Since these data centers may lie in any corner of the world, beyond the reach and control of users, there are multifarious security and privacy challenges that need to be understood and addressed. Also, one can never rule out the possibility of a server breakdown, which has been witnessed rather often in recent times. There are various issues that need to be addressed with respect to security and privacy in a cloud computing scenario. This extensive survey paper aims to elaborate and analyze the numerous unresolved issues threatening cloud computing adoption and diffusion and affecting the various stakeholders linked to it.

  20. Bridging the digital divide by increasing computer and cancer literacy: community technology centers for head-start parents and families.

    PubMed

    Salovey, Peter; Williams-Piehota, Pamela; Mowad, Linda; Moret, Marta Elisa; Edlund, Denielle; Andersen, Judith

    2009-01-01

    This article describes the establishment of two community technology centers affiliated with Head Start early childhood education programs focused especially on Latino and African American parents of children enrolled in Head Start. A 6-hour course concerned with computer and cancer literacy was presented to 120 parents and other community residents who earned a free, refurbished, Internet-ready computer after completing the program. Focus groups provided the basis for designing the structure and content of the course and modifying it during the project period. An outcomes-based assessment comparing program participants with 70 nonparticipants at baseline, immediately after the course ended, and 3 months later suggested that the program increased knowledge about computers and their use, knowledge about cancer and its prevention, and computer use including health information-seeking via the Internet. The creation of community computer technology centers requires the availability of secure space, capacity of a community partner to oversee project implementation, and resources of this partner to ensure sustainability beyond core funding.

  1. A Study on the Sources of Resources and Capacity Building in Resource Mobilization: Case of Private Chartered Universities in Nakuru Town, Kenya

    ERIC Educational Resources Information Center

    Kipchumba, Simon Kibet; Zhimin, Liu; Chelagat, Robert

    2013-01-01

    The purpose of this study was to review and analyze the resources needs and sources of resources and level of training and capacity building in resource mobilization in Kenyan private chartered universities. The study employed a descriptive survey research design. Purposeful sampling technique was used to select 63 respondents (staff) from three…

  2. An innovative method for water resources carrying capacity research--Metabolic theory of regional water resources.

    PubMed

    Ren, Chongfeng; Guo, Ping; Li, Mo; Li, Ruihuan

    2016-02-01

    The shortage and uneven spatial and temporal distribution of water resources have seriously restricted the sustainable development of regional society and economy. In this study, a metabolic theory for regional water resources was proposed by introducing the concept of biological metabolism into the carrying capacity of regional water resources. In the organic metabolic process of water resources, the socio-economic system consumes water resources, while products, services, pollutants, etc. are output. Furthermore, an evaluation index system which takes into account the characteristics of the regional water resources, the socio-economic system and the sustainable development principle was established based on the proposed theory. The theory was then applied to a case study to demonstrate its applicability. Further, suggestions aimed at improving the regional water resources carrying capacity were given on the basis of a comprehensive analysis of the current water resources situation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Estimating landscape carrying capacity through maximum clique analysis

    USGS Publications Warehouse

    Donovan, Therese; Warrington, Greg; Schwenk, W. Scott; Dinitz, Jeffrey H.

    2012-01-01

    Habitat suitability (HS) maps are widely used tools in wildlife science and establish a link between wildlife populations and landscape pattern. Although HS maps spatially depict the distribution of optimal resources for a species, they do not reveal the population size a landscape is capable of supporting--information that is often crucial for decision makers and managers. We used a new approach, "maximum clique analysis," to demonstrate how HS maps for territorial species can be used to estimate the carrying capacity, N(k), of a given landscape. We estimated the N(k) of Ovenbirds (Seiurus aurocapillus) and bobcats (Lynx rufus) in an 1153-km2 study area in Vermont, USA. These two species were selected to highlight different approaches in building an HS map as well as computational challenges that can arise in a maximum clique analysis. We derived 30-m2 HS maps for each species via occupancy modeling (Ovenbird) and by resource utilization modeling (bobcats). For each species, we then identified all pixel locations on the map (points) that had sufficient resources in the surrounding area to maintain a home range (termed a "pseudo-home range"). These locations were converted to a mathematical graph, where any two points were linked if two pseudo-home ranges could exist on the landscape without violating territory boundaries. We used the program Cliquer to find the maximum clique of each graph. The resulting estimates of N(k) = 236 Ovenbirds and N(k) = 42 female bobcats were sensitive to different assumptions and model inputs. Estimates of N(k) via alternative, ad hoc methods were 1.4 to > 30 times greater than the maximum clique estimate, suggesting that the alternative results may be upwardly biased. The maximum clique analysis was computationally intensive but could handle problems with < 1500 total pseudo-home ranges (points). Given present computational constraints, it is best suited for species that occur in clustered distributions (where the problem can be broken into several, smaller problems), or for species with large home ranges relative to grid scale where resampling the points to a coarser resolution can reduce the problem to manageable proportions.
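
    The graph step at the heart of this approach can be prototyped with general-purpose tools. The sketch below is a toy illustration only: the pseudo-home-range centers and minimum spacing are invented, and it uses NetworkX rather than the program Cliquer employed in the study.

      # Toy carrying-capacity estimate via a maximum clique of compatible territories.
      # Points and minimum spacing are assumed; they are not the study's HS maps.
      import itertools
      import math
      import networkx as nx

      points = [(0, 0), (1, 0), (0, 1), (3, 3), (4, 3), (3, 4)]  # pseudo-home-range centers
      min_spacing = 1.5  # centers closer than this violate territory boundaries

      G = nx.Graph()
      G.add_nodes_from(range(len(points)))
      for i, j in itertools.combinations(range(len(points)), 2):
          if math.dist(points[i], points[j]) >= min_spacing:
              G.add_edge(i, j)  # these two pseudo-home ranges can coexist

      # The largest set of mutually compatible territories estimates N(k).
      clique, size = nx.max_weight_clique(G, weight=None)
      print(size, clique)  # 2 territories fit on this toy landscape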

  4. Rich client data exploration and research prototyping for NOAA

    NASA Astrophysics Data System (ADS)

    Grossberg, Michael; Gladkova, Irina; Guch, Ingrid; Alabi, Paul; Shahriar, Fazlul; Bonev, George; Aizenman, Hannah

    2009-08-01

    Data from satellites and model simulations is increasing exponentially as observations and model computing power improve rapidly. Not only is technology producing more data, but it often comes from sources all over the world. Researchers and scientists who must collaborate are also located globally. This work presents a software design and technologies which will make it possible for groups of researchers to explore large data sets visually together without the need to download these data sets locally. The design will also make it possible to exploit high performance computing remotely and transparently to analyze and explore large data sets. Computer power, high quality sensing, and data storage capacity have improved at a rate that outstrips our ability to develop software applications that exploit these resources. It is impractical for NOAA scientists to download all of the satellite and model data that may be relevant to a given problem and the computing environments available to a given researcher range from supercomputers to only a web browser. The size and volume of satellite and model data are increasing exponentially. There are at least 50 multisensor satellite platforms collecting Earth science data. On the ground and in the sea there are sensor networks, as well as networks of ground based radar stations, producing a rich real-time stream of data. This new wealth of data would have limited use were it not for the arrival of large-scale high-performance computation provided by parallel computers, clusters, grids, and clouds. With these computational resources and vast archives available, it is now possible to analyze subtle relationships which are global, multi-modal and cut across many data sources. Researchers, educators, and even the general public, need tools to access, discover, and use vast data center archives and high performance computing through a simple yet flexible interface.

  5. Real-time simulation of a spiking neural network model of the basal ganglia circuitry using general purpose computing on graphics processing units.

    PubMed

    Igarashi, Jun; Shouno, Osamu; Fukai, Tomoki; Tsujino, Hiroshi

    2011-11-01

    Real-time simulation of a biologically realistic spiking neural network is necessary for evaluation of its capacity to interact with real environments. However, the real-time simulation of such a neural network is difficult due to its high computational costs that arise from two factors: (1) vast network size and (2) the complicated dynamics of biologically realistic neurons. In order to address these problems, mainly the latter, we chose to use general purpose computing on graphics processing units (GPGPUs) for simulation of such a neural network, taking advantage of the powerful computational capability of a graphics processing unit (GPU). As a target for real-time simulation, we used a model of the basal ganglia that has been developed according to electrophysiological and anatomical knowledge. The model consists of heterogeneous populations of 370 spiking model neurons, including computationally heavy conductance-based models, connected by 11,002 synapses. Simulation of the model has not yet been performed in real-time using a general computing server. By parallelization of the model on the NVIDIA Geforce GTX 280 GPU in data-parallel and task-parallel fashion, faster-than-real-time simulation was robustly realized with only one-third of the GPU's total computational resources. Furthermore, we used the GPU's full computational resources to perform faster-than-real-time simulation of three instances of the basal ganglia model; these instances consisted of 1100 neurons and 33,006 synapses and were synchronized at each calculation step. Finally, we developed software for simultaneous visualization of faster-than-real-time simulation output. These results suggest the potential power of GPGPU techniques in real-time simulation of realistic neural networks. Copyright © 2011 Elsevier Ltd. All rights reserved.
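
    The speed-up reported above comes from updating the state of every neuron in parallel on the GPU. As a rough, hedged illustration of that data-parallel step, the NumPy sketch below advances all membrane potentials in a single vectorized operation; it uses a simplified leaky integrate-and-fire rule and invented constants as a CPU stand-in, not the paper's conductance-based models or its CUDA implementation.

      # Vectorized (data-parallel) neuron update, analogous in spirit to a GPU kernel.
      # Simplified leaky integrate-and-fire dynamics with assumed constants.
      import numpy as np

      n, dt, tau = 370, 0.1, 10.0          # neurons, time step (ms), membrane time constant (ms)
      v_rest, v_thresh = -65.0, -50.0      # resting and threshold potentials (mV)
      v = np.full(n, v_rest)               # membrane potentials
      i_syn = np.random.rand(n) * 20.0     # constant synaptic drive (mV equivalent, assumed)

      for _ in range(1000):                # 100 ms of simulated time
          v += dt * ((v_rest - v) + i_syn) / tau   # one update for all neurons at once
          spiked = v >= v_thresh
          v[spiked] = v_rest               # reset the neurons that crossed threshold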

  6. Threshold-based queuing system for performance analysis of cloud computing system with dynamic scaling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shorgin, Sergey Ya.; Pechinkin, Alexander V.; Samouylov, Konstantin E.

    Cloud computing is a promising technology to manage and improve utilization of computing center resources to deliver various computing and IT services. For the purpose of energy saving, there is no need to operate many servers unnecessarily under light loads, and they are switched off. On the other hand, some servers should be switched on in heavy load cases to prevent very long delays. Thus, waiting times and system operating cost can be maintained at an acceptable level by dynamically adding or removing servers. One more fact that should be taken into account is the significant server setup cost and activation time. For better energy efficiency, a cloud computing system should not react to instantaneous increases or decreases of load. That is the main motivation for using queuing systems with hysteresis for cloud computing system modelling. In the paper, we provide a model of a cloud computing system in terms of a multiple-server, threshold-based, infinite-capacity queuing system with hysteresis and non-instantaneous server activation. For the proposed model, we develop a method for computing the steady-state probabilities that allows estimation of a number of performance measures.
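
    The hysteresis idea in this model, adding servers only when load crosses an upper threshold and removing them only when it falls below a lower one so that instantaneous fluctuations are ignored, can be sketched as a simple controller. The thresholds, server limits and queue-length trace below are illustrative assumptions, not the paper's analytical queuing model.

      # Toy hysteresis controller for dynamic server scaling (assumed thresholds and trace).
      def scale_with_hysteresis(queue_lengths, low=5, high=20, servers=1, max_servers=8):
          history = []
          for q in queue_lengths:
              if q > high and servers < max_servers:
                  servers += 1        # heavy load: activate another server
              elif q < low and servers > 1:
                  servers -= 1        # light load: switch a server off to save energy
              history.append(servers) # between the thresholds nothing changes
          return history

      print(scale_with_hysteresis([2, 8, 25, 30, 18, 12, 4, 3, 22]))
      # -> [1, 1, 2, 3, 3, 3, 2, 1, 2]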

  7. 30 CFR 75.513 - Electric conductor; capacity and insulation.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Electric conductor; capacity and insulation. 75.513 Section 75.513 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR COAL... § 75.513 Electric conductor; capacity and insulation. [Statutory Provision] All electric conductors...

  8. 30 CFR 75.513 - Electric conductor; capacity and insulation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Electric conductor; capacity and insulation. 75.513 Section 75.513 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR COAL... § 75.513 Electric conductor; capacity and insulation. [Statutory Provision] All electric conductors...

  9. Resource-poor settings: infrastructure and capacity building: care of the critically ill and injured during pandemics and disasters: CHEST consensus statement.

    PubMed

    Geiling, James; Burkle, Frederick M; Amundson, Dennis; Dominguez-Cherit, Guillermo; Gomersall, Charles D; Lim, Matthew L; Luyckx, Valerie; Sarani, Babak; Uyeki, Timothy M; West, T Eoin; Christian, Michael D; Devereaux, Asha V; Dichter, Jeffrey R; Kissoon, Niranjan

    2014-10-01

    Planning for mass critical care (MCC) in resource-poor or constrained settings has been largely ignored, despite their large populations that are prone to suffer disproportionately from natural disasters. Addressing MCC in these settings has the potential to help vast numbers of people and also to inform planning for better-resourced areas. The Resource-Poor Settings panel developed five key question domains; defining the term resource poor and using the traditional phases of disaster (mitigation/preparedness/response/recovery), literature searches were conducted to identify evidence on which to answer the key questions in these areas. Given a lack of data upon which to develop evidence-based recommendations, expert-opinion suggestions were developed, and consensus was achieved using a modified Delphi process. The five key questions were then separated as follows: definition, infrastructure and capacity building, resources, response, and reconstitution/recovery of host nation critical care capabilities and research. Addressing these questions led the panel to offer 33 suggestions. Because of the large number of suggestions, the results have been separated into two sections: part 1, Infrastructure/Capacity in this article, and part 2, Response/Recovery/Research in the accompanying article. Lack of, or presence of, rudimentary ICU resources and limited capacity to enhance services further challenge resource-poor and constrained settings. Hence, capacity building entails preventative strategies and strengthening of primary health services. Assistance from other countries and organizations is needed to mount a surge response. Moreover, planning should include when to disengage and how the host nation can provide capacity beyond the mass casualty care event.

  10. Wind resource quality affected by high levels of renewables

    DOE PAGES

    Diakov, Victor

    2015-06-17

    For solar photovoltaic (PV) and wind resources, the capacity factor is an important parameter describing the quality of the resource. As the share of variable renewable resources (such as PV and wind) on the electric system increases, so does curtailment (and the fraction of time when it cannot be avoided). At high levels of renewable generation, curtailments effectively change the practical measure of resource quality from the capacity factor to the incremental capacity factor. The latter accounts only for generation during hours of no curtailment and is directly connected with the marginal capital cost of renewable generators for a given level of renewable generation during the year. Western U.S. wind generation is analyzed hourly for a system with 75% of annual generation from wind, and it is found that the value for the system of resources with equal capacity factors can vary by a factor of 2, which highlights the importance of using the incremental capacity factor instead. Finally, the effect is expected to be more pronounced in smaller geographic areas (or when transmission limitations are imposed) and less pronounced at lower levels of renewable energy in the system, where there is less curtailment.
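
    Under the definitions above, the incremental capacity factor counts only generation during hours with no curtailment. The short sketch below contrasts it with the ordinary capacity factor using an invented hourly series; the numbers and the simple accounting are illustrative assumptions, not the study's Western U.S. data.

      # Capacity factor vs. incremental capacity factor from hourly data (invented numbers).
      import numpy as np

      rated_mw  = 100.0
      potential = np.array([80, 95, 100, 60, 90, 20, 10, 70])  # MW the wind could produce each hour
      curtailed = np.array([ 0, 30,  50,  0, 40,  0,  0,  0])  # MW the grid turned away

      capacity_factor = potential.mean() / rated_mw
      no_curtailment  = curtailed == 0
      incremental_cf  = potential[no_curtailment].sum() / (rated_mw * len(potential))

      print(round(capacity_factor, 3), round(incremental_cf, 3))  # 0.656 vs 0.3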

  11. Adaptive capacity and community-based natural resource management.

    PubMed

    Armitage, Derek

    2005-06-01

    Why do some community-based natural resource management strategies perform better than others? Commons theorists have approached this question by developing institutional design principles to address collective choice situations, while other analysts have critiqued the underlying assumptions of community-based resource management. However, efforts to enhance community-based natural resource management performance also require an analysis of exogenous and endogenous variables that influence how social actors not only act collectively but do so in ways that respond to changing circumstances, foster learning, and build capacity for management adaptation. Drawing on examples from northern Canada and Southeast Asia, this article examines the relationship among adaptive capacity, community-based resource management performance, and the socio-institutional determinants of collective action, such as technical, financial, and legal constraints, and complex issues of politics, scale, knowledge, community and culture. An emphasis on adaptive capacity responds to a conceptual weakness in community-based natural resource management and highlights an emerging research and policy discourse that builds upon static design principles and the contested concepts in current management practice.

  12. Security and Cloud Outsourcing Framework for Economic Dispatch

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarker, Mushfiqur R.; Wang, Jianhui; Li, Zuyi

    The computational complexity and problem sizes of power grid applications have increased significantly with the advent of renewable resources and smart grid technologies. The current paradigm for solving these problems consists of in-house high-performance computing infrastructures, which have the drawbacks of high capital expenditure, maintenance, and limited scalability. Cloud computing is an ideal alternative due to its powerful computational capacity, rapid scalability, and high cost-effectiveness. A major challenge, however, remains in that the highly confidential grid data is susceptible to potential cyberattacks when outsourced to the cloud. In this work, a security and cloud outsourcing framework is developed for the Economic Dispatch (ED) linear programming application. The security framework transforms the ED linear program into a confidentiality-preserving linear program that masks both the data and the problem structure, thus enabling secure outsourcing to the cloud. Results show that for large grid test cases the performance gains and costs outperform the in-house infrastructure.

  13. Security and Cloud Outsourcing Framework for Economic Dispatch

    DOE PAGES

    Sarker, Mushfiqur R.; Wang, Jianhui; Li, Zuyi; ...

    2017-04-24

    The computational complexity and problem sizes of power grid applications have increased significantly with the advent of renewable resources and smart grid technologies. The current paradigm for solving these problems consists of in-house high-performance computing infrastructures, which have the drawbacks of high capital expenditure, maintenance, and limited scalability. Cloud computing is an ideal alternative due to its powerful computational capacity, rapid scalability, and high cost-effectiveness. A major challenge, however, remains in that the highly confidential grid data is susceptible to potential cyberattacks when outsourced to the cloud. In this work, a security and cloud outsourcing framework is developed for the Economic Dispatch (ED) linear programming application. The security framework transforms the ED linear program into a confidentiality-preserving linear program that masks both the data and the problem structure, thus enabling secure outsourcing to the cloud. Results show that for large grid test cases the performance gains and costs outperform the in-house infrastructure.

  14. Integrating computational methods to retrofit enzymes to synthetic pathways.

    PubMed

    Brunk, Elizabeth; Neri, Marilisa; Tavernelli, Ivano; Hatzimanikatis, Vassily; Rothlisberger, Ursula

    2012-02-01

    Microbial production of desired compounds provides an efficient framework for the development of renewable energy resources. To be competitive to traditional chemistry, one requirement is to utilize the full capacity of the microorganism to produce target compounds with high yields and turnover rates. We use integrated computational methods to generate and quantify the performance of novel biosynthetic routes that contain highly optimized catalysts. Engineering a novel reaction pathway entails addressing feasibility on multiple levels, which involves handling the complexity of large-scale biochemical networks while respecting the critical chemical phenomena at the atomistic scale. To pursue this multi-layer challenge, our strategy merges knowledge-based metabolic engineering methods with computational chemistry methods. By bridging multiple disciplines, we provide an integral computational framework that could accelerate the discovery and implementation of novel biosynthetic production routes. Using this approach, we have identified and optimized a novel biosynthetic route for the production of 3HP from pyruvate. Copyright © 2011 Wiley Periodicals, Inc.

  15. Evaluation of Water Resources Carrying Capacity in Shandong Province Based on Fuzzy Comprehensive Evaluation

    NASA Astrophysics Data System (ADS)

    Zhao, Qiang; Gao, Qian; Zhu, Mingyue; Li, Xiumei

    2018-06-01

    Water resources carrying capacity is the maximum amount of available water resources able to support social and economic development. Based on an investigation and statistical analysis of the current situation of water resources in Shandong Province, this paper selects 13 evaluation factors: per capita water resources, water resources utilization, water supply modulus, rainfall, per capita GDP, population density, per capita water consumption, water consumption per million yuan, water consumption per unit of industrial output value, agricultural output value of farmland, irrigation rate of cultivated land, water consumption rate of the ecological environment, and forest coverage rate. A fuzzy comprehensive evaluation model was then used to assess the status of the water resources carrying capacity. The results show that the comprehensive evaluation values for Shandong Province were lower than 0.6 in 2001-2009 and higher than 0.6 in 2010-2015, indicating that the water resources carrying capacity of Shandong Province has improved. In addition, most of the study years had values below 0.6, and individual years fell below 0.4, with relatively large interannual changes, which shows that the water resources carrying capacity of Shandong Province is generally weak and varies considerably from year to year.
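
    The fuzzy comprehensive evaluation step is essentially a weighted composition of factor weights with a membership matrix, B = W · R. The sketch below uses three invented factors and three grades rather than the 13 factors and the data of the study, purely to illustrate the composition.

      # Fuzzy comprehensive evaluation B = W . R with toy numbers (not the Shandong data).
      import numpy as np

      W = np.array([0.5, 0.3, 0.2])       # factor weights (assumed)
      R = np.array([[0.2, 0.5, 0.3],      # membership of factor 1 in grades low/medium/high
                    [0.1, 0.4, 0.5],      # factor 2
                    [0.6, 0.3, 0.1]])     # factor 3
      B = W @ R                           # composite membership in each grade
      print(B, B.argmax())                # -> [0.25 0.43 0.32], grade index 1 (medium)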

  16. Building the Capacity to Innovate: The Role of Human Capital. Research Report

    ERIC Educational Resources Information Center

    Smith, Andrew; Courvisanos, Jerry; Tuck, Jacqueline; McEachern, Steven

    2012-01-01

    This report examines the link between human resource management practices and innovation. It is based on a conceptual framework in which "human resource stimuli measures"--work organisation, working time, areas of training and creativity--feed into innovative capacity or innovation. Of course, having innovative capacity does not…

  17. Grid and Cloud for Developing Countries

    NASA Astrophysics Data System (ADS)

    Petitdidier, Monique

    2014-05-01

    The European Grid e-infrastructure has shown the capacity to connect geographically distributed heterogeneous compute resources in a secure way, taking advantage of a robust and fast REN (Research and Education Network). In many countries, as in Africa, the first step has been to implement a REN, and regional organizations like Ubuntunet, WACREN or ASREN coordinate the development and improvement of the network and its interconnection. Internet connections are still expanding rapidly in those countries. The second step has been to meet the computing needs of the scientists. Even though many of them have their own multi-core (or other) laptops, for more and more applications this is not enough, because they face intensive computing demands due to the large amounts of data to be processed and/or complex codes. So far, one solution has been to go abroad to Europe or America to run large applications, or not to participate in international communities. The Grid is very attractive for connecting geographically distributed heterogeneous resources, aggregating new ones and creating new sites on the REN with secure access. All users have the same services even if they have no resources in their own institute. With faster and more robust internet they will be able to take advantage of the European Grid. There are different initiatives to provide resources and training, like the UNESCO/HP Brain Gain initiative and EUMEDGrid. Nowadays the Cloud is becoming very attractive, and clouds are starting to be developed in some countries. In this talk, the challenges those countries face in implementing such e-infrastructures and in developing, in parallel, scientific and technical research and education in the new technologies will be presented and illustrated by examples.

  18. Standardized description of scientific evidence using the Evidence Ontology (ECO)

    PubMed Central

    Chibucos, Marcus C.; Mungall, Christopher J.; Balakrishnan, Rama; Christie, Karen R.; Huntley, Rachael P.; White, Owen; Blake, Judith A.; Lewis, Suzanna E.; Giglio, Michelle

    2014-01-01

    The Evidence Ontology (ECO) is a structured, controlled vocabulary for capturing evidence in biological research. ECO includes diverse terms for categorizing evidence that supports annotation assertions including experimental types, computational methods, author statements and curator inferences. Using ECO, annotation assertions can be distinguished according to the evidence they are based on such as those made by curators versus those automatically computed or those made via high-throughput data review versus single test experiments. Originally created for capturing evidence associated with Gene Ontology annotations, ECO is now used in other capacities by many additional annotation resources including UniProt, Mouse Genome Informatics, Saccharomyces Genome Database, PomBase, the Protein Information Resource and others. Information on the development and use of ECO can be found at http://evidenceontology.org. The ontology is freely available under Creative Commons license (CC BY-SA 3.0), and can be downloaded in both Open Biological Ontologies and Web Ontology Language formats at http://code.google.com/p/evidenceontology. Also at this site is a tracker for user submission of term requests and questions. ECO remains under active development in response to user-requested terms and in collaborations with other ontologies and database resources. Database URL: Evidence Ontology Web site: http://evidenceontology.org PMID:25052702

  19. A Distributed Computing Framework for Real-Time Detection of Stress and of Its Propagation in a Team.

    PubMed

    Pandey, Parul; Lee, Eun Kyung; Pompili, Dario

    2016-11-01

    Stress is one of the key factors that impact the quality of our daily life: from productivity and efficiency in production processes to the ability of (civilian and military) individuals to make rational decisions. Also, stress can propagate from one individual to others working in close proximity or toward a common goal, e.g., in a military operation or workforce. Real-time assessment of the stress of individuals alone is, however, not sufficient, as understanding its source and the direction in which it propagates in a group of people is equally, if not more, important. A continuous, near real-time, in situ personal stress monitoring system to quantify the stress level of individuals and its direction of propagation in a team is envisioned. However, stress monitoring of an individual via his/her mobile device may not always be possible for extended periods of time due to the limited battery capacity of these devices. To overcome this challenge, a novel distributed mobile computing framework is proposed to organize the resources in the vicinity and form a mobile device cloud that enables offloading of computation tasks in the stress detection algorithm from resource-constrained devices (low residual battery, limited CPU cycles) to resource-rich devices. Our framework also supports computation parallelization and workflows, defining how data and tasks are divided and assigned among the entities of the framework. The direction of propagation and the magnitude of influence of stress in a group of individuals are studied by applying real-time, in situ analysis of Granger causality. Tangible benefits (in terms of energy expenditure and execution time) of the proposed framework in comparison to a centralized framework are presented via thorough simulations and real experiments.
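
    The propagation analysis described above rests on testing Granger causality between individuals' stress time series. As a hedged sketch of that idea, the snippet below asks whether one synthetic series helps predict another using the standard statsmodels test; the data are invented and this is one common implementation, not necessarily the one used in the paper.

      # Does person A's stress series help predict person B's? Synthetic data only.
      import numpy as np
      from statsmodels.tsa.stattools import grangercausalitytests

      rng = np.random.default_rng(0)
      a = rng.normal(size=300)                         # person A's stress signal
      b = np.roll(a, 2) + 0.5 * rng.normal(size=300)   # person B lags A by two samples (plus noise)

      # Column order is (effect, cause): the test checks whether A Granger-causes B.
      results = grangercausalitytests(np.column_stack([b, a]), maxlag=3)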

  20. Investigation on wind energy-compressed air power system.

    PubMed

    Jia, Guang-Zheng; Wang, Xuan-Yin; Wu, Gen-Mao

    2004-03-01

    Wind energy is a pollution-free and renewable resource widely distributed over China. Aimed at protecting the environment and enlarging the application of wind energy, a new approach to applying wind energy, using compressed air power to some extent instead of electricity, is put forward. This includes: explaining the working principles and characteristics of the wind energy-compressed air power system; discussing the compatibility of wind energy and compressor capacity; and presenting the theoretical model and computational simulation of the system. The obtained compressor capacity vs. wind power relationship over a certain wind velocity range can be helpful in designing the wind power-compressed air system. Results of investigations on the application of high-pressure compressed air for pressure reduction led to the conclusion that pressure reduction with an expander is better than a throttle regulator in terms of energy saving.

  1. Enhancing capacity among faith-based organizations to implement evidence-based cancer control programs: a community-engaged approach.

    PubMed

    Leyva, Bryan; Allen, Jennifer D; Ospino, Hosffman; Tom, Laura S; Negrón, Rosalyn; Buesa, Richard; Torres, Maria Idalí

    2017-09-01

    Evidence-based interventions (EBIs) to promote cancer control among Latinos have proliferated in recent years, though adoption and implementation of these interventions by faith-based organizations (FBOs) is limited. Capacity building may be one strategy to promote implementation. In this qualitative study, 18 community key informants were interviewed to (a) understand existing capacity for health programming among Catholic parishes, (b) characterize parishes' resource gaps and capacity-building needs implementing cancer control EBIs, and (c) elucidate strategies for delivering capacity-building assistance to parishes to facilitate implementation of EBIs. Semi-structured qualitative interviews were conducted. Key informants concurred about the capacity of Catholic parishes to deliver health programs, and described attributes of parishes that make them strong partners in health promotion initiatives, including a mission to address physical and mental health, outreach to marginalized groups, altruism among members, and existing engagement in health programming. However, resource gaps and capacity building needs were also identified. Specific recommendations participants made about how existing resources might be leveraged to address challenges include to: establish parish wellness committees; provide "hands-on" learning opportunities for parishioners to gain program planning skills; offer continuous, tailored, on-site technical assistance; facilitate relationships between parishes and community resources; and provide financial support for parishes. Leveraging parishes' existing resources and addressing their implementation needs may improve adoption of cancer control EBIs.

  2. Application analysis of Monte Carlo to estimate the capacity of geothermal resources in Lawu Mount

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Supriyadi, E-mail: supriyadi-uno@yahoo.co.nz; Srigutomo, Wahyu; Munandar, Arif

    2014-03-24

    Monte Carlo analysis has been applied to the calculation of geothermal resource capacity based on the volumetric method issued by Standar Nasional Indonesia (SNI). A deterministic formula is converted into a stochastic formula to take into account the uncertainties in the input parameters. The method yields a probability range for the potential power stored beneath the Lawu Mount geothermal area. For 10,000 iterations, the capacity of the geothermal resource is in the range of 139.30-218.24 MWe, with a most likely value of 177.77 MWe. The risk of the resource capacity exceeding 196.19 MWe is less than 10%. The power density of the prospect area covering 17 km² is 9.41 MWe/km² with probability 80%.
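
    The Monte Carlo treatment sketched here follows the usual pattern: sample the uncertain inputs, push each sample through the deterministic volumetric-style formula, and read percentiles off the resulting distribution. The simplified formula and all parameter ranges below are illustrative assumptions, not the SNI formula or the Lawu Mount inputs.

      # Monte Carlo propagation of uncertainty through a simplified volumetric-style estimate.
      import numpy as np

      rng = np.random.default_rng(42)
      n = 10_000

      area_km2      = rng.uniform(15.0, 19.0, n)           # prospect area (assumed range)
      power_density = rng.triangular(7.0, 9.5, 12.0, n)    # MWe per km^2 (assumed range)
      recovery      = rng.uniform(0.8, 1.0, n)             # recoverable fraction (assumed)

      capacity_mwe = area_km2 * power_density * recovery   # one estimate per iteration

      p10, p50, p90 = np.percentile(capacity_mwe, [10, 50, 90])
      print(f"P10={p10:.1f}  median={p50:.1f}  P90={p90:.1f} MWe")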

  3. Refining "Teacher Design Capacity": Mathematics Teachers' Interactions with Digital Curriculum Resources

    ERIC Educational Resources Information Center

    Pepin, B.; Gueudet, G.; Trouche, L.

    2017-01-01

    The goal of this conceptual paper is to develop enhanced understandings of mathematics teacher design and design capacity when interacting with digital curriculum resources. We argue that digital resources in particular offer incentives and increasing opportunities for mathematics teachers' design, both individually and in collectives. Indeed they…

  4. WPS mediation: An approach to process geospatial data on different computing backends

    NASA Astrophysics Data System (ADS)

    Giuliani, Gregory; Nativi, Stefano; Lehmann, Anthony; Ray, Nicolas

    2012-10-01

    The OGC Web Processing Service (WPS) specification allows generating information by processing distributed geospatial data made available through Spatial Data Infrastructures (SDIs). However, current SDIs have limited analytical capacities and various problems emerge when trying to use them in data and computing-intensive domains such as environmental sciences. These problems are usually not or only partially solvable using single computing resources. Therefore, the Geographic Information (GI) community is trying to benefit from the superior storage and computing capabilities offered by distributed computing (e.g., Grids, Clouds) related methods and technologies. Currently, there is no commonly agreed approach to grid-enable WPS. No implementation allows one to seamlessly execute a geoprocessing calculation following user requirements on different computing backends, ranging from a stand-alone GIS server up to computer clusters and large Grid infrastructures. Considering this issue, this paper presents a proof of concept by mediating different geospatial and Grid software packages, and by proposing an extension of WPS specification through two optional parameters. The applicability of this approach will be demonstrated using a Normalized Difference Vegetation Index (NDVI) mediated WPS process, highlighting benefits, and issues that need to be further investigated to improve performances.

  5. The SGI/CRAY T3E: Experiences and Insights

    NASA Technical Reports Server (NTRS)

    Bernard, Lisa Hamet

    1999-01-01

    The focus of the HPCC Earth and Space Sciences (ESS) Project is capability computing - pushing highly scalable computing testbeds to their performance limits. The drivers of this focus are the Grand Challenge problems in Earth and space science: those that could not be addressed in a capacity computing environment where large jobs must continually compete for resources. These Grand Challenge codes require a high degree of communication, large memory, and very large I/O (throughout the duration of the processing, not just in loading initial conditions and saving final results). This set of parameters led to the selection of an SGI/Cray T3E as the current ESS Computing Testbed. The T3E at the Goddard Space Flight Center is a unique computational resource within NASA. As such, it must be managed to effectively support the diverse research efforts across the NASA research community yet still enable the ESS Grand Challenge Investigator teams to achieve their performance milestones, for which the system was intended. To date, all Grand Challenge Investigator teams have achieved the 10 GFLOPS milestone, eight of nine have achieved the 50 GFLOPS milestone, and three have achieved the 100 GFLOPS milestone. In addition, many technical papers have been published highlighting results achieved on the NASA T3E, including some at this Workshop. The successes enabled by the NASA T3E computing environment are best illustrated by the 512 PE upgrade funded by the NASA Earth Science Enterprise earlier this year. Never before has an HPCC computing testbed been so well received by the general NASA science community that it was deemed critical to the success of a core NASA science effort. NASA looks forward to many more success stories before the conclusion of the NASA-SGI/Cray cooperative agreement in June 1999.

  6. Remote Earth Sciences data collection using ACTS

    NASA Technical Reports Server (NTRS)

    Evans, Robert H.

    1992-01-01

    Given the focus on global change and the attendant scope of such research, we anticipate significant growth in requirements for investigator interaction, processing system capabilities, and availability of data sets. The increased complexity of global processes requires interdisciplinary teams to address them; the investigators will need to interact on a regular basis; however, it is unlikely that a single institution will house sufficient investigators with the required breadth of skills. The complexity of the computations may also require resources beyond those located within a single institution; this lack of sufficient computational resources leads to a distributed system located at geographically dispersed institutions. Finally, the combination of long-term data sets like the Pathfinder data sets and the data to be gathered by new generations of satellites such as SeaWiFS and MODIS-N yields extraordinarily large amounts of data. All of these factors combine to increase demands on the communications facilities available; the demands are generating requirements for highly flexible, high-capacity networks. We have been examining the applicability of the Advanced Communications Technology Satellite (ACTS) to address the scientific, computational, and, primarily, communications questions resulting from global change research. As part of this effort three scenarios for oceanographic use of ACTS have been developed; a full discussion of this is contained in Appendix B.

  7. Analysis of superconducting magnetic energy storage applications at a proposed wind farm site near Browning, Montana

    NASA Astrophysics Data System (ADS)

    Gaustad, K. L.; Desteese, J. G.

    1993-07-01

    A computer program was developed to analyze the viability of integrating superconducting magnetic energy storage (SMES) with proposed wind farm scenarios at a site near Browning, Montana. The program simulated an hour-by-hour account of the charge/discharge history of a SMES unit for a representative wind-speed year. Effects of power output, storage capacity, and power conditioning capability on SMES performance characteristics were analyzed on a seasonal, diurnal, and hourly basis. The SMES unit was assumed to be charged during periods when power output of the wind resource exceeded its average value. Energy was discharged from the SMES unit into the grid during periods of low wind speed to compensate for below-average output of the wind resource. The option of using SMES to provide power continuity for a wind farm supplemented by combustion turbines was also investigated. Levelizing the annual output of large wind energy systems operating in the Blackfeet area of Montana was found to require a storage capacity too large to be economically viable. However, it appears that intermediate-sized SMES economically levelize the wind energy output on a seasonal basis.

  8. Working Towards New Transformative Geoscience Analytics Enabled by Petascale Computing

    NASA Astrophysics Data System (ADS)

    Woodcock, R.; Wyborn, L.

    2012-04-01

    Currently the top 10 supercomputers in the world are petascale, and exascale computers are already being planned. Cloud computing facilities are becoming mainstream, either as private or commercial investments. These computational developments will provide abundant opportunities for the earth science community to tackle the data deluge that has resulted from new instrumentation enabling data to be gathered at a greater rate and at higher resolution. Combined, the new computational environments should enable the earth sciences to be transformed. However, experience in Australia and elsewhere has shown that it is not easy to scale existing earth science methods, software and analytics to take advantage of the increased computational capacity that is now available. It is not simply a matter of 'transferring' current work practices to the new facilities: they have to be extensively 'transformed'. In particular, new geoscientific methods will need to be developed using advanced data mining, assimilation, machine learning and integration algorithms. Software will have to be capable of operating in highly parallelised environments, and will also need to scale as the compute systems grow. Data access will have to improve, and the earth science community needs to move from the file discovery, display and local download paradigm to self-describing data cubes and data arrays that are available as online resources from major data repositories or in the cloud. In this transformed world, rather than analysing satellite data scene by scene, sensor-agnostic data cubes of calibrated earth observation data will enable researchers to move across data from multiple sensors at varying spatial resolutions. In using geophysics to characterise basement and cover, rather than analysing individual gridded airborne geophysical data sets and then combining the results, petascale computing will enable analysis of multiple data types, collected at varying resolutions, with integration and validation across data type boundaries. The increased capacity of storage and compute will mean that the uncertainty and reliability of individual observations will consistently be taken into account and propagated throughout the processing chain. If these data access difficulties can be overcome, the increased compute capacity will also mean that larger-scale, more complex models can be run at higher resolution, and instead of single-pass modelling runs, ensembles of models will be run to test multiple hypotheses simultaneously. Petascale computing and high performance data offer more than "bigger, faster": they are an opportunity for a transformative change in the way in which geoscience research is routinely conducted.

  9. Holding-time-aware asymmetric spectrum allocation in virtual optical networks

    NASA Astrophysics Data System (ADS)

    Lyu, Chunjian; Li, Hui; Liu, Yuze; Ji, Yuefeng

    2017-10-01

    Virtual optical networks (VONs) have been considered a promising solution to support current high-capacity dynamic traffic and achieve rapid application deployment. Since most of the network services (e.g., high-definition video, cloud computing, distributed storage) in VONs are provisioned by dedicated data centers and need different amounts of bandwidth in each direction, the network traffic is mostly asymmetric. The common strategy of symmetric traffic provisioning in optical networks leads to a waste of spectrum resources under such traffic patterns. In this paper, we design a holding-time-aware asymmetric spectrum allocation module based on an SDON architecture and propose an asymmetric spectrum allocation algorithm built on this module. To reduce the waste of spectrum resources, the algorithm attempts to reallocate the idle unidirectional spectrum slots in VONs, which arise from the asymmetry of services' bidirectional bandwidth. These resources can be exploited by other requests, such as short-time non-VON requests. We also introduce a two-dimensional asymmetric resource model for maintaining information about a VON's idle spectrum resources in the spectrum and time domains. Moreover, a simulation is designed to evaluate the performance of the proposed algorithm, and the results show that our asymmetric spectrum allocation algorithm can reduce both resource waste and blocking probability.
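
    A two-dimensional (spectrum slot x time step) occupancy grid is one simple way to picture the idle-resource bookkeeping described above. The sketch below, with assumed slot counts, holding times and a first-fit rule, is only illustrative and is not the paper's exact module or algorithm.

    ```python
    import numpy as np

    # Minimal sketch of a 2-D (spectrum x time) map of idle unidirectional slots.
    # Slot counts, holding times and the first-fit rule are illustrative assumptions.

    class IdleResourceMap:
        def __init__(self, n_slots, horizon):
            # True = the slot is idle at that time step in the lightly loaded direction.
            self.idle = np.zeros((n_slots, horizon), dtype=bool)

        def release(self, slot_range, time_range):
            """Mark unidirectional slots left unused by an asymmetric VON as idle."""
            self.idle[slot_range[0]:slot_range[1], time_range[0]:time_range[1]] = True

        def try_allocate(self, width, holding_time, start):
            """First-fit search for `width` contiguous slots idle over the holding time."""
            n_slots, horizon = self.idle.shape
            if start + holding_time > horizon:
                return None
            for s in range(n_slots - width + 1):
                block = self.idle[s:s + width, start:start + holding_time]
                if block.all():
                    self.idle[s:s + width, start:start + holding_time] = False
                    return (s, s + width)
            return None  # blocked: no idle block large enough

    m = IdleResourceMap(n_slots=16, horizon=50)
    m.release((4, 10), (0, 30))                               # idle upstream slots of a VON
    print(m.try_allocate(width=3, holding_time=10, start=5))  # e.g. a short non-VON request
    ```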

  10. Increasing exercise capacity and quality of life of patients with heart failure through Wii gaming: the rationale, design and methodology of the HF-Wii study; a multicentre randomized controlled trial.

    PubMed

    Jaarsma, Tiny; Klompstra, Leonie; Ben Gal, Tuvia; Boyne, Josiane; Vellone, Ercole; Bäck, Maria; Dickstein, Kenneth; Fridlund, Bengt; Hoes, Arno; Piepoli, Massimo F; Chialà, Oronzo; Mårtensson, Jan; Strömberg, Anna

    2015-07-01

    Exercise is known to be beneficial for patients with heart failure (HF), and these patients should therefore be routinely advised to exercise and to be or to become physically active. Despite the beneficial effects of exercise such as improved functional capacity and favourable clinical outcomes, the level of daily physical activity in most patients with HF is low. Exergaming may be a promising new approach to increase the physical activity of patients with HF at home. The aim of this study is to determine the effectiveness of the structured introduction and access to a Wii game computer in patients with HF to improve exercise capacity and level of daily physical activity, to decrease healthcare resource use, and to improve self-care and health-related quality of life. A multicentre randomized controlled study with two treatment groups will include 600 patients with HF. In each centre, patients will be randomized to either motivational support only (control) or structured access to a Wii game computer (Wii). Patients in the control group will receive advice on physical activity and will be contacted by four telephone calls. Patients in the Wii group also will receive advice on physical activity along with a Wii game computer, with instructions and training. The primary endpoint will be exercise capacity at 3 months as measured by the 6 min walk test. Secondary endpoints include exercise capacity at 6 and 12 months, level of daily physical activity, muscle function, health-related quality of life, and hospitalization or death during the 12 months follow-up. The HF-Wii study is a randomized study that will evaluate the effect of exergaming in patients with HF. The findings can be useful to healthcare professionals and improve our understanding of the potential role of exergaming in the treatment and management of patients with HF. NCT01785121. © 2015 The Authors. European Journal of Heart Failure © 2015 European Society of Cardiology.

  11. An evaluation capacity building toolkit for principal investigators of undergraduate research experiences: A demonstration of transforming theory into practice.

    PubMed

    Rorrer, Audrey S

    2016-04-01

    This paper describes the approach and process undertaken to develop evaluation capacity among the leaders of a federally funded undergraduate research program. An evaluation toolkit was developed for Computer and Information Sciences and Engineering(1) Research Experiences for Undergraduates(2) (CISE REU) programs to address the ongoing need for evaluation capacity among principal investigators who manage program evaluation. The toolkit was the result of collaboration within the CISE REU community with the purpose being to provide targeted instructional resources and tools for quality program evaluation. Challenges were to balance the desire for standardized assessment with the responsibility to account for individual program contexts. Toolkit contents included instructional materials about evaluation practice, a standardized applicant management tool, and a modulated outcomes measure. Resulting benefits from toolkit deployment were having cost effective, sustainable evaluation tools, a community evaluation forum, and aggregate measurement of key program outcomes for the national program. Lessons learned included the imperative of understanding the evaluation context, engaging stakeholders, and building stakeholder trust. Results from project measures are presented along with a discussion of guidelines for facilitating evaluation capacity building that will serve a variety of contexts. Copyright © 2016. Published by Elsevier Ltd.

  12. Integrating resource, social, and managerial indicators of quality into carrying capacity decision-making

    USGS Publications Warehouse

    Newman, P.; Marion, J.; Cahill, K.

    2001-01-01

    In park and wilderness management, integrating social and resource indicators is essential to meet park mandates that require the protection of both experiential and resource conditions. This paper will address the challenges we face in integrating social and resource data and outline a study in progress in Yosemite National Park. This study will develop and apply a management model that integrates resource, social and managerial indicators of quality into carrying capacity decision-making.

  13. Assessing the components of adaptive capacity to improve conservation and management efforts under global change

    USGS Publications Warehouse

    Nicotra, Adrienne; Beever, Erik; Robertson, Amanda; Hofmann, Gretchen; O’Leary, John

    2015-01-01

    Natural-resource managers and other conservation practitioners are under unprecedented pressure to categorize and quantify the vulnerability of natural systems based on assessment of the exposure, sensitivity, and adaptive capacity of species to climate change. Despite the urgent need for these assessments, neither the theoretical basis of adaptive capacity nor the practical issues underlying its quantification has been articulated in a manner that is directly applicable to natural-resource management. Both are critical for researchers, managers, and other conservation practitioners to develop reliable strategies for assessing adaptive capacity. Drawing from principles of classical and contemporary research and examples from terrestrial, marine, plant, and animal systems, we examined broadly the theory behind the concept of adaptive capacity. We then considered how interdisciplinary, trait- and triage-based approaches encompassing the oft-overlooked interactions among components of adaptive capacity can be used to identify species and populations likely to have higher (or lower) adaptive capacity. We identified the challenges and value of such endeavors and argue for a concerted interdisciplinary research approach that combines ecology, ecological genetics, and eco-physiology to reflect the interacting components of adaptive capacity. We aimed to provide a basis for constructive discussion between natural-resource managers and researchers, discussions urgently needed to identify research directions that will deliver answers to real-world questions facing resource managers, other conservation practitioners, and policy makers. Directing research to both seek general patterns and identify ways to facilitate adaptive capacity of key species and populations within species, will enable conservation ecologists and resource managers to maximize returns on research and management investment and arrive at novel and dynamic management and policy decisions.

  14. Assessing the components of adaptive capacity to improve conservation and management efforts under global change.

    PubMed

    Nicotra, Adrienne B; Beever, Erik A; Robertson, Amanda L; Hofmann, Gretchen E; O'Leary, John

    2015-10-01

    Natural-resource managers and other conservation practitioners are under unprecedented pressure to categorize and quantify the vulnerability of natural systems based on assessment of the exposure, sensitivity, and adaptive capacity of species to climate change. Despite the urgent need for these assessments, neither the theoretical basis of adaptive capacity nor the practical issues underlying its quantification has been articulated in a manner that is directly applicable to natural-resource management. Both are critical for researchers, managers, and other conservation practitioners to develop reliable strategies for assessing adaptive capacity. Drawing from principles of classical and contemporary research and examples from terrestrial, marine, plant, and animal systems, we examined broadly the theory behind the concept of adaptive capacity. We then considered how interdisciplinary, trait- and triage-based approaches encompassing the oft-overlooked interactions among components of adaptive capacity can be used to identify species and populations likely to have higher (or lower) adaptive capacity. We identified the challenges and value of such endeavors and argue for a concerted interdisciplinary research approach that combines ecology, ecological genetics, and eco-physiology to reflect the interacting components of adaptive capacity. We aimed to provide a basis for constructive discussion between natural-resource managers and researchers, discussions urgently needed to identify research directions that will deliver answers to real-world questions facing resource managers, other conservation practitioners, and policy makers. Directing research to both seek general patterns and identify ways to facilitate adaptive capacity of key species and populations within species, will enable conservation ecologists and resource managers to maximize returns on research and management investment and arrive at novel and dynamic management and policy decisions. © 2015 Society for Conservation Biology.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hameed, Abdul; Khoshkbarforoushha, Alireza; Ranjan, Rajiv

    In a cloud computing paradigm, energy-efficient allocation of different virtualized ICT resources (servers, storage disks, networks, and the like) is a complex problem due to the presence of heterogeneous application workloads (e.g., content delivery networks, MapReduce, web applications, and the like) having contentious allocation requirements in terms of ICT resource capacities (e.g., network bandwidth, processing speed, response time, etc.). Several recent papers have tried to address the issue of improving energy efficiency in allocating cloud resources to applications, with varying degrees of success. However, to the best of our knowledge there is no published literature on this subject that clearly articulates the research problem and provides a research taxonomy for succinct classification of existing techniques. Hence, the main aim of this paper is to identify the open challenges associated with energy-efficient resource allocation. In this regard, the study first outlines the problem and the existing hardware- and software-based techniques available for this purpose. Furthermore, the techniques already presented in the literature are summarized based on the energy-efficient research dimension taxonomy. The advantages and disadvantages of the existing techniques are comprehensively analyzed against the proposed research dimension taxonomy, namely: resource adaption policy, objective function, allocation method, allocation operation, and interoperability.

  16. Developing enterprise tools and capacities for large-scale natural resource monitoring: A visioning workshop

    USGS Publications Warehouse

    Bayer, Jennifer M.; Weltzin, Jake F.; Scully, Rebecca A.

    2017-01-01

    Objectives of the workshop were: 1) identify resources that support natural resource monitoring programs working across the data life cycle; 2) prioritize desired capacities and tools to facilitate monitoring design and implementation; 3) identify standards and best practices that improve discovery, accessibility, and interoperability of data across programs and jurisdictions; and 4) contribute to an emerging community of practice focused on natural resource monitoring.

  17. The evolution of distributed sensing and collective computation in animal populations

    PubMed Central

    Hein, Andrew M; Rosenthal, Sara Brin; Hagstrom, George I; Berdahl, Andrew; Torney, Colin J; Couzin, Iain D

    2015-01-01

    Many animal groups exhibit rapid, coordinated collective motion. Yet, the evolutionary forces that cause such collective responses to evolve are poorly understood. Here, we develop analytical methods and evolutionary simulations based on experimental data from schooling fish. We use these methods to investigate how populations evolve within unpredictable, time-varying resource environments. We show that populations evolve toward a distinctive regime in behavioral phenotype space, where small responses of individuals to local environmental cues cause spontaneous changes in the collective state of groups. These changes resemble phase transitions in physical systems. Through these transitions, individuals evolve the emergent capacity to sense and respond to resource gradients (i.e. individuals perceive gradients via social interactions, rather than sensing gradients directly), and to allocate themselves among distinct, distant resource patches. Our results yield new insight into how natural selection, acting on selfish individuals, results in the highly effective collective responses evident in nature. DOI: http://dx.doi.org/10.7554/eLife.10955.001 PMID:26652003

  18. Working memory management and predicted utility

    PubMed Central

    Chatham, Christopher H.; Badre, David

    2013-01-01

    Given the limited capacity of working memory (WM), its resources should be allocated strategically. One strategy is filtering, whereby access to WM is granted preferentially to items with the greatest utility. However, reallocation of WM resources might be required if the utility of maintained information subsequently declines. Here, we present behavioral, computational, and neuroimaging evidence that human participants track changes in the predicted utility of information in WM. First, participants demonstrated behavioral costs when the utility of items already maintained in WM declined and resources should be reallocated. An adapted Q-learning model indicated that these costs scaled with the historical utility of individual items. Finally, model-based neuroimaging demonstrated that frontal cortex tracked the utility of items to be maintained in WM, whereas ventral striatum tracked changes in the utility of items maintained in WM to the degree that these items are no longer useful. Our findings suggest that frontostriatal mechanisms track the utility of information in WM, and that these dynamics may predict delays in the removal of information from WM. PMID:23882196
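
    The kind of utility tracking an adapted Q-learning model performs can be sketched with a simple delta-rule update. The learning rate, reward coding and item names below are assumptions for illustration, not the authors' fitted model.

    ```python
    # Minimal sketch of a Q-learning-style update tracking the predicted utility of an
    # item held in working memory. Learning rate, rewards and items are illustrative.

    def update_utility(q, item, reward, alpha=0.3):
        """Incremental (delta-rule) update of an item's predicted utility."""
        old = q.get(item, 0.0)
        q[item] = old + alpha * (reward - old)
        return q

    q_values = {}
    # The item is useful for a while, then its utility declines and resources
    # should be reallocated (the item becomes a candidate for removal from WM).
    for reward in [1, 1, 1, 0, 0, 0]:
        update_utility(q_values, "item_A", reward)
        print(round(q_values["item_A"], 3))
    ```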

  19. Enhancing Lay Counselor Capacity to Improve Patient Outcomes with Multimedia Technology.

    PubMed

    Robbins, Reuben N; Mellins, Claude A; Leu, Cheng-Shiun; Rowe, Jessica; Warne, Patricia; Abrams, Elaine J; Witte, Susan; Stein, Dan J; Remien, Robert H

    2015-06-01

    Multimedia technologies offer powerful tools to increase the capacity of health workers to deliver standardized, effective, and engaging antiretroviral medication adherence counseling. Masivukeni is an innovative multimedia-based, computer-driven, lay counselor-delivered intervention designed to help people living with HIV in resource-limited settings achieve optimal adherence. This pilot study examined medication adherence and key psychosocial outcomes among 55 non-adherent South African HIV+ patients, on antiretroviral therapy (ART) for at least 6 months, who were randomized to receive either Masivukeni or standard of care (SOC) counseling for ART non-adherence. At baseline, there were no significant differences between the SOC and Masivukeni groups on any outcome variables. At post-intervention (approximately 5-6 weeks after baseline), clinic-based pill count adherence data available for 20 participants (10 per intervention arm) showed a 10% improvement for Masivukeni participants and a decrease of 8% for SOC participants. Masivukeni participants reported significantly more positive attitudes towards disclosure and medication social support, less social rejection, and better clinic-patient relationships than did SOC participants. Masivukeni shows promise to promote optimal adherence and provides preliminary evidence that multimedia, computer-based technology can help lay counselors offer better adherence counseling than standard approaches.

  20. Enhancing Lay Counselor Capacity to Improve Patient Outcomes with Multimedia Technology

    PubMed Central

    Robbins, Reuben N.; Mellins, Claude A.; Leu, Cheng-Shiun; Rowe, Jessica; Warne, Patricia; Abrams, Elaine J.; Witte, Susan; Stein, Dan J.; Remien, Robert H.

    2015-01-01

    Multimedia technologies offer powerful tools to increase capacity of health workers to deliver standardized, effective, and engaging antiretroviral medication adherence counseling. Masivukeni is an innovative multimedia-based, computer-driven, lay counselor-delivered intervention designed to help people living with HIV in resource-limited settings achieve optimal adherence. This pilot study examined medication adherence and key psychosocial outcomes among 55 non-adherent South African HIV+ patients, on ART for at least 6 months, who were randomized to receive either Masivukeni or standard of care (SOC) counseling for ART non-adherence. At baseline, there were no significant differences between the SOC and Masivukeni groups on any outcome variables. At post-intervention (approximately 5–6 weeks after baseline), clinic-based pill count adherence data available for 20 participants (10 per intervention arm) showed a 10% improvement for Masivukeni participants and a decrease of 8% for SOC participants. Masivukeni participants reported significantly more positive attitudes towards disclosure and medication social support, less social rejection, and better clinic-patient relationships than did SOC participants. Masivukeni shows promise to promote optimal adherence and provides preliminary evidence that multimedia, computer-based technology can help lay counselors offer better adherence counseling than standard approaches. PMID:25566763

  1. District Resource Capacity and the Effects of Educational Policy: The Case of Primary Class Size Reduction in Ontario

    ERIC Educational Resources Information Center

    Mascall, Blair; Leung, Joannie

    2012-01-01

    In a study of Ontario, Canada's province-wide Primary Class Size Reduction (PCS) Initiative, school districts' ability to direct and support schools was related to their experience with planning and monitoring, interest in innovation, and their human and fiscal resource base. Districts with greater "resource capacity" were able to…

  2. Talk the Walk: Does Socio-Cognitive Resource Reallocation Facilitate the Development of Walking?

    PubMed

    Geva, Ronny; Orr, Edna

    2016-01-01

    Walking is of interest to psychology, robotics, zoology, neuroscience and medicine. Humans' ability to walk on two feet is considered to be one of the defining characteristics of hominoid evolution. Evolutionary science proposes that it emerged in response to limited environmental resources; yet the processes supporting its emergence are not fully understood. Developmental psychology research suggests that walking elicits cognitive advancements. We postulate that the relationship between cognitive development and walking is a bi-directional one, and further suggest that the initiation of novel capacities, such as walking, is related to internal socio-cognitive resource reallocation. We shed light on these notions by exploring infants' cognitive and socio-communicative outputs prospectively from 6-18 months of age. Structured bi/tri-weekly evaluations of symbolic and verbal development were employed in an urban cohort (N = 9) for 12 months, during the transition from crawling to walking. Results show links between preemptive cognitive changes in socio-communicative output, symbolic-cognitive tool-use processes, and the age of emergence of walking. Plots of use rates of lower symbolic play levels before and after emergence of new skills illustrate reductions in use of previously attained key behaviors prior to emergence of higher symbolic play, language and walking. Further, individual differences in age of walking initiation were strongly related to the degree of reductions in complexity of object-use (r = .832, p < .005), along with increases, counter to the general reduction trend, in skills that serve recruitment of external resources [socio-communication bids before speech (r = -.696, p < .01), and speech bids before walking (r = .729, p < .01)]. Integration of these proactive changes using a computational approach yielded an even stronger link, underscoring internal resource reallocation as a facilitator of walking initiation (r = .901, p < .001). These preliminary data suggest that representational capacities, symbolic object use, language and social developments form an integrated adaptable composite, which possibly enables proactive internal resource reallocation designed to support the emergence of new developmental milestones, such as walking.

  3. Evaluation of Resources Carrying Capacity in China Based on Remote Sensing and GIS

    NASA Astrophysics Data System (ADS)

    Liu, K.; Gan, Y. H.; Zhang, T.; Luo, Z. Y.; Wang, J. J.; Lin, F. N.

    2018-04-01

    This paper accurately extracted information on arable land, grassland (wetland), forest land, water area and construction land, based on 1:250,000 basic geographic information data. The comprehensive CCRR model was modified so that the carrying capacity calculation takes resource quality into consideration. Ultimately, this yielded a comprehensive assessment of CCRR status in China. The top ten cities where the carrying capacity of resources was overloaded were Wenzhou, Shanghai, Chengdu, Baoding, Shantou, Jieyang, Dongguan, Fuyang, Zhoukou and Handan. These cities are mainly distributed in the more economically developed central and southern areas with convenient transportation. Among the cities in surplus status, the resources carrying capacity of Hulun Buir was the most abundant, followed by Heihe, Bayingolin Mongol Autonomous Prefecture, Qiqihar, Chifeng and Jiamusi, all of which are located in northeastern China, with small populations and plentiful cultivated land.

  4. Quantum coding with finite resources.

    PubMed

    Tomamichel, Marco; Berta, Mario; Renes, Joseph M

    2016-05-09

    The quantum capacity of a memoryless channel determines the maximal rate at which we can communicate reliably over asymptotically many uses of the channel. Here we illustrate that this asymptotic characterization is insufficient in practical scenarios where decoherence severely limits our ability to manipulate large quantum systems in the encoder and decoder. In practical settings, we should instead focus on the optimal trade-off between three parameters: the rate of the code, the size of the quantum devices at the encoder and decoder, and the fidelity of the transmission. We find approximate and exact characterizations of this trade-off for various channels of interest, including dephasing, depolarizing and erasure channels. In each case, the trade-off is parameterized by the capacity and a second channel parameter, the quantum channel dispersion. In the process, we develop several bounds that are valid for general quantum channels and can be computed for small instances.

  5. Quantum coding with finite resources

    PubMed Central

    Tomamichel, Marco; Berta, Mario; Renes, Joseph M.

    2016-01-01

    The quantum capacity of a memoryless channel determines the maximal rate at which we can communicate reliably over asymptotically many uses of the channel. Here we illustrate that this asymptotic characterization is insufficient in practical scenarios where decoherence severely limits our ability to manipulate large quantum systems in the encoder and decoder. In practical settings, we should instead focus on the optimal trade-off between three parameters: the rate of the code, the size of the quantum devices at the encoder and decoder, and the fidelity of the transmission. We find approximate and exact characterizations of this trade-off for various channels of interest, including dephasing, depolarizing and erasure channels. In each case, the trade-off is parameterized by the capacity and a second channel parameter, the quantum channel dispersion. In the process, we develop several bounds that are valid for general quantum channels and can be computed for small instances. PMID:27156995

  6. Controlling user access to electronic resources without password

    DOEpatents

    Smith, Fred Hewitt

    2015-06-16

    Described herein are devices and techniques for remotely controlling user access to a restricted computer resource. The process includes pre-determining an association of the restricted computer resource and computer-resource-proximal environmental information. Indicia of user-proximal environmental information are received from a user requesting access to the restricted computer resource. Received indicia of user-proximal environmental information are compared to the associated computer-resource-proximal environmental information. User access to the restricted computer resource is selectively granted responsive to a favorable comparison in which the user-proximal environmental information is sufficiently similar to the computer-resource-proximal environmental information. In at least some embodiments, the process further includes comparing a user-supplied biometric measure with a predetermined association of at least one biometric measure of an authorized user. Access to the restricted computer resource is granted in response to a favorable comparison.
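
    The comparison step described above can be sketched as a similarity test between user-proximal and resource-proximal environmental indicia. The features, distance rule and tolerance below are illustrative assumptions, not the patented method.

    ```python
    # Minimal sketch: indicia of the user's environment are compared with the
    # environmental information pre-associated with the restricted resource.
    # Features, tolerance and the similarity rule are illustrative assumptions.

    RESOURCE_ENVIRONMENT = {"temperature_c": 21.0, "ambient_noise_db": 38.0, "wifi_ap_count": 12}

    def sufficiently_similar(user_env, resource_env, tolerance=0.15):
        """Grant access only if every shared indicium is within a relative tolerance."""
        for key, expected in resource_env.items():
            observed = user_env.get(key)
            if observed is None:
                return False
            if abs(observed - expected) > tolerance * max(abs(expected), 1e-9):
                return False
        return True

    user_reading = {"temperature_c": 21.8, "ambient_noise_db": 40.0, "wifi_ap_count": 12}
    print("access granted" if sufficiently_similar(user_reading, RESOURCE_ENVIRONMENT)
          else "access denied")
    ```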

  7. Laboratory Computing Resource Center

    Science.gov Websites


  8. Integrated modelling to assess long-term water supply capacity of a meso-scale Mediterranean catchment.

    PubMed

    Collet, Lila; Ruelland, Denis; Borrell-Estupina, Valérie; Dezetter, Alain; Servat, Eric

    2013-09-01

    Assessing water supply capacity is crucial to meet stakeholders' needs, notably in the Mediterranean region. This region has been identified as a climate change hot spot, and as a region where water demand is continuously increasing due to population growth and the expansion of irrigated areas. The Hérault River catchment (2500 km², France) is a typical example, and a negative trend in discharge has been observed there since the 1960s. In this context, local stakeholders first need to understand the processes that controlled the evolution of water resources and demands in the past, in order to later evaluate future water supply capacity and anticipate the tensions users could face. A modelling framework is proposed at a 10-day time step to assess whether water resources have been able to meet water demands over the last 50 years. Water supply was evaluated using hydrological modelling and a dam management model. Water demand dynamics were estimated for the domestic and agricultural sectors. A water supply capacity index is computed to assess the extent to which, and the frequency with which, water demand has been satisfied at the sub-basin scale. Simulated runoff dynamics were in good agreement with observations over the calibration and validation periods. Domestic water demand has increased considerably since the 1980s and is characterized by a seasonal peak in summer. Agricultural demand has increased in the downstream sub-basins and decreased upstream, where irrigated areas have shrunk. As a result, although most water demands were satisfied between 1961 and 1980, irrigation requirements in summer have sometimes not been satisfied since the 1980s. This work is the first step toward evaluating possible future changes in water allocation capacity in the catchment, using future climate change, dam management and water use scenarios. Copyright © 2013 Elsevier B.V. All rights reserved.
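
    One way to picture a water supply capacity index at a 10-day time step is as the share of demand that available resources satisfy in each period, together with the frequency with which demand is fully met. The index definition and the toy series below are assumptions for illustration, not the exact formulation used in the study.

    ```python
    # Minimal sketch of a 10-day water supply capacity index. The definition and the
    # toy supply/demand series are illustrative assumptions.

    def supply_capacity_index(supply, demand):
        """Per-period satisfaction ratios and the fraction of fully satisfied periods."""
        ratios = [min(s, d) / d if d > 0 else 1.0 for s, d in zip(supply, demand)]
        fully_met = sum(1 for s, d in zip(supply, demand) if s >= d) / len(demand)
        return ratios, fully_met

    # Toy 10-day volumes (hm^3) for one summer: supply drops while irrigation demand peaks.
    supply = [12.0, 10.5, 8.0, 6.5, 5.0, 4.5]
    demand = [6.0, 6.5, 7.0, 7.5, 7.0, 6.0]
    ratios, freq = supply_capacity_index(supply, demand)
    print([round(r, 2) for r in ratios], round(freq, 2))
    ```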

  9. The influence of working memory capacity on experimental heat pain.

    PubMed

    Nakae, Aya; Endo, Kaori; Adachi, Tomonori; Ikeda, Takashi; Hagihira, Satoshi; Mashimo, Takashi; Osaka, Mariko

    2013-10-01

    Pain processing and attention have a bidirectional interaction that depends upon one's relative ability to use limited-capacity resources. However, correlations between the size of these limited-capacity resources and pain have not been evaluated. Working memory capacity, which is a cognitive resource, can be measured using the reading span task (RST). In this study, we hypothesized that an individual's potential working memory capacity and subjective pain intensity are related. To test this hypothesis, we evaluated 31 healthy participants' potential working memory capacity using the RST, and then applied continuous experimental heat stimulation during the listening span test (LST), which is a modified version of the RST. Subjective pain intensities were significantly lower during the challenging parts of the RST. The pain intensity under conditions where memorizing tasks were performed was compared with that under the control condition, and this comparison showed a correlation with potential working memory capacity. These results indicate that working memory capacity reflects the ability to process information, including precise evaluation of changes in pain perception. In this work, we present data suggesting that changes in subjective pain intensity are related to individual potential working memory capacities. Individual working memory capacity may be a phenotype that reflects sensitivity to changes in pain perception. Copyright © 2013 American Pain Society. Published by Elsevier Inc. All rights reserved.

  10. Multi-scale research of time and space differences about ecological footprint and ecological carrying capacity of the water resources

    NASA Astrophysics Data System (ADS)

    Li, Jiahong; Lei, Xiaohui; Fu, Qiang; Li, Tianxiao; Qiao, Yu; Chen, Lei; Liao, Weihong

    2018-03-01

    A multi-scale assessment framework for assessing and comparing water resource sustainability based on the ecological footprint (EF) is introduced. The study aims to inform water resource management in Heilongjiang Province from different perspectives. First, at the scale of individual cities, the water ecological carrying capacity (ECC) was calculated from 2000 to 2011 and its spatial distribution mapped for the most recent 3 years, which shows that the water ECC is unevenly distributed and has a downward trend year by year. Then, from the perspective of the five secondary partition basins in Heilongjiang Province, the paper calculated the ECC, the EF and the ecological surplus and deficit (S&D) of water resources from 2000 to 2011, which show that the ecological deficit is more prominent in the Nenjiang and Suifenhe basins, which are in an unsustainable development state. Finally, at the provincial scale, the paper calculated the ECC, the EF and the ecological S&D of water resources in Heilongjiang Province from 2000 to 2011, which show that the EF has a rising trend and that the correlation coefficient between the ECC and precipitation is 0.8. There were 5 years of unsustainable development in Heilongjiang. The proposed multi-scale assessment of WEF aims to evaluate the complex relationship between water resource supply and consumption at different spatial scales and over time. It also provides a more reasonable assessment result that can be used by managers and regulators.
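
    At each scale, the surplus-or-deficit comparison reduces to the sign of the difference between the water ecological carrying capacity (ECC) and the ecological footprint (EF). The basin names and numbers in the sketch below are placeholders, not results from the study.

    ```python
    # Minimal sketch of the ecological surplus/deficit (S&D) comparison: surplus when
    # ECC exceeds EF, deficit otherwise. Names and values are illustrative placeholders.

    def surplus_or_deficit(ecc, ef):
        balance = {basin: ecc[basin] - ef[basin] for basin in ecc}
        return {basin: ("surplus" if v >= 0 else "deficit", round(v, 2))
                for basin, v in balance.items()}

    ecc = {"Basin A": 1.8, "Basin B": 0.9}   # per-capita water ECC (hypothetical units)
    ef = {"Basin A": 1.2, "Basin B": 1.4}    # per-capita water EF (hypothetical units)
    print(surplus_or_deficit(ecc, ef))
    ```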

  11. Spectrum sensing and resource allocation for multicarrier cognitive radio systems under interference and power constraints

    NASA Astrophysics Data System (ADS)

    Dikmese, Sener; Srinivasan, Sudharsan; Shaat, Musbah; Bader, Faouzi; Renfors, Markku

    2014-12-01

    Multicarrier waveforms have been commonly recognized as strong candidates for cognitive radio. In this paper, we study the dynamics of spectrum sensing and spectrum allocation functions in cognitive radio context using very practical signal models for the primary users (PUs), including the effects of power amplifier nonlinearities. We start by sensing the spectrum with energy detection-based wideband multichannel spectrum sensing algorithm and continue by investigating optimal resource allocation methods. Along the way, we examine the effects of spectral regrowth due to the inevitable power amplifier nonlinearities of the PU transmitters. The signal model includes frequency selective block-fading channel models for both secondary and primary transmissions. Filter bank-based wideband spectrum sensing techniques are applied for detecting spectral holes and filter bank-based multicarrier (FBMC) modulation is selected for transmission as an alternative multicarrier waveform to avoid the disadvantage of limited spectral containment of orthogonal frequency-division multiplexing (OFDM)-based multicarrier systems. The optimization technique used for the resource allocation approach considered in this study utilizes the information obtained through spectrum sensing and knowledge of spectrum leakage effects of the underlying waveforms, including a practical power amplifier model for the PU transmitter. This study utilizes a computationally efficient algorithm to maximize the SU link capacity with power and interference constraints. It is seen that the SU transmission capacity depends critically on the spectral containment of the PU waveform, and these effects are quantified in a case study using an 802.11-g WLAN scenario.
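
    Capacity maximization under a total power budget and per-subcarrier interference caps is commonly handled with a capped water-filling allocation; the sketch below uses that generic approach with made-up gains, caps and budget, and is not the exact optimization posed in the paper.

    ```python
    import numpy as np

    # Minimal capped water-filling sketch: allocate power over sensed-free subcarriers
    # to maximize sum capacity under a total-power budget and per-subcarrier caps.
    # Gains, caps and the budget are illustrative assumptions.

    def capped_waterfilling(inv_gain, cap, p_total, iters=60):
        """inv_gain[i] = noise / channel gain on subcarrier i; cap[i] = interference limit."""
        inv_gain, cap = np.asarray(inv_gain, float), np.asarray(cap, float)
        if cap.sum() <= p_total:          # budget not binding: transmit at the caps
            return cap
        lo, hi = 0.0, p_total + inv_gain.max()
        for _ in range(iters):            # bisect on the water level
            mu = 0.5 * (lo + hi)
            p = np.clip(mu - inv_gain, 0.0, cap)
            lo, hi = (mu, hi) if p.sum() < p_total else (lo, mu)
        return np.clip(0.5 * (lo + hi) - inv_gain, 0.0, cap)

    inv_gain = [0.2, 0.5, 1.0, 0.3]
    p = capped_waterfilling(inv_gain, cap=[0.6, 0.6, 0.6, 0.2], p_total=1.0)
    print(np.round(p, 3), "sum capacity ~",
          np.round(np.sum(np.log2(1 + p / np.array(inv_gain))), 3))
    ```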

  12. Research on elastic resource management for multi-queue under cloud computing environment

    NASA Astrophysics Data System (ADS)

    CHENG, Zhenjing; LI, Haibo; HUANG, Qiulan; Cheng, Yaodong; CHEN, Gang

    2017-10-01

    As a new approach to managing computing resources, virtualization technology is more and more widely applied in the high-energy physics field. A virtual computing cluster based on Openstack was built at IHEP, using HTCondor as the job queue management system. In a traditional static cluster, a fixed number of virtual machines are pre-allocated to the job queues of different experiments. However, this method cannot adapt well to the volatility of computing resource requirements. To solve this problem, an elastic computing resource management system for the cloud computing environment has been designed. This system performs unified management of virtual computing nodes on the basis of the HTCondor job queues, using dual resource thresholds as well as a quota service. A two-stage pool is designed to improve the efficiency of resource pool expansion. This paper presents several use cases of the elastic resource management system in IHEPCloud. In practical runs, virtual computing resources dynamically expanded or shrank as computing requirements changed. Additionally, the CPU utilization ratio of the computing resources increased significantly compared with traditional resource management. The system also performs well when there are multiple HTCondor schedulers and multiple job queues.
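
    A dual-threshold policy of the kind described above can be sketched as a simple scaling decision per job queue. The thresholds, step size and quota below are illustrative assumptions, not the exact policy deployed at IHEP.

    ```python
    # Minimal sketch of dual-threshold elastic scaling for one job queue: expand the
    # virtual node pool when the queue backs up, shrink it when nodes sit idle.
    # Thresholds, step size and quota are illustrative assumptions.

    def scaling_decision(idle_jobs, idle_nodes, total_nodes, quota,
                         expand_threshold=10, shrink_threshold=5, step=5):
        """Return how many virtual machines to add (positive) or delete (negative)."""
        if idle_jobs > expand_threshold and total_nodes < quota:
            return min(step, quota - total_nodes)   # queue is backed up: grow the pool
        if idle_jobs == 0 and idle_nodes > shrink_threshold:
            return -min(step, idle_nodes)           # nodes are idle: shrink the pool
        return 0                                    # within both thresholds: hold steady

    print(scaling_decision(idle_jobs=40, idle_nodes=0, total_nodes=20, quota=50))   # -> 5
    print(scaling_decision(idle_jobs=0, idle_nodes=12, total_nodes=30, quota=50))   # -> -5
    ```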

  13. Developing a Cloud-Based Online Geospatial Information Sharing and Geoprocessing Platform to Facilitate Collaborative Education and Research

    NASA Astrophysics Data System (ADS)

    Yang, Z. L.; Cao, J.; Hu, K.; Gui, Z. P.; Wu, H. Y.; You, L.

    2016-06-01

    Efficiently discovering and applying geospatial information resources (GIRs) online is critical in the Earth Science domain as well as for cross-disciplinary applications. However, achieving this is challenging due to the heterogeneity, complexity and privacy of online GIRs. In this article, GeoSquare, a collaborative online geospatial information sharing and geoprocessing platform, was developed to tackle this problem. Specifically, (1) GIR registration and multi-view query functions allow users to publish and discover GIRs more effectively. (2) Online geoprocessing and real-time execution status checking help users process data and conduct analysis without pre-installing cumbersome professional tools on their own machines. (3) A service chain orchestration function enables domain experts to contribute and share their domain knowledge with community members through workflow modeling. (4) User inventory management allows registered users to collect and manage their own GIRs, monitor their execution status, and track their own geoprocessing histories. In addition, to enhance the flexibility and capacity of GeoSquare, distributed storage and cloud computing technologies are employed. To support interactive teaching and training, GeoSquare adopts rich internet application (RIA) technology to create a user-friendly graphical user interface (GUI). Results show that GeoSquare can integrate and foster collaboration between dispersed GIRs, computing resources and people. Consequently, educators and researchers can share and exchange resources in an efficient and harmonious way.

  14. Dynamic Reconfiguration of a RGBD Sensor Based on QoS and QoC Requirements in Distributed Systems.

    PubMed

    Munera, Eduardo; Poza-Lujan, Jose-Luis; Posadas-Yagüe, Juan-Luis; Simó-Ten, José-Enrique; Noguera, Juan Fco Blanes

    2015-07-24

    The inclusion of embedded sensors into a networked system provides useful information for many applications. A Distributed Control System (DCS) is one of the clearest examples where processing and communications are constrained by the client's requirements and the capacity of the system. An embedded sensor with advanced processing and communications capabilities supplies high-level information, abstracting away from the data acquisition process and object recognition mechanisms. The implementation of an embedded sensor/actuator as a Smart Resource permits clients to access sensor information through distributed network services. Smart resources can offer sensor services as well as computing, communications and peripheral access by implementing a self-aware adaptation mechanism that adapts the execution profile to the context. On the other hand, information integrity must be ensured when computing processes are dynamically adapted. Therefore, the processing must be adapted to perform tasks within a certain lapse of time while always ensuring a minimum process quality. In the same way, communications must try to reduce the data traffic without excluding relevant information. The main objective of this paper is to present a dynamic configuration mechanism that adapts the sensor processing and communication to the client's requirements in the DCS. The paper describes an implementation of a smart resource based on a Red, Green, Blue, and Depth (RGBD) sensor in order to test the dynamic configuration mechanism presented.
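
    A self-aware adaptation step of the kind described above can be sketched as choosing the richest execution profile that still fits the client's latency requirement and the capacity currently available. The profiles, costs and latencies below are illustrative assumptions, not the implemented RGBD pipeline.

    ```python
    # Minimal sketch of profile selection for a smart resource: pick the highest-quality
    # execution profile that satisfies the latency requirement with the capacity available.
    # Profiles, costs and latencies are illustrative assumptions.

    PROFILES = [
        {"name": "full_rgbd_objects", "cost": 0.8, "latency_ms": 120, "quality": 3},
        {"name": "depth_blobs",       "cost": 0.5, "latency_ms": 60,  "quality": 2},
        {"name": "raw_depth_only",    "cost": 0.2, "latency_ms": 25,  "quality": 1},
    ]

    def select_profile(available_capacity, max_latency_ms):
        feasible = [p for p in PROFILES
                    if p["cost"] <= available_capacity and p["latency_ms"] <= max_latency_ms]
        return max(feasible, key=lambda p: p["quality"])["name"] if feasible else None

    print(select_profile(available_capacity=0.6, max_latency_ms=100))  # -> 'depth_blobs'
    ```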

  15. Computer Simulations of Developmental Change: The Contributions of Working Memory Capacity and Long-Term Knowledge

    ERIC Educational Resources Information Center

    Jones, Gary; Gobet, Fernand; Pine, Julian M.

    2008-01-01

    Increasing working memory (WM) capacity is often cited as a major influence on children's development and yet WM capacity is difficult to examine independently of long-term knowledge. A computational model of children's nonword repetition (NWR) performance is presented that independently manipulates long-term knowledge and WM capacity to determine…

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radtke, M.A.

    This paper will chronicle the activity at Wisconsin Public Service Corporation (WPSC) that resulted in the complete migration of a traditional, late-1970s-vintage Energy Management System (EMS). The new environment includes networked microcomputers, minicomputers, and the corporate mainframe, and provides on-line access to employees outside the energy control center and some WPSC customers. In the late 1980s, WPSC was forecasting an EMS computer upgrade or replacement to address both capacity and technology needs. Reasoning that access to diverse computing resources would best position the company to accommodate the uncertain needs of the energy industry in the 1990s, WPSC chose to investigate an in-place migration to a network of computers able to support heterogeneous hardware and operating systems. The system was developed in a modular fashion, with individual modules being deployed as soon as they were completed. The functional and technical specification was continuously enhanced as operating experience was gained from each operational module. With the migration off the original EMS computers complete, the networked system called DEMAXX (Distributed Energy Management Architecture with eXtensive eXpandability) has exceeded expectations in the areas of cost, performance, flexibility, and reliability.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radtke, M.A.

    This paper will chronicle the activity at Wisconsin Public Service Corporation (WPSC) that resulted in the complete migration of a traditional, late-1970s-vintage Energy Management System (EMS). The new environment includes networked microcomputers, minicomputers, and the corporate mainframe, and provides on-line access to employees outside the energy control center and some WPSC customers. In the late 1980s, WPSC was forecasting an EMS computer upgrade or replacement to address both capacity and technology needs. Reasoning that access to diverse computing resources would best position the company to accommodate the uncertain needs of the energy industry in the 1990s, WPSC chose to investigate an in-place migration to a network of computers able to support heterogeneous hardware and operating systems. The system was developed in a modular fashion, with individual modules being deployed as soon as they were completed. The functional and technical specification was continuously enhanced as operating experience was gained from each operational module. With the migration off the original EMS computers complete, the networked system called DEMAXX (Distributed Energy Management Architecture with eXtensive eXpandability) has exceeded expectations in the areas of cost, performance, flexibility, and reliability.

  18. Additive Classical Capacity of Quantum Channels Assisted by Noisy Entanglement.

    PubMed

    Zhuang, Quntao; Zhu, Elton Yechao; Shor, Peter W

    2017-05-19

    We give a capacity formula for the classical information transmission over a noisy quantum channel, with separable encoding by the sender and limited resources provided by the receiver's preshared ancilla. Instead of a pure state, we consider the signal-ancilla pair in a mixed state, purified by a "witness." Thus, the signal-witness correlation limits the resource available from the signal-ancilla correlation. Our formula characterizes the utility of different forms of resources, including noisy or limited entanglement assistance, for classical communication. With separable encoding, the sender's signals across multiple channel uses are still allowed to be entangled, yet our capacity formula is additive. In particular, for generalized covariant channels, our capacity formula has a simple closed form. Moreover, our additive capacity formula upper bounds the general coherent attack's information gain in various two-way quantum key distribution protocols. For Gaussian protocols, the additivity of the formula indicates that the collective Gaussian attack is the most powerful.

  19. Integrated Sustainable Planning for Industrial Region Using Geospatial Technology

    NASA Astrophysics Data System (ADS)

    Tiwari, Manish K.; Saxena, Aruna; Katare, Vivek

    2012-07-01

    Geospatial techniques and their scope of applications have undergone an order-of-magnitude change since their advent, and they are now universally accepted as important modern tools for mapping and monitoring various natural resources as well as amenities and infrastructure. The huge volume of spatial data generated by various Remote Sensing platforms needs proper management (storage, retrieval, manipulation and analysis) to extract the desired information, a task that is beyond the capability of the human brain. This is where computer-aided GIS technology came into existence. A GIS with major input from Remote Sensing satellites for natural resource management applications must be able to handle spatiotemporal data, supporting spatiotemporal queries and other spatial operations. Software and computer-based tools are designed to make things easier for the user and to improve the efficiency and quality of information processing tasks. Natural resources are a common heritage that we have shared with past generations, and our future generations will inherit these resources from us. Our greed for resources and our tremendous technological capacity to exploit them at a much larger scale have created a situation where we have started withdrawing from future stocks. The Bhopal capital region has attracted the attention of planners since the beginning of the five-year plan strategy for industrial development. However, the projects carried out in the individual districts (Bhopal, Rajgarh, Shajapur, Raisen, Sehore), although they gave fruitful results, made no serious effort to involve the entire region and made no use of the latest geospatial techniques (Remote Sensing, GIS, GPS) to prepare a well-structured computerized database, without which it is very difficult to retrieve, analyze and compare the data for monitoring as well as for planning future developmental activities.

  20. Spatiotemporal Domain Decomposition for Massive Parallel Computation of Space-Time Kernel Density

    NASA Astrophysics Data System (ADS)

    Hohl, A.; Delmelle, E. M.; Tang, W.

    2015-07-01

    Accelerated processing capabilities are deemed critical when conducting analysis on spatiotemporal datasets of increasing size, diversity and availability. High-performance parallel computing offers the capacity to solve computationally demanding problems in a limited timeframe, but likewise poses the challenge of preventing processing inefficiency due to workload imbalance between computing resources. Therefore, when designing new algorithms capable of implementing parallel strategies, careful spatiotemporal domain decomposition is necessary to account for heterogeneity in the data. In this study, we perform octree-based adaptive decomposition of the spatiotemporal domain for parallel computation of space-time kernel density. In order to avoid edge effects near subdomain boundaries, we establish spatiotemporal buffers to include adjacent data points that are within the spatial and temporal kernel bandwidths. Then, we quantify the computational intensity of each subdomain to balance workloads among processors. We illustrate the benefits of our methodology using a space-time epidemiological dataset of Dengue fever, an infectious vector-borne disease that poses a severe threat to communities in tropical climates. Our parallel implementation of kernel density reaches substantial speedup compared to sequential processing, and achieves high levels of workload balance among processors due to great accuracy in quantifying computational intensity. Our approach is portable to other space-time analytical tests.
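
    The estimator being parallelized can be written compactly; the sequential sketch below uses Epanechnikov kernels and toy events as assumptions, and it is this per-subdomain computation that the buffered octree decomposition would distribute across processors.

    ```python
    import numpy as np

    # Minimal sequential sketch of a space-time kernel density estimate (STKDE).
    # Bandwidths, the Epanechnikov kernels and the toy events are illustrative assumptions.

    def stkde(grid_xy, grid_t, events, hs, ht):
        """Density at (x, y, t) evaluation points from events shaped (n, 3) = (x, y, t)."""
        x, y, t = grid_xy[:, 0], grid_xy[:, 1], grid_t
        density = np.zeros(len(grid_xy))
        for ex, ey, et in events:
            ds2 = ((x - ex) ** 2 + (y - ey) ** 2) / hs ** 2   # scaled squared spatial distance
            dt2 = ((t - et) / ht) ** 2                        # scaled squared temporal distance
            ks = np.where(ds2 < 1, 2 / np.pi * (1 - ds2), 0.0)  # 2-D Epanechnikov kernel
            kt = np.where(dt2 < 1, 0.75 * (1 - dt2), 0.0)       # 1-D Epanechnikov kernel
            density += ks * kt
        return density / (len(events) * hs ** 2 * ht)

    events = np.array([[0.0, 0.0, 1.0], [0.5, 0.2, 1.5], [2.0, 2.0, 5.0]])
    points = np.array([[0.0, 0.0], [1.0, 1.0]])
    times = np.array([1.0, 2.0])
    print(np.round(stkde(points, times, events, hs=1.0, ht=2.0), 4))
    ```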

  1. Tools for Analyzing Computing Resource Management Strategies and Algorithms for SDR Clouds

    NASA Astrophysics Data System (ADS)

    Marojevic, Vuk; Gomez-Miguelez, Ismael; Gelonch, Antoni

    2012-09-01

    Software defined radio (SDR) clouds centralize the computing resources of base stations. The computing resource pool is shared between radio operators, and digital signal processing chains are dynamically loaded and unloaded to provide wireless communications services on demand. In particular, each new user session request requires the allocation of computing resources for executing the corresponding SDR transceivers. The huge amount of computing resources in SDR cloud data centers and the numerous session requests at certain hours of the day require efficient computing resource management. We propose a hierarchical approach, where the data center is divided into clusters that are managed in a distributed way. This paper presents a set of computing resource management tools for analyzing computing resource management strategies and algorithms for SDR clouds. We use the tools to evaluate different strategies and algorithms. The results show that more sophisticated algorithms can achieve higher resource occupation and that a tradeoff exists between cluster size and algorithm complexity.
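
    The kind of cluster-level allocation such tools analyze can be pictured with a first-fit search over clusters. The cluster sizes, session demands and the first-fit rule below are illustrative assumptions, not the strategies evaluated in the paper.

    ```python
    # Minimal sketch of allocating a new user session (an SDR transceiver chain) to the
    # first cluster with enough spare computing capacity. Sizes and demands are
    # illustrative assumptions.

    def allocate_session(clusters, demand):
        """clusters: list of dicts with 'capacity' and 'used' (abstract compute units)."""
        for idx, cluster in enumerate(clusters):
            if cluster["capacity"] - cluster["used"] >= demand:
                cluster["used"] += demand
                return idx        # cluster that hosts the transceiver chain
        return None               # blocked: no cluster can serve the request

    clusters = [{"capacity": 100, "used": 90}, {"capacity": 100, "used": 40}]
    print(allocate_session(clusters, demand=25))   # -> 1
    print([c["used"] for c in clusters])           # -> [90, 65]
    ```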

  2. Lessons learned from implementing a national infrastructure in Sweden for storage and analysis of next-generation sequencing data

    PubMed Central

    2013-01-01

    Analyzing and storing data and results from next-generation sequencing (NGS) experiments is a challenging task, hampered by ever-increasing data volumes and frequent updates of analysis methods and tools. Storage and computation have grown beyond the capacity of personal computers and there is a need for suitable e-infrastructures for processing. Here we describe UPPNEX, an implementation of such an infrastructure, tailored to the needs of data storage and analysis of NGS data in Sweden serving various labs and multiple instruments from the major sequencing technology platforms. UPPNEX comprises resources for high-performance computing, large-scale and high-availability storage, an extensive bioinformatics software suite, up-to-date reference genomes and annotations, a support function with system and application experts as well as a web portal and support ticket system. UPPNEX applications are numerous and diverse, and include whole genome-, de novo- and exome sequencing, targeted resequencing, SNP discovery, RNASeq, and methylation analysis. There are over 300 projects that utilize UPPNEX and include large undertakings such as the sequencing of the flycatcher and Norwegian spruce. We describe the strategic decisions made when investing in hardware, setting up maintenance and support, allocating resources, and illustrate major challenges such as managing data growth. We conclude with summarizing our experiences and observations with UPPNEX to date, providing insights into the successful and less successful decisions made. PMID:23800020

  3. Ecological Footprint and Ecosystem Services Models: A Comparative Analysis of Environmental Carrying Capacity Calculation Approach in Indonesia

    NASA Astrophysics Data System (ADS)

    Subekti, R. M.; Suroso, D. S. A.

    2018-05-01

    Calculation of environmental carrying capacity can be done by various approaches. The selection of an appropriate approach determines the success of determining and applying environmental carrying capacity. This study aimed to compare the ecological footprint approach and the ecosystem services approach for calculating environmental carrying capacity. It attempts to describe two relatively new models that require further explanation if they are used to calculate environmental carrying capacity. In their application, attention needs to be paid to their respective advantages and weaknesses. Conceptually, the ecological footprint model is more complete than the ecosystem services model, because it describes the supply and demand of resources, including supportive and assimilative capacity of the environment, and measurable output through a resource consumption threshold. However, this model also has weaknesses, such as not considering technological change and resources beneath the earth’s surface, as well as the requirement to provide trade data between regions for calculating at provincial and district level. The ecosystem services model also has advantages, such as being in line with strategic environmental assessment (SEA) of ecosystem services, using spatial analysis based on ecoregions, and a draft regulation on calculation guidelines formulated by the government. Meanwhile, weaknesses are that it only describes the supply of resources, that the assessment of the different types of ecosystem services by experts tends to be subjective, and that the output of the calculation lacks a resource consumption threshold.

  4. Towards a sustainable framework for computer based health information systems (CHIS) for least developed countries (LDCs).

    PubMed

    Gordon, Abekah Nkrumah; Hinson, Robert Ebo

    2007-01-01

    The purpose of this paper is to argue for a theoretical framework by which the development of computer-based health information systems (CHIS) can be made sustainable. Health management and promotion thrive on well-articulated CHIS. There are high levels of risk associated with the development of CHIS in the context of least developed countries (LDCs), which often makes them unsustainable. This paper is based largely on a literature survey of health promotion and information systems. The main factors accounting for the sustainability problem in less developed countries include poor infrastructure, inappropriate donor policies and strategies, and inadequate human resource capacity. To counter these challenges and to ensure that CHIS deployment in LDCs is sustainable, it is proposed that the activities involved in the implementation of these systems be incorporated into organizational routines. This will secure the needed resources, as well as the relevant support from all stakeholders of the system, on a continuous basis. The paper sets out to look at the issue of CHIS sustainability in LDCs, theoretically explains the factors that account for the sustainability problem, and develops a conceptual model based on theoretical literature and existing empirical findings.

  5. The Virtual Climate Data Server (vCDS): An iRODS-Based Data Management Software Appliance Supporting Climate Data Services and Virtualization-as-a-Service in the NASA Center for Climate Simulation

    NASA Technical Reports Server (NTRS)

    Schnase, John L.; Tamkin, Glenn S.; Ripley, W. David III; Stong, Savannah; Gill, Roger; Duffy, Daniel Q.

    2012-01-01

    Scientific data services are becoming an important part of the NASA Center for Climate Simulation's mission. Our technological response to this expanding role is built around the concept of a Virtual Climate Data Server (vCDS), repetitive provisioning, image-based deployment and distribution, and virtualization-as-a-service. The vCDS is an iRODS-based data server specialized to the needs of a particular data-centric application. We use RPM scripts to build vCDS images in our local computing environment, our local Virtual Machine Environment, NASA's Nebula Cloud Services, and Amazon's Elastic Compute Cloud. Once provisioned into one or more of these virtualized resource classes, vCDSs can use iRODS's federation capabilities to create an integrated ecosystem of managed collections that is scalable and adaptable to changing resource requirements. This approach enables platform- or software-as-a-service deployment of vCDS and allows the NCCS to offer virtualization-as-a-service: a capacity to respond in an agile way to new customer requests for data services.

  6. Exploring the Relationship Between Surgical Capacity and Output in Ghana: Current Capacity Assessments May Not Tell the Whole Story.

    PubMed

    Stewart, Barclay T; Gyedu, Adam; Gaskill, Cameron; Boakye, Godfred; Quansah, Robert; Donkor, Peter; Volmink, Jimmy; Mock, Charles

    2018-03-13

    Capacity assessments serve as surrogates for surgical output in low- and middle-income countries where detailed registers do not exist. The relationship between surgical capacity and output was evaluated in Ghana to determine whether a more critical interpretation of capacity assessment data is needed on which to base health systems strengthening initiatives. A standardized surgical capacity assessment was performed at 37 hospitals nationwide using WHO guidelines; availability of 25 essential resources and capabilities was used to create a composite capacity score that ranged from 0 (no availability of essential resources) to 75 (constant availability) for each hospital. Data regarding the number of essential operations performed over 1 year, surgical specialties available, hospital beds, and functional operating rooms were also collected. The relationship between capacity and output was explored. The median surgical capacity score was 37 [interquartile range (IQR) 29-48; range 20-56]. The median number of essential operations per year was 1480 (IQR 736-1932) at first-level hospitals; 1545 operations (IQR 984-2452) at referral hospitals; and 11,757 operations (IQR 3769-21,256) at tertiary hospitals. Surgical capacity and output were not correlated (p > 0.05). Contrary to current understanding, surgical capacity assessments may not accurately reflect surgical output. To improve the validity of surgical capacity assessments and facilitate maximal use of available resources, other factors that influence output should also be considered, including demand-side factors; supply-side factors and process elements; and health administration and management factors.
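
    For illustration, a composite score with the stated 0-75 range can be formed by rating each of the 25 essential items on a 0-3 availability scale and summing; the item list and rating rubric below are assumptions made for this sketch, not the authors' exact instrument.

      # Hypothetical sketch: composite surgical capacity score (0-75) from
      # availability ratings of 25 essential resources, each rated
      # 0 = never available, 1 = rarely, 2 = sometimes, 3 = constantly available.
      # Item names and the rating scale are illustrative assumptions.

      ESSENTIAL_ITEMS = [f"item_{i:02d}" for i in range(1, 26)]  # placeholder names

      def composite_capacity_score(ratings: dict) -> int:
          """Sum per-item availability ratings; range 0 (none) to 75 (all constant)."""
          missing = [item for item in ESSENTIAL_ITEMS if item not in ratings]
          if missing:
              raise ValueError(f"missing ratings for: {missing}")
          if any(not 0 <= ratings[item] <= 3 for item in ESSENTIAL_ITEMS):
              raise ValueError("each rating must be between 0 and 3")
          return sum(ratings[item] for item in ESSENTIAL_ITEMS)

      # Example: a hospital with every item 'sometimes available' scores 50.
      example = {item: 2 for item in ESSENTIAL_ITEMS}
      print(composite_capacity_score(example))  # 50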

  7. Planning for partnerships: Maximizing surge capacity resources through service learning.

    PubMed

    Adams, Lavonne M; Reams, Paula K; Canclini, Sharon B

    2015-01-01

    Infectious disease outbreaks and natural or human-caused disasters can strain the community's surge capacity through sudden demand on healthcare activities. Collaborative partnerships between communities and schools of nursing have the potential to maximize resource availability to meet community needs following a disaster. This article explores how communities can work with schools of nursing to enhance surge capacity through systems thinking, integrated planning, and cooperative efforts.

  8. An Interface for Biomedical Big Data Processing on the Tianhe-2 Supercomputer.

    PubMed

    Yang, Xi; Wu, Chengkun; Lu, Kai; Fang, Lin; Zhang, Yong; Li, Shengkang; Guo, Guixin; Du, YunFei

    2017-12-01

    Big data, cloud computing, and high-performance computing (HPC) are on the verge of convergence. Cloud computing is already playing an active part in big data processing with the help of big data frameworks like Hadoop and Spark. The recent upsurge of high-performance computing in China provides extra possibilities and capacity to address the challenges associated with big data. In this paper, we propose Orion, a big data interface on the Tianhe-2 supercomputer, to enable big data applications to run on Tianhe-2 via a single command or a shell script. Orion supports multiple users, and each user can launch multiple tasks. It minimizes the effort needed to initiate big data applications on the Tianhe-2 supercomputer via automated configuration. Orion follows the "allocate-when-needed" paradigm, and it avoids the idle occupation of computational resources. We tested the utility and performance of Orion using a big genomic dataset and achieved a satisfactory performance on Tianhe-2 with very few modifications to existing applications that were implemented in Hadoop/Spark. In summary, Orion provides a practical and economical interface for big data processing on Tianhe-2.

  9. A study of computer graphics technology in application of communication resource management

    NASA Astrophysics Data System (ADS)

    Li, Jing; Zhou, Liang; Yang, Fei

    2017-08-01

    With the development of computer technology, computer graphics technology has been widely adopted; in particular, the success of object-oriented and multimedia technologies has promoted the development of graphics technology within computer software systems. Computer graphics theory and application technology have therefore become an important topic in the computer field, and graphics technology is applied ever more widely across many domains. In recent years, with the development of the social economy and especially the rapid development of information technology, traditional approaches to communication resource management can no longer effectively meet the needs of resource management. Communication resource management still relies on the original tools and methods for managing and maintaining resource equipment, which has caused many problems: it is very difficult for non-professionals to understand the equipment and the overall situation, resource utilization is relatively low, and managers cannot quickly and accurately assess resource conditions. To address these problems, this paper proposes introducing computer graphics technology into communication resource management. Doing so not only makes communication resource management more vivid, but also reduces the cost of resource management and improves work efficiency.

  10. Electricity market design for generator revenue sufficiency with increased variable generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levin, Todd; Botterud, Audun

    Here, we present a computationally efficient mixed-integer program (MIP) that determines optimal generator expansion decisions, and hourly unit commitment and dispatch in a power system. The impact of increasing wind power capacity on the optimal generation mix and generator profitability is analyzed for a test case that approximates the electricity market in Texas (ERCOT). We analyze three market policies that may support resource adequacy: Operating Reserve Demand Curves (ORDC), Fixed Reserve Scarcity Prices (FRSP) and fixed capacity payments (CP). Optimal expansion plans are comparable between the ORDC and FRSP implementations, while capacity payments may result in additional new capacity. The FRSP policy leads to frequent reserves scarcity events and corresponding price spikes, while the ORDC implementation results in more continuous energy prices. Average energy prices decrease with increasing wind penetration under all policies, as do revenues for baseload and wind generators. Intermediate and peak load plants benefit from higher reserve prices and are less exposed to reduced energy prices. All else equal, an ORDC approach may be preferred to FRSP as it results in similar expansion and revenues with less extreme energy prices. A fixed CP leads to additional new flexible NGCT units, but lower profits for other technologies.
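
    A highly simplified sketch of the kind of joint expansion and unit-commitment MIP described here (our notation, omitting reserves, ramping and transmission) is

      \[ \min_{x,u,p} \;\sum_g IC_g\,x_g \;+\; \sum_t \sum_g \big(c_g\,p_{g,t} + n_g\,u_{g,t}\big) \quad \text{s.t.}\quad \sum_g p_{g,t} + w_t \ge d_t,\quad 0 \le p_{g,t} \le \bar{P}_g\,(x^0_g + x_g)\,u_{g,t},\quad u_{g,t}\in\{0,1\},\; x_g\in\mathbb{Z}_{\ge 0}, \]

    where \(x_g\) is the number of new units of technology \(g\), \(x^0_g\) the existing units, \(u_{g,t}\) and \(p_{g,t}\) the hourly commitment and dispatch, \(w_t\) wind output, \(d_t\) demand, \(IC_g\) annualized investment cost, \(c_g\) variable cost and \(n_g\) a no-load cost. Capacity payments or reserve scarcity prices would enter as additional revenue or constraint terms in the authors' formulations; this sketch only fixes the overall structure.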

  11. Electricity market design for generator revenue sufficiency with increased variable generation

    DOE PAGES

    Levin, Todd; Botterud, Audun

    2015-10-01

    Here, we present a computationally efficient mixed-integer program (MIP) that determines optimal generator expansion decisions, and hourly unit commitment and dispatch in a power system. The impact of increasing wind power capacity on the optimal generation mix and generator profitability is analyzed for a test case that approximates the electricity market in Texas (ERCOT). We analyze three market policies that may support resource adequacy: Operating Reserve Demand Curves (ORDC), Fixed Reserve Scarcity Prices (FRSP) and fixed capacity payments (CP). Optimal expansion plans are comparable between the ORDC and FRSP implementations, while capacity payments may result in additional new capacity. The FRSP policy leads to frequent reserves scarcity events and corresponding price spikes, while the ORDC implementation results in more continuous energy prices. Average energy prices decrease with increasing wind penetration under all policies, as do revenues for baseload and wind generators. Intermediate and peak load plants benefit from higher reserve prices and are less exposed to reduced energy prices. All else equal, an ORDC approach may be preferred to FRSP as it results in similar expansion and revenues with less extreme energy prices. A fixed CP leads to additional new flexible NGCT units, but lower profits for other technologies.

  12. Multitasking as a choice: a perspective.

    PubMed

    Broeker, Laura; Liepelt, Roman; Poljac, Edita; Künzell, Stefan; Ewolds, Harald; de Oliveira, Rita F; Raab, Markus

    2018-01-01

    Performance decrements in multitasking have been explained by limitations in cognitive capacity, either modelled as static structural bottlenecks or as the scarcity of overall cognitive resources that prevent humans, or at least restrict them, from processing two tasks at the same time. However, recent research has shown that individual differences, flexible resource allocation, and prioritization of tasks cannot be fully explained by these accounts. We argue that understanding human multitasking as a choice and examining multitasking performance from the perspective of judgment and decision-making (JDM), may complement current dual-task theories. We outline two prominent theories from the area of JDM, namely Simple Heuristics and the Decision Field Theory, and adapt these theories to multitasking research. Here, we explain how computational modelling techniques and decision-making parameters used in JDM may provide a benefit to understanding multitasking costs and argue that these techniques and parameters have the potential to predict multitasking behavior in general, and also individual differences in behavior. Finally, we present the one-reason choice metaphor to explain a flexible use of limited capacity as well as changes in serial and parallel task processing. Based on this newly combined approach, we outline a concrete interdisciplinary future research program that we think will help to further develop multitasking research.

  13. Flexible cognitive resources: competitive content maps for attention and memory

    PubMed Central

    Franconeri, Steven L.; Alvarez, George A.; Cavanagh, Patrick

    2013-01-01

    The brain has finite processing resources so that, as tasks become harder, performance degrades. Where do the limits on these resources come from? We focus on a variety of capacity-limited buffers related to attention, recognition, and memory that we claim have a two-dimensional ‘map’ architecture, where individual items compete for cortical real estate. This competitive format leads to capacity limits that are flexible, set by the nature of the content and their locations within an anatomically delimited space. We contrast this format with the standard ‘slot’ architecture and its fixed capacity. Using visual spatial attention and visual short-term memory as case studies, we suggest that competitive maps are a concrete and plausible architecture that limits cognitive capacity across many domains. PMID:23428935

  14. A Cloud-Based Infrastructure for Near-Real-Time Processing and Dissemination of NPP Data

    NASA Astrophysics Data System (ADS)

    Evans, J. D.; Valente, E. G.; Chettri, S. S.

    2011-12-01

    We are building a scalable cloud-based infrastructure for generating and disseminating near-real-time data products from a variety of geospatial and meteorological data sources, including the new National Polar-Orbiting Environmental Satellite System (NPOESS) Preparatory Project (NPP). Our approach relies on linking Direct Broadcast and other data streams to a suite of scientific algorithms coordinated by NASA's International Polar-Orbiter Processing Package (IPOPP). The resulting data products are directly accessible to a wide variety of end-user applications, via industry-standard protocols such as OGC Web Services, Unidata Local Data Manager, or OPeNDAP, using open source software components. The processing chain employs on-demand computing resources from Amazon.com's Elastic Compute Cloud and NASA's Nebula cloud services. Our current prototype targets short-term weather forecasting, in collaboration with NASA's Short-term Prediction Research and Transition (SPoRT) program and the National Weather Service. Direct Broadcast is especially crucial for NPP, whose current ground segment is unlikely to deliver data quickly enough for short-term weather forecasters and other near-real-time users. Direct Broadcast also allows full local control over data handling, from the receiving antenna to end-user applications: this provides opportunities to streamline processes for data ingest, processing, and dissemination, and thus to make interpreted data products (Environmental Data Records) available to practitioners within minutes of data capture at the sensor. Cloud computing lets us grow and shrink computing resources to meet large and rapid fluctuations in data availability (twice daily for polar orbiters) - and similarly large fluctuations in demand from our target (near-real-time) users. This offers a compelling business case for cloud computing: the processing or dissemination systems can grow arbitrarily large to sustain near-real time data access despite surges in data volumes or user demand, but that computing capacity (and hourly costs) can be dropped almost instantly once the surge passes. Cloud computing also allows low-risk experimentation with a variety of machine architectures (processor types; bandwidth, memory, and storage capacities, etc.) and of system configurations (including massively parallel computing patterns). Finally, our service-based approach (in which user applications invoke software processes on a Web-accessible server) facilitates access into datasets of arbitrary size and resolution, and allows users to request and receive tailored products on demand. To maximize the usefulness and impact of our technology, we have emphasized open, industry-standard software interfaces. We are also using and developing open source software to facilitate the widespread adoption of similar, derived, or interoperable systems for processing and serving near-real-time data from NPP and other sources.

  15. 78 FR 65632 - Centralized Capacity Markets in Regional Transmission Organizations and Independent System...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-01

    ... capacity product, as under current market designs, are there certain fundamental performance standards that... necessary for capacity investment? PJM offers LSEs the alternative to opt out of its capacity auction by... transmission may substitute for capacity resources. How can investment in capacity and transmission planning be...

  16. POCA Update: An NSF PAARE Project

    NASA Astrophysics Data System (ADS)

    Walter, Donald K.; Brittain, S. D.; Cash, J. L.; Hartmann, D. H.; Howell, S. B.; King, J. R.; Leising, M. D.; Mayo, E. A.; Mighell, K. J.; Smith, D. M., Jr.

    2011-01-01

    We report on the status of "A Partnership in Observational and Computational Astronomy (POCA)” under the NSF's "Partnerships in Astronomy and Astrophysics Research and Education (PAARE)" program. This partnership includes South Carolina State University (a Historically Black College/University), Clemson University (a Ph.D. granting institution) and the National Optical Astronomy Observatory. We have reached the midpoint of this 5-year award and discuss the successes, challenges and obstacles encountered to date. Included is a summary of our summer REU program, the POCA graduate fellowship program, faculty research capacity building, outreach activities, increased use of NSF facilities and shared resources. Additional POCA research presentations by the authors are described elsewhere in these proceedings. Support for this work was provided by the NSF PAARE program to South Carolina State University under award AST-0750814 as well as resources and support provided by Clemson University and the National Optical Astronomy Observatory.

  17. Establishing a distributed national research infrastructure providing bioinformatics support to life science researchers in Australia.

    PubMed

    Schneider, Maria Victoria; Griffin, Philippa C; Tyagi, Sonika; Flannery, Madison; Dayalan, Saravanan; Gladman, Simon; Watson-Haigh, Nathan; Bayer, Philipp E; Charleston, Michael; Cooke, Ira; Cook, Rob; Edwards, Richard J; Edwards, David; Gorse, Dominique; McConville, Malcolm; Powell, David; Wilkins, Marc R; Lonie, Andrew

    2017-06-30

    EMBL Australia Bioinformatics Resource (EMBL-ABR) is a developing national research infrastructure, providing bioinformatics resources and support to life science and biomedical researchers in Australia. EMBL-ABR comprises 10 geographically distributed national nodes with one coordinating hub, with current funding provided through Bioplatforms Australia and the University of Melbourne for its initial 2-year development phase. The EMBL-ABR mission is to: (1) increase Australia's capacity in bioinformatics and data sciences; (2) contribute to the development of training in bioinformatics skills; (3) showcase Australian data sets at an international level and (4) enable engagement in international programs. The activities of EMBL-ABR are focussed in six key areas, aligning with comparable international initiatives such as ELIXIR, CyVerse and NIH Commons. These key areas (Tools, Data, Standards, Platforms, Compute and Training) are described in this article. © The Author 2017. Published by Oxford University Press.

  18. A resource management architecture based on complex network theory in cloud computing federation

    NASA Astrophysics Data System (ADS)

    Zhang, Zehua; Zhang, Xuejie

    2011-10-01

    Cloud Computing Federation is a main trend of Cloud Computing, and resource management has a significant effect on the design, realization, and efficiency of Cloud Computing Federation. Cloud Computing Federation has the typical characteristics of a complex system; therefore, in this paper we propose a resource management architecture based on complex network theory for Cloud Computing Federation (abbreviated as RMABC), with a detailed design of the resource discovery and resource announcement mechanisms. Compared with existing resource management mechanisms in distributed computing systems, a Task Manager in RMABC can use historical information and current state data obtained from other Task Managers to evolve the complex network composed of Task Managers, and thus has advantages in resource discovery speed, fault tolerance, and adaptive ability. The results of the model experiment confirm the advantage of RMABC in resource discovery performance.

  19. Landscape functionality of plant communities in the Impala Platinum mining area, Rustenburg.

    PubMed

    van der Walt, L; Cilliers, S S; Kellner, K; Tongway, D; van Rensburg, L

    2012-12-30

    The tremendous growth of the platinum mining industry in South Africa has affected the natural environment adversely. The waste produced by platinum mineral processing is alkaline, biologically sterile and has a low water-holding capacity. These properties may create dysfunctional, 'leaky' areas in the landscape that limit biological development. Landscape Function Analysis (LFA) is a monitoring procedure that assesses the degradation of landscapes, as brought about by human, animal and natural activities, by rapidly assessing certain soil surface indicators of the biophysical functionality of the system. The "Trigger-Transfer-Reserve-Pulse" (TTRP) conceptual framework forms the foundation for assessing landscape function when using LFA; its two main aspects are the loss of resources from the system and the utilisation of resources by the system. After a survey of landscape heterogeneity to reflect the spatial organisation of the landscape, soil surface indicators are assessed within different patch types (identifiable units that retain resources passing through the system) and interpatches (units between patches where vital resources are not retained but lost), in order to assess the capacity of patches with various physical properties to regulate the effectiveness of resource control in the landscape. Indices describing landscape organisation, as well as soil surface quality indices, are computed by a spreadsheet analysis. When assembled in different combinations, three indices emerge that reflect soil productive potential, namely (1) surface stability, (2) infiltration capacity, and (3) the nutrient cycling potential of the landscape. In this study we compared the landscape functionality of natural thornveld areas, rehabilitated opencast mines and rehabilitated slopes of tailings dams in the area leased for mining in the Rustenburg area. Our results show that the rehabilitated areas had a higher total SSA functionality, due to higher infiltration and nutrient cycling indices, than the natural thornveld landscapes. The length of interpatches and the width of patches greatly influenced the landscape function of the studied areas. The natural thornveld areas had a marginally higher total patch area than the rehabilitated areas. Vegetated patches (grass-, sparse grass-, grassy forb-, and grassy shrub-patches) generally scored the highest functionality indices, whilst bare soil interpatches contributed the least to the landscape functionality of the various plant communities. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. Reinforcement learning techniques for controlling resources in power networks

    NASA Astrophysics Data System (ADS)

    Kowli, Anupama Sunil

    As power grids transition towards increased reliance on renewable generation, energy storage and demand response resources, an effective control architecture is required to harness the full functionalities of these resources. There is a critical need for control techniques that recognize the unique characteristics of the different resources and exploit the flexibility afforded by them to provide ancillary services to the grid. The work presented in this dissertation addresses these needs. Specifically, new algorithms are proposed, which allow control synthesis in settings wherein the precise distribution of the uncertainty and its temporal statistics are not known. These algorithms are based on recent developments in Markov decision theory, approximate dynamic programming and reinforcement learning. They impose minimal assumptions on the system model and allow the control to be "learned" based on the actual dynamics of the system. Furthermore, they can accommodate complex constraints such as capacity and ramping limits on generation resources, state-of-charge constraints on storage resources, comfort-related limitations on demand response resources and power flow limits on transmission lines. Numerical studies demonstrating applications of these algorithms to practical control problems in power systems are discussed. Results demonstrate how the proposed control algorithms can be used to improve the performance and reduce the computational complexity of the economic dispatch mechanism in a power network. We argue that the proposed algorithms are eminently suitable to develop operational decision-making tools for large power grids with many resources and many sources of uncertainty.

  1. 78 FR 59924 - Centralized Capacity Markets in Regional Transmission Organizations and Independent System...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-30

    ... does the changing resource mix (i.e., increased reliance on natural gas-fired generation, increasing... resource planning policies, emerging technologies and fuels such as shale gas, price responsive demand and... design tools could prospectively augment, supplement or substitute for typical centralized capacity...

  2. A resource-sharing model based on a repeated game in fog computing.

    PubMed

    Sun, Yan; Zhang, Nan

    2017-03-01

    With the rapid development of cloud computing techniques, the number of users is undergoing exponential growth. It is difficult for traditional data centers to perform many tasks in real time because of the limited bandwidth of resources. The concept of fog computing is proposed to support traditional cloud computing and to provide cloud services. In fog computing, the resource pool is composed of sporadic distributed resources that are more flexible and movable than a traditional data center. In this paper, we propose a fog computing structure and present a crowd-funding algorithm to integrate spare resources in the network. Furthermore, to encourage more resource owners to share their resources with the resource pool and to supervise the resource supporters as they actively perform their tasks, we propose an incentive mechanism in our algorithm. Simulation results show that our proposed incentive mechanism can effectively reduce the SLA violation rate and accelerate the completion of tasks.
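
    As a toy illustration of the repeated-game intuition behind such an incentive mechanism (the trigger condition and all parameters below are ours, not the paper's algorithm), an owner keeps sharing as long as the discounted value of future incentive payments outweighs the one-round saving from free-riding:

      # Toy repeated-game sketch of an incentive to share spare resources in a
      # fog pool. The grim-trigger rule and numbers are illustrative assumptions.

      def sharing_is_rational(reward: float, cost: float, discount: float) -> bool:
          """Defecting saves `cost` once, but forfeits the per-round surplus
          (reward - cost) in every later round; share when the discounted
          stream of future surpluses exceeds that one-off saving."""
          future_surplus = (reward - cost) * discount / (1.0 - discount)
          return future_surplus >= cost

      # Owners keep contributing when future rounds matter enough (high discount)
      # relative to the per-round cost of contributing.
      for discount in (0.5, 0.8, 0.95):
          print(discount, sharing_is_rational(reward=1.0, cost=0.6, discount=discount))
      # 0.5 -> False, 0.8 -> True, 0.95 -> True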

  3. A review of Computer Science resources for learning and teaching with K-12 computing curricula: an Australian case study

    NASA Astrophysics Data System (ADS)

    Falkner, Katrina; Vivian, Rebecca

    2015-10-01

    To support teachers to implement Computer Science curricula into classrooms from the very first year of school, teachers, schools and organisations seek quality curriculum resources to support implementation and teacher professional development. Until now, many Computer Science resources and outreach initiatives have targeted K-12 school-age children, with the intention to engage children and increase interest, rather than to formally teach concepts and skills. What is the educational quality of existing Computer Science resources and to what extent are they suitable for classroom learning and teaching? In this paper, an assessment framework is presented to evaluate the quality of online Computer Science resources. Further, a semi-systematic review of available online Computer Science resources was conducted to evaluate resources available for classroom learning and teaching and to identify gaps in resource availability, using the Australian curriculum as a case study analysis. The findings reveal a predominance of quality resources, however, a number of critical gaps were identified. This paper provides recommendations and guidance for the development of new and supplementary resources and future research.

  4. An accurate, compact and computationally efficient representation of orbitals for quantum Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Luo, Ye; Esler, Kenneth; Kent, Paul; Shulenburger, Luke

    Quantum Monte Carlo (QMC) calculations of giant molecules and of surface and defect properties of solids have recently become feasible due to drastically expanding computational resources. However, with the most computationally efficient basis set, B-splines, these calculations are severely restricted by the memory capacity of compute nodes: the B-spline coefficients are shared on a node but not distributed among nodes, to ensure fast evaluation. A hybrid representation, which incorporates atomic orbitals near the ions and B-splines in the interstitial regions, offers a more accurate and less memory-demanding description of the orbitals, because they are naturally more atomic-like near ions and much smoother in between, thus allowing coarser B-spline grids. We will demonstrate the advantage of the hybrid representation over pure B-spline and Gaussian basis sets and also show significant speed-ups, for example in computing the non-local pseudopotentials with our new scheme. Moreover, we discuss a new algorithm for atomic orbital initialization, which used to require an extra workflow step taking a few days. With this work, the highly efficient hybrid representation paves the way to simulating large, even inhomogeneous, systems using QMC. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Computational Materials Sciences Program.
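
    As a toy one-dimensional illustration of the hybrid idea (the real representation operates on 3D orbitals inside the QMC code; the functions, grid and cutoff below are assumptions made for illustration):

      # Toy 1D sketch of a hybrid orbital representation: an analytic,
      # atomic-like form inside a cutoff radius and a coarse spline fit outside,
      # where the orbital is smooth and a fine grid would waste memory.
      import numpy as np
      from scipy.interpolate import make_interp_spline

      R_CUT = 1.0  # illustrative cutoff separating atomic and spline regions

      def atomic_part(r):
          """Sharply varying, atomic-like behaviour near the nucleus."""
          return np.exp(-3.0 * np.asarray(r, dtype=float))

      # A coarse grid suffices in the smooth interstitial region.
      grid = np.linspace(R_CUT, 10.0, 40)
      spline_part = make_interp_spline(grid, atomic_part(grid), k=3)

      def hybrid_orbital(r):
          r = np.asarray(r, dtype=float)
          return np.where(r < R_CUT, atomic_part(r), spline_part(np.clip(r, R_CUT, 10.0)))

      print(hybrid_orbital([0.1, 0.5, 2.0, 8.0]))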

  5. iTools: a framework for classification, categorization and integration of computational biology resources.

    PubMed

    Dinov, Ivo D; Rubin, Daniel; Lorensen, William; Dugan, Jonathan; Ma, Jeff; Murphy, Shawn; Kirschner, Beth; Bug, William; Sherman, Michael; Floratos, Aris; Kennedy, David; Jagadish, H V; Schmidt, Jeanette; Athey, Brian; Califano, Andrea; Musen, Mark; Altman, Russ; Kikinis, Ron; Kohane, Isaac; Delp, Scott; Parker, D Stott; Toga, Arthur W

    2008-05-28

    The advancement of the computational biology field hinges on progress in three fundamental directions--the development of new computational algorithms, the availability of informatics resource management infrastructures and the capability of tools to interoperate and synergize. There is an explosion in algorithms and tools for computational biology, which makes it difficult for biologists to find, compare and integrate such resources. We describe a new infrastructure, iTools, for managing the query, traversal and comparison of diverse computational biology resources. Specifically, iTools stores information about three types of resources--data, software tools and web-services. The iTools design, implementation and resource meta-data content reflect the broad research, computational, applied and scientific expertise available at the seven National Centers for Biomedical Computing. iTools provides a system for classification, categorization and integration of different computational biology resources across space-and-time scales, biomedical problems, computational infrastructures and mathematical foundations. A large number of resources are already iTools-accessible to the community and this infrastructure is rapidly growing. iTools includes human and machine interfaces to its resource meta-data repository. Investigators or computer programs may utilize these interfaces to search, compare, expand, revise and mine meta-data descriptions of existent computational biology resources. We propose two ways to browse and display the iTools dynamic collection of resources. The first one is based on an ontology of computational biology resources, and the second one is derived from hyperbolic projections of manifolds or complex structures onto planar discs. iTools is an open source project both in terms of the source code development as well as its meta-data content. iTools employs a decentralized, portable, scalable and lightweight framework for long-term resource management. We demonstrate several applications of iTools as a framework for integrated bioinformatics. iTools and the complete details about its specifications, usage and interfaces are available at the iTools web page http://iTools.ccb.ucla.edu.

  6. Blast2GO goes grid: developing a grid-enabled prototype for functional genomics analysis.

    PubMed

    Aparicio, G; Götz, S; Conesa, A; Segrelles, D; Blanquer, I; García, J M; Hernandez, V; Robles, M; Talon, M

    2006-01-01

    The vast amount and complexity of data generated in genomic research imply that new, dedicated and powerful computational tools need to be developed to meet their analysis requirements. Blast2GO (B2G) is a bioinformatics tool for Gene Ontology-based DNA or protein sequence annotation and function-based data mining. The application has been developed with the aim of offering an easy-to-use tool for functional genomics research. Typical B2G users are middle-sized genomics labs carrying out sequencing, EST and microarray projects, handling datasets of up to several thousand sequences. In the current version of B2G, the power and analytical potential of both annotation and function data-mining are somewhat restricted by the computational power behind each particular installation. In order to offer an enhanced computational capacity within this bioinformatics application, a Grid component is being developed. A prototype has been conceived for the particular problem of speeding up the Blast searches to obtain fast results for large datasets. Many efforts have been made in the literature concerning the speeding up of Blast searches, but few of them deal with the use of large, heterogeneous production Grid infrastructures. These are the infrastructures that could reach the largest number of resources and the best load balancing for data access. The Grid Service under development will analyse requests based on the number of sequences, splitting them according to the available resources. Lower-level computation will be performed through MPIBLAST. The software architecture is based on the WSRF standard.
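
    In spirit, the request-splitting step could look like the following hypothetical sketch (not the actual Grid Service code): input sequences are divided into one chunk per available worker, sized in proportion to each worker's capacity, and each chunk then becomes a separate BLAST job.

      # Hypothetical sketch of splitting a sequence set across grid workers in
      # proportion to their capacity (e.g. CPU count). Names are illustrative.

      def split_sequences(sequences, worker_capacities):
          """Return one chunk of sequences per worker, sized by relative capacity."""
          total = sum(worker_capacities)
          chunks, start = [], 0
          for i, cap in enumerate(worker_capacities):
              # the last worker absorbs any rounding remainder
              end = len(sequences) if i == len(worker_capacities) - 1 \
                  else start + round(len(sequences) * cap / total)
              chunks.append(sequences[start:end])
              start = end
          return chunks

      seqs = [f"seq_{i}" for i in range(1000)]
      print([len(c) for c in split_sequences(seqs, [8, 4, 4])])  # [500, 250, 250]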

  7. A simulation of dementia epidemiology and resource use in Australia.

    PubMed

    Standfield, Lachlan B; Comans, Tracy; Scuffham, Paul

    2018-06-01

    The number of people in the developed world who have dementia is predicted to rise markedly. This study presents a validated predictive model to assist decision-makers to determine this population's future resource requirements and target scarce health and welfare resources appropriately. A novel individual patient discrete event simulation was developed to estimate the future prevalence of dementia and related health and welfare resource use in Australia. When compared to other published results, the simulation generated valid estimates of dementia prevalence and resource use. The analysis predicted 298,000, 387,000 and 928,000 persons in Australia will have dementia in 2011, 2020 and 2050, respectively. Health and welfare resource use increased markedly over the simulated time-horizon and was affected by capacity constraints. This simulation provides useful estimates of future demands on dementia-related services allowing the exploration of the effects of capacity constraints. Implications for public health: The model demonstrates that under-resourcing of residential aged care may lead to inappropriate and inefficient use of hospital resources. To avoid these capacity constraints it is predicted that the number of aged care beds for persons with dementia will need to increase more than threefold from 2011 to 2050. © 2017 The Authors.

  8. Biophysical constraints on the computational capacity of biochemical signaling networks

    NASA Astrophysics Data System (ADS)

    Wang, Ching-Hao; Mehta, Pankaj

    Biophysics fundamentally constrains the computations that cells can carry out. Here, we derive fundamental bounds on the computational capacity of biochemical signaling networks that utilize post-translational modifications (e.g. phosphorylation). To do so, we combine ideas from the statistical physics of disordered systems and the observation by Tony Pawson and others that the biochemistry underlying protein-protein interaction networks is combinatorial and modular. Our results indicate that the computational capacity of signaling networks is severely limited by the energetics of binding and the need to achieve specificity. We relate our results to one of the theoretical pillars of statistical learning theory, Cover's theorem, which places bounds on the computational capacity of perceptrons. PM and CHW were supported by a Simons Investigator in the Mathematical Modeling of Living Systems Grant, and NIH Grant No. 1R35GM119461 (both to PM).
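
    For context, the perceptron capacity bound referred to here is Cover's function-counting theorem: for P input patterns in general position in N dimensions, the number of dichotomies a linear threshold unit can realize is

      \[ C(P, N) = 2 \sum_{k=0}^{N-1} \binom{P-1}{k}, \]

    so that all \(2^P\) labelings are achievable only for \(P \le N\), and the capacity (the point at which half of all dichotomies remain realizable) occurs at \(P = 2N\). How tightly an analogous counting argument constrains post-translationally modified signaling networks is the subject of the work above, not a consequence of this formula alone.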

  9. Propagating Resource Constraints Using Mutual Exclusion Reasoning

    NASA Technical Reports Server (NTRS)

    Frank, Jeremy; Sanchez, Romeo; Do, Minh B.; Clancy, Daniel (Technical Monitor)

    2001-01-01

    One of the most recent techniques for propagating resource constraints in constraint-based scheduling is the Energy Constraint. This technique focuses on precedence-based scheduling, where precedence relations are taken into account rather than the absolute positions of activities. Although this particular technique has proved efficient on discrete unary resources, it provides only loose bounds for jobs using discrete multi-capacity resources. In this paper we show how mutual exclusion reasoning can be used to propagate time bounds for activities using discrete resources. We show, through both examples and an empirical study, that our technique based on critical path analysis and mutex reasoning is just as effective on unary resources and more effective on multi-capacity resources.

  10. Role of Working Memory in Children's Understanding Spoken Narrative: A Preliminary Investigation

    ERIC Educational Resources Information Center

    Montgomery, James W.; Polunenko, Anzhela; Marinellie, Sally A.

    2009-01-01

    The role of phonological short-term memory (PSTM), attentional resource capacity/allocation, and processing speed on children's spoken narrative comprehension was investigated. Sixty-seven children (6-11 years) completed a digit span task (PSTM), concurrent verbal processing and storage (CPS) task (resource capacity/allocation), auditory-visual…

  11. 18 CFR 284.262 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... natural gas supply or capacity; or (3) An anticipated loss of natural gas supply or capacity due to a... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Definitions. 284.262 Section 284.262 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT...

  12. 18 CFR 292.303 - Electric utility obligations under this subpart.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Electric utility obligations under this subpart. 292.303 Section 292.303 Conservation of Power and Water Resources FEDERAL... energy or capacity under this subpart as if the qualifying facility were supplying energy or capacity...

  13. Tribal Watershed Management: Culture, Science, Capacity, and Collaboration

    ERIC Educational Resources Information Center

    Cronin, Amanda; Ostergren, David M.

    2007-01-01

    This research focuses on two elements of contemporary American Indian natural resource management. First, the authors explore the capacity of tribes to manage natural resources, including the merging of traditional ecological knowledge (TEK) with Western science. Second, they analyze tribal management in the context of local and regional…

  14. 76 FR 41297 - Grant Program To Build Tribal Energy Development Capacity

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-13

    ... develop energy resources on Indian land and properly accounting for resulting energy resource production and revenues. We will use a competitive evaluation process based on criteria stated in the.... Determine what process(es) and/or procedure(s) may be used to eliminate capacity gaps or sustain the...

  15. 76 FR 45589 - Notice of Submission of Proposed Information Collection to OMB; Evaluation of the Department of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-29

    ... contribute their technical expertise, organizational capacity, and resources to local community development... the partners and additional funding used to support OUP-funded activities. The telephone interviews..., organizational capacity, and resources to local community development efforts. There has been no prior evaluation...

  16. 18 CFR 287.101 - Determination of powerplant design capacity.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 18 Conservation of Power and Water Resources 1 2011-04-01 2011-04-01 false Determination of powerplant design capacity. 287.101 Section 287.101 Conservation of Power and Water Resources FEDERAL ENERGY... generator's kilovolt-amperes nameplate rating and power factor nameplate rating. (b) Combustion turbine. The...

  17. 18 CFR 287.101 - Determination of powerplant design capacity.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 18 Conservation of Power and Water Resources 1 2014-04-01 2014-04-01 false Determination of powerplant design capacity. 287.101 Section 287.101 Conservation of Power and Water Resources FEDERAL ENERGY... generator's kilovolt-amperes nameplate rating and power factor nameplate rating. (b) Combustion turbine. The...

  18. 18 CFR 287.101 - Determination of powerplant design capacity.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 18 Conservation of Power and Water Resources 1 2012-04-01 2012-04-01 false Determination of powerplant design capacity. 287.101 Section 287.101 Conservation of Power and Water Resources FEDERAL ENERGY... generator's kilovolt-amperes nameplate rating and power factor nameplate rating. (b) Combustion turbine. The...

  19. 18 CFR 287.101 - Determination of powerplant design capacity.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Determination of powerplant design capacity. 287.101 Section 287.101 Conservation of Power and Water Resources FEDERAL ENERGY... generator's kilovolt-amperes nameplate rating and power factor nameplate rating. (b) Combustion turbine. The...

  20. 18 CFR 287.101 - Determination of powerplant design capacity.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 18 Conservation of Power and Water Resources 1 2013-04-01 2013-04-01 false Determination of powerplant design capacity. 287.101 Section 287.101 Conservation of Power and Water Resources FEDERAL ENERGY... generator's kilovolt-amperes nameplate rating and power factor nameplate rating. (b) Combustion turbine. The...

  1. Physically Based Virtual Surgery Planning and Simulation Tools for Personal Health Care Systems

    NASA Astrophysics Data System (ADS)

    Dogan, Firat; Atilgan, Yasemin

    Virtual surgery planning and simulation tools have gained a great deal of importance in the last decade as a consequence of increasing capacities in information technology. Modern hardware architectures, large-scale database systems, grid-based computer networks, agile development processes, better 3D visualization and all the other strong aspects of information technology bring the necessary instruments to almost every desk. The special software and sophisticated supercomputer environments of the last decade now serve individual needs inside "tiny smart boxes" at reasonable prices. However, resistance to learning new computerized environments, insufficient training and other old habits prevent effective utilization of IT resources by specialists in the health sector. In this paper, former and current developments in surgery planning and simulation related tools are presented, and future directions and expectations are investigated for better electronic health care systems.

  2. The Next Generation of Personal Computers.

    ERIC Educational Resources Information Center

    Crecine, John P.

    1986-01-01

    Discusses factors converging to create high-capacity, low-cost nature of next generation of microcomputers: a coherent vision of what graphics workstation and future computing environment should be like; hardware developments leading to greater storage capacity at lower costs; and development of software and expertise to exploit computing power…

  3. Provider-Independent Use of the Cloud

    NASA Astrophysics Data System (ADS)

    Harmer, Terence; Wright, Peter; Cunningham, Christina; Perrott, Ron

    Utility computing offers researchers and businesses the potential of significant cost savings, making it possible for them to match the cost of their computing and storage to their demand for such resources. A utility compute provider enables the purchase of compute infrastructure on demand: when a user requires computing resources, a provider will provision a resource for them and charge them only for their period of use of that resource. There has been significant growth in the number of cloud computing resource providers, and each has a different resource usage model, application process and application programming interface (API), so developing generic multi-provider applications is difficult and time consuming. We have developed an abstraction layer that provides a single resource usage model, user authentication model and API for compute providers, enabling cloud-provider-neutral applications to be developed. In this paper we outline the issues in using external resource providers, give examples of using a number of the most popular cloud providers and provide examples of developing provider-neutral applications. In addition, we discuss the development of the API to create a generic provisioning model based on a common architecture for cloud computing providers.
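
    A minimal sketch of what such a provider-neutral layer can look like (the class and method names below are ours, chosen for illustration, not the authors' API):

      # Hypothetical provider-neutral provisioning interface: each cloud provider
      # is wrapped behind the same small surface so applications never touch a
      # provider-specific API directly. Names and fields are illustrative.
      from abc import ABC, abstractmethod
      from dataclasses import dataclass

      @dataclass
      class ResourceSpec:
          cpus: int
          memory_gb: int
          image: str  # provider-neutral image identifier

      class ComputeProvider(ABC):
          @abstractmethod
          def provision(self, spec: ResourceSpec) -> str:
              """Start a resource matching spec; return an opaque resource id."""

          @abstractmethod
          def release(self, resource_id: str) -> None:
              """Stop the resource and end billing for it."""

      class FakeProviderA(ComputeProvider):
          def provision(self, spec):
              return f"provider-a:{spec.image}:{spec.cpus}cpu"
          def release(self, resource_id):
              print(f"released {resource_id}")

      def run_job(provider: ComputeProvider, spec: ResourceSpec):
          rid = provider.provision(spec)
          try:
              print(f"running on {rid}")  # application logic is provider-agnostic
          finally:
              provider.release(rid)

      run_job(FakeProviderA(), ResourceSpec(cpus=4, memory_gb=16, image="ubuntu-lts"))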

  4. Building Nationally-Focussed, Globally Federated, High Performance Earth Science Platforms to Solve Next Generation Social and Economic Issues.

    NASA Astrophysics Data System (ADS)

    Wyborn, Lesley; Evans, Ben; Foster, Clinton; Pugh, Timothy; Uhlherr, Alfred

    2015-04-01

    Digital geoscience data and information are integral to informing decisions on the social, economic and environmental management of natural resources. Traditionally, such decisions were focused on regional or national viewpoints only, but it is increasingly being recognised that global perspectives are required to meet new challenges such as predicting impacts of climate change; sustainably exploiting scarce water, mineral and energy resources; and protecting our communities through better prediction of the behaviour of natural hazards. In recent years, technical advances in scientific instruments have resulted in a surge in data volumes, with data now being collected at unprecedented rates and at ever increasing resolutions. The size of many earth science data sets now exceed the computational capacity of many government and academic organisations to locally store and dynamically access the data sets; to internally process and analyse them to high resolutions; and then to deliver them online to clients, partners and stakeholders. Fortunately, at the same time, computational capacities have commensurately increased (both cloud and HPC): these can now provide the capability to effectively access the ever-growing data assets within realistic time frames. However, to achieve this, data and computing need to be co-located: bandwidth limits the capacity to move the large data sets; the data transfers are too slow; and latencies to access them are too high. These scenarios are driving the move towards more centralised High Performance (HP) Infrastructures. The rapidly increasing scale of data, the growing complexity of software and hardware environments, combined with the energy costs of running such infrastructures is creating a compelling economic argument for just having one or two major national (or continental) HP facilities that can be federated internationally to enable earth and environmental issues to be tackled at global scales. But at the same time, if properly constructed, these infrastructures can also service very small-scale research projects. The National Computational Infrastructure (NCI) at the Australian National University (ANU) has built such an HP infrastructure as part of the Australian Government's National Collaborative Research Infrastructure Strategy. NCI operates as a formal partnership between the ANU and the three major Australian National Government Scientific Agencies: the Commonwealth Scientific and Industrial Research Organisation (CSIRO), the Bureau of Meteorology and Geoscience Australia. The government partners agreed to explore the new opportunities offered within the partnership with NCI, rather than each running their own separate agenda independently. The data from these national agencies, as well as from collaborating overseas organisations (e.g., NASA, NOAA, USGS, CMIP, etc.) are either replicated to, or produced at, NCI. By co-locating and harmonising these vast data collections within the integrated HP computing environments at NCI, new opportunities have arisen for Data-intensive Interdisciplinary Science at scales and resolutions not hitherto possible. The new NCI infrastructure has also enabled the blending of research by the university sector with the more operational business of government science agencies, with the fundamental shift being that researchers from both sectors work and collaborate within a federated data and computational environment that contains both national and international data collections.

  5. dV/dt - Accelerating the Rate of Progress towards Extreme Scale Collaborative Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Livny, Miron

    This report introduces publications that present the results of a project aimed at designing a computational framework that enables computational experimentation at scale while supporting the model of “submit locally, compute globally”. The project focuses on estimating application resource needs, finding the appropriate computing resources, acquiring those resources, deploying the applications and data on the resources, and managing applications and resources during execution.

  6. iTools: A Framework for Classification, Categorization and Integration of Computational Biology Resources

    PubMed Central

    Dinov, Ivo D.; Rubin, Daniel; Lorensen, William; Dugan, Jonathan; Ma, Jeff; Murphy, Shawn; Kirschner, Beth; Bug, William; Sherman, Michael; Floratos, Aris; Kennedy, David; Jagadish, H. V.; Schmidt, Jeanette; Athey, Brian; Califano, Andrea; Musen, Mark; Altman, Russ; Kikinis, Ron; Kohane, Isaac; Delp, Scott; Parker, D. Stott; Toga, Arthur W.

    2008-01-01

    The advancement of the computational biology field hinges on progress in three fundamental directions – the development of new computational algorithms, the availability of informatics resource management infrastructures and the capability of tools to interoperate and synergize. There is an explosion in algorithms and tools for computational biology, which makes it difficult for biologists to find, compare and integrate such resources. We describe a new infrastructure, iTools, for managing the query, traversal and comparison of diverse computational biology resources. Specifically, iTools stores information about three types of resources–data, software tools and web-services. The iTools design, implementation and resource meta-data content reflect the broad research, computational, applied and scientific expertise available at the seven National Centers for Biomedical Computing. iTools provides a system for classification, categorization and integration of different computational biology resources across space-and-time scales, biomedical problems, computational infrastructures and mathematical foundations. A large number of resources are already iTools-accessible to the community and this infrastructure is rapidly growing. iTools includes human and machine interfaces to its resource meta-data repository. Investigators or computer programs may utilize these interfaces to search, compare, expand, revise and mine meta-data descriptions of existent computational biology resources. We propose two ways to browse and display the iTools dynamic collection of resources. The first one is based on an ontology of computational biology resources, and the second one is derived from hyperbolic projections of manifolds or complex structures onto planar discs. iTools is an open source project both in terms of the source code development as well as its meta-data content. iTools employs a decentralized, portable, scalable and lightweight framework for long-term resource management. We demonstrate several applications of iTools as a framework for integrated bioinformatics. iTools and the complete details about its specifications, usage and interfaces are available at the iTools web page http://iTools.ccb.ucla.edu. PMID:18509477

  7. TethysCluster: A comprehensive approach for harnessing cloud resources for hydrologic modeling

    NASA Astrophysics Data System (ADS)

    Nelson, J.; Jones, N.; Ames, D. P.

    2015-12-01

    Advances in water resources modeling are improving the information that can be supplied to support decisions affecting the safety and sustainability of society. However, as water resources models become more sophisticated and data-intensive they require more computational power to run. Purchasing and maintaining the computing facilities needed to support certain modeling tasks has been cost-prohibitive for many organizations. With the advent of the cloud, the computing resources needed to address this challenge are now available and cost-effective, yet there still remains a significant technical barrier to leverage these resources. This barrier inhibits many decision makers and even trained engineers from taking advantage of the best science and tools available. Here we present the Python tools TethysCluster and CondorPy, that have been developed to lower the barrier to model computation in the cloud by providing (1) programmatic access to dynamically scalable computing resources, (2) a batch scheduling system to queue and dispatch the jobs to the computing resources, (3) data management for job inputs and outputs, and (4) the ability to dynamically create, submit, and monitor computing jobs. These Python tools leverage the open source, computing-resource management, and job management software, HTCondor, to offer a flexible and scalable distributed-computing environment. While TethysCluster and CondorPy can be used independently to provision computing resources and perform large modeling tasks, they have also been integrated into Tethys Platform, a development platform for water resources web apps, to enable computing support for modeling workflows and decision-support systems deployed as web apps.

  8. Reduced dopamine receptors and transporters but not synthesis capacity in normal aging adults: a meta-analysis.

    PubMed

    Karrer, Teresa M; Josef, Anika K; Mata, Rui; Morris, Evan D; Samanez-Larkin, Gregory R

    2017-09-01

    Many theories of cognitive aging are based on evidence that dopamine (DA) declines with age. Here, we performed a systematic meta-analysis of cross-sectional positron emission tomography and single-photon emission-computed tomography studies on the average effects of age on distinct DA targets (receptors, transporters, or relevant enzymes) in healthy adults (N = 95 studies including 2611 participants). Results revealed significant moderate to large, negative effects of age on DA transporters and receptors. Age had a significantly larger effect on D1- than D2-like receptors. In contrast, there was no significant effect of age on DA synthesis capacity. The average age reductions across the DA system were 3.7%-14.0% per decade. A meta-regression found only DA target as a significant moderator of the age effect. This study precisely quantifies prior claims of reduced DA functionality with age. It also identifies presynaptic mechanisms (spared synthesis capacity and reduced DA transporters) that may partially account for previously unexplained phenomena whereby older adults appear to use dopaminergic resources effectively. Recommendations for future studies, including minimum required sample sizes, are provided. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.

  9. Focused attention improves working memory: implications for flexible-resource and discrete-capacity models.

    PubMed

    Souza, Alessandra S; Rerko, Laura; Lin, Hsuan-Yu; Oberauer, Klaus

    2014-10-01

    Performance in working memory (WM) tasks depends on the capacity for storing objects and on the allocation of attention to these objects. Here, we explored how capacity models need to be augmented to account for the benefit of focusing attention on the target of recall. Participants encoded six colored disks (Experiment 1) or a set of one to eight colored disks (Experiment 2) and were cued to recall the color of a target on a color wheel. In the no-delay condition, the recall cue was presented after a 1,000-ms retention interval, and participants could report the retrieved color immediately. In the delay condition, the recall cue was presented at the same time as in the no-delay condition, but the opportunity to report the color was delayed. During this delay, participants could focus attention exclusively on the target. Responses deviated less from the target's color in the delay than in the no-delay condition. Mixture modeling assigned this benefit to a reduction in guessing (Experiments 1 and 2) and transposition errors (Experiment 2). We tested several computational models implementing flexible or discrete capacity allocation, aiming to explain both the effect of set size, reflecting the limited capacity of WM, and the effect of delay, reflecting the role of attention to WM representations. Both models fit the data better when a spatially graded source of transposition error is added to their assumptions. The benefits of focusing attention could be explained by allocating to this object a higher proportion of the capacity to represent color.
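
    The mixture modeling referred to here typically takes a three-component form, separating responses to the target, transposition (swap) errors to non-targets, and random guesses; the exact parameterization used in the study may differ, so the following is a generic sketch:

      \[ p(\hat{\theta}) = (1-\beta-g)\,\phi_{\kappa}(\hat{\theta}-\theta) \;+\; \frac{\beta}{m}\sum_{j=1}^{m}\phi_{\kappa}(\hat{\theta}-\theta_j) \;+\; \frac{g}{2\pi}, \]

    where \(\phi_{\kappa}\) is a von Mises density with concentration \(\kappa\), \(\theta\) is the target color, \(\theta_j\) are the \(m\) non-target colors, \(\beta\) is the swap (transposition) rate and \(g\) the guessing rate; the reported delay benefit then corresponds to reductions in \(g\) (and, in Experiment 2, \(\beta\)).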

  10. Hospital influenza pandemic stockpiling needs: A computer simulation.

    PubMed

    Abramovich, Mark N; Hershey, John C; Callies, Byron; Adalja, Amesh A; Tosh, Pritish K; Toner, Eric S

    2017-03-01

    A severe influenza pandemic could overwhelm hospitals, but planning guidance that accounts for the dynamic interrelationships between planning elements is lacking. We developed a methodology to calculate pandemic supply needs based on operational considerations in hospitals and then tested the methodology at Mayo Clinic in Rochester, MN. We upgraded a previously designed computer modeling tool and input carefully researched resource data from the hospital to run 10,000 Monte Carlo simulations using various combinations of variables to determine resource needs across a spectrum of scenarios. Of 10,000 iterations, 1,315 fell within the parameters defined by our simulation design and logical constraints. From these valid iterations, we projected requirements by percentile for key supplies, pharmaceuticals, and personal protective equipment needed in a severe pandemic. We projected supply needs for a range of scenarios that use up to 100% of Mayo Clinic-Rochester's surge capacity of beds and ventilators. The results indicate that there are diminishing patient care benefits for stockpiling on the high side of the range, but that having some stockpile of critical resources, even if it is relatively modest, is most important. We were able to display the probabilities of needing various supply levels across a spectrum of scenarios. The tool could be used to model many other hospital preparedness issues, but validation in other settings is needed. Copyright © 2017 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
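    The following toy sketch (not the authors' model) illustrates the general percentile-based Monte Carlo approach described above: draw random scenario parameters, keep only iterations that satisfy the logical constraints, and summarise supply needs by percentile. All parameter names, ranges and the catchment population are invented for illustration.

```python
import random
import statistics

def simulate_stockpile(n_iter=10_000, surge_beds=200):
    """Toy Monte Carlo sketch: estimate mask demand percentiles for a pandemic surge."""
    valid_demand = []
    for _ in range(n_iter):
        attack_rate = random.uniform(0.10, 0.35)       # fraction of population infected
        admission_rate = random.uniform(0.01, 0.05)    # fraction of cases admitted
        los_days = random.uniform(4, 12)               # hospital length of stay
        masks_per_bed_day = random.uniform(10, 25)
        admissions = 150_000 * attack_rate * admission_rate
        peak_census = admissions * los_days / 90       # crude 90-day wave
        if peak_census > surge_beds:                   # logical constraint: stay within surge capacity
            continue
        valid_demand.append(peak_census * 90 * masks_per_bed_day)
    qs = statistics.quantiles(valid_demand, n=100)     # percentile cut points
    return {'p50': qs[49], 'p90': qs[89], 'valid_iterations': len(valid_demand)}

print(simulate_stockpile())
```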

  11. Distribution of high-temperature (>150 °C) geothermal resources in California

    USGS Publications Warehouse

    Sass, John H.; Priest, Susan S.

    2002-01-01

    California contains, by far, the greatest geothermal generating capacity in the United States, and with the possible exception of Alaska, the greatest potential for the development of additional resources. California has nearly 2/3 of the US geothermal electrical installed capacity of over 3,000 MW. Depending on assumptions regarding reservoir characteristics and future market conditions, additional resources of between 2,000 and 10,000 MWe might be developed (see e.g., Muffler, 1979).

  12. Renewable Energy Deployment in Colorado and the West: Extended Policy Sensitivities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrows, Clayton P.; Stoll, Brady; Mooney, Meghan E.

    The Resource Planning Model is a capacity expansion model designed for a regional power system, such as a utility service territory, state, or balancing authority. We apply a geospatial analysis to Resource Planning Model renewable energy capacity expansion results to understand the likelihood of renewable development on various lands within Colorado.

  13. Building Human Resources Management Capacity for University Research: The Case at Four Leading Vietnamese Universities

    ERIC Educational Resources Information Center

    Nguyen, T. L.

    2016-01-01

    At research-intensive universities, building human resources management (HRM) capacity has become a key approach to enhancing a university's research performance. However, despite aspiring to become a research-intensive university, many teaching-intensive universities in developing countries may not have created effective research-promoted HRM…

  14. Research Capacity Building in Education: The Role of Digital Archives

    ERIC Educational Resources Information Center

    Carmichael, Patrick

    2011-01-01

    Accounts of how research capacity in education can be developed often make reference to electronic networks and online resources. This paper presents a theoretically driven analysis of the role of one such resource, an online archive of educational research studies that includes not only digitised collections of original documents but also videos…

  15. A Novel Optimal Joint Resource Allocation Method in Cooperative Multicarrier Networks: Theory and Practice

    PubMed Central

    Gao, Yuan; Zhou, Weigui; Ao, Hong; Chu, Jian; Zhou, Quan; Zhou, Bo; Wang, Kang; Li, Yi; Xue, Peng

    2016-01-01

    With the increasing demands for better transmission speed and robust quality of service (QoS), the capacity-constrained backhaul gradually becomes a bottleneck in cooperative wireless networks, e.g., in the Internet of Things (IoT) scenario in the joint processing mode of LTE-Advanced Pro. This paper focuses on resource allocation with capacity-constrained backhaul in uplink cooperative wireless networks, where two single-antenna base stations (BSs) serve multiple single-antenna users via a multi-carrier transmission mode. In this work, we propose a novel cooperative transmission scheme based on compress-and-forward with user pairing to solve the joint mixed-integer programming problem. To maximize the system capacity under the limited backhaul, we formulate the joint optimization problem of user sorting, subcarrier mapping and backhaul resource sharing among different pairs (subcarriers for users). A novel, robust and efficient centralized algorithm based on an alternating optimization strategy and perfect mapping is proposed. Simulations show that our method improves the system capacity significantly under the backhaul resource constraint compared with blind alternatives. PMID:27077865
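    The paper's full joint algorithm (user sorting, subcarrier mapping and backhaul sharing) is not reproduced here; the sketch below only illustrates the classical water-filling step that commonly appears inside such per-subcarrier power allocations, assuming a single total power budget and known channel gains.

```python
def water_filling(channel_gains, total_power, noise=1.0, iters=60):
    """Classical water-filling: allocate power across subcarriers to maximize sum log2(1 + p*g/noise)."""
    # Bracket the water level mu, then bisect until the power budget is met.
    lo, hi = 0.0, total_power + noise / min(channel_gains) + 1.0
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        powers = [max(mu - noise / g, 0.0) for g in channel_gains]
        if sum(powers) > total_power:
            hi = mu
        else:
            lo = mu
    return [max(lo - noise / g, 0.0) for g in channel_gains]

# Example: four subcarriers, unit noise, total power budget of 10
print(water_filling([0.9, 0.5, 1.8, 0.2], total_power=10.0))
```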

  16. Orbital Angular Momentum-Entanglement Frequency Transducer.

    PubMed

    Zhou, Zhi-Yuan; Liu, Shi-Long; Li, Yan; Ding, Dong-Sheng; Zhang, Wei; Shi, Shuai; Dong, Ming-Xin; Shi, Bao-Sen; Guo, Guang-Can

    2016-09-02

    Entanglement is a vital resource for realizing many tasks such as teleportation, secure key distribution, metrology, and quantum computations. To effectively build entanglement between different quantum systems and share information between them, a frequency transducer to convert between quantum states of different wavelengths while retaining its quantum features is indispensable. Information encoded in the photon's orbital angular momentum (OAM) degrees of freedom is preferred in harnessing the information-carrying capacity of a single photon because of its unlimited dimensions. A quantum transducer, which operates at wavelengths from 1558.3 to 525 nm for OAM qubits, OAM-polarization hybrid-entangled states, and OAM-entangled states, is reported for the first time. Nonclassical properties and entanglements are demonstrated following the conversion process by performing quantum tomography, interference, and Bell inequality measurements. Our results demonstrate the capability to create an entanglement link between different quantum systems operating in a photon's OAM degrees of freedom, which will be of great importance in building a high-capacity OAM quantum network.

  17. Using Amazon's Elastic Compute Cloud to dynamically scale CMS computational resources

    NASA Astrophysics Data System (ADS)

    Evans, D.; Fisk, I.; Holzman, B.; Melo, A.; Metson, S.; Pordes, R.; Sheldon, P.; Tiradani, A.

    2011-12-01

    Large international scientific collaborations such as the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider have traditionally addressed their data reduction and analysis needs by building and maintaining dedicated computational infrastructure. Emerging cloud computing services such as Amazon's Elastic Compute Cloud (EC2) offer short-term CPU and storage resources with costs based on usage. These services allow experiments to purchase computing resources as needed, without significant prior planning and without long-term investments in facilities and their management. We have demonstrated that services such as EC2 can successfully be integrated into the production-computing model of CMS, and find that they work very well as worker nodes. The cost structure and transient nature of EC2 services make them inappropriate for some CMS production services and functions. We also found that the resources are not truly "on-demand", as limits and caps on usage are imposed. Our trial workflows allow us to make a cost comparison between EC2 resources and dedicated CMS resources at a university, and conclude that it is most cost effective to purchase dedicated resources for the "base-line" needs of experiments such as CMS. However, if the ability to use cloud computing resources is built into an experiment's software framework before demand requires their use, cloud computing resources make sense for bursting during times when spikes in usage are required.
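    A minimal sketch of the 'bursting' idea using the boto3 client is given below. The AMI ID, instance type, region and tag values are placeholders; a real deployment would also configure networking, security groups, pilot/glidein software and possibly spot pricing.

```python
import boto3

def burst_worker_nodes(count, ami_id='ami-0123456789abcdef0', instance_type='c5.2xlarge'):
    """Launch transient EC2 worker nodes to absorb a spike in processing demand."""
    ec2 = boto3.client('ec2', region_name='us-east-1')
    response = ec2.run_instances(
        ImageId=ami_id,                 # placeholder AMI pre-loaded with the experiment software
        InstanceType=instance_type,
        MinCount=count,
        MaxCount=count,
        TagSpecifications=[{
            'ResourceType': 'instance',
            'Tags': [{'Key': 'role', 'Value': 'burst-worker'}],
        }],
    )
    return [inst['InstanceId'] for inst in response['Instances']]
```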

  18. Statistics Online Computational Resource for Education

    ERIC Educational Resources Information Center

    Dinov, Ivo D.; Christou, Nicolas

    2009-01-01

    The Statistics Online Computational Resource (http://www.SOCR.ucla.edu) provides one of the largest collections of free Internet-based resources for probability and statistics education. SOCR develops, validates and disseminates two core types of materials--instructional resources and computational libraries. (Contains 2 figures.)

  19. Talk the Walk: Does Socio-Cognitive Resource Reallocation Facilitate the Development of Walking?

    PubMed Central

    Orr, Edna

    2016-01-01

    Walking is of interest to psychology, robotics, zoology, neuroscience and medicine. Humans' ability to walk on two feet is considered to be one of the defining characteristics of hominoid evolution. Evolutionary science proposes that it emerged in response to limited environmental resources; yet the processes supporting its emergence are not fully understood. Developmental psychology research suggests that walking elicits cognitive advancements. We postulate that the relationship between cognitive development and walking is a bi-directional one; and further suggest that the initiation of novel capacities, such as walking, is related to internal socio-cognitive resource reallocation. We shed light on these notions by exploring infants' cognitive and socio-communicative outputs prospectively from 6–18 months of age. Structured bi/tri-weekly evaluations of symbolic and verbal development were employed in an urban cohort (N = 9) for 12 months, during the transition from crawling to walking. Results show links between preemptive cognitive changes in socio-communicative output, symbolic-cognitive tool-use processes, and the age of emergence of walking. Plots of use rates of lower symbolic play levels before and after emergence of new skills illustrate reductions in use of previously attained key behaviors prior to emergence of higher symbolic play, language and walking. Further, individual differences in age of walking initiation were strongly related to the degree of reductions in complexity of object-use (r = .832, p < .005), along with increases, counter to the general reduction trend, in skills that serve recruitment of external resources [socio-communication bids before speech (r = -.696, p < .01) and speech bids before walking (r = .729, p < .01)]. Integration of these proactive changes using a computational approach yielded an even stronger link, underscoring internal resource reallocation as a facilitator of walking initiation (r = .901, p < .001). These preliminary data suggest that representational capacities, symbolic object use, language and social developments form an integrated adaptable composite, which possibly enables proactive internal resource reallocation, designed to support the emergence of new developmental milestones, such as walking. PMID:27248834

  20. An Architecture for Cross-Cloud System Management

    NASA Astrophysics Data System (ADS)

    Dodda, Ravi Teja; Smith, Chris; van Moorsel, Aad

    The emergence of the cloud computing paradigm promises flexibility and adaptability through on-demand provisioning of compute resources. As the utilization of cloud resources extends beyond a single provider, for business as well as technical reasons, the issue of effectively managing such resources comes to the fore. Different providers expose different interfaces to their compute resources utilizing varied architectures and implementation technologies. This heterogeneity poses a significant system management problem, and can limit the extent to which the benefits of cross-cloud resource utilization can be realized. We address this problem through the definition of an architecture to facilitate the management of compute resources from different cloud providers in a homogeneous manner. This preserves the flexibility and adaptability promised by the cloud computing paradigm, whilst enabling the benefits of cross-cloud resource utilization to be realized. The practical efficacy of the architecture is demonstrated through an implementation utilizing compute resources managed through different interfaces on the Amazon Elastic Compute Cloud (EC2) service. Additionally, we provide empirical results highlighting the performance differential of these different interfaces, and discuss the impact of this performance differential on efficiency and profitability.
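    The paper's architecture is not reproduced here; the sketch below merely illustrates the underlying idea of hiding heterogeneous provider interfaces behind one homogeneous management API. The provider classes and method names are hypothetical.

```python
from abc import ABC, abstractmethod

class CloudProvider(ABC):
    """Homogeneous management interface over heterogeneous provider APIs."""

    @abstractmethod
    def start_instance(self, image: str, size: str) -> str: ...

    @abstractmethod
    def stop_instance(self, instance_id: str) -> None: ...

class SoapInterfaceProvider(CloudProvider):
    def start_instance(self, image, size):
        # call the provider's SOAP-style interface here (omitted)
        return 'i-soap-001'
    def stop_instance(self, instance_id):
        pass

class QueryInterfaceProvider(CloudProvider):
    def start_instance(self, image, size):
        # call the provider's REST/Query-style interface here (omitted)
        return 'i-query-001'
    def stop_instance(self, instance_id):
        pass

def provision(providers, image, size):
    """Provision one instance per provider through the common interface."""
    return {type(p).__name__: p.start_instance(image, size) for p in providers}

print(provision([SoapInterfaceProvider(), QueryInterfaceProvider()], 'worker-image', 'large'))
```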

  1. An adaptive grid algorithm for 3-D GIS landform optimization based on improved ant algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Chenhan; Meng, Lingkui; Deng, Shijun

    2005-07-01

    The key technique of 3-D GIS is fast, high-quality 3-D visualization, in which landform-based 3-D roaming systems play an important role. Increasing the efficiency of the 3-D roaming engine while processing large amounts of landform data is a central problem in such systems: handled improperly, it consumes enormous system resources. A core design issue is therefore how to achieve high-speed processing of distributed landform DEM (Digital Elevation Model) data and high-speed distributed scheduling of the various 3-D landform data resources. In this paper we improve the basic ant algorithm and design a scheduling strategy for 3-D GIS landform resources based on it. By introducing hypothetical path weights σi, determined by the 3-D computing capacity of the nodes in the network environment, the pheromone update of the original algorithm changes from Δτj to Δτj + σi. During the initial phase of task assignment, increasing the pheromone of resources with high task-completion rates and decreasing that of resources with low completion rates drives the completion rates toward a common value as quickly as possible; in the later phase of task assignment, the load-balancing ability of the system is further improved. Experimental results show that the improved ant algorithm not only removes several drawbacks of the traditional ant algorithm but also, like ants foraging for food, effectively distributes the complex landform computation across many computers to be processed cooperatively, yielding satisfactory search results.

  2. Quantum Computing: Selected Internet Resources for Librarians, Researchers, and the Casually Curious

    ERIC Educational Resources Information Center

    Cirasella, Jill

    2009-01-01

    This article presents an annotated selection of the most important and informative Internet resources for learning about quantum computing, finding quantum computing literature, and tracking quantum computing news. All of the quantum computing resources described in this article are freely available, English-language web sites that fall into one…

  3. Asynchrony of wind and hydropower resources in Australia.

    PubMed

    Gunturu, Udaya Bhaskar; Hallgren, Willow

    2017-08-18

    Wind and hydropower together constitute nearly 80% of the renewable capacity in Australia and their resources are collocated. We show that wind and hydro generation capacity factors covary negatively at the interannual time scales. Thus, the technology diversity mitigates the variability of renewable power generation at the interannual scales. The asynchrony of wind and hydropower resources is explained by the differential impact of the two modes of the El Niño-Southern Oscillation - canonical and Modoki - on the wind and hydro resources. Also, the Modoki El Niño and the Modoki La Niña phases have greater impact. The seasonal impact patterns corroborate these results. As the proportion of wind power increases in Australia's energy mix, this negative covariation has implications for storage capacity of excess wind generation at short time scales and for generation system adequacy at the longer time scales.

  4. The Efficiency of Increasing the Capacity of Physiotherapy Screening Clinics or Traditional Medical Services to Address Unmet Demand in Orthopaedic Outpatients: A Practical Application of Discrete Event Simulation with Dynamic Queuing.

    PubMed

    Standfield, L; Comans, T; Raymer, M; O'Leary, S; Moretto, N; Scuffham, P

    2016-08-01

    Hospital outpatient orthopaedic services traditionally rely on medical specialists to assess all new patients to determine appropriate care. This has resulted in significant delays in service provision. In response, Orthopaedic Physiotherapy Screening Clinics and Multidisciplinary Services (OPSC) have been introduced to assess and co-ordinate care for semi- and non-urgent patients. The aim was to compare the efficiency of delivering increased semi- and non-urgent orthopaedic outpatient services through: (1) additional OPSC services; (2) additional traditional orthopaedic medical services with added surgical resources (TOMS + Surg); or (3) additional TOMS without added surgical resources (TOMS - Surg). A cost-utility analysis using discrete event simulation (DES) with dynamic queuing (DQ) was used to predict the cost-effectiveness, throughput, queuing times, and resource utilisation associated with introducing additional OPSC or TOMS ± Surg versus usual care. The introduction of additional OPSC or TOMS (± surgery) would be considered cost-effective in Australia. However, OPSC was the most cost-effective option. Increasing the capacity of current OPSC services is an efficient way to improve patient throughput and waiting times without exceeding current surgical resources. An OPSC capacity increase of ~100 patients per month appears cost-effective (A$8546 per quality-adjusted life-year) and results in a high level of OPSC utilisation (98%). Increasing OPSC capacity to manage semi- and non-urgent patients would be cost-effective, improve throughput, and reduce waiting times without exceeding current surgical resources. Unlike Markov cohort modelling, microsimulation, or DES without DQ, employing DES-DQ in situations where capacity constraints predominate provides valuable additional information beyond cost-effectiveness to guide resource allocation decisions.
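    As a rough illustration of discrete event simulation with a dynamically forming queue (not the authors' OPSC model), the sketch below uses the simpy library to model patients arriving at a screening clinic with a fixed number of slots; the arrival and service rates are invented.

```python
import random
import simpy

WAITS = []

def patient(env, clinic, service_mean):
    """One patient: join the queue, wait for a slot, then occupy it for a random service time."""
    arrival = env.now
    with clinic.request() as slot:
        yield slot
        WAITS.append(env.now - arrival)              # record time spent queuing
        yield env.timeout(random.expovariate(1.0 / service_mean))

def arrivals(env, clinic, interarrival_mean, service_mean):
    """Generate patients with exponentially distributed inter-arrival times."""
    while True:
        yield env.timeout(random.expovariate(1.0 / interarrival_mean))
        env.process(patient(env, clinic, service_mean))

env = simpy.Environment()
clinic = simpy.Resource(env, capacity=3)             # e.g., three screening slots
env.process(arrivals(env, clinic, interarrival_mean=2.0, service_mean=5.0))
env.run(until=2000)
print('mean wait:', sum(WAITS) / len(WAITS))
```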

  5. 8760-Based Method for Representing Variable Generation Capacity Value in Capacity Expansion Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frew, Bethany A

    Capacity expansion models (CEMs) are widely used to evaluate the least-cost portfolio of electricity generators, transmission, and storage needed to reliably serve load over many years or decades. CEMs can be computationally complex and are often forced to estimate key parameters using simplified methods to achieve acceptable solve times or for other reasons. In this paper, we discuss one of these parameters -- capacity value (CV). We first provide a high-level motivation for and overview of CV. We next describe existing modeling simplifications and an alternate approach for estimating CV that utilizes hourly '8760' data of load and variable generation (VG) resources. We then apply this 8760 method to an established CEM, the National Renewable Energy Laboratory's (NREL's) Regional Energy Deployment System (ReEDS) model (Eurek et al. 2016). While this alternative approach for CV is not itself novel, it contributes to the broader CEM community by (1) demonstrating how a simplified 8760 hourly method, which can be easily implemented in other power sector models when data are available, more accurately captures CV trends than a statistical method within the ReEDS CEM, and (2) providing a flexible modeling framework from which other 8760-based system elements (e.g., demand response, storage, and transmission) can be added to further capture important dynamic interactions, such as curtailment.
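    One common 8760-based convention approximates capacity value as the average VG output, per MW of installed capacity, during the highest-load hours of the year. The sketch below assumes this convention, a 100-hour window, and plain load rather than net load; none of these choices is taken from the paper.

```python
def capacity_value(load, vg_generation, vg_capacity_mw, top_hours=100):
    """Approximate VG capacity value as average output (per MW installed) in the highest-load hours."""
    assert len(load) == len(vg_generation) == 8760
    # Identify the hours with the highest system load.
    ranked = sorted(range(8760), key=lambda h: load[h], reverse=True)[:top_hours]
    avg_output = sum(vg_generation[h] for h in ranked) / top_hours
    # Fraction of nameplate capacity counted toward resource adequacy.
    return avg_output / vg_capacity_mw
```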

  6. Significantly reducing the processing times of high-speed photometry data sets using a distributed computing model

    NASA Astrophysics Data System (ADS)

    Doyle, Paul; Mtenzi, Fred; Smith, Niall; Collins, Adrian; O'Shea, Brendan

    2012-09-01

    The scientific community is in the midst of a data analysis crisis. The increasing capacity of scientific CCD instrumentation and their falling costs are contributing to an explosive generation of raw photometric data. These data must go through a process of cleaning and reduction before they can be used for high-precision photometric analysis. Many existing data processing pipelines either assume a relatively small dataset or are batch processed by a High Performance Computing centre. A radical overhaul of these processing pipelines is required to allow reduction and cleaning rates to process terabyte-sized datasets at near capture rates using an elastic processing architecture. The ability to access computing resources and to allow them to grow and shrink as demand fluctuates is essential, as is exploiting the parallel nature of the datasets. A distributed data processing pipeline is required. It should incorporate lossless data compression, allow for data segmentation and support processing of data segments in parallel. Academic institutes can collaborate and provide an elastic computing model without the requirement for large centralized high-performance computing data centres. This paper demonstrates how an order-of-magnitude (roughly tenfold) improvement in overall processing time has been achieved using the "ACN pipeline", a distributed pipeline spanning multiple academic institutes.
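    A minimal sketch of the segment-and-process-in-parallel idea is shown below using Python multiprocessing; the reduce_frame placeholder stands in for the actual cleaning and reduction steps, and the file naming is invented.

```python
from multiprocessing import Pool

def reduce_frame(path):
    """Placeholder for the cleaning/reduction of one CCD frame segment."""
    # bias subtraction, flat fielding, photometry, ... (omitted)
    return path, 'reduced'

def reduce_dataset(frame_paths, workers=8):
    """Process independent frame segments in parallel; each segment is embarrassingly parallel."""
    with Pool(processes=workers) as pool:
        return dict(pool.map(reduce_frame, frame_paths))

if __name__ == '__main__':
    frames = ['frame_{:05d}.fits'.format(i) for i in range(1000)]
    results = reduce_dataset(frames)
    print(len(results), 'frames reduced')
```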

  7. Mutual research capacity strengthening: a qualitative study of two-way partnerships in public health research.

    PubMed

    Redman-MacLaren, Michelle; MacLaren, David J; Harrington, Humpress; Asugeni, Rowena; Timothy-Harrington, Relmah; Kekeubata, Esau; Speare, Richard

    2012-12-18

    Capacity building has been employed in international health and development sectors to describe the process of 'experts' from more resourced countries training people in less resourced countries. Hence the concept has an implicit power imbalance based on 'expert' knowledge. In 2011, a health research strengthening workshop was undertaken at Atoifi Adventist Hospital, Solomon Islands to further strengthen research skills of the Hospital and College of Nursing staff and East Kwaio community leaders through partnering in practical research projects. The workshop was based on participatory research frameworks underpinned by decolonising methodologies, which sought to challenge historical power imbalances and inequities. Our research question was, "Is research capacity strengthening a two-way process?" In this qualitative study, five Solomon Islanders and five Australians each responded to four open-ended questions about their experience of the research capacity strengthening workshop and activities: five chose face to face interview, five chose to provide written responses. Written responses and interview transcripts were inductively analysed in NVivo 9. Six major themes emerged. These were: Respectful relationships; Increased knowledge and experience with research process; Participation at all stages in the research process; Contribution to public health action; Support and sustain research opportunities; and Managing challenges of capacity strengthening. All researchers identified benefits for themselves, their institution and/or community, regardless of their role or country of origin, indicating that the capacity strengthening had been a two-way process. The flexible and responsive process we used to strengthen research capacity was identified as mutually beneficial. Using community-based participatory frameworks underpinned by decolonising methodologies is assisting to redress historical power imbalances and inequities and is helping to sustain the initial steps taken to establish a local research agenda at Atoifi Hospital. It is our experience that embedding mutuality throughout the research capacity strengthening process has had great benefit and may also benefit researchers from more resourced and less resourced countries wanting to partner in research capacity strengthening activities.

  8. A theoretical analysis of the electromagnetic environment of the AS330 super Puma helicopter external and internal coupling

    NASA Technical Reports Server (NTRS)

    Flourens, F.; Morel, T.; Gauthier, D.; Serafin, D.

    1991-01-01

    Numerical techniques such as Finite Difference Time Domain (FDTD) computer programs, which were first developed to analyze the external electromagnetic environment of an aircraft during a wave illumination, a lightning event, or any kind of current injection, are now very powerful investigative tools. The program GORFF-VE was extended to compute the inner electromagnetic fields that are generated by the penetration of the outer fields through large apertures in the all-metallic body. The internal fields can then drive the electrical response of a cable network. The coupling between the inside and the outside of the helicopter is implemented using Huygens' principle. Moreover, the spectacular increase in computer resources, such as calculation speed and memory capacity, allows structures as complex as those of helicopters to be modelled accurately. This numerical model was exploited, first, to analyze the electromagnetic environment of an in-flight helicopter for several injection configurations and, second, to design a coaxial return path to simulate the lightning-aircraft interaction with a strong current injection. The E-field and current mappings are the result of these calculations.

  9. Large-Scale NASA Science Applications on the Columbia Supercluster

    NASA Technical Reports Server (NTRS)

    Brooks, Walter

    2005-01-01

    Columbia, NASA's newest 61-teraflops supercomputer that became operational late last year, is a highly integrated Altix cluster of 10,240 processors, and was named to honor the crew of the Space Shuttle Columbia lost in early 2003. Constructed in just four months, Columbia increased NASA's computing capability ten-fold, and revitalized the Agency's high-end computing efforts. Significant cutting-edge science and engineering simulations in the areas of space and Earth sciences, as well as aeronautics and space operations, are already occurring on this largest operational Linux supercomputer, demonstrating its capacity and capability to accelerate NASA's space exploration vision. The presentation will describe how an integrated environment consisting not only of next-generation systems, but also modeling and simulation, high-speed networking, parallel performance optimization, and advanced data analysis and visualization, is being used to reduce design cycle time, accelerate scientific discovery, conduct parametric analysis of multiple scenarios, and enhance safety during the life cycle of NASA missions. The talk will conclude by discussing how NAS partnered with various NASA centers, other government agencies, the computer industry, and academia to create a national resource in large-scale modeling and simulation.

  10. A Comprehensive and Cost-Effective Computer Infrastructure for K-12 Schools

    NASA Technical Reports Server (NTRS)

    Warren, G. P.; Seaton, J. M.

    1996-01-01

    Since 1993, NASA Langley Research Center has been developing and implementing a low-cost Internet connection model, including system architecture, training, and support, to provide Internet access for an entire network of computers. This infrastructure allows local area networks which exceed 50 machines per school to independently access the complete functionality of the Internet by connecting to a central site, using state-of-the-art commercial modem technology, through a single standard telephone line. By locating high-cost resources at this central site and sharing these resources and their costs among the school districts throughout a region, a practical, efficient, and affordable infrastructure for providing scalable Internet connectivity has been developed. As the demand for faster Internet access grows, the model has a simple expansion path that eliminates the need to replace major system components and re-train personnel. Observations of Internet usage within an environment, particularly school classrooms, have shown that after an initial period of 'surfing,' the Internet traffic becomes repetitive. By automatically storing requested Internet information on a high-capacity networked disk drive at the local site (network-based disk caching), then updating this information only when it changes, well over 80 percent of the Internet traffic that leaves a location can be eliminated by retrieving the information from the local disk cache.

  11. 30 CFR 75.1401 - Hoists; rated capacities; indicators.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    Hoists shall have rated capacities consistent with the loads handled. An accurate and reliable indicator of the position of the cage, platform, skip, bucket, or cars...

  12. 30 CFR 75.1401 - Hoists; rated capacities; indicators.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Hoists shall have rated capacities consistent with the loads handled. An accurate and reliable indicator of the position of the cage, platform, skip, bucket, or cars...

  13. The Capacity for Music: What Is It, and What's Special about It?

    ERIC Educational Resources Information Center

    Jackendoff, Ray; Lerdahl, Fred

    2006-01-01

    We explore the capacity for music in terms of five questions: (1) What cognitive structures are invoked by music? (2) What are the principles that create these structures? (3) How do listeners acquire these principles? (4) What pre-existing resources make such acquisition possible? (5) Which aspects of these resources are specific to music, and…

  14. Carrying capacity as "informed judgment": The values of science and the science of values

    Treesearch

    Robert E. Manning

    2001-01-01

    Contemporary carrying capacity frameworks, such as Limits of Acceptable Change and Visitor Experience and Resource Protection, rely on formulation of standards of quality, which are defined as minimum acceptable resource and social conditions in parks and wilderness. Formulation of standards of quality involves elements of both science and values, and both of these...

  15. Processing Capacity under Perceptual and Cognitive Load: A Closer Look at Load Theory

    ERIC Educational Resources Information Center

    Fitousi, Daniel; Wenger, Michael J.

    2011-01-01

    Variations in perceptual and cognitive demands (load) play a major role in determining the efficiency of selective attention. According to load theory (Lavie, Hirst, Fockert, & Viding, 2004) these factors (a) improve or hamper selectivity by altering the way resources (e.g., processing capacity) are allocated, and (b) tap resources rather than…

  16. Renewable Energy Deployment in Colorado and the West: A Modeling Sensitivity and GIS Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrows, Clayton; Mai, Trieu; Haase, Scott

    2016-03-01

    The Resource Planning Model is a capacity expansion model designed for a regional power system, such as a utility service territory, state, or balancing authority. We apply a geospatial analysis to Resource Planning Model renewable energy capacity expansion results to understand the likelihood of renewable development on various lands within Colorado.

  17. 30 CFR 75.1107-7 - Water spray devices; capacity; water supply; minimum requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    Section 75.1107-7, Mineral Resources; Mine Safety and Health Administration, Department of Labor; Coal Mine Safety and Health Mandatory Safety Standards, Underground Coal Mines; Fire Protection; Fire Suppression Devices and...

  18. 30 CFR 75.1107-7 - Water spray devices; capacity; water supply; minimum requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    Section 75.1107-7, Mineral Resources; Mine Safety and Health Administration, Department of Labor; Coal Mine Safety and Health Mandatory Safety Standards, Underground Coal Mines; Fire Protection; Fire Suppression Devices and...

  19. 30 CFR 75.1107-7 - Water spray devices; capacity; water supply; minimum requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    Section 75.1107-7, Mineral Resources; Mine Safety and Health Administration, Department of Labor; Coal Mine Safety and Health Mandatory Safety Standards, Underground Coal Mines; Fire Protection; Fire Suppression Devices and...

  20. 30 CFR 75.1107-7 - Water spray devices; capacity; water supply; minimum requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    Section 75.1107-7, Mineral Resources; Mine Safety and Health Administration, Department of Labor; Coal Mine Safety and Health Mandatory Safety Standards, Underground Coal Mines; Fire Protection; Fire Suppression Devices and...

  1. Carrying capacity of water resources in Bandung Basin

    NASA Astrophysics Data System (ADS)

    Marganingrum, D.

    2018-02-01

    The concept of carrying capacity is widely used in various sectors as a management tool for sustainable development processes. The idea has also been applied at the watershed or basin scale. The Bandung Basin is the upstream portion of the Citarum watershed, known as one of the national strategic areas. The area has developed into a metropolitan region burdened with various environmental problems, so research related to its environmental carrying capacity has become a strategic issue. However, previous carrying-capacity research in the area has been partial, treating water balance, land suitability, ecological footprint, or the balance of resource supply and demand in isolation. This paper describes the application of an integrated concept of environmental carrying capacity to cope with increasingly complex and dynamic environmental problems, with water resources as the focus. The approach combines the concept of maximum balance with system dynamics, where the proposed dynamics couple ecology and population, which cannot be separated from one another as parts of the unified Bandung Basin ecosystem.
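    A toy sketch of the kind of coupled system-dynamics reasoning described above is given below: annual water demand is compared with a fixed renewable supply, and population growth is damped as water stress rises. All coefficients and the feedback rule are invented for illustration and are not taken from the study.

```python
def simulate_water_carrying_capacity(years=30, population=8.5e6, growth_rate=0.02,
                                     supply_m3_per_yr=2.0e9, demand_m3_per_capita=120.0):
    """Toy system-dynamics sketch: compare annual water demand with renewable supply."""
    trajectory = []
    for year in range(years):
        demand = population * demand_m3_per_capita
        stress = demand / supply_m3_per_yr          # >1 means demand exceeds the carrying capacity
        trajectory.append((year, round(population), round(stress, 2)))
        # Simple feedback: population growth slows as water stress approaches 1.
        population *= 1.0 + growth_rate * max(0.0, 1.0 - stress)
    return trajectory

for year, pop, stress in simulate_water_carrying_capacity():
    print(year, pop, stress)
```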

  2. An Investigation of the Relationship between College Chinese EFL Students' Autonomous Learning Capacity and Motivation in Using Computer-Assisted Language Learning

    ERIC Educational Resources Information Center

    Pu, Minran

    2009-01-01

    The purpose of the study was to investigate the relationship between college EFL students' autonomous learning capacity and motivation in using web-based Computer-Assisted Language Learning (CALL) in China. This study included three questionnaires: the student background questionnaire, the questionnaire on student autonomous learning capacity, and…

  3. The impact of individual factors on healthcare staff's computer use in psychiatric hospitals.

    PubMed

    Koivunen, Marita; Välimäki, Maritta; Koskinen, Anita; Staggers, Nancy; Katajisto, Jouko

    2009-04-01

    The study examines whether individual factors of healthcare staff are associated with computer use in psychiatric hospitals. In addition, factors inhibiting staff's optimal use of computers were explored. Computer applications have developed the content of clinical practice and changed patterns of professional working. Healthcare staff need new capacities to work in clinical practice, including basic computer skills. Computer use amongst healthcare staff has been widely studied in general, but cogent information is still lacking in psychiatric care. Staff's computer use was assessed using a structured questionnaire (the Staggers Nursing Computer Experience Questionnaire). The study population was healthcare staff working in two psychiatric hospitals in Finland (n = 470, response rate = 59%). The data were analysed with descriptive statistics and MANOVA with main effects and two-way interaction effects of six individual factors. Nurses who had more experience of computer use or of the implementation processes of computer systems were more motivated to use computers than those who had less experience of these issues. Males and younger administrative personnel had also participated more often than women in the implementation processes of computer systems. The most significant factor inhibiting the use of computers was lack of interest in them. In psychiatric hospitals, more direct attention should focus on staff's capacities to use computers and to increase their understanding of the benefits in clinical care, especially for women and ageing staff working in psychiatric hospitals. To avoid exclusion amongst healthcare personnel in the information society and to ensure that they have the capacities to guide patients on how to use computers or to evaluate the quality of health information on the web, staff's capacities and motivation to use computers in mental health and psychiatric nursing should be ensured.

  4. From photons to big-data applications: terminating terabits

    PubMed Central

    2016-01-01

    Computer architectures have entered a watershed as the quantity of network data generated by user applications exceeds the data-processing capacity of any individual computer end-system. It will become impossible to scale existing computer systems while a gap grows between the quantity of networked data and the capacity for per system data processing. Despite this, the growth in demand in both task variety and task complexity continues unabated. Networked computer systems provide a fertile environment in which new applications develop. As networked computer systems become akin to infrastructure, any limitation upon the growth in capacity and capabilities becomes an important constraint of concern to all computer users. Considering a networked computer system capable of processing terabits per second, as a benchmark for scalability, we critique the state of the art in commodity computing, and propose a wholesale reconsideration in the design of computer architectures and their attendant ecosystem. Our proposal seeks to reduce costs, save power and increase performance in a multi-scale approach that has potential application from nanoscale to data-centre-scale computers. PMID:26809573

  5. From photons to big-data applications: terminating terabits.

    PubMed

    Zilberman, Noa; Moore, Andrew W; Crowcroft, Jon A

    2016-03-06

    Computer architectures have entered a watershed as the quantity of network data generated by user applications exceeds the data-processing capacity of any individual computer end-system. It will become impossible to scale existing computer systems while a gap grows between the quantity of networked data and the capacity for per system data processing. Despite this, the growth in demand in both task variety and task complexity continues unabated. Networked computer systems provide a fertile environment in which new applications develop. As networked computer systems become akin to infrastructure, any limitation upon the growth in capacity and capabilities becomes an important constraint of concern to all computer users. Considering a networked computer system capable of processing terabits per second, as a benchmark for scalability, we critique the state of the art in commodity computing, and propose a wholesale reconsideration in the design of computer architectures and their attendant ecosystem. Our proposal seeks to reduce costs, save power and increase performance in a multi-scale approach that has potential application from nanoscale to data-centre-scale computers. © 2016 The Authors.

  6. Open Source GIS based integrated watershed management

    NASA Astrophysics Data System (ADS)

    Byrne, J. M.; Lindsay, J.; Berg, A. A.

    2013-12-01

    Optimal land and water management to address future and current resource stresses and allocation challenges requires the development of state-of-the-art geomatics and hydrological modelling tools. Future hydrological modelling tools should be high-resolution and process-based, with real-time capability to assess changing resource issues critical to short-, medium- and long-term environmental management. The objective here is to merge two renowned, well-published resource modelling programs to create an open-source toolbox for integrated land and water management applications. This work will facilitate a much increased efficiency in land and water resource security, management and planning. Following an 'open-source' philosophy, the tools will be computer-platform independent with source code freely available, maximizing knowledge transfer and the global value of the proposed research. The envisioned set of water resource management tools will be housed within 'Whitebox Geospatial Analysis Tools'. Whitebox is an open-source geographical information system (GIS) developed by Dr. John Lindsay at the University of Guelph. The emphasis of the Whitebox project has been to develop a user-friendly interface for advanced spatial analysis in environmental applications. The plugin architecture of the software is ideal for the tight integration of spatially distributed models and spatial analysis algorithms such as those contained within the GENESYS suite. Open-source development extends knowledge and technology transfer to a broad range of end-users and builds Canadian capability to address complex resource management problems with better tools and expertise for managers in Canada and around the world. GENESYS (Generate Earth Systems Science input) is an innovative, efficient, high-resolution hydro- and agro-meteorological model for complex-terrain watersheds developed under the direction of Dr. James Byrne. GENESYS is an outstanding research and applications tool to address challenging resource management issues in industry, government and non-governmental agencies. Current research and analysis tools were developed to manage meteorological, climatological, and land and water resource data efficiently at high resolution in space and time. The deliverable for this work is a Whitebox-GENESYS open-source resource management capacity with routines for GIS-based watershed management, including water in agriculture and food production. We are adding urban water management routines through GENESYS in 2013-15 with an engineering PhD candidate. Both Whitebox-GAT and GENESYS are already well-established tools. The proposed research will combine these products to create an open-source, geomatics-based water resource management tool that is revolutionary in both capacity and availability to a wide array of Canadian and global users.

  7. 30 CFR 77.503 - Electric conductors; capacity and insulation.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Electric conductors; capacity and insulation... UNDERGROUND COAL MINES Electrical Equipment-General § 77.503 Electric conductors; capacity and insulation. Electric conductors shall be sufficient in size and have adequate current carrying capacity and be of such...

  8. 30 CFR 77.503 - Electric conductors; capacity and insulation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Electric conductors; capacity and insulation... UNDERGROUND COAL MINES Electrical Equipment-General § 77.503 Electric conductors; capacity and insulation. Electric conductors shall be sufficient in size and have adequate current carrying capacity and be of such...

  9. Nomadic migration : a service environment for autonomic computing on the Grid

    NASA Astrophysics Data System (ADS)

    Lanfermann, Gerd

    2003-06-01

    In recent years, there has been a dramatic increase in available compute capacities. However, these “Grid resources” are rarely accessible in a continuous stream, but rather appear scattered across various machine types, platforms and operating systems, which are coupled by networks of fluctuating bandwidth. It becomes increasingly difficult for scientists to exploit available resources for their applications. We believe that intelligent, self-governing applications should be able to select resources in a dynamic and heterogeneous environment: Migrating applications determine a resource when old capacities are used up. Spawning simulations launch algorithms on external machines to speed up the main execution. Applications are restarted as soon as a failure is detected. All these actions can be taken without human interaction. A distributed compute environment possesses an intrinsic unreliability. Any application that interacts with such an environment must be able to cope with its failing components: deteriorating networks, crashing machines, failing software. We construct a reliable service infrastructure by endowing a service environment with a peer-to-peer topology. This “Grid Peer Services” infrastructure accommodates high-level services like migration and spawning, as well as fundamental services for application launching, file transfer and resource selection. It utilizes existing Grid technology wherever possible to accomplish its tasks. An Application Information Server acts as a generic information registry to all participants in a service environment. The service environment that we developed allows applications, for example, to send a relocation request to a migration server. The server selects a new computer based on the transmitted resource requirements. It transfers the application's checkpoint and binary to the new host and resumes the simulation. Although the Grid's underlying resource substrate is not continuous, we achieve persistent computations on Grids by relocating the application. We show with real-world examples that a traditional genome analysis program can be easily modified to perform self-determined migrations in this service environment.

  10. Error driven remeshing strategy in an elastic-plastic shakedown problem

    NASA Astrophysics Data System (ADS)

    Pazdanowski, Michał J.

    2018-01-01

    A shakedown-based approach has for many years been successfully used to calculate the distributions of residual stresses in bodies made of elastic-plastic materials and subjected to cyclic loads exceeding their bearing capacity. The calculations performed indicated the existence of zones in the sought stress field characterized by extremely high gradients and rapid sign changes over small regions. In order to resolve these sign changes, relatively dense nodal meshes had to be used in disproportionately large parts of the considered bodies, resulting in unnecessary expenditure of computer resources. Therefore, an effort was undertaken to limit the areas of high mesh density and to drive the mesh-regeneration algorithm with selected error indicators.

  11. Pheromone Static Routing Strategy for Complex Networks

    NASA Astrophysics Data System (ADS)

    Hu, Mao-Bin; Henry, Y. K. Lau; Ling, Xiang; Jiang, Rui

    2012-12-01

    We adopt the concept of using pheromones to generate a set of static paths that can reach the performance of the global dynamic routing strategy [Phys. Rev. E 81 (2010) 016113]. The path-generation method consists of two stages. In the first stage, pheromone is deposited on the nodes by packets forwarded according to the global dynamic routing strategy. In the second stage, static paths are generated according to the pheromone density. The resulting paths can greatly improve the overall capacity of traffic systems on different network structures, including scale-free networks, small-world networks and random graphs. Because the paths are static, the system requires far fewer computational resources than the global dynamic routing strategy.
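    The sketch below illustrates a simplified two-stage variant of this idea using networkx: packets routed under a reference strategy deposit pheromone on the nodes they visit, and static paths are then computed with edge weights derived from that pheromone. For brevity the reference routing is plain shortest-path and the derived paths avoid high-pheromone (busy) nodes, which is a load-spreading simplification rather than a reproduction of the paper's method.

```python
import random
import networkx as nx

def pheromone_static_paths(G, n_samples=2000):
    """Stage 1: deposit pheromone on nodes visited by sampled routed packets.
       Stage 2: build static paths whose edge weights penalise high-pheromone (busy) nodes."""
    pheromone = {v: 1.0 for v in G}
    nodes = list(G)
    for _ in range(n_samples):
        s, t = random.sample(nodes, 2)
        for v in nx.shortest_path(G, s, t):          # stand-in for the dynamic reference routing
            pheromone[v] += 1.0
    # Edge weight grows with the pheromone of its endpoints, steering paths around hotspots.
    for u, v in G.edges():
        G[u][v]['weight'] = pheromone[u] + pheromone[v]
    return dict(nx.all_pairs_dijkstra_path(G, weight='weight'))

paths = pheromone_static_paths(nx.barabasi_albert_graph(200, 3))
print(len(paths), 'sources with precomputed static paths')
```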

  12. Cost of wind energy: comparing distant wind resources to local resources in the midwestern United States.

    PubMed

    Hoppock, David C; Patiño-Echeverri, Dalia

    2010-11-15

    The best wind sites in the United States are often located far from electricity demand centers and lack transmission access. Local sites that have lower-quality wind resources but do not require as much power transmission capacity are an alternative to distant wind resources. In this paper, we explore the trade-offs between developing new wind generation at local sites and installing wind farms at remote sites. We first examine the general relationship between the high capital costs required for local wind development and the relatively lower capital costs required to install a wind farm capable of generating the same electrical output at a remote site, with the results representing the maximum amount an investor should be willing to pay for transmission access. We suggest that this analysis can be used as a first step in comparing potential wind resources to meet a state renewable portfolio standard (RPS). To illustrate, we compare the cost of local wind (∼50 km from the load) to the cost of distant wind requiring new transmission (∼550-750 km from the load) to meet the Illinois RPS. We find that local, lower-capacity-factor wind sites are the lowest-cost option for meeting the Illinois RPS if new long-distance transmission is required to access distant, higher-capacity-factor wind resources. If higher-capacity wind sites can be connected to the existing grid at minimal cost, in many cases they will have lower costs.
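    A back-of-the-envelope sketch of the trade-off described above: size the local and remote builds to deliver the same annual energy, then treat the capital saving at the higher-capacity-factor site as the maximum justifiable spend on transmission. The prices and capacity factors below are illustrative, not the paper's values.

```python
def breakeven_transmission_cost(annual_energy_mwh,
                                cf_local=0.30, cf_remote=0.42,
                                capex_per_mw=1.8e6):
    """Capacity needed at each site for equal annual energy, and the capital saving at the remote site."""
    mw_local = annual_energy_mwh / (8760 * cf_local)    # more MW needed at the poorer site
    mw_remote = annual_energy_mwh / (8760 * cf_remote)
    capital_saving = (mw_local - mw_remote) * capex_per_mw
    return mw_local, mw_remote, capital_saving          # saving = max justifiable transmission spend

# e.g., 1 TWh/yr of RPS obligation
print(breakeven_transmission_cost(1_000_000))
```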

  13. Flexible services for the support of research.

    PubMed

    Turilli, Matteo; Wallom, David; Williams, Chris; Gough, Steve; Curran, Neal; Tarrant, Richard; Bretherton, Dan; Powell, Andy; Johnson, Matt; Harmer, Terry; Wright, Peter; Gordon, John

    2013-01-28

    Cloud computing has been increasingly adopted by users and providers to promote flexible, scalable and tailored access to computing resources. Nonetheless, the consolidation of this paradigm has uncovered some of its limitations. Initially devised by corporations with direct control over large amounts of computational resources, cloud computing is now being endorsed by organizations with limited resources or with a more articulated, less direct control over these resources. The challenge for these organizations is to leverage the benefits of cloud computing while dealing with limited and often widely distributed computing resources. This study focuses on the adoption of cloud computing by higher education institutions and addresses two main issues: flexible and on-demand access to a large amount of storage resources, and scalability across a heterogeneous set of cloud infrastructures. The proposed solutions leverage a federated approach to cloud resources in which users access multiple and largely independent cloud infrastructures through a highly customizable broker layer. This approach allows for a uniform authentication and authorization infrastructure, a fine-grained policy specification and the aggregation of accounting and monitoring. Within a loosely coupled federation of cloud infrastructures, users can access vast amounts of data without copying them across cloud infrastructures and can scale their resource provision when the local cloud resources become insufficient.

  14. Computational analysis of Ebolavirus data: prospects, promises and challenges.

    PubMed

    Michaelis, Martin; Rossman, Jeremy S; Wass, Mark N

    2016-08-15

    The ongoing Ebola virus (also known as Zaire ebolavirus, a member of the Ebolavirus family) outbreak in West Africa has so far resulted in >28,000 confirmed cases, compared with previous Ebolavirus outbreaks that affected a maximum of a few hundred individuals. Hence, Ebolaviruses impose a much greater threat than we may have expected (or hoped). An improved understanding of the virus biology is essential to develop therapeutic and preventive measures and to be better prepared for future outbreaks by members of the Ebolavirus family. Computational investigations can complement wet laboratory research for biosafety level 4 pathogens such as Ebolaviruses, for which the wet experimental capacities are limited due to a small number of appropriate containment laboratories. During the current West Africa outbreak, sequence data from many Ebola virus genomes became available, providing a rich resource for computational analysis. Here, we consider the studies that have already reported on the computational analysis of these data. A range of properties have been investigated, including Ebolavirus evolution and pathogenicity, prediction of microRNAs and identification of Ebolavirus-specific signatures. However, the accuracy of the results remains to be confirmed by wet laboratory experiments. Therefore, communication and exchange between computational and wet laboratory researchers is necessary to make maximum use of computational analyses and to iteratively improve these approaches. © 2016 The Author(s). Published by Portland Press Limited on behalf of the Biochemical Society.

  15. Natural resource manager perceptions of agency performance on climate change.

    PubMed

    Lemieux, Christopher J; Thompson, Jessica L; Dawson, Jackie; Schuster, Rudy M

    2013-01-15

    An important precursor to the adoption of climate change adaptation strategies is to understand the perceived capacity to implement and operationalize such strategies. Utilizing an importance-performance analysis (IPA) evaluation framework, this article presents a comparative case study of federal and state land and natural resource manager perceptions of agency performance on factors influencing adaptive capacity in two U.S. regions (northern Colorado and southwestern South Dakota). Results revealed several important findings with substantial management implications. First, none of the managers ranked the adaptive capacity factors as a low priority. Second, managers held the perception that their agencies were performing either neutrally or poorly on most factors influencing adaptive capacity. Third, gap analysis revealed that significant improvements are required to facilitate optimal agency functioning when dealing with climate change-related management issues. Overall, results suggest that a host of institutional and policy-oriented (e.g., lack of clear mandate to adapt to climate change), financial and human resource (e.g., inadequate staff and financial resources), informational (e.g., inadequate research and monitoring programs) and contextual barriers (e.g., lack of sufficient regional networks to mitigate potential transboundary impacts) currently challenge the efficient and effective integration of climate change into decision-making and management within agencies working in these regions. The IPA framework proved to be an effective tool to help managers identify and understand agency strengths, areas of concern, redundancies, and areas that warrant the use of limited funds and/or resource re-allocation in order to enhance adaptive capacity and maximize management effectiveness with respect to climate change. Copyright © 2012 Elsevier Ltd. All rights reserved.
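
    The gap analysis at the heart of an IPA can be reduced to a simple importance-minus-performance score per factor. The sketch below, in Python, uses made-up factor names and ratings purely to illustrate the computation; it is not the survey instrument or the data from this study.

        # Importance-performance gap analysis with illustrative ratings.
        def ipa_gaps(ratings):
            """ratings: {factor: (mean_importance, mean_performance)} on a shared scale.
            Returns factors sorted by gap (importance - performance), largest first."""
            gaps = {factor: imp - perf for factor, (imp, perf) in ratings.items()}
            return sorted(gaps.items(), key=lambda kv: kv[1], reverse=True)

        example = {
            "clear mandate to adapt": (4.6, 2.1),
            "staff and financial resources": (4.4, 2.5),
            "research and monitoring programs": (4.2, 3.0),
            "regional networks": (3.9, 3.3),
        }
        for factor, gap in ipa_gaps(example):
            print(f"{factor}: gap = {gap:.1f}")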

  16. The effects of working memory resource depletion and training on sensorimotor adaptation

    PubMed Central

    Anguera, Joaquin A.; Bernard, Jessica A.; Jaeggi, Susanne M.; Buschkuehl, Martin; Benson, Bryan L.; Jennett, Sarah; Humfleet, Jennifer; Reuter-Lorenz, Patricia; Jonides, John; Seidler, Rachael D.

    2011-01-01

    We have recently demonstrated that visuospatial working memory performance predicts the rate of motor skill learning, particularly during the early phase of visuomotor adaptation. Here, we follow up these correlational findings with direct manipulations of working memory resources to determine the impact on visuomotor adaptation, a form of motor learning. We conducted two separate experiments. In the first one, we used a resource depletion strategy to investigate whether the rate of early visuomotor adaptation would be negatively affected by fatigue of spatial working memory resources. In the second study, we employed a dual n-back task training paradigm that has been shown to result in transfer effects [1] over five weeks to determine whether training-related improvements would boost the rate of early visuomotor adaptation. The depletion of spatial working memory resources negatively affected the rate of early visuomotor adaptation. However, enhancing working memory capacity via training did not lead to improved rates of visuomotor adaptation, suggesting that working memory capacity may not be the factor limiting maximal rate of visuomotor adaptation in young adults. These findings are discussed from a resource limitation / capacity framework with respect to current views of motor learning. PMID:22155489

  17. Anesthesia Care Capacity at Health Facilities in 22 Low- and Middle-Income Countries.

    PubMed

    Hadler, Rachel A; Chawla, Sagar; Stewart, Barclay T; McCunn, Maureen C; Kushner, Adam L

    2016-05-01

    Globally, an estimated 2 billion people lack access to surgical and anesthesia care. We sought to pool results of anesthesia care capacity assessments in low- and middle-income countries (LMICs) to identify patterns of deficits and provide useful targets for advocacy and intervention. A systematic review of PubMed, Cochrane Database of Systematic Reviews, and Google Scholar identified reports that documented anesthesia care capacity from LMICs. When multiple assessments from one country were identified, only the study with the most facilities assessed was included. Patterns of availability or deficit were described. We identified 22 LMICs (15 low- and 8 middle-income countries) with anesthesia care capacity assessments (614 facilities assessed). Anesthesia care resources were often unavailable, including relatively low-cost ones (e.g., oxygen and airway supplies). Capacity varied markedly between and within countries, regardless of the national income. The availability of fundamental resources for safe anesthesia, such as airway supplies and functional pulse oximeters, was often not reported (72 and 36 % of hospitals assessed, respectively). Anesthesia machines and the capability to perform general anesthesia were unavailable in 43 % (132/307 hospitals) and 56 % (202/361) of hospitals, respectively. We identified a pattern of critical deficiencies in anesthesia care capacity in LMICs, including some low-cost, high-value added resources. The global health community should advocate for improvements in anesthesia care capacity and the potential benefits of doing so to health system planners. In addition, better quality data on anesthesia care capacity can improve advocacy, as well as the monitoring and evaluation of changes over time and the impact of capacity improvement interventions.

  18. Resource Aware Intelligent Network Services (RAINS) Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehman, Tom; Yang, Xi

    The Resource Aware Intelligent Network Services (RAINS) project conducted research and developed technologies in the area of cyber infrastructure resource modeling and computation. The goal of this work was to provide a foundation to enable intelligent, software-defined services which span the network AND the resources which connect to the network. A Multi-Resource Service Plane (MRSP) was defined, which allows resource owners/managers to locate and place themselves from a topology and service availability perspective within the dynamic networked cyberinfrastructure ecosystem. The MRSP enables the presentation of integrated topology views and computation results which can include resources across the spectrum of compute, storage, and networks. The MRSP developed by the RAINS project includes the following key components: i) Multi-Resource Service (MRS) Ontology/Multi-Resource Markup Language (MRML), ii) Resource Computation Engine (RCE), iii) Modular Driver Framework (to allow integration of a variety of external resources). The MRS/MRML is a general and extensible modeling framework that allows resource owners to model, or describe, a wide variety of resource types. All resources are described using three categories of elements: Resources, Services, and Relationships between the elements. This modeling framework defines a common method for the transformation of cyber infrastructure resources into data in the form of MRML models. In order to realize this infrastructure datification, the RAINS project developed a model-based computation system, the RAINS Computation Engine (RCE). The RCE has the ability to ingest, process, integrate, and compute based on automatically generated MRML models. The RCE interacts with the resources through system drivers which are specific to the type of external network or resource controller. The RAINS project developed a modular and pluggable driver system which enables a variety of resource controllers to automatically generate, maintain, and distribute MRML-based resource descriptions. Once all of the resource topologies are absorbed by the RCE, a connected graph of the full distributed system topology is constructed, which forms the basis for computation and workflow processing. The RCE includes a Modular Computation Element (MCE) framework which allows for tailoring of the computation process to the specific set of resources under control, and the services desired. The input and output of an MCE are both model data based on the MRS/MRML ontology and schema. Some of the RAINS project accomplishments include: development of a general and extensible multi-resource modeling framework; design of a Resource Computation Engine (RCE) system whose key capabilities include absorbing a variety of multi-resource model types and building integrated models, a novel architecture which uses model-based communications across the full stack, flexible provision of abstract or intent-based user-facing interfaces, and workflow processing based on model descriptions; release of the RCE as open source software; deployment of the RCE in the University of Maryland/Mid-Atlantic Crossroad ScienceDMZ in prototype mode, with a plan under way to transition to production; deployment at the Argonne National Laboratory DTN Facility in prototype mode; and selection of the RCE by the DOE SENSE (SDN for End-to-end Networked Science at the Exascale) project as the basis for their orchestration service.
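
    The three element categories of the modeling framework (Resources, Services, and Relationships) lend themselves to a very small data model. The Python sketch below is a loose illustration in that spirit; the class layout and attribute names are hypothetical and are not the actual MRS/MRML ontology or schema.

        # Toy multi-resource model: every element is a Resource, a Service,
        # or a Relationship between elements (structure is hypothetical).
        from dataclasses import dataclass, field

        @dataclass
        class Service:
            name: str
            attributes: dict = field(default_factory=dict)

        @dataclass
        class Resource:
            name: str
            kind: str                      # e.g. "compute", "storage", "network"
            services: list = field(default_factory=list)

        @dataclass
        class Relationship:
            source: str
            target: str
            relation: str                  # e.g. "connectedTo", "provides"

        # Describe a tiny slice of cyberinfrastructure as model data.
        dtn = Resource("dtn-01", "compute", [Service("data-transfer", {"nic": "100G"})])
        store = Resource("lustre-01", "storage", [Service("posix-export", {"size_tb": 500})])
        link = Relationship("dtn-01", "lustre-01", "connectedTo")

        model = {"resources": [dtn, store], "relationships": [link]}
        print(len(model["resources"]), "resources,", len(model["relationships"]), "relationship")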

  19. Optimization of tomographic reconstruction workflows on geographically distributed resources

    DOE PAGES

    Bicer, Tekin; Gursoy, Doga; Kettimuthu, Rajkumar; ...

    2016-01-01

    New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing as in tomographic reconstruction methods require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on experimented resources). Furthermore, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks.
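
    The three-stage decomposition above (data transfer, queue wait, computation) can be turned into a back-of-the-envelope estimator for choosing where to run a reconstruction. The Python sketch below assumes a linear compute model and invented site parameters; it is an illustration of the idea, not the fitted performance models from the paper.

        # Estimate end-to-end workflow time as transfer + queue wait + compute,
        # then pick the resource with the smallest estimate (illustrative numbers).
        def estimate_runtime_s(data_gb, bandwidth_gbps, queue_wait_s, work_units, units_per_s):
            transfer = data_gb * 8.0 / bandwidth_gbps   # seconds to move the data
            compute = work_units / units_per_s          # reconstruction time
            return transfer + queue_wait_s + compute

        sites = {
            "hpc-a": dict(bandwidth_gbps=10, queue_wait_s=1800, units_per_s=400),
            "hpc-b": dict(bandwidth_gbps=40, queue_wait_s=7200, units_per_s=1500),
        }
        job = dict(data_gb=2000, work_units=1_000_000)

        best = min(sites, key=lambda s: estimate_runtime_s(job["data_gb"],
                                                           sites[s]["bandwidth_gbps"],
                                                           sites[s]["queue_wait_s"],
                                                           job["work_units"],
                                                           sites[s]["units_per_s"]))
        print("preferred resource:", best)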

  20. Optimization of tomographic reconstruction workflows on geographically distributed resources

    PubMed Central

    Bicer, Tekin; Gürsoy, Doǧa; Kettimuthu, Rajkumar; De Carlo, Francesco; Foster, Ian T.

    2016-01-01

    New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing as in tomographic reconstruction methods require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on experimented resources). Moreover, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks. PMID:27359149

  1. Optimization of tomographic reconstruction workflows on geographically distributed resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bicer, Tekin; Gursoy, Doga; Kettimuthu, Rajkumar

    New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing as in tomographic reconstruction methods require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on experimented resources). Furthermore, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks.

  2. Revisiting the Impact of NCLB High-Stakes School Accountability, Capacity, and Resources: State NAEP 1990-2009 Reading and Math Achievement Gaps and Trends

    ERIC Educational Resources Information Center

    Lee, Jaekyung; Reeves, Todd

    2012-01-01

    This study examines the impact of high-stakes school accountability, capacity, and resources under NCLB on reading and math achievement outcomes through comparative interrupted time-series analyses of 1990-2009 NAEP state assessment data. Through hierarchical linear modeling latent variable regression with inverse probability of treatment…

  3. Agency capacity for recreation science and management: the case of the U.S. Forest Service.

    Treesearch

    Lee K. Cerveny; Clare M. Ryan

    2008-01-01

    This report examines the capacity of natural resource agencies to generate scientific knowledge and information for use by resource managers in planning and decisionmaking. This exploratory study focused on recreation in the U.S. Department of Agriculture, Forest Service. A semistructured, open-ended interview guide elicited insights from 58 managers and 28 researchers...

  4. JPRS Report, Science & Technology, China: Energy.

    DTIC Science & Technology

    1988-02-10

    bedrock growth anticlines, buried hill fault blocks, rolling anticlines, compression anticlines, draped anticlines, volcanic diapirs and others. The...development and utilization of solar, wind, geothermal and other energy resources, the energy conservation capacity and newly-added energy resources were...equivalent to 20 million tons of standard coal. The firewood-saving capacity in wood and coal-saving stoves, biogas pits and solar cookers alone was

  5. Processing capacity under perceptual and cognitive load: a closer look at load theory.

    PubMed

    Fitousi, Daniel; Wenger, Michael J

    2011-06-01

    Variations in perceptual and cognitive demands (load) play a major role in determining the efficiency of selective attention. According to load theory (Lavie, Hirst, Fockert, & Viding, 2004) these factors (a) improve or hamper selectivity by altering the way resources (e.g., processing capacity) are allocated, and (b) tap resources rather than data limitations (Norman & Bobrow, 1975). Here we provide an extensive and rigorous set of tests of these assumptions. Predictions regarding changes in processing capacity are tested using the hazard function of the response time (RT) distribution (Townsend & Ashby, 1978; Wenger & Gibson, 2004). The assumption that load taps resource rather than data limitations is examined using measures of sensitivity and bias drawn from signal detection theory (Swets, 1964). All analyses were performed at two levels: the individual and the aggregate. Hypotheses regarding changes in processing capacity were confirmed at the level of the aggregate. Hypotheses regarding resource and data limitations were not completely supported at either level of analysis. And in all of the analyses, we observed substantial individual differences. In sum, the results suggest a need to expand the theoretical vocabulary of load theory, rather than a need to discard it.
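
    Since the hazard function of the RT distribution is the capacity index used in this line of work, a rough empirical estimate is easy to compute from binned response times. The Python sketch below (using numpy) is only illustrative: the binning choice and the simulated gamma-distributed RTs are assumptions, not the authors' data or estimator.

        # Empirical hazard of a response-time distribution: h(t) ~ f(t) / S(t).
        import numpy as np

        def empirical_hazard(rts, bins=20):
            counts, edges = np.histogram(rts, bins=bins)
            n = len(rts)
            density = counts / (n * np.diff(edges))        # f(t), per-bin density
            survivors = n - np.cumsum(counts) + counts     # still "alive" at bin start
            return edges[:-1], density / (survivors / n)   # f(t) / S(t)

        rng = np.random.default_rng(0)
        rts = rng.gamma(shape=2.0, scale=150.0, size=5000)  # simulated RTs in ms
        t, h = empirical_hazard(rts)
        print(np.round(h[:5], 4))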

  6. A qualitative examination of the health workforce needs during climate change disaster response in Pacific Island Countries

    PubMed Central

    2014-01-01

    Background There is a growing body of evidence that the impacts of climate change are affecting population health negatively. The Pacific region is particularly vulnerable to climate change; a strong health-care system is required to respond during times of disaster. This paper examines the capacity of the health sector in Pacific Island Countries to adapt to changing disaster response needs, in terms of: (i) health workforce governance, management, policy and involvement; (ii) health-care capacity and skills; and (iii) human resources for health training and workforce development. Methods Key stakeholder interviews informed the assessment of the capacity of the health sector and disaster response organizations in Pacific Island Countries to adapt to disaster response needs under a changing climate. The research specifically drew upon and examined the adaptive capacity of individual organizations and the broader system of disaster response in four case study countries (Fiji, Cook Islands, Vanuatu and Samoa). Results ‘Capacity’ including health-care capacity was one of the objective determinants identified as most significant in influencing the adaptive capacity of disaster response systems in the Pacific. The research identified several elements that could support the adaptive capacity of the health sector such as: inclusive involvement in disaster coordination; policies in place for health workforce coordination; belief in their abilities; and strong donor support. Factors constraining adaptive capacity included: weak coordination of international health personnel; lack of policies to address health worker welfare; limited human resources and material resources; shortages of personnel to deal with psychosocial needs; inadequate skills in field triage and counselling; and limited capacity for training. Conclusion Findings from this study can be used to inform the development of human resources for health policies and strategic plans, and to support the development of a coordinated and collaborative approach to disaster response training across the Pacific and other developing contexts. This study also provides an overview of health-care capacity and some of the challenges and strengths that can inform future development work by humanitarian organizations, regional and international donors involved in climate change adaptation, and disaster risk reduction in the Pacific region. PMID:24521057

  7. Protective Capacity and Absorptive Capacity: Managing the Balance between Retention and Creation of Knowledge-Based Resources

    ERIC Educational Resources Information Center

    Andersen, Jim

    2012-01-01

    Purpose: In order to understand the pros and cons of an open organization regarding the flow of knowledge between firms, this paper introduces the concept of "protective capacity". The purpose of the paper is to elaborate the concept of "protective capacity" especially in relation to absorptive capacity, by presenting a number of propositions.…

  8. Enabling opportunistic resources for CMS Computing Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hufnagel, Dirk

    With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize "opportunistic" resources (resources not owned by, or a priori configured for, CMS) to meet peak demands. In addition to our dedicated resources we look to add computing resources from non-CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and parrot wrappers are used to enable access and bring the CMS environment into these non-CMS resources. Finally, we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far using them.

  9. Enabling opportunistic resources for CMS Computing Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hufnagel, Dick

    With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize “opportunistic” resources — resources not owned by, or a priori configured for CMS — to meet peak demands. In addition to our dedicated resources we look to add computing resources from non-CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and parrot wrappers are used to enable access and bring the CMS environment into these non-CMS resources. Here we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far using them.

  10. Enabling opportunistic resources for CMS Computing Operations

    DOE PAGES

    Hufnagel, Dirk

    2015-12-23

    With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize "opportunistic" resources (resources not owned by, or a priori configured for, CMS) to meet peak demands. In addition to our dedicated resources we look to add computing resources from non-CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and parrot wrappers are used to enable access and bring the CMS environment into these non-CMS resources. Finally, we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far using them.

  11. Interoperability of GADU in using heterogeneous Grid resources for bioinformatics applications.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sulakhe, D.; Rodriguez, A.; Wilde, M.

    2008-03-01

    Bioinformatics tools used for efficient and computationally intensive analysis of genetic sequences require large-scale computational resources to accommodate the growing data. Grid computational resources such as the Open Science Grid and TeraGrid have proved useful for scientific discovery. The genome analysis and database update system (GADU) is a high-throughput computational system developed to automate the steps involved in accessing the Grid resources for running bioinformatics applications. This paper describes the requirements for building an automated scalable system such as GADU that can run jobs on different Grids. The paper describes the resource-independent configuration of GADU using the Pegasus-based virtual data system that makes high-throughput computational tools interoperable on heterogeneous Grid resources. The paper also highlights the features implemented to make GADU a gateway to computationally intensive bioinformatics applications on the Grid. The paper will not go into the details of problems involved or the lessons learned in using individual Grid resources as it has already been published in our paper on genome analysis research environment (GNARE) and will focus primarily on the architecture that makes GADU resource independent and interoperable across heterogeneous Grid resources.

  12. Demographic patterns and trends in Central Ghana: baseline indicators from the Kintampo Health and Demographic Surveillance System

    PubMed Central

    Owusu-Agyei, Seth; Nettey, Obed Ernest A.; Zandoh, Charles; Sulemana, Abubakari; Adda, Robert; Amenga-Etego, Seeba; Mbacke, Cheikh

    2012-01-01

    Background The dearth of health and demographic data in sub-Saharan Africa from vital registration systems and its impact on effective planning for health and socio-economic development is widely documented. Health and Demographic Surveillance Systems have the capacity to address the dearth of quality data for policy making in resource-poor settings. Objective This article demonstrates the utility of the Kintampo Health and Demographic Surveillance System (KHDSS) by showing the patterns and trends of population change from 2005 to 2009 in the Kintampo North Municipality and Kintampo South districts of Ghana through data obtained from the KHDSS biannual update rounds. Design Basic demographic rates for fertility, mortality, and migration were computed by year. School enrolment was computed as a percentage in school by age and sex for 6–18 year-olds. Socio-economic status was derived by use of Principal Components Analysis on household assets. Results Over the period, an earlier fertility decline was reversed in 2009; mortality declined slightly for all age-groups, and a significant share of working-age population was lost through out-migration. Large minorities of children of school-going age are not in school. Socio-economic factors are shown to be important determinants of fertility and mortality. Conclusion Strengthening the capacity of HDSSs could offer added value to evidence-driven policymaking at local level. PMID:23273249
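
    The asset-based wealth index mentioned above is commonly taken as the first principal component of household asset indicators. The Python sketch below (using numpy) illustrates that computation on a made-up asset matrix; it is not the KHDSS data or the exact procedure used by the authors.

        # Wealth index as the first principal component of 0/1 asset indicators.
        import numpy as np

        def wealth_index(assets):
            """assets: households x asset-ownership indicators (0/1).
            Returns each household's score on the first principal component."""
            X = np.asarray(assets, dtype=float)
            X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)   # standardize columns
            _, _, vt = np.linalg.svd(X, full_matrices=False)
            return X @ vt[0]                                     # PC1 scores (sign is arbitrary)

        households = [
            [1, 1, 1, 0],   # e.g. radio, phone, bicycle, car
            [1, 0, 0, 0],
            [1, 1, 1, 1],
            [0, 0, 0, 0],
        ]
        print(np.argsort(wealth_index(households)))   # households ranked by the index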

  13. Demographic patterns and trends in Central Ghana: baseline indicators from the Kintampo Health and Demographic Surveillance System.

    PubMed

    Owusu-Agyei, Seth; Nettey, Obed Ernest A; Zandoh, Charles; Sulemana, Abubakari; Adda, Robert; Amenga-Etego, Seeba; Mbacke, Cheikh

    2012-12-20

    The dearth of health and demographic data in sub-Saharan Africa from vital registration systems and its impact on effective planning for health and socio-economic development is widely documented. Health and Demographic Surveillance Systems have the capacity to address the dearth of quality data for policy making in resource-poor settings. This article demonstrates the utility of the Kintampo Health and Demographic Surveillance System (KHDSS) by showing the patterns and trends of population change from 2005 to 2009 in the Kintampo North Municipality and Kintampo South districts of Ghana through data obtained from the KHDSS biannual update rounds. Basic demographic rates for fertility, mortality, and migration were computed by year. School enrolment was computed as a percentage in school by age and sex for 6-18 year-olds. Socio-economic status was derived by use of Principal Components Analysis on household assets. Over the period, an earlier fertility decline was reversed in 2009; mortality declined slightly for all age-groups, and a significant share of working-age population was lost through out-migration. Large minorities of children of school-going age are not in school. Socio-economic factors are shown to be important determinants of fertility and mortality. Strengthening the capacity of HDSSs could offer added value to evidence-driven policymaking at local level.

  14. Expanding Capacity and Promoting Inclusion in Introductory Computer Science: A Focus on Near-Peer Mentor Preparation and Code Review

    ERIC Educational Resources Information Center

    Pon-Barry, Heather; Packard, Becky Wai-Ling; St. John, Audrey

    2017-01-01

    A dilemma within computer science departments is developing sustainable ways to expand capacity within introductory computer science courses while remaining committed to inclusive practices. Training near-peer mentors for peer code review is one solution. This paper describes the preparation of near-peer mentors for their role, with a focus on…

  15. Development of urbanization in arid and semi arid regions based on the water resource carrying capacity -- a case study of Changji, Xinjiang

    NASA Astrophysics Data System (ADS)

    Xiao, H.; Zhang, L.; Chai, Z.

    2017-07-01

    Arid and semi-arid regions in China have a relatively weak economic foundation, limited independent development capacity, and a low level of urbanization. New urbanization within these regions faces severe challenges brought by resource constraints. In this paper, we selected the Changji Hui Autonomous Prefecture, Xinjiang Uyghur Autonomous Region, as the study area. Based on an analysis of the main domestic, agricultural and industrial water demands, we found that the agricultural planting structure is the key determinant of water consumption. Finally, we suggest that more attention should be paid to the rational utilization of water resources and to the population carrying capacity, and that the industrial structure should be adjusted and upgraded in coordination with the Silk Road Economic Belt.

  16. A capacity-based approach for addressing ancillary care needs: implications for research in resource limited settings.

    PubMed

    Bright, Patricia L; Nelson, Robert M

    2012-11-01

    A paediatric clinical trial conducted in a developing country is likely to encounter conditions or illnesses in participants unrelated to the study. Since local healthcare resources may be inadequate to meet these needs, research clinicians may face the dilemma of deciding when to provide ancillary care and to what extent. The authors propose a model for identifying ancillary care obligations that draws on assessments of urgency, the capacity of the local healthcare infrastructure and the capacity of the research infrastructure. The model lends itself to a decision tree that can be adapted to the local context and resources so as to provide procedural guidance. This approach can help in planning and establishing organisational policies that govern the provision of ancillary care.
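
    The proposed decision tree combines urgency with the capacity of the local health system and of the research infrastructure. The Python sketch below shows one hypothetical branching of that kind; the categories, thresholds and recommended actions are illustrative and are not the authors' model.

        # Hypothetical ancillary-care decision logic (illustrative only).
        def ancillary_care_decision(urgent, local_system_can_treat, research_team_can_treat):
            if urgent and not local_system_can_treat and research_team_can_treat:
                return "provide care directly through the research infrastructure"
            if local_system_can_treat:
                return "refer to the local healthcare system (with follow-up)"
            if research_team_can_treat:
                return "offer care as capacity allows, per site policy"
            return "refer onward and document the unmet need"

        print(ancillary_care_decision(urgent=True,
                                      local_system_can_treat=False,
                                      research_team_can_treat=True))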

  17. Using Mosix for Wide-Area Computational Resources

    USGS Publications Warehouse

    Maddox, Brian G.

    2004-01-01

    One of the problems with using traditional Beowulf-type distributed processing clusters is that they require an investment in dedicated computer resources. These resources are usually needed in addition to pre-existing ones such as desktop computers and file servers. Mosix is a series of modifications to the Linux kernel that creates a virtual computer, featuring automatic load balancing by migrating processes from heavily loaded nodes to less used ones. An extension of the Beowulf concept is to run a Mosix-enabled Linux kernel on a large number of computer resources in an organization. This configuration would provide a very large amount of computational resources based on pre-existing equipment. The advantage of this method is that it provides much more processing power than a traditional Beowulf cluster without the added costs of dedicating resources.

  18. SMC: SCENIC Model Control

    NASA Technical Reports Server (NTRS)

    Srivastava, Priyaka; Kraus, Jeff; Murawski, Robert; Golden, Bertsel, Jr.

    2015-01-01

    NASA's Space Communications and Navigation (SCaN) program manages three active networks: the Near Earth Network, the Space Network, and the Deep Space Network. These networks simultaneously support NASA missions and provide communications services to customers worldwide. To efficiently manage these resources and their capabilities, a team of student interns at the NASA Glenn Research Center is developing a distributed system to model the SCaN networks. Once complete, the system shall provide a platform that enables users to perform capacity modeling of current and prospective missions with finer-grained control of information between several simulation and modeling tools. This will enable the SCaN program to access a holistic view of its networks and simulate the effects of modifications in order to provide NASA with decisional information. The development of this capacity modeling system is managed by NASA's Strategic Center for Education, Networking, Integration, and Communication (SCENIC). Three primary third-party software tools offer their unique abilities in different stages of the simulation process. MagicDraw provides UML/SysML modeling, AGI's Systems Tool Kit simulates the physical transmission parameters and de-conflicts scheduled communication, and Riverbed Modeler (formerly OPNET) simulates communication protocols and packet-based networking. SCENIC developers are building custom software extensions to integrate these components in an end-to-end space communications modeling platform. A central control module acts as the hub for report-based messaging between client wrappers. Backend databases provide information related to mission parameters and ground station configurations, while the end user defines scenario-specific attributes for the model. The eight SCENIC interns are working under the direction of their mentors to complete an initial version of this capacity modeling system during the summer of 2015. The intern team is composed of four students in Computer Science, two in Computer Engineering, one in Electrical Engineering, and one studying Space Systems Engineering.

  19. Simulation analysis of resource flexibility on healthcare processes

    PubMed Central

    Simwita, Yusta W; Helgheim, Berit I

    2016-01-01

    Purpose This paper uses discrete event simulation to explore the best resource flexibility scenario and examine the effect of implementing resource flexibility at different stages of the patient treatment process. Specifically, we investigate the effect of resource flexibility on patient waiting time and throughput in an orthopedic care process. We further explore how implementing resource flexibility in patient treatment processes affects patient access to healthcare services. We focus on two resources, namely, the orthopedic surgeon and the operating room. Methods The observational approach was used to collect process data. The developed model was validated by comparing the simulation output with actual patient data collected from the studied orthopedic care process. We developed different scenarios to identify the best resource flexibility scenario and explore the effect of resource flexibility on patient waiting time, throughput, and future changes in demand. The developed scenarios focused on creating flexibility in the service capacity of this care process by altering the amount of additional human resource capacity at different stages of the patient care process and extending the use of operating room capacity. Results The study found that resource flexibility can improve responsiveness to patient demand in the treatment process. Testing different scenarios showed that the introduction of resource flexibility reduces patient waiting time and improves throughput. The simulation results show that patient access to health services can be improved by implementing resource flexibility at different stages of the patient treatment process. Conclusion This study contributes to the current health care literature by explaining how implementing resource flexibility at different stages of patient care processes can improve the ability to respond to increasing patient demands. This study was limited to a single patient process; studies focusing on additional processes are recommended. PMID:27785046
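
    A stripped-down discrete event simulation of the surgeon/operating-room bottleneck shows the kind of resource-flexibility experiment the study ran. The Python sketch below uses the simpy library; the arrival and service-time distributions and the two scenarios compared are illustrative assumptions, far simpler than the validated model in the paper.

        # Toy DES of an orthopedic pathway: consult with a surgeon, then surgery.
        import random
        import simpy

        WAITS = []

        def patient(env, surgeons, theatre):
            arrived = env.now
            with surgeons.request() as s:
                yield s
                yield env.timeout(random.expovariate(1 / 30))   # consult, ~30 min
            with theatre.request() as t:
                yield t
                WAITS.append(env.now - arrived)                 # wait until theatre access
                yield env.timeout(random.expovariate(1 / 90))   # surgery, ~90 min

        def arrivals(env, surgeons, theatre):
            while True:
                yield env.timeout(random.expovariate(1 / 45))   # a patient every ~45 min
                env.process(patient(env, surgeons, theatre))

        def run(n_surgeons, n_theatres, minutes=8 * 60 * 20):
            random.seed(1)
            WAITS.clear()
            env = simpy.Environment()
            surgeons = simpy.Resource(env, capacity=n_surgeons)
            theatre = simpy.Resource(env, capacity=n_theatres)
            env.process(arrivals(env, surgeons, theatre))
            env.run(until=minutes)
            return sum(WAITS) / len(WAITS)

        # Baseline versus a "flexible capacity" scenario with extra staff and theatre time.
        print(run(1, 1), run(2, 2))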

  20. Simulation analysis of resource flexibility on healthcare processes.

    PubMed

    Simwita, Yusta W; Helgheim, Berit I

    2016-01-01

    This paper uses discrete event simulation to explore the best resource flexibility scenario and examine the effect of implementing resource flexibility at different stages of the patient treatment process. Specifically, we investigate the effect of resource flexibility on patient waiting time and throughput in an orthopedic care process. We further explore how implementing resource flexibility in patient treatment processes affects patient access to healthcare services. We focus on two resources, namely, the orthopedic surgeon and the operating room. The observational approach was used to collect process data. The developed model was validated by comparing the simulation output with actual patient data collected from the studied orthopedic care process. We developed different scenarios to identify the best resource flexibility scenario and explore the effect of resource flexibility on patient waiting time, throughput, and future changes in demand. The developed scenarios focused on creating flexibility in the service capacity of this care process by altering the amount of additional human resource capacity at different stages of the patient care process and extending the use of operating room capacity. The study found that resource flexibility can improve responsiveness to patient demand in the treatment process. Testing different scenarios showed that the introduction of resource flexibility reduces patient waiting time and improves throughput. The simulation results show that patient access to health services can be improved by implementing resource flexibility at different stages of the patient treatment process. This study contributes to the current health care literature by explaining how implementing resource flexibility at different stages of patient care processes can improve the ability to respond to increasing patient demands. This study was limited to a single patient process; studies focusing on additional processes are recommended.

  1. Critical care capacity in Canada: results of a national cross-sectional study.

    PubMed

    Fowler, Robert A; Abdelmalik, Philip; Wood, Gordon; Foster, Denise; Gibney, Noel; Bandrauk, Natalie; Turgeon, Alexis F; Lamontagne, François; Kumar, Anand; Zarychanski, Ryan; Green, Rob; Bagshaw, Sean M; Stelfox, Henry T; Foster, Ryan; Dodek, Peter; Shaw, Susan; Granton, John; Lawless, Bernard; Hill, Andrea; Rose, Louise; Adhikari, Neill K; Scales, Damon C; Cook, Deborah J; Marshall, John C; Martin, Claudio; Jouvet, Philippe

    2015-04-01

    Intensive Care Units (ICUs) provide life-supporting treatment; however, resources are limited, so demand may exceed supply in the event of pandemics, environmental disasters, or in the context of an aging population. We hypothesized that comprehensive national data on ICU resources would permit a better understanding of regional differences in system capacity. After the 2009-2010 Influenza A (H1N1) pandemic, the Canadian Critical Care Trials Group surveyed all acute care hospitals in Canada to assess ICU capacity. Using a structured survey tool administered to physicians, respiratory therapists and nurses, we determined the number of ICU beds, ventilators, and the ability to provide specialized support for respiratory failure. We identified 286 hospitals with 3170 ICU beds and 4982 mechanical ventilators for critically ill patients. Twenty-two hospitals had an ICU that routinely cared for children; 15 had dedicated pediatric ICUs. Per 100,000 population, there was substantial variability in provincial capacity, with a mean of 0.9 hospitals with ICUs (provincial range 0.4-2.8), 10 ICU beds capable of providing mechanical ventilation (provincial range 6-19), and 15 invasive mechanical ventilators (provincial range 10-24). There was only moderate correlation between ventilation capacity and population size (coefficient of determination, R² = 0.771). ICU resources vary widely across Canadian provinces, and during times of increased demand, may result in geographic differences in the ability to care for critically ill patients. These results highlight the need to evolve inter-jurisdictional resource sharing during periods of substantial increase in demand, and provide background data for the development of appropriate critical care capacity benchmarks.

  2. Riverine threat indices to assess watershed condition and identify primary management capacity of agriculture natural resource management agencies.

    PubMed

    Fore, Jeffrey D; Sowa, Scott P; Galat, David L; Annis, Gust M; Diamond, David D; Rewa, Charles

    2014-03-01

    Managers can improve conservation of lotic systems over large geographies if they have tools to assess total watershed conditions for individual stream segments and can identify segments where conservation practices are most likely to be successful (i.e., primary management capacity). The goal of this research was to develop a suite of threat indices to help agriculture resource management agencies select and prioritize watersheds across the Missouri River basin in which to implement agriculture conservation practices. We quantified watershed percentages or densities of 17 threat metrics that represent major sources of ecological stress to stream communities into five threat indices: agriculture, urban, point-source pollution, infrastructure, and all non-agriculture threats. We identified stream segments where agriculture management agencies had primary management capacity. Agriculture watershed condition differed by ecoregion and considerable local variation was observed among stream segments in ecoregions of high agriculture threats. Stream segments with high non-agriculture threats were most concentrated near urban areas, but showed high local variability. Sixty percent of stream segments in the basin were classified as under U.S. Department of Agriculture's Natural Resources Conservation Service (NRCS) primary management capacity and most segments were in regions of high agricultural threats. NRCS primary management capacity was locally variable, which highlights the importance of assessing total watershed condition for multiple threats. Our threat indices can be used by agriculture resource management agencies to prioritize conservation actions and investments based on: (a) relative severity of all threats, (b) relative severity of agricultural threats, and (c) degree of primary management capacity.
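
    The roll-up from individual threat metrics to index scores can be illustrated with a few lines of Python. The metric names, groupings, and the simple averaging below are placeholders to show the mechanics; they are not the seventeen metrics or the weighting used in the study.

        # Aggregate normalized watershed metrics (0-1) into threat-index scores.
        THREAT_GROUPS = {
            "agriculture": ["row_crop_pct", "pasture_pct", "tile_drain_density"],
            "urban": ["impervious_pct", "road_density"],
            "point_source": ["npdes_density"],
        }

        def threat_indices(watershed):
            """watershed: {metric: value in 0-1}. Returns one score per threat group
            plus an all-non-agriculture roll-up."""
            scores = {group: sum(watershed[m] for m in metrics) / len(metrics)
                      for group, metrics in THREAT_GROUPS.items()}
            non_ag = [s for g, s in scores.items() if g != "agriculture"]
            scores["all_non_agriculture"] = sum(non_ag) / len(non_ag)
            return scores

        print(threat_indices({"row_crop_pct": 0.8, "pasture_pct": 0.4,
                              "tile_drain_density": 0.6, "impervious_pct": 0.1,
                              "road_density": 0.2, "npdes_density": 0.05}))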

  3. Reducing power usage on demand

    NASA Astrophysics Data System (ADS)

    Corbett, G.; Dewhurst, A.

    2016-10-01

    The Science and Technology Facilities Council (STFC) datacentre provides large-scale High Performance Computing facilities for the scientific community. It currently consumes approximately 1.5MW and this has risen by 25% in the past two years. STFC has been investigating leveraging preemption in the Tier 1 batch farm to save power. HEP experiments are increasingly using jobs that can be killed to take advantage of opportunistic CPU resources or novel cost models such as Amazon's spot pricing. Additionally, schemes from energy providers are available that offer financial incentives to reduce power consumption at peak times. Under normal operating conditions, 3% of the batch farm capacity is wasted due to draining machines. By using preempt-able jobs, nodes can be rapidly made available to run multicore jobs without this wasted resource. The use of preempt-able jobs has been extended so that at peak times machines can be hibernated quickly to save energy. This paper describes the implementation of the above and demonstrates that STFC could in future take advantage of such energy saving schemes.

  4. Managing resource capacity using hybrid simulation

    NASA Astrophysics Data System (ADS)

    Ahmad, Norazura; Ghani, Noraida Abdul; Kamil, Anton Abdulbasah; Tahar, Razman Mat

    2014-12-01

    Due to the diversity of patient flows and the interdependency of the emergency department (ED) with other units in the hospital, the use of analytical models is not practical for ED modeling. One effective approach to studying the dynamic complexity of ED problems is to develop a computer simulation model that can be used to understand the structure and behavior of the system. A holistic model built with DES alone would be too complex, while one built with SD alone would lack the detailed characteristics of the system. This paper discusses the combination of DES and SD in order to get a better representation of the actual system than using either modeling paradigm alone. The model is developed using AnyLogic software, which enables us to study patient flows and the complex interactions among hospital resources for ED operations. Results from the model show that patients' length of stay is influenced by laboratory turnaround time, bed occupancy rate and ward admission rate.

  5. The creation and early implementation of a high speed fiber optic network for a university health sciences center.

    PubMed Central

    Schueler, J. D.; Mitchell, J. A.; Forbes, S. M.; Neely, R. C.; Goodman, R. J.; Branson, D. K.

    1991-01-01

    In late 1989 the University of Missouri Health Sciences Center began the process of creating an extensive fiber optic network throughout its facilities, with the intent to provide networked computer access to anyone in the Center desiring such access, regardless of geographic location or organizational affiliation. A committee representing all disciplines within the Center produced and, in conjunction with independent consultants, approved a comprehensive design for the network. Installation of network backbone components commenced in the second half of 1990 and was completed in early 1991. As the network entered its initial phases of operation, the first realities of this important new resource began to manifest themselves as enhanced functional capacity in the Health Sciences Center. This paper describes the development of the network, with emphasis on its design criteria, installation, early operation, and management. Also included are discussions on its organizational impact and its evolving significance as a medical community resource. PMID:1807660

  6. Urban water sustainability: an integrative framework for regional water management

    NASA Astrophysics Data System (ADS)

    Gonzales, P.; Ajami, N. K.

    2015-11-01

    Traditional urban water supply portfolios have proven to be unsustainable under the uncertainties associated with growth and long-term climate variability. Introducing alternative water supplies such as recycled water, captured runoff, desalination, as well as demand management strategies such as conservation and efficiency measures, has been widely proposed to address the long-term sustainability of urban water resources. Collaborative efforts have the potential to achieve this goal through more efficient use of common pool resources and access to funding opportunities for supply diversification projects. However, this requires a paradigm shift towards holistic solutions that address the complexity of hydrologic, socio-economic and governance dynamics surrounding water management issues. The objective of this work is to develop a regional integrative framework for the assessment of water resource sustainability under current management practices, as well as to identify opportunities for sustainability improvement in coupled socio-hydrologic systems. We define the sustainability of a water utility as the ability to access reliable supplies to consistently satisfy current needs, make responsible use of supplies, and have the capacity to adapt to future scenarios. To compute a quantitative measure of sustainability, we develop a numerical index comprised of supply, demand, and adaptive capacity indicators, including an innovative way to account for the importance of having diverse supply sources. We demonstrate the application of this framework to the Hetch Hetchy Regional Water System in the San Francisco Bay Area of California. Our analyses demonstrate that water agencies that share common water supplies are in a good position to establish integrative regional management partnerships in order to achieve individual and collective short-term and long-term benefits.
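
    One way to picture the index is as an equally weighted combination of supply, demand, and adaptive-capacity indicators plus a supply-diversity term. The Python sketch below uses a normalized Shannon index over supply-source shares for the diversity term; the indicator set, the equal weights, and the example numbers are assumptions for illustration, not the index defined in the paper.

        # Composite sustainability score with a supply-diversity term (illustrative).
        import math

        def supply_diversity(shares):
            """Normalized Shannon diversity of supply-source shares (0 to 1)."""
            shares = [s for s in shares if s > 0]
            if len(shares) <= 1:
                return 0.0
            h = -sum(s * math.log(s) for s in shares)
            return h / math.log(len(shares))

        def sustainability_index(reliability, conservation, adaptive_capacity, supply_shares):
            """All indicators scaled 0-1; equal weighting for illustration."""
            return (reliability + conservation + adaptive_capacity
                    + supply_diversity(supply_shares)) / 4.0

        # Example utility: 70% imported surface water, 20% groundwater, 10% recycled.
        print(round(sustainability_index(0.8, 0.6, 0.5, [0.7, 0.2, 0.1]), 2))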

  7. Towards a virtual observatory for ecosystem services and poverty alleviation

    NASA Astrophysics Data System (ADS)

    Buytaert, W.; Baez, S.; Cuesta, F.; Veliz Rosas, C.

    2010-12-01

    Over the last decades, near real-time environmental observation, technical advances in computer power and cyber-infrastructure, and the development of environmental software algorithms have increased dramatically. The integration of these evolutions, which is commonly referred to as the establishment of a virtual observatory, is one of the major challenges of the next decade for environmental sciences. Worldwide, many coordinated activities are ongoing to make this integration a reality. However, far less attention is paid to the question of how these developments can benefit environmental services management in a poverty alleviation context. Such projects are typically faced with issues of large predictive uncertainties, limited resources, and limited local scientific capacity. At the same time, the complexity of the socio-economic contexts requires a very strong bottom-up oriented and interdisciplinary approach to environmental data collection and processing. In this study, we present three natural resources management cases in the Andes and the Amazon basin, and investigate how "virtual observatory" technology can improve ecosystem management. Each of these case studies presents scientific challenges in terms of model coupling, real-time data assimilation and visualisation for management purposes. The first project deals with water resources management in the Peruvian Andes. Using a rainfall-runoff model, novel visualisations are used to give farmers insight into the water production and regulation capacity of their catchments, which can then be linked to land management practices such as conservation agriculture, wetland protection and grazing density control. In a project in the Amazonian floodplains, optimal allocation of the nesting availability and quality of the giant freshwater turtle is determined using a combined hydraulic model and weather forecasts. Finally, in the rainforest of the Yasuní Biosphere Reserve, Ecuador, biodiversity models are used to quantify the impacts of hunting and logging on community composition and wildlife populations.

  8. Offshore Wind Energy Resource Assessment for Alaska

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doubrawa Moreira, Paula; Scott, George N.; Musial, Walter D.

    This report quantifies Alaska's offshore wind resource capacity while focusing on its unique nature. It is a supplement to the existing U.S. Offshore Wind Resource Assessment, which evaluated the offshore wind resource for all other U.S. states. Together, these reports provide the foundation for the nation's offshore wind value proposition. Both studies were developed by the National Renewable Energy Laboratory. The analysis presented herein represents the first quantitative evidence of the offshore wind energy potential of Alaska. The technical offshore wind resource area in Alaska is larger than the technical offshore resource area of all other coastal U.S. states combined. Despite the abundant wind resource available, significant challenges inhibit large-scale offshore wind deployment in Alaska, such as the remoteness of the resource, its distance from load centers, and the wealth of land available for onshore wind development. Throughout this report, the energy landscape of Alaska is reviewed and a resource assessment analysis is performed in terms of gross and technical offshore capacity and energy potential.

  9. Heterogeneous game resource distributions promote cooperation in spatial prisoner's dilemma game

    NASA Astrophysics Data System (ADS)

    Cui, Guang-Hai; Wang, Zhen; Yang, Yan-Cun; Tian, Sheng-Wen; Yue, Jun

    2018-01-01

    In social networks, individual abilities to establish interactions are always heterogeneous and independent of the number of topological neighbors. We here study the influence of heterogeneous distributions of abilities on the evolution of individual cooperation in the spatial prisoner's dilemma game. First, we introduced a prisoner's dilemma game, taking into account individual heterogeneous abilities to establish games, which are determined by the owned game resources. Second, we studied three types of game resource distributions that follow the power-law property. Simulation results show that the heterogeneous distribution of individual game resources can promote cooperation effectively, and the heterogeneous level of resource distributions has a positive influence on the maintenance of cooperation. Extensive analysis shows that cooperators with large resource capacities can foster cooperator clusters around themselves. Furthermore, when the temptation to defect is high, cooperator clusters in which the central pure cooperators have larger game resource capacities are more stable than other cooperator clusters.
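
    The model's key ingredient, a per-player cap on the number of games set by a power-law-distributed resource, can be reproduced in a toy lattice simulation. The Python sketch below is a deliberately simplified illustration: the lattice size, payoff values, resource cap, and imitation rule are assumptions and not the exact model or parameters of the paper.

        # Toy spatial prisoner's dilemma with heterogeneous game capacities.
        import random

        L, B, ROUNDS = 20, 1.4, 50            # lattice side, temptation payoff, rounds
        random.seed(2)

        coop = [[random.random() < 0.5 for _ in range(L)] for _ in range(L)]
        resource = [[min(4, max(1, int(random.paretovariate(1.5)))) for _ in range(L)]
                    for _ in range(L)]        # power-law game capacities, capped at 1..4

        def neighbors(i, j):
            return [((i - 1) % L, j), ((i + 1) % L, j), (i, (j - 1) % L), (i, (j + 1) % L)]

        def payoff(i, j):
            total = 0.0
            for (x, y) in random.sample(neighbors(i, j), resource[i][j]):
                if coop[i][j]:
                    total += 1.0 if coop[x][y] else 0.0   # R = 1, S = 0
                else:
                    total += B if coop[x][y] else 0.0     # T = B, P = 0
            return total

        for _ in range(ROUNDS):
            pay = [[payoff(i, j) for j in range(L)] for i in range(L)]
            new = [row[:] for row in coop]
            for i in range(L):
                for j in range(L):
                    x, y = random.choice(neighbors(i, j))
                    if pay[x][y] > pay[i][j]:             # imitate a better-off neighbor
                        new[i][j] = coop[x][y]
            coop = new

        print("final cooperator fraction:", sum(c for row in coop for c in row) / L**2)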

  10. Sankofa pediatric HIV disclosure intervention cyber data management: building capacity in a resource-limited setting and ensuring data quality

    PubMed Central

    Catlin, Ann Christine; Fernando, Sumudinie; Gamage, Ruwan; Renner, Lorna; Antwi, Sampson; Tettey, Jonas Kusah; Amisah, Kofi Aikins; Kyriakides, Tassos; Cong, Xiangyu; Reynolds, Nancy R.; Paintsil, Elijah

    2015-01-01

    Prevalence of pediatric HIV disclosure is low in resource-limited settings. Innovative, culturally sensitive, and patient-centered disclosure approaches are needed. Conducting such studies in resource-limited settings is not trivial considering the challenges of capturing, cleaning, and storing clinical research data. To overcome some of these challenges, the Sankofa pediatric disclosure intervention adopted an interactive cyber infrastructure for data capture and analysis. The Sankofa Project database system is built on the HUBzero cyber infrastructure (https://hubzero.org), an open source software platform. The hub database components support: (1) data management – the “databases” component creates, configures, and manages database access, backup, repositories, applications, and access control; (2) data collection – the “forms” component is used to build customized web case report forms that incorporate common data elements and include tailored form submit processing to handle error checking, data validation, and data linkage as the data are stored to the database; and (3) data exploration – the “dataviewer” component provides powerful methods for users to view, search, sort, navigate, explore, map, graph, visualize, aggregate, drill-down, compute, and export data from the database. The Sankofa cyber data management tool supports a user-friendly, secure, and systematic collection of all data. We have screened more than 400 child–caregiver dyads and enrolled nearly 300 dyads, with tens of thousands of data elements. The dataviews have successfully supported all data exploration and analysis needs of the Sankofa Project. Moreover, the ability of the sites to query and view data summaries has proven to be an incentive for collecting complete and accurate data. The data system has all the desirable attributes of an electronic data capture tool. It also provides an added advantage of building data management capacity in resource-limited settings due to its innovative data query and summary views and availability of real-time support by the data management team. PMID:26616131

  11. Sankofa pediatric HIV disclosure intervention cyber data management: building capacity in a resource-limited setting and ensuring data quality.

    PubMed

    Catlin, Ann Christine; Fernando, Sumudinie; Gamage, Ruwan; Renner, Lorna; Antwi, Sampson; Tettey, Jonas Kusah; Amisah, Kofi Aikins; Kyriakides, Tassos; Cong, Xiangyu; Reynolds, Nancy R; Paintsil, Elijah

    2015-01-01

    Prevalence of pediatric HIV disclosure is low in resource-limited settings. Innovative, culturally sensitive, and patient-centered disclosure approaches are needed. Conducting such studies in resource-limited settings is not trivial considering the challenges of capturing, cleaning, and storing clinical research data. To overcome some of these challenges, the Sankofa pediatric disclosure intervention adopted an interactive cyber infrastructure for data capture and analysis. The Sankofa Project database system is built on the HUBzero cyber infrastructure ( https://hubzero.org ), an open source software platform. The hub database components support: (1) data management - the "databases" component creates, configures, and manages database access, backup, repositories, applications, and access control; (2) data collection - the "forms" component is used to build customized web case report forms that incorporate common data elements and include tailored form submit processing to handle error checking, data validation, and data linkage as the data are stored to the database; and (3) data exploration - the "dataviewer" component provides powerful methods for users to view, search, sort, navigate, explore, map, graph, visualize, aggregate, drill-down, compute, and export data from the database. The Sankofa cyber data management tool supports a user-friendly, secure, and systematic collection of all data. We have screened more than 400 child-caregiver dyads and enrolled nearly 300 dyads, with tens of thousands of data elements. The dataviews have successfully supported all data exploration and analysis needs of the Sankofa Project. Moreover, the ability of the sites to query and view data summaries has proven to be an incentive for collecting complete and accurate data. The data system has all the desirable attributes of an electronic data capture tool. It also provides an added advantage of building data management capacity in resource-limited settings due to its innovative data query and summary views and availability of real-time support by the data management team.

  12. Evaluation of resources and environment carrying capacity and socio-economic pressure in typical ecological regions, China

    NASA Astrophysics Data System (ADS)

    Qiusen, Huang; Xinyi, Xu

    2017-04-01

    Since the reform and opening up, socio-economic pressures have led to increasingly tight resource constraints and serious environmental pollution problems in China, especially in typical ecological regions. The ecological system is under severe stress, and resource and environmental issues have become a bottleneck for economic development. Taking the Chen Barag Banner, which is considered a typical ecological region, as an example, the evaluation index system for resources and environment carrying capacity was divided into three subsystems: natural driving force, socio-economic pressure, and ecological health. On the basis of this index system and related data for Chen Barag Banner in 2014, an evaluation model of resources and environment carrying capacity based on a spring model was proposed to analyze the resources and environment carrying state, and the influence of socio-economic pressure on the resources and environment system was assessed using a discretization method for socio-economic data. The results showed that: (1) among the ten towns, the resources and environment systems of Baorixile Town, Huhenuoer Town and Bayankuren Town were overloaded, with Resources and Environment Carrying Capacity (RECC) / Resources and Environment Carrying State (RECS) values of 9.86, 1.37 and 1.22, respectively; (2) the natural driving force indices of Xiwuzhuer Town, Hadatu state-owned farm and Bayanhada Town were 0.40, 0.42 and 0.43, respectively, lower than the others, indicating that natural conditions in these areas were better; (3) the ecological environment of Ewenke Town, Hadatu state-owned farm and Tenihe state-owned farm was the best, as the ecological health indices of these three towns were 0.21, 0.22 and 0.26, respectively, lower than the others; (4) the influence of socio-economic pressure on the resources and environment system was heaviest in Baorixile Town, Hadatu state-owned farm and Tenihe state-owned farm, with social and economic pressure index values of 0.61, 0.32 and 0.30, respectively; (5) discretizing socio-economic pressure onto a 10 km × 10 km grid helped reveal trends in socio-economic pressure within each township that could not be learned from township-scale results; (6) the main factors affecting the environmental carrying capacity of Chen Barag Banner were soil moisture content and per capita water resources; (7) consumption of water and land resources and environmental pollution from agricultural and animal husbandry production were the main causes of socio-economic pressure.

  13. Contextuality as a Resource for Models of Quantum Computation with Qubits

    NASA Astrophysics Data System (ADS)

    Bermejo-Vega, Juan; Delfosse, Nicolas; Browne, Dan E.; Okay, Cihan; Raussendorf, Robert

    2017-09-01

    A central question in quantum computation is to identify the resources that are responsible for quantum speed-up. Quantum contextuality has been recently shown to be a resource for quantum computation with magic states for odd-prime dimensional qudits and two-dimensional systems with real wave functions. The phenomenon of state-independent contextuality poses a priori an obstruction to characterizing the case of regular qubits, the fundamental building block of quantum computation. Here, we establish contextuality of magic states as a necessary resource for a large class of quantum computation schemes on qubits. We illustrate our result with a concrete scheme related to measurement-based quantum computation.

  14. Computing arrival times of firefighting resources for initial attack

    Treesearch

    Romain M. Mees

    1978-01-01

    Dispatching of firefighting resources requires instantaneous or precalculated decisions. A FORTRAN computer program has been developed that can provide a list of resources in order of computed arrival time for initial attack on a fire. The program requires an accurate description of the existing road system and a list of all resources available on a planning unit....
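
    The report does not reproduce the FORTRAN code, but the underlying computation is a shortest-path travel-time calculation over the road network followed by a sort on arrival time. The sketch below illustrates that idea with Dijkstra's algorithm; the road network, resource names, and dispatch delays are hypothetical.

      import heapq

      # Road network as an adjacency map: node -> [(neighbour, travel_minutes), ...] (hypothetical data)
      roads = {
          "fire":      [("jct1", 12), ("jct2", 20)],
          "jct1":      [("fire", 12), ("station_a", 8), ("jct2", 5)],
          "jct2":      [("fire", 20), ("jct1", 5), ("station_b", 15)],
          "station_a": [("jct1", 8)],
          "station_b": [("jct2", 15)],
      }

      def travel_times(graph, source):
          """Dijkstra shortest travel time (minutes) from `source` to every node."""
          dist = {source: 0.0}
          heap = [(0.0, source)]
          while heap:
              d, node = heapq.heappop(heap)
              if d > dist.get(node, float("inf")):
                  continue
              for nbr, w in graph[node]:
                  nd = d + w
                  if nd < dist.get(nbr, float("inf")):
                      dist[nbr] = nd
                      heapq.heappush(heap, (nd, nbr))
          return dist

      # Resources on the planning unit: (name, home node, dispatch delay in minutes) -- hypothetical.
      resources = [("Engine 41", "station_a", 3), ("Crew 7", "station_b", 5), ("Dozer 2", "station_b", 20)]

      t = travel_times(roads, "fire")
      for eta, name in sorted((t[node] + delay, name) for name, node, delay in resources):
          print(f"{name}: {eta:.0f} min")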

  15. Cost Optimal Elastic Auto-Scaling in Cloud Infrastructure

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, S.; Sidhanta, S.; Ganguly, S.; Nemani, R. R.

    2014-12-01

    Today, elastic scaling is a critical part of leveraging the cloud. Elastic scaling refers to adding resources only when they are needed and releasing them when they are not in use, ensuring that compute/server resources are not over-provisioned. Today, Amazon and Windows Azure are the only two platform providers that allow auto-scaling of cloud resources, where servers are automatically added and deleted. However, these solutions fall short on the following key features: (A) they require explicit policy definitions, such as server-load rules, and therefore lack any predictive intelligence to make optimal decisions; (B) they do not decide on the right size of resource and therefore do not produce a cost-optimal resource pool. In a typical cloud deployment model, we consider two types of application scenario: (A) batch processing jobs (the Hadoop/Big Data case); (B) transactional applications (any application that processes continuous request/response transactions). With reference to classical queueing models, we model a scenario in which servers have a price and a capacity (size) and the system can add or delete servers to maintain a certain queue length. Classical queueing models apply to scenarios where the number of servers is constant, so we cannot apply stationary-system analysis in this case. We investigate the following questions: (1) Can we define a job-queue metric that predicts the resource requirement in a quasi-stationary way, and can we map that into an optimal sizing problem? (2) Do we need server-level load measurements (CPU/data) to characterize the size requirement, and how do we learn that based on job type?
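
    As a rough illustration of the quasi-stationary sizing question posed above, the sketch below estimates the required capacity from a predicted arrival rate and picks the cheapest server size that meets it. The instance catalogue, target utilisation, and rates are hypothetical, and the calculation is only a simplified stand-in for a full queueing analysis.

      import math

      # Hypothetical catalogue of server sizes: name -> (requests/sec one server handles, $/hour)
      catalogue = {"small": (50, 0.10), "medium": (120, 0.22), "large": (260, 0.45)}

      def size_pool(arrival_rate, target_utilisation=0.7):
          """Pick the server size and count that meet demand at the lowest hourly cost.

          Quasi-stationary assumption: the arrival rate is treated as constant over the
          scaling interval, so required capacity = arrival_rate / target_utilisation.
          """
          required = arrival_rate / target_utilisation
          best = None
          for name, (capacity, price) in catalogue.items():
              count = math.ceil(required / capacity)
              cost = count * price
              if best is None or cost < best[2]:
                  best = (name, count, cost)
          return best

      # Example: a predicted demand of 400 requests/sec.
      print(size_pool(400.0))   # roughly ('medium', 5, 1.10) under the hypothetical catalogue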

  16. Studies on water resources carrying capacity in Tuhai river basin based on ecological footprint

    NASA Astrophysics Data System (ADS)

    Wang, Chengshuai; Xu, Lirong; Fu, Xin

    2017-05-01

    In this paper, the water ecological footprint (WEF) method was used to evaluate the water resources carrying capacity and water resources sustainability of the Tuhai River Basin in Shandong Province. The results show that: (1) the WEF of the Tuhai River Basin had a fluctuating but overall downward trend from 2003 to 2011; agricultural water accounted for a high proportion and was the major contributor to the WEF, and about 86.9% of the agricultural WEF was used for farmland irrigation; (2) the water resources carrying capacity also showed a general downward trend, driven mostly by natural factors in the basin such as hydrology and meteorology; (3) analysis of the water resources ecological deficit shows that the water resources utilization mode was unhealthy and that it is necessary to improve the efficiency of water resources utilization in the basin; (4) in view of the water resources utilization problems in the study area, well irrigation should be developed extensively at the head of the Yellow River Irrigation Area (YRIA), whereas at the tail of the YRIA water from the Yellow River should be used for irrigation as much as possible, combined with agricultural water-saving measures and controlled groundwater exploitation. Therefore, the combined use of surface water and groundwater in the YRIA is an important way to realize agricultural water saving and the sustainable utilization of water resources in the Tuhai River Basin.
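
    The abstract does not reproduce the authors' equations; for orientation, one commonly used formulation of the water ecological footprint and the associated ecological deficit is sketched below, stated as an assumption rather than as the paper's exact model.

      % One commonly used form of the regional water ecological footprint (an assumption;
      % the abstract does not give the authors' exact equations):
      %   WEF   : water ecological footprint (global hectares)
      %   N     : population;  ef_w : per-capita footprint
      %   \gamma: global equivalence factor for water
      %   W     : volume of water consumed (m^3);  P_w : global average water-production capacity (m^3/ha)
      \[
        \mathrm{WEF} \;=\; N \cdot ef_w \;=\; \gamma \,\frac{W}{P_w},
        \qquad
        \text{water ecological deficit} \;=\; \mathrm{WEF} - \mathrm{WECC},
      \]
      % where WECC denotes the water resources carrying capacity expressed in the same units.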

  17. Resources for health promotion: rhetoric, research and reality.

    PubMed

    Minke, Sharlene Wolbeck; Raine, Kim D; Plotnikoff, Ronald C; Anderson, Donna; Khalema, Ernest; Smith, Cynthia

    2007-01-01

    Canadian political discourse supports the importance of health promotion and advocates the allocation of health resources to health promotion. Furthermore, the current literature frequently identifies financial and human resources as important elements of organizational capacity for health promotion. In the Alberta Heart Health Project (AHHP), we sought to learn if the allocation of health resources in a regionalized health system was congruent with the espoused support for health promotion in Alberta, Canada. The AHHP used a mixed method approach in a time series design. Participants were drawn from multiple organizational levels (i.e., service providers, managers, board members) across all Regional Health Authorities (RHAs). Data were triangulated through multiple collection methods, primarily an organizational capacity survey, analysis of organizational documents, focus groups, and personal interviews. Analysis techniques were drawn from quantitative (i.e., frequency distributions, ANOVAs) and qualitative (i.e., content and thematic analysis) approaches. In most cases, small amounts (<5%) of financial resources were allocated to health promotion in RHAs' core budgets. Respondents reported seeking multiple sources of public health financing to support their health promotion initiatives. Human resources for health promotion were characterized by fragmented responsibilities and short-term work. Furthermore, valuable human resources were consumed in ongoing searches for funding that typically covered short time periods. Resource allocations to health promotion in Alberta RHAs are inconsistent with the current emphasis on health promotion as an organizational priority. Inadequate and unstable funding erodes the RHAs' capacity for health promotion. Sustainable health promotion calls for the assured allocation of adequate, sustainable financial resources.

  18. Resource-poor settings: response, recovery, and research: care of the critically ill and injured during pandemics and disasters: CHEST consensus statement.

    PubMed

    Geiling, James; Burkle, Frederick M; West, T Eoin; Uyeki, Timothy M; Amundson, Dennis; Dominguez-Cherit, Guillermo; Gomersall, Charles D; Lim, Matthew L; Luyckx, Valerie; Sarani, Babak; Christian, Michael D; Devereaux, Asha V; Dichter, Jeffrey R; Kissoon, Niranjan

    2014-10-01

    Planning for mass critical care in resource-poor and constrained settings has been largely ignored, despite large, densely crowded populations who are prone to suffer disproportionately from natural disasters. As a result, disaster response has been suboptimal and in many instances hampered by lack of planning, education and training, information, and communication. The Resource-Poor Settings panel developed five key question domains, defining the term "resource poor" and using the traditional phases of the disaster cycle (mitigation/preparedness/response/recovery). Literature searches were conducted to identify evidence to answer the key questions in these areas. Given a lack of data on which to develop evidence-based recommendations, expert-opinion suggestions were developed, and consensus was achieved using a modified Delphi process. The five key questions were as follows: definition; capacity building and mitigation; what resources can we bring to bear to assist/surge; response; and reconstitution and recovery of host nation critical care capabilities. Addressing these led the panel to offer 33 suggestions. Because of the large number of suggestions, the results have been separated into two sections: part I, Infrastructure/Capacity, in the accompanying article, and part II, Response/Recovery/Research, in this article. A lack of rudimentary ICU resources and capacity to enhance services plagues resource-poor or constrained settings. Capacity building therefore entails preventative strategies and strengthening of primary health services. Assistance from other countries and organizations is often needed to mount a surge response. Moreover, the disengagement of these responding groups and host country recovery require active planning. Future improvements in all phases require active research activities.

  19. Research on the tourism resource development from the perspective of network capability-Taking Wuxi Huishan Ancient Town as an example

    NASA Astrophysics Data System (ADS)

    Bao, Yanli; Hua, Hefeng

    2017-03-01

    Network capability is an enterprise's ability to set up, manage, maintain and use a variety of relations with other enterprises, and to obtain resources for improving competitiveness. Tourism in China is in a transformation period from sightseeing to leisure and vacation. Scenic spots as well as tourism enterprises can learn from other enterprises in the process of resource development, and build up their own network relations in order to obtain the resources needed for their survival and development. Through effective management of network relations, the performance of resource development can be improved. By analyzing the literature on network capability and the case of Wuxi Huishan Ancient Town, the role of network capability in tourism resource development is explored and a resource development path is built from the perspective of network capability. Finally, a tourism resource development process model based on network capability is proposed. This model mainly includes setting up a network vision, resource identification, resource acquisition, resource utilization and tourism project development. In these steps, network construction, network management and improving network center status are the key points.

  20. A Review of Computer Science Resources for Learning and Teaching with K-12 Computing Curricula: An Australian Case Study

    ERIC Educational Resources Information Center

    Falkner, Katrina; Vivian, Rebecca

    2015-01-01

    To support teachers to implement Computer Science curricula into classrooms from the very first year of school, teachers, schools and organisations seek quality curriculum resources to support implementation and teacher professional development. Until now, many Computer Science resources and outreach initiatives have targeted K-12 school-age…

  1. Performance Analysis of Cloud Computing Architectures Using Discrete Event Simulation

    NASA Technical Reports Server (NTRS)

    Stocker, John C.; Golomb, Andrew M.

    2011-01-01

    Cloud computing offers the economic benefit of on-demand resource allocation to meet changing enterprise computing needs. However, compared to traditional hosting, the flexibility of cloud computing comes at the cost of less predictable application and service performance. Cloud computing relies on resource scheduling in a virtualized, network-centric server environment, which makes static performance analysis infeasible. We developed a discrete event simulation model to evaluate the overall effectiveness of organizations in executing their workflow in traditional and cloud computing architectures. The two-part model framework characterizes both the demand, using a probability distribution for each type of service request, and the enterprise computing resource constraints. Our simulations provide quantitative analysis to design and provision computing architectures that maximize overall mission effectiveness. We share our analysis of key resource constraints in cloud computing architectures and findings on the appropriateness of cloud computing in various applications.
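
    The paper's simulation model is not reproduced in the abstract; the sketch below is a minimal discrete-event simulation of service requests contending for a fixed pool of servers, illustrating the general technique. The arrival and service distributions, rates, and server counts are assumptions for illustration only.

      import heapq, random

      def simulate(n_servers=8, arrival_rate=5.0, service_rate=0.8, n_requests=20000, seed=1):
          """Minimal discrete-event simulation: Poisson arrivals, exponential service times,
          a fixed pool of identical servers, FIFO queueing. Returns the mean response time."""
          rng = random.Random(seed)
          t = 0.0
          free_at = [0.0] * n_servers          # earliest time each server becomes free
          heapq.heapify(free_at)
          total_response = 0.0
          for _ in range(n_requests):
              t += rng.expovariate(arrival_rate)       # next request arrives
              server_free = heapq.heappop(free_at)     # earliest available server
              start = max(t, server_free)              # wait in queue if all servers are busy
              finish = start + rng.expovariate(service_rate)
              heapq.heappush(free_at, finish)
              total_response += finish - t
          return total_response / n_requests

      # Compare pool sizes under the same demand (e.g., a fixed cluster vs. a scaled-out pool).
      for servers in (8, 12, 16):
          print(servers, "servers -> mean response", round(simulate(n_servers=servers), 2))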

  2. Development of a decision support system for small reservoir irrigation systems in rainfed and drought prone areas.

    PubMed

    Balderama, Orlando F

    2010-01-01

    An integrated computer program called the Cropping System and Water Management Model (CSWM), with a three-step structure (expert system-simulation-optimization), was developed to provide a range of decision support for rainfed farming, i.e. crop selection, scheduling and optimisation. The system was used for agricultural planning with emphasis on sustainable agriculture in rainfed areas through the use of small farm reservoirs for increased production and for resource conservation and management. The model was applied using crop, soil, climate and water resource data from the Philippines. Four sets of data representing the different rainfall classifications of the country were collected, analysed, and used as input to the model. Simulations were also done on date of planting, probabilities of wet and dry periods, and various capacities of the water reservoir used for supplemental irrigation. Through the analysis, useful information was obtained to determine suitable crops in the region and cropping schedules and patterns appropriate to the specific climate conditions. In addition, optimisation of land and water resource use can be achieved in areas partly irrigated by small reservoirs.
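
    As an illustration of the simulation step such a decision-support tool performs, the sketch below runs a simple daily water balance for a small farm reservoir supplying supplemental irrigation. The catchment, reservoir, and crop-demand parameters are hypothetical and are not taken from the CSWM model.

      def simulate_reservoir(rainfall_mm, capacity_m3=2000.0, catchment_m2=50000.0,
                             runoff_coeff=0.25, field_m2=10000.0, demand_mm_per_day=4.0,
                             evap_mm_per_day=5.0, surface_m2=800.0):
          """Daily water balance of a small farm reservoir used for supplemental irrigation.

          Inflow  = runoff_coeff * rainfall * catchment area.
          Outflow = irrigation demand not met by rain on the field, plus open-water evaporation.
          Returns the fraction of days on which the full irrigation demand could be met.
          """
          storage, days_met = 0.0, 0
          for rain in rainfall_mm:
              storage += runoff_coeff * (rain / 1000.0) * catchment_m2       # runoff inflow (m3)
              storage -= (evap_mm_per_day / 1000.0) * surface_m2             # evaporation loss (m3)
              storage = min(max(storage, 0.0), capacity_m3)                  # spill / dry-out
              deficit_mm = max(demand_mm_per_day - rain, 0.0)                # crop water not met by rain
              need = (deficit_mm / 1000.0) * field_m2                        # irrigation requirement (m3)
              if storage >= need:
                  storage -= need
                  days_met += 1
          return days_met / len(rainfall_mm)

      # Example with a hypothetical 30-day rainfall record (mm/day).
      rain = [0, 0, 12, 0, 0, 0, 25, 0, 0, 0] * 3
      print("fraction of days fully irrigated:", round(simulate_reservoir(rain), 2))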

  3. The medical science DMZ: a network design pattern for data-intensive medical science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peisert, Sean; Dart, Eli; Barnett, William

    We describe a detailed solution for maintaining high-capacity, data-intensive network flows (eg, 10, 40, 100 Gbps+) in a scientific, medical context while still adhering to security and privacy laws and regulations. High-end networking, packet-filter firewalls, network intrusion-detection systems. We describe a "Medical Science DMZ" concept as an option for secure, high-volume transport of large, sensitive datasets between research institutions over national research networks, and give 3 detailed descriptions of implemented Medical Science DMZs. The exponentially increasing amounts of "omics" data, high-quality imaging, and other rapidly growing clinical datasets have resulted in the rise of biomedical research "Big Data." The storage, analysis, and network resources required to process these data and integrate them into patient diagnoses and treatments have grown to scales that strain the capabilities of academic health centers. Some data are not generated locally and cannot be sustained locally, and shared data repositories such as those provided by the National Library of Medicine, the National Cancer Institute, and international partners such as the European Bioinformatics Institute are rapidly growing. The ability to store and compute using these data must therefore be addressed by a combination of local, national, and industry resources that exchange large datasets. Maintaining data-intensive flows that comply with the Health Insurance Portability and Accountability Act (HIPAA) and other regulations presents a new challenge for biomedical research. We describe a strategy that marries performance and security by borrowing from and redefining the concept of a Science DMZ, a framework that is used in physical sciences and engineering research to manage high-capacity data flows. By implementing a Medical Science DMZ architecture, biomedical researchers can leverage the scale provided by high-performance computer and cloud storage facilities and national high-speed research networks while preserving privacy and meeting regulatory requirements.

  4. The Medical Science DMZ.

    PubMed

    Peisert, Sean; Barnett, William; Dart, Eli; Cuff, James; Grossman, Robert L; Balas, Edward; Berman, Ari; Shankar, Anurag; Tierney, Brian

    2016-11-01

    We describe use cases and an institutional reference architecture for maintaining high-capacity, data-intensive network flows (e.g., 10, 40, 100 Gbps+) in a scientific, medical context while still adhering to security and privacy laws and regulations. High-end networking, packet filter firewalls, network intrusion detection systems. We describe a "Medical Science DMZ" concept as an option for secure, high-volume transport of large, sensitive data sets between research institutions over national research networks. The exponentially increasing amounts of "omics" data, the rapid increase of high-quality imaging, and other rapidly growing clinical data sets have resulted in the rise of biomedical research "big data." The storage, analysis, and network resources required to process these data and integrate them into patient diagnoses and treatments have grown to scales that strain the capabilities of academic health centers. Some data are not generated locally and cannot be sustained locally, and shared data repositories such as those provided by the National Library of Medicine, the National Cancer Institute, and international partners such as the European Bioinformatics Institute are rapidly growing. The ability to store and compute using these data must therefore be addressed by a combination of local, national, and industry resources that exchange large data sets. Maintaining data-intensive flows that comply with HIPAA and other regulations presents a new challenge for biomedical research. Recognizing this, we describe a strategy that marries performance and security by borrowing from and redefining the concept of a "Science DMZ"-a framework that is used in physical sciences and engineering research to manage high-capacity data flows. By implementing a Medical Science DMZ architecture, biomedical researchers can leverage the scale provided by high-performance computer and cloud storage facilities and national high-speed research networks while preserving privacy and meeting regulatory requirements. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association.

  5. The medical science DMZ: a network design pattern for data-intensive medical science.

    PubMed

    Peisert, Sean; Dart, Eli; Barnett, William; Balas, Edward; Cuff, James; Grossman, Robert L; Berman, Ari; Shankar, Anurag; Tierney, Brian

    2017-10-06

    We describe a detailed solution for maintaining high-capacity, data-intensive network flows (eg, 10, 40, 100 Gbps+) in a scientific, medical context while still adhering to security and privacy laws and regulations. High-end networking, packet-filter firewalls, network intrusion-detection systems. We describe a "Medical Science DMZ" concept as an option for secure, high-volume transport of large, sensitive datasets between research institutions over national research networks, and give 3 detailed descriptions of implemented Medical Science DMZs. The exponentially increasing amounts of "omics" data, high-quality imaging, and other rapidly growing clinical datasets have resulted in the rise of biomedical research "Big Data." The storage, analysis, and network resources required to process these data and integrate them into patient diagnoses and treatments have grown to scales that strain the capabilities of academic health centers. Some data are not generated locally and cannot be sustained locally, and shared data repositories such as those provided by the National Library of Medicine, the National Cancer Institute, and international partners such as the European Bioinformatics Institute are rapidly growing. The ability to store and compute using these data must therefore be addressed by a combination of local, national, and industry resources that exchange large datasets. Maintaining data-intensive flows that comply with the Health Insurance Portability and Accountability Act (HIPAA) and other regulations presents a new challenge for biomedical research. We describe a strategy that marries performance and security by borrowing from and redefining the concept of a Science DMZ, a framework that is used in physical sciences and engineering research to manage high-capacity data flows. By implementing a Medical Science DMZ architecture, biomedical researchers can leverage the scale provided by high-performance computer and cloud storage facilities and national high-speed research networks while preserving privacy and meeting regulatory requirements. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association.

  6. The Medical Science DMZ

    PubMed Central

    Barnett, William; Dart, Eli; Cuff, James; Grossman, Robert L; Balas, Edward; Berman, Ari; Shankar, Anurag; Tierney, Brian

    2016-01-01

    Objective We describe use cases and an institutional reference architecture for maintaining high-capacity, data-intensive network flows (e.g., 10, 40, 100 Gbps+) in a scientific, medical context while still adhering to security and privacy laws and regulations. Materials and Methods High-end networking, packet filter firewalls, network intrusion detection systems. Results We describe a “Medical Science DMZ” concept as an option for secure, high-volume transport of large, sensitive data sets between research institutions over national research networks. Discussion The exponentially increasing amounts of “omics” data, the rapid increase of high-quality imaging, and other rapidly growing clinical data sets have resulted in the rise of biomedical research “big data.” The storage, analysis, and network resources required to process these data and integrate them into patient diagnoses and treatments have grown to scales that strain the capabilities of academic health centers. Some data are not generated locally and cannot be sustained locally, and shared data repositories such as those provided by the National Library of Medicine, the National Cancer Institute, and international partners such as the European Bioinformatics Institute are rapidly growing. The ability to store and compute using these data must therefore be addressed by a combination of local, national, and industry resources that exchange large data sets. Maintaining data-intensive flows that comply with HIPAA and other regulations presents a new challenge for biomedical research. Recognizing this, we describe a strategy that marries performance and security by borrowing from and redefining the concept of a “Science DMZ”—a framework that is used in physical sciences and engineering research to manage high-capacity data flows. Conclusion By implementing a Medical Science DMZ architecture, biomedical researchers can leverage the scale provided by high-performance computer and cloud storage facilities and national high-speed research networks while preserving privacy and meeting regulatory requirements. PMID:27136944

  7. Operating Dedicated Data Centers - Is It Cost-Effective?

    NASA Astrophysics Data System (ADS)

    Ernst, M.; Hogue, R.; Hollowell, C.; Strecker-Kellog, W.; Wong, A.; Zaytsev, A.

    2014-06-01

    The advent of cloud computing centres such as Amazon's EC2 and Google's Computing Engine has elicited comparisons with dedicated computing clusters. Discussions on appropriate usage of cloud resources (both academic and commercial) and costs have ensued. This presentation discusses a detailed analysis of the costs of operating and maintaining the RACF (RHIC and ATLAS Computing Facility) compute cluster at Brookhaven National Lab and compares them with the cost of cloud computing resources under various usage scenarios. An extrapolation of likely future cost effectiveness of dedicated computing resources is also presented.
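
    The presentation's actual cost figures are not given in the abstract; the back-of-the-envelope sketch below only illustrates the kind of comparison involved, amortizing hypothetical capital and operating costs into a per-core-hour rate and contrasting it with an assumed on-demand cloud price under different utilization scenarios.

      # Back-of-the-envelope cost-per-core-hour comparison (all figures hypothetical,
      # not taken from the RACF analysis described in the presentation).
      capex_per_core = 250.0          # purchase cost per core, amortized over the lifetime below ($)
      lifetime_years = 4
      opex_per_core_year = 60.0       # power, cooling, staff, space ($/core/year)
      cloud_price_per_core_hour = 0.05

      hours_per_year = 24 * 365
      dedicated = (capex_per_core / lifetime_years + opex_per_core_year) / hours_per_year

      print(f"dedicated: ${dedicated:.4f} per core-hour at full allocation")
      for utilization in (0.9, 0.7, 0.5):
          effective = dedicated / utilization   # idle cores still cost money
          print(f"  at {utilization:.0%} utilization: ${effective:.4f} vs cloud on-demand ${cloud_price_per_core_hour:.4f}")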

  8. Computing the Envelope for Stepwise-Constant Resource Allocations

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Computing tight resource-level bounds is a fundamental problem in the construction of flexible plans with resource utilization. In this paper we describe an efficient algorithm that builds a resource envelope, the tightest possible such bound. The algorithm is based on transforming the temporal network of resource consuming and producing events into a flow network with nodes equal to the events and edges equal to the necessary predecessor links between events. A staged maximum flow problem on the network is then used to compute the time of occurrence and the height of each step of the resource envelope profile. Each stage has the same computational complexity of solving a maximum flow problem on the entire flow network. This makes this method computationally feasible and promising for use in the inner loop of flexible-time scheduling algorithms.
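
    The staged envelope algorithm itself is beyond a short sketch, but its inner primitive is an ordinary maximum-flow computation on the transformed network. The toy example below shows that primitive using networkx; the network, capacities, and node names are hypothetical and do not come from the paper.

      import networkx as nx

      # A toy flow network standing in for the transformed temporal network: nodes are
      # resource-producing/consuming events, edge capacities reflect the events'
      # resource increments and predecessor links (all values hypothetical).
      G = nx.DiGraph()
      G.add_edge("source", "e1", capacity=3)   # producer events are fed from the source
      G.add_edge("source", "e2", capacity=2)
      G.add_edge("e1", "e3", capacity=2)       # necessary predecessor link between events
      G.add_edge("e2", "e3", capacity=2)
      G.add_edge("e3", "sink", capacity=4)     # consumer events drain to the sink

      flow_value, flow_dict = nx.maximum_flow(G, "source", "sink")
      print("max flow:", flow_value)           # one stage of the envelope computation
      print("flow on e1->e3:", flow_dict["e1"]["e3"])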

  9. Developing a Personnel Capacity Indicator for a high turnover Cartographic Production Sector

    NASA Astrophysics Data System (ADS)

    Mandarino, Flávia; Pessôa, Leonardo A. M.

    2018-05-01

    This paper describes a framework for the development of an indicator for human resources capacity management in a military organization responsible for nautical chart production. A graphic chart of the results of the COPPE-COSENZA model (Cosenza et al. 2015) is used to present personnel capacity within a high-turnover environment. The specific skills required for nautical chart production, together with the turnover rate, require continuous and adequate personnel incorporation and capacity building through education and on-the-job training. The adopted approach establishes quantitative values to fulfill quality requirements, and also presents graphically a profile of the human resources for a specific job to facilitate diagnosis and corrective actions.

  10. Slot-like capacity and resource-like coding in a neural model of multiple-item working memory.

    PubMed

    Standage, Dominic; Pare, Martin

    2018-06-27

    For the past decade, research on the storage limitations of working memory has been dominated by two fundamentally different hypotheses. On the one hand, the contents of working memory may be stored in a limited number of `slots', each with a fixed resolution. On the other hand, any number of items may be stored, but with decreasing resolution. These two hypotheses have been invaluable in characterizing the computational structure of working memory, but neither provides a complete account of the available experimental data, nor speaks to the neural basis of the limitations it characterizes. To address these shortcomings, we simulated a multiple-item working memory task with a cortical network model, the cellular resolution of which allowed us to quantify the coding fidelity of memoranda as a function of memory load, as measured by the discriminability, regularity and reliability of simulated neural spiking. Our simulations account for a wealth of neural and behavioural data from human and non-human primate studies, and they demonstrate that feedback inhibition lowers both capacity and coding fidelity. Because the strength of inhibition scales with the number of items stored by the network, increasing this number progressively lowers fidelity until capacity is reached. Crucially, the model makes specific, testable predictions for neural activity on multiple-item working memory tasks.

  11. Healthcare Provider Perceptions of Causes and Consequences of ICU Capacity Strain in a Large Publicly Funded Integrated Health Region: A Qualitative Study.

    PubMed

    Bagshaw, Sean M; Opgenorth, Dawn; Potestio, Melissa; Hastings, Stephanie E; Hepp, Shelanne L; Gilfoyle, Elaine; McKinlay, David; Boucher, Paul; Meier, Michael; Parsons-Leigh, Jeanna; Gibney, R T Noel; Zygun, David A; Stelfox, Henry T

    2017-04-01

    Discrepancy in the supply-demand relationship for critical care services precipitates a strain on ICU capacity. Strain can lead to suboptimal quality of care and burnout among providers and contribute to inefficient health resource utilization. We engaged interprofessional healthcare providers to explore their perceptions of the sources, impact, and strategies to manage capacity strain. Qualitative study using a conventional thematic analysis. Nine ICUs across Alberta, Canada. Nineteen focus groups (n = 122 participants). None. Participants' perspectives on strain on ICU capacity and its perceived impact on providers, families, and patient care were explored. Participants defined "capacity strain" as a discrepancy between the availability of ICU beds, providers, and ICU resources (supply) and the need to admit and provide care for critically ill patients (demand). Four interrelated themes of contributors to strain were characterized (each with subthemes): patient/family related, provider related, resource related, and health system related. Patient/family-related subthemes were "increasing patient complexity/acuity," along with patient-provider communication issues ("paucity of advance care planning and goals-of-care designation," "mismatches between patient/family and provider expectations," and "timeliness of end-of-life care planning"). Provider-related factor subthemes were nursing workforce related ("nurse attrition," "inexperienced workforce," "limited mentoring opportunities," and "high patient-to-nurse ratios") and physician related ("frequent turnover/handover" and "variations in care plan"). Resource-related subthemes were "reduced service capability after hours" and "physical bed shortages." Health system-related subthemes were "variable ICU utilization," "preferential "bed" priority for other services," and "high ward bed occupancy." Participants perceived that strain had negative implications for patients ("reduced quality and safety of care" and "disrupted opportunities for patient- and family-centered care"), providers ("increased workload," "moral distress," and "burnout"), and the health system ("unnecessary, excessive, and inefficient resource utilization"). Engagement with frontline critical care providers is essential for understanding their experiences and perspectives regarding strained capacity and for the development of sustainable strategies for improvement.

  12. Technological Innovation and Developmental Strategies for Sustainable Management of Aquatic Resources in Developing Countries

    NASA Astrophysics Data System (ADS)

    Agboola, Julius Ibukun

    2014-12-01

    Sustainable use and allocation of aquatic resources including water resources require implementation of ecologically appropriate technologies, efficient and relevant to local needs. Despite the numerous international agreements and provisions on transfer of technology, this has not been successfully achieved in developing countries. While reviewing some challenges to technological innovations and developments (TID), this paper analyzes five TID strategic approaches centered on grassroots technology development and provision of localized capacity for sustainable aquatic resources management. Three case studies provide examples of successful implementation of these strategies. Success requires the provision of localized capacity to manage technology through knowledge empowerment in rural communities situated within a framework of clear national priorities for technology development.

  13. Technological innovation and developmental strategies for sustainable management of aquatic resources in developing countries.

    PubMed

    Agboola, Julius Ibukun

    2014-12-01

    Sustainable use and allocation of aquatic resources including water resources require implementation of ecologically appropriate technologies, efficient and relevant to local needs. Despite the numerous international agreements and provisions on transfer of technology, this has not been successfully achieved in developing countries. While reviewing some challenges to technological innovations and developments (TID), this paper analyzes five TID strategic approaches centered on grassroots technology development and provision of localized capacity for sustainable aquatic resources management. Three case studies provide examples of successful implementation of these strategies. Success requires the provision of localized capacity to manage technology through knowledge empowerment in rural communities situated within a framework of clear national priorities for technology development.

  14. Comparing Resource Adequacy Metrics and Their Influence on Capacity Value: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibanez, E.; Milligan, M.

    2014-04-01

    Traditional probabilistic methods have been used to evaluate resource adequacy. The increasing presence of variable renewable generation in power systems presents a challenge to these methods because, unlike thermal units, variable renewable generation levels change over time as they are driven by meteorological events. Thus, capacity value calculations for these resources are often performed according to simple rules of thumb. This paper follows the recommendations of the North American Electric Reliability Corporation's Integration of Variable Generation Task Force to include variable generation in the calculation of resource adequacy and compares different reliability metrics. Examples are provided using the Western Interconnection footprint under different variable generation penetrations.
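
    As background for the reliability metrics being compared, the sketch below computes a conventional loss-of-load expectation (LOLE) by convolving two-state unit outage models into a capacity outage probability table. The fleet, forced outage rates, and load duration curve are hypothetical, and this is not the paper's exact methodology.

      from collections import defaultdict

      # Thermal fleet: (capacity in MW, forced outage rate) -- hypothetical units.
      units = [(400, 0.05), (400, 0.05), (300, 0.08), (200, 0.04), (200, 0.04)]

      # Build the capacity outage probability table by convolving each unit's two-state model.
      available = {0.0: 1.0}                       # available capacity (MW) -> probability
      for cap, forced_outage_rate in units:
          nxt = defaultdict(float)
          for c, p in available.items():
              nxt[c + cap] += p * (1 - forced_outage_rate)   # unit in service
              nxt[c] += p * forced_outage_rate               # unit on forced outage
          available = dict(nxt)

      def lolp(load):
          """Probability that available capacity falls short of the load."""
          return sum(p for c, p in available.items() if c < load)

      # LOLE over a hypothetical load duration curve, in hours/year.
      hourly_loads = [1200] * 200 + [1000] * 2000 + [800] * 6560
      lole_hours = sum(lolp(load) for load in hourly_loads)
      print(f"LOLE = {lole_hours:.2f} hours/year")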

  15. Orthopaedic Trauma Care Capacity Assessment and Strategic Planning in Ghana: Mapping a Way Forward.

    PubMed

    Stewart, Barclay T; Gyedu, Adam; Tansley, Gavin; Yeboah, Dominic; Amponsah-Manu, Forster; Mock, Charles; Labi-Addo, Wilfred; Quansah, Robert

    2016-12-07

    Orthopaedic conditions incur more than 52 million disability-adjusted life years annually worldwide. This burden disproportionately affects low and middle-income countries, which are least equipped to provide orthopaedic care. We aimed to assess orthopaedic capacity in Ghana, describe spatial access to orthopaedic care, and identify hospitals that would most improve access to care if their capacity was improved. Seventeen perioperative and orthopaedic trauma care-related items were selected from the World Health Organization's Guidelines for Essential Trauma Care. Direct inspection and structured interviews with hospital staff were used to assess resource availability and factors contributing to deficiencies at 40 purposively sampled facilities. Cost-distance analyses described population-level spatial access to orthopaedic trauma care. Facilities for targeted capability improvement were identified through location-allocation modeling. Orthopaedic trauma care assessment demonstrated marked deficiencies. Some deficient resources were low cost (e.g., spinal immobilization, closed reduction capabilities, and prosthetics for amputees). Resource nonavailability resulted from several contributing factors (e.g., absence of equipment, technology breakage, lack of training). Implants were commonly prohibitively expensive. Building basic orthopaedic care capacity at 15 hospitals without such capacity would improve spatial access to basic care from 74.9% to 83.0% of the population (uncertainty interval [UI] of 81.2% to 83.6%), providing access for an additional 2,169,714 Ghanaians. The availability of several low-cost resources could be better supplied by improvements in organization and training for orthopaedic trauma care. There is a critical need to advocate and provide funding for orthopaedic resources. These initiatives might be particularly effective if aimed at hospitals that could provide care to a large proportion of the population.
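
    The study's location-allocation model is not detailed in the abstract; the sketch below illustrates the general idea with a greedy maximal-covering heuristic that repeatedly upgrades the hospital adding the most newly covered population. Hospitals, coverage sets, and population figures are hypothetical.

      # hospital -> set of population-cluster IDs within the access threshold (e.g., 2-hour travel)
      coverage = {
          "H1": {1, 2, 3, 4},
          "H2": {3, 4, 5},
          "H3": {6, 7},
          "H4": {2, 7, 8, 9},
      }
      cluster_pop = {1: 120_000, 2: 80_000, 3: 60_000, 4: 150_000,
                     5: 90_000, 6: 40_000, 7: 70_000, 8: 30_000, 9: 110_000}

      def pick_upgrades(k):
          """Greedily choose k hospitals to upgrade, maximizing newly covered population."""
          covered, chosen = set(), []
          for _ in range(k):
              best = max(coverage, key=lambda h: sum(cluster_pop[c] for c in coverage[h] - covered))
              gain = sum(cluster_pop[c] for c in coverage[best] - covered)
              if gain == 0:
                  break
              chosen.append((best, gain))
              covered |= coverage[best]
          return chosen, sum(cluster_pop[c] for c in covered)

      upgrades, people = pick_upgrades(2)
      print(upgrades, "-> total covered population:", people)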

  16. Modeling Pre- and Post- Wildfire Hydrologic Response to Vegetation Change in the Valles Caldera National Preserve, NM

    NASA Astrophysics Data System (ADS)

    Gregory, A. E.; Benedict, K. K.; Zhang, S.; Savickas, J.

    2017-12-01

    Large-scale, high-severity wildfires in forests have become increasingly prevalent in the western United States due to fire exclusion. Although past work has focused on the immediate consequences of wildfire (i.e., runoff magnitude and debris flow), little has been done to understand the post-wildfire hydrologic consequences of vegetation regrowth. Furthermore, vegetation is often characterized by static parameterizations within hydrological models. In order to understand the temporal relationship between hydrologic processes and revegetation, we modularized and partially automated the hydrologic modeling process to increase connectivity between remotely sensed data, the Virtual Watershed Platform (a data management resource, called the VWP), input meteorological data, and the Precipitation-Runoff Modeling System (PRMS). This process was used to run PRMS simulations in the Valles Caldera of NM, an area impacted by the 2011 Las Conchas Fire, before and after the fire to evaluate changes in hydrologic processes. The modeling environment addresses some of the existing challenges faced by hydrological modelers, who are currently limited in their ability to push the boundaries of hydrologic understanding. Specific issues include limited computational resources for modeling processes at large spatial and temporal scales, data storage capacity and accessibility from the modeling platform, computational and time constraints on experimental modeling, and the skills needed to integrate modeling software in ways that have not been explored. By taking an interdisciplinary approach, we were able to address some of these challenges by leveraging the skills of hydrologic, data, and computer scientists and the technical capabilities provided by a combination of on-demand/high-performance computing, distributed data, and cloud services. The hydrologic modeling process was modularized to include options for distributing meteorological data, parameter space experimentation, data format transformation, looping, validation of models, and containerization for enabling new analytic scenarios. The user interacts with the modules through Jupyter Notebooks, which can be connected to an on-demand computing and HPC environment and to data services built as part of the VWP.

  17. [Resources and capacity of emergency trauma care services in Peru].

    PubMed

    Rosales-Mayor, Edmundo; Miranda, J Jaime; Lema, Claudia; López, Luis; Paca-Palao, Ada; Luna, Diego; Huicho, Luis

    2011-09-01

    The objectives of this study were to evaluate the resources and capacity of emergency trauma care services in three Peruvian cities using the WHO report Guidelines for Essential Trauma Care. This was a cross-sectional study in eight public and private healthcare facilities in Lima, Ayacucho, and Pucallpa. Semi-structured questionnaires were applied to the heads of emergency departments with managerial responsibility for resources and capabilities. Considering the profiles and volume of care in each emergency service, most respondents in all three cities classified their currently available resources as inadequate. Comparison of the health facilities showed a shortage in public services and in the provinces (Ayacucho and Pucallpa). There was a widespread perception that both human and physical resources were insufficient, especially in public healthcare facilities and in the provinces.

  18. Distributed Accounting on the Grid

    NASA Technical Reports Server (NTRS)

    Thigpen, William; Hacker, Thomas J.; McGinnis, Laura F.; Athey, Brian D.

    2001-01-01

    By the late 1990s, the Internet was adequately equipped to move vast amounts of data between HPC (High Performance Computing) systems, and efforts were initiated to link the national infrastructure of high performance computational and data storage resources together into a general computational utility 'grid', analogous to the national electrical power grid infrastructure. The purpose of the Computational Grid is to provide dependable, consistent, pervasive, and inexpensive access to computational resources for the computing community in the form of a computing utility. This paper presents a fully distributed view of Grid usage accounting and a methodology for allocating Grid computational resources for use on a Grid computing system.

  19. Mutual research capacity strengthening: a qualitative study of two-way partnerships in public health research

    PubMed Central

    2012-01-01

    Introduction Capacity building has been employed in international health and development sectors to describe the process of ‘experts’ from more resourced countries training people in less resourced countries. Hence the concept has an implicit power imbalance based on ‘expert’ knowledge. In 2011, a health research strengthening workshop was undertaken at Atoifi Adventist Hospital, Solomon Islands to further strengthen research skills of the Hospital and College of Nursing staff and East Kwaio community leaders through partnering in practical research projects. The workshop was based on participatory research frameworks underpinned by decolonising methodologies, which sought to challenge historical power imbalances and inequities. Our research question was, “Is research capacity strengthening a two-way process?” Methods In this qualitative study, five Solomon Islanders and five Australians each responded to four open-ended questions about their experience of the research capacity strengthening workshop and activities: five chose face to face interview, five chose to provide written responses. Written responses and interview transcripts were inductively analysed in NVivo 9. Results Six major themes emerged. These were: Respectful relationships; Increased knowledge and experience with research process; Participation at all stages in the research process; Contribution to public health action; Support and sustain research opportunities; and Managing challenges of capacity strengthening. All researchers identified benefits for themselves, their institution and/or community, regardless of their role or country of origin, indicating that the capacity strengthening had been a two-way process. Conclusions The flexible and responsive process we used to strengthen research capacity was identified as mutually beneficial. Using community-based participatory frameworks underpinned by decolonising methodologies is assisting to redress historical power imbalances and inequities and is helping to sustain the initial steps taken to establish a local research agenda at Atoifi Hospital. It is our experience that embedding mutuality throughout the research capacity strengthening process has had great benefit and may also benefit researchers from more resourced and less resourced countries wanting to partner in research capacity strengthening activities. PMID:23249439

  20. Building capacity to develop an African teaching platform on health workforce development: a collaborative initiative of universities from four sub Saharan countries.

    PubMed

    Amde, Woldekidan Kifle; Sanders, David; Lehmann, Uta

    2014-05-30

    Health systems in many low-income countries remain fragile, and the record of human resource planning and management in Ministries of Health remains very uneven. Public health training institutions face the dual challenge of building human resources capacity in ministries and health services while alleviating their own capacity constraints. This paper reports on an initiative aimed at addressing this dual challenge through the development and implementation of a joint Masters in Public Health (MPH) programme with a focus on health workforce development by four academic institutions from East and Southern Africa, and the building of a joint teaching platform. Data were obtained through interviews and group discussions with stakeholders, direct and participant observations, and reviews of publications and project documents. Data were analysed using thematic analysis. The institutions developed and collaboratively implemented a 'Masters Degree programme with a focus on health workforce development'. It was geared towards strengthening the leadership capacity of Health ministries to develop expertise in health human resources (HRH) planning and management, and simultaneously building the capacity of faculty in curriculum development and innovative educational practices to teach health workforce development. The initiative was configured to facilitate sharing of experience and resources. The implementation of this initiative has been complex, straddling multiple and changing contexts, actors and agendas. Some of these are common to postgraduate programmes with working learners, while others are unique to this particular partnership, such as weak institutional capacity to champion and embed new programmes and approaches to teaching. The partnership, despite significant inherent challenges, has potential for providing real opportunities for building the field and community of practice, and strengthening the staff and organizational capacity of participant institutions. Key learning points of the paper are: • the need for long-term strategies and engagement; • the need for more investment and attention to developing the capacity of academic institutions; • the need to invest specifically in educational/teaching expertise for innovative approaches to teaching and capacity development more broadly; and • the importance of increasing access and support for students who are working adults in public health institutions throughout Africa.

  1. Distributed Problem Solving: Adaptive Networks with a Computer Intermediary Resource. Intelligent Executive Computer Communication

    DTIC Science & Technology

    1991-06-01

    Interim report: Distributed Problem Solving: Adaptive Networks with a Computer Intermediary Resource: Intelligent Executive Computer Communication, by John Lyman and Carla J. Conaway, University of California at Los Angeles. Associated proceedings: The National Conference on Artificial Intelligence, pages 181-184, American Association for Artificial Intelligence, Pittsburgh.

  2. Volunteered Cloud Computing for Disaster Management

    NASA Astrophysics Data System (ADS)

    Evans, J. D.; Hao, W.; Chettri, S. R.

    2014-12-01

    Disaster management relies increasingly on interpreting earth observations and running numerical models, which require significant computing capacity - usually on short notice and at irregular intervals. Peak computing demand during event detection, hazard assessment, or incident response may exceed agency budgets; however some of it can be met through volunteered computing, which distributes subtasks to participating computers via the Internet. This approach has enabled large projects in mathematics, basic science, and climate research to harness the slack computing capacity of thousands of desktop computers. This capacity is likely to diminish as desktops give way to battery-powered mobile devices (laptops, smartphones, tablets) in the consumer market; but as cloud computing becomes commonplace, it may offer significant slack capacity -- if its users are given an easy, trustworthy mechanism for participating. Such a "volunteered cloud computing" mechanism would also offer several advantages over traditional volunteered computing: tasks distributed within a cloud have fewer bandwidth limitations; granular billing mechanisms allow small slices of "interstitial" computing at no marginal cost; and virtual storage volumes allow in-depth, reversible machine reconfiguration. Volunteered cloud computing is especially suitable for "embarrassingly parallel" tasks, including ones requiring large data volumes: examples in disaster management include near-real-time image interpretation, pattern / trend detection, or large model ensembles. In the context of a major disaster, we estimate that cloud users (if suitably informed) might volunteer hundreds to thousands of CPU cores across a large provider such as Amazon Web Services. To explore this potential, we are building a volunteered cloud computing platform and targeting it to a disaster management context. Using a lightweight, fault-tolerant network protocol, this platform helps cloud users join parallel computing projects; automates reconfiguration of their virtual machines; ensures accountability for donated computing; and optimizes the use of "interstitial" computing. Initial applications include fire detection from multispectral satellite imagery and flood risk mapping through hydrological simulations.
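
    The platform's protocol is not specified in the abstract; the toy coordinator below only illustrates the fault-tolerance idea for embarrassingly parallel work: subtasks are handed to volunteer workers and re-queued whenever a volunteer drops out. Names, failure rates, and the tile workload are hypothetical.

      import random
      from collections import deque

      def run_volunteer_pool(tiles, n_volunteers=5, failure_rate=0.2, seed=42):
          """Toy coordinator: hand out tiles to volunteer workers, re-queue on failure."""
          rng = random.Random(seed)
          pending = deque(tiles)
          done, attempts = {}, 0
          while pending:
              tile = pending.popleft()
              volunteer = f"vol-{rng.randrange(n_volunteers)}"
              attempts += 1
              if rng.random() < failure_rate:           # volunteer dropped out mid-task
                  pending.append(tile)                  # fault tolerance: put the work back
                  continue
              done[tile] = f"processed by {volunteer}"  # e.g., fire detection on this tile
          return done, attempts

      results, attempts = run_volunteer_pool([f"tile-{i}" for i in range(20)])
      print(len(results), "tiles completed in", attempts, "attempts")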

  3. A set partitioning reformulation for the multiple-choice multidimensional knapsack problem

    NASA Astrophysics Data System (ADS)

    Voß, Stefan; Lalla-Ruiz, Eduardo

    2016-05-01

    The Multiple-choice Multidimensional Knapsack Problem (MMKP) is a well-known NP-hard combinatorial optimization problem that has received a lot of attention from the research community as it can be easily translated to several real-world problems arising in areas such as allocating resources, reliability engineering, cognitive radio networks, cloud computing, etc. In this regard, an exact model that is able to provide high-quality feasible solutions for solving it or being partially included in algorithmic schemes is desirable. The MMKP basically consists of finding a subset of objects that maximizes the total profit while observing some capacity restrictions. In this article a reformulation of the MMKP as a set partitioning problem is proposed to allow for new insights into modelling the MMKP. The computational experimentation provides new insights into the problem itself and shows that the new model is able to improve on the best of the known results for some of the most common benchmark instances.
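
    For readers unfamiliar with the MMKP, the following minimal sketch makes the problem statement concrete: exactly one item is chosen from each class so that total profit is maximized while every resource dimension stays within capacity. The instance data are made up for illustration, and brute-force enumeration is used only because the example is tiny; it is not the set partitioning reformulation proposed in the article.

```python
# Minimal brute-force sketch of the Multiple-choice Multidimensional Knapsack
# Problem (MMKP): pick exactly one item per class, maximise total profit,
# respect every resource dimension. Instance data below are illustrative.
from itertools import product

# Each class is a list of (profit, [resource usage per dimension]) items.
classes = [
    [(10, [2, 3]), (12, [4, 1])],
    [(7,  [1, 2]), (9,  [3, 3])],
    [(5,  [2, 1]), (8,  [2, 4])],
]
capacity = [7, 7]  # one bound per resource dimension

best_profit, best_choice = -1, None
for choice in product(*classes):                      # one item per class
    usage = [sum(item[1][d] for item in choice) for d in range(len(capacity))]
    if all(u <= c for u, c in zip(usage, capacity)):  # multidimensional feasibility
        profit = sum(item[0] for item in choice)
        if profit > best_profit:
            best_profit, best_choice = profit, choice

print("best profit:", best_profit, "choice:", best_choice)
```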

  4. Platform Architecture for Decentralized Positioning Systems.

    PubMed

    Kasmi, Zakaria; Norrdine, Abdelmoumen; Blankenbach, Jörg

    2017-04-26

    A platform architecture for positioning systems is essential for the realization of a flexible localization system, which interacts with other systems and supports various positioning technologies and algorithms. The decentralized processing of a position enables pushing the application-level knowledge into a mobile station and avoids the communication with a central unit such as a server or a base station. In addition, the calculation of the position on low-cost and resource-constrained devices presents a challenge due to the limited computing, storage capacity, as well as power supply. Therefore, we propose a platform architecture that enables the design of a system with the reusability of the components, extensibility (e.g., with other positioning technologies) and interoperability. Furthermore, the position is computed on a low-cost device such as a microcontroller, which simultaneously performs additional tasks such as data collecting or preprocessing based on an operating system. The platform architecture is designed, implemented and evaluated on the basis of two positioning systems: a field strength system and a time of arrival-based positioning system.
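
    As a concrete illustration of the lightweight position computation such a node might run, the sketch below estimates a 2-D position from time-of-arrival ranges to known anchors using linearized least squares. The anchor layout and noise level are illustrative assumptions, not details of the evaluated system.

```python
# Minimal sketch of a time-of-arrival position fix via linearised least squares,
# the kind of lightweight computation a resource-constrained node could run.
# Anchor coordinates and the noise level are illustrative assumptions.
import numpy as np

def toa_position(anchors, ranges):
    """Estimate a 2-D position from anchor coordinates and measured ranges.

    Subtracting the first range equation from the others removes the quadratic
    term in the unknown position, leaving a small linear system A x = b.
    """
    a0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
truth = np.array([3.0, 4.0])
ranges = np.linalg.norm(anchors - truth, axis=1) + np.random.normal(0, 0.05, 4)
print("estimated position:", toa_position(anchors, ranges))
```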

  5. Platform Architecture for Decentralized Positioning Systems

    PubMed Central

    Kasmi, Zakaria; Norrdine, Abdelmoumen; Blankenbach, Jörg

    2017-01-01

    A platform architecture for positioning systems is essential for the realization of a flexible localization system, which interacts with other systems and supports various positioning technologies and algorithms. The decentralized processing of a position enables pushing the application-level knowledge into a mobile station and avoids the communication with a central unit such as a server or a base station. In addition, the calculation of the position on low-cost and resource-constrained devices presents a challenge due to the limited computing, storage capacity, as well as power supply. Therefore, we propose a platform architecture that enables the design of a system with the reusability of the components, extensibility (e.g., with other positioning technologies) and interoperability. Furthermore, the position is computed on a low-cost device such as a microcontroller, which simultaneously performs additional tasks such as data collecting or preprocessing based on an operating system. The platform architecture is designed, implemented and evaluated on the basis of two positioning systems: a field strength system and a time of arrival-based positioning system. PMID:28445414

  6. Method and system for data clustering for very large databases

    NASA Technical Reports Server (NTRS)

    Livny, Miron (Inventor); Zhang, Tian (Inventor); Ramakrishnan, Raghu (Inventor)

    1998-01-01

    Multi-dimensional data contained in very large databases is efficiently and accurately clustered to determine patterns therein and extract useful information from such patterns. Conventional computer processors may be used which have limited memory capacity and conventional operating speed, allowing massive data sets to be processed in a reasonable time and with reasonable computer resources. The clustering process is organized using a clustering feature tree structure wherein each clustering feature comprises the number of data points in the cluster, the linear sum of the data points in the cluster, and the square sum of the data points in the cluster. A dense region of data points is treated collectively as a single cluster, and points in sparsely occupied regions can be treated as outliers and removed from the clustering feature tree. The clustering can be carried out continuously with new data points being received and processed, and with the clustering feature tree being restructured as necessary to accommodate the information from the newly received data points.
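
    A minimal sketch of the clustering feature summary described above: each feature keeps the point count, linear sum, and square sum, so features can be merged by addition and cluster statistics recovered without revisiting raw points. The class and test data below are illustrative; the tree construction and outlier handling of the full method are not shown.

```python
# Minimal sketch of a clustering feature (CF) summary: point count N, linear
# sum LS, and square sum SS. Two CFs merge by simple addition, and centroid
# and radius follow from the summary alone. Test data are illustrative.
import numpy as np

class ClusteringFeature:
    def __init__(self, dim):
        self.n = 0                  # number of points absorbed
        self.ls = np.zeros(dim)     # linear sum of the points
        self.ss = 0.0               # sum of squared norms of the points

    def add(self, point):
        self.n += 1
        self.ls += point
        self.ss += float(point @ point)

    def merge(self, other):
        self.n += other.n
        self.ls += other.ls
        self.ss += other.ss

    def centroid(self):
        return self.ls / self.n

    def radius(self):
        # RMS distance of the member points from the centroid
        c = self.centroid()
        return np.sqrt(max(self.ss / self.n - float(c @ c), 0.0))

cf = ClusteringFeature(dim=2)
for p in np.random.normal(loc=[5.0, -2.0], scale=0.5, size=(1000, 2)):
    cf.add(p)
print("centroid:", cf.centroid(), "radius:", cf.radius())
```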

  7. Critical analysis of world uranium resources

    USGS Publications Warehouse

    Hall, Susan; Coleman, Margaret

    2013-01-01

    The U.S. Department of Energy, Energy Information Administration (EIA) joined with the U.S. Department of the Interior, U.S. Geological Survey (USGS) to analyze the world uranium supply and demand balance. To evaluate short-term primary supply (0–15 years), the analysis focused on Reasonably Assured Resources (RAR), which are resources projected with a high degree of geologic assurance and considered to be economically feasible to mine. Such resources include uranium resources from mines currently in production as well as resources that are in the stages of feasibility or of being permitted. Sources of secondary supply for uranium, such as stockpiles and reprocessed fuel, were also examined. To evaluate long-term primary supply, estimates of uranium from unconventional and from undiscovered resources were analyzed. At 2010 rates of consumption, uranium resources identified in operating or developing mines would fuel the world nuclear fleet for about 30 years. However, projections currently predict an increase in uranium requirements tied to expansion of nuclear energy worldwide. Under a low-demand scenario, requirements through the period ending in 2035 are about 2.1 million tU. In the low demand case, uranium identified in existing and developing mines is adequate to supply requirements. However, whether or not these identified resources will be developed rapidly enough to provide an uninterrupted fuel supply to expanded nuclear facilities could not be determined. On the basis of a scenario of high demand through 2035, 2.6 million tU is required and identified resources in operating or developing mines is inadequate. Beyond 2035, when requirements could exceed resources in these developing properties, other sources will need to be developed from less well-assured resources, deposits not yet at the prefeasibility stage, resources that are currently subeconomic, secondary sources, undiscovered conventional resources, and unconventional uranium supplies. This report’s analysis of 141 mines that are operating or are being actively developed identifies 2.7 million tU of in-situ uranium resources worldwide, approximately 2.1 million tU recoverable after mining and milling losses were deducted. Sixty-four operating mines report a total of 1.4 million tU of in-situ RAR (about 1 million tU recoverable). Seventy-seven developing mines/production centers report 1.3 million tU in-situ Reasonably Assured Resources (RAR) (about 1.1 million tU recoverable), which have a reasonable chance of producing uranium within 5 years. Most of the production is projected to come from conventional underground or open pit mines as opposed to in-situ leach mines. Production capacity in operating mines is about 76,000 tU/yr, and in developing mines is estimated at greater than 52,000 tU/yr. Production capacity in operating mines should be considered a maximum as mines seldom produce up to licensed capacity due to operational difficulties. In 2010, worldwide mines operated at 70 percent of licensed capacity, and production has never exceeded 89 percent of capacity. The capacity in developing mines is not always reported. In this study 35 percent of developing mines did not report a target licensed capacity, so estimates of future capacity may be too low. 
The Organisation for Economic Co-operation and Development’s Nuclear Energy Agency (NEA) and International Atomic Energy Agency (IAEA) estimate an additional 1.4 million tU of economically recoverable resources beyond those identified in the operating or developing mines covered in this report. As well, 0.5 million tU in subeconomic resources and 2.3 million tU in the geologically less certain inferred category are identified worldwide. These agencies estimate 2.2 million tU in secondary sources such as government and commercial stockpiles and re-enriched uranium tails. They also estimate that unconventional uranium supplies (uraniferous phosphate and black shale deposits) may contain up to 7.6 million tU. Although unconventional resources are currently subeconomic, the improvement of extraction techniques or the production of coproducts may make extraction of uranium from these types of deposits profitable. A large undiscovered resource base is reported by these agencies; however, this class of resource should be considered speculative and will require intensive exploration programs to adequately define it as mineable. These resources may all contribute to uranium supply that would fuel the world nuclear fleet well beyond that calculated in this report. Production of resources in both operating and developing uranium mines is subject to uncertainties caused by technical, legal, regulatory, and financial challenges that combine to create long timelines between deposit discovery and mine production. This analysis indicates that mine development is proceeding too slowly to fully meet requirements for an expanded nuclear power reactor fleet in the near future (to 2035), and unless adequate secondary or unconventional resources can be identified, imbalances in supply and demand may occur.

  8. Self-Organized Service Negotiation for Collaborative Decision Making

    PubMed Central

    Zhang, Bo; Zheng, Ziming

    2014-01-01

    This paper proposes a self-organized service negotiation method for CDM in intelligent and automatic manners. It mainly includes three phases: semantic-based capacity evaluation for the CDM sponsor, trust computation of the CDM organization, and negotiation selection of the decision-making service provider (DMSP). In the first phase, the CDM sponsor produces the formal semantic description of the complex decision task for DMSP and computes the capacity evaluation values according to participator instructions from different DMSPs. In the second phase, a novel trust computation approach is presented to compute the subjective belief value, the objective reputation value, and the recommended trust value. And in the third phase, based on the capacity evaluation and trust computation, a negotiation mechanism is given to efficiently implement the service selection. The simulation experiment results show that our self-organized service negotiation method is feasible and effective for CDM. PMID:25243228
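
    Purely as an illustration of how the three trust quantities named above might be combined for provider selection, the sketch below uses a simple weighted sum. The weighted-sum form, the weights, and the candidate providers are assumptions for illustration only, not the paper's actual trust formulas.

```python
# Illustrative sketch only: one simple way to combine the three quantities the
# abstract names (subjective belief, objective reputation, recommended trust)
# into a single score for provider selection. The weighted-sum form and the
# weights are assumptions for illustration, not the paper's actual method.
def overall_trust(belief, reputation, recommendation,
                  weights=(0.5, 0.3, 0.2)):
    """Weighted combination of the three trust components (all in [0, 1])."""
    w_b, w_r, w_c = weights
    return w_b * belief + w_r * reputation + w_c * recommendation

# Hypothetical candidate decision-making service providers.
providers = {
    "DMSP-A": overall_trust(0.9, 0.7, 0.6),
    "DMSP-B": overall_trust(0.6, 0.9, 0.8),
}
print(max(providers, key=providers.get), providers)
```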

  9. Self-organized service negotiation for collaborative decision making.

    PubMed

    Zhang, Bo; Huang, Zhenhua; Zheng, Ziming

    2014-01-01

    This paper proposes a self-organized service negotiation method for CDM in intelligent and automatic manners. It mainly includes three phases: semantic-based capacity evaluation for the CDM sponsor, trust computation of the CDM organization, and negotiation selection of the decision-making service provider (DMSP). In the first phase, the CDM sponsor produces the formal semantic description of the complex decision task for DMSP and computes the capacity evaluation values according to participator instructions from different DMSPs. In the second phase, a novel trust computation approach is presented to compute the subjective belief value, the objective reputation value, and the recommended trust value. And in the third phase, based on the capacity evaluation and trust computation, a negotiation mechanism is given to efficiently implement the service selection. The simulation experiment results show that our self-organized service negotiation method is feasible and effective for CDM.

  10. [Ecological carrying capacity and Chongming Island's ecological construction].

    PubMed

    Wang, Kaiyun; Zou, Chunjing; Kong, Zhenghong; Wang, Tianhou; Chen, Xiaoyong

    2005-12-01

    This paper overviewed the goals of Chongming Island's ecological construction and its background, analyzed the current eco-economic status and constraints of the Island, and put forward some scientific issues on its ecological construction. It was suggested that for the resources-saving and sustainable development of the Island, the researches on its ecological construction should be based on its ecological carrying capacity, fully take the regional characteristics into consideration, and refer the successful development modes at home and abroad. The carrying capacity study should ground on systemic and dynamic views, give a thorough evaluation of the Island's present carrying capacity, simulate its possible changes, and forecast its demands and risks. Operable countermeasures to promote the Island's carrying capacity should be worked out, new industry structure, population scale, and optimized distribution projects conforming to regional carrying capacity should be formulated, and effective ecological security alarming and control system should be built, with the aim of providing suggestions and strategic evidences for the decision-making of economic development and sustainable environmental resources use of the region.

  11. Carrying Capacity Model Applied to Coastal Ecotourism of Baluran National Park, Indonesia

    NASA Astrophysics Data System (ADS)

    Armono, H. D.; Rosyid, D. M.; Nuzula, N. I.

    2017-07-01

    The resources of Baluran National Park have been used for marine and coastal ecotourism. The increasing number of visitors has led to growth in tourism and its related activities. This condition risks degrading both the resources and the welfare of local communities. This research aims to determine the sustainability of coastal ecotourism management by calculating the effective number of tourists who can be accepted. The study uses the concept of tourism carrying capacity, which comprises ecological, economic, social and physical carrying capacity. The combined carrying capacity analysis for Baluran National Park ecotourism shows that a maximum of 3,288 people per day (151,248 tourists per year) can be accepted. The current number of tourist arrivals is only 241 people per day (87,990 tourists per year), which is far below the carrying capacity.

  12. Capacity shortfalls hinder the performance of marine protected areas globally

    NASA Astrophysics Data System (ADS)

    Gill, David A.; Mascia, Michael B.; Ahmadia, Gabby N.; Glew, Louise; Lester, Sarah E.; Barnes, Megan; Craigie, Ian; Darling, Emily S.; Free, Christopher M.; Geldmann, Jonas; Holst, Susie; Jensen, Olaf P.; White, Alan T.; Basurto, Xavier; Coad, Lauren; Gates, Ruth D.; Guannel, Greg; Mumby, Peter J.; Thomas, Hannah; Whitmee, Sarah; Woodley, Stephen; Fox, Helen E.

    2017-03-01

    Marine protected areas (MPAs) are increasingly being used globally to conserve marine resources. However, whether many MPAs are being effectively and equitably managed, and how MPA management influences substantive outcomes remain unknown. We developed a global database of management and fish population data (433 and 218 MPAs, respectively) to assess: MPA management processes; the effects of MPAs on fish populations; and relationships between management processes and ecological effects. Here we report that many MPAs failed to meet thresholds for effective and equitable management processes, with widespread shortfalls in staff and financial resources. Although 71% of MPAs positively influenced fish populations, these conservation impacts were highly variable. Staff and budget capacity were the strongest predictors of conservation impact: MPAs with adequate staff capacity had ecological effects 2.9 times greater than MPAs with inadequate capacity. Thus, continued global expansion of MPAs without adequate investment in human and financial capacity is likely to lead to sub-optimal conservation outcomes.

  13. Building Capacity to Increase Health Promotion Funding to American Indian Communities: Recommendations From Community Members.

    PubMed

    Pedersen, Maja; Held, Suzanne Christopher; Brown, Blakely

    2016-11-01

    Foundations and government agencies have historically played a critical role in supporting community-based health promotion programs. Increased access to health promotion funding may help address significant health issues existing within American Indian (AI) communities, such as childhood obesity, type 2 diabetes, and cardiovascular disease. Understanding the capacity of AI communities to successfully apply for and receive funding may serve to increase resources for health promotion efforts within AI communities in Montana. This exploratory qualitative study completed 17 semistructured interviews across three AI reservations in the state of Montana. Dimensions of community capacity within the context of the funding application process and partnership with funding agencies were identified, including resources, leadership, community need, networks, and relationship with the funding agency. Dimensions of AI community capacity were then used to suggest capacity-building strategies for improved partnership between AI communities in Montana and the funding agencies. © 2016 Society for Public Health Education.

  14. Requirements for a network storage service

    NASA Technical Reports Server (NTRS)

    Kelly, Suzanne M.; Haynes, Rena A.

    1991-01-01

    Sandia National Laboratories provides a high performance classified computer network as a core capability in support of its mission of nuclear weapons design and engineering, physical sciences research, and energy research and development. The network, locally known as the Internal Secure Network (ISN), comprises multiple distributed local area networks (LAN's) residing in New Mexico and California. The TCP/IP protocol suite is used for inter-node communications. Scientific workstations and mid-range computers, running UNIX-based operating systems, compose most LAN's. One LAN, operated by the Sandia Corporate Computing Directorate, is a general purpose resource providing a supercomputer and a file server to the entire ISN. The current file server on the supercomputer LAN is an implementation of the Common File Server (CFS). Subsequent to the design of the ISN, Sandia reviewed its mass storage requirements and chose to enter into a competitive procurement to replace the existing file server with one more adaptable to a UNIX/TCP/IP environment. The requirements study for the network was the starting point for the requirements study for the new file server. The file server is called the Network Storage Service (NSS) and its requirements are described. An application or functional description of the NSS is given. The final section adds performance, capacity, and access constraints to the requirements.

  15. Remembrance of phases past: An autoregressive method for generating realistic atmospheres in simulations

    NASA Astrophysics Data System (ADS)

    Srinath, Srikar; Poyneer, Lisa A.; Rudy, Alexander R.; Ammons, S. M.

    2014-08-01

    The advent of expensive, large-aperture telescopes and complex adaptive optics (AO) systems has strengthened the need for detailed simulation of such systems from the top of the atmosphere to control algorithms. The credibility of any simulation is underpinned by the quality of the atmosphere model used for introducing phase variations into the incident photons. Hitherto, simulations which incorporate wind layers have relied upon phase screen generation methods that tax the computation and memory capacities of the platforms on which they run. This places limits on parameters of a simulation, such as exposure time or resolution, thus compromising its utility. As aperture sizes and fields of view increase the problem will only get worse. We present an autoregressive method for evolving atmospheric phase that is efficient in its use of computation resources and allows for variability in the power contained in frozen flow or stochastic components of the atmosphere. Users have the flexibility of generating atmosphere datacubes in advance of runs where memory constraints allow to save on computation time or of computing the phase at each time step for long exposure times. Preliminary tests of model atmospheres generated using this method show power spectral density and rms phase in accordance with established metrics for Kolmogorov models.
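
    The sketch below illustrates the general idea of autoregressive phase evolution in the Fourier domain, assuming a simple AR(1) update per spatial-frequency mode: the complex gain's phase encodes frozen-flow translation and its magnitude below one injects a stochastic component. The power-law weighting and parameter values are illustrative and are not the calibration used in the work described above.

```python
# Minimal sketch, under simplifying assumptions, of an autoregressive (AR(1))
# phase evolution in the Fourier domain: each spatial-frequency mode is advanced
# by a complex gain whose phase encodes frozen-flow translation (wind) and whose
# magnitude < 1 injects a stochastic ("boiling") component. The power-law
# weighting and parameter values are illustrative only.
import numpy as np

N, dt = 64, 0.001                      # grid size, time step (s)
wind = (10.0, 0.0)                     # wind velocity (m/s), illustrative
dx = 0.1                               # pixel scale (m/pixel)
boil = 0.995                           # AR magnitude; 1.0 = pure frozen flow

fx = np.fft.fftfreq(N, d=dx)
kx, ky = np.meshgrid(fx, fx, indexing="ij")
k = np.hypot(kx, ky)
k[0, 0] = fx[1]                        # avoid division by zero at the piston mode

psd = k ** (-11.0 / 3.0)               # Kolmogorov-like power-law weighting
alpha = boil * np.exp(-2j * np.pi * (kx * wind[0] + ky * wind[1]) * dt)

def fresh_noise():
    """White complex noise shaped by the power-law spectrum."""
    w = np.random.normal(size=(N, N)) + 1j * np.random.normal(size=(N, N))
    return w * np.sqrt(psd)

modes = fresh_noise()                  # initial Fourier-domain phase
for _ in range(100):                   # evolve 100 time steps
    modes = alpha * modes + np.sqrt(1 - boil**2) * fresh_noise()
    screen = np.fft.ifft2(modes).real  # spatial phase screen at this step
print("rms phase (arbitrary units):", screen.std())
```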

  16. Increasing Capacity for Stewardship of Oceans and Coasts: Findings of the National Research Council Report

    NASA Astrophysics Data System (ADS)

    Roberts, S. J.; Feeley, M. H.

    2008-05-01

    With the increasing stress on ocean and coastal resources, ocean resource management will require greater capacity in terms of people, institutions, technology and tools. Successful capacity-building efforts address the needs of a specific locale or region and include plans to maintain and expand capacity after the project ends. In 2008, the US National Research Council published a report that assesses past and current capacity-building efforts to identify barriers to effective management of coastal and marine resources. The report recommends ways that governments and organizations can strengthen marine conservation and management capacity. Capacity building programs instill the tools, knowledge, skills, and attitudes that address: ecosystem function and change; processes of governance that influence societal and ecosystem change; and assembling and managing interdisciplinary teams. Programs require efforts beyond traditional sector-by-sector planning because marine ecosystems range from the open ocean to coastal waters and land use practices. Collaboration among sectors, scaling from local community-based management to international ocean policies, and ranging from inland to offshore areas, will be required to establish coordinated and efficient governance of ocean and coastal ecosystems.

    Barriers: Most capacity building activities have been initiated to address particular issues such as overfishing or coral reef degradation, or they target a particular region or country facing threats to their marine resources. This fragmentation inhibits the sharing of information and experience and makes it more difficult to design and implement management approaches at appropriate scales. Additional barriers that have limited the effectiveness of capacity building programs include: lack of an adequate needs assessment prior to program design and implementation; exclusion of targeted populations in decision-making efforts; mismanagement, corruption, or both; incomplete or inappropriate evaluation procedures; and lack of a coordinated and strategic approach among donors.

    A New Framework: Improving ocean stewardship and ending the fragmentation of current capacity building programs will require a new, broadly adopted framework for capacity building that emphasizes cooperation, sustainability, and knowledge transfer within and among communities. The report identifies four specific features of capacity building that would increase the effectiveness and efficiency of future programs:
    1. Regional action plans based on periodic program assessments to guide investments in capacity and set realistic milestones and performance measures.
    2. Long-term support to establish self-sustaining programs. Sustained capacity building programs require a diversity of sources and coordinated investments from local, regional, and international donors.
    3. Development of leadership and political will. One of the most commonly cited reasons for failure and lack of progress in ocean and coastal governance initiatives is lack of political will. One strategy for strengthening support is to identify, develop, mentor, and reward leaders.
    4. Establishment of networks and mechanisms for regional collaboration. Networks bring together those working in the same or similar ecosystems with comparable management or governance challenges to share information, pool resources, and learn from one another.
    The report also recommends the establishment of regional centers to encourage and support collaboration among neighboring countries.

  17. Building capacity in health research in the developing world.

    PubMed Central

    Lansang, Mary Ann; Dennis, Rodolfo

    2004-01-01

    Strong national health research systems are needed to improve health systems and attain better health. For developing countries to indigenize health research systems, it is essential to build research capacity. We review the positive features and weaknesses of various approaches to capacity building, emphasizing that complementary approaches to human resource development work best in the context of a systems and long-term perspective. As a key element of capacity building, countries must also address issues related to the enabling environment, in particular: leadership, career structure, critical mass, infrastructure, information access and interfaces between research producers and users. The success of efforts to build capacity in developing countries will ultimately depend on political will and credibility, adequate financing, and a responsive capacity-building plan that is based on a thorough situational analysis of the resources needed for health research and the inequities and gaps in health care. Greater national and international investment in capacity building in developing countries has the greatest potential for securing dynamic and agile knowledge systems that can deliver better health and equity, now and in the future. PMID:15643798

  18. Building capacity in health research in the developing world.

    PubMed

    Lansang, Mary Ann; Dennis, Rodolfo

    2004-10-01

    Strong national health research systems are needed to improve health systems and attain better health. For developing countries to indigenize health research systems, it is essential to build research capacity. We review the positive features and weaknesses of various approaches to capacity building, emphasizing that complementary approaches to human resource development work best in the context of a systems and long-term perspective. As a key element of capacity building, countries must also address issues related to the enabling environment, in particular: leadership, career structure, critical mass, infrastructure, information access and interfaces between research producers and users. The success of efforts to build capacity in developing countries will ultimately depend on political will and credibility, adequate financing, and a responsive capacity-building plan that is based on a thorough situational analysis of the resources needed for health research and the inequities and gaps in health care. Greater national and international investment in capacity building in developing countries has the greatest potential for securing dynamic and agile knowledge systems that can deliver better health and equity, now and in the future.

  19. Towards anatomic scale agent-based modeling with a massively parallel spatially explicit general-purpose model of enteric tissue (SEGMEnT_HPC).

    PubMed

    Cockrell, Robert Chase; Christley, Scott; Chang, Eugene; An, Gary

    2015-01-01

    Perhaps the greatest challenge currently facing the biomedical research community is the ability to integrate highly detailed cellular and molecular mechanisms to represent clinical disease states as a pathway to engineer effective therapeutics. This is particularly evident in the representation of organ-level pathophysiology in terms of abnormal tissue structure, which, through histology, remains a mainstay in disease diagnosis and staging. As such, being able to generate anatomic scale simulations is a highly desirable goal. While computational limitations have previously constrained the size and scope of multi-scale computational models, advances in the capacity and availability of high-performance computing (HPC) resources have greatly expanded the ability of computational models of biological systems to achieve anatomic, clinically relevant scale. Diseases of the intestinal tract are prime examples of pathophysiological processes that manifest at multiple scales of spatial resolution, with structural abnormalities present at the microscopic, macroscopic and organ levels. In this paper, we describe a novel, massively parallel computational model of the gut, the Spatially Explicit General-purpose Model of Enteric Tissue_HPC (SEGMEnT_HPC), which extends an existing model of the gut epithelium, SEGMEnT, in order to create cell-for-cell anatomic scale simulations. We present an example implementation of SEGMEnT_HPC that simulates the pathogenesis of ileal pouchitis, an important clinical entity that affects patients following remedial surgery for ulcerative colitis.

  20. Cytobank: providing an analytics platform for community cytometry data analysis and collaboration.

    PubMed

    Chen, Tiffany J; Kotecha, Nikesh

    2014-01-01

    Cytometry is used extensively in clinical and laboratory settings to diagnose and track cell subsets in blood and tissue. High-throughput, single-cell approaches leveraging cytometry are developed and applied in the computational and systems biology communities by researchers, who seek to improve the diagnosis of human diseases, map the structures of cell signaling networks, and identify new cell types. Data analysis and management present a bottleneck in the flow of knowledge from bench to clinic. Multi-parameter flow and mass cytometry enable identification of signaling profiles of patient cell samples. Currently, this process is manual, requiring hours of work to summarize multi-dimensional data and translate these data for input into other analysis programs. In addition, the increase in the number and size of collaborative cytometry studies as well as the computational complexity of analytical tools require the ability to assemble sufficient and appropriately configured computing capacity on demand. There is a critical need for platforms that can be used by both clinical and basic researchers who routinely rely on cytometry. Recent advances provide a unique opportunity to facilitate collaboration and analysis and management of cytometry data. Specifically, advances in cloud computing and virtualization are enabling efficient use of large computing resources for analysis and backup. An example is Cytobank, a platform that allows researchers to annotate, analyze, and share results along with the underlying single-cell data.

  1. Computational models of music perception and cognition II: Domain-specific music processing

    NASA Astrophysics Data System (ADS)

    Purwins, Hendrik; Grachten, Maarten; Herrera, Perfecto; Hazan, Amaury; Marxer, Ricard; Serra, Xavier

    2008-09-01

    In Part I [Purwins H, Herrera P, Grachten M, Hazan A, Marxer R, Serra X. Computational models of music perception and cognition I: The perceptual and cognitive processing chain. Physics of Life Reviews 2008, in press, doi:10.1016/j.plrev.2008.03.004], we addressed the study of cognitive processes that underlie auditory perception of music, and their neural correlates. The aim of the present paper is to summarize empirical findings from music cognition research that are relevant to three prominent music theoretic domains: rhythm, melody, and tonality. Attention is paid to how cognitive processes like category formation, stimulus grouping, and expectation can account for the music theoretic key concepts in these domains, such as beat, meter, voice, consonance. We give an overview of computational models that have been proposed in the literature for a variety of music processing tasks related to rhythm, melody, and tonality. Although the present state-of-the-art in computational modeling of music cognition definitely provides valuable resources for testing specific hypotheses and theories, we observe the need for models that integrate the various aspects of music perception and cognition into a single framework. Such models should be able to account for aspects that until now have only rarely been addressed in computational models of music cognition, like the active nature of perception and the development of cognitive capacities from infancy to adulthood.

  2. The direction of cloud computing for Malaysian education sector in 21st century

    NASA Astrophysics Data System (ADS)

    Jaafar, Jazurainifariza; Rahman, M. Nordin A.; Kadir, M. Fadzil A.; Shamsudin, Syadiah Nor; Saany, Syarilla Iryani A.

    2017-08-01

    In the 21st century, technology has turned the learning environment into a new mode of education, making learning systems more effective and systematic. Education institutions today face many challenges in ensuring that the teaching and learning process runs smoothly and is manageable. Some of the challenges in current education management are a lack of integrated systems, high maintenance costs, difficulty of configuration and deployment, and the complexity of storage provision. Digital learning is an instructional practice that uses technology to make the learning experience more effective and the education process more systematic and attractive. Digital learning can be considered one of the prominent applications implemented in a cloud computing environment. Cloud computing is a type of networked resource that provides on-demand services, where users can access applications from any location and at any time. It also promises to minimize maintenance costs and provides flexible data storage capacity. The aim of this article is to review the definition and types of cloud computing for improving digital learning management as required in 21st century education. The analysis of the digital learning context focuses on primary schools in Malaysia. Types of cloud applications and services in the education sector are also discussed in the article. Finally, a gap analysis and directions for cloud computing in the education sector to face 21st century challenges are suggested.

  3. The EPOS Vision for the Open Science Cloud

    NASA Astrophysics Data System (ADS)

    Jeffery, Keith; Harrison, Matt; Cocco, Massimo

    2016-04-01

    Cloud computing offers dynamic elastic scalability for data processing on demand. For much research activity, demand for computing is uneven over time and so cloud computing offers both cost-effectiveness and capacity advantages. However, as reported repeatedly by the EC Cloud Expert Group, there are barriers to the uptake of Cloud Computing: (1) security and privacy; (2) interoperability (avoidance of lock-in); (3) lack of appropriate systems development environments for application programmers to characterise their applications to allow cloud middleware to optimize their deployment and execution. From CERN, the Helix-Nebula group has proposed the architecture for the European Open Science Cloud. They are discussing with other e-Infrastructure groups such as EGI (GRIDs), EUDAT (data curation), AARC (network authentication and authorisation) and also with the EIROFORUM group of 'international treaty' RIs (Research Infrastructures) and the ESFRI (European Strategic Forum for Research Infrastructures) RIs including EPOS. Many of these RIs are either e-RIs (electronic-RIs) or have an e-RI interface for access and use. The EPOS architecture is centred on a portal: ICS (Integrated Core Services). The architectural design already allows for access to e-RIs (which may include any or all of data, software, users and resources such as computers or instruments). Those within any one domain (subject area) of EPOS are considered within the TCS (Thematic Core Services). Those outside, or available across multiple domains of EPOS, are ICS-d (Integrated Core Services-Distributed) since the intention is that they will be used by any or all of the TCS via the ICS. Another such service type is CES (Computational Earth Science); effectively an ICS-d specializing in high performance computation, analytics, simulation or visualization offered by a TCS for others to use. Already discussions are underway between EPOS and EGI, EUDAT, AARC and Helix-Nebula for those offerings to be considered as ICS-ds by EPOS. Provision of access to ICS-ds from ICS-C concerns several aspects: (a) technical: it may be more or less difficult to connect and pass from ICS-C to the ICS-d/CES the 'package' (probably a virtual machine) of data and software; (b) security/privacy: including passing personal information, e.g. related to AAAI (Authentication, Authorization, Accounting Infrastructure); (c) financial and legal: such as payment and licence conditions. Appropriate interfaces from ICS-C to ICS-d are being designed to accommodate these aspects. The Open Science Cloud is timely because it provides a framework to discuss governance and sustainability for computational resource provision as well as an effective interpretation of a federated approach to HPC (High Performance Computing) and HTC (High Throughput Computing). It will be a unique opportunity to share and adopt procurement policies to provide access to computational resources for RIs. The current state of discussions and expected roadmap for the EPOS-Open Science Cloud relationship are presented.

  4. Study on the application of mobile internet cloud computing platform

    NASA Astrophysics Data System (ADS)

    Gong, Songchun; Fu, Songyin; Chen, Zheng

    2012-04-01

    The development of computer technology promotes the application of the cloud computing platform, which is essentially a model of resource services that meets users' needs for different resources through flexible adjustment and exchange. Cloud computing offers advantages in many respects: it not only reduces the difficulty of operating the system but also makes it easy for users to search, acquire and process resources. In line with this, the author takes the management of digital libraries as the research focus of this paper and analyzes the key technologies of the mobile internet cloud computing platform in operation. The popularization of computer technology has driven the creation of digital library models, whose core idea is to strengthen the management of library resource information through computers and to construct a high-performance inquiry and search platform, allowing users to access the necessary information resources at any time. Cloud computing, in turn, distributes computation across a large number of computers and thereby connects multiple computers into a single service. Digital libraries, as a typical application of cloud computing, therefore provide a suitable setting for analyzing the key technologies of cloud computing.

  5. Assessing organizational capacity for achieving meaningful use of electronic health records.

    PubMed

    Shea, Christopher M; Malone, Robb; Weinberger, Morris; Reiter, Kristin L; Thornhill, Jonathan; Lord, Jennifer; Nguyen, Nicholas G; Weiner, Bryan J

    2014-01-01

    Health care institutions are scrambling to manage the complex organizational change required for achieving meaningful use (MU) of electronic health records (EHR). Assessing baseline organizational capacity for the change can be a useful step toward effective planning and resource allocation. The aim of this article is to describe an adaptable method and tool for assessing organizational capacity for achieving MU of EHR. Data on organizational capacity (people, processes, and technology resources) and barriers are presented from outpatient clinics within one integrated health care delivery system; thus, the focus is on MU requirements for eligible professionals, not eligible hospitals. We conducted 109 interviews with representatives from 46 outpatient clinics. Most clinics had core elements of the people domain of capacity in place. However, the process domain was problematic for many clinics, specifically, capturing problem lists as structured data and having standard processes for maintaining the problem list in the EHR. Also, nearly half of all clinics did not have methods for tracking compliance with their existing processes. Finally, most clinics maintained clinical information in multiple systems, not just the EHR. The most common perceived barriers to MU for eligible professionals included EHR functionality, changes to workflows, increased workload, and resistance to change. Organizational capacity assessments provide a broad institutional perspective and an in-depth clinic-level perspective useful for making resource decisions and tailoring strategies to support the MU change effort for eligible professionals.

  6. Exploring the meteorological potential for planning a high performance European electricity super-grid: optimal power capacity distribution among countries

    NASA Astrophysics Data System (ADS)

    Santos-Alamillos, Francisco J.; Brayshaw, David J.; Methven, John; Thomaidis, Nikolaos S.; Ruiz-Arias, José A.; Pozo-Vázquez, David

    2017-11-01

    The concept of a European super-grid for electricity presents clear advantages for a reliable and affordable renewable power production (photovoltaics and wind). Based on the mean-variance portfolio optimization analysis, we explore optimal scenarios for the allocation of new renewable capacity at national level in order to provide to energy decision-makers guidance about which regions should be mostly targeted to either maximize total production or reduce its day-to-day variability. The results show that the existing distribution of renewable generation capacity across Europe is far from optimal: i.e. a ‘better’ spatial distribution of resources could have been achieved with either a ~31% increase in mean power supply (for the same level of day-to-day variability) or a ~37.5% reduction in day-to-day variability (for the same level of mean productivity). Careful planning of additional increments in renewable capacity at the European level could, however, act to significantly ameliorate this deficiency. The choice of where to deploy resources depends, however, on the objective being pursued—if the goal is to maximize average output, then new capacity is best allocated in the countries with highest resources, whereas investment in additional capacity in a north/south dipole pattern across Europe would act to most reduce daily variations and thus decrease the day-to-day volatility of renewable power supply.
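
    To make the mean-variance idea concrete, the sketch below computes capacity shares for three hypothetical regions that minimize the day-to-day variance of aggregate output subject to a target mean, using synthetic capacity-factor series. The data, target, and solver choice are illustrative assumptions, not the study's inputs.

```python
# Minimal sketch of a mean-variance (Markowitz-style) allocation applied to
# regional renewable capacity: choose region weights that minimise day-to-day
# variance of aggregate output for a required mean capacity factor.
# The three-region series and the target are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# Synthetic daily capacity factors for three hypothetical regions.
cf = rng.normal(loc=[0.25, 0.30, 0.22], scale=[0.08, 0.12, 0.06], size=(365, 3))
mu, cov = cf.mean(axis=0), np.cov(cf, rowvar=False)
target_mean = 0.26

def variance(w):
    return float(w @ cov @ w)

n = len(mu)
result = minimize(
    variance,
    x0=np.full(n, 1.0 / n),
    bounds=[(0.0, 1.0)] * n,                                   # no negative capacity
    constraints=[
        {"type": "eq", "fun": lambda w: w.sum() - 1.0},        # shares sum to 1
        {"type": "eq", "fun": lambda w: w @ mu - target_mean}, # hit the mean target
    ],
    method="SLSQP",
)
print("optimal share per region:", np.round(result.x, 3))
```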

  7. Exploring Cloud Computing for Large-scale Scientific Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Guang; Han, Binh; Yin, Jian

    This paper explores cloud computing for large-scale data-intensive scientific applications. Cloud computing is attractive because it provides hardware and software resources on-demand, which relieves the burden of acquiring and maintaining a huge amount of resources that may be used only once by a scientific application. However, unlike typical commercial applications that often just require a moderate amount of ordinary resources, large-scale scientific applications often need to process enormous amounts of data in the terabyte or even petabyte range and require special high performance hardware with low latency connections to complete computation in a reasonable amount of time. To address these challenges, we build an infrastructure that can dynamically select high performance computing hardware across institutions and dynamically adapt the computation to the selected resources to achieve high performance. We have also demonstrated the effectiveness of our infrastructure by building a system biology application and an uncertainty quantification application for carbon sequestration, which can efficiently utilize data and computation resources across several institutions.

  8. Computer-Based Resource Accounting Model for Automobile Technology Impact Assessment

    DOT National Transportation Integrated Search

    1976-10-01

    A computer-implemented resource accounting model has been developed for assessing resource impacts of future automobile technology options. The resources tracked are materials, energy, capital, and labor. The model has been used in support of the Int...

  9. Water-Constrained Electric Sector Capacity Expansion Modeling Under Climate Change Scenarios

    NASA Astrophysics Data System (ADS)

    Cohen, S. M.; Macknick, J.; Miara, A.; Vorosmarty, C. J.; Averyt, K.; Meldrum, J.; Corsi, F.; Prousevitch, A.; Rangwala, I.

    2015-12-01

    Over 80% of U.S. electricity generation uses a thermoelectric process, which requires significant quantities of water for power plant cooling. This water requirement exposes the electric sector to vulnerabilities related to shifts in water availability driven by climate change as well as reductions in power plant efficiencies. Electricity demand is also sensitive to climate change, which in most of the United States leads to warming temperatures that increase total cooling-degree days. The resulting demand increase is typically greater for peak demand periods. This work examines the sensitivity of the development and operations of the U.S. electric sector to the impacts of climate change using an electric sector capacity expansion model that endogenously represents seasonal and local water resource availability as well as climate impacts on water availability, electricity demand, and electricity system performance. Capacity expansion portfolios and water resource implications from 2010 to 2050 are shown at high spatial resolution under a series of climate scenarios. Results demonstrate the importance of water availability for future electric sector capacity planning and operations, especially under more extreme hotter and drier climate scenarios. In addition, region-specific changes in electricity demand and water resources require region-specific responses that depend on local renewable resource availability and electricity market conditions. Climate change and the associated impacts on water availability and temperature can affect the types of power plants that are built, their location, and their impact on regional water resources.
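
    As a toy illustration of a water-constrained expansion decision, the sketch below chooses new capacity of two stylized technologies to meet an energy requirement at least cost while respecting a cap on cooling water. All coefficients are invented for illustration and bear no relation to the study's model or data.

```python
# Minimal sketch of the water-constrained capacity-expansion idea as a tiny
# linear program: choose new capacity (MW) of two illustrative technologies to
# meet an energy requirement at least cost while respecting a cap on cooling
# water. All coefficients are made-up illustrations, not the study's data.
import numpy as np
from scipy.optimize import linprog

techs = ["water-cooled thermal", "dry-cooled / non-thermal"]
cost = np.array([1.0, 1.6])        # $M per MW built (illustrative)
energy = np.array([7.0, 3.0])      # GWh per MW per year (capacity-factor proxy)
water = np.array([2.0, 0.0])       # million gallons per MW per year

demand_gwh = 500.0                 # annual energy requirement
water_cap = 120.0                  # available cooling water

res = linprog(
    c=cost,
    A_ub=np.array([-energy, water]),       # meet demand; stay under the water cap
    b_ub=np.array([-demand_gwh, water_cap]),
    bounds=[(0, None), (0, None)],
)
for name, mw in zip(techs, res.x):
    print(f"{name}: {mw:.1f} MW")
```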

  10. Other Persons: On the Phenomenology of Interpersonal Experience in Schizophrenia (Ancillary Article to EAWE Domain 3).

    PubMed

    Stanghellini, Giovanni; Ballerini, Massimo; Mancini, Milena

    2017-01-01

    In this paper, we discuss the philosophical and psychopathological background of Domain 3, Other persons, of the Examination of Anomalous World Experiences (EAWE). The EAWE interview aims to describe the manifold phenomena of the schizophrenic lifeworld in all of their concrete and distinctive features, thus complementing a more abstract, symptom-focused approach. Domain 3, Other persons, focuses specifically on subjectively experienced interpersonal disturbances that may be especially common in schizophrenia. The aim of this domain, as with the rest of the EAWE, is to provide clinicians and researchers with a systematic orientation toward, or knowledge of, patients' experiences, so that the experiential universe of schizophrenia can be clarified in terms of the particular feel, meaning, and value it has for the patient. To help provide a context for EAWE Domain 3, Other persons, we propose a definition of "intersubjectivity" (IS) and "dissociality." The former is the ability to understand other persons, that is, the basis of our capacity to experience people and social situations as meaningful. IS relies both on perceptive- intuitive as well as cognitive-computational resources. Dissociality addresses the core psychopathological nucleus characterizing the quality of abnormal IS in persons with schizophrenia and covers several dimensions, including disturbances of both perceptive-intuitive and cognitive-computational capacities. The most typical perceptive-intuitive abnormality is hypoattunement, that is, the lack of interpersonal resonance and difficulties in grasping or immediately understanding others' mental states. The most characteristic cognitive-computational anomaly is social hyperreflexivity, especially an algorithmic conception of sociality (an observational/ethological attitude aimed to develop an explicit, often rule-based personal method for participating in social transactions). Other anomalous interpersonal experiences, such as emotional and behavioral responses to others, are also discussed in relation to this core of dissociality. © 2017 S. Karger AG, Basel.

  11. System Resource Allocations | High-Performance Computing | NREL

    Science.gov Websites

    System Resource Allocations: To use NREL's high-performance computing (HPC) resources, allocations cover compute hours on NREL HPC systems including Peregrine and Eagle, and storage space (in terabytes) on Peregrine, Eagle and Gyrfalcon. Allocations are principally made in response to an annual call for allocations.

  12. Computers as learning resources in the health sciences: impact and issues.

    PubMed Central

    Ellis, L B; Hannigan, G G

    1986-01-01

    Starting with two computer terminals in 1972, the Health Sciences Learning Resources Center of the University of Minnesota Bio-Medical Library expanded its instructional facilities to ten terminals and thirty-five microcomputers by 1985. Computer use accounted for 28% of total center circulation. The impact of these resources on health sciences curricula is described and issues related to use, support, and planning are raised and discussed. Judged by their acceptance and educational value, computers are successful health sciences learning resources at the University of Minnesota. PMID:3518843

  13. An emulator for minimizing finite element analysis implementation resources

    NASA Technical Reports Server (NTRS)

    Melosh, R. J.; Utku, S.; Salama, M.; Islam, M.

    1982-01-01

    A finite element analysis emulator providing a basis for efficiently establishing an optimum computer implementation strategy when many calculations are involved is described. The SCOPE emulator determines the computer resources required as a function of the structural model, structural load-deflection equation characteristics, the storage allocation plan, and computer hardware capabilities. Thereby, it provides data for trading off analysis implementation options to arrive at a best strategy. The models contained in SCOPE lead to micro-operation counts for each finite element operation as well as overall computer resource cost estimates. Application of SCOPE to the Memphis-Arkansas bridge analysis provides measures of the accuracy of resource assessments. Data indicate that predictions are within 17.3 percent for calculation times and within 3.2 percent for peripheral storage resources for the ELAS code.

  14. Dynamic virtual machine allocation policy in cloud computing complying with service level agreement using CloudSim

    NASA Astrophysics Data System (ADS)

    Aneri, Parikh; Sumathy, S.

    2017-11-01

    Cloud computing provides services over the internet, delivering application resources and data to users on demand. Cloud computing is based on a consumer-provider model: the cloud provider supplies resources that consumers access in order to build applications according to their demand. A cloud data center is a large pool of shared resources for cloud users to access. Virtualization is the heart of the cloud computing model; it provides virtual machines with application-specific configurations, and applications are free to choose their own configuration. On the one hand there is a huge number of resources, and on the other hand a huge number of requests must be served effectively. Therefore, the resource allocation policy and scheduling policy play a very important role in allocating and managing resources in this cloud computing model. This paper proposes a load balancing policy using the Hungarian algorithm. The Hungarian algorithm provides a dynamic load balancing policy with a monitor component. The monitor component helps increase cloud resource utilization by monitoring the state of the Hungarian algorithm and altering its state based on artificial intelligence. CloudSim, used in this proposal, is an extensible toolkit that simulates the cloud computing environment.
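
    A minimal sketch of a Hungarian-algorithm assignment step in this spirit: virtual machine requests are matched one-to-one to hosts by minimizing an assignment cost, here the fraction of each host's spare capacity a placement would consume. The cost model and figures are illustrative assumptions, and SciPy's linear_sum_assignment is used in place of a hand-written Hungarian solver.

```python
# Minimal sketch of a Hungarian-algorithm step that assigns VM requests to
# hosts. The cost of placing a VM on a host is the fraction of that host's
# spare capacity it would consume, so large VMs gravitate to the least-loaded
# hosts. Figures and the cost model are illustrative assumptions.
import numpy as np
from scipy.optimize import linear_sum_assignment

host_free_mips = np.array([4000, 2500, 3000])     # spare capacity per host
vm_demand_mips = np.array([1500, 2200, 800])      # requested capacity per VM

# Cost matrix: rows are hosts, columns are VMs; infeasible placements are
# penalised heavily so they are never chosen if an alternative exists.
fraction = vm_demand_mips[None, :] / host_free_mips[:, None]
cost = np.where(vm_demand_mips[None, :] <= host_free_mips[:, None], fraction, 1e9)

hosts, vms = linear_sum_assignment(cost)          # Hungarian solve
for h, v in zip(hosts, vms):
    print(f"VM {v} (needs {vm_demand_mips[v]} MIPS) -> host {h}")
```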

  15. 18 CFR 292.314 - Existing rights and remedies.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... purchase electric energy or capacity from or to sell electric energy or capacity to a qualifying... recover costs of purchasing electric energy or capacity). [Order 688, 71 FR 64372, Nov. 1, 2006] ... remedies. 292.314 Section 292.314 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY...

  16. 18 CFR 292.314 - Existing rights and remedies.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... purchase electric energy or capacity from or to sell electric energy or capacity to a qualifying... recover costs of purchasing electric energy or capacity). [Order 688, 71 FR 64372, Nov. 1, 2006] ... remedies. 292.314 Section 292.314 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY...

  17. CASPER Version 2.0

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Rabideau, Gregg; Tran, Daniel; Knight, Russell; Chouinard, Caroline; Estlin, Tara; Gaines, Daniel; Clement, Bradley; Barrett, Anthony

    2007-01-01

    CASPER is designed to perform automated planning of interdependent activities within a system subject to requirements, constraints, and limitations on resources. In contradistinction to the traditional concept of batch planning followed by execution, CASPER implements a concept of continuous planning and replanning in response to unanticipated changes (including failures), integrated with execution. Improvements over other, similar software that have been incorporated into CASPER version 2.0 include an enhanced executable interface to facilitate integration with a wide range of execution software systems and supporting software libraries; features to support execution while reasoning about urgency, importance, and impending deadlines; features that enable accommodation to a wide range of computing environments that include various central processing units and random-access-memory capacities; and improved generic time-server and time-control features.

  18. National survey of emergency departments in Denmark.

    PubMed

    Wen, Leana S; Anderson, Philip D; Stagelund, Søren; Sullivan, Ashley F; Camargo, Carlos A

    2013-06-01

    Emergency departments (EDs) are the basic unit of emergency medicine, but often differ in fundamental features. We sought to describe and characterize EDs in Denmark. All EDs open 24/7 to the general public were surveyed using the National ED Inventories survey instrument (http://www.emnet-nedi.org). ED staff were asked about ED characteristics with reference to the calendar year 2008. Twenty-eight EDs participated (82% response). All were located in hospitals. Less than half [43%, 95% confidence interval (CI) 24-63%] were independent departments. Thirty-nine percent (95% CI 22-59%) had a contiguous layout, with medical and surgical care provided in one area. The vast majority of EDs saw both adults and children; only 10% saw adults only and none saw children only. The median number of annual visits was 32 000 (interquartile range, 14 700-47 000). The majority (68%, 95% CI 47-89%) believed that their ED was at good balance or capacity, with 22% responding that they were under capacity and 9% reporting overcapacity. Technological resources were generally available, with the exception of dedicated computed tomography scanners and negative-pressure rooms. Almost all common emergencies were identified as being treatable 24/7 in the EDs. Although there is some variation in their layout and characteristics, most Danish EDs have a high degree of resource availability and are able to treat common emergencies. As Denmark seeks to reform emergency care through ED consolidation, this national survey helps to establish a benchmark for future comparisons.

  19. Artificial neural networks as quantum associative memory

    NASA Astrophysics Data System (ADS)

    Hamilton, Kathleen; Schrock, Jonathan; Imam, Neena; Humble, Travis

    We present results related to the recall accuracy and capacity of Hopfield networks implemented on commercially available quantum annealers. The use of Hopfield networks and artificial neural networks as content-addressable memories offers robust storage and retrieval of classical information; however, implementation of these models using currently available quantum annealers faces several challenges: the limits of precision when setting synaptic weights, the effects of spurious spin-glass states and the minor embedding of densely connected graphs into fixed-connectivity hardware. We consider neural networks which are less than fully connected, and also consider neural networks which contain multiple sparsely connected clusters. We discuss the effect of weak edge dilution on the accuracy of memory recall, and discuss how the multiple-clique structure affects the storage capacity. Our work focuses on storage of patterns which can be embedded into physical hardware containing n < 1000 qubits. This work was supported by the United States Department of Defense and used resources of the Computational Research and Development Programs at Oak Ridge National Laboratory under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy.
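
    The content-addressable recall behavior discussed above can be illustrated with a minimal classical Hopfield network in Python (Hebbian weights, asynchronous sign updates). This is only the classical model; the quantum-annealer embedding, weight-precision limits, and clique structure studied in the paper are not represented.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian learning: W = (1/N) * sum of outer products, with a zeroed diagonal."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, probe, sweeps=20, rng=None):
    """Asynchronous sign updates of +/-1 units, ideally settling on a stored pattern."""
    rng = rng or np.random.default_rng(0)
    s = probe.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Store two +/-1 patterns and recall from a corrupted probe of the first one.
patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
W = train_hopfield(patterns)
probe = np.array([1, -1, 1, -1, -1, -1])   # pattern 0 with one flipped unit
print(recall(W, probe))
```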

  20. Cell transmission model of dynamic assignment for urban rail transit networks.

    PubMed

    Xu, Guangming; Zhao, Shuo; Shi, Feng; Zhang, Feilian

    2017-01-01

    For urban rail transit network, the space-time flow distribution can play an important role in evaluating and optimizing the space-time resource allocation. For obtaining the space-time flow distribution without the restriction of schedules, a dynamic assignment problem is proposed based on the concept of continuous transmission. To solve the dynamic assignment problem, the cell transmission model is built for urban rail transit networks. The priority principle, queuing process, capacity constraints and congestion effects are considered in the cell transmission mechanism. Then an efficient method is designed to solve the shortest path for an urban rail network, which decreases the computing cost for solving the cell transmission model. The instantaneous dynamic user optimal state can be reached with the method of successive average. Many evaluation indexes of passenger flow can be generated, to provide effective support for the optimization of train schedules and the capacity evaluation for urban rail transit network. Finally, the model and its potential application are demonstrated via two numerical experiments using a small-scale network and the Beijing Metro network.
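
    A minimal sketch of the cell-transmission update on a linear corridor of cells is shown below: the flow between adjacent cells is limited by upstream occupancy, link flow capacity, and the spare space downstream. The priority rules, queuing process, and congestion effects of the full model are omitted, and the parameter values are placeholders.

```python
def ctm_step(occupancy, cell_capacity, flow_capacity, demand_in):
    """One cell-transmission update on a linear corridor of cells.

    The flow from cell i to i+1 is the minimum of what cell i can send, what the
    link can carry per step, and the spare space left in cell i+1."""
    n = len(occupancy)
    flows = [0.0] * (n + 1)
    flows[0] = min(demand_in, flow_capacity, cell_capacity - occupancy[0])  # entering flow
    for i in range(n - 1):
        sending = occupancy[i]
        receiving = cell_capacity - occupancy[i + 1]
        flows[i + 1] = min(sending, flow_capacity, receiving)
    flows[n] = min(occupancy[-1], flow_capacity)                            # exiting flow
    new_occ = [occupancy[i] + flows[i] - flows[i + 1] for i in range(n)]
    return new_occ, flows

occ = [40.0, 80.0, 20.0]
for _ in range(3):
    occ, _ = ctm_step(occ, cell_capacity=100.0, flow_capacity=30.0, demand_in=50.0)
print(occ)
```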

  1. A Computational Model of Spatial Visualization Capacity

    ERIC Educational Resources Information Center

    Lyon, Don R.; Gunzelmann, Glenn; Gluck, Kevin A.

    2008-01-01

    Visualizing spatial material is a cornerstone of human problem solving, but human visualization capacity is sharply limited. To investigate the sources of this limit, we developed a new task to measure visualization accuracy for verbally-described spatial paths (similar to street directions), and implemented a computational process model to…

  2. Analysis of the availability of the resources necessary for urgent and emergency healthcare in São Paulo between 2009-2013.

    PubMed

    Coimbra, Silvana Hebe; Camanho, Eliete Dominguez Lopez; Heringer, Lindolfo Carlos; Botelho, Ricardo Vieira; Vasconcellos, Cidia

    2017-06-01

    The Regulatory Complex is the structure that operationalizes actions for making resources available to meet the needs of urgent and emergency care in the municipality of São Paulo. In the case of urgent care, needs are immediate and associated with high morbidity and mortality. The aim was to identify the most frequently requested resources, the resolution capacity, and the mortality rate associated with the unavailability of a given resource. Our study was based on data from medical bulletins issued by the Urgent and Emergency Regulation Center (CRUE) in the city of São Paulo from 2009 to 2013. A total of 91,823 requests were made over the five years of the study (2009 to 2013). Neurosurgery requests were the most frequent in all years (4,828, 5,159, 4,251, 5,008 and 4,394, respectively), followed by computed tomography (CT) scans, adult intensive care unit (ICU) beds, cardiac catheterization, and pediatric ICU beds. On average, requests for neurosurgery, adult ICU, pediatric ICU, CT scans, catheterization and vascular surgery were answered in 70%, 27%, 39%, 97%, 87% and 77% of cases. The total numbers of deaths relating to requests for neurosurgery, CT scans, adult ICU, pediatric ICU, catheterization and vascular surgeon assessment were 182, 9, 1,536, 1,536, 135, 49 and 24 cases, respectively. There is a lack of resources to meet urgent and emergency needs in the city of São Paulo.

  3. SCEAPI: A unified Restful Web API for High-Performance Computing

    NASA Astrophysics Data System (ADS)

    Rongqiang, Cao; Haili, Xiao; Shasha, Lu; Yining, Zhao; Xiaoning, Wang; Xuebin, Chi

    2017-10-01

    The development of scientific computing is increasingly moving to collaborative web and mobile applications. All these applications need high-quality programming interfaces for accessing heterogeneous computing resources consisting of clusters, grid computing or cloud computing. In this paper, we introduce our high-performance computing environment that integrates computing resources from 16 HPC centers across China. Then we present a bundle of web services called SCEAPI and describe how it can be used to access HPC resources via HTTP or HTTPS. We discuss SCEAPI from several aspects including architecture, implementation and security, and address specific challenges in designing compatible interfaces and protecting sensitive data. We describe the functions of SCEAPI, including authentication, file transfer and job management (creating, submitting and monitoring jobs), and how to use SCEAPI in an easy-to-use way. Finally, we discuss how to exploit more HPC resources quickly for the ATLAS experiment by implementing a custom ARC compute element based on SCEAPI, and our work shows that SCEAPI is an easy-to-use and effective solution to extend opportunistic HPC resources.
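
    As a hedged sketch of how a client might talk to a RESTful HPC gateway of this kind over HTTPS using Python's requests library: the base URL, endpoint paths, token header, and JSON fields below are illustrative assumptions, not SCEAPI's documented interface.

```python
import requests

BASE = "https://sceapi.example.cn/api/v1"      # hypothetical base URL
AUTH = {"Authorization": "Bearer <token>"}     # hypothetical token header

def submit_job(script_path, cores=64):
    """Upload a job script, then submit it; endpoint names and fields are assumptions."""
    with open(script_path, "rb") as f:
        up = requests.post(f"{BASE}/files", headers=AUTH, files={"file": f})
    up.raise_for_status()
    job = {"script": up.json()["path"], "cores": cores, "queue": "work"}
    r = requests.post(f"{BASE}/jobs", headers=AUTH, json=job)
    r.raise_for_status()
    return r.json()["job_id"]

def job_status(job_id):
    """Poll the (assumed) job resource for its current state."""
    r = requests.get(f"{BASE}/jobs/{job_id}", headers=AUTH)
    r.raise_for_status()
    return r.json()["state"]
```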

  4. Capacity market design and renewable energy: Performance incentives, qualifying capacity, and demand curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Byers, Conleigh; Levin, Todd; Botterud, Audun

    A review of capacity markets in the United States in the context of increasing levels of variable renewable energy finds substantial differences with respect to incentives for operational performance, methods to calculate qualifying capacity for variable renewable energy and energy storage, and demand curves for capacity. The review also reveals large differences in historical capacity market clearing prices. The authors conclude that electricity market design must continue to evolve to achieve cost-effective policies for resource adequacy.

  5. Software and resources for computational medicinal chemistry

    PubMed Central

    Liao, Chenzhong; Sitzmann, Markus; Pugliese, Angelo; Nicklaus, Marc C

    2011-01-01

    Computer-aided drug design plays a vital role in drug discovery and development and has become an indispensable tool in the pharmaceutical industry. Computational medicinal chemists can take advantage of all kinds of software and resources in the computer-aided drug design field for the purposes of discovering and optimizing biologically active compounds. This article reviews software and other resources related to computer-aided drug design approaches, putting particular emphasis on structure-based drug design, ligand-based drug design, chemical databases and chemoinformatics tools. PMID:21707404

  6. The potential for gaming techniques in radiology education and practice.

    PubMed

    Reiner, Bruce; Siegel, Eliot

    2008-02-01

    Traditional means of communication, education and training, and research have been dramatically transformed with the advent of computerized medicine, and no other medical specialty has been more greatly affected than radiology. Of the myriad of newer computer applications currently available, computer gaming stands out for its unique potential to enhance end-user performance and job satisfaction. Research in other disciplines has demonstrated computer gaming to offer the potential for enhanced decision making, resource management, visual acuity, memory, and motor skills. Within medical imaging, video gaming provides a novel means to enhance radiologist and technologist performance and visual perception by increasing attentional capacity, visual field of view, and visual-motor coordination. These enhancements take on heightened importance with the increasing size and complexity of three-dimensional imaging datasets. Although these operational gains are important in themselves, psychologic gains intrinsic to video gaming offer the potential to reduce stress and improve job satisfaction by creating a fun and engaging means of spirited competition. By creating customized gaming programs and rewards systems, video game applications can be customized to the skill levels and preferences of individual users, thereby creating a comprehensive means to improve individual and collective job performance.

  7. Training Knowledge Bots for Physics-Based Simulations Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.; Wong, Jay Ming

    2014-01-01

    Millions of complex physics-based simulations are required for the design of an aerospace vehicle. These simulations are usually performed by highly trained and skilled analysts, who execute, monitor, and steer each simulation. Analysts rely heavily on their broad experience that may have taken 20-30 years to accumulate. In addition, the simulation software is complex in nature, requiring significant computational resources. Simulations of systems of systems become even more complex and are beyond human capacity to effectively learn their behavior. IBM has developed machines that can learn and compete successfully with a chess grandmaster and with the most successful Jeopardy! contestants. These machines are capable of learning some complex problems much faster than humans can learn. In this paper, we propose using artificial neural networks to train knowledge bots to identify the idiosyncrasies of simulation software and recognize patterns that can lead to successful simulations. We examine the use of knowledge bots for applications of computational fluid dynamics (CFD), trajectory analysis, commercial finite-element analysis software, and slosh propellant dynamics. We show that machine learning algorithms can be used to learn the idiosyncrasies of computational simulations and identify regions of instability without including any additional information about their mathematical form or applied discretization approaches.

  8. Assessing dementia in resource-poor regions.

    PubMed

    Maestre, Gladys E

    2012-10-01

    The numbers and proportions of elderly are increasing rapidly in developing countries, where prevalence of dementia is often high. Providing cost-effective services for dementia sufferers and their caregivers in these resource-poor regions poses numerous challenges; developing resources for diagnosis must be the first step. Capacity building for diagnosis involves training and education of healthcare providers, as well as the general public, development of infrastructure, and resolution of economic and ethical issues. Recent progress in some low-to-middle-income countries (LMICs) provides evidence that partnerships between wealthy and resource-poor countries, and between developing countries, can improve diagnostic capabilities. Without the involvement of the mental health community of developed countries in such capacity-building programs, dementia in the developing world is a disaster waiting to happen.

  9. Resource allocation and funding challenges for regional local health departments in Nebraska.

    PubMed

    Chen, Li-Wu; Jacobson, Janelle; Roberts, Sara; Palm, David

    2012-01-01

    This study examined the mechanism of resource allocation among member counties and the funding challenges of regional health departments (RHDs) in Nebraska. In 2009, we conducted a qualitative case study of 2 Nebraska RHDs to gain insight into their experiences of making resource allocation decisions and confronting funding challenges. The 2 RHD sites were selected for this case study on the basis of their heterogeneity in terms of population distribution in member counties. Sixteen semistructured in-person interviews were conducted with RHD directors, staff, and board of health members. Interview data were coded and analyzed using NVivo qualitative analysis software (QSR International [Americas] Inc., Cambridge, MA). Our findings suggested that the directors of RHDs play an integral role in making resource allocation decisions on the basis of community needs, not on a formula or on individual county population size. Interviewees also reported that the size of the vulnerable population served by the RHD had a significant impact on the level of resources for the RHD's programs. The RHD's decisions about resource allocation were also dependent on the amount and type of resources received from the state. Interviewees identified inadequacy and instability of funding as the 2 main funding challenges for their RHD. These challenges negatively impacted workforce capacity and the long-term sustainability of some programs. Regional health departments may not benefit from better leveraging resources and building a stronger structural capacity unless the issues of funding inadequacy and instability are addressed. Strategies that RHDs can use to address these funding challenges include seeking grants to support programs, leveraging existing resources, and building community partnerships to share resources. Future research is needed to identify RHDs' optimal workforce capacity, required funding level, and potential funding mechanisms.

  10. Integration of Cloud resources in the LHCb Distributed Computing

    NASA Astrophysics Data System (ADS)

    Úbeda García, Mario; Méndez Muñoz, Víctor; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-06-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it has seamlessly integrated Grid resources and computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack), and it instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment already deployed IaaS in production during 2013. Keeping this in mind, the pros and cons of a cloud-based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs based on the idea of Cloud Site. We report on operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we also describe the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Service as a Service (SaaS) model.

  11. 30 CFR 75.1107-9 - Dry chemical devices; capacity; minimum requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    Title 30 (Mineral Resources), § 75.1107-9, Dry chemical devices; capacity; minimum requirements: (a) dry chemical fire-extinguishing systems used ...; (3) hose and pipe shall be as short as possible; the distance between the chemical container and ...

  12. 30 CFR 75.1107-9 - Dry chemical devices; capacity; minimum requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Title 30 (Mineral Resources), § 75.1107-9, Dry chemical devices; capacity; minimum requirements: (a) dry chemical fire-extinguishing systems used ...; (3) hose and pipe shall be as short as possible; the distance between the chemical container and ...

  13. 30 CFR 75.1107-9 - Dry chemical devices; capacity; minimum requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    Title 30 (Mineral Resources), § 75.1107-9, Dry chemical devices; capacity; minimum requirements: (a) dry chemical fire-extinguishing systems used ...; (3) hose and pipe shall be as short as possible; the distance between the chemical container and ...

  14. 30 CFR 75.1107-9 - Dry chemical devices; capacity; minimum requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    Title 30 (Mineral Resources), § 75.1107-9, Dry chemical devices; capacity; minimum requirements: (a) dry chemical fire-extinguishing systems used ...; (3) hose and pipe shall be as short as possible; the distance between the chemical container and ...

  15. 30 CFR 77.701-3 - Grounding wires; capacity.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Title 30 (Mineral Resources), § 77.701-3, Grounding wires; capacity (Grounding): where grounding wires are used to ground metallic sheaths, armors, conduits, frames, casings, and other metallic enclosures, such grounding wires will be ...

  16. 30 CFR 77.701-3 - Grounding wires; capacity.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    Title 30 (Mineral Resources), § 77.701-3, Grounding wires; capacity (Grounding): where grounding wires are used to ground metallic sheaths, armors, conduits, frames, casings, and other metallic enclosures, such grounding wires will be ...

  17. 18 CFR 294.101 - Shortages of electric energy and capacity.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    Section 294.101, Shortages of electric energy and capacity: Conservation of Power and Water Resources, Federal Energy Regulatory Commission, Department of Energy, Regulations Under the Public Utility Regulatory Policies Act of 1978, Procedures for Shortages of Electric Energy and Capacity under Section 206 of the Public Utility ...

  18. 18 CFR 294.101 - Shortages of electric energy and capacity.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    Section 294.101, Shortages of electric energy and capacity: Conservation of Power and Water Resources, Federal Energy Regulatory Commission, Department of Energy, Regulations Under the Public Utility Regulatory Policies Act of 1978, Procedures for Shortages of Electric Energy and Capacity under Section 206 of the Public Utility ...

  19. Nanoscale molecular communication networks: a game-theoretic perspective

    NASA Astrophysics Data System (ADS)

    Jiang, Chunxiao; Chen, Yan; Ray Liu, K. J.

    2015-12-01

    Currently, communication between nanomachines is an important topic for the development of novel devices. To implement a nanocommunication system, diffusion-based molecular communication is considered as a promising bio-inspired approach. Various technical issues about molecular communications, including channel capacity, noise and interference, and modulation and coding, have been studied in the literature, while the resource allocation problem among multiple nanomachines has not been well investigated, which is a very important issue since all the nanomachines share the same propagation medium. Considering the limited computation capability of nanomachines and the expensive information exchange cost among them, in this paper, we propose a game-theoretic framework for distributed resource allocation in nanoscale molecular communication systems. We first analyze the inter-symbol and inter-user interference, as well as bit error rate performance, in the molecular communication system. Based on the interference analysis, we formulate the resource allocation problem as a non-cooperative molecule emission control game, where the Nash equilibrium is found and proved to be unique. In order to improve the system efficiency while guaranteeing fairness, we further model the resource allocation problem using a cooperative game based on the Nash bargaining solution, which is proved to be proportionally fair. Simulation results show that the Nash bargaining solution can effectively ensure fairness among multiple nanomachines while achieving comparable social welfare performance with the centralized scheme.
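
    The Nash bargaining idea used above for cooperative allocation can be illustrated numerically: over a discretized set of feasible emission rates, pick the pair that maximizes the product of utility gains over the disagreement point. The two-user utility function in this sketch is a toy stand-in, not the paper's molecular interference model.

```python
import numpy as np

def utility(x_self, x_other):
    """Toy utility: own throughput grows with own emission rate but is degraded by
    the other user's interference (a stand-in for the paper's molecular channel)."""
    return np.log1p(x_self) - 0.3 * x_self * x_other

def nash_bargaining(rates, d1=0.0, d2=0.0):
    """Grid-search the emission-rate pair maximizing (u1 - d1)*(u2 - d2), i.e., the NBS."""
    best, best_val = None, -np.inf
    for x1 in rates:
        for x2 in rates:
            g1 = utility(x1, x2) - d1
            g2 = utility(x2, x1) - d2
            if g1 > 0 and g2 > 0 and g1 * g2 > best_val:
                best, best_val = (float(x1), float(x2)), g1 * g2
    return best

rates = np.linspace(0.0, 2.0, 41)
print(nash_bargaining(rates))   # a symmetric pair of emission rates in this toy setting
```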

  20. Carrying capacity in a heterogeneous environment with habitat connectivity.

    PubMed

    Zhang, Bo; Kula, Alex; Mack, Keenan M L; Zhai, Lu; Ryce, Arrix L; Ni, Wei-Ming; DeAngelis, Donald L; Van Dyken, J David

    2017-09-01

    A large body of theory predicts that populations diffusing in heterogeneous environments reach higher total size than if non-diffusing, and, paradoxically, higher size than in a corresponding homogeneous environment. However, this theory and its assumptions have not been rigorously tested. Here, we extended previous theory to include exploitable resources, proving qualitatively novel results, which we tested experimentally using spatially diffusing laboratory populations of yeast. Consistent with previous theory, we predicted and experimentally observed that spatial diffusion increased total equilibrium population abundance in heterogeneous environments, with the effect size depending on the relationship between r and K. Refuting previous theory, however, we discovered that homogeneously distributed resources support higher total carrying capacity than heterogeneously distributed resources, even with species diffusion. Our results provide rigorous experimental tests of new and old theory, demonstrating how the traditional notion of carrying capacity is ambiguous for populations diffusing in spatially heterogeneous environments. © 2017 John Wiley & Sons Ltd/CNRS.
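
    The classical prediction being tested above can be illustrated with a toy two-patch logistic model with diffusion: total equilibrium abundance is compared for heterogeneous versus homogeneous carrying capacities, with and without diffusion. The parameters are arbitrary and the model is far simpler than the paper's resource-explicit theory and experiments.

```python
import numpy as np

def total_equilibrium(K, r=1.0, D=0.5, dt=0.01, steps=50000):
    """Euler-integrate dn_i/dt = r*n_i*(1 - n_i/K_i) + D*(n_j - n_i), return total abundance."""
    n = np.array([0.1, 0.1])
    for _ in range(steps):
        growth = r * n * (1.0 - n / K)
        diffusion = D * (n[::-1] - n)      # exchange between the two patches
        n = n + dt * (growth + diffusion)
    return n.sum()

hetero = np.array([2.0, 0.5])      # heterogeneous carrying capacities (same total as below)
homo = np.array([1.25, 1.25])      # homogeneously distributed carrying capacity
print("heterogeneous + diffusion:", total_equilibrium(hetero))
print("homogeneous + diffusion:  ", total_equilibrium(homo))
print("heterogeneous, no diffusion:", total_equilibrium(hetero, D=0.0))
```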

  1. Carrying capacity in a heterogeneous environment with habitat connectivity

    USGS Publications Warehouse

    Zhang, Bo; Kula, Alex; Mack, Keenan M.L.; Zhai, Lu; Ryce, Arrix L.; Ni, Wei-Ming; DeAngelis, Donald L.; Van Dyken, J. David

    2017-01-01

    A large body of theory predicts that populations diffusing in heterogeneous environments reach higher total size than if non-diffusing, and, paradoxically, higher size than in a corresponding homogeneous environment. However, this theory and its assumptions have not been rigorously tested. Here, we extended previous theory to include exploitable resources, proving qualitatively novel results, which we tested experimentally using spatially diffusing laboratory populations of yeast. Consistent with previous theory, we predicted and experimentally observed that spatial diffusion increased total equilibrium population abundance in heterogeneous environments, with the effect size depending on the relationship between r and K. Refuting previous theory, however, we discovered that homogeneously distributed resources support higher total carrying capacity than heterogeneously distributed resources, even with species diffusion. Our results provide rigorous experimental tests of new and old theory, demonstrating how the traditional notion of carrying capacity is ambiguous for populations diffusing in spatially heterogeneous environments.

  2. Capacity Building and Financing Oral Health in the African and Middle East Region.

    PubMed

    Mumghamba, E G; Joury, E; Fatusi, O; Ober-Oluoch, J; Onigbanjo, R J; Honkala, S

    2015-07-01

    Many low- and middle-income countries do not yet have policies to implement effective oral health programs. A reason is lack of human and financial resources. Gaps between resource needs and available health funding are widening. By building capacity, countries aim to improve oral health through actions by oral health care personnel and oral health care organizations and their communities. Capacity building involves achieving measurable and sustainable results in training, research, and provision of care. Actions include advancement of knowledge, attitudes and skills, expansion of support, and development of cohesiveness and partnerships. The aim of this critical review is to review existing knowledge and identify gaps and variations between and within different income levels in relation to the capacity building and financing oral health in the African and Middle East region (AMER). A second aim is to formulate research priorities and outline a research agenda for capacity building and financing to improve oral health and reduce oral health inequalities in the AMER. The article focuses on capacity building for oral health and oral health financing in the AMER of the IADR. In many communities in the AMER, there are clear and widening gaps between the dental needs and the existing capacity to meet these needs in terms of financial and human resources. Concerted efforts are required to improve access to oral health care through appropriate financing mechanisms, innovative health insurance schemes, and donor support and move toward universal oral health care coverage to reduce social inequality in the region. It is necessary to build capacity and incentivize the workforce to render evidence-based services as well as accessing funds to conduct research on equity and social determinants of oral health while promoting community engagement and a multidisciplinary approach. © International & American Associations for Dental Research 2015.

  3. Community Capacity for Implementing Clean Development Mechanism Projects Within Community Forests in Cameroon

    PubMed Central

    McCall, Michael K.; Bressers, Hans Th. A.

    2007-01-01

    There is a growing assumption that payments for environmental services including carbon sequestration and greenhouse gas emission reduction provide an opportunity for poverty reduction and the enhancement of sustainable development within integrated natural resource management approaches. Yet in experiential terms, community-based natural resource management implementation falls short of expectations in many cases. In this paper, we investigate the asymmetry between community capacity and the Land Use Land Use Change Forestry (LULUCF) provisions of the Clean Development Mechanism within community forests in Cameroon. We use relevant aspects of the Clean Development Mechanism criteria and notions of “community capacity” to elucidate determinants of community capacity needed for CDM implementation within community forests. The main requirements are for community capacity to handle issues of additionality, acceptability, externalities, certification, and community organisation. These community capacity requirements are further used to interpret empirically derived insights on two community forestry cases in Cameroon. While local variations were observed for capacity requirements in each case, community capacity was generally found to be insufficient for meaningful uptake and implementation of Clean Development Mechanism projects. Implications for understanding factors that could inhibit or enhance community capacity for project development are discussed. We also include recommendations for the wider Clean Development Mechanism/Kyoto capacity building framework. PMID:17377732

  4. Synaptic efficacy shapes resource limitations in working memory.

    PubMed

    Krishnan, Nikhil; Poll, Daniel B; Kilpatrick, Zachary P

    2018-06-01

    Working memory (WM) is limited in its temporal length and capacity. Classic conceptions of WM capacity assume the system possesses a finite number of slots, but recent evidence suggests WM may be a continuous resource. Resource models typically assume there is no hard upper bound on the number of items that can be stored, but WM fidelity decreases with the number of items. We analyze a neural field model of multi-item WM that associates each item with the location of a bump in a finite spatial domain, considering items that span a one-dimensional continuous feature space. Our analysis relates the neural architecture of the network to accumulated errors and capacity limitations arising during the delay period of a multi-item WM task. Networks with stronger synapses support wider bumps that interact more, whereas networks with weaker synapses support narrower bumps that are more susceptible to noise perturbations. There is an optimal synaptic strength that both limits bump interaction events and the effects of noise perturbations. This optimum shifts to weaker synapses as the number of items stored in the network is increased. Our model not only provides a circuit-based explanation for WM capacity, but also speaks to how capacity relates to the arrangement of stored items in a feature space.

  5. Wireless Shared Resources: Sharing Right-Of-Way For Wireless Telecommunications, Guidance On Legal And Institutional Issues

    DOT National Transportation Integrated Search

    1997-06-06

    Public-private partnerships: shared resource projects are public-private arrangements that involve sharing public property, such as rights-of-way, and private resources, such as telecommunications capacity and expertise. Typically, private telecommuni...

  6. Design & implementation of distributed spatial computing node based on WPS

    NASA Astrophysics Data System (ADS)

    Liu, Liping; Li, Guoqing; Xie, Jibo

    2014-03-01

    Currently, research on SIG (Spatial Information Grid) technology mostly emphasizes spatial data sharing in grid environments, while the importance of spatial computing resources is ignored. In order to implement the sharing and cooperation of spatial computing resources in a grid environment, this paper presents a systematic study of the key technologies for constructing a Spatial Computing Node based on the WPS (Web Processing Service) specification by OGC (Open Geospatial Consortium). A framework for the Spatial Computing Node is designed according to the features of spatial computing resources. Finally, a prototype Spatial Computing Node is implemented and verified in this environment.

  7. Economic models for management of resources in peer-to-peer and grid computing

    NASA Astrophysics Data System (ADS)

    Buyya, Rajkumar; Stockinger, Heinz; Giddy, Jonathan; Abramson, David

    2001-07-01

    The accelerated development in Peer-to-Peer (P2P) and Grid computing has positioned them as promising next generation computing platforms. They enable the creation of Virtual Enterprises (VE) for sharing resources distributed across the world. However, resource management, application development, and usage models in these environments are complex undertakings. This is due to the geographic distribution of resources that are owned by different organizations or peers. The owners of these resources have different usage or access policies and cost models, and varying loads and availability. In order to address complex resource management issues, we have proposed a computational economy framework for resource allocation and for regulating supply and demand in Grid computing environments. The framework provides mechanisms for optimizing resource provider and consumer objective functions through trading and brokering services. In a real-world market, there exist various economic models for setting the price of goods based on supply and demand and their value to the user. They include commodity markets, posted prices, tenders and auctions. In this paper, we discuss the use of these models for interaction between Grid components in deciding resource value and the necessary infrastructure to realize them. In addition to the normal services offered by Grid computing systems, we need an infrastructure to support interaction protocols, allocation mechanisms, currency, secure banking, and enforcement services. Furthermore, we demonstrate the usage of some of these economic models in resource brokering through Nimrod/G deadline- and cost-based scheduling for two different optimization strategies on the World Wide Grid (WWG) testbed that contains peer-to-peer resources located on five continents: Asia, Australia, Europe, North America, and South America.
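
    In the spirit of the deadline- and cost-based brokering mentioned above, the sketch below greedily assigns independent jobs to the cheapest resources that can still finish before a deadline. The resource attributes and the greedy policy are illustrative assumptions, not the actual Nimrod/G scheduling algorithm.

```python
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    cost_per_job: float   # price charged per job
    rate: float           # jobs the resource can complete per hour

def cost_optimized_schedule(n_jobs, resources, deadline_hours):
    """Assign jobs to the cheapest resources first, never exceeding what each resource
    can finish before the deadline; returns the allocation and its total cost."""
    allocation, total_cost, remaining = {}, 0.0, n_jobs
    for r in sorted(resources, key=lambda r: r.cost_per_job):
        if remaining <= 0:
            break
        take = min(int(r.rate * deadline_hours), remaining)
        if take > 0:
            allocation[r.name] = take
            total_cost += take * r.cost_per_job
            remaining -= take
    if remaining > 0:
        raise ValueError("deadline cannot be met with the available resources")
    return allocation, total_cost

pool = [Resource("local-cluster", 0.5, 10.0), Resource("cloud-burst", 2.0, 100.0)]
print(cost_optimized_schedule(150, pool, deadline_hours=4.0))
```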

  8. A cross-sectional evaluation of computer literacy among medical students at a tertiary care teaching hospital in Mumbai, Bombay.

    PubMed

    Panchabhai, T S; Dangayach, N S; Mehta, V S; Patankar, C V; Rege, N N

    2011-01-01

    Computer usage capabilities of medical students have not been adequately assessed for the introduction of computer-aided learning. This was a cross-sectional study to evaluate computer literacy among medical students at a tertiary care teaching hospital in Mumbai, India. Participants were administered a 52-question questionnaire designed to study their background, computer resources, computer usage, activities enhancing computer skills, and attitudes toward computer-aided learning (CAL). The data were classified on the basis of sex, native place, and year of medical school, and the computer resources were compared. Computer usage and attitudes toward computer-based learning were assessed on a five-point Likert scale to calculate a Computer Usage Score (CUS - maximum 55, minimum 11) and an Attitude Score (AS - maximum 60, minimum 12). The quartile distributions among the groups with respect to the CUS and AS were compared by chi-squared tests. The correlation between CUS and AS was then tested. Eight hundred and seventy-five students agreed to participate in the study and 832 completed the questionnaire. One hundred and twenty-eight questionnaires were excluded and 704 were analyzed. Outstation students had significantly fewer computer resources than local students (P<0.0001). The mean CUS for local students (27.0±9.2, mean±SD) was significantly higher than that for outstation students (23.2±9.05). No such difference was observed for the AS. The means of CUS and AS did not differ between males and females. The CUS and AS had positive but weak correlations for all subgroups. The weak correlation between AS and CUS for all students could be explained by the lack of computer resources or inadequate training to use computers for learning. Providing additional resources would benefit the subset of outstation students with fewer computer resources. This weak correlation between the attitudes and practices of all students needs to be investigated. We believe that this gap can be bridged with a structured computer learning program.
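
    Likert-scale totals such as the CUS and AS described above, and the correlation between them, can be computed as in the sketch below; the item counts match the abstract (11 usage items, 12 attitude items), but the responses are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(42)
n_students = 704

# Placeholder Likert responses (1-5): 11 computer-usage items and 12 attitude items.
usage_items = rng.integers(1, 6, size=(n_students, 11))
attitude_items = rng.integers(1, 6, size=(n_students, 12))

cus = usage_items.sum(axis=1)          # Computer Usage Score, range 11-55
att = attitude_items.sum(axis=1)       # Attitude Score, range 12-60

r = np.corrcoef(cus, att)[0, 1]        # Pearson correlation between the two scores
print(f"CUS mean {cus.mean():.1f}, AS mean {att.mean():.1f}, r = {r:.2f}")
```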

  9. Water Resource Management Mechanisms for Intrastate Violent Conflict Resolution: the Capacity Gap and What To Do About It.

    NASA Astrophysics Data System (ADS)

    Workman, M.; Veilleux, J. C.

    2014-12-01

    Violent conflict and issues surrounding available water resources are both global problems and are connected. Violent conflict is increasingly intrastate in nature and, coupled with increased hydrological variability as a function of climate change, there will be increased pressures on water resource use. The majority of mechanisms designed to secure water resources are often based on the presence of a governance framework or another type of institutional capacity, such as offered through a supra- or sub-national organization like the United Nations or a river basin organization. However, institutional frameworks are not present, or lose functionality, during violent conflict. Therefore, it will likely be extremely difficult to secure water resources for a significant proportion of populations in Fragile and Conflict Affected States. At the same time, the capacity of Organisation for Economic Co-operation and Development nations for the appropriate interventions to address this problem is reduced by an increasing reluctance to participate in interventionist operations following a decade of expeditionary warfighting, mainly in Iraq and Afghanistan, and related defence cuts. Therefore, future interventions in violent conflict and securing water resources may be more indirect in nature. This paper assesses the state of understanding of key areas in the present literature and highlights the gap of securing water resources during violent conflict in the absence of institutional capacity. There is a need to close this gap as a matter of urgency by formulating frameworks to assess the lack of institutional oversight of water resources in areas where violent conflict is prevalent; developing inclusive resource management platforms through transparency and reconciliation mechanisms; and developing endogenous confidence-building measures and evaluating how these may be encouraged by exogenous initiatives, including those facilitated by the international community. This effort will require the development of collaborations between academia, the NGO sector, and national aid agencies in order to allow the development of the appropriate tools, understanding in a broad range of contexts, and the mechanisms that can be brought to bear to address this increasingly important area.

  10. Impact of remote sensing upon the planning, management, and development of water resources

    NASA Technical Reports Server (NTRS)

    Loats, H. L.; Fowler, T. R.; Frech, S. L.

    1974-01-01

    A survey of the principal water resource users was conducted to determine the impact of new remote data streams on hydrologic computer models. The analysis of the responses and direct contact demonstrated that: (1) the majority of water resource effort of the type suitable to remote sensing inputs is conducted by major federal water resources agencies or through federally stimulated research, (2) the federal government develops most of the hydrologic models used in this effort; and (3) federal computer power is extensive. The computers, computer power, and hydrologic models in current use were determined.

  11. High Speed Computing, LANs, and WAMs

    NASA Technical Reports Server (NTRS)

    Bergman, Larry A.; Monacos, Steve

    1994-01-01

    Optical fiber networks may one day offer potential capacities exceeding 10 terabits/sec. This paper describes present gigabit network techniques for distributed computing as illustrated by the CASA gigabit testbed, and then explores future all-optic network architectures that offer increased capacity, more optimized level of service for a given application, high fault tolerance, and dynamic reconfigurability.

  12. Research status of geothermal resources in China

    NASA Astrophysics Data System (ADS)

    Zhang, Lincheng; Li, Guang

    2017-08-01

    As a representative new green energy source, geothermal resources are characterized by large reserves, wide distribution, cleanness and environmental protection, good stability, high utilization factors and other advantages. According to the characteristics of exploitation and utilization, they can be divided into high-temperature, medium-temperature and low-temperature geothermal resources. The abundant and widely distributed geothermal resources in China have a broad prospect for development. The medium- and low-temperature geothermal resources are broadly distributed in the continental crustal uplift and subsidence areas inside the plate, represented by the geothermal belt on the southeast coast, while the high-temperature geothermal resources concentrate on the Southern Tibet-Western Sichuan-Western Yunnan Geothermal Belt and the Taiwan Geothermal Belt. Currently, the geothermal resources in China are mainly used for bathing, recuperation, heating and power generation, and China makes the greatest direct use of geothermal energy in the world. However, China's geothermal power generation, in both installed generating capacity and electricity generated, is far behind that of Western European countries and the USA. Studies on the exploitation and development of geothermal resources are still weak.

  13. Metatranscriptome analyses indicate resource partitioning between diatoms in the field.

    PubMed

    Alexander, Harriet; Jenkins, Bethany D; Rynearson, Tatiana A; Dyhrman, Sonya T

    2015-04-28

    Diverse communities of marine phytoplankton carry out half of global primary production. The vast diversity of the phytoplankton has long perplexed ecologists because these organisms coexist in an isotropic environment while competing for the same basic resources (e.g., inorganic nutrients). Differential niche partitioning of resources is one hypothesis to explain this "paradox of the plankton," but it is difficult to quantify and track variation in phytoplankton metabolism in situ. Here, we use quantitative metatranscriptome analyses to examine pathways of nitrogen (N) and phosphorus (P) metabolism in diatoms that cooccur regularly in an estuary on the east coast of the United States (Narragansett Bay). Expression of known N and P metabolic pathways varied between diatoms, indicating apparent differences in resource utilization capacity that may prevent direct competition. Nutrient amendment incubations skewed N/P ratios, elucidating nutrient-responsive patterns of expression and facilitating a quantitative comparison between diatoms. The resource-responsive (RR) gene sets deviated in composition from the metabolic profile of the organism, being enriched in genes associated with N and P metabolism. Expression of the RR gene set varied over time and differed significantly between diatoms, resulting in opposite transcriptional responses to the same environment. Apparent differences in metabolic capacity and the expression of that capacity in the environment suggest that diatom-specific resource partitioning was occurring in Narragansett Bay. This high-resolution approach highlights the molecular underpinnings of diatom resource utilization and how cooccurring diatoms adjust their cellular physiology to partition their niche space.

  14. Resource Provisioning in SLA-Based Cluster Computing

    NASA Astrophysics Data System (ADS)

    Xiong, Kaiqi; Suh, Sang

    Cluster computing is excellent for parallel computation and has become increasingly popular. In cluster computing, a service level agreement (SLA) is a set of quality-of-service (QoS) requirements and a fee agreed between a customer and an application service provider, and it plays an important role in an e-business application. An application service provider uses a set of cluster computing resources to support e-business applications subject to an SLA. In this paper, the QoS metrics include percentile response time and cluster utilization. We present an approach for resource provisioning in such an environment that minimizes the total cost of the cluster computing resources used by an application service provider for an e-business application that often requires parallel computation for high service performance, availability, and reliability, while satisfying the QoS and fee negotiated between the customer and the application service provider. Simulation experiments demonstrate the applicability of the approach.

  15. Acausal measurement-based quantum computing

    NASA Astrophysics Data System (ADS)

    Morimae, Tomoyuki

    2014-07-01

    In measurement-based quantum computing, there is a natural "causal cone" among qubits of the resource state, since the measurement angle on a qubit has to depend on previous measurement results in order to correct the effect of by-product operators. If we respect the no-signaling principle, by-product operators cannot be avoided. Here we study the possibility of acausal measurement-based quantum computing by using the process matrix framework [Oreshkov, Costa, and Brukner, Nat. Commun. 3, 1092 (2012), 10.1038/ncomms2076]. We construct a resource process matrix for acausal measurement-based quantum computing restricting local operations to projective measurements. The resource process matrix is an analog of the resource state of the standard causal measurement-based quantum computing. We find that if we restrict local operations to projective measurements the resource process matrix is (up to a normalization factor and trivial ancilla qubits) equivalent to the decorated graph state created from the graph state of the corresponding causal measurement-based quantum computing. We also show that it is possible to consider a causal game whose causal inequality is violated by acausal measurement-based quantum computing.

  16. Step-by-step magic state encoding for efficient fault-tolerant quantum computation

    PubMed Central

    Goto, Hayato

    2014-01-01

    Quantum error correction allows one to make quantum computers fault-tolerant against unavoidable errors due to decoherence and imperfect physical gate operations. However, the fault-tolerant quantum computation requires impractically large computational resources for useful applications. This is a current major obstacle to the realization of a quantum computer. In particular, magic state distillation, which is a standard approach to universality, consumes the most resources in fault-tolerant quantum computation. For the resource problem, here we propose step-by-step magic state encoding for concatenated quantum codes, where magic states are encoded step by step from the physical level to the logical one. To manage errors during the encoding, we carefully use error detection. Since the sizes of intermediate codes are small, it is expected that the resource overheads will become lower than previous approaches based on the distillation at the logical level. Our simulation results suggest that the resource requirements for a logical magic state will become comparable to those for a single logical controlled-NOT gate. Thus, the present method opens a new possibility for efficient fault-tolerant quantum computation. PMID:25511387

  17. Step-by-step magic state encoding for efficient fault-tolerant quantum computation.

    PubMed

    Goto, Hayato

    2014-12-16

    Quantum error correction allows one to make quantum computers fault-tolerant against unavoidable errors due to decoherence and imperfect physical gate operations. However, the fault-tolerant quantum computation requires impractically large computational resources for useful applications. This is a current major obstacle to the realization of a quantum computer. In particular, magic state distillation, which is a standard approach to universality, consumes the most resources in fault-tolerant quantum computation. For the resource problem, here we propose step-by-step magic state encoding for concatenated quantum codes, where magic states are encoded step by step from the physical level to the logical one. To manage errors during the encoding, we carefully use error detection. Since the sizes of intermediate codes are small, it is expected that the resource overheads will become lower than previous approaches based on the distillation at the logical level. Our simulation results suggest that the resource requirements for a logical magic state will become comparable to those for a single logical controlled-NOT gate. Thus, the present method opens a new possibility for efficient fault-tolerant quantum computation.

  18. A Review of Resources for Evaluating K-12 Computer Science Education Programs

    ERIC Educational Resources Information Center

    Randolph, Justus J.; Hartikainen, Elina

    2004-01-01

    Since computer science education is a key to preparing students for a technologically-oriented future, it makes sense to have high quality resources for conducting summative and formative evaluation of those programs. This paper describes the results of a critical analysis of the resources for evaluating K-12 computer science education projects.…

  19. Computing the Envelope for Stepwise Constant Resource Allocations

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Clancy, Daniel (Technical Monitor)

    2001-01-01

    Estimating tight resource levels is a fundamental problem in the construction of flexible plans with resource utilization. In this paper we describe an efficient algorithm that builds a resource envelope, the tightest possible such bound. The algorithm is based on transforming the temporal network of resource-consuming and -producing events into a flow network with nodes corresponding to the events and edges corresponding to the necessary predecessor links between events. The incremental solution of a staged maximum flow problem on the network is then used to compute the time of occurrence and the height of each step of the resource envelope profile. The staged algorithm has the same computational complexity as solving a maximum flow problem on the entire flow network. This makes the method computationally feasible for use in the inner loop of search-based scheduling algorithms.
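
    The max-flow primitive at the core of the envelope computation can be sketched with networkx: events become nodes, predecessor links become edges, and a maximum flow is computed between a source feeding producer events and a sink draining consumer events. This toy network shows the flow computation only; the staged construction of the envelope profile from the paper is not reproduced.

```python
import networkx as nx

# Toy flow network: a source S feeds resource-producing events, a sink T drains
# consuming events, and internal edges encode necessary predecessor links
# (omitting the capacity attribute makes an edge unbounded in networkx).
G = nx.DiGraph()
G.add_edge("S", "produce_A", capacity=3)   # event A produces 3 units
G.add_edge("S", "produce_B", capacity=2)   # event B produces 2 units
G.add_edge("consume_C", "T", capacity=4)   # event C consumes 4 units
G.add_edge("produce_A", "consume_C")
G.add_edge("produce_B", "consume_C")

flow_value, flow_dict = nx.maximum_flow(G, "S", "T")
print(flow_value)               # 4: the production that can be matched against C's consumption
print(flow_dict["produce_A"])   # how much of A's production the flow routes to C
```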

  20. Assessment of Seasonal Water Balance Components over India Using Macroscale Hydrological Model

    NASA Astrophysics Data System (ADS)

    Joshi, S.; Raju, P. V.; Hakeem, K. A.; Rao, V. V.; Yadav, A.; Issac, A. M.; Diwakar, P. G.; Dadhwal, V. K.

    2016-12-01

    Hydrological models provide water balance components that are useful for water resources assessment and for capturing seasonal changes and the impacts of anthropogenic interventions and climate change. The study described here is a national-level modeling framework for India that uses a wide range of geo-spatial and hydro-meteorological data sets to estimate daily Water Balance Components (WBCs) at 0.15º grid resolution using the Variable Infiltration Capacity model. The model parameters were optimized through calibration of model-computed streamflow against field observations, yielding Nash-Sutcliffe efficiencies between 0.5 and 0.7. The state variables, evapotranspiration (ET) and soil moisture, were also validated, obtaining R2 values of 0.57 and 0.69, respectively. Using long-term meteorological data sets, model computations were carried out to capture hydrological extremes. During the 2013, 2014 and 2015 monsoon seasons, WBCs were estimated and published in a web portal with a 2-day time lag. When disaster events occurred, weather forecasts were ingested and high surface runoff zones were identified for forewarning and disaster preparedness. Cumulative monsoon season rainfall in 2013, 2014 and 2015 was 105, 89 and 91% of the long period average (LPA), respectively (Source: India Meteorological Department). Analysis of WBCs indicated that the corresponding seasonal surface runoff was 116, 81 and 86% of LPA and evapotranspiration was 109, 104 and 90% of LPA. Using the grid-wise data, the spatial variation in WBCs among river basins/administrative regions was derived to capture the changes in surface runoff and ET between the years and in comparison with the LPA. The model framework is operational and provides a periodic account of national-level water balance fluxes, which are useful for quantifying spatial and temporal variation in basin/sub-basin scale water resources and for periodic water budgeting, forming vital inputs for studies on water resources and climate change.
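
    The calibration metrics quoted above (Nash-Sutcliffe efficiency and R2) can be computed directly from paired observed and simulated series, as in the sketch below with placeholder streamflow data.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1.0 is a perfect fit."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def r_squared(obs, sim):
    """Squared Pearson correlation between observed and simulated values."""
    return np.corrcoef(obs, sim)[0, 1] ** 2

# Placeholder daily streamflow (m^3/s): observed vs. model-computed.
obs = np.array([120.0, 150.0, 300.0, 280.0, 90.0, 60.0])
sim = np.array([110.0, 160.0, 270.0, 300.0, 100.0, 75.0])
print(f"NSE = {nash_sutcliffe(obs, sim):.2f}, R^2 = {r_squared(obs, sim):.2f}")
```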

  1. COMPUTATIONAL TOXICOLOGY-WHERE IS THE DATA? ...

    EPA Pesticide Factsheets

    This talk will briefly describe the state of the data world for computational toxicology and one approach to improve the situation, called ACToR (Aggregated Computational Toxicology Resource).

  2. LaRC local area networks to support distributed computing

    NASA Technical Reports Server (NTRS)

    Riddle, E. P.

    1984-01-01

    The Langley Research Center's (LaRC) Local Area Network (LAN) effort is discussed. LaRC initiated the development of a LAN to support a growing distributed computing environment at the Center. The purpose of the network is to provide an improved capability (over interactive and RJE terminal access) for sharing multivendor computer resources. Specifically, the network will provide a data highway for the transfer of files between mainframe computers, minicomputers, work stations, and personal computers. An important influence on the overall network design was the vital need of LaRC researchers to efficiently utilize the large CDC mainframe computers in the central scientific computing facility. Although there was a steady migration from a centralized to a distributed computing environment at LaRC in recent years, the work load on the central resources increased. Major emphasis in the network design was on communication with the central resources within the distributed environment. The network to be implemented will allow researchers to utilize the central resources, distributed minicomputers, work stations, and personal computers to obtain the proper level of computing power to efficiently perform their jobs.

  3. 78 FR 37537 - Centralized Capacity Markets in Regional Transmission Organizations and Independent System...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-21

    The notice concerns existing centralized capacity markets (e.g., resource adequacy, long-term price signals, fixed-cost ...). Contact for logistical information: Sarah McKinley, Office of External Affairs, Federal Energy ...

  4. Utilization of an interorganizational network analysis to evaluate the development of community capacity among a community-academic partnership.

    PubMed

    Clark, Heather R; Ramirez, Albert; Drake, Kelly N; Beaudoin, Christopher E; Garney, Whitney R; Wendel, Monica L; Outley, Corliss; Burdine, James N; Player, Harold D

    2014-01-01

    Following a community health assessment the Brazos Valley Health Partnership (BVHP) organized to address fragmentation of services and local health needs. This regional partnership employs the fundamental principles of community-based participatory research, fostering an equitable partnership with the aim of building community capacity to address local health issues. This article describes changes in relationships as a result of capacity building efforts in a community-academic partnership. Growth in network structure among organizations is hypothesized to be indicative of less fragmentation of services for residents and increased capacity of the BVHP to collectively address local health issues. Each of the participant organizations responded to a series of questions regarding its relationships with other organizations. Each organization was asked about information sharing, joint planning, resource sharing, and formal agreements with other organizations. The network survey has been administered 3 times between 2004 and 2009. Network density increased for sharing information and jointly planning events. Growth in the complexity of relationships was reported for sharing tangible resources and formal agreements. The average number of ties between organizations as well as the strength of relationships increased. This study provides evidence that the community capacity building efforts within these communities have contributed to beneficial changes in interorganizational relationships. Results from this analysis are useful for understanding how a community partnership's efforts to address access to care can strengthen a community's capacity for future action. Increased collaboration also leads to new assets, resources, and the transfer of knowledge and skills.
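
    Network density (the share of possible ties that are present) and average degree, the kinds of measures reported above, can be computed with networkx, as in the sketch below on a toy interorganizational tie list.

```python
import networkx as nx

# Toy information-sharing ties among partner organizations.
ties = [("health_dept", "clinic"), ("health_dept", "university"),
        ("clinic", "food_bank"), ("university", "food_bank")]

G = nx.Graph(ties)
density = nx.density(G)                                           # fraction of possible ties realized
avg_degree = sum(d for _, d in G.degree()) / G.number_of_nodes()  # mean ties per organization
print(f"density = {density:.2f}, average ties per organization = {avg_degree:.1f}")
```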

  5. CO2 sequestration: Storage capacity guideline needed

    USGS Publications Warehouse

    Frailey, S.M.; Finley, R.J.; Hickman, T.S.

    2006-01-01

    Petroleum reserves are classified for the assessment of available supplies by governmental agencies, management of business processes for achieving exploration and production efficiency, and documentation of the value of reserves and resources in financial statements. To date, however, the storage capacity determinations made by some organizations in initial CO2 resource assessments have been technically incorrect. New publications should therefore address differences in mineral adsorption of CO2 and dissolution of CO2 in various brine waters.

  6. Regionalization as an approach to regulatory systems strengthening: a case study in CARICOM member states.

    PubMed

    Preston, Charles; Chahal, Harinder S; Porrás, Analia; Cargill, Lucette; Hinds, Maryam; Olowokure, Babatunde; Cummings, Rudolph; Hospedales, James

    2016-05-01

    Improving basic capacities for regulation of medicines and health technologies through regulatory systems strengthening is particularly challenging in resource-constrained settings. "Regionalization"-an approach in which countries with common histories, cultural values, languages, and economic conditions work together to establish more efficient systems-may be one answer. This report describes the Caribbean Regulatory System (CRS), a regionalization initiative being implemented in the mostly small countries of the Caribbean Community and Common Market (CARICOM). This initiative is an innovative effort to strengthen regulatory systems in the Caribbean, where capacity is limited compared to other subregions of the Americas. The initiative's concept and design includes a number of features and steps intended to enhance sustainability in resource-constrained contexts. The latter include 1) leveraging existing platforms for centralized cooperation, governance, and infrastructure; 2) strengthening regulatory capacities with the largest potential public health impact; 3) incorporating policies that promote reliance on reference authorities; 4) changing the system to encourage industry to market their products in CARICOM (e.g., using a centralized portal of entry to reduce regulatory burdens); and 5) building human resource capacity. If implemented properly, the CRS will be self-sustaining through user fees. The experience and lessons learned thus far in implementing this initiative, described in this report, can serve as a case study for the development of similar regulatory strengthening initiatives in resource-constrained environments.

  7. 2015 California Demand Response Potential Study - Charting California’s Demand Response Future. Interim Report on Phase 1 Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alstone, Peter; Potter, Jennifer; Piette, Mary Ann

    Demand response (DR) is an important resource for keeping the electricity grid stable and efficient; deferring upgrades to generation, transmission, and distribution systems; and providing other customer economic benefits. This study estimates the potential size and cost of the available DR resource for California’s three investor-owned utilities (IOUs), as the California Public Utilities Commission (CPUC) evaluates how to enhance the role of DR in meeting California’s resource planning needs and operational requirements. As the state forges a clean energy future, the contributions of wind and solar electricity from centralized and distributed generation will fundamentally change the power grid’s operational dynamics. This transition requires careful planning to ensure sufficient capacity is available with the right characteristics – flexibility and fast response – to meet reliability needs. Illustrated is a snapshot of how net load (the difference between demand and intermittent renewables) is expected to shift. Increasing contributions from renewable generation introduces steeper ramps and a shift, into the evening, of the hours that drive capacity needs. These hours of peak capacity need are indicated by the black dots on the plots. Ultimately this study quantifies the ability and the cost of using DR resources to help meet the capacity need at these forecasted critical hours in the state.

  8. China's human resources for maternal and child health: a national sampling survey.

    PubMed

    Ren, Zhenghong; Song, Peige; Theodoratou, Evropi; Guo, Sufang; An, Lin

    2015-12-16

    In order to achieve the Millennium Development Goals (MDG) 4 and 5, the Chinese Government has invested greatly in improving maternal and child health (MCH) with impressive results. However, one of the most important barriers for further improvement is the uneven distribution of MCH human resources. There is little information about the distribution, quantity and capacity of the Chinese MCH human resources and we sought to investigate this. Cities at prefectural level were selected by random cluster sampling. All medical and health institutions providing MCH-related services in the sampled areas were investigated using a structured questionnaire. The data were weighted based on the proportion of the sampled districts/cities. Amount, proportions and numbers per 10,000 population of MCH human resources were estimated in order to reveal the quantity of the Chinese MCH human resources. The capacity of MCH human resources was evaluated by analyzing data on the education level and professional skills of the staff. There were 77,248 MCH workers in China in 2010. In general, 67.6% and 71.9% of the women's and children's health care professionals had an associate degree or higher, whereas around 30% had only high-school or lower degrees. More than 40% of the women's health workers were capable of providing skilled birth attendance, but these proportions varied between different institutions and locations. Evidence from this study highlights that Chinese MCH human resources are not in shortage at the national level. However, the quantity and capacity of MCH human resources are not evenly distributed among different institutions and locations. Finally, there is a need to improve MCH services by improving the quality of MCH human resources.

  9. 30 CFR 75.1107-10 - High expansion foam devices; minimum capacity.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false High expansion foam devices; minimum capacity... foam devices; minimum capacity. (a) On unattended underground equipment the amount of water delivered as high expansion foam for a period of approximately 20 minutes shall be not less than 0.06 gallon...

  10. 30 CFR 75.1107-10 - High expansion foam devices; minimum capacity.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false High expansion foam devices; minimum capacity... foam devices; minimum capacity. (a) On unattended underground equipment the amount of water delivered as high expansion foam for a period of approximately 20 minutes shall be not less than 0.06 gallon...

  11. Working Memory Capacity and Reading Skill Moderate the Effectiveness of Strategy Training in Learning from Hypertext

    ERIC Educational Resources Information Center

    Naumann, Johannes; Richter, Tobias; Christmann, Ursula; Groeben, Norbert

    2008-01-01

    Cognitive and metacognitive strategies are particularly important for learning with hypertext. The effectiveness of strategy training, however, depends on available working memory resources. Thus, especially learners high on working memory capacity can profit from strategy training, while learners low on working memory capacity might easily be…

  12. 30 CFR 75.1107-7 - Water spray devices; capacity; water supply; minimum requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Water spray devices; capacity; water supply... Water spray devices; capacity; water supply; minimum requirements. (a) Where water spray devices are... square foot over the top surface area of the equipment and the supply of water shall be adequate to...

  13. 30 CFR 75.701-4 - Grounding wires; capacity of wires.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Grounding wires; capacity of wires. 75.701-4... SAFETY AND HEALTH MANDATORY SAFETY STANDARDS-UNDERGROUND COAL MINES Grounding § 75.701-4 Grounding wires; capacity of wires. Where grounding wires are used to ground metallic sheaths, armors, conduits, frames...

  14. 30 CFR 75.701-4 - Grounding wires; capacity of wires.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Grounding wires; capacity of wires. 75.701-4... SAFETY AND HEALTH MANDATORY SAFETY STANDARDS-UNDERGROUND COAL MINES Grounding § 75.701-4 Grounding wires; capacity of wires. Where grounding wires are used to ground metallic sheaths, armors, conduits, frames...

  15. Capacity reconsidered: Finding consensus and clarifying differences

    Treesearch

    Doug Whittaker; Bo Shelby; Robert Manning; David Cole; Glenn Haas

    2010-01-01

    In a world where populations and resource demands continue to grow, there is a long history of concern about the "capacity" of the environment to support human uses, including timber, rangelands, fish and wildlife, and recreation. Work on visitor capacities has evolved considerably since the late 1960s as a result of environmental planning, court proceedings...

  16. Collaboration with HEIs: A Key Capacity Building Block for the Uganda Water and Sanitation Public Sector

    ERIC Educational Resources Information Center

    Kayaga, Sam

    2007-01-01

    The capacity of public service staff in developing countries is crucial for achieving the Millennium Development Goals. Literature from developed countries shows that, working with higher education institutions (HEIs), industries have improved their human resource capacity through continuing professional development. This paper reports on research…

  17. Mental Capacity and Working Memory in Chemistry: Algorithmic "versus" Open-Ended Problem Solving

    ERIC Educational Resources Information Center

    St Clair-Thompson, Helen; Overton, Tina; Bugler, Myfanwy

    2012-01-01

    Previous research has revealed that problem solving and attainment in chemistry are constrained by mental capacity and working memory. However, the terms mental capacity and working memory come from different theories of cognitive resources, and are assessed using different tasks. The current study examined the relationships between mental…

  18. Development of a resource modelling tool to support decision makers in pandemic influenza preparedness: The AsiaFluCap Simulator.

    PubMed

    Stein, Mart Lambertus; Rudge, James W; Coker, Richard; van der Weijden, Charlie; Krumkamp, Ralf; Hanvoravongchai, Piya; Chavez, Irwin; Putthasri, Weerasak; Phommasack, Bounlay; Adisasmito, Wiku; Touch, Sok; Sat, Le Minh; Hsu, Yu-Chen; Kretzschmar, Mirjam; Timen, Aura

    2012-10-12

    Health care planning for pandemic influenza is a challenging task which requires predictive models by which the impact of different response strategies can be evaluated. However, current preparedness plans and simulations exercises, as well as freely available simulation models previously made for policy makers, do not explicitly address the availability of health care resources or determine the impact of shortages on public health. Nevertheless, the feasibility of health systems to implement response measures or interventions described in plans and trained in exercises depends on the available resource capacity. As part of the AsiaFluCap project, we developed a comprehensive and flexible resource modelling tool to support public health officials in understanding and preparing for surges in resource demand during future pandemics. The AsiaFluCap Simulator is a combination of a resource model containing 28 health care resources and an epidemiological model. The tool was built in MS Excel© and contains a user-friendly interface which allows users to select mild or severe pandemic scenarios, change resource parameters and run simulations for one or multiple regions. Besides epidemiological estimations, the simulator provides indications on resource gaps or surpluses, and the impact of shortages on public health for each selected region. It allows for a comparative analysis of the effects of resource availability and consequences of different strategies of resource use, which can provide guidance on resource prioritising and/or mobilisation. Simulation results are displayed in various tables and graphs, and can also be easily exported to GIS software to create maps for geographical analysis of the distribution of resources. The AsiaFluCap Simulator is freely available software (http://www.cdprg.org) which can be used by policy makers, policy advisors, donors and other stakeholders involved in preparedness for providing evidence based and illustrative information on health care resource capacities during future pandemics. The tool can inform both preparedness plans and simulation exercises and can help increase the general understanding of dynamics in resource capacities during a pandemic. The combination of a mathematical model with multiple resources and the linkage to GIS for creating maps makes the tool unique compared to other available software.

  19. Biosafety and Biosecurity: A Relative Risk-Based Framework for Safer, More Secure, and Sustainable Laboratory Capacity Building.

    PubMed

    Dickmann, Petra; Sheeley, Heather; Lightfoot, Nigel

    2015-01-01

    Laboratory capacity building is characterized by a paradox between endemicity and resources: countries with high endemicity of pathogenic agents often have low and intermittent resources (water, electricity) and capacities (laboratories, trained staff, adequate regulations). Meanwhile, countries with low endemicity of pathogenic agents often have high-containment facilities with costly infrastructure and maintenance governed by regulations. The common practice of exporting high biocontainment facilities and standards is not sustainable and concerns about biosafety and biosecurity require careful consideration. A group at Chatham House developed a draft conceptual framework for safer, more secure, and sustainable laboratory capacity building. The draft generic framework is guided by the phrase "LOCAL - PEOPLE - MAKE SENSE" that represents three major principles: capacity building according to local needs (local) with an emphasis on relationship and trust building (people) and continuous outcome and impact measurement (make sense). This draft generic framework can serve as a blueprint for international policy decision-making on improving biosafety and biosecurity in laboratory capacity building, but requires more testing and detailing development.

  20. Towards a Scalable and Adaptive Application Support Platform for Large-Scale Distributed E-Sciences in High-Performance Network Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Chase Qishi; Zhu, Michelle Mengxia

    The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to produce unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over the Internet or dedicated networks and hence exhibit an inherent dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows as simple as linear pipelines or as complex as directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users oftentimes need to manually configure their computing tasks over networks in an ad hoc manner, hence significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. This project is to design and develop a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored for a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments. SWAMP will enable the automation and management of the entire process of scientific workflows with the convenience of a few mouse clicks while hiding the implementation and technical details from end users. Particularly, we will consider two types of applications with distinct performance requirements: data-centric and service-centric applications. For data-centric applications, the main workflow task involves large-volume data generation, catalog, storage, and movement typically from supercomputers or experimental facilities to a team of geographically distributed users; while for service-centric applications, the main focus of workflow is on data archiving, preprocessing, filtering, synthesis, visualization, and other application-specific analysis. We will conduct a comprehensive comparison of existing workflow systems and choose the best suited one with open-source code, a flexible system structure, and a large user base as the starting point for our development. Based on the chosen system, we will develop and integrate new components including a black box design of computing modules, performance monitoring and prediction, and workflow optimization and reconfiguration, which are missing from existing workflow systems. A modular design for separating specification, execution, and monitoring aspects will be adopted to establish a common generic infrastructure suited for a wide spectrum of science applications.
We will further design and develop efficient workflow mapping and scheduling algorithms to optimize the workflow performance in terms of minimum end-to-end delay, maximum frame rate, and highest reliability. We will develop and demonstrate the SWAMP system in a local environment, the grid network, and the 100 Gbps Advanced Network Initiative (ANI) testbed. The demonstration will target scientific applications in climate modeling and high energy physics and the functions to be demonstrated include workflow deployment, execution, steering, and reconfiguration. Throughout the project period, we will work closely with the science communities in the fields of climate modeling and high energy physics, including the Spallation Neutron Source (SNS) and Large Hadron Collider (LHC) projects, to mature the system for production use.
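
    The workflow mapping and scheduling goal mentioned above (minimum end-to-end delay over a DAG of tasks) can be illustrated with a minimal sketch; the task names, runtimes, and transfer times below are hypothetical, and the calculation is a generic critical-path estimate, not SWAMP's algorithm.

```python
# Minimal sketch of end-to-end delay estimation for a workflow DAG, in the
# spirit of the mapping/scheduling problem described above. Task names,
# runtimes, and transfer costs are hypothetical; this is not SWAMP code.
from collections import defaultdict

# task -> estimated compute time (s) on its assigned resource
runtime = {"acquire": 30.0, "filter": 12.0, "simulate": 90.0, "visualize": 8.0}
# edges: (producer, consumer, data-transfer time in s between their resources)
edges = [("acquire", "filter", 5.0), ("filter", "simulate", 2.0),
         ("acquire", "simulate", 5.0), ("simulate", "visualize", 1.0)]

preds = defaultdict(list)
for u, v, t in edges:
    preds[v].append((u, t))

finish = {}
def earliest_finish(task):
    """Earliest finish time = own runtime + latest (pred finish + transfer)."""
    if task not in finish:
        ready = max((earliest_finish(u) + t for u, t in preds[task]), default=0.0)
        finish[task] = ready + runtime[task]
    return finish[task]

end_to_end = max(earliest_finish(t) for t in runtime)
print(f"estimated end-to-end delay: {end_to_end:.1f} s")
```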

  1. An approach for heterogeneous and loosely coupled geospatial data distributed computing

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Huang, Fengru; Fang, Yu; Huang, Zhou; Lin, Hui

    2010-07-01

    Most GIS (Geographic Information System) applications tend to have heterogeneous and autonomous geospatial information resources, and the availability of these local resources is unpredictable and dynamic under a distributed computing environment. In order to make use of these local resources together to solve larger geospatial information processing problems that are related to an overall situation, in this paper, with the support of peer-to-peer computing technologies, we propose a geospatial data distributed computing mechanism that involves loosely coupled geospatial resource directories and a term named as Equivalent Distributed Program of global geospatial queries to solve geospatial distributed computing problems under heterogeneous GIS environments. First, a geospatial query process schema for distributed computing as well as a method for equivalent transformation from a global geospatial query to distributed local queries at SQL (Structured Query Language) level to solve the coordinating problem among heterogeneous resources are presented. Second, peer-to-peer technologies are used to maintain a loosely coupled network environment that consists of autonomous geospatial information resources, thus to achieve decentralized and consistent synchronization among global geospatial resource directories, and to carry out distributed transaction management of local queries. Finally, based on the developed prototype system, example applications of simple and complex geospatial data distributed queries are presented to illustrate the procedure of global geospatial information processing.
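
    To illustrate the "equivalent transformation from a global geospatial query to distributed local queries at SQL level" described above, here is a minimal sketch under assumed peer and table names; the execute() callback stands in for the peer-to-peer layer, and none of this is the paper's implementation.

```python
# Minimal sketch of the "global query -> equivalent local queries" idea:
# a global bounding-box selection is rewritten as one SQL query per peer,
# and the partial results are merged. Peer names, table/column names, and
# the execute() helper are hypothetical illustrations.

PEERS = {
    "peer_north": "roads_north",   # each peer hosts a local table
    "peer_south": "roads_south",
}

def local_queries(bbox):
    """Rewrite a global bbox query into equivalent per-peer SQL strings."""
    xmin, ymin, xmax, ymax = bbox
    template = ("SELECT id, geom FROM {table} "
                "WHERE xmin <= {xmax} AND xmax >= {xmin} "
                "AND ymin <= {ymax} AND ymax >= {ymin}")
    return {peer: template.format(table=tbl, xmin=xmin, ymin=ymin,
                                  xmax=xmax, ymax=ymax)
            for peer, tbl in PEERS.items()}

def global_query(bbox, execute):
    """execute(peer, sql) -> list of rows; partial results are concatenated."""
    rows = []
    for peer, sql in local_queries(bbox).items():
        rows.extend(execute(peer, sql))
    return rows

# Example: show the local query generated for one peer.
print(local_queries((0, 0, 10, 10))["peer_north"])
```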

  2. Overcoming limits set by scarce resources - role of local food production and food imports

    NASA Astrophysics Data System (ADS)

    Porkka, Miina; Guillaume, Joseph H. A.; Schaphoff, Sibyll; Siebert, Stefan; Gerten, Dieter; Kummu, Matti

    2017-04-01

    There is a fundamental tension between population growth and carrying capacity, i.e. the population that could potentially be supported using the resources and technologies available at a given time. This makes the assessments of resource use and agricultural productivity central to the debate on future food security. Local carrying capacity can be increased by expanding (e.g. through land conversion and irrigation infrastructure) or intensifying (e.g. through technologies and practices that increase efficiency) the resource use in agriculture. Food imports can be considered another way of overcoming current local limits and continuing growth beyond the local human-carrying capacity. Focusing on water as the key limiting resource, we performed a global assessment of the capacity for food self-sufficiency at sub-national and national scale for 1961-2009, taking into account the availability of both green and blue water as well as technology and management practices affecting water productivity at a given time, and using the hydrology and agriculture model LPJmL as our primary tool. Furthermore, we examined the use of food imports as a strategy to increase carrying capacity in regions where the potential for food self-sufficiency was limited by water availability and productivity. We found that the capacity for food self-sufficiency reduced notably during the study period due to the rapid population growth that outpaced the substantial improvements in water productivity. In 2009 more than a third (2.2 billion people) of the world's population lived in areas where sufficient food production to meet the needs of the population was not possible, and some 800 million people more were approaching this threshold. Food imports have nearly universally been used to overcome these local limits to growth, though the success of this strategy has been highly dependent on economic purchasing power. In the unsuccessful cases, increases in imports and local productivity have not kept pace with population growth, leaving 460 million people with insufficient food. Where the strategy has been successful, food security of 1.4 billion people has become dependent on imports. Whether or not this dependence on imports is considered desirable, it has policy implications that need to be taken into account.

  3. Building capacity to develop an African teaching platform on health workforce development: a collaborative initiative of universities from four sub Saharan countries

    PubMed Central

    2014-01-01

    Introduction Health systems in many low-income countries remain fragile, and the record of human resource planning and management in Ministries of Health very uneven. Public health training institutions face the dual challenge of building human resources capacity in ministries and health services while alleviating and improving their own capacity constraints. This paper reports on an initiative aimed at addressing this dual challenge through the development and implementation of a joint Masters in Public Health (MPH) programme with a focus on health workforce development by four academic institutions from East and Southern Africa and the building of a joint teaching platform. Methods Data were obtained through interviews and group discussions with stakeholders, direct and participant observations, and reviews of publications and project documents. Data were analysed using thematic analysis. Case description The institutions developed and collaboratively implemented a ‘Masters Degree programme with a focus on health workforce development’. It was geared towards strengthening the leadership capacity of Health ministries to develop expertise in health human resources (HRH) planning and management, and simultaneously build capacity of faculty in curriculum development and innovative educational practices to teach health workforce development. The initiative was configured to facilitate sharing of experience and resources. Discussion The implementation of this initiative has been complex, straddling multiple and changing contexts, actors and agendas. Some of these are common to postgraduate programmes with working learners, while others are unique to this particular partnership, such as weak institutional capacity to champion and embed new programmes and approaches to teaching. Conclusions The partnership, despite significant inherent challenges, has potential for providing real opportunities for building the field and community of practice, and strengthening the staff and organizational capacity of participant institutions. Key learning points of the paper are: • the need for long-term strategies and engagement; • the need for more investment and attention to developing the capacity of academic institutions; • the need to invest specifically in educational/teaching expertise for innovative approaches to teaching and capacity development more broadly; and • the importance of increasing access and support for students who are working adults in public health institutions throughout Africa. PMID:24886267

  4. Now and next-generation sequencing techniques: future of sequence analysis using cloud computing.

    PubMed

    Thakur, Radhe Shyam; Bandopadhyay, Rajib; Chaudhary, Bratati; Chatterjee, Sourav

    2012-01-01

    Advances in the field of sequencing techniques have resulted in the greatly accelerated production of huge sequence datasets. This presents immediate challenges in database maintenance at datacenters. It provides additional computational challenges in data mining and sequence analysis. Together these represent a significant overburden on traditional stand-alone computer resources, and to reach effective conclusions quickly and efficiently, the virtualization of the resources and computation on a pay-as-you-go concept (together termed "cloud computing") has recently appeared. The collective resources of the datacenter, including both hardware and software, can be available publicly, being then termed a public cloud, the resources being provided in a virtual mode to the clients who pay according to the resources they employ. Examples of public companies providing these resources include Amazon, Google, and Joyent. The computational workload is shifted to the provider, which also implements required hardware and software upgrades over time. A virtual environment is created in the cloud corresponding to the computational and data storage needs of the user via the internet. The task is then performed, the results transmitted to the user, and the environment finally deleted after all tasks are completed. In this discussion, we focus on the basics of cloud computing, and go on to analyze the prerequisites and overall working of clouds. Finally, the applications of cloud computing in biological systems, particularly in comparative genomics, genome informatics, and SNP detection are discussed with reference to traditional workflows.
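
    The provision-compute-retrieve-teardown lifecycle described above can be sketched as follows; CloudClient and all of its methods are hypothetical placeholders standing in for a provider SDK, not a real API.

```python
# Minimal sketch of the provision -> compute -> retrieve -> tear down
# lifecycle of a cloud-based sequence-analysis task. CloudClient is a
# hypothetical stand-in for a pay-as-you-go provider SDK.

class CloudClient:
    """Placeholder for a cloud provider SDK."""
    def provision(self, cpus, memory_gb):
        print(f"provisioned environment: {cpus} vCPUs, {memory_gb} GB")
        return "env-001"
    def run(self, env_id, command):
        print(f"[{env_id}] running: {command}")
        return "alignment-results.tar.gz"
    def download(self, env_id, artifact):
        print(f"[{env_id}] downloaded {artifact}")
    def terminate(self, env_id):
        print(f"[{env_id}] environment deleted, billing stops")

def run_sequence_analysis(client, reads_path):
    env = client.provision(cpus=32, memory_gb=128)
    try:
        artifact = client.run(env, f"align --input {reads_path}")
        client.download(env, artifact)
    finally:
        client.terminate(env)   # environment is deleted once the task completes

run_sequence_analysis(CloudClient(), "reads.fastq")
```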

  5. 30 CFR 1206.154 - Determination of quantities and qualities for computing royalties.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 3 2014-07-01 2014-07-01 false Determination of quantities and qualities for computing royalties. 1206.154 Section 1206.154 Mineral Resources OFFICE OF NATURAL RESOURCES REVENUE, DEPARTMENT OF THE INTERIOR NATURAL RESOURCES REVENUE PRODUCT VALUATION Federal Gas § 1206.154 Determination...

  6. 30 CFR 1206.154 - Determination of quantities and qualities for computing royalties.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 3 2012-07-01 2012-07-01 false Determination of quantities and qualities for computing royalties. 1206.154 Section 1206.154 Mineral Resources OFFICE OF NATURAL RESOURCES REVENUE, DEPARTMENT OF THE INTERIOR NATURAL RESOURCES REVENUE PRODUCT VALUATION Federal Gas § 1206.154 Determination...

  7. 30 CFR 1206.154 - Determination of quantities and qualities for computing royalties.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 3 2013-07-01 2013-07-01 false Determination of quantities and qualities for computing royalties. 1206.154 Section 1206.154 Mineral Resources OFFICE OF NATURAL RESOURCES REVENUE, DEPARTMENT OF THE INTERIOR NATURAL RESOURCES REVENUE PRODUCT VALUATION Federal Gas § 1206.154 Determination...

  8. Adaptive capacity of fishing communities at marine protected areas: a case study from the Colombian Pacific.

    PubMed

    Moreno-Sánchez, Rocío del Pilar; Maldonado, Jorge Higinio

    2013-12-01

    Departing from a theoretical methodology, we estimate empirically an index of adaptive capacity (IAC) of a fishing community to the establishment of marine protected areas (MPAs). We carried out household surveys, designed to obtain information for indicators and sub-indicators, and calculated the IAC. Moreover, we performed a sensitivity analysis to check for robustness of the results. Our findings show that, despite being located between two MPAs, the fishing community of Bazán in the Colombian Pacific is highly vulnerable and that the socioeconomic dimension of the IAC constitutes the most binding dimension for building adaptive capacity. Bazán is characterized by extreme poverty, high dependence on resources, and lack of basic public infrastructure. Notwithstanding, social capital and local awareness about ecological conditions may act as enhancers of adaptive capacity. The establishment of MPAs should consider the development of strategies to confer adaptive capacity to local communities highly dependent on resource extraction.
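
    As a rough sketch of how an index of adaptive capacity (IAC) can be aggregated from indicators and checked for sensitivity to weighting, consider the following; the dimension names, indicator values, and weights are hypothetical and do not reproduce the study's methodology.

```python
# Minimal sketch of aggregating survey indicators into an adaptive-capacity
# index and checking robustness to the weighting. Dimensions, scores, and
# weights are hypothetical, not values from the Bazán case study.

# Dimension score = mean of its normalized sub-indicators (0-1 scale).
dimensions = {
    "socioeconomic": [0.20, 0.35, 0.25],
    "social_capital": [0.70, 0.65],
    "ecological_awareness": [0.80, 0.60, 0.75],
}
scores = {d: sum(v) / len(v) for d, v in dimensions.items()}

def iac(weights):
    """Weighted average of dimension scores; weights must sum to 1."""
    return sum(weights[d] * scores[d] for d in scores)

base = {d: 1 / len(scores) for d in scores}          # equal weights
print("IAC (equal weights):", round(iac(base), 3))

# Simple sensitivity check: shift 10% of the weight toward each dimension.
for favored in scores:
    w = {d: 0.9 * base[d] for d in scores}
    w[favored] += 0.1
    print(f"IAC favoring {favored}: {round(iac(w), 3)}")
```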

  9. A Framework for Capacity Building in Mapping Coastal Resources Using Remote Sensing in the Philippines

    NASA Astrophysics Data System (ADS)

    Tamondong, A.; Cruz, C.; Ticman, T.; Peralta, R.; Go, G. A.; Vergara, M.; Estabillo, M. S.; Cadalzo, I. E.; Jalbuena, R.; Blanco, A.

    2016-06-01

    Remote sensing has been an effective technology in mapping natural resources by reducing the costs and field data gathering time and bringing in timely information. With the launch of several earth observation satellites, an increase in the availability of satellite imageries provides an immense selection of data for the users. The Philippines has recently embarked in a program which will enable the gathering of LiDAR data in the whole country. The capacity of the Philippines to take advantage of these advancements and opportunities is lacking. There is a need to transfer the knowledge of remote sensing technology to other institutions to better utilize the available data. Being an archipelagic country with approximately 36,000 kilometers of coastline, and most of its people depending on its coastal resources, remote sensing is an optimal choice in mapping such resources. A project involving fifteen (15) state universities and colleges and higher education institutions all over the country headed by the University of the Philippines Training Center for Applied Geodesy and Photogrammetry and funded by the Department of Science and Technology was formed to carry out the task of capacity building in mapping the country's coastal resources using LiDAR and other remotely sensed datasets. This paper discusses the accomplishments and the future activities of the project.

  10. Language Resource Centers Program

    ERIC Educational Resources Information Center

    Office of Postsecondary Education, US Department of Education, 2012

    2012-01-01

    The Language Resource Centers (LRC) program provides grants to institutions of higher education to establish, strengthen, and operate resource centers that serve to improve the nation's capacity to teach and learn foreign languages. Eligible applicants are institutions of higher education. Duration of the grant is four years. Center activities…

  11. High levels of time contraction in young children in dual tasks are related to their limited attention capacities.

    PubMed

    Hallez, Quentin; Droit-Volet, Sylvie

    2017-09-01

    Numerous studies have shown that durations are judged shorter in a dual-task condition than in a simple-task condition. The resource-based theory of time perception suggests that this is due to the processing of temporal information, which is a demanding cognitive task that consumes limited attention resources. Our study investigated whether this time contraction in a dual-task condition is greater in younger children and, if so, whether this is specifically related to their limited attention capacities. Children aged 5-7 years were given a temporal reproduction task in a simple-task condition and a dual-task condition. In addition, different neuropsychological tests were used to assess not only their attention capacities but also their capacities in terms of working memory and information processing speed. The results showed a shortening of perceived time in the dual task compared with the simple task, and this increased as age decreased. The extent of this shortening effect was directly linked to younger children's limited attentional capacities; the lower their attentional capacities, the greater the time contraction. This study demonstrated that children's errors in time judgments are linked to their cognitive capacities rather than to capacities that are specific to time. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Existing capacity to manage pharmaceuticals and related commodities in East Africa: an assessment with specific reference to antiretroviral therapy

    PubMed Central

    Waako, Paul J; Odoi-adome, Richard; Obua, Celestino; Owino, Erisa; Tumwikirize, Winnie; Ogwal-okeng, Jasper; Anokbonggo, Willy W; Matowe, Lloyd; Aupont, Onesky

    2009-01-01

    Background East African countries have in the recent past experienced a tremendous increase in the volume of antiretroviral drugs. Capacity to manage these medicines in the region remains limited. Makerere University, with technical assistance from the USAID supported Rational Pharmaceutical Management Plus (RPM Plus) Program of Management Sciences for Health (MSH) established a network of academic institutions to build capacity for pharmaceutical management in the East African region. The initiative includes institutions from Uganda, Tanzania, Kenya and Rwanda and aims to improve access to safe, effective and quality-assured medicines for the treatment of HIV/AIDS, TB and Malaria through spearheading in-country capacity. The initiative conducted a regional assessment to determine the existing capacity for the management of antiretroviral drugs and related commodities. Methods Heads and implementing workers of fifty HIV/AIDS programs and institutions accredited to offer antiretroviral services in Uganda, Kenya, Tanzania and Rwanda were key informants in face-to-face interviews guided by structured questionnaires. The assessment explored categories of health workers involved in the management of ARVs, their knowledge and practices in selection, quantification, distribution and use of ARVs, nature of existing training programs, training preferences and resources for capacity building. Results Inadequate human resource capacity including, inability to select, quantify and distribute ARVs and related commodities, and irrational prescribing and dispensing were some of the problems identified. A competence gap existed in all the four countries with a variety of healthcare professionals involved in the supply and distribution of ARVs. Training opportunities and resources for capacity development were limited particularly for workers in remote facilities. On-the-job training and short courses were the preferred modes of training. Conclusion There is inadequate capacity for managing medicines and related commodities in East Africa. There is an urgent need for training in aspects of pharmaceutical management to different categories of health workers. Skills building activities that do not take healthcare workers from their places of work are preferred. PMID:19272134

  13. Optimal resource allocation strategy for two-layer complex networks

    NASA Astrophysics Data System (ADS)

    Ma, Jinlong; Wang, Lixin; Li, Sufeng; Duan, Congwen; Liu, Yu

    2018-02-01

    We study the traffic dynamics on two-layer complex networks, and focus on its delivery capacity allocation strategy to enhance traffic capacity measured by the critical value Rc. With the limited packet-delivering capacity, we propose a delivery capacity allocation strategy which can balance the capacities of non-hub nodes and hub nodes to optimize the data flow. With the optimal value of parameter αc, the maximal network capacity is reached because most of the nodes have shared the appropriate delivery capacity by the proposed delivery capacity allocation strategy. Our work will be beneficial to network service providers to design optimal networked traffic dynamics.
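
    The idea of sharing a limited total delivery capacity between hub and non-hub nodes via a tunable exponent can be illustrated with the minimal sketch below; the degree-proportional rule, the test topology, and the parameter values are assumptions for illustration and are not the allocation strategy derived in the paper.

```python
# Minimal sketch of sharing a fixed total packet-delivering capacity among
# nodes as a function of node degree, with a tunable exponent alpha that
# shifts capacity between hubs and non-hubs. Illustrative only; not the
# allocation rule or two-layer model from the paper.
import networkx as nx

def allocate_capacity(G, total_capacity, alpha):
    """Give node i a share proportional to degree_i ** alpha."""
    weights = {n: G.degree(n) ** alpha for n in G}
    norm = sum(weights.values())
    return {n: total_capacity * w / norm for n, w in weights.items()}

G = nx.barabasi_albert_graph(200, 3, seed=1)   # scale-free test topology
for alpha in (0.0, 0.5, 1.0):                  # 0 = uniform, 1 = degree-proportional
    cap = allocate_capacity(G, total_capacity=200.0, alpha=alpha)
    hub = max(G, key=G.degree)
    print(f"alpha={alpha}: hub capacity={cap[hub]:.2f}, "
          f"min capacity={min(cap.values()):.2f}")
```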

  14. Tools and Techniques for Measuring and Improving Grid Performance

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Frumkin, M.; Smith, W.; VanderWijngaart, R.; Wong, P.; Biegel, Bryan (Technical Monitor)

    2001-01-01

    This viewgraph presentation provides information on NASA's geographically dispersed computing resources, and the various methods by which the disparate technologies are integrated within a nationwide computational grid. Many large-scale science and engineering projects are accomplished through the interaction of people, heterogeneous computing resources, information systems and instruments at different locations. The overall goal is to facilitate the routine interactions of these resources to reduce the time spent in design cycles, particularly for NASA's mission critical projects. The IPG (Information Power Grid) seeks to implement NASA's diverse computing resources in a fashion similar to the way in which electric power is made available.

  15. SaaS enabled admission control for MCMC simulation in cloud computing infrastructures

    NASA Astrophysics Data System (ADS)

    Vázquez-Poletti, J. L.; Moreno-Vozmediano, R.; Han, R.; Wang, W.; Llorente, I. M.

    2017-02-01

    Markov Chain Monte Carlo (MCMC) methods are widely used in the field of simulation and modelling of materials, producing applications that require a great amount of computational resources. Cloud computing represents a seamless source for these resources in the form of HPC. However, resource over-consumption can be an important drawback, especially if the cloud provision process is not appropriately optimized. In the present contribution we propose a two-level solution that, on the one hand, takes advantage of approximate computing to reduce resource demand and, on the other, uses admission control policies to guarantee an optimal provision to running applications.
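
    A minimal sketch of the admission-control idea, under assumed job parameters and a deliberately simple feasibility rule (not the policy proposed in the paper):

```python
# Minimal sketch of an admission-control check: a new MCMC job is admitted
# only if the cluster can still finish all admitted jobs within their
# deadlines given the provisioned cores. The estimates and the policy are
# illustrative only.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    chains: int
    secs_per_chain: float   # estimated wall time of one chain on one core
    deadline_s: float

def admit(job, running, total_cores):
    """Admit if projected core-seconds fit before the tightest deadline."""
    candidates = running + [job]
    demand = sum(j.chains * j.secs_per_chain for j in candidates)
    horizon = min(j.deadline_s for j in candidates)
    return demand <= total_cores * horizon

running = [Job("materials-A", chains=64, secs_per_chain=1800, deadline_s=7200)]
new_job = Job("materials-B", chains=128, secs_per_chain=1800, deadline_s=3600)
print("admit materials-B:", admit(new_job, running, total_cores=48))
```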

  16. Dynamo: a flexible, user-friendly development tool for subtomogram averaging of cryo-EM data in high-performance computing environments.

    PubMed

    Castaño-Díez, Daniel; Kudryashev, Mikhail; Arheit, Marcel; Stahlberg, Henning

    2012-05-01

    Dynamo is a new software package for subtomogram averaging of cryo Electron Tomography (cryo-ET) data with three main goals: first, Dynamo allows user-transparent adaptation to a variety of high-performance computing platforms such as GPUs or CPU clusters. Second, Dynamo implements user-friendliness through GUI interfaces and scripting resources. Third, Dynamo offers user-flexibility through a plugin API. Besides the alignment and averaging procedures, Dynamo includes native tools for visualization and analysis of results and data, as well as support for third party visualization software, such as Chimera UCSF or EMAN2. As a demonstration of these functionalities, we studied bacterial flagellar motors and showed automatically detected classes with absent and present C-rings. Subtomogram averaging is a common task in current cryo-ET pipelines, which requires extensive computational resources and follows a well-established workflow. However, due to the data diversity, many existing packages offer slight variations of the same algorithm to improve results. One of the main purposes behind Dynamo is to provide explicit tools to allow the user the insertion of custom designed procedures - or plugins - to replace or complement the native algorithms in the different steps of the processing pipeline for subtomogram averaging without the burden of handling parallelization. Custom scripts that implement new approaches devised by the user are integrated into the Dynamo data management system, so that they can be controlled by the GUI or the scripting capacities. Dynamo executables do not require licenses for third party commercial software. Sources, executables and documentation are freely distributed on http://www.dynamo-em.org. Copyright © 2012 Elsevier Inc. All rights reserved.

  17. Supercomputers Of The Future

    NASA Technical Reports Server (NTRS)

    Peterson, Victor L.; Kim, John; Holst, Terry L.; Deiwert, George S.; Cooper, David M.; Watson, Andrew B.; Bailey, F. Ron

    1992-01-01

    Report evaluates supercomputer needs of five key disciplines: turbulence physics, aerodynamics, aerothermodynamics, chemistry, and mathematical modeling of human vision. Predicts these fields will require computer speed greater than 10^18 floating-point operations per second (FLOPS) and memory capacity greater than 10^15 words. Also, new parallel computer architectures and new structured numerical methods will make necessary speed and capacity available.

  18. Modeling water resources as a constraint in electricity capacity expansion models

    NASA Astrophysics Data System (ADS)

    Newmark, R. L.; Macknick, J.; Cohen, S.; Tidwell, V. C.; Woldeyesus, T.; Martinez, A.

    2013-12-01

    In the United States, the electric power sector is the largest withdrawer of freshwater in the nation. The primary demand for water from the electricity sector is for thermoelectric power plant cooling. Areas likely to see the largest near-term growth in population and energy usage, the Southwest and the Southeast, are also facing freshwater scarcity and have experienced water-related power reliability issues in the past decade. Lack of water may become a barrier for new conventionally-cooled power plants, and alternative cooling systems will impact technology cost and performance. Although water is integral to electricity generation, it has long been neglected as a constraint in future electricity system projections. Assessing the impact of water resource scarcity on energy infrastructure development is critical, both for conventional and renewable energy technologies. Efficiently utilizing all water types, including wastewater and brackish sources, or utilizing dry-cooling technologies, will be essential for transitioning to a low-carbon electricity system. This work provides the first demonstration of a national electric system capacity expansion model that incorporates water resources as a constraint on the current and future U.S. electricity system. The Regional Electricity Deployment System (ReEDS) model was enhanced to represent multiple cooling technology types and limited water resource availability in its optimization of electricity sector capacity expansion to 2050. The ReEDS model has high geographic and temporal resolution, making it a suitable model for incorporating water resources, which are inherently seasonal and watershed-specific. Cooling system technologies were assigned varying costs (capital, operations and maintenance), and performance parameters, reflecting inherent tradeoffs in water impacts and operating characteristics. Water rights supply curves were developed for each of the power balancing regions in ReEDS. Supply curves include costs and availability of freshwater (surface and groundwater) and alternative water resources (municipal wastewater and brackish groundwater). In each region, a new power plant must secure sufficient water rights for operation before being built. Water rights constraints thus influence the type of power plant, cooling system, or location of new generating capacity. Results indicate that the aggregate national generating capacity by fuel type and associated carbon dioxide emissions change marginally with the inclusion of water rights. Water resource withdrawals and consumption, however, can vary considerably. Regional water resource dynamics indicate substantial differences in the location where power plant-cooling system technology combinations are built. These localized impacts highlight the importance of considering water resources as a constraint in the electricity sector when evaluating costs, transmission infrastructure needs, and externalities. Further scenario evaluations include assessments of how climate change could affect the availability of water resources, and thus the development of the electricity sector.
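
    A toy linear program in the spirit of the water-constrained capacity expansion described above; the technologies, costs, and limits are illustrative placeholders, and the formulation is far simpler than ReEDS.

```python
# Toy linear program: choose MW of new capacity to meet a requirement at
# least cost while staying under a regional water-withdrawal limit. All
# numbers are illustrative placeholders; this is not the ReEDS formulation.
from scipy.optimize import linprog

techs = ["gas_wet_cooled", "gas_dry_cooled", "wind"]
cost = [1.0, 1.2, 1.6]          # relative cost per MW of new capacity
water = [500.0, 20.0, 0.0]      # water withdrawal per MW (arbitrary units)

need_mw = 1000.0                # new capacity required in the region
water_cap = 150000.0            # regional water-rights limit

# minimize cost.x  subject to  -sum(x) <= -need_mw  and  water.x <= water_cap
res = linprog(c=cost,
              A_ub=[[-1.0, -1.0, -1.0], water],
              b_ub=[-need_mw, water_cap],
              bounds=[(0, None)] * 3)

for t, mw in zip(techs, res.x):
    print(f"{t}: {mw:.0f} MW")
```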

  19. Effects of sea state on offshore wind resourcing in Florida

    NASA Astrophysics Data System (ADS)

    Collier, Cristina

    Offshore resource assessment relies on estimating wind speeds at turbine hub height using observations typically made at a substantially lower height. The methods used to adjust from observed wind speeds to hub height can impact resource estimation. The importance of directional sea state is examined, both as seasonal averages and as a function of the diurnal cycle. A General Electric 3.6 MW offshore turbine is used as a model for power production. Including sea state increases or decreases seasonally averaged power production by roughly 1%, which is found to be an economically significant change. These changes occur because the sea state modifies the wind shear (vector wind difference between the buoy height and the moving surface) and therefore the extrapolation from the observation to hub height is affected. These seemingly small differences in capacity can alter profits by millions of dollars depending upon the size of the farm and fluctuations in price per kWh throughout the year. A 2% change in capacity factor can lead to a 10-million-dollar difference in revenue from the total kWh produced by a wind farm of 100 3.6 MW turbines. These economic impacts can be a deciding factor in determining whether a resource is viable for development. Modifications of power output due to sea state are shown for seasonal and diurnal time scales. Three regions are examined herein: West Florida, East Florida, and Nantucket Sound. The average capacity, after sea state is included, suggests that areas around Florida could provide substantial amounts of wind power throughout three-fourths of the calendar year. At certain times of day, winter-average capacity factors in West Florida can be up to 45% higher than in summer when sea state is included. Nantucket Sound capacity factors are calculated for comparison to a region near a planned United States offshore wind farm. This study provides evidence that including sea state in offshore wind resource assessment causes economically significant differences for offshore wind power siting.
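
    Two back-of-the-envelope calculations make the claims above concrete: a power-law extrapolation from buoy height to hub height (with the shear exponent standing in for sea-state effects) and the revenue impact of a 2% capacity-factor change on a 100-turbine, 3.6 MW farm; the shear exponents and the electricity price are assumed values, not figures from the study.

```python
# Back-of-the-envelope sketch of the two quantities discussed above.
# The shear exponents and the price per kWh are illustrative assumptions.

def hub_height_speed(u_obs, z_obs=4.0, z_hub=100.0, alpha=0.11):
    """Power-law profile: u(z_hub) = u_obs * (z_hub / z_obs) ** alpha."""
    return u_obs * (z_hub / z_obs) ** alpha

u_buoy = 7.0                                    # m/s at buoy anemometer height
for alpha in (0.08, 0.11, 0.14):                # calmer vs rougher sea states
    print(f"alpha={alpha}: hub-height speed = "
          f"{hub_height_speed(u_buoy, alpha=alpha):.2f} m/s")

# Revenue sensitivity to a 2% capacity-factor change on 100 x 3.6 MW turbines.
farm_mw = 100 * 3.6
delta_cf = 0.02
hours = 8760
price_per_kwh = 0.15                            # assumed $/kWh
delta_kwh = farm_mw * 1000 * delta_cf * hours   # extra kWh per year
print(f"extra energy: {delta_kwh / 1e6:.1f} GWh/yr, "
      f"~${delta_kwh * price_per_kwh / 1e6:.1f} M/yr at ${price_per_kwh}/kWh")
```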

  20. "Workhood"-a useful concept for the analysis of health workers' resources? an evaluation from Tanzania

    PubMed Central

    2012-01-01

    Background International debates on improving health system performance and quality of care are strongly coined by systems thinking. There is a surprising lack of attention to the human (worker) elements. Although the central role of health workers within the health system has increasingly been acknowledged, there are hardly studies that analyze performance and quality of care from an individual perspective. Drawing on livelihood studies in health and sociological theory of capitals, this study develops and evaluates the new concept of workhood. As an analytical device the concept aims at understanding health workers' capacities to access resources (human, financial, physical, social, cultural and symbolic capital) and transfer them to the community from an individual perspective. Methods Case studies were conducted in four Reproductive-and-Child-Health (RCH) clinics in the Kilombero Valley, south-eastern Tanzania, using different qualitative methods such as participant observation, informal discussions and in-depth interviews to explore the relevance of the different types of workhood resources for effective health service delivery. Health workers' ability to access these resources were investigated and factors facilitating or constraining access identified. Results The study showed that lack of physical, human, cultural and financial capital constrained health workers' capacity to act. In particular, weak health infrastructure and health system failures led to the lack of sufficient drug and supply stocks and chronic staff shortages at the health facilities. However, health workers' capacity to mobilize social, cultural and symbolic capital played a significant role in their ability to overcome work related problems. Professional and non-professional social relationships were activated in order to access drug stocks and other supplies, transport and knowledge. Conclusions By evaluating the workhood concept this study highlights the importance of understanding health worker performance by looking at their resources and capacities. Rather than blaming health workers for health system failures, applying a strength-based approach offers new insights into health workers' capacities and identifies entry points for target actions. PMID:22401037
