Sample records for large-scale infrastructure

  1. PKI security in large-scale healthcare networks.

    PubMed

    Mantas, Georgios; Lymberopoulos, Dimitrios; Komninos, Nikos

    2012-06-01

    During the past few years, many PKIs (Public Key Infrastructures) have been proposed for healthcare networks in order to ensure secure communication services and exchange of data among healthcare professionals. However, these healthcare PKIs face a plethora of challenges, especially when deployed over large-scale healthcare networks. In this paper, we propose a PKI to ensure security in a large-scale Internet-based healthcare network connecting a wide spectrum of healthcare units geographically distributed within a wide region. Furthermore, the proposed PKI addresses the trust issues that arise in a large-scale healthcare network comprising multi-domain PKIs.

  2. Efficient On-Demand Operations in Large-Scale Infrastructures

    ERIC Educational Resources Information Center

    Ko, Steven Y.

    2009-01-01

    In large-scale distributed infrastructures such as clouds, Grids, peer-to-peer systems, and wide-area testbeds, users and administrators typically desire to perform "on-demand operations" that deal with the most up-to-date state of the infrastructure. However, the scale and dynamism present in the operating environment make it challenging to…

  3. Mixing Metaphors: Building Infrastructure for Large Scale School Turnaround

    ERIC Educational Resources Information Center

    Peurach, Donald J.; Neumerski, Christine M.

    2015-01-01

    The purpose of this analysis is to increase understanding of the possibilities and challenges of building educational infrastructure--the basic, foundational structures, systems, and resources--to support large-scale school turnaround. Building educational infrastructure often exceeds the capacity of schools, districts, and state education…

  4. The Emergence of Dominant Design(s) in Large Scale Cyber-Infrastructure Systems

    ERIC Educational Resources Information Center

    Diamanti, Eirini Ilana

    2012-01-01

    Cyber-infrastructure systems are integrated large-scale IT systems designed with the goal of transforming scientific practice by enabling multi-disciplinary, cross-institutional collaboration. Their large scale and socio-technical complexity make design decisions for their underlying architecture practically irreversible. Drawing on three…

  5. The Computing and Data Grid Approach: Infrastructure for Distributed Science Applications

    NASA Technical Reports Server (NTRS)

    Johnston, William E.

    2002-01-01

    With the advent of Grids - infrastructure for using and managing widely distributed computing and data resources in the science environment - there is now an opportunity to provide a standard, large-scale computing, data, instrument, and collaboration environment for science that spans many different projects and provides the required infrastructure and services in a relatively uniform and supportable way. Grid technology has evolved over the past several years to provide the services and infrastructure needed for building 'virtual' systems and organizations. We argue that Grid technology provides an excellent basis for the creation of the integrated environments that can combine the resources needed to support the large-scale science projects located at multiple laboratories and universities. We present some science case studies that indicate that a paradigm shift in the process of science will come about as a result of Grids providing transparent and secure access to advanced and integrated information and technologies infrastructure: powerful computing systems, large-scale data archives, scientific instruments, and collaboration tools. These changes will be in the form of services that can be integrated with the user's work environment, and that enable uniform and highly capable access to these computers, data, and instruments, regardless of the location or exact nature of these resources. These services will integrate transient-use resources like computing systems, scientific instruments, and data caches (e.g., as they are needed to perform a simulation or analyze data from a single experiment); persistent-use resources, such as databases, data catalogues, and archives; and collaborators, whose involvement will continue for the lifetime of a project or longer. While we largely address large-scale science in this paper, Grids, particularly when combined with Web Services, will address a broad spectrum of science scenarios, both large and small scale.

  6. Comparison of WinSLAMM Modeled Results with Monitored Biofiltration Data

    EPA Science Inventory

    The US EPA’s Green Infrastructure Demonstration project in Kansas City incorporates both small-scale individual biofiltration device monitoring and large-scale watershed monitoring. The test watershed (100 acres) is saturated with green infrastructure components (includin...

  7. Grid-Enabled Quantitative Analysis of Breast Cancer

    DTIC Science & Technology

    2010-10-01

    large-scale, multi-modality computerized image analysis. The central hypothesis of this research is that large-scale image analysis for breast cancer...research, we designed a pilot study utilizing large-scale parallel Grid computing harnessing nationwide infrastructure for medical image analysis. Also

  8. Towards Portable Large-Scale Image Processing with High-Performance Computing.

    PubMed

    Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A

    2018-05-03

    High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The initial deployment was native (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX that isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from the system level to the application level, (2) flexible and dynamic software development and expansion, and (3) scalable spider deployment compatible with HPC clusters and local workstations.
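
    The portability gain from containerization can be illustrated with a minimal sketch: each "spider" ships its own libraries inside an image and exchanges data only through mounted directories, so the same step runs unchanged on an HPC node or a local workstation. The image name, mount points, and command-line options below are hypothetical placeholders, not the actual VUIIS CCI spiders or DAX interfaces.

    ```python
    """Minimal sketch of running one image-processing step ("spider") inside a
    container so the pipeline does not depend on host libraries."""
    import subprocess
    from pathlib import Path

    def run_spider(session_dir: Path, image: str = "example/segmentation-spider:1.0"):
        out_dir = session_dir / "outputs"
        out_dir.mkdir(exist_ok=True)
        # Bind-mount the session data read-only and the output directory
        # read-write; everything else the step needs ships inside the image.
        cmd = [
            "docker", "run", "--rm",
            "-v", f"{session_dir}:/input:ro",
            "-v", f"{out_dir}:/output",
            image,
            "process", "--in", "/input", "--out", "/output",
        ]
        subprocess.run(cmd, check=True)

    # run_spider(Path("/data/xnat/project/subject01/session01"))
    ```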

  9. Transmission Infrastructure | Energy Analysis | NREL

    Science.gov Websites

    aggregating geothermal with other complementary generating technologies, in renewable energy zones infrastructure planning and expansion to enable large-scale deployment of renewable energy in the future. Large Energy, FERC, NERC, and the regional entities, transmission providers, generating companies, utilities

  10. Assessing large-scale wildlife responses to human infrastructure development

    PubMed Central

    Torres, Aurora; Jaeger, Jochen A. G.; Alonso, Juan Carlos

    2016-01-01

    Habitat loss and deterioration represent the main threats to wildlife species, and are closely linked to the expansion of roads and human settlements. Unfortunately, large-scale effects of these structures remain generally overlooked. Here, we analyzed the European transportation infrastructure network and found that 50% of the continent is within 1.5 km of transportation infrastructure. We present a method for assessing the impacts from infrastructure on wildlife, based on functional response curves describing density reductions in birds and mammals (e.g., road-effect zones), and apply it to Spain as a case study. The imprint of infrastructure extends over most of the country (55.5% in the case of birds and 97.9% for mammals), with moderate declines predicted for birds (22.6% of individuals) and severe declines predicted for mammals (46.6%). Despite certain limitations, we suggest the approach proposed is widely applicable to the evaluation of effects of planned infrastructure developments under multiple scenarios, and propose an internationally coordinated strategy to update and improve it in the future. PMID:27402749

  11. Assessing large-scale wildlife responses to human infrastructure development.

    PubMed

    Torres, Aurora; Jaeger, Jochen A G; Alonso, Juan Carlos

    2016-07-26

    Habitat loss and deterioration represent the main threats to wildlife species, and are closely linked to the expansion of roads and human settlements. Unfortunately, large-scale effects of these structures remain generally overlooked. Here, we analyzed the European transportation infrastructure network and found that 50% of the continent is within 1.5 km of transportation infrastructure. We present a method for assessing the impacts from infrastructure on wildlife, based on functional response curves describing density reductions in birds and mammals (e.g., road-effect zones), and apply it to Spain as a case study. The imprint of infrastructure extends over most of the country (55.5% in the case of birds and 97.9% for mammals), with moderate declines predicted for birds (22.6% of individuals) and severe declines predicted for mammals (46.6%). Despite certain limitations, we suggest the approach proposed is widely applicable to the evaluation of effects of planned infrastructure developments under multiple scenarios, and propose an internationally coordinated strategy to update and improve it in the future.
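
    The functional-response idea in these two records can be sketched as follows: each habitat cell's expected abundance is discounted by a distance-dependent reduction that vanishes beyond a road-effect zone, and the landscape-level decline is the exposure-weighted sum. The curve shape, effect-zone width, and cell values below are hypothetical illustrations, not the parameters fitted by Torres et al.

    ```python
    import math

    def density_reduction(distance_m, effect_zone_m=1000.0, max_reduction=0.5):
        """Hypothetical functional response: the reduction decays with distance
        from the infrastructure and vanishes beyond the effect zone."""
        if distance_m >= effect_zone_m:
            return 0.0
        return max_reduction * math.exp(-3.0 * distance_m / effect_zone_m)

    # habitat cells: distance to nearest road and expected individuals without roads
    cells = [
        {"distance_m": 100,  "expected_individuals": 40},
        {"distance_m": 800,  "expected_individuals": 25},
        {"distance_m": 2500, "expected_individuals": 60},
    ]
    expected = sum(c["expected_individuals"] for c in cells)
    lost = sum(c["expected_individuals"] * density_reduction(c["distance_m"]) for c in cells)
    print(f"predicted decline: {100 * lost / expected:.1f}% of individuals")
    ```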

  12. Modeling the Hydrologic Effects of Large-Scale Green Infrastructure Projects with GIS

    NASA Astrophysics Data System (ADS)

    Bado, R. A.; Fekete, B. M.; Khanbilvardi, R.

    2015-12-01

    Impervious surfaces in urban areas generate excess runoff, which in turn causes flooding, combined sewer overflows, and degradation of adjacent surface waters. Municipal environmental protection agencies have shown a growing interest in mitigating these effects with 'green' infrastructure practices that partially restore the perviousness and water holding capacity of urban centers. Assessment of the performance of current and future green infrastructure projects is hindered by the lack of adequate hydrological modeling tools; conventional techniques fail to account for the complex flow pathways of urban environments, and detailed analyses are difficult to prepare for the very large domains in which green infrastructure projects are implemented. Currently, no standard toolset exists that can rapidly and conveniently predict runoff, consequent inundations, and sewer overflows at a city-wide scale. We demonstrate how streamlined modeling techniques can be used with open-source GIS software to efficiently model runoff in large urban catchments. Hydraulic parameters and flow paths through city blocks, roadways, and sewer drains are automatically generated from GIS layers, and ultimately urban flow simulations can be executed for a variety of rainfall conditions. With this methodology, users can understand the implications of large-scale land use changes and green/gray storm water retention systems on hydraulic loading, peak flow rates, and runoff volumes.
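
    As a rough illustration of the kind of cell-by-cell calculation such a GIS workflow performs, the sketch below estimates runoff depth with the standard SCS curve-number relation, Q = (P - 0.2S)^2 / (P + 0.8S) with S = 1000/CN - 10 (depths in inches), and compares a baseline grid against one in which some cells are retrofitted with green infrastructure (lower curve numbers). This is a generic simplification for illustration, not the routing scheme described in the abstract.

    ```python
    def scs_runoff_in(precip_in, cn):
        """SCS curve-number runoff depth (inches)."""
        s = 1000.0 / cn - 10.0      # potential maximum retention
        ia = 0.2 * s                # initial abstraction
        return 0.0 if precip_in <= ia else (precip_in - ia) ** 2 / (precip_in - ia + s)

    # hypothetical city blocks: (area in acres, baseline CN, CN after GI retrofit)
    blocks = [(2.0, 94, 80), (1.5, 92, 92), (3.0, 88, 75)]
    storm_in = 1.8                  # design storm depth, inches

    def total_runoff(use_retrofit):
        # acre-inches of runoff summed over all blocks
        return sum(area * scs_runoff_in(storm_in, cn_gi if use_retrofit else cn)
                   for area, cn, cn_gi in blocks)

    base, retro = total_runoff(False), total_runoff(True)
    print(f"runoff reduced by {100 * (base - retro) / base:.0f}%")
    ```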

  13. A risk assessment methodology for critical transportation infrastructure.

    DOT National Transportation Integrated Search

    2002-01-01

    Infrastructure protection typifies a problem of risk assessment and management in a large-scale system. This study offers a methodological framework to identify, prioritize, assess, and manage risks. It includes the following major considerations: (1...

  14. Education as eHealth Infrastructure: Considerations in Advancing a National Agenda for eHealth

    ERIC Educational Resources Information Center

    Hilberts, Sonya; Gray, Kathleen

    2014-01-01

    This paper explores the role of education as infrastructure in large-scale ehealth strategies--in theory, in international practice and in one national case study. Education is often invisible in the documentation of ehealth infrastructure. Nevertheless a review of international practice shows that there is significant educational investment made…

  15. Framing Innovation: The Impact of the Superintendent's Technology Infrastructure Decisions on the Acceptance of Large-Scale Technology Initiatives

    ERIC Educational Resources Information Center

    Arnold, Erik P.

    2014-01-01

    A multiple-case qualitative study of five school districts that had implemented various large-scale technology initiatives was conducted to describe what superintendents do to gain acceptance of those initiatives. The large-scale technology initiatives in the five participating districts included 1:1 District-Provided Device laptop and tablet…

  16. Implementation of a large-scale hospital information infrastructure for multi-unit health-care services.

    PubMed

    Yoo, Sun K; Kim, Dong Keun; Kim, Jung C; Park, Youn Jung; Chang, Byung Chul

    2008-01-01

    With the increase in demand for high quality medical services, the need for an innovative hospital information system has become essential. An improved system has been implemented in all hospital units of the Yonsei University Health System. Interoperability between multi-units required appropriate hardware infrastructure and software architecture. This large-scale hospital information system encompassed PACS (Picture Archiving and Communications Systems), EMR (Electronic Medical Records) and ERP (Enterprise Resource Planning). It involved two tertiary hospitals and 50 community hospitals. The monthly data production rate by the integrated hospital information system is about 1.8 TByte and the total quantity of data produced so far is about 60 TByte. Large scale information exchange and sharing will be particularly useful for telemedicine applications.

  17. Tradeoffs and synergies between biofuel production and large-scale solar infrastructure in deserts

    NASA Astrophysics Data System (ADS)

    Ravi, S.; Lobell, D. B.; Field, C. B.

    2012-12-01

    Solar energy installations in deserts are on the rise, fueled by technological advances and policy changes. Deserts, with a combination of high solar radiation and the availability of large areas unusable for crop production, are ideal locations for large-scale solar installations. For efficient power generation, solar infrastructures require large amounts of water for operation (mostly for cleaning panels and dust suppression), leading to significant moisture additions to desert soil. A pertinent question is how to use the moisture inputs for sustainable agriculture/biofuel production. We investigated the water requirements for large solar infrastructures in North American deserts and explored the possibilities for integrating biofuel production with solar infrastructure. In co-located systems the possible decline in yields due to shading by solar panels may be offset by the benefits of periodic water addition to biofuel crops, simpler dust management and more efficient power generation in solar installations, and decreased impacts on natural habitats and scarce resources in deserts. In particular, we evaluated the potential to integrate solar infrastructure with biomass feedstocks that grow in arid and semi-arid lands (Agave spp.), which are found to produce high yields with minimal water inputs. To this end, we conducted a detailed life cycle analysis for these coupled agave biofuel-solar energy systems to explore the tradeoffs and synergies, in the context of energy input-output, water use and carbon emissions.

  18. Research Activities at Fermilab for Big Data Movement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mhashilkar, Parag; Wu, Wenji; Kim, Hyun W

    2013-01-01

    Adaptation of 100GE networking infrastructure is the next step towards management of big data. Being the US Tier-1 Center for the Large Hadron Collider's (LHC) Compact Muon Solenoid (CMS) experiment and the central data center for several other large-scale research collaborations, Fermilab has to constantly deal with the scaling and wide-area distribution challenges of big data. In this paper, we will describe some of the challenges involved in the movement of big data over 100GE infrastructure and the research activities at Fermilab to address these challenges.
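
    To put the 100 GE upgrade in perspective, a back-of-the-envelope transfer-time estimate is sketched below; the 70% effective-utilization factor is an illustrative assumption, not a measured Fermilab figure.

    ```python
    def transfer_hours(dataset_tb, link_gbps, efficiency=0.7):
        """Rough wide-area transfer time in hours.

        efficiency folds in protocol overhead and contention (assumed value).
        """
        bits = dataset_tb * 8e12
        return bits / (link_gbps * 1e9 * efficiency) / 3600

    for tb in (100, 1000):
        print(f"{tb} TB over 100 GE: {transfer_hours(tb, 100):.1f} h "
              f"vs 10 GE: {transfer_hours(tb, 10):.1f} h")
    ```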

  19. Grid-Enabled Quantitative Analysis of Breast Cancer

    DTIC Science & Technology

    2009-10-01

    large-scale, multi-modality computerized image analysis. The central hypothesis of this research is that large-scale image analysis for breast cancer...pilot study to utilize large-scale parallel Grid computing to harness the nationwide cluster infrastructure for optimization of medical image ... analysis parameters. Additionally, we investigated the use of cutting-edge data analysis/mining techniques as applied to Ultrasound, FFDM, and DCE-MRI Breast

  20. Evaluating Green/Gray Infrastructure for CSO/Stormwater Control

    EPA Science Inventory

    The NRMRL is conducting this project to evaluate the water quality and quantity benefits of a large-scale application of green infrastructure (low-impact development/best management practices) retrofits in an entire subcatchment. It will document ORD's effort to demonstrate the e...

  1. Scaling of an information system in a public healthcare market--infrastructuring from the vendor's perspective.

    PubMed

    Johannessen, Liv Karen; Obstfelder, Aud; Lotherington, Ann Therese

    2013-05-01

    The purpose of this paper is to explore the making and scaling of information infrastructures, as well as how the conditions for scaling a component may change for the vendor. The first research question is how the making and scaling of a healthcare information infrastructure can be done and by whom. The second question is what scope for manoeuvre there might be for vendors aiming to expand their market. This case study is based on an interpretive approach, whereby data is gathered through participant observation and semi-structured interviews. A case study of the making and scaling of an electronic system for general practitioners ordering laboratory services from hospitals is described as comprising two distinct phases. The first may be characterized as an evolving phase, when development, integration and implementation were achieved in small steps, and the vendor, together with end users, had considerable freedom to create the solution according to the users' needs. The second phase was characterized by a large-scale procurement process over which regional healthcare authorities exercised much more control and the needs of groups other than the end users influenced the design. The making and scaling of healthcare information infrastructures is not simply a process of evolution, in which the end users use and change the technology. It also consists of large steps, during which different actors, including vendors and healthcare authorities, may make substantial contributions. This process requires work, negotiation and strategies. The conditions for the vendor may change dramatically, from considerable freedom and close relationships with users and customers in the small-scale development, to losing control of the product and being required to engage in more formal relations with customers in the wider public healthcare market. Onerous procurement processes may be one of the reasons why large-scale implementation of information projects in healthcare is difficult and slow. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  2. Ontology-Driven Provenance Management in eScience: An Application in Parasite Research

    NASA Astrophysics Data System (ADS)

    Sahoo, Satya S.; Weatherly, D. Brent; Mutharaju, Raghava; Anantharam, Pramod; Sheth, Amit; Tarleton, Rick L.

    Provenance, from the French word "provenir", describes the lineage or history of a data entity. Provenance is critical information in scientific applications to verify experiment process, validate data quality and associate trust values with scientific results. Current industrial scale eScience projects require an end-to-end provenance management infrastructure. This infrastructure needs to be underpinned by formal semantics to enable analysis of large scale provenance information by software applications. Further, effective analysis of provenance information requires well-defined query mechanisms to support complex queries over large datasets. This paper introduces an ontology-driven provenance management infrastructure for biology experiment data, as part of the Semantic Problem Solving Environment (SPSE) for Trypanosoma cruzi (T.cruzi). This provenance infrastructure, called T.cruzi Provenance Management System (PMS), is underpinned by (a) a domain-specific provenance ontology called Parasite Experiment ontology, (b) specialized query operators for provenance analysis, and (c) a provenance query engine. The query engine uses a novel optimization technique based on materialized views called materialized provenance views (MPV) to scale with increasing data size and query complexity. This comprehensive ontology-driven provenance infrastructure not only allows effective tracking and management of ongoing experiments in the Tarleton Research Group at the Center for Tropical and Emerging Global Diseases (CTEGD), but also enables researchers to retrieve the complete provenance information of scientific results for publication in literature.
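
    The materialized-view idea behind the MPV optimization can be illustrated with a toy relational example: the transitive lineage of each data entity is computed once and stored as a flat table, so later provenance queries become plain lookups rather than recursive traversals. The schema and sample records below are hypothetical, not the Parasite Experiment ontology or the T.cruzi PMS query engine.

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE derivation (entity TEXT, source TEXT, activity TEXT);
    INSERT INTO derivation VALUES
      ('peptide_list',    'ms_spectra',      'protein_identification'),
      ('ms_spectra',      'protein_extract', 'mass_spectrometry'),
      ('protein_extract', 'parasite_sample', 'sample_preparation');
    CREATE TABLE lineage (entity TEXT, ancestor TEXT, depth INTEGER);
    """)

    # Materialize the transitive lineage once, so repeated provenance queries
    # read the flat 'lineage' table instead of re-running the recursion.
    rows = con.execute("""
    WITH RECURSIVE anc(entity, ancestor, depth) AS (
        SELECT entity, source, 1 FROM derivation
        UNION ALL
        SELECT anc.entity, d.source, anc.depth + 1
        FROM anc JOIN derivation d ON d.entity = anc.ancestor
    )
    SELECT entity, ancestor, depth FROM anc
    """).fetchall()
    con.executemany("INSERT INTO lineage VALUES (?, ?, ?)", rows)

    # A typical provenance question: where did 'peptide_list' come from?
    for ancestor, depth in con.execute(
            "SELECT ancestor, depth FROM lineage WHERE entity = ? ORDER BY depth",
            ("peptide_list",)):
        print(depth, ancestor)
    ```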

  3. Beyond wilderness: towards an anthropology of infrastructure and the built environment in the Russian North

    PubMed Central

    Schweitzer, Peter; Povoroznyuk, Olga; Schiesser, Sigrid

    2017-01-01

    Public and academic discourses about the Polar regions typically focus on the so-called natural environment. While these discourses and inquiries continue to be relevant, the current article asks how to conceptualize the on-going industrial and infrastructural build-up of the Arctic. Acknowledging that the “built environment” is not an invention of modernity, the article nevertheless focuses on large-scale infrastructural projects of the twentieth century, a period that marks a watershed of industrial and infrastructural development in the north. Given that the Soviet Union was at the vanguard of these developments, the focus will be on Soviet and Russian large-scale projects. We will be discussing two cases of transportation infrastructure, one of them based on an on-going research project being conducted by the authors along the Baikal–Amur Mainline (BAM) and the other focused on the so-called Northern Sea Route, the marine passage with a long history that has recently been regaining public and academic attention. The concluding section will argue for increased attention to the interactions between humans and the built environment, serving as a kind of programmatic call for more anthropological attention to infrastructure in the Russian north and other polar regions. PMID:29098112

  4. A genome-wide association study platform built on iPlant cyber-infrastructure

    USDA-ARS?s Scientific Manuscript database

    We demonstrated a flexible Genome-Wide Association (GWA) Study (GWAS) platform built upon the iPlant Collaborative Cyber-infrastructure. The platform supports big data management, sharing, and large scale study of both genotype and phenotype data on clusters. End users can add their own analysis too...

  5. Evaluating Commercial and Private Cloud Services for Facility-Scale Geodetic Data Access, Analysis, and Services

    NASA Astrophysics Data System (ADS)

    Meertens, C. M.; Boler, F. M.; Ertz, D. J.; Mencin, D.; Phillips, D.; Baker, S.

    2017-12-01

    UNAVCO, in its role as an NSF facility for geodetic infrastructure and data, has succeeded for over two decades using on-premises infrastructure, and while the promise of cloud-based infrastructure is well-established, significant questions about the suitability of such infrastructure for facility-scale services remain. Primarily through the GeoSciCloud award from NSF EarthCube, UNAVCO is investigating the costs, advantages, and disadvantages of providing its geodetic data and services in the cloud versus using UNAVCO's on-premises infrastructure. (IRIS is a collaborator on the project and is performing its own suite of investigations.) In contrast to the 2-3 year time scale for the research cycle, the time scale of operation and planning for NSF facilities is a minimum of five years and for some services extends to a decade or more. Planning for on-premises infrastructure is deliberate, and migrations typically take months to years to fully implement. Migrations to a cloud environment can only go forward with similar deliberate planning and understanding of all costs and benefits. The EarthCube GeoSciCloud project is intended to address the uncertainties of facility-level operations in the cloud. Investigations are being performed in a commercial cloud environment (Amazon AWS) during the first year of the project and in a private cloud environment (NSF XSEDE resource at the Texas Advanced Computing Center) during the second year. These investigations are expected to illuminate the potential as well as the limitations of running facility-scale production services in the cloud. The work involves running cloud-based services in parallel with their on-premises equivalents, including: data serving via FTP from a large data store, operation of a metadata database, production-scale processing of multiple months of geodetic data, web services delivery of quality-checked data and products, large-scale compute services for event post-processing, and serving real-time data from a network of 700-plus GPS stations. The evaluation is based on a suite of metrics that we have developed to elucidate the effectiveness of cloud-based services in price, performance, and management. Services are currently running in AWS and evaluation is underway.

  6. Community-aware charging station network design for electrified vehicles in urban areas : reducing congestion, emissions, improving accessibility, and promoting walking, bicycling, and use of public transportation.

    DOT National Transportation Integrated Search

    2016-08-31

    A major challenge for achieving large-scale adoption of EVs is an accessible infrastructure for the communities. The societal benefits of large-scale adoption of EVs cannot be realized without adequate deployment of publicly accessible charging stati...

  7. The Infrastructure of Accountability: Data Use and the Transformation of American Education

    ERIC Educational Resources Information Center

    Anagnostopoulos, Dorothea, Ed.; Rutledge, Stacey A., Ed.; Jacobsen, Rebecca, Ed.

    2013-01-01

    "The Infrastructure of Accountability" brings together leading and emerging scholars who set forth an ambitious conceptual framework for understanding the full impact of large-scale, performance-based accountability systems on education. Over the past 20 years, schools and school systems have been utterly reshaped by the demands of…

  8. Performance results from Small- and Large-Scale System Monitoring and green Infrastructure in Kansas City - slides

    EPA Science Inventory

    In 2010, Kansas City, MO (KCMO) signed a consent decree with EPA on combined sewer overflows. The City decided to use adaptive management in order to extensively utilize green infrastructure (GI) in lieu of, and in addition to, structural controls. KCMO installed 130 GI storm co...

  9. People at risk - nexus critical infrastructure and society

    NASA Astrophysics Data System (ADS)

    Heiser, Micha; Thaler, Thomas; Fuchs, Sven

    2016-04-01

    Strategic infrastructure networks include the highly complex and interconnected systems that are so vital to a city or state that any sudden disruption can result in debilitating impacts on human life, the economy, and society as a whole. Recently, various studies have applied complex network-based models to study the performance and vulnerability of infrastructure systems under various types of attacks and hazards - a major part of which, particularly after the 9/11 incident, is related to terrorist attacks. Here, vulnerability is generally defined as the performance drop of an infrastructure system under a given disruptive event. The performance can be measured by different metrics, which correspond to various levels of resilience. In this paper, we will address vulnerability and exposure of critical infrastructure in the Eastern Alps. The Federal State of Tyrol lies on an international transport route and is an essential component of north-south transport connectivity in Europe. Any interruption of the transport flow leads to incommensurable consequences in terms of indirect losses, since the system does not feature redundant elements at comparable economic efficiency. Natural hazard processes such as floods, debris flows, rock falls and avalanches endanger this infrastructure corridor; the large flood events of 2005 and 2012 and the rock falls of 2014 had strong impacts on this critical infrastructure, disrupting railway lines (in 2005 and 2012) and highways and motorways (in 2014). The aim of this paper is to present how critical infrastructures as well as communities and societies are vulnerable to, and can be made resilient against, natural hazard risks and the related cascading effects on different compartments (industrial, infrastructural, societal, institutional, cultural, etc.), which are dominated by the type of hazard (avalanches, torrential flooding, debris flows, rock falls). Specific themes will be addressed in various case studies to allow cross-learning and cross-comparison of, for example, rural and urban areas, and different scales. Correspondingly, scale-specific resilience indicators and metrics will be developed to tailor methods to specific needs according to the scale of assessment (micro/local and macro/regional) and to the type of infrastructure. The traditional indicators normally used in structural analysis are not sufficient to understand how events happening on the networks can have cascading consequences. Moreover, effects have multidimensional (technical, economic, organizational and human), multiscale (micro and macro) and temporal characteristics (short- to long-term incidence). These considerations lead to two different activities: 1) computation of classic structural analysis indicators for the case studies in order to characterize the transport infrastructure; and 2) development of a set of new measures of resilience. To mitigate natural hazard risk, a large number of protection measures of different types have been constructed following inhomogeneous reliability standards. The focus of this case study will be on resilience issues and decision making in the context of a large-scale sectoral approach focused on the transport infrastructure network.
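
    The notion of vulnerability as a performance drop under a disruptive event can be made concrete with a small network example: compute a performance metric (here, global efficiency) before and after removing a link, and report the relative loss. The toy graph below uses generic node names and the networkx library; it illustrates the metric only and is not the Tyrol case-study network.

    ```python
    import networkx as nx

    # Toy transport network: nodes are junctions, edges are road/rail links.
    G = nx.Graph()
    G.add_edges_from([("A", "B"), ("B", "C"), ("B", "D"), ("D", "C")])

    baseline = nx.global_efficiency(G)

    # Disruptive event (e.g. a rock fall) closes the B-C link.
    H = G.copy()
    H.remove_edge("B", "C")
    degraded = nx.global_efficiency(H)

    vulnerability = (baseline - degraded) / baseline
    print(f"performance drop after closing B-C: {vulnerability:.1%}")
    ```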

  10. The cost of getting CCS wrong: Uncertainty, infrastructure design, and stranded CO 2

    DOE PAGES

    Middleton, Richard Stephen; Yaw, Sean Patrick

    2018-01-11

    Carbon capture and storage (CCS) infrastructure will require industry—such as fossil-fuel power, ethanol production, and oil and gas extraction—to make massive investment in infrastructure. The cost of getting these investments wrong will be substantial and will impact the success of CCS technology. Multiple factors can and will impact the success of commercial-scale CCS, including significant uncertainties regarding capture, transport, and injection-storage decisions. Uncertainties throughout the CCS supply chain include policy, technology, engineering performance, economics, and market forces. In particular, large uncertainties exist for the injection and storage of CO2. Even taking into account upfront investment in site characterization, the final performance of the storage phase is largely unknown until commercial-scale injection has started. We explore and quantify the impact of getting CCS infrastructure decisions wrong based on uncertain injection rates and uncertain CO2 storage capacities using a case study managing CO2 emissions from the Canadian oil sands industry in Alberta. We use SimCCS, a widely used CCS infrastructure design framework, to develop multiple CCS infrastructure scenarios. Each scenario consists of a CCS infrastructure network that connects CO2 sources (oil sands extraction and processing) with CO2 storage reservoirs (acid gas storage reservoirs) using a dedicated CO2 pipeline network. Each scenario is analyzed under a range of uncertain storage estimates and infrastructure performance is assessed and quantified in terms of the cost to build additional infrastructure to store all CO2. We also include the role of stranded CO2, CO2 that a source was expecting to but cannot capture due to substandard performance in the transport and storage infrastructure. Results show that the costs of getting the original infrastructure design wrong are significant and that comprehensive planning will be required to ensure that CCS becomes a successful climate mitigation technology. Here, we show that the concept of stranded CO2 can transform a seemingly high-performing infrastructure design into the worst-case scenario.

  11. The cost of getting CCS wrong: Uncertainty, infrastructure design, and stranded CO 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Middleton, Richard Stephen; Yaw, Sean Patrick

    Carbon capture and storage (CCS) infrastructure will require industry—such as fossil-fuel power, ethanol production, and oil and gas extraction—to make massive investment in infrastructure. The cost of getting these investments wrong will be substantial and will impact the success of CCS technology. Multiple factors can and will impact the success of commercial-scale CCS, including significant uncertainties regarding capture, transport, and injection-storage decisions. Uncertainties throughout the CCS supply chain include policy, technology, engineering performance, economics, and market forces. In particular, large uncertainties exist for the injection and storage of CO2. Even taking into account upfront investment in site characterization, the final performance of the storage phase is largely unknown until commercial-scale injection has started. We explore and quantify the impact of getting CCS infrastructure decisions wrong based on uncertain injection rates and uncertain CO2 storage capacities using a case study managing CO2 emissions from the Canadian oil sands industry in Alberta. We use SimCCS, a widely used CCS infrastructure design framework, to develop multiple CCS infrastructure scenarios. Each scenario consists of a CCS infrastructure network that connects CO2 sources (oil sands extraction and processing) with CO2 storage reservoirs (acid gas storage reservoirs) using a dedicated CO2 pipeline network. Each scenario is analyzed under a range of uncertain storage estimates and infrastructure performance is assessed and quantified in terms of the cost to build additional infrastructure to store all CO2. We also include the role of stranded CO2, CO2 that a source was expecting to but cannot capture due to substandard performance in the transport and storage infrastructure. Results show that the costs of getting the original infrastructure design wrong are significant and that comprehensive planning will be required to ensure that CCS becomes a successful climate mitigation technology. Here, we show that the concept of stranded CO2 can transform a seemingly high-performing infrastructure design into the worst-case scenario.

  12. Co-governing decentralised water systems: an analytical framework.

    PubMed

    Yu, C; Brown, R; Morison, P

    2012-01-01

    Current discourses in urban water management emphasise a diversity of water sources and scales of infrastructure for resilience and adaptability. During the last 2 decades, in particular, various small-scale systems emerged and developed so that the debate has largely moved from centralised versus decentralised water systems toward governing integrated and networked systems of provision and consumption where small-scale technologies are embedded in large-scale centralised infrastructures. However, while centralised systems have established boundaries of ownership and management, decentralised water systems (such as stormwater harvesting technologies for the street, allotment/house scales) do not, therefore the viability for adoption and/or continued use of decentralised water systems is challenged. This paper brings together insights from the literature on public sector governance, co-production and social practices model to develop an analytical framework for co-governing such systems. The framework provides urban water practitioners with guidance when designing co-governance arrangements for decentralised water systems so that these systems continue to exist, and become widely adopted, within the established urban water regime.

  13. cOSPREY: A Cloud-Based Distributed Algorithm for Large-Scale Computational Protein Design

    PubMed Central

    Pan, Yuchao; Dong, Yuxi; Zhou, Jingtian; Hallen, Mark; Donald, Bruce R.; Xu, Wei

    2016-01-01

    Finding the global minimum energy conformation (GMEC) of a huge combinatorial search space is the key challenge in computational protein design (CPD) problems. Traditional algorithms lack a scalable and efficient distributed design scheme, preventing researchers from taking full advantage of current cloud infrastructures. We design cloud OSPREY (cOSPREY), an extension to the widely used protein design software OSPREY, to allow the original design framework to scale to commercial cloud infrastructures. We propose several novel designs to integrate both algorithm and system optimizations, such as GMEC-specific pruning, state search partitioning, asynchronous algorithm state sharing, and fault tolerance. We evaluate cOSPREY on three different cloud platforms using different technologies and show that it can solve a number of large-scale protein design problems that have not been possible with previous approaches. PMID:27154509
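
    One classic ingredient of GMEC-specific pruning is dead-end elimination: a rotamer can be discarded when some alternative at the same position is provably at least as good no matter what the other positions choose (the Goldstein criterion). The sketch below applies that criterion to random toy energies; it is a simplified, single-machine illustration, not cOSPREY's distributed implementation.

    ```python
    import random

    random.seed(0)
    n_pos, n_rot = 4, 3   # toy design: 4 residue positions, 3 rotamers each
    E_self = [[random.uniform(-2, 2) for _ in range(n_rot)] for _ in range(n_pos)]
    E_pair = {(i, r, j, s): random.uniform(-1, 1)
              for i in range(n_pos) for j in range(i + 1, n_pos)
              for r in range(n_rot) for s in range(n_rot)}

    def pair(i, r, j, s):
        # pairwise energy, stored once per unordered position pair
        return E_pair[(i, r, j, s)] if i < j else E_pair[(j, s, i, r)]

    def prunable(i, r, alive):
        """Goldstein criterion: r at position i cannot be in the GMEC if some
        competitor t is at least as good whatever the other positions choose."""
        for t in alive[i]:
            if t == r:
                continue
            bound = E_self[i][r] - E_self[i][t]
            for j in range(n_pos):
                if j != i:
                    bound += min(pair(i, r, j, s) - pair(i, t, j, s) for s in alive[j])
            if bound > 0:
                return True
        return False

    alive = [set(range(n_rot)) for _ in range(n_pos)]
    changed = True
    while changed:
        changed = False
        for i in range(n_pos):
            for r in list(alive[i]):
                if len(alive[i]) > 1 and prunable(i, r, alive):
                    alive[i].discard(r)
                    changed = True
    print("surviving rotamers per position:", [sorted(a) for a in alive])
    ```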

  14. Application of large-scale computing infrastructure for diverse environmental research applications using GC3Pie

    NASA Astrophysics Data System (ADS)

    Maffioletti, Sergio; Dawes, Nicholas; Bavay, Mathias; Sarni, Sofiane; Lehning, Michael

    2013-04-01

    The Swiss Experiment platform (SwissEx: http://www.swiss-experiment.ch) provides a distributed storage and processing infrastructure for environmental research experiments. The aim of the second phase project (the Open Support Platform for Environmental Research, OSPER, 2012-2015) is to develop the existing infrastructure to provide scientists with an improved workflow. This improved workflow will include pre-defined, documented and connected processing routines. A large-scale computing and data facility is required to provide reliable and scalable access to data for analysis, and it is desirable that such an infrastructure should be free of traditional data handling methods. Such an infrastructure has been developed using the cloud-based part of the Swiss national infrastructure SMSCG (http://www.smscg.ch) and Academic Cloud. The infrastructure under construction supports two main usage models:

    1) Ad-hoc data analysis scripts: These scripts are simple processing scripts, written by the environmental researchers themselves, which can be applied to large data sets via the high power infrastructure. Examples of this type of script are spatial statistical analysis scripts (R-based scripts), mostly computed on raw meteorological and/or soil moisture data. These provide processed output in the form of a grid, a plot, or a kml.

    2) Complex models: A more intense data analysis pipeline centered (initially) around the physical process model, Alpine3D, and the MeteoIO plugin; depending on the data set, this may require a tightly coupled infrastructure. SMSCG already supports Alpine3D executions as both regular grid jobs and as virtual software appliances. A dedicated appliance with the Alpine3D-specific libraries has been created and made available through the SMSCG infrastructure. The analysis pipelines are activated and supervised by simple control scripts that, depending on the data fetched from the meteorological stations, launch new instances of the Alpine3D appliance, execute location-based subroutines at each grid point and store the results back into the central repository for post-processing. An optional extension of this infrastructure will be to provide a 'ring buffer'-type database infrastructure, such that model results (e.g. test runs made to check parameter dependency or for development) can be visualised and downloaded after completion without submitting them to a permanent storage infrastructure.

    Data organization: Data collected from sensors are archived and classified in distributed sites connected with an open-source software middleware, GSN. Publicly available data are available through common web services and via a cloud storage server (based on Swift). Collocation of the data and processing in the cloud would eventually eliminate data transfer requirements.

    Execution control logic: Execution of the data analysis pipelines (for both the R-based analysis and the Alpine3D simulations) has been implemented using the GC3Pie framework developed by UZH (https://code.google.com/p/gc3pie/). This allows large-scale, fault-tolerant execution of the pipelines to be described in terms of software appliances. GC3Pie also allows supervision of the execution of large campaigns of appliances as a single simulation. This poster will present the fundamental architectural components of the data analysis pipelines together with initial experimental results.
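
    The fan-out pattern described above, running a location-based subroutine at every grid point and collecting the results for post-processing, can be sketched with Python's standard library as below. concurrent.futures is used here only as a stand-in for the appliance-based execution; this is not the GC3Pie API.

    ```python
    from concurrent.futures import ProcessPoolExecutor

    def point_statistic(point):
        """Placeholder for a location-based subroutine executed at one grid
        point (e.g. a spatial statistic computed from nearby station data)."""
        x, y = point
        return point, (x ** 2 + y ** 2) ** 0.5

    if __name__ == "__main__":
        grid = [(x, y) for x in range(10) for y in range(10)]
        with ProcessPoolExecutor(max_workers=4) as pool:
            results = dict(pool.map(point_statistic, grid))
        # in the real pipeline, results would be written back to the
        # central repository for post-processing
        print(len(results), "grid points processed")
    ```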

  15. Infrastructure for Large-Scale Quality-Improvement Projects: Early Lessons from North Carolina Improving Performance in Practice

    ERIC Educational Resources Information Center

    Newton, Warren P.; Lefebvre, Ann; Donahue, Katrina E.; Bacon, Thomas; Dobson, Allen

    2010-01-01

    Introduction: Little is known regarding how to accomplish large-scale health care improvement. Our goal is to improve the quality of chronic disease care in all primary care practices throughout North Carolina. Methods: Methods for improvement include (1) common quality measures and shared data system; (2) rapid cycle improvement principles; (3)…

  16. The ATLAS Simulation Infrastructure

    DOE PAGES

    Aad, G.; Abbott, B.; Abdallah, J.; ...

    2010-09-25

    The simulation software for the ATLAS Experiment at the Large Hadron Collider is being used for large-scale production of events on the LHC Computing Grid. This simulation requires many components, from the generators that simulate particle collisions, through packages simulating the response of the various detectors and triggers. All of these components come together under the ATLAS simulation infrastructure. In this paper, that infrastructure is discussed, including that supporting the detector description, interfacing the event generation, and combining the GEANT4 simulation of the response of the individual detectors. Also described are the tools allowing the software validation, performance testing, and the validation of the simulated output against known physics processes.

  17. Exploring Cloud Computing for Large-scale Scientific Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Guang; Han, Binh; Yin, Jian

    This paper explores cloud computing for large-scale data-intensive scientific applications. Cloud computing is attractive because it provides hardware and software resources on-demand, which relieves the burden of acquiring and maintaining a huge amount of resources that may be used only once by a scientific application. However, unlike typical commercial applications that often just require a moderate amount of ordinary resources, large-scale scientific applications often need to process enormous amounts of data in the terabyte or even petabyte range and require special high-performance hardware with low-latency connections to complete computation in a reasonable amount of time. To address these challenges, we build an infrastructure that can dynamically select high-performance computing hardware across institutions and dynamically adapt the computation to the selected resources to achieve high performance. We have also demonstrated the effectiveness of our infrastructure by building a system biology application and an uncertainty quantification application for carbon sequestration, which can efficiently utilize data and computation resources across several institutions.
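
    A minimal sketch of the dynamic selection step, picking the site that minimizes estimated completion time as the sum of queue wait, wide-area data transfer, and compute, is given below. The resource figures and the cost model are illustrative assumptions, not measurements from the paper.

    ```python
    def completion_time(resource, data_gb, pflop):
        """Estimated wall time in seconds: queue wait, then move the input
        data, then compute. All resource entries are illustrative."""
        transfer_s = data_gb * 8 / resource["wan_gbps"]
        compute_s = pflop * 1e15 / (resource["tflops"] * 1e12)
        return resource["queue_wait_s"] + transfer_s + compute_s

    resources = [
        {"name": "campus_cluster", "wan_gbps": 1,  "tflops": 50,  "queue_wait_s": 3600},
        {"name": "cloud_hpc",      "wan_gbps": 10, "tflops": 200, "queue_wait_s": 300},
    ]

    job = {"data_gb": 2000, "pflop": 30}
    best = min(resources, key=lambda r: completion_time(r, job["data_gb"], job["pflop"]))
    print("selected:", best["name"])
    ```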

  18. The Role of ICT Infrastructure in Its Application to Classrooms: A Large Scale Survey for Middle and Primary Schools in China

    ERIC Educational Resources Information Center

    Lu, Chun; Tsai, Chin-Chung; Wu, Di

    2015-01-01

    With the ever-deepening economic reform and international trend of ICT application in education, the Chinese government is strengthening its basic education curriculum reform and actively facilitating the application of ICT in education. Given the achievement gap of ICT infrastructure and its application in middle and primary schools between urban…

  19. Infrastructure for Large-Scale Tests in Marine Autonomy

    DTIC Science & Technology

    2012-02-01

  20. Large-scale restoration mitigate land degradation and support the establishment of green infrastructure

    NASA Astrophysics Data System (ADS)

    Tóthmérész, Béla; Mitchley, Jonathan; Jongepierová, Ivana; Baasch, Annett; Fajmon, Karel; Kirmer, Anita; Prach, Karel; Řehounková, Klára; Tischew, Sabine; Twiston-Davies, Grace; Dutoit, Thierry; Buisson, Elise; Jeunatre, Renaud; Valkó, Orsolya; Deák, Balázs; Török, Péter

    2017-04-01

    To sustain human well-being and quality of life, it is essential to develop and support green infrastructure (a strategically planned network of natural and semi-natural areas with other environmental features designed and managed to deliver a wide range of ecosystem services). For developing and sustaining green infrastructure, the conservation and restoration of biodiversity in natural and traditionally managed habitats is essential. Species-rich landscapes in Europe have been maintained over centuries by various kinds of low-intensity use. Recently, they have suffered losses in extent and diversity due to land degradation through intensification or abandonment. Conservation of landscape-scale biodiversity requires the maintenance of species-rich habitats and the restoration of lost grasslands. We focus on landscape-level restoration studies including multiple sites at a wide geographical scale (including the Czech Republic, France, Germany, Hungary, and the UK). From a European-wide perspective we aim to address four specific questions: (i) What were the aims and objectives of landscape-scale restoration? (ii) What results have been achieved? (iii) What are the costs of large-scale restoration? (iv) What policy tools are available for the restoration of landscape-scale biodiversity? We conclude that landscape-level restoration offers exciting new opportunities to reconnect long-disrupted ecological processes and to restore landscape connectivity. Generally, these measures make it possible to enhance biodiversity at the landscape scale. The development of policy tools to achieve restoration at the landscape scale is essential for achieving the ambitious targets of the Convention on Biological Diversity and the European Biodiversity Strategy for ecosystem restoration.

  1. Street Level Hydrology: An Urban Application of the WRF-Hydro Framework in Denver, Colorado

    NASA Astrophysics Data System (ADS)

    Read, L.; Hogue, T. S.; Salas, F. R.; Gochis, D.

    2015-12-01

    Urban flood modeling at the watershed scale carries unique challenges in routing complexity, data resolution, social and political issues, and land surface - infrastructure interactions. The ability to accurately trace and predict the flow of water through the urban landscape enables better emergency response management, floodplain mapping, and data for future urban infrastructure planning and development. These services are of growing importance as urban population is expected to continue increasing by 1.84% per year for the next 25 years, increasing the vulnerability of urban regions to damages and loss of life from floods. Although a range of watershed-scale models have been applied in specific urban areas to examine these issues, there is a trend towards national scale hydrologic modeling enabled by supercomputing resources to understand larger system-wide hydrologic impacts and feedbacks. As such it is important to address how urban landscapes can be represented in large scale modeling processes. The current project investigates how coupling terrain and infrastructure routing can improve flow prediction and flooding events over the urban landscape. We utilize the WRF-Hydro modeling framework and a high-resolution terrain routing grid with the goal of compiling standard data needs necessary for fine scale urban modeling and dynamic flood forecasting in the urban setting. The city of Denver is selected as a case study, as it has experienced several large flooding events in the last five years and has an urban annual population growth rate of 1.5%, one of the highest in the U.S. Our work highlights the hydro-informatic challenges associated with linking channel networks and drainage infrastructure in an urban area using the WRF-Hydro modeling framework and high resolution urban models for short-term flood prediction.

  2. Networking for large-scale science: infrastructure, provisioning, transport and application mapping

    NASA Astrophysics Data System (ADS)

    Rao, Nageswara S.; Carter, Steven M.; Wu, Qishi; Wing, William R.; Zhu, Mengxia; Mezzacappa, Anthony; Veeraraghavan, Malathi; Blondin, John M.

    2005-01-01

    Large-scale science computations and experiments require unprecedented network capabilities in the form of large bandwidth and dynamically stable connections to support data transfers, interactive visualizations, and monitoring and steering operations. A number of component technologies dealing with the infrastructure, provisioning, transport and application mappings must be developed and/or optimized to achieve these capabilities. We present a brief account of the following technologies that contribute toward achieving these network capabilities: (a) DOE UltraScienceNet and NSF CHEETAH network testbeds that provide on-demand and scheduled dedicated network connections; (b) experimental results on transport protocols that achieve close to 100% utilization on dedicated 1Gbps wide-area channels; (c) a scheme for optimally mapping a visualization pipeline onto a network to minimize the end-to-end delays; and (d) interconnect configuration and protocols that provides multiple Gbps flows from Cray X1 to external hosts.
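
    Item (c), mapping a visualization pipeline onto a network to minimize end-to-end delay, can be posed as a small dynamic program when the pipeline is linear: the best delay of finishing stage m on host h is that stage's compute cost plus the cheapest way of finishing stage m-1 somewhere and shipping its output to h. The sketch below implements this simplified formulation with made-up costs; the paper's actual scheme may differ.

    ```python
    def map_pipeline(compute, transfer):
        """compute[m][h]  = delay of stage m on host h
           transfer[a][b] = delay of moving a stage's output from host a to b
           Returns (minimum end-to-end delay, host assignment per stage)."""
        n_stages, n_hosts = len(compute), len(compute[0])
        best = list(compute[0])          # best delay ending stage 0 on host h
        back = []                        # back[m-1][h] = chosen host for stage m-1
        for m in range(1, n_stages):
            nxt, picks = [], []
            for h in range(n_hosts):
                d, g = min((best[g] + transfer[g][h] + compute[m][h], g)
                           for g in range(n_hosts))
                nxt.append(d)
                picks.append(g)
            best, back = nxt, back + [picks]
        # backtrack the host assignment from the cheapest final host
        h = min(range(n_hosts), key=lambda k: best[k])
        path = [h]
        for picks in reversed(back):
            h = picks[h]
            path.append(h)
        return min(best), list(reversed(path))

    # toy instance: 3 stages (filter, render, display) mapped onto 2 hosts
    compute = [[4, 2], [6, 3], [1, 5]]
    transfer = [[0, 2], [2, 0]]
    print(map_pipeline(compute, transfer))   # -> (8, [1, 1, 0])
    ```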

  3. Collaboratively Architecting a Scalable and Adaptable Petascale Infrastructure to Support Transdisciplinary Scientific Research for the Australian Earth and Environmental Sciences

    NASA Astrophysics Data System (ADS)

    Wyborn, L. A.; Evans, B. J. K.; Pugh, T.; Lescinsky, D. T.; Foster, C.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) at the Australian National University (ANU) is a partnership between CSIRO, ANU, Bureau of Meteorology (BoM) and Geoscience Australia. Recent investments in a 1.2 PFlop Supercomputer (Raijin), ~ 20 PB data storage using Lustre filesystems and a 3000 core high performance cloud have created a hybrid platform for higher performance computing and data-intensive science to enable large scale earth and climate systems modelling and analysis. There are > 3000 users actively logging in and > 600 projects on the NCI system. Efficiently scaling and adapting data and software systems to petascale infrastructures requires the collaborative development of an architecture that is designed, programmed and operated to enable users to interactively invoke different forms of in-situ computation over complex and large scale data collections. NCI makes available major and long tail data collections from both the government and research sectors based on six themes: 1) weather, climate and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology and 6) astronomy, bio and social. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. Collections are the operational form for data management and access. Similar data types from individual custodians are managed cohesively. Use of international standards for discovery and interoperability allow complex interactions within and between the collections. This design facilitates a transdisciplinary approach to research and enables a shift from small scale, 'stove-piped' science efforts to large scale, collaborative systems science. This new and complex infrastructure requires a move to shared, globally trusted software frameworks that can be maintained and updated. Workflow engines become essential and need to integrate provenance, versioning, traceability, repeatability and publication. There are also human resource challenges as highly skilled HPC/HPD specialists, specialist programmers, and data scientists are required whose skills can support scaling to the new paradigm of effective and efficient data-intensive earth science analytics on petascale, and soon to be exascale systems.

  4. Transport Infrastructure Shapes Foraging Habitat in a Raptor Community

    PubMed Central

    Planillo, Aimara; Kramer-Schadt, Stephanie; Malo, Juan E.

    2015-01-01

    Transport infrastructure elements are widespread and increasing in size and length in many countries, with the subsequent alteration of landscapes and wildlife communities. Nonetheless, their effects on habitat selection by raptors are still poorly understood. In this paper, we analyzed raptors’ foraging habitat selection in response to conventional roads and high capacity motorways at the landscape scale, and compared their effects with those of other variables, such as habitat structure, food availability, and presence of potential interspecific competitors. We also analyzed whether the raptors’ response towards infrastructure depends on the spatial scale of observation, comparing the attraction or avoidance behavior of the species at the landscape scale with the response of individuals observed in the proximity of the infrastructure. Based on ecological hypotheses for foraging habitat selection, we built generalized linear mixed models, selected the best models according to Akaike Information Criterion and assessed variable importance by Akaike weights. At the community level, the traffic volume was the most relevant variable in the landscape for foraging habitat selection. Abundance, richness, and diversity values reached their maximum at medium traffic volumes and decreased at highest traffic volumes. Individual species showed different degrees of tolerance toward traffic, from higher abundance in areas with high traffic values to avoidance of it. Medium-sized opportunistic raptors increased their abundance near the traffic infrastructures, large scavenger raptors avoided areas with higher traffic values, and other species showed no direct response to traffic but to the presence of prey. Finally, our cross-scale analysis revealed that the effect of transport infrastructures on the behavior of some species might be detectable only at a broad scale. Also, food availability may attract raptor species to risky areas such as motorways. PMID:25786218

  5. Transport infrastructure shapes foraging habitat in a raptor community.

    PubMed

    Planillo, Aimara; Kramer-Schadt, Stephanie; Malo, Juan E

    2015-01-01

    Transport infrastructure elements are widespread and increasing in size and length in many countries, with the subsequent alteration of landscapes and wildlife communities. Nonetheless, their effects on habitat selection by raptors are still poorly understood. In this paper, we analyzed raptors' foraging habitat selection in response to conventional roads and high capacity motorways at the landscape scale, and compared their effects with those of other variables, such as habitat structure, food availability, and presence of potential interspecific competitors. We also analyzed whether the raptors' response towards infrastructure depends on the spatial scale of observation, comparing the attraction or avoidance behavior of the species at the landscape scale with the response of individuals observed in the proximity of the infrastructure. Based on ecological hypotheses for foraging habitat selection, we built generalized linear mixed models, selected the best models according to Akaike Information Criterion and assessed variable importance by Akaike weights. At the community level, the traffic volume was the most relevant variable in the landscape for foraging habitat selection. Abundance, richness, and diversity values reached their maximum at medium traffic volumes and decreased at highest traffic volumes. Individual species showed different degrees of tolerance toward traffic, from higher abundance in areas with high traffic values to avoidance of it. Medium-sized opportunistic raptors increased their abundance near the traffic infrastructures, large scavenger raptors avoided areas with higher traffic values, and other species showed no direct response to traffic but to the presence of prey. Finally, our cross-scale analysis revealed that the effect of transport infrastructures on the behavior of some species might be detectable only at a broad scale. Also, food availability may attract raptor species to risky areas such as motorways.
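    A minimal sketch of the model-selection arithmetic described above (AIC ranking and Akaike weights) is given below; the candidate model names and AIC values are hypothetical, and this is not the authors' analysis code, which fitted generalized linear mixed models.

      import math

      def akaike_weights(aic_by_model):
          """Return {model: Akaike weight}; weights sum to 1."""
          best = min(aic_by_model.values())
          # relative likelihood of each model: exp(-0.5 * delta_AIC)
          rel = {m: math.exp(-0.5 * (aic - best)) for m, aic in aic_by_model.items()}
          total = sum(rel.values())
          return {m: r / total for m, r in rel.items()}

      candidate_models = {              # hypothetical AIC values for illustration
          "traffic_volume": 1512.3,
          "habitat_structure": 1525.9,
          "prey_abundance": 1518.4,
          "competitors": 1530.1,
      }
      for model, w in sorted(akaike_weights(candidate_models).items(), key=lambda kv: -kv[1]):
          print(f"{model:20s} weight = {w:.3f}")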

  6. Hierarchical Engine for Large-scale Infrastructure Co-Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2017-04-24

    HELICS is designed to support very-large-scale (100,000+ federates) co-simulations with off-the-shelf power-system, communication, market, and end-use tools. Other key features include cross-platform operating system support, the integration of both event-driven (e.g., packetized communication) and time-series (e.g., power flow) simulations, and the ability to co-iterate among federates to ensure physical model convergence at each time step.
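    The co-iteration idea can be pictured with a toy Python loop in which two placeholder "federates" exchange a boundary value until it converges at each time step; this sketch does not use the HELICS API, and the federate response functions are invented for illustration only.

      def power_federate(voltage_guess):
          # hypothetical power-flow response to a boundary voltage
          return 0.95 * voltage_guess + 0.05

      def load_federate(power_value):
          # hypothetical end-use response feeding a voltage back
          return 1.0 - 0.1 * power_value

      def co_iterate(t, v0=1.0, tol=1e-6, max_iter=50):
          v = v0
          for i in range(max_iter):
              p = power_federate(v)
              v_new = load_federate(p)
              if abs(v_new - v) < tol:        # converged at this time step
                  return v_new, i + 1
              v = v_new
          raise RuntimeError(f"no convergence at t={t}")

      for t in range(3):                       # three time steps
          v, iters = co_iterate(t)
          print(f"t={t}: boundary value {v:.6f} after {iters} co-iterations")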

  7. High Fidelity Simulations of Large-Scale Wireless Networks (Plus-Up)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Onunkwo, Uzoma

    Sandia has built a strong reputation in scalable network simulation and emulation for cyber security studies to protect our nation's critical information infrastructures. Georgia Tech has a preeminent reputation in academia for excellence in scalable discrete event simulations, with a strong emphasis on simulating cyber networks. Many of the experts in this field, such as Dr. Richard Fujimoto, Dr. George Riley, and Dr. Chris Carothers, have strong affiliations with Georgia Tech. The collaborative relationship that we intend to pursue immediately is in high fidelity simulations of practical large-scale wireless networks using the ns-3 simulator via Dr. George Riley. This project will have mutual benefits in bolstering both institutions' expertise and reputation in the field of scalable simulation for cyber-security studies, and it promises to address high fidelity simulations of large-scale wireless networks. The proposed collaboration is directly in line with Georgia Tech's goals for developing and expanding the Communications Systems Center, the Georgia Tech Broadband Institute, and the Georgia Tech Information Security Center, along with its yearly Emerging Cyber Threats Report. At Sandia, this work benefits the defense systems and assessment area, with promise for large-scale assessment of the cyber security needs and vulnerabilities of our nation's critical cyber infrastructures exposed to wireless communications.

  8. Locally Appropriate Energy Strategies for the Developing World: A focus on Clean Energy Opportunities in Borneo

    NASA Astrophysics Data System (ADS)

    Shirley, Rebekah Grace

    This dissertation focuses on an integration of energy modeling tools to explore energy transition pathways for emerging economies. The spate of growth in the global South has led to a global energy transition, evidenced in part by a surge in the development of large-scale energy infrastructure projects for the provision of reliable electricity service. The rationale of energy security and exigency often ushers these large-scale projects through to implementation with minimal analysis of costs: social and environmental impact, ecological risk, or the opportunity costs of alternative energy transition pathways foregone. Furthermore, the development of energy infrastructure is inherently characterized by the involvement of a number of state and non-state actors with varying interests, objectives and access to authority. Being woven through and into social institutions necessarily impacts the design, control and functionality of infrastructure. In this dissertation I therefore conceptualize energy infrastructure as lying at the intersection, or nexus, of people, the environment and energy security. I argue that energy infrastructure plans and policy should, and can, be informed by each of these fields of influence in order to appropriately satisfy local development needs. This case study explores the socio-techno-environmental context of contemporary mega-dam development in northern Borneo. I describe the key actors of an ongoing mega-dam debate and the constellation of their interaction. This highlights the role that information may play in public discourse and lends insight into how inertia in the established system may stymie technological evolution. I then use a combination of power system simulation, ecological modeling and spatial analysis to analyze the potential for, and the costs and tradeoffs of, future energy scenarios. In this way I demonstrate reproducible methods that can support energy infrastructure decision making by directly addressing data limitation barriers. I offer a platform for integrated analysis that considers cost perspectives across the nexus. The management of energy transitions is a growing field, critically important to low carbon futures. With the broader implications of my study I hope to contribute to a paradigm shift away from the dominant discourse of large-scale energy infrastructure as a means of energy security, toward a more encompassing security agenda that considers distributed and localized solutions.

  9. Current and future flood risk to railway infrastructure in Europe

    NASA Astrophysics Data System (ADS)

    Bubeck, Philip; Kellermann, Patric; Alfieri, Lorenzo; Feyen, Luc; Dillenardt, Lisa; Thieken, Annegret H.

    2017-04-01

    Railway infrastructure plays an important role in the transportation of freight and passengers across the European Union. According to Eurostat, more than four billion passenger-kilometres were travelled on national and international railway lines of the EU28 in 2014. To further strengthen transport infrastructure in Europe, the European Commission will invest another € 24.05 billion in the transnational transport network until 2020 as part of its new transport infrastructure policy (TEN-T), including railway infrastructure. Floods pose a significant risk to infrastructure elements. Damage data from recent flood events in Europe show that infrastructure losses can make up a considerable share of overall losses. For example, damage to state and municipal infrastructure in the federal state of Saxony (Germany) accounted for nearly 60% of overall losses during the large-scale event in June 2013. Especially in mountainous areas with little usable space available, roads and railway lines often follow floodplains or are located along steep and unstable slopes. In Austria, for instance, the flood of 2013 caused € 75 million of direct damage to railway infrastructure. Despite the importance of railway infrastructure and its exposure to flooding, assessments of potential damage and risk (i.e. probability * damage) are still in their infancy compared with other sectors, such as the residential or industrial sector. Infrastructure-specific assessments at the regional scale are largely lacking. Regional assessment of potential damage to railway infrastructure has been hampered by a lack of infrastructure-specific damage models and data availability. The few available regional approaches have used damage models that assess damage to various infrastructure elements (e.g. roads, railways, airports and harbours) using one aggregated damage function and cost estimate. Moreover, infrastructure elements are often considerably underrepresented in regional land cover data, such as CORINE, due to their line shapes. To assess current and future damage and risk to railway infrastructure in Europe, we apply the damage model RAIL ('RAilway Infrastructure Loss'), which was specifically developed for railway infrastructure using empirical damage data. To adequately and comprehensively capture the line-shaped features of railway infrastructure, the assessment makes use of the open-access data set of openrailway.org. Current and future flood hazard in Europe is obtained with the LISFLOOD-based pan-European flood hazard mapping procedure, combined with ensemble projections of extreme streamflow for the current century based on EURO-CORDEX RCP 8.5 climate scenarios. The presentation shows first results of combining the hazard data and the RAIL model for Europe.

  10. WISDOM-II: screening against multiple targets implicated in malaria using computational grid infrastructures.

    PubMed

    Kasam, Vinod; Salzemann, Jean; Botha, Marli; Dacosta, Ana; Degliesposti, Gianluca; Isea, Raul; Kim, Doman; Maass, Astrid; Kenyon, Colin; Rastelli, Giulio; Hofmann-Apitius, Martin; Breton, Vincent

    2009-05-01

    Despite continuous efforts of the international community to reduce the impact of malaria on developing countries, no significant progress has been made in recent years and the discovery of new drugs is more than ever needed. Out of the many proteins involved in the metabolic activities of the Plasmodium parasite, some are promising targets for rational drug discovery. Recent years have witnessed the emergence of grids, which are highly distributed computing infrastructures particularly well fitted for embarrassingly parallel computations like docking. In 2005, a first attempt at using grids for large-scale virtual screening focused on plasmepsins and resulted in the identification of previously unknown scaffolds, which were confirmed in vitro to be active plasmepsin inhibitors. Following this success, a second deployment took place in the fall of 2006, focusing on one well known target, dihydrofolate reductase (DHFR), and on a new promising one, glutathione-S-transferase. In silico drug design, especially vHTS, is a widely accepted technology in lead identification and lead optimization. This approach therefore builds upon the progress made in computational chemistry to achieve more accurate in silico docking, and in information technology to design and operate large-scale grid infrastructures. On the computational side, a sustained infrastructure has been developed: docking at large scale, using different strategies for result analysis, storing the results on the fly into MySQL databases, and applying molecular dynamics refinement with MM-PBSA and MM-GBSA rescoring. The modeling results obtained are very promising and, based on them, in vitro experiments are underway for all the targets against which screening was performed. The current paper describes this rational drug discovery activity at large scale, especially molecular docking using the FlexX software on computational grids, in finding hits against three different targets (PfGST, PfDHFR, and PvDHFR in wild type and mutant forms) implicated in malaria. The grid-enabled virtual screening approach is proposed to produce focused compound libraries for other biological targets relevant to fighting the infectious diseases of the developing world.
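    The embarrassingly parallel screening pattern described above can be sketched as follows: each compound is scored independently and its result is stored as soon as it arrives. The dock() function is a stand-in for a real docking engine such as FlexX, the compound identifiers are invented, and SQLite stands in for the MySQL back-end mentioned in the abstract.

      import sqlite3
      from concurrent.futures import ProcessPoolExecutor, as_completed

      def dock(compound_id):
          # placeholder score; a real run would invoke a docking engine such as FlexX
          return compound_id, -6.0 - (hash(compound_id) % 50) / 10.0

      if __name__ == "__main__":
          compounds = [f"ZINC{i:08d}" for i in range(1000)]    # hypothetical library
          db = sqlite3.connect("screening.db")
          db.execute("CREATE TABLE IF NOT EXISTS hits (compound TEXT, score REAL)")
          with ProcessPoolExecutor() as pool:
              futures = [pool.submit(dock, c) for c in compounds]
              for fut in as_completed(futures):        # store each score on arrival
                  db.execute("INSERT INTO hits VALUES (?, ?)", fut.result())
          db.commit()
          top = db.execute("SELECT compound, score FROM hits ORDER BY score LIMIT 5")
          print(top.fetchall())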

  11. Associations between health and different types of environmental incivility: a Scotland-wide study.

    PubMed

    Ellaway, A; Morris, G; Curtice, J; Robertson, C; Allardice, G; Robertson, R

    2009-11-01

    Concern about the impact of the environment on health and well-being has tended to focus on the physical effects of exposure to toxic and infectious substances, and on the impact of large-scale infrastructures. Less attention has been paid to the possible psychosocial consequences of people's subjective perceptions of their everyday, street-level environment, such as the incidence of litter and graffiti. As little is known about the potential relative importance for health of perceptions of different types of environmental incivility, a module was developed for inclusion in the 2004 Scottish Social Attitudes survey in order to investigate this relationship. A random sample of 1637 adults living across a range of neighbourhoods throughout Scotland was interviewed. Respondents were asked to rate their local area on a range of possible environmental incivilities. These incivilities were subsequently grouped into three domains: (i) street-level incivilities (e.g. litter, graffiti); (ii) large-scale infrastructural incivilities (e.g. telephone masts); and (iii) the absence of environmental goods (e.g. safe play areas for children). For each of the three domains, the authors examined the degree to which they were thought to pose a problem locally, and how far these perceptions varied between those living in deprived areas and those living in less-deprived areas. Subsequently, the relationships between these perceptions and self-assessed health and health behaviours were explored, after controlling for gender, age and social class. Respondents with the highest levels of perceived street-level incivilities were almost twice as likely as those who perceived the lowest levels of street-level incivilities to report frequent feelings of anxiety and depression. Perceived absence of environmental goods was associated with increased anxiety (2.5 times more likely) and depression (90% more likely), and a 50% increased likelihood of being a smoker. Few associations with health were observed for perceptions of large-scale infrastructural incivilities. Environmental policy needs to give more priority to reducing the incidence of street-level incivilities and the absence of environmental goods, both of which appear to be more important for health than perceptions of large-scale infrastructural incivilities.

  12. Organization and scaling in water supply networks

    NASA Astrophysics Data System (ADS)

    Cheng, Likwan; Karney, Bryan W.

    2017-12-01

    Public water supply is one of society's most vital resources and most costly infrastructures. Traditional concepts of these networks capture their engineering identity as isolated, deterministic hydraulic units, but overlook their physics identity as related entities in a probabilistic, geographic ensemble, characterized by size organization and property scaling. Although discoveries of allometric scaling in natural supply networks (organisms and rivers) raised the prospect of similar findings in anthropogenic supplies, so far no such finding has been reported in public water or related civic resource supplies. Examining an empirical ensemble of large number and wide size range, we show that water supply networks possess self-organized size abundance and theory-explained allometric scaling in spatial, infrastructural, and resource- and emission-flow properties. These discoveries establish scaling physics for water supply networks and may lead to novel applications in resource- and jurisdiction-scale water governance.
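    An allometric scaling exponent of the kind reported here is typically estimated by a log-log least-squares fit of a network property against network size across the ensemble; the sketch below uses hypothetical population and pipe-length values, not the study's data.

      import numpy as np

      # estimate beta in  y ~ c * N**beta  from an ensemble of networks
      population = np.array([5e3, 2e4, 8e4, 3e5, 1e6, 4e6])        # network "size"
      pipe_length_km = np.array([40, 130, 420, 1300, 3800, 12500])  # scaled property

      beta, log_c = np.polyfit(np.log(population), np.log(pipe_length_km), 1)
      print(f"scaling exponent beta = {beta:.2f}, prefactor c = {np.exp(log_c):.3g}")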

  13. Data Intensive Scientific Workflows on a Federated Cloud: CRADA Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garzoglio, Gabriele

    The Fermilab Scientific Computing Division and the KISTI Global Science Experimental Data Hub Center have built a prototypical large-scale infrastructure to handle scientific workflows of stakeholders to run on multiple cloud resources. The demonstrations have been in the areas of (a) Data-Intensive Scientific Workflows on Federated Clouds, (b) Interoperability and Federation of Cloud Resources, and (c) Virtual Infrastructure Automation to enable On-Demand Services.

  14. Challenges in Managing Trustworthy Large-scale Digital Science

    NASA Astrophysics Data System (ADS)

    Evans, B. J. K.

    2017-12-01

    The increased use of large-scale international digital science has opened a number of challenges for managing, handling, using and preserving scientific information. The large volumes of information are driven by three main categories - model outputs, including coupled models and ensembles; data products that have been processed to a level of usability; and increasingly heuristically driven data analysis. These data products are increasingly the ones that are usable by the broad communities, far in excess of the raw instrument data outputs. The data, software and workflows are then shared and replicated to allow broad use at an international scale, which places further demands on the infrastructure supporting how the information is managed reliably across distributed resources. Users necessarily rely on these underlying "black boxes" so that they can be productive in producing new scientific outcomes. The software for these systems depends on computational infrastructure, interconnected software systems, and information capture systems. This ranges from the fundamental reliability of the compute hardware, through the system software stacks and libraries, to the model software. Due to these complexities and the capacity of the infrastructure, there is an increased emphasis on transparency of approach and robustness of methods over full reproducibility. Furthermore, with large-volume data management it is increasingly difficult to store the historical versions of all model and derived data. Instead, the emphasis is on the ability to access the updated products and the reliability with which previous outcomes remain relevant and can be updated with new information. We will discuss these challenges and some of the approaches underway that are being used to address these issues.

  15. Large-scale parallel genome assembler over cloud computing environment.

    PubMed

    Das, Arghya Kusum; Koppa, Praveen Kumar; Goswami, Sayan; Platania, Richard; Park, Seung-Jong

    2017-06-01

    The size of high throughput DNA sequencing data has already reached the terabyte scale. To manage this huge volume of data, many downstream sequencing applications have started using locality-based computing over different cloud infrastructures to take advantage of elastic (pay as you go) resources at a lower cost. However, the locality-based programming model (e.g. MapReduce) is relatively new. Consequently, developing scalable data-intensive bioinformatics applications using this model, and understanding the hardware environment that these applications require for good performance, both require further research. In this paper, we present a de Bruijn graph oriented Parallel Giraph-based Genome Assembler (GiGA), as well as the hardware platform required for its optimal performance. GiGA uses the power of Hadoop (MapReduce) and Giraph (large-scale graph analysis) to achieve high scalability over hundreds of compute nodes by collocating the computation and data. GiGA achieves significantly higher scalability with competitive assembly quality compared to contemporary parallel assemblers (e.g. ABySS and Contrail) over a traditional HPC cluster. Moreover, we show that the performance of GiGA is significantly improved by using an SSD-based private cloud infrastructure instead of a traditional HPC cluster. We observe that the performance of GiGA on 256 cores of this SSD-based cloud infrastructure closely matches that of 512 cores of the traditional HPC cluster.
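    A toy, single-machine construction of a de Bruijn graph illustrates the data structure that GiGA distributes over Hadoop/Giraph (nodes are (k-1)-mers, edges are k-mers); the reads below are invented and this is not the GiGA implementation.

      from collections import defaultdict

      def de_bruijn(reads, k):
          graph = defaultdict(list)            # (k-1)-mer -> list of successor (k-1)-mers
          for read in reads:
              for i in range(len(read) - k + 1):
                  kmer = read[i:i + k]
                  graph[kmer[:-1]].append(kmer[1:])
          return graph

      reads = ["ACGTACGTGACG", "GTACGTGACGTT"]   # hypothetical reads
      for node, succs in de_bruijn(reads, k=4).items():
          print(node, "->", ",".join(succs))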

  16. A hybrid computational strategy to address WGS variant analysis in >5000 samples.

    PubMed

    Huang, Zhuoyi; Rustagi, Navin; Veeraraghavan, Narayanan; Carroll, Andrew; Gibbs, Richard; Boerwinkle, Eric; Venkata, Manjunath Gorentla; Yu, Fuli

    2016-09-10

    The decreasing costs of sequencing are driving the need for cost effective and real time variant calling of whole genome sequencing data. The scale of these projects is far beyond the capacity of typical computing resources available to most research labs. Other infrastructures, like the cloud AWS environment and supercomputers, also have limitations that make large-scale joint variant calling infeasible, and infrastructure-specific variant calling strategies either fail to scale up to large datasets or abandon joint calling altogether. We present a high throughput framework including multiple variant callers for single nucleotide variant (SNV) calling, which leverages a hybrid computing infrastructure consisting of cloud AWS, supercomputers and local high performance computing infrastructures. We present a novel binning approach for large scale joint variant calling and imputation which can scale up to over 10,000 samples while producing SNV callsets with high sensitivity and specificity. As a proof of principle, we present results of analysis of the Cohorts for Heart And Aging Research in Genomic Epidemiology (CHARGE) WGS freeze 3 dataset, in which joint calling, imputation and phasing of over 5,300 whole genome samples was produced in under 6 weeks using four state-of-the-art callers: SNPTools, GATK-HaplotypeCaller, GATK-UnifiedGenotyper and GotCloud. We used Amazon AWS, a 4,000-core in-house cluster at Baylor College of Medicine, the IBM Power PC Blue BioU at Rice and Rhea at Oak Ridge National Laboratory (ORNL) for the computation. AWS was used for joint calling of 180 TB of BAM files, and the ORNL and Rice supercomputers were used for the imputation and phasing step. All other steps were carried out on the local compute cluster. The entire operation used 5.2 million core hours and transferred only a total of 6 TB of data across the platforms. Even with increasing sizes of whole genome datasets, ensemble joint calling of SNVs for low coverage data can be accomplished in a scalable, cost effective and fast manner by using heterogeneous computing platforms without compromising on the quality of variants.
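    The binning approach can be pictured as partitioning the genome into fixed-size regions so that joint calling across thousands of samples becomes many independent jobs, each dispatchable to whichever platform is available; the chromosome lengths, bin size and joint_call command below are illustrative placeholders, not the authors' pipeline.

      CHROM_LENGTHS = {"chr20": 64_444_167, "chr21": 46_709_983}   # illustrative only
      BIN_SIZE = 5_000_000                                         # hypothetical 5 Mb bins

      def genome_bins(chrom_lengths, bin_size):
          # yield (chrom, start, end) regions covering each chromosome
          for chrom, length in chrom_lengths.items():
              for start in range(0, length, bin_size):
                  yield chrom, start + 1, min(start + bin_size, length)

      jobs = [f"joint_call --region {c}:{s}-{e} --samples samples.list"
              for c, s, e in genome_bins(CHROM_LENGTHS, BIN_SIZE)]
      print(f"{len(jobs)} independent jobs, e.g. {jobs[0]}")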

  17. The European cooperative approach to securing critical information infrastructure.

    PubMed

    Purser, Steve

    2011-10-01

    This paper provides an overview of the EU approach to securing critical information infrastructure, as defined in the Action Plan contained in the Commission Communication of March 2009, entitled 'Protecting Europe from large-scale cyber-attacks and disruptions: enhancing preparedness, security and resilience', and further elaborated by the Communication of May 2011 on critical information infrastructure protection, 'Achievements and next steps: towards global cyber-security'. After explaining the need for pan-European cooperation in this area, the CIIP Action Plan is explained in detail. Finally, the current state of progress is summarised together with the proposed next steps.

  18. Urban Greening Bay Area

    EPA Pesticide Factsheets

    Information about the San Francisco Bay Water Quality Project (SFBWQP) Urban Greening Bay Area, a large-scale effort to re-envision urban landscapes to include green infrastructure (GI), making communities more livable and reducing stormwater runoff.

  19. Landscape-scale distribution and density of raptor populations wintering in anthropogenic-dominated desert landscapes

    Treesearch

    Adam E. Duerr; Tricia A. Miller; Kerri L. Cornell Duerr; Michael J. Lanzone; Amy Fesnock; Todd E. Katzner

    2015-01-01

    Anthropogenic development has great potential to affect fragile desert environments. Large-scale development of renewable energy infrastructure is planned for many desert ecosystems. Development plans should account for anthropogenic effects to distributions and abundance of rare or sensitive wildlife; however, baseline data on abundance and distribution of such...

  20. Taking the pulse of a continent: Expanding site-based research infrastructure for regional- to continental-scale ecology

    USDA-ARS?s Scientific Manuscript database

    Many of the most dramatic and surprising effects of global change on ecological systems will occur across large spatial extents, from regions to continents. Multiple ecosystem types will be impacted across a range of interacting spatial and temporal scales. The ability of ecologists to understand an...

  1. Development of Affordable, Low-Carbon Hydrogen Supplies at an Industrial Scale

    ERIC Educational Resources Information Center

    Roddy, Dermot J.

    2008-01-01

    An existing industrial hydrogen generation and distribution infrastructure is described, and a number of large-scale investment projects are outlined. All of these projects have the potential to generate significant volumes of low-cost, low-carbon hydrogen. The technologies concerned range from gasification of coal with carbon capture and storage…

  2. Flexible services for the support of research.

    PubMed

    Turilli, Matteo; Wallom, David; Williams, Chris; Gough, Steve; Curran, Neal; Tarrant, Richard; Bretherton, Dan; Powell, Andy; Johnson, Matt; Harmer, Terry; Wright, Peter; Gordon, John

    2013-01-28

    Cloud computing has been increasingly adopted by users and providers to promote flexible, scalable and tailored access to computing resources. Nonetheless, the consolidation of this paradigm has uncovered some of its limitations. Initially devised by corporations with direct control over large amounts of computational resources, cloud computing is now being endorsed by organizations with limited resources or with a more articulated, less direct control over these resources. The challenge for these organizations is to leverage the benefits of cloud computing while dealing with limited and often widely distributed computing resources. This study focuses on the adoption of cloud computing by higher education institutions and addresses two main issues: flexible and on-demand access to a large amount of storage resources, and scalability across a heterogeneous set of cloud infrastructures. The proposed solutions leverage a federated approach to cloud resources in which users access multiple and largely independent cloud infrastructures through a highly customizable broker layer. This approach allows for a uniform authentication and authorization infrastructure, fine-grained policy specification and the aggregation of accounting and monitoring. Within a loosely coupled federation of cloud infrastructures, users can access vast amounts of data without copying them across cloud infrastructures and can scale their resource provisions when the local cloud resources become insufficient.

  3. Who Should Join the Environmental Response Laboratory Network

    EPA Pesticide Factsheets

    Laboratories that analyze biological samples, chemical warfare agents, radiological, or toxic industrial chemical samples can join the ERLN. Members make up a critical infrastructure that delivers data necessary for responses to large scale emergencies.

  4. NCI's High Performance Computing (HPC) and High Performance Data (HPD) Computing Platform for Environmental and Earth System Data Science

    NASA Astrophysics Data System (ADS)

    Evans, Ben; Allen, Chris; Antony, Joseph; Bastrakova, Irina; Gohar, Kashif; Porter, David; Pugh, Tim; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2015-04-01

    The National Computational Infrastructure (NCI) has established a powerful and flexible in-situ petascale computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental and earth science data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods that support data analysis, and 3) the progress so far in harmonising the underlying data collections for future interdisciplinary research across these large volume data collections. NCI has established 10+ PBytes of major national and international data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the major Australian national-scale scientific collections), leading research communities, and collaborating overseas organisations. New infrastructures created at NCI mean the data collections are now accessible within an integrated High Performance Computing and Data (HPC-HPD) environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large-scale high-bandwidth Lustre filesystems. The hardware was designed at inception to ensure that it would allow the layered software environment to flexibly accommodate the advancement of future data science. New approaches to software technology and data models have also had to be developed to enable access to these large and exponentially increasing data volumes at NCI. Traditional HPC and data environments are still made available in a way that flexibly provides the tools, services and supporting software systems on these new petascale infrastructures. But to enable research to take place at this scale, the data, metadata and software now need to evolve together - creating a new integrated high performance infrastructure. The new infrastructure at NCI currently supports a catalogue of integrated, reusable software and workflows from earth system and ecosystem modelling, weather research, satellite and other observed data processing and analysis. One of the challenges for NCI has been to support existing techniques and methods, while carefully preparing the underlying infrastructure for the transition needed for the next class of data-intensive science. In doing so, a flexible range of techniques and software can be made available for application across the corpus of data collections, and a new infrastructure can be provided for future interdisciplinary research.

  5. Network information attacks on the control systems of power facilities belonging to the critical infrastructure

    NASA Astrophysics Data System (ADS)

    Loginov, E. L.; Raikov, A. N.

    2015-04-01

    The largest-scale accidents that occurred as a consequence of network information attacks on the control systems of power facilities belonging to the United States' critical infrastructure are analyzed in the context of the possibilities available in modern decision support systems. Trends in the development of technologies for inflicting damage on smart grids are formulated. A volume matrix of parameters characterizing attacks on facilities is constructed. A model describing the performance of a critical infrastructure's control system after an attack is developed. The recently adopted measures and legislative acts aimed at achieving more efficient protection of critical infrastructure are considered. Approaches to cognitive modeling and networked expertise of intricate situations for supporting the decision-making process, and to setting up a system of indicators for anticipatory monitoring of critical infrastructure, are proposed.

  6. A Computational framework for telemedicine.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foster, I.; von Laszewski, G.; Thiruvathukal, G. K.

    1998-07-01

    Emerging telemedicine applications require the ability to exploit diverse and geographically distributed resources. High-speed networks are used to integrate advanced visualization devices, sophisticated instruments, large databases, archival storage devices, PCs, workstations, and supercomputers. This form of telemedical environment is similar to networked virtual supercomputers, also known as metacomputers. Metacomputers are already being used in many scientific application areas. In this article, we analyze the requirements necessary for a telemedical computing infrastructure and compare them with the requirements found in a typical metacomputing environment. We show that metacomputing environments can be used to enable a more powerful and unified computational infrastructure for telemedicine. The Globus metacomputing toolkit can provide the necessary low-level mechanisms to enable a large-scale telemedical infrastructure. The Globus toolkit components are designed in a modular fashion and can be extended to support the specific requirements of telemedicine.

  7. Evaluation of Hydrogel Technologies for the Decontamination ...

    EPA Pesticide Factsheets

    This current research effort was developed to evaluate intermediate level (between bench-scale and large-scale or wide-area implementation) decontamination procedures, materials, technologies, and techniques used to remove radioactive material from different surfaces. In the event of such an incident, application of this technology would primarily be intended for decontamination of high-value buildings, important infrastructure, and landmarks.

  8. Scientific Services on the Cloud

    NASA Astrophysics Data System (ADS)

    Chapman, David; Joshi, Karuna P.; Yesha, Yelena; Halem, Milt; Yesha, Yaacov; Nguyen, Phuong

    Scientific computing was one of the first ever applications for parallel and distributed computation. To this day, scientific applications remain some of the most compute intensive, and have inspired the creation of petaflop compute infrastructure such as the Oak Ridge Jaguar and Los Alamos RoadRunner. Large dedicated hardware infrastructure has become both a blessing and a curse to the scientific community. Scientists are interested in cloud computing for much the same reasons as businesses and other professionals. The hardware is provided, maintained, and administrated by a third party. Software abstraction and virtualization provide reliability and fault tolerance. Graduated fees allow for multi-scale prototyping and execution. Cloud computing resources are only a few clicks away, and are by far the easiest high performance distributed platform to gain access to. There may still be dedicated infrastructure for ultra-scale science, but the cloud can easily play a major part in the scientific computing initiative.

  9. Skate Genome Project: Cyber-Enabled Bioinformatics Collaboration

    PubMed Central

    Vincent, J.

    2011-01-01

    The Skate Genome Project, a pilot project of the North East Cyberinfrastructure Consortium (NECC), aims to produce a draft genome sequence of Leucoraja erinacea, the Little Skate. The pilot project was also designed to develop expertise in large-scale collaborations across the NECC region. An overview of the bioinformatics and infrastructure challenges faced during the first year of the project will be presented. Results to date and lessons learned from the perspective of a bioinformatics core will be highlighted.

  10. Editorial [Special issue on software defined networks and infrastructures, network function virtualisation, autonomous systems and network management

    DOE PAGES

    Biswas, Amitava; Liu, Chen; Monga, Inder; ...

    2016-01-01

    For the last few years, there has been tremendous growth in data traffic due to the high adoption rate of mobile devices and cloud computing. The Internet of Things (IoT) will stimulate even further growth. This is increasing the scale and complexity of telecom/internet service provider (SP) and enterprise data centre (DC) compute and network infrastructures. As a result, managing these large network-compute converged infrastructures is becoming complex and cumbersome. To cope, network and DC operators are trying to automate network and system operations, administration and management (OAM) functions. OAM includes all non-functional mechanisms which keep the network running.

  11. Limited accessibility to designs and results of Japanese large-scale clinical trials for cardiovascular diseases.

    PubMed

    Sawata, Hiroshi; Ueshima, Kenji; Tsutani, Kiichiro

    2011-04-14

    Clinical evidence is important for improving the treatment of patients by health care providers. In the study of cardiovascular diseases, large-scale clinical trials involving thousands of participants are required to evaluate the risks of cardiac events and/or death. The problems encountered in conducting the Japanese Acute Myocardial Infarction Prospective (JAMP) study highlighted the difficulties involved in obtaining the financial and infrastructural resources necessary for conducting large-scale clinical trials. The objectives of the current study were: 1) to clarify the current funding and infrastructural environment surrounding large-scale clinical trials in cardiovascular and metabolic diseases in Japan, and 2) to find ways to improve the environment surrounding clinical trials in Japan more generally. We examined clinical trials in cardiovascular diseases that evaluated true endpoints and involved 300 or more participants using PubMed, Ichushi (by the Japan Medical Abstracts Society, a non-profit organization), websites of related medical societies, the University Hospital Medical Information Network (UMIN) Clinical Trials Registry, and clinicaltrials.gov at three points in time: 30 November, 2004, 25 February, 2007 and 25 July, 2009. We found a total of 152 trials that met our criteria for 'large-scale clinical trials' examining cardiovascular diseases in Japan. Of these, 72.4% were randomized controlled trials (RCTs). Of the 152 trials, 9.2% examined more than 10,000 participants, and 42.8% examined between 1,000 and 10,000 participants. The number of large-scale clinical trials markedly increased from 2001 to 2004, but suddenly decreased in 2007, then began to increase again. Ischemic heart disease (39.5%) was the most common target disease. Most of the larger-scale trials were funded by private organizations such as pharmaceutical companies. The designs and results of 13 trials were not disclosed. To improve the quality of clinical trials, all sponsors should register trials and disclose the funding sources before the enrolment of participants, and publish their results after the completion of each study.

  12. LEMON - LHC Era Monitoring for Large-Scale Infrastructures

    NASA Astrophysics Data System (ADS)

    Marian, Babik; Ivan, Fedorko; Nicholas, Hook; Hector, Lansdale Thomas; Daniel, Lenkes; Miroslav, Siket; Denis, Waldron

    2011-12-01

    At present, computer centres are facing a massive rise in virtualization and cloud computing, as these solutions bring advantages to service providers and consolidate computer centre resources. As a result, however, monitoring complexity is increasing. Computer centre management requires not only monitoring of servers, network equipment and associated software, but also the collection of additional environment and facilities data (e.g. temperature, power consumption, cooling efficiency, etc.) to maintain a good overview of infrastructure performance. The LHC Era Monitoring (Lemon) system addresses these requirements for a very large scale infrastructure. The Lemon agent, which collects data on every client and forwards the samples to the central measurement repository, provides a flexible interface that allows rapid development of new sensors. The system also allows reporting on behalf of remote devices such as switches and power supplies. Online and historical data can be visualized via a web-based interface or retrieved via command-line tools. The Lemon Alarm System component can be used for notifying the operator about error situations. In this article, an overview of Lemon monitoring is provided together with a description of the CERN LEMON production instance. No direct comparison is made with other monitoring tools.
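    The agent-and-sensor pattern described above can be sketched in a few lines: a sensor returns timestamped samples and the agent forwards them to a central repository. This is not the Lemon sensor API; the metric, repository URL and payload format are placeholders, so the forwarding step only succeeds against a real endpoint.

      import json
      import os
      import time
      import urllib.request

      REPOSITORY_URL = "http://lemon-repository.example.org/samples"   # placeholder

      def loadavg_sensor():
          """One metric sample: 1-minute load average (Unix only)."""
          return {"metric": "loadavg1", "value": os.getloadavg()[0],
                  "timestamp": time.time()}

      def forward(sample):
          """Push one timestamped sample to the central repository."""
          req = urllib.request.Request(
              REPOSITORY_URL, data=json.dumps(sample).encode(),
              headers={"Content-Type": "application/json"})
          urllib.request.urlopen(req, timeout=5)

      if __name__ == "__main__":
          sample = loadavg_sensor()
          print("collected:", sample)
          try:
              forward(sample)              # fails unless a real endpoint exists
          except OSError as exc:
              print("forwarding skipped:", exc)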

  13. Vibration energy harvesting based monitoring of an operational bridge undergoing forced vibration and train passage

    NASA Astrophysics Data System (ADS)

    Cahill, Paul; Hazra, Budhaditya; Karoumi, Raid; Mathewson, Alan; Pakrashi, Vikram

    2018-06-01

    The application of energy harvesting technology for monitoring civil infrastructure is a burgeoning topic of interest. The ability of kinetic energy harvesters to scavenge ambient vibration energy can be useful for large civil infrastructure under operational conditions, particularly for bridge structures. The experimental integration of such harvesters with full scale structures and the subsequent use of the harvested energy directly for the purposes of structural health monitoring shows promise. This paper presents the first experimental deployment of piezoelectric vibration energy harvesting devices for monitoring a full-scale bridge undergoing forced dynamic vibrations under operational conditions, using energy harvesting signatures over time. The calibration of the harvesters is presented, along with details of the host bridge structure and the dynamic assessment procedures. The measured responses of the harvesters from the tests are presented, and the use of the harvesters for the purposes of structural health monitoring (SHM) is investigated using empirical mode decomposition analysis, following a bespoke data cleaning approach. Finally, the use of sequential Karhunen-Loeve transforms to detect train passages during the dynamic assessment is presented. This study is expected to further develop interest in energy-harvesting based monitoring of large infrastructure for both research and commercial purposes.

  14. EVALUATING MACROINVERTEBRATE COMMUNITY ...

    EPA Pesticide Factsheets

    Since 2010, new construction in California is required to include stormwater detention and infiltration that is designed to capture rainfall from the 85th percentile of storm events in the region, preferably through green infrastructure. This study used recent macroinvertebrate community monitoring data to determine the ecological threshold for percent impervious cover prior to large scale adoption of green infrastructure using Threshold Indicator Taxa Analysis (TITAN). TITAN uses an environmental gradient and biological community data to determine individual taxa change points with respect to changes in taxa abundance and frequency across that gradient. Individual taxa change points are then aggregated to calculate the ecological threshold. This study used impervious cover data from National Land Cover Datasets and macroinvertebrate community data from California Environmental Data Exchange Network and Southern California Coastal Water Research Project. Preliminary TITAN runs for California’s Chaparral region indicated that both increasing and decreasing taxa had ecological thresholds of <1% watershed impervious cover. Next, TITAN will be used to determine shifts in the ecological threshold after the implementation of green infrastructure on a large scale. This presentation for the Society for Freshwater Scientists will discuss initial evaluation of community and taxa-specific thresholds of impairment for macroinvertebrates in California streams along

  15. Cost estimate for a proposed GDF Suez LNG testing program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blanchat, Thomas K.; Brady, Patrick Dennis; Jernigan, Dann A.

    2014-02-01

    At the request of GDF Suez, a Rough Order of Magnitude (ROM) cost estimate was prepared for the design, construction, testing, and data analysis for an experimental series of large-scale Liquefied Natural Gas (LNG) spills on land and water that would result in the largest pool fires and vapor dispersion events ever conducted. Due to the expected cost of this large, multi-year program, the authors utilized Sandia's structured cost estimating methodology. This methodology ensures that the efforts identified can be performed for the cost proposed at a plus or minus 30 percent confidence. The scale of the LNG spill, fire, and vapor dispersion tests proposed by GDF could produce hazard distances and testing safety issues that need to be fully explored. Based on our evaluations, Sandia can utilize much of our existing fire testing infrastructure for the large fire tests and some small dispersion tests (with some modifications) in Albuquerque, but we propose to develop a new dispersion testing site at our remote test area in Nevada because of the large hazard distances. While this might impact some testing logistics, the safety aspects warrant this approach. In addition, we have included a proposal to study cryogenic liquid spills on water and subsequent vaporization in the presence of waves. Sandia is working with DOE on applications that provide infrastructure pertinent to wave production. We present an approach to conduct repeatable wave/spill interaction testing that could utilize such infrastructure.

  16. SCALING AN URBAN EMERGENCY EVACUATION FRAMEWORK: CHALLENGES AND PRACTICES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karthik, Rajasekar; Lu, Wei

    2014-01-01

    Critical infrastructure disruption, caused by severe weather events, natural disasters, terrorist attacks, etc., has significant impacts on urban transportation systems. We built a computational framework to simulate urban transportation systems under critical infrastructure disruption in order to aid real-time emergency evacuation. This framework uses large-scale datasets to provide a scalable tool for emergency planning and management. Our framework, World-Wide Emergency Evacuation (WWEE), integrates population distribution and urban infrastructure networks to model travel demand in emergency situations at the global level. A computational model of agent-based traffic simulation is also used to provide an optimal evacuation plan for traffic operation purposes [1]. In addition, our framework provides a web-based high resolution visualization tool for emergency evacuation modelers and practitioners. We have successfully tested our framework with scenarios in both the United States (Alexandria, VA) and Europe (Berlin, Germany) [2]. However, there are still some major drawbacks to scaling this framework to handle big data workloads in real time. On the back-end, lack of proper infrastructure limits our ability to process large amounts of data, run the simulation efficiently and quickly, and provide fast retrieval and serving of data. On the front-end, the visualization performance for microscopic evacuation results is still not efficient enough due to high volume data communication between server and client. We are addressing these drawbacks by using cloud computing and next-generation web technologies, namely Node.js, NoSQL, WebGL, OpenLayers 3 and HTML5. We will briefly describe each one and how we are using and leveraging these technologies to provide an efficient tool for emergency management organizations. Our early experimentation demonstrates that using the above technologies is a promising approach to building a scalable and high performance urban emergency evacuation framework that can improve traffic mobility and safety under critical infrastructure disruption in today's socially connected world.

  17. Critical Infrastructure Vulnerability to Spatially Localized Failures with Applications to Chinese Railway System.

    PubMed

    Ouyang, Min; Tian, Hui; Wang, Zhenghua; Hong, Liu; Mao, Zijun

    2017-01-17

    This article studies a general type of initiating events in critical infrastructures, called spatially localized failures (SLFs), which are defined as the failure of a set of infrastructure components distributed in a spatially localized area due to damage sustained, while other components outside the area do not directly fail. These failures can be regarded as a special type of intentional attack, such as a bomb or explosive assault, or as a generalized modeling of the impact of localized natural hazards on large-scale systems. This article introduces three SLFs models: node-centered SLFs, district-based SLFs, and circle-shaped SLFs, and proposes an SLFs-induced vulnerability analysis method covering three aspects: identification of critical locations; comparison of infrastructure vulnerability to random failures, topologically localized failures and SLFs; and quantification of infrastructure information value. The proposed SLFs-induced vulnerability analysis method is finally applied to the Chinese railway system and can also be easily adapted to analyze other critical infrastructures for valuable protection suggestions. © 2017 Society for Risk Analysis.

  18. Geospatial Data as a Service: Towards planetary scale real-time analytics

    NASA Astrophysics Data System (ADS)

    Evans, B. J. K.; Larraondo, P. R.; Antony, J.; Richards, C. J.

    2017-12-01

    The rapid growth of earth systems, environmental and geophysical datasets poses a challenge to both end-users and infrastructure providers. For infrastructure and data providers, tasks like managing, indexing and storing large collections of geospatial data need to take into consideration the various use cases by which consumers will want to access and use the data. Considerable investment has been made by the Earth Science community to produce suitable real-time analytics platforms for geospatial data. There are currently different interfaces that have been defined to provide data services. Unfortunately, there are considerable differences among the standards, protocols and data models that have been designed to target specific communities or working groups. The Australian National University's National Computational Infrastructure (NCI) is used for a wide range of activities in the geospatial community. Earth observations, climate and weather forecasting are examples of these communities which generate large amounts of geospatial data. NCI has been carrying out a significant effort to develop a data and services model that enables the cross-disciplinary use of data. Recent developments in cloud and distributed computing provide a publicly accessible platform on which new infrastructures can be built. One of the key capabilities these technologies offer is the possibility of having "limitless" compute power next to where the data is stored. This model is rapidly transforming data delivery from centralised monolithic services towards ubiquitous distributed services that scale up and down, adapting to fluctuations in demand. NCI has developed GSKY, a scalable, distributed server which presents a new approach to geospatial data discovery and delivery based on OGC standards. We will present the architecture and motivating use-cases that drove GSKY's collaborative design, development and production deployment. We show that our approach offers the community valuable exploratory analysis capabilities for dealing with petabyte-scale geospatial data collections.

  19. Geographic Hotspots of Critical National Infrastructure.

    PubMed

    Thacker, Scott; Barr, Stuart; Pant, Raghav; Hall, Jim W; Alderson, David

    2017-12-01

    Failure of critical national infrastructures can result in major disruptions to society and the economy. Understanding the criticality of individual assets and the geographic areas in which they are located is essential for targeting investments to reduce risks and enhance system resilience. Within this study we provide new insights into the criticality of real-life critical infrastructure networks by integrating high-resolution data on infrastructure location, connectivity, interdependence, and usage. We propose a metric of infrastructure criticality in terms of the number of users who may be directly or indirectly disrupted by the failure of physically interdependent infrastructures. Kernel density estimation is used to integrate spatially discrete criticality values associated with individual infrastructure assets, producing a continuous surface from which statistically significant infrastructure criticality hotspots are identified. We develop a comprehensive and unique national-scale demonstration for England and Wales that utilizes previously unavailable data from the energy, transport, water, waste, and digital communications sectors. The testing of 200,000 failure scenarios identifies that hotspots are typically located around the periphery of urban areas where there are large facilities upon which many users depend or where several critical infrastructures are concentrated in one location. © 2017 Society for Risk Analysis.
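    A weighted kernel density estimate of the kind used here can be sketched with SciPy: point criticality values are smoothed into a continuous surface and the highest-density cells are flagged as candidate hotspots. The asset coordinates and criticality values below are randomly generated placeholders, not the study's data, and SciPy 1.2+ is assumed for the weights argument of gaussian_kde.

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(0)
      xy = rng.uniform(0, 100, size=(2, 300))                  # asset locations (km)
      criticality = rng.lognormal(mean=8, sigma=1, size=300)   # e.g. users disrupted

      kde = gaussian_kde(xy, weights=criticality)              # weighted KDE

      gx, gy = np.mgrid[0:100:200j, 0:100:200j]                # evaluation grid
      surface = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)

      threshold = np.quantile(surface, 0.99)                   # top 1% of cells
      hotspot_cells = np.argwhere(surface >= threshold)
      print(f"{len(hotspot_cells)} grid cells flagged as criticality hotspots")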

  20. Environmental impact assessment and environmental audit in large-scale public infrastructure construction: the case of the Qinghai-Tibet Railway.

    PubMed

    He, Guizhen; Zhang, Lei; Lu, Yonglong

    2009-09-01

    Large-scale public infrastructure projects have featured in China's modernization course since the early 1980s. During the early stages of China's rapid economic development, public attention focused on the economic and social impact of high-profile construction projects. In recent years, however, we have seen a shift in public concern toward the environmental and ecological effects of such projects, and today governments are required to provide valid environmental impact assessments prior to allowing large-scale construction. The official requirement for the monitoring of environmental conditions has led to an increased number of debates in recent years regarding the effectiveness of Environmental Impact Assessments (EIAs) and Governmental Environmental Audits (GEAs) as environmental safeguards in instances of large-scale construction. Although EIA and GEA are conducted by different institutions and have different goals and enforcement potential, these two practices can be closely related in terms of methodology. This article cites the construction of the Qinghai-Tibet Railway as an instance in which EIA and GEA offer complementary approaches to environmental impact management. This study concludes that the GEA approach can serve as an effective follow-up to the EIA and establishes that the EIA lays a base for conducting future GEAs. The relationship that emerges through a study of the Railway's construction calls for more deliberate institutional arrangements and cooperation if the two practices are to be used in concert to optimal effect.

  1. Stream Responses to a Watershed-Scale Stormwater Retrofit

    EPA Science Inventory

    Green infrastructure can reduce stormwater runoff and mitigate many of the problems associated with impervious surfaces; however, the effectiveness of retrofit stormwater management for improving aquatic health is largely untested. In the suburban, 1.8 km2 Shepherd Creek catchmen...

  2. Potential impacts of tephra fallout from a large-scale explosive eruption at Sakurajima volcano, Japan

    NASA Astrophysics Data System (ADS)

    Biass, S.; Todde, A.; Cioni, R.; Pistolesi, M.; Geshi, N.; Bonadonna, C.

    2017-10-01

    We present an exposure analysis of infrastructure and lifelines to tephra fallout for a future large-scale explosive eruption of Sakurajima volcano. An eruption scenario is identified based on the field characterization of the last subplinian eruption at Sakurajima and a review of reports of the eruptions that occurred in the past six centuries. A scenario-based probabilistic hazard assessment is performed using the Tephra2 model, considering various eruption durations to reflect the complex eruptive sequences of all considered reference eruptions. A quantitative exposure analysis of infrastructures and lifelines is presented, primarily using open-access data. The post-event impact assessment of Magill et al. (Earth Planets Space 65:677-698, 2013) after the 2011 VEI 2 eruption of Shinmoedake is used to discuss the vulnerability and resilience of infrastructures during a future large eruption of Sakurajima. Results indicate a mainly eastward dispersal, with longer eruption durations increasing the probability of tephra accumulation in proximal areas and reducing it in distal areas. The exposure analysis reveals that 2300 km of road network, 18 km2 of urban area, and 306 km2 of agricultural land have a 50% probability of being affected by a tephra accumulation of 1 kg/m2. A simple qualitative exposure analysis suggests that the municipalities of Kagoshima, Kanoya, and Tarumizu are the most likely to suffer impacts. Finally, the 2011 VEI 2 eruption of Shinmoedake demonstrated that the already implemented mitigation strategies have increased resilience and improved recovery of affected infrastructures. Nevertheless, how well these mitigation actions will perform during the VEI 4 eruption considered here is unclear, and our hazard assessment points to possible damage on the Sakurajima peninsula and in the neighboring municipality of Tarumizu.
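    The exceedance probabilities behind statements such as "a 50% probability of 1 kg/m2" are obtained, in a scenario-based probabilistic assessment, as the per-cell fraction of simulated runs exceeding the load threshold; the sketch below uses random placeholder loads rather than Tephra2 output.

      import numpy as np

      rng = np.random.default_rng(1)
      n_runs, ny, nx = 500, 80, 120
      loads = rng.lognormal(mean=-1.0, sigma=1.5, size=(n_runs, ny, nx))  # kg/m2, placeholder

      threshold = 1.0                                     # kg/m2
      p_exceed = (loads >= threshold).mean(axis=0)        # per-cell exceedance probability

      print(f"cells with P(load >= 1 kg/m2) > 0.5: {(p_exceed > 0.5).sum()}")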

  3. Distributed data networks: a blueprint for Big Data sharing and healthcare analytics.

    PubMed

    Popovic, Jennifer R

    2017-01-01

    This paper defines the attributes of distributed data networks and outlines the data and analytic infrastructure needed to build and maintain a successful network. We use examples from one successful implementation of a large-scale, multisite, healthcare-related distributed data network, the U.S. Food and Drug Administration-sponsored Sentinel Initiative. Analytic infrastructure-development concepts are discussed from the perspective of promoting six pillars of analytic infrastructure: consistency, reusability, flexibility, scalability, transparency, and reproducibility. This paper also introduces one use case for machine learning algorithm development to fully utilize and advance the portfolio of population health analytics, particularly those using multisite administrative data sources. © 2016 New York Academy of Sciences.

  4. Generic patterns in the evolution of urban water networks: Evidence from a large Asian city

    NASA Astrophysics Data System (ADS)

    Krueger, Elisabeth; Klinkhamer, Christopher; Urich, Christian; Zhan, Xianyuan; Rao, P. Suresh C.

    2017-03-01

    We examine high-resolution urban infrastructure data using every pipe for the water distribution network (WDN) and sanitary sewer network (SSN) in a large Asian city (≈4 million residents) to explore the structure as well as the spatial and temporal evolution of these infrastructure networks. Network data were spatially disaggregated into multiple subnets to examine intracity topological differences for functional zones of the WDN and SSN, and time-stamped SSN data were examined to understand network evolution over several decades as the city expanded. Graphs were generated using a dual-mapping technique (Hierarchical Intersection Continuity Negotiation), which emphasizes the functional attributes of these networks. Network graphs for WDNs and SSNs are characterized by several network topological metrics, and a double Pareto (power-law) model approximates the node-degree distributions of both water infrastructure networks (WDN and SSN), across spatial and hierarchical scales relevant to urban settings, and throughout their temporal evolution over several decades. These results indicate that generic mechanisms govern the networks' evolution, similar to those of scale-free networks found in nature. Deviations from the general topological patterns are indicative of (1) incomplete establishment of network hierarchies and functional network evolution, (2) capacity for growth (expansion) or densification (e.g., in-fill), and (3) likely network vulnerabilities. We discuss the implications of our findings for the (re-)design of urban infrastructure networks to enhance their resilience to external and internal threats.
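
    The double Pareto (power-law) behaviour reported for the node-degree distributions can be illustrated with a simple maximum-likelihood (Hill-type) estimate of a single power-law exponent. The sketch below runs on synthetic degrees; it illustrates the fitting idea only and is not the authors' dual-mapping analysis.

```python
import numpy as np

def pareto_alpha_mle(degrees, k_min=1):
    """Maximum-likelihood (Hill) estimate of the exponent alpha in
    P(k) ~ k^(-alpha), using only degrees >= k_min."""
    k = np.asarray([d for d in degrees if d >= k_min], dtype=float)
    return 1.0 + len(k) / np.sum(np.log(k / k_min))

# Synthetic node degrees standing in for a dual-mapped pipe network.
rng = np.random.default_rng(0)
degrees = np.round(rng.pareto(a=1.5, size=5000) + 1).astype(int)

print(f"estimated alpha = {pareto_alpha_mle(degrees, k_min=2):.2f}")
```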

  5. CERN data services for LHC computing

    NASA Astrophysics Data System (ADS)

    Espinal, X.; Bocchi, E.; Chan, B.; Fiorot, A.; Iven, J.; Lo Presti, G.; Lopez, J.; Gonzalez, H.; Lamanna, M.; Mascetti, L.; Moscicki, J.; Pace, A.; Peters, A.; Ponce, S.; Rousseau, H.; van der Ster, D.

    2017-10-01

    Dependability, resilience, adaptability and efficiency. Growing requirements require tailoring storage services and novel solutions. Unprecedented volumes of data coming from the broad number of experiments at CERN need to be quickly available in a highly scalable way for large-scale processing and data distribution while in parallel they are routed to tape for long-term archival. These activities are critical for the success of HEP experiments. Nowadays we operate at high incoming throughput (14GB/s during 2015 LHC Pb-Pb run and 11PB in July 2016) and with concurrent complex production work-loads. In parallel our systems provide the platform for the continuous user and experiment driven work-loads for large-scale data analysis, including end-user access and sharing. The storage services at CERN cover the needs of our community: EOS and CASTOR as a large-scale storage; CERNBox for end-user access and sharing; Ceph as data back-end for the CERN OpenStack infrastructure, NFS services and S3 functionality; AFS for legacy distributed-file-system services. In this paper we will summarise the experience in supporting LHC experiments and the transition of our infrastructure from static monolithic systems to flexible components providing a more coherent environment with pluggable protocols, tuneable QoS, sharing capabilities and fine grained ACLs management while continuing to guarantee dependable and robust services.

  6. Organizing phenological data resources to inform natural resource conservation

    USGS Publications Warehouse

    Rosemartin, Alyssa H.; Crimmins, Theresa M.; Enquist, Carolyn A.F.; Gerst, Katharine L.; Kellermann, Jherime L.; Posthumus, Erin E.; Denny, Ellen G.; Guertin, Patricia; Marsh, Lee; Weltzin, Jake F.

    2014-01-01

    Changes in the timing of plant and animal life cycle events, in response to climate change, are already happening across the globe. The impacts of these changes may affect biodiversity via disruption to mutualisms, trophic mismatches, invasions and population declines. To understand the nature, causes and consequences of changed, varied or static phenologies, new data resources and tools are being developed across the globe. The USA National Phenology Network is developing a long-term, multi-taxa phenological database, together with a customizable infrastructure, to support conservation and management needs. We present current and potential applications of the infrastructure, across scales and user groups. The approaches described here are congruent with recent trends towards multi-agency, large-scale research and action.

  7. Small scale green infrastructure design to meet different urban hydrological criteria.

    PubMed

    Jia, Z; Tang, S; Luo, W; Li, S; Zhou, M

    2016-04-15

    As small scale green infrastructures, rain gardens have been widely advocated for urban stormwater management in the contemporary low impact development (LID) era. This paper presents a simple method that consists of hydrological models and the matching plots of nomographs to provide an informative and practical tool for rain garden sizing and hydrological evaluation. The proposed method considers design storms, infiltration rates and the runoff contribution area ratio of the rain garden, allowing users to size a rain garden for a specific site with hydrological reference and predict overflow of the rain garden under different storms. The nomographs provide a visual presentation on the sensitivity of different design parameters. Subsequent application of the proposed method to a case study conducted in a sub-humid region in China showed that, the method accurately predicted the design storms for the existing rain garden, the predicted overflows under large storm events were within 13-50% of the measured volumes. The results suggest that the nomographs approach is a practical tool for quick selection or assessment of design options that incorporate key hydrological parameters of rain gardens or other infiltration type green infrastructure. The graphic approach as displayed by the nomographs allow urban planners to demonstrate the hydrological effect of small scale green infrastructure and gain more support for promoting low impact development. Copyright © 2016 Elsevier Ltd. All rights reserved.
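
    A rain-garden sizing calculation of the kind the nomographs encapsulate can be reduced to a volume balance between runoff from the contributing area and the ponding storage plus infiltration available in the garden. The sketch below shows such a balance with hypothetical parameter values; it is not the paper's model or nomograph procedure.

```python
def rain_garden_area(design_storm_mm, contributing_area_m2, runoff_coeff,
                     infiltration_rate_mm_per_hr, storm_duration_hr,
                     ponding_depth_mm):
    """Return the rain-garden area (m^2) needed to hold the runoff from a
    design storm, crediting ponding storage and infiltration during the storm.

    Simple volume balance: runoff volume from the contributing area must fit
    in (ponding depth + infiltration during the storm) over the garden area.
    """
    runoff_volume_m3 = (design_storm_mm / 1000.0) * contributing_area_m2 * runoff_coeff
    storage_depth_m = (ponding_depth_mm
                       + infiltration_rate_mm_per_hr * storm_duration_hr) / 1000.0
    return runoff_volume_m3 / storage_depth_m

# Hypothetical example: a 40 mm, 2 h design storm over 500 m^2 of roof/paving
# draining to the garden (runoff coefficient 0.9), loamy soil (15 mm/h),
# 150 mm allowable ponding depth.
area = rain_garden_area(40, 500, 0.9, 15, 2, 150)
print(f"required rain-garden area = {area:.1f} m^2")
```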

  8. Building Community-Engaged Health Research and Discovery Infrastructure on the South Side of Chicago: Science in Service to Community Priorities

    PubMed Central

    Lindau, Stacy Tessler; Makelarski, Jennifer A.; Chin, Marshall H.; Desautels, Shane; Johnson, Daniel; Johnson, Waldo E.; Miller, Doriane; Peters, Susan; Robinson, Connie; Schneider, John; Thicklin, Florence; Watson, Natalie P.; Wolfe, Marcus; Whitaker, Eric

    2011-01-01

    Objective To describe the roles community members can and should play in, and an asset-based strategy used by Chicago’s South Side Health and Vitality Studies for, building sustainable, large-scale community health research infrastructure. The Studies are a family of research efforts aiming to produce actionable knowledge to inform health policy, programming, and investments for the region. Methods Community and university collaborators, using a consensus-based approach, developed shared theoretical perspectives, guiding principles, and a model for collaboration in 2008, which were used to inform an asset-based operational strategy. Ongoing community engagement and relationship-building support the infrastructure and research activities of the Studies. Results Key steps in the asset-based strategy include: 1) continuous community engagement and relationship building, 2) identifying community priorities, 3) identifying community assets, 4) leveraging assets, 5) conducting research, 6) sharing knowledge and 7) informing action. Examples of community member roles, and how these are informed by the Studies’ guiding principles, are provided. Conclusions Community and university collaborators, with shared vision and principles, can effectively work together to plan innovative, large-scale community-based research that serves community needs and priorities. Sustainable, effective models are needed to realize NIH’s mandate for meaningful translation of biomedical discovery into improved population health. PMID:21236295

  9. OpenSoC Fabric

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-08-21

    Recent advancements in technology scaling have shown a trend towards greater integration with large-scale chips containing thousands of processors connected to memories and other I/O devices using non-trivial network topologies. Software simulation proves insufficient to study the tradeoffs in such complex systems due to slow execution time, whereas hardware RTL development is too time-consuming. We present OpenSoC Fabric, an on-chip network generation infrastructure which aims to provide a parameterizable and powerful on-chip network generator for evaluating future high performance computing architectures based on SoC technology. OpenSoC Fabric leverages a new hardware DSL, Chisel, which contains powerful abstractions provided by its base language, Scala, and generates both software (C++) and hardware (Verilog) models from a single code base. The OpenSoC Fabric infrastructure is modeled after existing state-of-the-art simulators, offers large and powerful collections of configuration options, and follows object-oriented design and functional programming to make functionality extension as easy as possible.

  10. Large-scale data analysis of power grid resilience across multiple US service regions

    NASA Astrophysics Data System (ADS)

    Ji, Chuanyi; Wei, Yun; Mei, Henry; Calzada, Jorge; Carey, Matthew; Church, Steve; Hayes, Timothy; Nugent, Brian; Stella, Gregory; Wallace, Matthew; White, Joe; Wilcox, Robert

    2016-05-01

    Severe weather events frequently result in large-scale power failures, affecting millions of people for extended durations. However, the lack of comprehensive, detailed failure and recovery data has impeded large-scale resilience studies. Here, we analyse data from four major service regions representing Upstate New York during Super Storm Sandy and daily operations. Using non-stationary spatiotemporal random processes that relate infrastructural failures to recoveries and cost, our data analysis shows that local power failures have a disproportionally large non-local impact on people (that is, the top 20% of failures interrupted 84% of services to customers). A large number (89%) of small failures, represented by the bottom 34% of customers and commonplace devices, resulted in 56% of the total cost of 28 million customer interruption hours. Our study shows that extreme weather does not cause, but rather exacerbates, existing vulnerabilities, which are obscured in daily operations.
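
    The concentration statistic quoted above (the top 20% of failures interrupting 84% of services) is an ordering-and-summing calculation over per-failure impacts. The sketch below reproduces that style of calculation on synthetic outage data, not the utilities' actual records.

```python
import numpy as np

def impact_share_of_top(failure_impacts, top_fraction=0.2):
    """Share of total impact (e.g., customers interrupted) attributable to
    the largest `top_fraction` of failures."""
    impacts = np.sort(np.asarray(failure_impacts, dtype=float))[::-1]
    n_top = max(1, int(round(top_fraction * len(impacts))))
    return impacts[:n_top].sum() / impacts.sum()

# Synthetic per-failure customer counts with a heavy tail, standing in for
# outage-management records.
rng = np.random.default_rng(1)
customers_per_failure = rng.lognormal(mean=3.0, sigma=2.0, size=10_000)

share = impact_share_of_top(customers_per_failure, top_fraction=0.2)
print(f"top 20% of failures account for {share:.0%} of customer interruptions")
```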

  11. Optical/IR from ground

    NASA Technical Reports Server (NTRS)

    Strom, Stephen; Sargent, Wallace L. W.; Wolff, Sidney; Ahearn, Michael F.; Angel, J. Roger; Beckwith, Steven V. W.; Carney, Bruce W.; Conti, Peter S.; Edwards, Suzan; Grasdalen, Gary

    1991-01-01

    Optical/infrared (O/IR) astronomy in the 1990's is reviewed. The following subject areas are included: research environment; science opportunities; technical development of the 1980's and opportunities for the 1990's; and ground-based O/IR astronomy outside the U.S. Recommendations are presented for: (1) large scale programs (Priority 1: a coordinated program for large O/IR telescopes); (2) medium scale programs (Priority 1: a coordinated program for high angular resolution; Priority 2: a new generation of 4-m class telescopes); (3) small scale programs (Priority 1: near-IR and optical all-sky surveys; Priority 2: a National Astrometric Facility); and (4) infrastructure issues (develop, purchase, and distribute optical CCDs and infrared arrays; a program to support large optics technology; a new generation of large filled aperture telescopes; a program to archive and disseminate astronomical databases; and a program for training new instrumentalists)

  12. A Development of Lightweight Grid Interface

    NASA Astrophysics Data System (ADS)

    Iwai, G.; Kawai, Y.; Sasaki, T.; Watase, Y.

    2011-12-01

    In order to support rapid development of Grid/Cloud-aware applications, we have developed an API that abstracts distributed computing infrastructures based on SAGA (A Simple API for Grid Applications). SAGA, which is standardized in the OGF (Open Grid Forum), defines API specifications for accessing distributed computing infrastructures such as Grid, Cloud and local computing resources. The Universal Grid API (UGAPI), which is a set of command line interfaces (CLI) and APIs, aims to offer a simpler API that combines several SAGA interfaces with richer functionality. The UGAPI CLIs offer the typical functionality end users require for job management and file access on different distributed computing infrastructures as well as local computing resources. We have also built a web interface for particle therapy simulation and demonstrated large-scale calculations using several different infrastructures at the same time. In this paper, we present how the web interface based on UGAPI and SAGA achieves more efficient utilization of computing resources across different infrastructures, with technical details and practical experiences.
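
    The value of such an abstraction layer is that application code describes and submits a job once, and the back-end (local, Grid or Cloud) can be swapped underneath it. The sketch below illustrates that pattern with invented class names; it is not the actual UGAPI or SAGA interface.

```python
from dataclasses import dataclass
import subprocess

@dataclass
class JobSpec:
    executable: str
    arguments: list

class LocalBackend:
    """Runs the job on the local machine; a Grid or Cloud backend would
    implement the same submit() interface."""
    def submit(self, spec: JobSpec) -> int:
        return subprocess.call([spec.executable, *spec.arguments])

class JobService:
    """Front-end that dispatches a JobSpec to whichever backend was chosen,
    so application code does not depend on the underlying infrastructure."""
    def __init__(self, backend):
        self.backend = backend
    def run(self, spec: JobSpec) -> int:
        return self.backend.submit(spec)

# Illustrative use: the same JobSpec could be routed to a Grid/Cloud backend.
service = JobService(LocalBackend())
status = service.run(JobSpec(executable="echo", arguments=["hello from the job API sketch"]))
print("exit status:", status)
```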

  13. Improving the effectiveness of school infrastructure planning using information systems based on priority scale in Salatiga

    NASA Astrophysics Data System (ADS)

    Sucipto, Katoningsih, Sri; Ratnaningrum, Anggry

    2017-03-01

    With a large number of schools, many components of school infrastructure to support, and limited funds, school infrastructure development cannot be carried out everywhere at once; implementation must follow priorities set according to need. The first step is to record all existing needs and identify the condition of existing school infrastructure, so that the recorded data are valid and cover all of a school's infrastructure needs. SIPIS is very helpful in recording these needs, projecting school development (from student enrolment through to staffing), and ordering needs by their level of importance, with the most important addressed first. Using SIPIS, this ordering can be established systematically, so that what is built first is no longer determined by personal preference. Detailed fund allocations can then be prepared, so that budget submissions correspond to actual demand.
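
    The priority-scale idea, ordering recorded needs by importance rather than preference, can be illustrated with a small weighted-scoring sketch. The fields and weights below are hypothetical and are not SIPIS's actual scheme.

```python
# Hypothetical records of school infrastructure needs; weights are illustrative.
needs = [
    {"item": "classroom roof repair", "condition": 0.9, "students_affected": 320, "cost": 12_000},
    {"item": "new library shelving",  "condition": 0.3, "students_affected": 150, "cost": 4_000},
    {"item": "sanitation block",      "condition": 0.8, "students_affected": 500, "cost": 20_000},
]

def priority_score(need, w_condition=0.5, w_students=0.4, w_cost=0.1):
    """Higher score = more urgent. Condition is damage severity (0-1); the
    student count is normalized against the largest project; cheaper items
    get a small bonus so quick wins are not buried."""
    max_students = max(n["students_affected"] for n in needs)
    max_cost = max(n["cost"] for n in needs)
    return (w_condition * need["condition"]
            + w_students * need["students_affected"] / max_students
            + w_cost * (1 - need["cost"] / max_cost))

# Rank the needs from most to least urgent.
for need in sorted(needs, key=priority_score, reverse=True):
    print(f'{priority_score(need):.2f}  {need["item"]}')
```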

  14. "Tactic": Traffic Aware Cloud for Tiered Infrastructure Consolidation

    ERIC Educational Resources Information Center

    Sangpetch, Akkarit

    2013-01-01

    Large-scale enterprise applications are deployed as distributed applications. These applications consist of many inter-connected components with heterogeneous roles and complex dependencies. Each component typically consumes 5-15% of the server capacity. Deploying each component as a separate virtual machine (VM) allows us to consolidate the…

  15. Measurement-Driven Characterization of the Mobile Environment

    ERIC Educational Resources Information Center

    Soroush, Hamed

    2013-01-01

    The concurrent deployment of high-quality wireless networks and large-scale cloud services offers the promise of secure ubiquitous access to seemingly limitless amount of content. However, as users' expectations have grown more demanding, the performance and connectivity failures endemic to the existing networking infrastructure have become more…

  16. A General Purpose High Performance Linux Installation Infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wachsmann, Alf

    2002-06-17

    With more and more and larger and larger Linux clusters, the question arises how to install them. This paper addresses this question by proposing a solution using only standard software components. This installation infrastructure scales well for a large number of nodes. It is also usable for installing desktop machines or diskless Linux clients, thus, is not designed for cluster installations in particular but is, nevertheless, highly performant. The infrastructure proposed uses PXE as the network boot component on the nodes. It uses DHCP and TFTP servers to get IP addresses and a bootloader to all nodes. It then uses kickstart to install Red Hat Linux over NFS. We have implemented this installation infrastructure at SLAC with our given server hardware and installed a 256 node cluster in 30 minutes. This paper presents the measurements from this installation and discusses the bottlenecks in our installation.
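
    The per-node boot configuration in such a setup is largely mechanical: each node needs a DHCP host entry that points it at the TFTP server and the PXE bootloader. A small generator for ISC-dhcpd host stanzas is sketched below; the hostnames, MAC and IP addresses are placeholders, and the SLAC deployment may differ in detail.

```python
# Hypothetical node inventory; addresses and MACs are placeholders.
nodes = [
    ("node001", "00:11:22:33:44:01", "10.0.0.101"),
    ("node002", "00:11:22:33:44:02", "10.0.0.102"),
]
TFTP_SERVER = "10.0.0.1"  # host serving pxelinux.0 over TFTP

def dhcp_host_stanza(hostname, mac, ip):
    """Render an ISC-dhcpd host entry that PXE-boots the node."""
    return (
        f"host {hostname} {{\n"
        f"  hardware ethernet {mac};\n"
        f"  fixed-address {ip};\n"
        f"  next-server {TFTP_SERVER};\n"
        f'  filename "pxelinux.0";\n'
        f"}}\n"
    )

# Write one stanza per node into a fragment for dhcpd.conf.
with open("dhcpd-cluster.conf", "w") as f:
    for hostname, mac, ip in nodes:
        f.write(dhcp_host_stanza(hostname, mac, ip))
```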

  17. caGrid 1.0: An Enterprise Grid Infrastructure for Biomedical Research

    PubMed Central

    Oster, Scott; Langella, Stephen; Hastings, Shannon; Ervin, David; Madduri, Ravi; Phillips, Joshua; Kurc, Tahsin; Siebenlist, Frank; Covitz, Peter; Shanbhag, Krishnakant; Foster, Ian; Saltz, Joel

    2008-01-01

    Objective To develop software infrastructure that will provide support for discovery, characterization, integrated access, and management of diverse and disparate collections of information sources, analysis methods, and applications in biomedical research. Design An enterprise Grid software infrastructure, called caGrid version 1.0 (caGrid 1.0), has been developed as the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG™) program. It is designed to support a wide range of use cases in basic, translational, and clinical research, including 1) discovery, 2) integrated and large-scale data analysis, and 3) coordinated study. Measurements The caGrid is built as a Grid software infrastructure and leverages Grid computing technologies and the Web Services Resource Framework standards. It provides a set of core services, toolkits for the development and deployment of new community provided services, and application programming interfaces for building client applications. Results The caGrid 1.0 was released to the caBIG community in December 2006. It is built on open source components and caGrid source code is publicly and freely available under a liberal open source license. The core software, associated tools, and documentation can be downloaded from the following URL: https://cabig.nci.nih.gov/workspaces/Architecture/caGrid. Conclusions While caGrid 1.0 is designed to address use cases in cancer research, the requirements associated with discovery, analysis and integration of large scale data, and coordinated studies are common in other biomedical fields. In this respect, caGrid 1.0 is the realization of a framework that can benefit the entire biomedical community. PMID:18096909

  18. caGrid 1.0: an enterprise Grid infrastructure for biomedical research.

    PubMed

    Oster, Scott; Langella, Stephen; Hastings, Shannon; Ervin, David; Madduri, Ravi; Phillips, Joshua; Kurc, Tahsin; Siebenlist, Frank; Covitz, Peter; Shanbhag, Krishnakant; Foster, Ian; Saltz, Joel

    2008-01-01

    To develop software infrastructure that will provide support for discovery, characterization, integrated access, and management of diverse and disparate collections of information sources, analysis methods, and applications in biomedical research. An enterprise Grid software infrastructure, called caGrid version 1.0 (caGrid 1.0), has been developed as the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG) program. It is designed to support a wide range of use cases in basic, translational, and clinical research, including 1) discovery, 2) integrated and large-scale data analysis, and 3) coordinated study. The caGrid is built as a Grid software infrastructure and leverages Grid computing technologies and the Web Services Resource Framework standards. It provides a set of core services, toolkits for the development and deployment of new community provided services, and application programming interfaces for building client applications. The caGrid 1.0 was released to the caBIG community in December 2006. It is built on open source components and caGrid source code is publicly and freely available under a liberal open source license. The core software, associated tools, and documentation can be downloaded from the following URL: https://cabig.nci.nih.gov/workspaces/Architecture/caGrid. While caGrid 1.0 is designed to address use cases in cancer research, the requirements associated with discovery, analysis and integration of large scale data, and coordinated studies are common in other biomedical fields. In this respect, caGrid 1.0 is the realization of a framework that can benefit the entire biomedical community.

  19. A global fingerprint of macro-scale changes in urban structure from 1999 to 2009

    NASA Astrophysics Data System (ADS)

    Frolking, Steve; Milliman, Tom; Seto, Karen C.; Friedl, Mark A.

    2013-06-01

    Urban population now exceeds rural population globally, and 60-80% of global energy consumption by households, businesses, transportation, and industry occurs in urban areas. There is growing evidence that built-up infrastructure contributes to carbon emissions inertia, and that investments in infrastructure today have delayed climate cost in the future. Although the United Nations statistics include data on urban population by country and select urban agglomerations, there are no empirical data on built-up infrastructure for a large sample of cities. Here we present the first study to examine changes in the structure of the world’s largest cities from 1999 to 2009. Combining data from two space-borne sensors—backscatter power (PR) from NASA’s SeaWinds microwave scatterometer, and nighttime lights (NL) from NOAA’s defense meteorological satellite program/operational linescan system (DMSP/OLS)—we report large increases in built-up infrastructure stock worldwide and show that cities are expanding both outward and upward. Our results reveal previously undocumented recent and rapid changes in urban areas worldwide that reflect pronounced shifts in the form and structure of cities. Increases in built-up infrastructure are highest in East Asian cities, with Chinese cities rapidly expanding their material infrastructure stock in both height and extent. In contrast, Indian cities are primarily building out and not increasing in verticality. This new dataset will help characterize the structure and form of cities, and ultimately improve our understanding of how cities affect regional-to-global energy use and greenhouse gas emissions.
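
    The up-versus-out distinction drawn here rests on comparing changes in the two signals: backscatter (a proxy for built-up volume and height) against nighttime-light extent (a proxy for outward expansion). The sketch below shows one simple way to classify cities on that basis, using invented numbers rather than the SeaWinds/DMSP data.

```python
import pandas as pd

# Hypothetical city-level changes, 1999-2009: delta_pr is the relative change
# in mean backscatter power (built-up volume proxy); delta_nl is the relative
# change in lit area (outward-expansion proxy). Values are illustrative only.
cities = pd.DataFrame({
    "city":     ["City A", "City B", "City C"],
    "delta_pr": [0.35, 0.05, 0.30],
    "delta_nl": [0.05, 0.40, 0.35],
})

def growth_mode(row, threshold=0.15):
    """Classify a city by which signal changed appreciably."""
    up = row.delta_pr > threshold
    out = row.delta_nl > threshold
    if up and out:
        return "up and out"
    if up:
        return "mostly upward (verticality)"
    if out:
        return "mostly outward (extent)"
    return "little structural change"

cities["growth_mode"] = cities.apply(growth_mode, axis=1)
print(cities[["city", "growth_mode"]])
```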

  20. Large-Scale Overlays and Trends: Visually Mining, Panning and Zooming the Observable Universe.

    PubMed

    Luciani, Timothy Basil; Cherinka, Brian; Oliphant, Daniel; Myers, Sean; Wood-Vasey, W Michael; Labrinidis, Alexandros; Marai, G Elisabeta

    2014-07-01

    We introduce a web-based computing infrastructure to assist the visual integration, mining and interactive navigation of large-scale astronomy observations. Following an analysis of the application domain, we design a client-server architecture to fetch distributed image data and to partition local data into a spatial index structure that allows prefix-matching of spatial objects. In conjunction with hardware-accelerated pixel-based overlays and an online cross-registration pipeline, this approach allows the fetching, displaying, panning and zooming of gigabit panoramas of the sky in real time. To further facilitate the integration and mining of spatial and non-spatial data, we introduce interactive trend images-compact visual representations for identifying outlier objects and for studying trends within large collections of spatial objects of a given class. In a demonstration, images from three sky surveys (SDSS, FIRST and simulated LSST results) are cross-registered and integrated as overlays, allowing cross-spectrum analysis of astronomy observations. Trend images are interactively generated from catalog data and used to visually mine astronomy observations of similar type. The front-end of the infrastructure uses the web technologies WebGL and HTML5 to enable cross-platform, web-based functionality. Our approach attains interactive rendering framerates; its power and flexibility enables it to serve the needs of the astronomy community. Evaluation on three case studies, as well as feedback from domain experts emphasize the benefits of this visual approach to the observational astronomy field; and its potential benefits to large scale geospatial visualization in general.
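
    Prefix-matching of spatial objects works by encoding position as a string of hierarchical cell digits, so that objects sharing a key prefix share a sky cell. The sketch below uses a quadkey-style encoding over (RA, Dec) to illustrate the idea; it is an illustration of prefix indexing in general, not the paper's actual index structure.

```python
def quadkey(x, y, depth=8):
    """Interleave binary subdivisions of the unit square into a quadkey
    string; objects sharing a prefix fall in the same spatial cell."""
    key = []
    for _ in range(depth):
        x, y = x * 2, y * 2
        cx, cy = min(int(x), 1), min(int(y), 1)  # clamp the upper boundary
        key.append(str(cx + 2 * cy))
        x, y = x - cx, y - cy
    return "".join(key)

def sky_quadkey(ra_deg, dec_deg, depth=8):
    # Normalize RA [0, 360) and Dec [-90, 90] onto the unit square.
    return quadkey(ra_deg / 360.0, (dec_deg + 90.0) / 180.0, depth)

# Index a few hypothetical catalog objects and query by key prefix.
catalog = {name: sky_quadkey(ra, dec)
           for name, ra, dec in [("obj1", 150.1, 2.2),
                                 ("obj2", 150.2, 2.3),
                                 ("obj3", 20.0, -30.0)]}
region_prefix = sky_quadkey(150.15, 2.25, depth=4)
matches = [name for name, key in catalog.items() if key.startswith(region_prefix)]
print(matches)  # objects near (RA 150.15, Dec 2.25)
```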

  1. Public-Private Partnership: Joint recommendations to improve downloads of large Earth observation data

    NASA Astrophysics Data System (ADS)

    Ramachandran, R.; Murphy, K. J.; Baynes, K.; Lynnes, C.

    2016-12-01

    With the volume of Earth observation data expanding rapidly, cloud computing is quickly changing the way Earth observation data is processed, analyzed, and visualized. The cloud infrastructure provides the flexibility to scale up to large volumes of data and handle high velocity data streams efficiently. Having freely available Earth observation data collocated on a cloud infrastructure creates opportunities for innovation and value-added data re-use in ways unforeseen by the original data provider. These innovations spur new industries and applications and spawn new scientific pathways that were previously limited due to data volume and computational infrastructure issues. NASA, in collaboration with Amazon, Google, and Microsoft, have jointly developed a set of recommendations to enable efficient transfer of Earth observation data from existing data systems to a cloud computing infrastructure. The purpose of these recommendations is to provide guidelines against which all data providers can evaluate existing data systems and be used to improve any issues uncovered to enable efficient search, access, and use of large volumes of data. Additionally, these guidelines ensure that all cloud providers utilize a common methodology for bulk-downloading data from data providers thus preventing the data providers from building custom capabilities to meet the needs of individual cloud providers. The intent is to share these recommendations with other Federal agencies and organizations that serve Earth observation to enable efficient search, access, and use of large volumes of data. Additionally, the adoption of these recommendations will benefit data users interested in moving large volumes of data from data systems to any other location. These data users include the cloud providers, cloud users such as scientists, and other users working in a high performance computing environment who need to move large volumes of data.

  2. Economies of Scale and Large Classes

    ERIC Educational Resources Information Center

    Saiz, Martin

    2014-01-01

    Making classes larger saves money--and public universities across the country have found it a useful strategy to balance their budgets after decades of state funding cuts and increases to infrastructure costs. Where this author teaches, in the College of Social and Behavioral Sciences at California State University, Northridge (CSUN),…

  3. Architectural and Mobility Management Designs in Internet-Based Infrastructure Wireless Mesh Networks

    ERIC Educational Resources Information Center

    Zhao, Weiyi

    2011-01-01

    Wireless mesh networks (WMNs) have recently emerged to be a cost-effective solution to support large-scale wireless Internet access. They have numerous applications, such as broadband Internet access, building automation, and intelligent transportation systems. One research challenge for Internet-based WMNs is to design efficient mobility…

  4. A Case for Data Commons

    PubMed Central

    Grossman, Robert L.; Heath, Allison; Murphy, Mark; Patterson, Maria; Wells, Walt

    2017-01-01

    Data commons collocate data, storage, and computing infrastructure with core services and commonly used tools and applications for managing, analyzing, and sharing data to create an interoperable resource for the research community. An architecture for data commons is described, as well as some lessons learned from operating several large-scale data commons. PMID:29033693

  5. A Short History of Performance Assessment: Lessons Learned.

    ERIC Educational Resources Information Center

    Madaus, George F.; O'Dwyer, Laura M.

    1999-01-01

    Places performance assessment in the context of high-stakes uses, describes underlying technologies, and outlines the history of performance testing from 210 B.C.E. to the present. Historical issues of fairness, efficiency, cost, and infrastructure influence contemporary efforts to use performance assessments in large-scale, high-stakes testing…

  6. Quake Final Video

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Critical infrastructure around the world is at constant risk from earthquakes. Most of these critical structures are designed using archaic seismic simulation methods built for the early digital computers of the 1970s. Idaho National Laboratory’s Seismic Research Group is working to modernize these simulation methods through computational research and large-scale laboratory experiments.

  7. Adaptation of a pattern-scaling approach for assessment of local (village/valley) scale water resources and related vulnerabilities in the Upper Indus Basin

    NASA Astrophysics Data System (ADS)

    Forsythe, Nathan; Kilsby, Chris G.; Fowler, Hayley J.; Archer, David R.

    2010-05-01

    The water resources of the Upper Indus Basin (UIB) are of the utmost importance to the economic wellbeing of Pakistan. The irrigated agriculture made possible by Indus river runoff underpins the food security for Pakistan's nearly 200 million people. Contributions from hydropower account for more than one fifth of peak installed electrical generating capacity in a country where widespread, prolonged load-shedding handicaps business activity and industrial development. Pakistan's further socio-economic development thus depends largely on optimisation of its precious water resources. Confident, accurate seasonal predictions of water resource availability coupled with sound understanding of interannual variability are urgent insights needed by development planners and infrastructure managers at all levels. This study focuses on the challenge of providing meaningful quantitative information at the village/valley scale in the upper reaches of the UIB. Proceeding by progressive reductions in scale, the typology of the observed UIB hydrological regimes -- glacial, nival and pluvial -- are examined with special emphasis on interannual variability for individual seasons. Variations in discharge (runoff) are compared to observations of climate parameters (temperature, precipitation) and available spatial data (elevation, snow cover and snow-water-equivalent). The first scale presented is composed of the large-scale, long-record gauged UIB tributary basins. The Pakistan Water and Power Development Authority (WAPDA) has maintained these stations for several decades in order to monitor seasonal flows and accumulate data for design of further infrastructure. Data from basins defined by five gauging stations on the Indus, Hunza, Gilgit and Astore rivers are examined. The second scale presented is a set of smaller gauged headwater catchments with short records. These gauges were installed by WAPDA and its partners amongst the international development agencies to assess potential sites for medium-scale infrastructure projects. These catchments are placed in their context within the hydrological regime classification using the spatial data and (remote sensing) observations as well as river gauging measurements. The study assesses the degree of similarity with the larger basins of the same hydrological regime. This assessment focuses on the measured response to observed climate variable anomalies. The smallest scale considered is comprised of a number of case studies at the ungauged village/valley scale. These examples are based on the delineation of areas to which specific communities (villages) have customary (riparian) water rights. These examples were suggested by non-governmental organisations working on grassroots economic development initiatives and small-scale infrastructure projects in the region. The direct observations available for these subcatchments are limited to spatial data (elevation, snow parameters). The challenge at this level is to accurately extrapolate areal values (precipitation, temperature, runoff) from point observations at the basin scale. The study assesses both the degree of similarity in the distribution of spatial parameters to the larger gauged basins and the interannual variability (spatial heterogeneity) of remotely-sensed snow cover and snow-water-equivalent at this subcatchment scale. 
Based upon the characterisation of spatial and interannual variability at these three spatial scales, the challenges facing local water resource managers and infrastructure operators are enumerated. Local vulnerabilities include, but are not limited to, varying thresholds in irrigation water requirements based on crop-type, minimum base flows for micro-hydropower generation during winter (high load) months and relatively small but growing demand for domestic water usage. In conclusion the study posits potential strategies for managing interannual variability and potential emerging trends. Suggested strategies are guided by the principles of low-risk adaptation, participative decision making and local capacity building.

  8. Multi-scalar interactions between infrastructure, smallholder water management, and coastal dynamics in the Bengal Delta, Bangladesh

    NASA Astrophysics Data System (ADS)

    Rogers, K. G.; Brondizio, E.; Roy, K.; Syvitski, J. P.

    2016-12-01

    Because of their low-lying elevations and large number of inhabitants and infrastructure, river deltas are ground zero for climate change impacts, particularly from sea-level rise and storm surges. The increased vulnerability of downstream delta communities to coastal flooding as a result of upstream engineering has been acknowledged for decades. What has received less attention is the sensitivity of deltas to the interactions of these processes and increasing intensity of cultivation and irrigation in their coastal regions. Beyond basin-scale damming, regional infrastructure affects the movement of sediment and water on deltas, and combined with upstream modifications may exacerbate the risk of expanded tidal flooding, erosion of arable land, and salinization of soils and groundwater associated with sea level rise. To examine the social-biophysical feedbacks associated with regional-scale infrastructure, smallholder water management practices and coastal dynamics, a nested framework was applied to two districts of the coastal southwest region of Bangladesh. The two districts vary in tidal range, salinity, freshwater availability and socioeconomic structures, and are spatially varied in farmer's adaptations. Both districts contain numerous large embankment systems initially designed to protect cropland from tidal flooding, but that have been poorly maintained since their construction in the 1960's. The framework was co-produced using local-level stakeholder input collected during group interviews with rural farmers in 8 villages within the two districts, and explicitly accounts for engineered and natural biophysical variables as well as governance and institutional structures at 3 levels of analysis. Household survey results indicate that the presence or absence of embankments as a result of poor management and dynamic coastal processes is the primary control on freshwater availability and thus influences farming strategies, socioeconomic conditions and social positions in both districts. Local-scale interactions with the embankments are spatially heterogeneous, but geospatial analyses show the potential for these to collectively impact physical and social stability across a region already vulnerable to coastal flooding.

  9. The relevance of large scale environmental research infrastructures from the point of view of Ethics: the case of EMSO

    NASA Astrophysics Data System (ADS)

    Favali, Paolo; Beranzoli, Laura; Best, Mairi; Franceschini, PierLuigi; Materia, Paola; Peppoloni, Silvia; Picard, John

    2014-05-01

    EMSO (European Multidisciplinary Seafloor and Water Column Observatory) is a large-scale European Research Infrastructure (RI). It is a geographically distributed infrastructure composed of several deep-seafloor and water-column observatories, which will be deployed at key sites in European waters, spanning from the Arctic, through the Atlantic and Mediterranean, to the Black Sea, with the basic scientific objective of real-time, long-term monitoring of environmental processes related to the interaction between the geosphere, biosphere and hydrosphere. EMSO is one of the environmental RIs on the ESFRI roadmap. The ESFRI Roadmap identifies new RIs of pan-European importance that correspond to the long-term needs of European research communities. EMSO will be the sub-sea segment of the EU's large-scale Earth Observation program, Copernicus (previously known as GMES - Global Monitoring for Environment and Security) and will significantly enhance the observational capabilities of European member states. An open data policy compliant with the recommendations being developed within the GEOSS initiative (Global Earth Observation System of Systems) will allow for shared use of the infrastructure and the exchange of scientific information and knowledge. The processes that occur in the oceans have a direct impact on human societies; it is therefore crucial to improve our understanding of how they operate and interact. To encompass the breadth of these major processes, sustained and integrated observations are required that appreciate the interconnectedness of atmospheric, surface ocean, biological pump, deep-sea, and solid-Earth dynamics and that can address:
    • natural and anthropogenic change;
    • interactions between ecosystem services, biodiversity, biogeochemistry, physics, and climate;
    • impacts of exploration and extraction of energy, minerals, and living resources;
    • geo-hazard early warning capability for earthquakes, tsunamis, gas-hydrate release, and slope instability and failure;
    • connecting scientific outcomes to stakeholders and policy makers, including government decision-makers.
    The development of large research infrastructure initiatives like EMSO must continuously take into account wide-reaching environmental and socio-economic implications and objectives. For this reason, an Ethics Committee was established early in EMSO's initial Preparatory Phase with responsibility for overseeing the key ethical and social aspects of the project. These include:
    • promoting inclusive science communication and data dissemination services to civil society according to Open Access principles;
    • guaranteeing top-quality scientific information and data as results of top-quality research;
    • promoting the increased adoption of eco-friendly, sustainable technologies through the dissemination of advanced scientific knowledge and best practices to the private sector and to policy makers;
    • developing Education Strategies in cooperation with academia and industry aimed at informing and sensitizing the general public on the environmental and socio-economic implications and benefits of large research infrastructure initiatives such as EMSO;
    • carrying out Excellent Science following strict criteria of research integrity, as expressed in the Montreal Statement (2013);
    • promoting Geo-ethical awareness and innovation by spurring innovative approaches in the management of environmental aspects of large research projects;
    • supporting technological innovation by working closely in support of SMEs;
    • providing a constant, qualified and authoritative one-stop reference point and advisory service for politicians and decision-makers.
    The paper shows how Geoethics is an essential tool for guiding methodological and operational choices and the management of a European project with great impact on the environment and society.

  10. ICAT: Integrating data infrastructure for facilities based science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flannery, Damian; Matthews, Brian; Griffin, Tom; Bicarregui, Juan; Gleaves, Michael; Lerusse, Laurent; Downing, Roger; Ashton, Alun; Sufi, Shoaib; Drinkwater, Glen; Kleese, Kerstin

    2009-12-21

    Scientific facilities, in particular large-scale photon and neutron sources, have demanding requirements to manage the increasing quantities of experimental data they generate in a systematic and secure way. In this paper, we describe the ICAT infrastructure for cataloguing facility generated experimental data which has been in development within STFC and DLS for several years. We consider the factors which have influenced its design and describe its architecture and metadata model, a key tool in the management of data. We go on to give an outline of its current implementation and use, with plans for its future development.

  11. Leveraging finances for public health system improvement: results from the Turning Point initiative.

    PubMed

    Bekemeier, Betty; Riley, Catharine M; Berkowitz, Bobbie

    2007-01-01

    Reforming the public health infrastructure requires substantial system changes at the state level; state health agencies, however, often lack the resources and support for strategic planning and systemwide improvement. The Turning Point Initiative provided support for states to focus on large-scale system changes that resulted in increased funding for public health capacity and infrastructure development. Turning Point provides a test case for obtaining financial and institutional resources focused on systems change and infrastructure development-areas for which it has been historically difficult to obtain long-term support. The purpose of this exploratory, descriptive survey research was to enumerate the actual resources leveraged toward public health system improvement through the partnerships, planning, and implementation activities funded by the Robert Wood Johnson Foundation as a part of the Turning Point Initiative.

  12. Multi-Scale Infrastructure Assessment

    EPA Science Inventory

    The U.S. Environmental Protection Agency’s (EPA) multi-scale infrastructure assessment project supports both water resource adaptation to climate change and the rehabilitation of the nation’s aging water infrastructure by providing tools, scientific data and information to progra...

  13. Consolidation and development roadmap of the EMI middleware

    NASA Astrophysics Data System (ADS)

    Kónya, B.; Aiftimiei, C.; Cecchi, M.; Field, L.; Fuhrmann, P.; Nilsen, J. K.; White, J.

    2012-12-01

    Scientific research communities have benefited recently from the increasing availability of computing and data infrastructures with unprecedented capabilities for large scale distributed initiatives. These infrastructures are largely defined and enabled by the middleware they deploy. One of the major issues in the current usage of research infrastructures is the need to use similar but often incompatible middleware solutions. The European Middleware Initiative (EMI) is a collaboration of the major European middleware providers ARC, dCache, gLite and UNICORE. EMI aims to: deliver a consolidated set of middleware components for deployment in EGI, PRACE and other Distributed Computing Infrastructures; extend the interoperability between grids and other computing infrastructures; strengthen the reliability of the services; establish a sustainable model to maintain and evolve the middleware; fulfil the requirements of the user communities. This paper presents the consolidation and development objectives of the EMI software stack covering the last two years. The EMI development roadmap is introduced along the four technical areas of compute, data, security and infrastructure. The compute area plan focuses on consolidation of standards and agreements through a unified interface for job submission and management, a common format for accounting, the wide adoption of GLUE schema version 2.0 and the provision of a common framework for the execution of parallel jobs. The security area is working towards a unified security model and lowering the barriers to Grid usage by allowing users to gain access with their own credentials. The data area is focusing on implementing standards to ensure interoperability with other grids and industry components and to reuse already existing clients in operating systems and open source distributions. One of the highlights of the infrastructure area is the consolidation of the information system services via the creation of a common information backbone.

  14. Monitoring, Modeling, and Emergent Toxicology in the East Fork Watershed: Developing a Test Bed for Water Quality Management.

    EPA Science Inventory

    Overarching objectives for the development of the East Fork Watershed Test Bed in Southwestern Ohio include: 1) providing research infrastructure for integrating risk assessment and management research on the scale of a large multi-use watershed (1295 km2); 2) Focusing on process...

  15. From Networked Learning to Operational Practice: Constructing and Transferring Superintendent Knowledge in a Regional Instructional Rounds Network

    ERIC Educational Resources Information Center

    Travis, Timothy J.

    2015-01-01

    Instructional rounds are an emerging network structure with processes and protocols designed to develop superintendents' knowledge and skills in leading large-scale improvement, to enable superintendents to build an infrastructure that supports the work of improvement, to assist superintendents in distributing leadership throughout their district,…

  16. Field data collection, analysis, and adaptive management of green infrastructure in the urban water cycle in Cleveland and Columbus, OH

    NASA Astrophysics Data System (ADS)

    Darner, R.; Shuster, W.

    2016-12-01

    Expansion of the urban environment can alter the landscape and create challenges for how cities deal with energy and water. Large volumes of stormwater in areas served by combined sewer systems present one challenge. Managing the water as near to the source as possible creates an environment that allows more infiltration and evapotranspiration. Stormwater control measures (SCM) associated with this type of development, often called green infrastructure, include rain gardens, pervious or porous pavements, bioswales, green or blue roofs, and others. In this presentation, we examine the hydrology of green infrastructure in urban sewersheds in Cleveland and Columbus, OH. We present the need for data throughout the water cycle and challenges to collecting field data at a small scale (single rain garden instrumented to measure inflows, outflow, weather, soil moisture, and groundwater levels) and at a macro scale (a project including low-cost rain gardens, highly engineered rain gardens, groundwater wells, weather stations, soil moisture, and combined sewer flow monitoring). Results will include quantifying the effectiveness of SCMs in intercepting stormwater for different precipitation event sizes. Small scale deployment analysis will demonstrate the role of active adaptive management in the ongoing optimization over multiple years of data collection.

  17. Wireless Technology Infrastructures for Authentication of Patients: PKI that Rings

    PubMed Central

    Sax, Ulrich; Kohane, Isaac; Mandl, Kenneth D.

    2005-01-01

    As the public interest in consumer-driven electronic health care applications rises, so do concerns about the privacy and security of these applications. Achieving a balance between providing the necessary security while promoting user acceptance is a major obstacle in large-scale deployment of applications such as personal health records (PHRs). Robust and reliable forms of authentication are needed for PHRs, as the record will often contain sensitive and protected health information, including the patient's own annotations. Since the health care industry per se is unlikely to succeed at single-handedly developing and deploying a large scale, national authentication infrastructure, it makes sense to leverage existing hardware, software, and networks. This report proposes a new model for authentication of users to health care information applications, leveraging wireless mobile devices. Cell phones are widely distributed, have high user acceptance, and offer advanced security protocols. The authors propose harnessing this technology for the strong authentication of individuals by creating a registration authority and an authentication service, and examine the problems and promise of such a system. PMID:15684133

  18. Wireless technology infrastructures for authentication of patients: PKI that rings.

    PubMed

    Sax, Ulrich; Kohane, Isaac; Mandl, Kenneth D

    2005-01-01

    As the public interest in consumer-driven electronic health care applications rises, so do concerns about the privacy and security of these applications. Achieving a balance between providing the necessary security while promoting user acceptance is a major obstacle in large-scale deployment of applications such as personal health records (PHRs). Robust and reliable forms of authentication are needed for PHRs, as the record will often contain sensitive and protected health information, including the patient's own annotations. Since the health care industry per se is unlikely to succeed at single-handedly developing and deploying a large scale, national authentication infrastructure, it makes sense to leverage existing hardware, software, and networks. This report proposes a new model for authentication of users to health care information applications, leveraging wireless mobile devices. Cell phones are widely distributed, have high user acceptance, and offer advanced security protocols. The authors propose harnessing this technology for the strong authentication of individuals by creating a registration authority and an authentication service, and examine the problems and promise of such a system.

  19. The Landscape Evolution Observatory: a large-scale controllable infrastructure to study coupled Earth-surface processes

    USGS Publications Warehouse

    Pangle, Luke A.; DeLong, Stephen B.; Abramson, Nate; Adams, John; Barron-Gafford, Greg A.; Breshears, David D.; Brooks, Paul D.; Chorover, Jon; Dietrich, William E.; Dontsova, Katerina; Durcik, Matej; Espeleta, Javier; Ferré, T.P.A.; Ferriere, Regis; Henderson, Whitney; Hunt, Edward A.; Huxman, Travis E.; Millar, David; Murphy, Brendan; Niu, Guo-Yue; Pavao-Zuckerman, Mitch; Pelletier, Jon D.; Rasmussen, Craig; Ruiz, Joaquin; Saleska, Scott; Schaap, Marcel; Sibayan, Michael; Troch, Peter A.; Tuller, Markus; van Haren, Joost; Zeng, Xubin

    2015-01-01

    Zero-order drainage basins, and their constituent hillslopes, are the fundamental geomorphic unit comprising much of Earth's uplands. The convergent topography of these landscapes generates spatially variable substrate and moisture content, facilitating biological diversity and influencing how the landscape filters precipitation and sequesters atmospheric carbon dioxide. In light of these significant ecosystem services, refining our understanding of how these functions are affected by landscape evolution, weather variability, and long-term climate change is imperative. In this paper we introduce the Landscape Evolution Observatory (LEO): a large-scale controllable infrastructure consisting of three replicated artificial landscapes (each 330 m2 surface area) within the climate-controlled Biosphere 2 facility in Arizona, USA. At LEO, experimental manipulation of rainfall, air temperature, relative humidity, and wind speed are possible at unprecedented scale. The Landscape Evolution Observatory was designed as a community resource to advance understanding of how topography, physical and chemical properties of soil, and biological communities coevolve, and how this coevolution affects water, carbon, and energy cycles at multiple spatial scales. With well-defined boundary conditions and an extensive network of sensors and samplers, LEO enables an iterative scientific approach that includes numerical model development and virtual experimentation, physical experimentation, data analysis, and model refinement. We plan to engage the broader scientific community through public dissemination of data from LEO, collaborative experimental design, and community-based model development.

  20. Scalable Analysis Methods and In Situ Infrastructure for Extreme Scale Knowledge Discovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bethel, Wes

    2016-07-24

    The primary challenge motivating this team’s work is the widening gap between the ability to compute information and to store it for subsequent analysis. This gap adversely impacts science code teams, who are able to perform analysis only on a small fraction of the data they compute, resulting in the very real likelihood of lost or missed science, when results are computed but not analyzed. Our approach is to perform as much analysis or visualization processing as possible on data while it is still resident in memory, an approach that is known as in situ processing. The idea of in situ processing was not new at the time of the start of this effort in 2014, but efforts in that space were largely ad hoc, and there was no concerted effort within the research community that aimed to foster production-quality software tools suitable for use by DOE science projects. In large part, our objective was to produce and enable the use of production-quality in situ methods and infrastructure, at scale, on DOE HPC facilities, though we expected to have impact beyond DOE due to the widespread nature of the challenges, which affect virtually all large-scale computational science efforts. To achieve that objective, we assembled a unique team of researchers consisting of representatives from DOE national laboratories, academia, and industry, and engaged in software technology R&D, as well as engaged in close partnerships with DOE science code teams, to produce software technologies that were shown to run effectively at scale on DOE HPC platforms.
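
    The core of in situ processing is that analysis routines run on each timestep's data while it is still in memory, retaining only small summaries instead of writing full fields to disk. A toy illustration of that control flow is sketched below; it is not one of the production tools produced by this effort.

```python
import numpy as np

class InSituAnalyzer:
    """Toy in situ pipeline: analysis callbacks run on each timestep's field
    while it is still in memory, and only small summaries are retained."""
    def __init__(self, callbacks):
        self.callbacks = callbacks
        self.summaries = []

    def process(self, step, field):
        summary = {name: float(fn(field)) for name, fn in self.callbacks.items()}
        summary["step"] = step
        self.summaries.append(summary)

def simulate(n_steps, shape=(256, 256)):
    """Stand-in for a solver: yields an evolving field each timestep."""
    rng = np.random.default_rng(0)
    field = rng.normal(size=shape)
    for step in range(n_steps):
        field += 0.1 * rng.normal(size=shape)
        yield step, field

analyzer = InSituAnalyzer({"mean": np.mean, "max": np.max})
for step, field in simulate(5):
    analyzer.process(step, field)  # analyze in memory instead of writing the field out

print(analyzer.summaries[-1])
```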

  1. Emerging Cyber Infrastructure for NASA's Large-Scale Climate Data Analytics

    NASA Astrophysics Data System (ADS)

    Duffy, D.; Spear, C.; Bowen, M. K.; Thompson, J. H.; Hu, F.; Yang, C. P.; Pierce, D.

    2016-12-01

    The resolution of NASA climate and weather simulations has grown dramatically over the past few years, with the highest-fidelity models reaching down to 1.5 km global resolution. With each doubling of the resolution, the resulting data sets grow by a factor of eight in size. As the climate and weather models push the envelope even further, a new infrastructure to store data and provide large-scale data analytics is necessary. The NASA Center for Climate Simulation (NCCS) has deployed the Data Analytics Storage Service (DASS) that combines scalable storage with the ability to perform in-situ analytics. Within this system, large, commonly used data sets are stored in a POSIX file system (write once/read many); examples of data stored include Landsat, MERRA2, observing system simulation experiments, and high-resolution downscaled reanalysis. The total size of this repository is on the order of 15 petabytes of storage. In addition to the POSIX file system, the NCCS has deployed file system connectors to enable emerging analytics built on top of the Hadoop Distributed File System (HDFS) to run on the same storage servers within the DASS. Coupled with a custom spatiotemporal indexing approach, users can now run emerging analytical operations built on MapReduce and Spark on the same data files stored within the POSIX file system without having to make additional copies. This presentation will discuss the architecture of this system and present benchmark performance measurements from traditional TeraSort and Wordcount to large-scale climate analytical operations on NetCDF data.
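
    Running analytics such as MapReduce or Spark directly against files held in the POSIX file system amounts to distributing a per-file computation across workers that read the data in place. The sketch below shows that pattern with PySpark and netCDF4; the file paths and variable name are hypothetical, and the NCCS connectors and spatiotemporal index are not represented.

```python
from pyspark.sql import SparkSession
import numpy as np
from netCDF4 import Dataset

# Hypothetical list of NetCDF granules on the shared file system, and a
# hypothetical variable name; both would come from the real archive layout.
files = ["/dass/merra2/1980/jan.nc4", "/dass/merra2/1980/feb.nc4"]
VARIABLE = "T2M"

def file_mean(path):
    """Per-granule mean of one variable, computed where the data lives."""
    with Dataset(path) as ds:
        return path, float(np.mean(ds.variables[VARIABLE][:]))

spark = SparkSession.builder.appName("netcdf-means").getOrCreate()
means = spark.sparkContext.parallelize(files).map(file_mean).collect()
for path, value in means:
    print(path, value)
spark.stop()
```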

  2. Open | SpeedShop: An Open Source Infrastructure for Parallel Performance Analysis

    DOE PAGES

    Schulz, Martin; Galarowicz, Jim; Maghrak, Don; ...

    2008-01-01

    Over the last decades a large number of performance tools have been developed to analyze and optimize high-performance applications. Their acceptance by end users, however, has been slow: each tool alone is often limited in scope and comes with widely varying interfaces and workflow constraints, requiring different changes in the often complex build and execution infrastructure of the target application. We started the Open | SpeedShop project about 3 years ago to overcome these limitations and provide efficient, easy to apply, and integrated performance analysis for parallel systems. Open | SpeedShop has two different faces: it provides an interoperable tool set covering the most common analysis steps as well as a comprehensive plugin infrastructure for building new tools. In both cases, the tools can be deployed to large-scale parallel applications using DPCL/Dyninst for distributed binary instrumentation. Further, all tools developed within or on top of Open | SpeedShop are accessible through multiple fully equivalent interfaces, including an easy-to-use GUI as well as an interactive command line interface, reducing the usage threshold for those tools.

  3. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. II. Linear scaling domain based pair natural orbital coupled cluster theory

    NASA Astrophysics Data System (ADS)

    Riplinger, Christoph; Pinski, Peter; Becker, Ute; Valeev, Edward F.; Neese, Frank

    2016-01-01

    Domain based local pair natural orbital coupled cluster theory with single-, double-, and perturbative triple excitations (DLPNO-CCSD(T)) is a highly efficient local correlation method. It is known to be accurate and robust and can be used in a black box fashion in order to obtain coupled cluster quality total energies for large molecules with several hundred atoms. While previous implementations showed near linear scaling up to a few hundred atoms, several nonlinear scaling steps limited the applicability of the method for very large systems. In this work, these limitations are overcome and a linear scaling DLPNO-CCSD(T) method for closed shell systems is reported. The new implementation is based on the concept of sparse maps that was introduced in Part I of this series [P. Pinski, C. Riplinger, E. F. Valeev, and F. Neese, J. Chem. Phys. 143, 034108 (2015)]. Using the sparse map infrastructure, all essential computational steps (integral transformation and storage, initial guess, pair natural orbital construction, amplitude iterations, triples correction) are achieved in a linear scaling fashion. In addition, a number of additional algorithmic improvements are reported that lead to significant speedups of the method. The new, linear-scaling DLPNO-CCSD(T) implementation typically is 7 times faster than the previous implementation and consumes 4 times less disk space for large three-dimensional systems. For linear systems, the performance gains and memory savings are substantially larger. Calculations with more than 20 000 basis functions and 1000 atoms are reported in this work. In all cases, the time required for the coupled cluster step is comparable to or lower than for the preceding Hartree-Fock calculation, even if this is carried out with the efficient resolution-of-the-identity and chain-of-spheres approximations. The new implementation even reduces the error in absolute correlation energies by about a factor of two, compared to the already accurate previous implementation.
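
    The sparse-map idea can be illustrated with a toy data structure (an assumption-laden sketch, not the actual implementation): each localized orbital keeps only its nearby partners, so later pair loops touch a number of entries that grows linearly with system size.

        import numpy as np

        def build_sparse_map(centroids, cutoff=5.0):
            # Toy sparse map: orbital index -> list of spatially close partners.
            # (A production code would use a neighbor list to keep even this
            # construction step linear scaling; the double loop is for clarity.)
            n = len(centroids)
            sparse_map = {i: [] for i in range(n)}
            for i in range(n):
                for j in range(i + 1, n):
                    if np.linalg.norm(centroids[i] - centroids[j]) < cutoff:
                        sparse_map[i].append(j)
                        sparse_map[j].append(i)
            return sparse_map

        def significant_pairs(sparse_map):
            # Enumerate only the surviving pairs; distant pairs are never visited.
            for i, partners in sparse_map.items():
                for j in partners:
                    if j > i:
                        yield (i, j)

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            centroids = rng.uniform(0.0, 50.0, size=(200, 3))  # pseudo orbital centers
            smap = build_sparse_map(centroids)
            print("pairs kept:", sum(1 for _ in significant_pairs(smap)),
                  "out of", 200 * 199 // 2)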

  4. Infrastructure system restoration planning using evolutionary algorithms

    USGS Publications Warehouse

    Corns, Steven; Long, Suzanna K.; Shoberg, Thomas G.

    2016-01-01

    This paper presents an evolutionary algorithm to address restoration issues for supply chain interdependent critical infrastructure. Rapid restoration of infrastructure after a large-scale disaster is necessary to sustain a nation's economy and security, but such long-term restoration has not been investigated as thoroughly as initial rescue and recovery efforts. A model of the Greater Saint Louis Missouri area was created and a disaster scenario simulated. An evolutionary algorithm is used to determine the order in which the bridges should be repaired based on indirect costs. Solutions were evaluated based on the reduction of indirect costs and the restoration of transportation capacity. When compared to a greedy algorithm, the evolutionary algorithm solution reduced indirect costs by approximately 12.4% by restoring automotive travel routes for workers and re-establishing the flow of commodities across the three rivers in the Saint Louis area.
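
    A minimal sketch of the kind of permutation-based evolutionary search described above follows; the cost model and numbers are invented for illustration and are not the paper's Saint Louis model.

        import random

        def indirect_cost(order, daily_cost):
            # Toy objective: a bridge keeps incurring its daily indirect cost until
            # its turn in the repair sequence arrives (one repair per period).
            return sum(daily_cost[b] * day for day, b in enumerate(order, start=1))

        def evolve(daily_cost, pop_size=60, generations=200, seed=0):
            rng = random.Random(seed)
            n = len(daily_cost)
            pop = [rng.sample(range(n), n) for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=lambda o: indirect_cost(o, daily_cost))
                survivors = pop[: pop_size // 2]
                children = []
                while len(survivors) + len(children) < pop_size:
                    child = rng.choice(survivors)[:]
                    i, j = rng.sample(range(n), 2)
                    child[i], child[j] = child[j], child[i]  # swap mutation
                    children.append(child)
                pop = survivors + children
            best = min(pop, key=lambda o: indirect_cost(o, daily_cost))
            return best, indirect_cost(best, daily_cost)

        if __name__ == "__main__":
            costs = [120, 45, 300, 80, 150, 60, 220, 90, 10, 400]  # hypothetical $/day
            order, cost = evolve(costs)
            print("repair order:", order, "total indirect cost:", cost)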

  5. Access Control Management for SCADA Systems

    NASA Astrophysics Data System (ADS)

    Hong, Seng-Phil; Ahn, Gail-Joon; Xu, Wenjuan

    The information technology revolution has transformed all aspects of our society, including critical infrastructures, and has led to a significant shift from their old and disparate business models based on proprietary and legacy environments to more open and consolidated ones. Supervisory Control and Data Acquisition (SCADA) systems have been widely used not only for industrial processes but also for some experimental facilities. Due to the nature of open environments, managing SCADA systems should meet various security requirements, since system administrators need to deal with a large number of entities and functions involved in critical infrastructures. In this paper, we identify necessary access control requirements in SCADA systems and articulate access control policies for the simulated SCADA systems. We also attempt to analyze and realize those requirements and policies in the context of role-based access control, which is suitable for simplifying administrative tasks in large-scale enterprises.
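
    A minimal role-based access control check in the spirit of the requirements above might look like the following sketch; the roles, permissions, and user assignments are illustrative only and are not taken from the paper.

        # role -> set of permitted SCADA actions (hypothetical)
        ROLE_PERMISSIONS = {
            "operator": {"read_telemetry", "acknowledge_alarm"},
            "engineer": {"read_telemetry", "acknowledge_alarm", "change_setpoint"},
            "admin":    {"read_telemetry", "manage_users", "configure_rtu"},
        }

        # user -> assigned roles (hypothetical)
        USER_ROLES = {
            "alice": {"engineer"},
            "bob":   {"operator"},
        }

        def is_permitted(user, permission):
            # A user may perform an action if any of their roles grants it.
            return any(permission in ROLE_PERMISSIONS.get(role, set())
                       for role in USER_ROLES.get(user, set()))

        if __name__ == "__main__":
            print(is_permitted("alice", "change_setpoint"))  # True
            print(is_permitted("bob", "change_setpoint"))    # False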

  6. A spatial model to aggregate point-source and nonpoint-source water-quality data for large areas

    USGS Publications Warehouse

    White, D.A.; Smith, R.A.; Price, C.V.; Alexander, R.B.; Robinson, K.W.

    1992-01-01

    More objective and consistent methods are needed to assess water quality for large areas. A spatial model that capitalizes on the topologic relationships among spatial entities is described for aggregating pollution sources from upstream drainage areas; it can be implemented on land surfaces having heterogeneous water-pollution effects. An infrastructure of stream networks and drainage basins, derived from 1:250,000-scale digital elevation models, defines the hydrologic system in this spatial model. The spatial relationships between point- and nonpoint-source pollution and measurement locations are referenced to the hydrologic infrastructure with the aid of a geographic information system. A maximum-branching algorithm has been developed to simulate the effect of distance from a pollutant source to an arbitrary downstream location, a function traditionally employed in deterministic water-quality models. © 1992.
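
    The flavor of such upstream aggregation with distance decay can be sketched as follows (a simplified, assumption-based illustration, not the model in the paper): each reach's local load is attenuated exponentially over the distance it travels down the network before reaching a target reach.

        import math

        # reach -> (downstream reach, reach length in km); None marks the outlet.
        NETWORK = {
            "A": ("C", 12.0),
            "B": ("C", 8.0),
            "C": ("D", 15.0),
            "D": (None, 0.0),
        }
        LOCAL_LOAD = {"A": 100.0, "B": 40.0, "C": 10.0, "D": 5.0}  # kg/day, illustrative
        DECAY = 0.05  # first-order loss rate per km, illustrative

        def delivered_load(target):
            # Sum every reach's local load, attenuated over its travel distance
            # along the network to the target reach.
            total = 0.0
            for reach, load in LOCAL_LOAD.items():
                node, dist = reach, 0.0
                while node is not None and node != target:
                    down, length = NETWORK[node]
                    dist += length
                    node = down
                if node == target:
                    total += load * math.exp(-DECAY * dist)
            return total

        if __name__ == "__main__":
            print(f"load delivered to outlet D: {delivered_load('D'):.1f} kg/day")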

  7. Cloud Computing for Complex Performance Codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Appel, Gordon John; Hadgu, Teklu; Klein, Brandon Thorin

    This report describes the use of cloud computing services for running complex public domain performance assessment problems. The work consisted of two phases: Phase 1 demonstrated that complex codes, on several differently configured servers, could run and compute trivial small-scale problems in a commercial cloud infrastructure. Phase 2 focused on proving that non-trivial large-scale problems could be computed in the commercial cloud environment. The cloud computing effort was successfully applied using codes of interest to the geohydrology and nuclear waste disposal modeling community.

  8. LAGUNA DESIGN STUDY, Underground infrastructures and engineering

    NASA Astrophysics Data System (ADS)

    Nuijten, Guido Alexander

    2011-07-01

    The European Commission awarded the LAGUNA project a grant of 1.7 million euro in 2008 for a Design Study under the Seventh Framework Programme of research and technology development (FP7-INFRASTRUCTURES-2007-1). The purpose of this two-year work is to study the feasibility of the considered experiments and prepare a conceptual design of the required underground infrastructure. It is due to deliver a report that allows the funding agencies to decide on the realization of the experiment and to select the site and the technology. The result of this work is the first step towards fulfilling the goals of LAGUNA, and the work will continue with EU funding to study the possibilities more thoroughly. The LAGUNA project is included in the future plans prepared by the European funding organizations for astroparticle physics in Europe. It is recommended that a new large European infrastructure be put forward as a future international multi-purpose facility for improved studies of proton decay and of low-energy neutrinos of astrophysical origin. The three detection techniques being studied for such large detectors in Europe, water Cherenkov (like MEMPHYS), liquid scintillator (like LENA), and liquid argon (like GLACIER), are evaluated in the context of a common design study, which should also address the underground infrastructure and the possibility of an eventual detection of future accelerator neutrino beams. The design study is also to take into account worldwide efforts and to converge, on a time scale of 2010, to a common proposal.

  9. Large scale distribution monitoring of FRP-OF based on BOTDR technique for infrastructures

    NASA Astrophysics Data System (ADS)

    Zhou, Zhi; He, Jianping; Yan, Kai; Ou, Jinping

    2007-04-01

    The BOTDA(R) sensing technique is considered one of the most practical solutions for instrumenting large-sized structures. However, a big obstacle to applying BOTDA(R) over large-scale areas remains: the high cost and limited reliability of the sensing head, which are associated with sensor installation and survival. In this paper, we report a novel low-cost and highly reliable BOTDA(R) sensing head using an FRP (Fiber Reinforced Polymer)-bare optical fiber rebar, named BOTDA(R)-FRP-OF. We investigated the surface bonding and its mechanical strength through SEM and strength experiments. Because the strain difference between the OF and the host matrix may result in measurement error, the strain transfer from host to OF has been studied theoretically. Furthermore, the strain and temperature sensing properties of GFRP-OFs at different gauge lengths were tested under different spatial and readout resolutions using a commercial BOTDA instrument. A dual FRP-OF temperature compensation method has also been proposed and analyzed. Finally, BOTDA(R)-OFs have been applied to the Tiyu West Road civil structure in Guangzhou and to the Daqing Highway. This novel FRP-OF rebar shows both high strength and good sensing properties, and it can be used in long-term SHM for civil infrastructures.
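
    The dual-fiber temperature compensation mentioned above can be sketched as follows; the Brillouin strain and temperature coefficients below are placeholders, not the calibrated values for the FRP-OF rebar in the paper. The bonded fiber senses strain plus temperature, while a loose fiber senses temperature only, so the thermal contribution can be subtracted.

        C_EPS = 0.05e-3  # GHz per microstrain (illustrative)
        C_T = 1.0e-3     # GHz per degree C (illustrative)

        def strain_from_shifts(d_nu_bonded, d_nu_loose):
            # d_nu_bonded = C_EPS * strain + C_T * dT; d_nu_loose = C_T * dT,
            # so the mechanical strain follows from their difference.
            return (d_nu_bonded - d_nu_loose) / C_EPS

        if __name__ == "__main__":
            # Example: bonded fiber shifted by 0.035 GHz, loose fiber by 0.010 GHz.
            print(f"strain = {strain_from_shifts(0.035, 0.010):.0f} microstrain")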

  10. Gaussian processes for personalized e-health monitoring with wearable sensors.

    PubMed

    Clifton, Lei; Clifton, David A; Pimentel, Marco A F; Watkinson, Peter J; Tarassenko, Lionel

    2013-01-01

    Advances in wearable sensing and communications infrastructure have allowed the widespread development of prototype medical devices for patient monitoring. However, such devices have not penetrated into clinical practice, primarily due to a lack of research into "intelligent" analysis methods that are sufficiently robust to support large-scale deployment. Existing systems are typically plagued by large false-alarm rates and an inability to cope with sensor artifact in a principled manner. This paper has two aims: 1) proposal of a novel, patient-personalized system for analysis and inference in the presence of data uncertainty, typically caused by sensor artifact and data incompleteness; 2) demonstration of the method using a large-scale clinical study in which 200 patients have been monitored using the proposed system. The latter provides much-needed evidence that personalized e-health monitoring is feasible within an actual clinical environment, at scale, and that the method is capable of improving patient outcomes via personalized healthcare.
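
    A bare-bones Gaussian process regression over a vital-sign time series, flagging readings far outside the predicted band, might look like the sketch below; the kernel, noise level, and data are illustrative and are not the patient-personalized settings from the paper.

        import numpy as np

        def rbf(a, b, length=2.0, var=1.0):
            # Squared-exponential covariance between two sets of time points.
            d = a.reshape(-1, 1) - b.reshape(1, -1)
            return var * np.exp(-0.5 * (d / length) ** 2)

        def gp_predict(t_train, y_train, t_test, noise=1.0):
            # Standard GP regression equations with a Gaussian noise model.
            K = rbf(t_train, t_train) + noise**2 * np.eye(len(t_train))
            Ks = rbf(t_test, t_train)
            Kss = rbf(t_test, t_test)
            alpha = np.linalg.solve(K, y_train)
            mean = Ks @ alpha
            cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
            return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None) + noise**2)

        if __name__ == "__main__":
            t = np.arange(0.0, 24.0, 1.0)  # hourly heart-rate samples (synthetic)
            y = 70 + 5 * np.sin(t / 4) + np.random.default_rng(0).normal(0, 1, t.size)
            y[15] = 110                    # injected artifact / deterioration
            mean, std = gp_predict(np.delete(t, 15), np.delete(y, 15), t)
            flags = np.abs(y - mean) > 3 * std
            print("flagged hours:", np.where(flags)[0])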

  11. Towards Large-Scale, Non-Destructive Inspection of Concrete Bridges

    NASA Astrophysics Data System (ADS)

    Mahmoud, A.; Shah, A. H.; Popplewell, N.

    2005-04-01

    It is estimated that the rehabilitation of deteriorating engineering infrastructure in the harsh North American environment could cost billions of dollars. Bridges are key infrastructure components for surface transportation. Steel-free and fibre-reinforced concrete is used increasingly nowadays to circumvent the vulnerability of steel rebar to corrosion. Existing steel-free and fibre-reinforced bridges may experience extensive surface-breaking cracks that need to be characterized without incurring further damage. In the present study, a method that uses Lamb elastic wave propagation to non-destructively characterize cracks in plain as well as fibre-reinforced concrete is investigated both numerically and experimentally. Numerical and experimental data are corroborated with good agreement.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michael Schmitt; Juan Deaton; Curt Papke

    In the event of large-scale natural or manmade catastrophic events, access to reliable and enduring commercial communication systems is critical. Hurricane Katrina provided a recent example of the need to ensure communications during a national emergency. To ensure that communication demands are met during these critical times, Idaho National Laboratory (INL), under the guidance of United States Strategic Command, has studied infrastructure issues, concerns, and vulnerabilities associated with an airborne wireless communications capability. Such a capability could provide emergency wireless communications until public/commercial nodes can be systematically restored. This report focuses on the airborne cellular restoration concept, analyzes basic infrastructure requirements, identifies related infrastructure issues, concerns, and vulnerabilities, and offers recommended solutions.

  13. Design considerations for implementation of large scale automatic meter reading systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mak, S.; Radford, D.

    1995-01-01

    This paper discusses the requirements imposed on the design of an AMR system expected to serve a large (> 1 million) customer base spread over a large geographical area. Issues such as system throughput, response time, and multi-application expandability are addressed, all of which are intimately dependent on the underlying communication system infrastructure, the local geography, the customer base, and the regulatory environment. A methodology for the analysis, assessment, and design of large systems is presented. For illustration, two communication systems -- a low-power RF/PLC system and a power-frequency carrier system -- are analyzed and discussed.
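
    A back-of-the-envelope sizing exercise of the kind implied above can be written in a few lines; the numbers are illustrative assumptions, not values from the paper, and simply show how customer base, message size, and channel rate drive read-cycle time.

        meters = 1_200_000          # customer base (> 1 million)
        reads_per_day = 1           # daily register read per meter
        message_bits = 400          # payload plus protocol overhead per reading
        channel_bps = 9_600         # raw shared channel rate
        channel_efficiency = 0.25   # collisions, retries, duty-cycle limits

        useful_bps = channel_bps * channel_efficiency
        total_bits = meters * reads_per_day * message_bits
        cycle_hours = total_bits / useful_bps / 3600

        print(f"one full read cycle: {cycle_hours:.1f} hours on a single channel")
        print(f"channels needed for a 6-hour window: {cycle_hours / 6:.1f}")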

  14. Tools for Large-Scale Data Analytic Examination of Relational and Epistemic Networks in Engineering Education

    ERIC Educational Resources Information Center

    Madhavan, Krishna; Johri, Aditya; Xian, Hanjun; Wang, G. Alan; Liu, Xiaomo

    2014-01-01

    The proliferation of digital information technologies and related infrastructure has given rise to novel ways of capturing, storing and analyzing data. In this paper, we describe the research and development of an information system called Interactive Knowledge Networks for Engineering Education Research (iKNEER). This system utilizes a framework…

  15. Fusion of Remote Sensing and Non-Authoritative Data for Flood Disaster and Transportation Infrastructure Assessment

    ERIC Educational Resources Information Center

    Schnebele, Emily K.

    2013-01-01

    Flooding is the most frequently occurring natural hazard on Earth; with catastrophic, large scale floods causing immense damage to people, property, and the environment. Over the past 20 years, remote sensing has become the standard technique for flood identification because of its ability to offer synoptic coverage. Unfortunately, remote sensing…

  16. A Year of Progress in School-to-Career System Building. The Benchmark Communities Initiative.

    ERIC Educational Resources Information Center

    Martinez, Martha I.; And Others

    This document examines the first year of Jobs for the Future's Benchmark Communities Initiative (BCI), a 5-year effort to achieve the following: large-scale systemic restructuring of K-16 educational systems; involvement of significant numbers of employers in work and learning partnerships; and development of the infrastructure necessary to…

  17. Knowledge Co-production at the Research-Practice Interface: Embedded Case Studies from Urban Forestry

    Treesearch

    Lindsay K. Campbell; Erika S. Svendsen; Lara A. Roman

    2016-01-01

    Cities are increasingly engaging in sustainability efforts and investment in green infrastructure, including large-scale urban tree planting campaigns. In this context, researchers and practitioners are working jointly to develop applicable knowledge for planning and managing the urban forest. This paper presents three case studies of knowledge co-production in the...

  18. Developing Server-Side Infrastructure for Large-Scale E-Learning of Web Technology

    ERIC Educational Resources Information Center

    Simpkins, Neil

    2010-01-01

    The growth of E-business has made experience in server-side technology an increasingly important area for educators. Server-side skills are in increasing demand and recognised to be of relatively greater value than comparable client-side aspects (Ehie, 2002). In response to this, many educational organisations have developed E-business courses,…

  19. Towards sustainable infrastructure management: knowledge-based service-oriented computing framework for visual analytics

    NASA Astrophysics Data System (ADS)

    Vatcha, Rashna; Lee, Seok-Won; Murty, Ajeet; Tolone, William; Wang, Xiaoyu; Dou, Wenwen; Chang, Remco; Ribarsky, William; Liu, Wanqiu; Chen, Shen-en; Hauser, Edd

    2009-05-01

    Infrastructure management (and its associated processes) is complex to understand and perform, which makes it hard to reach efficient, effective, and informed decisions. Management involves a multi-faceted operation that requires robust data fusion, visualization, and decision making. In order to protect and build sustainable critical assets, we present our on-going multi-disciplinary large-scale project that establishes the Integrated Remote Sensing and Visualization (IRSV) system with a focus on supporting bridge structure inspection and management. This project involves specific expertise from civil engineers, computer scientists, geographers, and real-world practitioners from industry and local and federal government agencies. IRSV is being designed to accommodate essential needs from the following aspects: 1) better understanding and enforcement of the complex inspection process, bridging the gap between evidence gathering and decision making through the implementation of an ontological knowledge engineering system; 2) aggregation, representation, and fusion of complex multi-layered heterogeneous data (e.g., infrared imaging, aerial photos, and ground-mounted LIDAR) with domain application knowledge to support a machine-understandable recommendation system; 3) robust visualization techniques with large-scale analytical and interactive visualizations that support users' decision making; and 4) integration of these needs through a flexible Service-Oriented Architecture (SOA) framework to compose and provide services on demand. IRSV is expected to serve as a management and data visualization tool for construction deliverable assurance and infrastructure monitoring, both periodically (annually, monthly, or even daily if needed) and after extreme events.

  20. Development of Bioinformatics Infrastructure for Genomics Research.

    PubMed

    Mulder, Nicola J; Adebiyi, Ezekiel; Adebiyi, Marion; Adeyemi, Seun; Ahmed, Azza; Ahmed, Rehab; Akanle, Bola; Alibi, Mohamed; Armstrong, Don L; Aron, Shaun; Ashano, Efejiro; Baichoo, Shakuntala; Benkahla, Alia; Brown, David K; Chimusa, Emile R; Fadlelmola, Faisal M; Falola, Dare; Fatumo, Segun; Ghedira, Kais; Ghouila, Amel; Hazelhurst, Scott; Isewon, Itunuoluwa; Jung, Segun; Kassim, Samar Kamal; Kayondo, Jonathan K; Mbiyavanga, Mamana; Meintjes, Ayton; Mohammed, Somia; Mosaku, Abayomi; Moussa, Ahmed; Muhammd, Mustafa; Mungloo-Dilmohamud, Zahra; Nashiru, Oyekanmi; Odia, Trust; Okafor, Adaobi; Oladipo, Olaleye; Osamor, Victor; Oyelade, Jellili; Sadki, Khalid; Salifu, Samson Pandam; Soyemi, Jumoke; Panji, Sumir; Radouani, Fouzia; Souiai, Oussama; Tastan Bishop, Özlem

    2017-06-01

    Although pockets of bioinformatics excellence have developed in Africa, generally, large-scale genomic data analysis has been limited by the availability of expertise and infrastructure. H3ABioNet, a pan-African bioinformatics network, was established to build capacity specifically to enable H3Africa (Human Heredity and Health in Africa) researchers to analyze their data in Africa. Since the inception of the H3Africa initiative, H3ABioNet's role has evolved in response to changing needs from the consortium and the African bioinformatics community. H3ABioNet set out to develop core bioinformatics infrastructure and capacity for genomics research in various aspects of data collection, transfer, storage, and analysis. Various resources have been developed to address genomic data management and analysis needs of H3Africa researchers and other scientific communities on the continent. NetMap was developed and used to build an accurate picture of network performance within Africa and between Africa and the rest of the world, and Globus Online has been rolled out to facilitate data transfer. A participant recruitment database was developed to monitor participant enrollment, and data is being harmonized through the use of ontologies and controlled vocabularies. The standardized metadata will be integrated to provide a search facility for H3Africa data and biospecimens. Because H3Africa projects are generating large-scale genomic data, facilities for analysis and interpretation are critical. H3ABioNet is implementing several data analysis platforms that provide a large range of bioinformatics tools or workflows, such as Galaxy, the Job Management System, and eBiokits. A set of reproducible, portable, and cloud-scalable pipelines to support the multiple H3Africa data types is also being developed and dockerized to enable execution on multiple computing infrastructures. In addition, new tools have been developed for analysis of the uniquely divergent African data and for downstream interpretation of prioritized variants. To provide support for these and other bioinformatics queries, an online bioinformatics helpdesk backed by broad consortium expertise has been established. Further support is provided by means of various modes of bioinformatics training. For the past 4 years, the development of infrastructure support and human capacity through H3ABioNet has significantly contributed to the establishment of African scientific networks, data analysis facilities, and training programs. Here, we describe the infrastructure and how it has affected genomics and bioinformatics research in Africa.

  1. International Symposium on Grids and Clouds (ISGC) 2016

    NASA Astrophysics Data System (ADS)

    The International Symposium on Grids and Clouds (ISGC) 2016 will be held at Academia Sinica in Taipei, Taiwan from 13-18 March 2016, with co-located events and workshops. The conference is hosted by the Academia Sinica Grid Computing Centre (ASGC). The theme of ISGC 2016 focuses on "Ubiquitous e-infrastructures and Applications". Contemporary research is impossible without a strong IT component: researchers rely on the existence of stable and widely available e-infrastructures and their higher level functions and properties. As a result of these expectations, e-Infrastructures are becoming ubiquitous, providing an environment that supports large-scale collaborations that deal with global challenges as well as smaller and temporary research communities focusing on particular scientific problems. To support those diversified communities and their needs, the e-Infrastructures themselves are becoming more layered and multifaceted, supporting larger groups of applications. Following last year's conference, ISGC 2016 continues its aim to bring together users and application developers with those responsible for the development and operation of multi-purpose ubiquitous e-Infrastructures. Topics of discussion include Physics (including HEP) and Engineering Applications, Biomedicine & Life Sciences Applications, Earth & Environmental Sciences & Biodiversity Applications, Humanities, Arts, and Social Sciences (HASS) Applications, Virtual Research Environment (including Middleware, tools, services, workflow, etc.), Data Management, Big Data, Networking & Security, Infrastructure & Operations, Infrastructure Clouds and Virtualisation, Interoperability, Business Models & Sustainability, Highly Distributed Computing Systems, and High Performance & Technical Computing (HPTC), etc.

  2. Building a Community Infrastructure for Scalable On-Line Performance Analysis Tools around Open|Speedshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Barton

    2014-06-30

    Peta-scale computing environments pose significant challenges for both system and application developers, and addressing them required more than simply scaling up existing tera-scale solutions. Performance analysis tools play an important role in gaining this understanding, but previous monolithic tools with fixed feature sets have not sufficed. Instead, this project worked on the design, implementation, and evaluation of a general, flexible tool infrastructure supporting the construction of performance tools as "pipelines" of high-quality tool building blocks. These tool building blocks provide common performance tool functionality, and are designed for scalability, lightweight data acquisition and analysis, and interoperability. For this project, we built on Open|SpeedShop, a modular and extensible open source performance analysis tool set. The design and implementation of such a general and reusable infrastructure targeted for petascale systems required us to address several challenging research issues. All components needed to be designed for scale, a task made more difficult by the need to provide general modules. The infrastructure needed to support online data aggregation to cope with the large amounts of performance and debugging data. We needed to be able to map any combination of tool components to each target architecture. And we needed to design interoperable tool APIs and workflows that were concrete enough to support the required functionality, yet provide the necessary flexibility to address a wide range of tools. A major result of this project is the ability to use this scalable infrastructure to quickly create tools that match with a machine architecture and a performance problem that needs to be understood. Another benefit is the ability for application engineers to use the highly scalable, interoperable version of Open|SpeedShop, which is reassembled from the tool building blocks into a flexible, multi-user interface set of tools. This set of tools is targeted at Office of Science Leadership Class computer systems and selected Office of Science application codes. We describe the contributions made by the team at the University of Wisconsin. The project built on the efforts in Open|SpeedShop funded by DOE/NNSA and the DOE/NNSA Tri-Lab community, extended Open|Speedshop to the Office of Science Leadership Class Computing Facilities, and addressed new challenges found on these cutting edge systems. Work done under this project at Wisconsin can be divided into two categories: new algorithms and techniques for debugging, and foundation infrastructure work on our Dyninst binary analysis and instrumentation toolkits and MRNet scalability infrastructure.

  3. Distributed coaxial cable crack sensors for crack mapping in RC

    NASA Astrophysics Data System (ADS)

    Greene, Gary G.; Belarbi, Abdeldjelil; Chen, Genda; McDaniel, Ryan

    2005-05-01

    A new type of distributed coaxial cable sensor for health monitoring of large-scale civil infrastructure was recently proposed and developed by the authors. This paper shows the results and performance of such sensors mounted on the near surface of two flexural beams and a large-scale reinforced concrete box girder that was subjected to twenty cycles of combined shear and torsion. The main objectives of this health monitoring study were to correlate the sensor's response to strain in the member and to show that the magnitude of the signal's reflection coefficient is related to increases in applied load, repeated cycles, cracking, crack mapping, and yielding. The effect of multiple adjacent cracks and signal loss was also investigated.

  4. Cost-Efficient Storage of Cryogens

    NASA Technical Reports Server (NTRS)

    Fesmire, J. E.; Sass, J. P.; Nagy, Z.; Sojoumer, S. J.; Morris, D. L.; Augustynowicz, S. D.

    2007-01-01

    NASA's cryogenic infrastructure that supports launch vehicle operations and propulsion testing is reaching an age where major refurbishment will soon be required. Key elements of this infrastructure are the large double-walled cryogenic storage tanks used for both space vehicle launch operations and rocket propulsion testing at the various NASA field centers. Perlite powder has historically been the insulation material of choice for these large storage tank applications. New bulk-fill insulation materials, including glass bubbles and aerogel beads, have been shown to provide improved thermal and mechanical performance. A research testing program was conducted to investigate the thermal performance benefits as well as to identify operational considerations and risks associated with the application of these new materials in large cryogenic storage tanks. The program was divided into three main areas: material testing (thermal conductivity and physical characterization), tank demonstration testing (liquid nitrogen and liquid hydrogen), and system studies (thermal modeling, economic analysis, and insulation changeout). The results of this research work show that more energy-efficient insulation solutions are possible for large-scale cryogenic storage tanks worldwide and summarize the operational requirements that should be considered for these applications.

  5. Final Report Scalable Analysis Methods and In Situ Infrastructure for Extreme Scale Knowledge Discovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Leary, Patrick

    The primary challenge motivating this project is the widening gap between the ability to compute information and to store it for subsequent analysis. This gap adversely impacts science code teams, who can perform analysis only on a small fraction of the data they calculate, resulting in the substantial likelihood of lost or missed science, when results are computed but not analyzed. Our approach is to perform as much analysis or visualization processing on data while it is still resident in memory, which is known as in situ processing. The idea of in situ processing was not new at the time of the start of this effort in 2014, but efforts in that space were largely ad hoc, and there was no concerted effort within the research community that aimed to foster production-quality software tools suitable for use by Department of Energy (DOE) science projects. Our objective was to produce and enable the use of production-quality in situ methods and infrastructure, at scale, on DOE high-performance computing (HPC) facilities, though we expected to have an impact beyond DOE due to the widespread nature of the challenges, which affect virtually all large-scale computational science efforts. To achieve this objective, we engaged in software technology research and development (R&D), in close partnerships with DOE science code teams, to produce software technologies that were shown to run efficiently at scale on DOE HPC platforms.

  6. Re-evaluating estimates of impervious cover and riparian zone condition in New England watersheds: Green infrastructure effectiveness at the watershed scale

    EPA Science Inventory

    Under EPA’s Green Infrastructure Initiative, a variety of research activities are underway to evaluate the effectiveness of green infrastructure in mitigating the effects of urbanization and stormwater impacts on stream biota and habitat. Effectiveness of both site-scale s...

  7. Scale Development of Individual and Organisation Infrastructure for Heart Health Promotion in Regional Health Authorities

    ERIC Educational Resources Information Center

    Plotnikoff, Ronald C.; Anderson, Donna; Raine, Kim; Cook, Kay; Barrett, Linda; Prodaniuk, Tricia R.

    2005-01-01

    Objective: The purpose of this study was to validate measures of individual and organisational infrastructure for health promotion within Alberta's (Canada) 17 Regional Health Authorities (RHAs). Design: A series of phases were conducted to develop individual and organisational scales to measure health promotion infrastructure. Instruments were…

  8. The role of trees in urban stormwater management

    EPA Pesticide Factsheets

    Urban impervious surfaces convert precipitation to stormwater runoff, which causes water quality and quantity problems. While traditional stormwater management has relied on gray infrastructure such as piped conveyances to collect and convey stormwater to wastewater treatment facilities or into surface waters, cities are exploring green infrastructure to manage stormwater at its source. Decentralized green infrastructure leverages the capabilities of soil and vegetation to infiltrate, redistribute, and otherwise store stormwater volume, with the potential to realize ancillary environmental, social, and economic benefits. To date, green infrastructure science and practice have largely focused on infiltration-based technologies that include rain gardens, bioswales, and permeable pavements. However, a narrow focus on infiltration overlooks other losses from the hydrologic cycle, and we propose that arboriculture – the cultivation of trees and other woody plants – deserves additional consideration as a stormwater control measure. Trees interact with the urban hydrologic cycle by intercepting incoming precipitation, removing water from the soil via transpiration, enhancing infiltration, and bolstering the performance of other green infrastructure technologies. However, many of these interactions are inadequately understood, particularly at spatial and temporal scales relevant to stormwater management. As such, the reliable use of trees for stormwater control depe

  9. Can Economics Provide Insights into Trust Infrastructure?

    NASA Astrophysics Data System (ADS)

    Vishik, Claire

    Many security technologies require infrastructure for authentication, verification, and other processes. In many cases, viable and innovative security technologies are never adopted on a large scale because the necessary infrastructure is slow to emerge. Analyses of such technologies typically focus on their technical flaws, and research emphasizes innovative approaches to stronger implementation of the core features. However, it can be observed that in many cases the success of the adoption pattern depends on non-technical issues rather than technology: lack of economic incentives, difficulties in finding initial investment, and inadequate government support. While a growing body of research is dedicated to the economics of security and privacy in general, few theoretical studies in this area have been completed, and even fewer look at the economics of "trust infrastructure" beyond simple "cost of ownership" models. This exploratory paper takes a look at some approaches in theoretical economics to determine if they can provide useful insights into security infrastructure technologies and architectures that have the best chance of being adopted. We attempt to discover whether models used in theoretical economics can help inform technology developers of the optimal business models that offer a better chance for quick infrastructure deployment.

  10. Stormbow: A Cloud-Based Tool for Reads Mapping and Expression Quantification in Large-Scale RNA-Seq Studies

    PubMed Central

    Zhao, Shanrong; Prenger, Kurt; Smith, Lance

    2013-01-01

    RNA-Seq is becoming a promising replacement for microarrays in transcriptome profiling and differential gene expression studies. Technical improvements have decreased sequencing costs and, as a result, the size and number of RNA-Seq datasets have increased rapidly. However, the increasing volume of data from large-scale RNA-Seq studies poses a practical challenge for data analysis in a local environment. To meet this challenge, we developed Stormbow, a cloud-based software package, to process large volumes of RNA-Seq data in parallel. The performance of Stormbow has been tested by practically applying it to analyse 178 RNA-Seq samples in the cloud. In our test, it took 6 to 8 hours to process an RNA-Seq sample with 100 million reads, and the average cost was $3.50 per sample. Utilizing Amazon Web Services as the infrastructure for Stormbow allows us to easily scale up to handle large datasets with on-demand computational resources. Stormbow is a scalable, cost-effective, and open-source based tool for large-scale RNA-Seq data analysis. Stormbow can be freely downloaded and can be used out of the box to process Illumina RNA-Seq datasets. PMID:25937948
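
    The embarrassingly parallel pattern described above (one worker per sample) can be sketched as follows; this is not Stormbow itself, process_sample is a placeholder for the per-sample alignment and quantification step, and the time and cost constants simply echo the averages reported in the abstract.

        from concurrent.futures import ThreadPoolExecutor

        SAMPLES = [f"sample_{i:03d}" for i in range(178)]
        COST_PER_SAMPLE_USD = 3.50   # reported average cost per sample
        HOURS_PER_SAMPLE = 7.0       # midpoint of the reported 6-8 hours

        def process_sample(name):
            # Placeholder for per-sample read mapping and expression quantification.
            return name, "ok"

        def run(samples, workers=30):
            with ThreadPoolExecutor(max_workers=workers) as pool:
                results = list(pool.map(process_sample, samples))
            wall_clock_h = HOURS_PER_SAMPLE * (len(samples) / workers)
            return results, wall_clock_h, len(samples) * COST_PER_SAMPLE_USD

        if __name__ == "__main__":
            _, hours, cost = run(SAMPLES)
            print(f"approx wall clock: {hours:.0f} h, approx cost: ${cost:.2f}")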

  11. Stormbow: A Cloud-Based Tool for Reads Mapping and Expression Quantification in Large-Scale RNA-Seq Studies.

    PubMed

    Zhao, Shanrong; Prenger, Kurt; Smith, Lance

    2013-01-01

    RNA-Seq is becoming a promising replacement for microarrays in transcriptome profiling and differential gene expression studies. Technical improvements have decreased sequencing costs and, as a result, the size and number of RNA-Seq datasets have increased rapidly. However, the increasing volume of data from large-scale RNA-Seq studies poses a practical challenge for data analysis in a local environment. To meet this challenge, we developed Stormbow, a cloud-based software package, to process large volumes of RNA-Seq data in parallel. The performance of Stormbow has been tested by practically applying it to analyse 178 RNA-Seq samples in the cloud. In our test, it took 6 to 8 hours to process an RNA-Seq sample with 100 million reads, and the average cost was $3.50 per sample. Utilizing Amazon Web Services as the infrastructure for Stormbow allows us to easily scale up to handle large datasets with on-demand computational resources. Stormbow is a scalable, cost-effective, and open-source based tool for large-scale RNA-Seq data analysis. Stormbow can be freely downloaded and can be used out of the box to process Illumina RNA-Seq datasets.

  12. Collaboration and decision making tools for mobile groups

    NASA Astrophysics Data System (ADS)

    Abrahamyan, Suren; Balyan, Serob; Ter-Minasyan, Harutyun; Degtyarev, Alexander

    2017-12-01

    Nowadays the use of distributed collaboration tools is widespread in many areas of human activity, but a lack of mobility and dependence on specific equipment create difficulties and slow the development and integration of such technologies. Mobile technologies allow individuals to interact with each other without the need for traditional office spaces and regardless of location. Hence, the realization of special infrastructures on mobile platforms, with the help of ad-hoc wireless local networks, could eliminate hardware attachment and also be useful from a scientific point of view. Tools based on mobile infrastructures range from basic internet messengers to complex software for online collaboration in large-scale workgroups. Despite the growth of mobile infrastructures, applied distributed solutions for group decision-making and e-collaboration are not common. In this article we propose a software complex for real-time collaboration and decision-making based on mobile devices, describe its architecture, and evaluate its performance.

  13. GSDC: A Unique Data Center in Korea for HEP research

    NASA Astrophysics Data System (ADS)

    Ahn, Sang-Un

    2017-04-01

    Global Science experimental Data hub Center (GSDC) at the Korea Institute of Science and Technology Information (KISTI) is a unique data center in South Korea established to promote fundamental research fields by supporting them with expertise in Information and Communication Technology (ICT) and with infrastructure for High Performance Computing (HPC), High Throughput Computing (HTC), and networking. GSDC has supported various research fields in South Korea dealing with large-scale data, e.g. the RENO experiment for neutrino research, the LIGO experiment for gravitational-wave detection, a genome sequencing project for bio-medical research, and HEP experiments such as CDF at FNAL, Belle at KEK, and STAR at BNL. In particular, GSDC has run a Tier-1 center for the ALICE experiment using the LHC at CERN since 2013. In this talk, we present an overview of the computing infrastructure that GSDC runs for these research fields, and we discuss the data center infrastructure management system deployed at GSDC.

  14. Achievable steps toward building a National Health Information infrastructure in the United States.

    PubMed

    Stead, William W; Kelly, Brian J; Kolodner, Robert M

    2005-01-01

    Consensus is growing that a health care information and communication infrastructure is one key to fixing the crisis in the United States in health care quality, cost, and access. The National Health Information Infrastructure (NHII) is an initiative of the Department of Health and Human Services receiving bipartisan support. There are many possible courses toward its objective. Decision makers need to reflect carefully on which approaches are likely to work on a large enough scale to have the intended beneficial national impacts and which are better left to smaller projects within the boundaries of health care organizations. This report provides a primer for use by informatics professionals as they explain aspects of that dividing line to policy makers and to health care leaders and front-line providers. It then identifies short-term, intermediate, and long-term steps that might be taken by the NHII initiative.

  15. Building for the future: essential infrastructure for rodent ageing studies.

    PubMed

    Wells, Sara E; Bellantuono, Ilaria

    2016-08-01

    When planning ageing research using rodent models, the logistics of supply, long term housing and infrastructure provision are important factors to take into consideration. These issues need to be prioritised to ensure they meet the requirements of experiments which potentially will not be completed for several years. Although these issues are not unique to this discipline, the longevity of experiments and indeed the animals, requires a high level of consistency and sustainability to be maintained throughout lengthy periods of time. Moreover, the need to access aged stock or material for more immediate experiments poses many issues for the completion of pilot studies and/or short term intervention studies on older models. In this article, we highlight the increasing demand for ageing research, the resources and infrastructure involved, and the need for large-scale collaborative programmes to advance studies in both a timely and a cost-effective way.

  16. Achievable Steps Toward Building a National Health Information Infrastructure in the United States

    PubMed Central

    Stead, William W.; Kelly, Brian J.; Kolodner, Robert M.

    2005-01-01

    Consensus is growing that a health care information and communication infrastructure is one key to fixing the crisis in the United States in health care quality, cost, and access. The National Health Information Infrastructure (NHII) is an initiative of the Department of Health and Human Services receiving bipartisan support. There are many possible courses toward its objective. Decision makers need to reflect carefully on which approaches are likely to work on a large enough scale to have the intended beneficial national impacts and which are better left to smaller projects within the boundaries of health care organizations. This report provides a primer for use by informatics professionals as they explain aspects of that dividing line to policy makers and to health care leaders and front-line providers. It then identifies short-term, intermediate, and long-term steps that might be taken by the NHII initiative. PMID:15561783

  17. Scaling wetland green infrastructure practices to watersheds using modeling approaches

    EPA Science Inventory

    Green infrastructure practices are typically implemented at the plot or local scale. Wetlands in the landscape can serve important functions at these scales and can mediate biogeochemical and hydrological processes, particularly when juxtaposed with low impact development (LID)....

  18. Global Economic Integration and Local Community Resilience: Road Paving and Rural Demographic Change in the Southwestern Amazon

    ERIC Educational Resources Information Center

    Perz, Stephen G.; Cabrera, Liliana; Carvalho, Lucas Araujo; Castillo, Jorge; Barnes, Grenville

    2010-01-01

    Recent years have witnessed an expansion in international investment in large-scale infrastructure projects with the goal of achieving global economic integration. We focus on one such project, the Inter-Oceanic Highway in the "MAP" region, a trinational frontier where Bolivia, Brazil, and Peru meet in the southwestern Amazon. We adopt a…

  19. Video games: a route to large-scale STEM education?

    PubMed

    Mayo, Merrilea J

    2009-01-02

    Video games have enormous mass appeal, reaching audiences in the hundreds of thousands to millions. They also embed many pedagogical practices known to be effective in other environments. This article reviews the sparse but encouraging data on learning outcomes for video games in science, technology, engineering, and math (STEM) disciplines, then reviews the infrastructural obstacles to wider adoption of this new medium.

  20. Strategies for Validation Testing of Ground Systems

    NASA Technical Reports Server (NTRS)

    Annis, Tammy; Sowards, Stephanie

    2009-01-01

    In order to accomplish the full Vision for Space Exploration announced by former President George W. Bush in 2004, NASA will have to develop a new space transportation system and supporting infrastructure. The main portion of this supporting infrastructure will reside at the Kennedy Space Center (KSC) in Florida and will either be newly developed or a modification of existing vehicle processing and launch facilities, including Ground Support Equipment (GSE). This type of large-scale launch site development is unprecedented since the time of the Apollo Program. In order to accomplish this successfully within the limited budget and schedule constraints, a combination of traditional and innovative strategies for Verification and Validation (V&V) has been developed. The core of these strategies consists of a building-block approach to V&V, starting with component V&V and ending with a comprehensive end-to-end validation test of the complete launch site, called a Ground Element Integration Test (GEIT). This paper will outline these strategies and provide the high-level planning for meeting the challenges of implementing V&V on a large-scale development program. KEY WORDS: Systems, Elements, Subsystem, Integration Test, Ground Systems, Ground Support Equipment, Component, End Item, Test and Verification Requirements (TVR), Verification Requirements (VR)

  1. Parallel high-performance grid computing: capabilities and opportunities of a novel demanding service and business class allowing highest resource efficiency.

    PubMed

    Kepper, Nick; Ettig, Ramona; Dickmann, Frank; Stehr, Rene; Grosveld, Frank G; Wedemann, Gero; Knoch, Tobias A

    2010-01-01

    Especially in the life-science and health-care sectors, the huge IT requirements are pressing due to the large and complex systems to be analysed and simulated. Grid infrastructures play a rapidly increasing role here for research, diagnostics, and treatment, since they provide the necessary large-scale resources efficiently. Whereas grids were first used for huge number crunching of trivially parallelizable problems, parallel high-performance computing is increasingly required. Here, we show, for the prime example of molecular dynamics simulations, how the presence within grid infrastructures of large grid clusters with very fast network interconnects now allows efficient parallel high-performance grid computing and thus combines the benefits of dedicated super-computing centres and grid infrastructures. The demands for this service class are the highest, since the user group has very heterogeneous requirements: i) two to many thousands of CPUs, ii) different memory architectures, iii) huge storage capabilities, and iv) fast communication via network interconnects are all needed in different combinations and must be considered in a highly dedicated manner to reach the highest performance efficiency. Beyond that, advanced and dedicated i) interaction with users, ii) management of jobs, iii) accounting, and iv) billing not only combine classic with parallel high-performance grid usage but, more importantly, can also increase the efficiency of IT resource providers. Consequently, the mere "yes-we-can" becomes a huge opportunity for areas such as the life-science and health-care sectors, as well as for grid infrastructures, through a higher level of resource efficiency.

  2. Scaling the CERN OpenStack cloud

    NASA Astrophysics Data System (ADS)

    Bell, T.; Bompastor, B.; Bukowiec, S.; Castro Leon, J.; Denis, M. K.; van Eldik, J.; Fermin Lobo, M.; Fernandez Alvarez, L.; Fernandez Rodriguez, D.; Marino, A.; Moreira, B.; Noel, B.; Oulevey, T.; Takase, W.; Wiebalck, A.; Zilli, S.

    2015-12-01

    CERN has been running a production OpenStack cloud since July 2013 to support physics computing and infrastructure services for the site. In the past year, CERN Cloud Infrastructure has seen a constant increase in nodes, virtual machines, users and projects. This paper will present what has been done in order to make the CERN cloud infrastructure scale out.

  3. Development of a flash flood warning system based on real-time radar data and process-based erosion modelling

    NASA Astrophysics Data System (ADS)

    Schindewolf, Marcus; Kaiser, Andreas; Buchholtz, Arno; Schmidt, Jürgen

    2017-04-01

    Extreme rainfall events and resulting flash floods led to massive devastation in Germany during spring 2016. The study presented here aims at the development of an early warning system that allows the simulation and assessment of negative effects on infrastructure from radar-based heavy rainfall predictions, which serve as input data for the process-based soil loss and deposition model EROSION 3D. Our approach enables a detailed identification of runoff and sediment fluxes in agriculturally used landscapes. In a first step, documented historical events were analyzed concerning the agreement between measured radar rainfall and large-scale erosion risk maps. A second step focused on small-scale erosion monitoring via UAV of the source areas of heavy flooding events and a model reconstruction of the processes involved. In all examples, damage was caused to local infrastructure. Both analyses are promising for detecting runoff- and sediment-delivering areas at high temporal and spatial resolution. Results prove the important role of late-covering crops such as maize, sugar beet, or potatoes in runoff generation. While e.g. winter wheat positively affects extensive runoff generation on undulating landscapes, massive soil loss and thus muddy flows are observed and depicted in model results. Future research aims at large-scale model parameterization and application in real time, uncertainty estimation of precipitation forecasts, and interface development.
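
    A drastically simplified screening step in this spirit (an assumption-based sketch, not EROSION 3D) could flag grid cells where the radar rainfall nowcast, combined with a land-use-dependent runoff coefficient, exceeds critical values:

        import numpy as np

        def warning_cells(radar_mm_per_h, runoff_coeff,
                          rain_threshold=25.0, runoff_threshold=15.0):
            # Flag cells where both rainfall intensity and effective runoff
            # exceed illustrative critical values.
            effective_runoff = radar_mm_per_h * runoff_coeff
            return (radar_mm_per_h > rain_threshold) & (effective_runoff > runoff_threshold)

        if __name__ == "__main__":
            rng = np.random.default_rng(7)
            rain = rng.gamma(shape=2.0, scale=8.0, size=(100, 100))  # nowcast, mm/h
            coeff = rng.uniform(0.1, 0.9, size=(100, 100))           # e.g. maize vs. winter wheat
            flags = warning_cells(rain, coeff)
            print("cells flagged for warning:", int(flags.sum()))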

  4. A Grid Infrastructure for Supporting Space-based Science Operations

    NASA Technical Reports Server (NTRS)

    Bradford, Robert N.; Redman, Sandra H.; McNair, Ann R. (Technical Monitor)

    2002-01-01

    Emerging technologies for computational grid infrastructures have the potential for revolutionizing the way computers are used in all aspects of our lives. Computational grids are currently being implemented to provide large-scale, dynamic, and secure research and engineering environments based on standards and next-generation reusable software, enabling greater science and engineering productivity through shared resources and distributed computing for less cost than traditional architectures. Combined with the emerging technologies of high-performance networks, grids provide researchers, scientists and engineers the first real opportunity for an effective distributed collaborative environment with access to resources such as computational and storage systems, instruments, and software tools and services for the most computationally challenging applications.

  5. Scaling of the Urban Water Footprint: An Analysis of 65 Mid- to Large-Sized U.S. Metropolitan Areas

    NASA Astrophysics Data System (ADS)

    Mahjabin, T.; Garcia, S.; Grady, C.; Mejia, A.

    2017-12-01

    Scaling laws have been shown to be relevant to a range of disciplines including biology, ecology, hydrology, and physics, among others. Recently, scaling was shown to be important for understanding and characterizing cities. For instance, it was found that urban infrastructure (water supply pipes and electrical wires) tends to scale sublinearly with city population, implying that large cities are more efficient. In this study, we explore the scaling of the water footprint of cities. The water footprint is a measure of water appropriation that considers both the direct and indirect (virtual) water use of a consumer or producer. Here we compute the water footprint of 65 mid- to large-sized U.S. metropolitan areas, accounting for direct and indirect water uses associated with agricultural and industrial commodities, and residential and commercial water uses. We find that the urban water footprint, computed as the sum of the water footprint of consumption and production, exhibits sublinear scaling with an exponent of 0.89. This suggests the possibility of large cities being more water-efficient than small ones. To further assess this result, we conduct additional analysis by accounting for international flows, and the effects of green water and city boundary definition on the scaling. The analysis confirms the scaling and provides additional insight about its interpretation.
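
    The scaling relationship described above is typically estimated by a log-log regression; the sketch below (with synthetic data, not the study's 65-city data set) recovers a built-in sublinear exponent.

        import numpy as np

        def scaling_exponent(population, water_footprint):
            # Fit log(W) = beta * log(P) + c; beta < 1 indicates sublinear scaling.
            beta, intercept = np.polyfit(np.log(population), np.log(water_footprint), 1)
            return beta, np.exp(intercept)

        if __name__ == "__main__":
            rng = np.random.default_rng(42)
            pop = rng.uniform(2e5, 9e6, size=65)                       # synthetic city sizes
            wf = 1.2e3 * pop**0.89 * rng.lognormal(0.0, 0.1, size=65)  # built-in exponent 0.89
            beta, _ = scaling_exponent(pop, wf)
            print(f"estimated scaling exponent: {beta:.2f}")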

  6. Land-Price Dynamics Surrounding Large-Scale Land Development of Technopolis Gedebage, Bandung, Indonesia

    NASA Astrophysics Data System (ADS)

    Hasanawi, A.; Winarso, H.

    2018-05-01

    In spite of its potential value to governments, detailed information on how land prices vary spatially within a city is lacking; land prices around development activity, in particular, are not well known. Several studies show that investment in land development increases market land prices, but such studies remain few. One of them concerns the impact of the large-scale investment by Sumarecon in Gedebage, Bandung, which plans to develop "Technopolis" as the second center of Bandung Municipality. This paper discusses the land-price dynamics around Technopolis Gedebage Bandung, using information obtained from many sources, including interviews with experienced brokers. Appraised prices were given for different types of residential plot, distinguished by tenure, distance from the main road, and infrastructural provision. This research aims to explain the dynamics of land prices surrounding the large-scale land development. The dynamics are described by the growth of the median market land price, visualized using the Surfer DEM software. The data analysis for Technopolis Gedebage Bandung shows the relative importance of land location, infrastructural provision, and tenure (land title) for land-price dynamics, and it makes it possible to test whether and where land prices have spiraled. This paper argues that the recent price increase has been consistently greater for suburban plots than for inner-city plots, as a result of the massive demand created by the large-scale land development project. The increase cannot be controlled; the market price is rising very quickly, among other reasons because Gedebage will become the technopolis area. This can indirectly burden lower-middle-class groups, for example when they are displaced from land they previously owned, and it implies ever-decreasing incomes as livelihood resources such as farming and agriculture are lost.

  7. Risk assessment for physical and cyber attacks on critical infrastructures.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Bryan J.; Sholander, Peter E.; Phelan, James M.

    2005-08-01

    Assessing the risk of malevolent attacks against large-scale critical infrastructures requires modifications to existing methodologies. Existing risk assessment methodologies consider physical security and cyber security separately. As such, they do not accurately model attacks that involve defeating both physical protection and cyber protection elements (e.g., hackers turning off alarm systems prior to forced entry). This paper presents a risk assessment methodology that accounts for both physical and cyber security. It also preserves the traditional security paradigm of detect, delay and respond, while accounting for the possibility that a facility may be able to recover from or mitigate the results of a successful attack before serious consequences occur. The methodology provides a means for ranking those assets most at risk from malevolent attacks. Because the methodology is automated, the analyst can also play 'what if' with mitigation measures to gain a better understanding of how to best expend resources towards securing the facilities. It is simple enough to be applied to large infrastructure facilities without developing highly complicated models. Finally, it is applicable to facilities with extensive security as well as those that are less well-protected.
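
    The abstract does not spell out the scoring equations, so the following is only a hedged sketch of one common formulation consistent with the description: risk = likelihood of attack x probability that the combined physical and cyber protection fails to interrupt it x consequence, used to rank assets. All asset names and numbers are hypothetical.

```python
# Hypothetical asset ranking: risk = P(attack) * P(protection fails) * consequence.
# A blended attack (cyber defeat of alarms before forced entry) must get past both
# layers; protection holds if either layer interrupts the attack.
assets = {
    #  name            P_attack  P_interrupt_physical  P_interrupt_cyber  consequence
    "control_room":    (0.05,    0.90,                 0.70,              9.0),
    "substation":      (0.10,    0.60,                 0.50,              6.0),
    "data_historian":  (0.08,    0.95,                 0.40,              4.0),
}

def risk(p_attack, p_int_phys, p_int_cyber, consequence):
    # Both protection layers must fail for the attack to succeed.
    p_protection_fails = (1 - p_int_phys) * (1 - p_int_cyber)
    return p_attack * p_protection_fails * consequence

for name in sorted(assets, key=lambda a: risk(*assets[a]), reverse=True):
    print(name, round(risk(*assets[name]), 4))
```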

  8. Research data management support for large-scale, long-term, interdisciplinary collaborative research centers with a focus on environmental sciences

    NASA Astrophysics Data System (ADS)

    Curdt, C.; Hoffmeister, D.; Bareth, G.; Lang, U.

    2017-12-01

    Science conducted in collaborative, cross-institutional research projects requires active sharing of research ideas, data, documents and further information in a well-managed, controlled and structured manner. Thus, it is important to establish corresponding infrastructures and services for the scientists. Regular project meetings and joint field campaigns support the exchange of research ideas. Technical infrastructures facilitate storage, documentation, exchange and re-use of data as results of scientific output. Additionally, publications, conference contributions, reports, pictures, etc. should be managed. Both knowledge and data sharing are essential to create synergies. Within the coordinated programme `Collaborative Research Center' (CRC), the German Research Foundation offers funding to establish research data management (RDM) infrastructures and services. CRCs are large-scale, interdisciplinary, multi-institutional, long-term (up to 12 years), university-based research institutions (up to 25 sub-projects). These CRCs address complex and scientifically challenging research questions. This poster presents the RDM services and infrastructures that have been established for two CRCs, both focusing on environmental sciences. Since 2007, an RDM support infrastructure and associated services have been set up for the CRC/Transregio 32 (CRC/TR32) `Patterns in Soil-Vegetation-Atmosphere-Systems: Monitoring, Modelling and Data Assimilation' (www.tr32.de). The experiences gained have been used to arrange RDM services for the CRC1211 `Earth - Evolution at the Dry Limit' (www.crc1211.de), funded since 2016. In both projects scientists from various disciplines collect heterogeneous data in field campaigns or through modelling approaches. To manage the scientific output, the TR32DB data repository (www.tr32db.de) has been designed and implemented for the CRC/TR32. This system was transferred and adapted to the CRC1211 needs (www.crc1211db.uni-koeln.de) in 2016. Both repositories support secure and sustainable data storage, backup, documentation, publication with DOIs, search, download, statistics as well as web mapping features. Moreover, RDM consulting and support services as well as training sessions are carried out regularly.

  9. Marching to the beat of Moore's Law

    NASA Astrophysics Data System (ADS)

    Borodovsky, Yan

    2006-03-01

    Area density scaling in integrated circuits, defined as transistor count per unit area, has followed Gordon Moore's famous observation-cum-prediction for many generations. Known as "Moore's Law", it predicts a doubling of density every 18-24 months and has provided all-important synchronizing guidance for tools and materials suppliers, IC manufacturers, and their customers as to the minimal requirements their products and services must meet to satisfy the technical and financial expectations of the infrastructure needed to develop and manufacture each technology generation node. Multiple lithography solutions are usually under consideration for any given node. In general, three broad classes of solutions are considered: evolutionary - technology that extends the existing technology infrastructure at similar or slightly higher cost and risk to schedule; revolutionary - technology that discards significant parts of the existing infrastructure at similar cost and higher risk to schedule, but promises higher capability than the evolutionary approach; and disruptive - an approach that as a rule promises similar or better capabilities and much lower cost, but wholly unpredictable risk to schedule and product yields. This paper examines various lithography approaches and their respective merits against criteria of infrastructure availability, affordability, and risk to IC manufacturers' schedules, as well as the strategy involved in developing and selecting the best solution, in an attempt to sort out the key factors that will impact the choice of lithography for large-scale manufacturing at future technology nodes.
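
    As a quick arithmetic illustration of the doubling cadence cited above (the 18-24 month figure is the only input taken from the text; the 10-year horizon is arbitrary):

```python
# Transistor-density multiplier after `years` years if density doubles every T months.
def density_multiplier(years, doubling_months):
    return 2 ** (12 * years / doubling_months)

for T in (18, 24):
    print(f"doubling every {T} months -> x{density_multiplier(10, T):.0f} after 10 years")
# doubling every 18 months -> roughly x100 in 10 years; every 24 months -> x32
```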

  10. Predictive Anomaly Management for Resilient Virtualized Computing Infrastructures

    DTIC Science & Technology

    2015-05-27

    Associated publications listed in the report: (1) PREC: Practical Root Exploit Containment for Android Devices, ACM Conference on Data and Application Security and Privacy (CODASPY), 03-MAR-14; (2) Hiep Nguyen, Yongmin Tan, Xiaohui Gu, Propagation-aware Anomaly Localization for Cloud Hosted Distributed Applications, ACM Workshop on Managing Large-Scale Systems via the Analysis of System Logs and the Application of Machine Learning Techniques (SLAML), in conjunction with SOSP, 05-OCT-11.

  11. A Visual Language for Situational Awareness

    DTIC Science & Technology

    2016-12-01

    Fragmentary excerpts from the report: the arrival of the information age has delivered the ability to transfer larger volumes of data at far greater rates; wireless infrastructure is proposed for use in large-scale events where domestic power and private wireless networks are overloaded or unavailable; ... lacking by responders using ANSI INCITS 415 symbol sets; when combined with the power of a wireless network, a situational awareness metalanguage is ...

  12. A Large-Scale Donor Attempt to Improve Educational Status of the Poor and Household Income Distribution: The Experience of PEDC in Vietnam

    ERIC Educational Resources Information Center

    Carr-Hill, Roy A.

    2011-01-01

    In 2003, donors combined together in Vietnam to support the provision of quality primary schooling for 226 disadvantaged districts (about a third of the country). US $160 million was invested in infrastructure, materials and training across the 226 districts. The programme has been commended by donors and received good press inside Vietnam.…

  13. Large Scale System Defense

    DTIC Science & Technology

    2008-10-01

    Fragmentary excerpts from the report: ... anomaly detectors (ADs); Aeolos, a distributed intrusion detection and event correlation infrastructure; STAND, a training-set sanitization technique applicable to ADs ... Summary of findings: (a) Automatic Patch Generation; (b) Better Patch Management; (c) Artificial Diversity; (d) Distributed Anomaly Detection.

  14. Learning from Our Global Competitors: A Comparative Analysis of Science, Technology, Engineering and Mathematics (STEM) Education Pipelines in the United States, Mainland China and Taiwan

    ERIC Educational Resources Information Center

    Chow, Christina M.

    2011-01-01

    Maintaining a competitive edge in the 21st century depends on cultivating human capital, producing qualified and innovative employees capable of competing within the new global marketplace. Technological advancements in communications as well as large-scale infrastructure development have led to a leveled playing field…

  15. Large funding inflows, limited local capacity and emerging disease control priorities: a situational assessment of tuberculosis control in Myanmar.

    PubMed

    Khan, Mishal S; Schwanke-Khilji, Sara; Yoong, Joanne; Tun, Zaw Myo; Watson, Samantha; Coker, Richard James

    2017-10-01

    There are numerous challenges in planning and implementing effective disease control programmes in Myanmar, which is undergoing internal political and economic transformations whilst experiencing massive inflows of external funding. The objective of our study-involving key informant discussions, participant observations and linked literature reviews-was to analyse how tuberculosis (TB) control strategies in Myanmar are influenced by the broader political, economic, epidemiological and health systems context, using the Systemic Rapid Assessment conceptual and analytical framework. Our findings indicate that the substantial influx of donor funding, on the order of one billion dollars over a 5-year period, may be too rapid for the country's infrastructure to utilize effectively. TB control strategies thus far have tended to favour medical or technological approaches rather than infrastructure development, and appear to be driven more by a perceived urgency to 'do something' than informed by evidence of cost-effectiveness and sustainable long-term impact. Progress has been made towards ambitious targets for scaling up treatment of drug-resistant TB, although there are concerns about ensuring quality of care. We also find substantial disparities in health and funding allocation between regions and ethnic groups, which are related to the political context and health system infrastructure. Our situational assessment of emerging TB control strategies in this transitioning health system indicates that large investments by international donors may be pushing Myanmar to scale up TB and drug-resistant TB services too quickly, without due consideration given to the health system (service delivery infrastructure, human resource capacity, quality of care, equity) and epidemiological (evidence of effectiveness of interventions, prevention of new cases) context. © The Author 2017. Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine. All rights reserved.

  16. A framework for linking cybersecurity metrics to the modeling of macroeconomic interdependencies.

    PubMed

    Santos, Joost R; Haimes, Yacov Y; Lian, Chenyang

    2007-10-01

    Hierarchical decision making is a multidimensional process involving management of multiple objectives (with associated metrics and tradeoffs in terms of costs, benefits, and risks), which span various levels of a large-scale system. The nation is a hierarchical system, as it consists of multiple classes of decision makers and stakeholders ranging from national policymakers to operators of specific critical infrastructure subsystems. Critical infrastructures (e.g., transportation, telecommunications, power, banking, etc.) are highly complex and interconnected. These interconnections take the form of flows of information, shared security, and physical flows of commodities, among others. In recent years, economic and infrastructure sectors have become increasingly dependent on networked information systems for efficient operations and timely delivery of products and services. In order to ensure the stability, sustainability, and operability of our critical economic and infrastructure sectors, it is imperative to understand their inherent physical and economic linkages, in addition to their cyber interdependencies. An interdependency model based on a transformation of the Leontief input-output (I-O) model can be used for modeling: (1) the steady-state economic effects triggered by a consumption shift in a given sector (or set of sectors); and (2) the resulting ripple effects to other sectors. The inoperability metric is calculated for each sector; this is achieved by converting the economic impact (typically in monetary units) into a percentage value relative to the size of the sector. Disruptive events such as terrorist attacks, natural disasters, and large-scale accidents have historically shown cascading effects on both consumption and production. Hence, a dynamic model extension is necessary to demonstrate the interplay between combined demand and supply effects. The result is a foundational framework for modeling cybersecurity scenarios for the oil and gas sector. A hypothetical case study examines a cyber attack that causes a 5-week shortfall in the crude oil supply in the Gulf Coast area.
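
    The inoperability input-output model referenced here is commonly written as q = A*q + c*, giving the equilibrium inoperability vector q = (I - A*)^{-1} c*, where A* is the normalized interdependency matrix and c* the demand-side perturbation. The sketch below solves this system for a made-up three-sector example; the matrix entries and the 5% oil-and-gas perturbation are illustrative only, not values from the paper.

```python
import numpy as np

# Hypothetical 3-sector interdependency matrix A* (rows/cols: oil&gas, power, telecom).
A_star = np.array([
    [0.00, 0.20, 0.10],
    [0.30, 0.00, 0.15],
    [0.10, 0.25, 0.00],
])
# Demand perturbation c*: a cyber attack reduces final demand met by oil&gas by 5%.
c_star = np.array([0.05, 0.0, 0.0])

# Equilibrium inoperability: q = (I - A*)^{-1} c*, solved without explicit inversion.
q = np.linalg.solve(np.eye(3) - A_star, c_star)
print(dict(zip(["oil_gas", "power", "telecom"], q.round(4))))
```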

  17. Decision analysis and risk models for land development affecting infrastructure systems.

    PubMed

    Thekdi, Shital A; Lambert, James H

    2012-07-01

    Coordination and layering of models to identify risks in complex systems such as large-scale infrastructure of energy, water, and transportation is of current interest across application domains. Such infrastructures are increasingly vulnerable to adjacent commercial and residential land development. Land development can compromise the performance of essential infrastructure systems and increase the costs of maintaining or increasing performance. A risk-informed approach to this topic would be useful to avoid surprise, regret, and the need for costly remedies. This article develops a layering and coordination of models for risk management of land development affecting infrastructure systems. The layers are: system identification, expert elicitation, predictive modeling, comparison of investment alternatives, and implications of current decisions for future options. The modeling layers share a focus on observable factors that most contribute to volatility of land development and land use. The relevant data and expert evidence include current and forecasted growth in population and employment, conservation and preservation rules, land topography and geometries, real estate assessments, market and economic conditions, and other factors. The approach integrates to a decision framework of strategic considerations based on assessing risk, cost, and opportunity in order to prioritize needs and potential remedies that mitigate impacts of land development to the infrastructure systems. The approach is demonstrated for a 5,700-mile multimodal transportation system adjacent to 60,000 tracts of potential land development. © 2011 Society for Risk Analysis.

  18. Accelerators for society: succession of European infrastructural projects: CARE, EuCARD, TIARA, EuCARD2

    NASA Astrophysics Data System (ADS)

    Romaniuk, Ryszard S.

    2013-10-01

    Accelerator science and technology is one of the key enablers of developments in particle physics and photon physics, as well as of applications in medicine and industry. The paper presents a digest of research results in the domain of accelerator science and technology in Europe, obtained during the realization of CARE (Coordinated Accelerator R&D) and EuCARD (European Coordination of Accelerator R&D) and during the national annual review meeting of TIARA - Test Infrastructure of European Research Area in Accelerator R&D. The European projects on accelerator technology started in 2003 with CARE. TIARA is a European collaboration on accelerator technology which, by running research, technical, networking and infrastructural projects, has the task of integrating the research and technical communities and infrastructures across Europe. The Collaboration gathers all research centers with large accelerator infrastructures; others, such as universities, are affiliated as associate members. TIARA-PP (preparatory phase) is a European infrastructural project run by this Consortium and realized inside EU-FP7. The paper presents a general overview of CARE, EuCARD and especially TIARA activities, with an introduction containing a portrait of contemporary accelerator technology and a digest of its applications in modern society. The CARE, EuCARD and TIARA activities have integrated the European accelerator community in a very effective way, and these projects are widely expected to be continued.

  19. Parallel Index and Query for Large Scale Data Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chou, Jerry; Wu, Kesheng; Ruebel, Oliver

    2011-07-18

    Modern scientific datasets present numerous data management and analysis challenges. State-of-the-art index and query technologies are critical for facilitating interactive exploration of large datasets, but numerous challenges remain in designing a system for processing general scientific datasets. The system needs to be able to run on distributed multi-core platforms, efficiently utilize the underlying I/O infrastructure, and scale to massive datasets. We present FastQuery, a novel software framework that addresses these challenges. FastQuery utilizes a state-of-the-art index and query technology (FastBit) and is designed to process massive datasets on modern supercomputing platforms. We apply FastQuery to processing of a massive 50TB dataset generated by a large-scale accelerator modeling code. We demonstrate the scalability of the tool to 11,520 cores. Motivated by the scientific need to search for interesting particles in this dataset, we use our framework to reduce search time from hours to tens of seconds.
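
    FastQuery itself builds FastBit bitmap indexes over parallel scientific file formats; the snippet below is not the FastQuery or FastBit API but a toy illustration of why an index turns an "interesting particle" search from a full scan into a narrow lookup. The array name, size, and threshold are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
energy = rng.exponential(scale=1.0, size=10_000_000)   # stand-in for particle energies

# Build a simple sorted index once (FastBit uses compressed bitmap indexes instead).
order = np.argsort(energy)
sorted_energy = energy[order]

# Query: particles with energy > 8.0. With the index this is a binary search plus a
# slice, instead of scanning all 10 million values on every query.
lo = np.searchsorted(sorted_energy, 8.0, side="right")
hits = order[lo:]                                       # original indices of matches
print(f"{hits.size} particles above threshold")
```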

  20. Compounded effects of heat waves and droughts over the Western Electricity Grid: spatio-temporal scales of impacts and predictability toward mitigation and adaptation.

    NASA Astrophysics Data System (ADS)

    Voisin, N.; Kintner-Meyer, M.; Skaggs, R.; Xie, Y.; Wu, D.; Nguyen, T. B.; Fu, T.; Zhou, T.

    2016-12-01

    Heat waves and droughts are projected to become more frequent and intense. We have seen in the past the effects of each of these extreme climate events on electricity demand and constrained electricity generation, challenging power system operations. Our aim here is to understand their compounding effects under historical conditions. We present a benchmark of Western US grid performance under 55 years of historical climate, including droughts, using 2010 levels of water demand and water management infrastructure, and 2010 levels of electricity grid infrastructure and operations. We leverage CMIP5 historical hydrology simulations and force a large-scale river routing-reservoir model with 2010-level sectoral water demands. The regulated flow at each water-dependent generating plant is processed to adjust the water-dependent electricity generation parameterization in a production cost model that represents 2010-level power system operations with hourly energy demand of 2010. The resulting benchmark includes a risk distribution of several grid performance metrics (unserved energy, production cost, carbon emissions) as a function of inter-annual variability in regional water availability and predictability using large-scale climate oscillations. In the second part of the presentation, we describe an approach to map historical heat waves onto this benchmark grid performance using a building energy demand model. The impact of the heat waves, combined with the impact of droughts, is explored at multiple scales to understand the compounding effects. Vulnerabilities of the power generation and transmission systems are highlighted to guide future adaptation.

  1. Spatial-Temporal Heterogeneity in Regional Watershed Phosphorus Cycles Driven by Changes in Human Activity over the Past Century

    NASA Astrophysics Data System (ADS)

    Hale, R. L.; Grimm, N. B.; Vorosmarty, C. J.

    2014-12-01

    An ongoing challenge for society is to harness the benefits of phosphorus (P) while minimizing negative effects on downstream ecosystems. To meet this challenge we must understand the controls on the delivery of anthropogenic P from landscapes to downstream ecosystems. We used a model that incorporates P inputs to watersheds, hydrology, and infrastructure (sewers, waste-water treatment plants, and reservoirs) to reconstruct historic P yields for the northeastern U.S. from 1930 to 2002. At the regional scale, increases in P inputs were paralleled by increased fractional retention, thus P loading to the coast did not increase significantly. We found that temporal variation in regional P yield was correlated with P inputs. Spatial patterns of watershed P yields were best predicted by inputs, but the correlation between inputs and yields in space weakened over time, due to infrastructure development. Although the magnitude of infrastructure effect was small, its role changed over time and was important in creating spatial and temporal heterogeneity in input-yield relationships. We then conducted a hierarchical cluster analysis to identify a typology of anthropogenic P cycling, using data on P inputs (fertilizer, livestock feed, and human food), infrastructure (dams, wastewater treatment plants, sewers), and hydrology (runoff coefficient). We identified 6 key types of watersheds that varied significantly in climate, infrastructure, and the types and amounts of P inputs. Annual watershed P yields and retention varied significantly across watershed types. Although land cover varied significantly across typologies, clusters based on land cover alone did not explain P budget patterns, suggesting that this variable is insufficient to understand patterns of P cycling across large spatial scales. Furthermore, clusters varied over time as patterns of climate, P use, and infrastructure changed. Our results demonstrate that the drivers of P cycles are spatially and temporally heterogeneous, yet they also suggest that a relatively simple typology of watersheds can be useful for understanding regional P cycles and may help inform P management approaches.

  2. LifeWatch - a Large-scale eScience Infrastructure to Assist in Understanding and Managing our Planet's Biodiversity

    NASA Astrophysics Data System (ADS)

    Hernández Ernst, Vera; Poigné, Axel; Los, Walter

    2010-05-01

    Understanding and managing the complexity of the biodiversity system in relation to global changes concerning land use and climate change, with their social and economic implications, is crucial to mitigate species loss and biodiversity changes in general. The sustainable development and exploitation of existing biodiversity resources require flexible and powerful infrastructures offering, on the one hand, access to large-scale databases of observations and measurements, to advanced analytical and modelling software, and to high performance computing environments and, on the other hand, the interlinkage of European scientific communities with each other and with national policies. The European Strategy Forum on Research Infrastructures (ESFRI) selected the "LifeWatch e-science and technology infrastructure for biodiversity research" as a promising development for constructing facilities that contribute to meeting those challenges. LifeWatch collaborates with other selected initiatives (e.g. ICOS, ANAEE, NOHA, and LTER-Europa) to achieve the integration of the infrastructures at landscape and regional scales. This should result in a cooperating cluster of such infrastructures supporting an integrated approach to data capture and transmission, data management and harmonisation. In addition, facilities for exploration, forecasting, and presentation using heterogeneous and distributed data and tools should allow interdisciplinary scientific research at any spatial and temporal scale. LifeWatch is an example of a new generation of interoperable research infrastructures based on standards and a service-oriented architecture that allow for linkage with external resources and associated infrastructures. External data sources will include established data aggregators such as the Global Biodiversity Information Facility (GBIF) for species occurrences and other EU Networks of Excellence like the Long-Term Ecological Research Network (LTER), GMES, and GEOSS for terrestrial monitoring, the MARBEF network for marine data, and the Consortium for European Taxonomic Facilities (CETAF) and its European Distributed Institute for Taxonomy (EDIT) for taxonomic data. "Smaller" networks and "volunteer scientists" may also send data (e.g. GPS-supported species observations) to a LifeWatch repository. Autonomously operating wireless environmental sensors and other smart hand-held devices will contribute to increasing data capture. In this way LifeWatch will directly underpin the development of GEOBON, the biodiversity component of GEOSS, the Global Earth Observation System. To overcome the major technical difficulties imposed by the variety of current and future technologies, protocols, data formats, etc., LifeWatch will define and use common open interfaces. For this purpose, the LifeWatch Reference Model was developed during the preparatory phase, specifying the service-oriented architecture underlying the ICT infrastructure. The Reference Model identifies key requirements and key architectural concepts to support workflows for scientific in-silico experiments, tracking of provenance, and semantic enhancement, besides meeting the functional requirements mentioned before. It provides guidelines for the specification and implementation of services and information models, as well as defining a number of generic services and models.
Another key issue addressed by the Reference Model is organizing the cooperation of the many developer teams residing in many European countries so that they obtain compatible results; conformance with the specifications and policies of the Reference Model will therefore be required. The LifeWatch Reference Model is based on the ORCHESTRA Reference Model for geospatial-oriented architectures and service networks, which provides a generic framework and has been endorsed as best practice by the Open Geospatial Consortium (OGC). The LifeWatch Infrastructure will allow (interdisciplinary) scientific researchers to collaborate by creating e-Laboratories or by composing e-Services that can be shared and jointly developed. To this end, a long-term vision for the LifeWatch Biodiversity Workbench Portal has been developed as a one-stop application for the LifeWatch infrastructure based on existing and emerging technologies. There the user can find all available resources such as data, workflows and tools, and access LifeWatch applications that integrate different resources and provide key capabilities such as resource discovery and visualisation, creation of workflows, creation and management of provenance, and support for collaborative activities. While LifeWatch developers will construct components for solving generic LifeWatch tasks, users may add their own facilities to fulfil individual needs. Examples of the application of the LifeWatch Reference Model and the LifeWatch Biodiversity Workbench Portal will be given.

  3. Hyperspectral range imaging for transportation systems evaluation

    NASA Astrophysics Data System (ADS)

    Bridgelall, Raj; Rafert, J. B.; Atwood, Don; Tolliver, Denver D.

    2016-04-01

    Transportation agencies expend significant resources to inspect critical infrastructure such as roadways, railways, and pipelines. Regular inspections identify important defects and generate data to forecast maintenance needs. However, cost and practical limitations prevent the scaling of current inspection methods beyond relatively small portions of the network. Consequently, existing approaches fail to discover many high-risk defect formations. Remote sensing techniques offer the potential for more rapid and extensive non-destructive evaluations of the multimodal transportation infrastructure. However, optical occlusions and limitations in the spatial resolution of typical airborne and space-borne platforms limit their applicability. This research proposes hyperspectral image classification to isolate transportation infrastructure targets for high-resolution photogrammetric analysis. A plenoptic swarm of unmanned aircraft systems will capture images with centimeter-scale spatial resolution, large swaths, and polarization diversity. The light field solution will incorporate structure-from-motion techniques to reconstruct three-dimensional details of the isolated targets from sequences of two-dimensional images. A comparative analysis of existing low-power wireless communications standards suggests an application dependent tradeoff in selecting the best-suited link to coordinate swarming operations. This study further produced a taxonomy of specific roadway and railway defects, distress symptoms, and other anomalies that the proposed plenoptic swarm sensing system would identify and characterize to estimate risk levels.

  4. Impact analysis of two kinds of failure strategies in Beijing road transportation network

    NASA Astrophysics Data System (ADS)

    Zhang, Zundong; Xu, Xiaoyang; Zhang, Zhaoran; Zhou, Huijuan

    The Beijing road transportation network (BRTN), as a large-scale technological network, exhibits very complex features during daily operation, and how its statistical characteristics (e.g. average path length and global network efficiency) change as the network evolves has been widely highlighted. In this paper, using different modeling concepts, three kinds of network models of the BRTN are constructed, namely the abstract network model, the static network model with road mileage as weights, and the dynamic network model with travel time as weights, based on the topological data and the real detected flow data. The degree distributions of the three network models are analyzed, showing that both the urban road infrastructure network and the dynamic network behave like scale-free networks. Analyzing and comparing the key statistical characteristics of the three models under random and intentional attacks shows that the urban road infrastructure network and the dynamic network of the BRTN are simultaneously robust and vulnerable.
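
    The abstract does not reproduce the BRTN models themselves, but the experiment it describes, comparing a statistical characteristic such as global efficiency under random failures versus degree-targeted (intentional) attacks, can be sketched with networkx on a synthetic scale-free graph, as below. This is an illustration of the method, not the Beijing data.

```python
import random
import networkx as nx

def efficiency_under_attack(G, fraction=0.05, targeted=False, seed=0):
    """Remove a fraction of nodes and return the remaining global efficiency."""
    H = G.copy()
    k = int(fraction * H.number_of_nodes())
    if targeted:                         # intentional attack: highest-degree nodes first
        victims = [n for n, _ in sorted(H.degree, key=lambda kv: kv[1], reverse=True)[:k]]
    else:                                # random failure
        victims = random.Random(seed).sample(list(H.nodes), k)
    H.remove_nodes_from(victims)
    return nx.global_efficiency(H)

G = nx.barabasi_albert_graph(2000, 2, seed=42)   # synthetic scale-free stand-in
print("baseline :", round(nx.global_efficiency(G), 4))
print("random   :", round(efficiency_under_attack(G, targeted=False), 4))
print("targeted :", round(efficiency_under_attack(G, targeted=True), 4))
```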

  5. Infrastructures for Distributed Computing: the case of BESIII

    NASA Astrophysics Data System (ADS)

    Pellegrino, J.

    2018-05-01

    BESIII is an electron-positron collision experiment hosted at BEPCII in Beijing and aimed at investigating tau-charm physics. BESIII has now been running for several years and has gathered more than 1 PB of raw data. In order to analyze these data and perform massive Monte Carlo simulations, a large amount of computing and storage resources is needed. The distributed computing system is based upon DIRAC and has been in production since 2012. It integrates computing and storage resources from different institutes and a variety of resource types such as cluster, grid, cloud and volunteer computing. About 15 sites from the BESIII Collaboration all over the world have joined this distributed computing infrastructure, contributing significantly to the IHEP computing facility. Nowadays cloud computing is playing a key role in the HEP computing field, due to its scalability and elasticity. Cloud infrastructures take advantage of several tools, such as VMDirac, to manage virtual machines through cloud managers according to job requirements. With the virtually unlimited resources of commercial clouds, the computing capacity can scale accordingly to deal with burst demands. General computing models are discussed, with particular focus on the BESIII infrastructure, and new computing tools and upcoming infrastructures are also addressed.

  6. A coordinated set of ecosystem research platforms open to international research in ecotoxicology, AnaEE-France.

    PubMed

    Mougin, Christian; Azam, Didier; Caquet, Thierry; Cheviron, Nathalie; Dequiedt, Samuel; Le Galliard, Jean-François; Guillaume, Olivier; Houot, Sabine; Lacroix, Gérard; Lafolie, François; Maron, Pierre-Alain; Michniewicz, Radika; Pichot, Christian; Ranjard, Lionel; Roy, Jacques; Zeller, Bernd; Clobert, Jean; Chanzy, André

    2015-10-01

    The infrastructure for Analysis and Experimentation on Ecosystems (AnaEE-France) is an integrated network of the major French experimental, analytical, and modeling platforms dedicated to the biological study of continental ecosystems (aquatic and terrestrial). This infrastructure aims at understanding and predicting ecosystem dynamics under global change. AnaEE-France comprises complementary nodes offering access to the best experimental facilities and associated biological resources and data: Ecotrons, seminatural experimental platforms to manipulate terrestrial and aquatic ecosystems, in natura sites equipped for large-scale and long-term experiments. AnaEE-France also provides shared instruments and analytical platforms dedicated to environmental (micro) biology. Finally, AnaEE-France provides users with data bases and modeling tools designed to represent ecosystem dynamics and to go further in coupling ecological, agronomical, and evolutionary approaches. In particular, AnaEE-France offers adequate services to tackle the new challenges of research in ecotoxicology, positioning its various types of platforms in an ecologically advanced ecotoxicology approach. AnaEE-France is a leading international infrastructure, and it is pioneering the construction of AnaEE (Europe) infrastructure in the field of ecosystem research. AnaEE-France infrastructure is already open to the international community of scientists in the field of continental ecotoxicology.

  7. S3DB core: a framework for RDF generation and management in bioinformatics infrastructures

    PubMed Central

    2010-01-01

    Background Biomedical research is set to greatly benefit from the use of semantic web technologies in the design of computational infrastructure. However, beyond well-defined research initiatives, substantial issues of data heterogeneity, source distribution, and privacy currently stand in the way of the personalization of medicine. Results A computational framework for bioinformatic infrastructure was designed to deal with the heterogeneous data sources and the sensitive mixture of public and private data that characterizes the biomedical domain. This framework consists of a logical model built with semantic web tools, coupled with a Markov process that propagates user operator states. An accompanying open source prototype was developed to meet a series of applications that range from collaborative multi-institution data acquisition efforts to data analysis applications that need to quickly traverse complex data structures. This report describes the two abstractions underlying the S3DB-based infrastructure, logical and numerical, and discusses its generality beyond the immediate confines of existing implementations. Conclusions The emergence of the "web as a computer" requires a formal model for the different functionalities involved in reading and writing to it. The S3DB core model proposed was found to address the design criteria of biomedical computational infrastructure, such as those supporting large scale multi-investigator research, clinical trials, and molecular epidemiology. PMID:20646315

  8. Green Infrastructure Modeling Tools

    EPA Pesticide Factsheets

    Modeling tools support planning and design decisions on a range of scales from setting a green infrastructure target for an entire watershed to designing a green infrastructure practice for a particular site.

  9. Utilizing Semantic Big Data for realizing a National-scale Infrastructure Vulnerability Analysis System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chinthavali, Supriya; Shankar, Mallikarjun

    Critical Infrastructure systems (CIs) such as energy, water, transportation and communication are highly interconnected and mutually dependent in complex ways. Robust modeling of CI interconnections is crucial to identify vulnerabilities in the CIs. We present here a national-scale Infrastructure Vulnerability Analysis System (IVAS) vision leveraging Semantic Big Data (SBD) tools, Big Data, and Geographical Information Systems (GIS) tools. We survey existing approaches on vulnerability analysis of critical infrastructures and discuss relevant systems and tools aligned with our vision. Next, we present a generic system architecture and discuss challenges including: (1) Constructing and managing a CI network-of-networks graph, (2) Performing analytic operations at scale, and (3) Interactive visualization of analytic output to generate meaningful insights. We argue that this architecture acts as a baseline to realize a national-scale network based vulnerability analysis system.

  10. The Path to Convergence: Design, Coordination and Social Issues in the Implementation of a Middleware Data Broker.

    NASA Astrophysics Data System (ADS)

    Slota, S.; Khalsa, S. J. S.

    2015-12-01

    Infrastructures are the result of systems, networks, and inter-networks that accrete, overlay and segment one another over time. As a result, working infrastructures represent a broad heterogeneity of elements - data types, computational resources, material substrates (computing hardware, physical infrastructure, labs, physical information resources, etc.) as well as organizational and social functions which result in divergent outputs and goals. Cyber infrastructure's engineering often defaults to a separation of the social from the technical that results in the engineering succeeding in limited ways, or the exposure of unanticipated points of failure within the system. Studying the development of middleware intended to mediate interactions among systems within an earth systems science infrastructure exposes organizational, technical and standards-focused negotiations endemic to a fundamental trait of infrastructure: its characteristic invisibility in use. Intended to perform a core function within the EarthCube cyberinfrastructure, the development, governance and maintenance of an automated brokering system is a microcosm of large-scale infrastructural efforts. Points of potential system failure, regardless of the extent to which they are more social or more technical in nature, can be considered in terms of the reverse salient: a point of social and material configuration that momentarily lags behind the progress of an emerging or maturing infrastructure. The implementation of the BCube data broker has exposed reverse salients in regards to the overall EarthCube infrastructure (and the role of middleware brokering) in the form of organizational factors such as infrastructural alignment, maintenance and resilience; differing and incompatible practices of data discovery and evaluation among users and stakeholders; and a preponderance of local variations in the implementation of standards and authentication in data access. These issues are characterized by their role in increasing tension or friction among components that are on the path to convergence and may help to predict otherwise-occluded endogenous points of failure or non-adoption in the infrastructure.

  11. Evolution of Precipitation Extremes in Three Large Ensembles of Climate Simulations - Impact of Spatial and Temporal Resolutions

    NASA Astrophysics Data System (ADS)

    Martel, J. L.; Brissette, F.; Mailhot, A.; Wood, R. R.; Ludwig, R.; Frigon, A.; Leduc, M.; Turcotte, R.

    2017-12-01

    Recent studies indicate that the frequency and intensity of extreme precipitation will increase in the future climate due to global warming. In this study, we compare annual maximum precipitation series from three large ensembles of climate simulations at various spatial and temporal resolutions. The first two are at the global scale: the Canadian Earth System Model (CanESM2) 50-member large ensemble (CanESM2-LE) at a 2.8° resolution and the Community Earth System Model (CESM1) 40-member large ensemble (CESM1-LE) at a 1° resolution. The third ensemble is at the regional scale over both Eastern North America and Europe: the Canadian Regional Climate Model (CRCM5) 50-member large ensemble (CRCM5-LE) at a 0.11° resolution, driven at its boundaries by the CanESM-LE. The CRCM5-LE is a new ensemble issued from the ClimEx project (http://www.climex-project.org), a Québec-Bavaria collaboration. Using these three large ensembles, changes in extreme precipitation over the historical (1980-2010) and future (2070-2100) periods are investigated. This results in 1,500 (30 years x 50 members for CanESM2-LE and CRCM5-LE) and 1,200 (30 years x 40 members for CESM1-LE) simulated years over each of the historical and future periods. Using these large datasets, the empirical daily (and sub-daily for CRCM5-LE) extreme precipitation quantiles for large return periods ranging from 2 to 100 years are computed. Results indicate that daily extreme precipitation will generally increase over most land grid points of both domains according to the three large ensembles. For the CRCM5-LE, the increase in sub-daily extreme precipitation is even larger than that for daily extreme precipitation. Considering that many public infrastructures have lifespans exceeding 75 years, the increase in extremes has important implications for the service levels of water infrastructure and for public safety.
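
    With the pooled member-years of annual maxima (for example, 30 years x 50 members = 1,500 values), an empirical T-year quantile can be read from the sorted sample with a plotting-position formula. The sketch below follows only that pooling logic; the Gumbel parameters used to generate the synthetic annual maxima are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
# 50 members x 30 years of synthetic annual-maximum daily precipitation (mm).
annual_max = rng.gumbel(loc=40.0, scale=10.0, size=(50, 30)).ravel()   # 1,500 values

def empirical_quantile(sample, return_period):
    """T-year quantile via the Weibull plotting position i/(n+1)."""
    x = np.sort(sample)
    n = x.size
    prob = 1.0 - 1.0 / return_period            # non-exceedance probability
    rank = prob * (n + 1)                       # fractional rank in the sorted sample
    return np.interp(rank, np.arange(1, n + 1), x)

for T in (2, 20, 100):
    print(f"{T:>3}-yr daily maximum ~ {empirical_quantile(annual_max, T):.1f} mm")
```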

  12. SparkText: Biomedical Text Mining on Big Data Framework.

    PubMed

    Ye, Zhan; Tafti, Ahmad P; He, Karen Y; Wang, Kai; He, Max M

    Many new biomedical research articles are published every day, accumulating rich information, such as genetic variants, genes, diseases, and treatments. Rapid yet accurate text mining on large-scale scientific literature can discover novel knowledge to better understand human diseases and to improve the quality of disease diagnosis, prevention, and treatment. In this study, we designed and developed an efficient text mining framework called SparkText on a Big Data infrastructure, which is composed of Apache Spark data streaming and machine learning methods, combined with a Cassandra NoSQL database. To demonstrate its performance for classifying cancer types, we extracted information (e.g., breast, prostate, and lung cancers) from tens of thousands of articles downloaded from PubMed, and then employed Naïve Bayes, Support Vector Machine (SVM), and Logistic Regression to build prediction models to mine the articles. The accuracy of predicting a cancer type by SVM using the 29,437 full-text articles was 93.81%. While competing text-mining tools took more than 11 hours, SparkText mined the dataset in approximately 6 minutes. This study demonstrates the potential for mining large-scale scientific articles on a Big Data infrastructure, with real-time update from new articles published daily. SparkText can be extended to other areas of biomedical research.
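
    SparkText itself runs on Apache Spark with a Cassandra back end; as a scaled-down, hedged illustration of the classification step it describes (bag-of-words style features fed to an SVM), the sketch below uses scikit-learn instead. The three example strings and their labels are invented toy stand-ins for full-text PubMed articles.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-ins for full-text articles; SparkText used tens of thousands of PubMed papers.
texts = [
    "brca1 mutation carriers and mammography screening outcomes",
    "psa velocity and gleason score in radical prostatectomy cohorts",
    "egfr mutations predict response to tyrosine kinase inhibitors in nsclc",
]
labels = ["breast", "prostate", "lung"]

# TF-IDF features over unigrams and bigrams, classified by a linear SVM.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(texts, labels)
print(model.predict(["her2 positive breast tumors and trastuzumab therapy"]))
```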

  13. SparkText: Biomedical Text Mining on Big Data Framework

    PubMed Central

    He, Karen Y.; Wang, Kai

    2016-01-01

    Background Many new biomedical research articles are published every day, accumulating rich information, such as genetic variants, genes, diseases, and treatments. Rapid yet accurate text mining on large-scale scientific literature can discover novel knowledge to better understand human diseases and to improve the quality of disease diagnosis, prevention, and treatment. Results In this study, we designed and developed an efficient text mining framework called SparkText on a Big Data infrastructure, which is composed of Apache Spark data streaming and machine learning methods, combined with a Cassandra NoSQL database. To demonstrate its performance for classifying cancer types, we extracted information (e.g., breast, prostate, and lung cancers) from tens of thousands of articles downloaded from PubMed, and then employed Naïve Bayes, Support Vector Machine (SVM), and Logistic Regression to build prediction models to mine the articles. The accuracy of predicting a cancer type by SVM using the 29,437 full-text articles was 93.81%. While competing text-mining tools took more than 11 hours, SparkText mined the dataset in approximately 6 minutes. Conclusions This study demonstrates the potential for mining large-scale scientific articles on a Big Data infrastructure, with real-time update from new articles published daily. SparkText can be extended to other areas of biomedical research. PMID:27685652

  14. Spatial mismatch analysis among hotspots of alien plant species, road and railway networks in Germany and Austria

    PubMed Central

    Morelli, Federico

    2017-01-01

    Road and railway networks are pervasive elements of all environments and have expanded intensively over the last century in all European countries. These transportation infrastructures have major impacts on the surrounding landscape, representing a threat to biodiversity. Roadsides and railways may function as corridors for the dispersal of alien species in fragmented landscapes. However, only a few studies have explored the spread of invasive species in relation to transport networks at large spatial scales. We performed a spatial mismatch analysis, based on a spatially explicit correlation test, to investigate whether alien plant species hotspots in Germany and Austria correspond to areas of high road and railway density. We tested this independently of the effects of dominant environments in each spatial unit, in order to focus only on the correlation between the occurrence of alien species and the density of linear transportation infrastructures. We found a significant spatial association between the distribution of alien plant species hotspots and road and railway density in both countries. As expected, anthropogenic landscapes, such as urban areas, harbored more alien plant species, followed by water bodies. However, our findings suggest that the distribution of neobiota is more strongly correlated with road/railway density than with land use composition. This study provides new evidence, at a transnational scale, that alien plants can use roadsides and rail networks as colonization corridors. Furthermore, our approach contributes to the understanding of alien plant species distribution at large spatial scales when combined with spatial modeling procedures. PMID:28829818

  15. NEON: Contributing continental-scale long-term environmental data for the benefit of society

    NASA Astrophysics Data System (ADS)

    Wee, B.; Aulenbach, S.

    2011-12-01

    The National Ecological Observatory Network (NEON) is a NSF funded national investment in physical and information infrastructure. Large-scale environmental changes pose challenges that straddle environmental, economic, and social boundaries. As we develop climate adaptation strategies at the Federal, state, local, and tribal levels, accessible and usable data are essential for implementing actions that are informed by the best available information. NEON's goal is to enable understanding and forecasting of the impacts of climate change, land use change and invasive species on continental-scale ecology by providing physical and information infrastructure. The NEON framework will take standardized, long-term, coordinated measurements of related environmental variables at each of its 62 sites across the nation. These observations, collected by automated instruments, field crews, and airborne instruments, will be processed into more than 700 data products that are provided freely over the web to support research, education, and environmental management. NEON is envisioned to be an integral component of an interoperable ecosystem of credible data and information sources. Other members of this information ecosystem include Federal, commercial, and non-profit entities. NEON is actively involved with the interoperability community via forums like the Foundation for Earth Science Information Partners and the USGS Community for Data Integration in a collective effort to identify the technical standards, best practices, and organizational principles that enable the emergence of such an information ecosystem. These forums have proven to be effective innovation engines for the experimentation of new techniques that evolve into emergent standards. These standards are, for the most part, discipline agnostic. It is becoming increasingly evident that we need to include socio-economic and public health data sources in interoperability initiatives, because the dynamics of coupled natural-human systems cannot be understood in the absence of data about the human dimension. Another essential element is the community of tool and platform developers who create the infrastructure for scientists, educators, resource managers, and policy analysts to discover, analyze, and collaborate on problems using the diverse data that are required to address emerging large-scale environmental challenges. These challenges are very unlikely to be problems confined to this generation: they are urgent, compelling, and long-term problems that require a sustained effort to generate and curate data and information from observations, models, and experiments. NEON's long-term national physical and information infrastructure for environmental observation is one of the cornerstones of a framework that transforms science and information for the benefit of society.

  16. Digital Rocks Portal: a sustainable platform for imaged dataset sharing, translation and automated analysis

    NASA Astrophysics Data System (ADS)

    Prodanovic, M.; Esteva, M.; Hanlon, M.; Nanda, G.; Agarwal, P.

    2015-12-01

    Recent advances in imaging have provided a wealth of 3D datasets that reveal pore space microstructure (nm to cm length scale) and allow investigation of nonlinear flow and mechanical phenomena from first principles using numerical approaches. This framework has popularly been called "digital rock physics". Researchers, however, have trouble storing and sharing the datasets, both due to their size and due to the lack of standardized image types and associated metadata for volumetric datasets. This impedes scientific cross-validation of the numerical approaches that characterize large-scale porous media properties, as well as the development of multiscale approaches required for correct upscaling. A single research group typically specializes in an imaging modality and/or related modeling on a single length scale, and the lack of data-sharing infrastructure makes it difficult to integrate different length scales. We developed a sustainable, open and easy-to-use repository called the Digital Rocks Portal that (1) organizes images and related experimental measurements of different porous materials, and (2) improves access to them for a wider community of geosciences or engineering researchers not necessarily trained in computer science or data analysis. Once widely accepted, the repository will jumpstart productivity and enable scientific inquiry and engineering decisions founded on a data-driven basis. This is the first repository of its kind. We show initial results on incorporating essential software tools and pipelines that make it easier for researchers to store and reuse data, and for educators to quickly visualize and illustrate concepts to a wide audience. For data sustainability and continuous access, the portal is implemented within the reliable, 24/7 maintained High Performance Computing Infrastructure supported by the Texas Advanced Computing Center (TACC) at the University of Texas at Austin. Long-term storage is provided through the University of Texas System Research Cyber-infrastructure initiative.

  17. Digital Rocks Portal: a Sustainable Platform for Data Management, Analysis and Remote Visualization of Volumetric Images of Porous Media

    NASA Astrophysics Data System (ADS)

    Prodanovic, M.; Esteva, M.; Ketcham, R. A.

    2017-12-01

    Nanometer to centimeter-scale imaging such as (focused ion beam) scattered electron microscopy, magnetic resonance imaging and X-ray (micro)tomography has since the 1990s introduced 2D and 3D datasets of rock microstructure that allow investigation of nonlinear flow and mechanical phenomena on length scales that are otherwise inaccessible to laboratory measurements. The numerical approaches that use such images produce various upscaled parameters required by subsurface flow and deformation simulators. All of this has revolutionized our knowledge about grain-scale phenomena. However, a lack of data-sharing infrastructure among research groups makes it difficult to integrate different length scales. We have developed a sustainable, open and easy-to-use repository called the Digital Rocks Portal (https://www.digitalrocksportal.org) that (1) organizes images and related experimental measurements of different porous materials, and (2) improves access to them for a wider community of engineering or geosciences researchers not necessarily trained in computer science or data analysis. The Digital Rocks Portal (NSF EarthCube Grant 1541008) is the first repository for imaged porous microstructure data. It is implemented within the reliable, 24/7 maintained High Performance Computing Infrastructure supported by the Texas Advanced Computing Center (University of Texas at Austin). Long-term storage is provided through the University of Texas System Research Cyber-infrastructure initiative. We show how the data can be documented, referenced in publications via digital object identifiers, visualized, searched for and linked to other repositories. We show the recently implemented integration of remote parallel visualization, bulk upload for large datasets, as well as a preliminary flow simulation workflow with the pore structures currently stored in the repository. We discuss the issues of collecting correct metadata, data discoverability and repository sustainability.

  18. Biodiversity characterisation and hydrodynamic consequences of marine fouling communities on marine renewable energy infrastructure in the Orkney Islands Archipelago, Scotland, UK.

    PubMed

    Want, Andrew; Crawford, Rebecca; Kakkonen, Jenni; Kiddie, Greg; Miller, Susan; Harris, Robert E; Porter, Joanne S

    2017-08-01

    As part of ongoing commitments to produce electricity from renewable energy sources in Scotland, Orkney waters have been targeted for potential large-scale deployment of wave and tidal energy converting devices. Orkney has a well-developed infrastructure supporting the marine energy industry; recently enhanced by the construction of additional piers. A major concern to marine industries is biofouling on submerged structures, including energy converters and measurement instrumentation. In this study, the marine energy infrastructure and instrumentation were surveyed to characterise the biofouling. Fouling communities varied between deployment habitats; key species were identified allowing recommendations for scheduling device maintenance and preventing spread of invasive organisms. A method to measure the impact of biofouling on hydrodynamic response is described and applied to data from a wave-monitoring buoy deployed at a test site in Orkney. The results are discussed in relation to the accuracy of the measurement resources for power generation. Further applications are suggested for future testing in other scenarios, including tidal energy.

  19. Network-Friendly Gossiping

    NASA Astrophysics Data System (ADS)

    Serbu, Sabina; Rivière, Étienne; Felber, Pascal

    The emergence of large-scale distributed applications based on many-to-many communication models, e.g., broadcast and decentralized group communication, has an important impact on the underlying layers, notably the Internet routing infrastructure. To make effective use of network resources, protocols should both limit the stress (number of messages) on each infrastructure entity, such as routers and links, and balance the load in the network as much as possible. Most protocols use application-level metrics such as delays to improve the efficiency of content dissemination or routing, but the extent to which such application-centric optimizations help reduce and balance the load imposed on the infrastructure is unclear. In this paper, we elaborate on the design of such network-friendly protocols and associated metrics. More specifically, we investigate random-based gossip dissemination. We propose and evaluate different ways of making this representative protocol network-friendly while keeping its desirable properties (robustness and low delays). Simulations of the proposed methods using synthetic and real network topologies demonstrate and compare their ability to reduce and balance the load while keeping good performance.
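
    The paper's network-friendly variants are not specified in the abstract; the sketch below implements only the baseline it starts from, random push gossip, instrumented to count per-node message stress so that load and balance could be compared across variants. The node count and fanout are arbitrary.

```python
import random
from collections import Counter

def push_gossip(n_nodes=1000, fanout=3, seed=0):
    """Rounds-based random push gossip; returns rounds to full coverage and per-node stress."""
    rng = random.Random(seed)
    informed = {0}                       # node 0 holds the rumor initially
    stress = Counter()                   # messages received per node (load proxy)
    rounds = 0
    while len(informed) < n_nodes:
        rounds += 1
        new = set()
        for _src in informed:
            for dst in rng.sample(range(n_nodes), fanout):
                stress[dst] += 1
                if dst not in informed:
                    new.add(dst)
        informed |= new
    return rounds, stress

rounds, stress = push_gossip()
print("rounds to full coverage:", rounds)
print("max / mean per-node stress:", max(stress.values()), sum(stress.values()) / len(stress))
```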

  20. 'Two clicks and I'm in!' Patients as co-actors in managing health data through a personal health record infrastructure.

    PubMed

    Zanutto, Alberto

    2017-06-01

    One of the most significant changes in the healthcare field in the past 10 years has been the large-scale digitalization of patients' healthcare data, and an increasing emphasis on the importance of patients' roles in cooperating with healthcare professionals through digital infrastructures. A project carried out in the North of Italy with the aim of creating a personal health record has been evaluated over the course of 5 years by means of mixed method fieldwork. Two years after the infrastructure was put into regular service, the way in which patients are represented in the system and patient practices have been studied using surveys and qualitative interviews. The data show that, first, patients have become co-actors in describing their clinical histories; second, that they have become co-actors in the diagnosis process; and finally, they have become co-actors in the management of time and space as regards their specific state of health.

  1. A Real-Time Web of Things Framework with Customizable Openness Considering Legacy Devices

    PubMed Central

    Zhao, Shuai; Yu, Le; Cheng, Bo

    2016-01-01

    With the development of the Internet of Things (IoT), resources and applications based on it have emerged on a large scale. However, most efforts are “silo” solutions where devices and applications are tightly coupled. Infrastructures are needed to connect sensors to the Internet, open up and break the current application silos and move to a horizontal application mode. Based on the concept of Web of Things (WoT), many infrastructures have been proposed to integrate the physical world with the Web. However, issues such as no real-time guarantee, lack of fine-grained control of data, and the absence of explicit solutions for integrating heterogeneous legacy devices, hinder their widespread and practical use. To address these issues, this paper proposes a WoT resource framework that provides the infrastructures for the customizable openness and sharing of users’ data and resources under the premise of ensuring the real-time behavior of their own applications. The proposed framework is validated by actual systems and experimental evaluations. PMID:27690038
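
    The abstract's notion of customizable openness with fine-grained control of data can be illustrated with a hypothetical Web-style resource that filters what each caller may read. The sketch below uses Flask with invented field names and API keys; it is not the framework described in the paper, only a minimal stand-in for the idea.

```python
from flask import Flask, jsonify, request, abort

app = Flask(__name__)

# Hypothetical per-field sharing policy: which fields each API key may read.
POLICY = {"public-key": {"temperature"},
          "owner-key": {"temperature", "location", "raw_trace"}}
SENSOR_STATE = {"temperature": 21.4, "location": "lab-3", "raw_trace": [21.2, 21.3, 21.4]}

@app.route("/things/thermostat")
def read_thermostat():
    allowed = POLICY.get(request.headers.get("X-API-Key", ""), set())
    if not allowed:
        abort(403)
    # Expose only the fields the caller's key is entitled to see.
    return jsonify({k: v for k, v in SENSOR_STATE.items() if k in allowed})

if __name__ == "__main__":
    app.run(port=8080)
```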

  2. A Real-Time Web of Things Framework with Customizable Openness Considering Legacy Devices.

    PubMed

    Zhao, Shuai; Yu, Le; Cheng, Bo

    2016-09-28

    With the development of the Internet of Things (IoT), resources and applications based on it have emerged on a large scale. However, most efforts are "silo" solutions where devices and applications are tightly coupled. Infrastructures are needed to connect sensors to the Internet, open up and break the current application silos and move to a horizontal application mode. Based on the concept of Web of Things (WoT), many infrastructures have been proposed to integrate the physical world with the Web. However, issues such as no real-time guarantee, lack of fine-grained control of data, and the absence of explicit solutions for integrating heterogeneous legacy devices, hinder their widespread and practical use. To address these issues, this paper proposes a WoT resource framework that provides the infrastructures for the customizable openness and sharing of users' data and resources under the premise of ensuring the real-time behavior of their own applications. The proposed framework is validated by actual systems and experimental evaluations.

  3. Informing watershed connectivity barrier prioritization decisions: A synthesis

    USGS Publications Warehouse

    McKay, S. K.; Cooper, A. R.; Diebel, M.W.; Elkins, D.; Oldford, G.; Roghair, C.; Wieferich, Daniel J.

    2017-01-01

    Water resources and transportation infrastructure such as dams and culverts provide countless socio-economic benefits; however, this infrastructure can also disconnect the movement of organisms, sediment, and water through river ecosystems. Trade-offs associated with these competing costs and benefits occur globally, with applications in barrier addition (e.g. dam and road construction), reengineering (e.g. culvert repair), and removal (e.g. dam removal and aging infrastructure). Barrier prioritization provides a unique opportunity to: (i) restore and reconnect potentially large habitat patches quickly and effectively and (ii) avoid impacts prior to occurrence in line with the mitigation hierarchy (i.e. avoid then minimize then mitigate). This paper synthesizes 46 watershed-scale barrier planning studies and presents a procedure to guide barrier prioritization associated with connectivity for aquatic organisms. We focus on practical issues informing prioritization studies such as available data sets, methods, techniques, and tools. We conclude with a discussion of emerging trends and issues in barrier prioritization and key opportunities for enhancing the body of knowledge.

  4. Design Aspects of the Rayleigh Convection Code

    NASA Astrophysics Data System (ADS)

    Featherstone, N. A.

    2017-12-01

    Understanding the long-term generation of planetary or stellar magnetic fields requires complementary knowledge of the large-scale fluid dynamics pervading large fractions of the object's interior. Such large-scale motions are sensitive to the system's geometry which, in planets and stars, is spherical to a good approximation. As a result, computational models designed to study such systems often solve the MHD equations in spherical geometry, frequently employing a spectral approach involving spherical harmonics. We present computational and user-interface design aspects of one such modeling tool, the Rayleigh convection code, which is suitable for deployment on desktop and petascale HPC architectures alike. In this poster, we will present an overview of this code's parallel design and its built-in diagnostics-output package. Rayleigh has been developed with NSF support through the Computational Infrastructure for Geodynamics and is expected to be released as open-source software in winter 2017/2018.
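
    Pseudo-spectral convection codes of this kind expand fields in spherical harmonics. As a hedged illustration of the underlying operation (direct quadrature rather than the fast transforms a production code would use), the sketch below projects a toy field onto a single Y_l^m with SciPy; it is not Rayleigh's implementation.

```python
import numpy as np
from scipy.special import sph_harm

# Project a scalar field on the sphere onto one spherical-harmonic mode by quadrature.
l, m = 2, 1
ntheta, nphi = 128, 64                                 # azimuth x colatitude grid (arbitrary)
theta = np.linspace(0, 2 * np.pi, ntheta, endpoint=False)   # azimuth
phi = np.linspace(0, np.pi, nphi)                            # colatitude
T, P = np.meshgrid(theta, phi, indexing="ij")

Y = sph_harm(m, l, T, P)               # SciPy convention: sph_harm(order, degree, azimuth, colatitude)
field = 3.0 * Y.real                   # toy field with a known Y_2^1 component

dtheta, dphi = theta[1] - theta[0], phi[1] - phi[0]
coeff = np.sum(field * np.conj(Y) * np.sin(P)) * dtheta * dphi   # <field, Y_l^m> over the sphere
print(coeff)   # real part close to 1.5, since <Re(Y), Y> = 1/2 for m != 0
```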

  5. Towards agile large-scale predictive modelling in drug discovery with flow-based programming design principles.

    PubMed

    Lampa, Samuel; Alvarsson, Jonathan; Spjuth, Ola

    2016-01-01

    Predictive modelling in drug discovery is challenging to automate as it often contains multiple analysis steps and might involve cross-validation and parameter tuning that create complex dependencies between tasks. With large-scale data or when using computationally demanding modelling methods, e-infrastructures such as high-performance or cloud computing are required, adding to the existing challenges of fault-tolerant automation. Workflow management systems can aid in many of these challenges, but the currently available systems are lacking in the functionality needed to enable agile and flexible predictive modelling. Here we present an approach inspired by elements of the flow-based programming paradigm, implemented as an extension of the Luigi system, which we name SciLuigi. We also discuss the experiences from using the approach when modelling a large set of biochemical interactions using a shared computer cluster.
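
    SciLuigi builds on the Luigi workflow library; its own flow-based, named-port API differs from what is shown here. The sketch below uses plain Luigi to illustrate the underlying idea of declaratively chained tasks with tunable parameters; the task names, file paths and hyperparameter are made up.

```python
import luigi

class PrepareData(luigi.Task):
    """Stand-in preprocessing step (file names are illustrative)."""
    def output(self):
        return luigi.LocalTarget("data/interactions_clean.tsv")

    def run(self):
        with self.output().open("w") as out:
            out.write("compound\ttarget\tactivity\n")

class TrainModel(luigi.Task):
    """Depends on PrepareData; Luigi resolves and runs the chain in order."""
    c_param = luigi.FloatParameter(default=1.0)   # example of a tunable hyperparameter

    def requires(self):
        return PrepareData()

    def output(self):
        return luigi.LocalTarget(f"models/model_c{self.c_param}.txt")

    def run(self):
        with self.input().open() as data, self.output().open("w") as out:
            out.write(f"trained on {len(data.readlines())} rows with C={self.c_param}\n")

if __name__ == "__main__":
    luigi.build([TrainModel(c_param=0.5)], local_scheduler=True)
```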

  6. A Cross-Platform Infrastructure for Scalable Runtime Application Performance Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dongarra, Jack; Moore, Shirley; Miller, Bart; Hollingsworth, Jeffrey

    2005-03-15

    The purpose of this project was to build an extensible cross-platform infrastructure to facilitate the development of accurate and portable performance analysis tools for current and future high performance computing (HPC) architectures. Major accomplishments include tools and techniques for multidimensional performance analysis, as well as improved support for dynamic performance monitoring of multithreaded and multiprocess applications. Previous performance tool development has been limited by the burden of having to re-write a platform-dependent low-level substrate for each architecture/operating system pair in order to obtain the necessary performance data from the system. Manual interpretation of performance data is not scalable for large-scale, long-running applications. The infrastructure developed by this project provides a foundation for building portable and scalable performance analysis tools, with the end goal being to provide application developers with the information they need to analyze, understand, and tune the performance of terascale applications on HPC architectures. The backend portion of the infrastructure provides runtime instrumentation capability and access to hardware performance counters, with thread-safety for shared memory environments and a communication substrate to support instrumentation of multiprocess and distributed programs. Front-end interfaces provide tool developers with a well-defined, platform-independent set of calls for requesting performance data. End-user tools have been developed that demonstrate runtime data collection, on-line and off-line analysis of performance data, and multidimensional performance analysis. The infrastructure is based on two underlying performance instrumentation technologies. These technologies are the PAPI cross-platform library interface to hardware performance counters and the cross-platform Dyninst library interface for runtime modification of executable images. The Paradyn and KOJAK projects have made use of this infrastructure to build performance measurement and analysis tools that scale to long-running programs on large parallel and distributed systems and that automate much of the search for performance bottlenecks.
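
    The report describes a substrate for runtime instrumentation and platform-independent requests for performance data. As a purely conceptual stand-in (this is not the PAPI or Dyninst API), the sketch below shows a thread-safe instrumentation hook that accumulates per-function timings at runtime.

```python
import functools
import threading
import time
from collections import defaultdict

# Thread-safe accumulator of per-function wall-clock timings: a toy stand-in for the
# kind of data a performance-monitoring substrate collects. This is NOT the PAPI or
# Dyninst interface, just an illustration of runtime instrumentation in pure Python.
_lock = threading.Lock()
_timings = defaultdict(list)

def instrument(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed = time.perf_counter() - start
            with _lock:
                _timings[func.__qualname__].append(elapsed)
    return wrapper

@instrument
def kernel(n):
    return sum(i * i for i in range(n))

threads = [threading.Thread(target=kernel, args=(200_000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print({name: (len(v), round(sum(v), 4)) for name, v in _timings.items()})
```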

  7. Extremely Large Telescope Project Selected in ESFRI Roadmap

    NASA Astrophysics Data System (ADS)

    2006-10-01

    In its first Roadmap, the European Strategy Forum on Research Infrastructures (ESFRI) chose the European Extremely Large Telescope (ELT), for which ESO is presently developing a Reference Design, as one of the large-scale projects to be conducted in astronomy, and the only one in optical astronomy. The aim of the ELT project is to build, before the end of the next decade, an optical/near-infrared telescope with a diameter in the 30-60 m range. The ESFRI Roadmap states: "Extremely Large Telescopes are seen world-wide as one of the highest priorities in ground-based astronomy. They will vastly advance astrophysical knowledge allowing detailed studies of inter alia planets around other stars, the first objects in the Universe, super-massive Black Holes, and the nature and distribution of the Dark Matter and Dark Energy which dominate the Universe. The European Extremely Large Telescope project will maintain and reinforce Europe's position at the forefront of astrophysical research." Said Catherine Cesarsky, Director General of ESO: "In 2004, the ESO Council mandated ESO to play a leading role in the development of an ELT for Europe's astronomers. To that end, ESO has undertaken conceptual studies for ELTs and is currently also leading a consortium of European institutes engaged in studying enabling technologies for such a telescope. The inclusion of the ELT in the ESFRI roadmap, together with the comprehensive preparatory work already done, paves the way for the next phase of this exciting project, the design phase." ESO is currently working, in close collaboration with the European astronomical community and industry, on a baseline design for an Extremely Large Telescope. The plan is a telescope with a primary mirror between 30 and 60 metres in diameter and a financial envelope of about 750 million euros. It aims at more than a factor of ten improvement in overall performance compared to the current leader in ground-based astronomy: the ESO Very Large Telescope at the Paranal Observatory. The draft Baseline Reference Design will be presented to the wider scientific community on 29-30 November 2006 at a dedicated ELT Workshop Meeting in Marseille (France) and will be further iterated. The design is then to be presented to the ESO Council at the end of 2006. The goal is to start the detailed E-ELT design work by the first half of 2007. Launched in April 2002, the European Strategy Forum on Research Infrastructures was set up following a recommendation of the European Union Council, with the role of supporting a coherent approach to policy-making on research infrastructures in Europe, and of acting as an incubator for international negotiations about concrete initiatives. In particular, ESFRI has prepared a European Roadmap identifying new Research Infrastructures of pan-European interest corresponding to the long-term needs of the European research communities, covering all scientific areas, regardless of possible location and likely to be realised in the next 10 to 20 years. The Roadmap was presented on 19 October. It is the result of an intensive two-year consultation and peer-review process involving over 1000 high-level European and international experts.
The Roadmap identifies 35 large-scale infrastructure projects, at various stages of development, in seven key research areas: Environmental Sciences; Energy; Materials Sciences; Astrophysics, Astronomy, Particle and Nuclear Physics; Biomedical and Life Sciences; Social Sciences and the Humanities; and Computation and Data Treatment.

  8. Large-scale Meteorological Patterns Associated with Extreme Precipitation Events over Portland, OR

    NASA Astrophysics Data System (ADS)

    Aragon, C.; Loikith, P. C.; Lintner, B. R.; Pike, M.

    2017-12-01

    Extreme precipitation events can have profound impacts on human life and infrastructure, with broad implications across a range of stakeholders. Changes to extreme precipitation events are a projected outcome of climate change that warrants further study, especially at regional to local scales. While global climate models are generally capable of simulating mean climate at global to regional scales with reasonable skill, resiliency and adaptation decisions are made at local scales, where most state-of-the-art climate models are limited by coarse resolution. Characterization of large-scale meteorological patterns associated with extreme precipitation events at local scales can provide climatic information without this scale limitation, thus facilitating stakeholder decision-making. This research will use synoptic climatology as a tool by which to characterize the key large-scale meteorological patterns associated with extreme precipitation events in the Portland, Oregon metro region. Composite analysis of meteorological patterns associated with extreme precipitation days, and associated watershed-specific flooding, is employed to enhance understanding of the climatic drivers behind such events. The self-organizing maps approach is then used to characterize the within-composite variability of the large-scale meteorological patterns associated with extreme precipitation events, allowing us to better understand the different types of meteorological conditions that lead to high-impact precipitation events and associated hydrologic impacts. A more comprehensive understanding of the meteorological drivers of extremes will aid in evaluation of the ability of climate models to capture key patterns associated with extreme precipitation over Portland and to better interpret projections of future climate at impact-relevant scales.
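
    The self-organizing maps approach mentioned above arranges characteristic large-scale patterns on a small grid of nodes. The sketch below is a minimal, hand-rolled SOM trained on synthetic stand-in data; the grid size, learning schedule and random data are assumptions, not the study's configuration.

```python
import numpy as np

def train_som(data, grid=(4, 4), iters=2000, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal self-organizing map: each grid cell holds a prototype pattern;
    training pulls the best-matching cell (and its neighbours) towards each sample."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    weights = rng.normal(size=(rows, cols, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        frac = t / iters
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 1e-3
        bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), (rows, cols))
        dist2 = ((coords - np.array(bmu)) ** 2).sum(-1)
        h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]   # neighbourhood kernel
        weights += lr * h * (x - weights)
    return weights

# Toy stand-in for daily circulation anomaly patterns (e.g. flattened height fields).
rng = np.random.default_rng(1)
patterns = rng.normal(size=(500, 20))
som = train_som(patterns)
print("SOM node prototypes:", som.shape)   # (4, 4, 20): a 4x4 map of characteristic patterns
```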

  9. Integration of a neuroimaging processing pipeline into a pan-canadian computing grid

    NASA Astrophysics Data System (ADS)

    Lavoie-Courchesne, S.; Rioux, P.; Chouinard-Decorte, F.; Sherif, T.; Rousseau, M.-E.; Das, S.; Adalat, R.; Doyon, J.; Craddock, C.; Margulies, D.; Chu, C.; Lyttelton, O.; Evans, A. C.; Bellec, P.

    2012-02-01

    The ethos of the neuroimaging field is quickly moving towards the open sharing of resources, including both imaging databases and processing tools. As a neuroimaging database represents a large volume of datasets and as neuroimaging processing pipelines are composed of heterogeneous, computationally intensive tools, such open sharing raises specific computational challenges. This motivates the design of novel dedicated computing infrastructures. This paper describes an interface between PSOM, a code-oriented pipeline development framework, and CBRAIN, a web-oriented platform for grid computing. This interface was used to integrate a PSOM-compliant pipeline for preprocessing of structural and functional magnetic resonance imaging into CBRAIN. We further tested the capacity of our infrastructure to handle a real large-scale project. A neuroimaging database including close to 1000 subjects was preprocessed using our interface and publicly released to help the participants of the ADHD-200 international competition. This successful experiment demonstrated that our integrated grid-computing platform is a powerful solution for high-throughput pipeline analysis in the field of neuroimaging.

  10. Team building: electronic management-clinical translational research (eM-CTR) systems.

    PubMed

    Cecchetti, Alfred A; Parmanto, Bambang; Vecchio, Marcella L; Ahmad, Sjarif; Buch, Shama; Zgheib, Nathalie K; Groark, Stephen J; Vemuganti, Anupama; Romkes, Marjorie; Sciurba, Frank; Donahoe, Michael P; Branch, Robert A

    2009-12-01

    Classical drug exposure-response studies in clinical pharmacology represent the quintessential prototype for bench-to-bedside clinical translational research. A fundamental premise of this approach is for a multidisciplinary team of researchers to design and execute complex, in-depth mechanistic studies conducted in relatively small groups of subjects. The infrastructure support for this genre of clinical research is not well handled by scaling down the infrastructure used for large Phase III clinical trials. We describe a novel, integrated strategy whose focus is to support and manage a study using an Information Hub, Communication Hub, and Data Hub design. This design is illustrated by an application to a series of varied projects sponsored by Special Clinical Centers of Research in chronic obstructive pulmonary disease at the University of Pittsburgh. In contrast to classical informatics support, it is readily scalable to large studies. Our experience suggests that the cultural consequences of research group self-empowerment are not only economically efficient but transformative to the research process.

  11. Smart City Pilot Projects Using LoRa and IEEE802.15.4 Technologies.

    PubMed

    Pasolini, Gianni; Buratti, Chiara; Feltrin, Luca; Zabini, Flavio; De Castro, Cristina; Verdone, Roberto; Andrisano, Oreste

    2018-04-06

    Information and Communication Technologies (ICTs), through wireless communications and the Internet of Things (IoT) paradigm, are the enabling keys for transforming traditional cities into smart cities, since they provide the core infrastructure behind public utilities and services. However, to be effective, IoT-based services could require different technologies and network topologies, even when addressing the same urban scenario. In this paper, we highlight this aspect and present two smart city testbeds developed in Italy. The first one concerns a smart infrastructure for public lighting and relies on a heterogeneous network using the IEEE 802.15.4 short-range communication technology, whereas the second one addresses smart-building applications and is based on the LoRa low-rate, long-range communication technology. The smart lighting scenario is discussed providing the technical details and the economic benefits of a large-scale (around 3000 light poles) flexible and modular implementation of a public lighting infrastructure, while the smart-building testbed is investigated, through measurement campaigns and simulations, assessing the coverage and the performance of the LoRa technology in a real urban scenario. Results show that a proper parameter setting is needed to cover large urban areas while maintaining the airtime sufficiently low to keep packet losses at satisfactory levels.
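
    The coverage-versus-airtime trade-off discussed for LoRa follows directly from the Semtech time-on-air formula, in which higher spreading factors extend range but lengthen every symbol. The sketch below computes time-on-air for a few spreading factors; the payload size and radio settings are illustrative assumptions, not the testbed's configuration.

```python
from math import ceil

def lora_airtime(payload_bytes, sf, bw_hz=125_000, cr=1, preamble_symbols=8,
                 explicit_header=True, low_dr_optimize=None, crc=True):
    """Time-on-air (seconds) of a LoRa frame, per the Semtech SX127x airtime formula.
    cr=1..4 maps to coding rates 4/5..4/8."""
    t_sym = (2 ** sf) / bw_hz
    de = int(low_dr_optimize if low_dr_optimize is not None else t_sym > 0.016)
    ih = 0 if explicit_header else 1
    num = 8 * payload_bytes - 4 * sf + 28 + 16 * int(crc) - 20 * ih
    n_payload = 8 + max(ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble_symbols + 4.25 + n_payload) * t_sym

# Sweeping the spreading factor shows the coverage/airtime trade-off the paper measures:
for sf in (7, 9, 12):
    print(f"SF{sf}: {1000 * lora_airtime(20, sf):.1f} ms")   # 20-byte payload, 125 kHz
```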

  12. Smart City Pilot Projects Using LoRa and IEEE802.15.4 Technologies

    PubMed Central

    Buratti, Chiara; Zabini, Flavio; De Castro, Cristina; Verdone, Roberto; Andrisano, Oreste

    2018-01-01

    Information and Communication Technologies (ICTs), through wireless communications and the Internet of Things (IoT) paradigm, are the enabling keys for transforming traditional cities into smart cities, since they provide the core infrastructure behind public utilities and services. However, to be effective, IoT-based services could require different technologies and network topologies, even when addressing the same urban scenario. In this paper, we highlight this aspect and present two smart city testbeds developed in Italy. The first one concerns a smart infrastructure for public lighting and relies on a heterogeneous network using the IEEE 802.15.4 short-range communication technology, whereas the second one addresses smart-building applications and is based on the LoRa low-rate, long-range communication technology. The smart lighting scenario is discussed providing the technical details and the economic benefits of a large-scale (around 3000 light poles) flexible and modular implementation of a public lighting infrastructure, while the smart-building testbed is investigated, through measurement campaigns and simulations, assessing the coverage and the performance of the LoRa technology in a real urban scenario. Results show that a proper parameter setting is needed to cover large urban areas while maintaining the airtime sufficiently low to keep packet losses at satisfactory levels. PMID:29642391

  13. Development of an informatics infrastructure for data exchange of biomolecular simulations: architecture, data models and ontology

    PubMed Central

    Thibault, J. C.; Roe, D. R.; Eilbeck, K.; Cheatham, T. E.; Facelli, J. C.

    2015-01-01

    Biomolecular simulations aim to simulate structure, dynamics, interactions, and energetics of complex biomolecular systems. With the recent advances in hardware, it is now possible to use more complex and accurate models, but also to reach time scales that are biologically significant. Molecular simulations have become a standard tool for toxicology and pharmacology research, but organizing and sharing data – both within the same organization and among different ones – remains a substantial challenge. In this paper we review our recent work leading to the development of a comprehensive informatics infrastructure to facilitate the organization and exchange of biomolecular simulations data. Our efforts include the design of data models and dictionary tools that allow the standardization of the metadata used to describe the biomedical simulations, the development of a thesaurus and ontology for computational reasoning when searching for biomolecular simulations in distributed environments, and the development of systems based on these models to manage and share the data at a large scale (iBIOMES) and within smaller groups of researchers at laboratory scale (iBIOMES Lite), which take advantage of the standardization of the metadata used to describe biomolecular simulations. PMID:26387907

  14. Development of an informatics infrastructure for data exchange of biomolecular simulations: Architecture, data models and ontology.

    PubMed

    Thibault, J C; Roe, D R; Eilbeck, K; Cheatham, T E; Facelli, J C

    2015-01-01

    Biomolecular simulations aim to simulate structure, dynamics, interactions, and energetics of complex biomolecular systems. With the recent advances in hardware, it is now possible to use more complex and accurate models, but also to reach time scales that are biologically significant. Molecular simulations have become a standard tool for toxicology and pharmacology research, but organizing and sharing data - both within the same organization and among different ones - remains a substantial challenge. In this paper we review our recent work leading to the development of a comprehensive informatics infrastructure to facilitate the organization and exchange of biomolecular simulations data. Our efforts include the design of data models and dictionary tools that allow the standardization of the metadata used to describe the biomedical simulations, the development of a thesaurus and ontology for computational reasoning when searching for biomolecular simulations in distributed environments, and the development of systems based on these models to manage and share the data at a large scale (iBIOMES) and within smaller groups of researchers at laboratory scale (iBIOMES Lite), which take advantage of the standardization of the metadata used to describe biomolecular simulations.

  15. ATLAS EventIndex general dataflow and monitoring infrastructure

    NASA Astrophysics Data System (ADS)

    Fernández Casaní, Á.; Barberis, D.; Favareto, A.; García Montoro, C.; González de la Hoz, S.; Hřivnáč, J.; Prokoshin, F.; Salt, J.; Sánchez, J.; Többicke, R.; Yuan, R.; ATLAS Collaboration

    2017-10-01

    The ATLAS EventIndex has been running in production since mid-2015, reliably collecting information worldwide about all produced events and storing them in a central Hadoop infrastructure at CERN. A subset of this information is copied to an Oracle relational database for fast dataset discovery, event picking, cross-checks with other ATLAS systems and checks for event duplication. The system design and its optimization serve event-picking requests ranging from a few events up to tens of thousands of events; in addition, data consistency checks are performed for large production campaigns. Detecting duplicate events within the scope of physics collections has recently arisen as an important use case. This paper describes the general architecture of the project and the data flow and operation issues, which are addressed by recent developments to improve the throughput of the overall system. In this direction, the data collection system is reducing its usage of the messaging infrastructure to overcome the performance shortcomings detected during production peaks; an object storage approach is instead used to convey the event index information, with messages signalling their location and status. Recent changes in the Producer/Consumer architecture are also presented in detail, as well as the monitoring infrastructure.

  16. Future opportunities and trends for e-infrastructures and life sciences: going beyond the grid to enable life science data analysis

    PubMed Central

    Duarte, Afonso M. S.; Psomopoulos, Fotis E.; Blanchet, Christophe; Bonvin, Alexandre M. J. J.; Corpas, Manuel; Franc, Alain; Jimenez, Rafael C.; de Lucas, Jesus M.; Nyrönen, Tommi; Sipos, Gergely; Suhr, Stephanie B.

    2015-01-01

    With the increasingly rapid growth of data in life sciences we are witnessing a major transition in the way research is conducted, from hypothesis-driven studies to data-driven simulations of whole systems. Such approaches necessitate the use of large-scale computational resources and e-infrastructures, such as the European Grid Infrastructure (EGI). EGI, one of the key enablers of the digital European Research Area, is a federation of resource providers set up to deliver sustainable, integrated and secure computing services to European researchers and their international partners. Here we aim to provide the state of the art of Grid/Cloud computing in EU research as viewed from within the field of life sciences, focusing on key infrastructures and projects within the life sciences community. Rather than focusing purely on the technical aspects underlying the currently provided solutions, we outline the design aspects and key characteristics that can be identified across major research approaches. Overall, we aim to provide significant insights into the road ahead by establishing ever-strengthening connections between EGI as a whole and the life sciences community. PMID:26157454

  17. Future opportunities and trends for e-infrastructures and life sciences: going beyond the grid to enable life science data analysis.

    PubMed

    Duarte, Afonso M S; Psomopoulos, Fotis E; Blanchet, Christophe; Bonvin, Alexandre M J J; Corpas, Manuel; Franc, Alain; Jimenez, Rafael C; de Lucas, Jesus M; Nyrönen, Tommi; Sipos, Gergely; Suhr, Stephanie B

    2015-01-01

    With the increasingly rapid growth of data in life sciences we are witnessing a major transition in the way research is conducted, from hypothesis-driven studies to data-driven simulations of whole systems. Such approaches necessitate the use of large-scale computational resources and e-infrastructures, such as the European Grid Infrastructure (EGI). EGI, one of the key enablers of the digital European Research Area, is a federation of resource providers set up to deliver sustainable, integrated and secure computing services to European researchers and their international partners. Here we aim to provide the state of the art of Grid/Cloud computing in EU research as viewed from within the field of life sciences, focusing on key infrastructures and projects within the life sciences community. Rather than focusing purely on the technical aspects underlying the currently provided solutions, we outline the design aspects and key characteristics that can be identified across major research approaches. Overall, we aim to provide significant insights into the road ahead by establishing ever-strengthening connections between EGI as a whole and the life sciences community.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bazillian, Morgan; Pedersen, Ascha Lychett; Pless, Jacuelyn

    Shale gas resource potential in China is assessed to be large, and its development could have wide-ranging economic, environmental, and energy security implications. Although commercial scale shale gas development has not yet begun in China, it holds the potential to change the global energy landscape. Chinese decision-makers are wrestling with the challenges associated with bringing the potential to reality: geologic complexity; infrastructure and logistical difficulties; technological, institutional, social and market development issues; and environmental impacts, including greenhouse gas emissions, impacts on water availability and quality, and air pollution. This paper briefly examines the current situation and outlook for shale gas in China, and explores existing and potential avenues for international cooperation. We find that despite some barriers to large-scale development, Chinese shale gas production has the potential to grow rapidly over the medium-term.

  19. Drivers and barriers to e-invoicing adoption in Greek large scale manufacturing industries

    NASA Astrophysics Data System (ADS)

    Marinagi, Catherine; Trivellas, Panagiotis; Reklitis, Panagiotis; Skourlas, Christos

    2015-02-01

    This paper attempts to investigate the drivers and barriers that large-scale Greek manufacturing industries experience in adopting electronic invoices (e-invoices), based on three case studies with organizations having an international presence in many countries. The study focuses on the drivers that may affect the increase in the adoption and use of e-invoicing, including customers' demand for e-invoices and sufficient know-how and adoption of e-invoicing in organizations. In addition, the study reveals important barriers that prevent the expansion of e-invoicing, such as suppliers' reluctance to implement e-invoicing and IT infrastructure incompatibilities. Other issues examined by this study include the observed benefits from e-invoicing implementation and the financial priorities of the organizations assumed to be supported by e-invoicing.

  20. The International Symposium on Grids and Clouds

    NASA Astrophysics Data System (ADS)

    The International Symposium on Grids and Clouds (ISGC) 2012 will be held at Academia Sinica in Taipei from 26 February to 2 March 2012, with co-located events and workshops. The conference is hosted by the Academia Sinica Grid Computing Centre (ASGC). 2012 is the decennium anniversary of the ISGC which over the last decade has tracked the convergence, collaboration and innovation of individual researchers across the Asia Pacific region to a coherent community. With the continuous support and dedication from the delegates, ISGC has provided the primary international distributed computing platform where distinguished researchers and collaboration partners from around the world share their knowledge and experiences. The last decade has seen the wide-scale emergence of e-Infrastructure as a critical asset for the modern e-Scientist. The emergence of large-scale research infrastructures and instruments that has produced a torrent of electronic data is forcing a generational change in the scientific process and the mechanisms used to analyse the resulting data deluge. No longer can the processing of these vast amounts of data and production of relevant scientific results be undertaken by a single scientist. Virtual Research Communities that span organisations around the world, through an integrated digital infrastructure that connects the trust and administrative domains of multiple resource providers, have become critical in supporting these analyses. Topics covered in ISGC 2012 include: High Energy Physics, Biomedicine & Life Sciences, Earth Science, Environmental Changes and Natural Disaster Mitigation, Humanities & Social Sciences, Operations & Management, Middleware & Interoperability, Security and Networking, Infrastructure Clouds & Virtualisation, Business Models & Sustainability, Data Management, Distributed Volunteer & Desktop Grid Computing, High Throughput Computing, and High Performance, Manycore & GPU Computing.

  1. The future of emissions trading in light of the acid rain experience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McLean, B.J.; Rico, R.

    1995-12-31

    The idea of emissions trading was developed more than two decades ago by environmental economists eager to provide new ideas for how to improve the efficiency of environmental protection. However, early emissions trading efforts were built on the historical "command and control" infrastructure which has dominated U.S. environmental protection until today. The "command and control" model initially had advantages that were of a very pragmatic character: it assured large pollution reductions in a time when large, cheap reductions were available and necessary; and it did not require a sophisticated government infrastructure. Within the last five years, large-scale emission trading programs have been successfully designed and started that are fundamentally different from the earlier efforts, creating a new paradigm for environmental control just when our understanding of environmental problems is changing as well. The purpose of this paper is to focus on the largest national-scale program--the Acid Rain Program--and from that experience, forecast where emission trading programs may be headed based on our understanding of the factors currently influencing environmental management. The first section of this paper will briefly review the history of emissions trading programs, followed by a summary of the features of the Acid Rain Program, highlighting those features that distinguish it from previous efforts. The last section addresses the opportunities for emissions trading (and its probable future directions).

  2. Effect of infrastructure design on commons dilemmas in social-ecological system dynamics.

    PubMed

    Yu, David J; Qubbaj, Murad R; Muneepeerakul, Rachata; Anderies, John M; Aggarwal, Rimjhim M

    2015-10-27

    The use of shared infrastructure to direct natural processes for the benefit of humans has been a central feature of human social organization for millennia. Today, more than ever, people interact with one another and the environment through shared human-made infrastructure (the Internet, transportation, the energy grid, etc.). However, there has been relatively little work on how the design characteristics of shared infrastructure affect the dynamics of social-ecological systems (SESs) and the capacity of groups to solve social dilemmas associated with its provision. Developing such understanding is especially important in the context of global change where design criteria must consider how specific aspects of infrastructure affect the capacity of SESs to maintain vital functions in the face of shocks. Using small-scale irrigated agriculture (the most ancient and ubiquitous example of public infrastructure systems) as a model system, we show that two design features related to scale and the structure of benefit flows can induce fundamental changes in qualitative behavior, i.e., regime shifts. By relating the required maintenance threshold (a design feature related to infrastructure scale) to the incentives facing users under different regimes, our work also provides some general guidance on determinants of robustness of SESs under globalization-related stresses.

  3. Effect of infrastructure design on commons dilemmas in social−ecological system dynamics

    PubMed Central

    Yu, David J.; Qubbaj, Murad R.; Muneepeerakul, Rachata; Anderies, John M.; Aggarwal, Rimjhim M.

    2015-01-01

    The use of shared infrastructure to direct natural processes for the benefit of humans has been a central feature of human social organization for millennia. Today, more than ever, people interact with one another and the environment through shared human-made infrastructure (the Internet, transportation, the energy grid, etc.). However, there has been relatively little work on how the design characteristics of shared infrastructure affect the dynamics of social−ecological systems (SESs) and the capacity of groups to solve social dilemmas associated with its provision. Developing such understanding is especially important in the context of global change where design criteria must consider how specific aspects of infrastructure affect the capacity of SESs to maintain vital functions in the face of shocks. Using small-scale irrigated agriculture (the most ancient and ubiquitous example of public infrastructure systems) as a model system, we show that two design features related to scale and the structure of benefit flows can induce fundamental changes in qualitative behavior, i.e., regime shifts. By relating the required maintenance threshold (a design feature related to infrastructure scale) to the incentives facing users under different regimes, our work also provides some general guidance on determinants of robustness of SESs under globalization-related stresses. PMID:26460043

  4. Multiple pathways of commodity crop expansion in tropical forest landscapes

    NASA Astrophysics Data System (ADS)

    Meyfroidt, Patrick; Carlson, Kimberly M.; Fagan, Matthew E.; Gutiérrez-Vélez, Victor H.; Macedo, Marcia N.; Curran, Lisa M.; DeFries, Ruth S.; Dyer, George A.; Gibbs, Holly K.; Lambin, Eric F.; Morton, Douglas C.; Robiglio, Valentina

    2014-07-01

    Commodity crop expansion, for both global and domestic urban markets, follows multiple land change pathways entailing direct and indirect deforestation, and results in various social and environmental impacts. Here we compare six published case studies of rapid commodity crop expansion within forested tropical regions. Across cases, between 1.7% and 89.5% of new commodity cropland was sourced from forestlands. Four main factors controlled pathways of commodity crop expansion: (i) the availability of suitable forestland, which is determined by forest area, agroecological or accessibility constraints, and land use policies, (ii) economic and technical characteristics of agricultural systems, (iii) differences in constraints and strategies between small-scale and large-scale actors, and (iv) variable costs and benefits of forest clearing. When remaining forests were unsuitable for agriculture and/or policies restricted forest encroachment, a larger share of commodity crop expansion occurred by conversion of existing agricultural lands, and land use displacement was smaller. Expansion strategies of large-scale actors emerge from context-specific balances between the search for suitable lands; transaction costs or conflicts associated with expanding into forests or other state-owned lands versus smallholder lands; net benefits of forest clearing; and greater access to infrastructure in already-cleared lands. We propose five hypotheses to be tested in further studies: (i) land availability mediates expansion pathways and the likelihood that land use is displaced to distant, rather than to local places; (ii) use of already-cleared lands is favored when commodity crops require access to infrastructure; (iii) in proportion to total agricultural expansion, large-scale actors generate more clearing of mature forests than smallholders; (iv) property rights and land tenure security influence the actors participating in commodity crop expansion, the form of land use displacement, and livelihood outcomes; (v) intensive commodity crops may fail to spare land when inducing displacement. We conclude that understanding pathways of commodity crop expansion is essential to improve land use governance.

  5. The national response for preventing healthcare-associated infections: infrastructure development.

    PubMed

    Mendel, Peter; Siegel, Sari; Leuschner, Kristin J; Gall, Elizabeth M; Weinberg, Daniel A; Kahn, Katherine L

    2014-02-01

    In 2009, the US Department of Health and Human Services (HHS) launched the Action Plan to Prevent Healthcare-associated Infections (HAIs). The Action Plan adopted national targets for reduction of specific infections, making HHS accountable for change across the healthcare system over which federal agencies have limited control. This article examines the unique infrastructure developed through the Action Plan to support adoption of HAI prevention practices. Interviews of federal (n=32) and other stakeholders (n=38), reviews of agency documents and journal articles (n=260), and observations of interagency meetings (n=17) and multistakeholder conferences (n=17) over a 3-year evaluation period. We extract key progress and challenges in the development of national HAI prevention infrastructure--1 of the 4 system functions in our evaluation framework encompassing regulation, payment systems, safety culture, and dissemination and technical assistance. We then identify system properties--for example, coordination and alignment, accountability and incentives, etc.--that enabled or hindered progress within each key development. The Action Plan has developed a model of interagency coordination (including a dedicated "home" and culture of cooperation) at the federal level and infrastructure for stimulating change through the wider healthcare system (including transparency and financial incentives, support of state and regional HAI prevention capacity, changes in safety culture, and mechanisms for stakeholder engagement). Significant challenges to infrastructure development included many related to the same areas of progress. The Action Plan has built a foundation of infrastructure to expand prevention of HAIs and presents useful lessons for other large-scale improvement initiatives.

  6. Mid-crustal flow during Tertiary extension in the Ruby Mountains core complex, Nevada

    USGS Publications Warehouse

    MacCready, T.; Snoke, A.W.; Wright, J.E.; Howard, K.A.

    1997-01-01

    Structural analysis and geochronologic data indicate a nearly orthogonal, late Eocene-Oligocene flow pattern in migmatitic infrastructure immediately beneath the kilometer-thick, extensional, mylonitic shear zone of the Ruby Mountains metamorphic core complex, Nevada. New U-Pb radiometric dating indicates that the development of a northward-trending lineation in the infrastructure is partly coeval with the development of a pervasive, west-northwest-trending lineation in the mylonitic shear zone. U-Pb monazite data from the leucogranite orthogneiss of Thorpe Creek indicate a crystallization age of ca. 36-39 Ma. Zircon fractions from a biotite monzogranite dike yield an age of ca. 29 Ma. The three dated samples from these units exhibit a penetrative, approximately north-south-trending elongation lineation. This lineation is commonly defined by oriented bundles of sillimanite and/or elongated aggregates of quartz and feldspar, indicating a synmetamorphic and syndeformational origin. The elongation lineation can be interpreted as a slip line in the flow plane of the migmatitic, nonmylonitic infrastructural core of the northern Ruby Mountains. A portion of this midcrustal flow is coeval with the well-documented, west-northwest sense of slip in the structurally overlying kilometer-thick, mid-Tertiary mylonitic shear zone. Lineations in the mylonitic zone are orthogonal to those in the deeper infrastructure, suggesting fundamental plastic decoupling between structural levels in this core complex. Furthermore, the infrastructure is characterized by overlapping, oppositely verging fold nappes, which are rooted to the east and west. One of the nappes may be synkinematic with the intrusion of the late Eocene orthogneiss of Thorpe Creek. In addition, the penetrative, elongation lineation in the infrastructure is subparallel to hinge lines of parasitic folds developed synchronous with the fold nappes, suggesting a kinematically related evolution. The area is evaluated in terms of a whole-crust extension model. Magmatic underplating in the lower crust stimulated the production of late Eocene-early Oligocene granitic magmas, which invaded metasedimentary and Mesozoic granitic rocks of the middle crust. The midcrustal rocks, weakened by the magmatic heat influx, acted as a low-viscosity compensating material, decoupled from an extending upper crust. The fold nappes and lineation trends suggest large-scale flow of the weakened crust into the study area. The inflow pattern in the migmatitic infrastructure can be interpreted as a manifestation of midcrustal migration into an area beneath a domain of highly extended upper crustal rocks. At present the inferred Eocene-early Oligocene phase of upper-crust extension remains unknown, but available data on relative and geochronologic timing are not inconsistent with our model of return flow into an area already undergoing large-scale upper-crustal extension.

  7. Downscaling GLOF Hazards: An in-depth look at the Nepal Himalaya

    NASA Astrophysics Data System (ADS)

    Rounce, D.; McKinney, D. C.; Lala, J.

    2016-12-01

    The Nepal Himalaya house a large number of glacial lakes that pose a flood hazard to downstream communities and infrastructure. The modeling of the entire process chain of these glacial lake outburst floods (GLOFs) has been advancing rapidly in recent years. The most common cause of failure is mass movement entering the glacial lake, which triggers a tsunami-like wave that breaches the terminal moraine and causes the ensuing downstream flood. Unfortunately, modeling the avalanche, the breach of the moraine, and the downstream flood requires a large amount of site-specific information and can be very labor-intensive. Therefore, these detailed models need to be paired with large-scale hazard assessments that identify the glacial lakes that are the biggest threat and the triggering events that threaten these lakes. This study discusses the merger of a large-scale, remotely-based hazard assessment with more detailed GLOF models to show how GLOF hazard modeling can be downscaled in the Nepal Himalaya.

  8. Household Energy Consumption Segmentation Using Hourly Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kwac, J; Flora, J; Rajagopal, R

    2014-01-01

    The increasing US deployment of residential advanced metering infrastructure (AMI) has made hourly energy consumption data widely available. Using California smart meter data, we investigate a household electricity segmentation methodology that uses an encoding system with a pre-processed load-shape dictionary. Structured approaches using features derived from the encoded data drive five sample program- and policy-relevant energy lifestyle segmentation strategies. We also ensure that the methodologies developed scale to large data sets.
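
    One simple way to realize the load-shape-dictionary encoding described above is to normalize each daily profile to unit sum and cluster the resulting shapes, assigning each household-day the code of its nearest cluster centre. The sketch below does this with synthetic data and k-means; it approximates the idea rather than reproducing the paper's pre-processed dictionary or cluster count.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic household-days standing in for hourly AMI readings (5000 days x 24 hours).
rng = np.random.default_rng(0)
daily_kwh = rng.gamma(shape=2.0, scale=0.5, size=(5_000, 24))

shapes = daily_kwh / daily_kwh.sum(axis=1, keepdims=True)          # unit-sum load shapes
kmeans = KMeans(n_clusters=12, n_init=10, random_state=0).fit(shapes)

codes = kmeans.labels_                   # dictionary code assigned to each household-day
dictionary = kmeans.cluster_centers_     # the 12 representative load shapes
print("dictionary:", dictionary.shape, "| most common code:", np.bincount(codes).argmax())
```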

  9. Stratification, Integration and Challenges to Authority in Contemporary South Korea,

    DTIC Science & Technology

    1983-08-31

    the middle classes and the spread of middle-class life styles and expectations throughout society is particularly marked. Large-scale generational...educational qualifications sharply limit a person's life chances. Those in middle-class occupations are usually able to provide a superior education...bility of credit, more advanced agricultural technology, greater urban demand for food, and increased government investment in rural infrastructure

  10. Developing Sustainable Urban Water-Energy Infrastructures: Applying a Multi-Sectoral Social-Ecological-Infrastructural Systems (SEIS) Framework

    NASA Astrophysics Data System (ADS)

    Ramaswami, A.

    2016-12-01

    Urban infrastructure - broadly defined to include the systems that provide water, energy, food, shelter, transportation-communication, sanitation and green/public spaces in cities - has tremendous impact on the environment and on human well-being (Ramaswami et al., 2016; Ramaswami et al., 2012). Aggregated globally, these sectors contribute 90% of global greenhouse gas (GHG) emissions and 96% of global water withdrawals. Urban infrastructure contributions to such impacts are beginning to dominate. Cities are therefore becoming the action arena for infrastructure transformations that can achieve high levels of service delivery while reducing environmental impacts and enhancing human well-being. Achieving sustainable urban infrastructure transitions requires information about the engineered infrastructure and its interaction with the natural (ecological-environmental) and the social sub-systems. In this paper, we apply a multi-sector, multi-scalar Social-Ecological-Infrastructural Systems (SEIS) framework that describes the interactions among biophysical engineered infrastructures, the natural environment and the social system in a systems approach to inform urban infrastructure transformations. We apply the SEIS framework to inform water and energy sector transformations in cities to achieve environmental and human health benefits realized at multiple scales - local, regional and global. Local scales address pollution, health, well-being and inequity within the city; regional scales address regional pollution, scarcity, as well as supply risks in the water-energy sectors; global impacts include greenhouse gas emissions and climate impacts. Different actors shape infrastructure transitions, including households, businesses, and policy actors. We describe the development of novel cross-sectoral strategies at the water-energy nexus in cities, focusing on the water, waste and energy sectors, in a case study of Delhi, India. Ramaswami, A.; Russell, A.G.; Culligan, P.J.; Sharma, K.R.; Kumar, E. (2016). Meta-Principles for Developing Smart, Sustainable, and Healthy Cities. Science, 352(6288), 940-943. Ramaswami, A., et al. (2012). A Social-Ecological-Infrastructural Systems Framework for Inter-Disciplinary Study of Sustainable City-Systems. Journal of Industrial Ecology, 16(6), 801-813.

  11. Green infrastructure retrofits on residential parcels: Ecohydrologic modeling for stormwater design

    NASA Astrophysics Data System (ADS)

    Miles, B.; Band, L. E.

    2014-12-01

    To meet water quality goals, stormwater utilities and not-for-profit watershed organizations in the U.S. are working with citizens to design and implement green infrastructure on residential land. Green infrastructure, as an alternative and complement to traditional (grey) stormwater infrastructure, has the potential to contribute to multiple ecosystem benefits including stormwater volume reduction, carbon sequestration, urban heat island mitigation, and to provide amenities to residents. However, in small (1-10 km2) medium-density urban watersheds with heterogeneous land cover it is unclear whether stormwater retrofits on residential parcels significantly contribute to reducing stormwater volume at the watershed scale. In this paper, we seek to improve understanding of how small-scale redistribution of water at the parcel scale as part of green infrastructure implementation affects urban water budgets and stormwater volume across spatial scales. As study sites we use two medium-density headwater watersheds in Baltimore, MD and Durham, NC. We develop ecohydrology modeling experiments to evaluate the effectiveness of redirecting residential rooftop runoff to unaltered pervious surfaces and to engineered rain gardens to reduce stormwater runoff. As baselines for these experiments, we performed field surveys of residential rooftop hydrologic connectivity to adjacent impervious surfaces, and found low rates of connectivity. Through simulations of pervasive adoption of downspout disconnection to unaltered pervious areas or to rain garden stormwater control measures (SCM) in these catchments, we find that most parcel-scale changes in stormwater fate are attenuated at larger spatial scales and that neither SCM alone is likely to provide significant changes in streamflow at the watershed scale.
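
    The parcel-scale experiments described above amount to a water-budget question: how much rooftop runoff still reaches the drainage network once downspouts are redirected to a pervious area or rain garden of finite capacity? The sketch below is a back-of-the-envelope version with invented parcel dimensions and infiltration capacity, not the ecohydrology model used in the study.

```python
def parcel_runoff(rain_mm, roof_m2=120, disconnected=True,
                  garden_m2=15, infiltration_mm_per_event=40):
    """Back-of-the-envelope parcel water budget for one storm (all numbers illustrative):
    rooftop runoff either goes straight to the storm drain or is routed to a rain garden
    that can soak up a fixed depth before overflowing."""
    roof_runoff_l = rain_mm * roof_m2            # 1 mm over 1 m^2 = 1 litre
    if not disconnected:
        return roof_runoff_l                     # all of it reaches the drainage network
    garden_capacity_l = infiltration_mm_per_event * garden_m2
    return max(roof_runoff_l - garden_capacity_l, 0.0)

for storm in (5, 25, 75):                        # small, moderate and large events (mm)
    base = parcel_runoff(storm, disconnected=False)
    scm = parcel_runoff(storm, disconnected=True)
    print(f"{storm} mm storm: {base:.0f} L -> {scm:.0f} L to the drain after disconnection")
```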

  12. NGScloud: RNA-seq analysis of non-model species using cloud computing.

    PubMed

    Mora-Márquez, Fernando; Vázquez-Poletti, José Luis; López de Heredia, Unai

    2018-05-03

    RNA-seq analysis usually requires large computing infrastructures. NGScloud is a bioinformatic system developed to analyze RNA-seq data using Amazon's cloud computing services, which permit access to ad hoc computing infrastructure scaled to the complexity of the experiment, so that its costs and run times can be optimized. The application provides a user-friendly front-end to operate Amazon's hardware resources and to control a workflow of RNA-seq analysis oriented to non-model species, incorporating the cluster concept, which allows parallel runs of common RNA-seq analysis programs in several virtual machines for faster analysis. NGScloud is freely available at https://github.com/GGFHF/NGScloud/. A manual detailing installation and usage instructions is available with the distribution. unai.lopezdeheredia@upm.es.
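
    NGScloud fronts Amazon's services with its own interface, but the kind of on-demand provisioning it automates can be sketched with plain boto3. In the example below the AMI id, key-pair name, instance type and worker count are placeholders chosen for illustration, not values taken from NGScloud.

```python
import boto3

# Minimal sketch of on-demand provisioning with plain boto3 (not NGScloud's own interface).
ec2 = boto3.client("ec2", region_name="eu-west-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",     # placeholder: an image with the RNA-seq toolchain installed
    InstanceType="r5.4xlarge",           # sized to the assembly/quantification step being run
    KeyName="rnaseq-keypair",            # placeholder SSH key pair name
    MinCount=1,
    MaxCount=4,                          # a small "cluster" of identical workers
)
print([i["InstanceId"] for i in response["Instances"]])
```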

  13. Infrastructure to support learning health systems: are we there yet? Innovative solutions and lessons learned from American Recovery and Reinvestment Act CER investments.

    PubMed

    Holve, Erin; Segal, Courtney

    2014-11-01

    The 11 big health data networks participating in the AcademyHealth Electronic Data Methods Forum represent cutting-edge efforts to harness the power of big health data for research and quality improvement. This paper is a comparative case study based on site visits conducted with a subset of these large infrastructure grants funded through the Recovery Act, in which four key issues emerge that can inform the evolution of learning health systems, including the importance of acknowledging the challenges of scaling specialized expertise needed to manage and run CER networks; the delicate balance between privacy protections and the utility of distributed networks; emerging community engagement strategies; and the complexities of developing a robust business model for multi-use networks.

  14. Are We Ready for Mass Fatality Incidents? Preparedness of the US Mass Fatality Infrastructure.

    PubMed

    Merrill, Jacqueline A; Orr, Mark; Chen, Daniel Y; Zhi, Qi; Gershon, Robyn R

    2016-02-01

    To assess the preparedness of the US mass fatality infrastructure, we developed and tested metrics for 3 components of preparedness: organizational, operational, and resource sharing networks. In 2014, data were collected from 5 response sectors: medical examiners and coroners, the death care industry, health departments, faith-based organizations, and offices of emergency management. Scores were calculated within and across sectors and a weighted score was developed for the infrastructure. A total of 879 respondents reported highly variable organizational capabilities: 15% had responded to a mass fatality incident (MFI); 42% reported staff trained for an MFI, but only 27% for an MFI involving hazardous contaminants. Respondents estimated that 75% of their staff would be willing and able to respond, but only 53% if contaminants were involved. Most perceived their organization as somewhat prepared, but 13% indicated "not at all." Operational capability scores ranged from 33% (death care industry) to 77% (offices of emergency management). Network capability analysis found that only 42% of possible reciprocal relationships between resource-sharing partners were present. The cross-sector composite score was 51%; that is, half the key capabilities for preparedness were in place. The sectors in the US mass fatality infrastructure report suboptimal capability to respond. National leadership is needed to ensure sector-specific and infrastructure-wide preparedness for a large-scale MFI.

  15. Analysis of CERN computing infrastructure and monitoring data

    NASA Astrophysics Data System (ADS)

    Nieke, C.; Lassnig, M.; Menichetti, L.; Motesnitsalis, E.; Duellmann, D.

    2015-12-01

    Optimizing a computing infrastructure on the scale of the LHC requires a quantitative understanding of a complex network of many different resources and services. For this purpose the CERN IT department and the LHC experiments collect a large number of logs and performance probes, which are already used successfully for short-term analysis (e.g. operational dashboards) within each group. The IT analytics working group was created with the goal of bringing together data sources from different services and different abstraction levels, and of implementing a suitable infrastructure for mid- to long-term statistical analysis. It also provides a forum for joint optimization across single-service boundaries and for the exchange of analysis methods and tools. To simplify access to the collected data, we implemented an automated repository for cleaned and aggregated data sources based on the Hadoop ecosystem. This contribution describes some of the challenges encountered, such as dealing with heterogeneous data formats and selecting a storage format efficient for both MapReduce and external access, and presents the repository user interface. Using this infrastructure we were able to quantitatively analyze the relationship between the CPU/wall fraction, the latency/throughput constraints of network and disk, and the effective job throughput. We first describe the design of the shared analysis infrastructure and then present a summary of first analysis results from the combined data sources.
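    The following PySpark sketch is an illustration only, not the CERN repository code: it shows the kind of aggregation such a Hadoop-backed repository enables, computing a daily CPU/wall fraction and job throughput per site. The path and column names (timestamp, cpu_time, wall_time, jobs_completed, site) are assumptions, not the actual schema.

    ```python
    # Illustrative aggregation of cleaned monitoring records stored on HDFS.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("monitoring-aggregation").getOrCreate()

    jobs = spark.read.parquet("hdfs:///monitoring/cleaned/job_records")  # placeholder path

    summary = (
        jobs.withColumn("cpu_wall_fraction", F.col("cpu_time") / F.col("wall_time"))
            .groupBy("site", F.window("timestamp", "1 day"))
            .agg(
                F.avg("cpu_wall_fraction").alias("avg_cpu_wall_fraction"),
                F.sum("jobs_completed").alias("job_throughput"),
            )
    )

    summary.write.mode("overwrite").parquet("hdfs:///monitoring/aggregated/daily_summary")
    ```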

  16. Development of a remote sensing network for time-sensitive detection of fine scale damage to transportation infrastructure : [final report].

    DOT National Transportation Integrated Search

    2015-09-23

    This research project aimed to develop a remote sensing system capable of rapidly identifying fine-scale damage to critical transportation infrastructure following hazard events. Such a system must be pre-planned for rapid deployment, automate proces...

  17. Multiobjective optimization of cluster-scale urban water systems investigating alternative water sources and level of decentralization

    NASA Astrophysics Data System (ADS)

    Newman, J. P.; Dandy, G. C.; Maier, H. R.

    2014-10-01

    In many regions, conventional water supplies are unable to meet projected consumer demand. Consequently, interest has arisen in integrated urban water systems, which involve the reclamation or harvesting of alternative, localized water sources. However, this makes the planning and design of water infrastructure more difficult, as multiple objectives need to be considered, water sources need to be selected from a number of alternatives, and end uses of these sources need to be specified. In addition, the scale at which each treatment, collection, and distribution network should operate needs to be investigated. In order to deal with this complexity, a framework for planning and designing water infrastructure taking into account integrated urban water management principles is presented in this paper and applied to a rural greenfield development. Various options for water supply, and the scale at which they operate were investigated in order to determine the life-cycle trade-offs between water savings, cost, and GHG emissions as calculated from models calibrated using Australian data. The decision space includes the choice of water sources, storage tanks, treatment facilities, and pipes for water conveyance. For each water system analyzed, infrastructure components were sized using multiobjective genetic algorithms. The results indicate that local water sources are competitive in terms of cost and GHG emissions, and can reduce demand on the potable system by as much as 54%. Economies of scale in treatment dominated the diseconomies of scale in collection and distribution of water. Therefore, water systems that connect large clusters of households tend to be more cost efficient and have lower GHG emissions. In addition, water systems that recycle wastewater tended to perform better than systems that captured roof-runoff. Through these results, the framework was shown to be effective at identifying near optimal trade-offs between competing objectives, thereby enabling informed decisions to be made when planning water systems for greenfield developments.
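    The study's trade-off analysis rests on multiobjective optimization; the short sketch below is a conceptual stand-in (not the paper's genetic algorithm or cost models) that enumerates candidate designs and extracts the non-dominated (Pareto) set for cost, GHG emissions, and potable-water savings. The design variables and evaluation functions are invented for illustration.

    ```python
    # Toy Pareto-front extraction over randomly sampled water-system designs.
    import random

    def evaluate(design):
        """Return (cost, ghg, -savings); lower is better for all three objectives."""
        cluster_size, recycle_fraction = design
        cost = 1000 / cluster_size**0.3 + 50 * recycle_fraction * cluster_size  # toy economies of scale
        ghg = 5 * cluster_size**0.5 + 20 * (1 - recycle_fraction)
        savings = 0.54 * recycle_fraction          # up to 54% potable demand reduction, as in the abstract
        return (cost, ghg, -savings)

    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    designs = [(random.randint(1, 500), random.random()) for _ in range(2000)]
    scores = [evaluate(d) for d in designs]

    pareto = [d for d, s in zip(designs, scores)
              if not any(dominates(t, s) for t in scores if t is not s)]
    print(f"{len(pareto)} non-dominated designs out of {len(designs)}")
    ```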

  18. Hydrological response of karst systems to large-scale climate variability for different catchments of the French karst observatory network INSU/CNRS SNO KARST

    NASA Astrophysics Data System (ADS)

    Massei, Nicolas; Labat, David; Jourde, Hervé; Lecoq, Nicolas; Mazzilli, Naomi

    2017-04-01

    The French karst observatory network SNO KARST is a national initiative of the National Institute for Earth Sciences and Astronomy (INSU) of the National Center for Scientific Research (CNRS). It is also part of OZCAR, the new French research infrastructure for observation of the critical zone. SNO KARST is composed of several karst sites distributed across conterminous France, located in different physiographic and climatic contexts (Mediterranean, Pyrenean, Jura mountains, and the western and northwestern coasts near the Atlantic and the English Channel). This allows the scientific community to develop advanced research and experiments dedicated to improving the understanding of the hydrological functioning of karst catchments. Here we used several SNO KARST sites to assess the hydrological response of karst catchments to long-term variation in large-scale atmospheric circulation. Using NCEP reanalysis products and karst discharge, we analyzed the links between large-scale circulation and the variability of karst water resources. Because karst hydrosystems are highly heterogeneous media, they behave differently across time-scales: we explore the large-scale/local-scale relationships by time-scale using a wavelet multiresolution approach applied to both karst hydrological variables and large-scale climate fields such as sea-level pressure (SLP). The wavelet components of karst discharge, in response to the corresponding wavelet components of the climate fields, are either (1) compared to physico-chemical/geochemical responses at karst springs, or (2) interpreted in terms of hydrological functioning by comparing the discharge components to internal components obtained from precipitation/discharge models built with the KARSTMOD conceptual modeling platform of SNO KARST.
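    The wavelet multiresolution idea can be illustrated with PyWavelets, as in the sketch below; the synthetic series stand in for SNO KARST discharge and NCEP sea-level-pressure data, and the wavelet/level choices are assumptions, not the study's settings.

    ```python
    # Illustrative multiresolution comparison: decompose discharge and an SLP-like
    # index into dyadic scale bands, then correlate the matching bands.
    import numpy as np
    import pywt

    rng = np.random.default_rng(0)
    n = 4096
    t = np.arange(n)
    slp = np.sin(2 * np.pi * t / 365) + 0.5 * rng.standard_normal(n)           # toy climate index
    discharge = np.convolve(np.clip(slp, 0, None), np.exp(-t[:60] / 20))[:n]    # toy karst response

    def components(signal, wavelet="db4", level=6):
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        out = []
        for i in range(len(coeffs)):
            keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
            out.append(pywt.waverec(keep, wavelet)[: len(signal)])
        return out  # one reconstructed series per time-scale band

    for k, (d_c, s_c) in enumerate(zip(components(discharge), components(slp))):
        r = np.corrcoef(d_c, s_c)[0, 1]
        print(f"scale band {k}: correlation {r:+.2f}")
    ```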

  19. Population as a proxy for infrastructure in the determination of event response and recovery resource allocations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stamber, Kevin L.; Unis, Carl J.; Shirah, Donald N.

    Research into modeling of the quantification and prioritization of resources used in the recovery of lifeline critical infrastructure following disruptive incidents, such as hurricanes and earthquakes, has shown several factors to be important. Among these are population density and infrastructure density, event effects on infrastructure, and existence of an emergency response plan. The social sciences literature has a long history of correlating the population density and infrastructure density at a national scale, at a country-to-country level, mainly focused on transportation networks. This effort examines whether these correlations can be repeated at smaller geographic scales, for a variety of infrastructure types, so as to be able to use population data as a proxy for infrastructure data where infrastructure data is either incomplete or insufficiently granular. Using the best data available, this effort shows that strong correlations between infrastructure density for multiple types of infrastructure (e.g. miles of roads, hospital beds, miles of electric power transmission lines, and number of petroleum terminals) and population density do exist at known geographic boundaries (e.g. counties, service area boundaries) with exceptions that are explainable within the social sciences literature. Furthermore, the correlations identified provide a useful basis for ongoing research into the larger resource utilization problem.
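    A density-correlation check of this kind is straightforward to sketch; the following example uses pandas and SciPy on a tiny invented county table, with column names that are placeholders for whatever county-level data are available.

    ```python
    # Illustrative sketch: correlate county population density with the density
    # of two infrastructure types.
    import pandas as pd
    from scipy.stats import pearsonr

    counties = pd.DataFrame({
        "pop_density":           [12.0, 85.3, 240.1, 1020.5, 33.7],
        "road_miles_density":    [0.8, 2.1, 4.0, 9.5, 1.2],
        "hospital_beds_density": [0.1, 1.4, 3.2, 11.0, 0.5],
    })

    for column in ["road_miles_density", "hospital_beds_density"]:
        r, p = pearsonr(counties["pop_density"], counties[column])
        print(f"{column}: r = {r:.2f}, p = {p:.3f}")
    ```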

  20. Population as a proxy for infrastructure in the determination of event response and recovery resource allocations

    DOE PAGES

    Stamber, Kevin L.; Unis, Carl J.; Shirah, Donald N.; ...

    2016-04-01

    Research into modeling of the quantification and prioritization of resources used in the recovery of lifeline critical infrastructure following disruptive incidents, such as hurricanes and earthquakes, has shown several factors to be important. Among these are population density and infrastructure density, event effects on infrastructure, and existence of an emergency response plan. The social sciences literature has a long history of correlating the population density and infrastructure density at a national scale, at a country-to-country level, mainly focused on transportation networks. This effort examines whether these correlations can be repeated at smaller geographic scales, for a variety of infrastructure types, so as to be able to use population data as a proxy for infrastructure data where infrastructure data is either incomplete or insufficiently granular. Using the best data available, this effort shows that strong correlations between infrastructure density for multiple types of infrastructure (e.g. miles of roads, hospital beds, miles of electric power transmission lines, and number of petroleum terminals) and population density do exist at known geographic boundaries (e.g. counties, service area boundaries) with exceptions that are explainable within the social sciences literature. Furthermore, the correlations identified provide a useful basis for ongoing research into the larger resource utilization problem.

  1. The role of the airline transportation network in the prediction and predictability of global epidemics.

    PubMed

    Colizza, Vittoria; Barrat, Alain; Barthélemy, Marc; Vespignani, Alessandro

    2006-02-14

    The systematic study of large-scale networks has unveiled the ubiquitous presence of connectivity patterns characterized by large-scale heterogeneities and unbounded statistical fluctuations. These features dramatically affect the behavior of the diffusion processes occurring on networks, determining the ensuing statistical properties of their evolution pattern and dynamics. In this article, we present a stochastic computational framework for the forecast of global epidemics that considers the complete worldwide air travel infrastructure complemented with census population data. We address two basic issues in global epidemic modeling: (i) we study the role of the large-scale properties of the airline transportation network in determining the global diffusion pattern of emerging diseases; and (ii) we evaluate the reliability of forecasts and outbreak scenarios with respect to the intrinsic stochasticity of disease transmission and traffic flows. To address these issues, we define a set of quantitative measures able to characterize the level of heterogeneity and predictability of the epidemic pattern. These measures may be used for the analysis of containment policies and epidemic risk assessment.
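    A toy version of the stochastic metapopulation idea is sketched below; it is not the authors' framework, and the three-city travel matrix, rates, and population sizes are invented numbers standing in for the worldwide airline network and census data.

    ```python
    # Toy stochastic SIR metapopulation: local transmission within cities,
    # coupled by infectious travellers moving along air-travel fluxes.
    import numpy as np

    rng = np.random.default_rng(1)
    pop = np.array([8_000_000, 2_000_000, 500_000], dtype=float)
    travel = np.array([[0, 5000, 1000],        # daily passengers from city i to city j
                       [4000, 0, 500],
                       [800, 600, 0]], dtype=float)
    S = pop.copy()
    I = np.array([10.0, 0.0, 0.0])             # seed the outbreak in city 0
    R = np.zeros(3)
    beta, gamma = 0.5, 0.2

    for day in range(120):
        # local stochastic transmission and recovery
        new_inf = rng.binomial(S.astype(int), 1 - np.exp(-beta * I / pop))
        new_rec = rng.binomial(I.astype(int), 1 - np.exp(-gamma))
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        # coupling: infectious travellers redistribute along the flight fluxes
        out_frac = travel.sum(axis=1) / pop
        moving = np.minimum(rng.poisson(I * out_frac), I)
        arrivals = travel / travel.sum(axis=1, keepdims=True) * moving[:, None]
        I = I - moving + arrivals.sum(axis=0)

    print("final attack rates per city:", np.round(R / pop, 3))
    ```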

  2. Assessment of online public opinions on large infrastructure projects: A case study of the Three Gorges Project in China

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Hanchen, E-mail: jhc13@mails.tsinghua.edu.cn; Qiang, Maoshan, E-mail: qiangms@tsinghua.edu.cn; Lin, Peng, E-mail: celinpe@mail.tsinghua.edu.cn

    Public opinion becomes increasingly salient in the ex post evaluation stage of large infrastructure projects, which have significant impacts on the environment and society. However, traditional survey methods are inefficient for collecting and assessing public opinion because of its large quantity and diversity. Recently, social media platforms have provided a rich data source for monitoring and assessing public opinion on controversial infrastructure projects. This paper proposes an assessment framework to transform unstructured online public opinions on large infrastructure projects into sentiment and topic indicators for enhancing ex post evaluation and public participation practices. The framework uses web crawlers to collect online comments related to a large infrastructure project and employs two natural language processing technologies, sentiment analysis and topic modeling, together with spatio-temporal analysis, to transform these comments into indicators for assessing online public opinion on the project. Based on the framework, we investigate the online public opinion of the Three Gorges Project on China's largest microblogging site, Weibo. Assessment results present spatio-temporal distributions of post intensity and sentiment polarity, reveal major topics with different sentiments, and summarize managerial implications for ex post evaluation of the world's largest hydropower project. The proposed assessment framework is expected to be widely applicable as a methodological strategy for assessing public opinion in the ex post evaluation stage of large infrastructure projects. Highlights: • We developed a framework to assess online public opinion on large infrastructure projects with environmental impacts. • Indicators were built to assess post intensity, sentiment polarity and major topics of the public opinion. • We took the Three Gorges Project (TGP) as an example to demonstrate the effectiveness of the proposed framework. • We revealed spatial-temporal patterns of post intensity and sentiment polarity on the TGP. • We drew implications for a more in-depth understanding of the public opinion on large infrastructure projects.
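    The two NLP steps named in the framework, sentiment analysis and topic modeling, can be sketched with scikit-learn; the posts, sentiment lexicon, and topic count below are invented stand-ins (the real study crawled Chinese-language Weibo posts).

    ```python
    # Minimal sketch: lexicon-based polarity plus LDA topic modelling on toy posts.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    posts = [
        "the new dam protects the city from flooding",
        "worried about sediment and fish downstream of the dam",
        "relocation compensation was too low for many families",
        "clean hydropower replaces coal plants",
    ]

    positive, negative = {"protects", "clean"}, {"worried", "low"}
    for p in posts:
        words = set(p.split())
        print(f"{len(words & positive) - len(words & negative):+d}  {p}")

    vectorizer = CountVectorizer(stop_words="english")
    X = vectorizer.fit_transform(posts)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
    vocab = vectorizer.get_feature_names_out()
    for k, topic in enumerate(lda.components_):
        top_terms = [vocab[i] for i in topic.argsort()[-4:][::-1]]
        print(f"topic {k}:", ", ".join(top_terms))
    ```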

  3. Sand waves in environmental flows: Insights gained by coupling large-eddy simulation with morphodynamics

    NASA Astrophysics Data System (ADS)

    Sotiropoulos, Fotis; Khosronejad, Ali

    2016-02-01

    Sand waves arise in subaqueous and aeolian environments as the result of the complex interaction between turbulent flows and mobile sand beds. They occur across a wide range of spatial scales, evolve at temporal scales much slower than the integral scale of the transporting turbulent flow, dominate river morphodynamics, undermine streambank stability and infrastructure during flooding, and sculpt terrestrial and extraterrestrial landscapes. In this paper, we present the vision for our work over the last ten years, which has sought to develop computational tools capable of simulating the coupled interactions of sand waves with turbulence across the broad range of relevant scales: from small-scale ripples in laboratory flumes to mega-dunes in large rivers. We review the computational advances that have enabled us to simulate the genesis and long-term evolution of arbitrarily large and complex sand dunes in turbulent flows using large-eddy simulation and summarize numerous novel physical insights derived from our simulations. Our findings explain the role of turbulent sweeps in the near-bed region as the primary mechanism for destabilizing the sand bed, show that the seeds of the emergent structure in dune fields lie in the heterogeneity of the turbulence and bed shear stress fluctuations over the initially flat bed, and elucidate how large dunes at equilibrium give rise to energetic coherent structures and modify the spectra of turbulence. We also discuss future challenges and our vision for advancing a data-driven simulation-based engineering science approach for site-specific simulations of river flooding.
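    The morphodynamic half of such a coupling can be illustrated, in drastically simplified form, by a 1-D Exner-equation update; the sketch below is not the paper's LES solver, and the transport law and parameters are toy assumptions chosen only so a bed bump migrates downstream.

    ```python
    # Schematic 1-D bed evolution: Exner equation with a toy sediment flux that
    # increases over the crest, so the initial bump migrates downstream.
    import numpy as np

    nx, dx, dt, porosity = 400, 1.0, 0.2, 0.4
    x = np.arange(nx) * dx
    bed = 0.5 * np.exp(-((x - 100) / 15) ** 2)   # initial sand bump (m)

    def sediment_flux(bed):
        return 0.02 + 0.05 * bed                  # toy law: faster flow over the crest moves more sand

    for _ in range(3000):
        qs = sediment_flux(bed)
        dqs_dx = np.zeros_like(qs)
        dqs_dx[1:] = (qs[1:] - qs[:-1]) / dx      # upwind difference (transport is downstream)
        # Exner equation: (1 - porosity) * d(bed)/dt = -d(qs)/dx
        bed -= dt / (1.0 - porosity) * dqs_dx

    print(f"crest now near x = {x[bed.argmax()]:.0f} m (started at 100 m)")
    ```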

  4. Maestro: an orchestration framework for large-scale WSN simulations.

    PubMed

    Riliskis, Laurynas; Osipov, Evgeny

    2014-03-18

    Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation.

  5. Maestro: An Orchestration Framework for Large-Scale WSN Simulations

    PubMed Central

    Riliskis, Laurynas; Osipov, Evgeny

    2014-01-01

    Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation. PMID:24647123

  6. A global probabilistic tsunami hazard assessment from earthquake sources

    USGS Publications Warehouse

    Davies, Gareth; Griffin, Jonathan; Lovholt, Finn; Glimsdal, Sylfest; Harbitz, Carl; Thio, Hong Kie; Lorito, Stefano; Basili, Roberto; Selva, Jacopo; Geist, Eric L.; Baptista, Maria Ana

    2017-01-01

    Large tsunamis occur infrequently but have the capacity to cause enormous numbers of casualties, damage to the built environment and critical infrastructure, and economic losses. A sound understanding of tsunami hazard is required to underpin management of these risks, and while tsunami hazard assessments are typically conducted at regional or local scales, globally consistent assessments are required to support international disaster risk reduction efforts, and can serve as a reference for local and regional studies. This study presents a global-scale probabilistic tsunami hazard assessment (PTHA), extending previous global-scale assessments based largely on scenario analysis. Only earthquake sources are considered, as they represent about 80% of the recorded damaging tsunami events. Globally extensive estimates of tsunami run-up height are derived at various exceedance rates, and the associated uncertainties are quantified. Epistemic uncertainties in the exceedance rates of large earthquakes often lead to large uncertainties in tsunami run-up. Deviations between modelled tsunami run-up and event observations are quantified, and found to be larger than suggested in previous studies. Accounting for these deviations in PTHA is important, as it leads to a pronounced increase in predicted tsunami run-up for a given exceedance rate.
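    The core bookkeeping of a PTHA, turning per-scenario run-up heights and occurrence rates into an exceedance-rate curve, can be sketched in a few lines; the numbers below are invented and are not the study's results.

    ```python
    # Minimal exceedance-rate curve for tsunami run-up at one coastal site.
    import numpy as np

    runup_m = np.array([0.3, 0.8, 1.5, 2.4, 4.0, 7.5])        # modelled run-up per earthquake scenario
    annual_rate = np.array([1e-1, 5e-2, 2e-2, 8e-3, 2e-3, 3e-4])  # annual rate of each scenario

    for h in [0.5, 1.0, 2.0, 5.0]:
        rate = annual_rate[runup_m >= h].sum()                 # total rate of exceeding height h
        print(f"run-up >= {h:3.1f} m: {rate:.4f} /yr  (~{1/rate:.0f}-yr return period)")
    ```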

  7. Evolution of scaling emergence in large-scale spatial epidemic spreading.

    PubMed

    Wang, Lin; Li, Xiang; Zhang, Yi-Qing; Zhang, Yan; Zhang, Kan

    2011-01-01

    Zipf's law and Heaps' law are two representative scaling concepts that play a significant role in the study of complexity science. The coexistence of Zipf's law and Heaps' law motivates different understandings of the dependence between these two scalings, which has yet to be clarified. In this article, we observe an evolution process of the scalings: Zipf's law and Heaps' law are naturally shaped to coexist at the initial time, while a crossover emerges as the two become inconsistent at later times, before a stable state is reached in which Heaps' law still holds while strict Zipf's law disappears. These findings are illustrated with a scenario of large-scale spatial epidemic spreading, and the empirical results for pandemic disease support a universal analysis of the relation between the two laws regardless of the biological details of the disease. Employing United States domestic air transportation and demographic data to construct a metapopulation model for simulating pandemic spread at the country level, we uncover that the broad heterogeneity of the infrastructure plays a key role in the evolution of scaling emergence. The analysis of large-scale spatial epidemic spreading helps to understand the temporal evolution of the scalings, indicating that the coexistence of Zipf's law and Heaps' law depends on the collective dynamics of epidemic processes, and the heterogeneity of epidemic spread indicates the significance of performing targeted containment strategies early in a pandemic.
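    The two scalings themselves are easy to exhibit on synthetic data; the sketch below is not the authors' metapopulation model, only a heterogeneous sampling process whose rank-frequency distribution (Zipf) and distinct-count growth (Heaps) can be inspected. The weight distribution and sizes are arbitrary assumptions.

    ```python
    # Illustrative Zipf/Heaps measurement on a heterogeneously weighted process.
    import numpy as np

    rng = np.random.default_rng(2)
    n_cities = 5000
    weights = rng.pareto(1.2, n_cities) + 1          # broad heterogeneity (e.g. traffic hubs)
    weights /= weights.sum()

    events = rng.choice(n_cities, size=200_000, p=weights)   # "infection events" hitting cities

    counts = np.bincount(events, minlength=n_cities)
    zipf = np.sort(counts)[::-1]                      # rank-frequency (Zipf) curve
    print("Zipf: counts at top ranks", zipf[:5], "...")

    seen, distinct = set(), []
    for e in events:                                  # Heaps: growth of distinct cities with events
        seen.add(e)
        distinct.append(len(seen))
    print("Heaps: distinct cities after 1e3, 1e4, 1e5 events:",
          distinct[999], distinct[9999], distinct[99999])
    ```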

  8. The Australian Computational Earth Systems Simulator

    NASA Astrophysics Data System (ADS)

    Mora, P.; Muhlhaus, H.; Lister, G.; Dyskin, A.; Place, D.; Appelbe, B.; Nimmervoll, N.; Abramson, D.

    2001-12-01

    Numerical simulation of the physics and dynamics of the entire earth system offers an outstanding opportunity for advancing earth system science and technology but represents a major challenge due to the range of scales and physical processes involved, as well as the magnitude of the software engineering effort required. However, new simulation and computer technologies are bringing this objective within reach. Under a special competitive national funding scheme to establish new Major National Research Facilities (MNRF), the Australian government, together with a consortium of universities and research institutions, has funded construction of the Australian Computational Earth Systems Simulator (ACcESS). The Simulator, or computational virtual earth, will provide the Australian earth systems science community with the research infrastructure required for simulations of dynamical earth processes at scales ranging from microscopic to global. It will consist of thematic supercomputer infrastructure and an earth systems simulation software system. The Simulator models and software will be constructed over a five-year period by a multi-disciplinary team of computational scientists, mathematicians, earth scientists, civil engineers and software engineers. The construction team will integrate numerical simulation models (3D discrete elements/lattice solid model, particle-in-cell large-deformation finite-element method, stress reconstruction models, multi-scale continuum models, etc.) with geophysical, geological and tectonic models, through advanced software engineering and visualization technologies. When fully constructed, the Simulator aims to provide the software and hardware infrastructure needed to model solid earth phenomena including global-scale dynamics and mineralisation processes, crustal-scale processes including plate tectonics, mountain building, interacting fault system dynamics, and micro-scale processes that control the geological, physical and dynamic behaviour of earth systems. ACcESS represents a part of Australia's contribution to the APEC Cooperation for Earthquake Simulation (ACES) international initiative. Together with other national earth systems science initiatives including the Japanese Earth Simulator and US General Earthquake Model projects, ACcESS aims to provide a driver for scientific advancement and technological breakthroughs including: quantum leaps in understanding of earth evolution at global, crustal, regional and microscopic scales; new knowledge of the physics of crustal fault systems required to underpin the grand challenge of earthquake prediction; and new understanding and predictive capabilities of geological processes such as tectonics and mineralisation.

  9. Dams and Intergovernmental Transfers

    NASA Astrophysics Data System (ADS)

    Bao, X.

    2012-12-01

    Gainers and losers are always associated with large-scale hydrological infrastructure construction, such as dams, canals, and water treatment facilities. Since most of these projects are public services and public goods, some of these uneven impacts cannot be fully resolved by markets. This paper explores whether governments make any effort to balance the uneven distributional impacts caused by dam construction. It shows that dam construction brought an average 2% decrease in per capita tax revenue in upstream counties, a 30% increase in dam-location counties, and an insignificant increase in downstream counties. Similar distributional impacts were observed for other outcome variables, like rural income and agricultural crop yields, though the impacts differ across crops. The paper also found some balancing effort in the form of inter-governmental transfers to reduce the unevenly distributed impacts caused by dam construction. However, overall the inter-governmental fiscal transfer efforts were not large enough to fully correct those uneven distributions, as reflected in a 2% decrease of per capita GDP in upstream counties and increases of per capita GDP in local and downstream counties. This paper may shed some light on governmental considerations in the decision-making process for large hydrological infrastructure.

  10. Using Swarming Agents for Scalable Security in Large Network Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crouse, Michael; White, Jacob L.; Fulp, Errin W.

    2011-09-23

    The difficulty of securing computer infrastructures increases as they grow in size and complexity. Network-based security solutions such as IDS and firewalls cannot scale because of exponentially increasing computational costs inherent in detecting the rapidly growing number of threat signatures. Host-based solutions like virus scanners and IDS suffer similar issues, and these are compounded when enterprises try to monitor them in a centralized manner. Swarm-based autonomous agent systems like digital ants and artificial immune systems can provide a scalable security solution for large network environments. The digital ants approach offers a biologically inspired design where each ant in the virtual colony can detect atoms of evidence that may help identify a possible threat. By assembling the atomic evidence from different ant types, the colony may detect the threat. This decentralized approach can require, on average, fewer computational resources than traditional centralized solutions; however, there are limits to its scalability. This paper describes how dividing a large infrastructure into smaller managed enclaves allows the digital ant framework to operate effectively in larger environments. Experimental results show that using smaller enclaves allows for more consistent distribution of agents and results in faster response times.

  11. Big Wind Turbines Require Infrastructure Upgrades - Continuum Magazine |

    Science.gov Websites

    [Abstract recovered only in fragments from the source page.] NREL has been completing electrical infrastructure upgrades at the National Wind Technology Center (NWTC) to accommodate utility-scale wind turbines; the installation of large turbines beginning in the fall of 2009 necessitated these upgrades, and interconnecting the large turbines in the eastern-most row on site required major electrical infrastructure work.

  12. Large scale and cloud-based multi-model analytics experiments on climate change data in the Earth System Grid Federation

    NASA Astrophysics Data System (ADS)

    Fiore, Sandro; Płóciennik, Marcin; Doutriaux, Charles; Blanquer, Ignacio; Barbera, Roberto; Donvito, Giacinto; Williams, Dean N.; Anantharaj, Valentine; Salomoni, Davide D.; Aloisio, Giovanni

    2017-04-01

    In many scientific domains such as climate, data is often n-dimensional and requires tools that support specialized data types and primitives to be properly stored, accessed, analysed and visualized. Moreover, new challenges arise in large-scale scenarios and eco-systems where petabytes (PB) of data can be available and data can be distributed and/or replicated, such as the Earth System Grid Federation (ESGF) serving the Coupled Model Intercomparison Project, Phase 5 (CMIP5) experiment, providing access to 2.5PB of data for the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5). A case study on climate models intercomparison data analysis addressing several classes of multi-model experiments is being implemented in the context of the EU H2020 INDIGO-DataCloud project. Such experiments require the availability of large amounts of data (multi-terabyte order) related to the output of several climate model simulations as well as the exploitation of scientific data management tools for large-scale data analytics. More specifically, the talk discusses in detail a use case on precipitation trend analysis in terms of requirements, architectural design solution, and infrastructural implementation. The experiment has been tested and validated on CMIP5 datasets, in the context of a large-scale distributed testbed across EU and US involving three ESGF sites (LLNL, ORNL, and CMCC) and one central orchestrator site (PSNC). The general "environment" of the case study concerns multi-model inter-comparison data analysis challenges, addressed on CMIP5 data made available through the IS-ENES/ESGF infrastructure. The added value of the solution proposed in the INDIGO-DataCloud project is summarized as follows: (i) it implements a different paradigm (from client- to server-side); (ii) it intrinsically reduces data movement; (iii) it makes the end-user setup lightweight; (iv) it fosters re-usability (of data, final/intermediate products, workflows, sessions, etc.) since everything is managed on the server-side; (v) it complements, extends and interoperates with the ESGF stack; (vi) it provides a "tool" for scientists to run multi-model experiments; and finally (vii) it can drastically reduce the time-to-solution for these experiments from weeks to hours. At the time of writing, the proposed testbed represents the first concrete implementation of a distributed multi-model experiment in the ESGF/CMIP context joining server-side and parallel processing, end-to-end workflow management and cloud computing. As opposed to the current scenario based on search & discovery, data download, and client-based data analysis, the INDIGO-DataCloud architectural solution described in this contribution addresses the scientific computing & analytics requirements by providing a paradigm shift based on server-side and high performance big data frameworks jointly with two-level workflow management systems realized at the PaaS level via a cloud infrastructure.
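    As a local stand-in for the kind of computation the experiment pushes to the server side (this is not the project's operator implementation), the sketch below fits a per-grid-cell linear precipitation trend for several models and averages the trends; the array shapes and synthetic data are assumptions.

    ```python
    # Illustrative multi-model precipitation trend analysis on synthetic fields.
    import numpy as np

    rng = np.random.default_rng(3)
    n_models, n_years, ny, nx = 5, 50, 12, 18
    years = np.arange(n_years)

    # stand-in for CMIP5-like precipitation output, shape (model, year, lat, lon)
    precip = rng.gamma(2.0, 1.0, (n_models, n_years, ny, nx)) \
             + 0.01 * years[None, :, None, None] * rng.normal(1.0, 0.3, (n_models, 1, ny, nx))

    trends = np.empty((n_models, ny, nx))
    for m in range(n_models):
        flat = precip[m].reshape(n_years, -1)
        slope = np.polyfit(years, flat, deg=1)[0]      # linear trend per grid cell
        trends[m] = slope.reshape(ny, nx)

    print("multi-model mean trend, field average:", trends.mean(axis=0).mean())
    ```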

  13. Major Incident Hospital: Development of a Permanent Facility for Management of Incident Casualties.

    PubMed

    Marres, Geertruid; Bemelman, Michael; van der Eijk, John; Leenen, Luke

    2009-06-01

    Preparation is essential to cope with the challenge of providing optimal care when there is a sudden, unexpected surge of casualties due to a disaster or major incident. By definition, the requirements of such cases exceed the standard care facilities of hospitals in qualitative or quantitative respects and interfere with the care of regular patients. To meet the growing demands to be prepared for disasters, a permanent facility to provide structured, prepared relief in such situations was developed. A permanent but reserved Major Incident Hospital (MIH) has been developed through cooperation between a large academic medical institution, a trauma center, a military hospital, and the National Poison Information Centre (NVIC). The infrastructure, organization, support systems, training and systematic working methods of the MIH are designed to create order in a chaotic, unexpected situation and to optimize care and logistics in any possible scenario. Focus points are: patient flow and triage, registration, communication, evaluation and training. Research and the literature are used to identify characteristic pitfalls due to the chaos and unexpected nature associated with disasters, and to adapt our organization. At the MIH, the exceptional has become the core business, and preparation for disaster and large-scale emergency care is a daily occupation. An Emergency Response Protocol enables the normally dormant hospital to admit up to 100 (in exceptional cases even 300) patients after a start-up time of only 15 minutes. The Patient Barcode Registration System (PBR) with EAN codes guarantees quick and adequate registration of patient data in order to facilitate good medical coordination and follow-up during a major incident. The fact that the hospital is strictly reserved for this type of care guarantees availability and minimizes impact on normal care. When it is not being used during a major incident, there is time to address training and research. Collaboration with the NVIC and infrastructural adjustments enable us not only to care for patients with physical trauma, but also to provide centralized care of patients under quarantine conditions for, say, MRSA, SARS, smallpox, chemical or biological hazards. Triage plays an important role in medical disaster management and is therefore key to organization and infrastructure. Caps facilitate role distribution and recognizability. The PBR resulted in more accurate registration and real-time availability of patient and group information. Infrastructure and a plan are not enough; training, research and evaluation are necessary to continuously work on disaster preparedness. The MIH in Utrecht (Netherlands) is a globally unique facility that can provide immediate emergency care for multiple casualties under exceptional circumstances. Resulting from the cooperation between a large academic medical institution, a trauma center, a military hospital and the NVIC, the MIH offers not only a good and complete infrastructure but also the expertise required to provide large-scale emergency care during disasters and major incidents.

  14. Effects of a significant New Madrid Seismic Zone event on oil and natural gas pipelines and their cascading effects to critical infrastructures

    NASA Astrophysics Data System (ADS)

    Fields, Damon E.

    Critical Infrastructure Protection (CIP) is a construct that relates preparedness and responsiveness to natural or man-made disasters that involve vulnerable assets deemed essential for the functioning of our economy and society. Infrastructure systems (power grids, bridges, airports, etc.) are vulnerable to disastrous types of events--natural or man-made. Failures of these systems can have devastating effects on communities and entire regions. CIP relates our willingness, ability, and capability to defend, mitigate, and re-constitute those assets that succumb to disasters affecting one or more infrastructure sectors. This qualitative research utilized ethnography and employed interviews with subject matter experts (SMEs) from various fields of study regarding CIP with respect to oil and natural gas pipelines in the New Madrid Seismic Zone. The study focused on the research question: What can be done to mitigate vulnerabilities in the oil and natural gas infrastructures, along with the potential cascading effects to interdependent systems, associated with a New Madrid fault event? The researcher also analyzed National Level Exercises (NLE) and real world events, and associated After Action Reports (AAR) and Lessons Learned (LL) in order to place a holistic lens across all infrastructures and their dependencies and interdependencies. Three main themes related to the research question emerged: (a) preparedness, (b) mitigation, and (c) impacts. These themes comprised several dimensions: (a) redundancy, (b) node hardening, (c) education, (d) infrastructure damage, (e) cascading effects, (f) interdependencies, (g) exercises, and (h) earthquake readiness. As themes and dimensions are analyzed, they are considered against findings in AARs and LL from previous real world events and large scale exercise events for validation or rejection.

  15. GATECloud.net: a platform for large-scale, open-source text processing on the cloud.

    PubMed

    Tablan, Valentin; Roberts, Ian; Cunningham, Hamish; Bontcheva, Kalina

    2013-01-28

    Cloud computing is increasingly being regarded as a key enabler of the 'democratization of science', because on-demand, highly scalable cloud computing facilities enable researchers anywhere to carry out data-intensive experiments. In the context of natural language processing (NLP), algorithms tend to be complex, which makes their parallelization and deployment on cloud platforms a non-trivial task. This study presents a new, unique, cloud-based platform for large-scale NLP research--GATECloud.net. It enables researchers to carry out data-intensive NLP experiments by harnessing the vast, on-demand compute power of the Amazon cloud. Important infrastructural issues are dealt with by the platform, completely transparently for the researcher: load balancing, efficient data upload and storage, deployment on the virtual machines, security and fault tolerance. We also include a cost-benefit analysis and usage evaluation.

  16. Drivers and barriers to e-invoicing adoption in Greek large scale manufacturing industries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marinagi, Catherine, E-mail: marinagi@teihal.gr; Trivellas, Panagiotis, E-mail: ptrivel@yahoo.com; Reklitis, Panagiotis, E-mail: preklitis@yahoo.com

    2015-02-09

    This paper attempts to investigate the drivers and barriers that large-scale Greek manufacturing industries experience in adopting electronic invoices (e-invoices), based on three case studies of organizations with an international presence in many countries. The study focuses on the drivers that may increase the adoption and use of e-invoicing, including customer demand for e-invoices and sufficient know-how and adoption of e-invoicing in organizations. In addition, the study reveals important barriers that prevent the expansion of e-invoicing, such as suppliers’ reluctance to implement e-invoicing and IT infrastructure incompatibilities. Other issues examined by this study include the observed benefits from e-invoicing implementation and the financial priorities of the organizations assumed to be supported by e-invoicing.

  17. Remote third shift EAST operation: a new paradigm

    NASA Astrophysics Data System (ADS)

    Schissel, D. P.; Coviello, E.; Eidietis, N.; Flanagan, S.; Garcia, F.; Humphreys, D.; Kostuk, M.; Lanctot, M.; Lee, X.; Margo, M.; Miller, D.; Parker, C.; Penaflor, B.; Qian, J. P.; Sun, X.; Tan, H.; Walker, M.; Xiao, B.; Yuan, Q.

    2017-05-01

    General Atomics’ (GA) scientists in the United States remotely conducted experimental operation of the experimental advanced superconducting tokamak (EAST) in China during its third shift. Scientists led these experiments in a dedicated remote control room that utilized a novel computer science hardware and software infrastructure to allow data movement, visualization, and communication on the time scale of EAST’s pulse cycle. This Fusion Science Collaboration Zone infrastructure allows the movement of large amounts of data between continents in a short time scale with a 300-fold increase in data transfer rate over that available using the traditional transmission protocol. Real-time data from control systems is moved almost instantaneously. An event system tied to the EAST pulse cycle allows automatic initiation of data transfers, resulting in bulk EAST data to be transferred to GA within minutes. The EAST data at GA is served via MDSplus to approved US collaborators avoiding multiple US clients from requesting data from EAST and competing for the long-haul network’s bandwidth. At present there are 37 approved scientists from 8 US research institutions.
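    The abstract states that the mirrored EAST data are served to approved US collaborators via MDSplus; a heavily hedged client-side sketch of that access pattern is shown below. The server name, tree name, shot number, and signal path are placeholders, not real EAST values.

    ```python
    # Hedged sketch of remote data access with the MDSplus thin-client interface.
    from MDSplus import Connection

    conn = Connection("mdsplus.example.org")      # placeholder server hosting the mirrored data
    conn.openTree("east", 123456)                 # placeholder tree name and shot number
    ip = conn.get(r"\IP").data()                  # placeholder node: plasma current signal
    t = conn.get(r"dim_of(\IP)").data()           # its time base
    print(f"retrieved {len(ip)} samples spanning {t[0]:.3f}-{t[-1]:.3f} s")
    ```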

  18. Seqcrawler: biological data indexing and browsing platform.

    PubMed

    Sallou, Olivier; Bretaudeau, Anthony; Roult, Aurelien

    2012-07-24

    Seqcrawler takes its roots in software like SRS or Lucegene. It provides an indexing platform to ease the search of data and meta-data in biological banks, and it can scale to cope with the current flow of data. While many biological bank search tools are available on the Internet, mainly provided by large organizations to search their data, there is a lack of free and open source solutions for browsing one's own set of data with a flexible query system that can scale from a single computer to a cloud system. A personal index platform will help labs and bioinformaticians to search their meta-data but also to build a larger information system with custom subsets of data. The software is scalable from a single computer to a cloud-based infrastructure. It has been successfully tested in a private cloud with 3 index shards (pieces of index) hosting ~400 million sequence records (whole GenBank, UniProt, PDB and others) for a total size of 600 GB in a fault-tolerant architecture (high availability). It has also been successfully integrated with software to add extra meta-data from BLAST results to enhance users' result analysis. Seqcrawler provides a complete open source search and store solution for labs or platforms needing to manage large amounts of data/meta-data with a flexible and customizable web interface. All components (search engine, visualization and data storage), though independent, share a common and coherent data system that can be queried with a simple HTTP interface. The solution scales easily and can also provide a high-availability infrastructure.
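    Since the abstract only says the components share a simple HTTP query interface, the client sketch below is hypothetical: the host, path, and parameter names are invented for illustration and do not correspond to documented Seqcrawler endpoints.

    ```python
    # Hypothetical HTTP query against a Seqcrawler-like search endpoint.
    import requests

    response = requests.get(
        "http://seqcrawler.example.org/search",   # placeholder endpoint
        params={"q": "BRCA1", "bank": "uniprot", "start": 0, "rows": 20},  # placeholder parameters
        timeout=30,
    )
    response.raise_for_status()
    for hit in response.json().get("hits", []):
        print(hit.get("id"), hit.get("description"))
    ```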

  19. Advancing research opportunities and promoting pathways in graduate education: a systemic approach to BUILD training at California State University, Long Beach (CSULB).

    PubMed

    Urizar, Guido G; Henriques, Laura; Chun, Chi-Ah; Buonora, Paul; Vu, Kim-Phuong L; Galvez, Gino; Kingsford, Laura

    2017-01-01

    First-generation college graduates, racial and ethnic minorities, people with disabilities, and those from disadvantaged backgrounds are gravely underrepresented in the health research workforce representing behavioral health sciences and biomedical sciences and engineering (BHS/BSE). Furthermore, relative to their peers, very few students from these underrepresented groups (URGs) earn scientific bachelor's degrees with even fewer earning doctorate degrees. Therefore, programs that engage and retain URGs in health-related research careers early on in their career path are imperative to promote the diversity of well-trained research scientists who have the ability to address the nation's complex health challenges in an interdisciplinary way. The purpose of this paper is to describe the challenges, lessons learned, and sustainability of implementing a large-scale, multidisciplinary research infrastructure at California State University, Long Beach (CSULB) - a minority-serving institution - through federal funding received by the National Institutes of Health (NIH) Building Infrastructure Leading to Diversity (BUILD) Initiative. The CSULB BUILD initiative consists of developing a research infrastructure designed to engage and retain URGs on the research career path by providing them with the research training and skills needed to make them highly competitive for doctoral programs and entry into the research workforce. This initiative unites many research disciplines using basic, applied, and translational approaches to offer insights and develop technologies addressing prominent community and national health issues from a multidisciplinary perspective. Additionally, this initiative brings together local (e.g., high school, community college, doctoral research institutions) and national (e.g., National Research Mentoring Network) collaborative partners to alter how we identify, develop, and implement resources to enhance student and faculty research. Finally, this initiative establishes a student research training program that engages URGs earlier in their academic development, is larger and multidisciplinary in scope, and is responsive to the life contexts and promotes the cultural capital that URGs bring to their career path. Although there have been many challenges to planning for and developing CSULB BUILD's large-scale, multidisciplinary research infrastructure, there have been many lessons learned in the process that could aid other campuses in the development and sustainability of similar research programs.

  20. SIOS: A regional cooperation of international research infrastructures as a building block for an Arctic observing system

    NASA Astrophysics Data System (ADS)

    Holmen, K. J.; Lønne, O. J.

    2016-12-01

    The Svalbard Integrated Earth Observing System (SIOS) is a regional response to the Earth System Science (ESS) challenges posed by the Amsterdam Declaration on Global Change. SIOS is intended to develop and implement methods for how observational networks in the Arctic should be designed in order to address such issues at a regional scale. SIOS builds on the extensive observation capacity and research installations already put in place by many international institutions and will provide upgraded and relevant world-class observing systems and research facilities in and around Svalbard. It is a distributed research infrastructure set up to provide a regional observational system for long-term measurements under a joint framework. As one of the large-scale research infrastructure initiatives on the ESFRI roadmap (European Strategy Forum on Research Infrastructures), SIOS is now being implemented. The new research infrastructure organization, the SIOS Knowledge Center (SIOS-KC), is instrumental in developing methods and solutions for setting up its regional contribution to a systematically constructed Arctic observational network useful for global change studies. We will discuss cross-disciplinary research experiences, some case studies, and lessons learned so far. SIOS aims to provide an effective, easily accessible data management system which makes use of existing data handling systems in the thematic fields covered by SIOS. SIOS will implement a data policy which matches the ambitions set for the new European research infrastructures, but is at the same time flexible enough to accommodate 'historical' legacies. Given the substantial international presence in the Svalbard archipelago and the pan-Arctic nature of the issue, there is an opportunity to build SIOS further into a wider regional network and pan-Arctic context, ideally under the umbrella of the Sustaining Arctic Observing Networks (SAON) initiative. It is necessary to anchor SIOS strongly in a European context and connect it to extra-EU initiatives in order to establish a pan-Arctic perspective. SIOS must develop and secure robust communication with other bodies carrying out and funding research activities in the Arctic (observational as well as modelling) and actively promote a sustained Arctic observing network.

  1. Emergency response to mass casualty incidents in Lebanon.

    PubMed

    El Sayed, Mazen J

    2013-08-01

    The emergency response to mass casualty incidents in Lebanon lacks uniformity. Three recent large-scale incidents have challenged the existing emergency response process and have raised the need to improve and develop incident management for better resilience in times of crisis. We describe some simple emergency management principles that are currently applied in the United States. These principles can be easily adopted by Lebanon and other developing countries to standardize and improve their emergency response systems using existing infrastructure.

  2. Responding to the Event Deluge

    NASA Technical Reports Server (NTRS)

    Williams, Roy D.; Barthelmy, Scott D.; Denny, Robert B.; Graham, Matthew J.; Swinbank, John

    2012-01-01

    We present the VOEventNet infrastructure for large-scale rapid follow-up of astronomical events, including selection, annotation, machine intelligence, and coordination of observations. The VOEvent standard is central to this vision, with distributed and replicated services rather than centralized facilities. We also describe some of the event brokers, services, and software that are connected to the network. These technologies will become more important in the coming years, with new event streams from Gaia, LOFAR, LIGO, LSST, and many others.

  3. Energy Systems Integration Facility (ESIF) External Stakeholders Workshop: Workshop Proceedings, 9 October 2008, Golden, Colorado

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Komomua, C.; Kroposki, B.; Mooney, D.

    2009-01-01

    On October 9, 2008, NREL hosted a workshop to provide an opportunity for external stakeholders to offer insights and recommendations on the design and functionality of DOE's planned Energy Systems Integration Facility (ESIF). The goal was to ensure that the planning for the ESIF effectively addresses the most critical barriers to large-scale energy efficiency (EE) and renewable energy (RE) deployment. This technical report documents the ESIF workshop proceedings.

  4. Mitigating and adapting to climate change: multi-functional and multi-scale assessment of green urban infrastructure.

    PubMed

    Demuzere, M; Orru, K; Heidrich, O; Olazabal, E; Geneletti, D; Orru, H; Bhave, A G; Mittal, N; Feliu, E; Faehnle, M

    2014-12-15

    In order to develop climate resilient urban areas and reduce emissions, several opportunities exist starting from conscious planning and design of green (and blue) spaces in these landscapes. Green urban infrastructure has been regarded as beneficial, e.g. by balancing water flows, providing thermal comfort. This article explores the existing evidence on the contribution of green spaces to climate change mitigation and adaptation services. We suggest a framework of ecosystem services for systematizing the evidence on the provision of bio-physical benefits (e.g. CO2 sequestration) as well as social and psychological benefits (e.g. improved health) that enable coping with (adaptation) or reducing the adverse effects (mitigation) of climate change. The multi-functional and multi-scale nature of green urban infrastructure complicates the categorization of services and benefits, since in reality the interactions between various benefits are manifold and appear on different scales. We will show the relevance of the benefits from green urban infrastructures on three spatial scales (i.e. city, neighborhood and site specific scales). We will further report on co-benefits and trade-offs between the various services indicating that a benefit could in turn be detrimental in relation to other functions. The manuscript identifies avenues for further research on the role of green urban infrastructure, in different types of cities, climates and social contexts. Our systematic understanding of the bio-physical and social processes defining various services allows targeting stressors that may hamper the provision of green urban infrastructure services in individual behavior as well as in wider planning and environmental management in urban areas. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Towards large-scale mapping of urban three-dimensional structure using Landsat imagery and global elevation datasets

    NASA Astrophysics Data System (ADS)

    Wang, P.; Huang, C.

    2017-12-01

    The three-dimensional (3D) structure of buildings and infrastructures is fundamental to understanding and modelling the impacts and challenges of urbanization in terms of energy use, carbon emissions, and earthquake vulnerability. However, spatially detailed maps of urban 3D structure have been scarce, particularly in fast-changing developing countries. We present here a novel methodology to map the volume of buildings and infrastructures at 30 meter resolution using a synergy of Landsat imagery and openly available global digital surface models (DSMs), including the Shuttle Radar Topography Mission (SRTM), the ASTER Global Digital Elevation Map (GDEM), ALOS World 3D - 30m (AW3D30), and the recently released global DSM from the TanDEM-X mission. Our method builds on the concept of an object-based height profile to extract height metrics from the DSMs and uses a machine learning algorithm to predict height and volume from the height metrics. We tested this algorithm across the whole of England and assessed our results using Lidar measurements in 25 English cities. Our initial assessment achieved an RMSE of 1.4 m (R2 = 0.72) for building height and an RMSE of 1208.7 m3 (R2 = 0.69) for building volume, demonstrating the potential for large-scale application and fully automated mapping of urban structure.
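    The prediction step (a machine-learning regressor trained on DSM-derived height metrics against Lidar reference heights) can be sketched with scikit-learn; the feature set and the synthetic training table below are placeholders, not the study's data or model configuration.

    ```python
    # Conceptual sketch: random-forest regression of building height from height metrics.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(4)
    n = 500
    metrics = rng.normal(size=(n, 4))                         # e.g. DSM percentiles, roughness, nDSM mean
    height = 5 + 3 * metrics[:, 0] + rng.normal(0, 1.4, n)    # synthetic "Lidar" reference heights (m)

    X_train, X_test, y_train, y_test = train_test_split(metrics, height, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
    rmse = np.sqrt(mean_squared_error(y_test, model.predict(X_test)))
    print(f"height RMSE on held-out samples: {rmse:.2f} m")
    ```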

  6. Engineering youth service system infrastructure: Hawaii's continued efforts at large-scale implementation through knowledge management strategies.

    PubMed

    Nakamura, Brad J; Mueller, Charles W; Higa-McMillan, Charmaine; Okamura, Kelsie H; Chang, Jaime P; Slavin, Lesley; Shimabukuro, Scott

    2014-01-01

    Hawaii's Child and Adolescent Mental Health Division provides a unique illustration of a youth public mental health system with a long and successful history of large-scale quality improvement initiatives. Many advances are linked to flexibly organizing and applying knowledge gained from the scientific literature and move beyond installing a limited number of brand-named treatment approaches that might be directly relevant only to a small handful of system youth. This article takes a knowledge-to-action perspective and outlines five knowledge management strategies currently under way in Hawaii. Each strategy represents one component of a larger coordinated effort at engineering a service system focused on delivering both brand-named treatment approaches and complementary strategies informed by the evidence base. The five knowledge management examples are (a) a set of modular-based professional training activities for currently practicing therapists, (b) an outreach initiative for supporting youth evidence-based practices training at Hawaii's mental health-related professional programs, (c) an effort to increase consumer knowledge of and demand for youth evidence-based practices, (d) a practice and progress agency performance feedback system, and (e) a sampling of system-level research studies focused on understanding treatment as usual. We end by outlining a small set of lessons learned and a longer-term vision for embedding these efforts into the system's infrastructure.

  7. Behavioral responses of wolves to roads: scale-dependent ambivalence

    PubMed Central

    Zimmermann, Barbara; Nelson, Lindsey; Wabakken, Petter; Sand, Håkan; Liberg, Olof

    2014-01-01

    Throughout their recent recovery in several industrialized countries, large carnivores have had to cope with a changed landscape dominated by human infrastructure. Population growth depends on the ability of individuals to adapt to these changes by making use of new habitat features and at the same time to avoid increased risks of mortality associated with human infrastructure. We analyzed the summer movements of 19 GPS-collared resident wolves (Canis lupus L.) from 14 territories in Scandinavia in relation to roads. We used resource and step selection functions, including >12000 field-checked GPS-positions and 315 kill sites. Wolves displayed ambivalent responses to roads depending on the spatial scale, road type, time of day, behavioral state, and reproductive status. At the site scale (approximately 0.1 km2), they selected for roads when traveling, nearly doubling their travel speed. Breeding wolves moved the fastest. At the patch scale (10 km2), house density rather than road density was a significant negative predictor of wolf patch selection. At the home range scale (approximately 1000 km2), breeding wolves increased gravel road use with increasing road availability, although at a lower rate than expected. Wolves have adapted to use roads for ease of travel, but at the same time developed a cryptic behavior to avoid human encounters. This behavioral plasticity may have been important in allowing the successful recovery of wolf populations in industrialized countries. However, we emphasize the role of roads as a potential cause of increased human-caused mortality. PMID:25419085

  8. Behavioral responses of wolves to roads: scale-dependent ambivalence.

    PubMed

    Zimmermann, Barbara; Nelson, Lindsey; Wabakken, Petter; Sand, Håkan; Liberg, Olof

    2014-11-01

    Throughout their recent recovery in several industrialized countries, large carnivores have had to cope with a changed landscape dominated by human infrastructure. Population growth depends on the ability of individuals to adapt to these changes by making use of new habitat features and at the same time to avoid increased risks of mortality associated with human infrastructure. We analyzed the summer movements of 19 GPS-collared resident wolves (Canis lupus L.) from 14 territories in Scandinavia in relation to roads. We used resource and step selection functions, including >12000 field-checked GPS-positions and 315 kill sites. Wolves displayed ambivalent responses to roads depending on the spatial scale, road type, time of day, behavioral state, and reproductive status. At the site scale (approximately 0.1 km²), they selected for roads when traveling, nearly doubling their travel speed. Breeding wolves moved the fastest. At the patch scale (10 km²), house density rather than road density was a significant negative predictor of wolf patch selection. At the home range scale (approximately 1000 km²), breeding wolves increased gravel road use with increasing road availability, although at a lower rate than expected. Wolves have adapted to use roads for ease of travel, but at the same time developed a cryptic behavior to avoid human encounters. This behavioral plasticity may have been important in allowing the successful recovery of wolf populations in industrialized countries. However, we emphasize the role of roads as a potential cause of increased human-caused mortality.

  9. Green infrastructure and its catchment-scale effects: an emerging science

    PubMed Central

    Golden, Heather E.; Hoghooghi, Nahal

    2018-01-01

    Urbanizing environments alter the hydrological cycle by redirecting stream networks for stormwater and wastewater transmission and increasing impermeable surfaces. These changes thereby accelerate the runoff of water and its constituents following precipitation events, alter evapotranspiration processes, and indirectly modify surface precipitation patterns. Green infrastructure, or low-impact development (LID), can be used as a standalone practice or in concert with gray infrastructure (traditional stormwater management approaches) for cost-efficient, decentralized stormwater management. The growth in LID over the past several decades has resulted in a concomitant increase in research evaluating LID efficiency and effectiveness, but mostly at localized scales. There is a clear research need to quantify how LID practices affect water quantity (i.e., runoff and discharge) and quality at the scale of catchments. In this overview, we present the state of the science of LID research at the local scale, considerations for scaling this research to catchments, recent advances and findings in scaling the effects of LID practices on water quality and quantity at catchment scales, and the use of models as novel tools for these scaling efforts. PMID:29682288

  10. Green infrastructure and its catchment-scale effects: an emerging science.

    PubMed

    Golden, Heather E; Hoghooghi, Nahal

    2018-01-01

    Urbanizing environments alter the hydrological cycle by redirecting stream networks for stormwater and wastewater transmission and increasing impermeable surfaces. These changes thereby accelerate the runoff of water and its constituents following precipitation events, alter evapotranspiration processes, and indirectly modify surface precipitation patterns. Green infrastructure, or low-impact development (LID), can be used as a standalone practice or in concert with gray infrastructure (traditional stormwater management approaches) for cost-efficient, decentralized stormwater management. The growth in LID over the past several decades has resulted in a concomitant increase in research evaluating LID efficiency and effectiveness, but mostly at localized scales. There is a clear research need to quantify how LID practices affect water quantity (i.e., runoff and discharge) and quality at the scale of catchments. In this overview, we present the state of the science of LID research at the local scale, considerations for scaling this research to catchments, recent advances and findings in scaling the effects of LID practices on water quality and quantity at catchment scales, and the use of models as novel tools for these scaling efforts.

  11. Towards the Big Data Strategies for EISCAT-3D

    NASA Astrophysics Data System (ADS)

    Häggström, I.; Chen, Y.; Hardisty, A.; Sipos, G.; Krakowian, M.; Ferreira, N. L.; Savolainen, V.

    2013-12-01

    The design of the next generation incoherent scatter radar system, EISCAT-3D, opens up opportunities for physicists to explore many new research fields in the studies of the atmosphere and near-Earth space. On the other hand, it also introduces significant challenges in handling the large-scale experimental data which will be massively generated at great speeds and volumes. During its first operation stage in 2018, EISCAT-3D will produce 5 PB of data per year, and the total data volume will rise to 40 PB per year in its full operations stage in 2023. This refers to the so-called big data problem, whose size is beyond the capabilities of the current database technology [1]. To unlock the value from these data, new forms of processing and platforms of tools are needed. Advanced e-Science infrastructures such as EGI, EUDAT, and PRACE, and their enabling technologies, are making large-scale computational capacities more accessible to researchers of all scientific disciplines. The European Grid Infrastructure (EGI), a not-for-profit foundation created to manage the infrastructures on behalf of the National Grid Initiatives and European Intergovernmental Research Organisations, operates more than 370,000 logical CPUs, 248 PB of disk and 176 PB of tape capacity (June 2013 statistics). EUDAT is a European project aiming to take the first steps towards building a Collaborative Data Infrastructure for European scientific data products. It will offer services for data storage and replication, data staging to computational resources (and vice versa) and services for data cataloguing and discovery. PRACE is the pan-European supercomputing infrastructure that forms the top-tier of HPC provision across Europe, with the aim of enabling high impact scientific discovery and engineering research and development across all disciplines to enhance European competitiveness. We propose e-Science approaches to tackle the challenges of processing and searching EISCAT-3D data, and will provide solutions for: 1. Staging EISCAT-3D lower-level data (voltage data) into the large-scale e-Science storage, such as EGI or EUDAT; 2. Providing various processing and mining facilities such as, auto-correlation and spatial/temporal integration, to allow individual scientists to analyse data at will; 3. Providing advanced searching facilities to enable individual scientists to search through all levels of data and identify specific signatures, e.g., plasma features, meteors, space debris, astronomical features. The new data processing and searching strategy will offer a more flexible way for EISCAT users to analyse and discover interesting data patterns which are not yet available. Space physicists will be able to make better use of the observation data and exploit this growing wealth of data. This will eventually lead to a new data-centric way of conceptualising, organising and carrying out research activities which could lead to an introduction of new approaches to solve problems that were previously considered extremely hard or, in some cases, impossible to solve and also lead to serendipitous discoveries and significant breakthroughs [1]. [1] C. Thanos, S. Manegold and M. Kersten, 'Big Data', ERCIM Special Theme: Big Data, No. 89, Apr. 2012.
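
    To make point 2 above concrete, the snippet below is a minimal sketch of one such processing facility: an auto-correlation estimate over a block of (synthetic) complex voltage samples. The estimator and array sizes are illustrative assumptions, not part of the EISCAT-3D design.

```python
# Toy sketch of lag-profile style processing: autocorrelation of a block of
# synthetic complex voltage (IQ) samples, the low-level product that would be
# staged to e-Science storage such as EGI or EUDAT.
import numpy as np

rng = np.random.default_rng(42)
voltages = rng.normal(size=4096) + 1j * rng.normal(size=4096)  # synthetic IQ samples

def autocorrelation(x, max_lag):
    """Return complex autocorrelation estimates for lags 0..max_lag-1."""
    n = len(x)
    return np.array([np.vdot(x[:n - k], x[k:]) / (n - k) for k in range(max_lag)])

acf = autocorrelation(voltages, max_lag=32)
print(np.abs(acf[:5]))  # lag-0 term approximates the signal power
```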

  12. Lessons learned from implementing a national infrastructure in Sweden for storage and analysis of next-generation sequencing data

    PubMed Central

    2013-01-01

    Analyzing and storing data and results from next-generation sequencing (NGS) experiments is a challenging task, hampered by ever-increasing data volumes and frequent updates of analysis methods and tools. Storage and computation have grown beyond the capacity of personal computers and there is a need for suitable e-infrastructures for processing. Here we describe UPPNEX, an implementation of such an infrastructure, tailored to the needs of data storage and analysis of NGS data in Sweden serving various labs and multiple instruments from the major sequencing technology platforms. UPPNEX comprises resources for high-performance computing, large-scale and high-availability storage, an extensive bioinformatics software suite, up-to-date reference genomes and annotations, a support function with system and application experts as well as a web portal and support ticket system. UPPNEX applications are numerous and diverse, and include whole genome-, de novo- and exome sequencing, targeted resequencing, SNP discovery, RNASeq, and methylation analysis. There are over 300 projects that utilize UPPNEX and include large undertakings such as the sequencing of the flycatcher and Norwegian spruce. We describe the strategic decisions made when investing in hardware, setting up maintenance and support, allocating resources, and illustrate major challenges such as managing data growth. We conclude with summarizing our experiences and observations with UPPNEX to date, providing insights into the successful and less successful decisions made. PMID:23800020

  13. Analysing the usage and evidencing the importance of fast chargers for the adoption of battery electric vehicles

    DOE PAGES

    Neaimeh, Myriam; Salisbury, Shawn D.; Hill, Graeme A.; ...

    2017-06-27

    An appropriate charging infrastructure is one of the key aspects needed to support the mass adoption of battery electric vehicles (BEVs), and it is suggested that publicly available fast chargers could play a key role in this infrastructure. As fast charging is a relatively new technology, very little research has been conducted on the topic using real-world datasets, and it is of utmost importance to measure actual usage of this technology and provide evidence on its importance to properly inform infrastructure planning. 90,000 fast charge events collected from the first large-scale roll-outs and evaluation projects of fast charging infrastructure in the UK and the US and 12,700 driving days collected from 35 BEVs in the UK were analysed. Using multiple regression analysis, we examined the relationship between daily driving distance and standard and fast charging and demonstrated that fast chargers are more influential. Fast chargers enabled using BEVs on journeys above their single-charge range that would have been impractical using standard chargers. Fast chargers could help overcome perceived and actual range barriers, making BEVs more attractive to future users. At current BEV market share, there is a vital need for policy support to accelerate the development of fast charge networks.
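
    The multiple-regression step described above can be illustrated with a small, hedged sketch: ordinary least squares relating daily driving distance to counts of standard and fast charge events. The data and variable names below are synthetic and illustrative only, not the study's dataset.

```python
# Hedged sketch of the regression idea: daily distance vs. charging behaviour.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
days = 500
std_charges = rng.poisson(1.2, days)     # standard charge events per day (synthetic)
fast_charges = rng.poisson(0.3, days)    # fast charge events per day (synthetic)
daily_km = 40 + 15 * std_charges + 60 * fast_charges + rng.normal(0, 10, days)

X = sm.add_constant(np.column_stack([std_charges, fast_charges]))
model = sm.OLS(daily_km, X).fit()
print(model.params)  # a larger fast-charge coefficient would mirror the paper's finding
```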

  14. Analysing the usage and evidencing the importance of fast chargers for the adoption of battery electric vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neaimeh, Myriam; Salisbury, Shawn D.; Hill, Graeme A.

    An appropriate charging infrastructure is one of the key aspects needed to support the mass adoption of battery electric vehicles (BEVs), and it is suggested that publicly available fast chargers could play a key role in this infrastructure. As fast charging is a relatively new technology, very little research has been conducted on the topic using real-world datasets, and it is of utmost importance to measure actual usage of this technology and provide evidence on its importance to properly inform infrastructure planning. 90,000 fast charge events collected from the first large-scale roll-outs and evaluation projects of fast charging infrastructure in the UK and the US and 12,700 driving days collected from 35 BEVs in the UK were analysed. Using multiple regression analysis, we examined the relationship between daily driving distance and standard and fast charging and demonstrated that fast chargers are more influential. Fast chargers enabled using BEVs on journeys above their single-charge range that would have been impractical using standard chargers. Fast chargers could help overcome perceived and actual range barriers, making BEVs more attractive to future users. At current BEV market share, there is a vital need for policy support to accelerate the development of fast charge networks.

  15. Robustness and Recovery of Lifeline Infrastructure and Ecosystem Networks

    NASA Astrophysics Data System (ADS)

    Bhatia, U.; Ganguly, A. R.

    2015-12-01

    Disruptive events, both natural and man-made, can have widespread impacts on both natural systems and lifeline infrastructure networks, leading to the loss of biodiversity and essential functionality, respectively. Projected sea-level rise and climate change can further increase the frequency and severity of large-scale floods in urban-coastal megacities. Moreover, failure in infrastructure systems can trigger cascading impacts on dependent ecosystems, and vice-versa. An important consideration in the behavior of the isolated networks and inter-connected networks following disruptive events is their resilience, or the ability of the network to "bounce back" to a pre-disaster state. Conventional risk analysis and subsequent risk management frameworks have focused on identifying the components' vulnerability and strengthening of the isolated components to withstand these disruptions. However, the high interconnectedness of these systems and the evolving nature of hazards, particularly in the context of climate extremes, make component-level analysis unrealistic. In this study, we discuss a complex network-based resilience framework to understand fragility and recovery strategies for infrastructure systems impacted by climate-related hazards. We extend the proposed framework to assess the response of ecological networks to multiple species loss and design the restoration management framework to identify the most efficient restoration sequence of species, which can potentially lead to disproportionate gains in biodiversity.
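
    As an illustration of the complex network-based view of fragility described above (not the authors' code), the sketch below tracks the giant-component fraction of a synthetic graph as highly connected nodes are removed; the graph model and removal order are assumptions chosen for illustration.

```python
# Network-robustness sketch: giant-component fraction under targeted node removal.
import networkx as nx

G = nx.barabasi_albert_graph(500, 2, seed=0)   # stand-in for a lifeline network
order = sorted(G.nodes, key=G.degree, reverse=True)  # remove hubs first

for removed in (0, 25, 50, 100):
    H = G.copy()
    H.remove_nodes_from(order[:removed])
    giant = max(nx.connected_components(H), key=len) if H.number_of_nodes() else set()
    print(removed, len(giant) / G.number_of_nodes())  # remaining connectivity
```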

  16. caGrid 1.0: an enterprise Grid infrastructure for biomedical research.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oster, S.; Langella, S.; Hastings, S.

    To develop software infrastructure that will provide support for discovery, characterization, integrated access, and management of diverse and disparate collections of information sources, analysis methods, and applications in biomedical research. Design: An enterprise Grid software infrastructure, called caGrid version 1.0 (caGrid 1.0), has been developed as the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG™) program. It is designed to support a wide range of use cases in basic, translational, and clinical research, including (1) discovery, (2) integrated and large-scale data analysis, and (3) coordinated study. Measurements: The caGrid is built as a Grid software infrastructure and leverages Grid computing technologies and the Web Services Resource Framework standards. It provides a set of core services, toolkits for the development and deployment of new community provided services, and application programming interfaces for building client applications. Results: The caGrid 1.0 was released to the caBIG community in December 2006. It is built on open source components and caGrid source code is publicly and freely available under a liberal open source license. The core software, associated tools, and documentation can be downloaded from the following URL: .

  17. Building analytical platform with Big Data solutions for log files of PanDA infrastructure

    NASA Astrophysics Data System (ADS)

    Alekseev, A. A.; Barreiro Megino, F. G.; Klimentov, A. A.; Korchuganova, T. A.; Maendo, T.; Padolski, S. V.

    2018-05-01

    The paper describes the implementation of a high-performance system for the processing and analysis of log files for the PanDA infrastructure of the ATLAS experiment at the Large Hadron Collider (LHC), responsible for the workload management of the order of 2M daily jobs across the Worldwide LHC Computing Grid. The solution is based on the ELK technology stack, which includes several components: Filebeat, Logstash, ElasticSearch (ES), and Kibana. Filebeat is used to collect data from logs. Logstash processes the data and exports it to Elasticsearch. ES is responsible for centralized data storage. Accumulated data in ES can be viewed using Kibana. These components were integrated with the PanDA infrastructure and replaced previous log processing systems for increased scalability and usability. The authors describe all the components and their configuration tuning for the current tasks and the scale of the actual system, and give several real-life examples of how this centralized log processing and storage service is used, to showcase the advantages for daily operations.
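
    Once logs are centralized in Elasticsearch as described above, they can be queried through its REST search API. The sketch below is a hedged illustration of such a query; the index name and field names are hypothetical and do not reflect the PanDA deployment's actual schema.

```python
# Hedged sketch of querying centralized log data via the Elasticsearch REST API.
import json
import requests

query = {
    "size": 5,
    "query": {"match": {"log_level": "ERROR"}},      # hypothetical field name
    "sort": [{"@timestamp": {"order": "desc"}}],
}
resp = requests.get(
    "http://localhost:9200/panda-logs-*/_search",    # hypothetical index pattern
    headers={"Content-Type": "application/json"},
    data=json.dumps(query),
    timeout=10,
)
for hit in resp.json().get("hits", {}).get("hits", []):
    print(hit["_source"].get("message", ""))
```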

  18. Independent practice associations: Advantages and disadvantages of an alternative form of physician practice organization.

    PubMed

    Casalino, Lawrence P; Chenven, Norman

    2017-03-01

    Value-based purchasing (VBP) favors provider organizations large enough to accept financial risk and develop care management infrastructure. Independent Practice Associations (IPAs) are a potential alternative for physicians to becoming employed by a hospital or large medical group. But little is known about IPAs. We selected four IPAs that vary in location, structure, and strategy, and conducted interviews with their president and medical director, as well as with a hospital executive and health plan executive familiar with that IPA. The IPAs studied vary in size and sophistication, but overall are performing well and are highly regarded by hospital and health plan executives. IPAs can grow rapidly without the cost of purchasing and operating physician practices and make it possible for physicians to remain independent in their own practices while providing the scale and care management infrastructure to make it possible to succeed in VBP. However, it can be difficult for IPAs to gain cooperation from hundreds to thousands of independent physicians, and the need for capital for growth and care management infrastructure is increasing as VBP becomes more prevalent and more demanding. Some IPAs are succeeding at VBP. As VBP raises the performance bar, IPAs will have to demonstrate that they can achieve results equal to more highly capitalized and tightly structured large medical groups and hospital-owned practices. Physicians should be aware of IPAs as a potential option for participating in VBP. Payers are aware of IPAs; the Medicare ACO program and health insurer ACO programs include many IPAs. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Global sand trade is paving the way for a tragedy of the sand commons

    NASA Astrophysics Data System (ADS)

    Torres, A.; Brandt, J.; Lear, K.; Liu, J.

    2016-12-01

    In the first 40 years of the 21st century, planet Earth is highly likely to experience more urban land expansion than in all of history, an increase in transportation infrastructure by more than a third, and a great variety of land reclamation projects. While scientists are beginning to quantify the deep imprint of human infrastructure on biodiversity at large scales, its off-site impacts and linkages to sand mining and trade have been largely ignored. Sand is the most widely used building material in the world. With an ever-increasing demand for this resource, sand is being extracted at rates that far exceed its replenishment, and is becoming increasingly scarce. This has already led to conflicts around the world and will likely lead to a "tragedy of the sand commons" if sustainable sand mining and trade cannot be achieved. We investigate the environmental and socioeconomic interactions over large distances (telecouplings) of infrastructure development and sand mining and trade across diverse systems through transdisciplinary research and the recently proposed telecoupling framework. Our research is generating a thorough understanding of the telecouplings driven by an increasing demand for sand. In particular, we address three main research questions: 1) Where are the conflicts related to sand mining occurring?; 2) What are the major "sending" and "receiving" systems of sand?; and 3) What are the main components (e.g. causes, effects, agents, etc.) of telecoupled systems involving sand mining and trade? Our results highlight the role of global sand trade as a driver of environmental degradation that threatens the integrity of natural systems and their capacity to deliver key ecosystem services. In addition, infrastructure development and sand mining and trade have important implications for other sustainability challenges such as over-fishing and global warming. This knowledge will help to identify opportunities and tools to better promote a more sustainable use of sand, ultimately helping avoid a "tragedy of the sand commons".

  20. Strategic behavior and governance challenges in self-organized coupled natural-human systems

    NASA Astrophysics Data System (ADS)

    Muneepeerakul, R.; Anderies, J. M.

    2017-12-01

    Successful and sustainable coupling of human societies and natural systems requires effective governance, which depends on the existence of proper infrastructure (both hard and soft). In recent decades, much attention has been paid to what has allowed many small-scale self-organized coupled natural-human systems around the world to persist for centuries, thanks in large part to the work of Elinor Ostrom and colleagues. In this work, we mathematically operationalize a conceptual framework developed from this body of work by way of a stylized model. The model captures the interplay between replicator dynamics within the population, dynamics of natural resources, and threshold characteristics of public infrastructure. The model analysis reveals conditions for long-term sustainability and collapse of the coupled systems as well as other tradeoffs and potential pitfalls in governing these systems.
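
    As a purely illustrative companion to the stylized model described above (not the authors' equations), the sketch below couples a replicator equation for the fraction of contributors with a resource stock and a public-infrastructure state that has a threshold effect; all functional forms and parameter values are assumptions.

```python
# Minimal sketch of coupled natural-human dynamics with an infrastructure threshold.
import numpy as np
from scipy.integrate import solve_ivp

def cnh(t, y, harvest=0.6, regen=0.8, decay=0.3, threshold=0.5):
    x, R, I = y                                        # contributors, resource, infrastructure
    eff = 1.0 / (1.0 + np.exp(-20 * (I - threshold)))  # infrastructure threshold effect
    payoff_gap = eff * R - 0.5                         # contribute vs. free-ride
    dx = x * (1 - x) * payoff_gap                      # replicator dynamics
    dR = regen * R * (1 - R) - harvest * eff * R       # logistic resource, harvested
    dI = x - decay * I                                 # contributions build infrastructure
    return [dx, dR, dI]

sol = solve_ivp(cnh, (0, 100), [0.2, 0.9, 0.1], max_step=0.1)
print(sol.y[:, -1])  # long-run state: sustained cooperation or collapse
```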

  1. Data-Driven Simulation-Enhanced Optimization of People-Based Print Production Service

    NASA Astrophysics Data System (ADS)

    Rai, Sudhendu

    This paper describes a systematic six-step data-driven simulation-based methodology for optimizing people-based service systems on a large distributed scale that exhibit high variety and variability. The methodology is exemplified through its application within the printing services industry, where it has been successfully deployed by Xerox Corporation across small, mid-sized and large print shops, generating over 250 million in profits across the customer value chain. Each step of the methodology is described in detail: co-development and testing of innovative concepts in partnership with customers; development of software and hardware tools to implement those concepts; establishment of work processes and practices for customer engagement and service implementation; creation of training and infrastructure for large-scale deployment; integration of the innovative offering within the framework of existing corporate offerings; and, lastly, the monitoring and deployment of the financial and operational metrics for estimating the return on investment and the continual renewal of the offering.

  2. Recent developments in user-job management with Ganga

    NASA Astrophysics Data System (ADS)

    Currie, R.; Elmsheuser, J.; Fay, R.; Owen, P. H.; Richards, A.; Slater, M.; Sutcliffe, W.; Williams, M.

    2015-12-01

    The Ganga project was originally developed for use by LHC experiments and has been used extensively throughout Run 1 in both LHCb and ATLAS. This document describes some of the most recent developments within the Ganga project. There have been improvements in the handling of large-scale computational tasks in the form of a new GangaTasks infrastructure. Improvements in file handling through a new IGangaFile interface make file handling largely transparent to the end user. In addition, the performance and usability of Ganga have both been addressed through the development of a new queues system that allows for parallel processing of job-related tasks.

  3. A secure and efficiently searchable health information architecture.

    PubMed

    Yasnoff, William A

    2016-06-01

    Patient-centric repositories of health records are an important component of health information infrastructure. However, patient information in a single repository is potentially vulnerable to loss of the entire dataset from a single unauthorized intrusion. A new health record storage architecture, the personal grid, eliminates this risk by separately storing and encrypting each person's record. The tradeoff for this improved security is that a personal grid repository must be sequentially searched since each record must be individually accessed and decrypted. To allow reasonable search times for large numbers of records, parallel processing with hundreds (or even thousands) of on-demand virtual servers (now available in cloud computing environments) is used. Estimated search times for a 10-million-record personal grid using 500 servers vary from 7 to 33 min depending on the complexity of the query. Since extremely rapid searching is not a critical requirement of health information infrastructure, the personal grid may provide a practical and useful alternative architecture that eliminates the large-scale security vulnerabilities of traditional databases by sacrificing unnecessary searching speed. Copyright © 2016 Elsevier Inc. All rights reserved.
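
    A hedged sketch of the search pattern described above: each record is encrypted under its own key, so a query must decrypt and inspect records one by one, which is parallelized across workers. The record layout, field names, and the co-location of keys with records are simplifying assumptions for illustration only.

```python
# Per-record encryption plus parallel decrypt-and-match, as a minimal sketch.
import json
from multiprocessing import Pool
from cryptography.fernet import Fernet

def make_record(i):
    key = Fernet.generate_key()
    payload = json.dumps({"id": i, "dx": "diabetes" if i % 7 == 0 else "none"})
    return key, Fernet(key).encrypt(payload.encode())

def matches(record, term="diabetes"):
    key, blob = record
    return term in json.loads(Fernet(key).decrypt(blob))["dx"]

if __name__ == "__main__":
    records = [make_record(i) for i in range(1000)]   # synthetic encrypted records
    with Pool() as pool:
        hits = sum(pool.map(matches, records))
    print(hits)  # number of records matching the query term
```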

  4. Facilitating large-scale clinical trials: in Asia.

    PubMed

    Choi, Han Yong; Ko, Jae-Wook

    2010-01-01

    The number of clinical trials conducted in Asian countries has started to increase as a result of expansion of the pharmaceutical market in this area. There is a growing opportunity for large-scale clinical trials because of the large number of patients, significant market potential, good quality of data, and the cost effective and qualified medical infrastructure. However, for carrying out large-scale clinical trials in Asia, there are several major challenges, including the quality control of data, budget control, laboratory validation, monitoring capacity, authorship, staff training, and nonstandard treatment that need to be considered. There are also several difficulties in collaborating on international trials in Asia because Asia is an extremely diverse continent. The major challenges are language differences, diversity of patterns of disease, and current treatments, a large gap in the experience with performing multinational trials, and regulatory differences among the Asian countries. In addition, there are also differences in the understanding of global clinical trials, medical facilities, indemnity assurance, and culture, including food and religion. To make regional and local data provide evidence for efficacy through the standardization of these differences, unlimited effort is required. At this time, there are no large clinical trials led by urologists in Asia, but it is anticipated that the role of urologists in clinical trials will continue to increase. Copyright © 2010 Elsevier Inc. All rights reserved.

  5. HAlign-II: efficient ultra-large multiple sequence alignment and phylogenetic tree reconstruction with distributed and parallel computing.

    PubMed

    Wan, Shixiang; Zou, Quan

    2017-01-01

    Multiple sequence alignment (MSA) plays a key role in biological sequence analyses, especially in phylogenetic tree construction. The extreme increase in next-generation sequencing output has resulted in a shortage of efficient ultra-large biological sequence alignment approaches for coping with different sequence types. Distributed and parallel computing represents a crucial technique for accelerating ultra-large (e.g., files larger than 1 GB) sequence analyses. Based on HAlign and the Spark distributed computing system, we implement a highly cost-efficient and time-efficient HAlign-II tool to address ultra-large multiple biological sequence alignment and phylogenetic tree construction. Experiments on large-scale DNA and protein data sets (files larger than 1 GB) showed that HAlign-II saves both time and space. It outperformed the current software tools. HAlign-II can efficiently carry out MSA and construct phylogenetic trees with ultra-large numbers of biological sequences. HAlign-II shows extremely high memory efficiency and scales well with increases in computing resources. HAlign-II provides a user-friendly web server based on our distributed computing infrastructure. HAlign-II with open-source codes and datasets was established at http://lab.malab.cn/soft/halign.
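
    The distributed pattern that HAlign-II builds on can be sketched with PySpark: partition many sequences across workers and process each independently. The scoring function below is a trivial placeholder, not HAlign-II's alignment algorithm, and the sequences are stand-ins for real FASTA records.

```python
# PySpark sketch of distributing per-sequence work across a cluster.
from pyspark import SparkContext

def score_against_reference(seq, ref="ACGTACGTAC"):
    # Placeholder similarity: matching positions over the shorter length.
    n = min(len(seq), len(ref))
    return seq, sum(a == b for a, b in zip(seq[:n], ref[:n])) / n

if __name__ == "__main__":
    sc = SparkContext("local[*]", "msa-sketch")
    seqs = ["ACGTACGAAC", "ACGGACGTAC", "TTGTACGTAC"]  # stand-ins for FASTA records
    results = sc.parallelize(seqs).map(score_against_reference).collect()
    print(results)
    sc.stop()
```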

  6. Nanotechnology and MEMS-based systems for civil infrastructure safety and security: Opportunities and challenges

    NASA Astrophysics Data System (ADS)

    Robinson, Nidia; Saafi, Mohamed

    2006-03-01

    Critical civil infrastructure systems such as bridges, high rises, dams, nuclear power plants and pipelines present a major investment and the health of the United States' economy and the lifestyle of its citizens both depend on their safety and security. The challenge for engineers is to maintain the safety and security of these large structures in the face of terrorism threats, natural disasters and long-term deterioration, as well as to meet the demands of emergency response times. With the significant negative impact that these threats can have on the structural environment, health monitoring of civil infrastructure holds promise as a way to provide information for near real-time condition assessment of the structure's safety and security. This information can be used to assess the integrity of the structure for rescue and recovery after earthquakes and terrorist attacks, and to safely and rapidly remove the debris and temporarily shore up specific structural elements. This information can also be used for identification of incipient damage in structures experiencing long-term deterioration. However, one of the major obstacles preventing sensor-based monitoring is the lack of reliable, easy-to-install, cost-effective and harsh environment resistant sensors that can be densely embedded into large-scale civil infrastructure systems. Nanotechnology and MEMS-based systems, which have matured in recent years, represent an innovative solution to current damage detection systems, leading to wireless, inexpensive, durable, compact, and high-density information collection. In this paper, ongoing research activities at Alabama A&M University (AAMU) Center for Transportation Infrastructure Safety and Security on the application of nanotechnology and MEMS to civil infrastructure for health monitoring will be presented. To date, research has shown that nanotechnology and MEMS-based systems can be used to wirelessly detect and monitor different damage mechanisms in concrete structures as well as monitor critical structures' stability during floods and barge impacts. However, some technical issues need to be addressed before full implementation of these new systems; these will also be discussed in this paper.

  7. An outdoor test facility for the large-scale production of microalgae

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, D.A.; Weissman, J.; Goebel, R.

    The goal of the US Department of Energy/Solar Energy Research Institute's Aquatic Species Program is to develop the technology base to produce liquid fuels from microalgae. This technology is being initially developed for the desert Southwest. As part of this program an outdoor test facility has been designed and constructed in Roswell, New Mexico. The site has a large existing infrastructure, a suitable climate, and abundant saline groundwater. This facility will be used to evaluate productivity of microalgae strains and conduct large-scale experiments to increase biomass productivity while decreasing production costs. Six 3-m² fiberglass raceways were constructed. Several microalgae strains were screened for growth, one of which had a short-term productivity rate of greater than 50 g dry wt m⁻² d⁻¹. Two large-scale, 0.1-ha raceways have also been built. These are being used to evaluate the performance trade-offs between low-cost earthen liners and higher cost plastic liners. A series of hydraulic measurements is also being carried out to evaluate future improved pond designs. Future plans include a 0.5-ha pond, which will be built in approximately 2 years to test a scaled-up system. This unique facility will be available to other researchers and industry for studies on microalgae productivity. 6 refs., 9 figs., 1 tab.

  8. The Hydrologic Implications Of Unique Urban Soil Horizon Sequencing On The Functions Of Passive Green Infrastructure

    EPA Science Inventory

    Green infrastructure represents a broad set of site- to landscape-scale practices that can be flexibly implemented to increase sewershed retention capacity, and can thereby improve on the management of water quantity and quality. Although much green infrastructure presents as for...

  9. Optimizing Estimates of Impervious Cover and Riparian Zone Condition in New England Watersheds: A Green Infrastructure Analysis.

    EPA Science Inventory

    Under EPA’s Green Infrastructure Initiative, a variety of research activities are underway to evaluate the effectiveness of green infrastructure in mitigating the effects of urbanization and stormwater impacts on stream biota and habitat. Effectiveness of both site-scale st...

  10. Watershed Scale Impacts of Stormwater Green Infrastructure on Hydrology, Nitrogen Fluxes, and Combined Sewer Overflows in the Baltimore, MD and Washington, DC area

    EPA Science Inventory

    Despite the increasing use of urban stormwater green infrastructure (SGI), including detention ponds and rain gardens, few studies have quantified the cumulative effects of multiple SGI projects on hydrology and water quality at the watershed scale. To assess the effects of SGI, ...

  11. Health Diagnosis of Major Transportation Infrastructures in Shanghai Metropolis Using High-Resolution Persistent Scatterer Interferometry

    PubMed Central

    Qin, Xiaoqiong; Yang, Tianliang; Yang, Mengshi; Zhang, Lu; Liao, Mingsheng

    2017-01-01

    Since the Persistent Scatterer Synthetic Aperture Radar (SAR) Interferometry (PSI) technology allows the detection of ground subsidence with millimeter accuracy, it is becoming one of the most powerful and economical means for health diagnosis of major transportation infrastructures. However, structures of different types may suffer from various levels of localized subsidence due to the different structural characteristics and subsidence mechanisms. Moreover, in the complex urban scenery, some segments of these infrastructures may be sheltered by surrounding buildings in SAR images, obscuring the desirable signals. Therefore, the subsidence characteristics on different types of structures should be discussed separately and the accuracy of persistent scatterers (PSs) should be optimized. In this study, the PSI-based subsidence mapping over the entire transportation network of Shanghai (more than 10,000 km) is illustrated, achieving city-wide monitoring specifically along the elevated roads, ground highways and underground subways. The precise geolocation and structural characteristics of infrastructures were combined to effectively guide more accurate identification and separation of PSs along the structures. The experimental results from two neighboring TerraSAR-X stacks from 2013 to 2016 were integrated by jointly estimating the measurements in the overlapping area, enabling large-scale subsidence mapping, and were validated against leveling data, showing high consistency in terms of subsidence velocities and time-series displacements. Spatial-temporal subsidence patterns on each type of infrastructures are strongly dependent on the operational durations and structural characteristics, as well as the variation of the foundation soil layers. PMID:29186039
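
    A small illustration of one step implied above (not the authors' PSI processing chain): estimating a linear subsidence velocity from a single persistent scatterer's displacement time series. The time sampling, trend, and noise level are assumptions for the sketch.

```python
# Fit a linear subsidence velocity to a synthetic PS displacement time series.
import numpy as np

t_years = np.linspace(0, 3, 37)          # ~monthly acquisitions over 2013-2016
disp_mm = -8.0 * t_years + np.random.default_rng(7).normal(0, 1.5, t_years.size)

velocity_mm_per_yr, offset = np.polyfit(t_years, disp_mm, 1)
print(round(velocity_mm_per_yr, 2))      # negative value indicates subsidence
```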

  12. Health Diagnosis of Major Transportation Infrastructures in Shanghai Metropolis Using High-Resolution Persistent Scatterer Interferometry.

    PubMed

    Qin, Xiaoqiong; Yang, Tianliang; Yang, Mengshi; Zhang, Lu; Liao, Mingsheng

    2017-11-29

    Since the Persistent Scatterer Synthetic Aperture Radar (SAR) Interferometry (PSI) technology allows the detection of ground subsidence with millimeter accuracy, it is becoming one of the most powerful and economical means for health diagnosis of major transportation infrastructures. However, structures of different types may suffer from various levels of localized subsidence due to the different structural characteristics and subsidence mechanisms. Moreover, in the complex urban scenery, some segments of these infrastructures may be sheltered by surrounding buildings in SAR images, obscuring the desirable signals. Therefore, the subsidence characteristics on different types of structures should be discussed separately and the accuracy of persistent scatterers (PSs) should be optimized. In this study, the PSI-based subsidence mapping over the entire transportation network of Shanghai (more than 10,000 km) is illustrated, achieving city-wide monitoring specifically along the elevated roads, ground highways and underground subways. The precise geolocation and structural characteristics of infrastructures were combined to effectively guide more accurate identification and separation of PSs along the structures. The experimental results from two neighboring TerraSAR-X stacks from 2013 to 2016 were integrated by jointly estimating the measurements in the overlapping area, enabling large-scale subsidence mapping, and were validated against leveling data, showing high consistency in terms of subsidence velocities and time-series displacements. Spatial-temporal subsidence patterns on each type of infrastructures are strongly dependent on the operational durations and structural characteristics, as well as the variation of the foundation soil layers.

  13. Evolution of Scaling Emergence in Large-Scale Spatial Epidemic Spreading

    PubMed Central

    Wang, Lin; Li, Xiang; Zhang, Yi-Qing; Zhang, Yan; Zhang, Kan

    2011-01-01

    Background Zipf's law and Heaps' law are two representatives of the scaling concepts, which play a significant role in the study of complexity science. The coexistence of the Zipf's law and the Heaps' law motivates different understandings of the dependence between these two scalings, which has hardly been clarified. Methodology/Principal Findings In this article, we observe an evolution process of the scalings: Zipf's law and Heaps' law naturally coexist at early times, while a crossover emerges as the two become inconsistent at later times, before the system reaches a stable state in which Heaps' law persists while strict Zipf's law disappears. Such findings are illustrated with a scenario of large-scale spatial epidemic spreading, and empirical results on pandemic disease support a universal analysis of the relation between the two laws regardless of the biological details of disease. Employing the United States domestic air transportation and demographic data to construct a metapopulation model for simulating the pandemic spread at the U.S. country level, we uncover that the broad heterogeneity of the infrastructure plays a key role in the evolution of scaling emergence. Conclusions/Significance The analyses of large-scale spatial epidemic spreading help us understand the temporal evolution of scalings, indicating that the coexistence of Zipf's law and Heaps' law depends on the collective dynamics of epidemic processes, and the heterogeneity of epidemic spread indicates the significance of performing targeted containment strategies at the early stage of a pandemic disease. PMID:21747932
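
    The two scalings discussed above can be measured from event data in a few lines. The sketch below uses a synthetic stream of location identifiers and is illustrative only; it is not the authors' metapopulation model.

```python
# Measure Zipf's rank-frequency curve and Heaps' vocabulary growth from events.
from collections import Counter
import numpy as np

rng = np.random.default_rng(3)
events = rng.zipf(2.0, 10000)    # synthetic stream of "infected location" IDs

# Zipf: frequency versus rank
freqs = np.array(sorted(Counter(events).values(), reverse=True))
print(freqs[:5])

# Heaps: number of distinct locations seen after n events
seen, heaps = set(), []
for loc in events:
    seen.add(loc)
    heaps.append(len(seen))
print(heaps[99], heaps[999], heaps[9999])
```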

  14. Opportunities and challenges of big data for the social sciences: The case of genomic data.

    PubMed

    Liu, Hexuan; Guo, Guang

    2016-09-01

    In this paper, we draw attention to one unique and valuable source of big data, genomic data, by demonstrating the opportunities they provide to social scientists. We discuss different types of large-scale genomic data and recent advances in statistical methods and computational infrastructure used to address challenges in managing and analyzing such data. We highlight how these data and methods can be used to benefit social science research. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Renewable Energy Zone (REZ) Transmission Planning Process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Nathan

    A REZ is a geographical area that enables the development of profitable, cost-effective, grid-connected renewable energy (RE). The REZ Transmission Planning Process is a proactive approach to planning, approving, and building transmission infrastructure that connects REZs to the power system. It helps to increase the share of solar, wind and other RE resources in the power system while maintaining reliability and economics, and focuses on large-scale wind and solar resources that can be developed in sufficient quantities to warrant transmission system expansion and upgrades.

  16. Integrating complexity into data-driven multi-hazard supply chain network strategies

    USGS Publications Warehouse

    Long, Suzanna K.; Shoberg, Thomas G.; Ramachandran, Varun; Corns, Steven M.; Carlo, Hector J.

    2013-01-01

    Major strategies in the wake of a large-scale disaster have focused on short-term emergency response solutions. Few consider medium-to-long-term restoration strategies that reconnect urban areas to the national supply chain networks (SCN) and their supporting infrastructure. To re-establish this connectivity, the relationships within the SCN must be defined and formulated as a model of a complex adaptive system (CAS). A CAS model is a representation of a system that consists of large numbers of inter-connections, demonstrates non-linear behaviors and emergent properties, and responds to stimuli from its environment. CAS modeling is an effective method of managing complexities associated with SCN restoration after large-scale disasters. In order to populate the data space, large data sets are required. Currently, access to these data is hampered by proprietary restrictions. The aim of this paper is to identify the data required to build an SCN restoration model, look at the inherent problems associated with these data, and understand the complexity that arises due to integration of these data.

  17. Estimating harvested rainwater at greenhouses in south Portugal aquifer Campina de Faro for potential infiltration in Managed Aquifer Recharge.

    NASA Astrophysics Data System (ADS)

    Costa, Luís; Monteiro, José Paulo; Leitão, Teresa; Lobo-Ferreira, João Paulo; Oliveira, Manuel; Martins de Carvalho, José; Martins de Carvalho, Tiago; Agostinho, Rui

    2015-04-01

    The Campina de Faro (CF) aquifer system, located on the south coast of Portugal, is an important source of groundwater, mostly used for agriculture purposes. In some areas, this multi-layered aquifer is contaminated with high concentrations of nitrates, possibly arising from excessive usage of fertilizers, reaching values as high as 300 mg/L. In order to tackle this problem, Managed Aquifer Recharge (MAR) techniques are being applied at demonstration scale to improve groundwater quality through aquifer recharge, in both infiltration basins in the riverbed of the ephemeral river Rio Seco and existing traditional large-diameter wells located in this aquifer. In order to assess the infiltration capacity of the existing infrastructures, in particular the infiltration basins and large-diameter wells at the CF aquifer, infiltration tests were performed, indicating a high infiltration capacity. Concerning the sources of water for recharge, harvested rainwater at greenhouses was identified in the CF aquifer area as one of the main potential sources for aquifer recharge, since a large surface area at the demo site is occupied by these structures. This potential source of water could, in some cases, be redirected to the large-diameter wells or to the infiltration basins in the riverbed of Rio Seco. Estimates of rainwater harvested at greenhouses were calculated based on a 32-year average rainfall model and on the location of the greenhouses and their surface areas, the latter based on aerial photographs. The potential annual rainwater intercepted by greenhouses at the CF aquifer amounts to an average of 1.63 hm³/year. Nonetheless, it is unlikely that the totality of this amount can be harvested, collected and redirected to aquifer recharge infrastructures, for several reasons, such as the lack of appropriate greenhouse infrastructure, conduits, or proximity between greenhouses and the large-diameter wells and infiltration basins. Nevertheless, this value is a good indication of the total amount of harvested rainfall that could be considered for future MAR solutions. Given the estimates of greenhouse-harvested rainwater and the infiltration capacity of the infiltration basins and large-diameter wells, it is intended to develop groundwater flow models in order to assess the nitrate washing rate in the CF aquifer. This work is being developed under the scope of the MARSOL Project (MARSOL-GA-2013-619120), in which the Campina de Faro aquifer system is one of several case studies. This project aims to demonstrate that MAR is a sound, safe and sustainable strategy that can be applied with great confidence in finding solutions to water scarcity in Southern Europe.
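
    The greenhouse-harvest estimate above is essentially covered area times rainfall depth times a collection efficiency. The sketch below shows that arithmetic with illustrative values that are assumptions, not the MARSOL figures.

```python
# Back-of-the-envelope greenhouse rainwater harvest estimate (illustrative values).
greenhouse_area_m2 = 3.0e6          # assumed total covered area
annual_rainfall_m = 0.5             # assumed long-term average rainfall depth
runoff_coefficient = 0.9            # assumed fraction of rainfall actually collectable

harvest_m3 = greenhouse_area_m2 * annual_rainfall_m * runoff_coefficient
print(f"{harvest_m3 / 1e6:.2f} hm3/year")   # 1 hm3 = 1e6 m3
```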

  18. What do the data show? Fostering physical intuition with ClimateBits and NASA Earth Observations

    NASA Astrophysics Data System (ADS)

    Schollaert Uz, S.; Ward, K.

    2017-12-01

    Through data visualizations using global satellite imagery available in NASA Earth Observations (NEO), we explain Earth science concepts (e.g. albedo, urban heat island effect, phytoplankton). We also provide examples of ways to explore the satellite data in NEO within a new blog series. This is an ideal tool for scientists and non-scientists alike who want to quickly check satellite imagery for large scale features or patterns. NEO analysis requires no software or plug-ins; only a browser and an internet connection. You can even check imagery and perform simple analyses from your smart phone. NEO can be used to create graphics for presentations and papers or as a first step before acquiring data for more rigorous analysis. NEO has potential application to easily explore large scale environmental and climate patterns that impact operations and infrastructure. This is something we are currently exploring with end user groups.

  19. Big Data Analytics for Genomic Medicine

    PubMed Central

    He, Karen Y.; Ge, Dongliang; He, Max M.

    2017-01-01

    Genomic medicine attempts to build individualized strategies for diagnostic or therapeutic decision-making by utilizing patients’ genomic information. Big Data analytics uncovers hidden patterns, unknown correlations, and other insights through examining various large-scale data sets. While integration and manipulation of diverse genomic data and comprehensive electronic health records (EHRs) on a Big Data infrastructure exhibit challenges, they also provide a feasible opportunity to develop an efficient and effective approach to identify clinically actionable genetic variants for individualized diagnosis and therapy. In this paper, we review the challenges of manipulating large-scale next-generation sequencing (NGS) data and diverse clinical data derived from the EHRs for genomic medicine. We introduce possible solutions for different challenges in manipulating, managing, and analyzing genomic and clinical data to implement genomic medicine. Additionally, we also present a practical Big Data toolset for identifying clinically actionable genetic variants using high-throughput NGS data and EHRs. PMID:28212287
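
    One common step in pipelines of the kind reviewed above is filtering variant calls against a list of clinically relevant genes. The sketch below is a minimal, hedged illustration over two hard-coded VCF-style lines; the gene list, INFO field layout, and quality threshold are assumptions, not the authors' toolset.

```python
# Filter VCF-style variant lines by gene list, filter status, and quality.
ACTIONABLE_GENES = {"BRCA1", "BRCA2", "TP53"}   # illustrative gene list

vcf_lines = [
    "1\t155235000\t.\tG\tA\t88\tPASS\tGENE=BRCA1;DP=40",
    "7\t140753336\t.\tA\tT\t15\tq10\tGENE=BRAF;DP=8",
]

def actionable(line, min_qual=30):
    chrom, pos, _, ref, alt, qual, flt, info = line.split("\t")
    gene = dict(kv.split("=") for kv in info.split(";")).get("GENE", "")
    return flt == "PASS" and float(qual) >= min_qual and gene in ACTIONABLE_GENES

print([l for l in vcf_lines if actionable(l)])  # keeps only the BRCA1 variant here
```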

  20. Measuring Large-Scale Social Networks with High Resolution

    PubMed Central

    Stopczynski, Arkadiusz; Sekara, Vedran; Sapiezynski, Piotr; Cuttone, Andrea; Madsen, Mette My; Larsen, Jakob Eg; Lehmann, Sune

    2014-01-01

    This paper describes the deployment of a large-scale study designed to measure human interactions across a variety of communication channels, with high temporal resolution and spanning multiple years—the Copenhagen Networks Study. Specifically, we collect data on face-to-face interactions, telecommunication, social networks, location, and background information (personality, demographics, health, politics) for a densely connected population of 1 000 individuals, using state-of-the-art smartphones as social sensors. Here we provide an overview of the related work and describe the motivation and research agenda driving the study. Additionally, the paper details the data-types measured, and the technical infrastructure in terms of both backend and phone software, as well as an outline of the deployment procedures. We document the participant privacy procedures and their underlying principles. The paper is concluded with early results from data analysis, illustrating the importance of multi-channel high-resolution approach to data collection. PMID:24770359

  1. Big Data Analytics for Genomic Medicine.

    PubMed

    He, Karen Y; Ge, Dongliang; He, Max M

    2017-02-15

    Genomic medicine attempts to build individualized strategies for diagnostic or therapeutic decision-making by utilizing patients' genomic information. Big Data analytics uncovers hidden patterns, unknown correlations, and other insights through examining various large-scale data sets. While integration and manipulation of diverse genomic data and comprehensive electronic health records (EHRs) on a Big Data infrastructure exhibit challenges, they also provide a feasible opportunity to develop an efficient and effective approach to identify clinically actionable genetic variants for individualized diagnosis and therapy. In this paper, we review the challenges of manipulating large-scale next-generation sequencing (NGS) data and diverse clinical data derived from the EHRs for genomic medicine. We introduce possible solutions for different challenges in manipulating, managing, and analyzing genomic and clinical data to implement genomic medicine. Additionally, we also present a practical Big Data toolset for identifying clinically actionable genetic variants using high-throughput NGS data and EHRs.

  2. Linear infrastructure impacts on landscape hydrology.

    PubMed

    Raiter, Keren G; Prober, Suzanne M; Possingham, Hugh P; Westcott, Fiona; Hobbs, Richard J

    2018-01-15

    The extent of roads and other forms of linear infrastructure is burgeoning worldwide, but their impacts are inadequately understood and thus poorly mitigated. Previous studies have identified many potential impacts, including alterations to the hydrological functions and soil processes upon which ecosystems depend. However, these impacts have seldom been quantified at a regional level, particularly in arid and semi-arid systems where the gap in knowledge is the greatest, and impacts potentially the most severe. To explore the effects of extensive track, road, and rail networks on surface hydrology at a regional level, we assessed over 1000 km of linear infrastructure, including approx. 300 locations where ephemeral streams crossed linear infrastructure, in the largely intact landscapes of Australia's Great Western Woodlands. We found a high level of association between linear infrastructure and altered surface hydrology, with erosion and pooling respectively 5 and 6 times more likely to occur on-road than off-road on average (1.06 erosional and 0.69 pooling features km⁻¹ on vehicle tracks, compared with 0.22 and 0.12 km⁻¹ off-road, respectively). Erosion severity was greater in the presence of tracks, and 98% of crossings of ephemeral streamlines showed some evidence of impact on water movement (flow impedance (62%); diversion of flows (73%); flow concentration (76%); and/or channel initiation (31%)). Infrastructure type, pastoral land use, culvert presence, soil clay content and erodibility, mean annual rainfall, rainfall erosivity, topography and bare soil cover influenced the frequency and severity of these impacts. We conclude that linear infrastructure frequently affects ephemeral stream flows and intercepts natural overland and near-surface flows, artificially changing site-scale moisture regimes, with some parts of the landscape becoming abnormally wet and other parts becoming water-starved. In addition, linear infrastructure frequently triggers or exacerbates erosion, leading to soil loss and degradation. Where linear infrastructure densities are high, their impacts on ecological processes are likely to be considerable. Linear infrastructure is widespread across much of this relatively intact region, but there remain areas with very low infrastructure densities that need to be protected from further impacts. There is substantial scope for mitigating the impacts of existing and planned infrastructure developments. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Repurposing Mass-produced Internal Combustion Engines Quantifying the Value and Use of Low-cost Internal Combustion Piston Engines for Modular Applications in Energy and Chemical Engineering Industries

    NASA Astrophysics Data System (ADS)

    L'Heureux, Zara E.

    This thesis proposes that internal combustion piston engines can help clear the way for a transformation in the energy, chemical, and refining industries that is akin to the transition computer technology experienced with the shift from large mainframes to small personal computers and large farms of individually small, modular processing units. This thesis provides a mathematical foundation, multi-dimensional optimizations, experimental results, an engine model, and a techno-economic assessment, all working towards quantifying the value of repurposing internal combustion piston engines for new applications in modular, small-scale technologies, particularly for energy and chemical engineering systems. Many chemical engineering and power generation industries have focused on increasing individual unit sizes and centralizing production. This "bigger is better" concept makes it difficult to evolve and incorporate change. Large systems are often designed with long lifetimes, incorporate innovation slowly, and necessitate high upfront investment costs. Breaking away from this cycle is essential for promoting change, especially change happening quickly in the energy and chemical engineering industries. The ability to evolve during a system's lifetime provides a competitive advantage in a field dominated by large and often very old equipment that cannot respond to technology change. This thesis specifically highlights the value of small, mass-manufactured internal combustion piston engines retrofitted to participate in non-automotive system designs. The applications are unconventional and stem first from the observation that, when normalized by power output, internal combustion engines are one hundred times less expensive than conventional, large power plants. This cost disparity motivated a look at scaling laws to determine if scaling across both individual unit size and number of units produced would predict the two order of magnitude difference seen here. For the first time, this thesis provides a mathematical analysis of scaling with a combination of both changing individual unit size and varying the total number of units produced. Different paths to meet a particular cumulative capacity are analyzed and show that total costs are path dependent and vary as a function of the unit size and number of units produced. The path dependence identified is fairly weak, however, and for all practical applications, the underlying scaling laws seem unaffected. This analysis continues to support the interest in pursuing designs built around small, modular infrastructure. Building on the observation that internal combustion engines are an inexpensive power-producing unit, the first optimization in this thesis focuses on quantifying the value of engine capacity committing to deliver power in the day-ahead electricity and reserve markets, specifically based on pricing from the New York Independent System Operator (NYISO). An optimization was written in Python to determine, based on engine cost, fuel cost, engine wear, engine lifetime, and electricity prices, when and how much of an engine's power should be committed to a particular energy market. The optimization aimed to maximize profit for the engine and generator (engine genset) system acting as a price-taker. The result is an annual profit on the order of $30 per kilowatt. The most value in the engine genset is in its commitments to the spinning reserve market, where power is often committed but not always called on to deliver.
    This analysis highlights the benefits of modularity in energy generation and provides one example where the system is so inexpensive and short-lived that the optimization treats the engine replacement cost as a consumable operating expense rather than a capital cost. Having the opportunity to incorporate incremental technological improvements in a system's infrastructure throughout its lifetime allows new technology with higher efficiencies and better designs to be introduced. An alternative to traditionally large infrastructure, which locks in a design and today's state-of-the-art technology for the next 50-70 years, is a system designed to incorporate new technology in a modular fashion. The modular engine genset system used for power generation is one example of how this works in practice.

    The largest single component of this thesis is modeling, designing, retrofitting, and testing a reciprocating piston engine used as a compressor. Motivated again by the low cost of an internal combustion engine, this work looks at how an engine (which is, in its conventional form, essentially a reciprocating compressor) can be cost-effectively retrofitted to perform as a small-scale gas compressor. In the laboratory, an engine compressor was built by retrofitting a one-cylinder, 79 cc engine. Various retrofitting techniques were incorporated into the system design, and the engine compressor performance was quantified in each iteration. Because the retrofitted engine is now a power consumer rather than a power producer, the engine compressor is driven in the laboratory with an electric motor. Experimentally, compressed air engine exhaust (starting at elevated inlet pressures) surpassed 650 psia (about 45 bar), which makes this system attractive for many applications in the chemical engineering and refining industries.

    A model of the engine compressor system was written in Python and incorporates experimentally derived parameters to quantify gas leakage, engine friction, and flow (including backflow) through valves. The model as a whole was calibrated and verified with experimental data and is used to explore engine retrofits beyond what was tested in the laboratory. Along with the experimental and modeling work, a techno-economic assessment is included to compare the engine compressor system with state-of-the-art, commercially available compressors. Included in the financial analysis is a case study where an engine compressor system is modeled to achieve specific compression needs. The result of the assessment is that, indeed, the low engine cost, even with the necessary retrofits, provides a cost advantage over incumbent compression technologies.

    Lastly, this thesis provides an algorithm and case study for another application of small-scale units in energy infrastructure, specifically in energy storage. This study quantifies the value of small-scale, onsite energy storage in shaving peak power demands, using university-level power demands as the case study. The analysis finds that, because peak power is so costly, even small amounts of energy storage, when dispatched optimally, can provide significant cost reductions. This provides another example of the value of small-scale implementations, particularly in energy infrastructure. While the study focuses on flywheels and batteries as the energy storage medium, engine gensets could also be used to deliver power and shave peak power demands.
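
    The peak-shaving idea can be sketched in the same spirit (again, this is not the dispatch algorithm used in the thesis): given an hourly demand profile and a small storage unit, find the lowest billed peak the storage can hold. The demand profile and storage sizes below are hypothetical, recharge timing and losses are ignored, and one-hour time steps are assumed:

```python
# Illustrative peak-shaving sketch (not the thesis dispatch algorithm): find the
# lowest demand peak a small storage unit can hold, ignoring recharge timing and
# losses.  Storage sizes and the demand profile are hypothetical.

def lowest_peak(demand_kw, energy_kwh, power_kw, tol=0.01):
    """Binary-search the smallest peak (kW) reachable with the given storage."""
    lo, hi = 0.0, max(demand_kw)

    def feasible(cap):
        shave = [max(0.0, d - cap) for d in demand_kw]
        return max(shave) <= power_kw and sum(shave) <= energy_kwh  # 1 h steps

    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        lo, hi = (lo, mid) if feasible(mid) else (mid, hi)
    return hi

if __name__ == "__main__":
    demand = [310, 420, 480, 455, 400, 320]          # hourly campus load, kW
    print(round(lowest_peak(demand, energy_kwh=60, power_kw=40), 1))
```
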
    The overarching goal of this thesis is to introduce small-scale, modular infrastructure, with a particular focus on the opportunity to retrofit and repurpose inexpensive, mass-manufactured internal combustion engines in new and unconventional applications. The modeling and experimental work presented in this dissertation shows compelling results for engines incorporated into both energy generation infrastructure and chemical engineering industries via compression technologies. The low engine cost provides an opportunity to add retrofits whilst remaining cost competitive with the incumbent technology. This work supports the claim that modular infrastructure, built on the indivisible unit of an internal combustion engine, can revolutionize many industries by providing a low-cost mechanism for rapid change and promoting small-scale designs.

  4. A bottom-up approach to identifying the maximum operational adaptive capacity of water resource systems to a changing climate

    NASA Astrophysics Data System (ADS)

    Culley, S.; Noble, S.; Yates, A.; Timbs, M.; Westra, S.; Maier, H. R.; Giuliani, M.; Castelletti, A.

    2016-09-01

    Many water resource systems have been designed assuming that the statistical characteristics of future inflows are similar to those of the historical record. This assumption is no longer valid due to large-scale changes in the global climate, potentially causing declines in water resource system performance, or even complete system failure. Upgrading system infrastructure to cope with climate change can require substantial financial outlay, so it might be preferable to optimize existing system performance when possible. This paper builds on decision scaling theory by proposing a bottom-up approach to designing optimal feedback control policies for a water system exposed to a changing climate. This approach not only describes optimal operational policies for a range of potential climatic changes but also enables an assessment of the upper limit of a system's operational adaptive capacity, beyond which upgrades to infrastructure become unavoidable. The approach is illustrated using the Lake Como system in Northern Italy—a regulated system with a complex relationship between climate and system performance. By optimizing system operation under different hydrometeorological states, it is shown that the system can continue to meet its minimum performance requirements for more than three times as many states as it can under current operations. Importantly, a single management policy, no matter how robust, cannot fully utilize existing infrastructure as effectively as an ensemble of flexible management policies that are updated as the climate changes.
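
    A toy sketch of the bottom-up scan described in this abstract (not the Lake Como model; the demand, performance metric, and threshold are invented for illustration) compares a release rule tuned to the historical climate against rules re-optimized for each perturbed climate state, and counts the states that still meet a minimum performance level:

```python
# Toy sketch of the bottom-up scan (not the Lake Como model): for each perturbed
# climate state, a release rule re-optimized for that state is compared against a
# rule tuned to the historical climate, counting states that still meet a minimum
# performance level.  Demand, the performance metric, and the threshold are invented.

DEMAND, THRESHOLD = 1.0, 0.8

def performance(rule, inflow_scale):
    # Release should track demand: shortage and excess release (downstream
    # flooding) are both penalized in this crude single-number metric.
    release = rule * inflow_scale
    return max(0.0, 1.0 - abs(release - DEMAND))

def best_rule(inflow_scale):
    candidates = [f / 20 for f in range(1, 41)]          # release rules 0.05..2.0
    return max(candidates, key=lambda r: performance(r, inflow_scale))

baseline = best_rule(1.0)                                # tuned to historical inflows
states = [0.5 + 0.05 * i for i in range(15)]             # perturbed inflow scalings

ok_baseline = sum(performance(baseline, s) >= THRESHOLD for s in states)
ok_adapted = sum(performance(best_rule(s), s) >= THRESHOLD for s in states)
print(ok_baseline, ok_adapted)   # re-optimized rules cover more climate states
```
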

  5. Building Nationally-Focussed, Globally Federated, High Performance Earth Science Platforms to Solve Next Generation Social and Economic Issues.

    NASA Astrophysics Data System (ADS)

    Wyborn, Lesley; Evans, Ben; Foster, Clinton; Pugh, Timothy; Uhlherr, Alfred

    2015-04-01

    Digital geoscience data and information are integral to informing decisions on the social, economic and environmental management of natural resources. Traditionally, such decisions were focused on regional or national viewpoints only, but it is increasingly being recognised that global perspectives are required to meet new challenges such as predicting impacts of climate change; sustainably exploiting scarce water, mineral and energy resources; and protecting our communities through better prediction of the behaviour of natural hazards. In recent years, technical advances in scientific instruments have resulted in a surge in data volumes, with data now being collected at unprecedented rates and at ever increasing resolutions. The size of many earth science data sets now exceeds the capacity of many government and academic organisations to locally store and dynamically access the data sets; to internally process and analyse them to high resolutions; and then to deliver them online to clients, partners and stakeholders. Fortunately, at the same time, computational capacities have commensurately increased (both cloud and HPC): these can now provide the capability to effectively access the ever-growing data assets within realistic time frames. However, to achieve this, data and computing need to be co-located: bandwidth limits the capacity to move the large data sets; the data transfers are too slow; and latencies to access them are too high. These scenarios are driving the move towards more centralised High Performance (HP) infrastructures. The rapidly increasing scale of data and the growing complexity of software and hardware environments, combined with the energy costs of running such infrastructures, create a compelling economic argument for having just one or two major national (or continental) HP facilities that can be federated internationally to enable earth and environmental issues to be tackled at global scales. But at the same time, if properly constructed, these infrastructures can also service very small-scale research projects. The National Computational Infrastructure (NCI) at the Australian National University (ANU) has built such an HP infrastructure as part of the Australian Government's National Collaborative Research Infrastructure Strategy. NCI operates as a formal partnership between the ANU and the three major Australian national government scientific agencies: the Commonwealth Scientific and Industrial Research Organisation (CSIRO), the Bureau of Meteorology and Geoscience Australia. The government partners agreed to explore the new opportunities offered within the partnership with NCI, rather than each running their own separate agenda independently. The data from these national agencies, as well as from collaborating overseas organisations (e.g., NASA, NOAA, USGS and CMIP), are either replicated to, or produced at, NCI. By co-locating and harmonising these vast data collections within the integrated HP computing environments at NCI, new opportunities have arisen for data-intensive interdisciplinary science at scales and resolutions not hitherto possible. The new NCI infrastructure has also enabled the blending of research by the university sector with the more operational business of government science agencies, with the fundamental shift being that researchers from both sectors work and collaborate within a federated data and computational environment that contains both national and international data collections.

  6. The NCI High Performance Computing (HPC) and High Performance Data (HPD) Platform to Support the Analysis of Petascale Environmental Data Collections

    NASA Astrophysics Data System (ADS)

    Evans, B. J. K.; Pugh, T.; Wyborn, L. A.; Porter, D.; Allen, C.; Smillie, J.; Antony, J.; Trenham, C.; Evans, B. J.; Beckett, D.; Erwin, T.; King, E.; Hodge, J.; Woodcock, R.; Fraser, R.; Lescinsky, D. T.

    2014-12-01

    The National Computational Infrastructure (NCI) has co-located a priority set of national data assets within an HPC research platform. This powerful in-situ computational platform has been created to help serve and analyse the massive amounts of data across the spectrum of environmental collections - in particular the climate, observational data and geoscientific domains. This paper examines the infrastructure, innovation and opportunity for this significant research platform. NCI currently manages nationally significant data collections (10+ PB) categorised as 1) earth system sciences, climate and weather model data assets and products, 2) earth and marine observations and products, 3) geosciences, 4) terrestrial ecosystem, 5) water management and hydrology, and 6) astronomy, social science and biosciences. The data is largely sourced from the NCI partners (who include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. By co-locating these large, valuable data assets and harmonising the data collections, new opportunities have arisen, making a powerful transdisciplinary research platform. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class, 3000-core OpenStack cloud system and several highly connected, large-scale, high-bandwidth Lustre filesystems. New scientific software, cloud-scale techniques, server-side visualisation and data services have been harnessed and integrated into the platform, so that analysis is performed seamlessly across the traditional boundaries of the underlying data domains. Characterisation of the techniques along with performance profiling ensures scalability of each software component, all of which can either be enhanced or replaced through future improvements. A Development-to-Operations (DevOps) framework has also been implemented to manage the scale of the software complexity. This ensures that software is both upgradable and maintainable, can be readily reused within complex integrated systems, and can become part of the growing set of globally trusted community tools for cross-disciplinary research.

  7. Using Distributed Fiber Optic Sensing to Monitor Large Scale Permafrost Transitions: Preliminary Results from a Controlled Thaw Experiment

    NASA Astrophysics Data System (ADS)

    Ajo Franklin, J. B.; Wagner, A. M.; Lindsey, N.; Dou, S.; Bjella, K.; Daley, T. M.; Freifeld, B. M.; Ulrich, C.; Gelvin, A.; Morales, A.; James, S. R.; Saari, S.; Ekblaw, I.; Wood, T.; Robertson, M.; Martin, E. R.

    2016-12-01

    In a warming world, permafrost landscapes are being rapidly transformed by thaw, yielding surface subsidence and groundwater flow alteration. The same transformations pose a threat to arctic infrastructure and can induce catastrophic failure of the roads, runways, and pipelines on which human habitation depends. Scalable solutions for monitoring permafrost thaw dynamics are required both to quantitatively understand biogeochemical feedbacks and to protect built infrastructure from damage. Unfortunately, permafrost alteration happens over the time scale of climate change, years to decades, a decided challenge for testing new sensing technologies in a limited context. One solution is to engineer systems capable of rapidly thawing large permafrost units to allow short-duration experiments targeting next-generation sensing approaches. We present preliminary results from a large-scale controlled permafrost thaw experiment designed to evaluate the utility of different geophysical approaches for tracking the cause, precursors, and early phases of thaw subsidence. We focus on the use of distributed fiber optic sensing for this challenge and deployed distributed temperature (DTS), strain (DSS), and acoustic (DAS) sensing systems in a 2D array to detect thaw signatures. A 10 x 15 x 1 m section of subsurface permafrost was heated using an array of 120 downhole heaters (60 W) at an experimental site near Fairbanks, AK. Ambient noise analysis of DAS datasets collected at the plot, coupled to shear wave inversion, was utilized to evaluate changes in shear wave velocity associated with heating and thaw. These measurements were confirmed by seismic surveys collected using a semi-permanent orbital seismic source activated on a daily basis. Fiber optic measurements were complemented by subsurface thermistor and thermocouple arrays, time-lapse total station surveys, LIDAR, secondary seismic measurements (geophone and broadband recordings), time-lapse ERT, borehole NMR, soil moisture measurements, hydrologic measurements, and multi-angle photogrammetry. This unusually dense combination of measurement techniques provides an excellent opportunity to characterize the geophysical signatures of permafrost thaw in a controlled environment.
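
    The ambient-noise idea mentioned above can be sketched with synthetic data (this is not the processing chain used in the experiment; the sampling rate, sensor spacing, and velocities are assumed): cross-correlating noise recorded on two sensors recovers the inter-sensor travel time, so a thaw-induced drop in shear velocity shows up as a longer lag:

```python
# Minimal ambient-noise interferometry sketch (not the experiment's processing
# chain): cross-correlating noise recorded on two sensors recovers the travel time
# between them, so a slowdown from thaw appears as a growing lag.  Synthetic data.

import numpy as np

FS = 500.0                      # sampling rate, Hz (assumed)
SPACING_M = 10.0                # sensor separation (assumed)

def apparent_velocity(true_velocity_mps, n_samples=20000, seed=0):
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(n_samples)
    delay = int(round(SPACING_M / true_velocity_mps * FS))
    trace_a = noise
    trace_b = np.roll(noise, delay)              # same wavefield, arriving later
    xcorr = np.correlate(trace_b, trace_a, mode="full")
    lag = np.argmax(xcorr) - (n_samples - 1)     # delay between sensors, in samples
    return SPACING_M / (lag / FS)

print(apparent_velocity(250.0))   # frozen ground (hypothetical Vs)
print(apparent_velocity(150.0))   # thawed ground: longer lag, lower velocity
```
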

  8. Aeroelastic Stability Investigations for Large-scale Vertical Axis Wind Turbines

    NASA Astrophysics Data System (ADS)

    Owens, B. C.; Griffith, D. T.

    2014-06-01

    The availability of offshore wind resources in coastal regions, along with a high concentration of load centers in these areas, makes offshore wind energy an attractive opportunity for clean renewable electricity production. High infrastructure costs such as the offshore support structure and operation and maintenance costs for offshore wind technology, however, are significant obstacles that need to be overcome to make offshore wind a more cost-effective option. A vertical-axis wind turbine (VAWT) rotor configuration offers a potential transformative technology solution that significantly lowers cost of energy for offshore wind due to its inherent advantages for the offshore market. However, several potential challenges exist for VAWTs and this paper addresses one of them with an initial investigation of dynamic aeroelastic stability for large-scale, multi-megawatt VAWTs. The aeroelastic formulation and solution method from the BLade Aeroelastic STability Tool (BLAST) for HAWT blades was employed to extend the analysis capability of a newly developed structural dynamics design tool for VAWTs. This investigation considers the effect of configuration geometry, material system choice, and number of blades on the aeroelastic stability of a VAWT, and provides an initial scoping for potential aeroelastic instabilities in large-scale VAWT designs.

  9. Challenges in scaling up biofuels infrastructure.

    PubMed

    Richard, Tom L

    2010-08-13

    Rapid growth in demand for lignocellulosic bioenergy will require major changes in supply chain infrastructure. Even with densification and preprocessing, transport volumes by mid-century are likely to exceed the combined capacity of current agricultural and energy supply chains, including grain, petroleum, and coal. Efficient supply chains can be achieved through decentralized conversion processes that facilitate local sourcing, satellite preprocessing and densification for long-distance transport, and business models that reward biomass growers both nearby and afar. Integrated systems that are cost-effective and energy-efficient will require new ways of thinking about agriculture, energy infrastructure, and rural economic development. Implementing these integrated systems will require innovation and investment in novel technologies, efficient value chains, and socioeconomic and policy frameworks; all are needed to support an expanded biofuels infrastructure that can meet the challenges of scale.

  10. Using the infrastructure of a conditional cash transfer program to deliver a scalable integrated early child development program in Colombia: cluster randomized controlled trial.

    PubMed

    Attanasio, Orazio P; Fernández, Camila; Fitzsimons, Emla O A; Grantham-McGregor, Sally M; Meghir, Costas; Rubio-Codina, Marta

    2014-09-29

    To assess the effectiveness of an integrated early child development intervention, combining stimulation and micronutrient supplementation and delivered on a large scale in Colombia, for children's development, growth, and hemoglobin levels. Cluster randomized controlled trial, using a 2 × 2 factorial design, with municipalities assigned to one of four groups: psychosocial stimulation, micronutrient supplementation, combined intervention, or control. 96 municipalities in Colombia, located across eight of its 32 departments. 1420 children aged 12-24 months and their primary carers. Psychosocial stimulation (weekly home visits with play demonstrations), micronutrient sprinkles given daily, and both combined. All delivered by female community leaders for 18 months. Cognitive, receptive and expressive language, and fine and gross motor scores on the Bayley scales of infant development-III; height, weight, and hemoglobin levels measured at the baseline and end of intervention. Stimulation improved cognitive scores (adjusted for age, sex, testers, and baseline levels of outcomes) by 0.26 of a standard deviation (P=0.002). Stimulation also increased receptive language by 0.22 of a standard deviation (P=0.032). Micronutrient supplementation had no significant effect on any outcome and there was no interaction between the interventions. No intervention affected height, weight, or hemoglobin levels. Using the infrastructure of a national welfare program we implemented the integrated early child development intervention on a large scale and showed its potential for improving children's cognitive development. We found no effect of supplementation on developmental or health outcomes. Moreover, supplementation did not interact with stimulation. The implementation model for delivering stimulation suggests that it may serve as a promising blueprint for future policy on early childhood development. Trial registration: Current Controlled Trials ISRCTN18991160. © Attanasio et al 2014.

  11. PLANNING AND RESPONSE IN THE AFTERMATH OF A LARGE CRISIS: AN AGENT-BASED INFORMATICS FRAMEWORK*

    PubMed Central

    Barrett, Christopher; Bisset, Keith; Chandan, Shridhar; Chen, Jiangzhuo; Chungbaek, Youngyun; Eubank, Stephen; Evrenosoğlu, Yaman; Lewis, Bryan; Lum, Kristian; Marathe, Achla; Marathe, Madhav; Mortveit, Henning; Parikh, Nidhi; Phadke, Arun; Reed, Jeffrey; Rivers, Caitlin; Saha, Sudip; Stretz, Paula; Swarup, Samarth; Thorp, James; Vullikanti, Anil; Xie, Dawen

    2014-01-01

    We present a synthetic information and modeling environment that can allow policy makers to study various counter-factual experiments in the event of a large human-initiated crisis. The specific scenario we consider is a ground detonation caused by an improvised nuclear device in a large urban region. In contrast to earlier work in this area that focuses largely on the prompt effects on human health and injury, we focus on co-evolution of individual and collective behavior and its interaction with the differentially damaged infrastructure. This allows us to study short term secondary and tertiary effects. The present environment is suitable for studying the dynamical outcomes over a two week period after the initial blast. A novel computing and data processing architecture is described; the architecture allows us to represent multiple co-evolving infrastructures and social networks at a highly resolved temporal, spatial, and individual scale. The representation allows us to study the emergent behavior of individuals as well as specific strategies to reduce casualties and injuries that exploit the spatial and temporal nature of the secondary and tertiary effects. A number of important conclusions are obtained using the modeling environment. For example, the studies decisively show that deploying ad hoc communication networks to reach individuals in the affected area is likely to have a significant impact on the overall casualties and injuries. PMID:25580055

  12. PLANNING AND RESPONSE IN THE AFTERMATH OF A LARGE CRISIS: AN AGENT-BASED INFORMATICS FRAMEWORK*

    PubMed

    Barrett, Christopher; Bisset, Keith; Chandan, Shridhar; Chen, Jiangzhuo; Chungbaek, Youngyun; Eubank, Stephen; Evrenosoğlu, Yaman; Lewis, Bryan; Lum, Kristian; Marathe, Achla; Marathe, Madhav; Mortveit, Henning; Parikh, Nidhi; Phadke, Arun; Reed, Jeffrey; Rivers, Caitlin; Saha, Sudip; Stretz, Paula; Swarup, Samarth; Thorp, James; Vullikanti, Anil; Xie, Dawen

    2013-01-01

    We present a synthetic information and modeling environment that can allow policy makers to study various counter-factual experiments in the event of a large human-initiated crisis. The specific scenario we consider is a ground detonation caused by an improvised nuclear device in a large urban region. In contrast to earlier work in this area that focuses largely on the prompt effects on human health and injury, we focus on co-evolution of individual and collective behavior and its interaction with the differentially damaged infrastructure. This allows us to study short term secondary and tertiary effects. The present environment is suitable for studying the dynamical outcomes over a two week period after the initial blast. A novel computing and data processing architecture is described; the architecture allows us to represent multiple co-evolving infrastructures and social networks at a highly resolved temporal, spatial, and individual scale. The representation allows us to study the emergent behavior of individuals as well as specific strategies to reduce casualties and injuries that exploit the spatial and temporal nature of the secondary and tertiary effects. A number of important conclusions are obtained using the modeling environment. For example, the studies decisively show that deploying ad hoc communication networks to reach individuals in the affected area is likely to have a significant impact on the overall casualties and injuries.

  13. A model for assessing habitat fragmentation caused by new infrastructures in extensive territories - evaluation of the impact of the Spanish strategic infrastructure and transport plan.

    PubMed

    Mancebo Quintana, S; Martín Ramos, B; Casermeiro Martínez, M A; Otero Pastor, I

    2010-05-01

    The aim of the present work is to design a model for evaluating the impact of planned infrastructures on species survival at the territorial scale by calculating a connectivity index. The method developed involves determining the effective distance of displacement between patches of the same habitat, simplifying earlier models so that there is no dependence on specific variables for each species. A case study is presented in which the model was used to assess the impact of the forthcoming roads and railways included in the Spanish Strategic Infrastructure and Transport Plan (PEIT, in its Spanish initials). This study took into account the habitats of peninsular Spain, which occupies an area of some 500,000 km². In this territory, the areas deemed to provide natural habitats are defined by Directive 92/43/EEC. The impact of new infrastructures on connectivity was assessed by comparing two scenarios, with and without the plan, for the major new road and railway networks. The calculation of the connectivity index (CI) requires the use of a raster methodology based on the Arc/Info geographical information system (GIS). The actual calculation was performed using a program written in Arc/Info Macro Language (AML); this program is available in FragtULs (Mancebo Quintana, 2007), a set of tools for calculating indicators of fragmentation caused by transport infrastructure (http://topografia.montes.upm.es/fragtuls.html). The indicator of connectivity proposed allows the estimation of the connectivity between all the patches of a territory, with no artificial (non-ecologically based) boundaries imposed. The model proposed appears to be a useful tool for the analysis of fragmentation caused by plans for large territories. Copyright 2009 Elsevier Ltd. All rights reserved.
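
    The effective-distance ingredient of such a connectivity index can be sketched as a least-cost path computation over a resistance raster (the paper's actual CI formula and the FragtULs implementation are not reproduced here; the raster, the habitat cell locations, and the exponential decay constant are hypothetical):

```python
# Illustrative raster connectivity sketch (not the paper's CI formula or the
# FragtULs tool): effective distances between habitat cells are least-cost path
# lengths over a resistance raster, and an exponential-decay kernel turns them
# into a single connectivity score.  All values below are hypothetical.

import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

resistance = np.array([[1, 1, 5, 1],
                       [1, 8, 5, 1],
                       [1, 1, 1, 1]], dtype=float)   # e.g., 8 ~ a road corridor
habitat_cells = [(0, 0), (0, 3), (2, 3)]             # patch locations (row, col)

rows, cols = resistance.shape
n = rows * cols
idx = lambda r, c: r * cols + c

# Build a 4-connected grid graph whose edge weights average the endpoint costs.
graph = lil_matrix((n, n))
for r in range(rows):
    for c in range(cols):
        for dr, dc in ((0, 1), (1, 0)):
            rr, cc = r + dr, c + dc
            if rr < rows and cc < cols:
                w = 0.5 * (resistance[r, c] + resistance[rr, cc])
                graph[idx(r, c), idx(rr, cc)] = w
                graph[idx(rr, cc), idx(r, c)] = w

sources = [idx(r, c) for r, c in habitat_cells]
dist = dijkstra(graph.tocsr(), directed=False, indices=sources)

alpha = 0.3                                          # dispersal decay (assumed)
ci = sum(np.exp(-alpha * dist[i, sources[j]])
         for i in range(len(sources)) for j in range(len(sources)) if i < j)
print(round(ci, 3))
```
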

  14. Changing Perceptions of Flooding and Stormwater as a Driver of Urban Hydrology and Biogeochemistry

    NASA Astrophysics Data System (ADS)

    Hale, R. L.

    2015-12-01

    Urbanization can have detrimental impacts on downstream ecosystems due to its effects on hydrological and biogeochemical cycles. In particular, how urban stormwater systems are designed has implications for flood regimes and biogeochemical transformations. Flood and stormwater management paradigms have shifted over time at large scales, but patterns and drivers of local stormwater infrastructure designs are unknown. We describe patterns of infrastructure design and use over the 20th century in three cities along an urbanization gradient in Utah: Salt Lake, Logan, and Heber City. To understand changes in stormwater management paradigms we conducted a historical media content analysis of newspaper articles related to flooding and stormwater in Salt Lake City from 1900 to 2012. Stormwater infrastructure design varied spatially and temporally, both within and among cities. All three cities transitioned from agriculture to urban land use, and legacies were evident in the use of agricultural canals for stormwater conveyance. Salt Lake City infrastructure transitioned from centralized storm sewers during early urbanization to decentralized detention systems in the 1970s. In contrast, the newer cities, Logan and Heber, saw parallel increases in conveyance and detention systems with urbanization. The media analysis revealed significant changes in flood and stormwater management paradigms over the 20th century that were driven by complex factors including top-down regulations, local disturbances, and funding constraints. Early management paradigms focused on infrastructural solutions to address problems with private and public property damage, whereas more recent paradigms focus on behavioral solutions to flooding and green infrastructure solutions to prevent negative impacts of urban stormwater on local ecosystems. Changes in human perceptions of the environment can affect how we design urban ecosystems, with important implications for ecological functions.

  15. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers.

    PubMed

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.
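
    The gain from directed, sparsity-aware spike exchange can be illustrated schematically (this is not NEST's actual data structure or its MPI code; the network sizes and random connectivity are hypothetical): each rank records, per local neuron, only the ranks that host targets of that neuron, so a spike is communicated to those few ranks rather than to every rank:

```python
# Schematic sketch of directed spike exchange (not NEST's data structures): each
# rank keeps, per local neuron, the sparse set of ranks that host its targets, so
# a spike is sent only where targets live.  Connectivity is random and hypothetical.

import random

N_RANKS, NEURONS_PER_RANK, TARGET_RANKS_PER_NEURON = 64, 1000, 3
random.seed(1)

# Per local neuron: the few ranks with at least one of its targets.
target_ranks = {
    (rank, neuron): random.sample(range(N_RANKS), TARGET_RANKS_PER_NEURON)
    for rank in range(N_RANKS) for neuron in range(NEURONS_PER_RANK)
}

# One spiking neuron per rank in this time step (hypothetical activity).
spikes = [(rank, random.randrange(NEURONS_PER_RANK)) for rank in range(N_RANKS)]

alltoall_msgs = len(spikes) * N_RANKS                      # every spike to every rank
directed_msgs = sum(len(target_ranks[s]) for s in spikes)  # only to ranks with targets
print(alltoall_msgs, directed_msgs)
```
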

  16. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers

    PubMed Central

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems. PMID:29503613

  17. EPA Recognizes Excellence and Innovation in Clean Water Infrastructure

    EPA Pesticide Factsheets

    Today, the U.S. Environmental Protection Agency recognized 28 clean water infrastructure projects for excellence & innovation within the Clean Water State Revolving Fund (CWSRF) program. Honored projects include large wastewater infrastructure projects.

  18. Eyjafjallajökull and 9/11: The Impact of Large-Scale Disasters on Worldwide Mobility

    PubMed Central

    Woolley-Meza, Olivia; Grady, Daniel; Thiemann, Christian; Bagrow, James P.; Brockmann, Dirk

    2013-01-01

    Large-scale disasters that interfere with globalized socio-technical infrastructure, such as mobility and transportation networks, trigger high socio-economic costs. Although the origin of such events is often geographically confined, their impact reverberates through entire networks in ways that are poorly understood, difficult to assess, and even more difficult to predict. We investigate how the eruption of volcano Eyjafjallajökull, the September 11th terrorist attacks, and geographical disruptions in general interfere with worldwide mobility. To do this we track changes in effective distance in the worldwide air transportation network from the perspective of individual airports. We find that universal features exist across these events: airport susceptibilities to regional disruptions follow similar, strongly heterogeneous distributions that lack a scale. On the other hand, airports are more uniformly susceptible to attacks that target the most important hubs in the network, exhibiting a well-defined scale. The statistical behavior of susceptibility can be characterized by a single scaling exponent. Using scaling arguments that capture the interplay between individual airport characteristics and the structural properties of routes we can recover the exponent for all types of disruption. We find that the same mechanisms responsible for efficient passenger flow may also keep the system in a vulnerable state. Our approach can be applied to understand the impact of large, correlated disruptions in financial systems, ecosystems and other systems with a complex interaction structure between heterogeneous components. PMID:23950904
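
    For readers unfamiliar with the notion, effective distance can be sketched on a toy flux network (the study's exact definition and data are not reproduced here; d = 1 - ln P is one definition used in related work on network-driven spreading, and the flux fractions below are hypothetical):

```python
# Hedged sketch of "effective distance" on a tiny flux network.  The edge length
# d_mn = 1 - ln(P_mn) is one definition used in related work on network-driven
# spreading; the flux fractions are hypothetical.

import math
import heapq

# P[m][n]: fraction of traffic leaving airport m that goes to n (rows sum to 1).
P = {
    "A": {"B": 0.7, "C": 0.3},
    "B": {"A": 0.5, "C": 0.5},
    "C": {"A": 0.2, "B": 0.8},
}

def effective_distances(source):
    """Dijkstra over edge lengths d_mn = 1 - ln(P_mn) from one airport."""
    dist, heap = {source: 0.0}, [(0.0, source)]
    while heap:
        d, m = heapq.heappop(heap)
        if d > dist.get(m, float("inf")):
            continue
        for n, p in P[m].items():
            nd = d + 1.0 - math.log(p)
            if nd < dist.get(n, float("inf")):
                dist[n] = nd
                heapq.heappush(heap, (nd, n))
    return dist

print(effective_distances("C"))   # small flux fraction => large effective distance
```
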

  19. 1001 Ways to run AutoDock Vina for virtual screening

    NASA Astrophysics Data System (ADS)

    Jaghoori, Mohammad Mahdi; Bleijlevens, Boris; Olabarriaga, Silvia D.

    2016-03-01

    Large-scale computing technologies have enabled high-throughput virtual screening involving thousands to millions of drug candidates. It is not trivial, however, for biochemical scientists to evaluate the technical alternatives and their implications for running such large experiments. Besides experience with the molecular docking tool itself, the scientist needs to learn how to run it on high-performance computing (HPC) infrastructures, and understand the impact of the choices made. Here, we review such considerations for a specific tool, AutoDock Vina, and use experimental data to illustrate the following points: (1) an additional level of parallelization increases virtual screening throughput on a multi-core machine; (2) capturing of the random seed is not enough (though necessary) for reproducibility on heterogeneous distributed computing systems; (3) the overall time spent on the screening of a ligand library can be improved by analysis of factors affecting execution time per ligand, including number of active torsions, heavy atoms and exhaustiveness. We also illustrate differences among four common HPC infrastructures: grid, Hadoop, small cluster and multi-core (virtual machine on the cloud). Our analysis shows that these platforms are suitable for screening experiments of different sizes. These considerations can guide scientists when choosing the best computing platform and set-up for their future large virtual screening experiments.
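
    A hedged sketch of point (1), adding a second level of parallelism on a multi-core machine, is shown below: several single-core Vina processes are launched concurrently, one ligand each. The receptor, ligand files, and search-box configuration are placeholders; the flags used (--receptor, --ligand, --out, --config, --cpu, --seed, --exhaustiveness) are standard Vina command-line options. As point (2) notes, fixing the seed alone does not guarantee reproducibility on heterogeneous distributed systems:

```python
# Sketch of process-level parallelism for virtual screening with AutoDock Vina.
# File names and the config file are placeholders; each worker runs one single-core
# Vina job so several ligands are docked concurrently on a multi-core machine.

import glob
import subprocess
from concurrent.futures import ProcessPoolExecutor

def dock(ligand_path):
    out_path = ligand_path.replace(".pdbqt", "_out.pdbqt")
    cmd = [
        "vina", "--receptor", "receptor.pdbqt", "--ligand", ligand_path,
        "--out", out_path, "--config", "box.conf",
        "--cpu", "1", "--seed", "42", "--exhaustiveness", "8",
    ]
    subprocess.run(cmd, check=True)
    return out_path

if __name__ == "__main__":
    ligands = sorted(glob.glob("ligands/*.pdbqt"))
    with ProcessPoolExecutor(max_workers=8) as pool:   # 8 concurrent docking jobs
        for out in pool.map(dock, ligands):
            print("finished", out)
```
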

  20. 1001 Ways to run AutoDock Vina for virtual screening.

    PubMed

    Jaghoori, Mohammad Mahdi; Bleijlevens, Boris; Olabarriaga, Silvia D

    2016-03-01

    Large-scale computing technologies have enabled high-throughput virtual screening involving thousands to millions of drug candidates. It is not trivial, however, for biochemical scientists to evaluate the technical alternatives and their implications for running such large experiments. Besides experience with the molecular docking tool itself, the scientist needs to learn how to run it on high-performance computing (HPC) infrastructures, and understand the impact of the choices made. Here, we review such considerations for a specific tool, AutoDock Vina, and use experimental data to illustrate the following points: (1) an additional level of parallelization increases virtual screening throughput on a multi-core machine; (2) capturing of the random seed is not enough (though necessary) for reproducibility on heterogeneous distributed computing systems; (3) the overall time spent on the screening of a ligand library can be improved by analysis of factors affecting execution time per ligand, including number of active torsions, heavy atoms and exhaustiveness. We also illustrate differences among four common HPC infrastructures: grid, Hadoop, small cluster and multi-core (virtual machine on the cloud). Our analysis shows that these platforms are suitable for screening experiments of different sizes. These considerations can guide scientists when choosing the best computing platform and set-up for their future large virtual screening experiments.

  1. An imperative need for global change research in tropical forests.

    PubMed

    Zhou, Xuhui; Fu, Yuling; Zhou, Lingyan; Li, Bo; Luo, Yiqi

    2013-09-01

    Tropical forests play a crucial role in regulating regional and global climate dynamics, and model projections suggest that rapid climate change may result in forest dieback or savannization. However, these predictions are largely based on results from leaf-level studies. How tropical forests respond and feed back to climate change is largely unknown at the ecosystem level. Several complementary approaches have been used to evaluate the effects of climate change on tropical forests, but the results are conflicting, largely due to confounding effects of multiple factors. Although altered precipitation and nitrogen deposition experiments have been conducted in tropical forests, large-scale warming and elevated carbon dioxide (CO2) manipulations are completely lacking, leaving many hypotheses and model predictions untested. Ecosystem-scale experiments to manipulate temperature and CO2 concentration individually or in combination are thus urgently needed to examine their main and interactive effects on tropical forests. Such experiments will provide indispensable data and help gain essential knowledge on the biogeochemical, hydrological and biophysical responses and feedbacks of tropical forests to climate change. These datasets can also inform regional and global models for predicting future states of tropical forests and climate systems. The success of such large-scale experiments in natural tropical forests will require an international framework to coordinate collaboration so as to meet the challenges in cost, technological infrastructure and scientific endeavor.

  2. Advanced Optical Burst Switched Network Concepts

    NASA Astrophysics Data System (ADS)

    Nejabati, Reza; Aracil, Javier; Castoldi, Piero; de Leenheer, Marc; Simeonidou, Dimitra; Valcarenghi, Luca; Zervas, Georgios; Wu, Jian

    In recent years, as the bandwidth and the speed of networks have increased significantly, a new generation of network-based applications using the concept of distributed computing and collaborative services is emerging (e.g., Grid computing applications). The use of the available fiber and DWDM infrastructure for these applications is a logical choice offering huge amounts of cheap bandwidth and ensuring global reach of computing resources [230]. Currently, there is a great deal of interest in deploying optical circuit (wavelength) switched network infrastructure for distributed computing applications that require long-lived wavelength paths and address the specific needs of a small number of well-known users. Typical users are particle physicists who, due to their international collaborations and experiments, generate enormous amounts of data (petabytes per year). These users require a network infrastructure that can support processing and analysis of large datasets through globally distributed computing resources [230]. However, providing wavelength-granularity bandwidth services is not an efficient and scalable solution for applications and services that address a wider base of user communities with different traffic profiles and connectivity requirements. Examples of such applications are scientific collaboration on a smaller scale (e.g., bioinformatics, environmental research), distributed virtual laboratories (e.g., remote instrumentation), e-health, national security and defense, personalized learning environments and digital libraries, and evolving broadband user services (e.g., high-resolution home video editing, real-time rendering, high-definition interactive TV). As a specific example, in e-health services, and in particular mammography applications, stringent network requirements arise from the size and quantity of images produced by remote mammography. Initial calculations have shown that for 100 patients to be screened remotely, the network would have to securely transport 1.2 GB of data every 30 s (a sustained rate of roughly 0.3 Gbit/s) [230]. It is therefore clear that these types of applications need a new network infrastructure and transport technology that makes large amounts of bandwidth at subwavelength granularity, storage, computation, and visualization resources potentially available to a wide user base for specified time durations. As these types of collaborative and network-based applications evolve to address a wide range and large number of users, it is infeasible to build dedicated networks for each application type or category. Consequently, there should be an adaptive network infrastructure able to support all application types, each with their own access, network, and resource usage patterns. This infrastructure should offer flexible and intelligent network elements and control mechanisms able to deploy new applications quickly and efficiently.

  3. Multi-period natural gas market modeling: Applications, stochastic extensions and solution approaches

    NASA Astrophysics Data System (ADS)

    Egging, Rudolf Gerardus

    This dissertation develops deterministic and stochastic multi-period mixed complementarity problems (MCP) for the global natural gas market, as well as solution approaches for large-scale stochastic MCP. The deterministic model is unique in the combination of the level of detail of the actors in the natural gas markets and the transport options, the detailed regional and global coverage, the multi-period approach with endogenous capacity expansions for transportation and storage infrastructure, the seasonal variation in demand, and the representation of market power according to Nash-Cournot theory. The model is applied to several scenarios for the natural gas market that cover the formation of a cartel by the members of the Gas Exporting Countries Forum, a low availability of unconventional gas in the United States, and cost reductions in long-distance gas transportation [1]. The results provide insights into how different regions are affected by various developments, in terms of production, consumption, traded volumes, prices and profits of market participants. The stochastic MCP is developed and applied to a global natural gas market problem with four scenarios for a time horizon until 2050, with nineteen regions and containing 78,768 variables. The scenarios vary in the possibility of a gas market cartel formation and varying depletion rates of gas reserves in the major gas importing regions. Outcomes for hedging decisions of market participants show some significant shifts in the timing and location of infrastructure investments, thereby affecting local market situations. A first application of Benders decomposition (BD) is presented to solve a large-scale stochastic MCP for the global gas market with many hundreds of first-stage capacity expansion variables and market players exerting various levels of market power. The largest problem solved successfully using BD contained 47,373 variables, of which 763 were first-stage variables; however, using BD did not result in shorter solution times relative to solving the extensive forms. Larger problems, up to 117,481 variables, were solved in extensive form, but not when applying BD, due to numerical issues. It is discussed how BD could significantly reduce the solution time of large-scale stochastic models, but various challenges remain and more research is needed to assess the potential of Benders decomposition for solving large-scale stochastic MCP. [1] www.gecforum.org

  4. Managing competing elastic Grid and Cloud scientific computing applications using OpenNebula

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Berzano, D.; Lusso, S.; Masera, M.; Vallero, S.

    2015-12-01

    Elastic cloud computing applications, i.e. applications that automatically scale according to computing needs, work on the ideal assumption of infinite resources. While large public cloud infrastructures may be a reasonable approximation of this condition, scientific computing centres like WLCG Grid sites usually work in a saturated regime, in which applications compete for scarce resources through queues, priorities and scheduling policies, and keeping a fraction of the computing cores idle to allow for headroom is usually not an option. In our particular environment one of the applications (a WLCG Tier-2 Grid site) is much larger than all the others and cannot autoscale easily. Nevertheless, other smaller applications can benefit from automatic elasticity; the implementation of this property in our infrastructure, based on the OpenNebula cloud stack, will be described and the very first operational experiences with a small number of strategies for timely allocation and release of resources will be discussed.

  5. Precipitation Processes and their Modulation by Synoptic Conditions and Complex Terrain Observed during the GPM Ground Validation Olympic Mountains Experiment (OLYMPEX)

    NASA Astrophysics Data System (ADS)

    McMurdie, L. A.; Houze, R.; Zagrodnik, J.; Rowe, A.; DeHart, J.; Barnes, H.

    2016-12-01

    Successful and sustainable coupling of human societies and natural systems requires effective governance, which depends on the existence of proper infrastructure (both hard and soft). In recent decades, much attention has been paid to what has allowed many small-scale, self-organized, coupled natural-human systems around the world to persist for centuries, thanks in large part to the work of Elinor Ostrom and colleagues. In this work, we mathematically operationalize a conceptual framework that is developed based on this body of work by way of a stylized model. The model captures the interplay between replicator dynamics within the population, dynamics of natural resources, and threshold characteristics of public infrastructure. The model analysis reveals conditions for long-term sustainability and collapse of the coupled systems as well as other tradeoffs and potential pitfalls in governing these systems.

  6. Data Transfer Advisor with Transport Profiling Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S.; Liu, Qiang; Yun, Daqing

    The network infrastructures have been rapidly upgraded in many high-performance networks (HPNs). However, such infrastructure investment has not led to corresponding performance improvement in big data transfer, especially at the application layer, largely due to the complexity of optimizing transport control on end hosts. We design and implement ProbData, a PRofiling Optimization Based DAta Transfer Advisor, to help users determine the most effective data transfer method with the most appropriate control parameter values to achieve the best data transfer performance. ProbData employs a profiling optimization based approach to exploit the optimal operational zone of various data transfer methods in support of big data transfer in extreme-scale scientific applications. We present a theoretical framework of the optimized profiling approach employed in ProbData as well as its detailed design and implementation. The advising procedure and performance benefits of ProbData are illustrated and evaluated by proof-of-concept experiments in real-life networks.

  7. Evaluating betterment projects.

    PubMed

    Fleming, Christopher M; Manning, Matthew; Smith, Christine

    2016-04-01

    In the past decade Australia has experienced a series of large-scale, severe natural disasters including catastrophic bushfires, widespread and repeated flooding, and intense storms and cyclones. There appears to be a prima facie case for rebuilding damaged infrastructure to a more disaster resilient (that is, to 'betterment') standard. The purpose of this paper is to develop and illustrate a consistent and readily applied method for advancing proposals for the betterment of essential public assets, which can be used by governments at all levels to determine the net benefits of such proposals. Case study results demonstrate that betterment investments have the potential to deliver a positive economic return across a range of asset types and regions. Results, however, are highly sensitive to underlying assumptions; in particular the probability of the natural disaster affecting the infrastructure in the absence of betterment. © 2016 The Author(s). Disasters © Overseas Development Institute, 2016.

  8. Defense on the Move: Ant-Based Cyber Defense

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fink, Glenn A.; Haack, Jereme N.; McKinnon, Archibald D.

    Many common cyber defenses (like firewalls and IDS) are as static as trench warfare, allowing the attacker freedom to probe them at will. The concept of Moving Target Defense (MTD) adds dynamism to the defender side, but puts the systems to be defended themselves in motion, potentially at great cost to the defender. An alternative approach is a mobile resilient defense that removes attackers' ability to rely on prior experience without requiring motion in the protected infrastructure itself. The defensive technology absorbs most of the cost of motion, is resilient to attack, and is unpredictable to attackers. The Ant-Based Cyber Defense (ABCD) is a mobile resilient defense providing a set of roaming, bio-inspired, digital-ant agents working with stationary agents in a hierarchy headed by a human supervisor. The ABCD approach provides a resilient, extensible, and flexible defense that can scale to large, multi-enterprise infrastructures like the smart electric grid.

  9. Eco-logical successes: third edition, September 2012

    DOT National Transportation Integrated Search

    2012-09-01

    Eco-Logical: An Ecosystem Approach to Developing Infrastructure Projects outlines an ecosystem-scale approach to prioritizing, developing, and delivering infrastructure projects. Eco-Logical emphasizes interagency collaboration in order to create inf...

  10. Integration of structural health monitoring and asset management.

    DOT National Transportation Integrated Search

    2012-08-01

    This project investigated the feasibility and potential benefits of the integration of infrastructure monitoring systems into enterprise-scale transportation management systems. An infrastructure monitoring system designed for bridges was implemented...

  11. Insights and Challenges to Integrating Data from Diverse Ecological Networks

    NASA Astrophysics Data System (ADS)

    Peters, D. P. C.

    2014-12-01

    Many of the most dramatic and surprising effects of global change occur across large spatial extents, from regions to continents, and impact multiple ecosystem types across a range of interacting spatial and temporal scales. The ability of ecologists and inter-disciplinary scientists to understand and predict these dynamics depends, in large part, on existing site-based research infrastructures that developed in response to historic events. Integrating these diverse sources of data is critical to addressing these broad-scale questions. A conceptual approach is presented to synthesize and integrate diverse sources and types of data from different networks of research sites. This approach focuses on developing derived data products through spatial and temporal aggregation that allow datasets collected with different methods to be compared. The approach is illustrated through the integration, analysis, and comparison of hundreds of long-term datasets from 50 ecological sites in the US that represent ecosystem types commonly found globally. New insights were found by comparing multiple sites using common derived data. In addition to "bringing to light" much dark data in a standardized, open-access, easy-to-use format, a suite of lessons was learned that can be applied to up-and-coming research networks in the US and internationally. These lessons will be described along with the challenges, including cyber-infrastructure, cultural, and behavioral constraints associated with the use of big and little data, that may keep ecologists and inter-disciplinary scientists from taking full advantage of the vast amounts of existing and yet-to-be-exposed data.

  12. Infrastructure stability surveillance with high resolution InSAR

    NASA Astrophysics Data System (ADS)

    Balz, Timo; Düring, Ralf

    2017-02-01

    The construction of new infrastructure in largely unknown and difficult environments, as is necessary for the construction of the New Silk Road, can lead to decreased stability along the construction site, increasing the risk of landslides and of deformation caused by surface motion. This generally requires a thorough pre-analysis and consecutive surveillance of the deformation patterns to ensure the stability and safety of the infrastructure projects. Interferometric SAR (InSAR) and the derived techniques of multi-baseline InSAR are very powerful tools for large-area observation of surface deformation patterns. With InSAR and derived techniques, the topographic height and the surface motion can be estimated for large areas, making InSAR an ideal tool for supporting the planning, construction, and safety surveillance of new infrastructure elements in remote areas.

  13. GISpark: A Geospatial Distributed Computing Platform for Spatiotemporal Big Data

    NASA Astrophysics Data System (ADS)

    Wang, S.; Zhong, E.; Wang, E.; Zhong, Y.; Cai, W.; Li, S.; Gao, S.

    2016-12-01

    Geospatial data are growing exponentially because of the proliferation of cost-effective and ubiquitous positioning technologies such as global remote-sensing satellites and location-based devices. Analyzing large amounts of geospatial data can provide great value for both industrial and scientific applications. The data- and compute-intensive characteristics inherent in geospatial big data increasingly pose great challenges to technologies for data storage, computing and analysis. Such challenges require a scalable and efficient architecture that can store, query, analyze, and visualize large-scale spatiotemporal data. Therefore, we developed GISpark - a geospatial distributed computing platform for processing large-scale vector, raster and stream data. GISpark is constructed on the latest virtualized computing infrastructures and distributed computing architecture. OpenStack and Docker are used to build a multi-user hosting cloud computing infrastructure for GISpark. Virtual storage systems such as HDFS, Ceph and MongoDB are combined and adopted for spatiotemporal data storage management. A Spark-based algorithm framework is developed for efficient parallel computing. Within this framework, SuperMap GIScript and various open-source GIS libraries can be integrated into GISpark. GISpark can also be integrated with scientific computing environments (e.g., Anaconda), interactive computing web applications (e.g., Jupyter notebook), and machine learning tools (e.g., TensorFlow/Orange). The associated geospatial facilities of GISpark, in conjunction with the scientific computing environment, exploratory spatial data analysis tools, and temporal data management and analysis systems, make up a powerful geospatial computing tool. GISpark not only provides spatiotemporal big data processing capacity in the geospatial field, but also provides a spatiotemporal computational model and advanced geospatial visualization tools that apply to other domains with spatial properties. We tested the performance of the platform with a taxi trajectory analysis. Results suggest that GISpark achieves excellent run-time performance in spatiotemporal big data applications.
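
    In the spirit of the taxi-trajectory test mentioned above (this is plain PySpark, not GISpark's own API; the input path and CSV record layout are hypothetical), a minimal sketch might bin GPS points to roughly 0.01-degree grid cells and count points per cell:

```python
# Minimal PySpark sketch of spatiotemporal aggregation (not GISpark's API): GPS
# points are binned to ~0.01-degree grid cells and counted per cell.  The input
# path and record layout (id, lon, lat, timestamp CSV) are hypothetical.

from pyspark import SparkContext

sc = SparkContext(appName="grid-count-sketch")

def to_cell(line):
    _, lon, lat, _ = line.split(",")
    return (round(float(lon), 2), round(float(lat), 2)), 1

counts = (sc.textFile("hdfs:///data/taxi_points.csv")
            .map(to_cell)
            .reduceByKey(lambda a, b: a + b))

for cell, n in counts.top(10, key=lambda kv: kv[1]):   # ten busiest grid cells
    print(cell, n)

sc.stop()
```
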

  14. Large Independent Primary Care Medical Groups

    PubMed Central

    Casalino, Lawrence P.; Chen, Melinda A.; Staub, C. Todd; Press, Matthew J.; Mendelsohn, Jayme L.; Lynch, John T.; Miranda, Yesenia

    2016-01-01

    PURPOSE In the turbulent US health care environment, many primary care physicians seek hospital employment. Large physician-owned primary care groups are an alternative, but few physicians or policy makers realize that such groups exist. We wanted to describe these groups, their advantages, and their challenges. METHODS We identified 21 groups and studied 5 that varied in size and location. We conducted interviews with group leaders, surveyed randomly selected group physicians, and interviewed external observers—leaders of a health plan, hospital, and specialty medical group that shared patients with the group. We triangulated responses from group leaders, group physicians, and external observers to identify key themes. RESULTS The groups’ physicians work in small practices, with the group providing economies of scale necessary to develop laboratory and imaging services, health information technology, and quality improvement infrastructure. The groups differ in their size and the extent to which they engage in value-based contracting, though all are moving to increase the amount of financial risk they take for their quality and cost performance. Unlike hospital-employed and multispecialty groups, independent primary care groups can aim to reduce health care costs without conflicting incentives to fill hospital beds and keep specialist incomes high. Each group was positively regarded by external observers. The groups are under pressure, however, to sell to organizations that can provide capital for additional infrastructure to engage in value-based contracting, as well as provide substantial income to physicians from the sale. CONCLUSIONS Large, independent primary care groups have the potential to make primary care attractive to physicians and to improve patient care by combining human scale advantages of physician autonomy and the small practice setting with resources that are important to succeed in value-based contracting. PMID:26755779

  15. Evaluation of Future Internet Technologies for Processing and Distribution of Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Becedas, J.; Perez, R.; Gonzalez, G.; Alvarez, J.; Garcia, F.; Maldonado, F.; Sucari, A.; Garcia, J.

    2015-04-01

    Satellite imagery data centres are designed to operate a defined number of satellites, so difficulties appear when new satellites have to be incorporated into the system. This occurs because traditional infrastructures are neither flexible nor scalable. With the appearance of Future Internet technologies, new solutions can be provided to manage large and variable amounts of data on demand. These technologies optimize resources and facilitate the appearance of new applications and services in the traditional Earth Observation (EO) market. The use of Future Internet technologies in the EO sector was validated with the GEO-Cloud experiment, part of the Fed4FIRE FP7 European project. This work presents the final results of the project, in which a constellation of satellites records the whole Earth surface on a daily basis. The satellite imagery is downloaded into a distributed network of ground stations and ingested into a cloud infrastructure, where the data are processed, stored, archived and distributed to the end users. The processing and transfer times inside the cloud, the workload of the processors, automatic cataloguing and accessibility through the Internet are evaluated to validate whether Future Internet technologies offer advantages over traditional methods. The applicability of these technologies to providing high added-value services is also evaluated. Finally, the advantages of using federated testbeds to carry out large-scale, industry-driven experiments are analysed by evaluating the feasibility of an experiment developed in the European infrastructure Fed4FIRE and its migration to a commercial cloud: SoftLayer, an IBM Company.

  16. Caution Ahead: Overdue Investments for New York's Aging Infrastructure

    ERIC Educational Resources Information Center

    Forman, Adam

    2014-01-01

    Following the devastation of Superstorm Sandy in October 2012, New York City's essential infrastructure needs were made a top policy priority for the first time in decades. The scale and severity of the storm prompted numerous studies to assess the damage and led policymakers to take steps to shore up the city's coastal infrastructure weaknesses.…

  17. HCP: A Flexible CNN Framework for Multi-label Image Classification.

    PubMed

    Wei, Yunchao; Xia, Wei; Lin, Min; Huang, Junshi; Ni, Bingbing; Dong, Jian; Zhao, Yao; Yan, Shuicheng

    2015-10-26

    The Convolutional Neural Network (CNN) has demonstrated promising performance in single-label image classification tasks. However, how a CNN best copes with multi-label images remains an open problem, mainly due to the complex underlying object layouts and insufficient multi-label training images. In this work, we propose a flexible deep CNN infrastructure, called Hypotheses-CNN-Pooling (HCP), where an arbitrary number of object segment hypotheses are taken as the inputs, a shared CNN is then connected with each hypothesis, and finally the CNN outputs from the different hypotheses are aggregated with max pooling to produce the ultimate multi-label predictions. Some unique characteristics of this flexible deep CNN infrastructure include: 1) no ground-truth bounding box information is required for training; 2) the whole HCP infrastructure is robust to possibly noisy and/or redundant hypotheses; 3) the shared CNN is flexible and can be well pre-trained with a large-scale single-label image dataset, e.g., ImageNet; and 4) it may naturally output multi-label prediction results. Experimental results on the Pascal VOC 2007 and VOC 2012 multi-label image datasets demonstrate the superiority of the proposed HCP infrastructure over other state-of-the-art methods. In particular, the mAP reaches 90.5% with HCP alone and 93.2% after fusion with our complementary result in [44] based on hand-crafted features on the VOC 2012 dataset.
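
    The hypothesis-pooling step described above can be sketched in a few lines of PyTorch; the backbone, the number of hypotheses and the decision threshold below are placeholders, not the authors' configuration.

      # Sketch of the HCP aggregation: a shared CNN scores each object-segment
      # hypothesis, and per-class scores are max-pooled across hypotheses.
      import torch
      import torchvision.models as models

      num_classes = 20                               # e.g. Pascal VOC classes
      shared_cnn = models.resnet18(weights=None)     # stand-in for the shared CNN
      shared_cnn.fc = torch.nn.Linear(shared_cnn.fc.in_features, num_classes)
      shared_cnn.eval()

      hypotheses = torch.randn(8, 3, 224, 224)       # 8 cropped hypothesis images

      with torch.no_grad():
          scores = shared_cnn(hypotheses)            # shape (8, num_classes)
          image_scores, _ = scores.max(dim=0)        # max pooling over hypotheses
          labels = torch.sigmoid(image_scores) > 0.5 # multi-label decision

      print(labels.nonzero(as_tuple=True)[0])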

  18. Report of the Interagency Optical Network Testbeds Workshop 2, NASA Ames Research Center, September 12-14, 2005

    NASA Technical Reports Server (NTRS)

    2005-01-01

    The Optical Network Testbeds Workshop 2 (ONT2), held on September 12-14, 2005, was cosponsored by the Department of Energy Office of Science (DOE/SC) and the National Aeronautics and Space Administration (NASA), in cooperation with the Joint Engineering Team (JET) of the Federal Networking and Information Technology Research and Development (NITRD) Program's Large Scale Networking (LSN) Coordinating Group. The ONT2 workshop was a follow-on to an August 2004 Workshop on Optical Network Testbeds (ONT1). ONT1 recommended actions by the Federal agencies to assure timely development and implementation of optical networking technologies and infrastructure. Hosted by the NASA Ames Research Center in Mountain View, California, the ONT2 workshop brought together representatives of the U.S. advanced research and education (R&E) networks, regional optical networks (RONs), service providers, international networking organizations, and senior engineering and R&D managers from Federal agencies and national research laboratories. Its purpose was to develop a common vision of the optical network technologies, services, infrastructure, and organizations needed to enable widespread use of optical networks; recommend activities for transitioning the optical networking research community and its current infrastructure to leading-edge optical networks over the next three to five years; and present information enabling commercial network infrastructure providers to plan for and use leading-edge optical network services in that time frame.

  19. The size effect in corrosion greatly influences the predicted life span of concrete infrastructures.

    PubMed

    Angst, Ueli M; Elsener, Bernhard

    2017-08-01

    Forecasting the life of concrete infrastructures in corrosive environments presents a long-standing and socially relevant challenge in science and engineering. Chloride-induced corrosion of reinforcing steel in concrete is the main cause for premature degradation of concrete infrastructures worldwide. Since the middle of the past century, this challenge has been tackled by using a conceptual approach relying on a threshold chloride concentration for corrosion initiation (Ccrit). All state-of-the-art models for forecasting chloride-induced steel corrosion in concrete are based on this concept. We present an experiment that shows that Ccrit depends strongly on the exposed steel surface area. The smaller the tested specimen is, the higher and the more variable Ccrit becomes. This size effect in the ability of reinforced concrete to withstand corrosion can be explained by the local conditions at the steel-concrete interface, which exhibit pronounced spatial variability. The size effect has major implications for the future use of the common concept of Ccrit. It questions the applicability of laboratory results to engineering structures and the reproducibility of typically small-scale laboratory testing. Finally, we show that the weakest link theory is suitable to transform Ccrit from small to large dimensions, which lays the basis for taking the size effect into account in the science and engineering of forecasting the durability of infrastructures.

  20. The size effect in corrosion greatly influences the predicted life span of concrete infrastructures

    PubMed Central

    Angst, Ueli M.; Elsener, Bernhard

    2017-01-01

    Forecasting the life of concrete infrastructures in corrosive environments presents a long-standing and socially relevant challenge in science and engineering. Chloride-induced corrosion of reinforcing steel in concrete is the main cause for premature degradation of concrete infrastructures worldwide. Since the middle of the past century, this challenge has been tackled by using a conceptual approach relying on a threshold chloride concentration for corrosion initiation (Ccrit). All state-of-the-art models for forecasting chloride-induced steel corrosion in concrete are based on this concept. We present an experiment that shows that Ccrit depends strongly on the exposed steel surface area. The smaller the tested specimen is, the higher and the more variable Ccrit becomes. This size effect in the ability of reinforced concrete to withstand corrosion can be explained by the local conditions at the steel-concrete interface, which exhibit pronounced spatial variability. The size effect has major implications for the future use of the common concept of Ccrit. It questions the applicability of laboratory results to engineering structures and the reproducibility of typically small-scale laboratory testing. Finally, we show that the weakest link theory is suitable to transform Ccrit from small to large dimensions, which lays the basis for taking the size effect into account in the science and engineering of forecasting the durability of infrastructures. PMID:28782038
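
    The weakest-link transformation mentioned in both versions of this record can be illustrated with a simple Weibull scaling of the Ccrit distribution; the parameters below are arbitrary and only show how the median threshold drops as the exposed steel area grows.

      # Weakest-link (Weibull) sketch: scale a chloride-threshold distribution
      # from a small lab specimen to larger exposed steel areas.
      # All parameter values are illustrative, not taken from the paper.
      import numpy as np

      m, c0, A_lab = 5.0, 1.0, 0.001     # Weibull shape, scale (% chloride), lab area (m^2)

      def ccrit_quantile(p, area):
          """Chloride content at which the failure probability reaches p for a given area."""
          # P_f(c, A) = 1 - exp(-(A / A_lab) * (c / c0)**m)
          return c0 * (-np.log(1.0 - p) * A_lab / area) ** (1.0 / m)

      for area in (A_lab, 0.01, 0.1, 1.0):
          print(f"area {area:6.3f} m^2 -> median Ccrit ~ {ccrit_quantile(0.5, area):.3f} % chloride")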

  1. Zero Launch Mass 3D printer

    NASA Image and Video Library

    2018-05-01

    Packing light is the idea behind the Zero Launch Mass 3-D Printer. Instead of loading up on heavy building supplies, a large-scale 3-D printer capable of using recycled plastic waste and dirt at the destination as construction material would save mass and money when launching robotic precursor missions to build infrastructure on the Moon or Mars in preparation for human habitation. To make this a reality, Nathan Gelino, a research engineer with NASA’s Swamp Works at Kennedy Space Center, measured the temperature of a test specimen from the 3-D printer Tuesday as an early step in characterizing printed material strength properties. Material temperature plays a large role in the strength of bonds between layers.

  2. Characterizing the impact of spatiotemporal variations in stormwater infrastructure on hydrologic conditions

    NASA Astrophysics Data System (ADS)

    Jovanovic, T.; Mejia, A.; Hale, R. L.; Gironas, J. A.

    2015-12-01

    Urban stormwater infrastructure design has evolved over time, reflecting changes in stormwater policy and regulations, and in engineering design. This evolution makes urban basins heterogeneous socio-ecological-technological systems. We hypothesize that this heterogeneity creates unique impact trajectories in time and impact hotspots in space within and across cities. To explore this, we develop and implement a network hydro-engineering modeling framework based on high-resolution digital elevation and stormwater infrastructure data. The framework also accounts for climatic, soils, land use, and vegetation conditions in an urban basin, thus making it useful to study the impacts of stormwater infrastructure across cities. Here, to evaluate the framework, we apply it to urban basins in the metropolitan area of Phoenix, Arizona. We use it to estimate different metrics to characterize the storm-event hydrologic response. We estimate both traditional metrics (e.g., peak flow, time to peak, and runoff volume) and new metrics (e.g., basin-scale dispersion mechanisms). We also use the dispersion mechanisms to assess the scaling characteristics of urban basins. Ultimately, we find that the proposed framework can be used to understand and characterize the impacts associated with stormwater infrastructure on hydrologic conditions within a basin. Additionally, we find that the scaling approach helps synthesize information but requires further validation using additional urban basins.
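
    The traditional storm-event metrics named in this record (peak flow, time to peak, runoff volume) can be computed from any simulated or observed hydrograph; the sketch below uses a synthetic hydrograph and does not attempt the dispersion metrics.

      # Storm-event metrics from a hydrograph; the hydrograph itself is synthetic.
      import numpy as np

      t = np.arange(0, 180, 5)                        # minutes
      q = 0.4 * np.exp(-((t - 60) / 25.0) ** 2)       # synthetic flow, m^3/s

      peak_flow = q.max()
      time_to_peak = t[q.argmax()]
      runoff_volume = np.trapz(q, t * 60.0)           # integrate over seconds -> m^3

      print(f"peak flow     : {peak_flow:.3f} m^3/s")
      print(f"time to peak  : {time_to_peak} min")
      print(f"runoff volume : {runoff_volume:.1f} m^3")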

  3. PEEX Modelling Platform for Seamless Environmental Prediction

    NASA Astrophysics Data System (ADS)

    Baklanov, Alexander; Mahura, Alexander; Arnold, Stephen; Makkonen, Risto; Petäjä, Tuukka; Kerminen, Veli-Matti; Lappalainen, Hanna K.; Ezau, Igor; Nuterman, Roman; Zhang, Wen; Penenko, Alexey; Gordov, Evgeny; Zilitinkevich, Sergej; Kulmala, Markku

    2017-04-01

    The Pan-Eurasian EXperiment (PEEX) is a multidisciplinary, multi-scale research programme started in 2012 and aimed at resolving the major uncertainties in Earth System Science and global sustainability issues concerning the Arctic and boreal Northern Eurasian regions and China. Such challenges include climate change, air quality, biodiversity loss, chemicalization, food supply, and the use of natural resources by mining, industry, energy production and transport. The research infrastructure introduces the current state-of-the-art modeling platform and observation systems in the Pan-Eurasian region and presents the future baselines for the coherent and coordinated research infrastructures in the PEEX domain. The PEEX Modelling Platform is characterized by a complex, seamless, integrated Earth System Modeling (ESM) approach, in combination with specific models of different processes and elements of the system, acting on different temporal and spatial scales. An ensemble approach is taken to integrating modeling results from different models, participants and countries. PEEX utilizes the full potential of a hierarchy of models: scenario analysis, inverse modeling, and modeling based on measurement needs and processes. The models are validated and constrained by available in-situ and remote sensing data of various spatial and temporal scales using data assimilation and top-down modeling. The analyses of the anticipated large volumes of data produced by available models and sensors will be supported by a dedicated virtual research environment developed for these purposes.

  4. Attacker-defender game from a network science perspective

    NASA Astrophysics Data System (ADS)

    Li, Ya-Peng; Tan, Suo-Yi; Deng, Ye; Wu, Jun

    2018-05-01

    To deal with the protection of critical infrastructures, many game-theoretic methods have been developed to study the strategic interactions between defenders and attackers. However, most game models ignore the interrelationship between different components within a certain system. In this paper, we propose a simultaneous-move attacker-defender game model, which is a two-player zero-sum static game with complete information. The strategies and payoffs of this game are defined on the basis of the topological structure of the infrastructure system, which is represented by a complex network. Due to the complexity of the strategy space, the attack and defense strategies are confined to two typical strategies, namely a targeted strategy and a random strategy. The simulation results indicate that in a scale-free network, the attacker virtually always attacks randomly in the Nash equilibrium. With a small cost-sensitive parameter, representing the degree to which costs increase with the importance of a target, the defender preferentially protects the hub targets with large degrees. When the cost-sensitive parameter exceeds a threshold, the defender switches to protecting nodes randomly. Our work provides a new theoretical framework to analyze the confrontations between the attacker and the defender on critical infrastructures and deserves further study.
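
    A toy version of this game can be set up on a scale-free graph to compare the two strategy pairs; the payoff definition below (surviving largest-component fraction) and the budget are assumptions chosen for illustration, not the paper's exact formulation.

      # Toy simultaneous-move attacker-defender game on a scale-free network.
      # Both players pick either a targeted (hub-first) or a fixed "random" node set.
      import networkx as nx

      G = nx.barabasi_albert_graph(200, 2, seed=1)
      budget = 20                                      # nodes attacked / protected

      def surviving_fraction(graph, attacked, protected):
          """Fraction of nodes in the largest component after unprotected targets are removed."""
          H = graph.copy()
          H.remove_nodes_from(n for n in attacked if n not in protected)
          return max(len(c) for c in nx.connected_components(H)) / graph.number_of_nodes()

      hubs = sorted(G.nodes, key=G.degree, reverse=True)[:budget]
      rand = list(G.nodes)[:budget]                    # fixed stand-in for a random draw
      strategies = {"targeted": set(hubs), "random": set(rand)}

      # The defender wants a high surviving fraction; the attacker, as the opponent
      # in a zero-sum game, wants it low.
      for a_name, a_set in strategies.items():
          for d_name, d_set in strategies.items():
              v = surviving_fraction(G, a_set, d_set)
              print(f"attacker={a_name:8s} defender={d_name:8s} surviving fraction={v:.2f}")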

  5. DICOMGrid: a middleware to integrate PACS and EELA-2 grid infrastructure

    NASA Astrophysics Data System (ADS)

    Moreno, Ramon A.; de Sá Rebelo, Marina; Gutierrez, Marco A.

    2010-03-01

    Medical images provide a wealth of information for physicians, but the huge amount of data produced by medical imaging equipment in a modern health institution is not yet explored to its full potential. Nowadays medical images are used in hospitals mostly as part of routine activities, while their intrinsic value for research is underestimated. Medical images can be used for the development of new visualization techniques, new algorithms for patient care and new image processing techniques. These research areas usually require huge volumes of data to obtain significant results, along with enormous computing capabilities. Such qualities are characteristic of grid computing systems such as the EELA-2 infrastructure. Grid technologies allow data sharing at large scale in a safe and integrated environment and offer high computing capabilities. In this paper we describe DicomGrid, which stores and retrieves medical images, properly anonymized, so that they can be used by researchers to test new processing techniques using the computational power offered by grid technology. A prototype of DicomGrid is under evaluation; it permits the submission of jobs into the EELA-2 grid infrastructure while offering a simple interface that requires minimal understanding of grid operation.
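
    The anonymization step mentioned in this record is easy to illustrate with pydicom; the tag list, file paths and the idea of running it before grid submission are assumptions for the sketch, not the DicomGrid implementation.

      # De-identify a DICOM file before sharing it on a research grid (sketch).
      import pydicom

      def anonymize(in_path, out_path, patient_id="ANON-0001"):
          ds = pydicom.dcmread(in_path)
          for keyword in ("PatientName", "PatientBirthDate", "PatientAddress"):
              if keyword in ds:
                  setattr(ds, keyword, "")      # blank out directly identifying fields
          ds.PatientID = patient_id             # replace identifier with a study code
          ds.remove_private_tags()
          ds.save_as(out_path)

      anonymize("study/slice001.dcm", "anon/slice001.dcm")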

  6. When good practices by water committees are not relevant: Sustainability of small water infrastructures in semi-arid mozambique

    NASA Astrophysics Data System (ADS)

    Ducrot, Raphaëlle

    2017-12-01

    This paper explores the contradiction between the need for large-scale interventions in rural water supplies and the need for flexibility when providing support for community institutions, by investigating the implementation of the Mozambique National Rural Water Supply and Sanitation Program in a semi-arid district of the Limpopo Basin. Our results showed that coordinated leadership by key committee members and the level of village governance were more important for borehole sustainability than the normative functioning of the committee. In a context in which the centrality of leadership prevails over collective action, the sustainability of rural water infrastructure derives from the ability of leaders to motivate the community to provide supplementary funding. This, in turn, depends on the added value of the water points to the community and on village politics. Any intervention that increased community conflicts, for example because of a lack of transparency or inequitable access to the benefits of the intervention, weakened the coordination and collective action capacity of the community and hence the sustainability of the infrastructures, even if the intervention was not directly related to water access. These results stress the importance of the project/program implementation pathway.

  7. Mesh infrastructure for coupled multiprocess geophysical simulations

    DOE PAGES

    Garimella, Rao V.; Perkins, William A.; Buksas, Mike W.; ...

    2014-01-01

    We have developed a sophisticated mesh infrastructure capability to support large scale multiphysics simulations such as subsurface flow and reactive contaminant transport at storage sites as well as the analysis of the effects of a warming climate on the terrestrial arctic. These simulations involve a wide range of coupled processes including overland flow, subsurface flow, freezing and thawing of ice rich soil, accumulation, redistribution and melting of snow, biogeochemical processes involving plant matter and finally, microtopography evolution due to melting and degradation of ice wedges below the surface. In addition to supporting the usual topological and geometric queries about the mesh, the mesh infrastructure adds capabilities such as identifying columnar structures in the mesh, enabling deforming of the mesh subject to constraints and enabling the simultaneous use of meshes of different dimensionality for subsurface and surface processes. The generic mesh interface is capable of using three different open source mesh frameworks (MSTK, MOAB and STKmesh) under the hood allowing the developers to directly compare them and choose one that is best suited for the application's needs. We demonstrate the results of some simulations using these capabilities as well as present a comparison of the performance of the different mesh frameworks.

  8. A Neighborhood-Scale Green Infrastructure Retrofit: Experimental Results, Model Simulations, and Resident Perspectives

    NASA Astrophysics Data System (ADS)

    Jefferson, A.; Avellaneda, P. M.; Jarden, K. M.; Turner, V. K.; Grieser, J.

    2016-12-01

    Distributed green infrastructure approaches to stormwater management that can be retrofitted into existing development are of growing interest, but questions remain about their effectiveness at the watershed scale. In suburban northeastern Ohio, homeowners on a residential street with 55% impervious surface were given the opportunity for free rain barrels, rain gardens, and bioretention cells. Of 163 parcels, only 22 owners (13.5%) chose to participate, despite intense outreach efforts. After pre-treatment monitoring, 37 rain barrels, 7 rain gardens, and 16 street-side bioretention cells were installed in 2013-2014. Using a paired watershed approach, reductions of up to 33% in peak flow and 40% in total runoff volume per storm were measured in the storm sewer. Using the monitoring data, a calibrated and validated SWMM model was built to explore the long-term effectiveness of the green infrastructure against a wider range of hydrological conditions. Model results confirm the effectiveness of green infrastructure in reducing surface runoff and increasing infiltration and evaporation. Based on 20 years of historical precipitation data, the model shows that the green infrastructure is capable of reducing flows by >40% at the 1-, 2-, and 5-year return periods, suggesting some resilience to projected increases in precipitation intensity in a changing climate. Further, in this project, more benefit is derived from the street-side bioretention cells than from the rain barrels and gardens that treat rooftop runoff. Substantial hydrological gains were achieved despite low homeowner participation. Surveys indicate that many residents viewed stormwater as the city's problem and had negative perceptions of green infrastructure, despite slightly pro-environment values generally. Overall, this study demonstrates green infrastructure's hydrological effectiveness but raises challenging questions about overcoming social barriers to retrofits at the neighborhood scale.
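
    Running a calibrated SWMM model against long precipitation records, as described above, is commonly scripted; the sketch below assumes the pyswmm wrapper, a placeholder input file and a hypothetical outfall node ID, and is not the authors' model.

      # Drive a SWMM model from Python and track the peak inflow at an outfall.
      from pyswmm import Simulation, Nodes

      peak = 0.0
      with Simulation("neighborhood.inp") as sim:      # placeholder SWMM input file
          outfall = Nodes(sim)["OUTFALL"]              # hypothetical outlet node ID
          for _ in sim:                                # step through the simulation
              peak = max(peak, outfall.total_inflow)

      print(f"simulated peak outfall inflow: {peak:.3f}")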

  9. !CHAOS: A cloud of controls

    NASA Astrophysics Data System (ADS)

    Angius, S.; Bisegni, C.; Ciuffetti, P.; Di Pirro, G.; Foggetta, L. G.; Galletti, F.; Gargana, R.; Gioscio, E.; Maselli, D.; Mazzitelli, G.; Michelotti, A.; Orrù, R.; Pistoni, M.; Spagnoli, F.; Spigone, D.; Stecchi, A.; Tonto, T.; Tota, M. A.; Catani, L.; Di Giulio, C.; Salina, G.; Buzzi, P.; Checcucci, B.; Lubrano, P.; Piccini, M.; Fattibene, E.; Michelotto, M.; Cavallaro, S. R.; Diana, B. F.; Enrico, F.; Pulvirenti, S.

    2016-01-01

    This paper presents the !CHAOS open source project, which aims to develop a prototype of a national private Cloud Computing infrastructure devoted to accelerator control systems and large experiments of High Energy Physics (HEP). The !CHAOS project has been financed by MIUR (Italian Ministry of Research and Education) and aims to develop a new concept of control system and data acquisition framework by providing, with a high level of abstraction, all the services needed for controlling and managing a large scientific, or non-scientific, infrastructure. A beta version of the !CHAOS infrastructure will be released at the end of December 2015 and will run on private Cloud infrastructures based on OpenStack.

  10. National Laboratory for Advanced Scientific Visualization at UNAM - Mexico

    NASA Astrophysics Data System (ADS)

    Manea, Marina; Constantin Manea, Vlad; Varela, Alfredo

    2016-04-01

    In 2015, the National Autonomous University of Mexico (UNAM) joined the family of universities and research centers where advanced visualization and computing play a key role in promoting and advancing missions in research, education, community outreach, and business-oriented consulting. This initiative provides access to a great variety of advanced hardware and software resources and offers a range of consulting services spanning a variety of areas related to scientific visualization, among which are: neuroanatomy, embryonic development, genome-related studies, geosciences, geography, physics and mathematics related disciplines. The National Laboratory for Advanced Scientific Visualization delivers services through three main infrastructure environments: the 3D fully immersive display system Cave, the high-resolution parallel visualization system Powerwall, and the high-resolution spherical display Earth Simulator. The entire visualization infrastructure is interconnected to a high-performance computing cluster (HPCC) called ADA, in honor of Ada Lovelace, considered to be the first computer programmer. The Cave is an extra-large, 3.6 m wide room with images projected on the front, left, right, and floor walls. Specialized crystal eyes LCD-shutter glasses provide a strong stereo depth perception, and a variety of tracking devices allow software to track the position of a user's hand, head and wand. The Powerwall is designed to bring large amounts of complex data together through parallel computing for team interaction and collaboration. This system is composed of 24 (6x4) high-resolution ultra-thin (2 mm) bezel monitors connected to a high-performance GPU cluster. The Earth Simulator is a large (60") high-resolution spherical display used for global-scale visualization of geophysical, meteorological, climate and ecological data. The HPCC ADA is a 1000+ core system that offers parallel computing resources to applications that require large quantities of memory as well as large, fast parallel storage. The entire system temperature is controlled by an energy- and space-efficient cooling solution based on large rear-door liquid-cooled heat exchangers. This state-of-the-art infrastructure will boost research activities in the region, offer a powerful scientific tool for teaching at undergraduate and graduate levels, and enhance association and cooperation with business-oriented organizations.

  11. Nonstationarity RC Workshop Report: Nonstationary Weather Patterns and Extreme Events Informing Design and Planning for Long-Lived Infrastructure

    DTIC Science & Technology

    2017-11-01

    Nonstationarity in the magnitude, intensity, and seasonality of climate matters for infrastructure projects, whose relevant design life often exceeds 30 years, a period over which there is uncertainty about the future statistical properties of climate at the time and spatial scales required for planning and design, and for assessing future operational conditions.

  12. AdvoCATE - User Guide

    NASA Technical Reports Server (NTRS)

    Denney, Ewen W.

    2015-01-01

    The basic vision of AdvoCATE is to automate the creation, manipulation, and management of large-scale assurance cases based on a formal theory of argument structures. Its main purposes are for creating and manipulating argument structures for safety assurance cases using the Goal Structuring Notation (GSN), and as a test bed and proof-of-concept for the formal theory of argument structures. AdvoCATE is available for Windows 7, Macintosh OSX, and Linux. Eventually, AdvoCATE will serve as a dashboard for safety related information and provide an infrastructure for safety decisions and management.

  13. Los Alamos National Laboratory Economic Analysis Capability Overview

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boero, Riccardo; Edwards, Brian Keith; Pasqualini, Donatella

    Los Alamos National Laboratory has developed two types of models to compute the economic impact of infrastructure disruptions. FastEcon is a fast-running model that estimates first-order economic impacts of large-scale events such as hurricanes and floods and can be used to identify the amount of economic activity that occurs in a specific area. LANL’s Computable General Equilibrium (CGE) model estimates more comprehensive static and dynamic economic impacts of a broader array of events and captures the interactions between sectors and industries when estimating economic impacts.

  14. AGIS: The ATLAS Grid Information System

    NASA Astrophysics Data System (ADS)

    Anisenkov, A.; Di Girolamo, A.; Klimentov, A.; Oleynik, D.; Petrosyan, A.; Atlas Collaboration

    2014-06-01

    ATLAS, a particle physics experiment at the Large Hadron Collider at CERN, produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS computing model embraces the Grid paradigm, with a high degree of decentralization and computing resources able to meet ATLAS requirements for petabyte-scale data operations. In this paper we describe the ATLAS Grid Information System (AGIS), designed to integrate configuration and status information about the resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services.

  15. Situating Green Infrastructure in Context: Adaptive Socio-Hydrology for Sustainable Cities - poster

    EPA Science Inventory

    The benefits of green infrastructure (GI) in controlling urban hydrologic processes have largely focused on practical matters like stormwater management, which drives the planning stage. Green Infrastructure design and implementation usually takes into account physical site chara...

  16. The use of digital imaging, video conferencing, and telepathology in histopathology: a national survey.

    PubMed

    Dennis, T; Start, R D; Cross, S S

    2005-03-01

    To undertake a large scale survey of histopathologists in the UK to determine the current infrastructure, training, and attitudes to digital pathology. A postal questionnaire was sent to 500 consultant histopathologists randomly selected from the membership of the Royal College of Pathologists in the UK. There was a response rate of 47%. Sixty four per cent of respondents had a digital camera mounted on their microscope, but only 12% had any sort of telepathology equipment. Thirty per cent used digital images in electronic presentations at meetings at least once a year and only 24% had ever used telepathology in a diagnostic situation. Fifty nine per cent had received no training in digital imaging. Fifty eight per cent felt that the medicolegal implications of duty of care were a barrier to its use. A large proportion of pathologists (69%) were interested in using video conferencing for remote attendance at multidisciplinary team meetings. There is a reasonable level of equipment and communications infrastructure among histopathologists in the UK but a very low level of training. There is resistance to the use of telepathology in the diagnostic context but enthusiasm for the use of video conferencing in multidisciplinary team meetings.

  17. XS: a FASTQ read simulator.

    PubMed

    Pratas, Diogo; Pinho, Armando J; Rodrigues, João M O S

    2014-01-16

    Emerging next-generation sequencing (NGS) is bringing, besides the naturally huge amounts of data, an avalanche of new specialized tools (for analysis, compression, alignment, among others) and large public and private network infrastructures. Therefore, a direct need for specific simulation tools for testing and benchmarking is arising, such as a flexible and portable FASTQ read simulator that does not need a reference sequence, yet is correctly prepared to produce approximately the same characteristics as real data. We present XS, a skilled FASTQ read simulation tool, flexible, portable (it does not need a reference sequence) and tunable in terms of sequence complexity. It has several running modes, depending on the time and memory available, and is aimed at testing computing infrastructures, namely cloud computing in large-scale projects, and at testing FASTQ compression algorithms. Moreover, XS offers the possibility of simulating the three main FASTQ components individually (headers, DNA sequences and quality scores). XS provides an efficient and convenient method for fast simulation of FASTQ files, such as those from Ion Torrent (currently uncovered by other simulators), Roche-454, Illumina and ABI-SOLiD sequencing machines. This tool is publicly available at http://bioinformatics.ua.pt/software/xs/.
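
    The three FASTQ components that XS can simulate independently are simple to illustrate; the generator below is generic Python, not XS itself.

      # Write a few synthetic FASTQ records (header, DNA sequence, quality string).
      import random

      random.seed(7)

      def fastq_record(read_id, length=50):
          header = f"@SIM:{read_id}"
          seq = "".join(random.choice("ACGT") for _ in range(length))
          qual = "".join(chr(33 + random.randint(20, 40)) for _ in range(length))  # Phred+33
          return f"{header}\n{seq}\n+\n{qual}\n"

      with open("simulated.fastq", "w") as fh:
          for i in range(3):
              fh.write(fastq_record(i))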

  18. InterMine Webservices for Phytozome (Rev2)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlson, Joseph; Goodstein, David; Rokhsar, Dan

    2014-07-10

    A data warehousing framework provides a useful infrastructure for providers and users of genomic data. For providers, the infrastructure gives them a consistent mechanism for extracting raw data, while for users, the web services supported by the software allow them to make complex, and often unique, queries of the data. Previously, phytozome.net used BioMart to provide the infrastructure. As the complexity, scale and diversity of the dataset have grown, we decided to implement an InterMine web service on our servers. This change was largely motivated by the ability to have a more complex table structure and a richer web reporting mechanism than BioMart. For InterMine to achieve its more complex database schema, it requires an XML description of the data and an appropriate loader. Unlimited one-to-many and many-to-many relationships between the tables can be enabled in the schema. We have implemented support for: 1) genomes and annotations for the data in Phytozome - this set is the 48 organisms currently stored in a back-end CHADO datastore, with data loaders that are modified versions of the CHADO data adapters from FlyMine; 2) Interproscan results for all proteins in the Phytozome database; 3) clusters of proteins grouped hierarchically by similarity; 4) Cufflinks results from tissue-specific RNA-Seq data of Phytozome organisms; 5) diversity data (GATK and SnpEFF results) from sets of individual organisms. The last two data types are new in this implementation of our web services. We anticipate that the scale of these data will increase considerably in the near future.
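
    The style of query such an InterMine web service enables can be sketched with the intermine Python client; the service URL, view fields and constraint below are illustrative assumptions, not a documented Phytozome endpoint.

      # Query an InterMine web service for genes (sketch; endpoint and fields assumed).
      from intermine.webservice import Service

      service = Service("https://phytozome-next.jgi.doe.gov/phytomine/service")  # assumed URL
      query = service.new_query("Gene")
      query.add_view("primaryIdentifier", "organism.shortName", "length")
      query.add_constraint("organism.shortName", "=", "A. thaliana")             # hypothetical filter

      for row in query.rows(size=10):
          print(row["primaryIdentifier"], row["organism.shortName"], row["length"])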

  19. Contribution of the infrasound technology to characterize large scale atmospheric disturbances and impact on infrasound monitoring

    NASA Astrophysics Data System (ADS)

    Blanc, Elisabeth; Le Pichon, Alexis; Ceranna, Lars; Pilger, Christoph; Charlton Perez, Andrew; Smets, Pieter

    2016-04-01

    The International Monitoring System (IMS) developed for the verification of the Comprehensive nuclear-Test-Ban Treaty (CTBT) provides a unique global description of atmospheric disturbances generating infrasound, such as extreme events (e.g. meteors, volcanoes, earthquakes, and severe weather) or human activity (e.g. explosions and supersonic airplanes). The analysis of the detected signals, recorded at global scales and over nearly 15 years at some stations, demonstrates that large-scale atmospheric disturbances strongly affect infrasound propagation. Their time scales vary from several tens of minutes to hours and days. Their effects are on average well resolved by current model predictions; however, accurate spatial and temporal description is lacking in both weather and climate models. This study reviews recent results using the infrasound technology to characterize these large-scale disturbances, including (i) wind fluctuations induced by gravity waves generating infrasound partial reflections and modifications of the infrasound waveguide, (ii) convection from thunderstorms and mountain waves generating gravity waves, (iii) stratospheric warming events which yield wind inversions in the stratosphere, and (iv) planetary waves which control the global atmospheric circulation. Improved knowledge of these disturbances and their assimilation into future models is an important objective of the ARISE (Atmospheric dynamics Research InfraStructure in Europe) project. This is essential in the context of the future verification of the CTBT, as enhanced atmospheric models are necessary to assess the IMS network performance at higher resolution, reduce source location errors, and improve characterization methods.

  20. Landscape trajectory of natural boreal forest loss as an impediment to green infrastructure.

    PubMed

    Svensson, Johan; Andersson, Jon; Sandström, Per; Mikusiński, Grzegorz; Jonsson, Bengt-Gunnar

    2018-06-08

    Loss of natural forests has been identified as a critical conservation challenge worldwide. This loss impedes the establishment of a functional green infrastructure as a spatiotemporally connected landscape-scale network of habitats enhancing biodiversity, favorable conservation status and ecosystem services. In many regions this loss is caused by forest clearcutting. Through retrospective satellite image analysis we assessed a 50-60 year spatiotemporal clearcutting impact trajectory on natural and near-natural boreal forests across a sizable and representative region from the Gulf of Bothnia to the Scandinavian Mountain Range in northern Fennoscandia. Our analysis broadly covers the whole forest clearcutting period and thus our study approach and results can be applied for comprehensive impact assessment of industrial forest management. Our results demonstrate profound disturbance of natural forest landscape configuration. The whole forest landscape is in a late phase of a transition from a natural or near-natural to a land-use modified state. Our results provide evidence of natural forest loss and spatial polarization at the regional scale, with a predominant share of valuable habitats left in the mountain area, whereas the inland area has been more severely impacted. We highlight the importance of interior forest areas as the most valuable biodiversity hotspots and the central axis of green infrastructure. Superimposing the effects of edge disturbance on forest fragmentation, the loss of interior forest entities further aggravates the conservation premises. Our results also show a loss of large contiguous forest patches and indicate patch size homogenization. The current forest protection share in the region is low and geographically imbalanced, as the absolute majority is located in remote and low-productivity sites in the mountain area. Our approach provides possibilities to identify forest areas for directed conservation actions in the form of new protection, restoration and nature-conservation-oriented forest management, for implementing a functional green infrastructure. This article is protected by copyright. All rights reserved.

  1. Assessing the risk posed by natural hazards to infrastructures

    NASA Astrophysics Data System (ADS)

    Eidsvig, Unni; Kristensen, Krister; Vidar Vangelsten, Bjørn

    2015-04-01

    Modern society is increasingly dependent on infrastructures to maintain its function, and disruption in one of the infrastructure systems may have severe consequences. The Norwegian municipalities have, according to legislation, a duty to carry out a risk and vulnerability analysis and to plan and prepare for emergencies in a short- and long-term perspective. Vulnerability analysis of the infrastructures and their interdependencies is an important part of this analysis. This paper proposes a model for assessing the risk posed by natural hazards to infrastructures. The model prescribes a three-level analysis with increasing level of detail, moving from qualitative to quantitative analysis. This paper focuses on the second level, which consists of a semi-quantitative analysis. The purpose of this analysis is to screen the scenarios of natural hazards threatening the infrastructures identified in the level 1 analysis and to investigate the need for further analyses, i.e. level 3 quantitative analyses. The proposed level 2 analysis considers the frequency of the natural hazard and different aspects of vulnerability, including the physical vulnerability of the infrastructure itself and the societal dependency on the infrastructure. An indicator-based approach is applied, ranking the indicators on a relative scale. The proposed indicators characterize the robustness of the infrastructure, the importance of the infrastructure, and the interdependencies between society and infrastructure affecting the potential for cascading effects. Each indicator is ranked on a 1-5 scale based on pre-defined ranking criteria. The aggregated risk estimate is a combination of the semi-quantitative vulnerability indicators and quantitative estimates of the frequency of the natural hazard and the number of users of the infrastructure. Case studies for two Norwegian municipalities are presented, where the risk to primary roads, water supply and power networks threatened by storms and landslides is assessed. The application examples show that the proposed model provides a useful tool for screening undesirable events, with the ultimate goal of reducing societal vulnerability.
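
    The level 2 screening described here combines 1-5 indicator rankings with quantitative frequency and exposure figures; the weights, values and aggregation rule in the sketch below are illustrative only.

      # Semi-quantitative screening: weighted indicator index combined with
      # hazard frequency and number of users (all numbers illustrative).
      indicators = {                      # 1 = favourable, 5 = unfavourable
          "physical_vulnerability": 4,
          "robustness": 3,
          "societal_dependency": 5,
          "interdependency": 2,
      }
      weights = {"physical_vulnerability": 0.3, "robustness": 0.2,
                 "societal_dependency": 0.3, "interdependency": 0.2}

      vulnerability = sum(weights[k] * v for k, v in indicators.items())   # stays on the 1-5 scale
      hazard_frequency = 0.05             # events per year
      users = 12000                       # people depending on the infrastructure

      risk_score = hazard_frequency * vulnerability * users
      print(f"vulnerability index: {vulnerability:.2f}  screening risk score: {risk_score:.0f}")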

  2. Developing eThread pipeline using SAGA-pilot abstraction for large-scale structural bioinformatics.

    PubMed

    Ragothaman, Anjani; Boddu, Sairam Chowdary; Kim, Nayong; Feinstein, Wei; Brylinski, Michal; Jha, Shantenu; Kim, Joohyun

    2014-01-01

    While most computational annotation approaches are sequence-based, threading methods are becoming increasingly attractive because of the predicted structural information that could uncover the underlying function. However, threading tools are generally compute-intensive and the number of protein sequences from even small genomes such as prokaryotes is large, typically many thousands, prohibiting their application as a genome-wide structural systems biology tool. To leverage its utility, we have developed a pipeline for eThread, a meta-threading protein structure modeling tool, that can use computational resources efficiently and effectively. We employ a pilot-based approach that supports seamless data and task-level parallelism and manages large variation in workload and computational requirements. Our scalable pipeline is deployed on Amazon EC2 and can efficiently select resources based upon task requirements. We present a runtime analysis to characterize the computational complexity of eThread and the EC2 infrastructure. Based on the results, we suggest a pathway to an optimized solution with respect to metrics such as time-to-solution or cost-to-solution. Our eThread pipeline can scale to support a large number of sequences and is expected to be a viable solution for genome-scale structural bioinformatics and structure-based annotation, particularly amenable to small genomes such as prokaryotes. The developed pipeline is easily extensible to other types of distributed cyberinfrastructure.

  3. Developing eThread Pipeline Using SAGA-Pilot Abstraction for Large-Scale Structural Bioinformatics

    PubMed Central

    Ragothaman, Anjani; Feinstein, Wei; Jha, Shantenu; Kim, Joohyun

    2014-01-01

    While most computational annotation approaches are sequence-based, threading methods are becoming increasingly attractive because of the predicted structural information that could uncover the underlying function. However, threading tools are generally compute-intensive and the number of protein sequences from even small genomes such as prokaryotes is large, typically many thousands, prohibiting their application as a genome-wide structural systems biology tool. To leverage its utility, we have developed a pipeline for eThread, a meta-threading protein structure modeling tool, that can use computational resources efficiently and effectively. We employ a pilot-based approach that supports seamless data and task-level parallelism and manages large variation in workload and computational requirements. Our scalable pipeline is deployed on Amazon EC2 and can efficiently select resources based upon task requirements. We present a runtime analysis to characterize the computational complexity of eThread and the EC2 infrastructure. Based on the results, we suggest a pathway to an optimized solution with respect to metrics such as time-to-solution or cost-to-solution. Our eThread pipeline can scale to support a large number of sequences and is expected to be a viable solution for genome-scale structural bioinformatics and structure-based annotation, particularly amenable to small genomes such as prokaryotes. The developed pipeline is easily extensible to other types of distributed cyberinfrastructure. PMID:24995285

  4. A model of urban scaling laws based on distance dependent interactions

    NASA Astrophysics Data System (ADS)

    Ribeiro, Fabiano L.; Meirelles, Joao; Ferreira, Fernando F.; Neto, Camilo Rodrigues

    2017-03-01

    Socio-economic properties of a city grow faster than linearly with population, the so-called superlinear scaling in a log-log plot. Conversely, the larger a city, the more efficient it is in the use of its infrastructure, leading to a sublinear scaling of these variables. In this work, we address a simple explanation for those scaling laws in cities, based on the interaction range between the citizens and on the fractal properties of the cities. To this purpose, we introduce a measure of social potential which captures the influence of social interaction on economic performance and the benefits of amenities in the case of infrastructure offered by the city. We assume that the population density depends on the fractal dimension and on the distance-dependent interactions between individuals. The model suggests that when the city interacts as a whole, and not just as a set of isolated parts, the socio-economic indicators improve. Moreover, the bigger the interaction range between citizens and amenities, the bigger the improvement of the socio-economic indicators and the lower the infrastructure costs of the city. We address how public policies could take advantage of these properties to improve city development while minimizing negative effects. Furthermore, the model predicts that the sum of the scaling exponents of socio-economic and infrastructure variables is 2, as observed in the literature. Simulations with an agent-based model are confronted with the theoretical approach and are compatible with the empirical evidence.

  5. A model of urban scaling laws based on distance dependent interactions.

    PubMed

    Ribeiro, Fabiano L; Meirelles, Joao; Ferreira, Fernando F; Neto, Camilo Rodrigues

    2017-03-01

    Socio-economic properties of a city grow faster than linearly with population, the so-called superlinear scaling in a log-log plot. Conversely, the larger a city, the more efficient it is in the use of its infrastructure, leading to a sublinear scaling of these variables. In this work, we address a simple explanation for those scaling laws in cities, based on the interaction range between the citizens and on the fractal properties of the cities. To this purpose, we introduce a measure of social potential which captures the influence of social interaction on economic performance and the benefits of amenities in the case of infrastructure offered by the city. We assume that the population density depends on the fractal dimension and on the distance-dependent interactions between individuals. The model suggests that when the city interacts as a whole, and not just as a set of isolated parts, the socio-economic indicators improve. Moreover, the bigger the interaction range between citizens and amenities, the bigger the improvement of the socio-economic indicators and the lower the infrastructure costs of the city. We address how public policies could take advantage of these properties to improve city development while minimizing negative effects. Furthermore, the model predicts that the sum of the scaling exponents of socio-economic and infrastructure variables is 2, as observed in the literature. Simulations with an agent-based model are confronted with the theoretical approach and are compatible with the empirical evidence.
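
    The scaling relation behind both versions of this record, Y = Y0 * N^beta with beta > 1 for socio-economic outputs and beta < 1 for infrastructure, is easy to check on data; the sketch below fits the exponents on synthetic cities and verifies that the two estimates sum to roughly 2, as the model predicts.

      # Fit urban scaling exponents in log-log space on synthetic city data.
      import numpy as np

      rng = np.random.default_rng(0)
      N = np.logspace(4, 7, 50)                                 # city populations
      gdp = 2.0 * N**1.15 * rng.lognormal(0.0, 0.1, N.size)     # superlinear output
      roads = 5.0 * N**0.85 * rng.lognormal(0.0, 0.1, N.size)   # sublinear infrastructure

      beta_gdp = np.polyfit(np.log(N), np.log(gdp), 1)[0]
      beta_roads = np.polyfit(np.log(N), np.log(roads), 1)[0]
      print(f"beta_gdp={beta_gdp:.2f}  beta_roads={beta_roads:.2f}  sum={beta_gdp + beta_roads:.2f}")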

  6. Nitrogen and phosphorus fluxes from watersheds of the northeast U.S. from 1930 to 2000: Role of anthropogenic nutrient inputs, infrastructure, and runoff

    NASA Astrophysics Data System (ADS)

    Hale, Rebecca L.; Grimm, Nancy B.; Vörösmarty, Charles J.; Fekete, Balazs

    2015-03-01

    An ongoing challenge for society is to harness the benefits of nutrients, nitrogen (N) and phosphorus (P), while minimizing their negative effects on ecosystems. While there is a good understanding of the mechanisms of nutrient delivery at small scales, it is unknown how nutrient transport and processing scale up to larger watersheds and whole regions over long time periods. We used a model that incorporates nutrient inputs to watersheds, hydrology, and infrastructure (sewers, wastewater treatment plants, and reservoirs) to reconstruct historic nutrient yields for the northeastern U.S. from 1930 to 2002. Over the study period, yields of nutrients increased significantly from some watersheds and decreased in others. As a result, at the regional scale, the total yield of N and P from the region did not change significantly. Temporal variation in regional N and P yields was correlated with runoff coefficient, but not with nutrient inputs. Spatial patterns of N and P yields were best predicted by nutrient inputs, but the correlation between inputs and yields across watersheds decreased over the study period. The effect of infrastructure on yields was minimal relative to the importance of soils and rivers. However, infrastructure appeared to alter the relationships between inputs and yields. The role of infrastructure changed over time and was important in creating spatial and temporal heterogeneity in nutrient input-yield relationships.

  7. A sense of life: computational and experimental investigations with models of biochemical and evolutionary processes.

    PubMed

    Mishra, Bud; Daruwala, Raoul-Sam; Zhou, Yi; Ugel, Nadia; Policriti, Alberto; Antoniotti, Marco; Paxia, Salvatore; Rejali, Marc; Rudra, Archisman; Cherepinsky, Vera; Silver, Naomi; Casey, William; Piazza, Carla; Simeoni, Marta; Barbano, Paolo; Spivak, Marina; Feng, Jiawu; Gill, Ofer; Venkatesh, Mysore; Cheng, Fang; Sun, Bing; Ioniata, Iuliana; Anantharaman, Thomas; Hubbard, E Jane Albert; Pnueli, Amir; Harel, David; Chandru, Vijay; Hariharan, Ramesh; Wigler, Michael; Park, Frank; Lin, Shih-Chieh; Lazebnik, Yuri; Winkler, Franz; Cantor, Charles R; Carbone, Alessandra; Gromov, Mikhael

    2003-01-01

    We collaborate in a research program aimed at creating a rigorous framework, experimental infrastructure, and computational environment for understanding, experimenting with, manipulating, and modifying a diverse set of fundamental biological processes at multiple scales and spatio-temporal modes. The novelty of our research is based on an approach that (i) requires coevolution of experimental science and theoretical techniques and (ii) exploits a certain universality in biology guided by a parsimonious model of evolutionary mechanisms operating at the genomic level and manifesting at the proteomic, transcriptomic, phylogenic, and other higher levels. Our current program in "systems biology" endeavors to marry large-scale biological experiments with the tools to ponder and reason about large, complex, and subtle natural systems. To achieve this ambitious goal, ideas and concepts are combined from many different fields: biological experimentation, applied mathematical modeling, computational reasoning schemes, and large-scale numerical and symbolic simulations. From a biological viewpoint, the basic issues are many: (i) understanding common and shared structural motifs among biological processes; (ii) modeling biological noise due to interactions among a small number of key molecules or loss of synchrony; (iii) explaining the robustness of these systems in spite of such noise; and (iv) cataloging multistatic behavior and adaptation exhibited by many biological processes.

  8. Uncertainty in Predicted Neighborhood-Scale Green Stormwater Infrastructure Performance Informed by field monitoring of Hydrologic Abstractions

    NASA Astrophysics Data System (ADS)

    Smalls-Mantey, L.; Jeffers, S.; Montalto, F. A.

    2013-12-01

    Human alterations to the environment provide infrastructure for housing and transportation but have drastically changed local hydrology. Excess stormwater runoff from impervious surfaces generates erosion, overburdens sewer infrastructure, and can pollute receiving water bodies. Increased attention to green stormwater management controls is based on the premise that some of these issues can be mitigated by capturing or slowing the flow of stormwater. However, our ability to predict actual green infrastructure facility performance using physical or statistical methods needs additional validation, and efforts to incorporate green infrastructure controls into hydrologic models are still in their infancy. We use more than three years of field monitoring data to derive facility-specific probability density functions characterizing the hydrologic abstractions provided by a stormwater treatment wetland, a streetside bioretention facility, and a green roof. The monitoring results are normalized by impervious area treated and incorporated into a neighborhood-scale agent model allowing probabilistic comparisons of the stormwater capture outcomes associated with alternative urban greening scenarios. Specifically, we compare the uncertainty introduced into the model by facility performance (as represented by the variability in the abstractions) to that introduced by precipitation variability and by the spatial patterns of emergence of different types of green infrastructure. The modeling results are used to update a discussion about the potential effectiveness of urban green infrastructure implementation plans.

  9. Software Engineering Infrastructure in a Large Virtual Campus

    ERIC Educational Resources Information Center

    Cristobal, Jesus; Merino, Jorge; Navarro, Antonio; Peralta, Miguel; Roldan, Yolanda; Silveira, Rosa Maria

    2011-01-01

    Purpose: The design, construction and deployment of a large virtual campus are a complex issue. Present virtual campuses are made of several software applications that complement e-learning platforms. In order to develop and maintain such virtual campuses, a complex software engineering infrastructure is needed. This paper aims to analyse the…

  10. A national assessment of green infrastructure and change for the conterminous United States using morphological image processing

    Treesearch

    J.D Wickham; Kurt H. Riitters; T.G. Wade; P. Vogt

    2010-01-01

    Green infrastructure is a popular framework for conservation planning. The main elements of green infrastructure are hubs and links. Hubs tend to be large areas of ‘natural’ vegetation and links tend to be linear features (e.g., streams) that connect hubs. Within the United States, green infrastructure projects can be characterized as: (...

  11. Cloud computing for comparative genomics

    PubMed Central

    2010-01-01

    Background Large comparative genomics studies and tools are becoming increasingly more compute-expensive as the number of available genome sequences continues to rise. The capacity and cost of local computing infrastructures are likely to become prohibitive with the increase, especially as the breadth of questions continues to rise. Alternative computing architectures, in particular cloud computing environments, may help alleviate this increasing pressure and enable fast, large-scale, and cost-effective comparative genomics strategies going forward. To test this, we redesigned a typical comparative genomics algorithm, the reciprocal smallest distance algorithm (RSD), to run within Amazon's Elastic Computing Cloud (EC2). We then employed the RSD-cloud for ortholog calculations across a wide selection of fully sequenced genomes. Results We ran more than 300,000 RSD-cloud processes within the EC2. These jobs were farmed simultaneously to 100 high capacity compute nodes using the Amazon Web Service Elastic Map Reduce and included a wide mix of large and small genomes. The total computation time took just under 70 hours and cost a total of $6,302 USD. Conclusions The effort to transform existing comparative genomics algorithms from local compute infrastructures is not trivial. However, the speed and flexibility of cloud computing environments provides a substantial boost with manageable cost. The procedure designed to transform the RSD algorithm into a cloud-ready application is readily adaptable to similar comparative genomics problems. PMID:20482786

  12. Cloud computing for comparative genomics.

    PubMed

    Wall, Dennis P; Kudtarkar, Parul; Fusaro, Vincent A; Pivovarov, Rimma; Patil, Prasad; Tonellato, Peter J

    2010-05-18

    Large comparative genomics studies and tools are becoming increasingly compute-expensive as the number of available genome sequences continues to rise. The capacity and cost of local computing infrastructures are likely to become prohibitive with the increase, especially as the breadth of questions continues to rise. Alternative computing architectures, in particular cloud computing environments, may help alleviate this increasing pressure and enable fast, large-scale, and cost-effective comparative genomics strategies going forward. To test this, we redesigned a typical comparative genomics algorithm, the reciprocal smallest distance algorithm (RSD), to run within Amazon's Elastic Computing Cloud (EC2). We then employed the RSD-cloud for ortholog calculations across a wide selection of fully sequenced genomes. We ran more than 300,000 RSD-cloud processes within the EC2. These jobs were farmed simultaneously to 100 high-capacity compute nodes using Amazon Web Services Elastic MapReduce and included a wide mix of large and small genomes. The total computation time was just under 70 hours and the cost totaled $6,302 USD. The effort to transform existing comparative genomics algorithms from local compute infrastructures is not trivial. However, the speed and flexibility of cloud computing environments provide a substantial boost with manageable cost. The procedure designed to transform the RSD algorithm into a cloud-ready application is readily adaptable to similar comparative genomics problems.
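
    The job-farming pattern described above can be illustrated with a short Python sketch that distributes pairwise genome comparisons across worker processes. The `run_rsd` function is a placeholder, not the actual RSD pipeline, and in the study the equivalent partitioning was handed to Elastic MapReduce on EC2 rather than a local process pool:

```python
from itertools import combinations
from multiprocessing import Pool

def run_rsd(pair):
    """Placeholder for one reciprocal smallest distance (RSD) ortholog computation.

    In the study each such job ran the BLAST-based RSD pipeline on a genome pair;
    here we only return the pair to illustrate the job-farming pattern.
    """
    genome_a, genome_b = pair
    return genome_a, genome_b, "orthologs.tsv"

def farm_ortholog_jobs(genomes, workers=8):
    """Distribute all pairwise genome comparisons across a pool of workers."""
    pairs = list(combinations(genomes, 2))
    with Pool(processes=workers) as pool:
        return pool.map(run_rsd, pairs)

if __name__ == "__main__":
    results = farm_ortholog_jobs([f"genome_{i}" for i in range(10)])
    print(len(results), "pairwise comparisons completed")
```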

  13. Redox Flow Batteries, Hydrogen and Distributed Storage.

    PubMed

    Dennison, C R; Vrubel, Heron; Amstutz, Véronique; Peljo, Pekka; Toghill, Kathryn E; Girault, Hubert H

    2015-01-01

    Social, economic, and political pressures are causing a shift in the global energy mix, with a preference toward renewable energy sources. In order to realize widespread implementation of these resources, large-scale storage of renewable energy is needed. Among the proposed energy storage technologies, redox flow batteries offer many unique advantages. The primary limitation of these systems, however, is their limited energy density, which necessitates very large installations. In order to enhance the energy storage capacity of these systems, we have developed a unique dual-circuit architecture which enables two levels of energy storage: first in the conventional electrolyte, and then through the formation of hydrogen. Moreover, we have begun a pilot-scale demonstration project to investigate the scalability and technical readiness of this approach. This combination of conventional energy storage and hydrogen production is well aligned with the current trajectory of modern energy and mobility infrastructure. The combination of these two means of energy storage enables the possibility of an energy economy dominated by renewable resources.

  14. Understanding the recurrent large-scale green tide in the Yellow Sea: temporal and spatial correlations between multiple geographical, aquacultural and biological factors.

    PubMed

    Liu, Feng; Pang, Shaojun; Chopin, Thierry; Gao, Suqin; Shan, Tifeng; Zhao, Xiaobo; Li, Jing

    2013-02-01

    The coast of Jiangsu Province in China - where Ulva prolifera has always been first spotted before developing into green tides - is uniquely characterized by a huge intertidal radial mudflat. Results showed that: (1) propagules of U. prolifera have been consistently present in the seawater and sediments of this mudflat and varied with location and season; (2) over 50,000 tons of fermented chicken manure have been applied annually from March to May in coastal animal aquaculture ponds, and thereafter the waste water has been discharged into the radial mudflat, intensifying eutrophication; and (3) free-floating U. prolifera could become stranded on any floating infrastructure in coastal waters, including large-scale Porphyra farming rafts. For a truly integrated management of the coastal zone, reductions in nutrient inputs and control of the effluents of the coastal pond systems are needed to control eutrophication and prevent green tides in the future. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. Identifying and modeling the structural discontinuities of human interactions

    NASA Astrophysics Data System (ADS)

    Grauwin, Sebastian; Szell, Michael; Sobolevsky, Stanislav; Hövel, Philipp; Simini, Filippo; Vanhoof, Maarten; Smoreda, Zbigniew; Barabási, Albert-László; Ratti, Carlo

    2017-04-01

    The idea of a hierarchical spatial organization of society lies at the core of seminal theories in human geography that have strongly influenced our understanding of social organization. Along the same lines, the recent availability of large-scale human mobility and communication data has offered novel quantitative insights hinting at a strong geographical confinement of human interactions within neighboring regions, extending to local levels within countries. However, models of human interaction largely ignore this effect. Here, we analyze several country-wide networks of telephone calls - both mobile and landline - and in either case uncover a systematic decrease of communication induced by borders which we identify as the missing variable in state-of-the-art models. Using this empirical evidence, we propose an alternative modeling framework that naturally stylizes the damping effect of borders. We show that this new notion substantially improves the predictive power of widely used interaction models. This increases our ability to understand, model and predict social activities and to plan the development of infrastructures across multiple scales.

  16. Identifying and modeling the structural discontinuities of human interactions

    PubMed Central

    Grauwin, Sebastian; Szell, Michael; Sobolevsky, Stanislav; Hövel, Philipp; Simini, Filippo; Vanhoof, Maarten; Smoreda, Zbigniew; Barabási, Albert-László; Ratti, Carlo

    2017-01-01

    The idea of a hierarchical spatial organization of society lies at the core of seminal theories in human geography that have strongly influenced our understanding of social organization. Along the same lines, the recent availability of large-scale human mobility and communication data has offered novel quantitative insights hinting at a strong geographical confinement of human interactions within neighboring regions, extending to local levels within countries. However, models of human interaction largely ignore this effect. Here, we analyze several country-wide networks of telephone calls - both mobile and landline - and in either case uncover a systematic decrease of communication induced by borders which we identify as the missing variable in state-of-the-art models. Using this empirical evidence, we propose an alternative modeling framework that naturally stylizes the damping effect of borders. We show that this new notion substantially improves the predictive power of widely used interaction models. This increases our ability to understand, model and predict social activities and to plan the development of infrastructures across multiple scales. PMID:28443647

  17. Identifying and modeling the structural discontinuities of human interactions.

    PubMed

    Grauwin, Sebastian; Szell, Michael; Sobolevsky, Stanislav; Hövel, Philipp; Simini, Filippo; Vanhoof, Maarten; Smoreda, Zbigniew; Barabási, Albert-László; Ratti, Carlo

    2017-04-26

    The idea of a hierarchical spatial organization of society lies at the core of seminal theories in human geography that have strongly influenced our understanding of social organization. Along the same lines, the recent availability of large-scale human mobility and communication data has offered novel quantitative insights hinting at a strong geographical confinement of human interactions within neighboring regions, extending to local levels within countries. However, models of human interaction largely ignore this effect. Here, we analyze several country-wide networks of telephone calls - both mobile and landline - and in either case uncover a systematic decrease of communication induced by borders which we identify as the missing variable in state-of-the-art models. Using this empirical evidence, we propose an alternative modeling framework that naturally stylizes the damping effect of borders. We show that this new notion substantially improves the predictive power of widely used interaction models. This increases our ability to understand, model and predict social activities and to plan the development of infrastructures across multiple scales.

  18. Invisible water, visible impact: groundwater use and Indian agriculture under climate change

    NASA Astrophysics Data System (ADS)

    Zaveri, Esha; Grogan, Danielle S.; Fisher-Vanden, Karen; Frolking, Steve; Lammers, Richard B.; Wrenn, Douglas H.; Prusevich, Alexander; Nicholas, Robert E.

    2016-08-01

    India is one of the world’s largest food producers, making the sustainability of its agricultural system of global significance. Groundwater irrigation underpins India’s agriculture, currently boosting crop production by enough to feed 170 million people. Groundwater overexploitation has led to drastic declines in groundwater levels, threatening to push this vital resource out of reach for millions of small-scale farmers who are the backbone of India’s food security. Historically, losing access to groundwater has decreased agricultural production and increased poverty. We take a multidisciplinary approach to assess the climate change challenges facing India’s agricultural system and the effectiveness of large-scale water infrastructure projects designed to meet these challenges. We find that even in areas that experience climate-change-induced precipitation increases, expansion of irrigated agriculture will require increasing amounts of unsustainable groundwater. The large proposed national river-linking project has limited capacity to alleviate groundwater stress. Thus, without intervention, poverty and food insecurity in rural India are likely to worsen.

  19. An explicit GIS-based river basin framework for aquatic ecosystem conservation in the Amazon

    NASA Astrophysics Data System (ADS)

    Venticinque, Eduardo; Forsberg, Bruce; Barthem, Ronaldo; Petry, Paulo; Hess, Laura; Mercado, Armando; Cañas, Carlos; Montoya, Mariana; Durigan, Carlos; Goulding, Michael

    2016-11-01

    Despite large-scale infrastructure development, deforestation, mining and petroleum exploration in the Amazon Basin, relatively little attention has been paid to the management scale required for the protection of wetlands, fisheries and other aspects of aquatic ecosystems. This is due, in part, to the enormous size, multinational composition and interconnected nature of the Amazon River system, as well as to the absence of an adequate spatial model for integrating data across the entire Amazon Basin. In this data article we present a spatially uniform multi-scale GIS framework that was developed especially for the analysis, management and monitoring of various aspects of aquatic systems in the Amazon Basin. The Amazon GIS-Based River Basin Framework is accessible as an ESRI geodatabase at doi:10.5063/F1BG2KX8.

  20. The Updating of Geospatial Base Data

    NASA Astrophysics Data System (ADS)

    Alrajhi, Muhamad N.; Konecny, Gottfried

    2018-04-01

    Topographic mapping issues concern area coverage at different scales and the age of the maps. The age of a map is determined by the system of updating. The United Nations (UNGGIM) has attempted to track global map coverage at various scale ranges, which has greatly improved in recent decades. However, the poor state of updating of base maps is still a global problem. In Saudi Arabia, large-scale mapping is carried out for all urban, suburban and rural areas by aerial surveys. Updating is carried out by remapping every 5 to 10 years. Due to the rapid urban development this is not satisfactory, but faster update methods are foreseen through the use of high-resolution satellite imagery and improved object-oriented geodatabase structures, which will permit various survey technologies to be used to update the photogrammetrically established geodatabases. The long-term goal is to create a geodata infrastructure such as exists in Great Britain or Germany.

  1. Full Scale Software Support on Mobile Lightweight Devices by Utilization of All Types of Wireless Technologies

    NASA Astrophysics Data System (ADS)

    Krejcar, Ondrej

    New kinds of mobile lightweight devices can run full-scale applications with the same comfort as desktop devices, with only a few limitations. One of them is insufficient transfer speed over wireless connectivity. The main area of interest is a model of a radio-frequency based system enhancement for locating and tracking users of a mobile information system. The experimental framework prototype uses a wireless network infrastructure to let a mobile lightweight device determine its indoor or outdoor position. The user's location is used for data prebuffering and for pushing information from the server to the user's PDA. All server data are saved as artifacts along with their position information in a building or larger-area environment. Accessing prebuffered data on a mobile lightweight device can greatly improve the response time needed to view large multimedia data. This fact can help with the design of new full-scale applications for mobile lightweight devices.
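
    A toy illustration of the location-driven prebuffering decision, assuming a hypothetical server-side index of artifacts with planar coordinates (the real system derives the user position from the wireless infrastructure rather than receiving it directly):

```python
from math import hypot

# Hypothetical server-side artifact index: identifier -> (x, y) position in metres.
ARTIFACTS = {
    "floor_plan.pdf": (12.0, 40.0),
    "machine_manual.mp4": (55.0, 18.0),
    "sensor_dashboard.html": (60.0, 22.0),
}

def select_for_prebuffer(user_pos, radius_m=25.0):
    """Pick the artifacts within a radius of the located user, nearest first."""
    ux, uy = user_pos
    nearby = sorted((hypot(x - ux, y - uy), name) for name, (x, y) in ARTIFACTS.items())
    return [name for dist, name in nearby if dist <= radius_m]

# A user located at (58, 20) would have the two nearby artifacts pushed to the device.
print(select_for_prebuffer((58.0, 20.0)))
```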

  2. Machine learning for Big Data analytics in plants.

    PubMed

    Ma, Chuang; Zhang, Hao Helen; Wang, Xiangfeng

    2014-12-01

    Rapid advances in high-throughput genomic technology have enabled biology to enter the era of 'Big Data' (large datasets). The plant science community not only needs to build its own Big-Data-compatible parallel computing and data management infrastructures, but also to seek novel analytical paradigms to extract information from the overwhelming amounts of data. Machine learning offers promising computational and analytical solutions for the integrative analysis of large, heterogeneous and unstructured datasets on the Big-Data scale, and is gradually gaining popularity in biology. This review introduces the basic concepts and procedures of machine-learning applications and envisages how machine learning could interface with Big Data technology to facilitate basic research and biotechnology in the plant sciences. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. A model for decentralised grey wastewater treatment system in Singapore public housing.

    PubMed

    Lim, J; Jern, Ng Wun; Chew, K L; Kallianpur, V

    2002-01-01

    Global concerns over the sustainable use of natural resources provided the impetus for research into water reclamation from wastewater within the Singapore context. The objective of the research is to study and develop a water infrastructure system as an integral element of architecture and the urbanscape, thereby reducing the large land area requirements associated with centralised treatment plants. Decentralised plants were considered so as to break up the large contiguous plot of land otherwise needed into smaller integrated fragments, which can be incorporated within the housing scheme. This liberated more usable space on the ground plane of the urban housing master plan, enabling water-edge and waterscape relationships within both the private and public domains at varying scales.

  4. Review of the Need for a Large-scale Test Facility for Research on the Effects of Extreme Winds on Structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. G. Little

    1999-03-01

    The Idaho National Engineering and Environmental Laboratory (INEEL), through the US Department of Energy (DOE), has proposed that a large-scale wind test facility (LSWTF) be constructed to study, at full scale, the behavior of low-rise structures under simulated extreme wind conditions. To determine the need for, and potential benefits of, such a facility, the Idaho Operations Office of the DOE requested that the National Research Council (NRC) perform an independent assessment of the role and potential value of an LSWTF in the overall context of wind engineering research. The NRC established the Committee to Review the Need for a Large-Scale Test Facility for Research on the Effects of Extreme Winds on Structures, under the auspices of the Board on Infrastructure and the Constructed Environment, to perform this assessment. This report conveys the results of the committee's deliberations as well as its findings and recommendations. Data developed at large scale would enhance the understanding of how structures, particularly light-frame structures, are affected by extreme winds (e.g., hurricanes, tornadoes, severe thunderstorms, and other events). With a large-scale wind test facility, full-sized structures, such as site-built or manufactured housing and small commercial or industrial buildings, could be tested under a range of wind conditions in a controlled, repeatable environment. At this time, the US has no facility specifically constructed for this purpose. During the course of this study, the committee was confronted by three difficult questions: (1) does the lack of a facility equate to a need for the facility? (2) is need alone sufficient justification for the construction of a facility? and (3) would the benefits derived from information produced in an LSWTF justify the costs of producing that information? The committee's evaluation of the need and justification for an LSWTF was shaped by these realities.

  5. District-Scale Green Infrastructure Scenarios for the Zidell Development Site, City of Portland

    EPA Pesticide Factsheets

    The report outlines technical assistance to develop green infrastructure scenarios for the Zidell Yards site consistent with the constraints of a recently remediated brownfield that can be implemented within a 15-20 year time horizon.

  6. Infrastructure features outperform environmental variables explaining rabbit abundance around motorways.

    PubMed

    Planillo, Aimara; Malo, Juan E

    2018-01-01

    Human disturbance is widespread across landscapes in the form of roads that alter wildlife populations. Knowing which road features are responsible for the species response, and their relevance in comparison with environmental variables, will provide useful information for effective conservation measures. We sampled the relative abundance of European rabbits, a very widespread species, in motorway verges at regional scale, in an area with large variability in environmental and infrastructure conditions. Environmental variables included vegetation structure, plant productivity, distance to water sources, and altitude. Infrastructure characteristics were the type of vegetation in verges, verge width, traffic volume, and the presence of embankments. We performed a variance partitioning analysis to determine the relative importance of the two sets of variables for rabbit abundance. Additionally, we identified the most important variables and their effects by model averaging after AICc-based model selection on hypothesis-based models. As a group, infrastructure features explained four times more variability in rabbit abundance than environmental variables, with the effects of the former being critical in motorway stretches located in altered landscapes with no available habitat for rabbits, such as agricultural fields. Model selection and Akaike weights showed that verge width and traffic volume are the most important variables explaining the rabbit abundance index, with positive and negative effects, respectively. In the light of these results, the response of the species to the infrastructure can be modulated through the modification of motorway features, some of which can be managed in the design phase. The identification of such features leads to suggestions for improvement through low-cost corrective measures and conservation plans. As a general indication, keeping motorway verges less than 10 m wide will prevent high densities of rabbits and avoid the unwanted effects that rabbit populations can generate in some areas.
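
    The AICc-based model selection and Akaike weighting used in analyses like this follow standard formulas; a short Python sketch, with placeholder log-likelihoods standing in for fitted candidate models (all numbers below are illustrative, not from the paper):

```python
import math

def aicc(log_lik, k, n):
    """Corrected Akaike Information Criterion for a model with k parameters and n observations."""
    aic = 2 * k - 2 * log_lik
    return aic + (2 * k * (k + 1)) / (n - k - 1)

def akaike_weights(scores):
    """Relative weights of candidate models computed from their AICc scores."""
    best = min(scores)
    rel = [math.exp(-(s - best) / 2.0) for s in scores]
    total = sum(rel)
    return [r / total for r in rel]

# Placeholder log-likelihoods and parameter counts for hypothetical candidate models.
candidates = {"verge_width": (-210.4, 3), "traffic": (-212.1, 3), "vegetation": (-218.9, 4)}
n_sites = 120
scores = {name: aicc(ll, k, n_sites) for name, (ll, k) in candidates.items()}
for (name, score), weight in zip(scores.items(), akaike_weights(list(scores.values()))):
    print(f"{name}: AICc={score:.1f}, weight={weight:.2f}")
```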

  7. The Unseeing State: How Ideals of Modernity Have Undermined Innovation in Africa's Urban Water Systems.

    PubMed

    Nilsson, David

    2016-12-01

    In contrast to the European historical experience, Africa's urban infrastructural systems are characterised by stagnation long before demand has been saturated. Water infrastructures have been stabilised as systems predominantly providing services for elites, with millions of poor people lacking basic services in the cities. What is puzzling is that so little emphasis has been placed on innovation and the adaptation of the colonial technological paradigm to better suit the local and current socio-economic contexts. Based on historical case studies of Kampala and Nairobi, this paper argues that the lack of innovation in African urban water infrastructure can be understood using Pinch and Bijker's concept of technological closure, and by looking at water technology from the perspective of its embedded values and ideology. Large-scale water technology became part of African leaders' strategies to build prosperous nations and cities after decolonisation, and the ideological purpose of infrastructure may have been much more important than previously understood. Water technology had reached a state of closure in Europe and then came to represent modernisation and progress in the colonial context. It has continued to serve a similar symbolic purpose after independence, with old norms essentially being preserved. Recent sector reforms have defined problems predominantly as of an economic and institutional nature, while state actors have become 'unseeing' vis-à-vis controversies within the technological systems themselves. In order to induce socio-technical innovation towards equality in urban infrastructure services, it will be necessary to understand the broader incentive structure that governs the relevant social groups, such as governments, donors, water suppliers and the consumers, as well as power structures and political accountability.

  8. Conceptual Design of Optimized Fossil Energy Systems with Capture and Sequestration of Carbon Dioxide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nils Johnson; Joan Ogden

    2010-12-31

    In this final report, we describe research results from Phase 2 of a technical/economic study of fossil hydrogen energy systems with carbon dioxide (CO2) capture and storage (CCS). CO2 capture and storage, or alternatively, CO2 capture and sequestration, involves capturing CO2 from large point sources and then injecting it into deep underground reservoirs for long-term storage. By preventing CO2 emissions into the atmosphere, this technology has significant potential to reduce greenhouse gas (GHG) emissions from fossil-based facilities in the power and industrial sectors. Furthermore, the application of CCS to power plants and hydrogen production facilities can reduce CO2 emissions associated with electric vehicles (EVs) and hydrogen fuel cell vehicles (HFCVs) and, thus, can also improve GHG emissions in the transportation sector. This research specifically examines strategies for transitioning to large-scale coal-derived energy systems with CCS for both hydrogen fuel production and electricity generation. A particular emphasis is on the development of spatially-explicit modeling tools for examining how these energy systems might develop in real geographic regions. We employ an integrated modeling approach that addresses all infrastructure components involved in the transition to these energy systems. The overall objective is to better understand the system design issues and economics associated with the widespread deployment of hydrogen and CCS infrastructure in real regions. Specific objectives of this research are to: develop improved techno-economic models for all components required for the deployment of both hydrogen and CCS infrastructure; develop novel modeling methods that combine detailed spatial data with optimization tools to explore spatially-explicit transition strategies; conduct regional case studies to explore how these energy systems might develop in different regions of the United States; and examine how the design and cost of coal-based H2 and CCS infrastructure depend on geography and location.

  9. Risk assessment of sewer condition using artificial intelligence tools: application to the SANEST sewer system.

    PubMed

    Sousa, V; Matos, J P; Almeida, N; Saldanha Matos, J

    2014-01-01

    Operation, maintenance and rehabilitation comprise the main concerns of wastewater infrastructure asset management. Given the nature of the service provided by a wastewater system and the characteristics of the supporting infrastructure, technical issues are relevant to support asset management decisions. In particular, in densely urbanized areas served by large, complex and aging sewer networks, the sustainability of the infrastructures largely depends on the implementation of an efficient asset management system. The efficiency of such a system may be enhanced with technical decision support tools. This paper describes the role of artificial intelligence tools such as artificial neural networks and support vector machines for assisting the planning of operation and maintenance activities of wastewater infrastructures. A case study of the application of this type of tool to the wastewater infrastructures of Sistema de Saneamento da Costa do Estoril is presented.

  10. Performance Analysis, Design Considerations, and Applications of Extreme-Scale In Situ Infrastructures

    DOE PAGES

    Ayachit, Utkarsh; Bauer, Andrew; Duque, Earl P. N.; ...

    2016-11-01

    A key trend facing extreme-scale computational science is the widening gap between computational and I/O rates, and the challenge that follows is how to best gain insight from simulation data when it is increasingly impractical to save it to persistent storage for subsequent visual exploration and analysis. One approach to this challenge is centered around the idea of in situ processing, where visualization and analysis processing is performed while data is still resident in memory. Our paper examines several key design and performance issues related to the idea of in situ processing at extreme scale on modern platforms: scalability, overhead, performance measurement and analysis, comparison and contrast with a traditional post hoc approach, and interfacing with simulation codes. We illustrate these principles in practice with studies, conducted on large-scale HPC platforms, that include a miniapplication and multiple science application codes, one of which demonstrates in situ methods in use at greater than 1M-way concurrency.

  11. Autonomous smart sensor network for full-scale structural health monitoring

    NASA Astrophysics Data System (ADS)

    Rice, Jennifer A.; Mechitov, Kirill A.; Spencer, B. F., Jr.; Agha, Gul A.

    2010-04-01

    The demands of aging infrastructure require effective methods for structural monitoring and maintenance. Wireless smart sensor networks offer the ability to enhance structural health monitoring (SHM) practices through the utilization of onboard computation to achieve distributed data management. Such an approach is scalable to the large number of sensor nodes required for high-fidelity modal analysis and damage detection. While smart sensor technology is not new, the number of full-scale SHM applications has been limited. This slow progress is due, in part, to the complex network management issues that arise when moving from a laboratory setting to a full-scale monitoring implementation. This paper presents flexible network management software that enables continuous and autonomous operation of wireless smart sensor networks for full-scale SHM applications. The software components combine sleep/wake cycling for enhanced power management with threshold detection for triggering network wide tasks, such as synchronized sensing or decentralized modal analysis, during periods of critical structural response.

  12. BioPig: a Hadoop-based analytic toolkit for large-scale sequence data.

    PubMed

    Nordberg, Henrik; Bhatia, Karan; Wang, Kai; Wang, Zhong

    2013-12-01

    The recent revolution in sequencing technologies has led to an exponential growth of sequence data. As a result, most of the current bioinformatics tools become obsolete as they fail to scale with data. To tackle this 'data deluge', here we introduce the BioPig sequence analysis toolkit as one of the solutions that scale to data and computation. We built BioPig on the Apache's Hadoop MapReduce system and the Pig data flow language. Compared with traditional serial and MPI-based algorithms, BioPig has three major advantages: first, BioPig's programmability greatly reduces development time for parallel bioinformatics applications; second, testing BioPig with up to 500 Gb sequences demonstrates that it scales automatically with size of data; and finally, BioPig can be ported without modification on many Hadoop infrastructures, as tested with Magellan system at National Energy Research Scientific Computing Center and the Amazon Elastic Compute Cloud. In summary, BioPig represents a novel program framework with the potential to greatly accelerate data-intensive bioinformatics analysis.
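
    BioPig itself expresses analyses in Pig Latin over Hadoop; as a rough, hypothetical analogue of the same scaling pattern, the following Python script can serve as a Hadoop Streaming mapper/reducer pair for counting k-mers in FASTA-formatted input (the k-mer size and the two-role layout are illustrative choices, not BioPig's API):

```python
#!/usr/bin/env python3
"""Rough Hadoop Streaming analogue of a k-mer counting job (BioPig itself uses Pig Latin).

Run without arguments as the mapper, or with 'reduce' as the first argument as the reducer.
"""
import sys

K = 21  # hypothetical k-mer size

def mapper():
    for line in sys.stdin:
        line = line.strip()
        if not line or line.startswith(">"):
            continue  # skip FASTA headers and blank lines
        seq = line.upper()
        for i in range(len(seq) - K + 1):
            print(f"{seq[i:i + K]}\t1")

def reducer():
    # Hadoop Streaming delivers mapper output sorted by key, so counts can be
    # accumulated per k-mer in a single pass.
    current, count = None, 0
    for line in sys.stdin:
        if not line.strip():
            continue
        kmer, value = line.rstrip("\n").split("\t")
        if kmer != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = kmer, 0
        count += int(value)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    reducer() if len(sys.argv) > 1 and sys.argv[1] == "reduce" else mapper()
```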

  13. Assessment of Change in Green Infrastructure Components Using Morphological Spatial Pattern Analysis for the Conterminous United States

    EPA Science Inventory

    Green infrastructure is a widely used framework for conservation planning in the United States and elsewhere. The main components of green infrastructure are hubs and corridors. Hubs are large areas of natural vegetation, and corridors are linear features that connect hubs. W...

  14. A National Assessment of Change in Green Infrastructure Using Mathematical Morphology

    EPA Science Inventory

    Green infrastructure is a popular framework for conservation planning. The main elements of green infrastructure are hubs and links. Hubs tend to be large areas of natural vegetation and links tend to be linear features (e.g., streams) that connect hubs. Within the United States...

  15. BluePyOpt: Leveraging Open Source Software and Cloud Infrastructure to Optimise Model Parameters in Neuroscience.

    PubMed

    Van Geit, Werner; Gevaert, Michael; Chindemi, Giuseppe; Rössert, Christian; Courcol, Jean-Denis; Muller, Eilif B; Schürmann, Felix; Segev, Idan; Markram, Henry

    2016-01-01

    At many scales in neuroscience, appropriate mathematical models take the form of complex dynamical systems. Parameterizing such models to conform to the multitude of available experimental constraints is a global non-linear optimisation problem with a complex fitness landscape, requiring numerical techniques to find suitable approximate solutions. Stochastic optimisation approaches, such as evolutionary algorithms, have been shown to be effective, but often the setting up of such optimisations and the choice of a specific search algorithm and its parameters is non-trivial, requiring domain-specific expertise. Here we describe BluePyOpt, a Python package targeted at the broad neuroscience community to simplify this task. BluePyOpt is an extensible framework for data-driven model parameter optimisation that wraps and standardizes several existing open-source tools. It simplifies the task of creating and sharing these optimisations, and the associated techniques and knowledge. This is achieved by abstracting the optimisation and evaluation tasks into various reusable and flexible discrete elements according to established best-practices. Further, BluePyOpt provides methods for setting up both small- and large-scale optimisations on a variety of platforms, ranging from laptops to Linux clusters and cloud-based compute infrastructures. The versatility of the BluePyOpt framework is demonstrated by working through three representative neuroscience specific use cases.
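
    BluePyOpt wraps established evolutionary algorithms behind a higher-level API; the sketch below is not that API, but a minimal, self-contained evolutionary loop in Python over two hypothetical conductance parameters, with a toy fitness function standing in for the neuron simulation and feature scoring a real optimisation would use:

```python
import random

def evaluate(params):
    """Toy fitness: squared distance of the parameters from a hypothetical target.

    A real BluePyOpt evaluation would run a neuron simulation and score it against
    experimental features; this stub only illustrates the optimisation loop.
    """
    target = {"gNa": 0.12, "gK": 0.036}
    return sum((params[k] - target[k]) ** 2 for k in target)

def evolve(bounds, pop_size=20, generations=50, mutation=0.1):
    """Minimal (mu + lambda)-style evolutionary search over bounded parameters."""
    pop = [{k: random.uniform(lo, hi) for k, (lo, hi) in bounds.items()} for _ in range(pop_size)]
    for _ in range(generations):
        offspring = []
        for parent in pop:
            child = {}
            for k, value in parent.items():
                lo, hi = bounds[k]
                step = random.gauss(0.0, mutation * (hi - lo))
                child[k] = min(max(value + step, lo), hi)
            offspring.append(child)
        pop = sorted(pop + offspring, key=evaluate)[:pop_size]  # keep the fittest
    return pop[0]

best = evolve({"gNa": (0.0, 0.5), "gK": (0.0, 0.1)})
print(best, evaluate(best))
```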

  16. A Dynamic Framework for Water Security

    NASA Astrophysics Data System (ADS)

    Srinivasan, Veena; Konar, Megan; Sivapalan, Murugesu

    2017-04-01

    Water security is a multi-faceted problem, going beyond mere balancing of supply and demand. Conventional attempts to quantify water security rely on static indices at a particular place and point in time. While these are simple and scalable, they lack predictive or explanatory power. 1) Most static indices focus on specific spatial scales and largely ignore cross-scale feedbacks between human and water systems. 2) They fail to account for the increasing spatial specialization in the modern world - some regions are cities, others are agricultural breadbaskets; so water security means different things in different places. Human adaptation to environmental change necessitates a dynamic view of water security. We present a framework that defines water security as an emergent outcome of a coupled socio-hydrologic system. Over the medium term (5-25 years), water security models might hold governance, culture and infrastructure constant, but allow humans to respond to changes and thus predict how water security would evolve. But over very long time-frames (25-100 years), a society's values, norms and beliefs may themselves evolve; these in turn may prompt changes in policy, governance and infrastructure. Predictions of water security in the long term involve accounting for such regime shifts in the cultural and political context of a watershed by allowing the governing equations of the models to change.

  17. Regional and National Use of Semi-Natural and Natural Depressional Wetlands in Green Infrastructure

    NASA Astrophysics Data System (ADS)

    Lane, C.; D'Amico, E.

    2016-12-01

    Depressional wetlands are frequently amongst the first aquatic systems to be exposed to pollutants from terrestrial source areas. Wetland functions include the finite ability to process nutrients and other pollutants. Through assimilation or sequestration of pollutants, depressional wetlands can affect other waters. While the functions of wetlands are well known, the abundance of depressional wetlands throughout the United States is not well known. Recent estimates conclude that approximately 16% of the freshwater wetlands of the conterminous United States may be depressional wetlands, or putative "geographically isolated wetlands" (Lane and D'Amico JAWRA 2016 52(3):705-722). However, there remains uncertainty in the impact or effects of depressional wetlands on other waters. We present geographic information system analyses showing the abundance and types of depressional wetlands effectively serving as green infrastructure throughout the conterminous U.S. We furthermore analyze the landscape position of depressional wetlands intersecting potentially pollutant-laden surficial flow paths from specific land uses (e.g., depressional wetlands embedded in agricultural landscapes). We discuss how similarities and differences in types and abundances of depressional wetlands between and among ecoregions of the conterminous US provide an opportunity for wise management at broad geographic scales. These data may suggest utility in including wetland depressions in large-scale coupled hydrological and nutrient modeling.

  18. Development of Green Fuels From Algae - The University of Tulsa

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crunkleton, Daniel; Price, Geoffrey; Johannes, Tyler

    The general public has become increasingly aware of the pitfalls encountered with the continued reliance on fossil fuels in the industrialized world. In response, the scientific community is in the process of developing non-fossil fuel technologies that can supply adequate energy while also being environmentally friendly. In this project, we concentrate on green fuels, which we define as those capable of being produced from renewable and sustainable resources in a way that is compatible with the current transportation fuel infrastructure. One route to green fuels that has received relatively little attention begins with algae as a feedstock. Algae are a diverse group of aquatic, photosynthetic organisms, generally categorized as either macroalgae (i.e. seaweed) or microalgae. Microalgae constitute a spectacularly diverse group of prokaryotic and eukaryotic unicellular organisms and account for approximately 50% of global organic carbon fixation. The PIs have subdivided the proposed research program into three main research areas, all of which are essential to the development of commercially viable algae fuels compatible with current energy infrastructure. In the fuel development focus, catalytic cracking reactions of algae oils are optimized. In the species development project, genetic engineering is used to create microalgae strains that are capable of high-level hydrocarbon production. For the modeling effort, the construction of multi-scale models of algae production was prioritized, including integrating small-scale hydrodynamic models of algae production and reactor design and large-scale design optimization models.

  19. Managing a tier-2 computer centre with a private cloud infrastructure

    NASA Astrophysics Data System (ADS)

    Bagnasco, Stefano; Berzano, Dario; Brunetti, Riccardo; Lusso, Stefano; Vallero, Sara

    2014-06-01

    In a typical scientific computing centre, several applications coexist and share a single physical infrastructure. An underlying Private Cloud infrastructure eases the management and maintenance of such heterogeneous applications (such as multipurpose or application-specific batch farms, Grid sites, interactive data analysis facilities and others), allowing dynamic allocation of resources to any application. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques. Such infrastructures are being deployed in some large centres (see e.g. the CERN Agile Infrastructure project), but with several open-source tools reaching maturity this is becoming viable also for smaller sites. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 centre, an Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The private cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem and the OpenWRT Linux distribution (used for network virtualization); future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and OCCI.

  20. The scaling structure of the global road network

    PubMed Central

    Giometto, Andrea; Shai, Saray; Bertuzzo, Enrico; Mucha, Peter J.; Rinaldo, Andrea

    2017-01-01

    Because of increasing global urbanization and its immediate consequences, including changes in patterns of food demand, circulation and land use, the next century will witness a major increase in the extent of paved roads built worldwide. To model the effects of this increase, it is crucial to understand whether possible self-organized patterns are inherent in the global road network structure. Here, we use the largest updated database comprising all major roads on the Earth, together with global urban and cropland inventories, to suggest that road length distributions within croplands are indistinguishable from urban ones, once rescaled to account for the difference in mean road length. Such similarity extends to road length distributions within urban or agricultural domains of a given area. We find two distinct regimes for the scaling of the mean road length with the associated area, holding in general at small and at large values of the latter. In suitably large urban and cropland domains, we find that mean and total road lengths increase linearly with their domain area, differently from earlier suggestions. Scaling regimes suggest that simple and universal mechanisms regulate urban and cropland road expansion at the global scale. As such, our findings bear implications for global road infrastructure growth based on land-use change and for planning policies sustaining urban expansions. PMID:29134071

  1. The scaling structure of the global road network.

    PubMed

    Strano, Emanuele; Giometto, Andrea; Shai, Saray; Bertuzzo, Enrico; Mucha, Peter J; Rinaldo, Andrea

    2017-10-01

    Because of increasing global urbanization and its immediate consequences, including changes in patterns of food demand, circulation and land use, the next century will witness a major increase in the extent of paved roads built worldwide. To model the effects of this increase, it is crucial to understand whether possible self-organized patterns are inherent in the global road network structure. Here, we use the largest updated database comprising all major roads on the Earth, together with global urban and cropland inventories, to suggest that road length distributions within croplands are indistinguishable from urban ones, once rescaled to account for the difference in mean road length. Such similarity extends to road length distributions within urban or agricultural domains of a given area. We find two distinct regimes for the scaling of the mean road length with the associated area, holding in general at small and at large values of the latter. In suitably large urban and cropland domains, we find that mean and total road lengths increase linearly with their domain area, differently from earlier suggestions. Scaling regimes suggest that simple and universal mechanisms regulate urban and cropland road expansion at the global scale. As such, our findings bear implications for global road infrastructure growth based on land-use change and for planning policies sustaining urban expansions.
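
    A small sketch of the log-log regression typically used to estimate such scaling exponents, here with synthetic (hypothetical) domain areas and mean road lengths; a fitted slope near 1 in the large-area regime would correspond to the linear scaling reported above:

```python
import numpy as np

def scaling_exponent(areas, mean_lengths):
    """Fit <L> ~ c * A**b by least squares in log-log space; returns (b, c)."""
    b, log_c = np.polyfit(np.log(areas), np.log(mean_lengths), 1)
    return b, np.exp(log_c)

# Synthetic example: domain areas in km^2 and mean road lengths in km with mild noise.
rng = np.random.default_rng(0)
areas = np.array([1.0, 5.0, 10.0, 50.0, 100.0, 500.0, 1000.0])
mean_lengths = 0.8 * areas * np.exp(rng.normal(0.0, 0.05, areas.size))
print(scaling_exponent(areas, mean_lengths))  # fitted exponent should be close to 1
```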

  2. Augmented Reality 2.0

    NASA Astrophysics Data System (ADS)

    Schmalstieg, Dieter; Langlotz, Tobias; Billinghurst, Mark

    Augmented Reality (AR) was first demonstrated in the 1960s, but only recently have technologies emerged that can be used to easily deploy AR applications to many users. Camera-equipped cell phones with significant processing power and graphics abilities provide an inexpensive and versatile platform for AR applications, while the social networking technology of Web 2.0 provides a large-scale infrastructure for collaboratively producing and distributing geo-referenced AR content. This combination of widely used mobile hardware and Web 2.0 software allows the development of a new type of AR platform that can be used on a global scale. In this paper we describe the Augmented Reality 2.0 concept and present existing work on mobile AR and web technologies that could be used to create AR 2.0 applications.

  3. Moving image analysis to the cloud: A case study with a genome-scale tomographic study

    NASA Astrophysics Data System (ADS)

    Mader, Kevin; Stampanoni, Marco

    2016-01-01

    Over the last decade, the time required to measure a terabyte of microscopic imaging data has gone from years to minutes. This shift has moved many of the challenges away from experimental design and measurement to scalable storage, organization, and analysis. As many scientists and scientific institutions lack training and competencies in these areas, major bottlenecks have arisen and led to substantial delays and gaps between measurement, understanding, and dissemination. We present in this paper a framework for analyzing large 3D datasets using cloud-based computational and storage resources. We demonstrate its applicability by showing the setup and costs associated with the analysis of a genome-scale study of bone microstructure. We then evaluate the relative advantages and disadvantages associated with local versus cloud infrastructures.

  4. Does the Room Matter? Active Learning in Traditional and Enhanced Lecture Spaces

    PubMed Central

    Stoltzfus, Jon R.; Libarkin, Julie

    2016-01-01

    SCALE-UP–type classrooms, originating with the Student-Centered Active Learning Environment with Upside-down Pedagogies project, are designed to facilitate active learning by maximizing opportunities for interactions between students and embedding technology in the classroom. Positive impacts when active learning replaces lecture are well documented, both in traditional lecture halls and SCALE-UP–type classrooms. However, few studies have carefully analyzed student outcomes when comparable active learning–based instruction takes place in a traditional lecture hall and a SCALE-UP–type classroom. Using a quasi-experimental design, we compared student perceptions and performance between sections of a nonmajors biology course, one taught in a traditional lecture hall and one taught in a SCALE-UP–type classroom. Instruction in both sections followed a flipped model that relied heavily on cooperative learning and was as identical as possible given the infrastructure differences between classrooms. Results showed that students in both sections thought that SCALE-UP infrastructure would enhance performance. However, measures of actual student performance showed no difference between the two sections. We conclude that, while SCALE-UP–type classrooms may facilitate implementation of active learning, it is the active learning and not the SCALE-UP infrastructure that enhances student performance. As a consequence, we suggest that institutions can modify existing classrooms to enhance student engagement without incorporating expensive technology. PMID:27909018

  5. A Screen Space GPGPU Surface LIC Algorithm for Distributed Memory Data Parallel Sort Last Rendering Infrastructures

    NASA Astrophysics Data System (ADS)

    Loring, B.; Karimabadi, H.; Rortershteyn, V.

    2015-10-01

    The surface line integral convolution (LIC) visualization technique produces dense visualization of vector fields on arbitrary surfaces. We present a screen space surface LIC algorithm for use in distributed memory data parallel sort last rendering infrastructures. The motivations for our work are to support analysis of datasets that are too large to fit in the main memory of a single computer and compatibility with prevalent parallel scientific visualization tools such as ParaView and VisIt. By working in screen space using OpenGL we can leverage the computational power of GPUs when they are available and run without them when they are not. We address efficiency and performance issues that arise from the transformation of data from physical to screen space by selecting an alternate screen space domain decomposition. We analyze the algorithm's scaling behavior with and without GPUs on two high performance computing systems using data from turbulent plasma simulations.
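
    The paper's method runs in screen space on GPUs; as a point of reference only, a naive CPU sketch of the underlying line integral convolution idea (average a noise texture along streamlines of the vector field) can be written in Python:

```python
import numpy as np

def lic(vx, vy, noise, length=20):
    """Naive LIC: for each pixel, average noise values along the local streamline."""
    h, w = noise.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            total, count = noise[y, x], 1
            for direction in (1.0, -1.0):          # trace forward and backward
                px, py = float(x), float(y)
                for _ in range(length):
                    ix, iy = int(round(px)), int(round(py))
                    u, v = vx[iy, ix], vy[iy, ix]
                    norm = (u * u + v * v) ** 0.5
                    if norm < 1e-9:
                        break                      # stagnation point
                    px += direction * u / norm
                    py += direction * v / norm
                    jx, jy = int(round(px)), int(round(py))
                    if not (0 <= jx < w and 0 <= jy < h):
                        break                      # left the image
                    total += noise[jy, jx]
                    count += 1
            out[y, x] = total / count
    return out

# Example: a circular vector field convolved with white noise.
h = w = 64
ys, xs = np.mgrid[0:h, 0:w]
vx, vy = -(ys - h / 2.0), (xs - w / 2.0)
image = lic(vx, vy, np.random.rand(h, w))
```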

  6. Grid accounting service: state and future development

    NASA Astrophysics Data System (ADS)

    Levshina, T.; Sehgal, C.; Bockelman, B.; Weitzel, D.; Guru, A.

    2014-06-01

    During the last decade, large-scale federated distributed infrastructures have been continually developed and expanded. One of the crucial components of a cyber-infrastructure is an accounting service that collects data on resource utilization and on the identity of the users consuming those resources. The accounting service is important for verifying pledged resource allocations for particular groups and users, providing reports for funding agencies and resource providers, and understanding hardware provisioning requirements. It can also be used for end-to-end troubleshooting as well as billing purposes. In this work we describe Gratia, a federated accounting service jointly developed at Fermilab and the Holland Computing Center at the University of Nebraska-Lincoln. The Open Science Grid, Fermilab, HCC, and several other institutions have used Gratia in production for several years. The current development activities include expanding Virtual Machines provisioning information, XSEDE allocation usage accounting, and Campus Grids resource utilization. We also identify the direction of future work: improvement and expansion of Cloud accounting, persistent and elastic storage space allocation, and the incorporation of WAN and LAN network metrics.

  7. European security framework for healthcare.

    PubMed

    Ruotsalainen, Pekka; Pohjonen, Hanna

    2003-01-01

    eHealth and telemedicine services are promising business areas in Europe. It is clear that eHealth products and services will be sold and ordered from a distance and across national borders in the future. However, there are many barriers to overcome. For both national and pan-European eHealth and telemedicine applications a common security framework is needed. These frameworks set the security requirements needed for cross-border eHealth services. The next step is to build a security infrastructure which is independent of technical platforms. Most of the European eHealth platforms are regional or territorial. Some countries are looking for a Public Key Infrastructure, but no large-scale solutions exist in healthcare. There is no clear candidate solution for a Europe-wide interoperable eHealth platform. Cross-platform integration seems to be the most practical integration method at a European level in the short run. The use of the Internet as a European integration platform is a promising solution in the long run.

  8. Developing a data infrastructure for a learning health system: the PORTAL network

    PubMed Central

    McGlynn, Elizabeth A; Lieu, Tracy A; Durham, Mary L; Bauck, Alan; Laws, Reesa; Go, Alan S; Chen, Jersey; Feigelson, Heather Spencer; Corley, Douglas A; Young, Deborah Rohm; Nelson, Andrew F; Davidson, Arthur J; Morales, Leo S; Kahn, Michael G

    2014-01-01

    The Kaiser Permanente & Strategic Partners Patient Outcomes Research To Advance Learning (PORTAL) network engages four healthcare delivery systems (Kaiser Permanente, Group Health Cooperative, HealthPartners, and Denver Health) and their affiliated research centers to create a new national network infrastructure that builds on existing relationships among these institutions. PORTAL is enhancing its current capabilities by expanding the scope of the common data model, paying particular attention to incorporating patient-reported data more systematically, implementing new multi-site data governance procedures, and integrating the PCORnet PopMedNet platform across our research centers. PORTAL is partnering with clinical research and patient experts to create cohorts of patients with a common diagnosis (colorectal cancer), a rare diagnosis (adolescents and adults with severe congenital heart disease), and adults who are overweight or obese, including those with pre-diabetes or diabetes, to conduct large-scale observational comparative effectiveness research and pragmatic clinical trials across diverse clinical care settings. PMID:24821738

  9. VLSI Implementation of a 2.8 Gevent/s Packet-Based AER Interface with Routing and Event Sorting Functionality

    PubMed Central

    Scholze, Stefan; Schiefer, Stefan; Partzsch, Johannes; Hartmann, Stephan; Mayr, Christian Georg; Höppner, Sebastian; Eisenreich, Holger; Henker, Stephan; Vogginger, Bernhard; Schüffny, Rene

    2011-01-01

    State-of-the-art large-scale neuromorphic systems require sophisticated spike event communication between units of the neural network. We present a high-speed communication infrastructure for a waferscale neuromorphic system, based on application-specific neuromorphic communication ICs in a field-programmable gate array (FPGA)-maintained environment. The ICs implement configurable axonal delays, as required for certain types of dynamic processing or for emulating spike-based learning among distant cortical areas. Measurements are presented which show the efficacy of these delays in influencing the behavior of neuromorphic benchmarks. The specialized, dedicated address-event-representation communication in most current systems requires separate, low-bandwidth configuration channels. In contrast, the configuration of the waferscale neuromorphic system is also handled by the digital packet-based pulse channel, which transmits configuration data at the full bandwidth otherwise used for pulse transmission. The overall so-called pulse communication subgroup (ICs and FPGA) delivers a factor of 25–50 higher event transmission rate than other current neuromorphic communication infrastructures. PMID:22016720

  10. A Screen Space GPGPU Surface LIC Algorithm for Distributed Memory Data Parallel Sort Last Rendering Infrastructures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loring, Burlen; Karimabadi, Homa; Rortershteyn, Vadim

    2014-07-01

    The surface line integral convolution (LIC) visualization technique produces dense visualization of vector fields on arbitrary surfaces. We present a screen space surface LIC algorithm for use in distributed memory data parallel sort last rendering infrastructures. The motivations for our work are to support analysis of datasets that are too large to fit in the main memory of a single computer and compatibility with prevalent parallel scientific visualization tools such as ParaView and VisIt. By working in screen space using OpenGL we can leverage the computational power of GPUs when they are available and run without them when they are not. We address efficiency and performance issues that arise from the transformation of data from physical to screen space by selecting an alternate screen space domain decomposition. We analyze the algorithm's scaling behavior with and without GPUs on two high performance computing systems using data from turbulent plasma simulations.

  11. Hospital nursing leadership-led interventions increased genomic awareness and educational intent in Magnet settings.

    PubMed

    Calzone, Kathleen A; Jenkins, Jean; Culp, Stacey; Badzek, Laurie

    2017-11-13

    The Precision Medicine Initiative will accelerate genomic discoveries that improve health care, necessitating a genomically competent workforce. This study assessed leadership team (administrator/educator) year-long interventions to improve registered nurses' (RNs) capacity to integrate genomics into practice. We examined genomic competency outcomes in 8,150 RNs. Awareness and intention to learn more increased compared with controls. Findings suggest that achieving genomic competency requires a longer intervention and support strategies such as infrastructure and policies. Leadership played a role in mobilizing staff, resources, and supporting infrastructure to sustain a large-scale competency effort on an institutional basis. Results demonstrate that genomic workforce competency can be attained with leadership support and sufficient time. Our study provides evidence of the critical role health-care leaders play in facilitating genomic integration into health care to improve patient outcomes. Genomics' impact on quality, safety, and cost indicates that a leader-initiated national competency effort is achievable and warranted. Published by Elsevier Inc.

  12. Governance and Risk Management of Network and Information Security: The Role of Public Private Partnerships in Managing the Existing and Emerging Risks

    NASA Astrophysics Data System (ADS)

    Navare, Jyoti; Gemikonakli, Orhan

    Globalisation and new technology have opened the gates to more security risks. As the strategic importance of communication networks and information increased, threats to the security and safety of communication infrastructures, as well as to information stored in and/or transmitted over them, increased significantly. The development of self-replicating programmes has become a nightmare for Internet users. Leading companies and strategic organisations were not immune to attacks; they were also "hacked" and overtaken by intruders. Incidents of recent years have also shown that national or regional crises may trigger cyber attacks at large scale. Experts forecast that cyber wars are likely to take the stage as tension mounts between developed societies. New risks such as cyber-attacks, network terrorism and the disintegration of traditional infrastructures have somewhat blurred the boundaries of operation and control. This paper considers risk management and governance, looking more specifically at the implications for emerging economies.

  13. CDP - Adaptive Supervisory Control and Data Acquisition (SCADA) Technology for Infrastructure Protection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marco Carvalho; Richard Ford

    2012-05-14

    Supervisory Control and Data Acquisition (SCADA) Systems are a type of Industrial Control System characterized by the centralized (or hierarchical) monitoring and control of geographically dispersed assets. SCADA systems combine acquisition and network components to provide data gathering, transmission, and visualization for centralized monitoring and control. However, these integrated capabilities, especially when built over legacy systems and protocols, generally result in vulnerabilities that can be exploited by attackers, with potentially disastrous consequences. Our research project proposal was to investigate new approaches for secure and survivable SCADA systems. In particular, we were interested in the resilience and adaptability of large-scale mission-critical monitoring and control infrastructures. Our research proposal was divided into two main tasks. The first task was centered on the design and investigation of algorithms for survivable SCADA systems and a prototype framework demonstration. The second task was centered on the characterization and demonstration of the proposed approach in illustrative scenarios (simulated or emulated).

  14. Seismically reactivated Hattian slide in Kashmir, Northern Pakistan

    NASA Astrophysics Data System (ADS)

    Schneider, Jean F.

    2009-07-01

    The Pakistan 2005 earthquake, of magnitude 7.6, caused severe damage to landscape and infrastructure, in addition to numerous casualties. The event reactivated Hattian Slide, creating a rock avalanche in a location where earlier mass movements had already occurred, as indicated by satellite imagery and ground investigation. The slide originated on Dana Hill, in the upper catchment area of Hattian on Karli Stream, a tributary of Jhelum River, Pakistan, and buried the hamlet Dandbeh and several nearby farms. A natural dam accumulated, impounding two lakes, the larger one threatening parts of downstream Hattian Village with flooding. An access road and artificial spillways had to be constructed in a very short time to minimize the flooding risk. As this example shows, when pointing out the risk of large-scale damage to population and infrastructure by way of hazard indication maps of seismically active regions, and preparing for alleviation of that risk, it is advisable to consider the complete Holocene history of the slopes involved.

  15. A model of urban scaling laws based on distance dependent interactions

    PubMed Central

    Ribeiro, Fabiano L.; Meirelles, Joao; Ferreira, Fernando F.

    2017-01-01

    Socio-economic properties of a city grow faster than linearly with its population, the so-called superlinear scaling. Conversely, the larger a city, the more efficient it is in the use of its infrastructure, leading to a sublinear scaling of these variables. In this work, we propose a simple explanation for those scaling laws in cities based on the interaction range between citizens and on the fractal properties of cities. To this purpose, we introduced a measure of social potential which captures the influence of social interaction on economic performance and the benefits of amenities in the case of infrastructure offered by the city. We assumed that the population density depends on the fractal dimension and on the distance-dependent interactions between individuals. The model suggests that when the city interacts as a whole, and not just as a set of isolated parts, the socio-economic indicators improve. Moreover, the bigger the interaction range between citizens and amenities, the bigger the improvement of the socio-economic indicators and the lower the infrastructure costs of the city. We discuss how public policies could take advantage of these properties to improve city development while minimizing negative effects. Furthermore, the model predicts that the scaling exponents of socio-economic and infrastructure variables sum to 2, as observed in the literature. Simulations with an agent-based model are confronted with the theoretical approach and are compatible with the empirical evidence. PMID:28405381
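
    The scaling relations discussed here take the form Y ≈ a·N^β, with β > 1 for socio-economic outputs and β < 1 for infrastructure. As a hedged illustration only (the city data below are invented, not from the paper), the exponent can be estimated by ordinary least squares in log-log space:

```python
import numpy as np

# Hypothetical city populations and a hypothetical socio-economic output (e.g. GDP).
population = np.array([5e4, 2e5, 8e5, 3e6, 1e7])
output     = np.array([1.1e9, 5.2e9, 2.4e10, 1.0e11, 3.9e11])

# Fit log(Y) = beta * log(N) + log(a); beta > 1 indicates superlinear scaling.
beta, log_a = np.polyfit(np.log(population), np.log(output), 1)
print(f"estimated scaling exponent beta = {beta:.2f}")
```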

  16. A network-based framework for assessing infrastructure resilience: a case study of the London metro system.

    PubMed

    Chopra, Shauhrat S; Dillon, Trent; Bilec, Melissa M; Khanna, Vikas

    2016-05-01

    Modern society is increasingly dependent on the stability of a complex system of interdependent infrastructure sectors. It is imperative to build resilience of large-scale infrastructures like metro systems for addressing the threat of natural disasters and man-made attacks in urban areas. Analysis is needed to ensure that these systems are capable of withstanding and containing unexpected perturbations, and to develop heuristic strategies for guiding the design of more resilient networks in the future. We present a comprehensive, multi-pronged framework that analyses information on network topology, spatial organization and passenger flow to understand the resilience of the London metro system. The topology of the London metro system is not fault tolerant in terms of maintaining connectivity at the periphery of the network since it does not exhibit small-world properties. The passenger strength distribution follows a power law, suggesting that while the London metro system is robust to random failures, it is vulnerable to disruptions at a few critical stations. The analysis further identifies particular sources of structural and functional vulnerabilities that need to be mitigated for improving the resilience of the London metro network. The insights from our framework provide useful strategies to build resilience for both existing and upcoming metro systems. © 2016 The Author(s).
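
    The "robust to random failures, vulnerable at a few critical stations" behaviour can be illustrated on a toy graph by tracking the giant connected component under random versus degree-targeted node removal. This sketch uses networkx on a synthetic scale-free graph as a stand-in for a metro topology; it does not reproduce the authors' framework or data.

```python
import random
import networkx as nx

G = nx.barabasi_albert_graph(300, 2, seed=1)   # synthetic stand-in for a metro network

def giant_fraction(graph):
    """Fraction of nodes in the largest connected component."""
    if graph.number_of_nodes() == 0:
        return 0.0
    return len(max(nx.connected_components(graph), key=len)) / graph.number_of_nodes()

def after_removal(graph, nodes):
    g = graph.copy()
    g.remove_nodes_from(nodes)
    return giant_fraction(g)

k = 30
random_nodes = random.Random(0).sample(list(G.nodes), k)
hub_nodes    = [n for n, _ in sorted(G.degree, key=lambda x: -x[1])[:k]]

print("giant component after random failures :", round(after_removal(G, random_nodes), 2))
print("giant component after targeted attacks:", round(after_removal(G, hub_nodes), 2))
```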

  17. Time-Varying, Multi-Scale Adaptive System Reliability Analysis of Lifeline Infrastructure Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gearhart, Jared Lee; Kurtz, Nolan Scot

    2014-09-01

    The majority of current societal and economic needs world-wide are met by the existing networked, civil infrastructure. Because the cost of managing such infrastructure is high and increases with time, risk-informed decision making is essential for those with management responsibilities for these systems. To address such concerns, a methodology has been developed that accounts for new information, deterioration, component models, component importance, group importance, network reliability, hierarchical structure organization, and efficiency concerns. This methodology analyzes the use of new information through the lens of adaptive importance sampling for structural reliability problems. Deterioration, multi-scale bridge models, and time-variant component importance are investigated for a specific network. Furthermore, both bridge and pipeline networks are studied for group and component importance, as well as for hierarchical structures in the context of specific networks. Efficiency is the primary driver throughout this study. With this risk-informed approach, those responsible for management can address deteriorating infrastructure networks in an organized manner.
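
    Importance sampling accelerates structural reliability estimates by drawing samples from a density centred near the failure region and reweighting by the density ratio. The sketch below estimates a small failure probability for a deliberately simple, hypothetical limit state; it is only meant to illustrate the mechanism, not the lifeline-network models used in the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
g = lambda x: 4.0 - x                      # hypothetical limit state: failure when x > 4

# Sample from an importance density N(4, 1) shifted onto the failure region,
# then reweight each sample by the ratio of nominal to importance densities.
n = 10_000
x = rng.normal(loc=4.0, scale=1.0, size=n)
w = stats.norm.pdf(x, 0, 1) / stats.norm.pdf(x, 4, 1)
p_fail = np.mean((g(x) < 0) * w)

print(f"estimated failure probability ~ {p_fail:.2e}")   # reference value: 1 - Phi(4) ~ 3.2e-5
```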

  18. Infrastructure for large space telescopes

    NASA Astrophysics Data System (ADS)

    MacEwen, Howard A.; Lillie, Charles F.

    2016-10-01

    It is generally recognized (e.g., in the National Aeronautics and Space Administration response to recent congressional appropriations) that future space observatories must be serviceable, even if they are orbiting in deep space (e.g., around the Sun-Earth libration point, SEL2). On the basis of this legislation, we believe that budgetary considerations throughout the foreseeable future will require that large, long-lived astrophysics missions must be designed as evolvable semipermanent observatories that will be serviced using an operational, in-space infrastructure. We believe that the development of this infrastructure will include the design and development of a small to mid-sized servicing vehicle (MiniServ) as a key element of an affordable infrastructure for in-space assembly and servicing of future space vehicles. This can be accomplished by the adaptation of technology developed over the past half-century into a vehicle approximately the size of the ascent stage of the Apollo Lunar Module to provide some of the servicing capabilities that will be needed by very large telescopes located in deep space in the near future (2020s and 2030s). We specifically address the need for a detailed study of these servicing requirements and the current proposals for using presently available technologies to provide the appropriate infrastructure.

  19. Marine Research Infrastructure collaboration in the COOPLUS project framework - Promoting synergies for marine ecosystems studies

    NASA Astrophysics Data System (ADS)

    Beranzoli, L.; Best, M.; Embriaco, D.; Favali, P.; Juniper, K.; Lo Bue, N.; Lara-Lopez, A.; Materia, P.; Ó Conchubhair, D.; O'Rourke, E.; Proctor, R.; Weller, R. A.

    2017-12-01

    Understanding the effects of multiple drivers on marine ecosystems at various scales, from regional (climate and ocean circulation) to local (seafloor gas emissions and harmful underwater noise), requires long time-series of integrated and standardised datasets. Large-scale research infrastructures for ocean observation are able to provide such time-series for a variety of ocean process physical parameters (mass and energy exchanges among surface, water column and benthic boundary layer) that constitute important and necessary measures of environmental conditions and change over time. Information deduced from these data is essential for the study, modelling and prediction of marine ecosystem changes and can reveal and potentially confirm deterioration and threats. The COOPLUS European Commission project brings together research infrastructures with the aim of coordinating multilateral cooperation among RIs and identifying common priorities, actions, instruments and resources. COOPLUS will produce a Strategic Research and Innovation Agenda (SRIA), a shared roadmap for mid- to long-term collaboration. In particular, the marine RIs collaborating in COOPLUS, namely the European Multidisciplinary Seafloor and water column Observatory (EMSO, Europe), the Ocean Observatories Initiative (OOI, USA), Ocean Networks Canada (ONC), and the Integrated Marine Observing System (IMOS, Australia), represent a source of important data for researchers of marine ecosystems. The RIs can then, in turn, receive suggestions from researchers for implementing new measurements, stimulating cross-cutting collaborations, and advancing data integration and standardisation within their user community. This poster provides a description of EMSO, OOI, ONC and IMOS for the benefit of marine ecosystem studies and presents examples of where the analyses of time-series have revealed noteworthy environmental conditions, temporal trends and events.

  20. Systematic Planning of Adaptation Options for Pluvial Flood Resilience

    NASA Astrophysics Data System (ADS)

    Babovic, Filip; Mijic, Ana; Madani, Kaveh

    2016-04-01

    Different elements of infrastructure and the built environment vary in their ability to quickly adapt to changing circumstances. Furthermore, many of the slowest, and often largest, infrastructure adaptations offer the greatest improvements to system performance. In the context of the decarbonisation of individual buildings, Brand (1995) identified six potential layers of adaptation based on their renewal times, ranging from daily to multi-decadal time scales. Similar layers exist in urban areas with regard to Water Sensitive Urban Design (WSUD) and pluvial flood risk. These layers range from appliances within buildings to changes in the larger urban form. Changes in low-level elements can be quickly implemented but are limited in effectiveness, while larger interventions occur at a much slower pace but offer greater benefits as part of systemic change. In the context of urban adaptation this multi-layered approach provides information on how to order urban adaptations. It helps to identify potential pathways by prioritising relatively quick adaptations to be implemented in the short term while identifying options which require more long-term planning with respect to both uncertainty and flexibility. This information is particularly critical in the evolution towards more resilient and water sensitive cities (Brown, 2009). Several potential adaptation options were identified, ranging from small- to large-scale adaptations. The time needed for each adaptation to be implemented was estimated, curves representing the added drainage capacity per year were established, and the total drainage capacity added by each option was then determined. The methodology was applied to a case study in the Cranbrook Catchment in the north east of London and provided insight into how best to renew or extend the life of critical ageing infrastructure.

  1. Policy Model of Sustainable Infrastructure Development (Case Study : Bandarlampung City, Indonesia)

    NASA Astrophysics Data System (ADS)

    Persada, C.; Sitorus, S. R. P.; Marimin; Djakapermana, R. D.

    2018-03-01

    Infrastructure development does not only affect the economic aspect, but also the social and environmental aspects, which are the main dimensions of sustainable development. The many aspects and actors involved in urban infrastructure development require a comprehensive and integrated policy towards sustainability. Therefore, it is necessary to formulate an infrastructure development policy that considers the various dimensions of sustainable development. The main objective of this research is to formulate a policy for sustainable infrastructure development. In this research, urban infrastructure covers transportation, water systems (drinking water, storm water, wastewater), green open spaces and solid waste. The research was conducted in Bandarlampung City. It uses a comprehensive modeling approach: Multi-Dimensional Scaling (MDS) with Rapid Appraisal of Infrastructure (Rapinfra), the Analytic Network Process (ANP), and a system dynamics model. The findings of the MDS analysis showed that the sustainability status of Bandarlampung City's infrastructure is less sustainable. The ANP analysis identified the 8 indicators most influential in the development of sustainable infrastructure. The system dynamics model offered 4 scenarios for the sustainable urban infrastructure policy model. The best scenario was translated into 3 policies: integrated infrastructure management, population control, and local economic development.

  2. The role of the independent clinical laboratory in new assay development and commercialization.

    PubMed

    Ellis, David G

    2003-01-01

    Most would agree that these are exciting times in the field of laboratory medicine. As the body of scientific knowledge expands and research activities, such as those catalyzed by the sequencing of the human genome, bring us closer to the promise of personalized medicine, the clinical laboratory industry will have increasing opportunities to partner with owners of intellectual property to develop and commercialize new diagnostic tests. The large, independent clinical laboratories are particularly well positioned to commercialize important new tests, with their broad market penetration, infrastructure, and the scale to run esoteric tests cost-effectively.

  3. Case Study: The Role of eLearning in Midwifery Pre-Service Education in Ghana.

    PubMed

    Appiagyei, Martha; Trump, Alison; Danso, Evans; Yeboah, Alex; Searle, Sarah; Carr, Catherine

    The issues and challenges of implementing eLearning in pre-service health education were explored through a pilot study conducted in six nurse-midwifery education programs in Ghana. Case-based, interactive computer mediated eLearning modules, targeted to basic emergency and obstetrical signal functions, were delivered both online and offline using a free-for-use eLearning platform, skoool HE(®). Key success factors included broad stakeholder support, an established curriculum and student and tutor interest. Challenges included infrastructure limitations, large class sizes and added workloads for tutors and information technology staff. National scale up is planned.

  4. Design and Implement of Astronomical Cloud Computing Environment In China-VO

    NASA Astrophysics Data System (ADS)

    Li, Changhua; Cui, Chenzhou; Mi, Linying; He, Boliang; Fan, Dongwei; Li, Shanshan; Yang, Sisi; Xu, Yunfei; Han, Jun; Chen, Junyi; Zhang, Hailong; Yu, Ce; Xiao, Jian; Wang, Chuanjun; Cao, Zihuang; Fan, Yufeng; Liu, Liang; Chen, Xiao; Song, Wenming; Du, Kangyu

    2017-06-01

    The astronomy cloud computing environment is a cyber-infrastructure for astronomy research initiated by the Chinese Virtual Observatory (China-VO) under funding support from the NDRC (National Development and Reform Commission) and CAS (Chinese Academy of Sciences). Based on virtualization technology, the astronomy cloud computing environment was designed and implemented by the China-VO team. It consists of five distributed nodes across the mainland of China. Astronomers can obtain computing and storage resources in this cloud computing environment. Through this environment, astronomers can easily search and analyze astronomical data collected by different telescopes and data centers, and avoid large-scale dataset transfers.

  5. Systems Biology and Cancer Prevention: All Options on the Table

    PubMed Central

    Rosenfeld, Simon; Kapetanovic, Izet

    2008-01-01

    In this paper, we outline the status quo and approaches to further development of systems biology concepts with a focus on applications in cancer prevention science. We discuss the biological aspects of cancer research that are of primary importance in cancer prevention, motivations for their mathematical modeling, and some recent advances in computational oncology. We also attempt to outline, in broad conceptual terms, the contours of future work aimed at the creation of a large-scale computational and informational infrastructure for use as a routine tool in cancer prevention science and decision making. PMID:19787092

  6. Outdoor thermal monitoring of large scale structures by infrared thermography integrated in an ICT based architecture

    NASA Astrophysics Data System (ADS)

    Dumoulin, Jean; Crinière, Antoine; Averty, Rodolphe

    2015-04-01

    An infrared system has been developed to monitor transport infrastructure in a standalone configuration. Results obtained on bridges open to traffic allow the inner structure of the decks to be retrieved. To complete this study, experiments were carried out over several months to monitor two reinforced concrete beams, each 16 m long and weighing 21 t. A damaged area on one of the two beams was detected using a Pulse Phase Thermography approach. Finally, conclusions on the robustness of the system are proposed and perspectives are presented.
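
    Pulse Phase Thermography works by transforming each pixel's thermal transient into the frequency domain and inspecting phase images, in which subsurface anomalies typically produce phase contrast. The sketch below shows only that core transform step on a synthetic frame stack; the frame counts and thresholds here are assumptions, not settings of the system described above.

```python
import numpy as np

# Stand-in infrared sequence: (time, height, width) cooling frames after a thermal pulse.
frames = np.random.rand(300, 64, 64)

spectrum = np.fft.rfft(frames, axis=0)     # per-pixel FFT along the time axis
phase = np.angle(spectrum)                 # one phase image per frequency bin

# Low-frequency phase images probe deeper; flag pixels deviating from the scene median.
low_freq_phase = phase[1]
contrast = np.abs(low_freq_phase - np.median(low_freq_phase))
defect_candidates = contrast > 3 * contrast.std()
```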

  7. Semi-Automated Air-Coupled Impact-Echo Method for Large-Scale Parkade Structure.

    PubMed

    Epp, Tyler; Svecova, Dagmar; Cha, Young-Jin

    2018-03-29

    Structural Health Monitoring (SHM) has moved to data-dense systems, utilizing numerous sensor types to monitor infrastructure, such as bridges and dams, more regularly. One of the issues faced in this endeavour is the scale of the inspected structures and the time it takes to carry out testing. Installing automated systems that can provide measurements in a timely manner is one way of overcoming these obstacles. This study proposes an Artificial Neural Network (ANN) application that determines intact and damaged locations from a small training sample of impact-echo data, using air-coupled microphones from a reinforced concrete beam in lab conditions and data collected from a field experiment in a parking garage. The impact-echo testing in the field is carried out in a semi-autonomous manner to expedite the front end of the in situ damage detection testing. The use of an ANN removes the need for a user-defined cutoff value for the classification of intact and damaged locations when a least-square distance approach is used. It is postulated that this may contribute significantly to testing time reduction when monitoring large-scale civil Reinforced Concrete (RC) structures.
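
    The ANN in this setting acts as a binary classifier over features extracted from the impact-echo response, replacing a user-defined cutoff on a least-square distance. The sketch below shows that general pattern with scikit-learn on synthetic spectra; the feature set, labels and network size are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))                   # stand-in spectral features per test point
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # synthetic intact (0) / damaged (1) labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```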

  8. Semi-Automated Air-Coupled Impact-Echo Method for Large-Scale Parkade Structure

    PubMed Central

    Epp, Tyler; Svecova, Dagmar; Cha, Young-Jin

    2018-01-01

    Structural Health Monitoring (SHM) has moved to data-dense systems, utilizing numerous sensor types to monitor infrastructure, such as bridges and dams, more regularly. One of the issues faced in this endeavour is the scale of the inspected structures and the time it takes to carry out testing. Installing automated systems that can provide measurements in a timely manner is one way of overcoming these obstacles. This study proposes an Artificial Neural Network (ANN) application that determines intact and damaged locations from a small training sample of impact-echo data, using air-coupled microphones from a reinforced concrete beam in lab conditions and data collected from a field experiment in a parking garage. The impact-echo testing in the field is carried out in a semi-autonomous manner to expedite the front end of the in situ damage detection testing. The use of an ANN removes the need for a user-defined cutoff value for the classification of intact and damaged locations when a least-square distance approach is used. It is postulated that this may contribute significantly to testing time reduction when monitoring large-scale civil Reinforced Concrete (RC) structures. PMID:29596332

  9. The place of algae in agriculture: policies for algal biomass production.

    PubMed

    Trentacoste, Emily M; Martinez, Alice M; Zenk, Tim

    2015-03-01

    Algae have been used for food and nutraceuticals for thousands of years, and the large-scale cultivation of algae, or algaculture, has existed for over half a century. More recently algae have been identified and developed as renewable fuel sources, and the cultivation of algal biomass for various products is transitioning to commercial-scale systems. It is crucial during this period that institutional frameworks (i.e., policies) support and promote development and commercialization and anticipate and stimulate the evolution of the algal biomass industry as a source of renewable fuels, high value protein and carbohydrates and low-cost drugs. Large-scale cultivation of algae merges the fundamental aspects of traditional agricultural farming and aquaculture. Despite this overlap, algaculture has not yet been afforded a position within agriculture or the benefits associated with it. Various federal and state agricultural support and assistance programs are currently appropriated for crops, but their extension to algal biomass is uncertain. These programs are essential for nascent industries to encourage investment, build infrastructure, disseminate technical experience and information, and create markets. This review describes the potential agricultural policies and programs that could support algal biomass cultivation, and the barriers to the expansion of these programs to algae.

  10. Evaluating sub-seasonal skill in probabilistic forecasts of Atmospheric Rivers and associated extreme events

    NASA Astrophysics Data System (ADS)

    Subramanian, A. C.; Lavers, D.; Matsueda, M.; Shukla, S.; Cayan, D. R.; Ralph, M.

    2017-12-01

    Atmospheric rivers (ARs) - elongated plumes of intense moisture transport - are a primary source of hydrological extremes, water resources and impactful weather along the West Coast of North America and Europe. There is strong demand in the water management, societal infrastructure and humanitarian sectors for reliable sub-seasonal forecasts, particularly of extreme events, such as floods and droughts so that actions to mitigate disastrous impacts can be taken with sufficient lead-time. Many recent studies have shown that ARs in the Pacific and the Atlantic are modulated by large-scale modes of climate variability. Leveraging the improved understanding of how these large-scale climate modes modulate the ARs in these two basins, we use the state-of-the-art multi-model forecast systems such as the North American Multi-Model Ensemble (NMME) and the Subseasonal-to-Seasonal (S2S) database to help inform and assess the probabilistic prediction of ARs and related extreme weather events over the North American and European West Coasts. We will present results from evaluating probabilistic forecasts of extreme precipitation and AR activity at the sub-seasonal scale. In particular, results from the comparison of two winters (2015-16 and 2016-17) will be shown, winters which defied canonical El Niño teleconnection patterns over North America and Europe. We further extend this study to analyze probabilistic forecast skill of AR events in these two basins and the variability in forecast skill during certain regimes of large-scale climate modes.
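
    Probabilistic forecasts of binary events such as "an AR made landfall this week" are commonly verified with measures like the Brier score and its skill score relative to climatology. The snippet below shows that computation on invented numbers; it is not drawn from the NMME or S2S evaluations reported here.

```python
import numpy as np

p_forecast = np.array([0.9, 0.2, 0.7, 0.1, 0.6])   # forecast probabilities of an AR event
observed   = np.array([1,   0,   1,   0,   0  ])   # 1 if the event occurred

brier = np.mean((p_forecast - observed) ** 2)
climatology = observed.mean()                       # reference forecast: the base rate
brier_ref = np.mean((climatology - observed) ** 2)
bss = 1.0 - brier / brier_ref                       # > 0 means skill over climatology

print(f"Brier score {brier:.3f}, Brier skill score {bss:.3f}")
```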

  11. Dynamic ruptures on faults of complex geometry: insights from numerical simulations, from large-scale curvature to small-scale fractal roughness

    NASA Astrophysics Data System (ADS)

    Ulrich, T.; Gabriel, A. A.

    2016-12-01

    The geometry of faults is subject to a large degree of uncertainty. Being buried structures that are not directly observable, their complex shapes may only be inferred from surface traces, if available, or through geophysical methods such as reflection seismology. As a consequence, most studies aiming to assess the potential hazard of faults rely on idealized fault models based on observable large-scale features. Yet real faults are known to be wavy at all scales, their geometric features presenting similar statistical properties from the micro to the regional scale. The influence of roughness on the earthquake rupture process is currently a driving topic in the computational seismology community. From the numerical point of view, rough-fault problems are challenging: they require optimized codes able to run efficiently on high-performance computing infrastructure while simultaneously handling complex geometries. Physically, simulated ruptures hosted by rough faults appear to be much closer in complexity to source models inverted from observations. Incorporating fault geometry on all scales may thus be crucial to model realistic earthquake source processes and to estimate seismic hazard more accurately. In this study, we use the software package SeisSol, based on an ADER-Discontinuous Galerkin scheme, to run our numerical simulations. SeisSol solves the spontaneous dynamic earthquake rupture problem and the wave propagation problem with high-order accuracy in space and time, efficiently on large-scale machines. The influence of fault roughness on dynamic rupture style (e.g. onset of supershear transition, rupture front coherence, propagation of self-healing pulses) at different length scales is investigated by analyzing ruptures on faults of varying roughness spectral content. In particular, we investigate the existence of a minimum roughness length scale, relative to the rupture's inherent length scales, below which the rupture ceases to be sensitive to roughness. Finally, the effect of fault geometry on near-field ground motions is considered. Our simulations feature classical linear slip weakening on the fault and a viscoplastic constitutive model off the fault. The benefits of using a more elaborate fast velocity-weakening friction law will also be considered.
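
    Fault roughness of the kind discussed here is usually modelled as self-affine: the power spectrum of deviations from a planar fault decays as a power law in wavenumber. The following sketch synthesizes a 1D self-affine roughness profile by assigning power-law amplitudes and random phases in the Fourier domain; the Hurst exponent and amplitude-to-wavelength ratio are illustrative assumptions, not values from the study.

```python
import numpy as np

def rough_profile(n=1024, dx=10.0, hurst=0.8, rms_to_length=1e-2, seed=0):
    """Self-affine roughness profile h(x) via spectral synthesis."""
    rng = np.random.default_rng(seed)
    k = np.fft.rfftfreq(n, d=dx)                       # spatial wavenumbers
    amp = np.zeros_like(k)
    amp[1:] = k[1:] ** (-(0.5 + hurst))                # 1D PSD ~ k^-(1+2H)  =>  amplitude ~ k^-(0.5+H)
    phase = rng.uniform(0.0, 2.0 * np.pi, size=k.size)
    h = np.fft.irfft(amp * np.exp(1j * phase), n=n)
    h *= rms_to_length * (n * dx) / np.std(h)          # rescale RMS roughness to the target ratio
    return h                                           # deviation from a planar fault (same units as dx)

profile = rough_profile()
```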

  12. A National Assessment of Green Infrastructure and Change for the Conterminous United States Using Morphological Image Processing

    EPA Science Inventory

    Green infrastructure is a popular framework for conservation planning. The main elements of green infrastructure are hubs and links. Hubs tend to be large areas of ‘natural’ vegetation and links tend to be linear features (e.g., streams) that connect hubs. Within the United State...

  13. Effects of climate change on infrastructure [Chapter 11

    Treesearch

    Michael J. Furniss; Natalie J. Little; David L. Peterson

    2018-01-01

    Climatic conditions, particularly extreme rainfall, snowmelt, and flooding, pose substantial risks to infrastructure in and near public lands in the Intermountain Adaptation Partnership (IAP) region (box 11.1). Minor floods happen frequently in the region, and large floods happen occasionally. These events can damage or destroy roads and other infrastructure and affect...

  14. Enabling fast charging - Infrastructure and economic considerations

    NASA Astrophysics Data System (ADS)

    Burnham, Andrew; Dufek, Eric J.; Stephens, Thomas; Francfort, James; Michelbacher, Christopher; Carlson, Richard B.; Zhang, Jiucai; Vijayagopal, Ram; Dias, Fernando; Mohanpurkar, Manish; Scoffield, Don; Hardy, Keith; Shirk, Matthew; Hovsapian, Rob; Ahmed, Shabbir; Bloom, Ira; Jansen, Andrew N.; Keyser, Matthew; Kreuzer, Cory; Markel, Anthony; Meintz, Andrew; Pesaran, Ahmad; Tanim, Tanvir R.

    2017-11-01

    The ability to charge battery electric vehicles (BEVs) on a time scale that is on par with the time to fuel an internal combustion engine vehicle (ICEV) would remove a significant barrier to the adoption of BEVs. However, for viability, fast charging at this time scale also needs to occur at a price that is acceptable to consumers. Therefore, the cost drivers for both BEV owners and charging station providers are analyzed. In addition, key infrastructure considerations are examined, including grid stability and delivery of power, the design of fast charging stations, and the design and use of electric vehicle service equipment. Each of these aspects has technical barriers that need to be addressed and is directly linked to economic impacts on use and implementation. This discussion focuses on both the economic and infrastructure issues which exist and need to be addressed for the effective implementation of fast charging at 400 kW and above. In so doing, it has been found that there is a distinct need to effectively manage the intermittent, high power demand of fast charging, strategically plan infrastructure corridors, and further understand the cost of operation of charging infrastructure and BEVs.
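
    To make the time-scale comparison concrete, a rough back-of-the-envelope calculation of charge time and session cost at 400 kW is sketched below. The pack size, charging efficiency and electricity price are assumed illustrative values, not figures from the article, and a constant charging power is assumed even though real charging curves taper.

```python
pack_kwh        = 80      # hypothetical BEV battery capacity
charge_power_kw = 400     # fast-charging power level discussed in the record
efficiency      = 0.92    # assumed charger-to-pack efficiency
price_per_kwh   = 0.35    # assumed $/kWh at a public fast charger

energy_10_to_80 = 0.7 * pack_kwh / efficiency                 # grid energy for a 10-80% session
minutes         = energy_10_to_80 / charge_power_kw * 60
cost            = energy_10_to_80 * price_per_kwh

print(f"10-80% charge: ~{minutes:.0f} min, ~${cost:.2f} at the assumed tariff")
```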

  15. Enabling fast charging – Infrastructure and economic considerations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burnham, Andrew; Dufek, Eric J.; Stephens, Thomas

    The ability to charge battery electric vehicles (BEVs) on a time scale that is on par with the time to fuel an internal combustion engine vehicle (ICEV) would remove a significant barrier to the adoption of BEVs. However, for viability, fast charging at this time scale also needs to occur at a price that is acceptable to consumers. Therefore, the cost drivers for both BEV owners and charging station providers are analyzed. In addition, key infrastructure considerations are examined, including grid stability and delivery of power, the design of fast charging stations, and the design and use of electric vehicle service equipment. Each of these aspects has technical barriers that need to be addressed and is directly linked to economic impacts on use and implementation. This discussion focuses on both the economic and infrastructure issues which exist and need to be addressed for the effective implementation of fast charging at 400 kW and above. In so doing, it has been found that there is a distinct need to effectively manage the intermittent, high power demand of fast charging, strategically plan infrastructure corridors, and further understand the cost of operation of charging infrastructure and BEVs.

  16. Enabling fast charging – Infrastructure and economic considerations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burnham, Andrew; Dufek, Eric J.; Stephens, Thomas

    The ability to charge battery electric vehicles (BEVs) on a time scale that is on par with the time to fuel an internal combustion engine vehicle (ICEV) would remove a significant barrier to the adoption of BEVs. However, for viability, fast charging at this time scale also needs to occur at a price that is acceptable to consumers. Therefore, the cost drivers for both BEV owners and charging station providers are analyzed. In addition, key infrastructure considerations are examined, including grid stability and delivery of power, the design of fast charging stations, and the design and use of electric vehicle service equipment. Each of these aspects has technical barriers that need to be addressed and is directly linked to economic impacts on use and implementation. Here, this discussion focuses on both the economic and infrastructure issues which exist and need to be addressed for the effective implementation of fast charging up to 350 kW. In doing so, it has been found that there is a distinct need to effectively manage the intermittent, high power demand of fast charging, strategically plan infrastructure corridors, and further understand the cost of operation of charging infrastructure and BEVs.

  17. Enabling fast charging – Infrastructure and economic considerations

    DOE PAGES

    Burnham, Andrew; Dufek, Eric J.; Stephens, Thomas; ...

    2017-10-23

    The ability to charge battery electric vehicles (BEVs) on a time scale that is on par with the time to fuel an internal combustion engine vehicle (ICEV) would remove a significant barrier to the adoption of BEVs. However, for viability, fast charging at this time scale also needs to occur at a price that is acceptable to consumers. Therefore, the cost drivers for both BEV owners and charging station providers are analyzed. In addition, key infrastructure considerations are examined, including grid stability and delivery of power, the design of fast charging stations, and the design and use of electric vehicle service equipment. Each of these aspects has technical barriers that need to be addressed and is directly linked to economic impacts on use and implementation. Here, this discussion focuses on both the economic and infrastructure issues which exist and need to be addressed for the effective implementation of fast charging up to 350 kW. In doing so, it has been found that there is a distinct need to effectively manage the intermittent, high power demand of fast charging, strategically plan infrastructure corridors, and further understand the cost of operation of charging infrastructure and BEVs.

  18. Reproducible Large-Scale Neuroimaging Studies with the OpenMOLE Workflow Management System.

    PubMed

    Passerat-Palmbach, Jonathan; Reuillon, Romain; Leclaire, Mathieu; Makropoulos, Antonios; Robinson, Emma C; Parisot, Sarah; Rueckert, Daniel

    2017-01-01

    OpenMOLE is a scientific workflow engine with a strong emphasis on workload distribution. Workflows are designed using a high level Domain Specific Language (DSL) built on top of Scala. It exposes natural parallelism constructs to easily delegate the workload resulting from a workflow to a wide range of distributed computing environments. OpenMOLE hides the complexity of designing complex experiments thanks to its DSL. Users can embed their own applications and scale their pipelines from a small prototype running on their desktop computer to a large-scale study harnessing distributed computing infrastructures, simply by changing a single line in the pipeline definition. The construction of the pipeline itself is decoupled from the execution context. The high-level DSL abstracts the underlying execution environment, contrary to classic shell-script based pipelines. These two aspects allow pipelines to be shared and studies to be replicated across different computing environments. Workflows can be run as traditional batch pipelines or coupled with OpenMOLE's advanced exploration methods in order to study the behavior of an application, or perform automatic parameter tuning. In this work, we briefly present the strong assets of OpenMOLE and detail recent improvements targeting re-executability of workflows across various Linux platforms. We have tightly coupled OpenMOLE with CARE, a standalone containerization solution that allows re-executing on a Linux host any application that has been packaged on another Linux host previously. The solution is evaluated against a Python-based pipeline involving packages such as scikit-learn as well as binary dependencies. All were packaged and re-executed successfully on various HPC environments, with identical numerical results (here prediction scores) obtained on each environment. Our results show that the pair formed by OpenMOLE and CARE is a reliable solution to generate reproducible results and re-executable pipelines. A demonstration of the flexibility of our solution showcases three neuroimaging pipelines harnessing distributed computing environments as heterogeneous as local clusters or the European Grid Infrastructure (EGI).

  19. Using the infrastructure of a conditional cash transfer program to deliver a scalable integrated early child development program in Colombia: cluster randomized controlled trial

    PubMed Central

    Attanasio, Orazio P; Fernández, Camila; Grantham-McGregor, Sally M; Meghir, Costas; Rubio-Codina, Marta

    2014-01-01

    Objective To assess the effectiveness of an integrated early child development intervention, combining stimulation and micronutrient supplementation and delivered on a large scale in Colombia, for children’s development, growth, and hemoglobin levels. Design Cluster randomized controlled trial, using a 2×2 factorial design, with municipalities assigned to one of four groups: psychosocial stimulation, micronutrient supplementation, combined intervention, or control. Setting 96 municipalities in Colombia, located across eight of its 32 departments. Participants 1420 children aged 12-24 months and their primary carers. Intervention Psychosocial stimulation (weekly home visits with play demonstrations), micronutrient sprinkles given daily, and both combined. All delivered by female community leaders for 18 months. Main outcome measures Cognitive, receptive and expressive language, and fine and gross motor scores on the Bayley scales of infant development-III; height, weight, and hemoglobin levels measured at the baseline and end of intervention. Results Stimulation improved cognitive scores (adjusted for age, sex, testers, and baseline levels of outcomes) by 0.26 of a standard deviation (P=0.002). Stimulation also increased receptive language by 0.22 of a standard deviation (P=0.032). Micronutrient supplementation had no significant effect on any outcome and there was no interaction between the interventions. No intervention affected height, weight, or hemoglobin levels. Conclusions Using the infrastructure of a national welfare program we implemented the integrated early child development intervention on a large scale and showed its potential for improving children’s cognitive development. We found no effect of supplementation on developmental or health outcomes. Moreover, supplementation did not interact with stimulation. The implementation model for delivering stimulation suggests that it may serve as a promising blueprint for future policy on early childhood development. Trial registration Current Controlled trials ISRCTN18991160. PMID:25266222

  20. Science and Strategic - Climate Implications

    NASA Astrophysics Data System (ADS)

    Tindall, J. A.; Moran, E. H.

    2008-12-01

    The energy of weather systems greatly exceeds the energy produced and used by humans. Variation in this energy causes climate variability, potentially resulting in local, national, and/or global catastrophes beyond our ability to deter the loss of life and economic destabilization. Large-scale natural disasters routinely result in shortages of water, disruption of energy supplies, and destruction of infrastructure. The resulting unforeseen and disastrous events occurring beyond national emergency preparation, as related to climate variability, could incite civil unrest due to dwindling and/or inaccessible resources necessary for survival. Lack of these necessary resources in impacted countries often leads to wars. Climate change coupled with population growth, which exposes more of the population to potential risks associated with climate and environmental change, demands a faster technological response. Understanding climate and associated environmental changes, their relation to human activity and behavior, and including this in national and international emergency/security management plans would alleviate shortcomings in our present and future technological status. The scale of environmental change will determine the potential magnitude of civil unrest at the local, national, and/or global level, along with security issues at each level. Commonly, security issues related to possible civil unrest owing to temporal environmental change are not part of a short- and/or long-term strategy, yet recent large-scale disasters are reminders that system failures (as in Hurricane Katrina) include acknowledged breaches to individual, community, and infrastructure security. Without advance planning and management concerning environmental change, oncoming climate-related events will intensify the level of devastation and human catastrophe. Depending upon the magnitude and period of catastrophic events and/or environmental changes, destabilization of agricultural systems, energy supplies, and other lines of commodities often results in severely unbalanced supply and demand ratios, which eventually affect the entire global community. National economies potentially risk destabilization, which is especially important since economics plays a major role in strategic planning. This presentation will address these issues and the role that science can play in human sustainability and local, national, and international security.

  1. Reproducible Large-Scale Neuroimaging Studies with the OpenMOLE Workflow Management System

    PubMed Central

    Passerat-Palmbach, Jonathan; Reuillon, Romain; Leclaire, Mathieu; Makropoulos, Antonios; Robinson, Emma C.; Parisot, Sarah; Rueckert, Daniel

    2017-01-01

    OpenMOLE is a scientific workflow engine with a strong emphasis on workload distribution. Workflows are designed using a high level Domain Specific Language (DSL) built on top of Scala. It exposes natural parallelism constructs to easily delegate the workload resulting from a workflow to a wide range of distributed computing environments. OpenMOLE hides the complexity of designing complex experiments thanks to its DSL. Users can embed their own applications and scale their pipelines from a small prototype running on their desktop computer to a large-scale study harnessing distributed computing infrastructures, simply by changing a single line in the pipeline definition. The construction of the pipeline itself is decoupled from the execution context. The high-level DSL abstracts the underlying execution environment, contrary to classic shell-script based pipelines. These two aspects allow pipelines to be shared and studies to be replicated across different computing environments. Workflows can be run as traditional batch pipelines or coupled with OpenMOLE's advanced exploration methods in order to study the behavior of an application, or perform automatic parameter tuning. In this work, we briefly present the strong assets of OpenMOLE and detail recent improvements targeting re-executability of workflows across various Linux platforms. We have tightly coupled OpenMOLE with CARE, a standalone containerization solution that allows re-executing on a Linux host any application that has been packaged on another Linux host previously. The solution is evaluated against a Python-based pipeline involving packages such as scikit-learn as well as binary dependencies. All were packaged and re-executed successfully on various HPC environments, with identical numerical results (here prediction scores) obtained on each environment. Our results show that the pair formed by OpenMOLE and CARE is a reliable solution to generate reproducible results and re-executable pipelines. A demonstration of the flexibility of our solution showcases three neuroimaging pipelines harnessing distributed computing environments as heterogeneous as local clusters or the European Grid Infrastructure (EGI). PMID:28381997

  2. Calibration of LOFAR data on the cloud

    NASA Astrophysics Data System (ADS)

    Sabater, J.; Sánchez-Expósito, S.; Best, P.; Garrido, J.; Verdes-Montenegro, L.; Lezzi, D.

    2017-04-01

    New scientific instruments are starting to generate an unprecedented amount of data. The Low Frequency Array (LOFAR), one of the Square Kilometre Array (SKA) pathfinders, is already producing data on a petabyte scale. The calibration of these data presents a huge challenge for final users: (a) extensive storage and computing resources are required; (b) the installation and maintenance of the software required for the processing is not trivial; and (c) the requirements of calibration pipelines, which are experimental and under development, are quickly evolving. After encountering some limitations in classical infrastructures like dedicated clusters, we investigated the viability of cloud infrastructures as a solution. We found that the installation and operation of LOFAR data calibration pipelines is not only possible, but can also be efficient in cloud infrastructures. The main advantages were: (1) the ease of software installation and maintenance, and the availability of standard APIs and tools, widely used in the industry; this reduces the requirement for significant manual intervention, which can have a highly negative impact in some infrastructures; (2) the flexibility to adapt the infrastructure to the needs of the problem, especially as those demands change over time; (3) the on-demand consumption of (shared) resources. We found that a critical factor (also in other infrastructures) is the availability of scratch storage areas of an appropriate size. We found no significant impediments associated with the speed of data transfer, the use of virtualization, the use of external block storage, or the memory available (provided a minimum threshold is reached). Finally, we considered the cost-effectiveness of a commercial cloud like Amazon Web Services. While a cloud solution is more expensive than the operation of a large, fully-utilized cluster completely dedicated to LOFAR data reduction, we found that its costs are competitive if the number of datasets to be analysed is not high, or if the costs of maintaining a system capable of calibrating LOFAR data become high. Coupled with the advantages discussed above, this suggests that a cloud infrastructure may be favourable for many users.
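
    The cost-effectiveness argument above boils down to comparing a roughly fixed annual cost for a dedicated, fully utilised cluster against a per-run cost on a commercial cloud. The toy comparison below illustrates that break-even logic only; all prices and run counts are assumptions, not figures from the paper.

```python
cluster_cost_per_year = 150_000     # hypothetical amortised hardware, power and staff cost
cloud_cost_per_run    = 900         # hypothetical on-demand cost of one calibration run

for runs in (25, 50, 100, 200, 400):
    cloud_total = runs * cloud_cost_per_run
    cheaper = "cloud" if cloud_total < cluster_cost_per_year else "dedicated cluster"
    print(f"{runs:4d} runs/year: cloud ${cloud_total:>7,} vs cluster ${cluster_cost_per_year:,} -> {cheaper}")
```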

  3. An Attempt to Develop AN Environmental Information System of Ecological Infrastructure for Evaluating Functions of Ecosystem-Based Solutions for Disaster Risk Reduction Eco-Drr

    NASA Astrophysics Data System (ADS)

    Doko, T.; Chen, W.; Sasaki, K.; Furutani, T.

    2016-06-01

    "Ecological Infrastructure (EI)" are defined as naturally functioning ecosystems that deliver valuable services to people, such as healthy mountain catchments, rivers, wetlands, coastal dunes, and nodes and corridors of natural habitat, which together form a network of interconnected structural elements in the landscape. On the other hand, natural disaster occur at the locations where habitat was reduced due to the changes of land use, in which the land was converted to the settlements and agricultural cropland. Hence, habitat loss and natural disaster are linked closely. Ecological infrastructure is the nature-based equivalent of built or hard infrastructure, and is as important for providing services and underpinning socio-economic development. Hence, ecological infrastructure is expected to contribute to functioning as ecological disaster reduction, which is termed Ecosystem-based Solutions for Disaster Risk Reduction (Eco-DRR). Although ecological infrastructure already exists in the landscape, it might be degraded, needs to be maintained and managed, and in some cases restored. Maintenance and restoration of ecological infrastructure is important for security of human lives. Therefore, analytical tool and effective visualization tool in spatially explicit way for the past natural disaster and future prediction of natural disaster in relation to ecological infrastructure is considered helpful. Hence, Web-GIS based Ecological Infrastructure Environmental Information System (EI-EIS) has been developed. This paper aims to describe the procedure of development and future application of EI-EIS. The purpose of the EI-EIS is to evaluate functions of Eco-DRR. In order to analyse disaster data, collection of past disaster information, and disaster-prone area is effective. First, a number of digital maps and analogue maps in Japan and Europe were collected. In total, 18,572 maps over 100 years were collected. The Japanese data includes Future-Pop Data Series (1,736 maps), JMC dataset 50m grid (elevation) (13,071 maps), Old Edition Maps: Topographic Map (325 maps), Digital Base Map at a scale of 2500 for reconstruction planning (808 maps), Detailed Digital Land Use Information for Metropolitan Area (10 m land use) (2,436 maps), and Digital Information by GSI (national large scale map) (71 maps). Old Edition Maps: Topographic Map were analogue maps, and were scanned and georeferenced. These geographical area covered 1) Tohoku area, 2) Five Lakes of Mikata area (Fukui), 3) Ooshima Island (Tokyo), 4) Hiroshima area (Hiroshima), 5) Okushiri Island (Hokkaido), and 6) Toyooka City area (Hyogo). The European data includes topographic map in Germany (8 maps), old topographic map in Germany (31 maps), ancient map in Germany (23 maps), topographic map in Austria (9 maps), old topographic map in Austria (17 maps), and ancient map in Austria (37 maps). Second, focusing on Five Lakes of Mikata area as an example, these maps were integrated into the ArcGIS Online® (ESRI). These data can be overlaid, and time-series data can be visualized by a time slider function of ArcGIS Online.

  4. Collaborative Multi-Scale 3d City and Infrastructure Modeling and Simulation

    NASA Astrophysics Data System (ADS)

    Breunig, M.; Borrmann, A.; Rank, E.; Hinz, S.; Kolbe, T.; Schilcher, M.; Mundani, R.-P.; Jubierre, J. R.; Flurl, M.; Thomsen, A.; Donaubauer, A.; Ji, Y.; Urban, S.; Laun, S.; Vilgertshofer, S.; Willenborg, B.; Menninghaus, M.; Steuer, H.; Wursthorn, S.; Leitloff, J.; Al-Doori, M.; Mazroobsemnani, N.

    2017-09-01

    Computer-aided collaborative and multi-scale 3D planning are challenges for complex railway and subway track infrastructure projects in the built environment. Many legal, economic, environmental, and structural requirements have to be taken into account. The stringent use of 3D models in the different phases of the planning process facilitates communication and collaboration between stakeholders such as civil engineers, geological engineers, and decision makers. This paper presents concepts, developments, and experiences gained by an interdisciplinary research group from civil engineering informatics and geo-informatics, combining skills from both the Building Information Modeling and the 3D GIS worlds. New approaches, including the development of a collaborative platform and 3D multi-scale modelling, are proposed for collaborative planning and simulation to improve the digital 3D planning of subway tracks and other infrastructures. Experiences during this research and lessons learned are presented, as well as an outlook on future research focusing on Building Information Modeling and 3D GIS applications for cities of the future.

  5. The Price of Precision: Large-Scale Mapping of Forest Structure and Biomass Using Airborne Lidar

    NASA Astrophysics Data System (ADS)

    Dubayah, R.

    2015-12-01

    Lidar remote sensing provides one of the best means for acquiring detailed information on forest structure. However, its application over large areas has been limited largely because of its expense. Nonetheless, extant data exist over many states in the U.S., funded largely by state and federal consortia and mainly for infrastructure, emergency response, flood plain and coastal mapping. These lidar data are almost always acquired in leaf-off seasons, and until recently, usually with low point count densities. Even with these limitations, they provide unprecedented wall-to-wall mappings that enable development of appropriate methodologies for large-scale deployment of lidar. In this talk we summarize our research and lessons learned in deriving forest structure over regional areas as part of NASA's Carbon Monitoring System (CMS). We focus on two areas: the entire state of Maryland and Sonoma County, California. The Maryland effort used low density, leaf-off data acquired by each county in varying epochs, while the on-going Sonoma work employs state-of-the-art, high density, wall-to-wall, leaf-on lidar data. In each area we combine these lidar coverages with high-resolution multispectral imagery from the National Agricultural Imagery Program (NAIP) and in situ plot data to produce maps of canopy height, tree cover and biomass, and compare our results against FIA plot data and national biomass maps. Our work demonstrates that large-scale mapping of forest structure at high spatial resolution is achievable but products may be complex to produce and validate over large areas. Furthermore, fundamental issues involving statistical approaches, plot types and sizes, geolocation, modeling scales, allometry, and even the definitions of "forest" and "non-forest" must be approached carefully. Ultimately, determining the "price of precision", that is, does the value of wall-to-wall forest structure data justify their expense, should consider not only carbon market applications, but the other ways the underlying lidar data may be used.
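
    A common building block in this kind of mapping is the canopy height model (CHM), the difference between a lidar-derived surface model (DSM) and a terrain model (DTM), which can then feed a height-to-biomass allometry. The sketch below shows only that building block on placeholder arrays; the allometric coefficients and the 2 m "forest" threshold are hypothetical, not the CMS production workflow.

```python
import numpy as np

# Stand-in rasters (metres above sea level); real inputs would come from lidar processing.
dsm = np.random.uniform(100, 140, size=(500, 500))    # first-return surface elevation
dtm = dsm - np.random.uniform(0, 35, size=dsm.shape)  # bare-earth elevation

chm = np.clip(dsm - dtm, 0, None)                     # canopy height per grid cell (m)
forest_mask = chm > 2.0                               # simple height threshold for "forest"

# Hypothetical power-law allometry: biomass (Mg/ha) = a * height^b
a, b = 5.0, 1.3
biomass = np.where(forest_mask, a * chm ** b, 0.0)
print("mean biomass over forested cells:", round(biomass[forest_mask].mean(), 1))
```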

  6. Building green infrastructure via citizen participation - a six-year study in the Shepherd Creek

    EPA Science Inventory

    Green infrastructure at the parcel scale provides critical ecosystem goods and services when these services (such as flood mitigation) must be provided locally. Here we report on an approach that encourages suburban landowners to mitigate impervious surfaces on their properties t...

  7. COMMUNITY-ORIENTED DESIGN AND EVALUATION PROCESS FOR SUSTAINABLE INFRASTRUCTURE

    EPA Science Inventory

    We met our first objective by completing the physical infrastructure of the La Fortuna-Tule water and sanitation project using the CODE-PSI method. This physical component of the project was important in providing a real, relevant, community-scale test case for the methods ...

  8. The JASMIN Cloud: specialised and hybrid to meet the needs of the Environmental Sciences Community

    NASA Astrophysics Data System (ADS)

    Kershaw, Philip; Lawrence, Bryan; Churchill, Jonathan; Pritchard, Matt

    2014-05-01

    Cloud computing provides enormous opportunities for the research community. The large public cloud providers provide near-limitless scaling capability. However, adapting Cloud to scientific workloads is not without its problems. The commodity nature of the public cloud infrastructure can be at odds with the specialist requirements of the research community. Issues such as trust, ownership of data, WAN bandwidth and costing models make additional barriers to more widespread adoption. Alongside the application of public cloud for scientific applications, a number of private cloud initiatives are underway in the research community of which the JASMIN Cloud is one example. Here, cloud service models are being effectively super-imposed over more established services such as data centres, compute cluster facilities and Grids. These have the potential to deliver the specialist infrastructure needed for the science community coupled with the benefits of a Cloud service model. The JASMIN facility based at the Rutherford Appleton Laboratory was established in 2012 to support the data analysis requirements of the climate and Earth Observation community. In its first year of operation, the 5PB of available storage capacity was filled and the hosted compute capability used extensively. JASMIN has modelled the concept of a centralised large-volume data analysis facility. Key characteristics have enabled success: peta-scale fast disk connected via low latency networks to compute resources and the use of virtualisation for effective management of the resources for a range of users. A second phase is now underway funded through NERC's (Natural Environment Research Council) Big Data initiative. This will see significant expansion to the resources available with a doubling of disk-based storage to 12PB and an increase of compute capacity by a factor of ten to over 3000 processing cores. This expansion is accompanied by a broadening in the scope for JASMIN, as a service available to the entire UK environmental science community. Experience with the first phase demonstrated the range of user needs. A trade-off is needed between access privileges to resources, flexibility of use and security. This has influenced the form and types of service under development for the new phase. JASMIN will deploy a specialised private cloud organised into "Managed" and "Unmanaged" components. In the Managed Cloud, users have direct access to the storage and compute resources for optimal performance but for reasons of security, via a more restrictive PaaS (Platform-as-a-Service) interface. The Unmanaged Cloud is deployed in an isolated part of the network but co-located with the rest of the infrastructure. This enables greater liberty to tenants - full IaaS (Infrastructure-as-a-Service) capability to provision customised infrastructure - whilst at the same time protecting more sensitive parts of the system from direct access using these elevated privileges. The private cloud will be augmented with cloud-bursting capability so that it can exploit the resources available from public clouds, making it effectively a hybrid solution. A single interface will overlay the functionality of both the private cloud and external interfaces to public cloud providers giving users the flexibility to migrate resources between infrastructures as requirements dictate.

  9. Perspectives on the use of green infrastructure for stormwater management in Cleveland and Milwaukee.

    PubMed

    Keeley, Melissa; Koburger, Althea; Dolowitz, David P; Medearis, Dale; Nickel, Darla; Shuster, William

    2013-06-01

    Green infrastructure is a general term referring to the management of landscapes in ways that generate human and ecosystem benefits. Many municipalities have begun to utilize green infrastructure in efforts to meet stormwater management goals. This study examines challenges to integrating gray and green infrastructure for stormwater management, informed by interviews with practitioners in Cleveland, OH, and Milwaukee, WI. Green infrastructure in these cities is utilized under conditions of extreme fiscal austerity, and its use presents opportunities to connect stormwater management with urban revitalization and economic recovery while planning for the effects of negative- or zero-population growth. In this context, specific challenges in capturing the multiple benefits of green infrastructure exist because the projects required to meet federally mandated stormwater management targets and the needs of urban redevelopment frequently differ in scale and location.

  11. NEON: High Frequency Monitoring Network for Watershed-Scale Processes and Aquatic Ecology

    NASA Astrophysics Data System (ADS)

    Vance, J. M.; Fitzgerald, M.; Parker, S. M.; Roehm, C. L.; Goodman, K. J.; Bohall, C.; Utz, R.

    2014-12-01

    The networked high-frequency hydrologic and water quality measurements needed to investigate physical and biogeochemical processes at the watershed scale and to create robust models are limited and lack standardization. Determining the drivers and mechanisms of ecological changes in aquatic systems in response to natural and anthropogenic pressures is challenging due to the large amounts of terrestrial, aquatic, atmospheric, biological, chemical, and physical data it requires at varied spatiotemporal scales. The National Ecological Observatory Network (NEON) is a continental-scale infrastructure project designed to provide data to address the impacts of climate change, land use, and invasive species on ecosystem structure and function. Using a combination of standardized continuous in situ measurements and observational sampling, the NEON Aquatic array will produce over 200 data products across its spatially distributed field sites for 30 years to facilitate spatiotemporal analysis of the drivers of ecosystem change. Three NEON sites in Alabama were chosen to address linkages between watershed-scale processes and ecosystem changes along an eco-hydrological gradient within the Tombigbee River Basin. The NEON Aquatic design, once deployed, will include continuous measurements of surface water physical, chemical, and biological parameters; groundwater level, temperature and conductivity; and local meteorology. Observational sampling will include bathymetry, water chemistry and isotopes, and a suite of organismal sampling from microbes to macroinvertebrates to vertebrates. NEON deployed a buoy to measure the temperature profile of the Black Warrior River from July to November 2013 to determine the spatiotemporal variability across the water column from daily to seasonal scales. In July 2014, a series of water quality profiles was performed to assess the contribution of physical and biogeochemical drivers over a diurnal cycle. Additional river transects were performed across our site reach to capture the spatial variability of surface water parameters. Our preliminary data show differing response times to precipitation events and diurnal processes, informing our infrastructure designs and sampling protocols aimed at providing data to address the eco-hydrological gradient.

  12. The use of digital imaging, video conferencing, and telepathology in histopathology: a national survey

    PubMed Central

    Dennis, T; Start, R D; Cross, S S

    2005-01-01

    Aims: To undertake a large scale survey of histopathologists in the UK to determine the current infrastructure, training, and attitudes to digital pathology. Methods: A postal questionnaire was sent to 500 consultant histopathologists randomly selected from the membership of the Royal College of Pathologists in the UK. Results: There was a response rate of 47%. Sixty four per cent of respondents had a digital camera mounted on their microscope, but only 12% had any sort of telepathology equipment. Thirty per cent used digital images in electronic presentations at meetings at least once a year and only 24% had ever used telepathology in a diagnostic situation. Fifty nine per cent had received no training in digital imaging. Fifty eight per cent felt that the medicolegal implications of duty of care were a barrier to its use. A large proportion of pathologists (69%) were interested in using video conferencing for remote attendance at multidisciplinary team meetings. Conclusions: There is a reasonable level of equipment and communications infrastructure among histopathologists in the UK but a very low level of training. There is resistance to the use of telepathology in the diagnostic context but enthusiasm for the use of video conferencing in multidisciplinary team meetings. PMID:15735155

  13. Distributed Coding/Decoding Complexity in Video Sensor Networks

    PubMed Central

    Cordeiro, Paulo J.; Assunção, Pedro

    2012-01-01

    Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large scale environments which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality. PMID:22736972

  15. OpenCyto: An Open Source Infrastructure for Scalable, Robust, Reproducible, and Automated, End-to-End Flow Cytometry Data Analysis

    PubMed Central

    Finak, Greg; Frelinger, Jacob; Jiang, Wenxin; Newell, Evan W.; Ramey, John; Davis, Mark M.; Kalams, Spyros A.; De Rosa, Stephen C.; Gottardo, Raphael

    2014-01-01

    Flow cytometry is used increasingly in clinical research for cancer, immunology and vaccines. Technological advances in cytometry instrumentation are increasing the size and dimensionality of data sets, posing a challenge for traditional data management and analysis. Automated analysis methods, despite a general consensus of their importance to the future of the field, have been slow to gain widespread adoption. Here we present OpenCyto, a new BioConductor infrastructure and data analysis framework designed to lower the barrier of entry to automated flow data analysis algorithms by addressing key areas that we believe have held back wider adoption of automated approaches. OpenCyto supports end-to-end data analysis that is robust and reproducible while generating results that are easy to interpret. We have improved the existing, widely used core BioConductor flow cytometry infrastructure by allowing analysis to scale in a memory efficient manner to the large flow data sets that arise in clinical trials, and integrating domain-specific knowledge as part of the pipeline through the hierarchical relationships among cell populations. Pipelines are defined through a text-based csv file, limiting the need to write data-specific code, and are data agnostic to simplify repetitive analysis for core facilities. We demonstrate how to analyze two large cytometry data sets: an intracellular cytokine staining (ICS) data set from a published HIV vaccine trial focused on detecting rare, antigen-specific T-cell populations, where we identify a new subset of CD8 T-cells with a vaccine-regimen specific response that could not be identified through manual analysis, and a CyTOF T-cell phenotyping data set where a large staining panel and many cell populations are a challenge for traditional analysis. The substantial improvements to the core BioConductor flow cytometry packages give OpenCyto the potential for wide adoption. It can rapidly leverage new developments in computational cytometry and facilitate reproducible analysis in a unified environment. PMID:25167361
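
    The abstract notes that pipelines are defined through a text-based csv template encoding hierarchical relationships among cell populations. OpenCyto itself is an R/BioConductor package and its real template columns are not reproduced here; the snippet below is only a minimal, language-neutral illustration (written in Python) of the general idea of driving a hierarchical gating pipeline from a csv template. The column names and gate types are hypothetical.

    ```python
    # Illustration only (not OpenCyto's actual template format): a csv-driven,
    # hierarchical gating pipeline. Column names and gate types are hypothetical.
    import csv, io

    TEMPLATE = """parent,population,gate
    root,Lymphocytes,ellipse
    Lymphocytes,CD3+,threshold
    CD3+,CD8+,threshold
    """

    def load_template(text):
        # Strip the indentation used above so DictReader sees clean rows.
        cleaned = "\n".join(line.strip() for line in text.strip().splitlines())
        return list(csv.DictReader(io.StringIO(cleaned)))

    def run_pipeline(rows):
        # Walk the template in order: each gate is applied to its parent
        # population, mirroring the hierarchical relationships among
        # cell populations mentioned in the abstract.
        populations = {"root": "all events"}
        for row in rows:
            parent = populations[row["parent"]]
            populations[row["population"]] = f'{row["gate"]} gate on {parent}'
        return populations

    for name, derivation in run_pipeline(load_template(TEMPLATE)).items():
        print(name, "<-", derivation)
    ```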

  17. Romanian contribution to research infrastructure database for EPOS

    NASA Astrophysics Data System (ADS)

    Ionescu, Constantin; Craiu, Andreea; Tataru, Dragos; Balan, Stefan; Muntean, Alexandra; Nastase, Eduard; Oaie, Gheorghe; Asimopolos, Laurentiu; Panaiotu, Cristian

    2014-05-01

    European Plate Observation System - EPOS is a long-term plan to facilitate the integrated use of data, models and facilities from mainly existing, but also new, distributed research infrastructures for solid Earth science. During the EPOS Preparatory Phase, the national research infrastructures were integrated at the pan-European level in order to create the EPOS distributed research infrastructure, in which Romania currently participates through the Earth science research infrastructures of national interest declared on the National Roadmap. The mission of EPOS is to build an efficient and comprehensive multidisciplinary research platform for solid Earth sciences in Europe and to allow the scientific community to study the same phenomena from different points of view, in different time periods and at different spatial scales (laboratory and field experiments). At the national scale, research and monitoring infrastructures have gathered a vast amount of geological and geophysical data, which have been used by research networks to underpin our understanding of the Earth. EPOS promotes the creation of comprehensive national and regional consortia, as well as the organization of collective actions. To serve the EPOS goals, a group of national research institutes in Romania, together with their infrastructures, gathered in an EPOS National Consortium, as follows: 1. National Institute for Earth Physics - seismic, strong-motion, GPS and geomagnetic networks and Experimental Laboratory; 2. National Institute of Marine Geology and Geoecology - marine research infrastructure and the Euxinus integrated regional Black Sea observation and early-warning system; 3. Geological Institute of Romania - Surlari National Geomagnetic Observatory and the national lithotheque (the latter as part of the National Museum of Geology); 4. University of Bucharest - Paleomagnetic Laboratory. After national dissemination of the EPOS initiative, other research institutes and companies from the potential stakeholder group have also shown interest in participating in the EPOS National Consortium.

  18. Stormwater management and ecosystem services: a review

    NASA Astrophysics Data System (ADS)

    Prudencio, Liana; Null, Sarah E.

    2018-03-01

    Researchers and water managers have turned to green stormwater infrastructure, such as bioswales, retention basins, wetlands, rain gardens, and urban green spaces to reduce flooding, augment surface water supplies, recharge groundwater, and improve water quality. It is increasingly clear that green stormwater infrastructure not only controls stormwater volume and timing, but also promotes ecosystem services, which are the benefits that ecosystems provide to humans. Yet there has been little synthesis focused on understanding how green stormwater management affects ecosystem services. The objectives of this paper are to review and synthesize published literature on ecosystem services and green stormwater infrastructure and identify gaps in research and understanding, establishing a foundation for research at the intersection of ecosystems services and green stormwater management. We reviewed 170 publications on stormwater management and ecosystem services, and summarized the state-of-the-science categorized by the four types of ecosystem services. Major findings show that: (1) most research was conducted at the parcel-scale and should expand to larger scales to more closely understand green stormwater infrastructure impacts, (2) nearly a third of papers developed frameworks for implementing green stormwater infrastructure and highlighted barriers, (3) papers discussed ecosystem services, but less than 40% quantified ecosystem services, (4) no geographic trends emerged, indicating interest in applying green stormwater infrastructure across different contexts, (5) studies increasingly integrate engineering, physical science, and social science approaches for holistic understanding, and (6) standardizing green stormwater infrastructure terminology would provide a more cohesive field of study than the diverse and often redundant terminology currently in use. We recommend that future research provide metrics and quantify ecosystem services, integrate disciplines to measure ecosystem services from green stormwater infrastructure, and better incorporate stormwater management into environmental policy. Our conclusions outline promising future research directions at the intersection of stormwater management and ecosystem services.

  19. Building Indigenous Community Resilience in the Great Plains

    NASA Astrophysics Data System (ADS)

    Gough, B.

    2014-12-01

    Indigenous community resilience is rooted in the seasoned lifeways, developed over generations, incorporated into systems of knowledge, and realized in artifacts of infrastructure through keen observations of the truth and consequences of their interactions with the environment found in place over time. Their value lies, not in their nature as artifacts, but in the underlying patterns and processes of culture: how previous adaptations were derived and evolved, and how the principles and processes of detailed observation may inform future adaptations. This presentation examines how such holistic community approaches, reflected in design and practice, can be applied to contemporary issues of energy and housing in a rapidly changing climate. The Indigenous Peoples of the Great Plains seek to utilize the latest scientific climate modeling to support the development of large, utility scale distributed renewable energy projects and to re-invigorate an indigenous housing concept of straw bale construction, originating in this region. In the energy context, we explore the potential for the development of an intertribal wind energy dynamo on the Great Plains, utilizing elements of existing federal policies for Indian energy development and existing federal infrastructure initially created to serve hydropower resources, which may be significantly altered under current and prospective drought scenarios. For housing, we consider the opportunity to address the built environment in Indian Country, where Tribes have greater control as it consists largely of residences needed for their growing populations. Straw bale construction allows for greater use of local natural and renewable materials in a strategy for preparedness for the weather extremes and insurance perils already common to the region, provides solutions to chronic unemployment and increasing energy costs, while offering greater affordable comfort in both low and high temperature extremes. The development of large utility scale distributed wind gives greater systemwide flexibility to incorporate renewables and the communal construction techniques associated with straw bale housing puts high-performance shelter back into the hands of the community. Creative and distributed experimentation can result in more graceful failures forward.

  20. A large-scale assessment of European rabbit damage to agriculture in Spain.

    PubMed

    Delibes-Mateos, Miguel; Farfán, Miguel Ángel; Rouco, Carlos; Olivero, Jesús; Márquez, Ana Luz; Fa, John E; Vargas, Juan Mario; Villafuerte, Rafael

    2018-01-01

    Numerous small and medium-sized mammal pests cause widespread and economically significant damage to crops all over the globe. However, most research on pest species has focused on accounts of the level of damage. There are fewer studies concentrating on the description of crop damage caused by pests at large geographical scales, or on analysis of the ecological and anthropogenic factors correlated with these observed patterns. We investigated the relationship between agricultural damage by the European rabbit (Oryctolagus cuniculus) and environmental and anthropogenic variables throughout Spain. Rabbit damage was mainly concentrated within the central-southern regions of Spain. We found that rabbit damage increased significantly between the early 2000s and 2013. Greater losses were typical of those areas where farming dominated and natural vegetation was scarce, where main railways and highways were present, and where environmental conditions were generally favourable for rabbit populations to proliferate. From our analysis, we suggest that roads and railway lines act as potential corridors along which rabbits can spread. The recent increase in Spain of such infrastructure may explain the rise in rabbit damage reported in this study. Our approach is valuable as a method for assessing drivers of wildlife pest damage at large spatial scales, and can be used to propose methods to reduce human-wildlife conflict. © 2017 Society of Chemical Industry.

  1. A Systems Approach to Develop Sustainable Water Supply Infrastructure and Management

    EPA Science Inventory

    In a visit to Zhejiang University, China, Dr. Y. Jeffrey Yang will discuss in this presentation the system approach for urban water infrastructure sustainability. Through a system analysis, it becomes clear at an urban scale that the energy and water efficiencies of a water supp...

  2. Development of a methodology for the assessment of sea level rise impacts on Florida's transportation modes and infrastructure : [summary].

    DOT National Transportation Integrated Search

    2012-01-01

    In Florida, low elevations can make transportation infrastructure in coastal and low-lying areas potentially vulnerable to sea level rise (SLR). Because global SLR forecasts lack precision at local or regional scales, SLR forecasts or scenarios for p...

  3. Evaluating the Effect of Green Infrastructure Stormwater Best Management Practices on New England Stream Habitat

    EPA Science Inventory

    The U.S. EPA is evaluating the effectiveness of green infrastructure (GI) stormwater best management practices (BMPs) on stream habitat at the small watershed (< HUC12) scale in New England. Predictive models for thermal regime and substrate characteristics (substrate size, % em...

  4. DEVELOP MULTI-STRESSOR, OPEN ARCHITECTURE MODELING FRAMEWORK FOR ECOLOGICAL EXPOSURE FROM SITE TO WATERSHED SCALE

    EPA Science Inventory

    A number of multimedia modeling frameworks are currently being developed. The Multimedia Integrated Modeling System (MIMS) is one of these frameworks. A framework should be seen as more of a multimedia modeling infrastructure than a single software system. This infrastructure do...

  5. Some recent advances of intelligent health monitoring systems for civil infrastructures in HIT

    NASA Astrophysics Data System (ADS)

    Ou, Jinping

    2005-06-01

    Intelligent health monitoring systems are increasingly becoming a technique for ensuring the health and safety of civil infrastructures, and an important approach for studying the damage-accumulation and even disaster-evolution characteristics of civil infrastructures; they attract considerable research and development interest from scientists and engineers, since a great number of civil infrastructures are planned and built each year in mainland China. This paper presents some recent advances in the research, development and implementation of intelligent health monitoring systems for civil infrastructures in mainland China, especially at the Harbin Institute of Technology (HIT), P.R. China. The main contents include smart sensors such as optical fiber Bragg grating (OFBG) and polyvinylidene fluoride (PVDF) sensors, fatigue life gauges, self-sensing mortar and carbon fiber reinforced polymer (CFRP), wireless sensor networks, and their implementation in practical infrastructures such as offshore platform structures, hydraulic engineering structures, large-span bridges and large-space structures. Finally, the related research projects supported by the national foundation agencies of China are briefly introduced.

  6. Integrating Green and Blue Water Management Tools for Land and Water Resources Planning

    NASA Astrophysics Data System (ADS)

    Jewitt, G. P. W.

    2009-04-01

    The role of land use and land use change on the hydrological cycle is well known. However, the impacts of large scale land use change are poorly considered in water resources planning, unless they require direct abstraction of water resources and associated development of infrastructure, e.g. irrigation schemes. Moreover, large scale deforestation for the supply of raw materials, expansion of the areas of plantation forestry, increasing areas under food production and major plans for cultivation of biofuels in many developing countries are likely to result in extensive land use change. Given the spatial extent and temporal longevity of these proposed developments, major impacts on water resources are inevitable. It is imperative that managers and planners consider the consequences for downstream ecosystems and users in such developments. However, many popular tools, such as the virtual water approach, provide only coarse-scale "order of magnitude" type estimates with poor consideration of, and limited usefulness for, land use planning. In this paper, a framework for the consideration of the impacts of large scale land use change on water resources at a range of temporal and spatial scales is presented. Drawing on experiences from South Africa, where the establishment of exotic commercial forest plantations is only permitted once a water use license has been granted, the framework adopts the "green water" concept for the identification of potential high-impact areas of land use change and provides for integration with traditional "blue water" water resources planning tools for more detailed planning. Appropriate tools, ranging from simple spreadsheet solutions to more sophisticated remote sensing and hydrological models, are described, and the application of the framework for consideration of water resources impacts associated with the establishment of large scale Tectona grandis, sugar cane and Jatropha curcas plantations is illustrated through examples in Mozambique and South Africa. Keywords: Land use change, water resources, green water, blue water, biofuels, developing countries
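
    The "green water" / "blue water" distinction used in the framework can be illustrated with a first-order annual water balance: green water is the precipitation consumed as evapotranspiration, and blue water is what remains for runoff and groundwater recharge. The sketch below uses placeholder numbers chosen for this illustration only; they are not results from the paper.

    ```python
    # First-order annual water balance illustrating the green/blue water split.
    # All figures are placeholder values (mm/year) for illustration only.

    def water_partition(precip_mm, et_mm):
        """Green water = evapotranspiration; blue water = what is left for
        runoff and groundwater recharge (ignoring storage change)."""
        green = et_mm
        blue = max(precip_mm - et_mm, 0.0)
        return green, blue

    # Hypothetical comparison: grassland vs. a fast-growing plantation with
    # higher evapotranspiration on the same 800 mm/year rainfall.
    for land_use, et in [("grassland", 550.0), ("plantation", 700.0)]:
        g, b = water_partition(800.0, et)
        print(f"{land_use}: green {g:.0f} mm, blue {b:.0f} mm")
    ```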

  7. Megascours: the morphodynamics of large river confluences

    NASA Astrophysics Data System (ADS)

    Dixon, Simon; Sambrook Smith, Greg; Nicholas, Andrew; Best, Jim; Bull, Jon; Vardy, Mark; Goodbred, Steve; Haque Sarker, Maminul

    2015-04-01

    River confluences are widely acknowledged as crucial controlling influences upon upstream and downstream morphology and thus landscape evolution. Despite their importance, very little is known about their evolution and morphodynamics, and there is a consensus in the literature that confluences represent fixed, nodal points in the fluvial network. Confluences have been shown to generate substantial bed scours around five times greater than mean depth. Previous research on the Ganges-Jamuna junction has shown large river confluences can be highly mobile, potentially 'combing' bed scours across a large area, although the extent to which this is representative of large confluences in general is unknown. Understanding the migration of confluences and associated scours is important for multiple applications, including: designing civil engineering infrastructure (e.g. bridges, laying cable, pipelines, etc.), sequence stratigraphic interpretation for reconstruction of past environmental and sea level change, and in the hydrocarbon industry where it is crucial to discriminate autocyclic confluence scours from widespread allocyclic surfaces. Here we present a wide-ranging global review of large river confluence planforms based on analysis of Landsat imagery from 1972 through to 2014. This demonstrates there is an array of confluence morphodynamic types: from freely migrating confluences such as the Ganges-Jamuna, through confluences migrating on decadal timescales, to fixed confluences. Along with data from recent geophysical field studies in the Ganges-Brahmaputra-Meghna basin, we propose a conceptual model of large river confluence types and hypothesise how these influence morphodynamics and preservation of 'megascours' in the rock record. This conceptual model has implications for sequence stratigraphic models and the correct identification of surfaces related to past sea level change. We quantify the abundance of mobile confluence types by classifying all large confluences in the Amazon and Ganges-Brahmaputra-Meghna basins, showing these two basins have contrasting confluence morphodynamics. For the first time we show large river confluences have multiple scales of planform adjustment, with important implications for infrastructure and interpretation of the rock record.

  8. A new implementation of full resolution SBAS-DInSAR processing chain for the effective monitoring of structures and infrastructures

    NASA Astrophysics Data System (ADS)

    Bonano, Manuela; Buonanno, Sabatino; Ojha, Chandrakanta; Berardino, Paolo; Lanari, Riccardo; Zeni, Giovanni; Manunta, Michele

    2017-04-01

    The advanced DInSAR technique referred to as the Small BAseline Subset (SBAS) algorithm has already largely demonstrated its effectiveness in carrying out multi-scale and multi-platform surface deformation analyses relevant to both natural and man-made hazards. Thanks to its capability to generate displacement maps and long-term deformation time series at both regional (low resolution analysis) and local (full resolution analysis) spatial scales, it provides more insight into the spatial and temporal patterns of localized displacements relevant to single buildings and infrastructures over extended urban areas, with a key role in supporting risk mitigation and preservation activities. The extensive application of the multi-scale SBAS-DInSAR approach in many scientific contexts has gone hand in hand with new SAR satellite mission development, characterized by different frequency bands, spatial resolutions, revisit times and ground coverage. This has led to the generation of huge DInSAR data stacks to be efficiently handled, processed and archived, with a strong impact on both the data storage and the computational requirements needed for generating the full resolution SBAS-DInSAR results. Accordingly, innovative and effective solutions for the automatic processing of massive SAR data archives and for the operational management of the derived SBAS-DInSAR products need to be designed and implemented, by exploiting the high efficiency (in terms of portability, scalability and computing performance) of new ICT methodologies. In this work, we present a novel parallel implementation of the full resolution SBAS-DInSAR processing chain, aimed at investigating localized displacements affecting single buildings and infrastructures over very large urban areas, relying on parallelization strategies at different levels of granularity. The image granularity level is applied in most steps of the SBAS-DInSAR processing chain and exploits multiprocessor systems with distributed memory. Moreover, in some computationally very heavy processing steps, Graphics Processing Units (GPUs) are exploited to process blocks on a pixel-by-pixel basis, which requires substantial modifications to some key parts of the sequential full resolution SBAS-DInSAR processing chain. GPU processing is implemented by efficiently exploiting parallel processing architectures (such as CUDA) to increase computing performance, in terms of optimization of the available GPU memory, as well as reduction of the Input/Output operations on the GPU and of the whole processing time for specific blocks with respect to the corresponding sequential implementation, which is particularly critical in the presence of huge DInSAR datasets. Moreover, to efficiently handle the massive amount of DInSAR measurements provided by the new generation SAR constellations (CSK and Sentinel-1), we apply a re-design strategy aimed at the robust assimilation of the full resolution SBAS-DInSAR results into the web-based GeoNode platform of the Spatial Data Infrastructure, thus allowing the efficient management, analysis and integration of the interferometric results with different data sources.
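
    The two parallelisation granularities mentioned above (image-level distribution of interferogram processing across distributed-memory workers, and pixel-level kernels suited to GPUs) can be sketched conceptually as follows. This is a stand-in written in Python, not the SBAS-DInSAR code; the function names and the trivial per-pixel "inversion" are hypothetical.

    ```python
    # Conceptual sketch of the two parallelisation granularities discussed above;
    # this is not the SBAS-DInSAR processing chain itself.
    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def process_interferogram(pair_id):
        """Image-granularity task: one interferogram pair per worker (in the real
        chain this would be co-registration, interferogram formation, filtering)."""
        rng = np.random.default_rng(pair_id)
        phase = rng.uniform(-np.pi, np.pi, size=(512, 512))
        return pair_id, phase.mean()

    def pixelwise_inversion(stack):
        """Pixel-granularity step: the per-pixel time-series inversion is the kind
        of embarrassingly parallel kernel the paper offloads to GPUs (here just a
        vectorised NumPy stand-in)."""
        return stack.mean(axis=0)

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(process_interferogram, range(8)))
        print(f"processed {len(results)} interferograms")
        stack = np.stack([np.random.default_rng(i).normal(size=(64, 64)) for i in range(8)])
        print("per-pixel result grid:", pixelwise_inversion(stack).shape)
    ```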

  9. Weathering the Storm: Developing a Spatial Data Infrastructure and Online Research Platform for Oil Spill Preparedness

    NASA Astrophysics Data System (ADS)

    Bauer, J. R.; Rose, K.; Romeo, L.; Barkhurst, A.; Nelson, J.; Duran-Sesin, R.; Vielma, J.

    2016-12-01

    Efforts to prepare for and reduce the risk of hazards, from both natural and anthropogenic sources, which threaten our oceans and coasts require an understanding of the dynamics and interactions between the physical, ecological, and socio-economic systems. Understanding these coupled dynamics is essential as offshore oil & gas exploration and production continues to push into harsher, more extreme environments where risks and uncertainty increase. However, working with these large, complex data from various sources and scales to assess risks and potential impacts associated with offshore energy exploration and production poses several challenges to research. In order to address these challenges, an integrated assessment model (IAM) was developed at the Department of Energy's (DOE) National Energy Technology Laboratory (NETL) that combines spatial data infrastructure and an online research platform to manage, process, analyze, and share these large, multidimensional datasets, research products, and the tools and models used to evaluate risk and reduce uncertainty for the entire offshore system, from the subsurface, through the water column, to coastal ecosystems and communities. Here, we will discuss the spatial data infrastructure and online research platform, NETL's Energy Data eXchange (EDX), that underpin the offshore IAM, providing information on how the framework combines multidimensional spatial data and spatio-temporal tools to evaluate risks to the complex matrix of potential environmental, social, and economic impacts stemming from modeled offshore hazard scenarios, such as oil spills or hurricanes. In addition, we will discuss the online analytics, tools, and visualization methods integrated into this framework that support availability and access to data, as well as allow for the rapid analysis and effective communication of analytical results to aid a range of decision-making needs.

  10. Consortium for materials development in space interaction with Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Lundquist, Charles A.; Seaquist, Valerie

    1992-01-01

    The Consortium for Materials Development in Space (CMDS) is one of seventeen Centers for the Commercial Development of Space (CCDS) sponsored by the Office of Commercial Programs of NASA. The CMDS formed at the University of Alabama in Huntsville in the fall of 1985. The Consortium activities therefore will have progressed for over a decade by the time Space Station Freedom (SSF) begins operation. The topic to be addressed here is: what are the natural, mutually productive relationships between the CMDS and SSF? For management and planning purposes, the Consortium organizes its activities into a number of individual projects. Normally, each project has a team of personnel from industry, university, and often government organizations. This is true for both product-oriented materials projects and for infrastructure projects. For various projects Space Station offers specific mutually productive relationships. First, SSF can provide a site for commercial operations that have evolved as a natural stage in the life cycle of individual projects. Efficiency and associated cost control lead to another important option. With SSF in place, there is the possibility to leave major parts of processing equipment in SSF, and only bring materials to SSF to be processed and return to earth the treated materials. This saves the transportation costs of repeatedly carrying heavy equipment to orbit and back to the ground. Another generic feature of commercial viability can be the general need to accomplish large through-put or large scale operations. The size of SSF lends itself to such needs. Also in addition to processing equipment, some of the other infrastructure capabilities developed in CCDS projects may be applied on SSF to support product activities. The larger SSF program may derive mutual benefits from these infrastructure abilities.

  11. Ocean Data Interoperability Platform (ODIP): developing a common global framework for marine data management through international collaboration

    NASA Astrophysics Data System (ADS)

    Glaves, Helen

    2015-04-01

    Marine research is rapidly moving away from traditional discipline specific science to a wider ecosystem level approach. This more multidisciplinary approach to ocean science requires large amounts of good quality, interoperable data to be readily available for use in an increasing range of new and complex applications. Significant amounts of marine data and information are already available throughout the world as a result of e-infrastructures being established at a regional level to manage and deliver marine data to the end user. However, each of these initiatives has been developed to address specific regional requirements and independently of those in other regions. Establishing a common framework for marine data management on a global scale necessitates that there is interoperability across these existing data infrastructures and active collaboration between the organisations responsible for their management. The Ocean Data Interoperability Platform (ODIP) project is promoting co-ordination between a number of these existing regional e-infrastructures including SeaDataNet and Geo-Seas in Europe, the Integrated Marine Observing System (IMOS) in Australia, the Rolling Deck to Repository (R2R) in the USA and the international IODE initiative. To demonstrate this co-ordinated approach the ODIP project partners are currently working together to develop several prototypes to test and evaluate potential interoperability solutions for solving the incompatibilities between the individual regional marine data infrastructures. However, many of the issues being addressed by the Ocean Data Interoperability Platform are not specific to marine science. For this reason many of the outcomes of this international collaborative effort are equally relevant and transferable to other domains.

  12. Possible illnesses: assessing the health impacts of the Chad Pipeline Project.

    PubMed Central

    Leonard, Lori

    2003-01-01

    Health impact assessments associated with large-scale infrastructure projects, such as the Chad-Cameroon Petroleum Development and Pipeline Project, monitor pre-existing conditions and new diseases associated with particular industries or changes in social organization. This paper suggests that illness self-reports constitute a complementary set of benchmarks to measure the health impacts of these projects, and presents data gathered in ongoing household and health service surveys in Ngalaba, a village near a major oilfield in Chad. In an initial 16-week period of weekly data collection, 363 people reported few of the clinically chronic or asymptomatic conditions expected according to health transition theory, and the overall level of illness reporting was low. Illnesses often were described by symptoms or lay diagnoses. Health care practitioners were consulted rarely; when they were, resources for diagnosis and treatment were limited. Clinically acute, short-duration illnesses (e.g. parasitic infections, toothaches, or hernias) were experienced as chronic conditions and were reported week after week. The low levels of illness reporting and lack of clinically chronic conditions are not taken to mean that rural Chadians are healthy. Rather, the patterns of morbidity reflect a particular local ecology in which health services are organized and care dispensed in ways that limit the possibilities for illness in terms of types of illnesses that can be diagnosed and reported, forms illnesses take, and ways in which illnesses are experienced. Illness self-reports are useful adjuncts to "harder" biological measures in HIAs, particularly in the context of large-scale infrastructure projects with explicit development goals. Rather than providing data on the extent to which harm has been mitigated by corporate, state, and donor activities, self-reports show the possibilities of illness in local contexts. PMID:12894327

  13. Large-scale, high-performance and cloud-enabled multi-model analytics experiments in the context of the Earth System Grid Federation

    NASA Astrophysics Data System (ADS)

    Fiore, S.; Płóciennik, M.; Doutriaux, C.; Blanquer, I.; Barbera, R.; Williams, D. N.; Anantharaj, V. G.; Evans, B. J. K.; Salomoni, D.; Aloisio, G.

    2017-12-01

    The increased model resolution in the development of comprehensive Earth System Models is rapidly leading to very large climate simulation outputs that pose significant scientific data management challenges in terms of data sharing, processing, analysis, visualization, preservation, curation, and archiving. Large-scale global experiments for Climate Model Intercomparison Projects (CMIP) have led to the development of the Earth System Grid Federation (ESGF), a federated data infrastructure which has been serving the CMIP5 experiment, providing access to 2PB of data for the IPCC Assessment Reports. In such a context, running a multi-model data analysis experiment is very challenging, as it requires the availability of a large amount of data related to multiple climate model simulations and scientific data management tools for large-scale data analytics. To address these challenges, a case study on climate model intercomparison data analysis has been defined and implemented in the context of the EU H2020 INDIGO-DataCloud project. The case study has been tested and validated on CMIP5 datasets, in the context of a large-scale international testbed involving several ESGF sites (LLNL, ORNL and CMCC), one orchestrator site (PSNC) and one more hosting INDIGO PaaS services (UPV). Additional ESGF sites, such as NCI (Australia) and a couple more in Europe, are also joining the testbed. The added value of the proposed solution is summarized in the following: it implements a server-side paradigm which limits data movement; it relies on a High-Performance Data Analytics (HPDA) stack to address performance; it exploits the INDIGO PaaS layer to support flexible, dynamic and automated deployment of software components; it provides user-friendly web access based on the INDIGO Future Gateway; and finally it integrates, complements and extends the support currently available through ESGF. Overall it provides a new "tool" for climate scientists to run multi-model experiments. At the time this contribution is being written, the proposed testbed represents the first implementation of a distributed large-scale, multi-model experiment in the ESGF/CMIP context, joining together server-side approaches for scientific data analysis, HPDA frameworks, end-to-end workflow management, and cloud computing.
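
    The server-side paradigm that "limits data movement" can be illustrated with a toy example: each data node reduces its own model output where the data lives, and only small summaries travel to the orchestrator for the multi-model combination. The sketch below is purely illustrative and does not use the actual ESGF or INDIGO-DataCloud APIs; the array sizes and function names are arbitrary.

    ```python
    # Toy illustration of the server-side, data-movement-limiting paradigm:
    # each "site" reduces its own model output locally and only small summaries
    # cross the network to the orchestrator.
    import numpy as np

    def site_side_reduce(model_field):
        """Runs where the data lives: reduce a large 3-D field (time, lat, lon)
        to a small per-model global-mean time series."""
        return model_field.mean(axis=(1, 2))

    def orchestrator_combine(summaries):
        """Runs at the orchestrator: multi-model ensemble mean of the
        already-reduced series."""
        return np.mean(np.stack(summaries), axis=0)

    rng = np.random.default_rng(0)
    models = [rng.normal(loc=i * 0.1, size=(120, 90, 180)) for i in range(3)]
    ensemble = orchestrator_combine([site_side_reduce(m) for m in models])
    print(ensemble.shape)   # (120,) -- only these small arrays ever moved
    ```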

  14. A scalable infrastructure for CMS data analysis based on OpenStack Cloud and Gluster file system

    NASA Astrophysics Data System (ADS)

    Toor, S.; Osmani, L.; Eerola, P.; Kraemer, O.; Lindén, T.; Tarkoma, S.; White, J.

    2014-06-01

    The challenge of providing a resilient and scalable computational and data management solution for massive scale research environments requires continuous exploration of new technologies and techniques. In this project the aim has been to design a scalable and resilient infrastructure for CERN HEP data analysis. The infrastructure is based on OpenStack components for structuring a private Cloud with the Gluster File System. We integrate the state-of-the-art Cloud technologies with the traditional Grid middleware infrastructure. Our test results show that the adopted approach provides a scalable and resilient solution for managing resources without compromising on performance and high availability.

  15. Private Sector Investment in Pakistani Agriculture: The Role of Infrastructural Investment

    DTIC Science & Technology

    1999-01-01

    private sector will be expected to play the major role in providing capital to the agricultural sector, with the government’s remaining involvement being largely one of furnishing basic infrastructure. The critical question of course is how willing is the private sector to commit capital to agricultural activities in this new policy environment? Has the private sector responded in the past to the increases in profitability provided by an expansion in infrastructure? If so, what types of infrastructure are most conducive in

  16. Fiber optic sensors for infrastructure applications

    DOT National Transportation Integrated Search

    1998-02-01

    Fiber optic sensor technology offers the possibility of implementing "nervous systems" for infrastructure elements that allow high performance, cost effective health and damage assessment systems to be achieved. This is possible, largely due to syner...

  17. Green Infrastructure

    EPA Science Inventory

    Large paved surfaces keep rain from infiltrating the soil and recharging groundwater supplies. Alternatively, Green infrastructure uses natural processes to reduce and treat stormwater in place by soaking up and storing water. These systems provide many environmental, social, an...

  18. An overview of LLNL high-energy short-pulse technology for advanced radiography of laser fusion experiments

    NASA Astrophysics Data System (ADS)

    Barty, C. P. J.; Key, M.; Britten, J.; Beach, R.; Beer, G.; Brown, C.; Bryan, S.; Caird, J.; Carlson, T.; Crane, J.; Dawson, J.; Erlandson, A. C.; Fittinghoff, D.; Hermann, M.; Hoaglan, C.; Iyer, A.; Jones, L., II; Jovanovic, I.; Komashko, A.; Landen, O.; Liao, Z.; Molander, W.; Mitchell, S.; Moses, E.; Nielsen, N.; Nguyen, H.-H.; Nissen, J.; Payne, S.; Pennington, D.; Risinger, L.; Rushford, M.; Skulina, K.; Spaeth, M.; Stuart, B.; Tietbohl, G.; Wattellier, B.

    2004-12-01

    The technical challenges and motivations for high-energy, short-pulse generation with NIF and possibly other large-scale Nd:glass lasers are reviewed. High-energy short-pulse generation (multi-kilojoule, picosecond pulses) will be possible via the adaptation of chirped pulse amplification laser techniques on NIF. Development of metre-scale, high-efficiency, high-damage-threshold final optics is a key technical challenge. In addition, deployment of high energy petawatt (HEPW) pulses on NIF is constrained by existing laser infrastructure and requires new, compact compressor designs and short-pulse, fibre-based, seed-laser systems. The key motivations for HEPW pulses on NIF are briefly outlined and include high-energy x-ray radiography, proton beam radiography, proton isochoric heating and tests of the fast ignitor concept for inertial confinement fusion.
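
    The "high energy petawatt" regime follows from simple arithmetic: a multi-kilojoule pulse compressed to roughly a picosecond corresponds to petawatt-class peak power. The figures below are round illustrative numbers, not NIF specifications.

    ```python
    # Illustrative peak-power arithmetic for a high-energy short pulse
    # (round numbers, not NIF specifications).
    energy_j = 2.0e3        # 2 kJ pulse energy
    duration_s = 1.0e-12    # 1 ps compressed pulse
    peak_power_w = energy_j / duration_s
    print(f"peak power ~ {peak_power_w:.1e} W (~{peak_power_w / 1e15:.0f} PW)")
    ```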

  19. Integrated Data Modeling and Simulation on the Joint Polar Satellite System Program

    NASA Technical Reports Server (NTRS)

    Roberts, Christopher J.; Boyce, Leslye; Smith, Gary; Li, Angela; Barrett, Larry

    2012-01-01

    The Joint Polar Satellite System is a modern, large-scale, complex, multi-mission aerospace program, and presents a variety of design, testing and operational challenges due to: (1) System Scope: multi-mission coordination, role, responsibility and accountability challenges stemming from porous/ill-defined system and organizational boundaries (including foreign policy interactions); (2) Degree of Concurrency: design, implementation, integration, verification and operation occurring simultaneously, at multiple scales in the system hierarchy; (3) Multi-Decadal Lifecycle: technical obsolescence, reliability and sustainment concerns, including those related to the organizational and industrial base; additionally, these systems tend to become embedded in the broader societal infrastructure, resulting in new system stakeholders with perhaps different preferences; (4) Barriers to Effective Communications: process and cultural issues that emerge due to geographic dispersion and as one spans boundaries including gov./contractor, NASA/Other USG, and international relationships.

  20. Spatially explicit modeling of particulate nutrient flux in Large global rivers

    NASA Astrophysics Data System (ADS)

    Cohen, S.; Kettner, A.; Mayorga, E.; Harrison, J. A.

    2017-12-01

    Water, sediment, nutrient and carbon fluxes along river networks have undergone considerable alterations in response to anthropogenic and climatic changes, with significant consequences to infrastructure, agriculture, water security, ecology and geomorphology worldwide. However, in a global setting, these changes in fluvial fluxes and their spatial and temporal characteristics are poorly constrained, due to the limited availability of continuous and long-term observations. We present results from a new global-scale particulate modeling framework (WBMsedNEWS) that combines the Global NEWS watershed nutrient export model with the spatially distributed WBMsed water and sediment model. We compare the model predictions against multiple observational datasets. The results indicate that the model is able to accurately predict particulate nutrient (Nitrogen, Phosphorus and Organic Carbon) fluxes on an annual time scale. Analysis of intra-basin nutrient dynamics and fluxes to global oceans is presented.

  1. Dynamic Collaboration Infrastructure for Hydrologic Science

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Idaszak, R.; Castillo, C.; Yi, H.; Jiang, F.; Jones, N.; Goodall, J. L.

    2016-12-01

    Data and modeling infrastructure is becoming increasingly accessible to water scientists. HydroShare is a collaborative environment that currently offers water scientists the ability to access modeling and data infrastructure in support of data intensive modeling and analysis. It supports the sharing of and collaboration around "resources", which are social objects defined to include both data and models in a structured standardized format. Users collaborate around these objects via comments, ratings, and groups. HydroShare also supports web services and cloud-based computation for the execution of hydrologic models and analysis and visualization of hydrologic data. However, the quantity and variety of data and modeling infrastructure that can be accessed from environments like HydroShare is increasing. Storage infrastructure can range from one's local PC to campus or organizational storage to storage in the cloud. Modeling or computing infrastructure can range from one's desktop to departmental clusters to national HPC resources to grid and cloud computing resources. How does one orchestrate this vast array of data and computing infrastructure without having to learn each new system? A common limitation across these systems is the lack of efficient integration between data transport mechanisms and the corresponding high-level services to support large distributed data and compute operations. A scientist running a hydrology model from their desktop may require processing a large collection of files across the aforementioned storage and compute resources and various national databases. To address these community challenges, a proof-of-concept prototype was created integrating HydroShare with RADII (Resource Aware Data-centric collaboration Infrastructure) to provide software infrastructure to enable the comprehensive and rapid dynamic deployment of what we refer to as "collaborative infrastructure." In this presentation we discuss the results of this proof-of-concept prototype, which enabled HydroShare users to readily instantiate virtual infrastructure marshaling arbitrary combinations, varieties, and quantities of distributed data and computing infrastructure in addressing big problems in hydrology.

  2. Ocean Data Interoperability Platform (ODIP): Developing a Common Framework for Marine Data Management on a Global Scale

    NASA Astrophysics Data System (ADS)

    Glaves, H. M.; Schaap, D.

    2014-12-01

    As marine research becomes increasingly multidisciplinary in its approach there has been a corresponding rise in the demand for large quantities of high quality interoperable data. A number of regional initiatives are already addressing this requirement through the establishment of e-infrastructures to improve the discovery and access of marine data. Projects such as Geo-Seas and SeaDataNet in Europe, Rolling Deck to Repository (R2R) in the USA and IMOS in Australia have implemented local infrastructures to facilitate the exchange of standardised marine datasets. However, each of these regional initiatives has been developed to address its own requirements and independently of other regions. To establish a common framework for marine data management on a global scale there is a need to develop interoperability solutions that can be implemented across these initiatives. Through a series of workshops attended by the relevant domain specialists, the Ocean Data Interoperability Platform (ODIP) project has identified areas of commonality between the regional infrastructures and used these as the foundation for the development of three prototype interoperability solutions addressing: (1) the use of brokering services for the purposes of providing access to the data available in the regional data discovery and access services, including via the GEOSS portal; (2) the development of interoperability between cruise summary reporting systems in Europe, the USA and Australia for routine harvesting of cruise data for delivery via the Partnership for Observation of Global Oceans (POGO) portal; and (3) the establishment of a Sensor Observation Service (SOS) for selected sensors installed on vessels and in real-time monitoring systems using sensor web enablement (SWE). These prototypes will be used to underpin the development of a common global approach to the management of marine data which can be promoted to the wider marine research community. ODIP is a community-led project that is currently focussed on regional initiatives in Europe, the USA and Australia but which is seeking to expand this framework to include other regional marine data infrastructures.
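
    The third prototype rests on the OGC Sensor Observation Service standard, in which observations are retrieved with a GetObservation request. The sketch below builds such a request as a key-value-pair URL; the endpoint, offering and observed property are hypothetical placeholders, not identifiers of a real ODIP service.

    ```python
    # Sketch of an OGC Sensor Observation Service (SOS) 2.0 KVP request of the
    # kind the SWE prototype would issue. The endpoint and identifiers below
    # are hypothetical placeholders.
    from urllib.parse import urlencode

    SOS_ENDPOINT = "https://example.org/sos"   # hypothetical endpoint

    params = {
        "service": "SOS",
        "version": "2.0.0",
        "request": "GetObservation",
        "offering": "vessel-thermosalinograph",          # hypothetical offering
        "observedProperty": "sea_water_temperature",     # hypothetical property
        "temporalFilter": "om:phenomenonTime,2014-06-01/2014-06-02",
    }

    print(f"{SOS_ENDPOINT}?{urlencode(params)}")
    ```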

  3. Parallel digital forensics infrastructure.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liebrock, Lorie M.; Duggan, David Patrick

    2009-10-01

    This report documents the architecture and implementation of a Parallel Digital Forensics infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital Forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets only expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics. This report documents the implementation of the parallel digital forensics (PDF) infrastructure architecture.
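
    To make the parallelism argument concrete, the sketch below distributes cryptographic hashing of a disk image's fixed-size chunks across worker processes. It is a minimal illustration of the kind of embarrassingly parallel forensic primitive such an infrastructure targets, not the project's actual tooling; the image path and size in the usage comment are hypothetical.

        import hashlib
        from multiprocessing import Pool

        CHUNK_SIZE = 64 * 1024 * 1024  # 64 MiB chunks

        def hash_chunk(args):
            """Hash one chunk of the image; (offset, path) -> (offset, sha256 hex)."""
            offset, path = args
            with open(path, "rb") as fh:
                fh.seek(offset)
                data = fh.read(CHUNK_SIZE)
            return offset, hashlib.sha256(data).hexdigest()

        def hash_image(path, size, workers=8):
            """Hash every chunk of a raw image in parallel; returns {offset: digest}."""
            offsets = [(off, path) for off in range(0, size, CHUNK_SIZE)]
            with Pool(workers) as pool:
                return dict(pool.map(hash_chunk, offsets))

        # Usage (hypothetical image file and size):
        # digests = hash_image("/evidence/disk.img", size=1 * 1024**4, workers=32)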

  4. Scaling up Dietary Data for Decision-Making in Low-Income Countries: New Technological Frontiers.

    PubMed

    Bell, Winnie; Colaiezzi, Brooke A; Prata, Cathleen S; Coates, Jennifer C

    2017-11-01

    Dietary surveys in low-income countries (LICs) are hindered by low investment in the necessary research infrastructure, including a lack of basic technology for data collection, links to food composition information, and data processing. The result has been a dearth of dietary data in many LICs because of the high cost and time burden associated with dietary surveys, which are typically carried out by interviewers using pencil and paper. This study reviewed innovative dietary assessment technologies and gauged their suitability to improve the quality and time required to collect dietary data in LICs. Predefined search terms were used to identify technologies from peer-reviewed and gray literature. A total of 78 technologies were identified and grouped into 6 categories: 1) computer- and tablet-based, 2) mobile-based, 3) camera-enabled, 4) scale-based, 5) wearable, and 6) handheld spectrometers. For each technology, information was extracted on a number of overarching factors, including the primary purpose, mode of administration, and data processing capabilities. Each technology was then assessed against predetermined criteria, including requirements for respondent literacy, battery life, requirements for connectivity, ability to measure macro- and micronutrients, and overall appropriateness for use in LICs. Few technologies reviewed met all the criteria, exhibiting both practical constraints and a lack of demonstrated feasibility for use in LICs, particularly for large-scale, population-based surveys. To increase collection of dietary data in LICs, development of a contextually adaptable, interviewer-administered dietary assessment platform is recommended. Additional investments in the research infrastructure are equally important to ensure time and cost savings for the user.

  5. Scaling up Dietary Data for Decision-Making in Low-Income Countries: New Technological Frontiers

    PubMed Central

    Bell, Winnie; Colaiezzi, Brooke A; Prata, Cathleen S

    2017-01-01

    Dietary surveys in low-income countries (LICs) are hindered by low investment in the necessary research infrastructure, including a lack of basic technology for data collection, links to food composition information, and data processing. The result has been a dearth of dietary data in many LICs because of the high cost and time burden associated with dietary surveys, which are typically carried out by interviewers using pencil and paper. This study reviewed innovative dietary assessment technologies and gauged their suitability to improve the quality and time required to collect dietary data in LICs. Predefined search terms were used to identify technologies from peer-reviewed and gray literature. A total of 78 technologies were identified and grouped into 6 categories: 1) computer- and tablet-based, 2) mobile-based, 3) camera-enabled, 4) scale-based, 5) wearable, and 6) handheld spectrometers. For each technology, information was extracted on a number of overarching factors, including the primary purpose, mode of administration, and data processing capabilities. Each technology was then assessed against predetermined criteria, including requirements for respondent literacy, battery life, requirements for connectivity, ability to measure macro- and micronutrients, and overall appropriateness for use in LICs. Few technologies reviewed met all the criteria, exhibiting both practical constraints and a lack of demonstrated feasibility for use in LICs, particularly for large-scale, population-based surveys. To increase collection of dietary data in LICs, development of a contextually adaptable, interviewer-administered dietary assessment platform is recommended. Additional investments in the research infrastructure are equally important to ensure time and cost savings for the user. PMID:29141974

  6. Evaluating Effectiveness of Green Infrastructure Application of Stormwater Best Management Practices in Protecting Stream Habitat and Biotic Condition in New England

    EPA Science Inventory

    The US EPA is developing assessment tools to evaluate the effectiveness of green infrastructure (GI) applied in stormwater best management practices (BMPs) at the small watershed (HUC12 or finer) scale. Based on analysis of historical monitoring data using boosted regression tre...
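
    The abstract mentions boosted regression trees fitted to historical monitoring data; a minimal sketch of that modeling step, using synthetic watershed predictors in place of the EPA's actual dataset, is shown below.

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.model_selection import train_test_split

        # Hedged sketch: fit a boosted regression tree model relating watershed
        # characteristics (random stand-ins here, not EPA data) to a biotic index.
        rng = np.random.default_rng(0)
        X = rng.random((500, 3))   # e.g. % impervious cover, % BMP-treated area, slope
        y = 0.8 - 0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.05 * rng.standard_normal(500)

        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
        model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
        model.fit(X_train, y_train)
        print("R^2 on held-out watersheds:", round(model.score(X_test, y_test), 3))
        print("Relative predictor influence:", model.feature_importances_.round(3))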

  7. WLCG scale testing during CMS data challenges

    NASA Astrophysics Data System (ADS)

    Gutsche, O.; Hajdu, C.

    2008-07-01

    The CMS computing model to process and analyze LHC collision data follows a data-location-driven approach and uses the WLCG infrastructure to provide access to GRID resources. In preparation for data taking, CMS tests its computing model during dedicated data challenges. An important part of the challenges is the test of user analysis, which poses a special challenge for the infrastructure with its random distributed access patterns. The CMS Remote Analysis Builder (CRAB) handles all interactions with the WLCG infrastructure transparently for the user. During the 2006 challenge, CMS set its goal to test the infrastructure at a scale of 50,000 user jobs per day using CRAB. Both direct submissions by individual users and automated submissions by robots were used to achieve this goal. A report will be given about the outcome of the user analysis part of the challenge using both the EGEE and OSG parts of the WLCG. In particular, the difference in submission between both GRID middlewares (resource broker vs. direct submission) will be discussed. In the end, an outlook for the 2007 data challenge is given.
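
    To put the 50,000-jobs-per-day target in perspective, the back-of-the-envelope sketch below converts it into a sustained submission rate and splits it between robot and individual-user submitters; the 80/20 split and robot capacity figures are illustrative assumptions, not the actual challenge configuration.

        # Hedged sketch: required submission rate for the 2006 challenge target and
        # a toy split between robot and user submissions (figures are assumptions).
        TARGET_JOBS_PER_DAY = 50_000

        jobs_per_hour = TARGET_JOBS_PER_DAY / 24
        jobs_per_second = TARGET_JOBS_PER_DAY / 86_400
        print(f"Sustained rate: {jobs_per_hour:.0f} jobs/hour ({jobs_per_second:.2f} jobs/s)")

        robot_share = 0.8                        # assumed fraction submitted by robots
        robots, jobs_per_robot_per_hour = 20, 60  # assumed robot fleet and throughput
        robot_capacity = robots * jobs_per_robot_per_hour * 24
        robot_jobs = min(robot_capacity, robot_share * TARGET_JOBS_PER_DAY)
        user_jobs = TARGET_JOBS_PER_DAY - robot_jobs
        print(f"Robot submissions/day: {robot_jobs:.0f}")
        print(f"Remaining user submissions/day: {user_jobs:.0f}")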

  8. Fiber optic sensors for infrastructure applications : final report.

    DOT National Transportation Integrated Search

    1998-02-01

    Fiber optic sensor technology offers the possibility of implementing "nervous systems" for infrastructure elements that allow high performance, cost effective health and damage assessment systems to be achieved. This is possible, largely due to syner...

  9. AIMES Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katz, Daniel S; Jha, Shantenu; Weissman, Jon

    2017-01-31

    This is the final technical report for the AIMES project. Many important advances in science and engineering are due to large-scale distributed computing. Notwithstanding this reliance, we are still learning how to design and deploy large-scale production Distributed Computing Infrastructures (DCI). This is evidenced by missing design principles for DCI, and an absence of generally acceptable and usable distributed computing abstractions. The AIMES project was conceived against this backdrop, following on the heels of a comprehensive survey of scientific distributed applications. AIMES laid the foundations to address the tripartite challenge of dynamic resource management, integrating information, and portable and interoperable distributed applications. Four abstractions were defined and implemented: skeleton, resource bundle, pilot, and execution strategy. The four abstractions were implemented into software modules and then aggregated into the AIMES middleware. This middleware successfully integrates information across the application layer (skeletons) and resource layer (Bundles), derives a suitable execution strategy for the given skeleton and enacts its execution by means of pilots on one or more resources, depending on the application requirements, and resource availabilities and capabilities.
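
    The four AIMES abstractions lend themselves to a compact object model. The sketch below mocks them up as plain data classes with a trivial strategy derivation, purely to illustrate how skeleton and bundle information could be combined; the class and field names are invented for illustration and do not reflect the AIMES middleware's actual interfaces.

        from dataclasses import dataclass
        from typing import List

        @dataclass
        class Skeleton:                 # application layer: abstract task description
            tasks: int
            core_hours_per_task: float

        @dataclass
        class ResourceBundle:           # resource layer: aggregated resource information
            name: str
            free_cores: int
            queue_wait_hours: float

        @dataclass
        class Pilot:                    # placeholder job that later executes real tasks
            resource: str
            cores: int

        @dataclass
        class ExecutionStrategy:
            pilots: List[Pilot]

        def derive_strategy(skeleton: Skeleton, bundles: List[ResourceBundle]) -> ExecutionStrategy:
            """Greedy toy strategy: place pilots on resources with the shortest queues."""
            ordered = sorted(bundles, key=lambda b: b.queue_wait_hours)
            cores_needed = skeleton.tasks  # assume one core per concurrent task
            pilots = []
            for bundle in ordered:
                if cores_needed <= 0:
                    break
                cores = min(bundle.free_cores, cores_needed)
                pilots.append(Pilot(bundle.name, cores))
                cores_needed -= cores
            return ExecutionStrategy(pilots)

        strategy = derive_strategy(
            Skeleton(tasks=512, core_hours_per_task=2.0),
            [ResourceBundle("hpc-a", 256, 4.0), ResourceBundle("cloud-b", 1024, 0.5)],
        )
        print([(p.resource, p.cores) for p in strategy.pilots])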

  10. AIMES Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weissman, Jon; Katz, Dan; Jha, Shantenu

    2017-01-31

    This is the final technical report for the AIMES project. Many important advances in science and engineering are due to large-scale distributed computing. Notwithstanding this reliance, we are still learning how to design and deploy large-scale production Distributed Computing Infrastructures (DCI). This is evidenced by missing design principles for DCI, and an absence of generally acceptable and usable distributed computing abstractions. The AIMES project was conceived against this backdrop, following on the heels of a comprehensive survey of scientific distributed applications. AIMES laid the foundations to address the tripartite challenge of dynamic resource management, integrating information, and portable and interoperable distributed applications. Four abstractions were defined and implemented: skeleton, resource bundle, pilot, and execution strategy. The four abstractions were implemented into software modules and then aggregated into the AIMES middleware. This middleware successfully integrates information across the application layer (skeletons) and resource layer (Bundles), derives a suitable execution strategy for the given skeleton and enacts its execution by means of pilots on one or more resources, depending on the application requirements, and resource availabilities and capabilities.

  11. Effectively Transparent Front Contacts for Optoelectronic Devices

    DOE PAGES

    Saive, Rebecca; Borsuk, Aleca M.; Emmer, Hal S.; ...

    2016-06-10

    Effectively transparent front contacts for optoelectronic devices achieve a measured transparency of up to 99.9% and a measured sheet resistance of 4.8 Ω/sq. These 3D microscale triangular cross-section grid fingers redirect incoming photons efficiently to the active semiconductor area and can replace standard grid fingers as well as transparent conductive oxide layers in optoelectronic devices. Optoelectronic devices such as light emitting diodes, photodiodes, and solar cells play an important and expanding role in modern technology. Photovoltaics is one of the largest optoelectronic industry sectors and an ever-increasing component of the world's rapidly growing renewable carbon-free electricity generation infrastructure. In recent years, the photovoltaics field has dramatically expanded owing to the large-scale manufacture of inexpensive crystalline Si and thin film cells and modules. The current record efficiency (η = 25.6%) Si solar cell utilizes a heterostructure intrinsic thin layer (HIT) design[1] to enable increased open circuit voltage, while more mass-manufacturable solar cell architectures feature front contacts.[2, 3] Thus improved solar cell front contact designs are important for future large-scale photovoltaics with even higher efficiency.
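
    A rough sense of why finger geometry matters can be had from textbook grid-loss estimates. The sketch below compares shading loss for conventional flat fingers against the reported 99.9% effective transparency and applies a standard lateral sheet-resistance loss approximation; the input numbers and the loss formula are generic textbook assumptions, not figures or methods from this paper.

        # Hedged back-of-the-envelope comparison; all inputs are generic assumptions.
        finger_width_um = 50.0      # assumed conventional screen-printed finger width
        finger_pitch_mm = 2.0       # assumed finger spacing
        shading_loss = (finger_width_um * 1e-3) / finger_pitch_mm
        print(f"Conventional flat-finger shading loss: {shading_loss:.1%}")
        print("Effectively transparent contacts (reported): ~0.1% optical loss")

        # Common textbook approximation for lateral sheet-resistance power loss:
        #   p_loss ~ R_sheet * S^2 * J_mp / (12 * V_mp), with S the finger spacing.
        r_sheet = 4.8               # ohm/sq, reported sheet resistance
        spacing_cm = 0.2            # assumed 2 mm finger spacing
        j_mp = 0.035                # A/cm^2, assumed operating current density
        v_mp = 0.55                 # V, assumed operating voltage
        p_loss = r_sheet * spacing_cm**2 * j_mp / (12 * v_mp)
        print(f"Lateral sheet-resistance loss at 4.8 ohm/sq: {p_loss:.2%}")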

  12. INFN, IT the GENIUS grid portal and the robot certificates to perform phylogenetic analysis on large scale: a success story from the International LIBI project

    NASA Astrophysics Data System (ADS)

    Barbera, Roberto; Donvito, Giacinto; Falzone, Alberto; La Rocca, Giuseppe; Maggi, Giorgio Pietro; Milanesi, Luciano; Vicario, Saverio

    This paper describes the solution proposed by INFN to allow users who do not own a personal digital certificate, and therefore do not belong to any specific Virtual Organization (VO), to access Grid infrastructures via the GENIUS Grid portal enabled with robot certificates. Robot certificates, also known as portal certificates, are associated with a specific application that the user wants to share with the whole Grid community and have recently been introduced by the EUGridPMA (European Policy Management Authority for Grid Authentication) to perform automated tasks on Grids on behalf of users. They have proven extremely useful for automating grid service monitoring, data processing production, distributed data collection systems, etc. In this paper, robot certificates have been used to allow bioinformaticians involved in the Italian LIBI project to perform large-scale phylogenetic analyses. The distributed environment set up in this work greatly simplifies grid access for occasional users and represents a valuable step forward in widening the community of Grid users.
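
    A portal holding a robot certificate typically creates a short-lived VOMS proxy on the user's behalf before submitting work. The sketch below shells out to voms-proxy-init for that step; the exact flags, VO name, and file paths are assumptions for illustration and may differ from the GENIUS/LIBI deployment.

        import subprocess

        # Hedged sketch: create a VOMS proxy from a robot certificate so the portal
        # can act on behalf of a user. VO name, paths, and flags are assumptions.
        ROBOT_CERT = "/etc/grid-security/robot/robotcert.pem"   # hypothetical path
        ROBOT_KEY = "/etc/grid-security/robot/robotkey.pem"     # hypothetical path
        VO = "libi.example"                                      # hypothetical VO name

        def make_robot_proxy(proxy_path="/tmp/robot_proxy", lifetime="24:00"):
            cmd = [
                "voms-proxy-init",
                "--voms", VO,
                "--cert", ROBOT_CERT,
                "--key", ROBOT_KEY,
                "--out", proxy_path,
                "--valid", lifetime,
            ]
            subprocess.run(cmd, check=True)
            return proxy_path

        # The returned proxy would then be handed to the portal's job-submission
        # layer to run the phylogenetic analysis on behalf of the user.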

  13. A microkernel design for component-based parallel numerical software systems.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balay, S.

    1999-01-13

    What is the minimal software infrastructure and what type of conventions are needed to simplify development of sophisticated parallel numerical application codes using a variety of software components that are not necessarily available as source code? We propose an opaque object-based model where the objects are dynamically loadable from the file system or network. The microkernel required to manage such a system needs to include, at most: (1) a few basic services, namely a mechanism for loading objects at run time via dynamic link libraries, and consistent schemes for error handling and memory management; and (2) selected methods that all objects share, to deal with object life (destruction, reference counting, relationships), and object observation (viewing, profiling, tracing). We are experimenting with these ideas in the context of extensible numerical software within the ALICE (Advanced Large-scale Integrated Computational Environment) project, where we are building the microkernel to manage the interoperability among various tools for large-scale scientific simulations. This paper presents some preliminary observations and conclusions from our work with microkernel design.
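
    The conventions the paper proposes (dynamic loading, reference counting, uniform error handling, object observation) can be mocked up compactly. The sketch below is a language-shifted illustration of those conventions in Python; the original work targeted dynamically linked native components, and none of the names below come from the ALICE project.

        import importlib

        class KernelError(RuntimeError):
            """Single error type so all components report failures consistently."""

        class KernelObject:
            def __init__(self):
                self._refcount = 1
            def reference(self):
                self._refcount += 1
                return self
            def destroy(self):
                self._refcount -= 1
                if self._refcount == 0:
                    self.finalize()
            def finalize(self):            # object-specific cleanup hook
                pass
            def view(self):                # object observation (viewing/profiling)
                return f"<{type(self).__name__} refs={self._refcount}>"

        def load_component(module_name, class_name):
            """Load a component class at run time from the module path (or network)."""
            try:
                module = importlib.import_module(module_name)
                return getattr(module, class_name)
            except (ImportError, AttributeError) as exc:
                raise KernelError(f"cannot load {module_name}.{class_name}") from exc

        # Usage (hypothetical component): Solver = load_component("mysolvers", "KrylovSolver")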

  14. Impact of landslides induced by 2014 northeast monsoon extreme rain in Malaysia

    NASA Astrophysics Data System (ADS)

    Fukuoka, Hiroshi; Koay, Swee Peng; Sakai, Naoki; Lateh, Habibah

    2016-04-01

    In December 2014, the northeast monsoon brought extreme rainfall to Malaysia, mainly to the east coast of Peninsular Malaysia and the coastal areas of Sabah and Sarawak. In that month, many rain gauges in this area recorded more than 1,000 mm, about one third of Malaysia's average annual precipitation (2,850 mm/year). This unexpected heavy rainfall induced landslides and floods that caused losses equivalent to several hundred million USD: thousands of residents were evacuated from their hometowns for months, and factories, schools and businesses were shut down for weeks. Among the nation's major infrastructure, the East-West Highway was damaged by 21 landslides, two of which were large enough to cut off the highway for a week. The authors had installed landslide monitoring instruments at reactivated landslide sites along the highway at N05° 36.042' E101° 35.546'. Records from in-situ inclinometers showed clear deformation from 17 to 26 December, associated with corresponding changes in the piezometer records used for groundwater level monitoring. Several cracks appeared in the slope.
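
    Detecting the onset of deformation in records like those from the in-situ inclinometers is often done with a simple rate threshold. The sketch below applies one to synthetic daily displacement readings; the threshold and data are illustrative, not the values observed at the East-West Highway site.

        # Hedged sketch: flag the onset of slope deformation when the day-to-day
        # displacement increment exceeds a threshold. Data and threshold are illustrative.
        daily_displacement_mm = [0.1, 0.1, 0.2, 0.1, 0.3, 1.8, 4.5, 9.0, 14.2, 15.0]
        dates = [f"2014-12-{day:02d}" for day in range(14, 24)]
        RATE_THRESHOLD_MM_PER_DAY = 1.0

        def deformation_onset(displacements, threshold):
            for i in range(1, len(displacements)):
                if displacements[i] - displacements[i - 1] > threshold:
                    return i
            return None

        idx = deformation_onset(daily_displacement_mm, RATE_THRESHOLD_MM_PER_DAY)
        if idx is not None:
            print("Accelerating deformation first detected on", dates[idx])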

  15. Computational biomedicine: a challenge for the twenty-first century.

    PubMed

    Coveney, Peter V; Shublaq, Nour W

    2012-01-01

    With the relentless increase of computer power and the widespread availability of digital patient-specific medical data, we are now entering an era when it is becoming possible to develop predictive models of human disease and pathology, which can be used to support and enhance clinical decision-making. The approach amounts to a grand challenge to computational science insofar as we need to be able to provide seamless yet secure access to large-scale heterogeneous personal healthcare data in a facile way, typically through complex workflows, some parts of which may need to run on high-performance computers, integrated into clinical decision support software. In this paper, we review the state of the art in terms of case studies drawn from neurovascular pathologies and HIV/AIDS. These studies are representative of a large number of projects currently being performed within the Virtual Physiological Human initiative. They make demands of information technology at many scales, from the desktop to national and international infrastructures for data storage and processing, linked by high performance networks.

  16. Unbundling the corporation.

    PubMed

    Hagel, J; Singer, M

    1999-01-01

    No matter how monolithic they may seem, most companies are really engaged in three kinds of businesses. One business attracts customers. Another develops products. The third oversees operations. Although organizationally intertwined, these businesses have conflicting characteristics. It takes a big investment to find and develop a relationship with a customer, so profitability hinges on achieving economies of scope. But speed, not scope, drives the economics of product innovation. And the high fixed costs of capital-intensive infrastructure businesses require economies of scale. Scope, speed, and scale can't be optimized simultaneously, so trade-offs have to be made when the three businesses are bundled into one corporation. Historically, they have been bundled because the interaction costs--the friction--incurred by separating them were too high. But we are on the verge of a worldwide reduction in interaction costs, the authors contend, as electronic networks drive down the costs of communicating and of exchanging data. Activities that companies have always believed were central to their businesses will suddenly be offered by new, specialized competitors that won't have to make trade-offs. Ultimately, the authors predict, traditional businesses will unbundle and then rebundle into large infrastructure and customer-relationship businesses and small, nimble product innovation companies. And executives in many industries will be forced to ask the most basic question about their companies: What business are we really in? Their answer will determine their fate in an increasingly frictionless economy.

  17. Constructing a Foundational Platform Driven by Japan's K Supercomputer for Next-Generation Drug Design.

    PubMed

    Brown, J B; Nakatsui, Masahiko; Okuno, Yasushi

    2014-12-01

    The cost of pharmaceutical R&D has risen enormously, both worldwide and in Japan. However, Japan faces a particularly difficult situation in that its population is aging rapidly, and the cost of pharmaceutical R&D affects not only the industry but the entire medical system as well. To attempt to reduce costs, the newly launched K supercomputer is available for big data drug discovery and structural simulation-based drug discovery. We have implemented both primary (direct) and secondary (infrastructure, data processing) methods for the two types of drug discovery, custom-tailored to maximally use the 88,128 compute nodes/CPUs of K, and evaluated the implementations. We present two types of results. In the first, we executed the virtual screening of nearly 19 billion compound-protein interactions, and calculated the accuracy of predictions against publicly available experimental data. In the second investigation, we implemented a very computationally intensive binding free energy algorithm, and found that our computed binding free energies were considerably accurate when validated against another type of publicly available experimental data. The common feature of both result types is the scale at which computations were executed. The frameworks presented in this article provide perspectives and applications that, while tuned to the computing resources available in Japan, are equally applicable to any equivalent large-scale infrastructure provided elsewhere. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
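
    The scale quoted, roughly 19 billion compound-protein scores across 88,128 nodes, implies a straightforward static partitioning. The sketch below works out the per-node share and a simple block decomposition; the compound and protein counts are illustrative choices whose product is near the quoted total, not the study's actual libraries.

        # Hedged arithmetic sketch: block-partitioning ~19 billion compound-protein
        # interactions over K's 88,128 nodes. Compound/protein counts are illustrative.
        NODES = 88_128
        compounds = 7_600_000        # illustrative library size
        proteins = 2_500             # illustrative target count
        total_pairs = compounds * proteins
        print(f"Total interactions: {total_pairs:,}")
        print(f"Average pairs per node: {total_pairs // NODES:,}")

        def node_block(rank, n_items=compounds, n_ranks=NODES):
            """Contiguous block of compound indices assigned to one node."""
            base, extra = divmod(n_items, n_ranks)
            start = rank * base + min(rank, extra)
            size = base + (1 if rank < extra else 0)
            return start, start + size

        print("Node 0 screens compounds", node_block(0))
        print("Node 88127 screens compounds", node_block(NODES - 1))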

  18. mHealth in Sub-Saharan Africa

    PubMed Central

    Betjeman, Thomas J.; Soghoian, Samara E.; Foran, Mark P.

    2013-01-01

    Mobile phone penetration rates have reached 63% in sub-Saharan Africa (SSA) and are projected to pass 70% by 2013. In SSA, millions of people who never used traditional landlines now use mobile phones on a regular basis. Mobile health, or mHealth, is the utilization of short messaging service (SMS), wireless data transmission, voice calling, and smartphone applications to transmit health-related information or direct care. This systematic review analyzes and summarizes key articles from the current body of peer-reviewed literature on PubMed on the topic of mHealth in SSA. Studies included in the review demonstrate that mHealth can improve and reduce the cost of patient monitoring, medication adherence, and healthcare worker communication, especially in rural areas. mHealth has also shown initial promise in emergency and disaster response, helping standardize, store, analyze, and share patient information. Challenges for mHealth implementation in SSA include operating costs, knowledge, infrastructure, and policy among many others. Further studies of the effectiveness of mHealth interventions are being hindered by similar factors as well as a lack of standardization in study design. Overall, the current evidence is not strong enough to warrant large-scale implementation of existing mHealth interventions in SSA, but rapid progress of both infrastructure and mHealth-related research in the region could justify scale-up of the most promising programs in the near future. PMID:24369460

  19. BluePyOpt: Leveraging Open Source Software and Cloud Infrastructure to Optimise Model Parameters in Neuroscience

    PubMed Central

    Van Geit, Werner; Gevaert, Michael; Chindemi, Giuseppe; Rössert, Christian; Courcol, Jean-Denis; Muller, Eilif B.; Schürmann, Felix; Segev, Idan; Markram, Henry

    2016-01-01

    At many scales in neuroscience, appropriate mathematical models take the form of complex dynamical systems. Parameterizing such models to conform to the multitude of available experimental constraints is a global non-linear optimisation problem with a complex fitness landscape, requiring numerical techniques to find suitable approximate solutions. Stochastic optimisation approaches, such as evolutionary algorithms, have been shown to be effective, but often the setting up of such optimisations and the choice of a specific search algorithm and its parameters is non-trivial, requiring domain-specific expertise. Here we describe BluePyOpt, a Python package targeted at the broad neuroscience community to simplify this task. BluePyOpt is an extensible framework for data-driven model parameter optimisation that wraps and standardizes several existing open-source tools. It simplifies the task of creating and sharing these optimisations, and the associated techniques and knowledge. This is achieved by abstracting the optimisation and evaluation tasks into various reusable and flexible discrete elements according to established best-practices. Further, BluePyOpt provides methods for setting up both small- and large-scale optimisations on a variety of platforms, ranging from laptops to Linux clusters and cloud-based compute infrastructures. The versatility of the BluePyOpt framework is demonstrated by working through three representative neuroscience specific use cases. PMID:27375471
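
    To convey the shape of the task BluePyOpt automates, the sketch below runs a bare-bones evolutionary search for two model parameters against a synthetic fitness function. It illustrates the concept only and deliberately does not use BluePyOpt's own API, whose class names and signatures should be taken from the package documentation; the target values, bounds, and selection scheme are all illustrative assumptions.

        import random

        # Hedged conceptual sketch of evolutionary parameter fitting (the task
        # BluePyOpt automates); this is NOT the BluePyOpt API, just the idea.
        random.seed(0)
        TARGET = (1.2e-4, 3.5e-3)   # hypothetical "true" conductances to recover
        BOUNDS = [(0.0, 1e-3), (0.0, 1e-2)]

        def fitness(params):
            # Stand-in for running a neuron simulation and scoring feature errors.
            return sum((p - t) ** 2 for p, t in zip(params, TARGET))

        def random_individual():
            return [random.uniform(lo, hi) for lo, hi in BOUNDS]

        def mutate(ind, sigma=0.1):
            return [min(max(p + random.gauss(0, sigma * (hi - lo)), lo), hi)
                    for p, (lo, hi) in zip(ind, BOUNDS)]

        population = [random_individual() for _ in range(50)]
        for generation in range(30):
            population.sort(key=fitness)
            parents = population[:10]          # simple truncation selection
            population = parents + [mutate(random.choice(parents)) for _ in range(40)]

        best = min(population, key=fitness)
        print("Best parameters:", [f"{p:.2e}" for p in best], "error:", f"{fitness(best):.3e}")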

  20. The Role of Social Media in the Civic Co-Management of Urban Infrastructure Resilience

    NASA Astrophysics Data System (ADS)

    Turpin, E.; Holderness, T.; Wickramasuriya, R.

    2014-12-01

    As cities evolve to become increasingly complex systems of people and interconnected infrastructure, the impacts of extreme events and long-term climatological change are significantly heightened (Walsh et al. 2011). Understanding the resilience of urban systems and the impacts of infrastructure failure is therefore key to understanding the adaptability of cities to climate change (Rosenzweig 2011). Such information is particularly critical in developing nations which are predicted to bear the brunt of climate change (Douglas et al., 2008), but often lack the resources and data required to make informed decisions regarding infrastructure and societal resilience (e.g. Paar & Rekittke 2011). We propose that mobile social media in a people-as-sensors paradigm provides a means of monitoring the response of a city to cascading infrastructure failures induced by extreme weather events. Such an approach is welcomed in developing nations where crowd-sourced data are increasingly being used as an alternative to missing or incomplete formal data sources to help solve infrastructure challenges (Holderness 2014). In this paper we present PetaJakarta.org as a case study that harnesses the power of social media to gather, sort and display information about flooding for residents of Jakarta, Indonesia in real time, recuperating the failures of infrastructure and monitoring systems through a web of social media connections. Our GeoSocial Intelligence Framework enables the capture and comprehension of significant time-critical information to support decision-making, and as a means of transparent communication, while maintaining user privacy, to enable civic co-management processes to aid city-scale climate adaptation and resilience. PetaJakarta empowers community residents to collect and disseminate situational information about flooding, via the social media network Twitter, to provide city-scale decision support for Jakarta's Emergency Management Team, and a neighbourhood-scale public information service for individuals and communities to alert them of nearby flood events. References: Douglas, I., et al. (2008) Environment & Urbanization; Holderness, T. (2014) IEEE Technology & Society Magazine; Paar, P. & Rekittke, J. (2011) Future Internet; Rosenzweig, C. (2011) Scientific American; Walsh, C. L., et al. (2011) Urban Design & Planning.
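
    At its simplest, the people-as-sensors idea reduces to filtering geotagged reports by keyword and location. The sketch below does this over a list of already-collected tweet records; the field names, the keyword "banjir" (Indonesian for flood), and the Jakarta bounding box are illustrative assumptions, not PetaJakarta's actual ingestion pipeline.

        # Hedged sketch: filter crowd-sourced reports to geotagged flood mentions
        # inside a rough Jakarta bounding box. Field names and the box are assumptions.
        JAKARTA_BBOX = {"lat": (-6.4, -6.0), "lon": (106.6, 107.0)}

        def is_flood_report(tweet):
            text_match = "banjir" in tweet.get("text", "").lower()
            coords = tweet.get("coordinates")
            if not (text_match and coords):
                return False
            lat, lon = coords
            return (JAKARTA_BBOX["lat"][0] <= lat <= JAKARTA_BBOX["lat"][1]
                    and JAKARTA_BBOX["lon"][0] <= lon <= JAKARTA_BBOX["lon"][1])

        sample_stream = [
            {"text": "Banjir di Kampung Melayu, 50 cm", "coordinates": (-6.22, 106.87)},
            {"text": "Traffic jam near Monas", "coordinates": (-6.18, 106.83)},
            {"text": "banjir lagi...", "coordinates": None},
        ]
        reports = [t for t in sample_stream if is_flood_report(t)]
        print(f"{len(reports)} flood report(s) inside the Jakarta bounding box")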
