Sample records for national computational infrastructure

  1. The High-Performance Computing and Communications program, the national information infrastructure and health care.

    PubMed Central

    Lindberg, D A; Humphreys, B L

    1995-01-01

    The High-Performance Computing and Communications (HPCC) program is a multiagency federal effort to advance the state of computing and communications and to provide the technologic platform on which the National Information Infrastructure (NII) can be built. The HPCC program supports the development of high-speed computers, high-speed telecommunications, related software and algorithms, education and training, and information infrastructure technology and applications. The vision of the NII is to extend access to high-performance computing and communications to virtually every U.S. citizen so that the technology can be used to improve the civil infrastructure, lifelong learning, energy management, health care, etc. Development of the NII will require resolution of complex economic and social issues, including information privacy. Health-related applications supported under the HPCC program and NII initiatives include connection of health care institutions to the Internet; enhanced access to gene sequence data; the "Visible Human" Project; and test-bed projects in telemedicine, electronic patient records, shared informatics tool development, and image systems. PMID:7614116

  2. Large-Scale Data Collection Metadata Management at the National Computational Infrastructure

    NASA Astrophysics Data System (ADS)

    Wang, J.; Evans, B. J. K.; Bastrakova, I.; Ryder, G.; Martin, J.; Duursma, D.; Gohar, K.; Mackey, T.; Paget, M.; Siddeswara, G.

    2014-12-01

    Data Collection management has become an essential activity at the National Computational Infrastructure (NCI) in Australia. NCI's partners (CSIRO, Bureau of Meteorology, Australian National University, and Geoscience Australia), supported by the Australian Government and the Research Data Storage Infrastructure (RDSI), have established a national data resource that is co-located with high-performance computing. This paper addresses the metadata management of these data assets over their lifetime. NCI manages 36 data collections (10+ PB) categorised as earth system sciences, climate and weather model data assets and products, earth and marine observations and products, geosciences, terrestrial ecosystem, water management and hydrology, astronomy, social science, and biosciences. The data is largely sourced from NCI partners, the custodians of many of the national scientific records, and major research community organisations. The data is made available in an HPC and data-intensive environment - a ~56,000-core supercomputer, virtual labs on a 3,000-core cloud system, and data services. By assembling these large national assets, new opportunities have arisen to harmonise the data collections, making a powerful cross-disciplinary resource. To support the overall management, a Data Management Plan (DMP) has been developed to record the workflows, procedures, key contacts, and responsibilities. The DMP has fields that can be exported to the ISO 19115 schema and to the collection-level catalogue of GeoNetwork. The subset- or file-level metadata catalogues are linked with the collection level through parent-child relationships defined using UUIDs. A number of tools have been developed that support interactive metadata management, bulk loading of data, and computational workflows or data pipelines. NCI creates persistent identifiers for each of the assets. The data collection is tracked over its lifetime, and the recognition of the data providers, data
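
    A minimal sketch of the parent-child catalogue linkage described above, using Python dictionaries in place of full ISO 19115 records; the parentIdentifier convention matches GeoNetwork practice, but the field subset, titles, and UUIDs here are hypothetical:

    ```python
    import uuid

    # Hypothetical collection-level record (simplified ISO 19115-style fields).
    collection = {
        "fileIdentifier": str(uuid.uuid4()),          # persistent UUID for the collection
        "title": "Earth System Sciences Model Data",  # illustrative title only
        "parentIdentifier": None,                     # top of the hierarchy
    }

    # A subset/file-level record points back to its collection via parentIdentifier.
    subset = {
        "fileIdentifier": str(uuid.uuid4()),
        "title": "Ocean temperature, 2010-2014 subset",
        "parentIdentifier": collection["fileIdentifier"],
    }

    def is_child_of(child: dict, parent: dict) -> bool:
        """True if the child catalogue record links to the parent collection."""
        return child["parentIdentifier"] == parent["fileIdentifier"]

    assert is_child_of(subset, collection)
    ```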

  3. Critical infrastructure protection : significant challenges in developing national capabilities

    DOT National Transportation Integrated Search

    2001-04-01

    To address the concerns about protecting the nation's critical computer-dependent infrastructure, this General Accounting Office (GAO) report describes the progress of the National Infrastructure Protection Center (NIPC) in (1) developing national ca...

  4. The Czech National Grid Infrastructure

    NASA Astrophysics Data System (ADS)

    Chudoba, J.; Křenková, I.; Mulač, M.; Ruda, M.; Sitera, J.

    2017-10-01

    The Czech National Grid Infrastructure is operated by MetaCentrum, a CESNET department responsible for coordinating and managing activities related to distributed computing. CESNET, as the Czech National Research and Education Network (NREN), provides many e-infrastructure services, which are used by 94% of the scientific and research community in the Czech Republic. Computing and storage resources owned by different organizations are connected by a network fast enough to provide transparent access to all resources. We describe the computing infrastructure in more detail; it is based on several different technologies and covers grid, cloud, and map-reduce environments. While the largest share of CPUs is still accessible via distributed TORQUE servers, providing an environment for long batch jobs, part of the infrastructure is available via standard EGI tools, a subset of NGI resources is provided to the EGI FedCloud environment with a cloud interface, and a Hadoop cluster is provided by the same e-infrastructure. A broad spectrum of computing servers is offered; users can choose from standard 2-CPU servers to large SMP machines with up to 6 TB of RAM or servers with GPU cards. Different groups have different priorities on various resources, and resource owners can even have exclusive access. The software is distributed via AFS. Storage servers offering up to tens of terabytes of disk space to individual users are connected via NFSv4 on top of GPFS, and access to long-term HSM storage with petabyte capacity is also provided. An overview of available resources and recent usage statistics will be given.
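
    For readers unfamiliar with the TORQUE batch route mentioned above, a sketch of a job submission follows; the queue name, resource limits, and command are hypothetical placeholders rather than MetaCentrum settings, and a working TORQUE/PBS client (qsub) is assumed:

    ```python
    import subprocess

    # A minimal TORQUE/PBS job script; queue, walltime, and command are
    # hypothetical and would come from the site's own documentation.
    job_script = """#!/bin/bash
    #PBS -N demo_job
    #PBS -q default
    #PBS -l nodes=1:ppn=2,mem=4gb,walltime=01:00:00
    cd "$PBS_O_WORKDIR"
    ./run_simulation
    """

    # qsub reads the job script from stdin and prints the assigned job ID.
    result = subprocess.run(["qsub"], input=job_script,
                            capture_output=True, text=True)
    print("submitted job:", result.stdout.strip())
    ```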

  5. The National Information Infrastructure: Agenda for Action.

    ERIC Educational Resources Information Center

    Department of Commerce, Washington, DC. Information Infrastructure Task Force.

    The National Information Infrastructure (NII) is planned as a web of communications networks, computers, databases, and consumer electronics that will put vast amounts of information at the users' fingertips. Private sector firms are beginning to develop this infrastructure, but essential roles remain for the Federal Government. The National…

  6. National Computational Infrastructure for Lattice Gauge Theory SciDAC-2 Closeout Report Indiana University Component

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gottlieb, Steven Arthur; DeTar, Carleton; Toussaint, Doug

    This is the closeout report for the Indiana University portion of the National Computational Infrastructure for Lattice Gauge Theory project supported by the United States Department of Energy under the SciDAC program. It includes information about activities at Indiana University, the University of Arizona, and the University of Utah, as those three universities coordinated their activities.

  7. Information technology developments within the national biological information infrastructure

    USGS Publications Warehouse

    Cotter, G.; Frame, M.T.

    2000-01-01

    Looking out an office window or exploring a community park, one can easily see the tremendous challenges that biological information presents to the computer science community. Biological information varies in format and content depending on whether it pertains to a particular species (e.g., the Brown Tree Snake) or to a specific ecosystem, which often includes multiple species, land use characteristics, and geospatially referenced information. The complexity and uniqueness of each individual species or ecosystem do not easily lend themselves to today's computer science tools and applications. To address the challenges that the biological enterprise presents, the National Biological Information Infrastructure (NBII) (http://www.nbii.gov) was established in 1993. The NBII is designed to address these issues on a national scale within the United States, and through international partnerships abroad. This paper discusses current computer science efforts within the National Biological Information Infrastructure Program and future computer science research endeavors that are needed to address the ever-growing issues related to our nation's biological concerns.

  8. Privacy and the National Information Infrastructure.

    ERIC Educational Resources Information Center

    Rotenberg, Marc

    1994-01-01

    Explains the work of Computer Professionals for Social Responsibility regarding privacy issues in the use of electronic networks; recommends principles that should be adopted for a National Information Infrastructure privacy code; discusses the need for public education; and suggests pertinent legislative proposals. (LRW)

  9. Information science and technology developments within the National Biological Information Infrastructure

    USGS Publications Warehouse

    Frame, M.T.; Cotter, G.; Zolly, L.; Little, J.

    2002-01-01

    Whether your vantage point is that of an office window or a national park, your view undoubtedly encompasses a rich diversity of life forms, all carefully studied or managed by some scientist, resource manager, or planner. A few simple calculations - the number of species, their interrelationships, and the many researchers studying them - and you can easily see the tremendous challenges that the resulting biological data presents to the information and computer science communities. Biological information varies in format and content: it may pertain to a particular species or an entire ecosystem; it can contain land use characteristics, and geospatially referenced information. The complexity and uniqueness of each individual species or ecosystem do not easily lend themselves to today's computer science tools and applications. To address the challenges that the biological enterprise presents, the National Biological Information Infrastructure (NBII) (http://www.nbii.gov) was established in 1993 on the recommendation of the National Research Council (National Research Council 1993). The NBII is designed to address these issues on a national scale, and through international partnerships. This paper discusses current information and computer science efforts within the National Biological Information Infrastructure Program, and future computer science research endeavors that are needed to address the ever-growing issues related to our nation's biological concerns. © 2003 by The Haworth Press, Inc. All rights reserved.

  10. 75 FR 14454 - National Protection and Programs Directorate; National Infrastructure Advisory Council

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-25

    ...Directorate; National Infrastructure Advisory Council. AGENCY: National Protection and Programs Directorate... The National Infrastructure Advisory Council (NIAC) will meet on Tuesday, April 13, 2010, at the National Press Club's... [FR Doc. 2010-6633 Filed 3-24-10; 8:45 am] BILLING CODE 9110-9P-P

  11. Critical Infrastructure: The National Asset Database

    DTIC Science & Technology

    2007-07-16

    Infrastructure: The National Asset Database. ...upon which federal resources, including infrastructure protection grants, are allocated. According to DHS, both of those assumptions are wrong. DHS... assets that it has determined are critical to the nation. Also, while the National Asset Database has been used to support federal grant-making

  12. National Biological Information Infrastructure (NBII) | Information Center

    Science.gov Websites

    National Biological Information Infrastructure (NBII) Contact Information Website: http://www.nbii.gov/ The National Biological Information Infrastructure (NBII) is a broad, collaborative program to provide increased access to data and information on the nation's biological resources. The NBII links diverse, high

  13. NASA's Participation in the National Computational Grid

    NASA Technical Reports Server (NTRS)

    Feiereisen, William J.; Zornetzer, Steve F. (Technical Monitor)

    1998-01-01

    Over the last several years it has become evident that the character of NASA's supercomputing needs has changed. One of the major missions of the agency is to support the design and manufacture of aero- and space-vehicles with technologies that will significantly reduce their cost. It is becoming clear that improvements in the process of aerospace design and manufacturing will require a high performance information infrastructure that allows geographically dispersed teams to draw upon resources that are broader than traditional supercomputing. A computational grid draws together our information resources into one system. We can foresee the time when a Grid will allow engineers and scientists to use the tools of supercomputers, databases and online experimental devices in a virtual environment to collaborate with distant colleagues. The concept of a computational grid has been spoken of for many years, but several events in recent times are conspiring to allow us to actually build one. In late 1997 the National Science Foundation initiated the Partnerships for Advanced Computational Infrastructure (PACI), which is built around the idea of distributed high performance computing. The Alliance, led by the National Center for Supercomputing Applications (NCSA), and the National Partnership for Advanced Computational Infrastructure (NPACI), led by the San Diego Supercomputer Center, have been instrumental in drawing together the "Grid Community" to identify the technology bottlenecks and propose a research agenda to address them. During the same period NASA has begun to reformulate parts of two major high performance computing research programs to concentrate on distributed high performance computing and has banded together with the PACI centers to address the research agenda in common.

  14. Distributed telemedicine for the National Information Infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forslund, D.W.; Lee, Seong H.; Reverbel, F.C.

    1997-08-01

    TeleMed is an advanced system that provides a distributed multimedia electronic medical record available over a wide area network. It uses object-based computing, distributed data repositories, advanced graphical user interfaces, and visualization tools along with innovative concept extraction of image information for storing and accessing medical records developed in a separate project from 1994-5. In 1996, we began the transition to Java, extended the infrastructure, and worked to begin deploying TeleMed-like technologies throughout the nation. Other applications are mentioned.

  15. Geographic Hotspots of Critical National Infrastructure.

    PubMed

    Thacker, Scott; Barr, Stuart; Pant, Raghav; Hall, Jim W; Alderson, David

    2017-12-01

    Failure of critical national infrastructures can result in major disruptions to society and the economy. Understanding the criticality of individual assets and the geographic areas in which they are located is essential for targeting investments to reduce risks and enhance system resilience. Within this study we provide new insights into the criticality of real-life critical infrastructure networks by integrating high-resolution data on infrastructure location, connectivity, interdependence, and usage. We propose a metric of infrastructure criticality in terms of the number of users who may be directly or indirectly disrupted by the failure of physically interdependent infrastructures. Kernel density estimation is used to integrate spatially discrete criticality values associated with individual infrastructure assets, producing a continuous surface from which statistically significant infrastructure criticality hotspots are identified. We develop a comprehensive and unique national-scale demonstration for England and Wales that utilizes previously unavailable data from the energy, transport, water, waste, and digital communications sectors. The testing of 200,000 failure scenarios identifies that hotspots are typically located around the periphery of urban areas where there are large facilities upon which many users depend or where several critical infrastructures are concentrated in one location. © 2017 Society for Risk Analysis.
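
    The kernel density step can be illustrated with SciPy's weighted Gaussian KDE; the asset coordinates and criticality scores below are fabricated for illustration, not data from the study:

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    # Hypothetical asset locations (x, y) and criticality scores
    # (number of users disrupted by failure of each asset).
    rng = np.random.default_rng(0)
    locations = rng.uniform(0, 100, size=(2, 50))       # 50 assets in a 100x100 region
    criticality = rng.integers(1_000, 500_000, size=50)

    # Weighted KDE turns discrete criticality values into a continuous surface.
    kde = gaussian_kde(locations, weights=criticality / criticality.sum())

    # Evaluate the surface on a grid; high values flag candidate hotspots.
    xs, ys = np.mgrid[0:100:200j, 0:100:200j]
    surface = kde(np.vstack([xs.ravel(), ys.ravel()])).reshape(xs.shape)
    print("peak density:", surface.max())
    ```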

  16. Infrastructures for Distributed Computing: the case of BESIII

    NASA Astrophysics Data System (ADS)

    Pellegrino, J.

    2018-05-01

    BESIII is an electron-positron collision experiment hosted at BEPCII in Beijing and aimed at investigating tau-charm physics. BESIII has now been running for several years and has gathered more than 1 PB of raw data. In order to analyze these data and perform massive Monte Carlo simulations, a large amount of computing and storage resources is needed. The distributed computing system is based upon DIRAC and has been in production since 2012. It integrates computing and storage resources from different institutes and a variety of resource types such as cluster, grid, cloud, and volunteer computing. About 15 sites from the BESIII Collaboration from all over the world have joined this distributed computing infrastructure, giving a significant contribution to the IHEP computing facility. Nowadays cloud computing is playing a key role in the HEP computing field, due to its scalability and elasticity. Cloud infrastructures take advantage of several tools, such as VMDirac, to manage virtual machines through cloud managers according to the job requirements. With the virtually unlimited resources from commercial clouds, the computing capacity can scale accordingly to deal with any burst demands. General computing models are addressed herewith, with particular focus on the BESIII infrastructure; new computing tools and upcoming infrastructures are also addressed.

  17. 76 FR 81956 - National Infrastructure Advisory Council

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-29

    DEPARTMENT OF HOMELAND SECURITY [Docket No. DHS-2011-0117] National Infrastructure Advisory... ...through the Secretary of Homeland Security with advice on the security of the critical infrastructure... critical infrastructure as directed by the President. At this meeting, the committee will receive work from...

  18. 76 FR 36137 - National Infrastructure Advisory Council

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-21

    DEPARTMENT OF HOMELAND SECURITY [Docket No. DHS-2011-0034] National Infrastructure Advisory... ...Homeland Security with advice on the security of the critical infrastructure sectors and their information systems. The NIAC will meet to address issues relevant to the protection of critical infrastructure as...

  19. 75 FR 81284 - National Protection and Programs Directorate; National Infrastructure Advisory Council Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-27

    ...Directorate; National Infrastructure Advisory Council Meeting. AGENCY: National Protection and Programs... ...Homeland Security with advice on the security of the critical infrastructure sectors and their information systems. The NIAC will meet to address issues relevant to the protection of critical infrastructure as...

  20. Critical Infrastructure Protection- Los Alamos National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bofman, Ryan K.

    Los Alamos National Laboratory (LANL) has been a key facet of Critical National Infrastructure since the nuclear bombing of Hiroshima exposed the nature of the Laboratory's work in 1945. Common knowledge of the nature of the sensitive information held here makes protecting this critical infrastructure a matter of national security. This protection takes multiple forms, beginning with physical security, followed by cybersecurity and the safeguarding of classified information, and concluding with the missions of the National Nuclear Security Administration.

  1. High-throughput neuroimaging-genetics computational infrastructure

    PubMed Central

    Dinov, Ivo D.; Petrosyan, Petros; Liu, Zhizhong; Eggert, Paul; Hobel, Sam; Vespa, Paul; Woo Moon, Seok; Van Horn, John D.; Franco, Joseph; Toga, Arthur W.

    2014-01-01

    Many contemporary neuroscientific investigations face significant challenges in terms of data management, computational processing, data mining, and results interpretation. These four pillars define the core infrastructure necessary to plan, organize, orchestrate, validate, and disseminate novel scientific methods, computational resources, and translational healthcare findings. Data management includes protocols for data acquisition, archival, query, transfer, retrieval, and aggregation. Computational processing involves the necessary software, hardware, and networking infrastructure required to handle large amounts of heterogeneous neuroimaging, genetics, clinical, and phenotypic data and meta-data. Data mining refers to the process of automatically extracting data features, characteristics, and associations which are not readily visible by human exploration of the raw dataset. Results interpretation includes scientific visualization, community validation, and reproducibility of findings. In this manuscript we describe the novel high-throughput neuroimaging-genetics computational infrastructure available at the Institute for Neuroimaging and Informatics (INI) and the Laboratory of Neuro Imaging (LONI) at the University of Southern California (USC). INI and LONI include ultra-high-field and standard-field MRI brain scanners along with an imaging-genetics database for storing the complete provenance of the raw and derived data and meta-data. In addition, the institute provides a large number of software tools for image and shape analysis, mathematical modeling, genomic sequence processing, and scientific visualization. A unique feature of this architecture is the Pipeline environment, which integrates data management, processing, transfer, and visualization. Through its client-server architecture, the Pipeline environment provides a graphical user interface for designing, executing, monitoring, validating, and disseminating complex protocols that utilize

  2. HPCC and the National Information Infrastructure: an overview.

    PubMed Central

    Lindberg, D A

    1995-01-01

    The National Information Infrastructure (NII) or "information superhighway" is a high-priority federal initiative to combine communications networks, computers, databases, and consumer electronics to deliver information services to all U.S. citizens. The NII will be used to improve government and social services while cutting administrative costs. Operated by the private sector, the NII will rely on advanced technologies developed under the direction of the federal High Performance Computing and Communications (HPCC) Program. These include computing systems capable of performing trillions of operations (teraops) per second and networks capable of transmitting billions of bits (gigabits) per second. Among other activities, the HPCC Program supports the national supercomputer research centers, the federal portion of the Internet, and the development of interface software, such as Mosaic, that facilitates access to network information services. Health care has been identified as a critical demonstration area for HPCC technology and an important application area for the NII. As an HPCC participant, the National Library of Medicine (NLM) assists hospitals and medical centers to connect to the Internet through projects directed by the Regional Medical Libraries and through an Internet Connections Program cosponsored by the National Science Foundation. In addition to using the Internet to provide enhanced access to its own information services, NLM sponsors health-related applications of HPCC technology. Examples include the "Visible Human" project and recently awarded contracts for test-bed networks to share patient data and medical images, telemedicine projects to provide consultation and medical care to patients in rural areas, and advanced computer simulations of human anatomy for training in "virtual surgery." PMID:7703935

  3. NISAC | National Infrastructure Simulation and Analysis Center | NISAC

    Science.gov Websites

    The National Infrastructure Simulation and Analysis Center (NISAC) is a modeling, simulation, and analysis program within the Department of

  4. National Intelligent Transportation Infrastructure Initiative

    DOT National Transportation Integrated Search

    1997-09-19

    This report gives an overview of the National Intelligent Transportation Infrastructure Initiative (NITI). NITI refers to the integrated electronics, communications, and hardware and software elements that are available to support Intelligent Transpo...

  5. Infrastructure Systems for Advanced Computing in E-science applications

    NASA Astrophysics Data System (ADS)

    Terzo, Olivier

    2013-04-01

    In the e-science field there are growing needs for computing infrastructure that is more dynamic and customizable, with an "on demand" model of use that follows the exact request in terms of resources and storage capacities. The integration of grid and cloud infrastructure solutions allows us to offer services that can adapt their availability by scaling resources up and down. The main challenge for e-science domains will be to implement infrastructure solutions for scientific computing that dynamically adapt to the demand for computing resources, with a strong emphasis on optimizing the use of computing resources to reduce investment costs. Instrumentation, data volumes, algorithms, and analysis all increase the complexity of applications that require high processing power and storage for a limited time, often exceeding the computational resources that equip the majority of laboratories and research units in an organization. Very often it is necessary to adapt, or even rethink, tools and algorithms, and to consolidate existing applications through a phase of reverse engineering, in order to adapt them to deployment on cloud infrastructure. For example, in areas such as rainfall monitoring, meteorological analysis, hydrometeorology, climatology, bioinformatics, next-generation sequencing, computational electromagnetics, and radio occultation, the complexity of the analysis raises several issues, such as processing time, the scheduling of processing tasks, storage of results, and a multi-user environment. For these reasons, it is necessary to rethink the way e-science applications are written so that they are already adapted to exploit the potential of cloud computing services through the use of the IaaS, PaaS, and SaaS layers. Another important focus is on creating and using hybrid infrastructure, typically a federation between private and public clouds; in this way, when all the resources owned by the organization are in use, it will be easy with a federate
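
    The up- and downscaling behaviour described above reduces, in the simplest case, to a feedback rule mapping queue length to a target resource count; the job rate and node limits below are arbitrary illustrations, not values from any production system:

    ```python
    def scale_decision(queued_jobs: int, jobs_per_node: int = 10,
                       min_nodes: int = 1, max_nodes: int = 100) -> int:
        """Return the target node count for an elastic cluster: grow with
        the queue, shrink when idle, clamp to the allowed range."""
        needed = -(-queued_jobs // jobs_per_node)  # ceiling division
        return max(min_nodes, min(max_nodes, needed))

    # Example: 135 queued jobs at 10 jobs per node -> 14 nodes requested;
    # an empty queue falls back to the minimum of 1 node.
    print(scale_decision(135))  # 14
    print(scale_decision(0))    # 1
    ```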

  6. Computational Infrastructure for Geodynamics (CIG)

    NASA Astrophysics Data System (ADS)

    Gurnis, M.; Kellogg, L. H.; Bloxham, J.; Hager, B. H.; Spiegelman, M.; Willett, S.; Wysession, M. E.; Aivazis, M.

    2004-12-01

    Solid earth geophysicists have a long tradition of writing scientific software to address a wide range of problems. In particular, computer simulations came into wide use in geophysics during the decade after the plate tectonic revolution. Solution schemes and numerical algorithms that developed in other areas of science, most notably engineering, fluid mechanics, and physics, were adapted with considerable success to geophysics. This software has largely been the product of individual efforts and although this approach has proven successful, its strength for solving problems of interest is now starting to show its limitations as we try to share codes and algorithms or when we want to recombine codes in novel ways to produce new science. With funding from the NSF, the US community has embarked on a Computational Infrastructure for Geodynamics (CIG) that will develop, support, and disseminate community-accessible software for the greater geodynamics community from model developers to end-users. The software is being developed for problems involving mantle and core dynamics, crustal and earthquake dynamics, magma migration, seismology, and other related topics. With a high level of community participation, CIG is leveraging state-of-the-art scientific computing into a suite of open-source tools and codes. The infrastructure that we are now starting to develop will consist of: (a) a coordinated effort to develop reusable, well-documented and open-source geodynamics software; (b) the basic building blocks - an infrastructure layer - of software by which state-of-the-art modeling codes can be quickly assembled; (c) extension of existing software frameworks to interlink multiple codes and data through a superstructure layer; (d) strategic partnerships with the larger world of computational science and geoinformatics; and (e) specialized training and workshops for both the geodynamics and broader Earth science communities. The CIG initiative has already started to

  7. Dynamic Collaboration Infrastructure for Hydrologic Science

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Idaszak, R.; Castillo, C.; Yi, H.; Jiang, F.; Jones, N.; Goodall, J. L.

    2016-12-01

    Data and modeling infrastructure is becoming increasingly accessible to water scientists. HydroShare is a collaborative environment that currently offers water scientists the ability to access modeling and data infrastructure in support of data-intensive modeling and analysis. It supports the sharing of, and collaboration around, "resources", which are social objects defined to include both data and models in a structured, standardized format. Users collaborate around these objects via comments, ratings, and groups. HydroShare also supports web services and cloud-based computation for the execution of hydrologic models and the analysis and visualization of hydrologic data. However, the quantity and variety of data and modeling infrastructure that can be accessed from environments like HydroShare is increasing. Storage infrastructure can range from one's local PC to campus or organizational storage to storage in the cloud. Modeling or computing infrastructure can range from one's desktop to departmental clusters to national HPC resources to grid and cloud computing resources. How does one orchestrate this vast array of data and computing infrastructure without needing to learn each new system? A common limitation across these systems is the lack of efficient integration between data transport mechanisms and the corresponding high-level services to support large distributed data and compute operations. A scientist running a hydrology model from their desktop may require processing a large collection of files across the aforementioned storage and compute resources and various national databases. To address these community challenges a proof-of-concept prototype was created integrating HydroShare with RADII (Resource Aware Data-centric collaboration Infrastructure) to provide software infrastructure to enable the comprehensive and rapid dynamic deployment of what we refer to as "collaborative infrastructure." In this presentation we discuss the

  8. Using Cloud Computing infrastructure with CloudBioLinux, CloudMan and Galaxy

    PubMed Central

    Afgan, Enis; Chapman, Brad; Jadan, Margita; Franke, Vedran; Taylor, James

    2012-01-01

    Cloud computing has revolutionized availability and access to computing and storage resources, making it possible to provision a large computational infrastructure with only a few clicks in a web browser. However, those resources are typically provided in the form of low-level infrastructure components that need to be procured and configured before use. In this protocol, we demonstrate how to utilize cloud computing resources to perform open-ended bioinformatics analyses, with fully automated management of the underlying cloud infrastructure. By combining three projects, CloudBioLinux, CloudMan, and Galaxy, into a cohesive unit, we have enabled researchers to gain access to more than 100 preconfigured bioinformatics tools and gigabytes of reference genomes on top of the flexible cloud computing infrastructure. The protocol demonstrates how to set up the available infrastructure and how to use the tools via a graphical desktop interface, a parallel command line interface, and the web-based Galaxy interface. PMID:22700313

  9. Using cloud computing infrastructure with CloudBioLinux, CloudMan, and Galaxy.

    PubMed

    Afgan, Enis; Chapman, Brad; Jadan, Margita; Franke, Vedran; Taylor, James

    2012-06-01

    Cloud computing has revolutionized availability and access to computing and storage resources, making it possible to provision a large computational infrastructure with only a few clicks in a Web browser. However, those resources are typically provided in the form of low-level infrastructure components that need to be procured and configured before use. In this unit, we demonstrate how to utilize cloud computing resources to perform open-ended bioinformatic analyses, with fully automated management of the underlying cloud infrastructure. By combining three projects, CloudBioLinux, CloudMan, and Galaxy, into a cohesive unit, we have enabled researchers to gain access to more than 100 preconfigured bioinformatics tools and gigabytes of reference genomes on top of the flexible cloud computing infrastructure. The protocol demonstrates how to set up the available infrastructure and how to use the tools via a graphical desktop interface, a parallel command-line interface, and the Web-based Galaxy interface.
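
    As context for the "low-level infrastructure components" the protocol automates, the sketch below shows the kind of manual provisioning call (here via the AWS boto3 SDK) that CloudMan hides behind its web interface; the AMI ID, key pair, and region are placeholders:

    ```python
    import boto3

    # Placeholder values: a real launch needs your own AMI, key pair, and region.
    ec2 = boto3.resource("ec2", region_name="us-east-1")

    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",   # hypothetical CloudBioLinux-style image
        InstanceType="m5.large",
        KeyName="my-keypair",
        MinCount=1,
        MaxCount=1,
    )
    print("launched:", instances[0].id)
    ```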

  10. 20/20 Vision: The Development of a National Information Infrastructure.

    ERIC Educational Resources Information Center

    National Telecommunications and Information Administration (DOC), Washington, DC.

    After the publication of the Clinton Administration's "The National Information Infrastructure: Agenda for Action," a group of telecommunication specialists were asked to evaluate the proposals in order to broaden the policy discussion concerning the National Information Infrastructure (NII). This collection contains their visions of the…

  11. New security infrastructure model for distributed computing systems

    NASA Astrophysics Data System (ADS)

    Dubenskaya, J.; Kryukov, A.; Demichev, A.; Prikhodko, N.

    2016-02-01

    In this paper we propose a new approach to setting up a user-friendly and yet secure authentication and authorization procedure in a distributed computing system. The security concept of most heterogeneous distributed computing systems is based on a public key infrastructure along with proxy certificates, which are used for rights delegation. In practice, the contradiction between the limited lifetime of proxy certificates and the unpredictable time of request processing is a big issue for the end users of the system. We propose to use hashes that are unlimited in time and individual for each request, instead of proxy certificates. Our approach avoids the use of proxy certificates, so the security infrastructure of a distributed computing system becomes easier to develop, support, and use.
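
    A minimal sketch of the proposed scheme, assuming a shared secret between the credential service and the resource: an HMAC over the request yields a token that never expires yet is valid only for the one request it was issued for. The function names and message layout are illustrative, not taken from the paper:

    ```python
    import hashlib
    import hmac

    SECRET = b"per-user shared secret"  # hypothetical; established out of band

    def issue_token(request_id: str, payload: str) -> str:
        """Issue an unlimited-lifetime token bound to a single request."""
        message = f"{request_id}:{payload}".encode()
        return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

    def verify(request_id: str, payload: str, token: str) -> bool:
        """The resource recomputes the HMAC; the token is useless elsewhere."""
        return hmac.compare_digest(issue_token(request_id, payload), token)

    tok = issue_token("job-42", "run analysis on dataset X")
    assert verify("job-42", "run analysis on dataset X", tok)
    assert not verify("job-43", "run analysis on dataset X", tok)
    ```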

  12. National Infrastructure Protection Plan: Partnering to Enhance Protection and Resiliency

    ERIC Educational Resources Information Center

    US Department of Homeland Security, 2009

    2009-01-01

    The overarching goal of the National Infrastructure Protection Plan (NIPP) is to build a safer, more secure, and more resilient America by preventing, deterring, neutralizing, or mitigating the effects of deliberate efforts by terrorists to destroy, incapacitate, or exploit elements of our Nation's critical infrastructure and key resources (CIKR)…

  13. Network and computing infrastructure for scientific applications in Georgia

    NASA Astrophysics Data System (ADS)

    Kvatadze, R.; Modebadze, Z.

    2016-09-01

    The status of the network and computing infrastructure and the services available to the research and education community of Georgia is presented. The Research and Educational Networking Association - GRENA - provides the following network services: Internet connectivity, network services, cyber security, technical support, etc. Computing resources used by research teams are located at GRENA and at major state universities. The GE-01-GRENA site is included in the European Grid Infrastructure. The paper also contains information about the programs of the Learning Center and the research and development projects in which GRENA participates.

  14. Services and the National Information Infrastructure. Report of the Information Infrastructure Task Force Committee on Applications and Technology, Technology Policy Working Group. Draft for Public Comment.

    ERIC Educational Resources Information Center

    Office of Science and Technology Policy, Washington, DC.

    In this report, the National Information Infrastructure (NII) services issue is addressed, and activities to advance the development of NII services are recommended. The NII is envisioned to grow into a seamless web of communications networks, computers, databases, and consumer electronics that will put vast amounts of information at users'…

  15. Toward a digital library strategy for a National Information Infrastructure

    NASA Technical Reports Server (NTRS)

    Coyne, Robert A.; Hulen, Harry

    1993-01-01

    Bills currently before the House and Senate would give support to the development of a National Information Infrastructure, in which digital libraries and storage systems would be an important part. A simple model is offered to show the relationship of storage systems, software, and standards to the overall information infrastructure. Some elements of a national strategy for digital libraries are proposed, based on the mission of the nonprofit National Storage System Foundation.

  16. Cloud computing can simplify HIT infrastructure management.

    PubMed

    Glaser, John

    2011-08-01

    Software as a Service (SaaS), built on cloud computing technology, is emerging as the forerunner in IT infrastructure because it helps healthcare providers reduce capital investments. Cloud computing leads to predictable, monthly, fixed operating expenses for hospital IT staff. Outsourced cloud computing facilities are state-of-the-art data centers boasting some of the most sophisticated networking equipment on the market. The SaaS model helps hospitals safeguard against technology obsolescence, minimizes maintenance requirements, and simplifies management.

  17. The National Biological Information Infrastructure: Coming of age

    USGS Publications Warehouse

    Cotter, G.; Frame, M.; Sepic, R.; Zolly, L.

    2000-01-01

    Coordinated by the US Geological Survey, the National Biological Information Infrastructure (NBII) is a Web-based system that provides increased access to data and information on the nation's biological resources. The NBII can be viewed from a variety of perspectives. This article - an individual case study and not a broad survey with extensive references to the literature - addresses the structure of the NBII related to thematic sections, infrastructure sections and place-based sections, and other topics such as the Integrated Taxonomic Information System (one of our more innovative tools) and the development of our controlled vocabulary.

  18. Crowdsourced Contributions to the Nation's Geodetic Elevation Infrastructure

    NASA Astrophysics Data System (ADS)

    Stone, W. A.

    2014-12-01

    NOAA's National Geodetic Survey (NGS), a United States Department of Commerce agency, is engaged in providing the nation's fundamental positioning infrastructure - the National Spatial Reference System (NSRS) - which includes the framework for latitude, longitude, and elevation determination as well as various geodetic models, tools, and data. Capitalizing on Global Navigation Satellite System (GNSS) technology for improved access to the nation's precise geodetic elevation infrastructure requires use of a geoid model, which relates GNSS-derived heights (ellipsoid heights) with traditional elevations (orthometric heights). NGS is facilitating the use of crowdsourced GNSS observations collected at published elevation control stations by the professional surveying, geospatial, and scientific communities to help improve NGS' geoid modeling capability. This collocation of published elevation data and newly collected GNSS data integrates together the two height systems. This effort in turn supports enhanced access to accurate elevation information across the nation, thereby benefiting all users of geospatial data. By partnering with the public in this collaborative effort, NGS is not only helping facilitate improvements to the elevation infrastructure for all users but also empowering users of NSRS with the capability to do their own high-accuracy positioning. The educational outreach facet of this effort helps inform the public, including the scientific community, about the utility of various NGS tools, including the widely used Online Positioning User Service (OPUS). OPUS plays a key role in providing user-friendly and high accuracy access to NSRS, with optional sharing of results with NGS and the public. All who are interested in helping evolve and improve the nationwide elevation determination capability are invited to participate in this nationwide partnership and to learn more about the geodetic infrastructure which is a vital component of viable spatial data for
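
    The relationship between the two height systems mentioned above is, to a first approximation, the standard geodetic identity:

    ```latex
    % Orthometric height H (traditional elevation) from a GNSS-derived
    % ellipsoid height h and the geoid undulation N given by a geoid model:
    H \approx h - N
    ```

    For example, a point with GNSS-derived ellipsoid height h = 100.00 m at a location where the geoid model gives N = -28.50 m has orthometric height H ≈ 128.50 m; crowdsourced GNSS observations at stations with known H are what allow NGS to refine N.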

  19. AstroCloud, a Cyber-Infrastructure for Astronomy Research: Cloud Computing Environments

    NASA Astrophysics Data System (ADS)

    Li, C.; Wang, J.; Cui, C.; He, B.; Fan, D.; Yang, Y.; Chen, J.; Zhang, H.; Yu, C.; Xiao, J.; Wang, C.; Cao, Z.; Fan, Y.; Hong, Z.; Li, S.; Mi, L.; Wan, W.; Wang, J.; Yin, S.

    2015-09-01

    AstroCloud is a cyber-infrastructure for astronomy research initiated by the Chinese Virtual Observatory (China-VO) under funding support from the NDRC (National Development and Reform Commission) and CAS (Chinese Academy of Sciences). Based on CloudStack, an open-source software platform, we set up the cloud computing environment for the AstroCloud project. It consists of five distributed nodes across mainland China. Users can use and analyze data in this cloud computing environment. Based on GlusterFS, we built a scalable cloud storage system. Each user has a private space, which can be shared among different virtual machines and desktop systems. With this environment, astronomers can easily access astronomical data collected by different telescopes and data centers, and data producers can archive their datasets safely.

  20. Managing a tier-2 computer centre with a private cloud infrastructure

    NASA Astrophysics Data System (ADS)

    Bagnasco, Stefano; Berzano, Dario; Brunetti, Riccardo; Lusso, Stefano; Vallero, Sara

    2014-06-01

    In a typical scientific computing centre, several applications coexist and share a single physical infrastructure. An underlying Private Cloud infrastructure eases the management and maintenance of such heterogeneous applications (such as multipurpose or application-specific batch farms, Grid sites, interactive data analysis facilities and others), allowing dynamic allocation of resources to any application. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques. Such infrastructures are being deployed in some large centres (see e.g. the CERN Agile Infrastructure project), but with several open-source tools reaching maturity this is becoming viable also for smaller sites. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 centre, an Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The private cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem and the OpenWRT Linux distribution (used for network virtualization); a future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and OCCI.

  1. Autonomic Management of Application Workflows on Hybrid Computing Infrastructure

    DOE PAGES

    Kim, Hyunjoo; el-Khamra, Yaakoub; Rodero, Ivan; ...

    2011-01-01

    In this paper, we present a programming and runtime framework that enables the autonomic management of complex application workflows on hybrid computing infrastructures. The framework is designed to address system and application heterogeneity and dynamics to ensure that application objectives and constraints are satisfied. The need for such autonomic system and application management is becoming critical as computing infrastructures become increasingly heterogeneous, integrating different classes of resources from high-end HPC systems to commodity clusters and clouds. For example, the framework presented in this paper can be used to provision the appropriate mix of resources based on application requirements and constraints. The framework also monitors the system/application state and adapts the application and/or resources to respond to changing requirements or environment. To demonstrate the operation of the framework and to evaluate its ability, we employ a workflow used to characterize an oil reservoir executing on a hybrid infrastructure composed of TeraGrid nodes and Amazon EC2 instances of various types. Specifically, we show how different application objectives such as acceleration, conservation, and resilience can be effectively achieved while satisfying deadline and budget constraints, using an appropriate mix of dynamically provisioned resources. Our evaluations also demonstrate that public clouds can be used to complement and reinforce the scheduling and usage of traditional high performance computing infrastructure.
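
    A toy version of the provisioning decision such a framework automates is a search for the cheapest resource mix that meets a deadline and budget; the throughput and price figures below are invented for illustration, not measurements from the paper:

    ```python
    from itertools import product

    # Hypothetical resource catalogue: (tasks/hour per unit, $/hour per unit).
    RESOURCES = {"hpc_node": (40, 2.0), "ec2_large": (25, 1.5)}

    def cheapest_mix(tasks: int, deadline_h: float, budget: float,
                     max_units: int = 20):
        """Brute-force the lowest-cost mix meeting deadline and budget."""
        best = None
        for counts in product(range(max_units + 1), repeat=len(RESOURCES)):
            rate = sum(n * r for n, (r, _) in zip(counts, RESOURCES.values()))
            if rate == 0:
                continue
            hours = tasks / rate
            cost = hours * sum(n * c for n, (_, c) in zip(counts, RESOURCES.values()))
            if hours <= deadline_h and cost <= budget:
                if best is None or cost < best[0]:
                    best = (cost, dict(zip(RESOURCES, counts)))
        return best

    # 2000 tasks, 10-hour deadline, $300 budget -> cheapest feasible mix.
    print(cheapest_mix(tasks=2000, deadline_h=10, budget=300.0))
    ```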

  2. The Federal Role in Bringing Education into the National Information Infrastructure

    NASA Technical Reports Server (NTRS)

    Cradler, John

    1995-01-01

    One of the most important issues facing Congress is to work with business, education, and the states to enable the nation's schools to better prepare students for a technological work force and to ensure that education has a place on the National Information Infrastructure (NII). This document provides background and important information for national leaders concerned about education, the information infrastructure, and related issues for the Federal government.

  3. Workflow4Metabolomics: a collaborative research infrastructure for computational metabolomics

    PubMed Central

    Giacomoni, Franck; Le Corguillé, Gildas; Monsoor, Misharl; Landi, Marion; Pericard, Pierre; Pétéra, Mélanie; Duperier, Christophe; Tremblay-Franco, Marie; Martin, Jean-François; Jacob, Daniel; Goulitquer, Sophie; Thévenot, Etienne A.; Caron, Christophe

    2015-01-01

    Summary: The complex, rapidly evolving field of computational metabolomics calls for collaborative infrastructures where the large volume of new algorithms for data pre-processing, statistical analysis and annotation can be readily integrated whatever the language, evaluated on reference datasets and chained to build ad hoc workflows for users. We have developed Workflow4Metabolomics (W4M), the first fully open-source and collaborative online platform for computational metabolomics. W4M is a virtual research environment built upon the Galaxy web-based platform technology. It enables ergonomic integration, exchange and running of individual modules and workflows. Alternatively, the whole W4M framework and computational tools can be downloaded as a virtual machine for local installation. Availability and implementation: http://workflow4metabolomics.org homepage enables users to open a private account and access the infrastructure. W4M is developed and maintained by the French Bioinformatics Institute (IFB) and the French Metabolomics and Fluxomics Infrastructure (MetaboHUB). Contact: contact@workflow4metabolomics.org PMID:25527831

  4. The National Information Infrastructure: Requirements for Education and Training.

    ERIC Educational Resources Information Center

    Educational IRM Quarterly, 1994

    1994-01-01

    Includes 19 access, education and training, and technical requirements that must be addressed in the development of the national information infrastructure. The requirements were prepared by national education, training, and trade associations participating in the National Coordinating Committee on Technology in Education and Training (NCC-TET). A…

  5. The National Information Infrastructure: Agenda for Action.

    ERIC Educational Resources Information Center

    Microcomputers for Information Management, 1995

    1995-01-01

    Discusses the National Information Infrastructure and the role of the government. Topics include private sector investment; universal service; technological innovation; user orientation; information security and network reliability; management of the radio frequency spectrum; intellectual property rights; coordination with other levels of…

  6. The National Information Infrastructure: Requirements for Education and Training.

    ERIC Educational Resources Information Center

    National Coordinating Committee on Technology in Education and Training, Alexandria, VA.

    The National Coordinating Committee for Technology in Education and Training (NCC-TET) has developed these requirements to ensure that the National Information Infrastructure (NII) provides expanded opportunities for education and training. A number of national organizations have contributed to these requirements, which are intended to be used in…

  7. Women's health nursing in the context of the National Health Information Infrastructure.

    PubMed

    Jenkins, Melinda L; Hewitt, Caroline; Bakken, Suzanne

    2006-01-01

    Nurses must be prepared to participate in the evolving National Health Information Infrastructure and the changes that will consequently occur in health care practice and documentation. Informatics technologies will be used to develop electronic health records with integrated decision support features that will likely lead to enhanced health care quality and safety. This paper provides a summary of the National Health Information Infrastructure and highlights electronic health records and decision support systems within the context of evidence-based practice. Activities at the Columbia University School of Nursing designed to prepare nurses with the necessary informatics competencies to practice in a National Health Information Infrastructure-enabled health care system are described. Data are presented from electronic (personal digital assistant) encounter logs used in our Women's Health Nurse Practitioner program to support evidence-based advanced practice nursing care. Implications for nursing practice, education, and research in the evolving National Health Information Infrastructure are discussed.

  8. Workflow4Metabolomics: a collaborative research infrastructure for computational metabolomics.

    PubMed

    Giacomoni, Franck; Le Corguillé, Gildas; Monsoor, Misharl; Landi, Marion; Pericard, Pierre; Pétéra, Mélanie; Duperier, Christophe; Tremblay-Franco, Marie; Martin, Jean-François; Jacob, Daniel; Goulitquer, Sophie; Thévenot, Etienne A; Caron, Christophe

    2015-05-01

    The complex, rapidly evolving field of computational metabolomics calls for collaborative infrastructures where the large volume of new algorithms for data pre-processing, statistical analysis and annotation can be readily integrated whatever the language, evaluated on reference datasets and chained to build ad hoc workflows for users. We have developed Workflow4Metabolomics (W4M), the first fully open-source and collaborative online platform for computational metabolomics. W4M is a virtual research environment built upon the Galaxy web-based platform technology. It enables ergonomic integration, exchange and running of individual modules and workflows. Alternatively, the whole W4M framework and computational tools can be downloaded as a virtual machine for local installation. http://workflow4metabolomics.org homepage enables users to open a private account and access the infrastructure. W4M is developed and maintained by the French Bioinformatics Institute (IFB) and the French Metabolomics and Fluxomics Infrastructure (MetaboHUB). contact@workflow4metabolomics.org. © The Author 2014. Published by Oxford University Press.
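
    Because W4M is built on Galaxy, its modules can in principle be driven through Galaxy's REST API. The sketch below uses the BioBlend client library on the assumption that the instance exposes the standard Galaxy API; the URL, API key, and workflow/dataset IDs are placeholders:

    ```python
    from bioblend.galaxy import GalaxyInstance

    # Placeholders: a real session needs the instance URL and a personal API key.
    gi = GalaxyInstance(url="https://workflow4metabolomics.example.org",
                        key="YOUR_API_KEY")

    # List workflows visible to the account.
    for wf in gi.workflows.get_workflows():
        print(wf["id"], wf["name"])

    # Invoke one workflow on an input dataset; IDs are hypothetical, and
    # 'inputs' maps workflow input steps to existing history datasets.
    gi.workflows.invoke_workflow(
        workflow_id="abc123",
        inputs={"0": {"src": "hda", "id": "dataset456"}},
    )
    ```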

  9. Advanced Computational Methods for Optimization of Non-Periodic Inspection Intervals for Aging Infrastructure

    DTIC Science & Technology

    2017-01-05

    AFRL-AFOSR-JP-TR-2017-0002: Advanced Computational Methods for Optimization of Non-Periodic Inspection Intervals for Aging Infrastructure. Manabu... Grant number FA2386... Distribution UNLIMITED: Public Release. ABSTRACT: This report for the project titled 'Advanced Computational Methods for Optimization of

  10. A national clinical decision support infrastructure to enable the widespread and consistent practice of genomic and personalized medicine.

    PubMed

    Kawamoto, Kensaku; Lobach, David F; Willard, Huntington F; Ginsburg, Geoffrey S

    2009-03-23

    In recent years, the completion of the Human Genome Project and other rapid advances in genomics have led to increasing anticipation of an era of genomic and personalized medicine, in which an individual's health is optimized through the use of all available patient data, including data on the individual's genome and its downstream products. Genomic and personalized medicine could transform healthcare systems and catalyze significant reductions in morbidity, mortality, and overall healthcare costs. Critical to the achievement of more efficient and effective healthcare enabled by genomics is the establishment of a robust, nationwide clinical decision support infrastructure that assists clinicians in their use of genomic assays to guide disease prevention, diagnosis, and therapy. Requisite components of this infrastructure include the standardized representation of genomic and non-genomic patient data across health information systems; centrally managed repositories of computer-processable medical knowledge; and standardized approaches for applying these knowledge resources against patient data to generate and deliver patient-specific care recommendations. Here, we provide recommendations for establishing a national decision support infrastructure for genomic and personalized medicine that fulfills these needs, leverages existing resources, and is aligned with the Roadmap for National Action on Clinical Decision Support commissioned by the U.S. Office of the National Coordinator for Health Information Technology. Critical to the establishment of this infrastructure will be strong leadership and substantial funding from the federal government. A national clinical decision support infrastructure will be required for reaping the full benefits of genomic and personalized medicine. Essential components of this infrastructure include standards for data representation; centrally managed knowledge repositories; and standardized approaches for leveraging these knowledge
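
    The "standardized approaches for applying these knowledge resources against patient data" can be pictured as rules evaluated over a coded patient record. The sketch below is a deliberately simplified, hypothetical pharmacogenomic rule, not a clinical algorithm from the paper:

    ```python
    # Hypothetical centrally managed knowledge rule: genotype-guided drug alert.
    RULES = [
        {
            "gene": "CYP2C19",
            "variant_status": "poor_metabolizer",
            "drug": "clopidogrel",
            "recommendation": "Consider alternative antiplatelet therapy.",
        },
    ]

    def decision_support(patient: dict, ordered_drug: str) -> list[str]:
        """Match knowledge-repository rules against standardized patient data."""
        alerts = []
        for rule in RULES:
            if (ordered_drug == rule["drug"]
                    and patient.get(rule["gene"]) == rule["variant_status"]):
                alerts.append(rule["recommendation"])
        return alerts

    patient = {"CYP2C19": "poor_metabolizer"}        # coded genomic result
    print(decision_support(patient, "clopidogrel"))  # -> recommendation fires
    ```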

  11. A systems framework for national assessment of climate risks to infrastructure.

    PubMed

    Dawson, Richard J; Thompson, David; Johns, Daniel; Wood, Ruth; Darch, Geoff; Chapman, Lee; Hughes, Paul N; Watson, Geoff V R; Paulson, Kevin; Bell, Sarah; Gosling, Simon N; Powrie, William; Hall, Jim W

    2018-06-13

    Extreme weather causes substantial adverse socio-economic impacts by damaging and disrupting the infrastructure services that underpin modern society. Globally, $2.5tn a year is spent on infrastructure which is typically designed to last decades, over which period projected changes in the climate will modify infrastructure performance. A systems approach has been developed to assess risks across all infrastructure sectors to guide national policy making and adaptation investment. The method analyses diverse evidence of climate risks and adaptation actions, to assess the urgency and extent of adaptation required. Application to the UK shows that despite recent adaptation efforts, risks to infrastructure outweigh opportunities. Flooding is the greatest risk to all infrastructure sectors: even if the Paris Agreement to limit global warming to 2°C is achieved, the number of users reliant on electricity infrastructure at risk of flooding would double, while a 4°C rise could triple UK flood damage. Other risks are significant: for example, 5% and 20% of river catchments would be unable to meet water demand with 2°C and 4°C global warming respectively. Increased interdependence between infrastructure systems, especially from energy and information and communication technology (ICT), is amplifying risks, but adaptation action is limited by lack of clear responsibilities. A programme to build national capability is urgently required to improve infrastructure risk assessment. This article is part of the theme issue 'Advances in risk assessment for climate change adaptation policy'. © 2018 The Authors.

  12. A systems framework for national assessment of climate risks to infrastructure

    NASA Astrophysics Data System (ADS)

    Dawson, Richard J.; Thompson, David; Johns, Daniel; Wood, Ruth; Darch, Geoff; Chapman, Lee; Hughes, Paul N.; Watson, Geoff V. R.; Paulson, Kevin; Bell, Sarah; Gosling, Simon N.; Powrie, William; Hall, Jim W.

    2018-06-01

    Extreme weather causes substantial adverse socio-economic impacts by damaging and disrupting the infrastructure services that underpin modern society. Globally, $2.5tn a year is spent on infrastructure, which is typically designed to last decades, over which period projected changes in the climate will modify infrastructure performance. A systems approach has been developed to assess risks across all infrastructure sectors to guide national policy making and adaptation investment. The method analyses diverse evidence of climate risks and adaptation actions to assess the urgency and extent of adaptation required. Application to the UK shows that despite recent adaptation efforts, risks to infrastructure outweigh opportunities. Flooding is the greatest risk to all infrastructure sectors: even if the Paris Agreement to limit global warming to 2°C is achieved, the number of users reliant on electricity infrastructure at risk of flooding would double, while a 4°C rise could triple UK flood damage. Other risks are significant, for example 5% and 20% of river catchments would be unable to meet water demand with 2°C and 4°C global warming respectively. Increased interdependence between infrastructure systems, especially from energy and information and communication technology (ICT), is amplifying risks, but adaptation action is limited by lack of clear responsibilities. A programme to build national capability is urgently required to improve infrastructure risk assessment. This article is part of the theme issue 'Advances in risk assessment for climate change adaptation policy'.

  13. A systems framework for national assessment of climate risks to infrastructure

    PubMed Central

    Thompson, David; Johns, Daniel; Darch, Geoff; Paulson, Kevin

    2018-01-01

    Extreme weather causes substantial adverse socio-economic impacts by damaging and disrupting the infrastructure services that underpin modern society. Globally, $2.5tn a year is spent on infrastructure, which is typically designed to last decades, over which period projected changes in the climate will modify infrastructure performance. A systems approach has been developed to assess risks across all infrastructure sectors to guide national policy making and adaptation investment. The method analyses diverse evidence of climate risks and adaptation actions to assess the urgency and extent of adaptation required. Application to the UK shows that despite recent adaptation efforts, risks to infrastructure outweigh opportunities. Flooding is the greatest risk to all infrastructure sectors: even if the Paris Agreement to limit global warming to 2°C is achieved, the number of users reliant on electricity infrastructure at risk of flooding would double, while a 4°C rise could triple UK flood damage. Other risks are significant, for example 5% and 20% of river catchments would be unable to meet water demand with 2°C and 4°C global warming respectively. Increased interdependence between infrastructure systems, especially from energy and information and communication technology (ICT), is amplifying risks, but adaptation action is limited by lack of clear responsibilities. A programme to build national capability is urgently required to improve infrastructure risk assessment. This article is part of the theme issue ‘Advances in risk assessment for climate change adaptation policy’. PMID:29712793

  14. Cybersecurity: The Nation’s Greatest Threat to Critical Infrastructure

    DTIC Science & Technology

    2013-03-01

    ...protection has become a matter of national security, public safety, and economic stability. It is imperative the U.S. Government (USG) examine current...recommendations for federal responsibilities and legislation to direct national critical infrastructure efforts to ensure national security, public safety, and economic stability.

  15. 78 FR 40487 - National Infrastructure Advisory Council

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-05

    ... DEPARTMENT OF HOMELAND SECURITY [Docket No. DHS-2013-0033] National Infrastructure Advisory... (NIAC) will meet Monday, July 29, 2013, at the United States Access Board, 1331 F Street NW., Suite 800, Washington, DC 20004. The meeting will be open to the public. DATES: The NIAC will meet Monday, July 29, 2013...

  16. Elastic Cloud Computing Infrastructures in the Open Cirrus Testbed Implemented via Eucalyptus

    NASA Astrophysics Data System (ADS)

    Baun, Christian; Kunze, Marcel

    Cloud computing realizes the advantages and overcomes some restrictions of the grid computing paradigm. Elastic infrastructures can easily be created and managed by cloud users. In order to accelerate the research on data center management and cloud services, the OpenCirrus™ research testbed has been started by HP, Intel and Yahoo!. Although commercial cloud offerings are proprietary, Open Source solutions exist in the field of IaaS with Eucalyptus, PaaS with AppScale and at the applications layer with Hadoop MapReduce. This paper examines the I/O performance of cloud computing infrastructures implemented with Eucalyptus in contrast to Amazon S3.
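
    The kind of I/O comparison the paper describes can be sketched against any S3-compatible endpoint, since Eucalyptus exposes an S3-style storage interface. A minimal probe, assuming boto3 is installed and that the endpoint URL, credentials, and bucket name (all placeholders here) are valid:

    # Upload/download throughput probe against an S3-compatible endpoint.
    import time

    import boto3

    def throughput_mb_s(endpoint_url, bucket, size_mb=64):
        s3 = boto3.client("s3", endpoint_url=endpoint_url)
        payload = b"\0" * (size_mb * 1024 * 1024)

        start = time.perf_counter()
        s3.put_object(Bucket=bucket, Key="probe.bin", Body=payload)
        upload = size_mb / (time.perf_counter() - start)

        start = time.perf_counter()
        s3.get_object(Bucket=bucket, Key="probe.bin")["Body"].read()
        download = size_mb / (time.perf_counter() - start)
        return upload, download

    up, down = throughput_mb_s("https://s3.example.org", "benchmark-bucket")
    print(f"upload {up:.1f} MB/s, download {down:.1f} MB/s")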

  17. National roadmap for research infrastructure

    NASA Astrophysics Data System (ADS)

    Bonev, Tanyu

    In 2010 the Council of Ministers of the Republic of Bulgaria passed a National roadmap for research infrastructure (Decision No. 692 of 21.09.2010). Part of the roadmap is the project called Regional Astronomical Center for Research and Education (RACIO). A distinctive feature of this project is the integration of the country's existing research and educational organizations in the field of astronomy. The project is a substantial part of the strategy for the development of astronomy in Bulgaria over the next decade. What is the content of this strategic project? How was it possible to include RACIO in the roadmap? Does the national roadmap harmonize with the strategic plans for the development of astronomy in Europe elaborated by Astronet (http://www.astronet-eu.org/)? These are some of the questions to which I try to give answers in this paper.

  18. A national-scale authentication infrastructure.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butler, R.; Engert, D.; Foster, I.

    2000-12-01

    Today, individuals and institutions in science and industry are increasingly forming virtual organizations to pool resources and tackle a common goal. Participants in virtual organizations commonly need to share resources such as data archives, computer cycles, and networks - resources usually available only with restrictions based on the requested resource's nature and the user's identity. Thus, any sharing mechanism must have the ability to authenticate the user's identity and determine if the user is authorized to request the resource. Virtual organizations tend to be fluid, however, so authentication mechanisms must be flexible and lightweight, allowing administrators to quickly establish and change resource-sharing arrangements. However, because virtual organizations complement rather than replace existing institutions, sharing mechanisms cannot change local policies and must allow individual institutions to maintain control over their own resources. Our group has created and deployed an authentication and authorization infrastructure that meets these requirements: the Grid Security Infrastructure. GSI offers secure single sign-ons and preserves site control over access policies and local security. It provides its own versions of common applications, such as FTP and remote login, and a programming interface for creating secure applications.
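
    The site-control property described here can be sketched with the grid-mapfile idea GSI uses: the site authenticates a peer by its X.509 certificate and then applies a purely local mapping from certificate subject to a local account. A minimal sketch with a simplified mapping and hypothetical DNs (real GSI adds proxy certificates for single sign-on):

    # Map an authenticated X.509 subject to a local account.
    SHORT = {"countryName": "C", "organizationName": "O",
             "organizationalUnitName": "OU", "commonName": "CN"}

    def subject_dn(peercert):
        """Flatten ssl.SSLSocket.getpeercert() output into /C=../CN=.. form."""
        return "".join(
            f"/{SHORT.get(k, k)}={v}"
            for rdn in peercert.get("subject", ())
            for k, v in rdn
        )

    def authorize(peercert, gridmap):
        """Return the local account for an authenticated subject, or None."""
        return gridmap.get(subject_dn(peercert))

    # A TLS server would call sock.getpeercert() after the handshake;
    # here a hand-built dict stands in for that result.
    cert = {"subject": ((("countryName", "US"),),
                        (("organizationName", "Lab"),),
                        (("commonName", "Alice"),))}
    print(authorize(cert, {"/C=US/O=Lab/CN=Alice": "alice"}))  # -> alice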

  19. Critical Infrastructure: The National Asset Database

    DTIC Science & Technology

    2006-09-14

    ...that, in its current form, it is being used inappropriately as the basis upon which federal resources, including infrastructure protection grants, are...National Asset Database has been used to support federal grant-making decisions, according to a DHS official, it does not drive those decisions. In July

  20. The Computing and Data Grid Approach: Infrastructure for Distributed Science Applications

    NASA Technical Reports Server (NTRS)

    Johnston, William E.

    2002-01-01

    With the advent of Grids - infrastructure for using and managing widely distributed computing and data resources in the science environment - there is now an opportunity to provide a standard, large-scale, computing, data, instrument, and collaboration environment for science that spans many different projects and provides the required infrastructure and services in a relatively uniform and supportable way. Grid technology has evolved over the past several years to provide the services and infrastructure needed for building 'virtual' systems and organizations. We argue that Grid technology provides an excellent basis for the creation of the integrated environments that can combine the resources needed to support the large-scale science projects located at multiple laboratories and universities. We present some science case studies that indicate that a paradigm shift in the process of science will come about as a result of Grids providing transparent and secure access to advanced and integrated information and technologies infrastructure: powerful computing systems, large-scale data archives, scientific instruments, and collaboration tools. These changes will be in the form of services that can be integrated with the user's work environment, and that enable uniform and highly capable access to these computers, data, and instruments, regardless of the location or exact nature of these resources. These services will integrate transient-use resources like computing systems, scientific instruments, and data caches (e.g., as they are needed to perform a simulation or analyze data from a single experiment); persistent-use resources, such as databases, data catalogues, and archives; and collaborators, whose involvement will continue for the lifetime of a project or longer. While we largely address large-scale science in this paper, Grids, particularly when combined with Web Services, will address a broad spectrum of science scenarios, both large and small scale.

  1. The Sunrise project: An R&D project for a national information infrastructure prototype

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Juhnyoung

    1995-02-01

    Sunrise is a Los Alamos National Laboratory (LANL) project started in October 1993. It is intended to be a prototype National Information Infrastructure (NII) development project. A main focus of Sunrise is to tie together enabling technologies (networking, object-oriented distributed computing, graphical interfaces, security, multimedia technologies, and data mining technologies) with several specific applications. A diverse set of application areas was chosen to ensure that the solutions developed in the project are as generic as possible. Some of the application areas are materials modeling, medical records and image analysis, transportation simulations, and education. This paper provides a description of Sunrise and a view of the architecture and objectives of this evolving project. The primary objectives of Sunrise are three-fold: (1) To develop common information-enabling tools for advanced scientific research and its applications to industry; (2) To enhance the capabilities of important research programs at the Laboratory; and (3) To define a new way of collaboration between computer science and industrially relevant research.

  2. Surety of the nation's critical infrastructures: The challenge restructuring poses to the telecommunications sector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cox, R.; Drennen, T.E.; Gilliom, L.

    1998-04-01

    The telecommunications sector plays a pivotal role in the system of increasingly connected and interdependent networks that make up national infrastructure. An assessment of the probable structure and function of the bit-moving industry in the twenty-first century must include issues associated with the surety of telecommunications. The term surety, as used here, means confidence in the acceptable behavior of a system in both intended and unintended circumstances. This paper outlines various engineering approaches to surety in systems, generally, and in the telecommunications infrastructure, specifically. It uses the experience and expectations of the telecommunications system of the US as an example of the global challenges. The paper examines the principal factors underlying the change to more distributed systems in this sector, assesses surety issues associated with these changes, and suggests several possible strategies for mitigation. It also studies the ramifications of what could happen if this sector became a target for those seeking to compromise a nation's security and economic well being. Experts in this area generally agree that the U.S. telecommunications sector will eventually respond in a way that meets market demands for surety. Questions remain open, however, about confidence in the telecommunications sector and the nation's infrastructure during unintended circumstances--such as those posed by information warfare or by cascading software failures. Resolution of these questions is complicated by the lack of clear accountability of the private and the public sectors for the surety of telecommunications.

  3. Overview of NASA communications infrastructure

    NASA Technical Reports Server (NTRS)

    Arnold, Ray J.; Fuechsel, Charles

    1991-01-01

    The infrastructure of NASA communications systems for effecting coordination across NASA offices and with the national and international research and technological communities is discussed. The offices and networks of the communication system include the Office of Space Science and Applications (OSSA), which manages all NASA missions, and the Office of Space Operations, which furnishes communication support through the NASCOM, the mission critical communications support network, and the Program Support Communications network. The NASA Science Internet was established by OSSA to centrally manage, develop, and operate an integrated computer network service dedicated to NASA's space science and application research. Planned for the future is the National Research and Education Network, which will provide communications infrastructure to enhance science resources at a national level.

  4. China national space remote sensing infrastructure and its application

    NASA Astrophysics Data System (ADS)

    Li, Ming

    2016-07-01

    Space Infrastructure is a space system that provides communication, navigation and remote sensing services for broad user communities. China National Space Remote Sensing Infrastructure includes remote sensing satellites, the ground system and related systems. According to the principle of multiple functions on one satellite, multiple satellites in one constellation and collaboration between constellations, series of land observation, ocean observation and atmosphere observation satellites have been proposed, with high, middle and low resolution, flying on different orbits and carrying different payloads, to achieve a strong capability for global synthetic observation. With such an infrastructure, we can carry out research on climate change, global geophysical surveying and mapping, water resources management, safety and emergency management, and so on. This paper gives a detailed introduction to the planning of this infrastructure and its applications in different areas, especially the international cooperation potential in the so-called One Belt and One Road space information corridor.

  5. National Water Infrastructure Adaptation Assessment, Part I: Climate Change Adaptation Readiness Analysis

    EPA Science Inventory

    The report “National Water Infrastructure Adaptation Assessment” comprises four parts (Parts I to IV), each in an independent volume. The Part I report presented herein describes a preliminary regulatory and technical analysis of water infrastructure and regulations in the ...

  6. Integrating multiple scientific computing needs via a Private Cloud infrastructure

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Berzano, D.; Brunetti, R.; Lusso, S.; Vallero, S.

    2014-06-01

    In a typical scientific computing centre, diverse applications coexist and share a single physical infrastructure. An underlying Private Cloud facility eases the management and maintenance of heterogeneous use cases such as multipurpose or application-specific batch farms, Grid sites catering to different communities, parallel interactive data analysis facilities and others. It allows resources to be dynamically and efficiently allocated to any application and the virtual machines to be tailored to the applications' requirements. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques; for example, rolling updates can be performed easily while minimizing downtime. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 site, a dynamically expandable PROOF-based Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The Private Cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem (used in two different configurations for worker- and service-class hypervisors) and the OpenWRT Linux distribution (used for network virtualization). A future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and by using mainstream contextualization tools like CloudInit.
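
    Because the site exposes an EC2-compatible API, a federated consumer can drive it with standard EC2 tooling. A minimal sketch using boto3, where the endpoint URL, image ID, and instance type are placeholder assumptions and the user-data is a trivial script consumed by cloud-init for contextualization:

    import boto3

    # Shell-script user-data; cloud-init runs it on first boot.
    USER_DATA = "#!/bin/sh\necho contextualized > /var/tmp/ready\n"

    def launch_worker(endpoint_url, image_id, instance_type="m1.small"):
        ec2 = boto3.client("ec2", endpoint_url=endpoint_url)
        resp = ec2.run_instances(
            ImageId=image_id,
            MinCount=1,
            MaxCount=1,
            InstanceType=instance_type,
            UserData=USER_DATA,  # consumed by cloud-init inside the VM
        )
        return resp["Instances"][0]["InstanceId"]

    print(launch_worker("https://cloud.example.org/ec2", "ami-00000001"))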

  7. Social determinants of health: poverty, national infrastructure and investment.

    PubMed

    Douthit, Nathan T; Alemu, Haimanot Kasahun

    2016-06-22

    This case presentation of a 19-year-old Ethiopian woman diagnosed with nasopharyngeal carcinoma reveals the barriers to medical treatment the patient faces, including poverty and a lack of national infrastructure. The patient lives a life of poverty, and the outcome of her illness results from her being unable to overcome barriers to accessing health care: she cannot afford transport, lodging or treatment. The patient's vulnerability to disease, rooted in her poverty, is not overcome because of the lack of infrastructure, and the infrastructure fails to develop because of inadequate investment and delays in building. The end result is that the patient remains vulnerable to disease. Her disease process affects her family and their contribution to Ethiopia's development. © 2016 BMJ Publishing Group Ltd.

  8. The Information Superhighway and the National Information Infrastructure (NII).

    ERIC Educational Resources Information Center

    Griffith, Jane Bortnick; Smith, Marcia S.

    1994-01-01

    Discusses issues connected with the information superhighway and the National Information Infrastructure (NII). Topics addressed include principles for government action; economic benefits; regulations; applications; information policy; pending federal legislation; private sector/government relationship; open access and universal service; privacy…

  9. The dependence of educational infrastructure on clinical infrastructure.

    PubMed Central

    Cimino, C.

    1998-01-01

    The Albert Einstein College of Medicine needed to assess the growth of its infrastructure for educational computing as a first step to determining if student needs were being met. Included in computing infrastructure are space, equipment, software, and computing services. The infrastructure was assessed by reviewing purchasing and support logs for a six-year period from 1992 to 1998. This included equipment, software, and e-mail accounts provided to students and to faculty for educational purposes. Student space has grown at a constant rate (averaging a 14% increase each year). Student equipment on campus has grown by a constant amount each year (an average of 8.3 computers each year). Student infrastructure off campus and educational support of faculty have not kept pace; both have either declined or remained level over the six-year period. The availability of electronic mail clearly demonstrates this, with accounts being used by 99% of students, 78% of Basic Science Course Leaders, 38% of Clerkship Directors, 18% of Clerkship Site Directors, and 8% of Clinical Elective Directors. The collection of the initial descriptive infrastructure data has revealed problems that may generalize to other medical schools. The discrepancy between infrastructure available to students and faculty on campus and students and faculty off campus creates a setting where students perceive a paradoxical declining support for computer use as they progress through medical school. While clinical infrastructure may be growing, it is at the expense of educational infrastructure at affiliate hospitals. PMID:9929262

  10. Progress in Machine Learning Studies for the CMS Computing Infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonacorsi, Daniele; Kuznetsov, Valentin; Magini, Nicolo

    Computing systems for the LHC experiments developed together with Grids worldwide. While a complete description of the original Grid-based infrastructure and services for the LHC experiments and its recent evolutions can be found elsewhere, it is worth mentioning here the scale of the computing resources needed to fulfill the needs of the LHC experiments in Run-1 and Run-2 so far.

  11. Progress in Machine Learning Studies for the CMS Computing Infrastructure

    DOE PAGES

    Bonacorsi, Daniele; Kuznetsov, Valentin; Magini, Nicolo; ...

    2017-12-06

    Computing systems for the LHC experiments developed together with Grids worldwide. While a complete description of the original Grid-based infrastructure and services for the LHC experiments and its recent evolutions can be found elsewhere, it is worth mentioning here the scale of the computing resources needed to fulfill the needs of the LHC experiments in Run-1 and Run-2 so far.

  12. Infrastructure Task Force National Environmental Policy Act Requirements - February 2011

    EPA Pesticide Factsheets

    This document summarizes, in matrix format, the federal regulatory requirements and guidance for complying with the National Environmental Policy Act for the Infrastructure Task Force federal partner agencies.

  13. Implementing Computer-Aided Instruction in Distance Education: An Infrastructure. RR/89-06.

    ERIC Educational Resources Information Center

    Kotze, Paula

    The infrastructure required for the implementation of computer aided instruction is described with particular reference to the distance education environment at the University of South Africa. A review of the state of the art of online distance education in the United States and Europe is followed by an outline of the proposed infrastructure for…

  14. 78 FR 38723 - National Infrastructure Advisory Council; Meetings

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-27

    ... DEPARTMENT OF HOMELAND SECURITY [Docket No. DHS-2013-0034] National Infrastructure Advisory... (NIAC) will meet July 17, August 14, and September 17, 2013. The meetings will be open to the public. DATES: The NIAC will meet at the following dates and times: July 17, 2013, at 3:00 p.m. to 4:30 p.m...

  15. The national response for preventing healthcare-associated infections: infrastructure development.

    PubMed

    Mendel, Peter; Siegel, Sari; Leuschner, Kristin J; Gall, Elizabeth M; Weinberg, Daniel A; Kahn, Katherine L

    2014-02-01

    In 2009, the US Department of Health and Human Services (HHS) launched the Action Plan to Prevent Healthcare-associated Infections (HAIs). The Action Plan adopted national targets for reduction of specific infections, making HHS accountable for change across the healthcare system over which federal agencies have limited control. This article examines the unique infrastructure developed through the Action Plan to support adoption of HAI prevention practices. Interviews of federal (n=32) and other stakeholders (n=38), reviews of agency documents and journal articles (n=260), and observations of interagency meetings (n=17) and multistakeholder conferences (n=17) over a 3-year evaluation period. We extract key progress and challenges in the development of national HAI prevention infrastructure--1 of the 4 system functions in our evaluation framework encompassing regulation, payment systems, safety culture, and dissemination and technical assistance. We then identify system properties--for example, coordination and alignment, accountability and incentives, etc.--that enabled or hindered progress within each key development. The Action Plan has developed a model of interagency coordination (including a dedicated "home" and culture of cooperation) at the federal level and infrastructure for stimulating change through the wider healthcare system (including transparency and financial incentives, support of state and regional HAI prevention capacity, changes in safety culture, and mechanisms for stakeholder engagement). Significant challenges to infrastructure development included many related to the same areas of progress. The Action Plan has built a foundation of infrastructure to expand prevention of HAIs and presents useful lessons for other large-scale improvement initiatives.

  16. Cultured construction: global evidence of the impact of national values on sanitation infrastructure choice.

    PubMed

    Kaminsky, Jessica A

    2015-06-16

    Case study research often claims that culture, variously defined, impacts infrastructure development. I test this claim using Hofstede's cultural dimensions and newly available data representing change in national coverage of sewer connections, sewerage treatment, and onsite sanitation between 1990 and 2010 for 21 developing nations. The results show that the cultural dimensions of uncertainty avoidance, masculinity-femininity, and individualism-collectivism have statistically significant relationships to sanitation technology choice. These data demonstrate the global impact of culture on infrastructure choice, and reemphasize that local cultural preferences must be considered when constructing sanitation infrastructure.
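
    The statistical test described, relating change in sanitation coverage to Hofstede's dimensions across nations, can be sketched as an ordinary least squares regression. A minimal sketch with pandas and statsmodels; the data frame holds placeholder values, not the study's 21-nation data set:

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.DataFrame({
        "sewer_change":      [12.0, 5.5, 20.1, 8.3, 15.2, 3.9, 10.4, 18.7],
        "uncertainty_avoid": [85, 40, 60, 95, 30, 70, 55, 88],
        "masculinity":       [45, 66, 30, 57, 62, 40, 48, 35],
        "individualism":     [20, 91, 38, 67, 25, 46, 71, 30],
    })

    model = smf.ols(
        "sewer_change ~ uncertainty_avoid + masculinity + individualism",
        data=df,
    ).fit()
    print(model.params)   # coefficient signs
    print(model.pvalues)  # significance, the quantity the study examines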

  17. Geospatial-enabled Data Exploration and Computation through Data Infrastructure Building Blocks

    NASA Astrophysics Data System (ADS)

    Song, C. X.; Biehl, L. L.; Merwade, V.; Villoria, N.

    2015-12-01

    Geospatial data are present everywhere today with the proliferation of location-aware computing devices and sensors. This is especially true in the scientific community where large amounts of data are driving research and education activities in many domains. Collaboration over geospatial data, for example, in modeling, data analysis and visualization, must still overcome the barriers of specialized software and expertise among other challenges. The GABBs project aims at enabling broader access to geospatial data exploration and computation by developing spatial data infrastructure building blocks that leverage capabilities of end-to-end application service and virtualized computing framework in HUBzero. Funded by NSF Data Infrastructure Building Blocks (DIBBS) initiative, GABBs provides a geospatial data architecture that integrates spatial data management, mapping and visualization and will make it available as open source. The outcome of the project will enable users to rapidly create tools and share geospatial data and tools on the web for interactive exploration of data without requiring significant software development skills, GIS expertise or IT administrative privileges. This presentation will describe the development of geospatial data infrastructure building blocks and the scientific use cases that help drive the software development, as well as seek feedback from the user communities.

  18. Analysis of CERN computing infrastructure and monitoring data

    NASA Astrophysics Data System (ADS)

    Nieke, C.; Lassnig, M.; Menichetti, L.; Motesnitsalis, E.; Duellmann, D.

    2015-12-01

    Optimizing a computing infrastructure on the scale of LHC requires a quantitative understanding of a complex network of many different resources and services. For this purpose the CERN IT department and the LHC experiments are collecting a large multitude of logs and performance probes, which are already successfully used for short-term analysis (e.g. operational dashboards) within each group. The IT analytics working group has been created with the goal to bring data sources from different services and on different abstraction levels together and to implement a suitable infrastructure for mid- to long-term statistical analysis. It further provides a forum for joint optimization across single service boundaries and the exchange of analysis methods and tools. To simplify access to the collected data, we implemented an automated repository for cleaned and aggregated data sources based on the Hadoop ecosystem. This contribution describes some of the challenges encountered, such as dealing with heterogeneous data formats, selecting an efficient storage format for MapReduce and external access, and will describe the repository user interface. Using this infrastructure we were able to quantitatively analyze the relationship between CPU/wall fraction, latency/throughput constraints of network and disk and the effective job throughput. In this contribution we will first describe the design of the shared analysis infrastructure and then present a summary of first analysis results from the combined data sources.
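
    One of the quantities analyzed, the CPU-time to wall-time fraction, is straightforward to compute once heterogeneous job records have been cleaned and aggregated. A minimal sketch with placeholder records and field names (not the actual repository schema):

    from collections import defaultdict

    JOBS = [  # placeholder rows standing in for aggregated job records
        {"type": "reco",     "cpu_s": 3400.0, "wall_s": 4000.0},
        {"type": "reco",     "cpu_s": 3100.0, "wall_s": 4200.0},
        {"type": "analysis", "cpu_s":  600.0, "wall_s": 1800.0},
    ]

    def cpu_wall_fraction(jobs):
        """Aggregate CPU/wall fraction per job type."""
        totals = defaultdict(lambda: [0.0, 0.0])
        for job in jobs:
            totals[job["type"]][0] += job["cpu_s"]
            totals[job["type"]][1] += job["wall_s"]
        return {t: cpu / wall for t, (cpu, wall) in totals.items()}

    # Low fractions flag jobs that wait on disk or network rather than CPU.
    print(cpu_wall_fraction(JOBS))  # {'reco': 0.79..., 'analysis': 0.33...}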

  19. 78 FR 73202 - Review and Revision of the National Critical Infrastructure Security and Resilience (NCISR...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-05

    ...This Request for Information (RFI) notice informs the public that the Department of Homeland Security's (DHS) Science and Technology Directorate (S&T) is currently developing a National Critical Infrastructure Security and Resilience Research and Development Plan (NCISR R&D Plan) to conform to the requirements of Presidential Policy Directive 21, Critical Infrastructure Security and Resilience. As part of a comprehensive national review process, DHS solicits public comment on issues or language in the NCISR R&D Plan that need to be included. Critical infrastructure includes both cyber and physical components, systems, and networks for the sixteen established ``critical infrastructures''.

  20. The National Information Infrastructure: Requirements for Education and Training: Executive Summary.

    ERIC Educational Resources Information Center

    TechTrends, 1994

    1994-01-01

    Includes 19 requirements prepared by the National Coordinating Committee for Technology in Education (NCC-TET) to ensure that the national information infrastructure (NII) provides expanded opportunities for education and training. The requirements, which cover access, education and training applications, and technical needs, are intended as…

  1. National Plug-In Electric Vehicle Infrastructure Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, Eric W.; Rames, Clement L.; Muratori, Matteo

    This document describes a study conducted by the National Renewable Energy Laboratory quantifying the charging station infrastructure required to serve the growing U.S. fleet of plug-in electric vehicles (PEVs). PEV sales, which include plug-in hybrid electric vehicles (PHEVs) and battery electric vehicles (BEVs), have surged recently. Most PEV charging occurs at home, but widespread PEV adoption will require the development of a national network of non-residential charging stations. Installation of these stations strategically would maximize the economic viability of early stations while enabling efficient network growth as the PEV market matures. This document describes what effective co-evolution of the PEV fleet and charging infrastructure might look like under a range of scenarios. To develop the roadmap, NREL analyzed PEV charging requirements along interstate corridors and within urban and rural communities. The results suggest that a few hundred corridor fast-charging stations could enable long-distance BEV travel between U.S. cities. Compared to interstate corridors, urban and rural communities are expected to have significantly larger charging infrastructure requirements. About 8,000 fast-charging stations would be required to provide a minimum level of coverage nationwide. In an expanding PEV market, the total number of non-residential charging outlets or 'plugs' required to meet demand ranges from around 100,000 to more than 1.2 million. Understanding what drives this large range in capacity requirements is critical. For example, whether consumers prefer long-range or short-range PEVs has a larger effect on plug requirements than does the total number of PEVs on the road. The relative success of PHEVs versus BEVs also has a major impact, as does the number of PHEVs that charge away from home. This study shows how important it is to understand consumer preferences and driving behaviors when planning charging networks.
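
    The wide 100,000 to 1.2 million plug range can be made concrete with toy arithmetic: the count scales with how many PEVs charge away from home and how many vehicles one plug can serve per day. All parameter values below are illustrative assumptions, not the NREL model:

    def plugs_needed(pev_fleet, away_share, vehicles_per_plug_day):
        """Non-residential plugs needed under a simple sharing model."""
        return pev_fleet * away_share / vehicles_per_plug_day

    FLEET = 15_000_000  # assumed future PEV fleet size
    for away_share, veh_per_plug in [(0.05, 8), (0.40, 5)]:
        plugs = plugs_needed(FLEET, away_share, veh_per_plug)
        print(f"{plugs:,.0f} plugs (away-share={away_share}, "
              f"{veh_per_plug} vehicles/plug/day)")
    # ~94,000 and 1,200,000: behavioral assumptions, not fleet size,
    # dominate the result, as the study emphasizes.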

  2. National Plug-In Electric Vehicle Infrastructure Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muratori, Matteo; Rames, Clement L; Srinivasa Raghavan, Sesha

    This presentation describes a study conducted by the National Renewable Energy Laboratory quantifying the charging station infrastructure required to serve the growing U.S. fleet of plug-in electric vehicles (PEVs). PEV sales, which include plug-in hybrid electric vehicles (PHEVs) and battery electric vehicles (BEVs), have surged recently. Most PEV charging occurs at home, but widespread PEV adoption will require the development of a national network of non-residential charging stations. Installation of these stations strategically would maximize the economic viability of early stations while enabling efficient network growth as the PEV market matures. This document describes what effective co-evolution of the PEV fleet and charging infrastructure might look like under a range of scenarios. To develop the roadmap, NREL analyzed PEV charging requirements along interstate corridors and within urban and rural communities. The results suggest that a few hundred corridor fast-charging stations could enable long-distance BEV travel between U.S. cities. Compared to interstate corridors, urban and rural communities are expected to have significantly larger charging infrastructure requirements. About 8,000 fast-charging stations would be required to provide a minimum level of coverage nationwide. In an expanding PEV market, the total number of non-residential charging outlets or 'plugs' required to meet demand ranges from around 100,000 to more than 1.2 million. Understanding what drives this large range in capacity requirements is critical. For example, whether consumers prefer long-range or short-range PEVs has a larger effect on plug requirements than does the total number of PEVs on the road. The relative success of PHEVs versus BEVs also has a major impact, as does the number of PHEVs that charge away from home. This study shows how important it is to understand consumer preferences and driving behaviors when planning charging networks.

  3. NAS infrastructure management system build 1.5 computer-human interface

    DOT National Transportation Integrated Search

    2001-01-01

    Human factors engineers from the National Airspace System (NAS) Human Factors Branch (ACT-530) of the Federal Aviation Administration William J. Hughes Technical Center conducted an evaluation of the NAS Infrastructure Management System (NIMS) Build ...

  4. A Cloud-based Infrastructure and Architecture for Environmental System Research

    NASA Astrophysics Data System (ADS)

    Wang, D.; Wei, Y.; Shankar, M.; Quigley, J.; Wilson, B. E.

    2016-12-01

    The present availability of high-capacity networks, low-cost computers and storage devices, and the widespread adoption of hardware virtualization and service-oriented architecture provide a great opportunity to enable data and computing infrastructure sharing between closely related research activities. By taking advantage of these approaches, along with the world-class high-performance computing and data infrastructure located at Oak Ridge National Laboratory, a cloud-based infrastructure and architecture has been developed to efficiently deliver essential data and informatics services and utilities to the environmental system research community, and will provide unique capabilities that allow terrestrial ecosystem research projects to share their software utilities (tools), data and even data submission workflows in a straightforward fashion. The infrastructure will minimize large disruptions to current project-based data submission workflows, for better acceptance by existing projects, since many ecosystem research projects already have their own requirements or preferences for data submission and collection. The infrastructure will eliminate scalability problems with current project silos by providing unified data services and infrastructure. The infrastructure consists of two key components: (1) a collection of configurable virtual computing environments and user management systems that expedite data submission and collection from the environmental system research community, and (2) scalable data management services and systems, originated and developed by ORNL data centers.

  5. Uganda's National Transmission Backbone Infrastructure Project: Technical Challenges and the Way Forward

    NASA Astrophysics Data System (ADS)

    Bulega, T.; Kyeyune, A.; Onek, P.; Sseguya, R.; Mbabazi, D.; Katwiremu, E.

    2011-10-01

    Several publications have identified technical challenges facing Uganda's National Transmission Backbone Infrastructure project. This research addresses the technical limitations of the National Transmission Backbone Infrastructure project, evaluates the goals of the project, and compares the results against the technical capability of the backbone. The findings of the study indicate a bandwidth deficit, which will be addressed by using dense wavelength division multiplexing (DWDM) repeaters and by leasing bandwidth from private companies. Microwave links for redundancy, a Network Operation Center for operation and maintenance, and deployment of Worldwide Interoperability for Microwave Access (WiMAX) as a last-mile solution are also suggested.

  6. Working paper : national costs of the metropolitan ITS infrastructure : updated with 2004 deployment data

    DOT National Transportation Integrated Search

    The purpose of this report, "Working Paper National Costs of the Metropolitan ITS infrastructure: Updated with 2004 Deployment Data," is to update the estimates of the costs remaining to deploy Intelligent Transportation Systems (ITS) infrastructure ...

  7. Extreme Scale Computing to Secure the Nation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, D L; McGraw, J R; Johnson, J R

    2009-11-10

    ...significant increases in the scientific bases that underlie the computational tools. Computer codes must be developed that replace phenomenology with increased levels of scientific understanding together with an accompanying quantification of uncertainty. These advanced codes will place significantly higher demands on the computing infrastructure than do the current 3D ASC codes. This article discusses not only the need for a future computing capability at the exascale for the SBSS program, but also considers high performance computing requirements for broader national security questions. For example, the increasing concern over potential nuclear terrorist threats demands a capability to assess threats and potential disablement technologies as well as a rapid forensic capability for determining a nuclear weapons design from post-detonation evidence (nuclear counterterrorism).

  8. Building the national health information infrastructure for personal health, health care services, public health, and research

    PubMed Central

    Detmer, Don E

    2003-01-01

    Background Improving health in our nation requires strengthening four major domains of the health care system: personal health management, health care delivery, public health, and health-related research. Many avoidable shortcomings in the health sector that result in poor quality are due to inaccessible data, information, and knowledge. A national health information infrastructure (NHII) offers the connectivity and knowledge management essential to correct these shortcomings. Better health and a better health system are within our reach. Discussion A national health information infrastructure for the United States should address the needs of personal health management, health care delivery, public health, and research. It should also address relevant global dimensions (e.g., standards for sharing data and knowledge across national boundaries). The public and private sectors will need to collaborate to build a robust national health information infrastructure, essentially a 'paperless' health care system, for the United States. The federal government should assume leadership for assuring a national health information infrastructure as recommended by the National Committee on Vital and Health Statistics and the President's Information Technology Advisory Committee. Progress is needed in the areas of funding, incentives, standards, and continued refinement of a privacy (i.e., confidentiality and security) framework to facilitate personal identification for health purposes. Particular attention should be paid to NHII leadership and change management challenges. Summary A national health information infrastructure is a necessary step for improved health in the U.S. It will require a concerted, collaborative effort by both public and private sectors. "If you cannot measure it, you cannot improve it." (Lord Kelvin) PMID:12525262

  9. A national survey of health service infrastructure and policy impacts on access to computerised CBT in Scotland

    PubMed Central

    2012-01-01

    Background NICE recommends computerised cognitive behavioural therapy (cCBT) for the treatment of several mental health problems such as anxiety and depression. cCBT may be one way that services can reduce waiting lists and improve capacity and efficiency. However, there is some doubt about the extent to which the National Health Service (NHS) in the UK is embracing this new health technology in practice. This study aimed to investigate Scottish health service infrastructure and policies that promote or impede the implementation of cCBT in the NHS. Methods A telephone survey of lead IT staff at all health board areas across Scotland systematically enquired about the ability of local IT infrastructure and IT policies to support delivery of cCBT. Results Overall, most of the health boards possess the required software to use cCBT programmes. However, the majority of NHS health boards reported that they lack dedicated computers for patient use, hence access to cCBT at NHS sites is limited. Additionally, local policies in the majority of boards prevent staff from routinely contacting patients via email, Skype or instant messenger, making the delivery of short, efficient support sessions difficult. Conclusions Overall, most of the infrastructure is in place but is not utilised in ways that allow effective delivery. For cCBT to be successfully delivered within a guided support model, as recommended by national guidelines, dedicated patient computers should be provided to allow access to online interventions. Additionally, policy should allow staff to support patients in convenient ways such as via email or live chat. These measures would increase the likelihood of achieving Scottish health service targets to reduce waiting times for psychological therapies to 18 weeks. PMID:22958309

  10. The multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) high performance computing infrastructure: applications in neuroscience and neuroinformatics research

    PubMed Central

    Goscinski, Wojtek J.; McIntosh, Paul; Felzmann, Ulrich; Maksimenko, Anton; Hall, Christopher J.; Gureyev, Timur; Thompson, Darren; Janke, Andrew; Galloway, Graham; Killeen, Neil E. B.; Raniga, Parnesh; Kaluza, Owen; Ng, Amanda; Poudel, Govinda; Barnes, David G.; Nguyen, Toan; Bonnington, Paul; Egan, Gary F.

    2014-01-01

    The Multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) is a national imaging and visualization facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific Industrial Research Organization (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software, and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computed tomography (CT), electron microscopy and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinformatics and brain imaging research. PMID:24734019

  11. Sustaining a Community Computing Infrastructure for Online Teacher Professional Development: A Case Study of Designing Tapped In

    NASA Astrophysics Data System (ADS)

    Farooq, Umer; Schank, Patricia; Harris, Alexandra; Fusco, Judith; Schlager, Mark

    Community computing has recently grown to become a major research area in human-computer interaction. One of the objectives of community computing is to support computer-supported cooperative work among distributed collaborators working toward shared professional goals in online communities of practice. A core issue in designing and developing community computing infrastructures — the underlying sociotechnical layer that supports communitarian activities — is sustainability. Many community computing initiatives fail because the underlying infrastructure does not meet end user requirements; the community is unable to maintain a critical mass of users consistently over time; it generates insufficient social capital to support significant contributions by members of the community; or, as typically happens with funded initiatives, financial and human capital resources become unavailable to further maintain the infrastructure. On the basis of more than 9 years of design experience with Tapped In, an online community of practice for education professionals, we present a case study that discusses four design interventions that have sustained the Tapped In infrastructure and its community to date. These interventions represent broader design strategies for developing online environments for professional communities of practice.

  12. The Federal Government and Information Technology Standards: Building the National Information Infrastructure.

    ERIC Educational Resources Information Center

    Radack, Shirley M.

    1994-01-01

    Examines the role of the National Institute of Standards and Technology (NIST) in the development of the National Information Infrastructure (NII). Highlights include the standards process; voluntary standards; Open Systems Interconnection problems; Internet Protocol Suite; consortia; government's role; and network security. (16 references) (LRW)

  13. Application of large-scale computing infrastructure for diverse environmental research applications using GC3Pie

    NASA Astrophysics Data System (ADS)

    Maffioletti, Sergio; Dawes, Nicholas; Bavay, Mathias; Sarni, Sofiane; Lehning, Michael

    2013-04-01

    The Swiss Experiment platform (SwissEx: http://www.swiss-experiment.ch) provides a distributed storage and processing infrastructure for environmental research experiments. The aim of the second-phase project (the Open Support Platform for Environmental Research, OSPER, 2012-2015) is to develop the existing infrastructure to provide scientists with an improved workflow. This improved workflow will include pre-defined, documented and connected processing routines. A large-scale computing and data facility is required to provide reliable and scalable access to data for analysis, and it is desirable that such an infrastructure should be free of traditional data handling methods. Such an infrastructure has been developed using the cloud-based part of the Swiss national infrastructure SMSCG (http://www.smscg.ch) and Academic Cloud. The infrastructure under construction supports two main usage models: 1) Ad-hoc data analysis scripts: simple processing scripts, written by the environmental researchers themselves, which can be applied to large data sets via the high-performance infrastructure. Examples of this type of script are spatial statistical analysis scripts (R-based), mostly computed on raw meteorological and/or soil moisture data; these provide processed output in the form of a grid, a plot, or a KML file. 2) Complex models: a more intensive data analysis pipeline centered (initially) around the physical process model Alpine3D and the MeteoIO plugin; depending on the data set, this may require a tightly coupled infrastructure. SMSCG already supports Alpine3D executions as both regular grid jobs and as virtual software appliances. A dedicated appliance with the Alpine3D-specific libraries has been created and made available through the SMSCG infrastructure. The analysis pipelines are activated and supervised by simple control scripts that, depending on the data fetched from the meteorological stations, launch new instances of the Alpine3D appliance.
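
    The control-script usage model can be illustrated with a generic sketch (not the project's actual scripts): poll the station data directory and launch a model run, standing in for an Alpine3D appliance instance, whenever fresh data arrive. The paths, command name, and staleness window are all assumptions:

    import subprocess
    import time
    from pathlib import Path

    DATA_DIR = Path("/data/stations")  # hypothetical station data dump
    FRESH_WINDOW_S = 3600              # act on files newer than one hour

    def fresh_inputs():
        """Return station files modified within the freshness window."""
        now = time.time()
        return [p for p in DATA_DIR.glob("*.smet")
                if now - p.stat().st_mtime < FRESH_WINDOW_S]

    def launch_run(input_file):
        # Placeholder command standing in for instantiating the appliance.
        return subprocess.Popen(["alpine3d-appliance", "--input", str(input_file)])

    procs = [launch_run(p) for p in fresh_inputs()]
    for proc in procs:
        proc.wait()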

  14. A National Strategy to Develop Pragmatic Clinical Trials Infrastructure

    PubMed Central

    Guise, Jeanne‐Marie; Dolor, Rowena J.; Meissner, Paul; Tunis, Sean; Krishnan, Jerry A.; Pace, Wilson D.; Saltz, Joel; Hersh, William R.; Michener, Lloyd; Carey, Timothy S.

    2014-01-01

    An important challenge in comparative effectiveness research is the lack of infrastructure to support pragmatic clinical trials, which compare interventions in usual practice settings and subjects. These trials present challenges that differ from those of classical efficacy trials, which are conducted under ideal circumstances, in patients selected for their suitability, and with highly controlled protocols. In 2012, we launched a 1‐year learning network to identify high‐priority pragmatic clinical trials and to deploy research infrastructure through the NIH Clinical and Translational Science Awards Consortium that could be used to launch and sustain them. The network and infrastructure were initiated as a learning ground and shared resource for investigators and communities interested in developing pragmatic clinical trials. We followed a three‐stage process of developing the network, prioritizing proposed trials, and implementing learning exercises that culminated in a 1‐day network meeting at the end of the year. The year‐long project resulted in five recommendations related to developing the network, enhancing community engagement, addressing regulatory challenges, advancing information technology, and developing research methods. The recommendations can be implemented within 24 months and are designed to lead toward a sustained national infrastructure for pragmatic trials. PMID:24472114

  15. First results from a combined analysis of CERN computing infrastructure metrics

    NASA Astrophysics Data System (ADS)

    Duellmann, Dirk; Nieke, Christian

    2017-10-01

    The IT Analysis Working Group (AWG) has been formed at CERN across individual computing units and the experiments to attempt a cross-cutting analysis of computing infrastructure and application metrics. In this presentation we will describe the first results obtained using medium/long-term data (1 month to 1 year) correlating box-level metrics, job-level metrics from LSF and HTCondor, IO metrics from the physics analysis disk pools (EOS) and networking and application-level metrics from the experiment dashboards. We will cover in particular the measurement of hardware performance and prediction of job duration, the latency sensitivity of different job types and a search for bottlenecks with the production job mix in the current infrastructure. The presentation will conclude with the proposal of a small set of metrics to simplify drawing conclusions also in the more constrained environment of public cloud deployments.
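
    The cross-source correlation step can be sketched by joining metrics from two collectors on host and time bucket and then correlating the columns of interest. Column names and values below are placeholders, not the CERN schemas:

    import pandas as pd

    box = pd.DataFrame({   # box-level monitoring samples
        "host": ["n1", "n1", "n2"], "hour": [0, 1, 0],
        "io_latency_ms": [4.0, 9.5, 5.1],
    })
    jobs = pd.DataFrame({  # batch-system job records
        "host": ["n1", "n1", "n2"], "hour": [0, 1, 0],
        "job_duration_s": [3600, 5200, 3900],
    })

    merged = box.merge(jobs, on=["host", "hour"])
    print(merged["io_latency_ms"].corr(merged["job_duration_s"]))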

  16. Defense strategies for cloud computing multi-site server infrastructures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S.; Ma, Chris Y. T.; He, Fei

    We consider cloud computing server infrastructures for big data applications, which consist of multiple server sites connected over a wide-area network. The sites house a number of servers, network elements and local-area connections, and the wide-area network plays a critical, asymmetric role of providing vital connectivity between them. We model this infrastructure as a system of systems, wherein the sites and wide-area network are represented by their cyber and physical components. These components can be disabled by cyber and physical attacks, and also can be protected against them using component reinforcements. The effects of attacks propagate within the systems, and also beyond them via the wide-area network. We characterize these effects using correlations at two levels using: (a) aggregate failure correlation function that specifies the infrastructure failure probability given the failure of an individual site or network, and (b) first-order differential conditions on system survival probabilities that characterize the component-level correlations within individual systems. We formulate a game between an attacker and a provider using utility functions composed of survival probability and cost terms. At Nash Equilibrium, we derive expressions for the expected capacity of the infrastructure given by the number of operational servers connected to the network for sum-form, product-form and composite utility functions.
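
    The expected-capacity quantity can be illustrated with a toy computation: the expected number of operational servers still reachable through the wide-area network, assuming independent site and network survival. The probabilities below are illustrative, not the paper's equilibrium values:

    SITES = [
        {"servers": 400, "p_survive": 0.95},
        {"servers": 250, "p_survive": 0.85},
    ]
    P_NETWORK = 0.90  # survival probability of the wide-area network

    def expected_capacity(sites, p_network):
        """A server counts only if its site AND the network survive."""
        return p_network * sum(s["servers"] * s["p_survive"] for s in sites)

    print(expected_capacity(SITES, P_NETWORK))  # 533.25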

  17. Network Computing Infrastructure to Share Tools and Data in Global Nuclear Energy Partnership

    NASA Astrophysics Data System (ADS)

    Kim, Guehee; Suzuki, Yoshio; Teshima, Naoya

    CCSE/JAEA (Center for Computational Science and e-Systems/Japan Atomic Energy Agency) integrated a prototype system of a network computing infrastructure for sharing tools and data to support the U.S.-Japan collaboration in GNEP (Global Nuclear Energy Partnership). We focused on three technical issues in applying our information-processing infrastructure: accessibility, security, and usability. In designing the prototype system, we integrated and improved both network and Web technologies. For the accessibility issue, we adopted SSL-VPN (Secure Sockets Layer-Virtual Private Network) technology for access beyond firewalls. For the security issue, we developed an authentication gateway based on the PKI (Public Key Infrastructure) authentication mechanism to strengthen security. We also set a fine-grained access control policy for shared tools and data and used a shared-key encryption method to protect them against leakage to third parties. For the usability issue, we chose Web browsers as the user interface and developed a Web application that provides functions for sharing tools and data. By using the WebDAV (Web-based Distributed Authoring and Versioning) function, users can manipulate shared tools and data through a Windows-like folder environment. We implemented the prototype system on the Grid infrastructure for atomic energy research, AEGIS (Atomic Energy Grid Infrastructure), developed by CCSE/JAEA. The prototype system was applied for trial use in the first period of GNEP.
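
    The WebDAV-based sharing described here uses standard HTTP verbs, so a client can be sketched with the requests library: PROPFIND lists a folder and PUT uploads a file. The gateway URL and credentials are placeholders; the real system authenticates through an SSL-VPN and a PKI gateway:

    import requests

    BASE = "https://gateway.example.org/webdav/shared"  # placeholder URL
    AUTH = ("alice", "secret")  # stand-in for PKI-backed authentication

    def list_folder(url):
        """PROPFIND with Depth: 1 lists the folder's immediate children."""
        resp = requests.request("PROPFIND", url, auth=AUTH,
                                headers={"Depth": "1"})
        resp.raise_for_status()
        return resp.text  # multistatus XML describing the contents

    def upload(url, name, data):
        resp = requests.put(f"{url}/{name}", data=data, auth=AUTH)
        resp.raise_for_status()

    upload(BASE, "notes.txt", b"shared analysis notes")
    print(list_folder(BASE)[:200])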

  18. X-ray-induced acoustic computed tomography of concrete infrastructure

    NASA Astrophysics Data System (ADS)

    Tang, Shanshan; Ramseyer, Chris; Samant, Pratik; Xiang, Liangzhong

    2018-02-01

    X-ray-induced Acoustic Computed Tomography (XACT) takes advantage of both X-ray absorption contrast and high ultrasonic resolution in a single imaging modality by making use of the thermoacoustic effect. In XACT, X-ray absorption by defects and other structures in concrete creates thermally induced pressure jumps that launch ultrasonic waves, which are then received by acoustic detectors to form images. In this research, XACT imaging was used to non-destructively test and identify defects in concrete. For concrete structures, we conclude that XACT imaging allows multiscale imaging at depths ranging from centimeters to meters, with spatial resolutions from sub-millimeter to centimeters. XACT imaging also holds promise for single-sided testing of concrete infrastructure and provides an optimal solution for nondestructive inspection of existing bridges, pavement, nuclear power plants, and other concrete infrastructure.
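
    The thermoacoustic effect that XACT exploits is usually written in the same form as in photoacoustics, with the X-ray absorption coefficient taking the place of the optical one. The textbook relations below are added for orientation and are not quoted from this record:

      % Thermoacoustic wave generation (standard photoacoustic form).
      \[
        \left(\nabla^{2} - \frac{1}{c^{2}}\frac{\partial^{2}}{\partial t^{2}}\right) p(\mathbf{r},t)
          = -\frac{\beta}{C_{p}}\,\frac{\partial H(\mathbf{r},t)}{\partial t},
        \qquad
        p_{0}(\mathbf{r}) = \Gamma\,\mu(\mathbf{r})\,F(\mathbf{r}),
        \qquad
        \Gamma = \frac{\beta c^{2}}{C_{p}},
      \]

    where p is the acoustic pressure, c the speed of sound, β the thermal expansion coefficient, C_p the specific heat capacity, H the heating per unit volume and time deposited by the X-ray pulse, p_0 the initial pressure rise, μ the X-ray absorption coefficient, F the X-ray fluence, and Γ the Grüneisen parameter.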

  19. The national strategy for the physical protection of critical infrastructures and key assets

    DOT National Transportation Integrated Search

    2003-02-01

    This document defines the road ahead for a core mission area identified in the President's National Strategy for Homeland Security-reducing the Nation's vulnerability to acts of terrorism by protecting our critical infrastructures and key assets from...

  20. Global information infrastructure.

    PubMed

    Lindberg, D A

    1994-01-01

    The High Performance Computing and Communications Program (HPCC) is a multiagency federal initiative under the leadership of the White House Office of Science and Technology Policy, established by the High Performance Computing Act of 1991. It has been assigned a critical role in supporting the international collaboration essential to science and to health care. Goals of the HPCC are to extend USA leadership in high performance computing and networking technologies; to improve technology transfer for economic competitiveness, education, and national security; and to provide a key part of the foundation for the National Information Infrastructure. The first component of the National Institutes of Health to participate in the HPCC, the National Library of Medicine (NLM), recently issued a solicitation for proposals to address a range of issues, from privacy to 'testbed' networks, 'virtual reality,' and more. These efforts will build upon the NLM's extensive outreach program and other initiatives, including the Unified Medical Language System (UMLS), MEDLARS, and Grateful Med. New Internet search tools are emerging, such as Gopher and 'Knowbots'. Medicine will succeed in developing future intelligent agents to assist in utilizing computer networks. Our ability to serve patients is so often restricted by lack of information and knowledge at the time and place of medical decision-making. The new technologies, properly employed, will also greatly enhance our ability to serve the patient.

  1. VMEbus based computer and real-time UNIX as infrastructure of DAQ

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yasu, Y.; Fujii, H.; Nomachi, M.

    1994-12-31

    This paper describes what the authors have constructed as the infrastructure of a data acquisition system (DAQ). The paper reports recent developments concerning the HP VME board computer running LynxOS (HP742rt/HP-RT) and Alpha/OSF1 with a VMEbus adapter. The paper also reports the current status of DAQBENCH, a Benchmark Suite for Data Acquisition that measures not only the performance of VME/CAMAC access but also that of context switching, inter-process communication, and so on, for various computers including workstation-based systems and VME board computers.
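
    As a flavor of what such a benchmark measures, the sketch below times inter-process communication with a ping-pong over a pipe. It illustrates the measurement idea only; it is not DAQBENCH itself.

      # Round-trip IPC latency via a message ping-pong between two processes.
      import time
      from multiprocessing import Pipe, Process

      ROUND_TRIPS = 10_000

      def echo(conn):
          # Child process: bounce every message straight back.
          for _ in range(ROUND_TRIPS):
              conn.send_bytes(conn.recv_bytes())
          conn.close()

      if __name__ == "__main__":
          parent, child = Pipe()
          worker = Process(target=echo, args=(child,))
          worker.start()
          payload = b"x" * 64                  # small DAQ-like message
          t0 = time.perf_counter()
          for _ in range(ROUND_TRIPS):
              parent.send_bytes(payload)
              parent.recv_bytes()
          elapsed = time.perf_counter() - t0
          worker.join()
          print(f"IPC round trip: {elapsed / ROUND_TRIPS * 1e6:.1f} us")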

  2. Education as eHealth Infrastructure: Considerations in Advancing a National Agenda for eHealth

    ERIC Educational Resources Information Center

    Hilberts, Sonya; Gray, Kathleen

    2014-01-01

    This paper explores the role of education as infrastructure in large-scale ehealth strategies--in theory, in international practice and in one national case study. Education is often invisible in the documentation of ehealth infrastructure. Nevertheless a review of international practice shows that there is significant educational investment made…

  3. A Computing Infrastructure for Supporting Climate Studies

    NASA Astrophysics Data System (ADS)

    Yang, C.; Bambacus, M.; Freeman, S. M.; Huang, Q.; Li, J.; Sun, M.; Xu, C.; Wojcik, G. S.; Cahalan, R. F.; NASA Climate @ Home Project Team

    2011-12-01

    Climate change is one of the major challenges facing us on planet Earth in the 21st century. Scientists build many models to simulate the past and predict climate change over the next decades or century. Most of the models run at low resolution, with some targeting high resolution in support of practical climate change preparedness. To calibrate and validate the models, millions of model runs are needed to find the best simulation and configuration. This paper introduces the NASA effort on the Climate@Home project to build a supercomputer based on advanced computing technologies, such as cloud computing, grid computing, and others. The Climate@Home computing infrastructure includes several aspects: 1) a cloud computing platform is utilized to manage potential spikes in access to the centralized components, such as the grid computing server for dispatching model runs and collecting their results; 2) a grid computing engine is developed based on MapReduce to dispatch models and model configurations, and to collect simulation results and contribution statistics; 3) a portal serves as the entry point to the project, providing management, sharing, and data exploration for end users; 4) scientists can access customized tools to configure model runs and visualize model results; 5) the public can follow Twitter and Facebook for the latest news about the project. This paper will introduce the latest progress of the project and demonstrate the operational system during the AGU fall meeting. It will also discuss how this technology can become a trailblazer for other climate studies and relevant sciences, and share how the challenges in computation and software integration were solved.
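
    The dispatch/collect pattern in aspect 2 can be sketched as a local map-reduce over model configurations. Everything below (the configurations, the stand-in model, the skill score) is invented for illustration; the real system farms runs out to distributed nodes.

      # Map-reduce style dispatch of model runs and collection of results.
      import random
      from multiprocessing import Pool

      CONFIGS = [{"run_id": i, "resolution_km": r}
                 for i, r in enumerate([200, 100, 50, 25] * 3)]

      def run_model(config):
          # Map step: stand-in for one climate model run; returns a score.
          rng = random.Random(config["run_id"])
          return config["run_id"], rng.random() / config["resolution_km"] ** 0.5

      if __name__ == "__main__":
          with Pool() as pool:                    # dispatcher stand-in
              results = pool.map(run_model, CONFIGS)
          # Reduce step: pick the best-scoring configuration.
          best_id, best_score = max(results, key=lambda r: r[1])
          print(f"best run: {best_id} (score {best_score:.4f})")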

  4. Utilizing Semantic Big Data for realizing a National-scale Infrastructure Vulnerability Analysis System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chinthavali, Supriya; Shankar, Mallikarjun

    Critical Infrastructure systems (CIs) such as energy, water, transportation and communication are highly interconnected and mutually dependent in complex ways. Robust modeling of CI interconnections is crucial to identify vulnerabilities in the CIs. We present here a national-scale Infrastructure Vulnerability Analysis System (IVAS) vision leveraging Semantic Big Data (SBD) tools, Big Data, and Geographical Information Systems (GIS) tools. We survey existing approaches to vulnerability analysis of critical infrastructures and discuss relevant systems and tools aligned with our vision. Next, we present a generic system architecture and discuss challenges including: (1) constructing and managing a CI network-of-networks graph, (2) performing analytic operations at scale, and (3) interactive visualization of analytic output to generate meaningful insights. We argue that this architecture acts as a baseline to realize a national-scale network-based vulnerability analysis system.
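
    Challenge (1), the network-of-networks graph, can be made concrete with a few lines of graph code. The nodes and the naive screening queries below are invented examples, not the IVAS data model.

      # A toy CI network-of-networks with inter-sector dependency edges.
      import networkx as nx

      G = nx.Graph()
      # Intra-sector edges: each sector forms its own network.
      G.add_edges_from([("power:sub1", "power:sub2"), ("power:sub2", "power:sub3")])
      G.add_edges_from([("water:pump1", "water:plant1")])
      G.add_edges_from([("comm:tower1", "comm:switch1")])
      # Inter-sector dependencies stitch the networks together.
      G.add_edges_from([("power:sub2", "water:pump1"), ("power:sub3", "comm:switch1")])

      # Naive vulnerability screen: nodes whose removal disconnects the graph.
      print("single points of failure:", list(nx.articulation_points(G)))

      # Failure sketch: remove one substation and inspect the fragments.
      H = G.copy()
      H.remove_node("power:sub2")
      print("components after failure:", list(nx.connected_components(H)))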

  5. Establishing a Nation Wide Infrastructure for Systematic Use of Patient Reported Information.

    PubMed

    Jensen, Sanne; Lyng, Karen Marie

    2018-01-01

    In Denmark, we have set up a program to establish a nationwide infrastructure for Patient Reported Outcome (PRO) questionnaires. The effort is divided into an IT infrastructure part and a questionnaire development part. This paper describes how development and evaluation are closely knit together in the two tracks, as complexity in the PRO field is high and the IT infrastructure, legal issues, varied clinical workflows, and numerous stakeholders have to be taken into account concurrently. In the design process, we have thus used a participatory design approach to ensure a high level of active stakeholder involvement and the capability of addressing all the relevant issues. In the next phases, we will apply the IT infrastructure in the planned full-scale evaluation of the questionnaires developed in the first phase, while we continue to develop new national questionnaires.

  6. From Regional Healthcare Information Organizations to a National Healthcare Information Infrastructure

    PubMed Central

    Kaufman, James H; Eiron, Iris; Deen, Glenn; Ford, Dan A; Smith, Eishay; Knoop, Sarah; Nelken, H; Kol, Tomer; Mesika, Yossi; Witting, Karen; Julier, Kevin; Bennett, Craig; Rapp, Bill; Carmeli, Boaz; Cohen, Simona

    2005-01-01

    Recently there has been increased focus on the need to modernize the healthcare information infrastructure in the United States.1–4 The U.S. healthcare industry is by far the largest in the world in both absolute dollars and in percentage of GDP (more than $1.5 trillion, or 15 percent of GDP). It is also fragmented and complex. These difficulties, coupled with an antiquated infrastructure for the collection of and access to medical data, lead to enormous inefficiencies and sources of error. Consumer, regulatory, and governmental pressure drive a growing consensus that the time has come to modernize the U.S. healthcare information infrastructure (HII). While such transformation may be disruptive in the short term, it will, in the future, significantly improve the quality, expediency, efficiency, and successful delivery of healthcare while decreasing costs to patients and payers and improving the overall experiences of consumers and providers. The launch of a national health infrastructure initiative in the United States in May 2004, with the goal of providing an electronic health record for every American within the next decade, will eventually transform the healthcare industry in general, just as information technology (IT) has transformed other industries in the past. The key to this successful outcome will be the way we apply IT to healthcare data and the services delivered through IT. This must be accomplished in a way that protects individuals and allows competition but gives caregivers reliable and efficient access to the data required to treat patients and to improve the practice of medical science. This paper describes key IT solutions and technologies that address the challenges of creating a nationwide healthcare IT infrastructure. Furthermore, we discuss the emergence of new electronic healthcare services and the current efforts of IBM Research, Software Group, and Healthcare Life Sciences to realize this new vision for healthcare. PMID:18066378

  7. Weighing the Options for Improving the National Postsecondary Data Infrastructure

    ERIC Educational Resources Information Center

    Rorison, Jamey; Voight, Mamie

    2015-01-01

    Students, policymakers and institutions all need high-quality data about how today's students access and pay for higher education--and what contributes to their success. But the data that are available now are woefully inadequate. We need to improve the national postsecondary data infrastructure. The report thoroughly explores seven options…

  8. Establishing a distributed national research infrastructure providing bioinformatics support to life science researchers in Australia.

    PubMed

    Schneider, Maria Victoria; Griffin, Philippa C; Tyagi, Sonika; Flannery, Madison; Dayalan, Saravanan; Gladman, Simon; Watson-Haigh, Nathan; Bayer, Philipp E; Charleston, Michael; Cooke, Ira; Cook, Rob; Edwards, Richard J; Edwards, David; Gorse, Dominique; McConville, Malcolm; Powell, David; Wilkins, Marc R; Lonie, Andrew

    2017-06-30

    EMBL Australia Bioinformatics Resource (EMBL-ABR) is a developing national research infrastructure, providing bioinformatics resources and support to life science and biomedical researchers in Australia. EMBL-ABR comprises 10 geographically distributed national nodes with one coordinating hub, with current funding provided through Bioplatforms Australia and the University of Melbourne for its initial 2-year development phase. The EMBL-ABR mission is to: (1) increase Australia's capacity in bioinformatics and data sciences; (2) contribute to the development of training in bioinformatics skills; (3) showcase Australian data sets at an international level and (4) enable engagement in international programs. The activities of EMBL-ABR are focussed in six key areas, aligning with comparable international initiatives such as ELIXIR, CyVerse and NIH Commons. These key areas (Tools, Data, Standards, Platforms, Compute and Training) are described in this article.

  9. Monitoring performance of a highly distributed and complex computing infrastructure in LHCb

    NASA Astrophysics Data System (ADS)

    Mathe, Z.; Haen, C.; Stagni, F.

    2017-10-01

    In order to ensure optimal performance of the LHCb Distributed Computing, based on LHCbDIRAC, it is necessary to be able to inspect the behavior over time of many components: firstly the agents and services on which the infrastructure is built, but also all the computing tasks and data transfers that are managed by this infrastructure. This consists of recording and then analyzing time series of a large number of observables, for which the usage of SQL relational databases is far from optimal. Therefore, within DIRAC we have been studying novel possibilities based on NoSQL databases (ElasticSearch, OpenTSDB and InfluxDB), and as a result of this study we developed a new monitoring system based on ElasticSearch. It has been deployed on the LHCb Distributed Computing infrastructure, where it collects data from all the components (agents, services, jobs) and allows creating reports through Kibana and a web user interface based on the DIRAC web framework. In this paper we describe this new implementation of the DIRAC monitoring system. We give details on the ElasticSearch implementation within the general DIRAC framework, as well as an overview of the advantages of the pipeline aggregation used for creating a dynamic bucketing of the time series. We present the advantages of using the ElasticSearch DSL high-level library for creating and running queries. Finally, we present the performance of the system.
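
    To give a concrete sense of the approach, the sketch below issues a time-series query with a pipeline aggregation through the ElasticSearch DSL library mentioned above. The index and field names are invented, not LHCbDIRAC's actual schema.

      # Hourly bucketing of a job metric plus a moving-average pipeline
      # aggregation, via the elasticsearch-dsl high-level library.
      from elasticsearch import Elasticsearch
      from elasticsearch_dsl import Search

      client = Elasticsearch(["http://localhost:9200"])

      s = (Search(using=client, index="lhcb-monitoring-*")
           .filter("range", timestamp={"gte": "now-7d"})
           .extra(size=0))                      # aggregations only, no hits

      s.aggs.bucket("per_hour", "date_histogram",
                    field="timestamp", fixed_interval="1h") \
            .metric("running", "avg", field="running_jobs") \
            .pipeline("trend", "moving_fn", buckets_path="running",
                      window=24, script="MovingFunctions.unweightedAvg(values)")

      resp = s.execute()
      for bucket in resp.aggregations.per_hour.buckets:
          print(bucket.key_as_string, bucket.running.value, bucket.trend.value)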

  10. Distributed Monitoring Infrastructure for Worldwide LHC Computing Grid

    NASA Astrophysics Data System (ADS)

    Andrade, P.; Babik, M.; Bhatt, K.; Chand, P.; Collados, D.; Duggal, V.; Fuente, P.; Hayashi, S.; Imamagic, E.; Joshi, P.; Kalmady, R.; Karnani, U.; Kumar, V.; Lapka, W.; Quick, R.; Tarragon, J.; Teige, S.; Triantafyllidis, C.

    2012-12-01

    The journey of a monitoring probe from its development phase to the moment its execution result is presented in an availability report is a complex process. It goes through multiple phases such as development, testing, integration, release, deployment, execution, data aggregation, computation, and reporting. Further, it involves people with different roles (developers, site managers, VO[1] managers, service managers, management), from different middleware providers (ARC[2], dCache[3], gLite[4], UNICORE[5] and VDT[6]), consortiums (WLCG[7], EMI[11], EGI[15], OSG[13]), and operational teams (GOC[16], OMB[8], OTAG[9], CSIRT[10]). The seamless harmonization of these distributed actors underpins the daily monitoring of the WLCG infrastructure. In this paper we describe the monitoring of the WLCG infrastructure from the operational perspective. We explain the complexity of the journey of a monitoring probe from its execution on a grid node to its visualization on the MyWLCG[27] portal, where it is exposed to other clients. This monitoring workflow profits from the interoperability established between the SAM[19] and RSV[20] frameworks. We show how these two distributed structures are capable of uniting technologies and hiding the complexity around them, making them easy for the community to use. Finally, the different supported deployment strategies, tailored not only for monitoring the entire infrastructure but also for monitoring sites and virtual organizations, are presented and the associated operational benefits highlighted.

  11. US cities can manage national hydrology and biodiversity using local infrastructure policy.

    PubMed

    McManamay, Ryan A; Surendran Nair, Sujithkumar; DeRolph, Christopher R; Ruddell, Benjamin L; Morton, April M; Stewart, Robert N; Troia, Matthew J; Tran, Liem; Kim, Hyun; Bhaduri, Budhendra L

    2017-09-05

    Cities are concentrations of sociopolitical power and prime architects of land transformation, while also serving as consumption hubs of "hard" water and energy infrastructures. These infrastructures extend well outside metropolitan boundaries and impact distal river ecosystems. We used a comprehensive model to quantify the roles of anthropogenic stressors on hydrologic alteration and biodiversity in US streams and isolate the impacts stemming from hard infrastructure developments in cities. Across the contiguous United States, cities' hard infrastructures have significantly altered at least 7% of streams, which influence habitats for over 60% of North America's fish, mussel, and crayfish species. Additionally, city infrastructures have contributed to local extinctions in 260 species and currently influence 970 indigenous species, 27% of which are in jeopardy. We find that ecosystem impacts do not scale with city size but are instead proportionate to infrastructure decisions. For example, Atlanta's impacts by hard infrastructures extend across four major river basins, 12,500 stream km, and contribute to 100 local extinctions of aquatic species. In contrast, Las Vegas, a similar size city, impacts <1,000 stream km, leading to only seven local extinctions. So, cities have local policy choices that can reduce future impacts to regional aquatic ecosystems as they grow. By coordinating policy and communication between hard infrastructure sectors, local city governments and utilities can directly improve environmental quality in a significant fraction of the nation's streams reaching far beyond their city boundaries.

  12. Downscaling seasonal to centennial simulations on distributed computing infrastructures using WRF model. The WRF4G project

    NASA Astrophysics Data System (ADS)

    Cofino, A. S.; Fernández Quiruelas, V.; Blanco Real, J. C.; García Díez, M.; Fernández, J.

    2013-12-01

    Nowadays, Grid computing is a powerful computational tool ready to be used by the scientific community in different areas (such as biomedicine, astrophysics, climate, etc.). However, the use of these distributed computing infrastructures (DCIs) is not yet common practice in climate research, and only a few teams and applications in this area take advantage of them. Thus, the WRF4G project objective is to popularize the use of this technology in the atmospheric sciences area. In order to achieve this objective, one of the most widely used applications has been taken (WRF, a limited-area model and successor of the MM5 model), which has a user community of more than 8000 researchers worldwide. This community develops its research activity in different areas and could benefit from the advantages of Grid resources (case-study simulations, regional hindcast/forecast, sensitivity studies, etc.). The WRF model is used by many groups in the climate research community to carry out downscaling simulations, so this community will also benefit. However, Grid infrastructures have some drawbacks for the execution of applications that make intensive use of CPU and memory for long periods of time. This makes it necessary to develop a specific framework (middleware). This middleware encapsulates the application and provides appropriate services for the monitoring and management of the simulations and the data. Thus, another objective of the WRF4G project is the development of a generic adaptation of WRF to DCIs. It should simplify access to the DCIs for researchers, and also free them from the technical and computational aspects of using these DCIs. Finally, in order to demonstrate the ability of WRF4G to solve actual scientific challenges of interest and relevance to climate science (implying a high computational cost), we will show results from different kinds of downscaling experiments, like ERA-Interim re-analysis, CMIP5 models

  13. Computational Infrastructure for Engine Structural Performance Simulation

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    1997-01-01

    Select computer codes developed over the years to simulate specific aspects of engine structures are described. These codes include blade impact integrated multidisciplinary analysis and optimization, progressive structural fracture, quantification of uncertainties for structural reliability and risk, benefits estimation of new technology insertion, and hierarchical simulation of engine structures made from metal matrix and ceramic matrix composites. Collectively, these codes constitute a unique, ready infrastructure for credibly evaluating new and future engine structural concepts throughout the development cycle: from initial concept, to design and fabrication, to service performance, maintenance and repairs, to retirement for cause, and even to possible recycling. Stated differently, they provide 'virtual' concurrent engineering for the total life-cycle cost of engine structures.

  14. Cyber resilience: a review of critical national infrastructure and cyber security protection measures applied in the UK and USA.

    PubMed

    Harrop, Wayne; Matteson, Ashley

    This paper presents cyber resilience as a key strand of national security. It establishes the importance of critical national infrastructure protection and the growing vicarious nature of remote, well-planned, and well-executed cyber attacks on critical infrastructures. Examples of well-known historical cyber attacks are presented, and the emergence of the 'internet of things' as a cyber vulnerability issue yet to be tackled is explored. The paper identifies key steps being undertaken by those responsible for detecting, deterring, and disrupting cyber attacks on critical national infrastructure in the United Kingdom and the USA.

  15. Cultured Construction: Global Evidence of the Impact of National Values on Piped-to-Premises Water Infrastructure Development.

    PubMed

    Kaminsky, Jessica A

    2016-07-19

    In 2016, the global community undertook the Sustainable Development Goals. One of these goals seeks to achieve universal and equitable access to safe and affordable drinking water for all people by the year 2030. In support of this undertaking, this paper seeks to discover the cultural work done by piped water infrastructure across 33 nations with developed and developing economies that have experienced change in the percentage of population served by piped-to-premises water infrastructure at the national level of analysis. To do so, I regressed the 1990-2012 change in piped-to-premises water infrastructure coverage against Hofstede's cultural dimensions, controlling for per capita GDP, the 1990 baseline level of coverage, percent urban population, overall 1990-2012 change in improved sanitation (all technologies), and per capita freshwater resources. Separate analyses were carried out for the urban, rural, and aggregate national contexts. Hofstede's dimensions provide a measure of cross-cultural difference; high or low scores are not in any way intended to represent better or worse but rather serve as a quantitative way to compare aggregate preferences for ways of being and doing. High scores in the cultural dimensions of Power Distance, Individualism-Collectivism, and Uncertainty Avoidance explain increased access to piped-to-premises water infrastructure in the rural context. Higher Power Distance and Uncertainty Avoidance scores are also statistically significant for increased coverage in the urban and national aggregate contexts. These results indicate that, as presently conceived, piped-to-premises water infrastructure fits best with spatial contexts that prefer hierarchy and centralized control. Furthermore, water infrastructure is understood to reduce uncertainty regarding the provision of individually valued benefits. The results of this analysis identify global trends that enable engineers and policy makers to design and manage more culturally appropriate
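
    The regression design is straightforward to reproduce in outline. The sketch below fits the same specification on synthetic data; the variable names follow the abstract, but the numbers are random stand-ins, not the paper's dataset or coefficients.

      # OLS of coverage change on Hofstede dimensions plus the controls
      # named above, on synthetic data (statsmodels).
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 33                                    # nations in the study
      df = pd.DataFrame({
          "power_distance":        rng.uniform(10, 100, n),
          "individualism":         rng.uniform(10, 100, n),
          "uncertainty_avoidance": rng.uniform(10, 100, n),
          "gdp_per_capita":        rng.lognormal(9, 1, n),
          "baseline_1990":         rng.uniform(0, 90, n),
          "urban_pct":             rng.uniform(20, 95, n),
          "sanitation_change":     rng.uniform(-5, 40, n),
          "freshwater_per_capita": rng.lognormal(8, 1, n),
      })
      # Synthetic outcome: 1990-2012 change in piped-to-premises coverage.
      df["coverage_change"] = (0.2 * df["power_distance"]
                               + 0.1 * df["uncertainty_avoidance"]
                               - 0.3 * df["baseline_1990"]
                               + rng.normal(0, 5, n))

      X = sm.add_constant(df.drop(columns="coverage_change"))
      print(sm.OLS(df["coverage_change"], X).fit().summary())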

  16. Helix Nebula: Enabling federation of existing data infrastructures and data services to an overarching cross-domain e-infrastructure

    NASA Astrophysics Data System (ADS)

    Lengert, Wolfgang; Farres, Jordi; Lanari, Riccardo; Casu, Francesco; Manunta, Michele; Lassalle-Balier, Gerard

    2014-05-01

    Helix Nebula has established a growing public-private partnership of more than 30 commercial cloud providers, SMEs, and publicly funded research organisations and e-infrastructures. The Helix Nebula strategy is to establish a federated cloud service across Europe. Three high-profile flagships, sponsored by CERN (high energy physics), EMBL (life sciences) and ESA/DLR/CNES/CNR (earth science), have been deployed and extensively tested within this federated environment. The commitments behind these initial flagships have created a critical mass that attracts suppliers and users to the initiative, to work together towards an "Information as a Service" market place. Significant progress has been achieved in implementing the following four programmatic goals (as outlined in the strategic plan, Ref. 1):
    Goal #1: Establish a Cloud Computing Infrastructure for the European Research Area (ERA), serving as a platform for innovation and evolution of the overall infrastructure.
    Goal #2: Identify and adopt suitable policies for trust, security and privacy at a European level, provided by the European Cloud Computing framework and infrastructure.
    Goal #3: Create a light-weight governance structure for the future European Cloud Computing Infrastructure that involves all the stakeholders and can evolve over time as the infrastructure, services and user base grow.
    Goal #4: Define a funding scheme involving the three stakeholder groups (service suppliers, users, EC and national funding agencies) in a Public-Private-Partnership model to implement a Cloud Computing Infrastructure that delivers a sustainable business environment adhering to European-level policies.
    Now, in 2014, a first version of this generic cross-domain e-infrastructure is ready to go into operation, building on a federation of European industry and contributors (data, tools, knowledge, ...). This presentation describes how Helix Nebula is being used in the domain of earth science, focusing on geohazards. The

  17. Building oral health research infrastructure: the first national oral health survey of Rwanda.

    PubMed

    Morgan, John P; Isyagi, Moses; Ntaganira, Joseph; Gatarayiha, Agnes; Pagni, Sarah E; Roomian, Tamar C; Finkelman, Matthew; Steffensen, Jane E M; Barrow, Jane R; Mumena, Chrispinus H; Hackley, Donna M

    2018-01-01

    Oral health affects quality of life and is linked to overall health. Enhanced oral health research is needed in low- and middle-income countries to develop strategies that reduce the burden of oral disease, improve oral health and inform oral health workforce and infrastructure development decisions. To implement the first National Oral Health Survey of Rwanda to assess the oral disease burden and inform oral health promotion strategies. In this cross-sectional study, sample size and site selection were based on the World Health Organization (WHO) Oral Health Surveys Pathfinder stratified cluster methodologies. The 15 randomly selected sites included 2 in the capital city, 2 in other urban centers and 11 rural locations representing all provinces and the rural/urban population distribution. A minimum of 125 individuals from each of 5 age groups were included at each site. A Computer Assisted Personal Instrument (CAPI) was developed to administer the study instrument. Nearly two-thirds (64.9%) of the 2097 participants had caries experience and 54.3% had untreated caries. Among adults 20 years of age and older, 32.4% had substantial oral debris and 60.0% had calculus. A majority (70.6%) had never visited an oral health provider. Quality-of-life challenges due to oral diseases/conditions, including pain, difficulty chewing, self-consciousness, and difficulty participating in usual activities, were reported at 63.9%, 42.2%, 36.2%, and 35.4%, respectively. The first National Oral Health Survey of Rwanda was a collaboration of the Ministry of Health of Rwanda, the University of Rwanda Schools of Dentistry and Public Health, the Rwanda Dental Surgeons and Dental (Therapists) Associations, and Tufts University and Harvard University Schools of Dental Medicine. The international effort contributed to building oral health research capacity and resulted in a national oral health database of oral disease burden. This information is essential for developing oral disease prevention and management

  18. Building oral health research infrastructure: the first national oral health survey of Rwanda

    PubMed Central

    Morgan, John P.; Ntaganira, Joseph; Gatarayiha, Agnes; Pagni, Sarah E.; Roomian, Tamar C.; Finkelman, Matthew; Steffensen, Jane E. M.; Barrow, Jane R.; Mumena, Chrispinus H.

    2018-01-01

    ABSTRACT Background: Oral health affects quality of life and is linked to overall health. Enhanced oral health research is needed in low- and middle-income countries to develop strategies that reduce the burden of oral disease, improve oral health and inform oral health workforce and infrastructure development decisions. Objective: To implement the first National Oral Health Survey of Rwanda to assess the oral disease burden and inform oral health promotion strategies. Methods: In this cross-sectional study, sample size and site selection were based on the World Health Organization (WHO) Oral Health Surveys Pathfinder stratified cluster methodologies. The 15 randomly selected sites included 2 in the capital city, 2 in other urban centers and 11 rural locations representing all provinces and the rural/urban population distribution. A minimum of 125 individuals from each of 5 age groups were included at each site. A Computer Assisted Personal Instrument (CAPI) was developed to administer the study instrument. Results: Nearly two-thirds (64.9%) of the 2097 participants had caries experience and 54.3% had untreated caries. Among adults 20 years of age and older, 32.4% had substantial oral debris and 60.0% had calculus. A majority (70.6%) had never visited an oral health provider. Quality-of-life challenges due to oral diseases/conditions, including pain, difficulty chewing, self-consciousness, and difficulty participating in usual activities, were reported at 63.9%, 42.2%, 36.2%, and 35.4%, respectively. Conclusion: The first National Oral Health Survey of Rwanda was a collaboration of the Ministry of Health of Rwanda, the University of Rwanda Schools of Dentistry and Public Health, the Rwanda Dental Surgeons and Dental (Therapists) Associations, and Tufts University and Harvard University Schools of Dental Medicine. The international effort contributed to building oral health research capacity and resulted in a national oral health database of oral disease burden. This information is

  19. 77 FR 19300 - National Infrastructure Advisory Council

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-30

    ... Homeland Security with advice on the security of the critical infrastructure sectors and their information systems. The NIAC will meet to address issues relevant to the protection of critical infrastructure as... Group regarding the scope of the next phase of the Working Group's critical infrastructure resilience...

  20. New Features in the Computational Infrastructure for Nuclear Astrophysics

    NASA Astrophysics Data System (ADS)

    Smith, M. S.; Lingerfelt, E. J.; Scott, J. P.; Hix, W. R.; Nesaraja, C. D.; Koura, H.; Roberts, L. F.

    2006-04-01

    The Computational Infrastructure for Nuclear Astrophysics is a suite of computer codes online at nucastrodata.org that streamlines the incorporation of recent nuclear physics results into astrophysical simulations. The freely-available, cross-platform suite enables users to upload cross sections and S-factors, convert them into reaction rates, parameterize the rates, store the rates in customizable libraries, set up and run custom post-processing element synthesis calculations, and visualize the results. New features include the ability for users to comment on rates or libraries using an email-type interface, a nuclear mass model evaluator, enhanced techniques for rate parameterization, better treatment of rate inverses, and creation and exporting of custom animations of simulation results. We also have online animations of r-process, rp-process, and neutrino-p process element synthesis occurring in stellar explosions.
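
    The cross-section-to-rate conversion the suite automates is the standard thermonuclear reaction rate integral; the textbook form below is added for orientation and is not quoted from this record:

      % Thermonuclear reaction rate per particle pair (standard form).
      \[
        N_A \langle \sigma v \rangle
          = N_A \sqrt{\frac{8}{\pi \mu}} \; (k_B T)^{-3/2}
            \int_{0}^{\infty} \sigma(E)\, E\, e^{-E/k_B T}\, dE
      \]

    where N_A is Avogadro's number, μ the reduced mass of the reacting pair, k_B the Boltzmann constant, T the stellar temperature, and σ(E) the reaction cross section at center-of-mass energy E.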

  1. Cloud Computing in Support of Applied Learning: A Baseline Study of Infrastructure Design at Southern Polytechnic State University

    ERIC Educational Resources Information Center

    Conn, Samuel S.; Reichgelt, Han

    2013-01-01

    Cloud computing represents an architecture and paradigm of computing designed to deliver infrastructure, platforms, and software as constructible computing resources on demand to networked users. As campuses are challenged to better accommodate academic needs for applications and computing environments, cloud computing can provide an accommodating…

  2. Putting the Information Infrastructure to Work. Report of the Information Infrastructure Task Force Committee on Applications and Technology. NIST Special Publication 857.

    ERIC Educational Resources Information Center

    National Inst. of Standards and Technology, Gaithersburg, MD.

    An interconnection of computer networks, telecommunications services, and applications, the National Information Infrastructure (NII) can open up new vistas and profoundly change much of American life. This report explores some of the opportunities and obstacles to the use of the NII by people and organizations. The goal is to express how…

  3. Should dentistry be part of the National Health Information Infrastructure?

    PubMed

    Schleyer, Titus K L

    2004-12-01

    The National Health Information Infrastructure, or NHII, proposes to improve the effectiveness, efficiency and overall quality of health in the United States by establishing a national, electronic information network for health care. To date, dentistry's integration into this network has not been discussed widely. The author reviews the NHII and its goals and structure through published reports and background literature. The author evaluates the advantages and disadvantages of the NHII regarding their implications for the dental care system. The NHII proposes to implement computer-based patient records, or CPRs, for most Americans by 2014, connect personal health information with other clinical and public health information, and enable different types of care providers to access CPRs. Advantages of the NHII include transparency of health information across health care providers, potentially increased involvement of patients in their care, better clinical decision making through connecting patient-specific information with the best clinical evidence, increased efficiency, enhanced bioterrorism defense and potential cost savings. Challenges in the implementation of the NHII in dentistry include limited use of CPRs, required investments in information technology, limited availability and adoption of standards, and perceived threats to privacy and confidentiality. The implementation of the NHII is making rapid strides. Dentistry should become an active participant in the NHII and work to ensure that the needs of dental patients and the profession are met. Practice implications: the NHII has far-reaching implications for dental practice by making it easier to access relevant patient information and by helping to improve clinical decision making.

  4. TCIA Secure Cyber Critical Infrastructure Modernization.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keliiaa, Curtis M.

    The Sandia National Laboratories (Sandia Labs) tribal cyber infrastructure assurance initiative was developed in response to growing national cybersecurity concerns across the sixteen Department of Homeland Security (DHS)-defined critical infrastructure sectors. Technical assistance is provided for the secure modernization of critical infrastructure and key resources from a cyber-ecosystem perspective, with an emphasis on enhanced security, resilience, and protection. Our purpose is to address national critical infrastructure challenges as a shared responsibility.

  5. Climate Science's Globally Distributed Infrastructure

    NASA Astrophysics Data System (ADS)

    Williams, D. N.

    2016-12-01

    The Earth System Grid Federation (ESGF) is primarily funded by the Department of Energy's (DOE's) Office of Science (the Office of Biological and Environmental Research [BER] Climate Data Informatics Program and the Office of Advanced Scientific Computing Research Next Generation Network for Science Program), the National Oceanic and Atmospheric Administration (NOAA), the National Aeronautics and Space Administration (NASA), the National Science Foundation (NSF), the Infrastructure for the European Network for Earth System Modelling (IS-ENES), and the Australian National University (ANU). Support also comes from other U.S. federal and international agencies. The federation works across multiple worldwide data centers and spans seven international network organizations to provide users with the ability to access, analyze, and visualize data using a globally federated collection of networks, computers, and software. Its architecture employs a series of geographically distributed peer nodes that are independently administered and united by common federation protocols and application programming interfaces (APIs). The full ESGF infrastructure has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the Coupled Model Intercomparison Project (CMIP; output used by the Intergovernmental Panel on Climate Change assessment reports), multiple model intercomparison projects (MIPs; endorsed by the World Climate Research Programme [WCRP]), and the Accelerated Climate Modeling for Energy (ACME; ESGF is included in the overarching ACME workflow process to store model output). ESGF is a successful example of integration of disparate open-source technologies into a cohesive functional system that serves the needs of the global climate science community. Data served by ESGF includes not only model output but also observational data from satellites and instruments, reanalysis, and generated images.

  6. WISDOM-II: screening against multiple targets implicated in malaria using computational grid infrastructures.

    PubMed

    Kasam, Vinod; Salzemann, Jean; Botha, Marli; Dacosta, Ana; Degliesposti, Gianluca; Isea, Raul; Kim, Doman; Maass, Astrid; Kenyon, Colin; Rastelli, Giulio; Hofmann-Apitius, Martin; Breton, Vincent

    2009-05-01

    Despite continuous efforts of the international community to reduce the impact of malaria on developing countries, no significant progress has been made in recent years, and the discovery of new drugs is more than ever needed. Out of the many proteins involved in the metabolic activities of the Plasmodium parasite, some are promising targets for rational drug discovery. Recent years have witnessed the emergence of grids, which are highly distributed computing infrastructures particularly well fitted for embarrassingly parallel computations like docking. In 2005, a first attempt at using grids for large-scale virtual screening focused on plasmepsins and ended in the identification of previously unknown scaffolds, which were confirmed in vitro to be active plasmepsin inhibitors. Following this success, a second deployment took place in the fall of 2006, focusing on one well-known target, dihydrofolate reductase (DHFR), and on a new promising one, glutathione-S-transferase. In silico drug design, especially vHTS, is a widely accepted technology in lead identification and lead optimization. This approach therefore builds upon the progress made in computational chemistry, to achieve more accurate in silico docking, and in information technology, to design and operate large-scale grid infrastructures. On the computational side, a sustained infrastructure has been developed: docking at large scale, using different strategies in result analysis, storing the results on the fly into MySQL databases, and applying molecular dynamics refinement and MM-PBSA and MM-GBSA rescoring. The modeling results obtained are very promising, and in vitro confirmation is underway for all the targets against which screening was performed. The current paper describes the rational drug discovery activity at large scale, especially molecular docking using FlexX software on computational grids, in finding hits against three different targets (Pf
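
    The on-the-fly result storage is simple to picture. In the sketch below, sqlite3 stands in for the project's MySQL databases, and the schema, compound identifier, and score are invented for illustration; this is not the WISDOM-II code.

      # Storing docking results as they arrive, then ranking hits that go
      # on to MD refinement and MM-PBSA / MM-GBSA rescoring.
      import sqlite3

      conn = sqlite3.connect("docking_results.db")
      conn.execute("""
          CREATE TABLE IF NOT EXISTS docking (
              target    TEXT,     -- e.g. 'DHFR'
              ligand_id TEXT,     -- compound identifier
              score     REAL      -- docking score (lower = better binding)
          )
      """)

      def store_result(target, ligand_id, score):
          # Called after each docking job finishes on a grid node.
          conn.execute("INSERT INTO docking VALUES (?, ?, ?)",
                       (target, ligand_id, score))
          conn.commit()

      store_result("DHFR", "ZINC00001234", -9.7)

      # Rank the best-scoring ligands for one target.
      hits = conn.execute(
          "SELECT ligand_id, score FROM docking "
          "WHERE target = ? ORDER BY score LIMIT 10", ("DHFR",)
      ).fetchall()
      print(hits)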

  7. 75 FR 67989 - Agency Information Collection Activities: Office of Infrastructure Protection; Infrastructure...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-04

    ..., National Protection and Programs Directorate, Office of Infrastructure Protection (IP), will submit the... manner.'' DHS designated IP to lead these efforts. Given that the vast majority of the Nation's critical infrastructure and key resources in most sectors are privately owned or controlled, IP's success in achieving the...

  8. National Coordinating Committee for Technology in Education and Training (NCC-TET) Requirements for the National Information Infrastructure (NII).

    ERIC Educational Resources Information Center

    Yrchik, John; Cradler, John

    1994-01-01

    Discusses guidelines that were developed to ensure that the National Information Infrastructure provides expanded opportunities for education and training. Topics include access requirements for homes and work places as well as schools; education and training application requirements, including coordination by federal departments and agencies; and…

  9. Intellectual Property and the National Information Infrastructure. The Report of the Working Group on Intellectual Property Rights.

    ERIC Educational Resources Information Center

    Lehman, Bruce A.

    In February 1993, the Information Infrastructure Task Force (IITF) was formed to articulate and implement the Clinton Administration's vision for the National Information Infrastructure (NII). The Working Group on Intellectual Property Rights was established within the Information Policy Committee to examine the intellectual property implications…

  10. Infrastructure sensing.

    PubMed

    Soga, Kenichi; Schooling, Jennifer

    2016-08-06

    Design, construction, maintenance and upgrading of civil engineering infrastructure requires fresh thinking to minimize use of materials, energy and labour. This can only be achieved by understanding the performance of the infrastructure, both during its construction and throughout its design life, through innovative monitoring. Advances in sensor systems offer intriguing possibilities to radically alter methods of condition assessment and monitoring of infrastructure. In this paper, it is hypothesized that the future of infrastructure relies on smarter information; the rich information obtained from embedded sensors within infrastructure will act as a catalyst for new design, construction, operation and maintenance processes for integrated infrastructure systems linked directly with user behaviour patterns. Some examples of emerging sensor technologies for infrastructure sensing are given. They include distributed fibre-optics sensors, computer vision, wireless sensor networks, low-power micro-electromechanical systems, energy harvesting and citizens as sensors.

  11. [Design and study of parallel computing environment of Monte Carlo simulation for particle therapy planning using a public cloud-computing infrastructure].

    PubMed

    Yokohama, Noriya

    2013-07-01

    This report aimed to design the architecture, and to measure the performance, of a parallel computing environment for Monte Carlo simulation in particle therapy planning, using a high-performance computing (HPC) instance within a public cloud-computing infrastructure. Performance measurements showed an approximately 28 times faster speed than seen with a single-thread architecture, combined with improved stability. A study of methods for optimizing the system operations also indicated lower cost.
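
    The speedup measurement can be illustrated with a toy Monte Carlo estimate run single-threaded and then across a process pool. This is an illustration of the methodology only, not the report's simulation code, and the speedup observed depends on the host.

      # Parallel speedup of a toy Monte Carlo "particle history" count.
      import time
      import random
      from multiprocessing import Pool

      def simulate(args):
          seed, n = args
          rng = random.Random(seed)
          # Toy histories: count samples landing inside a quarter circle.
          return sum(rng.random() ** 2 + rng.random() ** 2 < 1.0
                     for _ in range(n))

      def run(n_total, workers):
          chunks = [(i, n_total // workers) for i in range(workers)]
          t0 = time.perf_counter()
          if workers == 1:
              results = [simulate(chunks[0])]
          else:
              with Pool(workers) as pool:
                  results = pool.map(simulate, chunks)
          return sum(results), time.perf_counter() - t0

      if __name__ == "__main__":
          _, t1 = run(4_000_000, 1)    # single-thread baseline
          _, t8 = run(4_000_000, 8)    # parallel run
          print(f"speedup with 8 workers: {t1 / t8:.1f}x")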

  12. National Geodata Policy Forum: present and emerging U.S. policies governing the development, evolution, and use of the National Spatial Data Infrastructure: summary report

    USGS Publications Warehouse

    Federal Geographic Data Committee, U.S. Geological Survey

    1993-01-01

    The first National Geo-Data Policy Forum was held on May 10-12, 1993, in Tyson's Corner, Virginia. The objective of the National Geo-Data Policy Forum was to examine policies related to the evolution and use of the National Spatial Data Infrastructure (NSDI). A second goal was to identify issues concerning spatial data technology and its use by all citizens. Policy makers from the public and private sectors offered ideas on the myriad issues and questions related to the NSDI and learned of concerns that their organizations must address. The links that connect the NSDI to the Clinton Administration's National Information Infrastructure were identified and discussed. The forum offered participants an opportunity to define the NSDI's role in carrying out technology policy.

  13. U.S. National Cyberstrategy and Critical Infrastructure: The Protection Mandate and Its Execution

    DTIC Science & Technology

    2013-09-01

    ...disease and pest response; and provides nutritional assistance. Provides the financial infrastructure of the nation. This sector consists of commercial

  14. Critical infrastructure protection.

    PubMed

    Deitz, Kim M

    2012-01-01

    Current government policies for protecting the nation's critical infrastructure are described in this article, which focuses on hospital disaster planning and incident management and the significant role of Security in infrastructure protection.

  15. Infrastructure sensing

    PubMed Central

    Soga, Kenichi; Schooling, Jennifer

    2016-01-01

    Design, construction, maintenance and upgrading of civil engineering infrastructure requires fresh thinking to minimize use of materials, energy and labour. This can only be achieved by understanding the performance of the infrastructure, both during its construction and throughout its design life, through innovative monitoring. Advances in sensor systems offer intriguing possibilities to radically alter methods of condition assessment and monitoring of infrastructure. In this paper, it is hypothesized that the future of infrastructure relies on smarter information; the rich information obtained from embedded sensors within infrastructure will act as a catalyst for new design, construction, operation and maintenance processes for integrated infrastructure systems linked directly with user behaviour patterns. Some examples of emerging sensor technologies for infrastructure sensing are given. They include distributed fibre-optics sensors, computer vision, wireless sensor networks, low-power micro-electromechanical systems, energy harvesting and citizens as sensors. PMID:27499845

  16. Achievable steps toward building a National Health Information infrastructure in the United States.

    PubMed

    Stead, William W; Kelly, Brian J; Kolodner, Robert M

    2005-01-01

    Consensus is growing that a health care information and communication infrastructure is one key to fixing the crisis in the United States in health care quality, cost, and access. The National Health Information Infrastructure (NHII) is an initiative of the Department of Health and Human Services receiving bipartisan support. There are many possible courses toward its objective. Decision makers need to reflect carefully on which approaches are likely to work on a large enough scale to have the intended beneficial national impacts and which are better left to smaller projects within the boundaries of health care organizations. This report provides a primer for use by informatics professionals as they explain aspects of that dividing line to policy makers and to health care leaders and front-line providers. It then identifies short-term, intermediate, and long-term steps that might be taken by the NHII initiative.

  17. Achievable Steps Toward Building a National Health Information Infrastructure in the United States

    PubMed Central

    Stead, William W.; Kelly, Brian J.; Kolodner, Robert M.

    2005-01-01

    Consensus is growing that a health care information and communication infrastructure is one key to fixing the crisis in the United States in health care quality, cost, and access. The National Health Information Infrastructure (NHII) is an initiative of the Department of Health and Human Services receiving bipartisan support. There are many possible courses toward its objective. Decision makers need to reflect carefully on which approaches are likely to work on a large enough scale to have the intended beneficial national impacts and which are better left to smaller projects within the boundaries of health care organizations. This report provides a primer for use by informatics professionals as they explain aspects of that dividing line to policy makers and to health care leaders and front-line providers. It then identifies short-term, intermediate, and long-term steps that might be taken by the NHII initiative. PMID:15561783

  18. Management advisory memorandum on National Airspace System infrastructure management system prototype, Federal Aviation Administration

    DOT National Transportation Integrated Search

    1997-03-01

    This is our Management Advisory Memorandum on the National Airspace : System (NAS) Infrastructure Management System (NIMS) prototype : project in the Federal Aviation Administration (FAA). Our review was : initiated in response to a hotline complaint...

  19. The National Plan for Research and Development In Support of Critical Infrastructure Protection

    DTIC Science & Technology

    2004-01-01

    vulnerabilities and highlight areas for improvement. As part of this effort, CIP&CP has created a research and development agenda aimed at improving... Long-term direction provided by the CIP R&D Plan: the creation of a national critical... Mapping to other national R&D plans: the many R&D plans outside the direct context of CIP underway within DHS, other

  20. Critical Infrastructure Interdependencies Assessment

    DOE PAGES

    Petit, Frederic; Verner, Duane

    2016-11-01

    Throughout the world there is strong recognition that critical infrastructure security and resilience needs to be improved. In the United States, the National Infrastructure Protection Plan (NIPP) provides the strategic vision to guide the national effort to manage risk to the Nation's critical infrastructure. The achievement of this vision is challenged by the complexity of critical infrastructure systems and their inherent interdependencies. The update to the NIPP presents an opportunity to advance the nation's efforts to further understand and analyze interdependencies. Such an important undertaking requires the involvement of public and private sector stakeholders and the reinforcement of existing partnerships and collaborations within the U.S. Department of Homeland Security (DHS) and other Federal agencies, including national laboratories; State, local, tribal, and territorial governments; and nongovernmental organizations.

  1. Probability Distributome: A Web Computational Infrastructure for Exploring the Properties, Interrelations, and Applications of Probability Distributions.

    PubMed

    Dinov, Ivo D; Siegrist, Kyle; Pearl, Dennis K; Kalinin, Alexandr; Christou, Nicolas

    2016-06-01

    Probability distributions are useful for modeling, simulation, analysis, and inference on varieties of natural processes and physical phenomena. There are uncountably many probability distributions. However, a few dozen families of distributions are commonly defined and are frequently used in practice for problem solving, experimental applications, and theoretical studies. In this paper, we present a new computational and graphical infrastructure, the Distributome, which facilitates the discovery, exploration and application of diverse spectra of probability distributions. The extensible Distributome infrastructure provides interfaces for (human and machine) traversal, search, and navigation of all common probability distributions. It also enables distribution modeling, applications, investigation of inter-distribution relations, as well as their analytical representations and computational utilization. The entire Distributome framework is designed and implemented as an open-source, community-built, and Internet-accessible infrastructure. It is portable, extensible and compatible with HTML5 and Web2.0 standards (http://Distributome.org). We demonstrate two types of applications of the probability Distributome resources: computational research and science education. The Distributome tools may be employed to address five complementary computational modeling applications (simulation, data-analysis and inference, model-fitting, examination of the analytical, mathematical and computational properties of specific probability distributions, and exploration of the inter-distributional relations). Many high school and college science, technology, engineering and mathematics (STEM) courses may be enriched by the use of modern pedagogical approaches and technology-enhanced methods. The Distributome resources provide enhancements for blended STEM education by improving student motivation, augmenting the classical curriculum with interactive webapps, and overhauling the
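
    One inter-distributional relation of the kind the Distributome catalogues can be checked numerically: the sum of k independent Exponential(rate) variables follows a Gamma(k, 1/rate) distribution. The sketch below verifies this by simulation; it is an illustration, not part of the Distributome toolkit.

      # Simulated check of the Exponential-sum / Gamma relation (scipy).
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(42)
      k, rate, n = 5, 2.0, 100_000

      # Simulate: sum k exponentials per sample.
      sums = rng.exponential(scale=1.0 / rate, size=(n, k)).sum(axis=1)

      # Compare against the analytic Gamma(k, scale=1/rate) distribution.
      ks_stat, p_value = stats.kstest(sums, "gamma", args=(k, 0, 1.0 / rate))
      print(f"KS statistic {ks_stat:.4f}, p-value {p_value:.3f}")
      # A large p-value means the samples are consistent with the relation.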

  2. Probability Distributome: A Web Computational Infrastructure for Exploring the Properties, Interrelations, and Applications of Probability Distributions

    PubMed Central

    Dinov, Ivo D.; Siegrist, Kyle; Pearl, Dennis K.; Kalinin, Alexandr; Christou, Nicolas

    2015-01-01

    Probability distributions are useful for modeling, simulation, analysis, and inference on varieties of natural processes and physical phenomena. There are uncountably many probability distributions. However, a few dozen families of distributions are commonly defined and are frequently used in practice for problem solving, experimental applications, and theoretical studies. In this paper, we present a new computational and graphical infrastructure, the Distributome, which facilitates the discovery, exploration and application of diverse spectra of probability distributions. The extensible Distributome infrastructure provides interfaces for (human and machine) traversal, search, and navigation of all common probability distributions. It also enables distribution modeling, applications, investigation of inter-distribution relations, as well as their analytical representations and computational utilization. The entire Distributome framework is designed and implemented as an open-source, community-built, and Internet-accessible infrastructure. It is portable, extensible and compatible with HTML5 and Web2.0 standards (http://Distributome.org). We demonstrate two types of applications of the probability Distributome resources: computational research and science education. The Distributome tools may be employed to address five complementary computational modeling applications (simulation, data-analysis and inference, model-fitting, examination of the analytical, mathematical and computational properties of specific probability distributions, and exploration of the inter-distributional relations). Many high school and college science, technology, engineering and mathematics (STEM) courses may be enriched by the use of modern pedagogical approaches and technology-enhanced methods. The Distributome resources provide enhancements for blended STEM education by improving student motivation, augmenting the classical curriculum with interactive webapps, and overhauling the

  3. Working paper : national costs of the metropolitan ITS infrastructure : updated with 2005 deployment data

    DOT National Transportation Integrated Search

    2006-07-01

    The purpose of this report, "Working Paper National Costs of the Metropolitan ITS Infrastructure: Updated with 2005 Deployment Data," is to update the estimates of the costs remaining to fully deploy Intelligent Transportation Systems (ITS) infrastru...

  4. The National Biological Information Infrastructure as an E-Government tool

    USGS Publications Warehouse

    Sepic, R.; Kase, K.

    2002-01-01

    Coordinated by the U.S. Geological Survey (USGS), the National Biological Information Infrastructure (NBII) is a Web-based system that provides access to data and information on the nation's biological resources. Although it was begun in 1993, predating any formal E-Government initiative, the NBII typifies the E-Government concepts outlined in the President's Management Agenda, as well as in the proposed E-Government Act of 2002. This article, an individual case study and not a broad survey with extensive references to the literature, explores the structure and operation of the NBII in relation to several emerging trends in E-Government: end-user focus, defined and scalable milestones, public-private partnerships, alliances with stakeholders, and interagency cooperation. © 2002 Elsevier Science Inc. All rights reserved.

  5. Regional and National Use of Semi‐Natural and Natural Depressional Wetlands in Green Infrastructure

    EPA Science Inventory

    Regional and National Use of Semi‐Natural and Natural Depressional Wetlands in Green Infrastructure. Charles Lane, US Environmental Protection Agency; Ellen D'Amico, Pegasus Technical Services. Depressional wetlands are frequently amongst the first aquatic systems to be ...

  6. National Aeronautics Research, Development, Test and Evaluation (RDT&E) Infrastructure Plan

    DTIC Science & Technology

    2011-01-01

    addressed in the National Aeronautics R&D Plan, identifying unnecessary redundancy solely on the basis of infrastructure required to support ... near, mid, and far terms, and impact not only scramjet propulsion systems, but potential turbine-based combined cycle systems as well. Turbine Engine ... Icing Test Facilities ... A greater understanding of the impact that icing conditions have on turbine engine operations is needed to develop enhanced

  7. An Open Computing Infrastructure that Facilitates Integrated Product and Process Development from a Decision-Based Perspective

    NASA Technical Reports Server (NTRS)

    Hale, Mark A.

    1996-01-01

    Computer applications for design have evolved rapidly over the past several decades, and significant payoffs are being achieved by organizations through reductions in design cycle times. These applications are overwhelmed by the requirements imposed during complex, open engineering systems design. Organizations are faced with a number of different methodologies, numerous legacy disciplinary tools, and a very large amount of data. Yet they are also faced with few interdisciplinary tools for design collaboration or methods for achieving the revolutionary product designs required to maintain a competitive advantage in the future. These organizations are looking for a software infrastructure that integrates current corporate design practices with newer simulation and solution techniques. Such an infrastructure must be robust to changes in both corporate needs and enabling technologies. In addition, this infrastructure must be user-friendly, modular and scalable. This need is the motivation for the research described in this dissertation. The research is focused on the development of an open computing infrastructure that facilitates product and process design. In addition, this research explicitly deals with human interactions during design through a model that focuses on the role of a designer as that of decision-maker. The research perspective here is taken from that of design as a discipline with a focus on Decision-Based Design, Theory of Languages, Information Science, and Integration Technology. Given this background, a Model of IPPD is developed and implemented along the lines of a traditional experimental procedure: with the steps of establishing context, formalizing a theory, building an apparatus, conducting an experiment, reviewing results, and providing recommendations. Based on this Model, Design Processes and Specification can be explored in a structured and implementable architecture. An architecture for exploring design called DREAMS (Developing Robust

  8. Cloud Infrastructure & Applications - CloudIA

    NASA Astrophysics Data System (ADS)

    Sulistio, Anthony; Reich, Christoph; Doelitzscher, Frank

    The idea behind Cloud Computing is to deliver Infrastructure-as-a-Service and Software-as-a-Service over the Internet on a simple pay-per-use business model. To harness the potential of Cloud Computing for e-Learning and research purposes, and for small- and medium-sized enterprises, the Hochschule Furtwangen University established a new project, called Cloud Infrastructure & Applications (CloudIA). The CloudIA project is a market-oriented cloud infrastructure that leverages different virtualization technologies by supporting Service-Level Agreements for various service offerings. This paper describes the CloudIA project in detail and reports our early experiences in building a private cloud using an existing infrastructure.

  9. National Infrastructure Protection Plan

    DTIC Science & Technology

    2006-01-01

    effective and efficient CI/KR protection; and • Provide a system for continuous measurement and improvement of CI/KR ... information-based core processes, a top-down system-, network-, or function-based approach may be more appropriate. A bottom-up approach normally ... e-commerce, e-mail, and R&D systems. • Control Systems: Cyber systems used within many infrastructure sectors and industries to monitor and

  10. 75 FR 61160 - National Protection and Programs Directorate; National Infrastructure Advisory Council

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-04

    ... systems. The NIAC will meet to address issues relevant to the protection of critical infrastructure as.... Deliberation: Optimization of Resources for Mitigating Infrastructure Disruptions VII. Discussion of Potential...

  11. Alternative Transportation Systems Vehicles and Supporting Infrastructure Guide : Plan Implementation Considerations for National Park Managers.

    DOT National Transportation Integrated Search

    2004-01-09

    This manual is a guide to the basic concepts involved and issues to be addressed in acquiring and maintaining vehicles, supporting infrastructure, and personnel needed for alternative transportation systems to serve visitors to national parks, recrea...

  12. The process of moving from a regionally based cervical cytology biobank to a national infrastructure.

    PubMed

    Perskvist, Nasrin; Norlin, Loreana; Dillner, Joakim

    2015-04-01

    This article addresses the important issue of the standardization of the biobank process. It reports on i) the implementation of standard operating procedures for the processing of liquid-based cervical cells, ii) the standardization of storage conditions, and iii) the ultimate establishment of nationwide standardized biorepositories for cervical specimens. Given the differences in the infrastructure and healthcare systems of various county councils in Sweden, these efforts were designed to develop standardized methods of biobanking across the nation. The standardization of cervical sample processing and biobanking is an important and widely acknowledged issue. Efforts to address these concerns will facilitate better patient care and improve research based on retrospective and prospective collections of patient samples and cohorts. The successful nationalization of the Cervical Cytology Biobank in Sweden is based on three vital issues: i) the flexibility of the system to adapt to other regional systems, ii) the development of the system based on national collaboration between the university and the county councils, and iii) stable governmental financing by the provider, the Biobanking and Molecular Resource Infrastructure of Sweden (BBMRI.se). We will share our experiences with biorepository communities to promote understanding of and advances in opportunities to establish a nationalized biobank which covers the healthcare of the entire nation.

  13. 75 FR 39266 - National Protection and Programs Directorate; National Infrastructure Advisory Council

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-08

    ... infrastructure sectors and their information systems. Pursuant to 41 CFR 102-3.150(b), this notice was published... Critical Infrastructure Resilience Goals VI. Working Group Status: Optimization of Resources for Mitigating...

  14. NHERI: Advancing the Research Infrastructure of the Multi-Hazard Community

    NASA Astrophysics Data System (ADS)

    Blain, C. A.; Ramirez, J. A.; Bobet, A.; Browning, J.; Edge, B.; Holmes, W.; Johnson, D.; Robertson, I.; Smith, T.; Zuo, D.

    2017-12-01

    The Natural Hazards Engineering Research Infrastructure (NHERI), supported by the National Science Foundation (NSF), is a distributed, multi-user national facility that provides the natural hazards research community with access to an advanced research infrastructure. NHERI comprises a Network Coordination Office (NCO), a cloud-based cyberinfrastructure (DesignSafe-CI), a computational modeling and simulation center (SimCenter), and eight Experimental Facilities (EFs), including a post-disaster, rapid response research facility (RAPID). Ultimately, NHERI enables researchers to explore and test ground-breaking concepts to protect homes, businesses and infrastructure lifelines from earthquakes, windstorms, tsunamis, and surge, enabling innovations that help prevent natural hazards from becoming societal disasters. When coupled with education and community outreach, NHERI will facilitate research and educational advances that contribute knowledge and innovation toward improving the resiliency of the nation's civil infrastructure to withstand natural hazards. The unique capabilities and coordinating activities over Year 1 between NHERI's DesignSafe-CI, the SimCenter, and individual EFs will be presented. Basic descriptions of each component are also found at https://www.designsafe-ci.org/facilities/. Also discussed are the various roles of the NCO in leading development of a 5-year multi-hazard science plan, coordinating facility scheduling and fostering the sharing of technical knowledge and best practices, leading education and outreach programs such as the recent Summer Institute and multi-facility REU program, ensuring a platform for technology transfer to practicing engineers, and developing strategic national and international partnerships to support a diverse multi-hazard research and user community.

  15. Centre for Research Infrastructure of Polish GNSS Data - response and possible contribution to EPOS

    NASA Astrophysics Data System (ADS)

    Araszkiewicz, Andrzej; Rohm, Witold; Bosy, Jaroslaw; Szolucha, Marcin; Kaplon, Jan; Kroszczynski, Krzysztof

    2017-04-01

    In the frame of the first call under Action 4.2 (Development of modern research infrastructure of the science sector) in the Smart Growth Operational Programme 2014-2020, the "EPOS-PL" project was launched in late 2016. The following institutes are responsible for the implementation of this project: Institute of Geophysics, Polish Academy of Sciences (Project Leader); Academic Computer Centre Cyfronet, AGH University of Science and Technology; Central Mining Institute; the Institute of Geodesy and Cartography; Wrocław University of Environmental and Life Sciences; and the Military University of Technology. In addition, resources constituting the entrepreneur's own contribution will come from the Polish Mining Group. The EPOS-PL Research Infrastructure will integrate both existing and newly built National Research Infrastructures (Theme Centre for Research Infrastructures), which, under the premise of the EPOS programme, are financed exclusively by national funds. In addition, an e-science platform will be developed. The Centre for Research Infrastructure of GNSS Data (CIBDG - Task 5) will be built on the experience and facilities of two institutions: the Military University of Technology and the Wrocław University of Environmental and Life Sciences. The project includes the construction of the National GNSS Repository with data QC procedures and the adaptation of two Regional GNSS Analysis Centres for rapid and long-term geodynamical monitoring.

  16. Romanian contribution to research infrastructure database for EPOS

    NASA Astrophysics Data System (ADS)

    Ionescu, Constantin; Craiu, Andreea; Tataru, Dragos; Balan, Stefan; Muntean, Alexandra; Nastase, Eduard; Oaie, Gheorghe; Asimopolos, Laurentiu; Panaiotu, Cristian

    2014-05-01

    European Plate Observation System (EPOS) is a long-term plan to facilitate integrated use of data, models and facilities from mainly existing, but also new, distributed research infrastructures for solid Earth science. During the EPOS Preparatory Phase, national research infrastructures were integrated at the pan-European level to create the EPOS distributed research infrastructure, in which Romania currently participates through the earth science research infrastructures of national interest declared on the National Roadmap. The mission of EPOS is to build an efficient and comprehensive multidisciplinary research platform for solid Earth sciences in Europe and to allow the scientific community to study the same phenomena from different points of view, in different time periods and at different spatial scales (laboratory and field experiments). At the national scale, research and monitoring infrastructures have gathered a vast amount of geological and geophysical data, which have been used by research networks to underpin our understanding of the Earth. EPOS promotes the creation of comprehensive national and regional consortia, as well as the organization of collective actions. To serve the EPOS goals, a group of National Research Institutes in Romania, together with their infrastructures, gathered in an EPOS National Consortium, as follows: 1. National Institute for Earth Physics - seismic, strong motion, GPS and geomagnetic networks and Experimental Laboratory; 2. National Institute of Marine Geology and Geoecology - marine research infrastructure and the Euxinus integrated regional Black Sea observation and early-warning system; 3. Geological Institute of Romania - Surlari National Geomagnetic Observatory and National lithoteque (the latter as part of the National Museum of Geology); 4. University of Bucharest - Paleomagnetic Laboratory. After national dissemination of the EPOS initiative, other research institutes and companies from the potential

  17. SEE-GRID eInfrastructure for Regional eScience

    NASA Astrophysics Data System (ADS)

    Prnjat, Ognjen; Balaz, Antun; Vudragovic, Dusan; Liabotis, Ioannis; Sener, Cevat; Marovic, Branko; Kozlovszky, Miklos; Neagu, Gabriel

    In the past 6 years, a number of targeted initiatives, funded by the European Commission via its information society and RTD programmes and by Greek infrastructure development actions, have articulated successful regional development actions in South East Europe that can serve as a role model for other international developments. The SEEREN (South-East European Research and Education Networking) initiative, through its two phases, established the SEE segment of the pan-European GÉANT network and successfully connected the research and scientific communities in the region. Currently, the SEE-LIGHT project is working towards establishing a dark-fiber backbone that will interconnect most national research and education networks in the region. On the distributed computing and storage provisioning (i.e., Grid) plane, the SEE-GRID (South-East European GRID e-Infrastructure Development) project, similarly through its two phases, has established a strong human network in the area of scientific computing, has set up a powerful regional Grid infrastructure, and has attracted a number of applications from different fields from countries throughout South-East Europe. The current SEE-GRID-SCI project, ending in April 2010, empowers the regional user communities from the fields of meteorology, seismology and environmental protection in the common use and sharing of the regional e-Infrastructure. Current technical initiatives in formulation are focusing on a set of coordinated actions in the area of HPC and on application fields making use of HPC initiatives. Finally, the current SEERA-EI project brings together policy makers - programme managers from 10 countries in the region. The project aims to establish a communication platform between programme managers, pave the way towards a common e-Infrastructure strategy and vision, and implement concrete actions for common funding of electronic infrastructures at the regional level. The regional vision on establishing an e-Infrastructure

  18. NCI's High Performance Computing (HPC) and High Performance Data (HPD) Computing Platform for Environmental and Earth System Data Science

    NASA Astrophysics Data System (ADS)

    Evans, Ben; Allen, Chris; Antony, Joseph; Bastrakova, Irina; Gohar, Kashif; Porter, David; Pugh, Tim; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2015-04-01

    The National Computational Infrastructure (NCI) has established a powerful and flexible in-situ petascale computational environment to enable both high performance computing and Data-intensive Science across a wide spectrum of national environmental and earth science data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods that support data analysis, and 3) the progress so far in harmonising the underlying data collections for future interdisciplinary research across these large-volume data collections. NCI has established 10+ PBytes of major national and international data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the major Australian national-scale scientific collections), leading research communities, and collaborating overseas organisations. New infrastructures created at NCI mean the data collections are now accessible within an integrated High Performance Computing and Data (HPC-HPD) environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large-scale high-bandwidth Lustre filesystems. The hardware was designed at inception to ensure that it would allow the layered software environment to flexibly accommodate the advancement of future data science. New approaches to software technology and data models have also had to be developed to enable access to these large and exponentially
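
    The following minimal sketch illustrates the kind of in-situ analysis such an HPC-HPD environment is designed for: lazily opening one file of a large gridded collection and reducing it close to the data. The file path and variable name are hypothetical placeholders, not actual NCI catalogue entries.

    ```python
    # Hypothetical sketch of in-situ analysis on a large gridded collection;
    # the path and variable name are invented for illustration only.
    import xarray as xr

    # Open one file of a (hypothetical) climate model collection lazily,
    # so only the subset touched below is read from the filesystem.
    ds = xr.open_dataset("/g/data/example_collection/tas_monthly.nc",
                         chunks={"time": 120})

    # Compute a monthly surface-temperature climatology for an Australian region.
    subset = ds["tas"].sel(lat=slice(-45, -10), lon=slice(110, 155))
    climatology = subset.groupby("time.month").mean("time")
    print(climatology)
    ```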

  19. Developing standards for a national spatial data infrastructure

    USGS Publications Warehouse

    Wortman, Kathryn C.

    1994-01-01

    The concept of a framework for data and information linkages among producers and users, known as a National Spatial Data Infrastructure (NSDI), is built upon four corners: data, technology, institutions, and standards. Standards are paramount to increase the efficiency and effectiveness of the NSDI. Historically, data standards and specifications have been developed with a very limited scope - they were parochial, and even competitive in nature, and promoted the sharing of data and information within only a small community at the expense of more open sharing across many communities. Today, an approach is needed to grow and evolve standards to support open systems and provide consistency and uniformity among data producers. There are several significant ongoing activities in geospatial data standards: transfer or exchange, metadata, and data content. In addition, standards in other areas are under discussion, including data quality, data models, and data collection.

  20. 78 FR 21320 - Unlicensed National Information Infrastructure (U-NII) Devices in the 5 GHz Band

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-10

    ... provide a wide array of high data rate mobile and fixed communications for individuals, businesses, and... FEDERAL COMMUNICATIONS COMMISSION 47 CFR Part 15 [ET Docket No. 13-49; FCC 13-22] Unlicensed National Information Infrastructure (U-NII) Devices in the 5 GHz Band AGENCY: Federal Communications...

  1. Standardized cardiovascular data for clinical research, registries, and patient care: a report from the Data Standards Workgroup of the National Cardiovascular Research Infrastructure project.

    PubMed

    Anderson, H Vernon; Weintraub, William S; Radford, Martha J; Kremers, Mark S; Roe, Matthew T; Shaw, Richard E; Pinchotti, Dana M; Tcheng, James E

    2013-05-07

    Relatively little attention has been focused on standardization of data exchange in clinical research studies and patient care activities. Both are usually managed locally using separate and generally incompatible data systems at individual hospitals or clinics. In the past decade there have been nascent efforts to create data standards for clinical research and patient care data, and to some extent these are helpful in providing a degree of uniformity. Nonetheless, these data standards generally have not been converted into accepted computer-based language structures that could permit reliable data exchange across computer networks. The National Cardiovascular Research Infrastructure (NCRI) project was initiated with a major objective of creating a model framework for standard data exchange in all clinical research, clinical registry, and patient care environments, including all electronic health records. The goal is complete syntactic and semantic interoperability. A Data Standards Workgroup was established to create or identify and then harmonize clinical definitions for a base set of standardized cardiovascular data elements that could be used in this network infrastructure. Recognizing the need for continuity with prior efforts, the Workgroup examined existing data standards sources. A basic set of 353 elements was selected. The NCRI staff then collaborated with the 2 major technical standards organizations in health care, the Clinical Data Interchange Standards Consortium and Health Level Seven International, as well as with staff from the National Cancer Institute Enterprise Vocabulary Services. Modeling and mapping were performed to represent (instantiate) the data elements in appropriate technical computer language structures for endorsement as an accepted data standard for public access and use. Fully implemented, these elements will facilitate clinical research, registry reporting, administrative reporting and regulatory compliance, and patient care
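
    A hypothetical sketch of what a syntactically and semantically interoperable data element might look like in software is shown below: each element couples a harmonized clinical definition with a machine-readable terminology binding. The class, code system, and values are illustrative assumptions and do not reproduce the actual NCRI, CDISC, or HL7 artifacts.

    ```python
    # Hypothetical sketch of a standardized, terminology-bound data element;
    # names, codes, and values are illustrative, not the NCRI artifacts.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class DataElement:
        name: str         # harmonized clinical definition name
        value: float
        units: str        # unit string
        code_system: str  # terminology, e.g. LOINC or SNOMED CT
        code: str         # machine-readable concept identifier

    # A single standardized observation two sites could exchange unambiguously.
    lvef = DataElement(
        name="Left ventricular ejection fraction",
        value=55.0,
        units="%",
        code_system="LOINC",
        code="10230-1",
    )
    print(f"{lvef.name}: {lvef.value}{lvef.units} "
          f"[{lvef.code_system} {lvef.code}]")
    ```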

  2. Green infrastructure.

    DOT National Transportation Integrated Search

    2014-06-01

    The transportation industry has increasingly recognized the vital role sustainability serves in promoting and protecting the transportation infrastructure of the nation. Many state Departments of Transportation have correspondingly increased effo...

  3. Adequate & Equitable U.S. PK-12 Infrastructure: Priority Actions for Systemic Reform. A Report from the Planning for PK-12 School Infrastructure National Initiative

    ERIC Educational Resources Information Center

    Filardo, Mary; Vincent, Jeffrey M.

    2017-01-01

    To formulate a "systems-based" plan to address the PK-12 infrastructure crisis, in 2016, the 21st Century School Fund (21CSF) and the University of California-Berkeley's Center for Cities + Schools (CC+S), in partnership with the National Council on School Facilities and the Center for Green Schools at the U.S. Green Building Council,…

  4. A centralized informatics infrastructure for the National Institute on Drug Abuse Clinical Trials Network.

    PubMed

    Pan, Jeng-Jong; Nahm, Meredith; Wakim, Paul; Cushing, Carol; Poole, Lori; Tai, Betty; Pieper, Carl F

    2009-02-01

    Clinical trial networks (CTNs) were created to provide a sustaining infrastructure for the conduct of multisite clinical trials. As such, they must withstand changes in membership. Centralization of infrastructure including knowledge management, portfolio management, information management, process automation, work policies, and procedures in clinical research networks facilitates consistency and ultimately research. In 2005, the National Institute on Drug Abuse (NIDA) CTN transitioned from a distributed data management model to a centralized informatics infrastructure to support the network's trial activities and administration. We describe the centralized informatics infrastructure and discuss our challenges to inform others considering such an endeavor. During the migration of a clinical trial network from a decentralized to a centralized data center model, descriptive data were captured and are presented here to assess the impact of centralization. We present the framework for the informatics infrastructure and evaluative metrics. The network has decreased the time from last patient-last visit to database lock from an average of 7.6 months to 2.8 months. The average database error rate decreased from 0.8% to 0.2%, with a corresponding decrease in the interquartile range from 0.04%-1.0% before centralization to 0.01%-0.27% after centralization. Centralization has provided the CTN with integrated trial status reporting and the first standards-based public data share. A preliminary cost-benefit analysis showed a 50% reduction in data management cost per study participant over the life of a trial. A single clinical trial network comprising addiction researchers and community treatment programs was assessed. The findings may not be applicable to other research settings. The identified informatics components provide the information and infrastructure needed for our clinical trial network. Post-centralization, data management operations are more efficient and less

  5. A Consensus Action Agenda for Achieving the National Health Information Infrastructure

    PubMed Central

    Yasnoff, William A.; Humphreys, Betsy L.; Overhage, J. Marc; Detmer, Don E.; Brennan, Patricia Flatley; Morris, Richard W.; Middleton, Blackford; Bates, David W.; Fanning, John P.

    2004-01-01

    Background: Improving the safety, quality, and efficiency of health care will require immediate and ubiquitous access to complete patient information and decision support provided through a National Health Information Infrastructure (NHII). Methods: To help define the action steps needed to achieve an NHII, the U.S. Department of Health and Human Services sponsored a national consensus conference in July 2003. Results: Attendees favored a public–private coordination group to guide NHII activities, provide education, share resources, and monitor relevant metrics to mark progress. They identified financial incentives, health information standards, and overcoming a few important legal obstacles as key NHII enablers. Community and regional implementation projects, including consumer access to a personal health record, were seen as necessary to demonstrate comprehensive functional systems that can serve as models for the entire nation. Finally, the participants identified the need for increased funding for research on the impact of health information technology on patient safety and quality of care. Individuals, organizations, and federal agencies are using these consensus recommendations to guide NHII efforts. PMID:15187075

  6. A consensus action agenda for achieving the national health information infrastructure.

    PubMed

    Yasnoff, William A; Humphreys, Betsy L; Overhage, J Marc; Detmer, Don E; Brennan, Patricia Flatley; Morris, Richard W; Middleton, Blackford; Bates, David W; Fanning, John P

    2004-01-01

    Improving the safety, quality, and efficiency of health care will require immediate and ubiquitous access to complete patient information and decision support provided through a National Health Information Infrastructure (NHII). To help define the action steps needed to achieve an NHII, the U.S. Department of Health and Human Services sponsored a national consensus conference in July 2003. Attendees favored a public-private coordination group to guide NHII activities, provide education, share resources, and monitor relevant metrics to mark progress. They identified financial incentives, health information standards, and overcoming a few important legal obstacles as key NHII enablers. Community and regional implementation projects, including consumer access to a personal health record, were seen as necessary to demonstrate comprehensive functional systems that can serve as models for the entire nation. Finally, the participants identified the need for increased funding for research on the impact of health information technology on patient safety and quality of care. Individuals, organizations, and federal agencies are using these consensus recommendations to guide NHII efforts.

  7. SEED: A Suite of Instructional Laboratories for Computer Security Education

    ERIC Educational Resources Information Center

    Du, Wenliang; Wang, Ronghua

    2008-01-01

    The security and assurance of our computing infrastructure has become a national priority. To address this priority, higher education has gradually incorporated the principles of computer and information security into the mainstream undergraduate and graduate computer science curricula. To achieve effective education, learning security principles…

  8. High-Performance Compute Infrastructure in Astronomy: 2020 Is Only Months Away

    NASA Astrophysics Data System (ADS)

    Berriman, B.; Deelman, E.; Juve, G.; Rynge, M.; Vöckler, J. S.

    2012-09-01

    By 2020, astronomy will be awash with as much as 60 PB of public data. Full scientific exploitation of such massive volumes of data will require high-performance computing on server farms co-located with the data. Development of this computing model will be a community-wide enterprise that has profound cultural and technical implications. Astronomers must be prepared to develop environment-agnostic applications that support parallel processing. The community must investigate the applicability and cost-benefit of emerging technologies such as cloud computing to astronomy, and must engage the Computer Science community to develop science-driven cyberinfrastructure such as workflow schedulers and optimizers. We report here the results of collaborations between a science center, IPAC, and a Computer Science research institute, ISI. These collaborations may be considered pathfinders in developing a high-performance compute infrastructure in astronomy. These collaborations investigated two exemplar large-scale science-driver workflow applications: 1) Calculation of an infrared atlas of the Galactic Plane at 18 different wavelengths by placing data from multiple surveys on a common plate scale and co-registering all the pixels; 2) Calculation of an atlas of periodicities present in the public Kepler data sets, which currently contain 380,000 light curves. These products have been generated with two workflow applications, written in C for performance and designed to support parallel processing on multiple environments and platforms, but with different compute resource needs: the Montage image mosaic engine is I/O-bound, and the NASA Star and Exoplanet Database periodogram code is CPU-bound. Our presentation will report cost and performance metrics and lessons-learned for continuing development. Applicability of Cloud Computing: Commercial Cloud providers generally charge for all operations, including processing, transfer of input and output data, and for storage of data
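
    As a minimal illustration of the CPU-bound periodogram workload described above, the following sketch recovers a known period from a synthetic, irregularly sampled light curve using astropy's generic Lomb-Scargle implementation; it is not the NASA Star and Exoplanet Database periodogram code itself.

    ```python
    # Illustrative sketch of a CPU-bound periodogram task on a synthetic
    # light curve (not the NStED periodogram code itself).
    import numpy as np
    from astropy.timeseries import LombScargle

    rng = np.random.default_rng(0)
    t = np.sort(rng.uniform(0, 90, 2000))   # irregular sampling times, days
    true_period = 3.7                       # days
    y = (1.0 + 0.01 * np.sin(2 * np.pi * t / true_period)
         + 0.005 * rng.standard_normal(t.size))

    # Scan a frequency grid and pick the period with the highest power.
    frequency, power = LombScargle(t, y).autopower()
    best_period = 1.0 / frequency[np.argmax(power)]
    print(f"recovered period: {best_period:.2f} d (true: {true_period} d)")
    ```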

  9. The Computational Infrastructure for Geodynamics as a Community of Practice

    NASA Astrophysics Data System (ADS)

    Hwang, L.; Kellogg, L. H.

    2016-12-01

    Computational Infrastructure for Geodynamics (CIG), geodynamics.org, originated in 2005 out of community recognition that the efforts of individual or small groups of researchers to develop scientifically-sound software is impossible to sustain, duplicates effort, and makes it difficult for scientists to adopt state-of-the art computational methods that promote new discovery. As a community of practice, participants in CIG share an interest in computational modeling in geodynamics and work together on open source software to build the capacity to support complex, extensible, scalable, interoperable, reliable, and reusable software in an effort to increase the return on investment in scientific software development and increase the quality of the resulting software. The group interacts regularly to learn from each other and better their practices formally through webinar series, workshops, and tutorials and informally through listservs and hackathons. Over the past decade, we have learned that successful scientific software development requires at a minimum: collaboration between domain-expert researchers, software developers and computational scientists; clearly identified and committed lead developer(s); well-defined scientific and computational goals that are regularly evaluated and updated; well-defined benchmarks and testing throughout development; attention throughout development to usability and extensibility; understanding and evaluation of the complexity of dependent libraries; and managed user expectations through education, training, and support. CIG's code donation standards provide the basis for recently formalized best practices in software development (geodynamics.org/cig/dev/best-practices/). Best practices include use of version control; widely used, open source software libraries; extensive test suites; portable configuration and build systems; extensive documentation internal and external to the code; and structured, human readable input formats.

  10. Technography and Design-Actuality Gap-Analysis of Internet Computer Technologies-Assisted Education: Western Expectations and Global Education

    ERIC Educational Resources Information Center

    Greenhalgh-Spencer, Heather; Jerbi, Moja

    2017-01-01

    In this paper, we provide a design-actuality gap-analysis of the internet infrastructure that exists in developing nations and nations in the global South with the deployed internet computer technologies (ICT)-assisted programs that are designed to use internet infrastructure to provide educational opportunities. Programs that specifically…

  11. INFN-Pisa scientific computation environment (GRID, HPC and Interactive Analysis)

    NASA Astrophysics Data System (ADS)

    Arezzini, S.; Carboni, A.; Caruso, G.; Ciampa, A.; Coscetti, S.; Mazzoni, E.; Piras, S.

    2014-06-01

    The INFN-Pisa Tier2 infrastructure is described, optimized not only for GRID CPU and storage access, but also for more interactive use of the resources, in order to provide good solutions for the final data analysis step. The Data Center, equipped with about 6700 production cores, permits the use of modern analysis techniques realized via advanced statistical tools (like RooFit and RooStats) implemented in multicore systems. In particular, POSIX file storage access integrated with standard SRM access is provided. The unified storage infrastructure, based on GPFS and Xrootd and used both as the SRM data repository and for interactive POSIX access, is therefore described. Such a common infrastructure gives users transparent access to the Tier2 data for their interactive analysis. The organization of a specialized many-core CPU facility devoted to interactive analysis is also described, along with the login mechanism integrated with the INFN-AAI (national INFN infrastructure) to extend site access and use to a geographically distributed community. This infrastructure is also used as a national computing facility by the INFN theoretical community, enabling a synergic use of computing and storage resources. Our Center, initially developed for the HEP community, is now growing and also includes fully integrated HPC resources. In recent years a cluster facility (1000 cores, parallel use via InfiniBand connection) has been installed and managed, and we are now updating this facility, which will provide resources for all the intermediate-level HPC computing needs of the INFN theoretical national community.

  12. Development of a public health nursing data infrastructure.

    PubMed

    Monsen, Karen A; Bekemeier, Betty; P Newhouse, Robin; Scutchfield, F Douglas

    2012-01-01

    An invited group of national public health nursing (PHN) scholars, practitioners, policymakers, and other stakeholders met in October 2010 identifying a critical need for a national PHN data infrastructure to support PHN research. This article summarizes the strengths, limitations, and gaps specific to PHN data and proposes a research agenda for development of a PHN data infrastructure. Future implications are suggested, such as issues related to the development of the proposed PHN data infrastructure and future research possibilities enabled by the infrastructure. Such a data infrastructure has potential to improve accountability and measurement, to demonstrate the value of PHN services, and to improve population health. © 2012 Wiley Periodicals, Inc.

  13. International Development of e-Infrastructures and Data Management Priorities for Global Change Research

    NASA Astrophysics Data System (ADS)

    Allison, M. L.; Gurney, R. J.

    2015-12-01

    An e-infrastructure that supports data-intensive, multidisciplinary research is needed to accelerate the pace of science to address 21st century global change challenges. Data discovery, access, sharing and interoperability collectively form core elements of an emerging shared vision of e-infrastructure for scientific discovery. The pace and breadth of change in information management across the data lifecycle means that no one country or institution can unilaterally provide the leadership and resources required to use data and information effectively, or needed to support a coordinated, global e-infrastructure. An 18-month-long process involving ~120 experts in domain, computer, and social sciences from more than a dozen countries resulted in a formal set of recommendations to the Belmont Forum collaboration of national science funding agencies and others on what they are best suited to implement for development of an e-infrastructure in support of global change research, including: adoption of data principles that promote a global, interoperable e-infrastructure; establishment of information and data officers for coordination of global data management and e-infrastructure efforts; promotion of effective data planning; determination of best practices; and development of a cross-disciplinary training curriculum on data management and curation. The Belmont Forum is ideally poised to play a vital and transformative leadership role in establishing a sustained human and technical international data e-infrastructure to support global change research. The international collaborative process that went into forming these recommendations is contributing to national governments, funding agencies and international bodies working together to execute them.

  14. @neurIST: infrastructure for advanced disease management through integration of heterogeneous data, computing, and complex processing services.

    PubMed

    Benkner, Siegfried; Arbona, Antonio; Berti, Guntram; Chiarini, Alessandro; Dunlop, Robert; Engelbrecht, Gerhard; Frangi, Alejandro F; Friedrich, Christoph M; Hanser, Susanne; Hasselmeyer, Peer; Hose, Rod D; Iavindrasana, Jimison; Köhler, Martin; Iacono, Luigi Lo; Lonsdale, Guy; Meyer, Rodolphe; Moore, Bob; Rajasekaran, Hariharan; Summers, Paul E; Wöhrer, Alexander; Wood, Steven

    2010-11-01

    The increasing volume of data describing human disease processes and the growing complexity of understanding, managing, and sharing such data presents a huge challenge for clinicians and medical researchers. This paper presents the @neurIST system, which provides an infrastructure for biomedical research while aiding clinical care, by bringing together heterogeneous data and complex processing and computing services. Although @neurIST targets the investigation and treatment of cerebral aneurysms, the system's architecture is generic enough that it could be adapted to the treatment of other diseases. Innovations in @neurIST include confining the patient data pertaining to aneurysms inside a single environment that offers clinicians the tools to analyze and interpret patient data and make use of knowledge-based guidance in planning their treatment. Medical researchers gain access to a critical mass of aneurysm related data due to the system's ability to federate distributed information sources. A semantically mediated grid infrastructure ensures that both clinicians and researchers are able to seamlessly access and work on data that is distributed across multiple sites in a secure way in addition to providing computing resources on demand for performing computationally intensive simulations for treatment planning and research.

  15. Surface transportation : clear federal role and criteria-based selection process could improve three national and regional infrastructure programs.

    DOT National Transportation Integrated Search

    2009-02-01

    To help meet increasing transportation demands, the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU) created three programs to invest federal funds in national and regional transportation infrastructur...

  16. Software Infrastructure for Computer-aided Drug Discovery and Development, a Practical Example with Guidelines.

    PubMed

    Moretti, Loris; Sartori, Luca

    2016-09-01

    In the field of Computer-Aided Drug Discovery and Development (CADDD) the proper software infrastructure is essential for everyday investigations. The creation of such an environment should be carefully planned and implemented with certain features in order to be productive and efficient. Here we describe a solution to integrate standard computational services into a functional unit that empowers modelling applications for drug discovery. This system allows users with various levels of expertise to run in silico experiments automatically and without the burden of file formatting for different software, managing the actual computation, keeping track of the activities and graphical rendering of the structural outcomes. To showcase the potential of this approach, the performance of five different docking programs on an HIV-1 protease test set is presented. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
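
    The following sketch suggests the kind of wrapper such an infrastructure is assembled from: launching one docking engine through its command line and harvesting a score, so that higher layers can queue jobs and render results uniformly. The binary name, flags, and output format are invented placeholders; each real docking engine has its own interface.

    ```python
    # Hypothetical wrapper around a docking engine's command-line interface.
    # Binary name, flags, and the "BEST_SCORE" output line are placeholders.
    import subprocess
    from pathlib import Path

    def run_docking(engine: str, receptor: Path, ligand: Path,
                    out_dir: Path) -> float:
        """Launch a docking engine as a subprocess and parse its best score."""
        out_dir.mkdir(parents=True, exist_ok=True)
        result = subprocess.run(
            [engine, "--receptor", str(receptor), "--ligand", str(ligand),
             "--out", str(out_dir / "poses.sdf")],
            capture_output=True, text=True, check=True,
        )
        # Placeholder parsing: assume the engine prints "BEST_SCORE <value>".
        for line in result.stdout.splitlines():
            if line.startswith("BEST_SCORE"):
                return float(line.split()[1])
        raise RuntimeError(f"{engine} produced no score")
    ```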

  17. 75 FR 57079 - NASA Advisory Council; Information Technology Infrastructure Committee; Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-17

    ...; Information Technology Infrastructure Committee; Meeting AGENCY: National Aeronautics and Space Administration... Information Technology Infrastructure Committee of the NASA Advisory Council (NAC). DATES: Tuesday, September... Information Technology Infrastructure Committee, National Aeronautics and Space Administration Headquarters...

  18. Hydrogeology of the Old Faithful area, Yellowstone National Park, Wyoming, and its relevance to natural resources and infrastructure

    USGS Publications Warehouse

    ,; Foley, Duncan; Fournier, Robert O.; Heasler, Henry P.; Hinckley, Bern; Ingebritsen, Steven E.; Lowenstern, Jacob B.; Susong, David D.

    2014-01-01

    There are many documented examples at YNP and elsewhere where human infrastructure and natural thermal features have negatively affected each other. Unless action is taken, human conflicts with the Old Faithful hydrothermal system are likely to increase over the coming years. This is partly because of the increase in park visitation over the past decades, but also because the interval between eruptions of Old Faithful has increased, lengthening the time spent (and services needed) for each visitor at Old Faithful. To avoid an increase in visitor impacts, the National Park Service should consider 2 alternate strategies to accommodate people, vehicles, and services in the Upper Geyser Basin, such as shuttle services from staging (parking and dining) areas with little or no recent hydrothermal activity. We further suggest that YNP consider a zone system to guide maintenance and development of infrastructure in the immediate Old Faithful area. A “red” zone includes hydrothermally active land where new development is discouraged and existing infrastructure is modified with great care. An outer “green” zone represents areas where cooler temperatures and less hydrothermal flow are thought to exist, and where development and maintenance could proceed as occurs elsewhere in the park. An intermediate “yellow” zone would require preliminary assessment of subsurface temperatures and gas concentrations to assess suitability for infrastructure development. The panel recommends that YNP management follow the lead of the National Park System Advisory Board Science Committee (2012) by applying the “precautionary principle” when making decisions regarding the interaction of hydrothermal phenomena and park infrastructure in the Old Faithful area and other thermal areas within YNP.

  19. Security Economics and Critical National Infrastructure

    NASA Astrophysics Data System (ADS)

    Anderson, Ross; Fuloria, Shailendra

    There has been considerable effort and expenditure since 9/11 on the protection of ‘Critical National Infrastructure' against online attack. This is commonly interpreted to mean preventing online sabotage against utilities such as electricity, oil and gas, water, and sewage - including pipelines, refineries, generators, storage depots and transport facilities such as tankers and terminals. A consensus is emerging that the protection of such assets is more a matter of business models and regulation - in short, of security economics - than of technology. We describe the problems, and the state of play, in this paper. Industrial control systems operate in a different world from systems previously studied by security economists; we find the same issues (lock-in, externalities, asymmetric information and so on) but in different forms. Lock-in is physical, rather than based on network effects, while the most serious externalities result from correlated failure, whether from cascade failures, common-mode failures or simultaneous attacks. There is also an interesting natural experiment happening, in that the USA is regulating cyber security in the electric power industry, but not in oil and gas, while the UK is not regulating at all but rather encouraging industry's own efforts. Some European governments are intervening, while others are leaving cybersecurity entirely to plant owners to worry about. We already note some perverse effects of the U.S. regulation regime as companies game the system, to the detriment of overall dependability.

  20. FIN-EPOS - Finnish national initiative of the European Plate Observing System: Bringing Finnish solid Earth infrastructures into EPOS

    NASA Astrophysics Data System (ADS)

    Vuorinen, Tommi; Korja, Annakaisa

    2017-04-01

    The FIN-EPOS consortium is a joint community of Finnish national research institutes tasked with operating and maintaining solid-earth geophysical and geological observatories and laboratories in Finland. These national research infrastructures (NRIs) seek to join the EPOS research infrastructure (EPOS RI) and further pursue Finland's participation as a founding member in EPOS ERIC (European Research Infrastructure Consortium). Current partners of FIN-EPOS are the University of Helsinki (UH), the University of Oulu (UO), the Finnish Geospatial Research Institute (FGI) of the National Land Survey (NLS), the Finnish Meteorological Institute (FMI), the Geological Survey of Finland (GTK), CSC - IT Center for Science, and MIKES Metrology at VTT Technical Research Centre of Finland Ltd. The consortium is hosted by the Institute of Seismology, UH (ISUH). The primary purpose of the consortium is to act as a coordinating body between the various NRIs and the EPOS RI. FIN-EPOS engages in planning and development of the national EPOS RI and will provide support in the EPOS implementation phase (IP) for the partner NRIs. FIN-EPOS also promotes awareness of EPOS in Finland and is open to new partner NRIs that would benefit from participating in EPOS. The consortium additionally seeks to advance solid Earth science education, technologies and innovations in Finland and is actively engaging in Nordic co-operation and collaboration of solid Earth RIs. The main short-term objective of FIN-EPOS is to make Finnish geoscientific data provided by the NRIs interoperable with the Thematic Core Services (TCS) in the EPOS IP. Consortium partners commit to applying and following metadata and data format standards provided by EPOS. FIN-EPOS will also provide a national Finnish-language web portal where users are identified and their user rights for EPOS resources are defined.

  1. 75 FR 21011 - Critical Infrastructure Partnership Advisory Council

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-22

    ... DEPARTMENT OF HOMELAND SECURITY [Docket No. DHS-2010-0032] Critical Infrastructure Partnership... Infrastructure Partnership Advisory Council (CIPAC) charter renewal. SUMMARY: The Department of Homeland Security... and Outreach Division, Office of Infrastructure Protection, National Protection and Programs...

  2. National Strategic Computing Initiative Strategic Plan

    DTIC Science & Technology

    2016-07-01

    A.6 National Nanotechnology Initiative ... Big Data (BD_SSG): https://www.nitrd.gov/nitrdgroups/index.php?title=Big_Data_(BD_SSG) ... National Nanotechnology Initiative: http://www.nano.gov ... Precision ... computing. While not limited to neuromorphic technologies, the National Nanotechnology Initiative's first Grand Challenge seeks to achieve brain

  3. JINR cloud infrastructure evolution

    NASA Astrophysics Data System (ADS)

    Baranov, A. V.; Balashov, N. A.; Kutovskiy, N. A.; Semenov, R. N.

    2016-09-01

    To fulfil JINR commitments in different national and international projects related to the use of modern information technologies such as cloud and grid computing, as well as to provide a modern tool for JINR users for their scientific research, a cloud infrastructure was deployed at the Laboratory of Information Technologies of the Joint Institute for Nuclear Research. OpenNebula software was chosen as the cloud platform. Initially it was set up in a simple configuration with a single front-end host and a few cloud nodes. Some custom development was done to tune the JINR cloud installation to local needs: a web form in the cloud web interface for resource requests, a menu item with cloud utilization statistics, user authentication via Kerberos, and a custom driver for OpenVZ containers. Because of high demand for the cloud service and over-utilization of its resources, it was re-designed to cover users' increasing needs in capacity, availability and reliability. Recently a new cloud instance has been deployed in a high-availability configuration with a distributed network file system and additional computing power.

  4. A Comprehensive and Cost-Effective Computer Infrastructure for K-12 Schools

    NASA Technical Reports Server (NTRS)

    Warren, G. P.; Seaton, J. M.

    1996-01-01

    Since 1993, NASA Langley Research Center has been developing and implementing a low-cost Internet connection model, including system architecture, training, and support, to provide Internet access for an entire network of computers. This infrastructure allows local area networks which exceed 50 machines per school to independently access the complete functionality of the Internet by connecting to a central site, using state-of-the-art commercial modem technology, through a single standard telephone line. By locating high-cost resources at this central site and sharing these resources and their costs among the school districts throughout a region, a practical, efficient, and affordable infrastructure for providing scalable Internet connectivity has been developed. As the demand for faster Internet access grows, the model has a simple expansion path that eliminates the need to replace major system components and re-train personnel. Observations of Internet usage within an environment, particularly school classrooms, have shown that after an initial period of 'surfing,' the Internet traffic becomes repetitive. By automatically storing requested Internet information on a high-capacity networked disk drive at the local site (network-based disk caching), then updating this information only when it changes, well over 80 percent of the Internet traffic that leaves a location can be eliminated by retrieving the information from the local disk cache.
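
    The caching behaviour described above can be sketched in a few lines of modern Python: repeat requests are served from a shared disk cache, and the origin server is consulted only with a conditional request that returns full content when something has changed. The revalidation mechanism (ETags) and paths here are illustrative assumptions, not the 1996 implementation.

    ```python
    # Minimal sketch of network-based disk caching with conditional refresh;
    # cache location and ETag-based revalidation are illustrative assumptions.
    import hashlib
    import urllib.error
    import urllib.request
    from pathlib import Path

    CACHE_DIR = Path("/var/cache/school-proxy")   # shared networked disk

    def fetch(url: str) -> bytes:
        CACHE_DIR.mkdir(parents=True, exist_ok=True)
        key = hashlib.sha256(url.encode()).hexdigest()
        body_file = CACHE_DIR / key
        etag_file = CACHE_DIR / (key + ".etag")

        request = urllib.request.Request(url)
        if body_file.exists() and etag_file.exists():
            # Conditional request: origin replies 304 if nothing changed.
            request.add_header("If-None-Match", etag_file.read_text())
        try:
            with urllib.request.urlopen(request) as response:
                body = response.read()
                etag = response.headers.get("ETag")
                body_file.write_bytes(body)       # refresh the cached copy
                if etag:
                    etag_file.write_text(etag)
                return body
        except urllib.error.HTTPError as err:
            if err.code == 304 and body_file.exists():
                return body_file.read_bytes()     # unchanged: serve from cache
            raise
    ```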

  5. A cyber infrastructure for the SKA Telescope Manager

    NASA Astrophysics Data System (ADS)

    Barbosa, Domingos; Barraca, João. P.; Carvalho, Bruno; Maia, Dalmiro; Gupta, Yashwant; Natarajan, Swaminathan; Le Roux, Gerhard; Swart, Paul

    2016-07-01

    The Square Kilometre Array Telescope Manager (SKA TM) will be responsible for assisting the SKA Operations and Observation Management, carrying out system diagnosis and collecting Monitoring and Control (M&C) data from the SKA subsystems and components. To provide adequate compute resources, scalability, operation continuity and high availability, as well as strict Quality of Service, the TM cyber-infrastructure (embodied in the Local Infrastructure - LINFRA) consists of COTS hardware and infrastructural software (for example: server monitoring software, host operating system, virtualization software, device firmware), providing a specially tailored Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) solution. The TM infrastructure provides services in the form of computational power, software-defined networking, power, storage abstractions, and high-level, state-of-the-art IaaS and PaaS management interfaces. This cyber platform will be tailored to each of the two SKA Phase 1 telescope instances (SKA_MID in South Africa and SKA_LOW in Australia), each presenting different computational and storage infrastructures and conditioned by location. This cyber platform will provide a compute model enabling TM to manage the deployment and execution of its multiple components (observation scheduler, proposal submission tools, M&C components, forensic tools, several databases, etc.). In this sense, the TM LINFRA is primarily focused on the provision of isolated instances, mostly resorting to virtualization technologies, while defaulting to bare hardware if specifically required due to performance, security, availability, or other requirements.

  6. Implementation Issues of Virtual Desktop Infrastructure and Its Case Study for a Physician's Round at Seoul National University Bundang Hospital.

    PubMed

    Yoo, Sooyoung; Kim, Seok; Kim, Taegi; Kim, Jon Soo; Baek, Rong-Min; Suh, Chang Suk; Chung, Chin Youb; Hwang, Hee

    2012-12-01

    The cloud computing-based virtual desktop infrastructure (VDI) allows access to computing environments with no limitations in terms of time or place such that it can permit the rapid establishment of a mobile hospital environment. The objective of this study was to investigate the empirical issues to be considered when establishing a virtual mobile environment using VDI technology in a hospital setting and to examine the utility of the technology with an Apple iPad during a physician's rounds as a case study. Empirical implementation issues were derived from a 910-bed tertiary national university hospital that recently launched a VDI system. During the physicians' rounds, we surveyed patient satisfaction levels with the VDI-based mobile consultation service with the iPad and the relationship between these levels of satisfaction and hospital revisits, hospital recommendations, and the hospital brand image. Thirty-five inpatients (including their next-of-kin) and seven physicians participated in the survey. Implementation issues pertaining to the VDI system arose with regard to the high-availability system architecture, the wireless network infrastructure, and the screen resolution of the system. Other issues were related to privacy and security, mobile device management, and user education. When the system was used in rounds, patients and their next-of-kin expressed high satisfaction levels, and a positive relationship was noted as regards patients' decisions to revisit the hospital and whether the use of the VDI system improved the brand image of the hospital. Mobile hospital environments have the potential to benefit both physicians and patients. The issues related to the implementation of the VDI system discussed here should be examined in advance for its successful adoption and implementation.

  7. The computing and data infrastructure to interconnect EEE stations

    NASA Astrophysics Data System (ADS)

    Noferini, F.; EEE Collaboration

    2016-07-01

    The Extreme Energy Event (EEE) experiment is devoted to the search for high-energy cosmic rays through a network of telescopes installed in about 50 high schools distributed throughout the Italian territory. This project requires a peculiar data management infrastructure to collect data registered in stations very far from each other and to allow a coordinated analysis. Such an infrastructure is realized at INFN-CNAF, which operates a Cloud facility based on the OpenStack open-source Cloud framework and provides Infrastructure as a Service (IaaS) for its users. In 2014 EEE started to use it for collecting, monitoring and reconstructing the data acquired in all the EEE stations. For the synchronization between the stations and the INFN-CNAF infrastructure we used BitTorrent Sync, free peer-to-peer software designed to optimize data synchronization between distributed nodes. All data folders are synchronized with the central repository in real time to allow an immediate reconstruction of the data and their publication in a monitoring webpage. We present the architecture and the functionalities of this data management system, which provides a flexible environment for the specific needs of the EEE project.
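
    BitTorrent Sync itself is configuration-driven, so the synchronization step needs no code; the sketch below illustrates only the central-side ingestion that the abstract describes, reacting to newly synchronized files and queuing them for reconstruction. It uses the third-party watchdog package; the /data/eee/stations path and the .bin suffix are assumptions for the example, not details from the paper.

        import time
        from pathlib import Path
        from queue import Queue

        from watchdog.events import FileSystemEventHandler
        from watchdog.observers import Observer

        reconstruction_queue: Queue = Queue()

        class NewDataHandler(FileSystemEventHandler):
            """Queue data files that appear in the synchronized station folders."""
            def on_created(self, event):
                if not event.is_directory and event.src_path.endswith(".bin"):
                    reconstruction_queue.put(Path(event.src_path))
                    print("queued for reconstruction:", event.src_path)

        observer = Observer()
        observer.schedule(NewDataHandler(), "/data/eee/stations", recursive=True)
        observer.start()
        try:
            while True:
                time.sleep(1)  # reconstruction workers would drain the queue here
        finally:
            observer.stop()
            observer.join()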

  8. National connected vehicle field infrastructure footprint analysis.

    DOT National Transportation Integrated Search

    2014-06-01

    The fundamental premise of the connected vehicle initiative is that enabling wireless connectivity among vehicles, the infrastructure, and mobile devices will bring about transformative changes in safety, mobility, and the environmental impacts in th...

  9. The future of infrastructure security

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia, Pablo; Turnley, Jessica Glicken; Parrott, Lori K.

    2013-05-01

    Sandia National Laboratories hosted a workshop on the future of infrastructure security on February 27-28, 2013, in Albuquerque, NM. The 17 participants came from backgrounds as diverse as federal policy, the insurance industry, infrastructure management, and technology development. The purpose of the workshop was to surface key issues, identify directions forward, and lay groundwork for cross-sectoral and cross-disciplinary collaborations. The workshop addressed issues such as the problem space (what is included in infrastructure problems?), the general types of threats to infrastructure (such as acute or chronic, system-inherent or exogenously imposed) and definitions of secure and resilient infrastructures. The workshop concluded with a consideration of stakeholders and players in the infrastructure world, and identification of specific activities that could be undertaken by the Department of Homeland Security (DHS) and other players.

  10. EPA Research Highlights: EPA Studies Aging Water Infrastructure

    EPA Science Inventory

    The nation's extensive water infrastructure has the capacity to treat, store, and transport trillions of gallons of water and wastewater per day through millions of miles of pipelines. However, some infrastructure components are more than 100 years old, and as the infrastructure ...

  11. Computing and data processing

    NASA Technical Reports Server (NTRS)

    Smarr, Larry; Press, William; Arnett, David W.; Cameron, Alastair G. W.; Crutcher, Richard M.; Helfand, David J.; Horowitz, Paul; Kleinmann, Susan G.; Linsky, Jeffrey L.; Madore, Barry F.

    1991-01-01

    The applications of computers and data processing to astronomy are discussed. Among the topics covered are the emerging national information infrastructure, workstations and supercomputers, supertelescopes, digital astronomy, astrophysics in a numerical laboratory, community software, archiving of ground-based observations, dynamical simulations of complex systems, plasma astrophysics, and the remote control of fourth dimension supercomputers.

  12. Toolkit of Available EPA Green Infrastructure Modeling ...

    EPA Pesticide Factsheets

    This webinar will present a toolkit consisting of five EPA green infrastructure models and tools, along with communication material. This toolkit can be used as a teaching and quick reference resource for use by planners and developers when making green infrastructure implementation decisions. It can also be used for low impact development design competitions. Models and tools included: Green Infrastructure Wizard (GIWiz), Watershed Management Optimization Support Tool (WMOST), Visualizing Ecosystem Land Management Assessments (VELMA) Model, Storm Water Management Model (SWMM), and the National Stormwater Calculator (SWC).
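
    Of the tools listed, SWMM is also scriptable: EPA distributes it as a simulation engine, and the community-maintained pyswmm wrapper drives it from Python. A minimal sketch, assuming an existing SWMM input file named model.inp:

        from pyswmm import Simulation  # pip install pyswmm

        # Step a SWMM run to completion; "model.inp" is a placeholder for a real
        # input file describing the drainage network and rainfall.
        with Simulation("model.inp") as sim:
            for _ in sim:
                pass  # intermediate state could be sampled here at each routing step
            print("run finished at", sim.current_time)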

  13. Design and implementation of a reliable and cost-effective cloud computing infrastructure: the INFN Napoli experience

    NASA Astrophysics Data System (ADS)

    Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.

    2012-12-01

    Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost-effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at the INFN-Naples ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of two main subsystems: a clustered storage solution, built on top of disk servers running the GlusterFS file system, and a virtual machine execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, thereby providing live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.
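
    As one illustration of the kind of management scripting the authors mention, the sketch below polls a hypervisor with the standard virsh client and restarts any libvirt domains found shut off. The hypervisor URI and the blanket restart policy are assumptions for the example; the actual INFN-Naples scripts are not published in this abstract.

        import subprocess

        HYPERVISOR = "qemu+ssh://hv01.example.org/system"  # hypothetical host

        def shut_off_domains(uri: str) -> list[str]:
            """Names of libvirt domains currently shut off on the hypervisor."""
            out = subprocess.run(
                ["virsh", "-c", uri, "list", "--name", "--state-shutoff"],
                capture_output=True, text=True, check=True).stdout
            return [name for name in out.splitlines() if name.strip()]

        for domain in shut_off_domains(HYPERVISOR):
            # Automated restart after a failure, as the abstract describes.
            subprocess.run(["virsh", "-c", HYPERVISOR, "start", domain], check=True)
            print("restarted", domain)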

  14. Climate simulations and services on HPC, Cloud and Grid infrastructures

    NASA Astrophysics Data System (ADS)

    Cofino, Antonio S.; Blanco, Carlos; Minondo Tshuma, Antonio

    2017-04-01

    Cloud, Grid and High Performance Computing have changed the accessibility and availability of computing resources for Earth Science research communities, especially for the climate community. These paradigms are modifying the way climate applications are executed. By using these technologies the number, variety and complexity of experiments and resources are increasing substantially. But, although computational capacity is increasing, the traditional applications and tools used by the community are not adequate to manage this large volume and variety of experiments and computing resources. In this contribution, we evaluate the challenges of running climate simulations and services on Grid, Cloud and HPC infrastructures and how to tackle them. The Grid and Cloud infrastructures provided by EGI's VOs (esr, earth.vo.ibergrid and fedcloud.egi.eu) will be evaluated, as well as HPC resources from the PRACE infrastructure and institutional clusters. To address those challenges, solutions using the DRM4G framework will be shown. DRM4G provides a good framework to manage a large volume and variety of computing resources for climate experiments. This work has been supported by the Spanish National R&D Plan under projects WRF4G (CGL2011-28864), INSIGNIA (CGL2016-79210-R) and MULTI-SDM (CGL2015-66583-R); the IS-ENES2 project from the 7FP of the European Commission (grant agreement no. 312979); the European Regional Development Fund (ERDF); and the Programa de Personal Investigador en Formación Predoctoral from Universidad de Cantabria and Government of Cantabria.

  15. Development Model for Research Infrastructures

    NASA Astrophysics Data System (ADS)

    Wächter, Joachim; Hammitzsch, Martin; Kerschke, Dorit; Lauterjung, Jörn

    2015-04-01

    Research infrastructures (RIs) are platforms integrating facilities, resources and services used by the research communities to conduct research and foster innovation. RIs include scientific equipment, e.g., sensor platforms, satellites or other instruments, but also scientific data, sample repositories or archives. E-infrastructures on the other hand provide the technological substratum and middleware to interlink distributed RI components with computing systems and communication networks. The resulting platforms provide the foundation for the design and implementation of RIs and play an increasing role in the advancement and exploitation of knowledge and technology. RIs are regarded as essential to achieving and maintaining excellence in research and innovation, crucial for the European Research Area (ERA). The implementation of RIs has to be considered a long-term, complex development process, often over a period of 10 or more years. The ongoing construction of Spatial Data Infrastructures (SDIs) provides a good example of the general complexity of infrastructure development processes, especially in system-of-systems environments. A set of directives issued by the European Commission provided a framework of guidelines for the implementation processes, addressing the relevant content and the encoding of data as well as the standards for service interfaces and the integration of these services into networks. Additionally, a time schedule for the overall construction process has been specified. As a result, this process advances with strong participation of member states and responsible organisations. Today, SDIs provide the operational basis for new digital business processes in both national and local authorities. Currently, the development of integrated RIs in Earth and Environmental Sciences is characterised by the following properties: • A high number of parallel activities on European and national levels with numerous institutes and organisations participating

  16. 75 FR 60771 - Critical Infrastructure Partnership Advisory Council (CIPAC)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-01

    ... DEPARTMENT OF HOMELAND SECURITY [Docket No. DHS-2010-0080] Critical Infrastructure Partnership..., Section Chief Partnership Programs, Partnership and Outreach Division, Office of Infrastructure Protection... Outreach Division, Office of Infrastructure Protection, National Protection and Programs Directorate...

  17. New Geodetic Infrastructure for Australia: The NCRIS / AuScope Geospatial Component

    NASA Astrophysics Data System (ADS)

    Tregoning, P.; Watson, C. S.; Coleman, R.; Johnston, G.; Lovell, J.; Dickey, J.; Featherstone, W. E.; Rizos, C.; Higgins, M.; Priebbenow, R.

    2009-12-01

    In November 2006, the Australian Federal Government announced AUS$15.8M in funding for geospatial research infrastructure through the National Collaborative Research Infrastructure Strategy (NCRIS). Funded within a broader capability area titled ‘Structure and Evolution of the Australian Continent’, NCRIS has provided a significant investment across Earth imaging, geochemistry, numerical simulation and modelling, the development of a virtual core library, and geospatial infrastructure. Known collectively as AuScope (www.auscope.org.au), this capability area has brought together Australia’s leading Earth scientists to decide upon the most pressing scientific issues and infrastructure needs for studying Earth systems and their impact on the Australian continent. At the same time, the investment in geospatial infrastructure offers the opportunity to raise Australian geodetic science capability to the highest international level into the future. The geospatial component of AuScope builds on the AUS$15.8M of direct funding through the NCRIS process with significant in-kind and co-investment from universities and State/Territory and Federal government departments. The infrastructure to be acquired includes an FG5 absolute gravimeter, three gPhone relative gravimeters, three 12.1 m radio telescopes for geodetic VLBI, a continent-wide network of continuously operating geodetic-quality GNSS receivers, a trial of a mobile SLR system and access to updated cluster computing facilities. We present an overview of the AuScope geospatial capability, review the current status of the infrastructure procurement and discuss some examples of the scientific research that will utilise the new geospatial infrastructure.

  18. Jali - Unstructured Mesh Infrastructure for Multi-Physics Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garimella, Rao V; Berndt, Markus; Coon, Ethan

    2017-04-13

    Jali is a parallel unstructured mesh infrastructure library designed for use by multi-physics simulations. It supports 2D and 3D arbitrary polyhedral meshes distributed over hundreds to thousands of nodes. Jali can read and write Exodus II meshes, along with fields and sets on the mesh; support for other formats is partially implemented or planned. Jali is built on MSTK (https://github.com/MeshToolkit/MSTK), an open source general purpose unstructured mesh infrastructure library from Los Alamos National Laboratory. While it has been made to work with other mesh frameworks such as MOAB and STKmesh in the past, support for maintaining the interface to these frameworks has been suspended for now. Jali supports distributed as well as on-node parallelism. Support for on-node parallelism is through direct use of the mesh in multi-threaded constructs or through the use of "tiles", which are submeshes or sub-partitions of a partition destined for a compute node.

  19. Benchmarking infrastructure for mutation text mining

    PubMed Central

    2014-01-01

    Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data, and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600
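
    As an illustration of computing a metric directly with SPARQL, the sketch below counts matched and total system annotations with rdflib and derives precision. The ex: vocabulary is an invented stand-in for the project's published OWL ontology, and annotations.ttl is a placeholder file name.

        from rdflib import Graph

        g = Graph()
        g.parse("annotations.ttl", format="turtle")  # gold + system annotations

        PREFIX = "PREFIX ex: <http://example.org/mutation#>\n"

        def count(select: str) -> int:
            # Run a single-value COUNT query and return it as an int.
            row = next(iter(g.query(PREFIX + select)))
            return int(row[0])

        total = count("SELECT (COUNT(?a) AS ?n) WHERE { ?a a ex:SystemAnnotation }")
        matched = count("SELECT (COUNT(?a) AS ?n) WHERE "
                        "{ ?a a ex:SystemAnnotation ; ex:matches ?gold }")
        print("precision =", matched / total if total else float("nan"))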

  20. Benchmarking infrastructure for mutation text mining.

    PubMed

    Klein, Artjom; Riazanov, Alexandre; Hindle, Matthew M; Baker, Christopher Jo

    2014-02-25

    Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data, and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents that can support mutation grounding and mutation impact extraction experiments. We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption.

  1. Data Center Consolidation: A Step towards Infrastructure Clouds

    NASA Astrophysics Data System (ADS)

    Winter, Markus

    Application service providers face enormous challenges and rising costs in managing and operating a growing number of heterogeneous system and computing landscapes. Limitations of traditional computing environments force IT decision-makers to reorganize computing resources within the data center, as continuous growth leads to an inefficient utilization of the underlying hardware infrastructure. This paper discusses a way for infrastructure providers to improve data center operations based on the findings of a case study on resource utilization of very large business applications and presents an outlook beyond server consolidation endeavors, transforming corporate data centers into compute clouds.

  2. 76 FR 20995 - Critical Infrastructure Partnership Advisory Council (CIPAC)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-14

    ... DEPARTMENT OF HOMELAND SECURITY [Docket No. DHS-2011-0028] Critical Infrastructure Partnership... Critical Infrastructure Partnership Advisory Council (CIPAC) by notice published in the Federal Register... Infrastructure Protection, National Protection and Programs Directorate, U.S. Department of Homeland Security...

  3. Critical Infrastructure for Ocean Research and Societal Needs in 2030

    NASA Astrophysics Data System (ADS)

    Glickson, D.; Barron, E. J.; Fine, R. A.; Bellingham, J. G.; Boss, E.; Boyle, E. A.; Edwards, M.; Johnson, K. S.; Kelley, D. S.; Kite-Powell, H.; Ramberg, S. E.; Rudnick, D. L.; Schofield, O.; Tamburri, M.; Wiebe, P. H.; Wright, D. J.; Committee on an Ocean Infrastructure Strategy for U.S. Ocean Research in 2030

    2011-12-01

    At the request of the Subcommittee on Ocean Science and Technology, an expert committee was convened by the National Research Council to identify major research questions anticipated to be at the forefront of ocean science in 2030, define categories of infrastructure that should be included in planning, provide advice on criteria and processes that could be used to set priorities, and recommend ways to maximize the value of investments in ocean infrastructure. The committee identified 32 future ocean research questions in four themes: enabling stewardship of the environment, protecting life and property, promoting economic vitality, and increasing fundamental scientific understanding. Many of the questions reflect challenging, multidisciplinary science questions that are clearly relevant now and are likely to take decades to solve. U.S. ocean research will require a growing suite of ocean infrastructure for a range of activities, such as high quality, sustained time series observations and autonomous monitoring at a broad range of spatial and temporal scales. A coordinated national plan for making future strategic investments will be needed and should be based upon known priorities and reviewed every 5-10 years. After assessing trends in ocean infrastructure and technology development, the committee recommended implementing a comprehensive, long-term research fleet plan in order to retain access to the sea; continuing U.S. capability to access fully and partially ice-covered seas; supporting innovation, particularly the development of biogeochemical sensors; enhancing computing and modeling capacity and capability; establishing broadly accessible data management facilities; and increasing interdisciplinary education and promoting a technically-skilled workforce. They also recommended that development, maintenance, or replacement of ocean research infrastructure assets should be prioritized in terms of societal benefit. Particular consideration should be given to

  4. The Virtual Geophysics Laboratory (VGL): Scientific Workflows Operating Across Organizations and Across Infrastructures

    NASA Astrophysics Data System (ADS)

    Cox, S. J.; Wyborn, L. A.; Fraser, R.; Rankine, T.; Woodcock, R.; Vote, J.; Evans, B.

    2012-12-01

    The Virtual Geophysics Laboratory (VGL) is a web portal that provides geoscientists with an integrated online environment that: seamlessly accesses geophysical and geoscience data services from the AuScope national geoscience information infrastructure; loosely couples these data to a variety of geoscience software tools; and provides large-scale processing facilities via cloud computing. VGL is a collaboration between CSIRO, Geoscience Australia, National Computational Infrastructure, Monash University, Australian National University and the University of Queensland. VGL provides a distributed system whereby a user can enter an online virtual laboratory to seamlessly connect to OGC web services for geoscience data. The data is supplied in open standards formats using international standards like GeoSciML. A VGL user works with a web mapping interface to discover and filter the data sources, using spatial and attribute filters to define a subset. Once the data is selected, the user is not required to download it. VGL collates the service query information for later use in the processing workflow, where it is staged directly to the computing facilities. The combination of deferred data download and access to cloud computing enables VGL users to access their data at higher resolutions and to undertake larger-scale inversions, more complex models and simulations than their own local computing facilities might allow. Inside the Virtual Geophysics Laboratory, the user has access to a library of existing models, complete with exemplar workflows for specific scientific problems based on those models. For example, the user can load a geological model published by Geoscience Australia, apply a basic deformation workflow provided by a CSIRO scientist, and have it run in a scientific code from Monash. Finally the user can publish these results to share with a colleague or cite in a paper. This opens new opportunities for access and collaboration as all the resources (models
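
    The discover-and-filter step, OGC web services queried with spatial and attribute filters, can be reproduced outside the portal with the OWSLib package. The endpoint URL and feature type below are placeholders for the example; VGL's actual services come from the AuScope catalogue.

        from owslib.wfs import WebFeatureService  # pip install OWSLib

        # Placeholder WFS endpoint and feature type.
        wfs = WebFeatureService(url="https://services.example.org/wfs",
                                version="1.1.0")

        # A purely spatial filter: a bounding box over part of Australia
        # (axis order follows the service's CRS conventions).
        response = wfs.getfeature(typename=["gsml:Borehole"],
                                  bbox=(130.0, -30.0, 140.0, -20.0))
        with open("boreholes.gml", "wb") as f:
            f.write(response.read())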

  5. 77 FR 62521 - National Infrastructure Advisory Council

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-15

    ... oral comments after the presentation of the report from the Regional Resilience Working Group. We... a presentation from the NIAC Regional Resilience Working Group documenting their work to date on the Regional Resilience Study, which includes the role and impact of critical infrastructure on regional...

  6. The European Research Infrastructure for Heritage Science (erihs)

    NASA Astrophysics Data System (ADS)

    Striova, J.; Pezzati, L.

    2017-08-01

    The European Research Infrastructure for Heritage Science (E-RIHS) entered the European strategic roadmap for research infrastructures (ESFRI Roadmap [1]) in 2016 as one of its six new projects. E-RIHS supports research on heritage interpretation, preservation, documentation and management. Both cultural and natural heritage are addressed: collections, artworks, buildings, monuments and archaeological sites. E-RIHS aims to become a distributed research infrastructure with a multi-level star structure: facilities from individual countries will be organized into national nodes, coordinated by National Hubs. The E-RIHS Central Hub will provide the unique access point to all E-RIHS services through coordination of the National Hubs. E-RIHS activities have already started in some of its national nodes: in Italy, access to some E-RIHS services started in 2015. A case study concerning the diagnostics of a hypogeal cave is presented.

  7. Evolution of a Materials Data Infrastructure

    NASA Astrophysics Data System (ADS)

    Warren, James A.; Ward, Charles H.

    2018-06-01

    The field of materials science and engineering is writing a new chapter in its evolution, one of digitally empowered materials discovery, development, and deployment. The 2008 Integrated Computational Materials Engineering (ICME) study report helped usher in this paradigm shift, making a compelling case and strong recommendations for an infrastructure supporting ICME that would enable access to precompetitive materials data for both scientific and engineering applications. With the launch of the Materials Genome Initiative in 2011, which drew substantial inspiration from the ICME study, digital data was highlighted as a core component of a Materials Innovation Infrastructure, along with experimental and computational tools. Over the past 10 years, our understanding of what it takes to provide accessible materials data has matured and rapid progress has been made in establishing a Materials Data Infrastructure (MDI). We are learning that the MDI is essential to eliminating the seams between experiment and computation by providing a means for them to connect effortlessly. Additionally, the MDI is becoming an enabler, allowing materials engineering to tie into a much broader model-based engineering enterprise for product design.

  8. Implementation Issues of Virtual Desktop Infrastructure and Its Case Study for a Physician's Round at Seoul National University Bundang Hospital

    PubMed Central

    Yoo, Sooyoung; Kim, Seok; Kim, Taegi; Kim, Jon Soo; Baek, Rong-Min; Suh, Chang Suk; Chung, Chin Youb

    2012-01-01

    Objectives The cloud computing-based virtual desktop infrastructure (VDI) allows access to computing environments with no limitations in terms of time or place, such that it can permit the rapid establishment of a mobile hospital environment. The objective of this study was to investigate the empirical issues to be considered when establishing a virtual mobile environment using VDI technology in a hospital setting and to examine the utility of the technology with an Apple iPad during a physician's rounds as a case study. Methods Empirical implementation issues were derived from a 910-bed tertiary national university hospital that recently launched a VDI system. During the physicians' rounds, we surveyed patient satisfaction levels with the VDI-based mobile consultation service with the iPad and the relationship between these levels of satisfaction and hospital revisits, hospital recommendations, and the hospital brand image. Thirty-five inpatients (including their next-of-kin) and seven physicians participated in the survey. Results Implementation issues pertaining to the VDI system arose with regard to the high-availability system architecture, wireless network infrastructure, and screen resolution of the system. Other issues were related to privacy and security, mobile device management, and user education. When the system was used in rounds, patients and their next-of-kin expressed high satisfaction levels, and a positive relationship was noted as regards patients' decisions to revisit the hospital and whether the use of the VDI system improved the brand image of the hospital. Conclusions Mobile hospital environments have the potential to benefit both physicians and patients. The issues related to the implementation of a VDI system discussed here should be examined in advance for its successful adoption and implementation. PMID:23346476

  9. [Attributes of forest infrastructure].

    PubMed

    Gao, Jun-kai; Jin, Ying-shan

    2007-06-01

    This paper discussed the origin and evolution of the concept of ecological infrastructure, the understanding of international communities about the functions of forest, the important roles of forest in China's economic development and ecological security, and the situations and challenges facing the ongoing forestry ecological restoration programs. It was suggested that forest should be defined as an essential infrastructure for national economic and social development in a modern society. The critical functions that forest infrastructure plays in the transition of forestry ecological development were emphasized. Based on a synthesis of forest ecosystem features, it was considered that the attributes of forest infrastructure are distinctive, because it is constructed from living biological material and diversified in ownership. The forestry ecological restoration program should not only follow the basic principles of infrastructural construction, but also take the special characteristics of forests into consideration in studying the managerial system of the programs. Some suggestions for the ongoing programs were put forward: 1) developing a modern concept of ecosystem where man and nature in harmony is the core, 2) formulating long-term stable investments for forestry ecological restoration programs, 3) implementing forestry ecological restoration programs based on infrastructure construction principles, and 4) managing forests according to the principles of infrastructural construction management.

  10. Information Infrastructure Technology and Applications (IITA) Program: Annual K-12 Workshop

    NASA Technical Reports Server (NTRS)

    Hunter, Paul; Likens, William; Leon, Mark

    1995-01-01

    The purpose of the K-12 workshop is to stimulate cross-pollination of inter-center activity and introduce the regional centers to cutting-edge K-12 activities. The format of the workshop consists of project presentations, working groups, and working group reports, all contained in a three-day period. The agenda is aggressive and demanding. The K-12 Education Project is a multi-center activity managed by the Information Infrastructure Technology and Applications (IITA)/K-12 Project Office at the NASA Ames Research Center (ARC). This workshop is conducted in support of executing the K-12 Education element of the IITA Project. The IITA/K-12 Project funds activities that use the National Information Infrastructure (NII) (e.g., the Internet) to foster reform and restructuring in mathematics, science, computing, engineering, and technical education.

  11. System Architecture Development for Energy and Water Infrastructure Data Management and Geovisual Analytics

    NASA Astrophysics Data System (ADS)

    Berres, A.; Karthik, R.; Nugent, P.; Sorokine, A.; Myers, A.; Pang, H.

    2017-12-01

    Building an integrated data infrastructure that can meet the needs of sustainable energy-water resource management requires a robust data management and geovisual analytics platform, capable of cross-domain scientific discovery and knowledge generation. Such a platform can facilitate the investigation of diverse, complex research and policy questions for emerging priorities in Energy-Water Nexus (EWN) science areas. Using advanced data analytics, machine learning techniques, multi-dimensional statistical tools, and interactive geovisualization components, such a multi-layered federated platform, the Energy-Water Nexus Knowledge Discovery Framework (EWN-KDF), is being developed. The platform utilizes several enterprise-grade software design concepts and standards, such as an extensible service-oriented architecture, open standard protocols, an event-driven programming model, an enterprise service bus, and adaptive user interfaces, to provide strategic value to the integrative computational and data infrastructure. EWN-KDF is built on the Compute and Data Environment for Science (CADES) environment at Oak Ridge National Laboratory (ORNL).

  12. National Fusion Collaboratory: Grid Computing for Simulations and Experiments

    NASA Astrophysics Data System (ADS)

    Greenwald, Martin

    2004-05-01

    The National Fusion Collaboratory Project is creating a computational grid designed to advance scientific understanding and innovation in magnetic fusion research by facilitating collaborations, enabling more effective integration of experiments, theory and modeling and allowing more efficient use of experimental facilities. The philosophy of FusionGrid is that data, codes, analysis routines, visualization tools, and communication tools should be thought of as network available services, easily used by the fusion scientist. In such an environment, access to services is stressed rather than portability. By building on a foundation of established computer science toolkits, deployment time can be minimized. These services all share the same basic infrastructure that allows for secure authentication and resource authorization which allows stakeholders to control their own resources such as computers, data and experiments. Code developers can control intellectual property, and fair use of shared resources can be demonstrated and controlled. A key goal is to shield scientific users from the implementation details such that transparency and ease-of-use are maximized. The first FusionGrid service deployed was the TRANSP code, a widely used tool for transport analysis. Tools for run preparation, submission, monitoring and management have been developed and shared among a wide user base. This approach saves user sites from the laborious effort of maintaining such a large and complex code while at the same time reducing the burden on the development team by avoiding the need to support a large number of heterogeneous installations. Shared visualization and A/V tools are being developed and deployed to enhance long-distance collaborations. These include desktop versions of the Access Grid, a highly capable multi-point remote conferencing tool and capabilities for sharing displays and analysis tools over local and wide-area networks.

  13. Why You Should Consider Green Stormwater Infrastructure for Your Community

    EPA Pesticide Factsheets

    This page provides an overview of the nation's infrastructure needs and cost and the benefits of integrating green infrastructure into projects that typically use grey infrastructure, such as roadways, sidewalks and parking lots.

  14. Public Private Partnerships, Corporate Welfare or Building the Nation's Scientific Infrastructure?

    NASA Astrophysics Data System (ADS)

    Shank, C. V.

    1996-03-01

    A debate is taking place in the U.S. concerning the investment of scarce Federal funds in science and technology research. Clouding this discussion is the proliferation of extreme views illustrated in the title of this talk. The impacts of the end of the cold war, the globalization of the economy and the realities of the budget deficit create a situation that cries out for a new social contract between scientists and taxpayers. We need to examine the successes and failures of the last 50 years to form the basis for a set of principles to enable the creation of a new consensus to define the roles of industry, government, universities and national laboratories in the research enterprise. The scientific infrastructure, and by extension, the economic vitality of the U.S., are at risk.

  15. The Impact of a Carbapenem-Resistant Enterobacteriaceae Outbreak on Facilitating Development of a National Infrastructure for Infection Control in Israel.

    PubMed

    Schwaber, Mitchell J; Carmeli, Yehuda

    2017-11-29

    In 2006 the Israeli healthcare system faced an unprecedented outbreak of carbapenem-resistant Enterobacteriaceae, primarily involving KPC-producing Klebsiella pneumoniae clonal complex CC258. This public health crisis exposed major gaps in infection control. In response, Israel established a national infection control infrastructure. The steps taken to build this infrastructure and benefits realized from its creation are described here.

  16. US cities can manage national hydrology and biodiversity using local infrastructure policy

    PubMed Central

    Surendran Nair, Sujithkumar; DeRolph, Christopher R.; Ruddell, Benjamin L.; Morton, April M.; Stewart, Robert N.; Troia, Matthew J.; Tran, Liem; Kim, Hyun; Bhaduri, Budhendra L.

    2017-01-01

    Cities are concentrations of sociopolitical power and prime architects of land transformation, while also serving as consumption hubs of “hard” water and energy infrastructures. These infrastructures extend well outside metropolitan boundaries and impact distal river ecosystems. We used a comprehensive model to quantify the roles of anthropogenic stressors on hydrologic alteration and biodiversity in US streams and isolate the impacts stemming from hard infrastructure developments in cities. Across the contiguous United States, cities’ hard infrastructures have significantly altered at least 7% of streams, which influence habitats for over 60% of North America’s fish, mussel, and crayfish species. Additionally, city infrastructures have contributed to local extinctions in 260 species and currently influence 970 indigenous species, 27% of which are in jeopardy. We find that ecosystem impacts do not scale with city size but are instead proportionate to infrastructure decisions. For example, Atlanta’s impacts by hard infrastructures extend across four major river basins, 12,500 stream km, and contribute to 100 local extinctions of aquatic species. In contrast, Las Vegas, a similar size city, impacts <1,000 stream km, leading to only seven local extinctions. So, cities have local policy choices that can reduce future impacts to regional aquatic ecosystems as they grow. By coordinating policy and communication between hard infrastructure sectors, local city governments and utilities can directly improve environmental quality in a significant fraction of the nation’s streams reaching far beyond their city boundaries. PMID:28827332

  17. Onsite and Electric Backup Capabilities at Critical Infrastructure Facilities in the United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phillips, Julia A.; Wallace, Kelly E.; Kudo, Terence Y.

    2016-04-01

    The following analysis, conducted by Argonne National Laboratory’s (Argonne’s) Risk and Infrastructure Science Center (RISC), details an assessment of the electric power backup capabilities of national critical infrastructure as captured through the Department of Homeland Security’s (DHS’s) Enhanced Critical Infrastructure Program (ECIP) Initiative. Between January 1, 2011, and September 2014, 3,174 ECIP facility surveys were conducted. This study focused first on backup capabilities by infrastructure type and then expanded to infrastructure type by census region.

  18. 75 FR 48983 - The Critical Infrastructure Partnership Advisory Council (CIPAC)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-12

    ... DEPARTMENT OF HOMELAND SECURITY [Docket No. DHS-2010-0062] The Critical Infrastructure Partnership... Critical Infrastructure Partnership Advisory Council (CIPAC) by notice published in the Federal Register... Infrastructure Protection, National Protection and Programs Directorate, Department of Homeland Security, 245...

  19. PRACE - The European HPC Infrastructure

    NASA Astrophysics Data System (ADS)

    Stadelmeyer, Peter

    2014-05-01

    The mission of PRACE (Partnership for Advanced Computing in Europe) is to enable high-impact scientific discovery and engineering research and development across all disciplines to enhance European competitiveness for the benefit of society. PRACE seeks to realize this mission by offering world-class computing and data management resources and services through a peer review process. This talk gives a general overview of PRACE and the PRACE research infrastructure (RI). PRACE is established as an international not-for-profit association, and the PRACE RI is a pan-European supercomputing infrastructure which offers access to computing and data management resources at partner sites distributed throughout Europe. Besides a short summary of the organization, history, and activities of PRACE, it is explained how scientists and researchers from academia and industry from around the world can access PRACE systems, and which education and training activities are offered by PRACE. The overview also contains a selection of PRACE contributions to societal challenges and ongoing activities. Examples of the latter are, among others, petascaling, an application benchmark suite, best practice guides for efficient use of key architectures, application enabling/scaling, new programming models, and industrial applications. The Partnership for Advanced Computing in Europe (PRACE) is an international non-profit association with its seat in Brussels. The PRACE Research Infrastructure provides a persistent world-class high performance computing service for scientists and researchers from academia and industry in Europe. The computer systems and their operations accessible through PRACE are provided by 4 PRACE members (BSC representing Spain, CINECA representing Italy, GCS representing Germany and GENCI representing France). The Implementation Phase of PRACE receives funding from the EU's Seventh Framework Programme (FP7/2007-2013) under grant agreements RI-261557, RI-283493 and RI

  20. Enhancing the Resilience of Interdependent Critical Infrastructure Systems Using a Common Computational Framework

    NASA Astrophysics Data System (ADS)

    Little, J. C.; Filz, G. M.

    2016-12-01

    As modern societies become more complex, critical interdependent infrastructure systems become more likely to fail under stress unless they are designed and implemented to be resilient. Hurricane Katrina clearly demonstrated the catastrophic and as yet unpredictable consequences of such failures. Resilient infrastructure systems maintain the flow of goods and services in the face of a broad range of natural and manmade hazards. In this presentation, we illustrate a generic computational framework to facilitate high-level decision-making about how to invest scarce resources most effectively to enhance resilience in coastal protection, transportation, and the economy of a region. Coastal Louisiana, our study area, has experienced the catastrophic effects of several land-falling hurricanes in recent years. In this project, we implement and further refine three process models (a coastal protection model, a transportation model, and an economic model) for the coastal Louisiana region. We upscale essential mechanistic features of the three detailed process models to the systems level and integrate the three reduced-order systems models in a modular fashion. We also evaluate the proposed approach in annual workshops with input from stakeholders. Based on stakeholder inputs, we derive a suite of goals, targets, and indicators for evaluating resilience at the systems level, and assess and enhance resilience using several deterministic scenarios. The unifying framework will be able to accommodate the different spatial and temporal scales that are appropriate for each model. We combine our generic computational framework, which encompasses the entire system of systems, with the targets and indicators needed to systematically meet our chosen resilience goals. We will start with targets that focus on technical and economic systems, but future work will ensure that targets and indicators are extended to other dimensions of resilience including those in the environmental and
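
    The modular coupling pattern described here can be pictured with a toy driver: each reduced-order systems model maps a shared state to an updated state, and the framework iterates the models over a scenario. The three stub models below are placeholders for the actual coastal protection, transportation, and economic models, which are not specified in this abstract.

        State = dict[str, float]

        def coastal(s: State) -> State:
            # Stub: storm surge erodes the protection level.
            return {**s, "protection": max(0.0, s["protection"] - 0.1 * s["surge"])}

        def transport(s: State) -> State:
            # Stub: flooding (low protection) degrades network capacity.
            return {**s, "capacity": s["capacity"] * min(1.0, s["protection"])}

        def economy(s: State) -> State:
            # Stub: economic output tracks transport capacity.
            return {**s, "output": 100.0 * s["capacity"]}

        def run(models, state: State, steps: int) -> State:
            for _ in range(steps):        # e.g. annual time steps of a scenario
                for model in models:      # modular: models can be swapped freely
                    state = model(state)
            return state

        print(run([coastal, transport, economy],
                  {"surge": 2.0, "protection": 1.0, "capacity": 1.0, "output": 0.0}, 3))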

  1. Development of a Water Infrastructure Knowledge Database

    EPA Science Inventory

    This paper presents a methodology for developing a national database, as applied to water infrastructure systems, which includes both drinking water and wastewater. The database is branded as "WATERiD" and can be accessed at www.waterid.org. Water infrastructure in the U.S. is ag...

  2. An infrastructure with a unified control plane to integrate IP into optical metro networks to provide flexible and intelligent bandwidth on demand for cloud computing

    NASA Astrophysics Data System (ADS)

    Yang, Wei; Hall, Trevor

    2012-12-01

    The Internet is entering an era of cloud computing to provide more cost-effective, eco-friendly and reliable services to consumer and business users, and the nature of Internet traffic will undergo a fundamental transformation. Consequently, the current Internet will no longer suffice for serving cloud traffic in metro areas. This work proposes an infrastructure with a unified control plane that integrates simple packet aggregation technology with optical express, through the interoperation between IP routers and electrical traffic controllers in optical metro networks. The proposed infrastructure provides flexible, intelligent, and eco-friendly bandwidth on demand for cloud computing in metro areas.

  3. 76 FR 70730 - The Critical Infrastructure Partnership Advisory Council (CIPAC)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-15

    ... DEPARTMENT OF HOMELAND SECURITY [Docket No. DHS-2011-0112] The Critical Infrastructure Partnership... Critical Infrastructure Partnership Advisory Council (CIPAC) by notice published in the Federal Register... Infrastructure Protection, National Protection and Programs Directorate, U.S. Department of Homeland Security...

  4. 76 FR 29775 - The Critical Infrastructure Partnership Advisory Council (CIPAC)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-23

    ... DEPARTMENT OF HOMELAND SECURITY [Docket No. DHS-2011-0038] The Critical Infrastructure Partnership... Critical Infrastructure Partnership Advisory Council (CIPAC) by notice published in the Federal Register... Infrastructure Protection, National Protection and Programs Directorate, U.S. Department of Homeland Security...

  5. US cities can manage national hydrology and biodiversity using local infrastructure policy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McManamay, Ryan A.; Surendran Nair, Sujithkumar; DeRolph, Christopher R.

    Cities are concentrations of socio-political power and prime architects of land transformation, while also serving as consumption hubs of “hard” water and energy infrastructures (e.g. electrical power, stormwater management, zoning, water supply, and wastewater). These infrastructures extend well outside metropolitan boundaries and impact distal river ecosystems. We used a comprehensive model to quantify the roles of anthropogenic stressors on hydrologic alteration and biodiversity in US streams and isolated the impacts stemming from hard infrastructure developments in cities. Across the conterminous US, cities’ hard infrastructures have significantly altered at least 7% of streams, which influence habitats for over 60% of North America’s fish, mussel, and crayfish species. Additionally, city infrastructures have contributed to local extinctions in 260 species and currently influence 970 indigenous species, 27% of which are in jeopardy. We find that ecosystem impacts do not scale with city size but are instead proportionate to infrastructure decisions. For example, Atlanta’s impacts by hard infrastructures extend across four major river basins, 12,500 stream km, and contribute to 100 local extinctions of aquatic species. In contrast, Las Vegas, a similar size city, impacts < 1000 stream km, leading to only 7 local extinctions. So, cities have local policy choices that can reduce future impacts to regional aquatic ecosystems as cities grow. Furthermore, by coordinating policy and communication between hard infrastructure sectors, local city governments and utilities can directly improve environmental quality in a significant fraction of the nation’s streams and aquatic biota reaching far beyond their city boundaries.

  6. US cities can manage national hydrology and biodiversity using local infrastructure policy

    DOE PAGES

    McManamay, Ryan A.; Surendran Nair, Sujithkumar; DeRolph, Christopher R.; ...

    2017-08-21

    Cities are concentrations of socio-political power and prime architects of land transformation, while also serving as consumption hubs of “hard” water and energy infrastructures (e.g. electrical power, stormwater management, zoning, water supply, and wastewater). These infrastructures extend well outside metropolitan boundaries and impact distal river ecosystems. We used a comprehensive model to quantify the roles of anthropogenic stressors on hydrologic alteration and biodiversity in US streams and isolated the impacts stemming from hard infrastructure developments in cities. Across the conterminous US, cities’ hard infrastructures have significantly altered at least 7% of streams, which influence habitats for over 60% of North America’s fish, mussel, and crayfish species. Additionally, city infrastructures have contributed to local extinctions in 260 species and currently influence 970 indigenous species, 27% of which are in jeopardy. We find that ecosystem impacts do not scale with city size but are instead proportionate to infrastructure decisions. For example, Atlanta’s impacts by hard infrastructures extend across four major river basins, 12,500 stream km, and contribute to 100 local extinctions of aquatic species. In contrast, Las Vegas, a similar size city, impacts < 1000 stream km, leading to only 7 local extinctions. So, cities have local policy choices that can reduce future impacts to regional aquatic ecosystems as cities grow. Furthermore, by coordinating policy and communication between hard infrastructure sectors, local city governments and utilities can directly improve environmental quality in a significant fraction of the nation’s streams and aquatic biota reaching far beyond their city boundaries.

  7. Toolkit of Available EPA Green Infrastructure Modeling Software. National Stormwater Calculator

    EPA Science Inventory

    This webinar will present a toolkit consisting of five EPA green infrastructure models and tools, along with communication material. This toolkit can be used as a teaching and quick reference resource for use by planners and developers when making green infrastructure implementat...

  8. Lessons learned from implementing a national infrastructure in Sweden for storage and analysis of next-generation sequencing data

    PubMed Central

    2013-01-01

    Analyzing and storing data and results from next-generation sequencing (NGS) experiments is a challenging task, hampered by ever-increasing data volumes and frequent updates of analysis methods and tools. Storage and computation have grown beyond the capacity of personal computers and there is a need for suitable e-infrastructures for processing. Here we describe UPPNEX, an implementation of such an infrastructure, tailored to the needs of data storage and analysis of NGS data in Sweden, serving various labs and multiple instruments from the major sequencing technology platforms. UPPNEX comprises resources for high-performance computing, large-scale and high-availability storage, an extensive bioinformatics software suite, up-to-date reference genomes and annotations, a support function with system and application experts as well as a web portal and support ticket system. UPPNEX applications are numerous and diverse, and include whole genome-, de novo- and exome sequencing, targeted resequencing, SNP discovery, RNASeq, and methylation analysis. There are over 300 projects that utilize UPPNEX, including large undertakings such as the sequencing of the flycatcher and Norwegian spruce. We describe the strategic decisions made when investing in hardware, setting up maintenance and support, allocating resources, and illustrate major challenges such as managing data growth. We conclude with summarizing our experiences and observations with UPPNEX to date, providing insights into the successful and less successful decisions made. PMID:23800020

  9. National health information infrastructure model: a milestone for health information management education realignment.

    PubMed

    Meidani, Zahra; Sadoughi, Farhnaz; Ahmadi, Maryam; Maleki, Mohammad Reza; Zohoor, Alireza; Saddik, Basema

    2012-01-01

    Challenges and drawbacks of the health information management (HIM) curriculum at the Master's degree were examined, including lack of well-established computing sciences and inadequacy to give rise to specific competencies. Information management was condensed to the hospital setting to intensify the indispensability of a well-organized educational campaign. The healthcare information dimensions of a national health information infrastructure (NHII) model present novel requirements for HIM education. Articles related to challenges and barriers to adoption of the personal health record (PHR), the core component of personal health dimension of an NHII, were searched through sources including Science Direct, ProQuest, and PubMed. Through a literature review, concerns about the PHR that are associated with HIM functions and responsibilities were extracted. In the community/public health dimension of the NHII the main components have been specified, and the targeted information was gathered through literature review, e-mail, and navigation of international and national organizations. Again, topics related to HIM were evoked. Using an information system (decision support system, artificial neural network, etc.) to support PHR media and content, patient education, patient-HIM communication skills, consumer health information, conducting a surveillance system in other areas of healthcare such as a risk factor surveillance system, occupational health, using an information system to analyze aggregated data including a geographic information system, data mining, online analytical processing, public health vocabulary and classification system, and emerging automated coding systems pose major knowledge gaps in HIM education. Combining all required skills and expertise to handle personal and public dimensions of healthcare information in a single curriculum is simply impractical. Role expansion and role extension for HIM professionals should be defined based on the essence of

  10. Peer-to-peer Cooperative Scheduling Architecture for National Grid Infrastructure

    NASA Astrophysics Data System (ADS)

    Matyska, Ludek; Ruda, Miroslav; Toth, Simon

    For some ten years, the Czech National Grid Infrastructure MetaCentrum has used a single central PBSPro installation to schedule jobs across the country. This centralized approach keeps full track of all the clusters, providing support for jobs spanning several sites, an implementation of the fair-share policy, and better overall control of the grid environment. Despite steady progress in increased stability and resilience to intermittent very short network failures, the growing number of sites and processors makes this architecture, with its single point of failure and scalability limits, obsolete. As a result, a new scheduling architecture is proposed, which relies on higher autonomy of clusters. It is based on a peer-to-peer network of semi-independent schedulers for each site or even cluster. Each scheduler accepts jobs for the whole infrastructure, cooperating with other schedulers on the implementation of global policies like central job accounting, fair-share, or submission of jobs across several sites. The scheduling system is integrated with the Magrathea system to support scheduling of virtual clusters, including the setup of their internal network, again eventually spanning several sites. On the other hand, each scheduler is local to one of several clusters and is able to directly control and submit jobs to them even if the connection to other scheduling peers is lost. In parallel to the change of the overall architecture, the scheduling system itself is being replaced. Instead of PBSPro, chosen originally for its declared support of large-scale distributed environments, the new scheduling architecture is based on the open-source Torque system. The implementation and support for the most desired properties in PBSPro and Torque are discussed, and the necessary modifications to Torque to support the MetaCentrum scheduling architecture are presented, too.
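
    The key behaviour of the proposed architecture, that any peer accepts a job and either runs it locally or cooperates with another site, fits in a few schematic lines. This is an illustration only, with invented site names and capacities; the real implementation is the modified Torque server described above.

        import random
        from dataclasses import dataclass, field

        @dataclass
        class Scheduler:
            """One semi-independent per-site scheduler in the peer network."""
            site: str
            free_cpus: int
            peers: list["Scheduler"] = field(default_factory=list)

            def submit(self, job_cpus: int, hops: int = 0) -> str:
                if job_cpus <= self.free_cpus:
                    self.free_cpus -= job_cpus  # runs locally even if peers are down
                    return f"job ({job_cpus} CPUs) runs at {self.site}"
                if hops < len(self.peers):      # cooperate: forward to a peer
                    return random.choice(self.peers).submit(job_cpus, hops + 1)
                return "job queued: no site currently has free capacity"

        brno = Scheduler("brno", free_cpus=8)
        prague = Scheduler("prague", free_cpus=64)
        brno.peers, prague.peers = [prague], [brno]
        print(brno.submit(32))  # too big for brno, forwarded to prague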

  11. A National Assessment of Change in Green Infrastructure Using Mathematical Morphology

    EPA Science Inventory

    Green infrastructure is a popular framework for conservation planning. The main elements of green infrastructure are hubs and links. Hubs tend to be large areas of natural vegetation and links tend to be linear features (e.g., streams) that connect hubs. Within the United States...

  12. Integrated Facilities and Infrastructure Plan.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reisz Westlund, Jennifer Jill

    Our facilities and infrastructure are a key element of our capability-based science and engineering foundation. The focus of the Integrated Facilities and Infrastructure Plan is the development and implementation of a comprehensive plan to sustain the capabilities necessary to meet national research, design, and fabrication needs for Sandia National Laboratories’ (Sandia’s) comprehensive national security missions both now and into the future. A number of Sandia’s facilities have reached the end of their useful lives and many others are not suitable for today’s mission needs. Due to the continued aging and surge in utilization of Sandia’s facilities, deferred maintenance has continued to increase. As part of our planning focus, Sandia is committed to halting the growth of deferred maintenance across its sites through demolition, replacement, and dedicated funding to reduce the backlog of maintenance needs. Sandia will become more agile in adapting existing space and changing how space is utilized in response to the changing requirements. This Integrated Facilities & Infrastructure (F&I) Plan supports the Sandia Strategic Plan’s strategic objectives, specifically Strategic Objective 2: Strengthen our Laboratories’ foundation to maximize mission impact, and Strategic Objective 3: Advance an exceptional work environment that enables and inspires our people in service to our nation. The Integrated F&I Plan is developed through a planning process model to understand the F&I needs, analyze solution options, plan the actions and funding, and then execute projects.

  13. 75 FR 31458 - Infrastructure Protection Data Call Survey

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-03

    ...-0022] Infrastructure Protection Data Call Survey AGENCY: National Protection and Programs Directorate... New Information Collection Request, Infrastructure Protection Data Call Survey. DHS previously... territories are able to achieve this mission, IP requests opinions and information in a survey from IP Data...

  14. Facilities and Infrastructure FY 2017 Budget At-A-Glance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2016-03-01

    The Facilities and Infrastructure Program includes EERE’s capital investments, operations and maintenance, and site-wide support of the National Renewable Energy Laboratory (NREL). NREL is the nation’s only national laboratory with a primary mission dedicated to the research, development and demonstration (RD&D) of energy efficiency, renewable energy and related technologies. EERE is NREL’s steward, primary client and sponsor of NREL’s designation as a Federally Funded Research and Development Center. The Facilities and Infrastructure (F&I) budget maintains NREL’s research and support infrastructure, ensures its availability for EERE’s use, and provides a safe and secure workplace for employees.

  15. Information Infrastructure Sourcebook.

    ERIC Educational Resources Information Center

    Kahin, Brian, Ed.

    This volume is designed to provide planners and policymakers with a single volume reference book on efforts to define and develop policy for the National Information Infrastructure. The sourcebook is divided into five sections: (1) official documents; (2) vision statements and position papers; (3) program and project descriptions (all sectors);…

  16. Health care information infrastructure: what will it be and how will we get there?

    NASA Astrophysics Data System (ADS)

    Kun, Luis G.

    1996-02-01

    During the first Health Care Technology Policy (HCTP) conference last year, during Health Care Reform, four major issues were brought up regarding the efforts underway to develop a Computer Based Patient Record (CBPR), the National Information Infrastructure (NII) as part of the High Performance Computing & Communications (HPCC) program, and the so-called "Patient Card". More specifically, it was explained how a national information system will greatly affect the way health care delivery is provided to the United States public and reduce its costs. These four issues were: constructing a National Information Infrastructure (NII); building a Computer Based Patient Record system; bringing the collective resources of our National Laboratories to bear in developing and implementing the NII and CBPR, as well as a security system with which to safeguard the privacy rights of patients and the physician-patient privilege; and utilizing Government (e.g., DOD, DOE) capabilities (technology and human resources) to maximize resource utilization, create new jobs, and accelerate technology transfer to address health care issues. During the second HCTP conference, in mid-1995, a section entitled "Health Care Technology Assets of the Federal Government" addressed the benefits of the technology transfer which should occur to maximize already developed resources. Another section, entitled "Transfer and Utilization of Government Technology Assets to the Private Sector", looked at both Health Care and non-Health Care related technologies, since many areas such as Information Technologies (i.e., imaging, communications, archival/retrieval, systems integration, information display, multimedia, heterogeneous databases, etc.) already exist and are part of our National Labs and/or other federal agencies, e.g., ARPA. Although these technologies are not labeled under "Health Care" programs, they could provide enormous value in addressing technical needs. An additional issue deals with

  17. Scientific Computing Strategic Plan for the Idaho National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whiting, Eric Todd

    Scientific computing is a critical foundation of modern science. Without innovations in the field of computational science, the essential missions of the Department of Energy (DOE) would go unrealized. Taking a leadership role in such innovations is Idaho National Laboratory’s (INL’s) challenge and charge, and is central to INL’s ongoing success. Computing is an essential part of INL’s future. DOE science and technology missions rely firmly on computing capabilities in various forms. Modeling and simulation, fueled by innovations in computational science and validated through experiment, are a critical foundation of science and engineering. Big data analytics from an increasing number of widely varied sources is opening new windows of insight and discovery. Computing is a critical tool in education, science, engineering, and experiments. Advanced computing capabilities in the form of people, tools, computers, and facilities will position INL competitively to deliver results and solutions on important national science and engineering challenges. A computing strategy must include much more than simply computers. The foundational enabling component of computing at many DOE national laboratories is the combination of a showcase-like data center facility coupled with a very capable supercomputer. In addition, network connectivity, disk storage systems, and visualization hardware are critical and generally tightly coupled to the computer system and co-located in the same facility. The existence of these resources in a single data center facility opens the doors to many opportunities that would not otherwise be possible.

  18. Map of Water Infrastructure and Homes Without Access to Safe Drinking Water and Basic Sanitation on the Navajo Nation - October 2010

    EPA Pesticide Factsheets

    This document presents the results of completed work using existing geographic information system (GIS) data to map existing water and sewer infrastructure and homes without access to safe drinking water and basic sanitation on the Navajo Nation.

  19. 77 FR 6825 - NASA Advisory Council; Information Technology Infrastructure Committee; Meeting.

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-09

    ...; Information Technology Infrastructure Committee; Meeting. AGENCY: National Aeronautics and Space... Information Technology Infrastructure Committee of the NASA Advisory Council. DATES: Wednesday, March 7, 2012... CONTACT: Ms. Karen Harper, Executive Secretary for the Information Technology Infrastructure Committee...

  20. National research and education network

    NASA Technical Reports Server (NTRS)

    Villasenor, Tony

    1991-01-01

    Some goals of this network are as follows: Extend U.S. technological leadership in high performance computing and computer communications; Provide wide dissemination and application of the technologies both to speed the pace of innovation and to serve the national economy, national security, education, and the global environment; and Spur gains in U.S. productivity and industrial competitiveness by making high performance computing and networking technologies an integral part of the design and production process. Strategies for achieving these goals are as follows: Support solutions to important scientific and technical challenges through a vigorous R and D effort; Reduce the uncertainties to industry for R and D and use of this technology through increased cooperation between government, industry, and universities and by the continued use of government and government funded facilities as a prototype user for early commercial HPCC products; and Support underlying research, network, and computational infrastructures on which U.S. high performance computing technology is based.

  1. Internet infrastructures and health care systems: a qualitative comparative analysis on networks and markets in the British National Health Service and Kaiser Permanente.

    PubMed

    Séror, Ann C

    2002-12-01

    The Internet and emergent telecommunications infrastructures are transforming the future of health care management. The costs of health care delivery systems, products, and services continue to rise everywhere, but performance of health care delivery is associated with institutional and ideological considerations as well as availability of financial and technological resources. The objective was to identify the effects of ideological differences on health care market infrastructures, including the Internet and telecommunications technologies, through a comparative case analysis of two large health care organizations: the British National Health Service and the California-based Kaiser Permanente health maintenance organization. The method was a qualitative comparative analysis focusing on the British National Health Service and the Kaiser Permanente health maintenance organization, showing how system infrastructures vary according to market dynamics dominated by health care institutions ("push") or by consumer demand ("pull"). System control mechanisms may be technologically embedded, institutional, or behavioral. The analysis suggests that telecommunications technologies and the Internet may contribute significantly to health care system performance in a context of ideological diversity. The study offers evidence to validate alternative models of health care governance: the national constitution model, and the enterprise business contract model. This evidence also suggests important questions for health care policy makers as well as researchers in telecommunications, organizational theory, and health care management.

  2. The Information Infrastructure: Reaching Society's Goals. A Report of the Information Infrastructure Task Force Committee on Applications and Technology.

    ERIC Educational Resources Information Center

    National Inst. of Standards and Technology, Gaithersburg, MD.

    Intended for public comment and discussion, this document is the second volume of papers in which the Information Infrastructure Task Force has attempted to articulate in clear terms, with sufficient detail, how improvements in the National Information Infrastructure (NII) can help meet other social goals. These are not plans to be enacted, but…

  3. Complete distributed computing environment for a HEP experiment: experience with ARC-connected infrastructure for ATLAS

    NASA Astrophysics Data System (ADS)

    Read, A.; Taga, A.; O-Saada, F.; Pajchel, K.; Samset, B. H.; Cameron, D.

    2008-07-01

    Computing and storage resources connected by the NorduGrid ARC middleware in the Nordic countries, Switzerland and Slovenia are a part of the ATLAS computing Grid. This infrastructure is being commissioned with the ongoing ATLAS Monte Carlo simulation production in preparation for the commencement of data taking in 2008. The unique non-intrusive architecture of ARC, its straightforward interplay with the ATLAS Production System via the Dulcinea executor, and its performance during the commissioning exercise are described. ARC support for flexible and powerful end-user analysis within the GANGA distributed analysis framework is also shown. Whereas the storage solution for this Grid was earlier based on a large, distributed collection of GridFTP servers, the ATLAS computing design includes a structured SRM-based system with a limited number of storage endpoints. The characteristics, integration and performance of the old and new storage solutions are presented. Although the hardware resources in this Grid are quite modest, it has provided more than double the agreed contribution to the ATLAS production with an efficiency above 95% during long periods of stable operation.

  4. 78 FR 48806 - Approval and Promulgation of Implementation Plans; Tennessee; Infrastructure Requirements for the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-12

    ... Promulgation of Implementation Plans; Tennessee; Infrastructure Requirements for the 2008 Lead National Ambient... infrastructure requirements of the Clean Air Act (CAA or Act) for the 2008 Lead national ambient air quality... ``infrastructure'' SIP. TDEC certified that the Tennessee SIP contains provisions that ensure the 2008 Lead NAAQS...

  5. AstroCloud, a Cyber-Infrastructure for Astronomy Research: Overview

    NASA Astrophysics Data System (ADS)

    Cui, C.; Yu, C.; Xiao, J.; He, B.; Li, C.; Fan, D.; Wang, C.; Hong, Z.; Li, S.; Mi, L.; Wan, W.; Cao, Z.; Wang, J.; Yin, S.; Fan, Y.; Wang, J.

    2015-09-01

    AstroCloud is a cyber-infrastructure for astronomy research initiated by the Chinese Virtual Observatory (China-VO) under funding support from the NDRC (National Development and Reform Commission) and CAS (Chinese Academy of Sciences). Tasks such as proposal submission, proposal peer review, data archiving, data quality control, data release and open access, and cloud-based data processing and analysis will all be supported on the platform. It will act as a full-lifecycle management system for astronomical data and telescopes. Achievements from international Virtual Observatories and cloud computing are heavily adopted. In this paper, the background of the project, key features of the system, and the latest progress are introduced.

  6. Impact of public electric vehicle charging infrastructure

    DOE PAGES

    Levinson, Rebecca S.; West, Todd H.

    2017-10-16

    Our work uses market analysis and simulation to explore the potential of public charging infrastructure to spur US battery electric vehicle (BEV) sales, increase national electrified mileage, and lower greenhouse gas (GHG) emissions. By employing both scenario and parametric analysis for policy-driven injection of public charging stations, we find the following: (1) For large deployments of public chargers, DC fast chargers are more effective than level 2 chargers at increasing BEV sales, increasing electrified mileage, and lowering GHG emissions, even if only one DC fast charging station can be built for every ten level 2 charging stations. (2) A national initiative to build DC fast charging infrastructure will see diminishing returns on investment at approximately 30,000 stations. (3) Some infrastructure deployment costs can be defrayed by passing them back to electric vehicle consumers, but once those costs to the consumer reach the equivalent of approximately 12¢/kWh for all miles driven, almost all gains to BEV sales and GHG emissions reductions from infrastructure construction are lost.
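
    The diminishing-returns finding can be pictured with a toy parametric model. Only the ~30,000-station knee comes from the abstract; the saturating functional form, the ceiling G_MAX, and the constant N0 below are invented for illustration.

    ```python
    import math

    # Hypothetical diminishing-returns model of additional BEV sales versus the
    # number of DC fast charging stations: gain(n) = G_MAX * (1 - exp(-n / N0)).
    # G_MAX and N0 are made-up parameters chosen so returns flatten near the
    # ~30,000-station knee reported in the study.
    G_MAX = 1.0e6      # hypothetical ceiling on additional BEV sales
    N0 = 10_000        # hypothetical saturation constant (stations)

    def sales_gain(n_stations: int) -> float:
        return G_MAX * (1.0 - math.exp(-n_stations / N0))

    for n in (5_000, 15_000, 30_000, 60_000):
        marginal = sales_gain(n + 1_000) - sales_gain(n)
        print(f"{n:>6} stations: gain={sales_gain(n):>10.0f}, "
              f"marginal per extra 1000 stations={marginal:>8.0f}")
    ```

    Running the sketch shows the marginal gain per extra thousand stations collapsing past 30,000 stations, which is the qualitative shape behind a "diminishing returns on investment" conclusion.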

  8. Building Nationally-Focussed, Globally Federated, High Performance Earth Science Platforms to Solve Next Generation Social and Economic Issues.

    NASA Astrophysics Data System (ADS)

    Wyborn, Lesley; Evans, Ben; Foster, Clinton; Pugh, Timothy; Uhlherr, Alfred

    2015-04-01

    Digital geoscience data and information are integral to informing decisions on the social, economic and environmental management of natural resources. Traditionally, such decisions were focused on regional or national viewpoints only, but it is increasingly being recognised that global perspectives are required to meet new challenges such as predicting the impacts of climate change; sustainably exploiting scarce water, mineral and energy resources; and protecting our communities through better prediction of the behaviour of natural hazards. In recent years, technical advances in scientific instruments have resulted in a surge in data volumes, with data now being collected at unprecedented rates and at ever increasing resolutions. The size of many earth science data sets now exceeds the computational capacity of many government and academic organisations to locally store and dynamically access the data sets; to internally process and analyse them at high resolutions; and then to deliver them online to clients, partners and stakeholders. Fortunately, at the same time, computational capacities (both cloud and HPC) have commensurately increased: these can now provide the capability to effectively access the ever-growing data assets within realistic time frames. However, to achieve this, data and computing need to be co-located: bandwidth limits the capacity to move the large data sets; the data transfers are too slow; and the latencies to access them are too high. These scenarios are driving the move towards more centralised High Performance (HP) infrastructures. The rapidly increasing scale of data and the growing complexity of software and hardware environments, combined with the energy costs of running such infrastructures, are creating a compelling economic argument for having just one or two major national (or continental) HP facilities that can be federated internationally to enable earth and environmental issues to be tackled at global scales. But at the same time, if

  9. Computer network access to scientific information systems for minority universities

    NASA Astrophysics Data System (ADS)

    Thomas, Valerie L.; Wakim, Nagi T.

    1993-08-01

    The evolution of computer networking technology has led to the establishment of a massive networking infrastructure which interconnects various types of computing resources at many government, academic, and corporate institutions. A large segment of this infrastructure has been developed to facilitate information exchange and resource sharing within the scientific community. The National Aeronautics and Space Administration (NASA) supports both the development and the application of computer networks which provide its community with access to many valuable multi-disciplinary scientific information systems and on-line databases. Recognizing the need to extend the benefits of this advanced networking technology to the under-represented community, the National Space Science Data Center (NSSDC) in the Space Data and Computing Division at the Goddard Space Flight Center has developed the Minority University-Space Interdisciplinary Network (MU-SPIN) Program: a major networking and education initiative for Historically Black Colleges and Universities (HBCUs) and Minority Universities (MUs). In this paper, we will briefly explain the various components of the MU-SPIN Program while highlighting how, by providing access to scientific information systems and on-line data, it promotes a higher level of collaboration among faculty and students and NASA scientists.

  10. Space-based communications infrastructure for developing countries

    NASA Astrophysics Data System (ADS)

    Barker, Keith; Barnes, Carl; Price, K. M.

    1995-08-01

    This study examines the potential of advanced satellites to augment the telecommunications infrastructure of developing countries. The study investigated the potential market for using satellites in developing countries, the role of satellites in national information infrastructures (NII), the technical feasibility of augmenting NIIs with satellites, and the financial conditions a nation must meet to procure satellite systems. In addition, the study examined several technical areas including onboard processing, intersatellite links, frequency of operation, multibeam and active antennas, and advanced satellite technologies. The marketing portion of this study focused on three case studies: China, Brazil, and Mexico. These cases represent countries in various stages of telecommunication infrastructure development. The study concludes by defining the needs of developing countries for satellites, and recommends steps that both industry and NASA can take to improve the competitiveness of U.S. satellite manufacturing.

  11. Interactions among human behavior, social networks, and societal infrastructures: A Case Study in Computational Epidemiology

    NASA Astrophysics Data System (ADS)

    Barrett, Christopher L.; Bisset, Keith; Chen, Jiangzhuo; Eubank, Stephen; Lewis, Bryan; Kumar, V. S. Anil; Marathe, Madhav V.; Mortveit, Henning S.

    Human behavior, social networks, and civil infrastructures are closely intertwined. Understanding their co-evolution is critical for designing public policies and decision support for disaster planning. For example, human behaviors and the day-to-day activities of individuals create dense social interactions that are characteristic of modern urban societies. These dense social networks provide a perfect fabric for fast, uncontrolled disease propagation. Conversely, people’s behavior in response to public policies, and their perception of how the crisis is unfolding as a result of a disease outbreak, can dramatically alter the normally stable social interactions. Effective planning and response strategies must take these complicated interactions into account. In this chapter, we describe a computer-simulation-based approach to studying these issues, using public health and computational epidemiology as an illustrative example. We also formulate game-theoretic and stochastic optimization problems that capture many of the problems that we study empirically.
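
    A toy simulation makes the coupling between contact networks and disease spread concrete. The sketch below is not the authors' simulation system; the network, parameters, and update rule are all invented for illustration.

    ```python
    import random

    # Minimal stochastic SIR sketch on a contact network: dense social contacts
    # (edges) are the fabric along which infection propagates, as described above.
    random.seed(42)

    N = 200
    # Hypothetical contact network: each person gets ~8 random contacts.
    contacts = {i: random.sample([j for j in range(N) if j != i], 8)
                for i in range(N)}

    state = {i: "S" for i in range(N)}
    state[0] = "I"                       # index case
    P_TRANSMIT, P_RECOVER = 0.05, 0.10   # made-up per-day probabilities

    for day in range(1, 61):
        newly_infected, newly_recovered = [], []
        for person, s in state.items():
            if s != "I":
                continue
            for contact in contacts[person]:
                if state[contact] == "S" and random.random() < P_TRANSMIT:
                    newly_infected.append(contact)
            if random.random() < P_RECOVER:
                newly_recovered.append(person)
        for p in newly_infected:
            state[p] = "I"
        for p in newly_recovered:
            state[p] = "R"
        counts = {s: sum(1 for v in state.values() if v == s) for s in "SIR"}
        if counts["I"] == 0:
            break
    print(f"day {day}: {counts}")
    ```

    Behavioral feedback of the kind discussed in the chapter would enter such a model by rewiring or thinning `contacts` in response to the infection counts, which is exactly what makes the co-evolution hard to analyze in closed form.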

  12. Spatial aspects of the research on tourist infrastructure with the use of the cartographic method on the basis of Roztoczański National Park

    NASA Astrophysics Data System (ADS)

    Kałamucki, Krzysztof; Kamińska, Anna; Buk, Dorota

    2012-01-01

    The aim of the research was to demonstrate changes in tourist trails and in the distribution of tourist infrastructure spots in the area of Roztoczański National Park and its vicinity. Another, equally important aim was to assess the usefulness of the cartographic method in researching tourist infrastructure and of cartographic presentation methods. The research covered the region of Roztoczański National Park. The following elements of tourist infrastructure were selected for the analysis: linear elements (walking trails, education paths) and spot elements (accommodation, eating places, and the accompanying facilities). In order to recreate the state of the infrastructure over the last 50 years, it was necessary to analyse the following source material: tourist maps issued as independent publications, maps issued as supplements to tour guides, and aerial photography. Information from text sources was also used, e.g. from tourist guides, leaflets, and monographs. The temporal framework was defined as the 50 years from the 1960s until 2009, divided into five 10-year periods. In order to present the state of tourist infrastructure and its spatial and qualitative changes, six maps were produced (maps of states and types of changes). The conducted spatial analyses and the interpretation of the maps of states and changes in tourist infrastructure made it possible to capture both qualitative and quantitative changes. It was found that the changes in the trails were not regular: there were parts of trails that did not change for 40 years, and others that were constructed during the last decade. Presently, the area is densely covered with tourist trails and education paths. Measurements of the lengths of tourist trails and their parts with regard to land cover and category of roads made it possible to determine the character of the trails and the scope of changes. The conducted analyses proved the usefulness of cartographic methods in researching tourist

  13. Early Intervention Service Coordination Policies: National Policy Infrastructure

    ERIC Educational Resources Information Center

    Harbin, Gloria L.; Bruder, Mary Beth; Adams, Candace; Mazzarella, Cynthia; Whitbread, Kathy; Gabbard, Glenn; Staff, Ilene

    2004-01-01

    Effective implementation of service coordination in early intervention, as mandated by the Individuals with Disabilities Education Act, remains a challenge for most states. The present study provides a better understanding of the various aspects of the policy infrastructure that undergird service coordination across the United States. Data from a…

  14. Public Key Infrastructure Study

    DTIC Science & Technology

    1994-04-01

    commerce. This Public Key Infrastructure (PKI) study focuses on the United States Federal Government operations, but also addresses national and global ... issues in order to facilitate the interoperation of protected electronic commerce among the various levels of government in the U.S., private citizens

  15. Research infrastructure support to address ecosystem dynamics

    NASA Astrophysics Data System (ADS)

    Los, Wouter

    2014-05-01

    Predicting the evolution of ecosystems under climate change or human pressures is a challenge. Even understanding past or current processes is complicated, as a result of the many interactions and feedbacks that occur within and between components of the system. This talk will present an example of current research on changes in landscape evolution, hydrology, soil biogeochemical processes, zoological food webs, and plant community succession, and how these affect feedbacks to components of the systems, including the climate system. Multiple observations, experiments, and simulations provide a wealth of data, but not necessarily understanding. Model development for the coupled processes on different spatial and temporal scales is sensitive to variations in data and to parameter change. Fast high-performance computing may help to visualize the effect of these changes and the potential stability (and reliability) of the models. This may then allow for iteration between data production and models, towards stable models that reduce uncertainty and improve the prediction of change. The role of research infrastructures becomes crucial in overcoming barriers to such research. Environmental infrastructures cover physical site facilities, dedicated instrumentation, and e-infrastructure. The LifeWatch infrastructure for biodiversity and ecosystem research will provide services for data integration, analysis and modeling. But it has to cooperate intensively with the other kinds of infrastructures in order to support the iteration between data production and model computation. The cooperation in the ENVRI project (Common Operations of Environmental Research Infrastructures) is one of the initiatives fostering such multidisciplinary research.

  16. Future Naval Use of COTS Networking Infrastructure

    DTIC Science & Technology

    2009-07-01

    user to benefit from Google’s vast databases and computational resources. Obviously, the ability to harness the full power of the Cloud could be... Computing Impact Findings Action Items Take-Aways Appendices: Pages 54-68 A. Terms of Reference Document B. Sample Definitions of Cloud ...and definition of Cloud Computing . While Cloud Computing is developing in many variations – including Infrastructure as a Service (IaaS), Platform as

  17. Quantifying the Digital Divide: A Scientific Overview of Network Connectivity and Grid Infrastructure in South Asian Countries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khan, Shahryar Muhammad; /SLAC /NUST, Rawalpindi; Cottrell, R.Les

    2007-10-30

    The future of computing in High Energy Physics (HEP) applications depends on both the network and Grid infrastructure. South Asian countries such as India and Pakistan are making significant progress by building clusters as well as improving their network infrastructure. However, to facilitate the use of these resources, they need to manage the issues of network connectivity in order to be among the leading participants in computing for HEP experiments. In this paper we classify the connectivity for academic and research institutions of South Asia. The quantitative measurements are carried out using the PingER methodology, an approach that induces minimal ICMP traffic to gather active end-to-end network statistics. The PingER project has been measuring Internet performance for the last decade. Currently the measurement infrastructure comprises over 700 hosts in more than 130 countries, which collectively represent approximately 99% of the world's Internet-connected population. Thus, we are well positioned to characterize the world's connectivity. Here we present the current state of the National Research and Education Networks (NRENs) and Grid infrastructure in the South Asian countries and identify the areas of concern. We also present comparisons between South Asia and other developing as well as developed regions. We show that there is a strong correlation between network performance and several Human Development Indices.
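
    PingER-style active measurements can be approximated with a few lines around the system ping tool. The host list and the loss/RTT parsing below are a hypothetical sketch against Linux-style ping output, not the actual PingER toolkit.

    ```python
    import re
    import subprocess

    # Hypothetical PingER-style probe: send a small burst of ICMP echoes to each
    # remote host and summarize packet loss and round-trip time.
    HOSTS = ["example.edu", "example.org"]   # stand-ins for monitored NREN sites

    def probe(host: str, count: int = 10) -> dict:
        out = subprocess.run(
            ["ping", "-c", str(count), host],
            capture_output=True, text=True
        ).stdout
        loss = re.search(r"(\d+(?:\.\d+)?)% packet loss", out)
        rtt = re.search(r"= ([\d.]+)/([\d.]+)/([\d.]+)", out)  # min/avg/max
        return {
            "host": host,
            "loss_pct": float(loss.group(1)) if loss else 100.0,
            "rtt_avg_ms": float(rtt.group(2)) if rtt else None,
        }

    if __name__ == "__main__":
        for h in HOSTS:
            print(probe(h))
    ```

    Aggregating such loss and RTT samples per country over months is what allows the kind of connectivity ranking, and the correlation with development indices, that the paper reports.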

  18. Green Infrastructure for Arid Communities

    EPA Pesticide Factsheets

    how green infrastructure practices and the many associated benefits can be effective not only in wetter climates, but also for those communities in arid and semi-arid regions around the nation that have different precipitation patterns

  19. ATLAS computing on Swiss Cloud SWITCHengines

    NASA Astrophysics Data System (ADS)

    Haug, S.; Sciacca, F. G.; ATLAS Collaboration

    2017-10-01

    Consolidation towards more computing at flat budgets, beyond what pure chip technology can offer, is a requirement for the full scientific exploitation of the future data from the Large Hadron Collider at CERN in Geneva. One consolidation measure is to exploit cloud infrastructures whenever they are financially competitive. We report on the technical solutions used and the performance achieved running simulation tasks for the ATLAS experiment on SWITCHengines. SWITCHengines is a new infrastructure-as-a-service offering provided to Swiss academia by the National Research and Education Network SWITCH. While the solutions and performance figures are general, the financial considerations and policies, on which we also report, are country-specific.

  20. Internet Infrastructures and Health Care Systems: a Qualitative Comparative Analysis on Networks and Markets in the British National Health Service and Kaiser Permanente

    PubMed Central

    2002-01-01

    Background The Internet and emergent telecommunications infrastructures are transforming the future of health care management. The costs of health care delivery systems, products, and services continue to rise everywhere, but performance of health care delivery is associated with institutional and ideological considerations as well as availability of financial and technological resources. Objective To identify the effects of ideological differences on health care market infrastructures including the Internet and telecommunications technologies by a comparative case analysis of two large health care organizations: the British National Health Service and the California-based Kaiser Permanente health maintenance organization. Methods A qualitative comparative analysis focusing on the British National Health Service and the Kaiser Permanente health maintenance organization to show how system infrastructures vary according to market dynamics dominated by health care institutions ("push") or by consumer demand ("pull"). System control mechanisms may be technologically embedded, institutional, or behavioral. Results The analysis suggests that telecommunications technologies and the Internet may contribute significantly to health care system performance in a context of ideological diversity. Conclusions The study offers evidence to validate alternative models of health care governance: the national constitution model, and the enterprise business contract model. This evidence also suggests important questions for health care policy makers as well as researchers in telecommunications, organizational theory, and health care management. PMID:12554552

  1. Towards sustainable infrastructure management: knowledge-based service-oriented computing framework for visual analytics

    NASA Astrophysics Data System (ADS)

    Vatcha, Rashna; Lee, Seok-Won; Murty, Ajeet; Tolone, William; Wang, Xiaoyu; Dou, Wenwen; Chang, Remco; Ribarsky, William; Liu, Wanqiu; Chen, Shen-en; Hauser, Edd

    2009-05-01

    Infrastructure management (and its associated processes) is complex to understand and perform, which makes efficient, effective, and informed decisions hard to reach. The management involves a multi-faceted operation that requires robust data fusion, visualization, and decision making. In order to protect and build sustainable critical assets, we present our on-going multi-disciplinary large-scale project that establishes the Integrated Remote Sensing and Visualization (IRSV) system, with a focus on supporting bridge structure inspection and management. This project involves specific expertise from civil engineers, computer scientists, geographers, and real-world practitioners from industry and local and federal government agencies. IRSV is being designed to accommodate essential needs in the following aspects: 1) better understanding and enforcement of the complex inspection process, bridging the gap between evidence gathering and decision making through the implementation of an ontological knowledge engineering system; 2) aggregation, representation, and fusion of complex multi-layered heterogeneous data (e.g., infrared imaging, aerial photos, ground-mounted LIDAR, etc.) with domain application knowledge to support a machine-understandable recommendation system; 3) robust visualization techniques with large-scale analytical and interactive visualizations that support users' decision making; and 4) integration of these needs through a flexible Service-oriented Architecture (SOA) framework to compose and provide services on demand. IRSV is expected to serve as a management and data visualization tool for construction deliverable assurance and infrastructure monitoring, both periodically (annually, monthly, or even daily if needed) and after extreme events.

  2. On the Development of a Computing Infrastructure that Facilitates IPPD from a Decision-Based Design Perspective

    NASA Technical Reports Server (NTRS)

    Hale, Mark A.; Craig, James I.; Mistree, Farrokh; Schrage, Daniel P.

    1995-01-01

    Integrated Product and Process Development (IPPD) embodies the simultaneous application of both system and quality engineering methods throughout an iterative design process. The use of IPPD results in the time-conscious, cost-saving development of engineering systems. Georgia Tech has proposed the development of an Integrated Design Engineering Simulator that will merge Integrated Product and Process Development with interdisciplinary analysis techniques and state-of-the-art computational technologies. To implement IPPD, a Decision-Based Design perspective is encapsulated in an approach that focuses on the role of the human designer in product development. The approach has two parts and is outlined in this paper. First, an architecture, called DREAMS, is being developed that facilitates design from a decision-based perspective. Second, a supporting computing infrastructure, called IMAGE, is being designed. The current status of development is given and future directions are outlined.

  3. Defense of Cyber Infrastructures Against Cyber-Physical Attacks Using Game-Theoretic Models

    DOE PAGES

    Rao, Nageswara S. V.; Poole, Stephen W.; Ma, Chris Y. T.; ...

    2015-04-06

    The operation of cyber infrastructures relies on both cyber and physical components, which are subject to incidental and intentional degradations of different kinds. Within the context of network and computing infrastructures, we study the strategic interactions between an attacker and a defender using game-theoretic models that take into account both cyber and physical components. The attacker and defender optimize their individual utilities, expressed as sums of cost and system terms. First, we consider a Boolean attack-defense model, wherein the cyber and physical sub-infrastructures may be attacked and reinforced as individual units. Second, we consider a component attack-defense model, wherein their components may be attacked and defended, and the infrastructure requires minimum numbers of both to function. We show that the Nash equilibrium under uniform costs in both cases is computable in polynomial time, and it provides high-level deterministic conditions for infrastructure survival. When probabilities of successful attack and defense, and of incidental failures, are incorporated into the models, the results favor the attacker but otherwise remain qualitatively similar. This approach has been motivated and validated by our experiences with the UltraScience Net infrastructure, which was built to support high-performance network experiments. The analytical results, however, are more general, and we apply them to simplified models of cloud and high-performance computing infrastructures.
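
    The Boolean attack-defense idea can be illustrated with a tiny two-player matrix game. The payoff numbers and strategy labels below are invented, not taken from the paper; the sketch simply enumerates pure-strategy Nash equilibria by brute force.

    ```python
    import itertools

    # Hypothetical Boolean attack-defense game: each side either targets or
    # reinforces the cyber unit or the physical unit. Payoffs are
    # (attacker_utility, defender_utility) with made-up numbers standing in
    # for cost-minus-survival terms.
    STRATS = ["cyber", "physical"]
    PAYOFF = {  # (attack_target, defense_target) -> (u_attacker, u_defender)
        ("cyber", "cyber"):       (-2,  1),
        ("cyber", "physical"):    ( 3, -3),
        ("physical", "cyber"):    ( 2, -2),
        ("physical", "physical"): (-1,  0),
    }

    def is_nash(a, d):
        ua, ud = PAYOFF[(a, d)]
        best_a = all(PAYOFF[(a2, d)][0] <= ua for a2 in STRATS)
        best_d = all(PAYOFF[(a, d2)][1] <= ud for d2 in STRATS)
        return best_a and best_d

    for a, d in itertools.product(STRATS, STRATS):
        if is_nash(a, d):
            print(f"pure Nash equilibrium: attack={a}, defend={d}")

    # This particular matrix has no pure equilibrium (it is matching-pennies-like),
    # which is why mixed strategies matter in such attack-defense models.
    ```

    With only two units per side the brute-force check is trivial; the paper's polynomial-time result matters because real infrastructures have many components, where naive enumeration over strategy profiles blows up exponentially.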

  5. Structural health monitoring of civil infrastructure.

    PubMed

    Brownjohn, J M W

    2007-02-15

    Structural health monitoring (SHM) is a term increasingly used in the last decade to describe a range of systems implemented on full-scale civil infrastructures and whose purposes are to assist and inform operators about continued 'fitness for purpose' of structures under gradual or sudden changes to their state, to learn about either or both of the load and response mechanisms. Arguably, various forms of SHM have been employed in civil infrastructure for at least half a century, but it is only in the last decade or two that computer-based systems are being designed for the purpose of assisting owners/operators of ageing infrastructure with timely information for their continued safe and economic operation. This paper describes the motivations for and recent history of SHM applications to various forms of civil infrastructure and provides case studies on specific types of structure. It ends with a discussion of the present state-of-the-art and future developments in terms of instrumentation, data acquisition, communication systems and data mining and presentation procedures for diagnosis of infrastructural 'health'.

  6. In Situ Methods, Infrastructures, and Applications on High Performance Computing Platforms, a State-of-the-art (STAR) Report

    DOE PAGES

    Bethel, EW; Bauer, A; Abbasi, H; ...

    2016-06-10

    The considerable interest in the high performance computing (HPC) community in analyzing and visualizing data without first writing it to disk, i.e., in situ processing, is due to several factors. First is an I/O cost savings, where data is analyzed/visualized while being generated, without first being stored to a filesystem. Second is the potential for increased accuracy, where fine temporal sampling of transient analysis might expose complex behavior missed in coarse temporal sampling. Third is the ability to use all available resources, CPUs and accelerators, in the computation of analysis products. This STAR paper brings together researchers, developers and practitioners using in situ methods in extreme-scale HPC with the goal of presenting existing methods, infrastructures, and a range of computational science and engineering applications using in situ analysis and visualization.
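
    The core idea, analyzing each timestep while it is being produced instead of writing raw data to disk first, fits in a few lines. The toy solver and the running-statistics "analysis" below are hypothetical, not drawn from the report.

    ```python
    import statistics

    # Toy in situ pipeline: a fake simulation produces a field each timestep, and
    # a coupled analysis routine reduces it immediately, so only small summaries
    # (rather than full raw fields) ever need to be written out.

    def simulate_timestep(t, n=10_000):
        # stand-in for an expensive solver producing a large field
        return [((i * 31 + t * 17) % 1000) / 1000.0 for i in range(n)]

    summaries = []
    for t in range(100):
        field = simulate_timestep(t)
        # in situ analysis: reduce while the data is resident in memory
        summaries.append({
            "t": t,
            "mean": statistics.fmean(field),
            "max": max(field),
        })
        # 'field' is discarded here; the raw data is never written to disk

    print(summaries[0], summaries[-1])
    ```

    Note how every timestep is analyzed at full temporal resolution, which is the accuracy argument above: a post hoc workflow that only stored every tenth field could never recover what happened in between.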

  7. Evolution of the Virtualized HPC Infrastructure of Novosibirsk Scientific Center

    NASA Astrophysics Data System (ADS)

    Adakin, A.; Anisenkov, A.; Belov, S.; Chubarov, D.; Kalyuzhny, V.; Kaplin, V.; Korol, A.; Kuchin, N.; Lomakin, S.; Nikultsev, V.; Skovpen, K.; Sukharev, A.; Zaytsev, A.

    2012-12-01

    Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers, hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of the Russian Academy of Sciences, including Budker Institute of Nuclear Physics (BINP), Institute of Computational Technologies, and Institute of Computational Mathematics and Mathematical Geophysics (ICM&MG). Since each institute has specific requirements on the architecture of the computing farms involved in its research field, there are currently several computing facilities hosted by NSC institutes, each optimized for a particular set of tasks; the largest of these are the NSU Supercomputer Center, the Siberian Supercomputer Center (ICM&MG), and the Grid Computing Facility of BINP. A dedicated optical network with an initial bandwidth of 10 Gb/s connecting these three facilities was built in order to make it possible to share the computing resources among the research communities, thus increasing the efficiency of operating the existing computing facilities and offering a common platform for building the computing infrastructure for future scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technology based on the XEN and KVM platforms. This contribution gives a thorough review of the present status and future development prospects for the NSC virtualized computing infrastructure and the experience gained while using it to run production data analysis jobs related to HEP experiments being carried out at BINP, especially the KEDR detector experiment at the VEPP-4M electron-positron collider.

  8. Critical Infrastructure Protection II, The International Federation for Information Processing, Volume 290.

    NASA Astrophysics Data System (ADS)

    Papa, Mauricio; Shenoi, Sujeet

    The information infrastructure -- comprising computers, embedded devices, networks and software systems -- is vital to day-to-day operations in every sector: information and telecommunications, banking and finance, energy, chemicals and hazardous materials, agriculture, food, water, public health, emergency services, transportation, postal and shipping, government and defense. Global business and industry, governments, indeed society itself, cannot function effectively if major components of the critical information infrastructure are degraded, disabled or destroyed. Critical Infrastructure Protection II describes original research results and innovative applications in the interdisciplinary field of critical infrastructure protection. Also, it highlights the importance of weaving science, technology and policy in crafting sophisticated, yet practical, solutions that will help secure information, computer and network assets in the various critical infrastructure sectors. Areas of coverage include: - Themes and Issues - Infrastructure Security - Control Systems Security - Security Strategies - Infrastructure Interdependencies - Infrastructure Modeling and Simulation This book is the second volume in the annual series produced by the International Federation for Information Processing (IFIP) Working Group 11.10 on Critical Infrastructure Protection, an international community of scientists, engineers, practitioners and policy makers dedicated to advancing research, development and implementation efforts focused on infrastructure protection. The book contains a selection of twenty edited papers from the Second Annual IFIP WG 11.10 International Conference on Critical Infrastructure Protection held at George Mason University, Arlington, Virginia, USA in the spring of 2008.

  9. A national survey of the infrastructure and IT policies required to deliver computerised cognitive behavioural therapy in the English NHS

    PubMed Central

    Andrewes, Holly; Kenicer, David; McClay, Carrie-Anne; Williams, Christopher

    2013-01-01

    Objective This study aimed to identify if patients have adequate access to Computerised Cognitive Behavioural Therapy (cCBT) programmes in all mental health trusts across England. Design The primary researcher contacted a targeted sample of information technology (IT) leads in each mental health trust in England to complete the survey. Setting Telephone, email and postal mail were used to contact an IT lead or nominated expert from each mental health trust. Participants 48 of the 56 IT experts from each mental health trust in England responded. The experts who were chosen had sufficient knowledge of the infrastructure, technology, policies and regulations to answer all survey questions. Results 77% of trusts provided computers for direct patient use, with computers in all except one trust meeting the specifications to access cCBT. However, 24% of trusts acknowledged that the number of computers provided was insufficient to provide a trust-wide service. 71% stated that the bandwidth available was adequate to provide access to cCBT sites, yet for many trusts, internet speed was identified as unpredictable and variable between locations. IT policies in only 56% of the trusts allowed National Health Service (NHS) staff to directly support patients as they complete cCBT courses via emails to the patients’ personal email account. Only 37% allowed support via internet video calls, and only 9% allowed support via instant messaging services. Conclusions Patient access to cCBT in English NHS mental health trusts is limited by the inadequate number of computers provided to patients, unpredictable bandwidth speed and inconsistent IT policies, which restrict patients from receiving the support needed to maximise the success of this therapy. English NHS mental health trusts need to alter IT policy and improve resources to reduce the waiting time for psychological resources required for patients seeking this evidence-based therapy. PMID:23377995

  10. Informatics Infrastructure for the Materials Genome Initiative

    NASA Astrophysics Data System (ADS)

    Dima, Alden; Bhaskarla, Sunil; Becker, Chandler; Brady, Mary; Campbell, Carelyn; Dessauw, Philippe; Hanisch, Robert; Kattner, Ursula; Kroenlein, Kenneth; Newrock, Marcus; Peskin, Adele; Plante, Raymond; Li, Sheng-Yen; Rigodiat, Pierre-François; Amaral, Guillaume Sousa; Trautt, Zachary; Schmitt, Xavier; Warren, James; Youssef, Sharief

    2016-08-01

    A materials data infrastructure that enables the sharing and transformation of a wide range of materials data is an essential part of achieving the goals of the Materials Genome Initiative. We describe two high-level requirements of such an infrastructure as well as an emerging open-source implementation consisting of the Materials Data Curation System and the National Institute of Standards and Technology Materials Resource Registry.

  11. Hydrogen Infrastructure Testing and Research Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2017-04-10

    Learn about the Hydrogen Infrastructure Testing and Research Facility (HITRF), where NREL researchers are working on vehicle and hydrogen infrastructure projects that aim to enable more rapid inclusion of fuel cell and hydrogen technologies in the market to meet consumer and national goals for emissions reduction, performance, and energy security. As part of NREL’s Energy Systems Integration Facility (ESIF), the HITRF is designed for collaboration with a wide range of hydrogen, fuel cell, and transportation stakeholders.

  12. Idaho National Laboratory’s Analysis of ARRA-Funded Plug-in Electric Vehicle and Charging Infrastructure Projects: Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Francfort, Jim; Bennett, Brion; Carlson, Richard

    2015-09-01

    Battelle Energy Alliance, LLC, managing and operating contractor for the U.S. Department of Energy’s (DOE) Idaho National Laboratory (INL), is the lead laboratory for DOE’s Advanced Vehicle Testing Activity (AVTA). INL’s conduct of the AVTA resulted in a significant base of knowledge and experience in the area of testing light-duty vehicles that reduced transportation-related petroleum consumption. Due to this experience, INL was tasked by DOE to develop agreements with companies that were the recipients of The American Recovery and Reinvestment Act of 2009 (ARRA) grants, which would allow INL to collect raw data from light-duty vehicles and charging infrastructure. INL developed non-disclosure agreements (NDAs) with several companies and their partners that resulted in INL being able to receive raw data via server-to-server connections from the partner companies. This raw data allowed INL to independently conduct data quality checks, perform analysis, and report publicly to DOE, partners, and stakeholders how drivers used both new vehicle technologies and the deployed charging infrastructure. The ultimate goal was not the deployment of vehicles and charging infrastructure, but rather to create real-world laboratories of vehicles, charging infrastructure, and drivers that would aid in the design of future electric drive transportation systems. The five projects that INL collected data from, and their partners, are:
    • ChargePoint America - Plug-in Electric Vehicle Charging Infrastructure Demonstration
    • Chrysler Ram PHEV Pickup - Vehicle Demonstration
    • General Motors Chevrolet Volt - Vehicle Demonstration
    • The EV Project - Plug-in Electric Vehicle Charging Infrastructure Demonstration
    • EPRI / Via Motors PHEVs - Vehicle Demonstration
    The document serves to benchmark the performance science involved in the execution, analysis, and reporting for the five above projects, which provided lessons learned based on drivers’ use of

  13. The ATLAS Simulation Infrastructure

    DOE PAGES

    Aad, G.; Abbott, B.; Abdallah, J.; ...

    2010-09-25

    The simulation software for the ATLAS Experiment at the Large Hadron Collider is being used for large-scale production of events on the LHC Computing Grid. This simulation requires many components, from the generators that simulate particle collisions, through packages simulating the response of the various detectors and triggers. All of these components come together under the ATLAS simulation infrastructure. In this paper, that infrastructure is discussed, including that supporting the detector description, interfacing the event generation, and combining the GEANT4 simulation of the response of the individual detectors. Also described are the tools allowing the software validation, performance testing, and the validation of the simulated output against known physics processes.

  14. National information infrastructure applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forslund, D.; George, J.; Greenfield, J.

    1996-07-01

    This is the final report of a two-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). This project sought to develop a telemedical application in which medical records are electronically searched and digital signatures of real CT scan data are indexed, used to characterize a range of diseases, and used to rapidly compare on-line medical data with archived clinical data. This system includes multimedia data management, interactive collaboration, data compression and transmission, remote data storage and retrieval, and automated data analysis, integrated in a distributed application between Los Alamos and the National Jewish Hospital.

  15. UNH Data Cooperative: A Cyber Infrastructure for Earth System Studies

    NASA Astrophysics Data System (ADS)

    Braswell, B. H.; Fekete, B. M.; Prusevich, A.; Gliden, S.; Magill, A.; Vorosmarty, C. J.

    2007-12-01

    Earth system scientists and managers have a continuously growing demand for a wide array of earth observations derived from various data sources including (a) modern satellite retrievals, (b) "in-situ" records, (c) various simulation outputs, and (d) assimilated data products combining model results with observational records. The sheer quantity of data and formatting inconsistencies make it difficult for users to take full advantage of this important information resource. Thus the system could benefit from a thorough retooling of our current data processing procedures and infrastructure. Emerging technologies, like OPeNDAP and OGC map services, open standard data formats (NetCDF, HDF), and data cataloging systems (NASA ECHO, Global Change Master Directory, etc.) are providing the basis for a new approach in data management and processing, where web-services are increasingly designed to serve computer-to-computer communications without human interaction and complex analysis can be carried out over distributed computer resources interconnected via cyber infrastructure. The UNH Earth System Data Collaborative is designed to utilize the aforementioned emerging web technologies to offer new means of access to earth system data. While the UNH Data Collaborative serves a wide array of data ranging from weather station data (Climate Portal) to ocean buoy records and ship tracks (Portsmouth Harbor Initiative) to land cover characteristics, etc., the underlying data architecture shares common components for data mining and data dissemination via web-services. Perhaps the most distinctive element of the UNH Data Cooperative's IT infrastructure is its prototype modeling environment for regional ecosystem surveillance over the Northeast corridor, which allows the integration of complex earth system model components with the Cooperative's data services. While the complexity of the IT infrastructure to perform complex computations is continuously increasing, scientists are often forced
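
    As a flavour of the computer-to-computer access pattern described above, the sketch below opens a remote dataset over OPeNDAP and pulls only a small subset. The URL and variable name are placeholders; the only real API assumed is netCDF4-python's ability to open OPeNDAP URLs directly (when built with DAP support).

    ```python
    from netCDF4 import Dataset  # netCDF4-python can open OPeNDAP URLs directly

    # Hypothetical endpoint; a real one would come from a catalog such as the
    # Global Change Master Directory.
    URL = "http://example.edu/opendap/climate/air_temperature.nc"

    ds = Dataset(URL)                # no bulk download: the server subsets on demand
    temp = ds.variables["air_temp"]  # placeholder variable name
    # Request only a small slice; only these bytes cross the network.
    subset = temp[0, 10:20, 10:20]
    print(subset.shape, float(subset.mean()))
    ds.close()
    ```

    Because the subsetting happens server-side, a client never needs the storage or bandwidth for the full archive, which is what makes machine-to-machine workflows over large earth science collections practical.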

  16. Healthcare information technology infrastructures in Turkey.

    PubMed

    Dogac, A; Yuksel, M; Ertürkmen, G L; Kabak, Y; Namli, T; Yıldız, M H; Ay, Y; Ceyhan, B; Hülür, U; Oztürk, H; Atbakan, E

    2014-05-22

    The objective of this paper is to describe some of the major healthcare information technology (IT) infrastructures in Turkey, namely, Sağlık-Net (Turkish for "Health-Net"), the Centralized Hospital Appointment System, the Basic Health Statistics Module, the Core Resources Management System, and the e-prescription system of the Social Security Institution. International collaboration projects that are integrated with Sağlık-Net are also briefly summarized. The authors provide a survey of some of the major healthcare IT infrastructures in Turkey. Sağlık-Net has two main components: the National Health Information System (NHIS) and the Family Medicine Information System (FMIS). The NHIS is a nation-wide infrastructure for sharing patients' Electronic Health Records (EHRs). So far, EHRs of 78.9 million people have been created in the NHIS. Similarly, family medicine is operational across the whole country via the FMIS. The Centralized Hospital Appointment System enables citizens to easily make appointments with healthcare providers. The Basic Health Statistics Module is used for collecting information about health status, risks and indicators across the country. The Core Resources Management System speeds up the flow of information between the headquarters and the Provincial Health Directorates. The e-prescription system is linked with Sağlık-Net and seamlessly integrated with healthcare provider information systems. Finally, Turkey is involved in several international projects for sharing experience and disseminating national developments. With the introduction of the "Health Transformation Program" in 2003, a number of successful healthcare IT infrastructures have been developed in Turkey. Currently, work is ongoing to enhance and further improve their functionality.

  17. Infrastructure SIP Requirements and Guidance

    EPA Pesticide Factsheets

    The Clean Air Act requires states to submit SIPs that implement, maintain, and enforce a new or revised national ambient air quality standard (NAAQS) within 3 years of EPA issuing the standard. The Infrastructure SIP is required for all states.

  18. The Australian Computational Earth Systems Simulator

    NASA Astrophysics Data System (ADS)

    Mora, P.; Muhlhaus, H.; Lister, G.; Dyskin, A.; Place, D.; Appelbe, B.; Nimmervoll, N.; Abramson, D.

    2001-12-01

    Numerical simulation of the physics and dynamics of the entire earth system offers an outstanding opportunity for advancing earth system science and technology, but it represents a major challenge due to the range of scales and physical processes involved, as well as the magnitude of the software engineering effort required. However, new simulation and computer technologies are bringing this objective within reach. Under a special competitive national funding scheme to establish new Major National Research Facilities (MNRF), the Australian government, together with a consortium of universities and research institutions, has funded construction of the Australian Computational Earth Systems Simulator (ACcESS). The Simulator, a computational virtual earth, will provide the Australian earth systems science community with the research infrastructure required for simulations of dynamic earth processes at scales ranging from microscopic to global. It will consist of thematic supercomputer infrastructure and an earth systems simulation software system. The Simulator models and software will be constructed over a five-year period by a multi-disciplinary team of computational scientists, mathematicians, earth scientists, civil engineers and software engineers. The construction team will integrate numerical simulation models (3D discrete element/lattice solid models, particle-in-cell large-deformation finite-element methods, stress reconstruction models, multi-scale continuum models, etc.) with geophysical, geological and tectonic models, through advanced software engineering and visualization technologies. When fully constructed, the Simulator aims to provide the software and hardware infrastructure needed to model solid earth phenomena including global-scale dynamics and mineralisation processes, crustal-scale processes including plate tectonics, mountain building and interacting fault system dynamics, and micro-scale processes that control the geological, physical and dynamic…

  19. Fermilab computing at the Intensity Frontier

    DOE PAGES

    Group, Craig; Fuess, S.; Gutsche, O.; ...

    2015-12-23

    The Intensity Frontier refers to a diverse set of particle physics experiments using high-intensity beams. In this paper I focus the discussion on the computing requirements and solutions of a set of neutrino and muon experiments in progress or planned to take place at the Fermi National Accelerator Laboratory, located near Chicago, Illinois. The experiments face unique challenges but also have overlapping computational needs. In principle, by exploiting this commonality and utilizing centralized computing tools and resources, requirements can be satisfied efficiently, and the scientists of individual experiments can focus more on the science and less on the development of tools and infrastructure.

  20. The israeli virtual national health record: a robust national health information infrastructure based on a firm foundation of trust.

    PubMed

    Saiag, Esther

    2005-01-01

    In many developed countries, a coordinated effort is underway to build national and regional Health Information Infrastructures (HII) for linking disparate sites of care, so that access to a comprehensive health record is feasible when critical medical decisions are made [1]. However, widespread adoption of such national projects is hindered by a series of barriers: regulatory, technical, financial and cultural. Above all, a robust national HII requires a firm foundation of trust: patients must be assured that their confidential health information will not be misused and that there are adequate legal remedies in the event of inappropriate behavior on the part of either authorized or unauthorized parties [2]. The evolving Israeli national HII is an innovative, state-of-the-art implementation of wide-ranging clinical inter-organizational data exchange, based on a unique concept of virtually temporary sharing of information. A logical connection of multiple caregivers and medical organizations creates a patient-centric virtual repository, without centralization. All information remains in its original format, location, system and ownership. On demand, relevant information is instantly integrated and delivered to the point of care. This system, successfully covering more than half of Israel's population, is currently evolving from a voluntary private-public partnership (dbMOTION and the CLALIT HMO) to a formal national reality. The governmental leadership now taking over the process is essential to achieving the full potential of health information technology. All partners of the Israeli health system are coordinated in concert with each other, driven by a shared vision: ensuring a secure, private and confidential health information exchange.

  1. Building the National Information Infrastructure in K-12 Education: A Comprehensive Survey of Attitudes towards Linking Both Sides of the Desk. A Report of the Global Telecommunications Infrastructure Research Project. Research Report Series.

    ERIC Educational Resources Information Center

    Pereira, Francis; And Others

    This survey was designed to elicit the perceptions of the members of the educational community on four issues concerning the NII (National Information Infrastructure), and to test whether these visions of the NII were shared by educators. The issues were: (1) the benefits of the NII to the education sector and specifically whether the NII will be…

  2. Federated data storage and management infrastructure

    NASA Astrophysics Data System (ADS)

    Zarochentsev, A.; Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Hristov, P.

    2016-10-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. Computing models for the High Luminosity LHC era anticipate a growth of storage needs by at least an order of magnitude, which will require new approaches to data storage organization and data handling. In our project we address the fundamental problem of designing an architecture that integrates distributed heterogeneous disk resources for LHC experiments and other data-intensive science applications and provides access to data from heterogeneous computing facilities. We have prototyped a federated storage for Russian T1 and T2 centers located in Moscow, St. Petersburg and Gatchina, as well as a Russian/CERN federation. We have conducted extensive tests of the underlying network infrastructure and storage endpoints with synthetic performance measurement tools as well as with HENP-specific workloads, including ones running on supercomputing platforms, cloud computing and the Grid for the ALICE and ATLAS experiments. We present our current accomplishments with running LHC data analysis remotely and locally to demonstrate our ability to efficiently use federated data storage experiment-wide within national academic facilities for high energy and nuclear physics, as well as for other data-intensive science applications such as bioinformatics.
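
    A toy illustration of the kind of endpoint test mentioned above: the sketch below sequentially reads a remote object over HTTP and reports throughput. It is a stand-in only; the actual tests used dedicated synthetic measurement tools and HENP-specific workloads, and the URL is a placeholder.

      # Toy read-throughput probe; the real tests used dedicated tools and
      # HENP workloads. The URL below is a placeholder.
      import time
      import urllib.request

      def read_throughput(url: str, chunk: int = 1 << 20) -> float:
          """Sequentially read a remote object and return MB/s."""
          start, total = time.monotonic(), 0
          with urllib.request.urlopen(url) as resp:
              while True:
                  data = resp.read(chunk)
                  if not data:
                      break
                  total += len(data)
          return (total / (1 << 20)) / (time.monotonic() - start)

      print(f"{read_throughput('https://example.org/testfile.bin'):.1f} MB/s")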

  3. Radiotherapy infrastructure and human resources in Switzerland : Present status and projected computations for 2020.

    PubMed

    Datta, Niloy Ranjan; Khan, Shaka; Marder, Dietmar; Zwahlen, Daniel; Bodis, Stephan

    2016-09-01

    The purpose of this study was to evaluate the present status of radiotherapy infrastructure and human resources in Switzerland and compute projections for 2020. The European Society of Therapeutic Radiation Oncology "Quantification of Radiation Therapy Infrastructure and Staffing" guidelines (ESTRO-QUARTS) and those of the International Atomic Energy Agency (IAEA) were applied to estimate the requirements for teleradiotherapy (TRT) units, radiation oncologists (RO), medical physicists (MP) and radiotherapy technologists (RTT). The databases used for computation of the present gap and additional requirements were (a) Global Cancer Incidence, Mortality and Prevalence (GLOBOCAN) for cancer incidence, (b) the Directory of Radiotherapy Centres (DIRAC) of the IAEA for existing TRT units, (c) human resources from the recent ESTRO "Health Economics in Radiation Oncology" (HERO) survey, and (d) radiotherapy utilization (RTU) rates for each tumour site, published by the Ingham Institute for Applied Medical Research (IIAMR). In 2015, 30,999 of 45,903 cancer patients would have required radiotherapy. By 2020, this will have increased to 34,041 of 50,427 cancer patients. Switzerland presently has an adequate number of TRT units, but a deficit of 57 ROs, 14 MPs and 36 RTTs. By 2020, an additional 7 TRT units, 72 ROs, 22 MPs and 66 RTTs will be required. In addition, a realistic dynamic model for calculating staff requirements due to anticipated changes in future radiotherapy practice is proposed; this model could be tailored to any individual radiotherapy centre. A 9.8% increase in radiotherapy requirements is expected for cancer patients over the next 5 years. The present study should assist stakeholders and health planners in designing an appropriate strategy for meeting Switzerland's future radiotherapy needs.
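
    The gap computation the study describes (cancer incidence times a radiotherapy utilization rate, divided by guideline staffing ratios) can be sketched as follows. The ratios and existing-resource counts in this fragment are illustrative placeholders, not the actual ESTRO-QUARTS or IAEA figures, so its output will not reproduce the paper's numbers.

      # Illustrative sketch of the infrastructure/staffing gap computation.
      # All ratios and existing counts are placeholders, not ESTRO-QUARTS values.
      def radiotherapy_gap(incidence, rtu_rate, existing,
                           patients_per_trt=450, patients_per_ro=250,
                           patients_per_mp=500, patients_per_rtt=150):
          """Return additional TRT units and staff needed, by category."""
          rt_patients = incidence * rtu_rate  # patients needing radiotherapy
          needed = {"TRT": rt_patients / patients_per_trt,
                    "RO":  rt_patients / patients_per_ro,
                    "MP":  rt_patients / patients_per_mp,
                    "RTT": rt_patients / patients_per_rtt}
          return {k: max(0, round(v - existing.get(k, 0))) for k, v in needed.items()}

      # 2020 projection from the abstract: 34,041 of 50,427 patients need RT.
      print(radiotherapy_gap(50427, 34041 / 50427,
                             {"TRT": 70, "RO": 80, "MP": 50, "RTT": 180}))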

  4. Railroad infrastructure trespass detection performance guidelines

    DOT National Transportation Integrated Search

    2011-01-01

    The U.S. Department of Transportation's John A. Volpe National Transportation Systems Center, under the direction of the Federal Railroad Administration, conducted a 3-year demonstration of an automated prototype railroad infrastructure security sy...

  5. Communications satellites in the national and global health care information infrastructure: their role, impact, and issues

    NASA Technical Reports Server (NTRS)

    Zuzek, J. E.; Bhasin, K. B.

    1996-01-01

    Health care services delivered from a distance, known collectively as telemedicine, are being increasingly demonstrated on various transmission media. Telemedicine activities have included diagnosis by a doctor at a remote location, emergency and disaster medical assistance, medical education, and medical informatics. The ability of communications satellites to offer communication channels and bandwidth on demand, connectivity to mobile, remote and underserved regions, and global access will afford them a critical role in telemedicine applications within the National and Global Information Infrastructure (NII/GII). The importance that communications satellites will have for telemedicine applications within the NII/GII, the differences in requirements for the NII vs. the GII, the major issues of interoperability, confidentiality, quality, availability, and cost, and preliminary conclusions on future usability, based on a review of several recent trials at national and global levels, are presented.

  6. Critical Homeland Infrastructure Protection

    DTIC Science & Technology

    2007-01-01

    Examples include: detection of surveillance activities; stand-off detection of chemical, biological, nuclear, radiation and explosive threats.

  7. A simple grid implementation with Berkeley Open Infrastructure for Network Computing using BLAST as a model

    PubMed Central

    Pinthong, Watthanai; Muangruen, Panya

    2016-01-01

    The development of high-throughput technologies such as next-generation sequencing allows thousands of experiments to be performed simultaneously while reducing resource requirements. Consequently, a massive amount of experimental data is now rapidly generated. Nevertheless, the data are not readily usable or meaningful until they are further analysed and interpreted. Due to the size of the data, a high-performance computer (HPC) is required for the analysis and interpretation. However, HPCs are expensive and difficult to access. Other means, such as cloud computing services and grid computing systems, were developed to give researchers the power of an HPC without the need to purchase and maintain one. In this study, we implemented grid computing in a computer training center environment using the Berkeley Open Infrastructure for Network Computing (BOINC) as a job distributor and data manager, combining all desktop computers to virtualize an HPC. Fifty desktop computers were used to set up a grid system during off-hours. In order to test the performance of the grid system, we adapted the Basic Local Alignment Search Tool (BLAST) to the BOINC system. Sequencing results from the Illumina platform were aligned to the human genome database by BLAST on the grid system, and the results and processing time were compared to those from a single desktop computer and an HPC. The estimated durations of a BLAST analysis of 4 million sequence reads on a desktop PC, an HPC and the grid system were 568, 24 and 5 days, respectively. Thus, the grid implementation of BLAST with BOINC is an efficient alternative to an HPC for sequence alignment. The grid implementation with BOINC also helped tap unused computing resources during off-hours and could easily be modified for other available bioinformatics software. PMID:27547555
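
    The reported run times imply the following speedups; this small calculation simply restates the abstract's figures and notes the resulting per-node efficiency of the 50-node grid.

      # Speedups implied by the run times reported above (568, 24 and 5 days).
      durations = {"desktop PC": 568, "HPC": 24, "BOINC grid (50 PCs)": 5}
      baseline = durations["desktop PC"]
      for system, days in durations.items():
          print(f"{system}: {days} days, speedup {baseline / days:.0f}x")
      # The 50-node grid shows (568/5)/50 = 2.3x per node versus the desktop
      # baseline, which suggests the grid nodes were faster or better
      # configured than the single reference PC.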

  8. Defense of Cyber Infrastructures Against Cyber-Physical Attacks Using Game-Theoretic Models.

    PubMed

    Rao, Nageswara S V; Poole, Stephen W; Ma, Chris Y T; He, Fei; Zhuang, Jun; Yau, David K Y

    2016-04-01

    The operation of cyber infrastructures relies on both cyber and physical components, which are subject to incidental and intentional degradations of different kinds. Within the context of network and computing infrastructures, we study the strategic interactions between an attacker and a defender using game-theoretic models that take into account both cyber and physical components. The attacker and defender optimize their individual utilities, expressed as sums of cost and system terms. First, we consider a Boolean attack-defense model, wherein the cyber and physical subinfrastructures may be attacked and reinforced as individual units. Second, we consider a component attack-defense model wherein their components may be attacked and defended, and the infrastructure requires minimum numbers of both to function. We show that the Nash equilibrium under uniform costs in both cases is computable in polynomial time, and it provides high-level deterministic conditions for the infrastructure survival. When probabilities of successful attack and defense, and of incidental failures, are incorporated into the models, the results favor the attacker but otherwise remain qualitatively similar. This approach has been motivated and validated by our experiences with UltraScience Net infrastructure, which was built to support high-performance network experiments. The analytical results, however, are more general, and we apply them to simplified models of cloud and high-performance computing infrastructures. © 2015 Society for Risk Analysis.
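
    The utility structure described above ("sums of cost and system terms") can be written schematically as below; the notation is ours for illustration, not the paper's, and it omits the probabilistic extensions.

      % Schematic utilities for the Boolean attack-defense model.
      % The symbols (x, y, g, c_A, c_D, P_surv) are illustrative only.
      \begin{align*}
        U_A(x, y) &= g\,\bigl(1 - P_{\mathrm{surv}}(x, y)\bigr) - c_A\,|x|
            && \text{attacker: gain from system failure minus attack cost}\\
        U_D(x, y) &= g\,P_{\mathrm{surv}}(x, y) - c_D\,|y|
            && \text{defender: value of survival minus reinforcement cost}
      \end{align*}
      % Here x, y in {0,1}^2 mark attacks/reinforcements on the cyber and
      % physical sub-infrastructures, and P_surv is the survival probability.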

  9. Nuclear Energy Infrastructure Database Description and User’s Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heidrich, Brenden

    In 2014, the Deputy Assistant Secretary for Science and Technology Innovation initiated the Nuclear Energy (NE)–Infrastructure Management Project by tasking the Nuclear Science User Facilities, formerly the Advanced Test Reactor National Scientific User Facility, to create a searchable and interactive database of all pertinent NE-supported and -related infrastructure. This database, known as the Nuclear Energy Infrastructure Database (NEID), is used for analyses to establish needs, redundancies, efficiencies, distributions, etc., to best understand the utility of NE's infrastructure and inform the content of infrastructure calls. The Nuclear Science User Facilities developed the database by utilizing data and policy direction from a variety of reports from the U.S. Department of Energy, the National Research Council, the International Atomic Energy Agency, and various other federal and civilian resources. The NEID currently contains data on 802 research and development instruments housed in 377 facilities at 84 institutions in the United States and abroad. The effort to maintain and expand the database is ongoing. Detailed information on many facilities must be gathered from associated institutions and added to complete the database. The data must be validated and kept current to capture facility and instrumentation status as well as to cover new acquisitions and retirements. This document provides a short tutorial on the navigation of the NEID web portal at NSUF-Infrastructure.INL.gov.

  10. Healthcare Information Technology Infrastructures in Turkey

    PubMed Central

    Yuksel, M.; Ertürkmen, G. L.; Kabak, Y.; Namli, T.; Yıldız, M. H.; Ay, Y.; Ceyhan, B.; Hülür, Ü.; Öztürk, H.; Atbakan, E.

    2014-01-01

    Summary. Objectives: The objective of this paper is to describe some of the major healthcare information technology (IT) infrastructures in Turkey, namely, Sağlık-Net (Turkish for “Health-Net”), the Centralized Hospital Appointment System, the Basic Health Statistics Module, the Core Resources Management System, and the e-prescription system of the Social Security Institution. International collaboration projects that are integrated with Sağlık-Net are also briefly summarized. Methods: The authors provide a survey of some of the major healthcare IT infrastructures in Turkey. Results: Sağlık-Net has two main components: the National Health Information System (NHIS) and the Family Medicine Information System (FMIS). The NHIS is a nation-wide infrastructure for sharing patients’ Electronic Health Records (EHRs). So far, EHRs of 78.9 million people have been created in the NHIS. Similarly, family medicine is operational across the whole country via the FMIS. The Centralized Hospital Appointment System enables citizens to easily make appointments with healthcare providers. The Basic Health Statistics Module is used for collecting information about health status, risks and indicators across the country. The Core Resources Management System speeds up the flow of information between the headquarters and the Provincial Health Directorates. The e-prescription system is linked with Sağlık-Net and seamlessly integrated with healthcare provider information systems. Finally, Turkey is involved in several international projects for sharing experience and disseminating national developments. Conclusion: With the introduction of the “Health Transformation Program” in 2003, a number of successful healthcare IT infrastructures have been developed in Turkey. Currently, work is ongoing to enhance and further improve their functionality. PMID:24853036

  11. Control System Applicable Use Assessment of the Secure Computing Corporation - Secure Firewall (Sidewinder)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hadley, Mark D.; Clements, Samuel L.

    2009-01-01

    Battelle’s National Security & Defense objective is “applying unmatched expertise and unique facilities to deliver homeland security solutions. From detection and protection against weapons of mass destruction to emergency preparedness/response and protection of critical infrastructure, we are working with industry and government to integrate policy, operational, technological, and logistical parameters that will secure a safe future.” In an ongoing effort to meet this mission, engagements with industry intended to improve the operational and technical attributes of commercial solutions related to national security initiatives are necessary. This ensures that capabilities for protecting critical infrastructure assets are considered by commercial entities in their development, design, and deployment lifecycles, thus addressing the alignment of identified deficiencies with the improvements needed to support national cyber security initiatives. The Secure Firewall (Sidewinder) appliance by Secure Computing was assessed for applicable use in critical infrastructure control system environments, such as electric power, nuclear and other facilities containing critical systems that require augmented protection from cyber threats. The testing was performed in the Pacific Northwest National Laboratory’s (PNNL) Electric Infrastructure Operations Center (EIOC). The Secure Firewall was tested in a network configuration that emulates a typical control center network and then evaluated. A number of observations and recommendations relating to features currently included in the Secure Firewall that support critical infrastructure security needs are included in this report.

  12. 78 FR 42553 - NASA Advisory Council; Information Technology Infrastructure Committee; Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-16

    ...; Information Technology Infrastructure Committee; Meeting AGENCY: National Aeronautics and Space Administration... Information Technology Infrastructure Committee (ITIC) of the NASA Advisory Council (NAC). This Committee..., DC 20546. FOR FURTHER INFORMATION CONTACT: Ms. Deborah Diaz, ITIC Executive Secretariat, NASA...

  13. 78 FR 72718 - NASA Advisory Council; Information Technology Infrastructure Committee; Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-03

    ...; Information Technology Infrastructure Committee; Meeting AGENCY: National Aeronautics and Space Administration... Information Technology Infrastructure Committee (ITIC) of the NASA Advisory Council (NAC). DATES: Tuesday... Chief Information Officer Space Launch System Kennedy Space Center Operations and Technology Issues...

  14. A service-based BLAST command tool supported by cloud infrastructures.

    PubMed

    Carrión, Abel; Blanquer, Ignacio; Hernández, Vicente

    2012-01-01

    Notwithstanding the benefits of distributed-computing infrastructures for empowering bioinformatics analysis tools with the needed computing and storage capability, the actual use of these infrastructures is still low. Learning curves and deployment difficulties have reduced their impact on the wider research community. This article presents a porting strategy for BLAST based on a multiplatform client and a service that provides the same interface as sequential BLAST, thus reducing the learning curve and minimizing the impact on integration into existing workflows. The porting has been done using the execution and data access components of the EC project Venus-C and the Windows Azure infrastructure provided in this project. The results obtained demonstrate a low overhead in the global execution framework and reasonable speed-up and cost-efficiency with respect to a sequential version.
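
    The "same interface" idea can be pictured with a thin command-line client that accepts familiar BLAST-style flags and forwards the job to a remote service. Everything below (the endpoint, the job-submission protocol, the flag subset) is a hypothetical sketch, not the actual Venus-C/Azure client.

      # Hypothetical thin client mirroring a sequential BLAST command line.
      # The service URL and its job protocol are invented for illustration.
      import argparse
      import requests  # pip install requests

      SERVICE = "https://example.org/blast-service/jobs"  # hypothetical

      def main():
          parser = argparse.ArgumentParser(prog="blastp")
          parser.add_argument("-query")  # same flags a user already knows
          parser.add_argument("-db")
          parser.add_argument("-out")
          args = parser.parse_args()
          with open(args.query) as f:
              job = requests.post(SERVICE, json={"db": args.db, "query": f.read()})
          # Polling and error handling omitted for brevity.
          result = requests.get(f"{SERVICE}/{job.json()['id']}/result")
          with open(args.out, "w") as f:
              f.write(result.text)

      if __name__ == "__main__":
          main()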

  15. Connectivity and Resilience: A Multidimensional Analysis of Infrastructure Impacts in the Southwestern Amazon

    ERIC Educational Resources Information Center

    Perz, Stephen G.; Shenkin, Alexander; Barnes, Grenville; Cabrera, Liliana; Carvalho, Lucas A.; Castillo, Jorge

    2012-01-01

    Infrastructure is a worldwide policy priority for national development via regional integration into the global economy. However, economic, ecological and social research draws contrasting conclusions about the consequences of infrastructure. We present a synthetic approach to the study of infrastructure, focusing on a multidimensional treatment…

  16. Software Reuse Methods to Improve Technological Infrastructure for e-Science

    NASA Technical Reports Server (NTRS)

    Marshall, James J.; Downs, Robert R.; Mattmann, Chris A.

    2011-01-01

    Social computing has the potential to contribute to scientific research. Ongoing developments in information and communications technology improve capabilities for enabling scientific research, including research fostered by social computing capabilities. The recent emergence of e-Science practices has demonstrated the benefits from improvements in the technological infrastructure, or cyber-infrastructure, that has been developed to support science. Cloud computing is one example of this e-Science trend. Our own work in the area of software reuse offers methods that can be used to improve new technological development, including cloud computing capabilities, to support scientific research practices. In this paper, we focus on software reuse and its potential to contribute to the development and evaluation of information systems and related services designed to support new capabilities for conducting scientific research.

  17. Second annual Transportation Infrastructure Engineering Conference.

    DOT National Transportation Integrated Search

    2013-10-01

    The conference will highlight a few of the current projects that have been sponsored by the Center for Transportation Infrastructure and Safety (CTIS), a national University Transportation Center at S&T. In operation since 1998, the CTIS supports...

  18. VASA: Interactive Computational Steering of Large Asynchronous Simulation Pipelines for Societal Infrastructure.

    PubMed

    Ko, Sungahn; Zhao, Jieqiong; Xia, Jing; Afzal, Shehzad; Wang, Xiaoyu; Abram, Greg; Elmqvist, Niklas; Kne, Len; Van Riper, David; Gaither, Kelly; Kennedy, Shaun; Tolone, William; Ribarsky, William; Ebert, David S

    2014-12-01

    We present VASA, a visual analytics platform consisting of a desktop application, a component model, and a suite of distributed simulation components for modeling the impact of societal threats such as weather, food contamination, and traffic on critical infrastructure such as supply chains, road networks, and power grids. Each component encapsulates a high-fidelity simulation model that together form an asynchronous simulation pipeline: a system of systems of individual simulations with a common data and parameter exchange format. At the heart of VASA is the Workbench, a visual analytics application providing three distinct features: (1) low-fidelity approximations of the distributed simulation components using local simulation proxies to enable analysts to interactively configure a simulation run; (2) computational steering mechanisms to manage the execution of individual simulation components; and (3) spatiotemporal and interactive methods to explore the combined results of a simulation run. We showcase the utility of the platform using examples involving supply chains during a hurricane as well as food contamination in a fast food restaurant chain.

  19. 76 FR 64386 - NASA Advisory Council; Information Technology Infrastructure Committee; Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-18

    ..., Executive Secretary for the Information Technology Infrastructure Committee, National Aeronautics and Space... they are attending the NASA Advisory Council, Information Technology Infrastructure Committee meeting in Building 34, Room W305. All U.S. citizens desiring to attend the Information Technology...

  20. Controlling Infrastructure Costs: Right-Sizing the Mission Control Facility

    NASA Technical Reports Server (NTRS)

    Martin, Keith; Sen-Roy, Michael; Heiman, Jennifer

    2009-01-01

    Johnson Space Center's Mission Control Center is a space vehicle and space program agnostic facility. The current operational design is essentially identical to the original facility architecture that was developed and deployed in the mid-1990s. In an effort to streamline the support costs of this mission-critical facility, the Mission Operations Division (MOD) of Johnson Space Center (JSC) has sponsored an exploratory project to evaluate and inject current state-of-the-practice Information Technology (IT) tools, processes and technology into legacy operations. The general push in the IT industry has been trending toward a data-centric computer infrastructure for the past several years. Organizations facing challenges with facility operations costs are turning to creative solutions combining hardware consolidation, virtualization and remote access to meet and exceed performance, security, and availability requirements. The Operations Technology Facility (OTF) organization at the Johnson Space Center has been chartered to build and evaluate a parallel Mission Control infrastructure, replacing the existing thick-client distributed computing model and network architecture with a data center model that utilizes virtualization to provide the MCC Infrastructure as a Service. The OTF will design a replacement architecture for the Mission Control Facility, leveraging hardware consolidation through the use of blade servers, increasing utilization rates for compute platforms through virtualization, and expanding connectivity options through the deployment of secure remote access. The architecture demonstrates the maturity of the technologies generally available in industry today and the ability to successfully abstract the tightly coupled relationship between thick-client software and legacy hardware into a hardware-agnostic "Infrastructure as a Service" capability that can scale to meet future requirements of new space programs and spacecraft. This paper discusses the benefits…

  1. Unreliable Sustainable Infrastructure: Three Transformations to Guide Cities towards Becoming Healthy 'Smart Cities'

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sperling, Joshua; Fisher, Stephen; Reiner, Mark B.

    The term 'leapfrogging' has been applied to cities and nations that have adopted a new form of infrastructure by bypassing the traditional progression of development, e.g., from no phones to cell phones, bypassing landlines altogether. However, leapfrogging from unreliable infrastructure systems to 'smart' cities is too large a jump, resulting in unsustainable and unhealthy infrastructure systems. In the Global South, a baseline of unreliable infrastructure is a prevalent problem. The push for sustainable and 'smart' [re]development tends to ignore many of those already living with failing, unreliable infrastructure. Without awareness of baseline conditions, uninformed projects run the risk of returning conditions to the status quo, keeping many urban populations below the targets of the United Nations' Sustainable Development Goals. A key part of understanding the baseline is to identify how citizens have long learned to adjust their expectations of basic services. To compensate for poor infrastructure, most residents in the Global South invest in remedial secondary infrastructure (RSI) at the household and business levels. The authors explore three key 'smart' city transformations that address RSI within a hierarchical planning pyramid known as the comprehensive resilient and reliable infrastructure systems (CRISP) planning framework.

  2. The Importance of Biodiversity E-infrastructures for Megadiverse Countries.

    PubMed

    Canhos, Dora A L; Sousa-Baena, Mariane S; de Souza, Sidnei; Maia, Leonor C; Stehmann, João R; Canhos, Vanderlei P; De Giovanni, Renato; Bonacelli, Maria B M; Los, Wouter; Peterson, A Townsend

    2015-07-01

    Addressing the challenges of biodiversity conservation and sustainable development requires global cooperation, support structures, and new governance models to integrate diverse initiatives and achieve massive, open exchange of data, tools, and technology. The traditional paradigm of sharing scientific knowledge through publications is not sufficient to meet contemporary demands that require not only the results but also data, knowledge, and skills to analyze the data. E-infrastructures are key in facilitating access to data and providing the framework for collaboration. Here we discuss the importance of e-infrastructures of public interest and the lack of long-term funding policies. We present the example of Brazil's speciesLink network, an e-infrastructure that provides free and open access to biodiversity primary data and associated tools. SpeciesLink currently integrates 382 datasets from 135 national institutions and 13 institutions from abroad, openly sharing ~7.4 million records, 94% of which are associated to voucher specimens. Just as important as the data is the network of data providers and users. In 2014, more than 95% of its users were from Brazil, demonstrating the importance of local e-infrastructures in enabling and promoting local use of biodiversity data and knowledge. From the outset, speciesLink has been sustained through project-based funding, normally public grants for 2-4-year periods. In between projects, there are short-term crises in trying to keep the system operational, a fact that has also been observed in global biodiversity portals, as well as in social and physical sciences platforms and even in computing services portals. In the last decade, the open access movement propelled the development of many web platforms for sharing data. Adequate policies unfortunately did not follow the same tempo, and now many initiatives may perish.

  3. The Importance of Biodiversity E-infrastructures for Megadiverse Countries

    PubMed Central

    Canhos, Dora A. L.; Sousa-Baena, Mariane S.; de Souza, Sidnei; Maia, Leonor C.; Stehmann, João R.; Canhos, Vanderlei P.; De Giovanni, Renato; Bonacelli, Maria B. M.; Los, Wouter; Peterson, A. Townsend

    2015-01-01

    Addressing the challenges of biodiversity conservation and sustainable development requires global cooperation, support structures, and new governance models to integrate diverse initiatives and achieve massive, open exchange of data, tools, and technology. The traditional paradigm of sharing scientific knowledge through publications is not sufficient to meet contemporary demands that require not only the results but also data, knowledge, and skills to analyze the data. E-infrastructures are key in facilitating access to data and providing the framework for collaboration. Here we discuss the importance of e-infrastructures of public interest and the lack of long-term funding policies. We present the example of Brazil’s speciesLink network, an e-infrastructure that provides free and open access to biodiversity primary data and associated tools. SpeciesLink currently integrates 382 datasets from 135 national institutions and 13 institutions from abroad, openly sharing ~7.4 million records, 94% of which are associated to voucher specimens. Just as important as the data is the network of data providers and users. In 2014, more than 95% of its users were from Brazil, demonstrating the importance of local e-infrastructures in enabling and promoting local use of biodiversity data and knowledge. From the outset, speciesLink has been sustained through project-based funding, normally public grants for 2–4-year periods. In between projects, there are short-term crises in trying to keep the system operational, a fact that has also been observed in global biodiversity portals, as well as in social and physical sciences platforms and even in computing services portals. In the last decade, the open access movement propelled the development of many web platforms for sharing data. Adequate policies unfortunately did not follow the same tempo, and now many initiatives may perish. PMID:26204382

  4. Reconfiguring practice: the interdependence of experimental procedure and computing infrastructure in distributed earthquake engineering.

    PubMed

    De La Flor, Grace; Ojaghi, Mobin; Martínez, Ignacio Lamata; Jirotka, Marina; Williams, Martin S; Blakeborough, Anthony

    2010-09-13

    When transitioning local laboratory practices into distributed environments, the interdependent relationship between experimental procedure and the technologies used to execute experiments becomes highly visible and a focal point for system requirements. We present an analysis of ways in which this reciprocal relationship is reconfiguring laboratory practices in earthquake engineering as a new computing infrastructure is embedded within three laboratories in order to facilitate the execution of shared experiments across geographically distributed sites. The system has been developed as part of the UK Network for Earthquake Engineering Simulation e-Research project, which links together three earthquake engineering laboratories at the universities of Bristol, Cambridge and Oxford. We consider the ways in which researchers have successfully adapted their local laboratory practices through the modification of experimental procedure so that they may meet the challenges of coordinating distributed earthquake experiments.

  5. Auscope: Australian Earth Science Information Infrastructure using Free and Open Source Software

    NASA Astrophysics Data System (ADS)

    Woodcock, R.; Cox, S. J.; Fraser, R.; Wyborn, L. A.

    2013-12-01

    AuScope's careful selection has been rewarded by adoption. In some cases the features provided by the SISS solution are now significantly in advance of COTS offerings, which will create expectations that can be passed back from users to their preferred vendors. Using FOSS, AuScope has addressed the challenge of data exchange across organisations nationally. The data standards (e.g. GeosciML) and platforms that underpin AuScope provide important new datasets and multi-agency links independent of underlying software and hardware differences. AuScope has created an infrastructure, a platform of technologies, and the opportunity for new ways of working with and integrating disparate data at much lower cost. Research activities are now exploiting the information infrastructure to create virtual laboratories for research ranging from geophysics through water and the environment. Once again the AuScope community is making heavy use of FOSS to provide access to processing software, cloud computing and HPC. The successful use of FOSS by AuScope, and the efforts made to ensure it is suitable for adoption, have resulted in the SISS being selected as a reference implementation for a number of Australian Government initiatives beyond AuScope in environmental information and bioregional assessments.

  6. An authentication infrastructure for today and tomorrow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engert, D.E.

    1996-06-01

    The Open Software Foundation's Distributed Computing Environment (OSF/DCE) was originally designed to provide a secure environment for distributed applications. By combining it with Kerberos Version 5 from MIT, it can be extended to provide network security as well. This combination can be used to build both an inter- and an intra-organizational infrastructure while providing single sign-on for the user with overall improved security. The ESnet community of the Department of Energy is building just such an infrastructure. ESnet has modified these systems to improve their interoperability, while encouraging the developers to incorporate these changes and work more closely together to continue to improve the interoperability. The success of this infrastructure depends on its flexibility to meet the needs of many applications and network security requirements. The open nature of Kerberos, combined with the vendor support of OSF/DCE, provides the infrastructure for today and tomorrow.

  7. Telemedicine and the National Information Infrastructure

    PubMed Central

    Jones, Mary Gardiner

    1997-01-01

    Health care is shifting from a focus on hospital-based acute care toward prevention, promotion of wellness, and maintenance of function in community- and home-based facilities. Telemedicine can facilitate this shifted focus, but the bulk of current projects emphasize academic medical center consultations to rural hospitals. Home-based projects encounter barriers of cost and inadequate infrastructure. The 1996 Telecommunications Act, as implemented by the Federal Communications Commission, holds out significant promise to overcome these barriers, although it has serious limitations in its application to health care providers. Health care advocates must work actively at the federal, state, and local public- and private-sector levels to address these shortcomings and develop cost-effective partnerships with other community-based organizations to build network links that facilitate telemedicine-generated services to the home, where the majority of health care decisions are made. PMID:9391928

  8. Behavioral and social sciences at the National Institutes of Health: Methods, measures, and data infrastructures as a scientific priority.

    PubMed

    Riley, William T

    2017-01-01

    The National Institutes of Health Office of Behavioral and Social Sciences Research (OBSSR) recently released its strategic plan for 2017-2021. This plan focuses on three equally important strategic priorities: 1) improve the synergy of basic and applied behavioral and social sciences research, 2) enhance and promote the research infrastructure, methods, and measures needed to support a more cumulative and integrated approach to behavioral and social sciences research, and 3) facilitate the adoption of behavioral and social sciences research findings in health research and in practice. This commentary focuses on scientific priority two and future directions in measurement science, technology, data infrastructure, behavioral ontologies, and big data methods and analytics that have the potential to transform the behavioral and social sciences into more cumulative, data rich sciences that more efficiently build on prior research. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  9. Tertiary Educational Infrastructural Development in Ghana: Financing, Challenges and Strategies

    ERIC Educational Resources Information Center

    Badu, Edward; Kissi, Ernest; Boateng, Emmanuel B.; Antwi-Afari, Maxwell F.

    2018-01-01

    Education is the mainstay of the development of any nation, and in developing countries it has become the backbone of human resource development, ensuring effective growth of the economy; however, the corresponding infrastructure development is lacking. Governments around the globe are finding it difficult to provide the needed infrastructure…

  10. 3 CFR 13636 - Executive Order 13636 of February 12, 2013. Improving Critical Infrastructure Cybersecurity

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... hereby ordered as follows: Section 1. Policy. Repeated cyber intrusions into critical infrastructure demonstrate the need for improved cybersecurity. The cyber threat to critical infrastructure continues to grow... resilience of the Nation's critical infrastructure and to maintain a cyber environment that encourages...

  11. GREEN INFRASTRUCTURE RESEARCH PROGRAM: Rain Gardens

    EPA Science Inventory

    The National Risk Management Research Laboratory (NRMRL) rain garden evaluation is part of a larger collection of long-term research that evaluates a variety of stormwater management practices. The U.S. EPA recognizes the potential of rain gardens as a green infrastructure manag...

  12. Los Alamos National Laboratory Economic Analysis Capability Overview

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boero, Riccardo; Edwards, Brian Keith; Pasqualini, Donatella

    Los Alamos National Laboratory has developed two types of models to compute the economic impact of infrastructure disruptions. FastEcon is a fast-running model that estimates first-order economic impacts of large-scale events such as hurricanes and floods and can be used to identify the amount of economic activity that occurs in a specific area. LANL's Computable General Equilibrium (CGE) model estimates more comprehensive static and dynamic economic impacts of a broader array of events and captures the interactions between sectors and industries when estimating economic impacts.

  13. National spatial data infrastructure - coming together of GIS and EO in India

    NASA Astrophysics Data System (ADS)

    Rao, Mukund; Pandey, Amitabha; Ahuja, A. K.; Ramamurthy, V. S.; Kasturirangan, K.

    2002-07-01

    A new wave of technological innovation is allowing us to capture, store, process and display an unprecedented amount of geographical and spatial information about society and a wide variety of environmental and cultural phenomena. Much of this information is "spatial" - that is, it refers to a coordinate system and is representable in map form. Current and accurate spatial data must be readily available to contribute to local, state and national development and to economic growth, environmental quality and stability, and social progress. India has, over the past years, produced a rich "base" of map information through systematic topographic surveys, geological surveys, soil surveys, cadastral surveys, various natural resources inventory programmes and the use of remote sensing images. Further, with the availability of precise, high-resolution satellite images and of GIS data combined with the Global Positioning System (GPS), the accuracy and information content of these spatial datasets and maps are extremely high. Encapsulating these maps and images into a National Spatial Data Infrastructure (NSDI) is the need of the hour, and the emphasis has to be on information transparency and sharing, with the recognition that spatial information is a national resource that citizens, society, private enterprise and government have a right to access, appropriately. Only through common conventions and technical agreements on standards, metadata definitions, and network and access protocols will it be easily possible for the NSDI to come into existence. India now has an NSDI strategy, and the "NSDI Strategy and Action Plan" report has been prepared and is being opened up to a national debate. The first steps have been taken; the end goal is farther away but now in sight. While Government must provide the lead, private enterprise, NGOs and academia have a major role to play in making the NSDI a reality. NSDI will require the coming together of…

  14. Engineering Infrastructures: Problems of Safety and Security in the Russian Federation

    NASA Astrophysics Data System (ADS)

    Makhutov, Nikolay A.; Reznikov, Dmitry O.; Petrov, Vitaly P.

    Modern society cannot exist without stable and reliable engineering infrastructures (EI), whose operation is vital for any national economy. These infrastructures include energy, transportation, water and gas supply systems, telecommunication and cyber systems, etc. Their operation involves storing and processing huge amounts of information, energy and hazardous substances. Ageing infrastructures are deteriorating, with operating conditions declining from normal to emergency and catastrophic. The complexity of engineering infrastructures and their interdependence with other technical systems make them vulnerable to emergency situations triggered by natural and manmade catastrophes or terrorist attacks.

  15. The National Education Association's Educational Computer Service. An Assessment.

    ERIC Educational Resources Information Center

    Software Publishers Association, Washington, DC.

    The Educational Computer Service (ECS) of the National Education Association (NEA) evaluates and distributes educational software. An investigation of ECS was conducted by the Computer Education Committee of the Software Publishers Association (SPA) at the request of SPA members. The SPA found that the service, as it is presently structured, is…

  16. 1997 Ozone National Ambient Air Quality Standards (NAAQS) Infrastructure Actions

    EPA Pesticide Factsheets

    Read about the EPA's infrastructure actions for the 1997 Ozone NAAQS. These actions are regarding states' failure to submit SIPs addressing various parts of the standards. Here you can read the federal register notices, fact sheets, and the docket folder.

  17. 2008 Ozone National Ambient Air Quality Standards (NAAQS) Infrastructure Actions

    EPA Pesticide Factsheets

    Read about the EPA's infrastructure actions for the 2008 Ozone NAAQS. These actions are regarding states' failure to submit SIPs addressing various parts of the standards. Here you can read the federal register notices, fact sheets, and the docket folder.

  18. A New Frontier: The National Information Infrastructure. Proceedings from the State-of-the-Art Institute (8th, Washington, D.C., November 3-4, 1994).

    ERIC Educational Resources Information Center

    Special Libraries Association, New York, NY.

    These conference proceedings address the key issues relating to the National Information Infrastructure, including social policy, cultural issues, government policy, and technological applications. The goal is to provide the knowledge and resources needed to conceptualize and think clearly about this topic. Proceedings include: "Opening…

  19. How Critical Is Critical Infrastructure?

    DTIC Science & Technology

    2015-09-01

    electrical power, telecommunications, transportation, petroleum liquids, or natural gas, as shown in Figure 34 of the National Infrastructure Protection Plan. Sectors discussed include the Natural Gas Segment, the Food and Agriculture Sector, the Government Facilities Sector, the Healthcare and Public Health Sector, and the Information Technology Sector.

  20. Water Intelligence and the Cyber-Infrastructure Revolution

    NASA Astrophysics Data System (ADS)

    Cline, D. W.

    2015-12-01

    As an intrinsic factor in national security, the global economy, food and energy production, and human and ecological health, fresh water resources are increasingly being considered by an ever-widening array of stakeholders. The U.S. intelligence community has identified water as a key factor in the Nation's security risk profile. Water industries are growing rapidly, and seek to revolutionize the role of water in the global economy, making water an economic value rather than a limitation on operations. Recent increased focus on the complex interrelationships and interdependencies between water, food, and energy signal a renewed effort to move towards integrated water resource management. Throughout all of this, hydrologic extremes continue to wreak havoc on communities and regions around the world, in some cases threatening long-term economic stability. This increased attention on water coincides with the "second IT revolution" of cyber-infrastructure (CI). The CI concept is a convergence of technology, data, applications and human resources, all coalescing into a tightly integrated global grid of computing, information, networking and sensor resources, and ultimately serving as an engine of change for collaboration, education and scientific discovery and innovation. In the water arena, we have unprecedented opportunities to apply the CI concept to help address complex water challenges and shape the future world of water resources - on both science and socio-economic application fronts. Providing actionable local "water intelligence" nationally or globally is now becoming feasible through high-performance computing, data technologies, and advanced hydrologic modeling. Further development on all of these fronts appears likely and will help advance this much-needed capability. Lagging behind are water observation systems, especially in situ networks, which need significant innovation to keep pace with and help fuel rapid advancements in water intelligence.

  1. Clinical Computing in General Dentistry

    PubMed Central

    Schleyer, Titus K.L.; Thyvalikakath, Thankam P.; Spallek, Heiko; Torres-Urquidy, Miguel H.; Hernandez, Pedro; Yuhaniak, Jeannie

    2006-01-01

    Objective: Measure the adoption and utilization of, opinions about, and attitudes toward clinical computing among general dentists in the United States. Design: Telephone survey of a random sample of 256 general dentists in active practice in the United States. Measurements: A 39-item telephone interview measuring practice characteristics and information technology infrastructure; clinical information storage; data entry and access; attitudes toward and opinions about clinical computing (features of practice management systems, barriers, advantages, disadvantages, and potential improvements); clinical Internet use; and attitudes toward the National Health Information Infrastructure. Results: The authors successfully screened 1,039 of 1,159 randomly sampled U.S. general dentists in active practice (89.6% response rate). Two hundred fifty-six (24.6%) respondents had computers at chairside and thus were eligible for this study. The authors successfully interviewed 102 respondents (39.8%). Clinical information associated with administration and billing, such as appointments and treatment plans, was stored predominantly on the computer; other information, such as the medical history and progress notes, primarily resided on paper. Nineteen respondents, or 1.8% of all general dentists, were completely paperless. Auxiliary personnel, such as dental assistants and hygienists, entered most data. Respondents adopted clinical computing to improve office efficiency and operations, support diagnosis and treatment, and enhance patient communication and perception. Barriers included insufficient operational reliability, program limitations, a steep learning curve, cost, and infection control issues. Conclusion: Clinical computing is being increasingly adopted in general dentistry. However, future research must address usefulness and ease of use, workflow support, infection control, integration, and implementation issues. PMID:16501177

  2. Event heap: a coordination infrastructure for dynamic heterogeneous application interactions in ubiquitous computing environments

    DOEpatents

    Johanson, Bradley E.; Fox, Armando; Winograd, Terry A.; Hanrahan, Patrick M.

    2010-04-20

    An efficient and adaptive middleware infrastructure called the Event Heap system dynamically coordinates application interactions and communications in a ubiquitous computing environment, e.g., an interactive workspace, having heterogeneous software applications running on various machines and devices across different platforms. Applications exchange events via the Event Heap. Each event is characterized by a set of unordered, named fields. Events are routed by matching certain attributes in the fields. The source and target versions of each field are automatically set when an event is posted or used as a template. The Event Heap system implements a unique combination of features, both intrinsic to tuplespaces and specific to the Event Heap, including content based addressing, support for routing patterns, standard routing fields, limited data persistence, query persistence/registration, transparent communication, self-description, flexible typing, logical/physical centralization, portable client API, at most once per source first-in-first-out ordering, and modular restartability.
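
    To make the coordination model concrete, the following minimal Python sketch mimics the tuplespace-style matching described above: events are posted as sets of unordered, named fields, and consumers retrieve the oldest event whose fields match a template. The class and field names are illustrative only, not the patented Event Heap API.

      # Minimal in-memory sketch of tuplespace-style event coordination:
      # events are sets of named fields; consumers retrieve events by
      # matching a template whose fields must all be present and equal.

      class EventHeap:
          def __init__(self):
              self._events = []          # FIFO list of posted events

          def post(self, **fields):
              """Post an event described by unordered, named fields."""
              self._events.append(dict(fields))

          def get(self, **template):
              """Return (and remove) the oldest event matching the template."""
              for i, event in enumerate(self._events):
                  if all(event.get(k) == v for k, v in template.items()):
                      return self._events.pop(i)
              return None

      heap = EventHeap()
      heap.post(type="ProjectorOn", room="iRoom", source="laptop-3")
      event = heap.get(type="ProjectorOn", room="iRoom")
      print(event)   # {'type': 'ProjectorOn', 'room': 'iRoom', 'source': 'laptop-3'}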

  3. Infrastructure for large space telescopes

    NASA Astrophysics Data System (ADS)

    MacEwen, Howard A.; Lillie, Charles F.

    2016-10-01

    It is generally recognized (e.g., in the National Aeronautics and Space Administration response to recent congressional appropriations) that future space observatories must be serviceable, even if they are orbiting in deep space (e.g., around the Sun-Earth libration point, SEL2). On the basis of this legislation, we believe that budgetary considerations throughout the foreseeable future will require that large, long-lived astrophysics missions must be designed as evolvable semipermanent observatories that will be serviced using an operational, in-space infrastructure. We believe that the development of this infrastructure will include the design and development of a small to mid-sized servicing vehicle (MiniServ) as a key element of an affordable infrastructure for in-space assembly and servicing of future space vehicles. This can be accomplished by the adaptation of technology developed over the past half-century into a vehicle approximately the size of the ascent stage of the Apollo Lunar Module to provide some of the servicing capabilities that will be needed by very large telescopes located in deep space in the near future (2020s and 2030s). We specifically address the need for a detailed study of these servicing requirements and the current proposals for using presently available technologies to provide the appropriate infrastructure.

  4. Cyber-Physical Correlations for Infrastructure Resilience: A Game-Theoretic Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S; He, Fei; Ma, Chris Y. T.

    In several critical infrastructures, the cyber and physical parts are correlated so that disruptions to one affect the other and hence the whole system. These correlations may be exploited to strategically launch attacks on components, and hence must be accounted for when ensuring infrastructure resilience, specified by its survival probability. We characterize the cyber-physical interactions at two levels: (i) the failure correlation function specifies the conditional survival probability of the cyber sub-infrastructure given the physical sub-infrastructure as a function of their marginal probabilities, and (ii) the individual survival probabilities of both sub-infrastructures are characterized by first-order differential conditions. We formulate a resilience problem for infrastructures composed of discrete components as a game between the provider and attacker, wherein their utility functions consist of an infrastructure survival probability term and a cost term expressed in terms of the number of components attacked and reinforced. We derive Nash Equilibrium conditions and sensitivity functions that highlight the dependence of infrastructure resilience on the cost term, correlation function and sub-infrastructure survival probabilities. These results generalize earlier ones based on linear failure correlation functions and independent component failures. We apply the results to models of cloud computing infrastructures and energy grids.
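
    As a hedged illustration of the game formulation, the Python sketch below computes pure-strategy Nash equilibria over a small discrete strategy space. The contest-form survival probability and the linear cost coefficients are invented for illustration; the paper's actual utility and failure correlation functions are not reproduced here.

      # Toy best-response computation for a provider-attacker game over
      # discrete components, assuming (not from the paper) a contest-form
      # survival probability p = r / (r + a) for r reinforced and a attacked
      # components, linear costs, and unit value for infrastructure survival.

      def provider_utility(r, a, cost_r=0.05):
          p = r / (r + a) if (r + a) > 0 else 1.0   # survival probability
          return p - cost_r * r

      def attacker_utility(r, a, cost_a=0.08):
          p = r / (r + a) if (r + a) > 0 else 1.0
          return (1 - p) - cost_a * a

      choices = range(0, 11)
      # Best responses over the small discrete strategy space
      best_r = {a: max(choices, key=lambda r: provider_utility(r, a)) for a in choices}
      best_a = {r: max(choices, key=lambda a: attacker_utility(r, a)) for r in choices}
      # A pure-strategy Nash equilibrium is a fixed point of both best-response maps
      equilibria = [(r, a) for a, r in best_r.items() if best_a[r] == a]
      print(equilibria)   # [(5, 3)] with these illustrative cost coefficients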

  5. CLIMB (the Cloud Infrastructure for Microbial Bioinformatics): an online resource for the medical microbiology community

    PubMed Central

    Smith, Andy; Southgate, Joel; Poplawski, Radoslaw; Bull, Matthew J.; Richardson, Emily; Ismail, Matthew; Thompson, Simon Elwood-; Kitchen, Christine; Guest, Martyn; Bakke, Marius

    2016-01-01

    The increasing availability and decreasing cost of high-throughput sequencing has transformed academic medical microbiology, delivering an explosion in available genomes while also driving advances in bioinformatics. However, many microbiologists are unable to exploit the resulting large genomics datasets because they do not have access to relevant computational resources and to an appropriate bioinformatics infrastructure. Here, we present the Cloud Infrastructure for Microbial Bioinformatics (CLIMB) facility, a shared computing infrastructure that has been designed from the ground up to provide an environment where microbiologists can share and reuse methods and data. PMID:28785418

  6. CLIMB (the Cloud Infrastructure for Microbial Bioinformatics): an online resource for the medical microbiology community.

    PubMed

    Connor, Thomas R; Loman, Nicholas J; Thompson, Simon; Smith, Andy; Southgate, Joel; Poplawski, Radoslaw; Bull, Matthew J; Richardson, Emily; Ismail, Matthew; Thompson, Simon Elwood-; Kitchen, Christine; Guest, Martyn; Bakke, Marius; Sheppard, Samuel K; Pallen, Mark J

    2016-09-01

    The increasing availability and decreasing cost of high-throughput sequencing has transformed academic medical microbiology, delivering an explosion in available genomes while also driving advances in bioinformatics. However, many microbiologists are unable to exploit the resulting large genomics datasets because they do not have access to relevant computational resources and to an appropriate bioinformatics infrastructure. Here, we present the Cloud Infrastructure for Microbial Bioinformatics (CLIMB) facility, a shared computing infrastructure that has been designed from the ground up to provide an environment where microbiologists can share and reuse methods and data.

  7. Using high-performance networks to enable computational aerosciences applications

    NASA Technical Reports Server (NTRS)

    Johnson, Marjory J.

    1992-01-01

    One component of the U.S. Federal High Performance Computing and Communications Program (HPCCP) is the establishment of a gigabit network to provide a communications infrastructure for researchers across the nation. This gigabit network will provide new services and capabilities, in addition to increased bandwidth, to enable future applications. An understanding of these applications is necessary to guide the development of the gigabit network and other high-performance networks of the future. In this paper we focus on computational aerosciences applications run remotely using the Numerical Aerodynamic Simulation (NAS) facility located at NASA Ames Research Center. We characterize these applications in terms of network-related parameters and relate user experiences that reveal limitations imposed by the current wide-area networking infrastructure. Then we investigate how the development of a nationwide gigabit network would enable users of the NAS facility to work in new, more productive ways.
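
    One network-related parameter that constrained such remote use is easy to work through: with a fixed TCP window, throughput is capped by the bandwidth-delay product regardless of raw link speed. The figures in this small Python example are illustrative, not taken from the paper.

      # Throughput ceiling from the bandwidth-delay product: with a fixed
      # TCP window, achievable throughput <= window / RTT, regardless of
      # link speed. The numbers below are illustrative, not from the paper.

      window_bytes = 64 * 1024          # classic 64 KiB TCP window
      rtt_seconds = 0.070               # ~70 ms coast-to-coast round trip

      max_throughput_bps = window_bytes * 8 / rtt_seconds
      print(f"max throughput ~ {max_throughput_bps / 1e6:.1f} Mbit/s")
      # ~7.5 Mbit/s -- far below a gigabit link, which is why window scaling
      # and protocol tuning mattered for remote HPC use over wide-area nets.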

  8. Nuclear Energy Infrastructure Database Fitness and Suitability Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heidrich, Brenden

    In 2014, the Deputy Assistant Secretary for Science and Technology Innovation (NE-4) initiated the Nuclear Energy-Infrastructure Management Project by tasking the Nuclear Science User Facilities (NSUF) to create a searchable and interactive database of all pertinent NE-supported or related infrastructure. This database will be used for analyses to establish needs, redundancies, efficiencies, distributions, etc. in order to best understand the utility of NE’s infrastructure and inform the content of the infrastructure calls. The NSUF developed the database by utilizing data and policy direction from a wide variety of reports from the Department of Energy, the National Research Council, the International Atomic Energy Agency and various other federal and civilian resources. The NEID contains data on 802 R&D instruments housed in 377 facilities at 84 institutions in the US and abroad. A Database Review Panel (DRP) was formed to review and provide advice on the development, implementation and utilization of the NEID. The panel comprises five members with expertise in nuclear energy-associated research. It was intended that they represent the major constituencies associated with nuclear energy research: academia, industry, research reactor, national laboratory, and Department of Energy program management. The Nuclear Energy Infrastructure Database Review Panel concludes that the NSUF has succeeded in creating a capability and infrastructure database that identifies and documents the major nuclear energy research and development capabilities across the DOE complex. The effort to maintain and expand the database will be ongoing. Detailed information on many facilities must be gathered from the associated institutions and added to complete the database. The data must be validated and kept current to capture facility and instrumentation status as well as to cover new acquisitions and retirements.
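
    The kind of gap and redundancy analysis such a database is meant to inform can be sketched in a few lines of Python; the instrument rows below are invented placeholders, not actual NEID content.

      # Illustrative sketch (rows invented) of a redundancy/gap query the
      # NEID is meant to support: counting instruments by capability
      # across institutions.

      from collections import Counter

      instruments = [
          {"institution": "lab-A", "capability": "irradiation testing"},
          {"institution": "lab-B", "capability": "post-irradiation examination"},
          {"institution": "univ-C", "capability": "irradiation testing"},
          {"institution": "lab-D", "capability": "thermal hydraulics"},
      ]

      by_capability = Counter(row["capability"] for row in instruments)
      print(by_capability.most_common())   # highlights redundancies and gaps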

  9. Computing through Scientific Abstractions in SysBioPS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chin, George; Stephan, Eric G.; Gracio, Deborah K.

    2004-10-13

    Today, biologists and bioinformaticists have a tremendous amount of computational power at their disposal. With the availability of supercomputers, burgeoning scientific databases and digital libraries such as GenBank and PubMed, and pervasive computational environments such as the Grid, biologists have access to a wealth of computational capabilities and scientific data at hand. Yet, the rapid development of computational technologies has far exceeded the typical biologist’s ability to effectively apply the technology in their research. Computational sciences research and development efforts such as the Biology Workbench, BioSPICE (Biological Simulation Program for Intra-Cellular Evaluation), and BioCoRE (Biological Collaborative Research Environment) are important in connecting biologists and their scientific problems to computational infrastructures. On the Computational Cell Environment and Heuristic Entity-Relationship Building Environment projects at the Pacific Northwest National Laboratory, we are jointly developing a new breed of scientific problem solving environment called SysBioPSE that will allow biologists to access and apply computational resources in the scientific research context. In contrast to other computational science environments, SysBioPSE operates as an abstraction layer above a computational infrastructure. The goal of SysBioPSE is to allow biologists to apply computational resources in the context of the scientific problems they are addressing and the scientific perspectives from which they conduct their research. More specifically, SysBioPSE allows biologists to capture and represent scientific concepts and theories and experimental processes, and to link these views to scientific applications, data repositories, and computer systems.

  10. Development and Implementation of Collaborative e-Infrastructures and Data Management for Global Change Research

    NASA Astrophysics Data System (ADS)

    Allison, M. Lee; Davis, Rowena

    2016-04-01

    An e-infrastructure that supports data-intensive, multidisciplinary research is needed to accelerate the pace of science to address 21st century global change challenges. Data discovery, access, sharing and interoperability collectively form core elements of an emerging shared vision of e-infrastructure for scientific discovery. The pace and breadth of change in information management across the data lifecycle means that no one country or institution can unilaterally provide the leadership and resources required to use data and information effectively, or needed to support a coordinated, global e-infrastructure. An 18-month process involving ~120 experts in domain, computer, and social sciences from more than a dozen countries resulted in a formal set of recommendations, adopted in fall 2015 by the Belmont Forum collaboration of national science funding agencies and international bodies, on what they are best suited to implement for development of an e-infrastructure in support of global change research, including:
    • adoption of enforceable data principles that promote a global, interoperable e-infrastructure
    • establishment of information and data officers for coordination of global data management and e-infrastructure efforts
    • promotion of effective data planning and stewardship
    • determination of international and community best practices for adoption
    • development of a cross-disciplinary training curriculum on data management and curation
    The implementation plan is being executed under four internationally coordinated Action Themes towards a globally organized, internationally relevant e-infrastructure and data management capability drawn from existing components, protocols, and standards. The Belmont Forum anticipates opportunities to fund additional projects to fill key gaps and to integrate best practices into an e-infrastructure that supports its programs but can also be scaled up and deployed more widely.

  11. Space-based Communications Infrastructure for Developing Countries

    NASA Technical Reports Server (NTRS)

    Barker, Keith; Barnes, Carl; Price, K. M.

    1995-01-01

    This study examines the potential use of advanced satellites to augment the telecommunications infrastructure of developing countries. The study investigated the potential market for using satellites in developing countries, the role of satellites in national information infrastructures (NII), the technical feasibility of augmenting NIIs with satellites, and the financial conditions a nation must meet to procure satellite systems. In addition, the study examined several technical areas including onboard processing, intersatellite links, frequency of operation, multibeam and active antennas, and advanced satellite technologies. The marketing portion of this study focused on three case studies: China, Brazil, and Mexico. These cases represent countries in various stages of telecommunication infrastructure development. The study concludes by defining the needs of developing countries for satellites, and recommends steps that both industry and NASA can take to improve the competitiveness of U.S. satellite manufacturing.

  12. Space-Based Information Infrastructure Architecture for Broadband Services

    NASA Technical Reports Server (NTRS)

    Price, Kent M.; Inukai, Tom; Razdan, Rajendev; Lazeav, Yvonne M.

    1996-01-01

    This study addressed four tasks: (1) identify satellite-addressable information infrastructure markets; (2) perform network analysis for space-based information infrastructure; (3) develop conceptual architectures; and (4) assess the economics of those architectures. The report concludes that satellites will have a major role in the national and global information infrastructure, requiring seamless integration between terrestrial and satellite networks. The proposed LEO, MEO, and GEO satellite systems have characteristics that vary widely, including delay, delay variation, poorer link quality, and beam/satellite handover. The barriers against seamless interoperability between satellite and terrestrial networks are discussed. These barriers are the lack of compatible parameters, standards and protocols, which are presently being evaluated and reduced.
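
    The delay barrier mentioned above is straightforward to quantify. The short Python calculation below works out one-way and round-trip propagation delay to a geostationary satellite at roughly 35,786 km altitude; these are physical constants, though the report's own figures are not reproduced here.

      # Propagation delay to a geostationary satellite, at the speed of light.

      ALTITUDE_M = 35_786_000    # GEO altitude above the equator, metres
      C = 299_792_458            # speed of light, m/s

      one_way = ALTITUDE_M / C
      round_trip = 2 * one_way
      print(f"one-way ~{one_way*1000:.0f} ms, round trip ~{round_trip*1000:.0f} ms")
      # one-way ~119 ms, round trip ~239 ms -- orders of magnitude above
      # terrestrial fiber paths, hence the protocol and handover barriers noted.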

  13. Service-Oriented Architecture for NVO and TeraGrid Computing

    NASA Technical Reports Server (NTRS)

    Jacob, Joseph; Miller, Craig; Williams, Roy; Steenberg, Conrad; Graham, Matthew

    2008-01-01

    The National Virtual Observatory (NVO) Extensible Secure Scalable Service Infrastructure (NESSSI) is a Web service architecture and software framework that enables Web-based astronomical data publishing and processing on grid computers such as the National Science Foundation's TeraGrid. Characteristics of this architecture include the following: (1) Services are created, managed, and upgraded by their developers, who are trusted users of computing platforms on which the services are deployed. (2) Service jobs can be initiated by means of Java or Python client programs run on a command line or with Web portals. (3) Access is granted within a graduated security scheme in which the size of a job that can be initiated depends on the level of authentication of the user.

  14. The Satellite Data Thematic Core Service within the EPOS Research Infrastructure

    NASA Astrophysics Data System (ADS)

    Manunta, Michele; Casu, Francesco; Zinno, Ivana; De Luca, Claudio; Buonanno, Sabatino; Zeni, Giovanni; Wright, Tim; Hooper, Andy; Diament, Michel; Ostanciaux, Emilie; Mandea, Mioara; Walter, Thomas; Maccaferri, Francesco; Fernandez, Josè; Stramondo, Salvatore; Bignami, Christian; Bally, Philippe; Pinto, Salvatore; Marin, Alessandro; Cuomo, Antonio

    2017-04-01

    EPOS, the European Plate Observing System, is a long-term plan to facilitate the integrated use of data, data products, software and services, available from distributed Research Infrastructures (RI), for solid Earth science in Europe. Indeed, EPOS integrates a large number of existing European RIs belonging to several fields of the Earth science, from seismology to geodesy, near fault and volcanic observatories as well as anthropogenic hazards. The EPOS vision is that the integration of the existing national and trans-national research infrastructures will increase access and use of the multidisciplinary data recorded by the solid Earth monitoring networks, acquired in laboratory experiments and/or produced by computational simulations. The establishment of EPOS will foster the interoperability of products and services in the Earth science field to a worldwide community of users. Accordingly, the EPOS aim is to integrate the diverse and advanced European Research Infrastructures for solid Earth science, and build on new e-science opportunities to monitor and understand the dynamic and complex solid-Earth System. One of the EPOS Thematic Core Services (TCS), referred to as Satellite Data, aims at developing, implementing and deploying advanced satellite data products and services, mainly based on Copernicus data (namely Sentinel acquisitions), for the Earth science community. This work intends to present the technological enhancements, fostered by EPOS, to deploy effective satellite services in a harmonized and integrated way. In particular, the Satellite Data TCS will deploy five services, EPOSAR, GDM, COMET, 3D-Def and MOD, which are mainly based on the exploitation of SAR data acquired by the Sentinel-1 constellation and designed to provide information on Earth surface displacements. In particular, the planned services will provide both advanced DInSAR products (deformation maps, velocity maps, deformation time series) and value-added measurements (source model

  15. Modeling, Simulation and Analysis of Public Key Infrastructure

    NASA Technical Reports Server (NTRS)

    Liu, Yuan-Kwei; Tuey, Richard; Ma, Paul (Technical Monitor)

    1998-01-01

    Security is an essential part of network communication. Advances in cryptography have provided solutions to many network security requirements. Public Key Infrastructure (PKI) is the foundation of cryptography applications. The main objective of this research is to design a model to simulate a reliable, scalable, manageable, and high-performance public key infrastructure. We build a model to simulate the NASA public key infrastructure using SimProcess and MATLAB software. The simulation spans from the top level down to the computation needed for encryption, decryption, digital signatures, and a secure web server. The secure web server application could be utilized in wireless communications. The results of the simulation are analyzed and confirmed using queueing theory.
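
    As a hedged sketch of the queueing-theory confirmation step, the following Python fragment applies classic M/M/1 formulas to a hypothetical certificate-signing server; the arrival and service rates are invented for illustration and are not from the NASA simulation.

      # M/M/1 estimate for a certificate-signing server: arrival rate lam,
      # service rate mu (both illustrative). Classic results: utilization
      # rho = lam/mu, mean time in system W = 1/(mu - lam), and mean number
      # in system L = rho/(1 - rho).

      lam = 40.0    # signing requests arriving per second
      mu = 50.0     # signatures the server completes per second

      rho = lam / mu
      W = 1.0 / (mu - lam)          # mean response time (seconds)
      L = rho / (1.0 - rho)         # mean number of requests in system

      print(f"utilization={rho:.0%}  mean response={W*1000:.0f} ms  in-system={L:.1f}")
      # utilization=80%  mean response=100 ms  in-system=4.0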

  16. Agile Infrastructure Monitoring

    NASA Astrophysics Data System (ADS)

    Andrade, P.; Ascenso, J.; Fedorko, I.; Fiorini, B.; Paladin, M.; Pigueiras, L.; Santos, M.

    2014-06-01

    At the present time, data centres are facing a massive rise in virtualisation and cloud computing. The Agile Infrastructure (AI) project is working to deliver new solutions to ease the management of CERN data centres. Part of the solution consists in a new "shared monitoring architecture" which collects and manages monitoring data from all data centre resources. In this article, we present the building blocks of this new monitoring architecture, the different open source technologies selected for each architecture layer, and how we are building a community around this common effort.

  17. iTools: a framework for classification, categorization and integration of computational biology resources.

    PubMed

    Dinov, Ivo D; Rubin, Daniel; Lorensen, William; Dugan, Jonathan; Ma, Jeff; Murphy, Shawn; Kirschner, Beth; Bug, William; Sherman, Michael; Floratos, Aris; Kennedy, David; Jagadish, H V; Schmidt, Jeanette; Athey, Brian; Califano, Andrea; Musen, Mark; Altman, Russ; Kikinis, Ron; Kohane, Isaac; Delp, Scott; Parker, D Stott; Toga, Arthur W

    2008-05-28

    The advancement of the computational biology field hinges on progress in three fundamental directions--the development of new computational algorithms, the availability of informatics resource management infrastructures and the capability of tools to interoperate and synergize. There is an explosion in algorithms and tools for computational biology, which makes it difficult for biologists to find, compare and integrate such resources. We describe a new infrastructure, iTools, for managing the query, traversal and comparison of diverse computational biology resources. Specifically, iTools stores information about three types of resources--data, software tools and web-services. The iTools design, implementation and resource meta-data content reflect the broad research, computational, applied and scientific expertise available at the seven National Centers for Biomedical Computing. iTools provides a system for classification, categorization and integration of different computational biology resources across space-and-time scales, biomedical problems, computational infrastructures and mathematical foundations. A large number of resources are already iTools-accessible to the community and this infrastructure is rapidly growing. iTools includes human and machine interfaces to its resource meta-data repository. Investigators or computer programs may utilize these interfaces to search, compare, expand, revise and mine meta-data descriptions of existent computational biology resources. We propose two ways to browse and display the iTools dynamic collection of resources. The first one is based on an ontology of computational biology resources, and the second one is derived from hyperbolic projections of manifolds or complex structures onto planar discs. iTools is an open source project both in terms of the source code development as well as its meta-data content. iTools employs a decentralized, portable, scalable and lightweight framework for long-term resource management
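
    A minimal Python sketch of the resource classification described above, storing the three resource types and filtering them by type and domain; the record fields and entries are invented, not the actual iTools meta-data schema.

      # Hedged sketch of a three-way resource classification (data,
      # software tools, web services) with a simple filtered search.
      # All names and fields below are invented for illustration.

      resources = [
          {"name": "alignment-toolkit", "kind": "software tool",
           "domains": ["genomics"], "interfaces": ["cli", "api"]},
          {"name": "brain-atlas-data", "kind": "data",
           "domains": ["neuroimaging"], "interfaces": ["web"]},
          {"name": "segmentation-service", "kind": "web-service",
           "domains": ["neuroimaging"], "interfaces": ["api"]},
      ]

      def find(resources, kind=None, domain=None):
          """Filter resource meta-data records by type and scientific domain."""
          return [r for r in resources
                  if (kind is None or r["kind"] == kind)
                  and (domain is None or domain in r["domains"])]

      print([r["name"] for r in find(resources, domain="neuroimaging")])
      # ['brain-atlas-data', 'segmentation-service']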

  18. iTools: A Framework for Classification, Categorization and Integration of Computational Biology Resources

    PubMed Central

    Dinov, Ivo D.; Rubin, Daniel; Lorensen, William; Dugan, Jonathan; Ma, Jeff; Murphy, Shawn; Kirschner, Beth; Bug, William; Sherman, Michael; Floratos, Aris; Kennedy, David; Jagadish, H. V.; Schmidt, Jeanette; Athey, Brian; Califano, Andrea; Musen, Mark; Altman, Russ; Kikinis, Ron; Kohane, Isaac; Delp, Scott; Parker, D. Stott; Toga, Arthur W.

    2008-01-01

    The advancement of the computational biology field hinges on progress in three fundamental directions – the development of new computational algorithms, the availability of informatics resource management infrastructures and the capability of tools to interoperate and synergize. There is an explosion in algorithms and tools for computational biology, which makes it difficult for biologists to find, compare and integrate such resources. We describe a new infrastructure, iTools, for managing the query, traversal and comparison of diverse computational biology resources. Specifically, iTools stores information about three types of resources – data, software tools and web-services. The iTools design, implementation and resource meta-data content reflect the broad research, computational, applied and scientific expertise available at the seven National Centers for Biomedical Computing. iTools provides a system for classification, categorization and integration of different computational biology resources across space-and-time scales, biomedical problems, computational infrastructures and mathematical foundations. A large number of resources are already iTools-accessible to the community and this infrastructure is rapidly growing. iTools includes human and machine interfaces to its resource meta-data repository. Investigators or computer programs may utilize these interfaces to search, compare, expand, revise and mine meta-data descriptions of existent computational biology resources. We propose two ways to browse and display the iTools dynamic collection of resources. The first one is based on an ontology of computational biology resources, and the second one is derived from hyperbolic projections of manifolds or complex structures onto planar discs. iTools is an open source project both in terms of the source code development as well as its meta-data content. iTools employs a decentralized, portable, scalable and lightweight framework for long-term resource

  19. Results of the First National Assessment of Computer Competence (The Printout).

    ERIC Educational Resources Information Center

    Balajthy, Ernest

    1988-01-01

    Discusses the findings of the National Assessment of Educational Progress 1985-86 survey of American students' computer competence, focusing on findings of interest to reading teachers who use computers. (MM)

  20. Scope of Work for Integration Management and Installation Services of the National Ignition Facility Beampath Infrastructure System

    NASA Astrophysics Data System (ADS)

    Coyle, P. D.

    2000-03-01

    The goal of the National Ignition Facility (NIF) project is to provide an above-ground experimental capability for maintaining nuclear competence and weapons effects simulation and to provide a facility capable of achieving fusion ignition using solid-state lasers as the energy driver. The facility will incorporate 192 laser beams, which will be focused onto a small target located at the center of a spherical target chamber; the energy from the laser beams will be deposited in a few billionths of a second. The target will then implode, forcing atomic nuclei to the sufficiently high temperatures and densities necessary to achieve a miniature fusion reaction. The NIF is under construction at Livermore, California, approximately 50 miles southeast of San Francisco. The University of California, Lawrence Livermore National Laboratory (LLNL), operating under Prime Contract W-7405-ENG-48 with the U.S. Department of Energy (DOE), shall subcontract for Integration Management and Installation (IMI) Services for the Beampath Infrastructure System (BIS). The BIS includes Beampath Hardware and Beampath Utilities. Conventional Facilities work for the NIF Laser and Target Area Building (LTAB) and Optics Assembly Building (OAB) is over 86 percent constructed. This Scope of Work is for IMI Services corresponding to Management Services, Design Integration Services, Construction Services, and Commissioning Services for the NIF BIS. Beampath Hardware and Beampath Utilities include beampath vessels, enclosures, and beam tubes; auxiliary and utility systems; and support structures. A substantial amount of government-furnished equipment (GFE) will be provided by the University for installation as part of the infrastructure packages.

  1. Determining critical infrastructure for ocean research and societal needs in 2030

    NASA Astrophysics Data System (ADS)

    Glickson, Deborah; Barron, Eric; Fine, Rana

    2011-06-01

    The United States has jurisdiction over 3.4 million square miles of ocean—an expanse greater than the land area of all 50 states combined. This vast marine area offers researchers opportunities to investigate the ocean's role in an integrated Earth system but also presents challenges to society, including damaging tsunamis and hurricanes, industrial accidents, and outbreaks of waterborne diseases. The 2010 Gulf of Mexico Deepwater Horizon oil spill and 2011 Japanese earthquake and tsunami are vivid reminders that a broad range of infrastructure is needed to advance scientists' still incomplete understanding of the ocean. The National Research Council's (NRC) Ocean Studies Board was asked by the National Science and Technology Council's Subcommittee on Ocean Science and Technology, comprising 25 U.S. government agencies, to examine infrastructure needs for ocean research in the year 2030. This request reflects concern, among a myriad of marine issues, over the present state of aging and obsolete infrastructure, insufficient capacity, growing technological gaps, and declining national leadership in marine technological development; these issues were brought to the nation's attention in 2004 by the U.S. Commission on Ocean Policy.

  2. Examining Willingness to Attack Critical Infrastructure Online and Offline

    ERIC Educational Resources Information Center

    Holt, Thomas J.; Kilger, Max

    2012-01-01

    The continuing adoption of technologies by the general public coupled with the expanding reliance of critical infrastructures connected through the Internet has created unique opportunities for attacks by civilians and nation-states alike. Although governments are increasingly focusing on policies to deter nation-state level attacks, it is unclear…

  3. MOBBED: a computational data infrastructure for handling large collections of event-rich time series datasets in MATLAB

    PubMed Central

    Cockfield, Jeremy; Su, Kyungmin; Robbins, Kay A.

    2013-01-01

    Experiments to monitor human brain activity during active behavior record a variety of modalities (e.g., EEG, eye tracking, motion capture, respiration monitoring) and capture a complex environmental context leading to large, event-rich time series datasets. The considerable variability of responses within and among subjects in more realistic behavioral scenarios requires experiments to assess many more subjects over longer periods of time. This explosion of data requires better computational infrastructure to more systematically explore and process these collections. MOBBED is a lightweight, easy-to-use, extensible toolkit that allows users to incorporate a computational database into their normal MATLAB workflow. Although capable of storing quite general types of annotated data, MOBBED is particularly oriented to multichannel time series such as EEG that have event streams overlaid with sensor data. MOBBED directly supports access to individual events, data frames, and time-stamped feature vectors, allowing users to ask questions such as what types of events or features co-occur under various experimental conditions. A database provides several advantages not available to users who process one dataset at a time from the local file system. In addition to archiving primary data in a central place to save space and avoid inconsistencies, such a database allows users to manage, search, and retrieve events across multiple datasets without reading the entire dataset. The database also provides infrastructure for handling more complex event patterns that include environmental and contextual conditions. The database can also be used as a cache for expensive intermediate results that are reused in such activities as cross-validation of machine learning algorithms. MOBBED is implemented over PostgreSQL, a widely used open source database, and is freely available under the GNU general public license at http://visual.cs.utsa.edu/mobbed. Source and issue reports for MOBBED
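
    MOBBED itself is a MATLAB toolkit over PostgreSQL; the Python sketch below only illustrates the core query pattern described above, finding events of one type that co-occur with events of another type within a time window. The rows and the function are invented for illustration, not the MOBBED API.

      # Sketch of an event co-occurrence query across datasets: pairs of
      # type_a/type_b events in the same dataset within `window` seconds.

      events = [  # (dataset, event_type, time_in_seconds) -- invented rows
          ("s01", "stimulus", 10.0), ("s01", "blink", 10.2),
          ("s01", "stimulus", 25.0), ("s02", "stimulus", 5.0),
          ("s02", "blink", 5.1),
      ]

      def cooccurrences(events, type_a, type_b, window=0.5):
          """Pairs of type_a/type_b events co-occurring in the same dataset."""
          return [(a, b) for a in events for b in events
                  if a[0] == b[0] and a[1] == type_a and b[1] == type_b
                  and abs(a[2] - b[2]) <= window]

      print(cooccurrences(events, "stimulus", "blink"))
      # [(('s01', 'stimulus', 10.0), ('s01', 'blink', 10.2)),
      #  (('s02', 'stimulus', 5.0), ('s02', 'blink', 5.1))]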

  4. MOBBED: a computational data infrastructure for handling large collections of event-rich time series datasets in MATLAB.

    PubMed

    Cockfield, Jeremy; Su, Kyungmin; Robbins, Kay A

    2013-01-01

    Experiments to monitor human brain activity during active behavior record a variety of modalities (e.g., EEG, eye tracking, motion capture, respiration monitoring) and capture a complex environmental context leading to large, event-rich time series datasets. The considerable variability of responses within and among subjects in more realistic behavioral scenarios requires experiments to assess many more subjects over longer periods of time. This explosion of data requires better computational infrastructure to more systematically explore and process these collections. MOBBED is a lightweight, easy-to-use, extensible toolkit that allows users to incorporate a computational database into their normal MATLAB workflow. Although capable of storing quite general types of annotated data, MOBBED is particularly oriented to multichannel time series such as EEG that have event streams overlaid with sensor data. MOBBED directly supports access to individual events, data frames, and time-stamped feature vectors, allowing users to ask questions such as what types of events or features co-occur under various experimental conditions. A database provides several advantages not available to users who process one dataset at a time from the local file system. In addition to archiving primary data in a central place to save space and avoid inconsistencies, such a database allows users to manage, search, and retrieve events across multiple datasets without reading the entire dataset. The database also provides infrastructure for handling more complex event patterns that include environmental and contextual conditions. The database can also be used as a cache for expensive intermediate results that are reused in such activities as cross-validation of machine learning algorithms. MOBBED is implemented over PostgreSQL, a widely used open source database, and is freely available under the GNU general public license at http://visual.cs.utsa.edu/mobbed. Source and issue reports for MOBBED

  5. Design and Implement of Astronomical Cloud Computing Environment In China-VO

    NASA Astrophysics Data System (ADS)

    Li, Changhua; Cui, Chenzhou; Mi, Linying; He, Boliang; Fan, Dongwei; Li, Shanshan; Yang, Sisi; Xu, Yunfei; Han, Jun; Chen, Junyi; Zhang, Hailong; Yu, Ce; Xiao, Jian; Wang, Chuanjun; Cao, Zihuang; Fan, Yufeng; Liu, Liang; Chen, Xiao; Song, Wenming; Du, Kangyu

    2017-06-01

    The astronomy cloud computing environment is a cyberinfrastructure for astronomy research initiated by the Chinese Virtual Observatory (China-VO) under funding support from the NDRC (National Development and Reform Commission) and CAS (Chinese Academy of Sciences). Based on virtualization technology, the astronomy cloud computing environment was designed and implemented by the China-VO team. It consists of five distributed nodes across the mainland of China. Astronomers can obtain computing and storage resources in this cloud computing environment. Through this environment, astronomers can easily search and analyze astronomical data collected by different telescopes and data centers, avoiding large-scale dataset transfers.

  6. Secure Infrastructure-Less Network (SINET)

    DTIC Science & Technology

    2017-06-01

    Military leaders and first responders desire the familiarity of commercial-off-the-shelf lightweight mobile devices while...since they lack reliable or secure communication infrastructure. Routine and simple mobile information-sharing tasks become a challenge over the

  7. The building of the EUDAT Cross-Disciplinary Data Infrastructure

    NASA Astrophysics Data System (ADS)

    Lecarpentier, Damien; Michelini, Alberto; Wittenburg, Peter

    2013-04-01

    The EUDAT project is a European data initiative that brings together a unique consortium of 25 partners - including research communities, national data and high performance computing (HPC) centers, technology providers, and funding agencies - from 13 countries. EUDAT aims to build a sustainable cross-disciplinary and cross-national Common Data Infrastructure (CDI) that provides a set of shared services for accessing and preserving research data. The design and deployment of these services is being coordinated by multi-disciplinary task forces comprising representatives from research communities and data centers. One of EUDAT's fundamental goals is the facilitation of cross-disciplinary data-intensive science. By providing opportunity for disciplines from across the spectrum to share data and cross-fertilize ideas, the CDI will encourage progress towards this vision of open and participatory data-intensive science. EUDAT will also facilitate this process through the creation of teams of experts from different disciplines, aiming to cooperatively develop services to meet the needs of several communities. Five research communities joined the EUDAT initiative at the start: CLARIN (Linguistics), ENES (Climate Modeling), EPOS (Earth Sciences), LifeWatch (Environmental Sciences - Biodiversity), and VPH (Biological and Medical Sciences). They are acting as partners in the project, and have clear tasks and commitments. Since EUDAT started on the 1st of October 2011, we have been reviewing the approaches and requirements of these five communities regarding the deployment and use of a cross-disciplinary and persistent data e-Infrastructure. This analysis was conducted through interviews and frequent interactions with representatives of the communities. This talk provides an updated status of the current CDI, with specific reference to the solid Earth science community of EPOS.

  8. Alternative Fuels Data Center: Ethanol Fueling Infrastructure Development

    Science.gov Websites

    Web page on ethanol fueling infrastructure development, linking to case studies (e.g., "California Ramps Up Biofuels Infrastructure" and "Alternative Fuels Help Ensure America's National Parks Stay Green for Another Century") and related publications.

  9. The INDIGO-Datacloud Authentication and Authorization Infrastructure

    NASA Astrophysics Data System (ADS)

    Ceccanti, A.; Hardt, M.; Wegh, B.; Millar, AP; Caberletti, M.; Vianello, E.; Licehammer, S.

    2017-10-01

    Contemporary distributed computing infrastructures (DCIs) are not easily and securely accessible by scientists. These computing environments are typically hard to integrate due to interoperability problems resulting from the use of different authentication mechanisms, identity negotiation protocols and access control policies. Such limitations have a big impact on the user experience, making it hard for user communities to port and run their scientific applications on resources aggregated from multiple providers. The INDIGO-DataCloud project aims to provide the services and tools needed to enable a secure composition of resources from multiple providers in support of scientific applications. In order to do so, a common AAI architecture has to be defined that supports multiple authentication mechanisms, supports delegated authorization across services, and can be easily integrated into off-the-shelf software. In this contribution we introduce the INDIGO Authentication and Authorization Infrastructure, describing its main components and their status and how authentication, delegation and authorization flows are implemented across services.
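
    A hypothetical sketch of the token-based flow such an AAI enables: a client obtains a bearer token from an identity service and presents it to a downstream resource service. The endpoint URLs, credentials, and scope below are invented placeholders, not the actual INDIGO IAM interface.

      # OAuth2-style token flow sketch. All URLs and field values are
      # invented placeholders; requires the third-party `requests` package.

      import requests

      TOKEN_ENDPOINT = "https://iam.example.org/token"        # placeholder
      SERVICE_URL = "https://compute.example.org/api/jobs"    # placeholder

      # Obtain an access token (client-credentials grant, for illustration)
      resp = requests.post(TOKEN_ENDPOINT, data={
          "grant_type": "client_credentials",
          "client_id": "my-portal",
          "client_secret": "s3cret",
          "scope": "compute.submit",
      })
      token = resp.json()["access_token"]

      # Present the bearer token to a downstream service with delegated authority
      jobs = requests.get(SERVICE_URL, headers={"Authorization": f"Bearer {token}"})
      print(jobs.status_code)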

  10. Infrastructure for Multiphysics Software Integration in High Performance Computing-Aided Science and Engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, Michael T.; Safdari, Masoud; Kress, Jessica E.

    The project described in this report constructed and exercised an innovative multiphysics coupling toolkit called the Illinois Rocstar MultiPhysics Application Coupling Toolkit (IMPACT). IMPACT is an open source, flexible, natively parallel infrastructure for coupling multiple uniphysics simulation codes into multiphysics computational systems. IMPACT works with codes written in several high-performance-computing (HPC) programming languages, and is designed from the beginning for HPC multiphysics code development. It is designed to be minimally invasive to the individual physics codes being integrated, and has few requirements on those physics codes for integration. The goal of IMPACT is to provide the support needed to enable coupling existing tools together in unique and innovative ways to produce powerful new multiphysics technologies without extensive modification and rewrite of the physics packages being integrated. There are three major outcomes from this project: 1) construction, testing, application, and open-source release of the IMPACT infrastructure, 2) production of example open-source multiphysics tools using IMPACT, and 3) identification and engagement of interested organizations in the tools and applications resulting from the project. This last outcome represents the incipient development of a user community and application ecosystem being built using IMPACT. Multiphysics coupling standardization can only come from organizations working together to define needs and processes that span the space of necessary multiphysics outcomes, which Illinois Rocstar plans to continue driving toward. The IMPACT system, including source code, documentation, and test problems are all now available through the public gitHUB.org system to anyone interested in multiphysics code coupling. Many of the basic documents explaining use and architecture of IMPACT are also attached as appendices to this document. Online HTML documentation is available through the git
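
    The coupling pattern such a toolkit supports can be suggested with a toy example: two stand-in uniphysics solvers advance in lockstep while a neutral layer exchanges their interface fields each step. This Python sketch illustrates the pattern only; it is not the IMPACT API.

      # Minimal sketch (not the IMPACT API) of lockstep multiphysics coupling:
      # each solver advances using the other's interface field from the
      # previous step, exchanged through a neutral coupling layer.

      def fluid_step(pressure, wall_temp):
          # stand-in for a flow solver: pressure relaxes toward wall temperature
          return pressure + 0.1 * (wall_temp - pressure)

      def thermal_step(wall_temp, pressure):
          # stand-in for a heat-conduction solver driven by fluid pressure
          return wall_temp + 0.05 * (pressure - wall_temp)

      pressure, wall_temp = 1.0, 5.0
      for step in range(10):
          # coupling layer: exchange interface fields, then advance both codes
          p_new = fluid_step(pressure, wall_temp)
          t_new = thermal_step(wall_temp, pressure)
          pressure, wall_temp = p_new, t_new

      print(f"interface state after coupling: p={pressure:.3f}, T={wall_temp:.3f}")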

  11. EO Data as a Critical Element of the National Spatial Data Infrastructure (NSDI)

    NASA Astrophysics Data System (ADS)

    Rao, Mukund; Dasgupta, A. R.; Kasturirangan, K.

    India has, over the past years, produced a rich "base" of map information through systematic topographic surveys, geological surveys, soil surveys, cadastral surveys, various natural resources inventory programmes and the use of remote sensing images. Further, with the availability of precise, high-resolution satellite image data enabling the organisation of GIS, combined with the Global Positioning System (GPS), the accuracy and information content of these spatial datasets or maps are extremely high. Encapsulating these maps and images into a National Spatial Data Infrastructure (NSDI) is the need of the hour and the emphasis has to be on information transparency and sharing, with the recognition that spatial information is a national resource and citizens, society, private enterprise and government have a right to access it, appropriately. Only through common conventions and technical agreements, standards, metadata definitions, network and access protocols will it be easily possible for the NSDI to come into existence. India has now a NSDI strategy and the "NSDI Strategy and Action Plan" report has been prepared and is being opened up to a national debate. The first steps have been taken; the end goal is farther away but now in sight. While Government must provide the lead, private enterprise, NGOs and academia have a major role to play in making the NSDI a reality. NSDI will require the coming together of various "groups" and harmonizing their efforts in making this national endeavor a success. The paper discusses how the convergence of technologies is being strategised in NSDI - specifically the input of EO images and GIS technologies - and how the nation would benefit from access to these datasets. The paper also discusses and illustrates with specific examples the techniques being developed and how the NSDI would support development efforts in the country. The paper also highlights the role of EO images in the NSDI - especially in the access and

  12. National Laboratory for Advanced Scientific Visualization at UNAM - Mexico

    NASA Astrophysics Data System (ADS)

    Manea, Marina; Constantin Manea, Vlad; Varela, Alfredo

    2016-04-01

    In 2015, the National Autonomous University of Mexico (UNAM) joined the family of universities and research centers where advanced visualization and computing play a key role in promoting and advancing missions in research, education, community outreach, and business-oriented consulting. This initiative provides access to a great variety of advanced hardware and software resources and offers a range of consulting services spanning areas related to scientific visualization, among which are: neuroanatomy, embryonic development, genome-related studies, geosciences, geography, and physics- and mathematics-related disciplines. The National Laboratory for Advanced Scientific Visualization delivers services through three main infrastructure environments: the fully immersive 3D display system (the Cave), the high-resolution parallel visualization system (the Powerwall), and the high-resolution spherical display (the Earth Simulator). The entire visualization infrastructure is interconnected to a high-performance computing cluster (HPCC) called ADA, in honor of Ada Lovelace, considered to be the first computer programmer. The Cave is an extra-large, 3.6 m wide room with images projected on the front, left, and right walls as well as the floor. Specialized crystal-eyes LCD-shutter glasses provide strong stereo depth perception, and a variety of tracking devices allow software to track the position of a user's hand, head and wand. The Powerwall is designed to bring large amounts of complex data together through parallel computing for team interaction and collaboration. This system is composed of 24 (6x4) high-resolution, ultra-thin (2 mm) bezel monitors connected to a high-performance GPU cluster. The Earth Simulator is a large (60") high-resolution spherical display used for global-scale data visualization, such as geophysical, meteorological, climate and ecology data. The HPCC-ADA is a 1000+ computing core system, which offers parallel computing resources to applications that require

  13. Amendments to the Drinking Water Infrastructure Grants Program as Required by the Water Infrastructure Improvements for the Nation Act

    EPA Pesticide Factsheets

    The WIIN Act has expanded the activities that qualify for Drinking Water Infrastructure Grant Tribal Set-Aside (DWIG-TSA) funding to include training and operator certification for operators of public water systems (PWSs) serving American Indians and Alaska Natives.

  14. Acoustic emission safety monitoring of intermodal transportation infrastructure.

    DOT National Transportation Integrated Search

    2015-09-01

    Safety and integrity of the national transportation infrastructure are of paramount importance, and highway bridges are critical components of the highway system network. This network provides an immense contribution to industry productivity and e...

  15. Toward Information Infrastructure Studies: Ways of Knowing in a Networked Environment

    NASA Astrophysics Data System (ADS)

    Bowker, Geoffrey C.; Baker, Karen; Millerand, Florence; Ribes, David

    This article presents Information Infrastructure Studies, a research area that takes up some core issues in digital information and organization research. Infrastructure Studies simultaneously addresses the technical, social, and organizational aspects of the development, usage, and maintenance of infrastructures in local communities as well as global arenas. While infrastructure is understood as a broad category referring to a variety of pervasive, enabling network resources such as railroad lines, plumbing and pipes, electrical power plants and wires, this article focuses on information infrastructure, such as computational services and help desks, or federating activities such as scientific data repositories and archives spanning the multiple disciplines needed to address such issues as climate warming and the biodiversity crisis. These are elements associated with the internet and, frequently today, associated with cyberinfrastructure or e-science endeavors. We argue that a theoretical understanding of infrastructure provides the context for needed dialogue between design, use, and sustainability of internet-based infrastructure services. This article outlines the research area and the overarching themes of Infrastructure Studies. Part one of the paper presents definitions for infrastructure and cyberinfrastructure, reviewing salient previous work. Part two portrays key ideas from infrastructure studies (knowledge work, social and political values, new forms of sociality, etc.). In closing, the character of the field today is considered.

  16. Reliable Communication Models in Interdependent Critical Infrastructure Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Sangkeun; Chinthavali, Supriya; Shankar, Mallikarjun

    Modern critical infrastructure networks are becoming increasingly interdependent, where failures in one network may cascade to other dependent networks, causing severe widespread national-scale failures. A number of previous efforts have been made to analyze the resiliency and robustness of interdependent networks based on different models. However, the communication network, which plays an important role in today's infrastructures to detect and handle failures, has attracted little attention in interdependency studies, and no previous models have captured enough practical features of critical infrastructure networks. In this paper, we study the interdependencies between communication networks and other kinds of critical infrastructure networks with an aim to identify vulnerable components and design resilient communication networks. We propose several interdependency models that systematically capture various features and dynamics of failures spreading in critical infrastructure networks. We also discuss several research challenges in building reliable communication solutions to handle failures in these models.
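
    In the spirit of these interdependency models (though with invented details), the Python sketch below propagates a single physical failure back and forth between a power network and a communication network until the cascade stabilizes.

      # Toy cascade simulation: each power node needs its counterpart
      # communication node and vice versa, so an initial failure propagates
      # back and forth until no new failures occur. Topology is invented.

      power_deps = {"p1": "c1", "p2": "c2", "p3": "c3"}   # power node -> comm node
      comm_deps = {"c1": "p1", "c2": "p2", "c3": "p1"}    # comm node -> power node

      failed = {"p1"}                     # initial physical failure
      changed = True
      while changed:
          changed = False
          for p, c in power_deps.items():   # power fails if its comm node failed
              if c in failed and p not in failed:
                  failed.add(p); changed = True
          for c, p in comm_deps.items():    # comm fails if its power node failed
              if p in failed and c not in failed:
                  failed.add(c); changed = True

      print(sorted(failed))   # ['c1', 'c3', 'p1', 'p3'] -- the cascade's extent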

  17. The Impact of Process Capability on Service Reliability for Critical Infrastructure Providers

    ERIC Educational Resources Information Center

    Houston, Clemith J., Jr.

    2013-01-01

    This study investigated the relationship between organizational processes that have been identified as promoting resiliency and their impact on service reliability within the scope of critical infrastructure providers. The importance of critical infrastructure to the nation is evident from the body of research and is supported by instances where…

  18. Executable research compendia in geoscience research infrastructures

    NASA Astrophysics Data System (ADS)

    Nüst, Daniel

    2017-04-01

    From generation through analysis and collaboration to communication, scientific research requires the right tools. Scientists create their own software using third party libraries and platforms. Cloud computing, Open Science, public data infrastructures, and Open Source provide scientists with unprecedented opportunities, nowadays often in a field "Computational X" (e.g. computational seismology) or X-informatics (e.g. geoinformatics) [0]. This increases complexity and generates more innovation, e.g. Environmental Research Infrastructures (environmental RIs [1]). Researchers in Computational X write their software relying on both source code (e.g. from https://github.com) and binary libraries (e.g. from package managers such as APT, https://wiki.debian.org/Apt, or CRAN, https://cran.r-project.org/). They download data from domain specific (cf. https://re3data.org) or generic (e.g. https://zenodo.org) data repositories, and deploy computations remotely (e.g. European Open Science Cloud). The results themselves are archived, given persistent identifiers, connected to other works (e.g. using https://orcid.org/), and listed in metadata catalogues. A single researcher, intentionally or not, interacts with all sub-systems of RIs: data acquisition, data access, data processing, data curation, and community support [3]. To preserve computational research, [3] proposes the Executable Research Compendium (ERC), a container format closing the gap of dependency preservation by encapsulating the runtime environment. ERCs and RIs can be integrated for different uses: (i) Coherence: ERC services validate completeness, integrity and results (ii) Metadata: ERCs connect the different parts of a piece of research and facilitate discovery (iii) Exchange and Preservation: ERCs as usable building blocks are the shared and archived entity (iv) Self-consistency: ERCs remove dependence on ephemeral sources (v) Execution: ERC services create and execute a packaged analysis but integrate with
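
    A hedged sketch of the coherence check described in point (i): treat a compendium as a manifest naming its code, data, runtime environment, and expected output, and validate completeness. The required parts and manifest fields are illustrative, not the actual ERC specification.

      # Completeness check for an ERC-style compendium manifest.
      # Field names and required parts are invented for illustration.

      REQUIRED_PARTS = {"metadata", "workspace", "runtime", "expected_output"}

      manifest = {
          "metadata": {"id": "erc-2017-0001", "title": "seismic event analysis"},
          "workspace": ["main.R", "data/catalog.csv"],
          "runtime": "container.tar",          # archived runtime environment
          "expected_output": "figures/fig1.png",
      }

      missing = REQUIRED_PARTS - manifest.keys()
      print("complete" if not missing else f"incomplete, missing: {sorted(missing)}")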

  19. 15 CFR 292.4 - Information infrastructure projects.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    Section 292.4, Information infrastructure projects. Title 15, Commerce and Foreign Trade; Regulations Relating to Commerce and Foreign Trade; NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY, DEPARTMENT OF COMMERCE; NIST EXTRAMURAL PROGRAMS...

  20. 15 CFR 292.4 - Information infrastructure projects.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    Section 292.4, Information infrastructure projects. Title 15, Commerce and Foreign Trade; Regulations Relating to Commerce and Foreign Trade; NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY, DEPARTMENT OF COMMERCE; NIST EXTRAMURAL PROGRAMS...

  1. 15 CFR 292.4 - Information infrastructure projects.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    Section 292.4, Information infrastructure projects. Title 15, Commerce and Foreign Trade; Regulations Relating to Commerce and Foreign Trade; NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY, DEPARTMENT OF COMMERCE; NIST EXTRAMURAL PROGRAMS...

  2. Computer-Based National Information Systems. Technology and Public Policy Issues.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. Office of Technology Assessment.

    A general introduction to computer-based national information systems, and the context and basis for future studies, are provided in this report. Chapter One, the introduction, summarizes computers and information systems and their relation to society, the structure of information policy issues, and public policy issues. Chapter Two describes the…

  3. The national public's values and interests related to the Arctic National Wildlife Refuge: A computer content analysis

    Treesearch

    David N. Bengston; David P. Fan; Roger Kaye

    2010-01-01

    This study examined the national public's values and interests related to the Arctic National Wildlife Refuge. Computer content analysis was used to analyze more than 23,000 media stories about the refuge from 1995 through 2007. Ten main categories of Arctic National Wildlife Refuge values and interests emerged from the analysis, reflecting a diversity of values,...

  4. Aging Water Infrastructure Research Program Update: Innovation & Research for the 21st Century

    EPA Science Inventory

    This slide presentation summarizes key elements of the EPA Office of Research and Development's (ORD) Aging Water Infrastructure (AWI) Research Program. An overview of the national problems posed by aging water infrastructure is followed by a brief description of EPA's overall...

  5. Collaboratively Architecting a Scalable and Adaptable Petascale Infrastructure to Support Transdisciplinary Scientific Research for the Australian Earth and Environmental Sciences

    NASA Astrophysics Data System (ADS)

    Wyborn, L. A.; Evans, B. J. K.; Pugh, T.; Lescinsky, D. T.; Foster, C.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) at the Australian National University (ANU) is a partnership between CSIRO, ANU, the Bureau of Meteorology (BoM) and Geoscience Australia. Recent investments in a 1.2 PFlop supercomputer (Raijin), ~20 PB of data storage using Lustre filesystems and a 3000-core high-performance cloud have created a hybrid platform for high-performance computing and data-intensive science to enable large-scale earth and climate systems modelling and analysis. There are >3000 users actively logging in and >600 projects on the NCI system. Efficiently scaling and adapting data and software systems to petascale infrastructures requires the collaborative development of an architecture that is designed, programmed and operated to enable users to interactively invoke different forms of in-situ computation over complex and large-scale data collections. NCI makes available major and long-tail data collections from both the government and research sectors based on six themes: 1) weather, climate and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology and 6) astronomy, bio and social sciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. Collections are the operational form for data management and access. Similar data types from individual custodians are managed cohesively. Use of international standards for discovery and interoperability allows complex interactions within and between the collections. This design facilitates a transdisciplinary approach to research and enables a shift from small-scale, 'stove-piped' science efforts to large-scale, collaborative systems science. This new and complex infrastructure requires a move to shared, globally trusted software frameworks that can be maintained and updated. Workflow engines become essential and need to integrate provenance, versioning, traceability, repeatability

  6. 75 FR 55616 - NASA Advisory Council; Information Technology Infrastructure Committee; Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-13

    ... NATIONAL AERONAUTICS AND SPACE ADMINISTRATION [Notice: (10-110)] NASA Advisory Council...-463, as amended, the National Aeronautics and Space Administration (NASA) announces a meeting for the Information Technology Infrastructure Committee of the NASA Advisory Council (NAC). DATES: Tuesday, September...

  7. LEMON - LHC Era Monitoring for Large-Scale Infrastructures

    NASA Astrophysics Data System (ADS)

    Marian, Babik; Ivan, Fedorko; Nicholas, Hook; Hector, Lansdale Thomas; Daniel, Lenkes; Miroslav, Siket; Denis, Waldron

    2011-12-01

    Computer centres are currently facing a massive rise in virtualization and cloud computing, as these solutions bring advantages to service providers and consolidate computer centre resources. As a result, however, monitoring complexity is increasing. Computer centre management requires monitoring not only servers, network equipment and associated software, but also additional environment and facilities data (e.g. temperature, power consumption, cooling efficiency, etc.) in order to maintain a good overview of infrastructure performance. The LHC Era Monitoring (Lemon) system addresses these requirements for a very large scale infrastructure. The Lemon agent, which collects data on every client and forwards the samples to the central measurement repository, provides a flexible interface that allows rapid development of new sensors. The system can also report on behalf of remote devices such as switches and power supplies. Online and historical data can be visualized via a web-based interface or retrieved via command-line tools. The Lemon Alarm System component can be used for notifying the operator about error situations. In this article, an overview of Lemon monitoring is provided together with a description of the CERN LEMON production instance. No direct comparison is made with other monitoring tools.
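
    The agent-and-sensor split described above can be summarized schematically. The sketch below is a Python analogue only, not Lemon's actual sensor API; the repository URL and host name are placeholders.

      # Schematic analogue of a Lemon-style agent: pluggable sensors produce
      # samples that are forwarded to a central measurement repository.
      import json, time, urllib.request

      def loadavg_sensor():
          """Example sensor: 1-minute load average on a Linux host."""
          with open("/proc/loadavg") as f:
              return {"metric": "loadavg1", "value": float(f.read().split()[0])}

      SENSORS = [loadavg_sensor]  # new sensors are added by appending callables

      def collect_and_forward(repository_url):
          samples = [dict(host="node01", ts=time.time(), **s()) for s in SENSORS]
          req = urllib.request.Request(repository_url,
                                       data=json.dumps(samples).encode(),
                                       headers={"Content-Type": "application/json"})
          urllib.request.urlopen(req)  # push to the central repository

      while True:
          collect_and_forward("http://measurements.example.org/submit")
          time.sleep(60)  # sampling interval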

  8. National Hydroclimatic Change and Infrastructure Adaptation Assessment: Region-Specific Adaptation Factors

    EPA Science Inventory

    Climate change, land use and socioeconomic developments are principal variables that define the need and scope of adaptive engineering and management to sustain water resource and infrastructure development. As described in IPCC (2007), hydroclimatic changes in the next 30-50 ye...

  9. Improving FHWA's Ability to Assess Highway Infrastructure Health : National Meeting Report

    DOT National Transportation Integrated Search

    2011-12-08

    The FHWA in coordination with AASHTO conducted a study to define a consistent and reliable method to document infrastructure health with a focus on pavements and bridges on the Interstate System, and to develop a framework for tools that can provide ...

  10. Network Interdependency Modeling for Risk Assessment on Built Infrastructure Systems

    DTIC Science & Technology

    2013-10-01

    does begin to address infrastructure decay as a source of risk comes from the Department of Homeland Security (DHS). In 2009, the DHS Science and...network of connected edges and nodes. The National Research Council (2005) reported that the study of networks as a science and applications of...principles from this science are still in its early stages. As modern infrastructures have become more interlinked, knowledge of an infrastructure’s network

  11. Measuring infrastructure: A key step in program evaluation and planning

    PubMed Central

    Schmitt, Carol L.; Glasgow, LaShawn; Lavinghouze, S. Rene; Rieker, Patricia P.; Fulmer, Erika; McAleer, Kelly; Rogers, Todd

    2016-01-01

    State tobacco prevention and control programs (TCPs) require a fully functioning infrastructure to respond effectively to the Surgeon General’s call for accelerating the national reduction in tobacco use. The literature describes common elements of infrastructure; however, a lack of valid and reliable measures has made it difficult for program planners to monitor relevant infrastructure indicators and address observed deficiencies, or for evaluators to determine the association among infrastructure, program efforts, and program outcomes. The Component Model of Infrastructure (CMI) is a comprehensive, evidence-based framework that facilitates TCP program planning efforts to develop and maintain their infrastructure. Measures of CMI components were needed to evaluate the model’s utility and predictive capability for assessing infrastructure. This paper describes the development of CMI measures and results of a pilot test with nine state TCP managers. Pilot test findings indicate that the tool has good face validity and is clear and easy to follow. The CMI tool yields data that can enhance public health efforts in a funding-constrained environment and provides insight into program sustainability. Ultimately, the CMI measurement tool could facilitate better evaluation and program planning across public health programs. PMID:27037655

  12. Measuring infrastructure: A key step in program evaluation and planning.

    PubMed

    Schmitt, Carol L; Glasgow, LaShawn; Lavinghouze, S Rene; Rieker, Patricia P; Fulmer, Erika; McAleer, Kelly; Rogers, Todd

    2016-06-01

    State tobacco prevention and control programs (TCPs) require a fully functioning infrastructure to respond effectively to the Surgeon General's call for accelerating the national reduction in tobacco use. The literature describes common elements of infrastructure; however, a lack of valid and reliable measures has made it difficult for program planners to monitor relevant infrastructure indicators and address observed deficiencies, or for evaluators to determine the association among infrastructure, program efforts, and program outcomes. The Component Model of Infrastructure (CMI) is a comprehensive, evidence-based framework that facilitates TCP program planning efforts to develop and maintain their infrastructure. Measures of CMI components were needed to evaluate the model's utility and predictive capability for assessing infrastructure. This paper describes the development of CMI measures and results of a pilot test with nine state TCP managers. Pilot test findings indicate that the tool has good face validity and is clear and easy to follow. The CMI tool yields data that can enhance public health efforts in a funding-constrained environment and provides insight into program sustainability. Ultimately, the CMI measurement tool could facilitate better evaluation and program planning across public health programs. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. S3DB core: a framework for RDF generation and management in bioinformatics infrastructures

    PubMed Central

    2010-01-01

    Background Biomedical research is set to greatly benefit from the use of semantic web technologies in the design of computational infrastructure. However, beyond well-defined research initiatives, substantial issues of data heterogeneity, source distribution, and privacy currently stand in the way of the personalization of medicine. Results A computational framework for bioinformatic infrastructure was designed to deal with the heterogeneous data sources and the sensitive mixture of public and private data that characterizes the biomedical domain. This framework consists of a logical model built with semantic web tools, coupled with a Markov process that propagates user operator states. An accompanying open source prototype was developed to meet a series of applications that range from collaborative multi-institution data acquisition efforts to data analysis applications that need to quickly traverse complex data structures. This report describes the two abstractions underlying the S3DB-based infrastructure, logical and numerical, and discusses its generality beyond the immediate confines of existing implementations. Conclusions The emergence of the "web as a computer" requires a formal model for the different functionalities involved in reading and writing to it. The S3DB core model proposed was found to address the design criteria of biomedical computational infrastructure, such as those supporting large-scale multi-investigator research, clinical trials, and molecular epidemiology. PMID:20646315
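
    To make the RDF-generation side of such a framework concrete, the snippet below builds and serializes a small graph with rdflib. The namespace and predicates are invented for illustration and are not the S3DB schema.

      # Toy RDF generation with rdflib; the vocabulary is illustrative only.
      from rdflib import Graph, Literal, Namespace, URIRef
      from rdflib.namespace import RDF

      EX = Namespace("http://example.org/s3db/")  # placeholder namespace

      g = Graph()
      sample = URIRef(EX["sample/42"])
      g.add((sample, RDF.type, EX.BiologicalSample))
      g.add((sample, EX.collectedBy, Literal("Institution A")))
      g.add((sample, EX.accessPolicy, Literal("private")))  # public/private mix

      print(g.serialize(format="turtle"))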

  14. Security Engineering and Educational Initiatives for Critical Information Infrastructures

    DTIC Science & Technology

    2013-06-01

    standard for cryptographic protection of SCADA communications. The United Kingdom’s National Infrastructure Security Co-ordination Centre (NISCC...has released a good practice guide on firewall deployment for SCADA systems and process control networks [17]. Meanwhile, National Institute for ...report. APPROVED FOR PUBLIC RELEASE; DISTRIBUTION UNLIMITED 18 The SCADA gateway collects the data gathered by sensors, translates them from

  15. Campus Computing 1993. The USC National Survey of Desktop Computing in Higher Education.

    ERIC Educational Resources Information Center

    Green, Kenneth C.; Eastman, Skip

    A national survey of desktop computing in higher education was conducted in spring and summer 1993 at over 2500 institutions. Data were responses from public and private research universities, public and private four-year colleges and community colleges. Respondents (N=1011) were individuals specifically responsible for the operation and future…

  16. SpecialNet. A National Computer-Based Communications Network.

    ERIC Educational Resources Information Center

    Morin, Alfred J.

    1986-01-01

    "SpecialNet," a computer-based communications network for educators at all administrative levels, has been established and is managed by National Systems Management, Inc. Users can send and receive electronic mail, share information on electronic bulletin boards, participate in electronic conferences, and send reports and other documents to each…

  17. NASA World Wind: Infrastructure for Spatial Data

    NASA Technical Reports Server (NTRS)

    Hogan, Patrick

    2011-01-01

    The world has great need for analysis of Earth observation data, be it climate change, carbon monitoring, disaster response, national defense or simply local resource management. To best provide for spatial and time-dependent information analysis, the world benefits from an open-standards and open-source infrastructure for spatial data. In the spirit of NASA's motto "for the benefit of all", NASA invites the world community to collaboratively advance this core technology. The World Wind infrastructure for spatial data both unites and challenges the world to find innovative solutions for analyzing spatial data, while also allowing absolute command and control over any respective information exchange medium.

  18. 76 FR 17934 - Infrastructure Protection Data Call

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-31

    ...), National Protection and Programs Directorate (NPPD), Office of Infrastructure Protection (IP), will submit... Collection Request should be forwarded to DHS/NPPD/IP, 245 Murray Lane, SW., Mail Stop 0602, Arlington, VA..., this responsibility is managed by IP within NPPD. Beginning in Fiscal Year 2006, IP engaged in the...

  19. The virtual machine (VM) scaler: an infrastructure manager supporting environmental modeling on IaaS clouds

    USDA-ARS?s Scientific Manuscript database

    Infrastructure-as-a-service (IaaS) clouds provide a new medium for deployment of environmental modeling applications. Harnessing advancements in virtualization, IaaS clouds can provide dynamic scalable infrastructure to better support scientific modeling computational demands. Providing scientific m...
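
    Although this record is truncated, the core job of such a scaler is easy to illustrate: map demand to a target pool size and act on the difference. The rule and numbers below are invented, and the launch/terminate messages stand in for calls to a real IaaS client.

      # Toy threshold rule for a VM scaler; jobs_per_vm and bounds are assumed.
      def plan_vm_count(queued_jobs, jobs_per_vm=4, min_vms=1, max_vms=32):
          """Return the pool size the infrastructure should converge to."""
          wanted = max(min_vms, -(-queued_jobs // jobs_per_vm))  # ceiling division
          return min(wanted, max_vms)

      current = 3
      target = plan_vm_count(queued_jobs=22)
      if target > current:
          print("launch", target - current, "VMs")      # via an IaaS API client
      elif target < current:
          print("terminate", current - target, "VMs")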

  20. Infrastructure State Implementation Plan (SIP) Requirements and Guidance

    EPA Pesticide Factsheets

    The Clean Air Act requires states to submit SIPs that implement, maintain, and enforce a new or revised national ambient air quality standard (NAAQS) within 3 years of EPA issuing the standard. The Infrastructure SIP is required for all states.

  1. Increasing the productivity of the nation's urban transportation infrastructure: Measures to increase transit use and carpooling. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kain, J.F.; Gittell, R.; Daniere, A.

    1992-01-01

    The report surveys the growing use of bus and carpool priority measures to increase the productivity of the nation's transportation infrastructure. While it identifies a wide variety of priority measures, the report principally focuses on the planning and operation of exclusive and shared busways and high occupancy vehicle (HOV) facilities. It presents a variety of case studies describing the implementation of busways and transitways. The document also compares the cost effectiveness of exclusive busways and bus-HOV facilities with the cost effectiveness of recently completed light and heavy rail lines. It also explores the options and problems in serving large downtown areas.

  2. Infrastructure for Training and Partnershipes: California Water and Coastal Ocean Resources

    NASA Technical Reports Server (NTRS)

    Siegel, David A.; Dozier, Jeffrey; Gautier, Catherine; Davis, Frank; Dickey, Tommy; Dunne, Thomas; Frew, James; Keller, Arturo; MacIntyre, Sally; Melack, John

    2000-01-01

    The purpose of this project was to advance the existing ICESS/Bren School computing infrastructure to allow scientists, students, and research trainees the opportunity to interact with environmental data and simulations in near-real time. Improvements made with the funding from this project have helped to strengthen the research efforts within both units, fostered graduate research training, and helped fortify partnerships with government and industry. With this funding, we were able to expand our computational environment in which computer resources, software, and data sets are shared by ICESS/Bren School faculty researchers in all areas of Earth system science. All of the graduate and undergraduate students associated with the Donald Bren School of Environmental Science and Management and the Institute for Computational Earth System Science have benefited from the infrastructure upgrades accomplished by this project. Additionally, the upgrades fostered a significant number of research projects (attached is a list of the projects that benefited from the upgrades). As originally proposed, funding for this project provided the following infrastructure upgrades: 1) a modern file management system capable of interoperating UNIX and NT file systems that can scale to 6.7 TB, 2) a Qualstar 40-slot tape library with two AIT tape drives and Legato Networker backup/archive software, 3) previously unavailable import/export capability for data sets on Zip, Jaz, DAT, 8mm, CD, and DLT media in addition to a 622 Mb/s Internet 2 connection, 4) network switches capable of 100 Mbps to 128 desktop workstations, 5) a Portable Batch System (PBS) computational task scheduler, and 6) two Compaq/Digital Alpha XP1000 compute servers each with 1.5 GB of RAM along with an SGI Origin 2000 (purchased partially using funds from this project along with funding from various other sources) to be used for very large computations, as required for simulation of mesoscale meteorology or climate.

  3. Making Network Markets in Education: The Development of Data Infrastructure in Australian Schooling

    ERIC Educational Resources Information Center

    Sellar, Sam

    2017-01-01

    This paper examines the development of data infrastructure in Australian schooling with a specific focus on interoperability standards that help to make new markets for education data. The conceptual framework combines insights from studies of infrastructure, economic markets and digital data. The case of the Australian National Schools…

  4. A System Dynamics Model to Study the Importance of Infrastructure Facilities on Quality of Primary Education System in Developing Countries

    NASA Astrophysics Data System (ADS)

    Pedamallu, Chandra Sekhar; Ozdamar, Linet; Weber, Gerhard-Wilhelm; Kropat, Erik

    2010-06-01

    The system dynamics approach is a holistic way of solving problems in real-time scenarios. It is a powerful methodology and computer simulation modeling technique for framing, analyzing, and discussing complex issues and problems. System dynamics modeling and simulation is often the background of a systemic thinking approach and has become a management and organizational development paradigm. This paper proposes a system dynamics approach for studying the importance of infrastructure facilities on the quality of the primary education system in developing nations. The model is proposed to be built using the Cross Impact Analysis (CIA) method of relating entities and attributes relevant to the primary education system in any given community. We offer a survey to build the cross-impact correlation matrix and, hence, to better understand the primary education system and the importance of infrastructural facilities on the quality of primary education. The resulting model enables us to predict the effects of infrastructural facilities on the community's access to primary education. This may support policy makers in taking more effective actions in campaigns.
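
    The cross-impact mechanism can be sketched numerically: each variable nudges the others through a matrix of impact weights that the survey would populate. The variables, weights, and update rule below are invented solely to show the shape of the computation.

      # Invented cross-impact simulation: variable j grows with sum_i state[i]*impact[i][j].
      variables = ["infrastructure", "teacher_retention", "enrolment"]
      impact = [
          [0.0, 0.3, 0.4],   # facilities help retention and enrolment (assumed)
          [0.0, 0.0, 0.2],   # retained teachers lift enrolment (assumed)
          [0.0, 0.0, 0.0],
      ]
      state = [0.5, 0.4, 0.3]  # initial levels on a 0..1 scale

      for step in range(10):
          delta = [sum(state[i] * impact[i][j] for i in range(len(state)))
                   for j in range(len(state))]
          state = [min(1.0, s + 0.1 * d) for s, d in zip(state, delta)]

      print({v: round(s, 2) for v, s in zip(variables, state)})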

  5. Assessing the uptake of persistent identifiers by research infrastructure users

    PubMed Central

    Maull, Keith E.

    2017-01-01

    Significant progress has been made in the past few years in the development of recommendations, policies, and procedures for creating and promoting citations to data sets, software, and other research infrastructures like computing facilities. Open questions remain, however, about the extent to which referencing practices of authors of scholarly publications are changing in ways desired by these initiatives. This paper uses four focused case studies to evaluate whether research infrastructures are being increasingly identified and referenced in the research literature via persistent citable identifiers. The findings of the case studies show that references to such resources are increasing, but that the patterns of these increases are variable. In addition, the study suggests that citation practices for data sets may change more slowly than citation practices for software and research facilities, due to the inertia of existing practices for referencing the use of data. Similarly, existing practices for acknowledging computing support may slow the adoption of formal citations for computing resources. PMID:28394907
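
    A crude, scriptable version of the measurements behind such case studies is to scan full text for persistent identifiers. The regular expression below matches DOI-like strings only approximately and is far simpler than the study's actual method.

      # Approximate DOI counter; the pattern is simplified for illustration.
      import re

      DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/\S+")

      def count_pid_references(fulltext):
          return len(DOI_PATTERN.findall(fulltext))

      sample = "Data are available at https://doi.org/10.5065/D6RN35ST (NCAR)."
      print(count_pid_references(sample))  # -> 1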

  6. Proceedings from the conference on high speed computing: High speed computing and national security

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirons, K.P.; Vigil, M.; Carlson, R.

    1997-07-01

    This meeting covered the following topics: technologies/national needs/policies: past, present and future; information warfare; crisis management/massive data systems; risk assessment/vulnerabilities; Internet law/privacy and rights of society; challenges to effective ASCI programmatic use of 100 TFLOPs systems; and new computing technologies.

  7. Building Efficient Wireless Infrastructures for Pervasive Computing Environments

    ERIC Educational Resources Information Center

    Sheng, Bo

    2010-01-01

    Pervasive computing is an emerging concept that thoroughly brings computing devices and the consequent technology into people's daily life and activities. Most of these computing devices are very small, sometimes even "invisible", and often embedded into the objects surrounding people. In addition, these devices usually are not isolated, but…

  8. The Next Generation of Lab and Classroom Computing - The Silver Lining

    DTIC Science & Technology

    2016-12-01

    A virtual desktop infrastructure (VDI) solution, as well as the computing solutions at three universities, was selected as the basis for comparison. Keywords: infrastructure, VDI, hardware cost, software cost, manpower, availability, cloud computing, private cloud, bring your own device, BYOD, thin client.

  9. A national assessment of green infrastructure and change for the conterminous United States using morphological image processing

    Treesearch

    J.D Wickham; Kurt H. Riitters; T.G. Wade; P. Vogt

    2010-01-01

    Green infrastructure is a popular framework for conservation planning. The main elements of green infrastructure are hubs and links. Hubs tend to be large areas of ‘natural’ vegetation and links tend to be linear features (e.g., streams) that connect hubs. Within the United States, green infrastructure projects can be characterized as: (...

  10. Successful introduction of an underutilized elderly pneumococcal vaccine in a national immunization program by integrating the pre-existing public health infrastructure.

    PubMed

    Yang, Tae Un; Kim, Eunsung; Park, Young-Joon; Kim, Dongwook; Kwon, Yoon Hyung; Shin, Jae Kyong; Park, Ok

    2016-03-18

    Although pneumococcal vaccines had been recommended for the elderly population in South Korea for a considerable period of time, coverage has remained well below the optimal level. To increase the vaccination rate by integrating the pre-existing public health infrastructure and governmental funding, the Korean government introduced an elderly pneumococcal vaccination into the national immunization program with a 23-valent pneumococcal polysaccharide vaccine in May 2013. The aim of this study was to assess the performance of the program in increasing the vaccine coverage rate and maintaining stable vaccine supply and safe vaccination during the 20 months of the program. We qualitatively and quantitatively analyzed the process of introducing and the outcomes of the program in terms of the systematic organization, efficiency, and stability at the national level. A staggered introduction during the first year utilizing the public sector, with a target coverage of 60%, was implemented based on the public demand for an elderly pneumococcal vaccination, vaccine supply capacity, vaccine delivery capacity, safety, and sustainability. During the 20-month program period, the pneumococcal vaccine coverage rate among the population aged ≥65 years increased from 5.0% to 57.3% without a noticeable vaccine shortage or safety issues. A web-based integrated immunization information system, which includes the immunization registry, vaccine supply chain management, and surveillance of adverse events following immunization, reduced programmatic errors and harmonized the overall performance of the program. Introduction of an elderly pneumococcal vaccination in the national immunization program based on strong government commitment, meticulous preparation, financial support, and the pre-existing public health infrastructure resulted in an efficient, stable, and sustainable increase in vaccination coverage. Copyright © 2016. Published by Elsevier Ltd.

  11. A comprehensive typology for mainstreaming urban green infrastructure

    NASA Astrophysics Data System (ADS)

    Young, Robert; Zanders, Julie; Lieberknecht, Katherine; Fassman-Beck, Elizabeth

    2014-11-01

    During a National Science Foundation (US) funded "International Greening of Cities Workshop" in Auckland, New Zealand, participants agreed an effective urban green infrastructure (GI) typology should identify cities' present stage of GI development and map next steps to mainstream GI as a component of urban infrastructure. Our review reveals current GI typologies do not systematically identify such opportunities. We address this knowledge gap by developing a new typology incorporating political, economic, and ecological forces shaping GI implementation. Applying this information allows symmetrical, place-based exploration of the social and ecological elements driving a city's GI systems. We use this information to distinguish current levels of GI development and clarify intervention opportunities to advance GI into the mainstream of metropolitan infrastructure. We employ three case studies (San Antonio, Texas; Auckland, New Zealand; and New York, New York) to test and refine our typology.

  12. Extensible Infrastructure for Browsing and Searching Abstracted Spacecraft Data

    NASA Technical Reports Server (NTRS)

    Wallick, Michael N.; Crockett, Thomas M.; Joswig, Joseph C.; Torres, Recaredo J.; Norris, Jeffrey S.; Fox, Jason M.; Powell, Mark W.; Mittman, David S.; Abramyan, Lucy; Shams, Khawaja S.; hide

    2009-01-01

    A computer program has been developed to provide a common interface for all space mission data, and allows different types of data to be displayed in the same context. This software provides an infrastructure for representing any type of mission data.

  13. National resource for computation in chemistry, phase I: evaluation and recommendations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1980-05-01

    The National Resource for Computation in Chemistry (NRCC) was inaugurated at the Lawrence Berkeley Laboratory (LBL) in October 1977, with joint funding by the Department of Energy (DOE) and the National Science Foundation (NSF). The chief activities of the NRCC include: assembling a staff of eight postdoctoral computational chemists, establishing an office complex at LBL, purchasing a midi-computer and graphics display system, administering grants of computer time, conducting nine workshops in selected areas of computational chemistry, compiling a library of computer programs with adaptations and improvements, initiating a software distribution system, providing user assistance and consultation on request. This report presents assessments and recommendations of an Ad Hoc Review Committee appointed by the DOE and NSF in January 1980. The recommendations are that NRCC should: (1) not fund grants for computing time or research but leave that to the relevant agencies, (2) continue the Workshop Program in a mode similar to Phase I, (3) abandon in-house program development and establish instead a competitive external postdoctoral program in chemistry software development administered by the Policy Board and Director, and (4) not attempt a software distribution system (leaving that function to the QCPE). Furthermore, (5) DOE should continue to make its computational facilities available to outside users (at normal cost rates) and should find some way to allow the chemical community to gain occasional access to a CRAY-level computer.

  14. The EPOS e-Infrastructure

    NASA Astrophysics Data System (ADS)

    Jeffery, Keith; Bailo, Daniele

    2014-05-01

    The European Plate Observing System (EPOS) is integrating geoscientific information concerning earth movements in Europe. We are approaching the end of the PP (Preparatory Project) phase and in October 2014 expect to continue with the full project within ESFRI (European Strategic Framework for Research Infrastructures). The key aspects of EPOS concern providing services to allow homogeneous access by end-users over heterogeneous data, software, facilities, equipment and services. The e-infrastructure of EPOS is the heart of the project since it integrates the work on organisational, legal, economic and scientific aspects. Following the creation of an inventory of relevant organisations, persons, facilities, equipment, services, datasets and software (RIDE) the scale of integration required became apparent. The EPOS e-infrastructure architecture has been developed systematically based on recorded primary (user) requirements and secondary (interoperation with other systems) requirements through Strawman, Woodman and Ironman phases with the specification - and developed confirmatory prototypes - becoming more precise and progressively moving from paper to implemented system. The EPOS architecture is based on global core services (Integrated Core Services - ICS) which access thematic nodes (domain-specific European-wide collections, called thematic Core Services - TCS), national nodes and specific institutional nodes. The key aspect is the metadata catalog. In one dimension this is described in 3 levels: (1) discovery metadata using well-known and commonly used standards such as DC (Dublin Core) to enable users (via an intelligent user interface) to search for objects within the EPOS environment relevant to their needs; (2) contextual metadata providing the context of the object described in the catalog to enable a user or the system to determine the relevance of the discovered object(s) to their requirement - the context includes projects, funding, organisations

  15. A modular (almost) automatic set-up for elastic multi-tenants cloud (micro)infrastructures

    NASA Astrophysics Data System (ADS)

    Amoroso, A.; Astorino, F.; Bagnasco, S.; Balashov, N. A.; Bianchi, F.; Destefanis, M.; Lusso, S.; Maggiora, M.; Pellegrino, J.; Yan, L.; Yan, T.; Zhang, X.; Zhao, X.

    2017-10-01

    An auto-installing tool on a USB drive allows for a quick and easy automatic deployment of OpenNebula-based cloud infrastructures remotely managed by a central VMDIRAC instance. A single team, in the main site of an HEP Collaboration or elsewhere, can manage and run a relatively large network of federated (micro-)cloud infrastructures, making a highly dynamic and elastic use of computing resources. Exploiting such an approach can lead to modular systems of cloud-bursting infrastructures addressing complex real-life scenarios.
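
    Central management of many small OpenNebula sites ultimately comes down to calls against each front-end's XML-RPC endpoint. The sketch below reflects my reading of that API (method one.vm.allocate); the endpoint, credentials, and template are placeholders, so treat this as an assumption-laden sketch rather than working tooling.

      # Hedged sketch: allocate a VM on an OpenNebula front-end over XML-RPC.
      import xmlrpc.client

      server = xmlrpc.client.ServerProxy("http://cloud.example.org:2633/RPC2")
      session = "oneadmin:password"                     # placeholder credentials
      template = 'NAME="worker" CPU="1" MEMORY="1024"'  # minimal VM template

      response = server.one.vm.allocate(session, template, False)
      ok, result = response[0], response[1]             # (success, id or error)
      print("VM id:" if ok else "error:", result)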

  16. A Grid Infrastructure for Supporting Space-based Science Operations

    NASA Technical Reports Server (NTRS)

    Bradford, Robert N.; Redman, Sandra H.; McNair, Ann R. (Technical Monitor)

    2002-01-01

    Emerging technologies for computational grid infrastructures have the potential for revolutionizing the way computers are used in all aspects of our lives. Computational grids are currently being implemented to provide large-scale, dynamic, and secure research and engineering environments based on standards and next-generation reusable software, enabling greater science and engineering productivity through shared resources and distributed computing for less cost than traditional architectures. Combined with the emerging technologies of high-performance networks, grids provide researchers, scientists and engineers the first real opportunity for an effective distributed collaborative environment with access to resources such as computational and storage systems, instruments, and software tools and services for the most computationally challenging applications.

  17. Building a Cloud Infrastructure for a Virtual Environmental Observatory

    NASA Astrophysics Data System (ADS)

    El-khatib, Y.; Blair, G. S.; Gemmell, A. L.; Gurney, R. J.

    2012-12-01

    Environmental science is often fragmented: data is collected by different organizations using mismatched formats and conventions, and models are misaligned and run in isolation. Cloud computing offers a lot of potential in the way of resolving such issues by supporting data from different sources and at various scales, and integrating models to create more sophisticated and collaborative software services. The Environmental Virtual Observatory pilot (EVOp) project, funded by the UK Natural Environment Research Council, aims to demonstrate how cloud computing principles and technologies can be harnessed to develop more effective solutions to pressing environmental issues. The EVOp infrastructure is a tailored one constructed from resources in both private clouds (owned and managed by us) and public clouds (leased from third party providers). All system assets are accessible via a uniform web service interface in order to enable versatile and transparent resource management, and to support fundamental infrastructure properties such as reliability and elasticity. The abstraction that this 'everything as a service' principle brings also supports mashups, i.e. combining different web services (such as models) and data resources of different origins (in situ gauging stations, warehoused data stores, external sources, etc.). We adopt the RESTful style of web services in order to draw a clear line between client and server (i.e. cloud host) and also to keep the server completely stateless. This significantly improves the scalability of the infrastructure and enables easy infrastructure management. For instance, tasks such as load balancing and failure recovery are greatly simplified without the need for techniques such as advance resource reservation or shared block devices. Upon this infrastructure, we developed a web portal composed of a bespoke collection of web-based visualization tools to help bring out relationships or patterns within the data. The portal was
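
    The 'everything as a service' and stateless-server principles can be shown in miniature: every request carries all of its inputs, so any replica of the service can answer it, which is what makes load balancing and failure recovery simple. The endpoint and runoff formula below are invented for illustration and are not part of the EVOp portal.

      # Minimal stateless REST service (Flask); names and formula are invented.
      from flask import Flask, jsonify, request

      app = Flask(__name__)

      @app.get("/runoff")
      def runoff():
          # all inputs arrive with the request; the server holds no session state
          rainfall_mm = float(request.args.get("rainfall_mm", 0))
          coefficient = float(request.args.get("coefficient", 0.3))
          return jsonify(runoff_mm=rainfall_mm * coefficient)

      if __name__ == "__main__":
          app.run()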

  18. Documentary of MFENET, a national computer network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shuttleworth, B.O.

    1977-06-01

    The national Magnetic Fusion Energy Computer Network (MFENET) is a newly operational star network of geographically separated heterogeneous hosts and a communications subnetwork of PDP-11 processors. Host processors interfaced to the subnetwork currently include a CDC 7600 at the Central Computer Center (CCC) and several DECsystem-10's at User Service Centers (USC's). The network was funded by a U.S. government agency (ERDA) to provide in an economical manner the needed computational resources to magnetic confinement fusion researchers. Phase I operation of MFENET distributed the processing power of the CDC 7600 among the USC's through the provision of file transport between any two hosts and remote job entry to the 7600. Extending the capabilities of Phase I, MFENET Phase II provided interactive terminal access to the CDC 7600 from the USC's. A file management system is maintained at the CCC for all network users. The history and development of MFENET are discussed, with emphasis on the protocols used to link the host computers and the USC software. Comparisons are made of MFENET versus ARPANET (Advanced Research Projects Agency Computer Network) and DECNET (Digital Distributed Network Architecture). DECNET and MFENET host-to-host, host-to-CCP, and link protocols are discussed in detail. The USC--CCP interface is described briefly. 43 figures, 2 tables.

  19. Overview of Infrastructure Science and Analysis for Homeland Security

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Backhaus, Scott N.

    This presentation offers an analysis of infrastructure science, with the goals of providing third-party, independent, science-based input into complex problems of national concern and of using scientific analysis to "turn down the noise" around complex problems.

  20. Integration in primary community care networks (PCCNs): examination of governance, clinical, marketing, financial, and information infrastructures in a national demonstration project in Taiwan

    PubMed Central

    Lin, Blossom Yen-Ju

    2007-01-01

    Background Taiwan's primary community care network (PCCN) demonstration project, funded by the Bureau of National Health Insurance in March 2003, was established to discourage hospital shopping behavior and to drive the traditionally fragmented health care providers into cooperative care models. Between 2003 and 2005, 268 PCCNs were established. This study profiled the individual members in the PCCNs to examine the nature and extent to which their network infrastructures have been integrated among the members (clinics and hospitals) within individual PCCNs. Methods The questionnaire items, covering the network working infrastructures (governance, clinical, marketing, financial, and information integration) in PCCNs, were developed with validity and reliability confirmed. One thousand five hundred and fifty-seven clinics that had belonged to PCCNs for more than one year, based on the 2003-2005 Taiwan Primary Community Care Network List, were surveyed by mail. Nine hundred and twenty-eight clinic members responded to the surveys, giving a 59.6% response rate. Results Overall, the PCCNs' members had higher involvement in the governance infrastructure, which was usually viewed as the most important for the establishment of core values in PCCNs' organization design and management at the early integration stage. In addition, the study found a higher extent of integration of the clinical, marketing, and information infrastructures in the hospital-clinic member relationships than among clinic members within individual PCCNs. The financial infrastructure was shown to be the least integrated relative to the other functional infrastructures at the early stage of PCCN formation. Conclusion There was still room for better integrated partnerships, as evidenced by the great variety of relationships and differences in extent of integration in this study. In addition to showing what the network members have done in their initial work at the early stage of network

  1. Integration in primary community care networks (PCCNs): examination of governance, clinical, marketing, financial, and information infrastructures in a national demonstration project in Taiwan.

    PubMed

    Lin, Blossom Yen-Ju

    2007-06-19

    Taiwan's primary community care network (PCCN) demonstration project, funded by the Bureau of National Health Insurance in March 2003, was established to discourage hospital shopping behavior and to drive the traditionally fragmented health care providers into cooperative care models. Between 2003 and 2005, 268 PCCNs were established. This study profiled the individual members in the PCCNs to examine the nature and extent to which their network infrastructures have been integrated among the members (clinics and hospitals) within individual PCCNs. The questionnaire items, covering the network working infrastructures (governance, clinical, marketing, financial, and information integration) in PCCNs, were developed with validity and reliability confirmed. One thousand five hundred and fifty-seven clinics that had belonged to PCCNs for more than one year, based on the 2003-2005 Taiwan Primary Community Care Network List, were surveyed by mail. Nine hundred and twenty-eight clinic members responded to the surveys, giving a 59.6% response rate. Overall, the PCCNs' members had higher involvement in the governance infrastructure, which was usually viewed as the most important for the establishment of core values in PCCNs' organization design and management at the early integration stage. In addition, the study found a higher extent of integration of the clinical, marketing, and information infrastructures in the hospital-clinic member relationships than among clinic members within individual PCCNs. The financial infrastructure was shown to be the least integrated relative to the other functional infrastructures at the early stage of PCCN formation. There was still room for better integrated partnerships, as evidenced by the great variety of relationships and differences in extent of integration in this study. In addition to showing what the network members have done in their initial work at the early stage of network forming in this study, the detailed surveyed

  2. Tracking the deployment of the integrated metropolitan ITS infrastructure in the USA : FY99 results

    DOT National Transportation Integrated Search

    2000-05-01

    This report describes the results of a major data gathering effort aimed at tracking deployment of nine infrastructure components of the metropolitan ITS infrastructure in 78 of the largest metropolitan areas in the nation. The nine components are: F...

  3. e-Infrastructures for e-Sciences 2013 A CHAIN-REDS Workshop organised under the aegis of the European Commission

    NASA Astrophysics Data System (ADS)

    The CHAIN-REDS Project is organising a workshop on "e-Infrastructures for e-Sciences" focusing on Cloud Computing and Data Repositories under the aegis of the European Commission and in co-location with the International Conference on e-Science 2013 (IEEE2013) that will be held in Beijing, P.R. of China on October 17-22, 2013. The core objective of the CHAIN-REDS project is to promote, coordinate and support the effort of a critical mass of non-European e-Infrastructures for Research and Education to collaborate with Europe addressing interoperability and interoperation of Grids and other Distributed Computing Infrastructures (DCI). From this perspective, CHAIN-REDS will optimise the interoperation of European infrastructures with those present in 6 other regions of the world, both from a development and use point of view, and catering to different communities. Overall, CHAIN-REDS will provide input for future strategies and decision-making regarding collaboration with other regions on e-Infrastructure deployment and availability of related data; it will raise the visibility of e-Infrastructures towards intercontinental audiences, covering most of the world and will provide support to establish globally connected and interoperable infrastructures, in particular between the EU and the developing regions. Organised by IHEP, INFN and Sigma Orionis with the support of all project partners, this workshop will aim at: - Presenting the state of the art of Cloud computing in Europe and in China and discussing the opportunities offered by having interoperable and federated e-Infrastructures; - Exploring the existing initiatives of Data Infrastructures in Europe and China, and highlighting the Data Repositories of interest for the Virtual Research Communities in several domains such as Health, Agriculture, Climate, etc.

  4. Anti-social networking: crowdsourcing and the cyber defence of national critical infrastructures.

    PubMed

    Johnson, Chris W

    2014-01-01

    We identify four roles that social networking plays in the 'attribution problem', which obscures whether or not cyber-attacks were state-sponsored. First, social networks motivate individuals to participate in Distributed Denial of Service attacks by providing malware and identifying potential targets. Second, attackers use an individual's social network to focus attacks, through spear phishing. Recipients are more likely to open infected attachments when they come from a trusted source. Third, social networking infrastructures create disposable architectures to coordinate attacks through command and control servers. The ubiquitous nature of these architectures makes it difficult to determine who owns and operates the servers. Finally, governments recruit anti-social criminal networks to launch attacks on third-party infrastructures using botnets. The closing sections identify a roadmap to increase resilience against the 'dark side' of social networking.

  5. Results and Analysis of the Infrastructure Request for Information (DE-SOL-0008318)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heidrich, Brenden John

    2015-07-01

    The Department of Energy (DOE) Office of Nuclear Energy (NE) released a request for information (RFI) (DE-SOL-0008318) for “University, National Laboratory, Industry and International Input on Potential Office of Nuclear Energy Infrastructure Investments” on April 13, 2015. DOE-NE solicited information on five specific types of capabilities as well as any others suggested by the community. The RFI proposal period closed on June 19, 2015. From the 26 responses, 34 individual proposals were extracted. Eighteen were associated with a DOE national laboratory, including Argonne National Laboratory (ANL), Brookhaven National Laboratory (BNL), Idaho National Laboratory (INL), Los Alamos National Laboratory (LANL), Pacific Northwest National Laboratory (PNNL) and Sandia National Laboratory (SNL). Oak Ridge National Laboratory (ORNL) was referenced in a proposal as a proposed capability location, although the proposal did not originate with ORNL. Five US universities submitted proposals (Massachusetts Institute of Technology, Pennsylvania State University, Rensselaer Polytechnic Institute, University of Houston and the University of Michigan). Three industrial/commercial institutions submitted proposals (AREVA NP, Babcock and Wilcox (B&W) and the Electric Power Research Institute (EPRI)). Eight major themes emerged from the submissions as areas needing additional capability or support for existing capabilities. Two submissions supported multiple areas. The major themes are: Advanced Manufacturing (AM), High Performance Computing (HPC), Ion Irradiation with X-Ray Diagnostics (IIX), Ion Irradiation with TEM Visualization (IIT), Radiochemistry Laboratories (RCL), Test Reactors, Neutron Sources and Critical Facilities (RX), Sample Preparation and Post-Irradiation Examination (PIE) and Thermal-Hydraulics Test Facilities (THF).

  6. Computer integration of engineering design and production: A national opportunity

    NASA Astrophysics Data System (ADS)

    1984-10-01

    The National Aeronautics and Space Administration (NASA), as a purchaser of a variety of manufactured products, including complex space vehicles and systems, clearly has a stake in the advantages of computer-integrated manufacturing (CIM). Two major NASA objectives are to launch a Manned Space Station by 1992 with a budget of $8 billion, and to be a leader in the development and application of productivity-enhancing technology. At the request of NASA, a National Research Council committee visited five companies that have been leaders in using CIM. Based on these case studies, technical, organizational, and financial issues that influence computer integration are described, guidelines for its implementation in industry are offered, and the use of CIM to manage the space station program is recommended.

  7. Computer integration of engineering design and production: A national opportunity

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The National Aeronautics and Space Administration (NASA), as a purchaser of a variety of manufactured products, including complex space vehicles and systems, clearly has a stake in the advantages of computer-integrated manufacturing (CIM). Two major NASA objectives are to launch a Manned Space Station by 1992 with a budget of $8 billion, and to be a leader in the development and application of productivity-enhancing technology. At the request of NASA, a National Research Council committee visited five companies that have been leaders in using CIM. Based on these case studies, technical, organizational, and financial issues that influence computer integration are described, guidelines for its implementation in industry are offered, and the use of CIM to manage the space station program is recommended.

  8. Rehabilitation, Replacement and Redesign of the Nation's Water and Wastewater Infrastructure as a Valuable Adaptation Opportunity

    EPA Science Inventory

    In support of the Agency's Sustainable Water Infrastructure Initiative, EPA's Office of Research and Development initiated the Aging Water Infrastructure Research Program in 2007. The program, with its core focus on the support of strategic asset management, is designed to facili...

  9. Computing shifts to monitor ATLAS distributed computing infrastructure and operations

    NASA Astrophysics Data System (ADS)

    Adam, C.; Barberis, D.; Crépé-Renaudin, S.; De, K.; Fassi, F.; Stradling, A.; Svatos, M.; Vartapetian, A.; Wolters, H.

    2017-10-01

    The ATLAS Distributed Computing (ADC) group established a new Computing Run Coordinator (CRC) shift at the start of LHC Run 2 in 2015. The main goal was to rely on a person with a good overview of the ADC activities to ease the ADC experts’ workload. The CRC shifter keeps track of ADC tasks related to their fields of expertise and responsibility. At the same time, the shifter maintains a global view of the day-to-day operations of the ADC system. During Run 1, this task was accomplished by a person of the expert team called the ADC Manager on Duty (AMOD), a position that was removed during the shutdown period due to the reduced number and availability of ADC experts foreseen for Run 2. The CRC position was proposed to cover some of the AMODs former functions, while allowing more people involved in computing to participate. In this way, CRC shifters help with the training of future ADC experts. The CRC shifters coordinate daily ADC shift operations, including tracking open issues, reporting, and representing ADC in relevant meetings. The CRC also facilitates communication between the ADC experts team and the other ADC shifters. These include the Distributed Analysis Support Team (DAST), which is the first point of contact for addressing all distributed analysis questions, and the ATLAS Distributed Computing Shifters (ADCoS), which check and report problems in central services, sites, Tier-0 export, data transfers and production tasks. Finally, the CRC looks at the level of ADC activities on a weekly or monthly timescale to ensure that ADC resources are used efficiently.

  10. Enabling opportunistic resources for CMS Computing Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hufnagel, Dirk

    With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize "opportunistic" resources (resources not owned by, or a priori configured for, CMS) to meet peak demands. In addition to our dedicated resources we look to add computing resources from non-CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and parrot wrappers are used to enable access and bring the CMS environment into these non-CMS resources. Finally, we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far using them.

  11. Enabling opportunistic resources for CMS Computing Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hufnagel, Dirk

    With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize “opportunistic” resources — resources not owned by, or a priori configured for CMS — to meet peak demands. In addition to our dedicated resources we look to add computing resources from non-CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and parrot wrappers are used to enable access and bring the CMS environment into these non-CMS resources. Here we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far using them.

  12. Enabling opportunistic resources for CMS Computing Operations

    DOE PAGES

    Hufnagel, Dirk

    2015-12-23

    With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize "opportunistic" resources (resources not owned by, or a priori configured for, CMS) to meet peak demands. In addition to our dedicated resources we look to add computing resources from non-CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and parrot wrappers are used to enable access and bring the CMS environment into these non-CMS resources. Finally, we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far using them.
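
    Transparent integration into a glideinWMS pool works because jobs are expressed as ordinary HTCondor submissions. As a rough illustration, the snippet below queues jobs with the htcondor Python bindings; the executable and file names are placeholders, and the actual CMS submission chain adds several layers on top of this.

      # Sketch: queue four jobs via the htcondor Python bindings (placeholders).
      import htcondor

      job = htcondor.Submit({
          "executable": "run_analysis.sh",     # wrapper that sets up the job env
          "arguments": "$(ProcId)",
          "output": "job.$(ProcId).out",
          "error": "job.$(ProcId).err",
          "request_cpus": "1",
      })

      schedd = htcondor.Schedd()               # local submit node
      result = schedd.submit(job, count=4)     # four queued instances
      print("submitted cluster", result.cluster())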

  13. 2006 Particulate Matter (PM) National Ambient Air Quality Standards (NAAQS) Infrastructure Actions

    EPA Pesticide Factsheets

    Read about the EPA's infrastructure actions for the 2006 PM NAAQS. These actions concern states' failure to submit SIPs addressing various parts of the standards. Here you can read the Federal Register notices and fact sheets.

  14. Safety impacts of bicycle infrastructure: A critical review.

    PubMed

    DiGioia, Jonathan; Watkins, Kari Edison; Xu, Yanzhi; Rodgers, Michael; Guensler, Randall

    2017-06-01

    This paper takes a critical look at the present state of bicycle infrastructure treatment safety research, highlighting data needs. Safety literature relating to 22 bicycle treatments is examined, including findings, study methodologies, and data sources used in the studies. Some preliminary conclusions related to research efficacy are drawn from the available data and findings in the research. While the current body of bicycle safety literature points toward some defensible conclusions regarding the safety and effectiveness of certain bicycle treatments, such as bike lanes and removal of on-street parking, the vast majority of treatments are still in need of rigorous research. Fundamental questions arise regarding appropriate exposure measures, crash measures, and crash data sources. This research will aid transportation departments with regard to decisions about bicycle infrastructure and guide future research efforts toward understanding the safety impacts of bicycle infrastructure. Copyright © 2017 Elsevier Ltd and National Safety Council. All rights reserved.

  15. Internet-based computer technology on radiotherapy.

    PubMed

    Chow, James C L

    2017-01-01

    Recent rapid development of Internet-based computer technologies has made possible many novel applications in radiation dose delivery. However, the translational speed of applying these new technologies in radiotherapy has hardly kept up, due to the complex commissioning process and quality assurance protocol. Implementing novel Internet-based technology in radiotherapy requires corresponding design of the algorithm and infrastructure of the application, set-up of related clinical policies, purchase and development of software and hardware, computer programming and debugging, and national to international collaboration. Although such implementation processes are time-consuming, some recent computer advancements in radiation dose delivery are still noticeable. In this review, we will present the background and concept of some recent Internet-based computer technologies such as cloud computing, big data processing and machine learning, followed by their potential applications in radiotherapy, such as treatment planning and dose delivery. We will also discuss the current progress of these applications and their impacts on radiotherapy. We will explore and evaluate the expected benefits and challenges in implementation as well.

  16. Advanced Artificial Science. The development of an artificial science and engineering research infrastructure to facilitate innovative computational modeling, analysis, and application to interdisciplinary areas of scientific investigation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saffer, Shelley

    2014-12-01

    This is the final report of DOE award DE-SC0001132, Advanced Artificial Science: the development of an artificial science and engineering research infrastructure to facilitate innovative computational modeling, analysis, and application to interdisciplinary areas of scientific investigation. This document describes the achievement of the project's goals and the resulting research made possible by this award.

  17. An integrated approach to infrastructure.

    PubMed

    Hayes, Stewart

    2010-02-01

    In an edited version of a paper presented at the IHEA (Institute of Hospital Engineering Australia) 60th National Conference 2009, Stewart Hayes, principal consultant at Jakeman Business Solutions, argues that, with "traditional" means of purchasing and maintaining critical hospital infrastructure systems "becoming less viable", a more integrated, strategic approach to procuring and providing essential hospital services that looks not just to the present, but equally to the facility's anticipated future needs, is becoming ever more important.

  18. NCI's national environmental research data collection: metadata management built on standards and preparing for the semantic web

    NASA Astrophysics Data System (ADS)

    Wang, Jingbo; Bastrakova, Irina; Evans, Ben; Gohar, Kashif; Santana, Fabiana; Wyborn, Lesley

    2015-04-01

    National Computational Infrastructure (NCI) manages national environmental research data collections (10+ PB) as part of its specialized high performance data node of the Research Data Storage Infrastructure (RDSI) program. We manage 40+ data collections using NCI's Data Management Plan (DMP), which is compatible with the ISO 19100 metadata standards. We utilize ISO standards to make sure our metadata is transferable and interoperable for sharing and harvesting. The DMP is used along with metadata from the data itself to create a hierarchy of data collection, dataset and time series catalogues that is then exposed through GeoNetwork for standard discoverability. These hierarchical catalogues are linked using parent-child relationships. The hierarchical infrastructure of our GeoNetwork catalogue system aims to address both discoverability and in-house administrative use-cases. At NCI, we are currently improving the metadata interoperability in our catalogue by linking with standardized community vocabulary services. These emerging vocabulary services are being established to help harmonise data from different national and international scientific communities. One such vocabulary service is currently being established by the Australian National Data Services (ANDS). Data citation is another important aspect of the NCI data infrastructure, which allows tracking of data usage and infrastructure investment, encourages data sharing, and increases trust in research that is reliant on these data collections. We incorporate the standard vocabularies into the data citation metadata so that data citations become machine-readable and semantically friendly for web-search purposes as well. By standardizing our metadata structure across our entire data corpus, we are laying the foundation to enable the application of appropriate semantic mechanisms to enhance discovery and analysis of NCI's national environmental research data information. We expect that this will further
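
    As a rough sketch of the parent-child catalogue linking described above (assuming UUID-keyed records; the field names here are illustrative, not NCI's actual schema):

        import uuid
        from dataclasses import dataclass, field
        from typing import Optional

        @dataclass
        class CatalogueRecord:
            title: str
            level: str                       # "collection", "dataset" or "time series"
            record_id: str = field(default_factory=lambda: str(uuid.uuid4()))
            parent_id: Optional[str] = None  # child catalogues point at their parent

        collection = CatalogueRecord("Environmental research collection", "collection")
        dataset = CatalogueRecord("Gridded rainfall dataset", "dataset",
                                  parent_id=collection.record_id)
        series = CatalogueRecord("Station rainfall 1900-2015", "time series",
                                 parent_id=dataset.record_id)

        # Walk from a leaf record back up to the collection-level record.
        by_id = {r.record_id: r for r in (collection, dataset, series)}
        node: Optional[CatalogueRecord] = series
        while node is not None:
            print(node.level, "->", node.title)
            node = by_id.get(node.parent_id) if node.parent_id else None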

  19. Railroad infrastructure trespassing detection systems research in Pittsford, New York

    DOT National Transportation Integrated Search

    2006-08-01

    The U.S. Department of Transportation's Volpe National Transportation Systems Center, under the direction of the Federal Railroad Administration, conducted a 3-year demonstration of an automated prototype railroad infrastructure security system on ...

  20. Robust, Optimal Water Infrastructure Planning Under Deep Uncertainty Using Metamodels

    NASA Astrophysics Data System (ADS)

    Maier, H. R.; Beh, E. H. Y.; Zheng, F.; Dandy, G. C.; Kapelan, Z.

    2015-12-01

    Optimal long-term planning plays an important role in many water infrastructure problems. However, this task is complicated by deep uncertainty about future conditions, such as the impact of population dynamics and climate change. One way to deal with this uncertainty is by means of robustness, which aims to ensure that water infrastructure performs adequately under a range of plausible future conditions. However, as robustness calculations require computationally expensive system models to be run for a large number of scenarios, it is generally computationally intractable to include robustness as an objective in the development of optimal long-term infrastructure plans. In order to overcome this shortcoming, an approach is developed that uses metamodels instead of computationally expensive simulation models in robustness calculations. The approach is demonstrated for the optimal sequencing of water supply augmentation options for the southern portion of the water supply for Adelaide, South Australia. A 100-year planning horizon is subdivided into ten equal decision stages for the purpose of sequencing various water supply augmentation options, including desalination, stormwater harvesting and household rainwater tanks. The objectives include the minimization of average present value of supply augmentation costs, the minimization of average present value of greenhouse gas emissions and the maximization of supply robustness. The uncertain variables are rainfall, per capita water consumption and population. Decision variables are the implementation stages of the different water supply augmentation options. Artificial neural networks are used as metamodels to enable all objectives to be calculated in a computationally efficient manner at each of the decision stages. The results illustrate the importance of identifying optimal staged solutions to ensure robustness and sustainability of water supply into an uncertain long-term future.
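
    A minimal sketch of the metamodel idea in pure Python: the surrogate function below stands in for the trained artificial neural network, and the linear form, coefficients, and scenario ranges are invented for illustration only.

        import random

        random.seed(42)

        def surrogate_net_supply(rainfall_mm, per_capita_use, pop_millions):
            # Cheap stand-in for the trained ANN metamodel that replaces the
            # expensive simulation model; coefficients are illustrative only.
            return 0.8 * rainfall_mm - 0.3 * per_capita_use * pop_millions

        def robustness(plan_capacity, n_scenarios=10_000):
            # Fraction of sampled plausible futures in which augmented supply
            # meets demand -- the robustness objective from the abstract.
            met = 0
            for _ in range(n_scenarios):
                rainfall = random.uniform(100, 400)    # deeply uncertain inputs
                per_capita = random.uniform(150, 300)
                population = random.uniform(1.0, 2.0)  # millions
                net = surrogate_net_supply(rainfall, per_capita, population)
                if net + plan_capacity >= 0:
                    met += 1
            return met / n_scenarios

        print(f"robustness of a 50-unit plan: {robustness(plan_capacity=50.0):.2f}")

    Because the surrogate is orders of magnitude cheaper than the full simulation model, robustness over thousands of scenarios becomes tractable inside an optimization loop, which is the point the abstract makes.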

  1. OOI CyberInfrastructure - Next Generation Oceanographic Research

    NASA Astrophysics Data System (ADS)

    Farcas, C.; Fox, P.; Arrott, M.; Farcas, E.; Klacansky, I.; Krueger, I.; Meisinger, M.; Orcutt, J.

    2008-12-01

    Software has become a key enabling technology for scientific discovery, observation, modeling, and exploitation of natural phenomena. New value emerges from the integration of individual subsystems into networked federations of capabilities exposed to the scientific community. Such data-intensive interoperability networks are crucial for future scientific collaborative research, as they open up new ways of fusing data from different sources and across various domains, and of analysis over wide geographic areas. The recently established NSF OOI program, through its CyberInfrastructure component, addresses this challenge by providing broad access from sensor networks for data acquisition up to computational grids for massive computations, and binding infrastructure facilitating policy management and governance of the emerging system-of-scientific-systems. We provide insight into the integration core of this effort, namely, a hierarchic service-oriented architecture for a robust, performant, and maintainable implementation. We first discuss the relationship between data management and CI crosscutting concerns such as identity management, policy and governance, which define the organizational contexts for data access and usage. Next, we detail critical services including data ingestion, transformation, preservation, inventory, and presentation. To address interoperability issues between data represented in various formats, we employ a semantic framework derived from the Earth System Grid technology, a canonical representation for scientific data based on DAP/OPeNDAP, and related data publishers such as ERDDAP. Finally, we briefly present the underlying transport, based on a messaging infrastructure over the AMQP protocol, and the preservation, based on a distributed file system through SDSC iRODS.

  2. 75 FR 68370 - Agency Information Collection Activities: Office of Infrastructure Protection; Chemical Security...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-05

    ... DEPARTMENT OF HOMELAND SECURITY National Protection and Programs Directorate [Docket No. DHS-2010-0071] Agency Information Collection Activities: Office of Infrastructure Protection; Chemical Security...: The Department of Homeland Security (DHS), National Protection and Programs Directorate (NPPD), Office...

  3. Lawrence Livermore National Laboratories Perspective on Code Development and High Performance Computing Resources in Support of the National HED/ICF Effort

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clouse, C. J.; Edwards, M. J.; McCoy, M. G.

    2015-07-07

    Through its Advanced Scientific Computing (ASC) and Inertial Confinement Fusion (ICF) code development efforts, Lawrence Livermore National Laboratory (LLNL) provides a world-leading numerical simulation capability for the National HED/ICF program in support of the Stockpile Stewardship Program (SSP). In addition, the ASC effort provides the high performance computing platform capabilities upon which these codes are run. LLNL remains committed to, and will work with, the national HED/ICF program community to help ensure numerical simulation needs are met and to make those capabilities available, consistent with programmatic priorities and available resources.

  4. A National Virtual Specimen Database for Early Cancer Detection

    NASA Technical Reports Server (NTRS)

    Crichton, Daniel; Kincaid, Heather; Kelly, Sean; Thornquist, Mark; Johnsey, Donald; Winget, Marcy

    2003-01-01

    Access to biospecimens is essential for enabling cancer biomarker discovery. The National Cancer Institute's (NCI) Early Detection Research Network (EDRN) comprises and integrates a large number of laboratories into a network in order to establish a collaborative scientific environment to discover and validate disease markers. The diversity of both the institutions and the collaborative focus has created the need for establishing cross-disciplinary teams focused on integrating expertise in biomedical research, computational science and biostatistics, and computer science. Given the collaborative design of the network, the EDRN needed an informatics infrastructure. The Fred Hutchinson Cancer Research Center, the National Cancer Institute, and NASA's Jet Propulsion Laboratory (JPL) teamed up to build an informatics infrastructure creating a collaborative, science-driven research environment despite the geographic and morphological differences of the information systems that existed within the diverse network. EDRN investigators identified the need to share biospecimen data captured across the country and managed in disparate databases. As a result, the informatics team initiated an effort to create a virtual tissue database whereby scientists could search and locate details about specimens located at collaborating laboratories. Each database, however, was locally implemented and integrated into collection processes and methods unique to each institution. This meant that efforts to integrate databases needed to be done in a manner that did not require redesign or re-implementation of existing system
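
    A toy sketch of the "virtual database" pattern the abstract describes: each site keeps its local schema, and thin adapters map records into a common form so one query can span all laboratories. The site names, schemas, and fields below are all hypothetical.

        # Each collaborating site exposes records in its own local schema.
        SITE_A = [{"specimen_no": "A-17", "tissue": "lung", "qty_ml": 5}]
        SITE_B = [{"id": "B/203", "organ": "lung", "volume": 2}]

        def adapt_site_a(rec):
            return {"site": "A", "specimen_id": rec["specimen_no"],
                    "tissue": rec["tissue"], "volume_ml": rec["qty_ml"]}

        def adapt_site_b(rec):
            return {"site": "B", "specimen_id": rec["id"],
                    "tissue": rec["organ"], "volume_ml": rec["volume"]}

        def virtual_query(tissue):
            # Query every site through its adapter; local systems stay untouched.
            sources = [(SITE_A, adapt_site_a), (SITE_B, adapt_site_b)]
            return [adapt(r) for records, adapt in sources
                    for r in records if adapt(r)["tissue"] == tissue]

        for hit in virtual_query("lung"):
            print(hit)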

  5. Cyberspace Policy Review: Assuring a Trusted and Resilient Information and Communications Infrastructure

    DTIC Science & Technology

    2009-05-01

    information technology revolution. The architecture of the Nation’s digital infrastructure, based largely upon the Internet, is not secure or resilient...thriving digital infrastructure. In addition, differing national and regional laws and practices, such as laws concerning the investigation and... technology has transformed the global economy and connected people and markets in ways never imagined. To realize the full benefits of the digital

  6. Transportation Community Institutional Infrastructure Study : Volume 1. National Transportation Needs Mail Survey.

    DOT National Transportation Integrated Search

    1976-04-01

    The results of the Transportation Community Infrastructure Study are presented as a three-volume series. This series presents a surveyed priority of topics for information exchange, a case study of a proposed training program, and an analysis of the tr...

  7. The GMOS cyber(e)-infrastructure: advanced services for supporting science and policy.

    PubMed

    Cinnirella, S; D'Amore, F; Bencardino, M; Sprovieri, F; Pirrone, N

    2014-03-01

    The need for coordinated, systematized and catalogued databases on mercury in the environment is of paramount importance, as improved information can help the assessment of the effectiveness of measures established to phase out and ban mercury. Long-term monitoring sites have been established in a number of regions and countries for the measurement of mercury in ambient air and wet deposition. Long-term measurements of mercury concentration in biota have also produced a huge amount of information, but such initiatives are far from being within a global, systematic and interoperable approach. To address these weaknesses, the on-going Global Mercury Observation System (GMOS) project ( www.gmos.eu ) established a coordinated global observation system for mercury, as well as retrieving historical data ( www.gmos.eu/sdi ). To manage such a large amount of information, a technological infrastructure was planned. This high-performance back-end resource, associated with sophisticated client applications, enables data storage, computing services, telecommunications networks and all services necessary to support the activity. This paper reports the architecture definition of the GMOS Cyber(e)-Infrastructure and the services developed to support science and policy, including the United Nations Environment Programme. It finally describes new possibilities in data analysis and data management through client applications.

  8. New EVSE Analytical Tools/Models: Electric Vehicle Infrastructure Projection Tool (EVI-Pro)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, Eric W; Rames, Clement L; Muratori, Matteo

    This presentation addresses the fundamental question of how much charging infrastructure is needed in the United States to support PEVs. It complements ongoing EVSE initiatives by providing a comprehensive analysis of national PEV charging infrastructure requirements. The result is a quantitative estimate for a U.S. network of non-residential (public and workplace) EVSE that would be needed to support broader PEV adoption. The analysis provides guidance to public and private stakeholders who are seeking to provide nationwide charging coverage, improve the EVSE business case by maximizing station utilization, and promote effective use of private/public infrastructure investments.

  9. NFDRSPC: The National Fire-Danger Rating System on a Personal Computer

    Treesearch

    Bryan G. Donaldson; James T. Paul

    1990-01-01

    This user's guide is an introductory manual for using the 1988 version (Burgan 1988) of the National Fire-Danger Rating System on an IBM PC or compatible computer. NFDRSPC is a window-oriented, interactive computer program that processes observed and forecast weather with fuels data to produce NFDRS indices. Other program features include user-designed display...

  10. Effecting IT infrastructure culture change: management by processes and metrics

    NASA Technical Reports Server (NTRS)

    Miller, R. L.

    2001-01-01

    This talk describes the processes and metrics used by Jet Propulsion Laboratory to bring about the required IT infrastructure culture change to update and certify, as Y2K compliant, thousands of computers and millions of lines of code.

  11. Assessment of the Energy Impacts of Improving Highway-Infrastructure Materials

    DOT National Transportation Integrated Search

    1995-04-01

    Argonne National Laboratory has conducted a study to ascertain the relative importance of improved highway materials compared to vehicle energy consumption on U.S. energy consumption. Energy savings through an improved highway infrastructure can occu...

  12. Galaxy CloudMan: delivering cloud compute clusters.

    PubMed

    Afgan, Enis; Baker, Dannon; Coraor, Nate; Chapman, Brad; Nekrutenko, Anton; Taylor, James

    2010-12-21

    Widespread adoption of high-throughput sequencing has greatly increased the scale and sophistication of computational infrastructure needed to perform genomic research. An alternative to building and maintaining local infrastructure is "cloud computing", which, in principle, offers on-demand access to flexible computational infrastructure. However, cloud computing resources are not yet suitable for immediate "as is" use by experimental biologists. We present a cloud resource management system that makes it possible for individual researchers to compose and control an arbitrarily sized compute cluster on Amazon's EC2 cloud infrastructure without any informatics requirements. Within this system, an entire suite of biological tools packaged by the NERC Bio-Linux team (http://nebc.nerc.ac.uk/tools/bio-linux) is available for immediate consumption. The provided solution makes it possible, using only a web browser, to create a completely configured compute cluster ready to perform analysis in less than five minutes. Moreover, we provide an automated method for building custom deployments of cloud resources. This approach promotes reproducibility of results and, if desired, allows individuals and labs to add to or customize an otherwise available cloud system to better meet their needs. The knowledge and effort required to deploy a compute cluster in the Amazon EC2 cloud are not trivial. The solution presented in this paper eliminates these barriers, making it possible for researchers to deploy exactly the amount of computing power they need, combined with a wealth of existing analysis software, to handle the ongoing data deluge.
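
    For readers unfamiliar with what composing a compute cluster on EC2 looks like programmatically, here is a hedged sketch using the boto3 library. It only launches a handful of tagged instances, which is the first step of what CloudMan automates; the AMI ID, region, and instance type are placeholders, and this is not CloudMan's actual code.

        import boto3  # assumes AWS credentials are configured in the environment

        ec2 = boto3.client("ec2", region_name="us-east-1")

        # Launch a small set of identically configured nodes; CloudMan layers
        # cluster services and Bio-Linux tooling on top of instances like these.
        response = ec2.run_instances(
            ImageId="ami-0123456789abcdef0",   # placeholder machine image
            InstanceType="m5.large",
            MinCount=1,
            MaxCount=4,
            TagSpecifications=[{
                "ResourceType": "instance",
                "Tags": [{"Key": "role", "Value": "analysis-cluster-node"}],
            }],
        )

        for instance in response["Instances"]:
            print("launched", instance["InstanceId"])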

  13. A national perspective on paleoclimate streamflow and water storage infrastructure in the conterminous United States

    NASA Astrophysics Data System (ADS)

    Ho, Michelle; Lall, Upmanu; Sun, Xun; Cook, Edward

    2017-04-01

    Large-scale water storage infrastructure in the Conterminous United States (CONUS) provides a means of regulating the temporal variability in water supply, with storage capacities ranging from seasonal storage in the wetter east to multi-annual and decadal-scale storage in the drier west. Regional differences in water availability across the CONUS provide opportunities for optimizing water-dependent economic activities, such as food and energy production, through storage and transportation. However, the ability to sufficiently regulate water supplies into the future is compromised by inadequate monitoring of non-federally-owned dams, which make up around 97% of all dams. Furthermore, many of these dams are reaching or have exceeded their economic design life. Understanding the role of dams in the current and future landscape of water requirements in the CONUS is needed to prioritize dam safety remediation or identify where redundant dams may be removed. A national water assessment and planning process is needed for addressing water requirements, accounting for regional differences in water supply and demand, and the role of dams in such a landscape. Most dams in the CONUS were designed without knowledge of the devastating floods and prolonged droughts detected in multi-centennial paleoclimate records, without consideration of projected climate change, and without consideration of optimal operation across large-scale regions. As a step towards informing water supply across the CONUS, we present a paleoclimate reconstruction of annual streamflow across the CONUS over the past 555 years, using a spatially and temporally complete paleoclimate record of summer drought across the CONUS targeting a set of US Geological Survey streamflow sites. The spatial and temporal structures of national streamflow variability are analyzed using hierarchical clustering, principal component analysis, and wavelet analyses. The reconstructions show signals of contemporary droughts such as the Dust Bowl (1930s
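
    As a sketch of the kind of spatial-structure analysis mentioned (principal components via SVD on a years-by-sites streamflow matrix), the snippet below uses synthetic data and is not the study's actual pipeline.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic stand-in for a 555-year x N-site reconstructed flow matrix.
        years, sites = 555, 20
        flows = rng.normal(loc=100.0, scale=15.0, size=(years, sites))

        # Centre each site's series, then extract modes from the SVD.
        anomalies = flows - flows.mean(axis=0)
        U, S, Vt = np.linalg.svd(anomalies, full_matrices=False)

        explained = S**2 / np.sum(S**2)
        print("variance explained by first 3 PCs:", explained[:3])

        leading_pc = U[:, 0] * S[0]  # time series of the leading spatial mode
        print("leading PC length:", leading_pc.shape[0])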

  14. A new vision of the post-NIST civil infrastructure program: the challenges of next-generation construction materials and processes

    NASA Astrophysics Data System (ADS)

    Wu, H. Felix; Wan, Yan

    2014-03-01

    Our nation's infrastructure systems are crumbling, and the deterioration worsens over time. The physical aging of these vital facilities and the remediation of their current critical state pose a key societal challenge to the United States. Current sensing technologies, while well developed in controlled laboratory environments, have not yet yielded tools for producing real-time, in-situ data that are adequately comprehensible for infrastructure decision-makers. The need for advanced sensing technologies is national, because every municipality and state in the nation faces infrastructure management challenges. The need is critical because portions of infrastructure are reaching the end of their life-spans and there are few cost-effective means to monitor infrastructure integrity and to prioritize the renovation and replacement of infrastructure elements. New advanced sensing technologies that produce cost-effective inspection and real-time monitoring data, and that can also aid in meaningful interpretation of the acquired data, will therefore enhance public safety by issuing timely and accurate alerts so that effective maintenance can avert disasters. They will also allow more informed management of infrastructure investments by avoiding premature replacement of infrastructure and identifying those structures in need of immediate action to prevent catastrophic failure. Infrastructure management requires that once a structural defect is detected, an economical and efficient repair be made. Advancing the technologies for repairing infrastructure elements in contact with water and road salt, and subjected to thermal changes, requires innovative research to significantly extend the service life of repairs, lower the costs of repairs, and provide repair technologies that are suitable for a wide range of conditions. All these new technologies will provide increased lifetimes

  15. Latin American space activities based on different infrastructures

    NASA Astrophysics Data System (ADS)

    Gall, Ruth

    The paper deals with recent basic space research and space applications in several Latin American countries. It links space activities with national scientific and institutional infrastructures and stresses the importance of interdisciplinary space programs, which can play a major role in developing countries' achievement of self-reliance in space matters.

  16. Design principles in the development of (public) health information infrastructures.

    PubMed

    Neame, Roderick

    2012-01-01

    In this article the author outlines the key issues in the development of a regional health information infrastructure suitable for public health data collections. A set of 10 basic design and development principles, as used and validated in the development of the successful New Zealand National Health Information Infrastructure in 1993, is put forward as a basis for future developments. The article emphasises the importance of securing clinical input into any health data that is collected, and suggests strategies whereby this may be achieved, including creating an information economy alongside the care economy. It is suggested that the role of government in such developments is to demonstrate leadership, to work with the sector to develop data, messaging and security standards, to establish key online indexes, to develop data warehouses and to create financial incentives for adoption of the infrastructure and the services it delivers to users. However, experience suggests that government should refrain from getting involved in local care services data infrastructure, technology and management issues.

  17. Use of the computer and Internet among Italian families: first national study.

    PubMed

    Bricolo, Francesco; Gentile, Douglas A; Smelser, Rachel L; Serpelloni, Giovanni

    2007-12-01

    Although home Internet access has continued to increase, little is known about actual usage patterns in homes. This nationally representative study of over 4,700 Italian households with children measured computer and Internet use of each family member across 3 months. Data on actual computer and Internet usage were collected by Nielsen//NetRatings service and provide national baseline information on several variables for several age groups separately, including children, adolescents, and adult men and women. National averages are shown for the average amount of time spent using computers and on the Web, the percentage of each age group online, and the types of Web sites viewed. Overall, about one-third of children ages 2 to 11, three-fourths of adolescents and adult women, and over four-fifths of adult men access the Internet each month. Children spend an average of 22 hours/month on the computer, with a jump to 87 hours/month for adolescents. Adult women spend less time (about 60 hours/month), and adult men spend more (over 100). The types of Web sites visited are reported, including the top five for each age group. In general, search engines and Web portals are the top sites visited, regardless of age group. These data provide a baseline for comparisons across time and cultures.

  18. The role of private developers in local infrastructure provision in Malaysia

    NASA Astrophysics Data System (ADS)

    Salleh, Dani; Okinono, Otega

    2016-08-01

    Globally, the challenge of local infrastructure provision has attracted much debate amongst different nations, including Malaysia, on how to achieve effective and efficient infrastructure management. This challenge has therefore intensified the efforts of local authorities to incorporate private developers into their development agenda in order to attain sustainable infrastructure development in local areas. The need for adequate provision of local infrastructure is well understood by both local authorities and private developers. Likewise, opinions diverge on the use of private delivery services. Notwithstanding the common perception, significant loopholes have been identified in the most appropriate approach and practices to adopt for enhancing local infrastructure development. The study therefore examined the role of private developers in local infrastructure provision and the procedures adopted by both local authorities and the private sector in local infrastructure development. Data were obtained through a questionnaire, administered via purposive sampling to 22 local authorities and 16 developers, and analysed descriptively. The findings show that the practices most frequently approved by local authorities are joint ventures and complete public delivery systems. Likewise, negotiation was identified as a vital tool for stimulating the acquisition of local infrastructure provision. It was also discovered that one of the greatest challenges in promoting private-sector involvement in local infrastructure development is unregulated procedure. The study therefore recommends that local authorities adopt a collective and integrated approach, with cognisance and priority given to developing a well-structured and systematic process of local infrastructure provision and development.

  19. Simulating economic effects of disruptions in the telecommunications infrastructure.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cox, Roger Gary; Barton, Dianne Catherine; Reinert, Rhonda K.

    2004-01-01

    CommAspen is a new agent-based model for simulating the interdependent effects of market decisions and disruptions in the telecommunications infrastructure on other critical infrastructures in the U.S. economy, such as banking and finance, and electric power. CommAspen extends and modifies the capabilities of Aspen-EE, an agent-based model previously developed by Sandia National Laboratories to analyze the interdependencies between the electric power system and other critical infrastructures. CommAspen has been tested on a series of scenarios in which the communications network has been disrupted, due to congestion and outages. Analysis of the scenario results indicates that communications networks simulated by the model behave as their counterparts do in the real world. Results also show that the model could be used to analyze the economic impact of communications congestion and outages.

  20. 78 FR 49409 - Approval and Promulgation of Air Quality Implementation Plans; Delaware; Infrastructure...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-14

    ...] Approval and Promulgation of Air Quality Implementation Plans; Delaware; Infrastructure Requirements for the 2010 Nitrogen Dioxide National Ambient Air Quality Standards AGENCY: Environmental Protection... national ambient air quality standards (NAAQS) are promulgated, the CAA requires states to submit a plan...

  1. Transportation Infrastructure Design and Construction - Virtual Training Tools

    DOT National Transportation Integrated Search

    2003-09-01

    This project will develop 3D interactive computer-training environments for a major element of transportation infrastructure : hot mix asphalt paving. These tools will include elements of hot mix design (including laboratory equipment) and constructi...

  2. Editorial [Special issue on software defined networks and infrastructures, network function virtualisation, autonomous systems and network management

    DOE PAGES

    Biswas, Amitava; Liu, Chen; Monga, Inder; ...

    2016-01-01

    For the last few years, there has been tremendous growth in data traffic due to the high adoption rate of mobile devices and cloud computing. The Internet of things (IoT) will stimulate even further growth. This is increasing the scale and complexity of telecom/Internet service provider (SP) and enterprise data centre (DC) compute and network infrastructures. As a result, managing these large network-compute converged infrastructures is becoming complex and cumbersome. To cope, network and DC operators are trying to automate network and system operations, administration and management (OAM) functions. OAM includes all non-functional mechanisms which keep the network running.

  3. Software Attribution for Geoscience Applications in the Computational Infrastructure for Geodynamics

    NASA Astrophysics Data System (ADS)

    Hwang, L.; Dumit, J.; Fish, A.; Soito, L.; Kellogg, L. H.; Smith, M.

    2015-12-01

    Scientific software is largely developed by individual scientists and represents a significant intellectual contribution to the field. As the scientific culture and funding agencies move towards an expectation that software be open-source, there is a corresponding need for mechanisms to cite software, both to provide credit and recognition to developers, and to aid in discoverability of software and scientific reproducibility. We assess the geodynamic modeling community's current citation practices by examining more than 300 predominantly self-reported publications from the past 5 years utilizing scientific software that is available through the Computational Infrastructure for Geodynamics (CIG). Preliminary results indicate that authors cite and attribute software through citing (in rank order) peer-reviewed scientific publications, a user's manual, and/or a paper describing the software code. Attributions may be found directly in the text, in acknowledgements, in figure captions, or in footnotes. What is considered citable varies widely. Citations predominantly lack software version numbers or persistent identifiers with which to find the software package. Versioning may be implied through reference to a versioned user manual. Authors sometimes report code features used and whether they have modified the code. As an open-source community, CIG requests that researchers contribute their modifications to the repository. However, such modifications may not be contributed back to a repository code branch, decreasing the chances of discoverability and reproducibility. Survey results from CIG's Software Attribution for Geoscience Applications (SAGA) project suggest that lack of knowledge, tools, and workflows to cite codes are barriers to effectively implementing the emerging citation norms. Generated on-demand attributions on software landing pages and a prototype extensible plug-in to automatically generate attributions in codes are the first steps towards reproducibility.
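
    One way to picture the on-demand attribution idea from the last sentence: a small helper that refuses to emit a citation without the version number and persistent identifier the survey found to be missing. The field names and formatting below are invented for illustration and are not SAGA's actual output.

        def make_attribution(name, version, doi, authors, year):
            # Build a citation string; insist on the fields that make software
            # findable and reproducible (version and a persistent identifier).
            if not version or not doi:
                raise ValueError("version and DOI are required for a complete citation")
            return f"{', '.join(authors)} ({year}). {name} (version {version}). doi:{doi}"

        print(make_attribution(
            name="ExampleGeodynamicsCode",   # hypothetical package
            version="2.1.0",
            doi="10.0000/example.doi",       # placeholder identifier
            authors=["Smith, A.", "Jones, B."],
            year=2015,
        ))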

  4. GreenView and GreenLand Applications Development on SEE-GRID Infrastructure

    NASA Astrophysics Data System (ADS)

    Mihon, Danut; Bacu, Victor; Gorgan, Dorian; Mészáros, Róbert; Gelybó, Györgyi; Stefanut, Teodor

    2010-05-01

    The GreenView and GreenLand applications [1] have been developed through the SEE-GRID-SCI (SEE-GRID eInfrastructure for regional eScience) FP7 project co-funded by the European Commission [2]. The development of environment applications is a challenge for Grid technologies and software development methodologies. This presentation exemplifies the development of the GreenView and GreenLand applications over the SEE-GRID infrastructure by the Grid Application Development Methodology [3]. Today's environmental applications are used in various domains of Earth science, such as meteorology, ground and atmospheric pollution, ground metal detection, and weather prediction. These applications run on satellite images (e.g. Landsat, MERIS, MODIS, etc.), and the accuracy of the output results depends mostly on the quality of these images. The main drawback of such environmental applications is the computational power and storage capacity needed (some images are almost 1 GB in size) to process such a large data volume. Consequently, most applications requiring high computational resources have been migrated onto the Grid infrastructure. This infrastructure offers the computing power by running the atomic application components on different Grid nodes in sequential or parallel mode. The middleware used between the Grid infrastructure and client applications is ESIP (Environment Oriented Satellite Image Processing Platform), which is based on the gProcess platform [4]. In its current form, gProcess is used for launching new processes on the Grid nodes, but also for monitoring the execution status of these processes. This presentation highlights two case studies of Grid-based environmental applications, GreenView and GreenLand [5]. GreenView is used in correlation with MODIS (Moderate Resolution Imaging Spectroradiometer) satellite images and meteorological datasets, in order to produce pseudo-colored temperature and vegetation maps for different geographical CEE (Central

  5. Behavioural science at work for Canada: National Research Council laboratories.

    PubMed

    Veitch, Jennifer A

    2007-03-01

    The National Research Council is Canada's principal research and development agency. Its 20 institutes are structured to address interdisciplinary problems for industrial sectors, and to provide the necessary scientific infrastructure, such as the national science library. Behavioural scientists are active in five institutes: Biological Sciences, Biodiagnostics, Aerospace, Information Technology, and Construction. Research topics include basic cellular neuroscience, brain function, human factors in the cockpit, human-computer interaction, emergency evacuation, and indoor environment effects on occupants. Working in collaboration with NRC colleagues and with researchers from universities and industry, NRC behavioural scientists develop knowledge, designs, and applications that put technology to work for people, designed with people in mind.

  6. Description and operational status of the National Transonic Facility computer complex

    NASA Technical Reports Server (NTRS)

    Boyles, G. B., Jr.

    1986-01-01

    This paper describes the National Transonic Facility (NTF) computer complex and its support of tunnel operations. The capabilities of the research data acquisition and reduction systems are discussed, along with the types of data that can be acquired and presented. Pretest, test, and posttest capabilities are also outlined, along with a discussion of the computer complex's role in monitoring the tunnel control processes and providing the tunnel operators with information needed to control the tunnel. Planned enhancements to the computer complex for support of future testing are presented.

  7. The Small Aircraft Transportation System for America: A Case in Public Infrastructure Change

    NASA Technical Reports Server (NTRS)

    Bowen, Brent D.

    2000-01-01

    The National Aeronautics and Space Administration (NASA), U.S. Department of Transportation, Federal Aviation Administration, industry stakeholders, and academia, have joined forces to pursue the NASA National General Aviation Roadmap leading to a Small Aircraft Transportation System (SATS). This strategic undertaking has a 25-year goal to bring next-generation technologies and improve travel between remote communities and transportation centers in urban areas by utilizing the nation's 5,400 public-use general aviation airports. To facilitate this initiative, a comprehensive upgrade of public infrastructure must be planned, coordinated, and implemented within the framework of the national air transportation system. The Nebraska NASA EPSCoR Program has proposed to deliver research support in key public infrastructure areas in coordination with the General Aviation Program Office at the NASA Langley Research Center. Ultimately, SATS may permit tripling aviation system throughput capacity by tapping the underutilized general aviation facilities to achieve the national goal of doorstep-to-destination travel at four times the speed of highways for the nation's suburban, rural, and remote communities.

  8. Green Infrastructure

    EPA Pesticide Factsheets

    To promote the benefits of green infrastructure, help communities overcome barriers to using GI, and encourage the use of GI to create sustainable and resilient water infrastructure that improves water quality and supports and revitalizes communities.

  9. Modernization of the USGS Hawaiian Volcano Observatory Seismic Processing Infrastructure

    NASA Astrophysics Data System (ADS)

    Antolik, L.; Shiro, B.; Friberg, P. A.

    2016-12-01

    The USGS Hawaiian Volcano Observatory (HVO) operates a Tier 1 Advanced National Seismic System (ANSS) seismic network to monitor, characterize, and report on volcanic and earthquake activity in the State of Hawaii. Upgrades at the observatory since 2009 have improved the digital telemetry network, computing resources, and seismic data processing with the adoption of the ANSS Quake Management System (AQMS). HVO aims to build on these efforts by further modernizing its seismic processing infrastructure and strengthening its ability to meet ANSS performance standards. Most notably, this will also allow HVO to support redundant systems, both onsite and offsite, in order to provide better continuity of operation during intermittent power and network outages. We are in the process of implementing a number of upgrades and improvements to HVO's seismic processing infrastructure, including: 1) virtualization of AQMS physical servers; 2) migration of server operating systems from Solaris to Linux; 3) consolidation of AQMS real-time and post-processing services to a single server; 4) upgrading the database from Oracle 10 to Oracle 12; and 5) upgrading to the latest Earthworm and AQMS software. These improvements will make server administration more efficient, minimize the hardware resources required by AQMS, simplify the Oracle replication setup, and provide better integration with HVO's existing state-of-health monitoring tools and backup system. Ultimately, it will provide HVO with the latest and most secure software available while making the software easier to deploy and support.

  10. Research on Computer-Based Education for Reading Teachers: A 1989 Update. Results of the First National Assessment of Computer Competence.

    ERIC Educational Resources Information Center

    Balajthy, Ernest

    Results of the 1985-86 National Assessment of Educational Progress (NAEP) survey of American students' knowledge of computers suggest that American schools have a long way to go before computers can be said to have made a significant impact. The survey covered the 3rd, 7th, and 11th grade levels and assessed competence in knowledge of computers,…

  11. An Infrastructure for Web-Based Computer Assisted Learning

    ERIC Educational Resources Information Center

    Joy, Mike; Muzykantskii, Boris; Rawles, Simon; Evans, Michael

    2002-01-01

    We describe an initiative under way at Warwick to provide a technical foundation for computer aided learning and computer-assisted assessment tools, which allows a rich dialogue sensitive to individual students' response patterns. The system distinguishes between dialogues for individual problems and the linking of problems. This enables a subject…

  12. Low-Cost, Robust, Threat-Aware Wireless Sensor Network for Assuring the Nation's Energy Infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carols H. Rentel

    2007-03-31

    Eaton, in partnership with Oak Ridge National Laboratory and the Electric Power Research Institute (EPRI), has completed a project that applies a combination of wireless sensor network (WSN) technology, anticipatory theory, and a near-term value proposition based on diagnostics and process uptime to ensure the security and reliability of critical electrical power infrastructure. Representatives of several Eaton business units were engaged to ensure a viable commercialization plan. Tennessee Valley Authority (TVA), American Electric Power (AEP), PEPCO, and Commonwealth Edison were recruited as partners to confirm and refine the requirements definition from the perspective of the utilities that actually operate the facilities to be protected. Those utilities cooperated with on-site field tests as the project proceeded. Accomplishments of this project included: (1) the design, modeling, and simulation of the anticipatory wireless sensor network (A-WSN) used to gather field information for the anticipatory application, (2) the design and implementation of hardware and software prototypes for laboratory and field experimentation, (3) stack and application integration, (4) development of an installation and test plan, and (5) refinement of the commercialization plan.

  13. Low-carbon infrastructure strategies for cities

    NASA Astrophysics Data System (ADS)

    Kennedy, C. A.; Ibrahim, N.; Hoornweg, D.

    2014-05-01

    Reducing greenhouse gas emissions to avert potentially disastrous global climate change requires substantial redevelopment of infrastructure systems. Cities are recognized as key actors for leading such climate change mitigation efforts. We have studied the greenhouse gas inventories and underlying characteristics of 22 global cities. These cities differ in terms of their climates, income, levels of industrial activity, urban form and existing carbon intensity of electricity supply. Here we show how these differences in city characteristics lead to wide variations in the type of strategies that can be used for reducing emissions. Cities experiencing greater than ~1,500 heating degree days (below an 18 °C base), for example, will review building construction and retrofitting for cold climates. Electrification of infrastructure technologies is effective for cities where the carbon intensity of the grid is lower than ~600 tCO2e GWh^-1, whereas transportation strategies will differ between low urban density (<~6,000 persons km^-2) and high urban density (>~6,000 persons km^-2) cities. As nation states negotiate targets and develop policies for reducing greenhouse gas emissions, attention to the specific characteristics of their cities will broaden and improve their suite of options. Beyond carbon pricing, markets and taxation, governments may develop policies and target spending towards low-carbon urban infrastructure.
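
    The thresholds quoted in the abstract translate directly into a simple screening rule. The sketch below encodes only those three published cut-offs and is not the authors' model.

        def strategy_screen(heating_degree_days, grid_tco2e_per_gwh, density_per_km2):
            # Map the abstract's three city characteristics to strategy hints.
            hints = []
            if heating_degree_days > 1500:       # cold-climate cities (18 degC base)
                hints.append("review building construction/retrofits for cold climates")
            if grid_tco2e_per_gwh < 600:         # relatively clean electricity grid
                hints.append("electrify infrastructure technologies")
            if density_per_km2 > 6000:           # high urban density
                hints.append("high-density transportation strategies")
            else:
                hints.append("low-density transportation strategies")
            return hints

        print(strategy_screen(2000, 450, 7500))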

  14. Collaborative Development of e-Infrastructures and Data Management Practices for Global Change Research

    NASA Astrophysics Data System (ADS)

    Samors, R. J.; Allison, M. L.

    2016-12-01

    An e-infrastructure that supports data-intensive, multidisciplinary research is being organized under the auspices of the Belmont Forum consortium of national science funding agencies to accelerate the pace of science to address 21st century global change research challenges. The pace and breadth of change in information management across the data lifecycle mean that no one country or institution can unilaterally provide the leadership and resources required to use data and information effectively, or needed to support a coordinated, global e-infrastructure. The five action themes adopted by the Belmont Forum are: 1. Adopt and make enforceable Data Principles that establish a global, interoperable e-infrastructure. 2. Foster communication, collaboration and coordination between the wider research community and the Belmont Forum and its projects through an e-Infrastructure Coordination, Communication, & Collaboration Office. 3. Promote effective data planning and stewardship in all Belmont Forum agency-funded research, with a goal to make it enforceable. 4. Determine international and community best practice to inform Belmont Forum research e-infrastructure policy through identification and analysis of cross-disciplinary research case studies. 5. Support the development of a cross-disciplinary training curriculum to expand human capacity in technology and data-intensive analysis methods. The Belmont Forum is ideally poised to play a vital and transformative leadership role in establishing a sustained human and technical international data e-infrastructure to support global change research. In 2016, members of the 23-nation Belmont Forum began a collaborative implementation phase. Four multi-national teams are undertaking Action Themes based on the recommendations above. Tasks include mapping the landscape, identifying and documenting existing data management plans, and scheduling a series of workshops that analyse trans-disciplinary applications of existing Belmont Forum

  15. Current and future flood risk to railway infrastructure in Europe

    NASA Astrophysics Data System (ADS)

    Bubeck, Philip; Kellermann, Patric; Alfieri, Lorenzo; Feyen, Luc; Dillenardt, Lisa; Thieken, Annegret H.

    2017-04-01

    Railway infrastructure plays an important role in the transportation of freight and passengers across the European Union. According to Eurostat, more than four billion passenger-kilometres were travelled on national and international railway lines of the EU28 in 2014. To further strengthen transport infrastructure in Europe, the European Commission will invest another € 24.05 billion in the transnational transport network until 2020 as part of its new transport infrastructure policy (TEN-T), including railway infrastructure. Floods pose a significant risk to infrastructure elements. Damage data from recent flood events in Europe show that infrastructure losses can make up a considerable share of overall losses. For example, damage to state and municipal infrastructure in the federal state of Saxony (Germany) accounted for nearly 60% of overall losses during the large-scale event in June 2013. Especially in mountainous areas with little usable space available, roads and railway lines often follow floodplains or are located along steep and unsteady slopes. In Austria, for instance, the flood of 2013 caused € 75 million of direct damage to railway infrastructure. Despite the importance of railway infrastructure and its exposure to flooding, assessments of potential damage and risk (i.e. probability * damage) are still in their infancy compared with other sectors, such as the residential or industrial sector. Infrastructure-specific assessments at the regional scale are largely lacking. Regional assessment of potential damage to railway infrastructure has been hampered by a lack of infrastructure-specific damage models and data availability. The few available regional approaches have used damage models that assess damage to various infrastructure elements (e.g. roads, railways, airports and harbours) using one aggregated damage function and cost estimate. Moreover, infrastructure elements are often considerably underrepresented in regional land cover data, such as

  16. 76 FR 50487 - Protected Critical Infrastructure Information (PCII) Stakeholder Survey

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-15

    ... Information (PCII) Stakeholder Survey AGENCY: National Protection and Programs Directorate, DHS. ACTION: 30... Collection Request, Protected Critical Infrastructure Information (PCII) Stakeholder Survey. DHS previously... homeland security duties. This survey is designed to gather information from PCII Officers that can be used...

  17. 78 FR 28707 - National Defense Transportation Day and National Transportation Week, 2013

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-15

    ... challenges we face. We need to restore our roads, bridges, and ports-- transportation networks that are... security. At a time when our cities face unprecedented threats and hazards, we must do more to ensure our... infrastructure. In recognition of the importance of our Nation's transportation infrastructure, and of the men...

  18. Cloud Environment Automation: from infrastructure deployment to application monitoring

    NASA Astrophysics Data System (ADS)

    Aiftimiei, C.; Costantini, A.; Bucchi, R.; Italiano, A.; Michelotto, D.; Panella, M.; Pergolesi, M.; Saletta, M.; Traldi, S.; Vistoli, C.; Zizzi, G.; Salomoni, D.

    2017-10-01

    The potential offered by the cloud paradigm is often limited by technical issues, rules and regulations. In particular, the activities related to the design and deployment of the Infrastructure as a Service (IaaS) cloud layer can be difficult to apply and time-consuming for infrastructure maintainers. In this paper, the research activity carried out during the Open City Platform (OCP) research project [1], aimed at designing and developing an automatic tool for cloud-based IaaS deployment, is presented. Open City Platform is an industrial research project funded by the Italian Ministry of University and Research (MIUR), started in 2014. It intends to research, develop and test new technological solutions that are open, interoperable and usable on demand in the field of cloud computing, along with new sustainable organizational models that can be deployed for and adopted by Public Administrations (PA). The presented work and the related outcomes are aimed at simplifying the deployment and maintenance of a complete IaaS cloud-based infrastructure.

  19. A hybrid computational strategy to address WGS variant analysis in >5000 samples.

    PubMed

    Huang, Zhuoyi; Rustagi, Navin; Veeraraghavan, Narayanan; Carroll, Andrew; Gibbs, Richard; Boerwinkle, Eric; Venkata, Manjunath Gorentla; Yu, Fuli

    2016-09-10

    The decreasing costs of sequencing are driving the need for cost-effective and real-time variant calling of whole genome sequencing data. The scale of these projects is far beyond the capacity of typical computing resources available to most research labs. Other infrastructures, such as the AWS cloud environment and supercomputers, also have limitations that make large-scale joint variant calling infeasible, and infrastructure-specific variant calling strategies either fail to scale up to large datasets or abandon joint calling. We present a high-throughput framework including multiple variant callers for single nucleotide variant (SNV) calling, which leverages a hybrid computing infrastructure consisting of the AWS cloud, supercomputers and local high performance computing infrastructures. We present a novel binning approach for large-scale joint variant calling and imputation which can scale up to over 10,000 samples while producing SNV callsets with high sensitivity and specificity. As a proof of principle, we present results of analysis on the Cohorts for Heart And Aging Research in Genomic Epidemiology (CHARGE) WGS freeze 3 dataset, in which joint calling, imputation and phasing of over 5300 whole genome samples was produced in under 6 weeks using four state-of-the-art callers. The callers used were SNPTools, GATK-HaplotypeCaller, GATK-UnifiedGenotyper and GotCloud. We used Amazon AWS, a 4000-core in-house cluster at Baylor College of Medicine, IBM power PC Blue BioU at Rice and Rhea at Oak Ridge National Laboratory (ORNL) for the computation. AWS was used for joint calling of 180 TB of BAM files, and the ORNL and Rice supercomputers were used for the imputation and phasing step. All other steps were carried out on the local compute cluster. The entire operation used 5.2 million core hours and only transferred a total of 6 TB of data across the platforms. Even with increasing sizes of whole genome datasets, ensemble joint calling of SNVs for low
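
    The binning idea can be pictured as chunking the genome into fixed-size regions and spreading the resulting work units across the available platforms. The sketch below is a generic round-robin illustration, not the paper's actual scheduler; bin size and platform names are invented.

        from itertools import cycle

        def make_bins(chrom_length, bin_size):
            # Split one chromosome into fixed-size regions for independent calling.
            return [(start, min(start + bin_size, chrom_length))
                    for start in range(0, chrom_length, bin_size)]

        def assign(bins, platforms):
            # Round-robin the bins across heterogeneous compute platforms.
            schedule = {p: [] for p in platforms}
            for region, platform in zip(bins, cycle(platforms)):
                schedule[platform].append(region)
            return schedule

        bins = make_bins(chrom_length=248_000_000, bin_size=10_000_000)  # chr1-sized
        plan = assign(bins, ["aws", "supercomputer", "local-cluster"])
        for platform, regions in plan.items():
            print(platform, len(regions), "bins")

    Because each bin is called independently, the work units can be placed wherever capacity exists, which is what makes the hybrid cloud/supercomputer/local split described above workable.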

  20. Creating an open environment software infrastructure

    NASA Technical Reports Server (NTRS)

    Jipping, Michael J.

    1992-01-01

    As the development of complex computer hardware accelerates at increasing rates, the ability of software to keep pace is essential. The development of software design tools, however, is falling behind the development of hardware for several reasons, the most prominent of which is the lack of a software infrastructure to provide an integrated environment for all parts of a software system. The research was undertaken to provide a basis for answering this problem by investigating the requirements of open environments.

  1. Fast Risk Assessment Software For Natural Hazard Phenomena Using Georeference Population And Infrastructure Data Bases

    NASA Astrophysics Data System (ADS)

    Marrero, J. M.; Pastor Paz, J. E.; Erazo, C.; Marrero, M.; Aguilar, J.; Yepes, H. A.; Estrella, C. M.; Mothes, P. A.

    2015-12-01

    Disaster Risk Reduction (DRR) requires an integrated multi-hazard assessment approach to natural hazard mitigation. In the case of volcanic risk, long-term hazard maps are generally developed on the basis of the most probable scenarios (likelihood of occurrence) or worst cases. However, in the short term, expected scenarios may vary substantially depending on the monitoring data or new knowledge. In this context, the time required to obtain and process data is critical for optimum decision making. Availability of up-to-date volcanic scenarios is as crucial as having this data accompanied by efficient estimations of their impact on populations and infrastructure. To address this impact estimation during volcanic crises, or other natural hazards, a web interface has been developed to execute an ANSI C application. This application allows one to compute, in a matter of seconds, the demographic and infrastructure impact that any natural hazard may cause, employing an overlay-layer approach. The web interface is tailored to users involved in the volcanic crisis management of Cotopaxi volcano (Ecuador). The population database and the cartographic base used are in the public domain, published by the National Office of Statistics of Ecuador (INEC, by its Spanish acronym). To run the application and obtain results, the user uploads a raster file containing information related to the volcanic hazard or any other natural hazard, and defines categories to group the population or infrastructure potentially affected. The results are displayed in a user-friendly report.
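
    The overlay-layer computation the abstract mentions essentially reduces to masking a population grid with a hazard raster and summing. A numpy sketch with synthetic grids and an invented threshold (not the project's ANSI C code):

        import numpy as np

        rng = np.random.default_rng(1)

        # Synthetic stand-ins for the georeferenced layers: a hazard raster
        # (e.g. ash load) and a co-registered population-count grid.
        hazard = rng.random((200, 200))
        population = rng.integers(0, 50, size=(200, 200))

        threshold = 0.7                    # invented hazard cut-off
        affected_mask = hazard >= threshold

        affected_people = int(population[affected_mask].sum())
        affected_cells = int(affected_mask.sum())
        print(f"{affected_people} people in {affected_cells} exposed grid cells")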

  2. 78 FR 65593 - Approval and Promulgation of Air Quality Implementation Plans; West Virginia; Infrastructure...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-01

    ...] Approval and Promulgation of Air Quality Implementation Plans; West Virginia; Infrastructure Requirements for the 2010 Nitrogen Dioxide National Ambient Air Quality Standards AGENCY: Environmental Protection... revised national ambient air quality standards (NAAQS) are promulgated, the CAA requires states to submit...

  3. A National Assessment of Green Infrastructure and Change for the Conterminous United States Using Morphological Image Processing

    EPA Science Inventory

    Green infrastructure is a popular framework for conservation planning. The main elements of green infrastructure are hubs and links. Hubs tend to be large areas of ‘natural’ vegetation and links tend to be linear features (e.g., streams) that connect hubs. Within the United State...

  4. What Do Experienced Water Managers Think of Water Resources of Our Nation and Its Management Infrastructure?

    PubMed

    Hossain, Faisal; Arnold, Jeffrey; Beighley, Ed; Brown, Casey; Burian, Steve; Chen, Ji; Mitra, Anindita; Niyogi, Dev; Pielke, Roger; Tidwell, Vincent; Wegner, Dave

    2015-01-01

    This article represents the second report by an ASCE Task Committee "Infrastructure Impacts of Landscape-driven Weather Change" under the ASCE Watershed Management Technical Committee and the ASCE Hydroclimate Technical Committee. Herein, the "infrastructure impacts" are referred to as infrastructure-sensitive changes in weather and climate patterns (extremes and non-extremes) that are modulated, among other factors, by changes in landscape, land use and land cover. In the first report, the article argued for explicitly considering the well-established feedbacks triggered by infrastructure systems to the land-atmosphere system via landscape change. In this second report by the ASCE Task Committee (TC), we present the results of the TC's survey of a cross section of experienced water managers using a set of carefully crafted questions. These questions covered water resources management, infrastructure resiliency and recommendations for inclusion in education and curriculum. We describe here the specifics of the survey and the results obtained, in the form of statistical averages of the 'perception' of these managers. Finally, we discuss what these 'perception' averages may indicate to the ASCE TC and the community as a whole for stewardship of the civil engineering profession. The survey and the responses gathered are not exhaustive, nor do they represent an ASCE-endorsed viewpoint. However, the survey provides a critical first step toward developing the framework of a research and education plan for ASCE. Given the Water Resources Reform and Development Act passed in 2014, we must now take into account the perceived concerns of the water management community.

  5. What Do Experienced Water Managers Think of Water Resources of Our Nation and Its Management Infrastructure?

    PubMed Central

    Hossain, Faisal; Arnold, Jeffrey; Beighley, Ed; Brown, Casey; Burian, Steve; Chen, Ji; Mitra, Anindita; Niyogi, Dev; Pielke, Roger; Tidwell, Vincent; Wegner, Dave

    2015-01-01

    This article represents the second report by an ASCE Task Committee “Infrastructure Impacts of Landscape-driven Weather Change” under the ASCE Watershed Management Technical Committee and the ASCE Hydroclimate Technical Committee. Herein, the “infrastructure impacts” are referred to as infrastructure-sensitive changes in weather and climate patterns (extremes and non-extremes) that are modulated, among other factors, by changes in landscape, land use and land cover. In the first report, the article argued for explicitly considering the well-established feedbacks triggered by infrastructure systems to the land-atmosphere system via landscape change. In this second report by the ASCE Task Committee (TC), we present the results of the TC’s survey of a cross section of experienced water managers using a set of carefully crafted questions. These questions covered water resources management, infrastructure resiliency and recommendations for inclusion in education and curriculum. We describe here the specifics of the survey and the results obtained, in the form of statistical averages of the ‘perception’ of these managers. Finally, we discuss what these ‘perception’ averages may indicate to the ASCE TC and the community as a whole for stewardship of the civil engineering profession. The survey and the responses gathered are not exhaustive, nor do they represent an ASCE-endorsed viewpoint. However, the survey provides a critical first step toward developing the framework of a research and education plan for ASCE. Given the Water Resources Reform and Development Act passed in 2014, we must now take into account the perceived concerns of the water management community. PMID:26544045

  6. Role of information systems in controlling costs: the electronic medical record (EMR) and the high-performance computing and communications (HPCC) efforts

    NASA Astrophysics Data System (ADS)

    Kun, Luis G.

    1994-12-01

    On October 18, 1991, the IEEE-USA produced an entity statement which endorsed the vital importance of the High Performance Computing and Communications Act of 1991 (HPCC) and called for the rapid implementation of all its elements. Efforts are now underway to develop a Computer Based Patient Record (CBPR), the National Information Infrastructure (NII) as part of the HPCC, and the so-called 'Patient Card'. Multiple legislative initiatives which address these and related information technology issues are pending in Congress. Clearly, a national information system will greatly affect the way health care delivery is provided to the United States public. Timely and reliable information represents a critical element in any initiative to reform the health care system as well as to protect and improve the health of every person. Appropriately used, information technologies offer a vital means of improving the quality of patient care, increasing access to universal care and lowering overall costs within a national health care program. Health care reform legislation should reflect increased budgetary support and a legal mandate for the creation of a national health care information system by: (1) constructing a National Information Infrastructure; (2) building a Computer Based Patient Record System; (3) bringing the collective resources of our National Laboratories to bear in developing and implementing the NII and CBPR, as well as a security system with which to safeguard the privacy rights of patients and the physician-patient privilege; and (4) utilizing Government (e.g. DOD, DOE) capabilities (technology and human resources) to maximize resource utilization, create new jobs and accelerate technology transfer to address health care issues.

  7. Integrating CAD modules in a PACS environment using a wide computing infrastructure.

    PubMed

    Suárez-Cuenca, Jorge J; Tilve, Amara; López, Ricardo; Ferro, Gonzalo; Quiles, Javier; Souto, Miguel

    2017-04-01

    The aim of this paper is to describe a project designed to achieve a total integration of different CAD algorithms into the PACS environment by using a wide computing infrastructure. The goal is to build a system for the entire region of Galicia, Spain, to make CAD accessible to multiple hospitals employing different PACSs and clinical workstations. The new CAD model seeks to connect different devices (CAD systems, acquisition modalities, workstations and PACS) by means of networking based on a platform that will offer different CAD services. This paper describes some aspects related to the health services of the region where the project was developed, the CAD algorithms that were either employed or selected for inclusion in the project, and several technical aspects and results. We have built a standards-based platform with which users can request a CAD service and receive the results in their local PACS. The process runs through a web interface that allows sending data to the different CAD services. A DICOM SR object is received with the results of the algorithms, stored inside the original study in the proper folder with the original images. As a result, a homogeneous service will be offered to the different hospitals of the region. End users will benefit from a homogeneous workflow and a standardised integration model to request and obtain results from CAD systems in any modality, not dependent on commercial integration models. This new solution will foster the deployment of these technologies in the entire region of Galicia.

  8. Computational Science in Armenia (Invited Talk)

    NASA Astrophysics Data System (ADS)

    Marandjian, H.; Shoukourian, Yu.

    This survey is devoted to the development of informatics and computer science in Armenia. The results in theoretical computer science (algebraic models, solutions to systems of general-form recursive equations, methods of coding theory, pattern recognition and image processing) constitute the theoretical basis for developing problem-solving-oriented environments. Examples include a synthesizer of optimized distributed recursive programs, software tools for cluster-oriented implementations of two-dimensional cellular automata, and a grid-aware web interface with advanced service trading for linear algebra calculations. In the direction of solving scientific problems that require high-performance computing resources, completed projects include physics (parallel computing of complex quantum systems), astrophysics (the Armenian virtual laboratory), biology (a molecular dynamics study of the human red blood cell membrane), and meteorology (implementing and evaluating the Weather Research and Forecasting model for the territory of Armenia). The overview also notes that the Institute for Informatics and Automation Problems of the National Academy of Sciences of Armenia has established a scientific and educational infrastructure, uniting the computing clusters of scientific and educational institutions of the country and providing the scientific community with access to local and international computational resources, which strongly supports computational science in Armenia.

  9. Increasing the resilience and security of the United States' power infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Happenny, Sean F.

    2015-08-01

    The United States' power infrastructure is aging, underfunded, and vulnerable to cyber attack. Emerging smart grid technologies may take some of the burden off existing systems and make the grid as a whole more efficient, reliable, and secure. The Pacific Northwest National Laboratory (PNNL) is funding research into several aspects of smart grid technology and grid security, creating a software simulation tool that will allow researchers to test power infrastructure control and distribution paradigms by utilizing different smart grid technologies to determine how the grid and these technologies react under different circumstances. Understanding how these systems behave in real-world conditions will lead to new ways to make our power infrastructure more resilient and secure. Demonstrating security in embedded systems is another research area PNNL is tackling. Many of the systems controlling the U.S. critical infrastructure, such as the power grid, lack integrated security, and the aging networks protecting them are becoming easier to attack.

  10. The inventions technology on water resources to support environmental engineering based infrastructure

    NASA Astrophysics Data System (ADS)

    Sunjoto, S.

    2017-03-01

    Since the Stockholm Declaration, adopted at the United Nations Conference on the Human Environment in Sweden on 5-16 June 1972 and attended by 113 national delegations, infrastructure construction has been expected to comply with sustainable development. As a consequence, most research and studies have been directed toward the environmental aspects of construction, including water resources engineering. This paper presents inventions that are very useful for the design of infrastructure, especially in groundwater engineering. This field has developed rapidly since the publication of the well-known law of flow through porous materials by Henri Darcy in 1856 in his book "Les fontaines publiques de la ville de Dijon". This law states that the discharge through a porous medium is proportional to the product of the hydraulic gradient, the cross-sectional area normal to the flow and the coefficient of permeability of the material. Forchheimer in 1930 developed a breakthrough formula by simplifying the solution for the steady-state flow condition, especially in the case of radial flow, to compute the permeability coefficient from a casing-hole or tube test with zero inflow discharge. The outflow discharge through the holes equals the shape factor of the casing tip (F) multiplied by the coefficient of permeability of the soil (K) and by the hydraulic head (H). In 1988, Sunjoto derived an equation for the unsteady-state flow condition based on this formula, and in 2002 developed several formulas for the shape factor as parameters of the equation. Initially this formula was implemented to compute the dimensions of recharge wells, the best method of water conservation for urban areas. After long research, the formula can also be implemented to compute the drawdown during pumping, or the coefficient of permeability of soil from a pumping test. This method can substitute for earlier methods such as Theis (1935), Cooper-Jacob (1946), Chow (1952), Glover (1966), Papadopulos-Cooper (1967), Todd (1980
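
    A small numerical sketch of these relations follows. The steady form Q = F·K·H is as stated above; the unsteady expression is the commonly quoted form of Sunjoto's 1988 equation, and all parameter values are invented for illustration:

        import math

        F = 2 * math.pi   # shape factor of the casing tip (assumed value)
        K = 1e-5          # coefficient of permeability, m/s (assumed)
        R = 0.5           # well radius, m (assumed)
        Q = 1e-3          # recharge inflow, m^3/s (assumed)

        # Steady state (Forchheimer): head sustained by discharge Q is H = Q/(F*K).
        H_steady = Q / (F * K)

        # Unsteady state (Sunjoto 1988, as commonly quoted):
        # H(T) = Q/(F*K) * (1 - exp(-F*K*T / (pi * R^2)))
        def head(T: float) -> float:
            return (Q / (F * K)) * (1.0 - math.exp(-F * K * T / (math.pi * R**2)))

        print(f"steady-state head: {H_steady:.1f} m")
        for T in (3_600, 86_400, 864_000):   # 1 hour, 1 day, 10 days
            print(f"head after {T:>7,d} s: {head(T):.1f} m")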

  11. Branch Campus Librarianship with Minimal Infrastructure: Rewards and Challenges

    ERIC Educational Resources Information Center

    Knickman, Elena; Walton, Kerry

    2014-01-01

    Delaware County Community College provides library services to its branch campus community members by stationing a librarian at a campus 5 to 20 hours each week, without any more library infrastructure than an Internet-enabled computer on the school network. Faculty and students have reacted favorably to the increased presence of librarians.…

  12. Regional Charging Infrastructure for Plug-In Electric Vehicles: A Case Study of Massachusetts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, Eric; Raghavan, Sesha; Rames, Clement

    Given the complex issues associated with plug-in electric vehicle (PEV) charging and options in deploying charging infrastructure, there is interest in exploring scenarios of future charging infrastructure deployment to provide insight and guidance to national and regional stakeholders. The complexity and cost of PEV charging infrastructure pose challenges to decision makers, including individuals, communities, and companies considering infrastructure installations. The value of PEVs to consumers and fleet operators can be increased with well-planned and cost-effective deployment of charging infrastructure. This will increase the number of miles driven electrically and accelerate PEV market penetration, increasing the shared value of charging networks to an expanding consumer base. Given these complexities and challenges, the objective of the present study is to provide additional insight into the role of charging infrastructure in accelerating PEV market growth. To that end, existing studies on PEV infrastructure are summarized in a literature review. Next, an analysis of current markets is conducted with a focus on correlations between PEV adoption and public charging availability. A forward-looking case study is then conducted focused on supporting 300,000 PEVs by 2025 in Massachusetts. The report concludes with a discussion of potential methodology for estimating economic impacts of PEV infrastructure growth.

  13. Galaxy CloudMan: delivering cloud compute clusters

    PubMed Central

    2010-01-01

    Background Widespread adoption of high-throughput sequencing has greatly increased the scale and sophistication of computational infrastructure needed to perform genomic research. An alternative to building and maintaining local infrastructure is “cloud computing”, which, in principle, offers on-demand access to flexible computational infrastructure. However, cloud computing resources are not yet suitable for immediate “as is” use by experimental biologists. Results We present a cloud resource management system that makes it possible for individual researchers to compose and control an arbitrarily sized compute cluster on Amazon’s EC2 cloud infrastructure without any informatics requirements. Within this system, an entire suite of biological tools packaged by the NERC Bio-Linux team (http://nebc.nerc.ac.uk/tools/bio-linux) is available for immediate consumption. The provided solution makes it possible, using only a web browser, to create a completely configured compute cluster ready to perform analysis in less than five minutes. Moreover, we provide an automated method for building custom deployments of cloud resources. This approach promotes reproducibility of results and, if desired, allows individuals and labs to add to or customize an otherwise available cloud system to better meet their needs. Conclusions The knowledge and effort required to deploy a compute cluster in the Amazon EC2 cloud are not trivial. The solution presented in this paper eliminates these barriers, making it possible for researchers to deploy exactly the amount of computing power they need, combined with a wealth of existing analysis software, to handle the ongoing data deluge. PMID:21210983
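
    CloudMan drives this through a web browser, but the underlying provisioning step can be pictured with the EC2 API. A hedged sketch using boto3 (placeholder AMI, key pair, and cluster size; not CloudMan's actual code, and valid AWS credentials are assumed to be configured):

        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")
        response = ec2.run_instances(
            ImageId="ami-0123456789abcdef0",  # placeholder machine image
            InstanceType="c5.xlarge",
            MinCount=4,                       # request four worker nodes
            MaxCount=4,
            KeyName="my-keypair",             # placeholder SSH key pair
        )
        ids = [inst["InstanceId"] for inst in response["Instances"]]
        ec2.get_waiter("instance_running").wait(InstanceIds=ids)
        print("cluster nodes running:", ids)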

  14. Commissioning the CERN IT Agile Infrastructure with experiment workloads

    NASA Astrophysics Data System (ADS)

    Medrano Llamas, Ramón; Harald Barreiro Megino, Fernando; Kucharczyk, Katarzyna; Kamil Denis, Marek; Cinquilli, Mattia

    2014-06-01

    In order to ease the management of their infrastructure, most of the WLCG sites are adopting cloud-based strategies. CERN, the Tier-0 site of the WLCG, is completely restructuring the resource and configuration management of its computing centre under the codename Agile Infrastructure. Its goal is to manage 15,000 virtual machines by means of OpenStack middleware in order to unify all the resources in CERN's two data centres: the one in Meyrin and the new one in Wigner, Hungary. During the commissioning of this infrastructure, CERN IT is offering an attractive amount of computing resources to the experiments (800 cores for ATLAS and CMS) through a private cloud interface. ATLAS and CMS have joined forces to exploit them by running stress tests and simulation workloads since November 2012. This work describes the experience of the first deployments of current experiment workloads on the CERN private cloud testbed. The paper is organized as follows: the first section explains the integration of the experiment workload management systems (WMS) with the cloud resources; the second section revisits the performance and stress testing performed with HammerCloud in order to evaluate and compare suitability for the experiment workloads; the third section goes deeper into dynamic provisioning techniques, such as the use of the cloud APIs directly by the WMS. The paper finishes with a review of the conclusions and the challenges ahead.
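
    The kind of dynamic provisioning described in the third section can be sketched against an OpenStack cloud with the openstacksdk library (the cloud name, image, and flavor below are placeholders, not CERN's configuration):

        import openstack

        conn = openstack.connect(cloud="private-cloud")  # entry in clouds.yaml (assumed)
        image = conn.compute.find_image("worker-node")   # placeholder image name
        flavor = conn.compute.find_flavor("m1.large")
        server = conn.compute.create_server(
            name="pilot-worker-001",
            image_id=image.id,
            flavor_id=flavor.id,
        )
        server = conn.compute.wait_for_server(server)    # block until ACTIVE
        print("server active:", server.name)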

  15. Towards a single seismological service infrastructure in Europe

    NASA Astrophysics Data System (ADS)

    Spinuso, A.; Trani, L.; Frobert, L.; Van Eck, T.

    2012-04-01

    within a data-intensive computation framework, which will be tailored to the specific needs of the community. It will provide a new interoperable infrastructure as the computational backbone lying behind the publicly available interfaces. VERCE will have to face the challenges of implementing a service-oriented architecture providing an efficient layer between the data and grid infrastructures, coupling HPC data analysis and HPC data modeling applications through the execution of workflows and data-sharing mechanisms. Online registries of interoperable workflow components, storage of intermediate results and data provenance are aspects currently under investigation to make the VERCE facilities usable by a large community of users, data and service providers. For such purposes, the adoption of a Digital Object Architecture, to create online catalogs referencing and semantically describing all these distributed resources, such as datasets, computational processes and derivative products, is seen as a viable solution to monitor and steer the usage of the infrastructure, increasing its efficiency and the cooperation among the community.

  16. Simulating Impacts of Disruptions to Liquid Fuels Infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, Michael; Corbet, Thomas F.; Baker, Arnold B.

    This report presents a methodology for estimating the impacts of events that damage or disrupt liquid fuels infrastructure. The impact of a disruption depends on which components of the infrastructure are damaged, the time required for repairs, and the position of the disrupted components in the fuels supply network. Impacts are estimated for seven stressing events in regions of the United States, which were selected to represent a range of disruption types. For most of these events the analysis is carried out using the National Transportation Fuels Model (NTFM) to simulate the system-level liquid fuels sector response. Results are presented for each event, and a brief cross comparison of event simulation results is provided.

  17. Modeling Hydrogen Refueling Infrastructure to Support Passenger Vehicles

    DOE PAGES

    Muratori, Matteo; Bush, Brian; Hunter, Chad; ...

    2018-05-07

    The year 2014 marked hydrogen fuel cell electric vehicles (FCEVs) first becoming commercially available in California, where significant investments are being made to promote the adoption of alternative transportation fuels. A refueling infrastructure network that guarantees adequate coverage and expands in line with vehicle sales is required for FCEVs to be successfully adopted by private customers. In this article, we provide an overview of modelling methodologies used to project hydrogen refueling infrastructure requirements to support FCEV adoption, and we describe, in detail, the National Renewable Energy Laboratory's scenario evaluation and regionalization analysis (SERA) model. As an example, we use SERA to explore two alternative scenarios of FCEV adoption: one in which FCEV deployment is limited to California and several major cities in the United States; and one in which FCEVs reach widespread adoption, becoming a major option as passenger vehicles across the entire country. Such scenarios can provide guidance and insights for efforts required to deploy the infrastructure supporting transition toward different levels of hydrogen use as a transportation fuel for passenger vehicles in the United States.

  18. Modeling Hydrogen Refueling Infrastructure to Support Passenger Vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muratori, Matteo; Bush, Brian; Hunter, Chad

    The year 2014 marked hydrogen fuel cell electric vehicles (FCEVs) first becoming commercially available in California, where significant investments are being made to promote the adoption of alternative transportation fuels. A refueling infrastructure network that guarantees adequate coverage and expands in line with vehicle sales is required for FCEVs to be successfully adopted by private customers. In this article, we provide an overview of modelling methodologies used to project hydrogen refueling infrastructure requirements to support FCEV adoption, and we describe, in detail, the National Renewable Energy Laboratory's scenario evaluation and regionalization analysis (SERA) model. As an example, we use SERA to explore two alternative scenarios of FCEV adoption: one in which FCEV deployment is limited to California and several major cities in the United States; and one in which FCEVs reach widespread adoption, becoming a major option as passenger vehicles across the entire country. Such scenarios can provide guidance and insights for efforts required to deploy the infrastructure supporting transition toward different levels of hydrogen use as a transportation fuel for passenger vehicles in the United States.

  19. Geovisualization applications to examine and explore high-density and hierarchical critical infrastructure data

    NASA Astrophysics Data System (ADS)

    Edsall, Robert; Hembree, Harvey

    2018-05-01

    The geospatial research and development team in the National and Homeland Security Division at Idaho National Laboratory was tasked with providing tools to derive insight from the substantial amount of data currently available - and continuously being produced - associated with the critical infrastructure of the US. This effort is in support of the Department of Homeland Security, whose mission includes the protection of this infrastructure and the enhancement of its resilience to hazards, both natural and human. We present geovisual-analytics-based approaches for analysis of vulnerabilities and resilience of critical infrastructure, designed so that decision makers, analysts, and infrastructure owners and managers can manage risk, prepare for hazards, and direct resources before and after an incident that might result in an interruption in service. Our designs are based on iterative discussions with DHS leadership and analysts, who in turn will use these tools to explore and communicate data in partnership with utility providers, law enforcement, and emergency response and recovery organizations, among others. In most cases these partners desire summaries of large amounts of data, but increasingly, our users seek the additional capability of focusing on, for example, a specific infrastructure sector, a particular geographic region, or time period, or of examining data in a variety of generalization or aggregation levels. These needs align well with tenets of information-visualization design; in this paper, selected applications among those that we have designed are described and positioned within geovisualization, geovisual analytical, and information visualization frameworks.

  20. Evolving a lingua franca and associated software infrastructure for computational systems biology: the Systems Biology Markup Language (SBML) project.

    PubMed

    Hucka, M; Finney, A; Bornstein, B J; Keating, S M; Shapiro, B E; Matthews, J; Kovitz, B L; Schilstra, M J; Funahashi, A; Doyle, J C; Kitano, H

    2004-06-01

    Biologists are increasingly recognising that computational modelling is crucial for making sense of the vast quantities of complex experimental data that are now being collected. The systems biology field needs agreed-upon information standards if models are to be shared, evaluated and developed cooperatively. Over the last four years, our team has been developing the Systems Biology Markup Language (SBML) in collaboration with an international community of modellers and software developers. SBML has become a de facto standard format for representing formal, quantitative and qualitative models at the level of biochemical reactions and regulatory networks. In this article, we summarise the current and upcoming versions of SBML and our efforts at developing software infrastructure for supporting and broadening its use. We also provide a brief overview of the many SBML-compatible software tools available today.
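
    For a flavor of what SBML-building code looks like, here is a minimal sketch with the python-libsbml bindings (the compartment, species, and reaction are illustrative only):

        import libsbml

        document = libsbml.SBMLDocument(3, 1)   # SBML Level 3 Version 1
        model = document.createModel()
        model.setId("toy_model")

        comp = model.createCompartment()
        comp.setId("cell")
        comp.setSize(1.0)

        for sid, amount in (("S1", 10.0), ("S2", 0.0)):
            sp = model.createSpecies()
            sp.setId(sid)
            sp.setCompartment("cell")
            sp.setInitialAmount(amount)

        rxn = model.createReaction()            # the reaction S1 -> S2
        rxn.setId("conversion")
        rxn.createReactant().setSpecies("S1")
        rxn.createProduct().setSpecies("S2")

        print(libsbml.writeSBMLToString(document))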

  1. Infrastructure system restoration planning using evolutionary algorithms

    USGS Publications Warehouse

    Corns, Steven; Long, Suzanna K.; Shoberg, Thomas G.

    2016-01-01

    This paper presents an evolutionary algorithm to address restoration issues for supply chain interdependent critical infrastructure. Rapid restoration of infrastructure after a large-scale disaster is necessary to sustain a nation's economy and security, but such long-term restoration has not been investigated as thoroughly as initial rescue and recovery efforts. A model of the Greater Saint Louis Missouri area was created and a disaster scenario simulated. An evolutionary algorithm is used to determine the order in which the bridges should be repaired based on indirect costs. Solutions were evaluated based on the reduction of indirect costs and the restoration of transportation capacity. When compared to a greedy algorithm, the evolutionary algorithm solution reduced indirect costs by approximately 12.4% by restoring automotive travel routes for workers and re-establishing the flow of commodities across the three rivers in the Saint Louis area.
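
    The core of such an approach can be sketched in a few lines: evolve permutations of the repair order against an indirect-cost function. The repair times, penalties, and cost model below are invented for illustration, not the paper's calibrated model:

        import random

        REPAIR_DAYS   = {"A": 30, "B": 10, "C": 45, "D": 20}
        DAILY_PENALTY = {"A": 5.0, "B": 9.0, "C": 2.0, "D": 6.0}  # cost per day closed

        def indirect_cost(order):
            day, total = 0, 0.0
            for bridge in order:
                day += REPAIR_DAYS[bridge]            # bridge reopens on this day
                total += DAILY_PENALTY[bridge] * day  # penalty accrues until reopening
            return total

        def mutate(order):
            a, b = random.sample(range(len(order)), 2)
            child = list(order)
            child[a], child[b] = child[b], child[a]   # swap two repair slots
            return child

        population = [random.sample(list(REPAIR_DAYS), k=4) for _ in range(20)]
        for _ in range(200):                          # simple (mu + lambda) loop
            population += [mutate(p) for p in population]
            population = sorted(population, key=indirect_cost)[:20]

        print("best order:", population[0], "cost:", indirect_cost(population[0]))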

  2. Parallel digital forensics infrastructure.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liebrock, Lorie M.; Duggan, David Patrick

    2009-10-01

    This report documents the architecture and implementation of a Parallel Digital Forensics infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets only expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics. This report documents the architecture and implementation of the parallel digital forensics (PDF) infrastructure.

  3. e-Infrastructures for Astronomy: An Integrated View

    NASA Astrophysics Data System (ADS)

    Pasian, F.; Longo, G.

    2010-12-01

    As for other disciplines, the capability of performing “Big Science” in astrophysics requires the availability of large facilities. In the field of ICT, computational resources (e.g. HPC) are important, but far from sufficient for the community: as a matter of fact, the whole set of e-infrastructures (network, computing nodes, data repositories, applications) needs to work in an interoperable way. This implies the development of common (or at least compatible) user interfaces to computing resources, transparent access to observations and numerical simulations through the Virtual Observatory, integrated data processing pipelines, data mining and semantic web applications. Achieving this interoperability goal is a must to build a real “Knowledge Infrastructure” in the astrophysical domain. Also, the emergence of new professional profiles (e.g. the “astro-informatician”) is necessary to allow this conceptual schema to be properly defined and implemented.

  4. DICOMGrid: a middleware to integrate PACS and EELA-2 grid infrastructure

    NASA Astrophysics Data System (ADS)

    Moreno, Ramon A.; de Sá Rebelo, Marina; Gutierrez, Marco A.

    2010-03-01

    Medical images provide a wealth of information for physicians, but the huge amount of data produced by medical imaging equipment in a modern health institution is not yet explored to its full potential. Nowadays medical images are used in hospitals mostly as part of routine activities, while their intrinsic value for research is underestimated. Medical images can be used for the development of new visualization techniques, new algorithms for patient care and new image processing techniques. These research areas usually require the use of huge volumes of data to obtain significant results, along with enormous computing capabilities. Such qualities are characteristic of grid computing systems such as the EELA-2 infrastructure. Grid technologies allow the sharing of data on a large scale in a safe and integrated environment and offer high computing capabilities. In this paper we describe DicomGrid, which stores and retrieves medical images, properly anonymized, that can be used by researchers to test new processing techniques using the computational power offered by grid technology. A prototype of DicomGrid is under evaluation and permits the submission of jobs into the EELA-2 grid infrastructure while offering a simple interface that requires minimal understanding of the grid operation.
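
    The anonymization step such a grid ingest pipeline might apply can be sketched with pydicom (the tag choices and file names are illustrative; a production system would follow a complete de-identification profile):

        import pydicom

        ds = pydicom.dcmread("study.dcm")   # placeholder input file
        ds.PatientName = "ANONYMOUS"
        ds.PatientID = "0000000"
        ds.PatientBirthDate = ""
        ds.remove_private_tags()            # drop vendor-specific identifiers
        ds.save_as("study_anon.dcm")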

  5. Aging infrastructure creates opportunities for cost-efficient restoration of aquatic ecosystem connectivity.

    PubMed

    Neeson, Thomas M; Moody, Allison T; O'Hanley, Jesse R; Diebel, Matthew; Doran, Patrick J; Ferris, Michael C; Colling, Timothy; McIntyre, Peter B

    2018-06-09

    A hallmark of industrialization is the construction of dams for water management and roads for transportation, leading to fragmentation of aquatic ecosystems. Many nations are striving to address both maintenance backlogs and mitigation of environmental impacts as their infrastructure ages. Here, we test whether accounting for road repair needs could offer opportunities to boost conservation efficiency by piggybacking connectivity restoration projects on infrastructure maintenance. Using optimization models to align fish passage restoration sites with likely road repair priorities, we find potential increases in conservation return-on-investment ranging from 17% to 25%. Importantly, these gains occur without compromising infrastructure or conservation priorities; simply communicating openly about objectives and candidate sites enables greater accomplishment at current funding levels. Society embraces both reliable roads and thriving fisheries, so overcoming this coordination challenge should be feasible. Given deferred maintenance crises for many types of infrastructure, there could be widespread opportunities to enhance the cost effectiveness of conservation investments by coordinating with infrastructure renewal efforts. © 2018 by the Ecological Society of America.
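
    The flavor of the underlying optimization can be illustrated with a toy budgeted selection in which co-located projects cost less because mobilization is shared; all numbers are invented for the sketch:

        projects = [  # (name, habitat_gain_km, standalone_cost, co_located_with_repair)
            ("culvert-1", 12.0,  80.0, True),
            ("culvert-2",  7.5,  60.0, False),
            ("dam-3",     25.0, 400.0, False),
            ("culvert-4",  9.0,  70.0, True),
        ]
        SHARED_COST_FACTOR = 0.75   # assumed saving when piggybacking on road repairs
        BUDGET = 200.0

        def cost(p):
            _, _, c, shared = p
            return c * SHARED_COST_FACTOR if shared else c

        chosen, spent = [], 0.0
        for p in sorted(projects, key=lambda p: p[1] / cost(p), reverse=True):
            if spent + cost(p) <= BUDGET:   # greedy by habitat gained per unit cost
                chosen.append(p[0])
                spent += cost(p)
        print("selected:", chosen, f"spent: {spent:.1f}")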

  6. Interoperability and security in wireless body area network infrastructures.

    PubMed

    Warren, Steve; Lebak, Jeffrey; Yao, Jianchu; Creekmore, Jonathan; Milenkovic, Aleksandar; Jovanov, Emil

    2005-01-01

    Wireless body area networks (WBANs) and their supporting information infrastructures offer unprecedented opportunities to monitor state of health without constraining the activities of a wearer. These mobile point-of-care systems are now realizable due to the convergence of technologies such as low-power wireless communication standards, plug-and-play device buses, off-the-shelf development kits for low-power microcontrollers, handheld computers, electronic medical records, and the Internet. To increase acceptance of personal monitoring technology while lowering equipment cost, advances must be made in interoperability (at both the system and device levels) and security. This paper presents an overview of WBAN infrastructure work in these areas currently underway in the Medical Component Design Laboratory at Kansas State University (KSU) and at the University of Alabama in Huntsville (UAH). KSU efforts include the development of wearable health status monitoring systems that utilize ISO/IEEE 11073, Bluetooth, Health Level 7, and OpenEMed. WBAN efforts at UAH include the development of wearable activity and health monitors that incorporate ZigBee-compliant wireless sensor platforms with hardware-level encryption and the TinyOS development environment. WBAN infrastructures are complex, requiring many functional support elements. To realize these infrastructures through collaborative efforts, organizations such as KSU and UAH must define and utilize standard interfaces, nomenclature, and security approaches.

  7. IP Infrastructure Geolocation

    DTIC Science & Technology

    2015-03-01

    Physical network maps are important to critical infrastructure defense and planning. Current state-of-the-art network infrastructure geolocation relies on Domain Name System (DNS) inferences. However, not only is using the DNS relatively inaccurate for

  8. NECC 2002: National Educational Computing Conference Proceedings (23rd, San Antonio, Texas, June 17-19, 2002).

    ERIC Educational Resources Information Center

    National Educational Computing Conference.

    The National Educational Computing Conference (NECC) is the largest conference of its kind in the world. This document is the Proceedings from the 23rd annual National Educational Computing Conference (NECC) held in San Antonio, June 17-19, 2002. Included are: general information; schedule of events; evaluation form; and the program. Information…

  9. A prototype Infrastructure for Cloud-based distributed services in High Availability over WAN

    NASA Astrophysics Data System (ADS)

    Bulfon, C.; Carlino, G.; De Salvo, A.; Doria, A.; Graziosi, C.; Pardi, S.; Sanchez, A.; Carboni, M.; Bolletta, P.; Puccio, L.; Capone, V.; Merola, L.

    2015-12-01

    In this work we present the architectural and performance studies concerning a prototype of a distributed Tier-2 infrastructure for HEP, instantiated between the two Italian sites of INFN-Roma1 and INFN-Napoli. The network infrastructure is based on a Layer-2 geographical link, provided by the Italian NREN (GARR), directly connecting the two remote LANs of the named sites. By exploiting the possibilities offered by new distributed file systems, a shared storage area with synchronous copy has been set up. The computing infrastructure, based on an OpenStack facility, uses a set of distributed hypervisors installed at both sites. The main parameter to be taken into account when managing two remote sites within a single framework is the effect of latency, due to the distance and the end-to-end service overhead. In order to understand the capabilities and limits of our setup, the impact of latency has been investigated by means of a set of stress tests, including data I/O throughput, metadata access performance evaluation and network occupancy, during the life cycle of a virtual machine. A set of resilience tests has also been performed, in order to verify the stability of the system in the event of hardware or software faults. The results of this work show that the reliability and robustness of the chosen architecture are effective enough to build a production system and to provide common services. This prototype can also be extended to multiple sites with small changes of the network topology, thus creating a national network of cloud-based distributed services in HA over WAN.

  10. GLIDE: a grid-based light-weight infrastructure for data-intensive environments

    NASA Technical Reports Server (NTRS)

    Mattmann, Chris A.; Malek, Sam; Beckman, Nels; Mikic-Rakic, Marija; Medvidovic, Nenad; Chrichton, Daniel J.

    2005-01-01

    The promise of the grid is that it will enable public access and sharing of immense amounts of computational and data resources among dynamic coalitions of individuals and institutions. However, the current grid solutions make several limiting assumptions that curtail their widespread adoption. To address these limitations, we present GLIDE, a prototype light-weight, data-intensive middleware infrastructure that enables access to the robust data and computational power of the grid on DREAM platforms.

  11. Improving National Capability in Biogeochemical Flux Modelling: the UK Environmental Virtual Observatory (EVOp)

    NASA Astrophysics Data System (ADS)

    Johnes, P.; Greene, S.; Freer, J. E.; Bloomfield, J.; Macleod, K.; Reaney, S. M.; Odoni, N. A.

    2012-12-01

    The best outcomes from watershed management arise where policy and mitigation efforts are underpinned by strong science evidence, but there are major resourcing problems associated with the scale of monitoring needed to effectively characterise the sources, rates and impacts of nutrient enrichment nationally. The challenge is to increase national capability in predictive modelling of nutrient flux to waters, securing an effective mechanism for transferring knowledge and management tools from data-rich to data-poor regions. The inadequacy of existing tools and approaches to address these challenges provided the motivation for the Environmental Virtual Observatory programme (EVOp), an innovation from the UK Natural Environment Research Council (NERC). EVOp is exploring the use of a cloud-based infrastructure in catchment science, developing an exemplar to explore N and P fluxes to inland and coastal waters in the UK from grid to catchment and national scale. EVOp is bringing together for the first time national data sets, models and uncertainty analysis in cloud computing environments to explore and benchmark current predictive capability for national-scale biogeochemical modelling. The objective is to develop national biogeochemical modelling capability, capitalising on extensive national investment in the development of science understanding and modelling tools to support integrated catchment management, and supporting knowledge transfer from data-rich to data-poor regions. The AERC export coefficient model (Johnes et al., 2007) has been adapted to function within the EVOp cloud environment, and on a geoclimatic basis, using a range of high-resolution, geo-referenced digital datasets, as an initial demonstration of the enhanced national capacity for N and P flux modelling using cloud computing infrastructure. Geoclimatic regions are landscape units displaying homogenous or quasi-homogenous functional behaviour in terms of process controls on N and P cycling
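
    The export coefficient calculation itself is simple: the annual load is the sum over land-use classes of an export coefficient times the class area, plus any point-source inputs. A sketch with invented coefficients and areas (not the calibrated AERC values):

        EXPORT_COEFF_N = {"arable": 25.0, "grassland": 10.0, "woodland": 3.0}  # kg N/ha/yr (assumed)
        AREA_HA        = {"arable": 1200.0, "grassland": 800.0, "woodland": 500.0}
        POINT_SOURCES_KG = 4500.0   # e.g. sewage effluent inputs (assumed)

        load_kg = sum(EXPORT_COEFF_N[lu] * AREA_HA[lu] for lu in AREA_HA) + POINT_SOURCES_KG
        print(f"catchment N load: {load_kg / 1000:.1f} t/yr")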

  12. Demonstration of Green/Gray Infrastructure for Combined Sewer Overflow Control

    EPA Science Inventory

    This project is a major national demonstration of the integration of green and gray infrastructure for combined sewer overflow (CSO) control in a cost-effective and environmentally friendly manner. It will use Kansas City, MO, as a case example. The project will have a major in...

  13. A green infrastructure experimental site for developing and evaluating models

    EPA Science Inventory

    The Ecosystems Research Division (ERD) of the U.S. EPA’s National Exposure Research Laboratory (NERL) in Athens, GA has a 14-acre urban watershed which has become an experimental research site for green infrastructure studies. About half of the watershed is covered by pervious la...

  14. Parallel Infrastructure Modeling and Inversion Module for E4D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-10-09

    Electrical resistivity tomography (ERT) is a method of imaging the electrical conductivity of the subsurface. Electrical conductivity is a useful metric for understanding the subsurface because it is governed by geomechanical and geochemical properties that drive subsurface systems. ERT works by injecting current into the subsurface across a pair of electrodes and measuring the corresponding electrical potential response across another pair of electrodes. Many such measurements are strategically taken across an array of electrodes to produce an ERT data set. These data are then processed through a computationally demanding process known as inversion to produce an image of the subsurface conductivity structure that gave rise to the measurements. Data can be inverted to provide 2D images, 3D images, or, in the case of time-lapse 3D imaging, 4D images. ERT is generally not well suited for environments with buried electrically conductive infrastructure such as pipes, tanks, or well casings, because these features tend to dominate and degrade ERT images. This reduces or eliminates the utility of ERT imaging where it would otherwise be highly useful: for example, imaging fluid migration from leaking pipes, imaging soil contamination beneath leaking subsurface tanks, and monitoring contaminant migration in locations with a dense network of metal-cased monitoring wells. The location and dimensions of buried metallic infrastructure are often known. If so, the effects of the infrastructure can be explicitly modeled within the ERT imaging algorithm, and thereby removed from the corresponding ERT image. However, there are a number of obstacles limiting this application. 1) Metallic infrastructure cannot be accurately modeled with standard codes because of the large contrast in conductivity between the metal and the host material. 2) Modeling infrastructure in true dimension requires the computational mesh to be highly refined near the metal inclusions, which increases
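
    The measurement relation at the heart of ERT is compact: for a Wenner array with electrode spacing a, apparent resistivity is rho_a = 2*pi*a*(dV/I). A sketch with invented survey numbers (this is the standard textbook relation, not the E4D code):

        import math

        a  = 5.0    # electrode spacing, m (assumed)
        I  = 0.5    # injected current, A (assumed)
        dV = 0.8    # measured potential difference, V (assumed)

        rho_a = 2 * math.pi * a * dV / I   # Wenner-array apparent resistivity
        print(f"apparent resistivity: {rho_a:.1f} ohm-m")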

  15. Transportation security research : coordination needed in selecting and implementing infrastructure vulnerability assessments

    DOT National Transportation Integrated Search

    2003-05-01

    The Department of Transportation's (DOT) Research and Special Programs Administration (RSPA) began research to assess the vulnerabilities of the nation's transportation infrastructure and develop needed improvements in security in June 2001. The g...

  16. Infrastructure and the Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Dowler, P.; Gaudet, S.; Schade, D.

    2011-07-01

    The modern data center is faced with architectural and software engineering challenges that grow along with the challenges facing observatories: massive data flow, distributed computing environments, and distributed teams collaborating on large and small projects. By using VO standards as key components of the infrastructure, projects can take advantage of a decade of intellectual investment by the IVOA community. By their nature, these standards are proven and tested designs that already exist. Adopting VO standards saves considerable design effort, allows projects to take advantage of open-source software and test suites to speed development, and enables the use of third party tools that understand the VO protocols. The evolving CADC architecture now makes heavy use of VO standards. We show examples of how these standards may be used directly, coupled with non-VO standards, or extended with custom capabilities to solve real problems and provide value to our users. In the end, we use VO services as major parts of the core infrastructure to reduce cost rather than as an extra layer with additional cost and we can deliver more general purpose and robust services to our user community.
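
    What "third party tools that understand the VO protocols" means in practice can be sketched with pyvo against a TAP service (the endpoint URL is a placeholder; the ObsCore table and column names are IVOA standards):

        import pyvo

        tap = pyvo.dal.TAPService("https://example.org/tap")   # placeholder endpoint
        result = tap.search("SELECT TOP 5 obs_id, access_url FROM ivoa.obscore")
        for row in result:
            print(row["obs_id"], row["access_url"])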

  17. Swiss Experiment: Design, implementation and use of a cross-disciplinary infrastructure for data intensive science

    NASA Astrophysics Data System (ADS)

    Dawes, N.; Salehi, A.; Clifton, A.; Bavay, M.; Aberer, K.; Parlange, M. B.; Lehning, M.

    2010-12-01

    It has long been known that environmental processes are cross-disciplinary, but data has continued to be acquired and held for a single purpose. Swiss Experiment is a rapidly evolving cross-disciplinary, distributed sensor data infrastructure, where tools for the environmental science community stem directly from computer science research. The platform uses the bleeding edge of computer science to acquire, store and distribute data and metadata from all environmental science disciplines at a variety of temporal and spatial resolutions. SwissEx is simultaneously developing new technologies to allow low cost, high spatial and temporal resolution measurements such that small areas can be intensely monitored. This data is then combined with existing widespread, low density measurements in the cross-disciplinary platform to provide well documented datasets, which are of use to multiple research disciplines. We present a flexible, generic infrastructure at an advanced stage of development. The infrastructure makes the most of Web 2.0 technologies for a collaborative working environment and as a user interface for a metadata database. This environment is already closely integrated with GSN, an open-source database middleware developed under Swiss Experiment for acquisition and storage of generic time-series data (2D and 3D). GSN can be queried directly by common data processing packages and makes data available in real-time to models and 3rd party software interfaces via its web service interface. It also provides real-time push or pull data exchange between instances, a user management system which leaves data owners in charge of their data, advanced real-time processing and much more. The SwissEx interface is increasingly gaining users and supporting environmental science in Switzerland. It is also an integral part of environmental education projects ClimAtscope and O3E, where the technologies can provide rapid feedback of results for children of all ages and where the

  18. Increasing the impact of medical image computing using community-based open-access hackathons: The NA-MIC and 3D Slicer experience.

    PubMed

    Kapur, Tina; Pieper, Steve; Fedorov, Andriy; Fillion-Robin, J-C; Halle, Michael; O'Donnell, Lauren; Lasso, Andras; Ungi, Tamas; Pinter, Csaba; Finet, Julien; Pujol, Sonia; Jagadeesan, Jayender; Tokuda, Junichi; Norton, Isaiah; Estepar, Raul San Jose; Gering, David; Aerts, Hugo J W L; Jakab, Marianna; Hata, Nobuhiko; Ibanez, Luiz; Blezek, Daniel; Miller, Jim; Aylward, Stephen; Grimson, W Eric L; Fichtinger, Gabor; Wells, William M; Lorensen, William E; Schroeder, Will; Kikinis, Ron

    2016-10-01

    The National Alliance for Medical Image Computing (NA-MIC) was launched in 2004 with the goal of investigating and developing an open source software infrastructure for the extraction of information and knowledge from medical images using computational methods. Several leading research and engineering groups participated in this effort that was funded by the US National Institutes of Health through a variety of infrastructure grants. This effort transformed 3D Slicer from an internal, Boston-based, academic research software application into a professionally maintained, robust, open source platform with an international leadership and developer and user communities. Critical improvements to the widely used underlying open source libraries and tools (VTK, ITK, CMake, CDash, DCMTK) were an additional consequence of this effort. This project has contributed to close to a thousand peer-reviewed publications and a growing portfolio of US and international funded efforts expanding the use of these tools in new medical computing applications every year. In this editorial, we discuss what we believe are gaps in the way medical image computing is pursued today; how a well-executed research platform can enable discovery, innovation and reproducible science ("Open Science"); and how our quest to build such a software platform has evolved into a productive and rewarding social engineering exercise in building an open-access community with a shared vision. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Deploying the integrated metropolitan intelligent transportation systems (ITS) infrastructure : FY 2003 report

    DOT National Transportation Integrated Search

    2003-01-01

    In January 1996, the Secretary of Transportation set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2005. Using data from surveys administered...

  20. Deploying the integrated metropolitan intelligent transportation systems (ITS) infrastructure : FY 2004 report

    DOT National Transportation Integrated Search

    2005-07-01

    In January 1996, the Secretary of Transportation set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2005. Using data from surveys administered...