DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlos H. Rentel
2007-03-31
Eaton, in partnership with Oak Ridge National Laboratory and the Electric Power Research Institute (EPRI), has completed a project that applies a combination of wireless sensor network (WSN) technology, anticipatory theory, and a near-term value proposition based on diagnostics and process uptime to ensure the security and reliability of critical electrical power infrastructure. Representatives of several Eaton business units were engaged to ensure a viable commercialization plan. Tennessee Valley Authority (TVA), American Electric Power (AEP), PEPCO, and Commonwealth Edison were recruited as partners to confirm and refine the requirements definition from the perspective of the utilities that actually operate the facilities to be protected. Those utilities cooperated with on-site field tests as the project proceeded. Accomplishments of this project included: (1) the design, modeling, and simulation of the anticipatory wireless sensor network (A-WSN) used to gather field information for the anticipatory application; (2) the design and implementation of hardware and software prototypes for laboratory and field experimentation; (3) stack and application integration; (4) development of an installation and test plan; and (5) refinement of the commercialization plan.
Parallel, Gradient-Based Anisotropic Mesh Adaptation for Re-entry Vehicle Configurations
NASA Technical Reports Server (NTRS)
Bibb, Karen L.; Gnoffo, Peter A.; Park, Michael A.; Jones, William T.
2006-01-01
Two gradient-based adaptation methodologies have been implemented in the Fun3d/refine/GridEx infrastructure. A spring-analogy adaptation, which provides for nodal movement to cluster mesh nodes in the vicinity of strong shocks, has been extended for general use within Fun3d and is demonstrated for a 70° sphere-cone at Mach 2. A more general feature-based adaptation metric has been developed for use with the adaptation mechanics available in Fun3d, and is applicable to any unstructured, tetrahedral flow solver. The basic functionality of general adaptation is explored through a case of flow over the forebody of a 70° sphere-cone at Mach 6. A practical application of Mach 10 flow over an Apollo capsule, computed with the Felisa flow solver, is given to compare adaptive mesh refinement with uniform mesh refinement. The examples in the paper demonstrate that the gradient-based adaptation capability, as implemented, can improve solution quality.
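A minimal sketch of the kind of gradient-based flagging such a feature-based metric implies: cells whose local solution gradient exceeds a threshold are marked for refinement. All names, the neighbor-difference gradient estimate, and the threshold are illustrative assumptions, not the paper's actual metric.

```python
import numpy as np

def refinement_indicator(cell_centers, values, neighbors, threshold):
    """Flag cells whose estimated solution-gradient magnitude exceeds a threshold.

    cell_centers : (n, 3) array of cell-center coordinates
    values       : (n,) array of a scalar flow quantity (e.g. density)
    neighbors    : list of neighbor-index lists, one per cell
    threshold    : flag a cell when its gradient estimate exceeds this value
    """
    flags = np.zeros(len(values), dtype=bool)
    for i, nbrs in enumerate(neighbors):
        grads = []
        for j in nbrs:
            d = np.linalg.norm(cell_centers[j] - cell_centers[i])
            if d > 0.0:
                # one-sided difference toward each neighbor as a gradient proxy
                grads.append(abs(values[j] - values[i]) / d)
        if grads and max(grads) > threshold:
            flags[i] = True
    return flags
```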
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wall, Thomas; Trail, Jessica; Gevondyan, Erna
During times of crisis, communities and regions rely heavily on critical infrastructure systems to support their emergency management response and recovery activities. The resilience of critical infrastructure systems to crises is therefore a pivotal factor in a community's overall resilience. Critical infrastructure resilience can be influenced by many factors, including State policies, which are not always uniform in structure or application across the United States and which the U.S. Department of Homeland Security has identified as an area of particular interest with respect to their influence on the resilience of critical infrastructure systems. This study focuses on developing an analytical methodology to assess links between policy and resilience, and applies that methodology to critical infrastructure in the Transportation Systems Sector. Specifically, it seeks to identify potentially influential linkages between State transportation capital funding policies and the resilience of bridges located on roadways under the management of public agencies. The study yielded notable methodological outcomes, including the general capability of the analytical methodology to yield, in the case of some States, significant results connecting State policies with critical infrastructure resilience, with the suggestion that further refinement of the methodology may be beneficial.
Green Infrastructure Models and Tools
The objective of this project is to modify and refine existing models and develop new tools to support decision making for the complete green infrastructure (GI) project lifecycle, including the planning and implementation of stormwater control in urban and agricultural settings,...
Implications of Increasing Light Tight Oil Production for U.S. Refining
2015-01-01
EIA retained Turner, Mason & Company to provide analysis of the implications of increasing domestic light tight oil production for U.S. refining, focusing on regional crude supply/demand balances, refinery crude slates, operations, capital investment, product yields, crude oil exports/imports, petroleum product exports, infrastructure constraints and expansions, and crude oil price relationships.
40 CFR 80.1340 - How does a refiner obtain approval as a small refiner?
Code of Federal Regulations, 2010 CFR
2010-07-01
... Provisions § 80.1340 How does a refiner obtain approval as a small refiner? (a) Applications for small refiner status must be submitted to EPA by December 31, 2007. (b) For U.S. Postal delivery, applications... small refiner status application must contain the following information for the company seeking small...
Midwest Regional Rail System : a transportation network for the 21st century
DOT National Transportation Integrated Search
2000-02-01
This report includes an assessment of and refinements to the Midwest Regional Rail System Plan published in August 1998. The report addresses an extensive range of issues, including infrastructure and operational requirements, level of travel market ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herrnstein, Aaron R.
An ocean model with adaptive mesh refinement (AMR) capability is presented for simulating ocean circulation on decade time scales. The model closely resembles the LLNL ocean general circulation model, with some components incorporated from other well-known ocean models when appropriate. Spatial components are discretized using finite differences on a staggered grid where tracer and pressure variables are defined at cell centers and velocities at cell vertices (B-grid). Horizontal motion is modeled explicitly with leapfrog and Euler forward-backward time integration, and vertical motion is modeled semi-implicitly. New AMR strategies are presented for horizontal refinement on a B-grid, leapfrog time integration, and time integration of coupled systems with unequal time steps. These AMR capabilities are added to the LLNL software package SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) and validated with standard benchmark tests. The ocean model is built on top of the amended SAMRAI library. The resulting model has the capability to dynamically increase resolution in localized areas of the domain. Limited basin tests are conducted using various refinement criteria and produce convergence trends in the model solution as refinement is increased. Carbon sequestration simulations are performed on decade time scales in domains the size of the North Atlantic and the global ocean. A suggestion is given for refinement criteria in such simulations. AMR predicts maximum pH changes and increases in CO2 concentration near the injection sites that are virtually unattainable with a uniform high resolution due to extremely long run times. Fine-scale details near the injection sites are achieved by AMR with shorter run times than the finest uniform resolution tested, despite the need for enhanced parallel performance. The North Atlantic simulations show a reduction in passive tracer errors when AMR is applied instead of a uniform coarse resolution. No dramatic or persistent signs of error growth in the passive tracer outgassing or the ocean circulation are observed to result from AMR.
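To illustrate the explicit leapfrog time integration mentioned above, here is a minimal sketch for 1-D linear advection with periodic boundaries. The actual model works on a 3-D staggered B-grid; the grid size, wave speed, and step counts below are illustrative assumptions only.

```python
import numpy as np

def leapfrog_advection(u0, c, dx, dt, nsteps):
    """Advance du/dt + c du/dx = 0 using centered differences in space
    and leapfrog in time (a forward Euler step bootstraps the scheme)."""
    u_prev = u0.copy()
    # first step: centered-space forward Euler, since leapfrog needs two levels
    u_curr = u0 - c * dt / (2 * dx) * (np.roll(u0, -1) - np.roll(u0, 1))
    for _ in range(nsteps - 1):
        # leapfrog: u^{n+1} = u^{n-1} - (c*dt/dx) * (u_{i+1}^n - u_{i-1}^n)
        u_next = u_prev - c * dt / dx * (np.roll(u_curr, -1) - np.roll(u_curr, 1))
        u_prev, u_curr = u_curr, u_next
    return u_curr

n = 128
u = leapfrog_advection(np.sin(np.linspace(0, 2 * np.pi, n, endpoint=False)),
                       c=1.0, dx=2 * np.pi / n, dt=0.01, nsteps=100)
```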
Kanarska, Yuliya; Walton, Otis
2015-11-30
Fluid-granular flows are common phenomena in nature and industry. Here, an efficient computational technique based on the distributed Lagrange multiplier method is utilized to simulate complex fluid-granular flows. Each particle is explicitly resolved on an Eulerian grid as a separate domain, using solid volume fractions. The fluid equations are solved through the entire computational domain; however, Lagrange multiplier constraints are applied inside the particle domain such that the fluid within any volume associated with a solid particle moves as an incompressible rigid body. The particle-particle interactions are implemented using explicit force-displacement interactions for frictional inelastic particles, similar to the DEM method, with some modifications using the volume of the overlapping region as an input to the contact forces. Here, a parallel implementation of the method is based on the SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) library.
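A hedged sketch of a DEM-style normal contact force between two spheres, in the spirit of the force-displacement interactions described above. Note the paper uses overlap volume as input; this toy version uses linear overlap depth, and the stiffness and damping constants are illustrative assumptions.

```python
import numpy as np

def contact_force(x1, x2, r1, r2, v1, v2, k=1.0e4, damping=5.0):
    """Linear spring-dashpot normal force on particle 1 from particle 2."""
    d = x1 - x2
    dist = np.linalg.norm(d)
    overlap = (r1 + r2) - dist          # geometric overlap depth
    if overlap <= 0.0 or dist == 0.0:
        return np.zeros(3)              # not in contact
    n = d / dist                        # unit normal pointing from 2 to 1
    vn = np.dot(v1 - v2, n)             # normal relative velocity (dashpot term)
    return (k * overlap - damping * vn) * n
```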
Engineering uses of physics-based ground motion simulations
Baker, Jack W.; Luco, Nicolas; Abrahamson, Norman A.; Graves, Robert W.; Maechling, Phillip J.; Olsen, Kim B.
2014-01-01
This paper summarizes validation methodologies focused on enabling ground motion simulations to be used with confidence in engineering applications such as seismic hazard analysis and dynamic analysis of structural and geotechnical systems. Numerical simulation of ground motion from large earthquakes, utilizing physics-based models of earthquake rupture and wave propagation, is an area of active research in the earth science community. Refinement and validation of these models require collaboration between earthquake scientists and engineering users, as well as testing/rating methodologies for simulated ground motions to be used with confidence in engineering applications. This paper provides an introduction to this field and an overview of current research activities being coordinated by the Southern California Earthquake Center (SCEC). These activities relate both to advancing the science and computational infrastructure needed to produce ground motion simulations and to engineering validation procedures. Current research areas and anticipated future achievements are also discussed.
First use of LHC Run 3 Conditions Database infrastructure for auxiliary data files in ATLAS
NASA Astrophysics Data System (ADS)
Aperio Bella, L.; Barberis, D.; Buttinger, W.; Formica, A.; Gallas, E. J.; Rinaldi, L.; Rybkin, G.; ATLAS Collaboration
2017-10-01
Processing of the large amount of data produced by the ATLAS experiment requires fast and reliable access to what we call Auxiliary Data Files (ADF). These files, produced by the Combined Performance, Trigger and Physics groups, contain conditions, calibrations, and other derived data used by the ATLAS software. In ATLAS these data have thus far, for historical reasons, been collected and accessed outside the ATLAS Conditions Database infrastructure and related software. For this reason, along with the fact that ADF are effectively read by the software as binary objects, this class of data appears ideal for testing the proposed Run 3 conditions data infrastructure now in development. This paper describes this implementation, as well as the lessons learned in exploring and refining the new infrastructure, with the potential for deployment during Run 2.
A comprehensive typology for mainstreaming urban green infrastructure
NASA Astrophysics Data System (ADS)
Young, Robert; Zanders, Julie; Lieberknecht, Katherine; Fassman-Beck, Elizabeth
2014-11-01
During a National Science Foundation (US) funded "International Greening of Cities Workshop" in Auckland, New Zealand, participants agreed that an effective urban green infrastructure (GI) typology should identify cities' present stage of GI development and map next steps to mainstream GI as a component of urban infrastructure. Our review reveals that current GI typologies do not systematically identify such opportunities. We address this knowledge gap by developing a new typology incorporating the political, economic, and ecological forces shaping GI implementation. Applying this information allows symmetrical, place-based exploration of the social and ecological elements driving a city's GI systems. We use this information to distinguish current levels of GI development and clarify intervention opportunities to advance GI into the mainstream of metropolitan infrastructure. We employ three case studies (San Antonio, Texas; Auckland, New Zealand; and New York, New York) to test and refine our typology.
Initial Assessment of U.S. Refineries for Purposes of Potential Bio-Based Oil Insertions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freeman, Charles J.; Jones, Susanne B.; Padmaperuma, Asanga B.
2013-04-01
In order to meet U.S. biofuel objectives over the coming decade, the conversion of a broad range of biomass feedstocks, using diverse processing options, will be required. Further, the production of both gasoline and diesel biofuels will employ biomass conversion methods that produce wide-boiling-range intermediate oils requiring treatment similar to conventional refining processes (i.e., fluid catalytic cracking, hydrocracking, and hydrotreating). As such, it is widely recognized that leveraging existing U.S. petroleum refining infrastructure is key to reducing overall capital demands. This study examines how existing U.S. refinery locations, capacities, and conversion capabilities match, in geography and processing capability, the needs projected from anticipated biofuels production.
Bridge-in-a-Backpack(TM). Task 2 : reduction of costs through design modifications and optimization.
DOT National Transportation Integrated Search
2011-09-01
The cost effective use of FRP composites in infrastructure requires the efficient use of the : composite materials in the design. Previous work during the development phase and : demonstration phase illustrated the need to refine the design methods f...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ellis, R.J.; Arguile, R.; Bocca, P.L.
1986-01-01
An assessment was made of the energy consumption and oil-combustion-related sulfur emissions in the period 1980-2000 for the EEC-10 countries. The possibility of further sulfur emissions reduction and its effects on cost and refining infrastructure are discussed.
Using Adaptive Mesh Refinement to Simulate Storm Surge
NASA Astrophysics Data System (ADS)
Mandli, K. T.; Dawson, C.
2012-12-01
Coastal hazards related to strong storms such as hurricanes and typhoons are among the most frequently recurring and widespread hazards to coastal communities. Storm surges are among the most devastating effects of these storms, and their prediction and mitigation through numerical simulations is of great interest to coastal communities that need to plan for the rise in sea level during these storms. Unfortunately, these simulations require a large amount of resolution in regions of interest to capture relevant effects, resulting in a computational cost that may be intractable. This problem is exacerbated in situations where a large number of similar runs is needed, such as in the design of infrastructure or in forecasting with ensembles of probable storms. One solution to the problem of computational cost is to employ adaptive mesh refinement (AMR) algorithms. AMR functions by decomposing the computational domain into regions that may vary in resolution as time proceeds. Decomposing the domain as the flow evolves makes this class of methods effective at ensuring that computational effort is spent only where it is needed. AMR also allows for placement of computational resolution independent of user interaction and expectation of the dynamics of the flow, as well as in particular regions of interest such as harbors. Simulations of many different applications have been made possible only by AMR-type algorithms, which have allowed otherwise impractical simulations to be performed for much less computational expense. Our work involves studying how storm surge simulations can be improved with AMR algorithms. We have implemented relevant storm surge physics in the GeoClaw package and tested how Hurricane Ike's surge into Galveston Bay and up the Houston Ship Channel compares to available tide gauge data. We will also discuss issues dealing with refinement criteria, optimal resolution and refinement ratios, and inundation.
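An illustrative sketch of a simple surge-aware refinement criterion in the spirit of the flagging GeoClaw performs: refine where the sea surface deviates from the ambient tide, or inside a user-specified region of interest such as a harbor. The function names, tolerance, and region format are assumptions for illustration, not GeoClaw's actual API.

```python
import numpy as np

def flag_cells(eta, sea_level, x, y, tol, region=None):
    """Return a boolean array marking cells to refine.

    eta       : (ny, nx) sea-surface elevation
    sea_level : ambient (tidal) sea level
    tol       : refine where |eta - sea_level| exceeds this
    region    : optional (xmin, xmax, ymin, ymax) always-refine box
    """
    flags = np.abs(eta - sea_level) > tol        # flag active surge
    if region is not None:
        xmin, xmax, ymin, ymax = region
        X, Y = np.meshgrid(x, y)                 # shapes match eta
        flags |= (X >= xmin) & (X <= xmax) & (Y >= ymin) & (Y <= ymax)
    return flags
```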
Cellular-based preemption system
NASA Technical Reports Server (NTRS)
Bachelder, Aaron D. (Inventor)
2011-01-01
A cellular-based preemption system that uses existing cellular infrastructure to transmit preemption-related data to allow safe passage of emergency vehicles through one or more intersections. A cellular unit in an emergency vehicle generates position reports that are transmitted to the one or more intersections during an emergency response. Based on this position data, the one or more intersections calculate an estimated time of arrival (ETA) of the emergency vehicle and transmit preemption commands to traffic signals at the intersections based on the calculated ETA. Additional techniques may be used for refining the position reports, ETA calculations, and the like. Such techniques include, without limitation, statistical preemption, map-matching, dead-reckoning, augmented navigation, and/or preemption optimization techniques, all of which are described in further detail in the above-referenced patent applications.
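A hypothetical sketch of the kind of ETA estimate an intersection could compute from successive position reports, using straight-line dead reckoning between the two most recent reports. The patent describes its algorithms only at a high level, so every name and the minimum-speed guard below are assumptions.

```python
import math

def eta_seconds(reports, intersection, min_speed=1.0):
    """Estimate arrival time from the two most recent position reports.

    reports      : list of (timestamp, x, y) tuples, most recent last
    intersection : (x, y) position of the intersection
    """
    (t0, x0, y0), (t1, x1, y1) = reports[-2], reports[-1]
    dt = t1 - t0
    speed = math.hypot(x1 - x0, y1 - y0) / dt if dt > 0 else 0.0
    speed = max(speed, min_speed)   # avoid divide-by-near-zero when stopped
    dist = math.hypot(intersection[0] - x1, intersection[1] - y1)
    return dist / speed
```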
A semi-automated workflow for biodiversity data retrieval, cleaning, and quality control
Mathew, Cherian; Obst, Matthias; Vicario, Saverio; Haines, Robert; Williams, Alan R.; de Jong, Yde; Goble, Carole
2014-01-01
The compilation and cleaning of data needed for analyses and prediction of species distributions is a time-consuming process requiring a solid understanding of data formats and service APIs provided by biodiversity informatics infrastructures. We designed and implemented a Taverna-based Data Refinement Workflow which integrates taxonomic data retrieval, data cleaning, and data selection into a consistent, standards-based, and effective system hiding the complexity of underlying service infrastructures. The workflow can be freely used both locally and through a web-portal which does not require additional software installations by users. PMID:25535486
NASA Astrophysics Data System (ADS)
Huang, T.; Alarcon, C.; Quach, N. T.
2014-12-01
Capture, curation, and analysis are the typical activities performed at any given Earth Science data center. Modern data management systems must be adaptable to heterogeneous science data formats, scalable to meet the mission's quality-of-service requirements, and able to manage the life cycle of any given science data product. Designing a scalable data management system doesn't happen overnight. It takes countless hours of refining, refactoring, retesting, and re-architecting. The Horizon data management and workflow framework, developed at the Jet Propulsion Laboratory, is a portable, scalable, and reusable framework for developing high-performance data management and product generation workflow systems to automate data capturing, data curation, and data analysis activities. The NASA Physical Oceanography Distributed Active Archive Center (PO.DAAC)'s Data Management and Archive System (DMAS) is its core data infrastructure, handling the capture and distribution of hundreds of thousands of satellite observations each day, around the clock. DMAS is an application of the Horizon framework. The NASA Global Imagery Browse Services (GIBS) is NASA's Earth Observing System Data and Information System (EOSDIS) solution for making high-resolution global imagery available to the science communities. The Imagery Exchange (TIE), an application of the Horizon framework, is a core subsystem for GIBS responsible for automating data capture and imagery generation to support the EOSDIS's 12 distributed active archive centers and 17 Science Investigator-led Processing Systems (SIPS). This presentation discusses our ongoing effort in refining, refactoring, retesting, and re-architecting the Horizon framework to enable data-intensive science and its applications.
DOT National Transportation Integrated Search
2015-06-01
This document serves as an Operational Concept for the Transit Traveler Information Infrastructure Mobility Application. The purpose of this document is to provide an operational description of how the Transit Traveler Information Infrastructur...
40 CFR 80.1622 - Approval for small refiner and small volume refinery status.
Code of Federal Regulations, 2014 CFR
2014-07-01
... appropriate data to correct the record when the company submits its application. (ii) Foreign small refiners... 40 Protection of Environment 17 2014-07-01 2014-07-01 false Approval for small refiner and small... Approval for small refiner and small volume refinery status. (a) Applications for small refiner or small...
40 CFR 421.50 - Applicability: Description of the primary electrolytic copper refining subcategory.
Code of Federal Regulations, 2011 CFR
2011-07-01
... primary electrolytic copper refining subcategory. 421.50 Section 421.50 Protection of Environment... POINT SOURCE CATEGORY Primary Electrolytic Copper Refining Subcategory § 421.50 Applicability: Description of the primary electrolytic copper refining subcategory. The provisions of this subpart apply to...
40 CFR 421.50 - Applicability: Description of the primary electrolytic copper refining subcategory.
Code of Federal Regulations, 2010 CFR
2010-07-01
... primary electrolytic copper refining subcategory. 421.50 Section 421.50 Protection of Environment... POINT SOURCE CATEGORY Primary Electrolytic Copper Refining Subcategory § 421.50 Applicability: Description of the primary electrolytic copper refining subcategory. The provisions of this subpart apply to...
40 CFR 409.30 - Applicability; description of the liquid cane sugar refining subcategory.
Code of Federal Regulations, 2011 CFR
2011-07-01
... liquid cane sugar refining subcategory. 409.30 Section 409.30 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Liquid Cane Sugar Refining Subcategory § 409.30 Applicability; description of the liquid cane sugar refining...
40 CFR 409.30 - Applicability; description of the liquid cane sugar refining subcategory.
Code of Federal Regulations, 2010 CFR
2010-07-01
... liquid cane sugar refining subcategory. 409.30 Section 409.30 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Liquid Cane Sugar Refining Subcategory § 409.30 Applicability; description of the liquid cane sugar refining...
40 CFR 409.30 - Applicability; description of the liquid cane sugar refining subcategory.
Code of Federal Regulations, 2014 CFR
2014-07-01
... liquid cane sugar refining subcategory. 409.30 Section 409.30 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Liquid Cane Sugar Refining Subcategory § 409.30 Applicability; description of the liquid cane sugar refining...
40 CFR 409.30 - Applicability; description of the liquid cane sugar refining subcategory.
Code of Federal Regulations, 2013 CFR
2013-07-01
... liquid cane sugar refining subcategory. 409.30 Section 409.30 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Liquid Cane Sugar Refining Subcategory § 409.30 Applicability; description of the liquid cane sugar refining...
40 CFR 409.30 - Applicability; description of the liquid cane sugar refining subcategory.
Code of Federal Regulations, 2012 CFR
2012-07-01
... liquid cane sugar refining subcategory. 409.30 Section 409.30 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Liquid Cane Sugar Refining Subcategory § 409.30 Applicability; description of the liquid cane sugar refining...
40 CFR 80.551 - How does a refiner obtain approval as a small refiner under this subpart?
Code of Federal Regulations, 2010 CFR
2010-07-01
...) Applications for motor vehicle diesel fuel small refiner status must be submitted to EPA by December 31, 2001. (ii) Applications for NRLM diesel fuel small refiner status must be submitted to EPA by December 31, 2004. (2)(i) In the case of a refiner who acquires or reactivates a refinery that was shutdown or non...
Monitoring Structure and Regional-Level Displacements for Lisbon Using Multitemporal InSAR Techniques
NASA Astrophysics Data System (ADS)
Roque, Dora; Perissin, Daniele; Falcao, Ana Paula; Fonseca, Ana Maria; Henriques, Maria Joao
2015-05-01
The city of Lisbon is the capital of Portugal and has been devastated by catastrophic events in the past, such as earthquakes and tsunamis. This study provides a regional analysis of displacements for the city and its neighbourhoods between 2008 and 2010, through the application of multitemporal InSAR techniques to Envisat ASAR images. Smaller areas with identified problems were subjected to more refined processing. In addition, the behaviour of some key infrastructures, such as important buildings or railways, was carefully analysed in order to evaluate their safety. Subsidence was detected both regionally and in the smaller areas; the highest subsidence rates were found on industrial parks or on landfills close to the river. Seasonal trends were found for the small areas, mainly related to thermal expansion of structures or variations in underground water.
The Other Infrastructure: Distance Education's Digital Plant.
ERIC Educational Resources Information Center
Boettcher, Judith V.; Kumar, M. S. Vijay
2000-01-01
Suggests a new infrastructure--the digital plant--for supporting flexible Web campus environments. Describes four categories which make up the infrastructure: personal communication tools and applications; network of networks for the Web campus; dedicated servers and software applications; software applications and services from external…
An object-oriented approach for parallel self adaptive mesh refinement on block structured grids
NASA Technical Reports Server (NTRS)
Lemke, Max; Witsch, Kristian; Quinlan, Daniel
1993-01-01
Self-adaptive mesh refinement dynamically matches the computational demands of a solver for partial differential equations to the activity in the application's domain. In this paper we present two C++ class libraries, P++ and AMR++, which significantly simplify the development of sophisticated adaptive mesh refinement codes on (massively) parallel distributed memory architectures. The development is based on our previous research in this area. The C++ class libraries provide abstractions to separate the issues of developing parallel adaptive mesh refinement applications into those of parallelism, abstracted by P++, and adaptive mesh refinement, abstracted by AMR++. P++ is a parallel array class library permitting efficient development of architecture-independent codes for structured grid applications, and AMR++ provides support for self-adaptive mesh refinement on block-structured grids of rectangular non-overlapping blocks. Using these libraries, the application programmer's work is largely reduced to specifying the serial single-grid application; the parallel and self-adaptive mesh refinement code is then obtained with minimal effort. Initial results for simple singular perturbation problems solved by self-adaptive multilevel techniques (FAC, AFAC), implemented on the basis of prototypes of the P++/AMR++ environment, are presented. Singular perturbation problems frequently arise in large applications, e.g. in the area of computational fluid dynamics. They usually have solutions with layers which require adaptive mesh refinement and fast basic solvers in order to be resolved efficiently.
Code of Federal Regulations, 2010 CFR
2010-07-01
... requirements for gasoline toxics compliance applicable to refiners and importers? 80.1035 Section 80.1035... FUELS AND FUEL ADDITIVES Gasoline Toxics Attest Engagements § 80.1035 What are the attest engagement requirements for gasoline toxics compliance applicable to refiners and importers? In addition to the...
Code of Federal Regulations, 2010 CFR
2010-07-01
... requirements for gasoline sulfur compliance applicable to refiners and importers? 80.415 Section 80.415... FUELS AND FUEL ADDITIVES Gasoline Sulfur Attest Engagements § 80.415 What are the attest engagement requirements for gasoline sulfur compliance applicable to refiners and importers? In addition to the...
Code of Federal Regulations, 2012 CFR
2012-07-01
... requirements for gasoline sulfur compliance applicable to refiners and importers? 80.415 Section 80.415... FUELS AND FUEL ADDITIVES Gasoline Sulfur Attest Engagements § 80.415 What are the attest engagement requirements for gasoline sulfur compliance applicable to refiners and importers? In addition to the...
Code of Federal Regulations, 2012 CFR
2012-07-01
... requirements for gasoline toxics compliance applicable to refiners and importers? 80.1035 Section 80.1035... FUELS AND FUEL ADDITIVES Gasoline Toxics Attest Engagements § 80.1035 What are the attest engagement requirements for gasoline toxics compliance applicable to refiners and importers? In addition to the...
Code of Federal Regulations, 2011 CFR
2011-07-01
... requirements for gasoline sulfur compliance applicable to refiners and importers? 80.415 Section 80.415... FUELS AND FUEL ADDITIVES Gasoline Sulfur Attest Engagements § 80.415 What are the attest engagement requirements for gasoline sulfur compliance applicable to refiners and importers? In addition to the...
Code of Federal Regulations, 2013 CFR
2013-07-01
... requirements for gasoline sulfur compliance applicable to refiners and importers? 80.415 Section 80.415... FUELS AND FUEL ADDITIVES Gasoline Sulfur Attest Engagements § 80.415 What are the attest engagement requirements for gasoline sulfur compliance applicable to refiners and importers? In addition to the...
Code of Federal Regulations, 2014 CFR
2014-07-01
... requirements for gasoline sulfur compliance applicable to refiners and importers? 80.415 Section 80.415... FUELS AND FUEL ADDITIVES Gasoline Sulfur Attest Engagements § 80.415 What are the attest engagement requirements for gasoline sulfur compliance applicable to refiners and importers? In addition to the...
Code of Federal Regulations, 2014 CFR
2014-07-01
... requirements for gasoline toxics compliance applicable to refiners and importers? 80.1035 Section 80.1035... FUELS AND FUEL ADDITIVES Gasoline Toxics Attest Engagements § 80.1035 What are the attest engagement requirements for gasoline toxics compliance applicable to refiners and importers? In addition to the...
Code of Federal Regulations, 2011 CFR
2011-07-01
... requirements for gasoline toxics compliance applicable to refiners and importers? 80.1035 Section 80.1035... FUELS AND FUEL ADDITIVES Gasoline Toxics Attest Engagements § 80.1035 What are the attest engagement requirements for gasoline toxics compliance applicable to refiners and importers? In addition to the...
Code of Federal Regulations, 2013 CFR
2013-07-01
... requirements for gasoline toxics compliance applicable to refiners and importers? 80.1035 Section 80.1035... FUELS AND FUEL ADDITIVES Gasoline Toxics Attest Engagements § 80.1035 What are the attest engagement requirements for gasoline toxics compliance applicable to refiners and importers? In addition to the...
Managing a tier-2 computer centre with a private cloud infrastructure
NASA Astrophysics Data System (ADS)
Bagnasco, Stefano; Berzano, Dario; Brunetti, Riccardo; Lusso, Stefano; Vallero, Sara
2014-06-01
In a typical scientific computing centre, several applications coexist and share a single physical infrastructure. An underlying Private Cloud infrastructure eases the management and maintenance of such heterogeneous applications (such as multipurpose or application-specific batch farms, Grid sites, interactive data analysis facilities and others), allowing dynamic allocation of resources to any application. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques. Such infrastructures are being deployed in some large centres (see e.g. the CERN Agile Infrastructure project), but with several open-source tools reaching maturity this is becoming viable also for smaller sites. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 centre, an Interactive Analysis Facility for the ALICE experiment at the CERN LHC, and several smaller scientific computing applications. The private cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem and the OpenWRT Linux distribution (used for network virtualization); a future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and OCCI.
ERIC Educational Resources Information Center
Balfanz, Robert; Mac Iver, Doug
2000-01-01
Two developers of the Talent Development Middle School model discuss 10 lessons from implementing, refining, and evaluating this model in 5 high-poverty middle schools in Philadelphia, Pennsylvania, and describe obstacles encountered and breakthroughs experienced in developing the knowledge base, materials, and infrastructure of the model. (SLD)
Harvey, Catherine; Brewster, Jill; Bakerly, Nawar Diar; Elkhenini, Hanaa F.; Stanciu, Roxana; Williams, Claire; Brereton, Jacqui; New, John P.; McCrae, John; McCorkindale, Sheila; Leather, David
2016-01-01
Abstract Background The Salford Lung Study (SLS) programme, encompassing two phase III pragmatic randomised controlled trials, was designed to generate evidence on the effectiveness of a once‐daily treatment for asthma and chronic obstructive pulmonary disease in routine primary care using electronic health records. Objective The objective of this study was to describe and discuss the safety monitoring methodology and the challenges associated with ensuring patient safety in the SLS. Refinements to safety monitoring processes and infrastructure are also discussed. The study results are outside the remit of this paper. The results of the COPD study were published recently and a more in‐depth exploration of the safety results will be the subject of future publications. Achievements The SLS used a linked database system to capture relevant data from primary care practices in Salford and South Manchester, two university hospitals and other national databases. Patient data were collated and analysed to create daily summaries that were used to alert a specialist safety team to potential safety events. Clinical research teams at participating general practitioner sites and pharmacies also captured safety events during routine consultations. Confidence in the safety monitoring processes over time allowed the methodology to be refined and streamlined without compromising patient safety or the timely collection of data. The information technology infrastructure also allowed additional details of safety information to be collected. Conclusion Integration of multiple data sources in the SLS may provide more comprehensive safety information than usually collected in standard randomised controlled trials. Application of the principles of safety monitoring methodology from the SLS could facilitate safety monitoring processes for future pragmatic randomised controlled trials and yield important complementary safety and effectiveness data. © 2016 The Authors Pharmacoepidemiology and Drug Safety Published by John Wiley & Sons Ltd. PMID:27804174
Collier, Sue; Harvey, Catherine; Brewster, Jill; Bakerly, Nawar Diar; Elkhenini, Hanaa F; Stanciu, Roxana; Williams, Claire; Brereton, Jacqui; New, John P; McCrae, John; McCorkindale, Sheila; Leather, David
2017-03-01
The Salford Lung Study (SLS) programme, encompassing two phase III pragmatic randomised controlled trials, was designed to generate evidence on the effectiveness of a once-daily treatment for asthma and chronic obstructive pulmonary disease in routine primary care using electronic health records. The objective of this study was to describe and discuss the safety monitoring methodology and the challenges associated with ensuring patient safety in the SLS. Refinements to safety monitoring processes and infrastructure are also discussed. The study results are outside the remit of this paper. The results of the COPD study were published recently and a more in-depth exploration of the safety results will be the subject of future publications. The SLS used a linked database system to capture relevant data from primary care practices in Salford and South Manchester, two university hospitals and other national databases. Patient data were collated and analysed to create daily summaries that were used to alert a specialist safety team to potential safety events. Clinical research teams at participating general practitioner sites and pharmacies also captured safety events during routine consultations. Confidence in the safety monitoring processes over time allowed the methodology to be refined and streamlined without compromising patient safety or the timely collection of data. The information technology infrastructure also allowed additional details of safety information to be collected. Integration of multiple data sources in the SLS may provide more comprehensive safety information than usually collected in standard randomised controlled trials. Application of the principles of safety monitoring methodology from the SLS could facilitate safety monitoring processes for future pragmatic randomised controlled trials and yield important complementary safety and effectiveness data. © 2016 The Authors Pharmacoepidemiology and Drug Safety Published by John Wiley & Sons Ltd.
Macklin, Paul; Cristini, Vittorio
2013-01-01
Simulating cancer behavior across multiple biological scales in space and time, i.e., multiscale cancer modeling, is increasingly being recognized as a powerful tool to refine hypotheses, focus experiments, and enable more accurate predictions. A growing number of examples illustrate the value of this approach in providing quantitative insight on the initiation, progression, and treatment of cancer. In this review, we introduce the most recent and important multiscale cancer modeling works that have successfully established a mechanistic link between different biological scales. Biophysical, biochemical, and biomechanical factors are considered in these models. We also discuss innovative, cutting-edge modeling methods that are moving predictive multiscale cancer modeling toward clinical application. Furthermore, because the development of multiscale cancer models requires a new level of collaboration among scientists from a variety of fields such as biology, medicine, physics, mathematics, engineering, and computer science, an innovative Web-based infrastructure is needed to support this growing community. PMID:21529163
Imposing a Lagrangian Particle Framework on an Eulerian Hydrodynamics Infrastructure in Flash
NASA Technical Reports Server (NTRS)
Dubey, A.; Daley, C.; ZuHone, J.; Ricker, P. M.; Weide, K.; Graziani, C.
2012-01-01
In many astrophysical simulations, both Eulerian and Lagrangian quantities are of interest. For example, in a galaxy cluster merger simulation, the intracluster gas can have Eulerian discretization, while dark matter can be modeled using particles. FLASH, a component-based scientific simulation code, superimposes a Lagrangian framework atop an adaptive mesh refinement Eulerian framework to enable such simulations. The discretization of the field variables is Eulerian, while the Lagrangian entities occur in many different forms including tracer particles, massive particles, charged particles in particle-in-cell mode, and Lagrangian markers to model fluid-structure interactions. These widely varying roles for Lagrangian entities are possible because of the highly modular, flexible, and extensible architecture of the Lagrangian framework. In this paper, we describe the Lagrangian framework in FLASH in the context of two very different applications, Type Ia supernovae and galaxy cluster mergers, which use the Lagrangian entities in fundamentally different ways.
Imposing a Lagrangian Particle Framework on an Eulerian Hydrodynamics Infrastructure in FLASH
NASA Astrophysics Data System (ADS)
Dubey, A.; Daley, C.; ZuHone, J.; Ricker, P. M.; Weide, K.; Graziani, C.
2012-08-01
In many astrophysical simulations, both Eulerian and Lagrangian quantities are of interest. For example, in a galaxy cluster merger simulation, the intracluster gas can have Eulerian discretization, while dark matter can be modeled using particles. FLASH, a component-based scientific simulation code, superimposes a Lagrangian framework atop an adaptive mesh refinement Eulerian framework to enable such simulations. The discretization of the field variables is Eulerian, while the Lagrangian entities occur in many different forms including tracer particles, massive particles, charged particles in particle-in-cell mode, and Lagrangian markers to model fluid-structure interactions. These widely varying roles for Lagrangian entities are possible because of the highly modular, flexible, and extensible architecture of the Lagrangian framework. In this paper, we describe the Lagrangian framework in FLASH in the context of two very different applications, Type Ia supernovae and galaxy cluster mergers, which use the Lagrangian entities in fundamentally different ways.
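A minimal sketch of how Lagrangian tracer particles can be advected through an Eulerian velocity field, using bilinear interpolation and a forward-Euler step. FLASH's framework is far more general (AMR meshes, several particle types), so every name here is an illustrative assumption; the sketch also assumes particles stay in the grid interior.

```python
import numpy as np

def advect_tracers(px, py, u, v, dx, dy, dt):
    """Move tracer particles (px, py) one step through cell-centered
    velocity fields u, v (indexed [j, i] = [row, column])."""
    # fractional grid index of each particle (cell centers at (i + 0.5)*dx)
    fi, fj = px / dx - 0.5, py / dy - 0.5
    i, j = np.floor(fi).astype(int), np.floor(fj).astype(int)
    wi, wj = fi - i, fj - j

    def bilin(f):
        # bilinear weights from the four surrounding cell centers
        return ((1 - wi) * (1 - wj) * f[j, i] + wi * (1 - wj) * f[j, i + 1]
                + (1 - wi) * wj * f[j + 1, i] + wi * wj * f[j + 1, i + 1])

    return px + dt * bilin(u), py + dt * bilin(v)
```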
Application of Al-2La-1B Grain Refiner to Al-10Si-0.3Mg Casting Alloy
NASA Astrophysics Data System (ADS)
Jing, Lijun; Pan, Ye; Lu, Tao; Li, Chenlin; Pi, Jinhong; Sheng, Ningyue
2018-05-01
This paper reports the application and microstructure-refining effect of an Al-2La-1B grain refiner in Al-10Si-0.3Mg casting alloy. Compared with the traditional Al-5Ti-1B refiner, the Al-2La-1B refiner shows better performance in the grain refinement of Al-10Si-0.3Mg alloy. Transmission electron microscopy analysis suggests that the crystal structure features of LaB6 are beneficial to the heterogeneous nucleation of α-Al grains. Regarding mechanical performance, the tensile properties of the Al-10Si-0.3Mg casting alloy are markedly improved, owing to the refined microstructures.
Requirements Engineering in Building Climate Science Software
NASA Astrophysics Data System (ADS)
Batcheller, Archer L.
Software has an important role in supporting scientific work. This dissertation studies teams that build scientific software, focusing on the way that they determine what the software should do. These requirements engineering processes are investigated through three case studies of climate science software projects. The Earth System Modeling Framework assists modeling applications, the Earth System Grid distributes data via a web portal, and the NCAR (National Center for Atmospheric Research) Command Language is used to convert, analyze and visualize data. Document analysis, observation, and interviews were used to investigate the requirements-related work. The first research question is about how and why stakeholders engage in a project, and what they do for the project. Two key findings arise. First, user counts are a vital measure of project success, which makes adoption important and makes counting tricky and political. Second, despite the importance of quantities of users, a few particular "power users" develop a relationship with the software developers and play a special role in providing feedback to the software team and integrating the system into user practice. The second research question focuses on how project objectives are articulated and how they are put into practice. The team seeks to both build a software system according to product requirements but also to conduct their work according to process requirements such as user support. Support provides essential communication between users and developers that assists with refining and identifying requirements for the software. It also helps users to learn and apply the software to their real needs. User support is a vital activity for scientific software teams aspiring to create infrastructure. The third research question is about how change in scientific practice and knowledge leads to changes in the software, and vice versa. The "thickness" of a layer of software infrastructure impacts whether the software team or users have control and responsibility for making changes in response to new scientific ideas. Thick infrastructure provides more functionality for users, but gives them less control of it. The stability of infrastructure trades off against the responsiveness that the infrastructure can have to user needs.
User-level framework for performance monitoring of HPC applications
NASA Astrophysics Data System (ADS)
Hristova, R.; Goranov, G.
2013-10-01
HP-SEE is an infrastructure that links the existing HPC facilities in South East Europe. Analysis of the performance monitoring of High-Performance Computing (HPC) applications in the infrastructure can be useful to the end user as a diagnostic of the overall performance of his applications. The existing monitoring tools for HP-SEE provide the end user only with aggregated information for all applications. Usually, the user does not have permission to select only the information relevant to him and his applications. In this article we present a framework for performance monitoring of the HPC applications in the HP-SEE infrastructure. The framework provides standardized performance metrics which every user can use to monitor his applications. Furthermore, as part of the framework, a program interface has been developed. The interface allows the user to publish metrics data from his application and to read and analyze the gathered information. Publishing and reading through the framework are possible only with a grid certificate valid for the infrastructure; therefore the user is authorized to access only the data for his own applications.
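A hedged sketch of the kind of per-user publish/read interface the article describes: users record standardized metrics for their own applications and can read back only their own samples. The class and method names, metric names, and in-memory storage are assumptions for illustration, not the HP-SEE framework's actual API (which also authenticates via grid certificates).

```python
import time
from collections import defaultdict

class MetricsStore:
    def __init__(self):
        # (owner, application, metric) -> list of (timestamp, value) samples
        self._data = defaultdict(list)

    def publish(self, owner, app, metric, value):
        """Record one sample of a standardized metric for one application."""
        self._data[(owner, app, metric)].append((time.time(), value))

    def read(self, owner, app, metric):
        """Return only the caller's own samples (per-user authorization)."""
        return list(self._data[(owner, app, metric)])

store = MetricsStore()
store.publish("alice", "md_sim", "wallclock_per_step_s", 0.42)
print(store.read("alice", "md_sim", "wallclock_per_step_s"))
```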
40 CFR 409.20 - Applicability; description of the crystalline cane sugar refining subcategory.
Code of Federal Regulations, 2010 CFR
2010-07-01
... crystalline cane sugar refining subcategory. 409.20 Section 409.20 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Crystalline Cane Sugar Refining Subcategory § 409.20 Applicability; description of the crystalline cane sugar...
40 CFR 409.20 - Applicability; description of the crystalline cane sugar refining subcategory.
Code of Federal Regulations, 2011 CFR
2011-07-01
... crystalline cane sugar refining subcategory. 409.20 Section 409.20 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Crystalline Cane Sugar Refining Subcategory § 409.20 Applicability; description of the crystalline cane sugar...
40 CFR 409.20 - Applicability; description of the crystalline cane sugar refining subcategory.
Code of Federal Regulations, 2013 CFR
2013-07-01
... crystalline cane sugar refining subcategory. 409.20 Section 409.20 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Crystalline Cane Sugar Refining Subcategory § 409.20 Applicability; description of the crystalline cane sugar...
40 CFR 409.20 - Applicability; description of the crystalline cane sugar refining subcategory.
Code of Federal Regulations, 2012 CFR
2012-07-01
... crystalline cane sugar refining subcategory. 409.20 Section 409.20 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Crystalline Cane Sugar Refining Subcategory § 409.20 Applicability; description of the crystalline cane sugar...
40 CFR 409.20 - Applicability; description of the crystalline cane sugar refining subcategory.
Code of Federal Regulations, 2014 CFR
2014-07-01
... crystalline cane sugar refining subcategory. 409.20 Section 409.20 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Crystalline Cane Sugar Refining Subcategory § 409.20 Applicability; description of the crystalline cane sugar...
Development of Vehicle-to-Infrastructure Applications Program Second Annual Report.
DOT National Transportation Integrated Search
2016-08-31
This report documents the work completed by the Crash Avoidance Metrics Partners LLC (CAMP) Vehicle to Infrastructure (V2I) Consortium during the second year of the Development of Vehicle-to-Infrastructure Applications (V2I) Program. Participat...
Development of vehicle-to-infrastructure applications program : first annual report.
DOT National Transportation Integrated Search
2015-08-01
This report documents the work completed by the Crash Avoidance Metrics Partners LLC (CAMP) Vehicle to Infrastructure (V2I) Consortium during the first year of the Development of Vehicle-to-Infrastructure Applications (V2I) Program. Participati...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hutchinson, R.L.; Hamilton, V.A.; Istrail, G.G.
1997-11-01
This report describes the results of a Sandia-funded laboratory-directed research and development project titled "Integrated and Robust Security Infrastructure" (IRSI). IRSI was to provide a broad range of commercial-grade security services to any software application. IRSI has two primary goals: application transparency and a manageable public key infrastructure. IRSI must provide its security services to any application without the need to modify the application to invoke the security services. Public key mechanisms are well suited for a network with many end users and systems. There are many issues that make it difficult to deploy and manage a public key infrastructure; IRSI addressed some of these issues to create a more manageable public key infrastructure.
Smarter–lighter–greener: research innovations for the automotive sector
Bhattacharyya, S. K.
2015-01-01
This paper reviews the changing nature of research underpinning the revolution in the automotive sector. Legislation controlling vehicle emissions has brought urgency to research, so we are now noticing a more rapid development of new technologies than at any time in the past century. The light-weighting of structures, the refinement of advanced propulsion systems, the advent of new smart materials, and greater in-vehicle intelligence and connectivity with transport infrastructure all require a fundamental rethink of established technologies used for many decades—defining a range of new multi-disciplinary research challenges. While meeting escalating emission penalties, cars must also fulfil the human desire for speed, reliability, beauty, refinement and elegance, qualities that mark out the truly great automobile. PMID:26345309
Bull, Fiona; Powell, Jane; Cooper, Ashley R.; Brand, Christian; Mutrie, Nanette; Preston, John; Rutter, Harry
2011-01-01
Improving infrastructure for walking and cycling is increasingly recommended as a means to promote physical activity, prevent obesity, and reduce traffic congestion and carbon emissions. However, limited evidence from intervention studies exists to support this approach. Drawing on classic epidemiological methods, psychological and ecological models of behavior change, and the principles of realistic evaluation, we have developed an applied ecological framework by which current theories about the behavioral effects of environmental change may be tested in heterogeneous and complex intervention settings. Our framework guides study design and analysis by specifying the most important data to be collected and relations to be tested to confirm or refute specific hypotheses and thereby refine the underlying theories. PMID:21233429
Optimisation of Critical Infrastructure Protection: The SiVe Project on Airport Security
NASA Astrophysics Data System (ADS)
Breiing, Marcus; Cole, Mara; D'Avanzo, John; Geiger, Gebhard; Goldner, Sascha; Kuhlmann, Andreas; Lorenz, Claudia; Papproth, Alf; Petzel, Erhard; Schwetje, Oliver
This paper outlines the scientific goals, ongoing work and first results of the SiVe research project on critical infrastructure security. The methodology is generic, while pilot studies are chosen from airport security. The outline proceeds in three major steps: (1) building a threat scenario, (2) developing simulation models as scenario refinements, and (3) assessing alternatives. Advanced techniques of systems analysis and simulation are employed to model relevant airport structures and processes as well as offences. Computer experiments are carried out to compare and optimise alternative solutions. The optimality analyses draw on approaches to quantitative risk assessment recently developed in the operational sciences. To exploit the advantages of the various techniques, an integrated simulation workbench is built in the project.
International Symposium on Grids and Clouds (ISGC) 2016
NASA Astrophysics Data System (ADS)
The International Symposium on Grids and Clouds (ISGC) 2016 will be held at Academia Sinica in Taipei, Taiwan from 13-18 March 2016, with co-located events and workshops. The conference is hosted by the Academia Sinica Grid Computing Centre (ASGC). The theme of ISGC 2016 is "Ubiquitous e-infrastructures and Applications". Contemporary research is impossible without a strong IT component - researchers rely on the existence of stable and widely available e-infrastructures and their higher-level functions and properties. As a result of these expectations, e-infrastructures are becoming ubiquitous, providing an environment that supports large-scale collaborations that deal with global challenges, as well as smaller and temporary research communities focusing on particular scientific problems. To support these diversified communities and their needs, the e-infrastructures themselves are becoming more layered and multifaceted, supporting larger groups of applications. Following on from last year's conference, ISGC 2016 continues its aim to bring together users and application developers with those responsible for the development and operation of multi-purpose ubiquitous e-infrastructures. Topics of discussion include Physics (including HEP) and Engineering Applications, Biomedicine & Life Sciences Applications, Earth & Environmental Sciences & Biodiversity Applications, Humanities, Arts, and Social Sciences (HASS) Applications, Virtual Research Environment (including Middleware, tools, services, workflow, etc.), Data Management, Big Data, Networking & Security, Infrastructure & Operations, Infrastructure Clouds and Virtualisation, Interoperability, Business Models & Sustainability, Highly Distributed Computing Systems, and High Performance & Technical Computing (HPTC), etc.
2017-03-30
...developed and their experimental evaluations for hosting DDDAS-like applications in public cloud infrastructures. Finally, we report on ongoing work towards using the DDDAS... Keywords: dynamic resource management, model learning, simulation-based optimizations, cloud infrastructures for DDDAS applications.
Wavelet-enabled progressive data Access and Storage Protocol (WASP)
NASA Astrophysics Data System (ADS)
Clyne, J.; Frank, L.; Lesperance, T.; Norton, A.
2015-12-01
Current practices for storing numerical simulation outputs hail from an era when the disparity between compute and I/O performance was not as great as it is today. The memory contents for every sample, computed at every grid point location, are simply saved at some prescribed temporal frequency. Though straightforward, this approach fails to take advantage of the coherency in neighboring grid points that invariably exists in numerical solutions to mathematical models. Exploiting such coherence is essential to digital multimedia; DVD-Video, digital cameras, streaming movies and audio are all possible today because of transform-based compression schemes that make substantial reductions in data possible by taking advantage of the strong correlation between adjacent samples in both space and time. Such methods can also be exploited to enable progressive data refinement in a manner akin to that used in ubiquitous digital mapping applications: views from far away are shown in coarsened detail to provide context, and can be progressively refined as the user zooms in on a localized region of interest. The NSF-funded WASP project aims to provide a common, NetCDF-compatible software framework for supporting wavelet-based, multi-scale, progressive data access, enabling interactive exploration of large data sets for the geoscience communities. This presentation will provide an overview of this work in progress to develop community cyber-infrastructure for the efficient analysis of very large data sets.
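A small sketch of the progressive-refinement idea using the PyWavelets library (pywt): decompose a 1-D signal, then reconstruct it at increasing levels of detail by re-introducing detail coefficient bands one scale at a time, much like zooming in a map viewer. WASP's actual NetCDF-compatible format is not shown; the signal, wavelet choice, and function names are illustrative assumptions.

```python
import numpy as np
import pywt

signal = np.sin(np.linspace(0, 8 * np.pi, 1024)) + 0.1 * np.random.randn(1024)
# coeffs = [approximation, coarsest detail band, ..., finest detail band]
coeffs = pywt.wavedec(signal, 'db4', level=4)

def reconstruct(coeffs, keep_levels):
    """Rebuild the signal from the approximation plus the coarsest
    `keep_levels` detail bands; finer bands are zeroed (not yet 'fetched')."""
    c = [coeffs[0]] + [d if k < keep_levels else np.zeros_like(d)
                       for k, d in enumerate(coeffs[1:])]
    return pywt.waverec(c, 'db4')

preview = reconstruct(coeffs, 0)   # coarse overview, as when zoomed out
full    = reconstruct(coeffs, 4)   # all detail bands, full resolution
```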
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ackerman, G; Bale, J; Moran, K
Certain types of infrastructure--critical infrastructure (CI)--play vital roles in underpinning our economy, security, and way of life. One particular type of CI--that relating to chemicals--constitutes both an important element of our nation's infrastructure and a particularly attractive set of potential targets. This is primarily because of the large quantities of toxic industrial chemicals (TICs) it employs in various operations and because of the essential economic functions it serves. This study attempts to minimize some of the ambiguities that presently impede chemical infrastructure threat assessments by providing new insight into the key motivational factors that affect terrorist organizations' propensity to attack chemical facilities. Prepared as a companion piece to the Center for Nonproliferation Studies' August 2004 study--''Assessing Terrorist Motivations for Attacking Critical Infrastructure''--it investigates three overarching research questions: (1) why do terrorists choose to attack chemical-related infrastructure over other targets; (2) what specific factors influence their target selection decisions concerning chemical facilities; and (3) which, if any, types of groups are most inclined to attack chemical infrastructure targets? The study involved a multi-pronged research design, which made use of four discrete investigative techniques to answer the above questions as comprehensively as possible. These include: (1) a review of terrorism and threat assessment literature to glean expert consensus regarding terrorist interest in targeting chemical facilities; (2) the preparation of case studies to help identify internal group factors and contextual influences that have played a significant role in leading some terrorist groups to attack chemical facilities; (3) an examination of data from the Critical Infrastructure Terrorist Incident Catalog (CrITIC) to further illuminate the nature of terrorist attacks against chemical facilities to date; and (4) the refinement of the DECIDe--the Determinants Effecting Critical Infrastructure Decisions--analytical framework to make the factors and dynamics identified by the study more ''usable'' in future efforts to assess terrorist intentions to target chemical-related infrastructure.
Code of Federal Regulations, 2013 CFR
2013-07-01
... include review and copying of any documents related to: (A) Refinery baseline establishment, if applicable...-Diesel. (b) Baseline. For any foreign refiner to obtain approval under the diesel foreign refiner program... subpart. To obtain approval the refiner is required, as applicable, to demonstrate a volume baseline under...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sentis, Manuel Lorenzo; Gable, Carl W.
There are many applications in science and engineering modeling where an accurate representation of a complex model geometry in the form of a mesh is important. In applications of flow and transport in subsurface porous media, this is manifest in models that must capture complex geologic stratigraphy, structure (faults, folds, erosion, deposition) and infrastructure (tunnels, boreholes, excavations). Model setup, defined as the activities of geometry definition, mesh generation (creation, optimization, modification, refinement, de-refinement, smoothing), and the assignment of material properties, initial conditions and boundary conditions, requires specialized software tools to automate and streamline the process. In addition, some model setup tools provide more utility if they are designed to interface with and meet the needs of a particular flow and transport software suite. A control-volume discretization that uses a two-point flux approximation is, for example, most accurate when the underlying control volumes are 2D or 3D Voronoi tessellations. In this paper we present the coupling of LaGriT, a mesh generation and model setup software suite, with TOUGH2 to model subsurface flow problems, and we show an example of how LaGriT can be used as a model setup tool to generate a Voronoi mesh for the simulation program TOUGH2. To generate the MESH file for TOUGH2 from the LaGriT output, a standalone module, Lagrit2Tough2, was developed; it is presented here and will be included in a future release of LaGriT. Thanks to the modular, command-based structure of LaGriT, the method presented here is well suited to generating meshes for complex models.
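As an illustration of the kind of geometric data a Voronoi-based control-volume mesh requires, the sketch below builds a 2D Voronoi tessellation with SciPy and computes the area of each bounded cell (the analogue of a control-volume size). It is a toy stand-in, not LaGriT or the Lagrit2Tough2 module, and it does not reproduce the actual TOUGH2 MESH file format:

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(42)
points = rng.random((50, 2))          # generator points = cell centers
vor = Voronoi(points)

def polygon_area(xy):
    # Shoelace formula for a simple polygon given as an (n, 2) array.
    x, y = xy[:, 0], xy[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

# Emit one record per bounded cell: center coordinates and cell "volume".
for i, region_index in enumerate(vor.point_region):
    region = vor.regions[region_index]
    if not region or -1 in region:    # skip cells that extend to infinity
        continue
    area = polygon_area(vor.vertices[region])
    print(f"cell {i:3d}  center=({points[i, 0]:.3f}, {points[i, 1]:.3f})  area={area:.4f}")
```

A real mesh-generation tool would additionally clip cells at the domain boundary and compute connection areas and distances between neighboring cells, which is what a TOUGH2 MESH file records.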
The conservation physiology toolbox: status and opportunities
Love, Oliver P; Hultine, Kevin R
2018-01-01
Abstract For over a century, physiological tools and techniques have been allowing researchers to characterize how organisms respond to changes in their natural environment and how they interact with human activities or infrastructure. Over time, many of these techniques have become part of the conservation physiology toolbox, which is used to monitor, predict, conserve, and restore plant and animal populations under threat. Here, we provide a summary of the tools that currently comprise the conservation physiology toolbox. By assessing patterns in articles that have been published in ‘Conservation Physiology’ over the past 5 years that focus on introducing, refining and validating tools, we provide an overview of where researchers are placing emphasis in terms of taxa and physiological sub-disciplines. Although there is certainly diversity across the toolbox, metrics of stress physiology (particularly glucocorticoids) and studies focusing on mammals have garnered the greatest attention, with both comprising the majority of publications (>45%). We also summarize the types of validations that are actively being completed, including those related to logistics (sample collection, storage and processing), interpretation of variation in physiological traits and relevance for conservation science. Finally, we provide recommendations for future tool refinement, with suggestions for: (i) improving our understanding of the applicability of glucocorticoid physiology; (ii) linking multiple physiological and non-physiological tools; (iii) establishing a framework for plant conservation physiology; (iv) assessing links between environmental disturbance, physiology and fitness; (v) appreciating opportunities for validations in under-represented taxa; and (vi) emphasizing tool validation as a core component of research programmes. Overall, we are confident that conservation physiology will continue to increase its applicability to more taxa, develop more non-invasive techniques, delineate where limitations exist, and identify the contexts necessary for interpretation in captivity and the wild. PMID:29942517
FDA's Activities Supporting Regulatory Application of "Next Gen" Sequencing Technologies.
Wilson, Carolyn A; Simonyan, Vahan
2014-01-01
Applications of next-generation sequencing (NGS) technologies require availability and access to an information technology (IT) infrastructure and bioinformatics tools for large amounts of data storage and analyses. The U.S. Food and Drug Administration (FDA) anticipates that the use of NGS data to support regulatory submissions will continue to increase as the scientific and clinical communities become more familiar with the technologies and identify more ways to apply these advanced methods to support development and evaluation of new biomedical products. FDA laboratories are conducting research on different NGS platforms and developing the IT infrastructure and bioinformatics tools needed to enable regulatory evaluation of the technologies and the data sponsors will submit. A High-performance Integrated Virtual Environment, or HIVE, has been launched, and development and refinement continues as a collaborative effort between the FDA and George Washington University to provide the tools to support these needs. The use of a highly parallelized environment facilitated by use of distributed cloud storage and computation has resulted in a platform that is both rapid and responsive to changing scientific needs. The FDA plans to further develop in-house capacity in this area, while also supporting engagement by the external community, by sponsoring an open, public workshop to discuss NGS technologies and data formats standardization, and to promote the adoption of interoperability protocols in September 2014. Next-generation sequencing (NGS) technologies are enabling breakthroughs in how the biomedical community is developing and evaluating medical products. One example is the potential application of this method to the detection and identification of microbial contaminants in biologic products. In order for the U.S. Food and Drug Administration (FDA) to be able to evaluate the utility of this technology, we need to have the information technology infrastructure and bioinformatics tools to be able to store and analyze large amounts of data. To address this need, we have developed the High-performance Integrated Virtual Environment, or HIVE. HIVE uses a combination of distributed cloud storage and distributed cloud computations to provide a platform that is both rapid and responsive to support the growing and increasingly diverse scientific and regulatory needs of FDA scientists in their evaluation of NGS in research and ultimately for evaluation of NGS data in regulatory submissions. © PDA, Inc. 2014.
Parallel Infrastructure Modeling and Inversion Module for E4D
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-10-09
Electrical resistivity tomography (ERT) is a method of imaging the electrical conductivity of the subsurface. Electrical conductivity is a useful metric for understanding the subsurface because it is governed by the geomechanical and geochemical properties that drive subsurface systems. ERT works by injecting current into the subsurface across a pair of electrodes and measuring the corresponding electrical potential response across another pair of electrodes. Many such measurements are strategically taken across an array of electrodes to produce an ERT data set. These data are then processed through a computationally demanding process known as inversion to produce an image of the subsurface conductivity structure that gave rise to the measurements. Data can be inverted to provide 2D images, 3D images, or, in the case of time-lapse 3D imaging, 4D images. ERT is generally not well suited to environments with buried electrically conductive infrastructure such as pipes, tanks, or well casings, because these features tend to dominate and degrade ERT images. This reduces or eliminates the utility of ERT imaging where it would otherwise be highly useful: for example, imaging fluid migration from leaking pipes, imaging soil contamination beneath leaking subsurface tanks, and monitoring contaminant migration in locations with dense networks of metal-cased monitoring wells. The location and dimensions of buried metallic infrastructure are often known. If so, the effects of the infrastructure can be explicitly modeled within the ERT imaging algorithm and thereby removed from the corresponding ERT image. However, there are a number of obstacles limiting this application: (1) metallic infrastructure cannot be accurately modeled with standard codes because of the large contrast in conductivity between the metal and the host material; (2) modeling infrastructure in true dimension requires the computational mesh to be highly refined near the metal inclusions, which increases computational demands; and (3) the ERT imaging algorithm requires specialized modifications to accommodate high-conductivity inclusions within the computational mesh. The solution to each of these challenges was implemented within E4D (formerly FERM3D), a parallel ERT imaging code developed at PNNL (IPID #30249). The infrastructure modeling module implemented in E4D decouples the model at the metallic interface boundaries into several well-posed sub-problems (one for each distinct metallic inclusion) that are subsequently solved and recombined to form the global solution. The approach is based on the immersed interface method, which has been applied to similar problems in other fields (e.g., the semiconductor industry). Comparisons to analytic solutions have shown the results to be very accurate, addressing item 1 above. The solution is implemented on an unstructured mesh, which enables arbitrary shapes to be efficiently modeled, thereby addressing item 2 above. In addition, the algorithm is written in parallel and shows excellent scalability, which also addresses item 2. Finally, because only the boundaries of metallic inclusions are modeled, there are no high-conductivity cells within the modeling mesh, and the problem described by item 3 above no longer applies.
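To make the inversion step concrete, here is a deliberately tiny, linearized, Tikhonov-regularized least-squares example in NumPy. It only conveys the shape of the problem (data = sensitivity × model, plus regularization); E4D's actual algorithm, mesh handling, and infrastructure decoupling are far more involved, and the matrix below is a random stand-in for a real sensitivity matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
n_data, n_cells = 60, 200

G = rng.random((n_data, n_cells))             # stand-in sensitivity matrix
m_true = np.zeros(n_cells)
m_true[80:120] = 1.0                          # a conductive anomaly
d_obs = G @ m_true + 0.01 * rng.standard_normal(n_data)

# Solve min ||G m - d||^2 + lam * ||m||^2 via an augmented least-squares system.
lam = 1.0
A = np.vstack([G, np.sqrt(lam) * np.eye(n_cells)])
b = np.concatenate([d_obs, np.zeros(n_cells)])
m_est, *_ = np.linalg.lstsq(A, b, rcond=None)

print("data misfit:", np.linalg.norm(G @ m_est - d_obs))
```

The regularization weight `lam` trades data fit against model smoothness or size; real ERT codes iterate this linearized solve around a nonlinear forward model.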
40 CFR 80.551 - How does a refiner obtain approval as a small refiner under this subpart?
Code of Federal Regulations, 2014 CFR
2014-07-01
... application for small refiner status. EPA may accept such alternate data at its discretion. (4) For motor... a small refiner under this subpart? 80.551 Section 80.551 Protection of Environment ENVIRONMENTAL... Diesel Fuel; Nonroad, Locomotive, and Marine Diesel Fuel; and ECA Marine Fuel Small Refiner Hardship...
40 CFR 80.551 - How does a refiner obtain approval as a small refiner under this subpart?
Code of Federal Regulations, 2011 CFR
2011-07-01
... application for small refiner status. EPA may accept such alternate data at its discretion. (4) For motor... a small refiner under this subpart? 80.551 Section 80.551 Protection of Environment ENVIRONMENTAL... Diesel Fuel; Nonroad, Locomotive, and Marine Diesel Fuel; and ECA Marine Fuel Small Refiner Hardship...
40 CFR 80.551 - How does a refiner obtain approval as a small refiner under this subpart?
Code of Federal Regulations, 2013 CFR
2013-07-01
... application for small refiner status. EPA may accept such alternate data at its discretion. (4) For motor... a small refiner under this subpart? 80.551 Section 80.551 Protection of Environment ENVIRONMENTAL... Diesel Fuel; Nonroad, Locomotive, and Marine Diesel Fuel; and ECA Marine Fuel Small Refiner Hardship...
40 CFR 80.551 - How does a refiner obtain approval as a small refiner under this subpart?
Code of Federal Regulations, 2012 CFR
2012-07-01
... application for small refiner status. EPA may accept such alternate data at its discretion. (4) For motor... a small refiner under this subpart? 80.551 Section 80.551 Protection of Environment ENVIRONMENTAL... Diesel Fuel; Nonroad, Locomotive, and Marine Diesel Fuel; and ECA Marine Fuel Small Refiner Hardship...
Integrating biodiversity distribution knowledge: toward a global map of life.
Jetz, Walter; McPherson, Jana M; Guralnick, Robert P
2012-03-01
Global knowledge about the spatial distribution of species is orders of magnitude coarser in resolution than other geographically-structured environmental datasets such as topography or land cover. Yet such knowledge is crucial in deciphering ecological and evolutionary processes and in managing global change. In this review, we propose a conceptual and cyber-infrastructure framework for refining species distributional knowledge that is novel in its ability to mobilize and integrate diverse types of data such that their collective strengths overcome individual weaknesses. The ultimate aim is a public, online, quality-vetted 'Map of Life' that for every species integrates and visualizes available distributional knowledge, while also facilitating user feedback and dynamic biodiversity analyses. First milestones toward such an infrastructure have now been implemented. Copyright © 2011 Elsevier Ltd. All rights reserved.
Frontiers, Opportunities and Challenges for a Hydrogen Economy
NASA Astrophysics Data System (ADS)
Turner, John
2015-03-01
Energy carriers are the staple for powering the society we live in. Coal, oil, natural gas, gasoline and diesel all carry energy in chemical bonds, used in almost all areas of our civilization. But these carriers have a limited-use lifetime on this planet. They are finite, contribute to climate change, and carry significant geopolitical issues. If mankind is to maintain and grow our societies, new energy carriers must be developed and deployed into our energy infrastructure. Hydrogen is the simplest of all the energy carriers and, when refined from water using renewable energies like solar and wind, represents a sustainable energy carrier, viable for millennia to come. This talk will discuss the challenges for sustainable production of hydrogen, along with the promise and possible pathways for implementing hydrogen into our energy infrastructure.
EIA application in China's expressway infrastructure: clarifying the decision-making hierarchy.
Zhou, Kai-Yi; Sheate, William R
2011-06-01
China's EIA Law came into effect in 2003 and formally requires road transport infrastructure development actions to be subject to Environmental Impact Assessment (EIA). EIAs (including project EIA and plan EIA, or strategic environmental impact assessment, SEA) have been widely applied in the expressway infrastructure planning field. Among those applications, SEA is applied to provincial level expressway network (PLEI) plans, and project EIA is applied to expressway infrastructure development 'projects' under PLEI plans. Three case studies (one expressway project EIA and two PLEI plan SEAs) were examined to understand how EIAs are currently applied to expressway infrastructure development planning. Through the studies, a number of problems that significantly influence the quality of EIA application in the field were identified. The reasons for those problems are analyzed and possible solutions are suggested, aimed at enhancing EIA practice, helping deliver better decision-making and ultimately improving the environmental performance of expressway infrastructure. Copyright © 2010 Elsevier Ltd. All rights reserved.
Autonomic Management of Application Workflows on Hybrid Computing Infrastructure
Kim, Hyunjoo; el-Khamra, Yaakoub; Rodero, Ivan; ...
2011-01-01
In this paper, we present a programming and runtime framework that enables the autonomic management of complex application workflows on hybrid computing infrastructures. The framework is designed to address system and application heterogeneity and dynamics to ensure that application objectives and constraints are satisfied. The need for such autonomic system and application management is becoming critical as computing infrastructures become increasingly heterogeneous, integrating different classes of resources from high-end HPC systems to commodity clusters and clouds. For example, the framework presented in this paper can be used to provision the appropriate mix of resources based on application requirements and constraints. The framework also monitors the system/application state and adapts the application and/or resources to respond to changing requirements or environment. To demonstrate the operation of the framework and to evaluate its capabilities, we employ a workflow used to characterize an oil reservoir executing on a hybrid infrastructure composed of TeraGrid nodes and Amazon EC2 instances of various types. Specifically, we show how different application objectives such as acceleration, conservation and resilience can be effectively achieved while satisfying deadline and budget constraints, using an appropriate mix of dynamically provisioned resources. Our evaluations also demonstrate that public clouds can be used to complement and reinforce the scheduling and usage of traditional high performance computing infrastructure.
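A minimal sketch of the kind of provisioning decision described above: given a deadline and a budget, add cloud instances to a fixed HPC allocation until the deadline can be met or the budget is exhausted. All names, rates, and costs here are illustrative assumptions, not the paper's framework:

```python
def provision(work_units, deadline_h, budget_usd,
              hpc_nodes=8, hpc_rate=10.0,       # work units/hour per HPC node
              cloud_rate=4.0, cloud_cost_h=0.50):
    """Return a resource mix meeting the deadline within budget, or None."""
    cloud = 0
    while True:
        throughput = hpc_nodes * hpc_rate + cloud * cloud_rate
        if work_units / throughput <= deadline_h:
            return {"hpc_nodes": hpc_nodes, "cloud_instances": cloud}
        cloud += 1
        if cloud * cloud_cost_h * deadline_h > budget_usd:
            return None   # infeasible: relax the deadline or raise the budget

print(provision(work_units=2000, deadline_h=12, budget_usd=150))
```

An "acceleration" objective would push `cloud` up toward the budget limit, while a "conservation" objective would stop as soon as the deadline is just met, as in this greedy loop.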
Cloud Infrastructure & Applications - CloudIA
NASA Astrophysics Data System (ADS)
Sulistio, Anthony; Reich, Christoph; Doelitzscher, Frank
The idea behind Cloud Computing is to deliver Infrastructure-as-a-Service and Software-as-a-Service over the Internet on an easy pay-per-use business model. To harness the potential of Cloud Computing for e-Learning and research purposes, and for small- and medium-sized enterprises, the Hochschule Furtwangen University has established a new project, called Cloud Infrastructure & Applications (CloudIA). The CloudIA project is a market-oriented cloud infrastructure that leverages different virtualization technologies by supporting Service-Level Agreements for various service offerings. This paper describes the CloudIA project in detail and presents our early experiences in building a private cloud using an existing infrastructure.
40 CFR 80.1340 - How does a refiner obtain approval as a small refiner?
Code of Federal Regulations, 2012 CFR
2012-07-01
... EPA with appropriate data to correct the record when the company submits its application for small... a small refiner? 80.1340 Section 80.1340 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner...
Detmer, Don E
2003-01-01
Background Improving health in our nation requires strengthening four major domains of the health care system: personal health management, health care delivery, public health, and health-related research. Many avoidable shortcomings in the health sector that result in poor quality are due to inaccessible data, information, and knowledge. A national health information infrastructure (NHII) offers the connectivity and knowledge management essential to correct these shortcomings. Better health and a better health system are within our reach. Discussion A national health information infrastructure for the United States should address the needs of personal health management, health care delivery, public health, and research. It should also address relevant global dimensions (e.g., standards for sharing data and knowledge across national boundaries). The public and private sectors will need to collaborate to build a robust national health information infrastructure, essentially a 'paperless' health care system, for the United States. The federal government should assume leadership for assuring a national health information infrastructure as recommended by the National Committee on Vital and Health Statistics and the President's Information Technology Advisory Committee. Progress is needed in the areas of funding, incentives, standards, and continued refinement of a privacy (i.e., confidentiality and security) framework to facilitate personal identification for health purposes. Particular attention should be paid to NHII leadership and change management challenges. Summary A national health information infrastructure is a necessary step for improved health in the U.S. It will require a concerted, collaborative effort by both public and private sectors. If you cannot measure it, you cannot improve it. Lord Kelvin PMID:12525262
NASA Technical Reports Server (NTRS)
Clare, Loren; Clement, B.; Gao, J.; Hutcherson, J.; Jennings, E.
2006-01-01
This paper describes recent development of communications protocols, services, and associated tools targeted to reduce risk, reduce cost, and increase the efficiency of IND infrastructure and supported mission operations. The space-based networking technologies developed: (a) provide differentiated quality of service (QoS) that gives precedence to traffic that users have selected as having the greatest importance and/or time-criticality; (b) improve the total value of information to users through the use of QoS prioritization techniques; (c) increase operational flexibility and improve command-response turnaround; (d) enable a new class of networked and collaborative science missions; (e) simplify application interfaces to communications services; and (f) reduce risk and cost through a common object model and automated scheduling and communications protocols. The technologies are described in three general areas: communications scheduling, middleware, and protocols. A simulation environment was also developed, which provides a comprehensive, quantitative understanding of the technologies' performance within the overall, evolving architecture, as well as the ability to refine and optimize specific components.
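The differentiated-QoS capability in item (a) — precedence for traffic that users mark as most important or time-critical — can be sketched with a simple priority queue. This is an illustrative toy, not the protocols or tools developed in the work described:

```python
import heapq
import itertools

class QoSQueue:
    """Toy differentiated-QoS send queue: highest priority first,
    FIFO among messages of equal priority."""
    def __init__(self):
        self._heap = []
        self._order = itertools.count()

    def push(self, priority, message):
        # Negate priority so that larger numbers are served first.
        heapq.heappush(self._heap, (-priority, next(self._order), message))

    def pop(self):
        return heapq.heappop(self._heap)[2]

q = QoSQueue()
q.push(1, "routine housekeeping telemetry")
q.push(9, "command-response, time-critical")
q.push(5, "science data")
print(q.pop())   # -> "command-response, time-critical"
```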
Model based systems engineering for astronomical projects
NASA Astrophysics Data System (ADS)
Karban, R.; Andolfato, L.; Bristow, P.; Chiozzi, G.; Esselborn, M.; Schilling, M.; Schmid, C.; Sommer, H.; Zamparelli, M.
2014-08-01
Model Based Systems Engineering (MBSE) is an emerging field of systems engineering for which the System Modeling Language (SysML) is a key enabler for descriptive, prescriptive and predictive models. This paper surveys some of the capabilities, expectations and peculiarities of tool-assisted MBSE experienced in real-life astronomical projects. The examples range in depth and scope across a wide spectrum of applications (for example documentation, requirements, analysis, trade studies) and purposes (addressing a particular development need, or accompanying a project throughout many - if not all - of its lifecycle phases, fostering reuse and minimizing ambiguity). From the beginnings of the Active Phasing Experiment, through VLT instrumentation, VLTI infrastructure, and the Telescope Control System for the E-ELT, to Wavefront Control for the E-ELT, we show how stepwise refinement of tools, processes and methods has provided tangible benefits to customary systems engineering activities like requirement flow-down, design trade studies, interface definition, and validation, by means of a variety of approaches (like Model Checking, Simulation, Model Transformation) and methodologies (like OOSEM and State Analysis).
NASA Astrophysics Data System (ADS)
Li, Gaohua; Fu, Xiang; Wang, Fuxin
2017-10-01
A low-dissipation, high-order-accurate hybrid upwind/central scheme based on fifth-order weighted essentially non-oscillatory (WENO) and sixth-order central schemes, along with the Spalart-Allmaras (SA)-based delayed detached eddy simulation (DDES) turbulence model and flow-feature-based adaptive mesh refinement (AMR), are implemented into a dual-mesh overset grid infrastructure with parallel computing capabilities, for the purpose of simulating vortex-dominated unsteady detached wake flows with high spatial resolution. The overset grid assembly (OGA) process, based on collection detection theory and an implicit hole-cutting algorithm, achieves automatic coupling of the near-body and off-body solvers, and a trial-and-error method is used to obtain a globally balanced load distribution among the composed codes. Results for flow over a high-Reynolds-number cylinder and a two-bladed helicopter rotor show that the combination of a high-order hybrid scheme, an advanced turbulence model, and overset adaptive mesh refinement can effectively enhance the spatial resolution for the simulation of turbulent wake eddies.
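For readers unfamiliar with the fifth-order WENO building block mentioned above, the following is a standard Jiang-Shu WENO5 reconstruction of a left-biased interface value in 1D, written independently as an illustration (the paper's hybrid scheme blends this with a sixth-order central flux, which is not shown):

```python
import numpy as np

def weno5_left(f, i, eps=1e-6):
    """Fifth-order WENO reconstruction of f at interface i+1/2,
    left-biased, from cell averages f[i-2..i+2] (Jiang & Shu 1996)."""
    # Candidate third-order reconstructions on the three sub-stencils.
    p0 = ( 2*f[i-2] - 7*f[i-1] + 11*f[i  ]) / 6.0
    p1 = (  -f[i-1] + 5*f[i  ] +  2*f[i+1]) / 6.0
    p2 = ( 2*f[i  ] + 5*f[i+1] -    f[i+2]) / 6.0
    # Smoothness indicators: large where a sub-stencil crosses a discontinuity.
    b0 = 13/12*(f[i-2] - 2*f[i-1] + f[i  ])**2 + 0.25*(f[i-2] - 4*f[i-1] + 3*f[i])**2
    b1 = 13/12*(f[i-1] - 2*f[i  ] + f[i+1])**2 + 0.25*(f[i-1] - f[i+1])**2
    b2 = 13/12*(f[i  ] - 2*f[i+1] + f[i+2])**2 + 0.25*(3*f[i] - 4*f[i+1] + f[i+2])**2
    # Nonlinear weights from the optimal linear weights (0.1, 0.6, 0.3).
    a = np.array([0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2])
    w = a / a.sum()
    return w[0]*p0 + w[1]*p1 + w[2]*p2

x = np.linspace(0, 2*np.pi, 64, endpoint=False)
f = np.sin(x)
print(weno5_left(f, 10))   # approximates sin at the i+1/2 interface
```

In smooth regions the nonlinear weights approach the optimal ones and the scheme is fifth-order accurate; near a shock the weights drop the oscillatory sub-stencils, which is the low-dissipation, non-oscillatory behavior the abstract relies on.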
75 FR 56506 - Energy and Infrastructure Mission to Saudi Arabia; Application Deadline Extended
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-16
... DEPARTMENT OF COMMERCE International Trade Administration Energy and Infrastructure Mission to Saudi Arabia; Application Deadline Extended AGENCY: International Trade Administration, Department of... application deadline has been extended to September 30, 2010. The U.S. Department of Commerce will review all...
Pangle, Luke A.; DeLong, Stephen B.; Abramson, Nate; Adams, John; Barron-Gafford, Greg A.; Breshears, David D.; Brooks, Paul D.; Chorover, Jon; Dietrich, William E.; Dontsova, Katerina; Durcik, Matej; Espeleta, Javier; Ferré, T.P.A.; Ferriere, Regis; Henderson, Whitney; Hunt, Edward A.; Huxman, Travis E.; Millar, David; Murphy, Brendan; Niu, Guo-Yue; Pavao-Zuckerman, Mitch; Pelletier, Jon D.; Rasmussen, Craig; Ruiz, Joaquin; Saleska, Scott; Schaap, Marcel; Sibayan, Michael; Troch, Peter A.; Tuller, Markus; van Haren, Joost; Zeng, Xubin
2015-01-01
Zero-order drainage basins, and their constituent hillslopes, are the fundamental geomorphic unit comprising much of Earth's uplands. The convergent topography of these landscapes generates spatially variable substrate and moisture content, facilitating biological diversity and influencing how the landscape filters precipitation and sequesters atmospheric carbon dioxide. In light of these significant ecosystem services, refining our understanding of how these functions are affected by landscape evolution, weather variability, and long-term climate change is imperative. In this paper we introduce the Landscape Evolution Observatory (LEO): a large-scale controllable infrastructure consisting of three replicated artificial landscapes (each 330 m2 surface area) within the climate-controlled Biosphere 2 facility in Arizona, USA. At LEO, experimental manipulation of rainfall, air temperature, relative humidity, and wind speed are possible at unprecedented scale. The Landscape Evolution Observatory was designed as a community resource to advance understanding of how topography, physical and chemical properties of soil, and biological communities coevolve, and how this coevolution affects water, carbon, and energy cycles at multiple spatial scales. With well-defined boundary conditions and an extensive network of sensors and samplers, LEO enables an iterative scientific approach that includes numerical model development and virtual experimentation, physical experimentation, data analysis, and model refinement. We plan to engage the broader scientific community through public dissemination of data from LEO, collaborative experimental design, and community-based model development.
29 CFR 215.3 - Employees represented by a labor organization.
Code of Federal Regulations, 2011 CFR
2011-07-01
... to applicants for the Over-the-Road Bus Accessibility Program, and grant applications for the Other... Commute Program or grants to capitalize State Infrastructure Bank accounts under the State Infrastructure...
Kasam, Vinod; Salzemann, Jean; Botha, Marli; Dacosta, Ana; Degliesposti, Gianluca; Isea, Raul; Kim, Doman; Maass, Astrid; Kenyon, Colin; Rastelli, Giulio; Hofmann-Apitius, Martin; Breton, Vincent
2009-05-01
Despite continuous efforts of the international community to reduce the impact of malaria on developing countries, no significant progress has been made in recent years and the discovery of new drugs is more than ever needed. Out of the many proteins involved in the metabolic activities of the Plasmodium parasite, some are promising targets for rational drug discovery. Recent years have witnessed the emergence of grids, highly distributed computing infrastructures particularly well fitted for embarrassingly parallel computations like docking. In 2005, a first attempt at using grids for large-scale virtual screening focused on plasmepsins and ended in the identification of previously unknown scaffolds, which were confirmed in vitro to be active plasmepsin inhibitors. Following this success, a second deployment took place in the fall of 2006, focusing on one well-known target, dihydrofolate reductase (DHFR), and on a new promising one, glutathione-S-transferase. In silico drug design, especially vHTS, is a widely accepted technology in lead identification and lead optimization. This approach therefore builds upon the progress made in computational chemistry, to achieve more accurate in silico docking, and in information technology, to design and operate large-scale grid infrastructures. On the computational side, a sustained infrastructure has been developed: docking at large scale, different strategies for result analysis, on-the-fly storage of results in MySQL databases, molecular dynamics refinement, and MM-PBSA and MM-GBSA rescoring. The modeling results obtained are very promising, and in vitro experiments are underway for all targets against which screening was performed. The current paper describes this rational drug discovery activity at large scale, in particular molecular docking using the FlexX software on computational grids, in finding hits against three different targets (PfGST, PfDHFR, and PvDHFR, wild-type and mutant forms) implicated in malaria. The grid-enabled virtual screening approach is proposed to produce focused compound libraries for other biological targets relevant to fighting the infectious diseases of the developing world.
NASA Astrophysics Data System (ADS)
Knox, S.; Meier, P.; Mohammed, K.; Korteling, B.; Matrosov, E. S.; Hurford, A.; Huskova, I.; Harou, J. J.; Rosenberg, D. E.; Thilmant, A.; Medellin-Azuara, J.; Wicks, J.
2015-12-01
Capacity expansion on resource networks is essential to adapting to economic and population growth and pressures such as climate change. Engineered infrastructure systems such as water, energy, or transport networks require sophisticated and bespoke models to refine management and investment strategies. Successful modeling of such complex systems relies on good data management and advanced methods to visualize and share data.

Engineered infrastructure systems are often represented as networks of nodes and links with operating rules describing their interactions. Infrastructure system management and planning can be abstracted to simulating or optimizing new operations and extensions of the network. By separating the data storage of abstract networks from manipulation and modeling we have created a system where infrastructure modeling across various domains is facilitated.

We introduce Hydra Platform, a Free Open Source Software designed for analysts and modelers to store, manage and share network topology and data. Hydra Platform is a Python library with a web service layer for remote applications, called Apps, to connect. Apps serve various functions including network or results visualization, data export (e.g. into a proprietary format) or model execution. This Client-Server architecture allows users to manipulate and share centrally stored data. XML templates allow a standardised description of the data structure required for storing network data such that it is compatible with specific models.

Hydra Platform represents networks in an abstract way and is therefore not bound to a single modeling domain. It is the Apps that create domain-specific functionality. Using Apps researchers from different domains can incorporate different models within the same network enabling cross-disciplinary modeling while minimizing errors and streamlining data sharing. Separating the Python library from the web layer allows developers to natively expand the software or build web-based apps in other languages for remote functionality. Partner CH2M is developing a commercial user-interface for Hydra Platform however custom interfaces and visualization tools can be built. Hydra Platform is available on GitHub while Apps will be shared on a central repository.
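As a sketch of the client-server pattern the abstract describes — a remote App pushing an abstract node/link network to the server — consider the following. The endpoint URL and JSON schema here are invented for illustration; they are not Hydra Platform's actual API, which is documented with the software on GitHub:

```python
import json
import urllib.request

# Hypothetical payload: an abstract network of nodes and links with data.
network = {
    "name": "demo water network",
    "nodes": [{"id": 1, "name": "reservoir"}, {"id": 2, "name": "city"}],
    "links": [{"id": 10, "node_1": 1, "node_2": 2, "name": "main pipe",
               "attributes": {"capacity_Ml_day": 120.0}}],
}

# Hypothetical endpoint standing in for the real web-service layer;
# assumes a server is listening locally.
req = urllib.request.Request(
    "http://localhost:8080/add_network",
    data=json.dumps({"network": network}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read())
```

The point of the design is that any App — a visualizer, a model runner, an exporter — speaks this same network abstraction, so domain-specific logic lives entirely on the client side.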
7 CFR 1530.104 - Application for a license.
Code of Federal Regulations, 2011 CFR
2011-01-01
... OF AGRICULTURE THE REFINED SUGAR RE-EXPORT PROGRAM, THE SUGAR CONTAINING PRODUCTS RE-EXPORT PROGRAM... of any co-packer(s); (4) In the case of a refined sugar product, the polarity of the product and the formula proposed by the refiner for calculating the refined sugar in the product; (5) In the case of a...
7 CFR 1530.104 - Application for a license.
Code of Federal Regulations, 2013 CFR
2013-01-01
... OF AGRICULTURE THE REFINED SUGAR RE-EXPORT PROGRAM, THE SUGAR CONTAINING PRODUCTS RE-EXPORT PROGRAM... of any co-packer(s); (4) In the case of a refined sugar product, the polarity of the product and the formula proposed by the refiner for calculating the refined sugar in the product; (5) In the case of a...
ERIC Educational Resources Information Center
National Inst. of Standards and Technology, Gaithersburg, MD.
An interconnection of computer networks, telecommunications services, and applications, the National Information Infrastructure (NII) can open up new vistas and profoundly change much of American life. This report explores some of the opportunities and obstacles to the use of the NII by people and organizations. The goal is to express how…
DOT National Transportation Integrated Search
2015-08-01
This document is the first of a seven volume report that describes performance requirements for connected vehicle vehicle-to-infrastructure (V2I) Safety Applications developed for the U.S. Department of Transportation (U.S. DOT). The applications add...
THE PLUTO CODE FOR ADAPTIVE MESH COMPUTATIONS IN ASTROPHYSICAL FLUID DYNAMICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mignone, A.; Tzeferacos, P.; Zanni, C.
We present a description of the adaptive mesh refinement (AMR) implementation of the PLUTO code for solving the equations of classical and special relativistic magnetohydrodynamics (MHD and RMHD). The current release exploits, in addition to the static grid version of the code, the distributed infrastructure of the CHOMBO library for multidimensional parallel computations over block-structured, adaptively refined grids. We employ a conservative finite-volume approach where primary flow quantities are discretized at the cell center in a dimensionally unsplit fashion using the Corner Transport Upwind method. Time stepping relies on a characteristic tracing step where piecewise parabolic method, weighted essentially non-oscillatory, or slope-limited linear interpolation schemes can be handily adopted. A characteristic decomposition-free version of the scheme is also illustrated. The solenoidal condition of the magnetic field is enforced by augmenting the equations with a generalized Lagrange multiplier providing propagation and damping of divergence errors through a mixed hyperbolic/parabolic explicit cleaning step. Among the novel features, we describe an extension of the scheme to include non-ideal dissipative processes, such as viscosity, resistivity, and anisotropic thermal conduction without operator splitting. Finally, we illustrate an efficient treatment of point-local, potentially stiff source terms over hierarchical nested grids by taking advantage of the adaptivity in time. Several multidimensional benchmarks and applications to problems of astrophysical relevance assess the potentiality of the AMR version of PLUTO in resolving flow features separated by large spatial and temporal disparities.
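The mixed hyperbolic/parabolic divergence cleaning mentioned above (a generalized Lagrange multiplier in the style of Dedner et al.) admits a very compact operator-split damping step, sketched below under assumed parameter names; this illustrates the idea only and is not PLUTO's implementation:

```python
import numpy as np

def glm_damp(psi, dt, ch, cp):
    """Parabolic (damping) part of GLM cleaning, integrated exactly:
    d(psi)/dt = -(ch**2 / cp**2) * psi over one time step, where ch is the
    cleaning wave speed and cp controls the damping rate (assumed names)."""
    return psi * np.exp(-dt * ch**2 / cp**2)

def div_b_2d(bx, by, dx, dy):
    """Cell-centered divergence of B, which sources the hyperbolic transport
    step d(psi)/dt + ch**2 * div(B) = 0 (transport itself not shown)."""
    return np.gradient(bx, dx, axis=0) + np.gradient(by, dy, axis=1)
```

The hyperbolic part advects divergence errors away at speed `ch` while the exponential factor damps them, which is why the abstract describes the cleaning as "mixed hyperbolic/parabolic".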
Soak Up the Rain New England Webinar Series: National ...
Presenters will provide an introduction to the most recent EPA green infrastructure tools to R1 stakeholders, and their use in making decisions about implementing green infrastructure. We will discuss structuring your green infrastructure decision, finding appropriate information and tools, evaluating options and selecting the right Best Management Practices mix for your needs.
WMOST (Watershed Management Optimization Support Tool) - for screening a wide range of practices for cost-effectiveness in achieving watershed or water utilities management goals.
GIWiz (Green Infrastructure Wizard) - a web application connecting communities to EPA Green Infrastructure tools and resources.
Opti-Tool - designed to assist in developing technically sound and optimized cost-effective stormwater management plans.
National Stormwater Calculator - a desktop application for estimating the impact of land cover change and green infrastructure controls on stormwater runoff.
DASEES-GI (Decision Analysis for a Sustainable Environment, Economy, and Society) - a framework for linking objectives and measures with green infrastructure methods.
Grain Refinement of Al-Si Hypoeutectic Alloys by Al3Ti1B Master Alloy and Ultrasonic Treatment
NASA Astrophysics Data System (ADS)
Wang, Gui; Wang, Eric Qiang; Prasad, Arvind; Dargusch, Matthew; StJohn, David H.
Al-Si alloys are widely used in the automotive and aerospace industries due to their excellent castability, high strength-to-weight ratio and good corrosion resistance. However, Si poisoning severely limits the degree of grain refinement, with the grain size becoming larger as the Si content increases. Generally the effect of Si poisoning is reduced by increasing the amount of master alloy added to the melt during casting. However, an alternative approach is physical grain refinement through the application of an external force (e.g. mechanical or electromagnetic stirring, intensive shearing and ultrasonic irradiation). This work compares the grain refining efficiency of three approaches to the grain refinement of a range of hypoeutectic Al-Si alloys: (i) the addition of Al3Ti1B master alloy, (ii) the application of Ultrasonic Treatment (UT) and (iii) the combined addition of Al3Ti1B master alloy and the application of UT.
Recycled carpet materials for infrastructure applications.
DOT National Transportation Integrated Search
2013-06-01
The objective of this project was to develop novel composite materials for infrastructure applications by recycling nylon based waste carpets. These novel composites have been proven to possess improved mechanical and sound barrier properties to meet...
Ultrasonic imaging for concrete infrastructure condition assessment and quality assurance.
DOT National Transportation Integrated Search
2017-04-01
This report describes work on laboratory and field performance reviews of an ultrasonic shear wave imaging device called MIRA : for application to plain and reinforced concrete infrastructure components. Potential applications investigated included b...
A template-based approach for parallel hexahedral two-refinement
Owen, Steven J.; Shih, Ryan M.; Ernst, Corey D.
2016-10-17
Here, we provide a template-based approach for generating locally refined all-hex meshes. We focus specifically on refinement of initially structured grids utilizing a 2-refinement approach where uniformly refined hexes are subdivided into eight child elements. The refinement algorithm consists of identifying marked nodes that are used as the basis for a set of four simple refinement templates. The target application for 2-refinement is a parallel grid-based all-hex meshing tool for high performance computing in a distributed environment. The result is a parallel consistent locally refined mesh requiring minimal communication and where minimum mesh quality is greater than scaled Jacobian 0.3 prior to smoothing.
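The uniform step underlying 2-refinement — a hexahedron split into eight children through edge, face, and body midpoints — can be written compactly with trilinear interpolation. This sketch assumes corner points supplied in a 2x2x2 (i, j, k) ordering and shows only the uniform subdivision, not the paper's four marked-node transition templates:

```python
import numpy as np
from itertools import product

def refine_hex(corners):
    """Split one hex cell into 8 children. `corners` has shape (2, 2, 2, 3):
    corner (i, j, k) of the cell, each an xyz point."""
    c = np.asarray(corners, dtype=float)
    t = np.array([0.0, 0.5, 1.0])
    lattice = np.empty((3, 3, 3, 3))
    for a, b, g in product(range(3), repeat=3):
        u, v, w = t[a], t[b], t[g]
        # Trilinear interpolation of the 8 corner points.
        lattice[a, b, g] = sum(
            c[i, j, k]
            * (u if i else 1 - u) * (v if j else 1 - v) * (w if k else 1 - w)
            for i, j, k in product((0, 1), repeat=3)
        )
    # Each child is a 2x2x2 sub-block of the 3x3x3 midpoint lattice.
    return [lattice[i:i+2, j:j+2, k:k+2] for i, j, k in product((0, 1), repeat=3)]

unit = np.array([[[[i, j, k] for k in (0, 1)] for j in (0, 1)] for i in (0, 1)], float)
children = refine_hex(unit)
print(len(children))   # 8 child hexes, each again of shape (2, 2, 2, 3)
```

The hard part of 2-refinement, which the templates in the paper solve, is stitching such uniformly refined cells to their unrefined neighbors without hanging nodes.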
DOT National Transportation Integrated Search
2016-09-30
Implementing Connected Vehicle Infrastructure (CVI) applications for handheld devices into public transportation transit systems would provide transit agencies and their users with two-directional information flow from traveler-to-agencies, agencies-...
ERIC Educational Resources Information Center
Lu, Chun; Tsai, Chin-Chung; Wu, Di
2015-01-01
With the ever-deepening economic reform and international trend of ICT application in education, the Chinese government is strengthening its basic education curriculum reform and actively facilitating the application of ICT in education. Given the achievement gap of ICT infrastructure and its application in middle and primary schools between urban…
A model for simulating adaptive, dynamic flows on networks: Application to petroleum infrastructure
Corbet, Thomas F.; Beyeler, Walt; Wilson, Michael L.; ...
2017-10-03
Simulation models can greatly improve decisions meant to control the consequences of disruptions to critical infrastructures. We describe a dynamic flow model on networks purposed to inform analyses by those concerned about consequences of disruptions to infrastructures and to help policy makers design robust mitigations. We conceptualize the adaptive responses of infrastructure networks to perturbations as market transactions and business decisions of operators. We approximate commodity flows in these networks by a diffusion equation, with nonlinearities introduced to model capacity limits. To illustrate the behavior and scalability of the model, we show its application first on two simple networks, then on petroleum infrastructure in the United States, where we analyze the effects of a hypothesized earthquake.
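A toy version of the modeling idea — commodity levels diffusing along links, with a nonlinearity that caps each link at its capacity — fits in a few lines of NumPy. This sketches the general approach under invented parameters; it is not the authors' petroleum model:

```python
import numpy as np

n = 4
C = np.array([[0, 1, 0, 0],          # link conductances on a chain network
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)
cap = 0.05                            # maximum flow per link per unit time
p = np.array([1.0, 0.0, 0.0, 0.0])   # initial commodity level at each node
dt = 0.1

for _ in range(500):
    # F[i, j]: flow from node j into node i, driven by level differences.
    F = C * (p[None, :] - p[:, None])
    F = np.clip(F, -cap, cap)         # the model's nonlinearity: capacity limits
    p += dt * F.sum(axis=1)           # net inflow updates each node's level

print(np.round(p, 3))                 # levels equilibrate toward 0.25 each
```

Because `F` is antisymmetric, total commodity is conserved; removing a link (zeroing an entry of `C`) or cutting a capacity mimics a disruption, and the transient shows how the rest of the network adapts.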
40 CFR 80.1342 - What compliance options are available to small refiners under this subpart?
Code of Federal Regulations, 2014 CFR
2014-07-01
... Benzene Small Refiner Provisions § 80.1342 What compliance options are available to small refiners under... this section must comply with the applicable benzene standards at § 80.1230 beginning with the first...
40 CFR 80.1342 - What compliance options are available to small refiners under this subpart?
Code of Federal Regulations, 2011 CFR
2011-07-01
... Benzene Small Refiner Provisions § 80.1342 What compliance options are available to small refiners under... this section must comply with the applicable benzene standards at § 80.1230 beginning with the first...
40 CFR 80.1342 - What compliance options are available to small refiners under this subpart?
Code of Federal Regulations, 2013 CFR
2013-07-01
... Benzene Small Refiner Provisions § 80.1342 What compliance options are available to small refiners under... this section must comply with the applicable benzene standards at § 80.1230 beginning with the first...
Escaping America’s Future: A Clarion Call for a National Energy Security Strategy
2010-06-01
the heavy oils, the air by means of the ultra refined oils, and the land by means of the petrol and the illuminating oils. And in addition to these he... Deepwater Horizon oil spill incident in the Gulf, although classified an accident, demonstrates how damaging a potential attack on infrastructure could... energy, but has suffered recent setbacks due to the Deepwater Horizon oil spill in the Gulf of Mexico. The Outer Continental Shelf surrounding the
Infrastructure dynamics: A selected bibliography
NASA Technical Reports Server (NTRS)
Dajani, J. S.; Bencosme, A. J.
1978-01-01
The term infrastructure is used to denote the set of life support and public service systems that is necessary for the development and growth of human settlements. Included are some basic references in the field of dynamic simulation, as well as a number of relevant applications in the area of infrastructure planning. The intent is to enable the student or researcher to quickly identify such applications to the extent necessary for initiating further work in the field.
Preparing to use vehicle infrastructure integration (VII) in transportation operations : phase II.
DOT National Transportation Integrated Search
2009-01-01
Vehicle infrastructure integration (VII) is an emerging approach intended to create an enabling communication capability to support vehicle-to-vehicle and vehicle-to-infrastructure communications for safety and mobility applications. The Virginia Dep...
Lipkin, W. Ian
2010-01-01
Summary: Platforms for pathogen discovery have improved since the days of Koch and Pasteur; nonetheless, the challenges of proving causation are at least as daunting as they were in the late 1800s. Although we will almost certainly continue to accumulate low-hanging fruit, where simple relationships will be found between the presence of a cultivatable agent and a disease, these successes will be increasingly infrequent. The future of the field rests instead in our ability to follow footprints of infectious agents that cannot be characterized using classical microbiological techniques and to develop the laboratory and computational infrastructure required to dissect complex host-microbe interactions. I have tried to refine the criteria used by Koch and successors to prove linkage to disease. These refinements are working constructs that will continue to evolve in light of new technologies, new models, and new insights. What will endure is the excitement of the chase. Happy hunting! PMID:20805403
Crash data analyses for vehicle-to-infrastructure communications for safety applications.
DOT National Transportation Integrated Search
2012-11-01
This report presents the potential safety benefits of wireless communication between the roadway infrastructure and vehicles, : (i.e., vehicle-to-infrastructure (V2I) safety). Specifically, it identifies the magnitude, characteristics, and cost of cr...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-14
..., addressed the required infrastructure elements for the 1997 8-hour ozone NAAQS, however the subject of this notice is limited to infrastructure elements 110(a)(2)(C) and (J). All other applicable Tennessee infrastructure elements will be addressed in a separate rulemaking. DATES: Effective Date: This rule will be...
The stock-flow model of spatial data infrastructure development refined by fuzzy logic.
Abdolmajidi, Ehsan; Harrie, Lars; Mansourian, Ali
2016-01-01
The system dynamics technique has been demonstrated to be an appropriate method by which to model and simulate the development of spatial data infrastructures (SDI). An SDI is a collaborative effort to manage and share spatial data at different political and administrative levels. It comprises various dynamically interacting quantitative and qualitative (linguistic) variables. To incorporate linguistic variables and their joint effects in an SDI-development model more effectively, we suggest employing fuzzy logic. Not all fuzzy models are able to model the dynamic behavior of SDIs properly. Therefore, this paper aims to investigate different fuzzy models and their suitability for modeling SDIs. To that end, two inference and two defuzzification methods were used for the fuzzification of the joint effect of two variables in an existing SDI model. The results show that Average-Average inference and Center of Area defuzzification can better model the dynamics of SDI development.
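To illustrate the winning combination the paper identifies, the sketch below computes a joint effect of two linguistic variables using triangular membership functions, Average inference (the two inputs' memberships are averaged per level), and Center-of-Area defuzzification. The membership definitions are illustrative assumptions, not those of the SDI model:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

levels = {"low": (-0.5, 0.0, 0.5), "medium": (0.0, 0.5, 1.0), "high": (0.5, 1.0, 1.5)}
y = np.linspace(0, 1, 501)            # universe of the output variable

def joint_effect(x1, x2):
    # Average inference: combine the two inputs' memberships per level.
    mu = {k: 0.5 * (tri(x1, *v) + tri(x2, *v)) for k, v in levels.items()}
    # Clip each output set at its activation and aggregate by maximum.
    agg = np.zeros_like(y)
    for k, v in levels.items():
        agg = np.maximum(agg, np.minimum(mu[k], tri(y, *v)))
    # Center of Area defuzzification: centroid of the aggregated set.
    return np.trapz(agg * y, y) / np.trapz(agg, y)

print(round(joint_effect(0.9, 0.3), 3))   # a single crisp joint-effect value
```

Swapping the averaging line for a minimum, or the centroid for a mean-of-maxima, reproduces the kind of inference/defuzzification comparison the study performs.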
A Robust and Scalable Software Library for Parallel Adaptive Refinement on Unstructured Meshes
NASA Technical Reports Server (NTRS)
Lou, John Z.; Norton, Charles D.; Cwik, Thomas A.
1999-01-01
The design and implementation of Pyramid, a software library for performing parallel adaptive mesh refinement (PAMR) on unstructured meshes, is described. This software library can be easily used in a variety of unstructured parallel computational applications, including parallel finite element, parallel finite volume, and parallel visualization applications using triangular or tetrahedral meshes. The library contains a suite of well-designed and efficiently implemented modules that perform operations in a typical PAMR process. Among these are mesh quality control during successive parallel adaptive refinement (typically guided by a local-error estimator), parallel load-balancing, and parallel mesh partitioning using the ParMeTiS partitioner. The Pyramid library is implemented in Fortran 90 with an interface to the Message-Passing Interface (MPI) library, supporting code efficiency, modularity, and portability. An EM waveguide filter application, adaptively refined using the Pyramid library, is illustrated.
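The core loop of error-guided adaptive refinement — flag elements whose local error estimate exceeds a tolerance, then subdivide them — is easy to show in miniature. The sketch below performs 1-to-4 midpoint subdivision of flagged triangles and ignores everything that makes a real PAMR library hard (hanging nodes, conformity, partitioning, parallelism); it is an illustration, not Pyramid:

```python
import numpy as np

def refine_flagged(verts, tris, flags):
    """Subdivide each flagged triangle into four via edge midpoints.
    verts: (n, 2) array; tris: list of 3-tuples of vertex indices."""
    verts = list(map(tuple, verts))
    new_tris = []
    for tri, flagged in zip(tris, flags):
        if not flagged:
            new_tris.append(tri)
            continue
        a, b, c = tri
        mids = []
        for u, v in ((a, b), (b, c), (c, a)):
            m = tuple((np.asarray(verts[u]) + np.asarray(verts[v])) / 2)
            verts.append(m)           # duplicate midpoints allowed for brevity
            mids.append(len(verts) - 1)
        ab, bc, ca = mids
        new_tris += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return np.array(verts), new_tris

verts = np.array([[0, 0], [1, 0], [0, 1]], float)
tris = [(0, 1, 2)]
flags = [True]                        # e.g. local error estimate > tolerance
verts, tris = refine_flagged(verts, tris, flags)
print(len(tris))                      # 4 child triangles
```

In a production library this step is followed by midpoint deduplication, conformity fixes for neighbors, quality control, and repartitioning for load balance, which are exactly the modules the abstract lists.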
DOT National Transportation Integrated Search
2004-01-01
This study attempted to identify new materials that might have applications in highway infrastructure that could lead to savings for the Virginia Department of Transportation (VDOT) and other transportation agencies. This search identified 47 materia...
Cafe: A Generic Configurable Customizable Composite Cloud Application Framework
NASA Astrophysics Data System (ADS)
Mietzner, Ralph; Unger, Tobias; Leymann, Frank
In this paper we present Cafe (Composite Application Framework), an approach to describe configurable composite service-oriented applications and to automatically provision them across different providers. Cafe enables independent software vendors to describe their composite service-oriented applications and the components that are used to assemble them. Components can be internal or external to the application and can be deployed in any of the delivery models present in the cloud. The components are annotated with requirements for the infrastructure they will later run on. Providers, on the other hand, advertise their infrastructure services by describing them as infrastructure capabilities. The separation of software vendors and providers enables end users and providers to follow a best-of-breed strategy by combining arbitrary applications with arbitrary providers. We show how such applications can be automatically provisioned and present an architecture and a prototype that implements the concepts.
76 FR 36103 - Combined Notice of Filings #1
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-21
...-2077-005. Applicants: Delaware City Refining Company LLC, PBF Power Marketing LLC. Description: Delaware City Refining Company LLC, Triennial Market Power Analysis and Notice of Change in Status. Filed Date..., 2011. Docket Numbers: ER10-2074-001; ER10-2097-003. Applicants: Kansas City Power & Light Company, KCP...
40 CFR 80.101 - Standards applicable to refiners and importers.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 17 2013-07-01 2013-07-01 false Standards applicable to refiners and importers. 80.101 Section 80.101 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR...)(2); (ii) A description of the hardships that make it infeasible, on a cost and/or technological...
40 CFR 80.101 - Standards applicable to refiners and importers.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 17 2014-07-01 2014-07-01 false Standards applicable to refiners and importers. 80.101 Section 80.101 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR...)(2); (ii) A description of the hardships that make it infeasible, on a cost and/or technological...
DOT National Transportation Integrated Search
2009-05-01
As a major ITS initiative, the Vehicle Infrastructure Integration (VII) program is to revolutionize : transportation by creating an enabling communication infrastructure that will open up a wide range of : safety applications. The road-condition warn...
DOT National Transportation Integrated Search
2007-01-01
Vehicle Infrastructure Integration (VII) involves the two-way wireless transmission of data from vehicle-to-vehicle and vehicle-to-infrastructure utilizing Dedicated Short Range Communications (DSRC). VII will enable the development of weather-relate...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-07
... DEPARTMENT OF COMMERCE International Trade Administration Executive-Led Indonesia Vietnam... the Notice of the Executive-Led Indonesia Vietnam Infrastructure Business Development Mission... Timeframe for Recruitment and Applications section of the Notice of the Indonesia Vietnam Infrastructure...
RAPPORT: running scientific high-performance computing applications on the cloud.
Cohen, Jeremy; Filippis, Ioannis; Woodbridge, Mark; Bauer, Daniela; Hong, Neil Chue; Jackson, Mike; Butcher, Sarah; Colling, David; Darlington, John; Fuchs, Brian; Harvey, Matt
2013-01-28
Cloud computing infrastructure is now widely used in many domains, but one area where there has been more limited adoption is research computing, in particular for running scientific high-performance computing (HPC) software. The Robust Application Porting for HPC in the Cloud (RAPPORT) project took advantage of existing links between computing researchers and application scientists in the fields of bioinformatics, high-energy physics (HEP) and digital humanities, to investigate running a set of scientific HPC applications from these domains on cloud infrastructure. In this paper, we focus on the bioinformatics and HEP domains, describing the applications and target cloud platforms. We conclude that, while there are many factors that need consideration, there is no fundamental impediment to the use of cloud infrastructure for running many types of HPC applications and, in some cases, there is potential for researchers to benefit significantly from the flexibility offered by cloud platforms.
NASA Astrophysics Data System (ADS)
Leka, K. D.; Barnes, Graham; Wagner, Eric
2018-04-01
A classification infrastructure built upon Discriminant Analysis (DA) has been developed at NorthWest Research Associates for examining the statistical differences between samples of two known populations. Originating to examine the physical differences between flare-quiet and flare-imminent solar active regions, we describe herein some details of the infrastructure including: parametrization of large datasets, schemes for handling "null" and "bad" data in multi-parameter analysis, application of non-parametric multi-dimensional DA, an extension through Bayes' theorem to probabilistic classification, and methods invoked for evaluating classifier success. The classifier infrastructure is applicable to a wide range of scientific questions in solar physics. We demonstrate its application to the question of distinguishing flare-imminent from flare-quiet solar active regions, updating results from the original publications that were based on different data and much smaller sample sizes. Finally, as a demonstration of "Research to Operations" efforts in the space-weather forecasting context, we present the Discriminant Analysis Flare Forecasting System (DAFFS), a near-real-time operationally-running solar flare forecasting tool that was developed from the research-directed infrastructure.
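The probabilistic-classification step via Bayes' theorem described above can be illustrated with a short Python sketch (assuming scikit-learn; the Gaussian kernel, bandwidth and prior are placeholder choices, not the settings used by DAFFS):

    import numpy as np
    from sklearn.neighbors import KernelDensity

    def fit_density(X):
        # non-parametric estimate of p(x | class) from an (n_samples, n_features) array
        return KernelDensity(kernel="gaussian", bandwidth=0.5).fit(X)

    def flare_probability(x, kde_flare, kde_quiet, prior_flare):
        # Bayes' theorem: P(flare | x) = p(x | flare) P(flare) / p(x)
        p_f = np.exp(kde_flare.score_samples(x)) * prior_flare
        p_q = np.exp(kde_quiet.score_samples(x)) * (1.0 - prior_flare)
        return p_f / (p_f + p_q)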
Sentis, Manuel Lorenzo; Gable, Carl W.
2017-06-15
There are many applications in science and engineering modeling where an accurate representation of a complex model geometry in the form of a mesh is important. In applications of flow and transport in subsurface porous media, this is manifest in models that must capture complex geologic stratigraphy, structure (faults, folds, erosion, deposition) and infrastructure (tunnels, boreholes, excavations). Model setup, defined as the activities of geometry definition, mesh generation (creation, optimization, modification, refine, de-refine, smooth), assigning material properties, initial conditions and boundary conditions, requires specialized software tools to automate and streamline the process. In addition, some model setup tools will provide more utility if they are designed to interface with and meet the needs of a particular flow and transport software suite. A control volume discretization that uses a two-point flux approximation is, for example, most accurate when the underlying control volumes are 2D or 3D Voronoi tessellations. In this paper we present the coupling of LaGriT, a mesh generation and model setup software suite, and TOUGH2 to model subsurface flow problems, and we show an example of how LaGriT can be used as a model setup tool for the generation of a Voronoi mesh for the simulation program TOUGH2. To generate the MESH file for TOUGH2 from the LaGriT output, a standalone module, Lagrit2Tough2, was developed, which is presented here and will be included in a future release of LaGriT. In this paper an alternative method to generate a Voronoi mesh for TOUGH2 with LaGriT is presented; thanks to the modular, command-based structure of LaGriT, this method is well suited to generating meshes for complex models.
NASA Astrophysics Data System (ADS)
Sentís, Manuel Lorenzo; Gable, Carl W.
2017-11-01
There are many applications in science and engineering modeling where an accurate representation of a complex model geometry in the form of a mesh is important. In applications of flow and transport in subsurface porous media, this is manifest in models that must capture complex geologic stratigraphy, structure (faults, folds, erosion, deposition) and infrastructure (tunnels, boreholes, excavations). Model setup, defined as the activities of geometry definition, mesh generation (creation, optimization, modification, refine, de-refine, smooth), assigning material properties, initial conditions and boundary conditions, requires specialized software tools to automate and streamline the process. In addition, some model setup tools will provide more utility if they are designed to interface with and meet the needs of a particular flow and transport software suite. A control volume discretization that uses a two-point flux approximation is, for example, most accurate when the underlying control volumes are 2D or 3D Voronoi tessellations. In this paper we present the coupling of LaGriT, a mesh generation and model setup software suite, and TOUGH2 (Pruess et al., 1999) to model subsurface flow problems, and we show an example of how LaGriT can be used as a model setup tool for the generation of a Voronoi mesh for the simulation program TOUGH2. To generate the MESH file for TOUGH2 from the LaGriT output, a standalone module, Lagrit2Tough2, was developed, which is presented here and will be included in a future release of LaGriT. In this paper an alternative method to generate a Voronoi mesh for TOUGH2 with LaGriT is presented; thanks to the modular, command-based structure of LaGriT, this method is well suited to generating meshes for complex models.
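As a rough illustration of the Voronoi-mesh step (not of the Lagrit2Tough2 module itself), the cell geometry that a TOUGH2 MESH file encodes can be derived from a generator point set with SciPy; the fixed-column formats of the real ELEME and CONNE blocks are omitted here:

    import numpy as np
    from scipy.spatial import Voronoi, ConvexHull

    pts = np.random.rand(50, 2)              # generator points, one per cell
    vor = Voronoi(pts)
    for i, ridx in enumerate(vor.point_region):
        region = vor.regions[ridx]
        if not region or -1 in region:
            continue                         # skip unbounded boundary cells
        cell_area = ConvexHull(vor.vertices[region]).volume  # in 2-D, volume == area
        # an ELEME record would carry this cell volume/area; CONNE records hold
        # the interface areas and centroid distances between neighbouring cells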
ERIC Educational Resources Information Center
National Inst. of Standards and Technology, Gaithersburg, MD.
Intended for public comment and discussion, this document is the second volume of papers in which the Information Infrastructure Task Force has attempted to articulate in clear terms, with sufficient detail, how improvements in the National Information Infrastructure (NII) can help meet other social goals. These are not plans to be enacted, but…
Abstracting application deployment on Cloud infrastructures
NASA Astrophysics Data System (ADS)
Aiftimiei, D. C.; Fattibene, E.; Gargana, R.; Panella, M.; Salomoni, D.
2017-10-01
Deploying a complex application on a Cloud-based infrastructure can be a challenging task. In this contribution we present an approach for Cloud-based deployment of applications and its present or future implementation in the framework of several projects, such as "!CHAOS: a cloud of controls" [1], a project funded by MIUR (Italian Ministry of Research and Education) to create a Cloud-based deployment of a control system and data acquisition framework, "INDIGO-DataCloud" [2], an EC H2020 project targeting among other things high-level deployment of applications on hybrid Clouds, and "Open City Platform" [3], an Italian project aiming to provide open Cloud solutions for Italian Public Administrations. We opted to use an orchestration service to hide the complex deployment of the application components, and to build an abstraction layer on top of the orchestration one. Through the Heat [4] orchestration service, we prototyped a dynamic, on-demand, scalable platform of software components, based on OpenStack infrastructures. On top of the orchestration service we developed a prototype of a web interface exploiting the Heat APIs. The user can start an instance of the application without having knowledge about the underlying Cloud infrastructure and services. Moreover, the platform instance can be customized by choosing parameters related to the application, such as the size of a File System or the number of instances of a NoSQL DB cluster. As soon as the desired platform is running, the web interface offers the possibility to scale some infrastructure components. In this contribution we describe the solution design and implementation, based on the application requirements, the details of the development of both the Heat templates and the web interface, together with possible exploitation strategies of this work in Cloud data centers.
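The kind of call such a web interface hides can be sketched against Heat's REST API; the endpoint, token and parameter names below are hypothetical, and a real template would declare the application resources:

    import requests

    heat = "http://cloud.example.org:8004/v1/TENANT_ID"       # hypothetical endpoint
    headers = {"X-Auth-Token": "TOKEN"}
    body = {
        "stack_name": "app-platform",
        "template": {"heat_template_version": "2013-05-23",
                     "resources": {}},                        # resources elided
        "parameters": {"fs_size_gb": 100, "nosql_nodes": 3},  # user-chosen knobs
    }
    requests.post(heat + "/stacks", json=body, headers=headers)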
DOT National Transportation Integrated Search
2015-08-01
This document is the third of a seven-volume report that describes the Performance Requirements for the connected vehicle vehicle-to-infrastructure (V2I) safety applications developed for the U.S. Department of Transportation (U.S. DOT). This volume d...
DOT National Transportation Integrated Search
2015-08-01
This document is the seventh of a seven-volume report that describes the Performance Requirements for the connected vehicle vehicle-to-infrastructure (V2I) safety applications developed for the U.S. Department of Transportation (U.S. DOT). This volume...
DOT National Transportation Integrated Search
2015-08-01
This document is the second of a seven-volume report that describes the Performance Requirements for the connected vehicle vehicle-to-infrastructure (V2I) safety applications developed for the U.S. Department of Transportation (U.S. DOT). This volume ...
78 FR 71607 - Notice of Receipt of Petitions for a Waiver of the Renewable Fuel Standard
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-29
... refining companies submitted individual petitions to the Administrator that also request a waiver of the... waiver of the 2014 applicable volumes under the RFS. Subsequently, several refining companies submitted... renewable fuel (and RINs) will lead to an inadequate supply of gasoline and diesel, because refiners and...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-14
... Refining & Marketing Company LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes... proceeding of Tesoro Refining & Marketing Company LLC's application for market-based rate authority, with an... of protests and interventions in lieu of paper, using the FERC Online links at http://www.ferc.gov...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-14
... Refining & Marketing Company LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes... proceeding, of Tesoro Refining & Marketing Company LLC's application for market- based rate authority, with... submission of protests and interventions in lieu of paper, using the FERC Online links at http://www.ferc.gov...
Interactive Model-Centric Systems Engineering (IMCSE) Phase 1
2014-09-30
and supporting infrastructure ...testing. 4. Supporting MPTs. During Phase 1, the opportunity to develop several MPTs to support IMCSE arose, including supporting infrastructure ...Analysis will be completed and tested with a case application, along with preliminary supporting infrastructure, which will then be used to inform the
USDA-ARS?s Scientific Manuscript database
Infrastructure-as-a-service (IaaS) clouds provide a new medium for deployment of environmental modeling applications. Harnessing advancements in virtualization, IaaS clouds can provide dynamic scalable infrastructure to better support scientific modeling computational demands. Providing scientific m...
Advanced e-Infrastructures for Civil Protection applications: the CYCLOPS Project
NASA Astrophysics Data System (ADS)
Mazzetti, P.; Nativi, S.; Verlato, M.; Ayral, P. A.; Fiorucci, P.; Pina, A.; Oliveira, J.; Sorani, R.
2009-04-01
During the full cycle of emergency management, Civil Protection operative procedures involve many actors belonging to several institutions (civil protection agencies, public administrations, research centers, etc.) playing different roles (decision-makers, data and service providers, emergency squads, etc.). In this context the sharing of information is a vital requirement to make correct and effective decisions. Therefore a Europe-wide technological infrastructure providing distributed and coordinated access to different kinds of resources (data, information, services, expertise, etc.) could enhance existing Civil Protection applications and even enable new ones. Such a European Civil Protection e-Infrastructure should be designed taking into account the specific requirements of Civil Protection applications and the state of the art in the scientific and technological disciplines which could make emergency management more effective. In recent years Grid technologies have reached a mature state, providing a platform for secure and coordinated resource sharing between the participants collected in so-called Virtual Organizations. Moreover, Earth and Space Sciences Informatics provides the conceptual tools for modeling the geospatial information shared in Civil Protection applications during its entire lifecycle. Therefore a European Civil Protection e-Infrastructure might be based on a Grid platform enhanced with Earth Sciences services. In the context of the 6th Framework Programme, the EU co-funded project CYCLOPS (CYber-infrastructure for CiviL protection Operative ProcedureS), ended in December 2008, addressed the problem of defining the requirements and identifying the research strategies and innovation guidelines towards an advanced e-Infrastructure for Civil Protection. Starting from the requirements analysis, CYCLOPS proposed an architectural framework for a European Civil Protection e-Infrastructure. This architectural framework has been evaluated through the development of prototypes of two operative applications used by the Italian Civil Protection for Wild Fires Risk Assessment (RISICO) and by the French Civil Protection for Flash Flood Risk Management (SPC-GD). The results of these studies and proofs of concept have been used as the basis for the definition of research and innovation strategies aimed at the detailed design and implementation of the infrastructure. In particular, the main research themes and topics to be addressed have been identified and detailed. Finally, the obstacles to the innovation required for the adoption of this infrastructure, and possible strategies to overcome them, have been discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeRosa, Sean E.; Flanagan, Tatiana Paz
The National Transportation Fuels Model was used to simulate a hypothetical increase in North Slope of Alaska crude oil production. The results show that the magnitude of production utilized depends in part on the ability of crude oil and refined products infrastructure in the contiguous United States to absorb and adjust to the additional supply. Decisions about expanding North Slope production can use the National Transportation Fuels Model to take into account the effects on crude oil flows in the contiguous United States.
An authentication infrastructure for today and tomorrow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engert, D.E.
1996-06-01
The Open Software Foundation's Distributed Computing Environment (OSF/DCE) was originally designed to provide a secure environment for distributed applications. By combining it with Kerberos Version 5 from MIT, it can be extended to provide network security as well. This combination can be used to build both an inter- and intra-organizational infrastructure while providing single sign-on for the user with overall improved security. The ESnet community of the Department of Energy is building just such an infrastructure. ESnet has modified these systems to improve their interoperability, while encouraging the developers to incorporate these changes and work more closely together to continue to improve the interoperability. The success of this infrastructure depends on its flexibility to meet the needs of many applications and network security requirements. The open nature of Kerberos, combined with the vendor support of OSF/DCE, provides the infrastructure for today and tomorrow.
Sousa, V; Matos, J P; Almeida, N; Saldanha Matos, J
2014-01-01
Operation, maintenance and rehabilitation comprise the main concerns of wastewater infrastructure asset management. Given the nature of the service provided by a wastewater system and the characteristics of the supporting infrastructure, technical issues are relevant to support asset management decisions. In particular, in densely urbanized areas served by large, complex and aging sewer networks, the sustainability of the infrastructures largely depends on the implementation of an efficient asset management system. The efficiency of such a system may be enhanced with technical decision support tools. This paper describes the role of artificial intelligence tools such as artificial neural networks and support vector machines for assisting the planning of operation and maintenance activities of wastewater infrastructures. A case study of the application of this type of tool to the wastewater infrastructures of Sistema de Saneamento da Costa do Estoril is presented.
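A minimal sketch of how such a decision-support tool could be trained, assuming scikit-learn and entirely synthetic condition data (in practice the features and labels would come from inspection records of the sewer network):

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.random((200, 4))          # e.g. pipe age, diameter, length, burial depth
    y = (X[:, 0] > 0.7).astype(int)   # 1 = reach likely to need maintenance soon
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    model = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)
    print(model.score(X_te, y_te))    # hold-out accuracy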
China national space remote sensing infrastructure and its application
NASA Astrophysics Data System (ADS)
Li, Ming
2016-07-01
Space infrastructure is a space system that provides communication, navigation and remote sensing services for a broad range of users. The China National Space Remote Sensing Infrastructure includes remote sensing satellites, the ground system and related systems. According to the principle of multiple functions on one satellite, multiple satellites in one constellation and collaboration between constellations, series of land observation, ocean observation and atmosphere observation satellites have been proposed, with high, middle and low resolution, flying on different orbits and carrying different payloads, to achieve a high capability for global synthetic observation. With such an infrastructure, we can carry out research on climate change, geophysics, global surveying and mapping, water resources management, safety and emergency management, and so on. This paper gives a detailed introduction to the planning of this infrastructure and its application in different areas, especially the international cooperation potential in the so-called One Belt and One Road space information corridor.
40 CFR 61.270 - Applicability and designation of sources.
Code of Federal Regulations, 2011 CFR
2011-07-01
... for Benzene Emissions From Benzene Storage Vessels § 61.270 Applicability and designation of sources. (a) The source to which this subpart applies is each storage vessel that is storing benzene having a... Benzene, ASTM D835-85 for Refined Benzene-485, ASTM D2359-85a or 93 for Refined Benzene-535, and ASTM...
40 CFR 61.270 - Applicability and designation of sources.
Code of Federal Regulations, 2014 CFR
2014-07-01
... for Benzene Emissions From Benzene Storage Vessels § 61.270 Applicability and designation of sources. (a) The source to which this subpart applies is each storage vessel that is storing benzene having a... Benzene, ASTM D835-85 for Refined Benzene-485, ASTM D2359-85a or 93 for Refined Benzene-535, and ASTM...
40 CFR 61.270 - Applicability and designation of sources.
Code of Federal Regulations, 2010 CFR
2010-07-01
... for Benzene Emissions From Benzene Storage Vessels § 61.270 Applicability and designation of sources. (a) The source to which this subpart applies is each storage vessel that is storing benzene having a... Benzene, ASTM D835-85 for Refined Benzene-485, ASTM D2359-85a or 93 for Refined Benzene-535, and ASTM...
40 CFR 61.270 - Applicability and designation of sources.
Code of Federal Regulations, 2012 CFR
2012-07-01
... for Benzene Emissions From Benzene Storage Vessels § 61.270 Applicability and designation of sources. (a) The source to which this subpart applies is each storage vessel that is storing benzene having a... Benzene, ASTM D835-85 for Refined Benzene-485, ASTM D2359-85a or 93 for Refined Benzene-535, and ASTM...
40 CFR 61.270 - Applicability and designation of sources.
Code of Federal Regulations, 2013 CFR
2013-07-01
... for Benzene Emissions From Benzene Storage Vessels § 61.270 Applicability and designation of sources. (a) The source to which this subpart applies is each storage vessel that is storing benzene having a... Benzene, ASTM D835-85 for Refined Benzene-485, ASTM D2359-85a or 93 for Refined Benzene-535, and ASTM...
Modeling, Simulation and Analysis of Public Key Infrastructure
NASA Technical Reports Server (NTRS)
Liu, Yuan-Kwei; Tuey, Richard; Ma, Paul (Technical Monitor)
1998-01-01
Security is an essential part of network communication. Advances in cryptography have provided solutions to many network security requirements. Public Key Infrastructure (PKI) is the foundation of cryptography applications. The main objective of this research is to design a model to simulate a reliable, scalable, manageable, and high-performance public key infrastructure. We build a model to simulate the NASA public key infrastructure using SimProcess and MATLAB software. The simulation spans from the top level all the way down to the computation needed for encryption, decryption, digital signatures, and a secure web server. The secure web server application could be utilized in wireless communications. The results of the simulation are analyzed and confirmed using queueing theory.
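The queueing-theory cross-check mentioned above amounts to closed-form results of the following kind; an M/M/1 sketch with illustrative rates only:

    def mm1_metrics(arrival_rate, service_rate):
        # classic M/M/1 formulas used to sanity-check a simulated server
        rho = arrival_rate / service_rate         # utilisation
        w = 1.0 / (service_rate - arrival_rate)   # mean time in system
        lq = rho ** 2 / (1.0 - rho)               # mean queue length
        return rho, w, lq

    # e.g. 40 signing requests/s offered to a server that handles 50 requests/s
    print(mm1_metrics(40.0, 50.0))    # (0.8, 0.1 s, 3.2 queued requests)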
DOT National Transportation Integrated Search
2015-08-01
This document is the fifth of a seven-volume report that describes the Performance Requirements for the connected vehicle vehicle-to-infrastructure (V2I) safety applications developed for the U.S. Department of Transportation (U.S. DOT). This volume d...
DOT National Transportation Integrated Search
2015-08-01
This document is the sixth of a seven-volume report that describes the Performance Requirements for the connected vehicle vehicle-to-infrastructure (V2I) safety applications developed for the U.S. Department of Transportation (U.S. DOT). This volume d...
DOT National Transportation Integrated Search
2015-08-01
This document is the fourth of a seven-volume report that describes the Performance Requirements for the connected vehicle vehicle-to-infrastructure (V2I) safety applications developed for the U.S. Department of Transportation (U.S. DOT). This volume ...
Exchange of Veterans Affairs medical data using national and local networks.
Dayhoff, R E; Maloney, D L
1992-12-17
Remote data exchange is extremely useful to a number of medical applications. It requires an infrastructure including systems, network and software tools. With such an infrastructure, existing local applications can be extended to serve national needs. There are many approaches to providing remote data exchange. Selection of an approach for an application requires balancing of various factors, including the need for rapid interactive access to data and ad hoc queries, the adequacy of access to predefined data sets, the need for an integrated view of the data, the ability to provide adequate security protection, the amount of data required, and the time frame in which data is required. The applications described here demonstrate new ways that the VA is reaping benefits from its infrastructure and its compatible integrated hospital information systems located at its facilities. The needs that have been met are also needs of private hospitals. However, in many cases the infrastructure to allow data exchange is not present. The VA's experiences may serve to establish the benefits that can be obtained by all hospitals.
Potential markets for advanced satellite communications
NASA Astrophysics Data System (ADS)
Adamson, Steven; Roberts, David; Schubert, Leroy; Smith, Brian; Sogegian, Robert; Walters, Daniel
1993-09-01
This report identifies trends in the volume and type of traffic offered to the U.S. domestic communications infrastructure and extrapolates these trends through the year 2011. To describe how telecommunications service providers are adapting to the identified trends, this report assesses the status, plans, and capacity of the domestic communications infrastructure. Cable, satellite, and radio components of the infrastructure are examined separately. The report also assesses the following major applications making use of the infrastructure: (1) Broadband services, including Broadband Integrated Services Digital Network (BISDN), Switched Multimegabit Data Service (SMDS), and frame relay; (2) mobile services, including voice, location, and paging; (3) Very Small Aperture Terminals (VSAT), including mesh VSAT; and (4) Direct Broadcast Satellite (DBS) for audio and video. The report associates satellite implementation of specific applications with market segments appropriate to their features and capabilities. The volume and dollar value of these market segments are estimated. For the satellite applications able to address the needs of significant market segments, the report also examines the potential of each satellite-based application to capture business from alternative technologies.
Potential markets for advanced satellite communications
NASA Technical Reports Server (NTRS)
Adamson, Steven; Roberts, David; Schubert, Leroy; Smith, Brian; Sogegian, Robert; Walters, Daniel
1993-01-01
This report identifies trends in the volume and type of traffic offered to the U.S. domestic communications infrastructure and extrapolates these trends through the year 2011. To describe how telecommunications service providers are adapting to the identified trends, this report assesses the status, plans, and capacity of the domestic communications infrastructure. Cable, satellite, and radio components of the infrastructure are examined separately. The report also assesses the following major applications making use of the infrastructure: (1) Broadband services, including Broadband Integrated Services Digital Network (BISDN), Switched Multimegabit Data Service (SMDS), and frame relay; (2) mobile services, including voice, location, and paging; (3) Very Small Aperture Terminals (VSAT), including mesh VSAT; and (4) Direct Broadcast Satellite (DBS) for audio and video. The report associates satellite implementation of specific applications with market segments appropriate to their features and capabilities. The volume and dollar value of these market segments are estimated. For the satellite applications able to address the needs of significant market segments, the report also examines the potential of each satellite-based application to capture business from alternative technologies.
Operations dashboard: comparative study
NASA Astrophysics Data System (ADS)
Ramly, Noor Nashriq; Ismail, Ahmad Zuhairi; Aziz, Mohd Haris; Ahmad, Nurul Haszeli
2011-10-01
In this day and age, there is an increasing need for companies to monitor application and infrastructure health. Apart from having proactive measures to secure their applications and infrastructure, many see monitoring dashboards as a crucial investment in disaster preparedness. As companies struggle to find the best solution to cater for their needs and their interest in monitoring their application and infrastructure health, this paper summarizes studies made on several known off-the-shelf operations dashboards and in-house developed dashboards. A few criteria of a good dashboard are collected from previous studies carried out by several researchers and ranked according to importance and business needs. The finalized criteria discussed in later sections are data visualization, performance indicators, dashboard personalization, audit capability and alert/notification. Comparative studies between several popular dashboards were then carried out to determine whether they met the criteria derived from the first exercise. The findings can hopefully be used to educate and provide an overview of selecting the best IT application and infrastructure operations dashboard to suit business needs, which is the main contribution of this paper.
Refined geometric transition and qq-characters
NASA Astrophysics Data System (ADS)
Kimura, Taro; Mori, Hironori; Sugimoto, Yuji
2018-01-01
We show the refinement of the prescription for the geometric transition in the refined topological string theory and, as its application, discuss a possibility to describe qq-characters from the string theory point of view. Though the suggested way to operate the refined geometric transition has passed through several checks, it is additionally found in this paper that the presence of the preferred direction brings a nontrivial effect. We provide the modified formula involving this point. We then apply our prescription of the refined geometric transition to proposing the stringy description of doubly quantized Seiberg-Witten curves called qq-characters in certain cases.
Dimond, Eileen P; Zon, Robin T; Weiner, Bryan J; St Germain, Diane; Denicoff, Andrea M; Dempsey, Kandie; Carrigan, Angela C; Teal, Randall W; Good, Marjorie J; McCaskill-Stevens, Worta; Grubbs, Stephen S
2016-01-01
Several publications have described minimum standards and exemplary attributes for clinical trial sites to improve research quality. The National Cancer Institute (NCI) Community Cancer Centers Program (NCCCP) developed the clinical trial Best Practice Matrix tool to facilitate research program improvements through annual self-assessments and benchmarking. The tool identified nine attributes, each with three progressive levels, to score clinical trial infrastructural elements from less to more exemplary. The NCCCP sites correlated tool use with research program improvements, and the NCI pursued a formative evaluation to refine the interpretability and measurability of the tool. From 2011 to 2013, 21 NCCCP sites self-assessed their programs with the tool annually. During 2013 to 2014, NCI collaborators conducted a five-step formative evaluation of the matrix tool. Sites reported significant increases in level-three scores across the original nine attributes combined (P<.001). Two specific attributes exhibited significant change: clinical trial portfolio diversity and management (P=.0228) and clinical trial communication (P=.0281). The formative evaluation led to revisions, including renaming the Best Practice Matrix as the Clinical Trial Assessment of Infrastructure Matrix (CT AIM), expanding infrastructural attributes from nine to 11, clarifying metrics, and developing a new scoring tool. Broad community input, cognitive interviews, and pilot testing improved the usability and functionality of the tool. Research programs are encouraged to use the CT AIM to assess and improve site infrastructure. Experience within the NCCCP suggests that the CT AIM is useful for improving quality, benchmarking research performance, reporting progress, and communicating program needs with institutional leaders. The tool model may also be useful in disciplines beyond oncology.
Improving water, sanitation and hygiene in health-care facilities, Liberia.
Abrampah, Nana Mensah; Montgomery, Maggie; Baller, April; Ndivo, Francis; Gasasira, Alex; Cooper, Catherine; Frescas, Ruben; Gordon, Bruce; Syed, Shamsuzzoha Babar
2017-07-01
The lack of proper water and sanitation infrastructures and poor hygiene practices in health-care facilities reduces facilities' preparedness and response to disease outbreaks and decreases the communities' trust in the health services provided. To improve water and sanitation infrastructures and hygiene practices, the Liberian health ministry held multistakeholder meetings to develop a national water, sanitation and hygiene and environmental health package. A national train-the-trainer course was held for county environmental health technicians, which included infection prevention and control focal persons; the focal persons acted as change agents. In Liberia, only 45% of 701 surveyed health-care facilities had an improved water source in 2015, and only 27% of these health-care facilities had proper disposal for infectious waste. Local ownership, through engagement of local health workers, was introduced to ensure development and refinement of the package. In-county collaborations between health-care facilities, along with multisectoral collaboration, informed national level direction, which led to increased focus on water and sanitation infrastructures and uptake of hygiene practices to improve the overall quality of service delivery. National level leadership was important to identify a vision and create an enabling environment for changing the perception of water, sanitation and hygiene in health-care provision. The involvement of health workers was central to address basic infrastructure and hygiene practices in health-care facilities and they also worked as stimulators for sustainable change. Further, developing a long-term implementation plan for national level initiatives is important to ensure sustainability.
Sun, Shanhui; Sonka, Milan; Beichel, Reinhard R.
2013-01-01
Recently, the optimal surface finding (OSF) and layered optimal graph image segmentation of multiple objects and surfaces (LOGISMOS) approaches have been reported with applications to medical image segmentation tasks. While providing high levels of performance, these approaches may locally fail in the presence of pathology or other local challenges. Due to the image data variability, finding a suitable cost function that would be applicable to all image locations may not be feasible. This paper presents a new interactive refinement approach for correcting local segmentation errors in automated OSF-based segmentation. A hybrid desktop/virtual reality user interface was developed for efficient interaction with the segmentations, utilizing state-of-the-art stereoscopic visualization technology and advanced interaction techniques. The user interface allows a natural and interactive manipulation of 3-D surfaces. The approach was evaluated on 30 test cases from 18 CT lung datasets, which showed local segmentation errors after employing an automated OSF-based lung segmentation. The experiments exhibited a significant increase in performance in terms of mean absolute surface distance errors (2.54 ± 0.75 mm prior to refinement vs. 1.11 ± 0.43 mm post-refinement, p ≪ 0.001). Speed of the interactions is one of the most important aspects leading to the acceptance or rejection of the approach by users expecting a real-time interaction experience. The average algorithm computing time per refinement iteration was 150 ms, and the average total user interaction time required for reaching complete operator satisfaction per case was about 2 min. This time was mostly spent on human-controlled manipulation of the object to identify whether additional refinement was necessary and to approve the final segmentation result. The reported principle is generally applicable to segmentation problems beyond lung segmentation in CT scans as long as the underlying segmentation utilizes the OSF framework. The two reported segmentation refinement tools were optimized for lung segmentation and might need some adaptation for other application domains. PMID:23415254
Increasing the Cryogenic Toughness of Steels
NASA Technical Reports Server (NTRS)
Rush, H. F.
1986-01-01
Grain-refining heat treatments increase toughness without substantial strength loss. Five alloys were selected for study, all at or near the technological limit. Results showed clearly that the grain sizes of these alloys were refined by such heat treatments and that grain refinement results in a large improvement in toughness without substantial loss in strength. The best improvements were seen in HP-9-4-20 steel, at the low-strength end of the technological limit, and in Maraging 200, at the high-strength end. These alloys, in the grain-refined condition, are considered for model applications in high-Reynolds-number cryogenic wind tunnels.
NASA Astrophysics Data System (ADS)
Mazzetti, P.; Nativi, S.; Verlato, M.; Angelini, V.
2009-04-01
In the context of the EU co-funded project CYCLOPS (http://www.cyclops-project.eu) the problem of designing an advanced e-Infrastructure for Civil Protection (CP) applications has been addressed. As a preliminary step, some studies about European CP systems and operational applications were performed in order to define their specific system requirements. At a higher level it was verified that CP applications are usually conceived to map CP Business Processes involving different levels of processing, including data access, data processing, and output visualization. At their core they usually run one or more Earth Science models for information extraction. The traditional approach based on the development of monolithic applications presents some limitations related to flexibility (e.g. the possibility of running the same models with different input data sources, or different models with the same data sources) and scalability (e.g. launching several runs for different scenarios, or implementing more accurate and computing-demanding models). Flexibility can be addressed by adopting a modular design based on a SOA and standard services and models, such as OWS and ISO for geospatial services. Distributed computing and storage solutions could improve scalability. Based on such considerations, an architectural framework has been defined. It is made of a Web Service layer providing advanced services for CP applications (e.g. standard geospatial data sharing and processing services) working on the underlying Grid platform. This framework has been tested through the development of prototypes as proofs of concept. These theoretical studies and proofs of concept demonstrated that although Grid and geospatial technologies would be able to provide significant benefits to CP applications in terms of scalability and flexibility, current platforms are designed taking into account requirements different from those of CP. In particular, CP applications have strict requirements in terms of: a) Real-Time capabilities, privileging time-of-response over accuracy; b) Security services to support complex data policies and trust relationships; c) Interoperability with existing or planned infrastructures (e.g. e-Government, INSPIRE-compliant, etc.). These requirements are in fact the main reason why CP applications differ from Earth Science applications. Therefore further research is required to design and implement an advanced e-Infrastructure satisfying those specific requirements. In particular, five themes where further research is required were identified: Grid Infrastructure Enhancement, Advanced Middleware for CP Applications, Security and Data Policies, CP Applications Enablement, and Interoperability. For each theme several research topics were proposed and detailed. They are targeted at solving specific problems for the implementation of an effective operational European e-Infrastructure for CP applications.
NASA Astrophysics Data System (ADS)
Barr, Jeffrey D.; Gressler, William; Sebag, Jacques; Seriche, Jaime; Serrano, Eduardo
2016-07-01
The civil work, site infrastructure and buildings for the summit facility of the Large Synoptic Survey Telescope (LSST) are among the first major elements that need to be designed, bid and constructed to support the subsequent integration of the dome, telescope, optics, camera and supporting systems. As the contracts for those other major subsystems now move forward under the management of the LSST Telescope and Site (T and S) team, there has been inevitable and beneficial evolution in their designs, which has resulted in significant modifications to the facility and infrastructure. The earliest design requirements for the LSST summit facility were first documented in 2005, its contracted full design was initiated in 2010, and construction began in January, 2015. During that entire development period, and extending now roughly halfway through construction, there continue to be necessary modifications to the facility design resulting from the refinement of interfaces to other major elements of the LSST project and now, during construction, due to unanticipated field conditions. Changes from evolving interfaces have principally involved the telescope mount, the dome and mirror handling/coating facilities which have included significant variations in mass, dimensions, heat loads and anchorage conditions. Modifications related to field conditions have included specifying and testing alternative methods of excavation and contending with the lack of competent rock substrate where it was predicted to be. While these and other necessary changes are somewhat specific to the LSST project and site, they also exemplify inherent challenges related to the typical timeline for the design and construction of astronomical observatory support facilities relative to the overall development of the project.
Medical image informatics infrastructure design and applications.
Huang, H K; Wong, S T; Pietka, E
1997-01-01
Picture archiving and communication systems (PACS) integrate multimodality images and health information systems and are designed to improve the operation of a radiology department. As it evolves, a PACS becomes a hospital image document management system with a voluminous repository of images and related data files. A medical image informatics infrastructure can be designed to take advantage of existing data, providing PACS with add-on value for health care service, research, and education. A medical image informatics infrastructure (MIII) consists of the following components: medical images and associated data (including the PACS database), image processing, data/knowledge base management, visualization, graphic user interface, communication networking, and application-oriented software. This paper describes these components and their logical connections, and illustrates some applications based on the concept of the MIII.
Data Center Consolidation: A Step towards Infrastructure Clouds
NASA Astrophysics Data System (ADS)
Winter, Markus
Application service providers face enormous challenges and rising costs in managing and operating a growing number of heterogeneous system and computing landscapes. Limitations of traditional computing environments force IT decision-makers to reorganize computing resources within the data center, as continuous growth leads to an inefficient utilization of the underlying hardware infrastructure. This paper discusses a way for infrastructure providers to improve data center operations based on the findings of a case study on resource utilization of very large business applications and presents an outlook beyond server consolidation endeavors, transforming corporate data centers into compute clouds.
Advanced Optical Burst Switched Network Concepts
NASA Astrophysics Data System (ADS)
Nejabati, Reza; Aracil, Javier; Castoldi, Piero; de Leenheer, Marc; Simeonidou, Dimitra; Valcarenghi, Luca; Zervas, Georgios; Wu, Jian
In recent years, as the bandwidth and the speed of networks have increased significantly, a new generation of network-based applications using the concept of distributed computing and collaborative services is emerging (e.g., Grid computing applications). The use of the available fiber and DWDM infrastructure for these applications is a logical choice, offering huge amounts of cheap bandwidth and ensuring global reach of computing resources [230]. Currently, there is a great deal of interest in deploying optical circuit (wavelength) switched network infrastructure for distributed computing applications that require long-lived wavelength paths and address the specific needs of a small number of well-known users. Typical users are particle physicists who, due to their international collaborations and experiments, generate enormous amounts of data (Petabytes per year). These users require a network infrastructure that can support processing and analysis of large datasets through globally distributed computing resources [230]. However, providing wavelength-granularity bandwidth services is not an efficient and scalable solution for applications and services that address a wider base of user communities with different traffic profiles and connectivity requirements. Examples of such applications may be: scientific collaboration on a smaller scale (e.g., bioinformatics, environmental research), distributed virtual laboratories (e.g., remote instrumentation), e-health, national security and defense, personalized learning environments and digital libraries, and evolving broadband user services (i.e., high-resolution home video editing, real-time rendering, high-definition interactive TV). As a specific example, in e-health services, and in particular mammography applications, stringent network requirements arise due to the size and quantity of images produced by remote mammography. Initial calculations have shown that for 100 patients to be screened remotely, the network would have to securely transport 1.2 GB of data every 30 s [230]. From the above it is clear that these types of applications need a new network infrastructure and transport technology that makes large amounts of bandwidth at subwavelength granularity, together with storage, computation, and visualization resources, potentially available to a wide user base for specified time durations. As these types of collaborative and network-based applications evolve to address a wide range and large number of users, it is infeasible to build dedicated networks for each application type or category. Consequently, there should be an adaptive network infrastructure able to support all application types, each with their own access, network, and resource usage patterns. This infrastructure should offer flexible and intelligent network elements and control mechanisms able to deploy new applications quickly and efficiently.
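The mammography figure quoted above translates directly into a sustained subwavelength bandwidth requirement; a two-line check in Python:

    # 1.2 GB transported securely every 30 s for 100 remotely screened patients
    bits = 1.2e9 * 8                                  # bytes -> bits
    print(f"{bits / 30 / 1e6:.0f} Mbit/s sustained")  # -> 320 Mbit/s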
NASA Astrophysics Data System (ADS)
van Hemert, Jano; Vilotte, Jean-Pierre
2010-05-01
Research in earthquakes and seismology addresses fundamental problems in understanding Earth's internal wave sources and structures, and augments applications addressing societal concerns about natural hazards, energy resources and environmental change. This community is central to the European Plate Observing System (EPOS), the ESFRI initiative in solid Earth Sciences. Global and regional seismology monitoring systems are continuously operated and are transmitting a growing wealth of data from Europe and from around the world. These tremendous volumes of seismograms, i.e., records of ground motions as a function of time, have a definite multi-use attribute, which puts a great premium on open-access data infrastructures that are integrated globally. In Europe, the earthquake and seismology community is part of the European Integrated Data Archives (EIDA) infrastructure and is structured as "horizontal" data services. On top of this distributed data archive system, the community has recently developed, within the EC project NERIES, advanced SOA-based web services and a unified portal system. Enabling advanced analysis of these data by utilising a data-aware distributed computing environment is instrumental to fully exploit the cornucopia of data and to guarantee optimal operation of the high-cost monitoring facilities. The strategy of VERCE is driven by the needs of data-intensive applications in data mining and modelling and will be illustrated through a set of applications. It aims to provide a comprehensive architecture and framework adapted to the scale and the diversity of these applications, and to integrate the community data infrastructure with Grid and HPC infrastructures. A first novel aspect is a service-oriented architecture that provides well-equipped integrated workbenches, with an efficient communication layer between data and Grid infrastructures, augmented with bridges to the HPC facilities. A second novel aspect is the coupling between Grid data analysis and HPC data modelling applications through workflow and data sharing mechanisms. VERCE will develop important interactions with the European infrastructure initiatives in Grid and HPC computing. The VERCE team: CNRS-France (IPG Paris, LGIT Grenoble), UEDIN (UK), KNMI-ORFEUS (Holland), EMSC, INGV (Italy), LMU (Germany), ULIV (UK), BADW-LRZ (Germany), SCAI (Germany), CINECA (Italy)
NASA Astrophysics Data System (ADS)
Reyes López, Yaidel; Roose, Dirk; Recarey Morfa, Carlos
2013-05-01
In this paper, we present a dynamic refinement algorithm for the Smoothed Particle Hydrodynamics (SPH) method. An SPH particle is refined by replacing it with smaller daughter particles, whose positions are calculated using a square pattern centered at the position of the refined particle. We determine both the optimal separation and the smoothing distance of the new particles such that the error produced by the refinement in the gradient of the kernel is small and possible numerical instabilities are reduced. We implemented the dynamic refinement procedure into two different models: one for free surface flows, and one for post-failure flow of non-cohesive soil. The results obtained for the test problems indicate that using the dynamic refinement procedure provides a good trade-off between the accuracy and the cost of the simulations.
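A schematic version of the refinement step in Python; the separation and smoothing ratios below are placeholders for the optimal values the paper derives by minimising the kernel-gradient error:

    import numpy as np

    def refine_particle(pos, h, m, sep_ratio=0.5, h_ratio=0.6):
        # replace one SPH particle by four daughters placed on a square
        # pattern centred at the parent position, conserving total mass
        d = sep_ratio * h
        offsets = np.array([[d, d], [d, -d], [-d, d], [-d, -d]])
        daughters = np.asarray(pos) + offsets     # daughter positions
        return daughters, h_ratio * h, m / 4.0    # positions, smoothing length, mass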
Commercial Technology at the Tactical Edge
2013-06-01
Typical environmental examples are survivability in the face of hostile action, lack of fixed infrastructure, high mobility and ruggedness...Disconnected, Intermittent, and Limited (DIL) Communications Delay Tolerance Mobile Ad Hoc Networks (MANETs) Loss of infrastructure Security Cyber...for Apple's IOS.25 In particular, various vendors have built application infrastructures around the various mobile phone operating systems (OSs) such
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-17
... DEPARTMENT OF LABOR Employment and Training Administration [TA-W-81,145; TA-W-81,145A] Sunoco, Inc., R&M Refining Division, Marcus Hook, PA; Sunoco, Inc., 10 Industrial Hwy, MS4 Building G, Lester, PA; Notice of Affirmative Determination Regarding Application for Reconsideration By application dated March 26, 2012, the United Steel Workers Union...
The TENCompetence Infrastructure: A Learning Network Implementation
NASA Astrophysics Data System (ADS)
Vogten, Hubert; Martens, Harrie; Lemmers, Ruud
The TENCompetence project developed a first release of a Learning Network infrastructure to support individuals, groups and organisations in professional competence development. This Learning Network infrastructure was released as open source to the community, thereby allowing users and organisations to use and contribute to this development as they see fit. The infrastructure consists of client applications providing the user experience and server components that provide the services to these clients. These services implement the domain model (Koper 2006) by provisioning the entities of the domain model (see also Sect. 18.4) and henceforth will be referenced as domain entity services.
A Development of Lightweight Grid Interface
NASA Astrophysics Data System (ADS)
Iwai, G.; Kawai, Y.; Sasaki, T.; Watase, Y.
2011-12-01
In order to support the rapid development of Grid/Cloud-aware applications, we have developed an API to abstract distributed computing infrastructures based on SAGA (A Simple API for Grid Applications). SAGA, which is standardized in the OGF (Open Grid Forum), defines API specifications to access distributed computing infrastructures, such as Grid, Cloud and local computing resources. The Universal Grid API (UGAPI), which is a set of command line interfaces (CLI) and APIs, aims to offer a simpler API combining several SAGA interfaces with richer functionalities. These CLIs of the UGAPI offer typical functionalities required by end users for job management and file access to the different distributed computing infrastructures as well as local computing resources. We have also built a web interface for the particle therapy simulation and demonstrated a large-scale calculation using the different infrastructures at the same time. In this paper, we present how the web interface based on UGAPI and SAGA achieves more efficient utilization of computing resources over the different infrastructures, with technical details and practical experiences.
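A flavour of the SAGA job API that UGAPI builds on, as a minimal sketch assuming the saga-python bindings (UGAPI's own CLI and API names are not reproduced here):

    import saga   # saga-python bindings, assumed installed

    js = saga.job.Service("fork://localhost")   # could equally be a Grid/Cloud endpoint
    jd = saga.job.Description()
    jd.executable = "/bin/echo"
    jd.arguments = ["one simulation step"]
    job = js.create_job(jd)
    job.run()
    job.wait()
    print(job.state)                            # e.g. 'Done'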
Multiphysics Application Coupling Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campbell, Michael T.
2013-12-02
This particular consortium implementation of the software integration infrastructure will, in large part, refactor portions of the Rocstar multiphysics infrastructure. Development of this infrastructure originated at the University of Illinois DOE ASCI Center for Simulation of Advanced Rockets (CSAR) to support the center's massively parallel multiphysics simulation application, Rocstar, and has continued at IllinoisRocstar, a small company formed near the end of the University-based program. IllinoisRocstar is now licensing these new developments as free, open source, in the hope of improving their own and others' access to infrastructure that can be readily utilized in developing coupled or composite software systems, with particular attention to more rapid production and utilization of multiphysics applications in the HPC environment. There are two major pieces to the consortium implementation: the Application Component Toolkit (ACT) and the Multiphysics Application Coupling Toolkit (MPACT). The current development focus is the ACT, which is (will be) the substrate for MPACT. The ACT itself is built up from the components described in the technical approach. In particular, the ACT has the following major components: (1) The Component Object Manager (COM): the COM package provides encapsulation of user applications and their data; COM also provides the inter-component function call mechanism. (2) The System Integration Manager (SIM): the SIM package provides constructs and mechanisms for orchestrating composite systems of multiply integrated pieces.
Integrating multiple scientific computing needs via a Private Cloud infrastructure
NASA Astrophysics Data System (ADS)
Bagnasco, S.; Berzano, D.; Brunetti, R.; Lusso, S.; Vallero, S.
2014-06-01
In a typical scientific computing centre, diverse applications coexist and share a single physical infrastructure. An underlying Private Cloud facility eases the management and maintenance of heterogeneous use cases such as multipurpose or application-specific batch farms, Grid sites catering to different communities, parallel interactive data analysis facilities and others. It allows to dynamically and efficiently allocate resources to any application and to tailor the virtual machines according to the applications' requirements. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques; for example, rolling updates can be performed easily and minimizing the downtime. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, that hosts a full-fledged WLCG Tier-2 site and a dynamically expandable PROOF-based Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The Private Cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem (used in two different configurations for worker- and service-class hypervisors) and the OpenWRT Linux distribution (used for network virtualization). A future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and by using mainstream contextualization tools like CloudInit.
Green Infrastructure Research at EPA's Edison Environmental Center
The presentation outline includes: (1) Green infrastructure research objectives (2) Introduction to ongoing research projects - Aspects of design, construction, and maintenance that affect function - Real-world applications of GI research
More Bang for the Buck: Integrating Green Infrastructure into Existing Public Works Projects
This resource shares lessons learned from municipal and county officials experienced in coordinating green infrastructure applications with scheduled street maintenance, park improvements, and projects on public sites.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Armstrong, Robert C.; Ray, Jaideep; Malony, A.
2003-11-01
We present a case study of performance measurement and modeling of a CCA (Common Component Architecture) component-based application in a high performance computing environment. We explore issues peculiar to component-based HPC applications and propose a performance measurement infrastructure for HPC based loosely on recent work done for Grid environments. A prototypical implementation of the infrastructure is used to collect data for three components in a scientific application and construct performance models for two of them. Both computational and message-passing performance are addressed.
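The idea of observing components without modifying their code can be sketched with a simple timing wrapper; this is a toy stand-in, not the paper's measurement infrastructure.

```python
# Minimal sketch (not the paper's actual infrastructure): wrap each
# component-style function to collect per-component timing data, mimicking
# how a measurement layer observes components without changing their code.
import time
from collections import defaultdict

timings = defaultdict(list)   # component name -> list of call durations

def measured(component):
    def wrap(fn):
        def inner(*args, **kwargs):
            t0 = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                timings[component].append(time.perf_counter() - t0)
        return inner
    return wrap

@measured("integrator")
def advance(state, dt):
    return [x + dt for x in state]

advance([0.0, 1.0], 0.1)
print({k: sum(v) / len(v) for k, v in timings.items()})  # mean time per component
```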
Ubiquitous Green Computing Techniques for High Demand Applications in Smart Environments
Zapater, Marina; Sanchez, Cesar; Ayala, Jose L.; Moya, Jose M.; Risco-Martín, José L.
2012-01-01
Ubiquitous sensor network deployments, such as those found in smart city and ambient intelligence applications, demand constantly increasing computational resources in order to process data and offer services to users. The nature of these applications implies the use of data centers. Research has paid much attention to the energy consumption of the sensor nodes in WSN infrastructures. However, supercomputing facilities are the ones presenting a higher economic and environmental impact due to their very high power consumption. The latter problem, however, has been disregarded in the field of smart environment services. This paper proposes an energy-minimization workload assignment technique, based on heterogeneity and application-awareness, that redistributes low-demand computational tasks from high-performance facilities to idle nodes with low and medium resources in the WSN infrastructure. These non-optimal allocation policies reduce the energy consumed by the whole infrastructure and the total execution time. PMID:23112621
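The flavor of such an application-aware assignment can be conveyed with a greedy heuristic; the energy model and all numbers below are invented for illustration and are not the authors' algorithm.

```python
# Hedged sketch of the idea (not the authors' algorithm): greedily move
# low-demand tasks off the high-power facility onto idle WSN nodes whose
# capacity suffices, minimizing estimated energy. All numbers are invented.

def assign(tasks, nodes, hpc_joules_per_unit=5.0):
    """tasks: list of (name, demand); nodes: dict name -> (capacity, joules_per_unit)."""
    placement = {}
    free = {n: cap for n, (cap, _) in nodes.items()}
    for name, demand in sorted(tasks, key=lambda t: t[1]):   # low demand first
        # candidate nodes that can host the task, cheapest energy first
        fits = [(nodes[n][1] * demand, n) for n in free if free[n] >= demand]
        if fits and min(fits)[0] < hpc_joules_per_unit * demand:
            cost, best = min(fits)
            free[best] -= demand
            placement[name] = best
        else:
            placement[name] = "HPC"          # fall back to the data centre
    return placement

tasks = [("filter", 1.0), ("fft", 2.5), ("train", 40.0)]
nodes = {"gateway": (4.0, 1.2), "edge-pc": (8.0, 2.0)}
print(assign(tasks, nodes))   # heavy 'train' stays on HPC, light tasks move out
```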
Synthetic Proxy Infrastructure for Task Evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Junghans, Christoph; Pavel, Robert
The Synthetic Proxy Infrastructure for Task Evaluation is a proxy application designed to support application developers in gauging the performance of various task granularities when determining how best to utilize task-based programming models. The infrastructure is designed to provide examples of common communication patterns with a synthetic workload, intended to provide performance data for evaluating programming model and platform overheads when choosing a task granularity for decomposition purposes. This is presented as a reference implementation of a proxy application with run-time configurable input and output task dependencies, ranging from an embarrassingly parallel scenario to patterns with stencil-like dependencies upon their nearest neighbors. Once all inputs, if any, are satisfied, each task will execute a synthetic workload (a simple DGEMM in this case) of varying size and emit all outputs, if any, to the next tasks. The intent is for this reference implementation to be reimplemented as a proxy app in different programming models, so as to provide the same infrastructure and allow application developers to simulate their own communication needs when assessing task decomposition under various models on a given platform.
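A compact sketch in the same spirit, with NumPy standing in for the DGEMM kernel; the data structures and sizes are assumptions, not the released proxy code.

```python
# Reference-style sketch (assumed shapes, not the actual proxy code): tasks
# with configurable input dependencies run a synthetic DGEMM workload once
# their inputs are satisfied, from embarrassingly parallel to stencil-like.
import numpy as np

def run_tasks(n_tasks, deps, matrix_size=64):
    """deps[i] lists the task indices task i must wait for ([] = none)."""
    done, results = set(), {}
    while len(done) < n_tasks:
        for i in range(n_tasks):
            if i in done or any(d not in done for d in deps[i]):
                continue
            a = np.random.rand(matrix_size, matrix_size)
            results[i] = a @ a                 # the synthetic DGEMM payload
            done.add(i)
    return results

# Embarrassingly parallel: no dependencies at all.
run_tasks(4, {i: [] for i in range(4)})
# Stencil-like: each task depends on its left neighbour.
run_tasks(4, {0: [], 1: [0], 2: [1], 3: [2]})
```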
A Real-Time Web of Things Framework with Customizable Openness Considering Legacy Devices
Zhao, Shuai; Yu, Le; Cheng, Bo
2016-01-01
With the development of the Internet of Things (IoT), resources and applications based on it have emerged on a large scale. However, most efforts are “silo” solutions where devices and applications are tightly coupled. Infrastructures are needed to connect sensors to the Internet, open up and break the current application silos and move to a horizontal application mode. Based on the concept of Web of Things (WoT), many infrastructures have been proposed to integrate the physical world with the Web. However, issues such as no real-time guarantee, lack of fine-grained control of data, and the absence of explicit solutions for integrating heterogeneous legacy devices, hinder their widespread and practical use. To address these issues, this paper proposes a WoT resource framework that provides the infrastructures for the customizable openness and sharing of users’ data and resources under the premise of ensuring the real-time behavior of their own applications. The proposed framework is validated by actual systems and experimental evaluations. PMID:27690038
Sustainable, Reliable Mission-Systems Architecture
NASA Technical Reports Server (NTRS)
O'Neil, Graham; Orr, James K.; Watson, Steve
2005-01-01
A mission-systems architecture, based on a highly modular infrastructure utilizing open-standards hardware and software interfaces as the enabling technology, is essential for affordable and sustainable space exploration programs. This mission-systems architecture requires (a) robust communication between heterogeneous systems, (b) high reliability, (c) minimal mission-to-mission reconfiguration, (d) affordable development, system integration, and verification of systems, and (e) minimal sustaining engineering. This paper proposes such an architecture. Lessons learned from the Space Shuttle program and Earthbound complex engineered systems are applied to define the model. Technology projections reaching out 5 years are made to refine model details.
NASA Technical Reports Server (NTRS)
Watson, Steve; Orr, Jim; O'Neil, Graham
2004-01-01
A mission-systems architecture based on a highly modular "systems of systems" infrastructure utilizing open-standards hardware and software interfaces as the enabling technology is absolutely essential for an affordable and sustainable space exploration program. This architecture requires (a) robust communication between heterogeneous systems, (b) high reliability, (c) minimal mission-to-mission reconfiguration, (d) affordable development, system integration, and verification of systems, and (e) minimum sustaining engineering. This paper proposes such an architecture. Lessons learned from the space shuttle program are applied to help define and refine the model.
Strategies for a permanent lunar base
NASA Technical Reports Server (NTRS)
Duke, M. B.; Mendell, W. W.; Roberts, B. B.
1985-01-01
One or more of three possible objectives, encompassing scientific research, lunar resource exploitation for space infrastructure construction, and lunar environment self-sufficiency refinement with a view to future planetary habitation, may be the purpose of manned lunar base activities. Attention is presently given to the possibility that the early phases of all three lunar base orientations may be developed in such a way as to share the greatest number of common elements. An evaluation is made of the cost and complexity of the lunar base, and the Space Transportation System used in conjunction with it, as functions of long term base use strategy.
Sustainable, Reliable Mission-Systems Architecture
NASA Technical Reports Server (NTRS)
O'Neil, Graham; Orr, James K.; Watson, Steve
2007-01-01
A mission-systems architecture, based on a highly modular infrastructure utilizing open-standards hardware and software interfaces as the enabling technology, is essential for affordable and sustainable space exploration programs. This mission-systems architecture requires (a) robust communication between heterogeneous systems, (b) high reliability, (c) minimal mission-to-mission reconfiguration, (d) affordable development, system integration, and verification of systems, and (e) minimal sustaining engineering. This paper proposes such an architecture. Lessons learned from the Space Shuttle program and Earthbound complex engineered systems are applied to define the model. Technology projections reaching out 5 years are made to refine model details.
A Tale of Two Regions: Landscape Ecological Planning for Shale Gas Energy Futures
NASA Astrophysics Data System (ADS)
Murtha, T., Jr.; Schroth, O.; Orland, B.; Goldberg, L.; Mazurczyk, T.
2015-12-01
As we increasingly embrace deep shale gas deposits to meet global energy demands, new and dispersed local and regional policy and planning challenges emerge. Even in regions with long histories of energy extraction, such as coal, shale gas and the infrastructure needed to produce the gas and transport it to market offer uniquely complex transformations in land use and land cover not previously experienced. These transformations are fast paced and dispersed, and can overwhelm local and regional planning and regulatory processes. Coupled to these transformations is a structural confounding factor: while extraction and testing are carried out locally, regulation and decision-making are multilayered, often influenced by national and international factors. Using a geodesign framework, this paper applies a set of geospatial landscape ecological planning tools in two shale gas settings. First, we describe and detail a series of ongoing studies and tools that we have developed for communities in the Marcellus Shale region of the eastern United States, specifically the northern tier of Pennsylvania. Second, we apply a subset of these tools to potential gas development areas of the Fylde region in Lancashire, United Kingdom. For the past five years we have tested, applied and refined a set of place-based and data-driven geospatial models for forecasting, envisioning, analyzing and evaluating shale gas activities in northern Pennsylvania. These models are continuously compared to important landscape ecological planning challenges and priorities in the region, e.g. visual and cultural resource preservation. Adapting and applying these tools to a different landscape allows us not only to isolate and define important regulatory and policy exigencies in each specific setting, but also to develop and refine these models for broader application. As we continue to explore increasingly complex energy solutions globally, we need an equally complex comparative set of landscape ecological planning tools to inform policy, design and regional planning. Adapting tools and techniques developed in Pennsylvania, where shale gas extraction is ongoing, to Lancashire, where industry is still in the exploratory phase, offers a key opportunity to test and refine more generalizable models.
A Cross-Platform Infrastructure for Scalable Runtime Application Performance Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jack Dongarra; Shirley Moore; Bart Miller, Jeffrey Hollingsworth
2005-03-15
The purpose of this project was to build an extensible cross-platform infrastructure to facilitate the development of accurate and portable performance analysis tools for current and future high performance computing (HPC) architectures. Major accomplishments include tools and techniques for multidimensional performance analysis, as well as improved support for dynamic performance monitoring of multithreaded and multiprocess applications. Previous performance tool development has been limited by the burden of having to re-write a platform-dependent low-level substrate for each architecture/operating system pair in order to obtain the necessary performance data from the system. Manual interpretation of performance data is not scalable for large-scale, long-running applications. The infrastructure developed by this project provides a foundation for building portable and scalable performance analysis tools, with the end goal being to provide application developers with the information they need to analyze, understand, and tune the performance of terascale applications on HPC architectures. The backend portion of the infrastructure provides runtime instrumentation capability and access to hardware performance counters, with thread-safety for shared memory environments and a communication substrate to support instrumentation of multiprocess and distributed programs. Front-end interfaces provide tool developers with a well-defined, platform-independent set of calls for requesting performance data. End-user tools have been developed that demonstrate runtime data collection, on-line and off-line analysis of performance data, and multidimensional performance analysis. The infrastructure is based on two underlying performance instrumentation technologies: the PAPI cross-platform library interface to hardware performance counters and the cross-platform Dyninst library interface for runtime modification of executable images. The Paradyn and KOJAK projects have made use of this infrastructure to build performance measurement and analysis tools that scale to long-running programs on large parallel and distributed systems and that automate much of the search for performance bottlenecks.
Fiber optic sensors for infrastructure applications
DOT National Transportation Integrated Search
1998-02-01
Fiber optic sensor technology offers the possibility of implementing "nervous systems" for infrastructure elements that allow high performance, cost effective health and damage assessment systems to be achieved. This is possible, largely due to syner...
AASHTO connected vehicle infrastructure deployment analysis.
DOT National Transportation Integrated Search
2011-06-17
This report describes a deployment scenario for Connected Vehicle infrastructure by state and local transportation agencies, together with a series of strategies and actions to be performed by AASHTO to support application development and deployment.
Biofuels Infrastructure Partnership (BIP) grant program. The BIP program works with retailers, states, and eligible applicants, providing infrastructure grants in the following amounts: E15 pumps, 50% of the costs of
Cyber Compendium, Professional Continuing Education Course Papers. Volume 2, Issue 1, Spring 2015
2015-01-01
250,000 mobile devices. Cutting just 20% of this infrastructure would translate to a potential savings of over $7 billion per year. Air... BYOD device extends beyond the device itself. There is the supporting infrastructure such as the Mobile Application Store (MAS), which provides access... Prioritizing Cyber Capabilities to Protect U.S. Critical Infrastructure
NASA Astrophysics Data System (ADS)
Koutroulis, A. G.; Tsanis, I. K.; Jacob, D.
2012-04-01
A robust signal of a warmer and drier climate over the western Mediterranean region is projected by the majority of climate models. This effect appears more pronounced during warm periods, when the seasonal decrease of precipitation can exceed control climatology by 25-30%. The rapid development of Crete in the last 30 years has exerted strong pressures on the natural resources of the region. Urbanization and growth of agriculture, tourism and industry have had a strong impact on the water resources of the island by substantially increasing water demand. The objective of this study is to analyze and assess the impact of global change on the water resources status of the island of Crete for a range of 24 different scenarios of projected hydro-climatological regime, demand and supply potential. Water resources application issues are analyzed within this study, focusing on a refinement of the future water demands of the island and comparing them with state-of-the-art global climate model (GCM) results and an ensemble of regional climate models (RCMs) under three different emission scenarios, to estimate water resources availability during the 21st century. A robust signal of water scarcity is projected for all combinations of emission (A2, A1B and B1), demand and infrastructure scenarios. Despite the uncertainty of the assessments, the quantitative impact of the projected changes on water availability indicates that climate change plays an equally important role to water use and management in controlling future water status in a Mediterranean island such as Crete. The outcome of this analysis will assist short- and long-term strategic water resources planning by prioritizing water-related infrastructure development.
On macromolecular refinement at subatomic resolution withinteratomic scatterers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Afonine, Pavel V.; Grosse-Kunstleve, Ralf W.; Adams, Paul D.
2007-11-09
A study of the accurate electron-density distribution in molecular crystals at subatomic resolution, better than ~1.0 Å, requires more detailed models than those based on independent spherical atoms. A tool conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8-1.0 Å, the number of experimental data is insufficient for full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark datasets gave results comparable in quality with results of multipolar refinement and superior to those for conventional models. Applications to several datasets of both small molecules and macromolecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package.
Improving water, sanitation and hygiene in health-care facilities, Liberia
Montgomery, Maggie; Baller, April; Ndivo, Francis; Gasasira, Alex; Cooper, Catherine; Frescas, Ruben; Gordon, Bruce; Syed, Shamsuzzoha Babar
2017-01-01
Problem: The lack of proper water and sanitation infrastructures and poor hygiene practices in health-care facilities reduces facilities' preparedness and response to disease outbreaks and decreases the communities' trust in the health services provided. Approach: To improve water and sanitation infrastructures and hygiene practices, the Liberian health ministry held multistakeholder meetings to develop a national water, sanitation and hygiene and environmental health package. A national train-the-trainer course was held for county environmental health technicians, which included infection prevention and control focal persons; the focal persons acted as change agents. Local setting: In Liberia, only 45% of 701 surveyed health-care facilities had an improved water source in 2015, and only 27% of these health-care facilities had proper disposal for infectious waste. Relevant changes: Local ownership, through engagement of local health workers, was introduced to ensure development and refinement of the package. In-county collaborations between health-care facilities, along with multisectoral collaboration, informed national-level direction, which led to increased focus on water and sanitation infrastructures and uptake of hygiene practices to improve the overall quality of service delivery. Lessons learnt: National-level leadership was important to identify a vision and create an enabling environment for changing the perception of water, sanitation and hygiene in health-care provision. The involvement of health workers was central to addressing basic infrastructure and hygiene practices in health-care facilities, and they also worked as stimulators for sustainable change. Further, developing a long-term implementation plan for national-level initiatives is important to ensure sustainability. PMID:28670017
Resilient workflows for computational mechanics platforms
NASA Astrophysics Data System (ADS)
Nguyên, Toàn; Trifan, Laurentiu; Désidéri, Jean-Antoine
2010-06-01
Workflow management systems have recently been the focus of much interest and of many research and deployment efforts for scientific applications worldwide [26, 27]. Their ability to abstract applications by wrapping application codes has also stressed the usefulness of such systems for multidiscipline applications [23, 24]. When complex applications need to provide seamless interfaces hiding the technicalities of the computing infrastructures, their high-level modeling, monitoring and execution functionalities help give production teams seamless and effective facilities [25, 31, 33]. Software integration infrastructures based on programming paradigms such as Python, Matlab and Scilab have also provided evidence of the usefulness of such approaches for the tight coupling of multidiscipline application codes [22, 24]. Also, high-performance computing based on multi-core multi-cluster infrastructures opens new opportunities for more accurate, more extensive and more effective robust multi-discipline simulations for the decades to come [28]. This supports the goal of full flight-dynamics simulation for 3D aircraft models within the next decade, opening the way to virtual flight tests and certification of aircraft in the future [23, 24, 29].
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quinlan, D.; Yi, Q.; Vuduc, R.
2005-02-17
ROSE is an object-oriented software infrastructure for source-to-source translation that provides an interface for programmers to write their own specialized translators for optimizing scientific applications. ROSE is a part of current research on telescoping languages, which provides optimization of the use of libraries in scientific applications. ROSE defines approaches to extend the optimization techniques, common in well-defined languages, to the optimization of scientific applications using well-defined libraries. ROSE includes a rich set of tools for generating customized transformations to support optimization of application codes. We currently support full C and C++ (including template instantiation etc.), with Fortran 90 support under development as part of a collaboration and contract with Rice to use their version of the open-source Open64 F90 front-end. ROSE represents an attempt to define an open compiler infrastructure to handle the full complexity of full-scale DOE application codes using the languages common to scientific computing within DOE. We expect that such an infrastructure will also be useful for the development of numerous tools that may then realistically expect to work on DOE full-scale applications.
Bayesian ensemble refinement by replica simulations and reweighting.
Hummer, Gerhard; Köfinger, Jürgen
2015-12-28
We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations.
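A minimal worked example of the maximum-entropy (EROS-style) reweighting described above, on synthetic data; the logit parameterization and direct minimization are illustrative choices, not the authors' replica scheme.

```python
# Sketch of maximum-entropy ensemble reweighting: minimize chi^2/2 plus a
# relative-entropy penalty against reference weights. Data are synthetic.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
M, K = 200, 3                       # ensemble members, observables
y = rng.normal(size=(M, K))         # per-member predicted observables
y_exp = np.array([0.3, -0.1, 0.2])  # "experimental" ensemble averages
sigma = np.full(K, 0.1)
w0 = np.full(M, 1.0 / M)            # reference (prior) weights
theta = 1.0                         # confidence parameter balancing entropy vs fit

def objective(logits):
    w = np.exp(logits - logits.max())
    w /= w.sum()
    chi2 = np.sum(((w @ y - y_exp) / sigma) ** 2)
    rel_ent = np.sum(w * np.log(w / w0))
    return 0.5 * chi2 + theta * rel_ent

res = minimize(objective, np.zeros(M), method="L-BFGS-B")
w = np.exp(res.x - res.x.max()); w /= w.sum()
print("reweighted averages:", w @ y, "target:", y_exp)
```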
77 FR 21989 - Critical Infrastructure Private Sector Clearance Program Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-12
... Advisors email the form to the individual who then emails back the completed form, minus their date and... official who nominated the applicant and by the Assistant Secretary for Infrastructure Protection. Upon...
Transit Vehicle-to-Infrastructure (V2I) assessment study.
DOT National Transportation Integrated Search
2015-07-01
The United States Department of Transportation (USDOT) is engaged in assessing applications that realize the full potential of connected vehicles, travelers, and infrastructure to enhance current operational practices and transform future surface tra...
Sun, Shanhui; Sonka, Milan; Beichel, Reinhard R
2013-01-01
Recently, the optimal surface finding (OSF) and layered optimal graph image segmentation of multiple objects and surfaces (LOGISMOS) approaches have been reported with applications to medical image segmentation tasks. While providing high levels of performance, these approaches may locally fail in the presence of pathology or other local challenges. Due to the image data variability, finding a suitable cost function that would be applicable to all image locations may not be feasible. This paper presents a new interactive refinement approach for correcting local segmentation errors in the automated OSF-based segmentation. A hybrid desktop/virtual reality user interface was developed for efficient interaction with the segmentations, utilizing state-of-the-art stereoscopic visualization technology and advanced interaction techniques. The user interface allows a natural and interactive manipulation of 3-D surfaces. The approach was evaluated on 30 test cases from 18 CT lung datasets, which showed local segmentation errors after employing an automated OSF-based lung segmentation. The performed experiments exhibited a significant increase in performance in terms of mean absolute surface distance errors (2.54±0.75 mm prior to refinement vs. 1.11±0.43 mm post-refinement, p≪0.001). Speed of the interactions is one of the most important aspects leading to the acceptance or rejection of the approach by users expecting a real-time interaction experience. The average algorithm computing time per refinement iteration was 150 ms, and the average total user interaction time required for reaching complete operator satisfaction was about 2 min per case. This time was mostly spent on human-controlled manipulation of the object to identify whether additional refinement was necessary and to approve the final segmentation result. The reported principle is generally applicable to segmentation problems beyond lung segmentation in CT scans as long as the underlying segmentation utilizes the OSF framework. The two reported segmentation refinement tools were optimized for lung segmentation and might need some adaptation for other application domains.
Project Integration Architecture: Implementation of the CORBA-Served Application Infrastructure
NASA Technical Reports Server (NTRS)
Jones, William Henry
2005-01-01
The Project Integration Architecture (PIA) has been demonstrated in a single-machine C++ implementation prototype. The architecture is in the process of being migrated to a Common Object Request Broker Architecture (CORBA) implementation. The migration of the Foundation Layer interfaces is fundamentally complete. The implementation of the Application Layer infrastructure for that migration is reported. The Application Layer provides for distributed user identification and authentication, per-user/per-instance access controls, server administration, the formation of mutually-trusting application servers, a server locality protocol, and an ability to search for interface implementations through such trusted server networks.
Cyberinfrastructure for e-Science.
Hey, Tony; Trefethen, Anne E
2005-05-06
Here we describe the requirements of an e-Infrastructure to enable faster, better, and different scientific research capabilities. We use two application exemplars taken from the United Kingdom's e-Science Programme to illustrate these requirements and make the case for a service-oriented infrastructure. We provide a brief overview of the UK "plug-and-play composable services" vision and the role of semantics in such an e-Infrastructure.
Scalable collaborative risk management technology for complex critical systems
NASA Technical Reports Server (NTRS)
Campbell, Scott; Torgerson, Leigh; Burleigh, Scott; Feather, Martin S.; Kiper, James D.
2004-01-01
We describe here our project and plans to develop methods, software tools, and infrastructure tools to address challenges relating to geographically distributed software development. Specifically, this work is creating an infrastructure that supports applications working over distributed geographical and organizational domains and is using this infrastructure to develop a tool that supports project development using risk management and analysis techniques where the participants are not collocated.
NASA Technical Reports Server (NTRS)
Habib-Agahi, H.
1981-01-01
Market assessment, refined with analysis disaggregated from a national level to the regional level and to specific market applications, resulted in more accurate and detailed market estimates. The development of an integrated set of computer simulations, coupled with refined market data, allowed progress in the ability to evaluate the worth of solar thermal parabolic dish systems. In-depth analyses of both electric and thermal market applications of these systems are described. The following market assessment studies were undertaken: (1) regional analysis of the near term market for parabolic dish systems; (2) potential early market estimate for electric applications; (3) potential early market estimate for industrial process heat/cogeneration applications; and (4) selection of thermal and electric application case studies for fiscal year 1981.
MODFLOW-LGR: Practical application to a large regional dataset
NASA Astrophysics Data System (ADS)
Barnes, D.; Coulibaly, K. M.
2011-12-01
In many areas of the US, including southwest Florida, large regional-scale groundwater models have been developed to aid in decision making and water resources management. These models are subsequently used as a basis for site-specific investigations. Because the large scale of these regional models is not appropriate for local application, refinement is necessary to analyze the local effects of pumping wells and groundwater related projects at specific sites. The most commonly used approach to date is Telescopic Mesh Refinement or TMR. It allows the extraction of a subset of the large regional model with boundary conditions derived from the regional model results. The extracted model is then updated and refined for local use using a variable sized grid focused on the area of interest. MODFLOW-LGR, local grid refinement, is an alternative approach which allows model discretization at a finer resolution in areas of interest and provides coupling between the larger "parent" model and the locally refined "child." In the present work, these two approaches are tested on a mining impact assessment case in southwest Florida using a large regional dataset (The Lower West Coast Surficial Aquifer System Model). Various metrics for performance are considered. They include: computation time, water balance (as compared to the variable sized grid), calibration, implementation effort, and application advantages and limitations. The results indicate that MODFLOW-LGR is a useful tool to improve local resolution of regional scale models. While performance metrics, such as computation time, are case-dependent (model size, refinement level, stresses involved), implementation effort, particularly when regional models of suitable scale are available, can be minimized. The creation of multiple child models within a larger scale parent model makes it possible to reuse the same calibrated regional dataset with minimal modification. In cases similar to the Lower West Coast model, where a model is larger than optimal for direct application as a parent grid, a combination of TMR and LGR approaches should be used to develop a suitable parent grid.
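The TMR step of deriving child-grid boundary conditions from a regional solution can be sketched as interpolation of parent heads onto the perimeter of a refined local grid; the grids and head field below are synthetic, and this is not MODFLOW-LGR or flopy code.

```python
# Conceptual sketch of the TMR step described above: heads from a coarse
# regional model are interpolated onto the perimeter of a refined local grid
# to serve as specified-head boundary conditions. All values are synthetic.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Coarse regional head field on a 1 km grid.
xc = np.arange(0, 50_000, 1000.0)
yc = np.arange(0, 50_000, 1000.0)
heads = 100.0 - 0.0005 * np.add.outer(yc, xc)   # gentle regional gradient

interp = RegularGridInterpolator((yc, xc), heads)

# Child (local) grid: 100 m cells covering a 5 km x 5 km site.
xf = np.arange(20_000, 25_000, 100.0)
yf = np.arange(20_000, 25_000, 100.0)

# Perimeter cells of the child grid take their heads from the parent solution.
perimeter = [(yy, xx) for xx in xf for yy in (yf[0], yf[-1])] + \
            [(yy, xx) for yy in yf for xx in (xf[0], xf[-1])]
bc_heads = interp(np.array(perimeter))
print(bc_heads.shape)   # one specified head per perimeter cell
```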
Systematic risk assessment methodology for critical infrastructure elements - Oil and Gas subsectors
NASA Astrophysics Data System (ADS)
Gheorghiu, A.-D.; Ozunu, A.
2012-04-01
The concern for the protection of critical infrastructure has been rapidly growing in the last few years in Europe. The level of knowledge and preparedness in this field is beginning to develop in a lawfully organized manner, for the identification and designation of critical infrastructure elements of national and European interest. Oil and gas production, refining, treatment, storage and transmission by pipelines facilities, are considered European critical infrastructure sectors, as per Annex I of the Council Directive 2008/114/EC of 8 December 2008 on the identification and designation of European critical infrastructures and the assessment of the need to improve their protection. Besides identifying European and national critical infrastructure elements, member states also need to perform a risk analysis for these infrastructure items, as stated in Annex II of the above mentioned Directive. In the field of risk assessment, there are a series of acknowledged and successfully used methods in the world, but not all hazard identification and assessment methods and techniques are suitable for a given site, situation, or type of hazard. As Theoharidou, M. et al. noted (Theoharidou, M., P. Kotzanikolaou, and D. Gritzalis 2009. Risk-Based Criticality Analysis. In Critical Infrastructure Protection III. Proceedings. Third Annual IFIP WG 11.10 International Conference on Critical Infrastructure Protection. Hanover, New Hampshire, USA, March 23-25, 2009: revised selected papers, edited by C. Palmer and S. Shenoi, 35-49. Berlin: Springer.), despite the wealth of knowledge already created, there is a need for simple, feasible, and standardized criticality analyses. The proposed systematic risk assessment methodology includes three basic steps: the first step (preliminary analysis) includes the identification of hazards (including possible natural hazards) for each installation/section within a given site, followed by a criterial analysis and then a detailed analysis step. The criterial evaluation is used as a ranking system in order to establish the priorities for the detailed risk assessment. This criterial analysis stage is necessary because the total number of installations and sections on a site can be quite large. As not all installations and sections on a site contribute significantly to the risk of a major accident occurring, it is not efficient to include all installations and sections in the detailed risk assessment, which can be time and resource consuming. The selected installations are then taken into consideration in the detailed risk assessment, which is the third step of the systematic risk assessment methodology. Following this step, conclusions can be drawn related to the overall risk characteristics of the site. The proposed methodology can as such be successfully applied to the assessment of risk related to critical infrastructure elements falling under the energy sector of Critical Infrastructure, mainly the sub-sectors oil and gas. Key words: Systematic risk assessment, criterial analysis, energy sector critical infrastructure elements
New statistical downscaling for Canada
NASA Astrophysics Data System (ADS)
Murdock, T. Q.; Cannon, A. J.; Sobie, S.
2013-12-01
This poster will document the production of a set of statistically downscaled future climate projections for Canada based on the latest available RCM and GCM simulations - the North American Regional Climate Change Assessment Program (NARCCAP; Mearns et al. 2007) and the Coupled Model Intercomparison Project Phase 5 (CMIP5). The main stages of the project included (1) downscaling method evaluation, (2) scenarios selection, (3) production of statistically downscaled results, and (4) applications of results. We build upon a previous downscaling evaluation project (Bürger et al. 2012, Bürger et al. 2013) in which a quantile-based method (Bias Correction/Spatial Disaggregation - BCSD; Werner 2011) provided high skill compared with four other methods representing the majority of types of downscaling used in Canada. Additional quantile-based methods (Bias-Correction/Constructed Analogues, Maurer et al. 2010; and Bias-Correction/Climate Imprint, Hunter and Meentemeyer 2005) were evaluated. A subset of 12 CMIP5 simulations was chosen based on an objective set of selection criteria. This included hemispheric skill assessment based on the CLIMDEX indices (Sillmann et al. 2013), historical criteria used previously at the Pacific Climate Impacts Consortium (Werner 2011), and refinement based on a modified clustering algorithm (Houle et al. 2012; Katsavounidis et al. 1994). Statistical downscaling was carried out on the NARCCAP ensemble and a subset of the CMIP5 ensemble. We produced downscaled scenarios over Canada at a daily time resolution and 300 arc second (~10 km) spatial resolution from historical runs for 1951-2005 and from RCP 2.6, 4.5, and 8.5 projections for 2006-2100. The ANUSPLIN gridded daily dataset (McKenney et al. 2011) was used as a target. It has national coverage, spans the historical period of interest 1951-2005, and has daily time resolution. It uses interpolation of station data based on thin-plate splines. This type of method has been shown to have superior skill in interpolating RCM data over North America (McGinnis et al. 2012). An early application of the new dataset was to provide projections of climate extremes for adaptation planning by the British Columbia Ministry of Transportation and Infrastructure. Recently, certain stretches of highway have experienced extreme precipitation events resulting in substantial damage to infrastructure. As part of the planning process to refurbish or replace components of these highways, information about the magnitude and frequency of future extreme events is needed to inform the infrastructure design. The increased resolution provided by downscaling improves the representation of topographic features, particularly valley temperature and precipitation effects. A range of extreme values, from simple daily maxima and minima to complex multi-day and threshold-based climate indices, were computed and analyzed from the downscaled output. Selected results from this process and how the projections of precipitation extremes are being used in the context of highway infrastructure planning in British Columbia will be presented.
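The quantile-mapping idea underlying BCSD-type bias correction can be illustrated in a few lines; the gamma-distributed "observations" and "model" values below are synthetic stand-ins, not project data.

```python
# Hedged sketch of quantile mapping (not the project's exact implementation):
# map each model value to the observed value at the same empirical quantile
# of the calibration period.
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Bias-correct model_future using paired historical quantiles."""
    q = np.linspace(0, 1, 101)
    mq = np.quantile(model_hist, q)      # model climatology quantiles
    oq = np.quantile(obs_hist, q)        # observed climatology quantiles
    # Find each future value's quantile in the model, read off the obs value.
    pos = np.interp(model_future, mq, q)
    return np.interp(pos, q, oq)

rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 3.0, 5000)          # "observed" daily precipitation
mod = rng.gamma(2.0, 4.0, 5000)          # model is too wet
fut = rng.gamma(2.2, 4.0, 1000)          # future run, same wet bias
corrected = quantile_map(mod, obs, fut)
print(mod.mean(), obs.mean(), corrected.mean())  # bias largely removed
```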
NASA Astrophysics Data System (ADS)
Chan, Christine S.; Ostertag, Michael H.; Akyürek, Alper Sinan; Šimunić Rosing, Tajana
2017-05-01
The Internet of Things envisions a web-connected infrastructure of billions of sensors and actuation devices. However, the current state-of-the-art presents another reality: monolithic end-to-end applications tightly coupled to a limited set of sensors and actuators. Growing such applications with new devices or behaviors, or extending the existing infrastructure with new applications, involves redesign and redeployment. We instead propose a modular approach to these applications, breaking them into an equivalent set of functional units (context engines) whose input/output transformations are driven by general-purpose machine learning, demonstrating an improvement in compute redundancy and computational complexity with minimal impact on accuracy. In conjunction with formal data specifications, or ontologies, we can replace application-specific implementations with a composition of context engines that use common statistical learning to generate output, thus improving context reuse. We implement interconnected context-aware applications using our approach, extracting user context from sensors in both healthcare and grid applications. We compare our infrastructure to single-stage monolithic implementations with single-point communications between sensor nodes and the cloud servers, demonstrating a reduction in combined system energy by 22-45%, and multiplying the battery lifetime of power-constrained devices by at least 22x, with easy deployment across different architectures and devices.
Mesh quality control for multiply-refined tetrahedral grids
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Strawn, Roger
1994-01-01
A new algorithm for controlling the quality of multiply-refined tetrahedral meshes is presented in this paper. The basic dynamic mesh adaption procedure allows localized grid refinement and coarsening to efficiently capture aerodynamic flow features in computational fluid dynamics problems; however, repeated application of the procedure may significantly deteriorate the quality of the mesh. Results presented show the effectiveness of this mesh quality algorithm and its potential in the area of helicopter aerodynamics and acoustics.
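One way to operationalize such a quality check is an element-shape metric evaluated after each refinement pass; the sketch below uses the mean-ratio metric (an assumed choice, not necessarily the paper's), which equals 1 for a regular tetrahedron and tends to 0 for degenerate cells.

```python
# Illustrative quality check (assumed metric choice): the mean-ratio metric
# eta = 12*(3V)^(2/3) / sum(edge lengths squared), 1 for a regular
# tetrahedron and near 0 for slivers; an adaption step could reject or
# smooth cells that fall below a threshold.
import itertools
import numpy as np

def tet_quality(p):
    """p: (4,3) array of vertex coordinates; returns eta in (0, 1]."""
    v = abs(np.linalg.det(p[1:] - p[0])) / 6.0          # tetrahedron volume
    edges = [p[i] - p[j] for i, j in itertools.combinations(range(4), 2)]
    return 12.0 * (3.0 * v) ** (2.0 / 3.0) / sum(np.dot(e, e) for e in edges)

regular = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], float)
sliver = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0.5, 0.5, 1e-3]], float)
print(tet_quality(regular))   # ~1.0
print(tet_quality(sliver))    # near 0 -> candidate for quality control
```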
Data distribution service-based interoperability framework for smart grid testbed infrastructure
Youssef, Tarek A.; Elsayed, Ahmed T.; Mohammed, Osama A.
2016-03-02
This study presents the design and implementation of a communication and control infrastructure for smart grid operation. The proposed infrastructure enhances the reliability of the measurements and control network. The advantages of utilizing the data-centric over the message-centric communication approach are discussed in the context of smart grid applications. The data distribution service (DDS) is used to implement a data-centric common data bus for the smart grid. This common data bus improves the communication reliability, enabling distributed control and smart load management. These enhancements are achieved by avoiding a single point of failure while enabling peer-to-peer communication and an automatic discovery feature for dynamically participating nodes. The infrastructure and ideas presented in this paper were implemented and tested on the smart grid testbed. A toolbox and application programming interface for the testbed infrastructure were developed in order to facilitate interoperability and remote access to the testbed. This interface allows experiments to be controlled, monitored, and performed remotely. Furthermore, it could be used to integrate multidisciplinary testbeds to study complex cyber-physical systems (CPS).
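The data-centric pattern can be conveyed with a toy topic-based bus in plain Python; this is a conceptual stand-in, not DDS or the testbed's API.

```python
# Minimal conceptual sketch of data-centric communication (plain Python, not
# an actual DDS implementation): publishers write samples to a named topic on
# a common bus and any number of subscribers read them, so nodes never
# address each other directly.
from collections import defaultdict

class DataBus:
    def __init__(self):
        self._subscribers = defaultdict(list)   # topic -> callbacks
        self._last = {}                          # topic -> latest sample

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)
        if topic in self._last:                  # late joiners still get data
            callback(self._last[topic])

    def publish(self, topic, sample):
        self._last[topic] = sample
        for cb in self._subscribers[topic]:
            cb(sample)

bus = DataBus()
bus.subscribe("grid/feeder1/voltage", lambda s: print("controller sees", s))
bus.publish("grid/feeder1/voltage", {"v_rms": 119.7, "ts": 1.0})
bus.subscribe("grid/feeder1/voltage", lambda s: print("logger sees", s))  # discovery analogue
```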
Dynamically adaptive data-driven simulation of extreme hydrological flows
NASA Astrophysics Data System (ADS)
Kumar Jain, Pushkar; Mandli, Kyle; Hoteit, Ibrahim; Knio, Omar; Dawson, Clint
2018-02-01
Hydrological hazards such as storm surges, tsunamis, and rainfall-induced flooding are physically complex events that are costly in loss of human life and economic productivity. Many such disasters could be mitigated through improved emergency evacuation in real-time and through the development of resilient infrastructure based on knowledge of how systems respond to extreme events. Data-driven computational modeling is a critical technology underpinning these efforts. This investigation focuses on the novel combination of methodologies in forward simulation and data assimilation. The forward geophysical model utilizes adaptive mesh refinement (AMR), a process by which a computational mesh can adapt in time and space based on the current state of a simulation. The forward solution is combined with ensemble based data assimilation methods, whereby observations from an event are assimilated into the forward simulation to improve the veracity of the solution, or used to invert for uncertain physical parameters. The novelty in our approach is the tight two-way coupling of AMR and ensemble filtering techniques. The technology is tested using actual data from the Chile tsunami event of February 27, 2010. These advances offer the promise of significantly transforming data-driven, real-time modeling of hydrological hazards, with potentially broader applications in other science domains.
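The ensemble-filtering half of the coupling can be illustrated with a stochastic EnKF analysis step; the state, observation operator and gauge values below are synthetic, and this is not the paper's implementation.

```python
# Hedged illustration of ensemble data assimilation (not the paper's code):
# a stochastic EnKF analysis step nudges each forward-model ensemble member
# toward gauge observations.
import numpy as np

def enkf_update(X, H, y, r, rng):
    """X: (n_state, n_ens) forecast ensemble; H: (n_obs, n_state); y: obs; r: obs variance."""
    n_obs, n_ens = len(y), X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)        # ensemble anomalies
    HA = H @ A
    P_yy = HA @ HA.T / (n_ens - 1) + r * np.eye(n_obs)
    P_xy = A @ HA.T / (n_ens - 1)
    K = P_xy @ np.linalg.inv(P_yy)               # Kalman gain
    Y = y[:, None] + rng.normal(0, np.sqrt(r), (n_obs, n_ens))  # perturbed obs
    return X + K @ (Y - H @ X)

rng = np.random.default_rng(2)
X = rng.normal(1.0, 0.5, (10, 50))               # e.g. sea-surface heights on an AMR patch
H = np.zeros((2, 10)); H[0, 3] = H[1, 7] = 1.0   # two tide gauges
X_an = enkf_update(X, H, np.array([1.4, 0.8]), 0.01, rng)
print(X_an[[3, 7]].mean(axis=1))                 # analysis pulled toward 1.4 and 0.8
```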
The NASA John C. Stennis Environmental Geographic Information System
NASA Technical Reports Server (NTRS)
Cohan, Tyrus
2002-01-01
Contents include the following: 1. Introduction: Background information. Initial applications of the SSC EGIS. Ongoing projects. 2. Scope of SSC EGIS. 3. Data layers. 4. Onsite operations. 5. Landcover classifications. 6. Current activities. 7. GIS/Key. 8. Infrastructure base map - development. 9. Infrastructure base map - application. 10. Uncorrected layer. 11. Corrected layer. 12. Emergency environmental response tool. 13. Future directions. 14. Bridging the gaps. 15. Environmental geographical information system.
Predictive Software Cost Model Study. Volume I. Final Technical Report.
1980-06-01
development phase to identify computer resources necessary to support computer programs after transfer of program management responsibility and system... classical model development with refinements specifically applicable to avionics systems. The refinements are the result of the Phase I literature search
Overview of NASA communications infrastructure
NASA Technical Reports Server (NTRS)
Arnold, Ray J.; Fuechsel, Charles
1991-01-01
The infrastructure of NASA communications systems for effecting coordination across NASA offices and with the national and international research and technological communities is discussed. The offices and networks of the communication system include the Office of Space Science and Applications (OSSA), which manages all NASA missions, and the Office of Space Operations, which furnishes communication support through the NASCOM, the mission critical communications support network, and the Program Support Communications network. The NASA Science Internet was established by OSSA to centrally manage, develop, and operate an integrated computer network service dedicated to NASA's space science and application research. Planned for the future is the National Research and Education Network, which will provide communications infrastructure to enhance science resources at a national level.
18 CFR 5.29 - Other provisions.
Code of Federal Regulations, 2014 CFR
2014-04-01
... member of the public. (c) Requests for privileged or Critical Energy Infrastructure Information treatment of pre-filing submission. If a potential Applicant requests privileged or critical energy infrastructure information treatment of any information submitted to the Commission during pre-filing...
18 CFR 5.29 - Other provisions.
Code of Federal Regulations, 2013 CFR
2013-04-01
... member of the public. (c) Requests for privileged or Critical Energy Infrastructure Information treatment of pre-filing submission. If a potential Applicant requests privileged or critical energy infrastructure information treatment of any information submitted to the Commission during pre-filing...
Interactive Model-Centric Systems Engineering (IMCSE) Phase Two
2015-02-28
Backend Implementation... Figure 10. Interactive Epoch-Era Analysis leverages humans-in-the-loop analysis and supporting infrastructure... preliminary supporting infrastructure. This will inform the transition strategies, additional case application and prototype user testing.
Vehicle-to-infrastructure rail crossing violation warning : concept of operations.
DOT National Transportation Integrated Search
2016-03-31
This Concept of Operations (ConOps) document describes the operational characteristics of a Vehicle-to-Infrastructure (V2I) Rail Crossing Violation Warning (RCVW) safety application. The development of this document was completed with an advisory tea...
Infrastructure Systems for Advanced Computing in E-science applications
NASA Astrophysics Data System (ADS)
Terzo, Olivier
2013-04-01
In the e-science field there are growing needs for computing infrastructure that is more dynamic and customizable, with an "on demand" model of use that follows the exact request in terms of resources and storage capacities. The integration of grid and cloud infrastructure solutions allows us to offer services whose availability adapts by scaling resources up and down. The main challenge for e-science domains will be to implement infrastructure solutions for scientific computing that dynamically adapt to the demand for computing resources, with a strong emphasis on optimizing resource use to reduce investment costs. Instrumentation, data volumes, algorithms and analysis all add to the complexity of applications that require high processing power and storage for a limited time, and that often exceed the computational resources available in the majority of laboratories and research units within an organization. Very often it is necessary to adapt, or even rethink, tools and algorithms, and to consolidate existing applications through a phase of reverse engineering, in order to adapt them to deployment on a Cloud infrastructure. For example, in areas such as rainfall monitoring, meteorological analysis, hydrometeorology, climatology, bioinformatics, next-generation sequencing, computational electromagnetics and radio occultation, the complexity of the analysis raises several issues such as processing time, the scheduling of processing tasks, storage of results, and multi-user environments. For these reasons, it is necessary to rethink the way e-science applications are written so that they are already adapted to exploit the potential of cloud computing services through the IaaS, PaaS and SaaS layers. Another important focus is on creating and using hybrid infrastructures, typically a federation between private and public clouds: when all resources owned by the organization are in use, a federated cloud infrastructure makes it easy to add resources from the public cloud to follow the needs in terms of computational and storage capacity, and to release them when processing is finished. Following the hybrid model, the scheduling approach is important for managing both cloud models. Thanks to this infrastructure model, resources are always available for additional requests for IT capacity that can be used "on demand" for a limited time, without having to purchase additional servers.
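A toy sketch of the hybrid bursting policy described above (the slot model and capacities are invented): jobs fill the private cloud first, overflow to leased public resources, and public resources are released as soon as their job completes.

```python
# Sketch of a hybrid private/public cloud policy (invented capacities):
# place jobs on the private cloud while slots remain, burst to the public
# cloud otherwise, and release leased public resources on completion.
class HybridScheduler:
    def __init__(self, private_slots):
        self.private_free = private_slots
        self.public_leased = 0

    def submit(self, job):
        if self.private_free > 0:
            self.private_free -= 1
            return "private"
        self.public_leased += 1          # acquire an on-demand public resource
        return "public"

    def finished(self, placement):
        if placement == "private":
            self.private_free += 1
        else:
            self.public_leased -= 1      # release the leased resource immediately

sched = HybridScheduler(private_slots=2)
places = [sched.submit(j) for j in ("a", "b", "c")]
print(places)                 # ['private', 'private', 'public']
sched.finished("public")
print(sched.public_leased)    # 0 -> no idle public spend
```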
TLS from fundamentals to practice
Urzhumtsev, Alexandre; Afonine, Pavel V.; Adams, Paul D.
2014-01-01
The Translation-Libration-Screw-rotation (TLS) model of rigid-body harmonic displacements introduced in crystallography by Schomaker & Trueblood (1968) is now a routine tool in macromolecular studies and is a feature of most modern crystallographic structure refinement packages. In this review we consider a number of simple examples that illustrate important features of the TLS model. Based on these examples, simplified formulae are given for several special cases that may occur in structure modeling and refinement. The derivation of general TLS formulae from basic principles is also provided. This manuscript describes the principles of TLS modeling, as well as selected algorithmic details for practical application. An extensive list of references to applications of TLS in macromolecular crystallographic refinement is provided. PMID:25249713
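For readers unfamiliar with the model, the standard TLS expression from the crystallographic literature (the review derives the general formulae from basic principles) gives the anisotropic displacement matrix U of an atom at position r = (x, y, z) relative to the TLS origin in terms of the translation tensor T, libration tensor L, and screw tensor S:

```latex
% Standard TLS expression for the anisotropic displacement matrix of an
% atom at position r = (x, y, z) relative to the TLS origin; T, L, S are
% the 3x3 translation, libration, and screw tensors.
\[
U_{\mathrm{TLS}} = T + A\,L\,A^{\mathsf{T}} + A\,S + S^{\mathsf{T}}A^{\mathsf{T}},
\qquad
A =
\begin{pmatrix}
 0 &  z & -y \\
-z &  0 &  x \\
 y & -x &  0
\end{pmatrix}.
\]
```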
Ontology-Driven Provenance Management in eScience: An Application in Parasite Research
NASA Astrophysics Data System (ADS)
Sahoo, Satya S.; Weatherly, D. Brent; Mutharaju, Raghava; Anantharam, Pramod; Sheth, Amit; Tarleton, Rick L.
Provenance, from the French word "provenir", describes the lineage or history of a data entity. Provenance is critical information in scientific applications to verify experiment process, validate data quality and associate trust values with scientific results. Current industrial scale eScience projects require an end-to-end provenance management infrastructure. This infrastructure needs to be underpinned by formal semantics to enable analysis of large scale provenance information by software applications. Further, effective analysis of provenance information requires well-defined query mechanisms to support complex queries over large datasets. This paper introduces an ontology-driven provenance management infrastructure for biology experiment data, as part of the Semantic Problem Solving Environment (SPSE) for Trypanosoma cruzi (T.cruzi). This provenance infrastructure, called T.cruzi Provenance Management System (PMS), is underpinned by (a) a domain-specific provenance ontology called Parasite Experiment ontology, (b) specialized query operators for provenance analysis, and (c) a provenance query engine. The query engine uses a novel optimization technique based on materialized views called materialized provenance views (MPV) to scale with increasing data size and query complexity. This comprehensive ontology-driven provenance infrastructure not only allows effective tracking and management of ongoing experiments in the Tarleton Research Group at the Center for Tropical and Emerging Global Diseases (CTEGD), but also enables researchers to retrieve the complete provenance information of scientific results for publication in literature.
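As an illustrative sketch only (the namespace and property names below are hypothetical stand-ins, not the actual T.cruzi PMS ontology or API), ontology-driven provenance of this kind can be pictured as an RDF graph queried with SPARQL, here using rdflib:

```python
# Illustrative sketch: a tiny provenance graph in rdflib, queried for the
# lineage of a result via SPARQL. The namespace and properties are invented
# stand-ins for terms a domain provenance ontology would define.
from rdflib import Graph, Namespace, Literal

PROV = Namespace("http://example.org/parasite-provenance#")

g = Graph()
g.add((PROV.KnockoutPlasmid_42, PROV.generatedBy, PROV.CloningStep_7))
g.add((PROV.CloningStep_7, PROV.usedSample, PROV.Sample_13))
g.add((PROV.CloningStep_7, PROV.performedBy, Literal("CTEGD lab")))

# A provenance-style query: everything recorded about the process that
# produced a given data entity.
results = g.query("""
    PREFIX prov: <http://example.org/parasite-provenance#>
    SELECT ?step ?p ?o WHERE {
        prov:KnockoutPlasmid_42 prov:generatedBy ?step .
        ?step ?p ?o .
    }
""")
for row in results:
    print(row)
```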
Grid computing technology for hydrological applications
NASA Astrophysics Data System (ADS)
Lecca, G.; Petitdidier, M.; Hluchy, L.; Ivanovic, M.; Kussul, N.; Ray, N.; Thieron, V.
2011-06-01
Advances in e-Infrastructure promise to revolutionize sensing systems, the way in which data are collected and assimilated, and the way complex water systems are simulated and visualized. According to the EU Infrastructure 2010 work-programme, data and compute infrastructures and their underlying technologies, whether oriented toward scientific challenges or complex problem solving in engineering, are expected to converge into so-called knowledge infrastructures, leading to more effective research, education and innovation in the next decade and beyond. Grid technology is recognized as a fundamental component of e-Infrastructures. Nevertheless, this emerging paradigm raises several topics, including data management, algorithm optimization, security, performance (speed, throughput, bandwidth, etc.), and scientific cooperation and collaboration issues, that require further examination to exploit it fully and to better inform future research policies. The paper illustrates the results of six different surface and subsurface hydrology applications that have been deployed on the Grid. All the applications aim to answer strong requirements from civil society at large with respect to natural and anthropogenic risks. Grid technology has been successfully tested to improve flood prediction, groundwater resources management and Black Sea hydrological surveying by providing large computing resources. It is also shown that Grid technology facilitates e-cooperation among partners by means of services for authentication and authorization, seamless access to distributed data sources, data protection and access rights, and standardization.
Snyder, Kimberly; Rieker, Patricia P.
2014-01-01
Functioning program infrastructure is necessary for achieving public health outcomes. It is what supports program capacity, implementation, and sustainability. The public health program infrastructure model presented in this article is grounded in data from a broader evaluation of 18 state tobacco control programs and previous work. The newly developed Component Model of Infrastructure (CMI) addresses the limitations of a previous model and contains 5 core components (multilevel leadership, managed resources, engaged data, responsive plans and planning, networked partnerships) and 3 supporting components (strategic understanding, operations, contextual influences). The CMI is a practical, implementation-focused model applicable across public health programs, enabling linkages to capacity, sustainability, and outcome measurement. PMID:24922125
NASA Astrophysics Data System (ADS)
Graham, Christopher J.
2012-05-01
Success in the future battle space is increasingly dependent on rapid access to the right information. Faced with a shrinking budget, the Government has a mandate to improve intelligence productivity, quality, and reliability. To achieve increased ISR effectiveness, leveraging tactical-edge mobile devices via integration with strategic cloud-based infrastructure is the single most likely candidate area for dramatic near-term impact. This paper discusses the security, collaboration, and usability components of this evolving space. The three paramount tenets outlined below embody how mission information is exchanged securely and efficiently, with social-media cooperativeness. Tenet 1: Complete security, privacy, and data integrity must be ensured within the net-centric battle space. This paper discusses data security on a mobile device, data at rest on a cloud-based system, authorization and access control, and securing data transport between entities (a minimal sketch follows below). Tenet 2: Lack of collaborative information sharing and content reliability jeopardizes mission objectives and limits end-user capability. This paper discusses cooperative pairing of mobile devices and cloud systems, enabling social-media-style interaction via tagging, metadata refinement, and sharing of pertinent data. Tenet 3: Fielded mobile solutions must address usability and complexity. Simplicity is a powerful paradigm on mobile platforms, where complex applications are not utilized and simple, yet powerful, applications flourish. This paper discusses strategies for ensuring mobile applications are streamlined and usable at the tactical edge through focused feature sets, leveraging the power of the back-end cloud, minimization of differing HMI concepts, and directed end-user feedback.
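A minimal sketch of the data-at-rest part of Tenet 1, assuming the widely used Python cryptography package (the paper does not prescribe this library): authenticated symmetric encryption of a payload before it is persisted on the device or in the cloud.

```python
# Minimal data-at-rest sketch using the `cryptography` package: symmetric
# authenticated encryption of a mission payload before storage. A fielded
# system would add key management, access control, and transport security
# (e.g., TLS) on top of this.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice: derived/stored in a keystore
f = Fernet(key)

payload = b"ISR track data: lat=..., lon=..."
token = f.encrypt(payload)           # ciphertext, safe to persist at rest
assert f.decrypt(token) == payload   # integrity-checked on read-back
```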
AI Techniques in a Context-Aware Ubiquitous Environment
NASA Astrophysics Data System (ADS)
Coppola, Paolo; Mea, Vincenzo Della; di Gaspero, Luca; Lomuscio, Raffaella; Mischis, Danny; Mizzaro, Stefano; Nazzi, Elena; Scagnetto, Ivan; Vassena, Luca
Nowadays, the mobile computing paradigm and the widespread diffusion of mobile devices are quickly changing and replacing many common assumptions about software architectures and interaction/communication models. The environment in particular, or more generally the so-called user context, is claiming a central role in the everyday use of cellular phones, PDAs, etc. This is due to the huge amount of data "suggested" by the surrounding environment that can be helpful in many common tasks. For instance, the current context can help a search engine refine its result set in a useful way, providing the user with more suitable and exploitable information. Moreover, we can take full advantage of this new data source by "pushing" active contents towards mobile devices, empowering the latter with new features (e.g., applications) that allow the user to interact fruitfully with the current context. Following this vision, mobile devices become dynamic self-adapting tools, shaped by the user's needs and the possibilities offered by the environment. The present work proposes MoBe: an approach for providing a basic infrastructure for pervasive context-aware applications on mobile devices, in which AI techniques (namely a principled combination of rule-based systems, Bayesian networks and ontologies) are applied to context inference. The aim is to devise a general inferential framework that eases the development of context-aware applications by integrating the information coming from physical and logical sensors (e.g., position, agenda) and reasoning about this information in order to infer new and more abstract contexts.
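A toy sketch of the kind of inference such a framework combines, with invented contexts, likelihoods, and rule (not MoBe's actual ontology-backed engine): a Bayesian update over candidate contexts from sensor evidence, followed by a rule that pushes content.

```python
# Toy context inference: Bayesian update over candidate contexts from a
# logical "motion" sensor, then a rule layer acting on the inferred context.
# Contexts, likelihoods, and the rule are invented for illustration.
priors = {"in_meeting": 0.2, "commuting": 0.3, "at_desk": 0.5}

# P(observation | context) for the motion sensor.
likelihood = {
    ("moving", "in_meeting"): 0.1,
    ("moving", "commuting"): 0.9,
    ("moving", "at_desk"): 0.2,
}

def update(posterior, observation):
    """One Bayes step: multiply by likelihoods, renormalize."""
    unnorm = {c: p * likelihood[(observation, c)] for c, p in posterior.items()}
    z = sum(unnorm.values())
    return {c: p / z for c, p in unnorm.items()}

posterior = update(priors, "moving")  # -> "commuting" is now most probable

# Rule layer: act on the inferred context.
if max(posterior, key=posterior.get) == "commuting":
    print("push: transit-schedule application")
```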
Experiments Toward the Application of Multi-Robot Systems to Disaster-Relief Scenarios
2015-09-01
responsibility is assessment, such as dislocated populations, degree of property damage, and remaining communications infrastructure . These are all...specific problems: evaluating of damage to infrastructure in the environment, e.g., traversability of roads; and localizing particular targets of interest...regarding hardware and software infrastructure are driven by the need for these systems to “survive the field” and allow for reliable evaluation of autonomy
ERIC Educational Resources Information Center
Office of Science and Technology Policy, Washington, DC.
In this report, the National Information Infrastructure (NII) services issue is addressed, and activities to advance the development of NII services are recommended. The NII is envisioned to grow into a seamless web of communications networks, computers, databases, and consumer electronics that will put vast amounts of information at users'…
Agile Development of a Smartphone App for Perinatal Monitoring in a Resource-Constrained Setting
Martinez, Boris; Hall-Clifford, Rachel; Coyote, Enma; Stroux, Lisa; Valderrama, Camilo E.; Aaron, Christopher; Francis, Aaron; Hendren, Cate; Rohloff, Peter; Clifford, Gari D.
2017-01-01
Technology provides the potential to empower frontline healthcare workers with low levels of training and literacy, particularly in low- and middle-income countries. An obvious platform for achieving this aim is the smartphone, a low-cost, almost ubiquitous device with good supply chain infrastructure and a general cultural acceptance for its use. In particular, the smartphone offers the opportunity to provide augmented or procedural information through active audiovisual aids to illiterate or untrained users, as described in this article. In this article, the process of refinement and iterative design of a smartphone application prototype to support perinatal surveillance in rural Guatemala for indigenous Maya lay midwives with low levels of literacy and technology exposure is described. Following on from a pilot to investigate the feasibility of this system, a two-year project to develop a robust in-field system was initiated, culminating in a randomized controlled trial of the system, which is ongoing. The development required an agile approach, with the development team working both remotely and in country to identify and solve key technical and cultural issues in close collaboration with the midwife end-users. This article describes this process and intermediate results. The application prototype was refined in two phases, with expanding numbers of end-users. Some of the key weaknesses identified in the system during the development cycles were user error when inserting and assembling cables and interacting with the 1-D ultrasound-recording interface, as well as unexpectedly poor bandwidth for data uploads in the central healthcare facility. Safety nets for these issues were developed and the resultant system was well accepted and highly utilized by the end-users. To evaluate the effectiveness of the system after full field deployment, the authors analyzed data quality and corruption over time, as well as general usage of the system and the volume of application support for end-users required of the in-country team. Through iterative review of data quality and consistent use of user feedback, the volume and percentage of high-quality recordings was increased monthly. Final analysis of the impact of the system on obstetrical referral volume and maternal and neonatal clinical outcomes is pending conclusion of the ongoing clinical trial. PMID:28936111
Foley, Nora K.; Jaskula, Brian W.; Kimball, Bryn E.; Schulte, Ruth F.; Schulz, Klaus J.; DeYoung,, John H.; Seal, Robert R.; Bradley, Dwight C.
2017-12-19
Gallium is a soft, silvery metallic element with an atomic number of 31 and the chemical symbol Ga. Gallium is used in a wide variety of products that have microelectronic components containing either gallium arsenide (GaAs) or gallium nitride (GaN). GaAs is able to change electricity directly into laser light and is used in the manufacture of optoelectronic devices (laser diodes, light-emitting diodes [LEDs], photo detectors, and solar cells), which are important for aerospace and telecommunications applications and industrial and medical equipment. GaAs is also used in the production of highly specialized integrated circuits, semiconductors, and transistors; these are necessary for defense applications and high-performance computers. For example, cell phones with advanced personal computer-like functionality (smartphones) use GaAs-rich semiconductor components. GaN is used principally in the manufacture of LEDs and laser diodes, power electronics, and radio-frequency electronics. Because GaN power transistors operate at higher voltages and with a higher power density than GaAs devices, the uses for advanced GaN-based products are expected to increase in the future. Gallium technologies also have large power-handling capabilities and are used for cable television transmission, commercial wireless infrastructure, power electronics, and satellites. Gallium is also used for such familiar applications as screen backlighting for computer notebooks, flat-screen televisions, and desktop computer monitors. Gallium is dispersed in small amounts in many minerals and rocks where it substitutes for elements of similar size and charge, such as aluminum and zinc. For example, gallium is found in small amounts (about 50 parts per million) in such aluminum-bearing minerals as diaspore-boehmite and gibbsite, which form bauxite deposits, and in the zinc-sulfide mineral sphalerite, which is found in many mineral deposits. At the present time, gallium metal is derived mainly as a byproduct of the processing of bauxite ore for aluminum; lesser amounts of gallium metal are produced from the processing of sphalerite ore from three types of deposits (sediment-hosted, Mississippi Valley-type, and volcanogenic massive sulfide) for zinc. The United States is expected to meet its current and expected future needs for gallium through imports of primary, recycled, and refined gallium, as well as through domestic production of recycled and refined gallium. The U.S. Geological Survey estimates that world resources of gallium in bauxite exceed 1 billion kilograms, and a considerable quantity of gallium could be present in world zinc reserves.
GreenView and GreenLand Applications Development on SEE-GRID Infrastructure
NASA Astrophysics Data System (ADS)
Mihon, Danut; Bacu, Victor; Gorgan, Dorian; Mészáros, Róbert; Gelybó, Györgyi; Stefanut, Teodor
2010-05-01
The GreenView and GreenLand applications [1] have been developed through the SEE-GRID-SCI (SEE-GRID eInfrastructure for regional eScience) FP7 project co-funded by the European Commission [2]. The development of environment applications is a challenge for Grid technologies and software development methodologies. This presentation exemplifies the development of the GreenView and GreenLand applications over the SEE-GRID infrastructure by the Grid Application Development Methodology [3]. Today's environmental applications are used in various domains of Earth science such as meteorology, ground and atmospheric pollution, ground metal detection, and weather prediction. These applications run on satellite images (e.g. Landsat, MERIS, MODIS, etc.), and the accuracy of the output results depends mostly on the quality of these images. The main drawback of such environmental applications is the computational and storage power needed (some images are almost 1 GB in size) to process such a large data volume. Most applications requiring high computational resources have therefore approached migration onto the Grid infrastructure, which offers computing power by running the atomic application components on different Grid nodes in sequential or parallel mode. The middleware used between the Grid infrastructure and client applications is ESIP (Environment Oriented Satellite Image Processing Platform), which is based on the gProcess platform [4]. In its current form, gProcess is used for launching new processes on the Grid nodes, and also for monitoring the execution status of these processes. This presentation highlights two case studies of Grid-based environmental applications, GreenView and GreenLand [5]. GreenView is used in correlation with MODIS (Moderate Resolution Imaging Spectroradiometer) satellite images and meteorological datasets in order to produce pseudo-colored temperature and vegetation maps for different geographical CEE (Central and Eastern Europe) regions. On the other hand, GreenLand is used for generating maps of different vegetation indexes (e.g. NDVI, EVI, SAVI, GEMI) based on Landsat satellite images (a minimal NDVI sketch follows the reference list below). Both applications use interpolation and random-value generation algorithms, as well as specific formulas for computing vegetation index values. The GreenView and GreenLand applications have been tested over the SEE-GRID infrastructure, and the performance evaluation is reported in [6]. Improving execution time (through better parallelization of jobs), extending the geographical areas to other parts of the Earth, and adding new user-interaction techniques for spatial data and large sets of satellite images are the goals of future work. References [1] GreenView application on Wiki, http://wiki.egee-see.org/index.php/GreenView [2] SEE-GRID-SCI Project, http://www.see-grid-sci.eu/ [3] Gorgan D., Stefanut T., Bâcu V., Mihon D., Grid based Environment Application Development Methodology, SCICOM, 7th International Conference on "Large-Scale Scientific Computations", 4-8 June, 2009, Sozopol, Bulgaria, (To be published by Springer), (2009). [4] Gorgan D., Bacu V., Stefanut T., Rodila D., Mihon D., Grid based Satellite Image Processing Platform for Earth Observation Applications Development. IDAACS'2009 - IEEE Fifth International Workshop on "Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications", 21-23 September, Cosenza, Italy, IEEE Published in Computer Press, 247-252 (2009).
[5] Mihon D., Bacu V., Stefanut T., Gorgan D., "Grid Based Environment Application Development - GreenView Application". ICCP2009 - IEEE 5th International Conference on Intelligent Computer Communication and Processing, 27 Aug, 2009 Cluj-Napoca. Published by IEEE Computer Press, pp. 275-282 (2009). [6] Danut Mihon, Victor Bacu, Dorian Gorgan, Róbert Mészáros, Györgyi Gelybó, Teodor Stefanut, Practical Considerations on the GreenView Application Development and Execution over SEE-GRID. SEE-GRID-SCI User Forum, 9-10 Dec 2009, Bogazici University, Istanbul, Turkey, ISBN: 978-975-403-510-0, pp. 167-175 (2009).
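As a minimal illustration of the vegetation-index computations mentioned above (a sketch with invented array values, not GreenLand's actual Grid workflow), NDVI is the normalized difference of the near-infrared and red bands:

```python
# Minimal NDVI sketch: normalized difference of near-infrared and red
# reflectance. Array values are invented; real GreenLand inputs would be
# Landsat band rasters.
import numpy as np

red = np.array([[0.10, 0.12], [0.30, 0.25]])   # red reflectance
nir = np.array([[0.50, 0.55], [0.35, 0.40]])   # near-infrared reflectance

ndvi = (nir - red) / (nir + red + 1e-12)       # NDVI in [-1, 1]
print(ndvi)
```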
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tiegs, T.N.
The objective of this Cooperative Research and Development Agreement (CRADA) was to develop composites of TiC-Ni3Al with refined grain microstructures for application in diesel engine fuel injection devices. Grain refinement is important for improved wear resistance and high strength in the applications of interest. Attrition milling effectively reduces the initial particle size and leads to a reduction of the final grain size. However, an increase in oxygen content occurs concomitantly with the grinding operation, and decreased densification of the compacts occurs during sintering.
Economics in Criticality and Restoration of Energy Infrastructures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boyd, Gale A.; Flaim, Silvio J.; Folga, Stephen M.
Economists, systems analysts, engineers, regulatory specialists, and other experts were assembled from academia, the national laboratories, and the energy industry to discuss present restoration practices (many have already been defined to the level of operational protocols) in the sectors of the energy infrastructure as well as other infrastructures; to identify whether economics, a discipline concerned with the allocation of scarce resources, is explicitly or implicitly a part of restoration strategies; and to determine whether there are novel economic techniques and solution methods that could be used to help restore energy services more quickly than present practices do, or to restore service more efficiently from an economic perspective. Acknowledgements: Development of this work into a coherent product with a useful message has occurred thanks to the thoughtful support of several individuals. Kenneth Friedman, Department of Energy, Office of Energy Assurance, provided the impetus for the work, as well as several suggestions and reminders of direction along the way. Funding from DOE/OEA was critical to the completion of this effort. Arnold Baker, Chief Economist, Sandia National Laboratories, and James Peerenboom, Director, Infrastructure Assurance Center, Argonne National Laboratory, provided valuable contacts that helped to populate the authoring team with the proper mix of economists, engineers, and systems and regulatory specialists to meet the objectives of the work. Several individuals provided valuable review of the document at various stages of completion and offered suggestions that were valuable to the editing process. This list of reviewers includes Jeffrey Roark, Economist, Tennessee Valley Authority; James R. Dalrymple, Manager of Transmission System Services and Transmission/Power Supply, Tennessee Valley Authority; William Mampre, Vice President, EN Engineering; Kevin Degenstein, EN Engineering; and Patrick Wilgang, Department of Energy, Office of Energy Assurance. With many authors, creating a document with a single voice is a difficult task. Louise Maffitt, Senior Research Associate, Institute for Engineering Research and Applications at New Mexico Institute of Mining & Technology (on contract to Sandia National Laboratories), served a vital role in the development of this document by taking the unedited material (in structured format) and refining the basic language so as to make the flow of the document as close to a single voice as one could hope for. Louise's work made the job of reducing the content to a readable length an easier process. Additional editorial suggestions from the authors themselves, particularly from Sam Flaim, Steve Folga, and Doug Gotham, expedited this process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Dongsheng; Lavender, Curt
2015-05-08
Improving yield strength and asymmetry is critical to expanding applications of magnesium alloys in industry for higher fuel efficiency and lower CO2 production. Grain refinement is an efficient method for strengthening low-symmetry magnesium alloys, achievable by precipitate refinement. This study provides guidance on how precipitate engineering can improve mechanical properties through grain refinement. Precipitate refinement for improving yield strengths and asymmetry is simulated quantitatively by coupling a stochastic second-phase grain refinement model and a modified polycrystalline crystal viscoplasticity φ-model. Using the stochastic second-phase grain refinement model, grain size is quantitatively determined from the precipitate size and volume fraction. Yield strengths, yield asymmetry, and deformation behavior are calculated from the modified φ-model. If the precipitate shape and size remain constant, grain size decreases with increasing precipitate volume fraction. If the precipitate volume fraction is kept constant, grain size decreases with decreasing precipitate size during precipitate refinement. Yield strengths increase, and asymmetry approaches one, with decreasing grain size, as contributed by increasing precipitate volume fraction or decreasing precipitate size.
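The abstract's qualitative conclusion (strength rises as grain size falls) matches the classic Hall-Petch relation, shown here as background; the paper's coupled stochastic/φ-model treatment is more detailed than this baseline:

```latex
% Classic Hall-Petch relation for grain-size strengthening (background
% baseline, not the paper's coupled model): yield strength rises as the
% grain size d decreases.
\[
\sigma_y = \sigma_0 + k_y\, d^{-1/2}
\]
% \sigma_0: friction stress; k_y: strengthening coefficient; d: grain size.
```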
DOT National Transportation Integrated Search
2012-06-01
A small team of university-based transportation system experts and simulation experts has been assembled to develop, test, and apply an approach to assessing road infrastructure capacity using micro traffic simulation supported by publically avai...
ERIC Educational Resources Information Center
Hassler, Vesna; Biely, Helmut
1999-01-01
Describes the Digital Signature Project that was developed in Austria to establish an infrastructure for applying smart card-based digital signatures in banking and electronic-commerce applications. Discusses the need to conform to international standards, an international certification infrastructure, and security features for a public directory…
The Information Superhighway and the National Information Infrastructure (NII).
ERIC Educational Resources Information Center
Griffith, Jane Bortnick; Smith, Marcia S.
1994-01-01
Discusses issues connected with the information superhighway and the National Information Infrastructure (NII). Topics addressed include principles for government action; economic benefits; regulations; applications; information policy; pending federal legislation; private sector/government relationship; open access and universal service; privacy…
DOT National Transportation Integrated Search
2016-04-01
Achieving environmental sustainability of the US transportation infrastructure via more environmentally sound construction is not a trivial task. Our proposal, which addresses this critical area, is aiming at transforming concrete, the material of ...
DOT National Transportation Integrated Search
2014-09-01
This project studied application of acoustic emission (AE) technology to perform structural health monitoring of highway bridges. Highway bridges are a vital part of transportation infrastructure and there is need for reliable non-destructive met...
A grid-enabled web service for low-resolution crystal structure refinement.
O'Donovan, Daniel J; Stokes-Rees, Ian; Nam, Yunsun; Blacklow, Stephen C; Schröder, Gunnar F; Brunger, Axel T; Sliz, Piotr
2012-03-01
Deformable elastic network (DEN) restraints have proved to be a powerful tool for refining structures from low-resolution X-ray crystallographic data sets. Unfortunately, optimal refinement using DEN restraints requires extensive calculations and is often hindered by a lack of access to sufficient computational resources. The DEN web service presented here intends to provide structural biologists with access to resources for running computationally intensive DEN refinements in parallel on the Open Science Grid, the US cyberinfrastructure. Access to the grid is provided through a simple and intuitive web interface integrated into the SBGrid Science Portal. Using this portal, refinements combined with full parameter optimization that would take many thousands of hours on standard computational resources can now be completed in several hours. An example of the successful application of DEN restraints to the human Notch1 transcriptional complex using the grid resource, and summaries of all submitted refinements, are presented as justification.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saffer, Shelley
2014-12-01
This is the final report of DOE award DE-SC0001132, Advanced Artificial Science: the development of an artificial science and engineering research infrastructure to facilitate innovative computational modeling, analysis, and application in interdisciplinary areas of scientific investigation. This document describes the achievement of the goals and the resulting research made possible by this award.
Evaluation of Service Level Agreement Approaches for Portfolio Management in the Financial Industry
NASA Astrophysics Data System (ADS)
Pontz, Tobias; Grauer, Manfred; Kuebert, Roland; Tenschert, Axel; Koller, Bastian
The idea of service-oriented Grid computing seems to have the potential for a fundamental paradigm change and a new architectural alignment in the design of IT infrastructures. There is a wide range of technical approaches from scientific communities that describe basic infrastructures and middlewares for integrating Grid resources, so that Grid applications are by now technically realizable. Hence, Grid computing needs viable business models and enhanced infrastructures to move from academic application to commercial application. For commercial usage of these developments, service level agreements are needed. The approaches developed so far are primarily of academic interest and have mostly not been put into practice. Based on a business use case from the financial industry, five service level agreement approaches are evaluated in this paper. Based on the evaluation, a management architecture has been designed and implemented as a prototype.
Dinov, Ivo D; Siegrist, Kyle; Pearl, Dennis K; Kalinin, Alexandr; Christou, Nicolas
2016-06-01
Probability distributions are useful for modeling, simulation, analysis, and inference on varieties of natural processes and physical phenomena. There are uncountably many probability distributions. However, a few dozen families of distributions are commonly defined and are frequently used in practice for problem solving, experimental applications, and theoretical studies. In this paper, we present a new computational and graphical infrastructure, the Distributome , which facilitates the discovery, exploration and application of diverse spectra of probability distributions. The extensible Distributome infrastructure provides interfaces for (human and machine) traversal, search, and navigation of all common probability distributions. It also enables distribution modeling, applications, investigation of inter-distribution relations, as well as their analytical representations and computational utilization. The entire Distributome framework is designed and implemented as an open-source, community-built, and Internet-accessible infrastructure. It is portable, extensible and compatible with HTML5 and Web2.0 standards (http://Distributome.org). We demonstrate two types of applications of the probability Distributome resources: computational research and science education. The Distributome tools may be employed to address five complementary computational modeling applications (simulation, data-analysis and inference, model-fitting, examination of the analytical, mathematical and computational properties of specific probability distributions, and exploration of the inter-distributional relations). Many high school and college science, technology, engineering and mathematics (STEM) courses may be enriched by the use of modern pedagogical approaches and technology-enhanced methods. The Distributome resources provide enhancements for blended STEM education by improving student motivation, augmenting the classical curriculum with interactive webapps, and overhauling the learning assessment protocols.
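The Distributome itself is a web-based (HTML5/Web 2.0) infrastructure; as a loose analogue of its discover-explore-apply workflow in Python, scipy.stats exposes a comparable catalogue of distribution families (this sketch is not the Distributome API):

```python
# Analogous "explore and apply" workflow using scipy.stats, shown only as
# an illustration of working with a catalogue of distribution families.
from scipy import stats

# Explore: moments and quantiles of a Gamma(shape=2, scale=3) model.
dist = stats.gamma(a=2, scale=3)
print(dist.mean(), dist.var(), dist.ppf(0.95))

# Apply: fit the family to data for inference and model comparison.
data = dist.rvs(size=1000, random_state=0)
shape, loc, scale = stats.gamma.fit(data, floc=0)
print(shape, scale)  # should be near (2, 3)
```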
Network Interdependency Modeling for Risk Assessment on Built Infrastructure Systems
2013-10-01
does begin to address infrastructure decay as a source of risk comes from the Department of Homeland Security (DHS). In 2009, the DHS Science and...network of connected edges and nodes. The National Research Council (2005) reported that the study of networks as a science and applications of...principles from this science are still in its early stages. As modern infrastructures have become more interlinked, knowledge of an infrastructure’s network
Tan, J K
1999-11-01
The critical success factor (CSF) approach is a technique that will aid health administrators, planners and managers to identify, specify and sort among the most relevant and critical factors determining an organization's survival and success. Following a top-down management perspective, this paper discusses the CSF methodology as a strategic information management process comprising several important phases: (i) understanding the external factors such as the organization's industry, market and environment; (ii) achieving strong support and championship from top management; (iii) encouraging the proactive involvement of management and staff in generic CSF identification; (iv) educating and directing the participation of staff members in CSF verification and further refinement of generic CSFs into specific CSFs; and (v) aggregating, prioritizing and translating activity-related CSFs into organizational information requirements for the design of the organization's management information infrastructure. The implementation of this CSF approach is illustrated in the context of a British Columbia community hospital, with insights provided into key issues for future health researchers and practitioners.
Uzbek licensing round brings geology, potential into focus
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heafford, A.P.; Lichtman, G.S.
1993-08-09
Uzbekistan is a Central Asian republic that declared independence from the former Soviet Union in 1991. Uzbekistan produces about 18 million bbl/year of oil and 40 bcf/year of gas. It is the third largest gas producer in the Commonwealth of Independent States and imports oil. The Uzbek government and oil and gas industry are offering exploration acreage for foreign participation via competitive bid. Acreage on offer includes fields for development and unproven or underexplored areas. Terms awaiting approval by the Cabinet of Ministers provide financial incentives for rapid development of existing reserves, creation of required infrastructure, and long-term investment growth. License areas concentrate on acreage where western equipment and technology can bring new reserves economically on line in the near future. The national oil company Uzbekneftegaz was created in 1992 to oversee the extraction, transport, and refining of hydrocarbons in Uzbekistan. The paper describes some of the fields and infrastructure in place, the structural geology, stratigraphy, petroleum distribution, source rocks, reservoir rocks, cap rocks, traps, and hydrocarbon composition, which includes oil, gases, and gas condensates.
This presentation will document, benchmark and evaluate state-of-the-science research and implementation on BMP performance, monitoring, and integration for green infrastructure applications, to manage wet weather flow, storm-water-runoff stressor relief and remedial sustainable w...
DOT National Transportation Integrated Search
2001-05-01
The purpose of this working paper is to provide an estimate of the federal proportion of funds expended on intelligent transportation systems (ITS) infrastructure deployments for fiscal year (FY) 2000 using budget and planning data from state departm...
47 CFR 90.615 - Individual channels available in the General Category in 806-824/851-869 MHz band.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Critical Infrastructure Industry Categories from three to five years after the release of a public notice... applicants in the Public Safety or Critical Infrastructure Industry Categories from three to five years after...
47 CFR 90.615 - Individual channels available in the General Category in 806-824/851-869 MHz band.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Critical Infrastructure Industry Categories from three to five years after the release of a public notice... applicants in the Public Safety or Critical Infrastructure Industry Categories from three to five years after...
47 CFR 90.615 - Individual channels available in the General Category in 806-824/851-869 MHz band.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Critical Infrastructure Industry Categories from three to five years after the release of a public notice... applicants in the Public Safety or Critical Infrastructure Industry Categories from three to five years after...
47 CFR 90.615 - Individual channels available in the General Category in 806-824/851-869 MHz band.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Critical Infrastructure Industry Categories from three to five years after the release of a public notice... applicants in the Public Safety or Critical Infrastructure Industry Categories from three to five years after...
Evaluating Green/Gray Infrastructure for CSO/Stormwater Control
The NRMRL is conducting this project to evaluate the water quality and quantity benefits of a large-scale application of green infrastructure (low-impact development/best management practices) retrofits in an entire subcatchment. It will document ORD's effort to demonstrate the e...
Development of the AuScope Australian Earth Observing System
NASA Astrophysics Data System (ADS)
Rawling, T.
2017-12-01
Advances in monitoring technology and significant investment in new national research initiatives will provide significant new opportunities for the delivery of novel geoscience data streams from across the Australian continent over the next decade. The AuScope Australian Earth Observing System (AEOS) is linking field and laboratory infrastructure across Australia to form a national sensor array focusing on the solid Earth. As such, AuScope is working with these programs to deploy observational infrastructure, including MT, passive seismic, and GNSS networks, across the entire Australian continent. Where possible, the observational grid will be co-located with strategic basement drilling in areas of shallow cover and tied to national reflection seismic and sampling transects. This integrated suite of distributed Earth observation and imaging sensors will provide unprecedented imaging fidelity of our crust, across all length and time scales, to fundamental and applied researchers in the earth, environmental and geospatial sciences. The AEOS will be the Earth science community's Square Kilometer Array (SKA) - a distributed telescope that looks INTO the earth rather than away from it - a 10 million SKA. The AEOS is strongly aligned with other community strategic initiatives, including the UNCOVER research program, as well as other National Collaborative Research Infrastructure programs such as the Terrestrial Ecosystem Research Network (TERN) and the Integrated Marine Observing System (IMOS), providing an interdisciplinary collaboration platform across the earth and environmental sciences. There is also very close alignment between AuScope and similar international programs such as EPOS, the USArray and EarthCube - potential collaborative linkages we are currently pursuing more formally. The AuScope AEOS infrastructure system is ultimately designed to enable the progressive construction, refinement and ongoing enrichment of a live, "FAIR" four-dimensional Earth model for the Australian continent and its immediate environs.
Panter, Jenna; Ogilvie, David
2015-01-01
Objective Some studies have assessed the effectiveness of environmental interventions to promote physical activity, but few have examined how such interventions work. We investigated the environmental mechanisms linking an infrastructural intervention with behaviour change. Design Natural experimental study. Setting Three UK municipalities (Southampton, Cardiff and Kenilworth). Participants Adults living within 5 km of new walking and cycling infrastructure. Intervention Construction or improvement of walking and cycling routes. Exposure to the intervention was defined in terms of residential proximity. Outcome measures Questionnaires at baseline and 2-year follow-up assessed perceptions of the supportiveness of the environment, use of the new infrastructure, and walking and cycling behaviours. Analysis proceeded via factor analysis of perceptions of the physical environment (step 1) and regression analysis to identify plausible pathways involving physical and social environmental mediators and refine the intervention theory (step 2) to a final path analysis to test the model (step 3). Results Participants who lived near and used the new routes reported improvements in their perceptions of provision and safety. However, path analysis (step 3, n=967) showed that the effects of the intervention on changes in time spent walking and cycling were largely (90%) explained by a simple causal pathway involving use of the new routes, and other pathways involving changes in environmental cognitions explained only a small proportion of the effect. Conclusions Physical improvement of the environment itself was the key to the effectiveness of the intervention, and seeking to change people's perceptions may be of limited value. Studies of how interventions lead to population behaviour change should complement those concerned with estimating their effects in supporting valid causal inference. PMID:26338837
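As a generic illustration of the product-of-paths logic behind step 3 (invented variable names and effect sizes; the study fitted a full path model, not this two-equation mediation), the indirect effect is the product of the exposure-to-mediator and mediator-to-outcome coefficients:

```python
# Generic mediation sketch (a*b indirect effect) with statsmodels; the
# variables and coefficients are invented and this is not the study's model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 967
exposure = rng.binomial(1, 0.5, n).astype(float)   # lives near new route
use = 0.8 * exposure + rng.normal(0, 1, n)         # mediator: route use
walking = 0.9 * use + 0.1 * exposure + rng.normal(0, 1, n)

# Path a: exposure -> use; paths b and c' (direct): use, exposure -> walking.
a = sm.OLS(use, sm.add_constant(exposure)).fit().params[1]
fit = sm.OLS(walking, sm.add_constant(np.column_stack([use, exposure]))).fit()
b, c_direct = fit.params[1], fit.params[2]

print("indirect (a*b):", a * b, "direct:", c_direct)
```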
NASA Astrophysics Data System (ADS)
L'Heureux, Zara E.
This thesis proposes that internal combustion piston engines can help clear the way for a transformation in the energy, chemical, and refining industries that is akin to the transition computer technology experienced with the shift from large mainframes to small personal computers and large farms of individually small, modular processing units. This thesis provides a mathematical foundation, multi-dimensional optimizations, experimental results, an engine model, and a techno-economic assessment, all working towards quantifying the value of repurposing internal combustion piston engines for new applications in modular, small-scale technologies, particularly for energy and chemical engineering systems. Many chemical engineering and power generation industries have focused on increasing individual unit sizes and centralizing production. This "bigger is better" concept makes it difficult to evolve and incorporate change. Large systems are often designed with long lifetimes, incorporate innovation slowly, and necessitate high upfront investment costs. Breaking away from this cycle is essential for promoting change, especially change happening quickly in the energy and chemical engineering industries. The ability to evolve during a system's lifetime provides a competitive advantage in a field dominated by large and often very old equipment that cannot respond to technology change. This thesis specifically highlights the value of small, mass-manufactured internal combustion piston engines retrofitted to participate in non-automotive system designs. The applications are unconventional and stem first from the observation that, when normalized by power output, internal combustion engines are one hundred times less expensive than conventional, large power plants. This cost disparity motivated a look at scaling laws to determine if scaling across both individual unit size and number of units produced would predict the two order of magnitude difference seen here. For the first time, this thesis provides a mathematical analysis of scaling with a combination of both changing individual unit size and varying the total number of units produced. Different paths to meet a particular cumulative capacity are analyzed and show that total costs are path dependent and vary as a function of the unit size and number of units produced. The path dependence identified is fairly weak, however, and for all practical applications, the underlying scaling laws seem unaffected. This analysis continues to support the interest in pursuing designs built around small, modular infrastructure. Building on the observation that internal combustion engines are an inexpensive power-producing unit, the first optimization in this thesis focuses on quantifying the value of engine capacity committing to deliver power in the day-ahead electricity and reserve markets, specifically based on pricing from the New York Independent System Operator (NYISO). An optimization was written in Python to determine, based on engine cost, fuel cost, engine wear, engine lifetime, and electricity prices, when and how much of an engine's power should be committed to a particular energy market. The optimization aimed to maximize profit for the engine and generator (engine genset) system acting as a price-taker. The result is an annual profit on the order of $30 per kilowatt. The most value in the engine genset is in its commitments to the spinning reserve market, where power is often committed but not always called on to deliver.
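A greatly simplified, price-taker sketch of the day-ahead commitment logic described here (the thesis' actual Python optimization also covers reserve markets, engine lifetime, and replacement cost; prices and costs below are invented):

```python
# Simplified price-taker dispatch: commit full engine output in hours where
# the day-ahead price covers the marginal cost (fuel + wear). Invented
# numbers; the real optimization is far richer.
def dispatch(day_ahead_prices, fuel_cost=45.0, wear_cost=10.0, p_max_kw=20.0):
    """Prices and costs in $/MWh; returns (schedule_kw, expected_profit_$)."""
    marginal = fuel_cost + wear_cost
    schedule, profit = [], 0.0
    for price in day_ahead_prices:
        run = price > marginal
        kw = p_max_kw if run else 0.0
        schedule.append(kw)
        if run:
            profit += (price - marginal) * kw / 1000.0  # kW -> MW for 1 h
    return schedule, profit

prices = [32, 41, 58, 95, 120, 70, 49, 30]  # hypothetical hourly $/MWh
print(dispatch(prices))
```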
This analysis highlights the benefits of modularity in energy generation and provides one example where the system is so inexpensive and short-lived, that the optimization views the engine replacement cost as a consumable operating expense rather than a capital cost. Having the opportunity to incorporate incremental technological improvements in a system's infrastructure throughout its lifetime allows introduction of new technology with higher efficiencies and better designs. An alternative to traditionally large infrastructure that locks in a design and today's state-of-the-art technology for the next 50 - 70 years, is a system designed to incorporate new technology in a modular fashion. The modular engine genset system used for power generation is one example of how this works in practice. The largest single component of this thesis is modeling, designing, retrofitting, and testing a reciprocating piston engine used as a compressor. Motivated again by the low cost of an internal combustion engine, this work looks at how an engine (which is, in its conventional form, essentially a reciprocating compressor) can be cost-effectively retrofitted to perform as a small-scale gas compressor. In the laboratory, an engine compressor was built by retrofitting a one-cylinder, 79 cc engine. Various retrofitting techniques were incorporated into the system design, and the engine compressor performance was quantified in each iteration. Because the retrofitted engine is now a power consumer rather than a power-producing unit, the engine compressor is driven in the laboratory with an electric motor. Experimentally, compressed air engine exhaust (starting at elevated inlet pressures) surpassed 650 psia (about 45 bar), which makes this system very attractive for many applications in chemical engineering and refining industries. A model of the engine compressor system was written in Python and incorporates experimentally-derived parameters to quantify gas leakage, engine friction, and flow (including backflow) through valves. The model as a whole was calibrated and verified with experimental data and is used to explore engine retrofits beyond what was tested in the laboratory. Along with the experimental and modeling work, a techno-economic assessment is included to compare the engine compressor system with state-of-the-art, commercially-available compressors. Included in the financial analysis is a case study where an engine compressor system is modeled to achieve specific compression needs. The result of the assessment is that, indeed, the low engine cost, even with the necessary retrofits, provides a cost advantage over incumbent compression technologies. Lastly, this thesis provides an algorithm and case study for another application of small-scale units in energy infrastructure, specifically in energy storage. This study focuses on quantifying the value of small-scale, onsite energy storage in shaving peak power demands. This case study focuses on university-level power demands. The analysis finds that, because peak power is so costly, even small amounts of energy storage, when dispatched optimally, can provide significant cost reductions. This provides another example of the value of small-scale implementations, particularly in energy infrastructure. While the study focuses on flywheels and batteries as the energy storage medium, engine gensets could also be used to deliver power and shave peak power demands. 
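A simplified sketch of the peak-shaving dispatch idea in the final case study (greedy threshold rule with invented numbers; the thesis dispatches storage optimally against demand charges):

```python
# Greedy peak shaving: discharge a small store when demand exceeds a
# threshold, recharge below it. Numbers are invented for illustration.
def peak_shave(demand_kw, threshold_kw, energy_kwh, power_kw, dt_h=1.0):
    soc = energy_kwh          # state of charge, starts full
    shaved = []
    for d in demand_kw:
        if d > threshold_kw and soc > 0:
            discharge = min(d - threshold_kw, power_kw, soc / dt_h)
            soc -= discharge * dt_h
            shaved.append(d - discharge)
        else:
            charge = min(power_kw, (energy_kwh - soc) / dt_h,
                         max(threshold_kw - d, 0.0))
            soc += charge * dt_h
            shaved.append(d + charge)
    return shaved

demand = [800, 950, 1200, 1400, 1100, 900]   # hourly campus load, kW
print(peak_shave(demand, threshold_kw=1000, energy_kwh=600, power_kw=400))
```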
The overarching goal of this thesis is to introduce small-scale, modular infrastructure, with a particular focus on the opportunity to retrofit and repurpose inexpensive, mass-manufactured internal combustion engines in new and unconventional applications. The modeling and experimental work presented in this dissertation show very compelling results for engines incorporated into both energy generation infrastructure and chemical engineering industries via compression technologies. The low engine cost provides an opportunity to add retrofits whilst remaining cost competitive with the incumbent technology. This work supports the claim that modular infrastructure, built on the indivisible unit of an internal combustion engine, can revolutionize many industries by providing a low-cost mechanism for rapid change and promoting small-scale designs.
The role of the chief information officer in the health care organization in the 1990s.
Glaser, J P
1993-02-01
During the next decade, the role of the CIO will change in two major areas: 1. The relative importance of the CIO as the person who translates business and clinical needs into information technology ideas will diminish. Although this portion of the CIO role will not disappear, this role will be increasingly filled by senior management, clinicians, and other members of the hospital staff. 2. The CIO role will need to shift from an emphasis on managing implementations and projects to developing and advancing the infrastructure. CIOs need to distinguish between the expression of the asset (the application portfolio) and the information technology infrastructure (the remaining four components of the asset). While being pressured to deliver more applications, they can fail to invest in and manage the infrastructure. This is a mistake. By neglecting management of and investment in the infrastructure (e.g., staff training and data quality) or by failing to take advantage of new technologies, they can hinder the ability of an organization to deliver superior applications. Poor data quality will cripple an executive information system and a too-permissive stance toward hardware and operating system heterogeneity will hinder the ability to deliver a computerized patient record. Although some management of the infrastructure is in place, in general it is insufficient. Few organizations have both a distinct data management function and a technical architecture plan, and also develop and enforce key technical, data, and development standards. This insufficiency will hinder their ability to effectively and efficiently apply their information technology infrastructure. The role of the CIO will evolve due to several powerful forces.(ABSTRACT TRUNCATED AT 250 WORDS)
On-Board Switching and Routing Advanced Technology Study
NASA Technical Reports Server (NTRS)
Yegenoglu, F.; Inukai, T.; Kaplan, T.; Redman, W.; Mitchell, C.
1998-01-01
Future satellite communications is expected to be fully integrated into National and Global Information Infrastructures (NII/GII). These infrastructures will carry multi gigabit-per-second data rates, with integral switching and routing of constituent data elements. The satellite portion of these infrastructures must, therefore, be more than pipes through the sky. The satellite portion will also be required to perform very high speed routing and switching of these data elements to enable efficient broad area coverage to many home and corporate users. The technology to achieve the on-board switching and routing must be selected and developed specifically for satellite application within the next few years. This report presents evaluation of potential technologies for on-board switching and routing applications.
DOT National Transportation Integrated Search
2013-08-01
The Sustainable Design Guidelines were developed in Phase I of this research program (WA-RD 816.1). Here we report on the Phase II effort that beta-tested the Phase I Guidelines on example ferry terminal designs and refinements made ...
A Tool for Intersecting Context-Free Grammars and Its Applications
NASA Technical Reports Server (NTRS)
Gange, Graeme; Navas, Jorge A.; Schachte, Peter; Sondergaard, Harald; Stuckey, Peter J.
2015-01-01
This paper describes a tool for intersecting context-free grammars. Since this problem is undecidable, the tool follows a refinement-based approach and implements a novel refinement which is complete for regularly separable grammars. We show its effectiveness for safety verification of recursive multi-threaded programs.
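The abstract leaves the construction implicit, but refinement loops of this kind typically bottom out in a decidable sub-problem: intersecting a context-free grammar with a regular over-approximation of the other grammar via the classical Bar-Hillel product. The sketch below shows that product for a CFG in Chomsky normal form and a DFA; the names and representation are illustrative, not the tool's code.

```python
# Bar-Hillel product: CFG (in CNF) x DFA -> CFG for the intersection.
# cnf_rules maps a nonterminal to bodies, each either a terminal string
# or a pair (B, C) of nonterminals; delta maps (state, terminal) -> state.
def intersect(cnf_rules, start, states, delta, init, finals):
    prods = {}  # keys are "triple" nonterminals (q, A, r)
    for A, bodies in cnf_rules.items():
        for body in bodies:
            if isinstance(body, tuple):              # A -> B C
                B, C = body
                for q in states:
                    for s in states:
                        for r in states:
                            prods.setdefault((q, A, r), []).append(
                                ((q, B, s), (s, C, r)))
            else:                                    # A -> a
                for q in states:
                    r = delta.get((q, body))
                    if r is not None:
                        prods.setdefault((q, A, r), []).append(body)
    # One start triple per accepting state of the automaton.
    return prods, [(init, start, f) for f in finals]

# Demo: S -> S S | a (all a^n, n >= 1) intersected with even-length words.
prods, starts = intersect({"S": [("S", "S"), "a"]}, "S",
                          [0, 1], {(0, "a"): 1, (1, "a"): 0}, 0, {0})
```

Emptiness of the product grammar is then decidable by the usual productive-nonterminals fixpoint, which is the query a refinement loop answers after each over-approximation step.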
Connected vehicle applications: safety.
DOT National Transportation Integrated Search
2016-01-01
Connected vehicle safety applications are designed to increase situational awareness and reduce or eliminate crashes through vehicle-to-infrastructure, vehicle-to-vehicle, and vehicle-to-pedestrian data transmissions. Applications support advisories ...
NASA Astrophysics Data System (ADS)
Wiggins, H. V.; Warnick, W. K.; Hempel, L. C.; Henk, J.; Sorensen, M.; Tweedie, C. E.; Gaylord, A. G.
2007-12-01
As the creation and use of geospatial data in research, management, logistics, and education applications has proliferated, there is now a tremendous potential for advancing science through a variety of cyber-infrastructure applications, including Spatial Data Infrastructure (SDI) and related technologies. SDIs provide a necessary and common framework of standards, security, policies, procedures, and technology to support the effective acquisition, coordination, dissemination and use of geospatial data by multiple and distributed stakeholder and user groups. Despite the numerous research activities in the Arctic, there is no established SDI and, because of this lack of a coordinated infrastructure, there is inefficiency, duplication of effort, and reduced data quality and searchability of arctic geospatial data. The urgency for establishing this framework is significant considering the myriad of data being collected in celebration of the International Polar Year (IPY) in 2007-2008 and the current international momentum for an improved and integrated circum-arctic terrestrial-marine-atmospheric environmental observatories network. The key objective of this project is to lay the foundation for full implementation of an Arctic Spatial Data Infrastructure (ASDI) through an assessment of community needs, readiness, and resources and through the development of a prototype web-mapping portal.
European grid services for global earth science
NASA Astrophysics Data System (ADS)
Brewer, S.; Sipos, G.
2012-04-01
This presentation will provide an overview of the distributed computing services that the European Grid Infrastructure (EGI) offers to the Earth Sciences community and also explain the processes whereby Earth Science users can engage with the infrastructure. One of the main overarching goals for EGI over the coming year is to diversify its user-base. EGI therefore - through the National Grid Initiatives (NGIs) that provide the bulk of resources that make up the infrastructure - offers a number of routes whereby users, either individually or as communities, can make use of its services. At one level there are two approaches to working with EGI: either users can make use of existing resources and contribute to their evolution and configuration, or they can work with EGI, and hence the NGIs, to incorporate their own resources into the infrastructure to take advantage of EGI's monitoring, networking and managing services. Adopting this approach does not imply a loss of ownership of the resources. Both of these approaches are entirely applicable to the Earth Sciences community: the former because researchers within this field have been involved with EGI (and previously EGEE) as a Heavy User Community, and the latter because they have very specific needs, such as incorporating HPC services into their workflows, and these will require multi-skilled interventions to fully provide such services. In addition to the technical support services that EGI has been offering for the last year or so - the applications database, the training marketplace and the Virtual Organisation services - there now exists a dynamic short-term project framework that can be utilised to establish and operate services for Earth Science users. During this talk we will present a summary of various ongoing projects that will be of interest to Earth Science users with the intention that suggestions for future projects will emerge from the subsequent discussions: • The Federated Cloud Task Force is already providing a cloud infrastructure through a few committed NGIs. This is being made available to research communities participating in the Task Force and the long-term aim is to integrate these national clouds into a pan-European infrastructure for scientific communities. • The MPI group provides support for application developers to port and scale up parallel applications to the global European Grid Infrastructure. • A lively portal developer and provider community that can set up and operate custom, application and/or community specific portals for members of the Earth Science community to interact with EGI. • A project to assess the possibilities for federated identity management in EGI and the readiness of EGI member states for federated authentication and authorisation mechanisms. • Operating resources and user support services to process data with new types of services and infrastructures, such as desktop grids, map-reduce frameworks, and GPU clusters.
NASA Astrophysics Data System (ADS)
Papa, Mauricio; Shenoi, Sujeet
The information infrastructure -- comprising computers, embedded devices, networks and software systems -- is vital to day-to-day operations in every sector: information and telecommunications, banking and finance, energy, chemicals and hazardous materials, agriculture, food, water, public health, emergency services, transportation, postal and shipping, government and defense. Global business and industry, governments, indeed society itself, cannot function effectively if major components of the critical information infrastructure are degraded, disabled or destroyed. Critical Infrastructure Protection II describes original research results and innovative applications in the interdisciplinary field of critical infrastructure protection. Also, it highlights the importance of weaving science, technology and policy in crafting sophisticated, yet practical, solutions that will help secure information, computer and network assets in the various critical infrastructure sectors. Areas of coverage include: - Themes and Issues - Infrastructure Security - Control Systems Security - Security Strategies - Infrastructure Interdependencies - Infrastructure Modeling and Simulation This book is the second volume in the annual series produced by the International Federation for Information Processing (IFIP) Working Group 11.10 on Critical Infrastructure Protection, an international community of scientists, engineers, practitioners and policy makers dedicated to advancing research, development and implementation efforts focused on infrastructure protection. The book contains a selection of twenty edited papers from the Second Annual IFIP WG 11.10 International Conference on Critical Infrastructure Protection held at George Mason University, Arlington, Virginia, USA in the spring of 2008.
This poster presentation will document, benchmark and evaluate state-of-the-science research and implementation on BMP performance, monitoring and integration for green infrastructure applications, to manage wet weather flow, storm-water runoff stressor relief and remedial sustai...
DOT National Transportation Integrated Search
2008-08-11
It will be advantageous to have information on the state of health of infrastructure at all times in order to carry out effective on-demand maintenance. With the tremendous advancement in technology, it is possible to employ devices embedded in s...
Rearchitecting IT: Simplify. Simplify
ERIC Educational Resources Information Center
Panettieri, Joseph C.
2006-01-01
Simplifying and securing an IT infrastructure is not easy. It frequently requires rethinking years of hardware and software investments, and a gradual migration to modern systems. Even so, writes the author, universities can take six practical steps to success: (1) Audit software infrastructure; (2) Evaluate current applications; (3) Centralize…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-02
... DEPARTMENT OF COMMERCE International Trade Administration Critical Infrastructure Protection and Cyber Security Trade Mission to Saudi Arabia and Kuwait Clarification and Amendment AGENCY... cyber-security firms and trade organizations which have not already submitted an application are...
Lindberg, D A; Humphreys, B L
1995-01-01
The High-Performance Computing and Communications (HPCC) program is a multiagency federal effort to advance the state of computing and communications and to provide the technologic platform on which the National Information Infrastructure (NII) can be built. The HPCC program supports the development of high-speed computers, high-speed telecommunications, related software and algorithms, education and training, and information infrastructure technology and applications. The vision of the NII is to extend access to high-performance computing and communications to virtually every U.S. citizen so that the technology can be used to improve the civil infrastructure, lifelong learning, energy management, health care, etc. Development of the NII will require resolution of complex economic and social issues, including information privacy. Health-related applications supported under the HPCC program and NII initiatives include connection of health care institutions to the Internet; enhanced access to gene sequence data; the "Visible Human" Project; and test-bed projects in telemedicine, electronic patient records, shared informatics tool development, and image systems. PMID:7614116
DOE Office of Scientific and Technical Information (OSTI.GOV)
Youssef, Tarek A.; Elsayed, Ahmed T.; Mohammed, Osama A.
This study presents the design and implementation of a communication and control infrastructure for smart grid operation. The proposed infrastructure enhances the reliability of the measurements and control network. The advantages of utilizing the data-centric over the message-centric communication approach are discussed in the context of smart grid applications. The data distribution service (DDS) is used to implement a data-centric common data bus for the smart grid. This common data bus improves communication reliability, enabling distributed control and smart load management. These enhancements are achieved by avoiding a single point of failure while enabling peer-to-peer communication and an automatic discovery feature for dynamically participating nodes. The infrastructure and ideas presented in this paper were implemented and tested on the smart grid testbed. A toolbox and application programming interface for the testbed infrastructure were developed in order to facilitate interoperability and remote access to the testbed. This interface allows control, monitoring, and performing of experiments remotely. Furthermore, it could be used to integrate multidisciplinary testbeds to study complex cyber-physical systems (CPS).
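As a minimal illustration of the data-centric pattern the abstract contrasts with message-centric designs (illustrative Python, not the DDS API): participants rendezvous on named topics instead of addressing each other directly, which is what makes peer-to-peer exchange and automatic discovery of new participants possible.

```python
from collections import defaultdict

class DataBus:
    """Toy common data bus: publishers and subscribers meet on named
    topics, so no node needs the address of any other node."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # Registering a reader is all the "discovery" a writer needs.
        self.subscribers[topic].append(callback)

    def publish(self, topic, sample):
        for callback in self.subscribers[topic]:
            callback(sample)

bus = DataBus()
bus.subscribe("grid/measurements", lambda s: print("controller got", s))
bus.subscribe("grid/measurements", lambda s: print("logger got", s))
bus.publish("grid/measurements", {"bus_id": 7, "voltage_pu": 0.98})
```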
Space Telecommunications Radio System (STRS) Compliance Testing
NASA Technical Reports Server (NTRS)
Handler, Louis M.
2011-01-01
The Space Telecommunications Radio System (STRS) defines an open architecture for software defined radios. This document describes the testing methodology to aid in determining the degree of compliance to the STRS architecture. Non-compliances are reported to the software and hardware developers as well as the NASA project manager so that they may be fixed or waivers issued. Since the software developers may be divided into those that provide the operating environment including the operating system and STRS infrastructure (OE) and those that supply the waveform applications, the tests are divided accordingly. The static tests are also divided by the availability of an automated tool that determines whether the source code and configuration files contain the appropriate items. Thus, there are six separate step-by-step test procedures described as well as the corresponding requirements that they test. The six types of STRS compliance tests are: STRS application automated testing, STRS infrastructure automated testing, STRS infrastructure testing by compiling WFCCN with the infrastructure, STRS configuration file testing, STRS application manual code testing, and STRS infrastructure manual code testing. Examples of the input and output of the scripts are shown in the appendices, as well as more specific information about what to configure and test in WFCCN for non-compliance. In addition, each STRS requirement is listed and the type of testing briefly described. Also attached is a set of guidelines on what to look for, beyond the requirements, to aid in the document review process.
2014-09-01
power. The wireless infrastructure is an expansion of the current DOD IE which can be leveraged to connect mobile capabilities and technologies. The... DOD must focus on three critical areas central to mobility: the wireless infrastructure, the devices themselves, and the applications the devices use... infrastructure to support mobile devices. - The intent behind this goal is to improve the existing wireless backbone to support secure voice, data, and video
NASA Astrophysics Data System (ADS)
Romaniuk, Ryszard S.
2013-10-01
Accelerator science and technology is one of the key enablers of developments in particle physics, photon physics, and applications in medicine and industry. The paper presents a digest of the research results in the domain of accelerator science and technology in Europe, obtained during the realization of CARE (Coordinated Accelerator R&D) and EuCARD (European Coordination of Accelerator R&D) and during the national annual review meeting of TIARA - Test Infrastructure of European Research Area in Accelerator R&D. The European projects on accelerator technology started in 2003 with CARE. TIARA is a European collaboration on accelerator technology which, by running research, technical, networking and infrastructural projects, has the duty to integrate the research and technical communities and infrastructures on the European scale. The collaboration gathers all research centers with large accelerator infrastructures. Others, such as universities, are affiliated as associate members. TIARA-PP (preparatory phase) is a European infrastructural project run by this consortium and realized inside EU-FP7. The paper presents a general overview of CARE, EuCARD and especially TIARA activities, with an introduction containing a portrait of contemporary accelerator technology and a digest of its applications in modern society. CARE, EuCARD and TIARA activities have integrated the European accelerator community in a very effective way, and these projects are very much expected to continue.
A Review of Pedestrian Indoor Positioning Systems for Mass Market Applications
Barcelo, Marc; Vicario, Jose Lopez
2017-01-01
In the last decade, the interest in Indoor Location Based Services (ILBS) has increased, stimulating the development of Indoor Positioning Systems (IPS). In particular, ILBS require positioning systems that can be applied anywhere in the world for millions of users; that is, there is a need to develop IPS for mass market applications. Those systems must provide accurate position estimates with minimum infrastructure cost and easy scalability to different environments. This survey overviews the current state of the art of IPSs and classifies them in terms of the infrastructure and methodology employed. Finally, each group is reviewed, analysing its advantages and disadvantages and its applicability to mass market applications. PMID:28829386
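One of the infrastructure-light ranging techniques such surveys cover is RSSI ranging against a log-distance path-loss model. A small sketch follows; the reference power and path-loss exponent are environment-dependent assumptions that must be calibrated in practice.

```python
def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, path_loss_exp=2.5):
    """Invert the log-distance path-loss model
    RSSI(d) = RSSI(1 m) - 10 * n * log10(d)."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10.0 * path_loss_exp))

print(rssi_to_distance(-65.0))  # rough range estimate in metres: 10.0
```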
Development and Implementation of NASA's Lead Center for Rocket Propulsion Testing
NASA Technical Reports Server (NTRS)
Dawson, Michael C.
2001-01-01
With the new millennium, NASA's John C. Stennis Space Center (SSC) continues to develop and refine its role as rocket test service provider for NASA and the Nation. As Lead Center for Rocket Propulsion Testing (LCRPT), significant progress has been made under SSC's leadership to consolidate and streamline NASA's rocket test infrastructure and make this vital capability truly world class. NASA's Rocket Propulsion Test (RPT) capability consists of 32 test positions with a replacement value in excess of $2B. It is dispersed at Marshall Space Flight Center (MSFC), Johnson Space Center (JSC)-White Sands Test Facility (WSTF), Glenn Research Center (GRC)-Plum Brook (PB), and SSC and is sized appropriately to minimize duplication and infrastructure costs. The LCRPT also provides a single integrated point of entry into NASA's rocket test services. The RPT capability is managed through the Rocket Propulsion Test Management Board (RPTMB), chaired by SSC with representatives from each center identified above. The Board is highly active, meeting weekly, and is key to providing responsive test services for ongoing operational and developmental NASA and commercial programs including Shuttle, Evolved Expendable Launch Vehicle, and 2nd and 3rd Generation Reusable Launch Vehicles. The relationship between SSC, the test provider, and the hardware developers, like MSFC, is critical to the implementation of the LCRPT. Much effort has been expended to develop and refine these relationships with SSC customers. These efforts have met with success and will continue to be a high priority for SSC in the future. To date, in the exercise of its role, the LCRPT has made 22 test assignments and saved or avoided approximately $51M. The LCRPT directly manages approximately $30M annually in test infrastructure costs including facility maintenance and upgrades, direct test support, and test technology development. This annual budget supports rocket propulsion test programs which have an annual budget in excess of $150M. As the LCRPT continues to develop, customer responsiveness and lower cost test services will be major themes. In that light, SSC is embarking on major test technology development activities ensuring long range goals of safer, more responsive, and more cost effective test services are realized. The LCRPT is also focusing on the testing requirements for advanced propulsion systems. This future planning is key to defining and fielding the ability to test these new technologies in support of the hardware developers.
Managing Mission-Critical Infrastructure
ERIC Educational Resources Information Center
Breeding, Marshall
2012-01-01
In the library context, libraries depend on sophisticated business applications specifically designed to support their work. This infrastructure consists of such components as integrated library systems, their associated online catalogs or discovery services, and self-check equipment, as well as a Web site and the various online tools and services…
18 CFR 5.30 - Critical energy infrastructure information.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 18 Conservation of Power and Water Resources 1 2011-04-01 2011-04-01 false Critical energy infrastructure information. 5.30 Section 5.30 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS UNDER THE FEDERAL POWER ACT INTEGRATED LICENSE APPLICATION...
18 CFR 5.30 - Critical energy infrastructure information.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 18 Conservation of Power and Water Resources 1 2013-04-01 2013-04-01 false Critical energy infrastructure information. 5.30 Section 5.30 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS UNDER THE FEDERAL POWER ACT INTEGRATED LICENSE APPLICATION...
18 CFR 5.30 - Critical energy infrastructure information.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Critical energy infrastructure information. 5.30 Section 5.30 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS UNDER THE FEDERAL POWER ACT INTEGRATED LICENSE APPLICATION...
18 CFR 5.30 - Critical energy infrastructure information.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 18 Conservation of Power and Water Resources 1 2012-04-01 2012-04-01 false Critical energy infrastructure information. 5.30 Section 5.30 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS UNDER THE FEDERAL POWER ACT INTEGRATED LICENSE APPLICATION...
18 CFR 5.30 - Critical energy infrastructure information.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 18 Conservation of Power and Water Resources 1 2014-04-01 2014-04-01 false Critical energy infrastructure information. 5.30 Section 5.30 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS UNDER THE FEDERAL POWER ACT INTEGRATED LICENSE APPLICATION...
DOT National Transportation Integrated Search
2008-09-01
This report tests the application of Ground Penetrating Radar (GPR) as a non-destructive tool for highway infrastructure assessment. Multiple antennas with different frequency ranges were used on a variety of infrastructure projects. This report highlig...
NASA Astrophysics Data System (ADS)
Wentzcovitch, R. M.; Da Silveira, P. R.; Wu, Z.; Yu, Y.
2013-12-01
Today, first principles calculations in mineral physics play a fundamental role in our understanding of the Earth. They complement experiments by expanding the pressure and temperature range for which properties can be obtained and provide access to atomic-scale phenomena. Since the wealth of predictive first principles results can hardly be communicated in printed form, we have developed online applications where published results can be reproduced/verified online and extensive unpublished results can be generated in customized form. So far these applications have included thermodynamic properties of end-member phases and thermal elastic properties of end-member phases and a few solid solutions. Extension of this software infrastructure to include other properties is in principle straightforward. This contribution will review the nature of the results that can be generated (methods, thermodynamics domain, list of minerals, properties, etc.) and the nature of the software infrastructure. These applications are part of a more extensive cyber-infrastructure operating in XSEDE - the VLab Science Gateway [1]. [1] https://www.xsede.org/web/guest/gateways-listing Research supported by NSF grants ATM-0428744 and EAR-1047629.
On macromolecular refinement at subatomic resolution with interatomic scatterers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Afonine, Pavel V., E-mail: pafonine@lbl.gov; Grosse-Kunstleve, Ralf W.; Adams, Paul D.
2007-11-01
Modelling deformation electron density using interatomic scatterers is simpler than multipolar methods, produces comparable results at subatomic resolution and can easily be applied to macromolecules. A study of the accurate electron-density distribution in molecular crystals at subatomic resolution (better than ∼1.0 Å) requires more detailed models than those based on independent spherical atoms. A tool that is conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8–1.0 Å, the number of experimental data is insufficient for full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark data sets gave results that were comparable in quality with the results of multipolar refinement and superior to those for conventional models. Applications to several data sets of both small molecules and macromolecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package.
e-Infrastructures supporting research into depression, self-harm and suicide.
McCafferty, S; Doherty, T; Sinnott, R O; Watt, J
2010-08-28
The Economic and Social Research Council (ESRC)-funded Data Management through e-Social Sciences (DAMES) project is investigating, as one of its four research themes, how research into depression, self-harm and suicide may be enhanced through the adoption of e-Science infrastructures and techniques. In this paper, we explore the challenges in supporting such research infrastructures and describe the distributed and heterogeneous datasets that need to be provisioned to support such research. We describe and demonstrate the application of an advanced user and security-driven infrastructure that has been developed specifically to meet these challenges in an on-going study into depression, self-harm and suicide.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eltoweissy, Mohamed Y.; Du, David H.C.; Gerla, Mario
Mission-Critical Networking (MCN) refers to networking for application domains where life or livelihood may be at risk. Typical application domains for MCN include critical infrastructure protection and operation, emergency and crisis intervention, healthcare services, and military operations. Such networking is essential for safety, security and economic vitality in our complex world characterized by uncertainty, heterogeneity, emergent behaviors, and the need for reliable and timely response. MCN comprises networking technology, infrastructures and services that may alleviate the risk and directly enable and enhance connectivity for mission-critical information exchange among diverse, widely dispersed, mobile users.
NASA Technical Reports Server (NTRS)
1994-01-01
This is a draft report on the Government Information Locator Service (GILS) to the National Information Infrastructure (NII) task force. GILS is designed to take advantage of internetworking technology known as client-server architecture which allows information to be distributed among multiple independent information servers. Two appendices are provided -- (1) A glossary of related terminology and (2) extracts from a draft GILS profile for the use of the American National Standard Information Retrieval Application Service Definition and Protocol Specification for Library Applications.
Next generation information communication infrastructure and case studies for future power systems
NASA Astrophysics Data System (ADS)
Qiu, Bin
As the power industry enters the new century, powerful driving forces, uncertainties and new functions are compelling electric utilities to make dramatic changes in their information communication infrastructure. Expanding network services such as real-time measurement and monitoring are also driving the need for more bandwidth in the communication network. These needs will grow further as new remote real-time protection and control applications become more feasible and pervasive. This dissertation addresses two main issues for the future power system information infrastructure: the communication network infrastructure and the associated power system applications. Optical networks will no doubt become the predominant data transmission media for next-generation power system communication. The rapid development of fiber optic network technology poses new challenges in the areas of topology design, network management and real-time applications. Based on advanced fiber optic technologies, an all-fiber network is investigated and proposed. The study covers the system architecture and data exchange protocol aspects. High-bandwidth, robust optical networks could provide great opportunities to the power system for better service and more efficient operation. In the dissertation, different applications are investigated. One typical application is the SCADA information accessing system. An Internet-based application for the substation automation system is presented. VLSI (Very Large Scale Integration) technology is also used for the automatic generation of one-line diagrams. A high-transmission-rate, low-latency optical network is especially suitable for real-time power system control. In the dissertation, a new local-area-network-based Load Shedding Controller (LSC) for isolated power systems is presented. By using PMUs (Phasor Measurement Units) and a fiber optic network, an accurate AGE (Area Generation Error) based wide-area load shedding scheme is also proposed. The objective is to shed load in a limited area with minimum disturbance.
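The abstract does not give the controller's algorithm, so the following is a speculative sketch of the AGE idea as stated: compute each area's generation-load imbalance from PMU-reported quantities and confine shedding to deficit areas, dropping the lowest-priority feeders first; all names and numbers here are illustrative.

```python
def area_generation_error(gen_mw, load_mw):
    """Negative values indicate a generation deficit in the area."""
    return gen_mw - load_mw

def shed_loads(deficit_mw, feeders):
    """feeders: (name, mw, priority) tuples; feeders with the smallest
    priority number are shed first, and shedding stays in this area."""
    shed, remaining = [], deficit_mw
    for name, mw, _ in sorted(feeders, key=lambda f: f[2]):
        if remaining <= 0:
            break
        shed.append(name)
        remaining -= mw
    return shed

age = area_generation_error(gen_mw=480.0, load_mw=520.0)
if age < 0:  # 40 MW deficit: sheds F2 (30 MW) then F1 (15 MW)
    print(shed_loads(-age, [("F1", 15.0, 2), ("F2", 30.0, 1), ("F3", 25.0, 3)]))
```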
NiftyNet: a deep-learning platform for medical imaging.
Gibson, Eli; Li, Wenqi; Sudre, Carole; Fidon, Lucas; Shakir, Dzhoshkun I; Wang, Guotai; Eaton-Rosen, Zach; Gray, Robert; Doel, Tom; Hu, Yipeng; Whyntie, Tom; Nachev, Parashkev; Modat, Marc; Barratt, Dean C; Ourselin, Sébastien; Cardoso, M Jorge; Vercauteren, Tom
2018-05-01
Medical image analysis and computer-assisted intervention problems are increasingly being addressed with deep-learning-based solutions. Established deep-learning platforms are flexible but do not provide specific functionality for medical image analysis and adapting them for this domain of application requires substantial implementation effort. Consequently, there has been substantial duplication of effort and incompatible infrastructure developed across many research groups. This work presents the open-source NiftyNet platform for deep learning in medical imaging. The ambition of NiftyNet is to accelerate and simplify the development of these solutions, and to provide a common mechanism for disseminating research outputs for the community to use, adapt and build upon. The NiftyNet infrastructure provides a modular deep-learning pipeline for a range of medical imaging applications including segmentation, regression, image generation and representation learning applications. Components of the NiftyNet pipeline including data loading, data augmentation, network architectures, loss functions and evaluation metrics are tailored to, and take advantage of, the idiosyncrasies of medical image analysis and computer-assisted intervention. NiftyNet is built on the TensorFlow framework and supports features such as TensorBoard visualization of 2D and 3D images and computational graphs by default. We present three illustrative medical image analysis applications built using NiftyNet infrastructure: (1) segmentation of multiple abdominal organs from computed tomography; (2) image regression to predict computed tomography attenuation maps from brain magnetic resonance images; and (3) generation of simulated ultrasound images for specified anatomical poses. The NiftyNet infrastructure enables researchers to rapidly develop and distribute deep learning solutions for segmentation, regression, image generation and representation learning applications, or extend the platform to new applications. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
On the impact of a refined stochastic model for airborne LiDAR measurements
NASA Astrophysics Data System (ADS)
Bolkas, Dimitrios; Fotopoulos, Georgia; Glennie, Craig
2016-09-01
Accurate topographic information is critical for a number of applications in science and engineering. In recent years, airborne light detection and ranging (LiDAR) has become a standard tool for acquiring high quality topographic information. The assessment of airborne LiDAR derived DEMs is typically based on (i) independent ground control points and (ii) forward error propagation utilizing the LiDAR geo-referencing equation. The latter approach is dependent on the stochastic model information of the LiDAR observation components. In this paper, the well-known statistical tool of variance component estimation (VCE) is implemented for a dataset in Houston, Texas, in order to refine the initial stochastic information. Simulations demonstrate the impact of stochastic-model refinement for two practical applications, namely coastal inundation mapping and surface displacement estimation. Results highlight scenarios where erroneous stochastic information is detrimental. Furthermore, the refined stochastic information provides insights on the effect of each LiDAR measurement in the airborne LiDAR error budget. The latter is important for targeting future advancements in order to improve point cloud accuracy.
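As a hedged sketch of the forward error propagation step mentioned in (ii), in generic notation rather than the paper's: if the point coordinates are x = f(ℓ), where ℓ collects the LiDAR observation components (range, scan angle, navigation position and attitude), then to first order

$$\Sigma_{\mathbf{x}} \;=\; J\,\Sigma_{\boldsymbol{\ell}}\,J^{\mathsf{T}}, \qquad J = \frac{\partial f}{\partial \boldsymbol{\ell}},$$

where Σ_ℓ is the stochastic model of the observations. Variance component estimation refines a structured model Σ_ℓ = Σ_k σ_k² Q_k by re-estimating the factors σ_k² until the propagated covariances are consistent with the observed residuals.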
Fenn, Timothy D; Schnieders, Michael J; Mustyakimov, Marat; Wu, Chuanjie; Langan, Paul; Pande, Vijay S; Brunger, Axel T
2011-04-13
Most current crystallographic structure refinements augment the diffraction data with a priori information consisting of bond, angle, dihedral, planarity restraints, and atomic repulsion based on the Pauli exclusion principle. Yet, electrostatics and van der Waals attraction are physical forces that provide additional a priori information. Here, we assess the inclusion of electrostatics for the force field used for all-atom (including hydrogen) joint neutron/X-ray refinement. Two DNA and a protein crystal structure were refined against joint neutron/X-ray diffraction data sets using force fields without electrostatics or with electrostatics. Hydrogen-bond orientation/geometry favors the inclusion of electrostatics. Refinement of Z-DNA with electrostatics leads to a hypothesis for the entropic stabilization of Z-DNA that may partly explain the thermodynamics of converting the B form of DNA to its Z form. Thus, inclusion of electrostatics assists joint neutron/X-ray refinements, especially for placing and orienting hydrogen atoms. Copyright © 2011 Elsevier Ltd. All rights reserved.
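Schematically, and in generic notation rather than the authors' exact functional, adding electrostatics means the force-field term of the joint refinement target gains a Coulomb contribution alongside the geometric restraints and repulsion:

$$E_{\mathrm{total}} = w_{x}\,E_{\mathrm{X\text{-}ray}} + w_{n}\,E_{\mathrm{neutron}} + E_{\mathrm{ff}}, \qquad E_{\mathrm{ff}} \;\supset\; \sum_{i<j} \frac{q_i\,q_j}{4\pi\varepsilon_0\,r_{ij}},$$

so hydrogen positions and orientations are pulled toward electrostatically favorable hydrogen-bond geometries rather than being constrained only by covalent geometry and repulsion.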
Network and computing infrastructure for scientific applications in Georgia
NASA Astrophysics Data System (ADS)
Kvatadze, R.; Modebadze, Z.
2016-09-01
The status of the network and computing infrastructure and the services available to the research and education community of Georgia is presented. The Research and Educational Networking Association - GRENA provides the following network services: Internet connectivity, network services, cyber security, technical support, etc. Computing resources used by the research teams are located at GRENA and at major state universities. The GE-01-GRENA site is included in the European Grid infrastructure. The paper also contains information about the programs of the Learning Center and the research and development projects in which GRENA participates.
NASA Technical Reports Server (NTRS)
Boggs, Karen; Gutheinz, Sandy C.; Watanabe, Susan M.; Oks, Boris; Arca, Jeremy M.; Stanboli, Alice; Peez, Martin; Whatmore, Rebecca; Kang, Minliang; Espinoza, Luis A.
2010-01-01
Space Images for NASA/JPL is an Apple iPhone application that allows the general public to access featured images from the Jet Propulsion Laboratory (JPL). A back-end infrastructure stores, tracks, and retrieves space images from the JPL Photojournal Web server, and catalogs the information into a streamlined rating infrastructure.
Informing Watershed Connectivity Barrier Prioritization Decisions: A Synthesis
S. K. McKay; A. R. Cooper; M. W. Diebel; D. Elkins; G. Oldford; Craig Roghair; D. Wieferich
2016-01-01
Water resources and transportation infrastructure such as dams and culverts provide countless socio-economic benefits; however, this infrastructure can also disconnect the movement of organisms, sediment, and water through river ecosystems. Trade-offs associated with these competing costs and benefits occur globally, with applications in barrier addition (e.g...
The southwestern/western United States is among the fastest-growing urbanized areas and faces multiple water resource challenges. Low Impact Development (LID)/Green Infrastructure (GI) practices are increasingly popular technologies for managing stormwater; however, LID is often not ...
Latin American space activities based on different infrastructures
NASA Astrophysics Data System (ADS)
Gall, Ruth
The paper deals with recent basic space research and space applications in several Latin American countries. It links space activities with national scientific and institutional infrastructures and stresses the importance of interdisciplinary space programs, which can play a major role in developing countries' achievement of self-reliance in space matters.
Software Engineering Infrastructure in a Large Virtual Campus
ERIC Educational Resources Information Center
Cristobal, Jesus; Merino, Jorge; Navarro, Antonio; Peralta, Miguel; Roldan, Yolanda; Silveira, Rosa Maria
2011-01-01
Purpose: The design, construction and deployment of a large virtual campus are a complex issue. Present virtual campuses are made of several software applications that complement e-learning platforms. In order to develop and maintain such virtual campuses, a complex software engineering infrastructure is needed. This paper aims to analyse the…
The National Information Infrastructure: Requirements for Education and Training: Executive Summary.
ERIC Educational Resources Information Center
TechTrends, 1994
1994-01-01
Includes 19 requirements prepared by the National Coordinating Committee for Technology in Education (NCC-TET) to ensure that the national information infrastructure (NII) provides expanded opportunities for education and training. The requirements, which cover access, education and training applications, and technical needs, are intended as…
Code of Federal Regulations, 2010 CFR
2010-10-01
... appropriations and which must be paid by Applicant or its non-Federal infrastructure partner before that direct... provisions of this part. (j) Including means including but not limited to. (k) Infrastructure partner means... subtitle IV of title 49, United States Code. (s) Subsidy cost of a direct loan means the net present value...
Danville Community College Information Technology General Plan, 1998-99.
ERIC Educational Resources Information Center
Danville Community Coll., VA.
This document describes technology usage, infrastructure and planning for Danville Community College. The Plan is divided into four sections: Introduction, Vision and Mission, Applications, and Infrastructure. The four major goals identified in Vision and Mission are: (1) to ensure the successful use of all technologies through continued training…
caGrid 1.0: an enterprise Grid infrastructure for biomedical research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oster, S.; Langella, S.; Hastings, S.
To develop software infrastructure that will provide support for discovery, characterization, integrated access, and management of diverse and disparate collections of information sources, analysis methods, and applications in biomedical research. Design: An enterprise Grid software infrastructure, called caGrid version 1.0 (caGrid 1.0), has been developed as the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG™) program. It is designed to support a wide range of use cases in basic, translational, and clinical research, including (1) discovery, (2) integrated and large-scale data analysis, and (3) coordinated study. Measurements: The caGrid is built as a Grid software infrastructure and leverages Grid computing technologies and the Web Services Resource Framework standards. It provides a set of core services, toolkits for the development and deployment of new community-provided services, and application programming interfaces for building client applications. Results: caGrid 1.0 was released to the caBIG community in December 2006. It is built on open source components and the caGrid source code is publicly and freely available under a liberal open source license. The core software, associated tools, and documentation can be downloaded from the following URL:
Refining mass formulas for astrophysical applications: A Bayesian neural network approach
NASA Astrophysics Data System (ADS)
Utama, R.; Piekarewicz, J.
2017-10-01
Background: Exotic nuclei, particularly those near the drip lines, are at the core of one of the fundamental questions driving nuclear structure and astrophysics today: What are the limits of nuclear binding? Exotic nuclei play a critical role in both informing theoretical models as well as in our understanding of the origin of the heavy elements. Purpose: Our aim is to refine existing mass models through the training of an artificial neural network that will mitigate the large model discrepancies far away from stability. Methods: The basic paradigm of our two-pronged approach is an existing mass model that captures as much as possible of the underlying physics followed by the implementation of a Bayesian neural network (BNN) refinement to account for the missing physics. Bayesian inference is employed to determine the parameters of the neural network so that model predictions may be accompanied by theoretical uncertainties. Results: Despite the undeniable quality of the mass models adopted in this work, we observe a significant improvement (of about 40%) after the BNN refinement is implemented. Indeed, in the specific case of the Duflo-Zuker mass formula, we find that the rms deviation relative to experiment is reduced from σrms=0.503 MeV to σrms=0.286 MeV. These newly refined mass tables are used to map the neutron drip lines (or rather "drip bands") and to study a few critical r -process nuclei. Conclusions: The BNN approach is highly successful in refining the predictions of existing mass models. In particular, the large discrepancy displayed by the original "bare" models in regions where experimental data are unavailable is considerably quenched after the BNN refinement. This lends credence to our approach and has motivated us to publish refined mass tables that we trust will be helpful for future astrophysical applications.
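A minimal sketch of this two-pronged approach, with a bootstrapped ensemble of small networks standing in for full Bayesian inference over the weights; the features, architecture and hyperparameters here are illustrative, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_residual_net(X, r, steps=2000, lr=0.05):
    """Fit one tiny one-hidden-layer net to mass residuals
    r = M_exp - M_model by plain gradient descent on squared error."""
    W1 = rng.normal(0, 0.5, (X.shape[1], 8)); b1 = np.zeros(8)
    w2 = rng.normal(0, 0.5, 8); b2 = 0.0
    for _ in range(steps):
        H = np.tanh(X @ W1 + b1)
        g = 2 * (H @ w2 + b2 - r) / len(r)      # d(loss)/d(prediction)
        gH = np.outer(g, w2) * (1 - H**2)       # backprop through tanh
        w2 -= lr * H.T @ g;  b2 -= lr * g.sum()
        W1 -= lr * X.T @ gH; b1 -= lr * gH.sum(0)
    return lambda Xq: np.tanh(Xq @ W1 + b1) @ w2 + b2

# Usage sketch: X holds standardized (Z, N) features, r the residuals.
# Train each member on a bootstrap resample (bootstrap_indices is a
# hypothetical list of index arrays); the ensemble mean is the correction
# added to the bare model and the ensemble spread its uncertainty.
# nets = [fit_residual_net(X[idx], r[idx]) for idx in bootstrap_indices]
```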
2013-01-01
ξi be the Legendre-Gauss-Lobatto (LGL) points defined as the roots of (1 − ξ²) P_N′(ξ) = 0, where P_N(ξ) is the Nth-order Legendre polynomial. The... mesh refinement. By expanding the solution in a basis of high-order polynomials in each element, one can dynamically adjust the order of these basis... on refining the mesh while keeping the polynomial order constant across the elements. If we choose to allow non-conforming elements, the challenge in
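For reference, the interior LGL nodes are the roots of P_N′ and the endpoints ±1 come from the (1 − ξ²) factor, so they are easy to compute with NumPy's Legendre-series utilities (a small sketch):

```python
import numpy as np

def lgl_points(N):
    """N+1 Legendre-Gauss-Lobatto nodes on [-1, 1] for order N >= 2."""
    cN = np.zeros(N + 1); cN[-1] = 1.0          # P_N in the Legendre basis
    interior = np.polynomial.legendre.legroots(
        np.polynomial.legendre.legder(cN))      # roots of P_N'
    return np.concatenate(([-1.0], np.sort(interior), [1.0]))

print(lgl_points(4))  # 5 nodes, clustered toward the element ends
```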
NASA Astrophysics Data System (ADS)
Serbu, Sabina; Rivière, Étienne; Felber, Pascal
The emergence of large-scale distributed applications based on many-to-many communication models, e.g., broadcast and decentralized group communication, has an important impact on the underlying layers, notably the Internet routing infrastructure. To make effective use of network resources, protocols should both limit the stress (the number of messages) on each infrastructure entity, such as routers and links, and balance the load in the network as much as possible. Most protocols use application-level metrics such as delays to improve the efficiency of content dissemination or routing, but the extent to which such application-centric optimizations help reduce and balance the load imposed on the infrastructure is unclear. In this paper, we elaborate on the design of such network-friendly protocols and associated metrics. More specifically, we investigate random-based gossip dissemination. We propose and evaluate different ways of making this representative protocol network-friendly while keeping its desirable properties (robustness and low delays). Simulations of the proposed methods using synthetic and real network topologies convey and compare their abilities to reduce and balance the load while maintaining good performance.
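A toy round-based push-gossip simulation of the baseline protocol under study (parameters illustrative); a network-friendly variant would bias peer selection by a network metric, e.g. preferring nearby peers, while keeping enough randomness to preserve robustness and low delays.

```python
import random

def gossip_rounds(n_nodes, fanout=3, seed=1):
    """Each informed node pushes to `fanout` peers chosen uniformly at
    random each round; returns the rounds until everyone is informed."""
    random.seed(seed)
    informed, rounds = {0}, 0
    while len(informed) < n_nodes:
        targets = set()
        for _ in informed:
            targets.update(random.sample(range(n_nodes), fanout))
        informed |= targets
        rounds += 1
    return rounds

print(gossip_rounds(1000))  # completes in O(log n) rounds
```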
SEE-GRID eInfrastructure for Regional eScience
NASA Astrophysics Data System (ADS)
Prnjat, Ognjen; Balaz, Antun; Vudragovic, Dusan; Liabotis, Ioannis; Sener, Cevat; Marovic, Branko; Kozlovszky, Miklos; Neagu, Gabriel
In the past 6 years, a number of targeted initiatives, funded by the European Commission via its information society and RTD programmes and by Greek infrastructure development actions, have articulated successful regional development actions in South East Europe that can be used as a role model for other international developments. The SEEREN (South-East European Research and Education Networking initiative) project, through its two phases, established the SEE segment of the pan-European GÉANT network and successfully connected the research and scientific communities in the region. Currently, the SEE-LIGHT project is working towards establishing a dark-fiber backbone that will interconnect most national Research and Education networks in the region. On the distributed computing and storage provisioning (i.e., Grid) plane, the SEE-GRID (South-East European GRID e-Infrastructure Development) project, similarly through its two phases, has established a strong human network in the area of scientific computing, set up a powerful regional Grid infrastructure, and attracted a number of applications from different fields from countries throughout South-East Europe. The current SEEGRID-SCI project, ending in April 2010, empowers the regional user communities from the fields of meteorology, seismology and environmental protection in the common use and sharing of the regional e-Infrastructure. Current technical initiatives in formulation are focusing on a set of coordinated actions in the area of HPC and on application fields making use of HPC initiatives. Finally, the current SEERA-EI project brings together policy makers - programme managers from 10 countries in the region. The project aims to establish a communication platform between programme managers, pave the way towards a common e-Infrastructure strategy and vision, and implement concrete actions for common funding of electronic infrastructures on the regional level. The regional vision of establishing an e-Infrastructure compatible with European developments, and of empowering the scientists in the region to participate equally in the use of pan-European infrastructures, is materializing through the above initiatives. This model has a number of concrete operational and organizational guidelines which can be adapted to help e-Infrastructure developments in other world regions. In this paper we review the most important developments and contributions of the SEEGRID-SCI project.
Preliminary Results From NASA's Space Solar Power Exploratory Research and Technology Program
NASA Technical Reports Server (NTRS)
Howell, Joe T.; Mankins, John C.
2000-01-01
Large solar power satellite (SPS) systems that might provide base load power into terrestrial markets were examined extensively in the 1970s by the US Department of Energy (DOE) and the National Aeronautics and Space Administration (NASA). Following a hiatus of about 15 years, the subject of space solar power (SSP) was reexamined by NASA from 1995-1997 in the "fresh look" study, and during 1998 in an SSP "concept definition study". As a result of these efforts, during 1999-2000, NASA has been conducting the SSP Exploratory Research and Technology (SERT) program. The goal of the SERT activity has been to conduct preliminary strategic technology research and development to enable large, multi-megawatt SSP systems and wireless power transmission (WPT) for government missions and commercial markets (in-space and terrestrial). In pursuing that goal, the SERT: (1) refined and modeled systems approaches for the utilization of SSP concepts and technologies, ranging from the near-term (e.g., for space science, exploration and commercial space applications) to the far-term (e.g., SSP for terrestrial markets), including systems concepts, architectures, technology, infrastructure (e.g. transportation), and economics; (2) conducted technology research, development and demonstration activities to produce "proof-of-concept" validation of critical SSP elements for both nearer and farther-term applications; and (3) engendered the beginnings of partnerships (nationally and internationally) that could be expanded, as appropriate, to pursue later SSP technology and applications. Through these efforts, the SERT should allow better informed future decisions regarding further SSP and related technology research and development investments by both NASA and prospective partners, and guide further definition of technology roadmaps - including performance objectives, resources and schedules, as well as "multi-purpose" applications (e.g., commerce, science, and government). This paper presents preliminary results from the SERT effort at a summary level, including the study approach, SPS concepts, applications findings, and concludes with a revised assessment of the prospects for solar power satellites using SSP technologies and systems.
An Information Infrastructure for Coastal Models and Data
NASA Astrophysics Data System (ADS)
Hardin, D.; Keiser, K.; Conover, H.; Graves, S.
2007-12-01
Advances in semantics and visualization have given rise to new capabilities for the location, manipulation, integration, management and display of data and information in and across domains. An example of these capabilities is illustrated by a coastal restoration project that utilizes satellite, in-situ data and hydrodynamic model output to address seagrass habitat restoration in the Northern Gulf of Mexico. In this project a standard stressor conceptual model was implemented as an ontology in addition to the typical CMAP diagram. The ontology captures the elements of the seagrass conceptual model as well as the relationships between them. Noesis, developed by the University of Alabama in Huntsville, is an application that provides a simple but powerful way to search and organize data and information represented by ontologies. Noesis uses domain ontologies to help scope search queries to ensure that search results are both accurate and complete. Semantics are captured by refining the query terms to cover synonyms, specializations, generalizations and related concepts. As a resource aggregator Noesis categorizes search results returned from multiple, concurrent search engines such as Google, Yahoo, and Ask.com. Search results are further directed by accessing domain specific catalogs that include outputs from hydrodynamic and other models. Embedded within the search results are links that invoke applications such as web map displays, animation tools and virtual globe applications such as Google Earth. In the seagrass prioritization project Noesis is used to locate information that is vital to understanding the impact of stressors on the habitat. This presentation will show how the intelligent search capabilities of Noesis are coupled with visualization tools and model output to investigate the restoration of seagrass habitat.
On Correspondence of BRST-BFV, Dirac, and Refined Algebraic Quantizations of Constrained Systems
NASA Astrophysics Data System (ADS)
Shvedov, O. Yu.
2002-11-01
The correspondence between the BRST-BFV, Dirac, and refined algebraic (group averaging, projection operator) approaches to quantizing constrained systems is analyzed. For the closed-algebra case, it is shown that the component of the BFV wave function corresponding to the maximal (minimal) number of ghosts and antighosts in the Schrödinger representation may be viewed as a wave function in the refined algebraic (Dirac) quantization approach. The Giulini-Marolf group averaging formula for the inner product in the refined algebraic quantization approach is obtained from the Batalin-Marnelius prescription for the BRST-BFV inner product, which should generally be modified due to topological problems. The considered prescription for the correspondence of states is observed to be applicable to the open-algebra case. The refined algebraic quantization approach is then generalized to the case of nontrivial structure functions. A simple example is discussed. The correspondence of observables for different quantization methods is also investigated.
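For orientation, the Giulini-Marolf group-averaging inner product referred to above has the standard form (sketched here in generic notation):

$$(\Phi_1, \Phi_2)_{\mathrm{phys}} \;=\; \int_{G} \mathrm{d}\mu(g)\, \langle \Phi_1 \,|\, U(g)\, \Phi_2 \rangle,$$

where U(g) is the unitary representation of the gauge group generated by the constraints and dμ an invariant measure on it; the paper's point is that this expression can be recovered from the Batalin-Marnelius prescription for the BRST-BFV inner product, suitably modified.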
Terwilliger, Thomas C; Grosse-Kunstleve, Ralf W; Afonine, Pavel V; Moriarty, Nigel W; Zwart, Peter H; Hung, Li Wei; Read, Randy J; Adams, Paul D
2008-01-01
The PHENIX AutoBuild wizard is a highly automated tool for iterative model building, structure refinement and density modification using RESOLVE model building, RESOLVE statistical density modification and phenix.refine structure refinement. Recent advances in the AutoBuild wizard and phenix.refine include automated detection and application of NCS from models as they are built, extensive model-completion algorithms and automated solvent-molecule picking. Model-completion algorithms in the AutoBuild wizard include loop building, crossovers between chains in different models of a structure and side-chain optimization. The AutoBuild wizard has been applied to a set of 48 structures at resolutions ranging from 1.1 to 3.2 Å, resulting in a mean R factor of 0.24 and a mean free R factor of 0.29. The R factor of the final model is dependent on the quality of the starting electron density and is relatively independent of resolution.
Controlling Reflections from Mesh Refinement Interfaces in Numerical Relativity
NASA Technical Reports Server (NTRS)
Baker, John G.; Van Meter, James R.
2005-01-01
A leading approach to improving the accuracy of numerical relativity simulations of black hole systems is through fixed or adaptive mesh refinement techniques. We describe a generic numerical error which manifests as slowly converging, artificial reflections from refinement boundaries in a broad class of mesh-refinement implementations, potentially limiting the effectiveness of mesh-refinement techniques for some numerical relativity applications. We elucidate this numerical effect by presenting a model problem which exhibits the phenomenon, but which is simple enough that its numerical error can be understood analytically. Our analysis shows that the effect is caused by variations in finite differencing error generated across low- and high-resolution regions, and that its slow convergence is caused by the presence of dramatic speed differences among propagation modes typical of 3+1 relativity. Lastly, we resolve the problem, presenting a class of finite-differencing stencil modifications which eliminate this pathology in both our model problem and in numerical relativity examples.
A national curriculum for ophthalmology residency training
Grover, Ashok Kumar; Honavar, Santosh G; Azad, Rajvardhan; Verma, Lalit
2018-01-01
We present a residency curriculum for Ophthalmology in India. The document derives from a workshop by the All India Ophthalmological Society (AIOS), which adapted the International Council of Ophthalmology residency curriculum and refined and customized it based on inputs from the residency program directors who participated in the workshop. The curriculum describes the course content, lays down the minimum requirements of infrastructure and mandates the diagnostic and therapeutic procedures required for optimal training. It emphasises professionalism, management, research methodology and community ophthalmology as integral to the curriculum. The proposed national ophthalmology residency curriculum for India incorporates the required knowledge and skills for effective and safe practice of ophthalmology and takes into account the specific needs of the country. PMID:29785982
Grid-based HPC astrophysical applications at INAF Catania.
NASA Astrophysics Data System (ADS)
Costa, A.; Calanducci, A.; Becciani, U.; Capuzzo Dolcetta, R.
The grid research activity at INAF Catania has been devoted to two main goals: the integration of a multiprocessor supercomputer (IBM SP4) within the INFN-GRID middleware and the development of a web portal, Astrocomp-G, for the submission of astrophysical jobs to the grid infrastructure. Most of the current grid infrastructure is based on commodity hardware, i.e. i386-architecture machines (Intel Celeron, Pentium III, IV, AMD Duron, Athlon) running the Linux RedHat OS. We were the first institute to integrate a totally different machine, an IBM SP with RISC architecture and AIX OS, as a powerful Worker Node inside a grid infrastructure. We identified and ported to AIX the grid components dealing with job monitoring and execution and properly tuned the Computing Element to deliver jobs to this special Worker Node. For testing purposes we used MARA, an astrophysical application for the analysis of light curve sequences. Astrocomp-G is a user-friendly front end to our grid site. Users who want to submit the astrophysical applications already available in the portal need to own a valid personal X509 certificate in addition to a username and password released by the grid portal web master. The personal X509 certificate is a prerequisite for the creation of a short- or long-term proxy certificate that allows the grid infrastructure services to identify clearly whether the owner of the job has the permissions to use resources and data. X509 and proxy certificates are part of GSI (Grid Security Infrastructure), a standard security tool adopted by all major grid sites around the world.
S3DB core: a framework for RDF generation and management in bioinformatics infrastructures
2010-01-01
Background Biomedical research is set to greatly benefit from the use of semantic web technologies in the design of computational infrastructure. However, beyond well-defined research initiatives, substantial issues of data heterogeneity, source distribution, and privacy currently stand in the way of the personalization of medicine. Results A computational framework for bioinformatic infrastructure was designed to deal with the heterogeneous data sources and the sensitive mixture of public and private data that characterizes the biomedical domain. This framework consists of a logical model built with semantic web tools, coupled with a Markov process that propagates user operator states. An accompanying open source prototype was developed to meet a series of applications that range from collaborative multi-institution data acquisition efforts to data analysis applications that need to quickly traverse complex data structures. This report describes the two abstractions underlying the S3DB-based infrastructure, logical and numerical, and discusses its generality beyond the immediate confines of existing implementations. Conclusions The emergence of the "web as a computer" requires a formal model for the different functionalities involved in reading and writing to it. The S3DB core model proposed was found to address the design criteria of biomedical computational infrastructure, such as those supporting large-scale multi-investigator research, clinical trials, and molecular epidemiology. PMID:20646315
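As a minimal illustration of the kind of RDF generation such a framework manages (a generic rdflib sketch; the namespace, class names and access flag are invented for the example and are not the S3DB model):

    # Python: build and serialize a small RDF graph with rdflib
    from rdflib import Graph, Literal, Namespace, RDF

    EX = Namespace("http://example.org/s3db-demo/")  # hypothetical namespace
    g = Graph()
    g.bind("ex", EX)

    # A study with one observation carrying a privacy flag
    g.add((EX.study1, RDF.type, EX.Study))
    g.add((EX.obs1, RDF.type, EX.Observation))
    g.add((EX.obs1, EX.partOf, EX.study1))
    g.add((EX.obs1, EX.accessLevel, Literal("private")))

    print(g.serialize(format="turtle"))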
NASA Astrophysics Data System (ADS)
Edsall, Robert; Hembree, Harvey
2018-05-01
The geospatial research and development team in the National and Homeland Security Division at Idaho National Laboratory was tasked with providing tools to derive insight from the substantial amount of data currently available - and continuously being produced - associated with the critical infrastructure of the US. This effort is in support of the Department of Homeland Security, whose mission includes the protection of this infrastructure and the enhancement of its resilience to hazards, both natural and human. We present geovisual-analytics-based approaches for analysis of vulnerabilities and resilience of critical infrastructure, designed so that decision makers, analysts, and infrastructure owners and managers can manage risk, prepare for hazards, and direct resources before and after an incident that might result in an interruption in service. Our designs are based on iterative discussions with DHS leadership and analysts, who in turn will use these tools to explore and communicate data in partnership with utility providers, law enforcement, and emergency response and recovery organizations, among others. In most cases these partners desire summaries of large amounts of data, but increasingly, our users seek the additional capability of focusing on, for example, a specific infrastructure sector, a particular geographic region, or time period, or of examining data in a variety of generalization or aggregation levels. These needs align well with tenets of information-visualization design; in this paper, selected applications among those that we have designed are described and positioned within geovisualization, geovisual analytical, and information visualization frameworks.
Construction and Application of a Refined Hospital Management Chain.
Lihua, Yi
2016-01-01
Large-scale development was quite common in the later period of hospital industrialization in China. Today, Chinese hospital management faces such problems as service inefficiency, high human resources cost, and low rate of capital use. This study analyzes the refined management chain of Wuxi No.2 People's Hospital, which consists of six gears, namely organizational structure, clinical practice, outpatient service, medical technology, nursing care and logistics. The gears are based on "flat management system targets, chief of medical staff, centralized outpatient service, intensified medical examinations, vertical nursing management and socialized logistics." The core concepts of refined hospital management are optimizing flow process, reducing waste, improving efficiency, saving costs, and taking good care of patients as most important. Keywords: Hospital, Refined, Management chain
NASA Astrophysics Data System (ADS)
Hohmann, Audrey; Dufréchou, Grégory; Grandjean, Gilles; Bourguignon, Anne
2014-05-01
Swelling soils contain clay minerals that change volume with water content and cause extensive and expensive damage to infrastructure. Based on the spatial distribution of infrastructure damage and existing geological maps, the Bureau de Recherches Géologiques et Minières (BRGM, i.e. the French Geological Survey) published in 2010 a 1:50 000 swelling hazard map of France, indexing the territory as low, moderate, or high swelling risk. This study aims to use SWIR (1100-2500 nm) reflectance spectra of soils acquired under controlled laboratory conditions to estimate the swelling potential of soils and improve the swelling risk map of France. 332 samples were collected west of Orléans (France) in various geological formations and swelling risk areas. Comparisons of the swelling potential of soil samples with the swelling risk areas of the map show several inconsistent associations that confirm the need to redraw the current swelling risk map of France. New swelling risk maps of the sampling area were produced from the soil samples using three interpolation methods. Maps produced using the kriging and natural-neighbour interpolation methods could not resolve discrete lithological units, introduced unsupported swelling risk zones, and did not appear useful for refining the swelling risk map of France. Voronoi polygons were also used to produce a map in which the swelling potential estimated from each sample was extrapolated to a polygon, so that every polygon was supported by field information. Of the methods tested here, Voronoi polygons thus appear the most suitable for producing expansive-soil maps. However, polygon size depends strongly on sample spacing, and a sample may not be representative of its entire polygon. More samples are thus needed to provide a reliable map at the scale of the sampling area. Soils were also sampled along two sections with sampling intervals of ca. 260 m and ca. 50 m. A sampling interval of 50 m appears better adapted for mapping the smallest lithological units. The presence of several nearby samples indicating the same swelling potential is a good indication of a zone with constant swelling potential. The combination of the Voronoi method and a sampling interval of ca. 50 m thus appears adapted to producing local swelling potential maps in areas where doubt remains or where infrastructure damage attributed to expansive soils is known.
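A minimal sketch of the Voronoi-polygon mapping step described above (synthetic coordinates and classes; rasterizing by nearest sample is equivalent to painting each Voronoi cell with its sample's class):

    # Python: rasterize Voronoi cells of samples onto a grid via nearest neighbour
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)
    samples = rng.uniform(0, 10_000, size=(332, 2))   # sample coordinates (m), synthetic
    swelling = rng.integers(0, 3, size=len(samples))  # 0=low, 1=moderate, 2=high, synthetic

    # Grid covering the study area at 50 m resolution
    xs, ys = np.meshgrid(np.arange(0, 10_000, 50), np.arange(0, 10_000, 50))
    grid = np.column_stack([xs.ravel(), ys.ravel()])

    # Each grid cell takes the class of its nearest sample, i.e. of its Voronoi cell
    _, nearest = cKDTree(samples).query(grid)
    risk_map = swelling[nearest].reshape(xs.shape)
    print(risk_map.shape, np.bincount(risk_map.ravel()))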
40 CFR 80.101 - Standards applicable to refiners and importers.
Code of Federal Regulations, 2011 CFR
2011-07-01
... are produced or imported over a period no longer than one month; (iii) Uses the total of the volumes... Administrator may grant an averaging period of two, three, four or five years upon petition of a refiner who: (A... for a two or three year averaging period must be submitted by June 1, 2003. Regardless of the...
40 CFR 80.101 - Standards applicable to refiners and importers.
Code of Federal Regulations, 2010 CFR
2010-07-01
... are produced or imported over a period no longer than one month; (iii) Uses the total of the volumes... Administrator may grant an averaging period of two, three, four or five years upon petition of a refiner who: (A... for a two or three year averaging period must be submitted by June 1, 2003. Regardless of the...
40 CFR 80.101 - Standards applicable to refiners and importers.
Code of Federal Regulations, 2012 CFR
2012-07-01
... are produced or imported over a period no longer than one month; (iii) Uses the total of the volumes... Administrator may grant an averaging period of two, three, four or five years upon petition of a refiner who: (A... for a two or three year averaging period must be submitted by June 1, 2003. Regardless of the...
The Internet information infrastructure: Terrorist tool or architecture for information defense?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kadner, S.; Turpen, E.; Rees, B.
The Internet is a culmination of information age technologies and an agent of change. As with any infrastructure, dependency upon the so-called global information infrastructure creates vulnerabilities. Moreover, unlike physical infrastructures, the Internet is a multi-use technology. While information technologies, such as the Internet, can be utilized as a tool of terror, these same technologies can facilitate the implementation of solutions to mitigate the threat. In this vein, this paper analyzes the multifaceted nature of the Internet information infrastructure and argues that policymakers should concentrate on the solutions it provides rather than the vulnerabilities it creates. Minimizing risks and realizing possibilities in the information age will require institutional activities that translate, exploit and convert information technologies into positive solutions. What follows is a discussion of the Internet information infrastructure as it relates to increasing vulnerabilities and positive potential. The following four applications of the Internet will be addressed: as the infrastructure for information competence; as a terrorist tool; as the terrorist's target; and as an architecture for rapid response.
MFC Communications Infrastructure Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michael Cannon; Terry Barney; Gary Cook
2012-01-01
Unprecedented growth in required telecommunications services and applications is changing the way the INL does business today. High-speed connectivity combined with high demand for telephony and network services requires a robust communications infrastructure. The current state of the MFC communications infrastructure limits growth opportunities for current and future services. This limitation is largely due to equipment capacity issues, an aging cabling infrastructure (external/internal fiber and copper cable) and inadequate space for telecommunications equipment. While some communications infrastructure improvements have been implemented over time, they have been completed without a clear overall plan or technology standard. This document identifies critical deficiencies in the communications infrastructure currently in operation at the MFC facilities and provides an analysis of the needs and deficiencies to be addressed in order to achieve the target architectural standards defined in STD-170. The intent of STD-170 is to provide a robust, flexible, long-term solution that aligns communications capabilities with the INL mission and accommodates programmatic growth and expansion needs.
Prototype Rail Crossing Violation Warning Application Project Report.
DOT National Transportation Integrated Search
2017-09-05
This report is the Project Report for the Rail Crossing Violation Warning (RCVW) safety application developed for the project on Rail Crossing Violation Warning Application and Infrastructure Connection, providing a means for equipped connected vehic...
Connected vehicle application : safety.
DOT National Transportation Integrated Search
2015-01-01
Connected vehicle safety applications are designed to increase situational awareness : and reduce or eliminate crashes through vehicle-to-infrastructure (V2I), vehicle-to-vehicle (V2V), and vehicle-to-pedestrian (V2P) data transmissions. Applications...
Infrastructure Joint Venture Projects in Malaysia: A Preliminary Study
NASA Astrophysics Data System (ADS)
Romeli, Norsyakilah; Muhamad Halil, Faridah; Ismail, Faridah; Sufian Hasim, Muhammad
2018-03-01
As in many developed countries, the function of infrastructure in Malaysia is to connect each region of the country holistically; infrastructure comprises investment network projects such as transportation, water and sewerage, power, communication and irrigation systems. Hence, billions in government revenue are reserved for infrastructure development. Towards successful infrastructure development, a joint venture approach has been promoted since 2016 in one of the government thrusts of the Construction Industry Transformation Plan, which encourages internationalisation among contractors. However, information on the actual practice of infrastructure joint venture projects in Malaysia is scarce. Therefore, this study attempts to explore the real application of joint ventures in Malaysian infrastructure projects. Using a questionnaire survey, a set of survey questions was distributed to the targeted respondents. The survey contained three sections: respondent details, organisational background and project capital in infrastructure joint venture projects. The results were recorded and analysed using SPSS software. The contractors stated that they have implemented joint venture practice mostly with the client, with usual construction periods of more than 5 years. The study also indicates problems in joint venture projects from the perspective of project capital, and railway infrastructure should be highlighted in future studies due to its high significance in terms of cost and technical issues.
Airoldi, Laura; Bulleri, Fabio
2011-01-01
Coastal landscapes are being transformed as a consequence of the increasing demand for infrastructures to sustain residential, commercial and tourist activities. Thus, intertidal and shallow marine habitats are largely being replaced by a variety of artificial substrata (e.g. breakwaters, seawalls, jetties). Understanding the ecological functioning of these artificial habitats is key to planning their design and management, in order to minimise their impacts and to improve their potential to contribute to marine biodiversity and ecosystem functioning. Nonetheless, little effort has been made to assess the role of human disturbances in shaping the structure of assemblages on marine artificial infrastructures. We tested the hypothesis that some negative impacts associated with the expansion of opportunistic and invasive species on urban infrastructures can be related to the severe human disturbances that are typical of these environments, such as those from maintenance and renovation works. Maintenance caused a marked decrease in the cover of dominant space occupiers, such as mussels and oysters, and a significant enhancement of opportunistic and invasive forms, such as biofilm and macroalgae. These effects were particularly pronounced on sheltered substrata compared to exposed substrata. Experimental application of the disturbance in winter reduced the magnitude of the impacts compared to application in spring or summer. We use these results to identify possible management strategies to inform the improvement of the ecological value of artificial marine infrastructures. We demonstrate that some of the impacts of globally expanding marine urban infrastructures, such as those related to the spread of opportunistic and invasive species, could be mitigated through ecologically-driven planning and management of long-term maintenance of these structures. Impact mitigation is a possible outcome of policies that consider the ecological features of built infrastructures and the fundamental value of controlling biodiversity in marine urban systems.
NASA Astrophysics Data System (ADS)
England, John F.; Julien, Pierre Y.; Velleux, Mark L.
2014-03-01
Traditionally, deterministic flood procedures such as the Probable Maximum Flood have been used for critical infrastructure design. Some Federal agencies now use hydrologic risk analysis to assess potential impacts of extreme events on existing structures such as large dams. Extreme flood hazard estimates and distributions are needed for these efforts, with very low annual exceedance probabilities (≤ 10⁻⁴, i.e. return periods > 10,000 years). An integrated data-modeling hydrologic hazard framework for physically-based extreme flood hazard estimation is presented. Key elements include: (1) a physically-based runoff model (TREX) coupled with a stochastic storm transposition technique; (2) hydrometeorological information from radar and an extreme storm catalog; and (3) streamflow and paleoflood data for independently testing and refining runoff model predictions at internal locations. This new approach requires full integration of collaborative work in hydrometeorology, flood hydrology and paleoflood hydrology. An application on the 12,000 km² Arkansas River watershed in Colorado demonstrates that the size and location of extreme storms are critical factors in the analysis of basin-average rainfall frequency and flood peak distributions. Runoff model results are substantially improved by the availability and use of paleoflood nonexceedance data spanning the past 1000 years at critical watershed locations.
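For readers unfamiliar with the convention (a standard definition, not specific to this paper), annual exceedance probability p and return period T are reciprocals:

    T = \frac{1}{p}, \qquad p \le 10^{-4} \iff T \ge 10^{4}\ \text{years}.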
Acoustic Sensor Network for Relative Positioning of Nodes
De Marziani, Carlos; Ureña, Jesus; Hernandez, Álvaro; Mazo, Manuel; García, Juan Jesús; Jimenez, Ana; Rubio, María del Carmen Pérez; Álvarez, Fernando; Villadangos, José Manuel
2009-01-01
In this work, an acoustic sensor network for a relative localization system is analyzed by reporting the accuracy achieved in the position estimation. The proposed system has been designed for those applications where objects are not restricted to a particular environment and thus cannot depend on any external infrastructure to compute their positions. The objects are capable of computing spatial relations among themselves using only acoustic emissions as a ranging mechanism. The object positions are computed by a multidimensional scaling (MDS) technique and, afterwards, a least-squares algorithm, based on the Levenberg-Marquardt algorithm (LMA), is applied to refine the results. Regarding the position estimation, all the parameters involved in the computation of the temporal relations with the proposed ranging mechanism have been considered. The obtained results show that fine-grained localization can be achieved assuming a Gaussian distribution error in the proposed ranging mechanism. Furthermore, since acoustic sensors require a line of sight to work properly, the system has also been tested by modeling the loss of this line of sight as a non-Gaussian error. A suitable position estimation has been achieved even when a bias in up to 25% of the line-of-sight measurements among a set of nodes is considered. PMID:22291520
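A compact sketch of the two-stage estimation the abstract describes, classical MDS followed by least-squares refinement (a generic implementation; the paper's exact cost function and LMA settings are not reproduced here):

    # Python: classical MDS from a pairwise distance matrix, then LM refinement
    import numpy as np
    from scipy.optimize import least_squares

    def classical_mds(D, dim=2):
        """Recover relative coordinates from a distance matrix via double centering."""
        n = D.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n
        B = -0.5 * J @ (D ** 2) @ J
        w, V = np.linalg.eigh(B)
        idx = np.argsort(w)[::-1][:dim]
        return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

    def refine(X0, D):
        """Refine node positions so inter-node distances match measured ranges."""
        n, dim = X0.shape
        iu = np.triu_indices(n, k=1)
        def residuals(x):
            X = x.reshape(n, dim)
            return np.linalg.norm(X[iu[0]] - X[iu[1]], axis=1) - D[iu]
        return least_squares(residuals, X0.ravel(), method="lm").x.reshape(n, dim)

    # Synthetic example: 6 nodes, noisy acoustic ranges
    rng = np.random.default_rng(1)
    X_true = rng.uniform(0, 10, (6, 2))
    D = np.linalg.norm(X_true[:, None] - X_true[None, :], axis=-1)
    D_noisy = D + rng.normal(0, 0.05, D.shape)
    D_noisy = (D_noisy + D_noisy.T) / 2
    np.fill_diagonal(D_noisy, 0)
    X_est = refine(classical_mds(D_noisy), D_noisy)

Note that the recovered geometry is defined only up to rotation and translation, which is exactly the relative (infrastructure-free) setting the abstract targets.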
Low cost, multiscale and multi-sensor application for flooded area mapping
NASA Astrophysics Data System (ADS)
Giordan, Daniele; Notti, Davide; Villa, Alfredo; Zucca, Francesco; Calò, Fabiana; Pepe, Antonio; Dutto, Furio; Pari, Paolo; Baldo, Marco; Allasia, Paolo
2018-05-01
Flood mapping and estimation of the maximum water depth are essential elements for first damage evaluation, civil protection intervention planning and detection of areas where remediation is needed. In this work, we present and discuss a methodology for mapping and quantifying flood severity over floodplains. The proposed methodology follows a multiscale and multi-sensor approach using free or low-cost data and sensors. We applied this method to the November 2016 Piedmont (northwestern Italy) flood. We first mapped the flooded areas at the basin scale using free satellite data of low to medium-high resolution from both SAR (Sentinel-1, COSMO-SkyMed) and multispectral sensors (MODIS, Sentinel-2). Using very- and ultra-high-resolution images from a low-cost aerial platform and a remotely piloted aerial system, we refined the flooded zone and detected the most damaged sector. The presented method considers both urbanised and non-urbanised areas. Nadir images have several limitations, in particular in urbanised areas, where the use of terrestrial images overcame them. Very- and ultra-high-resolution images were processed with structure from motion (SfM) for the realisation of 3-D models. These data, combined with an available digital terrain model, allowed us to obtain maps of the flooded area, maximum high water area and damaged infrastructure.
ERIC Educational Resources Information Center
Sukwong, Orathai
2013-01-01
Virtualization enables the ability to consolidate multiple servers on a single physical machine, increasing the infrastructure utilization. Maximizing the ratio of server virtual machines (VMs) to physical machines, namely the consolidation ratio, becomes an important goal toward infrastructure cost saving in a cloud. However, the consolidation…
49 CFR 1520.9 - Restrictions on the disclosure of SSI.
Code of Federal Regulations, 2011 CFR
2011-10-01
... inform TSA or the applicable DOT or DHS component or agency. (d) Additional Requirements for Critical Infrastructure Information. In the case of information that is both SSI and has been designated as critical infrastructure information under section 214 of the Homeland Security Act, any covered person who is a Federal...
49 CFR 1520.9 - Restrictions on the disclosure of SSI.
Code of Federal Regulations, 2013 CFR
2013-10-01
... inform TSA or the applicable DOT or DHS component or agency. (d) Additional Requirements for Critical Infrastructure Information. In the case of information that is both SSI and has been designated as critical infrastructure information under section 214 of the Homeland Security Act, any covered person who is a Federal...
49 CFR 1520.9 - Restrictions on the disclosure of SSI.
Code of Federal Regulations, 2014 CFR
2014-10-01
... inform TSA or the applicable DOT or DHS component or agency. (d) Additional Requirements for Critical Infrastructure Information. In the case of information that is both SSI and has been designated as critical infrastructure information under section 214 of the Homeland Security Act, any covered person who is a Federal...
49 CFR 1520.9 - Restrictions on the disclosure of SSI.
Code of Federal Regulations, 2012 CFR
2012-10-01
... inform TSA or the applicable DOT or DHS component or agency. (d) Additional Requirements for Critical Infrastructure Information. In the case of information that is both SSI and has been designated as critical infrastructure information under section 214 of the Homeland Security Act, any covered person who is a Federal...
ERIC Educational Resources Information Center
Yrchik, John; Cradler, John
1994-01-01
Discusses guidelines that were developed to ensure that the National Information Infrastructure provides expanded opportunities for education and training. Topics include access requirements for homes and work places as well as schools; education and training application requirements, including coordination by federal departments and agencies; and…
The US EPA is developing assessment tools to evaluate the effectiveness of green infrastructure (GI) applied in stormwater best management practices (BMPs) at the small watershed (HUC12 or finer) scale. Based on analysis of historical monitoring data using boosted regression tre...
ERIC Educational Resources Information Center
Conn, Samuel S.; Reichgelt, Han
2013-01-01
Cloud computing represents an architecture and paradigm of computing designed to deliver infrastructure, platforms, and software as constructible computing resources on demand to networked users. As campuses are challenged to better accommodate academic needs for applications and computing environments, cloud computing can provide an accommodating…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williamson, Richard L.; Kochunas, Brendan; Adams, Brian M.
The Virtual Environment for Reactor Applications components included in this distribution comprise selected computational tools and supporting infrastructure that solve neutronics, thermal-hydraulics, fuel performance, and coupled neutronics-thermal-hydraulics problems. The infrastructure components provide a simplified common user input capability and support physics integration with data transfer and coupled-physics iterative solution algorithms.
Quantifying Security Threats and Their Impact
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aissa, Anis Ben; Abercrombie, Robert K; Sheldon, Frederick T
In earlier works, we presented a computational infrastructure that allows an analyst to estimate the security of a system in terms of the loss that each stakeholder stands to sustain as a result of security breakdowns. In this paper we illustrate this infrastructure by means of a simple example involving an e-commerce application.
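The underlying computation can be sketched generically as an expected-loss estimate per stakeholder (a hedged illustration of the idea, not the authors' published formulation; all names and numbers are invented):

    # Python: expected loss per stakeholder from component-failure probabilities
    import numpy as np

    stakeholders = ["customer", "merchant", "operator"]
    components = ["auth", "payment", "database"]

    # stakes[i][j]: loss ($/h) to stakeholder i if component j fails (assumed values)
    stakes = np.array([[10.0,  50.0,  5.0],
                       [ 2.0, 400.0, 80.0],
                       [ 1.0,  30.0, 60.0]])

    # Probability each component fails during one hour of operation (assumed values)
    p_fail = np.array([1e-4, 5e-5, 2e-4])

    expected_loss = stakes @ p_fail  # expected $/h of loss per stakeholder
    for s, loss in zip(stakeholders, expected_loss):
        print(f"{s}: ${loss:.4f} per hour")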
Quantifying Security Threats and Their Potential Impacts: A Case Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aissa, Anis Ben; Abercrombie, Robert K; Sheldon, Frederick T
In earlier works, we presented a computational infrastructure that allows an analyst to estimate the security of a system in terms of the loss that each stakeholder stands to sustain as a result of security breakdowns. In this paper, we illustrate this infrastructure by means of an e-commerce application.
Cyber Security Threats to Safety-Critical, Space-Based Infrastructures
NASA Astrophysics Data System (ADS)
Johnson, C. W.; Atencia Yepez, A.
2012-01-01
Space-based systems play an important role within national critical infrastructures. They are being integrated into advanced air-traffic management applications, rail signalling systems, energy distribution software etc. Unfortunately, the end users of communications, location sensing and timing applications often fail to understand that these infrastructures are vulnerable to a wide range of security threats. The following pages focus on concerns associated with potential cyber-attacks. These are important because future attacks may invalidate many of the safety assumptions that support the provision of critical space-based services. These safety assumptions are based on standard forms of hazard analysis that ignore cyber-security considerations. This is a significant limitation when, for instance, security attacks can simultaneously exploit multiple vulnerabilities in a manner that would never occur without a deliberate enemy seeking to damage space-based systems and ground infrastructures. We address this concern through the development of a combined safety and security risk assessment methodology. The aim is to identify attack scenarios that justify the allocation of additional design resources so that safety barriers can be strengthened to increase our resilience against security threats.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-10
...EPA is proposing action on four Missouri State Implementation Plan (SIP) submissions. First, EPA is proposing to approve portions of two SIP submissions from the State of Missouri addressing the applicable requirements of Clean Air Act (CAA) for the 1997 and 2006 National Ambient Air Quality Standards (NAAQS) for fine particulate matter (PM2.5). The CAA requires that each state adopt and submit a SIP to support implementation, maintenance, and enforcement of each new or revised NAAQS promulgated by EPA. These SIPs are commonly referred to as ``infrastructure'' SIPs. The infrastructure requirements are designed to ensure that the structural components of each state's air quality management program are adequate to meet the state's responsibilities under the CAA. EPA is also proposing to approve two additional SIP submissions from Missouri, one addressing the Prevention of Significant Deterioration (PSD) program in Missouri, and another addressing the requirements applicable to any board or body which approves permits or enforcement orders of the CAA, both of which support requirements associated with infrastructure SIPs.
Body Area Network BAN--a key infrastructure element for patient-centered medical applications.
Schmidt, Robert; Norgall, Thomas; Mörsdorf, Joachim; Bernhard, Josef; von der Grün, Thomas
2002-01-01
The Body Area Network (BAN) concept enables wireless communication between several miniaturized, intelligent Body Sensor (or actor) Units (BSU) and a single Body Central Unit (BCU) worn on the human body. A separate wireless transmission link from the BCU to a network access point--using different technology--provides online access to BAN data via the usual network infrastructure. BAN is expected to become a basic infrastructure element for service-based electronic health assistance: by integrating patient-attached sensors and control of mobile dedicated actor units, the range of medical workflow can be extended by wireless patient monitoring and therapy support. Beyond clinical use, professional disease management environments, and private personal health assistance scenarios (without financial reimbursement by health agencies/insurance companies), BAN enables a wide range of health care applications and related services.
Information Infrastructure Technology and Applications (IITA) Program: Annual K-12 Workshop
NASA Technical Reports Server (NTRS)
Hunter, Paul; Likens, William; Leon, Mark
1995-01-01
The purpose of the K-12 workshop is to stimulate cross-pollination of inter-center activity and introduce the regional centers to cutting-edge K-12 activities. The format of the workshop consists of project presentations, working groups, and working group reports, all contained in a three-day period. The agenda is aggressive and demanding. The K-12 Education Project is a multi-center activity managed by the Information Infrastructure Technology and Applications (IITA)/K-12 Project Office at the NASA Ames Research Center (ARC). This workshop is conducted in support of executing the K-12 Education element of the IITA Project. The IITA/K-12 Project funds activities that use the National Information Infrastructure (NII) (e.g., the Internet) to foster reform and restructuring in mathematics, science, computing, engineering, and technical education.
A Computational framework for telemedicine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foster, I.; von Laszewski, G.; Thiruvathukal, G. K.
1998-07-01
Emerging telemedicine applications require the ability to exploit diverse and geographically distributed resources. High-speed networks are used to integrate advanced visualization devices, sophisticated instruments, large databases, archival storage devices, PCs, workstations, and supercomputers. This form of telemedical environment is similar to networked virtual supercomputers, also known as metacomputers. Metacomputers are already being used in many scientific application areas. In this article, we analyze requirements necessary for a telemedical computing infrastructure and compare them with requirements found in a typical metacomputing environment. We will show that metacomputing environments can be used to enable a more powerful and unified computational infrastructure for telemedicine. The Globus metacomputing toolkit can provide the necessary low-level mechanisms to enable a large-scale telemedical infrastructure. The Globus toolkit components are designed in a modular fashion and can be extended to support the specific requirements of telemedicine.
Parallel Adaptive Mesh Refinement Library
NASA Technical Reports Server (NTRS)
MacNeice, Peter; Olson, Kevin
2005-01-01
Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.
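A toy sketch of the block-tree idea PARAMESH implements (Python for illustration only; PARAMESH itself is Fortran 90, and none of its API is reproduced here):

    # Python: refine a quad-tree of logically Cartesian grid blocks on demand
    from dataclasses import dataclass, field

    @dataclass
    class Block:
        x0: float; y0: float; size: float      # block extent
        nx: int = 8                            # cells per side, same on every block
        children: list = field(default_factory=list)

        def refine(self):
            """Split into 4 child blocks, each covering a quadrant at 2x resolution."""
            h = self.size / 2
            self.children = [Block(self.x0 + i * h, self.y0 + j * h, h)
                             for j in (0, 1) for i in (0, 1)]

        def leaves(self):
            if not self.children:
                yield self
            else:
                for c in self.children:
                    yield from c.leaves()

    # Cover the domain with one root block; refine where a feature indicator is large
    root = Block(0.0, 0.0, 1.0)
    for _ in range(3):
        for leaf in list(root.leaves()):
            cx, cy = leaf.x0 + leaf.size / 2, leaf.y0 + leaf.size / 2
            if (cx - 0.5) ** 2 + (cy - 0.5) ** 2 < 0.05:  # refine near a "shock"
                leaf.refine()
    print(sum(1 for _ in root.leaves()), "leaf blocks")

Because every block carries the same logically Cartesian mesh, spatial resolution doubles with each tree level while the per-block solver code stays unchanged, which is the essence of the design described above.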
Brunger, Axel T; Das, Debanu; Deacon, Ashley M; Grant, Joanna; Terwilliger, Thomas C; Read, Randy J; Adams, Paul D; Levitt, Michael; Schröder, Gunnar F
2012-04-01
Phasing by molecular replacement remains difficult for targets that are far from the search model or in situations where the crystal diffracts only weakly or to low resolution. Here, the process of determining and refining the structure of Cgl1109, a putative succinyl-diaminopimelate desuccinylase from Corynebacterium glutamicum, at ∼3 Å resolution is described using a combination of homology modeling with MODELLER, molecular-replacement phasing with Phaser, deformable elastic network (DEN) refinement and automated model building using AutoBuild in a semi-automated fashion, followed by final refinement cycles with phenix.refine and Coot. This difficult molecular-replacement case illustrates the power of including DEN restraints derived from a starting model to guide the movements of the model during refinement. The resulting improved model phases provide better starting points for automated model building and produce more significant difference peaks in anomalous difference Fourier maps to locate anomalous scatterers than does standard refinement. This example also illustrates a current limitation of automated procedures that require manual adjustment of local sequence misalignments between the homology model and the target sequence.
Ouyang, Min; Tian, Hui; Wang, Zhenghua; Hong, Liu; Mao, Zijun
2017-01-17
This article studies a general type of initiating events in critical infrastructures, called spatially localized failures (SLFs), which are defined as the failure of a set of infrastructure components distributed in a spatially localized area due to damage sustained, while other components outside the area do not directly fail. These failures can be regarded as a special type of intentional attack, such as a bomb or explosive assault, or a generalized modeling of the impact of localized natural hazards on large-scale systems. This article introduces three SLF models: node-centered SLFs, district-based SLFs, and circle-shaped SLFs, and proposes an SLF-induced vulnerability analysis method covering three aspects: identification of critical locations; comparison of infrastructure vulnerability to random failures, topologically localized failures and SLFs; and quantification of infrastructure information value. The proposed SLF-induced vulnerability analysis method is finally applied to the Chinese railway system and can also be easily adapted to analyze other critical infrastructures for valuable protection suggestions. © 2017 Society for Risk Analysis.
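A minimal sketch of the circle-shaped SLF model as described (generic graph code on synthetic data; the article's railway network and vulnerability metrics are not reproduced):

    # Python: circle-shaped spatially localized failure on a geometric network
    import networkx as nx

    # Toy infrastructure: nodes with coordinates, edges between nearby nodes
    G = nx.random_geometric_graph(200, 0.12, seed=42)  # positions in node attr "pos"

    def circle_slf(G, center, radius):
        """Fail every node within `radius` of `center`; return the surviving network."""
        cx, cy = center
        failed = [n for n, d in G.nodes(data=True)
                  if (d["pos"][0] - cx) ** 2 + (d["pos"][1] - cy) ** 2 <= radius ** 2]
        H = G.copy()
        H.remove_nodes_from(failed)
        return H, failed

    H, failed = circle_slf(G, center=(0.5, 0.5), radius=0.15)
    # One simple vulnerability proxy: shrinkage of the largest connected component
    gcc_before = max(len(c) for c in nx.connected_components(G))
    gcc_after = max(len(c) for c in nx.connected_components(H)) if H else 0
    print(f"failed {len(failed)} nodes; giant component {gcc_before} -> {gcc_after}")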
Assessing large-scale wildlife responses to human infrastructure development
Torres, Aurora; Jaeger, Jochen A. G.; Alonso, Juan Carlos
2016-01-01
Habitat loss and deterioration represent the main threats to wildlife species, and are closely linked to the expansion of roads and human settlements. Unfortunately, large-scale effects of these structures remain generally overlooked. Here, we analyzed the European transportation infrastructure network and found that 50% of the continent is within 1.5 km of transportation infrastructure. We present a method for assessing the impacts from infrastructure on wildlife, based on functional response curves describing density reductions in birds and mammals (e.g., road-effect zones), and apply it to Spain as a case study. The imprint of infrastructure extends over most of the country (55.5% in the case of birds and 97.9% for mammals), with moderate declines predicted for birds (22.6% of individuals) and severe declines predicted for mammals (46.6%). Despite certain limitations, we suggest the approach proposed is widely applicable to the evaluation of effects of planned infrastructure developments under multiple scenarios, and propose an internationally coordinated strategy to update and improve it in the future. PMID:27402749
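A hedged sketch of the functional-response-curve idea (an assumed exponential recovery-with-distance form with invented parameters; the paper's fitted curves differ by species group):

    # Python: fraction of individuals lost inside an infrastructure effect zone
    import numpy as np

    def relative_density(dist_m, d0=0.4, scale_m=1000.0):
        """Assumed response curve: density is reduced to d0 at the infrastructure
        and recovers exponentially toward 1 with distance."""
        return 1.0 - (1.0 - d0) * np.exp(-dist_m / scale_m)

    # Distance of each landscape cell to the nearest infrastructure (synthetic)
    rng = np.random.default_rng(7)
    distances = rng.uniform(0, 5000, size=10_000)

    loss = 1.0 - relative_density(distances)
    print(f"predicted decline: {100 * loss.mean():.1f}% of individuals")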
Structural health monitoring of civil infrastructure.
Brownjohn, J M W
2007-02-15
Structural health monitoring (SHM) is a term increasingly used in the last decade to describe a range of systems implemented on full-scale civil infrastructures and whose purposes are to assist and inform operators about continued 'fitness for purpose' of structures under gradual or sudden changes to their state, to learn about either or both of the load and response mechanisms. Arguably, various forms of SHM have been employed in civil infrastructure for at least half a century, but it is only in the last decade or two that computer-based systems are being designed for the purpose of assisting owners/operators of ageing infrastructure with timely information for their continued safe and economic operation. This paper describes the motivations for and recent history of SHM applications to various forms of civil infrastructure and provides case studies on specific types of structure. It ends with a discussion of the present state-of-the-art and future developments in terms of instrumentation, data acquisition, communication systems and data mining and presentation procedures for diagnosis of infrastructural 'health'.
Infrastructure Retrofit Design via Composite Mechanics
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Gotsis, Pascal K.
1998-01-01
Select applications are described to illustrate the concept of retrofitting reinforced concrete infrastructure with fiber-reinforced plastic laminates. The concept is first illustrated by using an axially loaded reinforced concrete column. A reinforced concrete arch and a dome are then used to illustrate the versatility of the concept. Advanced methods such as finite element structural analysis and progressive structural fracture are then used to evaluate the adequacy of the retrofitting laminate. Results obtained show that retrofits can be designed to double and even triple the as-designed load of the selected reinforced concrete infrastructures.
Conflict minerals from the Democratic Republic of the Congo—Gold supply chain
George, Micheal W.
2015-11-10
Processing of the 3T mineral concentrates requires substantial infrastructure and capital and generally is done at relatively few specialized facilities that are not located at the mine site; primary and secondary processors typically are at separate locations. Gold, however, can easily be processed into semi-refined products at or near the mine site and has a high unit value in any form, which allows it to be readily exported through undocumented channels, making it more difficult to track to the mine or region of origin. To put this in perspective, 30 kilograms (66 pounds) of 83 percent pure gold (20 carat) would form a cube measuring 12 centimeters per side (about the size of a small tissue box) and, at a price of $1,200 per ounce, would be worth nearly $1 million. By contrast, the equivalent value of tungsten concentrates would weigh about 45 metric tons (t) (100,000 pounds). Once conflict-sourced gold has been combined with gold from other mines and scrap at a refiner, there is no feasible way to distinguish the source of the gold. Thus, once the gold leaves the immediate area of production, it is nearly indistinguishable from gold products mined in other areas.
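The arithmetic behind the comparison checks out (our verification, using the figures given): 30 kg of 83 percent pure alloy contains 24.9 kg of gold, so

    24.9\ \text{kg} \times \frac{1\ \text{ozt}}{31.1035\ \text{g}} \approx 800\ \text{ozt}, \qquad 800\ \text{ozt} \times \$1{,}200/\text{ozt} \approx \$960{,}000,

which is indeed nearly $1 million.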
Applications of smart materials in structural engineering.
DOT National Transportation Integrated Search
2003-10-01
With the development of materials and technology, many new materials find their applications in civil engineering to deal with the deteriorating infrastructure. Smart material is a promising example that deserves a wide focus, from research to applic...
Managing competing elastic Grid and Cloud scientific computing applications using OpenNebula
NASA Astrophysics Data System (ADS)
Bagnasco, S.; Berzano, D.; Lusso, S.; Masera, M.; Vallero, S.
2015-12-01
Elastic cloud computing applications, i.e. applications that automatically scale according to computing needs, work on the ideal assumption of infinite resources. While large public cloud infrastructures may be a reasonable approximation of this condition, scientific computing centres like WLCG Grid sites usually work in a saturated regime, in which applications compete for scarce resources through queues, priorities and scheduling policies, and keeping a fraction of the computing cores idle to allow for headroom is usually not an option. In our particular environment one of the applications (a WLCG Tier-2 Grid site) is much larger than all the others and cannot autoscale easily. Nevertheless, other smaller applications can benefit from automatic elasticity; the implementation of this property in our infrastructure, based on the OpenNebula cloud stack, will be described and the very first operational experiences with a small number of strategies for timely allocation and release of resources will be discussed.
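The elasticity property described can be sketched as a simple control loop (a generic illustration with toy state standing in for real queue and cloud APIs; it does not use the OpenNebula API):

    # Python: naive scale-up/scale-down loop for an elastic batch application
    state = {"queued": 120, "vms": 2}  # toy state in place of real monitoring calls

    TARGET_JOBS_PER_VM = 10
    MIN_VMS, MAX_VMS = 1, 50

    def desired_vms(queued: int) -> int:
        return max(MIN_VMS, min(MAX_VMS, -(-queued // TARGET_JOBS_PER_VM)))  # ceil div

    def reconcile_once():
        wanted = desired_vms(state["queued"])
        if state["vms"] < wanted:
            state["vms"] += 1  # grow one VM per cycle to avoid thrashing
        elif state["vms"] > wanted:
            state["vms"] -= 1  # shrink only when capacity is truly spare

    for cycle in range(12):    # in production this loop would run indefinitely
        reconcile_once()
        state["queued"] = max(0, state["queued"] - 8 * state["vms"])  # toy job drain
        print(f"cycle {cycle}: queued={state['queued']} vms={state['vms']}")

Growing and shrinking one VM per cycle is a deliberately conservative policy: in a saturated shared infrastructure, releasing resources promptly matters as much as acquiring them.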
Scholz, Stefan; Ngoli, Baltazar; Flessa, Steffen
2015-05-01
Health care infrastructure constitutes a major component of the structural quality of a health system. Infrastructural deficiencies of health services are reported in literature and research. A number of instruments exist for the assessment of infrastructure. However, no easy-to-use instruments to assess health facility infrastructure in developing countries are available. Present tools are not applicable for a rapid assessment by health facility staff. Therefore, health information systems lack data on facility infrastructure. A rapid assessment tool for the infrastructure of primary health care facilities was developed by the authors and pilot-tested in Tanzania. The tool measures the quality of all infrastructural components comprehensively and with high standardization. Ratings use a 2-1-0 scheme which is frequently used in Tanzanian health care services. Infrastructural indicators and indices are obtained from the assessment and serve for reporting and tracing of interventions. The tool was pilot-tested in Tanga Region (Tanzania). The pilot test covered seven primary care facilities in the range between dispensary and district hospital. The assessment encompassed the facilities as entities as well as 42 facility buildings and 80 pieces of technical medical equipment. A full assessment of facility infrastructure was undertaken by health care professionals while the rapid assessment was performed by facility staff. Serious infrastructural deficiencies were revealed. The rapid assessment tool proved a reliable instrument of routine data collection by health facility staff. The authors recommend integrating the rapid assessment tool in the health information systems of developing countries. Health authorities in a decentralized health system are thus enabled to detect infrastructural deficiencies and trace the effects of interventions. The tool can lay the data foundation for district facility infrastructure management.
Nödler, Karsten; Tsakiri, Maria; Aloupi, Maria; Gatidou, Georgia; Stasinakis, Athanasios S; Licha, Tobias
2016-04-01
Results from coastal water pollution monitoring (Lesvos Island, Greece) are presented. In total, 53 samples were analyzed for 58 polar organic micropollutants such as selected herbicides, biocides, corrosion inhibitors, stimulants, artificial sweeteners, and pharmaceuticals. The main focus is the application of a proposed wastewater indicator quartet (acesulfame, caffeine, valsartan, and valsartan acid) to detect point sources and contamination hot-spots with untreated and treated wastewater. The derived conclusions are compared with the state of knowledge regarding local land use and infrastructure. The artificial sweetener acesulfame and the stimulant caffeine were used as indicators for treated and untreated wastewater, respectively. In the case of contamination with untreated wastewater, the concentration ratio of the antihypertensive valsartan and its transformation product valsartan acid was used to further refine the estimation of the residence time of the contamination. The median/maximum concentrations of acesulfame and caffeine were 5.3/178 ng L(-1) and 6.1/522 ng L(-1), respectively. Their detection frequency was 100%. The highest concentrations were detected within the urban area of the capital of the island (Mytilene). The indicator quartet in the gulfs of Gera and Kalloni (two semi-enclosed embayments on the island) demonstrated different concentration patterns. A comparatively higher proportion of untreated wastewater was detected in the gulf of Gera, which is in agreement with data on the wastewater infrastructure. The indicator quality of the micropollutants for detecting wastewater was compared with electrical conductivity (EC) data. Due to their anthropogenic nature and low detection limits, the micropollutants are superior to EC regarding both sensitivity and selectivity. The concentrations of atrazine, diuron, and isoproturon did not exceed the annual average of their environmental quality standards (EQS) defined by the European Commission. At two sampling locations irgarol 1051 exceeded its annual average EQS value but not the maximum allowable concentration of 16 ng L(-1). Copyright © 2016 Elsevier Ltd. All rights reserved.
Addition of a breeding database in the Genome Database for Rosaceae
Evans, Kate; Jung, Sook; Lee, Taein; Brutcher, Lisa; Cho, Ilhyung; Peace, Cameron; Main, Dorrie
2013-01-01
Breeding programs produce large datasets that require efficient management systems to keep track of performance, pedigree, geographical and image-based data. With the development of DNA-based screening technologies, more breeding programs perform genotyping in addition to phenotyping for performance evaluation. The integration of breeding data with other genomic and genetic data is instrumental for the refinement of marker-assisted breeding tools, enhances genetic understanding of important crop traits and maximizes access and utility by crop breeders and allied scientists. Development of new infrastructure in the Genome Database for Rosaceae (GDR) was designed and implemented to enable secure and efficient storage, management and analysis of large datasets from the Washington State University apple breeding program and subsequently expanded to fit datasets from other Rosaceae breeders. The infrastructure was built using the software Chado and Drupal, making use of the Natural Diversity module to accommodate large-scale phenotypic and genotypic data. Breeders can search accessions within the GDR to identify individuals with specific trait combinations. Results from Search by Parentage list individuals with parents in common and results from Individual Variety pages link to all data available on each chosen individual including pedigree, phenotypic and genotypic information. Genotypic data are searchable by markers and alleles; results are linked to other pages in the GDR to enable the user to access tools such as GBrowse and CMap. This breeding database provides users with the opportunity to search datasets in a fully targeted manner and retrieve and compare performance data from multiple selections, years and sites, and to output the data needed for variety release publications and patent applications. The breeding database facilitates efficient program management. Storing publicly available breeding data in a database together with genomic and genetic data will further accelerate the cross-utilization of diverse data types by researchers from various disciplines. Database URL: http://www.rosaceae.org/breeders_toolbox PMID:24247530
Basu, Debasish; Avasthi, Ajit
2015-01-01
Background: Substance use disorders are believed to have become rampant in the State of Punjab, causing substantive loss to the person, the family, the society, and the state. The situation is likely to worsen further if a structured, government-level, state-wide de-addiction service is not put into place. Aims: The aim was to describe a comprehensive structural model of de-addiction service in the State of Punjab (the "Pyramid model" or "Punjab model"), which is primarily concerned with demand reduction, particularly that part which is concerned with identification, treatment, and aftercare of substance users. Materials and Methods: At the behest of the Punjab Government, this model was developed by the authors after a detailed study of the current scenario, a critical and exhaustive look at the existing guidelines, policies, books, web resources, government documents, and the like in this area, a check of the ground reality in terms of existing infrastructural and manpower resources, and keeping pragmatism and practicability in mind. Several rounds of meetings with the government officials and other important stakeholders helped to refine the model further. Results: Our model envisages structural innovation and renovations within the existing state healthcare infrastructure. We formulated a "Pyramid model," later renamed the "Punjab model," with a broad community base for early identification and outpatient-level treatment at the primary care level, both outpatient and inpatient care at the secondary care level, and comprehensive management for more difficult cases at the tertiary care level. A separate de-addiction system for the prisons was also developed. Each of these structural elements was described and refined in detail, with the aim of uniform, standardized, and easily accessible care across the state. Conclusions: If the "Punjab model" succeeds, it can provide useful models for other states or even at the national level. PMID:25657452
NASA Astrophysics Data System (ADS)
Gupta, Shubhank; Panda, Aditi; Naskar, Ruchira; Mishra, Dinesh Kumar; Pal, Snehanshu
2017-11-01
Steels are alloys of iron and carbon, widely used in construction and other applications. The evolution of steel microstructure through various heat treatment processes is an important factor in controlling the properties and performance of steel. Extensive experimentation has been performed to enhance the properties of steel by customizing heat treatment processes. However, experimental analyses are always associated with high resource requirements in terms of cost and time. As an alternative, we propose an image processing-based technique for refinement of raw plain carbon steel microstructure images into a digital form usable in experiments related to heat treatment processes of steel in diverse applications. The proposed work follows the conventional steps practiced by materials engineers in manual refinement of steel images, and it appropriately utilizes basic image processing techniques (including filtering, segmentation, opening, and clustering) to automate the whole process. The proposed refinement of steel microstructure images aims to enable computer-aided simulations of heat treatment of plain carbon steel in a timely and cost-efficient manner; hence it is beneficial for the materials and metallurgy industry. Our experimental results demonstrate the efficiency and effectiveness of the proposed technique.
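As an illustration of the pipeline steps named above (filtering, segmentation, opening, clustering), here is a minimal sketch using scikit-image and scikit-learn. The input filename, filter sizes, and the two-phase assumption are illustrative choices, not the parameters of the published technique.

```python
# Minimal micrograph-refinement sketch; parameters are illustrative only.
from skimage import io, filters, morphology
from sklearn.cluster import KMeans

img = io.imread("micrograph.png", as_gray=True)  # hypothetical input image

# 1. Filtering: suppress acquisition noise while preserving grain boundaries.
smoothed = filters.gaussian(img, sigma=1.0)

# 2. Segmentation: Otsu threshold separates light and dark phases.
binary = smoothed > filters.threshold_otsu(smoothed)

# 3. Opening: remove small artifacts such as polishing scratches.
cleaned = morphology.binary_opening(binary, morphology.disk(2))

# 4. Clustering: group pixels by intensity into phase-like classes.
labels = KMeans(n_clusters=2, n_init=10).fit_predict(
    smoothed.reshape(-1, 1)).reshape(img.shape)

print(f"Estimated light-phase area fraction: {cleaned.mean():.2%}")
```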
Algorithms for Lightweight Key Exchange.
Alvarez, Rafael; Caballero-Gil, Cándido; Santonja, Juan; Zamora, Antonio
2017-06-27
Public-key cryptography is too slow for general-purpose encryption, so most applications limit its use as much as possible. Some secure protocols, especially those that enable forward secrecy, make much heavier use of public-key cryptography, increasing the demand for lightweight cryptosystems that can be implemented in low-powered or mobile devices. These performance requirements are even more significant in critical infrastructure and emergency scenarios, where peer-to-peer networks are deployed for increased availability and resiliency. We benchmark several public-key key-exchange algorithms, identifying those best suited to the requirements of critical infrastructure and emergency applications, propose a security framework based on these algorithms, and study its application to decentralized node and sensor networks.
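The following sketch shows how such a key-exchange micro-benchmark might look in Python with the `cryptography` package, timing X25519 against ECDH over P-256. The choice of curves and the iteration count are assumptions for illustration; the paper's own benchmark set and methodology may differ.

```python
# Key-exchange micro-benchmark sketch (timings include key generation).
import time
from cryptography.hazmat.primitives.asymmetric import ec, x25519

def bench(label, fn, n=500):
    start = time.perf_counter()
    for _ in range(n):
        fn()
    print(f"{label}: {(time.perf_counter() - start) / n * 1e3:.3f} ms/exchange")

def x25519_exchange():
    a = x25519.X25519PrivateKey.generate()
    b = x25519.X25519PrivateKey.generate()
    a.exchange(b.public_key())  # derive shared secret on one side

def p256_exchange():
    a = ec.generate_private_key(ec.SECP256R1())
    b = ec.generate_private_key(ec.SECP256R1())
    a.exchange(ec.ECDH(), b.public_key())

bench("X25519", x25519_exchange)
bench("ECDH P-256", p256_exchange)
```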
Mayer, Gerhard; Quast, Christian; Felden, Janine; Lange, Matthias; Prinz, Manuel; Pühler, Alfred; Lawerenz, Chris; Scholz, Uwe; Glöckner, Frank Oliver; Müller, Wolfgang; Marcus, Katrin; Eisenacher, Martin
2017-10-30
Sustainable noncommercial bioinformatics infrastructures are a prerequisite to use and take advantage of the potential of big data analysis for research and economy. Consequently, funders, universities and institutes as well as users ask for a transparent value model for the tools and services offered. In this article, a generally applicable lightweight method is described by which bioinformatics infrastructure projects can estimate the value of tools and services offered without determining the exact total cost of ownership. Five representative scenarios for value estimation, from a rough estimation to a detailed breakdown of costs, are presented. To account for the diversity in bioinformatics applications and services, the notion of service-specific 'service provision units' is introduced together with the factors influencing them and the main underlying assumptions for these 'value influencing factors'. Special attention is given to the handling of personnel costs and indirect costs such as electricity. Four examples are presented for the calculation of the value of tools and services provided by the German Network for Bioinformatics Infrastructure (de.NBI): one for tool usage, one for (Web-based) database analyses, one for consulting services and one for bioinformatics training events. Finally, from the discussed values, the costs of direct funding and the costs of payment of services by funded projects are calculated and compared. © The Author 2017. Published by Oxford University Press.
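A minimal worked example of the per-unit arithmetic implied by the 'service provision unit' notion follows; all figures and factor names are hypothetical, not de.NBI's actual numbers.

```python
# Illustrative arithmetic only: deriving a per-unit value from personnel and
# indirect costs. Every figure below is hypothetical.
personnel_cost_per_year = 70000.0   # EUR, staff cost (hypothetical)
indirect_cost_per_year  = 15000.0   # EUR, e.g. electricity, hosting
units_served_per_year   = 50000     # 'service provision units', e.g. analyses run

value_per_unit = (personnel_cost_per_year + indirect_cost_per_year) / units_served_per_year
print(f"Estimated value per service provision unit: {value_per_unit:.2f} EUR")
```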
A technological infrastructure to sustain Internetworked Enterprises
NASA Astrophysics Data System (ADS)
La Mattina, Ernesto; Savarino, Vincenzo; Vicari, Claudia; Storelli, Davide; Bianchini, Devis
In the Web 3.0 scenario, where information and services are connected by means of their semantics, organizations can improve their competitive advantage by publishing their business and service descriptions. In this scenario, Semantic Peer-to-Peer (P2P) can play a key role in defining dynamic and highly reconfigurable infrastructures. Organizations can share knowledge and services, using this infrastructure to move towards value networks, an emerging organizational model characterized by fluid boundaries and complex relationships. This chapter collects and defines the technological requirements and architecture of a modular, multi-layer P2P infrastructure for SOA-based applications. This technological infrastructure, based on the combination of Semantic Web and P2P technologies, is intended to sustain Internetworked Enterprise configurations, defining a distributed registry and enabling more expressive queries and efficient routing mechanisms. The following sections focus on the overall architecture, while describing the layers that form it.
NASA Astrophysics Data System (ADS)
Jacobs, J. M.; Thomas, N.; Mo, W.; Kirshen, P. H.; Douglas, E. M.; Daniel, J.; Bell, E.; Friess, L.; Mallick, R.; Kartez, J.; Hayhoe, K.; Croope, S.
2014-12-01
Recent events have demonstrated that the United States' transportation infrastructure is highly vulnerable to extreme weather events, which will likely increase in the future. In light of the 60% shortfall in the $900 billion investment needed over the next five years to maintain this aging infrastructure, hardening of all infrastructure is unlikely. Alternative strategies are needed to ensure that critical aspects of the transportation network are maintained during climate extremes. Preliminary concepts around multi-tier service expectations of bridges and roads with reference to network capacity will be presented. Drawing from recent flooding events across the U.S., specific examples for roads/pavement will be used to illustrate impacts, disruptions, and trade-offs between performance during events and subsequent damage. This talk will also address policy and cultural norms within civil engineering practice that will likely challenge the application of graceful failure pathways during extreme events.
Computational Methodologies for Real-Space Structural Refinement of Large Macromolecular Complexes
Goh, Boon Chong; Hadden, Jodi A.; Bernardi, Rafael C.; Singharoy, Abhishek; McGreevy, Ryan; Rudack, Till; Cassidy, C. Keith; Schulten, Klaus
2017-01-01
The rise of the computer as a powerful tool for model building and refinement has revolutionized the field of structure determination for large biomolecular systems. Despite the wide availability of robust experimental methods capable of resolving structural details across a range of spatiotemporal resolutions, computational hybrid methods have the unique ability to integrate the diverse data from multimodal techniques such as X-ray crystallography and electron microscopy into consistent, fully atomistic structures. Here, commonly employed strategies for computational real-space structural refinement are reviewed, and their specific applications are illustrated for several large macromolecular complexes: ribosome, virus capsids, chemosensory array, and photosynthetic chromatophore. The increasingly important role of computational methods in large-scale structural refinement, along with current and future challenges, is discussed. PMID:27145875
NASA Technical Reports Server (NTRS)
Tsiveriotis, K.; Brown, R. A.
1993-01-01
A new method is presented for the solution of free-boundary problems using Lagrangian finite element approximations defined on locally refined grids. The formulation allows for direct transition from coarse to fine grids without introducing non-conforming basis functions. The calculation of elemental stiffness matrices and residual vectors is unaffected by changes in the refinement level, which are accounted for in the loading of elemental data into the global stiffness matrix and residual vector. This technique for local mesh refinement is combined with recently developed mapping methods and Newton's method to form an efficient algorithm for the solution of free-boundary problems, as demonstrated here by sample calculations of cellular interfacial microstructure during directional solidification of a binary alloy.
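As a generic illustration of gradient-driven local refinement (not the paper's conforming finite element algorithm), the sketch below bisects cells of a 1D grid wherever the solution jump across a cell exceeds a tolerance, clustering nodes near a steep front.

```python
# Generic mark-and-subdivide local refinement sketch in 1D.
import numpy as np

def refine(x, u_func, tol=0.1, passes=3):
    """Bisect cells whose jump in u exceeds tol; return the refined grid."""
    for _ in range(passes):
        u = u_func(x)
        jumps = np.abs(np.diff(u))
        # Midpoints of flagged cells become new nodes.
        new_nodes = 0.5 * (x[:-1] + x[1:])[jumps > tol]
        if new_nodes.size == 0:
            break
        x = np.sort(np.concatenate([x, new_nodes]))
    return x

# A steep front near x = 0.5 attracts nodes, mimicking clustering at an interface.
grid = refine(np.linspace(0.0, 1.0, 11), lambda x: np.tanh(50 * (x - 0.5)))
print(f"{grid.size} nodes after refinement")
```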
18 CFR 33.8 - Requirements for filing applications.
Code of Federal Regulations, 2014 CFR
2014-04-01
... the Commission's instructions for submission of privileged materials and Critical Energy Infrastructure Information in § 388.112 of this chapter. (b) If required, the applicant must submit information...
18 CFR 33.8 - Requirements for filing applications.
Code of Federal Regulations, 2013 CFR
2013-04-01
... the Commission's instructions for submission of privileged materials and Critical Energy Infrastructure Information in § 388.112 of this chapter. (b) If required, the applicant must submit information...
ERIC Educational Resources Information Center
Natoli, Riccardo; Zuhair, Segu
2013-01-01
The resource-infrastructure-environment (RIE) index was proposed as an alternative measure of progress which was then employed to: (1) compare the aggregate (single summary) index findings between Australia (mid-industrialised nation), Mexico (emerging economy), and the US (highly industrialised nation); and (2) compare the RIE index against the…
Multiuser Collaboration with Networked Mobile Devices
NASA Technical Reports Server (NTRS)
Tso, Kam S.; Tai, Ann T.; Deng, Yong M.; Becks, Paul G.
2006-01-01
In this paper we describe a multiuser collaboration infrastructure that enables multiple mission scientists to remotely and collaboratively interact with visualization and planning software, using wireless networked personal digital assistants (PDAs) and other mobile devices. During ground operations of planetary rover and lander missions, scientists need to meet daily to review downlinked data and plan science activities. For example, scientists use the Science Activity Planner (SAP) in the Mars Exploration Rover (MER) mission to visualize downlinked data and plan rover activities during the science meetings [1]. Computer displays are projected onto large screens in the meeting room to enable the scientists to view and discuss downlinked images and data displayed by SAP and other software applications. However, only one person can interact with the software applications because input to the computer is limited to a single mouse and keyboard. As a result, the scientists have to verbally express their intentions, such as selecting a target at a particular location on the Mars terrain image, to that person in order to interact with the applications. This constrains communication and limits the returns of science planning. Furthermore, ground operations for Mars missions are fundamentally constrained by the short turnaround time for science and engineering teams to process and analyze data, plan the next uplink, generate command sequences, and transmit the uplink to the vehicle [2]. Therefore, improving ground operations is crucial to the success of Mars missions. The multiuser collaboration infrastructure enables users to control software applications remotely and collaboratively using mobile devices. The infrastructure includes (1) human-computer interaction techniques to provide natural, fast, and accurate inputs, (2) a communications protocol to ensure reliable and efficient coordination of the input devices and host computers, (3) an application-independent middleware that maintains the states, sessions, and interactions of individual users of the software applications, and (4) an application programming interface to enable tight integration of applications and the middleware. The infrastructure is able to support any software application running under Windows or Unix. The resulting technologies are not only applicable to NASA mission operations but also useful in other situations such as design reviews, brainstorming sessions, and business meetings, as these can benefit from having the participants concurrently interact with the software applications (e.g., presentation applications and CAD design tools) to illustrate their ideas and provide inputs.
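A minimal sketch of the kind of application-independent input-event message such a middleware might pass between mobile devices and host computers is shown below; the field names and the JSON encoding are assumptions for illustration, not the project's actual protocol.

```python
# Hypothetical input-event message format for a multiuser collaboration layer.
import json, time

def make_pointer_event(user_id, session_id, x, y, action="select"):
    """Encode one user's pointer action so the middleware can track state."""
    return json.dumps({
        "user": user_id,          # which participant issued the input
        "session": session_id,    # collaboration session on the host
        "action": action,         # e.g. select a target on the terrain image
        "pos": [x, y],            # normalized screen coordinates
        "timestamp": time.time(), # for ordering and conflict resolution
    })

print(make_pointer_event("scientist_7", "sap-review-42", 0.31, 0.66))
```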
NASA Technical Reports Server (NTRS)
Falke, Stefan; Husar, Rudolf
2011-01-01
The goal of this REASoN applications and technology project is to deliver and use Earth Science Enterprise (ESE) data and tools in support of air quality management. Its scope falls within the domain of air quality management and aims to develop a federated air quality information sharing network that includes data from NASA, EPA, US States and others. Project goals were achieved through access to satellite and ground observation data, web services information technology, interoperability standards, and air quality community collaboration. In contributing to a network of NASA ESE data in support of particulate air quality management, the project developed access to distributed data, built Web infrastructure, and created tools for data processing and analysis. The key technologies used in the project include emerging web services for developing self-describing and modular data access and processing tools, and service-oriented architecture for chaining web services together to assemble customized air quality management applications. The technology and tools required for this project were developed within DataFed.net, a shared infrastructure that supports collaborative atmospheric data sharing and processing web services. Much of the collaboration was facilitated through community interactions through the Federation of Earth Science Information Partners (ESIP) Air Quality Workgroup. The main activities during the project that successfully advanced DataFed, enabled air quality applications and established community-oriented infrastructures were: developing access to distributed data (surface and satellite); building Web infrastructure to support data access, processing and analysis; creating tools for data processing and analysis; and fostering air quality community collaboration and interoperability.
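To illustrate the web-service chaining pattern described above, the sketch below feeds the output of a data-access service into a processing service over HTTP. Both endpoint URLs and parameter names are hypothetical placeholders, not actual DataFed interfaces.

```python
# Service-chaining sketch: data-access service output feeds a processing
# service. URLs and parameters are placeholders, not real DataFed endpoints.
import requests

DATA_URL = "https://example.org/wcs/aod"          # hypothetical data service
PROC_URL = "https://example.org/proc/aggregate"   # hypothetical processing service

raw = requests.get(DATA_URL, params={"bbox": "-125,25,-65,50",
                                     "date": "2010-07-04"}, timeout=30)
raw.raise_for_status()

# Chain: pass the retrieved dataset to an aggregation service.
agg = requests.post(PROC_URL, json={"data": raw.json(), "stat": "daily_mean"},
                    timeout=30)
agg.raise_for_status()
print(agg.json())
```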
Application of Ontologies for Big Earth Data
NASA Astrophysics Data System (ADS)
Huang, T.; Chang, G.; Armstrong, E. M.; Boening, C.
2014-12-01
Connected data is smarter data! Earth Science research infrastructure must do more than just support temporal, geospatial discovery of satellite data. As the Earth Science data archives continue to expand across NASA data centers, the research communities are demanding smarter data services. A successful research infrastructure must be able to present researchers with the complete picture, that is, datasets with linked citations, related interdisciplinary data, imagery, current events, social media discussions, and scientific data tools that are relevant to the particular dataset. The popular Semantic Web for Earth and Environmental Terminology (SWEET) is a collection of ontologies and concepts designed to improve discovery and application of Earth Science data. The SWEET collection was initially developed to capture the relationships between keywords in the NASA Global Change Master Directory (GCMD). Over the years this popular collection has expanded to cover over 200 ontologies and 6000 concepts to enable scalable classification of concepts in Earth system science and space science. This presentation discusses semantic web technologies as the enabling technology for data-intensive science. We will discuss the application of the SWEET ontologies as a critical component in knowledge-driven research infrastructure for some recent projects, which include the DARPA Ontological System for Context Artifact and Resources (OSCAR), the 2013 NASA ACCESS Virtual Quality Screening Service (VQSS), and the 2013 NASA Sea Level Change Portal (SLCP) projects. The presentation will also discuss the benefits of using semantic web technologies in developing research infrastructure for Big Earth Science Data in an attempt to "accommodate all domains and provide the necessary glue for information to be cross-linked, correlated, and discovered in a semantically rich manner." [1] [1] Savas Parastatidis: A platform for all that we know: creating a knowledge-driven research infrastructure. The Fourth Paradigm 2009: 165-172
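As a small illustration of querying an ontology of this kind, the sketch below uses rdflib to list subclass relations from an OWL file. The file name is a placeholder for a locally saved SWEET module; actual SWEET module names and IRIs may differ.

```python
# Ontology query sketch with rdflib; the input file is a placeholder for a
# locally saved SWEET module.
from rdflib import Graph, RDFS

g = Graph()
g.parse("sweet_phenomena.owl")  # hypothetical local copy of a SWEET module

query = """
SELECT ?concept ?parent WHERE {
    ?concept rdfs:subClassOf ?parent .
} LIMIT 10
"""
for concept, parent in g.query(query, initNs={"rdfs": RDFS}):
    print(concept, "subClassOf", parent)
```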
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borbulevych, Oleg Y.; Plumley, Joshua A.; Martin, Roger I.
2014-05-01
Semiempirical quantum-chemical X-ray macromolecular refinement using the program DivCon integrated with PHENIX is described. Macromolecular crystallographic refinement relies on sometimes dubious stereochemical restraints and rudimentary energy functionals to ensure the correct geometry of the model of the macromolecule and any covalently bound ligand(s). The ligand stereochemical restraint file (CIF) requires a priori understanding of the ligand geometry within the active site, and creation of the CIF is often an error-prone process owing to the great variety of potential ligand chemistry and structure. Stereochemical restraints have been replaced with more robust functionals through the integration of the linear-scaling, semiempirical quantum-mechanics (SE-QM) program DivCon with the PHENIX X-ray refinement engine. The PHENIX/DivCon package has been thoroughly validated on a population of 50 protein–ligand Protein Data Bank (PDB) structures with a range of resolutions and chemistry. The PDB structures used for the validation were originally refined utilizing various refinement packages and were published within the past five years. PHENIX/DivCon does not utilize CIF(s), link restraints and other parameters for refinement and hence it does not make as many a priori assumptions about the model. Across the entire population, the method results in reasonable ligand geometries and low ligand strains, even when the original refinement exhibited difficulties, indicating that PHENIX/DivCon is applicable to both single-structure and high-throughput crystallography.
Arid Green Infrastructure for Water Control and Conservation ...
Green infrastructure is an approach to managing wet weather flows using systems and practices that mimic natural processes. It is designed to manage stormwater as close to its source as possible and protect the quality of receiving waters. Although most green infrastructure practices were first developed in temperate climates, green infrastructure also can be a cost-effective approach to stormwater management and water conservation in arid and semi-arid regions, such as those found in the western and southwestern United States. Green infrastructure practices can be applied at the site, neighborhood and watershed scales. In addition to water management and conservation, implementing green infrastructure confers many social and economic benefits and can address issues of environmental justice. The U.S. Environmental Protection Agency (EPA) commissioned a literature review to identify state-of-the-science practices dealing with water control and conservation in arid and semi-arid regions, with emphasis on these regions in the United States. The search focused on stormwater control measures or practices that slow, capture, treat, infiltrate and/or store runoff at its source (i.e., green infrastructure). The material in Chapters 1 through 3 provides background to EPA's current activities related to the application of green infrastructure practices in arid and semi-arid regions.
A systems framework for national assessment of climate risks to infrastructure.
Dawson, Richard J; Thompson, David; Johns, Daniel; Wood, Ruth; Darch, Geoff; Chapman, Lee; Hughes, Paul N; Watson, Geoff V R; Paulson, Kevin; Bell, Sarah; Gosling, Simon N; Powrie, William; Hall, Jim W
2018-06-13
Extreme weather causes substantial adverse socio-economic impacts by damaging and disrupting the infrastructure services that underpin modern society. Globally, $2.5tn a year is spent on infrastructure which is typically designed to last decades, over which period projected changes in the climate will modify infrastructure performance. A systems approach has been developed to assess risks across all infrastructure sectors to guide national policy making and adaptation investment. The method analyses diverse evidence of climate risks and adaptation actions, to assess the urgency and extent of adaptation required. Application to the UK shows that despite recent adaptation efforts, risks to infrastructure outweigh opportunities. Flooding is the greatest risk to all infrastructure sectors: even if the Paris Agreement to limit global warming to 2°C is achieved, the number of users reliant on electricity infrastructure at risk of flooding would double, while a 4°C rise could triple UK flood damage. Other risks are significant, for example 5% and 20% of river catchments would be unable to meet water demand with 2°C and 4°C global warming respectively. Increased interdependence between infrastructure systems, especially from energy and information and communication technology (ICT), are amplifying risks, but adaptation action is limited by lack of clear responsibilities. A programme to build national capability is urgently required to improve infrastructure risk assessment.This article is part of the theme issue 'Advances in risk assessment for climate change adaptation policy'. © 2018 The Authors.
A systems framework for national assessment of climate risks to infrastructure
NASA Astrophysics Data System (ADS)
Dawson, Richard J.; Thompson, David; Johns, Daniel; Wood, Ruth; Darch, Geoff; Chapman, Lee; Hughes, Paul N.; Watson, Geoff V. R.; Paulson, Kevin; Bell, Sarah; Gosling, Simon N.; Powrie, William; Hall, Jim W.
2018-06-01
Extreme weather causes substantial adverse socio-economic impacts by damaging and disrupting the infrastructure services that underpin modern society. Globally, $2.5tn a year is spent on infrastructure which is typically designed to last decades, over which period projected changes in the climate will modify infrastructure performance. A systems approach has been developed to assess risks across all infrastructure sectors to guide national policy making and adaptation investment. The method analyses diverse evidence of climate risks and adaptation actions, to assess the urgency and extent of adaptation required. Application to the UK shows that despite recent adaptation efforts, risks to infrastructure outweigh opportunities. Flooding is the greatest risk to all infrastructure sectors: even if the Paris Agreement to limit global warming to 2°C is achieved, the number of users reliant on electricity infrastructure at risk of flooding would double, while a 4°C rise could triple UK flood damage. Other risks are significant, for example 5% and 20% of river catchments would be unable to meet water demand with 2°C and 4°C global warming respectively. Increased interdependence between infrastructure systems, especially from energy and information and communication technology (ICT), are amplifying risks, but adaptation action is limited by lack of clear responsibilities. A programme to build national capability is urgently required to improve infrastructure risk assessment. This article is part of the theme issue `Advances in risk assessment for climate change adaptation policy'.
A systems framework for national assessment of climate risks to infrastructure
Thompson, David; Johns, Daniel; Darch, Geoff; Paulson, Kevin
2018-01-01
Extreme weather causes substantial adverse socio-economic impacts by damaging and disrupting the infrastructure services that underpin modern society. Globally, $2.5tn a year is spent on infrastructure which is typically designed to last decades, over which period projected changes in the climate will modify infrastructure performance. A systems approach has been developed to assess risks across all infrastructure sectors to guide national policy making and adaptation investment. The method analyses diverse evidence of climate risks and adaptation actions, to assess the urgency and extent of adaptation required. Application to the UK shows that despite recent adaptation efforts, risks to infrastructure outweigh opportunities. Flooding is the greatest risk to all infrastructure sectors: even if the Paris Agreement to limit global warming to 2°C is achieved, the number of users reliant on electricity infrastructure at risk of flooding would double, while a 4°C rise could triple UK flood damage. Other risks are significant, for example 5% and 20% of river catchments would be unable to meet water demand with 2°C and 4°C global warming respectively. Increased interdependence between infrastructure systems, especially from energy and information and communication technology (ICT), are amplifying risks, but adaptation action is limited by lack of clear responsibilities. A programme to build national capability is urgently required to improve infrastructure risk assessment. This article is part of the theme issue ‘Advances in risk assessment for climate change adaptation policy’. PMID:29712793
NASA Astrophysics Data System (ADS)
Porter, Brandon L.; North, Leslie A.; Polk, Jason S.
2016-12-01
The interconnected nature of surface and subsurface karst environments leaves their aquifers and specialized ecosystems easily disturbed by anthropogenic impacts. The karst disturbance index is a holistic tool used to measure disturbance to karst environments and has been applied and refined through studies in Florida and Italy, among others. Through these applications, the karst disturbance index has evolved into two commonly used methods of application; yet it remains open to evaluation and modification for application in other areas around the world. The geographically isolated and highly vulnerable karst area of the municipality of Arecibo, Puerto Rico, provides an opportunity to test the usefulness and validity of the karst disturbance index in an island setting and to compare and further refine the application of the original and modified methods. This study found that both methods of karst disturbance index application resulted in high disturbance scores (0.54 for the original method and 0.69 for the modified method) and uncovered multiple considerations for the improvement of the index. Evaluating the two methods together in an island setting also revealed the need for additional indicators, including Mogote Removal and Coastal Karst. Collectively, the results provide a holistic approach to using the karst disturbance index in an island karst setting and suggest a modified method by which scaling and weighting may compensate for the difference between the original and modified method scores and allow interested stakeholders to evaluate disturbance regardless of their level of expertise.
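For readers unfamiliar with disturbance indices of this kind, a worked example follows: indicators are commonly scored 0-3 and the index is the score sum normalized by the maximum attainable score, giving a value between 0 (pristine) and 1 (fully disturbed). The indicator names and scores below are hypothetical.

```python
# Worked example of a karst disturbance index computation as commonly
# formulated; indicator names and scores are hypothetical.
scores = {"quarrying": 3, "dumping": 2, "vegetation_removal": 2,
          "mogote_removal": 3, "coastal_karst": 1}

kdi = sum(scores.values()) / (3 * len(scores))  # normalize by max score of 3
print(f"Disturbance index: {kdi:.2f}")  # 0 = pristine, 1 = fully disturbed
```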
GEMSS: grid-infrastructure for medical service provision.
Benkner, S; Berti, G; Engelbrecht, G; Fingberg, J; Kohring, G; Middleton, S E; Schmidt, R
2005-01-01
The European GEMSS Project is concerned with the creation of medical Grid service prototypes and their evaluation in a secure service-oriented infrastructure for distributed on-demand supercomputing. Key aspects of the GEMSS Grid middleware include negotiable QoS support for time-critical service provision, flexible support for business models, and security at all levels in order to ensure privacy of patient data as well as compliance with EU law. The GEMSS Grid infrastructure is based on a service-oriented architecture and is being built on top of existing standard Grid and Web technologies. The GEMSS infrastructure offers a generic Grid service provision framework that hides the complexity of transforming existing applications into Grid services. For the development of client-side applications or portals, a pluggable component framework has been developed, providing developers with full control over business processes, service discovery, QoS negotiation, and workflow, while keeping their underlying implementation hidden from view. A first version of the GEMSS Grid infrastructure is operational and has been used for the set-up of a Grid test-bed deploying six medical Grid service prototypes including maxillo-facial surgery simulation, neuro-surgery support, radio-surgery planning, inhaled drug-delivery simulation, cardiovascular simulation and advanced image reconstruction. The GEMSS Grid infrastructure is based on standard Web Services technology with an anticipated future transition path towards the OGSA standard proposed by the Global Grid Forum. GEMSS demonstrates that the Grid can be used to provide medical practitioners and researchers with access to advanced simulation and image processing services for improved preoperative planning and near real-time surgical support.
Open | SpeedShop: An Open Source Infrastructure for Parallel Performance Analysis
Schulz, Martin; Galarowicz, Jim; Maghrak, Don; ...
2008-01-01
Over the last decades a large number of performance tools have been developed to analyze and optimize high performance applications. Their acceptance by end users, however, has been slow: each tool alone is often limited in scope and comes with widely varying interfaces and workflow constraints, requiring different changes in the often complex build and execution infrastructure of the target application. We started the Open | SpeedShop project about 3 years ago to overcome these limitations and provide efficient, easy to apply, and integrated performance analysis for parallel systems. Open | SpeedShop has two different faces: it provides an interoperable tool set covering the most common analysis steps as well as a comprehensive plugin infrastructure for building new tools. In both cases, the tools can be deployed to large scale parallel applications using DPCL/Dyninst for distributed binary instrumentation. Further, all tools developed within or on top of Open | SpeedShop are accessible through multiple fully equivalent interfaces including an easy-to-use GUI as well as an interactive command line interface, reducing the usage threshold for those tools.
Body area network--a key infrastructure element for patient-centered telemedicine.
Norgall, Thomas; Schmidt, Robert; von der Grün, Thomas
2004-01-01
The Body Area Network (BAN) extends the range of existing wireless network technologies with an ultra-low range, ultra-low power network solution optimised for long-term or continuous healthcare applications. It enables wireless radio communication between several miniaturised, intelligent Body Sensor (or actor) Units (BSU) and a single Body Central Unit (BCU) worn on the human body. A separate wireless transmission link from the BCU to a network access point--using different technology--provides for online access to BAN components via the usual network infrastructure. The BAN network protocol maintains dynamic ad-hoc network configuration scenarios and co-existence of multiple networks. BAN is expected to become a basic infrastructure element for electronic health services: by integrating patient-attached sensors, mobile actor units, and distributed information and data processing systems, the range of medical workflows can be extended to include applications like wireless multi-parameter patient monitoring and therapy support. Beyond clinical use and professional disease management environments, private personal health assistance scenarios (without financial reimbursement by health agencies / insurance companies) enable a wide range of applications and services in future pervasive computing and networking environments.
The INDIGO-Datacloud Authentication and Authorization Infrastructure
NASA Astrophysics Data System (ADS)
Ceccanti, A.; Hardt, M.; Wegh, B.; Millar, AP; Caberletti, M.; Vianello, E.; Licehammer, S.
2017-10-01
Contemporary distributed computing infrastructures (DCIs) are not easily and securely accessible by scientists. These computing environments are typically hard to integrate due to interoperability problems resulting from the use of different authentication mechanisms, identity negotiation protocols and access control policies. Such limitations have a big impact on the user experience, making it hard for user communities to port and run their scientific applications on resources aggregated from multiple providers. The INDIGO-DataCloud project aims to provide the services and tools needed to enable a secure composition of resources from multiple providers in support of scientific applications. In order to do so, a common AAI architecture has to be defined that supports multiple authentication mechanisms, supports delegated authorization across services, and can be easily integrated into off-the-shelf software. In this contribution we introduce the INDIGO Authentication and Authorization Infrastructure, describing its main components and their status, and how authentication, delegation and authorization flows are implemented across services.
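The sketch below shows the generic OAuth2/OpenID Connect pattern that underlies such an AAI: a client obtains a bearer token from the identity service and presents it to a federated service. The endpoint URLs, client credentials and scopes are placeholders, not actual INDIGO IAM values.

```python
# Generic OAuth2 client-credentials flow; all endpoints and credentials below
# are placeholders for illustration.
import requests

TOKEN_ENDPOINT = "https://iam.example.org/token"  # hypothetical issuer

resp = requests.post(TOKEN_ENDPOINT, data={
    "grant_type": "client_credentials",
    "client_id": "my-science-app",
    "client_secret": "REDACTED",
    "scope": "storage.read compute.submit",
}, timeout=30)
resp.raise_for_status()
token = resp.json()["access_token"]

# Present the bearer token to any service in the federated infrastructure.
svc = requests.get("https://service.example.org/api/jobs",
                   headers={"Authorization": f"Bearer {token}"}, timeout=30)
print(svc.status_code)
```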
Deng, Yihan; Bürkle, Thomas; Holm, Jürgen; Zetz, Erwin; Denecke, Kerstin
2018-01-01
Precise and timely care delivery depends on efficient triage performed by primary care providers and smooth collaboration with other medical specialities. In recent years telemedicine has gained increasing importance for efficient care delivery. Its use, however, has been limited by legal issues, missing digital infrastructures, restricted support from health insurances and the digital divide in the population. A new era for eHealth and telemedicine starts with the establishment of national eHealth regulations and laws. In Switzerland, a nation-wide digital infrastructure and electronic health record will be established. But appropriate healthcare apps to improve patient care based on this infrastructure remain rare. In this paper, we present two applications (self-anamnesis and eMedication assistant) for eHealth-enabled care delivery which have the potential to speed up diagnosis and treatment.
First results from a combined analysis of CERN computing infrastructure metrics
NASA Astrophysics Data System (ADS)
Duellmann, Dirk; Nieke, Christian
2017-10-01
The IT Analysis Working Group (AWG) has been formed at CERN across individual computing units and the experiments to attempt a cross-cutting analysis of computing infrastructure and application metrics. In this presentation we will describe the first results obtained using medium- to long-term data (1 month to 1 year), correlating box-level metrics, job-level metrics from LSF and HTCondor, IO metrics from the physics analysis disk pools (EOS), and networking and application-level metrics from the experiment dashboards. We will cover in particular the measurement of hardware performance and prediction of job duration, the latency sensitivity of different job types, and a search for bottlenecks with the production job mix in the current infrastructure. The presentation will conclude with the proposal of a small set of metrics to simplify drawing conclusions also in the more constrained environment of public cloud deployments.
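A sketch of the kind of cross-metric correlation described is given below, using pandas on a hypothetical export of job-level records; the column names are assumptions for illustration.

```python
# Cross-metric correlation sketch on a hypothetical job-metrics export.
import pandas as pd

jobs = pd.read_csv("job_metrics.csv")  # hypothetical table of job records

# Correlate job duration with host- and IO-level metrics to spot bottlenecks
# and latency-sensitive job types.
cols = ["duration_s", "cpu_utilization", "eos_read_mb", "network_wait_s"]
print(jobs[cols].corr(method="spearman"))

# Crude duration-prediction baseline: median duration per job type.
print(jobs.groupby("job_type")["duration_s"].median())
```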
Scientific Services on the Cloud
NASA Astrophysics Data System (ADS)
Chapman, David; Joshi, Karuna P.; Yesha, Yelena; Halem, Milt; Yesha, Yaacov; Nguyen, Phuong
Scientific computing was one of the first applications of parallel and distributed computation. To this day, scientific applications remain some of the most compute-intensive, and have inspired the creation of petaflop compute infrastructure such as the Oak Ridge Jaguar and Los Alamos RoadRunner. Large dedicated hardware infrastructure has become both a blessing and a curse to the scientific community. Scientists are interested in cloud computing for much the same reasons as businesses and other professionals. The hardware is provided, maintained, and administrated by a third party. Software abstraction and virtualization provide reliability and fault tolerance. Graduated fees allow for multi-scale prototyping and execution. Cloud computing resources are only a few clicks away, and by far the easiest high performance distributed platform to gain access to. There may still be dedicated infrastructure for ultra-scale science, but the cloud can easily play a major part in the scientific computing initiative.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hadley, Mark D.; Clements, Samuel L.
2009-01-01
Battelle’s National Security & Defense objective is “applying unmatched expertise and unique facilities to deliver homeland security solutions. From detection and protection against weapons of mass destruction to emergency preparedness/response and protection of critical infrastructure, we are working with industry and government to integrate policy, operational, technological, and logistical parameters that will secure a safe future”. In an ongoing effort to meet this mission, engagements with industry intended to improve operational and technical attributes of commercial solutions related to national security initiatives are necessary. This necessity will ensure that capabilities for protecting critical infrastructure assets are considered by commercial entities in their development, design, and deployment lifecycles, thus addressing the alignment of identified deficiencies and improvements needed to support national cyber security initiatives. The Secure Firewall (Sidewinder) appliance by Secure Computing was assessed for applicable use in critical infrastructure control system environments, such as electric power, nuclear and other facilities containing critical systems that require augmented protection from cyber threats. The testing was performed in the Pacific Northwest National Laboratory’s (PNNL) Electric Infrastructure Operations Center (EIOC). The Secure Firewall was tested in a network configuration that emulates a typical control center network and then evaluated. A number of observations and recommendations are included in this report relating to features currently included in the Secure Firewall that support critical infrastructure security needs.
Borbulevych, Oleg Y; Plumley, Joshua A; Martin, Roger I; Merz, Kenneth M; Westerhoff, Lance M
2014-05-01
Macromolecular crystallographic refinement relies on sometimes dubious stereochemical restraints and rudimentary energy functionals to ensure the correct geometry of the model of the macromolecule and any covalently bound ligand(s). The ligand stereochemical restraint file (CIF) requires a priori understanding of the ligand geometry within the active site, and creation of the CIF is often an error-prone process owing to the great variety of potential ligand chemistry and structure. Stereochemical restraints have been replaced with more robust functionals through the integration of the linear-scaling, semiempirical quantum-mechanics (SE-QM) program DivCon with the PHENIX X-ray refinement engine. The PHENIX/DivCon package has been thoroughly validated on a population of 50 protein-ligand Protein Data Bank (PDB) structures with a range of resolutions and chemistry. The PDB structures used for the validation were originally refined utilizing various refinement packages and were published within the past five years. PHENIX/DivCon does not utilize CIF(s), link restraints and other parameters for refinement and hence it does not make as many a priori assumptions about the model. Across the entire population, the method results in reasonable ligand geometries and low ligand strains, even when the original refinement exhibited difficulties, indicating that PHENIX/DivCon is applicable to both single-structure and high-throughput crystallography.
Zhao, Lijia; Park, Nokeun; Tian, Yanzhong; Shibata, Akinobu; Tsuji, Nobuhiro
2016-01-01
Dynamic recrystallization (DRX) is an important grain refinement mechanism for fabricating steels with high strength and high ductility (toughness). The conventional DRX mechanism has reached its limit of refining grains to several microns, even when employing high-strain deformation. Here we show a DRX phenomenon occurring in dynamically transformed (DT) ferrite, by which the strain required for the operation of DRX and the formation of ultrafine grains is significantly reduced. The DRX of DT ferrite shows an unconventional temperature dependence, which suggests an optimal condition for grain refinement. We further show that new strategies for ultra grain refinement can be evoked by combining the DT and DRX mechanisms, based on which fully ultrafine microstructures with a mean grain size down to 0.35 microns can be obtained without high-strain deformation and exhibit superior mechanical properties. This study opens the door to achieving optimal grain refinement to the nanoscale in a variety of steels, requiring no high-strain deformation in practical industrial application. PMID:27966603
An approach for prominent enhancement of the quality of konjac flour: dimethyl sulfoxide as medium.
Ye, Ting; Wang, Ling; Xu, Wei; Liu, Jinjin; Wang, Yuntao; Zhu, Kunkun; Wang, Sujuan; Li, Bin; Wang, Chao
2014-01-01
In this paper, an approach to improve several konjac flour (KF) qualities by dimethyl sulfoxide (DMSO) addition at various concentrations and temperature levels is proposed. Various properties of native and refined KF, including transparency, chemical composition and rheological properties, have been investigated. The results showed that KF refined by 75% DMSO achieved a 27.7% improvement in transparency, 99.7% removal of starch, 99.4% removal of soluble sugar, and 98.2% removal of protein, as well as more satisfactory viscosity stability. In addition, the morphological structure of refined KF showed a significant difference compared with the native flour, as observed using SEM, which is promising for further industrial application. Furthermore, the rheological properties of both native and refined konjac sols were studied, and the results showed that DMSO refinement is an effective alternative approach to improve the qualities of KF in many respects. Copyright © 2013 Elsevier Ltd. All rights reserved.
Terwilliger, Thomas C.; Grosse-Kunstleve, Ralf W.; Afonine, Pavel V.; Moriarty, Nigel W.; Zwart, Peter H.; Hung, Li-Wei; Read, Randy J.; Adams, Paul D.
2008-01-01
The PHENIX AutoBuild wizard is a highly automated tool for iterative model building, structure refinement and density modification using RESOLVE model building, RESOLVE statistical density modification and phenix.refine structure refinement. Recent advances in the AutoBuild wizard and phenix.refine include automated detection and application of NCS from models as they are built, extensive model-completion algorithms and automated solvent-molecule picking. Model-completion algorithms in the AutoBuild wizard include loop building, crossovers between chains in different models of a structure and side-chain optimization. The AutoBuild wizard has been applied to a set of 48 structures at resolutions ranging from 1.1 to 3.2 Å, resulting in a mean R factor of 0.24 and a mean free R factor of 0.29. The R factor of the final model is dependent on the quality of the starting electron density and is relatively independent of resolution. PMID:18094468
Park, Junyeong; Jones, Brandon; Koo, Bonwook; Chen, Xiaowen; Tucker, Melvin; Yu, Ju-Hyun; Pschorn, Thomas; Venditti, Richard; Park, Sunkyu
2016-01-01
Mechanical refining is widely used in the pulp and paper industry to enhance the end-use properties of products by creating external fibrillation and internal delamination. This technology can be directly applied to biochemical conversion processes. By implementing mechanical refining technology, biomass recalcitrance to enzymatic hydrolysis can be overcome and carbohydrate conversion can be enhanced with commercially attractive levels of enzymes. In addition, chemical and thermal pretreatment severity can be reduced while achieving the same level of carbohydrate conversion, which reduces pretreatment cost and results in lower concentrations of inhibitors. Refining is a versatile and commercially proven technology that can be operated at process flows of ∼1500 dry tons of biomass per day. This paper reviews the utilization of mechanical refining in the pulp and paper industry and summarizes recent developments in applications for biochemical conversion, which potentially make an overall biorefinery process more economically viable. Copyright © 2015 Elsevier Ltd. All rights reserved.
40 CFR 279.50 - Applicability.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 27 2011-07-01 2011-07-01 false Applicability. 279.50 Section 279.50 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) STANDARDS FOR THE MANAGEMENT OF USED OIL Standards for Used Oil Processors and Re-Refiners § 279.50 Applicability...
40 CFR 279.50 - Applicability.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 27 2014-07-01 2014-07-01 false Applicability. 279.50 Section 279.50 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) STANDARDS FOR THE MANAGEMENT OF USED OIL Standards for Used Oil Processors and Re-Refiners § 279.50 Applicability...
40 CFR 279.50 - Applicability.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 26 2010-07-01 2010-07-01 false Applicability. 279.50 Section 279.50 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) STANDARDS FOR THE MANAGEMENT OF USED OIL Standards for Used Oil Processors and Re-Refiners § 279.50 Applicability...
40 CFR 279.50 - Applicability.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 28 2013-07-01 2013-07-01 false Applicability. 279.50 Section 279.50 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) STANDARDS FOR THE MANAGEMENT OF USED OIL Standards for Used Oil Processors and Re-Refiners § 279.50 Applicability...
40 CFR 279.50 - Applicability.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 28 2012-07-01 2012-07-01 false Applicability. 279.50 Section 279.50 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) STANDARDS FOR THE MANAGEMENT OF USED OIL Standards for Used Oil Processors and Re-Refiners § 279.50 Applicability...
Exploring Cloud Computing for Large-scale Scientific Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Guang; Han, Binh; Yin, Jian
This paper explores cloud computing for large-scale data-intensive scientific applications. Cloud computing is attractive because it provides hardware and software resources on demand, which relieves the burden of acquiring and maintaining a huge amount of resources that may be used only once by a scientific application. However, unlike typical commercial applications that often require only a moderate amount of ordinary resources, large-scale scientific applications often need to process enormous amounts of data in the terabyte or even petabyte range and require special high performance hardware with low latency connections to complete computation in a reasonable amount of time. To address these challenges, we build an infrastructure that can dynamically select high performance computing hardware across institutions and dynamically adapt the computation to the selected resources to achieve high performance. We have also demonstrated the effectiveness of our infrastructure by building a system biology application and an uncertainty quantification application for carbon sequestration, which can efficiently utilize data and computation resources across several institutions.
Using high-performance networks to enable computational aerosciences applications
NASA Technical Reports Server (NTRS)
Johnson, Marjory J.
1992-01-01
One component of the U.S. Federal High Performance Computing and Communications Program (HPCCP) is the establishment of a gigabit network to provide a communications infrastructure for researchers across the nation. This gigabit network will provide new services and capabilities, in addition to increased bandwidth, to enable future applications. An understanding of these applications is necessary to guide the development of the gigabit network and other high-performance networks of the future. In this paper we focus on computational aerosciences applications run remotely using the Numerical Aerodynamic Simulation (NAS) facility located at NASA Ames Research Center. We characterize these applications in terms of network-related parameters and relate user experiences that reveal limitations imposed by the current wide-area networking infrastructure. Then we investigate how the development of a nationwide gigabit network would enable users of the NAS facility to work in new, more productive ways.
Jiang, Yang; Zhang, Haiyang; Feng, Wei; Tan, Tianwei
2015-12-28
Metal ions play an important role in the catalysis of metalloenzymes. To investigate metalloenzymes via molecular modeling, a set of accurate force field parameters for metal ions is highly imperative. To extend its application range and improve its performance, the dummy atom model of metal ions was refined through a simple parameter screening strategy, using the Mg(2+) ion as an example. Using the AMBER ff03 force field with the TIP3P model, the refined model accurately reproduced the experimental geometric and thermodynamic properties of Mg(2+). Compared with point charge models and previous dummy atom models, the refined dummy atom model yields enhanced performance in producing reliable ATP/GTP-Mg(2+)-protein conformations in three metalloenzyme systems with single or double metal centers. Like other unbonded models, the refined model failed to reproduce the Mg-Mg distance and favored monodentate binding of carboxylate groups; these drawbacks need to be considered with care. The outperformance of the refined model is mainly attributed to the use of a revised (more accurate) experimental solvation free energy and a suitable free energy correction protocol. This work provides a parameter screening strategy that can be readily applied to refine the dummy atom models for metal ions.
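To make the dummy-atom construction concrete, the sketch below places six dummy sites octahedrally around the metal nucleus and splits the ionic charge between the core and the dummies. The bond length and charge split shown are illustrative, not the refined parameters reported in the work.

```python
# Geometric sketch of the dummy-atom idea: six octahedral dummy sites carry
# part of the ionic charge. Values below are illustrative, not the paper's.
import numpy as np

def dummy_sites(center, bond=0.9):
    """Return six octahedral dummy positions (Angstrom) around `center`."""
    center = np.asarray(center, dtype=float)
    axes = np.vstack([np.eye(3), -np.eye(3)])  # +/- x, y, z directions
    return center + bond * axes

core_charge, dummy_charge = -1.0, 0.5  # sums to +2 for Mg(2+); illustrative
sites = dummy_sites([0.0, 0.0, 0.0])
total = core_charge + 6 * dummy_charge
print(sites, f"net charge = {total:+.1f}")
```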
Panter, Jenna; Ogilvie, David
2015-09-03
Some studies have assessed the effectiveness of environmental interventions to promote physical activity, but few have examined how such interventions work. We investigated the environmental mechanisms linking an infrastructural intervention with behaviour change. Natural experimental study. Three UK municipalities (Southampton, Cardiff and Kenilworth). Adults living within 5 km of new walking and cycling infrastructure. Construction or improvement of walking and cycling routes. Exposure to the intervention was defined in terms of residential proximity. Questionnaires at baseline and 2-year follow-up assessed perceptions of the supportiveness of the environment, use of the new infrastructure, and walking and cycling behaviours. Analysis proceeded via factor analysis of perceptions of the physical environment (step 1) and regression analysis to identify plausible pathways involving physical and social environmental mediators and refine the intervention theory (step 2), to a final path analysis to test the model (step 3). Participants who lived near and used the new routes reported improvements in their perceptions of provision and safety. However, path analysis (step 3, n=967) showed that the effects of the intervention on changes in time spent walking and cycling were largely (90%) explained by a simple causal pathway involving use of the new routes; other pathways involving changes in environmental cognitions explained only a small proportion of the effect. Physical improvement of the environment itself was the key to the effectiveness of the intervention, and seeking to change people's perceptions may be of limited value. Studies of how interventions lead to population behaviour change should complement those concerned with estimating their effects in supporting valid causal inference. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
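As a simplified stand-in for the path analysis described (step 3), the sketch below applies a product-of-coefficients mediation check on synthetic data: does route use mediate the effect of proximity on the change in active travel? The variable names and effect sizes are hypothetical.

```python
# Product-of-coefficients mediation check on synthetic data; a simplified
# stand-in for the study's path analysis, with hypothetical effect sizes.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 967
proximity = rng.normal(size=n)                       # exposure
route_use = 0.5 * proximity + rng.normal(size=n)     # mediator
delta_act = 0.6 * route_use + 0.05 * proximity + rng.normal(size=n)  # outcome

# a-path: exposure -> mediator; b-path: mediator -> outcome (exposure adjusted).
a = sm.OLS(route_use, sm.add_constant(proximity)).fit().params[1]
b = sm.OLS(delta_act,
           sm.add_constant(np.column_stack([route_use, proximity]))
           ).fit().params[1]
print(f"Indirect (mediated) effect a*b = {a * b:.3f}")
```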
Polymers for hydrogen infrastructure and vehicle fuel systems:
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barth, Rachel Reina; Simmons, Kevin L.; San Marchi, Christopher W.
2013-10-01
This document addresses polymer materials for use in hydrogen service. Section 1 summarizes the applications of polymers in hydrogen infrastructure and vehicle fuel systems and identifies polymers used in these applications. Section 2 reviews the properties of polymer materials exposed to hydrogen and/or high-pressure environments, using information obtained from published, peer-reviewed literature. The effect of high pressure on physical and mechanical properties of polymers is emphasized in this section along with a summary of hydrogen transport through polymers. Section 3 identifies areas in which fuller characterization is needed in order to assess material suitability for hydrogen service.
Volcanic risk assessment: Quantifying physical vulnerability in the built environment
NASA Astrophysics Data System (ADS)
Jenkins, S. F.; Spence, R. J. S.; Fonseca, J. F. B. D.; Solidum, R. U.; Wilson, T. M.
2014-04-01
This paper presents structured and cost-effective methods for assessing the physical vulnerability of at-risk communities to the range of volcanic hazards, developed as part of the MIA-VITA project (2009-2012). An initial assessment of building and infrastructure vulnerability has been carried out for a set of broadly defined building types and infrastructure categories, with the likelihood of damage considered separately for projectile impact, ash fall loading, pyroclastic density current dynamic pressure and earthquake ground shaking intensities. In refining these estimates for two case study areas: Kanlaon volcano in the Philippines and Fogo volcano in Cape Verde, we have developed guidelines and methodologies for carrying out physical vulnerability assessments in the field. These include identifying primary building characteristics, such as construction material and method, as well as subsidiary characteristics, for example the size and prevalence of openings, that may be important in assessing eruption impacts. At-risk buildings around Kanlaon were found to be dominated by timber frame buildings that exhibit a high vulnerability to pyroclastic density currents, but a low vulnerability to failure from seismic shaking. Around Fogo, the predominance of unreinforced masonry buildings with reinforced concrete slab roofs suggests a high vulnerability to volcanic earthquake but a low vulnerability to ash fall loading. Given the importance of agriculture for local livelihoods around Kanlaon and Fogo, we discuss the potential impact of infrastructure vulnerability for local agricultural economies, with implications for volcanic areas worldwide. These methodologies and tools go some way towards offering a standardised approach to carrying out future vulnerability assessments for populated volcanic areas.
NASA Technical Reports Server (NTRS)
1991-01-01
Recommendations are made after 32 interviews, lesson identification, lesson analysis, and mission characteristics identification. The major recommendations are as follows: (1) to develop end-to-end planning and scheduling operations concepts by mission class and to ensure their consideration in system life cycle documentation; (2) to create an organizational infrastructure at the Code 500 level, supported by a Directorate level steering committee with project representation, responsible for systems engineering of end-to-end planning and scheduling systems; (3) to develop and refine mission capabilities to assess impacts of early mission design decisions on planning and scheduling; and (4) to emphasize operational flexibility in the development of the Advanced Space Network, other institutional resources, external (e.g., project) capabilities and resources, operational software and support tools.
NASA Technical Reports Server (NTRS)
Morisette, Jeffrey T.; Richardson, Andrew D.; Knapp, Alan K.; Fisher, Jeremy I.; Graham, Eric A.; Abatzoglou, John; Wilson, Bruce E.; Breshears, David D.; Henebry, Geoffrey M.; Hanes, Jonathan M.;
2008-01-01
Phenology is the study of recurring life-cycle events, classic examples of which include flowering by plants and animal migration. Phenological responses are increasingly relevant for addressing applied environmental issues. Yet, challenges remain with respect to spanning scales of observation, integrating observations across taxa, and modeling phenological sequences to enable ecological forecasts in light of future climate change. Recent advances that are helping to address these challenges include refined landscape-scale phenology estimates from satellite data, advanced instrument-based approaches for field measurements, and new cyber-infrastructure for archiving and distributing products. These advances are aiding diverse areas including the modeling of land-surface exchange, the evaluation of climate-phenology relationships, and land management decisions.
Cyberinfrastructure for Airborne Sensor Webs
NASA Technical Reports Server (NTRS)
Freudinger, Lawrence C.
2009-01-01
Since 2004 the NASA Airborne Science Program has been prototyping and using infrastructure that enables researchers to interact with each other and with their instruments via network communications. This infrastructure uses satellite links and an evolving suite of applications and services that leverage open-source software. The use of these tools has increased near-real-time situational awareness during field operations, resulting in productivity improvements and the collection of better data. This paper describes the high-level system architecture and major components, with example highlights from the use of the infrastructure. The paper concludes with a discussion of ongoing efforts to transition to operational status.
Elastic Cloud Computing Infrastructures in the Open Cirrus Testbed Implemented via Eucalyptus
NASA Astrophysics Data System (ADS)
Baun, Christian; Kunze, Marcel
Cloud computing realizes the advantages and overcomes some restrictions of the grid computing paradigm. Elastic infrastructures can easily be created and managed by cloud users. In order to accelerate research on data center management and cloud services, the Open Cirrus™ research testbed has been started by HP, Intel and Yahoo!. Although commercial cloud offerings are proprietary, Open Source solutions exist in the field of IaaS with Eucalyptus, PaaS with AppScale and at the applications layer with Hadoop MapReduce. This paper examines the I/O performance of cloud computing infrastructures implemented with Eucalyptus in contrast to Amazon S3.
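As a rough sketch of the kind of I/O measurement involved (the endpoint URL, credentials and object size below are placeholders, not the paper's setup), an S3-compatible store such as Eucalyptus' Walrus can be timed with boto3:

    # Python sketch: time a PUT against an S3-compatible endpoint.
    import time
    import boto3

    s3 = boto3.client("s3",
                      endpoint_url="http://walrus.example:8773/services/Walrus",
                      aws_access_key_id="...",          # placeholder credentials
                      aws_secret_access_key="...")
    payload = b"x" * (8 * 1024 * 1024)                  # 8 MiB test object

    t0 = time.perf_counter()
    s3.put_object(Bucket="bench", Key="obj-1", Body=payload)
    elapsed = time.perf_counter() - t0
    print(f"PUT throughput: {len(payload) / elapsed / 1e6:.1f} MB/s")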
Raja Ikram, Raja Rina; Abd Ghani, Mohd Khanapi; Abdullah, Noraswaliza
2015-11-01
This paper first investigates the informatics areas and applications of four Traditional Medicine systems: Traditional Chinese Medicine (TCM), Ayurveda, Traditional Arabic and Islamic Medicine, and Traditional Malay Medicine. It then examines the national informatics infrastructure initiatives in the four respective countries that support these Traditional Medicine systems, and discusses the challenges of implementing informatics in Traditional Medicine systems. The literature was sourced from four databases: Ebsco Host, IEEE Xplore, Proquest and Google Scholar. The search terms used were "Traditional Medicine", "informatics", "informatics infrastructure", "traditional Chinese medicine", "Ayurveda", "traditional Arabic and Islamic medicine", and "traditional Malay medicine"; combinations of these terms were also executed to enhance the search. A search was also conducted in Google to identify miscellaneous books, publications, and organization websites using the same terms. Major advancements in TCM and Ayurveda include bioinformatics, the development of Traditional Medicine databases for decision support, data mining and image processing. Traditional Chinese Medicine differentiates itself from other Traditional Medicine systems with documented ISO standards to support the standardization of TCM. Informatics applications in Traditional Arabic and Islamic Medicine are mostly e-health applications that focus on spiritual healing, Islamic obligations and prophetic traditions. Literature on the development of health informatics to support Traditional Malay Medicine is still insufficient. A major informatics infrastructure common to China and India is automated insurance payment systems for Traditional Medicine treatment. National informatics infrastructure in the Middle East and Malaysia mainly caters for modern medicine; other infrastructure, such as telemedicine and hospital information systems, is implemented for modern medicine or is not strategized at a national level to support Traditional Medicine. Informatics may not be able to address all the emerging areas of Traditional Medicine because the concepts of Traditional Medicine differ from those of the modern system of medicine, though the aim is the same: to give relief to the patient. Thus, there is a need to synthesize Traditional Medicine systems and informatics with involvement from the modern system of medicine. Future research may include filling the gaps in these informatics areas and integrating national informatics infrastructure with established Traditional Medicine systems.
Army Civil Affairs Functional Specialists: On the Verge of Extinction
2012-03-22
the following six areas: rule of law, economic stability, infrastructure, governance, public health and welfare, and public education and information ... as defined in table 1. Rule of law pertains to the fair, competent, and efficient application and ... Economic stability pertains to the efficient management (for example, production, distribution, trade, and consumption) of resources, goods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Two online resources help fleets evaluate the economic soundness of a compressed natural gas program. The National Renewable Energy Laboratory's (NREL's) Vehicle Infrastructure and Cash-Flow Evaluation (VICE 2.0) model and the accompanying report, Building a Business Case for Compressed Natural Gas in Fleet Applications, are uniquely designed for fleet managers considering an investment in CNG and can help ensure wise investment decisions about CNG vehicles and infrastructure.
ERIC Educational Resources Information Center
Special Libraries Association, New York, NY.
These conference proceedings address the key issues relating to the National Information Infrastructure, including social policy, cultural issues, government policy, and technological applications. The goal is to provide the knowledge and resources needed to conceptualize and think clearly about this topic. Proceedings include: "Opening…
Modular Infrastructure for Rapid Flight Software Development
NASA Technical Reports Server (NTRS)
Pires, Craig
2010-01-01
This slide presentation reviews the use of modular infrastructure to assist in the development of flight software. A feature of this program is the use of a model-based approach for application-unique software. Two programs on which this approach was used are reviewed: the development of software for the Hover Test Vehicle (HTV), and the Lunar Atmosphere and Dust Environment Experiment (LADEE).
Algorithms for Lightweight Key Exchange
Santonja, Juan; Zamora, Antonio
2017-01-01
Public-key cryptography is too slow for general-purpose encryption, and most applications limit its use as much as possible. Some secure protocols, especially those that enable forward secrecy, make much heavier use of public-key cryptography, increasing the demand for lightweight cryptosystems that can be implemented on low-powered or mobile devices. These performance requirements are even more significant in critical infrastructure and emergency scenarios, where peer-to-peer networks are deployed for increased availability and resiliency. We benchmark several public-key key-exchange algorithms, determining those best suited to the requirements of critical infrastructure and emergency applications, propose a security framework based on these algorithms, and study its application to decentralized node and sensor networks. PMID:28654006
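A minimal benchmarking sketch in the spirit of the paper, using the Python `cryptography` package (the two primitives compared and the iteration count are illustrative choices, not necessarily those benchmarked by the authors):

    # Python sketch: per-exchange timing of two key-exchange primitives.
    import time
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    def bench(label, fn, n=500):
        t0 = time.perf_counter()
        for _ in range(n):
            fn()
        print(f"{label}: {(time.perf_counter() - t0) / n * 1e3:.3f} ms/exchange")

    peer_x = X25519PrivateKey.generate().public_key()
    bench("X25519", lambda: X25519PrivateKey.generate().exchange(peer_x))

    peer_p = ec.generate_private_key(ec.SECP256R1()).public_key()
    bench("ECDH P-256",
          lambda: ec.generate_private_key(ec.SECP256R1()).exchange(ec.ECDH(), peer_p))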
Online catalog access and distribution of remotely sensed information
NASA Astrophysics Data System (ADS)
Lutton, Stephen M.
1997-09-01
Remote sensing is providing voluminous data and value added information products. Electronic sensors, communication electronics, computer software, hardware, and network communications technology have matured to the point where a distributed infrastructure for remotely sensed information is a reality. The amount of remotely sensed data and information is making distributed infrastructure almost a necessity. This infrastructure provides data collection, archiving, cataloging, browsing, processing, and viewing for applications from scientific research to economic, legal, and national security decision making. The remote sensing field is entering a new exciting stage of commercial growth and expansion into the mainstream of government and business decision making. This paper overviews this new distributed infrastructure and then focuses on describing a software system for on-line catalog access and distribution of remotely sensed information.
NASA Technical Reports Server (NTRS)
Westerlund, F. V.
1975-01-01
User applications of remote sensing in Washington State are described. The first project created a multi-temporal land use/land cover data base for the environs of the Seattle-Tacoma International Airport, to serve planning and management operations of the Port of Seattle. The second is an on-going effort to develop a capability within the Puget Sound Governmental Conference, a council of governments (COG), to inventory and monitor land use within its four county jurisdiction. Developmental work has focused on refinement of land use/cover classification systems applicable at this regional scale and various levels of detail in relation to program requirements of the agency. Related research, refinement of manual methods, user training and approaches to technology transfer are discussed.
NASA Astrophysics Data System (ADS)
Manuhara, Godras Jati; Praseptiangga, Danar; Muhammad, Dimas Rahadian Aji; Maimuni, Bawani Hindami
2016-02-01
Shorter and easier processing of semi-refined kappa-carrageenan extracted from Eucheuma cottonii red seaweed results in a cheaper price for the polysaccharide. In this study, edible film was prepared from the semi-refined carrageenan without any salt addition. The effect of carrageenan concentration (1.0, 1.5, and 2.0% w/v) on the physical and mechanical properties of the edible film was studied. Film thickness and tensile strength increased, while elongation at break and water vapor transmission rate (WVTR) decreased, as the concentration increased. Based on these characteristics, a formulation using 2% carrageenan was recommended; this film demonstrated a thickness of 0.054 mm, a tensile strength of 21.14 MPa, an elongation at break of 12.36%, and a WVTR of 9.56 g/m²·h. The carrageenan-based edible film also showed physical and mechanical characteristics with potential for nano-coating applications on minimally processed food.
NASA Astrophysics Data System (ADS)
Cavanaugh, C.; Gille, J.; Francis, G.; Nardi, B.; Hannigan, J.; McInerney, J.; Krinsky, C.; Barnett, J.; Dean, V.; Craig, C.
2005-12-01
The High Resolution Dynamics Limb Sounder (HIRDLS) instrument onboard the NASA Aura spacecraft experienced a rupture of the thermal blanketing material (Kapton) during the rapid depressurization of launch. The Kapton draped over the HIRDLS scan mirror, severely limiting the aperture through which HIRDLS views space and Earth's atmospheric limb. In order for HIRDLS to achieve its intended measurement goals, rapid characterization of the anomaly, and rapid recovery from it were required. The recovery centered around a new processing module inserted into the standard HIRDLS processing scheme, with a goal of minimizing the effect of the anomaly on the already existing processing modules. We describe the software infrastructure on which the new processing module was built, and how that infrastructure allows for rapid application development and processing response. The scope of the infrastructure spans three distinct anomaly recovery steps and the means for their intercommunication. Each of the three recovery steps (removing the Kapton-induced oscillation in the radiometric signal, removing the Kapton signal contamination upon the radiometric signal, and correcting for the partially-obscured atmospheric view) is completely modularized and insulated from the other steps, allowing focused and rapid application development towards a specific step, and neutralizing unintended inter-step influences, thus greatly shortening the design-development-test lifecycle. The intercommunication is also completely modularized and has a simple interface to which the three recovery steps adhere, allowing easy modification and replacement of specific recovery scenarios, thereby heightening the processing response.
NASA Astrophysics Data System (ADS)
Marrero, J. M.; Pastor Paz, J. E.; Erazo, C.; Marrero, M.; Aguilar, J.; Yepes, H. A.; Estrella, C. M.; Mothes, P. A.
2015-12-01
Disaster Risk Reduction (DRR) requires an integrated multi-hazard assessment approach towards natural hazard mitigation. In the case of volcanic risk, long-term hazard maps are generally developed on the basis of the most probable scenarios (likelihood of occurrence) or worst cases. In the short term, however, expected scenarios may vary substantially depending on monitoring data or new knowledge. In this context, the time required to obtain and process data is critical for optimum decision making, and the availability of up-to-date volcanic scenarios is as crucial as having these data accompanied by efficient estimations of their impact on populations and infrastructure. To address this impact estimation during volcanic crises, or other natural hazards, a web interface has been developed to execute an ANSI C application. This application allows one to compute, in a matter of seconds, the demographic and infrastructure impact that any natural hazard may cause, employing an overlay-layer approach. The web interface is tailored to users involved in the volcanic crisis management of Cotopaxi volcano (Ecuador). The population database and the cartographic base used are in the public domain, published by the National Office of Statistics of Ecuador (INEC, by its Spanish acronym). To run the application and obtain results, the user uploads a raster file containing information on the volcanic hazard, or any other natural hazard, and defines categories to group the population or infrastructure potentially affected. The results are displayed in a user-friendly report.
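The overlay-layer idea reduces to intersecting a hazard raster with a gridded population layer; a toy sketch with synthetic arrays (a real run would load co-registered rasters, e.g. with rasterio):

    # Python sketch: population affected per hazard category via raster overlay.
    import numpy as np

    hazard = np.array([[0, 1, 2],
                       [0, 2, 2],
                       [0, 0, 1]])           # 0 = unaffected, 1/2 = hazard zones
    population = np.full(hazard.shape, 100)  # people per cell (uniform here)

    for category in (1, 2):
        affected = int(population[hazard == category].sum())
        print(f"category {category}: {affected} people potentially affected")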
Application of the unified mask data format based on OASIS for VSB EB writers
NASA Astrophysics Data System (ADS)
Suzuki, Toshio; Hirumi, Junji; Suga, Osamu
2005-11-01
Mask data preparation (MDP) for modern mask manufacturing has become a complex process because many kinds of EB data formats are used by mask makers and EB data files continue to grow with the application of RET. We therefore developed a unified mask pattern data format named "OASIS.VSB" and a job deck format named "MALY" for Variable-Shaped-Beam (VSB) EB writers. OASIS.VSB is a mask pattern data format based on OASIS™ (Open Artwork System Interchange Standard), released by SEMI as the successor format to GDSII. We defined restrictions on OASIS for VSB EB writers so that OASIS.VSB data can be input directly to VSB EB writers just like native EB data. The OASIS.VSB and MALY specifications have been disclosed to the public and are expected to become a SEMI standard in the near future. We have started activities to promote the adoption of OASIS.VSB and MALY. For their practical use, we are discussing the infrastructure system for MDP processing using OASIS.VSB and MALY with mask makers, VSB EB writer makers, and device makers, and we are discussing tools for the infrastructure system with EDA vendors. The infrastructure system will enable TAT (turnaround time), man-hours, and cost in MDP to be reduced. In this paper, we propose a plan for the infrastructure system of MDP processing using OASIS.VSB and MALY as an application of these formats.
Public key infrastructure for DOE security research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aiken, R.; Foster, I.; Johnston, W.E.
This document summarizes the Department of Energy's Second Joint Energy Research/Defense Programs Security Research Workshop. The workshop built on the results of the first Joint Workshop, which reviewed security requirements represented in a range of mission-critical ER and DP applications, discussed commonalities and differences in ER/DP requirements and approaches, and identified an integrated common set of security research priorities. One significant conclusion of the first workshop was that progress in a broad spectrum of DOE-relevant security problems and applications could best be addressed through public-key cryptography based systems, and therefore depended upon the existence of a robust, broadly deployed public-key infrastructure. Hence, public-key infrastructure ("PKI") was adopted as a primary focus for the second workshop. The Second Joint Workshop covered a range of DOE security research and deployment efforts, as well as summaries of the state of the art in various areas relating to public-key technologies. Key findings were: that a broad range of DOE applications can benefit from security architectures and technologies built on a robust, flexible, widely deployed public-key infrastructure; that there exists a collection of specific requirements for missing or undeveloped PKI functionality, together with a preliminary assessment of how these requirements can be met; that, while commercial developments can be expected to provide many relevant security technologies, there are important capabilities that commercial developments will not address, due to the unique scale, performance, diversity, distributed nature, and sensitivity of DOE applications; and that DOE should encourage and support research activities intended to increase understanding of security technology requirements, and to develop critical components not forthcoming from other sources in a timely manner.
Airoldi, Laura; Bulleri, Fabio
2011-01-01
Background Coastal landscapes are being transformed as a consequence of the increasing demand for infrastructures to sustain residential, commercial and tourist activities. Thus, intertidal and shallow marine habitats are largely being replaced by a variety of artificial substrata (e.g. breakwaters, seawalls, jetties). Understanding the ecological functioning of these artificial habitats is key to planning their design and management, in order to minimise their impacts and to improve their potential to contribute to marine biodiversity and ecosystem functioning. Nonetheless, little effort has been made to assess the role of human disturbances in shaping the structure of assemblages on marine artificial infrastructures. We tested the hypothesis that some negative impacts associated with the expansion of opportunistic and invasive species on urban infrastructures can be related to the severe human disturbances that are typical of these environments, such as those from maintenance and renovation works. Methodology/Principal Findings Maintenance caused a marked decrease in the cover of dominant space occupiers, such as mussels and oysters, and a significant enhancement of opportunistic and invasive forms, such as biofilm and macroalgae. These effects were particularly pronounced on sheltered substrata compared to exposed substrata. Experimental application of the disturbance in winter reduced the magnitude of the impacts compared to application in spring or summer. We use these results to identify possible management strategies to inform the improvement of the ecological value of artificial marine infrastructures. Conclusions/Significance We demonstrate that some of the impacts of globally expanding marine urban infrastructures, such as those related to the spread of opportunistic and invasive species, could be mitigated through ecologically-driven planning and management of long-term maintenance of these structures. Impact mitigation is a possible outcome of policies that consider the ecological features of built infrastructures and the fundamental value of controlling biodiversity in marine urban systems. PMID:21826224
Airport Financing and User Charge Systems in the USA
NASA Technical Reports Server (NTRS)
Bartle, John R.
1998-01-01
This paper examines the financing of U.S. public airports in a turbulent era of change, and projects toward the future. It begins by briefly outlining historical patterns that have changed the industry, and airport facilities in particular. It then develops basic principles of public finance as applied to public infrastructure, followed by the applicable principles of management. Following that, the current airport financing system is analyzed and contrasted with a socially optimal financing system. A concluding section suggests policy reforms and their likely benefits. The principles of finance and management discussed here are elementary; however, their implications are radical for U.S. airport policy. There is a great deal of room to improve the allocation of aviation infrastructure resources. The application of these basic principles makes it evident that in many cases current practice is wasteful, environmentally unsound, overly costly, and inequitable. Future investments in public aviation capital will continue to be wasteful until more efficient pricing systems are instituted. Thus, the problem in the U.S. is not one of insufficient investment in airport infrastructure, but of investment in the wrong types of infrastructure. In the U.S., the vast majority of publicly owned airports are owned by local governments. Thus, while the federal government has had a great deal of influence in financing airports, ultimately these are local decisions. The same is true with many other public infrastructure issues. Katz and Herman (1997) report that in 1995, U.S. net public capital stock equaled almost $4.6 trillion, 72% of which ($3.9 trillion) was owned by state and local governments, most of it in buildings, highways, streets, sewer systems, and water supply facilities. Thus, public infrastructure finance is fundamentally a local government issue, with implications for federal and state governments in the design of their aid programs.
Building Task-Oriented Applications: An Introduction to the Legion Programming Paradigm
2015-02-01
These domain definitions are validated prior to execution and represent logical regions that each task can access and manipulate as per the dictates of ... Introducing Enzo, an AMR cosmology application, in Adaptive Mesh Refinement - Theory and Applications. Chicago (IL): Springer Berlin Heidelberg; c2005. p
Proceedings of a Workshop on Applications of Tethers in Space: Executive Summary
NASA Technical Reports Server (NTRS)
Baracat, W. A. (Compiler)
1986-01-01
The workshop was attended by persons from government, industry, and academic institutions to discuss the rapidly evolving area of tether applications in space. Many new applications were presented at the workshop, and existing applications were revised, refined, and prioritized as to which applications should be implemented and when. The workshop concluded with summaries developed individually and jointly by each of the applications panels.
Structure Refinement of Protein Low Resolution Models Using the GNEIMO Constrained Dynamics Method
Park, In-Hee; Gangupomu, Vamshi; Wagner, Jeffrey; Jain, Abhinandan; Vaidehi, Nagarajan
2012-01-01
The challenge in protein structure prediction using homology modeling is the lack of reliable methods to refine low-resolution homology models. Unconstrained all-atom molecular dynamics (MD) does not serve well for structure refinement due to its limited conformational search. We have developed and tested a constrained MD method, based on the Generalized Newton-Euler Inverse Mass Operator (GNEIMO) algorithm, for protein structure refinement. In this method, the high-frequency degrees of freedom are replaced with hard holonomic constraints and a protein is modeled as a collection of rigid-body clusters connected by flexible torsional hinges. This allows larger integration time steps and enhances the conformational search space. In this work, we demonstrate the use of a constraint-free GNEIMO method for protein structure refinement that starts from low-resolution decoy sets derived from homology methods. For the eight proteins tested, with three decoys each, we observed an improvement of ~2 Å in the RMSD to the known experimental structures of these proteins. The GNEIMO method also showed enrichment in the population density of native-like conformations. In addition, we demonstrated structural refinement using a "Freeze and Thaw" clustering scheme with the GNEIMO framework as a viable tool for enhancing localized conformational search. We have derived a robust protocol based on the GNEIMO replica exchange method for protein structure refinement that can be readily extended to other proteins and is possibly applicable to high-throughput protein structure refinement. PMID:22260550
Application of sensor networks to intelligent transportation systems.
DOT National Transportation Integrated Search
2009-12-01
The objective of the research performed is the application of wireless sensor networks to intelligent transportation infrastructures, with the aim of increasing their dependability and improving the efficacy of data collection and utilization. Exampl...
ERIC Educational Resources Information Center
Yoose, Becky
2010-01-01
With the growth of Web 2.0 library intranets in recent years, many libraries are leaving behind legacy, first-generation intranets. As Web 2.0 intranets multiply and mature, how will traditional intranet best practices--especially in the areas of planning, implementation, and evaluation--translate into an existing Web 2.0 intranet infrastructure?…
ERIC Educational Resources Information Center
Collins, J. Michael
2009-01-01
This paper uses 2005 Home Mortgage Disclosure Act data aggregated by census tract to measure the relationship between census tract-level college completion rates and the rates at which first lien refinance mortgage applicants submit incomplete loan applications, withdraw loan applications before they are reviewed, and reject lender approved loan…
40 CFR 63.541 - Applicability.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Hazardous Air Pollutants from Secondary Lead Smelting § 63.541 Applicability. (a) The provisions of this subpart apply to the following affected sources at all secondary lead smelters: blast, reverberatory, rotary, and electric smelting furnaces; refining kettles; agglomerating furnaces; dryers; process...
40 CFR 63.541 - Applicability.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Hazardous Air Pollutants from Secondary Lead Smelting § 63.541 Applicability. (a) The provisions of this subpart apply to the following affected sources at all secondary lead smelters: blast, reverberatory, rotary, and electric smelting furnaces; refining kettles; agglomerating furnaces; dryers; process...
Benefits of dynamic mobility applications : preliminary estimates from the literature.
DOT National Transportation Integrated Search
2012-12-01
This white paper examines the available quantitative information on the potential mobility benefits of the connected vehicle Dynamic Mobility Applications (DMA). This work will be refined as more and better estimates of benefits from mobility applica...
19 CFR 19.20 - Withdrawal of products from bonded smelting or refining warehouses.
Code of Federal Regulations, 2010 CFR
2010-04-01
... to another bonded warehouse shall be at the risk and expense of the applicant, and the general... far as applicable. (2) In the case of transportation to another port, the transportation entry shall...
Water Development, Allocation, and Institutions: A Role for Integrated Tools
NASA Astrophysics Data System (ADS)
Ward, F. A.
2008-12-01
Many parts of the world suffer from inadequate water infrastructure, inefficient water allocation, and weak water institutions, and each of these three challenges compounds the burdens imposed by inadequacies in the other two. Weak water infrastructure makes it hard to allocate water efficiently and undermines the tracking of water rights and use, which blocks the effective functioning of water institutions. Inefficient water allocation makes it harder to secure resources to develop new water infrastructure. Poorly developed water institutions undermine the security of water rights, which damages incentives to develop water infrastructure or use water efficiently. This paper reports on the development of a prototype basin-scale economic optimization model, in which existing water supplies are allocated more efficiently in the short run to provide resources for more efficient long-run water infrastructure development. Preliminary results provide the basis for designing water administration proposals, building effective water infrastructure, increasing farm income, and meeting transboundary delivery commitments. The application is to the Kabul River Basin in Afghanistan, where food security has been compromised by a history of drought, war, damaged irrigation infrastructure, lack of reservoir storage, inefficient water allocation, and weak water institutions. Results illustrate the increases in economic efficiency achievable when development programs simultaneously address the interdependencies among water allocation, development, and institutions.
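A toy version of such an allocation problem (all coefficients below are invented for illustration): maximize net benefit across two uses, subject to the supply remaining after a fixed transboundary delivery commitment:

    # Python sketch with SciPy; all coefficients are illustrative.
    from scipy.optimize import linprog

    # maximize 300*x_irrigation + 500*x_urban  ->  minimize the negative
    c = [-300, -500]               # net benefit per unit of water delivered
    A_ub = [[1, 1]]                # total allocation limited by supply...
    b_ub = [80]                    # ...after reserving 20 units downstream
    bounds = [(0, 60), (0, 40)]    # per-use conveyance/capacity limits

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    print(res.x, -res.fun)         # optimal allocation and total benefit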
"Tactic": Traffic Aware Cloud for Tiered Infrastructure Consolidation
ERIC Educational Resources Information Center
Sangpetch, Akkarit
2013-01-01
Large-scale enterprise applications are deployed as distributed applications. These applications consist of many inter-connected components with heterogeneous roles and complex dependencies. Each component typically consumes 5-15% of the server capacity. Deploying each component as a separate virtual machine (VM) allows us to consolidate the…
Cloud Computing Based E-Learning System
ERIC Educational Resources Information Center
Al-Zoube, Mohammed; El-Seoud, Samir Abou; Wyne, Mudasser F.
2010-01-01
Cloud computing technologies although in their early stages, have managed to change the way applications are going to be developed and accessed. These technologies are aimed at running applications as services over the internet on a flexible infrastructure. Microsoft office applications, such as word processing, excel spreadsheet, access database…
78 FR 24107 - Version 5 Critical Infrastructure Protection Reliability Standards
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-24
... native applications or print-to-PDF format and not a scanned format. Mail/Hand Delivery: Those unable to... criteria that characterize their impact for the application of cyber security requirements commensurate... recognition. Requirement R2 requires testing to verify response plan effectiveness and consistent application...
Effect of Al Addition on Microstructure of AZ91D
NASA Astrophysics Data System (ADS)
Joshi, Utsavi; Babu, Nadendla Hari
Casting is a net-shape or near-net-shape forming process, so work hardening is not applicable for improving the properties of magnesium cast alloys. Grain refinement, solid-solution strengthening, precipitation hardening and specially designed heat treatments are the techniques used to enhance the properties of these alloys. This research focuses on grain refinement of the magnesium alloy AZ91D, a widely used commercial cast alloy. Recently, Al-B based master alloys have shown potential in grain refining AZ91D. A comparative study of the grain refinement of AZ91D by additions of 0.02 wt.% B, 0.04 wt.% B, 0.1 wt.% B, 0.5 wt.% B and 1.0 wt.% B from an Al-5B master alloy, and by equivalent amounts of the solute element aluminium, is described in this paper. Hardness profiles of AZ91D alloyed with boron and with aluminium are compared.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Los Alamos National Laboratory, Mailstop M888, Los Alamos, NM 87545, USA; Lawrence Berkeley National Laboratory, One Cyclotron Road, Building 64R0121, Berkeley, CA 94720, USA; Department of Haematology, University of Cambridge, Cambridge CB2 0XY, England
The PHENIX AutoBuild Wizard is a highly automated tool for iterative model-building, structure refinement and density modification using RESOLVE or TEXTAL model-building, RESOLVE statistical density modification, and phenix.refine structure refinement. Recent advances in the AutoBuild Wizard and phenix.refine include automated detection and application of NCS from models as they are built, extensive model completion algorithms, and automated solvent molecule picking. Model completion algorithms in the AutoBuild Wizard include loop-building, crossovers between chains in different models of a structure, and side-chain optimization. The AutoBuild Wizard has been applied to a set of 48 structures at resolutions ranging from 1.1 Å to 3.2 Å, resulting in a mean R-factor of 0.24 and a mean free R-factor of 0.29. The R-factor of the final model is dependent on the quality of the starting electron density, and relatively independent of resolution.
Variability of Protein Structure Models from Electron Microscopy.
Monroe, Lyman; Terashi, Genki; Kihara, Daisuke
2017-04-04
An increasing number of biomolecular structures are solved by electron microscopy (EM). However, the quality of structure models determined from EM maps varies substantially. To understand to what extent structure models are supported by information embedded in EM maps, we used two computational structure refinement methods to examine how much structures can be refined, using a dataset of 49 maps with accompanying structure models. The extent of structure modification, as well as the disagreement between refinement models produced by the two computational methods, scaled inversely with the global and local map resolutions. A general quantitative estimation of the deviations of structures for particular map resolutions is provided. Our results indicate that the observed discrepancy between the deposited map and the refined models is due to the lack of structural information present in EM maps, and thus these annotations must be used with caution for further applications.
Refining Linear Fuzzy Rules by Reinforcement Learning
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.; Khedkar, Pratap S.; Malkani, Anil
1996-01-01
Linear fuzzy rules are increasingly being used in the development of fuzzy logic systems. Radial basis functions have also been used in the antecedents of the rules for clustering in product space, which can automatically generate a set of linear fuzzy rules from an input/output data set. Manual methods are usually used in refining these rules. This paper presents a method for refining the parameters of these rules using reinforcement learning, which can be applied in domains where supervised input-output data are not available and reinforcements are received only after a long sequence of actions. This is shown for a generalization of radial basis functions. The formation of fuzzy rules from data and their automatic refinement is an important step in broadening the application of reinforcement learning methods to domains where only limited input-output data are available.
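A hedged sketch of the idea: nudge the linear-rule consequent parameters when a delayed reinforcement arrives, with credit assigned by radial-basis firing strength. The update rule below is illustrative, not the paper's exact algorithm:

    # Python sketch: reinforcement-driven refinement of linear fuzzy rules.
    import numpy as np

    centers = np.array([0.2, 0.8])   # RBF centers in a 1-D input space
    theta = np.zeros((2, 2))         # per-rule linear consequents [a, b]

    def firing(x, width=0.3):
        """Normalized radial-basis firing strengths of the two rules."""
        w = np.exp(-((x - centers) / width) ** 2)
        return w / w.sum()

    def action(x):
        """Firing-strength-weighted sum of per-rule linear outputs a*x + b."""
        return float(firing(x) @ (theta @ np.array([x, 1.0])))

    def reinforce(x, exploration, reward, lr=0.1):
        # Credit each rule in proportion to how strongly it fired: rules that
        # contributed to a rewarded exploratory action are moved toward it.
        theta[:] += lr * reward * np.outer(firing(x) * exploration,
                                           np.array([x, 1.0]))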
3D magnetospheric parallel hybrid multi-grid method applied to planet–plasma interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leclercq, L., E-mail: ludivine.leclercq@latmos.ipsl.fr; Modolo, R., E-mail: ronan.modolo@latmos.ipsl.fr; Leblanc, F.
2016-03-15
We present a new method to exploit multiple refinement levels within a 3D parallel hybrid model developed to study planet–plasma interactions. This model is based on the hybrid formalism: ions are treated kinetically whereas electrons are considered an inertialess fluid. Generally, ions are represented by numerical particles whose size equals the volume of the cells. Particles that leave a coarse grid and subsequently enter a refined region are split into particles whose volume corresponds to the volume of the refined cells. The number of refined particles created from a coarse particle depends on the grid refinement rate. In order to conserve velocity distribution functions and to avoid calculations of average velocities, particles are not coalesced. Moreover, to ensure the constancy of particles' shape function sizes, the hybrid method is adapted to allow refined particles to move within a coarse region. Another innovation of this approach is the method developed to compute grid moments at interfaces between two refinement levels: the hybrid method is adapted to accurately account for the special grid structure at the interfaces, avoiding any overlapping-grid considerations. Some fundamental test runs were performed to validate our approach (e.g. quiet plasma flow, Alfvén wave propagation). Lastly, we also show a planetary application of the model, simulating the interaction between Jupiter's moon Ganymede and the Jovian plasma.
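An illustrative sketch of the splitting step for a refinement factor r (the regular sub-lattice placement of child particles is our assumption; the published scheme additionally preserves shape-function sizes across regions):

    # Python sketch: split one coarse macro-particle into r**3 refined ones.
    import itertools
    import numpy as np

    def split_particle(pos, vel, weight, dx_coarse, r=2):
        """Replace a coarse particle by r**3 children that share its velocity
        (preserving the velocity distribution) and its total weight."""
        dx_fine = dx_coarse / r
        children = []
        for o in itertools.product(range(r), repeat=3):
            child_pos = pos - dx_coarse / 2 + (np.array(o) + 0.5) * dx_fine
            children.append((child_pos, vel.copy(), weight / r**3))
        return children

    kids = split_particle(np.zeros(3), np.array([400.0, 0.0, 0.0]), 1.0, 100.0)
    print(len(kids))   # 8 children for a refinement factor of 2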
Applications of the INTELSAT system to remote health care
NASA Technical Reports Server (NTRS)
Maleter, Andrea
1991-01-01
INTELSAT, the International Telecommunications Satellite Organization, is a not-for-profit commercial cooperative of 124 member nations, created on 20 August 1964. It owns and operates a global system of communications satellites that provides international telecommunications services to 180 countries, territories, and dependencies, and domestic telecommunications services to 40 nations. INTELSAT has actively encouraged the use of satellites for both telemedicine and disaster relief. Topics discussed include: INTELSAT domestic/regional services; use of transportable antennas; INTELNET; using the existing telecommunications infrastructure for remote health care applications: Project Access; INTELSAT's role in disaster telecommunications efforts; and how INTELSAT's existing infrastructure can be used for disaster telecommunications.
Symmetric Key Services Markup Language (SKSML)
NASA Astrophysics Data System (ADS)
Noor, Arshad
Symmetric Key Services Markup Language (SKSML) is an eXtensible Markup Language (XML) protocol being standardized by the OASIS Enterprise Key Management Infrastructure Technical Committee for requesting and receiving symmetric encryption cryptographic keys within a Symmetric Key Management System (SKMS). This protocol is designed to be used between clients and servers within an Enterprise Key Management Infrastructure (EKMI) to secure data, independent of the application and platform. Building on many security standards such as XML Signature, XML Encryption, Web Services Security and PKI, SKSML provides a standards-based capability to allow any application to use symmetric encryption keys, while maintaining centralized control. This article describes the SKSML protocol and its capabilities.
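Purely as an illustration of the request/response pattern (the element names below are hypothetical and do not reproduce the actual SKSML schema defined by the OASIS EKMI TC), a client might compose a key request like this:

    # Python sketch using only the standard library; element names are
    # hypothetical illustrations, NOT the real SKSML schema.
    import xml.etree.ElementTree as ET

    req = ET.Element("SymkeyRequest")                  # hypothetical element
    ET.SubElement(req, "RequestID").text = "req-001"   # placeholder values
    ET.SubElement(req, "KeyClass").text = "HR-DATA"

    print(ET.tostring(req, encoding="unicode"))
    # In SKSML proper, the request would be signed (XML Signature) and carried
    # over a Web Services Security channel to the SKMS server, per the
    # standards the protocol builds on.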
Optical network democratization.
Nejabati, Reza; Peng, Shuping; Simeonidou, Dimitra
2016-03-06
The current Internet infrastructure is not able to support independent evolution and innovation of physical and network layer functionalities, protocols and services, while at the same time supporting the increasing bandwidth demands of evolving and heterogeneous applications. This paper addresses this problem by proposing a completely democratized optical network infrastructure. It introduces the novel concepts of the optical white box and the bare-metal optical switch as key technology enablers for democratizing optical networks. These are programmable optical switches whose hardware is loosely connected internally and is completely separated from their control software. To alleviate their complexity, a multi-dimensional abstraction mechanism using software-defined network technology is proposed. It creates a universal model of the proposed switches without exposing their technological details, and enables a conventional network programmer to develop network applications for control of the optical network without specific technical knowledge of the physical layer. Furthermore, a novel optical network virtualization mechanism is proposed, enabling the composition and operation of multiple coexisting and application-specific virtual optical networks sharing the same physical infrastructure. Finally, the optical white box and the abstraction mechanism are experimentally evaluated, while the virtualization mechanism is evaluated with simulation.
FY16 Analysis report: Financial systems dependency on communications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beyeler, Walter E.
Within the Department of Homeland Security (DHS), the Office of Cyber and Infrastructure Analysis (OCIA)'s National Infrastructure Simulation and Analysis Center (NISAC) develops capabilities to support the DHS mission and the resilience of the Nation’s critical infrastructure. At Sandia National Laboratories, under DHS/OCIA direction, NISAC is developing models of financial sector dependence on communications. This capability is designed to improve DHS's ability to assess potential impacts of communication disruptions to major financial services and the effectiveness of possible mitigations. This report summarizes findings and recommendations from the application of that capability as part of the FY2016 NISAC program plan.
NASA Astrophysics Data System (ADS)
Hodgkins, Alex Liam; Diez, Victor; Hegner, Benedikt
2012-12-01
The Software Process & Infrastructure (SPI) project provides a build infrastructure for regular integration testing and release of the LCG Applications Area software stack. In the past, regular builds have been provided using a system which has constantly grown to include more features, such as server-client communication, long-term build history and a summary web interface using present-day web technologies. However, the ad-hoc style of software development resulted in a setup that is hard to monitor, inflexible and difficult to expand. The new version of the infrastructure is based on the Django Python framework, which allows for a structured and modular design, facilitating later additions. Transparency in the workflows and ease of monitoring have been priorities in the design. Formerly missing functionality, such as on-demand builds and release triggering, will support the transition to a more agile development process.
Satellite Communications for Aeronautical Applications: Recent research and Development Results
NASA Technical Reports Server (NTRS)
Kerczewski, Robert J.
2001-01-01
Communications systems have always been a critical element in aviation. Until recently, nearly all communications between the ground and aircraft have been based on analog voice technology. But the future of global aviation requires a more sophisticated "information infrastructure" which not only provides more and better communications, but integrates the key information functions (communications, navigation, and surveillance) into a modern, network-based infrastructure. Satellite communications will play an increasing role in providing information infrastructure solutions for aviation. Developing and adapting satellite communications technologies for aviation use is now receiving increased attention as the urgency to develop information infrastructure solutions grows. The NASA Glenn Research Center is actively involved in research and development activities for aeronautical satellite communications, with a key emphasis on air traffic management communications needs. This paper describes the recent results and status of NASA Glenn's research program.
Contour Crafting Simulation Plan for Lunar Settlement Infrastructure Build-Up
NASA Technical Reports Server (NTRS)
Khoshnevis, B.; Carlson, A.; Leach, N.; Thangavelu, M.
2016-01-01
Economically viable and reliable building systems and tool sets are being sought, examined and tested for extraterrestrial infrastructure buildup. This project focused on a unique architecture weaving the robotic building construction technology with designs for assisting rapid buildup of initial operational capability Lunar and Martian bases. The project aimed to study new methodologies to construct certain crucial infrastructure elements in order to evaluate the merits, limitations and feasibility of adapting and using such technologies for extraterrestrial application. Current extraterrestrial settlement buildup philosophy holds that in order to minimize the materials needed to be flown in, at great transportation costs, strategies that maximize the use of locally available resources must be adopted. Tools and equipment flown as cargo from Earth are proposed to build required infrastructure to support future missions and settlements on the Moon and Mars.
An infrastructure for ontology-based information systems in biomedicine: RICORDO case study.
Wimalaratne, Sarala M; Grenon, Pierre; Hoehndorf, Robert; Gkoutos, Georgios V; de Bono, Bernard
2012-02-01
The article presents an infrastructure for supporting the semantic interoperability of biomedical resources based on the management (storing and inference-based querying) of their ontology-based annotations. This infrastructure consists of: (i) a repository to store and query ontology-based annotations; (ii) a knowledge base server with an inference engine to support the storage of and reasoning over ontologies used in the annotation of resources; (iii) a set of applications and services allowing interaction with the integrated repository and knowledge base. The infrastructure is being prototyped, developed and evaluated by the RICORDO project in support of the knowledge management of biomedical resources, including physiology and pharmacology models and associated clinical data. The RICORDO toolkit and its source code are freely available from http://ricordo.eu/relevant-resources. sarala@ebi.ac.uk.
Accounting for Induced Travel in Evaluation of Urban Highway Expansion
DOT National Transportation Integrated Search
2013-02-01
The US DOT sponsored Dynamic Mobility Applications (DMA) program seeks to identify, develop, and deploy applications that leverage the full potential of connected vehicles, travelers and infrastructure to enhance current operational practices and tra...
Johanson, Bradley E.; Fox, Armando; Winograd, Terry A.; Hanrahan, Patrick M.
2010-04-20
An efficient and adaptive middleware infrastructure called the Event Heap system dynamically coordinates application interactions and communications in a ubiquitous computing environment, e.g., an interactive workspace, having heterogeneous software applications running on various machines and devices across different platforms. Applications exchange events via the Event Heap. Each event is characterized by a set of unordered, named fields. Events are routed by matching certain attributes in the fields. The source and target versions of each field are automatically set when an event is posted or used as a template. The Event Heap system implements a unique combination of features, both intrinsic to tuplespaces and specific to the Event Heap, including content based addressing, support for routing patterns, standard routing fields, limited data persistence, query persistence/registration, transparent communication, self-description, flexible typing, logical/physical centralization, portable client API, at most once per source first-in-first-out ordering, and modular restartability.
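The tuplespace-style matching at the heart of this design can be sketched in a few lines (an illustration of the concept only, not the Event Heap API):

    # Python sketch: events as named fields, retrieved by template matching.
    events = []

    def post(event):
        """Post an event (a dict of named fields) to the shared heap."""
        events.append(event)

    def take(template):
        """Remove and return the first event matching every template field."""
        for i, ev in enumerate(events):
            if all(ev.get(k) == v for k, v in template.items()):
                return events.pop(i)
        return None

    post({"type": "ButtonPress", "source": "touchpanel-1", "room": "iRoom"})
    print(take({"type": "ButtonPress", "room": "iRoom"}))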
Applications of CCSDS recommendations to Integrated Ground Data Systems (IGDS)
NASA Technical Reports Server (NTRS)
Mizuta, Hiroshi; Martin, Daniel; Kato, Hatsuhiko; Ihara, Hirokazu
1993-01-01
This paper describes an application of the CCSDS Principal Network (CPN) service model to communications network elements of a postulated Integrated Ground Data System (IGDS). Functions are drawn principally from COSMICS (Cosmic Information and Control System), an integrated space control infrastructure, and the Earth Observing System Data and Information System (EOSDIS) Core System (ECS). From functional requirements, this paper derives a set of five communications network partitions which, taken together, support proposed space control infrastructures and data distribution systems. Our functional analysis indicates that the five network partitions derived in this paper should effectively interconnect the users, centers, processors, and other architectural elements of an IGDS. This paper illustrates a useful application of the CCSDS (Consultative Committee for Space Data Systems) Recommendations to ground data system development.
Assessing the risk posed by natural hazards to infrastructures
NASA Astrophysics Data System (ADS)
Eidsvig, Unni; Kristensen, Krister; Vidar Vangelsten, Bjørn
2015-04-01
Modern society is increasingly dependent on infrastructures to maintain its functions, and disruption in one infrastructure system may have severe consequences. Norwegian municipalities have, according to legislation, a duty to carry out a risk and vulnerability analysis and to plan and prepare for emergencies in a short- and long-term perspective. Vulnerability analysis of infrastructures and their interdependencies is an important part of this analysis. This paper proposes a model for assessing the risk posed by natural hazards to infrastructures. The model prescribes a three-level analysis with increasing level of detail, moving from qualitative to quantitative analysis. This paper focuses on the second level, which consists of a semi-quantitative analysis. The purpose of this analysis is to screen the scenarios of natural hazards threatening the infrastructures identified in the level 1 analysis and to investigate the need for further, i.e. level 3 quantitative, analyses. The proposed level 2 analysis considers the frequency of the natural hazard and different aspects of vulnerability, including the physical vulnerability of the infrastructure itself and the societal dependency on the infrastructure. An indicator-based approach is applied, ranking the indicators on a relative scale. The proposed indicators characterize the robustness of the infrastructure, the importance of the infrastructure, and the interdependencies between society and infrastructure that affect the potential for cascading effects. Each indicator is ranked on a 1-5 scale based on pre-defined ranking criteria. The aggregated risk estimate is a combination of the semi-quantitative vulnerability indicators with quantitative estimates of the frequency of the natural hazard and the number of users of the infrastructure. Case studies for two Norwegian municipalities are presented, in which the risk to a primary road, water supply and the power network threatened by storm and landslide is assessed. The application examples show that the proposed model provides a useful screening tool for undesirable events, with the ultimate goal of reducing societal vulnerability.
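A sketch of how such a level 2 score might be aggregated (the combination rule below — mean indicator rank scaled by hazard frequency and user count — is an illustrative choice, not the paper's exact formula):

    # Python sketch; indicator ranks are on the paper's relative 1-5 scale.
    def level2_risk(indicators, hazard_freq_per_year, n_users):
        vulnerability = sum(indicators.values()) / len(indicators)  # mean rank
        return hazard_freq_per_year * vulnerability * n_users

    road = {"robustness": 4, "importance": 5, "interdependency": 3}
    print(level2_risk(road, hazard_freq_per_year=0.1, n_users=12000))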
Mansour, M M; Spink, A E F
2013-01-01
Grid refinement is introduced in a numerical groundwater model to increase the accuracy of the solution over local areas without compromising the run time of the model. Numerical methods developed for grid refinement have suffered certain drawbacks, for example deficiencies in the implemented interpolation technique, non-reciprocity in head or flow calculations, lack of accuracy resulting from high truncation errors, and numerical problems resulting from the construction of elongated meshes. A refinement scheme based on the divergence theorem and Taylor expansions is presented in this article. This scheme is based on the work of De Marsily (1986) but includes more terms of the Taylor series to improve the numerical solution. In this scheme, flow reciprocity is maintained and a high order of refinement is achievable. The new numerical method is applied to simulate groundwater flow in homogeneous and heterogeneous confined aquifers, producing results with acceptable degrees of accuracy. The method shows potential for application to solving groundwater heads over nested meshes with irregular shapes.
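A 1-D illustration of the underlying idea of carrying more Taylor-series terms when estimating heads at refined-interface nodes (the published scheme is multi-dimensional and divergence-theorem based; this sketch shows only a second-order expansion):

    # Python sketch: second-order Taylor estimate of head between coarse nodes.
    import numpy as np

    def head_at(x, xc, hc, dx):
        """Estimate head at x from coarse nodes (xc, hc) with spacing dx."""
        i = int(round((x - xc[0]) / dx))      # nearest coarse node
        i = min(max(i, 1), len(hc) - 2)       # keep a centered stencil
        d = x - xc[i]
        h1 = (hc[i + 1] - hc[i - 1]) / (2 * dx)           # first derivative
        h2 = (hc[i + 1] - 2 * hc[i] + hc[i - 1]) / dx**2  # second derivative
        return hc[i] + d * h1 + 0.5 * d * d * h2

    xc = np.linspace(0.0, 100.0, 11)          # coarse nodes, dx = 10
    hc = 50.0 - 0.002 * xc**1.5               # synthetic head profile
    print(head_at(23.0, xc, hc, 10.0))        # head at a refined-interface node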
Macromolecular refinement by model morphing using non-atomic parameterizations.
Cowtan, Kevin; Agirre, Jon
2018-02-01
Refinement is a critical step in the determination of a model which explains the crystallographic observations and thus best accounts for the missing phase components. The scattering density is usually described in terms of atomic parameters; however, in macromolecular crystallography the resolution of the data is generally insufficient to determine the values of these parameters for individual atoms. Stereochemical and geometric restraints are used to provide additional information, but produce interrelationships between parameters which slow convergence, resulting in longer refinement times. An alternative approach is proposed in which parameters are not attached to atoms, but to regions of the electron-density map. These parameters can move the density or change the local temperature factor to better explain the structure factors. Varying the size of the region which determines the parameters at a particular position in the map allows the method to be applied at different resolutions without the use of restraints. Potential applications include initial refinement of molecular-replacement models with domain motions, and potentially the use of electron density from other sources such as electron cryo-microscopy (cryo-EM) as the refinement model.
Application of GIS in exploring spatial dimensions of Efficiency in Competitiveness of Regions
NASA Astrophysics Data System (ADS)
Rahmat, Shahid; Sen, Joy
2017-04-01
Infrastructure is an important component in building the competitiveness of a region. The present global scenario of economic slowdown is driven by a slump in demand for goods and services and by the decreasing capacity of government institutions to invest in public infrastructure. A strategy for augmenting the competitiveness of a region can therefore be built around a more efficient distribution of public infrastructure: such efficiency reduces the burden on government institutions and improves the relative output of the region for relatively less investment. A rigorous literature study followed by an expert opinion survey (RIDIT scores) reveals that railway, road, ICT and electricity infrastructure are crucial for the competitiveness of a region. Discussions with experts in the ICT, railway and electricity sectors were conducted to identify the issues, hurdles and possible solutions for the development of these sectors. In an underdeveloped country like India, there is a large constraint on financial resources for investment in the infrastructure sector, so judicious planning for the allocation of resources for infrastructure provision becomes very important for efficient and sustainable development. Data Envelopment Analysis (DEA) is a mathematical programming optimization tool that measures technical efficiency in the multiple-input and/or multiple-output case by constructing a relative technical efficiency score. This paper utilizes DEA to identify the efficiency with which the present level of the selected infrastructure components (railway, road, ICT and electricity) is utilized to build the competitiveness of the region, and identifies a spatial pattern of infrastructure efficiency with the help of spatial autocorrelation and hot-spot analysis in ArcGIS. This analysis leads to policy implications for the efficient allocation of financial resources for the provision of infrastructure in the region, building a prerequisite to boost regional competitiveness.
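For concreteness, an input-oriented CCR DEA efficiency score can be computed as a small linear program; the district data below are toy values, not the study's:

    # Python sketch: input-oriented CCR DEA efficiency via scipy's linprog.
    import numpy as np
    from scipy.optimize import linprog

    X = np.array([[4.0, 2.0, 3.0],        # inputs, columns = districts
                  [3.0, 5.0, 4.0]]).T     # rows after .T: districts x inputs
    Y = np.array([[60.0], [70.0], [80.0]])  # single output per district

    def ccr_efficiency(k):
        n, m = X.shape                    # districts, inputs
        s = Y.shape[1]                    # outputs
        c = np.r_[1.0, np.zeros(n)]       # minimize theta
        A_in = np.c_[-X[k], X.T]          # sum_j lambda_j x_ij <= theta x_ik
        A_out = np.c_[np.zeros(s), -Y.T]  # sum_j lambda_j y_rj >= y_rk
        res = linprog(c, A_ub=np.r_[A_in, A_out], b_ub=np.r_[np.zeros(m), -Y[k]],
                      bounds=[(None, None)] + [(0, None)] * n)
        return res.fun                    # efficiency score theta in (0, 1]

    for k in range(3):
        print(f"district {k}: efficiency = {ccr_efficiency(k):.3f}")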
Zheng Dang; Thomas Elder; Jeffery S. Hsieh; Arthur J. Ragauskas
2007-01-01
The effect of increased fiber charge on refining, cationic starch adsorption, and hornification was examined. Two pulps were investigated: (1) a softwood (SW) kraft pulp (KP), bleached elemental chlorine-free (ECF), which served as the control; and (2) the control pulp treated with alkaline peroxide, which had a higher fiber charge. It was shown that increased fiber...
Multiplexing Low and High QoS Workloads in Virtual Environments
NASA Astrophysics Data System (ADS)
Verboven, Sam; Vanmechelen, Kurt; Broeckhove, Jan
Virtualization technology has introduced new ways of managing IT infrastructure. The flexible deployment of applications through self-contained virtual machine images has removed the barriers to multiplexing, suspending and migrating applications with their entire execution environment, allowing for more efficient use of the infrastructure. These developments have given rise to an important challenge regarding the optimal scheduling of virtual machine workloads. In this paper, we specifically address the VM scheduling problem in which workloads that require guaranteed levels of CPU performance are mixed with workloads that do not require such guarantees. We introduce a framework to analyze this scheduling problem and evaluate to what extent such mixed service delivery is beneficial for a provider of virtualized IT infrastructure. Traditionally, providers offer IT resources under a guaranteed and fixed performance profile, which can lead to underutilization. The findings of our simulation study show that, through proper tuning of a limited set of parameters, the proposed scheduling algorithm allows for a significant increase in utilization without sacrificing performance dependability.
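A minimal sketch of the mixing idea, under assumed names and numbers (the paper's scheduler and tuning parameters are more elaborate): guaranteed workloads reserve their CPU shares first, and best-effort VMs split whatever capacity remains each interval.

```python
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    demand: float        # CPU share requested this interval
    guaranteed: bool

def schedule_interval(host_capacity, vms):
    """Allocate CPU shares: guaranteed VMs first, best effort after."""
    alloc = {}
    remaining = host_capacity
    for vm in (v for v in vms if v.guaranteed):
        alloc[vm.name] = vm.demand            # reserved up front
        remaining -= vm.demand
    best_effort = [v for v in vms if not v.guaranteed]
    total = sum(v.demand for v in best_effort)
    for vm in best_effort:
        # Proportional share of the leftover capacity, capped at demand.
        share = remaining * vm.demand / total if total > 0 else 0.0
        alloc[vm.name] = min(vm.demand, max(share, 0.0))
    return alloc

vms = [VM("db", 0.4, True), VM("web", 0.3, True),
       VM("batch1", 0.5, False), VM("batch2", 0.2, False)]
print(schedule_interval(1.0, vms))
```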
Extreme Light Infrastructure - Nuclear Physics Eli-Np Project
NASA Astrophysics Data System (ADS)
Gales, S.
2015-06-01
The development of high-power lasers and the combination of such novel devices with accelerator technology have enlarged the science reach of many research fields, in particular high-energy physics, nuclear physics and astrophysics, as well as societal applications in materials science, nuclear energy and medicine. The European Strategy Forum on Research Infrastructures (ESFRI) has selected a proposal based on these new premises called "ELI", for Extreme Light Infrastructure. ELI will be built as a network of three complementary pillars at the frontier of laser technologies. The ELI-NP pillar (NP for Nuclear Physics) is under construction near Bucharest (Romania) and will develop a scientific program using two 10 PW class lasers and a high-brilliance, intense low-energy gamma beam produced by Compton back-scattering, a marriage of laser and accelerator technology at the frontier of knowledge. In the present paper, the technical description of the facility and the present status of the project, as well as the science, applications and future perspectives, will be discussed.
DOT National Transportation Integrated Search
2015-03-01
This document serves as an Operational Concept for the Transit Vehicle and Center Data Exchange application. The purpose of this document is to provide an operational description of how the Transit Vehicle and Center Data Exchange application m...
DOT National Transportation Integrated Search
2013-11-01
This document serves as an Operational Concept for the Transit Bus-Pedestrian/Cyclist Crossing Safety application. The purpose of this document is to provide an operational description of how the Transit Bus-Pedestrian/Cyclist Crossing Safety W...
Economic Perspective on Cloud Computing: Three Essays
ERIC Educational Resources Information Center
Dutt, Abhijit
2013-01-01
Improvements in Information Technology (IT) infrastructure and the standardization of interoperability standards among heterogeneous Information System (IS) applications have brought a paradigm shift in the way an IS application can be used and delivered. Not only can an IS application be built using standardized components, but parts of it can…
Towards smart mobility in urban spaces: Bus tracking and information application
NASA Astrophysics Data System (ADS)
Yue, Wong Seng; Chye, Koh Keng; Hoy, Cheong Wan
2017-10-01
A smart city can be defined as an urban space with complete and advanced infrastructure, and with intelligent networks and platforms comprising millions of sensors, among them people themselves and their mobile devices. Urban mobility is one of the global smart-city domains, offering real-time traffic management, management of passenger transport, tracking and logistics applications, car-sharing services, car-park management and other smart mobility services. Because of frustrating waiting times for the arrival of buses and the difficulty of accessing shuttle-bus information in a one-stop centre, a bus tracking and information application (BTA) is one of the proposed solutions to traffic problems in urban spaces. This paper aims to design and develop a bus tracking and information application for a selected city in Selangor state, Malaysia. The application also serves as a template for designing public transport tracking and information applications for other urban places in Malaysia, and offers a smart solution for the management of public infrastructure and urban facilities in Malaysia in the future.
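One typical building block of such an application is estimating bus arrival time from live GPS fixes; the sketch below combines a haversine distance with a recent average speed. The coordinates, speed and function names are illustrative assumptions, not details from the paper.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def eta_minutes(bus_fix, stop, avg_speed_kmh):
    """Naive ETA: remaining distance divided by recent average speed."""
    dist = haversine_km(bus_fix[0], bus_fix[1], stop[0], stop[1])
    return 60.0 * dist / max(avg_speed_kmh, 1e-6)

# Hypothetical fix near Shah Alam and a downstream stop.
print(round(eta_minutes((3.0733, 101.5185), (3.0850, 101.5320), 25.0), 1))
```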
Prior, Helen; Bottomley, Anna; Champéroux, Pascal; Cordes, Jason; Delpy, Eric; Dybdal, Noel; Edmunds, Nick; Engwall, Mike; Foley, Mike; Hoffmann, Michael; Kaiser, Robert; Meecham, Ken; Milano, Stéphane; Milne, Aileen; Nelson, Rick; Roche, Brian; Valentin, Jean-Pierre; Ward, Gemma; Chapman, Kathryn
2016-01-01
The Safety Pharmacology Society (SPS) and National Centre for the Replacement, Refinement & Reduction of Animals in Research (NC3Rs) conducted a survey and workshop in 2015 to define current industry practices relating to the housing of non-rodents during telemetry recordings in safety pharmacology and toxicology studies. The aim was to share experiences, canvass opinion on the study procedures/designs that could be used, and explore the barriers to social housing. Thirty-nine sites, either running studies (Sponsors or Contract Research Organisations, CROs) and/or outsourcing work, responded to the survey (51% from Europe; 41% from USA). During safety pharmacology studies, 84, 67 and 100% of respondents socially house dogs, minipigs and non-human primates (NHPs), respectively, on non-recording days; on recording days, however, only 20, 20 and 33% of respondents, respectively, socially house the animals. The main barriers to social housing were limitations in the recording equipment used, study design and animal temperament/activity. During toxicology studies, 94, 100 and 100% of respondents socially house dogs, minipigs and NHPs, respectively, on non-recording days; on recording days, however, only 31, 25 and 50% of respondents, respectively, socially house the animals. The main barriers to social housing were the risk of damage to and limitations in the recording equipment used, food consumption recording and the temperament/activity of the animals. Although the majority of the industry does not yet socially house animals during telemetry recordings in safety pharmacology and toxicology studies, there is support to implement this refinement. Continued discussions, sharing of best practice and data from companies already socially housing, combined with technology improvements and investments in infrastructure, are required to maintain the forward momentum of this refinement across the industry.
International Symposium on Grids and Clouds (ISGC) 2014
NASA Astrophysics Data System (ADS)
The International Symposium on Grids and Clouds (ISGC) 2014 will be held at Academia Sinica in Taipei, Taiwan from 23-28 March 2014, with co-located events and workshops. The conference is hosted by the Academia Sinica Grid Computing Centre (ASGC). “Bringing the data scientist to global e-Infrastructures” is the theme of ISGC 2014. The last decade has seen phenomenal growth in the production of data in all forms by all research communities, producing a deluge of data from which information and knowledge need to be extracted. Key to this success will be the data scientist - educated to use advanced algorithms, applications and infrastructures - collaborating internationally to tackle society’s challenges. ISGC 2014 will bring together researchers working in all aspects of data science from different disciplines around the world to collaborate and educate themselves in the latest achievements and techniques being used to tackle the data deluge. In addition to the regular workshops, technical presentations and plenary keynotes, ISGC this year will focus on how to grow the data science community by considering the educational foundation needed for tomorrow’s data scientist. Topics of discussion include Physics (including HEP) and Engineering Applications, Biomedicine & Life Sciences Applications, Earth & Environmental Sciences & Biodiversity Applications, Humanities & Social Sciences Applications, Virtual Research Environment (including Middleware, tools, services, workflow, ... etc.), Data Management, Big Data, Infrastructure & Operations Management, Infrastructure Clouds and Virtualisation, Interoperability, Business Models & Sustainability, Highly Distributed Computing Systems, and High Performance & Technical Computing (HPTC).
Protection efficiency of a standard compliant EUV reticle handling solution
NASA Astrophysics Data System (ADS)
He, Long; Lystad, John; Wurm, Stefan; Orvek, Kevin; Sohn, Jaewoong; Ma, Andy; Kearney, Patrick; Kolbow, Steve; Halbmaier, David
2009-03-01
For successful implementation of extreme ultraviolet lithography (EUVL) technology for late-cycle insertion at 32 nm half-pitch (hp) and full introduction for 22 nm hp high-volume production, the mask development infrastructure must be in place by 2010. The central element of the mask infrastructure is contamination-free reticle handling and protection. Today, the industry has already developed and balloted an EUV pod standard for shipping, transporting, transferring, and storing EUV masks. We have previously demonstrated that the EUV pod reticle handling method represents the best approach to meeting EUVL high-volume production requirements, based on the then state-of-the-art inspection capability at ~53 nm polystyrene latex (PSL) equivalent sensitivity. In this paper, we present our latest data showing that defect-free reticle handling is achievable down to 40 nm particle sizes, using the same EUV pod carriers as in the previous study and the recently established world's most advanced defect inspection capability of ~40 nm SiO2 equivalent sensitivity. The EUV pod is a worthy solution to meet EUVL pilot-line and pre-production exposure tool development requirements. We also discuss the technical challenges facing the industry in refining the EUV pod solution to meet 22 nm hp EUVL production requirements and beyond.
Evaluation of life-cycle air emission factors of freight transportation.
Facanha, Cristiano; Horvath, Arpad
2007-10-15
Life-cycle air emission factors associated with road, rail, and air transportation of freight in the United States are analyzed. All life-cycle phases of vehicles, infrastructure, and fuels are accounted for in a hybrid life-cycle assessment (LCA). It includes not only fuel combustion, but also emissions from vehicle manufacturing, maintenance, and end of life; infrastructure construction, operation, maintenance, and end of life; and petroleum exploration, refining, and fuel distribution. Results indicate that total life-cycle emissions of freight transportation modes are underestimated if only tailpipe emissions are accounted for. In the case of CO2 and NOx, tailpipe emissions underestimate total emissions by up to 38%, depending on the mode. Total life-cycle emissions of CO and SO2 are up to seven times higher than tailpipe emissions. Sensitivity analysis considers the effects of vehicle type, geography, and mode efficiency on the final results. Policy implications of this analysis are also discussed. For example, while it is widely assumed that currently proposed regulations will result in substantial reductions in emissions, we find that this is true for NOx emissions, because fuel combustion is their main cause, and to a lesser extent for SO2, but not for PM10 emissions, which are significantly affected by the other life-cycle phases.
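The accounting idea reduces to summing per-phase emission factors and comparing against the tailpipe-only figure; the numbers below are purely hypothetical placeholders, not the paper's data.

```python
# Hypothetical emission factors (g CO2 per tonne-km) for one freight
# mode; the split across life-cycle phases is illustrative only.
phases = {
    "tailpipe (fuel combustion)": 62.0,
    "vehicle manufacture/maintenance/end of life": 6.0,
    "infrastructure construction/operation/end of life": 9.0,
    "fuel production and distribution": 10.0,
}
total = sum(phases.values())
tailpipe = phases["tailpipe (fuel combustion)"]
print(f"total: {total:.0f} g/t-km")
print(f"tailpipe-only underestimates by {(1 - tailpipe / total):.0%}")
```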
Development of enterprise architecture in university using TOGAF as framework
NASA Astrophysics Data System (ADS)
Amalia, Endang; Supriadi, Hari
2017-06-01
The University of XYZ is located in Bandung, West Java. It has an information technology (IT) infrastructure which is managed independently. Currently, IT at the University of XYZ follows a complex, conventional management pattern that does not result in a fully integrated IT infrastructure. This is not adaptive in addressing changing business and application needs. In addition, it impedes the innovative development of sustainable IT services and contributes to an unnecessarily high workload for managers. This research aims to establish the concept of IS/IT strategic planning, used in developing the IS/IT and in designing the information technology infrastructure based on The Open Group Architecture Framework (TOGAF) and its Architecture Development Method (ADM). A case study was conducted at the University of XYZ using a qualitative approach through literature review and interviews. This study generates the following outcomes: (1) a design using TOGAF and the ADM around nine functional business areas, proposing 12 candidate applications to be developed at XYZ University; (2) 11 principles for the development of the information technology architecture; (3) a portfolio of future applications (McFarlan grid), with 6 applications in the strategic quadrant (SIAKAD-T, E-LIBRARY, SIPADU-T, DSS, SIPPM-T, KMS), 2 in the key operational quadrant (PMS-T, CRM), and 4 in the support quadrant (MNC-T, NOPEC-T, EMAIL-SYSTEM, SSO); and (4) an enterprise architecture model that could serve as a reference blueprint for the development of information systems and information technology at the University of XYZ.
Is the work flow model a suitable candidate for an observatory supervisory control infrastructure?
NASA Astrophysics Data System (ADS)
Daly, Philip N.; Schumacher, Germán
2016-08-01
This paper reports on an early investigation of using the work flow model for observatory infrastructure software. We researched several work flow engines and identified three for further detailed study: Bonita BPM, Activiti and Taverna. We discuss the business process model and how it relates to observatory operations, and identify a path-finder exercise to further evaluate the applicability of these paradigms.
A Combination Therapy of JO-I and Chemotherapy in Ovarian Cancer Models
2013-10-01
which consists of a 3PAR storage backend and is sharing data via a highly available NetApp storage gateway and 2 high throughput commodity storage...Environment is configured as self-service Enterprise cloud and currently hosts more than 700 virtual machines. The network infrastructure consists of...technology infrastructure and information system applications designed to integrate, automate, and standardize operations. These systems fuse state of
On opportunities for the shared use of city infrastructure for centralized heat and water supply
NASA Astrophysics Data System (ADS)
Zamaleev, M. M.; Gubin, I. V.; Sharapov, V. I.
2017-11-01
It is shown that the joint use of the engineering infrastructure for centralized heat and water supply of consumers is a cost-efficient solution for the city's municipal services. A new technology for the regulated heating of drinking water in the condensers of steam turbines at combined heat and power plants is proposed, and the energy efficiency gained from applying the new technology is calculated.
NASA Astrophysics Data System (ADS)
Schwing, Alan Michael
For computational fluid dynamics, the governing equations are solved on a discretized domain of nodes, faces, and cells. The quality of the grid or mesh can be a driving source of error in the results. While refinement studies can help guide the creation of a mesh, grid quality is largely determined by user expertise and understanding of the flow physics. Adaptive mesh refinement is a technique for enriching the mesh during a simulation based on metrics for error, impact on important parameters, or location of important flow features. This can offload from the user some of the difficult and ambiguous decisions necessary when discretizing the domain. This work explores the implementation of adaptive mesh refinement in an implicit, unstructured, finite-volume solver. Consideration is made for applying modern computational techniques in the presence of hanging nodes and refined cells. The approach is developed to be independent of the flow solver in order to provide a path for augmenting existing codes. It is designed to be applicable to unsteady simulations, and refinement and coarsening of the grid do not impact the conservatism of the underlying numerics. The effects on high-order numerical fluxes of fourth and sixth order are explored. Provided the criteria for refinement are appropriately selected, solutions obtained using adapted meshes have no additional error when compared to results obtained on traditional, unadapted meshes. In order to leverage the large-scale computational resources common today, the methods are parallelized using MPI. Parallel performance is considered for several test problems in order to assess the scalability of both adapted and unadapted grids. Dynamic repartitioning of the mesh during refinement is crucial for load balancing an evolving grid. The methods outlined here depend on a dual-memory approach that is described in detail. Validation of the solver developed here against a number of motivating problems shows favorable comparisons across a range of regimes. Unsteady and steady applications are considered in both subsonic and supersonic flows. Inviscid and viscous simulations achieve similar results at a much reduced cost when employing dynamic mesh adaptation. Several techniques for guiding adaptation are compared. Detailed analysis of statistics from the instrumented solver enables understanding of the costs associated with adaptation. Adaptive mesh refinement shows promise for the test cases presented here. It can be considerably faster than using conventional grids and provides accurate results. The procedures for adapting the grid are lightweight enough not to require significant computational time, and they yield significant reductions in grid size.
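The gradient-driven flagging step at the heart of such adaptation can be sketched in a few lines: compute an undivided difference per cell and mark cells (plus a small buffer) where it exceeds a threshold. The 1D grid, threshold and buffer width are illustrative assumptions, not the dissertation's actual criteria.

```python
import numpy as np

def flag_cells(q, threshold, buffer=1):
    """Mark cells whose undivided gradient exceeds a threshold.

    q : cell-averaged solution values on a uniform 1D grid.
    buffer : extra cells flagged around each trigger, so features
             stay inside the refined region as they move.
    """
    grad = np.abs(np.diff(q))                  # undivided difference
    flags = np.zeros(q.size, dtype=bool)
    for i in np.where(grad > threshold)[0]:
        lo, hi = max(i - buffer, 0), min(i + 1 + buffer, q.size)
        flags[lo:hi] = True
    return flags

x = np.linspace(0.0, 1.0, 101)
q = np.tanh((x - 0.5) / 0.02)                  # sharp layer at x = 0.5
print(np.where(flag_cells(q, threshold=0.2))[0])
```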
Refinement of NMR structures using implicit solvent and advanced sampling techniques.
Chen, Jianhan; Im, Wonpil; Brooks, Charles L
2004-12-15
NMR biomolecular structure calculations exploit simulated annealing methods for conformational sampling and require a relatively high level of redundancy in the experimental restraints to determine quality three-dimensional structures. Recent advances in generalized Born (GB) implicit solvent models should make it possible to combine information from both experimental measurements and accurate empirical force fields to improve the quality of NMR-derived structures. In this paper, we study the influence of implicit solvent on the refinement of protein NMR structures and identify an optimal protocol of utilizing these improved force fields. To do so, we carry out structure refinement experiments for model proteins with published NMR structures using full NMR restraints and subsets of them. We also investigate the application of advanced sampling techniques to NMR structure refinement. Similar to the observations of Xia et al. (J. Biomol. NMR 2002, 22, 317-331), we find that the impact of implicit solvent is rather small when there is a sufficient number of experimental restraints (such as in the final stage of NMR structure determination), whether implicit solvent is used throughout the calculation or only in the final refinement step. The application of advanced sampling techniques also seems to have minimal impact in this case. However, when the experimental data are limited, we demonstrate that refinement with implicit solvent can substantially improve the quality of the structures. In particular, when combined with an advanced sampling technique, the replica exchange (REX) method, near-native structures can be rapidly moved toward the native basin. The REX method provides both enhanced sampling and automatic selection of the most native-like (lowest energy) structures. An optimal protocol based on our studies first generates an ensemble of initial structures that maximally satisfy the available experimental data with conventional NMR software using a simplified force field and then refines these structures with implicit solvent using the REX method. We systematically examine the reliability and efficacy of this protocol using four proteins of various sizes ranging from the 56-residue B1 domain of Streptococcal protein G to the 370-residue Maltose-binding protein. Significant improvement in the structures was observed in all cases when refinement was based on low-redundancy restraint data. The proposed protocol is anticipated to be particularly useful in early stages of NMR structure determination where a reliable estimate of the native fold from limited data can significantly expedite the overall process. This refinement procedure is also expected to be useful when redundant experimental data are not readily available, such as for large multidomain biomolecules and in solid-state NMR structure determination.
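The replica-exchange ingredient can be illustrated with a minimal temperature-swap skeleton: neighbouring replicas attempt exchanges under the standard Metropolis criterion, so the lowest-energy (most native-like) conformers accumulate at the lowest temperature. The energies and temperature ladder are placeholders; the actual protocol couples this to restrained dynamics with a GB implicit-solvent force field.

```python
import math, random

def rex_swap(energies, temps):
    """One sweep of neighbour swap attempts in replica exchange.

    energies : potential energy of the conformer in each replica.
    temps    : temperature ladder (K), one per replica, ascending.
    Returns the permutation mapping replica slot -> conformer index.
    """
    k_b = 0.0019872  # kcal/(mol K)
    order = list(range(len(temps)))
    for i in range(len(temps) - 1):
        a, b = order[i], order[i + 1]
        d_beta = 1.0 / (k_b * temps[i]) - 1.0 / (k_b * temps[i + 1])
        # Metropolis acceptance: exp[(beta_i - beta_j)(E_i - E_j)].
        delta = d_beta * (energies[a] - energies[b])
        if delta >= 0 or random.random() < math.exp(delta):
            order[i], order[i + 1] = b, a
    return order

print(rex_swap([-310.0, -295.0, -280.0, -260.0],
               [300.0, 330.0, 365.0, 400.0]))
```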
Energy Systems Integration Facility (ESIF) Facility Stewardship Plan: Revision 2.1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Torres, Juan; Anderson, Art
The U.S. Department of Energy (DOE), Office of Energy Efficiency and Renewable Energy (EERE), has established the Energy Systems Integration Facility (ESIF) on the campus of the National Renewable Energy Laboratory (NREL) and has designated it as a DOE user facility. This 182,500-ft² research facility provides state-of-the-art laboratory and support infrastructure to optimize the design and performance of electrical, thermal, fuel, and information technologies and systems at scale. This Facility Stewardship Plan provides DOE and other decision makers with information about the existing and expected capabilities of the ESIF and the expected performance metrics to be applied to ESIF operations. This plan is a living document that will be updated and refined throughout the lifetime of the facility.
Earth and Space Science Informatics: Raising Awareness of the Scientists and the Public
NASA Astrophysics Data System (ADS)
Messerotti, M.; Cobabe-Ammann, E.
2009-04-01
The recent developments in Earth and Space Science Informatics have led to the availability of advanced tools for data search, visualization and analysis through, e.g., the Virtual Observatories or distributed data-handling infrastructures. Such facilities are accessible via web interfaces and allow refined data handling to be carried out. Nevertheless, to date their use by the scientific community remains limited for a variety of reasons, which we analyze in this work while considering viable strategies to overcome the issue. Similarly, such facilities are powerful tools for teaching and popularization, provided that e-learning programs involving teachers and science communicators, respectively, are made available. In this context we consider the present activities and projects, stressing the role and the legacy of the Electronic Geophysical Year.
Venturi, Francesca; Sanmartin, Chiara; Taglieri, Isabella; Nari, Anita; Andrich, Gianpaolo; Terzuoli, Erika; Donnini, Sandra; Nicolella, Cristiano; Zinnai, Angela
2017-08-22
While in the last few years the use of olive cake and mill wastewater as natural sources of phenolic compounds has been widely considered, and several studies have focused on the development of new extraction methods and on the production of functional foods enriched with natural antioxidants, no data have been available on the production of a phenol-enriched refined olive oil with its own phenolic compounds extracted from wastewater produced during physical refining. In this study, we aimed to: (i) verify the effectiveness of a multi-step extraction process to recover the high-added-value phenolic compounds contained in wastewater derived from the preliminary washing degumming step of the physical refining of vegetal oils; (ii) evaluate their potential application for the stabilization of olive oil obtained with refined olive oils; and (iii) evaluate their antioxidant activity in an in vitro model of endothelial cells. The results obtained demonstrate the potential of using the refining wastewater as a source of bioactive compounds to improve the nutraceutical value as well as the antioxidant capacity of commercial olive oils. In the conditions adopted, the phenolic content significantly increased in the prototypes of phenol-enriched olive oils when compared with the control oil.
Effect of Al on Grain Refinement and Mechanical Properties of Mg-3Nd Casting Alloy
NASA Astrophysics Data System (ADS)
Wang, Lei; Feng, Yicheng; Wang, Liping; Chen, Yanhong; Guo, Erjun
2018-05-01
The effect of Al on the grain refinement and mechanical properties of as-cast Mg-3Nd alloy was investigated systematically by a series of microstructural analyses, solidification analyses and tensile tests. The results show that Al has an obvious refining effect on the as-cast Mg-3Nd alloy. With increasing Al content, the grain size of the as-cast Mg-3Nd alloy first decreases and then increases slightly once the Al content exceeds 3 wt.%; the minimum grain size of the Mg-3Nd alloy is 48 ± 4.0 μm. The refining mechanism can be attributed to the formation of Al2Nd particles, which play an important role in heterogeneous nucleation. The strength and elongation of the Al-refined Mg-3Nd alloy also increase with increasing Al content and decrease slightly when the Al content exceeds 3 wt.%; the strengthening mechanism is attributed to grain refinement as well as to dispersed intermetallic particles. Furthermore, the microstructural thermal stability of the Mg-3Nd-3Al alloy is higher than that of the Mg-3Nd-0.5Zr alloy. Overall, the Mg-3Nd alloy with Al addition is a novel alloy with broad potential application prospects.
40 CFR 419.20 - Applicability; description of the cracking subcategory.
Code of Federal Regulations, 2010 CFR
2010-07-01
... cracking subcategory. 419.20 Section 419.20 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS PETROLEUM REFINING POINT SOURCE CATEGORY Cracking Subcategory § 419.20 Applicability; description of the cracking subcategory. The provisions of this subpart are...
40 CFR 419.20 - Applicability; description of the cracking subcategory.
Code of Federal Regulations, 2011 CFR
2011-07-01
... cracking subcategory. 419.20 Section 419.20 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS PETROLEUM REFINING POINT SOURCE CATEGORY Cracking Subcategory § 419.20 Applicability; description of the cracking subcategory. The provisions of this subpart are...
caGrid 1.0: An Enterprise Grid Infrastructure for Biomedical Research
Oster, Scott; Langella, Stephen; Hastings, Shannon; Ervin, David; Madduri, Ravi; Phillips, Joshua; Kurc, Tahsin; Siebenlist, Frank; Covitz, Peter; Shanbhag, Krishnakant; Foster, Ian; Saltz, Joel
2008-01-01
Objective: To develop software infrastructure that will provide support for discovery, characterization, integrated access, and management of diverse and disparate collections of information sources, analysis methods, and applications in biomedical research. Design: An enterprise Grid software infrastructure, called caGrid version 1.0 (caGrid 1.0), has been developed as the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG™) program. It is designed to support a wide range of use cases in basic, translational, and clinical research, including 1) discovery, 2) integrated and large-scale data analysis, and 3) coordinated study. Measurements: The caGrid is built as a Grid software infrastructure and leverages Grid computing technologies and the Web Services Resource Framework standards. It provides a set of core services, toolkits for the development and deployment of new community provided services, and application programming interfaces for building client applications. Results: The caGrid 1.0 was released to the caBIG community in December 2006. It is built on open source components and caGrid source code is publicly and freely available under a liberal open source license. The core software, associated tools, and documentation can be downloaded from the following URL: https://cabig.nci.nih.gov/workspaces/Architecture/caGrid. Conclusions: While caGrid 1.0 is designed to address use cases in cancer research, the requirements associated with discovery, analysis and integration of large scale data, and coordinated studies are common in other biomedical fields. In this respect, caGrid 1.0 is the realization of a framework that can benefit the entire biomedical community. PMID:18096909
Applications of UAVs for Remote Sensing of Critical Infrastructure
NASA Technical Reports Server (NTRS)
Wegener, Steve; Brass, James; Schoenung, Susan
2003-01-01
The surveillance of critical facilities and national infrastructure such as waterways, roadways, pipelines and utilities requires advanced technological tools to provide timely, up-to-date information on structure status and integrity. Unmanned Aerial Vehicles (UAVs) are uniquely suited for these tasks, having large payload and long duration capabilities. UAVs also have the capability to fly dangerous and dull missions, orbiting for 24 hours over a particular area or facility, providing around-the-clock surveillance with no personnel onboard. New UAV platforms and systems are becoming available for commercial use. High-altitude platforms are being tested for use in communications, remote sensing, agriculture, forestry and disaster management. New payloads are being built and demonstrated onboard the UAVs in support of these applications. Smaller, lighter, lower-power-consumption imaging systems are currently being tested over coffee fields to determine yield and over fires to detect fire fronts and hotspots. Communication systems that relay video, meteorological and chemical data via satellite to users on the ground in real time have also been demonstrated. Interest in this technology for infrastructure characterization and mapping has increased dramatically in the past year. Many of the UAV technological developments required for resource and disaster monitoring are being used for the infrastructure and facility mapping activity. This paper documents the unique contributions from NASA's Environmental Research Aircraft and Sensor Technology (ERAST) program to these applications. ERAST is a UAV technology development effort by a consortium of private aeronautical companies and NASA. Details of demonstrations of UAV capabilities currently underway are also presented.
Liu, Dunyi; Liu, Yumin; Zhang, Wei; Chen, Xinping; Zou, Chunqin
2017-01-01
Zinc (Zn) deficiency is a common disorder of humans in developing countries. The effect of Zn biofortification (via application of six rates of Zn fertilizer to soil) on Zn bioavailability in wheat grain and flour and its impacts on human health was evaluated. Zn bioavailability was estimated with a trivariate model that included Zn homeostasis in the human intestine. As the rate of Zn fertilization increased, the Zn concentration increased in all flour fractions, but the percentages of Zn in standard flour (25%) and bran (75%) relative to total grain Zn were constant. Phytic acid (PA) concentrations in grain and flours were unaffected by Zn biofortification. Zn bioavailability and the health impact, as indicated by disability-adjusted life years (DALYs) saved, increased with the Zn application rate and were greater in standard and refined flour than in whole grain and coarse flour. The biofortified standard and refined flour obtained with application of 50 kg/ha ZnSO4·7H2O met the health requirement (3 mg of Zn obtained from 300 g of wheat flour) and reduced DALYs by >20%. Although Zn biofortification increased Zn bioavailability in standard and refined flour, it did not reduce the bioavailability of iron, manganese, or copper in wheat flour. PMID:28481273
NASA Astrophysics Data System (ADS)
Chakraborty, Souvik; Chowdhury, Rajib
2017-12-01
Hybrid polynomial correlated function expansion (H-PCFE) is a novel metamodel formulated by coupling polynomial correlated function expansion (PCFE) and Kriging. Unlike commonly available metamodels, H-PCFE performs a bi-level approximation and hence yields more accurate results. To date, however, it has only been applicable to medium-scale problems. To address this gap, this paper presents an improved H-PCFE, referred to as locally refined hp-adaptive H-PCFE. The proposed framework computes the optimal polynomial order and important component functions of PCFE, which is an integral part of H-PCFE, by using global variance-based sensitivity analysis. The optimal number of training points is selected using a distribution-adaptive sequential experimental design. Additionally, the formulated model is locally refined by utilizing the prediction error, which is inherently obtained in H-PCFE. The applicability of the proposed approach is illustrated with two academic and two industrial problems. To demonstrate its superior performance, the results obtained are compared with those of hp-adaptive PCFE. It is observed that the proposed approach yields highly accurate results. Furthermore, compared to hp-adaptive PCFE, significantly fewer actual function evaluations are required to obtain results of similar accuracy.
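The error-driven refinement idea (add training points where the surrogate is least certain) can be sketched with an ordinary Kriging model standing in for H-PCFE; the test function, candidate pool and loop length are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f(x):
    return np.sin(8.0 * x) + 0.5 * x          # hypothetical expensive model

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (4, 1))             # small initial design
y = f(X).ravel()
cand = np.linspace(0.0, 1.0, 201).reshape(-1, 1)

for _ in range(10):                           # sequential enrichment
    gp = GaussianProcessRegressor(kernel=RBF(0.2), normalize_y=True).fit(X, y)
    _, std = gp.predict(cand, return_std=True)
    x_new = cand[[np.argmax(std)]]            # refine where error is largest
    X = np.vstack([X, x_new])
    y = np.append(y, f(x_new).ravel())

print(f"final design size: {len(X)}")
```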
A Public Health Grid (PHGrid): Architecture and value proposition for 21st century public health.
Savel, T; Hall, K; Lee, B; McMullin, V; Miles, M; Stinn, J; White, P; Washington, D; Boyd, T; Lenert, L
2010-07-01
This manuscript describes the value of and proposal for a high-level architectural framework for a Public Health Grid (PHGrid), which the authors feel has the capability to afford the public health community a robust technology infrastructure for secure and timely data, information, and knowledge exchange, not only within the public health domain, but between public health and the overall health care system. The CDC facilitated multiple Proof-of-Concept (PoC) projects, leveraging an open-source-based software development methodology, to test four hypotheses with regard to this high-level framework. The outcomes of the four PoCs, in combination with the use of the Federal Enterprise Architecture Framework (FEAF) and the newly emerging Federal Segment Architecture Methodology (FSAM), were used to develop and refine a high-level architectural framework for a Public Health Grid infrastructure. The authors were successful in documenting a robust high-level architectural framework for a PHGrid. The documentation generated provided the level of granularity needed to validate the proposal, and included examples of both information standards and services to be implemented. Both the results of the PoCs and feedback from selected public health partners were used to develop the granular documentation. A robust, cohesive high-level architectural framework for a Public Health Grid (PHGrid) has been successfully articulated, with its feasibility demonstrated via multiple PoCs. In order to successfully implement this framework, the authors recommend moving forward with a three-pronged approach focusing on interoperability and standards, streamlining the PHGrid infrastructure, and developing robust and high-impact public health services.
Brokered virtual hubs for facilitating access and use of geospatial Open Data
NASA Astrophysics Data System (ADS)
Mazzetti, Paolo; Latre, Miguel; Kamali, Nargess; Brumana, Raffaella; Braumann, Stefan; Nativi, Stefano
2016-04-01
Open Data is a major trend in the current information technology landscape and is often publicised as one of the pillars of the information society in the near future. In particular, geospatial Open Data have a huge potential for the Earth sciences, through the enablement of innovative applications and services integrating heterogeneous information. However, open does not mean usable. As was recognized at the very beginning of the Web revolution, many different degrees of openness exist: from simple sharing in a proprietary format to advanced sharing in standard formats including semantic information. Therefore, to fully unleash the potential of geospatial Open Data, advanced infrastructures are needed to increase the degree of data openness and enhance usability. In October 2014, the ENERGIC OD (European NEtwork for Redistributing Geospatial Information to user Communities - Open Data) project, funded by the European Union under the Competitiveness and Innovation framework Programme (CIP), started. In response to the EU call, the general objective of the project is to "facilitate the use of open (freely available) geographic data from different sources for the creation of innovative applications and services through the creation of Virtual Hubs". The ENERGIC OD Virtual Hubs aim to facilitate the use of geospatial Open Data by lowering, and possibly removing, the main barriers which hamper geo-information (GI) usage by end-users and application developers. Heterogeneity of data and services is recognized as one of the major barriers to Open Data (re-)use: it forces end-users and developers to spend considerable effort accessing different infrastructures and harmonizing datasets. Such heterogeneity cannot be completely removed through the adoption of standard specifications for service interfaces, metadata and data models, since different infrastructures adopt different standards to answer specific challenges and address specific use-cases. Thus, beyond a certain extent, heterogeneity is irreducible, especially in interdisciplinary contexts. ENERGIC OD Virtual Hubs address heterogeneity by adopting a mediation and brokering approach: dedicated components (brokers) harmonize service interfaces, metadata and data models, enabling seamless discovery of and access to heterogeneous infrastructures and datasets. As an innovation project, ENERGIC OD integrates several existing technologies to implement Virtual Hubs as single points of access to geospatial datasets provided by new or existing platforms and infrastructures, including INSPIRE-compliant systems and Copernicus services. A first version of the ENERGIC OD brokers has been implemented based on the GI-Suite Brokering Framework developed by CNR-IIA, complemented with other tools under integration and development. It already enables mediated discovery and harmonized access to different geospatial Open Data sources. It is accessible to users as Software-as-a-Service through a browser; moreover, open APIs and a Javascript library are available for application developers. Six ENERGIC OD Virtual Hubs have currently been deployed: one at regional level (Berlin metropolitan area) and five at national level (in France, Germany, Italy, Poland and Spain). Each Virtual Hub manager decided the deployment strategy (local infrastructure or commercial Infrastructure-as-a-Service cloud) and the list of connected Open Data sources.
The ENERGIC OD Virtual Hubs are under test and validation through the development of ten different mobile and Web applications.
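The mediation-and-brokering pattern can be sketched as a thin adapter layer: each heterogeneous catalogue gets an adapter that maps its native records onto one harmonized metadata model, and the broker fans a discovery query out across all adapters. Class and field names below are illustrative assumptions, not the GI-Suite API.

```python
from dataclasses import dataclass
from typing import Iterable, List, Protocol

@dataclass
class Record:                      # harmonized metadata model
    title: str
    source: str

class CatalogueAdapter(Protocol):
    def search(self, query: str) -> Iterable[Record]: ...

class CswAdapter:
    """Maps a CSW-style catalogue response onto Record objects."""
    def __init__(self, entries):
        self.entries = entries
    def search(self, query: str) -> Iterable[Record]:
        return (Record(e["dc:title"], "CSW") for e in self.entries
                if query.lower() in e["dc:title"].lower())

class RestAdapter:
    """Maps a JSON REST catalogue onto the same model."""
    def __init__(self, items):
        self.items = items
    def search(self, query: str) -> Iterable[Record]:
        return (Record(i["name"], "REST") for i in self.items
                if query.lower() in i["name"].lower())

def broker_search(adapters: List[CatalogueAdapter], query: str) -> List[Record]:
    """Single discovery entry point over heterogeneous sources."""
    return [r for a in adapters for r in a.search(query)]

adapters = [CswAdapter([{"dc:title": "Land cover 2015"}]),
            RestAdapter([{"name": "Land parcels, Berlin"}])]
print(broker_search(adapters, "land"))
```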
Evolving a Puncture Black Hole with Fixed Mesh Refinement
NASA Technical Reports Server (NTRS)
Imbiriba, Breno; Baker, John; Choi, Dae-Il; Centrella, Joan; Fiske, David R.; Brown, J. David; van Meter, James R.; Olson, Kevin
2004-01-01
We present a detailed study of the effects of mesh refinement boundaries on the convergence and stability of simulations of black hole spacetimes. We find no technical problems. In our applications of this technique to the evolution of puncture initial data, we demonstrate that it is possible to simultaneously maintain second-order convergence near the puncture and extend the outer boundary beyond 100M, thereby approaching the asymptotically flat region in which boundary condition problems are less difficult.
NASA Technical Reports Server (NTRS)
Ma, Chopo
2004-01-01
Since the ICRF was generated in 1995, VLBI modeling and estimation, data quality, source position stability analysis, and supporting observational programs have improved markedly. There are developing and potential applications in the areas of space navigation, Earth orientation monitoring, and optical astrometry from space that would benefit from a refined ICRF with enhanced accuracy, stability and spatial distribution. The convergence of analysis, focused observations, and astrometric needs should drive the production of a new realization in the next few years.
Dynamic grid refinement for partial differential equations on parallel computers
NASA Technical Reports Server (NTRS)
Mccormick, S.; Quinlan, D.
1989-01-01
The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids to provide adaptive resolution and fast solution of PDEs. An asynchronous version of FAC, called AFAC, which completely eliminates the bottleneck to parallelism, is presented. This paper describes the advantage that this algorithm has in adaptive refinement for moving singularities on multiprocessor computers. This work is applicable to the parallel solution of two- and three-dimensional shock-tracking problems.
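A single pass of the composite-grid idea can be sketched as a global coarse solve followed by a local fine-patch solve whose interface value is interpolated from the coarse solution; this illustrates the structure FAC iterates on, not the full FAC/AFAC algorithm, and the grid sizes and patch location are arbitrary assumptions.

```python
import numpy as np

def solve_dirichlet(f, a, b, ua, ub, n):
    """Direct solve of -u'' = f on [a, b] with n interior points."""
    h = (b - a) / (n + 1)
    x = a + h * np.arange(1, n + 1)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    rhs = f(x).copy()
    rhs[0] += ua / h**2                        # fold in boundary values
    rhs[-1] += ub / h**2
    return x, np.linalg.solve(A, rhs)

f = lambda x: np.pi**2 * np.sin(np.pi * x)     # exact solution sin(pi x)

# Global coarse solve on [0, 1].
xc, uc = solve_dirichlet(f, 0.0, 1.0, 0.0, 0.0, 15)

# Local fine solve on the patch [0, 0.5]; the interface value at
# x = 0.5 is interpolated from the coarse solution.
u_half = np.interp(0.5, np.concatenate(([0.0], xc, [1.0])),
                   np.concatenate(([0.0], uc, [0.0])))
xf, uf = solve_dirichlet(f, 0.0, 0.5, 0.0, u_half, 31)

print(f"max patch error: {abs(uf - np.sin(np.pi * xf)).max():.2e}")
```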